
Re: [Xen-devel] [PATCH v6] x86/hvm: Implement hvmemul_write() using real mappings



>>> On 27.09.17 at 14:39, <aisaila@xxxxxxxxxxxxxxx> wrote:
> From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> 
> An access which crosses a page boundary is performed atomically by x86
> hardware, albeit with a severe performance penalty.  An important corner
> case is when a straddled access hits two pages which differ in whether a
> translation exists, or in net access rights.
> 
> The use of hvm_copy*() in hvmemul_write() is problematic, because it
> performs a translation then completes the partial write, before moving
> on to the next translation.
> 
> If an individual emulated write straddles two pages, the first of which is
> writable, and the second of which is not, the first half of the write will
> complete before #PF is raised from the second half.
> 
> This results in guest state corruption as a side effect of emulation, which
> has been observed to cause Windows to crash while under introspection.
> 
> Introduce the hvmemul_{,un}map_linear_addr() helpers, which translate the
> entire contents of a linear access, and vmap() the underlying frames to
> provide a contiguous virtual mapping for the emulator to use.  This is the
> same mechanism as used by the shadow emulation code.
> 
> This will catch any translation issues and abort the emulation before any
> modifications occur.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> Signed-off-by: Alexandru Isaila <aisaila@xxxxxxxxxxxxxxx>
> Reviewed-by: Paul Durrant <paul.durrant@xxxxxxxxxx>

This very clearly needs dropping with ...

> Changes since V5:
>       - Added address size check
>       - Added a pages local variable that holds the number of pages
>       - Added the !mapping check

... these.

Jan
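
For context, the reworked write path has roughly the following shape (a
minimal sketch assuming the helper signatures described above; the exact
error handling and the MMIO fallback are illustrative, not quoted from
the patch):

static int hvmemul_write(
    enum x86_segment seg,
    unsigned long offset,
    void *p_data,
    unsigned int bytes,
    struct x86_emulate_ctxt *ctxt)
{
    struct hvm_emulate_ctxt *hvmemul_ctxt =
        container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
    unsigned long addr, reps = 1;
    uint32_t pfec = PFEC_page_present | PFEC_write_access;
    void *mapping;
    int rc;

    rc = hvmemul_virtual_to_linear(
        seg, offset, bytes, &reps, hvm_access_write, hvmemul_ctxt, &addr);
    if ( rc != X86EMUL_OKAY || !bytes )
        return rc;

    /*
     * Translate and map every frame covered by [addr, addr + bytes) up
     * front: any translation failure or permission problem aborts the
     * emulation here, before guest memory is modified.
     */
    mapping = hvmemul_map_linear_addr(addr, bytes, pfec, hvmemul_ctxt);
    if ( IS_ERR(mapping) )
        return ~PTR_ERR(mapping);

    if ( !mapping ) /* Not plain RAM (e.g. MMIO): use the existing path. */
        return hvmemul_linear_mmio_write(addr, bytes, p_data, pfec,
                                         hvmemul_ctxt, 0);

    /*
     * A write straddling two pages is now a single memcpy() into the
     * vmap()ed contiguous view of the underlying frames.
     */
    memcpy(mapping, p_data, bytes);

    hvmemul_unmap_linear_addr(mapping, addr, bytes, hvmemul_ctxt);

    return X86EMUL_OKAY;
}

The point of the structure is that hvmemul_map_linear_addr() walks and
maps all pages of the access before the memcpy(), so a fault on the
second page of a straddled write is raised before the first page is
touched.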


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
