[Xen-devel] Ping: [PATCH] x86/HVM: use single (atomic) MOV for aligned emulated writes
On 16.09.2019 11:40, Jan Beulich wrote:
> Using memcpy() may result in multiple individual byte accesses
> (depending on how memcpy() is implemented and how the resulting insns,
> e.g. REP MOVSB, get carried out in hardware), which isn't what we
> want/need for carrying out guest insns as correctly as possible. Fall
> back to memcpy() only for accesses not 2, 4, or 8 bytes in size.
>
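As a minimal standalone illustration of the distinction (not Xen code;
the helper names below are made up), an aligned sized store is
generally emitted as a single MOV, which x86 guarantees to be atomic,
whereas memcpy() may be lowered to REP MOVSB or several narrower
accesses depending on the compiler and libc:

#include <stdint.h>
#include <string.h>

/* One aligned 8-byte store: the compiler generally emits a single
 * MOV, which x86 guarantees to be atomic for aligned operands. */
static inline void write_u64_single(void *dst, uint64_t val)
{
    *(volatile uint64_t *)dst = val;
}

/* Equivalent copy via memcpy(): correct, but may be lowered to
 * REP MOVSB or a sequence of byte accesses, so another CPU could
 * observe a partially updated value. */
static inline void write_u64_bytewise(void *dst, uint64_t val)
{
    memcpy(dst, &val, sizeof(val));
}
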
> Suggested-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> ---
> TBD: Besides the still-open question of whether the linear_write()
> path needs the same treatment, it also remains to be decided whether
> we'd want to extend this to reads as well. linear_{read,write}()
> currently don't use hvmemul_map_linear_addr(), i.e. in both cases
> I'd also need to adjust __hvm_copy() (perhaps by making the
> construct below a helper function).
>
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -1324,7 +1324,14 @@ static int hvmemul_write(
>      if ( !mapping )
>          return linear_write(addr, bytes, p_data, pfec, hvmemul_ctxt);
>  
> -    memcpy(mapping, p_data, bytes);
> +    /* Where possible use single (and hence generally atomic) MOV insns. */
> +    switch ( bytes )
> +    {
> +    case 2: write_u16_atomic(mapping, *(uint16_t *)p_data); break;
> +    case 4: write_u32_atomic(mapping, *(uint32_t *)p_data); break;
> +    case 8: write_u64_atomic(mapping, *(uint64_t *)p_data); break;
> +    default: memcpy(mapping, p_data, bytes); break;
> +    }
>  
>      hvmemul_unmap_linear_addr(mapping, addr, bytes, hvmemul_ctxt);
>
>
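The "construct below a helper function" idea from the TBD could look
roughly like the following (a sketch only, assuming the
write_u{16,32,64}_atomic() helpers take a destination pointer and a
value as in the patch; the function name is hypothetical and not part
of the patch):

/* Hypothetical helper factoring out the switch from hvmemul_write(),
 * so that a read-side counterpart and __hvm_copy() could share the
 * logic. Name and placement are assumptions. */
static void hvm_sized_write(void *mapping, const void *p_data,
                            unsigned int bytes)
{
    /* Where possible use single (and hence generally atomic) MOV insns. */
    switch ( bytes )
    {
    case 2: write_u16_atomic(mapping, *(const uint16_t *)p_data); break;
    case 4: write_u32_atomic(mapping, *(const uint32_t *)p_data); break;
    case 8: write_u64_atomic(mapping, *(const uint64_t *)p_data); break;
    default: memcpy(mapping, p_data, bytes); break;
    }
}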