Re: [Xen-devel] [PATCH v3 02/34] x86/HVM: grow MMIO cache data size to 64 bytes
>>> On 25.10.18 at 20:36, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 18/09/18 12:53, Jan Beulich wrote:
>> This is needed before enabling any AVX512 insns in the emulator. Change
>> the way alignment is enforced at the same time.
>>
>> Add a check that the buffer won't actually overflow, and while at it
>> also convert the check for accesses to not cross page boundaries.
>>
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>> ---
>> v3: New.
>>
>> --- a/xen/arch/x86/hvm/emulate.c
>> +++ b/xen/arch/x86/hvm/emulate.c
>> @@ -866,7 +866,18 @@ static int hvmemul_phys_mmio_access(
>>      int rc = X86EMUL_OKAY;
>>
>>      /* Accesses must fall within a page. */
>> -    BUG_ON((gpa & ~PAGE_MASK) + size > PAGE_SIZE);
>> +    if ( (gpa & ~PAGE_MASK) + size > PAGE_SIZE )
>> +    {
>> +        ASSERT_UNREACHABLE();
>> +        return X86EMUL_UNHANDLEABLE;
>> +    }
>> +
>> +    /* Accesses must not overflow the cache's buffer. */
>> +    if ( size > sizeof(cache->buffer) )
>> +    {
>> +        ASSERT_UNREACHABLE();
>> +        return X86EMUL_UNHANDLEABLE;
>> +    }
>>
>>      /*
>>       * hvmemul_do_io() cannot handle non-power-of-2 accesses or
>> --- a/xen/include/asm-x86/hvm/vcpu.h
>> +++ b/xen/include/asm-x86/hvm/vcpu.h
>> @@ -42,15 +42,14 @@ struct hvm_vcpu_asid {
>>  };
>>
>>  /*
>> - * We may read or write up to m256 as a number of device-model
>> + * We may read or write up to m512 as a number of device-model
>>   * transactions.
>>   */
>>  struct hvm_mmio_cache {
>>      unsigned long gla;
>>      unsigned int size;
>>      uint8_t dir;
>> -    uint8_t pad[3]; /* make buffer[] long-aligned */
>> -    uint8_t buffer[32];
>> +    uint8_t buffer[64] __aligned(sizeof(long));
>
> Don't we want it 16-byte aligned, rather than 8?

Why? We don't access the buffer via SIMD insns.

Jan
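For reference, the BUG_ON() -> ASSERT_UNREACHABLE() conversion in the first hunk follows the usual defensive pattern: a debug build still dies loudly on the supposedly impossible condition, while a release build fails the one emulation instead of crashing the host. A minimal standalone sketch of the pattern, using the C library's assert() as a stand-in for Xen's ASSERT_UNREACHABLE() and illustrative values for the x86_emulate return codes and page constants (the real definitions live in Xen's headers):

#include <assert.h>
#include <stdio.h>

/* Illustrative stand-ins, not Xen's actual definitions. */
#define X86EMUL_OKAY         0
#define X86EMUL_UNHANDLEABLE 1
#define PAGE_SIZE            4096UL
#define PAGE_MASK            (~(PAGE_SIZE - 1))

/* Stand-in for Xen's ASSERT_UNREACHABLE(): fatal while NDEBUG is
 * unset (debug build), compiled out in a release (-DNDEBUG) build. */
#define ASSERT_UNREACHABLE() assert(!"unreachable")

static int phys_mmio_access(unsigned long gpa, unsigned int size)
{
    /*
     * A BUG_ON() here would take the whole host down even in a
     * production build.  This way a debug build still catches the
     * impossible condition loudly, while a release build merely
     * refuses to emulate the access.
     */
    if ( (gpa & ~PAGE_MASK) + size > PAGE_SIZE )
    {
        ASSERT_UNREACHABLE();
        return X86EMUL_UNHANDLEABLE;
    }

    return X86EMUL_OKAY;
}

int main(void)
{
    /* In-page access: fine, prints 0. */
    printf("%d\n", phys_mmio_access(0x1000UL, 8));
    /* phys_mmio_access(0xffcUL, 8) crosses a page boundary: it aborts
     * in a debug build and returns X86EMUL_UNHANDLEABLE with -DNDEBUG. */
    return 0;
}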
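On the alignment question at the end: a minimal sketch, outside Xen and with __aligned() hand-expanded to the GCC attribute it wraps, showing that on LP64 targets the old explicit pad[3] and the new attribute both place buffer[] at the same 8-byte-aligned offset 16. 16-byte alignment would only matter if the buffer were accessed with aligned SIMD loads/stores, which, per Jan's reply, it is not.

#include <stdint.h>
#include <stddef.h>

/* Hand-expanded version of Xen's __aligned() wrapper. */
#define __aligned(x) __attribute__((__aligned__(x)))

/* Old layout: buffer[] made long-aligned via explicit padding. */
struct old_cache {
    unsigned long gla;
    unsigned int size;
    uint8_t dir;
    uint8_t pad[3];           /* make buffer[] long-aligned */
    uint8_t buffer[32];
};

/* New layout: the attribute makes the compiler insert the padding,
 * so growing buffer[] doesn't require re-counting pad bytes. */
struct new_cache {
    unsigned long gla;
    unsigned int size;
    uint8_t dir;
    uint8_t buffer[64] __aligned(sizeof(long));
};

/*
 * On LP64 (x86-64): gla at 0, size at 8, dir at 12, buffer at 16 in
 * both layouts, i.e. 8-byte aligned rather than 16.  Per Jan's reply
 * that is sufficient, since the emulator fills and drains the buffer
 * with ordinary byte/memcpy-style accesses, never SIMD insns.
 */
_Static_assert(offsetof(struct old_cache, buffer) ==
               offsetof(struct new_cache, buffer),
               "both layouts place buffer[] identically");
_Static_assert(offsetof(struct new_cache, buffer) % sizeof(long) == 0,
               "buffer[] is long-aligned");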