Re: [Xen-devel] [PATCH v3 1/2] x86/hvm: split all linear reads and writes at page boundary
>>> On 14.03.19 at 23:30, <igor.druzhinin@xxxxxxxxxx> wrote:
> Ruling out page straddling at linear level makes it easier to
> distinguish chunks that require proper handling as MMIO access
> and not complete them as page straddling memory transactions
> prematurely. This doesn't change the general behavior.
>
> Signed-off-by: Igor Druzhinin <igor.druzhinin@xxxxxxxxxx>

Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
with one cosmetic aspect taken care of (can be done while committing):

> @@ -1106,20 +1119,9 @@ static int linear_read(unsigned long addr, unsigned int bytes, void *p_data,
>          if ( pfec & PFEC_insn_fetch )
>              return X86EMUL_UNHANDLEABLE;
>
> -        offset = addr & ~PAGE_MASK;
> -        if ( offset + bytes <= PAGE_SIZE )
> -            return hvmemul_linear_mmio_read(addr, bytes, p_data, pfec,
> -                                            hvmemul_ctxt,
> -                                            known_gla(addr, bytes, pfec));
> -
> -        /* Split the access at the page boundary. */
> -        part1 = PAGE_SIZE - offset;
> -        rc = linear_read(addr, part1, p_data, pfec, hvmemul_ctxt);
> -        if ( rc == X86EMUL_OKAY )
> -            rc = linear_read(addr + part1, bytes - part1, p_data + part1,
> -                             pfec, hvmemul_ctxt);
> -        return rc;
> -
> +        return hvmemul_linear_mmio_read(addr, bytes, p_data, pfec,
> +                                        hvmemul_ctxt,
> +                                        known_gla(addr, bytes, pfec));
>      case HVMTRANS_gfn_paged_out:

Please retain the blank line above here (and also in the write case).

I notice that sadly the change doesn't allow removing the respective
logic from hvmemul_linear_mmio_access() yet, due to its use by
hvmemul_cmpxchg().

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
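[Editor's note] For readers following the discussion, below is a minimal, self-contained sketch of the page-boundary split pattern that the hunk above removes from linear_read() (and which, per Jan's remark, still has a counterpart in hvmemul_linear_mmio_access()). It is not Xen code: do_partial_read(), split_linear_read(), fake_guest_mem and the simplified constants are stand-ins for illustration only, assuming a flat byte-addressable backing store instead of the real MMIO/RAM paths.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SHIFT    12
#define PAGE_SIZE     (1UL << PAGE_SHIFT)
#define PAGE_MASK     (~(PAGE_SIZE - 1))

#define X86EMUL_OKAY  0

/* Toy backing store standing in for guest memory (illustration only). */
static uint8_t fake_guest_mem[4 * PAGE_SIZE];

/*
 * Stand-in for the per-chunk handler (hvmemul_linear_mmio_read() in the
 * real code).  By construction it is only ever called with a chunk that
 * does not cross a page boundary.
 */
static int do_partial_read(unsigned long addr, unsigned int bytes,
                           void *p_data)
{
    memcpy(p_data, &fake_guest_mem[addr], bytes);
    return X86EMUL_OKAY;
}

/*
 * Split a read at the page boundary: handle the bytes up to the end of
 * the page containing 'addr', then recurse for the remainder.  This
 * mirrors the logic the patch drops from linear_read().
 */
static int split_linear_read(unsigned long addr, unsigned int bytes,
                             void *p_data)
{
    unsigned int offset = addr & ~PAGE_MASK;
    unsigned int part1;
    int rc;

    if ( offset + bytes <= PAGE_SIZE )
        return do_partial_read(addr, bytes, p_data);

    /* Split the access at the page boundary. */
    part1 = PAGE_SIZE - offset;
    rc = split_linear_read(addr, part1, p_data);
    if ( rc == X86EMUL_OKAY )
        rc = split_linear_read(addr + part1, bytes - part1,
                               (uint8_t *)p_data + part1);
    return rc;
}

int main(void)
{
    uint8_t buf[32];

    /* A 32-byte read starting 16 bytes before a page boundary splits 16/16. */
    memset(fake_guest_mem, 0xab, sizeof(fake_guest_mem));
    if ( split_linear_read(PAGE_SIZE - 16, sizeof(buf), buf) == X86EMUL_OKAY )
        printf("read ok, first byte %#x\n", buf[0]);
    return 0;
}
```

After the patch, linear_read() no longer needs this fallback because accesses are already split at the linear level before reaching it; only the cmpxchg path keeps the older split logic, as noted above.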