[Xen-devel] [PATCH 2/2] x86/hvm/emulate: make sure rep I/O emulation does not cross GFN boundaries
When emulating a rep I/O operation it is possible that the ioreq will
describe a single operation that spans multiple GFNs. This is fine as
long as all those GFNs fall within an MMIO region covered by a single
device model, but unfortunately the higher levels of the emulation code
do not guarantee that. This is something that should almost certainly be
fixed, but in the meantime this patch makes sure that MMIO is truncated
at GFN boundaries and hence the appropriate device model is re-evaluated
for each target GFN.

Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
---
Cc: Jan Beulich <jbeulich@xxxxxxxx>
Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
---
 xen/arch/x86/hvm/emulate.c | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 8385c62145..d6a81ec4d1 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -184,8 +184,23 @@ static int hvmemul_do_io(
         hvmtrace_io_assist(&p);
     }
 
-    vio->io_req = p;
+    /*
+     * Make sure that we truncate rep MMIO at any GFN boundary. This is
+     * necessary to ensure that the correct device model is targetted
+     * or that we correctly handle a rep op spanning MMIO and RAM.
+     */
+    if ( unlikely(p.count > 1) && p.type == IOREQ_TYPE_COPY )
+    {
+        unsigned long off = p.addr & ~PAGE_MASK;
+        p.count = min_t(unsigned long,
+                        p.count,
+                        p.df ?
+                        (off + p.size) / p.size :
+                        (PAGE_SIZE - off) / p.size);
+    }
+
+    vio->io_req = p;
 
     rc = hvm_io_intercept(&p);
 
     /*
-- 
2.11.0
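
For readers who want to see the clamping arithmetic in isolation, here is a
minimal standalone sketch (not part of the patch). It mirrors the min_t()
expression in the hunk above in a hypothetical helper, clamp_rep_count();
the PAGE_SIZE/PAGE_MASK definitions and the example addresses are assumed
values for illustration only.

/* Illustrative sketch: limit a rep MMIO access so it never crosses a
 * page/GFN boundary, for both increasing (df == 0) and decreasing
 * (df != 0) address directions.
 */
#include <stdio.h>

#define PAGE_SIZE 4096UL
#define PAGE_MASK (~(PAGE_SIZE - 1))

static unsigned long min_ul(unsigned long a, unsigned long b)
{
    return a < b ? a : b;
}

/* addr:  guest-physical address of the first element accessed
 * size:  bytes per element
 * count: requested number of repetitions
 * df:    direction flag (non-zero means the address decrements)
 */
static unsigned long clamp_rep_count(unsigned long addr, unsigned long size,
                                     unsigned long count, int df)
{
    unsigned long off = addr & ~PAGE_MASK;    /* offset within the page */

    return min_ul(count,
                  df ? (off + size) / size          /* elements down to page start */
                     : (PAGE_SIZE - off) / size);   /* elements up to page end */
}

int main(void)
{
    /* 4-byte accesses starting 8 bytes before a page boundary, forwards:
     * only 2 of the 100 requested repetitions stay within this GFN.
     */
    printf("%lu\n", clamp_rep_count(0x10FF8, 4, 100, 0)); /* prints 2 */

    /* Same start, backwards: off = 0xFF8, so (0xFF8 + 4) / 4 = 1023
     * repetitions would fit before leaving the page, so the full 100 remain.
     */
    printf("%lu\n", clamp_rep_count(0x10FF8, 4, 100, 1)); /* prints 100 */

    return 0;
}

The remaining repetitions of a truncated operation are retried by the higher
levels of the emulator, which is what lets the device model be re-evaluated
for each target GFN.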