[Xen-changelog] [xen stable-4.10] x86/HVM: suppress I/O completion for port output
commit 696b24dfe1cf1e9f084e5074f85abf8693fd52f5
Author:     Jan Beulich <jbeulich@xxxxxxxx>
AuthorDate: Fri Apr 13 16:25:25 2018 +0200
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Fri Apr 13 16:25:25 2018 +0200

    x86/HVM: suppress I/O completion for port output
    
    We do not break up port requests when they cross emulation entity
    boundaries, and a write to an I/O port is necessarily the last
    operation of an instruction instance, so there is no need to re-invoke
    the full emulation path upon receiving the result from an external
    emulator.
    
    In case we want to properly split port accesses in the future, this
    change will need to be reverted, as it would prevent things from
    working correctly when e.g. the first part of an access needs to go to
    an external emulator while the second part is to be handled
    internally.
    
    While this addresses the reported problem of Windows paging out the
    buffer underneath an in-process REP OUTS, it does not address the
    wider problem of the re-issued insn (to the insn emulator) being prone
    to raise an exception (#PF) during a replayed, previously successful
    memory access (we only record prior MMIO accesses).
    
    Leaving aside the problem being worked around here, I think the
    performance aspect alone is a good reason to change the behavior.
    
    Also take the opportunity to change hvm_vcpu_io_need_completion()'s
    return type from bool_t to bool.
Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
Reviewed-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
master commit: 91afb8139f954a06e564d4915bc7d6a8575e2812
master date: 2018-04-11 10:42:24 +0200
---
 xen/arch/x86/hvm/emulate.c     | 6 +++++-
 xen/include/asm-x86/hvm/vcpu.h | 6 ++++--
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index f88a01118e..b282089e03 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -281,7 +281,11 @@ static int hvmemul_do_io(
             rc = hvm_send_ioreq(s, &p, 0);
             if ( rc != X86EMUL_RETRY || currd->is_shutting_down )
                 vio->io_req.state = STATE_IOREQ_NONE;
-            else if ( data_is_addr )
+            /*
+             * This effectively is !hvm_vcpu_io_need_completion(vio), slightly
+             * optimized and using local variables we have available.
+             */
+            else if ( data_is_addr || (!is_mmio && dir == IOREQ_WRITE) )
                 rc = X86EMUL_OKAY;
         }
         break;
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index d93166fb92..bd4e4843db 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -91,10 +91,12 @@ struct hvm_vcpu_io {
     const struct g2m_ioport *g2m_ioport;
 };
 
-static inline bool_t hvm_vcpu_io_need_completion(const struct hvm_vcpu_io *vio)
+static inline bool hvm_vcpu_io_need_completion(const struct hvm_vcpu_io *vio)
 {
     return (vio->io_req.state == STATE_IOREQ_READY) &&
-           !vio->io_req.data_is_ptr;
+           !vio->io_req.data_is_ptr &&
+           (vio->io_req.type != IOREQ_TYPE_PIO ||
+            vio->io_req.dir != IOREQ_WRITE);
 }
 
 struct nestedvcpu {
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.10

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/xen-changelog