[Xen-devel] [PATCH v4] x86: properly handle MSI-X unmask operation from guests
patch revision history
----------------------
v1: Initial patch to handle this issue, involving a change to the hypercall interface.
v2: Handled entirely inside the hypervisor.
v3: Changed some of the logic for handling pending MSI-X unmask operations.
v4: Coding-style changes based on Andrew Cooper's comments.

From 51c5b7f1f9c8f319da8adf021b39e18fbd3bf314 Mon Sep 17 00:00:00 2001
From: Feng Wu <feng.wu@xxxxxxxxx>
Date: Wed, 13 Nov 2013 21:43:48 -0500
Subject: [PATCH] x86: properly handle MSI-X unmask operation from guests

For a pass-through device with MSI-X capability, when the guest tries to
unmask the MSI-X interrupt for the passed-through device, Xen does not
clear the mask bit for that MSI-X entry in hardware in the following
scenario, which causes the network connection to be lost:

1. Guest masks the MSI-X interrupt
2. Guest updates the address and data for it
3. Guest unmasks the MSI-X interrupt (this is the problematic step)

Xen does not handle step 3 above correctly. When the guest tries to unmask
the MSI-X interrupt, it traps into Xen; if Xen notices that the address or
data has been modified by the guest before, it simply returns to QEMU, and
QEMU then updates Xen with the latest address/data values via a hypercall.
However, in this whole process the MSI-X unmask operation itself is lost:
Xen never clears the mask bit in hardware, so the interrupt stays disabled,
which is why the network connection is lost. This patch fixes this issue.

Signed-off-by: Feng Wu <feng.wu@xxxxxxxxx>
---
 xen/arch/x86/hvm/io.c        |    3 +++
 xen/arch/x86/hvm/vmsi.c      |   18 ++++++++++++++++++
 xen/include/asm-x86/domain.h |    8 ++++++++
 xen/include/xen/pci.h        |    1 +
 4 files changed, 30 insertions(+), 0 deletions(-)

diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index deb7b92..6c83c25 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -297,6 +297,9 @@ void hvm_io_assist(ioreq_t *p)
         break;
     }
 
+    if ( msix_post_handler(curr) )
+        gdprintk(XENLOG_ERR, "msix_post_handler error\n");
+
     if ( p->state == STATE_IOREQ_NONE )
         vcpu_end_shutdown_deferral(curr);
 }
diff --git a/xen/arch/x86/hvm/vmsi.c b/xen/arch/x86/hvm/vmsi.c
index 4826b4a..2cdd0dc 100644
--- a/xen/arch/x86/hvm/vmsi.c
+++ b/xen/arch/x86/hvm/vmsi.c
@@ -293,7 +293,11 @@ static int msixtbl_write(struct vcpu *v, unsigned long address,
 
     /* exit to device model if address/data has been modified */
     if ( test_and_clear_bit(nr_entry, &entry->table_flags) )
+    {
+        v->arch.pending_msix_unmask.valid = 1;
+        v->arch.pending_msix_unmask.ctrl_address = address;
         goto out;
+    }
 
     virt = msixtbl_addr_to_virt(entry, address);
     if ( !virt )
@@ -528,3 +532,17 @@ void msixtbl_pt_cleanup(struct domain *d)
     spin_unlock(&d->arch.hvm_domain.msixtbl_list_lock);
     local_irq_restore(flags);
 }
+
+int msix_post_handler(struct vcpu *v)
+{
+    int rc;
+
+    if ( v->arch.pending_msix_unmask.valid == 0 )
+        return 0;
+
+    v->arch.pending_msix_unmask.valid = 0;
+
+    rc = msixtbl_write(v, v->arch.pending_msix_unmask.ctrl_address, 4, 0);
+    return rc != X86EMUL_OKAY ? -1 : 0;
+}
+
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index 9d39061..89b1adc 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -375,6 +375,12 @@ struct pv_vcpu
     spinlock_t shadow_ldt_lock;
 };
 
+struct pending_msix_unmask_info
+{
+    unsigned long ctrl_address;
+    bool_t valid;
+};
+
 struct arch_vcpu
 {
     /*
@@ -439,6 +445,8 @@ struct arch_vcpu
     /* A secondary copy of the vcpu time info. */
     XEN_GUEST_HANDLE(vcpu_time_info_t) time_info_guest;
+
+    struct pending_msix_unmask_info pending_msix_unmask;
 
 } __cacheline_aligned;
 
 /* Shorthands to improve code legibility. */
diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
index cadb525..ce8f6ff 100644
--- a/xen/include/xen/pci.h
+++ b/xen/include/xen/pci.h
@@ -147,5 +147,6 @@ struct pirq;
 int msixtbl_pt_register(struct domain *, struct pirq *, uint64_t gtable);
 void msixtbl_pt_unregister(struct domain *, struct pirq *);
 void msixtbl_pt_cleanup(struct domain *d);
+int msix_post_handler(struct vcpu *v);
 
 #endif /* __XEN_PCI_H__ */
-- 
1.7.1

Thanks,
Feng
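
For reference, the guest-side sequence that triggers the problem (steps 1-3 in
the description above) looks roughly like the sketch below. It assumes a
pointer to the device's MMIO-mapped MSI-X table and defines its own 32-bit
accessors and entry-layout constants locally; the function and macro names are
illustrative only and are not taken from any particular guest driver.

#include <stdint.h>

/* MSI-X table entry layout per the PCI spec: 16 bytes per entry. */
#define MSIX_ENTRY_SIZE          16
#define MSIX_ENTRY_LOWER_ADDR     0
#define MSIX_ENTRY_UPPER_ADDR     4
#define MSIX_ENTRY_DATA           8
#define MSIX_ENTRY_VECTOR_CTRL   12
#define MSIX_ENTRY_CTRL_MASKBIT   1   /* bit 0 of the vector control dword */

static inline uint32_t mmio_readl(const volatile void *addr)
{
    return *(const volatile uint32_t *)addr;
}

static inline void mmio_writel(uint32_t val, volatile void *addr)
{
    *(volatile uint32_t *)addr = val;
}

/* Hypothetical guest routine: retarget one MSI-X entry to a new addr/data. */
static void guest_retarget_msix(volatile void *table, unsigned int entry,
                                uint64_t addr, uint32_t data)
{
    volatile char *e = (volatile char *)table + entry * MSIX_ENTRY_SIZE;
    uint32_t ctrl = mmio_readl(e + MSIX_ENTRY_VECTOR_CTRL);

    /* Step 1: mask the entry before touching address/data. */
    mmio_writel(ctrl | MSIX_ENTRY_CTRL_MASKBIT, e + MSIX_ENTRY_VECTOR_CTRL);

    /* Step 2: update address and data while the entry is masked. */
    mmio_writel((uint32_t)addr, e + MSIX_ENTRY_LOWER_ADDR);
    mmio_writel((uint32_t)(addr >> 32), e + MSIX_ENTRY_UPPER_ADDR);
    mmio_writel(data, e + MSIX_ENTRY_DATA);

    /*
     * Step 3: unmask.  This vector-control write traps into Xen; before the
     * patch, Xen deferred it to QEMU (because address/data were dirty) and
     * the mask bit was never cleared in hardware.
     */
    mmio_writel(ctrl & ~MSIX_ENTRY_CTRL_MASKBIT, e + MSIX_ENTRY_VECTOR_CTRL);
}

With the patch applied, the deferred vector-control write is remembered in
pending_msix_unmask and replayed by msix_post_handler() from hvm_io_assist()
once the ioreq to QEMU completes, so the mask bit does get cleared in hardware.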