[Xen-devel] [PATCH] Adding back guest MSI eoi support for unmaskable MSI interrupt
Hi, Keir,

Sorry, I made a mistake when removing the MSI IRQ storm logic: I wrongly
removed the guest MSI EOI hook for unmaskable MSI interrupts. An unmaskable
MSI interrupt is of ACKTYPE_EOI, so a hook on guest MSI EOI writes is
needed. IRQ ratelimiting also has no effect on this type of interrupt,
since the "disable" IRQ logic actually does nothing for unmaskable MSI.
This patch adds the hook back. Could you please review the attached patch
and point out anything I missed? Thanks in advance!

Note: the guest MSI EOI hook for ACKTYPE_NONE IRQs is *not* added back,
and the MSI IRQ storm logic (originally for maskable MSI) remains removed.

Description: This patch adds back the proper guest MSI EOI hook for
correctly handling unmaskable MSI interrupts, which was wrongly removed
by changeset 23703.

Signed-off-by: Shan Haitao <haitao.shan@xxxxxxxxx>

diff -r 31dd84463eec xen/arch/x86/hvm/vlapic.c
--- a/xen/arch/x86/hvm/vlapic.c Sat Jul 16 09:25:48 2011 +0100
+++ b/xen/arch/x86/hvm/vlapic.c Mon Jul 18 14:13:19 2011 +0800
@@ -400,6 +400,8 @@ void vlapic_EOI_set(struct vlapic *vlapi
     if ( vlapic_test_and_clear_vector(vector,
                                       &vlapic->regs->data[APIC_TMR]) )
         vioapic_update_EOI(vlapic_domain(vlapic), vector);
+
+    hvm_dpci_msi_eoi(current->domain, vector);
 }
 
 int vlapic_ipi(
diff -r 31dd84463eec xen/drivers/passthrough/io.c
--- a/xen/drivers/passthrough/io.c Sat Jul 16 09:25:48 2011 +0100
+++ b/xen/drivers/passthrough/io.c Mon Jul 18 14:13:19 2011 +0800
@@ -421,6 +421,56 @@ int hvm_do_IRQ_dpci(struct domain *d, st
 }
 
 #ifdef SUPPORT_MSI_REMAPPING
+/* called with d->event_lock held */
+static void __msi_pirq_eoi(struct hvm_pirq_dpci *pirq_dpci)
+{
+    irq_desc_t *desc;
+
+    if ( (pirq_dpci->flags & HVM_IRQ_DPCI_MAPPED) &&
+         (pirq_dpci->flags & HVM_IRQ_DPCI_MACH_MSI) )
+    {
+        struct pirq *pirq = dpci_pirq(pirq_dpci);
+
+        BUG_ON(!local_irq_is_enabled());
+        desc = pirq_spin_lock_irq_desc(pirq, NULL);
+        if ( !desc )
+            return;
+        desc_guest_eoi(desc, pirq);
+    }
+}
+
+static int _hvm_dpci_msi_eoi(struct domain *d,
+                             struct hvm_pirq_dpci *pirq_dpci, void *arg)
+{
+    int vector = (long)arg;
+
+    if ( (pirq_dpci->flags & HVM_IRQ_DPCI_MACH_MSI) &&
+         (pirq_dpci->gmsi.gvec == vector) )
+    {
+        int dest = pirq_dpci->gmsi.gflags & VMSI_DEST_ID_MASK;
+        int dest_mode = !!(pirq_dpci->gmsi.gflags & VMSI_DM_MASK);
+
+        if ( vlapic_match_dest(vcpu_vlapic(current), NULL, 0, dest,
+                               dest_mode) )
+        {
+            __msi_pirq_eoi(pirq_dpci);
+            return 1;
+        }
+    }
+
+    return 0;
+}
+
+void hvm_dpci_msi_eoi(struct domain *d, int vector)
+{
+    if ( !iommu_enabled || !d->arch.hvm_domain.irq.dpci )
+        return;
+
+    spin_lock(&d->event_lock);
+    pt_pirq_iterate(d, _hvm_dpci_msi_eoi, (void *)(long)vector);
+    spin_unlock(&d->event_lock);
+}
+
 static int hvm_pci_msi_assert(struct domain *d,
                               struct hvm_pirq_dpci *pirq_dpci)
 {
@@ -458,6 +508,14 @@ static int _hvm_dirq_assist(struct domai
         else
             hvm_pci_intx_assert(d, device, intx);
         pirq_dpci->pending++;
+
+#ifdef SUPPORT_MSI_REMAPPING
+        if ( pirq_dpci->flags & HVM_IRQ_DPCI_TRANSLATE )
+        {
+            /* for translated MSI to INTx interrupt, eoi as early as possible */
+            __msi_pirq_eoi(pirq_dpci);
+        }
+#endif
     }
 
     /*
diff -r 31dd84463eec xen/include/asm-x86/hvm/io.h
--- a/xen/include/asm-x86/hvm/io.h Sat Jul 16 09:25:48 2011 +0100
+++ b/xen/include/asm-x86/hvm/io.h Mon Jul 18 14:13:19 2011 +0800
@@ -139,5 +139,6 @@ struct hvm_hw_stdvga {
 void stdvga_init(struct domain *d);
 void stdvga_deinit(struct domain *d);
 
+extern void hvm_dpci_msi_eoi(struct domain *d, int vector);
 
 #endif /* __ASM_X86_HVM_IO_H__ */
Attachment: add_msi_eoi_logic_for_unmaskable_msi.patch

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel