
[Xen-changelog] [xen stable-4.4] IOMMU: generalize and correct softirq processing during Dom0 device setup



commit cdac89d18725608c655f211cc4d5a642dab1a047
Author:     Jan Beulich <jbeulich@xxxxxxxx>
AuthorDate: Fri Mar 14 17:25:27 2014 +0100
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Fri Mar 14 17:25:27 2014 +0100

    IOMMU: generalize and correct softirq processing during Dom0 device setup
    
    c/s 21039:95f5a4ce8f24 ("VT-d: reduce default verbosity") put a call
    to process_pending_softirqs() into VT-d's domain_context_mapping(),
    which was wrong in two ways: For one, we shouldn't be doing this when
    setting up a device during DomU assignment. And then - I didn't check
    whether that was already the case back then - we shouldn't call that
    function with pcidevs_lock (or in fact any spin lock) held.
    
    Move the "preemption" into generic code, at once dealing with further
    actual issues (too much output elsewhere - particularly on systems
    with very many host-bridge-like devices - has been observed to still
    cause the watchdog to trigger when enabled) and potential ones (other
    IOMMU code may also end up being too verbose).
    
    Do the "preemption" once per device actually being set up when in
    verbose mode, and once per bus otherwise.
    
    Note that dropping pcidevs_lock around the process_pending_softirqs()
    invocation is specifically not a problem here: We're in an __init
    function and aren't racing with potential additions/removals of PCI
    devices. Not acquiring the lock in setup_dom0_pci_devices(), on the
    other hand, is not an option, as there are too many places that
    assert that the lock is held.
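    
    To illustrate the resulting shape (the hunk below only shows the
    added lines, not the surrounding loops), here is a minimal,
    self-contained sketch of the per-device / per-bus "preemption" with
    the lock dropped around process_pending_softirqs(). The stubbed lock
    and softirq helpers, the fixed bus/devfn bounds, and
    setup_one_device() are illustrative assumptions only; the real code
    is _setup_dom0_pci_devices() in xen/drivers/passthrough/pci.c.
    
        #include <stdbool.h>
        #include <stdio.h>
        
        /* Stand-ins for Xen primitives -- illustration only. */
        static bool iommu_verbose = true;
        static void spin_lock(void)   { }  /* models spin_lock(&pcidevs_lock) */
        static void spin_unlock(void) { }  /* models spin_unlock(&pcidevs_lock) */
        static void process_pending_softirqs(void) { puts("softirqs processed"); }
        static void setup_one_device(int bus, int devfn)
        {
            printf("setting up bus %d devfn %d\n", bus, devfn);
        }
        
        static void setup_dom0_pci_devices(void)
        {
            spin_lock();                        /* pcidevs_lock in the real code */
            for ( int bus = 0; bus < 2; bus++ )
            {
                for ( int devfn = 0; devfn < 8; devfn++ )
                {
                    setup_one_device(bus, devfn);
        
                    /* Verbose mode logs per device, so preempt per device. */
                    if ( iommu_verbose )
                    {
                        spin_unlock();
                        process_pending_softirqs();
                        spin_lock();
                    }
                }
        
                /* Otherwise, preempting once per bus suffices. */
                if ( !iommu_verbose )
                {
                    spin_unlock();
                    process_pending_softirqs();
                    spin_lock();
                }
            }
            spin_unlock();
        }
        
        int main(void)
        {
            setup_dom0_pci_devices();
            return 0;
        }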
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Acked-by: Xiantao Zhang <xiantao.zhang@xxxxxxxxx>
    master commit: 9ef5aa944a6a0df7f5938983043c7e46f158bbc6
    master date: 2014-03-04 10:52:20 +0100
---
 xen/drivers/passthrough/pci.c       |   15 +++++++++++++++
 xen/drivers/passthrough/vtd/iommu.c |    4 ----
 2 files changed, 15 insertions(+), 4 deletions(-)

diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index c5c8344..ff78142 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -27,6 +27,7 @@
 #include <xen/delay.h>
 #include <xen/keyhandler.h>
 #include <xen/radix-tree.h>
+#include <xen/softirq.h>
 #include <xen/tasklet.h>
 #include <xsm/xsm.h>
 #include <asm/msi.h>
@@ -922,6 +923,20 @@ static int __init _setup_dom0_pci_devices(struct pci_seg *pseg, void *arg)
                 printk(XENLOG_WARNING "Dom%d owning %04x:%02x:%02x.%u?\n",
                        pdev->domain->domain_id, pseg->nr, bus,
                        PCI_SLOT(devfn), PCI_FUNC(devfn));
+
+            if ( iommu_verbose )
+            {
+                spin_unlock(&pcidevs_lock);
+                process_pending_softirqs();
+                spin_lock(&pcidevs_lock);
+            }
+        }
+
+        if ( !iommu_verbose )
+        {
+            spin_unlock(&pcidevs_lock);
+            process_pending_softirqs();
+            spin_lock(&pcidevs_lock);
         }
     }
 
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 5f10034..e2a4778 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -31,7 +31,6 @@
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
 #include <xen/keyhandler.h>
-#include <xen/softirq.h>
 #include <asm/msi.h>
 #include <asm/irq.h>
 #include <asm/hvm/vmx/vmx.h>
@@ -1494,9 +1493,6 @@ static int domain_context_mapping(
         break;
     }
 
-    if ( iommu_verbose )
-        process_pending_softirqs();
-
     return ret;
 }
 
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.4
