
[Xen-devel] [PATCH] AMD/intremap: Prevent use of per-device vector maps until irq logic is fixed



XSA-36 changed the default vector map mode from global to per-device.  This is
because a global vector map does not prevent one PCI device from impersonating
another and launching a DoS on the system.

However, the per-device vector map logic is broken for devices with multiple
MSI-X vectors, which can either result in a failed ASSERT() or misprogramming
of a guest's interrupt remapping tables.  The core problem is not trivial to
fix.

In an effort to get AMD systems back to a non-regressed state, introduce a new
type of vector map called per-device-global.  This uses per-device vector maps
in the IOMMU, but a single used_vector map for the core IRQ logic.

This patch is intended to be removed as soon as the per-device logic is fixed
correctly.

Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>

---
This patch deliberately does not introduce a command line argument for this
mode, to avoid having to carry it forevermore for compatibility reasons.

Unfortunately, the per-device logic is going to be very complicated to fix.
Under the current irq architecture, by the time you can work out that you have
a problem in map_domain_pirq(), it is far too late to fix it in a compatible
way.  It would be possible to "fix" the issue by failing the hypercall, but
that is not acceptable IMO.  One logical way to fix the issue would be to
reassign one of the irqs to a different vector, but that requires waiting for
another interrupt, and trashes the PCI device's used_vector table.

The best solution I can see is to have create_irq() know which PCI device the
irq belongs to, but I can't find a nice way of making this information
available.
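
To make the shape of that concrete, here is a rough, untested sketch;
create_irq_for_device() and assign_irq_vector_masked() are hypothetical names,
not existing interfaces:

    /* Hypothetical sketch only -- these interfaces do not exist in Xen today.
     * The idea is that the owning PCI device is known before any vector is
     * committed to, so the allocator can honour the device's used_vector
     * mask, rather than map_domain_pirq() discovering a clash after the
     * fact. */
    int create_irq_for_device(struct pci_dev *pdev)
    {
        int irq = create_irq();                     /* existing allocator */

        if ( irq < 0 || !pdev )
            return irq;

        /* Hypothetical helper: redo the vector assignment against the
         * device's per-device mask before the irq is programmed anywhere. */
        if ( assign_irq_vector_masked(irq, &pdev->arch.used_vectors) < 0 )
        {
            destroy_irq(irq);
            return -ENOSPC;
        }

        return irq;
    }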

George: This patch should go into xen-4.3 (as well as being backported) as it
is specifically to work around a regression caused by XSA-36.

diff -r 84e4d183fa8b -r 1294e2e36ef5 xen/arch/x86/irq.c
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -399,7 +399,8 @@ static vmask_t *irq_get_used_vector_mask
 {
     vmask_t *ret = NULL;
 
-    if ( opt_irq_vector_map == OPT_IRQ_VECTOR_MAP_GLOBAL )
+    if ( opt_irq_vector_map == OPT_IRQ_VECTOR_MAP_GLOBAL ||
+         opt_irq_vector_map == OPT_IRQ_VECTOR_MAP_PERDEV_GLOBAL )
     {
         struct irq_desc *desc = irq_to_desc(irq);
 
diff -r 84e4d183fa8b -r 1294e2e36ef5 xen/drivers/passthrough/amd/pci_amd_iommu.c
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -238,6 +238,21 @@ int __init amd_iov_detect(void)
     }
     if ( !amd_iommu_perdev_intremap )
         printk(XENLOG_WARNING "AMD-Vi: Using global interrupt remap table is 
not recommended (see XSA-36)!\n");
+
+    /* Per-device vector map logic is broken for devices with multiple MSI-X
+     * interrupts (and would also be for multiple MSI, if Xen supported it).
+     *
+     * Until this is fixed, use per-device-global vector tables to avoid the
+     * security vulnerability of global maps, and the buggy behaviour of
+     * per-device maps in map_domain_pirq().
+     */
+    if ( opt_irq_vector_map == OPT_IRQ_VECTOR_MAP_PERDEV )
+    {
+        printk(XENLOG_WARNING "AMD-Vi BUG: per-device vector map logic is 
broken.  "
+               "Using per-device-global maps instead until a fix is found\n");
+        opt_irq_vector_map = OPT_IRQ_VECTOR_MAP_PERDEV_GLOBAL;
+    }
+
     return scan_pci_devices();
 }
 
diff -r 84e4d183fa8b -r 1294e2e36ef5 xen/include/asm-x86/irq.h
--- a/xen/include/asm-x86/irq.h
+++ b/xen/include/asm-x86/irq.h
@@ -57,6 +57,7 @@ extern bool_t opt_noirqbalance;
 #define OPT_IRQ_VECTOR_MAP_NONE    1 /* None */ 
 #define OPT_IRQ_VECTOR_MAP_GLOBAL  2 /* One global vector map (no vector sharing) */
 #define OPT_IRQ_VECTOR_MAP_PERDEV  3 /* Per-device vetor map (no vector sharing w/in a device) */
+#define OPT_IRQ_VECTOR_MAP_PERDEV_GLOBAL 4 /* Remove me when PERDEV logic is fixed */
 
 extern int opt_irq_vector_map;
 
