
[Xen-devel] [PATCH v3 4/6] xen/x86: Allow stubdom access to irq created for msi.



From: Simon Gaiser <simon@xxxxxxxxxxxxxxxxxxxxxx>

Stubdomains need to be given sufficient privilege over the guest for
which they provide emulation in order for PCI passthrough to work
correctly. When an HVM domain tries to enable MSI, QEMU in the
stubdomain calls PHYSDEVOP_map_pirq, but later it also needs to call
XEN_DOMCTL_bind_pt_irq as part of xc_domain_update_msi_irq. Grant the
stubdomain access to the allocated IRQ as part of PHYSDEVOP_map_pirq, so
that the later bind can succeed.
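To make the flow concrete, the stubdomain-side sequence is roughly the
following (an illustrative sketch only, not the actual QEMU code; the
libxc signatures are as of this series and the vector/flags values are
placeholders):

    /* Sketch: why a stubdomain ends up needing access to the IRQ. */
    #include <xenctrl.h>

    static int stubdom_enable_msi(xc_interface *xch, uint32_t guest_domid,
                                  int bus, int devfn, uint64_t gtable)
    {
        int pirq = -1, rc;

        /* PHYSDEVOP_map_pirq (index -1 = let Xen allocate the IRQ).
         * With this patch Xen also grants the calling stubdomain access
         * to the newly created IRQ. */
        rc = xc_physdev_map_pirq_msi(xch, guest_domid, -1, &pirq,
                                     devfn, bus, 0 /* entry_nr */,
                                     0 /* table_base */);
        if ( rc )
            return rc;

        /* XEN_DOMCTL_bind_pt_irq (issued by xc_domain_update_msi_irq)
         * fails unless the stubdomain is permitted access to the IRQ
         * behind 'pirq' -- hence the grant above. */
        return xc_domain_update_msi_irq(xch, guest_domid, 0x30 /* gvec */,
                                        pirq, 0 /* gflags */, gtable);
    }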

This is not needed for PCI INTx, because the IRQ in that case is known
beforehand and the stubdomain is given permission over this IRQ by
libxl__device_pci_add (there is a do_pci_add against the stubdomain).
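For comparison, the INTx-time grant on the toolstack side boils down to
something like the following (a sketch, not the literal libxl code;
in-tree this happens in the do_pci_add path):

    /* Sketch of the toolstack-side INTx grant for the stubdomain. */
    #include <xenctrl.h>

    static int grant_intx_irq(xc_interface *xch, uint32_t stubdom_domid,
                              int irq)
    {
        int pirq = irq, rc;

        /* 'irq' was read from sysfs, so it is known before any
         * hypercall is made. */
        rc = xc_physdev_map_pirq(xch, stubdom_domid, irq, &pirq);
        if ( rc )
            return rc;

        /* The explicit grant that has no MSI counterpart: for MSI the
         * IRQ does not exist until PHYSDEVOP_map_pirq runs. */
        return xc_domain_irq_permission(xch, stubdom_domid, irq, 1);
    }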

Based on 
https://github.com/OpenXT/xenclient-oe/blob/5e0e7304a5a3c75ef01240a1e3673665b2aaf05e/recipes-extended/xen/files/stubdomain-msi-irq-access.patch
 by Eric Chanudet <chanudete@xxxxxxxxxxxx>.

Signed-off-by: Simon Gaiser <simon@xxxxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Marek Marczykowski-Górecki <marmarek@xxxxxxxxxxxxxxxxxxxxxx>
---
Changes in v3:
 - extend commit message

With this patch, a stubdomain will be able to create and map multiple
IRQs (a DoS possibility?), as only the target domain is validated in
practice. Is that ok? If not, what additional limits could be applied
here?
In the INTx case the problem doesn't apply, because the toolstack grants
access to a particular IRQ and no allocation happens on a stubdomain's
request. But in the MSI case it isn't that easy, as the IRQ number isn't
known beforehand (as explained in the commit message).
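One hypothetical shape for such a limit, entirely illustrative (neither
the field nor the constant below exists in the tree):

    /* Hypothetical: before create_irq() in allocate_and_map_msi_pirq(),
     * cap how many IRQs a stubdomain may allocate for its target. */
    if ( current->domain->target == d &&
         current->domain->nr_stubdom_irqs >= MAX_STUBDOM_IRQS )
        return -ENOSPC;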
---
 xen/arch/x86/irq.c     | 23 +++++++++++++++++++++++
 xen/arch/x86/physdev.c |  9 +++++++++
 2 files changed, 32 insertions(+)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 8b44d6c..67c67d4 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -2674,6 +2674,22 @@ int allocate_and_map_msi_pirq(struct domain *d, int index, int *pirq_p,
         {
     case MAP_PIRQ_TYPE_MULTI_MSI:
             irq = create_irq(NUMA_NO_NODE);
+            if ( !(irq < nr_irqs_gsi || irq >= nr_irqs) &&
+                    current->domain->target == d )
+            {
+                ret = irq_permit_access(current->domain, irq);
+                if ( ret )
+                {
+                    dprintk(XENLOG_G_ERR,
+                            "dom%d: can't grant it's stubdom (%d) access to "
+                            "irq %d for msi: %d!\n",
+                            d->domain_id,
+                            current->domain->domain_id,
+                            irq,
+                            ret);
+                    return ret;
+                }
+            }
         }
 
         if ( irq < nr_irqs_gsi || irq >= nr_irqs )
@@ -2717,7 +2733,15 @@ int allocate_and_map_msi_pirq(struct domain *d, int index, int *pirq_p,
         case MAP_PIRQ_TYPE_MSI:
             if ( index == -1 )
         case MAP_PIRQ_TYPE_MULTI_MSI:
+            {
+                if ( current->domain->target == d &&
+                        irq_deny_access(current->domain, irq) )
+                    dprintk(XENLOG_G_ERR,
+                            "dom%d: can't revoke stubdom's access to irq 
%d!\n",
+                            d->domain_id,
+                            irq);
                 destroy_irq(irq);
+            }
             break;
         }
     }
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index 3a3c158..de59e39 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -164,6 +164,15 @@ int physdev_unmap_pirq(domid_t domid, int pirq)
 
     pcidevs_lock();
     spin_lock(&d->event_lock);
+    if ( current->domain->target == d )
+    {
+        int irq = domain_pirq_to_irq(d, pirq);
+        if ( irq <= 0 || irq_deny_access(current->domain, irq) )
+            dprintk(XENLOG_G_ERR,
+                    "dom%d: can't revoke stubdom's access to irq %d!\n",
+                    d->domain_id,
+                    irq);
+    }
     ret = unmap_domain_pirq(d, pirq);
     spin_unlock(&d->event_lock);
     pcidevs_unlock();
-- 
git-series 0.9.1
