
Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP fix





On 29/01/14 10:56, Oleksandr Tyshchenko wrote:
Hello all,

I just recalled a hack we created when we needed to route a HW IRQ to a domU.

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 9d793ba..d0227b9 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -989,8 +989,6 @@ static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,

          LOG(DEBUG, "dom%d irq %d", domid, irq);

-        ret = irq >= 0 ? xc_physdev_map_pirq(CTX->xch, domid, irq, &irq)
-                       : -EOVERFLOW;
          if (!ret)
              ret = xc_domain_irq_permission(CTX->xch, domid, irq, 1);
          if (ret < 0) {
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 2e4b11f..b54c08e 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -85,7 +85,7 @@ int domain_vgic_init(struct domain *d)
      if ( d->domain_id == 0 )
          d->arch.vgic.nr_lines = gic_number_lines() - 32;
      else
-        d->arch.vgic.nr_lines = 0; /* We don't need SPIs for the guest */
+        d->arch.vgic.nr_lines = gic_number_lines() - 32; /* We do need SPIs for the guest */

      d->arch.vgic.shared_irqs =
          xzalloc_array(struct vgic_irq_rank, DOMAIN_NR_RANKS(d));
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 75e2df3..ba88901 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -29,6 +29,7 @@
  #include <asm/page.h>
  #include <public/domctl.h>
  #include <xsm/xsm.h>
+#include <asm/gic.h>

  static DEFINE_SPINLOCK(domctl_lock);
  DEFINE_SPINLOCK(vcpu_alloc_lock);
@@ -782,8 +783,11 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
              ret = -EINVAL;
          else if ( xsm_irq_permission(XSM_HOOK, d, pirq, allow) )
              ret = -EPERM;
-        else if ( allow )
-            ret = pirq_permit_access(d, pirq);
+        else if ( allow ) {
+            struct dt_irq irq = {pirq + NR_LOCAL_IRQS,0};
+            ret = pirq_permit_access(d, irq.irq);
+            gic_route_irq_to_guest(d, &irq, "");
+        }
          else
              ret = pirq_deny_access(d, pirq);
      }

It seems the patch above can violate the logic of routing physical IRQs only to CPU0.

I forgot about the smp_processor_id() in gic_route_irq_to_guest(). As this function is only called (in upstream) while dom0 is being built, only CPU0 is used.

In gic_route_irq_to_guest() we call gic_set_irq_properties(), where one of the parameters is cpumask_of(smp_processor_id()). But in this part of the code the function can be executed on CPU1, and as a result the wrong value could be written into the target CPU mask.
Please confirm my assumption.
If I am right, we have to add basic HW IRQ routing to DomU in the right way.
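For reference, the call in question looks roughly like this inside gic_route_irq_to_guest() (a sketch from memory, not verbatim; the exact arguments may differ):

    /* Sketch only: the target CPU mask is derived from whichever
     * physical CPU happens to be executing gic_route_irq_to_guest(),
     * not from the CPU where the guest's VCPU actually runs. */
    gic_set_irq_properties(irq->irq, level /* trigger type from the dt_irq */,
                           cpumask_of(smp_processor_id()),
                           GIC_PRI_IRQ);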

With the current implementation, the IRQ will be routed to the CPU which made the hypercall. This CPU could be different from the CPU where the domU (assuming we have only 1 VCPU) is running. I think for both dom0 and domU, in case the VCPU is pinned, we should use the cpumask of this VCPU.
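Something along these lines is what I have in mind, as a sketch only (the single-VCPU assumption and the use of the vcpu's cpu_affinity field are mine, not a tested patch):

    /* Sketch only: prefer the pinned VCPU's affinity mask over the CPU
     * that happens to execute the hypercall. Assumes a single-VCPU
     * guest; a real patch would also have to cope with the VCPU being
     * re-pinned or migrated later. */
    struct vcpu *v = d->vcpu[0];
    const cpumask_t *mask;

    if ( v != NULL && cpumask_weight(v->cpu_affinity) == 1 )
        mask = v->cpu_affinity;                 /* VCPU pinned to one CPU */
    else
        mask = cpumask_of(smp_processor_id());  /* current behaviour */

    gic_set_irq_properties(irq->irq, level, mask, GIC_PRI_IRQ);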

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

