
Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP fix



On Wed, 29 Jan 2014, Oleksandr Tyshchenko wrote:
> Hello all,
> 
> I just remembered a hack which we created
> when we needed to route HW IRQs to domU.
> 
> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> index 9d793ba..d0227b9 100644
> --- a/tools/libxl/libxl_create.c
> +++ b/tools/libxl/libxl_create.c
> @@ -989,8 +989,6 @@ static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,
> 
>          LOG(DEBUG, "dom%d irq %d", domid, irq);
> 
> -        ret = irq >= 0 ? xc_physdev_map_pirq(CTX->xch, domid, irq, &irq)
> -                       : -EOVERFLOW;
>          if (!ret)
>              ret = xc_domain_irq_permission(CTX->xch, domid, irq, 1);
>          if (ret < 0) {
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 2e4b11f..b54c08e 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -85,7 +85,7 @@ int domain_vgic_init(struct domain *d)
>      if ( d->domain_id == 0 )
>          d->arch.vgic.nr_lines = gic_number_lines() - 32;
>      else
> -        d->arch.vgic.nr_lines = 0; /* We don't need SPIs for the guest */
> +        d->arch.vgic.nr_lines = gic_number_lines() - 32; /* We do need SPIs for the guest */
> 
>      d->arch.vgic.shared_irqs =
>          xzalloc_array(struct vgic_irq_rank, DOMAIN_NR_RANKS(d));
> diff --git a/xen/common/domctl.c b/xen/common/domctl.c
> index 75e2df3..ba88901 100644
> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -29,6 +29,7 @@
>  #include <asm/page.h>
>  #include <public/domctl.h>
>  #include <xsm/xsm.h>
> +#include <asm/gic.h>
> 
>  static DEFINE_SPINLOCK(domctl_lock);
>  DEFINE_SPINLOCK(vcpu_alloc_lock);
> @@ -782,8 +783,11 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>              ret = -EINVAL;
>          else if ( xsm_irq_permission(XSM_HOOK, d, pirq, allow) )
>              ret = -EPERM;
> -        else if ( allow )
> -            ret = pirq_permit_access(d, pirq);
> +        else if ( allow ) {
> +            struct dt_irq irq = {pirq + NR_LOCAL_IRQS,0};
> +            ret = pirq_permit_access(d, irq.irq);
> +            gic_route_irq_to_guest(d, &irq, "");
> +        }
>          else
>              ret = pirq_deny_access(d, pirq);
>      }
> (END)
> 
> It seems this patch can violate the logic of routing physical IRQs
> only to CPU0.
> gic_route_irq_to_guest() calls gic_set_irq_properties(), where one of
> the parameters is cpumask_of(smp_processor_id()).
> But in this part of the code the function can be executed on CPU1,
> and as a result the wrong value would be set as the target CPU mask.
> 
> Please confirm my assumption.

That is correct.
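
To make the failure mode concrete, here is a minimal sketch of the
pattern under discussion (simplified from what is described above; the
level and priority arguments of gic_set_irq_properties are elided
details here, not the real signature):

    int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
                               const char *devname)
    {
        /* ... irq_desc and vGIC setup elided ... */

        /* The target mask is taken from whichever pCPU happens to run
         * this code.  On the dom0 boot path that is always CPU0, but
         * when called from the domctl hack above it can be CPU1, so the
         * SPI ends up targeting CPU1 while the rest of the code still
         * assumes guest IRQs only arrive on CPU0. */
        gic_set_irq_properties(irq->irq, level,
                               cpumask_of(smp_processor_id()), priority);

        return 0;
    }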


> If I am right, we have to add basic HW IRQ routing to domU in the right way.

We could add a cpumask parameter to gic_route_irq_to_guest. Or maybe
for now we could just hardcode the cpumask of cpu0 in
gic_route_irq_to_guest.
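
Untested sketch of the first option (the extra parameter and the
caller updates are an assumption of what such a patch could look like,
not a tested change):

    /* Let the caller choose the target pCPU mask explicitly. */
    int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
                               const char *devname, const cpumask_t *mask)
    {
        /* ... as before, except the mask is no longer derived from
         * smp_processor_id() ... */
        gic_set_irq_properties(irq->irq, level, mask, priority);

        return 0;
    }

Existing callers (dom0 setup, and the domctl hack above) would then
pass cpumask_of(0) to keep all guest IRQs on CPU0 for now.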

However, keep in mind that if you plan on routing SPIs to guests other
than dom0, receiving all the interrupts on cpu0 might not be great for
performance.

It is impressive how small this patch is, if this is all that is
needed to get IRQ routing to guests working.



> On Tue, Jan 28, 2014 at 9:25 PM, Oleksandr Tyshchenko
> <oleksandr.tyshchenko@xxxxxxxxxxxxxxx> wrote:
> > Hello Julien,
> >
> > Please see inline
> >
> >> gic_irq_eoi is only called for physical IRQs routed to the guest (e.g.
> >> hard drive, network, ...). As far as I remember, these IRQs are only
> >> routed to CPU0.
> >
> >
> > I understand.
> >
> > But I have created a debug patch to show the issue:
> >
> > diff --git a/xen/common/smp.c b/xen/common/smp.c
> > index 46d2fc6..6123561 100644
> > --- a/xen/common/smp.c
> > +++ b/xen/common/smp.c
> > @@ -22,6 +22,8 @@
> >  #include <xen/smp.h>
> >  #include <xen/errno.h>
> >
> > +int locked = 0;
> > +
> >  /*
> >   * Structure and data for smp_call_function()/on_selected_cpus().
> >   */
> > @@ -53,11 +55,19 @@ void on_selected_cpus(
> >  {
> >      unsigned int nr_cpus;
> >
> > +    locked = 0;
> > +
> >      ASSERT(local_irq_is_enabled());
> >
> >      if (!spin_trylock(&call_lock)) {
> > +
> > +    locked = 1;
> > +        printk("\n>>>>> %s: line: %d, cpu_mask_curr: %08lx, cpu_mask_sel: %08lx\n", __func__, __LINE__,
> > +                 cpumask_of(smp_processor_id())->bits[0], selected->bits[0]);
> > +
> >          if (smp_call_function_interrupt())
> >              return;
> > +
> >          spin_lock(&call_lock);
> >      }
> >
> > @@ -78,6 +88,10 @@ void on_selected_cpus(
> >
> >  out:
> >      spin_unlock(&call_lock);
> > +
> > +    if (locked)
> > +        printk("\n>>>>> %s: line: %d, cpu_mask_curr: %08lx, cpu_mask_sel: %08lx\n", __func__, __LINE__,
> > +            cpumask_of(smp_processor_id())->bits[0], selected->bits[0]);
> >  }
> >
> >  int smp_call_function_interrupt(void)
> > @@ -86,6 +100,10 @@ int smp_call_function_interrupt(void)
> >      void *info = call_data.info;
> >      unsigned int cpu = smp_processor_id();
> >
> > +     if (locked)
> > +        printk("\n>>>>> %s: line: %d, cpu_mask_curr: %08lx, cpu_mask_sel: %08lx\n", __func__, __LINE__,
> > +            cpumask_of(smp_processor_id())->bits[0], call_data.selected.bits[0]);
> > +
> >      if ( !cpumask_test_cpu(cpu, &call_data.selected) )
> >          return -EPERM;
> >
> > Our issue (simultaneous cross-interrupts) occurred during domU boot:
> >
> > [    7.507812] oom_adj 2 => oom_score_adj 117
> > [    7.507812] oom_adj 4 => oom_score_adj 235
> > [    7.507812] oom_adj 9 => oom_score_adj 529
> > [    7.507812] oom_adj 15 => oom_score_adj 1000
> > [    8.835937] PVR_K:(Error): PVRSRVOpenDCDeviceKM: no devnode matching index 0 [0, ]
> > (XEN)
> > (XEN) >>>>> on_selected_cpus: line: 65, cpu_mask_curr: 00000002, cpu_mask_sel: 00000001
> > (XEN)
> > (XEN) >>>>> smp_call_function_interrupt: line: 104, cpu_mask_curr: 00000002, cpu_mask_sel: 00000002
> > (XEN)
> > (XEN) >>>>> on_selected_cpus: line: 93, cpu_mask_curr: 00000001, cpu_mask_sel: 00000002
> > (XEN)
> > (XEN) >>>>> smp_call_function_interrupt: line: 104, cpu_mask_curr: 00000001, cpu_mask_sel: 00000001
> > (XEN)
> > (XEN) >>>>> on_selected_cpus: line: 93, cpu_mask_curr: 00000002, cpu_mask_sel: 00000001
> > (XEN)
> > (XEN) >>>>> smp_call_function_interrupt: line: 104, cpu_mask_curr: 00000002, cpu_mask_sel: 00000000
> > [   11.023437] usbcore: registered new interface driver usbfs
> > [   11.023437] usbcore: registered new interface driver hub
> > [   11.023437] usbcore: registered new device driver usb
> > [   11.039062] usbcore: registered new interface driver usbhid
> > [   11.039062] usbhid: USB HID core driver
> >
> >>
> >> Do you pass-through PPIs to dom0?
> >
> >
> > If I understand correctly, PPIs are IRQs 16 to 31.
> > So yes, I do. I see the timer IRQs and the maintenance IRQ, which are
> > routed to both CPUs.
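
For reference, these are the architectural GICv2 interrupt ID ranges,
which are also where the offset of 32 used above (gic_number_lines() -
32, NR_LOCAL_IRQS) comes from:

    /* GICv2 interrupt ID ranges:
     *   IDs  0-15: SGIs, software-generated, banked per CPU
     *   IDs 16-31: PPIs, private peripheral IRQs, banked per CPU
     *              (e.g. the timers and the maintenance interrupt)
     *   IDs 32+  : SPIs, shared peripheral IRQs, routable to any CPU */
    #define NR_LOCAL_IRQS  32   /* SGIs + PPIs */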
> >
> > And I have printed all IRQs that go through the gic_route_irq_to_guest
> > and gic_route_irq functions.
> > ...
> > (XEN) GIC initialization:
> > (XEN)         gic_dist_addr=0000000048211000
> > (XEN)         gic_cpu_addr=0000000048212000
> > (XEN)         gic_hyp_addr=0000000048214000
> > (XEN)         gic_vcpu_addr=0000000048216000
> > (XEN)         gic_maintenance_irq=25
> > (XEN) GIC: 192 lines, 2 cpus, secure (IID 0000043b).
> > (XEN)
> > (XEN) >>>>> gic_route_irq: irq: 25, cpu_mask: 00000001
> > (XEN)
> > (XEN) >>>>> gic_route_irq: irq: 30, cpu_mask: 00000001
> > (XEN)
> > (XEN) >>>>> gic_route_irq: irq: 26, cpu_mask: 00000001
> > (XEN)
> > (XEN) >>>>> gic_route_irq: irq: 27, cpu_mask: 00000001
> > (XEN)
> > (XEN) >>>>> gic_route_irq: irq: 104, cpu_mask: 00000001
> > (XEN) Using scheduler: SMP Credit Scheduler (credit)
> > (XEN) Allocated console ring of 16 KiB.
> > (XEN) VFP implementer 0x41 architecture 4 part 0x30 variant 0xf rev 0x0
> > (XEN) Bringing up CPU1
> > (XEN)
> > (XEN) >>>>> gic_route_irq: irq: 25, cpu_mask: 00000002
> > (XEN)
> > (XEN) >>>>> gic_route_irq: irq: 30, cpu_mask: 00000002
> > (XEN)
> > (XEN) >>>>> gic_route_irq: irq: 26, cpu_mask: 00000002
> > (XEN)
> > (XEN) >>>>> gic_route_irq: irq: 27, cpu_mask: 00000002
> > (XEN) CPU 1 booted.
> > (XEN) Brought up 2 CPUs
> > (XEN) *** LOADING DOMAIN 0 ***
> > (XEN) Populate P2M 0xc8000000->0xd0000000 (1:1 mapping for dom0)
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 61, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 62, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 63, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 64, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 66, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 67, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 153, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 105, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 106, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 102, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 137, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 138, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 113, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 69, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 70, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 71, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 72, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 73, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 74, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 75, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 76, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 77, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 78, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 79, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 112, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 145, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 158, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 86, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 82, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 83, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 84, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 85, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 187, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 186, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 188, cpu: 0
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 189, cpu: 0
> > (XEN) Loading kernel from boot module 2
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 0, irq: 57, cpu: 0
> > (XEN) Loading zImage from 00000000c0000040 to 00000000c8008000-00000000c8304eb0
> > (XEN) Loading dom0 DTB to 0x00000000cfe00000-0x00000000cfe03978
> > (XEN) Std. Loglevel: All
> > (XEN) Guest Loglevel: All
> > (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
> > (XEN) Freed 252kB init memory.
> > [    0.000000] /cpus/cpu@0 missing clock-frequency property
> > [    0.000000] /cpus/cpu@1 missing clock-frequency property
> > [    0.093750] omap_l3_noc ocp.2: couldn't find resource 2
> > [    0.265625] ahci ahci.0.auto: can't get clock
> > [    0.867187] Freeing init memory: 224K
> > Parsing config from /xen/images/DomUAndroid.cfg
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 105, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 61, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 62, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 63, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 64, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 65, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 66, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 67, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 153, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 69, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 70, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 71, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 72, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 73, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 74, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 75, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 76, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 77, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 78, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 79, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 102, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 137, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 138, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 88, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 89, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 93, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 94, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 92, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 152, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 97, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 98, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 123, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 80, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 115, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 118, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 126, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 128, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 91, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 41, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 42, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 48, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 131, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 44, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 45, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 46, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 47, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 40, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 158, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 146, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 60, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 85, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 87, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 133, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 142, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 143, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 53, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 164, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 51, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 134, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 50, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 108, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 109, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 124, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 125, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 110, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 112, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 68, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 101, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 99, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 100, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 103, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 132, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 56, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 135, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 136, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 139, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 58, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 140, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 141, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 49, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 54, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 55, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 144, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 32, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 33, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 34, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 35, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 36, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 39, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 43, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 52, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 59, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 120, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 90, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 107, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 119, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 121, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 122, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 129, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 130, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 151, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 154, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 155, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 156, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 160, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 162, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 163, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 157, cpu: 1
> > (XEN)
> > (XEN) >>>>> gic_route_irq_to_guest: domid: 1, irq: 173, cpu: 1
> > Daemon running with PID 569
> > ...
> >>
> >>
> >> --
> >> Julien Grall
> >
> >
> >
> >
> > --
> >
> > Name | Title
> > GlobalLogic
> > P +x.xxx.xxx.xxxx  M +x.xxx.xxx.xxxx  S skype
> > www.globallogic.com
> >
> > http://www.globallogic.com/email_disclaimer.txt
> 
> 
> 
> -- 
> 
> Name | Title
> GlobalLogic
> P +x.xxx.xxx.xxxx  M +x.xxx.xxx.xxxx  S skype
> www.globallogic.com
> 
> http://www.globallogic.com/email_disclaimer.txt
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxx
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

