
Re: [Xen-devel] xhci_hcd interrupt affinity in Dom0/DomU limited to single interrupt





From: Ian Campbell <ian.campbell@xxxxxxxxxx>
To: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>; Justin Acker <ackerj67@xxxxxxxxx>
Cc: "boris.ostrovsky@xxxxxxxxxx" <boris.ostrovsky@xxxxxxxxxx>; "xen-devel@xxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxx>
Sent: Wednesday, September 2, 2015 9:49 AM
Subject: Re: [Xen-devel] xhci_hcd interrupt affinity in Dom0/DomU limited to single interrupt

On Wed, 2015-09-02 at 08:53 -0400, Konrad Rzeszutek Wilk wrote:
> On Tue, Sep 01, 2015 at 11:09:38PM +0000, Justin Acker wrote:
> >
> >      From: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
> >  To: Justin Acker <ackerj67@xxxxxxxxx>
> > Cc: "xen-devel@xxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxx>;
> > boris.ostrovsky@xxxxxxxxxx
> >  Sent: Tuesday, September 1, 2015 4:56 PM
> >  Subject: Re: [Xen-devel] xhci_hcd interrupt affinity in Dom0/DomU
> > limited to single interrupt
> >   
> > On Tue, Sep 01, 2015 at 05:39:46PM +0000, Justin Acker wrote:
> > > Taking this to the dev list from users.
> > >
> > > Is there a way to force or enable pirq delivery to a set of cpus,
> > > as opposed to a device being assigned a single pirq, so that its
> > > interrupts can be distributed across multiple cpus? I believe the
> > > device drivers do support multiple queues when run natively without
> > > the Dom0 loaded. The device in question is the xhci_hcd driver, for
> > > which I/O transfers seem to be slowed when the Dom0 is loaded. The
> > > behavior seems to pass through to the DomU if passthrough is
> > > enabled. I found some similar threads, but most relate to Ethernet
> > > controllers. I tried the x2apic and x2apic_phys dom0 kernel
> > > arguments, but neither distributed the pirqs. Based on my reading
> > > about IRQs under Xen, I think pinning the pirqs to cpu0 is done
> > > to avoid an interrupt storm. I tried irqbalance and, when
> > > configured/adjusted, it will balance individual pirqs, but not
> > > multiple interrupts.
> >
> > Yes. You can do it with smp affinity:
> >
> > https://cs.uwaterloo.ca/~brecht/servers/apic/SMP-affinity.txt
> > Yes, this does allow for assigning a specific interrupt to a single
> > cpu, but it will not spread the interrupt load across a defined group
> > or all cpus. Is it possible to define a range of CPUs or spread the
> > interrupt load for a device across all cpus as it does with a native
> > kernel without the Dom0 loaded?
>
> It should be. Did you try giving it a mask that puts the interrupts on
> all the CPUs (e.g. 0xf)?
> >
> > I don't follow the "behavior seems to pass through to the DomU if pass
> > through is enabled" ?
> > The device interrupts are limited to a single pirq if the device is
> > used directly in the Dom0. If the device is passed through to a DomU -
> > i.e. the xhci_hcd controller - then the DomU cannot spread the
> > interrupt load across the cpus in the VM.
>
> Why? How are you seeing this? The method by which you use smp affinity
> should be exactly the same.
>
> And it looks to me that the device has a single pirq as well when booting
> as baremetal right?
>
> So the issue here is that you want to spread the interrupt delivery
> across all of the CPUs. The smp_affinity mask should do it. Did you
> try modifying it by hand (you may want to kill irqbalance when you do
> this, just to make sure it does not write its own values in)?
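Konrad's suggestion above can be sketched in shell as follows. The IRQ number (78, taken from Justin's Dom0 output later in the thread) is an assumption for illustration, and the privileged steps are left commented out:

```shell
#!/bin/sh
# Sketch of Konrad's suggestion: stop irqbalance so it cannot overwrite
# the mask, then write a multi-CPU affinity mask by hand. IRQ 78 is the
# Dom0 xhci_hcd pirq from later in this thread; adjust for your system.

# Build a hex affinity mask covering CPUs 0..n-1 (e.g. 4 CPUs -> "f",
# 8 CPUs -> "ff"; Konrad's 0xf example corresponds to four CPUs).
cpu_mask() {
    printf '%x\n' $(( (1 << $1) - 1 ))
}

# Privileged steps, shown commented out:
# systemctl stop irqbalance                  # or: killall irqbalance
# cpu_mask 8 > /proc/irq/78/smp_affinity     # allow delivery on CPUs 0-7
# cat /proc/irq/78/smp_affinity              # verify the kernel took it

cpu_mask 4    # prints: f
```

Note that the kernel may silently round the written mask down to a subset it can actually use, so reading the file back after writing is the only reliable check.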

It sounds, then, like the real issue is that under native, irqbalance is
writing smp_affinity values with potentially multiple bits set, while
under Xen it is only setting a single bit?

Justin, are the contents of /proc/irq/<IRQ>/smp_affinity for the IRQ in
question under native and Xen consistent with that supposition?



Ian, I think the mask is the same in both cases. With irqbalance enabled, the interrupts are mapped - seemingly at random - to various cpus, but only one cpu per interrupt in all cases.

With irqbalance disabled at boot, and the same kernel version used for Dom0 and baremetal:

With Dom0 loaded:
cat /proc/irq/78/smp_affinity
ff

Baremetal kernel:
cat /proc/irq/27/smp_affinity
ff
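An identical "ff" mask on both kernels only shows which CPUs are *allowed* to receive the interrupt; /proc/interrupts shows where deliveries actually land. A small sketch for counting the nonzero per-CPU columns of one /proc/interrupts line (the sample line and its chip name are illustrative, not taken from Justin's machine):

```shell
#!/bin/sh
# An "ff" smp_affinity permits delivery on CPUs 0-7, but it does not
# prove the interrupts are being spread. /proc/interrupts holds per-CPU
# delivery counts; counting the nonzero columns on the xhci_hcd line
# shows how many CPUs are really servicing it.

# Count the nonzero per-CPU count columns on one /proc/interrupts line.
busy_cpus() {
    # $1 = one line, e.g. " 78: 1200 0 0 0 xen-pirq-msi xhci_hcd"
    echo "$1" | awk '{
        n = 0
        for (i = 2; i <= NF; i++) {
            # Stop at the first non-numeric field (the chip/driver name).
            if ($i !~ /^[0-9]+$/) break
            if ($i + 0 > 0) n++
        }
        print n
    }'
}

busy_cpus " 78:   1200      0      0      0   xen-pirq-msi   xhci_hcd"
# -> 1 (all delivery on CPU0, matching the symptom in this thread)
```

On a live system the equivalent check would be something like `busy_cpus "$(grep xhci /proc/interrupts)"` before and after changing the mask.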


Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
