
Re: [Xen-devel] xhci_hcd interrupt affinity in Dom0/DomU limited to single interrupt





From: Jan Beulich <JBeulich@xxxxxxxx>
To: Justin Acker <ackerj67@xxxxxxxxx>
Cc: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
Sent: Thursday, September 3, 2015 6:15 AM
Subject: Re: [Xen-devel] xhci_hcd interrupt affinity in Dom0/DomU limited to single interrupt

(re-adding xen-devel)

>>> On 02.09.15 at 19:17, <ackerj67@xxxxxxxxx> wrote:
>      From: Jan Beulich <jbeulich@xxxxxxxx>
>  Sent: Wednesday, September 2, 2015 4:58 AM
>>>> Justin Acker <ackerj67@xxxxxxxxx> 09/02/15 1:14 AM >>>
>> 00:14.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB xHCI Host Controller (rev 04) (prog-if 30 [XHCI])
>>    Subsystem: Dell Device 053e
>>    Flags: bus master, medium devsel, latency 0, IRQ 78
>>    Memory at f7f20000 (64-bit, non-prefetchable) [size=64K]
>>    Capabilities: [70] Power Management version 2
>>    Capabilities: [80] MSI: Enable+ Count=1/8 Maskable- 64bit+
>
> This shows that the driver could use up to 8 MSI IRQs, but chose to use
> just one. If this is the same under Xen and the native kernel, the driver
> likely doesn't know any better. If under native more interrupts are being
> used, there might be an issue with Xen-specific code in the kernel or
> hypervisor. We'd need to see details to be able to tell.
>
> Please let me know what details I should provide.
>
> Jan

Please, first of all, get your reply style fixed. Just look at the above
and tell me how a reader should figure out which parts of the text were
written by whom.

Together with other replies you sent, I first of all wonder whether
you've understood what you've been told: any interrupt delivered
via the event channel mechanism can't be delivered to more than
one CPU unless it is moved between them by a tool or manually.
When you set the affinity to more than one (v)CPU, the kernel will
pick one (usually the first) out of the provided set and bind the
event channel to that vCPU.
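The "pick one out of the provided set" behaviour can be sketched as follows. This is an illustrative model only, not the actual kernel code: it assumes "usually the first" means the lowest-numbered CPU in the requested mask (the mask value is hypothetical).

```shell
# Model of event-channel affinity selection: given a multi-CPU affinity
# mask, the kernel binds the event channel to a single vCPU -- here modeled
# as the lowest set bit of the mask.
mask=0xc        # hypothetical request: CPUs 2 and 3
m=$((mask))
first_cpu=0
while [ $(( (m >> first_cpu) & 1 )) -eq 0 ]; do
  first_cpu=$((first_cpu + 1))
done
echo "event channel bound to CPU $first_cpu"   # prints: event channel bound to CPU 2
```

So even though the mask requests two CPUs, only one vCPU ever receives the interrupt.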

> I am still confused as to whether any device, or in this case xhci_hcd,
> can use more than one CPU at any given time. My understanding, based on
> David's response, is that it cannot, due to the event channel mapping.
> The device interrupt can be pinned to a specific CPU by specifying the
> affinity. I was hoping there was a way to allow the driver's interrupt
> to be scheduled on more than one CPU at any given time.

As to, in the XHCI case, using multi-vector MSI: please tell us
whether the lspci output still left in context above was with a
kernel running natively or under Xen. In the former case, the
driver may need improving. In the latter case we'd need to see,
for comparison, the same output with a natively running kernel. If
it matches the Xen one, same thing (driver may need improving).
If it doesn't match, maximum-verbosity hypervisor and kernel logs
would be what we'd need to start with.
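A quick way to do that comparison is to extract the MSI vector count from the lspci capability line in each run. A minimal sketch (the capability line below is copied from the output in this thread; in practice you would feed it `lspci -v -s 00:14.0` output from the native and Xen boots):

```shell
# Extract the "used/supported" MSI vector counts from an lspci MSI
# capability line, to compare native vs. Xen runs of the same kernel.
lspci_line='Capabilities: [80] MSI: Enable+ Count=1/8 Maskable- 64bit+'
count=$(echo "$lspci_line" | sed -n 's/.*Count=\([0-9]*\/[0-9]*\).*/\1/p')
echo "$count"   # prints 1/8 -- one vector in use out of 8 supported
```

If both runs report the same `Count=` value, the limitation is in the driver rather than in Xen.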




Jan

The driver context above was from a native kernel. However, the driver appears to load the same in both cases.

Native kernel:

00:14.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB xHCI (rev 05) (prog-if 30 [XHCI])
    Subsystem: Intel Corporation 8 Series/C220 Series Chipset Family USB xHCI
    Flags: bus master, medium devsel, latency 0, IRQ 27
    Memory at f7e20000 (64-bit, non-prefetchable) [size=64K]
    Capabilities: [70] Power Management version 2
    Capabilities: [80] MSI: Enable+ Count=1/8 Maskable- 64bit+
    Kernel driver in use: xhci_hcd

With Dom0 loaded:

00:14.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB xHCI (rev 05) (prog-if 30 [XHCI])
    Subsystem: Intel Corporation 8 Series/C220 Series Chipset Family USB xHCI
    Flags: bus master, medium devsel, latency 0, IRQ 78
    Memory at f7e20000 (64-bit, non-prefetchable) [size=64K]
    Capabilities: [70] Power Management version 2
    Capabilities: [80] MSI: Enable+ Count=1/8 Maskable- 64bit+
    Kernel driver in use: xhci_hcd



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
