Re: [Xen-devel] [PATCH] VT-d: bind IRQs to CPUs local to the node the IOMMU is on
>>> On 12.12.11 at 18:50, Keir Fraser <keir@xxxxxxx> wrote:
> On 12/12/2011 15:53, "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
>
>> This extends create_irq() to take a node parameter, allowing the
>> resulting IRQ to have its destination set to a CPU on that node right
>> away, which is more natural than having to post-adjust this (and
>> e.g. get a new IRQ vector assigned despite a fresh one having just
>> been obtained).
>>
>> All other callers of create_irq() pass NUMA_NO_NODE for the time being.
>
> I don't know about this one. Does the current 'inefficient' way things
> work really matter?

That depends on the NUMA interconnect. My general perspective on this is
that whenever NUMA locality information is available, we should aim to
make use of it (unless it conflicts with something else*). And there is
certainly some way to go in this respect.

* When coming up with this, I actually looked at whether the proximity
information now passed down by Dom0 for PCI devices could be used for
properly binding at least MSI interrupts. That didn't turn out to be
reasonable, since we are already setting the IRQ affinity to match the
pCPU the target vCPU is running on (which likely provides a greater
benefit, as it allows avoiding IPIs; the efficiency of this can certainly
be tweaked - I meanwhile think we might be overly aggressive here, but
that is in part related to the scheduler apparently migrating vCPUs more
often than really desirable). But for Xen-internal interrupts (like the
IOMMU ones here) this clearly ought to be done when possible. For the AMD
case I just wasn't able to spot whether locality information is
available.

Jan
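For context, a minimal sketch of the approach being discussed - not the
actual patch: create_irq() takes a node parameter so that the initial
vector/destination assignment already targets a CPU local to that node,
rather than assigning first and re-binding afterwards. The helper
find_unassigned_irq() is hypothetical, and node_to_cpumask(),
irq_to_desc(), assign_irq_vector() and the cpumask operations are only
assumed to behave like their Xen counterparts; exact signatures are not
taken from the patch.

    /*
     * Sketch only - not the actual patch.  find_unassigned_irq() is a
     * hypothetical helper; the other calls are assumed Xen-like.
     */
    int create_irq(nodeid_t node)
    {
        int irq, ret;
        struct irq_desc *desc;
        cpumask_t mask;

        irq = find_unassigned_irq();            /* hypothetical helper */
        if ( irq < 0 )
            return irq;
        desc = irq_to_desc(irq);

        if ( node == NUMA_NO_NODE )
            cpumask_copy(&mask, &cpu_online_map);
        else
        {
            /* Restrict the initial destination to online CPUs on 'node'. */
            cpumask_t node_cpus = node_to_cpumask(node);

            cpumask_and(&mask, &node_cpus, &cpu_online_map);
            if ( cpumask_empty(&mask) )         /* node has no online CPU */
                cpumask_copy(&mask, &cpu_online_map);
        }

        /*
         * Assign the vector against the node-local mask right away,
         * instead of assigning first and getting a new vector assigned
         * when the affinity is adjusted later.
         */
        ret = assign_irq_vector(irq, desc, &mask);
        if ( ret < 0 )
        {
            destroy_irq(irq);
            return ret;
        }

        return irq;
    }

Callers without locality information (everything other than the IOMMU
setup paths in the patch) would simply pass NUMA_NO_NODE and get the
previous behaviour.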