
Re: [Xen-devel] [PATCH V4 0/3] xen-netback: switch to NAPI + kthread 1:1 model



On Thu, Aug 22, 2013 at 04:12:38PM +0100, Wei Liu wrote:
> On Tue, Aug 06, 2013 at 03:17:46PM +0100, Andrew Bennieston wrote:
> > On 06/08/13 14:29, David Vrabel wrote:
> > >On 06/08/13 14:16, Pasi Kärkkäinen wrote:
> > >>On Tue, Aug 06, 2013 at 10:06:00AM +0100, Wei Liu wrote:
> > >>>
> > >>>IRQs are distributed to 4 cores by hand in the new model, while in the
> > >>>old model vifs are automatically distributed to 4 kthreads.
> > >>>
> > >>
> > >>Hmm.. so with these patches applied, is it *required* to do manual
> > >>configuration in dom0 to get good performance?
> > >
> > >This should be irqbalanced's job.  The existing version doesn't do a
> > >good enough job yet though.  Andrew Bennieston may have more details.
> > >
> > >David
> > >
> > 
> > irqbalance 1.0.6 [1] includes a patch [2] from Wei Liu [3] that adds
> > support for balancing `xen-dyn-event' interrupts. When I compiled
> > this version and ran it under Xen(Server), I noticed that the
> > interrupts do indeed move between cores, but not necessarily in
> > what I would call an obvious or optimal way (e.g. several VIF
> > interrupts are grouped onto a single dom0 VCPU at times). I plan
> > to investigate this further when time permits.
> > 
> > I also noticed that, from time to time, the irqbalance process
> > disappears. I tracked this down to a segfault that occurs when a VM
> > shuts down and an IRQ disappears during one of irqbalance's periodic
> > rescans. I'm hoping to be able to narrow this down sufficiently to
> > identify the cause and ideally fix it, but I don't have a lot of time
> > to work on this at the moment.
> > 
> > As for the impact on Wei's patches, without irqbalance it would be
> > trivial to automatically assign (via a script, on VM start) the
> > interrupts for a particular VIF to a particular dom0 vCPU in a
> > round-robin fashion, just as VIFs were previously assigned to netback
> > kthreads. This would result in broadly the same performance as before,
> > while an improved irqbalanced should give better performance and
> > fairness when two different VIFs would otherwise be competing for the
> > same resources.
> > 
> 
> So can I conclude that this model doesn't incur a severe performance
> regression, and that its advantage in fairness makes it worth
> upstreaming?

Yes, I think so.

As for initial interrupt affinity settings, I think it'd be great
if a sensible default could be set in the kernel. I'd rather not
require toolstack work or external scripts to get a reasonable spread.

That said, I can understand why we might hesitate to do this in the
kernel, and there's a fair precedent set by scripts like
set_irq_affinity.sh [1].

On the other hand, the Hyper-V support for Linux guests assigns
affinity on a round-robin basis [2].
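
For what it's worth, here's a rough sketch of the kind of helper script
Andrew describes above: walk /proc/interrupts, pick out the VIF
event-channel IRQs, and pin each one to a dom0 vCPU in round-robin
order via /proc/irq/<N>/smp_affinity_list. It's only an illustration;
matching on "vif" in the interrupt name and treating every CPU visible
to the script as a dom0 vCPU are assumptions on my part, not anything
taken from the patches.

#!/usr/bin/env python3
# Hedged sketch only: spread xen-netback VIF IRQs round-robin across
# dom0 vCPUs. Assumes the VIF event-channel IRQs appear with "vif" in
# their /proc/interrupts action name, which may not hold everywhere.

import os
import re

def vif_irqs():
    """Yield (irq, name) for interrupt lines that mention a VIF."""
    with open("/proc/interrupts") as f:
        for line in f:
            m = re.match(r"\s*(\d+):", line)
            if m and "vif" in line:
                yield int(m.group(1)), line.split()[-1]

def pin(irq, cpu):
    """Pin one IRQ to one CPU via its smp_affinity_list file."""
    with open("/proc/irq/%d/smp_affinity_list" % irq, "w") as f:
        f.write(str(cpu))

def main():
    # Treat every CPU this process may run on as a dom0 vCPU (assumption).
    cpus = sorted(os.sched_getaffinity(0))
    for i, (irq, name) in enumerate(vif_irqs()):
        cpu = cpus[i % len(cpus)]
        print("IRQ %d (%s) -> CPU %d" % (irq, name, cpu))
        pin(irq, cpu)

if __name__ == "__main__":
    main()

Something along these lines could be run from the vif hotplug script at
VM start, though as I said above I'd prefer the kernel to pick a sane
default affinity itself.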

> If so I will post another series shortly with all comments addressed.

Please do.

> Wei.

[1] http://www.intel.com/content/dam/doc/application-note/82575-82576-82598-82599-ethernet-controllers-interrupts-appl-note.pdf
[2] http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=a11984

--msw
