
Re: [Xen-devel] [PATCH 0/3 V2] xen-netback: switch to NAPI + kthread 1:1 model



On 11/06/13 11:15, Wei Liu wrote:
On Tue, Jun 11, 2013 at 11:06:43AM +0100, David Vrabel wrote:
On 27/05/13 12:29, Wei Liu wrote:
* This is a xen-devel only post, since we have not reached consensus on
   what to add / remove in this new model. This series tries to be
   conservative about adding new features compared to V1.

This series implements NAPI + kthread 1:1 model for Xen netback.

This model
  - provides better scheduling fairness among vifs
  - is a prerequisite for implementing multiqueue for the Xen network driver
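To make the model concrete, here is a rough sketch of the 1:1 idea. This is
illustrative only, not the actual netback code: struct my_vif, my_vif_poll
and my_vif_kthread are made-up names. Each vif gets its own NAPI instance
for from-guest traffic and its own kernel thread for to-guest traffic,
rather than sharing a small pool of global threads.

#include <linux/netdevice.h>
#include <linux/kthread.h>
#include <linux/wait.h>
#include <linux/err.h>

struct my_vif {
	struct net_device *dev;
	struct napi_struct napi;	/* one NAPI context per vif */
	struct task_struct *task;	/* one kthread per vif */
	wait_queue_head_t wq;		/* kthread sleeps here until work arrives */
};

static int my_vif_poll(struct napi_struct *napi, int budget);
static int my_vif_kthread(void *data);

static int my_vif_start(struct my_vif *vif)
{
	/* Register the per-vif NAPI poll routine with the default weight. */
	netif_napi_add(vif->dev, &vif->napi, my_vif_poll, 64);
	napi_enable(&vif->napi);

	/* Deliberately not bound to any CPU: let the scheduler decide. */
	vif->task = kthread_create(my_vif_kthread, vif, "%s", vif->dev->name);
	if (IS_ERR(vif->task))
		return PTR_ERR(vif->task);
	wake_up_process(vif->task);
	return 0;
}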

The first two patches are groundwork for the third patch. The first
simplifies code in netback; the second can reduce the memory footprint if
we switch to the 1:1 model.

The third patch has the real meat:
  - make use of NAPI to mitigate interrupts (sketched below)
  - kthreads are no longer bound to CPUs, so that we can take
    advantage of the backend scheduler and trust it to do the right thing
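For reference, the interrupt mitigation follows the usual NAPI pattern,
sketched below. Again this is hypothetical, building on the struct from the
sketch above; my_vif_process_ring is a made-up placeholder for draining the
shared ring. The interrupt handler only schedules the poll routine, which
then processes up to 'budget' requests before re-enabling notifications
via napi_complete().

#include <linux/interrupt.h>

/* Hypothetical placeholder: consume up to 'budget' ring requests. */
static int my_vif_process_ring(struct my_vif *vif, int budget);

static irqreturn_t my_vif_interrupt(int irq, void *dev_id)
{
	struct my_vif *vif = dev_id;

	/* Defer the real work to NAPI context. */
	napi_schedule(&vif->napi);
	return IRQ_HANDLED;
}

static int my_vif_poll(struct napi_struct *napi, int budget)
{
	struct my_vif *vif = container_of(napi, struct my_vif, napi);
	int work_done = my_vif_process_ring(vif, budget);

	/* Ring drained: exit polling mode and re-arm notifications. */
	if (work_done < budget)
		napi_complete(napi);

	return work_done;
}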

Changes since V1:
  - No page pool in this version. Instead, the page tracking facility
    has been removed.

Andrew Bennieston has done some performance measurements with (I think)
the V1 series, and they show a significant decrease in the performance
of from-guest traffic, even with only two VIFs.

Andrew will be able to comment more on this.

Andrew, can you also make available your results for others to review?


My third series also has some simple performance figures attached.
Andrew, could you please have a look at those as well?

If you have time, could you try my third series? In that series the only
change that could affect performance is the new model itself, which should
help narrow the problem down.

Wei, I finally have the results from testing your V3 patches. They are available at:

http://wiki.xenproject.org/wiki/Xen-netback_NAPI_%2B_kThread_V3_performance_testing

This time, the base for the tests was linux-next rather than v3.6.11 (mostly to reduce the effort in backporting patches), so the results can't be directly compared to the V1 results. However, I still ran tests without, then with, your patches, so you should be able to see the direct effect of those patches.

The summary is that there is (as expected) no impact on the dom0 -> VM measurements, and the VM -> dom0 measurements are identical with and without the patches up to around 4 concurrently transmitting VMs, after which the original version outperforms the patched version. The difference becomes less pronounced as the number of TCP streams increases, though.

My conclusion from these results would be that your V3 patches have a fairly minimal performance impact, although they should improve _fairness_ (due to the kthread per VIF) on the transmit (into VM) pathway, and they simplify the handling of the receive (out of VM) scenario too.

In other news, it looks like the throughput in general has improved between 3.6 and -next :)

Cheers,
Andrew



 

