
Re: [Xen-devel] [PATCH 0/3 V2] xen-netback: switch to NAPI + kthread 1:1 model



On 11/06/13 11:15, Wei Liu wrote:
On Tue, Jun 11, 2013 at 11:06:43AM +0100, David Vrabel wrote:
On 27/05/13 12:29, Wei Liu wrote:
* This is a xen-devel-only post, since we have not reached consensus on
   what to add / remove in this new model. This series tries to be
   conservative about adding new features compared to V1.

This series implements the NAPI + kthread 1:1 model for Xen netback.

This model
  - provides better scheduling fairness among vifs
  - is a prerequisite for implementing multiqueue in the Xen network
    driver

The first two patches are groundwork for the third patch. The first one
simplifies code in netback; the second one can reduce the memory
footprint if we switch to the 1:1 model.

The third patch has the real meat:
  - make use of NAPI for interrupt mitigation
  - kthreads are no longer bound to CPUs, so that we can take
    advantage of the backend scheduler and trust it to do the right
    thing (a rough sketch of the resulting structure follows below)
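
A minimal sketch of the shape of that model, for readers of the
archive. All names here (xenvif_sketch, xenvif_poll, xenvif_tx_thread)
are hypothetical placeholders and the bodies are stubs; this is not
the code from the patches themselves:

#include <linux/err.h>
#include <linux/errno.h>
#include <linux/kthread.h>
#include <linux/netdevice.h>
#include <linux/sched.h>

/* Sketch only: one NAPI instance plus one unbound kthread per vif. */
struct xenvif_sketch {
        struct net_device *dev;
        struct napi_struct napi;        /* polled ring processing */
        struct task_struct *task;       /* per-vif kernel thread */
};

static int xenvif_poll(struct napi_struct *napi, int budget)
{
        int work_done = 0;

        /* Placeholder: consume up to 'budget' entries from the ring. */
        if (work_done < budget)
                napi_complete(napi);    /* idle: re-enable event delivery */
        return work_done;
}

static int xenvif_tx_thread(void *data)
{
        struct xenvif_sketch *vif = data;

        if (!vif)
                return -EINVAL;

        while (!kthread_should_stop()) {
                /* Placeholder: a real thread would sleep on a waitqueue
                 * and process this vif's ring work when woken. */
                schedule_timeout_interruptible(HZ);
        }
        return 0;
}

static int xenvif_sketch_init(struct xenvif_sketch *vif)
{
        netif_napi_add(vif->dev, &vif->napi, xenvif_poll, 64);
        napi_enable(&vif->napi);

        /* Deliberately no kthread_bind(): the thread stays unbound so
         * the scheduler can place it on any CPU. */
        vif->task = kthread_create(xenvif_tx_thread, vif, "vif-sketch");
        if (IS_ERR(vif->task))
                return PTR_ERR(vif->task);
        wake_up_process(vif->task);
        return 0;
}

The key points are one napi_struct and one task_struct per vif, and
the absence of kthread_bind(), which leaves thread placement entirely
to the scheduler.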

Changes since V1:
  - No page pool in this version. Instead, the page tracking facility
    is removed.

Andrew Bennieston has done some performance measurements with (I think)
the V1 series, and they show a significant decrease in the performance
of from-guest traffic, even with only two VIFs.

Andrew will be able to comment more on this.

Andrew, can you also make your results available for others to review?

Absolutely; there is now a page at http://wiki.xenproject.org/wiki/Xen-netback_NAPI_%2B_kThread_V1_performance_testing detailing the tests I performed and the results I saw, along with some summary text from my analysis.

Note that I also performed these tests without manually distributing IRQs across cores, and the performance was, as expected, rather poor. I didn't include those plots on the Wiki page since they don't really provide any new information.
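
(For context: distributing IRQs manually means writing a hex CPU mask
to /proc/irq/<n>/smp_affinity. A tiny illustration follows, with an
invented IRQ number and mask; the real numbers for a given vif would
come from /proc/interrupts:)

#include <stdio.h>

int main(void)
{
        /* Pin hypothetical IRQ 128 to CPU 1 (mask 0x2). Needs root. */
        FILE *f = fopen("/proc/irq/128/smp_affinity", "w");

        if (!f) {
                perror("fopen");
                return 1;
        }
        fprintf(f, "2\n");      /* hex mask: bit 1 set == CPU 1 */
        return fclose(f) ? 1 : 0;
}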


In my third series there are also some simple performance figures
attached. Andrew, could you please have a look at those as well?

I had a look at those; I think they agree with my tests where there is overlap. The tests I performed were repeated a number of times, covered a broader range of scenarios, and have associated error bars, which provide a measure of the variability between runs (as well as an indication of whether differences between tests are statistically significant).

The error bars can also be interpreted in terms of fairness: smaller error bars mean that all TCP streams across all VIFs attain similar throughput to one another; larger error bars mean that there is substantial variation from one stream to another, e.g. because a stream or VIF is starved of resources.
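
To make that reading concrete, here is an illustrative calculation.
The throughput samples are invented, and Jain's fairness index is just
one common way to collapse the same spread into a single number; it is
not a metric taken from the wiki page:

#include <math.h>
#include <stdio.h>

int main(void)
{
        /* Invented per-stream throughputs in Mbit/s. */
        double tput[] = { 940.0, 915.0, 880.0, 660.0 };
        int n = sizeof(tput) / sizeof(tput[0]);
        double sum = 0.0, sumsq = 0.0;

        for (int i = 0; i < n; i++) {
                sum += tput[i];
                sumsq += tput[i] * tput[i];
        }

        double mean = sum / n;
        /* Sample standard deviation: the "error bar" around the mean. */
        double sd = sqrt((sumsq - n * mean * mean) / (n - 1));
        /* Jain's index: 1.0 == perfectly fair, 1/n == one stream wins. */
        double jain = (sum * sum) / (n * sumsq);

        printf("mean %.1f  stddev %.1f  Jain %.3f\n", mean, sd, jain);
        return 0;
}

A starved stream drags Jain's index down and inflates the error bar at
the same time, which is why the two readings line up.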

If you have time, could you try my third series? In that series the
only possible performance impact is the new model itself, which should
narrow the problem down.

Wei.

I am going to test the V3 patches as soon as I get the time; hopefully later this week, or early next week. I'll post the results once I have them.

Andrew.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

