
[Xen-users] Poor network performance - caused by inadequate vif configuration?


  • To: <xen-users@xxxxxxxxxxxxxxxxxxx>
  • From: "Schmidt, Werner (Werner)" <wernerschmidt@xxxxxxxxx>
  • Date: Thu, 24 May 2007 15:16:29 +0200
  • Delivery-date: Thu, 24 May 2007 06:15:01 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>
  • Thread-index: AceeBb2bmMX3nMxyQ76CGYE3o6Z1bg==
  • Thread-topic: Poor network performance - caused by inadequate vif configuration?

All,

 

Similar to some earlier threads on this list and other Xen-related forums, I had problems with the network performance of my test system:

 

  • software base of dom0/domU: RHEL5 (Xen 3.0.3, Red Hat 2.6.18-8.el5xen SMP kernel)
  • IBM x306 servers with 3 GHz P4 (MT support); coupled via a Gigabit Ethernet switch
  • standard Xen bridging network configuration
  • test tool: iperf (example invocation below)
  • Xen domUs running in PV mode (the P4 does not support VT)
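
For reference, the throughput numbers below were taken with iperf invocations roughly along these lines (the receiver host name is a placeholder, not one of my actual machines):

    # on the receiving machine
    iperf -s

    # on the sending machine
    iperf -c <receiver-host>

    # bidirectional test, both TCP streams at once
    iperf -c <receiver-host> -d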

 

The data transfer rates measured with iperf were as follows:

  • dom0/machine 1 => dom0/machine 2: ~800 Mbit/s
  • domU/machine 1 => dom0/machine 2: ~700 Mbit/s
  • dom0/machine 1 => domU/machine 2: ~40 Mbit/s

 

This poor result for the last test case, and the difference between test cases 2 and 3, remained more or less constant across various configurations of the test systems:

  • credit or sedf scheduler
  • various scheduler configurations
  • copy mode and flipping mode of the netfront driver

 

A detailed analysis with tcpdump/wireshark showed that data must be getting lost within the TCP stream, resulting in TCP retransmissions and therefore pauses in the data transfer (in one test case I saw transmission gaps of 200 ms caused by TCP retransmissions occurring every 230 ms, which explains the breakdown of the data rate).
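
(A capture for this kind of analysis can be made with a standard tcpdump invocation of this kind and then inspected in wireshark; the interface name, file name and iperf port below are just examples:)

    # capture full packets of the iperf stream on the physical interface
    tcpdump -i eth0 -s 0 -w iperf-test.pcap port 5001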

 

Now, while looking for the cause of the data losses (this was why I checked the copy mode of the netfront driver), I noticed that the txqueuelen parameter of the vif devices connecting the bridge to the domUs was set to '32' (I have no idea where and for what reason this value is set initially; note that txqueuelen for physical Ethernet devices defaults to 1000).

After changing this parameter to higher values (128-512) I got much higher performance in test case 3: TCP throughput now reaches 700 Mbit/s and higher, and using iperf's -d option (TCP data streams in both directions) now gave combined values of more than 900 Mbit/s.
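
If you want to try this yourself: the vif name below is only an example (the numbering depends on the domain ID), and the value can be changed at run time with either of the usual tools:

    # net-tools style
    ifconfig vif1.0 txqueuelen 512

    # or with iproute2
    ip link set dev vif1.0 txqueuelen 512

Note that the vif device is recreated each time the domU starts, so the setting has to be re-applied after every restart (e.g. from the vif hotplug script).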

 

I'll also evaluate the parameter settings for the other test cases to find the best values, but I think a suitable setting of the txqueuelen parameter of the vif interfaces will be the most important factor for getting good network performance in a configuration like the one described above (comparable to other virtualization solutions).

  

 

Regards

Werner

 

 

 

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users

 

