
[Xen-devel] large packet support in netfront driver and guest network throughput


  • To: "xen-devel@xxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxx>
  • From: Anirban Chakraborty <abchak@xxxxxxxxxxx>
  • Date: Thu, 12 Sep 2013 17:53:02 +0000
  • Accept-language: en-US
  • Delivery-date: Thu, 12 Sep 2013 17:53:36 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>
  • Thread-index: AQHOr+Dtab7v4C01XU6hPj6YULek5Q==
  • Thread-topic: large packet support in netfront driver and guest network throughput

Hi All,

I am sure this has been answered somewhere on the list in the past, but I can't 
find it. I was wondering whether the Linux guest netfront driver has GRO support. 
tcpdump in the guest shows packets coming in at 1500 bytes, although eth0 in dom0 
and the vif corresponding to the Linux guest in dom0 show that they receive large 
packets:

In dom0:
eth0      Link encap:Ethernet  HWaddr 90:E2:BA:3A:B1:A4  
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
tcpdump -i eth0 -nnvv -s 1500 src 10.84.20.214
17:38:25.155373 IP (tos 0x0, ttl 64, id 54607, offset 0, flags [DF], proto TCP 
(6), length 29012)
    10.84.20.214.51041 > 10.84.20.213.5001: Flags [.], seq 276592:305552, ack 
1, win 229, options [nop,nop,TS val 65594025 ecr 65569225], length 28960

vif4.0    Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF  
          UP BROADCAST RUNNING NOARP PROMISC  MTU:1500  Metric:1
tcpdump -i vif4.0 -nnvv -s 1500 src 10.84.20.214
17:38:25.156364 IP (tos 0x0, ttl 64, id 54607, offset 0, flags [DF], proto TCP 
(6), length 29012)
    10.84.20.214.51041 > 10.84.20.213.5001: Flags [.], seq 276592:305552, ack 
1, win 229, options [nop,nop,TS val 65594025 ecr 65569225], length 28960


In the guest:
eth0      Link encap:Ethernet  HWaddr CA:FD:DE:AB:E1:E4  
          inet addr:10.84.20.213  Bcast:10.84.20.255  Mask:255.255.255.0
          inet6 addr: fe80::c8fd:deff:feab:e1e4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
tcpdump -i eth0 -nnvv -s 1500 src 10.84.20.214
10:38:25.071418 IP (tos 0x0, ttl 64, id 15074, offset 0, flags [DF], proto TCP 
(6), length 1500)
    10.84.20.214.51040 > 10.84.20.213.5001: Flags [.], seq 17400:18848, ack 1, 
win 229, options [nop,nop,TS val 65594013 ecr 65569213], length 1448

Is the packet segmented into MTU-sized frames when it is transferred from netback 
to netfront? Is GRO not supported in the guest?
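
For what it's worth, here is how I intend to verify the offload settings on the 
guest side; this is just a sketch, assuming ethtool is available in the guest and 
works against the netfront eth0:

# show the current offload state in the guest
ethtool -k eth0 | grep -E 'generic-receive-offload|generic-segmentation-offload|tcp-segmentation-offload'

# try forcing GRO on, in case it is merely disabled rather than unsupported
ethtool -K eth0 gro on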

I am seeing extremely low throughput on a 10 Gb/s link. Two Linux guests (CentOS 
6.4 64-bit, 4 VCPUs and 4 GB of memory) are running on two different XenServer 
6.1 hosts, and an iperf session between them shows at most 3.2 Gbps. 
I am using the Linux bridge as the network backend switch. Dom0 is configured 
with 2940 MB of RAM.
In most cases, after a few runs the throughput drops to ~2.2 Gbps. top shows that 
the netback thread in dom0 is at about 70-80% CPU utilization. I have checked the 
dom0 network configuration and there is no QoS policy in place, etc. 
So my question is: is PCI passthrough the only option for getting line rate in the 
guests? Is there any benchmark of the maximum throughput achievable in guests 
using PV drivers and without PCI passthrough? Also, what could be the reason for 
the throughput consistently dropping in the guests (from ~3.2 to ~2.2 Gbps) after 
a few runs of iperf? The iperf invocation I am using is sketched below.
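
For reference, the iperf runs are essentially the following, plus the check I use 
for the netback thread (reconstructed from the tcpdump output above, so treat the 
exact options as approximate):

# on the receiving guest (10.84.20.213)
iperf -s -p 5001

# on the sending guest (10.84.20.214)
iperf -c 10.84.20.213 -p 5001 -t 60

# in dom0, to check the netback thread's CPU usage during a run
top -b -n 1 | grep -i netback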

Any pointers would be highly appreciated.

thanks,
Anirban 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

