
[Xen-users] DomU Slow Networking?



Hello Xen Community!

I'm an everyday user of Xen.  I seem to be experiencing a strange issue
with the latest Xen release, 4.0.  I believe this is also present in
earlier versions.  I'd love some help pinpointing the cause.

My domU network communications with the outside world appear to be
slower than expected.

iperf from outside host <=> dom0 (normal "iperf -s" on dom0, "iperf -c
dom0" on outside host):
[  3]  0.0-10.1 sec  80.6 MBytes  67.0 Mbits/sec
and
[  4]  0.0-10.2 sec  80.6 MBytes  66.1 Mbits/sec

iperf from outside host <=> domU (ditto):
[  3]  0.0-10.0 sec  59.5 MBytes  49.7 Mbits/sec
and
[  4]  0.0-10.1 sec  59.5 MBytes  49.2 Mbits/sec

iperf from dom0 <=> domU (server in domU):
[  3]  0.0-10.0 sec  4.25 GBytes  3.65 Gbits/sec
and
[  5]  0.0-10.0 sec  4.25 GBytes  3.65 Gbits/sec
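As a quick sanity check on the figures above (assuming iperf's usual convention of binary MBytes for sizes and decimal Mbits/sec for rates), the reported rates recompute from the transfer sizes and intervals:

```python
# Recompute iperf's Mbits/sec column from its MBytes and interval columns.
# iperf reports sizes in binary MBytes (2**20 bytes) and rates in decimal
# Mbits/sec (10**6 bits), so the conversion is:
def iperf_rate_mbps(mbytes, seconds):
    return mbytes * 2**20 * 8 / seconds / 1e6

# (size in MBytes, interval in s, rate iperf printed in Mbits/sec)
runs = [
    (80.6, 10.1, 67.0),   # outside -> dom0
    (80.6, 10.2, 66.1),
    (59.5, 10.0, 49.7),   # outside -> domU
    (59.5, 10.1, 49.2),
]

for mbytes, secs, reported in runs:
    computed = iperf_rate_mbps(mbytes, secs)
    # displayed values are rounded, so agreement is only approximate
    print(f"{mbytes} MB / {secs} s -> {computed:.1f} Mbit/s (iperf said {reported})")
```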

So, since the bandwidth between domU and dom0 is very high and the
latency is low, why is it still faster for me to transfer files TO the
dom0 first and then later from the dom0 to the domU?  This is a real
problem for fileservers and the like.

In the real world, an rsync from outside -> dom0 gets around 80Mbps
sustained, and an rsync from outside -> domU gets 10Mbps.  An rsync
from outside -> dom0 -> domU via SSH tunneling still gets 10Mbps!
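To put numbers on why staging through dom0 pays off, here's a back-of-the-envelope calculation for a hypothetical 1 GiB file, using the sustained rates observed above:

```python
# Rough transfer-time comparison for a hypothetical 1 GiB file,
# using the sustained rates observed above (all rates in bits/sec).
FILE_BITS = 2**30 * 8          # 1 GiB

direct_domU = FILE_BITS / 10e6              # outside -> domU at ~10 Mbps
via_dom0 = (FILE_BITS / 80e6                # outside -> dom0 at ~80 Mbps,
            + FILE_BITS / 3.65e9)           # then dom0 -> domU at ~3.65 Gbps

print(f"direct to domU:  {direct_domU:.0f} s")   # ~859 s
print(f"staged via dom0: {via_dom0:.0f} s")      # ~110 s
```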

I'm using network-bridge.  The dom0 is running pv_ops Gentoo Linux x64,
kernel 2.6.32.11 (Jeremy's git tree), and Xen 4.0.  The domU is
2.6.31-gentoo-r10, a vanilla-ish pv_ops kernel.  Both servers have a
dedicated, reliable 100Mbps uplink.  The dom0 CPU is not loaded.  The
dom0 NIC is a BCM5714.  For the rsync numbers above, the destination
filesystem is on phy-backed disks separate from the dom0's disks,
although the dom0 is doing dm-crypt processing.  Nonetheless, dom0 CPU
usage is not significant and disk bandwidth doesn't seem to be the
bottleneck.

netstat -i on dom0:
Kernel Interface table
Iface   MTU Met     RX-OK RX-ERR RX-DRP RX-OVR     TX-OK TX-ERR  TX-DRP TX-OVR Flg
br0    1500 0   647940331      0      0      0 294288838      0       0      0 BMRU
eth0   1500 0     1621553      0      0      0         7      0       0      0 BMRU
eth1   1500 0   423554110      0     22      0 214062434      0       0      0 BMPRU
lo    16436 0       74883      0      0      0     74883      0       0      0 LRU
tap1.  1500 0     1005921      0      0      0   3661769      0       0      0 BMRU
vif1.  1500 0           0      0      0      0         0      0 2419332      0 BMPRU
vif16  1500 0       11980      0      0      0    443636      0     705      0 BMPRU
vif2.  1500 0    10899823      0      0      0  13239829      0      93      0 BMPRU
vif3.  1500 0     4145086      0      0      0   6586798      0      11      0 BMPRU
vif4.  1500 0          10      0      0      0   2418717      0     138      0 BMPRU

There are no drops or errors in there.  Well, OK, there's one interface
that's ALL TX drops; I don't know what's going on with that, but it's
a vif and this problem isn't limited to one particular domU.
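For reference, here's a small sketch (a hypothetical helper, not part of any Xen tooling) that pulls the TX-DRP column out of that netstat -i output; it confirms the drops are concentrated on the vifs:

```python
# Extract per-interface TX drop counts from a netstat -i style table.
# The embedded text is the dom0 output quoted above.
NETSTAT = """\
br0    1500 0 647940331 0  0 0 294288838 0       0 0 BMRU
eth0   1500 0   1621553 0  0 0         7 0       0 0 BMRU
eth1   1500 0 423554110 0 22 0 214062434 0       0 0 BMPRU
lo    16436 0     74883 0  0 0     74883 0       0 0 LRU
tap1.  1500 0   1005921 0  0 0   3661769 0       0 0 BMRU
vif1.  1500 0         0 0  0 0         0 0 2419332 0 BMPRU
vif16  1500 0     11980 0  0 0    443636 0     705 0 BMPRU
vif2.  1500 0  10899823 0  0 0  13239829 0      93 0 BMPRU
vif3.  1500 0   4145086 0  0 0   6586798 0      11 0 BMPRU
vif4.  1500 0        10 0  0 0   2418717 0     138 0 BMPRU
"""

# Columns: Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
tx_drops = {}
for line in NETSTAT.splitlines():
    fields = line.split()
    tx_drops[fields[0]] = int(fields[9])   # TX-DRP is the 10th column

for iface, drops in sorted(tx_drops.items()):
    if drops:
        print(f"{iface}: {drops} TX drops")
```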

Why would I be observing this behavior?  Will I have to use my dom0 as
a staging point forever?

Bryan Jacobs


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users

 

