
Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows



On Sat, Mar 01, 2008 at 09:21:24AM -0500, jim burns wrote:
> On Thursday 28 February 2008 06:07:00 am Pasi Kärkkäinen wrote:
> > I can recommend iperf too.
> >
> > Make sure you use the same iperf version everywhere.
> 
> Ok, here's my results.
> 
> Equipment: core duo 2300, 1.66 GHz each, sata drive configured for UDMA/100
> System: fc8 32bit pae, xen 3.1.2, xen.gz 3.1.0-rc7, dom0 2.6.21
> Tested hvm: XP Pro SP2, 2002
> 

What NIC do you have? What driver, and what version of the driver? 
Check with "ethtool -i ethX".

Did you try disabling checksum offloading? "ethtool -K ethX tx off"
Try that on dom0 and/or on the domU. Maybe also try "ethtool -K ethX tso off".

Does your ethX interface have errors? Check with "ifconfig ethX".

Do you have tcp retransmits? Check with "netstat -s".
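
For example, something like this on dom0 (and in the Linux domU). The eth0
name is just an assumption, use whatever interface you actually have:

  # driver name and version
  ethtool -i eth0
  # interface error/drop counters
  ifconfig eth0 | grep -E 'errors|dropped'
  # tcp retransmit counters
  netstat -s | grep -i retrans
  # disable tx checksum offload and TSO for testing
  ethtool -K eth0 tx off
  ethtool -K eth0 tso off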

> Method:
> 
> The version tested was 1.7.0, to avoid having to apply the kernel patch that 
> comes with 2.0.2. The binaries downloaded were from the project homepage 
> http://dast.nlanr.net/Projects/Iperf/#download. For linux, I chose the 'Linux 
> libc 2.3' binary, and (on fc8 at least) I still had to install the 
> compat-libstdc++-33 package to get it to run.
>
> The server/listening side was always the dom0, invoked with 'iperf -s'. The 
> first machine is a linux fc8 pv domu, the second is another machine on my 
> subnet with a 100 Mbps nic pipeline in between, and the rest are the various 
> drivers on a winxp hvm. The invoked command was 'iperf -c dom0-hostname -t 
> 60'. '-t 60' sets the runtime to 60 secs. I used the default buffer size 
> (8k), mss/mtu, and window size (which actually varies between the client and 
> the server). I averaged 3 tcp runs.
> 

I think it might be a good idea to "force" a good/big TCP window size to get
comparable results.. 
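
For example, iperf's -w option sets the socket buffer / TCP window size on
both ends. Something like this (256K is just a starting guess, tune it):

  # on dom0
  iperf -s -w 256K
  # on the client
  iperf -c dom0-hostname -w 256K -t 60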

> For the udp tests, the default bandwidth is 1 Mbps (add the '-b 1000000' flag 
> to the command above). I added or subtracted a 0 till I got a packet loss 
> percentage of more than 0% and less than 5%, or an observed throughput 
> significantly less than the request (in other words, a stress test). In the 
> table below, 'udp Mbps' is the observed rate, and '-b Mbps' is the requested rate. 
> (The server has to be invoked with 'iperf -s -u'.)
> 
> machine  | tcp Mbps| udp Mbps| -b Mbps | udp packet loss
> fc8 domu |   1563  |     48.6|     100 |    .08%
> on subnet|     79.8|      5.4|      10 |   3.5%
> gplpv    |     19.8|      2.0|      10 |   0.0%
> realtek  |      9.6|      1.8|      10 |   0.0%
> 
> Conclusions: The pv domu tcp rate is a blistering 1.5 Gbps, showing that a 
> software nic *can* be even faster than a 100 Mbps hardware nic, at least for 
> pv. The machine on the same subnet ('on subnet') achieved 80% of the max rate 
> supported by the hardware. Presumably, since the udp rates are consistently 
> less than the tcp ones, there were a lot of tcp retransmits. gplpv is twice as 
> fast as realtek for tcp, about the same for udp. 19.8/8 = ~2.5 MBps, which is 
> about the rate I was getting with my domu to dom0 file copies. I don't expect 
> pv data rates from an hvm, but it should be interesting to see how much 
> faster James & Andy can get this to go. Btw, this was gplpv 0.8.4.
> 
> Actually, pretty good work so far guys!
> 

Thanks for the benchmarks!

I find it weird that you get "only" 80 Mbit/sec from the physical network to
dom0.. You should easily be able to reach near 100 Mbit/sec from/to the LAN.

And the UDP results are really weird.. something is causing a lot of errors.. 

Some things to check (example commands after the list):

- txqueuelen of the ethX device. I guess 1000 is the default nowadays.. try
  bigger values too. This applies to dom0 and to the Linux domU.

- txqueuelen of the vifX.Y devices on dom0. The default has been really small,
  so make sure to configure that bigger too.. This applies to both Linux
  and Windows VMs. 

- Check the sysctl net.core.netdev_max_backlog setting.. it should be at least
  1000, possibly even more.. This applies to dom0 and to the Linux domU.
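
Roughly something like this (the values and the vif1.0 name are just examples,
adjust for your setup):

  # dom0 and Linux domU: the physical/virtual ethernet device
  ifconfig eth0 txqueuelen 2000
  # dom0: the backend vif of the guest
  ifconfig vif1.0 txqueuelen 1000
  # dom0 and Linux domU: check and raise the backlog
  sysctl net.core.netdev_max_backlog
  sysctl -w net.core.netdev_max_backlog=2000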

-- Pasi

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

