
[Xen-devel] HVM PV unmodified driver performance



Hi, 

(posted on xen-users but maybe this list is more appropriate?)

I have been testing the unmodified_drivers from xen-unstable on my FC6 machine 
and I have a couple of questions regarding the results. It seems that I only 
get accelerated network performance in one direction, namely sends from the HVM 
guest. I used iperf to benchmark performance between the HVM guest and the FC6 
Dom0 (rough iperf invocations are shown after the numbers):

HVM - No PV drivers
Sends:
  [ ID] Interval       Transfer     Bandwidth
  [  3]  0.0-10.0 sec  54.9 MBytes  46.0 Mbits/sec
Receives:
  [ ID] Interval       Transfer     Bandwidth
  [  4]  0.0- 8.2 sec  17.8 MBytes  18.3 Mbits/sec

HVM - with PV net driver
Sends:
  [ ID] Interval       Transfer     Bandwidth
  [  4]  0.0-10.0 sec   788 MBytes   660 Mbits/sec
Receives:
  [ ID] Interval       Transfer     Bandwidth
  [  3]  0.0-10.0 sec  8.52 MBytes  7.13 Mbits/sec
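
(For reference, the numbers above come from iperf runs roughly like the 
following; the IP address is just a placeholder and the exact options may 
have differed slightly:)

  # on the receiving end (Dom0 or guest, depending on direction)
  iperf -s
  # on the sending end, pointing at the receiver
  iperf -c 192.168.1.10 -t 10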

As you can see, the PV driver improves network performance when sending from 
the HVM guest (FC6 - 2.6.18), but if anything the receive/read performance is 
worse than with the ioemu rtl8139 driver. Is this expected behaviour? Does it 
matter that I'm running xen-3.0.3 but using the xen-unstable unmodified_drivers 
source? xen-unstable has support for building against the 2.6.18 kernel whereas 
3.0.3 does not. Is this start-up message normal: "netfront: device eth1 has 
copying receive path"? From what I've read, the PV drivers for Linux should 
accelerate performance in both directions...

Here's my vif config line:
  vif = [ 'bridge=xenbr0' , 'type=ioemu, bridge=xenbr0' ]

I boot a "diskless" FC6 image from the network using PXE (etherboot for the 
rtl8139) and then load the unmodified_drivers modules and bring up the network 
on eth1 (eth0 being the ioemu rtl8139). Am I doing anything wrong, or is this 
the expected performance?
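
For what it's worth, the load sequence looks roughly like this (module names 
are from memory and may not match your build exactly; eth1 is the netfront 
interface as above):

  insmod xen-platform-pci.ko   # Xen platform PCI device / PV plumbing
  insmod xenbus.ko             # xenbus support for the PV frontends
  insmod xennet.ko             # PV network frontend (netfront)
  ifup eth1                    # bring up the PV interface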

Also, I tried building the unmodified_drivers against both 32-bit and 64-bit 
guest FC6 kernels/images. They work fine with a 64-bit Dom0 and a 64-bit HVM 
guest, but with a 64-bit Dom0 and a 32-bit HVM guest the "xenbus.ko" module 
hangs on insmod - is this another known issue/limitation?
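
In case the build procedure matters, the modules were built along these 
lines (paths are illustrative; mkbuildtree is the script shipped in the 
unmodified_drivers/linux-2.6 directory):

  cd xen-unstable/unmodified_drivers/linux-2.6
  ./mkbuildtree
  make -C /lib/modules/$(uname -r)/build M=$(pwd) modules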

Any help or hints greatly appreciated!

Regards,

Daire
