
Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows

Scott McKenzie wrote:
On Sat, 26 Apr 2008 16:49:59 +1000, "James Harper"
<james.harper@xxxxxxxxxxxxxxxx> wrote:
Here's the latest version. Took me ages to get to the bottom of what
turned out to be a pretty simple problem - Windows can give us more
pages in a packet than Linux can handle, but Linux (netback) doesn't
complain about it; it just creates corrupt packets :(

Download from http://www.meadowcourt.org/WindowsXenPV-0.8.9.zip

I'll probably do another release very shortly, mainly to reduce memory
consumption on the tx side so that more interfaces can run at once.

From the testing I've done, on a UP Windows DomU, with iperf options '-l
1M -w 1M', with the iperf server running in Dom0, I get TX throughput of
about 1.5 Gbits/second and RX throughput of about 0.5 Gbits/second. When I
tried it under SMP it worked, but the performance was horrible. Probably
best if you don't run it under SMP for the moment :)
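For anyone wanting to reproduce the numbers above, the setup described corresponds to an iperf invocation along these lines (a sketch only: the Dom0 address is a placeholder, and this assumes the classic iperf 2.x option syntax, where -l sets the read/write buffer length and -w the TCP window size):

```shell
# In Dom0: start the iperf server with a 1 MB TCP window
iperf -s -w 1M

# In the Windows DomU: TX test (DomU -> Dom0), 1 MB buffers and window
iperf -c <dom0-ip> -l 1M -w 1M

# Add -r to run the reverse (RX) direction afterwards
iperf -c <dom0-ip> -l 1M -w 1M -r
```

Replace <dom0-ip> with the Dom0 interface address visible from the DomU.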

I did have a test performance of 2.5 Gbits/second, but now that I have to
copy the Windows buffers into my own buffers to reduce page usage, I
seem to only be able to get about 1.5 Gbits/second out of it... This kind
of makes sense, given that DomU to Dom0 network performance is going to
be CPU and memory bandwidth bound.


Hi James

I've tested this release on my system (fresh install) and I'm still getting
the duplicate disk problem when I boot with the /gplpv option.

There has been some talk lately that this may be a fault of the Red Hat
kernel.  FWIW I'm running CentOS 5.1 64bit, kernel is


I've just installed openSUSE on my system to test the drivers with their kernel and Xen version. I took a copy of my Windows HVM, booted it, installed 0.8.9, rebooted without /gplpv, rebooted with /gplpv and I had two disk devices appearing in device manager. So it doesn't look like it's the dom0 kernel that's causing this problem.

Xen-users mailing list


