
RE: [Xen-users] Re: Windows Disk performance


  • To: "Christian Tramnitz" <chris.ace@xxxxxxx>, <xen-users@xxxxxxxxxxxxxxxxxxx>
  • From: "James Harper" <james.harper@xxxxxxxxxxxxxxxx>
  • Date: Sun, 8 Jun 2008 23:19:54 +1000
  • Delivery-date: Sun, 08 Jun 2008 06:20:33 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>
  • Thread-index: AcjJO/g5yiRSbMYxTECvwlyDuw1UCAALHlJQ
  • Thread-topic: [Xen-users] Re: Windows Disk performance

> 
> When it comes to PV driver performance, this is an interesting topic.
> I've seen posts reporting the opposite result (in direction, if not in
> exact numbers), so it would be interesting to find out what causes this.
> 

I just did a bit of testing myself... the '32K; 100% Read; 0% random'
test in iometer performs inconsistently when using the qemu drivers. I
tried it once and it gave me 35MB/s. I then tried the gplpv drivers and
they gave me around 23MB/s. I'm now trying the qemu drivers again and
they aren't getting past 19MB/s. I'm using an LVM snapshot at the
moment, which probably has something to do with the inconsistent
results...

I also tried fiddling with the '# of Outstanding I/Os' setting, changing
it to 16 (the maximum number of concurrent requests scsiport will give
me). For qemu there was no change, but for gplpv my numbers went up to
66MB/s (from 23MB/s). I'm a little unsure how much trust to put in that,
though, as hdparm in Dom0 gives me a maximum of 35MB/s on that LVM
device, so I can't quite figure out how an HVM DomU could be getting
better results than the hdparm baseline figure.
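
For context, the reason more outstanding I/Os should help the PV path is
that xenvbd can put several requests on the shared blkif ring and kick
the backend once for the whole batch. Very roughly, the pattern looks
like this (just a sketch using the public macros from Xen's io/ring.h
and io/blkif.h; everything prefixed my_ is made up for illustration and
is not the actual gplpv code):

    /* Sketch only: batch several outstanding requests onto the shared
     * blkif ring, then notify the backend once.  Uses the public ring
     * macros from xen/io/ring.h; everything named my_* is invented for
     * illustration and is not the real gplpv code. */
    #include <xen/io/ring.h>
    #include <xen/io/blkif.h>

    static blkif_front_ring_t my_ring;  /* assumed already mapped and initialised */

    static void my_notify_backend(void);  /* hypothetical event channel kick */

    /* Queue one 4K read without notifying the backend yet. */
    static int my_queue_read(uint64_t id, blkif_sector_t sector, grant_ref_t gref)
    {
        blkif_request_t *req;

        if (RING_FULL(&my_ring))
            return 0;               /* caller retries once responses drain */

        req = RING_GET_REQUEST(&my_ring, my_ring.req_prod_pvt);
        req->operation = BLKIF_OP_READ;
        req->nr_segments = 1;
        req->handle = 0;            /* virtual device handle, assumed */
        req->id = id;               /* echoed back in the response */
        req->sector_number = sector;
        req->seg[0].gref = gref;    /* grant reference of the data page */
        req->seg[0].first_sect = 0;
        req->seg[0].last_sect = 7;  /* 8 x 512 bytes = one 4K page */
        my_ring.req_prod_pvt++;
        return 1;
    }

    /* Push everything queued so far; kick the backend only if it needs it. */
    static void my_submit_batch(void)
    {
        int notify;

        RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&my_ring, notify);
        if (notify)
            my_notify_backend();
    }

The point of the batching is that one event-channel kick covers however
many requests are queued, so keeping 16 in flight keeps the ring full
instead of paying a round trip per request.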

I'm just about to upload 0.9.8, which fixes a performance bug that would
cause a huge slowdown (iometer dropped from 23MB/s to 0.5MB/s :) if too
many outstanding requests were issued at once. It also prints some
statistics to the debug log (viewable via DebugView from
sysinternals.com) every 60 seconds, which may or may not be useful.
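
The stats dump is nothing fancy, just counters written out with DbgPrint
on a periodic kernel timer, which is what DebugView picks up. Something
along these lines (counter and routine names here are invented; the real
driver's layout is different):

    /* Sketch: dump driver counters to the kernel debug log once a minute
     * so DebugView (with kernel capture enabled) can display them.
     * Counter and routine names are invented for illustration. */
    #include <ntddk.h>

    static KTIMER stat_timer;
    static KDPC stat_dpc;
    static volatile LONG reqs_submitted;   /* bumped with InterlockedIncrement */
    static volatile LONG reqs_completed;   /* in the real submit/complete paths */

    static VOID StatDpcRoutine(PKDPC Dpc, PVOID Context, PVOID Arg1, PVOID Arg2)
    {
        UNREFERENCED_PARAMETER(Dpc);
        UNREFERENCED_PARAMETER(Context);
        UNREFERENCED_PARAMETER(Arg1);
        UNREFERENCED_PARAMETER(Arg2);
        DbgPrint("xenvbd stats: submitted=%ld completed=%ld outstanding=%ld\n",
            reqs_submitted, reqs_completed, reqs_submitted - reqs_completed);
    }

    static VOID StartStatTimer(void)
    {
        LARGE_INTEGER due;

        KeInitializeDpc(&stat_dpc, StatDpcRoutine, NULL);
        KeInitializeTimer(&stat_timer);
        due.QuadPart = -60LL * 10000000LL;   /* first fire in 60s (relative, 100ns units) */
        KeSetTimerEx(&stat_timer, due, 60 * 1000, &stat_dpc);  /* then every 60000ms */
    }

Run DebugView in the DomU with kernel capture turned on and the lines
show up once a minute.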

Unless the above bug was affecting things, and I'm not sure that it was,
the reduction in performance may be due to the way that xenpci now
notifies the child drivers (e.g. xenvbd and xennet) that an interrupt
has occurred. This should affect xennet equally though. That path was
changed with the WDF->WDM rewrite.
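
To make it clearer what I mean by "notifying the child drivers": xenpci
owns the interrupt and fans it out to whatever its children have
registered. The shape of it is roughly this (all names hypothetical;
this is not the actual xenpci/xenvbd interface, just the idea):

    /* Sketch of the fan-out idea only: the parent bus driver's ISR walks a
     * list of handlers that the child drivers registered when they attached.
     * All names are hypothetical; the real xenpci interface differs. */
    #include <ntddk.h>

    typedef BOOLEAN (*CHILD_INTERRUPT_HANDLER)(PVOID Context);

    typedef struct _CHILD_HANDLER_ENTRY {
        LIST_ENTRY ListEntry;
        CHILD_INTERRUPT_HANDLER Handler;   /* e.g. xenvbd's "check the ring" routine */
        PVOID Context;
    } CHILD_HANDLER_ENTRY;

    /* Initialised with InitializeListHead() at driver load; entries are added
     * when a child (xenvbd, xennet) connects.  Registration would have to be
     * synchronised with the interrupt, e.g. via KeSynchronizeExecution. */
    static LIST_ENTRY child_handlers;

    static BOOLEAN ParentIsr(PKINTERRUPT Interrupt, PVOID ServiceContext)
    {
        PLIST_ENTRY entry;
        BOOLEAN handled = FALSE;

        UNREFERENCED_PARAMETER(Interrupt);
        UNREFERENCED_PARAMETER(ServiceContext);

        for (entry = child_handlers.Flink; entry != &child_handlers;
                entry = entry->Flink) {
            CHILD_HANDLER_ENTRY *child =
                CONTAINING_RECORD(entry, CHILD_HANDLER_ENTRY, ListEntry);
            if (child->Handler(child->Context))
                handled = TRUE;
        }
        return handled;
    }

Whether the call happens straight from the ISR like this or gets
deferred to a DPC changes the latency the child sees, but either way the
cost is the same for xenvbd and xennet, which is why I said it should
affect xennet equally.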

James


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

