
Re: [Xen-users] Disk IO tuning



Dear Florian Heigl,

I attempted that, and these are the results:

Virtual System: 39.4 MB/s
Hardware System: 321 MB/s

I think my bottleneck is on the host side... any suggestions?

Andrew.

On Wed, Dec 14, 2011 at 4:10 PM, Florian Heigl <florian.heigl@xxxxxxxxx> wrote:
Hi,

2011/12/14 Andrew Wells <agwells0714@xxxxxxxxx>:
> Hardware:
> dd if=/dev/zero of=/data/gp/test.file bs=4096 count=1000000

It would be very helpful to start by testing sync writes using a
better blocksize. Otherwise you're just testing how fast you can fill
your dom0's buffer cache. Xen will not use such unsafe writes unless
you're using a file:// device.

Also, even though Linux applications work with 4K pages, dd's IO
throughput will be quite poor at that block size. A 4K test is mainly
interesting if you expect a lot of paging from the domUs. Not saying
that it isn't worth testing, but first find out the full sequential
speed, and then use something other than dd to test 4K random IO.
Sequential 4K IO is really not going to happen a lot.
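
For the 4K random IO part, a tool like fio would be the usual choice.
A rough sketch only (the job name, test file path, size and queue
depth here are just illustrative, adjust them to your setup):

  fio --name=randread-4k --filename=/data/gp/fio.test --size=1G \
      --bs=4k --rw=randread --direct=1 --ioengine=libaio \
      --iodepth=16 --runtime=60 --time_based

direct=1 bypasses the page cache, so you measure the storage path
rather than dom0's memory.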

so use
1) conv=fdatasync at the end of the line
2) bs=1M count=1024

Yes, 1024 "MB" will not be enough to fill the array's cache.
But since you're looking for host IO bottlenecks, it makes sense
not to try to stress the array, only the host.
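
For example, combining the two (reusing the output file from your
original command; adjust the path for your environment):

  dd if=/dev/zero of=/data/gp/test.file bs=1M count=1024 conv=fdatasync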



--
the purpose of libvirt is to provide an abstraction layer hiding all
xen features added since 2006 until they were finally understood and
copied by the kvm devs.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users

 

