[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

RE: [Xen-users] poor IO performance



 

> -----Original Message-----
> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx 
> [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of 
> Vitaliy Okulov
> Sent: 30 May 2007 13:11
> To: xen-users@xxxxxxxxxxxxxxxxxxx
> Subject: [Xen-users] poor IO performance
> 
> Hello, xen-users.
> 
> Just tested domU I/O performance:
> 
> sda1 is configured via phy:/dev/sdb1. Benchmarked with dbench 
> (dbench -D /usr/src -s 10
> -t 120) - 102 MB/s
> 
> Native system (mount /dev/sdb1 /mnt && dbench -D /mnt -s 10 -t 120) -
> 140 MB/s
> 
> How can I speed up dbench?

Probably not that easy. If you have multiple disk controllers (that is, 
multiple devices according to, for example, "lspci"), you can give one whole 
controller to the guest; that should give the same performance as native, 
assuming nothing else interferes with the DomU - if two domains share the same 
CPU, for example, it will of course not match native performance. 
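If you do want to try handing a whole controller to a guest, a rough sketch of
the pciback route follows (the PCI address 0000:00:1f.2 is purely illustrative -
check "lspci" for the controller you actually want to pass through, and note
this is a config sketch, not a tested recipe):

```shell
# In Dom0: hide the disk controller from Dom0 and hand it to pciback.
# 0000:00:1f.2 is a hypothetical PCI address - substitute your own.
modprobe pciback
echo 0000:00:1f.2 > /sys/bus/pci/drivers/pciback/new_slot
echo 0000:00:1f.2 > /sys/bus/pci/drivers/pciback/bind

# In the DomU config file: pass the controller through to the guest.
# pci = [ '00:1f.2' ]
```

The guest then drives the controller with its own native driver, so Dom0 is no
longer in the I/O path for those disks.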

The disk I/O request goes through Dom0 even if the device is "phy:", because 
the device backing "/dev/sdb1" sits on a disk controller owned by Dom0. That 
adds some latency overhead, and unless the request queue were infinitely deep, 
that latency also reduces the transfer rate.
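To see why per-request latency caps throughput even on a fast disk, here is a
back-of-envelope sketch (all numbers are assumptions for illustration, not
measurements from either setup):

```shell
# Upper bound on throughput when every request pays a fixed round-trip
# latency (Little's law): throughput <= data in flight / latency.
# All numbers below are illustrative assumptions.
queue_depth=1        # outstanding requests (worst case: fully synchronous)
request_kb=64        # KB per request
latency_ms=0.5       # assumed per-request round trip through Dom0

awk -v q=$queue_depth -v s=$request_kb -v l=$latency_ms 'BEGIN {
    # (KB in flight) / (seconds per round trip) -> KB/s, then -> MB/s
    printf "max throughput ~ %.0f MB/s\n", (q * s) / (l / 1000) / 1024
}'
```

A deeper queue hides the extra latency and raises the bound, which is why the
Dom0 hop shows up as a rate drop rather than a hard ceiling.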

You have to understand that any form of virtualization adds overhead - a bit 
like how raw disk-write performance is (or should be) higher than writing 
through a file-system. But nobody would prefer to refer to their e-mails or 
documents by saying "please give me blocks 12313287, 12241213 and 12433823" 
instead of "/usr/doc/mytext.doc" - so the overhead is accepted because it makes 
the system more usable. In the virtualization case there is usually a reason 
for wanting virtualization in the first place: typically that each system is 
underutilized, meaning its CPU and I/O capacity isn't used to full potential. 
Merging two systems that each run at about 20-30% utilization still leaves 
some spare capacity for expansion as well as for the virtualization 
overhead. 
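The consolidation arithmetic can be sketched as follows (figures are
illustrative only, including the assumed overhead):

```shell
# Two hosts at ~25% utilization merged onto one box, with an assumed
# ~10% virtualization overhead, still leave headroom.
util_a=25
util_b=25
overhead=10
combined=$((util_a + util_b + overhead))
echo "combined load: ${combined}%, headroom: $((100 - combined))%"
```

As long as the combined figure stays comfortably under 100%, the overhead is a
price worth paying for running fewer boxes.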

--
Mats
> 
> -- 
> Best regards,
>  Vitaliy                          mailto:vitaliy.okulov@xxxxxxxxx
> 
> 
> _______________________________________________
> Xen-users mailing list
> Xen-users@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-users
> 
> 
> 



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

