
RE: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow


  • To: "Fajar A. Nugraha" <fajar@xxxxxxxxx>
  • From: "DOGUET Emmanuel" <Emmanuel.DOGUET@xxxxxxxx>
  • Date: Fri, 13 Feb 2009 10:00:16 +0100
  • Cc: xen-users@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Fri, 13 Feb 2009 01:01:11 -0800
  • List-id: Xen user discussion <xen-users.lists.xensource.com>
  • Thread-index: AcmNkRtJM/WiNHn8Q0OxYZildUH6vwAEIr7AAAXlg5A=
  • Thread-topic: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow

 

I mounted the domU partition on dom0 for testing, and there it is fine.
But the same partition is slow from the domU side.

Strange.
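
For reference, a minimal sketch of that comparison (the device and
mount-point names here are hypothetical, and the backing device should
only be mounted on dom0 while the domU is shut down, to avoid corrupting
the filesystem). Using conv=fdatasync makes dd flush to disk before it
reports, so the page cache cannot inflate the numbers even with a small
test file:

  # On dom0: mount the domU's backing device and run the test there
  mount /dev/vg0/domU-disk /mnt/domU-test
  dd if=/dev/zero of=/mnt/domU-test/TEST bs=4k count=262144 conv=fdatasync  # ~1 GB
  umount /mnt/domU-test

  # Inside the domU: the same write on the same filesystem
  dd if=/dev/zero of=/root/TEST bs=4k count=262144 conv=fdatasync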




>-----Original Message-----
>From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx 
>[mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of 
>DOGUET Emmanuel
>Sent: Friday, February 13, 2009 07:14
>To: Fajar A. Nugraha
>Cc: xen-users@xxxxxxxxxxxxxxxxxxx
>Subject: RE: [Xen-users] Re: Xen Disk I/O performance vs native 
>performance: Xen I/O is definitely super super super slow
>
> 
>
>I did the 5 GB test because the Linux filesystem has a very good 
>caching system, and so does the RAID controller.
>
>For my domU, I agree with you, but I can't find the problem.
>And what about qemu-dm? It seems to be an HVM feature.
>
>
>dom0:
>Linux host33 2.6.18-128.el5xen #1 SMP Wed Dec 17 12:01:40 EST 
>2008 x86_64 x86_64 x86_64 GNU/Linux
>
>domU:
>Linux host33-v1 2.6.18-128.el5xen #1 SMP Wed Dec 17 12:01:40 
>EST 2008 x86_64 x86_64 x86_64 GNU/Linux
>
>
>Bye
>
>
>>-----Original Message-----
>>From: Fajar A. Nugraha [mailto:fajar@xxxxxxxxx] 
>>Sent: Friday, February 13, 2009 05:11
>>To: DOGUET Emmanuel
>>Cc: xen-users@xxxxxxxxxxxxxxxxxxx
>>Subject: Re: [Xen-users] Re: Xen Disk I/O performance vs native 
>>performance: Xen I/O is definitely super super super slow
>>
>>On Thu, Feb 12, 2009 at 10:02 PM, DOGUET Emmanuel
>><Emmanuel.DOGUET@xxxxxxxx> wrote:
>>> - dd if=/dev/zero of=TEST bs=4k count=1250000 (5 GB, to avoid 
>>the memory cache).
>>
>>> dom0:  5120000000 bytes (5.1 GB) copied, 139.492 seconds, 36.7 MB/s
>>> domU:  5120000000 bytes (5.1 GB) copied, 279.251 seconds, 18.3 MB/s
>>
>>Here's what I get using "dd if=/dev/zero of=testfile bs=4k 
>>count=524288"
>>
>>dom0: 2147483648 bytes (2.1 GB) copied, 14.5523 seconds, 148 MB/s
>>domU: 2147483648 bytes (2.1 GB) copied, 14.8254 seconds, 145 MB/s
>>
>>Since I only allocated 512M for dom0 and domU, a 2G test file is
>>enough to avoid memory-cache effects. As you can see, the performance
>>is similar between dom0 and domU. Maybe you're using HVM? Try
>>"uname -a" on your domU; if it shows a Xen kernel then it's PV.
>>
>>It might also be because of the difference in disks used or another
>>I/O-intensive process running on your server, since I got over 140
>>MB/s while you only get 36 MB/s on dom0.
>>
>>My point is PV domU should have similar I/O performance to dom0 when
>>configured correctly (e.g. using LVM or partition-backed storage). If
>>there's a huge difference (like what you get) then maybe the source of
>>the problem is elsewhere, not in Xen.
>>
>>Regards,
>>
>>Fajar
>>
>
>
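
Two quick checks that follow from Fajar's reply, sketched with
hypothetical names. First, PV vs HVM: an HVM guest normally has a
qemu-dm process running on dom0, while a PV guest does not. Second, how
the domU disk is backed: a loopback file-backed ("file:") disk is a
common cause of slow domU I/O compared to a partition- or LVM-backed
("phy:") one.

  # On dom0: an HVM guest shows a qemu-dm process, a PV guest does not
  ps aux | grep [q]emu-dm

  # In the domU config: partition/LVM-backed (the fast path)
  disk = [ 'phy:/dev/vg0/domU-disk,xvda,w' ]
  # versus loopback file-backed, which is usually much slower:
  # disk = [ 'file:/var/lib/xen/images/domU.img,xvda,w' ]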

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

