
Re: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow



To be sure, I redid all the tests with "conv=fdatasync" added to dd:
   dd conv=fdatasync if=/dev/zero of=TEST bs=4k count=128000

Results don't really change.

For memory, all my domUs have 512 MB.

On dom0, I checked with 200 MB and with 4 GB of free memory; it doesn't change the results. I also adjusted the dd count parameter to create a 512 MB or an 8 GB file, and got similar results (with a small variation of course, about 10%).
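
For reference, here is a minimal sketch of the kind of cache-honest run I mean (the mount point /mnt/bench is only an example, not a path from my setup):

   cd /mnt/bench
   sync                           # flush anything already dirty
   # write more data than the domU's 512 MB of RAM, and force it to disk
   # before dd reports, so the page cache cannot inflate the figure
   time dd conv=fdatasync if=/dev/zero of=TEST bs=4k count=256000
   rm TEST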

I also checked with the Debian Lenny Xen kernel for the domU (so without ext4), and got the same results there too.

I really don't think it's an FS issue; but I suppose the writeback feature of ext4 avoids these write problems (I didn't try to disable it).
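
One way to test that hypothesis without switching filesystems would be to mount the same volume as ext3 with the writeback journalling mode and repeat the dd; this is just a sketch, and /dev/xvdb1 and /mnt/bench are hypothetical names, not devices from my servers:

   umount /mnt/bench
   # data=writeback relaxes ext3's ordered journalling, which is the
   # closest ext3 analogue to the ext4 behaviour suspected above
   mount -o data=writeback /dev/xvdb1 /mnt/bench
   dd conv=fdatasync if=/dev/zero of=/mnt/bench/TEST bs=4k count=128000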

Olivier

DOGUET Emmanuel wrote:
If you see differences between ext3 and ext4, could it be due to the FS cache?
In your tests, how much memory do dom0 and domU have?
Our write problem with the hardware RAID is very strange :/

For my tests, I use the "standard" Red Hat kernel.


  Bye.

-----Original Message-----
From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-
bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Olivier B.
Sent: Wednesday, 25 February 2009 20:47
Cc: xen-users
Subject: Re: [Xen-users] Re: Xen Disk I/O performance vs
native performance: Xen I/O is definitely super super super slow

I did some tests too on some servers (time dd if=/dev/zero of=TEST bs=4k
count=512000).

First server: hardware RAID 1 on a 32-bit PCI 3ware card.
dom0 (ext3):    39 MB/s
domU (ext3):    1.4 MB/s !!!
domU (ext4):    40 MB/s

Second server: software RAID 1 with 2 SATA disks.
dom0 (ext3):    96 MB/s
domU (ext3):    91 MB/s
domU (ext4):    94 MB/s

Note: I use a vanilla kernel in the domU.

So:
- I see a big write problem from the domU on hardware RAID
- the writeback feature of ext4 seems to "erase" this problem

Olivier

DOGUET Emmanuel wrote:
I have finished my tests on 3 servers. On each one we lose some bandwidth
with Xen. Across our 10 platforms ... we always lose some bandwidth; I think
that's normal. Perhaps only the benchmark method differs?
I have run write-only benchmarks comparing hardware and software RAID under
Xen (see attachment).
Linux software RAID is always faster than the HP RAID. I must also try the
"512MB + Write Cache" option for the HP RAID.
So my problems seem to be here.
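
If it does come down to the controller cache, the current cache settings can at least be inspected with HP's hpacucli tool; this is only a sketch, the slot number is an assumption, and the exact option names vary with controller model and firmware:

   # assumes the Smart Array controller sits in slot 0
   hpacucli ctrl slot=0 show config detail        # cache size, read/write ratio, battery state
   hpacucli ctrl slot=0 modify cacheratio=25/75   # example read/write split, only with a working BBWC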


-------------------------
HP DL 380
Quad core
-------------------------
Test: dd if=/dev/zero of=TEST bs=4k count=1250000



             Hardware     Hardware     Software     Software
             RAID 5       RAID 5       RAID 5       RAID 5
             4 x 146G     8 x 146G     4 x 146G     8 x 146G
dom0
(1024MB,
 1 cpu)      32 MB/s      22 MB/s      88 MB/s (*)  144 MB/s (*)

domU
( 512MB,
 1 cpu)       8 MB/s       5 MB/s      34 MB/s      31 MB/s

domU
(4096MB,
 2 cpu)        --           7 MB/s     51 MB/s      35 MB/s



*: I don't understand this difference.


Does this performance look right to you?




         Best regards.





-----Original Message-----
From: DOGUET Emmanuel
Sent: Tuesday, 24 February 2009 17:50
To: DOGUET Emmanuel; Fajar A. Nugraha
Cc: xen-users; Joris Dobbelsteen
Subject: RE: [Xen-users] Re: Xen Disk I/O performance vs
native performance: Xen I/O is definitely super super super slow

To sum up:

on RAID 0:

        dom0: 80 MB/s      domU: 56 MB/s            Loss: 30%

on RAID 1:

        dom0: 80 MB/s      domU: 55 MB/s            Loss: 32%

on RAID 5:

        dom0: 30 MB/s      domU: 9 MB/s             Loss: 70%

(loss = 1 - domU/dom0, e.g. 1 - 56/80 ≈ 30%)

So the loss seems to be "exponential"?




-----Original Message-----
From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
[mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of
DOGUET Emmanuel
Sent: Tuesday, 24 February 2009 14:22
To: Fajar A. Nugraha
Cc: xen-users; Joris Dobbelsteen
Subject: RE: [Xen-users] Re: Xen Disk I/O performance vs
native performance: Xen I/O is definitely super super super slow


I have run another test on another server (a DL 380).

And same thing!

I always use this test:

dd if=/dev/zero of=TEST bs=4k count=1250000

(be careful with memory cache)
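
Two ways to keep that cache out of the numbers (illustration only, not what was used for the results below):

   # bypass the page cache with direct I/O (use a larger bs than 4k
   # if you want a meaningful throughput figure this way)
   time dd if=/dev/zero of=TEST bs=1M count=5000 oflag=direct

   # or flush and drop the cache between runs
   sync
   echo 3 > /proc/sys/vm/drop_caches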


TEST WITH 2 RAID 5 ARRAYS (including the system on RAID 5, 3x146G + 3x146G)
---------------------------------------------------------------

dom0: 1 GB, 1 CPU, 2 RAID 5 arrays

       rootvg(c0d0p1):         4596207616 bytes (4.6 GB)
copied, 158.284 seconds, 29.0 MB/s
       datavg(c0d1p1):         5120000000 bytes (5.1 GB)
copied, 155.414 seconds, 32.9 MB/s

domU: 512M, 1CPU         on System LVM/RAID5 (rootvg)

       5120000000 bytes (5.1 GB) copied, 576.923 seconds, 8.9 MB/s

domU: 512M, 1CPU         on DATA LVM/RAID5 (datavg)

       5120000000 bytes (5.1 GB) copied, 582.611 seconds, 8.8 MB/s

domU: 512M, 1 CPU on same RAID without LVM

       5120000000 bytes (5.1 GB) copied, 808.957 seconds, 6.3 MB/s


TEST WITH RAID 0 (dom0 system on RAID 1)
---------------------------------------

dom0   1 GB RAM, 1 CPU

       on system (RAID1):
       3955544064 bytes (4.0 GB) copied, 57.4314 seconds, 68.9 MB/s

       on direct HD (RAID 0 on cciss), no LVM
       5120000000 bytes (5.1 GB) copied, 62.5497 seconds, 81.9 MB/s

dom0   4 GB RAM, 4 CPU



domU:  4 GB, 4 CPU

       on direct HD (RAID 0), no LVM.
       5120000000 bytes (5.1 GB) copied, 51.2684 seconds, 99.9 MB/s


domU: 4 GB, 4 CPU, same HD but ONE LVM on it

       5120000000 bytes (5.1 GB) copied, 51.5937 seconds, 99.2 MB/s


TEST with only ONE RAID 5 (6 x 146G)
------------------------------------

dom0: 1024 MB - 1 CPU (RHEL 5.3)

       5120000000 bytes (5.1 GB) copied, 231.113 seconds, 22.2 MB/s


512MB - 1 CPU
       5120000000 bytes (5.1 GB) copied, 1039.42 seconds, 4.9 MB/s


512MB - 1 CPU - ONLY 1 VBD [LVM] (root, no swap)

       (too slow ..stopped :P)
       4035112960 bytes (4.0 GB) copied, 702.883 seconds, 5.7 MB/s

512MB - 1 CPU - On a file (root, no swap)

       1822666752 bytes (1.8 GB) copied, 2753.91 seconds, 662 kB/s

4GB - 2 CPU
       5120000000 bytes (5.1 GB) copied, 698.681 seconds, 7.3 MB/s





-----Original Message-----
From: Fajar A. Nugraha [mailto:fajar@xxxxxxxxx]
Sent: Saturday, 14 February 2009 06:23
To: DOGUET Emmanuel
Cc: xen-users
Subject: Re: [Xen-users] Re: Xen Disk I/O performance vs native
performance: Xen I/O is definitely super super super slow

2009/2/13 DOGUET Emmanuel <Emmanuel.DOGUET@xxxxxxxx>:

I have mounted the domU partition on dom0 for testing and it's OK.
But the same partition on the domU side is slow.

Strange.

Strange indeed. At least that rules out hardware problems :)
Could you try with a "simple" domU?
- 1 vcpu
- 512 M memory
- only one vbd

This should isolate whether or not the problem is in your particular
domU (e.g. some config parameter actually making the domU slower).

Your config file should have only a few lines, like this:

memory = "512"
vcpus=1
disk = ['phy:/dev/rootvg/bdd-root,xvda1,w' ]
vif = [ "mac=00:22:64:A1:56:BF,bridge=xenbr0" ]
vfb =['type=vnc']
bootloader="/usr/bin/pygrub"
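
To use a stripped-down config like that, you could save it and start the guest with xm, then rerun the same dd from inside it (the config path and domain name below are just examples):

   # assumes the config also sets  name = "simple-domU"
   xm create /etc/xen/simple-domU.cfg
   xm console simple-domU        # log in and rerun the dd test from inside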

Regards,

Fajar




_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

