
RE: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow


  • To: "Fajar A. Nugraha" <fajar@xxxxxxxxx>
  • From: "DOGUET Emmanuel" <Emmanuel.DOGUET@xxxxxxxx>
  • Date: Thu, 12 Feb 2009 16:02:12 +0100
  • Cc: xen-users@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Thu, 12 Feb 2009 07:03:05 -0800
  • List-id: Xen user discussion <xen-users.lists.xensource.com>
  • Thread-index: AcmNGpdWud5oO0MmTMS1JRGkvThJGgAAYPQA
  • Thread-topic: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow

 

>-----Original Message-----
>From: Fajar A. Nugraha [mailto:fajar@xxxxxxxxx]
>Sent: Thursday, 12 February 2009 15:03
>To: DOGUET Emmanuel
>Cc: xen-users@xxxxxxxxxxxxxxxxxxx
>Subject: Re: [Xen-users] Re: Xen Disk I/O performance vs native
>performance: Xen I/O is definitely super super super slow
>
>On Thu, Feb 12, 2009 at 8:37 PM, DOGUET Emmanuel
><Emmanuel.DOGUET@xxxxxxxx> wrote:
>>
>>        Oops sorry!
>>
>> We use only phy: with LVM. PV only (Linux on domU, Linux from dom0).
>> LVM is on hardware RAID.
>
>That's better :) Now for more questions:
>What kind of test did you run? How did you determine that "domU was 2x
>slower than dom0"?
>How much memory did you assign to domU and dom0? Are other programs
>running? What were the results (how many seconds, how many MBps, etc.)?

Tested with:

- Oracle (tablespace creation: time to create, plus iostat during the run)
and
- dd if=/dev/zero of=TEST bs=4k count=1250000 (5 GB, to avoid memory caching; a direct-I/O variant is sketched below).
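If cache effects are still a concern, dd can be told to bypass the page cache entirely; a minimal sketch, assuming GNU coreutils dd with O_DIRECT support and the same test file name:

    # write 5 GB with O_DIRECT, skipping the page cache (GNU dd)
    dd if=/dev/zero of=TEST bs=4k count=1250000 oflag=direct

    # in another terminal, watch per-device throughput every 5s (sysstat)
    iostat -x 5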


New platform:

Dom0:  4 GB  (quad core)
DomU1: 4 GB, 2 VCPUs
DomU2: 10 GB, 4 VCPUs

I have also tried with only one domU running.

This problem occurs on only two platforms.


Example of a configuration with 2 RAID arrays (HP ML370, 32-bit):

dom0:  5120000000 bytes (5.1 GB) copied, 139.492 seconds, 36.7 MB/s
domU:  5120000000 bytes (5.1 GB) copied, 279.251 seconds, 18.3 MB/s

        release                : 2.6.18-53.1.21.el5xen
        version                : #1 SMP Wed May 7 09:10:58 EDT 2008
        machine                : i686
        nr_cpus                : 4
        nr_nodes               : 1
        sockets_per_node       : 2
        cores_per_socket       : 1
        threads_per_core       : 2
        cpu_mhz                : 3051
        hw_caps                : bfebfbff:00000000:00000000:00000080:00004400
        total_memory           : 4863
        free_memory            : 1
        xen_major              : 3
        xen_minor              : 1
        xen_extra              : .0-53.1.21.el5
        xen_caps               : xen-3.0-x86_32p
        xen_pagesize           : 4096
        platform_params        : virt_start=0xf5800000
        xen_changeset          : unavailable
        cc_compiler            : gcc version 4.1.2 20070626 (Red Hat 4.1.2-14)
        cc_compile_by          : brewbuilder
        cc_compile_domain      : build.redhat.com
        cc_compile_date        : Wed May  7 08:39:04 EDT 2008
        xend_config_format     : 2



Example of a configuration with 1 RAID array (HP DL360, 64-bit):

dom0:  5120000000 bytes (5.1 GB) copied, 170.3 seconds, 30.1 MB/s
domU:  5120000000 bytes (5.1 GB) copied, 666.184 seconds, 7.7 MB/s


        release                : 2.6.18-128.el5xen
        version                : #1 SMP Wed Dec 17 12:01:40 EST 2008
        machine                : x86_64
        nr_cpus                : 8
        nr_nodes               : 1
        sockets_per_node       : 2
        cores_per_socket       : 4
        threads_per_core       : 1
        cpu_mhz                : 2666
        hw_caps                : bfebfbff:20000800:00000000:00000140:000ce3bd:00000000:00000001
        total_memory           : 18429
        free_memory            : 0
        node_to_cpu            : node0:0-7
        xen_major              : 3
        xen_minor              : 1
        xen_extra              : .2-128.el5
        xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p
        xen_pagesize           : 4096
        platform_params        : virt_start=0xffff800000000000
        xen_changeset          : unavailable
        cc_compiler            : gcc version 4.1.2 20080704 (Red Hat 4.1.2-44)
        cc_compile_by          : mockbuild
        cc_compile_domain      : redhat.com
        cc_compile_date        : Wed Dec 17 11:37:15 EST 2008
        xend_config_format     : 2




PS: I don't use virt-install; I generate the xmdomain.cfg myself. So is it PV
or HVM?
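(In case it helps to check: in a hand-written xmdomain.cfg, a PV guest boots a dom0-supplied kernel or uses a bootloader such as pygrub, while an HVM guest is marked with builder = "hvm". A minimal PV sketch, assuming a hypothetical LVM volume /dev/vg0/domu1:

    # PV guest: boots a dom0-supplied kernel
    # (alternatively: bootloader = "/usr/bin/pygrub")
    name    = "domu1"
    memory  = 4096
    vcpus   = 2
    kernel  = "/boot/vmlinuz-2.6.18-128.el5xen"
    ramdisk = "/boot/initrd-2.6.18-128.el5xen.img"
    # phy: backend on an LVM logical volume, as described above
    disk    = [ "phy:/dev/vg0/domu1,xvda,w" ]
    # assumes the LV holds the root filesystem directly, no partition table
    root    = "/dev/xvda ro"

    # An HVM guest would instead have:
    #   builder      = "hvm"
    #   device_model = "/usr/lib/xen/bin/qemu-dm"

If the file has kernel/bootloader lines and no builder = "hvm", the guest is PV, which matches the phy:/LVM setup described earlier.)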


  Bye bye.


>
>I've had good results so far, with domU disk I/O performance similar
>or equal to dom0's. A simple
>
>time dd if=/dev/zero of=test1G bs=1M count=1024
>
>took about 5 seconds and gave me about 200 MB/s on an idle dom0 and domU.
>This is on IBM hardware RAID, 7x144GB RAID5 + 1 hot spare, 2.5" SAS
>disks. Both dom0 and domU have 512MB memory.
>
>>
>> As for the RAID, my question was (my English is bad):
>>
>> Is it better to have:
>>
>> *case 1*
>> Dom0 and DomU   on      hard drive 1    (with HP RAID: c0d0)
>>
>> or
>>
>> *case 2*
>> Dom0            on      hard drive 1    (with HP RAID: c0d0)
>> DomU            on      hard drive 2    (with HP RAID: c0d1)
>>
>>
>
>Depending on how you use it, it might not matter :)
>As a general rule of thumb, more disks should provide higher I/O throughput
>when set up properly. In general (like when all disks are the same, for
>general-purpose domUs) I'd simply put all available disks in a RAID5
>(or multiple RAID5s for lots of disks) and put them all in a single
>VG.
>
>Regards,
>
>Fajar
>
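For reference, the layout Fajar describes (one VG spanning the RAID devices, with one LV per domU, exported via phy:) could be set up roughly like this; a sketch with hypothetical HP Smart Array device and volume names:

    # initialize the hardware-RAID block devices as LVM physical volumes
    pvcreate /dev/cciss/c0d0p2 /dev/cciss/c0d1
    # pool them into a single volume group
    vgcreate vg0 /dev/cciss/c0d0p2 /dev/cciss/c0d1
    # carve one logical volume per domU, referenced from xmdomain.cfg
    lvcreate -L 20G -n domu1 vg0
    lvcreate -L 20G -n domu2 vg0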



 

