
Re: [Xen-users] Raid1 performance



On Thu, Apr 08, 2010 at 08:23:19PM +0200, blub@xxxxxxxx wrote:
> > On Thu, Apr 08, 2010 at 07:00:11PM +0200, blub@xxxxxxxx wrote:
> >> hello
> >> i've been trying to achieve reasonable disk i/o performance on xen.
> >> on debian 5.0.4 (xen 3.2.1, kernel 2.6.26) the pv/hvm performance is
> >> alright with a raid1 inside the domU. unfortunately the qemu version
> >> (0.9.0) does not cover my needs.
> >>
> >> so i've been trying xen 3.4.2 & 4.0.0rc8 on gentoo (2.6.31)
> >> on both versions, the disk i/o performance drops to zero when copying
> >> large files. initially the performance is ok (~30mbyte/s) but drops to
> >> zero after ~300mb, i've tried it with wget/ftp/scp
> >> this happens with both PV & HVM domU's. i've tried multiple nic drivers
> >>
> >> the performance seems alright if no raid1 is running on the domU
> >> i cannot run the domU raid1's on the dom0 since there are multiple dom0s
> >> connecting to multiple FC SAN's and the raid1's are done across the
> >> SAN's
> >>
> >> (btw, the performance with raid1's on the SAN's is excellent on dom0)
> >>
> >> anyone else has experienced this issue?
> >>
> >
> > Did you make sure dom0 has more weight than the domUs, so that it is able
> > to process the I/O requests?
> >
> > See: http://wiki.xensource.com/xenwiki/XenBestPractices
> >
> > -- Pasi
> >
> >
> 
> Yes, the box has 8 physical cores, and the domU test installation only
> uses two of them.
> 
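
For reference, I read your setup as a software RAID1 built inside the
domU on top of two SAN-backed virtual disks, i.e. something like this
(the device names here are hypothetical):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/xvdb /dev/xvdc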

How many cores does dom0 have?
Did you set up the Xen credit scheduler weights?
Did you pin the vcpus to specific cores? (See the example commands below.)
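
If not, both can be set with the xm tools. A minimal sketch; the weight
of 512 (twice the default of 256) and the pinning layout are just
illustrations, not tuned values:

  xm sched-credit -d Domain-0 -w 512   # give dom0 more weight than the domUs
  xm vcpu-pin Domain-0 0 0             # pin dom0 vcpu 0 to physical core 0
  xm vcpu-list                         # verify the resulting placement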

What does "iostat 1" show in dom0 while you run the test? 
What does "xm top" show in dom0 while you run test? 

-- Pasi

