
RE: [Xen-devel] Re: poor domU VBD performance.


  • To: "peter bier" <peter_bier@xxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>
  • Date: Fri, 1 Apr 2005 18:46:58 +0100
  • Delivery-date: Fri, 01 Apr 2005 17:47:08 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcU22aIeUlJeyl7oSSCbKJtainSzHQACENPA
  • Thread-topic: [Xen-devel] Re: poor domU VBD performance.

 
> Now I have switched back to the filesystem operations. I do
> this by copying a "/usr" subtree from a Slackware 10.0
> installation containing about 750 MB in 2200 directories and
> 37000 files. When copying these files with the target
> directory on the same device as the source directory, DomU
> achieves between 90 and 93% of the Dom0 performance. When
> copying from a directory on one device into a directory on
> another device, DomU lags further behind: it reaches only 50
> to 60 percent of the Dom0 performance, and is actually slower
> than when using a single disk. I also found that the sum of
> the utilisation of the two disks, as reported by iostat in
> Dom0, is always slightly above 100%, yet neither device is
> ever 100% busy. Does this reflect that the reads and the
> writes both go through the VBD driver?

The latest 2.0-testing tree has some further blk queue plugging
enhancements, along with a fix for another nasty performance bug. It
would be interesting to know whether that improves things.
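
For anyone not following the tree closely: "plugging" here refers to
the block-layer trick of briefly holding a queue back while requests
are being submitted, so that adjacent requests can merge, and then
releasing the accumulated batch in one go. A minimal sketch of the
idea in C, with illustrative names and sizes rather than the real
Linux/Xen code:

/* Illustrative sketch of block-queue plugging: requests are queued
 * while the queue is "plugged" so that contiguous neighbours can
 * merge, and are only dispatched once the queue is unplugged. */
#include <stdbool.h>

#define MAX_PENDING 64                  /* illustrative batch limit */

struct request {
    unsigned long sector;
    unsigned long nr_sectors;
};

struct queue {
    struct request pending[MAX_PENDING];
    int npending;
    bool plugged;
};

static void dispatch_to_device(struct request *rq)
{
    (void)rq;                           /* stand-in for real device I/O */
}

/* Plug the queue on the first request of a batch; merge a request
 * that is contiguous with the previous one instead of queueing it. */
void submit(struct queue *q, struct request rq)
{
    q->plugged = true;
    if (q->npending > 0) {
        struct request *prev = &q->pending[q->npending - 1];
        if (prev->sector + prev->nr_sectors == rq.sector) {
            prev->nr_sectors += rq.nr_sectors;   /* merged */
            return;
        }
    }
    if (q->npending < MAX_PENDING)
        q->pending[q->npending++] = rq;
}

/* Unplug when the batch is complete (or a timer fires) and push
 * everything that accumulated to the device in one go. */
void unplug(struct queue *q)
{
    for (int i = 0; i < q->npending; i++)
        dispatch_to_device(&q->pending[i]);
    q->npending = 0;
    q->plugged = false;
}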

It's possible that the blkring currently just isn't big enough if you're
trying to drive multiple devices with independent requests. 
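
To make that concrete: the frontend and backend share a fixed-size
ring of request slots, and once every slot holds an in-flight request
the frontend has to wait for responses before it can issue anything
more, no matter how many physical disks sit behind the ring. A rough
sketch of such a ring, using assumed names and an assumed slot count
rather than the actual blkif definitions:

/* Rough sketch of a fixed-size shared request ring between a
 * frontend (domU) and a backend (dom0); the names and the slot
 * count are illustrative, not the real blkif interface. */
#define RING_SIZE 64                  /* assumed number of slots */

struct blk_request {
    unsigned int  device;             /* which virtual disk */
    unsigned long sector;             /* start of the transfer */
};

struct blk_response {
    int status;                       /* completion status */
};

struct shared_ring {
    unsigned int req_prod;            /* advanced by the frontend */
    unsigned int rsp_prod;            /* advanced by the backend */
    struct blk_request  req[RING_SIZE];
    struct blk_response rsp[RING_SIZE];
};

/* The frontend may only queue a request while a slot is free; with
 * two busy devices sharing one ring, neither can keep a full
 * pipeline of in-flight requests. */
static int ring_full(const struct shared_ring *r, unsigned int rsp_cons)
{
    return (r->req_prod - rsp_cons) >= RING_SIZE;
}

If the ring fills up with requests for one disk, requests for the
other stall behind them even though that disk is idle, which would
fit the observation that neither device ever reaches 100% busy.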

Ian

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

