
Re: [Xen-devel] poor domU VBD performance.



On Tuesday 29 March 2005 16:45, Kurt Garloff wrote:
> Hi Ian,
>
> On Tue, Mar 29, 2005 at 07:09:50PM +0100, Ian Pratt wrote:
> > We'd really appreciate your help on this, or from someone else at SuSE
> > who actually understands the Linux block layer?
>
> I'm Cc'ing Jens ...
>
> > In the 2.6 blkfront driver, what scheduler should we be registering
> > with? What should we be setting as max_sectors? Are there other
> > parameters we should be setting that we aren't? (block size?)
>
> I think noop is a good choice for secondary domains, as you don't
> want to be too clever there, otherwise you stack a clever scheduler
> on top of a clever scheduler. noop basically only does front- and
> backmerging to make the request sizes larger.
>
> But you probably should initialize the readahead sectors.
>
> Please test attached patch.

This should help the case where one is doing buffered I/O (so readahead gets 
used), but for O_DIRECT I still think we will have a problem.  On Dom0, I can 
drive 58MB/sec with a sequential O_DIRECT read at just a 32k request size, 
but on domU with the same request size I can only get ~6MB/sec.  I am still 
wondering if something is up with the backend driver.  It appears that the 
backend driver only submits requests to the actual device every 10ms. 
With a much larger request size (for O_DIRECT) or a large readahead, 10ms is 
often enough to keep the disk streaming data.  With smaller request sizes or 
a small readahead, the disk just doesn't read efficiently.  

-Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

