
Re: [Xen-devel] [PATCH v1 7/7] xen-block: implement indirect descriptors



On Wed, Apr 17 2013, Konrad Rzeszutek Wilk wrote:
> On Wed, Apr 17, 2013 at 07:04:51PM +0200, Roger Pau Monné wrote:
> > On 17/04/13 16:25, Konrad Rzeszutek Wilk wrote:
> > >>> Perhaps the xen-blkfront part of the patch should just be split out
> > >>> to make this easier?
> > >>>
> > >>> Perhaps what we really should have is just the 'max' value of megabytes
> > >>> we want to handle on the ring.
> > >>>
> > >>> As right now 32 ring requests * 32 segments * 4KB = 4MB. But if the
> > >>> user wants to set the max to 4096 segments per request: 32 * 4096 *
> > >>> 4KB = 512MB (right? each request would now handle 16MB, and since we
> > >>> have 32 of them, that is 512MB).
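
To make the numbers above concrete, here is a minimal worked sketch,
assuming 4 KiB grant pages and a 32-entry ring as in the discussion (the
helper names are illustrative only, not part of the patch):

    /* Bytes carried per request and per full ring, for a given
     * per-request segment count (each segment is one 4 KiB grant). */
    #define GRANT_PAGE_SIZE  4096u
    #define RING_REQUESTS    32u

    static unsigned long bytes_per_request(unsigned int segs)
    {
            return (unsigned long)segs * GRANT_PAGE_SIZE;
    }

    static unsigned long bytes_on_ring(unsigned int segs)
    {
            return RING_REQUESTS * bytes_per_request(segs);
    }

    /* bytes_per_request(32)   = 128 KiB, bytes_on_ring(32)   =   4 MiB */
    /* bytes_per_request(4096) =  16 MiB, bytes_on_ring(4096) = 512 MiB */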
> > >>
> > >> I've just set that to something that brings a performance benefit
> > >> without having to map an insane number of persistent grants in blkback.
> > >>
> > >> Yes, the values are correct, but the device request queue (rq) is only
> > >> able to provide read requests with 64 segments or write requests with
> > >> 128 segments. I haven't been able to get larger requests, even when
> > >> setting this to 512 or higher.
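
How far the block layer merges is bounded by the limits the driver
advertises on its request_queue. A rough sketch of the kind of setup
xen-blkfront performs at queue initialisation follows; the function and
variable names here are illustrative and the exact values used by the
patch may differ:

    #include <linux/blkdev.h>

    /* "rq" is the frontend's request_queue; "segs" is the per-request
     * segment limit negotiated with the backend (illustrative value). */
    static void example_set_queue_limits(struct request_queue *rq,
                                         unsigned int segs)
    {
            /* Each segment maps to one granted page. */
            blk_queue_max_segments(rq, segs);
            blk_queue_max_segment_size(rq, PAGE_SIZE);
            blk_queue_segment_boundary(rq, PAGE_SIZE - 1);
            /* Cap the request size to match: segs pages, in 512-byte sectors. */
            blk_queue_max_hw_sectors(rq, segs * (PAGE_SIZE / 512));
    }

Even with larger advertised limits, the request sizes actually seen also
depend on how the I/O is submitted (read-ahead window, direct vs. buffered
I/O, queue depth), which may be where the 64/128 segment ceilings observed
here come from.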
> > > 
> > > What are you using to drive the requests? 'fio'?
> > 
> > Yes, I've tried fio with several "bs=" values, but it doesn't seem to
> > change the size of the underlying requests. Have you been able to get
> > bigger requests?
> 
> Martin, Jens,
> Any way to drive more than 128 segments?

If the driver is bio-based, then there's a natural size constraint on
the number of vecs in the bio. So to get truly large requests, the
driver would need to merge incoming sequential IOs itself (similar to
how it's done for rq-based drivers).
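
For reference, a rough sketch of the contrast Jens describes; the function
names are illustrative, and the assumption is a contemporary kernel where a
single bio is limited to BIO_MAX_PAGES (256) vecs, i.e. about 1 MiB of
4 KiB pages:

    #include <linux/blkdev.h>
    #include <linux/bio.h>

    /* Bio-based: the driver sees one bio at a time (bio->bi_vcnt is at
     * most BIO_MAX_PAGES) and would have to merge sequential bios itself. */
    static void example_make_request(struct request_queue *q, struct bio *bio)
    {
            /* handle a single, size-limited bio */
    }

    /* Rq-based: the driver pulls struct request off the queue; the block
     * layer has already merged sequential bios up to the advertised
     * segment limits. */
    static void example_do_request(struct request_queue *q)
    {
            struct request *req;

            while ((req = blk_fetch_request(q)) != NULL) {
                    /* req may carry many merged bios/segments */
                    __blk_end_request_all(req, 0);  /* complete (illustrative) */
            }
    }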

-- 
Jens Axboe


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

