
Re: [Xen-devel] [RFC v1 0/5] VBD: enlarge max segment per request in blkfront



On Mon, Sep 17, 2012 at 06:33:29AM +0000, Duan, Ronghui wrote:
> At last, I found the regression in random I/O.
> This is a patch to fix the performance regression. Originally the pending
> request members were allocated on the stack; in my last patch I allocated
> them as each request arrived, but that hurts performance. In this fix I
> allocate all of them when blkback initializes. However, due to some bugs
> there we can't free them afterwards, and the same goes for the other
> pending-request members. I am looking for the reason, but have no idea
> so far.
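For readers following the thread: below is a minimal sketch of the allocation strategy being described, assuming a fixed pool of pending requests whose per-segment arrays are carved out once at blkback init instead of on every incoming request. The names (pending_req, MAX_PENDING_REQS) echo blkback conventions, but this is illustrative, not Ronghui's actual patch:

    /* Illustrative sketch only -- not the real blkback code. Allocate
     * the per-request segment arrays once at init; the request fast
     * path then just picks a free pending_req instead of allocating. */
    #include <linux/slab.h>

    #define MAX_PENDING_REQS  64   /* assumed pool size */
    #define SEGS_PER_REQ      11   /* BLKIF_MAX_SEGMENTS_PER_REQUEST */

    struct pending_seg {
            unsigned long frame;        /* placeholder per-segment state */
    };

    struct pending_req {
            struct pending_seg *segs;   /* pre-allocated below */
    };

    static struct pending_req pending_reqs[MAX_PENDING_REQS];

    static int blkback_pool_init(void)
    {
            int i;

            for (i = 0; i < MAX_PENDING_REQS; i++) {
                    pending_reqs[i].segs =
                            kcalloc(SEGS_PER_REQ,
                                    sizeof(struct pending_seg), GFP_KERNEL);
                    if (!pending_reqs[i].segs)
                            goto unwind;    /* undo partial allocation */
            }
            return 0;
    unwind:
            while (i--)
                    kfree(pending_reqs[i].segs);
            return -ENOMEM;
    }

The matching teardown would simply kfree() each segs array; the bug Ronghui describes is that in the real code this free path (and the one for the other pending-request members) does not survive.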

Right. When I implemented something similar (allocating those pools of
pages at startup), I hit the same problem: freeing the grant array blew
up the machine.

But... this was before
http://git.kernel.org/?p=linux/kernel/git/konrad/xen.git;a=commit;h=2fc136eecd0c647a6b13fcd00d0c41a1a28f35a5

- which might be the fix for this.
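For what it's worth, the usual cause of that class of crash is freeing a page that Xen still holds a grant mapping on, so teardown has to unmap before it frees. A rough sketch of that ordering follows; the gnttab_* helpers are real grant-table API, but their exact signatures have shifted between kernel versions, so treat the details here as assumptions rather than copy-paste code:

    /* Sketch: end the grant mapping before returning the backing page.
     * Freeing a still-granted page is exactly what blows up the machine. */
    #include <xen/grant_table.h>
    #include <linux/mm.h>

    static void pool_teardown(struct page **pages,
                              grant_handle_t *handles, int count)
    {
            struct gnttab_unmap_grant_ref unmap;
            int i;

            for (i = 0; i < count; i++) {
                    unsigned long addr = (unsigned long)
                            pfn_to_kaddr(page_to_pfn(pages[i]));

                    /* One op at a time for clarity; real code batches. */
                    gnttab_set_unmap_op(&unmap, addr, GNTMAP_host_map,
                                        handles[i]);
                    if (gnttab_unmap_refs(&unmap, NULL, &pages[i], 1))
                            continue;   /* better to leak a page than to
                                         * free a live mapping */
                    __free_page(pages[i]);
            }
    }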

> Konrad, thanks for your comments. Could you give it a try when you have time?
> 
> -ronghui
> 
> > -----Original Message-----
> > From: Duan, Ronghui
> > Sent: Thursday, September 13, 2012 10:06 PM
> > To: Konrad Rzeszutek Wilk; Stefano Stabellini
> > Cc: Jan Beulich; Ian Jackson; xen-devel@xxxxxxxxxxxxx
> > Subject: RE: [Xen-devel] [RFC v1 0/5] VBD: enlarge max segment per request in blkfront
> > 
> > > > > But you certainly shouldn't be proposing features that get used
> > > > > unconditionally or by default and that benefit one class of backing
> > > > > devices while severely penalizing others.
> > > >
> > > > Right.
> > > > I am wondering... considering that the in-kernel blkback is mainly
> > > > used with physical partitions, is it possible that your patches
> > > > cause a regression with unmodified backends that don't support the
> > > > new protocol, like QEMU for example?
> > >
> > > Well, for right now I am just using the simplest configuration to
> > > eliminate any extra variables (stacking of components). So my
> > > "testing" has been just on phy:/dev/sda,xvda,w, with sda being a
> > > Corsair SSD.
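(For anyone reproducing the setup: phy:/dev/sda,xvda,w is the standard disk specification syntax from the guest's xm/xl config file, i.e. raw device, guest device name, writable:)

    # Guest config fragment: hand the whole raw /dev/sda to the guest
    # as xvda, writable, via the in-kernel blkback phy: backend.
    disk = [ 'phy:/dev/sda,xvda,w' ]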
> > 
> > I totally agree that we should not break others when enabling what we want.
> > To my mind, though, the patch adds only a little overhead in the
> > front/backend code path, so pure random I/O should see only a slight
> > slowdown. I tried the 4K read case and got just 50MB/s even without the
> > patch; I need a more powerful disk to verify it.
> > 
> > Ronghui
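The 4K read numbers quoted in this thread (the io=/bw=/iops=/runt= lines) look like fio output; a job along these lines would reproduce that style of test. The iodepth, ioengine, and target device here are guesses, not the actual job files used:

    # 4K random reads, O_DIRECT to bypass the guest page cache;
    # size=4G matches the io=4096.0MB totals quoted below.
    fio --name=randread --filename=/dev/xvda --rw=randread --bs=4k \
        --size=4G --direct=1 --ioengine=libaio --iodepth=32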
> > 
> > 
> > > -----Original Message-----
> > > From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@xxxxxxxxxx]
> > > Sent: Thursday, September 13, 2012 9:24 PM
> > > To: Stefano Stabellini
> > > Cc: Jan Beulich; Duan, Ronghui; Ian Jackson; xen-devel@xxxxxxxxxxxxx
> > > Subject: Re: [Xen-devel] [RFC v1 0/5] VBD: enlarge max segment per request in blkfront
> > >
> > > On Thu, Sep 13, 2012 at 12:05:35PM +0100, Stefano Stabellini wrote:
> > > > On Thu, 13 Sep 2012, Jan Beulich wrote:
> > > > > >>> On 13.09.12 at 04:28, "Duan, Ronghui" <ronghui.duan@xxxxxxxxx> wrote:
> > > > > >> And with your patch got:
> > > > > >>   read : io=4096.0MB, bw=92606KB/s, iops=23151, runt= 45292msec
> > > > > >>
> > > > > >> without:
> > > > > >>   read : io=4096.0MB, bw=145187KB/s, iops=36296, runt= 28889msec
> > > > > >>
> > > > > > What type of backend file are you using? To remove the influence
> > > > > > of the Dom0 cache, I use a physical partition as the backend.
> > > > >
> > > > > But you certainly shouldn't be proposing features that get used
> > > > > unconditionally or by default and that benefit one class of backing
> > > > > devices while severely penalizing others.
> > > >
> > > > Right.
> > > > I am wondering... considering that the in-kernel blkback is mainly
> > > > used with physical partitions, is it possible that your patches
> > > > cause a regression with unmodified backends that don't support the
> > > > new protocol, like QEMU for example?
> > >
> > > Well, for right now I am just using the simplest configuration to
> > > eliminate any extra variables (stacking of components). So my
> > > "testing" has been just on phy:/dev/sda,xvda,w, with sda being a
> > > Corsair SSD.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

