
Re: [Xen-devel] [PATCH v3] xen-blkfront: dynamic configuration of per-vbd resources



On Wed, Jul 27, 2016 at 07:21:05PM +0800, Bob Liu wrote:
> 
> On 07/27/2016 06:59 PM, Roger Pau Monné wrote:
> > On Wed, Jul 27, 2016 at 11:21:25AM +0800, Bob Liu wrote:
> > [...]
> >> +static ssize_t dynamic_reconfig_device(struct blkfront_info *info, ssize_t count)
> >> +{
> >> +  /*
> >> +   * Prevent new requests even to software request queue.
> >> +   */
> >> +  blk_mq_freeze_queue(info->rq);
> >> +
> >> +  /*
> >> +   * Guarantee no uncompleted reqs.
> >> +   */
> > 
> > I'm also wondering, why do you need to guarantee that there are no 
> > uncompleted requests? The resume procedure is going to call blkif_recover 
> > that will take care of requeuing any unfinished requests that are on the 
> > ring.
> > 
> 
> Because there may be requests in the software request queue with more
> segments than we can handle (if info->max_indirect_segments is reduced).
> 
> blkif_recover() can't handle this since blk-mq was introduced, because
> there is no way to iterate the software request queues
> (blk_fetch_request() can't be used with blk-mq).
> 
> So there is a bug in blkif_recover(); I was thinking of implementing the
> suspend function of blkfront_driver like:

Hm, this is a regression and should be fixed ASAP. I'm still not sure I
follow: doesn't blk_queue_max_segments change the number of segments that
requests placed on the queue are going to have? Then you would only have to
re-queue the requests that are already on the ring.
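
For illustration, the flow I have in mind is roughly the following. This is
only a sketch: the helper name is made up, the shadow bookkeeping is
simplified (the real driver keeps it per ring), and blk_mq_requeue_request()
with the extra bool argument assumes a recent blk-mq API.

/*
 * Sketch: shrink the advertised limit first, then push back only the
 * requests that were already placed on the shared ring.  Requests
 * assembled by the block layer after blk_queue_max_segments() will
 * already fit the reduced number of indirect segments.
 */
static void requeue_ring_requests(struct blkfront_info *info,
                                  unsigned int new_max_segs)
{
        int i;

        blk_queue_max_segments(info->rq, new_max_segs);

        for (i = 0; i < BLK_RING_SIZE(info); i++) {
                struct request *req = info->shadow[i].request;

                if (!req)
                        continue;

                /* Hand the unfinished request back to blk-mq. */
                blk_mq_requeue_request(req, false);
        }
        blk_mq_kick_requeue_list(info->rq);
}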

Waiting for the whole queue to be flushed before suspending is IMHO not
acceptable: it introduces an unbounded delay during migration if the backend
is slow for some reason.
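
For contrast, the drain-based sequence I understand the patch to implement
looks roughly like this (again just a sketch; the wait queue and in-flight
counter are invented names, not fields of the real struct blkfront_info):

static int drain_and_reconfig(struct blkfront_info *info,
                              unsigned int new_max_segs)
{
        /* No new requests may enter the queue from now on. */
        blk_mq_freeze_queue(info->rq);

        /*
         * Wait until every request already sent to the backend has been
         * answered.  This is the unbounded wait: a slow or stuck backend
         * delays suspend indefinitely.
         */
        wait_event(info->wq, atomic_read(&info->ring_inflight) == 0);

        /* Only now is it safe to shrink the per-request segment limit. */
        blk_queue_max_segments(info->rq, new_max_segs);

        blk_mq_unfreeze_queue(info->rq);
        return 0;
}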

Roger.


 

