
Re: [Xen-devel] xen-blkfront: simplify resume?



On Thu, 2011-03-24 at 17:47 -0400, Keir Fraser wrote:
> On 24/03/2011 09:31, "Daniel Stodden" <daniel.stodden@xxxxxxxxxx> wrote:
> 
> > Dear xen-devel.
> > 
> > I think the blkif_recover (blkfront's transparent VM resume) stuff looks
> > quite overcomplicated.
> > 
> > We copy the ring message to a shadow request allocated during submit, a
> > process involving a non-obvious-looking get_id_from_freelist()
> > subroutine to obtain a vector slot, and a memcpy.
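
(For reference, and paraphrasing from memory so details may be off, the
allocation path in question looks roughly like this:)

    /* Grab a free shadow slot; the free list is threaded through
     * the req.id fields of the unused entries. */
    static int get_id_from_freelist(struct blkfront_info *info)
    {
        unsigned long free = info->shadow_free;
        BUG_ON(free >= BLK_RING_SIZE);
        info->shadow_free = info->shadow[free].req.id;
        info->shadow[free].req.id = 0x0fffffee; /* debug marker */
        return free;
    }

    /* ... and in blkif_queue_request(), once the message is built: */
    ring_req->id = get_id_from_freelist(info);
    ...
    info->shadow[ring_req->id].req = *ring_req; /* the memcpy in question */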
> > 
> > When receiving a resume callback from xenstore, we memcpy the entire
> > shadow vector, reset the original one to zero, then reallocate the
> > thereby freed shadow entries, and not only copy the message back onto
> > the ring, but also copy the shadow back into the shadow vector just
> > freed, to keep things consistent. Hmmm.
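
(That is, the recovery path currently does roughly the following;
paraphrasing blkif_recover() from memory:)

    /* Stage 1: copy the whole shadow vector aside. */
    copy = kmalloc(sizeof(info->shadow), GFP_NOIO | __GFP_HIGH);
    if (!copy)
        return -ENOMEM;
    memcpy(copy, info->shadow, sizeof(info->shadow));

    /* Stage 2: wipe the original and rebuild the free list. */
    memset(&info->shadow, 0, sizeof(info->shadow));
    for (i = 0; i < BLK_RING_SIZE; i++)
        info->shadow[i].req.id = i + 1;
    info->shadow_free = info->ring.req_prod_pvt;
    info->shadow[BLK_RING_SIZE - 1].req.id = 0x0fffffff; /* end of list */

    /* Stage 3: for each in-flight entry, reallocate a slot, replay
     * the message onto the ring, and copy the shadow back in. */
    for (i = 0; i < BLK_RING_SIZE; i++) {
        if (!copy[i].request)
            continue;
        req = RING_GET_REQUEST(&info->ring, info->ring.req_prod_pvt);
        *req = copy[i].req;
        req->id = get_id_from_freelist(info);
        memcpy(&info->shadow[req->id], &copy[i], sizeof(copy[i]));
        /* (grant references are also re-established around here) */
        info->shadow[req->id].req = *req;
        info->ring.req_prod_pvt++;
    }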
> > 
> > I wonder, should we just take the pending request and push it back onto
> > the request_queue (with a blk_requeue_request)?
> 
> Are you suggesting to get rid of the shadow state? It is needed, because
> in-flight requests can be overwritten by out-of-order responses written into
> the shared ring by the backend driver.

I was suggesting just that, while missing the somewhat essential fact
that we're currently using the segment vectors in shadow state as the
single backing store for our gref lists. :)
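
For reference, the shadow entry looks roughly like this (field names
from memory):

    struct blk_shadow {
        struct blkif_request req;  /* full copy of the ring message;
                                    * its seg[j].gref fields are the
                                    * only place the grant refs live */
        struct request *request;   /* the blk-layer request */
        unsigned long frame[BLKIF_MAX_SEGMENTS_PER_REQUEST];
    };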

I'm aware that this is a duplex channel sharing message slots, and I
also wouldn't suggest some daredevil mode which reads critical state
back from the sring even if that were not the case.

Now, blkif segments are by far the most significant payload, so there's
not much point in isolating them. Nor does scattering the memcpys look
like a particularly good idea.

Also, one might want to add at least a few more paranoia BUG_ON/fail-if
checks for request/response mismatches (id, op, etc.) than we currently
have.
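
Something along these lines in the response handler, say (sketch only):

    bret = RING_GET_RESPONSE(&info->ring, i);
    id = bret->id;
    BUG_ON(id >= BLK_RING_SIZE);        /* bogus id, would index OOB */
    BUG_ON(!info->shadow[id].request);  /* response for a free slot */
    BUG_ON(bret->operation != info->shadow[id].req.operation);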

So keeping the full message makes perfect sense.

In summary, yesterday's idea was 'Yeah, maybe'. Right now it's rather
'hell, no' :)

Still, pushing requests back onto the queue seems more straightforward
than what's happening now, provided I can get it to run and it still
looks good.
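
In rough strokes, for each in-flight entry found on resume (sketch,
untested; 'copy' being the saved shadow vector from above):

    struct request *rq = copy[i].request;
    unsigned long flags;

    if (rq) {
        /* blk_requeue_request() wants the queue lock held. */
        spin_lock_irqsave(info->rq->queue_lock, flags);
        blk_requeue_request(info->rq, rq);
        spin_unlock_irqrestore(info->rq->queue_lock, flags);
    }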

Also, I might have found a pretty optimization for the shadow copies.

Cheers + Thanks,
Daniel

>  -- Keir
> 
> > Different from the present code, this should also help preserve original
> > submit order if done right. (Don't panic, not like it matters a lot
> > anymore since the block barrier flags are gone.)
> > 
> > If we want to keep the shadow copy, let's do so with a prep_rq_fn. It
> > gets called before the request gets pulled off the queue. Looks nicer,
> > and one can arrange things so it only gets called once.
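
(As a sketch of what I have in mind here, untested:)

    /* Make the shadow copy exactly once, before the request is
     * pulled off the queue. */
    static int blkif_prep_rq(struct request_queue *q, struct request *rq)
    {
        /* build/refresh the shadow state for rq here */
        rq->cmd_flags |= REQ_DONTPREP; /* block layer skips prep next time */
        return BLKPREP_OK;
    }

    /* and at queue setup time: */
    blk_queue_prep_rq(info->rq, blkif_prep_rq);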
> > 
> > Counter opinions?
> > 
> > Thanks,
> > Daniel
> > 
> 
> 


