
Re: [Xen-devel] [PATCH] blktap2 control function (version 2)



Thanks for your responses.  Can I ask you to try to (a) separate out
these patches, and (b) explain the reasoning behind them in more
detail ?  If you can provide a separate message with a separate patch
for each change, with an explanation at greater length, it will be
much easier for us to evaluate them.  Thanks.

There are a few things I can comment on as is:

eXeC001er writes ("Re: [Xen-devel] [PATCH] blktap2 control function (version 2)"):
> [Ian:]
> > Um, can you explain why this is necessary or reasonable ?  Why three
> > tries ?  Why do we need to poll for this at all ?  Surely if this
> > helps in any particular situation it means we have a race, which
> > should be fixed.
> 
> (Sometimes) when I use pygrub I get the error: Disk isn't accessible.

I'm afraid that reply doesn't address my comments.  What is the race
and why is your fix correct ?

> > 3. Bug fix for error: "Error: Device 51952 not connected" (in the config
> > file for a DomU we should use the prefix "tap2:tapdisk:xxx" (tap2:xxx) for
> > devices from (aio, ram, qcow, vhd, remus), or "tap:tapdisk:xxx" (tap:xxx)
> > for devices from (sync, vmdk, qcow2, ioemu))
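
For concreteness, disk lines following that convention would look something
like this (the image paths and device names below are made up for
illustration; only the prefixes come from your description):

    # blktap2 formats (aio, ram, qcow, vhd, remus) use the "tap2:" prefix:
    disk = [ 'tap2:tapdisk:vhd:/path/to/guest.vhd,xvda,w' ]

    # blktap1 formats (sync, vmdk, qcow2, ioemu) use the "tap:" prefix:
    disk = [ 'tap:tapdisk:qcow2:/path/to/guest.qcow2,xvda,w' ]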

After discussing this with my colleagues, it's not clear to me that we
should be exposing the difference between blktap and blktap2 in domain
config files.  xl tries blktap2 first and uses it if available, and
falls back to blktap if not.  Is that not the correct behaviour ?
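
Roughly, the selection I have in mind is something like this (just a
sketch, not the actual xl/libxl code; blktap2_available() is a made-up
name for whatever probe is used):

    # Sketch only: prefer blktap2 when present, else fall back to blktap1.
    def choose_tap_backend():
        if blktap2_available():   # hypothetical probe, e.g. checking for
            return "blktap2"      # the blktap2 control device
        return "blktap"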

> > Does that mean that xen-unstable needs fixing too ?  I'd rather apply
> > a change to xen-unstable first and test it there.
> 
> xen-unstable has my previous patch, but it can lead to a regression (if
> 'tap:xxx' is used instead of 'tap:tapdisk:xxx')

Should we revert your previous patch while we discuss it ?

Thanks,
Ian.
