
Re: [Xen-users] RFH: loopback & blktap(2) and CDROM



On Wed, 14 Nov 2012, Ian Campbell wrote:
> On Wed, 2012-11-14 at 11:13 +0000, Philipp Hahn wrote:
> > Hello Ian,
> > 
> > thank you for your excellent answer.
> > 
> > On Friday 09 November 2012 16:49:32 Ian Campbell wrote:
> > > > 4. I read somewhere that "mixing loopback with tapdisk is a bad idea",
> > > > but I can't find that again, so I'm wondering if that (wrong) claim got
> > > > somehow stuck in my memory. I can only imagine two scenarios:
> > > > a) one domU accessing two disk images, one with loopback, the other one
> > > > with blktap. That looks okay.
> > > > b) two domUs accessing the same disk image, one with loopback, the other
> > > > one with blktap. That looks broken, but I wouldn't expect that to work
> > > > anyway (shared disk semantics aside).
> > >
> > > loopback is dangerous because it reports success before the data has
> > > really hit the underlying device.
> > ...
> > > Probably ok for r/o devices
> > > though and r/w shared disks need care for plenty of other reasons!
> > 
> > Okay, my current problem is getting the CDROM case right.
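
As a data point, attaching the ISO read-only should sidestep the
loopback write-caching issue; a minimal sketch of the relevant domU
config line (the path is only an example):

    # loopback-backed, read-only CDROM
    disk = [ 'file:/srv/images/install.iso,hdc:cdrom,r' ]
    # or the blktap equivalent:
    # disk = [ 'tap:aio:/srv/images/install.iso,hdc:cdrom,r' ]
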
> > 
> > > > 5. While experimenting with a Linux domU I noticed that the boot process
> > > > sometimes gets stuck when I declare one disk as "hda" and a second one
> > > > as "xvda". The 2.6.32 kernel detects a clash in /sys/block naming and I'm
> > > > stuck in the "XENBUS: Waiting for devices to initialise: 295s..."
> > > > countdown. Is this because hda and xvda overlap in the PVonHVM case
> > > > (ide-block-major has 64 minors per device for partitions, while
> > > > scsi-block-major and xen-block-major only have 16 minors per device
> > >
> > > When you ask for hda you actually get hda+xvda in order to allow for the
> > > switchover described above. So you've actually asked for 2 xvda's --
> > > don't do that ;-)
> > >
> > > > , that is
> > > > hda=xvd[abcd], hdb=xvd[efgh], ...)?
> > >
> > > Almost. hda=xvda, hda[1234...]=xvda[1234...], hdb=xvdb ... hdd=xvdd.
> > >
> > > There is no hde so xvde is a good starting point for pure PV disks if
> > > you are mixing ide and pure PV.
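
Put together, a non-clashing mix might look like this (the volume names
are purely illustrative):

    # 'hda': emulated IDE disk, which also shows up as PV xvda after unplug
    # 'xvde': pure PV disk, starting at xvde to stay clear of hd[a-d]/xvd[a-d]
    disk = [ 'phy:/dev/VG/guest-root,hda,w',
             'phy:/dev/VG/guest-data,xvde,w' ]
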
> > 
> > With Linux 3.2.30 as domU I get the same as you, but with 2.6.32 I get
> > something different:
> 
> Ah, that's right. Some early versions of PVHVM disk support tried to
> rename things to avoid clashes. I suspect that either 2.6.32 or the
> backport to Debian of the PVHVM stuff might have included that.
> 196cfe2ae8fcdc03b3c7d627e7dfe8c0ce7229f9 is the upstream commit which
> removed this behaviour.
> 
> >  domU is configured with hda=hdb=disk, hdc=cdrom, but inside the 
> > domU I get /dev/xvda and /dev/xvde for the disk, and /dev/scd0 for the
> > cdrom.
> 
> xvda and xvde is odd, I'd have expected either xvda+b or xvde+f. Perhaps
> Stefano can remember what the behaviour was supposed to be here.

there might have been a version of the blkfront patch where, if you had
hda and xvda in your config file, you would get:

xvda - the PV disk corresponding to hda
xvde - the PV disk that is called xvda in your config file but that has
       been renamed to avoid clashes

upstream that behavior is long gone
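
For example, with a config file along these lines (names made up):

    disk = [ 'phy:/dev/VG/sys,hda,w',
             'phy:/dev/VG/data,xvda,w' ]

that old blkfront would have exposed hda as /dev/xvda and renamed the
configured xvda to /dev/xvde; upstream no longer renames, so the two
would simply clash and you shouldn't ask for both.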


> > With "xen_emul_unplug=never" I get hda* and hdb*.
> 
> As expected, good.
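
For reference, xen_emul_unplug=never just goes on the domU kernel
command line in the guest's bootloader entry, e.g. (kernel image and
root device are illustrative):

    kernel /vmlinuz-2.6.32 root=/dev/hda1 ro xen_emul_unplug=never
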
> 
> > For testing I created 60 partitions (3 primary + 57 extended) on hdb, but as
> > most SCSI, SATA and Xen block majors only support 16 minors per device, I
> > only see the first 15 partitions on /dev/xvde{,1..15}. With
> > "xen_emul_unplug=never" I see them all, but 16..60 are provided by
> > block-major 259 (blkext).
> 
> Hrm. I wonder if blkfront needs to do some magic to enable this blkext
> thing then?

I thought it was not possible to have more than 16 partitions on an
IDE disk; that would be the reason why you also can't have more than 16
partitions on a PV disk corresponding to an emulated IDE disk (xvda
corresponding to hda).
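
You can check how the minors are carved up from inside the domU, e.g.:

    # major/minor numbers for every partition the kernel knows about
    cat /proc/partitions
    # minors reserved by the xvde gendisk: classic range vs. extended (blkext) range
    cat /sys/block/xvde/range /sys/block/xvde/ext_range

(assuming those sysfs attributes exist on your kernel)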

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users


 

