
Re: [Xen-users] RFH: loopback & blktap(2) and CDROM



I wanted to open a new thread about blktap2 and blktap, but I'll post here instead. I have been using a loop device for all my VMs because I can't seem to find blktap2 in any of the newer kernels. Has blktap2 been dropped, even though it is better than blktap? Also, for some odd reason I have never been able to bring up my VMs with blktap (tap:tapdisk:aio): they all get stuck at 'XENBUS: Waiting for devices to initialise: 295s...'.
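
For reference, the kind of disk line I mean (the image path is just an example):

    disk = [ 'tap:tapdisk:aio:/var/lib/xen/images/guest.img,xvda,w' ]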

Am I doing something wrong? I have seen this happen on two or three different VM servers with different dom0 kernels, which is quite odd.

Thanks!

> From: Ian.Campbell@xxxxxxxxxx
> To: hahn@xxxxxxxxxxxxx
> Date: Fri, 9 Nov 2012 15:49:32 +0000
> CC: Xen-users@xxxxxxxxxxxxx
> Subject: Re: [Xen-users] RFH: loopback & blktap(2) and CDROM
>
> On Sat, 2012-10-27 at 09:54 +0100, Philipp Hahn wrote:
> > 1. With pure-HV the domU gets an emulated IDE (or whatever) disk. The
> > emulation is done by qemu-dm, which opens whatever is given to it: directly a
> > file, a block-device in dom0, etc.
>
> Correct.
>
> In this case I think the PV device is technically created but never
> opened since the frontend never connects.
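
For illustration, a pure-HVM disk where qemu-dm opens the image file itself might be configured like this (path hypothetical):

    disk = [ 'file:/var/lib/xen/images/hvm-guest.img,hda,w' ]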
>
> > 2. With pure-PV the domU does not get an emulated IDE, but must use blkfront
> > to access the disk. For this to work blkback needs a block device, which is
> > either taken directly from a dom0 block device or provided via loopback or
> > blktap.
>
> specifically blktap2, yes. blktap1 worked differently (as will blktap3!)
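
As a concrete sketch (paths and names are placeholders), the two pure-PV variants would be:

    disk = [ 'phy:/dev/vg0/pv-guest,xvda,w' ]                # dom0 block device, straight into blkback
    disk = [ 'tap:aio:/var/lib/xen/images/pv-guest.img,xvda,w' ]  # file image via blktap2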
>
> > 3. Because of PVonHVM Xen provides the domU with both an IDE view and an XVD
> > view of every block device. The domU boots using the IDE view and, when the PV
> > drivers are loaded, disables the emulated view and switches over to the XVD
> > view.
>
> Correct.
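
So with a config along these lines (path hypothetical), the guest boots from the emulated /dev/hda and switches to /dev/xvda once xen-blkfront loads:

    disk = [ 'phy:/dev/vg0/guest-root,hda,w' ]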
>
> > Now some questions:
> >
> > 4. I read somewhere that "mixing loopback with tapdisk is a bad idea", but I
> > can't find that again, so I'm wondering if that (wrong) claim somehow got
> > stuck in my memory. I can only imagine two scenarios:
> > a) one domU accessing two disk images, one with loopback, the other one with
> > blktap. That looks okay.
> > b) two domUs accessing the same disk image, one with loopback, the other one
> > with blktap. That looks broken, but neither would I expect that to work
> > (shared disk semantics aside).
>
> loopback is dangerous because it reports success before the data has
> really hit the underlying device. This is a problem because e.g. your
> filesystem journalling relies on these guarantees. This affects anything
> using it, including blkback (this is one of the reasons xl doesn't set up
> this configuration automatically).
>
> There have been patches to fix this (by implementing O_DIRECT in the
> loop driver, IIRC) ages ago, which looked like they were being resurrected
> recently (upstream, for non-Xen-related reasons I can't recall), but I
> don't know what the status is there.
>
> One reason I can think of for your scenario b) to be dangerous is if qemu
> doesn't use O_DIRECT while blktap2 does. So qemu gets caching while
> blktap2 doesn't, and all hell breaks loose. It's probably OK for r/o
> devices, though, and r/w shared disks need care for plenty of other reasons!
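
For what it's worth, a sufficiently recent losetup/kernel combination can request O_DIRECT on a loop device explicitly (path hypothetical):

    losetup --direct-io=on /dev/loop0 /var/lib/xen/images/guest.img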
>
> > 5. While experimenting with a Linux domU I noticed that the boot process
> > sometimes gets stuck when I declare one disk as "hda" and a second one
> > as "xvda". The 2.6.32 kernel detects a clash in /sys/block naming and I'm
> stuck in the "XENBUS: Waiting for devices to initialise: 295s..." countdown.
> > Is this because hda and xvda overlap because of the PVonHVM case
> > (ide-block-major has 64 minors per device for partitions, while
> > scsi-block-major and xen-block-major only have 16 minors per device
>
> When you ask for hda you actually get hda+xvda in order to allow for the
> switchover described above. So you've actually asked for 2 xvda's --
> don't do that ;-)
>
> > , that is
> > hda=xvd[abcd], hdb=xvd[efgh], ...)?
>
> Almost. hda=xvda, hda[1234...]=xvda[1234...], hdb=xvdb ... hdd=xvdd.
>
> There is no hde so xvde is a good starting point for pure PV disks if
> you are mixing ide and pure PV.
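
E.g. something like this (paths hypothetical) keeps the two namespaces apart:

    disk = [ 'phy:/dev/vg0/root,hda,w',
             'phy:/dev/vg0/data,xvde,w' ]   # xvde onwards is clear of hda..hdd (= xvda..xvdd)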
>
> > 6. How should I make an .iso image file accessible to a domU?
> > If I use tap:/var/lib/libvirt/images/some.iso, tapdisk2 claims the image and
> > passes phy:/dev/xen/blktapX to qemu-dm, which I can access fine, but eject
> > does not work, since qemu only sees the phy: device and can't open another
> > file.
> > xen-blkfront in PVonHVM and the Windows GPLPV driver both reject CDROM
> > devices, so the CDROM remains IDE-emulated.
>
> That's right -- PV drivers aren't used for HVM CD-ROM devices so that
> media change etc. can be supported. I think in this case you want to use
> file: and let qemu open the file directly. There will be no PV path in
> this case, so no need for tap etc.
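
I.e. something along these lines, reusing the path from above (hdc:cdrom being the usual xend way to mark a device as a CD-ROM):

    disk = [ 'file:/var/lib/libvirt/images/some.iso,hdc:cdrom,r' ]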
>
> > So this looks like I should use file:ioemu:/var/lib/libvirt/images/some.iso
> > instead for HV domUs, because QEMU would be able to open the file itself (and
> > change it). Any loopback or blktap would be pointless, because the PVonHVM
> > drivers refuse to work for CDROMs anyway.
>
> Correct.
>
> > But for PV-domUs there's no qemu-dm doing IDE emulation, so using blktap or
> > loopback there is a must.
>
> Correct. You don't get any media change etc. capabilities here. Anything
> which looks like media change for a PV guest is actually doing hotplug of
> the vbd device.
>
> loopback isn't so dangerous for CD-ROMs since they are read-only.
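
So for a PV guest the ISO can simply go in read-only via loopback, e.g. (the vdev is a placeholder, starting at xvde per the advice above):

    disk = [ 'file:/var/lib/libvirt/images/some.iso,xvdf,r' ]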
>
> > Correct?
>
> I think you've mostly got it right, yes.
>
> You are using xend, whereas my most up-to-date knowledge is of libxl. There
> are a few subtle differences regarding which backend is selected for various
> disk configurations, but I think they are broadly speaking the same.
>
> Ian.
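
For comparison, a roughly equivalent disk line in the newer xl/libxl key=value syntax would be something like (path hypothetical):

    disk = [ 'format=raw, vdev=xvda, access=rw, target=/var/lib/xen/images/guest.img' ]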
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users

 

