
Re: [Xen-users] where does qemu fit in?

Ian Campbell wrote:
On Wed, 2013-01-02 at 17:30 +0000, Miles Fidelman wrote:
Ian Campbell wrote:
On Wed, 2013-01-02 at 16:35 +0000, Miles Fidelman wrote:

Can you elaborate on this just a bit?  It sounds like one can use qemu
drivers from an HVM to access the PV drivers.  But what about the other
way around?  I'm running PV guests, so normally qemu doesn't apply,
but... what I'm trying to figure out is whether there's some way to use
the qemu drivers to access the sheepdog cluster file system, which only
provides a qemu interface.
I'm not sure what you mean by "qemu drivers". Do you mean virtio?
I still don't really know what you mean. In Xen-specific terms, qemu-dm
provides emulation of physical devices and of Xen PV devices.
Drivers are typically things in the guest which talk to those.

Well, maybe I'm using the term a bit loosely, to refer to the processing chain between an OS call from userspace and a specific hardware or virtual device.

It looks like "virtio on xen" is one implementation (per http://wiki.xen.org/wiki/Virtio_On_Xen) - though perhaps not yet integrated into the main distribution (the page seems a little dated).

Meanwhile, it looks like Qemu is the basis for Xen's native virtual device support for HVMs, but it's not clear what support is available to PV guests. (The description of virtio on xen covers both HVM guests and PV guests.) I'm still trying to understand how all the pieces fit together, particularly when running without hardware virtualization support (I'm thinking about integration with a couple of older boxes).

So it looks like sheepdog can contain disk images which are exposed to
the guest as either an emulated IDE device or a virtio block device
(although those slides mention neither explicitly), and the clustered
filesystem aspect is not actually exposed to the guest.

I suppose it ought to be possible to have the qdisk backend in Qemu use
the sheepdog backend instead of a file or block device or whatever.

It would be worth trying a vbd configured with
"backendtype=qdisk,target=sheepdog:<foo>", since sheepdog:<foo> seems to
be the syntax (judging from the "Start the VM" section of the sheepdog
documentation).
This does seem to be the experiment to try (project for the weekend, I think).

Thanks for "backendtype=qdisk,target=sheepdog:<foo>" - that's given me a good hook for searching through documentation.
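For anyone wanting to try the same experiment, the suggestion above might look like this as an xl domain config fragment. This is an untested sketch: "mydisk" is a hypothetical sheepdog VDI name, and the vdev/access choices are assumptions.

```
# xl guest config fragment (untested sketch).
# "mydisk" is a hypothetical sheepdog VDI name; vdev and access
# are assumptions chosen for a typical first PV disk.
disk = [ 'backendtype=qdisk,vdev=xvda,access=rw,target=sheepdog:mydisk' ]
```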

I don't know if the Xen toolstack supports passing arbitrary disk
configurations to qemu though, someone (like you) would need to
investigate and possibly do some plumbing on the Xen toolstack side (I'd
be happy to advise on list if it comes to this).

Ahh, and this gets to the heart of the question, and I think I found an answer in the Xen 4.2 release notes (http://wiki.xen.org/wiki/Xen_4.2_Feature_List):

" • Support for upstream qemu

 * Used by default when required for PV guests (e.g. qdisk backend or
   VFB support)"

Which seems to imply that a PV guest CAN access a qdisk, and since upstream qemu supports sheepdog, this might just all work without having to assemble a custom build. (Though I'm still trying to find an architectural diagram to help me visualize all the interfaces and data flows).
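If that reading of the release notes is right, a minimal PV guest config exercising the qdisk path might look roughly like the following. All the names, paths, and sizes here are made up for illustration, and it's untested:

```
# Minimal PV guest config sketch - every name/path is an assumption.
name    = "sheeptest"
kernel  = "/boot/vmlinuz-guest"    # hypothetical PV guest kernel
memory  = 512
vcpus   = 1
# Per the Xen 4.2 notes, a qdisk backend should cause the toolstack
# to start the upstream qemu as the disk backend, even though the
# guest itself is PV.
disk    = [ 'backendtype=qdisk,vdev=xvda,target=sheepdog:mydisk' ]
```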
The same sort of thing ought to apply to using the Ceph backend in qemu.
It would be worth googling to see if anyone has done this since I'd
expect the approach ought to carry across.

Looks like Ceph is a lot easier to integrate - since it provides a kernel-based block device interface (/dev/rbd). From the ceph documentation: "QEMU can pass a block device from the host on to a guest, but since QEMU 0.15, there's no need to map an image as a block device on the host. Instead, QEMU can access an image as a virtual block device directly via librbd."
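So for Ceph there would seem to be two routes, which might look roughly like this (pool and image names are made up, and whether xl's target= passes an rbd: spec straight through to qemu is exactly the kind of plumbing question raised above - untested):

```
# Route 1: map the image as a kernel block device on the host
# (needs the rbd kernel module), then pass it through as phy:.
#   rbd map mypool/myimage      -> creates e.g. /dev/rbd0
disk = [ 'phy:/dev/rbd0,xvda,w' ]

# Route 2: let the upstream qemu talk to librbd directly via the
# qdisk backend, with no host block device at all (assumes the
# target= string reaches qemu unmodified):
disk = [ 'backendtype=qdisk,vdev=xvda,target=rbd:mypool/myimage' ]
```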

Thanks again!


In theory, there is no difference between theory and practice.
In practice, there is.   .... Yogi Berra

Xen-users mailing list


