
Re: [Xen-users] linux stubdom

On Tue, 2013-01-29 at 15:46 +0000, Markus Hochholdinger wrote:
> Hello list,
> I'm reading a lot about xen stub-domains, but I'm wondering if I can use a 
> linux stubdom to serve a "transformed" block device to the corresponding domU. 
> The wiki page(s) and the stubdom directory in the source code leave a lot of 
> questions open which I'm hoping someone can answer here.

I think the thing you are looking for is a "driver domain" rather than a
stubdomain, http://wiki.xen.org/wiki/Driver_Domain. You'll likely find
more useful info if you google for that rather than stubdomain.

> So my questions are:
> * What are the requirements to run linux inside a stubdom? Is a current pvops
>   kernel enough, or does the linux kernel have to be modified for a stubdom? If so,
>   I would prepare a kernel and a minimal rootfs within an initrd to setup my
>   blockdevice for the domU.

You can use Linux as a block driver storage domain, yes.

> * How can I offer a block device (created within the stubdom) from the stubdom
>   to the domU? Are there any docs on how to configure this?
> * If the above is not possible, how could I offer a block device from one domU
>   to another domU? Are there any docs on how to do this?

Since a driver domain is also a domU (just one which happens to provide
services to other domains) these are basically the same question. People
more often do this with network driver domains but block ought to be
possible too (although there may be a certain element of having to take
the pieces and build something yourself).

Essentially you just need to: a) make the block device bits available in
the driver domain, b) run blkback (or some other block backend) in the
driver domain, and c) tell the toolstack that a particular device is
provided by the driver domain when you build the guest.

For a) one would usually use PCI passthrough to pass a storage
controller to the driver domain and use the regular drivers in there.
But you could also use e.g. iSCSI or NFS (I guess). If you want to also
use this controller for dom0's disks then that's a bit more complex...
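To make a) concrete, here is a minimal sketch of passing a storage controller through with xl. The PCI address 03:00.0 and the config file path are placeholders, and the exact steps depend on your Xen and toolstack version:

```shell
# In dom0: make the storage controller assignable to guests instead
# of dom0 (03:00.0 is a placeholder address -- find yours with lspci).
xl pci-assignable-add 03:00.0

# In the driver domain's config file (placeholder path
# /etc/xen/storage-dom.cfg), pass the controller through so the
# regular SATA/SCSI driver inside that domain binds to it:
#   pci = [ '03:00.0' ]
```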

For b) that's just a case of compiling in the appropriate driver and
installing the appropriate hotplug scripts in the domain.
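For b), roughly the following inside the driver domain, assuming a distro kernel that builds xen-blkback as a module and ships the standard Xen hotplug scripts (paths and package names vary by distro):

```shell
# Load the block backend driver in the driver domain.
modprobe xen-blkback

# The toolstack-triggered hotplug scripts (e.g. the 'block' script)
# must also be installed so backend devices get set up on demand.
ls /etc/xen/scripts/block
```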

For c) I'm not entirely sure how you do that with either xend or xl in
practice. I know there have been some patches on xen-devel not so long
ago to improve xl's support for disk driver domains. It's possible
that you will need to hack the toolstack a bit to get this to work, and
depending on how and when the disk images are constructed you may need
some out-of-band communication between the toolstack domain and driver
domain to actually create the underlying devices.
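For what it's worth, a hypothetical guest config fragment showing what c) might look like once the toolstack supports it; 'storage-dom' and /dev/md0 are placeholder names, and the backend= key may depend on those xen-devel patches being applied:

```
disk = [ 'format=raw, vdev=xvda, access=rw, backendtype=phy, backend=storage-dom, target=/dev/md0' ]
```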

The biggest problem I can see is supporting a Windows HVM guest, since
the device model also needs access to the disk in order to provide
the emulated devices (at least initially; hopefully you have PV
drivers after that). The usual way to do this is to attach a PV device to the
domain running the device model where the backend is supported by the
driver domain as well. Again you might need to hack up a few things to
get this working.

> What I'm trying to do:
> * In my case, I make logical volumes available on all hosts with the same path
>   on each host. So I can assemble a software raid1 where each device lives on
>   a different server without losing the possibility of live migration.
> * Configure a domU with two block devices; the corresponding (linux) stub domU
>   assembles a software raid1 (linux md device) and presents this md device
>   to the domU. So the domU doesn't have to handle anything related to the
>   software raid1 but has ONE redundant block device for its usage.
> * I have two use cases. One is an HVM domU, where something like windows is
>   running, and because of the lack of (good) software raid1 there I use linux
>   software raid1 for backing the block device inside the stubdom.
>   The other use case is a PV domU where the admin of the virtualization
>   environment is not the admin of the domU and therefore isn't able to manage
>   the software raid1 inside the domU. So the stubdom could be used to manage
>   the software raid1 without interfering with the domU.
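The raid1 piece of this plan could be sketched roughly as follows, run inside the driver domain once it can see the two per-host logical volumes; all device names are placeholders:

```shell
# Inside the driver domain: the two logical volumes (one per physical
# host) show up as block devices, here placeholder names /dev/xvdb
# and /dev/xvdc. Assemble them into a raid1:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/xvdb /dev/xvdc

# /dev/md0 is then the single redundant device to export to the domU
# via blkback, as discussed earlier in the thread.
```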

Xen-users mailing list
