Re: [Xen-devel] Development status for Xen with Ceph as replicated storage
On Thu, May 24, 2018 at 12:01 PM, thg <nospam@xxxxxxxxx> wrote:
> Hi everybody,
>
> in 2013 there was an announcement that XenServer would fully support
> RBDs from Ceph, to use them as block devices for VMs (see
> <https://ceph.com/geen-categorie/xenserver-support-for-rbd/>).
>
> What about Xen itself -- how is this supported? I know that you can
> map an RBD as a device and put a VM image on it, but this is a
> "manual" process and thus not usable for cloud servers with many VMs.
>
> Does anybody have experience with this or another (working) option?

Well, fundamentally *something* has to convert block reads and writes
into network packets speaking the Ceph protocol. There are three basic
ways this could be done:

1. The guest could have a Ceph driver and speak to Ceph directly.

2. You could have a user-level process which speaks both Ceph and some
   other protocol (say, the Xen PV protocol or an emulated disk) and
   does the conversion; for example, QEMU.

3. You could have the dom0 kernel do it.

#3 is what you describe here. The issue I suspect you're facing is
that you don't want to have to manually create device nodes every time
you create a guest, and then copy each device node into the guest's
config file.

The solution to that for upstream Xen is block hotplug scripts. You
can see examples in /etc/xen/scripts/block-*; for example,
block-iscsi. It looks like someone has started to do that work here:

https://github.com/FlorianHeigl/xen-ceph-rbd

The warning there about it not working with pygrub is out of date --
that should work now (but you'll probably want to give it some good
testing just in case).

 -George
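
For reference, the manual dom0-kernel flow (option #3) that thg
describes looks roughly like this. The pool and image names here are
made up for illustration, and it assumes the rbd kernel module is
loaded and /etc/ceph/ceph.conf is in place:

    # Map the image through the dom0 kernel driver; rbd prints the
    # resulting device node (e.g. /dev/rbd0) on stdout.
    rbd map rbd/vm-disk-01
    rbd showmapped                # confirm the mapping

    # Reference that device node by hand in the guest config file:
    #   disk = [ 'phy:/dev/rbd0,xvda,w' ]
    xl create /etc/xen/vm-01.cfg

    # After the guest is destroyed, tear the mapping down again.
    rbd unmap /dev/rbd0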
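
Option #2 can also be driven straight from the guest config, since
QEMU's qdisk backend understands rbd: target strings -- provided your
QEMU was built with Ceph support. A hypothetical disk line (image name
again made up; check the xl-disk-configuration document for the exact
syntax of your release):

    disk = [ 'format=raw, vdev=xvda, access=rw,
              backendtype=qdisk, target=rbd:rbd/vm-disk-01' ]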
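
And a heavily simplified sketch of what a block hotplug script for RBD
could look like, modeled on block-iscsi. This is not the script from
the xen-ceph-rbd repository above, just an illustration of the idea;
write_dev and fatal come from the stock block-common.sh, and
XENBUS_PATH is set by the hotplug machinery:

    #!/bin/bash
    # block-rbd (sketch): map a Ceph RBD image when a vbd is added,
    # unmap it when the vbd is removed.

    dir=$(dirname "$0")
    . "$dir/block-common.sh"      # provides write_dev, fatal, etc.

    command=$1

    case "$command" in
      add)
        command -v rbd >/dev/null || fatal "rbd tool not found"
        # "params" holds whatever was given as target= in the disk
        # spec, e.g. "rbd/vm-disk-01".
        target=$(xenstore-read "$XENBUS_PATH/params") \
            || fatal "no params in xenstore"
        dev=$(rbd map "$target") || fatal "rbd map $target failed"
        # Hand the resulting dom0 device node to blkback.
        write_dev "$dev"
        ;;
      remove)
        target=$(xenstore-read "$XENBUS_PATH/params")
        # Newer rbd releases accept the image spec directly; older
        # ones need the /dev/rbdN node from 'rbd showmapped'.
        rbd unmap "$target" || true
        ;;
    esac

With a script like that installed as /etc/xen/scripts/block-rbd, the
per-guest configuration shrinks to something like

    disk = [ 'vdev=xvda, access=w, script=block-rbd, target=rbd/vm-disk-01' ]

and the mapping is created and torn down automatically with the guest,
rather than by hand.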