Re: [Xen-users] HVM domU on storage driver domain
On Tue, Jan 17, 2017 at 10:06:07PM +0800, G.R. wrote:
> On Tue, Jan 17, 2017 at 7:57 PM, Kuba <kuba.0000@xxxxx> wrote:
> > On 2017-01-16 17:06, G.R. wrote:
> > > Hi all,
> > > I'm trying out the storage driver domain feature
> >
> > Hi
> >
> > A while ago, with a great deal of help from Roger Pau Monné, I managed
> > to use a FreeBSD domU as a storage driver domain providing storage for
> > other domUs.
> >
> > The main difference is that it didn't require any network-based
> > protocol (iSCSI etc.) between the domains.
> >
> > Typically your domU's frontend driver is connected to a block device
> > inside dom0 via dom0's backend driver. But Xen has the ability to
> > connect your domU's frontend driver directly to the backend driver of
> > another domU. In short, you can create a storage driver domain, create
> > a block device inside it (e.g. a zvol) and then create another domU
> > using this block device directly, just as if it were provided by dom0.
> >
> > Here you can find the steps that should get you started. It was a
> > while ago and required applying a patch to Xen; I don't know what its
> > status is right now, but since FreeNAS is based on FreeBSD, it might
> > be worth taking a look:
> >
> > https://lists.xenproject.org/archives/html/xen-users/2014-08/msg00003.html
>
> Hi Kuba,
> The information you provided sounds fairly interesting! Thank you soooo
> much~~
> Strangely enough, the same patch quoted in your link is still relevant
> and required after 2.5 years and 4 major releases!
> Roger, did you mean to submit your patch, but it somehow got lost?

I guess I completely forgot about it and never properly sent it to the
list, sorry. The problem is that I no longer have a system that would
allow me to test it, so if I need to resend it I would need some
confirmation that it's still working as expected. From code inspection
the issue still seems to be there.

> Without the patch:
>
> frontend `/local/domain/5/device/vbd/51712' devtype `vbd' expected backend `/local/domain/0/backend/qdisk/5/51712' got `/local/domain/1/backend/vbd/5/51712', ignoring
> frontend `/local/domain/5/device/vbd/51712' devtype `vbd' expected backend `/local/domain/0/backend/qdisk/5/51712' got `/local/domain/1/backend/vbd/5/51712', ignoring
>
> With the patch:
>
> Using xvda for guest's hda
> ******************* BLKFRONT for /local/domain/9/device/vbd/51712 **********
>
> backend at /local/domain/1/backend/vbd/9/51712
> 156250000 sectors of 512 bytes
> **************************
> blk_open(/local/domain/9/device/vbd/51712) -> 5
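For reference, that second log is what it should look like: the frontend
is attaching to the vbd backend in domain 1, i.e. the driver domain. The
guest side only has to name the driver domain as the backend of its disk;
a rough sketch (the domain name "storage" and the zvol path are made-up
examples, and I'm assuming the volume is exported through the phy/blkback
backend):

    # in the consuming guest's xl config file (illustrative names only)
    disk = [ 'format=raw, vdev=xvda, backendtype=phy, backend=storage, target=/dev/zvol/tank/guest-disk' ]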
> However, I do NOT have the luck Kuba had in getting a working system.
> (My first attempt yesterday at least gave me a boot screen :-))
> What I see are the following errors:
>
> Parsing config from ruibox.cfg
> libxl: error: libxl_dm.c:1963:stubdom_xswait_cb: Stubdom 9 for 8 startup: startup timed out
> libxl: error: libxl_create.c:1504:domcreate_devmodel_started: device model did not start: -9
> libxl: error: libxl_device.c:1264:device_destroy_be_watch_cb: timed out while waiting for /local/domain/1/backend/vbd/9/51712 to be removed
> libxl: error: libxl.c:1647:devices_destroy_cb: libxl__devices_destroy failed for 9
> libxl: error: libxl_device.c:1264:device_destroy_be_watch_cb: timed out while waiting for /local/domain/1/backend/vbd/8/51712 to be removed
> libxl: error: libxl.c:1647:devices_destroy_cb: libxl__devices_destroy failed for 8
> libxl: error: libxl.c:1575:libxl__destroy_domid: non-existant domain 8
> libxl: error: libxl.c:1534:domain_destroy_callback: unable to destroy guest with domid 8
> libxl: error: libxl.c:1463:domain_destroy_cb: destruction of domain 8 failed

I'm not really sure what's going wrong here. Did you create the driver
domain guest with "driver_domain=1" in its config file?
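To spell that out, the driver domain itself needs to be created with that
flag, and the xl hotplug daemon has to be running inside it so that
backends actually get set up and torn down. A rough sketch, with a made-up
domain name:

    # xl config file of the storage driver domain
    name = "storage"
    driver_domain = 1

and then, inside the running driver domain:

    xl devd

If that daemon isn't running, backend setup and teardown in the driver
domain never completes, which could explain the timeouts above.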
Roger.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
https://lists.xen.org/xen-users