
Re: [Xen-users] HVM domU on storage driver domain



Hello,

Could you please fix your mail client so that it properly quotes messages?
Not adding ">" to quotes makes it hard to follow the conversation.

On Fri, Jan 20, 2017 at 10:27:35AM +0800, G.R. wrote:
> On Jan 20, 2017 at 1:41 AM, "Roger Pau Monné" <roger.pau@xxxxxxxxxx> wrote:
> 
> On Tue, Jan 17, 2017 at 10:06:07PM +0800, G.R. wrote:
> > On Tue, Jan 17, 2017 at 7:57 PM, Kuba <kuba.0000@xxxxx> wrote:
> >
> > > W dniu 2017-01-16 o 17:06, G.R. pisze:
> > >
> > >> Hi all,
> > >> I'm trying out the storage driver domain feature
> > >>
> > >
> > > Hi
> > >
> > > A while ago, with a great deal of help from Roger Pau Monné, I managed to
> > > use a FreeBSD domU as a storage driver domain to provide storage for other
> > > domUs.
> > >
> > > The main difference is that it didn't require any network-based protocol
> > > (iSCSI etc.) between the domains.
> > >
> > > Typically your domU's frontend driver is connected to a block device
> > > inside dom0 via dom0's backend driver. But Xen has the ability to connect
> > > your domU's frontend driver directly to the backend driver of another domU.
> > > In short, you can create a storage driver domain, create a block device
> > > inside it (e.g. a zvol) and then create another domU using this block
> > > device directly, just as if it were provided by dom0.
> > >
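For reference, the guest's disk line for such a setup ends up looking roughly
like the following; the driver domain name and the zvol path here are made up,
adjust them to your setup:

    # in the HVM guest's config; 'storage' is the driver domain's name and
    # 'target' is the device path as seen from inside that domain (illustrative)
    disk = [ 'format=raw, vdev=xvda, backend=storage, target=/dev/zvol/tank/hvm0' ]

The only difference from a dom0-provided disk is the backend= key naming the
other domain.
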
> > > Here you can find the steps that should get you started. It was a while
> > > ago and required applying a patch to Xen; I don't know its status right
> > > now, but since FreeNAS is based on FreeBSD, it might be worth taking a
> > > look:
> > >
> > > https://lists.xenproject.org/archives/html/xen-users/2014-08/msg00003.html
> > >
> > Hi Kuba,
> > The information you provided sounds fairly interesting! Thank you so
> > much!
> > Strangely enough, the same patch quoted in your link is still relevant and
> > required after 2.5 years and 4 major releases!
> > Roger, did you mean to submit your patch but somehow it got lost?
> 
> I guess I completely forgot about it and never properly sent it to the list,
> sorry. The problem is that now I don't have a system that would allow me to
> test it, so if I need to resend it I would need some confirmation that it's
> still working as expected. From code inspection the issue seems to still be
> there.
> 
> Yes, that patch is definitely required and working.
> While from eyeballing it I think the patch should apply directly, automatic
> patching failed and I had to apply it by hand. I suspect this is due to some
> formatting change to the patch in the email archive.
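
In case it helps anyone else hitting the same mangling, something along these
lines usually copes with whitespace damage when applying a patch saved from
the archive (the file name and -p level below are just examples):

    # patch saved from the list archive; adjust -p to match how it was generated
    patch -p1 --ignore-whitespace < stubdom-blkfront.patch
    # or, inside a git checkout of xen.git:
    git apply --3way --whitespace=fix stubdom-blkfront.patch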

OK, will try to find time to submit it.

> > Without the patch:
> > frontend `/local/domain/5/device/vbd/51712' devtype `vbd' expected backend
> > `/local/domain/0/backend/qdisk/5/51712' got
> > `/local/domain/1/backend/vbd/5/51712', ignoring
> > frontend `/local/domain/5/device/vbd/51712' devtype `vbd' expected backend
> > `/local/domain/0/backend/qdisk/5/51712' got
> > `/local/domain/1/backend/vbd/5/51712', ignoring
> >
> > With the patch:
> > Using xvda for guest's hda
> > ******************* BLKFRONT for /local/domain/9/device/vbd/51712 **********
> >
> > backend at /local/domain/1/backend/vbd/9/51712
> > 156250000 sectors of 512 bytes
> > **************************
> > blk_open(/local/domain/9/device/vbd/51712) -> 5
> >
> > However, I did NOT have the luck Kuba had in getting a working system. (My
> > first attempt yesterday at least gave me a boot screen :-))
> > What I see is the following errors:
> > Parsing config from ruibox.cfg
> > libxl: error: libxl_dm.c:1963:stubdom_xswait_cb: Stubdom 9 for 8 startup:
> > startup timed out
> > libxl: error: libxl_create.c:1504:domcreate_devmodel_started: device model
> > did not start: -9
> > libxl: error: libxl_device.c:1264:device_destroy_be_watch_cb: timed out
> > while waiting for /local/domain/1/backend/vbd/9/51712 to be removed
> > libxl: error: libxl.c:1647:devices_destroy_cb: libxl__devices_destroy
> > failed for 9
> > libxl: error: libxl_device.c:1264:device_destroy_be_watch_cb: timed out
> > while waiting for /local/domain/1/backend/vbd/8/51712 to be removed
> > libxl: error: libxl.c:1647:devices_destroy_cb: libxl__devices_destroy
> > failed for 8
> > libxl: error: libxl.c:1575:libxl__destroy_domid: non-existant domain 8
> > libxl: error: libxl.c:1534:domain_destroy_callback: unable to destroy guest
> > with domid 8
> > libxl: error: libxl.c:1463:domain_destroy_cb: destruction of domain 8 failed
> 
> I'm not really sure what's going wrong here. Did you create the driver
> domain guest with "driver_domain=1" in its config file?
> 
> Yes I did, even though I do not understand what goes on behind that
> option. The manual is not clear enough and the tutorial does not even
> mention it at all.
> 
> Do you have any suggestion about what I should do next to help
> understanding the situation here?

Can you provide the output with `xl -vvv ...`? That will be more verbose.
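For example, using the config file from your log:

    xl -vvv create ruibox.cfg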

Roger.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
https://lists.xen.org/xen-users

 

