Re: [Xen-devel] Commit 3a6c9 breaks QEMU on FreeBSD/Xen
On Wed, Jan 25, 2017 at 11:05:29AM +0000, Roger Pau Monné wrote:
> On Tue, Jan 24, 2017 at 01:30:02PM -0800, Stefano Stabellini wrote:
> > On Tue, 24 Jan 2017, Stefano Stabellini wrote:
> > > On Tue, 24 Jan 2017, Roger Pau Monné wrote:
> > > > Hello,
> > > > 
> > > > The following commit:
> > > > 
> > > > commit 3a6c9172ac5951e6dac2b3f6cbce3cfccdec5894
> > > > Author: Juergen Gross <jgross@xxxxxxxx>
> > > > Date:   Tue Nov 22 07:10:58 2016 +0100
> > > > 
> > > >     xen: create qdev for each backend device
> > > > 
> > > > prevents me from running QEMU on FreeBSD/Xen; the following is printed
> > > > in the QEMU log:
> > > > 
> > > > char device redirected to /dev/pts/2 (label serial0)
> > > > xen be core: xen be core: can't open gnttab device
> > > > can't open gnttab device
> > > > xen be core: xen be core: can't open gnttab device
> > > > can't open gnttab device
> > > > 
> > > > # xl create -c ~/domain.cfg
> > > > Parsing config from /root/domain.cfg
> > > > libxl: error: libxl_dm.c:2201:device_model_spawn_outcome: Domain 32:domain 32 device model: spawn failed (rc=-3)
> > > > libxl: error: libxl_create.c:1506:domcreate_devmodel_started: Domain 32:device model did not start: -3
> > > > libxl: error: libxl_dm.c:2315:kill_device_model: Device Model already exited
> > > > libxl: error: libxl.c:1572:libxl__destroy_domid: Domain 32:Non-existant domain
> > > > libxl: error: libxl.c:1531:domain_destroy_callback: Domain 32:Unable to destroy guest
> > > > libxl: error: libxl.c:1458:domain_destroy_cb: Domain 32:Destruction of domain failed
> > > > # cat /var/log/xen/qemu-dm-domain.log
> > > > char device redirected to /dev/pts/2 (label serial0)
> > > > xen be core: xen be core: can't open gnttab device
> > > > can't open gnttab device
> > > > xen be core: xen be core: can't open gnttab device
> > > > can't open gnttab device
> > > > 
> > > > I'm not really familiar with any of that code, but I think that using
> > > > qdev_init_nofail is wrong, since on FreeBSD/Xen for example we don't yet
> > > > support the gnttab device, so initialization of the Xen Qdisk backend
> > > > can fail (and possibly the same applies to Linux if someone decides to
> > > > compile a kernel without the gnttab device). Yet QEMU can be used
> > > > without the Qdisk backend.
> > > 
> > > How did you manage to configure QEMU before? The configure script had
> > > xc_gnttab_open calls in it up to Xen 4.6.
> > 
> > I know the answer! Because the configure script only compiles the code,
> > it doesn't try to run it. xc_gnttab_open compiled correctly but returned
> > an error when executed. Is that right?
> 
> Yes, I'm quite sure that's right. FreeBSD is using gnttab_unimp.c, which
> implements xengnttab_open, so compilation will not fail.
> 
> > > I am happy to support a use case where the kernel doesn't have gntdev,
> > > but it needs to be explicit: we need to detect it in the configure
> > > script, then avoid the initialization of devices which require it.
> > 
> > I would still prefer configure to be able to detect this case. If it
> > cannot be made to detect it, then we can try to figure out a way to
> > catch the initialization errors at run time.
> 
> I think it's better to simply fail to initialize the Xen Qdisk backend at
> runtime, or else a xen-tools/QEMU compiled in a non-Xen environment won't
> get gnttab (and, as a consequence, Xen Qdisk support) enabled, and I think
> it's quite common for distros to compile Xen packages in non-Xen
> environments (where /dev/xen/gnttab is not available).

Ping? I'm not really sure how to solve this because I have zero experience
with QEMU internals (all this qdev stuff). Can we restore the previous
behavior, where the failure to initialize a device wouldn't prevent QEMU
from starting?

Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
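
[Editor's note] For reference, the run-time detection discussed in the thread
above could in principle look like the sketch below. This is not code from the
thread or from QEMU; it only assumes the public libxengnttab calls
xengnttab_open() and xengnttab_close(), and the helper name xen_gnttab_usable()
is made up.

#include <stddef.h>
#include <stdbool.h>
#include <xengnttab.h>

/*
 * Probe whether a usable gnttab device exists at run time.  On FreeBSD the
 * unimplemented stub (gnttab_unimp.c) builds fine, but the open fails at run
 * time, as seen in the "can't open gnttab device" messages in the QEMU log.
 */
static bool xen_gnttab_usable(void)
{
    xengnttab_handle *xgt = xengnttab_open(NULL, 0);

    if (xgt == NULL) {
        return false;   /* no gnttab support; skip Qdisk backend setup */
    }
    xengnttab_close(xgt);
    return true;
}

A probe like this reflects the point made in the thread: configure can only
prove that the stub compiles, while only a run-time open tells you whether
/dev/xen/gnttab actually works.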
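
[Editor's note] Similarly, the "restore the previous behavior" option would
roughly mean realizing the backend qdev through an Error object instead of
qdev_init_nofail(), so a failure is logged and the device dropped rather than
QEMU exiting. The helper below is purely illustrative, assumes the QEMU
2.8-era qdev/QOM/Error APIs, and is not the actual patch that was merged.

#include "qemu/osdep.h"
#include "hw/qdev-core.h"
#include "qapi/error.h"

/* Hypothetical helper: try to realize a backend device, but tolerate failure. */
static bool xen_backend_try_realize(DeviceState *dev)
{
    Error *err = NULL;

    /* qdev_init_nofail() would exit QEMU here; catching the error keeps it up. */
    object_property_set_bool(OBJECT(dev), true, "realized", &err);
    if (err) {
        error_report_err(err);      /* e.g. "can't open gnttab device" */
        object_unparent(OBJECT(dev));
        return false;
    }
    return true;
}

Whether such a failure should be silent or a loud warning is a policy question
for the maintainers; the point is only that realize errors can be caught
instead of being fatal.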