
Re: [Xen-devel] [PATCH] QEMU as non-root and PCI passthrough do not mix



On Thu, 14 Jan 2016, Ian Campbell wrote:
> On Tue, 2016-01-12 at 16:52 +0000, Stefano Stabellini wrote:
> > PCI passthrough cannot work if QEMU is run as a non-root process today,
> > as QEMU needs to open /dev/mem to mmap the MSI-X table of the device and
> > read/write relevant nodes on sysfs.
> >
> > Update the docs to reflect that.
> >
> > Run QEMU as root and print a warning if at least one PCI device has been
> > assigned to the guest at domain creation. Print a debug message on pci
> > hotplug.
> >
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
> >
> > diff --git a/docs/misc/qemu-deprivilege.txt b/docs/misc/qemu-deprivilege.txt
> > index dde74ab..cf52547 100644
> > --- a/docs/misc/qemu-deprivilege.txt
> > +++ b/docs/misc/qemu-deprivilege.txt
> > @@ -29,3 +29,13 @@ adduser --no-create-home --system xen-qemuuser-shared
> > 
> >  3) root
> >  As a last resort, libxl will start QEMU as root.
> > +
> > +
> > +Please note that QEMU will still be run as root when PCI devices are
> > +assigned to the virtual machine (if you specified pci=["$PCI_BDF"] in
> > +your VM config file, where $PCI_BDF is the PCI BDF of the device you
> > +want to assign). If you want to hotplug a PCI device sometime after the
> > +VM has started, you need to make sure that the QEMU instance of that VM
> > +has root privileges (for example by not specifying either
> > +xen-qemuuser-shared or xen-qemuuser-domid$domid, or by giving root
> > +privileges to xen-qemuuser-domid$domid).
> > diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
> > index 0aaefd9..6b98750 100644
> > --- a/tools/libxl/libxl_dm.c
> > +++ b/tools/libxl/libxl_dm.c
> > @@ -1254,6 +1254,12 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
> >              break;
> >          }
> > 
> > +        /* Do not run QEMU as non-root if PCI devices are assigned */
> > +        if (guest_config->num_pcidevs > 0) {
> > +            LOG(WARN, "Cannot run QEMU as non-root when PCI devices are being assigned to the guest VM");
> > +            goto end_search;
> > +        }
>
> What if b_info->device_model_user is NULL or == "root"? Doesn't this warn
> even then?

I meant to warn even if device_model_user is NULL because it is the
default and I think it is fair to inform the user about this. But I
think you are right that we don't want to warn if device_model_user is
specified as "root".
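
Something along these lines, just a rough sketch of what I have in
mind (untested, the message wording and exact placement may still
change):

    /* sketch: warn only when we are defaulting, i.e. no
     * device_model_user was given in the config */
    if (guest_config->num_pcidevs > 0 && !b_info->device_model_user) {
        LOG(WARN, "PCI passthrough requested, QEMU will be run as root");
        goto end_search;
    }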


> Conversely if it is != root and num_pcidevs > 0 then it ought to error out,
> since running as root when the config explicitly says otherwise would be
> wrong I think.

OK, I'll error out in that case.
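
Roughly like this (again only a sketch, untested, and the exact error
code is a guess on my part):

    /* sketch: an explicitly configured non-root user cannot be
     * honoured together with PCI passthrough, so fail the build */
    if (guest_config->num_pcidevs > 0 && b_info->device_model_user &&
        strcmp(b_info->device_model_user, "root") != 0) {
        LOG(ERROR,
            "device_model_user is set to %s, but PCI passthrough "
            "requires QEMU to run as root", b_info->device_model_user);
        return ERROR_INVAL;
    }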


> > +
> >          if (b_info->device_model_user) {
> >              user = b_info->device_model_user;
> >              goto end_search;
> > diff --git a/tools/libxl/libxl_pci.c b/tools/libxl/libxl_pci.c
> > index dc10cb7..04d0dd4 100644
> > --- a/tools/libxl/libxl_pci.c
> > +++ b/tools/libxl/libxl_pci.c
> > @@ -1176,6 +1176,9 @@ int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
> >  {
> >      AO_CREATE(ctx, domid, ao_how);
> >      int rc;
> > +
> > +    LOG(DEBUG, "QEMU needs to be run as root for PCI passthrough to work");
>
> Shouldn't there be an if here, and/or an error return?

Unfortunately we cannot find out from here which user QEMU is running
as. However, even without this change, there is already plenty of
information printed out when the hotplug fails:

- xl prints:
libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an
error message from QMP server: Device initialization failed

- qemu prints:
[00:03.0] xen_pt_initfn: Error: Failed to "open" the real pci device. rc: -13


> > +
> >      rc = libxl__device_pci_add(gc, domid, pcidev, 0);
> >      libxl__ao_complete(egc, ao, rc);
> >      return AO_INPROGRESS;