Re: [Xen-devel] [PATCH v2 4/4] xen/x86: Allow stubdom access to irq created for msi.



On Wed, Jan 16, 2019 at 02:49:14PM +0100, Marek Marczykowski-Górecki wrote:
> On Wed, Jan 16, 2019 at 01:20:04PM +0100, Roger Pau Monné wrote:
> > On Wed, Jan 16, 2019 at 11:52:18AM +0100, Marek Marczykowski-Górecki wrote:
> > > On Wed, Jan 16, 2019 at 10:21:29AM +0100, Roger Pau Monné wrote:
> > > > On Tue, Jan 15, 2019 at 04:36:31PM +0100, Marek Marczykowski-Górecki wrote:
> > > > > From: Simon Gaiser <simon@xxxxxxxxxxxxxxxxxxxxxx>
> > > > > 
> > > > > Stubdomains need to be given sufficient privilege over the guest
> > > > > for which they provide emulation in order for PCI passthrough to
> > > > > work correctly. When an HVM domain tries to enable MSI, QEMU in
> > > > > the stubdomain calls PHYSDEVOP_map_pirq, but later it needs to
> > > > > call XEN_DOMCTL_bind_pt_irq as part of xc_domain_update_msi_irq.
> > > > > Allow for that as part of PHYSDEVOP_map_pirq.
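
For reference, the flow in question, as QEMU's hw/xen/xen_pt_msi.c
drives it through libxc, looks roughly like this (a sketch only, with
error handling dropped and argument values simplified):

    int rc, pirq = -1;

    /* Step 1: PHYSDEVOP_map_pirq -- already possible from the
     * stubdomain. This is where Xen creates the IRQ for the MSI. */
    rc = xc_physdev_map_pirq_msi(xch, guest_domid, -1 /* auto-assign */,
                                 &pirq, devfn, bus, entry_nr, table_base);

    /* Step 2: XEN_DOMCTL_bind_pt_irq, issued via
     * xc_domain_update_msi_irq -- this is what fails from the
     * stubdomain without the patch, because the stubdomain was never
     * granted access to the IRQ created in step 1. */
    rc = xc_domain_update_msi_irq(xch, guest_domid, gvec, pirq,
                                  gflags, table_base);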
> > > > 
> > > > I see, that's not a problem AFAICT for PCI INTx because the IRQ in
> > > > that case is known beforehand, and the stubdomain is given permissions
> > > > over this IRQ by libxl__device_pci_add (there's a do_pci_add against
> > > > the stubdomain).
> > > 
> > > Exactly.
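
For comparison, the INTx grant that already happens looks roughly like
the following (a sketch of what do_pci_add does via libxc; the exact
call sites in libxl_pci.c differ in detail):

    int irq = pdev_irq;      /* legacy IRQ, known up front from sysfs */
    xc_physdev_map_pirq(xch, domid, irq, &irq);
    xc_domain_irq_permission(xch, domid, irq, 1);    /* grant access */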
> > 
> > I would maybe consider adding something like this to the commit
> > message, so it's clear why PCI INTx works but not MSI interrupts.
> > 
> > > 
> > > > > 
> > > > > Based on
> > > > > https://github.com/OpenXT/xenclient-oe/blob/5e0e7304a5a3c75ef01240a1e3673665b2aaf05e/recipes-extended/xen/files/stubdomain-msi-irq-access.patch
> > > > > by Eric Chanudet <chanudete@xxxxxxxxxxxx>.
> > > > > 
> > > > > Signed-off-by: Simon Gaiser <simon@xxxxxxxxxxxxxxxxxxxxxx>
> > > > > Signed-off-by: Marek Marczykowski-Górecki <marmarek@xxxxxxxxxxxxxxxxxxxxxx>
> > > > > ---
> > > > > This is only one part of fixing MSI with QEMU in a stubdomain. The
> > > > > other part is allowing the stubdomain to actually enable MSI in PCI
> > > > > config space. QEMU does that through pcifront/back connected to the
> > > > > stubdomain (see hw/xen/xen_pt_msi.c:msi_msix_enable()), but pciback
> > > > > by default refuses writes to that register.
> > > > > Easy, less safe solution: enable permissive mode for the device.
> > > > > Safer solution: enable access to this register for the stubdomain
> > > > > only (a pciback patch that adds such a flag, plus a libxl patch to
> > > > > set it for the relevant devices).
> > > > > The whole story:
> > > > > https://www.qubes-os.org/news/2017/10/18/msi-support/
> > > > > 
> > > > > Any other ideas? Which one is preferred upstream?
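
For completeness, the "easy, less safe" option is already expressible in
the domain config today, at the cost of letting the stubdomain write to
any config space register of the device (BDF made up for illustration):

    pci = [ '0000:03:00.0,permissive=1' ]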
> > > > 
> > > > IMO, and please correct me if I'm wrong, QEMU in the stubdomain will
> > > > receive the PCI config space write to enable MSI, and since this
> > > > stub-QEMU runs in PV mode I think it should use the PV way to enable
> > > > MSI, i.e. the same way that Linux pcifront uses to enable MSI for
> > > > passed-through devices.
> > > > 
> > > > Is this something that sounds sensible?
> > > 
> > > We've considered this option too. Let me quote Simon on that (from the
> > > link above):
> > > 
> > >     The enable command that pcifront sends is intended for the normal
> > >     PV use case where the device is passed to the VM itself (via
> > >     pcifront) rather than to the stub domain target. While the command
> > >     is called enable_msi, pciback does much more than simply setting
> > >     the enable flag. It also configures IRQ handling in the dom0
> > >     kernel, adapts the MSI masking, and more. This makes sense in the
> > >     PV case, but in the HVM case, the MSI configuration is done by
> > >     QEMU, so this most likely won't work correctly.
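
For context, the PV path discussed here is pcifront putting an
XEN_PCI_OP_enable_msi request on the shared ring for pciback to
service. The request structure, abridged from Linux's
include/xen/interface/io/pciif.h, looks roughly like:

    struct xen_pci_op {
        uint32_t cmd;                 /* e.g. XEN_PCI_OP_enable_msi */
        int32_t  err;                 /* errno-style result */
        uint32_t domain, bus, devfn;  /* which device to touch */
        /* ... plus offset/size/value for config space accesses and an
         * MSI-X entry array; see the header for the full layout. */
    };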
> > 
> > Oh great, that's unfortunate. Both pciback functions end up calling
> > into msi_capability_init in the Linux kernel, which indeed does more
> > than just toggling the PCI config space enable bit.
> > 
> > OTOH adding a bypass to pciback so the stubdom is able to write to the
> > PCI register in order to toggle the enable bit seems quite clumsy. Not
> > to mention that you would be required to update the Dom0 kernel in
> > order to fix the issue.
> > 
> > Do you think it makes sense to add a domctl to enable/disable MSI(X)?
> > 
> > This way the bug could be fixed by just updating Xen (and the
> > stubdomain).
> 
> Indeed, in the case of a stubdomain that would make sense, as other PCI
> passthrough related operations already bypass pcifront/back anyway.
> 
> And I agree with Jan that a physdevop makes more sense, if going this way.

While I'm not against using a physdevop if we agree that a new hypercall
is the way to go, I would prefer a domctl, because this hypercall would
only be used by toolstack components and thus wouldn't need to be added
to the public stable ABI available to all guests, even though the
functionality is actually limited to stubdomains.
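
Just to make the idea concrete, such an interface could look something
like the below. This is purely illustrative: nothing like it exists
today, and the name and fields are made up.

    /* Hypothetical domctl, for discussion only. */
    struct xen_domctl_msi_enable {
        uint32_t machine_sbdf;  /* which physical device */
        uint8_t  mode;          /* 0 = disable, 1 = MSI, 2 = MSI-X */
    };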

Anyway, not a big deal, and if others think physdevop is more suitable
I'm OK with it.

Thanks, Roger.
