
Re: [Xen-devel] PCI passthrough with stubdomain



Marek Marczykowski-Górecki, on Fri 23 Sep 2016 10:48:14 +0200, wrote:
> I'm still trying to get PCI passthrough working with stubdomain and
> without qemu in dom0 (even for just vfb/vkbd backends). How is this
> supposed to work?

Answering from memory:

> 1. Should xen-pcifront in the target domain be used (and consequently,
> should xen-pciback be set for it)?

I guess that could work.

> Currently xen-pciback is set for both
> stubdomain and target domain, which presents a race condition and
> xen-pciback refuses to setup one of them.

Yes, that's expected, for the reason you say.
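
(For reference, handing a board over to pciback from dom0 goes roughly
like this; a sketch from memory, 0000:03:00.0 being an example BDF, and
xl pci-assignable-add doing the same sysfs dance for you:

    modprobe xen-pciback
    echo 0000:03:00.0 > /sys/bus/pci/devices/0000:03:00.0/driver/unbind
    echo 0000:03:00.0 > /sys/bus/pci/drivers/pciback/new_slot
    echo 0000:03:00.0 > /sys/bus/pci/drivers/pciback/bind

pciback can only export a given device to a single frontend, hence the
conflict when entries are written for both the stubdom and the target
domain.)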


For each question, I'll answer for two cases: either plain PCI drivers
in the guest, or Xen PV PCI drivers (pcifront).


* Using plain PCI drivers.
**************************
I.e. no PV from the point of view of the guest: it directly pokes I/O
ports emulated by qemu.

> 1a. How does it look in case of qemu in dom0 (no stubdomain)?

qemu uses libpci (from pciutils) to access the board and to pass
through requests coming from the guest. No pciback should thus be set,
since qemu pokes the device directly from dom0.
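
To make that concrete, the same pciutils that qemu links against also
ships command-line tools, so you can mimic from dom0 the kind of config
space access qemu does (a sketch, 03:00.0 being an example BDF):

    # dump the board's config space, as qemu reads it through libpci
    lspci -s 03:00.0 -xxx
    # read the 16-bit COMMAND register at config offset 0x04
    setpci -s 03:00.0 0x04.w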

> 2. What operations are done through stubdomain and what are handled
> directly by Xen (itself, or with help of IOMMU)? I'd guess config space
> accesses are done through device model. Anything else?

When using a qemu stubdomain, qemu still uses libpci to access the
board. The stubdom/ directory contains a patch to make the stubdom
libpci use the mini-os PV frontend. Thus, the pciback should be set for
the stubdom, since that's the one which will poke the board through PV.
I don't remember how the guest iomemory accesses are handled. At worst
they're trapped into qemu, which does the access from the stubdom. At
best qemu maps the pages into the guest memory, and the guest then
accesses them directly through Xen (potentially made safe by the
IOMMU). I guess I implemented the latter, but that was a long time ago,
so my memory might be wrong :)

> 3. What changes (if any) are required in qemu-xen to have it working in
> stubdomain in regards to PCI passthrough? Should it just work (assuming
> Linux-based stubdomain with xen-pcifront driver)?

IIRC it should just work; you just need to set up a PV pcifront in the
stubdom when using one. In the dom0 case qemu will access the PCI board
directly.
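
So the guest config would look roughly like this (a sketch, assuming
current xl.cfg option names, the BDF being an example); the toolstack
then has to set up the pciback for the stubdom rather than for the
guest:

    # HVM guest with plain drivers, device model in a stubdomain
    builder = "hvm"
    device_model_stubdomain_override = 1
    pci = [ "03:00.0" ]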


* Using PV drivers from the guest.
**********************************
I.e. the guest is running its own PV drivers.
The stubdom thus doesn't have to know about the device at all.

> 1a. How does it look in case of qemu in dom0 (no stubdomain)?

qemu will not manage it; the guest will talk directly with the pciback,
which is to be set for the guest.
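
I.e. the guest config just lists the device, and the toolstack sets the
pciback for the guest itself (a sketch, example BDF):

    # PV guest running its own pcifront; pciback in dom0 serves it directly
    pci = [ "03:00.0" ]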

> 2. What operations are done through stubdomain and what are handled
> directly by Xen (itself, or with help of IOMMU)? I'd guess config space
> accesses are done through device model. Anything else?

Everything goes through Xen, potentially with the use of the IOMMU.
Config space goes through PV too, with values read from xenstore. See
mini-os' pcifront_physical_to_virtual for an example of how it's
implemented. The iomemory accesses are done by the guest PV driver by
just mapping the right physical pages.
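
What pcifront_physical_to_virtual scans is the backend's xenstore area,
which looks roughly like this (a sketch from memory, so take the exact
keys with a grain of salt; domid 12 and the BDF values are examples):

    # xenstore-ls /local/domain/0/backend/pci/12/0
    num_devs = "1"
    dev-0 = "0000:03:00.0"      # physical BDF, as seen by pciback in dom0
    vdev-0 = "0000:00:00.0"     # virtual BDF, as seen by the guest's pcifront
    state-0 = "1"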

> 3. What changes (if any) are required in qemu-xen to have it working in
> stubdomain in regards to PCI passthrough? Should it just work (assuming
> Linux-based stubdomain with xen-pcifront driver)?

qemu is not involved, so it doesn't have to be changed :)


* To summarize
**************

If running PV drivers in the guest, you set the pciback for the guest,
whether it runs with a stubdom or not.
If running plain drivers in the guest,
  * when not using a stubdom, you don't need to set a pciback.
  * when using a stubdom, you need to set a pciback for the stubdom.

So the unfortunate thing is that when using a stubdom, you have to set
the pciback either for the guest (to run a PV driver in it), or for the
stubdom (to run a plain driver in the guest, and let mini-os use PV so
that qemu can pass the board through).

Samuel
