
Re: [Xen-devel] Re: [Xen-users] Passthrough support ?



We also mentioned xen_pci_sharedinfo in previous mails.
As I read the Linux code, xen_pci_sharedinfo contains a xen_pci_op in which the frontend specifies which operation is to be done on which PCI device, and it then polls the status of that operation through another flag in xen_pci_sharedinfo. I infer from this that dom0's drivers are being used. Can you point me towards a code path where the domU actually uses its own drivers, having taken ownership of the PCI device?
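
For reference, this is roughly what the shared structure looks like in the interface header (include/xen/interface/io/pciif.h) as I read it; I'm paraphrasing from memory, so the exact fields may differ between versions:

/* Paraphrased from include/xen/interface/io/pciif.h -- a sketch, not a
 * verbatim copy; exact fields vary between Xen/Linux versions. */
#include <stdint.h>

#define XEN_PCI_OP_conf_read    0        /* read PCI config space        */
#define XEN_PCI_OP_conf_write   1        /* write PCI config space       */

#define _XEN_PCIF_active        0
#define XEN_PCIF_active         (1 << _XEN_PCIF_active)  /* op in flight */

struct xen_pci_op {
    uint32_t cmd;      /* IN:  which XEN_PCI_OP_* to perform             */
    int32_t  err;      /* OUT: errno-style result filled in by pciback   */
    uint32_t domain;   /* IN:  PCI segment of the target device          */
    uint32_t bus;      /* IN:  bus number                                */
    uint32_t devfn;    /* IN:  device/function number                    */
    int32_t  offset;   /* IN:  config space offset                       */
    int32_t  size;     /* IN:  access width in bytes (1, 2 or 4)         */
    uint32_t value;    /* IN/OUT: value to write, or value read back     */
};

/* One of these lives in the single page shared by pcifront and pciback. */
struct xen_pci_sharedinfo {
    uint32_t          flags;   /* XEN_PCIF_* status bits                 */
    struct xen_pci_op op;      /* the one outstanding request            */
};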
Mark Williamson wrote:

1. When PCI passthrough support 'is not enabled', how does a domU access a
PCI device?
    I suppose they continue to communicate using the pcifront/pciback split
drivers, and dom0's drivers are used (right?)

If you don't have PCI passthrough (as provided by pcifront/back) then the domU can't really have direct access to a PCI device. The only accesses it'll be able to make are by using the virtual block / net interfaces to request dom0 do some IO for it.

2. PCI passthrough enables the domU to own the PCI device. Now dom0 can no
longer use that device.

Yep.

    If the domU has mapped the address space of the required PCI device, why
does it need to talk to dom0 any further?

DomU owns the PCI device, but dom0 owns the PCI *bus* hardware. To query (and potentially set) information in "PCI configuration space", the domU will have to talk to dom0 somehow, because that information comes from the PCI bus hardware, not from the device itself. The PCI config space information includes the location of the mmio and port io regions used by the device, so without this communication the domU wouldn't know how to talk to its device.

Come to that, the PCI config space information is needed to determine what the device *is* so the domU knows what driver to use to talk to it ;-)

The PCI bus control hardware can only be owned by dom0, so it is necessary to indirect these operations through dom0. pciback doesn't necessarily pass through config space data unmodified to pcifront (there used to be a number of different options on how it passed stuff through) and it doesn't necessarily allow pcifront to alter config space (for safety reasons). But in the end it provides enough of the functionality of PCI config space to enable the domU to find and identify the PCI device(s) it owns, and load the appropriate drivers.
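
To make that concrete, the pcifront side of a config space read looks roughly like this -- a simplified sketch of the real request path (the actual driver also takes a lock, uses a timeout instead of busy-waiting, and handles a few more cases):

/* Simplified sketch of how pcifront asks pciback to touch config space.
 * Header locations and details vary between Linux/Xen trees. */
#include <linux/bitops.h>            /* set_bit, test_bit                */
#include <xen/events.h>              /* notify_remote_via_evtchn         */
#include <xen/interface/io/pciif.h>  /* xen_pci_sharedinfo, XEN_PCI_OP_* */

/* Cut-down version of pcifront's per-device state (the real struct has
 * more fields, including a lock protecting the shared page). */
struct pcifront_device {
    struct xen_pci_sharedinfo *sh_info;  /* page shared with pciback     */
    int evtchn;                          /* event channel to pciback     */
};

static int pcifront_conf_read(struct pcifront_device *pdev,
                              unsigned int seg, unsigned int bus,
                              unsigned int devfn, int offset, int size,
                              uint32_t *val)
{
    struct xen_pci_op *op = &pdev->sh_info->op;

    /* 1. Describe the operation in the shared page. */
    op->cmd    = XEN_PCI_OP_conf_read;
    op->domain = seg;
    op->bus    = bus;
    op->devfn  = devfn;
    op->offset = offset;
    op->size   = size;

    /* 2. Mark the request active and kick pciback over the event channel. */
    wmb();
    set_bit(_XEN_PCIF_active, (unsigned long *)&pdev->sh_info->flags);
    notify_remote_via_evtchn(pdev->evtchn);

    /* 3. Wait for pciback (running dom0's real PCI code) to clear it. */
    while (test_bit(_XEN_PCIF_active, (unsigned long *)&pdev->sh_info->flags))
        cpu_relax();

    /* 4. Pick up the result. */
    if (op->err)
        return op->err;
    *val = op->value;
    return 0;
}

The important point is that only config space accesses (and similar setup operations) take this path; once the domU knows where its device's BARs are, the data path doesn't involve dom0 at all.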

    What kind of handshake is involved (the setup and teardown you have
mentioned) between domU and dom0 in the passthrough case?

Well, the operations on PCI config space will go through dom0. That's the main thing I can think of right now, but there may be other operations too.

3. DomU discovers all PCI devices through  Xenstore.

I'm not actually sure on this point, but you should be able to figure it out with a scan through the code. Post here if you have trouble understanding sections!

... requires to talk to its assigned device(s).
How are devices "assigned" to a domU? I am specifically asking about
late binding.

pciback needs to "claim" the device in dom0 so that no "real" device drivers in dom0 will try to use it. We don't want the domains fighting over it ;-) After dom0 boot time, you can move devices to the control of pciback by writing values to sysfs to tell any current driver to release its hold on the device, then tell pciback to grab it.
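
As a rough illustration, that late-binding sequence is just three sysfs writes; something like this (the node names are those of the classic pciback driver -- newer trees call it xen-pciback -- and the device BDF and the old driver name are only examples):

/* Sketch: rebind a PCI device from its current dom0 driver to pciback,
 * equivalent to three "echo ... > /sys/..." commands from a shell.
 * "0000:00:19.0" and "e1000e" are examples; pciback may be called
 * xen-pciback on newer kernels. */
#include <stdio.h>

static int sysfs_write(const char *path, const char *val)
{
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return -1; }
    fputs(val, f);
    return fclose(f);
}

int main(void)
{
    const char *bdf = "0000:00:19.0";

    /* 1. Tell the current driver to let go of the device. */
    sysfs_write("/sys/bus/pci/drivers/e1000e/unbind", bdf);
    /* 2. Tell pciback it is allowed to handle this slot. */
    sysfs_write("/sys/bus/pci/drivers/pciback/new_slot", bdf);
    /* 3. Bind the device to pciback. */
    sysfs_write("/sys/bus/pci/drivers/pciback/bind", bdf);
    return 0;
}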

Once pciback has grabbed a device, the rest of dom0 Linux doesn't know that a domU is using the device. As far as Linux is concerned, the device is owned by pciback.

To give the domain access to the device, dom0 needs to issue some hypercalls giving the domU the rights to map the appropriate mmio regions, and access the appropriate IO regions. It'll also need to have set up a connection between pcifront and pciback. I guess it might stick something in Xenstore - maybe you can see what happens in the code? Some of this will be done in pciback, some is probably administered in the tools code.
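
If you want to chase it in the tools, those hypercalls surface in libxc as xc_domain_iomem_permission() and xc_domain_ioport_permission(). Very roughly (the handle type and exact signatures have changed across Xen versions, and the addresses below are invented -- a real toolstack reads them from the device's config space):

/* Rough sketch of the dom0/toolstack side: grant a domU the right to map
 * one of its device's MMIO BARs and to use an IO port range.  Numbers
 * are invented; libxc signatures differ between Xen versions. */
#include <stdint.h>
#include <xenctrl.h>

int grant_device_access(xc_interface *xch, uint32_t domid)
{
    /* Example BAR: 4kB of MMIO at machine address 0xfebf0000. */
    unsigned long first_mfn = 0xfebf0000UL >> 12;
    int rc;

    rc = xc_domain_iomem_permission(xch, domid, first_mfn,
                                    1 /* nr frames */, 1 /* allow */);
    if (rc)
        return rc;

    /* Example IO port range: 8 ports starting at 0xc000. */
    return xc_domain_ioport_permission(xch, domid, 0xc000,
                                       8 /* nr ports */, 1 /* allow */);
}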

I'm sorry to be a bit vague. It's a while since I looked at this code, so please bear in mind that I'm a little rusty ;-)

Cheers,
Mark


 Having
obtained this information, communication with the device is possible
directly using IO ports, memory IO regions, and DMA.
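
In other words, once the BARs are known, the driver in the domU is just ordinary Linux driver code; a generic sketch, not taken from any particular driver:

/* Generic sketch of a native domU driver touching its passed-through
 * device directly.  There is nothing Xen-specific here -- that is the
 * point of passthrough.  The register offset 0x08 is an invented example. */
#include <linux/pci.h>
#include <linux/io.h>

static void __iomem *regs;

static int mydev_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
    int rc = pci_enable_device(pdev);
    if (rc)
        return rc;

    /* BAR 0's location came out of (virtualised) config space. */
    regs = pci_iomap(pdev, 0, 0);
    if (!regs)
        return -ENOMEM;

    /* From here on, MMIO goes straight to the hardware, not via dom0. */
    pr_info("mydev: revision register reads %#x\n", ioread32(regs + 0x08));
    return 0;
}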

2. But, in the case of emulation, dom0's drivers are used, whereas in the
case of passthrough (as the name suggests) native drivers in the domU are
used.
For true emulation (qemu device model), a userspace process in dom0
handles modelling a "real" device and then issues IO using normal
userspace APIs. These get serviced by the dom0 kernel using the normal
device driver.

For PV drivers, the frontend driver in the domU kernel issues requests
which are picked up by the backend driver in dom0's kernel, which then
issues requests into the IO stack.  Again this uses the normal device
driver in dom0 to talk to the actual device, it's just that the request
is made using a kernel-internal API rather than a userspace API (which
results in slightly different actions being taken).

3. dom0 provides a virtual PCI device (an interface for device ops and the
status of this virtual device) to the domU, and through the associated event
channel the domU makes
   "synchronous" use of this device.
domU uses this for control plane operations, but for most work it can
talk to its PCI device directly without going through dom0.

===
Queries:

1. What I am really not so sure about is ... the passthrough case.
    Will there be a requirement to map the address space of this PCI
device in the domU? Will the page which has been shared so far
(xen_pci_sharedinfo)
     for emulation be "flipped" (transferred) into the domU?
xen_pci_sharedinfo - is that the page used to talk to the PCI backend
from pcifront?  If so, then no, that's just used for dom0-domU
communications.
As I read the xen0linux code, xen_pci_sharedinfo contains a xen_pci_op in
which the frontend specifies which operation is to be done on which PCI
device, and it then polls the status of that operation through another flag
in xen_pci_sharedinfo. I infer from this that dom0's drivers are being used.
Can you point me towards a code path where the domU actually uses its own
drivers, having taken ownership of the PCI device?

In order to map the address space of the PCI device directly, the domU is
given permission to map the IO memory regions of that device into its
page tables.  I think this is now possible to do using a grant table
operation...

It is also given permission to access certain IO port ranges so that it
can use the device's port IO interfaces.
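
The port IO side looks just as ordinary from the domU driver's point of view; another generic sketch (the port base and register offsets are invented):

/* Generic sketch: once Xen has been told to allow this port range for
 * the domU, the driver uses the usual port IO accessors.  The base
 * address and offsets are invented for illustration. */
#include <linux/ioport.h>
#include <linux/io.h>

#define MYDEV_IOBASE  0xc000   /* example base, really read from a BAR */
#define MYDEV_IOLEN   8

static int mydev_setup_ports(void)
{
    if (!request_region(MYDEV_IOBASE, MYDEV_IOLEN, "mydev"))
        return -EBUSY;

    /* These hit the device directly; Xen permits them because dom0
     * granted this port range at setup time. */
    outb(0x01, MYDEV_IOBASE + 0x04);   /* example: enable the device   */
    return inb(MYDEV_IOBASE + 0x00);   /* example: read a status byte  */
}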

     2. Well, having read the code for Linux (dom0, domU), I see that there
are split device drivers for PCI (pcifront and pciback), which normally
communicate over xenbus and look much like the other split drivers. How
exactly, then, does passthrough enable the use of the domU's drivers?
The key thing to understand is that the pcifront / pciback is basically
just used for setup and teardown, not for the actual IO.  The real IO is
done directly by the domU without going through dom0.  For the block and
net drivers, *all* IO goes through dom0.

3. And if passthrough support isn't provided, how will communication
between pcifront and pciback be different? (I guess NetBSD and FreeBSD do
not have passthrough support yet.)
I'm not entirely clear what you're asking here, but I'll take a stab at
it:

if pcifront (in domU) and pciback (in dom0) aren't available then
passthrough won't work.  The dom0 has to support the backend functions of
PCI passthrough and the domU has to know how to talk to it.  It's also
implicit that they're using the same interface version to talk to each
other - I'm not sure whether that's frozen stable or not.

So *if* NetBSD lacks pciback support, it can't pass PCI devices to guests
that do.  Similarly, *if* it lacks pcifront support, it can't have
devices passed to it.

4. What restricts other domUs from accessing a PCI device given to another
domU via passthrough support?
There are some restrictions on what can be done in PCI config space to
prevent a guest fouling things up.  These need to be relaxed for some
awkward devices, though.

For the device IO itself, domUs are only allowed to map mmio regions and
access io ports that are relevant to their device.  It's possible for
these to overlap with those for other devices, in which case you're
trusting the domU to be well behaved.  More crucially, though, giving a
domain a device with DMA capabilities is equivalent to giving it the
ability to subvert the entire machine.  DMA can't be sandboxed on most
current hardware, so if you give DMA rights to a VM it's automatically
just as trusted as dom0 with respect to not fooling about with other
domains, hardware, etc.

Cheers,
Mark
Thanks,
Sanket


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

