
Re: [Xen-users] Passthrough support ?



> I have been trying to understand PCI passthrough support.  Please
> correct me if I am wrong in my following inferences.
>
> 1. Device emulation and pass through are both implemented using split
> drivers.

I'm going to be pedantic now ;-)

Device "emulation" is really what we do for fully virtualised (HVM) guests: 
the device models provided by QEmu emulate real world devices in terms of 
their responses to particular port IOs, mmapped IO operations, etc.  This 
isn't done using a front / back model because the guest is just using it's 
normal drivers for the "real" devices.
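
To make "emulation" concrete, here's a toy sketch in C (nothing like QEmu's 
real code, purely illustrative): the device model answers the guest's port IO 
with whatever the real hardware would have said.  The port number and status 
value are loosely modelled on the legacy IDE status register and are only 
examples.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical port number, loosely based on the legacy IDE status port. */
#define TOY_STATUS_PORT 0x1f7

/* A device model would call something like this whenever the guest
 * executes an IN instruction that the hypervisor has trapped. */
static uint8_t toy_port_read(uint16_t port)
{
    if (port == TOY_STATUS_PORT)
        return 0x50;            /* pretend the device is ready */
    return 0xff;                /* unclaimed ports read as all-ones */
}

int main(void)
{
    /* Simulate the guest reading the status port. */
    printf("guest IN 0x%x -> 0x%02x\n", TOY_STATUS_PORT,
           toy_port_read((uint16_t)TOY_STATUS_PORT));
    return 0;
}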

The virtual devices used by the PV drivers are implemented using split 
drivers, though, as you say.

The PCI passthrough support for PV guests is also implemented using a split 
driver that implements the functions of the PCI bus in order to give the 
guest the information it requires to talk to its assigned device(s).  Having 
obtained this information, the guest can communicate with the device directly 
using IO ports, memory IO regions, and DMA.
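
To give a rough picture of what "directly" means, here's a small userspace C 
sketch that maps a device's first MMIO region and reads a register.  It uses 
Linux's sysfs resource0 file as a stand-in for the mapping a PV domU ends up 
with, and the device address (0000:00:19.0) is just a placeholder you'd have 
to substitute:

#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Placeholder device address; substitute a real one. */
    const char *bar = "/sys/bus/pci/devices/0000:00:19.0/resource0";
    int fd = open(bar, O_RDWR | O_SYNC);
    if (fd < 0) { perror("open"); return 1; }

    /* Map the first page of BAR0; real code would use the BAR's size. */
    volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, 0);
    if (regs == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* Read a 32-bit register at offset 0 - the meaning is device-specific. */
    printf("reg[0] = 0x%08x\n", regs[0]);

    munmap((void *)regs, 4096);
    close(fd);
    return 0;
}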

> 2. But, in case of emulation, drivers of dom0 are used, whereas in case
> of passthrough (as the name suggests) native drivers in domU are used.

For true emulation (qemu device model), a userspace process in dom0 handles 
modelling a "real" device and then issues IO using normal userspace APIs.  
These get serviced by the dom0 kernel using the normal device driver.

For PV drivers, the frontend driver in the domU kernel issues requests which 
are picked up by the backend driver in dom0's kernel, which then issues 
requests into the IO stack.  Again this uses the normal device driver in dom0 
to talk to the actual device, it's just that the request is made using a 
kernel-internal API rather than a userspace API (which results in slightly 
different actions being taken).
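
If it helps, here's a deliberately simplified C sketch of that split-driver 
data path: the frontend places requests on a shared-memory ring and the 
backend picks them up and feeds them into dom0's IO stack.  The structures 
are toy ones I made up for illustration, not the real Xen ring macros or 
blkif layout:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define RING_SIZE 32            /* entries; real rings are page-sized */

struct toy_request {            /* hypothetical request format */
    uint64_t sector;            /* where to read/write */
    uint16_t nr_segments;       /* how many granted buffers follow */
    uint8_t  operation;         /* 0 = read, 1 = write */
};

struct toy_ring {               /* lives in a page shared by both domains */
    uint32_t req_prod;          /* advanced by the frontend (domU) */
    uint32_t req_cons;          /* advanced by the backend (dom0) */
    struct toy_request ring[RING_SIZE];
};

int main(void)
{
    struct toy_ring shared;
    memset(&shared, 0, sizeof(shared));

    /* Frontend (domU): queue one read request; it would then notify
     * the backend over the event channel. */
    struct toy_request *req = &shared.ring[shared.req_prod % RING_SIZE];
    req->sector = 2048;
    req->nr_segments = 1;
    req->operation = 0;
    shared.req_prod++;

    /* Backend (dom0): pick the request up and hand it to the normal
     * IO stack, which uses the real device driver. */
    while (shared.req_cons != shared.req_prod) {
        struct toy_request *r = &shared.ring[shared.req_cons % RING_SIZE];
        printf("backend: op=%u sector=%llu segs=%u\n",
               r->operation, (unsigned long long)r->sector, r->nr_segments);
        shared.req_cons++;
    }
    return 0;
}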

> 3. dom0 provides a virtual PCI device {an interface for device ops and
> status of this virtual device} to domU, and through the associated event
> channel domU makes "synchronous" use of this device.

domU uses this for control plane operations, but for most work it can talk to 
its PCI device directly without going through dom0.

> ===
> Queries:
>
> 1. What I am really not so sure about is the passthrough case.
>    Will there be a requirement to map the address space of this PCI
>    device in domU?  Will the page which was being shared so far
>    {xen_pci_sharedinfo} for emulation be "flipped" (transferred) into domU?

xen_pci_sharedinfo - is that the page used to talk to the PCI backend from 
pcifront?  If so, then no, that's just used for dom0-domU communications.

In order to map the address space of the PCI device directly, the domU is 
given permissions to map the IO memory regions of that device into its page 
tables.  I think this is now possible to do using a grant table operation...

It is also given permission to access certain IO port ranges so that it can 
use the device's port IO interfaces.
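
As a userspace analogue of "being granted a port range and then using it", 
the following x86 Linux sketch uses ioperm() and inb() (root required).  The 
legacy serial UART at 0x3f8 is just an example range, nothing 
passthrough-specific:

#include <stdio.h>
#include <sys/io.h>

int main(void)
{
    /* Request access to 8 ports starting at 0x3f8. */
    if (ioperm(0x3f8, 8, 1) < 0) { perror("ioperm"); return 1; }

    /* Read the UART line status register (offset 5). */
    unsigned char lsr = inb(0x3f8 + 5);
    printf("LSR = 0x%02x\n", lsr);

    ioperm(0x3f8, 8, 0);        /* drop the permission again */
    return 0;
}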

> 2. Well,
> Having read the code for linux (dom0, domU) I see that there are split
> device drivers for PCI (pcifront and pciback), which normally
> communicate over xenbus, which looks almost like other split
> drivers.  How exactly then does passthrough enable use of domU's drivers?

The key thing to understand is that the pcifront / pciback pair is basically 
just used for setup and teardown, not for the actual IO.  The real IO is done 
directly by the domU without going through dom0.  For the block and net 
drivers, by contrast, *all* IO goes through dom0.
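
Just to illustrate the split: the setup-time information pcifront obtains 
from pciback is essentially config-space contents - vendor/device IDs, BAR 
addresses and so on.  The sketch below reads that sort of data via Linux's 
sysfs config file (the BDF is a placeholder); the actual data-path IO never 
takes a route like this:

#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>

int main(void)
{
    /* Placeholder device address; substitute a real one. */
    const char *cfg = "/sys/bus/pci/devices/0000:00:19.0/config";
    int fd = open(cfg, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    uint16_t ids[2];                      /* vendor ID, device ID */
    if (pread(fd, ids, sizeof(ids), 0) != sizeof(ids)) {
        perror("pread"); close(fd); return 1;
    }
    printf("vendor %04x device %04x\n", ids[0], ids[1]);

    close(fd);
    return 0;
}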

> 3. And if passthrough support isn't provided, how will communication
> between pcifront-pciback be different?  (I guess NetBSD, FreeBSD do
> not have passthrough support yet.)

I'm not entirely clear what you're asking here, but I'll take a stab at it:

if pcifront (in domU) and pciback (in dom0) aren't available then passthrough 
won't work.  The dom0 has to support the backend functions of PCI passthrough 
and the domU has to know how to talk to it.  It's also implicit that they're 
using the same interface version to talk to each other - I'm not sure whether 
that's frozen stable or not.

So *if* NetBSD lacks pciback support, it can't pass PCI devices through to 
guests, even guests that do have pcifront.  Similarly, *if* it lacks pcifront 
support, it can't have devices passed to it.

> 4. What restricts other domUs from accessing a PCI device given to another
> domU via passthrough support?

There are some restrictions on what can be done in PCI config space to prevent 
a guest fouling things up.  These need to be relaxed for some awkward 
devices, though.

For the device IO itself, domUs are only allowed to map mmio regions and 
access io ports that are relevant to their device.  It's possible for these 
to overlap with those for other devices, in which case you're trusting the 
domU to be well behaved.  More crucially, though, giving a domain a device 
with DMA capabilities is equivalent to giving it the ability to subvert the 
entire machine.  DMA can't be sandboxed on most current hardware, so if you 
give DMA rights to a VM it's automatically just as trusted as dom0 with 
respect to not fooling about with other domains, hardware, etc.

Cheers,
Mark

-- 
Dave: Just a question. What use is a unicyle with no seat?  And no pedals!
Mark: To answer a question with a question: What use is a skateboard?
Dave: Skateboards have wheels.
Mark: My wheel has a wheel!

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

