
Re: [Xen-devel] Determining iommu groups in Xen?



On Fri, Aug 29, 2014 at 09:27:14AM +0100, Andrew Cooper wrote:
> On 29/08/2014 01:35, Peter Kay wrote:
> >
> >
> > On 28 August 2014 19:45, Peter Kay <syllopsium@xxxxxxxxxxxxxxxx
> > <mailto:syllopsium@xxxxxxxxxxxxxxxx>> wrote:
> >
> >
> >
> >     On 28 August 2014 19:02:47 BST, Andrew Cooper
> >     <andrew.cooper3@xxxxxxxxxx <mailto:andrew.cooper3@xxxxxxxxxx>> wrote:
> >     >On 28/08/14 18:53, Peter Kay wrote:
> >     >>
> >     >> On 28 August 2014 18:13:07 BST, Andrew Cooper
> >     ><andrew.cooper3@xxxxxxxxxx <mailto:andrew.cooper3@xxxxxxxxxx>> wrote:
> >
> >     >> An iommu group, as far as I'm aware, is the group of devices
> >     that are
> >     >not protected from each other. In KVM, you must pass through the
> >     entire
> >     >group to a VM at once, unless a 'don't go crying to me if it stomps
> >     >over your memory space or worse' patch is applied to the kernel
> >     >claiming that everything is fine.
> >     >
> >     >I have googled the term in the meantime, and it is what I initially
> >     >thought.
> >     >
> >     >All PCI devices passed through to the same domain share the same
> >     single
> >     >"iommu group" per Kernel/KVM terminology.  There is not currently any
> >     >support for multiple iommu contexts within a single VM.
> >     >
> >     >~Andrew
> >
> >  
> > See  http://lxr.free-electrons.com/source/drivers/iommu/iommu.c  and
> > intel-iommu.c (or amd-iommu.c). It is based on the ACS capability of
> > the upstream device. See in particular intel_iommu_add_device()
> >
> > From  https://www.kernel.org/doc/Documentation/vfio.txt
> >
> > 'Therefore, while for the most part an IOMMU may have device level
> > granularity, any system is susceptible to reduced granularity.  The
> > IOMMU API therefore supports a notion of IOMMU groups.  A group is
> > a set of devices which is isolatable from all other devices in the
> > system.  Groups are therefore the unit of ownership used by VFIO'
> >
> > So far as reliable quirks go for ACS protection, see
> > drivers/pci/quirks.c static const u16 pci_quirk_intel_pch_acs_ids[]
> > and Red Hat bugzilla 1037684
> >
> > I'll have to do some more testing to see if lspci -t is a reasonable
> > indication of iommu groups or if I can write some code to figure them out.
> >
> > Obviously returning the information from the Linux source is
> > ultimately not really a good idea(*), because the dom0 may not be
> > Linux. It is in my case, because NetBSD is (unfortunately) not yet
> > functional enough for my needs and I don't want to use Solaris derived
> > OS, but that doesn't help everyone else.
> >
> > (*) Assuming it's possible at all, as the Linux dom0 is running on top
> > of Xen and therefore is restricted in some ways.
> >
> > PK
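
(On a Linux dom0, the groups the kernel computes can be read straight out of
sysfs rather than guessed from lspci -t. A minimal sketch, assuming a Linux
kernel with IOMMU support enabled; the directory will simply be absent under
Xen since dom0 does not drive the IOMMU itself:)

```shell
#!/bin/sh
# List IOMMU groups and their member devices from sysfs (Linux only).
# The kernel's IOMMU drivers populate /sys/kernel/iommu_groups; under a
# Xen dom0 the directory is typically missing, since dom0 is unaware of
# the IOMMU.
base=/sys/kernel/iommu_groups
groups_found=0
if [ -d "$base" ]; then
    for group in "$base"/*; do
        [ -e "$group" ] || continue
        groups_found=$((groups_found + 1))
        echo "IOMMU group ${group##*/}:"
        for dev in "$group"/devices/*; do
            [ -e "$dev" ] || continue
            bdf=${dev##*/}                 # e.g. 0000:00:1c.0
            if command -v lspci >/dev/null 2>&1; then
                lspci -nns "$bdf"          # pretty-print via pciutils
            else
                echo "  $bdf"
            fi
        done
    done
else
    echo "$base not present (no IOMMU enabled, or running under Xen)"
fi
echo "$groups_found group(s) found"
```

Devices sharing a group number here are exactly the ones VFIO forces you to
assign together, per the vfio.txt excerpt above.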
> 
> Ah right.  I see now.  The IOMMU groups are kernel/errata logic applied
> to the system which impose restrictions as to which devices cannot
> safely/functionally be split apart.
> 
> There is absolutely nothing like this in Xen, or dom0 (as dom0 is
> unaware of IOMMUs in general).  If I recall correctly, it does feature
> on the wishlist of the XenServer team, of which I am a member, pending
> some copious quantities of free time.  I know for certain that the libxl
> and Xapi toolstacks do not have logic like this, leaving all passthrough
> setup in the manual hands of the host administrator.
> 
> Konrad: Probably an item for the 4.6 wishlist/featurelist.  It will
> probably mix well with the other IO-NUMA stuff which has been deferred.

Done.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

