
Re: [Xen-devel] [PATCH 4/5] iommu: introduce iommu_groups



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 15 May 2019 15:18
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Cc: Julien Grall <julien.grall@xxxxxxx>; Andrew Cooper 
> <Andrew.Cooper3@xxxxxxxxxx>; Wei Liu
> <wei.liu2@xxxxxxxxxx>; George Dunlap <George.Dunlap@xxxxxxxxxx>; Ian Jackson 
> <Ian.Jackson@xxxxxxxxxx>;
> Stefano Stabellini <sstabellini@xxxxxxxxxx>; xen-devel 
> <xen-devel@xxxxxxxxxxxxxxxxxxxx>; Konrad
> Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>; Tim (Xen.org) <tim@xxxxxxx>
> Subject: Re: [PATCH 4/5] iommu: introduce iommu_groups
> 
> >>> On 08.05.19 at 15:24, <paul.durrant@xxxxxxxxxx> wrote:
> > --- a/xen/drivers/passthrough/iommu.c
> > +++ b/xen/drivers/passthrough/iommu.c
> > @@ -655,6 +655,82 @@ static void iommu_dump_p2m_table(unsigned char key)
> >      }
> >  }
> >
> > +#ifdef CONFIG_HAS_PCI
> > +
> > +struct iommu_group {
> > +    unsigned int id;
> > +    unsigned int index;
> > +    struct list_head devs_list;
> > +};
> 
> Could these additions as a whole go into a new groups.c?

Sure.

> 
> > +int iommu_group_assign(struct pci_dev *pdev)
> > +{
> > +    const struct iommu_ops *ops;
> > +    unsigned int id;
> > +    struct iommu_group *grp;
> > +
> > +    ops = iommu_get_ops();
> > +    if ( !ops || !ops->get_device_group_id )
> 
> The way iommu_get_ops() works the left side of the || is pointless.

Yes, this was a hangover from a previous variant of patch #3, which I'm going 
to drop anyway.
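So only the hook itself needs testing; a minimal sketch, assuming - per your
remark - that iommu_get_ops() always returns a valid ops table:

    const struct iommu_ops *ops = iommu_get_ops();

    if ( !ops->get_device_group_id )
        return 0;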

> 
> > +        return 0;
> > +
> > +    id = ops->get_device_group_id(pdev->seg, pdev->bus, pdev->devfn);
> > +    grp = get_iommu_group(id);
> 
> I don't think solitary devices should be allocated a group. Also
> you don't handle failure of ops->get_device_group_id().

True, it can fail in the VT-d case. It's not clear what to do in that case,
though; I guess we assume - for now - that the device gets its own group.
I think all devices should get a group. The group will ultimately be the unit
of assignment to a VM and, in the best case, we *expect* each device to have
its own group; it's only when there are quirks, legacy bridges, etc. that
multiple devices end up in the same group. This is consistent with Linux's
IOMMU groups.
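Something along these lines is what I have in mind; just a sketch, and the
choice of keying the fallback (singleton) group on the device's SBDF is
illustrative rather than settled:

    int ret = ops->get_device_group_id(pdev->seg, pdev->bus, pdev->devfn);
    unsigned int id;

    if ( ret < 0 )
        /* e.g. VT-d failed to produce a group id: fall back to a
           singleton group, keyed here on the device's SBDF. */
        id = (pdev->seg << 16) | (pdev->bus << 8) | pdev->devfn;
    else
        id = ret;

    grp = get_iommu_group(id);
    if ( !grp )
        return -ENOMEM;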

> 
> > +    if ( ! grp )
> 
> Nit: Stray blank.

Oh yes.

> 
> > --- a/xen/include/xen/pci.h
> > +++ b/xen/include/xen/pci.h
> > @@ -75,6 +75,9 @@ struct pci_dev {
> >      struct list_head alldevs_list;
> >      struct list_head domain_list;
> >
> > +    struct list_head grpdevs_list;
> 
> Does this separate list provide much value? The devices in a group
> are going to move between two domain_list-s all in one go, so
> once you know the domain of one you'll be able to find the rest by
> iterating that domain's list. Is the fear that such an iteration may
> be tens of thousands of entries long, and hence become an issue
> when traversed? I have no idea how many PCI devices the biggest
> systems today would have, but if traversal was an issue, then it
> would already be with the code we've got now.

I'd prefer to keep it... It makes the re-implementation of the domctl in the 
next patch more straightforward.
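For example, something like the following (purely an illustrative sketch, not
the actual next patch; the field names follow the ones added here) only has to
walk the group's own members rather than filter the owning domain's full
device list:

    static unsigned int iommu_group_count_devs(struct iommu_group *grp)
    {
        struct pci_dev *pdev;
        unsigned int count = 0;

        /* Walk just this group's members via the new grpdevs_list field. */
        list_for_each_entry ( pdev, &grp->devs_list, grpdevs_list )
            count++;

        return count;
    }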

  Paul

> 
> Jan
> 

