Re: [Xen-devel] [PATCH 4/5] iommu: introduce iommu_groups
>>> On 08.05.19 at 15:24, <paul.durrant@xxxxxxxxxx> wrote:
> --- a/xen/drivers/passthrough/iommu.c
> +++ b/xen/drivers/passthrough/iommu.c
> @@ -655,6 +655,82 @@ static void iommu_dump_p2m_table(unsigned char key)
>      }
>  }
> 
> +#ifdef CONFIG_HAS_PCI
> +
> +struct iommu_group {
> +    unsigned int id;
> +    unsigned int index;
> +    struct list_head devs_list;
> +};

Could these additions as a whole go into a new groups.c?

> +int iommu_group_assign(struct pci_dev *pdev)
> +{
> +    const struct iommu_ops *ops;
> +    unsigned int id;
> +    struct iommu_group *grp;
> +
> +    ops = iommu_get_ops();
> +    if ( !ops || !ops->get_device_group_id )

The way iommu_get_ops() works, the left side of the || is pointless.

> +        return 0;
> +
> +    id = ops->get_device_group_id(pdev->seg, pdev->bus, pdev->devfn);
> +    grp = get_iommu_group(id);

I don't think solitary devices should be allocated a group. Also you
don't handle failure of ops->get_device_group_id().

> +    if ( ! grp )

Nit: stray blank.

> --- a/xen/include/xen/pci.h
> +++ b/xen/include/xen/pci.h
> @@ -75,6 +75,9 @@ struct pci_dev {
>      struct list_head alldevs_list;
>      struct list_head domain_list;
> 
> +    struct list_head grpdevs_list;

Does this separate list provide much value? The devices in a group are
going to move between two domain_list-s all in one go, so once you know
the domain of one you'll be able to find the rest by iterating that
domain's list. Is the fear that such an iteration may be tens of
thousands of entries long, and hence become an issue when traversed? I
have no idea how many PCI devices the biggest systems today would have,
but if traversal was an issue, then it would already be with the code
we've got now.

Jan
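
To make the failure-handling remark above concrete, here is a minimal
sketch of how iommu_group_assign() might bail out before allocating
anything. This is not the patch's actual code; it assumes, purely for
illustration, that get_device_group_id() signals failure via a negative
return value and that get_iommu_group() allocates a group on demand as
in the patch. Whether a solitary device should be given a group at all
is left aside here.

    int iommu_group_assign(struct pci_dev *pdev)
    {
        const struct iommu_ops *ops = iommu_get_ops();
        struct iommu_group *grp;
        int id;

        /* iommu_get_ops() cannot return NULL, so only the hook is checked. */
        if ( !ops->get_device_group_id )
            return 0;

        /* Assumption: a negative value means lookup failure / no group. */
        id = ops->get_device_group_id(pdev->seg, pdev->bus, pdev->devfn);
        if ( id < 0 )
            return id;

        grp = get_iommu_group(id);
        if ( !grp )
            return -ENOMEM;

        list_add(&pdev->grpdevs_list, &grp->devs_list);

        return 0;
    }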
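
The iteration hinted at in the last comment could look roughly like the
following. It assumes the owning domain's device list (d->pdev_list
below, linked through the domain_list field quoted above) is what gets
walked, and it recomputes the group ID per device; count_group_members()
is a made-up helper name used only for illustration.

    /* Count pdev's fellow group members by walking its owning domain's list. */
    static unsigned int count_group_members(struct domain *d,
                                            const struct pci_dev *pdev,
                                            int grp_id)
    {
        const struct iommu_ops *ops = iommu_get_ops();
        struct pci_dev *other;
        unsigned int count = 0;

        list_for_each_entry ( other, &d->pdev_list, domain_list )
        {
            if ( other != pdev &&
                 ops->get_device_group_id(other->seg, other->bus,
                                          other->devfn) == grp_id )
                count++;
        }

        return count;
    }

Whether such a walk stays cheap on very large systems is exactly the
open question raised in the comment above.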