
Re: [Xen-devel] [PATCH 4/5] iommu: introduce iommu_groups



> -----Original Message-----
[snip]
> >> > --- a/xen/include/xen/pci.h
> >> > +++ b/xen/include/xen/pci.h
> >> > @@ -75,6 +75,9 @@ struct pci_dev {
> >> >      struct list_head alldevs_list;
> >> >      struct list_head domain_list;
> >> >
> >> > +    struct list_head grpdevs_list;
> >>
> >> Does this separate list provide much value? The devices in a group
> >> are going to move between two domain_list-s all in one go, so
> >> once you know the domain of one you'll be able to find the rest by
> >> iterating that domain's list. Is the fear that such an iteration may
> >> be tens of thousands of entries long, and hence become an issue
> >> when traversed? I have no idea how many PCI devices the biggest
> >> systems today would have, but if traversal were an issue, then it
> >> would already be one with the code we've got now.
> >
> > I'd prefer to keep it... It makes the re-implementation of the domctl in the
> > next patch more straightforward.
> 
> I can accept this as the upside. But there's extra storage
> needed (not much, but still), and the more (independent)
> lists devices can be on, the more likely it is that one of
> them gets screwed up at some point (e.g. by forgetting to
> remove a device from one of them prior to de-allocation).

Ok, I'll drop the list and just match on the grp pointer.
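
Something along these lines is what I have in mind -- just a sketch,
not the actual re-work: iommu_group_for_each_dev() is a made-up helper
name, and it assumes pdev->grp carries the group pointer and that the
caller holds the pcidevs lock:

    /*
     * Sketch only. Devices in a group move between domains all in
     * one go, so walking the owning domain's pdev_list and comparing
     * group pointers finds every member of the group without needing
     * a dedicated grpdevs_list.
     */
    static int iommu_group_for_each_dev(struct domain *d,
                                        const struct iommu_group *grp,
                                        int (*fn)(struct pci_dev *, void *),
                                        void *arg)
    {
        struct pci_dev *pdev;
        int rc = 0;

        list_for_each_entry ( pdev, &d->pdev_list, domain_list )
        {
            if ( pdev->grp != grp )
                continue;

            rc = fn(pdev, arg);
            if ( rc )
                break;
        }

        return rc;
    }

That keeps struct pci_dev a list_head smaller and leaves domain_list
as the single place devices are tracked, at the cost of walking the
whole of the owning domain's pdev_list in the domctl.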

  Paul

> 
> Jan
> 

