
Re: [Xen-devel] [PATCH 4/5] iommu: introduce iommu_groups



>>> On 31.05.19 at 15:55, <Paul.Durrant@xxxxxxxxxx> wrote:
>> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> Sent: 15 May 2019 15:18
>> 
>> >>> On 08.05.19 at 15:24, <paul.durrant@xxxxxxxxxx> wrote:
>> > +    id = ops->get_device_group_id(pdev->seg, pdev->bus, pdev->devfn);
>> > +    grp = get_iommu_group(id);
>> 
>> I don't think solitary devices should be allocated a group. Also
>> you don't handle failure of ops->get_device_group_id().
> 
> True, it can fail in the VT-d case. Not clear what to do in that case though; 
> I guess assume - for now - that the device gets its own group.
> I think all devices should get a group. The group will ultimately be the 
> unit of assignment to a VM and, in the best case, we *expect* each device to 
> have its own group... it's only when there are quirks, legacy bridges etc. 
> that multiple devices should end up in the same group. This is consistent 
> with Linux's IOMMU groups.

Well, I'm not worried much about consistency with Linux here, as
you're not cloning their implementation anyway (afaict). To me at
this point wrapping individual devices in groups looks like just extra
overhead with no real gain. But, granted, the gain may appear
later.

>> > --- a/xen/include/xen/pci.h
>> > +++ b/xen/include/xen/pci.h
>> > @@ -75,6 +75,9 @@ struct pci_dev {
>> >      struct list_head alldevs_list;
>> >      struct list_head domain_list;
>> >
>> > +    struct list_head grpdevs_list;
>> 
>> Does this separate list provide much value? The devices in a group
>> are going to move between two domain_list-s all in one go, so
>> once you know the domain of one you'll be able to find the rest by
>> iterating that domain's list. Is the fear that such an iteration may
>> be tens of thousands of entries long, and hence become an issue
>> when traversed? I have no idea how many PCI devices the biggest
>> systems today would have, but if traversal was an issue, then it
>> would already be with the code we've got now.
> 
> I'd prefer to keep it... It makes the re-implementation of the domctl in the 
> next patch more straightforward.

I can accept this as the positive side. But there's extra storage
needed (not much, but anyway), and the more (independent)
lists we have that devices can be on, the more likely it'll be that
one of them gets screwed up at some point (e.g. by forgetting
to remove a device from one of them prior to de-allocation).

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
