
Re: [PATCH 06/10] vpci: Make every domain handle its own BARs



On 13.11.2020 13:41, Oleksandr Andrushchenko wrote:
> 
> On 11/13/20 1:35 PM, Jan Beulich wrote:
>> On 13.11.2020 12:02, Oleksandr Andrushchenko wrote:
>>> On 11/13/20 12:50 PM, Jan Beulich wrote:
>>>> On 13.11.2020 11:46, Oleksandr Andrushchenko wrote:
>>>>> On 11/13/20 12:25 PM, Jan Beulich wrote:
>>>>>> On 13.11.2020 07:32, Oleksandr Andrushchenko wrote:
>>>>>>> I'll try to replace is_hardware_domain with something like:
>>>>>>>
>>>>>>> +/*
>>>>>>> + * Detect whether physical PCI devices in this segment belong
>>>>>>> + * to the domain given, e.g. on x86 all PCI devices live in hwdom,
>>>>>>> + * but in case of ARM this might not be the case: those may also
>>>>>>> + * live in driver domains or even Xen itself.
>>>>>>> + */
>>>>>>> +bool pci_is_hardware_domain(struct domain *d, u16 seg)
>>>>>>> +{
>>>>>>> +#ifdef CONFIG_X86
>>>>>>> +    return is_hardware_domain(d);
>>>>>>> +#elif defined(CONFIG_ARM)
>>>>>>> +    return pci_is_owner_domain(d, seg);
>>>>>>> +#else
>>>>>>> +#error "Unsupported architecture"
>>>>>>> +#endif
>>>>>>> +}
>>>>>>> +
>>>>>>> +/*
>>>>>>> + * Get domain which owns this segment: for x86 this is always hardware
>>>>>>> + * domain and for ARM this can be different.
>>>>>>> + */
>>>>>>> +struct domain *pci_get_hardware_domain(u16 seg)
>>>>>>> +{
>>>>>>> +#ifdef CONFIG_X86
>>>>>>> +    return hardware_domain;
>>>>>>> +#elif defined(CONFIG_ARM)
>>>>>>> +    return pci_get_owner_domain(seg);
>>>>>>> +#else
>>>>>>> +#error "Unsupported architecture"
>>>>>>> +#endif
>>>>>>> +}
>>>>>>>
>>>>>>> This is what I use to properly detect the domain that really owns
>>>>>>> the physical host bridge.
>>>>>> I consider this problematic. We should try to not let Arm's and x86'es
>>>>>> PCI implementations diverge too much, i.e. at least the underlying basic
>>>>>> model would better be similar. For example, if entire segments can be
>>>>>> assigned to a driver domain on Arm, why should the same not be possible
>>>>>> on x86?
>>>>> Good question; probably in this case x86 == ARM, and I can use
>>>>> pci_is_owner_domain for both architectures instead of using
>>>>> is_hardware_domain for x86.
>>>>>
>>>>>> Furthermore I'm suspicious about segments being the right granularity
>>>>>> here. On ia64 multiple host bridges could (and typically would) live
>>>>>> on segment 0. Iirc I had once seen output from an x86 system which was
>>>>>> apparently laid out similarly. Therefore, just like for MCFG, I think
>>>>>> the granularity wants to be bus ranges within a segment.
>>>>> Can you please suggest something we can use as a hint for such a 
>>>>> detection logic?
>>>> The underlying information comes from ACPI tables, iirc. I don't
>>>> recall the details, though - sorry.
>>> Ok, so seg + bus should be enough for both ARM and x86 then, right?
>>>
>>> pci_get_hardware_domain(u16 seg, u8 bus)
>> Whether an individual bus number can suitably express things I can't
>> tell; I did say bus range, but if you care about just individual
>> devices, then a single bus number will of course do.
> 
> I can implement the lookup of whether a PCI host bridge is owned by a
> particular domain with something like:
> 
> struct pci_host_bridge *bridge = pci_find_host_bridge(seg, bus);
> 
> return bridge->dt_node->used_by == d->domain_id;
> 
> Could you please give me a hint how this can be done on x86?

Bridges can't be assigned to other than the hardware domain right
now. Earlier on I didn't say you should get this to work, only
that I think the general logic around what you add shouldn't make
things more arch specific than they really should be. That said,
something similar to the above should still be doable on x86,
utilizing struct pci_seg's bus2bridge[]. There ought to be
DEV_TYPE_PCI_HOST_BRIDGE entries there, albeit a number of them
(provided by the CPUs themselves rather than the chipset) aren't
really host bridges for the purposes you're after.

Jan
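
For illustration, here is a minimal sketch of the seg + bus based query
discussed in this thread. It assumes pci_find_host_bridge() and
dt_node->used_by as used in the Arm snippet above; the
pci_bus2bridge_owner() name in the comment is a hypothetical helper
standing in for the bus2bridge[] walk suggested for x86 and does not
exist in current Xen.

/*
 * Sketch only: report whether the host bridge behind (seg, bus) is
 * owned by the given domain.
 */
bool pci_is_hardware_domain(const struct domain *d, u16 seg, u8 bus)
{
#ifdef CONFIG_ARM
    /* Arm: the bridge's DT node records the owning domain. */
    const struct pci_host_bridge *bridge = pci_find_host_bridge(seg, bus);

    return bridge && bridge->dt_node->used_by == d->domain_id;
#else
    /*
     * x86: bridges can currently only be owned by the hardware domain.
     * A finer grained check could use a (hypothetical)
     * pci_bus2bridge_owner(seg, bus) helper walking struct pci_seg's
     * bus2bridge[] for DEV_TYPE_PCI_HOST_BRIDGE entries.
     */
    return is_hardware_domain(d);
#endif
}

A caller in the vPCI code would then pass the segment and bus of the
device whose BARs are being handled, rather than checking
is_hardware_domain() directly.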
