Re: [PATCH v3 11/11] xen/arm: Translate virtual PCI bus topology for guests
On 01.10.2021 09:57, Oleksandr Andrushchenko wrote:
>
>
> On 01.10.21 10:42, Jan Beulich wrote:
>> On 30.09.2021 18:57, Oleksandr Andrushchenko wrote:
>>> [snip]
>>>
>>>>> +    bool found = false;
>>>>> +
>>>>> +    pcidevs_lock();
>>>>> +    list_for_each_entry ( vdev, &d->vdev_list, list )
>>>>> +    {
>>>>> +        if ( vdev->sbdf.sbdf == sbdf->sbdf )
>>>>> +        {
>>>>> +            /* Replace virtual SBDF with the physical one. */
>>>>> +            *sbdf = vdev->pdev->sbdf;
>>>>> +            found = true;
>>>>> +            break;
>>>>> +        }
>>>>> +    }
>>>>> +    pcidevs_unlock();
>>>> As per the comments on the earlier patch, locking as well as placement
>>>> may need reconsidering.
>>> I was thinking about the locking happening here.
>>> So, there are four places where we need to manipulate d->vdev_list:
>>> 1. XEN_DOMCTL_assign_device
>>> 2. XEN_DOMCTL_test_assign_device
>>> 3. XEN_DOMCTL_deassign_device
>>> 4. MMIO handlers
>>> Am I missing any others?
>>>
>>> The first three already use pcidevs_{lock|unlock}, which seems
>>> acceptable there: they are called when PCI devices are discovered by
>>> Dom0 and during guest domain creation, so they are not expected to
>>> run frequently and the global lock is tolerable.
>>>
>>> The fourth case is more important: in order to redirect a
>>> configuration space access from the virtual SBDF to the physical SBDF
>>> we need to traverse d->vdev_list on every guest access to PCI
>>> configuration space. This means that each such access takes a BIG PCI lock...
>>>
>>> That being said, I think we may want a dedicated per-domain
>>> lock for d->vdev_list handling, e.g. d->vdev_lock.
>>> Alternatively, we might consider that even for guests it is acceptable
>>> to use pcidevs_{lock|unlock}, as this does not affect PCI memory space
>>> accesses and only matters during device setup.
>>>
>>> I would love to hear your opinion on this.
>> I've voiced my opinion already: Using the global lock really is an
>> abuse, which would require good justification. Hence unless there's
>> anything speaking against a per-domain lock, that's imo the only
>> suitable route to go. Nesting rules with the global lock may want
>> explicitly spelling out.
> I do understand your concern here and also support the idea that
> the less we wait on locks the better. Nevertheless, even if I introduce
> d->vdev_lock, which will obviously help the MMIO traps, the rest will
> remain under pcidevs_{lock|unlock}, e.g. XEN_DOMCTL_assign_device,
> XEN_DOMCTL_test_assign_device and XEN_DOMCTL_deassign_device,
> and the underlying code such as vpci_{assign|deassign}_device in my case.
Well, it's entirely usual that certain operations require more than one
lock.
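To make the intended scheme concrete, below is a simplified, self-contained model (plain C with pthreads, deliberately not Xen code) of the locking split being discussed: the MMIO fast path takes only the proposed per-domain d->vdev_lock, while the (de)assign slow path takes the global pcidevs_lock first and the per-domain lock second, spelling out the nesting rule. The function names vpci_translate_sbdf() and vpci_assign_vdev() and the flat SBDF fields are hypothetical illustrations, not Xen APIs.

```c
#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

/* Global lock, standing in for Xen's pcidevs_lock(). */
static pthread_mutex_t pcidevs_lock = PTHREAD_MUTEX_INITIALIZER;

struct vpci_dev {
    uint32_t vsbdf;            /* virtual SBDF seen by the guest */
    uint32_t psbdf;            /* physical SBDF of the real device */
    struct vpci_dev *next;
};

struct domain {
    pthread_mutex_t vdev_lock; /* the proposed per-domain d->vdev_lock */
    struct vpci_dev *vdev_list;
};

/*
 * Fast path (MMIO trap handler): only the per-domain lock is taken,
 * so config-space accesses no longer serialise on the global lock.
 */
static bool vpci_translate_sbdf(struct domain *d, uint32_t *sbdf)
{
    bool found = false;
    struct vpci_dev *vdev;

    pthread_mutex_lock(&d->vdev_lock);
    for ( vdev = d->vdev_list; vdev; vdev = vdev->next )
    {
        if ( vdev->vsbdf == *sbdf )
        {
            /* Replace the virtual SBDF with the physical one. */
            *sbdf = vdev->psbdf;
            found = true;
            break;
        }
    }
    pthread_mutex_unlock(&d->vdev_lock);

    return found;
}

/*
 * Slow path (e.g. XEN_DOMCTL_assign_device): nesting rule spelled out,
 * the global lock is always taken first, the per-domain lock second.
 */
static bool vpci_assign_vdev(struct domain *d, uint32_t vsbdf, uint32_t psbdf)
{
    struct vpci_dev *vdev = malloc(sizeof(*vdev));

    if ( !vdev )
        return false;

    vdev->vsbdf = vsbdf;
    vdev->psbdf = psbdf;

    pthread_mutex_lock(&pcidevs_lock); /* 1st: global */
    pthread_mutex_lock(&d->vdev_lock); /* 2nd: per-domain */
    vdev->next = d->vdev_list;
    d->vdev_list = vdev;
    pthread_mutex_unlock(&d->vdev_lock);
    pthread_mutex_unlock(&pcidevs_lock);

    return true;
}
```

Keeping the order fixed (global before per-domain) on every path that needs both locks is what prevents an ABBA deadlock between an assign operation and a concurrent trap handler.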
> Anyways, I'll implement a per-domain d->vdev_lock
Thanks.
Jan