[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [PATCH] vpci: introduce per-domain lock to protect vpci structure


  • To: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>
  • Date: Fri, 11 Feb 2022 12:14:35 +0000
  • Accept-language: en-US
  • Arc-authentication-results: i=1; mx.microsoft.com 1; spf=none; dmarc=none; dkim=none; arc=none
  • Arc-message-signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=ElQ75QBlI0Kxwoeo9HL4MsI7cdTdXqwa9XEr14uRJsw=; b=Dvd5Rss8brRYW8m5wc2Aa2gZLQoioxBKuHqUzEd+4NsKJQGijTra5Rgbdl9nNKPFS2oI/uEedxoJSRAlSKTplxq7lFivm6briD8JkOofhAxy6++XcAvszM6eZu5E3E4QhfnV+6/JQitWjKBmApBi3oL84LdMbpGDTPmv9KK7wWzC7iYDUBHWMR1TCWVRPwXNB+P4u3WHoFzRmPT5MlVk3B9xdqnWkGU+zE7gTRgiDYsD9kJwtyockZatCMQQsm7q4mWbZi7dwjZr38I6It9eriza5roFLyYs3+WjHQG23WaIGmqRbnHFGnnjIe5a3MCZIMSYH0sNnIh/PuPP+KFDzg==
  • Arc-seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=Fv26pfbdpMOnRE3T7wYassSTmzd5cBZfKnQ8tDDp/PM2aI80uubhN5VxCN8RKtInIgSEWVyrJh+NRhjE35MwCkShQjiki0hoDFXwTWf7IqC3peMxjglOiIhuzXKD/6hVksplTZYyz6TrvY64YLCiEPs/a3w7CmLLDkTPEquQqJALliqp6xUA/1XlkF3VeKwdpJEj7AzuevR47B3nLNNreSyzOyjiRjnJUctFu4Bx4zkwSGzXZCgRc4jusshQ3wP5ae1xwa+VE2EnBGB9dXw3O/3+3v4SOWdazdk96HNFtoRHfh6dfL0RY5H+ss2CEkCj3iQ+GN4UYfzvZRYqYx9Upg==
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "jbeulich@xxxxxxxx" <jbeulich@xxxxxxxx>, "julien@xxxxxxx" <julien@xxxxxxx>, "sstabellini@xxxxxxxxxx" <sstabellini@xxxxxxxxxx>, Oleksandr Tyshchenko <Oleksandr_Tyshchenko@xxxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Artem Mygaiev <Artem_Mygaiev@xxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Rahul Singh <rahul.singh@xxxxxxx>, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>
  • Delivery-date: Fri, 11 Feb 2022 12:14:46 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-index: AQHYHboQT3cBWYI1/EGunE7hOwop6ayM95MAgAEU1ACAADOqgIAABlcA
  • Thread-topic: [PATCH] vpci: introduce per-domain lock to protect vpci structure


On 11.02.22 13:51, Roger Pau Monné wrote:
> On Fri, Feb 11, 2022 at 08:46:59AM +0000, Oleksandr Andrushchenko wrote:
>>
>> On 10.02.22 18:16, Roger Pau Monné wrote:
>>> On Wed, Feb 09, 2022 at 03:36:27PM +0200, Oleksandr Andrushchenko wrote:
>>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@xxxxxxxx>
>>>>
>>>> Introduce a per-domain read/write lock to check whether vpci is present,
>>>> so we are sure there are no accesses to the contents of the vpci struct
>>>> if not. This lock can be used (and in a few cases is used right away)
>>>> so that vpci removal can be performed while holding the lock in write
>>>> mode. Previously such removal could race with vpci_read for example.
>>> Sadly there's still a race in the usage of pci_get_pdev_by_domain wrt
>>> pci_remove_device, and likely when vPCI gets also used in
>>> {de}assign_device I think.
>>>
>> How about the below? It seems to guarantee that we can access pdev
>> without issues and without requiring pcidevs_lock to be used?
> Hm, I'm unsure this is correct.
Yes, we need pcidevs as rwlock in order to solve this reliably...
>   It's in general a bad idea to use a
> per-domain lock approach to protect the consistency of elements moving
> between domains.
>
> In order for this to be safe you will likely need to hold both the
> source and the destination per-domain locks, and then you could also
> get into ABBA lock issues unless you always take the lock in the same
> order.
>
> I think it's safer to use a global lock in this case (pcidevs_lock).
>
>> diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
>> index e8b09d77d880..fd464a58b3b3 100644
>> --- a/xen/drivers/passthrough/pci.c
>> +++ b/xen/drivers/passthrough/pci.c
>> @@ -937,8 +937,14 @@ static int deassign_device(struct domain *d, uint16_t 
>> seg, uint8_t bus,
>>        }
>>
>>        devfn = pdev->devfn;
>> +#ifdef CONFIG_HAS_VPCI
>> +    write_lock(&d->vpci_rwlock);
>> +#endif
>>        ret = iommu_call(hd->platform_ops, reassign_device, d, target, devfn,
>>                         pci_to_dev(pdev));
>> +#ifdef CONFIG_HAS_VPCI
>> +    write_unlock(&d->vpci_rwlock);
>> +#endif
>>        if ( ret )
>>            goto out;
>>
>> @@ -1474,6 +1480,9 @@ static int assign_device(struct domain *d, u16 seg, u8 
>> bus, u8 devfn, u32 flag)
>>        const struct domain_iommu *hd = dom_iommu(d);
>>        struct pci_dev *pdev;
>>        int rc = 0;
>> +#ifdef CONFIG_HAS_VPCI
>> +    struct domain *old_d;
>> +#endif
>>
>>        if ( !is_iommu_enabled(d) )
>>            return 0;
>> @@ -1487,15 +1496,34 @@ static int assign_device(struct domain *d, u16 seg, 
>> u8 bus, u8 devfn, u32 flag)
>>        ASSERT(pdev && (pdev->domain == hardware_domain ||
>>                        pdev->domain == dom_io));
>>
>> +#ifdef CONFIG_HAS_VPCI
>> +    /* pdev->domain is either hwdom or dom_io. We do not want the later. */
>> +    old_d = pdev->domain == hardware_domain ? pdev->domain : NULL;
>> +    if ( old_d )
>> +        write_lock(&old_d->vpci_rwlock);
>> +#endif
>> +
>>        rc = pdev_msix_assign(d, pdev);
> I don't think you need the vpci lock for this operation.
>
>>        if ( rc )
>> +    {
>> +#ifdef CONFIG_HAS_VPCI
>> +        if ( old_d )
>> +            write_unlock(&old_d->vpci_rwlock);
>> +#endif
>>            goto done;
>> +    }
>>
>>        pdev->fault.count = 0;
>>
>>        if ( (rc = iommu_call(hd->platform_ops, assign_device, d, devfn,
>>                              pci_to_dev(pdev), flag)) )
>> +    {
>> +#ifdef CONFIG_HAS_VPCI
>> +        if ( old_d )
>> +            write_unlock(&old_d->vpci_rwlock);
>> +#endif
> Like I've mentioned above, I'm unsure this is correct. You are holding
> the lock of the previous domain, but at some point the device will be
> assigned to a new domain, so that change won't be protected by the
> lock of the new domain.
>
> Thanks, Roger.

 


Rackspace

Lists.xenproject.org is hosted with RackSpace, monitoring our
servers 24x7x365 and backed by RackSpace's Fanatical Support®.