
Re: [PATCH v3 11/11] xen/arm: Translate virtual PCI bus topology for guests


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>
  • Date: Thu, 30 Sep 2021 16:57:39 +0000
  • Cc: "julien@xxxxxxx" <julien@xxxxxxx>, "sstabellini@xxxxxxxxxx" <sstabellini@xxxxxxxxxx>, Oleksandr Tyshchenko <Oleksandr_Tyshchenko@xxxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Artem Mygaiev <Artem_Mygaiev@xxxxxxxx>, "roger.pau@xxxxxxxxxx" <roger.pau@xxxxxxxxxx>, Oleksandr Andrushchenko <andr2000@xxxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Rahul Singh <rahul.singh@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Thu, 30 Sep 2021 16:57:55 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH v3 11/11] xen/arm: Translate virtual PCI bus topology for guests

[snip]

>> +    bool found = false;
>> +
>> +    pcidevs_lock();
>> +    list_for_each_entry ( vdev, &d->vdev_list, list )
>> +    {
>> +        if ( vdev->sbdf.sbdf == sbdf->sbdf )
>> +        {
>> +            /* Replace virtual SBDF with the physical one. */
>> +            *sbdf = vdev->pdev->sbdf;
>> +            found = true;
>> +            break;
>> +        }
>> +    }
>> +    pcidevs_unlock();
> As per the comments on the earlier patch, locking as well as placement
> may need reconsidering.
I have been thinking about the locking here.
There are four places where we need to manipulate d->vdev_list:
1. XEN_DOMCTL_assign_device
2. XEN_DOMCTL_test_assign_device
3. XEN_DOMCTL_deassign_device
4. MMIO handlers
Am I missing any others?

The first three already use pcidevs_{lock|unlock}, and that seems fine
there: they are called when PCI devices are discovered by Dom0 and during
guest domain creation. So they are assumed to run infrequently, and the
global locking is acceptable for them.

What is more important is the fourth case: in order to redirect a
configuration space access from a virtual SBDF to the physical SBDF, we
need to traverse d->vdev_list every time the guest accesses PCI
configuration space. This means that each such access takes the big,
global PCI lock...

That being said, I think we may want a dedicated per-domain lock for
d->vdev_list handling, e.g. d->vdev_lock (a rough sketch follows below).
At the same time, we may also consider that even for guests it is
acceptable to use pcidevs_{lock|unlock}: configuration space accesses do
not affect PCI memory space accesses and mostly matter during device setup.
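
For illustration, here is a rough sketch of what the per-domain lock
variant could look like. The names (d->vdev_lock, struct vpci_dev,
vpci_translate_virtual_device) and the choice of an rwlock are my
assumptions for the sake of the example, not something already in the
series:

    /* struct domain gains a dedicated lock protecting vdev_list. */
    struct list_head vdev_list;
    rwlock_t vdev_lock;          /* rwlock_init()'ed at domain creation */

    /* Hot path (MMIO handlers): readers only, so concurrent config
     * space accesses from the guest do not serialize on each other. */
    bool vpci_translate_virtual_device(struct domain *d, pci_sbdf_t *sbdf)
    {
        struct vpci_dev *vdev;
        bool found = false;

        read_lock(&d->vdev_lock);
        list_for_each_entry ( vdev, &d->vdev_list, list )
        {
            if ( vdev->sbdf.sbdf == sbdf->sbdf )
            {
                /* Replace virtual SBDF with the physical one. */
                *sbdf = vdev->pdev->sbdf;
                found = true;
                break;
            }
        }
        read_unlock(&d->vdev_lock);

        return found;
    }

    /* Cold path (XEN_DOMCTL_{assign,deassign}_device): take the write
     * lock around list manipulation, in addition to pcidevs_lock which
     * these paths already hold. */
    write_lock(&d->vdev_lock);
    list_add_tail(&vdev->list, &d->vdev_list);
    write_unlock(&d->vdev_lock);

This way the guest's config space accesses would only contend with
(de)assignment of devices to the same domain, rather than with every
user of the global pcidevs lock.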

I would love to hear your opinion on this.
> Jan
>
Thank you,
Oleksandr

 

