
Re: [PATCH v3 11/11] xen/arm: Translate virtual PCI bus topology for guests


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>
  • Date: Fri, 1 Oct 2021 07:57:18 +0000
  • Cc: "julien@xxxxxxx" <julien@xxxxxxx>, "sstabellini@xxxxxxxxxx" <sstabellini@xxxxxxxxxx>, Oleksandr Tyshchenko <Oleksandr_Tyshchenko@xxxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Artem Mygaiev <Artem_Mygaiev@xxxxxxxx>, "roger.pau@xxxxxxxxxx" <roger.pau@xxxxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Rahul Singh <rahul.singh@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>
  • Delivery-date: Fri, 01 Oct 2021 07:57:33 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>


On 01.10.21 10:42, Jan Beulich wrote:
> On 30.09.2021 18:57, Oleksandr Andrushchenko wrote:
>> [snip]
>>
>>>> +    bool found = false;
>>>> +
>>>> +    pcidevs_lock();
>>>> +    list_for_each_entry ( vdev, &d->vdev_list, list )
>>>> +    {
>>>> +        if ( vdev->sbdf.sbdf == sbdf->sbdf )
>>>> +        {
>>>> +            /* Replace virtual SBDF with the physical one. */
>>>> +            *sbdf = vdev->pdev->sbdf;
>>>> +            found = true;
>>>> +            break;
>>>> +        }
>>>> +    }
>>>> +    pcidevs_unlock();
>>> As per the comments on the earlier patch, locking as well as placement
>>> may need reconsidering.
>> I was thinking about the locking happening here.
>> So, there are at least four places where we need to manipulate d->vdev_list:
>> 1. XEN_DOMCTL_assign_device
>> 2. XEN_DOMCTL_test_assign_device
>> 3. XEN_DOMCTL_deassign_device
>> 4. MMIO handlers
>> Am I missing any others?
>>
>> The first three already use pcidevs_{lock|unlock}, and there it seems
>> acceptable: they are called when PCI devices are discovered by Dom0
>> and during guest domain creation. So this is assumed not to happen
>> frequently and can be tolerated with respect to global locking.
>>
>> What is more important is the fourth case: in order to redirect a
>> configuration space access from a virtual SBDF to the physical SBDF we
>> need to traverse d->vdev_list every time the guest accesses PCI
>> configuration space. This means that each such access takes the big
>> global PCI lock...
>>
>> That being said, I think we may want a dedicated per-domain lock for
>> d->vdev_list handling, e.g. d->vdev_lock.
>> At the same time we may also consider that even for guests it is acceptable
>> to use pcidevs_{lock|unlock}, as this does not affect PCI memory space
>> accesses and only has influence during device setup.
>>
>> I would love to hear your opinion on this.
> I've voiced my opinion already: Using the global lock really is an
> abuse, which would require good justification. Hence unless there's
> anything speaking against a per-domain lock, that's imo the only
> suitable route to go. Nesting rules with the global lock may want
> explicitly spelling out.
I do understand your concern here and also support the idea that
the less we wait on locks the better. Nevertheless, even if I introduce
d->vdev_lock, which will obviously help the MMIO traps, the rest will
remain under pcidevs_{lock|unlock}, e.g. XEN_DOMCTL_assign_device,
XEN_DOMCTL_test_assign_device and XEN_DOMCTL_deassign_device,
and the underlying code such as vpci_{assign|deassign}_device in my case.

Anyway, I'll implement a per-domain d->vdev_lock.
>
> Jan
>
Thank you,
Oleksandr
