
Re: [PATCH v8 11/13] vpci: add initial support for virtual PCI bus topology


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • Date: Fri, 21 Jul 2023 00:43:48 +0000
  • Cc: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Fri, 21 Jul 2023 00:44:08 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH v8 11/13] vpci: add initial support for virtual PCI bus topology

Hi Jan,

Jan Beulich <jbeulich@xxxxxxxx> writes:

> On 20.07.2023 02:32, Volodymyr Babchuk wrote:
>> --- a/xen/drivers/vpci/vpci.c
>> +++ b/xen/drivers/vpci/vpci.c
>> @@ -46,6 +46,16 @@ void vpci_remove_device(struct pci_dev *pdev)
>>          return;
>>  
>>      spin_lock(&pdev->vpci->lock);
>> +
>> +#ifdef CONFIG_HAS_VPCI_GUEST_SUPPORT
>> +    if ( pdev->vpci->guest_sbdf.sbdf != ~0 )
>> +    {
>> +        __clear_bit(pdev->vpci->guest_sbdf.dev,
>> +                    &pdev->domain->vpci_dev_assigned_map);
>> +        pdev->vpci->guest_sbdf.sbdf = ~0;
>> +    }
>> +#endif
>
> The lock acquired above is not ...

vpci_remove_device() is called when d->pci_lock is already held.

But I'll move this hunk before the spin_lock(&pdev->vpci->lock) call; we
don't need to hold that lock while clearing vpci_dev_assigned_map.
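Roughly like this (untested sketch; surrounding checks abbreviated, names as
in this series):

    void vpci_remove_device(struct pci_dev *pdev)
    {
        ASSERT(rw_is_write_locked(&pdev->domain->pci_lock));

        if ( !pdev->vpci )  /* existing checks, abbreviated */
            return;

    #ifdef CONFIG_HAS_VPCI_GUEST_SUPPORT
        /* d->pci_lock is write-locked here, which is all the bitmap needs. */
        if ( pdev->vpci->guest_sbdf.sbdf != ~0 )
        {
            __clear_bit(pdev->vpci->guest_sbdf.dev,
                        &pdev->domain->vpci_dev_assigned_map);
            pdev->vpci->guest_sbdf.sbdf = ~0;
        }
    #endif

        spin_lock(&pdev->vpci->lock);
        /* ... rest of the teardown, under the per-device vpci lock ... */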

>> @@ -115,6 +129,54 @@ int vpci_add_handlers(struct pci_dev *pdev)
>>  }
>>  
>>  #ifdef CONFIG_HAS_VPCI_GUEST_SUPPORT
>> +static int add_virtual_device(struct pci_dev *pdev)
>> +{
>> +    struct domain *d = pdev->domain;
>> +    pci_sbdf_t sbdf = { 0 };
>> +    unsigned long new_dev_number;
>> +
>> +    if ( is_hardware_domain(d) )
>> +        return 0;
>> +
>> +    ASSERT(pcidevs_locked());
>> +
>> +    /*
>> +     * Each PCI bus supports 32 devices/slots at max or up to 256 when
>> +     * there are multi-function ones which are not yet supported.
>> +     */
>> +    if ( pdev->info.is_extfn )
>> +    {
>> +        gdprintk(XENLOG_ERR, "%pp: only function 0 passthrough supported\n",
>> +                 &pdev->sbdf);
>> +        return -EOPNOTSUPP;
>> +    }
>> +
>> +    write_lock(&pdev->domain->pci_lock);
>> +    new_dev_number = find_first_zero_bit(d->vpci_dev_assigned_map,
>> +                                         VPCI_MAX_VIRT_DEV);
>> +    if ( new_dev_number >= VPCI_MAX_VIRT_DEV )
>> +    {
>> +        write_unlock(&pdev->domain->pci_lock);
>> +        return -ENOSPC;
>> +    }
>> +
>> +    __set_bit(new_dev_number, &d->vpci_dev_assigned_map);
>
> ... the same as the one held here, so the bitmap still isn't properly
> protected afaics, unless the intention is to continue to rely on
> the global PCI lock (assuming that one's held in both cases, which I
> didn't check it is). Conversely it looks like the vPCI lock isn't
> held here. Both aspects may be intentional, but the locks being
> acquired differing requires suitable code comments imo.

As I stated above, vpci_remove_device() is called when d->pci_lock is
already held.


> I've also briefly looked at patch 1, and I'm afraid that still lacks
> commentary about intended lock nesting. That might be relevant here
> in case locking visible from patch / patch context isn't providing
> the full picture.
>

There is
    ASSERT(rw_is_write_locked(&pdev->domain->pci_lock));
at the beginning of vpci_remove_device(), which is added by
"vpci: use per-domain PCI lock to protect vpci structure".

I believe it would be more beneficial to review the series from the
beginning.

>> +    /*
>> +     * Both segment and bus number are 0:
>> +     *  - we emulate a single host bridge for the guest, e.g. segment 0
>> +     *  - with bus 0 the virtual devices are seen as embedded
>> +     *    endpoints behind the root complex
>> +     *
>> +     * TODO: add support for multi-function devices.
>> +     */
>> +    sbdf.devfn = PCI_DEVFN(new_dev_number, 0);
>> +    pdev->vpci->guest_sbdf = sbdf;
>> +    write_unlock(&pdev->domain->pci_lock);
>
> With the above I also wonder whether this lock can't (and hence
> should) be dropped a little earlier (right after fiddling with the
> bitmap).

That's a good observation, thanks.
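Reworked that way it would look roughly like this (untested sketch; it
assumes the guest_sbdf assignment itself doesn't need d->pci_lock):

    write_lock(&pdev->domain->pci_lock);
    new_dev_number = find_first_zero_bit(d->vpci_dev_assigned_map,
                                         VPCI_MAX_VIRT_DEV);
    if ( new_dev_number >= VPCI_MAX_VIRT_DEV )
    {
        write_unlock(&pdev->domain->pci_lock);
        return -ENOSPC;
    }

    __set_bit(new_dev_number, &d->vpci_dev_assigned_map);
    /* The bitmap is the only state needing d->pci_lock here. */
    write_unlock(&pdev->domain->pci_lock);

    sbdf.devfn = PCI_DEVFN(new_dev_number, 0);
    pdev->vpci->guest_sbdf = sbdf;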

-- 
WBR, Volodymyr


 

