
Re: [PATCH v8 11/13] vpci: add initial support for virtual PCI bus topology


  • To: Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Thu, 20 Jul 2023 08:50:54 +0200
  • Cc: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Thu, 20 Jul 2023 06:51:04 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 20.07.2023 02:32, Volodymyr Babchuk wrote:
> --- a/xen/drivers/vpci/vpci.c
> +++ b/xen/drivers/vpci/vpci.c
> @@ -46,6 +46,16 @@ void vpci_remove_device(struct pci_dev *pdev)
>          return;
>  
>      spin_lock(&pdev->vpci->lock);
> +
> +#ifdef CONFIG_HAS_VPCI_GUEST_SUPPORT
> +    if ( pdev->vpci->guest_sbdf.sbdf != ~0 )
> +    {
> +        __clear_bit(pdev->vpci->guest_sbdf.dev,
> +                    &pdev->domain->vpci_dev_assigned_map);
> +        pdev->vpci->guest_sbdf.sbdf = ~0;
> +    }
> +#endif

The lock acquired above is not ...

> @@ -115,6 +129,54 @@ int vpci_add_handlers(struct pci_dev *pdev)
>  }
>  
>  #ifdef CONFIG_HAS_VPCI_GUEST_SUPPORT
> +static int add_virtual_device(struct pci_dev *pdev)
> +{
> +    struct domain *d = pdev->domain;
> +    pci_sbdf_t sbdf = { 0 };
> +    unsigned long new_dev_number;
> +
> +    if ( is_hardware_domain(d) )
> +        return 0;
> +
> +    ASSERT(pcidevs_locked());
> +
> +    /*
> +     * Each PCI bus supports 32 devices/slots at max or up to 256 when
> +     * there are multi-function ones which are not yet supported.
> +     */
> +    if ( pdev->info.is_extfn )
> +    {
> +        gdprintk(XENLOG_ERR, "%pp: only function 0 passthrough supported\n",
> +                 &pdev->sbdf);
> +        return -EOPNOTSUPP;
> +    }
> +
> +    write_lock(&pdev->domain->pci_lock);
> +    new_dev_number = find_first_zero_bit(d->vpci_dev_assigned_map,
> +                                         VPCI_MAX_VIRT_DEV);
> +    if ( new_dev_number >= VPCI_MAX_VIRT_DEV )
> +    {
> +        write_unlock(&pdev->domain->pci_lock);
> +        return -ENOSPC;
> +    }
> +
> +    __set_bit(new_dev_number, &d->vpci_dev_assigned_map);

... the same as the one held here, so the bitmap still isn't properly
protected afaics, unless the intention is to continue to rely on
the global PCI lock (assuming that one is held in both cases, which I
didn't check). Conversely it looks like the vPCI lock isn't
held here. Both aspects may be intentional, but the difference in the
locks acquired requires suitable code comments imo.

I've also briefly looked at patch 1, and I'm afraid that still lacks
commentary about intended lock nesting. That might be relevant here
in case the locking visible from the patch / patch context doesn't
provide the full picture.

> +    /*
> +     * Both segment and bus number are 0:
> +     *  - we emulate a single host bridge for the guest, e.g. segment 0
> +     *  - with bus 0 the virtual devices are seen as embedded
> +     *    endpoints behind the root complex
> +     *
> +     * TODO: add support for multi-function devices.
> +     */
> +    sbdf.devfn = PCI_DEVFN(new_dev_number, 0);
> +    pdev->vpci->guest_sbdf = sbdf;
> +    write_unlock(&pdev->domain->pci_lock);

With the above I also wonder whether this lock can't (and hence
should) be dropped a little earlier (right after fiddling with the
bitmap).

Jan



 

