
Re: [PATCH v2 3/4] vpci: use pcidevs locking to protect MMIO handlers


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • Date: Tue, 9 Aug 2022 20:33:02 +0000
  • Accept-language: en-US
  • Cc: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Tue, 09 Aug 2022 20:33:40 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH v2 3/4] vpci: use pcidevs locking to protect MMIO handlers

Hello Jan,

Jan Beulich <jbeulich@xxxxxxxx> writes:

> On 18.07.2022 23:15, Volodymyr Babchuk wrote:
>> --- a/xen/arch/x86/hvm/vmsi.c
>> +++ b/xen/arch/x86/hvm/vmsi.c
>> @@ -891,10 +891,16 @@ void vpci_msix_arch_init_entry(struct vpci_msix_entry *entry)
>>      entry->arch.pirq = INVALID_PIRQ;
>>  }
>>  
>> -int vpci_msix_arch_print(const struct vpci_msix *msix)
>> +int vpci_msix_arch_print(const struct domain *d, const struct vpci_msix *msix)
>
> I don't think the extra parameter is needed:
>
>> @@ -911,11 +917,23 @@ int vpci_msix_arch_print(const struct vpci_msix *msix)
>>          if ( i && !(i % 64) )
>>          {
>>              struct pci_dev *pdev = msix->pdev;
>
> You get hold of pdev here, and hence you can take the domain from pdev.

Yes, makes sense.

>> +            pci_sbdf_t sbdf = pdev->sbdf;
>>  
>>              spin_unlock(&msix->pdev->vpci->lock);
>> +            pcidevs_read_unlock();
>> +
>> +            /* NB: we still hold rcu_read_lock(&domlist_read_lock); here. */
>>              process_pending_softirqs();
>> -            /* NB: we assume that pdev cannot go away for an alive domain. */
>
> I think this comment wants retaining, as the new one you add is about
> a different aspect.
>
>> -            if ( !pdev->vpci || !spin_trylock(&pdev->vpci->lock) )
>> +
>> +            if ( !pcidevs_read_trylock() )
>> +                return -EBUSY;
>> +            pdev = pci_get_pdev_by_domain(d, sbdf.seg, sbdf.bus, sbdf.devfn);
>> +            /*
>> +             * FIXME: we may find a re-allocated pdev's copy here.
>> +             * Even occupying the same address as before. Do our best.
>> +             */
>> +            if ( !pdev || (pdev != msix->pdev) || !pdev->vpci ||
>
> Despite the comment: What guarantees that msix isn't a dangling pointer
> at this point? At the very least I think you need to check !pdev->vpci
> first. And I'm afraid I don't view "do our best" as good enough here
> (considering the patch doesn't carry an RFC tag). And no, I don't have
> any good suggestion other than "our PCI device locking needs a complete
> overhaul". Quite likely what we need is a refcounter per device, which
> - as long as non-zero - prevents removal.

A refcounter itself is a good idea, but I don't like where all this is
going. We are already reworking the locking by adding rw-locks with
counters; adding a refcounter on top of that will complicate things even
further.

I'm starting to think that a complete rework of the PCI device locking
may actually be the simpler solution. By any chance, was there any prior
discussion on what proper locking should look like?

>
>> +                 !spin_trylock(&pdev->vpci->lock) )
>>                  return -EBUSY;
>
> Don't you need to drop the pcidevs lock on this error path?

Yeah, you are right.

>
>> @@ -450,10 +465,15 @@ static int cf_check init_bars(struct pci_dev *pdev)
>>      uint16_t cmd;
>>      uint64_t addr, size;
>>      unsigned int i, num_bars, rom_reg;
>> -    struct vpci_header *header = &pdev->vpci->header;
>> -    struct vpci_bar *bars = header->bars;
>> +    struct vpci_header *header;
>> +    struct vpci_bar *bars;
>>      int rc;
>>  
>> +    ASSERT(pcidevs_write_locked());
>> +
>> +    header = &pdev->vpci->header;
>> +    bars = header->bars;
>
> I'm not convinced the code movement here does us any good. (Same
> apparently elsewhere below.)
>
>> @@ -277,6 +282,9 @@ void vpci_dump_msi(void)
>>  
>>          printk("vPCI MSI/MSI-X d%d\n", d->domain_id);
>>  
>> +        if ( !pcidevs_read_trylock() )
>> +            continue;
>
> Note how this lives ahead of ...
>
>>          for_each_pdev ( d, pdev )
>>          {
>
> ... the loop, while ...
>
>> @@ -310,7 +318,7 @@ void vpci_dump_msi(void)
>>                  printk("  entries: %u maskall: %d enabled: %d\n",
>>                         msix->max_entries, msix->masked, msix->enabled);
>>  
>> -                rc = vpci_msix_arch_print(msix);
>> +                rc = vpci_msix_arch_print(d, msix);
>>                  if ( rc )
>>                  {
>>                      /*
>> @@ -318,12 +326,13 @@ void vpci_dump_msi(void)
>>                       * holding the lock.
>>                       */
>>                      printk("unable to print all MSI-X entries: %d\n", rc);
>> -                    process_pending_softirqs();
>> -                    continue;
>> +                    goto pdev_done;
>>                  }
>>              }
>>  
>>              spin_unlock(&pdev->vpci->lock);
>> + pdev_done:
>> +            pcidevs_read_unlock();
>
> ... this is still inside the loop body.
>
>> @@ -332,10 +334,14 @@ uint32_t vpci_read(pci_sbdf_t sbdf, unsigned int reg, unsigned int size)
>>          return data;
>>      }
>>  
>> +    pcidevs_read_lock();
>>      /* Find the PCI dev matching the address. */
>>      pdev = pci_get_pdev_by_domain(d, sbdf.seg, sbdf.bus, sbdf.devfn);
>> -    if ( !pdev )
>> +    if ( !pdev || (pdev && !pdev->vpci) )
>
> Simpler
>
>     if ( !pdev || !pdev->vpci )
>
> ?
>
>> @@ -381,6 +387,7 @@ uint32_t vpci_read(pci_sbdf_t sbdf, unsigned int reg, unsigned int size)
>>          ASSERT(data_offset < size);
>>      }
>>      spin_unlock(&pdev->vpci->lock);
>> +    pcidevs_read_unlock();
>
> I guess this is too early and wants to come after ...
>
>>      if ( data_offset < size )
>>      {
>
> ... this if, which - even if it doesn't use pdev - still accesses the
> device.
>
> Both comments equally apply to vpci_write().
>
>> --- a/xen/include/xen/pci.h
>> +++ b/xen/include/xen/pci.h
>> @@ -161,6 +161,7 @@ void pcidevs_unlock(void);
>>  bool __must_check pcidevs_locked(void);
>>  
>>  void pcidevs_read_lock(void);
>> +int pcidevs_read_trylock(void);
>
> This declaration wants adding alongside the introduction of the
> function or, if the series was structured that way, at the time of the
> dropping of "static" from the function (which from a Misra perspective
> would likely be better).
>
> Jan


-- 
Volodymyr Babchuk at EPAM


 

