
Re: [PATCH v13.2 01/14] vpci: use per-domain PCI lock to protect vpci structure


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Stewart Hildebrand <stewart.hildebrand@xxxxxxx>
  • Date: Mon, 19 Feb 2024 09:14:12 -0500
  • Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Jun Nakajima <jun.nakajima@xxxxxxxxx>, Kevin Tian <kevin.tian@xxxxxxxxx>, Paul Durrant <paul@xxxxxxx>, Volodymyr Babchuk <volodymyr_babchuk@xxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Mon, 19 Feb 2024 14:14:26 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 2/19/24 08:12, Jan Beulich wrote:
> On 19.02.2024 13:47, Stewart Hildebrand wrote:
>> On 2/19/24 07:10, Jan Beulich wrote:
>>> On 19.02.2024 12:47, Stewart Hildebrand wrote:
>>>> @@ -895,6 +891,15 @@ int vpci_msix_arch_print(const struct vpci_msix *msix)
>>>>  {
>>>>      unsigned int i;
>>>>  
>>>> +    /*
>>>> +     * Assert that d->pdev_list doesn't change. ASSERT_PDEV_LIST_IS_READ_LOCKED
>>>> +     * is not suitable here because it may allow either pcidevs_lock() or
>>>> +     * d->pci_lock to be held, but here we rely on d->pci_lock being held, not
>>>> +     * pcidevs_lock().
>>>> +     */
>>>> +    ASSERT(rw_is_locked(&msix->pdev->domain->pci_lock));
>>>> +    ASSERT(spin_is_locked(&msix->pdev->vpci->lock));
>>>
>>> There's no "d" in sight here, so it's a little odd that "d" is being talked
>>> about. But I guess people can infer what's meant without too much trouble.
>>
>> I can s/d->pci_lock/msix->pdev->domain->pci_lock/ for the next rev.
> 
> Or simply drop the d-s? That would be better for readability's sake,
> I think.

OK
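
Something like this, then, i.e. keeping the assertions and just dropping
the d-s from the comment (sketch only, final wording to be settled in the
next rev):

    /*
     * Assert that pdev_list doesn't change. ASSERT_PDEV_LIST_IS_READ_LOCKED
     * is not suitable here because it may allow either pcidevs_lock() or
     * pci_lock to be held, but here we rely on pci_lock being held, not
     * pcidevs_lock().
     */
    ASSERT(rw_is_locked(&msix->pdev->domain->pci_lock));
    ASSERT(spin_is_locked(&msix->pdev->vpci->lock));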

>>>> @@ -313,17 +316,36 @@ void vpci_dump_msi(void)
>>>>                  {
>>>>                      /*
>>>>                      * On error vpci_msix_arch_print will always return without
>>>> -                     * holding the lock.
>>>> +                     * holding the locks.
>>>>                       */
>>>>                      printk("unable to print all MSI-X entries: %d\n", rc);
>>>> -                    process_pending_softirqs();
>>>> -                    continue;
>>>> +                    goto pdev_done;
>>>>                  }
>>>>              }
>>>>  
>>>> +            /*
>>>> +             * Unlock locks to process pending softirqs. This is
>>>> +             * potentially unsafe, as d->pdev_list can be changed in
>>>> +             * meantime.
>>>> +             */
>>>>              spin_unlock(&pdev->vpci->lock);
>>>> +            read_unlock(&d->pci_lock);
>>>> +        pdev_done:
>>>>              process_pending_softirqs();
>>>> +            if ( !read_trylock(&d->pci_lock) )
>>>> +            {
>>>> +                printk("unable to access other devices for the domain\n");
>>>> +                goto domain_done;
>>>> +            }
>>>>          }
>>>> +        read_unlock(&d->pci_lock);
>>>> +    domain_done:
>>>> +        /*
>>>> +         * We need this label at the end of the loop, but some
>>>> +         * compilers might not be happy about label at the end of the
>>>> +         * compound statement so we adding an empty statement here.
>>>> +         */
>>>> +        ;
>>>
>>> As to "some compilers": Are there any which accept a label not followed
>>> by a statement? Depending on the answer, this comment may be viewed as
>>> superfluous. Or else I'd ask about wording: Besides a grammar issue I
>>> also don't view it as appropriate that a comment talks about "adding"
>>> something when it's the adjacent code that is meant. That something is there
>>> when the comment is there, hence respective wording should imo be used.
>>
>> It seems like hit or miss whether gcc would accept it or not (prior
>> discussion at [1]). I agree the comment is rather lengthy for what it's
>> trying to convey. I'd be happy to either remove the comment or reduce
>> it to:
>>
>>     domain_done:
>>         ; /* Empty statement to make some compilers happy */
>>
>> [1] https://lore.kernel.org/xen-devel/98b8c131-b0b9-f46c-5f46-c2136f2e3b4e@xxxxxxx/
> 
> This earlier discussion only proves that there is at least one compiler
> objecting. There's no proof there that any compiler exists which, as a
> language extension, actually permits such syntax. Yet if the comment
> was purely about normal language syntax, then imo it should be zapped
> altogether, not just be shrunk.

I'll zap it
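
I.e. the tail of the loop would then be just (sketch, assuming nothing else
changes there; the bare ';' stays, since the label still needs a statement
to attach to):

        read_unlock(&d->pci_lock);
    domain_done:
        ;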



 

