
Re: [PATCH v18 1/2] xen/arm: translate virtual PCI bus topology for guests


  • To: Julien Grall <julien@xxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Stewart Hildebrand <stewart.hildebrand@xxxxxxx>
  • Date: Fri, 18 Apr 2025 11:39:04 -0400
  • Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko@xxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Jiqian Chen <Jiqian.Chen@xxxxxxx>, Mykyta Poturai <Mykyta_Poturai@xxxxxxxx>
  • Delivery-date: Fri, 18 Apr 2025 15:39:25 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 4/7/25 13:02, Stewart Hildebrand wrote:
> On 3/30/25 17:59, Julien Grall wrote:
>> Hi Stewart,
>>
>> I realize this series is at v18, but there are a few bits security-wise
>> that concern me. They don't have to be handled now, because vPCI is
>> still in tech preview, but we probably want to keep track of any issues
>> if that hasn't already been done in the code.
> 
> No worries, we want to get this right. Thank you for taking a look.
> 
>> On 25/03/2025 17:27, Stewart Hildebrand wrote:
>>> diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
>>> index 1e6aa5d799b9..49c9444195d7 100644
>>> --- a/xen/drivers/vpci/vpci.c
>>> +++ b/xen/drivers/vpci/vpci.c
>>> @@ -81,6 +81,32 @@ static int assign_virtual_sbdf(struct pci_dev *pdev)
>>>       return 0;
>>>   }
>>>   +/*
>>> + * Find the physical device which is mapped to the virtual device
>>> + * and translate virtual SBDF to the physical one.
>>> + */
>>> +bool vpci_translate_virtual_device(struct domain *d, pci_sbdf_t *sbdf)
>>> +{
>>> +    const struct pci_dev *pdev;
>>> +
>>> +    ASSERT(!is_hardware_domain(d));
>>> +    read_lock(&d->pci_lock);
>>> +
>>> +    for_each_pdev ( d, pdev )
>>
>> I can't remember whether this was discussed before (there is no
>> indication in the commit message). What's the maximum number of PCI
>> devices supported? Do we limit it?
>>
>> Asking because iterating through the list could end up being quite
>> expensive.
> 
> 32. See xen/include/xen/vpci.h:
> #define VPCI_MAX_VIRT_DEV       (PCI_SLOT(~0) + 1)
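
(For the record: PCI_SLOT() masks out the 5 slot bits of the devfn byte, so
the limit works out to one virtual device per slot on a single virtual bus.
A quick sketch of the arithmetic, assuming the usual definition from
xen/include/xen/pci.h:)

    #define PCI_SLOT(devfn)  (((devfn) >> 3) & 0x1f)
    /* PCI_SLOT(~0) == 0x1f == 31, hence VPCI_MAX_VIRT_DEV == 32. */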
> 
>> Also, not a change for this patch, but it seems vpci_{read,write} will
>> also do another lookup. This is quite inefficient. We should consider
>> returning a pdev and using it for vpci_{read,write}.
> 
> Hm. Right. Indeed, a future consideration.
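
Something like this, perhaps (only a sketch of a possible future variant, not
part of this series; the caller would then have to hold d->pci_lock for the
duration of the access):

    /*
     * Hypothetical variant: return the matching pdev so that vpci_{read,write}
     * can reuse it instead of doing a second lookup.  The caller must hold
     * d->pci_lock (read) across the whole access.
     */
    const struct pci_dev *vpci_translate_virtual_device(struct domain *d,
                                                        pci_sbdf_t *sbdf)
    {
        const struct pci_dev *pdev;

        ASSERT(!is_hardware_domain(d));
        ASSERT(rw_is_locked(&d->pci_lock));

        for_each_pdev ( d, pdev )
        {
            if ( pdev->vpci && pdev->vpci->guest_sbdf.sbdf == sbdf->sbdf )
            {
                /* Replace guest SBDF with the physical one. */
                *sbdf = pdev->sbdf;
                return pdev;
            }
        }

        return NULL;
    }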
> 
>>> +    {
>>> +        if ( pdev->vpci && (pdev->vpci->guest_sbdf.sbdf == sbdf->sbdf) )
>>> +        {
>>> +            /* Replace guest SBDF with the physical one. */
>>> +            *sbdf = pdev->sbdf;
>>> +            read_unlock(&d->pci_lock);
>>> +            return true;
>>> +        }
>>> +    }
>>> +
>>> +    read_unlock(&d->pci_lock);
>>
>> At the point this is unlocked, what prevents the sbdf from being detached
>> from the domain?
> 
> Nothing.
> 
>> If nothing, would this mean the domain can access something it should
>> not?
> 
> Yep. I have a patch in the works to acquire the lock in the I/O
> handlers. I would prefer doing this in a pre-patch so that we don't
> temporarily introduce the race condition.

I spoke too soon. If the pdev were deassigned right after dropping the
lock here, the access would get rejected in the 2nd pdev lookup inside
vpci_{read,write}, due to the pdev not belonging to the domain any more.

In hindsight, moving the lock (as I did in v19) was not strictly
necessary. Anyway, this can all be simplified by calling
vpci_translate_virtual_device() from within vpci_{read,write}. I'll send
v20 with this approach, then we can decide if we like v18 or v20 better.
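
Roughly what I have in mind for v20, just to illustrate the shape of it (a
simplified sketch of the top of vpci_read(), not the final code; vpci_write()
would gain the same call):

    uint32_t vpci_read(pci_sbdf_t sbdf, unsigned int reg, unsigned int size)
    {
        struct domain *d = current->domain;
        const struct pci_dev *pdev;

        /*
         * Translate the guest SBDF right before the pdev lookup, so there is
         * no window between translation and lookup and no separate call site
         * in the MMIO handlers.
         */
        if ( !is_hardware_domain(d) && !vpci_translate_virtual_device(d, &sbdf) )
            return ~(uint32_t)0;

        pdev = pci_get_pdev(d, sbdf);
        /* ... rest of vpci_read() unchanged ... */
    }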

>>> +    return false;
>>> +}
>>> +
>>>   #endif /* CONFIG_HAS_VPCI_GUEST_SUPPORT */
>>>     void vpci_deassign_device(struct pci_dev *pdev)
>>> diff --git a/xen/include/xen/vpci.h b/xen/include/xen/vpci.h
>>> index 807401b2eaa2..e355329913ef 100644
>>> --- a/xen/include/xen/vpci.h
>>> +++ b/xen/include/xen/vpci.h
>>> @@ -311,6 +311,18 @@ static inline int __must_check vpci_reset_device(struct pci_dev *pdev)
>>>       return vpci_assign_device(pdev);
>>>   }
>>>   +#ifdef CONFIG_HAS_VPCI_GUEST_SUPPORT
>>> +bool vpci_translate_virtual_device(struct domain *d, pci_sbdf_t *sbdf);
>>> +#else
>>> +static inline bool vpci_translate_virtual_device(struct domain *d,
>>> +                                                 pci_sbdf_t *sbdf)
>>> +{
>>> +    ASSERT_UNREACHABLE();
>>> +
>>> +    return false;
>>> +}
>>> +#endif
>>> +
>>>   #endif
>>>     /*
> 
> 
