
Re: [PATCH v18 1/2] xen/arm: translate virtual PCI bus topology for guests


  • To: Julien Grall <julien@xxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Stewart Hildebrand <stewart.hildebrand@xxxxxxx>
  • Date: Mon, 7 Apr 2025 13:02:38 -0400
  • Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko@xxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Jiqian Chen <Jiqian.Chen@xxxxxxx>, Mykyta Poturai <Mykyta_Poturai@xxxxxxxx>
  • Delivery-date: Mon, 07 Apr 2025 17:02:59 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 3/30/25 17:59, Julien Grall wrote:
> Hi Stewart,
> 
> I realize this series is at v18, but there are a few security-wise
> bits that concern me. They don't have to be handled now because vPCI
> is still in tech preview, but we probably want to keep track of any
> issues that haven't yet been addressed in the code.

No worries, we want to get this right. Thank you for taking a look.

> On 25/03/2025 17:27, Stewart Hildebrand wrote:
>> diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
>> index 1e6aa5d799b9..49c9444195d7 100644
>> --- a/xen/drivers/vpci/vpci.c
>> +++ b/xen/drivers/vpci/vpci.c
>> @@ -81,6 +81,32 @@ static int assign_virtual_sbdf(struct pci_dev *pdev)
>>       return 0;
>>   }
>>
>> +/*
>> + * Find the physical device which is mapped to the virtual device
>> + * and translate virtual SBDF to the physical one.
>> + */
>> +bool vpci_translate_virtual_device(struct domain *d, pci_sbdf_t *sbdf)
>> +{
>> +    const struct pci_dev *pdev;
>> +
>> +    ASSERT(!is_hardware_domain(d));
>> +    read_lock(&d->pci_lock);
>> +
>> +    for_each_pdev ( d, pdev )
> 
> I can't remember whether this was discussed before (there is no
> indication in the commit message). What's the maximum number of PCI
> devices supported? Do we limit it?
> 
> Asking because iterating through the list could end up being quite
> expensive.

32. See xen/include/xen/vpci.h:
#define VPCI_MAX_VIRT_DEV       (PCI_SLOT(~0) + 1)
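
For reference, PCI_SLOT() extracts the 5-bit slot number from a devfn,
so assuming the usual definition

    #define PCI_SLOT(devfn) (((devfn) >> 3) & 0x1f)

PCI_SLOT(~0) evaluates to 0x1f == 31, making VPCI_MAX_VIRT_DEV 32: one
virtual device per slot on bus 0. That also bounds the cost of the
list walk above to 32 iterations.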

> Also, not a change for this patch, but it seems vpci_{read,write} will
> also do another lookup. This is quite inefficient. We should consider
> returning a pdev and using it for vpci_{read,write}.

Hm. Right. Indeed, a future consideration.
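
Something along these lines, perhaps (a sketch only; the name
vpci_find_virtual_device() is made up, and the caller would have to
hold d->pci_lock for as long as it uses the returned pdev):

    static const struct pci_dev *vpci_find_virtual_device(struct domain *d,
                                                          pci_sbdf_t sbdf)
    {
        const struct pci_dev *pdev;

        ASSERT(rw_is_locked(&d->pci_lock));

        for_each_pdev ( d, pdev )
            if ( pdev->vpci && pdev->vpci->guest_sbdf.sbdf == sbdf.sbdf )
                return pdev;

        return NULL;
    }

vpci_translate_virtual_device() could then become a thin wrapper around
it, and vpci_{read,write} could reuse the pdev instead of repeating the
lookup.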

>> +    {
>> +        if ( pdev->vpci && (pdev->vpci->guest_sbdf.sbdf == sbdf->sbdf) )
>> +        {
>> +            /* Replace guest SBDF with the physical one. */
>> +            *sbdf = pdev->sbdf;
>> +            read_unlock(&d->pci_lock);
>> +            return true;
>> +        }
>> +    }
>> +
>> +    read_unlock(&d->pci_lock);
> 
> At the point this is unlocked, what prevents the sbdf from being
> detached from the domain?

Nothing.

> If nothing, would this mean the domain can access something it should
> not?

Yep. I have a patch in the works to acquire the lock in the I/O
handlers. I would prefer doing this in a pre-patch so that we don't
temporarily introduce the race condition.
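
Roughly this shape in the MMIO handler, I'd imagine (a sketch under the
assumption that the translation helper stops taking the lock itself;
reg, size, and the surrounding handler plumbing are illustrative, with
vpci_ecam_read() being the existing ECAM accessor):

    read_lock(&d->pci_lock);

    if ( !vpci_translate_virtual_device(d, &sbdf) )
    {
        read_unlock(&d->pci_lock);
        *r = ~0UL; /* e.g. read-as-ones for a non-existent device */
        return 1;
    }

    /* Device can't be deassigned while the access is in flight. */
    if ( vpci_ecam_read(sbdf, reg, size, &data) )
        *r = data;

    read_unlock(&d->pci_lock);
    return 1;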

>> +    return false;
>> +}
>> +
>>   #endif /* CONFIG_HAS_VPCI_GUEST_SUPPORT */
>>
>>   void vpci_deassign_device(struct pci_dev *pdev)
>> diff --git a/xen/include/xen/vpci.h b/xen/include/xen/vpci.h
>> index 807401b2eaa2..e355329913ef 100644
>> --- a/xen/include/xen/vpci.h
>> +++ b/xen/include/xen/vpci.h
>> @@ -311,6 +311,18 @@ static inline int __must_check vpci_reset_device(struct pci_dev *pdev)
>>       return vpci_assign_device(pdev);
>>   }
>>
>> +#ifdef CONFIG_HAS_VPCI_GUEST_SUPPORT
>> +bool vpci_translate_virtual_device(struct domain *d, pci_sbdf_t *sbdf);
>> +#else
>> +static inline bool vpci_translate_virtual_device(struct domain *d,
>> +                                                 pci_sbdf_t *sbdf)
>> +{
>> +    ASSERT_UNREACHABLE();
>> +
>> +    return false;
>> +}
>> +#endif
>> +
>>   #endif
>>
>>   /*
