
Re: [PATCH v3 10/11] vpci: Add initial support for virtual PCI bus topology


  • To: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>
  • Date: Wed, 3 Nov 2021 08:59:52 +0000
  • Accept-language: en-US
  • Cc: "jbeulich@xxxxxxxx" <jbeulich@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "julien@xxxxxxx" <julien@xxxxxxx>, "sstabellini@xxxxxxxxxx" <sstabellini@xxxxxxxxxx>, Oleksandr Tyshchenko <Oleksandr_Tyshchenko@xxxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Artem Mygaiev <Artem_Mygaiev@xxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Rahul Singh <rahul.singh@xxxxxxx>
  • Delivery-date: Wed, 03 Nov 2021 09:00:04 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-index: AQHXtdAlOCu2AnkIOUC4Prsocox3sqvlTvCAgAw/FQCAACalgIAAAgiA
  • Thread-topic: [PATCH v3 10/11] vpci: Add initial support for virtual PCI bus topology


On 03.11.21 10:52, Roger Pau Monné wrote:
> On Wed, Nov 03, 2021 at 06:34:16AM +0000, Oleksandr Andrushchenko wrote:
>> Hi, Roger
>>
>> On 26.10.21 14:33, Roger Pau Monné wrote:
>>> On Thu, Sep 30, 2021 at 10:52:22AM +0300, Oleksandr Andrushchenko wrote:
>>>> diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
>>>> index 43b8a0817076..33033a3a8f8d 100644
>>>> --- a/xen/include/xen/pci.h
>>>> +++ b/xen/include/xen/pci.h
>>>> @@ -137,6 +137,24 @@ struct pci_dev {
>>>>        struct vpci *vpci;
>>>>    };
>>>>    
>>>> +#ifdef CONFIG_HAS_VPCI_GUEST_SUPPORT
>>>> +struct vpci_dev {
>>>> +    struct list_head list;
>>>> +    /* Physical PCI device this virtual device is connected to. */
>>>> +    const struct pci_dev *pdev;
>>>> +    /* Virtual SBDF of the device. */
>>>> +    union {
>>>> +        struct {
>>>> +            uint8_t devfn;
>>>> +            uint8_t bus;
>>>> +            uint16_t seg;
>>>> +        };
>>>> +        pci_sbdf_t sbdf;
>>>> +    };
>>>> +    struct domain *domain;
>>>> +};
>>>> +#endif
>>> I wonder whether this is strictly needed. Won't it be enough to store
>>> the virtual (ie: guest) sbdf inside the existing vpci struct?
>>>
>>> It would avoid the overhead of the translation you do from pdev ->
>>> vdev, and there doesn't seem to be anything relevant stored in
>>> vpci_dev apart from the virtual sbdf.
>> TL;DR: it seems it might be needed from a performance POV. If it is not
>> implemented, then for every MMIO trap we take the global PCI lock, i.e.
>> pcidevs_{lock|unlock}.
>> Note: the pcidevs lock is a recursive lock.
>>
>> There are two sources of access to virtual devices:
>> 1. During initialization, when we add, assign or de-assign a PCI device
>> 2. At run time, when we trap a configuration space access and need to
>> translate a virtual SBDF into a physical SBDF
>> Note that at least de-assign can run concurrently with the MMIO handlers.
>>
>> Now let's see which locks are in use while doing that.
>>
>> 1. No struct vpci_dev is used.
>> 1.1. We remove the structure and just add pdev->vpci->guest_sbdf as you
>> suggest.
>> 1.2. To protect virtual devices we use pcidevs_{lock|unlock}.
>> 1.3. Locking happens at the system level.
>>
>> 2. struct vpci_dev is used.
>> 2.1. We have a per-domain lock, vdev_lock.
>> 2.2. Locking happens at the per-domain level.
>>
>> To compare the two:
>>
>> 1. Without vpci_dev
>> pros: much simpler code
>> pros/cons: a global lock is used during MMIO handling, but it is a
>> recursive lock
>>
>> 2. With vpci_dev
>> pros: per-domain locking
>> cons: more code
>>
>> I have implemented the two methods and we need to decide
>> which route we go.
> We could always see about converting the pcidevs lock into a rw one if
> it turns out there's too much contention. PCI config space accesses
> shouldn't be that common or performance critical, so having some
> contention might not be noticeable.
>
> TBH I would start with the simpler solution (add guest_sbdf and use
> pci lock) and move to something more complex once issues are
> identified.
Ok, the code is indeed way simpler with guest_sbdf and the PCI lock,
so I'll use this approach for now.
>
> Regards, Roger.
Thank you,
Oleksandr

 

