
Re: [PATCH v2 11/11] xen/arm: Translate virtual PCI bus topology for guests


  • To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Mon, 27 Sep 2021 15:34:31 +0200
  • Cc: "julien@xxxxxxx" <julien@xxxxxxx>, "sstabellini@xxxxxxxxxx" <sstabellini@xxxxxxxxxx>, Oleksandr Tyshchenko <Oleksandr_Tyshchenko@xxxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Artem Mygaiev <Artem_Mygaiev@xxxxxxxx>, "roger.pau@xxxxxxxxxx" <roger.pau@xxxxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Rahul Singh <rahul.singh@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Mon, 27 Sep 2021 13:34:55 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 27.09.2021 14:08, Oleksandr Andrushchenko wrote:
> On 27.09.21 14:31, Jan Beulich wrote:
>> On 23.09.2021 14:55, Oleksandr Andrushchenko wrote:
>>> --- a/xen/drivers/passthrough/pci.c
>>> +++ b/xen/drivers/passthrough/pci.c
>>> @@ -890,6 +890,31 @@ int pci_remove_virtual_device(struct domain *d, const struct pci_dev *pdev)
>>>       return 0;
>>>   }
>>>   
>>> +/*
>>> + * Find the physical device which is mapped to the virtual device
>>> + * and translate virtual SBDF to the physical one.
>>> + */
>>> +bool pci_translate_virtual_device(struct vcpu *v, pci_sbdf_t *sbdf)
>> Why struct vcpu, when you only need ...
>>
>>> +{
>>> +    struct domain *d = v->domain;
>> ... this? It's also not really logical for this function to take a
>> struct vcpu, as the translation should be uniform within a domain.
> Agree, struct domain is just enough
>>
>> Also - const please (as said elsewhere before, ideally wherever possible
>> and sensible).
> Ok
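
Along those lines, a minimal sketch of what the prototype might then look
like (illustrative only, not a demand on the exact form):

    bool pci_translate_virtual_device(const struct domain *d,
                                      pci_sbdf_t *sbdf);
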
>>
>>> +    struct vpci_dev *vdev;
>>> +    bool found = false;
>>> +
>>> +    pcidevs_lock();
>>> +    list_for_each_entry ( vdev, &d->vdev_list, list )
>>> +    {
>>> +        if ( vdev->sbdf.sbdf == sbdf->sbdf )
>>> +        {
>>> +            /* Replace virtual SBDF with the physical one. */
>>> +            *sbdf = vdev->pdev->sbdf;
>>> +            found = true;
>>> +            break;
>>> +        }
>>> +    }
>> For a DomU with just one or at most a couple of devices, such a brute
>> force lookup may be fine. What about Dom0 though? The physical topology
>> gets split at the segment level, so maybe this would be a reasonable
>> granularity here as well?
> 
> Not sure I am following why topology matters here. We are just trying to
> match one SBDF (as seen by the guest) to another SBDF (physical, as seen
> by Dom0), so we can proxy DomU's configuration space accesses to the
> proper device in Dom0.

Topology here matters only in so far as I've suggested to have separate
lists per segment, to reduce lookup times. Other methods of avoiding a
fully linear search are of course possible as well.
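
To illustrate what I mean (nothing more than a sketch - it assumes a
hypothetical per-domain array of per-segment list heads, d->vdev_lists[],
which the patch doesn't have; a sparse structure may be preferable, since
segment numbers can be sparse):

    bool pci_translate_virtual_device(const struct domain *d,
                                      pci_sbdf_t *sbdf)
    {
        const struct vpci_dev *vdev;
        bool found = false;

        pcidevs_lock();
        /* Walk only the devices on the segment the guest addressed. */
        list_for_each_entry ( vdev, &d->vdev_lists[sbdf->seg], list )
        {
            if ( vdev->sbdf.sbdf == sbdf->sbdf )
            {
                /* Replace virtual SBDF with the physical one. */
                *sbdf = vdev->pdev->sbdf;
                found = true;
                break;
            }
        }
        pcidevs_unlock();

        return found;
    }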

>>> +    pcidevs_unlock();
>>> +    return found;
>> Nit: Blank line please ahead of the main "return" of a function.
> Sure
>>
>>> +}
>>> +
>>>   /* Caller should hold the pcidevs_lock */
>>>   static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
>>>                              uint8_t devfn)
>> Seeing this function in context (which patch 2 adds without any #ifdef
>> around it afaics),
> 
> I believe you are talking about vpci_deassign_device here.
> vpci_{assign|deassign}_device seem not to be called on x86 PVH as of
> now; this is true.
> 
>>   will this new function needlessly be built on x86 as
>> well?
> 
> It will at the moment. But in order to avoid ifdefery I would still
> like to implement it as an empty function for x86.
> 
>>   (I didn't look at other intermediate patches yet, so please
>> forgive if I've missed the addition of an #ifdef.)
> 
> So, I can gate this with HAS_VPCI_GUEST_SUPPORT in patch 2
> (HAS_VPCI_GUEST_SUPPORT is introduced in patch 4, so I'll move it to 2).
> Does this sound good?

Yes.
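
FWIW, the kind of gating I would expect, sketched below (the declaration
site and the inline stub are merely an assumption on my part):

    #ifdef CONFIG_HAS_VPCI_GUEST_SUPPORT
    bool pci_translate_virtual_device(const struct domain *d,
                                      pci_sbdf_t *sbdf);
    #else
    static inline bool pci_translate_virtual_device(const struct domain *d,
                                                    pci_sbdf_t *sbdf)
    {
        /* No virtual topology without guest vPCI support. */
        return false;
    }
    #endif

This would also keep the function body out of the x86 build without
scattering #ifdef-ary at the call sites.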

Jan
