
Re: [PATCH v12.2 01/15] vpci: use per-domain PCI lock to protect vpci structure


  • To: Roger Pau Monné <roger.pau@xxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>
  • From: Stewart Hildebrand <stewart.hildebrand@xxxxxxx>
  • Date: Tue, 30 Jan 2024 09:59:49 -0500
  • Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Jun Nakajima <jun.nakajima@xxxxxxxxx>, Kevin Tian <kevin.tian@xxxxxxxxx>, Paul Durrant <paul@xxxxxxx>, Volodymyr Babchuk <volodymyr_babchuk@xxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Tue, 30 Jan 2024 15:00:21 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 1/24/24 00:00, Stewart Hildebrand wrote:
> On 1/23/24 10:07, Roger Pau Monné wrote:
>> On Tue, Jan 23, 2024 at 03:32:12PM +0100, Jan Beulich wrote:
>>> On 15.01.2024 20:43, Stewart Hildebrand wrote:
>>>> @@ -2888,6 +2888,8 @@ int allocate_and_map_msi_pirq(struct domain *d, int index, int *pirq_p,
>>>>  {
>>>>      int irq, pirq, ret;
>>>>  
>>>> +    ASSERT(pcidevs_locked() || rw_is_locked(&d->pci_lock));
>>>
>>> If either lock is sufficient to hold here, ...
>>>
>>>> --- a/xen/arch/x86/physdev.c
>>>> +++ b/xen/arch/x86/physdev.c
>>>> @@ -123,7 +123,9 @@ int physdev_map_pirq(domid_t domid, int type, int *index, int *pirq_p,
>>>>  
>>>>      case MAP_PIRQ_TYPE_MSI:
>>>>      case MAP_PIRQ_TYPE_MULTI_MSI:
>>>> +        pcidevs_lock();
>>>>          ret = allocate_and_map_msi_pirq(d, *index, pirq_p, type, msi);
>>>> +        pcidevs_unlock();
>>>>          break;
>>>
>>> ... why is it the global lock that's being acquired here?
>>>
>>
>> IIRC (Stewart can further comment) this is done holding the pcidevs
>> lock to keep the path unmodified, as there's no need to hold the
>> per-domain rwlock.
>>
> 
> Although allocate_and_map_msi_pirq() was itself acquiring the global 
> pcidevs_lock() before this patch, we could just as well use 
> read_lock(&d->pci_lock) here instead now. It seems like a good optimization 
> to make, so if there aren't any objections I'll change it to 
> read_lock(&d->pci_lock).
> 

Actually, I take this back. As mentioned in the cover letter of this series,
and as has been reiterated in recent discussions, the goal here is to keep
existing (non-vPCI) code paths as unmodified as possible. So I'll keep it as
pcidevs_lock() here.
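
For the record, the alternative I had been considering would have looked
roughly like this in physdev_map_pirq() (sketch only, relying on the
d->pci_lock rwlock this series introduces):

    case MAP_PIRQ_TYPE_MSI:
    case MAP_PIRQ_TYPE_MULTI_MSI:
        /* Sketch only: take the per-domain rwlock for reading instead of
         * the global pcidevs lock; the ASSERT() in
         * allocate_and_map_msi_pirq() accepts either lock being held. */
        read_lock(&d->pci_lock);
        ret = allocate_and_map_msi_pirq(d, *index, pirq_p, type, msi);
        read_unlock(&d->pci_lock);
        break;

But as said above, I'll stick with pcidevs_lock()/pcidevs_unlock() so this
non-vPCI path stays as close to unmodified as possible.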



 

