
Re: [PATCH v13 01/14] vpci: use per-domain PCI lock to protect vpci structure


  • To: Jan Beulich <jbeulich@xxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: Stewart Hildebrand <stewart.hildebrand@xxxxxxx>
  • Date: Thu, 15 Feb 2024 00:26:44 -0500
  • Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Jun Nakajima <jun.nakajima@xxxxxxxxx>, Kevin Tian <kevin.tian@xxxxxxxxx>, Paul Durrant <paul@xxxxxxx>, Volodymyr Babchuk <volodymyr_babchuk@xxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Thu, 15 Feb 2024 05:27:10 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 2/14/24 06:38, Jan Beulich wrote:
> On 02.02.2024 22:33, Stewart Hildebrand wrote:
>> --- a/xen/arch/x86/physdev.c
>> +++ b/xen/arch/x86/physdev.c
>> @@ -123,7 +123,9 @@ int physdev_map_pirq(domid_t domid, int type, int *index, int *pirq_p,
>>  
>>      case MAP_PIRQ_TYPE_MSI:
>>      case MAP_PIRQ_TYPE_MULTI_MSI:
>> +        pcidevs_lock();
>>          ret = allocate_and_map_msi_pirq(d, *index, pirq_p, type, msi);
>> +        pcidevs_unlock();
>>          break;
> 
> I'm afraid I need to come back to this: This is the only place where this
> patch retains (moves) use of the global lock. By moving, its scope is
> actually extended. It was previously said that conversion doesn't happen
> to limit the scope of what is changing. But with allocate_and_map_msi_pirq()
> being happy about either lock being held, I'm having a hard time seeing why
> here the global lock would continue to need using. To me doing so suggests
> uncertainty whether the checking in the function is actually correct.

I understand the concern. I've gone back and forth on this one in
particular myself [1]. I've tested it both ways (i.e. as shown here with
pcidevs_lock(), and with it replaced by read_lock(&d->pci_lock)). In the
analysis I've done, I likewise cannot find any need to keep using
pcidevs_lock() here; read_lock(&d->pci_lock) is sufficient. To be clear:
both ways are correct. The only reason I left pcidevs_lock() in place
for v13 was to keep the code consistent with what the commit description
and the notes in the cover letter say.

allocate_and_map_msi_pirq() acquires write_lock(&d->event_lock), and its
accesses to non-domain-specific data either only read __read_mostly
globals or are appropriately protected by desc->lock, vector_lock,
and/or cmpxchg.

I'll change it to read_lock(&d->pci_lock) in this one particular
instance (and update the description appropriately).
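Concretely, the hunk for v14 would then look something like this (just a
sketch against the v13 diff quoted above, not yet tested in this form):

    case MAP_PIRQ_TYPE_MSI:
    case MAP_PIRQ_TYPE_MULTI_MSI:
        /* Holding d->pci_lock in read mode satisfies the checking in
         * allocate_and_map_msi_pirq(); the global pcidevs_lock() is
         * not needed here. */
        read_lock(&d->pci_lock);
        ret = allocate_and_map_msi_pirq(d, *index, pirq_p, type, msi);
        read_unlock(&d->pci_lock);
        break;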

[1] 
https://lore.kernel.org/xen-devel/3c1023c4-25fb-41f1-83eb-03cbc1c3720e@xxxxxxx/
