
Re: [PATCH 2/9] AMD/IOMMU: re-work locking around sending of commands


  • To: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Wed, 9 Jun 2021 14:22:29 +0200
  • Cc: Paul Durrant <paul@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Wed, 09 Jun 2021 12:22:39 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 09.06.2021 12:53, Andrew Cooper wrote:
> On 09/06/2021 10:27, Jan Beulich wrote:
>> It appears unhelpful to me for flush_command_buffer() to block all
>> progress elsewhere for the given IOMMU by holding its lock while
>> waiting for command completion. Unless the lock is already held,
>> acquire it in send_iommu_command(). Release it in all cases in
>> flush_command_buffer(), before actually starting the wait loop.
>>
>> Some of the involved functions did/do get called with the lock already
>> held: For amd_iommu_flush_intremap() we can simply move the locking
>> inside. For amd_iommu_flush_device() and amd_iommu_flush_all_caches()
>> the lock now gets dropped in the course of the function's operation.
>>
>> Where touching function headers anyway, also adjust types used to be
>> better in line with our coding style and, where applicable, the
>> functions' callers.
>>
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>> Reviewed-by: Paul Durrant <paul@xxxxxxx>
> 
> Honestly, I'm -2 to this.  It is horrible obfuscation of the logic.
> 
> I agree with the premise of not holding the lock when we don't need to,
> but moving the lock/unlocks into different functions makes it impossible
> to follow.  (Also, the static analysers are going to scream at this
> patch, and rightfully so IMO.)

Just to make it explicit: the immediate goal isn't to shrink the
locked regions as far as possible, but first of all to avoid
spin-waiting in flush_command_buffer() while holding the lock (and
hence with IRQs off).
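
To make the intended flow concrete, here is a much simplified sketch
(queue_command(), build_completion_wait(), and completion_wait_done()
are placeholders standing in for the real helpers, not Xen's actual
names):

static void send_iommu_command(struct amd_iommu *iommu,
                               const uint32_t cmd[4],
                               unsigned long *flags)
{
    /* Taken here (unless the caller already holds it) ... */
    spin_lock_irqsave(&iommu->lock, *flags);

    queue_command(iommu, cmd);                  /* ring buffer write */

    /*
     * ... and deliberately not released: flush_command_buffer() will
     * drop it before starting to poll for completion.
     */
}

static void flush_command_buffer(struct amd_iommu *iommu,
                                 unsigned long flags)
{
    uint32_t cmd[4];

    build_completion_wait(cmd);                 /* COMPLETION_WAIT */
    queue_command(iommu, cmd);

    /*
     * Drop the lock (restoring IRQs) before spin-waiting, so other
     * CPUs can queue commands on this IOMMU in the meantime.
     */
    spin_unlock_irqrestore(&iommu->lock, flags);

    while ( !completion_wait_done(iommu) )
        cpu_relax();
}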

> send_iommu_command() is static, as is flush_command_buffer(), so there
> is no need to split the locking like this AFAICT.
> 
> Instead, each amd_iommu_flush_* external accessor knows exactly what it
> is doing, and whether a wait descriptor is wanted. 
> flush_command_buffer() wants merging into send_iommu_command() as a
> "bool wait" parameter, at which point the locking and unlocking moves
> entirely into send_iommu_command() with no pointer games.
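
As I understand it, that suggestion amounts to roughly the following
(same placeholder helpers as in the sketch above), with the lock
never escaping the function:

static void send_iommu_command(struct amd_iommu *iommu,
                               const uint32_t cmd[4], bool wait)
{
    unsigned long flags;

    spin_lock_irqsave(&iommu->lock, flags);

    queue_command(iommu, cmd);
    if ( wait )
    {
        uint32_t wait_cmd[4];

        build_completion_wait(wait_cmd);
        queue_command(iommu, wait_cmd);
    }

    spin_unlock_irqrestore(&iommu->lock, flags);

    /* Poll outside the locked region. */
    if ( wait )
        while ( !completion_wait_done(iommu) )
            cpu_relax();
}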

Then I can only guess that you didn't look closely at the
pci_amd_iommu.c part of the change? You may rest assured that I
wouldn't have taken the chosen route if there were a reasonable
alternative (within the current overall code structure). In fact I
first tried what you suggest, and had to make it the way it was
posted because of the requirements of these callers.
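
For reference, the caller shape in question is roughly this (again
with made-up names): the device table entry update and the queuing of
the flush have to happen under one and the same lock acquisition, so
the command-submission path cannot unconditionally take the lock
itself:

static void reassign_device(struct amd_iommu *iommu, unsigned int bdf)
{
    unsigned long flags;

    spin_lock_irqsave(&iommu->lock, flags);

    update_device_table_entry(iommu, bdf);   /* must not race ... */
    queue_invalidate_dev_entry(iommu, bdf);  /* ... with this flush */

    /* Drops the lock before it starts polling (see above). */
    flush_command_buffer(iommu, flags);
}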

I'm also pretty certain Paul wouldn't have given his R-b if there
were this simple an alternative.

Jan
