[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [PATCH v2 3/8] x86/iommu: iommu_igfx, iommu_qinval and iommu_snoop are VT-d specific


  • To: Xenia Ragiadakou <burzalodowa@xxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>
  • From: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>
  • Date: Thu, 12 Jan 2023 18:24:26 +0000
  • Cc: Paul Durrant <paul@xxxxxxx>, Roger Pau Monne <roger.pau@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Thu, 12 Jan 2023 18:24:47 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH v2 3/8] x86/iommu: iommu_igfx, iommu_qinval and iommu_snoop are VT-d specific

On 12/01/2023 3:43 pm, Xenia Ragiadakou wrote:
>
> On 1/12/23 13:49, Xenia Ragiadakou wrote:
>>
>> On 1/12/23 13:31, Jan Beulich wrote:
>>> On 04.01.2023 09:44, Xenia Ragiadakou wrote:
>>>
>>>> --- a/xen/include/xen/iommu.h
>>>> +++ b/xen/include/xen/iommu.h
>>>> @@ -74,9 +74,13 @@ extern enum __packed iommu_intremap {
>>>>      iommu_intremap_restricted,
>>>>      iommu_intremap_full,
>>>>   } iommu_intremap;
>>>> -extern bool iommu_igfx, iommu_qinval, iommu_snoop;
>>>>   #else
>>>>   # define iommu_intremap false
>>>> +#endif
>>>> +
>>>> +#ifdef CONFIG_INTEL_IOMMU
>>>> +extern bool iommu_igfx, iommu_qinval, iommu_snoop;
>>>> +#else
>>>>   # define iommu_snoop false
>>>>   #endif
>>>
>>> Do these declarations really need touching? In patch 2 you didn't move
>>> amd_iommu_perdev_intremap's either.
>>
>> Ok, I will revert this change (as I did in v2 of patch 2) since it is
>> not needed.
>
> Actually, my patch was altering the current behavior by defining
> iommu_snoop as false when !INTEL_IOMMU.
>
> IIUC, there is no control over snoop behavior when using the AMD
> IOMMU. Hence, iommu_snoop should evaluate to true for the AMD IOMMU.
> However, when using the Intel IOMMU the user can disable it via the
> "iommu" param, right?
>
> If that's the case then iommu_snoop needs to be moved from vtd/iommu.c
> to x86/iommu.c and iommu_snoop assignment via iommu param needs to be
> guarded by CONFIG_INTEL_IOMMU.
>

Pretty much everything Xen thinks it knows about iommu_snoop is broken.

AMD IOMMUs have had this capability since the outset, although there
it's the FC (Force Coherent) bit.  On Intel, the capability is optional,
and typically differs between IOMMUs in the same system.

Treating iommu_snoop as a single global is buggy because, when
available, it's always a per-SBDF control.  It takes a TLP and forces it
to be coherent even when the device was trying to issue a non-coherent
access.

Intel systems typically have a dedicated IOMMU for the IGD, which always
issues coherent accesses (its memory access happens as an adjunct to the
LLC, not as something that communicates with the memory controller
directly), so the IOMMU doesn't offer snoop control, and Xen "levels"
this down to "the system can't do snoop control".


Xen is very confused when it comes to cacheability correctness.  I still
have a pile of post-XSA-402 work pending, and it needs to start with
splitting Xen's idea of "domain can use reduced cacheability" from
"domain has a device", and work incrementally from there.

In terms of snoop control, it's strictly necessary for the cases where
the guest kernel thinks it is using reduced cacheability, but isn't,
because of something the hypervisor has done.  Beyond that, forcing
snoop behind the back of a guest which really is using reduced
cacheability is just a waste of performance.

~Andrew

 

