
Re: [PATCH v7 00/14] IOMMU: superpage support when not sharing pagetables


  • To: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Tue, 5 Jul 2022 14:51:24 +0200
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Paul Durrant <paul@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Tue, 05 Jul 2022 12:51:36 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 05.07.2022 14:41, Jan Beulich wrote:
> For a long time we've been rather inefficient with IOMMU page table
> management when not sharing page tables, i.e. in particular for PV (and
> further specifically also for PV Dom0) and AMD (where nowadays we never
> share page tables). While up to about 3.5 years ago AMD code had logic
> to un-shatter page mappings, that logic was ripped out for being buggy
> (XSA-275 plus follow-on).
> 
> This series enables use of large pages in AMD and Intel (VT-d) code;
> Arm is presently not in need of any enabling as pagetables are always
> shared there. It also augments PV Dom0 creation with suitable explicit
> IOMMU mapping calls to facilitate use of large pages there. Depending
> on the amount of memory handed to Dom0 this improves booting time
> (latency until Dom0 actually starts) quite a bit; subsequent shattering
> of some of the large pages may of course consume some of the saved time.
> 
> Known fallout has been spelled out here:
> https://lists.xen.org/archives/html/xen-devel/2021-08/msg00781.html
> 
> See individual patches for details on the v7 changes.
> 
> 01: iommu: add preemption support to iommu_{un,}map()
> 02: IOMMU/x86: perform PV Dom0 mappings in batches
> 03: IOMMU/x86: support freeing of pagetables
> 02: IOMMU/x86: new command line option to suppress use of superpage mappings
> 03: AMD/IOMMU: allow use of superpage mappings
> 04: VT-d: allow use of superpage mappings
> 05: x86: introduce helper for recording degree of contiguity in page tables
> 06: IOMMU/x86: prefill newly allocate page tables
> 07: AMD/IOMMU: free all-empty page tables
> 08: VT-d: free all-empty page tables
> 09: AMD/IOMMU: replace all-contiguous page tables by superpage mappings
> 10: VT-d: replace all-contiguous page tables by superpage mappings
> 11: IOMMU/x86: add perf counters for page table splitting / coalescing
> 12: VT-d: fold dma_pte_clear_one() into its only caller
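
[Editorial aside, not part of the series: the coalescing named in the "replace
all-contiguous page tables by superpage mappings" patches boils down to checking
whether every entry of a leaf table maps a physically contiguous run of frames
with identical flags. The sketch below only illustrates that condition; the
pte_t layout, field names and a full-table rescan are simplifying assumptions,
whereas the series records the degree of contiguity incrementally via the
dedicated helper patch rather than rescanning.]

/*
 * Illustrative sketch only: decide whether a 512-entry leaf page table
 * could be replaced by a single superpage entry one level up.
 * Types and field names are assumptions, not Xen's actual structures.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PTES_PER_TABLE 512u

typedef struct {
    uint64_t mfn;        /* machine frame number mapped by this entry */
    unsigned int flags;  /* permission bits (read/write/...) */
    bool present;
} pte_t;

/*
 * True iff entry i maps frame (table[0].mfn + i) with the same flags for
 * all i - the condition under which the table can be coalesced.
 */
static bool table_is_contiguous(const pte_t table[PTES_PER_TABLE])
{
    if (!table[0].present)
        return false;

    for (unsigned int i = 1; i < PTES_PER_TABLE; ++i) {
        if (!table[i].present ||
            table[i].mfn != table[0].mfn + i ||
            table[i].flags != table[0].flags)
            return false;
    }

    return true;
}

int main(void)
{
    pte_t table[PTES_PER_TABLE];

    /* Fill the table with a contiguous, uniformly-flagged mapping. */
    for (unsigned int i = 0; i < PTES_PER_TABLE; ++i)
        table[i] = (pte_t){ .mfn = 0x1000 + i, .flags = 0x3, .present = true };

    printf("coalescible: %s\n", table_is_contiguous(table) ? "yes" : "no");

    /* Remap one entry elsewhere: the table can no longer be coalesced. */
    table[42].mfn += 1;
    printf("coalescible after remap: %s\n",
           table_is_contiguous(table) ? "yes" : "no");

    return 0;
}
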

And I realize I've successfully screwed up numbering here. The order
of patches is correct, though - it's just that from the 4th patch
onwards all numbers are off by 2 (i.e. the entries shown as 02 ... 12
are really patches 04 ... 14).

Jan



 

