
[xen master] libxl, docs: Add per-arch extra default paging memory



commit 156a239ea288972425f967ac807b3cb5b5e14874
Author:     Henry Wang <Henry.Wang@xxxxxxx>
AuthorDate: Mon Jun 6 06:17:27 2022 +0000
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Tue Oct 11 14:28:37 2022 +0200

    libxl, docs: Add per-arch extra default paging memory
    
    This commit adds a per-arch macro, `EXTRA_DEFAULT_PAGING_MEM_MB`,
    which is added to the default paging memory size in order to cover
    the P2M pool for the extended regions of an xl-based guest on Arm.
    
    For Arm, the extra default paging memory is 128MB.
    For x86, the extra default paging memory is zero, since there
    are no extended regions on x86.
    
    Also update the xl.cfg documentation to add Arm-specific
    documentation matching these code changes.
    
    This is part of CVE-2022-33747 / XSA-409.
    
    Signed-off-by: Henry Wang <Henry.Wang@xxxxxxx>
    Reviewed-by: Julien Grall <jgrall@xxxxxxxxxx>
---
 docs/man/xl.cfg.5.pod.in        |  5 +++++
 tools/libs/light/libxl_arch.h   | 11 +++++++++++
 tools/libs/light/libxl_create.c |  7 ++++++-
 3 files changed, 22 insertions(+), 1 deletion(-)
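
For context on the numbers above: the calculation reserves one 4KiB page
per MiB unit, so Arm's EXTRA_DEFAULT_PAGING_MEM_MB value of 128 adds 128
pages, i.e. the 512KB mentioned in the xl.cfg text below. A minimal
standalone sketch of that arithmetic (not part of the patch; PAGE_SIZE_KB
is an illustrative name):

    #include <stdio.h>

    #define PAGE_SIZE_KB 4                  /* one paging pool page = 4 KiB */
    #define EXTRA_DEFAULT_PAGING_MEM_MB 128 /* Arm value from this patch */

    int main(void)
    {
        /* One page per MiB unit: 128 pages * 4 KiB = 512 KiB. */
        printf("extra pool memory: %u KiB\n",
               EXTRA_DEFAULT_PAGING_MEM_MB * PAGE_SIZE_KB);
        return 0;
    }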

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index b2901e04cf..31e58b73b0 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -2725,6 +2725,11 @@ are not using hardware assisted paging (i.e. you are using shadow
 mode) and your guest workload consists of a very large number of
 similar processes then increasing this value may improve performance.
 
+On Arm, this field is used to determine the size of the guest P2M page
+pool, and the default value is the same as for x86 HAP mode, plus 512KB
+to cover the extended regions. Users should adjust this value if a
+bigger P2M pool size is needed.
+
 =back
 
 =head2 Device-Model Options
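
As a usage sketch (not part of the patch): a domain that runs backends,
and so may need more than the extra 512KB the new default covers, can
raise the pool explicitly via shadow_memory in its xl configuration. The
guest name and all sizes below are purely illustrative:

    # Hypothetical xl.cfg fragment; values are examples, not recommendations.
    name = "driver-domain"
    memory = 2048
    vcpus = 2
    # Override the computed default paging/P2M pool size (in MB).
    shadow_memory = 32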
diff --git a/tools/libs/light/libxl_arch.h b/tools/libs/light/libxl_arch.h
index 03b89929e6..247cca130f 100644
--- a/tools/libs/light/libxl_arch.h
+++ b/tools/libs/light/libxl_arch.h
@@ -99,10 +99,21 @@ void libxl__arch_update_domain_config(libxl__gc *gc,
 
 #define LAPIC_BASE_ADDRESS  0xfee00000
 #define ACPI_INFO_PHYSICAL_ADDRESS 0xfc000000
+#define EXTRA_DEFAULT_PAGING_MEM_MB 0
 
 int libxl__dom_load_acpi(libxl__gc *gc,
                          const libxl_domain_build_info *b_info,
                          struct xc_dom_image *dom);
+
+#else
+
+/*
+ * 128MB of extra default paging memory on Arm for extended regions. This
+ * value is normally enough for domains that are not running backends.
+ * See `shadow_memory` in the xl.cfg documentation for more information.
+ */
+#define EXTRA_DEFAULT_PAGING_MEM_MB 128
+
 #endif
 
 #endif
diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
index b9dd2deedf..612eacfc7f 100644
--- a/tools/libs/light/libxl_create.c
+++ b/tools/libs/light/libxl_create.c
@@ -1035,12 +1035,17 @@ unsigned long libxl__get_required_paging_memory(unsigned long maxmem_kb,
      * plus 1 page per MiB of RAM for the P2M map (for non-PV guests),
      * plus 1 page per MiB of RAM to shadow the resident processes (for shadow
      * mode guests).
+     * plus 1 page per MiB of the architecture-specific
+     * EXTRA_DEFAULT_PAGING_MEM_MB. On x86, this value is zero. On Arm, this
+     * value is 128 MiB to cover domain extended regions (enough for domains
+     * that are not running backends).
      * This is higher than the minimum that Xen would allocate if no value
      * were given (but the Xen minimum is for safety, not performance).
      */
     return 4 * (256 * smp_cpus +
                 ((type != LIBXL_DOMAIN_TYPE_PV) + !hap) *
-                (maxmem_kb / 1024));
+                (maxmem_kb / 1024) +
+                EXTRA_DEFAULT_PAGING_MEM_MB);
 }
 
 static unsigned long libxl__get_required_iommu_memory(unsigned long maxmem_kb)
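
To see the updated formula end to end, here is a standalone sketch of the
same calculation (simplified types; the real function is
libxl__get_required_paging_memory() above), evaluated for a hypothetical
4-vCPU, 4GiB, non-PV guest with HAP, as on Arm:

    #include <stdbool.h>
    #include <stdio.h>

    #define EXTRA_DEFAULT_PAGING_MEM_MB 128 /* Arm value from this patch */

    static unsigned long required_paging_kb(unsigned long maxmem_kb,
                                            unsigned int smp_cpus,
                                            bool is_pv, bool hap)
    {
        /* 4 KiB per page: 256 pages per vCPU, the per-MiB-of-RAM terms,
         * plus the new architecture-specific extra term. */
        return 4 * (256 * smp_cpus +
                    (!is_pv + !hap) * (maxmem_kb / 1024) +
                    EXTRA_DEFAULT_PAGING_MEM_MB);
    }

    int main(void)
    {
        /* 4 GiB of RAM, 4 vCPUs, not PV, HAP enabled:
         * 4 * (256*4 + 1*4096 + 128) = 20992 KiB (~20.5 MiB). */
        printf("%lu KiB\n", required_paging_kb(4096UL * 1024, 4, false, true));
        return 0;
    }
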
--
generated by git-patchbot for /home/xen/git/xen.git#master