
Re: [PATCH 6/8] mm/pdx: Standardize region validation wrt pdx compression



Hi Alejandro,

On 17/07/2023 17:03, Alejandro Vallejo wrote:
Regions must be occasionally validated for pdx compression validity. That
is, whether any of the machine addresses spanning the region have a bit set
in the pdx "hole" (which is expected to always contain zeroes). There are
a few such tests through the code, and they all check for different things.

This patch replaces all such occurences with a call to a centralized

Typo: s/occurences/occurrences/

function that checks a region for validity.

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@xxxxxxxxx>
---
  xen/arch/x86/x86_64/mm.c |  2 +-
  xen/common/efi/boot.c    |  6 +++---
  xen/common/pdx.c         | 13 +++++++++++--
  xen/include/xen/pdx.h    |  9 +++++++++
  4 files changed, 24 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 60db439af3..914e65c26c 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -1168,7 +1168,7 @@ static int mem_hotadd_check(unsigned long spfn, unsigned long epfn)
      if ( (spfn | epfn) & ((1UL << PAGETABLE_ORDER) - 1) )
          return 0;
-    if ( (spfn | epfn) & pfn_hole_mask )
+    if ( !pdx_is_region_compressible(spfn, epfn) )
          return 0;
     /* Make sure the new range is not present now */
diff --git a/xen/common/efi/boot.c b/xen/common/efi/boot.c
index 24169b7b50..b098a8c030 100644
--- a/xen/common/efi/boot.c
+++ b/xen/common/efi/boot.c
@@ -14,6 +14,7 @@
  #include <xen/multiboot.h>
  #include <xen/param.h>
  #include <xen/pci_regs.h>
+#include <xen/pdx.h>
  #include <xen/pfn.h>
  #if EFI_PAGE_SIZE != PAGE_SIZE
  # error Cannot use xen/pfn.h here!
@@ -1647,7 +1648,7 @@ static bool __init cf_check ram_range_valid(unsigned long smfn, unsigned long emfn)
  {
      unsigned long sz = pfn_to_pdx(emfn - 1) / PDX_GROUP_COUNT + 1;
-    return !(smfn & pfn_hole_mask) &&
+    return pdx_is_region_compressible(smfn, emfn) &&
             find_next_bit(pdx_group_valid, sz,
                           pfn_to_pdx(smfn) / PDX_GROUP_COUNT) < sz;
  }
@@ -1759,8 +1760,7 @@ void __init efi_init_memory(void)
              prot |= _PAGE_NX;
         if ( pfn_to_pdx(emfn - 1) < (DIRECTMAP_SIZE >> PAGE_SHIFT) &&
-             !(smfn & pfn_hole_mask) &&
-             !((smfn ^ (emfn - 1)) & ~pfn_pdx_bottom_mask) )
+             pdx_is_region_compressible(smfn, emfn))
          {
              if ( (unsigned long)mfn_to_virt(emfn - 1) >= HYPERVISOR_VIRT_END )
                  prot &= ~_PAGE_GLOBAL;
diff --git a/xen/common/pdx.c b/xen/common/pdx.c
index 99d4a90a50..72845e4bab 100644
--- a/xen/common/pdx.c
+++ b/xen/common/pdx.c
@@ -88,7 +88,7 @@ bool __mfn_valid(unsigned long mfn)
  }
/* Sets all bits from the most-significant 1-bit down to the LSB */
-static uint64_t __init fill_mask(uint64_t mask)
+static uint64_t fill_mask(uint64_t mask)
  {
      while (mask & (mask + 1))
          mask |= mask + 1;
@@ -96,6 +96,15 @@ static uint64_t __init fill_mask(uint64_t mask)
      return mask;
  }
+bool pdx_is_region_compressible(unsigned long smfn, unsigned long emfn)

For a new interface, I would rather prefer that we use start + size. It is easier to reason about (you don't have to wonder whether 'emfn' is inclusive or not) and it avoids issues when handling a region right at the end of the address space, where emfn would be 0 in the non-inclusive case (not much of a concern for MFNs, as the last one should be invalid, but it makes the code harder to reason about).

+{
+    uint64_t base = smfn << PAGE_SHIFT;

On Arm32, physical addresses are up to 40 bits wide, so you want to cast smfn to uint64_t before shifting. That said, it would be best to use pfn_to_paddr() and possibly switch the type to paddr_t.

Note that I understand the rest of the PDX code is using uint64_t, so I would be OK if you don't want to switch to paddr_t.

+    uint64_t len = (emfn - smfn) << PAGE_SHIFT;

Same here.

+
+    return !(smfn & pfn_hole_mask) &&
+           !(pdx_region_mask(base, len) & ~ma_va_bottom_mask);
+}
+
  /* We don't want to compress the low MAX_ORDER bits of the addresses. */
  uint64_t __init pdx_init_mask(uint64_t base_addr)
  {
@@ -103,7 +112,7 @@ uint64_t __init pdx_init_mask(uint64_t base_addr)
                           (uint64_t)1 << (MAX_ORDER + PAGE_SHIFT)) - 1);
  }
-uint64_t __init pdx_region_mask(uint64_t base, uint64_t len)
+uint64_t pdx_region_mask(uint64_t base, uint64_t len)
  {
      /*
       * We say a bit "moves" in a range if there exist 2 addresses in that
diff --git a/xen/include/xen/pdx.h b/xen/include/xen/pdx.h
index f8ca0f5821..5378e664c2 100644
--- a/xen/include/xen/pdx.h
+++ b/xen/include/xen/pdx.h
@@ -77,6 +77,15 @@ extern unsigned long pfn_top_mask, ma_top_mask;
                           (sizeof(*frame_table) & -sizeof(*frame_table)))
  extern unsigned long pdx_group_valid[];
+/**
+ * Validate a region's compatibility with the current compression runtime
+ *
+ * @param smfn Start mfn
+ * @param emfn End mfn (non-inclusive)
+ * @return True iff the region can be used with the current compression
+ */
+bool pdx_is_region_compressible(unsigned long smfn, unsigned long emfn);
+
  /**
   * Calculates a mask covering "moving" bits of all addresses of a region
   *

Cheers,

--
Julien Grall
