
[PATCH v9 4/6] xen/x86: use arch_get_ram_range to get information from E820 map


  • To: <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Wei Chen <wei.chen@xxxxxxx>
  • Date: Fri, 18 Nov 2022 18:45:06 +0800
  • Cc: <nd@xxxxxxx>, Wei Chen <wei.chen@xxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>
  • Delivery-date: Fri, 18 Nov 2022 10:45:52 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

The sanity check performed by nodes_cover_memory is a requirement of
other NUMA-capable architectures as well, but its current
implementation is tied to the x86 E820 map. Introduce
arch_get_ram_range to decouple the architecture-specific memory map
from this function, so that other architectures, such as Arm, can
also use it to check node and memory coverage against their boot
memory info.
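
As an illustration only (not part of this patch), an Arm
implementation backed by boot-time memory banks could look roughly
like the sketch below; the bootinfo layout and the assumption that
every bank is RAM are assumptions here, not something this patch
introduces:

    /*
     * Illustrative sketch: a possible Arm arch_get_ram_range backed
     * by bootinfo memory banks. Every bank is RAM in this variant,
     * so the -ENODATA case never arises.
     */
    int __init arch_get_ram_range(unsigned int idx, paddr_t *start, paddr_t *end)
    {
        if ( idx >= bootinfo.mem.nr_banks )
            return -ENOENT;

        *start = bootinfo.mem.bank[idx].start;
        *end = *start + bootinfo.mem.bank[idx].size;

        return 0;
    }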

With arch_get_ram_range in place, nodes_cover_memory becomes
architecture independent. We also replace SRAT and E820 in this
function's error message with neutral wording, so that the message
makes sense on any architecture.

As arch_get_ram_range uses an unsigned int index, we also change
the index in nodes_cover_memory from int to unsigned int.
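
For reference, callers are expected to walk the map by index until
-ENOENT, skipping entries that fail with other codes. A minimal
sketch of that pattern follows; process_ram_range() is a
hypothetical placeholder, not a real function:

    unsigned int idx;
    paddr_t start, end;

    for ( idx = 0; ; idx++ )
    {
        int err = arch_get_ram_range(idx, &start, &end);

        if ( err == -ENOENT )   /* Reached the end of the memory map. */
            break;
        if ( err )              /* e.g. -ENODATA: not a RAM entry, skip. */
            continue;

        /* [start, end) is a RAM range here. */
        process_ram_range(start, end);
    }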

Signed-off-by: Wei Chen <wei.chen@xxxxxxx>
Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
---
v8 -> v9:
1. No change.
v7 -> v8:
1. No change.
v6 -> v7:
1. No change.
v5 -> v6:
1. No change.
v4 -> v5:
1. Add Rb.
2. Adjust the code comments.
v3 -> v4:
1. Move function comment to header file.
2. Use bool for found, and add a new "err" for the return
   value of arch_get_ram_range.
3. Use -ENODATA instead of -EINVAL for non-RAM type ranges.
v2 -> v3:
1. Rename arch_get_memory_map to arch_get_ram_range.
2. Use -ENOENT instead of -ENODEV to indicate end of memory map.
3. Add description to code comment that arch_get_ram_range returns
   RAM range in [start, end) format.
v1 -> v2:
1. Use arch_get_memory_map to replace arch_get_memory_bank_range
   and arch_get_memory_bank_number.
2. Remove the !start || !end check, because callers guarantee
   these two pointers will not be NULL.
---
 xen/arch/x86/numa.c    | 15 +++++++++++++++
 xen/arch/x86/srat.c    | 30 ++++++++++++++++++------------
 xen/include/xen/numa.h | 13 +++++++++++++
 3 files changed, 46 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/numa.c b/xen/arch/x86/numa.c
index 90b2a22591..fa8caaa084 100644
--- a/xen/arch/x86/numa.c
+++ b/xen/arch/x86/numa.c
@@ -9,6 +9,7 @@
 #include <xen/nodemask.h>
 #include <xen/numa.h>
 #include <asm/acpi.h>
+#include <asm/e820.h>
 
 #ifndef Dprintk
 #define Dprintk(x...)
@@ -93,3 +94,17 @@ unsigned int __init arch_get_dma_bitsize(void)
                  flsl(node_start_pfn(node) + node_spanned_pages(node) / 4 - 1)
                  + PAGE_SHIFT, 32);
 }
+
+int __init arch_get_ram_range(unsigned int idx, paddr_t *start, paddr_t *end)
+{
+    if ( idx >= e820.nr_map )
+        return -ENOENT;
+
+    if ( e820.map[idx].type != E820_RAM )
+        return -ENODATA;
+
+    *start = e820.map[idx].addr;
+    *end = *start + e820.map[idx].size;
+
+    return 0;
+}
diff --git a/xen/arch/x86/srat.c b/xen/arch/x86/srat.c
index ce507dac9e..1a108a34c6 100644
--- a/xen/arch/x86/srat.c
+++ b/xen/arch/x86/srat.c
@@ -452,37 +452,43 @@ acpi_numa_memory_affinity_init(const struct acpi_srat_mem_affinity *ma)
    Make sure the PXMs cover all memory. */
 static int __init nodes_cover_memory(void)
 {
-       int i;
+       unsigned int i;
 
-       for (i = 0; i < e820.nr_map; i++) {
-               int j, found;
+       for (i = 0; ; i++) {
+               int err;
+               unsigned int j;
+               bool found;
                paddr_t start, end;
 
-               if (e820.map[i].type != E820_RAM) {
-                       continue;
-               }
+               /* Try to loop memory map from index 0 to end to get RAM ranges. */
+               err = arch_get_ram_range(i, &start, &end);
 
-               start = e820.map[i].addr;
-               end = e820.map[i].addr + e820.map[i].size;
+               /* Reached the end of the memory map? */
+               if (err == -ENOENT)
+                       break;
+
+               /* Skip non-RAM entries. */
+               if (err)
+                       continue;
 
                do {
-                       found = 0;
+                       found = false;
                        for_each_node_mask(j, memory_nodes_parsed)
                                if (start < nodes[j].end
                                    && end > nodes[j].start) {
                                        if (start >= nodes[j].start) {
                                                start = nodes[j].end;
-                                               found = 1;
+                                               found = true;
                                        }
                                        if (end <= nodes[j].end) {
                                                end = nodes[j].start;
-                                               found = 1;
+                                               found = true;
                                        }
                                }
                } while (found && start < end);
 
                if (start < end) {
-                       printk(KERN_ERR "SRAT: No PXM for e820 range: "
+                       printk(KERN_ERR "NUMA: No NODE for RAM range: "
                                "[%"PRIpaddr", %"PRIpaddr"]\n", start, end - 1);
                        return 0;
                }
diff --git a/xen/include/xen/numa.h b/xen/include/xen/numa.h
index 04556f3a6f..9da0e7d555 100644
--- a/xen/include/xen/numa.h
+++ b/xen/include/xen/numa.h
@@ -80,6 +80,19 @@ static inline nodeid_t __attribute_pure__ phys_to_nid(paddr_t addr)
 #define node_end_pfn(nid)       (NODE_DATA(nid)->node_start_pfn + \
                                  NODE_DATA(nid)->node_spanned_pages)
 
+/*
+ * Retrieve one RAM entry from the architectural memory map by index.
+ *
+ * This function returns zero if it can return a proper RAM entry,
+ * i.e. if the entry at the given index is of RAM type. Otherwise it
+ * returns -ENOENT for an out-of-range index, or another error code,
+ * e.g. -ENODATA for a non-RAM type memory entry.
+ *
+ * Note: the range is exclusive at the end, i.e. [*start, *end).
+ */
+extern int arch_get_ram_range(unsigned int idx,
+                              paddr_t *start, paddr_t *end);
+
 #endif
 
 #endif /* _XEN_NUMA_H */
-- 
2.25.1
