
[xen master] x86/NUMA: correct memnode_shift calculation for single node system



commit 0db195c1a9947240b354abbefd2afac6c73ad6a8
Author:     Jan Beulich <jbeulich@xxxxxxxx>
AuthorDate: Thu Sep 29 14:39:52 2022 +0200
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Thu Sep 29 14:39:52 2022 +0200

    x86/NUMA: correct memnode_shift calculation for single node system
    
    SRAT may describe even a single node system (including one with
    multiple nodes, of which only one has any memory) using multiple
    ranges. Hence simply counting the number of ranges (note that the
    function's parameters are mis-named) is not an indication of the
    number of nodes in use. Since we only care about knowing whether
    we're on a single node system, accounting for this is easy:
    Increment the local variable only when adjacent ranges belong to
    different nodes. That way the count may still end up larger than
    the number of nodes in use, but it won't be larger than 1 when
    only a single node has any memory.
    
    To compensate, populate_memnodemap() now needs to be prepared to
    find the correct node ID already in place for a range. (This could
    of course also happen when there's more than one node with memory
    and at least one node has multiple adjacent ranges, provided
    extract_lsb_from_nodes() also knew to recognize this case.)
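    
    A similarly hedged sketch of the relaxed conflict check (again not
    Xen code; NUMA_NO_NODE, mark_range() and the tiny map below are
    stand-ins): an entry that already holds the node being populated
    is no longer treated as a conflict, only an entry owned by a
    different node is.
    
    #include <stdio.h>
    
    #define NUMA_NO_NODE 0xffu
    
    static unsigned char memnodemap[16];
    
    /* Tag the map entries covering [spdx, epdx) with nid; fail only
     * if an entry is already owned by a *different* node. */
    static int mark_range(unsigned long spdx, unsigned long epdx,
                          unsigned int shift, unsigned char nid)
    {
        do {
            if ( memnodemap[spdx >> shift] != NUMA_NO_NODE &&
                 memnodemap[spdx >> shift] != nid )
                return -1;
            memnodemap[spdx >> shift] = nid;
            spdx += 1UL << shift;
        } while ( spdx < epdx );
    
        return 0;
    }
    
    int main(void)
    {
        unsigned int i;
    
        for ( i = 0; i < sizeof(memnodemap); i++ )
            memnodemap[i] = NUMA_NO_NODE;
    
        /* With the large shift a single-node system now gets, both
         * ranges of node 0 land in map entry 0; the second call no
         * longer reports a bogus conflict. */
        printf("%d %d\n", mark_range(0x00, 0x40, 7, 0),
                          mark_range(0x40, 0x80, 7, 0)); /* "0 0" */
        return 0;
    }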
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Acked-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
---
 xen/arch/x86/numa.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/numa.c b/xen/arch/x86/numa.c
index 627ae8aa95..1bc82c60aa 100644
--- a/xen/arch/x86/numa.c
+++ b/xen/arch/x86/numa.c
@@ -78,7 +78,8 @@ static int __init populate_memnodemap(const struct node *nodes,
         if ( (epdx >> shift) >= memnodemapsize )
             return 0;
         do {
-            if ( memnodemap[spdx >> shift] != NUMA_NO_NODE )
+            if ( memnodemap[spdx >> shift] != NUMA_NO_NODE &&
+                 (!nodeids || memnodemap[spdx >> shift] != nodeids[i]) )
                 return -1;
 
             if ( !nodeids )
@@ -114,7 +115,7 @@ static int __init allocate_cachealigned_memnodemap(void)
  * maximum possible shift.
  */
 static int __init extract_lsb_from_nodes(const struct node *nodes,
-                                         int numnodes)
+                                         int numnodes, const nodeid_t *nodeids)
 {
     int i, nodes_used = 0;
     unsigned long spdx, epdx;
@@ -127,7 +128,8 @@ static int __init extract_lsb_from_nodes(const struct node *nodes,
         if ( spdx >= epdx )
             continue;
         bitfield |= spdx;
-        nodes_used++;
+        if ( !i || !nodeids || nodeids[i - 1] != nodeids[i] )
+            nodes_used++;
         if ( epdx > memtop )
             memtop = epdx;
     }
@@ -144,7 +146,7 @@ int __init compute_hash_shift(struct node *nodes, int numnodes,
 {
     int shift;
 
-    shift = extract_lsb_from_nodes(nodes, numnodes);
+    shift = extract_lsb_from_nodes(nodes, numnodes, nodeids);
     if ( memnodemapsize <= ARRAY_SIZE(_memnodemap) )
         memnodemap = _memnodemap;
     else if ( allocate_cachealigned_memnodemap() )
--
generated by git-patchbot for /home/xen/git/xen.git#master