
[PATCH] x86/NUMA: correct memnode_shift calculation for single node system


  • To: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Tue, 27 Sep 2022 16:15:19 +0200
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Tue, 27 Sep 2022 14:15:24 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

SRAT may describe even a single-node system (including one that has
multiple nodes, of which only one has any memory) using multiple
ranges. Hence simply counting the number of ranges (note that the
function's parameters are mis-named: "numnodes" really is the number
of ranges) is not an indication of the number of nodes in use. Since
we only care about knowing whether we're on a single-node system,
accounting for this is easy: Increment the local variable only when
adjacent ranges belong to different nodes. That way the count may
still end up larger than the number of nodes in use, but it won't be
larger than 1 when only a single node has any memory.

To compensate, populate_memnodemap() now needs to be prepared to find
the correct node ID already in place for a range. (This could of course
also happen when more than one node has memory and at least one node
has multiple adjacent ranges, provided extract_lsb_from_nodes() were
also taught to recognize that case.)
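
Likewise for illustration only, and again with stand-in definitions
(assumed names, not the actual Xen code paths): a simplified sketch of
why populate_memnodemap() now tolerates a map slot that already holds
the node ID about to be written, while a differing ID still signals a
collision.

    #include <stdio.h>
    #include <string.h>

    #define NUMA_NO_NODE 0xffu

    typedef unsigned char nodeid_t;

    static nodeid_t memnodemap[16];

    /* Record a node ID for one map slot, mirroring the adjusted check. */
    static int fill_slot(unsigned long idx, const nodeid_t *nodeids, int i)
    {
        if ( memnodemap[idx] != NUMA_NO_NODE &&
             (!nodeids || memnodemap[idx] != nodeids[i]) )
            return -1;                       /* genuine conflict */

        memnodemap[idx] = nodeids ? nodeids[i] : (nodeid_t)i;
        return 0;
    }

    int main(void)
    {
        const nodeid_t ids[] = { 0, 0 };

        memset(memnodemap, NUMA_NO_NODE, sizeof(memnodemap));

        /* Two same-node ranges whose page ranges land in the same slot. */
        printf("%d\n", fill_slot(0, ids, 0)); /* 0: slot was unset */
        printf("%d\n", fill_slot(0, ids, 1)); /* 0: same ID, now tolerated */
        return 0;
    }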

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
On my Skylake system this changes memnodemapsize from 17 to 1 (and the
shift from 20 to 63).

--- a/xen/arch/x86/numa.c
+++ b/xen/arch/x86/numa.c
@@ -78,7 +78,8 @@ static int __init populate_memnodemap(co
         if ( (epdx >> shift) >= memnodemapsize )
             return 0;
         do {
-            if ( memnodemap[spdx >> shift] != NUMA_NO_NODE )
+            if ( memnodemap[spdx >> shift] != NUMA_NO_NODE &&
+                 (!nodeids || memnodemap[spdx >> shift] != nodeids[i]) )
                 return -1;
 
             if ( !nodeids )
@@ -114,7 +115,7 @@ static int __init allocate_cachealigned_
  * maximum possible shift.
  */
 static int __init extract_lsb_from_nodes(const struct node *nodes,
-                                         int numnodes)
+                                         int numnodes, const nodeid_t *nodeids)
 {
     int i, nodes_used = 0;
     unsigned long spdx, epdx;
@@ -127,7 +128,7 @@ static int __init extract_lsb_from_nodes
         if ( spdx >= epdx )
             continue;
         bitfield |= spdx;
-        nodes_used++;
+        nodes_used += i == 0 || !nodeids || nodeids[i - 1] != nodeids[i];
         if ( epdx > memtop )
             memtop = epdx;
     }
@@ -144,7 +145,7 @@ int __init compute_hash_shift(struct nod
 {
     int shift;
 
-    shift = extract_lsb_from_nodes(nodes, numnodes);
+    shift = extract_lsb_from_nodes(nodes, numnodes, nodeids);
     if ( memnodemapsize <= ARRAY_SIZE(_memnodemap) )
         memnodemap = _memnodemap;
     else if ( allocate_cachealigned_memnodemap() )



 

