
[PATCH 13/37] xen/x86: decouple processor_nodes_parsed from acpi numa functions


  • To: <wei.chen@xxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>, <sstabellini@xxxxxxxxxx>, <julien@xxxxxxx>
  • From: Wei Chen <wei.chen@xxxxxxx>
  • Date: Thu, 23 Sep 2021 20:02:12 +0800
  • Cc: <Bertrand.Marquis@xxxxxxx>
  • Delivery-date: Thu, 23 Sep 2021 12:04:17 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Nodisclaimer: true

Xen uses processor_nodes_parsed to record the processor nodes that
have been parsed from the ACPI tables or another firmware-provided
resource table. This variable is currently accessed directly by the
ACPI NUMA functions. In follow-up patches, architecture-neutral NUMA
code will be abstracted out and moved to other files. So in this
patch, we introduce the numa_set_processor_nodes_parsed helper to
decouple processor_nodes_parsed from the ACPI NUMA functions.
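
To illustrate the intended split, here is a minimal sketch of how
architecture-neutral code could record a processor node through the
new accessor once the abstraction lands; the fwnuma_record_cpu_node()
function and its placement are assumptions for illustration only, not
code from this series:

/*
 * Illustrative sketch only, not part of this patch: a hypothetical
 * arch-neutral firmware NUMA parser recording a processor node
 * without touching the x86-private processor_nodes_parsed nodemask.
 */
#include <xen/numa.h>	/* nodeid_t, MAX_NUMNODES; pulls in asm/numa.h */

static bool __init fwnuma_record_cpu_node(nodeid_t node)
{
	if (node >= MAX_NUMNODES)
		return false;

	/*
	 * Common code goes through the accessor; the nodemask itself
	 * stays private to the arch file that defines it.
	 */
	numa_set_processor_nodes_parsed(node);

	return true;
}

Only the helper's declaration needs to be exported (see the
asm-x86/numa.h hunk below), so the nodemask never has to leave x86
code.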

Signed-off-by: Wei Chen <wei.chen@xxxxxxx>
---
 xen/arch/x86/srat.c        | 9 +++++++--
 xen/include/asm-x86/numa.h | 1 +
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/srat.c b/xen/arch/x86/srat.c
index aa07a7e975..9276a52138 100644
--- a/xen/arch/x86/srat.c
+++ b/xen/arch/x86/srat.c
@@ -104,6 +104,11 @@ nodeid_t setup_node(unsigned pxm)
        return node;
 }
 
+void __init numa_set_processor_nodes_parsed(nodeid_t node)
+{
+       node_set(node, processor_nodes_parsed);
+}
+
 bool __init numa_memblks_available(void)
 {
        if (num_node_memblks < NR_NODE_MEMBLKS)
@@ -236,7 +241,7 @@ acpi_numa_x2apic_affinity_init(const struct acpi_srat_x2apic_cpu_affinity *pa)
        }
 
        apicid_to_node[pa->apic_id] = node;
-       node_set(node, processor_nodes_parsed);
+       numa_set_processor_nodes_parsed(node);
        acpi_numa = 1;
 
        if (opt_acpi_verbose)
@@ -271,7 +276,7 @@ acpi_numa_processor_affinity_init(const struct acpi_srat_cpu_affinity *pa)
                return;
        }
        apicid_to_node[pa->apic_id] = node;
-       node_set(node, processor_nodes_parsed);
+       numa_set_processor_nodes_parsed(node);
        acpi_numa = 1;
 
        if (opt_acpi_verbose)
diff --git a/xen/include/asm-x86/numa.h b/xen/include/asm-x86/numa.h
index 78e044a390..295f875a51 100644
--- a/xen/include/asm-x86/numa.h
+++ b/xen/include/asm-x86/numa.h
@@ -77,6 +77,7 @@ extern int valid_numa_range(paddr_t start, paddr_t end, nodeid_t node);
 extern bool numa_memblks_available(void);
 extern int numa_update_node_memblks(nodeid_t node,
                paddr_t start, paddr_t size, bool hotplug);
+extern void numa_set_processor_nodes_parsed(nodeid_t node);
 
 void srat_parse_regions(paddr_t addr);
 extern u8 __node_distance(nodeid_t a, nodeid_t b);
-- 
2.25.1
