
[PATCH 1/2] xen/mm: add a NUMA node parameter to scrub_free_pages()


  • To: xen-devel@xxxxxxxxxxxxxxxxxxxx
  • From: Roger Pau Monne <roger.pau@xxxxxxxxxx>
  • Date: Thu, 8 Jan 2026 18:55:35 +0100
  • Cc: Roger Pau Monne <roger.pau@xxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>
  • Delivery-date: Thu, 08 Jan 2026 17:56:50 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Such a parameter allows requesting that memory be scrubbed only from the
specified node.  If there's no memory to scrub on the requested node the
function returns false.  If the node is already being scrubbed by a
different CPU the function returns true, so the caller can tell whether
there's still pending work to do.

No functional change intended.  Existing callers are switched to the new
interface, though they all pass NUMA_NO_NODE to keep the current
behavior.
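
As an illustration only (not part of this patch), a hypothetical caller
wanting to drain the dirty pages of a single node could use the new
return value like this; process_pending_softirqs() is assumed to be
available from <xen/softirq.h>:

    /*
     * Sketch only: keep scrubbing while the node either still has dirty
     * pages or is being scrubbed by another CPU (both report true), and
     * stop once there's nothing left to scrub (false).
     */
    static void scrub_node(nodeid_t node)
    {
        while ( scrub_free_pages(node) )
            process_pending_softirqs();
    }

Returning true when another CPU holds the node in node_scrubbing lets
such a loop wait for completion instead of wrongly concluding the node
is already clean.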

Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
---
 xen/arch/arm/domain.c   |  2 +-
 xen/arch/x86/domain.c   |  2 +-
 xen/common/page_alloc.c | 17 ++++++++++++++---
 xen/include/xen/mm.h    |  3 ++-
 4 files changed, 18 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 47973f99d935..dff7554417ea 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -75,7 +75,7 @@ static void noreturn idle_loop(void)
          * and then, after it is done, whether softirqs became pending
          * while we were scrubbing.
          */
-        else if ( !softirq_pending(cpu) && !scrub_free_pages() &&
+        else if ( !softirq_pending(cpu) && !scrub_free_pages(NUMA_NO_NODE) &&
                   !softirq_pending(cpu) )
             do_idle();
 
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 7632d5e2d62d..276c485a204f 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -166,7 +166,7 @@ static void noreturn cf_check idle_loop(void)
          * and then, after it is done, whether softirqs became pending
          * while we were scrubbing.
          */
-        else if ( !softirq_pending(cpu) && !scrub_free_pages() &&
+        else if ( !softirq_pending(cpu) && !scrub_free_pages(NUMA_NO_NODE) &&
                   !softirq_pending(cpu) )
         {
             if ( guest )
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 2efc11ce095f..248c44df32b3 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1339,16 +1339,27 @@ static void cf_check scrub_continue(void *data)
     }
 }
 
-bool scrub_free_pages(void)
+bool scrub_free_pages(nodeid_t node)
 {
     struct page_info *pg;
     unsigned int zone;
     unsigned int cpu = smp_processor_id();
     bool preempt = false;
-    nodeid_t node;
     unsigned int cnt = 0;
 
-    node = node_to_scrub(true);
+    if ( node != NUMA_NO_NODE )
+    {
+        if ( !node_need_scrub[node] )
+            /* Nothing to scrub. */
+            return false;
+
+        if ( node_test_and_set(node, node_scrubbing) )
+            /* Another CPU is scrubbing it. */
+            return true;
+    }
+    else
+        node = node_to_scrub(true);
+
     if ( node == NUMA_NO_NODE )
         return false;
 
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 426362adb2f4..7067c9ec0405 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -65,6 +65,7 @@
 #include <xen/compiler.h>
 #include <xen/mm-frame.h>
 #include <xen/mm-types.h>
+#include <xen/numa.h>
 #include <xen/types.h>
 #include <xen/list.h>
 #include <xen/spinlock.h>
@@ -90,7 +91,7 @@ void init_xenheap_pages(paddr_t ps, paddr_t pe);
 void xenheap_max_mfn(unsigned long mfn);
 void *alloc_xenheap_pages(unsigned int order, unsigned int memflags);
 void free_xenheap_pages(void *v, unsigned int order);
-bool scrub_free_pages(void);
+bool scrub_free_pages(nodeid_t node);
 #define alloc_xenheap_page() (alloc_xenheap_pages(0,0))
 #define free_xenheap_page(v) (free_xenheap_pages(v,0))
 
-- 
2.51.0