[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

[PATCH 1/2] xen/mm: don't unconditionally clear PGC_need_scrub in alloc_heap_pages()


  • To: xen-devel@xxxxxxxxxxxxxxxxxxxx
  • From: Roger Pau Monne <roger.pau@xxxxxxxxxx>
  • Date: Wed, 25 Mar 2026 11:08:02 +0100
  • Cc: Roger Pau Monne <roger.pau@xxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Ayden Bottos <aydenbottos12@xxxxxxxxx>
  • Delivery-date: Wed, 25 Mar 2026 10:15:00 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

alloc_heap_pages() will unconditionally clear PGC_need_scrub, even when
MEMF_no_scrub is requested.  This is somewhat expected, as otherwise some
callers would assert on seeing unexpected flags set in the count_info
field.

Introduce a new MEMF bit to signal to alloc_heap_pages() that non-scrubbed
pages should keep the PGC_need_scrub bit set.  This fixes
alloc_domheap_pages() returning dirty pages without the PGC_need_scrub bit
set, which populate_physmap() relies on to know which pages still need
scrubbing.

With the above change alloc_domheap_pages() needs an adjustment to cope
with allocated pages possibly having the PGC_need_scrub bit set.

Fixes: 83a784a15b47 ("xen/mm: allow deferred scrub of physmap populate allocated pages")
Reported-by: Ayden Bottos <aydenbottos12@xxxxxxxxx>
Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
---
This issue was initially reported to the Xen Security Team.  It turned out
not to require an XSA only because the code hasn't been part of any
release; otherwise an XSA would have been issued.

The Security Team would like to thank Ayden for the prompt report.

In the scrubbing loop in alloc_heap_pages(), i should really be unsigned
long.
---
 xen/common/memory.c     |  3 ++-
 xen/common/page_alloc.c | 31 ++++++++++++++++++++++---------
 xen/include/xen/mm.h    |  2 ++
 3 files changed, 26 insertions(+), 10 deletions(-)

diff --git a/xen/common/memory.c b/xen/common/memory.c
index 918510f287a0..f0ff1311881c 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -345,7 +345,8 @@ static void populate_physmap(struct memop_args *a)
                 unsigned int scrub_start = 0;
                 unsigned int memflags =
                     a->memflags | (d->creation_finished ? 0
-                                                        : MEMF_no_scrub);
+                                                        : (MEMF_no_scrub |
+                                                           MEMF_keep_scrub));
                 nodeid_t node =
                     (a->memflags & MEMF_exact_node) ? MEMF_get_node(a->memflags)
                                                     : NUMA_NO_NODE;
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 588b5b99cbc7..1316dfbd15ee 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -989,6 +989,8 @@ static struct page_info *alloc_heap_pages(
     ASSERT(zone_lo <= zone_hi);
     ASSERT(zone_hi < NR_ZONES);
 
+    ASSERT(!(memflags & MEMF_keep_scrub) || (memflags & MEMF_no_scrub));
+
     if ( unlikely(order > MAX_ORDER) )
         return NULL;
 
@@ -1110,17 +1112,26 @@ static struct page_info *alloc_heap_pages(
     {
         bool cold = d && d != current->domain;
 
-        for ( i = 0; i < (1U << order); i++ )
+        if ( !(memflags & MEMF_no_scrub) )
         {
-            if ( test_and_clear_bit(_PGC_need_scrub, &pg[i].count_info) )
+            for ( i = 0; i < (1U << order); i++ )
             {
-                if ( !(memflags & MEMF_no_scrub) )
+                if ( test_and_clear_bit(_PGC_need_scrub, &pg[i].count_info) )
+                {
                     scrub_one_page(&pg[i], cold);
-
-                dirty_cnt++;
+                    dirty_cnt++;
+                }
+                else
+                    check_one_page(&pg[i]);
             }
-            else if ( !(memflags & MEMF_no_scrub) )
-                check_one_page(&pg[i]);
+        }
+        else
+        {
+            for ( i = 0; i < (1U << order); i++ )
+                if ( (memflags & MEMF_keep_scrub)
+                     ? test_bit(_PGC_need_scrub, &pg[i].count_info)
+                     : test_and_clear_bit(_PGC_need_scrub, &pg[i].count_info) )
+                    dirty_cnt++;
         }
 
         if ( dirty_cnt )
@@ -2696,8 +2707,10 @@ struct page_info *alloc_domheap_pages(
 
             for ( i = 0; i < (1UL << order); i++ )
             {
-                ASSERT(!pg[i].count_info);
-                pg[i].count_info = PGC_extra;
+                ASSERT(!(pg[i].count_info &
+                         ~((memflags & MEMF_keep_scrub) ? PGC_need_scrub
+                                                        : 0UL)));
+                pg[i].count_info |= PGC_extra;
             }
         }
         if ( assign_page(pg, order, d, memflags) )
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index d80bfba6d393..0639fc0d21fb 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -208,6 +208,8 @@ struct npfec {
 #define  MEMF_no_refcount (1U<<_MEMF_no_refcount)
 #define _MEMF_populate_on_demand 1
 #define  MEMF_populate_on_demand (1U<<_MEMF_populate_on_demand)
+#define _MEMF_keep_scrub  2
+#define  MEMF_keep_scrub  (1U<<_MEMF_keep_scrub)
 #define _MEMF_no_dma      3
 #define  MEMF_no_dma      (1U<<_MEMF_no_dma)
 #define _MEMF_exact_node  4
-- 
2.51.0
