
[PATCH v2 3/7] xen/page_alloc: Add static per-NUMA-node counts of free pages



The static per-NUMA-node count of free pages is the sum of the free memory
in all zones of a node. Maintaining it incrementally is an optimisation that
avoids recomputing that sum frequently in the following patches, which
introduce per-NUMA-node claims.
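
To illustrate what the new counter caches (a sketch only, not part of the
patch, assuming the avail[][] layout used in page_alloc.c and that NR_ZONES
bounds the zone index), the follow-up claim checks would otherwise need a
per-zone sum along these lines, taken under the heap_lock:

    /* Sketch: the per-zone sum that per_node_avail_pages[node] caches. */
    static unsigned long sum_node_avail_pages(unsigned int node)
    {
        unsigned long total = 0;
        unsigned int zone;

        for ( zone = 0; zone < NR_ZONES; zone++ )
            total += avail[node][zone];

        return total;
    }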

---
Changed since v1:
- Added ASSERT(per_node_avail_pages[node] >= request) as requested by Roger
  during review. Note: since ASSERT(avail[node][zone] >= request) appears
  directly before it, request is already known to be valid, so the new
  assertion checks that per_node_avail_pages[node] has not been
  mis-accounted too low.

Signed-off-by: Bernhard Kaindl <bernhard.kaindl@xxxxxxxxx>
Signed-off-by: Alejandro Vallejo <alejandro.garciavallejo@xxxxxxx>
---
 xen/common/page_alloc.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 7e90b9cc1e..43de9296fd 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -486,6 +486,9 @@ static unsigned long node_need_scrub[MAX_NUMNODES];
 static unsigned long *avail[MAX_NUMNODES];
 static long total_avail_pages;
 
+/* Per-NUMA-node counts of free pages */
+static unsigned long per_node_avail_pages[MAX_NUMNODES];
+
 static DEFINE_SPINLOCK(heap_lock);
 static long outstanding_claims; /* total outstanding claims by all domains */
 
@@ -1066,6 +1069,8 @@ static struct page_info *alloc_heap_pages(
 
     ASSERT(avail[node][zone] >= request);
     avail[node][zone] -= request;
+    ASSERT(per_node_avail_pages[node] >= request);
+    per_node_avail_pages[node] -= request;
     total_avail_pages -= request;
     ASSERT(total_avail_pages >= 0);
 
@@ -1226,6 +1231,8 @@ static int reserve_offlined_page(struct page_info *head)
             continue;
 
         avail[node][zone]--;
+        ASSERT(per_node_avail_pages[node] > 0);
+        per_node_avail_pages[node]--;
         total_avail_pages--;
         ASSERT(total_avail_pages >= 0);
 
@@ -1550,6 +1557,7 @@ static void free_heap_pages(
     }
 
     avail[node][zone] += 1 << order;
+    per_node_avail_pages[node] += 1 << order;
     total_avail_pages += 1 << order;
     if ( need_scrub )
     {
-- 
2.43.0
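
For context, a hypothetical sketch (not part of this series) of how the
per-NUMA-node claims introduced by the following patches could consult the
new counter, modelled on the existing global outstanding_claims vs.
total_avail_pages comparison in this file. The per_node_outstanding_claims[]
array and the function name are illustrative assumptions only:

    /*
     * Hypothetical per-node analogue of the existing global claim check.
     * per_node_outstanding_claims[] is an assumed counterpart to
     * outstanding_claims; must be called with the heap_lock held.
     */
    static bool node_claim_exceeds_avail(nodeid_t node, unsigned long request)
    {
        return per_node_outstanding_claims[node] + request >
               per_node_avail_pages[node];
    }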
