
[Xen-devel] [PATCH 1/2] xen: introduce a no-lock version of free_heap_pages



free_heap_pages() takes the heap_lock on every call. This lock can become a
point of contention when many CPUs are freeing pages in parallel.

This patch introduces __free_heap_pages(), a variant that does not take the
lock itself; a caller that already holds heap_lock can use it to free a batch
of pages under a single lock acquisition.

Signed-off-by: Bob Liu <bob.liu@xxxxxxxxxx>
---
 xen/common/page_alloc.c |   13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 601319c..56826b4 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -808,8 +808,8 @@ static int reserve_offlined_page(struct page_info *head)
     return count;
 }
 
-/* Free 2^@order set of pages. */
-static void free_heap_pages(
+/* No lock version, the caller must hold heap_lock */
+static void __free_heap_pages(
     struct page_info *pg, unsigned int order)
 {
     unsigned long mask, mfn = page_to_mfn(pg);
@@ -819,8 +819,6 @@ static void free_heap_pages(
     ASSERT(order <= MAX_ORDER);
     ASSERT(node >= 0);
 
-    spin_lock(&heap_lock);
-
     for ( i = 0; i < (1 << order); i++ )
     {
         /*
@@ -894,7 +892,14 @@ static void free_heap_pages(
 
     if ( tainted )
         reserve_offlined_page(pg);
+}
 
+/* Free 2^@order set of pages. */
+static void free_heap_pages(
+    struct page_info *pg, unsigned int order)
+{
+    spin_lock(&heap_lock);
+    __free_heap_pages(pg, order);
     spin_unlock(&heap_lock);
 }
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
