
[Xen-changelog] [xen-unstable] page_alloc: Hold heap_lock while adjusting page states to/from PGC_state_free.



# HG changeset patch
# User Keir Fraser <keir.fraser@xxxxxxxxxx>
# Date 1284394111 -3600
# Node ID 69e8bb164683c76e0cd787df21b98c73905a61e6
# Parent  e300bfa3c0323ac08e7b8cd9fb40f9f1ab548543
page_alloc: Hold heap_lock while adjusting page states to/from PGC_state_free.

This avoids races with buddy-merging logic in free_heap_pages().

Signed-off-by: Keir Fraser <keir.fraser@xxxxxxxxxx>
---
 xen/common/page_alloc.c |    8 ++++----
 1 files changed, 4 insertions(+), 4 deletions(-)

diff -r e300bfa3c032 -r 69e8bb164683 xen/common/page_alloc.c
--- a/xen/common/page_alloc.c   Mon Sep 13 17:05:45 2010 +0100
+++ b/xen/common/page_alloc.c   Mon Sep 13 17:08:31 2010 +0100
@@ -415,8 +415,6 @@ static struct page_info *alloc_heap_page
     if ( d != NULL )
         d->last_alloc_node = node;
 
-    spin_unlock(&heap_lock);
-
     cpus_clear(mask);
 
     for ( i = 0; i < (1 << order); i++ )
@@ -437,6 +435,8 @@ static struct page_info *alloc_heap_page
         pg[i].u.inuse.type_info = 0;
         page_set_owner(&pg[i], NULL);
     }
+
+    spin_unlock(&heap_lock);
 
     if ( unlikely(!cpus_empty(mask)) )
     {
@@ -533,6 +533,8 @@ static void free_heap_pages(
     ASSERT(order <= MAX_ORDER);
     ASSERT(node >= 0);
 
+    spin_lock(&heap_lock);
+
     for ( i = 0; i < (1 << order); i++ )
     {
         /*
@@ -560,8 +562,6 @@ static void free_heap_pages(
             pg[i].tlbflush_timestamp = tlbflush_current_time();
     }
 
-    spin_lock(&heap_lock);
-
     avail[node][zone] += 1 << order;
     total_avail_pages += 1 << order;
 

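For readers following along, below is a small, self-contained sketch of the rule the patch enforces: transitions to and from the free page state must happen with the heap lock held, because the free path inspects buddy pages' state while merging. If the allocator dropped the lock before marking its pages in-use, a concurrent free could still observe them as free and merge them into a larger block while they are being handed out. This is not Xen code; the names (page_t, PG_FREE, PG_INUSE, the pthread heap_lock, alloc_pages, free_pages) are invented for illustration, and a pthread mutex stands in for Xen's spinlock.

/*
 * Hypothetical, simplified sketch of the locking rule: page state
 * transitions to/from "free" are made only while heap_lock is held,
 * because free_pages() scans buddy pages' state when merging.
 *
 * Build with: gcc -pthread -o buddy_sketch buddy_sketch.c
 */
#include <pthread.h>
#include <stdio.h>

#define NR_PAGES 16u                 /* toy "heap": 16 pages, max order 4 */

enum page_state { PG_FREE, PG_INUSE };

typedef struct {
    enum page_state state;           /* rough analogue of the PGC_state bits */
} page_t;

static page_t pages[NR_PAGES];
static pthread_mutex_t heap_lock = PTHREAD_MUTEX_INITIALIZER;

/* Allocate 2^order pages starting at an order-aligned index idx. */
static void alloc_pages(unsigned int idx, unsigned int order)
{
    pthread_mutex_lock(&heap_lock);

    /*
     * Mark every page in-use *before* dropping the lock.  If the unlock
     * came first (as in alloc_heap_pages() before this patch), a
     * concurrent free could still see PG_FREE here and merge the block
     * into a larger buddy while it is being handed out.
     */
    for (unsigned int i = 0; i < (1u << order); i++)
        pages[idx + i].state = PG_INUSE;

    pthread_mutex_unlock(&heap_lock);
}

/* Free 2^order pages at order-aligned idx; return the merged block's order. */
static unsigned int free_pages(unsigned int idx, unsigned int order)
{
    pthread_mutex_lock(&heap_lock);

    /* The transition back to PG_FREE is likewise made under the lock. */
    for (unsigned int i = 0; i < (1u << order); i++)
        pages[idx + i].state = PG_FREE;

    /* Buddy merging: only safe because all state changes are serialised. */
    while ((1u << (order + 1)) <= NR_PAGES) {
        unsigned int buddy = idx ^ (1u << order);
        int buddy_free = 1;

        for (unsigned int i = 0; i < (1u << order); i++)
            if (pages[buddy + i].state != PG_FREE)
                buddy_free = 0;
        if (!buddy_free)
            break;

        idx &= ~(1u << order);       /* merged block starts at the lower buddy */
        order++;
    }

    pthread_mutex_unlock(&heap_lock);
    return order;
}

int main(void)
{
    for (unsigned int i = 0; i < NR_PAGES; i++)
        pages[i].state = PG_FREE;

    alloc_pages(0, 2);               /* take pages 0..3 */
    unsigned int order = free_pages(0, 2);
    printf("freed block merged up to order %u\n", order);  /* prints 4 */
    return 0;
}

The patch above amounts to exactly this widening of the locked region in alloc_heap_pages() and free_heap_pages(): the per-page state writes and the buddy scan are serialised by heap_lock, so they can never interleave.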
