[Xen-changelog] [xen-unstable] xmalloc: return unused full pages on multi-page allocations



# HG changeset patch
# User Jan Beulich <jbeulich@xxxxxxxx>
# Date 1318493023 -7200
# Node ID 8c1e7830112f5bc57c3539c3c1e7eb3ca3539f1f
# Parent  46ca8ea42d4c674e0792e792300710afec3f6e24
xmalloc: return unused full pages on multi-page allocations

Certain (boot time) allocations are relatively large (particularly when
building with a high NR_CPUS), but can also happen to be pretty far from
a power-of-two size. Make use of the Xen page allocator's ability (which
Linux's allocator lacks) to take back parts of a higher-order allocation
in smaller pieces, and return the unused tail pages immediately.

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
Acked-by: Keir Fraser <keir@xxxxxxx>
---


diff -r 46ca8ea42d4c -r 8c1e7830112f xen/common/xmalloc_tlsf.c
--- a/xen/common/xmalloc_tlsf.c Thu Oct 13 10:02:34 2011 +0200
+++ b/xen/common/xmalloc_tlsf.c Thu Oct 13 10:03:43 2011 +0200
@@ -527,13 +527,21 @@
 static void *xmalloc_whole_pages(unsigned long size)
 {
     struct bhdr *b;
-    unsigned int pageorder = get_order_from_bytes(size + BHDR_OVERHEAD);
+    unsigned int i, pageorder = get_order_from_bytes(size + BHDR_OVERHEAD);
+    char *p;
 
     b = alloc_xenheap_pages(pageorder, 0);
     if ( b == NULL )
         return NULL;
 
-    b->size = (1 << (pageorder + PAGE_SHIFT));
+    b->size = PAGE_ALIGN(size + BHDR_OVERHEAD);
+    for ( p = (char *)b + b->size, i = 0; i < pageorder; ++i )
+        if ( (unsigned long)p & (PAGE_SIZE << i) )
+        {
+            free_xenheap_pages(p, i);
+            p += PAGE_SIZE << i;
+        }
+
     return (void *)b->ptr.buffer;
 }
 
@@ -611,7 +619,20 @@
     }
 
     if ( b->size >= PAGE_SIZE )
-        free_xenheap_pages((void *)b, get_order_from_bytes(b->size));
+    {
+        unsigned int i, order = get_order_from_bytes(b->size);
+
+        BUG_ON((unsigned long)b & ((PAGE_SIZE << order) - 1));
+        for ( i = 0; ; ++i )
+        {
+            if ( !(b->size & (PAGE_SIZE << i)) )
+                continue;
+            b->size -= PAGE_SIZE << i;
+            free_xenheap_pages((void *)b + b->size, i);
+            if ( i + 1 >= order )
+                break;
+        }
+    }
     else
         xmem_pool_free(p, xenpool);
 }
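
The two hunks rely on the same buddy-style decomposition of a page-aligned
region into naturally aligned power-of-two blocks, each of which the
order-based page allocator can take back on its own: xmalloc_whole_pages()
walks the unused tail of the order-"pageorder" block upwards, freeing the
largest aligned piece possible at each step, while the xfree() side peels
one block per set bit off the end of b->size. The standalone sketch below
is not part of the patch: it assumes 4 KiB pages, mocks free_xenheap_pages()
with a printf(), and models addresses as plain offsets from an order-aligned
base, which preserves exactly the low bits both loops test. It walks the
case of a 3-page allocation carved out of an order-2 (4-page) block.

#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Mock: the real Xen call returns an order-"order" block to the heap. */
static void free_xenheap_pages(unsigned long addr, unsigned int order)
{
    printf("  free order %u (%lu page(s)) at offset 0x%lx\n",
           order, 1UL << order, addr);
}

/* Smallest order whose block covers "size" bytes. */
static unsigned int get_order_from_bytes(unsigned long size)
{
    unsigned int order = 0;

    while ( (PAGE_SIZE << order) < size )
        ++order;

    return order;
}

int main(void)
{
    unsigned long used = 3 * PAGE_SIZE;   /* the PAGE_ALIGN()ed b->size */
    unsigned int i, pageorder = get_order_from_bytes(used);
    unsigned long p;

    /* Allocation side: return the unused tail [used, PAGE_SIZE << pageorder)
     * in buddy-aligned pieces of increasing order. */
    printf("trim after allocation:\n");
    for ( p = used, i = 0; i < pageorder; ++i )
        if ( p & (PAGE_SIZE << i) )
        {
            free_xenheap_pages(p, i);
            p += PAGE_SIZE << i;
        }

    /* Free side: peel pieces off the end of the used region, one per set
     * bit of its size, smallest order first. */
    printf("xfree of the remaining %lu pages:\n", used / PAGE_SIZE);
    for ( i = 0; ; ++i )
    {
        if ( !(used & (PAGE_SIZE << i)) )
            continue;
        used -= PAGE_SIZE << i;
        free_xenheap_pages(used, i);
        if ( i + 1 >= pageorder )
            break;
    }

    return 0;
}

For the 3-page example this prints a single order-0 free at offset 0x3000
(the fourth page) on the allocation side, then an order-0 free at offset
0x2000 followed by an order-1 free at offset 0x0 from the xfree loop, so
the whole order-2 block is returned with no page leaked.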
