
[Xen-changelog] [xen-3.4-testing] xmalloc_tlsf: Fall back to xmalloc_whole_pages() if xmem_pool_alloc() fails.



# HG changeset patch
# User Keir Fraser <keir.fraser@xxxxxxxxxx>
# Date 1256289859 -3600
# Node ID 2beca5f48ffed21c4b56cabd34707e09b4c31068
# Parent  7bd37c5c72893a783a00b3068df0c81b3ceb911c
xmalloc_tlsf: Fall back to xmalloc_whole_pages() if xmem_pool_alloc() fails.

This was happening for xmalloc request sizes between 3921 and 3951
bytes. The reason is that xmem_pool_alloc() may add extra padding to
the requested size, making the total block size greater than a page.

Rather than add yet more smarts about TLSF to _xmalloc(), we just
dumbly attempt any request smaller than a page via xmem_pool_alloc()
first, then fall back to xmalloc_whole_pages() if that fails.
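
For illustration, here is a minimal user-space sketch of the same
try-then-fall-back pattern. It is not the Xen code: pool_alloc(),
whole_pages_alloc() and the 176-byte overhead are stand-ins invented
for this example; the real routines are xmem_pool_alloc() and
xmalloc_whole_pages() in xen/common/xmalloc_tlsf.c.

    #include <stdio.h>
    #include <stdlib.h>

    #define PAGE_SIZE 4096UL

    /* Stand-in for xmem_pool_alloc(): models a pool that pads each
     * request internally and fails once the padded size would exceed
     * a page.  The 176-byte overhead is illustrative only. */
    static void *pool_alloc(size_t size)
    {
        if (size + 176 > PAGE_SIZE)
            return NULL;               /* padding pushed us past a page */
        return malloc(size);
    }

    /* Stand-in for xmalloc_whole_pages(). */
    static void *whole_pages_alloc(size_t size)
    {
        return malloc(size);
    }

    /* The pattern the patch adopts: dumbly try the pool first for any
     * sub-page request, then fall back to whole pages if that fails. */
    static void *try_alloc(size_t size)
    {
        void *p = NULL;

        if (size < PAGE_SIZE)
            p = pool_alloc(size);
        if (p == NULL)
            p = whole_pages_alloc(size);
        return p;
    }

    int main(void)
    {
        void *p = try_alloc(3940);     /* inside the failing 3921-3951 range */
        printf("3940-byte request %s\n",
               p ? "satisfied via fallback" : "failed");
        free(p);
        return 0;
    }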

Based on a bug diagnosis and initial patch by John Byrne <john.l.byrne@xxxxxx>.

Signed-off-by: Keir Fraser <keir.fraser@xxxxxxxxxx>
xen-unstable changeset:   20349:87bc0d49137b
xen-unstable date:        Wed Oct 21 09:21:01 2009 +0100
---
 xen/common/xmalloc_tlsf.c |   10 +++++-----
 1 files changed, 5 insertions(+), 5 deletions(-)

diff -r 7bd37c5c7289 -r 2beca5f48ffe xen/common/xmalloc_tlsf.c
--- a/xen/common/xmalloc_tlsf.c Fri Oct 23 10:20:28 2009 +0100
+++ b/xen/common/xmalloc_tlsf.c Fri Oct 23 10:24:19 2009 +0100
@@ -542,7 +542,7 @@ static void tlsf_init(void)
 
 void *_xmalloc(unsigned long size, unsigned long align)
 {
-    void *p;
+    void *p = NULL;
     u32 pad;
 
     ASSERT(!in_irq());
@@ -555,10 +555,10 @@ void *_xmalloc(unsigned long size, unsig
     if ( !xenpool )
         tlsf_init();
 
-    if ( size >= (PAGE_SIZE - (2*BHDR_OVERHEAD)) )
+    if ( size < PAGE_SIZE )
+        p = xmem_pool_alloc(size, xenpool);
+    if ( p == NULL )
         p = xmalloc_whole_pages(size);
-    else
-        p = xmem_pool_alloc(size, xenpool);
 
     /* Add alignment padding. */
     if ( (pad = -(long)p & (align - 1)) != 0 )
@@ -592,7 +592,7 @@ void xfree(void *p)
         ASSERT(!(b->size & 1));
     }
 
-    if ( b->size >= (PAGE_SIZE - (2*BHDR_OVERHEAD)) )
+    if ( b->size >= PAGE_SIZE )
         free_xenheap_pages((void *)b, get_order_from_bytes(b->size));
     else
         xmem_pool_free(p, xenpool);
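
As an aside, the alignment step visible (unchanged) in the first hunk,
pad = -(long)p & (align - 1), is the usual two's-complement trick: for
a power-of-two align it yields the number of bytes needed to round p
up to the next align boundary. A stand-alone demonstration, with
arbitrary example values:

    #include <stdio.h>

    int main(void)
    {
        unsigned long align = 64;      /* must be a power of two */
        unsigned long addrs[] = { 60, 64, 65, 100, 128 };

        for (unsigned int i = 0; i < sizeof(addrs) / sizeof(addrs[0]); i++)
        {
            unsigned long p = addrs[i];
            unsigned long pad = -(long)p & (align - 1);
            printf("p=%3lu  pad=%2lu  p+pad=%3lu\n", p, pad, p + pad);
        }
        return 0;
    }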
