
[Xen-changelog] [xen-unstable] xmalloc_tlsf: Fall back to xmalloc_whole_pages() if xmem_pool_alloc() fails.



# HG changeset patch
# User Keir Fraser <keir.fraser@xxxxxxxxxx>
# Date 1256113261 -3600
# Node ID 87bc0d49137bb1d66758766b39dbaf558aabd043
# Parent  9ead82c46efd7f95428a186e3dd3e8587ec9d811
xmalloc_tlsf: Fall back to xmalloc_whole_pages() if xmem_pool_alloc() fails.

This was happening for xmalloc request sizes between 3921 and 3951
bytes, because xmem_pool_alloc() may add extra padding to the
requested size, making the total block size greater than a page.
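
One plausible reconstruction of the arithmetic, as a standalone C
program. The constants (MEM_ALIGN of 16, BHDR_OVERHEAD of 16, 32 TLSF
second-level classes) and the trigger of a 128-byte-aligned request
are assumptions about a 64-bit build, not quotes from
xen/common/xmalloc_tlsf.c:

    /*
     * Illustrative only: a plausible reconstruction of the failure,
     * not code from xen/common/xmalloc_tlsf.c.  Assumed constants:
     * MEM_ALIGN = 16, BHDR_OVERHEAD = 16, 32 second-level classes,
     * and a caller asking for 128-byte alignment.
     */
    #include <stdio.h>

    #define PAGE_SIZE     4096u
    #define MEM_ALIGN     16u    /* assumed minimum xmalloc alignment */
    #define BHDR_OVERHEAD 16u    /* assumed per-block header overhead */
    #define ALIGN_REQ     128u   /* assumed caller-requested alignment */

    /* Find-last-set: 1-based index of the highest set bit. */
    static unsigned int fls_u(unsigned int v)
    {
        unsigned int r = 0;
        while ( v )
        {
            v >>= 1;
            r++;
        }
        return r;
    }

    /* TLSF-style rounding up to the next second-level size class
     * (valid for sizes of 64 bytes and up). */
    static unsigned int tlsf_round(unsigned int size)
    {
        unsigned int t = (1u << (fls_u(size) - 1 - 5)) - 1;
        return (size + t) & ~t;
    }

    int main(void)
    {
        /* Largest block one page can supply after region bookkeeping. */
        unsigned int max_pool_block = PAGE_SIZE - 2*BHDR_OVERHEAD; /* 4064 */
        unsigned int req;

        for ( req = 3921; req <= 3951; req += 30 )  /* the reported range */
        {
            /* _xmalloc() pads the size to honour the alignment request. */
            unsigned int padded  = req + ALIGN_REQ - MEM_ALIGN;
            unsigned int aligned = (padded + MEM_ALIGN - 1) &
                                   ~(MEM_ALIGN - 1);
            unsigned int rounded = tlsf_round(aligned);

            printf("req=%4u padded=%4u rounded=%4u limit=%4u -> %s\n",
                   req, padded, rounded, max_pool_block,
                   rounded > max_pool_block ? "pool cannot satisfy"
                                            : "fits");
        }
        return 0;
    }

Under those assumptions the padded sizes (4033-4063 bytes) sit just
below the old PAGE_SIZE - (2*BHDR_OVERHEAD) cutoff of 4064, so the old
code routed them to the pool, where the class rounding then overshoots
the page.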

Rather than add yet more smarts about TLSF to _xmalloc(), we just
dumbly attempt any request smaller than a page via xmem_pool_alloc()
first, then fall back on xmalloc_whole_pages() if this fails.
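
For readers outside the Xen tree, a minimal runnable sketch of that
try-then-fall-back shape. pool_alloc() and whole_pages_alloc() are
hypothetical stand-ins for xmem_pool_alloc() and xmalloc_whole_pages(),
backed by malloc() so the sketch compiles anywhere:

    #include <stdio.h>
    #include <stdlib.h>

    #define PAGE_SIZE 4096u

    /* Hypothetical stand-in for xmem_pool_alloc(): crudely refuses
     * anything above 4000 bytes, standing in for the real pool's
     * class-rounding failure near a page. */
    static void *pool_alloc(unsigned long size)
    {
        return (size > 4000) ? NULL : malloc(size);
    }

    /* Hypothetical stand-in for xmalloc_whole_pages(). */
    static void *whole_pages_alloc(unsigned long size)
    {
        return malloc(size);
    }

    /* The patched shape: try the pool for anything under a page,
     * and fall back to whole pages whenever the pool says no. */
    static void *alloc(unsigned long size)
    {
        void *p = NULL;

        if ( size < PAGE_SIZE )
            p = pool_alloc(size);
        if ( p == NULL )
            p = whole_pages_alloc(size);
        return p;
    }

    int main(void)
    {
        void *p = alloc(4040);  /* pool refuses; fallback succeeds */

        printf("%s\n", p ? "allocated" : "failed");
        free(p);
        return 0;
    }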

Based on bug diagnosis and an initial patch by John Byrne <john.l.byrne@xxxxxx>.

Signed-off-by: Keir Fraser <keir.fraser@xxxxxxxxxx>
---
 xen/common/xmalloc_tlsf.c |   10 +++++-----
 1 files changed, 5 insertions(+), 5 deletions(-)

diff -r 9ead82c46efd -r 87bc0d49137b xen/common/xmalloc_tlsf.c
--- a/xen/common/xmalloc_tlsf.c Wed Oct 21 08:51:10 2009 +0100
+++ b/xen/common/xmalloc_tlsf.c Wed Oct 21 09:21:01 2009 +0100
@@ -553,7 +553,7 @@ static void tlsf_init(void)
 
 void *_xmalloc(unsigned long size, unsigned long align)
 {
-    void *p;
+    void *p = NULL;
     u32 pad;
 
     ASSERT(!in_irq());
@@ -566,10 +566,10 @@ void *_xmalloc(unsigned long size, unsig
     if ( !xenpool )
         tlsf_init();
 
-    if ( size >= (PAGE_SIZE - (2*BHDR_OVERHEAD)) )
+    if ( size < PAGE_SIZE )
+        p = xmem_pool_alloc(size, xenpool);
+    if ( p == NULL )
         p = xmalloc_whole_pages(size);
-    else
-        p = xmem_pool_alloc(size, xenpool);
 
     /* Add alignment padding. */
     if ( (pad = -(long)p & (align - 1)) != 0 )
@@ -603,7 +603,7 @@ void xfree(void *p)
         ASSERT(!(b->size & 1));
     }
 
-    if ( b->size >= (PAGE_SIZE - (2*BHDR_OVERHEAD)) )
+    if ( b->size >= PAGE_SIZE )
         free_xenheap_pages((void *)b, get_order_from_bytes(b->size));
     else
         xmem_pool_free(p, xenpool);

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog