
[Xen-changelog] [xen-unstable] x86: adjust Dom0 initial memory allocation strategy



# HG changeset patch
# User Keir Fraser <keir.fraser@xxxxxxxxxx>
# Date 1268659530 0
# Node ID 257589edefb36f79b7ea63896ec5aa0a565c83d4
# Parent  bc0087c3e75ee73398c8c03d8942a5909c730b6b
x86: adjust Dom0 initial memory allocation strategy

Simply trying order-9 allocations until they no longer succeed may
consume an unnecessarily large amount of memory from the DMA zone
(since the page allocator will satisfy the request from that zone
once only lower-order memory blocks are left in all other zones). To
avoid using DMA zone memory, make alloc_chunk() try to allocate a
second, smaller chunk and prefer it over the first one if it came
from higher-addressed memory. This way, all memory outside the DMA
zone gets consumed before eating into that zone.
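
The effect is easier to see with a small self-contained model. The
sketch below is purely illustrative: it uses a hypothetical toy
allocator (toy_alloc()/toy_free(), two hard-coded zones) instead of
Xen's alloc_domheap_pages()/free_domheap_pages(), and it omits the
domain's max_pages accounting. It sets up the problematic case (a
high zone that only has small blocks left while the DMA zone can
still satisfy an order-9 request) and shows how preferring the
higher-addressed of two candidate chunks hands the DMA memory back.

#include <stdio.h>
#include <stddef.h>

#define DMA_PAGES       1024u   /* toy "DMA zone": pages 0..1023            */
#define HIGH_PAGES      1024u   /* toy high zone:  pages 1024..2047         */
#define HIGH_MAX_ORDER  7       /* high zone is fragmented: order <= 7 only */

/* Both zones hand out their highest free pages first, like Xen's pool. */
static size_t dma_top  = DMA_PAGES;
static size_t high_top = DMA_PAGES + HIGH_PAGES;

/* Return the first page of a 2^order chunk, or -1 if nothing fits. */
static long toy_alloc(unsigned int order)
{
    size_t pages = (size_t)1 << order;

    /* Prefer the high zone, but it can no longer serve large orders. */
    if ( order <= HIGH_MAX_ORDER && high_top >= DMA_PAGES + pages )
        return (long)(high_top -= pages);
    /* Fall back to the DMA zone, which still has a big contiguous block. */
    if ( dma_top >= pages )
        return (long)(dma_top -= pages);
    return -1;
}

/* The toy only knows how to undo the most recent allocation in a zone. */
static void toy_free(long page, unsigned int order)
{
    size_t pages = (size_t)1 << order;

    if ( (size_t)page == dma_top && (size_t)page < DMA_PAGES )
        dma_top += pages;
    else if ( (size_t)page == high_top )
        high_top += pages;
}

static const char *zone(long page)
{
    return (size_t)page < DMA_PAGES ? "DMA" : "high";
}

int main(void)
{
    unsigned int order = 9, free_order;
    long chunk, chunk2;

    /* Original strategy: take the largest chunk any zone will give us. */
    while ( (chunk = toy_alloc(order)) < 0 )
        if ( order-- == 0 )
            break;

    printf("first try: order-%u chunk at page %ld (%s zone)\n",
           order, chunk, zone(chunk));

    /*
     * Patched strategy: also probe smaller orders and keep whichever chunk
     * sits at the higher address, so DMA memory is given back as soon as
     * something above it turns out to be available.  (The real code also
     * honours the domain's max_pages limit, omitted here.)
     */
    for ( free_order = order; chunk >= 0 && order--; )
    {
        chunk2 = toy_alloc(order);
        if ( chunk2 > chunk )
        {
            toy_free(chunk, free_order);
            chunk = chunk2;
            free_order = order;
        }
        else if ( chunk2 >= 0 )
            toy_free(chunk2, order);
    }

    printf("kept     : order-%u chunk at page %ld (%s zone)\n",
           free_order, chunk, zone(chunk));
    return 0;
}

When built with a C compiler, this should report the first chunk as
an order-9 allocation from the toy DMA zone and the kept chunk as an
order-7 allocation from the high zone, with the DMA pages returned,
which is the same trade the patch makes in alloc_chunk().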

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxxxx>
---
 xen/arch/x86/domain_build.c |   23 ++++++++++++++++++++++-
 1 files changed, 22 insertions(+), 1 deletion(-)

diff -r bc0087c3e75e -r 257589edefb3 xen/arch/x86/domain_build.c
--- a/xen/arch/x86/domain_build.c       Mon Mar 15 13:24:33 2010 +0000
+++ b/xen/arch/x86/domain_build.c       Mon Mar 15 13:25:30 2010 +0000
@@ -126,7 +126,8 @@ static struct page_info * __init alloc_c
     struct domain *d, unsigned long max_pages)
 {
     struct page_info *page;
-    unsigned int order;
+    unsigned int order, free_order;
+
     /*
      * Allocate up to 2MB at a time: It prevents allocating very large chunks
      * from DMA pools before the >4GB pool is fully depleted.
@@ -139,6 +140,26 @@ static struct page_info * __init alloc_c
     while ( (page = alloc_domheap_pages(d, order, 0)) == NULL )
         if ( order-- == 0 )
             break;
+    /*
+     * Make a reasonable attempt at finding a smaller chunk at a higher
+     * address, to avoid allocating from low memory as much as possible.
+     */
+    for ( free_order = order; page && order--; )
+    {
+        struct page_info *pg2;
+
+        if ( d->tot_pages + (1 << order) > d->max_pages )
+            continue;
+        pg2 = alloc_domheap_pages(d, order, 0);
+        if ( pg2 > page )
+        {
+            free_domheap_pages(page, free_order);
+            page = pg2;
+            free_order = order;
+        }
+        else if ( pg2 )
+            free_domheap_pages(pg2, order);
+    }
     return page;
 }
 

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog
