
[Xen-devel] [PATCH 3.3-rc] memblock: Fix alloc failure due to dumb underflow protection in memblock_find_in_range_node()



7bd0b0f0da "memblock: Reimplement memblock allocation using reverse
free area iterator" implemented a simple top-down allocator using the
reverse memblock iterator.  To avoid underflow in the allocator loop,
it simply raised the lower boundary to the requested size, under the
assumption that the requested size would be far smaller than the
available memblocks.

This causes early page table allocation failures under certain
configurations.  Fix it by checking for underflow directly instead of
bumping up the lower bound.

Signed-off-by: Tejun Heo <tj@xxxxxxxxxx>
Reported-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
LKML-Reference: <20120110202838.GA10402@xxxxxxxxxxxxxxxxxxx>
---
Sorry, I wrote the patch description and everything but forgot to
actually send it out. :)

Ingo, the new memblock allocator went too far with simplification and
caused unnecessary allocation failures.  The fix is fairly obvious and
simple.  Can you please route this patch?

Thanks.
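
For reference, below is a minimal, self-contained sketch of the failure
mode and the fix.  It is not the memblock code itself: the single
hardcoded free range and the helper names (find_range(), MB(),
round_down_to(), clamp_addr()) are made up for illustration.  With a
15MB request against a free range [1MB, 16MB), the old
"start = max3(start, size, PAGE_SIZE)" trick bumps the lower bound to
15MB and the search collapses, while the explicit "this_end < size"
check (which also prevents the unsigned this_end - size from wrapping)
lets the allocation succeed at 1MB.

#include <stdio.h>
#include <stdint.h>

typedef uint64_t phys_addr_t;

#define MB(x)		((phys_addr_t)(x) << 20)
#define PAGE_SIZE	((phys_addr_t)4096)

/* @align must be a power of two, as in the real allocator */
static phys_addr_t round_down_to(phys_addr_t x, phys_addr_t align)
{
	return x & ~(align - 1);
}

static phys_addr_t clamp_addr(phys_addr_t x, phys_addr_t lo, phys_addr_t hi)
{
	return x < lo ? lo : (x > hi ? hi : x);
}

/* one hypothetical free range standing in for the memblock iterator */
static const phys_addr_t free_start = MB(1), free_end = MB(16);

static phys_addr_t find_range(phys_addr_t start, phys_addr_t end,
			      phys_addr_t size, phys_addr_t align,
			      int old_behaviour)
{
	phys_addr_t this_start, this_end, cand;

	if (old_behaviour && start < size)
		start = size;		/* old: raise lower bound to @size */
	if (start < PAGE_SIZE)
		start = PAGE_SIZE;	/* avoid allocating the first page */
	if (end < start)
		end = start;

	this_start = clamp_addr(free_start, start, end);
	this_end = clamp_addr(free_end, start, end);

	/* new: check for underflow directly */
	if (!old_behaviour && this_end < size)
		return 0;

	cand = round_down_to(this_end - size, align);
	if (cand >= this_start)
		return cand;
	return 0;
}

int main(void)
{
	phys_addr_t size = MB(15), align = MB(1);

	/* old behaviour fails (prints 0), new behaviour returns 1MB */
	printf("old: %#llx\n",
	       (unsigned long long)find_range(0, MB(16), size, align, 1));
	printf("new: %#llx\n",
	       (unsigned long long)find_range(0, MB(16), size, align, 0));
	return 0;
}

The old bound-raising only mattered when @size exceeded part of the
search window, which is exactly the case it then got wrong; checking
this_end against @size inside the loop handles it per free range.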

 mm/memblock.c |    7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index 2f55f19..77b5f22 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -106,14 +106,17 @@ phys_addr_t __init_memblock memblock_find_in_range_node(phys_addr_t start,
        if (end == MEMBLOCK_ALLOC_ACCESSIBLE)
                end = memblock.current_limit;
 
-       /* adjust @start to avoid underflow and allocating the first page */
-       start = max3(start, size, (phys_addr_t)PAGE_SIZE);
+       /* avoid allocating the first page */
+       start = max_t(phys_addr_t, start, PAGE_SIZE);
        end = max(start, end);
 
        for_each_free_mem_range_reverse(i, nid, &this_start, &this_end, NULL) {
                this_start = clamp(this_start, start, end);
                this_end = clamp(this_end, start, end);
 
+               if (this_end < size)
+                       continue;
+
                cand = round_down(this_end - size, align);
                if (cand >= this_start)
                        return cand;
