
[Xen-devel] What's the reason for padding the alloc_bitmap in page_alloc.c?



I see that the code in init_boot_allocator adds an extra longword to the size of alloc_bitmap. Is there really a chance of an overrun in map_alloc or map_free, as the comment suggests? I would think that allocating or freeing more pages than we have would be considered a bug. I don't know the history of that padding, but I wonder if it was added to cover up another overflow that is itself fixable. Doing bitmap_size = (max_page + 7)/8 should be enough.

   /*
    * Allocate space for the allocation bitmap. Include an extra longword
    * of padding for possible overrun in map_alloc and map_free.
    */
   bitmap_size  = max_page / 8;
   bitmap_size += sizeof(unsigned long);
   bitmap_size  = round_pgup(bitmap_size);
   alloc_bitmap = (unsigned long *)maddr_to_virt(bitmap_start);
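Concretely, the sizing I have in mind is just this (a sketch only, untested):

   /*
    * Hypothetical padding-free sizing: one bit per page, rounded up to
    * whole bytes and then page-aligned as before.  This assumes the
    * off-by-one in end_boot_allocator (below) is fixed first, so nothing
    * ever touches a bit at or beyond max_page.
    */
   bitmap_size  = (max_page + 7) / 8;
   bitmap_size  = round_pgup(bitmap_size);
   alloc_bitmap = (unsigned long *)maddr_to_virt(bitmap_start);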


In end_boot_allocator in page_alloc.c, there's a chance that the allocated_in_map macro reads past the end of the bitmap. The loop passes i+1 as the parameter while i runs up to max_page - 1, so on the last iteration allocated_in_map is called with max_page, one past the last valid page index.

Most of the time this is OK because the bitmap allocation is larger than strictly needed. But if it is sized exactly, we could have a problem.

   /* Pages that are free now go to the domain sub-allocator. */
   for ( i = 0; i < max_page; i++ )
   {
       curr_free = next_free;
       next_free = !allocated_in_map(i+1);
       if ( next_free )
           map_alloc(i+1, 1); /* prevent merging in free_heap_pages() */
       if ( curr_free )
           free_heap_pages(pfn_dom_zone_type(i), mfn_to_page(i), 0);
   }


and with allocated_in_map defined as:

#define allocated_in_map(_pn)                 \
( !! (alloc_bitmap[(_pn)/PAGES_PER_MAPWORD] & \
    (1UL<<((_pn)&(PAGES_PER_MAPWORD-1)))) )

We can see that with a parameter of max_page we index past the bitmap, which can cause problems. I wonder whether this was actually happening and the padding above was added to hide the bug. The padding does paper over it, but I don't think that's the proper approach.
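To make the out-of-bounds index concrete, here is a throwaway user-space sketch (the max_page value is made up, and PAGES_PER_MAPWORD is assumed to be the machine word size in bits, as in page_alloc.c):

#include <stdio.h>

#define PAGES_PER_MAPWORD (sizeof(unsigned long) * 8)

int main(void)
{
    unsigned long max_page = 32768;  /* hypothetical, a multiple of the word size */
    unsigned long words    = (max_page + PAGES_PER_MAPWORD - 1) / PAGES_PER_MAPWORD;

    /* Valid word indices into alloc_bitmap would be 0 .. words-1. */
    printf("last valid bitmap word index: %lu\n", words - 1);

    /* Word index touched by allocated_in_map(i+1) on the final iteration,
     * i.e. allocated_in_map(max_page): one past the end. */
    printf("word index touched:           %lu\n", max_page / PAGES_PER_MAPWORD);

    return 0;
}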

Attached is a patch to fix the overflow problem in end_boot_allocator. I'll run it for a while without the padding (using bitmap_size = (max_page + 7)/8 instead) and see if there are any other problems. I'll also put a test into map_alloc and map_free to tell me if anything goes past max_page or the bitmap size.
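For what it's worth, the check I have in mind looks roughly like this (a sketch, assuming map_alloc takes a starting page number and a count; map_free would get the same treatment):

static void map_alloc(unsigned long first_page, unsigned long nr_pages)
{
    /* Scream if a caller would walk off the end of the bitmap. */
    ASSERT(first_page + nr_pages <= max_page);

    /* ... existing bitmap manipulation unchanged ... */
}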

I would hate to have a machine with exactly 134,184,960 bytes of memory: 134,184,960 / 4096 = 32,760 pages; 32,760 / 8 = 4,095 bytes of bitmap; 4,095 + 4 = 4,099 bytes, which round_pgup turns into two pages. Now we have an extra page for the bitmap that is never used. (OK, that was hypothetical, but it does make a small point. ;)
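Spelling that arithmetic out as a throwaway snippet (the memory size is the hypothetical one above; the +4 assumes 32-bit longs, but the result is the same with 64-bit):

#include <stdio.h>

#define PAGE_SIZE      4096UL
#define round_pgup(_p) (((_p) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

int main(void)
{
    unsigned long bytes    = 134184960UL;        /* hypothetical RAM size */
    unsigned long max_page = bytes / PAGE_SIZE;  /* 32,760 pages          */

    unsigned long padded = round_pgup(max_page / 8 + sizeof(unsigned long));
    unsigned long exact  = round_pgup((max_page + 7) / 8);

    printf("padded bitmap: %lu bytes (%lu pages)\n", padded, padded / PAGE_SIZE);
    printf("exact bitmap:  %lu bytes (%lu pages)\n", exact,  exact / PAGE_SIZE);

    return 0;
}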

-- Steve

Just in case you want the patch:

Signed-off-by: Steven Rostedt <srostedt@xxxxxxxxxx>




diff -r bbabdebc54ad xen/common/page_alloc.c
--- a/xen/common/page_alloc.c   Wed Jul 19 21:13:36 2006 +0100
+++ b/xen/common/page_alloc.c   Fri Aug 04 22:52:23 2006 -0400
@@ -256,7 +256,7 @@ void end_boot_allocator(void)
 void end_boot_allocator(void)
 {
     unsigned long i, j;
-    int curr_free = 0, next_free = 0;
+    int curr_free, prev_free;
 
     memset(avail, 0, sizeof(avail));
 
@@ -265,15 +265,18 @@ void end_boot_allocator(void)
             INIT_LIST_HEAD(&heap[i][j]);
 
     /* Pages that are free now go to the domain sub-allocator. */
-    for ( i = 0; i < max_page; i++ )
-    {
-        curr_free = next_free;
-        next_free = !allocated_in_map(i+1);
-        if ( next_free )
-            map_alloc(i+1, 1); /* prevent merging in free_heap_pages() */
+    prev_free = !allocated_in_map(0);
+    for ( i = 1; i < max_page; i++ )
+    {
+        curr_free = !allocated_in_map(i);
         if ( curr_free )
-            free_heap_pages(pfn_dom_zone_type(i), mfn_to_page(i), 0);
-    }
+            map_alloc(i, 1); /* prevent merging in free_heap_pages() */
+        if ( prev_free )
+            free_heap_pages(pfn_dom_zone_type(i-1), mfn_to_page(i-1), 0);
+        prev_free = curr_free;
+    }
+    if ( prev_free )
+        free_heap_pages(pfn_dom_zone_type(i-1), mfn_to_page(i-1), 0);
 }
 
 /* Hand the specified arbitrary page range to the specified heap zone. */

 

