
Re: [PATCH 2/2] xen/heap: pass order to free_heap_pages() in heap init



Hi Jan,

On 09/06/2022 14:22, Jan Beulich wrote:
  /*
   * This function should only be called with valid pages from the same NUMA
- * node.
+ * node and the same zone.
   *
   * Callers should use is_contig_page() first to check if all the pages
   * in a range are contiguous.
@@ -1817,8 +1829,22 @@ static void init_contig_heap_pages(struct page_info *pg, 
unsigned long nr_pages,
while ( s < e )
      {
-        free_heap_pages(mfn_to_page(_mfn(s)), 0, need_scrub);
-        s += 1UL;
+        /*
+         * For s == 0, we simply use the largest increment by checking the
+         * index of the MSB set. For s != 0, we also need to ensure that the

"The MSB" reads as it it was not in line with the code; at least I would,
short of it being spelled out, assume it can only be the page's address
which is meant (as is the case for LSB below). But it's the MSB of the
range's size.

Indeed. I have reworded the comment to:

        /*
         * For s == 0, we simply use the largest increment by checking the
         * MSB of the region size. For s != 0, we also need to ensure that
         * the chunk is properly sized to end at power-of-two alignment. We
         * do this by checking the LSB of the start address and use its
         * index as the increment. Both cases need to be guarded by
         * MAX_ORDER.
         *
         * Note that the value of ffsl() and flsl() starts from 1 so we need
         * to decrement it by 1.
         */


+         * chunk is properly sized to end at power-of-two alignment. We do this
+         * by checking the LSB set and use its index as the increment. Both
+         * cases need to be guarded by MAX_ORDER.
+         *
+         * Note that the value of ffsl() and flsl() starts from 1 so we need
+         * to decrement it by 1.
+         */
+        int inc_order = min(MAX_ORDER, flsl(e - s) - 1);
+
+        if ( s )
+            inc_order = min(inc_order, ffsl(s) - 1);
+        free_heap_pages(mfn_to_page(_mfn(s)), inc_order, need_scrub);
+        s += (1UL << inc_order);
      }
  }
@@ -1856,12 +1882,13 @@ static void init_heap_pages(
      for ( i = 0; i < nr_pages; )
      {
          unsigned int nid = phys_to_nid(page_to_maddr(pg));
+        unsigned int zone = page_to_zone(pg);
          unsigned long left = nr_pages - i;
          unsigned long contig_pages;
          for ( contig_pages = 1; contig_pages < left; contig_pages++ )
          {
-            if ( !is_contig_page(pg + contig_pages, nid) )
+            if ( !is_contig_page(pg + contig_pages, nid, zone) )
                  break;
          }

Coming back to your reply to my comment on patch 1: I think this
addition of the node check is actually an argument for inlining the
function's body here (and then dropping the function). That way the
separate-Xen-heap aspect is visible at the place where it matters,
rather than requiring an indirection via looking at the helper
function (and leaving a dead parameter in the opposite case). But as
said - I'm not going to insist as long as the helper function has a
suitable name (representing what it does and not misguiding anyone
with the common "contig"-means-addresses implication).

I have followed your suggestion in patch #1:
  * is_contig_page() is now inlined
  * init_contig_heap_pages() was renamed to _init_heap_pages()

Cheers,

--
Julien Grall



 

