[PATCH 1/2] xen/heap: Split init_heap_pages() in two
From: Julien Grall <jgrall@xxxxxxxxxx>

At the moment, init_heap_pages() will call free_heap_pages() page by
page. To reduce the time to initialize the heap, we will want to
provide multiple pages at the same time.

init_heap_pages() is now split in two parts:
    - init_heap_pages(): will break down the range in multiple sets
      of contiguous pages. For now, the criterion is that the pages
      should belong to the same NUMA node.
    - init_contig_heap_pages(): will initialize a set of contiguous
      pages. For now the pages are still passed one by one to
      free_heap_pages().

Note that the comment before init_heap_pages() is heavily outdated and
does not reflect the current code. So update it.

This patch is a merge/rework of patches from David Woodhouse and
Hongyan Xia.

Signed-off-by: Julien Grall <jgrall@xxxxxxxxxx>

---

    Interestingly, I was expecting this patch to perform worse.
    However, from testing there is a small increase in performance.
    That said, I split the patch because it keeps the refactoring and
    the optimization separated.
---
 xen/common/page_alloc.c | 82 +++++++++++++++++++++++++++--------------
 1 file changed, 55 insertions(+), 27 deletions(-)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 3e6504283f1e..a1938df1406c 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1778,16 +1778,55 @@ int query_page_offline(mfn_t mfn, uint32_t *status)
 }
 
 /*
- * Hand the specified arbitrary page range to the specified heap zone
- * checking the node_id of the previous page. If they differ and the
- * latter is not on a MAX_ORDER boundary, then we reserve the page by
- * not freeing it to the buddy allocator.
+ * init_contig_heap_pages() is intended to only take pages from the same
+ * NUMA node.
  */
+static bool is_contig_page(struct page_info *pg, unsigned int nid)
+{
+    return (nid == (phys_to_nid(page_to_maddr(pg))));
+}
+
+/*
+ * This function should only be called with valid pages from the same NUMA
+ * node.
+ *
+ * Callers should use is_contig_page() first to check if all the pages
+ * in a range are contiguous.
+ */
+static void init_contig_heap_pages(struct page_info *pg, unsigned long nr_pages,
+                                   bool need_scrub)
+{
+    unsigned long s, e;
+    unsigned int nid = phys_to_nid(page_to_maddr(pg));
+
+    s = mfn_x(page_to_mfn(pg));
+    e = mfn_x(mfn_add(page_to_mfn(pg + nr_pages - 1), 1));
+
+    if ( unlikely(!avail[nid]) )
+    {
+        bool use_tail = !(s & ((1UL << MAX_ORDER) - 1)) &&
+                        (find_first_set_bit(e) <= find_first_set_bit(s));
+        unsigned long n;
+
+        n = init_node_heap(nid, s, nr_pages, &use_tail);
+        BUG_ON(n > nr_pages);
+        if ( use_tail )
+            e -= n;
+        else
+            s += n;
+    }
+
+    while ( s < e )
+    {
+        free_heap_pages(mfn_to_page(_mfn(s)), 0, need_scrub);
+        s += 1UL;
+    }
+}
+
 static void init_heap_pages(
     struct page_info *pg, unsigned long nr_pages)
 {
     unsigned long i;
-    bool idle_scrub = false;
+    bool need_scrub = scrub_debug;
 
     /*
      * Keep MFN 0 away from the buddy allocator to avoid crossing zone
@@ -1812,35 +1851,24 @@ static void init_heap_pages(
     spin_unlock(&heap_lock);
 
     if ( system_state < SYS_STATE_active && opt_bootscrub == BOOTSCRUB_IDLE )
-        idle_scrub = true;
+        need_scrub = true;
 
-    for ( i = 0; i < nr_pages; i++ )
+    for ( i = 0; i < nr_pages; )
     {
-        unsigned int nid = phys_to_nid(page_to_maddr(pg+i));
+        unsigned int nid = phys_to_nid(page_to_maddr(pg));
+        unsigned long left = nr_pages - i;
+        unsigned long contig_pages;
 
-        if ( unlikely(!avail[nid]) )
+        for ( contig_pages = 1; contig_pages < left; contig_pages++ )
         {
-            unsigned long s = mfn_x(page_to_mfn(pg + i));
-            unsigned long e = mfn_x(mfn_add(page_to_mfn(pg + nr_pages - 1), 1));
-            bool use_tail = (nid == phys_to_nid(pfn_to_paddr(e - 1))) &&
-                            !(s & ((1UL << MAX_ORDER) - 1)) &&
-                            (find_first_set_bit(e) <= find_first_set_bit(s));
-            unsigned long n;
-
-            n = init_node_heap(nid, mfn_x(page_to_mfn(pg + i)), nr_pages - i,
-                               &use_tail);
-            BUG_ON(i + n > nr_pages);
-            if ( n && !use_tail )
-            {
-                i += n - 1;
-                continue;
-            }
-            if ( i + n == nr_pages )
+            if ( !is_contig_page(pg + contig_pages, nid) )
                 break;
-            nr_pages -= n;
         }
 
-        free_heap_pages(pg + i, 0, scrub_debug || idle_scrub);
+        init_contig_heap_pages(pg, contig_pages, need_scrub);
+
+        pg += contig_pages;
+        i += contig_pages;
    }
 }
-- 
2.32.0
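For readers following the patch, the new control flow in init_heap_pages() can be illustrated with a standalone sketch. This is a hedged, self-contained approximation, not Xen code: fake_nid() and count_contig_chunks() are made-up stand-ins for phys_to_nid() and the in-function chunking loop, used only to show how a range is broken into maximal same-node runs.

```c
#include <assert.h>

/*
 * Hypothetical stand-in for phys_to_nid(): maps a page frame number to a
 * NUMA node. Here, frames below 100 are node 0 and the rest are node 1.
 */
static unsigned int fake_nid(unsigned long pfn)
{
    return pfn < 100 ? 0 : 1;
}

/*
 * Mirrors the chunking loop the patch adds to init_heap_pages(): walk
 * [start, start + nr) and count how many maximal runs of same-node pages
 * the range splits into. The real code would hand each run to
 * init_contig_heap_pages() instead of just counting it.
 */
static unsigned int count_contig_chunks(unsigned long start, unsigned long nr)
{
    unsigned int chunks = 0;
    unsigned long i = 0;

    while ( i < nr )
    {
        unsigned int nid = fake_nid(start + i);
        unsigned long contig = 1;

        /* Extend the run while the next page is on the same node. */
        while ( contig < nr - i && fake_nid(start + i + contig) == nid )
            contig++;

        chunks++;
        i += contig;
    }

    return chunks;
}
```

With this model, a range straddling the node boundary at frame 100, such as [90, 110), is split into two chunks, while a range wholly inside one node stays a single chunk — which is exactly why the patch can later batch whole chunks into the allocator instead of freeing page by page.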