[Xen-devel] [RFC PATCH] page_alloc: use first half of higher order chunks when halving
From: Matt Rushton <mrushton@xxxxxxxxxx>

This patch makes the Xen heap allocator use the first half of higher
order chunks instead of the second half when breaking them down for
smaller order allocations.

Linux currently remaps the memory overlapping PCI space one page at a
time. Before this change, this resulted in the MFNs being allocated in
reverse order and led to discontiguous dom0 memory. This in turn forced
dom0 to use bounce buffers for DMA and resulted in poor performance.

This change more gracefully handles the dom0 use case and returns
contiguous memory for subsequent allocations.

Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx
Cc: Keir Fraser <keir@xxxxxxx>
Cc: Jan Beulich <jbeulich@xxxxxxxx>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Cc: Tim Deegan <tim@xxxxxxx>
Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Signed-off-by: Matt Rushton <mrushton@xxxxxxxxxx>
Signed-off-by: Matt Wilson <msw@xxxxxxxxxx>
---
 xen/common/page_alloc.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 601319c..27e7f18 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -677,9 +677,10 @@ static struct page_info *alloc_heap_pages(
     /* We may have to halve the chunk a number of times. */
     while ( j != order )
     {
-        PFN_ORDER(pg) = --j;
-        page_list_add_tail(pg, &heap(node, zone, j));
-        pg += 1 << j;
+        struct page_info *pg2;
+        pg2 = pg + (1 << --j);
+        PFN_ORDER(pg2) = j;
+        page_list_add_tail(pg2, &heap(node, zone, j));
     }

     ASSERT(avail[node][zone] >= request);
--
1.7.9.5
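
For illustration, here is a minimal standalone sketch, not Xen code, of
why the direction of the split matters. It mimics the halving loop of
alloc_heap_pages() over a toy buddy free list, assuming pop-from-head /
append-to-tail list behaviour as with page_list_remove_head() and
page_list_add_tail(); every name in it (toy_alloc, free_list, push, pop,
the PFN values) is invented for the example.

    /*
     * Standalone sketch, not Xen code: a toy buddy free list that mimics
     * the halving loop in alloc_heap_pages().  It shows why freeing the
     * second half of a split chunk (and allocating from the first) makes
     * successive order-0 allocations return ascending, contiguous PFNs.
     */
    #include <stdio.h>

    #define MAX_ORDER 4
    #define LIST_LEN  16

    /* One FIFO free list per order; entries are starting PFNs. */
    static unsigned long free_list[MAX_ORDER + 1][LIST_LEN];
    static unsigned int free_cnt[MAX_ORDER + 1];

    static void push(unsigned int order, unsigned long pfn)
    {
        free_list[order][free_cnt[order]++] = pfn;   /* add at tail */
    }

    static unsigned long pop(unsigned int order)
    {
        unsigned long pfn = free_list[order][0];     /* remove from head */
        unsigned int i;

        for ( i = 1; i < free_cnt[order]; i++ )
            free_list[order][i - 1] = free_list[order][i];
        free_cnt[order]--;
        return pfn;
    }

    /* Allocate a 2^order chunk; keep_first selects the patched behaviour. */
    static unsigned long toy_alloc(unsigned int order, int keep_first)
    {
        unsigned int j = order;
        unsigned long pfn;

        /* Find the smallest free chunk that satisfies the request. */
        while ( j <= MAX_ORDER && !free_cnt[j] )
            j++;
        pfn = pop(j);

        /* The halving loop from alloc_heap_pages(). */
        while ( j != order )
        {
            --j;
            if ( keep_first )
                push(j, pfn + (1UL << j)); /* new: free the second half */
            else
            {
                push(j, pfn);              /* old: free the first half... */
                pfn += 1UL << j;           /* ...allocate from the second */
            }
        }
        return pfn;
    }

    int main(void)
    {
        int keep_first, i;

        for ( keep_first = 0; keep_first <= 1; keep_first++ )
        {
            free_cnt[0] = free_cnt[1] = free_cnt[2] = 0;
            push(2, 0x100);              /* seed: one free order-2 chunk */
            printf("%s:", keep_first ? "new (first half)" : "old (second half)");
            for ( i = 0; i < 4; i++ )
                printf(" %#lx", toy_alloc(0, keep_first));
            printf("\n");
        }
        return 0;
    }

With the old behaviour, four order-0 allocations carved from one free
order-2 chunk at PFN 0x100 come back as 0x103, 0x102, 0x101, 0x100;
with the patched behaviour they come back as 0x100, 0x101, 0x102,
0x103. That is why a page-at-a-time remap in dom0 now sees contiguous,
ascending MFNs instead of a reversed sequence.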