Re: [PATCH RFC 2/4] mm/page_alloc: place pages to tail in __putback_isolated_page()
On 24.09.20 12:37, Vlastimil Babka wrote:
> On 9/16/20 8:34 PM, David Hildenbrand wrote:
>> __putback_isolated_page() already documents that pages will be placed to
>> the tail of the freelist - this is, however, not the case for
>> "order >= MAX_ORDER - 2" (see buddy_merge_likely()) - which should be
>> the case for all existing users.
> 
> I think here should be a sentence saying something along "Thus this patch
> introduces a FOP_TO_TAIL flag to really ensure moving pages to tail."

Agreed, thanks!

> 
>> This change affects two users:
>> - free page reporting
>> - page isolation, when undoing the isolation.
>> 
>> This behavior is desirable for pages that haven't really been touched
>> lately, so exactly the two users that don't actually read/write page
>> content, but rather move untouched pages.
>> 
>> The new behavior is especially desirable for memory onlining, where we
>> allow allocation of newly onlined pages via undo_isolate_page_range()
>> in online_pages(). Right now, we always place them at the head of the
>> free list, resulting in undesirable behavior: assume we add individual
>> memory chunks via add_memory() and online them right away to the NORMAL
>> zone. We create a dependency chain of unmovable allocations, e.g., via
>> the memmap. The memmap of the next chunk will be placed onto previous
>> chunks - if the last block cannot get offlined+removed, all dependent
>> ones cannot get offlined+removed. While this can already be observed
>> with individual DIMMs, it's more of an issue for virtio-mem (and I
>> suspect also ppc DLPAR).
>> 
>> Note: If we observe a degradation due to the changed page isolation
>> behavior (which I doubt), we can always make this configurable by the
>> instance triggering undo of isolation (e.g., alloc_contig_range(),
>> memory onlining, memory offlining).
>> 
>> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
>> Cc: Alexander Duyck <alexander.h.duyck@xxxxxxxxxxxxxxx>
>> Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
>> Cc: Michal Hocko <mhocko@xxxxxxxxxx>
>> Cc: Dave Hansen <dave.hansen@xxxxxxxxx>
>> Cc: Vlastimil Babka <vbabka@xxxxxxx>
>> Cc: Wei Yang <richard.weiyang@xxxxxxxxxxxxxxxxx>
>> Cc: Oscar Salvador <osalvador@xxxxxxx>
>> Cc: Mike Rapoport <rppt@xxxxxxxxxx>
>> Cc: Scott Cheloha <cheloha@xxxxxxxxxxxxx>
>> Cc: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
>> Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
>> ---
>>  mm/page_alloc.c | 10 +++++++++-
>>  1 file changed, 9 insertions(+), 1 deletion(-)
>> 
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 91cefb8157dd..bba9a0f60c70 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -89,6 +89,12 @@ typedef int __bitwise fop_t;
>>   */
>>  #define FOP_SKIP_REPORT_NOTIFY	((__force fop_t)BIT(0))
>> 
>> +/*
>> + * Place the freed page to the tail of the freelist after buddy merging. Will
>> + * get ignored with page shuffling enabled.
>> + */
>> +#define FOP_TO_TAIL		((__force fop_t)BIT(1))
>> +
>>  /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
>>  static DEFINE_MUTEX(pcp_batch_high_lock);
>>  #define MIN_PERCPU_PAGELIST_FRACTION	(8)
>> @@ -1040,6 +1046,8 @@ static inline void __free_one_page(struct page *page, unsigned long pfn,
>> 
>>  	if (is_shuffle_order(order))
>>  		to_tail = shuffle_pick_tail();
>> +	else if (fop_flags & FOP_TO_TAIL)
>> +		to_tail = true;
> 
> Should we really let random shuffling decision have a larger priority than
> explicit FOP_TO_TAIL request?
> Wei Yang mentioned that there's a call to shuffle_zone() anyway to process
> freshly added memory, so we don't need to do that also during the process
> of addition itself? Might help with your goal of reducing dependencies
> even on systems that do have shuffling enabled?

So, we do have cases where generic_online_page() -> __free_pages_core()
isn't followed by a shuffle_zone() call (see patch #4): generic_online_page()
is used in two cases:

1. Direct memory onlining in online_pages(). Here, we call shuffle_zone().
2. Deferred memory onlining in memory-ballooning-like mechanisms (HyperV
   balloon and virtio-mem), when parts of a section are kept fake-offline
   to be fake-onlined later on.

While we shuffle the whole zone in the first case, we wouldn't shuffle in
the second case. But maybe this should be tackled (just like when freeing a
large contiguous range that was allocated via alloc_contig_range(), when
memory offlining fails, or when alloc_contig_range() itself fails) by
manually shuffling the zone again. That would be cleaner, and the right
thing to do when exposing large, contiguous ranges again to the buddy.

Thanks!

-- 
Thanks,

David / dhildenb
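
For context, the caller-side change promised by the subject line (placing
pages to tail in __putback_isolated_page()) is not part of the hunk quoted
above. Presumably it boils down to passing the new flag when the isolated
page is handed back to the buddy; the following is only a rough sketch,
using the RFC's FOP_* naming and the current shape of
__putback_isolated_page(), not the actual hunk from the series:

/* Sketch only: request tail placement when undoing isolation. */
void __putback_isolated_page(struct page *page, unsigned int order, int mt)
{
	struct zone *zone = page_zone(page);

	/* zone lock should be held when this function is called */
	lockdep_assert_held(&zone->lock);

	/* Return isolated page to tail of freelist. */
	__free_one_page(page, page_to_pfn(page), zone, order, mt,
			FOP_SKIP_REPORT_NOTIFY | FOP_TO_TAIL);
}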
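
As for the precedence question: one way to let the explicit request win
over the randomized heuristic would be to test FOP_TO_TAIL first in
__free_one_page(). A minimal sketch of that alternative ordering, assuming
the to_tail logic shown in the quoted hunk (with the existing
buddy_merge_likely() fallback):

	/* Give an explicit FOP_TO_TAIL request priority over shuffling. */
	if (fop_flags & FOP_TO_TAIL)
		to_tail = true;
	else if (is_shuffle_order(order))
		to_tail = shuffle_pick_tail();
	else
		to_tail = buddy_merge_likely(pfn, buddy_pfn, page, order);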
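
And the "manually shuffling the zone again" idea could look roughly like
the sketch below. The helper name is invented for illustration,
shuffle_zone() is currently mm-internal, and alignment/locking details are
ignored; the point is only to mirror what online_pages() already does after
exposing a large range to the buddy:

/*
 * Hypothetical helper (name made up): expose a large, contiguous,
 * fake-offline range to the buddy and re-randomize the zone afterwards,
 * since the free path itself won't shuffle for us.
 */
static void fake_online_contig_range(unsigned long pfn, unsigned long nr_pages)
{
	struct zone *zone = page_zone(pfn_to_page(pfn));
	unsigned long i;

	/* Free the range in MAX_ORDER - 1 chunks, as deferred onlining does. */
	for (i = 0; i < nr_pages; i += MAX_ORDER_NR_PAGES)
		generic_online_page(pfn_to_page(pfn + i), MAX_ORDER - 1);

	/* Re-shuffle the zone, like online_pages() does after direct onlining. */
	shuffle_zone(zone);
}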