Re: [PATCH RFC 3/4] mm/page_alloc: always move pages to the tail of the freelist in unset_migratetype_isolate()
On 25.09.20 04:45, Wei Yang wrote:
> On Thu, Sep 24, 2020 at 01:13:29PM +0200, Vlastimil Babka wrote:
>> On 9/16/20 8:34 PM, David Hildenbrand wrote:
>>> Page isolation doesn't actually touch the pages, it simply isolates
>>> pageblocks and moves all free pages to the MIGRATE_ISOLATE freelist.
>>>
>>> We already place pages to the tail of the freelists when undoing
>>> isolation via __putback_isolated_page(), let's do it in any case
>>> (e.g., if order == pageblock_order) and document the behavior.
>>>
>>> This change results in all pages getting onlined via online_pages() to
>>> be placed to the tail of the freelist.
>>>
>>> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
>>> Cc: Alexander Duyck <alexander.h.duyck@xxxxxxxxxxxxxxx>
>>> Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
>>> Cc: Michal Hocko <mhocko@xxxxxxxxxx>
>>> Cc: Dave Hansen <dave.hansen@xxxxxxxxx>
>>> Cc: Vlastimil Babka <vbabka@xxxxxxx>
>>> Cc: Wei Yang <richard.weiyang@xxxxxxxxxxxxxxxxx>
>>> Cc: Oscar Salvador <osalvador@xxxxxxx>
>>> Cc: Mike Rapoport <rppt@xxxxxxxxxx>
>>> Cc: Scott Cheloha <cheloha@xxxxxxxxxxxxx>
>>> Cc: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
>>> Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
>>> ---
>>>  include/linux/page-isolation.h |  2 ++
>>>  mm/page_alloc.c                | 36 +++++++++++++++++++++++++++++-----
>>>  mm/page_isolation.c            |  8 ++++++--
>>>  3 files changed, 39 insertions(+), 7 deletions(-)
>>>
>>> diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
>>> index 572458016331..a36be2cf4dbb 100644
>>> --- a/include/linux/page-isolation.h
>>> +++ b/include/linux/page-isolation.h
>>> @@ -38,6 +38,8 @@ struct page *has_unmovable_pages(struct zone *zone, struct page *page,
>>>  void set_pageblock_migratetype(struct page *page, int migratetype);
>>>  int move_freepages_block(struct zone *zone, struct page *page,
>>>  				int migratetype, int *num_movable);
>>> +int move_freepages_block_tail(struct zone *zone, struct page *page,
>>> +			      int migratetype);
>>>
>>>  /*
>>>   * Changes migrate type in [start_pfn, end_pfn) to be MIGRATE_ISOLATE.
>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>> index bba9a0f60c70..75b0f49b4022 100644
>>> --- a/mm/page_alloc.c
>>> +++ b/mm/page_alloc.c
>>> @@ -899,6 +899,15 @@ static inline void move_to_free_list(struct page *page, struct zone *zone,
>>>  	list_move(&page->lru, &area->free_list[migratetype]);
>>>  }
>>>
>>> +/* Used for pages which are on another list */
>>> +static inline void move_to_free_list_tail(struct page *page, struct zone *zone,
>>> +					  unsigned int order, int migratetype)
>>> +{
>>> +	struct free_area *area = &zone->free_area[order];
>>> +
>>> +	list_move_tail(&page->lru, &area->free_list[migratetype]);
>>> +}
>>
>> There are just 3 callers of move_to_free_list() before this patch, I would
>> just add the to_tail parameter there instead of a new wrapper. For callers
>> with a constant parameter, the inline will eliminate it anyway.
>
> Got the same feeling :-)

I once was told boolean parameters are the root of all evil, so I tried
to keep them file-local :)

One thing to be aware of is that inline optimizations won't help as long
as this function is in mm/page_alloc.c, see below.
>
>>
>>>  static inline void del_page_from_free_list(struct page *page, struct zone *zone,
>>>  					   unsigned int order)
>>>  {
>>> @@ -2323,7 +2332,7 @@ static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
>>>   */
>>>  static int move_freepages(struct zone *zone,
>>>  			  struct page *start_page, struct page *end_page,
>>> -			  int migratetype, int *num_movable)
>>> +			  int migratetype, int *num_movable, bool to_tail)
>>>  {
>>>  	struct page *page;
>>>  	unsigned int order;
>>> @@ -2354,7 +2363,10 @@ static int move_freepages(struct zone *zone,
>>>  		VM_BUG_ON_PAGE(page_zone(page) != zone, page);
>>>
>>>  		order = page_order(page);
>>> -		move_to_free_list(page, zone, order, migratetype);
>>> +		if (to_tail)
>>> +			move_to_free_list_tail(page, zone, order, migratetype);
>>> +		else
>>> +			move_to_free_list(page, zone, order, migratetype);
>>>  		page += 1 << order;
>>>  		pages_moved += 1 << order;
>>>  	}
>>> @@ -2362,8 +2374,9 @@ static int move_freepages(struct zone *zone,
>>>  	return pages_moved;
>>>  }
>>>
>>> -int move_freepages_block(struct zone *zone, struct page *page,
>>> -			 int migratetype, int *num_movable)
>>> +static int __move_freepages_block(struct zone *zone, struct page *page,
>>> +				  int migratetype, int *num_movable,
>>> +				  bool to_tail)
>>>  {
>>>  	unsigned long start_pfn, end_pfn;
>>>  	struct page *start_page, *end_page;
>>> @@ -2384,7 +2397,20 @@ int move_freepages_block(struct zone *zone, struct page *page,
>>>  		return 0;
>>>
>>>  	return move_freepages(zone, start_page, end_page, migratetype,
>>> -				num_movable);
>>> +			      num_movable, to_tail);
>>> +}
>>> +
>>> +int move_freepages_block(struct zone *zone, struct page *page,
>>> +			 int migratetype, int *num_movable)
>>> +{
>>> +	return __move_freepages_block(zone, page, migratetype, num_movable,
>>> +				      false);
>>> +}
>>> +
>>> +int move_freepages_block_tail(struct zone *zone, struct page *page,
>>> +			      int migratetype)
>>> +{
>>> +	return __move_freepages_block(zone, page, migratetype, NULL, true);
>>> +}
>>
>> Likewise,
>> just 5 callers of move_freepages_block(), all in the files you're
>> already changing, so no need for these wrappers IMHO.

As long as we don't want to move the implementation to the header, we'll
need it for the constant propagation to work at compile time (we don't
really have link-time optimizations). Or am I missing something?

Thanks!

-- 
Thanks,

David / dhildenb