Re: [Xen-devel] [PATCH RESEND RFC 2/8] mm: Place unscrubbed pages at the end of pagelist
On 27/02/17 00:37, Boris Ostrovsky wrote:
> . so that it's easy to find pages that need to be scrubbed (those pages are
> now marked with _PGC_need_scrub bit).
>
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
> ---
>  xen/common/page_alloc.c  | 97 +++++++++++++++++++++++++++++++++++----------
>  xen/include/asm-x86/mm.h |  4 ++
>  2 files changed, 79 insertions(+), 22 deletions(-)
>
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 352eba9..653bb91 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -385,6 +385,8 @@ typedef struct page_list_head
>      heap_by_zone_and_order_t[NR_ZONES][MAX_ORDER+1];
>  static heap_by_zone_and_order_t *_heap[MAX_NUMNODES];
>  #define heap(node, zone, order) ((*_heap[node])[zone][order])
>
> +static unsigned node_need_scrub[MAX_NUMNODES];

This will overflow if there is 16TB of outstanding memory needing scrubbing
per node.  In the worst case, all available pages could be outstanding for
scrubbing, so the count should use the same type as the existing availability
counters.

> +
>  static unsigned long *avail[MAX_NUMNODES];
>  static long total_avail_pages;
>
> @@ -935,11 +942,16 @@ static bool_t can_merge(struct page_info *head, unsigned int node,
>           (phys_to_nid(page_to_maddr(head)) != node) )
>          return 0;
>
> +    if ( !!need_scrub ^
> +         !!test_bit(_PGC_need_scrub, &head->count_info) )
> +        return 0;
> +
>      return 1;
>  }
>
>  static void merge_chunks(struct page_info *pg, unsigned int node,
> -                         unsigned int zone, unsigned int order)
> +                         unsigned int zone, unsigned int order,
> +                         bool_t need_scrub)

Can't you calculate need_scrub from *pg rather than passing an extra
parameter?

>  {
>      ASSERT(spin_is_locked(&heap_lock));
>
> @@ -970,12 +982,49 @@ static void merge_chunks(struct page_info *pg, unsigned int node,
>      }
>
>      PFN_ORDER(pg) = order;
> -    page_list_add_tail(pg, &heap(node, zone, order));
> +    if ( need_scrub )
> +        page_list_add_tail(pg, &heap(node, zone, order));
> +    else
> +        page_list_add(pg, &heap(node, zone, order));
> +}
> +
> +static void scrub_free_pages(unsigned int node)
> +{
> +    struct page_info *pg;
> +    unsigned int i, zone;
> +    int order;
> +
> +    ASSERT(spin_is_locked(&heap_lock));
> +
> +    if ( !node_need_scrub[node] )
> +        return;
> +
> +    for ( zone = 0; zone < NR_ZONES; zone++ )
> +    {
> +        for ( order = MAX_ORDER; order >= 0; order-- )
> +        {
> +            while ( !page_list_empty(&heap(node, zone, order)) )
> +            {
> +                /* Unscrubbed pages are always at the end of the list. */
> +                pg = page_list_last(&heap(node, zone, order));
> +                if ( !test_bit(_PGC_need_scrub, &pg[0].count_info) )

&pg->count_info

> +                    break;
> +
> +                for ( i = 0; i < (1 << order); i++)

1U, and probably unsigned long.  Similarly later.

~Andrew
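For concreteness, a minimal sketch of how the points above could be folded
together (untested, written against the quoted hunks only; it assumes the
_PGC_need_scrub bit has already been set on the head page by the time
merge_chunks() runs, so the flag can be derived from *pg rather than
threaded through as an extra parameter):

    /*
     * Counter sized like avail[] / total_avail_pages, so it cannot
     * overflow even when every free page on the node awaits scrubbing.
     */
    static unsigned long node_need_scrub[MAX_NUMNODES];

    static void merge_chunks(struct page_info *pg, unsigned int node,
                             unsigned int zone, unsigned int order)
    {
        /* Derive the scrub state from the page itself. */
        bool_t need_scrub = !!test_bit(_PGC_need_scrub, &pg->count_info);

        ASSERT(spin_is_locked(&heap_lock));

        /* ... merging loop unchanged, passing need_scrub to can_merge() ... */

        PFN_ORDER(pg) = order;
        if ( need_scrub )
            page_list_add_tail(pg, &heap(node, zone, order));
        else
            page_list_add(pg, &heap(node, zone, order));
    }

and in scrub_free_pages(), widening the loop counter and using 1UL keeps the
shift well-typed for any order (scrub_one_page() standing in for whatever
per-page scrub the full patch performs):

    unsigned long i;
    ...
    for ( i = 0; i < (1UL << order); i++ )
        scrub_one_page(&pg[i]);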