
Re: [Xen-devel] [PATCH v1 3/9] mm: Scrub pages in alloc_heap_pages() if needed



On Fri, Mar 24, 2017 at 01:04:58PM -0400, Boris Ostrovsky wrote:
> When allocating pages in alloc_heap_pages(), first look for clean pages. If
> none are found, retry, taking pages marked as unscrubbed and scrubbing them.
> 
> Note that we shouldn't find unscrubbed pages in alloc_heap_pages() yet.
> However, this will become possible when we stop scrubbing from
> free_heap_pages() and instead do it from the idle loop.
> 
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>

Again, s/bool_t/bool/.

>   found: 
> +    need_scrub = !!test_bit(_PGC_need_scrub, &pg->count_info);
> +
>      /* We may have to halve the chunk a number of times. */
>      while ( j != order )
>      {
>          PFN_ORDER(pg) = --j;
> -        page_list_add(pg, &heap(node, zone, j));
> +        if ( need_scrub )
> +        {
> +            pg->count_info |= PGC_need_scrub;
> +            page_list_add_tail(pg, &heap(node, zone, j));
> +        }
> +        else
> +            page_list_add(pg, &heap(node, zone, j));

This is getting repetitive. Please consider adding a function.


    /* Pages that need scrubbing are added at the tail, others at the head. */
    static void add_to_page_list(struct page_info *pg, unsigned int node,
                                 unsigned int zone, unsigned int order,
                                 bool need_scrub)
    {
        if ( need_scrub )
        {
            pg->count_info |= PGC_need_scrub;
            page_list_add_tail(pg, &heap(node, zone, order));
        }
        else
            page_list_add(pg, &heap(node, zone, order));
    }

It might be more appropriate to add it in the previous patch and replace all
plain page_list_add{,_tail} calls with it.

>          pg += 1 << j;
>      }
> +    if ( need_scrub )
> +        pg->count_info |= PGC_need_scrub;
>  
>      ASSERT(avail[node][zone] >= request);
>      avail[node][zone] -= request;
> @@ -823,6 +859,15 @@ static struct page_info *alloc_heap_pages(
>      if ( d != NULL )
>          d->last_alloc_node = node;
>  
> +    if ( need_scrub )
> +    {
> +        for ( i = 0; i < (1 << order); i++ )
> +            scrub_one_page(&pg[i]);
> +        pg->count_info &= ~PGC_need_scrub;
> +        node_need_scrub[node] -= (1 << order);
> +    }
> +
> +
>      for ( i = 0; i < (1 << order); i++ )
>      {
>          /* Reference count must continuously be zero for free pages. */
> -- 
> 1.7.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

