Re: [Xen-devel] [PATCH v5 4/8] mm: Scrub memory from idle loop
>>> Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx> 06/22/17 8:56 PM >>>
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -1019,15 +1019,85 @@ static int reserve_offlined_page(struct page_info *head)
> return count;
> }
>
> -static void scrub_free_pages(unsigned int node)
> +static nodemask_t node_scrubbing;
> +
> +/*
> + * If get_node is true this will return the closest node that needs to be scrubbed,
> + * with the appropriate bit in node_scrubbing set.
> + * If get_node is not set, this will return *a* node that needs to be scrubbed.
> + * The node_scrubbing bitmask will not be updated.
> + * If no node needs scrubbing then NUMA_NO_NODE is returned.
> + */
> +static unsigned int node_to_scrub(bool get_node)
> {
> - struct page_info *pg;
> - unsigned int zone;
> + nodeid_t node = cpu_to_node(smp_processor_id()), local_node;
> + nodeid_t closest = NUMA_NO_NODE;
> + u8 dist, shortest = 0xff;
>
> - ASSERT(spin_is_locked(&heap_lock));
> + if ( node == NUMA_NO_NODE )
> + node = 0;
>
> - if ( !node_need_scrub[node] )
> - return;
> + if ( node_need_scrub[node] &&
> + (!get_node || !node_test_and_set(node, node_scrubbing)) )
> + return node;
> +
> + /*
> + * See if there are memory-only nodes that need scrubbing and choose
> + * the closest one.
> + */
> + local_node = node;
> + for ( ; ; )
> + {
> + do {
> + node = cycle_node(node, node_online_map);
> + } while ( !cpumask_empty(&node_to_cpumask(node)) &&
> + (node != local_node) );
> +
> + if ( node == local_node )
> + break;
> +
> + /*
> + * Grab the node right away. If we find a closer node later we will
> + * release this one. While there is a chance that another CPU will
> + * not be able to scrub that node when it is searching for scrub work
> + * at the same time it will be able to do so next time it wakes up.
> + * The alternative would be to perform this search under a lock but
> + * then we'd need to take this lock every time we come in here.
> + */
I'm fine with your choice of simply explaining your decision, but ...
> + if ( node_need_scrub[node] )
> + {
> + if ( !get_node )
> + return node;
> +
> + dist = __node_distance(local_node, node);
> + if ( (dist < shortest || closest == NUMA_NO_NODE) &&
> + !node_test_and_set(node, node_scrubbing) )
... wouldn't the comment better be placed ahead of this if()?
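I.e. something along these lines (just a sketch, keeping your wording and
eliding the body of the inner if()):

    if ( node_need_scrub[node] )
    {
        if ( !get_node )
            return node;

        /*
         * Grab the node right away. If we find a closer node later we will
         * release this one. While there is a chance that another CPU will
         * not be able to scrub that node when it is searching for scrub
         * work at the same time, it will be able to do so next time it
         * wakes up. The alternative would be to perform this search under
         * a lock, but then we'd need to take this lock every time we come
         * in here.
         */
        dist = __node_distance(local_node, node);
        if ( (dist < shortest || closest == NUMA_NO_NODE) &&
             !node_test_and_set(node, node_scrubbing) )
        {
            /* ... as in your patch ... */
        }
    }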
> @@ -1050,17 +1120,42 @@ static void scrub_free_pages(unsigned int node)
> scrub_one_page(&pg[i]);
> pg[i].count_info &= ~PGC_need_scrub;
> node_need_scrub[node]--;
> + cnt += 100; /* scrubbed pages add heavier weight. */
> + }
> + else
> + cnt++;
> +
> + /*
> + * Scrub a few (8) pages before becoming eligible for
> + * preemption. But also count non-scrubbing loop iterations
> + * so that we don't get stuck here with an almost clean
> + * heap.
> + */
> + if ( cnt > 800 && softirq_pending(cpu) )
> + {
> + preempt = true;
> + break;
> }
> }
>
> - page_list_del(pg, &heap(node, zone, order));
> - page_list_add_scrub(pg, node, zone, order, INVALID_DIRTY_IDX);
> + if ( i >= (1U << order) - 1 )
> + {
> + page_list_del(pg, &heap(node, zone, order));
> + page_list_add_scrub(pg, node, zone, order, INVALID_DIRTY_IDX);
> + }
> + else
> + pg->u.free.first_dirty = i + 1;
>
> - if ( node_need_scrub[node] == 0 )
> - return;
> + if ( preempt || (node_need_scrub[node] == 0) )
> + goto out;
> }
> } while ( order-- != 0 );
> }
> +
> + out:
> + spin_unlock(&heap_lock);
> + node_clear(node, node_scrubbing);
> + return softirq_pending(cpu) || (node_to_scrub(false) != NUMA_NO_NODE);
While I can see why you use it here, the softirq_pending() check looks somewhat
misplaced: even though invoking it twice in the caller will look a little odd,
I still think that's where the check belongs.
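To illustrate, what I have in mind on the caller side is roughly this (a sketch
only; the exact context depends on the arch side of the series, and it assumes
scrub_free_pages() then merely reports whether further scrubbing work remains):

    /* In the idle loop: go idle only if no softirqs are pending either
     * before or after attempting to scrub, and no dirty pages remain. */
    else if ( !softirq_pending(cpu) && !scrub_free_pages() &&
              !softirq_pending(cpu) )
        pm_idle();

That way the softirq check lives entirely with the idle loop, and
scrub_free_pages() doesn't need to fold it into its return value.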
Jan