
Re: [Xen-devel] [PATCH v3] Xen: Spread boot time page scrubbing across all available CPU's



. snip.. [agreed to the comments]
> > -        spin_lock(&heap_lock);
> > +        region[i].rem = rem;
> > +        region[i].per_cpu_sz /= cpumask_weight(&all_worker_cpus);
> > +        max_per_cpu_sz = region[i].per_cpu_sz;
> > +        if ( max_per_cpu_sz && chunk_size > max_per_cpu_sz )
> > +            chunk_size = max_per_cpu_sz;
> 
> So you may end up limiting chunk size further on each iteration.
> That's surely not very efficient.

Tim also raised this. I think a better option would be to stash the
'chunk_size' in the 'region' and have each NODE worker select the most
optimal value.

That will require the loop that kicks off all the CPUs to no longer
increment 'chunk' by the 'chunk_size'

 chunk = 0; chunk < max_per_cpu_sz; chunk += chunk_size
                                             ^^^^^^^^^^

but on a different value. Perhaps just make it a max iteration value:

 j = 0; j < max_per_cpu_sz / opt_bootscrub_chunk; j++

and each NODE worker will be given the 'j' value as its 'offset' and can
figure out whether:

   if ( r->start + r->offset * r->per_cpu_sz >= r->end )
                return;
   else
                do the work

(and we would add 'end' to the new region).
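
Roughly something like this (just a sketch, not the patch code: the
function and struct names are made up, it glosses over how each CPU
picks its own sub-slice of the node, and it assumes 'end' and
'chunk_size' have been stashed in the region as described above):

 static void scrub_node_chunk(const struct scrub_region *r)
 {
     unsigned long mfn, first, last;

     /* Past the end of this node's memory? Then this round is a no-op. */
     first = r->start + r->offset * r->per_cpu_sz;
     if ( first >= r->end )
         return;

     /* Otherwise scrub one chunk, clipped to the end of the node. */
     last = min(first + r->chunk_size, r->end);
     for ( mfn = first; mfn < last; mfn++ )
         if ( page_state_is(mfn_to_page(mfn), free) )
             scrub_one_page(mfn_to_page(mfn));
 }
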
> 
> > +        cpumask_copy(&region[i].cpu, &all_worker_cpus);
> >  
> > -        /* Re-check page status with lock held. */
> > -        if ( page_state_is(pg, free) )
> > -            scrub_one_page(pg);
> > +        for ( offset = 0; offset < max_per_cpu_sz; offset += chunk_size )
> > +        {
> > +            region[i].offset = offset;
> >  
> > -        spin_unlock(&heap_lock);
> > -    }
> > +            process_pending_softirqs();
> > +
> > +            spin_lock(&heap_lock);
> > +            on_selected_cpus(&all_worker_cpus, smp_scrub_heap_pages, &region[i], 1);
> 
> So in round 2 you're telling all CPUs to scrub one node's memory. That
> ought to be particularly inefficient when you have relatively many
> CPUs and relatively little memory on the node. In the hope that nodes

Aha! I knew I missed one case.

> without CPUs aren't that common (or are relatively symmetric in terms
> of their count relative to the count of nodes with CPUs), wouldn't it be
> better (and yielding simpler code) to have each CPU scrub one such
> node in its entirety (ultimately, if this turns out to be a more common
> case, node distance could even be taken into consideration when
> picking which CPU does what).

That would work too. The non-CPU-full-NUMA case is an oddity that I have only
reproduced by using 'numa=fake=8' on a single-socket machine. In that case
the 1st node has all of the CPUs and the rest have none. And it made it faster
to just use all of the available CPU cores on the rest of the nodes.

Perhaps a check:

        (end - start)  < opt_bootscrub_chunk * 2

If the amount of memory per node is less than twice the bootscrub chunk size
(assuming it is in bytes), then use just one CPU.

But if it is larger, then split _all_worker_cpus_ evenly across the CPU-less
nodes and have them work in lock-step? Something like this:

 node = 0;
 non_cpu_nodes = 0;
 for_each_online_node(node)
 {
        /* Count the nodes that have no CPUs of their own. */
        if ( cpumask_empty(&node_to_cpumask(node)) )
                non_cpu_nodes++;
 }
 cnt = cpumask_weight(&all_worker_cpus) / non_cpu_nodes;
 for ( node = 0, i = 0; i < cpumask_weight(&all_worker_cpus); i += cnt, node++ )
 {
        cpumask_clear(&r->cpus[node]);
        /* Take care to deal with SMT threads here. */
        for ( j = i; j < cnt + i; j++ )
                cpumask_set_cpu(j, &r->cpus[node]);
 }

then we would have just a subset of CPUs hammering on the non-CPU-nodes.
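
Putting the two cases together, the per-node decision could look roughly
like this (a sketch only; 'first_worker' is made up for illustration,
while 'cnt' and 'r' are from the sketch above):

 /* For a node without CPUs of its own, with [start, end) as its memory: */
 if ( (end - start) < opt_bootscrub_chunk * 2 )
 {
     /* Too little memory to be worth splitting: one worker scrubs it all. */
     cpumask_clear(&r->cpus[node]);
     cpumask_set_cpu(first_worker, &r->cpus[node]);
 }
 /* else: keep the 'cnt' workers assigned in the loop above and have them
  * split the node in lock-step, i.e. per_cpu_sz = (end - start) / cnt. */
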


P.S.
I don't know of any real-life scenarios where there are NUMA nodes without
CPUs.

> 
> Jan
