Re: [Xen-devel] [PATCH v3 08/14] xen: derive NUMA node affinity from hard and soft CPU affinity
On 11/18/2013 06:17 PM, Dario Faggioli wrote:

If a domain's NUMA node-affinity (which is what controls memory
allocations) is provided by the user/toolstack, it is simply left
untouched. However, if the user does not say anything, leaving it all
to Xen, let's compute it in the following way:

 1. cpupool's cpus & hard-affinity & soft-affinity
 2. if (1) is empty: cpupool's cpus & hard-affinity

This guarantees that memory is allocated from the narrowest possible
set of NUMA nodes, and makes it relatively easy to set up NUMA-aware
scheduling on top of soft affinity.

Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
---
Changes from v2:
 * the loop computing the mask is now only executed when it really is
   useful, as suggested during review;
 * the loop, and all the cpumask handling, is optimized in a way
   similar to what was suggested during review.
---
 xen/common/domain.c | 62 +++++++++++++++++++++++++++++++++------------------
 1 file changed, 40 insertions(+), 22 deletions(-)
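As a side note before the patch itself, here is a minimal standalone C
sketch of the same two-step fallback described above, using plain 64-bit
masks instead of Xen's cpumask_t. Everything in it (cpu_to_node(), the
4-cpus-per-node topology, the hard/soft/online masks) is a made-up
stand-in, with hard and soft taken as the already-computed unions over
all the domain's vcpus:

#include <stdint.h>
#include <stdio.h>

#define CPUS_PER_NODE 4  /* assumed topology: 4 pcpus per NUMA node */

/* Hypothetical stand-in for the hypervisor's cpu-to-node lookup. */
static unsigned int cpu_to_node(unsigned int cpu)
{
    return cpu / CPUS_PER_NODE;
}

/*
 * hard and soft are the unions of the per-vcpu hard/soft affinity
 * masks; online is the cpupool's online mask. Returns the mask of
 * nodes spanned by online & hard & soft or, if that intersection is
 * empty, the mask of nodes spanned by online & hard.
 */
static uint64_t node_affinity(uint64_t online, uint64_t hard, uint64_t soft)
{
    uint64_t cpus = hard & online;       /* where the domain can run */
    uint64_t preferred = cpus & soft;    /* ... and prefers to run */
    uint64_t pick = preferred ? preferred : cpus;
    uint64_t nodes = 0;
    unsigned int cpu;

    for ( cpu = 0; cpu < 64; cpu++ )
        if ( pick & (1ULL << cpu) )
            nodes |= 1ULL << cpu_to_node(cpu);
    return nodes;
}

int main(void)
{
    /*
     * 8 online pcpus (nodes 0-1); hard affinity on cpus 0-5, soft
     * affinity on cpus 4-7. The three-way intersection {4,5} is
     * non-empty, so only node 1 is picked (prints "node mask: 0x2");
     * with soft = 0x300 (offline cpus only), it would fall back to
     * hard & online and pick nodes 0 and 1.
     */
    printf("node mask: %#llx\n",
           (unsigned long long)node_affinity(0xff, 0x3f, 0xf0));
    return 0;
}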
diff --git a/xen/common/domain.c b/xen/common/domain.c
index d6ac4d1..721678a 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -353,17 +353,17 @@ struct domain *domain_create(
 void domain_update_node_affinity(struct domain *d)
 {
-    cpumask_var_t cpumask;
-    cpumask_var_t online_affinity;
+    cpumask_var_t dom_cpumask, dom_cpumask_soft;
+    cpumask_t *dom_affinity;
     const cpumask_t *online;
     struct vcpu *v;
-    unsigned int node;
+    unsigned int cpu;
 
-    if ( !zalloc_cpumask_var(&cpumask) )
+    if ( !zalloc_cpumask_var(&dom_cpumask) )
         return;
-    if ( !alloc_cpumask_var(&online_affinity) )
+    if ( !zalloc_cpumask_var(&dom_cpumask_soft) )
     {
-        free_cpumask_var(cpumask);
+        free_cpumask_var(dom_cpumask);
         return;
     }
@@ -371,31 +371,49 @@ void domain_update_node_affinity(struct domain *d)
 
     spin_lock(&d->node_affinity_lock);
 
-    for_each_vcpu ( d, v )
-    {
-        cpumask_and(online_affinity, v->cpu_hard_affinity, online);
-        cpumask_or(cpumask, cpumask, online_affinity);
-    }
-
     /*
-     * If d->auto_node_affinity is true, the domain's node-affinity mask
-     * (d->node_affinity) is automaically computed from all the domain's
-     * vcpus' vcpu-affinity masks (the union of which we have just built
-     * above in cpumask). OTOH, if d->auto_node_affinity is false, we
-     * must leave the node-affinity of the domain alone.
+     * If d->auto_node_affinity is true, let's compute the domain's
+     * node-affinity and update d->node_affinity accordingly. If false,
+     * just leave d->node_affinity alone.
      */
     if ( d->auto_node_affinity )
     {
+        /*
+         * We want the narrowest possible set of pcpus (to get the narrowest
+         * possible set of nodes). What we need is the cpumask of where the
+         * domain can run (the union of the hard affinity of all its vcpus),
+         * and the full mask of where it would prefer to run (the union of
+         * the soft affinity of all its various vcpus). Let's build them.
+         */
+        cpumask_clear(dom_cpumask);
+        cpumask_clear(dom_cpumask_soft);
+        for_each_vcpu ( d, v )
+        {
+            cpumask_or(dom_cpumask, dom_cpumask, v->cpu_hard_affinity);
+            cpumask_or(dom_cpumask_soft, dom_cpumask_soft,
+                       v->cpu_soft_affinity);
+        }
+        /* Filter out non-online cpus */
+        cpumask_and(dom_cpumask, dom_cpumask, online);
+        /* And compute the intersection between hard, online and soft */
+        cpumask_and(dom_cpumask_soft, dom_cpumask_soft, dom_cpumask);
+
+        /*
+         * If not empty, the intersection of hard, soft and online is the
+         * narrowest set we want. If empty, we fall back to hard&online.
+         */
+        dom_affinity = cpumask_empty(dom_cpumask_soft) ?
+                       dom_cpumask : dom_cpumask_soft;
+
         nodes_clear(d->node_affinity);
-        for_each_online_node ( node )
-            if ( cpumask_intersects(&node_to_cpumask(node), cpumask) )
-                node_set(node, d->node_affinity);
+        for_each_cpu( cpu, dom_affinity )
+            node_set(cpu_to_node(cpu), d->node_affinity);
     }
 
     spin_unlock(&d->node_affinity_lock);
 
-    free_cpumask_var(online_affinity);
-    free_cpumask_var(cpumask);
+    free_cpumask_var(dom_cpumask_soft);
+    free_cpumask_var(dom_cpumask);
 }

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel