Re: [Xen-devel] [PATCH RESEND 01/12] xen: numa-sched: leave node-affinity alone if not in "auto" mode
On 12/11/2013 08:11, "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
> Hi Keir,
>
> below the one remaining patch mentioned yesterday.
>
> Jan
>
>>>> On 05.11.13 at 15:34, Dario Faggioli <dario.faggioli@xxxxxxxxxx> wrote:
>> If the domain's NUMA node-affinity is being specified by the
>> user/toolstack (instead of being automatically computed by Xen),
>> we really should stick to that. This means domain_update_node_affinity()
>> is wrong when it filters out some stuff from there even in "!auto"
>> mode.
>>
>> This commit fixes that. Of course, this does not mean node-affinity
>> is always honoured (e.g., a vcpu won't run on a pcpu of a different
>> cpupool) but the necessary logic for taking into account all the
>> possible situations lives in the scheduler code, where it belongs.
>>
>> What could happen without this change is that, under certain
>> circumstances, the node-affinity of a domain may change when the
>> user modifies the vcpu-affinity of the domain's vcpus. This, even
>> if probably not a real bug, is at least something the user does
>> not expect, so let's avoid it.
>>
>> Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
>> Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>

Acked-by: Keir Fraser <keir@xxxxxxx>

>> ---
>> This has been submitted already as a single patch on its own.
>> Since this series needs the change done here, just include it
>> in here, instead of pinging the original submission and deferring
>> posting this series.
>> ---
>>  xen/common/domain.c |   28 +++++++++-------------------
>>  1 file changed, 9 insertions(+), 19 deletions(-)
>>
>> diff --git a/xen/common/domain.c b/xen/common/domain.c
>> index 5999779..af31ab4 100644
>> --- a/xen/common/domain.c
>> +++ b/xen/common/domain.c
>> @@ -352,7 +352,6 @@ void domain_update_node_affinity(struct domain *d)
>>      cpumask_var_t cpumask;
>>      cpumask_var_t online_affinity;
>>      const cpumask_t *online;
>> -    nodemask_t nodemask = NODE_MASK_NONE;
>>      struct vcpu *v;
>>      unsigned int node;
>>
>> @@ -374,28 +373,19 @@ void domain_update_node_affinity(struct domain *d)
>>          cpumask_or(cpumask, cpumask, online_affinity);
>>      }
>>
>> +    /*
>> +     * If d->auto_node_affinity is true, the domain's node-affinity mask
>> +     * (d->node_affinity) is automatically computed from all the domain's
>> +     * vcpus' vcpu-affinity masks (the union of which we have just built
>> +     * above in cpumask). OTOH, if d->auto_node_affinity is false, we
>> +     * must leave the node-affinity of the domain alone.
>> +     */
>>      if ( d->auto_node_affinity )
>>      {
>> -        /* Node-affinity is automatically computed from all vcpu-affinities */
>> +        nodes_clear(d->node_affinity);
>>          for_each_online_node ( node )
>>              if ( cpumask_intersects(&node_to_cpumask(node), cpumask) )
>> -                node_set(node, nodemask);
>> -
>> -        d->node_affinity = nodemask;
>> -    }
>> -    else
>> -    {
>> -        /* Node-affinity is provided by someone else, just filter out cpus
>> -         * that are either offline or not in the affinity of any vcpus. */
>> -        nodemask = d->node_affinity;
>> -        for_each_node_mask ( node, d->node_affinity )
>> -            if ( !cpumask_intersects(&node_to_cpumask(node), cpumask) )
>> -                node_clear(node, nodemask);
>> -
>> -        /* Avoid losing track of node-affinity because a bad
>> -         * vcpu-affinity has been specified. */
>> -        if ( !nodes_empty(nodemask) )
>> -            d->node_affinity = nodemask;
>> +            node_set(node, d->node_affinity);
>>      }
>>
>>      sched_set_node_affinity(d, &d->node_affinity);


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel