Re: [Xen-devel] [PATCH] Reflect cpupool in numa node affinity (v2)
On Mon, Jan 23, 2012 at 12:55 PM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>>> On 23.01.12 at 13:12, Juergen Gross <juergen.gross@xxxxxxxxxxxxxx> wrote:
>> - introduce and use common macros for selecting cpupool based cpumasks
>
> I had hoped that you would do this as a separate prerequisite patch,
> but now that it's in here I think there's no need to break it up.
But it would be a lot easier to check the logic if the two patches
were separate; both now, and when someone in the future is doing
archaeology to figure out what's going on. I won't NACK a patch that
has them together, but I would strongly encourage you to make them two
patches. :-)
-George
>
>>@@ -333,23 +334,38 @@ struct domain *domain_create(
>>
>> void domain_update_node_affinity(struct domain *d)
>> {
>>- cpumask_t cpumask;
>>+ cpumask_var_t cpumask;
>>+ cpumask_var_t online_affinity;
>>+ const cpumask_t *online;
>> nodemask_t nodemask = NODE_MASK_NONE;
>> struct vcpu *v;
>> unsigned int node;
>>
>>- cpumask_clear(&cpumask);
>>+ if (!zalloc_cpumask_var(&cpumask))
>>+ return;
>>+ if (!alloc_cpumask_var(&online_affinity))
>
> This doesn't get freed at the end of the function. Also, with the rest
> of the function being formatted properly, you ought to insert spaces
> after the initial and before the final parentheses of the if-s.
>
> Jan
>
>>+ {
>>+ free_cpumask_var(cpumask);
>>+ return;
>>+ }
>>+
>>+ online = cpupool_online_cpumask(d->cpupool);
>> spin_lock(&d->node_affinity_lock);
>>
>> for_each_vcpu ( d, v )
>>- cpumask_or(&cpumask, &cpumask, v->cpu_affinity);
>>+ {
>>+ cpumask_and(online_affinity, v->cpu_affinity, online);
>>+ cpumask_or(cpumask, cpumask, online_affinity);
>>+ }
>>
>> for_each_online_node ( node )
>>- if ( cpumask_intersects(&node_to_cpumask(node), &cpumask) )
>>+ if ( cpumask_intersects(&node_to_cpumask(node), cpumask) )
>> node_set(node, nodemask);
>>
>> d->node_affinity = nodemask;
>> spin_unlock(&d->node_affinity_lock);
>>+
>>+ free_cpumask_var(cpumask);
>> }
>>
>>
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel