Re: [Xen-devel] [PATCH 2 of 3] switch to dynamically allocated cpumask in domain_update_node_affinity()
On Tue, 2012-01-24 at 05:54 +0000, Juergen Gross wrote:
> # HG changeset patch
> # User Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
> # Date 1327384410 -3600
> # Node ID 08232960ff4bed750d26e5f1ff53972fee9e0130
> # Parent 99f98e501f226825fbf16f6210b4b7834dff5df1
> switch to dynamically allocated cpumask in
> domain_update_node_affinity()
>
> cpumasks should be allocated dynamically rather than on the stack.
>
> Signed-off-by: juergen.gross@xxxxxxxxxxxxxx
>
> diff -r 99f98e501f22 -r 08232960ff4b xen/common/domain.c
> --- a/xen/common/domain.c Tue Jan 24 06:53:06 2012 +0100
> +++ b/xen/common/domain.c Tue Jan 24 06:53:30 2012 +0100
> @@ -333,23 +333,27 @@ struct domain *domain_create(
>
> void domain_update_node_affinity(struct domain *d)
> {
> - cpumask_t cpumask;
> + cpumask_var_t cpumask;
> nodemask_t nodemask = NODE_MASK_NONE;
> struct vcpu *v;
> unsigned int node;
>
> - cpumask_clear(&cpumask);
> + if ( !zalloc_cpumask_var(&cpumask) )
> + return;
If this allocation ends up always failing, we will never set node_affinity
to anything at all. Granted, that is already a pretty nasty situation to be
in, but perhaps setting the affinity to NODE_MASK_ALL on failure would be
slightly more robust?
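Something like the below is what I have in mind -- just a sketch on top of
your patch (same helpers, same locking), where only the allocation-failure
path differs; untested, and it assumes NODE_MASK_ALL can be assigned
directly, the same way NODE_MASK_NONE is used above:

void domain_update_node_affinity(struct domain *d)
{
    cpumask_var_t cpumask;
    nodemask_t nodemask = NODE_MASK_NONE;
    struct vcpu *v;
    unsigned int node;

    if ( !zalloc_cpumask_var(&cpumask) )
    {
        /* Allocation failed: rather than returning with a possibly stale
         * (or never initialised) node_affinity, fall back to all nodes. */
        spin_lock(&d->node_affinity_lock);
        d->node_affinity = NODE_MASK_ALL;
        spin_unlock(&d->node_affinity_lock);
        return;
    }

    spin_lock(&d->node_affinity_lock);

    /* Union of the hard affinities of all the domain's vcpus. */
    for_each_vcpu ( d, v )
        cpumask_or(cpumask, cpumask, v->cpu_affinity);

    /* A node is relevant if any of its cpus appear in that union. */
    for_each_online_node ( node )
        if ( cpumask_intersects(&node_to_cpumask(node), cpumask) )
            node_set(node, nodemask);

    d->node_affinity = nodemask;
    spin_unlock(&d->node_affinity_lock);

    free_cpumask_var(cpumask);
}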
> +
> spin_lock(&d->node_affinity_lock);
>
> for_each_vcpu ( d, v )
> - cpumask_or(&cpumask, &cpumask, v->cpu_affinity);
> + cpumask_or(cpumask, cpumask, v->cpu_affinity);
>
> for_each_online_node ( node )
> - if ( cpumask_intersects(&node_to_cpumask(node), &cpumask) )
> + if ( cpumask_intersects(&node_to_cpumask(node), cpumask) )
> node_set(node, nodemask);
>
> d->node_affinity = nodemask;
> spin_unlock(&d->node_affinity_lock);
> +
> + free_cpumask_var(cpumask);
> }
>
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel