
Re: [Xen-devel] [PATCH v2] sched: credit2: respect per-vcpu hard affinity



Hey, I came across this patch again for other reasons, and I realized
I've a few more (minor) comments:

On Sun, 2015-02-08 at 17:45 -1000, Justin T. Weaver wrote:
> From: "Justin T. Weaver" <jtweaver@xxxxxxxxxx>

> diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
> index cf53770..de8fb5a 100644
> --- a/xen/common/sched_credit2.c
> +++ b/xen/common/sched_credit2.c

> @@ -2127,16 +2212,25 @@ csched2_init(struct scheduler *ops)
>  
>      prv->load_window_shift = opt_load_window_shift;
>  
> +    cpumask = xzalloc_bytes(nr_cpu_ids * sizeof(cpumask_t *));
> +    if ( cpumask == NULL )
> +        return -ENOMEM;
> +
This can probably be xzalloc_array(), or even xmalloc_array() if we
don't care about zeroing all the elements (see below for why I don't
actually think we need to).
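I.e., something like this (untested sketch; the exact element type
depends on how the scratch array ends up being declared):

    cpumask = xzalloc_array(cpumask_t *, nr_cpu_ids);
    if ( cpumask == NULL )
        return -ENOMEM;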

>      return 0;
>  }
>  
>  static void
>  csched2_deinit(const struct scheduler *ops)
>  {
> +    int i;
>      struct csched2_private *prv;
>  
>      prv = CSCHED2_PRIV(ops);
>      xfree(prv);
> +
> +    for ( i = 0; i < nr_cpu_ids; i++ )
> +        free_cpumask_var(cpumask[i]);
> +    xfree(cpumask);
>
Do we need the loop? I mean, all the pcpus go through
csched2_alloc_pdata() when being activated under this scheduler, which
allocates their scratch masks, and then through csched2_free_pdata()
when being deactivated from it, which frees them.
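Roughly, the pairing I'm relying on is this (illustrative sketch only,
not the actual hunks from the patch):

    /* In csched2_alloc_pdata(), when pcpu 'cpu' is activated: */
    if ( !zalloc_cpumask_var(&cpumask[cpu]) )
        /* bail out, however the rest of the patch handles this */;

    /* In csched2_free_pdata(), when pcpu 'cpu' is deactivated: */
    free_cpumask_var(cpumask[cpu]);
    cpumask[cpu] = NULL;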

I would then expect that, by the time we get here, all the elements of
the scratch mask array that were allocated have also been freed, and
hence that freeing the array itself is all that's necessary.
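IOW, I'd expect the deinit path to reduce to something like this
(sketch, under the assumption above that csched2_free_pdata() has
already freed every per-pcpu mask):

    static void
    csched2_deinit(const struct scheduler *ops)
    {
        struct csched2_private *prv;

        prv = CSCHED2_PRIV(ops);
        xfree(prv);

        /* Per-pcpu scratch masks were already freed in csched2_free_pdata(). */
        xfree(cpumask);
    }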

Am I missing something?

Regards,
Dario



 

