
Re: [Xen-devel] [PATCH 7/7] xen: sched_rt: print useful affinity info when dumping



Hi Dario,

2015-03-17 10:12 GMT-04:00 Dario Faggioli <dario.faggioli@xxxxxxxxxx>:
> On Mon, 2015-03-16 at 16:30 -0400, Meng Xu wrote:
>> Hi Dario,
>>
> Hey,
>
>> 2015-03-16 13:05 GMT-04:00 Dario Faggioli <dario.faggioli@xxxxxxxxxx>:
>
>> >
>> > This change also takes the chance to add a scratch
>> > cpumask, to avoid having to create one more
>> > cpumask_var_t on the stack of the dumping routine.
>>
>> Actually, I have a question about the strength of this design. When we
>> have a machine with many cpus, we will end up allocating a
>> cpumask for each cpu.
>>
> Just FTR, what we will end up allocating is:
>  - an array of *pointers* to cpumasks with as many elements as the
>    number of pCPUs,
>  - a cpumask *only* for the pCPUs subjected to an instance of the RTDS
>    scheduler.
>
> So, for instance, if you have 64 pCPUs, but are using the RTDS scheduler
> only in a cpupool with 2 pCPUs, you'll have an array of 64 pointers to
> cpumask_t, but only 2 actual cpumasks.
>
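
Just to check that I follow, the layout you describe would be roughly
this (a minimal sketch reusing the identifiers from the patch; error
paths omitted):

    /* One global array of pointers, sized for every possible pCPU. */
    static cpumask_var_t *_cpumask_scratch;

    /* rt_init(): only the nr_cpu_ids pointers are allocated here. */
    _cpumask_scratch = xmalloc_array(cpumask_var_t, nr_cpu_ids);

    /* rt_alloc_pdata(ops, cpu): an actual cpumask is allocated only
     * for a pCPU handed to this RTDS instance, so 64 pCPUs with a
     * 2-pCPU RTDS cpupool means 64 pointers but just 2 cpumasks. */
    alloc_cpumask_var(&_cpumask_scratch[cpu]);

If that's the picture, the fixed cost is just one pointer per possible
pCPU, which looks negligible.
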
>> Is this better than having a cpumask_var_t on
>> the stack of the dumping routine, since the dumping routine is not in
>> the hot path?
>>
> George and Jan replied to this already, I think. Allow me to add just a
> few words:
>> > Such scratch area can be used to kill most of the
>> > cpumask_var_t local variables in other functions
>> > in the file, but that is *NOT* done in this change.
>> >
> This is the point, actually! As said here, this is not only for the sake
> of the dumping routine. In fact, ideally, someone will, in the near
> future, go through the whole file and kill most of the cpumask_t
> local variables, and most of the cpumask dynamic allocations, in favour
> of using this scratch area.
>
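
As an aside, I can see how that future cleanup would look; something
roughly like this (purely illustrative: the surrounding function and
the fields used are just an example, not from this patch):

    /* Before: a local mask, on the stack or dynamically allocated:
     *     cpumask_t mask;              // NR_CPUS/8 bytes of stack
     * After: borrow the pre-allocated scratch mask of this pCPU.   */
    cpumask_t *mask = _cpumask_scratch[smp_processor_id()];

    cpumask_and(mask, svc->vcpu->cpu_hard_affinity, &cpu_online_map);
    /* ... use mask; nothing to free, the area is reused next time. */
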
>> > @@ -409,6 +423,10 @@ rt_init(struct scheduler *ops)
>> >      if ( prv == NULL )
>> >          return -ENOMEM;
>> >
>> > +    _cpumask_scratch = xmalloc_array(cpumask_var_t, nr_cpu_ids);
>>
>> Is it better to use xzalloc_array?
>>
> Why? IMO, not really. I'm only free()-ing (in rt_free_pdata()) the
> elements of the array that have been previously successfully allocated
> (in rt_alloc_pdata()), so I don't think there is any special requirement
> for all the elements to be NULL right away.

OK. I see.
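If I read it correctly, a slot is only ever consumed by the matching
free path anyway, so nothing ever looks at an uninitialised element
(sketch, paraphrasing the patch):

    /* rt_alloc_pdata(ops, cpu): the slot is written here first. */
    alloc_cpumask_var(&_cpumask_scratch[cpu]);

    /* rt_free_pdata(), for the same cpu: only reached for a pCPU
     * whose rt_alloc_pdata() succeeded, so the slot never gets
     * dereferenced while it still holds garbage.                 */
    free_cpumask_var(_cpumask_scratch[cpu]);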

>
>> > +    if ( _cpumask_scratch == NULL )
>> > +        return -ENOMEM;
>> > +
>> >      spin_lock_init(&prv->lock);
>> >      INIT_LIST_HEAD(&prv->sdom);
>> >      INIT_LIST_HEAD(&prv->runq);
>> > @@ -426,6 +444,7 @@ rt_deinit(const struct scheduler *ops)
>> >  {
>> >      struct rt_private *prv = rt_priv(ops);
>> >
>> > +    xfree(_cpumask_scratch);
>> >      xfree(prv);
>> >  }
>> >
>> > @@ -443,6 +462,9 @@ rt_alloc_pdata(const struct scheduler *ops, int cpu)
>> >      per_cpu(schedule_data, cpu).schedule_lock = &prv->lock;
>> >      spin_unlock_irqrestore(&prv->lock, flags);
>> >
>> > +    if ( !alloc_cpumask_var(&_cpumask_scratch[cpu]) )
>>
>> Is it better to use zalloc_cpumask_var() here?
>>
> Nope. It's a scratch area, after all, so one really should not assume it
> to be in a specific state (e.g., no bits set as you're suggesting) when
> using it.

I see the point. Now I get it. :-)
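
So a user of the scratch mask has to fully (re)write it before reading
anything back, e.g. (illustrative only; 'v' is just some vcpu here):

    cpumask_t *mask = _cpumask_scratch[smp_processor_id()];

    cpumask_copy(mask, v->cpu_hard_affinity); /* ok: every bit written */
    cpumask_and(mask, mask, &cpu_online_map); /* ok: still fully defined */

    /* Not ok: this assumes the mask starts out empty, which even
     * zalloc_cpumask_var() would only guarantee for its very first
     * use anyway:
     *     cpumask_set_cpu(cpu, mask);
     */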

>
> Thanks and Regards,

Thank you very much for the clarification! :-)

Best,

Meng

-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

