
Re: [Xen-devel] [PATCH 15/16] xen: sched: scratch space for cpumasks on Credit2

On 24/03/16 12:44, George Dunlap wrote:
> On 18/03/16 19:27, Andrew Cooper wrote:
>> On 18/03/16 19:06, Dario Faggioli wrote:
>>> like what's there already in both Credit1 and RTDS. In
>>> fact, when playing with affinity, a lot of cpumask
>>> manipulation is necessary, inside of various functions.
>>> To avoid having a lot of cpumask_var_t on the stack,
>>> this patch introduces a global scratch area.
>>> Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
>>> ---
>>> Cc: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
>>> ---
>> I would suggest instead going with
>>
>> static DEFINE_PER_CPU(cpumask_t, csched2_cpumask_scratch);
>>
>> Functions which want to use it can use
>>
>> cpumask_t *scratch = &this_cpu(csched2_cpumask_scratch);
>>
>> This avoids all this open-coded allocation/refcounting, the chance that
>> starting a scheduler would fail for memory reasons, and one extra
>> cpumask in the per-cpu data area won't break the bank.
> Going a bit further, since (according to the changelog) both credit1 and
> rtds also do this, would it make sense to have schedule.c define these,
> and allow any of the schedulers to use them?
> (Assuming that both credit1 and rtds allocate exactly one mask per cpu.)

If more than one scheduler needs scratch space, then yes.  That would be
better than having one scratch per scheduler per cpu.

After all, we have things like keyhandler_scratch, for a similar reason.


Xen-devel mailing list