
Re: [Xen-devel] [PATCH v2 21/48] xen/sched: use sched_resource cpu instead smp_processor_id in schedulers



On 12.09.2019 11:34, Juergen Gross wrote:
> On 09.09.19 16:17, Jan Beulich wrote:
>> On 09.08.2019 16:58, Juergen Gross wrote:
>>> @@ -1825,8 +1825,9 @@ static struct task_slice
>>>   csched_schedule(
>>>       const struct scheduler *ops, s_time_t now, bool_t tasklet_work_scheduled)
>>>   {
>>> -    const int cpu = smp_processor_id();
>>> -    struct list_head * const runq = RUNQ(cpu);
>>> +    const unsigned int cpu = smp_processor_id();
>>> +    const unsigned int sched_cpu = sched_get_resource_cpu(cpu);
>>> +    struct list_head * const runq = RUNQ(sched_cpu);
>>
>> By retaining a local variable named "cpu" you make it close to
>> impossible to notice, during a re-base, an addition to the
>> function still referencing a variable of this name. Similarly
>> review is being made harder because one needs to go hunt all
>> the remaining uses of "cpu". For example there's a trace entry
>> being generated, and it's not obvious to me whether this wouldn't
>> better also use sched_cpu.
> 
> Okay, I'll rename "cpu" to "my_cpu".

We've got a number of instances of "this_cpu" in such cases already,
but not a single "my_cpu". May I suggest sticking to that naming here
as well?

> I used cpu in the trace entry on purpose, as it might be interesting
> to know on which cpu the entry has been produced.

Right, that's how I understood it; it simply seemed like there
might be a similarly valid view to the contrary.
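
To make the distinction concrete, a minimal sketch of the two locals
(the "this_cpu" name is only the suggestion from above, not what the
patch uses; sched_get_resource_cpu() and RUNQ() are as in the quoted
hunk):

    const unsigned int this_cpu  = smp_processor_id();               /* physical CPU */
    const unsigned int sched_cpu = sched_get_resource_cpu(this_cpu); /* resource's master CPU */
    struct list_head * const runq = RUNQ(sched_cpu);  /* run-queue work acts on the resource */

    /* A TRACE_*() record would keep using this_cpu, so it still
     * identifies the physical CPU that produced the entry. */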

>>> @@ -1967,7 +1968,7 @@ csched_schedule(
>>>       if ( snext->pri > CSCHED_PRI_TS_OVER )
>>>           __runq_remove(snext);
>>>       else
>>> -        snext = csched_load_balance(prv, cpu, snext, &ret.migrated);
>>> +        snext = csched_load_balance(prv, sched_cpu, snext, &ret.migrated);
>>
>> And in a case like this one I wonder whether passing a "sort of
>> CPU" isn't sufficiently confusing, compared to e.g. simply
>> passing the corresponding unit.
> 
> I guess you mean sched_resource.

Not sure - with scheduling acting on units, it would seem to me that
passing around the unit pointers would be the most appropriate thing.

> I don't think changing the parameter type is a good idea. We need both
> (resource and cpu number) on caller and callee side, but the main
> object csched_load_balance() is working on is the cpu number.

I see. Part of my thinking here was also the added type safety of
passing pointers instead of numeric values.
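
As a self-contained toy illustration of that type-safety point (the
function names below are made up; only struct sched_unit and
struct sched_resource correspond to types from this series):

    /* Two unsigned ints are freely interchangeable, so swapping them
     * compiles without any diagnostic ... */
    void balance_by_number(unsigned int sched_cpu, unsigned int pri);

    /* ... while distinct pointer types turn the same mix-up into a
     * compile-time error. */
    struct sched_unit;
    struct sched_resource;
    void balance_by_unit(struct sched_unit *unit, struct sched_resource *res);

    void caller(unsigned int cpu, unsigned int pri,
                struct sched_unit *u, struct sched_resource *r)
    {
        balance_by_number(pri, cpu); /* arguments swapped, silently accepted */
        balance_by_unit(u, r);       /* swapping u and r here would not compile */
    }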

>>> @@ -1975,12 +1976,12 @@ csched_schedule(
>>>        */
>>>       if ( !tasklet_work_scheduled && snext->pri == CSCHED_PRI_IDLE )
>>>       {
>>> -        if ( !cpumask_test_cpu(cpu, prv->idlers) )
>>> -            cpumask_set_cpu(cpu, prv->idlers);
>>> +        if ( !cpumask_test_cpu(sched_cpu, prv->idlers) )
>>> +            cpumask_set_cpu(sched_cpu, prv->idlers);
>>>       }
>>> -    else if ( cpumask_test_cpu(cpu, prv->idlers) )
>>> +    else if ( cpumask_test_cpu(sched_cpu, prv->idlers) )
>>>       {
>>> -        cpumask_clear_cpu(cpu, prv->idlers);
>>> +        cpumask_clear_cpu(sched_cpu, prv->idlers);
>>>       }
>>
>> And this looks to be a pretty gross abuse of CPU masks then.
>> (Nevertheless I can see that using a CPU as a vehicle here is
>> helpful to limit the scope of the already long series, but I
>> think it needs to be made much more apparent what is meant.)
> 
> I don't think it is an abuse. Think of it as a cpumask where only
> the bits related to the resource's master_cpus can be set.

Well, I understand that this was your thinking behind retaining
the use of CPU masks here. With the "master_cpu" naming it may
indeed end up looking less abuse-like, but I still wonder (as
also suggested elsewhere) whether an ID concept similar to that
of APIC ID vs (derived) core/socket/node ID wouldn't be better
in cases where an ID is to represent multiple CPUs.
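
Purely as an illustration of such an ID scheme (nothing below is an
actual Xen interface; CPUS_PER_RESOURCE just stands for however many
CPUs a sched_resource covers):

    #include <limits.h>

    #define CPUS_PER_RESOURCE 2    /* e.g. SMT siblings scheduled as one unit */
    #define NR_CPUS           256
    #define NR_RESOURCES      (NR_CPUS / CPUS_PER_RESOURCE)
    #define BITS_PER_LONG     (CHAR_BIT * sizeof(unsigned long))

    /* Bitmap indexed by resource ID rather than a cpumask indexed by CPU. */
    static unsigned long idle_resources[(NR_RESOURCES + BITS_PER_LONG - 1) /
                                        BITS_PER_LONG];

    /* Derive the resource ID from the CPU number, much like a core or
     * socket ID is derived from an APIC ID. */
    static inline unsigned int cpu_to_resource_id(unsigned int cpu)
    {
        return cpu / CPUS_PER_RESOURCE;
    }

    static inline void mark_resource_idle(unsigned int cpu)
    {
        unsigned int id = cpu_to_resource_id(cpu);

        idle_resources[id / BITS_PER_LONG] |= 1UL << (id % BITS_PER_LONG);
    }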

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

