
Re: [Xen-devel] [PATCH RFC V2 45/45] xen/sched: add scheduling granularity enum



>>> On 06.05.19 at 12:20, <jgross@xxxxxxxx> wrote:
> On 06/05/2019 12:01, Jan Beulich wrote:
>>>>> On 06.05.19 at 11:23, <jgross@xxxxxxxx> wrote:
>>> On 06/05/2019 10:57, Jan Beulich wrote:
>>>> Yet then I'm a little puzzled by its use here in the first place.
>>>> Generally I think for_each_cpu() uses in __init functions are
>>>> problematic, as they then require further code elsewhere to
>>>> deal with hot-onlining. A pre-SMP-initcall plus use of CPU
>>>> notifiers is typically more appropriate.
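
A minimal sketch of that pattern, for illustration only (it uses Xen's
register_cpu_notifier() and presmp_initcall(); the callback bodies are
just placeholders):

    #include <xen/cpu.h>
    #include <xen/init.h>
    #include <xen/lib.h>
    #include <xen/notifier.h>

    static int cpu_callback(struct notifier_block *nfb,
                            unsigned long action, void *hcpu)
    {
        unsigned int cpu = (unsigned long)hcpu;

        switch ( action )
        {
        case CPU_ONLINE:
            /* Placeholder: the per-CPU setup that would otherwise sit
             * in a for_each_cpu() loop in an __init function. */
            printk(XENLOG_DEBUG "sched: setting up CPU %u\n", cpu);
            break;
        case CPU_DEAD:
            /* Placeholder: corresponding teardown on offlining. */
            printk(XENLOG_DEBUG "sched: tearing down CPU %u\n", cpu);
            break;
        }

        return NOTIFY_DONE;
    }

    static struct notifier_block cpu_nfb = {
        .notifier_call = cpu_callback
    };

    static int __init cpu_nfb_init(void)
    {
        register_cpu_notifier(&cpu_nfb);
        return 0;
    }
    presmp_initcall(cpu_nfb_init);

This way hot-onlined CPUs go through the same code path as boot-time
ones.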
>>>
>>> And that was mentioned in the cover letter: cpu hotplug is not yet
>>> handled (hence the RFC status of the series).
>>>
>>> When cpu hotplug support is added it might be appropriate to switch
>>> to the scheme you suggested. For now, though, the current solution is
>>> much simpler.
>> 
>> I see (I did notice the cover letter remark, but managed not to
>> honor it when writing my reply). Still, I'm unconvinced that incurring
>> more code churn by not dealing with things the "dynamic" way right
>> away is really the "more simple" (overall) solution.
> 
> Especially with hotplug things become more complicated: I'd like the
> final version to fall back to smaller granularities in case e.g. the
> user has selected socket scheduling and two sockets have different
> numbers of cores. With hotplug such a situation might be discovered
> only with some domUs already running, so how should we react in that
> case? Calling panic() is not an option, so we either reject onlining
> the additional socket, or we adapt by dynamically modifying the
> scheduling granularity. Without that being discussed I don't think it
> makes sense to put a lot of effort into a solution which is going to
> be rejected in the end.
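
In code terms, the choice you describe amounts roughly to this on
socket onlining (every name here is hypothetical, just to make the two
options concrete):

    /* Hypothetical hook run when the first CPU of a new socket comes up. */
    static int socket_online_check(unsigned int socket)
    {
        /* opt_sched_granularity, cores_on_socket(), and boot_socket
         * are all made up for this sketch. */
        if ( opt_sched_granularity == SCHED_GRAN_SOCKET &&
             cores_on_socket(socket) != cores_on_socket(boot_socket) )
        {
            /* Either refuse to online the asymmetric socket ... */
            return -EOPNOTSUPP;
            /* ... or instead fall back to e.g. core granularity. */
        }

        return 0;
    }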

Hmm, where does the symmetry requirement come from? Socket scheduling
should mean as many vCPU-s on one socket as there are cores * threads
(e.g. a socket with 4 cores of 2 threads each would form one 8-vCPU
scheduling unit); similarly core scheduling means as many vCPU-s as
there are threads per core. Statically partitioning domains would seem
at best an intermediate step anyway, as it requires (on average)
leaving more resources (cores/threads) idle than a dynamic partitioning
model would.

As to your specific question of how to react: since bringing a full
new socket online implies bringing all of its cores/threads online one
by one anyway, a "too small" socket in your scheme would simply remain
unused until "enough" cores/threads have appeared. Similarly, the
socket would go out of use as soon as one of its cores/threads is
offlined. Obviously this becomes problematic for the last usable
socket.
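
I.e. something like (sketch only; both helpers are invented):

    /* A socket takes part in socket-granularity scheduling only while
     * all of its cores/threads are online. */
    static bool socket_schedulable(unsigned int socket)
    {
        /* online_cpus_on_socket() and cpus_per_socket() are hypothetical. */
        return online_cpus_on_socket(socket) == cpus_per_socket(socket);
    }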

But with the static partitioning you describe, I also can't really see
how "xen-hptool smt-disable" is going to work.

Jan


