Re: [Xen-devel] [PATCH RFC V2 45/45] xen/sched: add scheduling granularity enum
On 06/05/2019 12:01, Jan Beulich wrote:
>>>> On 06.05.19 at 11:23, <jgross@xxxxxxxx> wrote:
>> On 06/05/2019 10:57, Jan Beulich wrote:
>>>>>> On 06.05.19 at 08:56, <jgross@xxxxxxxx> wrote:
>>>>  void scheduler_percpu_init(unsigned int cpu)
>>>>  {
>>>>      struct scheduler *sched = per_cpu(scheduler, cpu);
>>>>      struct sched_resource *sd = per_cpu(sched_res, cpu);
>>>> +    const cpumask_t *mask;
>>>> +    unsigned int master_cpu;
>>>> +    spinlock_t *lock;
>>>> +    struct sched_item *old_item, *master_item;
>>>> +
>>>> +    if ( system_state == SYS_STATE_resume )
>>>> +        return;
>>>> +
>>>> +    switch ( opt_sched_granularity )
>>>> +    {
>>>> +    case SCHED_GRAN_cpu:
>>>> +        mask = cpumask_of(cpu);
>>>> +        break;
>>>> +    case SCHED_GRAN_core:
>>>> +        mask = per_cpu(cpu_sibling_mask, cpu);
>>>> +        break;
>>>> +    case SCHED_GRAN_socket:
>>>> +        mask = per_cpu(cpu_core_mask, cpu);
>>>> +        break;
>>>> +    default:
>>>> +        ASSERT_UNREACHABLE();
>>>> +        return;
>>>> +    }
>>>>
>>>> -    if ( system_state != SYS_STATE_resume )
>>>> +    if ( cpu == 0 || cpumask_weight(mask) == 1 )
>>>
>>> At least outside of x86 specific code I think we should avoid
>>> introducing (further?) assumptions that seeing CPU 0 on a
>>> CPU initialization path implies this being while booting the
>>> system. I wonder anyway whether the right side of the ||
>>> doesn't render the left side redundant.
>>
>> On the boot cpu this function is called before e.g. cpu_sibling_mask
>> is initialized. I can have a try using:
>>
>> if ( cpumask_weight(mask) <= 1 )
>
> Or re-order things such that it gets set in time?

That might be difficult. I've ended up with:

if ( !mask || cpumask_weight(mask) == 1 )

>
>>>> +static unsigned int __init sched_check_granularity(void)
>>>> +{
>>>> +    unsigned int cpu;
>>>> +    unsigned int siblings, gran = 0;
>>>> +
>>>> +    for_each_online_cpu( cpu )
>>>
>>> You want to decide for one of two possible styles, but not a mixture
>>> of both:
>>>
>>> for_each_online_cpu ( cpu )
>>>
>>> or
>>>
>>> for_each_online_cpu(cpu)
>>
>> Sorry, will correct.
>>
>>>
>>> . Yet then I'm a little puzzled by its use here in the first place.
>>> Generally I think for_each_cpu() uses in __init functions are
>>> problematic, as they then require further code elsewhere to
>>> deal with hot-onlining. A pre-SMP-initcall plus use of CPU
>>> notifiers is typically more appropriate.
>>
>> And that was mentioned in the cover letter: cpu hotplug is not yet
>> handled (hence the RFC status of the series).
>>
>> When cpu hotplug is being added it might be appropriate to switch the
>> scheme as you suggested. Right now the current solution is much more
>> simple.
>
> I see (I did notice the cover letter remark, but managed to not
> honor it when writing the reply), but I'm unconvinced if incurring
> more code churn by not dealing with things the "dynamic" way
> right away is indeed the "more simple" (overall) solution.

Especially with hotplug, things become more complicated: I'd like the
final version to fall back to smaller granularities in case e.g. the
user has selected socket scheduling and two sockets have different
numbers of cores. With hotplug such a situation might be discovered
only with some domUs already running, so how should we react in that
case? Doing panic() is not an option, so either we reject onlining the
additional socket, or we adapt by dynamically modifying the scheduling
granularity. Without that being discussed I don't think it makes sense
to put a lot of effort into a solution which is going to be rejected in
the end.
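For illustration only, the pre-SMP-initcall plus CPU notifier scheme
mentioned above could look roughly like the sketch below (not part of
the series; the names cpu_sched_gran_callback, sched_gran_init and
sched_gran_cpus are hypothetical, opt_sched_granularity/SCHED_GRAN_cpu
are the identifiers introduced by this patch, and whether the sibling
information is already valid at CPU_UP_PREPARE time is exactly the open
question discussed here):

#include <xen/cpu.h>
#include <xen/cpumask.h>
#include <xen/errno.h>
#include <xen/init.h>
#include <xen/notifier.h>
#include <xen/smp.h>

/*
 * Sketch only: refuse to online a CPU whose topology does not match the
 * granularity chosen at boot.  sched_gran_cpus is assumed to hold the
 * number of CPUs per scheduling resource; opt_sched_granularity and
 * SCHED_GRAN_cpu come from this patch series.
 */
static int cpu_sched_gran_callback(struct notifier_block *nfb,
                                   unsigned long action, void *hcpu)
{
    unsigned int cpu = (unsigned long)hcpu;
    int rc = 0;

    switch ( action )
    {
    case CPU_UP_PREPARE:
        /* Assumes sibling data is available this early (see discussion). */
        if ( opt_sched_granularity != SCHED_GRAN_cpu &&
             cpumask_weight(per_cpu(cpu_sibling_mask, cpu)) !=
             sched_gran_cpus )
            rc = -EINVAL; /* or: fall back to a smaller granularity */
        break;
    default:
        break;
    }

    return !rc ? NOTIFY_DONE : notifier_from_errno(rc);
}

static struct notifier_block cpu_sched_gran_nfb = {
    .notifier_call = cpu_sched_gran_callback
};

static int __init sched_gran_init(void)
{
    /* Registered before APs come up, so every hotplugged CPU is checked. */
    register_cpu_notifier(&cpu_sched_gran_nfb);
    return 0;
}
presmp_initcall(sched_gran_init);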
I'm fine with doing a proper implementation for the non-RFC variant
with a generally accepted design.


Juergen

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel