[Xen-devel] [PATCH 3/3] xen: Remove buggy initial placement algorithm
The initial placement algorithm sometimes picks cpus outside of the
mask it's given, does a lot of unnecessary bitmasking, does its own
separate load calculation, and completely ignores vcpu hard and soft
affinities.  Just get rid of it and rely on the schedulers to do
initial placement.

Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxx>
---
Since many of the schedulers' cpu_pick functions have a strong
preference to just leave the cpu where it is (in particular, credit1
and rt), this may cause some cpus to be overloaded when creating a
lot of domains.  Arguably this should be fixed in the schedulers
themselves.

The core problem with default_vcpu0_location() is that it chooses its
initial cpu based on the sibling of pcpu 0, not the first available
sibling in the online mask; so if pcpu 1 ends up being less "busy"
than all the cpus in the pool, then it ends up being chosen even
though it's not in the pool.

Fixing the algorithm would involve starting with the sibling map of
cpumask_first(online) rather than 0, and then having all sibling
checks not only test that the result of cpumask_next() is less than
nr_cpu_ids, but also that the result is in online.  (A rough,
illustrative sketch of such a fix is appended after the patch below.)

Additionally, as far as I can tell, the
cpumask_test_cpu(i, &cpu_exclude_map) at the top of the
for_each_cpu() loop can never return false; so both this test and the
cpumask_or() are unnecessary and should be removed.

CC: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
CC: Anshul Makkar <anshul.makkar@xxxxxxxxxx>
CC: Meng Xu <mengxu@xxxxxxxxxxxxx>
CC: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
CC: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
CC: Jan Beulich <jbeulich@xxxxxxxx>
CC: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
CC: Stefano Stabellini <sstabellini@xxxxxxxxxx>
CC: Tim Deegan <tim@xxxxxxx>
CC: Wei Liu <wei.liu2@xxxxxxxxxx>
---
 xen/common/domctl.c | 50 +-------------------------------------------------
 1 file changed, 1 insertion(+), 49 deletions(-)

diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index b784e6c..cc85e27 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -217,54 +217,6 @@ void getdomaininfo(struct domain *d, struct xen_domctl_getdomaininfo *info)
     memcpy(info->handle, d->handle, sizeof(xen_domain_handle_t));
 }
 
-static unsigned int default_vcpu0_location(cpumask_t *online)
-{
-    struct domain *d;
-    struct vcpu *v;
-    unsigned int i, cpu, nr_cpus, *cnt;
-    cpumask_t cpu_exclude_map;
-
-    /* Do an initial CPU placement. Pick the least-populated CPU. */
-    nr_cpus = cpumask_last(&cpu_online_map) + 1;
-    cnt = xzalloc_array(unsigned int, nr_cpus);
-    if ( cnt )
-    {
-        rcu_read_lock(&domlist_read_lock);
-        for_each_domain ( d )
-            for_each_vcpu ( d, v )
-                if ( !(v->pause_flags & VPF_down)
-                     && ((cpu = v->processor) < nr_cpus) )
-                    cnt[cpu]++;
-        rcu_read_unlock(&domlist_read_lock);
-    }
-
-    /*
-     * If we're on a HT system, we only auto-allocate to a non-primary HT. We
-     * favour high numbered CPUs in the event of a tie.
-     */
-    cpumask_copy(&cpu_exclude_map, per_cpu(cpu_sibling_mask, 0));
-    cpu = cpumask_first(&cpu_exclude_map);
-    i = cpumask_next(cpu, &cpu_exclude_map);
-    if ( i < nr_cpu_ids )
-        cpu = i;
-    for_each_cpu(i, online)
-    {
-        if ( cpumask_test_cpu(i, &cpu_exclude_map) )
-            continue;
-        if ( (i == cpumask_first(per_cpu(cpu_sibling_mask, i))) &&
-             (cpumask_next(i, per_cpu(cpu_sibling_mask, i)) < nr_cpu_ids) )
-            continue;
-        cpumask_or(&cpu_exclude_map, &cpu_exclude_map,
-                   per_cpu(cpu_sibling_mask, i));
-        if ( !cnt || cnt[i] <= cnt[cpu] )
-            cpu = i;
-    }
-
-    xfree(cnt);
-
-    return cpu;
-}
-
 bool_t domctl_lock_acquire(void)
 {
     /*
@@ -691,7 +643,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
                 continue;
 
             cpu = (i == 0) ?
-                default_vcpu0_location(online) :
+                cpumask_first(online) :
                 cpumask_cycle(d->vcpu[i-1]->processor, online);
 
             if ( alloc_vcpu(d, i, cpu) == NULL )
-- 
2.1.4

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
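For illustration only (not part of the patch above): the following is a
rough, untested sketch of what the fix described in the commentary might
look like if default_vcpu0_location() were kept rather than removed.  It
uses only the cpumask helpers already present in the removed code, starts
from the sibling map of cpumask_first(online) instead of pcpu 0, makes the
sibling checks also verify that the candidate cpu is in the online mask,
and drops the cpu_exclude_map test and the cpumask_or() noted above as
unnecessary.  The name fixed_vcpu0_location() is invented for the sketch.

static unsigned int fixed_vcpu0_location(cpumask_t *online)
{
    struct domain *d;
    struct vcpu *v;
    unsigned int i, cpu, sib, nr_cpus, *cnt;

    /* Count non-paused vcpus per pcpu, exactly as the removed code did. */
    nr_cpus = cpumask_last(&cpu_online_map) + 1;
    cnt = xzalloc_array(unsigned int, nr_cpus);
    if ( cnt )
    {
        rcu_read_lock(&domlist_read_lock);
        for_each_domain ( d )
            for_each_vcpu ( d, v )
                if ( !(v->pause_flags & VPF_down)
                     && ((cpu = v->processor) < nr_cpus) )
                    cnt[cpu]++;
        rcu_read_unlock(&domlist_read_lock);
    }

    /* Start from the first cpu actually in the pool, not from pcpu 0. */
    cpu = cpumask_first(online);
    /* Prefer a non-primary sibling of it, but only if that sibling is
     * itself in the online mask. */
    sib = cpumask_next(cpu, per_cpu(cpu_sibling_mask, cpu));
    if ( sib < nr_cpu_ids && cpumask_test_cpu(sib, online) )
        cpu = sib;

    for_each_cpu(i, online)
    {
        /*
         * Skip a primary thread whose non-primary sibling is also in the
         * pool; the sibling will be considered instead.
         */
        sib = cpumask_next(i, per_cpu(cpu_sibling_mask, i));
        if ( (i == cpumask_first(per_cpu(cpu_sibling_mask, i))) &&
             (sib < nr_cpu_ids) && cpumask_test_cpu(sib, online) )
            continue;
        if ( !cnt || cnt[i] <= cnt[cpu] )
            cpu = i;
    }

    xfree(cnt);

    return cpu;
}

Even with a fix along those lines the function would still ignore vcpu
hard and soft affinities, which is why simply removing it and letting the
schedulers do initial placement is the approach taken here.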