[Xen-devel] [PATCH] xen/sched: Re-position the domain_update_node_affinity() call during vcpu construction
alloc_vcpu()'s call to domain_update_node_affinity() has existed for a
decade, but its effort is mostly wasted.

alloc_vcpu() is called in a loop for each vcpu, bringing them into
existence.  The values of the affinity masks are still default, which is
all cpus in general, or a processor singleton for pinned domains.

Furthermore, domain_update_node_affinity() itself loops over all vcpus
accumulating the masks, making it a scalability concern with large
numbers of vcpus.

Move it to be called once after all vcpus are constructed, which has the
same net effect, but with fewer intermediate memory allocations and less
cpumask arithmetic.

Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
---
CC: Jan Beulich <JBeulich@xxxxxxxx>
CC: Wei Liu <wei.liu2@xxxxxxxxxx>
CC: Roger Pau Monné <roger.pau@xxxxxxxxxx>
CC: Stefano Stabellini <sstabellini@xxxxxxxxxx>
CC: Julien Grall <julien.grall@xxxxxxx>
CC: Dario Faggioli <dfaggioli@xxxxxxxx>

This perhaps wants backporting to the maintenance trees, which is why
I've rebased it backwards over my other construction changes.
---
 xen/arch/arm/domain_build.c   | 2 ++
 xen/arch/x86/hvm/dom0_build.c | 2 ++
 xen/arch/x86/pv/dom0_build.c  | 1 +
 xen/common/domain.c           | 3 ---
 xen/common/domctl.c           | 1 +
 5 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 2a383c8..5389217 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2242,6 +2242,8 @@ int __init construct_dom0(struct domain *d)
             vcpu_switch_to_aarch64_mode(d->vcpu[i]);
     }
 
+    domain_update_node_affinity(d);
+
     v->is_initialised = 1;
     clear_bit(_VPF_down, &v->pause_flags);
 
diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index 22e335f..c63d7f0 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -600,6 +600,8 @@ static int __init pvh_setup_cpus(struct domain *d, paddr_t entry,
         cpu = p->processor;
     }
 
+    domain_update_node_affinity(d);
+
     rc = arch_set_info_hvm_guest(v, &cpu_ctx);
     if ( rc )
     {
diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index 96ff0ee..44418b2 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -709,6 +709,7 @@ int __init dom0_construct_pv(struct domain *d,
         cpu = p->processor;
     }
 
+    domain_update_node_affinity(d);
     d->arch.paging.mode = 0;
 
     /* Set up CR3 value for write_ptbase */
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 78c450e..6229ba7 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -193,9 +193,6 @@ struct vcpu *alloc_vcpu(
     /* Must be called after making new vcpu visible to for_each_vcpu(). */
     vcpu_check_shutdown(v);
 
-    if ( !is_idle_domain(d) )
-        domain_update_node_affinity(d);
-
     return v;
 }
 
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index ee0983d..faf26e7 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -590,6 +590,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
             goto maxvcpu_out;
         }
 
+        domain_update_node_affinity(d);
         ret = 0;
 
     maxvcpu_out:
-- 
2.1.4
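
To illustrate the scalability argument in the commit message, here is a
minimal standalone sketch of the pattern being removed.  This is not the
real Xen code: the types, the MAX_VCPUS bound, and the cpumask_or_into()
helper are simplified stand-ins.  The point is that the update walks every
existing vcpu, so calling it once per vcpu during construction (as
alloc_vcpu() used to) costs O(max_vcpus^2) cpumask work, while calling it
once after the construction loop costs O(max_vcpus).

#include <stddef.h>
#include <string.h>

#define NR_CPUS   256
#define MAX_VCPUS 128

/* Simplified stand-in for a cpumask: one bit per cpu. */
typedef struct {
    unsigned long bits[NR_CPUS / (8 * sizeof(unsigned long))];
} cpumask_t;

struct vcpu {
    cpumask_t hard_affinity;      /* cpus this vcpu may run on */
};

struct domain {
    unsigned int max_vcpus;
    struct vcpu *vcpu[MAX_VCPUS];
    cpumask_t node_affinity;      /* union of all vcpu affinity masks */
};

/* Hypothetical helper: dst |= src, word by word. */
static void cpumask_or_into(cpumask_t *dst, const cpumask_t *src)
{
    for ( size_t i = 0; i < sizeof(dst->bits) / sizeof(dst->bits[0]); i++ )
        dst->bits[i] |= src->bits[i];
}

/*
 * Sketch of the update: each call accumulates the affinity masks of
 * every vcpu constructed so far, making a single call O(max_vcpus).
 * Invoked from inside a per-vcpu construction loop, the total work
 * is quadratic in the number of vcpus.
 */
static void domain_update_node_affinity(struct domain *d)
{
    cpumask_t acc;
    unsigned int i;

    memset(&acc, 0, sizeof(acc));

    for ( i = 0; i < d->max_vcpus; i++ )
        if ( d->vcpu[i] != NULL )
            cpumask_or_into(&acc, &d->vcpu[i]->hard_affinity);

    d->node_affinity = acc;
}

Under this (simplified) model, hoisting the call out of the per-vcpu loop,
as each hunk above does, yields the same final node_affinity value: the
accumulated mask depends only on the set of vcpus that exist when the
last call is made, not on how many intermediate calls preceded it.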