
[Xen-changelog] [xen staging] xen/sched: Re-position the domain_update_node_affinity() call during vcpu construction



commit 1dfb8e6e0948912d1fd96d6ed9034527c5c74f31
Author:     Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
AuthorDate: Thu Sep 6 14:40:56 2018 +0100
Commit:     Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
CommitDate: Tue Sep 11 17:34:35 2018 +0100

    xen/sched: Re-position the domain_update_node_affinity() call during vcpu construction
    
    alloc_vcpu()'s call to domain_update_node_affinity() has existed for a
    decade, but its effort is mostly wasted.
    
    alloc_vcpu() is called in a loop for each vcpu, bringing them into
    existence.  The values of the affinity masks are still default, which is
    all cpus in general, or a processor singleton for pinned domains.
    
    Furthermore, domain_update_node_affinity() itself loops over all vcpus
    accumulating the masks, making it quadratic with the number of vcpus.
    
    Move it to be called once after all vcpus are constructed, which has the
    same net effect, but with fewer intermediate memory allocations and less
    cpumask arithmetic.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    Acked-by: Julien Grall <julien.grall@xxxxxxx>
    Reviewed-by: Dario Faggioli <dfaggioli@xxxxxxxx>
---
 xen/arch/arm/domain_build.c   | 2 ++
 xen/arch/x86/hvm/dom0_build.c | 2 ++
 xen/arch/x86/pv/dom0_build.c  | 1 +
 xen/common/domain.c           | 3 ---
 xen/common/domctl.c           | 1 +
 5 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index d4dabc7bea..af941e1982 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2242,6 +2242,8 @@ int __init construct_dom0(struct domain *d)
             vcpu_switch_to_aarch64_mode(d->vcpu[i]);
     }
 
+    domain_update_node_affinity(d);
+
     v->is_initialised = 1;
     clear_bit(_VPF_down, &v->pause_flags);
 
diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index 90f70ec60a..5724883d8c 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -600,6 +600,8 @@ static int __init pvh_setup_cpus(struct domain *d, paddr_t entry,
             cpu = p->processor;
     }
 
+    domain_update_node_affinity(d);
+
     rc = arch_set_info_hvm_guest(v, &cpu_ctx);
     if ( rc )
     {
diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index 976ba8d16b..21d262b62b 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -709,6 +709,7 @@ int __init dom0_construct_pv(struct domain *d,
             cpu = p->processor;
     }
 
+    domain_update_node_affinity(d);
     d->arch.paging.mode = 0;
 
     /* Set up CR3 value for write_ptbase */
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 9a541971dd..a043812687 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -193,9 +193,6 @@ struct vcpu *alloc_vcpu(
     /* Must be called after making new vcpu visible to for_each_vcpu(). */
     vcpu_check_shutdown(v);
 
-    if ( !is_idle_domain(d) )
-        domain_update_node_affinity(d);
-
     return v;
 }
 
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index ed047b7cd7..3df41ad833 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -575,6 +575,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) 
u_domctl)
                 goto maxvcpu_out;
         }
 
+        domain_update_node_affinity(d);
         ret = 0;
 
     maxvcpu_out:
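
The snippet below is not Xen code.  It is a minimal, self-contained C sketch
(using a hypothetical struct domain / struct vcpu pair, a plain 64-bit mask
in place of a cpumask, and a stand-in model_update_node_affinity() helper)
meant only to illustrate why the old per-vcpu call pattern was quadratic in
the number of vcpus, and why a single call after all vcpus are constructed
has the same net effect.

/*
 * Minimal model of the call pattern this patch changes.  All types and
 * helper names are simplified stand-ins, not the real Xen structures.
 */
#include <stdio.h>
#include <stdint.h>

#define MAX_VCPUS 8

struct vcpu {
    unsigned int vcpu_id;
    uint64_t cpu_hard_affinity;       /* stand-in for a cpumask */
};

struct domain {
    unsigned int max_vcpus;
    struct vcpu *vcpu[MAX_VCPUS];
    uint64_t node_affinity;           /* stand-in for the domain-wide mask */
};

/*
 * Stand-in for domain_update_node_affinity(): walk every existing vcpu and
 * accumulate the affinity masks into the domain-wide value, i.e. O(n) work.
 */
static void model_update_node_affinity(struct domain *d)
{
    uint64_t accum = 0;

    for ( unsigned int i = 0; i < d->max_vcpus; i++ )
        if ( d->vcpu[i] )
            accum |= d->vcpu[i]->cpu_hard_affinity;

    d->node_affinity = accum;
}

int main(void)
{
    static struct vcpu vcpus[MAX_VCPUS];
    struct domain dom = { .max_vcpus = MAX_VCPUS };

    for ( unsigned int i = 0; i < MAX_VCPUS; i++ )
    {
        vcpus[i].vcpu_id = i;
        vcpus[i].cpu_hard_affinity = ~0ULL;   /* default: all cpus */
        dom.vcpu[i] = &vcpus[i];
        /*
         * Old pattern: the O(n) update above ran here, once per vcpu,
         * making vcpu construction O(n^2) overall, even though every
         * mask still holds its default value at this point.
         */
    }

    /* New pattern: a single post-loop update with the same net result. */
    model_update_node_affinity(&dom);

    printf("node affinity mask: %#llx\n",
           (unsigned long long)dom.node_affinity);

    return 0;
}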
--
generated by git-patchbot for /home/xen/git/xen.git#staging
