
[Xen-changelog] [xen stable-4.5] xen: Have schedulers revise initial placement



commit 2e56416a7a1c4c9c98452464f827dd792164c262
Author:     George Dunlap <george.dunlap@xxxxxxxxxx>
AuthorDate: Tue Sep 6 12:11:28 2016 +0200
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Tue Sep 6 12:11:28 2016 +0200

    xen: Have schedulers revise initial placement
    
    The generic domain creation logic in
    xen/common/domctl.c:default_vcpu0_location() attempts to do initial
    placement load-balancing by placing vcpu 0 on the least-busy
    non-primary hyperthread available.  Unfortunately, the logic can end
    up picking a pcpu that's not in the online mask.  When this is passed
    to a scheduler such as credit2, which assumes that the initial
    assignment is valid, it causes a null pointer dereference looking up
    the runqueue.
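
    As an illustration only (hypothetical names and structures, not the
    actual Xen code), the failure mode looks roughly like this: the
    placement helper picks the least-busy pcpu without consulting the
    online mask, and the scheduler's per-pcpu runqueue lookup for that
    pcpu then dereferences NULL:

        #include <stdio.h>
        #include <stdlib.h>

        #define NR_CPUS 4

        struct runqueue { int nr_tasks; };

        /* Runqueues exist only for online pcpus; others stay NULL. */
        static struct runqueue *runq[NR_CPUS];
        static int cpu_online[NR_CPUS] = { 1, 1, 0, 0 };  /* 2,3 offline */
        static int cpu_load[NR_CPUS]   = { 5, 3, 0, 0 };

        /* Buggy placement: least-busy cpu, ignoring the online mask. */
        static int pick_least_busy(void)
        {
            int cpu, best = 0;

            for ( cpu = 1; cpu < NR_CPUS; cpu++ )
                if ( cpu_load[cpu] < cpu_load[best] )
                    best = cpu;
            return best;                /* may be an offline cpu */
        }

        int main(void)
        {
            int cpu;

            for ( cpu = 0; cpu < NR_CPUS; cpu++ )
                if ( cpu_online[cpu] )
                    runq[cpu] = calloc(1, sizeof(*runq[cpu]));

            cpu = pick_least_busy();    /* returns 2, which is offline */
            printf("placing vcpu0 on cpu%d\n", cpu);
            runq[cpu]->nr_tasks++;      /* NULL dereference, as above */
            return 0;
        }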
    
    Furthermore, this initial placement doesn't take into account hard or
    soft affinity, or any scheduler-specific knowledge (such as historic
    runqueue load, as in credit2).
    
    To solve this, when inserting a vcpu, always call the per-scheduler
    "pick" function to revise the initial placement.  This will
    automatically take all knowledge the scheduler has into account.
    
    csched2_cpu_pick ASSERTs that the vcpu's pcpu scheduler lock has been
    taken.  Grab and release the lock to minimize time spent with irqs
    disabled.
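
    To see why the lock is grabbed and then released, note that the
    scheduler lock is per-pcpu: vcpu_schedule_lock_irq() takes the lock
    of the vcpu's current processor, so after the pick has updated
    vc->processor the second acquisition may be for a different lock
    than the first.  A minimal sketch of this pattern follows, with
    hypothetical names and plain mutexes rather than Xen's irq-disabling
    scheduler locks:

        #include <pthread.h>
        #include <stdio.h>

        #define NR_CPUS 4

        static pthread_mutex_t pcpu_lock[NR_CPUS] = {
            PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
            PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER
        };

        struct vcpu { int processor; };

        /* Lock whichever pcpu the vcpu currently points at. */
        static pthread_mutex_t *vcpu_lock(struct vcpu *v)
        {
            pthread_mutex_t *l = &pcpu_lock[v->processor];

            pthread_mutex_lock(l);
            return l;
        }

        /* Stand-in for the scheduler's pick; runs under the lock. */
        static int pick_cpu(struct vcpu *v)
        {
            return (v->processor + 1) % NR_CPUS;
        }

        static void vcpu_insert(struct vcpu *v)
        {
            pthread_mutex_t *l = vcpu_lock(v);  /* old pcpu's lock   */

            v->processor = pick_cpu(v);         /* placement revised */
            pthread_mutex_unlock(l);            /* drop the old lock */

            l = vcpu_lock(v);                   /* new pcpu's lock   */
            printf("queueing vcpu on cpu%d\n", v->processor);
            pthread_mutex_unlock(l);
        }

        int main(void)
        {
            struct vcpu v = { .processor = 0 };

            vcpu_insert(&v);
            return 0;
        }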
    
    Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxx>
    Reviewed-by: Meng Xu <mengxu@xxxxxxxxxxxxx>
    Reviewed-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
    master commit: 9f358ddd69463fa8fb65cf67beb5f6f0d3350e32
    master date: 2016-07-26 10:42:49 +0100
---
 xen/common/sched_credit.c  |  3 +++
 xen/common/sched_credit2.c | 10 +++++++++-
 xen/common/sched_rt.c      |  5 +++++
 3 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index 8a20f08..57d7558 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -893,6 +893,9 @@ csched_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
 
     BUG_ON( is_idle_vcpu(vc) );
 
+    /* This is safe because vc isn't yet being scheduled */
+    vc->processor = csched_cpu_pick(ops, vc);
+
     lock = vcpu_schedule_lock_irq(vc);
 
     if ( !__vcpu_on_runq(svc) && vcpu_runnable(vc) && !vc->is_running )
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 2ab0304..94c17e3 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -269,6 +269,7 @@ struct csched2_dom {
     uint16_t nr_vcpus;
 };
 
+static int csched2_cpu_pick(const struct scheduler *ops, struct vcpu *vc);
 
 /*
  * Time-to-credit, credit-to-time.
@@ -870,9 +871,16 @@ csched2_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
     /* FIXME: Do we need the private lock here? */
     list_add_tail(&svc->sdom_elem, &svc->sdom->vcpu);
 
-    /* Add vcpu to runqueue of initial processor */
+    /* csched2_cpu_pick() expects the pcpu lock to be held */
+    lock = vcpu_schedule_lock_irq(vc);
+
+    vc->processor = csched2_cpu_pick(ops, vc);
+
+    spin_unlock_irq(lock);
+
     lock = vcpu_schedule_lock_irq(vc);
 
+    /* Add vcpu to runqueue of initial processor */
     runq_assign(ops, vc);
 
     vcpu_schedule_unlock_irq(lock, vc);
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index e50041c..c1256d8 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -169,6 +169,8 @@ struct rt_dom {
     struct domain *dom;         /* pointer to upper domain */
 };
 
+static int rt_cpu_pick(const struct scheduler *ops, struct vcpu *vc);
+
 /*
  * Useful inline functions
  */
@@ -552,6 +554,9 @@ rt_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
 
     BUG_ON( is_idle_vcpu(vc) );
 
+    /* This is safe because vc isn't yet being scheduled */
+    vc->processor = rt_cpu_pick(ops, vc);
+
     lock = vcpu_schedule_lock_irq(vc);
 
     now = NOW();
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.5
