
[Xen-changelog] [xen stable-4.1] SEDF: avoid gathering vCPU-s on pCPU0



commit 16c0caad7d44e80535d44c0690a691d22f74d378
Author:     Jan Beulich <jbeulich@xxxxxxxx>
AuthorDate: Tue Mar 12 16:29:11 2013 +0100
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Tue Mar 12 16:29:11 2013 +0100

    SEDF: avoid gathering vCPU-s on pCPU0
    
    The introduction of vcpu_force_reschedule() in 14320:215b799fa181 was
    incompatible with the SEDF scheduler: any vCPU using
    VCPUOP_stop_periodic_timer (e.g. any vCPU of halfway modern PV Linux
    guests) ends up on pCPU0 after that call. Obviously, running all PV
    guests' (and notably Dom0's) vCPU-s on pCPU0 causes problems for those
    guests sooner rather than later.
    
    So the main thing that was clearly wrong (and bogus from the beginning)
    was the use of cpumask_first() in sedf_pick_cpu(). It is being replaced
    by a construct that prefers to put the vCPU back on the pCPU that it
    was launched on.
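    
    As an illustration of the effect, here is a standalone sketch (not Xen
    code: the helpers below are simplified stand-ins for Xen's first_cpu(),
    cpus_weight() and cycle_cpu(), operating on a plain bitmask instead of
    a cpumask_t). With pCPUs 0-3 online and allowed, the old pick always
    returns pCPU0, while the new one cycles through the mask based on the
    vCPU ID.
    
        /* Simplified model of the pCPU pick, using an int as the mask. */
        #include <stdio.h>
        
        #define NR_CPUS 8
        
        static int first_cpu(unsigned int mask)
        {
            for ( int cpu = 0; cpu < NR_CPUS; cpu++ )
                if ( mask & (1u << cpu) )
                    return cpu;
            return NR_CPUS;
        }
        
        static int cpus_weight(unsigned int mask)
        {
            return __builtin_popcount(mask);
        }
        
        /* Next CPU set in the mask strictly after 'n', wrapping around. */
        static int cycle_cpu(int n, unsigned int mask)
        {
            for ( int i = 1; i <= NR_CPUS; i++ )
            {
                int cpu = (n + i + NR_CPUS) % NR_CPUS;
                if ( mask & (1u << cpu) )
                    return cpu;
            }
            return NR_CPUS;
        }
        
        int main(void)
        {
            unsigned int online_affinity = 0x0f; /* pCPUs 0-3 */
        
            for ( int vcpu_id = 0; vcpu_id < 6; vcpu_id++ )
                printf("vCPU%d: old pick pCPU%d, new pick pCPU%d\n",
                       vcpu_id, first_cpu(online_affinity),
                       cycle_cpu(vcpu_id % cpus_weight(online_affinity) - 1,
                                 online_affinity));
            return 0;
        }
    
    Run against such a 4-pCPU mask, the old pick is pCPU0 for every vCPU,
    while the new pick yields pCPU0,1,2,3,0,1 for vCPU0..5, i.e. the
    vCPU-s get spread instead of gathered.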
    
    However, there's one more glitch: when reducing the affinity of a vCPU
    temporarily, and then widening it again to a set that includes the pCPU
    that the vCPU was last running on, the generic scheduler code would not
    force a migration of that vCPU, and hence it would forever stay on the
    pCPU it last ran on. Since that can again create a load imbalance, the
    SEDF scheduler wants a migration to happen even when it appears to be
    unnecessary.
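    
    To see why the generic check alone is not enough, consider this
    standalone sketch (not Xen code; a plain bitmask stands in for
    cpumask_t and cpu_isset() is a simplified stand-in for Xen's):
    
        /* Model of the generic "migrate only if the current pCPU fell
         * out of the affinity mask" check. */
        #include <stdio.h>
        #include <stdbool.h>
        
        static bool cpu_isset(int cpu, unsigned int mask)
        {
            return (mask >> cpu) & 1u;
        }
        
        int main(void)
        {
            int processor = 2;            /* pCPU the vCPU last ran on */
            unsigned int affinity = 0x04; /* temporarily narrowed to {2} */
        
            /* Widen the affinity again to all of pCPUs 0-3. */
            affinity = 0x0f;
        
            if ( !cpu_isset(processor, affinity) )
                printf("migration forced\n");
            else
                printf("no migration: pCPU%d is still allowed, so the "
                       "vCPU stays put\n", processor);
            return 0;
        }
    
    Since pCPU2 is still in the widened mask, the check never fires, and
    without the SEDF-specific forcing in the patch below the vCPU would
    never be redistributed.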
    
    Of course, an alternative to checking for SEDF explicitly in
    vcpu_set_affinity() would be to introduce a flags field in struct
    scheduler, and have SEDF set an "always-migrate-on-affinity-change"
    flag.
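    
    A minimal sketch of that alternative (all names, flags and types here
    are hypothetical stand-ins, not part of the actual Xen tree):
    
        /* Hypothetical "always migrate on affinity change" flag. */
        #include <stdio.h>
        
        #define SCHED_FLAG_MIGRATE_ON_AFFINITY_CHANGE (1u << 0)
        
        struct scheduler {
            const char *name;
            unsigned int flags;
        };
        
        /* vcpu_set_affinity() would then test the flag instead of
         * comparing sched_id against XEN_SCHEDULER_SEDF. */
        static int needs_migration(const struct scheduler *ops,
                                   int processor, unsigned int affinity)
        {
            return (ops->flags & SCHED_FLAG_MIGRATE_ON_AFFINITY_CHANGE) ||
                   !((affinity >> processor) & 1u);
        }
        
        int main(void)
        {
            struct scheduler sedf = {
                "sedf", SCHED_FLAG_MIGRATE_ON_AFFINITY_CHANGE
            };
            struct scheduler credit = { "credit", 0 };
        
            /* pCPU2 stays inside the widened affinity mask {0,1,2,3}. */
            printf("sedf:   migrate=%d\n", needs_migration(&sedf, 2, 0x0f));
            printf("credit: migrate=%d\n", needs_migration(&credit, 2, 0x0f));
            return 0;
        }
    
    With such a flag, the scheduler-specific knowledge would stay inside
    sched_sedf.c instead of leaking into the generic schedule.c code.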
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Acked-by: Keir Fraser <keir@xxxxxxx>
    master changeset: e6a6fd63652814e5c36a0016c082032f798ced1f
    master date: 2013-03-04 10:17:52 +0100
---
 xen/common/sched_sedf.c |    3 ++-
 xen/common/schedule.c   |    3 ++-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/xen/common/sched_sedf.c b/xen/common/sched_sedf.c
index 70a1bc8..b1d6a2a 100644
--- a/xen/common/sched_sedf.c
+++ b/xen/common/sched_sedf.c
@@ -452,7 +452,8 @@ static int sedf_pick_cpu(const struct scheduler *ops, struct vcpu *v)
 
     online = SEDF_CPUONLINE(v->domain->cpupool);
     cpus_and(online_affinity, v->cpu_affinity, *online);
-    return first_cpu(online_affinity);
+    return cycle_cpu(v->vcpu_id % cpus_weight(online_affinity) - 1,
+                     online_affinity);
 }
 
 /*
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index a116a71..001da3d 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -628,7 +628,8 @@ int vcpu_set_affinity(struct vcpu *v, cpumask_t *affinity)
     old_affinity = v->cpu_affinity;
     v->cpu_affinity = *affinity;
     *affinity = old_affinity;
-    if ( !cpu_isset(v->processor, v->cpu_affinity) )
+    if ( VCPU2OP(v)->sched_id == XEN_SCHEDULER_SEDF ||
+         !cpu_isset(v->processor, v->cpu_affinity) )
         set_bit(_VPF_migrating, &v->pause_flags);
 
     vcpu_schedule_unlock_irq(v);
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.1

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog