
[Xen-devel] [PATCH 01/16] xen: sched: fix locking when allocating an RTDS pCPU

as doing that involves changing the scheduler lock
mapping for the pCPU itself, and the correct way
of doing that is:
 - take the lock that the pCPU is using right now
   (which may be the lock of another scheduler);
 - change the mapping of the lock to the RTDS one;
 - release the lock (the one that has actually been
   taken!).

Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
Cc: Meng Xu <mengxu@xxxxxxxxxxxxx>
Cc: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
Cc: Tianyang Chen <tiche@xxxxxxxxxxxxxx>
 xen/common/sched_rt.c |    9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index c896a6f..d98bfb6 100644
--- a/xen/common/sched_rt.c
+++ b/xen/common/sched_rt.c
@@ -653,11 +653,16 @@ static void *
 rt_alloc_pdata(const struct scheduler *ops, int cpu)
 {
     struct rt_private *prv = rt_priv(ops);
+    spinlock_t *old_lock;
     unsigned long flags;
 
-    spin_lock_irqsave(&prv->lock, flags);
+    /* Move the scheduler lock to our global runqueue lock.  */
+    old_lock = pcpu_schedule_lock_irqsave(cpu, &flags);
+
     per_cpu(schedule_data, cpu).schedule_lock = &prv->lock;
-    spin_unlock_irqrestore(&prv->lock, flags);
+
+    /* _Not_ pcpu_schedule_unlock(): per_cpu().schedule_lock changed! */
+    spin_unlock_irqrestore(old_lock, flags);
 
     if ( !alloc_cpumask_var(&_cpumask_scratch[cpu]) )
         return NULL;
