
Re: [Xen-devel] [PATCH v2 3/6] xen: sched: clarify use cases of schedule_cpu_switch()



On Thu, 2015-10-15 at 10:25 +0200, Juergen Gross wrote:
> Maybe it would be a good idea to move setting of per_cpu(cpupool, cpu)
> into schedule_cpu_switch(). Originally I didn't do that to avoid
> spreading too much cpupool related actions outside of cpupool.c. But
> with those ASSERT()s added, hiding that action will cause more
> confusion than having the modification of per_cpu(cpupool, cpu) here.
> 
Coming back to this.

When reworking the series, I tried to move 'per_cpu(cpupool, cpu) = c'
into schedule_cpu_switch() and, regarding this part:

> When doing the code movement the current behaviour regarding the
> sequence of changes must be kept, of course. So when adding the cpu
> to a pool, the cpupool information must be set _before_ taking the
> scheduler lock, while when removing it must happen after releasing
> the lock.
> 
I don't see why I can't just do as in the attached patch (i.e., update
the cpupool per-cpu variable at the bottom of
schedule_cpu_switch()).

I don't see anything in the various schedulers' code that relies on
that variable, except for one use in sched_credit.c, which looks safe
to me. And indeed, I think that any future code added there should
either not depend on it, or depend on it "safely".
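
To illustrate what I mean by "safely" (this is just a sketch of the
pattern, not verbatim sched_credit.c code): a scheduler that reads
per_cpu(cpupool) only to derive a cpumask can tolerate the variable
being updated at the bottom of schedule_cpu_switch(), as a momentarily
stale pool pointer just yields a conservative mask:

    /* Hypothetical sketch, not a verbatim quote of sched_credit.c:
     * the pool pointer is only used to compute the set of online
     * pCPUs, so a read racing with schedule_cpu_switch() at worst
     * produces a slightly stale (and hence conservative) mask. Note
     * that cpupool_online_cpumask() already copes with a NULL pool. */
    static const cpumask_t *example_online_mask(unsigned int cpu)
    {
        struct cpupool *c = per_cpu(cpupool, cpu);

        return cpupool_online_cpumask(c);
    }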

Also, all the operations done in schedule_cpu_switch() itself depend
either on per_cpu(scheduler) or on per_cpu(schedule_data) being updated
properly, rather than on per_cpu(cpupool) (including the locking you
are mentioning above).
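
In fact, paraphrasing the critical section of schedule_cpu_switch()
(a sketch from the current code, slightly abridged), the lock is
reached through per_cpu(schedule_data, cpu), and what gets swapped
under it is per_cpu(scheduler) and the sched_priv pointers;
per_cpu(cpupool) plays no role in it:

    /* The per-CPU scheduler lock lives behind schedule_data, not
     * behind per_cpu(cpupool), so for the locking it does not matter
     * whether the cpupool pointer is updated before or after this
     * critical section. */
    lock = pcpu_schedule_lock_irqsave(cpu, &flags);

    per_cpu(scheduler, cpu) = new_ops;
    ppriv_old = per_cpu(schedule_data, cpu).sched_priv;
    per_cpu(schedule_data, cpu).sched_priv = ppriv;

    pcpu_schedule_unlock_irqrestore(lock, flags, cpu);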

What am I missing?

Regards,
Dario
---
diff --git a/xen/common/cpupool.c b/xen/common/cpupool.c
index e79850b..8e7b723 100644
--- a/xen/common/cpupool.c
+++ b/xen/common/cpupool.c
@@ -261,19 +261,13 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c)
 static int cpupool_assign_cpu_locked(struct cpupool *c, unsigned int cpu)
 {
     int ret;
-    struct cpupool *old;
     struct domain *d;
 
     if ( (cpupool_moving_cpu == cpu) && (c != cpupool_cpu_moving) )
         return -EBUSY;
-    old = per_cpu(cpupool, cpu);
-    per_cpu(cpupool, cpu) = c;
     ret = schedule_cpu_switch(cpu, c);
     if ( ret )
-    {
-        per_cpu(cpupool, cpu) = old;
         return ret;
-    }
 
     cpumask_clear_cpu(cpu, &cpupool_free_cpus);
     if (cpupool_moving_cpu == cpu)
@@ -326,7 +320,6 @@ static long cpupool_unassign_cpu_helper(void *info)
             cpumask_clear_cpu(cpu, &cpupool_free_cpus);
             goto out;
         }
-        per_cpu(cpupool, cpu) = NULL;
         cpupool_moving_cpu = -1;
         cpupool_put(cpupool_cpu_moving);
         cpupool_cpu_moving = NULL;
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index d7e2d98..9072540 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -1486,6 +1486,17 @@ void __init scheduler_init(void)
         BUG();
 }
 
+/*
+ * Move a pCPU outside of the influence of the scheduler of its current
+ * cpupool, or subject it to the scheduler of a new cpupool.
+ *
+ * For the pCPUs that are removed from their cpupool, their scheduler becomes
+ * &ops (the default scheduler, selected at boot, which also services the
+ * default cpupool). However, as these pCPUs are not really part of any pool,
+ * there won't be any scheduling event on them, not even from the default
+ * scheduler. Basically, they will just sit idle until they are explicitly
+ * added back to a cpupool.
+ */
 int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
 {
     unsigned long flags;
@@ -1494,9 +1505,24 @@ int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
     void *ppriv, *ppriv_old, *vpriv, *vpriv_old;
     struct scheduler *old_ops = per_cpu(scheduler, cpu);
     struct scheduler *new_ops = (c == NULL) ? &ops : c->sched;
+    struct cpupool *old_pool = per_cpu(cpupool, cpu);
+
+    /*
+     * pCPUs only move from a valid cpupool to free (i.e., out of any pool),
+     * or from free to a valid cpupool. In the former case (which happens when
+     * c is NULL), we want the CPU to have been marked as free already, as
+     * well as to not be valid for the source pool any longer, when we get to
+     * here. In the latter case (which happens when c is a valid cpupool), we
+     * want the CPU to still be marked as free, as well as to not yet be valid
+     * for the destination pool.
+     */
+    ASSERT(c != old_pool && (c != NULL || old_pool != NULL));
+    ASSERT(cpumask_test_cpu(cpu, &cpupool_free_cpus));
+    ASSERT((c == NULL && !cpumask_test_cpu(cpu, old_pool->cpu_valid)) ||
+           (c != NULL && !cpumask_test_cpu(cpu, c->cpu_valid)));
 
     if ( old_ops == new_ops )
-        return 0;
+        goto out;
 
     idle = idle_vcpu[cpu];
     ppriv = SCHED_OP(new_ops, alloc_pdata, cpu);
@@ -1524,6 +1550,9 @@ int schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
     SCHED_OP(old_ops, free_vdata, vpriv_old);
     SCHED_OP(old_ops, free_pdata, ppriv_old, cpu);
 
+ out:
+    per_cpu(cpupool, cpu) = c;
+
     return 0;
 }
 
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

