
Re: [Xen-devel] [PATCH v3 35/47] xen/sched: make vcpu_wake() and vcpu_sleep() core scheduling aware



On 25.09.19 15:07, Jürgen Groß wrote:
> On 24.09.19 13:55, Jan Beulich wrote:
>> On 14.09.2019 10:52, Juergen Gross wrote:
>>> @@ -765,16 +774,22 @@ void vcpu_wake(struct vcpu *v)
>>>   {
>>>       unsigned long flags;
>>>       spinlock_t *lock;
>>> +    struct sched_unit *unit = v->sched_unit;
>>>       TRACE_2D(TRC_SCHED_WAKE, v->domain->domain_id, v->vcpu_id);
>>> -    lock = unit_schedule_lock_irqsave(v->sched_unit, &flags);
>>> +    lock = unit_schedule_lock_irqsave(unit, &flags);
>>>       if ( likely(vcpu_runnable(v)) )
>>>       {
>>>           if ( v->runstate.state >= RUNSTATE_blocked )
>>>               vcpu_runstate_change(v, RUNSTATE_runnable, NOW());
>>> -        sched_wake(vcpu_scheduler(v), v->sched_unit);
>>> +        sched_wake(vcpu_scheduler(v), unit);
>>
>> Is this correct / necessary when the unit is not asleep as a whole?
>> After all the corresponding sched_sleep() further up is called
>> conditionally only.
>
> Oh, indeed. Will change that.

It turned out this is not as easy as it seemed.

Making the call conditional resulted in dom0 boot hangs, even when
running in cpu scheduling mode. I guess the reason is that a vcpu can
call do_poll(), which will try to put itself to sleep and in some cases
call vcpu_wake() again if the condition has already changed. In that
case we need the sched_wake() call even if the unit as a whole is still
running.
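
Just to illustrate the pattern - a rough sketch with a made-up helper
poll_condition_met(); the real logic is do_poll() in xen/common/schedule.c:

/* Sketch only - not the actual do_poll() code. */
static void poll_self_wake_sketch(struct vcpu *v)
{
    /* The vcpu announces that it is about to block... */
    set_bit(_VPF_blocked, &v->pause_flags);

    smp_mb();   /* re-check the condition only after the flag is visible */

    /* ...but the condition may already have changed, so it wakes itself. */
    if ( poll_condition_met(v) )          /* hypothetical helper */
    {
        clear_bit(_VPF_blocked, &v->pause_flags);
        vcpu_wake(v);
        /*
         * The unit as a whole never went to sleep here, so a sched_wake()
         * done only when the whole unit was asleep would be skipped, while
         * the scheduler still needs this wake-up for the vcpu.
         */
    }
}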


>>> @@ -2067,9 +2160,29 @@ static void sched_slave(void)
>>>       now = NOW();
>>> +    v = unit2vcpu_cpu(prev, cpu);
>>> +    if ( v && v->force_context_switch )
>>> +    {
>>> +        v = sched_force_context_switch(vprev, v, cpu, now);
>>> +
>>> +        if ( v )
>>> +        {
>>> +            pcpu_schedule_unlock_irq(lock, cpu);
>>
>> I can't figure what it is that guarantees that this unlock isn't
>> going to be followed ...
>>
>>> +            sched_context_switch(vprev, v, false, now);
>>> +        }
>>> +
>>> +        do_softirq = true;
>>> +    }
>>> +
>>>       if ( !prev->rendezvous_in_cnt )
>>>       {
>>>           pcpu_schedule_unlock_irq(lock, cpu);
>>
>> ... by another unlock here. Or wait - is sched_context_switch()
>> (and perhaps other functions involved there) lacking a "noreturn"
>> annotation?
>
> Indeed it is. Like context_switch() today. :-)
>
> I'll annotate the functions.

And now I discovered that on ARM context_switch() is _not_ "noreturn".
So thanks for noticing that problem. I have fixed it in order to avoid
a latent issue in case we want to support core scheduling on ARM some
day (and yes: that would only have been a problem in core mode).
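
For reference, roughly what the annotation looks like (the prototypes and
parameter names here are only illustrative, matching the call quoted above;
"noreturn" is the attribute macro from Xen's compiler.h):

/* Illustrative prototypes only - the exact signatures are the patch's. */
void noreturn sched_context_switch(struct vcpu *vprev, struct vcpu *vnext,
                                   bool reset_idle_unit, s_time_t now);
void noreturn context_switch(struct vcpu *prev, struct vcpu *next);

/*
 * With "noreturn" in place the compiler knows control never comes back
 * after sched_context_switch(vprev, v, false, now) in sched_slave(), so
 * that path cannot fall through to the second pcpu_schedule_unlock_irq().
 */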


Juergen
