
[Xen-devel] [PATCH] xen: idle_loop: either deal with tasklets or go idle



In fact, there exist two kinds of tasklets: vCPU
context and softirq context tasklets. When we want to
do vCPU context tasklet work, we force the idle vCPU
(of a particular pCPU) into execution, and run the
work from there.

This means there are two possible reasons for choosing
to run the idle vCPU:
1) we want a pCPU to go idle,
2) we want to run some vCPU context tasklet work.

If we're in case 2), it does not make sense to first
check whether we can go idle, and only afterwards (as
the check _will_ fail) go and process the tasklets.

This patch rearranges the body of the idle vCPUs'
loop, so that we actually check whether we are in
case 1) or 2), and act accordingly.

As a matter of fact, this also means that we no longer
check for tasklet work to be done after waking up
from idle. This is not a problem, because:
a) for softirq context tasklets, if any is queued
   "during" wakeup from idle, TASKLET_SOFTIRQ is
   raised, and the call to do_softirq() (which is still
   happening *after* the wakeup) will take care of it;
b) for vCPU context tasklets, if any is queued "during"
   wakeup from idle, SCHEDULE_SOFTIRQ is raised and
   do_softirq() (happening after the wakeup) calls
   the scheduler. The scheduler sees that there is
   tasklet work pending and confirms the idle vCPU
   in execution, which then will get to execute
   do_tasklet().

Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
---
Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>
Cc: Julien Grall <julien.grall@xxxxxxx>
Cc: Jan Beulich <jbeulich@xxxxxxxx>
Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Cc: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
---
 xen/arch/arm/domain.c     |   21 ++++++++++++++-------
 xen/arch/x86/domain.c     |   12 +++++++++---
 xen/common/tasklet.c      |   10 +---------
 xen/include/xen/tasklet.h |   12 +++++++++++-
 4 files changed, 35 insertions(+), 20 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 76310ed..0ceeb5b 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -41,20 +41,27 @@ DEFINE_PER_CPU(struct vcpu *, curr_vcpu);
 
 void idle_loop(void)
 {
+    unsigned int cpu = smp_processor_id();
+
     for ( ; ; )
     {
-        if ( cpu_is_offline(smp_processor_id()) )
+        if ( cpu_is_offline(cpu) )
             stop_cpu();
 
-        local_irq_disable();
-        if ( cpu_is_haltable(smp_processor_id()) )
+        /* Are we here for running vcpu context tasklets, or for idling? */
+        if ( unlikely(tasklet_work_to_do(cpu)) )
+            do_tasklet(cpu);
+        else
         {
-            dsb(sy);
-            wfi();
+            local_irq_disable();
+            if ( cpu_is_haltable(cpu) )
+            {
+                dsb(sy);
+                wfi();
+            }
+            local_irq_enable();
         }
-        local_irq_enable();
 
-        do_tasklet();
         do_softirq();
         /*
          * We MUST be last (or before dsb, wfi). Otherwise after we get the
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 49388f4..d06700d 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -112,12 +112,18 @@ static void play_dead(void)
 
 static void idle_loop(void)
 {
+    unsigned int cpu = smp_processor_id();
+
     for ( ; ; )
     {
-        if ( cpu_is_offline(smp_processor_id()) )
+        if ( cpu_is_offline(cpu) )
             play_dead();
-        (*pm_idle)();
-        do_tasklet();
+
+        /* Are we here for running vcpu context tasklets, or for idling? */
+        if ( unlikely(tasklet_work_to_do(cpu)) )
+            do_tasklet(cpu);
+        else
+            (*pm_idle)();
         do_softirq();
         /*
          * We MUST be last (or before pm_idle). Otherwise after we get the
diff --git a/xen/common/tasklet.c b/xen/common/tasklet.c
index 365a777..0465751 100644
--- a/xen/common/tasklet.c
+++ b/xen/common/tasklet.c
@@ -104,19 +104,11 @@ static void do_tasklet_work(unsigned int cpu, struct list_head *list)
 }
 
 /* VCPU context work */
-void do_tasklet(void)
+void do_tasklet(unsigned int cpu)
 {
-    unsigned int cpu = smp_processor_id();
     unsigned long *work_to_do = &per_cpu(tasklet_work_to_do, cpu);
     struct list_head *list = &per_cpu(tasklet_list, cpu);
 
-    /*
-     * Work must be enqueued *and* scheduled. Otherwise there is no work to
-     * do, and/or scheduler needs to run to update idle vcpu priority.
-     */
-    if ( likely(*work_to_do != (TASKLET_enqueued|TASKLET_scheduled)) )
-        return;
-
     spin_lock_irq(&tasklet_lock);
 
     do_tasklet_work(cpu, list);
diff --git a/xen/include/xen/tasklet.h b/xen/include/xen/tasklet.h
index 8c3de7e..1a3f861 100644
--- a/xen/include/xen/tasklet.h
+++ b/xen/include/xen/tasklet.h
@@ -40,9 +40,19 @@ DECLARE_PER_CPU(unsigned long, tasklet_work_to_do);
 #define TASKLET_enqueued   (1ul << _TASKLET_enqueued)
 #define TASKLET_scheduled  (1ul << _TASKLET_scheduled)
 
+static inline bool tasklet_work_to_do(unsigned int cpu)
+{
+    /*
+     * Work must be enqueued *and* scheduled. Otherwise there is no work to
+     * do, and/or scheduler needs to run to update idle vcpu priority.
+     */
+    return per_cpu(tasklet_work_to_do, cpu) == (TASKLET_enqueued|
+                                                TASKLET_scheduled);
+}
+
 void tasklet_schedule_on_cpu(struct tasklet *t, unsigned int cpu);
 void tasklet_schedule(struct tasklet *t);
-void do_tasklet(void);
+void do_tasklet(unsigned int cpu);
 void tasklet_kill(struct tasklet *t);
 void tasklet_init(
     struct tasklet *t, void (*func)(unsigned long), unsigned long data);


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

