
[Xen-changelog] [xen-unstable] cpuidle: fix the menu governor to enhance IO performance



# HG changeset patch
# User Keir Fraser <keir.fraser@xxxxxxxxxx>
# Date 1260777293 0
# Node ID 2d9c58c29a94033ad7e51c2ec23fc66a92a9e391
# Parent  3d505c9f1b7344e2debe4f1a905c6d42a179b93d
cpuidle: fix the menu governor to enhance IO performance

This is a revised version of Linux upstream commit
69d25870f20c4b2563304f2b79c5300dd60a067e:

"
    cpuidle: fix the menu governor to boost IO performance

    Fix the menu idle governor which balances power savings, energy
    efficiency and performance impact.

    The reason for a reworked governor is that there have been serious
    performance issues reported with the existing code on Nehalem server
    systems.

    To show this I'm sure Andrew wants to see benchmark results:
    (benchmark is "fio", "no cstates" is using "idle=poll")

                no cstates    current linux    new algorithm
    1 disk      107 Mb/s      85 Mb/s          105 Mb/s
    2 disks     215 Mb/s      123 Mb/s         209 Mb/s
    12 disks    590 Mb/s      320 Mb/s         585 Mb/s

    In various power benchmark measurements, no degradation was found by
    our measurement & diagnostics team. Obviously a small percentage more
    power was used in the "fio" benchmark, due to the much higher
    performance.

    Signed-off-by: Arjan van de Ven <arjan@xxxxxxxxxxxxxxx>
    Cc: Venkatesh Pallipadi <venkatesh.pallipadi@xxxxxxxxx>
    Cc: Len Brown <lenb@xxxxxxxxxx>
    Cc: Ingo Molnar <mingo@xxxxxxx>
    Cc: Peter Zijlstra <a.p.zijlstra@xxxxxxxxx>
    Cc: Yanmin Zhang <yanmin_zhang@xxxxxxxxxxxxxxx>
    Acked-by: Ingo Molnar <mingo@xxxxxxx>
    Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
    Signed-off-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
"

In the Xen version, most of the logic is similar, with one exception:
Linux uses nr_iowait and the load average to track pending I/O
requests, but these are not visible to Xen, so Xen uses the do_IRQ
frequency to estimate I/O pressure. This is less accurate than the
Linux approach; a better approach would be to convey the guest's
latency requirements to the hypervisor via virtual C states, which can
be a future enhancement.
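
As a rough illustration of this interrupt-frequency heuristic, here is
a minimal standalone C sketch of a DECAY-weighted running average of
interrupt intervals. It is illustrative only (the names and the driver
loop are made up for the example); the actual patch below decays an
irq count and a sampling duration separately in avg_intr_interval_us():

    #include <stdio.h>

    #define DECAY 4  /* keep 3/4 of the old estimate, mix in 1/4 new */

    /* running estimate of the interval between interrupts, in ns */
    static unsigned long long avg_interval_ns = 1000000000ULL; /* 1 sec */

    /* fold one observed inter-interrupt interval into the estimate */
    static void update_avg(unsigned long long interval_ns)
    {
        avg_interval_ns =
            (avg_interval_ns * (DECAY - 1) + interval_ns) / DECAY;
    }

    int main(void)
    {
        int i;

        /* a burst of disk I/O: interrupts arriving every 50 us */
        for (i = 0; i < 16; i++)
            update_avg(50000ULL);

        /*
         * A C state whose exit latency times the multiplier exceeds
         * this average would be rejected, mirroring the IO_MULTIPLIER
         * check in the patch.
         */
        printf("decayed avg interval: %llu ns\n", avg_interval_ns);
        return 0;
    }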

A detailed description of the algorithm is in the code comments. With
this new algorithm, fio benchmark performance improves by ~5% with 1
disk, and no power degradation is seen in the idle case.
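
To make the correction-factor arithmetic concrete, here is a small
standalone C example of the fixed-point update and prediction used by
the new governor (RESOLUTION and DECAY as in the patch; the scenario
values are made up for illustration):

    #include <stdio.h>

    #define RESOLUTION 1024
    #define DECAY 4

    int main(void)
    {
        unsigned long long factor = RESOLUTION * DECAY; /* unity */
        unsigned int expected_us = 1000;
        int i;

        /* suppose the actual idle time keeps being half the estimate */
        for (i = 0; i < 32; i++) {
            unsigned int measured_us = expected_us / 2;

            /* as in menu_reflect(): age the factor, mix in new ratio */
            factor = factor * (DECAY - 1) / DECAY
                + (unsigned long long)RESOLUTION * measured_us
                  / expected_us;
        }

        /* as in menu_select(): scale the estimate, rounding up halves */
        {
            unsigned long long predicted_us =
                (expected_us * factor + (RESOLUTION * DECAY) / 2)
                / (RESOLUTION * DECAY);
            /* converges to ~500 us, i.e. half of expected_us */
            printf("predicted_us = %llu\n", predicted_us);
        }
        return 0;
    }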

Signed-off-by: Yu Ke <ke.yu@xxxxxxxxx>
---
 xen/arch/x86/acpi/cpuidle_menu.c |  233 ++++++++++++++++++++++++++++++++-------
 xen/arch/x86/hpet.c              |    5 
 xen/arch/x86/irq.c               |    4 
 xen/include/asm-x86/irq.h        |    2 
 xen/include/xen/cpuidle.h        |    2 
 5 files changed, 206 insertions(+), 40 deletions(-)

diff -r 3d505c9f1b73 -r 2d9c58c29a94 xen/arch/x86/acpi/cpuidle_menu.c
--- a/xen/arch/x86/acpi/cpuidle_menu.c  Mon Dec 14 07:52:22 2009 +0000
+++ b/xen/arch/x86/acpi/cpuidle_menu.c  Mon Dec 14 07:54:53 2009 +0000
@@ -30,22 +30,146 @@
 #include <xen/acpi.h>
 #include <xen/timer.h>
 #include <xen/cpuidle.h>
-
-#define BREAK_FUZZ      4       /* 4 us */
-#define PRED_HISTORY_PCT   50
-#define USEC_PER_SEC 1000000
+#include <asm/irq.h>
+
+#define BUCKETS 6
+#define RESOLUTION 1024
+#define DECAY 4
+#define MAX_INTERESTING 50000
+
+/*
+ * Concepts and ideas behind the menu governor
+ *
+ * For the menu governor, there are 3 decision factors for picking a C
+ * state:
+ * 1) Energy break even point
+ * 2) Performance impact
+ * 3) Latency tolerance (TBD: from guest virtual C state)
+ * These three factors are treated independently.
+ *
+ * Energy break even point
+ * -----------------------
+ * C state entry and exit have an energy cost, and a certain amount of time in
+ * the  C state is required to actually break even on this cost. CPUIDLE
+ * provides us this duration in the "target_residency" field. So all that we
+ * need is a good prediction of how long we'll be idle. Like the traditional
+ * menu governor, we start with the actual known "next timer event" time.
+ *
+ * Since there are other sources of wakeups (interrupts for example) than
+ * the next timer event, this estimation is rather optimistic. To get a
+ * more realistic estimate, a correction factor is applied to the estimate,
+ * that is based on historic behavior. For example, if in the past the actual
+ * duration always was 50% of the next timer tick, the correction factor will
+ * be 0.5.
+ *
+ * menu uses a running average for this correction factor, however it uses a
+ * set of factors, not just a single factor. This stems from the realization
+ * that the ratio is dependent on the order of magnitude of the expected
+ * duration; if we expect 500 milliseconds of idle time the likelihood of
+ * getting an interrupt very early is much higher than if we expect 50
+ * microseconds of idle time.
+ * For this reason we keep an array of 6 independent factors that get
+ * indexed based on the magnitude of the expected duration.
+ *
+ * Limiting Performance Impact
+ * ---------------------------
+ * C states, especially those with large exit latencies, can have a real
+ * noticeable impact on workloads, which is not acceptable for most sysadmins,
+ * and in addition, less performance has a power price of its own.
+ *
+ * As a general rule of thumb, menu assumes that the following heuristic
+ * holds:
+ *     The busier the system, the less impact of C states is acceptable
+ *
+ * This rule-of-thumb is implemented using the average interrupt interval:
+ * if the exit latency times the multiplier is longer than the average
+ * interrupt interval, the C state is not considered a candidate for
+ * selection because its performance impact would be too high. So the
+ * smaller the average interrupt interval, the smaller the C state latency
+ * should be, and thus the less likely a busy CPU is to hit a deep C state.
+ *
+ */
+
+struct perf_factor{
+    s_time_t    time_stamp;
+    s_time_t    duration;
+    unsigned int irq_count_stamp;
+    unsigned int irq_sum;
+};
 
 struct menu_device
 {
     int             last_state_idx;
     unsigned int    expected_us;
-    unsigned int    predicted_us;
-    unsigned int    current_predicted_us;
-    unsigned int    last_measured_us;
-    unsigned int    elapsed_us;
+    u64             predicted_us;
+    unsigned int    measured_us;
+    unsigned int    exit_us;
+    unsigned int    bucket;
+    u64             correction_factor[BUCKETS];
+    struct perf_factor pf;
 };
 
 static DEFINE_PER_CPU(struct menu_device, menu_devices);
+
+static inline int which_bucket(unsigned int duration)
+{
+    int bucket = 0;
+
+    if (duration < 10)
+        return bucket;
+    if (duration < 100)
+        return bucket + 1;
+    if (duration < 1000)
+        return bucket + 2;
+    if (duration < 10000)
+        return bucket + 3;
+    if (duration < 100000)
+        return bucket + 4;
+    return bucket + 5;
+}
+
+/*
+ * Return the average interrupt interval, to take I/O performance
+ * requirements into account. The smaller the average interrupt
+ * interval is, the busier the I/O activity, and thus the higher
+ * the barrier to entering an expensive C state.
+ */
+
+/* 5 millisecond sampling period */
+#define SAMPLING_PERIOD     5000000
+
+/* for I/O interrupts, we apply an 8x multiplier to the C state latency */
+#define IO_MULTIPLIER       8
+
+static inline s_time_t avg_intr_interval_us(void)
+{
+    struct menu_device *data = &__get_cpu_var(menu_devices);
+    s_time_t    duration, now;
+    s_time_t    avg_interval;
+    unsigned int irq_sum;
+
+    now = NOW();
+    duration = (data->pf.duration + (now - data->pf.time_stamp)
+            * (DECAY - 1)) / DECAY;
+
+    irq_sum = (data->pf.irq_sum + (this_cpu(irq_count) - data->pf.irq_count_stamp)
+            * (DECAY - 1)) / DECAY;
+
+    if (irq_sum == 0)
+        /* no irq recently, so return a big enough interval: 1 sec */
+        avg_interval = 1000000;
+    else
+        avg_interval = duration / irq_sum / 1000; /* in us */
+
+    if ( duration >= SAMPLING_PERIOD ) {
+        data->pf.time_stamp = now;
+        data->pf.duration = duration;
+        data->pf.irq_count_stamp = this_cpu(irq_count);
+        data->pf.irq_sum = irq_sum;
+    }
+
+    return avg_interval;
+}
 
 static unsigned int get_sleep_length_us(void)
 {
@@ -62,57 +186,86 @@ static int menu_select(struct acpi_proce
 {
     struct menu_device *data = &__get_cpu_var(menu_devices);
     int i;
-
-    /* determine the expected residency time */
+    s_time_t    io_interval;
+
+    /* TBD: change to 0 if C0 (polling mode) support is added later */
+    data->last_state_idx = CPUIDLE_DRIVER_STATE_START;
+    data->exit_us = 0;
+
+    /* determine the expected residency time, round up */
     data->expected_us = get_sleep_length_us();
 
-    /* Recalculate predicted_us based on prediction_history_pct */
-    data->predicted_us *= PRED_HISTORY_PCT;
-    data->predicted_us += (100 - PRED_HISTORY_PCT) *
-        data->current_predicted_us;
-    data->predicted_us /= 100;
+    data->bucket = which_bucket(data->expected_us);
+
+    io_interval = avg_intr_interval_us();
+
+    /*
+     * if the correction factor is 0 (e.g. first-time init or CPU hotplug
+     * etc), we actually want to start out with a unity factor.
+     */
+    if (data->correction_factor[data->bucket] == 0)
+        data->correction_factor[data->bucket] = RESOLUTION * DECAY;
+
+    /* Make sure to round up for half microseconds */
+    data->predicted_us = DIV_ROUND(
+            data->expected_us * data->correction_factor[data->bucket],
+            RESOLUTION * DECAY);
 
     /* find the deepest idle state that satisfies our constraints */
-    for ( i = 2; i < power->count; i++ )
+    for ( i = CPUIDLE_DRIVER_STATE_START + 1; i < power->count; i++ )
     {
         struct acpi_processor_cx *s = &power->states[i];
 
-        if ( s->target_residency > data->expected_us + s->latency )
+        if (s->target_residency > data->predicted_us)
             break;
-        if ( s->target_residency > data->predicted_us )
+        if (s->latency * IO_MULTIPLIER > io_interval)
             break;
        /* TBD: we need to check the QoS requirement in future */
+        data->exit_us = s->latency;
+        data->last_state_idx = i;
     }
 
-    data->last_state_idx = i - 1;
-    return i - 1;
+    return data->last_state_idx;
 }
 
 static void menu_reflect(struct acpi_processor_power *power)
 {
     struct menu_device *data = &__get_cpu_var(menu_devices);
-    struct acpi_processor_cx *target = &power->states[data->last_state_idx];
-    unsigned int last_residency; 
+    unsigned int last_idle_us = power->last_residency;
     unsigned int measured_us;
-
-    last_residency = power->last_residency;
-    measured_us = last_residency + data->elapsed_us;
-
-    /* if wrapping, set to max uint (-1) */
-    measured_us = data->elapsed_us <= measured_us ? measured_us : -1;
-
-    /* Predict time remaining until next break event */
-    data->current_predicted_us = max(measured_us, data->last_measured_us);
-
-    /* Distinguish between expected & non-expected events */
-    if ( last_residency + BREAK_FUZZ
-         < data->expected_us + target->latency )
-    {
-        data->last_measured_us = measured_us;
-        data->elapsed_us = 0;
-    }
+    u64 new_factor;
+
+    measured_us = last_idle_us;
+
+    /*
+     * We correct for the exit latency; we are assuming here that the
+     * exit latency happens after the event that we're interested in.
+     */
+    if (measured_us > data->exit_us)
+        measured_us -= data->exit_us;
+
+    /* update our correction ratio */
+
+    new_factor = data->correction_factor[data->bucket]
+        * (DECAY - 1) / DECAY;
+
+    if (data->expected_us > 0 && measured_us < MAX_INTERESTING)
+        new_factor += RESOLUTION * measured_us / data->expected_us;
     else
-        data->elapsed_us = measured_us;
+        /*
+         * we were idle so long that we count it as a perfect
+         * prediction
+         */
+        new_factor += RESOLUTION;
+
+    /*
+     * We don't want 0 as factor; we always want at least
+     * a tiny bit of estimated time.
+     */
+    if (new_factor == 0)
+        new_factor = 1;
+
+    data->correction_factor[data->bucket] = new_factor;
 }
 
 static int menu_enable_device(struct acpi_processor_power *power)
diff -r 3d505c9f1b73 -r 2d9c58c29a94 xen/arch/x86/hpet.c
--- a/xen/arch/x86/hpet.c       Mon Dec 14 07:52:22 2009 +0000
+++ b/xen/arch/x86/hpet.c       Mon Dec 14 07:54:53 2009 +0000
@@ -211,6 +211,9 @@ static void hpet_interrupt_handler(int i
         struct cpu_user_regs *regs)
 {
     struct hpet_event_channel *ch = (struct hpet_event_channel *)data;
+
+    this_cpu(irq_count)--;
+
     if ( !ch->event_handler )
     {
         printk(XENLOG_WARNING "Spurious HPET timer interrupt on HPET timer %d\n", ch->idx);
@@ -692,6 +695,8 @@ int hpet_broadcast_is_available(void)
 
 int hpet_legacy_irq_tick(void)
 {
+    this_cpu(irq_count)--;
+
     if ( !legacy_hpet_event.event_handler )
         return 0;
     legacy_hpet_event.event_handler(&legacy_hpet_event);
diff -r 3d505c9f1b73 -r 2d9c58c29a94 xen/arch/x86/irq.c
--- a/xen/arch/x86/irq.c        Mon Dec 14 07:52:22 2009 +0000
+++ b/xen/arch/x86/irq.c        Mon Dec 14 07:54:53 2009 +0000
@@ -517,6 +517,8 @@ void irq_set_affinity(int irq, cpumask_t
     cpus_copy(desc->pending_mask, mask);
 }
 
+DEFINE_PER_CPU(unsigned int, irq_count);
+
 asmlinkage void do_IRQ(struct cpu_user_regs *regs)
 {
     struct irqaction *action;
@@ -527,6 +529,8 @@ asmlinkage void do_IRQ(struct cpu_user_r
     struct cpu_user_regs *old_regs = set_irq_regs(regs);
     
     perfc_incr(irqs);
+
+    this_cpu(irq_count)++;
 
     if (irq < 0) {
         ack_APIC_irq();
diff -r 3d505c9f1b73 -r 2d9c58c29a94 xen/include/asm-x86/irq.h
--- a/xen/include/asm-x86/irq.h Mon Dec 14 07:52:22 2009 +0000
+++ b/xen/include/asm-x86/irq.h Mon Dec 14 07:54:53 2009 +0000
@@ -105,6 +105,8 @@ extern atomic_t irq_err_count;
 extern atomic_t irq_err_count;
 extern atomic_t irq_mis_count;
 
+DECLARE_PER_CPU(unsigned int, irq_count);
+
 int pirq_shared(struct domain *d , int irq);
 
 int map_domain_pirq(struct domain *d, int pirq, int irq, int type,
diff -r 3d505c9f1b73 -r 2d9c58c29a94 xen/include/xen/cpuidle.h
--- a/xen/include/xen/cpuidle.h Mon Dec 14 07:52:22 2009 +0000
+++ b/xen/include/xen/cpuidle.h Mon Dec 14 07:54:53 2009 +0000
@@ -86,4 +86,6 @@ extern struct cpuidle_governor *cpuidle_
 extern struct cpuidle_governor *cpuidle_current_governor;
 void cpuidle_disable_deep_cstate(void);
 
+#define CPUIDLE_DRIVER_STATE_START  1
+
 #endif /* _XEN_CPUIDLE_H */

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog