[Xen-changelog] [linux-2.6.18-xen] cpufreq: minor clean-ups for ondemand governor on Xen.
# HG changeset patch
# User Keir Fraser <keir@xxxxxxxxxxxxx>
# Date 1194259231 0
# Node ID 98de2b1494230cfe5393486d747442dd8343703c
# Parent  d827dfc6593e7300c68de12106ad510b6f832cda
cpufreq: minor clean-ups for ondemand governor on Xen.

The cpufreq ondemand governor patch for Xen included some out-of-order
code and some test code; reorder the code to assign a variable before
passing it to a function, and remove the test code.

Signed-off-by: Mark Langsdorf <mark.langsdorf@xxxxxxx>
---
 drivers/cpufreq/cpufreq_ondemand.c |  8 +++++---
 1 files changed, 5 insertions(+), 3 deletions(-)

diff -r d827dfc6593e -r 98de2b149423 drivers/cpufreq/cpufreq_ondemand.c
--- a/drivers/cpufreq/cpufreq_ondemand.c	Thu Nov 01 09:07:45 2007 -0600
+++ b/drivers/cpufreq/cpufreq_ondemand.c	Mon Nov 05 10:40:31 2007 +0000
@@ -96,6 +96,7 @@ static inline cputime64_t get_cpu_idle_t
 	return retval;
 }
+
 /************************** sysfs interface ************************/
 static ssize_t show_sampling_rate_max(struct cpufreq_policy *policy, char *buf)
 {
@@ -281,15 +282,16 @@ static int dbs_calc_load(struct cpu_dbs_
 	unsigned int j;
 	cpumask_t cpumap;

+	policy = this_dbs_info->cur_policy;
+	cpumap = policy->cpus;
+
 	op.cmd = XENPF_getidletime;
 	set_xen_guest_handle(op.u.getidletime.cpumap_bitmap,
 			(uint8_t *) cpus_addr(cpumap));
-	op.u.getidletime.cpumap_nr_cpus = NR_CPUS;// num_online_cpus();
+	op.u.getidletime.cpumap_nr_cpus = NR_CPUS;
 	set_xen_guest_handle(op.u.getidletime.idletime, idletime);
 	if (HYPERVISOR_platform_op(&op))
 		return 200;
-	policy = this_dbs_info->cur_policy;
-	cpumap = policy->cpus;
+
 	for_each_cpu_mask(j, cpumap) {
 		cputime64_t total_idle_nsecs, tmp_idle_nsecs;
 		cputime64_t total_wall_nsecs, tmp_wall_nsecs;

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog
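For context, the ordering bug this patch fixes is a plain use-before-assignment: the original code handed `cpumap` to the hypervisor call before copying `policy->cpus` into it, so the call operated on an uninitialized mask. The following is a minimal standalone sketch of the corrected ordering; all names (`struct policy`, `fake_platform_op`, `query_idletime`) are hypothetical stand-ins, not the kernel's actual types or APIs.

```c
#include <assert.h>

/* Hypothetical stand-in for struct cpufreq_policy: just the CPU mask. */
struct policy {
	unsigned long cpus;
};

/* Records the mask it was handed, standing in for HYPERVISOR_platform_op.
 * In the buggy ordering, this would have seen an uninitialized value. */
static unsigned long platform_op_mask;

static void fake_platform_op(const unsigned long *cpumap)
{
	platform_op_mask = *cpumap;
}

unsigned long query_idletime(const struct policy *p)
{
	unsigned long cpumap;

	/* Correct order, as in the patch: assign cpumap from the policy
	 * FIRST, then pass it to the platform op. */
	cpumap = p->cpus;
	fake_platform_op(&cpumap);

	return platform_op_mask;
}
```

With this ordering the platform op always sees the policy's real CPU mask; the patch applies exactly this fix by hoisting the two assignments above the `set_xen_guest_handle()` call.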