[Xen-changelog] [xen master] silence affinity messages on suspend/resume
commit 10d70e7830f1051d9190b4a3b9be60bcabf3d27f
Author:     Juergen Gross <jgross@xxxxxxxx>
AuthorDate: Thu Mar 3 08:55:30 2016 +0100
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Thu Mar 3 08:55:30 2016 +0100

    silence affinity messages on suspend/resume

    When taking cpus offline for suspend, or bringing them online on resume
    again, the scheduler might issue debug messages when temporarily breaking
    vcpu affinity or restoring the original affinity settings.

    The resume message can be removed completely, while the message when
    breaking affinity should only be issued if the breakage is permanent.

    Suggested-by: Jan Beulich <jbeulich@xxxxxxxx>
    Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
    Acked-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
---
 xen/common/schedule.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 7523968..13803ec 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -615,7 +615,6 @@ void restore_vcpu_affinity(struct domain *d)
 
         if ( v->affinity_broken )
         {
-            printk(XENLOG_DEBUG "Restoring affinity for %pv\n", v);
             cpumask_copy(v->cpu_hard_affinity, v->cpu_hard_affinity_saved);
             v->affinity_broken = 0;
         }
@@ -670,14 +669,14 @@ int cpu_disable_scheduler(unsigned int cpu)
             if ( cpumask_empty(&online_affinity) &&
                  cpumask_test_cpu(cpu, v->cpu_hard_affinity) )
             {
-                printk(XENLOG_DEBUG "Breaking affinity for %pv\n", v);
-
                 if ( system_state == SYS_STATE_suspend )
                 {
                     cpumask_copy(v->cpu_hard_affinity_saved,
                                  v->cpu_hard_affinity);
                     v->affinity_broken = 1;
                 }
+                else
+                    printk(XENLOG_DEBUG "Breaking affinity for %pv\n", v);
 
                 cpumask_setall(v->cpu_hard_affinity);
             }
-- 
generated by git-patchbot for /home/xen/git/xen.git#master

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog
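For readers who want to see the net effect of the patch outside hypervisor context, below is a small standalone C model of the post-patch behaviour. The struct, enum, and function names here are illustrative stand-ins, not Xen's own (Xen uses cpumask_t and its scheduler-internal helpers): affinity is saved and broken silently for a suspend, restored silently on resume, and only a permanent breakage is logged.

/*
 * Standalone model of the post-patch behaviour (not Xen code).
 * All names below are hypothetical stand-ins for the Xen internals
 * touched by the patch above.
 */
#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

struct vcpu_model {
    int      id;
    uint64_t hard_affinity;        /* one bit per pcpu */
    uint64_t hard_affinity_saved;  /* snapshot taken when suspending */
    bool     affinity_broken;
};

enum system_state { STATE_ACTIVE, STATE_SUSPEND };

/* Models the patched branch in cpu_disable_scheduler(): stay quiet
 * when the breakage is temporary (suspend), log it when permanent. */
static void break_affinity(struct vcpu_model *v, enum system_state state)
{
    if ( state == STATE_SUSPEND )
    {
        v->hard_affinity_saved = v->hard_affinity;
        v->affinity_broken = true;
    }
    else
        printf("Breaking affinity for vcpu %d\n", v->id);

    v->hard_affinity = ~0ULL;      /* analogue of cpumask_setall() */
}

/* Models restore_vcpu_affinity() after the patch: restore the saved
 * mask with no message at all. */
static void restore_affinity(struct vcpu_model *v)
{
    if ( v->affinity_broken )
    {
        v->hard_affinity = v->hard_affinity_saved;
        v->affinity_broken = false;
    }
}

int main(void)
{
    struct vcpu_model v = { .id = 0, .hard_affinity = 1ULL << 3 };

    break_affinity(&v, STATE_SUSPEND);   /* silent: undone on resume */
    restore_affinity(&v);                /* silent: original mask back */

    break_affinity(&v, STATE_ACTIVE);    /* logged: permanent breakage */
    return 0;
}

Built with any C99 compiler, this prints a single "Breaking affinity" line, for the non-suspend case only, mirroring the logging policy the commit introduces.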