[Xen-changelog] [xen-unstable] CPUIDLE: Avoid remnant LAPIC timer intr while force hpetbroadcast
# HG changeset patch
# User Keir Fraser <keir.fraser@xxxxxxxxxx>
# Date 1221041786 -3600
# Node ID 020b8340e83938b1b7693bffbd445f616063ea22
# Parent  9ee24da5a488c2899026df0ad4172fe497f631fb
CPUIDLE: Avoid remnant LAPIC timer intr while force hpetbroadcast

LAPIC will stop during C3, and resume to work after exit from C3.
Considering below case:
The LAPIC timer was programmed to expire after 1000us, but CPU enter C3
after 100us and exit C3 at 9xxus.
   0us: reprogram_timer(1000us)
 100us: entry C3, LAPIC timer stop
 9xxus: exit C3 due to unexpected event, LAPIC timer continue running
10xxus: reprogram_timer(1000us), fail due to the past expiring time.
......: no timer softirq raised, no change to LAPIC timer.
......: if entry C3 again, HPET will be forced reprogramed to now+small_slop.
......: if entry C2, no change to LAPIC.
18xxus: LAPIC timer expires unexpectedly if no C3 entries after 10xxus.

Signed-off-by: Wei Gang <gang.wei@xxxxxxxxx>
---
 xen/arch/x86/hpet.c |   10 +++++++++-
 1 files changed, 9 insertions(+), 1 deletion(-)

diff -r 9ee24da5a488 -r 020b8340e839 xen/arch/x86/hpet.c
--- a/xen/arch/x86/hpet.c	Wed Sep 10 11:09:08 2008 +0100
+++ b/xen/arch/x86/hpet.c	Wed Sep 10 11:16:26 2008 +0100
@@ -233,7 +233,15 @@ void hpet_broadcast_exit(void)
 
     if ( cpu_test_and_clear(cpu, ch->cpumask) )
     {
-        reprogram_timer(per_cpu(timer_deadline, cpu));
+        if ( !reprogram_timer(per_cpu(timer_deadline, cpu)) )
+        {
+            /*
+             * The deadline must have passed -- trigger timer work now.
+             * Also cancel any outstanding LAPIC event.
+             */
+            reprogram_timer(0);
+            raise_softirq(TIMER_SOFTIRQ);
+        }
 
         if ( cpus_empty(ch->cpumask) && ch->next_event != STIME_MAX )
             reprogram_hpet_evt_channel(ch, STIME_MAX, 0, 0);
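
To make the race easier to follow outside the Xen tree, below is a minimal
standalone C sketch of the pattern the hunk above introduces. It is not Xen
code: the simulated clock, the reprogram_timer() stub (returning false when
the requested deadline is already in the past, with a deadline of 0 meaning
"cancel the armed event") and raise_timer_softirq() are illustrative
assumptions standing in for the real LAPIC and softirq machinery.

    /* Standalone sketch only -- not the Xen implementation. */
    #include <stdbool.h>
    #include <stdio.h>

    static unsigned long now_us;            /* simulated current time       */
    static unsigned long lapic_deadline_us; /* 0 means "no event armed"     */
    static bool timer_softirq_pending;

    /* Arm the (simulated) LAPIC timer; fail if the deadline already passed. */
    static bool reprogram_timer(unsigned long deadline_us)
    {
        if ( deadline_us == 0 )             /* cancel any outstanding event */
        {
            lapic_deadline_us = 0;
            return true;
        }
        if ( deadline_us <= now_us )
            return false;                   /* too late -- nothing armed    */
        lapic_deadline_us = deadline_us;
        return true;
    }

    static void raise_timer_softirq(void)
    {
        timer_softirq_pending = true;       /* run overdue timer work soon  */
    }

    int main(void)
    {
        now_us = 0;
        reprogram_timer(1000);              /*    0us: arm LAPIC for 1000us */
        now_us = 1050;                      /* 10xxus: back from C3, the
                                             * 1000us deadline has passed   */

        /* Fallback path analogous to the fixed hpet_broadcast_exit():
         * if arming fails, cancel the stale event and raise the softirq.   */
        if ( !reprogram_timer(1000) )
        {
            reprogram_timer(0);
            raise_timer_softirq();
        }

        printf("armed deadline=%luus softirq_pending=%d\n",
               lapic_deadline_us, (int)timer_softirq_pending);
        return 0;
    }

Run standalone, this prints "armed deadline=0us softirq_pending=1": the stale
event is cancelled and the overdue timer work is handed to the softirq instead
of being silently dropped, mirroring the fallback added to
hpet_broadcast_exit() in the patch.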