[Xen-changelog] [xen-unstable] x86/hpet: eliminate cpumask_lock
# HG changeset patch
# User Jan Beulich <jbeulich@xxxxxxxxxx>
# Date 1301043797 0
# Node ID a65612bcbb921e98a8843157bf365e4ab16e8144
# Parent  941119d58655f2b2df86d9ecc4cb502bbc5e783c
x86/hpet: eliminate cpumask_lock

According to the (now getting removed) comment in struct
hpet_event_channel, this was to prevent accessing a CPU's timer_deadline
after it got cleared from cpumask. This can be done without a lock
altogether - hpet_broadcast_exit() can simply clear the bit, and
handle_hpet_broadcast() can read timer_deadline before looking at the
mask a second time (the cpumask bit was already found set by the
surrounding loop).

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxxxx>
Acked-by: Gang Wei <gang.wei@xxxxxxxxx>
---

diff -r 941119d58655 -r a65612bcbb92 xen/arch/x86/hpet.c
--- a/xen/arch/x86/hpet.c	Fri Mar 25 09:01:37 2011 +0000
+++ b/xen/arch/x86/hpet.c	Fri Mar 25 09:03:17 2011 +0000
@@ -34,18 +34,6 @@
     int shift;
     s_time_t next_event;
     cpumask_t cpumask;
-    /*
-     * cpumask_lock is used to prevent hpet intr handler from accessing other
-     * cpu's timer_deadline after the other cpu's mask was cleared --
-     * mask cleared means cpu waken up, then accessing timer_deadline from
-     * other cpu is not safe.
-     * It is not used for protecting cpumask, so set ops needn't take it.
-     * Multiple cpus clear cpumask simultaneously is ok due to the atomic
-     * feature of cpu_clear, so hpet_broadcast_exit() can take read lock for
-     * clearing cpumask, and handle_hpet_broadcast() have to take write lock
-     * for read cpumask & access timer_deadline.
-     */
-    rwlock_t cpumask_lock;
     spinlock_t lock;
     void (*event_handler)(struct hpet_event_channel *);

@@ -199,17 +187,18 @@
     /* find all expired events */
     for_each_cpu_mask(cpu, ch->cpumask)
     {
-        write_lock_irq(&ch->cpumask_lock);
+        s_time_t deadline;

-        if ( cpu_isset(cpu, ch->cpumask) )
-        {
-            if ( per_cpu(timer_deadline, cpu) <= now )
-                cpu_set(cpu, mask);
-            else if ( per_cpu(timer_deadline, cpu) < next_event )
-                next_event = per_cpu(timer_deadline, cpu);
-        }
+        rmb();
+        deadline = per_cpu(timer_deadline, cpu);
+        rmb();
+        if ( !cpu_isset(cpu, ch->cpumask) )
+            continue;

-        write_unlock_irq(&ch->cpumask_lock);
+        if ( deadline <= now )
+            cpu_set(cpu, mask);
+        else if ( deadline < next_event )
+            next_event = deadline;
     }

     /* wakeup the cpus which have an expired event. */
@@ -602,7 +591,6 @@
         hpet_events[i].shift = 32;
         hpet_events[i].next_event = STIME_MAX;
         spin_lock_init(&hpet_events[i].lock);
-        rwlock_init(&hpet_events[i].cpumask_lock);
         wmb();
         hpet_events[i].event_handler = handle_hpet_broadcast;
     }
@@ -729,9 +717,7 @@
     if ( !reprogram_timer(per_cpu(timer_deadline, cpu)) )
         raise_softirq(TIMER_SOFTIRQ);

-    read_lock_irq(&ch->cpumask_lock);
     cpu_clear(cpu, ch->cpumask);
-    read_unlock_irq(&ch->cpumask_lock);

     if ( !(ch->flags & HPET_EVT_LEGACY) )
     {

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog