[Xen-devel] Re: [PATCH] CPUIDLE: shorten hpet spin_lock holding time
On Wednesday, 2010-4-21 6:03 PM, Keir Fraser wrote:
> On 21/04/2010 10:52, "Keir Fraser" <keir.fraser@xxxxxxxxxxxxx> wrote:
>
>> Okay, one concern I still have is over possible races around
>> cpuidle_wakeup_mwait(). It makes use of a cpumask,
>> cpuidle_mwait_flags, avoiding an IPI to cpus in the mask. However,
>> there is nothing to stop the CPU having cleared itself from that
>> cpumask before cpuidle does the write to softirq_pending. In that
>> case, even assuming the CPU is now non-idle and so wakeup is
>> spurious, a subsequent attempt to raise_softirq(TIMER_SOFTIRQ) will
>> incorrectly not IPI because the flag is already set in
>> softirq_pending?

If a CPU cleared itself from cpuidle_mwait_flags, then that CPU did not need an IPI to be woken, and the one useless write to softirq_pending has no side effect. So this case should be acceptable.

> Oh, another one, which can even occur without your patch: CPU A
> adds itself to cpuidle_mwait_flags while cpuidle_wakeup_mwait() is
> running. That function doesn't see CPU A in its first read of the
> cpumask so it does not set TIMER_SOFTIRQ for CPU A. But it then
> *does* see CPU A in its final read of the cpumask, and hence clears A
> from the caller's mask. Hence the caller does not pass CPU A to
> cpumask_raise_softirq(). Hence CPU A is erroneously not woken.

Nice shot. This issue can be resolved by taking a snapshot of cpuidle_mwait_flags in cpuidle_wakeup_mwait() and using that snapshot both to decide which CPUs to wake and which CPUs to clear from the caller's mask (a rough sketch of this approach is included below).

> Isn't the synchronisation around those mwait/monitor functions just
> inherently broken, even without your patch, and your patch just makes
> it worse? :-)

How about now, after fixing the second concern you mentioned?

Jimmy
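For illustration, here is a minimal, self-contained C sketch of that snapshot idea. It is not the actual Xen implementation: cpumask_t, the per-CPU softirq machinery, and the atomics/barriers the real code would need are replaced by plain bitmask stand-ins, and the names simply mirror the discussion above.

    /*
     * Sketch (NOT the real Xen code) of the "snapshot" fix discussed above:
     * cpuidle_wakeup_mwait() reads cpuidle_mwait_flags exactly once, sets
     * TIMER_SOFTIRQ only for CPUs seen in that snapshot, and clears only
     * those same CPUs from the caller's mask.  A CPU that adds itself to
     * cpuidle_mwait_flags after the snapshot stays in the caller's mask and
     * is woken by the normal IPI path instead, so it cannot be lost.
     */
    #include <stdio.h>

    #define NR_CPUS        8
    #define TIMER_SOFTIRQ  (1UL << 0)

    /* Toy stand-ins for the real per-CPU state. */
    static unsigned long cpuidle_mwait_flags;        /* CPUs idling in MWAIT */
    static unsigned long softirq_pending[NR_CPUS];   /* per-CPU pending bits */

    /*
     * Hypothetical snapshot-based wakeup: *mask is the set of CPUs the
     * caller still has to wake; CPUs handled here are removed from it.
     */
    static void cpuidle_wakeup_mwait(unsigned long *mask)
    {
        /* Single read of the flags; every later decision uses this copy. */
        unsigned long snapshot = cpuidle_mwait_flags;
        unsigned long target = *mask & snapshot;
        unsigned int cpu;

        /* Writing softirq_pending is what wakes a CPU sitting in MWAIT. */
        for (cpu = 0; cpu < NR_CPUS; cpu++)
            if (target & (1UL << cpu))
                softirq_pending[cpu] |= TIMER_SOFTIRQ;

        /* Clear only CPUs taken from the snapshot, never from a second,
         * possibly different, read of cpuidle_mwait_flags. */
        *mask &= ~target;
    }

    int main(void)
    {
        unsigned long to_wake = 0x0fUL;  /* caller wants to wake CPUs 0-3 */

        cpuidle_mwait_flags = 0x05UL;    /* CPUs 0 and 2 are MWAITing */
        cpuidle_wakeup_mwait(&to_wake);

        /* CPUs 1 and 3 remain and must still be sent an IPI. */
        printf("still to IPI: %#lx\n", to_wake);
        return 0;
    }

With this structure, the worst that can happen is the benign case from the first concern above: a CPU in the snapshot that has already left MWAIT gets a spurious softirq_pending write, which is harmless.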