
[Xen-devel] Re: [PATCH] CPUIDLE: shorten hpet spin_lock holding time



On 21/04/2010 10:06, "Wei, Gang" <gang.wei@xxxxxxxxx> wrote:

>> It fixes the unsafe accesses to timer_deadline_{start,end}, but I
>> still think this optimisation is misguided and also unsafe. There is
>> nothing to stop new CPUs being added to ch->cpumask after you start
>> scanning ch->cpumask. For example, a new CPU may have a
>> timer_deadline_end greater than ch->next_event, so it does not
>> reprogram the HPET. But handle_hpet_broadcast is already mid-scan and
>> misses this new CPU, so it does not reprogram the HPET either. Hence
>> no timer fires for the new CPU and it misses its deadline.
> 
> This will not happen. ch->next_event has already been set to STIME_MAX before
> the scan of ch->cpumask starts, so the new CPU with the smallest
> timer_deadline_end will reprogram the HPET successfully.

Okay, then CPU A executes hpet_broadcast_enter() and programs the HPET
channel for its timeout X. Meanwhile, the concurrently executing
handle_hpet_broadcast misses CPU A but finds some other CPU B with timeout Y
much later than X, and erroneously programs the HPET channel with Y, causing
CPU A to miss its deadline by an arbitrary amount.
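
To make the window concrete, here is a rough userspace model of that
interleaving (ordinary pthreads; the names cpu_deadline[], cpu_in_mask[],
channel_deadline and so on are invented for the sketch, the timing is forced
with sleeps, and it deliberately omits locking; it is not the Xen code paths
themselves):

/* race_sketch.c -- build with: gcc -pthread race_sketch.c */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define NCPUS     4
#define STIME_MAX INT64_MAX

/* Stand-ins for the shared state; unlocked on purpose to expose the window. */
static int64_t cpu_deadline[NCPUS];          /* per-CPU timer_deadline_end */
static int     cpu_in_mask[NCPUS];           /* ch->cpumask */
static int64_t next_event = STIME_MAX;       /* ch->next_event */
static int64_t channel_deadline = STIME_MAX; /* last value the "HPET" was set to */

static void reprogram(int64_t deadline)
{
    channel_deadline = deadline;             /* models reprogramming the channel */
}

/* Plays handle_hpet_broadcast: reset next_event, then scan the mask. */
static void *broadcast_handler(void *arg)
{
    int64_t min = STIME_MAX;

    next_event = STIME_MAX;
    for (int cpu = 0; cpu < NCPUS; cpu++) {
        if (cpu_in_mask[cpu] && cpu_deadline[cpu] < min)
            min = cpu_deadline[cpu];
        if (cpu == 0)
            usleep(1000);                    /* widen the window after slot 0 is scanned */
    }
    if (min != STIME_MAX) {
        next_event = min;
        reprogram(min);                      /* overwrites X with the much later Y */
    }
    return NULL;
}

/* Plays CPU A entering the broadcast mid-scan with deadline X. */
static void *cpu_a_enter(void *arg)
{
    usleep(100);                             /* arrive after the scan passed slot 0 */
    cpu_deadline[0] = 100;                   /* X */
    cpu_in_mask[0] = 1;
    if (cpu_deadline[0] < next_event) {      /* true: next_event is still STIME_MAX */
        next_event = cpu_deadline[0];
        reprogram(cpu_deadline[0]);          /* channel correctly set to X, for now */
    }
    return NULL;
}

int main(void)
{
    pthread_t handler, cpu_a;

    cpu_deadline[1] = 1000;                  /* CPU B's deadline Y, much later than X */
    cpu_in_mask[1] = 1;

    pthread_create(&handler, NULL, broadcast_handler, NULL);
    pthread_create(&cpu_a, NULL, cpu_a_enter, NULL);
    pthread_join(handler, NULL);
    pthread_join(cpu_a, NULL);

    printf("channel programmed for %lld, but CPU A needed %lld\n",
           (long long)channel_deadline, (long long)cpu_deadline[0]);
    return 0;
}

It ends up with the channel programmed for Y=1000 while CPU A needed X=100,
which is the missed deadline I am describing.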

I dare say I can carry on finding races. :-)

> I think that is another story. Enlarging timer_slop is one way to align and
> reduce break events; it does help save power, though possibly at the cost of
> larger latency. What I am trying to address here is how to reduce the
> spin_lock overhead in the idle entry/exit path. That spin_lock overhead,
> along with other overheads, caused >25% CPU utilization on a
> 32-pCPU/64-vCPU system while all guests were idle.
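
For reference, the shape of the optimisation as I understand it is roughly
the following (an illustration only, with invented names and a plain pthread
spinlock standing in for ch->lock; it is not the actual patch):

/* lock_hold_sketch.c -- build with: gcc -pthread lock_hold_sketch.c */
#include <pthread.h>
#include <stdint.h>

struct channel {
    pthread_spinlock_t lock;
    int64_t next_event;                /* earliest deadline any sleeping CPU needs */
};

/* Long hold: bookkeeping and the reprogramming decision all under the lock. */
static void enter_long_hold(struct channel *ch, int64_t my_deadline)
{
    pthread_spin_lock(&ch->lock);
    /* ... mask update, per-CPU bookkeeping, hardware reprogramming ... */
    if (my_deadline < ch->next_event)
        ch->next_event = my_deadline;
    pthread_spin_unlock(&ch->lock);
}

/* Short hold: per-CPU work outside the lock; only the shared minimum is protected. */
static void enter_short_hold(struct channel *ch, int64_t my_deadline)
{
    int need_reprogram = 0;

    /* ... per-CPU bookkeeping done without the lock ... */
    pthread_spin_lock(&ch->lock);
    if (my_deadline < ch->next_event) {
        ch->next_event = my_deadline;
        need_reprogram = 1;
    }
    pthread_spin_unlock(&ch->lock);

    if (need_reprogram) {
        /* Reprogram the hardware here, outside the lock.  This is exactly
         * the kind of window in which the races above can creep in. */
    }
}

int main(void)
{
    struct channel ch;

    pthread_spin_init(&ch.lock, PTHREAD_PROCESS_PRIVATE);
    ch.next_event = INT64_MAX;
    enter_long_hold(&ch, 200);
    enter_short_hold(&ch, 100);
    return 0;
}

The saving comes entirely from the second form; the cost is that everything
moved outside the lock now has to tolerate concurrent updates.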

So far it's looked to me like a correctness/performance tradeoff. :-D

 -- Keir


