
[Xen-devel] [PATCH 0/6] x86: cpuidle overheads reduction



Experiments show that on systems with more than 64 logical cpus and without an 
always-running APIC timer, once the interrupt rate rises to several thousand Hz 
per cpu, the deep C-state entry/exit overhead grows from a few percent to over 
50%. This is mainly caused by the deep C-state wakeup logic: a single hpet 
channel has to be used to wake up a large number of cpus.

We previously tried to shorten the hpet channel spinlock hold time to reduce 
the contention cost around the channel, but that is still not enough for the 
64+ logical cpu case.

This patchset fixes two small but obvious bugs in the cpuidle code, uses stime 
to count C-state residency in the NONSTOP_TSC case, removes the hpet access in 
hpet_broadcast_exit, and redirects some hpet lock users to a new rwlock.
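To illustrate the first of the two bug fixes (the wrapped-ticks calculation for 
the PM timer, PATCH 1/6 below): the ACPI PM timer is a free-running 24-bit 
up-counter at 3.579545 MHz, so elapsed ticks across a C-state sleep must be 
computed modulo 2^24 so that a single wrap between the two reads is absorbed. 
The snippet below is only a generic, standalone sketch of that pattern, not the 
actual patch; the names are made up for the example.

/* Illustrative only -- not the Xen patch itself.  Shows the usual way to
 * compute elapsed ticks of a free-running 24-bit up-counter such as the
 * ACPI PM timer, handling one wrap between the two reads. */
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

#define PM_TIMER_MASK 0xFFFFFFu   /* 24-bit counter */

static uint32_t pm_ticks_elapsed(uint32_t start, uint32_t end)
{
    /* Unsigned subtraction modulo 2^24 absorbs one wrap of the counter. */
    return (end - start) & PM_TIMER_MASK;
}

int main(void)
{
    /* Example: the counter wrapped once between the two reads. */
    uint32_t start = 0xFFFFF0u, end = 0x000010u;
    printf("elapsed ticks: %" PRIu32 "\n", pm_ticks_elapsed(start, end)); /* 32 */
    return 0;
}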

In a specially simulated mass breakevent case, this patchset reduces cpuidle 
overhead from >50% to <15% and increases C3 residency from 30% to >60%.

[PATCH1/6] cpuidle: fix wrapped ticks calculation for pm timer
[PATCH2/6] cpuidle: reduce redundant cost in cstate_restore_tsc for nonstop tsc
[PATCH3/6] cpuidle: use stime to count c-state residency in NONSTOP_TSC case
[PATCH4/6] cpuidle: remove hpet access in hpet_broadcast_exit
[PATCH5/6] cpuidle: redirect some hpet lock users to a new cpumask_lock
[PATCH6/6] cpuidle: redefine cpumask_lock as rwlock_t
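
To make the intent of the last two patches a bit more concrete, here is a 
minimal userspace analogy of the cpumask_lock split, with pthread_rwlock_t 
standing in for Xen's rwlock_t. It only shows the reader/writer idea: paths 
that merely scan the channel's cpumask can run concurrently, while updates to 
the mask still take the lock exclusively. The names and the exact split of 
hpet-code paths are invented for the example; the real split is defined by the 
patches themselves.

/* Userspace analogy of the cpumask_lock change (patches 5 and 6), using
 * pthread_rwlock_t in place of Xen's rwlock_t.  Names are hypothetical. */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t channel_cpumask;                 /* one bit per cpu */
static pthread_rwlock_t cpumask_lock = PTHREAD_RWLOCK_INITIALIZER;

/* Read-side: look at the mask without excluding other readers. */
static int cpu_in_channel_mask(unsigned int cpu)
{
    int set;

    pthread_rwlock_rdlock(&cpumask_lock);
    set = !!(channel_cpumask & (1ULL << cpu));
    pthread_rwlock_unlock(&cpumask_lock);
    return set;
}

/* Write-side: modifying the mask still takes the lock exclusively. */
static void cpu_set_channel_mask(unsigned int cpu, int on)
{
    pthread_rwlock_wrlock(&cpumask_lock);
    if (on)
        channel_cpumask |= 1ULL << cpu;
    else
        channel_cpumask &= ~(1ULL << cpu);
    pthread_rwlock_unlock(&cpumask_lock);
}

int main(void)   /* build with: cc -pthread */
{
    cpu_set_channel_mask(3, 1);
    printf("cpu3 in mask: %d\n", cpu_in_channel_mask(3));
    return 0;
}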

Jimmy
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

