
RE: [Xen-devel] cpuidle causing Dom0 soft lockups



>From: Jan Beulich
>Sent: 5 February 2010 18:37
>
>>>> "Tian, Kevin" <kevin.tian@xxxxxxxxx> 05.02.10 10:52 >>>
>>Is the 100,000 per second a sum across all non-duty CPUs, or observed
>>on just one? How about the level without the patch?
>
>That's 100,000 per CPU per second. Normally on an idle system the
>number is about 25 per CPU per second.
>

I think I know the reason. With your patch, only one duty CPU now
updates the global jiffies, but that duty CPU may be preempted for
several ticks. Meanwhile, all the non-duty CPUs calculate their
singleshot timer expirations based on jiffies (see stop_hz_timer),
and under some conditions the result is jiffies+1. While the duty
CPU is preempted, the jiffies value can fall several ticks behind
the actual time. That means a non-duty CPU may request an already
stale time from Xen, which Xen treats as expired and answers with an
immediately pending timer interrupt, so vcpu_block returns right
away. Since the non-duty CPU is in the idle loop, it spins around
this path, pushing your interrupt count very high until the duty CPU
gets rescheduled and updates jiffies.
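
To make the failure path concrete, a rough sketch of the shape of
that code path is below. This is not the actual XenLinux source;
next_timer_interrupt() stands for however the jiffies-based deadline
is obtained, and jiffies_to_st() for the jiffies-to-system-time
conversion:

static void stop_hz_timer_sketch(void)
{
    struct vcpu_set_singleshot_timer singleshot;

    /* Wakeup target derived from the global jiffies value.  If the
     * duty CPU has been preempted, jiffies (and hence this target)
     * lags real time by several ticks. */
    unsigned long j = next_timer_interrupt();

    /* Convert the jiffies target to Xen system time.  A stale
     * jiffies produces an absolute timeout already in the past. */
    singleshot.timeout_abs_ns = jiffies_to_st(j);
    singleshot.flags = 0;

    /* Xen treats a past timeout as expired and pends the timer
     * interrupt immediately, so the block in the idle loop returns
     * at once and we spin straight back here. */
    HYPERVISOR_vcpu_op(VCPUOP_set_singleshot_timer,
                       smp_processor_id(), &singleshot);
}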

Without your patch, jiffies is updated in a timely manner as long as
there is any running CPU in dom0.

One possible option is to take jiffies, the system timestamp, and the
per-CPU timestamp in stop_hz_timer and generate an up-to-date local
value (see the sketch below). Alternatively, a per-CPU jiffies could
be maintained in the per-CPU timer interrupt and used only in per-CPU
contexts such as stop_hz_timer.
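
For the first option, something along these lines could work -- again
only a sketch, where per_cpu_system_time(), jiffies_system_time and
NS_PER_TICK stand in for whatever time-xen.c actually exposes:

static unsigned long local_latest_jiffies(int cpu)
{
    /* This CPU's own view of Xen system time, which keeps advancing
     * even while the duty CPU is preempted. */
    u64 now = per_cpu_system_time(cpu);

    /* System time at which the duty CPU last advanced jiffies. */
    u64 delta = (now > jiffies_system_time)
                ? now - jiffies_system_time : 0;

    /* Convert the lag into ticks and add it to the stale jiffies. */
    do_div(delta, NS_PER_TICK);

    return jiffies + (unsigned long)delta;
}

The non-duty CPU would then base its singleshot expiration on
local_latest_jiffies() instead of raw jiffies, so it never asks Xen
for a time that has already passed.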

Thanks
Kevin
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel