Re: [Xen-devel] [PATCH] Fix softlockup issue after vcpu hotplug
On 1/2/07 23:44, "Graham, Simon" <Simon.Graham@xxxxxxxxxxx> wrote:

> I thought about this - the problem is I don't know what the current
> value of the watchdog is, so if stolen is greater than zero, I need to
> do it once immediately and then once every 5s or so in the loop - I can't
> just do it the first n times through the loop because then I might do
> 10s worth of jiffy updates following all the watchdog touches... (BTW -
> the test for NS_PER_TICK*100 was just for the purposes of
> instrumentation)

I don't mean to touch it only every 5s in the loop, I mean to touch it
every time round the loop, but only if stolen is greater than five seconds:

    while (delta >= NS_PER_TICK) {
        ...;
        if (stolen > <five seconds>)
            touch_softlockup_watchdog();
    }

The point is that you don't want to touch the watchdog whenever you have
small amounts of time stolen from you, because that will happen very often
(wakeup latencies, preemption) and cause the watchdog to not do its job
properly and/or in a timely fashion when something *does* go wrong. If you
touch it just about every time you enter the timer ISR, you may as well
disable the softlockup mechanism altogether! :-)

The only theoretical problem with this approach is if you got time stolen
that accumulated to more than five seconds, but this happened in two or
more bursts, back-to-back. Then no one stolen period would be enough to
trigger the touch, but also the guest may not be running for long enough
to schedule the softlockup thread. I really don't believe this would ever
be an issue in practice, however, given sane scheduling parameters and
load on the system. If the system were loaded/configured so it could
happen, the guest would be in dire straits for other reasons.

 -- Keir