Re: [PATCH 1/5] x86/time: deal with negative deltas in get_s_time_fixed()
On 06.01.2026 21:10, Антон Марков wrote:
> Hi, I'm not sure about the other places. In hvm_load_cpu_ctxt
> (xen/arch/x86/hvm/hvm.c), it was easy to catch because
> process_pending_softirqs is frequently called there, which in turn
> processes softirqs from the timer (where the timestamp is updated).
> After I fixed sync_tsc in hvm_load_cpu_ctxt, the problem stopped
> reproducing under no load. However, when the number of vCPUs is 4
> times greater than the number of CPUs (under heavy load), the problem
> rarely reoccurs (mostly during snapshot restores, in
> process_pending_softirqs calls), and this is no longer a simple case.
> If get_s_time_fixed can indeed be interrupted during execution after
> rdtsc_ordered, then the current fix is insufficient. It would then be
> necessary to atomically copy "t->stamp" to the stack between
> local_irq_disable and local_irq_enable (as in local_time_calibration),
> and then work with the copy, confident in its lifetime and
> immutability. But until get_s_time_fixed is proven to be
> interruptible, this is premature, so your fix is sufficient. I think I
> need more information and testing to say more.

While the cpu_calibration per-CPU variable is updated from IRQ context,
the cpu_time one isn't. Hence t->stamp's contents cannot change behind
the back of get_s_time_fixed(). I wonder whether ...

> Regarding the other scale_delta calls, if they include values
> calculated from externally saved tsc values that could have become
> stale during the process_pending_softirqs call, this definitely needs
> to be fixed.

... another similar issue (possibly one not included in the set of
remarks I have in the patch, as none of those look related to what you
describe) might be causing the remaining, more rare problems you say
you see. That set of remarks is actually a result of me going over all
other scale_delta() calls, but of course I may have got the analysis
wrong.
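For readers following along: the snapshot pattern under discussion can be sketched roughly as below. This is a minimal illustration, not Xen's actual code; the struct, the stub IRQ helpers, the 1:1 scale, and the function name get_s_time_sketch are all illustrative assumptions. The idea is that copying the stamp with interrupts disabled keeps its tsc/stime pair consistent, while a signed delta tolerates a TSC read that lags the stamp (the "negative deltas" of the patch subject).

```c
#include <stdint.h>

/* Illustrative stand-ins for Xen's per-CPU time bookkeeping.
 * Names and layout are assumptions for this sketch only. */
struct cpu_time_stamp {
    uint64_t local_tsc;   /* TSC value at last calibration */
    uint64_t local_stime; /* system time at last calibration */
};

/* Stubs: on real hardware these would mask/unmask interrupts. */
static inline void local_irq_disable(void) { /* cli */ }
static inline void local_irq_enable(void)  { /* sti */ }

static struct cpu_time_stamp this_cpu_stamp = { 1000, 5000 };

/* Snapshot the stamp with IRQs off, then work with the copy, as the
 * thread suggests local_time_calibration() does.  Using a signed
 * delta means a TSC read slightly behind the stamp still yields a
 * sensible (earlier) time rather than a huge wrapped value. */
static uint64_t get_s_time_sketch(uint64_t tsc)
{
    struct cpu_time_stamp stamp;

    local_irq_disable();
    stamp = this_cpu_stamp;   /* consistent copy: tsc and stime pair up */
    local_irq_enable();

    int64_t delta = (int64_t)(tsc - stamp.local_tsc);
    /* Real code would apply scale_delta(); assume a 1:1 scale here. */
    return stamp.local_stime + delta;
}
```

With the assumed stamp of (tsc=1000, stime=5000), a later TSC read of 1200 maps to stime 5200, and an earlier read of 900 maps to 4900 instead of wrapping.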
As to using 4 times as many vCPU-s as there are pCPU-s (and then heavy
load) - while I don't think we have a support statement for such
upstream (when probably we should), iirc for our (SUSE's) products we
would consider that unsupported. Just fyi.

Also, btw, please don't top-post.

Jan