RE: [Xen-devel] Kernel printk timestamps and walltime drift
Keir,

Was I correct in my understanding of how the timestamp is obtained, via this call sequence?

    vprintk -> printk_clock -> sched_clock -> rdtscll

In other words, does your patchset use rdtscll (i.e., the RDTSC instruction) on i386, 32-bit, PV Linux to compute that time? If not, how is it derived (what file/function should I look at for sched_clock)?

While you are right that the only artifact we have observed is the drifting timestamps, a future product of ours may need an accurate TSC presented to the VM. If the time is derived from the TSC, as I'm conjecturing, then this drift is something we are going to have to worry about.

Thank you, Keir and Dan

-----Original Message-----
From: Keir Fraser [mailto:keir.fraser@xxxxxxxxxxxxx]
Sent: Friday, June 13, 2008 5:36 PM
To: dan.magenheimer@xxxxxxxxxx; Roger Cruz
Cc: xen-devel
Subject: Re: [Xen-devel] Kernel printk timestamps and walltime drift

On 13/6/08 22:21, "Dan Magenheimer" <dan.magenheimer@xxxxxxxxxx> wrote:

> Hi Roger --
>
> Sorry, I made a bad assumption... the solution I provided
> works for hvm domains. For pvm domains, the guest clock
> will generally be determined by xen system time, and as
> Keir said, if the underlying clock xen is using skews from
> wallclock time, then xen system time will skew also.
>
> I think the solution for this situation is to ensure
> that /proc/sys/xen/independent_wallclock is set to 0
> for each of your pvm domains and run ntpd on domain0.

Since sched_clock() is not built on top of xtime, this won't help. sched_clock()'s implementation is tightly bound to Xen system time in our Linux patchset. It could be changed, but really I think these timestamps are the only noticeable artefact.

 -- Keir
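As a rough illustration of what "tightly bound to Xen system time" means here: a PV guest can extrapolate system time from the raw TSC using the per-VCPU parameters Xen publishes in the shared info page (tsc_timestamp, system_time, tsc_to_system_mul, tsc_shift). The sketch below is an assumption-laden paraphrase of that scheme, not the literal sched_clock() from the patchset; the struct and function names are illustrative.

    #include <stdint.h>

    /* Illustrative stand-in for Xen's per-VCPU time snapshot. */
    struct vcpu_time_snapshot {
        uint64_t tsc_timestamp;     /* TSC value when Xen last updated this */
        uint64_t system_time;       /* Xen system time (ns) at that TSC value */
        uint32_t tsc_to_system_mul; /* 32.32 fixed-point TSC->ns multiplier */
        int8_t   tsc_shift;         /* pre-scale applied before the multiply */
    };

    /* Roughly what the kernel's rdtscll() macro boils down to on i386. */
    static inline uint64_t rdtsc64(void)
    {
        uint32_t lo, hi;
        __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
    }

    /* Scale a TSC delta to nanoseconds with the Xen-supplied parameters.
     * (On real i386 this wide multiply is done with hand-rolled 32-bit
     * math; unsigned __int128 keeps the sketch short.) */
    static uint64_t scale_delta(uint64_t delta,
                                const struct vcpu_time_snapshot *t)
    {
        if (t->tsc_shift >= 0)
            delta <<= t->tsc_shift;
        else
            delta >>= -t->tsc_shift;
        return (uint64_t)(((unsigned __int128)delta *
                           t->tsc_to_system_mul) >> 32);
    }

    /* sched_clock()-style reading: extrapolate from the last snapshot.
     * The real guest code also rechecks a version field so it never
     * uses parameters Xen is updating mid-read; omitted for brevity. */
    uint64_t xen_system_time_ns(const struct vcpu_time_snapshot *t)
    {
        uint64_t delta = rdtsc64() - t->tsc_timestamp;
        return t->system_time + scale_delta(delta, t);
    }

The point for this thread: because the result is a raw TSC delta scaled by tsc_to_system_mul, any calibration error or cross-CPU TSC skew feeds straight into sched_clock() and hence into the printk timestamps, and neither xtime, independent_wallclock, nor ntpd on domain0 can correct it.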