
[Xen-devel] Re: [PATCH?] monotonically increasing Xen system time



Xen itself doesn't, in most cases, care about a bit of skew across CPUs. So I
think the current get_s_time() is fine, and we can build monotonicity on top
where we want it (notably in hvm_get_guest_time(), as has already been done).

 -- Keir

On 28/7/08 18:04, "Dan Magenheimer" <dan.magenheimer@xxxxxxxxxx> wrote:

> (This is probably post-3.3, but feedback would be appreciated
> as I would hope it could go into 3.3.1 if it doesn't make 3.3.)
> 
> I've finally surrendered to the fact that inter-CPU stime skew
> can't be reduced to the point where it can be ignored,
> at least on non-tsc-invariant boxes.  It always seems to
> max out at several microseconds at least, and in some cases
> at tens of microseconds.  This is probably a result of
> crystal oscillator drift, and perhaps the "beating" of
> the platform timer crystal against the tsc crystal.
> 
> So the attached patch adds a get_s_time_mono() call that
> always returns a monotonically INcreasing (not just
> non-decreasing) stime.  A stime_minstep is computed that
> guarantees mono_stime can't increase faster than stime,
> even if all processors are pounding on stime in a loop.
> This minstep is also the resolution of mono_stime.
> (On my dual-core box it's 24ns... your mileage may vary.)
> 
> I want to use this in hvm_get_guest_time() (and thus for
> softtsc) but it may also be appropriate for at least some
> of the many uses of NOW() in Xen.  If so, it might make
> sense that this should be the default get_s_time() and the
> current get_s_time() should be renamed get_local_s_time().
> In any case, there are most likely other uses for it
> in Xen so I didn't want to build it just into
> hvm_get_guest_time().
> 
> (Note that init_xen_time() was moved down in __start_xen()
> because num_online_cpus() gave the wrong answer at its
> current position.)
> 
> Signed-off-by: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>
> 
> ===================================
> Thanks... for the memory
> I really could use more / My throughput's on the floor
> The balloon is flat / My swap disk's fat / I've OOM's in store
> Overcommitted so much
> (with apologies to the late great Bob Hope)



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
