
RE: [xen-devel] System time monotonicity



> >But I also observe that all of the hvm platform timer (pit,
> >hpet, and pmtimer) code is built on top of the physical TSC
> >plus the vmx/svm tsc_offset, which doesn't seem to be affected
> >by the Xen TSC synchronization.  True?
>
> For cpus on the same system bus, driven by one crystal, TSC drift
> among cpus may be just a dozen cycles after the boot-time sync,
> which is negligible compared to the migration overhead, so an HVM
> guest is unlikely to observe non-monotonic behavior after resume.

I agree this case is not much of a problem.
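
For reference, the hvm platform timer emulation is essentially built
on whatever the current pcpu's TSC reads plus the per-vcpu vmx/svm
tsc_offset.  A minimal sketch of that path (the names and the
constant-rate assumption are illustrative, not the actual Xen
symbols):

    #include <stdint.h>
    #include <x86intrin.h>

    /* Illustrative stand-ins, not the real Xen symbols. */
    static uint64_t vcpu_tsc_offset = 0;   /* per-vcpu offset (VMX/SVM) */
    static uint64_t tsc_khz = 2400000;     /* assumed-constant TSC rate */

    /* What the guest effectively sees when the emulated pit/hpet/
     * pmtimer code samples the clock: the current pcpu's TSC plus
     * the per-vcpu offset. */
    static uint64_t guest_tsc(void)
    {
        return __rdtsc() + vcpu_tsc_offset;
    }

    /* Scale ticks to nanoseconds at the nominal rate (overflow is
     * ignored for brevity).  If the vcpu is rescheduled onto a pcpu
     * whose TSC lags the previous one, this value can step backwards;
     * that is the non-monotonicity in question. */
    static uint64_t guest_time_ns(void)
    {
        return guest_tsc() * 1000000ULL / tsc_khz;
    }

Nothing in that path compensates for per-pcpu TSC skew, which is why
the cases below matter.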

> The issue comes with cpus running at different frequencies, e.g.
> driven by multiple crystals, or with on-demand frequency changes
> that affect the TSC too. An HVM guest can be configured to avoid
> migrating among cpus with different TSC frequencies, e.g. by
> limiting its cpu affinity to cpus on the same system bus.

These are the cases I am worried about.  The Linux kernel seems
to have a number of checks that mark the TSC as unstable, but
Xen does not, nor (I think) does Xen expose this information
anywhere.  So it seems SMP guests need to be pinned to physical
CPUs that are measured to have synced TSCs to guarantee that
the (virtual) platform timer is monotonic.
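
Something along the lines of Linux's boot-time sync check could
presumably be run by the toolstack to decide which pcpus a guest can
safely float across.  A rough userspace sketch, purely for
illustration; the helper names and round count are arbitrary, and
this is not an existing Xen or Linux interface:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <x86intrin.h>

    /* Pin the calling thread to one pcpu (error handling omitted). */
    static void pin_to_cpu(int cpu)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        sched_setaffinity(0, sizeof(set), &set);
    }

    /* Bounce between two pcpus and verify the TSC never appears to
     * step backwards across a migration. */
    static int tscs_look_synced(int cpu_a, int cpu_b, int rounds)
    {
        uint64_t prev = 0;
        for (int i = 0; i < rounds; i++) {
            pin_to_cpu((i & 1) ? cpu_b : cpu_a);
            uint64_t now = __rdtsc();
            if (now < prev)
                return 0;        /* TSC went backwards across cpus */
            prev = now;
        }
        return 1;
    }

    int main(void)
    {
        printf("cpus 0/1 %s\n",
               tscs_look_synced(0, 1, 100000) ?
               "look synced" : "are NOT synced");
        return 0;
    }

Note that migration latency masks small skews, so a probe like this
only catches gross desynchronization; it is no substitute for the
hardware actually guaranteeing a synchronized TSC.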

> Or you have to configure HVM guest to not trust TSC...

Yes, that's what I'm thinking... like Linux, Xen could/should
build the virtual platform timers on a physical clocksource other
than the TSC unless all of the potential vcpu->pcpu mappings land
on pcpus with synchronized TSCs.
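
As a strawman, the selection might look something like this
(hypothetical types and names, not the actual Xen hvm timer code):

    /* Pick a clocksource to back a guest's virtual platform timers. */
    enum hvm_clock_backend { BACKEND_TSC, BACKEND_HPET, BACKEND_PMTIMER };

    struct domain_info {
        int        nr_affine_pcpus;
        const int *affine_pcpus;  /* pcpus the guest's vcpus may run on */
    };

    /* Assumed-available predicates, e.g. fed by a probe like the one
     * sketched earlier. */
    extern int pcpus_tsc_synced(int pcpu_a, int pcpu_b);
    extern int platform_has_hpet(void);

    static enum hvm_clock_backend
    choose_hvm_clock_backend(const struct domain_info *d)
    {
        /* The TSC is only safe if every pcpu the guest can land on
         * ticks from the same, synchronized time base. */
        for (int i = 1; i < d->nr_affine_pcpus; i++)
            if (!pcpus_tsc_synced(d->affine_pcpus[0], d->affine_pcpus[i]))
                return platform_has_hpet() ? BACKEND_HPET
                                           : BACKEND_PMTIMER;

        return BACKEND_TSC;
    }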

I assume this problem is worse on multi-socket HyperTransport
and future Intel QPI boxes?  Or are the TSC (and frequency
changes) synchronized across such systems?

Thanks,
Dan