Re: [Xen-devel] [PATCH 2/5] x86/time: implement tsc as clocksource
>>> On 22.03.16 at 16:51, <joao.m.martins@xxxxxxxxxx> wrote:
> On 03/22/2016 12:46 PM, Jan Beulich wrote:
>>>>> On 22.03.16 at 13:41, <joao.m.martins@xxxxxxxxxx> wrote:
>>>
>>> On 03/18/2016 08:21 PM, Andrew Cooper wrote:
>>>> On 17/03/16 16:12, Joao Martins wrote:
>>>>> Introduce support for using the TSC as platform time, which is the
>>>>> highest-resolution time source and the most performant to read
>>>>> (~20 ns). There are, however, several problems associated with its
>>>>> usage, and there isn't a complete (and architecturally defined)
>>>>> guarantee that all machines will provide a reliable and monotonic
>>>>> TSC across all CPUs, on different sockets and different P/C states.
>>>>> I believe Intel is the only vendor that can guarantee that. For this
>>>>> reason it is given lower priority than HPET unless the administrator
>>>>> changes the "clocksource" boot option to "tsc". Upon initializing
>>>>> it, we also check for time warps and for the appropriate CPU
>>>>> features, i.e. TSC_RELIABLE, CONSTANT_TSC and NONSTOP_TSC. If none
>>>>> of these conditions are met, we fail so as to fall back to the next
>>>>> available clocksource.
>>>>>
>>>>> It is also worth noting that with clocksource=tsc there isn't any
>>>>> need to synchronise with another clocksource, and I could verify
>>>>> that a great portion of the time skew was eliminated, with far fewer
>>>>> time warps happening. With HPET I observed ~500 warps of around
>>>>> 27 us over a period of 1h, and with TSC this went down to 50 warps
>>>>> in the same period, each warp being < 100 ns. The warps still exist,
>>>>> but are only related to cross-CPU calibration (i.e. how long it
>>>>> takes to rendezvous with the master), which a later patch in this
>>>>> series aims to solve.
>>>>>
>>>>> Signed-off-by: Joao Martins <joao.m.martins@xxxxxxxxxx>
>>>>> ---
>>>>> Cc: Keir Fraser <keir@xxxxxxx>
>>>>> Cc: Jan Beulich <jbeulich@xxxxxxxx>
>>>>> Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>>>>
>>>> Some style corrections, but no functional problems.
>>>>
>>>> Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>>>>
>>> I found one issue in the tsc clocksource initialization path with
>>> respect to the reliability check. This check runs when the platform
>>> timer is initialized, but not all CPUs are up at that point
>>> (init_xen_time()), which means the check will always succeed. So for
>>> clocksource=tsc I need to defer initialization to a later point (in
>>> verify_tsc_reliability()) and, if successful, switch to that
>>> clocksource. Unless you disagree, v2 would have this and just requires
>>> one additional preparatory change prior to this patch.
>>
>> Hmm, that's suspicious when thinking about CPU hotplug: What
>> do you intend to do when a CPU comes online later, failing the
>> check?
> Good point, but I am not sure whether that would happen. The initcall
> verify_tsc_reliability seems to be called only for the boot processor.
> Or are you saying that initcalls are also invoked when hotplugging CPUs
> later on? If that's the case then my suggestion wouldn't work as you
> point out, or rather not without runtime switching support, as you
> point out below.

Looks like I didn't express myself clearly enough: "failing the
check" wasn't meant to imply the failure would actually occur, but
rather that failure would occur if the check was run on that CPU.
IOW the case of a CPU getting hotplugged which doesn't satisfy
the needs for selecting TSC as the clock source.
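[For illustration only, a minimal user-space sketch of the kind of hardware capability check the commit message describes. It tests the architectural "invariant TSC" bit (CPUID.80000007H:EDX[8]) via GCC's <cpuid.h>; the function name tsc_is_invariant is invented here. Xen itself keys off its own synthesized feature flags (TSC_RELIABLE, CONSTANT_TSC, NONSTOP_TSC) and additionally runs a cross-CPU time-warp test, so this is not the hypervisor's actual code path.]

/*
 * Sketch: report whether the CPU advertises an invariant (constant +
 * non-stop) TSC, the property the reliability check above depends on.
 */
#include <cpuid.h>
#include <stdio.h>

static int tsc_is_invariant(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Extended leaf 0x80000007: Advanced Power Management information. */
    if (!__get_cpuid(0x80000007, &eax, &ebx, &ecx, &edx))
        return 0;                /* Leaf unavailable: assume unreliable. */

    return !!(edx & (1u << 8));  /* Bit 8: invariant TSC. */
}

int main(void)
{
    printf("Invariant TSC: %s\n", tsc_is_invariant() ? "yes" : "no");
    return 0;
}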
>> So far we don't do runtime switching of the clock source,
>> and I'm not sure that's a good idea to do when there are running
>> guests.
> Totally agree, but I would be proposing to do this in the initialization
> phase, where there wouldn't be guests running yet. We would start with
> HPET, and only switch to TSC if that check passes (and this would happen
> once).
>
> A simpler alternative would be to call init_xen_time after all CPUs are
> brought up (which would also keep this patch as is), but I am not sure
> about the repercussions of that.

I don't see how either would help with the hotplug case.

Jan
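[Again for illustration, a hypothetical sketch of the boot-time ordering being proposed above; every function name here is an invented stub, not a Xen interface. The point is the sequencing: the platform timer starts on HPET, the TSC warp/reliability check only runs once all CPUs are online, and the switch to TSC happens exactly once, before any guest exists, so no runtime switching is needed.]

#include <stdbool.h>
#include <stdio.h>

/* Stubs standing in for the real hypervisor routines. */
static void init_platform_timer_hpet(void)     { puts("platform timer: HPET"); }
static void bring_up_secondary_cpus(void)      { puts("all CPUs online"); }
static bool tsc_check_warps_all_cpus(void)     { return true; /* pretend it passed */ }
static void switch_platform_timer_to_tsc(void) { puts("platform timer: TSC"); }

int main(void)
{
    init_platform_timer_hpet();   /* analogous to init_xen_time(): only the BSP is up */
    bring_up_secondary_cpus();    /* APs come online afterwards */

    /*
     * Deferred check, analogous to doing it in verify_tsc_reliability():
     * only now is a cross-CPU warp test meaningful.  The switch happens
     * once, before guests run; hotplugged CPUs are not revisited here,
     * which is the gap Jan points out.
     */
    if (tsc_check_warps_all_cpus())
        switch_platform_timer_to_tsc();

    return 0;
}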