
Re: [Xen-devel] [PATCH 1/8] x86/time: improve cross-CPU clock monotonicity (and more)



On 06/15/2016 11:26 AM, Jan Beulich wrote:
> --- a/xen/arch/x86/time.c
> +++ b/xen/arch/x86/time.c
> @@ -1328,21 +1328,51 @@ static void time_calibration(void *unuse
>                       &r, 1);
>  }
>  
> +static struct {
> +    s_time_t local_stime, master_stime;
> +} ap_bringup_ref;
> +
> +void time_latch_stamps(void) {
                                ^ style: shouldn't this bracket be on a new line?

> +    unsigned long flags;
> +    u64 tsc;
> +
> +    local_irq_save(flags);
> +    ap_bringup_ref.master_stime = read_platform_stime();
> +    tsc = rdtsc();
> +    local_irq_restore(flags);
> +
> +    ap_bringup_ref.local_stime = get_s_time_fixed(tsc);
> +}
> +
>  void init_percpu_time(void)
>  {
>      struct cpu_time *t = &this_cpu(cpu_time);
>      unsigned long flags;
> +    u64 tsc;
>      s_time_t now;
>  
>      /* Initial estimate for TSC rate. */
>      t->tsc_scale = per_cpu(cpu_time, 0).tsc_scale;
>  
>      local_irq_save(flags);
> -    t->local_tsc_stamp = rdtsc();
>      now = read_platform_stime();
Do we need to take another read from the platform timer here? We could
just reuse the one in ap_bringup_ref.master_stime. See also below.

> +    tsc = rdtsc();
>      local_irq_restore(flags);
>  
>      t->stime_master_stamp = now;
> +    /*
> +     * To avoid a discontinuity (TSC and platform clock can't be expected
> +     * to be in perfect sync), initialization here needs to match up with
> +     * local_time_calibration()'s decision whether to use its fast path.
> +     */
> +    if ( boot_cpu_has(X86_FEATURE_CONSTANT_TSC) )
> +    {
> +        if ( system_state < SYS_STATE_smp_boot )
> +            now = get_s_time_fixed(tsc);
> +        else
> +            now += ap_bringup_ref.local_stime - ap_bringup_ref.master_stime;
> +    }
> +    t->local_tsc_stamp    = tsc;

As far as my current experimentation goes (both v1 of this patch and the
whole series on v2), the remaining warps could be fixed with this seeding,
though I think the seeding might not yet be correct. Essentially, on the
boot CPU you do get_s_time_fixed(tsc), which gives you a value calculated
strictly from the TSC since boot_tsc_stamp. For the other CPUs you add the
skew between TSC and platform timer, calculated before AP bringup, to a
value just read from the platform timer; I assume you take this as an
approximation of the skew between TSC and platform timer.

So, before this patch cpu_time is filled like this:

CPU 0: M0 , T0
CPU 1: M1 , T1
CPU 2: M2 , T2

With this patch it goes like:

CPU 0: L0 , T0
CPU 1: L1 = (M1 + (L - M)), T1
CPU 2: L2 = (M2 + (L - M)), T2

Given that,

1) the values separated by commas are, respectively, the local stime and
TSC written into cpu_time
2) M2 > M1 > M0 are values read from the platform timer
3) L and M are values calculated solely on the CPU doing AP bringup

After this seeding, local_time_calibration will increment the deltas
between calibrations every second, based entirely on a monotonically
increasing TSC. Although I see two issues here: 1) the skew is added to a
new read of the platform timer, which may already be offset from the one
taken on the boot CPU, and 2) the skew you calculate doesn't account for
the current TSC. This means local stime and TSC are still not a matched
pair on the secondary CPUs. So it would be possible for a get_s_time(S0)
on CPU0 immediately followed by a get_s_time(S1) on CPU1 [S1 > S0] to
return a value that goes backwards, IOW L0 + scale(S0 - T0) being bigger
than L1 + scale(S1 - T1), and that is independent of the deltas being
added on every calibration.

So how about we do the seeding in another way?

1) Relying on individual CPU TSC like you do on CPU 0:

     t->stamp.master_stime = now;
+    t->stamp.local_tsc = 0;
+    t->stamp.local_stime = 0;
+
     if ( boot_cpu_has(X86_FEATURE_CONSTANT_TSC) )
     {
         now = get_s_time_fixed(tsc);
     }

(...)

Or 2) taking into account the skew between the platform timer and TSC
measured in init_percpu_time. Diff below, based on this series:

@@ -1394,11 +1508,14 @@ void init_percpu_time(void)
     t->tsc_scale = per_cpu(cpu_time, 0).tsc_scale;

     local_irq_save(flags);
-    now = read_platform_stime();
+    now = ap_bringup_ref.master_stime;
     tsc = rdtsc_ordered();
     local_irq_restore(flags);

     t->stamp.master_stime = now;
+    t->stamp.local_tsc = boot_tsc_stamp;
+    t->stamp.local_stime = 0;
+

     /*
      * To avoid a discontinuity (TSC and platform clock can't be expected
      * to be in perfect sync), initialization here needs to match up with
@@ -1406,10 +1523,12 @@ void init_percpu_time(void)
      */
     if ( boot_cpu_has(X86_FEATURE_CONSTANT_TSC) )
     {
+        u64 stime = get_s_time_fixed(tsc);
+
         if ( system_state < SYS_STATE_smp_boot )
-            now = get_s_time_fixed(tsc);
+            now = stime;
         else
-            now += ap_bringup_ref.local_stime - ap_bringup_ref.master_stime;
+            now += stime - ap_bringup_ref.master_stime;
     }

The second option follows thinking similar to this patch's, but either
way local_stime will match the TSC in init_percpu_time, thus forming a
matched pair. I have a test similar to check_tsc_warp but using
get_s_time(), and with either change I no longer observe time going
backwards; without them I still observe it, even over short periods.
Thoughts? (Sorry for the long answer)

Joao

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
