[PATCH v4 1/3] x86/time: latch to-be-written TSC value early in rendezvous loop
To reduce latency on time_calibration_tsc_rendezvous()'s last loop
iteration, read the value to be written on the last iteration at the
end of the loop body (i.e. in particular at the end of the
second-to-last iteration). On my single-socket 18-core Skylake system
this reduces the average loop exit time on CPU0 (from the TSC write on
the last iteration until after the main loop) from around 32k cycles to
around 29k (albeit the values measured on separate runs vary quite
significantly).

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
v4: Different approach.
v3: New.
---
Of course it would also be nice to avoid the pretty likely branch
misprediction on the last iteration. But with the static prediction
hints having been rather short-lived in the architecture, I don't see
any good means to do so.

--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1683,7 +1683,7 @@ static void time_calibration_tsc_rendezv
     int i;
     struct calibration_rendezvous *r = _r;
     unsigned int total_cpus = cpumask_weight(&r->cpu_calibration_map);
-    uint64_t tsc = 0;
+    uint64_t tsc = 0, master_tsc = 0;
 
     /* Loop to get rid of cache effects on TSC skew. */
     for ( i = 4; i >= 0; i-- )
@@ -1708,7 +1708,7 @@ static void time_calibration_tsc_rendezv
             atomic_inc(&r->semaphore);
 
             if ( i == 0 )
-                write_tsc(r->master_tsc_stamp);
+                write_tsc(master_tsc);
 
             while ( atomic_read(&r->semaphore) != (2*total_cpus - 1) )
                 cpu_relax();
@@ -1730,7 +1730,7 @@ static void time_calibration_tsc_rendezv
             }
 
             if ( i == 0 )
-                write_tsc(r->master_tsc_stamp);
+                write_tsc(master_tsc);
 
             atomic_inc(&r->semaphore);
             while ( atomic_read(&r->semaphore) > total_cpus )
@@ -1739,9 +1739,17 @@ static void time_calibration_tsc_rendezv
 
         /* Just in case a read above ended up reading zero. */
         tsc += !tsc;
+
+        /*
+         * To reduce latency of the TSC write on the last iteration,
+         * fetch the value to be written into a local variable. To avoid
+         * introducing yet another conditional branch (which the CPU may
+         * have difficulty predicting well) do this on all iterations.
+         */
+        master_tsc = r->master_tsc_stamp;
     }
 
-    time_calibration_rendezvous_tail(r, tsc, r->master_tsc_stamp);
+    time_calibration_rendezvous_tail(r, tsc, master_tsc);
 }
 
 /* Ordinary rendezvous function which does not modify TSC values. */
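
For illustration only (not part of the patch): a minimal, self-contained
sketch of the latching pattern the patch applies. struct rendezvous,
write_counter(), and the barrier comments are hypothetical stand-ins for
Xen's struct calibration_rendezvous, write_tsc(), and the semaphore
handshake; only the shape of the technique carries over.

#include <stdint.h>

struct rendezvous {
    volatile uint64_t master_stamp;   /* published by the master CPU */
};

/* Hypothetical stand-in for write_tsc(); assumed to be latency-critical. */
void write_counter(uint64_t val)
{
    (void)val;
}

void rendezvous_loop(struct rendezvous *r)
{
    uint64_t latched = 0;
    int i;

    for ( i = 4; i >= 0; i-- )
    {
        /* ... synchronize with the other CPUs ... */

        /*
         * On the final iteration the write sources a local variable,
         * latched at the end of the previous iteration, instead of
         * loading the shared field on the timing-critical path.
         */
        if ( i == 0 )
            write_counter(latched);

        /* ... further synchronization ... */

        /*
         * Latch unconditionally on every iteration: guarding this with
         * "i == 1" would introduce another poorly predictable branch,
         * while the redundant earlier reads are off the critical path.
         */
        latched = r->master_stamp;
    }
}

The design point is the one the added code comment makes: the extra
reads on earlier iterations cost little because they sit off the
timing-critical path, whereas latching only on the second-to-last
iteration would add one more hard-to-predict branch next to the final
write.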