
Re: [Xen-devel] [PATCH v2 6/6] x86/time: implement PVCLOCK_TSC_STABLE_BIT



>>> On 05.04.16 at 23:34, <joao.m.martins@xxxxxxxxxx> wrote:
> On 04/05/2016 01:22 PM, Jan Beulich wrote:
>>>>> On 29.03.16 at 15:44, <joao.m.martins@xxxxxxxxxx> wrote:
>> But I'm opposed to this: For one, the variable being static here
>> means there is nothing that actually prevents CPU hotplug from
>> happening. And then I think this can, for all practical purposes,
>> be achieved by suitably using existing command line options, namely
>> "max_cpus=", such that set_nr_cpu_ids() won't allow for any
>> further CPUs to get added. Albeit I admit that if someone were
>> to bring down some CPU and then hotplug another one, we
>> might still be in trouble. So maybe the better approach would
>> be to fail onlining of CPUs that don't meet the criteria when
>> "clocksource=tsc"?
> True - max_cpus would produce the same effect. But I should point out
> that even with clocksource=tsc the rendezvous would be std_rendezvous,
> so the reference TSC is CPU 0's and the tsc_timestamps are those of
> the individual CPUs. So perhaps the criteria would be clocksource=tsc
> together with use_tsc_stable_bit.

Oh, of course I didn't mean this to be the precise condition, just
an outline. Considering use_tsc_stable_bit certainly makes sense.
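
To make the idea concrete, here is a minimal sketch of such an
online-time check (the function name, its wiring into the CPU-up path,
and the exact predicate are illustrative assumptions, not the actual
patch):

/*
 * Hypothetical sketch: refuse to online a CPU when system time is
 * TSC-driven and the stable-bit mode is active, since a hotplugged
 * CPU's TSC cannot be assumed synchronized with the boot-time ones.
 * tsc_check_cpu_up() and its call site are illustrative only.
 */
static int tsc_check_cpu_up(unsigned int cpu)
{
    if ( !strcmp(opt_clocksource, "tsc") && use_tsc_stable_bit )
        return -EPERM; /* fail onlining rather than break monotonicity */

    return 0;
}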

>>> @@ -1440,6 +1468,13 @@ static void time_calibration(void *unused)
>>>          .semaphore = ATOMIC_INIT(0)
>>>      };
>>>  
>>> +    if ( use_tsc_stable_bit )
>>> +    {
>>> +        local_irq_disable();
>>> +        r.master_stime = read_platform_stime(&r.master_tsc_stamp);
>>> +        local_irq_enable();
>>> +    }
>> 
>> So this can't be in time_calibration_nop_rendezvous() because
>> you want to avoid the actual rendezvousing. But isn't the
>> possibly much larger gap between read_platform_stime() (which
>> parallels the rdtsc()-s in the other two cases) and the get_s_time()
>> invocation then going to become a problem?
> Perhaps I am just not seeing the potential problem here.

I'm not sure there's a problem, I'm just asking because I've noticed
this behavioral difference.
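
For context, the nop rendezvous under discussion boils down to roughly
the following (a reconstruction from the thread, not a verbatim quote
of the patch; the master stamps are filled in once, with interrupts
disabled, by time_calibration() as in the hunk above):

static void time_calibration_nop_rendezvous(void *rv)
{
    const struct calibration_rendezvous *r = rv;
    struct cpu_calibration *c = &this_cpu(cpu_calibration);

    /* All CPUs share the one master (TSC, stime) pair taken by CPU 0 ... */
    c->local_tsc_stamp = r->master_tsc_stamp;
    /* ... but each samples its own system time, possibly much later. */
    c->stime_local_stamp = get_s_time();
    c->stime_master_stamp = r->master_stime;

    raise_softirq(TIME_CALIBRATE_SOFTIRQ);
}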

> The main difference I see between the two would be the base system
> time: read_platform_stime uses stime_platform_stamp as its base and
> computes a delta of the read_counter (i.e. rdtsc()) from the
> previously saved platform-wide stamp (platform_timer_stamp).
> get_s_time uses stime_local_stamp (updated from stime_master_stamp in
> local_time_calibration) as its base, plus the delta of rdtsc() from
> local_tsc_stamp. And since this is now all TSC, and the TSC increases
> monotonically and is synchronized across CPUs, both calls would end up
> returning the same, always up-to-date value, whether or not cpu_time
> has a larger gap from stime_platform_stamp. Unless the concern you are
> raising comes from the fact that CPU 0 calibrates much sooner than the
> last calibrated CPU, as opposed to roughly at the same time as with
> std_rendezvous?
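
In code terms, the two paths described above reduce to roughly the
following (simplified sketches based on xen/arch/x86/time.c, with
scaling and error-correction details omitted; with clocksource=tsc the
platform read_counter() is rdtsc() as well, so both are affine
functions of the same monotonic counter):

/* Platform time: global base stamp plus the platform counter's delta. */
static s_time_t platform_stime_sketch(void)
{
    uint64_t count = plt_src.read_counter(); /* rdtsc() for clocksource=tsc */

    return stime_platform_stamp +
           scale_delta(count - platform_timer_stamp, &plt_scale);
}

/* Local time: per-CPU base stamp plus the local TSC's delta. */
static s_time_t local_stime_sketch(void)
{
    const struct cpu_time *t = &this_cpu(cpu_time);

    return t->stime_local_stamp +
           scale_delta(rdtsc() - t->local_tsc_stamp, &t->tsc_scale);
}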

In a way, yes. I'm concerned by the two time stamps no longer
being obtained at (almost) the same time. If that's not having
any bad consequences, all the better.

Jan

