
Re: [Xen-devel] [PATCH 1/8] x86/time: improve cross-CPU clock monotonicity (and more)



>>> On 21.06.16 at 14:05, <joao.m.martins@xxxxxxxxxx> wrote:

> 
> On 06/17/2016 08:32 AM, Jan Beulich wrote:
>>>>> On 16.06.16 at 22:27, <joao.m.martins@xxxxxxxxxx> wrote:
>>>> I.e. my plan was, once the backwards moves are small enough, to maybe
>>>> indeed compensate them by maintaining a global variable tracking
>>>> the most recently returned value. There are issues with such an
>>>> approach too, though: HT effects can result in one hyperthread
>>>> making it just past that check of the global, then hardware
>>>> switching to the other hyperthread, NOW() producing a slightly
>>>> larger value there, and hardware switching back to the first
>>>> hyperthread only after the second one consumed the result of
>>>> NOW(). Dario's use would be unaffected by this aiui, as his NOW()
>>>> invocations are globally serialized through a spinlock, but arbitrary
>>>> NOW() invocations on two hyperthreads can't be made such that
>>>> the invoking party can be guaranteed to see strictly monotonic
>>>> values.
>>>>
>>>> And btw., similar considerations apply for two fully independent
>>>> CPUs, if one runs at a much higher P-state than the other (i.e.
>>>> the faster one could overtake the slower one between the
>>>> monotonicity check in NOW() and the callers consuming the returned
>>>> values). So in the end I'm not sure it's worth the performance hit
>>>> such a global monotonicity check would incur, and therefore I didn't
>>>> make a respective patch part of this series.
>>>>
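To make the overtaking window concrete, the check being weighed here
would look roughly like the sketch below - self-contained C11 userspace
code rather than Xen code, with get_raw_now() standing in for Xen's
get_s_time() (the backend behind NOW()) and last_now an invented name.
The closing comment marks the window that no clamp inside the function
can close:

--
#include <stdatomic.h>
#include <stdint.h>
#include <time.h>

/* Stand-in for Xen's get_s_time(); any raw, possibly cross-CPU
 * divergent clock source would do for this illustration. */
static uint64_t get_raw_now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

static _Atomic uint64_t last_now;    /* most recently returned value */

static uint64_t now_clamped(void)
{
    uint64_t ret  = get_raw_now();
    uint64_t last = atomic_load(&last_now);

    /* Publish ret iff it is the newest value seen so far; a failed
     * compare-exchange reloads 'last' and we retry against it. */
    while (ret > last &&
           !atomic_compare_exchange_weak(&last_now, &last, ret))
        ;

    /*
     * The window described above sits here: we are past the check,
     * but before our caller consumes the result a sibling hyperthread
     * (or a CPU in a higher P-state) can both produce and consume a
     * larger value.  The clamp orders the values returned, not the
     * order in which callers observe them.
     */
    return ret > last ? ret : last;
}
--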
>>>
>>> Hm, guests' pvclock should have faced similar issues too, as their
>>> local stamps for each vcpu diverge. Linux commit 489fb49 ("x86,
>>> paravirt: Add a global synchronization point for pvclock") describes
>>> a fix for situations much like the scenarios you just described,
>>> which led to a global variable keeping track of the most recent
>>> timestamp. One important chunk of that commit is pasted below for
>>> convenience:
>>>
>>> --
>>> /*
>>>  * Assumption here is that last_value, a global accumulator, always goes
>>>  * forward. If we are less than that, we should not be much smaller.
>>>  * We assume there is an error margin we're inside, and then the correction
>>>  * does not sacrifice accuracy.
>>>  *
>>>  * For reads: global may have changed between test and return,
>>>  * but this means someone else poked the clock at a later time.
>>>  * We just need to make sure we are not seeing a backwards event.
>>>  *
>>>  * For updates: last_value = ret is not enough, since two vcpus could be
>>>  * updating at the same time, and one of them could be slightly behind,
>>>  * making the assumption that last_value always goes forward fail to hold.
>>>  */
>>>  last = atomic64_read(&last_value);
>>>  do {
>>>      if (ret < last)
>>>          return last;
>>>      last = atomic64_cmpxchg(&last_value, last, ret);
>>>  } while (unlikely(last != ret));
>>> --
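For anyone puzzling over the loop: atomic64_cmpxchg() returns the value
that was in last_value before the (attempted) exchange, so a successful
swap with last < ret still leaves last != ret and costs one more
iteration, in which the second cmpxchg fails and hands back ret (or
something newer). A minimal userspace rendering of the same pattern in
C11 atomics - pvclock_style_read() and cmpxchg64() are illustrative
names, not part of the commit:

--
#include <stdatomic.h>
#include <stdint.h>

static _Atomic uint64_t last_value;  /* global accumulator, as above */

/* Mimic the kernel's atomic64_cmpxchg(): return the prior value of
 * *v; the exchange happens iff that value equalled 'old'. */
static uint64_t cmpxchg64(_Atomic uint64_t *v, uint64_t old, uint64_t new)
{
    /* C11 writes the observed value back into 'old' on failure; on
     * success 'old' already equals the prior value. */
    atomic_compare_exchange_strong(v, &old, new);
    return old;
}

/* 'ret' is the raw per-vcpu reading computed earlier in the caller. */
static uint64_t pvclock_style_read(uint64_t ret)
{
    uint64_t last = atomic_load(&last_value);

    do {
        if (ret < last)
            return last;      /* clamp: never report a backwards move */
        last = cmpxchg64(&last_value, last, ret);
        /* Successful swap: 'last' is the old value, != ret, so one
         * more pass confirms last_value == ret (or newer) and exits. */
    } while (last != ret);

    return ret;
}
--

C11's compare-exchange writes the observed value back into its expected
argument on failure, which is what lets a plain while loop avoid that
extra pass; the value-returning form mirrored above is, as far as I can
tell, what the kernel's atomic64 API offered at the time.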
>> 
>> Meaning they decided it's worth the overhead. But (having read
>> through the entire description) they don't even discuss whether this
>> indeed eliminates _all_ apparent backward moves due to effects
>> like the ones named above.
>>
>> Plus, the contention they're facing is limited to a single VM, i.e.
>> likely much narrower than that on an entire physical system. So for
>> us to do the same in the hypervisor, quite a bit more of a win would
>> be needed to outweigh the cost.
>> 
> The experimental details look very unclear too - likely running the
> time warp test for 5 days would have flushed some of these cases out.
> But as you say, the contention there should be much narrower than
> that of an entire system.
> 
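In the spirit of the time warp test mentioned above, a compressed
userspace checker - hypothetical, not the actual test - that hammers a
shared last-seen stamp from several threads and counts backwards
observations (build with -pthread):

--
#include <inttypes.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static _Atomic uint64_t last_seen;   /* newest stamp any thread saw */
static _Atomic uint64_t warps;       /* observed backwards steps */

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

static void *hammer(void *arg)
{
    for (;;) {
        uint64_t prev = atomic_load(&last_seen);
        uint64_t cur  = now_ns();   /* sampled after 'prev' was taken */

        if (cur < prev)             /* later sample, earlier time: warp */
            atomic_fetch_add(&warps, 1);
        else
            /* Publish cur; losing the race just means another thread
             * published something newer still. */
            atomic_compare_exchange_weak(&last_seen, &prev, cur);
    }
    return arg;
}

int main(void)
{
    pthread_t t[4];

    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, hammer, NULL);
    sleep(60);                      /* run for a minute, then report */
    printf("warps: %" PRIu64 "\n", atomic_load(&warps));
    return 0;
}
--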
> BTW, it was implicit in the discussion, but my apologies for not
> stating it formally/explicitly. So FWIW:
> 
> Tested-by: Joao Martins <joao.m.martins@xxxxxxxxxx>

Thanks, but this ...

> This series is certainly a way forward in improving cross-CPU
> monotonicity, and I am indeed seeing fewer occurrences of time going
> backwards on my systems.

... leaves me guessing whether the above was meant for just this
patch, or the entire series.

Jan
