
Re: [Xen-devel] [PATCH v3 1/7] introduce time management in xtf



On 04/10/2018 12:36 PM, Roger Pau Monné wrote:
This file is introduced in order to implement an inter-domain
communication protocol over xenstore. For synchronization purposes, we
really want to be able to "control" time.

common/time.c: since_boot_time() returns the time in nanoseconds since
the moment the VM booted.

Signed-off-by: Paul Semel <phentex@xxxxxxxxx>
---

This seems to be missing a list of changes between v2 and v3. Please
add such a list when posting new versions.

+uint64_t since_boot_time(void)
+{
+    uint64_t tsc;
+    uint32_t ver1, ver2;
+    uint64_t system_time;
+    uint64_t old_tsc;
+
+    do
+    {
+        do
+        {
+            ver1 = ACCESS_ONCE(shared_info.vcpu_info[0].time.version);
+            smp_rmb();
+        } while ( (ver1 & 1) == 1 );
+
+        system_time = ACCESS_ONCE(shared_info.vcpu_info[0].time.system_time);
+        old_tsc = ACCESS_ONCE(shared_info.vcpu_info[0].time.tsc_timestamp);
+        smp_rmb();
+        ver2 = ACCESS_ONCE(shared_info.vcpu_info[0].time.version);
+        smp_rmb();
+    } while ( ver1 != ver2 );

This is still overly complicated IMO, and you have not replied to my
question of whether doing the scale_delta below is OK.

About this scale_delta: we discussed it with Andrew, and as far as I
remember we are going to use another version of the function. That's why
I am not taking care of it for the moment.

You should send that version then :).
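(For context, the scaling step under discussion converts a TSC delta
into nanoseconds using the 32.32 fixed-point multiplier and signed
binary shift published in vcpu_time_info. A hedged sketch of how such a
scale_delta is commonly written, modeled on the public pvclock
algorithm rather than the exact function referred to in the patch:)

```c
#include <stdint.h>

/*
 * Sketch of a pvclock-style scale_delta(): scale a 64-bit TSC delta by
 * a 32.32 fixed-point multiplier after applying a signed binary shift.
 * Names and behaviour are assumptions based on the public pvclock
 * interface, not the function actually discussed in this thread.
 */
static uint64_t scale_delta(uint64_t delta, uint32_t mul_frac, int8_t shift)
{
    if ( shift < 0 )
        delta >>= -shift;
    else
        delta <<= shift;

    /* 64x32 -> 96-bit multiply, keeping the top 64 bits of the result. */
    uint64_t lo = (delta & 0xffffffffULL) * mul_frac;
    uint64_t hi = (delta >> 32) * mul_frac;

    return (lo >> 32) + hi;
}
```

For example, with mul_frac = 0x80000000 (0.5 in 32.32 fixed point) and
shift = 0, a delta of 1000 TSC ticks scales to 500 ns.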


AFAICT you _cannot_ access any of the vcpu_time_info fields without
checking the version (in order to avoid reading inconsistent data
during an update), yet below you read tsc_to_system_mul and
tsc_shift.


I'm sorry, I am probably not getting your point here, because I am
already checking the version. I was actually checking wc_version too in
the first version of these patches, but after chatting with Andrew, it
appeared that it was not necessary.

AFAICT the following should work:

do
{
    ver1 = shared_info.vcpu_info[0].time.version;
    smp_rmb();

    system_time = shared_info.vcpu_info[0].time.system_time;
    tsc_timestamp = shared_info.vcpu_info[0].time.tsc_timestamp;
    tsc_to_system_mul = shared_info.vcpu_info[0].time.tsc_to_system_mul;
    tsc_shift = shared_info.vcpu_info[0].time.tsc_shift;
    tsc = rdtsc_ordered();
    /* NB: this barrier is probably not needed if rdtsc is serializing. */
    smp_rmb();

    ver2 = ACCESS_ONCE(shared_info.vcpu_info[0].time.version);
} while ( ver2 & 1 || ver1 != ver2 );


Just a (probably dumb) question: why aren't you using ACCESS_ONCE for
every shared_info field access?
As far as I understand, we need to do this as much as we can to avoid
reading completely broken data (or creating security issues). Am I
missing something?

ACCESS_ONCE prevents the compiler from reordering the reads, but here
AFAICT we don't really care about the order in which they are performed,
as long as they are all done before the read barrier (smp_rmb).

Note that I used ACCESS_ONCE for the last access to the version field.
I've done that to prevent the compiler from optimizing the code as:

} while ( shared_info.vcpu_info[0].time.version & 1 ||
            ver1 != shared_info.vcpu_info[0].time.version );

Which would be incorrect, since we want to use the same version data
for both checks in the while loop condition.
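
(For readers unfamiliar with the macro: ACCESS_ONCE is conventionally
defined as a cast through volatile, which forces the compiler to emit
exactly one load and forbids it from re-reading or merging the access.
A sketch of the common Linux/Xen-style definition, not necessarily
XTF's exact header; snapshot_is_consistent is a hypothetical helper for
illustration:)

```c
#include <stdint.h>

/* Common definition: the volatile cast makes the compiler perform
 * exactly one access, and never merge, duplicate, or tear it. */
#define ACCESS_ONCE(x) (*(volatile __typeof__(x) *)&(x))

/* Stand-in for the shared version field updated by the hypervisor. */
static uint32_t version;

/* 'ver2' is loaded exactly once; using it twice below therefore
 * compares the same snapshot both times, which is the property the
 * loop condition relies on. */
static int snapshot_is_consistent(uint32_t ver1)
{
    uint32_t ver2 = ACCESS_ONCE(version);

    return !(ver2 & 1) && ver1 == ver2;
}
```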


Okay, I really thought that it was also used to ensure that the accesses
are not split into multiple instructions (as an optimization because of
the loop), which would put us in trouble if the shared memory was
modified in between.

If the memory is modified in between, then either (ver2 & 1) == 1 or
ver1 != ver2, because that's the protocol between the hypervisor and
the guest for updating vcpu_time_info, so we will discard the read data
and start the loop again.
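
(In other words, the protocol is a seqlock: the writer bumps the
version to an odd value before touching the payload and back to an even
value afterwards, so a reader that observes an odd or changed version
knows to retry. A minimal single-threaded sketch of the writer side,
with hypothetical names; the real update of course happens inside Xen:)

```c
#include <stdint.h>

/* Hypothetical, trimmed-down stand-in for vcpu_time_info. */
struct time_info {
    uint32_t version;
    uint64_t tsc_timestamp;
    uint64_t system_time;
};

/* Writer side of the seqlock protocol: version is odd while an update
 * is in flight, and incremented again once it is complete. Real code
 * would add write barriers around the payload stores. */
static void publish_time(struct time_info *t, uint64_t tsc, uint64_t now)
{
    t->version++;          /* now odd: readers must retry */
    t->tsc_timestamp = tsc;
    t->system_time = now;
    t->version++;          /* even again: snapshot is consistent */
}
```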

Oh okay, I get it now, sorry for missing this!

Thanks,

--
Paul

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

