[qemu-xen staging] goldfish_rtc: Fix non-atomic read behaviour of TIME_LOW/TIME_HIGH



commit 8380b3a453c38f040e7ca2105418802344cc23d0
Author:     Jessica Clarke <jrtc27@xxxxxxxxxx>
AuthorDate: Sat Jul 18 01:49:34 2020 +0100
Commit:     Alistair Francis <alistair.francis@xxxxxxx>
CommitDate: Wed Jul 22 09:39:46 2020 -0700

    goldfish_rtc: Fix non-atomic read behaviour of TIME_LOW/TIME_HIGH
    
    The specification says:
    
       0x00  TIME_LOW   R: Get current time, then return low-order 32-bits.
       0x04  TIME_HIGH  R: Return high 32-bits from previous TIME_LOW read.
    
       ...
    
       To read the value, the kernel must perform an IO_READ(TIME_LOW),
       which returns an unsigned 32-bit value, before an IO_READ(TIME_HIGH),
       which returns a signed 32-bit value, corresponding to the higher half
       of the full value.
    
    However, we were just returning the current time for both. If the
    guest is unlucky enough to read TIME_LOW and TIME_HIGH on either
    side of an overflow of the lower half, it will see a time in the
    future, which then jumps backwards on the next read. For example,
    reading TIME_LOW just before the count crosses 2^32 ns and TIME_HIGH
    just after yields a value almost 2^32 ns (about 4.3 seconds) ahead
    of the real time. Linux currently relies on the atomicity guaranteed
    by the spec, so it is affected by this. Fix this violation of the
    spec by caching the correct value for TIME_HIGH whenever TIME_LOW is
    read, and returning that cached value for any subsequent TIME_HIGH
    read.
    
    Signed-off-by: Jessica Clarke <jrtc27@xxxxxxxxxx>
    Reviewed-by: Peter Maydell <peter.maydell@xxxxxxxxxx>
    Reviewed-by: Richard Henderson <richard.henderson@xxxxxxxxxx>
    Message-Id: <20200718004934.83174-1-jrtc27@xxxxxxxxxx>
    Signed-off-by: Alistair Francis <alistair.francis@xxxxxxx>
---
 hw/rtc/goldfish_rtc.c         | 17 ++++++++++++++---
 include/hw/rtc/goldfish_rtc.h |  1 +
 2 files changed, 15 insertions(+), 3 deletions(-)
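
For context, this is the read sequence the specification requires of the
guest. Below is a minimal sketch in C, assuming a hypothetical io_read()
MMIO accessor and base pointer (a real Linux guest would use readl() and
its mapped device address); goldfish_rtc_read_time() is likewise an
illustrative name, not a function from the tree:

    #include <stdint.h>

    /* Register offsets from the goldfish RTC spec quoted above. */
    #define TIME_LOW   0x00
    #define TIME_HIGH  0x04

    /* Hypothetical 32-bit MMIO read; stands in for the guest's accessor. */
    static uint32_t io_read(volatile void *base, unsigned offset)
    {
        return *(volatile uint32_t *)((volatile char *)base + offset);
    }

    /*
     * Read the 64-bit nanosecond count. Order matters: with this fix,
     * reading TIME_LOW latches the high half in the device, so the
     * following TIME_HIGH read matches the same 64-bit sample.
     */
    static uint64_t goldfish_rtc_read_time(volatile void *base)
    {
        uint64_t lo = io_read(base, TIME_LOW);
        uint64_t hi = io_read(base, TIME_HIGH);

        return (hi << 32) | lo;
    }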

diff --git a/hw/rtc/goldfish_rtc.c b/hw/rtc/goldfish_rtc.c
index 01e9d2b083..6ddd45cce0 100644
--- a/hw/rtc/goldfish_rtc.c
+++ b/hw/rtc/goldfish_rtc.c
@@ -94,12 +94,22 @@ static uint64_t goldfish_rtc_read(void *opaque, hwaddr offset,
     GoldfishRTCState *s = opaque;
     uint64_t r = 0;
 
+    /*
+     * From the documentation linked at the top of the file:
+     *
+     *   To read the value, the kernel must perform an IO_READ(TIME_LOW), which
+     *   returns an unsigned 32-bit value, before an IO_READ(TIME_HIGH), which
+     *   returns a signed 32-bit value, corresponding to the higher half of the
+     *   full value.
+     */
     switch (offset) {
     case RTC_TIME_LOW:
-        r = goldfish_rtc_get_count(s) & 0xffffffff;
+        r = goldfish_rtc_get_count(s);
+        s->time_high = r >> 32;
+        r &= 0xffffffff;
         break;
     case RTC_TIME_HIGH:
-        r = goldfish_rtc_get_count(s) >> 32;
+        r = s->time_high;
         break;
     case RTC_ALARM_LOW:
         r = s->alarm_next & 0xffffffff;
@@ -216,7 +226,7 @@ static const MemoryRegionOps goldfish_rtc_ops = {
 
 static const VMStateDescription goldfish_rtc_vmstate = {
     .name = TYPE_GOLDFISH_RTC,
-    .version_id = 1,
+    .version_id = 2,
     .pre_save = goldfish_rtc_pre_save,
     .post_load = goldfish_rtc_post_load,
     .fields = (VMStateField[]) {
@@ -225,6 +235,7 @@ static const VMStateDescription goldfish_rtc_vmstate = {
         VMSTATE_UINT32(alarm_running, GoldfishRTCState),
         VMSTATE_UINT32(irq_pending, GoldfishRTCState),
         VMSTATE_UINT32(irq_enabled, GoldfishRTCState),
+        VMSTATE_UINT32(time_high, GoldfishRTCState),
         VMSTATE_END_OF_LIST()
     }
 };
diff --git a/include/hw/rtc/goldfish_rtc.h b/include/hw/rtc/goldfish_rtc.h
index 16f9f9e29d..9bd8924f5f 100644
--- a/include/hw/rtc/goldfish_rtc.h
+++ b/include/hw/rtc/goldfish_rtc.h
@@ -41,6 +41,7 @@ typedef struct GoldfishRTCState {
     uint32_t alarm_running;
     uint32_t irq_pending;
     uint32_t irq_enabled;
+    uint32_t time_high;
 } GoldfishRTCState;
 
 #endif
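
To make the race concrete, here is a small self-contained sketch
(illustrative values only; not part of the patch) contrasting the old
behaviour with the fixed, latched behaviour around a low-half overflow:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Count just before and just after the low half overflows. */
        uint64_t t1 = 0x00000000ffffffffULL;  /* at the TIME_LOW read  */
        uint64_t t2 = 0x0000000100000000ULL;  /* at the TIME_HIGH read */

        /* Old behaviour: both registers sample the *current* count. */
        uint64_t old_time = ((t2 >> 32) << 32) | (t1 & 0xffffffff);
        printf("old: 0x%016llx (~4.3s in the future, then jumps back)\n",
               (unsigned long long)old_time);

        /* Fixed behaviour: TIME_LOW latches the high half at t1. */
        uint64_t new_time = ((t1 >> 32) << 32) | (t1 & 0xffffffff);
        printf("new: 0x%016llx (exactly the count at the TIME_LOW read)\n",
               (unsigned long long)new_time);

        return 0;
    }

Compiled and run, this prints 0x00000001ffffffff for the old behaviour,
roughly 4.3 seconds ahead of the real count, versus 0x00000000ffffffff
for the fixed one.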
--
generated by git-patchbot for /home/xen/git/qemu-xen.git#staging