
[xen staging-4.13] xen/sched: Add missing memory barrier in vcpu_block()



commit 8384641f6b1abe6c9f916a219100b352e5f30d27
Author:     Julien Grall <jgrall@xxxxxxxxxx>
AuthorDate: Fri Mar 5 15:48:45 2021 +0100
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Fri Mar 5 15:48:45 2021 +0100

    xen/sched: Add missing memory barrier in vcpu_block()
    
    The comment in vcpu_block() states that the events should be checked
    /after/ blocking to avoid the wakeup-waiting race. However, from a
    generic perspective, set_bit() doesn't prevent re-ordering, so the
    following could happen:
    
    CPU0  (blocking vCPU A)         |   CPU1 (unblocking vCPU A)
                                    |
    A <- read local events          |
                                    |   set local events
                                    |   test_and_clear_bit(_VPF_blocked)
                                    |       -> bail out, as the bit is not set
                                    |
    set_bit(_VPF_blocked)           |
                                    |
    check A                         |
    
    The variable A will be 0, and therefore the vCPU will be blocked even
    though it should continue running.
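    
    In code terms, the two paths look roughly like this (a simplified
    sketch of the pre-fix logic, not the literal sources; the helper
    names follow the Xen tree, with local_events_need_delivery()
    standing in for "read local events" above):
    
        /* CPU0 -- blocking side, before the fix. */
        void vcpu_block(void)
        {
            struct vcpu *v = current;
    
            set_bit(_VPF_blocked, &v->pause_flags);
    
            /* Nothing stops the CPU from satisfying this check with a
             * value read *before* the set_bit() above. */
            if ( local_events_need_delivery() )
                clear_bit(_VPF_blocked, &v->pause_flags);
            else
                raise_softirq(SCHEDULE_SOFTIRQ);
        }
    
        /* CPU1 -- waking side. */
        void vcpu_unblock(struct vcpu *v)
        {
            /* Bails out without waking if the bit is not (yet) seen. */
            if ( !test_and_clear_bit(_VPF_blocked, &v->pause_flags) )
                return;
    
            vcpu_wake(v);
        }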
    
    vcpu_block() therefore gains an smp_mb__after_atomic() to prevent the
    CPU from reading any information about local events before the flag
    _VPF_blocked is set.
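    
    With the barrier in place the problematic outcome is forbidden
    (sketch; value-returning atomics such as test_and_clear_bit() are
    assumed to imply full ordering, as in the Linux memory model):
    
    CPU0  (blocking vCPU A)         |   CPU1 (unblocking vCPU A)
                                    |
    set_bit(_VPF_blocked)           |   set local events
    smp_mb__after_atomic()          |   test_and_clear_bit(_VPF_blocked)
                                    |
    A <- read local events          |
    
    At least one side must now observe the other's store: either CPU0
    sees the pending events and clears _VPF_blocked again, or CPU1 sees
    the bit set and performs the wakeup.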
    
    Signed-off-by: Julien Grall <jgrall@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Ash Wilding <ash.j.wilding@xxxxxxxxx>
    Acked-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>
    Acked-by: Dario Faggioli <dfaggioli@xxxxxxxx>
    
    atomics: introduce smp_mb__[after|before]_atomic() barriers
    
    When using atomic variables for synchronization, barriers are needed
    to ensure proper data serialization. Introduce smp_mb__before_atomic()
    and smp_mb__after_atomic(), as in the Linux kernel, for that purpose.
    
    Use the same definitions as in the Linux kernel.
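    
    A minimal usage sketch (MY_FLAG, shared_condition and
    act_on_condition() are hypothetical, not taken from the tree): the
    publisher must order reads after a non-value-returning atomic, and
    the peer must order its stores before one:
    
        /* Publisher: later reads must not move before the flag update. */
        set_bit(MY_FLAG, &flags);
        smp_mb__after_atomic();
        if ( shared_condition )
            act_on_condition();
    
        /* Peer: earlier stores must be visible before the atomic. */
        shared_condition = true;
        smp_mb__before_atomic();
        clear_bit(MY_FLAG, &flags);
    
    On x86 both macros expand to nothing, because LOCK-prefixed
    read-modify-write instructions are already fully ordered; Arm needs
    an explicit dmb, hence the two definitions in the hunks below.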
    
    Suggested-by: Jan Beulich <jbeulich@xxxxxxxx>
    Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
    Acked-by: Jan Beulich <jbeulich@xxxxxxxx>
    Acked-by: Julien Grall <jgrall@xxxxxxxxxx>
    master commit: 109e8177fd4a225e7025c4c17d2c9537b550b4ed
    master date: 2021-02-26 09:47:23 +0000
    master commit: c301211a511111caca29f3bd797eb13965026c78
    master date: 2020-03-26 12:42:19 +0100
---
 xen/common/schedule.c        | 2 ++
 xen/include/asm-arm/system.h | 3 +++
 xen/include/asm-x86/system.h | 3 +++
 3 files changed, 8 insertions(+)

diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index 6b1ae7bf8c..a698a13698 100644
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -1366,6 +1366,8 @@ void vcpu_block(void)
 
     set_bit(_VPF_blocked, &v->pause_flags);
 
+    smp_mb__after_atomic();
+
     arch_vcpu_block(v);
 
     /* Check for events /after/ blocking: avoids wakeup waiting race. */
diff --git a/xen/include/asm-arm/system.h b/xen/include/asm-arm/system.h
index e5d062667d..65d5c8e423 100644
--- a/xen/include/asm-arm/system.h
+++ b/xen/include/asm-arm/system.h
@@ -30,6 +30,9 @@
 
 #define smp_wmb()       dmb(ishst)
 
+#define smp_mb__before_atomic()    smp_mb()
+#define smp_mb__after_atomic()     smp_mb()
+
 /*
  * This is used to ensure the compiler did actually allocate the register we
  * asked it for some inline assembly sequences.  Apparently we can't trust
diff --git a/xen/include/asm-x86/system.h b/xen/include/asm-x86/system.h
index 069f422f0d..7e5891f3df 100644
--- a/xen/include/asm-x86/system.h
+++ b/xen/include/asm-x86/system.h
@@ -233,6 +233,9 @@ static always_inline unsigned long __xadd(
 #define set_mb(var, value) do { xchg(&var, value); } while (0)
 #define set_wmb(var, value) do { var = value; smp_wmb(); } while (0)
 
+#define smp_mb__before_atomic()    do { } while (0)
+#define smp_mb__after_atomic()     do { } while (0)
+
 /**
  * array_index_mask_nospec() - generate a mask that is ~0UL when the
  *      bounds check succeeds and 0 otherwise
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.13