
[Xen-devel] [PATCH 4/4] xen/x86: Correct mandatory and SMP barrier definitions



Barriers are a complicated topic, a common source of confusion in submitted
code, and their incorrect use is a common cause of bugs.  It *really* doesn't
help when Xen's API is the same as Linux's, but its ABI is different.

Bring the two back in line, so programmers stand a chance of actually getting
their use correct.

As Xen has no current need for mandatory barriers, leave them commented out to
avoid accidental misuse.
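
As an illustration of where the smp_*() barriers are the right tool, consider
a producer/consumer handing data between CPUs through shared, ordinarily
cacheable memory (a sketch only; consume() and the variable names are
invented for illustration and are not part of this patch):

    static unsigned int payload, ready;

    void producer(void)
    {
        payload = 42;       /* Fill in the data...                       */
        smp_wmb();          /* ...and order it before the flag is set... */
        ready = 1;          /* ...so the consumer never sees stale data. */
    }

    void consumer(void)
    {
        while ( !ready )    /* Wait for the flag...                      */
            cpu_relax();
        smp_rmb();          /* ...and order it before the data is read.  */
        consume(payload);
    }

Only a driver talking to an MMIO device mapped with reduced cacheability
would need the mandatory mb()/rmb()/wmb() variants, and Xen has no such
drivers.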

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
---
CC: Jan Beulich <JBeulich@xxxxxxxx>
---
 xen/include/asm-x86/system.h        | 31 +++++++++++++++++++++++--------
 xen/include/asm-x86/x86_64/system.h |  3 ---
 2 files changed, 23 insertions(+), 11 deletions(-)

diff --git a/xen/include/asm-x86/system.h b/xen/include/asm-x86/system.h
index 9cb6fd7..9cd401a 100644
--- a/xen/include/asm-x86/system.h
+++ b/xen/include/asm-x86/system.h
@@ -164,23 +164,38 @@ static always_inline unsigned long __xadd(
     ((typeof(*(ptr)))__xadd(ptr, (typeof(*(ptr)))(v), sizeof(*(ptr))))
 
 /*
+ * Mandatory barriers, for the ordering of reads and writes with MMIO devices
+ * mapped with reduced cacheability.
+ *
+ * Xen has no such device drivers, and therefore no need for mandatory
+ * barriers.  These are hidden to avoid their misuse; if a future need
+ * is found, they can be re-introduced, but chances are very good that a
+ * programmer actually should be using the smp_*() barriers.
+ *
+#define mb()            asm volatile ("mfence" ::: "memory")
+#define rmb()           asm volatile ("lfence" ::: "memory")
+#define wmb()           asm volatile ("sfence" ::: "memory")
+ */
+
+/*
+ * SMP barriers, for ordering of reads and writes between CPUs, most commonly
+ * used with shared memory.
+ *
  * Both Intel and AMD agree that, from a programmer's viewpoint:
  *  Loads cannot be reordered relative to other loads.
  *  Stores cannot be reordered relative to other stores.
- * 
+ *  Loads may be reordered ahead of an unaliasing store.
+ *
  * Intel64 Architecture Memory Ordering White Paper
  * <http://developer.intel.com/products/processor/manuals/318147.pdf>
- * 
+ *
  * AMD64 Architecture Programmer's Manual, Volume 2: System Programming
  * <http://www.amd.com/us-en/assets/content_type/\
  *  white_papers_and_tech_docs/24593.pdf>
  */
-#define rmb()           barrier()
-#define wmb()           barrier()
-
-#define smp_mb()        mb()
-#define smp_rmb()       rmb()
-#define smp_wmb()       wmb()
+#define smp_mb()        asm volatile ("mfence" ::: "memory")
+#define smp_rmb()       barrier()
+#define smp_wmb()       barrier()
 
 #define set_mb(var, value) do { xchg(&var, value); } while (0)
 #define set_wmb(var, value) do { var = value; smp_wmb(); } while (0)
diff --git a/xen/include/asm-x86/x86_64/system.h b/xen/include/asm-x86/x86_64/system.h
index 7026c05..bdf45e5 100644
--- a/xen/include/asm-x86/x86_64/system.h
+++ b/xen/include/asm-x86/x86_64/system.h
@@ -79,7 +79,4 @@ static always_inline __uint128_t __cmpxchg16b(
     _rc;                                                                \
 })
 
-#define mb()                    \
-    asm volatile ( "mfence" : : : "memory" )
-
 #endif /* __X86_64_SYSTEM_H__ */
-- 
2.1.4
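
The one reordering the comment above permits, a load overtaking an earlier
store to a different location, is why smp_mb() must remain a real mfence
while smp_rmb() and smp_wmb() can reduce to barrier().  A minimal
Dekker-style sketch (hypothetical flag names, not from the patch):

    static unsigned int flag0, flag1;

    void cpu0_enter(void)       /* CPU1 runs the mirror image, with the */
    {                           /* roles of flag0 and flag1 swapped.    */
        flag0 = 1;
        smp_mb();       /* Needs mfence: the load of flag1 below must not be
                           satisfied before the store to flag0 is visible. */
        if ( !flag1 )
        {
            /* Critical section.  With only barrier() here, both CPUs
             * could read 0 and enter at the same time. */
        }
    }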


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

