[RFC PATCH 6/6] xen/arm: Remove dependency on gcc builtin __sync_fetch_and_add()



Now that we have explicit implementations of LL/SC and LSE atomics
helpers after porting Linux's versions to Xen, we can drop the reference
to gcc's builtin __sync_fetch_and_add().

This requires some fudging using container_of() because the users of
__sync_fetch_and_add(), namely xen/spinlock.c, expect the pointer to
point directly at the u32 being modified, whereas the atomics helpers
expect a pointer to an atomic_t and then access that atomic_t's counter
member.
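
For illustration only (not part of the patch), the sketch below shows
what that fudging boils down to: given a pointer to the raw 32-bit
value, container_of() recovers the enclosing atomic_t so the int-typed
helper can operate on it. The stand-in definitions of atomic_t and
container_of(), the example_fetch_and_add() name, and the plain add in
place of atomic_fetch_add() are all just assumptions made to keep the
example self-contained:

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Stand-ins mirroring Xen's definitions so this compiles on its own. */
    typedef struct { int counter; } atomic_t;

    #define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    /* What the new arch_fetch_and_add() boils down to: the caller only has
     * a pointer to the raw u32, and container_of() steps back to the
     * enclosing atomic_t; a plain add stands in for atomic_fetch_add(). */
    static int example_fetch_and_add(uint32_t *ptr, uint32_t x)
    {
        atomic_t *tmp = container_of((int *)ptr, atomic_t, counter);
        int ret = tmp->counter;

        tmp->counter += x;
        return ret;
    }

    int main(void)
    {
        atomic_t v = { .counter = 40 };
        /* spinlock.c-style caller: it only sees the u32, not the atomic_t. */
        uint32_t *raw = (uint32_t *)&v.counter;

        assert(example_fetch_and_add(raw, 2) == 40);
        assert(v.counter == 42);
        return 0;
    }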

NOTE: spinlock.c uses u32 for the value being added while the atomics
helpers use int for their counter member. This shouldn't actually
matter because the addition is done in assembly, where there is no
C-level signed addition whose overflow the compiler could treat as
undefined, but it seems worth calling out in the commit message.
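
A quick self-contained illustration of why the mismatch is benign
(again, not part of the patch): on these targets the add produces the
same 32-bit pattern whether the value is viewed as u32 or int, and
because the add itself lives in the assembly there is no C-level signed
addition for the compiler to act on:

    #include <assert.h>
    #include <stdint.h>
    #include <string.h>

    int main(void)
    {
        /* A value that would overflow if added to as a signed int in C. */
        uint32_t u = 0x7fffffffu;       /* same bits as INT32_MAX */
        uint32_t sum_bits = u + 5u;     /* unsigned wrap, well defined */

        /* Reinterpret the result as the int the atomics helpers traffic
         * in: identical bit pattern, just a different C-level type. */
        int32_t as_int;
        memcpy(&as_int, &sum_bits, sizeof(as_int));

        assert(sum_bits == 0x80000004u);
        assert(as_int == INT32_MIN + 4);
        return 0;
    }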

Signed-off-by: Ash Wilding <ash.j.wilding@xxxxxxxxx>
---
 xen/include/asm-arm/arm32/atomic.h |  2 +-
 xen/include/asm-arm/system.h       | 10 +++++++++-
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/xen/include/asm-arm/arm32/atomic.h b/xen/include/asm-arm/arm32/atomic.h
index 544a4ba492..5cf13cc8fa 100644
--- a/xen/include/asm-arm/arm32/atomic.h
+++ b/xen/include/asm-arm/arm32/atomic.h
@@ -200,6 +200,7 @@ static inline int atomic_add_return(int i, atomic_t *v)
 
        return ret;
 }
+#define atomic_fetch_add(i, v) atomic_add_return(i, v)
 
 static inline int atomic_sub_return(int i, atomic_t *v)
 {
@@ -212,5 +213,4 @@ static inline int atomic_sub_return(int i, atomic_t *v)
        return ret;
 }
 
-
 #endif /* __ASM_ARM_ARM32_ATOMIC_H */
diff --git a/xen/include/asm-arm/system.h b/xen/include/asm-arm/system.h
index 65d5c8e423..86c50915d9 100644
--- a/xen/include/asm-arm/system.h
+++ b/xen/include/asm-arm/system.h
@@ -3,6 +3,7 @@
 #define __ASM_SYSTEM_H
 
 #include <xen/lib.h>
+#include <xen/kernel.h>
 #include <public/arch-arm.h>
 
 #define sev()           asm volatile("sev" : : : "memory")
@@ -58,7 +59,14 @@ static inline int local_abort_is_enabled(void)
     return !(flags & PSR_ABT_MASK);
 }
 
-#define arch_fetch_and_add(x, v) __sync_fetch_and_add(x, v)
+#define arch_fetch_and_add(ptr, x) ({                                   \
+    int ret;                                                            \
+                                                                        \
+    atomic_t *tmp = container_of((int *)(ptr), atomic_t, counter);      \
+    ret = atomic_fetch_add(x, tmp);                                     \
+                                                                        \
+    ret;                                                                \
+})
 
 extern struct vcpu *__context_switch(struct vcpu *prev, struct vcpu *next);
 
-- 
2.24.3 (Apple Git-128)