
[xen master] xen/arm: Enforce alignment check in debug build for {read, write}_atomic



commit 34f8b971b2dd1968fd5b9bf4ce1247dc9d31f6b5
Author:     Ayan Kumar Halder <ayankuma@xxxxxxx>
AuthorDate: Tue Nov 8 09:45:03 2022 +0000
Commit:     Julien Grall <jgrall@xxxxxxxxxx>
CommitDate: Tue Dec 6 18:19:50 2022 +0000

    xen/arm: Enforce alignment check in debug build for {read, write}_atomic
    
    Xen provides helpers to atomically read/write memory (see {read,
    write}_atomic()). Those helpers can only work if the address is aligned
    to the size of the access (see B2.2.1 ARM DDI 0487I.a).
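    
    For illustration only (hypothetical caller, not part of this patch),
    the helpers infer the access size from the type of the pointed-to
    object, and the pointer must be aligned accordingly:
    
        uint32_t counter;
    
        uint32_t v = read_atomic(&counter);  /* 4-byte access: &counter
                                                must be 4-byte aligned. */
        write_atomic(&counter, v + 1);       /* Same alignment rule. */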
    
    On Arm32, the alignment is already enforced by the processor because
    the HSCTLR.A bit is set (it enforces alignment for every access). For
    Arm64, this bit is not set because memcpy()/memset() can use unaligned
    accesses for performance reasons (the implementation is taken from the
    Cortex library).
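    
    As a (hypothetical) example of an access that violates this
    requirement and that the new check catches in debug builds:
    
        uint8_t buf[8];
    
        uint32_t v = read_atomic((uint32_t *)&buf[1]); /* Unaligned: the
                                                          ASSERT fires. */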
    
    To avoid any overhead in production builds, the alignment will only be
    checked using an ASSERT. Note that it might be possible to do it in
    production builds using the acquire/exclusive versions of load/store,
    but this is left to a follow-up (if wanted).
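    
    As a rough sketch of what such a follow-up could look like on Arm64
    (an assumption, not code from this patch): a load-acquire faults on
    unaligned addresses even when SCTLR_EL2.A is clear, so the check would
    extend to production builds, at the cost of acquire ordering:
    
        static always_inline uint32_t read_u32_acquire(const volatile uint32_t *p)
        {
            uint32_t val;
    
            /* LDAR raises an alignment fault on unaligned addresses. */
            asm volatile ( "ldar %w0, %1" : "=r" (val) : "Q" (*p) : "memory" );
    
            return val;
        }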
    
    Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@xxxxxxx>
    Signed-off-by: Julien Grall <julien@xxxxxxx>
    Reviewed-by: Michal Orzel <michal.orzel@xxxxxxx>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@xxxxxxx>
    Acked-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>
---
 xen/arch/arm/include/asm/atomic.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/arch/arm/include/asm/atomic.h b/xen/arch/arm/include/asm/atomic.h
index 1f60c28b1b..64314d59b3 100644
--- a/xen/arch/arm/include/asm/atomic.h
+++ b/xen/arch/arm/include/asm/atomic.h
@@ -78,6 +78,7 @@ static always_inline void read_atomic_size(const volatile void *p,
                                            void *res,
                                            unsigned int size)
 {
+    ASSERT(IS_ALIGNED((vaddr_t)p, size));
     switch ( size )
     {
     case 1:
@@ -102,6 +103,7 @@ static always_inline void write_atomic_size(volatile void *p,
                                             void *val,
                                             unsigned int size)
 {
+    ASSERT(IS_ALIGNED((vaddr_t)p, size));
     switch ( size )
     {
     case 1:
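
For reference (a paraphrase, not part of this diff): IS_ALIGNED() tests
that the address is a multiple of the (power-of-two) access size, along
the lines of:

    #define IS_ALIGNED(val, align) (((val) & ((align) - 1)) == 0)

and vaddr_t is an integer type wide enough to hold a virtual address, so
the pointer can be inspected arithmetically.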
--
generated by git-patchbot for /home/xen/git/xen.git#master