
Re: [XEN v2] xen/Arm: Enforce alignment check for atomic read/write



Hi Ayan,

To me the title and the explanation below suggest...

On 04/11/2022 16:23, Ayan Kumar Halder wrote:
From: Ayan Kumar Halder <ayankuma@xxxxxxx>

Refer to ARM DDI 0487I.a ID081822, B2.2.1
"Requirements for single-copy atomicity

- A read that is generated by a load instruction that loads a single
general-purpose register and is aligned to the size of the read in the
instruction is single-copy atomic.

- A write that is generated by a store instruction that stores a single
general-purpose register and is aligned to the size of the write in the
instruction is single-copy atomic"

On AArch32, the alignment check is enabled at boot time by setting HSCTLR.A bit.
("HSCTLR, Hyp System Control Register").
However in AArch64, alignment check is not enabled at boot time.

... you want to always enable the alignment check on AArch64. However, this is not possible because memcpy() uses unaligned accesses.

I think the commit message/title should clarify that the check is *only* done in debug builds. IOW, there is no enforcement in production builds.

The alternative would be to use a BUG_ON(), but that might incur too high an overhead.

Cheers,


Thus, one needs to check for alignment when performing atomic operations.

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@xxxxxxx>
Reviewed-by: Michal Orzel <michal.orzel@xxxxxxx>
---

Changes from v1 :-
1. Referred to the latest Arm Architecture Reference Manual in the commit
message.

  xen/arch/arm/include/asm/atomic.h | 2 ++
  1 file changed, 2 insertions(+)

diff --git a/xen/arch/arm/include/asm/atomic.h b/xen/arch/arm/include/asm/atomic.h
index 1f60c28b1b..64314d59b3 100644
--- a/xen/arch/arm/include/asm/atomic.h
+++ b/xen/arch/arm/include/asm/atomic.h
@@ -78,6 +78,7 @@ static always_inline void read_atomic_size(const volatile void *p,
                                             void *res,
                                             unsigned int size)
  {
+    ASSERT(IS_ALIGNED((vaddr_t)p, size));
      switch ( size )
      {
      case 1:
@@ -102,6 +103,7 @@ static always_inline void write_atomic_size(volatile void *p,
                                              void *val,
                                              unsigned int size)
  {
+    ASSERT(IS_ALIGNED((vaddr_t)p, size));
      switch ( size )
      {
      case 1:

--
Julien Grall



 

