
Re: [PATCH v5 4/6] arm/mpu: Provide access to the MPU region from the C code


  • To: Luca Fancellu <luca.fancellu@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • From: "Orzel, Michal" <michal.orzel@xxxxxxx>
  • Date: Thu, 22 May 2025 10:19:18 +0200
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • Delivery-date: Thu, 22 May 2025 08:19:31 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>


On 13/05/2025 10:45, Luca Fancellu wrote:
> Implement some utility function in order to access the MPU regions
NIT: s/function/functions

> from the C world.
> 
> Signed-off-by: Luca Fancellu <luca.fancellu@xxxxxxx>
> ---
> v5 changes:
>  - move MPU_REGION_RES0 to arm64, fixed typos and code style.
> v4 changes:
>  - moved back PRBAR0_EL2/PRLAR0_EL2 to mm.c and protect
>    them with CONFIG_ARM_64, changed comments, fixed typos and code
>    style
>  - Add PRBAR_EL2_(n) definition, to be overriden by Arm32
>  - protect prepare_selector, read_protection_region,
>    write_protection_region by Arm64 to ensure compilation on both
>    arm32 and arm64, Arm32 will modify that later while introducing
>    the arm32 bits.
> v3 changes:
>  - Moved PRBAR0_EL2/PRLAR0_EL2 to arm64 specific
>  - Modified prepare_selector() to be easily made a NOP
>    for Arm32, which can address up to 32 region without
>    changing selector and it is also its maximum amount
>    of MPU regions.
> ---
> ---
>  xen/arch/arm/include/asm/arm64/mpu.h |   2 +
>  xen/arch/arm/include/asm/mpu/mm.h    |  34 ++++++++
>  xen/arch/arm/mpu/mm.c                | 119 +++++++++++++++++++++++++++
>  3 files changed, 155 insertions(+)
> 
> diff --git a/xen/arch/arm/include/asm/arm64/mpu.h b/xen/arch/arm/include/asm/arm64/mpu.h
> index d3c055a2e53b..0fed6c8e5828 100644
> --- a/xen/arch/arm/include/asm/arm64/mpu.h
> +++ b/xen/arch/arm/include/asm/arm64/mpu.h
> @@ -5,6 +5,8 @@
>  
>  #ifndef __ASSEMBLY__
>  
> +#define MPU_REGION_RES0        (0xFFFFULL << 48)
> +
>  /* Protection Region Base Address Register */
>  typedef union {
>      struct __packed {
> diff --git a/xen/arch/arm/include/asm/mpu/mm.h b/xen/arch/arm/include/asm/mpu/mm.h
> index 409b4dd53dc6..2ee908801665 100644
> --- a/xen/arch/arm/include/asm/mpu/mm.h
> +++ b/xen/arch/arm/include/asm/mpu/mm.h
> @@ -41,6 +41,40 @@ static inline struct page_info *virt_to_page(const void *v)
>      return mfn_to_page(mfn);
>  }
>  
> +/* Utility function to be used whenever MPU regions are modified */
> +static inline void context_sync_mpu(void)
> +{
> +    /*
> +     * ARM DDI 0600B.a, C1.7.1
> +     * Writes to MPU registers are only guaranteed to be visible following a
> +     * Context synchronization event and DSB operation.
Isn't this comment a bit misleading to people reading the code, given that it
does not match the actual order of operations below (DSB first, then the ISB
as the context synchronization event)?
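
Just a suggestion (a sketch, wording up to you), swapping the sentence around
so that it matches the code order:

    /*
     * ARM DDI 0600B.a, C1.7.1
     * Writes to MPU registers are only guaranteed to be visible after a DSB
     * operation followed by a Context synchronization event (the ISB below).
     */
    dsb(sy);
    isb();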

> +     */
> +    dsb(sy);
> +    isb();
> +}
> +
> +/*
> + * The following API requires context_sync_mpu() after being used to modify MPU
> + * regions:
> + *  - write_protection_region
> + */
> +
> +/*
> + * Reads the MPU region with index @sel from the HW.
> + *
> + * @pr_read: mpu protection region returned by read operation.
> + * @sel: which mpu protection region to read
NIT: I mentioned in the past that I find this a bit too much duplicated
information in the comment. It could very well be:
/* Reads the MPU region (into @pr_read) with index @sel from the HW */

> + */
> +void read_protection_region(pr_t *pr_read, uint8_t sel);
> +
> +/*
> + * Writes the MPU region with index @sel to the HW.
> + *
> + * @pr_write: mpu protection region passed through write operation.
> + * @sel: which mpu protection region to write
> + */
> +void write_protection_region(const pr_t *pr_write, uint8_t sel);
> +
>  #endif /* __ARM_MPU_MM_H__ */
>  
>  /*
> diff --git a/xen/arch/arm/mpu/mm.c b/xen/arch/arm/mpu/mm.c
> index ee035a54b942..46883cbd4af9 100644
> --- a/xen/arch/arm/mpu/mm.c
> +++ b/xen/arch/arm/mpu/mm.c
> @@ -8,6 +8,8 @@
>  #include <xen/sizes.h>
>  #include <xen/types.h>
>  #include <asm/mpu.h>
> +#include <asm/mpu/mm.h>
> +#include <asm/sysregs.h>
>  
>  struct page_info *frame_table;
>  
> @@ -26,6 +28,35 @@ DECLARE_BITMAP(xen_mpumap_mask, MAX_MPU_REGION_NR) \
>  /* EL2 Xen MPU memory region mapping table. */
>  pr_t __section(".data.page_aligned") xen_mpumap[MAX_MPU_REGION_NR];
>  
> +#ifdef CONFIG_ARM_64
> +/*
> + * The following are needed for the cases: GENERATE_WRITE_PR_REG_CASE
> + * and GENERATE_READ_PR_REG_CASE with num==0
> + */
> +#define PRBAR0_EL2 PRBAR_EL2
> +#define PRLAR0_EL2 PRLAR_EL2
> +
> +#define PRBAR_EL2_(n)   PRBAR##n##_EL2
> +#define PRLAR_EL2_(n)   PRLAR##n##_EL2
> +
> +#endif
> +
> +#define GENERATE_WRITE_PR_REG_CASE(num, pr)                                 \
> +    case num:                                                               \
> +    {                                                                       \
> +        WRITE_SYSREG(pr->prbar.bits & ~MPU_REGION_RES0, PRBAR_EL2_(num));   \
> +        WRITE_SYSREG(pr->prlar.bits & ~MPU_REGION_RES0, PRLAR_EL2_(num));   \
> +        break;                                                              \
> +    }
> +
> +#define GENERATE_READ_PR_REG_CASE(num, pr)                      \
> +    case num:                                                   \
> +    {                                                           \
> +        pr->prbar.bits = READ_SYSREG(PRBAR_EL2_(num));          \
> +        pr->prlar.bits = READ_SYSREG(PRLAR_EL2_(num));          \
> +        break;                                                  \
> +    }
> +
>  static void __init __maybe_unused build_assertions(void)
>  {
>      /*
> @@ -36,6 +67,94 @@ static void __init __maybe_unused build_assertions(void)
>      BUILD_BUG_ON(PAGE_SIZE != SZ_4K);
>  }
>  
> +#ifdef CONFIG_ARM_64
> +/*
> + * Armv8-R supports direct access and indirect access to the MPU regions through
> + * registers:
> + *  - indirect access involves changing the MPU region selector, issuing an isb
> + *    barrier and accessing the selected region through specific registers
> + *  - direct access involves accessing specific registers that point to
> + *    specific MPU regions, without changing the selector, avoiding the use of
> + *    a barrier.
> + * For Arm64 the PR{B,L}AR_ELx (for n=0) and PR{B,L}AR<n>_ELx (for n=1..15) are
> + * used for the direct access to the regions selected by
> + * PRSELR_EL2.REGION<7:4>:n, so 16 regions can be directly accessed when the
> + * selector is a multiple of 16, giving access to all the supported memory
> + * regions.
> + */
> +static void prepare_selector(uint8_t *sel)
> +{
> +    uint8_t cur_sel = *sel;
> +
> +    /*
> +     * {read,write}_protection_region works using the direct access to the 0..15
> +     * regions, so in order to save the isb() overhead, change the PRSELR_EL2
> +     * only when needed, so when the upper 4 bits of the selector will change.
> +     */
> +    cur_sel &= 0xF0U;
Here you use &=.
NIT: you could do the masking at the definition of cur_sel (but maybe you find
it clearer this way).

> +    if ( READ_SYSREG(PRSELR_EL2) != cur_sel )
> +    {
> +        WRITE_SYSREG(cur_sel, PRSELR_EL2);
> +        isb();
> +    }
> +    *sel = *sel & 0xFU;
but here you don't.
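
Just a sketch of what I mean by keeping the style consistent:

    /* Direct registers cover regions 0..15 within the selected group */
    *sel &= 0xFU;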

> +}
> +
> +void read_protection_region(pr_t *pr_read, uint8_t sel)
> +{
> +    prepare_selector(&sel);
> +
> +    switch ( sel )
> +    {
> +        GENERATE_READ_PR_REG_CASE(0, pr_read);
> +        GENERATE_READ_PR_REG_CASE(1, pr_read);
> +        GENERATE_READ_PR_REG_CASE(2, pr_read);
> +        GENERATE_READ_PR_REG_CASE(3, pr_read);
> +        GENERATE_READ_PR_REG_CASE(4, pr_read);
> +        GENERATE_READ_PR_REG_CASE(5, pr_read);
> +        GENERATE_READ_PR_REG_CASE(6, pr_read);
> +        GENERATE_READ_PR_REG_CASE(7, pr_read);
> +        GENERATE_READ_PR_REG_CASE(8, pr_read);
> +        GENERATE_READ_PR_REG_CASE(9, pr_read);
> +        GENERATE_READ_PR_REG_CASE(10, pr_read);
> +        GENERATE_READ_PR_REG_CASE(11, pr_read);
> +        GENERATE_READ_PR_REG_CASE(12, pr_read);
> +        GENERATE_READ_PR_REG_CASE(13, pr_read);
> +        GENERATE_READ_PR_REG_CASE(14, pr_read);
> +        GENERATE_READ_PR_REG_CASE(15, pr_read);
> +    default:
> +        BUG(); /* Can't happen */
I think MISRA requires adding a break even for impossible default cases.
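
i.e. something along the lines of:

    default:
        BUG(); /* Can't happen */
        break;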

> +    }
> +}
> +
> +void write_protection_region(const pr_t *pr_write, uint8_t sel)
> +{
> +    prepare_selector(&sel);
> +
> +    switch ( sel )
> +    {
> +        GENERATE_WRITE_PR_REG_CASE(0, pr_write);
> +        GENERATE_WRITE_PR_REG_CASE(1, pr_write);
> +        GENERATE_WRITE_PR_REG_CASE(2, pr_write);
> +        GENERATE_WRITE_PR_REG_CASE(3, pr_write);
> +        GENERATE_WRITE_PR_REG_CASE(4, pr_write);
> +        GENERATE_WRITE_PR_REG_CASE(5, pr_write);
> +        GENERATE_WRITE_PR_REG_CASE(6, pr_write);
> +        GENERATE_WRITE_PR_REG_CASE(7, pr_write);
> +        GENERATE_WRITE_PR_REG_CASE(8, pr_write);
> +        GENERATE_WRITE_PR_REG_CASE(9, pr_write);
> +        GENERATE_WRITE_PR_REG_CASE(10, pr_write);
> +        GENERATE_WRITE_PR_REG_CASE(11, pr_write);
> +        GENERATE_WRITE_PR_REG_CASE(12, pr_write);
> +        GENERATE_WRITE_PR_REG_CASE(13, pr_write);
> +        GENERATE_WRITE_PR_REG_CASE(14, pr_write);
> +        GENERATE_WRITE_PR_REG_CASE(15, pr_write);
> +    default:
> +        BUG(); /* Can't happen */
> +    }
> +}
> +#endif
Please add /* CONFIG_ARM_64 */
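
i.e.:

    #endif /* CONFIG_ARM_64 */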

> +
>  void __init setup_mm(void)
>  {
>      BUG_ON("unimplemented");

Other than that:
Reviewed-by: Michal Orzel <michal.orzel@xxxxxxx>

~Michal