
Re: [PATCH v2 3/6] arm/mpu: Populate a new region in Xen MPU mapping table


  • To: Hari Limaye <hari.limaye@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • From: "Orzel, Michal" <michal.orzel@xxxxxxx>
  • Date: Mon, 7 Jul 2025 10:10:28 +0200
  • Cc: luca.fancellu@xxxxxxx, Penny Zheng <Penny.Zheng@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Wei Chen <wei.chen@xxxxxxx>
  • Delivery-date: Mon, 07 Jul 2025 08:10:52 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>


On 02/07/2025 16:13, Hari Limaye wrote:
> From: Penny Zheng <Penny.Zheng@xxxxxxx>
> 
> Introduce map_pages_to_xen() that is implemented using a new helper,
> xen_mpumap_update(), which is responsible for updating Xen MPU memory
> mapping table(xen_mpumap), including creating a new entry, updating
> or destroying an existing one, it is equivalent to xen_pt_update in MMU.
> 
> This commit only implements populating a new entry in Xen MPU memory mapping
> table(xen_mpumap).
> 
> Signed-off-by: Penny Zheng <penny.zheng@xxxxxxx>
> Signed-off-by: Wei Chen <wei.chen@xxxxxxx>
> Signed-off-by: Luca Fancellu <luca.fancellu@xxxxxxx>
> Signed-off-by: Hari Limaye <hari.limaye@xxxxxxx>
> ---
> Changes from v1:
> - Simplify if condition
> - Use normal printk
> - Use %# over 0x%
> - Add same asserts as in Patch 4
> ---
>  xen/arch/arm/include/asm/mpu/mm.h |  12 ++++
>  xen/arch/arm/mpu/mm.c             | 100 ++++++++++++++++++++++++++++++
>  2 files changed, 112 insertions(+)
> 
> diff --git a/xen/arch/arm/include/asm/mpu/mm.h b/xen/arch/arm/include/asm/mpu/mm.h
> index 81e47b9d0b..101002f7d4 100644
> --- a/xen/arch/arm/include/asm/mpu/mm.h
> +++ b/xen/arch/arm/include/asm/mpu/mm.h
> @@ -64,6 +64,7 @@ static inline void context_sync_mpu(void)
>   * The following API requires context_sync_mpu() after being used to modify MPU
>   * regions:
>   *  - write_protection_region
> + *  - xen_mpumap_update
>   */
>  
>  /* Reads the MPU region (into @pr_read) with index @sel from the HW */
> @@ -72,6 +73,17 @@ void read_protection_region(pr_t *pr_read, uint8_t sel);
>  /* Writes the MPU region (from @pr_write) with index @sel to the HW */
>  void write_protection_region(const pr_t *pr_write, uint8_t sel);
>  
> +/*
> + * Maps an address range into the MPU data structure and updates the HW.
> + * Equivalent to xen_pt_update in an MMU system.
> + *
> + * @param base      Base address of the range to map (inclusive).
> + * @param limit     Limit address of the range to map (exclusive).
> + * @param flags     Flags for the memory range to map.
> + * @return          0 on success, negative on error.
> + */
> +int xen_mpumap_update(paddr_t base, paddr_t limit, unsigned int flags);
> +
>  /*
>   * Creates a pr_t structure describing a protection region.
>   *
> diff --git a/xen/arch/arm/mpu/mm.c b/xen/arch/arm/mpu/mm.c
> index 25e7f36c1e..dd54b66901 100644
> --- a/xen/arch/arm/mpu/mm.c
> +++ b/xen/arch/arm/mpu/mm.c
> @@ -6,6 +6,7 @@
>  #include <xen/lib.h>
>  #include <xen/mm.h>
>  #include <xen/sizes.h>
> +#include <xen/spinlock.h>
>  #include <xen/types.h>
>  #include <asm/mpu.h>
>  #include <asm/mpu/mm.h>
> @@ -29,6 +30,8 @@ DECLARE_BITMAP(xen_mpumap_mask, MAX_MPU_REGION_NR) \
>  /* EL2 Xen MPU memory region mapping table. */
>  pr_t __cacheline_aligned __section(".data") xen_mpumap[MAX_MPU_REGION_NR];
>  
> +static DEFINE_SPINLOCK(xen_mpumap_lock);
> +
>  static void __init __maybe_unused build_assertions(void)
>  {
>      /*
> @@ -162,6 +165,103 @@ int mpumap_contains_region(pr_t *table, uint8_t nr_regions, paddr_t base,
>      return MPUMAP_REGION_NOTFOUND;
>  }
>  
> +/*
> + * Allocate a new free EL2 MPU memory region, based on bitmap xen_mpumap_mask.
I would clarify that you allocate an entry in the bitmap, not a region.
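Something along these lines perhaps (just a suggestion, adjust the wording as
you see fit):

 /*
  * Allocate a free entry in the EL2 MPU bitmap (xen_mpumap_mask) tracking
  * which MPU regions are in use.
  */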

> + * @param idx   Set to the index of the allocated EL2 MPU region on success.
> + * @return      0 on success, otherwise -ENOENT on failure.
> + */
> +static int xen_mpumap_alloc_entry(uint8_t *idx)
> +{
> +    ASSERT(spin_is_locked(&xen_mpumap_lock));
> +
> +    *idx = find_first_zero_bit(xen_mpumap_mask, max_mpu_regions);
> +    if ( *idx == max_mpu_regions )
> +    {
> +        printk(XENLOG_ERR "mpu: EL2 MPU memory region mapping pool exhausted\n");
> +        return -ENOENT;
> +    }
> +
> +    set_bit(*idx, xen_mpumap_mask);
Empty line here please.

> +    return 0;
> +}
> +
> +/*
> + * Update the entry in the MPU memory region mapping table (xen_mpumap) for the
> + * given memory range and flags, creating one if none exists.
> + *
> + * @param base  Base address (inclusive).
> + * @param limit Limit address (exclusive).
> + * @param flags Region attributes (a combination of PAGE_HYPERVISOR_XXX)
> + * @return      0 on success, otherwise negative on error.
> + */
> +static int xen_mpumap_update_entry(paddr_t base, paddr_t limit,
> +                                   unsigned int flags)
> +{
> +    uint8_t idx;
> +    int rc;
> +
> +    ASSERT(spin_is_locked(&xen_mpumap_lock));
> +
> +    rc = mpumap_contains_region(xen_mpumap, max_mpu_regions, base, limit, &idx);
> +    if ( !(rc == MPUMAP_REGION_NOTFOUND) )
Why not ( rc != MPUMAP_REGION_NOTFOUND )?
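i.e. (same behaviour, just easier to read):

    if ( rc != MPUMAP_REGION_NOTFOUND )
        return -EINVAL;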

> +        return -EINVAL;
> +
> +    /* We are inserting a mapping => Create new region. */
> +    if ( flags & _PAGE_PRESENT )
Wouldn't it make more sense to have this check before calling
mpumap_contains_region()?
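Untested sketch of what I mean (destroying/updating an existing region left
out, as in this patch):

    if ( flags & _PAGE_PRESENT )
    {
        rc = mpumap_contains_region(xen_mpumap, max_mpu_regions, base, limit,
                                    &idx);
        if ( rc != MPUMAP_REGION_NOTFOUND )
            return -EINVAL;

        rc = xen_mpumap_alloc_entry(&idx);
        if ( rc )
            return -ENOENT;

        xen_mpumap[idx] = pr_of_addr(base, limit, flags);
        write_protection_region(&xen_mpumap[idx], idx);
    }

    return 0;

That way the table is not scanned at all when there is nothing to insert.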

> +    {
> +        rc = xen_mpumap_alloc_entry(&idx);
> +        if ( rc )
> +            return -ENOENT;
> +
> +        xen_mpumap[idx] = pr_of_addr(base, limit, flags);
> +
> +        write_protection_region(&xen_mpumap[idx], idx);
> +    }
> +
> +    return 0;
> +}
> +
> +int xen_mpumap_update(paddr_t base, paddr_t limit, unsigned int flags)
> +{
> +    int rc;
> +
> +    ASSERT(IS_ALIGNED(base, PAGE_SIZE));
> +    ASSERT(IS_ALIGNED(limit, PAGE_SIZE));
What's the purpose of these asserts given the exact same check a few lines below?

> +    ASSERT(base <= limit);
Hmm, given that limit is exclusive and is later decremented to make it
inclusive, passing base == limit will leave you with limit smaller than base.
Also, if limit == 0, the subtraction will underflow.
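Maybe something like this at the top of xen_mpumap_update() instead (untested,
just to illustrate):

    if ( limit <= base )
    {
        printk("Empty or inverted range %#"PRIpaddr" - %#"PRIpaddr"\n",
               base, limit);
        return -EINVAL;
    }

which would also cover the limit == 0 underflow.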

> +
> +    if ( flags_has_rwx(flags) )
> +    {
> +        printk("Mappings should not be both Writeable and Executable\n");
> +        return -EINVAL;
> +    }
> +
> +    if ( !IS_ALIGNED(base, PAGE_SIZE) || !IS_ALIGNED(limit, PAGE_SIZE) )
> +    {
> +        printk("base address %#"PRIpaddr", or limit address %#"PRIpaddr" is 
> not page aligned\n",
> +               base, limit);
> +        return -EINVAL;
> +    }
> +
> +    spin_lock(&xen_mpumap_lock);
> +
> +    rc = xen_mpumap_update_entry(base, limit, flags);
> +
> +    spin_unlock(&xen_mpumap_lock);
> +
> +    return rc;
> +}
> +
> +int map_pages_to_xen(unsigned long virt, mfn_t mfn, unsigned long nr_mfns,
> +                     unsigned int flags)
> +{
> +    int rc = xen_mpumap_update(mfn_to_maddr(mfn),
What do you expect to be passed as virt? I would expect maddr which could save
you the conversion here.
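If the expectation is VA == PA on MPU systems, maybe make it explicit, e.g.
(untested):

    /* MPU systems run with VA == PA. */
    ASSERT(virt == mfn_to_maddr(mfn));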

> +                               mfn_to_maddr(mfn_add(mfn, nr_mfns)), flags);
> +    if ( !rc )
> +        context_sync_mpu();
Shouldn't this be called inside xen_mpumap_update() and within the locked section?
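I would expect something like (untested):

    spin_lock(&xen_mpumap_lock);

    rc = xen_mpumap_update_entry(base, limit, flags);
    if ( !rc )
        context_sync_mpu();

    spin_unlock(&xen_mpumap_lock);

so that every caller of xen_mpumap_update() gets the synchronisation, not just
map_pages_to_xen().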

> +
> +    return rc;
> +}
> +
>  void __init setup_mm(void)
>  {
>      BUG_ON("unimplemented");

~Michal




 

