
Re: [Xen-devel] [PATCH v3 9/9] xen/arm: mm: Use memory flags for modify_xen_mappings rather than custom one



On Mon, Oct 02, 2017 at 06:31:50PM +0100, Julien Grall wrote:
> This will help to consolidate the page-table code and avoid different
> code paths depending on the action to perform.
> 
> Signed-off-by: Julien Grall <julien.grall@xxxxxxx>
> Reviewed-by: Andre Przywara <andre.przywara@xxxxxxx>
> Reviewed-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>
> 
> ---
> 

Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>

> Cc: Ross Lagerwall <ross.lagerwall@xxxxxxxxxx>
> 
>     arch_livepatch_secure is now the same as on x86. It might be
>     possible to combine both, but I left that alone for now.
> 
>     Changes in v3:
>         - Add Stefano's reviewed-by
> 
>     Changes in v2:
>         - Add Andre's reviewed-by
> ---
>  xen/arch/arm/livepatch.c   |  6 +++---
>  xen/arch/arm/mm.c          |  5 ++---
>  xen/include/asm-arm/page.h | 11 -----------
>  3 files changed, 5 insertions(+), 17 deletions(-)
> 
> diff --git a/xen/arch/arm/livepatch.c b/xen/arch/arm/livepatch.c
> index 3e53524365..279d52cc6c 100644
> --- a/xen/arch/arm/livepatch.c
> +++ b/xen/arch/arm/livepatch.c
> @@ -146,15 +146,15 @@ int arch_livepatch_secure(const void *va, unsigned int pages, enum va_type type)
>      switch ( type )
>      {
>      case LIVEPATCH_VA_RX:
> -        flags = PTE_RO; /* R set, NX clear */
> +        flags = PAGE_HYPERVISOR_RX;
>          break;
>  
>      case LIVEPATCH_VA_RW:
> -        flags = PTE_NX; /* R clear, NX set */
> +        flags = PAGE_HYPERVISOR_RW;
>          break;
>  
>      case LIVEPATCH_VA_RO:
> -        flags = PTE_NX | PTE_RO; /* R set, NX set */
> +        flags = PAGE_HYPERVISOR_RO;
>          break;
>  
>      default:
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 57afedf0be..705bdd9cce 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -1041,8 +1041,8 @@ static int create_xen_entries(enum xenmap_operation op,
>                  else
>                  {
>                      pte = *entry;
> -                    pte.pt.ro = PTE_RO_MASK(flags);
> -                    pte.pt.xn = PTE_NX_MASK(flags);
> +                    pte.pt.ro = PAGE_RO_MASK(flags);
> +                    pte.pt.xn = PAGE_XN_MASK(flags);
>                      if ( !pte.pt.ro && !pte.pt.xn )
>                      {
>                          printk("%s: Incorrect combination for addr=%lx\n",
> @@ -1085,7 +1085,6 @@ int destroy_xen_mappings(unsigned long v, unsigned long e)
>  
>  int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int flags)
>  {
> -    ASSERT((flags & (PTE_NX | PTE_RO)) == flags);
>      return create_xen_entries(MODIFY, s, INVALID_MFN, (e - s) >> PAGE_SHIFT,
>                                flags);
>  }
> diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
> index e2b3e402d0..e4be83a7bc 100644
> --- a/xen/include/asm-arm/page.h
> +++ b/xen/include/asm-arm/page.h
> @@ -96,17 +96,6 @@
>  #define PAGE_HYPERVISOR_WC      (_PAGE_DEVICE|MT_NORMAL_NC)
>  
>  /*
> - * Defines for changing the hypervisor PTE .ro and .nx bits. This is only to be
> - * used with modify_xen_mappings.
> - */
> -#define _PTE_NX_BIT     0U
> -#define _PTE_RO_BIT     1U
> -#define PTE_NX          (1U << _PTE_NX_BIT)
> -#define PTE_RO          (1U << _PTE_RO_BIT)
> -#define PTE_NX_MASK(x)  (((x) >> _PTE_NX_BIT) & 0x1U)
> -#define PTE_RO_MASK(x)  (((x) >> _PTE_RO_BIT) & 0x1U)
> -
> -/*
>   * Stage 2 Memory Type.
>   *
>   * These are valid in the MemAttr[3:0] field of an LPAE stage 2 page
> -- 
> 2.11.0
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxx
> https://lists.xen.org/xen-devel

