
Re: [PATCH 2/9] xen/arm64: head: Don't map too much in boot_third


  • To: Julien Grall <julien@xxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Michal Orzel <michal.orzel@xxxxxxx>
  • Date: Mon, 26 Jun 2023 13:28:48 +0200
  • Cc: <Luca.Fancellu@xxxxxxx>, <Henry.Wang@xxxxxxx>, Julien Grall <jgrall@xxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, "Bertrand Marquis" <bertrand.marquis@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • Delivery-date: Mon, 26 Jun 2023 11:29:06 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>


On 25/06/2023 22:49, Julien Grall wrote:
> 
> 
> From: Julien Grall <jgrall@xxxxxxxxxx>
> 
> At the moment, we are mapping the size of the reserved area for Xen
> (i.e. 2MB) even if the binary is smaller. We don't exactly know what's
> after Xen, so it is not a good idea to map more than necessary for a
> couple of reasons:
>     * We would need to use break-before-make if the extra PTE needs to
>       be updated to point to another region
>     * The extra area mapped may be mapped again by Xen with different
>       memory attribute. This would result to attribue mismatch.
s/attribue/attribute

> 
> Therefore, rework the logic in create_page_tables() to map only what's
> necessary. To simplify the logic, we also want to make sure _end
> is page-aligned. So align the symbol in the linker and add an assert
> to catch any change.
> 
> Signed-off-by: Julien Grall <jgrall@xxxxxxxxxx>
> ---
>  xen/arch/arm/arm64/head.S | 15 ++++++++++++++-
>  xen/arch/arm/xen.lds.S    |  3 +++
>  2 files changed, 17 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
> index f37133cf7ccd..66bc85d4c39e 100644
> --- a/xen/arch/arm/arm64/head.S
> +++ b/xen/arch/arm/arm64/head.S
> @@ -572,6 +572,19 @@ create_page_tables:
>          create_table_entry boot_first, boot_second, x0, 1, x1, x2, x3
>          create_table_entry boot_second, boot_third, x0, 2, x1, x2, x3
> 
> +        /*
> +         * Find the size of Xen in pages and multiply by the size of a
> +         * PTE. This will then be compared in the mapping loop below.
> +         *
> +         * Note the multiplication is just to avoid using an extra
> +         * register/instruction per iteration.
> +         */
> +        ldr   x0, =_start            /* x0 := vaddr(_start) */
x0 is already set to the vaddr of _start by the first instruction of
create_page_tables and is preserved by create_table_entry, so you could
simply reuse it here instead of re-loading it.
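
Something along these lines (untested sketch, assuming x0 really is left
untouched between the top of create_page_tables and this point):

        /* x0 still holds vaddr(_start) from the top of create_page_tables */
        ldr   x1, =_end              /* x1 := vaddr(_end) */
        sub   x0, x1, x0             /* x0 := effective size of Xen */
        lsr   x0, x0, #PAGE_SHIFT    /* x0 := Number of pages for Xen */
        lsl   x0, x0, #3             /* x0 := Number of pages * PTE size */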

> +        ldr   x1, =_end              /* x1 := vaddr(_end) */
> +        sub   x0, x1, x0             /* x0 := effective size of Xen */
> +        lsr   x0, x0, #PAGE_SHIFT    /* x0 := Number of pages for Xen */
> +        lsl   x0, x0, #3             /* x0 := Number of pages * PTE size */
> +
>          /* Map Xen */
>          adr_l x4, boot_third
> 
> @@ -585,7 +598,7 @@ create_page_tables:
>  1:      str   x2, [x4, x1]           /* Map vaddr(start) */
>          add   x2, x2, #PAGE_SIZE     /* Next page */
>          add   x1, x1, #8             /* Next slot */
> -        cmp   x1, #(XEN_PT_LPAE_ENTRIES<<3) /* 512 entries per page */
> +        cmp   x1, x0                 /* Loop until we map all of Xen */
>          b.lt  1b
> 
>          /*
> diff --git a/xen/arch/arm/xen.lds.S b/xen/arch/arm/xen.lds.S
> index c5d8c6201423..c4627cea7482 100644
> --- a/xen/arch/arm/xen.lds.S
> +++ b/xen/arch/arm/xen.lds.S
> @@ -212,6 +212,7 @@ SECTIONS
>         . = ALIGN(POINTER_ALIGN);
>         __bss_end = .;
>    } :text
> +  . = ALIGN(PAGE_SIZE);
>    _end = . ;
> 
>    /* Section for the device tree blob (if any). */
> @@ -241,4 +242,6 @@ ASSERT(IS_ALIGNED(__init_begin,     4), "__init_begin is misaligned")
>  ASSERT(IS_ALIGNED(__init_end,       4), "__init_end is misaligned")
>  ASSERT(IS_ALIGNED(__bss_start,      POINTER_ALIGN), "__bss_start is misaligned")
>  ASSERT(IS_ALIGNED(__bss_end,        POINTER_ALIGN), "__bss_end is misaligned")
> +/* To simplify the logic in head.S, we want to _end to be page aligned */
> +ASSERT(IS_ALIGNED(_end,            PAGE_SIZE), "_end is not page aligned")
One more space is needed if you want PAGE_SIZE to line up with POINTER_ALIGN in the lines above.
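i.e. (assuming I counted the columns right):

ASSERT(IS_ALIGNED(_end,             PAGE_SIZE), "_end is not page aligned")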

All in all:
Reviewed-by: Michal Orzel <michal.orzel@xxxxxxx>

~Michal




 

