Re: [PATCH 2/8] arm/mm: Document the differences between arm32 and arm64 directmaps
Hi Alejandro,

On 17/07/2023 17:03, Alejandro Vallejo wrote:
> arm32 merely covers the XENHEAP, whereas arm64 currently covers anything
> in the frame table. These comments highlight why arm32 doesn't need to
> account for PDX compression in its __va() implementation while arm64 does.
>
> Signed-off-by: Alejandro Vallejo <alejandro.vallejo@xxxxxxxxx>
> ---
>  xen/arch/arm/include/asm/mm.h | 27 +++++++++++++++++++++++++++
>  1 file changed, 27 insertions(+)
>
> diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
> index 4262165ce2..1a83f41879 100644
> --- a/xen/arch/arm/include/asm/mm.h
> +++ b/xen/arch/arm/include/asm/mm.h
> @@ -280,6 +280,19 @@ static inline paddr_t __virt_to_maddr(vaddr_t va)
>  #define virt_to_maddr(va)   __virt_to_maddr((vaddr_t)(va))
>
>  #ifdef CONFIG_ARM_32
> +/**
> + * Find the virtual address corresponding to a machine address
> + *
> + * Only memory backing the XENHEAP has a corresponding virtual address to
> + * be found. This is so we can save precious virtual space, as it's in
> + * short supply on arm32. This mapping is not subject to PDX compression
> + * because XENHEAP is known to be physically contiguous and can't hence
> + * jump over the PDX hole. This means we can avoid the roundtrips
> + * converting to/from pdx.
> + *
> + * @param ma Machine address
> + * @return Virtual address mapped to `ma`
> + */
>  static inline void *maddr_to_virt(paddr_t ma)
>  {
>      ASSERT(is_xen_heap_mfn(maddr_to_mfn(ma)));
> @@ -287,6 +300,20 @@ static inline void *maddr_to_virt(paddr_t ma)
>      return (void *)(unsigned long) ma + XENHEAP_VIRT_START;
>  }
>  #else
> +/**
> + * Find the virtual address corresponding to a machine address
> + *
> + * The directmap covers all conventional memory accesible by the
> + * hypervisor. This means it's subject to PDX compression.
> + *
> + * More specifically to arm64, the directmap mappings start at the first
> + * GiB boundary containing valid RAM. This means there's an extra offset
> + * applied (directmap_base_pdx) on top of the regular PDX compression
> + * logic.

I find this paragraph a bit confusing to read, because it suggests that
pdx_to_maddr(directmap_base_pdx) will return a GiB-aligned address.

The base PDX corresponds to the start of the first region, and the only
requirement is that it be page-aligned. However, when mapping it in the
virtual address space we also offset the start so that superpages can be
used (this is where the GiB alignment comes in). That's why
XENHEAP_VIRT_START points to directmap_virt_start rather than
DIRECTMAP_VIRT_START.

I think it would make sense to have the logic follow what you describe, as
it would remove a memory read. But I would understand if you don't want to
take on that extra work. :)

So for now, I would suggest removing "GiB boundary containing".

Cheers,

--
Julien Grall
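
For context, a rough sketch of how the arm64-side lookup described above
could be composed: PDX compression first, then the base-PDX offset, with the
sub-GiB offset folded into XENHEAP_VIRT_START (i.e. directmap_virt_start).
The symbols directmap_base_pdx and XENHEAP_VIRT_START are the ones discussed
in the thread; the function name below is hypothetical and this is only an
illustration of the scheme, not the in-tree implementation:

    /*
     * Illustrative sketch only: arm64 machine-address -> virtual-address
     * lookup under PDX compression with a base-PDX offset.
     */
    static inline void *sketch_maddr_to_virt(paddr_t ma)
    {
        /* Compress the frame number, skipping the PDX hole. */
        unsigned long pdx = mfn_to_pdx(maddr_to_mfn(ma));

        /*
         * directmap_base_pdx is the PDX of the first page of RAM, so virtual
         * space below the first bank is not wasted. It is only page-aligned;
         * the sub-GiB offset needed so the mapping can use superpages lives
         * in XENHEAP_VIRT_START (directmap_virt_start), not here.
         */
        return (void *)(XENHEAP_VIRT_START +
                        ((pdx - directmap_base_pdx) << PAGE_SHIFT) +
                        (ma & ~PAGE_MASK));
    }

Making the virtual start a GiB-aligned build-time constant, as the patch's
comment implies, is the variant that would remove the read of
directmap_virt_start mentioned above.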