Re: [PATCH v2] xen/riscv: Increase XEN_VIRT_SIZE
On 04.04.2025 18:04, Oleksii Kurochko wrote:
> --- a/xen/arch/riscv/include/asm/config.h
> +++ b/xen/arch/riscv/include/asm/config.h
> @@ -41,11 +41,11 @@
> * Start addr | End addr | Slot | area description
> *
> ============================================================================
> * ..... L2 511 Unused
> - * 0xffffffffc0a00000 0xffffffffc0bfffff L2 511 Fixmap
> + * 0xffffffffc1800000 0xffffffffc1afffff L2 511 Fixmap
Isn't the upper bound 0xffffffffc19fffff now?
> --- a/xen/arch/riscv/include/asm/mm.h
> +++ b/xen/arch/riscv/include/asm/mm.h
> @@ -43,13 +43,19 @@ static inline void *maddr_to_virt(paddr_t ma)
> */
> static inline unsigned long virt_to_maddr(unsigned long va)
> {
> + const unsigned int vpn1_shift = PAGETABLE_ORDER + PAGE_SHIFT;
> + const unsigned long va_vpn = va >> vpn1_shift;
> + const unsigned long xen_virt_start_vpn =
> + _AC(XEN_VIRT_START, UL) >> vpn1_shift;
> + const unsigned long xen_virt_end_vpn =
> + xen_virt_start_vpn + ((XEN_VIRT_SIZE >> vpn1_shift) - 1);
> +
> if ((va >= DIRECTMAP_VIRT_START) &&
> (va <= DIRECTMAP_VIRT_END))
> return directmapoff_to_maddr(va - directmap_virt_start);
>
> - BUILD_BUG_ON(XEN_VIRT_SIZE != MB(2));
> - ASSERT((va >> (PAGETABLE_ORDER + PAGE_SHIFT)) ==
> - (_AC(XEN_VIRT_START, UL) >> (PAGETABLE_ORDER + PAGE_SHIFT)));
> + BUILD_BUG_ON(XEN_VIRT_SIZE > GB(1));
> + ASSERT((va_vpn >= xen_virt_start_vpn) && (va_vpn <= xen_virt_end_vpn));
Not all of the range is backed by memory, and for the excess space the
translation is therefore (likely) wrong. Wouldn't that better be caught
by the assertion?
> --- a/xen/arch/riscv/mm.c
> +++ b/xen/arch/riscv/mm.c
> @@ -31,20 +31,27 @@ unsigned long __ro_after_init phys_offset; /* = load_start - XEN_VIRT_START */
> #define LOAD_TO_LINK(addr) ((unsigned long)(addr) - phys_offset)
>
> /*
> - * It is expected that Xen won't be more then 2 MB.
> + * It is expected that Xen won't be more than XEN_VIRT_SIZE MB.
> * The check in xen.lds.S guarantees that.
> - * At least 3 page tables (in case of Sv39 ) are needed to cover 2 MB.
> - * One for each page level table with PAGE_SIZE = 4 Kb.
> *
> - * One L0 page table can cover 2 MB (512 entries of one page table * PAGE_SIZE).
> + * The root page table is shared with the initial mapping and is declared
> + * separately. (look at stage1_pgtbl_root)
> *
> - * It might be needed one more page table in case when Xen load address
> - * isn't 2 MB aligned.
> + * An amount of page tables between root page table and L0 page table
> + * (in the case of Sv39 it covers L1 table):
> + * (CONFIG_PAGING_LEVELS - 2) are needed for an identity mapping and
> + * the same amount are needed for Xen.
> *
> - * CONFIG_PAGING_LEVELS page tables are needed for the identity mapping,
> - * except that the root page table is shared with the initial mapping
> + * An amount of L0 page tables:
> + * (512 entries of one L0 page table covers 2MB == 1<<XEN_PT_LEVEL_SHIFT(1))
> + * XEN_VIRT_SIZE >> XEN_PT_LEVEL_SHIFT(1) are needed for Xen and
> + * one L0 is needed for the identity mapping.
> + *
> + * One more page table might be needed in case the Xen load
> + * address isn't 2 MB aligned.
Shouldn't we guarantee that? What may require an extra page table is when Xen
crosses a 1 GB boundary (unless we also guaranteed that it won't).
Jan