Re: [PATCH v4 2/2] xen/riscv: introduce identity mapping
On 24.07.2023 11:42, Oleksii Kurochko wrote:
> @@ -35,8 +36,10 @@ unsigned long __ro_after_init phys_offset;
>   *
>   * It might be needed one more page table in case when Xen load address
>   * isn't 2 MB aligned.
> + *
> + * CONFIG_PAGING_LEVELS page tables are needed for identity mapping.
>   */
> -#define PGTBL_INITIAL_COUNT ((CONFIG_PAGING_LEVELS - 1) + 1)
> +#define PGTBL_INITIAL_COUNT ((CONFIG_PAGING_LEVELS - 1) * 2) + 1

Comment addition and code change are at least apparently out of sync:
With such a comment, and without thinking much, one would expect the
constant to be bumped by CONFIG_PAGING_LEVELS. It is true that you only
need CONFIG_PAGING_LEVELS - 1, because the root table is shared, but
that would then be nice to also clarify in the comment. E.g.
"CONFIG_PAGING_LEVELS page tables are needed for the identity mapping,
except that the root page table is shared with the initial mapping."

Also - where did the outermost pair of parentheses go? (Really you
don't need to parenthesize the multiplication, so the last closing one
can simply move last.)

> @@ -75,10 +78,11 @@ static void __init setup_initial_mapping(struct mmu_desc *mmu_desc,
>      unsigned int index;
>      pte_t *pgtbl;
>      unsigned long page_addr;
> +    bool is_identity_mapping = map_start == pa_start;
>
> -    if ( (unsigned long)_start % XEN_PT_LEVEL_SIZE(0) )
> +    if ( !IS_ALIGNED((unsigned long)_start, KB(4)) )
>      {
> -        early_printk("(XEN) Xen should be loaded at 4k boundary\n");
> +        early_printk("(XEN) Xen should be loaded at 4KB boundary\n");

The change to the message looks unrelated.

> @@ -255,25 +261,44 @@ void __init noreturn noinline enable_mmu()
>      csr_write(CSR_SATP,
>                PFN_DOWN((unsigned long)stage1_pgtbl_root) |
>                RV_STAGE1_MODE << SATP_MODE_SHIFT);
> +}
>
> -    asm volatile ( ".p2align 2" );
> - mmu_is_enabled:
> -    /*
> -     * Stack should be re-inited as:
> -     * 1. Right now an address of the stack is relative to load time
> -     *    addresses what will cause an issue in case of load start address
> -     *    isn't equal to linker start address.
> -     * 2. Addresses in stack are all load time relative which can be an
> -     *    issue in case when load start address isn't equal to linker
> -     *    start address.
> -     *
> -     * We can't return to the caller because the stack was reseted
> -     * and it may have stash some variable on the stack.
> -     * Jump to a brand new function as the stack was reseted
> -     */
> +void __init remove_identity_mapping(void)
> +{
> +    static pte_t *pgtbl = stage1_pgtbl_root;
> +    static unsigned long load_start = XEN_VIRT_START;
> +    static unsigned int pt_level = CONFIG_PAGING_LEVELS - 1;

These all want to be __initdata, but personally I find this way of
recursing a little odd. Let's see what the maintainers say.

> +    unsigned long load_end = LINK_TO_LOAD(_end);
> +    unsigned long xen_size;
> +    unsigned long pt_level_size = XEN_PT_LEVEL_SIZE(pt_level);
> +    unsigned long pte_nums;
> +
> +    unsigned long virt_indx = pt_index(pt_level, XEN_VIRT_START);
> +    unsigned long indx;
> +
> +    if ( load_start == XEN_VIRT_START )
> +        load_start = LINK_TO_LOAD(_start);
> +
> +    xen_size = load_end - load_start;

When you come here recursively, don't you need to limit this instance
of the function to a single page table's worth of address space (at the
given level), using load_end only if that's yet lower?
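To make the question concrete, one way such a bound could look is
sketched below. The span and table_end locals are hypothetical, and
PAGETABLE_ENTRIES is assumed to be the number of entries per table;
this only illustrates the constraint being asked about, it is not the
actual fix.

    /*
     * Sketch only: a page table at pt_level covers PAGETABLE_ENTRIES
     * entries of XEN_PT_LEVEL_SIZE(pt_level) each, so a recursive
     * instance must not walk beyond the end of the table it was
     * handed; load_end applies only if it is lower than that.
     */
    unsigned long span = XEN_PT_LEVEL_SIZE(pt_level) * PAGETABLE_ENTRIES;
    unsigned long table_end = ROUNDUP(load_start + 1, span);

    xen_size = min(load_end, table_end) - load_start;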
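For reference, folding the two remarks on the first hunk together, the
comment and macro would read along these lines (a sketch of the
suggested end state, not text from the patch):

     * CONFIG_PAGING_LEVELS page tables are needed for the identity
     * mapping, except that the root page table is shared with the
     * initial mapping.
     */
    #define PGTBL_INITIAL_COUNT ((CONFIG_PAGING_LEVELS - 1) * 2 + 1)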
> +    pte_nums = ROUNDUP(xen_size, pt_level_size) / pt_level_size;
> +
> +    while ( pte_nums-- )
> +    {
> +        indx = pt_index(pt_level, load_start);
>
> -    switch_stack_and_jump((unsigned long)cpu0_boot_stack + STACK_SIZE,
> -                          cont_after_mmu_is_enabled);
> +        if ( virt_indx != indx )
> +        {
> +            pgtbl[indx].pte = 0;
> +            load_start += XEN_PT_LEVEL_SIZE(pt_level);
> +        }
> +        else
> +        {
> +            pgtbl = (pte_t  *)LOAD_TO_LINK(pte_to_paddr(pgtbl[indx]));

Nit: Stray double blank.

> +            pt_level--;
> +            remove_identity_mapping();

Don't you need to restore pgtbl and pt_level here before the loop can
continue? And because of depending on load_start, which would have
moved, xen_size also needs suitably reducing, I think. Plus pte_nums as
well, since that in turn was calculated from xen_size.
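Illustrating the restore being asked about, the else branch might grow
along the following lines. This is a sketch only, under the assumption
that the recursive instance is also bounded as discussed above; the
parent local is hypothetical.

        else
        {
            pte_t *parent = pgtbl;  /* stash state the recursion clobbers */

            pgtbl = (pte_t *)LOAD_TO_LINK(pte_to_paddr(pgtbl[indx]));
            pt_level--;
            remove_identity_mapping();

            /* Restore this level's walk state. */
            pgtbl = parent;
            pt_level++;

            /*
             * load_start advanced inside the recursion, so the amount
             * of address space still to process at this level, and
             * with it the number of remaining entries, has to be
             * recomputed.
             */
            xen_size = load_end - load_start;
            pte_nums = ROUNDUP(xen_size, pt_level_size) / pt_level_size;
        }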
Jan