Re: [PATCH v4 2/2] xen/riscv: introduce identity mapping
On 24.07.2023 18:00, Oleksii wrote:
> On Mon, 2023-07-24 at 16:11 +0200, Jan Beulich wrote:
>> On 24.07.2023 11:42, Oleksii Kurochko wrote:
>>> +void __init remove_identity_mapping(void)
>>> +{
>>> +    static pte_t *pgtbl = stage1_pgtbl_root;
>>> +    static unsigned long load_start = XEN_VIRT_START;
>>> +    static unsigned int pt_level = CONFIG_PAGING_LEVELS - 1;
>>
>> These all want to be __initdata, but personally I find this way of
>> recursing a little odd. Let's see what the maintainers say.
> I'm not completely happy either. Initially I thought it would be
> better to pass all this as function arguments.
>
> But then access to stage1_pgtbl_root would have to be provided (a
> get_root_pt() function?). So remove_identity_mapping() would be called
> as remove_identity_mapping(get_root_pt(), _start, CONFIG_PAGING_LEVELS
> - 1), or as remove_identity_mapping(NULL, _start, CONFIG_PAGING_LEVELS
> - 1) with a check that, if the first argument is NULL, it gets
> initialized with stage1_pgtbl_root.
> Also I am not sure that a 'user' should have to provide all this
> information to such a function.
>
> Could you recommend something better?

Well, to avoid the mess you are describing I would consider making
remove_identity_mapping() itself a thin wrapper around the actual
function, which then invokes itself recursively. That'll keep the
top-level call site tidy, and all the technical details confined to
the (then) two functions.

>>> +    unsigned long load_end = LINK_TO_LOAD(_end);
>>> +    unsigned long xen_size;
>>> +    unsigned long pt_level_size = XEN_PT_LEVEL_SIZE(pt_level);
>>> +    unsigned long pte_nums;
>>> +
>>> +    unsigned long virt_indx = pt_index(pt_level, XEN_VIRT_START);
>>> +    unsigned long indx;
>>> +
>>> +    if ( load_start == XEN_VIRT_START )
>>> +        load_start = LINK_TO_LOAD(_start);
>>> +
>>> +    xen_size = load_end - load_start;
>>
>> When you come here recursively, don't you need to limit this
>> instance of the function to a single page table's worth of address
>> space (at the given level), using load_end only if that's yet
>> lower?
> Do you mean the case when load_start > load_end? If yes, then I
> missed that.

No, my concern is with the page table presently acted upon covering
only a limited part of the address space. That "limited" is the full
address space only for the top-level table. You won't observe this,
though, unless the Xen image crosses a 2Mb boundary. But if it does
(and it may help if you arranged for a big enough image just for the
purpose of debugging, simply by inserting enough dead code or data),
and if all mappings are 4k ones, you'd run past the first L1 table's
worth of mappings on the first invocation of this function with an L1
table. (Of course hitting the condition may further require Xen and
the 1:1 mappings to be sufficiently close to one another, so that the
zapping doesn't already occur at higher levels. But then the same
situation could arise at higher levels when the image crosses a 1Gb
or 512Gb boundary.)

>>> +    pte_nums = ROUNDUP(xen_size, pt_level_size) / pt_level_size;
>>> +
>>> +    while ( pte_nums-- )
>>> +    {
>>> +        indx = pt_index(pt_level, load_start);
>>>
>>> -    switch_stack_and_jump((unsigned long)cpu0_boot_stack + STACK_SIZE,
>>> -                          cont_after_mmu_is_enabled);
>>> +        if ( virt_indx != indx )
>>> +        {
>>> +            pgtbl[indx].pte = 0;
>>> +            load_start += XEN_PT_LEVEL_SIZE(pt_level);
>>> +        }
>>> +        else
>>> +        {
>>> +            pgtbl = (pte_t  *)LOAD_TO_LINK(pte_to_paddr(pgtbl[indx]));
>>
>> Nit: Stray double blank.
> Thanks.
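[For illustration, the thin-wrapper arrangement suggested above might
take roughly this shape. The helper's name and signature are
placeholders for the purpose of this sketch, not code from the actual
patch:]

    /* The real work, recursing into lower levels itself; all of the
       previously static state now arrives via parameters. */
    static void __init remove_identity_mapping_at(pte_t *pgtbl,
                                                  unsigned long load_start,
                                                  unsigned int pt_level);

    /* Thin public wrapper: the top-level call site stays tidy, and
       stage1_pgtbl_root is not exposed to callers. */
    void __init remove_identity_mapping(void)
    {
        remove_identity_mapping_at(stage1_pgtbl_root,
                                   LINK_TO_LOAD(_start),
                                   CONFIG_PAGING_LEVELS - 1);
    }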
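[To make the boundary concern concrete, one way the helper's prologue
might limit a recursive invocation to a single table's worth of
address space; a hedged sketch only, using ROUNDUP and
XEN_PT_LEVEL_SIZE as they appear in the quoted patch:]

    /*
     * A table at pt_level covers XEN_PT_LEVEL_SIZE(pt_level + 1) bytes
     * of address space, i.e. one entry of its parent; only the root
     * table covers everything. So, per the remark above, use load_end
     * only if it is yet lower than the end of this table's coverage.
     * Otherwise an image crossing a 2Mb (or 1Gb, 512Gb) boundary would
     * run past the first table's worth of mappings.
     */
    if ( pt_level < CONFIG_PAGING_LEVELS - 1 )
    {
        unsigned long tbl_end = ROUNDUP(load_start + 1,
                                        XEN_PT_LEVEL_SIZE(pt_level + 1));

        if ( load_end > tbl_end )
            load_end = tbl_end;
    }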
>>
>>> +            pt_level--;
>>> +            remove_identity_mapping();
>>
>> Don't you need to restore pgtbl and pt_level here before the loop
>> can continue? And because of depending on load_start, which would
>> have moved, xen_size also needs suitably reducing, I think. Plus
>> pte_nums as well, since that in turn was calculated from xen_size.
> I forgot to restore pgtbl and pt_level because initially I used
> function arguments to pass that information, so there was no need to
> restore them.
>
> Regarding xen_size and pte_nums, it looks like they only need to be
> initialized once at each page table level.
> For example, take the following situation:
> ----------------------
>  non-identity-mapping
>  identity-mapping
> ---------------------- C
>  identity-mapping
> ---------------------- B
>  identity-mapping
> ---------------------- A
> We calculated that 3 PTEs need to be removed. The first two PTEs (as
> only the identity mapping is there) are removed without any issue,
> then load_start moves to C and we recurse for PTE 'C' to go to the
> next page table level.
> At the new level we calculate how many PTEs need to be removed and
> remove only the necessary amount.
> When we come back to the previous page table level, pte_nums will be
> 1; we then go to the head of the loop, decrease pte_nums to 0 and
> exit.

I think I agree that this case is okay.

> The same holds for the case when the non-identity mapping is lower
> than the identity mapping (but it looks like this cannot happen,
> because XEN_VIRT_START is the highest address by definition; probably
> a check should be added):

And it looks like this case being impossible is what voids my
respective remark. (Might be worth adding a comment to this effect.)

Jan

> We know that pte_nums = 3 at some level. Then we go to the next
> level, remove only the identity-map PTEs there, come back to the
> previous level, decrease pte_nums to 2 and remove only the 2
> remaining PTEs.
>
> ~ Oleksii
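[Tying the thread together, a hedged sketch of how the helper's loop
might look with pgtbl and pt_level passed as parameters: each recursion
depth keeps its own copies on the stack, so nothing needs restoring
after the recursive call returns, and xen_size/pte_nums are computed
afresh per invocation, matching the reasoning above. load_end is
assumed to have been clamped to the current table's coverage as in the
earlier snippet. An untested illustration of one possible shape, not
the code that eventually went in:]

    static void __init remove_identity_mapping_at(pte_t *pgtbl,
                                                  unsigned long load_start,
                                                  unsigned int pt_level)
    {
        unsigned long load_end = LINK_TO_LOAD(_end);
        unsigned long pt_level_size = XEN_PT_LEVEL_SIZE(pt_level);
        unsigned long virt_indx = pt_index(pt_level, XEN_VIRT_START);
        unsigned long xen_size, pte_nums;

        /* ... clamp load_end to this table's coverage, as sketched
           above, when pt_level is below the root level ... */

        xen_size = load_end - load_start;
        pte_nums = ROUNDUP(xen_size, pt_level_size) / pt_level_size;

        while ( pte_nums-- )
        {
            unsigned long indx = pt_index(pt_level, load_start);

            if ( virt_indx != indx )
            {
                /* The entry maps only the 1:1 range: zap it wholesale. */
                pgtbl[indx].pte = 0;
            }
            else
            {
                /*
                 * The entry is shared with the XEN_VIRT_START mapping:
                 * descend one level. The callee gets its own copies of
                 * pgtbl, pt_level, xen_size and pte_nums, so nothing
                 * needs saving or restoring here.
                 */
                remove_identity_mapping_at(
                    (pte_t *)LOAD_TO_LINK(pte_to_paddr(pgtbl[indx])),
                    load_start, pt_level - 1);
            }

            load_start += pt_level_size;
        }
    }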