Re: [PATCH v4 4/6] xen/arm: mpu: Create boot-time MPU protection regions
Hi Julien,

> On 30 Oct 2024, at 10:32, Julien Grall <julien@xxxxxxx> wrote:
> 
> On 30/10/2024 10:08, Luca Fancellu wrote:
>> Hi Julien,
>>> On 30 Oct 2024, at 09:52, Julien Grall <julien.grall.oss@xxxxxxxxx> wrote:
>>> 
>>> On Wed, 30 Oct 2024 at 09:17, Luca Fancellu <Luca.Fancellu@xxxxxxx> wrote:
>>>> 
>>>> Hi Ayan,
>>>> 
>>>> While I rebased the branch on top of your patches, I saw you’ve changed
>>>> the number of regions mapped at boot time, can I ask why?
>>> 
>>> I asked for the change. If you look at the layout...
>> Apologies, I didn’t see you had asked for the change.
> 
> No need to apologise! I think I asked a few revisions ago.
> 
>>>> Compared to
>>>> https://patchwork.kernel.org/project/xen-devel/patch/20230626033443.2943270-25-Penny.Zheng@xxxxxxx/:
>>> 
>>> ... you have two sections with the same permissions:
>>> 
>>> xen_mpumap[1] : Xen read-only data
>>> xen_mpumap[2] : Xen read-only after init data
>>> xen_mpumap[3] : Xen read-write data
>>> 
>>> During boot, [2] and [3] will share the same permissions. After boot,
>>> this will be [1] and [2]. Given the number of MPU regions is limited,
>>> this is a bit of a waste.
>>> 
>>> We also don't want to have a hole in the middle of Xen sections. So
>>> folding seemed to be a good idea.
>>> 
>>>>> +FUNC(enable_boot_cpu_mm)
>>>>> +
>>>>> +    /* Get the number of regions specified in MPUIR_EL2 */
>>>>> +    mrs   x5, MPUIR_EL2
>>>>> +
>>>>> +    /* x0: region sel */
>>>>> +    mov   x0, xzr
>>>>> +    /* Xen text section. */
>>>>> +    ldr   x1, =_stext
>>>>> +    ldr   x2, =_etext
>>>>> +    prepare_xen_region x0, x1, x2, x3, x4, x5, attr_prbar=REGION_TEXT_PRBAR
>>>>> +
>>>>> +    /* Xen read-only data section. */
>>>>> +    ldr   x1, =_srodata
>>>>> +    ldr   x2, =_erodata
>>>>> +    prepare_xen_region x0, x1, x2, x3, x4, x5, attr_prbar=REGION_RO_PRBAR
>>>>> +
>>>>> +    /* Xen read-only after init and data section. (RW data) */
>>>>> +    ldr   x1, =__ro_after_init_start
>>>>> +    ldr   x2, =__init_begin
>>>>> +    prepare_xen_region x0, x1, x2, x3, x4, x5
>>>> 
>>>> ^— this, for example, will block Xen from calling init_done(void) later.
>>>> I understand this is earlyboot, but I guess we don’t want to make
>>>> subsequent changes to this part when introducing the patches to support
>>>> start_xen()
>>> 
>>> Can you be a bit more descriptive... What will block?
>> This call in setup.c:
>> 
>>     rc = modify_xen_mappings((unsigned long)&__ro_after_init_start,
>>                              (unsigned long)&__ro_after_init_end,
>>                              PAGE_HYPERVISOR_RO);
>> 
>> cannot work anymore because xen_mpumap[2] is wider than just
>> (__ro_after_init_start, __ro_after_init_end).
> 
> Is this because the implementation of modify_xen_mappings() is only able to
> modify a full region? IOW, it would not be able to split regions and/or
> merge them?

Yes, the code is, at the moment, not smart enough to do that; it will only
modify a full region.

>> If that is what we want, then we could wrap the above call into something
>> MMU specific that will execute the above call, and something MPU specific
>> that will modify xen_mpumap[1] from (_srodata, _erodata) to
>> (_srodata, __ro_after_init_end) and xen_mpumap[2] from
>> (__ro_after_init_start, __init_begin) to (__ro_after_init_end, __init_begin).
> 
> I think it would make sense to have the call mmu/mpu specific. This would
> allow us to limit the number of MPU regions used by Xen itself.
> 
> The only problem is IIRC the region is not fixed because we will skip empty
> regions during earlyboot.
Yes, but I think we can assume that X(_srodata, _erodata) and
Y(__ro_after_init_start, __init_begin) won't ever be empty for Xen?

In that case, the call to mpumap_contain_region() should be able to retrieve
the full region X and the partial region Y, and a specific function could
modify the ranges of both, given their respective indexes.

Code in my branch:
https://gitlab.com/xen-project/people/lucafancellu/xen/-/blob/r82_rebased/xen/arch/arm/mpu/mm.c#L382
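To make the idea a bit more concrete, here is a rough, self-contained sketch
of what the MPU-specific side could do: instead of changing the permissions of
a sub-range, it moves the boundary between the two boot-time regions. The pr_t
layout, MAX_MPU_REGIONS and mpumap_find_region() below are simplified
stand-ins for illustration, not the actual code in my branch:

/*
 * Illustrative sketch only: grow region X (read-only data) up to the end of
 * the ro-after-init data and shrink region Y (read-write data) so that it
 * starts right after it.
 */
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t paddr_t;

typedef struct {
    paddr_t base;   /* Inclusive start of the region */
    paddr_t limit;  /* Exclusive end of the region */
    bool read_only;
} pr_t;

#define MAX_MPU_REGIONS 16

static pr_t xen_mpumap[MAX_MPU_REGIONS];

/* Return the index of the in-use region containing [base, limit), or -1. */
static int mpumap_find_region(paddr_t base, paddr_t limit)
{
    unsigned int i;

    for ( i = 0; i < MAX_MPU_REGIONS; i++ )
        if ( xen_mpumap[i].limit != 0 &&
             base >= xen_mpumap[i].base && limit <= xen_mpumap[i].limit )
            return i;

    return -1;
}

static int mpu_mark_ro_after_init(paddr_t srodata, paddr_t erodata,
                                  paddr_t ro_after_init_end,
                                  paddr_t init_begin)
{
    int x = mpumap_find_region(srodata, erodata);              /* full region X */
    int y = mpumap_find_region(ro_after_init_end, init_begin); /* partial region Y */

    if ( x < 0 || y < 0 )
        return -1;

    xen_mpumap[x].limit = ro_after_init_end;
    xen_mpumap[y].base = ro_after_init_end;

    /* The corresponding PRBAR/PRLAR registers would need reprogramming too. */
    return 0;
}

The lookup step is what I expect mpumap_contain_region() to provide, so only
the small range-adjustment helper would be new.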
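And the init_done() call site could then go through an arch-level wrapper
rather than calling modify_xen_mappings() directly. Again just a sketch that
builds on the helper above; the wrapper name and the #ifdef split are only to
show where the MMU and MPU implementations would diverge (the real split could
equally live in mmu/ and mpu/ specific files):

/* Linker symbols, as already used in setup.c and in the boot assembly. */
extern char _srodata[], _erodata[];
extern char __ro_after_init_start[], __ro_after_init_end[];
extern char __init_begin[];

#ifdef CONFIG_MMU
static int xen_mark_ro_after_init(void)
{
    /* Same call as today on MMU systems. */
    return modify_xen_mappings((unsigned long)&__ro_after_init_start,
                               (unsigned long)&__ro_after_init_end,
                               PAGE_HYPERVISOR_RO);
}
#else /* MPU */
static int xen_mark_ro_after_init(void)
{
    /* Move the boundary between the two boot-time regions instead. */
    return mpu_mark_ro_after_init((paddr_t)(unsigned long)&_srodata,
                                  (paddr_t)(unsigned long)&_erodata,
                                  (paddr_t)(unsigned long)&__ro_after_init_end,
                                  (paddr_t)(unsigned long)&__init_begin);
}
#endif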