Re: [Xen-devel] [PATCH 3/3] xen/arm: fix mask calculation in init_pdx
>>> On 06.05.19 at 17:26, <julien.grall@xxxxxxx> wrote:
> On 5/6/19 10:06 AM, Jan Beulich wrote:
>>>>> On 03.05.19 at 22:50, <sstabellini@xxxxxxxxxx> wrote:
>>> --- a/xen/arch/arm/setup.c
>>> +++ b/xen/arch/arm/setup.c
>>> @@ -481,10 +481,15 @@ static paddr_t __init next_module(paddr_t s, paddr_t *end)
>>>  static void __init init_pdx(void)
>>>  {
>>>      paddr_t bank_start, bank_size, bank_end;
>>> -
>>> -    u64 mask = pdx_init_mask(bootinfo.mem.bank[0].start);
>>> +    u64 mask;
>>>      int bank;
>>>
>>> +    /*
>>> +     * We always map the first 1<<MAX_ORDER of RAM, hence, they are left
>>
>> "... pages of RAM." ?
>>
>>> +     * uncompressed.
>>> +     */
>>> +    mask = pdx_init_mask(1ULL << (MAX_ORDER + PAGE_SHIFT));
>>
>> PAGE_SIZE << MAX_ORDER?
>
> Hmmm, I am not entirely convinced this will give the correct value,
> because PAGE_SIZE is defined as (_AC(1, L) << PAGE_SHIFT).

Oh, indeed, for an abstract 32-bit arch this would degenerate, due to
MAX_ORDER being 20. Nevertheless I think the expression used invites
"cleaning up" (making the same mistake I've made), and since this is in
Arm-specific code (where MAX_ORDER is 18) I think it would still be
better to use the suggested alternative.

>> I wonder whether pdx_init_mask() itself wouldn't better apply this
>> (lower) cap
>
> Do you mean always returning (PAGE_SIZE << MAX_ORDER) or capping the
> init mask?
>
> Note that the latter will not produce the same behavior as this patch.

Since I'm not sure I understand what you mean with "capping the init
mask", I'm also uncertain what alternative behavior you're thinking of.
What I'm suggesting is

u64 __init pdx_init_mask(u64 base_addr)
{
    return fill_mask(max(base_addr, (uint64_t)PAGE_SIZE << MAX_ORDER) - 1);
}

Jan
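
[Editorial note] The 32-bit degeneration conceded above can be seen in isolation. The sketch below is not Xen code: it assumes a hypothetical 32-bit configuration with PAGE_SHIFT 12 and MAX_ORDER 20, and uses uint32_t to stand in for such an architecture's unsigned long, the type behind PAGE_SIZE (_AC(1, L) << PAGE_SHIFT).

/* Stand-alone illustration of why PAGE_SIZE << MAX_ORDER can degenerate. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT   12
#define MAX_ORDER    20                          /* hypothetical 32-bit value */
#define PAGE_SIZE_32 ((uint32_t)1 << PAGE_SHIFT) /* emulates a 32-bit unsigned long */

int main(void)
{
    /* The set bit is shifted out of the 32-bit type: the result is 0. */
    uint32_t degenerate = PAGE_SIZE_32 << MAX_ORDER;

    /* The form used in the patch keeps the whole computation in 64 bits. */
    uint64_t correct = 1ULL << (MAX_ORDER + PAGE_SHIFT);

    printf("PAGE_SIZE << MAX_ORDER           = %#lx\n", (unsigned long)degenerate);
    printf("1ULL << (MAX_ORDER + PAGE_SHIFT) = %#llx\n", (unsigned long long)correct);
    return 0;
}

With the Arm value of MAX_ORDER (18) the shift stays within 32 bits, which is why PAGE_SIZE << MAX_ORDER is still considered acceptable in the Arm-specific code above.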
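
[Editorial note] The suggested cap inside pdx_init_mask() can likewise be tried outside the tree. The following is a hedged model, not the tree's code: fill_mask() is a simplified stand-in for the helper in xen/common/pdx.c, MAX_ORDER uses the Arm value of 18 mentioned above, and a plain ternary replaces Xen's max().

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define MAX_ORDER  18                  /* Arm value cited in the thread */
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Stand-in for fill_mask(): fills in all bits below the most significant set bit. */
static uint64_t fill_mask(uint64_t mask)
{
    while (mask & (mask + 1))
        mask |= mask + 1;
    return mask;
}

/* Model of the suggested variant: never compress the first 1<<MAX_ORDER pages. */
static uint64_t pdx_init_mask(uint64_t base_addr)
{
    uint64_t floor = (uint64_t)PAGE_SIZE << MAX_ORDER;
    return fill_mask((base_addr > floor ? base_addr : floor) - 1);
}

int main(void)
{
    printf("bank at 0x08000000 -> mask %#llx\n",
           (unsigned long long)pdx_init_mask(0x08000000ULL));
    printf("bank at 0x80000000 -> mask %#llx\n",
           (unsigned long long)pdx_init_mask(0x80000000ULL));
    return 0;
}

In this model a bank starting below the 1 GiB floor yields 0x3fffffff, the same value the patch's fixed 1ULL << (MAX_ORDER + PAGE_SHIFT) argument produces, while a bank starting above it still derives the mask from its own base address.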