Re: [Xen-devel] [PATCH v2] efi/boot: Don't free ebmalloc area at all
>>> On 01.03.17 at 11:41, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 01/03/17 10:39, Jan Beulich wrote:
>>>>> On 01.03.17 at 11:28, <andrew.cooper3@xxxxxxxxxx> wrote:
>>> @@ -144,19 +143,6 @@ static void __init __maybe_unused *ebmalloc(size_t size)
>>>      return ptr;
>>>  }
>>>
>>> -static void __init __maybe_unused free_ebmalloc_unused_mem(void)
>>> -{
>>> -    unsigned long start, end;
>>> -
>>> -    start = (unsigned long)ebmalloc_mem + PAGE_ALIGN(ebmalloc_allocated);
>>> -    end = (unsigned long)ebmalloc_mem + sizeof(ebmalloc_mem);
>>> -
>>> -    destroy_xen_mappings(start, end);
>>> -    init_xenheap_pages(__pa(start), __pa(end));
>>> -
>>> -    printk(XENLOG_INFO "Freed %lukB unused BSS memory\n", (end - start) >> 10);
>>> -}
>>
>> To be honest, for a temporary workaround I'd have expected to
>> just see the last three lines of the function put inside "#if 0". But
>> anyway,
>
> I can do this if your ack still stands?

Sure it does.

>> Acked-by: Jan Beulich <jbeulich@xxxxxxxx>
>>
>> The one thing I don't understand here, btw, is why it is only 32-bit
>> Dom0 that fails to boot. Do you have any explanation or theory?
>
> 32bit guests are allocated from 0 upwards, while 64bit are allocated
> from top down, to keep as many mfns below the 128GB boundary available
> for 32bit guests.

I have to admit that I can't even see how such a bottom-up allocation
would work, considering that the generic page allocator has no control
for doing so and doesn't itself use any of the is_32*() constructs.
Similarly, when splitting chunks it always returns the highest part and
keeps the lower ones. The only difference I can see (on huge systems) is
that for 32-bit Dom0 we'd allocate from 128Gb down, while for 64-bit it
would be from top-of-memory.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel