Re: [Xen-devel] [PATCH v11 06/13] efi: create new early memory allocator
>>> On 05.12.16 at 23:25, <daniel.kiper@xxxxxxxxxx> wrote:
> There is a problem with place_string(), which is used as an early memory
> allocator. It hands out memory chunks starting from the start symbol and
> going down. Sadly, this does not work when Xen is loaded using the
> multiboot2 protocol, because then start lives at the 1 MiB address and we
> must not allocate memory below it. So, I tried to use the mem_lower
> address calculated by GRUB2. However, this solution works only on some
> machines. There are machines in the wild (e.g. Dell PowerEdge R820)
> which use the first ~640 KiB for boot services code or data... :-(((
> Hence, we need a new memory allocator for the Xen EFI boot code which is
> quite simple and generic and could be used by place_string() and
> efi_arch_allocate_mmap_buffer(). I thought about the following solutions:
>
> 1) We could use the native EFI allocation functions (e.g. AllocatePool()
> or AllocatePages()) to get a memory chunk. However, later (somewhere in
> __start_xen()) we would have to copy its contents to a safe place, or
> reserve it in the e820 memory map and map it into the Xen virtual address
> space. This means that the code referring to the Xen command line, loaded
> modules and EFI memory map, mostly in __start_xen(), would be further
> complicated and diverge from the legacy BIOS case. Additionally, both of
> the former things have to be placed below 4 GiB, because their addresses
> are stored in the multiboot_info_t structure, whose relevant members are
> 32-bit.
>
> 2) We could allocate a memory area statically somewhere in the Xen image
> which could be used as a pool for early dynamic allocations. This looks
> quite simple. Additionally, it would not depend on EFI at all and could
> be used on legacy BIOS platforms if we need it. However, we must choose
> the size of this pool carefully. We do not want to increase the Xen
> binary size too much or waste too much memory, but we must fit at least
> the memory map on x86 EFI platforms. As I saw on a small machine, e.g.
> an IBM System x3550 M2 with 8 GiB RAM, the memory map may contain more
> than 200 entries. Every entry on the x86-64 platform is 40 bytes in
> size, so we need roughly 8 KiB (more than 200 entries * 40 bytes) for
> the EFI memory map alone. Additionally, if we use this memory pool for
> the Xen and module command line storage (it would be used when xen.efi
> is executed as an EFI application), then we should add, I think, about
> 1 KiB. In that case, to be on the safe side, we should assume at least
> a 64 KiB pool for early memory allocations, which is about 4 times our
> earlier calculation. However, during the discussion on xen-devel Jan
> Beulich suggested that, just in case, we should use a 1 MiB memory
> pool, like the original place_string() implementation does. So, let's
> use 1 MiB as proposed. If we think that we should not waste unallocated
> memory in the pool on the running system, then we can mark this region
> as __initdata and move all required data to dynamically allocated
> places somewhere in __start_xen().
>
> 2a) We could put the memory pool into the .bss.page_aligned section and
> allocate memory chunks starting from the lowest address. After the init
> phase we can free the unused portion of the pool, as is done for the
> .init.text and .init.data sections. This way we do not need to allocate
> any space in the image file, and freeing the unused area of the pool is
> very simple.
>
> Solution #2a is implemented now because it is quite simple and requires
> a limited number of changes, especially in __start_xen().
>
> The new allocator is quite generic and can be used on ARM platforms too.
> Though it is not enabled on ARM yet due to the lack of some
> prerequisites; the list of them is placed before the ebmalloc code.

This last paragraph is now slightly stale, but anyway ...

> Signed-off-by: Daniel Kiper <daniel.kiper@xxxxxxxxxx>

Acked-by: Jan Beulich <jbeulich@xxxxxxxx>
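For readers who want to see what the #2a scheme boils down to, here is a
minimal sketch: a page-aligned 1 MiB pool in .bss consumed by a bump
allocator. The names (ebmalloc_mem, EBMALLOC_SIZE) mirror the patch under
review, while ROUNDUP(), PAGE_ALIGN(), MB(), blexit(),
destroy_xen_mappings() and init_xenheap_pages() are existing Xen helpers;
treat this as an illustration of the approach, not the exact committed
code.

```c
/* 1 MiB pool in .bss.page_aligned, handed out by a simple bump allocator. */
#define EBMALLOC_SIZE MB(1)

static char __section(".bss.page_aligned") __aligned(PAGE_SIZE)
    ebmalloc_mem[EBMALLOC_SIZE];
static unsigned long ebmalloc_allocated;

/* Hand out pointer-aligned chunks, starting from the lowest address. */
static void __init *ebmalloc(size_t size)
{
    void *ptr = ebmalloc_mem + ebmalloc_allocated;

    ebmalloc_allocated += ROUNDUP(size, sizeof(void *));

    if ( ebmalloc_allocated > sizeof(ebmalloc_mem) )
        blexit(L"Out of static memory\r\n");

    return ptr;
}

/*
 * After the init phase, give the untouched (page-granular) tail of the
 * pool back to the heap, just as the .init.* sections are freed.
 */
static void __init free_ebmalloc_unused_mem(void)
{
    unsigned long start, end;

    start = (unsigned long)ebmalloc_mem + PAGE_ALIGN(ebmalloc_allocated);
    end = (unsigned long)ebmalloc_mem + sizeof(ebmalloc_mem);

    destroy_xen_mappings(start, end);
    init_xenheap_pages(__pa(start), __pa(end));
}
```

Because the pool sits in .bss, it costs nothing in the on-disk image, and
the only size decision left is the EBMALLOC_SIZE constant discussed above.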
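With such an allocator in place, the consumers named in the commit message
become trivial. As a sketch (again mirroring the shape of the patch rather
than quoting it), the x86 hook that reserves room for the EFI memory map
could reduce to a single call, and place_string() can likewise take its
chunks from ebmalloc() instead of carving them out below the start symbol:

```c
/* Reserve room for the EFI memory map out of the static pool. */
static void *__init efi_arch_allocate_mmap_buffer(UINTN map_size)
{
    return ebmalloc(map_size);
}
```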