
Re: [Xen-devel] Supporting systems with large E820 maps



On 20/03/17 20:03, Alex Thorlton wrote:
> Hey everyone,
> 
> Recently, I've been working with Boris Ostrovsky to get Xen running on
> some of our larger systems, and we've run into a few problems with the
> amount of space that Xen sets aside for the E820 map.
> 
> The first problem that I hit was that E820MAX is far too small, at 128
> entries, for the system that we're testing with.  The EFI memory map
> handed up from the boot loader tops out at 783 entries, which far
> exceeds the amount of space allocated for the memory map in
> arch/x86/boot/mem.S.  I was able to get past this problem by bumping
> E820MAX up to 1024 in arch/x86/boot/mem.S and include/asm-x86/e820.h.
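
A sketch of the kind of change described above (not the exact hunks; the
constant lives in include/asm-x86/e820.h, and arch/x86/boot/mem.S carries its
own copy sizing the trampoline buffer, which has to be bumped in lockstep):

--- xen.orig/include/asm-x86/e820.h
+++ xen/include/asm-x86/e820.h
-#define E820MAX 128
+#define E820MAX 1024
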
> 
> The second problem that I encountered was that Xen uses a signed char to
> store the number of entries in the memory map in a few places, which is
> too small to hold the number of entries after bumping E820MAX up to
> 1024.  I made the following changes to get past this:

The problem with setting E820MAX to a higher value in mem.S without
further measures is that you are growing the trampoline size. This is
problematic for memory allocation in the multiboot path.
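
For a rough sense of the numbers involved, a minimal sketch (assuming the
classic 20-byte BIOS E820 entry of u64 base, u64 length, u32 type):

#include <stdint.h>
#include <stdio.h>

/* The classic BIOS E820 entry: 8 + 8 + 4 = 20 bytes when packed. */
struct e820entry {
    uint64_t addr;
    uint64_t size;
    uint32_t type;
} __attribute__((packed));

int main(void)
{
    /*  128 entries keep the map at 2560 bytes inside the trampoline;
     * 1024 entries push it to 20480 bytes, i.e. five full 4 KiB pages. */
    printf("E820MAX=128:  %zu bytes\n",  128 * sizeof(struct e820entry));
    printf("E820MAX=1024: %zu bytes\n", 1024 * sizeof(struct e820entry));
    return 0;
}
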

I have some patches sitting here waiting for Daniel's multiboot series
to go in. My patches don't use the mem.S e820 array for the EFI memory
map, so the BIOS memory map buffer can remain small while the EFI buffer
can be made rather large. This avoids growing the trampoline (in fact
I've managed to reduce it to a single page).
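
Purely to illustrate the idea (this is not the pending series; all names and
sizes here are made up): keep a small fixed buffer for the BIOS map inside
the trampoline and give the EFI map its own buffer in ordinary hypervisor
memory, roughly like

#include <stdint.h>

struct e820entry {
    uint64_t addr;
    uint64_t size;
    uint32_t type;
} __attribute__((packed));

#define E820_BIOS_MAX  128    /* small buffer that stays in the trampoline  */
#define E820_EFI_MAX  1024    /* hypothetical EFI buffer in normal memory,  */
                              /* so growing it doesn't grow the trampoline  */

static struct e820entry bios_e820map[E820_BIOS_MAX];  /* trampoline-resident */
static struct e820entry efi_e820map[E820_EFI_MAX];    /* ordinary init data  */
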

I haven't posted my series so far in order not to block Daniel's series
again. So what do people think: should I hold my patches back a while
longer, or should I send them rather soon?


Juergen

> 
> 8<---
> ---
>  arch/x86/e820.c |    6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> --- xen.orig/arch/x86/e820.c
> +++ xen/arch/x86/e820.c
> @@ -134,7 +134,7 @@ static struct change_member *change_poin
>  static struct e820entry *overlap_list[E820MAX] __initdata;
>  static struct e820entry new_bios[E820MAX] __initdata;
> 
> -static int __init sanitize_e820_map(struct e820entry * biosmap, char * pnr_map)
> +static int __init sanitize_e820_map(struct e820entry * biosmap, unsigned int * pnr_map)
>  {
>      struct change_member *change_tmp;
>      unsigned long current_type, last_type;
> @@ -509,13 +509,13 @@ static void __init reserve_dmi_region(vo
>      }
>  }
> 
> -static void __init machine_specific_memory_setup(struct e820entry *raw, char *raw_nr)
> +static void __init machine_specific_memory_setup(struct e820entry *raw, unsigned int *raw_nr)
>  {
>      unsigned long mpt_limit, ro_mpt_limit;
>      uint64_t top_of_ram, size;
>      int i;
> 
> -    char nr = (char)*raw_nr;
> +    unsigned int nr = (unsigned int)*raw_nr;
>      sanitize_e820_map(raw, &nr);
>      *raw_nr = nr;
>      (void)copy_e820_map(raw, nr);
> --->8
> 
> I didn't need to go all the way up to unsigned int here, but I did this
> as a quick and dirty test to see whether it got things working.
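
For what it's worth, a tiny standalone demonstration of why a plain char
can't carry the entry count once the map grows past 127 entries (783 being
the EFI map size mentioned above):

#include <stdio.h>

int main(void)
{
    unsigned int raw_nr = 783;    /* entries in the EFI-provided map        */
    char nr = (char)raw_nr;       /* what the old code path effectively did */

    /* 783 == 0x30f; only the low byte survives, so this prints 15. */
    printf("%u entries seen through a char: %d\n", raw_nr, nr);
    return 0;
}
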
> 
> These small changes get our large machine to boot up and recognize all
> 32TB of available RAM.  I know that these changes are probably not what
> we'll want to go with in the end, but I wanted to get them sent upstream
> to get a dialogue started.
> 
> So, what do others think here?  How do we want to handle a large E820
> map?  Boris mentioned to me that we might want to try a dynamic
> allocation scheme, where we reserve more space for the memory map when
> we detect that the E820 map is large.
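
Just to make the dynamic-allocation idea concrete, a sketch with hypothetical
names (boot_alloc() stands in for whatever early allocator would really be
used; this is not a proposal for the actual Xen interfaces):

#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

struct e820entry {
    uint64_t addr;
    uint64_t size;
    uint32_t type;
} __attribute__((packed));

#define E820_DEFAULT_MAX 128

/* Start with the usual small static buffer and switch to a larger,
 * dynamically reserved one only when the firmware hands us more entries
 * than fit. */
static struct e820entry e820_static[E820_DEFAULT_MAX];
static struct e820entry *e820_map = e820_static;
static unsigned int e820_max = E820_DEFAULT_MAX;

static void *boot_alloc(size_t bytes)    /* placeholder early allocator */
{
    return malloc(bytes);
}

static int e820_reserve(unsigned int nr_entries)
{
    struct e820entry *big;

    if (nr_entries <= e820_max)
        return 0;                        /* static buffer is big enough  */

    big = boot_alloc(nr_entries * sizeof(*big));
    if (!big)
        return -1;                       /* keep using the static buffer */

    e820_map = big;
    e820_max = nr_entries;
    return 0;
}
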
> 
> Any comments/suggestions are greatly appreciated!
> 
> - Alex
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

