
Re: [Xen-devel] Problem with nr_nodes on large memory NUMA machine


  • To: <eak@xxxxxxxxxx>, <Xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: Keir Fraser <Keir.Fraser@xxxxxxxxxxxx>
  • Date: Fri, 19 Oct 2007 16:14:18 +0100
  • Delivery-date: Fri, 19 Oct 2007 08:15:16 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcgSYrfP9iABt35VEdyhoQAX8io7RQ==
  • Thread-topic: [Xen-devel] Problem with nr_nodes on large memory NUMA machine

32-bit guests on a 64-bit hypervisor can only address the bottom 166 GB of the
memory map, due to constraints on how much of the machine-to-phys array they
can reasonably map into their limited address space.

 -- Keir
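
[For reference, the 166 GB figure can be reproduced from the truncation
message in the boot log quoted below. The sketch is illustrative only: the
compat M2P window size is inferred from the reported "174063616kB" clip
(clip = window << 10, so window = 174063616 bytes), not read from the Xen
headers.]

/* Back-of-envelope check of the 32-on-64 address limit.
 * Illustrative only: the compat M2P window size below is inferred from the
 * "Truncating memory map to 174063616kB" boot message, not taken from the
 * Xen headers. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Virtual-address window a 32-on-64 guest has for the compat
     * machine-to-phys table, i.e.
     * MACH2PHYS_COMPAT_VIRT_END - __HYPERVISOR_COMPAT_VIRT_START. */
    uint64_t m2p_window = 174063616ULL;            /* ~166 MB */

    /* Each 4-byte M2P entry covers one 4 KB machine frame, so the window
     * describes window * (4096 / 4) = window << 10 bytes of machine memory. */
    uint64_t max_machine = m2p_window << 10;

    printf("32-on-64 limit: %llu GB\n",
           (unsigned long long)(max_machine >> 30));   /* prints 166 */
    return 0;
}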

On 19/10/07 16:02, "beth kon" <eak@xxxxxxxxxx> wrote:

> We've run into an issue with an 8 node x3950 where xm info is showing
> only 6 nodes. I've traced the problem to the clip_to_limit function in
> arch/x86/e820.c.
> 
> #ifdef __x86_64__
>    clip_to_limit((uint64_t)(MACH2PHYS_COMPAT_VIRT_END -
>                             __HYPERVISOR_COMPAT_VIRT_START) << 10,
>                  "Only the first %u GB of the physical memory map "
>                  "can be accessed by 32-on-64 guests.");
> #endif
> 
> Boot messages....
> (XEN) WARNING: Only the first 166 GB of the physical memory map can be
> accessed by 32-on-64 guests.
> (XEN) Truncating memory map to 174063616kB
> 
> After the memory is clipped, acpi_scan_nodes runs cutoff_node, which
> limits the memory associated with each node according to the cutoff
> values. Then, acpi_scan_nodes calls unparse_node to "remove" nodes that
> don't have the minimum amount of memory, due to the clipping of the
> memory range.
> 
> Can someone explain what this is all about and why it might be necessary?
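
[To illustrate the chain described above, here is a minimal sketch (not the
Xen code itself) of how clamping node ranges to the 166 GB limit ends up
dropping the top nodes; the 32 GB-per-node layout and the 1 GB minimum node
size are assumptions for illustration.]

/* Sketch of why nodes disappear: clamping each node's memory range to the
 * 32-on-64 limit leaves the top nodes with little or no memory, and nodes
 * below a minimum size are then dropped, so "xm info" reports fewer nodes.
 * The per-node layout and NODE_MIN_SIZE here are assumptions. */
#include <stdint.h>
#include <stdio.h>

#define GB(x)          ((uint64_t)(x) << 30)
#define NODE_MIN_SIZE  GB(1)               /* illustrative threshold */

struct node { uint64_t start, end; };

int main(void)
{
    /* Hypothetical 8-node box, 32 GB per node (256 GB total). */
    struct node nodes[8];
    for (int i = 0; i < 8; i++) {
        nodes[i].start = GB(32) * i;
        nodes[i].end   = GB(32) * (i + 1);
    }

    uint64_t limit = GB(166);              /* the 32-on-64 clip */
    int surviving = 0;

    for (int i = 0; i < 8; i++) {
        /* cutoff_node-style clamp to the truncated memory map */
        if (nodes[i].start > limit) nodes[i].start = limit;
        if (nodes[i].end   > limit) nodes[i].end   = limit;

        /* unparse_node-style drop of nodes left with too little memory */
        if (nodes[i].end - nodes[i].start >= NODE_MIN_SIZE)
            surviving++;
    }

    printf("nodes surviving the clip: %d of 8\n", surviving);  /* prints 6 */
    return 0;
}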


