Re: [Xen-devel] [RFC PATCH v3 01/24] NUMA: Make number of NUMA nodes configurable
On Wed, Jul 19, 2017 at 9:25 PM, Julien Grall <julien.grall@xxxxxxx> wrote:
> Hi Vijay,
>
>
> On 19/07/2017 08:00, Vijay Kilari wrote:
>>
>> On Tue, Jul 18, 2017 at 11:25 PM, Julien Grall <julien.grall@xxxxxxx>
>> wrote:
>>>
>>> Hi,
>>>
>>>
>>> On 18/07/17 12:41, vijay.kilari@xxxxxxxxx wrote:
>>>>
>>>>
>>>> From: Vijaya Kumar K <Vijaya.Kumar@xxxxxxxxxx>
>>>>
>>>> Introduce NR_NODES config option to specify the number
>>>> of NUMA nodes supported. By default the value is set to
>>>> 64 for x86 and 8 for ARM. Dropped the NODES_SHIFT macro.
>>>>
>>>> Also move NR_NODE_MEMBLKS from asm-x86/acpi.h to xen/numa.h
>>>>
>>>> Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@xxxxxxxxxx>
>>>> ---
>>>>  xen/arch/Kconfig           | 7 +++++++
>>>>  xen/include/asm-x86/acpi.h | 1 -
>>>>  xen/include/asm-x86/numa.h | 2 --
>>>>  xen/include/xen/config.h   | 1 +
>>>>  xen/include/xen/numa.h     | 7 ++-----
>>>>  5 files changed, 10 insertions(+), 8 deletions(-)
>>>>
>>>> diff --git a/xen/arch/Kconfig b/xen/arch/Kconfig
>>>> index cf0acb7..9c2a4e2 100644
>>>> --- a/xen/arch/Kconfig
>>>> +++ b/xen/arch/Kconfig
>>>> @@ -6,3 +6,10 @@ config NR_CPUS
>>>>          default "128" if ARM
>>>>          ---help---
>>>>            Specifies the maximum number of physical CPUs which Xen will
>>>> support.
>>>> +
>>>> +config NR_NODES
>>>> +       int "Maximum number of NUMA nodes"
>>>> +       default "64" if X86
>>>> +       default "8" if ARM
>>>
>>>
>>>
>>> 3rd time I am asking it... Why the difference between x86 and ARM?
>>
>>
>> AFAIK, there is no ARM platform for now with more than 8 NUMA nodes.
>> ThunderX has only 2 nodes.
>> So I kept a low value for ARM to avoid unnecessary memory allocation.
>>
>> Do you want me to keep it the same as x86?
>
>
> Well, you say it is for saving memory allocation, but you don't give any
> number on how much you can save by reducing the default from 64 to 8...
>
> Looking at it, MAX_NUMNODES is used for some static allocations and also
> for the bitmap nodemask_t.
>
> Because our bitmap is based on unsigned long, you would use the same
> quantity of memory for AArch64; for AArch32 the quantity would be divided
> by two. Still, nodemask_t does not seem to be widely used.
>
> In the case of the static allocations, I spot ~40 bytes per NUMA node. So
> 8 nodes will use ~320 bytes and 64 nodes ~2560 bytes.
>
> NUMA is likely going to be used on servers; don't tell me you are 2k short
> of memory? If it is an issue, it is better to think about how to limit the
> number of static variables rather than putting a low limit here.
>
> For the embedded use case, they will likely want to set the default to 1,
> but I would not worry about them as they are likely going to tweak the
> Kconfig.

OK. I will set it to 64, same as x86.
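As a rough, standalone illustration of the sizes discussed above (this is not
Xen source; BITS_TO_LONGS and NODEMASK_BYTES are assumptions modelled on the
usual word-aligned bitmap layout), an unsigned-long-based node mask only grows
in whole-word steps, so 8 and 64 nodes cost the same on a 64-bit build:

/*
 * Standalone sketch (not Xen code) of why shrinking MAX_NUMNODES from 64
 * to 8 saves little on the bitmap side: an unsigned-long-based bitmap
 * rounds up to whole words, so both values need one 8-byte word on a
 * 64-bit build, while a 32-bit build drops from 8 bytes to 4.
 */
#include <stdio.h>

#define BITS_PER_LONG     (8 * sizeof(unsigned long))
#define BITS_TO_LONGS(n)  (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

/* Bytes used by a node mask covering nr_nodes node IDs. */
#define NODEMASK_BYTES(nr_nodes) \
    (BITS_TO_LONGS(nr_nodes) * sizeof(unsigned long))

int main(void)
{
    printf("node mask,  8 nodes: %zu bytes\n", NODEMASK_BYTES(8));
    printf("node mask, 64 nodes: %zu bytes\n", NODEMASK_BYTES(64));
    /* Julien's estimate for the static per-node tables: ~40 bytes/node. */
    printf("static tables,  8 nodes: ~%d bytes\n",  8 * 40);
    printf("static tables, 64 nodes: ~%d bytes\n", 64 * 40);
    return 0;
}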
>
>>
>>>
>>> Also, you likely want to set it to 1 if NUMA is not enabled.
>>
>>
>> I don't see any dependency of NR_NODES on the NUMA config.
>> So it is always set to the default value, isn't it?
>
>
> Well, what is the point of allowing more than 1 node when NUMA is not
> supported?

In that case, I have to make NR_NODES depend on the NUMA config and define
this value to 1 if the NUMA config is not defined, as below.

diff --git a/xen/arch/Kconfig b/xen/arch/Kconfig
index b73d459..a5d40f5 100644
--- a/xen/arch/Kconfig
+++ b/xen/arch/Kconfig
@@ -11,5 +11,6 @@ config NR_NODES
        int "Maximum number of NUMA nodes"
+       range 1 254
        default "64"
+       depends on NUMA
        ---help---
          Specifies the maximum number of NUMA nodes which Xen will support.

diff --git a/xen/include/asm-x86/numa.h b/xen/include/asm-x86/numa.h
index 604fd6d..eede1c4 100644
--- a/xen/include/asm-x86/numa.h
+++ b/xen/include/asm-x86/numa.h
@@ -10,6 +10,10 @@ extern int srat_rev;
 extern nodeid_t cpu_to_node[NR_CPUS];
 extern cpumask_t node_to_cpumask[];

+#ifndef CONFIG_NUMA
+#define NR_NODES 1
+#endif
+
 #define MAX_NUMNODES NR_NODES
 #define NR_NODE_MEMBLKS (MAX_NUMNODES * 2)

> Not to mention that it is quite confusing for a user to be allowed to set
> the maximum number of nodes if the architecture does not support NUMA...
>
> For instance, this is the case today on ARM because, without this series,
> we don't support NUMA.
>
>
>>
>>>
>>>
>>>> +       ---help---
>>>> +         Specifies the maximum number of NUMA nodes which Xen will
>>>> support.
>>>> diff --git a/xen/include/asm-x86/acpi.h b/xen/include/asm-x86/acpi.h
>>>> index 27ecc65..15be784 100644
>>>> --- a/xen/include/asm-x86/acpi.h
>>>> +++ b/xen/include/asm-x86/acpi.h
>>>> @@ -105,7 +105,6 @@ extern void acpi_reserve_bootmem(void);
>>>>
>>>>  extern s8 acpi_numa;
>>>>  extern int acpi_scan_nodes(u64 start, u64 end);
>>>> -#define NR_NODE_MEMBLKS (MAX_NUMNODES*2)
>>>>
>>>>  #ifdef CONFIG_ACPI_SLEEP
>>>>
>>>> diff --git a/xen/include/asm-x86/numa.h b/xen/include/asm-x86/numa.h
>>>> index bada2c0..3cf26c2 100644
>>>> --- a/xen/include/asm-x86/numa.h
>>>> +++ b/xen/include/asm-x86/numa.h
>>>> @@ -3,8 +3,6 @@
>>>>
>>>>  #include <xen/cpumask.h>
>>>>
>>>> -#define NODES_SHIFT 6
>>>> -
>>>>  typedef u8 nodeid_t;
>>>>
>>>>  extern int srat_rev;
>>>> diff --git a/xen/include/xen/config.h b/xen/include/xen/config.h
>>>> index a1d0f97..0f1a029 100644
>>>> --- a/xen/include/xen/config.h
>>>> +++ b/xen/include/xen/config.h
>>>> @@ -81,6 +81,7 @@
>>>>
>>>>  /* allow existing code to work with Kconfig variable */
>>>>  #define NR_CPUS CONFIG_NR_CPUS
>>>> +#define NR_NODES CONFIG_NR_NODES
>>>>
>>>>  #ifndef CONFIG_DEBUG
>>>>  #define NDEBUG
>>>> diff --git a/xen/include/xen/numa.h b/xen/include/xen/numa.h
>>>> index 7aef1a8..6bba29e 100644
>>>> --- a/xen/include/xen/numa.h
>>>> +++ b/xen/include/xen/numa.h
>>>> @@ -3,14 +3,11 @@
>>>>
>>>>  #include <asm/numa.h>
>>>>
>>>> -#ifndef NODES_SHIFT
>>>> -#define NODES_SHIFT 0
>>>> -#endif
>>>> -
>>>>  #define NUMA_NO_NODE 0xFF
>>>>  #define NUMA_NO_DISTANCE 0xFF
>>>>
>>>> -#define MAX_NUMNODES (1 << NODES_SHIFT)
>>>> +#define MAX_NUMNODES NR_NODES
>>>> +#define NR_NODE_MEMBLKS (MAX_NUMNODES * 2)
>
>
> Also, I don't understand why you move this define from asm-x86/numa.h to
> xen/numa.h. At least, this does not seem related to this patch...

OK. I will drop this change from this patch.

>
> Cheers,
>
>
> --
> Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
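For completeness, a minimal standalone sketch (assumed values, not the actual
Xen headers) of the fallback proposed in the thread above: without CONFIG_NUMA,
NR_NODES collapses to 1 and the derived constants follow from it.

/*
 * Minimal standalone sketch (assumed values, not the actual Xen headers):
 * without CONFIG_NUMA, NR_NODES falls back to 1, and MAX_NUMNODES and
 * NR_NODE_MEMBLKS are derived from it.
 */
#include <stdio.h>

/* #define CONFIG_NUMA */         /* toggle to compare the two cases */

#ifdef CONFIG_NUMA
#define NR_NODES        64        /* would come from CONFIG_NR_NODES */
#else
#define NR_NODES        1
#endif

#define MAX_NUMNODES    NR_NODES
#define NR_NODE_MEMBLKS (MAX_NUMNODES * 2)

int main(void)
{
    printf("NR_NODES=%d MAX_NUMNODES=%d NR_NODE_MEMBLKS=%d\n",
           NR_NODES, MAX_NUMNODES, NR_NODE_MEMBLKS);
    return 0;
}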