Re: [Xen-devel] [RFC PATCH v3 01/24] NUMA: Make number of NUMA nodes configurable
On Tue, Jul 18, 2017 at 06:52:11PM +0100, Julien Grall wrote:
> Hi,
>
> On 18/07/17 16:29, Wei Liu wrote:
> > On Tue, Jul 18, 2017 at 05:11:23PM +0530, vijay.kilari@xxxxxxxxx wrote:
> > > From: Vijaya Kumar K <Vijaya.Kumar@xxxxxxxxxx>
> > >
> > > Introduce NR_NODES config option to specify number
> > > of NUMA nodes supported. By default value is set at
> > > 64 for x86 and 8 for arm. Dropped NODES_SHIFT macro.
> > >
> > > Also move NR_NODE_MEMBLKS from asm-x86/acpi.h to xen/numa.h
> > >
> > > Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@xxxxxxxxxx>
> > > ---
> > >  xen/arch/Kconfig           | 7 +++++++
> > >  xen/include/asm-x86/acpi.h | 1 -
> > >  xen/include/asm-x86/numa.h | 2 --
> > >  xen/include/xen/config.h   | 1 +
> > >  xen/include/xen/numa.h     | 7 ++-----
> > >  5 files changed, 10 insertions(+), 8 deletions(-)
> > >
> > > diff --git a/xen/arch/Kconfig b/xen/arch/Kconfig
> > > index cf0acb7..9c2a4e2 100644
> > > --- a/xen/arch/Kconfig
> > > +++ b/xen/arch/Kconfig
> > > @@ -6,3 +6,10 @@ config NR_CPUS
> > >  	default "128" if ARM
> > >  	---help---
> > >  	  Specifies the maximum number of physical CPUs which Xen will support.
> > > +
> > > +config NR_NODES
> > > +	int "Maximum number of NUMA nodes"
> > > +	default "64" if X86
> > > +	default "8" if ARM
> > > +	---help---
> > > +	  Specifies the maximum number of NUMA nodes which Xen will support.
> >
> > Since this can now be specified by user but the definition of
> > NUMA_NO_NODE is not changed, I think you need to sanitise the value
> > provided somewhere.
> >
> > Maybe introduce a build time check? There are some examples in tree. See
> > cpuid.c:build_assertions.
>
> You can do bound-checking in Kconfig:
>
>    range 1 254

Oh, good to know. Yes this is the way to go.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel