
Re: [Xen-devel] [PATCH] asm, x86: Set max CPUs to 512 instead of 256.



On Thu, Jan 22, 2015 at 05:04:12PM +0000, Andrew Cooper wrote:
> On 22/01/15 16:52, Konrad Rzeszutek Wilk wrote:
> > Contemporary servers now sport 480 CPUs or more. We should raise
> > the default CPU limit to take advantage of this without requiring
> > distros to use the 'max_phys_cpus' override.
> >
> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
> 
> /me would really like to try something that big out, but I have not had
> the opportunity yet to hit the 256 limit.
> 
> I wonder which variables grow as a result of this change.  We might want
> to see about making more things dynamically allocated after reading the
> ACPI tables, if we can.

I am not sure that is possible, as there are a lot of DEFINE_PER_CPU
variables which cannot grow.

The structures that grow are:


 struct cpumask 
 struct kernel_param 
 struct rangeset 
 struct csched2_runqueue_data 
 struct csched2_private 
 struct rt_vcpu 
 struct stopmachine_data 
 struct free_ptr 
 struct rcu_data 
 struct physid_mask 
 struct acpi_table_header 
 struct calibration_rendezvous 
 struct bug_frame 

(for fun see attached diff of pahole between 256 and 512 CPUs)

> 
> ~Andrew
> 
> > ---
> >  xen/include/asm-x86/config.h | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/xen/include/asm-x86/config.h b/xen/include/asm-x86/config.h
> > index 2fbd68d..d450696 100644
> > --- a/xen/include/asm-x86/config.h
> > +++ b/xen/include/asm-x86/config.h
> > @@ -64,7 +64,7 @@
> >  #ifdef MAX_PHYS_CPUS
> >  #define NR_CPUS MAX_PHYS_CPUS
> >  #else
> > -#define NR_CPUS 256
> > +#define NR_CPUS 512
> >  #endif
> >  
> >  /* Linkage for x86 */
> 

Attachment: 256vs512
Description: Text document

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

