Re: [Xen-devel] Re: NUMA and SMP
hi

Xen does not support NUMA-aware guest Linux, is that right? Also, there are
memory-hotplug.c and migration.c in Linux 2.6.20; does that mean Linux can
support memory hotplug or not? If it can, does Linux have to be NUMA-aware
to support memory hotplug, or can an SMP Linux support memory hotplug as
well? I am confused about this; could you help me?

Thanks in advance

Petersson, Mats wrote:

-----Original Message-----
From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
[mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Ryan Harper
Sent: 23 March 2007 14:43
To: tgh
Cc: Xen Developers; Daniel Stodden
Subject: Re: [Xen-devel] Re: NUMA and SMP

* tgh <tianguanhua@xxxxxxxxxx> [2007-03-23 00:48]:
  hi
  How many nodes in NUMA with AMD64 does Xen support at present?

In xen/include/asm-x86/numa.h:

    #define NODE_SHIFT 6

and in xen/include/xen/numa.h:

    #define MAX_NUMNODES (1 << NODE_SHIFT)

which works out to 64 nodes. I don't know if anyone has tested more than
an 8-node system.

Of course, if we're talking AMD64 systems, and if a NODE is a socket, the
currently available architecture supports 8 NODES, so there's plenty of
space to grow such a system. I think there are plans to grow this, but I
doubt that the limit above will be reached anytime soon.

Even if a node is a core within a CPU, the current limit of 8 sockets will
limit the number of cores in a system to 32 cores when the quad-core
processors become available. So it is still sufficient to support any
current architecture.

--
Mats

--
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
(512) 838-9253   T/L: 678-9253
ryanh@xxxxxxxxxx

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
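[Editor's note] For readers following the arithmetic in the thread above, here is a minimal,
self-contained C sketch (not part of the original messages) that reproduces the two
calculations: the 64-node limit derived from NODE_SHIFT, and the rough 32-core estimate
for an 8-socket, quad-core AMD64 system. The NODE_SHIFT and MAX_NUMNODES definitions
mirror the headers Ryan quotes; SOCKETS_PER_SYSTEM and CORES_PER_SOCKET are illustrative
names introduced here for the example and are not Xen symbols.

    /* Sketch of the node/core arithmetic discussed in the thread. */
    #include <stdio.h>

    #define NODE_SHIFT      6                   /* as quoted from xen/include/asm-x86/numa.h */
    #define MAX_NUMNODES    (1 << NODE_SHIFT)   /* as quoted from xen/include/xen/numa.h */

    #define SOCKETS_PER_SYSTEM  8   /* assumption: 8-socket AMD64 limit cited by Mats */
    #define CORES_PER_SOCKET    4   /* assumption: quad-core parts mentioned by Mats  */

    int main(void)
    {
        /* 1 << 6 == 64 nodes, matching "works out to 64 nodes" */
        printf("MAX_NUMNODES        = %d\n", MAX_NUMNODES);

        /* 8 sockets * 4 cores == 32 cores, matching the 32-core estimate */
        printf("estimated max cores = %d\n", SOCKETS_PER_SYSTEM * CORES_PER_SOCKET);

        return 0;
    }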