
Re: [Xen-devel] RFC: Still TODO for 4.2? xl domain numa memory allocation vs xm/xend


  • To: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
  • From: Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
  • Date: Mon, 23 Jan 2012 10:59:40 +0100
  • Cc: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>, "Keir \(Xen.org\)" <keir@xxxxxxx>, Stefano Stabellini <Stefano.Stabellini@xxxxxxxxxxxxx>, "Tim \(Xen.org\)" <tim@xxxxxxx>, Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>, Jan Beulich <JBeulich@xxxxxxxx>
  • Delivery-date: Mon, 23 Jan 2012 10:00:16 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

On 01/20/2012 10:01 AM, Ian Campbell wrote:
> On Fri, 2012-01-20 at 08:15 +0000, Pasi Kärkkäinen wrote:
> > On Fri, Jan 20, 2012 at 07:59:28AM +0000, Ian Campbell wrote:
> > > On Thu, 2012-01-19 at 21:14 +0000, Pasi Kärkkäinen wrote:
> > > > On Wed, Jan 04, 2012 at 04:29:22PM +0000, Ian Campbell wrote:
> > > > > Has anybody got anything else? I'm sure I've missed stuff. Are there any
> > > > > must haves e.g. in the paging/sharing spaces?
> > > >
> > > > Something that I just remembered:
> > > > http://wiki.xen.org/xenwiki/Xen4.1
> > > >
> > > > "NUMA-aware memory allocation for VMs. xl in Xen 4.1 will allocate
> > > > equal amount of memory from every NUMA node for the VM. xm/xend
> > > > allocates all the memory from the same NUMA node."
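
One way to see how a guest's memory actually ended up spread across the
nodes is the hypervisor's NUMA debug output. A minimal sketch, assuming xl
is available in dom0 and that 'u' is the NUMA-info debug key on the Xen
version in use (the output format varies between releases):

    # Create the guest, then ask the hypervisor to dump NUMA placement info.
    xl create /etc/xen/guest.cfg
    xl debug-keys u        # 'u' is assumed to be the NUMA-info debug key
    xl dmesg | tail -n 40  # per-domain, per-node page counts land in this log

If xl and xm/xend really differ as the wiki text above claims, the per-node
counts for an otherwise identical guest should make that visible.
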
> > > I'm not that familiar with the NUMA support, but my understanding was
> > > that memory was allocated by libxc/the hypervisor and not by the
> > > toolstack, and that the default was to allocate from the same NUMA nodes
> > > as the domain's processors were pinned to, i.e. if you pin the processors
> > > appropriately the Right Thing just happens. Do you believe this is not
> > > the case and/or not working right with xl?
> > >
> > > CCing Juergen since he added the cpupool support, and in particular the
> > > cpupool-numa-split option, so I'm hoping he knows something about NUMA
> > > more generally.
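
To illustrate the pinning Ian describes above, a minimal sketch that drives
it purely from the domain configuration, assuming CPUs 4-7 all sit on one
NUMA node of the host (the guest name, sizes and paths are made up for
illustration):

    # Pin all VCPUs to the physical CPUs of a single NUMA node, so the
    # allocator should satisfy the guest's memory from that node.
    cat > /tmp/numa-test.cfg <<'EOF'
    name   = "numa-test"
    memory = 2048
    vcpus  = 4
    cpus   = "4-7"    # assumes CPUs 4-7 all belong to one node on this host
    EOF
    xl create /tmp/numa-test.cfg
    xl vcpu-list numa-test    # confirm the VCPUs really are pinned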

> > Is this something that should be looked at?
>
> Probably, but is anyone doing so?
>
> > Should the NUMA memory allocation be an option so it can be controlled
> > per domain?
>
> What options did xm provide in this regard?
>
> Does xl's cpupool (with the cpupool-numa-split option) serve the same
> purpose?
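
For comparison, the cpupool route looks roughly like this. A minimal
sketch, assuming cpupool-numa-split names the per-node pools "Pool-node0",
"Pool-node1", and so on, and reusing the illustrative "numa-test" guest
from the sketch above:

    # Carve the host into one cpupool per NUMA node, then confine the guest
    # to the pool (and therefore the node) of our choice.
    xl cpupool-numa-split
    xl cpupool-list                            # lists the per-node pools
    xl cpupool-migrate numa-test Pool-node0    # assumed name of node 0's pool
    # Alternatively, start the guest in that pool directly by adding
    #   pool = "Pool-node0"
    # to its configuration file.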

> > The default libxl behaviour might cause unexpected performance issues
> > on multi-socket systems?
>
> I'm not convinced libxl is behaving any differently to xend, but perhaps
> someone can show me the error of my ways.


> > See this thread:
> > http://old-list-archives.xen.org/archives/html/xen-devel/2011-07/msg01423.html
> >
> > where Stefano wrote:
> > "I think we forgot about this feature but it is important and hopefully
> > somebody will write a patch for it before 4.2 is out."
> >
> > Is anyone looking into this?
>
> Does cpupool-numa-split solve this same problem?
>
> I think I forgot to actually CC Juergen when I said that; doing so now.

I've just sent a patch which should do the job.
I have no NUMA machine to test it on, though; I've only verified that the
patch doesn't break booting dom0...


Juergen

--
Juergen Gross                 Principal Developer Operating Systems
PDG ES&S SWE OS6                       Telephone: +49 (0) 89 3222 2967
Fujitsu Technology Solutions              e-mail: juergen.gross@xxxxxxxxxxxxxx
Domagkstr. 28                           Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

