Re: [Xen-devel] RFC: automatic NUMA placement
Hi Andre,

thanks for your thoughts.

On 09/27/10 23:46, Andre Przywara wrote:
> Juergen Gross wrote:
>> Hi,
>>
>> I just stumbled upon the automatic pinning of vcpus on domain creation
>> in case of NUMA. This behaviour is questionable IMO, as it breaks
>> correct handling of scheduling weights on NUMA machines. I would
>> suggest switching this feature off by default and making it a
>> configuration option of xend. It would make sense, however, to change
>> cpu pool processor allocation to be NUMA-aware. Switching NUMA off via
>> boot option would remove NUMA-optimized memory allocation, which would
>> be sub-optimal :-)
> Hi Jürgen,
>
> I stumbled over your mail just now, so sorry for the delay.
> First: Don't turn off automatic NUMA placement ;-) In my tests it
> helped a lot to preserve performance on NUMA machines.
> I was just browsing through the ML archive to find your original CPU
> pools description from April, and it seems to fit the requirements of
> NUMA machines quite well. I haven't done any experiments with cpupools
> nor looked at the code yet, but here is a quick idea: what if we marry
> static NUMA placement and cpupools? I'd suggest introducing static
> NUMA pools, one for each node. The CPUs assigned to each pool are
> fixed and can neither be removed nor added (because the NUMA topology
> is fixed). Is that possible? Can we assign one physical CPU to
> multiple pools (to Pool-0 and to NUMA-0)? Or are they exclusive or
> hierarchical like Linux' cpusets?

A cpu is always a member of exactly one pool.

> We could introduce magic names for each NUMA pool, so that people just
> say cpupool="NUMA-2" and get their domain pinned to that pool. Without
> any explicit assignment the system would pick a NUMA node (like it
> does today) and would just use the respective cpupool. I think that is
> very similar to what it does today, only that the pinning is more
> evident to the user (as it uses the cpupool name space).
> Also it would allow users to override the pinning by specifying a
> different cpupool explicitly (like Pool-0).
> Just tell me what you think about this and whether I am wrong in my
> thinking ;-)

With your proposal it isn't possible to start a domU with more vcpus
than cpus in a node without changing cpu pools.

I would suggest doing it the following way:
- use automatic NUMA placement only in Pool-0 (this won't change
  anything for users not using cpu pools), perhaps with an option to
  switch it off
- change the cpu allocation for pools to be NUMA-aware
- optionally add an xl and/or xm command to create one cpu pool per
  NUMA node


Juergen

--
Juergen Gross                 Principal Developer Operating Systems
TSP ES&S SWE OS6                       Telephone: +49 (0) 89 3222 2967
Fujitsu Technology Solutions           e-mail: juergen.gross@xxxxxxxxxxxxxx
Domagkstr. 28                          Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
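[Editor's note] Andre's idea of static per-node pools with magic "NUMA-<n>" names, plus the default behaviour of picking a node automatically when no pool is named, could be sketched as follows. This is a minimal, hypothetical Python illustration; the names `make_numa_pools` and `pick_pool` and the fewest-domains fallback policy are assumptions, not actual xend or xl code:

```python
# Hypothetical sketch: one fixed cpupool per NUMA node with a magic
# "NUMA-<n>" name; a CPU belongs to exactly one pool (as stated above).

class CpuPool:
    def __init__(self, name, cpus):
        self.name = name
        self.cpus = frozenset(cpus)  # fixed set: NUMA topology doesn't change

def make_numa_pools(topology):
    """topology: dict mapping node id -> list of physical CPU ids."""
    pools = {}
    seen = set()
    for node, cpus in sorted(topology.items()):
        # enforce the "one pool per CPU" rule
        assert not seen & set(cpus), "CPU assigned to more than one pool"
        seen |= set(cpus)
        name = "NUMA-%d" % node
        pools[name] = CpuPool(name, cpus)
    return pools

def pick_pool(pools, domains_per_pool, requested=None):
    """requested: magic name from the domain config, e.g. cpupool="NUMA-2".
    Without an explicit assignment, pick the pool with the fewest domains
    already placed (a placeholder for xend's real placement heuristic)."""
    if requested is not None:
        return requested if requested in pools else None
    return min(pools, key=lambda name: domains_per_pool.get(name, 0))
```

A config line like cpupool="NUMA-2" would then resolve through `pick_pool` directly, while omitting it falls back to the automatic choice, much as automatic placement works today.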
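[Editor's note] The second bullet of the proposal above, NUMA-aware cpu allocation for pools, could look roughly like this greedy sketch: take CPUs from as few nodes as possible by draining the node with the most free CPUs first. Again purely illustrative under assumed names; `numa_aware_alloc` is not an actual xl/xm interface:

```python
# Sketch of NUMA-aware CPU allocation when creating a cpupool: keep the
# pool's CPUs on as few nodes as possible (greedy, largest node first).

def numa_aware_alloc(free_by_node, ncpus):
    """free_by_node: dict mapping node id -> list of free physical CPU ids.
    Returns the CPUs chosen for the new pool, or None if too few are free."""
    if sum(len(cpus) for cpus in free_by_node.values()) < ncpus:
        return None
    chosen = []
    # drain the node with the most free CPUs first, so the pool spans
    # as few NUMA nodes as possible
    for node in sorted(free_by_node, key=lambda n: -len(free_by_node[n])):
        take = free_by_node[node][:ncpus - len(chosen)]
        chosen.extend(take)
        if len(chosen) == ncpus:
            break
    return chosen
```

The same helper, called once per node with ncpus equal to that node's CPU count, would also cover the third bullet (one pool per NUMA node).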