
Re: [Xen-devel] [PATCH v2 7/7] xl: docs for xl config vnuma options



On 11/14/2013 03:27 AM, Elena Ufimtseva wrote:
Documentation added to xl command regarding usage of vnuma
configuration options.

Signed-off-by: Elena Ufimtseva <ufimtseva@xxxxxxxxx>
---
  docs/man/xl.cfg.pod.5 |   55 +++++++++++++++++++++++++++++++++++++++++++++++++
  1 file changed, 55 insertions(+)

diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
index d2d8921..db25521 100644
--- a/docs/man/xl.cfg.pod.5
+++ b/docs/man/xl.cfg.pod.5
@@ -216,6 +216,61 @@ if the values of B<memory=> and B<maxmem=> differ.
  A "pre-ballooned" HVM guest needs a balloon driver, without a balloon driver
  it will crash.

+=item B<vnuma_nodes=N>
+
+Number of vNUMA nodes the guest will be initialized with on boot. In the
+general case, this is the only required option. If only this option is given,
+all other vNUMA topology parameters take their default values.
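
As a concrete illustration (my own, not part of the patch): a minimal config
relying only on this option might look like the sketch below, with every
other vNUMA parameter left at its default. The memory and vcpu values are
made up for the example.

    memory = 4096
    vcpus = 4
    vnuma_nodes = 2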
+
+=item B<vnuma_mem=[vmem1, vmem2, ...]>
+
+The vnode memory sizes, defined in MBytes. If the sum of all vnode memories
+does not match the domain memory, or if not all nodes are defined here, the
+total memory will be split equally between the vnodes.

So the general approach here -- "invalid or empty configurations go to default" -- isn't quite right. Invalid configurations should throw an error that stops guest creation. Only unspecified configurations should go to the default; and the text should say something like, "If unspecified, the default will be the total memory split equally between vnodes."

Same with the other options, with one exception...

+
+Example: vnuma_mem=[1024, 1024, 2048, 2048]
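
To tie this example back to the rule above (my own note, not from the patch):
for the sum of the vnode sizes to match the domain memory exactly, the config
would also need something like

    memory = 6144

i.e. 1024 + 1024 + 2048 + 2048.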
+
+=item B<vdistance=[d1, d2, ..., dn]>
+
+Defines the distance table for the vNUMA nodes. Distances on NUMA machines
+are usually represented by a two-dimensional array, and all distances may be
+specified here in one line, by rows. As a shorthand, the distance can be
+given as two numbers [d1, d2], where d1 is the same-node distance and d2 is
+the value used for all other distances.
+If vdistance is specified with errors, the default distance is used, e.g.
+[10, 20].
+
+Example:
+vnuma_nodes = 3
+vdistance = [10, 20]
+will expand to this distance table (which is also the default):
+[10, 20, 20]
+[20, 10, 20]
+[20, 20, 10]
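
To make the "specified by rows" form concrete (again my own illustration, not
from the patch), the same 3-node table written out in full would presumably
be

    vdistance = [10, 20, 20,  20, 10, 20,  20, 20, 10]

i.e. the rows of the matrix concatenated into a single list.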
+
+=item B<vnuma_vcpumap=[vcpu1, vcpu2, ...]>
+
+Defines the vcpu-to-vnode mapping as a list of integers, each giving the
+node number for the corresponding vcpu. If not defined, the vcpus are
+interleaved over the virtual nodes.
+Current limitation: every vNUMA node must have at least one vcpu, otherwise
+the default vcpu-to-vnode mapping will be used.
+Example:
+to map 4 vcpus to 2 nodes (vcpus 0 and 1 to vnode 0, vcpus 2 and 3 to vnode 1):
+vnuma_vcpumap = [0, 0, 1, 1]
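
As a side note, and purely my assumption about what "interleaved" means here:
with 4 vcpus and 2 vnodes the default mapping would presumably come out as

    vnuma_vcpumap = [0, 1, 0, 1]

i.e. even-numbered vcpus on vnode 0 and odd-numbered vcpus on vnode 1.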
+
+=item B<vnuma_vnodemap=[p1, p2, ..., pn]>
+
+The vnode-to-pnode mapping. Can be configured if manual vnode allocation is
+required. It only takes effect on real NUMA machines, and only if memory or
+other constraints do not prevent it. If the mapping is valid, automatic NUMA
+placement is disabled. If the mapping is incorrect, automatic NUMA placement
+is used instead when selecting physical nodes for allocation; the mask is
+also ignored on non-NUMA machines or if automatic allocation fails.
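
An example might help here too (my own, not from the patch): on a host with
two physical NUMA nodes, placing vnode 0 on pnode 1 and vnode 1 on pnode 0
would presumably be written as

    vnuma_nodes = 2
    vnuma_vnodemap = [1, 0]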

I think by default, if a vnode->pnode mapping is given that can't be satisfied, we should throw an error. But it may make sense to add a flag, either in the config file or on the command line, that would allow a fall-back to automatic placement if the specified placement can't be satisfied. (If this is complicated, it can be added later.)
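
For instance (purely hypothetical, no such option exists in this series), the
fall-back could be requested with something like

    vnuma_vnodemap = [1, 0]
    vnuma_placement_fallback = 1

where an unsatisfiable map would fall back to automatic placement instead of
aborting guest creation.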

 -George
