
[Xen-devel] [PATCH v2 5/7] xl: enable using ranges of pCPUs when creating cpupools



instead of just lists of single pCPUs or NUMA node IDs, as is the
case right now.

After this change, strings containing pCPU and NUMA node ranges are
also supported. The syntax is the same one accepted by the "cpus" and
"cpus_soft" config switches, e.g., "4-8" or "node:1,12-18,^14".

This makes things more flexible and more consistent, and it also
improves error handling, as the pCPU range parsing routine already
present in xl is more reliable than just a call to atoi().

While there, remove a redundant error check in the legacy syntax
handling (libxl_bitmap_test() already checks that the index is within
the size of the bitmap).

Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
Cc: Ian Campbell <ian.campbell@xxxxxxxxxx>
Cc: Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
Cc: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
Cc: Juergen Gross <JGross@xxxxxxxx>
---
 docs/man/xlcpupool.cfg.pod.5 |   22 +++++++++++++++++++---
 tools/libxl/xl_cmdimpl.c     |   17 ++++++++++++++---
 2 files changed, 33 insertions(+), 6 deletions(-)

diff --git a/docs/man/xlcpupool.cfg.pod.5 b/docs/man/xlcpupool.cfg.pod.5
index bb15cbe..2ff8ee8 100644
--- a/docs/man/xlcpupool.cfg.pod.5
+++ b/docs/man/xlcpupool.cfg.pod.5
@@ -93,10 +93,26 @@ Specifies the cpus of the NUMA-nodes given in C<NODES> (an integer or
 a list of integers) to be member of the cpupool. The free cpus in the
 specified nodes are allocated in the new cpupool.
 
-=item B<cpus="CPUS">
+=item B<cpus="CPU-LIST">
 
-The specified C<CPUS> are allocated in the new cpupool. All cpus must
-be free. Must not be specified together with B<nodes>.
+Specifies the cpus that will be members of the cpupool. All the specified
+cpus must be free, or creation will fail. C<CPU-LIST> may be specified
+as follows:
+
+=over 4
+
+=item ["2", "3", "5"]
+
+means that cpus 2,3,5 will be members of the cpupool.
+
+=item "0-3,5,^1"
+
+means that cpus 0,2,3 and 5 will be members of the cpupool. A "node:" or
+"nodes:" modifier can be used. E.g., "0,node:1,nodes:2-3,^10-13" means
+that pcpu 0, plus all the cpus of NUMA nodes 1,2,3, with the exception
+of cpus 10,11,12,13, will be members of the cpupool.
+
+=back
 
 If neither B<nodes> nor B<cpus> are specified only the first free cpu
 found will be allocated in the new cpupool.
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index ba5b51e..b2d80f4 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -7148,18 +7148,29 @@ int main_cpupoolcreate(int argc, char **argv)
             fprintf(stderr, "no free cpu found\n");
             goto out_cfg;
         }
-    } else if (!xlu_cfg_get_list(config, "cpus", &cpus, 0, 0)) {
+    } else if (!xlu_cfg_get_list(config, "cpus", &cpus, 0, 1)) {
         n_cpus = 0;
         while ((buf = xlu_cfg_get_listitem(cpus, n_cpus)) != NULL) {
             i = atoi(buf);
-            if ((i < 0) || (i >= freemap.size * 8) ||
-                !libxl_bitmap_test(&freemap, i)) {
+            if ((i < 0) || !libxl_bitmap_test(&freemap, i)) {
                 fprintf(stderr, "cpu %d illegal or not free\n", i);
                 goto out_cfg;
             }
             libxl_bitmap_set(&cpumap, i);
             n_cpus++;
         }
+    } else if (!xlu_cfg_get_string(config, "cpus", &buf, 0)) {
+        if (cpurange_parse(buf, &cpumap))
+            goto out_cfg;
+
+        n_cpus = 0;
+        libxl_for_each_set_bit(i, cpumap) {
+            if (!libxl_bitmap_test(&freemap, i)) {
+                fprintf(stderr, "cpu %d illegal or not free\n", i);
+                goto out_cfg;
+            }
+            n_cpus++;
+        }
     } else
         n_cpus = 0;
 

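For readers who want to see what this kind of range parsing involves,
here is a small, self-contained sketch (illustration only, not the xl
code): it turns a simplified cpu-list string such as "0-3,5,^1" into a
64-bit mask. The cpurange_parse() routine used in the hunk above
additionally understands the "node:"/"nodes:" prefixes and fills a
libxl_bitmap rather than a fixed-width integer.

/*
 * Illustrative sketch only -- NOT the xl implementation.
 * Parses a simplified cpu-list string ("A", "A-B", "^N", comma
 * separated) into a 64-bit mask.
 */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

static int parse_cpu_list(const char *s, uint64_t *mask)
{
    char *copy = strdup(s), *tok, *saveptr = NULL;
    int rc = 0;

    *mask = 0;
    for (tok = strtok_r(copy, ",", &saveptr); tok;
         tok = strtok_r(NULL, ",", &saveptr)) {
        int exclude = 0;
        long a, b;
        char *end;

        if (*tok == '^') {          /* "^N" removes cpus from the set */
            exclude = 1;
            tok++;
        }
        a = strtol(tok, &end, 10);
        b = a;
        if (*end == '-')            /* "A-B" is an inclusive range */
            b = strtol(end + 1, &end, 10);
        if (*end != '\0' || a < 0 || b < a || b > 63) {
            rc = -1;                /* malformed token or out of range */
            break;
        }
        for (long i = a; i <= b; i++) {
            if (exclude)
                *mask &= ~(1ULL << i);
            else
                *mask |= 1ULL << i;
        }
    }
    free(copy);
    return rc;
}

int main(void)
{
    uint64_t mask;

    if (parse_cpu_list("0-3,5,^1", &mask))
        return 1;
    /* Prints 0x2d, i.e. cpus 0,2,3,5 -- matching the doc example above. */
    printf("cpumap: 0x%llx\n", (unsigned long long)mask);
    return 0;
}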
