
[Xen-devel] [PATCH v13 2/5] xl: move away from the use of cpumap for hard affinity



and start using the vcpu_hard_affinity array instead. This is done
so that, when a subsequent patch ("libxl/xl: make it possible to
specify soft-affinity in domain config file") makes it possible to
deal with soft affinity as well, the code can be shared.

This change also enables more advanced VCPU-to-PCPU (hard, for now)
affinity specification when a list is used, like:

      cpus = ["3-4", "2-6,^4"]

This means that VCPU 0 must be pinned to PCPUs 3 and 4, and VCPU 1
to PCPUs 2, 3, 5 and 6 (before this change, cpus=[xx, yy] only
supported single values). Of course, the old syntax (e.g.,
cpus=[2, 3]) continues to work.
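
As a rough illustration (not part of the patch), here is a minimal
standalone sketch, using only the public libxl bitmap helpers, of the
per-VCPU bitmaps that the list above boils down to once parsed. The
hard-coded PCPU numbers just mirror the example; in xl itself the
bitmaps are of course filled by vcpupin_parse(), as the first
xl_cmdimpl.c hunk below shows. It needs to run on a Xen host, since
allocating a cpu bitmap queries the number of host PCPUs:

    /* Illustration only: the bitmaps that cpus = ["3-4", "2-6,^4"]
     * amounts to, built by hand with the public libxl helpers. */
    #include <stdio.h>
    #include <xentoollog.h>
    #include <libxl.h>
    #include <libxl_utils.h>

    int main(void)
    {
        xentoollog_logger_stdiostream *lg;
        libxl_ctx *ctx = NULL;
        libxl_bitmap affinity[2];                 /* one bitmap per VCPU */
        const int vcpu0_pcpus[] = { 3, 4 };       /* "3-4"    */
        const int vcpu1_pcpus[] = { 2, 3, 5, 6 }; /* "2-6,^4" */
        unsigned int i;

        lg = xtl_createlogger_stdiostream(stderr, XTL_ERROR, 0);
        if (!lg)
            return 1;
        if (libxl_ctx_alloc(&ctx, LIBXL_VERSION, 0, (xentoollog_logger *)lg))
            return 1;

        for (i = 0; i < 2; i++) {
            libxl_bitmap_init(&affinity[i]);
            if (libxl_cpu_bitmap_alloc(ctx, &affinity[i], 0))
                return 1;
        }
        for (i = 0; i < 2; i++)
            libxl_bitmap_set(&affinity[0], vcpu0_pcpus[i]);
        for (i = 0; i < 4; i++)
            libxl_bitmap_set(&affinity[1], vcpu1_pcpus[i]);

        /* VCPU 1 may run on PCPU 5, but not on PCPU 4 */
        printf("vcpu1: pcpu5=%d pcpu4=%d\n",
               libxl_bitmap_test(&affinity[1], 5),
               libxl_bitmap_test(&affinity[1], 4));

        for (i = 0; i < 2; i++)
            libxl_bitmap_dispose(&affinity[i]);
        libxl_ctx_free(ctx);
        xtl_logger_destroy((xentoollog_logger *)lg);
        return 0;
    }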

Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
---
Changes from v12:
 * fixed the usage of num_cpus in the cpus="<string>" case, where it
   was not even initialized, as noticed during review.

Changes from v11:
  * improved manpage, as requested during review;
  * do not unify the handling of the string and list cases any
    longer, as requested during review.

Changes from v10:
  * slightly changed the logic that checks whether we are dealing
    with a string or a list. Basically, added a bool flag to store
    that, which removed the need for buf2, which in turn needed to
    be 'spuriously' initialized on gcc >= 4.9.0.

Changes from v9:
 * new patch, basically containing the xl bits of what was the
   cpumap deprecation patch in v9.
---
 docs/man/xl.cfg.pod.5    |   12 ++++++++----
 tools/libxl/xl_cmdimpl.c |   31 +++++++++++++++++++++++--------
 2 files changed, 31 insertions(+), 12 deletions(-)

diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
index ff9ea77..ffd94a8 100644
--- a/docs/man/xl.cfg.pod.5
+++ b/docs/man/xl.cfg.pod.5
@@ -143,11 +143,15 @@ Combining this with "all" is also possible, meaning 
"all,^nodes:1"
 results in all the vcpus of the guest running on all the cpus on the
 host, except for the cpus belonging to the host NUMA node 1.
 
-=item ["2", "3"] (or [2, 3])
+=item ["2", "3-8,^5"]
 
-To ask for specific vcpu mapping. That means (in this example), vcpu #0
-of the guest will run on cpu #2 of the host and vcpu #1 of the guest will
-run on cpu #3 of the host.
+To ask for specific vcpu mapping. That means (in this example), vcpu 0
+of the guest will run on cpu 2 of the host and vcpu 1 of the guest will
+run on cpus 3,4,6,7,8 of the host.
+
+More complex notation can also be used, exactly as described above. So
+"all,^5-8", or just "all", or "node:0,node:2,^9-11,18-20" are all legal,
+for each element of the list.
 
 =back
 
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index ad445b0..8c2ef07 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -808,16 +808,15 @@ static void parse_config_data(const char *config_source,
         b_info->vcpu_hard_affinity = xmalloc(num_cpus * sizeof(libxl_bitmap));
 
         while ((buf = xlu_cfg_get_listitem(cpus, j)) != NULL && j < num_cpus) {
-            i = atoi(buf);
-
             libxl_bitmap_init(&b_info->vcpu_hard_affinity[j]);
             if (libxl_cpu_bitmap_alloc(ctx,
                                        &b_info->vcpu_hard_affinity[j], 0)) {
                 fprintf(stderr, "Unable to allocate cpumap for vcpu %d\n", j);
                 exit(1);
             }
-            libxl_bitmap_set_none(&b_info->vcpu_hard_affinity[j]);
-            libxl_bitmap_set(&b_info->vcpu_hard_affinity[j], i);
+
+            if (vcpupin_parse(buf, &b_info->vcpu_hard_affinity[j]))
+                exit(1);
 
             j++;
         }
@@ -827,15 +826,31 @@ static void parse_config_data(const char *config_source,
         libxl_defbool_set(&b_info->numa_placement, false);
     }
     else if (!xlu_cfg_get_string (config, "cpus", &buf, 0)) {
-        if (libxl_cpu_bitmap_alloc(ctx, &b_info->cpumap, 0)) {
-            fprintf(stderr, "Unable to allocate cpumap\n");
+        b_info->vcpu_hard_affinity =
+            xmalloc(b_info->max_vcpus * sizeof(libxl_bitmap));
+
+        libxl_bitmap_init(&b_info->vcpu_hard_affinity[0]);
+        if (libxl_cpu_bitmap_alloc(ctx,
+                                   &b_info->vcpu_hard_affinity[0], 0)) {
+            fprintf(stderr, "Unable to allocate cpumap for vcpu 0\n");
             exit(1);
         }
 
-        libxl_bitmap_set_none(&b_info->cpumap);
-        if (vcpupin_parse(buf, &b_info->cpumap))
+        if (vcpupin_parse(buf, &b_info->vcpu_hard_affinity[0]))
             exit(1);
 
+        for (i = 1; i < b_info->max_vcpus; i++) {
+            libxl_bitmap_init(&b_info->vcpu_hard_affinity[i]);
+            if (libxl_cpu_bitmap_alloc(ctx,
+                                       &b_info->vcpu_hard_affinity[i], 0)) {
+                fprintf(stderr, "Unable to allocate cpumap for vcpu %d\n", i);
+                exit(1);
+            }
+            libxl_bitmap_copy(ctx, &b_info->vcpu_hard_affinity[i],
+                              &b_info->vcpu_hard_affinity[0]);
+        }
+        b_info->num_vcpu_hard_affinity = b_info->max_vcpus;
+
         libxl_defbool_set(&b_info->numa_placement, false);
     }
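
For the cpus="<string>" case handled in the last hunk, the string is
parsed once into the bitmap of VCPU 0 and every other VCPU then gets
an identical copy. Below is a minimal out-of-tree sketch of that
replication step, again with the public libxl helpers only;
replicate_hard_affinity() is not an xl/libxl function, and the parsed
bitmap is simply passed in here, since xl's vcpupin_parse() is
internal to xl_cmdimpl.c:

    /* Illustration only: give every VCPU the same hard affinity,
     * starting from one already-parsed bitmap. */
    #include <stdlib.h>
    #include <libxl.h>
    #include <libxl_utils.h>

    libxl_bitmap *replicate_hard_affinity(libxl_ctx *ctx,
                                          libxl_bitmap *parsed,
                                          int max_vcpus)
    {
        libxl_bitmap *aff = calloc(max_vcpus, sizeof(*aff));
        int i;

        if (!aff)
            return NULL;

        for (i = 0; i < max_vcpus; i++) {
            libxl_bitmap_init(&aff[i]);
            if (libxl_cpu_bitmap_alloc(ctx, &aff[i], 0)) {
                free(aff);
                return NULL;
            }
            /* same hard affinity for each VCPU */
            libxl_bitmap_copy(ctx, &aff[i], parsed);
        }
        return aff;
    }

In xl itself the parsed bitmap is b_info->vcpu_hard_affinity[0], and
b_info->num_vcpu_hard_affinity is then set to b_info->max_vcpus, as
the hunk above does.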
 

