
[Xen-changelog] [xen master] libxl: Change default for b_info->{cpu, node}map to "not allocated"



commit 194e71830d8d09c9a75ed41abe02d3b8bb6ebe42
Author:     Dario Faggioli <dario.faggioli@xxxxxxxxxx>
AuthorDate: Fri Jun 20 18:19:29 2014 +0200
Commit:     Ian Campbell <ian.campbell@xxxxxxxxxx>
CommitDate: Fri Jun 27 13:38:33 2014 +0100

    libxl: Change default for b_info->{cpu, node}map to "not allocated"
    
    by avoiding allocating them in libxl__domain_build_info_setdefault.
    In fact, back in 7e449837 ("libxl: provide _init and _setdefault for
    libxl_domain_build_info") and a5d30c23 ("libxl: allow for explicitly
    specifying node-affinity"), it was decided that the default for these
    fields was for them to be allocated and filled.
    
    That is now causing problems whenever we have to figure out
    whether or not the caller is using one of those fields. In fact,
    when we see a full bitmap, is it just the default value, or is it
    the user who wants it that way?
    
    Since that kind of knowledge has become important, change the default
    to be "bitmap not allocated". It then becomes easy to know whether a
    libxl caller is using one of the fields, just by checking whether the
    bitmap is actually there with a non-zero size.
    
    This is very important for the following patches introducing new ways
    of specifying hard and soft affinity. It also allows us to improve
    the checks around NUMA automatic placement, during domain creation
    (and that bit is done in this very patch).
    
    Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
    Acked-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
---
 tools/libxl/libxl_create.c |   12 ------------
 tools/libxl/libxl_dom.c    |   35 +++++++++++++++++++++++------------
 2 files changed, 23 insertions(+), 24 deletions(-)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index d6b8a29..8cc3ca5 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -187,20 +187,8 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
     } else if (b_info->avail_vcpus.size > HVM_MAX_VCPUS)
         return ERROR_FAIL;
 
-    if (!b_info->cpumap.size) {
-        if (libxl_cpu_bitmap_alloc(CTX, &b_info->cpumap, 0))
-            return ERROR_FAIL;
-        libxl_bitmap_set_any(&b_info->cpumap);
-    }
-
     libxl_defbool_setdefault(&b_info->numa_placement, true);
 
-    if (!b_info->nodemap.size) {
-        if (libxl_node_bitmap_alloc(CTX, &b_info->nodemap, 0))
-            return ERROR_FAIL;
-        libxl_bitmap_set_any(&b_info->nodemap);
-    }
-
     if (b_info->max_memkb == LIBXL_MEMKB_DEFAULT)
         b_info->max_memkb = 32 * 1024;
     if (b_info->target_memkb == LIBXL_MEMKB_DEFAULT)
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index a90a8d5..49bfa66 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -241,28 +241,39 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
     }
 
     /*
-     * Check if the domain has any CPU affinity. If not, try to build
-     * up one. In case numa_place_domain() find at least a suitable
-     * candidate, it will affect info->nodemap accordingly; if it
-     * does not, it just leaves it as it is. This means (unless
-     * some weird error manifests) the subsequent call to
-     * libxl_domain_set_nodeaffinity() will do the actual placement,
-     * whatever that turns out to be.
+     * Check if the domain has any CPU or node affinity already. If not, try
+     * to build up the latter via automatic NUMA placement. In fact, in case
+     * numa_place_domain() manages to find a placement, info->nodemap is
+     * updated accordingly; if it does not, info->nodemap is just left
+     * alone. It is then the subsequent call to
+     * libxl_domain_set_nodeaffinity() that enacts the actual placement.
      */
     if (libxl_defbool_val(info->numa_placement)) {
-        if (!libxl_bitmap_is_full(&info->cpumap)) {
+        if (info->cpumap.size) {
             LOG(ERROR, "Can run NUMA placement only if no vcpu "
-                       "affinity is specified");
+                       "affinity is specified explicitly");
             return ERROR_INVAL;
         }
+        if (info->nodemap.size) {
+            LOG(ERROR, "Can run NUMA placement only if the domain does not "
+                       "have any NUMA node affinity set already");
+            return ERROR_INVAL;
+        }
+
+        rc = libxl_node_bitmap_alloc(ctx, &info->nodemap, 0);
+        if (rc)
+            return rc;
+        libxl_bitmap_set_any(&info->nodemap);
 
         rc = numa_place_domain(gc, domid, info);
         if (rc)
             return rc;
     }
-    libxl_domain_set_nodeaffinity(ctx, domid, &info->nodemap);
-    libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus,
-                               &info->cpumap, NULL);
+    if (info->nodemap.size)
+        libxl_domain_set_nodeaffinity(ctx, domid, &info->nodemap);
+    if (info->cpumap.size)
+        libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus,
+                                   &info->cpumap, NULL);
 
     if (xc_domain_setmaxmem(ctx->xch, domid, info->target_memkb +
         LIBXL_MAXMEM_CONSTANT) < 0) {
--
generated by git-patchbot for /home/xen/git/xen.git#master

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog


 

