
[Xen-changelog] [xen master] libxl: avoid considering pCPUs outside of the cpupool during NUMA placement



commit 4a6070ea95b17e5c5f051ebe6886783dd50e911c
Author:     Dario Faggioli <dario.faggioli@xxxxxxxxxx>
AuthorDate: Fri Oct 21 15:49:30 2016 +0200
Commit:     Wei Liu <wei.liu2@xxxxxxxxxx>
CommitDate: Fri Oct 21 14:56:07 2016 +0100

    libxl: avoid considering pCPUs outside of the cpupool during NUMA placement
    
    During automatic NUMA placement, information about
    how many vCPUs can run on which NUMA nodes is used,
    in order to spread the load as evenly as possible.
    
    Such information is derived from vCPU hard and soft
    affinity, but that is not enough. In fact, affinity
    can be set to be a superset of the pCPUs that belong
    to the cpupool the domain is in but, of course, the
    domain will never run on pCPUs outside of its
    cpupool.
    
    Take this into account in the placement algorithm.
    
    Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
    Reported-by: George Dunlap <george.dunlap@xxxxxxxxxx>
    Reviewed-by: Juergen Gross <jgross@xxxxxxxx>
    Reviewed-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    Release-acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
---
 tools/libxl/libxl_numa.c | 24 +++++++++++++++++++++---
 1 file changed, 21 insertions(+), 3 deletions(-)

diff --git a/tools/libxl/libxl_numa.c b/tools/libxl/libxl_numa.c
index 33289d5..fd64c22 100644
--- a/tools/libxl/libxl_numa.c
+++ b/tools/libxl/libxl_numa.c
@@ -205,12 +205,21 @@ static int nr_vcpus_on_nodes(libxl__gc *gc, libxl_cputopology *tinfo,
     }
 
     for (i = 0; i < nr_doms; i++) {
-        libxl_vcpuinfo *vinfo;
-        int nr_dom_vcpus;
+        libxl_vcpuinfo *vinfo = NULL;
+        libxl_cpupoolinfo cpupool_info;
+        int cpupool, nr_dom_vcpus;
+
+        libxl_cpupoolinfo_init(&cpupool_info);
+
+        cpupool = libxl__domain_cpupool(gc, dinfo[i].domid);
+        if (cpupool < 0)
+            goto next;
+        if (libxl_cpupool_info(CTX, &cpupool_info, cpupool))
+            goto next;
 
         vinfo = libxl_list_vcpu(CTX, dinfo[i].domid, &nr_dom_vcpus, &nr_cpus);
         if (vinfo == NULL)
-            continue;
+            goto next;
 
         /* Retrieve the domain's node-affinity map */
         libxl_domain_get_nodeaffinity(CTX, dinfo[i].domid, &dom_nodemap);
@@ -220,6 +229,12 @@ static int nr_vcpus_on_nodes(libxl__gc *gc, libxl_cputopology *tinfo,
              * For each vcpu of each domain, it must have both vcpu-affinity
              * and node-affinity to (a pcpu belonging to) a certain node to
              * cause an increment in the corresponding element of the array.
+             *
+             * Note that we also need to check whether the cpu actually
+             * belongs to the cpupool of the domain being checked. In fact,
+             * the vcpu may have affinity with cpus that are in
+             * suitable_cpumap but not in its own cpupool, and we do not
+             * want to count those.
              */
             libxl_bitmap_set_none(&nodes_counted);
             libxl_for_each_set_bit(k, vinfo[j].cpumap) {
@@ -228,6 +243,7 @@ static int nr_vcpus_on_nodes(libxl__gc *gc, libxl_cputopology *tinfo,
                 int node = tinfo[k].node;
 
                 if (libxl_bitmap_test(suitable_cpumap, k) &&
+                    libxl_bitmap_test(&cpupool_info.cpumap, k) &&
                     libxl_bitmap_test(&dom_nodemap, node) &&
                     !libxl_bitmap_test(&nodes_counted, node)) {
                     libxl_bitmap_set(&nodes_counted, node);
@@ -236,6 +252,8 @@ static int nr_vcpus_on_nodes(libxl__gc *gc, libxl_cputopology *tinfo,
             }
         }
 
+ next:
+        libxl_cpupoolinfo_dispose(&cpupool_info);
         libxl_vcpuinfo_list_free(vinfo, nr_dom_vcpus);
     }
 
--
generated by git-patchbot for /home/xen/git/xen.git#master
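
[Editor's note] For readers following the logic, below is a minimal,
self-contained sketch (not the actual libxl code) of the per-domain counting
after this change: a pCPU is credited to a NUMA node only if it is in the
suitable set, belongs to the domain's own cpupool, and sits on a node the
domain has node-affinity with. The helper name count_vcpus_per_node(), its
parameter list and the vcpus_on_node array are hypothetical; the bitmap
tests mirror the hunks above.

#include <libxl.h>
#include <libxl_utils.h>

/*
 * Hypothetical helper: credit each vCPU of one domain to every NUMA node
 * it can actually run on, mirroring the checks in nr_vcpus_on_nodes()
 * above (including the new cpupool_cpumap test).
 */
static void count_vcpus_per_node(const libxl_vcpuinfo *vinfo, int nr_dom_vcpus,
                                 const libxl_cputopology *tinfo, int nr_cpus,
                                 const libxl_bitmap *suitable_cpumap,
                                 const libxl_bitmap *cpupool_cpumap,
                                 const libxl_bitmap *dom_nodemap,
                                 libxl_bitmap *nodes_counted,
                                 int *vcpus_on_node)
{
    int j, k;

    for (j = 0; j < nr_dom_vcpus; j++) {
        /* Count each node at most once per vCPU. */
        libxl_bitmap_set_none(nodes_counted);

        libxl_for_each_set_bit(k, vinfo[j].cpumap) {
            int node;

            /* Ignore bits beyond the pCPUs we have topology for. */
            if (k >= nr_cpus)
                break;
            node = tinfo[k].node;

            if (libxl_bitmap_test(suitable_cpumap, k) &&
                libxl_bitmap_test(cpupool_cpumap, k) &&  /* the new check */
                libxl_bitmap_test(dom_nodemap, node) &&
                !libxl_bitmap_test(nodes_counted, node)) {
                libxl_bitmap_set(nodes_counted, node);
                vcpus_on_node[node]++;
            }
        }
    }
}

Without the extra libxl_bitmap_test(cpupool_cpumap, k), a vCPU whose
affinity spans pCPUs outside its cpupool would inflate the count for nodes
it can never actually run on, which is exactly what this patch avoids.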

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxx
https://lists.xenproject.org/xen-changelog

 

