Re: [Xen-devel] [PATCH v2 16/16] libxl: automatic NUMA placement affects soft affinity
On gio, 2013-11-14 at 16:03 +0000, George Dunlap wrote:
> On 13/11/13 19:13, Dario Faggioli wrote:
> > vCPU soft affinity and NUMA-aware scheduling do not have
> > to be related. However, soft affinity is how NUMA-aware
> > scheduling is actually implemented, and therefore, by default,
> > the results of automatic NUMA placement (at VM creation time)
> > are also used to set the soft affinity of all the vCPUs of
> > the domain.
> >
> > Of course, this only happens if automatic NUMA placement is
> > enabled and actually takes place (for instance, if the user
> > does not specify any hard or soft affinity in the xl config
> > file).
> >
> > This also takes care of the vice versa, i.e., it does not trigger
> > automatic placement if the config file specifies either a
> > hard (the check for which was already there) or a soft (the
> > check for which is introduced by this commit) affinity.
>
> It looks like with this patch you set *both* hard and soft affinities
> when doing auto-numa placement.  Would it make more sense to change it
> to setting only the soft affinity, and leaving the hard affinity to "any"?
>
Nope, it indeed sets only the soft affinity after automatic placement;
the hard affinity is left untouched.

> (My brain is running low, so forgive me if I've mis-read it...)
>
:-) This is the spot:

> > Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
> >
> > diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> > @@ -222,21 +222,39 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
> >       * some weird error manifests) the subsequent call to
> >       * libxl_domain_set_nodeaffinity() will do the actual placement,
> >       * whatever that turns out to be.
> > +     *
> > +     * As far as scheduling is concerned, we achieve NUMA-aware scheduling
> > +     * by having the results of placement affect the soft affinity of all
> > +     * the vcpus of the domain. Of course, we want that iff placement is
> > +     * enabled and actually happens, so we only change info->cpumap_soft to
> > +     * reflect the placement result if that is the case.
> >       */
> >      if (libxl_defbool_val(info->numa_placement)) {
> >
> > -        if (!libxl_bitmap_is_full(&info->cpumap)) {
> > +        /* We require both hard and soft affinity not to be set */
> > +        if (!libxl_bitmap_is_full(&info->cpumap) ||
> > +            !libxl_bitmap_is_full(&info->cpumap_soft)) {
> >              LOG(ERROR, "Can run NUMA placement only if no vcpu "
> > -                       "affinity is specified");
> > +                       "(hard or soft) affinity is specified");
> >              return ERROR_INVAL;
> >          }
> >
> >          rc = numa_place_domain(gc, domid, info);
> >          if (rc)
> >              return rc;
> > +
> > +        /*
> > +         * We change the soft affinity in domain_build_info here, of course
> > +         * after converting the result of placement from nodes to cpus. The
> > +         * following call to libxl_set_vcpuaffinity_all_soft() will do the
> > +         * actual updating of the domain's vcpus' soft affinity.
> > +         */
> > +        libxl_nodemap_to_cpumap(ctx, &info->nodemap, &info->cpumap_soft);
          ^
          |
Here: ----/ I only copy the result of placement into info->cpumap_soft,
without touching info->cpumap, which is "all" (or we would not be at this
point) and stays that way.

> >      }
> >      libxl_domain_set_nodeaffinity(ctx, domid, &info->nodemap);
> >      libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus,
> >                                 &info->cpumap);
> > +    libxl_set_vcpuaffinity_all_soft(ctx, domid, info->max_vcpus,
> > +                                    &info->cpumap_soft);

Thanks and Regards,
Dario

--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)