Re: [Xen-devel] [PATCH] tools: libxl/xl: run NUMA placement even when an hard-affinity is set
On Mon, 2018-08-20 at 11:14 +0100, Wei Liu wrote:
> On Fri, Aug 17, 2018 at 07:03:03PM +0200, Dario Faggioli wrote:
> > @@ -27,6 +27,8 @@
> >  
> >  #include "_paths.h"
> >  
> > +//#define DEBUG 1
> > +
> 
> Stray changes here?
> 
> You can use NDEBUG instead.
> 
I found this in one other place in libxl (libxl_event.c), and figured I
could use it here too.

The reason is that it guards a (potentially) rather expensive check,
which I personally consider too much even for a debug build. At the same
time, it could be handy for us developers to have it there, and to be
able to enable it when making changes to the affinity and/or NUMA
placement code.

But I don't have a strong opinion; I'll change it to use NDEBUG if
that's what you prefer.

> > @@ -162,6 +165,38 @@ static int numa_place_domain(libxl__gc *gc, uint32_t domid,
> >      rc = libxl_cpupool_info(CTX, &cpupool_info, cpupool);
> >      if (rc)
> >          goto out;
> > +    map = &cpupool_info.cpumap;
> > +
> > +    /*
> > +     * If there's a well defined hard affinity mask (i.e., the same one for all
> > +     * the vcpus), we can try to run the placement considering only the pcpus
> > +     * within such mask.
> > +     */
> > +    if (info->num_vcpu_hard_affinity)
> > +    {
> 
> Placement of "{" is wrong.
> 
Yep, sorry.

> > +        int j;
> > +
> > +        for (j = 0; j < info->num_vcpu_hard_affinity; j++)
> > +            assert(libxl_bitmap_equal(&info->vcpu_hard_affinity[0],
> > +                                      &info->vcpu_hard_affinity[j], 0));
> > +#endif /* DEBUG */
> 
> But why should the above be debug only? The assumption doesn't seem to
> always hold.
> 
I'm not sure I'm getting your point. If we're here, we do indeed want to
be sure that we don't have a different hard-affinity mask for the
various vcpus... In fact, if we do, which one should we use for
placement?

So, yes, if we are here, I think the assert() should be true. I just
considered this check only useful to developers, and too expensive even
for debug builds. But, as said above, I can gate it with NDEBUG if you
prefer.

> > +    /*
> > +     * Hard affinity should _really_ contain cpus that are inside our
> > +     * cpupool. Anyway, if it does not, log a warning and only use the
> > +     * cpupool's cpus for placement.
> > +     */
> > +    if (!libxl_bitmap_is_empty(&cpumap))
> > +        map = &cpumap;
> > +    else
> > +        LOG(WARN, "Hard affinity completely outside of domain's cpupool?");
> 
> Should this be an error?
> 
> What is the expected interaction for hard affinity and cpupool?
> 
Cpupools rulez. :-D Basically, a domain can't possibly run on any cpu
which is outside of the pool where the domain itself lives, no matter
what the affinity is.

BTW, I tested this better, and I believe there is no need for adding
this WARN. In fact, if the domain is being created inside a pool,
setting its hard affinity to a mask which is completely disjoint from
the cpupool's one fails _before_ getting here.

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Software Engineer @ SUSE https://www.suse.com/
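For reference, a minimal sketch of what gating the check on NDEBUG (as
suggested above) could look like, reusing the fields and helpers already
quoted from the patch (num_vcpu_hard_affinity, vcpu_hard_affinity[],
libxl_bitmap_equal()); this is only an illustration, not necessarily the
exact code that would go into a v2:

#ifndef NDEBUG
    {
        int j;

        /* All vcpus must share the same hard-affinity mask here. */
        for (j = 0; j < info->num_vcpu_hard_affinity; j++)
            assert(libxl_bitmap_equal(&info->vcpu_hard_affinity[0],
                                      &info->vcpu_hard_affinity[j], 0));
    }
#endif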
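And, for completeness, a hypothetical sketch of the step the second hunk
is about: building the candidate map for placement as the intersection
of the (uniform) hard-affinity mask and the cpupool's cpumap, and keeping
the cpupool's own cpumap when that intersection turns out empty. The
bitmap helpers used (libxl_cpu_bitmap_alloc, libxl_for_each_set_bit,
libxl_bitmap_test, libxl_bitmap_set, libxl_bitmap_is_empty) are existing
libxl utilities, but the actual patch may compute this differently:

    libxl_bitmap cpumap;
    int cpu;

    /* Allocate a cpu bitmap; 0 means "sized for all host cpus". */
    rc = libxl_cpu_bitmap_alloc(CTX, &cpumap, 0);
    if (rc)
        goto out;

    /* Keep only the hard-affinity pcpus that are also in the cpupool. */
    libxl_for_each_set_bit(cpu, info->vcpu_hard_affinity[0])
        if (libxl_bitmap_test(&cpupool_info.cpumap, cpu))
            libxl_bitmap_set(&cpumap, cpu);

    /*
     * An empty intersection means the hard affinity lies completely
     * outside the cpupool; in that case keep using the cpupool's cpumap.
     */
    if (!libxl_bitmap_is_empty(&cpumap))
        map = &cpumap;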