Re: [Xen-devel] [PATCH v2] credit: generalize __vcpu_has_soft_affinity()
On 03/06/2015 11:44 AM, Jan Beulich wrote:
>>>> On 06.03.15 at 12:32, <george.dunlap@xxxxxxxxxxxxx> wrote:
>> On 03/06/2015 10:16 AM, Jan Beulich wrote:
>>>>>> On 06.03.15 at 10:53, <george.dunlap@xxxxxxxxxxxxx> wrote:
>>>> On 03/06/2015 07:36 AM, Jan Beulich wrote:
>>>>> As pointed out in the discussion of the patch at
>>>>> http://lists.xenproject.org/archives/html/xen-devel/2015-02/msg03256.html
>>>>> generalizing the conditions here means code elsewhere doesn't need to
>>>>> take into consideration internals of how load balancing in the credit
>>>>> scheduler works.
>>>>>
>>>>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>>>>> ---
>>>>> v2: Use VCPU2ONLINE(vc) (or really an open coded variant thereof)
>>>>>     instead of cpu_online_map (suggested by Dario).
>>>>>
>>>>> --- a/xen/common/sched_credit.c
>>>>> +++ b/xen/common/sched_credit.c
>>>>> @@ -292,11 +292,10 @@ __runq_remove(struct csched_vcpu *svc)
>>>>>  static inline int __vcpu_has_soft_affinity(const struct vcpu *vc,
>>>>>                                             const cpumask_t *mask)
>>>>>  {
>>>>> -    if ( cpumask_full(vc->cpu_soft_affinity)
>>>>> -         || !cpumask_intersects(vc->cpu_soft_affinity, mask) )
>>>>> -        return 0;
>>>>> -
>>>>> -    return 1;
>>>>> +    return !cpumask_subset(cpupool_online_cpumask(vc->domain->cpupool),
>>>>> +                           vc->cpu_soft_affinity) &&
>>>>> +           !cpumask_subset(vc->cpu_soft_affinity, vc->cpu_hard_affinity) &&
>>>>> +           cpumask_intersects(vc->cpu_soft_affinity, mask);
>>>>
>>>> It looks like the comment above this line could use changing too; perhaps:
>>>>
>>>> ---
>>>> Hard affinity balancing is always necessary and must never be skipped.
>>>> But soft affinity need only be considered when it has a functionally
>>>> different effect than other constraints (such as hard affinity, cpus
>>>> online, or cpupools).
>>>>
>>>> Soft affinity only needs to be considered if:
>>>> * The cpus in the cpupool are not a subset of soft affinity
>>>> * The hard affinity is not a subset of soft affinity
>>>
>>> "hard" and "soft" appear to be swapped here.  I corrected this,
>>> please let me know if you disagree (in which case the patch would
>>> need changing too).
>>
>> Uum -- I think my comment is right.  If the soft affinity is a subset of
>> hard affinity, then there are some cpus in the hard affinity which are
>> "preferred" (soft affine) and some that are "not preferred"
>> (non-soft-affine).  Whereas, if hard affinity is a subset of soft
>> affinity, then all cpus in the hard affinity are "preferred" (soft
>> affine), and so there's no sense in doing the soft affinity step.
>
> I.e. confusion about which affinity means what is ongoing:
> Earlier on we had Dario answering a question of mine
>
>>> Hmm, not sure.  And I keep being confused whether soft means
>>> "allow" and hard means "prefer" or the other way around.
>>>
>> "hard" means allow (or not allow)
>> "soft" means prefer
>
> while I read your reply to mean the opposite.

No, Dario and I mean the same thing:

"Hard affinity" means "May only run on these cpus and no others".

"Soft affinity" means "I prefer you to run here if you can, but if it's
too busy, go ahead and run it somewhere else".

Hard affinity replaces what used to be called "vcpu_affinity"; and soft
affinity is typically used for vNUMA implementations.
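To make the three conditions concrete, here is a minimal standalone
sketch (assumptions: uint64_t stands in for Xen's cpumask_t, the helpers
below are local reimplementations that merely mirror the Xen API names,
and the middle test uses the ordering argued for above -- hard affinity
not a subset of soft affinity -- rather than the swapped test in the v2
hunk as posted):

/*
 * Standalone model of the generalized soft-affinity test; a sketch,
 * not Xen's implementation.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t cpumask_t;          /* bit n set => cpu n in the mask */

static bool cpumask_subset(cpumask_t sub, cpumask_t super)
{
    return (sub & ~super) == 0;      /* no bit of sub outside super */
}

static bool cpumask_intersects(cpumask_t a, cpumask_t b)
{
    return (a & b) != 0;
}

/*
 * Soft affinity is only worth a separate balancing step when it can
 * actually change the outcome:
 *  1. some online cpu in the vcpu's cpupool lies outside the soft set,
 *  2. some cpu allowed by hard affinity is not preferred (hard is not
 *     a subset of soft), and
 *  3. the soft set overlaps the mask being balanced over.
 */
static bool vcpu_has_soft_affinity(cpumask_t pool_online,
                                   cpumask_t soft, cpumask_t hard,
                                   cpumask_t mask)
{
    return !cpumask_subset(pool_online, soft) &&
           !cpumask_subset(hard, soft) &&
           cpumask_intersects(soft, mask);
}

int main(void)
{
    cpumask_t soft = 0x30;           /* prefer cpus 4-5 */
    cpumask_t hard = 0x3c;           /* allowed on cpus 2-5 */

    /* Pool has cpus outside the soft set: the preference can matter. */
    printf("%d\n", vcpu_has_soft_affinity(0xff, soft, hard, hard)); /* 1 */

    /* All online cpus in the pool are already preferred: the soft
     * balancing step degenerates and can be skipped. */
    printf("%d\n", vcpu_has_soft_affinity(0x30, soft, hard, hard)); /* 0 */
    return 0;
}

Under that ordering the first call reports an effective soft affinity
and the second does not: once every reachable cpu is preferred, the
soft step can no longer change the decision.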
Consider the following example:

  Soft affinity: 00001100
  Hard affinity: 00111100

In this case, the scheduler should *prefer* to run it on cpus 4 or 5
(which are in both the soft and hard affinities), but *may* run it on
cpus 2-3 if it thinks it's necessary; so there is an "effective soft
affinity" -- even though soft affinity is a subset of hard affinity.

Now consider the reverse:

  Soft affinity: 00111100
  Hard affinity: 00001100

In this case, the scheduler *must* run it on either cpu 4 or 5; but
there is no preference between the two, since both are in the soft
affinity set.  So this is effectively the same as not having any soft
affinity.  And hard affinity is a subset of soft affinity.

 -George
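For anyone who wants to check the two examples mechanically, here is a
minimal sketch in plain C (assumptions: subset() is a hypothetical
local helper, bit n stands for cpu n, and the mask strings above read
left to right, so "00001100" above is the set {4,5} below):

#include <stdint.h>
#include <stdio.h>

static int subset(uint64_t sub, uint64_t super)
{
    return (sub & ~super) == 0;
}

int main(void)
{
    /* First example: soft = {4,5}, hard = {2,3,4,5}.  Hard is not a
     * subset of soft, so some allowed cpus are unpreferred and the
     * soft-affinity step is meaningful. */
    uint64_t soft1 = 0x30, hard1 = 0x3c;
    printf("example 1: effective soft affinity? %s\n",
           subset(hard1, soft1) ? "no" : "yes");     /* yes */

    /* Second example: soft = {2,3,4,5}, hard = {4,5}.  Hard is a
     * subset of soft, so every allowed cpu is preferred and soft
     * affinity cannot influence the choice. */
    uint64_t soft2 = 0x3c, hard2 = 0x30;
    printf("example 2: effective soft affinity? %s\n",
           subset(hard2, soft2) ? "no" : "yes");     /* no */
    return 0;
}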