Re: [Xen-devel] [PATCH v2] credit: generalize __vcpu_has_soft_affinity()
On 03/06/2015 07:36 AM, Jan Beulich wrote:
> As pointed out in the discussion of the patch at
> http://lists.xenproject.org/archives/html/xen-devel/2015-02/msg03256.html
> generalizing the conditions here means code elsewhere doesn't need to
> take into consideration internals of how load balancing in the credit
> scheduler works.
>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> ---
> v2: Use VCPU2ONLINE(vc) (or really an open coded variant thereof)
>     instead of cpu_online_map (suggested by Dario).
>
> --- a/xen/common/sched_credit.c
> +++ b/xen/common/sched_credit.c
> @@ -292,11 +292,10 @@ __runq_remove(struct csched_vcpu *svc)
>  static inline int __vcpu_has_soft_affinity(const struct vcpu *vc,
>                                             const cpumask_t *mask)
>  {
> -    if ( cpumask_full(vc->cpu_soft_affinity)
> -         || !cpumask_intersects(vc->cpu_soft_affinity, mask) )
> -        return 0;
> -
> -    return 1;
> +    return !cpumask_subset(cpupool_online_cpumask(vc->domain->cpupool),
> +                           vc->cpu_soft_affinity) &&
> +           !cpumask_subset(vc->cpu_hard_affinity, vc->cpu_soft_affinity) &&
> +           cpumask_intersects(vc->cpu_soft_affinity, mask);

It looks like the comment above this line could use changing too; perhaps:

---
Hard affinity balancing is always necessary and must never be skipped.
But soft affinity need only be considered when it has a functionally
different effect than other constraints (such as hard affinity, cpus
online, or cpupools).

Soft affinity only needs to be considered if:
* The cpus in the cpupool are not a subset of soft affinity
* The hard affinity is not a subset of soft affinity
* There is an overlap between the soft affinity and the mask which is
  currently being considered.
---

With the comment updated:

Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
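[Editor's note: to make the three conditions in the proposed comment concrete, here is a minimal standalone sketch. This is NOT Xen code: cpumask_t is modeled as a 64-bit word, and mask_subset(), mask_intersects(), and soft_affinity_matters() are hypothetical stand-ins for Xen's cpumask_subset(), cpumask_intersects(), and the patched __vcpu_has_soft_affinity().]

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Toy cpumask: one bit per cpu, up to 64 cpus. */
typedef uint64_t cpumask_t;

/* a is a subset of b iff no bit of a falls outside b. */
static bool mask_subset(cpumask_t a, cpumask_t b) { return (a & ~b) == 0; }

static bool mask_intersects(cpumask_t a, cpumask_t b) { return (a & b) != 0; }

/*
 * Mirrors the patched predicate: soft affinity is only worth a separate
 * balancing step if it constrains more than the cpupool's online cpus,
 * constrains more than hard affinity, and overlaps the mask at hand.
 */
static bool soft_affinity_matters(cpumask_t pool_online, cpumask_t hard,
                                  cpumask_t soft, cpumask_t mask)
{
    return !mask_subset(pool_online, soft) &&
           !mask_subset(hard, soft) &&
           mask_intersects(soft, mask);
}

int main(void)
{
    cpumask_t pool = 0x0F;                 /* cpus 0-3 online in the pool */

    /* Soft affinity covering the whole pool adds no information: skip. */
    printf("%d\n", soft_affinity_matters(pool, 0x0F, 0x0F, 0x0F)); /* 0 */

    /* Hard affinity already inside soft affinity: the soft step would
     * consider exactly the same cpus as the hard step, so skip. */
    printf("%d\n", soft_affinity_matters(pool, 0x03, 0x07, 0x0F)); /* 0 */

    /* Soft affinity strictly narrower than hard affinity: it matters. */
    printf("%d\n", soft_affinity_matters(pool, 0x0F, 0x03, 0x0F)); /* 1 */

    /* ...but not when it has no overlap with the mask being balanced. */
    printf("%d\n", soft_affinity_matters(pool, 0x0F, 0x03, 0x0C)); /* 0 */

    return 0;
}

Only the third case would trigger a separate soft-affinity balancing step; the others fall through to plain hard-affinity balancing, which is exactly the short-circuiting the generalized check is meant to enable.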