Re: [Xen-devel] [PATCH v2 2/6] xen: credit2: soft-affinity awareness in get_fallback_cpu()
On 07/27/2017 01:05 PM, Dario Faggioli wrote:
> By, basically, moving all the logic of the function
> inside the usual two steps (soft-affinity step and
> hard-affinity step) loop.
>
> While there, add two performance counters (in cpu_pick
> and in get_fallback_cpu() itself), in order to be able
> to tell how frequently it happens that we need to look
> for a fallback cpu.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
> Signed-off-by: Justin T. Weaver <jtweaver@xxxxxxxxxx>
> ---
> Cc: Anshul Makkar <anshulmakkar@xxxxxxxxx>
> Cc: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
> ---
> Changes from v1:
> - as discussed during review, only consider hard-affinity for the last stand.
>   The idea is not moving the vcpu to a diffrent runqueue because of
>   soft-affinity, as a part of finding a fallback cpu;
> - as discussed during review, added the performance counters;
> - BUG_ON(1) turned into ASSERT_UNREACHABLE(), as suggested during review;
> - return something same and random enough, at the end of the function (in
>   case we somehow manage to get there).
> ---
>  xen/common/sched_credit2.c   | 101 +++++++++++++++++++++++++++++++++---------
>  xen/include/xen/perfc_defn.h |   2 +
>  2 files changed, 82 insertions(+), 21 deletions(-)
>
> diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
> index 57e77df..aa8f169 100644
> --- a/xen/common/sched_credit2.c
> +++ b/xen/common/sched_credit2.c
> @@ -549,36 +549,93 @@ void smt_idle_mask_clear(unsigned int cpu, cpumask_t *mask)
>  }
>
>  /*
> - * When a hard affinity change occurs, we may not be able to check some
> - * (any!) of the other runqueues, when looking for the best new processor
> - * for svc (as trylock-s in csched2_cpu_pick() can fail). If that happens, we
> - * pick, in order of decreasing preference:
> - *  - svc's current pcpu;
> - *  - another pcpu from svc's current runq;
> - *  - any cpu.
> + * In csched2_cpu_pick(), it may not be possible to actually look at remote
> + * runqueues (the trylock-s on their spinlocks can fail!). If that happens,
> + * we pick, in order of decreasing preference:
> + *  1) svc's current pcpu, if it is part of svc's soft affinity;
> + *  2) a pcpu in svc's current runqueue that is also in svc's soft affinity;
> + *  3) svc's current pcpu, if it is part of svc's hard affinity;
> + *  4) a pcpu in svc's current runqueue that is also in svc's hard affinity;
> + *  5) just one valid pcpu from svc's hard affinity
> + *
> + * Of course, 1, 2 and 3 makes sense only if svc has a soft affinity. Also
> + * note that at least 6 is guaranteed to _always_ return at least one pcpu.

s/6/5/; ?
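(As an aside, mostly for anyone following the logic from the archive: the
two-step loop collapses the five cases above quite neatly. Below is a
minimal, self-contained model of that preference order -- plain C, with
uint64_t bitmasks standing in for cpumask_t, and every name in it invented
for illustration; it is a sketch of the idea, not the actual Xen code.)

/*
 * Standalone model of the 5-case fallback order described above.
 * uint64_t bitmasks stand in for cpumask_t; all names are invented
 * for illustration, none of this is the Xen implementation itself.
 */
#include <stdint.h>
#include <stdio.h>

enum balance_step { STEP_SOFT, STEP_HARD };

/* Index of the first set bit, or -1 if the mask is empty. */
static int first_cpu(uint64_t mask)
{
    for ( int i = 0; i < 64; i++ )
        if ( mask & (1ULL << i) )
            return i;
    return -1;
}

static int fallback_cpu(int cur_cpu, uint64_t soft, uint64_t hard,
                        uint64_t runq, uint64_t pool)
{
    for ( int bs = STEP_SOFT; bs <= STEP_HARD; bs++ )
    {
        /* Skip the soft step if there is no effective soft affinity. */
        if ( bs == STEP_SOFT && (soft == 0 || soft == hard) )
            continue;

        /* Only cpus in the domain's cpupool are valid. */
        uint64_t mask = ((bs == STEP_SOFT) ? soft : hard) & pool;

        if ( mask & (1ULL << cur_cpu) )   /* cases 1 and 3 */
            return cur_cpu;
        if ( mask & runq )                /* cases 2 and 4 */
            return first_cpu(mask & runq);
        if ( bs == STEP_HARD )            /* case 5: last stand */
            return first_cpu(mask);
    }
    return cur_cpu; /* unreachable while hard & pool is non-empty */
}

int main(void)
{
    /* cur cpu 3; soft = {0,1}, hard = {0..3}, runq = {2,3}: case 3 fires. */
    printf("fallback cpu: %d\n", fallback_cpu(3, 0x3, 0xf, 0xc, ~0ULL));
    return 0;
}

The point being that cases 1/3 and 2/4 are the same two checks run twice
with a progressively wider mask, which is exactly the structure the patch
adopts below.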
>   */
>  static int get_fallback_cpu(struct csched2_vcpu *svc)
>  {
>      struct vcpu *v = svc->vcpu;
> -    int cpu = v->processor;
> +    unsigned int bs;
>
> -    cpumask_and(cpumask_scratch_cpu(cpu), v->cpu_hard_affinity,
> -                cpupool_domain_cpumask(v->domain));
> +    SCHED_STAT_CRANK(need_fallback_cpu);
>
> -    if ( likely(cpumask_test_cpu(cpu, cpumask_scratch_cpu(cpu))) )
> -        return cpu;
> -
> -    if ( likely(cpumask_intersects(cpumask_scratch_cpu(cpu),
> -                                   &svc->rqd->active)) )
> +    for_each_affinity_balance_step( bs )
>      {
> -        cpumask_and(cpumask_scratch_cpu(cpu), &svc->rqd->active,
> -                    cpumask_scratch_cpu(cpu));
> -        return cpumask_first(cpumask_scratch_cpu(cpu));
> -    }
> +        int cpu = v->processor;
>
> -    ASSERT(!cpumask_empty(cpumask_scratch_cpu(cpu)));
> +        if ( bs == BALANCE_SOFT_AFFINITY &&
> +             !has_soft_affinity(v, v->cpu_hard_affinity) )
> +            continue;
>
> -    return cpumask_first(cpumask_scratch_cpu(cpu));
> +        affinity_balance_cpumask(v, bs, cpumask_scratch_cpu(cpu));
> +        cpumask_and(cpumask_scratch_cpu(cpu), cpumask_scratch_cpu(cpu),
> +                    cpupool_domain_cpumask(v->domain));
> +
> +        /*
> +         * This is cases 1 or 3 (depending on bs): if v->processor is (still)
> +         * in our affinity, go for it, for cache betterness.
> +         */
> +        if ( likely(cpumask_test_cpu(cpu, cpumask_scratch_cpu(cpu))) )
> +            return cpu;
> +
> +        /*
> +         * This is cases 2 or 4 (depending on bs): v->processor isn't there
> +         * any longer, check if we at least can stay in our current runq.
> +         */
> +        if ( likely(cpumask_intersects(cpumask_scratch_cpu(cpu),
> +                                       &svc->rqd->active)) )
> +        {
> +            cpumask_and(cpumask_scratch_cpu(cpu), cpumask_scratch_cpu(cpu),
> +                        &svc->rqd->active);
> +            return cpumask_first(cpumask_scratch_cpu(cpu));
> +        }
> +
> +        /*
> +         * We may well pick any valid pcpu from our soft-affinity, outside
> +         * of our current runqueue, but we decide not to. In fact, changing
> +         * runqueue is slow, affects load distribution, and is a source of
> +         * overhead for the vcpus running on the other runqueue (we need the
> +         * lock). So, better do that as a consequence of a well informed
> +         * decision (or if we really don't have any other chance, as we will,
> +         * at step 6, if we get to there).

s/6/5/;

> +         *
> +         * Also, being here, looking for a fallback, is an unfortunate and
> +         * infrequent event, while the decision of putting us in the runqueue
> +         * wehere we are was (likely) made taking all the relevant factors
> +         * into account. So let's not disrupt that, just for the sake of
> +         * soft-affinity, and let's wait here to be able to made (hopefully,
> +         * soon), another similar well informed decision.
> +         */
> +        if ( bs == BALANCE_SOFT_AFFINITY )
> +            continue;
> +
> +        /*
> +         * This is cases 6: last stand, just one valid pcpu from our hard
> +         * affinity. It's guaranteed that there is at least one valid cpu,
> +         * and therefore we are sure that we return it, and never really
> +         * exit the loop.
> +         */

s/6/5/;

Everything else looks good -- I can fix these up on commit.

Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxx>

 -George