Re: [Xen-devel] [PATCH v2 1/4] xen: credit2: implement utilization cap
On 8/18/17 4:50 PM, Dario Faggioli wrote:
> @@ -474,6 +586,12 @@ static inline struct csched2_runqueue_data *c2rqd(const struct scheduler *ops,
>      return &csched2_priv(ops)->rqd[c2r(cpu)];
>  }
>
> +/* Does the domain of this vCPU have a cap? */
> +static inline bool has_cap(const struct csched2_vcpu *svc)
> +{
> +    return svc->budget != STIME_MAX;
> +}
> +
>  /*
>   * Hyperthreading (SMT) support.
>   *
> @@ -1515,7 +1633,16 @@ static void reset_credit(const struct scheduler *ops, int cpu, s_time_t now,
>       * that the credit it has spent so far get accounted.
>       */
>      if ( svc->vcpu == curr_on_cpu(svc_cpu) )
> +    {
>          burn_credits(rqd, svc, now);
> +        /*
> +         * And, similarly, in case it has run out of budget, as a
> +         * consequence of this round of accounting, we also must inform
> +         * its pCPU that it's time to park it, and pick up someone else.
> +         */
> +        if ( unlikely(svc->budget <= 0) )
> +            tickle_cpu(svc_cpu, rqd);
> +    }

This is for the accounting of credit; why does it impact the budget? Do you mean that the budget of the current vCPU may have expired while we were doing the calculation for credit? Just a bit confused. Have you seen this kind of scenario? Can you please explain it?

>
>          start_credit = svc->credit;
>
> @@ -1571,27 +1698,35 @@ void burn_credits(struct csched2_runqueue_data *rqd,
>
>      delta = now - svc->start_time;
>
> -    if ( likely(delta > 0) )
> -    {
> -        SCHED_STAT_CRANK(burn_credits_t2c);
> -        t2c_update(rqd, delta, svc);
> -        svc->start_time = now;
> -    }
> -    else if ( delta < 0 )
> +    if ( unlikely(delta <= 0) )
>      {

[...]

> +static void replenish_domain_budget(void* data)
> +{
> +    struct csched2_dom *sdom = data;
> +    unsigned long flags;
> +    s_time_t now;
> +    LIST_HEAD(parked);
> +
> +    spin_lock_irqsave(&sdom->budget_lock, flags);
> +
> +    now = NOW();
> +
> +    /*
> +     * Let's do the replenishment. Note, though, that a domain may overrun,
> +     * which means the budget would have gone below 0 (reasons may be system
> +     * overbooking, accounting issues, etc.). It also may happen that we are
> +     * handling the replenishment (much) later than we should (reasons may
> +     * again be overbooking, or issues with timers).
> +     *
> +     * Even in cases of overrun or delay, however, we expect that in 99% of
> +     * cases, doing just one replenishment will be good enough for being able
> +     * to unpark the vCPUs that are waiting for some budget.
> +     */
> +    do_replenish(sdom);
> +
> +    /*
> +     * And now, the special cases:
> +     * 1) if we are late enough to have skipped (at least) one full period,
> +     *    what we must do is doing more replenishments. Note that, however,
> +     *    every time we add tot_budget to the budget, we also move next_repl
> +     *    away by CSCHED2_BDGT_REPL_PERIOD, to make sure the cap is always
> +     *    respected.
> +     */
> +    if ( unlikely(sdom->next_repl <= now) )
> +    {
> +        do
> +            do_replenish(sdom);
> +        while ( sdom->next_repl <= now );
> +    }

Is this condition necessary? Rewording it as "if we overran by more than tot_budget in the previous run" would make it clearer.
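As a side note, to make the two special cases above concrete: below is a minimal, stand-alone sketch, not the actual Xen code. The period length and the numbers are made up, and do_replenish() is modelled only on what the quoted comment says it does (add tot_budget and move next_repl forward by one CSCHED2_BDGT_REPL_PERIOD). It shows why the catch-up loop alone is not enough and a budget <= 0 bail-out is still needed.

#include <stdio.h>

typedef long long s_time_t;

#define BDGT_REPL_PERIOD 30LL   /* made-up period length, illustration only */

struct dom {
    s_time_t budget;      /* may be negative after an overrun */
    s_time_t tot_budget;  /* budget granted per period (the cap) */
    s_time_t next_repl;   /* when the next replenishment is due */
};

/* What one replenishment is assumed to do, per the quoted comment:
 * add tot_budget and push next_repl one period further out. */
static void do_replenish(struct dom *d)
{
    d->budget += d->tot_budget;
    d->next_repl += BDGT_REPL_PERIOD;
}

int main(void)
{
    /* Timer fired very late and the domain overran heavily. */
    struct dom d = { .budget = -45, .tot_budget = 10, .next_repl = 0 };
    s_time_t now = 65;

    do_replenish(&d);              /* the common, "99%" case */

    /* Case 1: whole periods were skipped, so catch up one period at a
     * time, which keeps the per-period cap respected. */
    while (d.next_repl <= now)
        do_replenish(&d);

    /* Case 2: a big enough overrun leaves budget <= 0 even now, so the
     * parked vCPUs still cannot be unparked and we just wait for the
     * next replenishment. */
    int can_unpark = d.budget > 0;

    /* If we can unpark, make sure catch-up replenishments did not pile
     * up beyond one period's worth of budget. */
    if (can_unpark && d.budget > d.tot_budget)
        d.budget = d.tot_budget;

    printf("budget=%lld next_repl=%lld can_unpark=%d\n",
           (long long)d.budget, (long long)d.next_repl, can_unpark);
    return 0;
}

With budget = -45 and tot_budget = 10, three replenishments still leave the budget at -15, so the vCPUs stay parked until a later replenishment; that is exactly the situation case 2 guards against.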
> +    /*
> +     * 2) if we overrun by more than tot_budget, then budget+tot_budget is
> +     *    still < 0, which means that we can't unpark the vCPUs. Let's bail,
> +     *    and wait for future replenishments.
> +     */
> +    if ( unlikely(sdom->budget <= 0) )
> +    {
> +        spin_unlock_irqrestore(&sdom->budget_lock, flags);
> +        goto out;
> +    }
> +
> +    /* Since we do more replenishments, make sure we didn't overshot. */
> +    sdom->budget = min(sdom->budget, sdom->tot_budget);
> +
> +    /*
> +     * As above, let's prepare the temporary list, out of the domain's
> +     * parked_vcpus list, now that we hold the budget_lock. Then, drop such
> +     * lock, and pass the list to the unparking function.
> +     */
> +    list_splice_init(&sdom->parked_vcpus, &parked);
> +
> +    spin_unlock_irqrestore(&sdom->budget_lock, flags);
> +
> +    unpark_parked_vcpus(sdom->dom->cpupool->sched, &parked);
> +
> + out:
> +    set_timer(sdom->repl_timer, sdom->next_repl);
> +}
> +
>  #ifndef NDEBUG
>  static inline void
>  csched2_vcpu_check(struct vcpu *vc)
> @@ -1658,6 +2035,9 @@ csched2_alloc_vdata(const struct scheduler *ops, struct vcpu *vc, void *dd)
>      }
>
>      svc->tickled_cpu = -1;
> +
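One more side note, going back to the has_cap() helper quoted near the top: the cap machinery hinges on the invariant that vCPUs of uncapped domains carry budget == STIME_MAX. The fragment below is only an illustrative sketch (the init helper and the initial value for capped vCPUs are assumptions for illustration, not the actual patch) of how that invariant might be established when a vCPU's scheduler data is set up.

#include <stdbool.h>
#include <stdint.h>

typedef int64_t s_time_t;
#define STIME_MAX INT64_MAX   /* stand-in for Xen's STIME_MAX */

struct csched2_vcpu_sketch {
    s_time_t budget;          /* remaining budget; STIME_MAX means "no cap" */
};

/* Same test as the quoted has_cap(). */
static inline bool has_cap(const struct csched2_vcpu_sketch *svc)
{
    return svc->budget != STIME_MAX;
}

/* Hypothetical init helper (not in the patch): uncapped vCPUs get the
 * STIME_MAX sentinel so every budget-related path can skip them cheaply;
 * capped vCPUs start empty and pick up budget at the next replenishment. */
static void init_vcpu_budget(struct csched2_vcpu_sketch *svc, bool domain_capped)
{
    svc->budget = domain_capped ? 0 : STIME_MAX;
}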
The rest looks good to me.

Thanks
Anshul

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel