Re: [Xen-devel] [PATCH 3/4] xen: credit2: improve distribution of budget (for domains with caps)
On Thu, Jun 8, 2017 at 1:09 PM, Dario Faggioli
<dario.faggioli@xxxxxxxxxx> wrote:
> Instead of letting the vCPU that for first tries to get
s/for//;
> some budget take it all (although temporarily), allow each
> vCPU to only get a specific quota of the total budget.
>
> This improves fairness, allows for more parallelism, and
> prevents vCPUs from not being able to get any budget (e.g.,
> because some other vCPU always comes before and gets it all)
> for one or more period, and hence starve (and couse troubles
* cause
> in guest kernels, such as livelocks, triggering ofwhatchdogs,
* 'of watchdogs' (missing space)
> etc.).
>
> Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
> ---
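
To make the intent concrete (numbers are hypothetical, for illustration only): with a 40% cap and a 10ms replenishment period, a domain receives 4ms of budget each period; split among four vCPUs, each gets a 1ms quota, instead of whichever vCPU asks first draining the whole 4ms.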
> diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
> index 3f7b8f0..97efde8 100644
> --- a/xen/common/sched_credit2.c
> +++ b/xen/common/sched_credit2.c
> @@ -506,7 +506,7 @@ struct csched2_vcpu {
>
> int credit;
>
> - s_time_t budget;
> + s_time_t budget, budget_quota;
> struct list_head parked_elem; /* On the parked_vcpus list */
>
> s_time_t start_time; /* When we were scheduled (used for credit) */
> @@ -1627,8 +1627,16 @@ static bool vcpu_try_to_get_budget(struct csched2_vcpu *svc)
>
> if ( sdom->budget > 0 )
> {
> - svc->budget = sdom->budget;
> - sdom->budget = 0;
> + s_time_t budget;
> +
> + /* Get our quote, if there's at least as much budget */
*quota
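
The quoted hunk stops before the new logic, so here is a minimal, self-contained sketch of the quota-limited grab it presumably performs. The budget/budget_quota fields mirror the patch context above; the standalone type definitions, the lock-free flow, and main() are assumptions for illustration, not the committed code.

    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef int64_t s_time_t;

    struct csched2_dom {
        s_time_t budget;               /* budget left in the current period */
    };

    struct csched2_vcpu {
        struct csched2_dom *sdom;
        s_time_t budget, budget_quota; /* as declared in the patch */
    };

    static bool vcpu_try_to_get_budget(struct csched2_vcpu *svc)
    {
        struct csched2_dom *sdom = svc->sdom;

        if ( sdom->budget > 0 )
        {
            s_time_t budget;

            /* Get our quota, if there's at least that much budget left... */
            if ( sdom->budget >= svc->budget_quota )
                budget = svc->budget_quota;
            else
                budget = sdom->budget;  /* ...otherwise take what remains. */

            sdom->budget -= budget;
            svc->budget += budget;
        }

        return svc->budget > 0;
    }

    int main(void)
    {
        struct csched2_dom dom = { .budget = 3000 };
        struct csched2_vcpu v0 = { &dom, 0, 2000 }, v1 = { &dom, 0, 2000 };

        /* v0 only gets its quota (2000), leaving 1000 for v1. */
        printf("v0: %d, budget %" PRId64 "\n", vcpu_try_to_get_budget(&v0), v0.budget);
        printf("v1: %d, budget %" PRId64 "\n", vcpu_try_to_get_budget(&v1), v1.budget);
        return 0;
    }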
> @@ -2619,6 +2650,7 @@ csched2_dom_cntl(
> vcpu_schedule_unlock(lock, svc->vcpu);
> }
> }
> +
> sdom->cap = op->u.credit2.cap;
Since you'll be re-spinning, might as well move this into the previous
patch. :-)
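
Not visible in this quote: when the cap changes, each vCPU's budget_quota presumably needs recomputing as well. A hedged sketch of an even split, where tot_budget and nr_vcpus are hypothetical names, not taken from the patch:

    #include <stdint.h>

    typedef int64_t s_time_t;

    /*
     * Hypothetical helper, not from the quoted patch: divide a domain's
     * per-period budget evenly among its vCPUs to obtain each budget_quota.
     */
    static s_time_t vcpu_budget_quota(s_time_t tot_budget, unsigned int nr_vcpus)
    {
        return nr_vcpus ? tot_budget / nr_vcpus : tot_budget;
    }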
Everything else looks good, so with those changes you can add:
Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxx>