Re: [Xen-devel] [PATCH 16/16] xen: sched: implement vcpu hard affinity in Credit2
On Fri, Mar 18, 2016 at 7:06 PM, Dario Faggioli
<dario.faggioli@xxxxxxxxxx> wrote:
> From: Justin Weaver <jtweaver@xxxxxxxxxx>
>
> as it was still missing.
>
> Note that this patch "only" implements hard affinity,
> i.e., the possibility of specifying on what pCPUs a
> certain vCPU can run. Soft affinity (which expresses a
> preference for vCPUs to run on certain pCPUs) is still
> not supported by Credit2, even after this patch.
>
> Signed-off-by: Justin Weaver <jtweaver@xxxxxxxxxx>
> Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
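For anyone reading along: "hard affinity" here is the per-vCPU mask of
pCPUs the vCPU is allowed to run on, which is what the patch consults
below as svc->vcpu->cpu_hard_affinity. A simplified sketch of the
relevant field (not the full struct definition):

    struct vcpu {
        /* ... many other fields ... */

        /* Bitmap of pCPUs this vCPU may run on; the scheduler must
         * never place the vCPU on a pCPU outside this mask. */
        cpumask_var_t cpu_hard_affinity;
    };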
Just checking, are the main changes between this patch and the v4 that
Justin posted:
1) Moving the "scratch_mask" to a different patch
2) The code-cleanups you listed?
One rather tangential question...
> ---
> Cc: George Dunlap <dunlapg@xxxxxxxxx>
> ---
>  xen/common/sched_credit2.c |  131 ++++++++++++++++++++++++++++++++++----------
>  1 file changed, 102 insertions(+), 29 deletions(-)
>
> diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
> index a650216..3190eb3 100644
> --- a/xen/common/sched_credit2.c
> +++ b/xen/common/sched_credit2.c
> @@ -327,6 +327,36 @@ struct csched2_dom {
>      uint16_t nr_vcpus;
>  };
>
> +/*
> + * When a hard affinity change occurs, we may not be able to check some
> + * (any!) of the other runqueues, when looking for the best new processor
> + * for svc (as trylock-s in choose_cpu() can fail). If that happens, we
> + * pick, in order of decreasing preference:
> + * - svc's current pcpu;
> + * - another pcpu from svc's current runq;
> + * - any cpu.
> + */
> +static int get_fallback_cpu(struct csched2_vcpu *svc)
> +{
> +    int cpu;
> +
> +    if ( likely(cpumask_test_cpu(svc->vcpu->processor,
> +                                 svc->vcpu->cpu_hard_affinity)) )
> +        return svc->vcpu->processor;
> +
> +    cpumask_and(cpumask_scratch, svc->vcpu->cpu_hard_affinity,
> +                &svc->rqd->active);
> +    cpu = cpumask_first(cpumask_scratch);
> +    if ( likely(cpu < nr_cpu_ids) )
> +        return cpu;
> +
> +    cpumask_and(cpumask_scratch, svc->vcpu->cpu_hard_affinity,
> +                cpupool_domain_cpumask(svc->vcpu->domain));
> +
> +    ASSERT(!cpumask_empty(cpumask_scratch));
> +
> +    return cpumask_first(cpumask_scratch);
> +}
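As I read it, the intended caller is choose_cpu(): when its trylock on a
remote runqueue's lock fails it gives up on that runqueue, and if that
happens everywhere it falls back to the above. A rough, purely
illustrative sketch of that calling pattern -- not the actual
choose_cpu() code, and "other_rqd" just stands for whichever remote
runqueue we could not lock:

    if ( !spin_trylock(&other_rqd->lock) )
    {
        /* Can't look inside that runqueue; stay as "local" as the
         * hard affinity allows, per the preference order above. */
        new_cpu = get_fallback_cpu(svc);
    }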
>
> /*
> * Time-to-credit, credit-to-time.
> @@ -560,8 +590,9 @@ runq_tickle(const struct scheduler *ops, unsigned int cpu, struct csched2_vcpu *
>          goto tickle;
>      }
>
> -    /* Get a mask of idle, but not tickled */
> +    /* Get a mask of idle, but not tickled, that new is allowed to run on. */
>      cpumask_andnot(&mask, &rqd->idle, &rqd->tickled);
> +    cpumask_and(&mask, &mask, new->vcpu->cpu_hard_affinity);
It looks like this uses a cpumask_t on the stack -- can we use
scratch_mask here, or is there some reason we need to use the local
variable?
But that's really something to either add to the previous patch, or to
do in yet a different patch.
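Concretely, I was thinking of something along these lines -- purely
illustrative, and assuming "scratch_mask" is however we end up naming
and accessing the per-pCPU scratch cpumask from the earlier patch:

    cpumask_t *scratch = scratch_mask;

    /* Same computation as in the hunk above, but without putting a
     * cpumask_t on the stack. */
    cpumask_andnot(scratch, &rqd->idle, &rqd->tickled);
    cpumask_and(scratch, scratch, new->vcpu->cpu_hard_affinity);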
Acked-by: George Dunlap <george.dunlap@xxxxxxxxxx>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel