Re: [Xen-devel] [PATCH v4 2/5] sched: credit2: respect per-vcpu hard affinity
And here we are, finally... :-/
On Sun, 2015-07-12 at 22:13 -1000, Justin T. Weaver wrote:
> by making sure that vcpus only run on the pcpu(s) they are allowed to
> run on based on their hard affinity cpu masks.
>
> Signed-off-by: Justin T. Weaver <jtweaver@xxxxxxxxxx>
>
So, this patch now looks good to me. I've found:
- a few style issues (indentation)
- a comment that I would reword
- a trivial code issue (details below)
With all these addressed, this patch is:
Reviewed-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
> ---
> Changes in v4:
> * Renamed scratch_mask to _scratch_mask
> * Renamed csched2_cpumask to scratch_mask
> * Removed "else continue" in function choose_cpu's for_each_cpu loop
> to make
> the code less confusing
> * Added an ASSERT that triggers if _scratch_mask[cpu] is NULL after
> allocation in function csched2_alloc_pdata
>
> [...]
>
> * Changed "run queue" in several comments to "runqueue"
> * Renamed function valid_vcpu_migration to vcpu_is_migrateable
> * Made condition check in function vcpu_is_migrateable "positive"
>
Ok, thanks for this really detailed list of what's changed in v4. It's
very thorough and, AFAICT, it covers all the review comments I can find
when looking back at the v3 submission.
> --- a/xen/common/sched_credit2.c
> +++ b/xen/common/sched_credit2.c
> @@ -1091,45 +1131,55 @@ choose_cpu(const struct scheduler *ops, struct vcpu *vc)
>         {
>             printk("%s: Runqueue migrate aborted because target runqueue disappeared!\n",
>                    __func__);
> -            /* Fall-through to normal cpu pick */
>         }
>         else
>         {
> -            d2printk("%pv +\n", svc->vcpu);
> -            new_cpu = cpumask_cycle(vc->processor, &svc->migrate_rqd->active);
> -            goto out_up;
> +            cpumask_and(scratch_mask, vc->cpu_hard_affinity,
> +                &svc->migrate_rqd->active);
>
Indentation.
    cpumask_and(scratch_mask, vc->cpu_hard_affinity,
                &svc->migrate_rqd->active);
> @@ -1138,12 +1188,14 @@ choose_cpu(const struct scheduler *ops, struct vcpu *vc)
>         }
>     }
>
> -    /* We didn't find anyone (most likely because of spinlock contention); leave it where it is */
> +    /* We didn't find anyone (most likely because of spinlock contention). */
>     if ( min_rqi == -1 )
> -        new_cpu = vc->processor;
> +        new_cpu = get_fallback_cpu(svc);
>     else
>     {
> -        new_cpu = cpumask_cycle(vc->processor, &prv->rqd[min_rqi].active);
> +        cpumask_and(scratch_mask, vc->cpu_hard_affinity,
> +            &prv->rqd[min_rqi].active);
>
Here as well.
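(For readers without the full patch in front of them: get_fallback_cpu()
is introduced earlier in this patch and is not quoted in these hunks. Its
exact body isn't shown here, but the idea is to keep vc->processor when
hard affinity still allows it, and otherwise fall back to some online pcpu
from the hard affinity mask. A rough sketch, not the patch's exact code:

    static int get_fallback_cpu(struct csched2_vcpu *svc)
    {
        struct vcpu *v = svc->vcpu;

        /* If the current processor is still allowed, stay put. */
        if ( likely(cpumask_test_cpu(v->processor, v->cpu_hard_affinity)) )
            return v->processor;

        /* Otherwise, pick any online pcpu from the hard affinity mask. */
        cpumask_and(scratch_mask, v->cpu_hard_affinity, &cpu_online_map);
        return cpumask_any(scratch_mask);
    }
)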
> @@ -1223,7 +1275,12 @@ static void migrate(const struct scheduler *ops,
>         on_runq=1;
>     }
>     __runq_deassign(svc);
> -    svc->vcpu->processor = cpumask_any(&trqd->active);
> +
> +    cpumask_and(scratch_mask, svc->vcpu->cpu_hard_affinity,
> +        &trqd->active);
>
And here.
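(The hunk is truncated here; presumably the masking is then paired with
picking a pcpu out of the result, along the lines of:

    cpumask_and(scratch_mask, svc->vcpu->cpu_hard_affinity,
                &trqd->active);
    svc->vcpu->processor = cpumask_any(scratch_mask);

so that the new processor is both in the target runqueue and in the vcpu's
hard affinity mask.)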
> @@ -1237,6 +1294,17 @@ static void migrate(const struct scheduler *ops,
>     }
> }
>
> +/*
> + * Migration of svc to runqueue rqd is a valid option if svc is not already
> + * flagged to migrate and if svc is allowed to run on at least one of the
> + * pcpus assigned to rqd based on svc's hard affinity mask.
> + */
> +static bool_t vcpu_is_migrateable(struct csched2_vcpu *svc,
> +                                  struct csched2_runqueue_data *rqd)
> +{
> +    return !test_bit(__CSFLAG_runq_migrate_request, &svc->flags)
> +        && cpumask_intersects(svc->vcpu->cpu_hard_affinity, &rqd->active);
>
This one, I'd make it look like this:
    return !test_bit(__CSFLAG_runq_migrate_request, &svc->flags) &&
           cpumask_intersects(svc->vcpu->cpu_hard_affinity, &rqd->active);
> @@ -1415,11 +1480,20 @@ csched2_vcpu_migrate(
>
>     /* Check if new_cpu is valid */
>     BUG_ON(!cpumask_test_cpu(new_cpu, &CSCHED2_PRIV(ops)->initialized));
> +    ASSERT(cpumask_test_cpu(new_cpu, vc->cpu_hard_affinity));
>
>     trqd = RQD(ops, new_cpu);
>
> +    /*
> +     * Without the processor assignment after the else, vc may be assigned
> +     * to a processor it is not allowed to run on. In that case,
> +     * runq_candidate might only get called for the old cpu, and vc will
> +     * not get to run due to the hard affinity check.
> +     */
>     if ( trqd != svc->rqd )
>         migrate(ops, svc, trqd, NOW());
> +    else
> +        vc->processor = new_cpu;
> }
>
About the comment: I don't like it describing what would happen if a
specific piece of code, which is in fact there, were missing. I'd go for
something like this:
"Do the actual movement toward new_cpu, and update vc->processor. If we
are changing runqueue, migrate() takes care of everything. If we are
not changing runqueue, we need to update vc->processor here. In fact,
if, for instance, we are here because the vcpu's hard affinity is being
changed, we don't want to risk leaving vc->processor pointing to a pcpu
where the vcpu can't run any longer"
> @@ -2047,6 +2125,9 @@ static void init_pcpu(const struct scheduler *ops, int cpu)
>
>     spin_unlock_irqrestore(&prv->lock, flags);
>
> +    free_cpumask_var(_scratch_mask[cpu]);
> +    _scratch_mask[cpu] = NULL;
> +
>     return;
> }
>
And here we are: this is the only issue in the code (i.e., not about
comment wording or style issues) that I've found.
How come this hunk is in init_pcpu()? Shouldn't it live in
csched2_free_pdata()? I think it should...
From the look of things, it is probably the result of a rebase of the
patch which went slightly wrong, without you noticing it. :-)
> @@ -2061,6 +2142,16 @@ csched2_alloc_pdata(const struct scheduler *ops, int cpu)
>         printk("%s: cpu %d not online yet, deferring initializatgion\n",
>                __func__, cpu);
>
> +    /*
> +     * For each new pcpu, allocate a cpumask_t for use throughout the
> +     * scheduler to avoid putting any cpumask_t structs on the stack.
>
"to avoid putting too many cpumask_t structs on the stack"
In fact, there are some already, and we're not removing them; we're just
avoiding adding more of them.
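With that, the allocation would read something like this (a sketch; the
exact surrounding code and error handling are not quoted here, and the
ASSERT is the one mentioned in the changelog):

    /*
     * For each new pcpu, allocate a cpumask_t for use throughout the
     * scheduler, to avoid putting too many cpumask_t structs on the
     * stack.
     */
    if ( _scratch_mask[cpu] == NULL )
    {
        if ( !alloc_cpumask_var(&_scratch_mask[cpu]) )
            return NULL;
        ASSERT(_scratch_mask[cpu] != NULL);
    }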
Thanks and Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)