
Re: [Xen-devel] [PATCH v2 02/10] xen: credit2: clear bit instead of skip step in runq_tickle()



On Thu, Feb 9, 2017 at 1:58 PM, Dario Faggioli
<dario.faggioli@xxxxxxxxxx> wrote:
> Since we are doing cpumask manipulation already, clear the
> current CPU's bit in the mask right away. Doing that saves us
> an if later in the code.
>
> No functional change intended.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>

Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxx>

> ---
> Cc: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
> ---
> Changes from v1:
> * rebased on current staging.
> ---
>  xen/common/sched_credit2.c |    5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
>
> diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
> index 741d372..920a7ce 100644
> --- a/xen/common/sched_credit2.c
> +++ b/xen/common/sched_credit2.c
> @@ -991,7 +991,7 @@ runq_tickle(const struct scheduler *ops, struct csched2_vcpu *new, s_time_t now)
>      cpumask_andnot(&mask, &rqd->active, &rqd->idle);
>      cpumask_andnot(&mask, &mask, &rqd->tickled);
>      cpumask_and(&mask, &mask, cpumask_scratch_cpu(cpu));
> -    if ( cpumask_test_cpu(cpu, &mask) )
> +    if ( __cpumask_test_and_clear_cpu(cpu, &mask) )
>      {
>          cur = CSCHED2_VCPU(curr_on_cpu(cpu));
>          burn_credits(rqd, cur, now);
> @@ -1007,8 +1007,7 @@ runq_tickle(const struct scheduler *ops, struct csched2_vcpu *new, s_time_t now)
>      for_each_cpu(i, &mask)
>      {
>          /* Already looked at this one above */
> -        if ( i == cpu )
> -            continue;
> +        ASSERT(i != cpu);
>
>          cur = CSCHED2_VCPU(curr_on_cpu(i));
>
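
For anyone reading along, here is a minimal standalone sketch of why
clearing the bit up front lets the later scan assert rather than skip.
This is not Xen code: the plain unsigned long mask and the
test_and_clear() helper are only stand-ins for cpumask_t and
__cpumask_test_and_clear_cpu(), and runq_tickle()'s real logic is not
reproduced.

    /* Standalone sketch; compile with any C99 compiler. */
    #include <assert.h>
    #include <stdio.h>

    /* Hypothetical stand-in for __cpumask_test_and_clear_cpu():
     * report whether the bit was set, and clear it either way. */
    static int test_and_clear(int bit, unsigned long *mask)
    {
        int was_set = (*mask >> bit) & 1;
        *mask &= ~(1UL << bit);
        return was_set;
    }

    int main(void)
    {
        unsigned long mask = 0x2d;   /* "CPUs" 0, 2, 3 and 5 */
        int cpu = 3;                 /* the CPU handled first */

        if ( test_and_clear(cpu, &mask) )
            printf("handled current cpu %d first\n", cpu);

        /* cpu's bit is already gone, so the scan below can never see
         * it again; that is what lets the patch turn
         * 'if ( i == cpu ) continue;' into ASSERT(i != cpu). */
        for ( int i = 0; i < (int)(8 * sizeof(mask)); i++ )
            if ( (mask >> i) & 1 )
            {
                assert(i != cpu);
                printf("would consider cpu %d\n", i);
            }

        return 0;
    }

Built and run as-is, the assert never trips, which mirrors the property
the patch relies on in runq_tickle().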

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

