
Re: [Xen-devel] [PATCH] Cosmetic change to schedule_cpu_switch



Thanks, I'll fold this into my next patch. You'll see from my recent
changesets that I'm currently tearing into the scheduler and cpupool code as
part of my CPU hotplug cleanup. I think there must be scope for further
rationalisation of the sched-if interfaces, as the sched_ops have sprouted a
bewildering array of extra functions for cpupool support. I'm sure it's
overcomplicated.

 -- Keir

On 18/05/2010 21:22, "George Dunlap" <George.Dunlap@xxxxxxxxxxxxx> wrote:

> Using 'v' generally suggests a generic vcpu rather than any particular
> one.  In this case, we always use the idle vcpu; naming it idle_vcpu
> explicitly makes the code easier to grok.
> 
> No functional changes.
> 
> Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
> 
> diff -r c6db509d7e46 -r ebad6ba33a8f xen/common/schedule.c
> --- a/xen/common/schedule.c Tue May 18 15:18:26 2010 +0100
> +++ b/xen/common/schedule.c Tue May 18 15:22:27 2010 -0500
> @@ -1151,7 +1151,7 @@
>  void schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
>  {
>      unsigned long flags;
> -    struct vcpu *v;
> +    struct vcpu *idle_vcpu;
>      void *ppriv, *ppriv_old, *vpriv = NULL;
>      struct scheduler *old_ops = per_cpu(scheduler, cpu);
>      struct scheduler *new_ops = (c == NULL) ? &ops : c->sched;
> @@ -1159,21 +1159,21 @@
>      if ( old_ops == new_ops )
>          return;
>  
> -    v = per_cpu(schedule_data, cpu).idle;
> +    idle_vcpu = per_cpu(schedule_data, cpu).idle;
>      ppriv = SCHED_OP(new_ops, alloc_pdata, cpu);
>      if ( c != NULL )
> -        vpriv = SCHED_OP(new_ops, alloc_vdata, v, v->domain->sched_priv);
> +        vpriv = SCHED_OP(new_ops, alloc_vdata, idle_vcpu, idle_vcpu->domain->sched_priv);
>  
>      spin_lock_irqsave(per_cpu(schedule_data, cpu).schedule_lock, flags);
>  
>      if ( c == NULL )
>      {
> -        vpriv = v->sched_priv;
> -        v->sched_priv = per_cpu(schedule_data, cpu).sched_idlevpriv;
> +        vpriv = idle_vcpu->sched_priv;
> +        idle_vcpu->sched_priv = per_cpu(schedule_data, cpu).sched_idlevpriv;
>      }
>      else
>      {
> -        v->sched_priv = vpriv;
> +        idle_vcpu->sched_priv = vpriv;
>          vpriv = NULL;
>      }
>      SCHED_OP(old_ops, tick_suspend, cpu);
> @@ -1181,7 +1181,7 @@
>      ppriv_old = per_cpu(schedule_data, cpu).sched_priv;
>      per_cpu(schedule_data, cpu).sched_priv = ppriv;
>      SCHED_OP(new_ops, tick_resume, cpu);
> -    SCHED_OP(new_ops, insert_vcpu, v);
> +    SCHED_OP(new_ops, insert_vcpu, idle_vcpu);
>  
>      spin_unlock_irqrestore(per_cpu(schedule_data, cpu).schedule_lock, flags);
>  
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel


