
Re: [Xen-devel] [PATCH] x86/hpet: simplify hpet_get_channel()


  • To: Jan Beulich <JBeulich@xxxxxxxx>, xen-devel <xen-devel@xxxxxxxxxxxxx>
  • From: Keir Fraser <keir@xxxxxxx>
  • Date: Fri, 23 Mar 2012 10:24:11 +0000
  • Cc: Gang Wei <gang.wei@xxxxxxxxx>
  • Delivery-date: Fri, 23 Mar 2012 10:25:09 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>
  • Thread-index: Ac0I3xZk2L2u4UJP6E+93rH4rF276g==
  • Thread-topic: [Xen-devel] [PATCH] x86/hpet: simplify hpet_get_channel()

On 23/03/2012 09:20, "Jan Beulich" <JBeulich@xxxxxxxx> wrote:

> There's no need for a lock here; eliminating it makes the function a
> leaf function, thus allowing for better (and smaller) code.
> 
> Further, use the variable next_channel according to its name - so far
> it represented the most recently used channel rather than the next one
> to use.

Why is the following for-loop changed into a do-while loop? It looks
like it just makes the code longer, especially as you replace a %
operator with an open-coded equivalent for no reason I can see.
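
Something like this (a sketch only, untested; it keeps your lockless
cmpxchg advance of next_channel but retains the original for-loop and
% arithmetic) is what I would have expected:

    /* Lockless advance: claim slot 'next', publish next+1 (mod n). */
    do {
        next = next_channel;
        i = (next + 1) % num_hpets_used;
    } while ( cmpxchg(&next_channel, next, i) != next );

    /* try unused channel first */
    for ( i = next; i < next + num_hpets_used; i++ )
    {
        ch = &hpet_events[i % num_hpets_used];
        if ( !test_and_set_bit(HPET_EVT_USED_BIT, &ch->flags) )
        {
            ch->cpu = cpu;
            return ch;
        }
    }
    /* Fall through to sharing hpet_events[next], as before. */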

 -- Keir

> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> 
> --- a/xen/arch/x86/hpet.c
> +++ b/xen/arch/x86/hpet.c
> @@ -402,30 +402,35 @@ static void __init hpet_fsb_cap_lookup(v
>  static struct hpet_event_channel *hpet_get_channel(unsigned int cpu)
>  {
>      static unsigned int next_channel;
> -    static spinlock_t next_lock = SPIN_LOCK_UNLOCKED;
>      unsigned int i, next;
>      struct hpet_event_channel *ch;
>  
>      if ( num_hpets_used == 0 )
>          return hpet_events;
>  
> -    spin_lock(&next_lock);
> -    next = next_channel = (next_channel + 1) % num_hpets_used;
> -    spin_unlock(&next_lock);
> +    do {
> +        next = next_channel;
> +        if ( (i = next + 1) == num_hpets_used )
> +            i = 0;
> +    } while ( cmpxchg(&next_channel, next, i) != next );
>  
>      /* try unused channel first */
> -    for ( i = next; i < next + num_hpets_used; i++ )
> -    {
> -        ch = &hpet_events[i % num_hpets_used];
> +    ch = &hpet_events[i = next];
> +    do {
>          if ( !test_and_set_bit(HPET_EVT_USED_BIT, &ch->flags) )
>          {
>              ch->cpu = cpu;
>              return ch;
>          }
> -    }
> +        ++ch;
> +        if ( ++i == num_hpets_used )
> +        {
> +            i = 0;
> +            ch = hpet_events;
> +        }
> +    } while ( i != next );
>  
> -    /* share a in-use channel */
> -    ch = &hpet_events[next];
> +    /* Share an in-use channel. */
>      if ( !test_and_set_bit(HPET_EVT_USED_BIT, &ch->flags) )
>          ch->cpu = cpu;
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

