
Re: [Xen-devel] [PATCH v9 05/19] qspinlock: Optimize for smaller NR_CPUS



On Thu, Apr 17, 2014 at 11:03:57AM -0400, Waiman Long wrote:
> +static __always_inline void
> +clear_pending_set_locked(struct qspinlock *lock, u32 val)
> +{
> +     struct __qspinlock *l = (void *)lock;
> +
> +     ACCESS_ONCE(l->locked_pending) = 1;
> +}
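(For anyone following along: the single halfword store works because
locked_pending overlays the locked and pending bytes of the lock word,
so writing 1 sets the locked byte and clears the pending byte in one
shot -- the *,1,0 -> *,0,1 transition. A rough userspace sketch of the
overlay, with illustrative names and a little-endian layout only; the
real struct __qspinlock in the patch also handles big-endian and the
tail halfword:)

#include <stdint.h>
#include <stdio.h>

/* Illustrative model of the lock-word byte overlay (little-endian). */
union qval {
	uint32_t val;
	struct {
		uint8_t  locked;	/* bit 0 of val */
		uint8_t  pending;	/* bit 8 of val */
		uint16_t tail;
	};
	struct {
		uint16_t locked_pending; /* locked + pending bytes */
		uint16_t tail2;
	};
};

int main(void)
{
	union qval q = { .val = 0 };

	q.pending = 1;		/* *,1,0 : pending set, lock free */
	q.locked_pending = 1;	/* one store: pending -> 0, locked -> 1 */
	printf("locked=%u pending=%u\n", q.locked, q.pending);
	return 0;
}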

> @@ -157,8 +251,13 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
>        * we're pending, wait for the owner to go away.
>        *
>        * *,1,1 -> *,1,0
> +      *
> +      * this wait loop must be a load-acquire such that we match the
> +      * store-release that clears the locked bit and create lock
> +      * sequentiality; this because not all try_clear_pending_set_locked()
> +      * implementations imply full barriers.

You renamed the function referred to in the comment above -- it is now
clear_pending_set_locked() -- so the comment should be updated to match.

>        */
> -     while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK)
> +     while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_MASK)
>               arch_mutex_cpu_relax();
>  
>       /*
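
To make the pairing explicit for readers who don't have the whole
series in front of them, here is a standalone C11 sketch using
stdatomic stand-ins for smp_load_acquire()/smp_store_release(); the
names and layout are illustrative, not the kernel code:

#include <stdatomic.h>

/* Illustrative stand-in for the qspinlock word; bit 0 is the locked bit. */
#define Q_LOCKED_MASK	0x1u

static _Atomic unsigned int lock_val;

/* Waiter side: the acquire load pairs with the owner's release store,
 * so once the waiter sees the locked bit clear it also sees every
 * write the owner made inside the critical section. */
static void wait_for_unlock(void)
{
	while (atomic_load_explicit(&lock_val, memory_order_acquire)
	       & Q_LOCKED_MASK)
		;	/* arch_mutex_cpu_relax() equivalent */
}

/* Owner side: the store-release clears the locked bit and publishes
 * the critical-section writes in one step. */
static void unlock(void)
{
	atomic_store_explicit(&lock_val, 0u, memory_order_release);
}

The point of the pairing is that a waiter which observes locked == 0
via the acquire load is guaranteed to also observe every write the
previous owner made before its release store; a relaxed load would not
give that guarantee on all of the architectures the comment is worried
about.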
