Re: [Xen-devel] [PATCH v10 09/19] qspinlock: Prepare for unfair lock support
On 05/08/2014 03:06 PM, Peter Zijlstra wrote:
> On Wed, May 07, 2014 at 11:01:37AM -0400, Waiman Long wrote:
>> If unfair lock is supported, the lock acquisition loop at the end of
>> the queue_spin_lock_slowpath() function may need to detect the fact
>> the lock can be stolen. Code are added for the stolen lock detection.
>> A new qhead macro is also defined as a shorthand for mcs.locked.
>
> NAK, unfair should be a pure test-and-set lock.

I have performance data showing that a simple test-and-set lock does not
scale well. That is the primary reason for ditching the test-and-set lock
and using a more complicated scheme which scales better. Also, it will be
hard to make the unfair test-and-set lock code coexist nicely with the PV
spinlock code.

>>  /**
>>   * get_qlock - Set the lock bit and own the lock
>> - * @lock: Pointer to queue spinlock structure
>> + * @lock : Pointer to queue spinlock structure
>> + * Return: 1 if lock acquired, 0 otherwise
>>   *
>>   * This routine should only be called when the caller is the only one
>>   * entitled to acquire the lock.
>>   */
>> -static __always_inline void get_qlock(struct qspinlock *lock)
>> +static __always_inline int get_qlock(struct qspinlock *lock)
>>  {
>>          struct __qspinlock *l = (void *)lock;
>>
>>          barrier();
>>          ACCESS_ONCE(l->locked) = _Q_LOCKED_VAL;
>>          barrier();
>> +        return 1;
>>  }
>
> and here you make a horribly named function more horrible;
> try_set_locked() is what it is now.

Will do.

-Longman
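To make the disagreement concrete: the "pure test-and-set lock" Peter asks for would spin directly on the lock word instead of queuing, roughly as sketched below. This is only an illustration, not code from the series; the function name is invented here, and it assumes the series' struct qspinlock with an atomic_t val and the _Q_LOCKED_VAL constant.

static __always_inline void unfair_tas_lock(struct qspinlock *lock)
{
        do {
                /* Spin read-only until the lock word looks free... */
                while (atomic_read(&lock->val))
                        cpu_relax();
                /* ...then race all other waiters for the 0 -> locked transition. */
        } while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0);
}

Every waiter hammers the same cache line with atomic operations and any CPU can win regardless of arrival order, which is where both the unfairness and the scaling problem Waiman mentions come from.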
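For the second hunk, the rename to try_set_locked() reflects that the function can now fail: once lock stealing is allowed, the queue head cannot simply store _Q_LOCKED_VAL; it has to claim the lock byte atomically and report whether it won. A possible shape, again only an illustrative sketch assuming the series' struct __qspinlock layout (not the actual patch code):

static __always_inline int try_set_locked(struct qspinlock *lock)
{
        struct __qspinlock *l = (void *)lock;

        /*
         * The queue head is the only queued CPU entitled to the lock,
         * but an unfair (non-queued) CPU may have stolen it, so the
         * unconditional store becomes an atomic 0 -> _Q_LOCKED_VAL
         * transition that can fail.
         */
        if (cmpxchg(&l->locked, 0, _Q_LOCKED_VAL) != 0)
                return 0;       /* lock was stolen, keep waiting */
        return 1;               /* lock acquired */
}

On failure the slowpath caller would loop back and wait for the lock to be released again; with unfair locks disabled, the function can degenerate to the unconditional store plus "return 1" shown in the quoted hunk.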