Re: [Xen-devel] [PATCH v11 06/16] qspinlock: prolong the stay in the pending bit path
On Wed, Jun 11, 2014 at 05:22:28PM -0400, Long, Wai Man wrote:
> >> @@ -233,11 +233,25 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> >>  	 */
> >>  	for (;;) {
> >>  		/*
> >> -		 * If we observe any contention; queue.
> >> +		 * If we observe that the queue is not empty or both
> >> +		 * the pending and lock bits are set, queue
> >>  		 */
> >> -		if (val & ~_Q_LOCKED_MASK)
> >> +		if ((val & _Q_TAIL_MASK) ||
> >> +		    (val == (_Q_LOCKED_VAL|_Q_PENDING_VAL)))
> >>  			goto queue;
> >>
> >> +		if (val == _Q_PENDING_VAL) {
> >> +			/*
> >> +			 * Pending bit is set, but not the lock bit.
> >> +			 * Assuming that the pending bit holder is going to
> >> +			 * set the lock bit and clear the pending bit soon,
> >> +			 * it is better to wait than to exit at this point.
> >> +			 */
> >> +			cpu_relax();
> >> +			val = atomic_read(&lock->val);
> >> +			continue;
> >> +		}
> >> +
> >>  		new = _Q_LOCKED_VAL;
> >>  		if (val == new)
> >>  			new |= _Q_PENDING_VAL;
>
> > Wouldn't something like:
> >
> >	while (atomic_read(&lock->val) == _Q_PENDING_VAL)
> >		cpu_relax();
> >
> > before the cmpxchg loop have gotten you all this?
>
> That is not exactly the same. The loop will exit if other bits are set or
> the pending bit cleared. In that case, we will need to do the same check
> at the beginning of the for loop in order to avoid doing an extra cmpxchg
> that is not necessary.

If other bits get set we should stop poking at the pending bit and get
queued. The only transition we want to wait for is: 0,1,0 -> 0,0,1.

What extra unneeded cmpxchg() is there? If we have two cpus waiting in
this loop for the pending bit to go away then both will attempt to grab
the now free pending bit, one will lose and get queued?

There's no avoiding that contention.