Re: [Xen-devel] [PATCH v10 10/19] qspinlock, x86: Allow unfair spinlock in a virtual guest
- To: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
- From: Waiman Long <waiman.long@xxxxxx>
- Date: Mon, 19 May 2014 16:30:22 -0400
- Cc: linux-arch@xxxxxxxxxxxxxxx, Raghavendra K T <raghavendra.kt@xxxxxxxxxxxxxxxxxx>, Oleg Nesterov <oleg@xxxxxxxxxx>, Gleb Natapov <gleb@xxxxxxxxxx>, kvm@xxxxxxxxxxxxxxx, Scott J Norton <scott.norton@xxxxxx>, x86@xxxxxxxxxx, Paolo Bonzini <paolo.bonzini@xxxxxxxxx>, linux-kernel@xxxxxxxxxxxxxxx, virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx, Ingo Molnar <mingo@xxxxxxxxxx>, Chegu Vinod <chegu_vinod@xxxxxx>, David Vrabel <david.vrabel@xxxxxxxxxx>, "H. Peter Anvin" <hpa@xxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx, Thomas Gleixner <tglx@xxxxxxxxxxxxx>, "Paul E. McKenney" <paulmck@xxxxxxxxxxxxxxxxxx>, Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>, Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
- Delivery-date: Mon, 19 May 2014 20:31:06 +0000
- List-id: Xen developer discussion <xen-devel.lists.xen.org>
On 05/08/2014 03:12 PM, Peter Zijlstra wrote:
> On Wed, May 07, 2014 at 11:01:38AM -0400, Waiman Long wrote:
>
> No, we want the unfair thing for VIRT, not PARAVIRT.
Yes, you are right. I will change that to VIRT.
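For illustration, keying the unfair path off "running under any hypervisor" rather than off CONFIG_PARAVIRT could look roughly like the sketch below. The config and symbol names here are only placeholders, not necessarily what the respun patch will use:

	#include <linux/init.h>
	#include <linux/jump_label.h>
	#include <asm/cpufeature.h>

	struct static_key virt_unfairlocks_enabled = STATIC_KEY_INIT_FALSE;

	/*
	 * Sketch only: enable the unfair-lock static key at boot whenever
	 * the kernel detects it is running under a hypervisor of any kind,
	 * independent of whether paravirt support is compiled in.
	 */
	static int __init unfair_locks_init_jump(void)
	{
		if (!boot_cpu_has(X86_FEATURE_HYPERVISOR))
			return 0;

		static_key_slow_inc(&virt_unfairlocks_enabled);
		return 0;
	}
	early_initcall(unfair_locks_init_jump);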
>> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
>> index 9e7659e..10e87e1 100644
>> --- a/kernel/locking/qspinlock.c
>> +++ b/kernel/locking/qspinlock.c
>> @@ -227,6 +227,14 @@ static __always_inline int get_qlock(struct qspinlock *lock)
>>  {
>>  	struct __qspinlock *l = (void *)lock;
>> +#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
>> +	if (static_key_false(&paravirt_unfairlocks_enabled))
>> +		/*
>> +		 * Need to use atomic operation to get the lock when
>> +		 * lock stealing can happen.
>> +		 */
>> +		return cmpxchg(&l->locked, 0, _Q_LOCKED_VAL) == 0;
> That's missing {}.
It is a single statement which doesn't need braces according to kernel
coding style. I could move the comments up a bit to make it easier to read.
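For example, with the comment hoisted above the condition the hunk could read as below; this is just a sketch of that rearrangement, not code that was actually posted:

	#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
		/*
		 * Lock stealing can happen when unfair locks are enabled,
		 * so the lock byte must be taken with an atomic cmpxchg
		 * rather than a plain store.
		 */
		if (static_key_false(&paravirt_unfairlocks_enabled))
			return cmpxchg(&l->locked, 0, _Q_LOCKED_VAL) == 0;
	#endif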
>> +#endif
>>  	barrier();
>>  	ACCESS_ONCE(l->locked) = _Q_LOCKED_VAL;
>>  	barrier();
> But no, what you want is:
> static __always_inline bool virt_lock(struct qspinlock *lock)
> {
> #ifdef CONFIG_VIRT_MUCK
> 	if (static_key_false(&virt_unfairlocks_enabled)) {
> 		while (!queue_spin_trylock(lock))
> 			cpu_relax();
> 		return true;
> 	}
> #endif
>
> 	return false;
> }
>
> void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> {
> 	if (virt_lock(lock))
> 		return;
>
> 	...
> }
This is a possible way of doing it. I can do that in the patch series to
simplify it. Hopefully that will speed up the review process and get it
done quicker.
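As a side note, the queue_spin_trylock() used in the spin loop above is itself just an atomic cmpxchg attempt on the lock word, roughly along the following lines (a rough sketch for context, not a verbatim copy of the header in the series):

	static __always_inline int queue_spin_trylock(struct qspinlock *lock)
	{
		if (!atomic_read(&lock->val) &&
		    (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) == 0))
			return 1;
		return 0;
	}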
-Longman
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel