
Re: [Xen-devel] [PATCH v2] xen/spinlock: Don't use pvqspinlock if only 1 vCPU

On Sun, 22 Jul 2018, Davidlohr Bueso wrote:

On Mon, 23 Jul 2018, Wanpeng Li wrote:

On Fri, 20 Jul 2018 at 06:03, Waiman Long <longman@xxxxxxxxxx> wrote:

On 07/19/2018 05:54 PM, Davidlohr Bueso wrote:
On Thu, 19 Jul 2018, Waiman Long wrote:

On a VM with only 1 vCPU, the locking fast paths will always be
successful. In this case, there is no need to use the PV qspinlock
code, which has higher overhead on the unlock side than the native
qspinlock code.

The xen_pvspin variable is also turned off in this 1 vCPU case to
eliminate unneeded pvqspinlock initialization in xen_init_lock_cpu(),
which is run after xen_init_spinlocks().

Wouldn't kvm also want this?

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index a37bda38d205..95aceb692010 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -457,7 +457,8 @@ static void __init sev_map_percpu_data(void)
static void __init kvm_smp_prepare_cpus(unsigned int max_cpus)
-    if (kvm_para_has_hint(KVM_HINTS_REALTIME))
+    if (num_possible_cpus() == 1 ||
+        kvm_para_has_hint(KVM_HINTS_REALTIME))

That doesn't really matter as the slowpath will never get executed in
the 1 vCPU case.

How does this differ from xen, then? I mean, the same principle applies.

So this is not needed in the kvm tree?

Hmm, I would think that my patch would be more appropriate, as it actually does
what the comment says.

Both would actually be needed, yes, but also disabling the virt_spin_lock_key
would be more robust imo.
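On the KVM side, the quoted hunk combines the single-vCPU case with the existing KVM_HINTS_REALTIME hint to switch off the virt spinlock slowpath. A minimal sketch of that combined condition, with illustrative stand-ins for the kernel's static key and paravirt helpers (these are not the real kernel APIs):

```c
#include <stdbool.h>

#define KVM_HINTS_REALTIME 0  /* illustrative value, not the kernel's */

/* Stand-ins for kernel symbols: a plain bool instead of a static key,
 * and stubs modeling a 1-vCPU guest without the realtime hint. */
static bool virt_spin_lock_enabled = true;
static unsigned int num_possible_cpus(void) { return 1; }
static bool kvm_para_has_hint(int hint) { (void)hint; return false; }

/* Mirrors the quoted hunk: either a single vCPU or the realtime hint
 * means vCPUs are never preempted, so the virt spinlock slowpath can
 * be disabled outright. */
static void kvm_smp_prepare_cpus(void)
{
    if (num_possible_cpus() == 1 ||
        kvm_para_has_hint(KVM_HINTS_REALTIME))
        virt_spin_lock_enabled = false;
}
```

Even though the slowpath is never reached with one vCPU, disabling the key avoids the extra static-branch check entirely, which is the robustness point being made here.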
