
Re: [PATCH 5/5] x86/xen: Don't register PV spinlock IPI when it isn't going to be used



On Tue, 2021-01-05 at 09:45 -0500, boris.ostrovsky@xxxxxxxxxx wrote:
> For maintenance purposes. When something gets added in the initialization
> path (prepare_boot_cpu() and such) we likely want to tear it down in
> cpu_die(). Today both the native and Xen cpu_die() ops work the same, but
> things may change. And chances are people will not test the
> no_vector_callback case, so we would then have to hunt down what broke
> and when.
> 
> 
> no_vector_callback is a pretty esoteric case already, so making it
> take a different code path from the common case, when there is little
> or no penalty for keeping the common path, seems undesirable to me.


That makes sense.

In that case I also won't bother to look at those three teardown
functions and work out whether it's harmless to call them
unconditionally. If the point is to remind those who come after us,
then it's actually far more effective to have an explicit
'if (xen_have_vector_callback)'.

Looks like this. I suppose I should actually try booting it under real
Xen instead of just KVM-pretending-to-be-Xen, and then I'll repost the
series (which is the top five commits of
https://git.infradead.org/users/dwmw2/linux.git/shortlog/refs/heads/xen-pci-irq)

From: David Woodhouse <dwmw@xxxxxxxxxxxx>
Subject: [PATCH] x86/xen: Fix xen_hvm_smp_init() when vector callback not available

Only the IPI-related functions in the smp_ops should be conditional
on the vector callback being available. The rest should still happen:

 • xen_hvm_smp_prepare_boot_cpu()

   This function does two things, both of which should still happen if
   there is no vector callback support.

   The call to xen_vcpu_setup() for vCPU0 should still happen as it just
   sets up the vcpu_info for CPU0. That does happen for the secondary
   vCPUs too, from xen_cpu_up_prepare_hvm().

   The second thing it does is call xen_init_spinlocks(), which perhaps
   counter-intuitively should *also* still be happening in the case
   without vector callbacks, so that it can clear its local xen_pvspin
   flag and disable the virt_spin_lock_key accordingly.

   Checking xen_have_vector_callback in xen_init_spinlocks() itself
   would affect PV guests, so set the global nopvspin flag in
   xen_hvm_smp_init() instead, when vector callbacks aren't available.
   (A rough sketch of this hook, for reference, follows after these notes.)

 • xen_hvm_smp_prepare_cpus()

   This does some IPI-related setup by calling xen_smp_intr_init() and
   xen_init_lock_cpu(), which can be made conditional. And it sets the
   xen_vcpu_id to XEN_VCPU_ID_INVALID for all possible CPUs, which does
   need to happen.

 • xen_smp_cpus_done()

   This offlines any vCPUs which don't fit in the global shared_info
   page, if separate vcpu_info placement isn't available. That part also
   needs to happen regardless of vector callback support.

 • xen_hvm_cpu_die()

   This doesn't actually do anything other than common_cpu_die() right
   now in the !vector_callback case; all three teardown functions
   it calls should be no-ops. But to guard against future regressions
   it's useful to call it anyway, and for it to explicitly check for
   xen_have_vector_callback before calling those additional functions.
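
 For reference (this hook is unchanged by the patch below; the snippet
 is a rough sketch from memory rather than a verbatim copy of the tree),
 xen_hvm_smp_prepare_boot_cpu() does roughly this:

    static void __init xen_hvm_smp_prepare_boot_cpu(void)
    {
            BUG_ON(smp_processor_id() != 0);
            native_smp_prepare_boot_cpu();

            /* Set up vcpu_info for the boot vCPU; secondary vCPUs get
             * theirs from xen_cpu_up_prepare_hvm(). */
            xen_vcpu_setup(0);

            /* Still wanted without vector callbacks, so it can clear
             * xen_pvspin and disable virt_spin_lock_key. */
            xen_init_spinlocks();
    }

 The only behavioural change affecting it comes from the nopvspin
 handling added to xen_hvm_smp_init() in the diff.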

Signed-off-by: David Woodhouse <dwmw@xxxxxxxxxxxx>
---
 arch/x86/xen/smp_hvm.c | 27 +++++++++++++++++----------
 1 file changed, 17 insertions(+), 10 deletions(-)

diff --git a/arch/x86/xen/smp_hvm.c b/arch/x86/xen/smp_hvm.c
index f5e7db4f82ab..056430a1080b 100644
--- a/arch/x86/xen/smp_hvm.c
+++ b/arch/x86/xen/smp_hvm.c
@@ -33,9 +33,11 @@ static void __init xen_hvm_smp_prepare_cpus(unsigned int max_cpus)
        int cpu;
 
        native_smp_prepare_cpus(max_cpus);
-       WARN_ON(xen_smp_intr_init(0));
 
-       xen_init_lock_cpu(0);
+       if (xen_have_vector_callback) {
+               WARN_ON(xen_smp_intr_init(0));
+               xen_init_lock_cpu(0);
+       }
 
        for_each_possible_cpu(cpu) {
                if (cpu == 0)
@@ -50,9 +52,11 @@ static void __init xen_hvm_smp_prepare_cpus(unsigned int max_cpus)
 static void xen_hvm_cpu_die(unsigned int cpu)
 {
        if (common_cpu_die(cpu) == 0) {
-               xen_smp_intr_free(cpu);
-               xen_uninit_lock_cpu(cpu);
-               xen_teardown_timer(cpu);
+               if (xen_have_vector_callback) {
+                       xen_smp_intr_free(cpu);
+                       xen_uninit_lock_cpu(cpu);
+                       xen_teardown_timer(cpu);
+               }
        }
 }
 #else
@@ -64,14 +68,17 @@ static void xen_hvm_cpu_die(unsigned int cpu)
 
 void __init xen_hvm_smp_init(void)
 {
-       if (!xen_have_vector_callback)
+       smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu;
+       smp_ops.smp_prepare_cpus = xen_hvm_smp_prepare_cpus;
+       smp_ops.smp_cpus_done = xen_smp_cpus_done;
+       smp_ops.cpu_die = xen_hvm_cpu_die;
+
+       if (!xen_have_vector_callback) {
+               nopvspin = true;
                return;
+       }
 
-       smp_ops.smp_prepare_cpus = xen_hvm_smp_prepare_cpus;
        smp_ops.smp_send_reschedule = xen_smp_send_reschedule;
-       smp_ops.cpu_die = xen_hvm_cpu_die;
        smp_ops.send_call_func_ipi = xen_smp_send_call_function_ipi;
        smp_ops.send_call_func_single_ipi = xen_smp_send_call_function_single_ipi;
-       smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu;
-       smp_ops.smp_cpus_done = xen_smp_cpus_done;
 }
-- 
2.29.2
