
[PATCH linux 1/2] xen: delay xen_hvm_init_time_ops() if kdump is boot on vcpu>=32


  • To: xen-devel@xxxxxxxxxxxxxxxxxxxx
  • From: Dongli Zhang <dongli.zhang@xxxxxxxxxx>
  • Date: Tue, 12 Oct 2021 00:24:27 -0700
  • Cc: linux-kernel@xxxxxxxxxxxxxxx, x86@xxxxxxxxxx, boris.ostrovsky@xxxxxxxxxx, jgross@xxxxxxxx, sstabellini@xxxxxxxxxx, tglx@xxxxxxxxxxxxx, mingo@xxxxxxxxxx, bp@xxxxxxxxx, hpa@xxxxxxxxx, andrew.cooper3@xxxxxxxxxx, george.dunlap@xxxxxxxxxx, iwj@xxxxxxxxxxxxxx, jbeulich@xxxxxxxx, julien@xxxxxxx, wl@xxxxxxx, joe.jin@xxxxxxxxxx
  • Delivery-date: Tue, 12 Oct 2021 07:25:13 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

sched_clock() can be used very early since upstream
commit 857baa87b642 ("sched/clock: Enable sched clock early"). In addition,
with upstream commit 38669ba205d1 ("x86/xen/time: Output xen sched_clock
time from 0"), the kdump kernel in a Xen HVM guest may panic at a very early
stage when accessing &__this_cpu_read(xen_vcpu)->time, as in the call path below:

setup_arch()
 -> init_hypervisor_platform()
     -> x86_init.hyper.init_platform = xen_hvm_guest_init()
         -> xen_hvm_init_time_ops()
             -> xen_clocksource_read()
                 -> src = &__this_cpu_read(xen_vcpu)->time;

This is because Xen HVM supports at most MAX_VIRT_CPUS=32 'vcpu_info'
structures embedded inside 'shared_info' during the early stage, until
xen_vcpu_setup() is used to allocate/relocate the 'vcpu_info' for the boot
cpu at an arbitrary address.

However, when a Xen HVM guest panics on a vcpu >= 32,
xen_vcpu_info_reset(0) sets per_cpu(xen_vcpu, cpu) = NULL because the vcpu
number is >= 32, so xen_clocksource_read() on that vcpu panics.

This patch delays xen_hvm_init_time_ops() until
xen_hvm_smp_prepare_boot_cpu(), after the 'vcpu_info' for the boot vcpu
has been registered, when the boot vcpu is >= 32.

This issue can be reproduced on purpose with the following command on the
guest side when kdump/kexec is enabled:

"taskset -c 33 echo c > /proc/sysrq-trigger"

Cc: Joe Jin <joe.jin@xxxxxxxxxx>
Signed-off-by: Dongli Zhang <dongli.zhang@xxxxxxxxxx>
---
 arch/x86/xen/enlighten_hvm.c | 20 +++++++++++++++++++-
 arch/x86/xen/smp_hvm.c       |  3 +++
 2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
index e68ea5f4ad1c..152279416d9a 100644
--- a/arch/x86/xen/enlighten_hvm.c
+++ b/arch/x86/xen/enlighten_hvm.c
@@ -216,7 +216,25 @@ static void __init xen_hvm_guest_init(void)
        WARN_ON(xen_cpuhp_setup(xen_cpu_up_prepare_hvm, xen_cpu_dead_hvm));
        xen_unplug_emulated_devices();
        x86_init.irqs.intr_init = xen_init_IRQ;
-       xen_hvm_init_time_ops();
+
+       /*
+        * Only MAX_VIRT_CPUS 'vcpu_info' structures are embedded inside
+        * 'shared_info', and the VM uses them until xen_vcpu_setup() is
+        * used to allocate/relocate them at an arbitrary address.
+        *
+        * However, when a Xen HVM guest panics on a vcpu >= MAX_VIRT_CPUS,
+        * per_cpu(xen_vcpu, cpu) is still NULL at this stage. Accessing
+        * per_cpu(xen_vcpu, cpu) via xen_clocksource_read() would panic.
+        *
+        * Therefore we delay xen_hvm_init_time_ops() to
+        * xen_hvm_smp_prepare_boot_cpu() when boot vcpu is >= MAX_VIRT_CPUS.
+        */
+       if (xen_vcpu_nr(0) >= MAX_VIRT_CPUS)
+               pr_info("Delay xen_hvm_init_time_ops() as kernel is running on vcpu=%d\n",
+                       xen_vcpu_nr(0));
+       else
+               xen_hvm_init_time_ops();
+
        xen_hvm_init_mmu_ops();
 
 #ifdef CONFIG_KEXEC_CORE
diff --git a/arch/x86/xen/smp_hvm.c b/arch/x86/xen/smp_hvm.c
index 6ff3c887e0b9..60cd4fafd188 100644
--- a/arch/x86/xen/smp_hvm.c
+++ b/arch/x86/xen/smp_hvm.c
@@ -19,6 +19,9 @@ static void __init xen_hvm_smp_prepare_boot_cpu(void)
         */
        xen_vcpu_setup(0);
 
+       if (xen_vcpu_nr(0) >= MAX_VIRT_CPUS)
+               xen_hvm_init_time_ops();
+
        /*
         * The alternative logic (which patches the unlock/lock) runs before
         * the smp bootup up code is activated. Hence we need to set this up
-- 
2.17.1
