[Xen-changelog] [xen stable-4.11] x86/nmi: correctly check MSB of P6 performance counter MSR in watchdog
commit 03afae62a7704d38f3c4d4d7ec66f510a86b1489
Author:     Igor Druzhinin <igor.druzhinin@xxxxxxxxxx>
AuthorDate: Mon Mar 18 17:04:30 2019 +0100
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Mon Mar 18 17:04:30 2019 +0100

    x86/nmi: correctly check MSB of P6 performance counter MSR in watchdog

    The logic currently tries to work out whether a recent overflow (which
    indicates that the NMI comes from the watchdog) happened by checking the
    MSB of the performance counter MSR, which is initially sign extended from
    the negative value that we program it to. A possibly incorrect assumption
    here is that the MSB is always bit 32, while on modern hardware it is
    usually bit 47 and the actual bit width is reported through CPUID.
    Checking bit 32 for overflows is usually fine since we never program the
    counter to anything exceeding 32 bits and the NMI is handled shortly
    after the overflow occurs.

    A problematic scenario that we saw occurs on systems where SMIs taking
    significant time are possible. In that case, NMI handling is deferred
    until the firmware exits the SMI, which might take enough time for the
    counter to count up through bit 32 and set it to 1 again. The logic
    described above then misreads it and erroneously reports an unknown NMI.

    Fortunately, we can use the actual MSB, which is usually higher than the
    currently hardcoded bit 32, and treat this case correctly, at least on
    modern hardware.

    Signed-off-by: Igor Druzhinin <igor.druzhinin@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: 0452d02b6e7849537914dd30cbfc8eb27cdad2ce
    master date: 2019-02-28 13:44:40 +0000
---
 xen/arch/x86/nmi.c | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/nmi.c b/xen/arch/x86/nmi.c
index d7fce28805..e26121a737 100644
--- a/xen/arch/x86/nmi.c
+++ b/xen/arch/x86/nmi.c
@@ -37,6 +37,7 @@ unsigned int nmi_watchdog = NMI_NONE;
 static unsigned int nmi_hz = HZ;
 static unsigned int nmi_perfctr_msr;   /* the MSR to reset in NMI handler */
 static unsigned int nmi_p4_cccr_val;
+static unsigned int nmi_p6_event_width;
 static DEFINE_PER_CPU(struct timer, nmi_timer);
 static DEFINE_PER_CPU(unsigned int, nmi_timer_ticks);

@@ -123,7 +124,9 @@ int nmi_active;
 #define P6_EVNTSEL_USR          (1 << 16)
 #define P6_EVENT_CPU_CLOCKS_NOT_HALTED   0x79
 #define CORE_EVENT_CPU_CLOCKS_NOT_HALTED 0x3c
-#define P6_EVENT_WIDTH          32
+/* Bit width of IA32_PMCx MSRs is reported using CPUID.0AH:EAX[23:16]. */
+#define P6_EVENT_WIDTH_MASK     (((1 << 8) - 1) << 16)
+#define P6_EVENT_WIDTH_MIN      32

 #define P4_ESCR_EVENT_SELECT(N) ((N)<<25)
 #define P4_CCCR_OVF_PMI0        (1<<26)
@@ -324,6 +327,15 @@ static void setup_p6_watchdog(unsigned counter)

     nmi_perfctr_msr = MSR_P6_PERFCTR(0);

+    if ( !nmi_p6_event_width && current_cpu_data.cpuid_level >= 0xa )
+        nmi_p6_event_width = MASK_EXTR(cpuid_eax(0xa), P6_EVENT_WIDTH_MASK);
+    if ( !nmi_p6_event_width )
+        nmi_p6_event_width = P6_EVENT_WIDTH_MIN;
+
+    if ( nmi_p6_event_width < P6_EVENT_WIDTH_MIN ||
+         nmi_p6_event_width > BITS_PER_LONG )
+        return;
+
     clear_msr_range(MSR_P6_EVNTSEL(0), 2);
     clear_msr_range(MSR_P6_PERFCTR(0), 2);

@@ -529,7 +541,7 @@ bool nmi_watchdog_tick(const struct cpu_user_regs *regs)
     else if ( nmi_perfctr_msr == MSR_P6_PERFCTR(0) )
     {
         rdmsrl(MSR_P6_PERFCTR(0), msr_content);
-        if ( msr_content & (1ULL << P6_EVENT_WIDTH) )
+        if ( msr_content & (1ULL << (nmi_p6_event_width - 1)) )
             watchdog_tick = false;

         /*
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.11

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/xen-changelog
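
For readers who want to poke at the detection logic outside of Xen, below is a minimal user-space C sketch (not part of the patch) of the same idea: it reads the general-purpose performance counter bit width from CPUID.0AH:EAX[23:16], falls back to the previously hardcoded 32 when the leaf reports nothing, and tests bit (width - 1) the way the patched nmi_watchdog_tick() does. The sample msr_content value and all names here are illustrative assumptions; reading a real IA32_PMCx MSR would require ring 0.

/* Hypothetical user-space illustration; compile with gcc/clang on x86. */
#include <stdint.h>
#include <stdio.h>
#include <cpuid.h>   /* compiler helper for the CPUID instruction */

int main(void)
{
    unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;
    unsigned int width = 32;                  /* old hardcoded assumption */

    /* CPUID.0AH:EAX[23:16] = bit width of the IA32_PMCx counters. */
    if ( __get_cpuid(0xa, &eax, &ebx, &ecx, &edx) && ((eax >> 16) & 0xff) )
        width = (eax >> 16) & 0xff;

    /* Rough analogue of the patch's sanity check (clamp instead of bail). */
    if ( width < 32 || width > 64 )
        width = 32;

    /*
     * Example counter value instead of rdmsrl(MSR_P6_PERFCTR(0), ...):
     * a counter still counting up towards zero from the negative start
     * value has its true MSB (bit width-1) set; after it overflows the
     * MSB is clear.
     */
    uint64_t msr_content = (1ULL << (width - 1)) | 0x1234;

    /*
     * Same test as the patch: MSB still set => no recent overflow => this
     * NMI did not come from the watchdog.  The old code tested bit 32
     * unconditionally, which a long SMI could turn back on after overflow.
     */
    int watchdog_tick = !(msr_content & (1ULL << (width - 1)));

    printf("counter width: %u bits, watchdog tick: %s\n",
           width, watchdog_tick ? "yes" : "no");
    return 0;
}

On recent Intel hardware this typically reports a width of 48 bits (MSB = bit 47), which matches the commit message and shows why the old hardcoded bit 32 test could be fooled.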