[PATCH v3 1/3] xen/x86: add nmi continuation framework
Actions in NMI context are rather limited, as e.g. locking is fragile.

Add a generic framework to continue processing in normal interrupt
context after leaving NMI processing. This is done by a high priority
interrupt vector triggered via a self IPI from NMI context, which will
then call the continuation function specified during NMI handling.

Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
---
V2:
- add prototype for continuation function (Roger Pau Monné)
- switch from softirq to explicit self-IPI (Jan Beulich)
---
 xen/arch/x86/apic.c       | 13 +++++++---
 xen/arch/x86/traps.c      | 52 +++++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/nmi.h | 13 +++++++++-
 3 files changed, 74 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/apic.c b/xen/arch/x86/apic.c
index 60627fd6e6..7497ddb5da 100644
--- a/xen/arch/x86/apic.c
+++ b/xen/arch/x86/apic.c
@@ -40,6 +40,7 @@
 #include <irq_vectors.h>
 #include <xen/kexec.h>
 #include <asm/guest.h>
+#include <asm/nmi.h>
 #include <asm/time.h>
 
 static bool __read_mostly tdt_enabled;
@@ -1376,16 +1377,22 @@ void spurious_interrupt(struct cpu_user_regs *regs)
 {
     /*
      * Check if this is a vectored interrupt (most likely, as this is probably
-     * a request to dump local CPU state). Vectored interrupts are ACKed;
-     * spurious interrupts are not.
+     * a request to dump local CPU state or to continue NMI handling).
+     * Vectored interrupts are ACKed; spurious interrupts are not.
      */
    if (apic_isr_read(SPURIOUS_APIC_VECTOR)) {
+        bool is_spurious;
+
         ack_APIC_irq();
+        is_spurious = !nmi_check_continuation();
         if (this_cpu(state_dump_pending)) {
             this_cpu(state_dump_pending) = false;
             dump_execstate(regs);
-            return;
+            is_spurious = false;
         }
+
+        if ( !is_spurious )
+            return;
     }
 
     /* see sw-dev-man vol 3, chapter 7.4.13.5 */
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index bc5b8f8ea3..6f4db9d549 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -79,6 +79,7 @@
 #include <public/hvm/params.h>
 #include <asm/cpuid.h>
 #include <xsm/xsm.h>
+#include <asm/mach-default/irq_vectors.h>
 #include <asm/pv/traps.h>
 #include <asm/pv/mm.h>
 
@@ -1799,6 +1800,57 @@ void unset_nmi_callback(void)
     nmi_callback = dummy_nmi_callback;
 }
 
+static DEFINE_PER_CPU(nmi_contfunc_t *, nmi_cont_func);
+static DEFINE_PER_CPU(void *, nmi_cont_arg);
+static DEFINE_PER_CPU(bool, nmi_cont_busy);
+
+bool nmi_check_continuation(void)
+{
+    unsigned int cpu = smp_processor_id();
+    nmi_contfunc_t *func = per_cpu(nmi_cont_func, cpu);
+    void *arg = per_cpu(nmi_cont_arg, cpu);
+
+    if ( per_cpu(nmi_cont_busy, cpu) )
+    {
+        per_cpu(nmi_cont_busy, cpu) = false;
+        printk("Trying to set NMI continuation while still one active!\n");
+    }
+
+    /* Reads must be done before following write (local cpu ordering only). */
+    barrier();
+
+    per_cpu(nmi_cont_func, cpu) = NULL;
+
+    if ( func )
+        func(arg);
+
+    return func;
+}
+
+int set_nmi_continuation(nmi_contfunc_t *func, void *arg)
+{
+    unsigned int cpu = smp_processor_id();
+
+    if ( per_cpu(nmi_cont_func, cpu) )
+    {
+        per_cpu(nmi_cont_busy, cpu) = true;
+        return -EBUSY;
+    }
+
+    per_cpu(nmi_cont_func, cpu) = func;
+    per_cpu(nmi_cont_arg, cpu) = arg;
+
+    /*
+     * Issue a self-IPI. Handling is done in spurious_interrupt().
+     * NMI could have happened in IPI sequence, so wait for ICR being idle
+     * again before leaving NMI handler.
+     */
+    send_IPI_self(SPURIOUS_APIC_VECTOR);
+    apic_wait_icr_idle();
+
+    return 0;
+}
+
 void do_device_not_available(struct cpu_user_regs *regs)
 {
 #ifdef CONFIG_PV
diff --git a/xen/include/asm-x86/nmi.h b/xen/include/asm-x86/nmi.h
index a288f02a50..68db75b1ed 100644
--- a/xen/include/asm-x86/nmi.h
+++ b/xen/include/asm-x86/nmi.h
@@ -33,5 +33,16 @@ nmi_callback_t *set_nmi_callback(nmi_callback_t *callback);
 void unset_nmi_callback(void);
 
 DECLARE_PER_CPU(unsigned int, nmi_count);
- 
+
+typedef void nmi_contfunc_t(void *arg);
+
+/**
+ * set_nmi_continuation
+ *
+ * Schedule a function to be started in interrupt context after NMI handling.
+ */
+int set_nmi_continuation(nmi_contfunc_t *func, void *arg);
+
+/* Check for NMI continuation pending. */
+bool nmi_check_continuation(void);
 #endif /* ASM_NMI_H */
-- 
2.26.2