
Re: [PATCH v2 1/2] xen/x86: add nmi continuation framework

On 08.10.20 10:43, Roger Pau Monné wrote:
On Wed, Oct 07, 2020 at 03:30:10PM +0200, Juergen Gross wrote:
Actions in NMI context are rather limited as e.g. locking is rather
fragile.

Add a generic framework to continue processing in softirq context after
leaving NMI processing. This is working for NMIs happening in guest
context as NMI exit handling will issue an IPI to itself in case a
softirq is pending, resulting in the continuation running before the
guest gets control again.

There's already a kind of NMI callback framework using nmi_callback.
I assume that moving this existing callback code to be executed in
softirq context is not suitable because that would be past the
execution of an iret instruction?

Might be worth mentioning in the commit message.

Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
  xen/arch/x86/traps.c      | 37 +++++++++++++++++++++++++++++++++++++
  xen/include/asm-x86/nmi.h |  8 +++++++-
  xen/include/xen/softirq.h |  5 ++++-
  3 files changed, 48 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index bc5b8f8ea3..f433fe5acb 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1799,6 +1799,42 @@ void unset_nmi_callback(void)
      nmi_callback = dummy_nmi_callback;
  }

+static DEFINE_PER_CPU(void (*)(void *), nmi_cont_func);
+static DEFINE_PER_CPU(void *, nmi_cont_par);
+
+static void nmi_cont_softirq(void)
+{
+    unsigned int cpu = smp_processor_id();
+    void (*func)(void *par) = per_cpu(nmi_cont_func, cpu);
+    void *par = per_cpu(nmi_cont_par, cpu);

I think you can use this_cpu here and below, and avoid having to worry
about the processor id at all? Also less likely to mess up and assign
an NMI callback to the wrong CPU.

this_cpu() and smp_processor_id() are both based on get_cpu_info().
So multiple uses of this_cpu() are less efficient than per_cpu() with
cpu in a local variable (regarding code-size and speed, at least I think
so).
+    /* Reads must be done before following write (local cpu ordering only). */
+    barrier();

Is this added because of the usage of RELOC_HIDE by per_cpu?

This is added because reordering of loads by the compiler must be
avoided, as an NMI using those fields again might happen anytime.

+    per_cpu(nmi_cont_func, cpu) = NULL;

Since we are using RELOC_HIDE, doesn't it imply that the compiler
cannot reorder this, since it has no idea what variable we are
accessing?
I'd rather be safe here than relying on per_cpu() internals.

+    if ( func )
+        func(par);
+}
+
+int set_nmi_continuation(void (*func)(void *par), void *par)
+{
+    unsigned int cpu = smp_processor_id();
+
+    if ( per_cpu(nmi_cont_func, cpu) )
+    {
+        printk("Trying to set NMI continuation while still one active!\n");
+        return -EBUSY;
+    }
+
+    per_cpu(nmi_cont_func, cpu) = func;
+    per_cpu(nmi_cont_par, cpu) = par;
+    raise_softirq(NMI_CONT_SOFTIRQ);
+
+    return 0;
+}
+
  void do_device_not_available(struct cpu_user_regs *regs)
  {
  #ifdef CONFIG_PV
@@ -2132,6 +2168,7 @@ void __init trap_init(void)
      cpu_init();

+    open_softirq(NMI_CONT_SOFTIRQ, nmi_cont_softirq);
      open_softirq(PCI_SERR_SOFTIRQ, pci_serr_softirq);
diff --git a/xen/include/asm-x86/nmi.h b/xen/include/asm-x86/nmi.h
index a288f02a50..da40fb6599 100644
--- a/xen/include/asm-x86/nmi.h
+++ b/xen/include/asm-x86/nmi.h
@@ -33,5 +33,11 @@ nmi_callback_t *set_nmi_callback(nmi_callback_t *callback);
  void unset_nmi_callback(void);
  DECLARE_PER_CPU(unsigned int, nmi_count);

+/**
+ * set_nmi_continuation
+ *
+ * Schedule a function to be started in softirq context after NMI handling.
+ */
+int set_nmi_continuation(void (*func)(void *par), void *par);

I would introduce a type for the nmi callback, as I think it's easier
than having to retype the prototype everywhere:

typedef void nmi_continuation_t(void *);

Fine with me.



