
Re: [PATCH v3 1/3] xen/x86: add nmi continuation framework



On 16.10.2020 10:53, Juergen Gross wrote:
> Actions in NMI context are rather limited as e.g. locking is rather
> fragile.
> 
> Add a generic framework to continue processing in normal interrupt
> context after leaving NMI processing.
> 
> This is done by a high priority interrupt vector triggered via a
> self IPI from NMI context, which will then call the continuation
> function specified during NMI handling.

I'm concerned that only a single handler is allowed, when the
series already introduces two uses. A single NMI instance may
signal multiple things in one go. At the very least we would then
need a priority, such that SERR could override oprofile.

> @@ -1799,6 +1800,57 @@ void unset_nmi_callback(void)
>      nmi_callback = dummy_nmi_callback;
>  }
>  
> +static DEFINE_PER_CPU(nmi_contfunc_t *, nmi_cont_func);
> +static DEFINE_PER_CPU(void *, nmi_cont_arg);
> +static DEFINE_PER_CPU(bool, nmi_cont_busy);
> +
> +bool nmi_check_continuation(void)
> +{
> +    unsigned int cpu = smp_processor_id();
> +    nmi_contfunc_t *func = per_cpu(nmi_cont_func, cpu);
> +    void *arg = per_cpu(nmi_cont_arg, cpu);
> +
> +    if ( per_cpu(nmi_cont_busy, cpu) )
> +    {
> +        per_cpu(nmi_cont_busy, cpu) = false;
> +        printk("Trying to set NMI continuation while still one active!\n");

I guess the bool would be better as a const void *, written
with __builtin_return_address(0) and logged accordingly here
(also including the CPU number).

Also (nit) maybe "... while one still active"?

> +    }
> +
> +    /* Reads must be done before following write (local cpu ordering only). */
> +    barrier();
> +
> +    per_cpu(nmi_cont_func, cpu) = NULL;

I think this still is too lax, and you really need to use
xchg() to make sure you won't lose any case of "busy" (which
may become set between the if() above and the write here).

Jan
