
Re: [Xen-devel] [PATCH v2 5/6] x86/smp: use a dedicated scratch cpumask in send_IPI_mask



On Tue, Feb 18, 2020 at 10:53:45AM +0000, Andrew Cooper wrote:
> On 17/02/2020 18:43, Roger Pau Monne wrote:
> > @@ -67,7 +68,20 @@ static void send_IPI_shortcut(unsigned int shortcut, int vector,
> >  void send_IPI_mask(const cpumask_t *mask, int vector)
> >  {
> >      bool cpus_locked = false;
> > -    cpumask_t *scratch = this_cpu(scratch_cpumask);
> > +    cpumask_t *scratch = this_cpu(send_ipi_cpumask);
> > +    unsigned long flags;
> > +
> > +    if ( in_mc() || in_nmi() )
> > +    {
> > +        /*
> > +         * When in #MC or #NMI context Xen cannot use the per-CPU scratch mask
> > +         * because we have no way to avoid reentry, so do not use the APIC
> > +         * shorthand.
> > +         */
> > +        alternative_vcall(genapic.send_IPI_mask, mask, vector);
> > +        return;
> 
> The set of things you can safely do in an NMI/MCE handler is small, and
> does not include sending IPIs.  (In reality, if you're using x2apic, it
> is safe to send an IPI because there is no risk of clobbering ICR2
> behind your outer context's back).
> 
> However, if we escalate from NMI/MCE context into crash context, then
> anything goes.  In reality, we only ever send NMIs from the crash path,
> and that is not permitted to use a shorthand, making this code dead.

This was requested by Jan as a safety measure, even though we might not
currently send IPIs from such contexts.

I think it's better to be safe than sorry: without those checks, an IPI
usage added later in #MC or #NMI context could easily go unnoticed.
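
For context, here is a condensed sketch of how the guard fits into
send_IPI_mask() after this patch (the in_mc()/in_nmi() helpers and the
send_ipi_cpumask per-CPU variable are introduced by this series; the rest
of the function is elided):

    void send_IPI_mask(const cpumask_t *mask, int vector)
    {
        cpumask_t *scratch = this_cpu(send_ipi_cpumask);

        if ( in_mc() || in_nmi() )
        {
            /*
             * #MC or #NMI can interrupt a context that was already using
             * the scratch mask, and there is no way to avoid that reentry,
             * so skip the shorthand optimisation and hand the mask straight
             * to the genapic implementation.
             */
            alternative_vcall(genapic.send_IPI_mask, mask, vector);
            return;
        }

        /* ... shorthand handling using 'scratch' continues here ... */
    }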

Thanks, Roger.


 

