
Re: [Xen-devel] [PATCH 2/3] x86/smp: use a dedicated scratch cpumask in send_IPI_mask



On Thu, Feb 13, 2020 at 11:19:02AM +0100, Jan Beulich wrote:
> On 13.02.2020 11:03, Roger Pau Monné wrote:
> > On Thu, Feb 13, 2020 at 10:59:29AM +0100, Jan Beulich wrote:
> >> On 12.02.2020 17:49, Roger Pau Monne wrote:
> >>> Using scratch_cpumask in send_IPI_mask is not safe because it can be
> >>> called from interrupt context, and hence Xen would have to make sure
> >>> all the users of the scratch cpumask disable interrupts while using
> >>> it.
> >>>
> >>> Instead introduce a new cpumask to be used by send_IPI_mask, and
> >>> disable interrupts while using it.
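
(For context, the approach boils down to roughly the following; this is a
simplified sketch rather than the literal patch, with the new per-CPU mask
simply called send_ipi_cpumask here, and assuming the existing
send_IPI_shortcut() helper and the alternative_vcall(genapic.send_IPI_mask,
...) fallback for the non-shorthand path:)

    static DEFINE_PER_CPU(cpumask_t, send_ipi_cpumask);

    void send_IPI_mask(const cpumask_t *mask, int vector)
    {
        cpumask_t *scratch = &this_cpu(send_ipi_cpumask);
        unsigned long flags;

        /*
         * The mask is private to this function, but an interrupt arriving
         * here could re-enter send_IPI_mask() and clobber it, so keep
         * interrupts off while it is in use.
         */
        local_irq_save(flags);

        /*
         * If the request covers every online CPU except ourselves, the
         * APIC "all but self" shorthand can be used instead of sending
         * one IPI per destination.
         */
        cpumask_or(scratch, mask, cpumask_of(smp_processor_id()));
        if ( cpumask_equal(scratch, &cpu_online_map) )
            send_IPI_shortcut(APIC_DEST_ALLBUT, vector, APIC_DEST_PHYSICAL);
        else
            alternative_vcall(genapic.send_IPI_mask, mask, vector);

        local_irq_restore(flags);
    }
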
> >>
> >> My first thought here was: What about NMI or #MC context? Even if
> >> they aren't using the function today (which I didn't check), we
> >> shouldn't introduce a latent issue here that prevents them from doing
> >> so in the future. Instead I think you want to allocate the scratch
> >> mask dynamically (at least if caller context doesn't allow use of the
> >> generic one), and simply refrain from coalescing IPIs if this
> >> allocation fails.
> > 
> > Hm, isn't this going to be quite expensive, and hence render the
> > benefit of using the shorthand moot?
> 
> Depends on how many CPUs there are, i.e. how long the loop otherwise
> would be. When xmalloc() doesn't need to turn to the page allocator,
> I think it won't be overly slow. Another option would be to avoid
> coalescing in a slightly different way (without having to fiddle
> with the scratch mask) when called in interrupt context.
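
If I understand the suggestion correctly, it would look something like the
rough sketch below; the context check and the transient-mask handling are
just illustrative:

    void send_IPI_mask(const cpumask_t *mask, int vector)
    {
        cpumask_t *scratch = NULL;
        bool shorthand = false, transient = false;

        /*
         * Only use the generic per-CPU scratch mask from plain task
         * context; otherwise fall back to a transient allocation, and
         * simply give up on coalescing if that fails.
         */
        if ( !in_irq() && local_irq_is_enabled() )
            scratch = this_cpu(scratch_cpumask);
        else
        {
            scratch = xmalloc(cpumask_t);
            transient = true;
        }

        if ( scratch )
        {
            cpumask_or(scratch, mask, cpumask_of(smp_processor_id()));
            shorthand = cpumask_equal(scratch, &cpu_online_map);
            if ( transient )
                xfree(scratch);
        }

        if ( shorthand )
            send_IPI_shortcut(APIC_DEST_ALLBUT, vector, APIC_DEST_PHYSICAL);
        else
            alternative_vcall(genapic.send_IPI_mask, mask, vector);
    }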

What about preventing the mask usage when in NMI context?

I could introduce something like in_nmi and just avoid the scratch
mask usage in that case (and the shorthand).
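
Something along these lines, as a rough sketch (the in_nmi() helper and the
nmi_nesting counter are purely illustrative; the real thing would hook into
the NMI entry/exit paths):

    static DEFINE_PER_CPU(unsigned int, nmi_nesting);

    /* New helper in the spirit of in_irq(). */
    static bool in_nmi(void)
    {
        return this_cpu(nmi_nesting) != 0;
    }

    void send_IPI_mask(const cpumask_t *mask, int vector)
    {
        /*
         * In NMI context neither the scratch mask nor the shorthand is
         * used; fall straight through to the genapic path.
         */
        if ( in_nmi() )
        {
            alternative_vcall(genapic.send_IPI_mask, mask, vector);
            return;
        }

        /* ... scratch mask + shorthand handling as before ... */
    }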

Thanks, Roger.
