Re: [Xen-devel] [PATCH v2 5/6] x86/smp: use a dedicated scratch cpumask in send_IPI_mask
On 18/02/2020 11:10, Roger Pau Monné wrote:
> On Tue, Feb 18, 2020 at 10:53:45AM +0000, Andrew Cooper wrote:
>> On 17/02/2020 18:43, Roger Pau Monne wrote:
>>> @@ -67,7 +68,20 @@ static void send_IPI_shortcut(unsigned int shortcut, int vector,
>>>  void send_IPI_mask(const cpumask_t *mask, int vector)
>>>  {
>>>      bool cpus_locked = false;
>>> -    cpumask_t *scratch = this_cpu(scratch_cpumask);
>>> +    cpumask_t *scratch = this_cpu(send_ipi_cpumask);
>>> +    unsigned long flags;
>>> +
>>> +    if ( in_mc() || in_nmi() )
>>> +    {
>>> +        /*
>>> +         * When in #MC or #NMI context Xen cannot use the per-CPU scratch mask
>>> +         * because we have no way to avoid reentry, so do not use the APIC
>>> +         * shorthand.
>>> +         */
>>> +        alternative_vcall(genapic.send_IPI_mask, mask, vector);
>>> +        return;
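
To make the failure mode concrete, here is a minimal sketch of the reentrancy
hazard the hunk above guards against. It reuses the per-CPU accessor and
cpumask helper names visible in the patch, but it is an illustration, not the
actual Xen implementation:

/*
 * Sketch: a per-CPU scratch cpumask is unsafe in any context that can
 * be interrupted by NMI/#MC.  If the interrupting handler also calls
 * send_IPI_mask(), it picks up the same per-CPU mask and clobbers the
 * outer context's contents.
 */
void send_IPI_mask_sketch(const cpumask_t *mask, int vector)
{
    cpumask_t *scratch = this_cpu(send_ipi_cpumask);

    cpumask_copy(scratch, mask);
    /*
     * <-- an NMI arriving here whose handler also sends an IPI
     * overwrites *scratch before the outer context has consumed it.
     */
    cpumask_clear_cpu(smp_processor_id(), scratch);
    /* ... use scratch to pick shorthand vs. targeted delivery ... */
}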
>> The set of things you can safely do in an NMI/MCE handler is small, and
>> does not include sending IPIs.  (In reality, if you're using x2apic, it
>> is safe to send an IPI, because there is no risk of clobbering ICR2
>> behind your outer context's back.)
>>
>> However, if we escalate from NMI/MCE context into crash context, then
>> anything goes. In reality, we only ever send NMIs from the crash path,
>> and that is not permitted to use a shorthand, making this code dead.
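
To spell out the ICR2 point above: with xAPIC, sending an IPI takes two
separate APIC register writes, and only the second one triggers delivery.
A rough sketch, using simplified register defines per the Intel SDM rather
than Xen's actual macros:

#define APIC_ICR       0x300   /* interrupt command, low half: triggers the send */
#define APIC_ICR2      0x310   /* interrupt command, high half: destination */
#define APIC_DM_FIXED  0x000   /* fixed delivery mode */

static void xapic_send_ipi_sketch(unsigned int dest_apicid, int vector)
{
    apic_write(APIC_ICR2, dest_apicid << 24);   /* program the destination */
    /*
     * <-- an NMI/#MC handler sending its own IPI here rewrites ICR2,
     * so the write below delivers the outer IPI to the wrong CPU.
     */
    apic_write(APIC_ICR, APIC_DM_FIXED | vector);
}

x2APIC has no such window: destination and vector are written together with a
single WRMSR to the ICR MSR, which is why the parenthetical above holds.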
> This was requested by Jan, as a safety measure.
That may be, but it doesn't mean it is correct.  If execution ever
enters this function in NMI/MCE context, there is a real,
state-corrupting bug higher up the call stack.
~Andrew