Re: [PATCH] x86/boot: Fix load_system_tables() to be NMI/#MC-safe
On 27.05.2020 15:06, Andrew Cooper wrote:
> @@ -720,30 +721,26 @@ void load_system_tables(void)
> .limit = (IDT_ENTRIES * sizeof(idt_entry_t)) - 1,
> };
>
> - *tss = (struct tss64){
> - /* Main stack for interrupts/exceptions. */
> - .rsp0 = stack_bottom,
> -
> - /* Ring 1 and 2 stacks poisoned. */
> - .rsp1 = 0x8600111111111111ul,
> - .rsp2 = 0x8600111111111111ul,
> -
> - /*
> - * MCE, NMI and Double Fault handlers get their own stacks.
> - * All others poisoned.
> - */
> - .ist = {
> - [IST_MCE - 1] = stack_top + IST_MCE * PAGE_SIZE,
> - [IST_DF - 1] = stack_top + IST_DF * PAGE_SIZE,
> - [IST_NMI - 1] = stack_top + IST_NMI * PAGE_SIZE,
> - [IST_DB - 1] = stack_top + IST_DB * PAGE_SIZE,
> -
> - [IST_MAX ... ARRAY_SIZE(tss->ist) - 1] =
> - 0x8600111111111111ul,
> - },
> -
> - .bitmap = IOBMP_INVALID_OFFSET,
> - };
> + /*
> + * Set up the TSS. Warning - may be live, and the NMI/#MC must remain
> + * valid on every instruction boundary. (Note: these are all
> + * semantically ACCESS_ONCE() due to tss's volatile qualifier.)
> + *
> + * rsp0 refers to the primary stack. #MC, #DF, NMI and #DB handlers
> + * each get their own stacks. No IO Bitmap.
> + */
> + tss->rsp0 = stack_bottom;
> + tss->ist[IST_MCE - 1] = stack_top + IST_MCE * PAGE_SIZE;
> + tss->ist[IST_DF - 1] = stack_top + IST_DF * PAGE_SIZE;
> + tss->ist[IST_NMI - 1] = stack_top + IST_NMI * PAGE_SIZE;
> + tss->ist[IST_DB - 1] = stack_top + IST_DB * PAGE_SIZE;
> + tss->bitmap = IOBMP_INVALID_OFFSET;
> +
> + /* All other stack pointers poisoned. */
> + for ( i = IST_MAX; i < ARRAY_SIZE(tss->ist); ++i )
> + tss->ist[i] = 0x8600111111111111ul;
> + tss->rsp1 = 0x8600111111111111ul;
> + tss->rsp2 = 0x8600111111111111ul;
ACCESS_ONCE() unfortunately only has one of the two needed effects:
it guarantees that each memory location gets accessed exactly once
(which I assume could also be had with just the volatile addition,
without moving away from using an initializer), but it does not
guarantee single-insn accesses. I consider this particularly
relevant here because all of the 64-bit fields are misaligned. By
doing it the way you do, we're setting ourselves up to have to
re-do this yet again in a couple of years' time (presumably using
write_atomic() instead then).
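For illustration, here is a standalone sketch of the distinction
(not Xen's actual code; the struct layout and helper names below are
made up for the example). A volatile / ACCESS_ONCE()-style write is a
single access at the C level, but for a misaligned 64-bit field the
compiler may still emit it as more than one store instruction,
whereas an inline-asm store along the lines of what an 8-byte
write_atomic() boils down to on x86-64 pins it to a single movq:

#include <stdint.h>

/*
 * Volatile-cast write, which is what the tss volatile qualifier /
 * ACCESS_ONCE() gives: exactly one access at the C level, but the
 * compiler remains free to emit it as more than one store
 * instruction, e.g. for a misaligned 64-bit field.
 */
#define ACCESS_ONCE(x) (*(volatile __typeof__(x) *)&(x))

/*
 * Hypothetical stand-in for the hardware TSS layout: the 64-bit
 * stack pointers are only 4-byte aligned, as in the real structure.
 */
struct __attribute__((packed, aligned(4))) fake_tss {
    uint32_t _rsvd;
    uint64_t rsp0;
    uint64_t ist[7];
};

/* One C-level access; no single-instruction guarantee. */
static void write_rsp0_once(struct fake_tss *t, uint64_t val)
{
    ACCESS_ONCE(t->rsp0) = val;
}

/*
 * Forcing a single 64-bit store on x86-64 (sketch only, not Xen's
 * write_atomic() implementation).
 */
static void write_rsp0_single_insn(struct fake_tss *t, uint64_t val)
{
    asm volatile ( "movq %1, %0"
                   : "=m" (t->rsp0)
                   : "r" (val) );
}

int main(void)
{
    struct fake_tss t = { 0 };

    write_rsp0_once(&t, 0x1234);
    write_rsp0_single_insn(&t, 0x5678);

    return 0;
}

Either way the field is written exactly once; only the second
variant nails down the instruction-level behaviour.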
Nevertheless it is a clear improvement, so if you want to leave it
like this:
Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
Jan