[Xen-devel] [PATCH v2] x86-64/Xen: fix stack switching
While in the native case entry into the kernel happens on the trampoline stack, PV Xen kernels get entered with the current thread stack right away. Hence source and destination stacks are identical in that case, and special care is needed.

Other than in sync_regs(), the copying done on the INT80 path as well as on the NMI path itself isn't NMI / #MC safe, as either of these events occurring in the middle of the stack copying would clobber data on the (source) stack. (Of course, in the NMI case only an #MC could break things.)

I'm not altering the similar code in interrupt_entry(), as that code path is unreachable afaict when running PV Xen guests.

Fixes: 7f2590a110b837af5679d08fc25c6227c5a8c497
Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
Cc: stable@xxxxxxxxxx
---
v2: Correct placement of the .Lint80_keep_stack label.
---
 arch/x86/entry/entry_64.S        |    8 ++++++++
 arch/x86/entry/entry_64_compat.S |   10 ++++++++--
 2 files changed, 16 insertions(+), 2 deletions(-)

--- 4.20-rc3/arch/x86/entry/entry_64.S
+++ 4.20-rc3-x86_64-stack-switch-Xen/arch/x86/entry/entry_64.S
@@ -1380,6 +1380,12 @@ ENTRY(nmi)
 	swapgs
 	cld
 	SWITCH_TO_KERNEL_CR3 scratch_reg=%rdx
+
+	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rdx
+	subq	$8, %rdx
+	xorq	%rsp, %rdx
+	shrq	$PAGE_SHIFT, %rdx
+	jz	.Lnmi_keep_stack
 	movq	%rsp, %rdx
 	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rsp
 	UNWIND_HINT_IRET_REGS base=%rdx offset=8
@@ -1389,6 +1395,8 @@ ENTRY(nmi)
 	pushq	2*8(%rdx)	/* pt_regs->cs */
 	pushq	1*8(%rdx)	/* pt_regs->rip */
 	UNWIND_HINT_IRET_REGS
+.Lnmi_keep_stack:
+
 	pushq	$-1		/* pt_regs->orig_ax */
 	PUSH_AND_CLEAR_REGS rdx=(%rdx)
 	ENCODE_FRAME_POINTER
--- 4.20-rc3/arch/x86/entry/entry_64_compat.S
+++ 4.20-rc3-x86_64-stack-switch-Xen/arch/x86/entry/entry_64_compat.S
@@ -361,17 +361,23 @@ ENTRY(entry_INT80_compat)
 
 	/* Need to switch before accessing the thread stack. */
 	SWITCH_TO_KERNEL_CR3 scratch_reg=%rdi
+
+	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rdi
+	subq	$8, %rdi
+	xorq	%rsp, %rdi
+	shrq	$PAGE_SHIFT, %rdi
+	jz	.Lint80_keep_stack
 	movq	%rsp, %rdi
 	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rsp
-
 	pushq	6*8(%rdi)		/* regs->ss */
 	pushq	5*8(%rdi)		/* regs->rsp */
 	pushq	4*8(%rdi)		/* regs->eflags */
 	pushq	3*8(%rdi)		/* regs->cs */
 	pushq	2*8(%rdi)		/* regs->ip */
 	pushq	1*8(%rdi)		/* regs->orig_ax */
-
 	pushq	(%rdi)			/* pt_regs->di */
+.Lint80_keep_stack:
+
 	pushq	%rsi			/* pt_regs->si */
 	xorl	%esi, %esi		/* nospec   si */
 	pushq	%rdx			/* pt_regs->dx */