[xen stable-4.14] x86/pv: Rewrite segment context switching from scratch
commit 483b43c4573329a28f1c9e18f90694e5be35ddb9
Author:     Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
AuthorDate: Fri Sep 11 14:11:43 2020 +0200
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Fri Sep 11 14:11:43 2020 +0200

    x86/pv: Rewrite segment context switching from scratch

    There are multiple bugs with the existing implementation.

    On AMD CPUs prior to Zen2, loading a NUL segment selector doesn't clear the
    segment base, which is a problem for 64bit code which typically expects to
    use a NUL %fs/%gs selector.

    On a context switch from any PV vcpu, to a 64bit PV vcpu with an %fs/%gs
    selector which faults, the fixup logic loads NUL, and the guest is entered
    at the failsafe callback with the stale base.

    Alternatively, a PV context switch sequence of 64 (NUL, non-zero base)
    => 32 (NUL) => 64 (NUL, zero base) will similarly cause Xen to enter the
    guest with a stale base.

    Both of these corner cases manifest as state corruption in the final vcpu.
    However, damage is limited to 64bit code expecting to use Thread Local
    Storage with a base pointer of 0, which doesn't occur by default.

    The context switch logic is extremely complicated, and is attempting to
    optimise away loading a NUL selector (which is fast), or writing a 64bit
    base of 0 (which is rare).  Furthermore, it fails to respect Linux's ABI
    with userspace, which manifests as userspace state corruption as far as
    Linux is concerned.

    Always restore all selector and base state, in all cases.

    Leave a large comment explaining hardware behaviour, and the new ABI
    expectations.  Update the comments in the public headers.

    Drop all "segment preloading" to handle the AMD corner case.  It was never
    anything but a waste of time for %ds/%es, and isn't needed now that %fs/%gs
    bases are unconditionally written for 64bit PV guests.

    In load_segments(), store the result of is_pv_32bit_vcpu() as it is an
    expensive predicate now, and not used in a way which impacts speculative
    safety.

    Reported-by: Andy Lutomirski <luto@xxxxxxxxxx>
    Reported-by: Sarah Newman <srn@xxxxxxxxx>
    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
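An editorial aside, not part of the commit text: the "NUL selector, non-zero base"
state described above is the normal situation for 64bit userspace, where the
visible %fs/%gs selector stays NUL while the hidden base register carries the
TLS pointer.  A minimal Linux x86-64 sketch of that state, assuming the
documented SYS_arch_prctl/ARCH_SET_GS behaviour, and added purely for
illustration:

/* gs_base_demo.c - set a GS base from userspace; the selector stays NUL. */
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef ARCH_SET_GS
#define ARCH_SET_GS 0x1001              /* from <asm/prctl.h> */
#endif

int main(void)
{
    static uint64_t tls_slot = 0x1122334455667788ULL;
    uint16_t sel;
    uint64_t val;

    /* Point the hidden GS base at our fake TLS block. */
    if ( syscall(SYS_arch_prctl, ARCH_SET_GS, &tls_slot) )
        return 1;

    /* The visible selector remains NUL... */
    asm volatile ( "mov %%gs, %0" : "=r" (sel) );

    /* ...yet %gs-relative addressing reaches the data via the hidden base. */
    asm volatile ( "mov %%gs:0, %0" : "=r" (val) );

    printf("gs selector = %#x, *(gs:0) = 0x%llx\n",
           sel, (unsigned long long)val);
    return 0;
}

A hypervisor (or CPU) which lets a stale base survive a NUL selector load
corrupts exactly this hidden-base state behind the guest kernel's back.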
    x86/pv: Fix assertions in svm_load_segs()

    OSSTest has shown an assertion failure:
    http://logs.test-lab.xenproject.org/osstest/logs/153906/test-xtf-amd64-amd64-1/serial-rimava1.log

    This is because we pass a non-NUL selector into svm_load_segs(), which is
    something we must not do, as this path does not load the attributes/limits
    from the GDT/LDT.

    Drop the {fs,gs}_sel parameters from svm_load_segs() and use 0 instead.
    This is acceptable even for non-zero NUL segments, as it is how the IRET
    instruction behaves in all CPUs.

    Only use the svm_load_segs() path when both FS and GS are NUL, which is
    the common case when scheduling a 64bit vcpu with 64bit userspace in
    context.

    Fixes: ad0fd291c5 ("x86/pv: Rewrite segment context switching from scratch")
    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>

    master commit: ad0fd291c5e79191c2e3c70e43dded569f11a450
    master date: 2020-09-07 11:32:34 +0100
    master commit: 1e2d3be2e516e6f415ca6029f651b76a8563a27c
    master date: 2020-09-08 16:46:31 +0100
---
 xen/arch/x86/domain.c                    | 189 ++++++++++---------------------
 xen/arch/x86/hvm/svm/svm.c               |   9 +-
 xen/include/asm-x86/hvm/svm/svm.h        |   6 +-
 xen/include/public/arch-x86/xen-x86_64.h |   4 +-
 4 files changed, 69 insertions(+), 139 deletions(-)
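A second editorial aside before the diff, also not patch content: the reworked
fast-path guard in load_segments() below, (uregs->fs | uregs->gs) <= 3, treats
selector values 1-3 as NUL because they differ from 0 only in their requested
privilege level and, as the message above notes, IRET resets them to 0 on the
way back to guest context.  A small standalone C restatement of that predicate,
for illustration only:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Mirrors the "(uregs->fs | uregs->gs) <= 3" test added in load_segments(). */
static bool fs_gs_are_nul(uint16_t fs, uint16_t gs)
{
    /* Only the low two (RPL) bits may be set, i.e. GDT index 0 with TI clear. */
    return ((fs | gs) & ~3) == 0;
}

int main(void)
{
    printf("%d %d %d\n",
           fs_gs_are_nul(0, 0),       /* 1: both NUL            */
           fs_gs_are_nul(3, 0),       /* 1: NUL-like, RPL 3     */
           fs_gs_are_nul(0x33, 0));   /* 0: a real GDT selector */
    return 0;
}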
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 8351391cdd..b1c8644945 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1017,13 +1017,9 @@ int arch_set_info_guest(
     if ( !compat )
     {
         v->arch.pv.syscall_callback_eip = c.nat->syscall_callback_eip;
-        /* non-nul selector kills fs_base */
-        v->arch.pv.fs_base =
-            !(v->arch.user_regs.fs & ~3) ? c.nat->fs_base : 0;
+        v->arch.pv.fs_base = c.nat->fs_base;
         v->arch.pv.gs_base_kernel = c.nat->gs_base_kernel;
-        /* non-nul selector kills gs_base_user */
-        v->arch.pv.gs_base_user =
-            !(v->arch.user_regs.gs & ~3) ? c.nat->gs_base_user : 0;
+        v->arch.pv.gs_base_user = c.nat->gs_base_user;
     }
     else
     {
@@ -1342,58 +1338,60 @@ arch_do_vcpu_op(
 }
 
 /*
- * Loading a nul selector does not clear bases and limits on AMD or Hygon
- * CPUs.  Be on the safe side and re-initialize both to flat segment values
- * before loading a nul selector.
- */
-#define preload_segment(seg, value) do {              \
-    if ( !((value) & ~3) &&                           \
-         (boot_cpu_data.x86_vendor &                  \
-          (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )      \
-        asm volatile ( "movl %k0, %%" #seg            \
-                       :: "r" (FLAT_USER_DS32) );     \
-} while ( false )
-
-#define loadsegment(seg,value) ({               \
-    int __r = 1;                                \
-    asm volatile (                              \
-        "1: movl %k1,%%" #seg "\n2:\n"          \
-        ".section .fixup,\"ax\"\n"              \
-        "3: xorl %k0,%k0\n"                     \
-        "   movl %k0,%%" #seg "\n"              \
-        "   jmp 2b\n"                           \
-        ".previous\n"                           \
-        _ASM_EXTABLE(1b, 3b)                    \
-        : "=r" (__r) : "r" (value), "0" (__r) );\
-    __r; })
-
-/*
- * save_segments() writes a mask of segments which are dirty (non-zero),
- * allowing load_segments() to avoid some expensive segment loads and
- * MSR writes.
+ * Notes on PV segment handling:
+ *  - 32bit: All data from the GDT/LDT.
+ *  - 64bit: In addition, 64bit FS/GS/GS_KERN bases.
+ *
+ * Linux's ABI with userspace expects to preserve the full selector and
+ * segment base, even sel != NUL, base != GDT/LDT for 64bit code.  Xen must
+ * honour this when context switching, to avoid breaking Linux's ABI.
+ *
+ * Note: It is impossible to preserve a selector value of 1, 2 or 3, as these
+ *       get reset to 0 by an IRET back to guest context.  Code playing with
+ *       arcane corners of x86 get to keep all resulting pieces.
+ *
+ * Therefore, we:
+ *  - Load the LDT.
+ *  - Load each segment selector.
+ *    - Any error loads zero, and triggers a failsafe callback.
+ *  - For 64bit, further load the 64bit bases.
+ *
+ * An optimisation exists on SVM-capable hardware, where we use a VMLOAD
+ * instruction to load the LDT and full FS/GS/GS_KERN data in one go.
+ *
+ * AMD-like CPUs prior to Zen2 do not zero the segment base or limit when
+ * loading a NUL selector.  This is a problem in principle when context
+ * switching to a 64bit guest, as a NUL FS/GS segment is usable and will pick
+ * up the stale base.
+ *
+ * However, it is not an issue in practice.  NUL segments are unusable for
+ * 32bit guests (so any stale base won't be used), and we unconditionally
+ * write the full FS/GS bases for 64bit guests.
  */
-static DEFINE_PER_CPU(unsigned int, dirty_segment_mask);
-#define DIRTY_DS           0x01
-#define DIRTY_ES           0x02
-#define DIRTY_FS           0x04
-#define DIRTY_GS           0x08
-#define DIRTY_FS_BASE      0x10
-#define DIRTY_GS_BASE      0x20
-
 static void load_segments(struct vcpu *n)
 {
     struct cpu_user_regs *uregs = &n->arch.user_regs;
-    int all_segs_okay = 1;
-    unsigned int dirty_segment_mask, cpu = smp_processor_id();
-    bool fs_gs_done = false;
+    bool compat = is_pv_32bit_vcpu(n);
+    bool all_segs_okay = true, fs_gs_done = false;
 
-    /* Load and clear the dirty segment mask. */
-    dirty_segment_mask = per_cpu(dirty_segment_mask, cpu);
-    per_cpu(dirty_segment_mask, cpu) = 0;
+    /*
+     * Attempt to load @seg with selector @val.  On error, clear
+     * @all_segs_okay in function scope, and load NUL into @sel.
+     */
+#define TRY_LOAD_SEG(seg, val)                          \
+    asm volatile ( "1: mov %k[_val], %%" #seg "\n\t"    \
+                   "2:\n\t"                             \
+                   ".section .fixup, \"ax\"\n\t"        \
+                   "3: xor %k[ok], %k[ok]\n\t"          \
+                   "   mov %k[ok], %%" #seg "\n\t"      \
+                   "   jmp 2b\n\t"                      \
+                   ".previous\n\t"                      \
+                   _ASM_EXTABLE(1b, 3b)                 \
+                   : [ok] "+r" (all_segs_okay)          \
+                   : [_val] "rm" (val) )
 
 #ifdef CONFIG_HVM
-    if ( cpu_has_svm && !is_pv_32bit_vcpu(n) &&
-         !(read_cr4() & X86_CR4_FSGSBASE) && !((uregs->fs | uregs->gs) & ~3) )
+    if ( cpu_has_svm && !compat && (uregs->fs | uregs->gs) <= 3 )
     {
         unsigned long gsb = n->arch.flags & TF_kernel_mode
             ? n->arch.pv.gs_base_kernel : n->arch.pv.gs_base_user;
@@ -1401,62 +1399,25 @@ static void load_segments(struct vcpu *n)
             ? n->arch.pv.gs_base_user : n->arch.pv.gs_base_kernel;
 
         fs_gs_done = svm_load_segs(n->arch.pv.ldt_ents, LDT_VIRT_START(n),
-                                   uregs->fs, n->arch.pv.fs_base,
-                                   uregs->gs, gsb, gss);
+                                   n->arch.pv.fs_base, gsb, gss);
     }
 #endif
     if ( !fs_gs_done )
-        load_LDT(n);
-
-    /* Either selector != 0 ==> reload. */
-    if ( unlikely((dirty_segment_mask & DIRTY_DS) | uregs->ds) )
     {
-        preload_segment(ds, uregs->ds);
-        all_segs_okay &= loadsegment(ds, uregs->ds);
-    }
-
-    /* Either selector != 0 ==> reload. */
-    if ( unlikely((dirty_segment_mask & DIRTY_ES) | uregs->es) )
-    {
-        preload_segment(es, uregs->es);
-        all_segs_okay &= loadsegment(es, uregs->es);
-    }
+        load_LDT(n);
 
-    /* Either selector != 0 ==> reload. */
-    if ( unlikely((dirty_segment_mask & DIRTY_FS) | uregs->fs) && !fs_gs_done )
-    {
-        all_segs_okay &= loadsegment(fs, uregs->fs);
-        /* non-nul selector updates fs_base */
-        if ( uregs->fs & ~3 )
-            dirty_segment_mask &= ~DIRTY_FS_BASE;
+        TRY_LOAD_SEG(fs, uregs->fs);
+        TRY_LOAD_SEG(gs, uregs->gs);
     }
 
-    /* Either selector != 0 ==> reload. */
-    if ( unlikely((dirty_segment_mask & DIRTY_GS) | uregs->gs) && !fs_gs_done )
-    {
-        all_segs_okay &= loadsegment(gs, uregs->gs);
-        /* non-nul selector updates gs_base_user */
-        if ( uregs->gs & ~3 )
-            dirty_segment_mask &= ~DIRTY_GS_BASE;
-    }
+    TRY_LOAD_SEG(ds, uregs->ds);
+    TRY_LOAD_SEG(es, uregs->es);
 
-    if ( !fs_gs_done && !is_pv_32bit_vcpu(n) )
+    if ( !fs_gs_done && !compat )
     {
-        /* This can only be non-zero if selector is NULL. */
-        if ( n->arch.pv.fs_base | (dirty_segment_mask & DIRTY_FS_BASE) )
-            wrfsbase(n->arch.pv.fs_base);
-
-        /*
-         * Most kernels have non-zero GS base, so don't bother testing.
-         * (For old AMD hardware this is also a serialising instruction,
-         * avoiding erratum #88.)
-         */
+        wrfsbase(n->arch.pv.fs_base);
         wrgsshadow(n->arch.pv.gs_base_kernel);
-
-        /* This can only be non-zero if selector is NULL. */
-        if ( n->arch.pv.gs_base_user |
-             (dirty_segment_mask & DIRTY_GS_BASE) )
-            wrgsbase(n->arch.pv.gs_base_user);
+        wrgsbase(n->arch.pv.gs_base_user);
 
         /* If in kernel mode then switch the GS bases around. */
         if ( (n->arch.flags & TF_kernel_mode) )
@@ -1575,7 +1536,6 @@ static void load_segments(struct vcpu *n)
 static void save_segments(struct vcpu *v)
 {
     struct cpu_user_regs *regs = &v->arch.user_regs;
-    unsigned int dirty_segment_mask = 0;
 
     regs->ds = read_sreg(ds);
     regs->es = read_sreg(es);
@@ -1592,35 +1552,6 @@ static void save_segments(struct vcpu *v)
         else
             v->arch.pv.gs_base_user = gs_base;
     }
-
-    if ( regs->ds )
-        dirty_segment_mask |= DIRTY_DS;
-
-    if ( regs->es )
-        dirty_segment_mask |= DIRTY_ES;
-
-    if ( regs->fs || is_pv_32bit_vcpu(v) )
-    {
-        dirty_segment_mask |= DIRTY_FS;
-        /* non-nul selector kills fs_base */
-        if ( regs->fs & ~3 )
-            v->arch.pv.fs_base = 0;
-    }
-    if ( v->arch.pv.fs_base )
-        dirty_segment_mask |= DIRTY_FS_BASE;
-
-    if ( regs->gs || is_pv_32bit_vcpu(v) )
-    {
-        dirty_segment_mask |= DIRTY_GS;
-        /* non-nul selector kills gs_base_user */
-        if ( regs->gs & ~3 )
-            v->arch.pv.gs_base_user = 0;
-    }
-    if ( v->arch.flags & TF_kernel_mode ? v->arch.pv.gs_base_kernel
-                                        : v->arch.pv.gs_base_user )
-        dirty_segment_mask |= DIRTY_GS_BASE;
-
-    this_cpu(dirty_segment_mask) = dirty_segment_mask;
 }
 
 void paravirt_ctxt_switch_from(struct vcpu *v)
@@ -1830,8 +1761,8 @@ static void __context_switch(void)
 #if defined(CONFIG_PV) && defined(CONFIG_HVM)
     /* Prefetch the VMCB if we expect to use it later in the context switch */
     if ( cpu_has_svm && is_pv_domain(nd) && !is_pv_32bit_domain(nd) &&
-         !is_idle_domain(nd) && !(read_cr4() & X86_CR4_FSGSBASE) )
-        svm_load_segs(0, 0, 0, 0, 0, 0, 0);
+         !is_idle_domain(nd) )
+        svm_load_segs(0, 0, 0, 0, 0);
 #endif
 
     if ( need_full_gdt(nd) && !per_cpu(full_gdt_loaded, cpu) )
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 4eb41792e2..0c0c2a964e 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1518,8 +1518,7 @@ static void svm_init_erratum_383(const struct cpuinfo_x86 *c)
 
 #ifdef CONFIG_PV
 bool svm_load_segs(unsigned int ldt_ents, unsigned long ldt_base,
-                   unsigned int fs_sel, unsigned long fs_base,
-                   unsigned int gs_sel, unsigned long gs_base,
+                   unsigned long fs_base, unsigned long gs_base,
                    unsigned long gs_shadow)
 {
     unsigned int cpu = smp_processor_id();
@@ -1556,14 +1555,12 @@ bool svm_load_segs(unsigned int ldt_ents, unsigned long ldt_base,
         vmcb->ldtr.base = ldt_base;
     }
 
-    ASSERT(!(fs_sel & ~3));
-    vmcb->fs.sel = fs_sel;
+    vmcb->fs.sel = 0;
     vmcb->fs.attr = 0;
     vmcb->fs.limit = 0;
     vmcb->fs.base = fs_base;
 
-    ASSERT(!(gs_sel & ~3));
-    vmcb->gs.sel = gs_sel;
+    vmcb->gs.sel = 0;
     vmcb->gs.attr = 0;
     vmcb->gs.limit = 0;
     vmcb->gs.base = gs_base;
diff --git a/xen/include/asm-x86/hvm/svm/svm.h b/xen/include/asm-x86/hvm/svm/svm.h
index d568e86db9..2310878e41 100644
--- a/xen/include/asm-x86/hvm/svm/svm.h
+++ b/xen/include/asm-x86/hvm/svm/svm.h
@@ -52,10 +52,12 @@ void svm_update_guest_cr(struct vcpu *, unsigned int cr, unsigned int flags);
 /*
  * PV context switch helper.  Calls with zero ldt_base request a prefetch of
  * the VMCB area to be loaded from, instead of an actual load of state.
+ *
+ * Must only be used for NUL FS/GS, as the segment attributes/limits are not
+ * read from the GDT/LDT.
  */
 bool svm_load_segs(unsigned int ldt_ents, unsigned long ldt_base,
-                   unsigned int fs_sel, unsigned long fs_base,
-                   unsigned int gs_sel, unsigned long gs_base,
+                   unsigned long fs_base, unsigned long gs_base,
                    unsigned long gs_shadow);
 
 extern u32 svm_feature_flags;
diff --git a/xen/include/public/arch-x86/xen-x86_64.h b/xen/include/public/arch-x86/xen-x86_64.h
index 342eabc957..40aed14366 100644
--- a/xen/include/public/arch-x86/xen-x86_64.h
+++ b/xen/include/public/arch-x86/xen-x86_64.h
@@ -203,8 +203,8 @@ struct cpu_user_regs {
     uint16_t ss, _pad2[3];
     uint16_t es, _pad3[3];
     uint16_t ds, _pad4[3];
-    uint16_t fs, _pad5[3]; /* Non-nul => takes precedence over fs_base.      */
-    uint16_t gs, _pad6[3]; /* Non-nul => takes precedence over gs_base_user. */
+    uint16_t fs, _pad5[3];
+    uint16_t gs, _pad6[3];
 };
 typedef struct cpu_user_regs cpu_user_regs_t;
 DEFINE_XEN_GUEST_HANDLE(cpu_user_regs_t);
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.14