[Xen-devel] [PATCH] x86/xstate: don't clobber or leak state when using XSAVES
Commit 4d27280572 ("x86/xsaves: fix overwriting between non-lazy/lazy
xsaves") switched to always saving full state when using the compacted
format (which is the only one XSAVES allows). It didn't, however, also
adjust the restore side: In order to save full state, we also need to
make sure we always load full state, or else the subject vCPU's state
would get clobbered by that of the vCPU which happened to last have in
use the respective component(s).

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
As to inclusion in 4.7: This is a fix to a _latent_ bug, i.e. not one
currently exposed (since XSTATE_XSAVES_ONLY is zero right now).
Nevertheless I think we should avoid releasing code with such an issue.

--- a/xen/arch/x86/i387.c
+++ b/xen/arch/x86/i387.c
@@ -129,13 +129,8 @@ static inline uint64_t vcpu_xsave_mask(c
      * has ever used lazy states (checking xcr0_accum excluding
      * XSTATE_FP_SSE), vcpu_xsave_mask will return XSTATE_ALL. Otherwise
      * return XSTATE_NONLAZY.
-     * XSTATE_FP_SSE may be excluded, because the offsets of XSTATE_FP_SSE
-     * (in the legacy region of xsave area) are fixed, so saving
-     * XSTATE_FP_SSE will not cause overwriting problem.
      */
-    return (v->arch.xsave_area->xsave_hdr.xcomp_bv & XSTATE_COMPACTION_ENABLED)
-           && (v->arch.xcr0_accum & XSTATE_LAZY & ~XSTATE_FP_SSE)
-           ? XSTATE_ALL : XSTATE_NONLAZY;
+    return xstate_all(v) ? XSTATE_ALL : XSTATE_NONLAZY;
 }
 
 /* Save x87 extended state */
@@ -215,11 +210,26 @@ void vcpu_restore_fpu_eager(struct vcpu
 {
     ASSERT(!is_idle_vcpu(v));
 
-    /* save the nonlazy extended state which is not tracked by CR0.TS bit */
-    if ( v->arch.nonlazy_xstate_used )
+    /* Restore nonlazy extended state (i.e. parts not tracked by CR0.TS). */
+    if ( !v->arch.nonlazy_xstate_used )
+        return;
+
+    /* Avoid recursion */
+    clts();
+
+    /*
+     * When saving full state even with !v->fpu_dirtied (see vcpu_xsave_mask()
+     * above) we also need to restore full state, to prevent subsequently
+     * saving state belonging to another vCPU.
+     */
+    if ( xstate_all(v) )
+    {
+        fpu_xrstor(v, XSTATE_ALL);
+        v->fpu_initialised = 1;
+        v->fpu_dirtied = 1;
+    }
+    else
     {
-        /* Avoid recursion */
-        clts();
         fpu_xrstor(v, XSTATE_NONLAZY);
         stts();
     }
--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -681,6 +681,14 @@ int handle_xsetbv(u32 index, u64 new_bv)
     clts();
     if ( curr->fpu_dirtied )
         asm ( "stmxcsr %0" : "=m" (curr->arch.xsave_area->fpu_sse.mxcsr) );
+    else if ( xstate_all(curr) )
+    {
+        /* See the comment in i387.c:vcpu_restore_fpu_eager(). */
+        mask |= XSTATE_LAZY;
+        curr->fpu_initialised = 1;
+        curr->fpu_dirtied = 1;
+        cr0 &= ~X86_CR0_TS;
+    }
     xrstor(curr, mask);
     if ( cr0 & X86_CR0_TS )
         write_cr0(cr0);
--- a/xen/include/asm-x86/xstate.h
+++ b/xen/include/asm-x86/xstate.h
@@ -8,7 +8,7 @@
 #ifndef __ASM_XSTATE_H
 #define __ASM_XSTATE_H
 
-#include <xen/types.h>
+#include <xen/sched.h>
 #include <asm/cpufeature.h>
 
 #define FCW_DEFAULT 0x037f
@@ -107,4 +107,16 @@ int xstate_alloc_save_area(struct vcpu *
 void xstate_init(struct cpuinfo_x86 *c);
 unsigned int xstate_ctxt_size(u64 xcr0);
 
+static inline bool_t xstate_all(const struct vcpu *v)
+{
+    /*
+     * XSTATE_FP_SSE may be excluded, because the offsets of XSTATE_FP_SSE
+     * (in the legacy region of xsave area) are fixed, so saving
+     * XSTATE_FP_SSE will not cause overwriting problem with XSAVES/XSAVEC.
+     */
+    return (v->arch.xsave_area->xsave_hdr.xcomp_bv &
+            XSTATE_COMPACTION_ENABLED) &&
+           (v->arch.xcr0_accum & XSTATE_LAZY & ~XSTATE_FP_SSE);
+}
+
 #endif /* __ASM_XSTATE_H */

Attachment: x86-XSAVES-save-restore-all.patch

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
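As an aside for readers following along: the decision the patch centralizes in xstate_all() can be modeled in isolation. The sketch below is a hypothetical standalone reconstruction, not Xen's real code: the struct layout and the bit values of the XSTATE_* constants are simplified stand-ins chosen only to illustrate how the predicate selects between XSTATE_ALL and XSTATE_NONLAZY.

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-ins for Xen's real definitions (illustrative only). */
#define XSTATE_FP_SSE             0x3ULL        /* x87 + SSE, fixed legacy offsets */
#define XSTATE_LAZY               0x7ULL        /* lazily restored components */
#define XSTATE_NONLAZY            (~XSTATE_LAZY)
#define XSTATE_ALL                (~0ULL)
#define XSTATE_COMPACTION_ENABLED (1ULL << 63)  /* top bit of xcomp_bv */

struct vcpu {
    uint64_t xcomp_bv;   /* stand-in for xsave_hdr.xcomp_bv */
    uint64_t xcr0_accum; /* all XCR0 bits ever enabled for this vCPU */
};

/*
 * Mirror of the patch's xstate_all(): full state must be saved and
 * restored iff the compacted format is in use and some lazy component
 * other than FP/SSE has ever been enabled. FP/SSE can be excluded
 * because it lives at fixed offsets in the legacy region, so it cannot
 * be clobbered by compaction-induced offset changes.
 */
static int xstate_all(const struct vcpu *v)
{
    return (v->xcomp_bv & XSTATE_COMPACTION_ENABLED) &&
           (v->xcr0_accum & XSTATE_LAZY & ~XSTATE_FP_SSE);
}

static uint64_t vcpu_xsave_mask(const struct vcpu *v)
{
    return xstate_all(v) ? XSTATE_ALL : XSTATE_NONLAZY;
}
```

Under these stand-in values, a vCPU using the compacted format with a non-FP/SSE lazy component enabled gets the full-state mask, while a vCPU using the standard format, or one that only ever enabled FP/SSE, keeps the nonlazy-only mask.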