Re: [PATCH v4 2/7] x86/xstate: Cross-check dynamic XSTATE sizes at boot
On 17.06.2024 19:39, Andrew Cooper wrote:
> Right now, xstate_ctxt_size() performs a cross-check of size with CPUID in
> for every call.  This is expensive, being used for domain create/migrate, as
> well as to service certain guest CPUID instructions.
>
> Instead, arrange to check the sizes once at boot.  See the code comments for
> details.  Right now, it just checks hardware against the algorithm
> expectations.  Later patches will add further cross-checking.
>
> Introduce more X86_XCR0_* and X86_XSS_* constants CPUID bits.  This is to
> maximise coverage in the sanity check, even if we don't expect to
> use/virtualise some of these features any time soon.  Leave HDC and HWP alone
> for now; we don't have CPUID bits from them stored nicely.
>
> Only perform the cross-checks when SELF_TESTS are active.  It's only
> developers or new hardware liable to trip these checks, and Xen at least
> tracks "maximum value ever seen in xcr0" for the lifetime of the VM, which we
> don't want to be tickling in the general case.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>

I may certainly give R-b on the patch as it is, but I have a few questions
first:

> --- a/xen/arch/x86/xstate.c
> +++ b/xen/arch/x86/xstate.c
> @@ -604,9 +604,164 @@ static bool valid_xcr0(uint64_t xcr0)
>      if ( !(xcr0 & X86_XCR0_BNDREGS) != !(xcr0 & X86_XCR0_BNDCSR) )
>          return false;
>
> +    /* TILECFG and TILEDATA must be the same. */
> +    if ( !(xcr0 & X86_XCR0_TILE_CFG) != !(xcr0 & X86_XCR0_TILE_DATA) )
> +        return false;
> +
>      return true;
>  }
>
> +struct xcheck_state {
> +    uint64_t states;
> +    uint32_t uncomp_size;
> +    uint32_t comp_size;
> +};
> +
> +static void __init check_new_xstate(struct xcheck_state *s, uint64_t new)
> +{
> +    uint32_t hw_size;
> +
> +    BUILD_BUG_ON(X86_XCR0_STATES & X86_XSS_STATES);
> +
> +    BUG_ON(s->states & new); /* States only increase. */
> +    BUG_ON(!valid_xcr0(s->states | new)); /* Xen thinks it's a good value. */
> +    BUG_ON(new & ~(X86_XCR0_STATES | X86_XSS_STATES)); /* Known state. */
> +    BUG_ON((new & X86_XCR0_STATES) &&
> +           (new & X86_XSS_STATES)); /* User or supervisor, not both. */
> +
> +    s->states |= new;
> +    if ( new & X86_XCR0_STATES )
> +    {
> +        if ( !set_xcr0(s->states & X86_XCR0_STATES) )
> +            BUG();
> +    }
> +    else
> +        set_msr_xss(s->states & X86_XSS_STATES);
> +
> +    /*
> +     * Check the uncompressed size.  Some XSTATEs are out-of-order and fill
> +     * in prior holes in the state area, so we check that the size doesn't
> +     * decrease.
> +     */
> +    hw_size = cpuid_count_ebx(0xd, 0);

Going forward, do we mean to get rid of XSTATE_CPUID? Else imo it should be
used here (and again below).

> +    if ( hw_size < s->uncomp_size )
> +        panic("XSTATE 0x%016"PRIx64", new bits {%63pbl}, uncompressed hw size %#x < prev size %#x\n",
> +              s->states, &new, hw_size, s->uncomp_size);
> +
> +    s->uncomp_size = hw_size;

Since XSS state doesn't affect the uncompressed layout, this looks like
largely dead code for that case. Did you consider moving this into the if()
above? Alternatively, should the comparison use == when dealing with XSS
bits?

> +    /*
> +     * Check the compressed size, if available.  All components strictly
> +     * appear in index order.  In principle there are no holes, but some
> +     * components have their base address 64-byte aligned for efficiency
> +     * reasons (e.g. AMX-TILE) and there are other components small enough
> +     * to fit in the gap (e.g. PKRU) without increasing the overall length.
> +     */
> +    hw_size = cpuid_count_ebx(0xd, 1);
> +
> +    if ( cpu_has_xsavec )
> +    {
> +        if ( hw_size < s->comp_size )
> +            panic("XSTATE 0x%016"PRIx64", new bits {%63pbl}, compressed hw size %#x < prev size %#x\n",
> +                  s->states, &new, hw_size, s->comp_size);

Unlike for the uncompressed size, can't it be <= here, since - as the comment
says - components appear strictly in index order, and no component has zero
size?

> +        s->comp_size = hw_size;
> +    }
> +    else if ( hw_size ) /* Compressed size reported, but no XSAVEC ? */
> +    {
> +        static bool once;
> +
> +        if ( !once )
> +        {
> +            WARN();
> +            once = true;
> +        }
> +    }
> +}
> +
> +/*
> + * The {un,}compressed XSTATE sizes are reported by dynamic CPUID value,
> + * based on the current %XCR0 and MSR_XSS values.  The exact layout is also
> + * feature and vendor specific.  Cross-check Xen's understanding against
> + * real hardware on boot.
> + *
> + * Testing every combination is prohibitive, so we use a partial approach.
> + * Starting with nothing active, we add new XSTATEs and check that the CPUID
> + * dynamic values never decrease.
> + */
> +static void __init noinline xstate_check_sizes(void)
> +{
> +    uint64_t old_xcr0 = get_xcr0();
> +    uint64_t old_xss = get_msr_xss();
> +    struct xcheck_state s = {};
> +
> +    /*
> +     * User XSTATEs, increasing by index.
> +     *
> +     * Chronologically, Intel and AMD had identical layouts for AVX (YMM).
> +     * AMD introduced LWP in Fam15h, following immediately on from YMM.
> +     * Intel left an LWP-shaped hole when adding MPX (BND{CSR,REGS}) in
> +     * Skylake.  AMD removed LWP in Fam17h, putting PKRU in the same space,
> +     * breaking layout compatibility with Intel and having a knock-on effect
> +     * on all subsequent states.
> +     */
> +    check_new_xstate(&s, X86_XCR0_SSE | X86_XCR0_FP);
> +
> +    if ( cpu_has_avx )
> +        check_new_xstate(&s, X86_XCR0_YMM);
> +
> +    if ( cpu_has_mpx )
> +        check_new_xstate(&s, X86_XCR0_BNDCSR | X86_XCR0_BNDREGS);
> +
> +    if ( cpu_has_avx512f )
> +        check_new_xstate(&s, X86_XCR0_HI_ZMM | X86_XCR0_ZMM | X86_XCR0_OPMASK);
> +
> +    if ( cpu_has_pku )
> +        check_new_xstate(&s, X86_XCR0_PKRU);
> +
> +    if ( boot_cpu_has(X86_FEATURE_AMX_TILE) )
> +        check_new_xstate(&s, X86_XCR0_TILE_DATA | X86_XCR0_TILE_CFG);
> +
> +    if ( boot_cpu_has(X86_FEATURE_LWP) )
> +        check_new_xstate(&s, X86_XCR0_LWP);
> +
> +    /*
> +     * Supervisor XSTATEs, increasing by index.
> +     *
> +     * Intel Broadwell has Processor Trace but no XSAVES.  There doesn't
> +     * appear to have been a new enumeration when X86_XSS_PROC_TRACE was
> +     * introduced in Skylake.
> +     */
> +    if ( cpu_has_xsaves )
> +    {
> +        if ( cpu_has_proc_trace )
> +            check_new_xstate(&s, X86_XSS_PROC_TRACE);
> +
> +        if ( boot_cpu_has(X86_FEATURE_ENQCMD) )
> +            check_new_xstate(&s, X86_XSS_PASID);
> +
> +        if ( boot_cpu_has(X86_FEATURE_CET_SS) ||
> +             boot_cpu_has(X86_FEATURE_CET_IBT) )
> +        {
> +            check_new_xstate(&s, X86_XSS_CET_U);
> +            check_new_xstate(&s, X86_XSS_CET_S);
> +        }
> +
> +        if ( boot_cpu_has(X86_FEATURE_UINTR) )
> +            check_new_xstate(&s, X86_XSS_UINTR);
> +
> +        if ( boot_cpu_has(X86_FEATURE_ARCH_LBR) )
> +            check_new_xstate(&s, X86_XSS_LBR);
> +    }

In principle the compressed state checking could be extended to also verify
that the offsets are strictly increasing. That, however, would require
interleaving the XCR0 and XSS checks, strictly by index. Did you consider
(and then discard) doing so?

Jan
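
To make the second question above concrete, here is a minimal sketch of what
moving the uncompressed-size handling under the XCR0 branch might look like.
It is an illustration only, not part of the submitted patch; it reuses
cpuid_count_ebx() and the panic() text from the quoted code, and requires
XSS-only additions to leave the uncompressed size unchanged:

    /*
     * Sketch only: user (XCR0) states may grow the uncompressed size, while
     * supervisor (XSS) states never contribute to the uncompressed layout,
     * so adding them must leave the size exactly as it was.
     */
    hw_size = cpuid_count_ebx(0xd, 0);

    if ( new & X86_XCR0_STATES )
    {
        if ( hw_size < s->uncomp_size )
            panic("XSTATE 0x%016"PRIx64", new bits {%63pbl}, uncompressed hw size %#x < prev size %#x\n",
                  s->states, &new, hw_size, s->uncomp_size);

        s->uncomp_size = hw_size;
    }
    else if ( hw_size != s->uncomp_size )
        panic("XSTATE 0x%016"PRIx64", new bits {%63pbl}, uncompressed hw size %#x != prev size %#x\n",
              s->states, &new, hw_size, s->uncomp_size);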
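
The commit message also says the cross-checks only run when SELF_TESTS are
active. The corresponding hunk isn't quoted in this reply, but the gating
presumably amounts to something like the following at the call site (an
assumption, shown only for context; the Kconfig symbol name is inferred from
the commit message):

    /*
     * Assumed call site (not quoted above): only run the boot-time size
     * cross-checks when the self-test Kconfig option is enabled, since only
     * developers or new hardware are liable to trip them.
     */
    if ( IS_ENABLED(CONFIG_SELF_TESTS) )
        xstate_check_sizes();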