Re: [PATCH 4/5] x86/cpuid: Simplify recalculate_xstate()
On 05/05/2021 09:19, Jan Beulich wrote:
> On 04.05.2021 15:58, Andrew Cooper wrote:
>> On 04/05/2021 13:43, Jan Beulich wrote:
>>> I'm also not happy at all to see you use a literal 3 here. We have
>>> a struct for this, after all.
>>>
>>>>         p->xstate.xss_low   =  xstates & XSTATE_XSAVES_ONLY;
>>>>         p->xstate.xss_high  = (xstates & XSTATE_XSAVES_ONLY) >> 32;
>>>>     }
>>>> -   else
>>>> -       xstates &= ~XSTATE_XSAVES_ONLY;
>>>>
>>>> -   for ( i = 2; i < min(63ul, ARRAY_SIZE(p->xstate.comp)); ++i )
>>>> +   /* Subleafs 2+ */
>>>> +   xstates &= ~XSTATE_FP_SSE;
>>>> +   BUILD_BUG_ON(ARRAY_SIZE(p->xstate.comp) < 63);
>>>> +   for_each_set_bit ( i, &xstates, 63 )
>>>>     {
>>>> -       uint64_t curr_xstate = 1ul << i;
>>>> -
>>>> -       if ( !(xstates & curr_xstate) )
>>>> -           continue;
>>>> -
>>>> -       p->xstate.comp[i].size   = xstate_sizes[i];
>>>> -       p->xstate.comp[i].offset = xstate_offsets[i];
>>>> -       p->xstate.comp[i].xss    = curr_xstate & XSTATE_XSAVES_ONLY;
>>>> -       p->xstate.comp[i].align  = curr_xstate & xstate_align;
>>>> +       /*
>>>> +        * Pass through size (eax) and offset (ebx) directly.  Visibility of
>>>> +        * attributes in ecx limited by visible features in Da1.
>>>> +        */
>>>> +       p->xstate.raw[i].a = raw_cpuid_policy.xstate.raw[i].a;
>>>> +       p->xstate.raw[i].b = raw_cpuid_policy.xstate.raw[i].b;
>>>> +       p->xstate.raw[i].c = raw_cpuid_policy.xstate.raw[i].c & ecx_bits;
>>> To me, going to raw[].{a,b,c,d} looks like a backwards move, to be
>>> honest. Both this and the literal 3 above make it harder to locate
>>> all the places that need changing if a new bit (like xfd) is to be
>>> added. It would be better if grep-ing for an existing field name
>>> (say "xss") would easily turn up all involved places.
>> It's specifically to reduce the number of areas needing editing when a
>> new state is added, and therefore the number of opportunities to screw
>> things up.
>>
>> As said in the commit message, I'm not even convinced that the ecx_bits
>> mask is necessary, as new attributes only come in with new behaviours of
>> new state components.
>>
>> If we choose to skip the ecx masking, then this loop body becomes even
>> more simple.  Just p->xstate.raw[i] = raw_cpuid_policy.xstate.raw[i].
>>
>> Even if Intel do break with tradition, and retrofit new attributes into
>> existing subleafs, leaking them to guests won't cause anything to
>> explode (the bits are still reserved after all), and we can fix anything
>> necessary at that point.
> I don't think this would necessarily go without breakage. What if,
> assuming XFD support is in, an existing component got XFD sensitivity
> added to it?

I think that is exceedingly unlikely to happen.

> If, like you were suggesting elsewhere, and like I had
> it initially, we used a build-time constant for XFD-affected
> components, we'd break consuming guests. The per-component XFD bit
> (just to again take as example) also isn't strictly speaking tied to
> the general XFD feature flag (but to me it makes sense for us to
> enforce respective consistency). Plus, in general, the moment a flag
> is no longer reserved in the spec, it is not reserved anywhere
> anymore: An aware (newer) guest running on unaware (older) Xen ought
> to still function correctly.

They're still technically reserved, because of the masking of the XFD
bit in the feature leaf.

However, having pondered this for some time, we do need to retain the
attribute masking, because e.g. AMX && !XSAVEC is a legal (if very
unwise) combination to expose, and the align64 bits want to disappear
from the TILE state attributes.

Also, in terms of implementation, the easiest way to do something
plausible here is a dependency chain of XSAVE => XSAVEC (implies align64
masking) => XSAVES (implies xss masking) => CET_*.  XFD would depend on
XSAVE, and would imply masking the xfd attribute.
This still leaves the broken corner case of the dynamic compressed size,
but I think I can live with that to avoid the added complexity of trying
to force XSAVEC == XSAVES.

~Andrew
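[Editorial aside: the dependency chain sketched in the message above (XSAVE => XSAVEC implies align64 masking, => XSAVES implies xss masking, with XFD depending on XSAVE and implying xfd masking) can be illustrated with a small self-contained sketch. All names below — the attribute bit positions, the `features` struct, and `visible_attr_mask()` — are hypothetical stand-ins for illustration, not Xen's actual implementation.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical attribute bits in the ecx word of CPUID leaf 0xD
 * subleafs 2+ (positions illustrative only).
 */
#define XSTATE_ATTR_XSS     (1u << 0)  /* component managed via XSAVES/IA32_XSS */
#define XSTATE_ATTR_ALIGN64 (1u << 1)  /* 64-byte aligned in compressed format  */
#define XSTATE_ATTR_XFD     (1u << 2)  /* component is XFD-sensitive            */

/* Hypothetical guest-visible feature flags, standing in for the policy. */
struct features {
    bool xsave, xsavec, xsaves, xfd;
};

/*
 * Compute the attribute mask (the "ecx_bits" idea from the patch) from
 * the dependency chain: XSAVEC (requiring XSAVE) unmasks align64,
 * XSAVES (requiring XSAVEC) unmasks xss, and XFD (requiring XSAVE)
 * unmasks the xfd attribute.  Attributes of hidden features stay
 * masked, so e.g. AMX && !XSAVEC drops align64 from TILE state.
 */
static uint32_t visible_attr_mask(const struct features *f)
{
    uint32_t mask = 0;

    if ( f->xsave && f->xsavec )
        mask |= XSTATE_ATTR_ALIGN64;
    if ( f->xsave && f->xsavec && f->xsaves )
        mask |= XSTATE_ATTR_XSS;
    if ( f->xsave && f->xfd )
        mask |= XSTATE_ATTR_XFD;

    return mask;
}
```

With this shape, the per-subleaf loop only ever applies one mask to the raw ecx value, and adding a new attribute (like xfd) means touching the mask computation in exactly one place.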