
Re: [Xen-devel] XSAVE flavors

On Tue, Jan 26, 2016 at 08:12:20AM -0700, Jan Beulich wrote:
> >>> On 26.01.16 at 15:33, <JBeulich@xxxxxxxx> wrote:
> > originally I only meant to inquire about the state of the promised
> > alternatives improvement to the XSAVE code. However, while
> > looking over the code in question again I stumbled across a
> > separate issue: XSAVES, just like XSAVEOPT, may use the
> > "modified" optimization. However, the fcs and fds handling code
> > that has been present around the use of XSAVEOPT did not also
> > get applied to the XSAVES path. I suppose this was just an
> > oversight?
Really sorry for the late response. The alternatives patch for the XSAVE code has been ready for a couple of weeks; it also fixes the problem of XSAVES using the "modified" optimization.
I will send it now.
> > 
> > With this another question then is whether, when both XSAVEC
> > and XSAVEOPT are available, it is indeed always better to use
> > XSAVEC (as the code is doing after your enabling).
Currently, though, no machine supports XSAVEC without also supporting XSAVES.
I enabled XSAVEC simply because it is an available feature.
> And I'm afraid there's yet one more issue: If my reading of the
> SDM is right, then the offsets at which components get saved
> by XSAVEC / XSAVES aren't fixed, but depend on RFBM (as that's
> what gets stored into xcomp_bv[62:0]). xstate_comp_offsets[],
> otoh, gets computed based on all available features, irrespective
> of vcpu_xsave_mask() returning four different values depending
> on current guest state. I can't see how get_xsave_addr() can
> work correctly without honoring xcomp_bv. Nor can I convince
> myself that state can't get corrupted / lost, e.g. when a save
> with v->fpu_dirtied set is followed by one with v->fpu_dirtied
> clear.
> Am I misunderstanding what the SDM writes?
Yes, you are right. This is an issue; I will find a way to solve it.

> Jan

Xen-devel mailing list