
Re: [Xen-devel] [V10 1/3] x86/xsaves: enable xsaves/xrstors/xsavec in xen



On Tue, Nov 17, 2015 at 04:49:03AM -0700, Jan Beulich wrote:
> >>> On 13.11.15 at 02:54, <shuai.ruan@xxxxxxxxxxxxxxx> wrote:
> > @@ -91,7 +251,15 @@ void xsave(struct vcpu *v, uint64_t mask)
> >          typeof(ptr->fpu_sse.fip.sel) fcs = ptr->fpu_sse.fip.sel;
> >          typeof(ptr->fpu_sse.fdp.sel) fds = ptr->fpu_sse.fdp.sel;
> >  
> > -        if ( cpu_has_xsaveopt )
> > +        if ( cpu_has_xsaves )
> > +            asm volatile ( ".byte 0x48,0x0f,0xc7,0x2f"
> > +                           : "=m" (*ptr)
> > +                           : "a" (lmask), "d" (hmask), "D" (ptr) );
> > +        else if ( cpu_has_xsavec )
> > +            asm volatile ( ".byte 0x48,0x0f,0xc7,0x27"
> > +                           : "=m" (*ptr)
> > +                           : "a" (lmask), "d" (hmask), "D" (ptr) );
> > +        else if ( cpu_has_xsaveopt )
> 
> I (only) now realize why I replied on the earlier version (wrongly)
> assuming you hadn't switched to alternative patching: This code.
> Why would you not do on the save side what you do on the
> restore one?
> 
For the save side: cpu_has_xsaveopt still needs to be handled independently.
If we only use alternative asm for xsaves/xsavec, the remaining code handling
xsaveopt/xsave would have to look like this:

if ( !cpu_has_xsavec && !cpu_has_xsaves && cpu_has_xsaveopt )
    ...
else if ( !cpu_has_xsavec && !cpu_has_xsaves )
    ...

If you think it is ok to do it like this, I will add it.
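(Filled in minimally, this is only a sketch reusing the asm bodies from the
current xsaveopt/xsave paths in xsave(); opcode bytes as in today's code:)

    /* xsaves/xsavec handled via alternatives above this point. */
    if ( !cpu_has_xsavec && !cpu_has_xsaves && cpu_has_xsaveopt )
        asm volatile ( ".byte 0x48,0x0f,0xae,0x37" /* xsaveopt64 */
                       : "=m" (*ptr)
                       : "a" (lmask), "d" (hmask), "D" (ptr) );
    else if ( !cpu_has_xsavec && !cpu_has_xsaves )
        asm volatile ( ".byte 0x48,0x0f,0xae,0x27" /* xsave64 */
                       : "=m" (*ptr)
                       : "a" (lmask), "d" (hmask), "D" (ptr) );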

Also, for the instructions without REX.W on the save side, alternative asm can
be used, but that requires adding a new alternatives macro able to handle four
instructions based on four cpu_features. Should I do this in this patch set or
in a separate one? Which would you prefer?
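To illustrate what I have in mind (purely a sketch: alternative_io_3 is a
hypothetical macro name and the exact operand layout is only indicative), the
save side could then collapse to something like:

    /*
     * Hypothetical: assumes a new alternative_io_3()-style macro taking a
     * base instruction plus three feature/replacement pairs.  Opcode bytes
     * are the forms without REX.W.
     */
    alternative_io_3(".byte 0x0f,0xae,0x27",                       /* xsave    */
                     ".byte 0x0f,0xae,0x37", X86_FEATURE_XSAVEOPT, /* xsaveopt */
                     ".byte 0x0f,0xc7,0x27", X86_FEATURE_XSAVEC,   /* xsavec   */
                     ".byte 0x0f,0xc7,0x2f", X86_FEATURE_XSAVES,   /* xsaves   */
                     "=m" (*ptr),
                     "a" (lmask), "d" (hmask), "D" (ptr));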

> Nothing here really requires "fixup" to be exception fixup code.
> Macro names should, however, be chosen according to the
> purpose of the macro, not according to the first use case of it.
> 
> That said, I don't think you even need this: The size of the
> patchable region is determined by the labels (661 and 662 in
> the header), and those labels will remain unaffected if
> between them code emissions to other sections occur. By
> having said that you shouldn't need to introduce XSTATE_FIXUP
> I had actually tried to make you aware of this very fact.
> 
Sorry, now I understand your meaning: the fixup code and the instruction are
used as a whole in alternative_io, like the code below. Is that ok with you?

alternative_io("1: "".byte 0x48,0x0f,0xae,0x2f \n"
               ".section .fixup,\"ax\"         \n"
               "2: mov %6,%%ecx                \n"
               "   xor %1,%1                   \n"
               "   rep stosb                   \n"
               "   lea %3,%0                   \n"
               "   mov %4,%1                   \n"
               "   jmp 1b                      \n"
               ".previous                      \n"
               _ASM_EXTABLE(1b, 2b),
               ".byte 0x48,0x0f,0xc7,0x1f      \n"
               ".section .fixup,\"ax\"         \n"
               "2: mov %6,%%ecx                \n"
               "   xor %1,%1                   \n"
               "   rep stosb                   \n"
               "   lea %3,%0                   \n"
               "   mov %4,%1                   \n"
               "   jmp 1b                      \n"
               ".previous                      \n"
               _ASM_EXTABLE(1b, 2b),
               X86_FEATURE_XSAVES,
               ASM_OUTPUT2("+&D" (ptr), "+&a" (lmask)),
               "m" (*ptr), "g" (lmask), "d" (hmask), "m"
               (xsave_cntxt_size)
               : "ecx");

> Jan
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

