
Re: [Xen-devel] [PATCH v3 2/4] x86: suppress SMEP and SMAP while running 32-bit PV guest code



>>> On 25.03.16 at 19:01, <konrad.wilk@xxxxxxxxxx> wrote:
>>  @@ -174,10 +174,61 @@ compat_bad_hypercall:
>>  /* %rbx: struct vcpu, interrupts disabled */
>>  ENTRY(compat_restore_all_guest)
>>          ASSERT_INTERRUPTS_DISABLED
>> +.Lcr4_orig:
>> +        ASM_NOP8 /* testb $3,UREGS_cs(%rsp) */
>> +        ASM_NOP2 /* jpe   .Lcr4_alt_end */
>> +        ASM_NOP8 /* mov   CPUINFO_cr4...(%rsp), %rax */
>> +        ASM_NOP6 /* and   $..., %rax */
>> +        ASM_NOP8 /* mov   %rax, CPUINFO_cr4...(%rsp) */
>> +        ASM_NOP3 /* mov   %rax, %cr4 */
>> +.Lcr4_orig_end:
>> +        .pushsection .altinstr_replacement, "ax"
>> +.Lcr4_alt:
>> +        testb $3,UREGS_cs(%rsp)
>> +        jpe   .Lcr4_alt_end
> 
> This would jump if the last operation left an even number of set
> bits. And the 'testb' is an 'and' operation, which would give us
> '011' (for $3).
> 
> Why not just depend on the ZF? Other places that test UREGS_cs()
> look to be using that.

Because we _want_ to skip the following code when the outer
context is guest ring 3. See also the v3 part of the revision log.

>> +/* This mustn't modify registers other than %rax. */
>> +ENTRY(cr4_pv32_restore)
>> +        push  %rdx
>> +        GET_CPUINFO_FIELD(cr4, %rdx)
>> +        mov   (%rdx), %rax
>> +        test  $X86_CR4_SMEP|X86_CR4_SMAP,%eax
>> +        jnz   0f
>> +        or    cr4_pv32_mask(%rip), %rax
>> +        mov   %rax, %cr4
>> +        mov   %rax, (%rdx)
> 
> Here you leave %rax with the cr4_pv32_mask value, but:
> 
>> +        pop   %rdx
>> +        ret
>> +0:
>> +#ifndef NDEBUG
>> +        /* Check that _all_ of the bits intended to be set actually are. */
>> +        mov   %cr4, %rax
>> +        and   cr4_pv32_mask(%rip), %eax
>> +        cmp   cr4_pv32_mask(%rip), %eax
>> +        je    1f
>> +        BUG
>> +1:
>> +#endif
>> +        pop   %rdx
>> +        xor   %eax, %eax
> 
> .. Here you clear it. Any particular reason?
> 
>> +        ret

Of course - see handle_exception, where this return value gets
checked. (In the first case above we just care for %rax to hold
any non-zero value.)

>> -.macro LOAD_C_CLOBBERED compat=0
>> +.macro LOAD_C_CLOBBERED compat=0 ax=1
>>  .if !\compat
>>          movq  UREGS_r11(%rsp),%r11
>>          movq  UREGS_r10(%rsp),%r10
>>          movq  UREGS_r9(%rsp),%r9
>>          movq  UREGS_r8(%rsp),%r8
>> -.endif
>> +.if \ax
>>          movq  UREGS_rax(%rsp),%rax
>> +.endif
> 
> Why the .endif here considering you are doing an:
> 
>> +.elseif \ax
> 
> an else if here?
>> +        movl  UREGS_rax(%rsp),%eax
>> +.endif
> 
> Actually, why two 'if \ax' checks?
> 
> Or am I reading this incorrectly?

After the change, the macro first deals with the "native" case
(which requires looking at \ax) and then with the "compat" one
(which likewise requires evaluating \ax).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
