
[Xen-changelog] [xen staging] x86/shadow: fetch CPL just once in sh_page_fault()



commit 94c7b060c072794d9f21755db24db1ac502ceb4b
Author:     Jan Beulich <jbeulich@xxxxxxxx>
AuthorDate: Thu Jul 12 10:47:33 2018 +0200
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Thu Jul 12 10:47:33 2018 +0200

    x86/shadow: fetch CPL just once in sh_page_fault()
    
    This isn't so much an optimization as a way of avoiding a gcc bug
    affecting 5.x ... 7.x, triggered by any asm() placed inside the ad hoc
    "rewalk" loop which takes, as an (output?) operand, a register variable
    tied to %rdx (an "rdx" clobber is fine). The issue appears to be a
    collision in register use with the modulo operation in vtlb_hash(),
    which (with optimization enabled) involves a multiplication of two
    64-bit values where the upper half (in %rdx) of the 128-bit result is
    what's of interest.
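    
    For illustration only, a minimal standalone sketch of the shape being
    described (hash_like() and demo() are made-up names, not the actual
    Xen code): a modulo by a constant, which gcc turns into a widening
    multiply whose high half lands in %rdx, sharing a loop with an asm()
    operating on a %rdx-tied register variable:
    
        /* Modulo by a constant: with optimization enabled gcc replaces
         * the division by a 64x64->128 multiply with a "magic" constant;
         * the upper half of the result is produced in %rdx. */
        static unsigned int hash_like(unsigned long page_number)
        {
            return page_number % 13;
        }
    
        unsigned long demo(unsigned long va)
        {
            unsigned long sum = 0;
            unsigned int i;
    
            for ( i = 0; i < 4; ++i )
            {
                /* Register variable tied to %rdx, used as an asm()
                 * operand inside the same loop as the modulo above. */
                register unsigned long tied asm("rdx") = va + i;
    
                asm volatile ( "" : "+r" (tied) );
    
                sum += tied + hash_like(va + i);
            }
    
            return sum;
        }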
    
    Such an asm() was originally going to be introduced implicitly when
    converting most indirect calls through the hvm_funcs table to direct
    calls (via alternative instruction patching); that model was later
    switched to plain clobbers due to further compiler problems, but I
    think the change here is worthwhile nevertheless.
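    
    For contrast, a hedged sketch of the two models just mentioned
    (made-up names, not the actual alternative-patching code): an asm()
    taking a %rdx-tied register variable as an operand, versus one that
    merely names "rdx" in its clobber list, which gcc copes with fine:
    
        /* Output-operand model: the asm() takes a %rdx-tied register
         * variable as an operand, so gcc must track a live value there
         * across the asm() - the form that tripped gcc 5.x ... 7.x. */
        static unsigned long operand_model(unsigned long x)
        {
            register unsigned long r asm("rdx") = x;
    
            asm volatile ( "" : "+r" (r) );
    
            return r;
        }
    
        /* Clobber model: the asm() only declares that it destroys %rdx,
         * so gcc keeps nothing live in that register across it. */
        static unsigned long clobber_model(unsigned long x)
        {
            asm volatile ( "" :: "r" (x) : "rdx" );
    
            return x;
        }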
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Tim Deegan <tim@xxxxxxx>
---
 xen/arch/x86/mm/shadow/multi.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index da586c21c7..021ae252e4 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -2817,6 +2817,7 @@ static int sh_page_fault(struct vcpu *v,
     uint32_t rc, error_code;
     bool walk_ok;
     int version;
+    unsigned int cpl;
     const struct npfec access = {
          .read_access = 1,
          .write_access = !!(regs->error_code & PFEC_write_access),
@@ -2967,6 +2968,8 @@ static int sh_page_fault(struct vcpu *v,
         return 0;
     }
 
+    cpl = is_pv_vcpu(v) ? (regs->ss & 3) : hvm_get_cpl(v);
+
  rewalk:
 
     error_code = regs->error_code;
@@ -3023,8 +3026,7 @@ static int sh_page_fault(struct vcpu *v,
      * If this corner case comes about accidentally, then a security-relevant
      * bug has been tickled.
      */
-    if ( !(error_code & (PFEC_insn_fetch|PFEC_user_mode)) &&
-         (is_pv_vcpu(v) ? (regs->ss & 3) : hvm_get_cpl(v)) == 3 )
+    if ( !(error_code & (PFEC_insn_fetch|PFEC_user_mode)) && cpl == 3 )
         error_code |= PFEC_implicit;
 
     /* The walk is done in a lock-free style, with some sanity check
--
generated by git-patchbot for /home/xen/git/xen.git#staging
