
[Xen-changelog] [xen stable-4.7] x86/boot: Calculate the most appropriate BTI mitigation to use



commit 327a7836744ca8d7e1cfc6dc476d51d7c63f68ea
Author:     Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
AuthorDate: Wed Feb 14 11:43:28 2018 +0100
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Wed Feb 14 11:43:28 2018 +0100

    x86/boot: Calculate the most appropriate BTI mitigation to use
    
    See the logic and comments in init_speculation_mitigations() for further
    details.
    
    There are two controls for RSB overwriting, because in principle there are
    cases where it might be safe to forgo rsb_native (off the top of my head:
    SMEP active, no 32bit PV guests at all, and no use of the vm_event/paging
    subsystems for HVM guests, though I make no guarantee that this list of
    restrictions is exhaustive).
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    
    x86/spec_ctrl: Fix determination of when to use IBRS
    
    The original version of this logic was:
    
        /*
         * On Intel hardware, we'd like to use retpoline in preference to
         * IBRS, but only if it is safe on this hardware.
         */
        else if ( boot_cpu_has(X86_FEATURE_IBRSB) )
        {
            if ( retpoline_safe() )
                thunk = THUNK_RETPOLINE;
            else
                ibrs = true;
        }
    
    but it was changed by a request during review.  Sadly, the result is buggy as
    it breaks the later fallback logic by allowing IBRS to appear as available
    when in fact it isn't.
    
    In practice this means that on retpoline-unsafe hardware without IBRS, we
    select THUNK_JMP despite intending to select THUNK_RETPOLINE.
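    
    For reference, the corrected ordering (as implemented in the hunk below)
    checks retpoline safety before considering IBRS, so that the later
    fallback logic sees an accurate picture:
    
        if ( cpu_has_lfence_dispatch )
            thunk = THUNK_LFENCE;
        /*
         * On Intel hardware, we'd like to use retpoline in preference to
         * IBRS, but only if it is safe on this hardware.
         */
        else if ( retpoline_safe() )
            thunk = THUNK_RETPOLINE;
        else if ( boot_cpu_has(X86_FEATURE_IBRSB) )
            ibrs = 1;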
    
    Reported-by: Zhenzhong Duan <zhenzhong.duan@xxxxxxxxxx>
    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: 2713715305ca516f698d58cec5e0b322c3b2c4eb
    master date: 2018-01-26 14:10:21 +0000
    master commit: 30cbd0c83ef3d0edac2d5bcc41a9a2b7a843ae58
    master date: 2018-02-06 18:32:58 +0000
---
 docs/misc/xen-command-line.markdown |  10 ++-
 xen/arch/x86/cpu/common.c           |  13 ++++
 xen/arch/x86/spec_ctrl.c            | 141 +++++++++++++++++++++++++++++++++++-
 xen/include/asm-x86/spec_ctrl.h     |   4 +-
 4 files changed, 162 insertions(+), 6 deletions(-)

diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
index 01631f1..c7962e8 100644
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -245,7 +245,7 @@ enough. Setting this to a high value may cause boot failure, particularly if
 the NMI watchdog is also enabled.
 
 ### bti (x86)
-> `= List of [ thunk=retpoline|lfence|jmp ]`
+> `= List of [ thunk=retpoline|lfence|jmp, ibrs=<bool>, rsb_{vmexit,native}=<bool> ]`
 
 Branch Target Injection controls.  By default, Xen will pick the most
 appropriate BTI mitigations based on compiled in support, loaded microcode,
@@ -260,6 +260,14 @@ locations.  The default thunk is `retpoline` (generally preferred for Intel
 hardware), with the alternatives being `jmp` (a `jmp *%reg` gadget, minimal
 overhead), and `lfence` (an `lfence; jmp *%reg` gadget, preferred for AMD).
 
+On hardware supporting IBRS, the `ibrs=` option can be used to force or
+prevent Xen using the feature itself.  If Xen is not using IBRS itself,
+functionality is still set up so IBRS can be virtualised for guests.
+
+The `rsb_vmexit=` and `rsb_native=` options can be used to fine tune when the
+RSB gets overwritten.  There are individual controls for an entry from HVM
+context, and an entry from a native (PV or Xen) context.
+
 ### xenheap\_megabytes (arm32)
 > `= <size>`
 
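As an illustration of the new syntax (the values here are arbitrary, not
recommendations), the controls combine into a single comma-separated list on
the hypervisor command line:

    bti=thunk=lfence,ibrs=0,rsb_native=1,rsb_vmexit=1
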
diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index 48f3aa5..50e9f33 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -415,6 +415,19 @@ void identify_cpu(struct cpuinfo_x86 *c)
                if (test_bit(X86_FEATURE_IND_THUNK_JMP,
                             boot_cpu_data.x86_capability))
                        __set_bit(X86_FEATURE_IND_THUNK_JMP, c->x86_capability);
+               if (test_bit(X86_FEATURE_XEN_IBRS_SET,
+                            boot_cpu_data.x86_capability))
+                       __set_bit(X86_FEATURE_XEN_IBRS_SET, c->x86_capability);
+               if (test_bit(X86_FEATURE_XEN_IBRS_CLEAR,
+                            boot_cpu_data.x86_capability))
+                       __set_bit(X86_FEATURE_XEN_IBRS_CLEAR,
+                                 c->x86_capability);
+               if (test_bit(X86_FEATURE_RSB_NATIVE,
+                            boot_cpu_data.x86_capability))
+                       __set_bit(X86_FEATURE_RSB_NATIVE, c->x86_capability);
+               if (test_bit(X86_FEATURE_RSB_VMEXIT,
+                            boot_cpu_data.x86_capability))
+                       __set_bit(X86_FEATURE_RSB_VMEXIT, c->x86_capability);
 
                /* AND the already accumulated flags with these */
                for ( i = 0 ; i < NCAPINTS ; i++ )
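
The four new synthetic feature bits are propagated from the BSP into each
secondary CPU using the same test_bit/__set_bit pattern already used for the
thunk bits above.  A hypothetical helper (not present in the tree) makes the
pattern explicit:

    /* Hypothetical helper: copy a synthetic feature bit chosen on the
     * BSP (boot_cpu_data) into a secondary CPU's capability mask. */
    static void propagate_bsp_feature(unsigned int feat,
                                      struct cpuinfo_x86 *c)
    {
        if (test_bit(feat, boot_cpu_data.x86_capability))
            __set_bit(feat, c->x86_capability);
    }
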
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index 4546f6f..797f4ae 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -20,6 +20,7 @@
 #include <xen/init.h>
 #include <xen/lib.h>
 
+#include <asm/microcode.h>
 #include <asm/msr-index.h>
 #include <asm/processor.h>
 #include <asm/spec_ctrl.h>
@@ -33,11 +34,15 @@ static enum ind_thunk {
     THUNK_LFENCE,
     THUNK_JMP,
 } opt_thunk __initdata = THUNK_DEFAULT;
+static int8_t __initdata opt_ibrs = -1;
+static bool_t __initdata opt_rsb_native = 1;
+static bool_t __initdata opt_rsb_vmexit = 1;
+uint8_t __read_mostly default_bti_ist_info;
 
 static int __init parse_bti(const char *s)
 {
     const char *ss;
-    int rc = 0;
+    int val, rc = 0;
 
     do {
         ss = strchr(s, ',');
@@ -57,6 +62,12 @@ static int __init parse_bti(const char *s)
             else
                 rc = -EINVAL;
         }
+        else if ( (val = parse_boolean("ibrs", s, ss)) >= 0 )
+            opt_ibrs = val;
+        else if ( (val = parse_boolean("rsb_native", s, ss)) >= 0 )
+            opt_rsb_native = val;
+        else if ( (val = parse_boolean("rsb_vmexit", s, ss)) >= 0 )
+            opt_rsb_vmexit = val;
         else
             rc = -EINVAL;
 
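For context, parse_boolean() is assumed here to follow its usual Xen
semantics: it returns -1 when the sub-option does not match, and the parsed
value otherwise, which is why each new branch tests for `>= 0`.  Some
illustrative inputs (hypothetical, for exposition only):

    /*
     * "bti=ibrs=0"        -> opt_ibrs = 0       (IBRS forced off)
     * "bti=rsb_native"    -> opt_rsb_native = 1 (a bare name means true)
     * "bti=no-rsb_vmexit" -> opt_rsb_vmexit = 0 (a "no-" prefix negates)
     */
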
@@ -93,24 +104,84 @@ static void __init print_details(enum ind_thunk thunk)
         printk(XENLOG_DEBUG "  Compiled-in support: INDIRECT_THUNK\n");
 
     printk(XENLOG_INFO
-           "BTI mitigations: Thunk %s\n",
+           "BTI mitigations: Thunk %s, Others:%s%s%s\n",
            thunk == THUNK_NONE      ? "N/A" :
            thunk == THUNK_RETPOLINE ? "RETPOLINE" :
            thunk == THUNK_LFENCE    ? "LFENCE" :
-           thunk == THUNK_JMP       ? "JMP" : "?");
+           thunk == THUNK_JMP       ? "JMP" : "?",
+           boot_cpu_has(X86_FEATURE_XEN_IBRS_SET)    ? " IBRS+" :
+           boot_cpu_has(X86_FEATURE_XEN_IBRS_CLEAR)  ? " IBRS-"      : "",
+           boot_cpu_has(X86_FEATURE_RSB_NATIVE)      ? " RSB_NATIVE" : "",
+           boot_cpu_has(X86_FEATURE_RSB_VMEXIT)      ? " RSB_VMEXIT" : "");
+}
+
+/* Calculate whether Retpoline is known-safe on this CPU. */
+static bool_t __init retpoline_safe(void)
+{
+    unsigned int ucode_rev = this_cpu(ucode_cpu_info).cpu_sig.rev;
+
+    if ( boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
+        return 1;
+
+    if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL ||
+         boot_cpu_data.x86 != 6 )
+        return 0;
+
+    switch ( boot_cpu_data.x86_model )
+    {
+    case 0x17: /* Penryn */
+    case 0x1d: /* Dunnington */
+    case 0x1e: /* Nehalem */
+    case 0x1f: /* Auburndale / Havendale */
+    case 0x1a: /* Nehalem EP */
+    case 0x2e: /* Nehalem EX */
+    case 0x25: /* Westmere */
+    case 0x2c: /* Westmere EP */
+    case 0x2f: /* Westmere EX */
+    case 0x2a: /* SandyBridge */
+    case 0x2d: /* SandyBridge EP/EX */
+    case 0x3a: /* IvyBridge */
+    case 0x3e: /* IvyBridge EP/EX */
+    case 0x3c: /* Haswell */
+    case 0x3f: /* Haswell EX/EP */
+    case 0x45: /* Haswell D */
+    case 0x46: /* Haswell H */
+        return 1;
+
+        /*
+         * Broadwell processors are retpoline-safe after specific microcode
+         * versions.
+         */
+    case 0x3d: /* Broadwell */
+        return ucode_rev >= 0x28;
+    case 0x47: /* Broadwell H */
+        return ucode_rev >= 0x1b;
+    case 0x4f: /* Broadwell EP/EX */
+        return ucode_rev >= 0xb000025;
+    case 0x56: /* Broadwell D */
+        return 0; /* TBD. */
+
+        /*
+         * Skylake and later processors are not retpoline-safe.
+         */
+    default:
+        return 0;
+    }
 }
 
 void __init init_speculation_mitigations(void)
 {
     enum ind_thunk thunk = THUNK_DEFAULT;
+    bool_t ibrs = 0;
 
     /*
      * Has the user specified any custom BTI mitigations?  If so, follow their
      * instructions exactly and disable all heuristics.
      */
-    if ( opt_thunk != THUNK_DEFAULT )
+    if ( opt_thunk != THUNK_DEFAULT || opt_ibrs != -1 )
     {
         thunk = opt_thunk;
+        ibrs  = !!opt_ibrs;
     }
     else
     {
@@ -126,7 +197,18 @@ void __init init_speculation_mitigations(void)
              */
             if ( cpu_has_lfence_dispatch )
                 thunk = THUNK_LFENCE;
+            /*
+             * On Intel hardware, we'd like to use retpoline in preference to
+             * IBRS, but only if it is safe on this hardware.
+             */
+            else if ( retpoline_safe() )
+                thunk = THUNK_RETPOLINE;
+            else if ( boot_cpu_has(X86_FEATURE_IBRSB) )
+                ibrs = 1;
         }
+        /* Without compiler thunk support, use IBRS if available. */
+        else if ( boot_cpu_has(X86_FEATURE_IBRSB) )
+            ibrs = 1;
     }
 
     /*
@@ -137,6 +219,13 @@ void __init init_speculation_mitigations(void)
         thunk = THUNK_NONE;
 
     /*
+     * If IBRS is in use and thunks are compiled in, there is no point
+     * suffering extra overhead.  Switch to the least-overhead thunk.
+     */
+    if ( ibrs && thunk == THUNK_DEFAULT )
+        thunk = THUNK_JMP;
+
+    /*
      * If there are still no thunk preferences, the compiled default is
      * actually retpoline, and it is better than nothing.
      */
@@ -149,6 +238,50 @@ void __init init_speculation_mitigations(void)
     else if ( thunk == THUNK_JMP )
         __set_bit(X86_FEATURE_IND_THUNK_JMP, boot_cpu_data.x86_capability);
 
+    if ( boot_cpu_has(X86_FEATURE_IBRSB) )
+    {
+        /*
+         * Even if we've chosen to not have IBRS set in Xen context, we still
+         * need the IBRS entry/exit logic to virtualise IBRS support for
+         * guests.
+         */
+        if ( ibrs )
+            __set_bit(X86_FEATURE_XEN_IBRS_SET, boot_cpu_data.x86_capability);
+        else
+            __set_bit(X86_FEATURE_XEN_IBRS_CLEAR, boot_cpu_data.x86_capability);
+
+        default_bti_ist_info |= BTI_IST_WRMSR | ibrs;
+    }
+
+    /*
+     * PV guests can poison the RSB to any virtual address from which
+     * they can execute a call instruction.  This is necessarily outside
+     * of the Xen supervisor mappings.
+     *
+     * With SMEP enabled, the processor won't speculate into user mappings.
+     * Therefore, in this case, we don't need to worry about poisoned entries
+     * from 64bit PV guests.
+     *
+     * 32bit PV guest kernels run in ring 1, so use supervisor mappings.
+     * If a processor speculates to 32bit PV guest kernel mappings, it is
+     * speculating in 64bit supervisor mode, and can leak data.
+     */
+    if ( opt_rsb_native )
+    {
+        __set_bit(X86_FEATURE_RSB_NATIVE, boot_cpu_data.x86_capability);
+        default_bti_ist_info |= BTI_IST_RSB;
+    }
+
+    /*
+     * HVM guests can always poison the RSB to point at Xen supervisor
+     * mappings.
+     */
+    if ( opt_rsb_vmexit )
+        __set_bit(X86_FEATURE_RSB_VMEXIT, boot_cpu_data.x86_capability);
+
+    /* (Re)init BSP state now that default_bti_ist_info has been calculated. */
+    init_shadow_spec_ctrl_state();
+
     print_details(thunk);
 }
 
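A note on `default_bti_ist_info |= BTI_IST_WRMSR | ibrs;` above: OR-ing the
boolean `ibrs` straight into the bitmap only works if the IBRS flag occupies
bit 0.  A plausible layout, stated as an assumption (the constants live in an
asm header not touched by this patch):

    /* Assumed bit layout -- not shown in this patch. */
    #define BTI_IST_IBRS  (1 << 0) /* IBRS is set while in Xen context.    */
    #define BTI_IST_WRMSR (1 << 1) /* IST entry path must write SPEC_CTRL. */
    #define BTI_IST_RSB   (1 << 2) /* IST entry path must overwrite RSB.   */
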
diff --git a/xen/include/asm-x86/spec_ctrl.h b/xen/include/asm-x86/spec_ctrl.h
index c454b02..6120e4f 100644
--- a/xen/include/asm-x86/spec_ctrl.h
+++ b/xen/include/asm-x86/spec_ctrl.h
@@ -24,12 +24,14 @@
 
 void init_speculation_mitigations(void);
 
+extern uint8_t default_bti_ist_info;
+
 static inline void init_shadow_spec_ctrl_state(void)
 {
     struct cpu_info *info = get_cpu_info();
 
     info->shadow_spec_ctrl = info->use_shadow_spec_ctrl = 0;
-    info->bti_ist_info = 0;
+    info->bti_ist_info = default_bti_ist_info;
 }
 
 #endif /* !__X86_SPEC_CTRL_H__ */
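
Putting the heuristics together, the default outcomes with no `bti=` override
work out as follows (a summary of the logic above, not text from the patch):

    /*
     * AMD with LFENCE dispatch serialising  -> thunk = LFENCE
     * Intel, retpoline_safe()               -> thunk = RETPOLINE
     * Intel, not retpoline-safe, with IBRSB -> ibrs = 1, thunk = JMP
     * No compiled thunk support, with IBRSB -> ibrs = 1
     * Anything else                         -> compiled default (retpoline)
     */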
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.7

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/xen-changelog