
Re: [Xen-devel] [PATCH 3/3] x86/boot: Detect the firmware SMT setting correctly on Intel hardware



>>> Andrew Cooper <andrew.cooper3@xxxxxxxxxx> 04/13/19 6:22 PM >>>
>--- a/xen/arch/x86/spec_ctrl.c
>+++ b/xen/arch/x86/spec_ctrl.c
>@@ -58,7 +58,7 @@ uint8_t __read_mostly default_xen_spec_ctrl;
>uint8_t __read_mostly default_spec_ctrl_flags;
>
>paddr_t __read_mostly l1tf_addr_mask, __read_mostly l1tf_safe_maddr;
>-static bool __initdata cpu_has_bug_l1tf;
>+static bool __initdata cpu_has_bug_l1tf, hw_smt_enabled;

I wonder whether this wouldn't better start out as "true" and get set to false
when we're certain SMT is off. I'd rather see the nagging one time too often
than get a report that someone's still vulnerable because of having SMT
enabled without noticing.
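For illustration, the inversion I have in mind might look like this standalone sketch (no __initdata, and boot_cpu_data.x86_num_siblings replaced by a plain variable, so it compiles outside of Xen; names otherwise mirror the patch, and the MSR probing is stubbed out):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical sketch of the start-out-as-true suggestion: the flag
 * defaults to true and is only cleared once the topology proves SMT is
 * off, so the nag errs on the side of appearing one time too often.
 */
static bool hw_smt_enabled = true;
static unsigned int x86_num_siblings = 1; /* stand-in for boot_cpu_data */

static void detect_smt_status(void)
{
    /* Only a provably single-threaded topology clears the flag. */
    if ( x86_num_siblings < 2 )
        hw_smt_enabled = false;
}
```

That way an unrecognised or unreadable MSR leaves the flag set, rather than silently suppressing the warning.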


>+static void __init detect_smt_status(void)
>+{
>+    uint64_t val;
>+    unsigned int cpu;
>+
>+    /*
>+     * x86_num_siblings defaults to 1 in the absence of other information, and
>+     * is adjusted based on other topology information found in CPUID leaves.
>+     *
>+     * On AMD hardware, it will be the current SMT configuration.  On Intel
>+     * hardware, it will represent the maximum capability, rather than the
>+     * current configuration.
>+     */
>+    if ( boot_cpu_data.x86_num_siblings < 2 )
>+        return;
>+
>+    /*
>+     * Intel Nehalem and later hardware does have an MSR which reports the
>+     * current count of cores/threads in the package.
>+     *
>+     * At the time of writing, it is almost completely undocumented, so isn't
>+     * virtualised reliably.
>+     */
>+    if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL && !cpu_has_hypervisor &&

While you clarify why you have the last part of the condition in the comment, I
still wonder whether it wouldn't be worthwhile to assume it might be (properly)
virtualized, and rather do a little more sanity checking ...


>+         !rdmsr_safe(MSR_INTEL_CORE_THREAD_COUNT, val) )
>+    {
>+        hw_smt_enabled = (MASK_EXTR(val, MSR_CTC_CORE_MASK) !=
>+                          MASK_EXTR(val, MSR_CTC_THREAD_MASK));
>+        return;

... before this: Verify that both fields are non-zero, that the core count is
no greater than the thread count, and perhaps that the latter also divides
evenly by the former.
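Concretely, the checks I have in mind might look like this standalone sketch (plain shifts stand in for MASK_EXTR, and the field layout, threads in bits 15:0 and cores in bits 31:16, is assumed from the patch's MSR_CTC_* masks):

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical sanity check for a raw MSR_INTEL_CORE_THREAD_COUNT value,
 * as suggested above.  Returns true only if the fields look plausible:
 * both populated, no more cores than threads, and the thread count
 * splitting evenly over the cores.
 */
static bool ctc_fields_sane(uint64_t val)
{
    unsigned int threads = val & 0xffff;
    unsigned int cores   = (val >> 16) & 0xffff;

    return threads != 0 && cores != 0 &&  /* both fields populated */
           cores <= threads &&            /* no more cores than threads */
           threads % cores == 0;          /* threads split evenly */
}
```

A properly virtualizing hypervisor would then still be able to report a sane value, while a zero-filled or nonsensical one would fall back to the existing x86_num_siblings path.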

However, this being an improvement on its own already, I could accept it going
in even in its current shape (albeit preferably with the start-out-as-true
adjustment mentioned above).

Acked-by: Jan Beulich <jbeulich@xxxxxxxx>

>@@ -886,8 +931,7 @@ void __init init_speculation_mitigations(void)
>      * However, if we are on affected hardware, with HT enabled, and the user
>      * hasn't explicitly chosen whether to use HT or not, nag them to do so.
>      */
>-    if ( opt_smt == -1 && cpu_has_bug_l1tf && !pv_shim &&
>-         boot_cpu_data.x86_num_siblings > 1 )
>+    if ( opt_smt == -1 && cpu_has_bug_l1tf && !pv_shim && hw_smt_enabled )

It feels like the pv_shim part should have become redundant now, but I'm
uncertain whether it really does.

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

