
[Xen-devel] [PATCH SpectreV1+L1TF v4 01/11] is_control_domain: block speculation



Checks of domain properties, such as is_hardware_domain or is_hvm_domain,
might be bypassed by speculatively executing the instructions that follow
them. The reason these checks can be bypassed is that the macros access
the domain structure via a pointer and then test a certain field. Because
this memory access is slow, the CPU predicts the outcome of the test and
speculatively continues execution.
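
For reference, the pre-patch definitions (also visible in the diff below)
are plain pointer accesses into the domain structure:

    #define is_hardware_domain(_d) ((_d) == hardware_domain)
    #define is_control_domain(_d)  ((_d)->is_privileged)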

If an is_control_domain check is bypassed, for example during a
hypercall, data that should only be accessible to the control domain
could be loaded into the cache.
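
As an illustration, consider the following sketch of a hypercall-style
handler. This is not actual Xen code; the function name and the
privileged_data array are invented for the example:

    /* Hypothetical sketch, not actual Xen code. */
    static long do_example_op(struct domain *d, unsigned int idx)
    {
        if ( !is_control_domain(d) )   /* may be speculatively bypassed */
            return -EPERM;

        /*
         * Under speculation, this load can execute before the check
         * retires and pull secret data into the L1 cache.
         */
        return privileged_data[idx];
    }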

Due to the L1TF vulnerability of Intel CPUs, loading hypervisor data into
the L1 cache is problematic, because when hyperthreading is used, a guest
running on the sibling hyperthread of the same core can leak this
potentially secret data.

To prevent these speculative accesses, we block speculation after the
domain property field has been accessed, by adding lfence instructions.
This way, the CPU only continues executing and loading data once the
condition has actually been evaluated.
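
Open-coded, the idea looks like the following conceptual sketch; the
patch wraps this pattern into a macro, and rmb() provides the lfence on
x86:

    if ( d->is_privileged )
    {
        rmb();    /* lfence on x86: no speculation past this point */
        /* ... privileged path ... */
    }
    else
    {
        rmb();    /* the false branch needs a barrier as well */
        /* ... unprivileged path ... */
    }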

As the macros are typically used in if statements, the lfence has to be
introduced in a way that is compatible with conditional expressions.
Therefore, a helper function that returns true after executing an lfence
instruction is introduced. To protect both branches of a conditional, an
lfence instruction has to be executed on each of the two paths. As the
L1TF vulnerability is only present on the x86 architecture, the macros
do not use the lfence instruction on other architectures.
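
For illustration, this is how a protected check expands on x86; each of
the two evaluation paths executes exactly one lfence before the boolean
result is used:

    if ( is_control_domain(d) )
    /* ... expands to ... */
    if ( ((d)->is_privileged && lfence_true()) || !lfence_true() )
    /*
     * is_privileged == true:  lfence_true() runs, the || short-circuits,
     *                         and the result is true after one lfence.
     * is_privileged == false: the && short-circuits, !lfence_true() runs,
     *                         and the result is false after one lfence.
     */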

Introducing the lfence instructions closes many potential speculative
leaks with a simple, unintrusive code change. During performance testing,
we did not observe a measurable performance impact.

Signed-off-by: Norbert Manthey <nmanthey@xxxxxxxxx>

---
 xen/include/xen/nospec.h | 15 +++++++++++++++
 xen/include/xen/sched.h  |  5 +++--
 2 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/xen/include/xen/nospec.h b/xen/include/xen/nospec.h
--- a/xen/include/xen/nospec.h
+++ b/xen/include/xen/nospec.h
@@ -58,6 +58,21 @@ static inline unsigned long array_index_mask_nospec(unsigned long index,
     (typeof(_i)) (_i & _mask);                                          \
 })
 
+/*
+ * Allow the insertion of a read memory barrier into conditionals.
+ */
+#ifdef CONFIG_X86
+static inline bool lfence_true(void) { rmb(); return true; }
+#else
+static inline bool lfence_true(void) { return true; }
+#endif
+
+/*
+ * Protect the evaluation of a conditional against speculative execution.
+ */
+#define evaluate_nospec(condition)                                      \
+    (((condition) && lfence_true()) || !lfence_true())
+
 #endif /* XEN_NOSPEC_H */
 
 /*
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -22,6 +22,7 @@
 #include <asm/atomic.h>
 #include <xen/vpci.h>
 #include <xen/wait.h>
+#include <xen/nospec.h>
 #include <public/xen.h>
 #include <public/domctl.h>
 #include <public/sysctl.h>
@@ -882,10 +883,10 @@ void watchdog_domain_destroy(struct domain *d);
  *    (that is, this would not be suitable for a driver domain)
  *  - There is never a reason to deny the hardware domain access to this
  */
-#define is_hardware_domain(_d) ((_d) == hardware_domain)
+#define is_hardware_domain(_d) evaluate_nospec((_d) == hardware_domain)
 
 /* This check is for functionality specific to a control domain */
-#define is_control_domain(_d) ((_d)->is_privileged)
+#define is_control_domain(_d) evaluate_nospec((_d)->is_privileged)
 
 #define VM_ASSIST(d, t) (test_bit(VMASST_TYPE_ ## t, &(d)->vm_assist))
 
-- 
2.7.4






