
[xen stable-4.13] x86/shadow: suppress "fast fault path" optimization without reserved bits



commit d4ac369247dae56642c60cbd5aea3c3504069977
Author:     Jan Beulich <jbeulich@xxxxxxxx>
AuthorDate: Thu Mar 18 15:06:12 2021 +0100
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Thu Mar 18 15:06:12 2021 +0100

    x86/shadow: suppress "fast fault path" optimization without reserved bits
    
    When none of the physical address bits in PTEs are reserved, we can't
    create any 4k (leaf) PTEs which would trigger reserved bit faults. Hence
    the present SHOPT_FAST_FAULT_PATH machinery needs to be suppressed in
    this case, which is most easily achieved by never creating any magic
    entries.
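
A minimal standalone sketch of the reasoning above (not Xen code; the helper
name is invented): with 4-level paging the architectural physical address
width is 52 bits, so the high bits a magic entry sets are only reserved, and
hence only guaranteed to raise a reserved-bit #PF, when the CPU implements
fewer address bits.

/* Sketch: does a magic value still overlap reserved PTE address bits? */
#include <stdbool.h>
#include <stdint.h>

#define SH_L1E_MAGIC 0xffffffff00000001ULL  /* sets PTE bits 32-63 */
#define PADDR_BITS   52                     /* architectural maximum */

static bool magic_hits_reserved_bits(unsigned int paddr_bits)
{
    /* Leaf PTE address bits paddr_bits..51 are reserved. */
    uint64_t rsvd = ((1ULL << PADDR_BITS) - 1) & ~((1ULL << paddr_bits) - 1);

    return (SH_L1E_MAGIC & rsvd) != 0;      /* false when paddr_bits == 52 */
}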
    
    To compensate a little, eliminate sh_write_p2m_entry_post()'s impact on
    such hardware.
    
    While at it, also avoid using an MMIO magic entry when that would
    truncate the incoming GFN.
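
The GFN truncation check added to sh_l1e_mmio() below is effectively an
insert/extract round trip: if pulling the GFN field back out of the would-be
magic entry does not reproduce the original GFN, the field is too narrow and
no magic entry may be used. A rough standalone equivalent, with the mask
value and helper names made up for the example (Xen's real MASK_INSR() and
MASK_EXTR() macros behave like the helpers here):

/* Sketch: refuse a magic MMIO entry when the GFN wouldn't fit its field. */
#include <stdbool.h>
#include <stdint.h>

#define MMIO_GFN_MASK 0x00000000fffffff0ULL /* hypothetical GFN field */

static inline uint64_t mask_insr(uint64_t val, uint64_t mask)
{
    return (val * (mask & -mask)) & mask;   /* place val into the field */
}

static inline uint64_t mask_extr(uint64_t val, uint64_t mask)
{
    return (val & mask) / (mask & -mask);   /* pull the field back out */
}

static bool gfn_fits_mmio_field(uint64_t gfn)
{
    return mask_extr(mask_insr(gfn, MMIO_GFN_MASK), MMIO_GFN_MASK) == gfn;
}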
    
    Requested-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Acked-by: Tim Deegan <tim@xxxxxxx>
    
    x86/shadow: suppress "fast fault path" optimization when running virtualized
    
    We can't make correctness of our own behavior dependent upon a
    hypervisor underneath us correctly telling us the true physical address
    width the hardware uses. Without knowing this, we can't be certain reserved
    bit faults can actually be observed. Therefore, besides evaluating the
    number of address bits when deciding whether to use the optimization,
    also check whether we're running virtualized ourselves. (Note that since
    we may get migrated when running virtualized, the number of address bits
    may also change.)
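
cpu_has_hypervisor, used by the new sh_have_pte_rsvd_bits() helper in the
diff below, reflects the CPUID "hypervisor present" bit (leaf 1, ECX bit 31).
A standalone sketch of that detection, using the compiler's cpuid.h rather
than Xen's internal feature plumbing:

/* Sketch: are we ourselves running under a hypervisor? (CPUID.1:ECX[31]) */
#include <cpuid.h>
#include <stdbool.h>

static bool running_virtualized(void)
{
    unsigned int eax, ebx, ecx, edx;

    if ( !__get_cpuid(1, &eax, &ebx, &ecx, &edx) )
        return false;

    return ecx & (1u << 31);
}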
    
    Requested-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Acked-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Acked-by: Tim Deegan <tim@xxxxxxx>
    master commit: 9318fdf757ec234f0ee6c5cd381326b2f581d065
    master date: 2021-03-05 13:29:28 +0100
    master commit: 60c0444fae2148452f9ed0b7c49af1fa41f8f522
    master date: 2021-03-08 10:41:50 +0100
---
 xen/arch/x86/mm/shadow/multi.c |  3 ++-
 xen/arch/x86/mm/shadow/types.h | 34 ++++++++++++++++++++++++++++------
 2 files changed, 30 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 26798b317c..61e9cc951e 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -520,7 +520,8 @@ _sh_propagate(struct vcpu *v,
     {
         /* Guest l1e maps emulated MMIO space */
         *sp = sh_l1e_mmio(target_gfn, gflags);
-        d->arch.paging.shadow.has_fast_mmio_entries = true;
+        if ( sh_l1e_is_magic(*sp) )
+            d->arch.paging.shadow.has_fast_mmio_entries = true;
         goto done;
     }
 
diff --git a/xen/arch/x86/mm/shadow/types.h b/xen/arch/x86/mm/shadow/types.h
index d5096748ac..71d8c322ad 100644
--- a/xen/arch/x86/mm/shadow/types.h
+++ b/xen/arch/x86/mm/shadow/types.h
@@ -290,24 +290,41 @@ void sh_destroy_monitor_table(struct vcpu *v, mfn_t mmfn);
  * pagetables.
  *
  * This is only feasible for PAE and 64bit Xen: 32-bit non-PAE PTEs don't
- * have reserved bits that we can use for this.
+ * have reserved bits that we can use for this.  And even there it can only
+ * be used if we can be certain the processor doesn't use all 52 address bits.
  */
 
 #define SH_L1E_MAGIC 0xffffffff00000001ULL
+
+static inline bool sh_have_pte_rsvd_bits(void)
+{
+    return paddr_bits < PADDR_BITS && !cpu_has_hypervisor;
+}
+
 static inline bool sh_l1e_is_magic(shadow_l1e_t sl1e)
 {
     return (sl1e.l1 & SH_L1E_MAGIC) == SH_L1E_MAGIC;
 }
 
 /* Guest not present: a single magic value */
-static inline shadow_l1e_t sh_l1e_gnp(void)
+static inline shadow_l1e_t sh_l1e_gnp_raw(void)
 {
     return (shadow_l1e_t){ -1ULL };
 }
 
+static inline shadow_l1e_t sh_l1e_gnp(void)
+{
+    /*
+     * On systems with no reserved physical address bits we can't engage the
+     * fast fault path.
+     */
+    return sh_have_pte_rsvd_bits() ? sh_l1e_gnp_raw()
+                                   : shadow_l1e_empty();
+}
+
 static inline bool sh_l1e_is_gnp(shadow_l1e_t sl1e)
 {
-    return sl1e.l1 == sh_l1e_gnp().l1;
+    return sl1e.l1 == sh_l1e_gnp_raw().l1;
 }
 
 /*
@@ -322,9 +339,14 @@ static inline bool sh_l1e_is_gnp(shadow_l1e_t sl1e)
 
 static inline shadow_l1e_t sh_l1e_mmio(gfn_t gfn, u32 gflags)
 {
-    return (shadow_l1e_t) { (SH_L1E_MMIO_MAGIC
-                             | MASK_INSR(gfn_x(gfn), SH_L1E_MMIO_GFN_MASK)
-                             | (gflags & (_PAGE_USER|_PAGE_RW))) };
+    unsigned long gfn_val = MASK_INSR(gfn_x(gfn), SH_L1E_MMIO_GFN_MASK);
+
+    if ( !sh_have_pte_rsvd_bits() ||
+         gfn_x(gfn) != MASK_EXTR(gfn_val, SH_L1E_MMIO_GFN_MASK) )
+        return shadow_l1e_empty();
+
+    return (shadow_l1e_t) { (SH_L1E_MMIO_MAGIC | gfn_val |
+                             (gflags & (_PAGE_USER | _PAGE_RW))) };
 }
 
 static inline bool sh_l1e_is_mmio(shadow_l1e_t sl1e)
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.13