
[Xen-changelog] [xen stable-4.6] x86/ept: allow write-combining on !mfn_valid() MMIO mappings again



commit 92074638fada43690ce811c84af6daf72bdb93ef
Author:     David Woodhouse <dwmw@xxxxxxxxxx>
AuthorDate: Mon Feb 20 16:05:29 2017 +0100
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Mon Feb 20 16:05:29 2017 +0100

    x86/ept: allow write-combining on !mfn_valid() MMIO mappings again
    
    For some MMIO regions, such as those high above RAM, mfn_valid() will
    return false.
    
    Since the fix for XSA-154 in commit c61a6f74f80e ("x86: enforce
    consistent cachability of MMIO mappings"), guests have no longer been
    able to use PAT to obtain write-combining on such regions because the
    'ignore PAT' bit is set in EPT.
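
    To see why that bit matters, here is a minimal standalone sketch (not
    Xen code) of the Intel SDM rule at play: when the 'ignore PAT' (IPAT)
    bit is set in an EPT leaf, the EPT memory type is used verbatim and the
    guest's PAT type, e.g. write-combining, has no effect. effective_type()
    is a hypothetical helper, and its IPAT-clear combining is deliberately
    simplified relative to the full SDM table.

        #include <stdio.h>

        #define MTRR_TYPE_UNCACHABLE 0
        #define MTRR_TYPE_WRCOMB     1

        /* Hypothetical helper: effective memory type of one EPT leaf. */
        static int effective_type(int ept_mt, int ipat, int guest_pat_mt)
        {
            if ( ipat )                      /* 'ignore PAT' set: EPT type wins */
                return ept_mt;
            if ( guest_pat_mt == MTRR_TYPE_WRCOMB )
                return MTRR_TYPE_WRCOMB;     /* PAT WC prevails, as with MTRRs */
            return ept_mt;                   /* simplified; the SDM table is richer */
        }

        int main(void)
        {
            /* Post-XSA-154: UC with IPAT set, so a guest asking for WC gets UC. */
            printf("%d\n", effective_type(MTRR_TYPE_UNCACHABLE, 1, MTRR_TYPE_WRCOMB));
            /* With IPAT clear, the guest's PAT write-combining works again. */
            printf("%d\n", effective_type(MTRR_TYPE_UNCACHABLE, 0, MTRR_TYPE_WRCOMB));
            return 0;
        }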
    
    We probably want to err on the side of caution and preserve that
    behaviour for addresses in mmio_ro_ranges, but not for normal MMIO
    mappings. That necessitates a slight refactoring to check mfn_valid()
    later, and let the MMIO case get through to the right code path.
    
    Since we're not bailing out for !mfn_valid() immediately, the range
    checks need to be adjusted to cope: simply by masking in the low bits
    to account for 'order' instead of adding, to avoid overflow when the mfn
    is INVALID_MFN (which happens on unmap, since we carefully call this
    function to fill in the EMT even though the PTE won't be valid).
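
    A quick sketch of the arithmetic, assuming INVALID_MFN is ~0UL as in
    Xen; the addition wraps to a tiny bogus range end, while the mask is a
    no-op on an all-ones mfn:

        #include <stdio.h>

        #define INVALID_MFN (~0UL)

        int main(void)
        {
            unsigned long mfn = INVALID_MFN;
            unsigned int order = 9;          /* 2M superpage: 512 4k frames */

            /* Adding wraps around to a small, bogus end-of-range value... */
            printf("add:  %#lx\n", mfn + (1UL << order) - 1);   /* 0x1fe */
            /* ...while masking in the low bits cannot overflow. */
            printf("mask: %#lx\n", mfn | ((1UL << order) - 1)); /* ~0UL */
            return 0;
        }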
    
    The range checks are also slightly refactored to put only one of them in
    the fast path in the common case. If it doesn't overlap, then it
    *definitely* isn't contained, so we don't need both checks. And if it
    overlaps and is only one page, then it definitely *is* contained.
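
    The same reasoning with concrete numbers, using a hypothetical [lo, hi]
    interval in place of mmio_ro_ranges (overlaps() and contains() are
    illustrative stand-ins for the rangeset calls):

        #include <stdbool.h>
        #include <stdio.h>

        struct range { unsigned long lo, hi; };

        static bool overlaps(struct range r, unsigned long s, unsigned long e)
        { return s <= r.hi && e >= r.lo; }

        static bool contains(struct range r, unsigned long s, unsigned long e)
        { return s >= r.lo && e <= r.hi; }

        int main(void)
        {
            struct range ro = { 0x100, 0x1ff };   /* hypothetical r/o MMIO */

            /* order 0: a single page that overlaps is necessarily
             * contained, so contains() can be skipped on that path. */
            printf("%d\n", overlaps(ro, 0x100, 0x100));         /* 1 */

            /* order 9: 0x100-0x2ff overlaps but is not contained, so
             * the caller returns -1 and relies on the page being split. */
            printf("%d %d\n", overlaps(ro, 0x100, 0x2ff),
                              contains(ro, 0x100, 0x2ff));      /* 1 0 */
            return 0;
        }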
    
    Finally, add a comment clarifying how that 'return -1' works: it isn't
    returning an error and causing the mapping to fail; it relies on
    resolve_misconfig() being able to split the mapping later. So it's
    *only* sane to do it where order>0 and the 'problem' will be solved by
    splitting the large page. Not for blindly returning 'error', which I was
    tempted to do in my first attempt.
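
    To make that contract concrete, here is a self-contained toy (in no way
    the real Xen code): a hypothetical get_emt() returns -1 for a superpage
    that mixes normal and read-only frames, and a hypothetical map() reacts
    by splitting and re-evaluating each half, mirroring what
    resolve_misconfig() does for real EPT entries.

        #include <stdbool.h>
        #include <stdio.h>

        static bool is_ro(unsigned long mfn)     /* hypothetical ro set: {2, 3} */
        { return mfn == 2 || mfn == 3; }

        static int get_emt(unsigned long mfn, unsigned int order)
        {
            bool any = false, all = true;

            for ( unsigned long m = mfn; m <= (mfn | ((1UL << order) - 1)); m++ )
                if ( is_ro(m) ) any = true; else all = false;

            if ( any && (!order || all) )
                return 0;                        /* wholly ro: UNCACHABLE */
            return any ? -1 : 6;                 /* mixed: force split; else WRBACK */
        }

        static void map(unsigned long mfn, unsigned int order)
        {
            int emt = get_emt(mfn, order);

            if ( emt < 0 )                       /* only sane when order > 0 */
            {
                /* split one bit at a time for brevity; EPT splits 512-way */
                map(mfn, order - 1);
                map(mfn | (1UL << (order - 1)), order - 1);
                return;
            }
            printf("map mfn %#lx order %u emt %d\n", mfn, order, emt);
        }

        int main(void)
        {
            map(0, 2);                           /* frames 0-3; 2 and 3 are ro */
            return 0;
        }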
    
    Signed-off-by: David Woodhouse <dwmw@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Kevin Tian <kevin.tian@xxxxxxxxx>
    master commit: 30921dc2df3665ca1b2593595aa6725ff013d386
    master date: 2017-02-07 14:30:01 +0100
---
 xen/arch/x86/hvm/mtrr.c | 44 ++++++++++++++++++++++++++------------------
 1 file changed, 26 insertions(+), 18 deletions(-)

diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 4188ae7..cee97d8 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -770,17 +770,19 @@ int epte_get_entry_emt(struct domain *d, unsigned long gfn, mfn_t mfn,
     if ( v->domain != d )
         v = d->vcpu ? d->vcpu[0] : NULL;
 
-    if ( !mfn_valid(mfn_x(mfn)) ||
-         rangeset_contains_range(mmio_ro_ranges, mfn_x(mfn),
-                                 mfn_x(mfn) + (1UL << order) - 1) )
-    {
-        *ipat = 1;
-        return MTRR_TYPE_UNCACHABLE;
-    }
-
+    /* Mask, not add, for order so it works with INVALID_MFN on unmapping */
     if ( rangeset_overlaps_range(mmio_ro_ranges, mfn_x(mfn),
-                                 mfn_x(mfn) + (1UL << order) - 1) )
+                                 mfn_x(mfn) | ((1UL << order) - 1)) )
+    {
+        if ( !order || rangeset_contains_range(mmio_ro_ranges, mfn_x(mfn),
+                                               mfn_x(mfn) | ((1UL << order) - 1)) )
+        {
+            *ipat = 1;
+            return MTRR_TYPE_UNCACHABLE;
+        }
+        /* Force invalid memory type so resolve_misconfig() will split it */
         return -1;
+    }
 
     switch ( hvm_get_mem_pinned_cacheattr(d, gfn, order, &type) )
     {
@@ -791,15 +793,6 @@ int epte_get_entry_emt(struct domain *d, unsigned long gfn, mfn_t mfn,
         return -1;
     }
 
-    if ( !need_iommu(d) && !cache_flush_permitted(d) )
-    {
-        ASSERT(!direct_mmio ||
-               !((mfn_x(mfn) ^ d->arch.hvm_domain.vmx.apic_access_mfn) >>
-                 order));
-        *ipat = 1;
-        return MTRR_TYPE_WRBACK;
-    }
-
     if ( direct_mmio )
     {
         if ( (mfn_x(mfn) ^ d->arch.hvm_domain.vmx.apic_access_mfn) >> order )
@@ -810,6 +803,21 @@ int epte_get_entry_emt(struct domain *d, unsigned long gfn, mfn_t mfn,
         return MTRR_TYPE_WRBACK;
     }
 
+    if ( !mfn_valid(mfn_x(mfn)) )
+    {
+        *ipat = 1;
+        return MTRR_TYPE_UNCACHABLE;
+    }
+
+    if ( !need_iommu(d) && !cache_flush_permitted(d) )
+    {
+        ASSERT(!direct_mmio ||
+               !((mfn_x(mfn) ^ d->arch.hvm_domain.vmx.apic_access_mfn) >>
+                 order));
+        *ipat = 1;
+        return MTRR_TYPE_WRBACK;
+    }
+
     gmtrr_mtype = is_hvm_domain(d) && v ?
                   get_mtrr_type(&v->arch.hvm_vcpu.mtrr,
                                 gfn << PAGE_SHIFT, order) :
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.6
