
[Xen-devel] [PATCH] [XEN-4.2] hvmloader: Workaround uncached P2M mappings being created when relocating RAM



Upstream commit 1c84d046735102e02d2df454ab07f14ac51f235d (the XSA-60 fix)
stopped tracking the setting/unsetting of uncached mode in an HVM guest.
(This was later addressed by the EPT misconfig changes in Xen 4.5.)

On hosts which do not have forced IOMMU snooping, Xen allows uncached
P2M mappings to be created, as otherwise cache coherency issues would occur.

When hvmloader relocates RAM out of the HVM guest's MMIO hole, Xen uses the
default MTRR setting to determine the cache attribute for the destination RAM.

hvmloader has not yet configured the default MTRR to enable write-back
caching at the point the RAM is relocated, so the destination RAM ends up
mapped as uncached.
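
For reference, the architectural behaviour (Intel SDM, IA32_MTRR_DEF_TYPE)
is that while the MTRR enable bit is clear, the effective memory type of
all physical memory is UC, regardless of the default type field. A minimal
sketch of that rule (illustrative only, not Xen's actual lookup code; the
names below are made up for the example):

    #include <stdint.h>

    #define MTRR_DEF_TYPE_ENABLE   (1u << 11)  /* E flag: MTRRs enabled */
    #define MTRR_TYPE_UNCACHABLE   0
    #define MTRR_TYPE_WRBACK       6

    /* Effective memory type for an address not covered by any fixed or
     * variable MTRR range, given the raw IA32_MTRR_DEF_TYPE value. */
    static uint8_t default_memory_type(uint64_t mtrr_def_type)
    {
        if ( !(mtrr_def_type & MTRR_DEF_TYPE_ENABLE) )
            return MTRR_TYPE_UNCACHABLE;   /* MTRRs disabled: all UC */
        return mtrr_def_type & 0xff;       /* otherwise the default type */
    }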

Before the XSA-60 patches, the RAM would have its cacheability setting
changed to write-back once the default MTRR was configured to write-back.

After the XSA-60 patches, this no longer happens and the RAM remains uncached.

This patch sets the default MTRR type to write-back before relocating the
RAM and restores the previous default MTRR value afterwards.
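
The value written below, (1u << 11) | 6, is the IA32_MTRR_DEF_TYPE encoding
of "MTRRs enabled, default type write-back": bit 11 is the E (enable) flag
and the low byte is the default memory type, where 6 means write-back.
In terms of the illustrative constants from the sketch above:

    /* What the workaround writes before the relocation loop. */
    uint32_t wb_def_type = MTRR_DEF_TYPE_ENABLE | MTRR_TYPE_WRBACK;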

The patch was designed to be minimally invasive so it could be backported
easily.

Signed-off-by: Malcolm Crossley <malcolm.crossley@xxxxxxxxxx>

diff -r dfb21287aac1 -r 4c9ba9c07a76 tools/firmware/hvmloader/pci.c
--- a/tools/firmware/hvmloader/pci.c
+++ b/tools/firmware/hvmloader/pci.c
@@ -27,6 +27,7 @@
 
 #include <xen/memory.h>
 #include <xen/hvm/ioreq.h>
+#define MSR_MTRRdefType      0x02ff
 
 unsigned long pci_mem_start = PCI_MEM_START;
 unsigned long pci_mem_end = PCI_MEM_END;
@@ -197,25 +198,34 @@ void pci_setup(void)
             ((pci_mem_start << 1) != 0) )
         pci_mem_start <<= 1;
 
-    /* Relocate RAM that overlaps PCI space (in 64k-page chunks). */
-    while ( (pci_mem_start >> PAGE_SHIFT) < hvm_info->low_mem_pgend )
+    /* Workaround XENMEM_add_to_physmap creating uncached mappings
+     * by setting the default MTRR type to write-back. */
+    if ( (pci_mem_start >> PAGE_SHIFT) < hvm_info->low_mem_pgend )
     {
-        struct xen_add_to_physmap xatp;
-        unsigned int nr_pages = min_t(
-            unsigned int,
-            hvm_info->low_mem_pgend - (pci_mem_start >> PAGE_SHIFT),
-            (1u << 16) - 1);
-        if ( hvm_info->high_mem_pgend == 0 )
-            hvm_info->high_mem_pgend = 1ull << (32 - PAGE_SHIFT);
-        hvm_info->low_mem_pgend -= nr_pages;
-        xatp.domid = DOMID_SELF;
-        xatp.space = XENMAPSPACE_gmfn_range;
-        xatp.idx   = hvm_info->low_mem_pgend;
-        xatp.gpfn  = hvm_info->high_mem_pgend;
-        xatp.size  = nr_pages;
-        if ( hypercall_memory_op(XENMEM_add_to_physmap, &xatp) != 0 )
-            BUG();
-        hvm_info->high_mem_pgend += nr_pages;
+        uint64_t mtrr_def_type = rdmsr(MSR_MTRRdefType);
+        wrmsr(MSR_MTRRdefType, (1u << 11) | 6);
+        /* Relocate RAM that overlaps PCI space (in 64k-page chunks). */
+        while ( (pci_mem_start >> PAGE_SHIFT) < hvm_info->low_mem_pgend )
+        {
+            struct xen_add_to_physmap xatp;
+            unsigned int nr_pages = min_t(
+                unsigned int,
+                hvm_info->low_mem_pgend - (pci_mem_start >> PAGE_SHIFT),
+                (1u << 16) - 1);
+            if ( hvm_info->high_mem_pgend == 0 )
+                hvm_info->high_mem_pgend = 1ull << (32 - PAGE_SHIFT);
+            hvm_info->low_mem_pgend -= nr_pages;
+            xatp.domid = DOMID_SELF;
+            xatp.space = XENMAPSPACE_gmfn_range;
+            xatp.idx   = hvm_info->low_mem_pgend;
+            xatp.gpfn  = hvm_info->high_mem_pgend;
+            xatp.size  = nr_pages;
+            if ( hypercall_memory_op(XENMEM_add_to_physmap, &xatp) != 0 )
+                BUG();
+            hvm_info->high_mem_pgend += nr_pages;
+        }
+        /* Restore previous default MTRR value */
+        wrmsr(MSR_MTRRdefType, mtrr_def_type);
     }
 
     mem_resource.base = pci_mem_start;
