
[PATCH] x86/mm: do not mark IO regions as Xen heap


  • To: <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Roger Pau Monne <roger.pau@xxxxxxxxxx>
  • Date: Thu, 10 Sep 2020 15:35:14 +0200
  • Cc: Roger Pau Monne <roger.pau@xxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Paul Durrant <paul@xxxxxxx>
  • Delivery-date: Thu, 10 Sep 2020 13:35:32 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

arch_init_memory treats all the gaps between RAM regions in the physical
memory map as MMIO and uses share_xen_page_with_guest to assign them to
dom_io. This has the side effect of setting the Xen heap flag on such
pages, so is_special_page then returns true for them, which is a problem
in epte_get_entry_emt because such pages are forced to use write-back
cache attributes.
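
For context, the checks involved boil down to roughly the following (a
simplified, paraphrased sketch rather than a verbatim copy of the tree;
see is_special_page in xen/include/xen/mm.h and the special-page
handling in epte_get_entry_emt):

    /* Xen heap pages (and "extra" pages) are considered special. */
    static inline bool is_special_page(const struct page_info *page)
    {
        /* share_xen_page_with_guest sets PGC_xen_heap on the page. */
        return is_xen_heap_page(page) || (page->count_info & PGC_extra);
    }

    /* In epte_get_entry_emt, special pages end up forced to write-back: */
    if ( is_special_page(mfn_to_page(mfn)) )
        return MTRR_TYPE_WRBACK;

Once the MMIO pages are assigned to dom_io without the Xen heap flag,
that path is no longer taken for them.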

Fix this by introducing a new helper to assign the MMIO regions to
dom_io without setting the Xen heap flag on the pages, so that
is_special_page will return false and the pages won't be forced to use
write-back cache attributes.

Fixes: 81fd0d3ca4b2cd ('x86/hvm: simplify 'mmio_direct' check in epte_get_entry_emt()')
Suggested-by: Jan Beulich <jbeulich@xxxxxxxx>
Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
---
Cc: Paul Durrant <paul@xxxxxxx>
---
 xen/arch/x86/mm.c | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 35ec0e11f6..4daf4e038a 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -271,6 +271,18 @@ static l4_pgentry_t __read_mostly split_l4e;
 #define root_pgt_pv_xen_slots ROOT_PAGETABLE_PV_XEN_SLOTS
 #endif
 
+static void __init assign_io_page(struct page_info *page)
+{
+    set_gpfn_from_mfn(mfn_x(page_to_mfn(page)), INVALID_M2P_ENTRY);
+
+    /* The incremented type count pins as writable. */
+    page->u.inuse.type_info = PGT_writable_page | PGT_validated | 1;
+
+    page_set_owner(page, dom_io);
+
+    page->count_info |= PGC_allocated | 1;
+}
+
 void __init arch_init_memory(void)
 {
     unsigned long i, pfn, rstart_pfn, rend_pfn, iostart_pfn, ioend_pfn;
@@ -291,7 +303,7 @@ void __init arch_init_memory(void)
      */
     BUG_ON(pvh_boot && trampoline_phys != 0x1000);
     for ( i = 0; i < 0x100; i++ )
-        share_xen_page_with_guest(mfn_to_page(_mfn(i)), dom_io, SHARE_rw);
+        assign_io_page(mfn_to_page(_mfn(i)));
 
     /* Any areas not specified as RAM by the e820 map are considered I/O. */
     for ( i = 0, pfn = 0; pfn < max_page; i++ )
@@ -332,7 +344,7 @@ void __init arch_init_memory(void)
             if ( !mfn_valid(_mfn(pfn)) )
                 continue;
 
-            share_xen_page_with_guest(mfn_to_page(_mfn(pfn)), dom_io, SHARE_rw);
+            assign_io_page(mfn_to_page(_mfn(pfn)));
         }
 
         /* Skip the RAM region. */
-- 
2.28.0
