
[xen stable-4.13] xen/page_alloc: Only flush pages to RAM once we know they are scrubbed



commit ab995b6af9ab723b0b52e5ea0e342b612f1a7b89
Author:     Julien Grall <jgrall@xxxxxxxxxx>
AuthorDate: Tue Feb 16 15:38:11 2021 +0100
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Tue Feb 16 15:38:11 2021 +0100

    xen/page_alloc: Only flush pages to RAM once we know they are scrubbed
    
    At the moment, each page is flushed to RAM just after the allocator has
    found some free pages. However, this happens before checking whether
    the pages have been scrubbed.
    
    As a consequence, on Arm, a guest may be able to access the old content
    of the scrubbed pages if it has its cache disabled (the default at
    boot) and the content has not yet reached the Point of Coherency.
    
    The flush is now performed only once we know the content of the page
    will no longer change. This also has the benefit of reducing the amount
    of work done while the heap_lock is held.
    
    This is XSA-364.
    
    Fixes: 307c3be3ccb2 ("mm: Don't scrub pages while holding heap lock in alloc_heap_pages()")
    Signed-off-by: Julien Grall <jgrall@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: 3b1cc15f1931ba56d0ee256fe9bfe65509733b27
    master date: 2021-02-16 15:32:08 +0100
---
 xen/common/page_alloc.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 7cb1bd368b..ebc7f45c0f 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -923,6 +923,7 @@ static struct page_info *alloc_heap_pages(
     bool need_tlbflush = false;
     uint32_t tlbflush_timestamp = 0;
     unsigned int dirty_cnt = 0;
+    mfn_t mfn;
 
     /* Make sure there are enough bits in memflags for nodeID. */
     BUILD_BUG_ON((_MEMF_bits - _MEMF_node) < (8 * sizeof(nodeid_t)));
@@ -1021,11 +1022,6 @@ static struct page_info *alloc_heap_pages(
         pg[i].u.inuse.type_info = 0;
         page_set_owner(&pg[i], NULL);
 
-        /* Ensure cache and RAM are consistent for platforms where the
-         * guest can control its own visibility of/through the cache.
-         */
-        flush_page_to_ram(mfn_x(page_to_mfn(&pg[i])),
-                          !(memflags & MEMF_no_icache_flush));
     }
 
     spin_unlock(&heap_lock);
@@ -1061,6 +1057,14 @@ static struct page_info *alloc_heap_pages(
     if ( need_tlbflush )
         filtered_flush_tlb_mask(tlbflush_timestamp);
 
+    /*
+     * Ensure cache and RAM are consistent for platforms where the guest
+     * can control its own visibility of/through the cache.
+     */
+    mfn = page_to_mfn(pg);
+    for ( i = 0; i < (1U << order); i++ )
+        flush_page_to_ram(mfn_x(mfn) + i, !(memflags & MEMF_no_icache_flush));
+
     return pg;
 }
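
For reference, here is a minimal editorial sketch (not the actual Xen code)
of the ordering the patch establishes in alloc_heap_pages(): allocate and
initialise the pages under heap_lock, drop the lock, scrub any page still
marked dirty, and only then flush the now-final contents to the Point of
Coherency. find_and_init_free_pages() is a hypothetical placeholder for the
allocation logic, and the scrub condition is simplified; scrub_one_page(),
flush_page_to_ram(), page_to_mfn() and mfn_x() are the real primitives used
by page_alloc.c.

/* Editorial sketch only -- heavily simplified from alloc_heap_pages(). */
static struct page_info *alloc_heap_pages_sketch(unsigned int order,
                                                 unsigned int memflags)
{
    struct page_info *pg;
    unsigned int i;
    mfn_t mfn;

    spin_lock(&heap_lock);
    pg = find_and_init_free_pages(order);  /* hypothetical helper */
    spin_unlock(&heap_lock);               /* no cache flush under the lock */

    /* 1. Make the contents final: scrub pages still marked as dirty. */
    for ( i = 0; i < (1U << order); i++ )
        if ( test_and_clear_bit(_PGC_need_scrub, &pg[i].count_info) &&
             !(memflags & MEMF_no_scrub) )
            scrub_one_page(&pg[i]);

    /*
     * 2. Only now push the (scrubbed) contents out to the Point of
     *    Coherency, so a guest running with its caches disabled cannot
     *    observe the pages' previous contents.
     */
    mfn = page_to_mfn(pg);
    for ( i = 0; i < (1U << order); i++ )
        flush_page_to_ram(mfn_x(mfn) + i, !(memflags & MEMF_no_icache_flush));

    return pg;
}

The key property is that flush_page_to_ram() now runs strictly after the
last write to the page (the scrub) and outside the heap_lock critical
section.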
 
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.13



 

