
[Xen-changelog] [xen staging-4.12] xen/arm: p2m: Avoid off-by-one check on p2m->max_mapped_gfn



commit bbcd6c5f50adf91a78239e3ad12bf2cdc9331ba4
Author:     Julien Grall <julien.grall@xxxxxxx>
AuthorDate: Thu Oct 31 16:56:34 2019 +0100
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Thu Oct 31 16:56:34 2019 +0100

    xen/arm: p2m: Avoid off-by-one check on p2m->max_mapped_gfn
    
    The code base uses the field p2m->max_mapped_gfn inconsistently. Some
    of the users expect p2m->max_mapped_gfn to contain the highest mapped
    GFN, while others expect the highest mapped GFN + 1.
    
    p2m->max_mapped_gfn is currently set to the highest mapped GFN + 1.
    Because of that, the sanity check on the GFN in
    p2m_resolve_translation_fault() and p2m_get_entry() can be bypassed
    when GFN == p2m->max_mapped_gfn.
    
    p2m_get_root_pointer(p2m->max_mapped_gfn) may return NULL if the GFN is
    outside of the supported address range, and therefore the BUG_ON()
    could be hit.
    
    The current value held in p2m->max_mapped_gfn is inconsistent with the
    expectation of the common code (see domain_get_maximum_gpfn()) and also
    with the documentation of the field.
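
    For reference, a self-contained sketch (again not the Xen source; the
    structures are trimmed down) of the semantics the common code relies
    on: the maximum GPFN reported for a domain is the highest guest frame
    ever mapped, inclusive, so storing "highest + 1" in the field would
    over-report by one frame:

        #include <stdint.h>
        #include <stdio.h>

        struct p2m_domain { uint64_t max_mapped_gfn; };
        struct domain { struct p2m_domain p2m; };

        /* Hypothetical stand-in for domain_get_maximum_gpfn(). */
        static uint64_t get_maximum_gpfn(const struct domain *d)
        {
            return d->p2m.max_mapped_gfn;  /* highest mapped GFN, inclusive */
        }

        int main(void)
        {
            struct domain d = { .p2m = { .max_mapped_gfn = 0x7ffff } };
            printf("maximum gpfn: %#llx\n",
                   (unsigned long long)get_maximum_gpfn(&d));
            return 0;
        }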
    
    Rather than changing the check in p2m_resolve_translation_fault() and
    p2m_get_entry(), p2m->max_mapped_gfn now contains the highest mapped
    GFN, and the callers assuming "highest + 1" are adjusted accordingly.
    
    Take the opportunity to use 1UL rather than 1, as page_order could
    theoretically be big enough that the shift overflows a 32-bit integer.
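
    A standalone demo of the shift concern (the page_order value here is
    hypothetical; as noted above, orders large enough to matter are only a
    theoretical possibility):

        #include <stdio.h>

        int main(void)
        {
            unsigned int page_order = 36;      /* hypothetical order */

            /* "1 << page_order" would shift a 32-bit int, which is
             * undefined for page_order >= 31, so it is not evaluated
             * here. "1UL << page_order" is a full-width shift on LP64
             * targets. */
            unsigned long span = 1UL << page_order;

            printf("1UL << %u = %#lx\n", page_order, span);
            return 0;
        }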
    
    Lastly, the documentation of the field max_mapped_gfn is updated to
    reflect how it is computed.
    
    This is part of XSA-301.
    
    Reported-by: Julien Grall <Julien.Grall@xxxxxxx>
    Signed-off-by: Julien Grall <julien.grall@xxxxxxx>
    Reviewed-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>
    master commit: 6e8e163b46d0823526f1afbbe6f66c668fc811d1
    master date: 2019-10-31 16:18:38 +0100
---
 xen/arch/arm/p2m.c        | 6 +++---
 xen/include/asm-arm/p2m.h | 5 +----
 2 files changed, 4 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 8c20690cec..e6b170335f 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1052,7 +1052,7 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
         p2m_write_pte(entry, pte, p2m->clean_pte);
 
         p2m->max_mapped_gfn = gfn_max(p2m->max_mapped_gfn,
-                                      gfn_add(sgfn, 1 << page_order));
+                                      gfn_add(sgfn, (1UL << page_order) - 1));
         p2m->lowest_mapped_gfn = gfn_min(p2m->lowest_mapped_gfn, sgfn);
     }
 
@@ -1589,7 +1589,7 @@ int relinquish_p2m_mapping(struct domain *d)
     p2m_write_lock(p2m);
 
     start = p2m->lowest_mapped_gfn;
-    end = p2m->max_mapped_gfn;
+    end = gfn_add(p2m->max_mapped_gfn, 1);
 
     for ( ; gfn_x(start) < gfn_x(end);
           start = gfn_next_boundary(start, order) )
@@ -1658,7 +1658,7 @@ int p2m_cache_flush_range(struct domain *d, gfn_t *pstart, gfn_t end)
     p2m_read_lock(p2m);
 
     start = gfn_max(start, p2m->lowest_mapped_gfn);
-    end = gfn_min(end, p2m->max_mapped_gfn);
+    end = gfn_min(end, gfn_add(p2m->max_mapped_gfn, 1));
 
     next_block_gfn = start;
 
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 772d43296f..12d1e137a5 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -36,10 +36,7 @@ struct p2m_domain {
     /* Current Translation Table Base Register for the p2m */
     uint64_t vttbr;
 
-    /*
-     * Highest guest frame that's ever been mapped in the p2m
-     * Only takes into account ram and foreign mapping
-     */
+    /* Highest guest frame that's ever been mapped in the p2m */
     gfn_t max_mapped_gfn;
 
     /*
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.12
