
[xen staging] x86/altp2m: p2m_altp2m_propagate_change() should honor present page order



commit cbd0874fef835b229d91c94ac736ea26b23915da
Author:     Jan Beulich <jbeulich@xxxxxxxx>
AuthorDate: Fri Feb 25 11:09:21 2022 +0100
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Fri Feb 25 11:09:21 2022 +0100

    x86/altp2m: p2m_altp2m_propagate_change() should honor present page order
    
    For higher order mappings the comparison against p2m->min_remapped_gfn
    needs to take the upper bound of the covered GFN range into account, not
    just the base GFN. Otherwise, i.e. when a dropped mapping overlaps the
    remapped range but its base GFN lies outside of that range, an altp2m
    may wrongly not get reset.
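    
    As an illustrative, standalone sketch (not the Xen code itself; the
    helper name is made up), the corrected test amounts to a plain
    interval-overlap check:
    
        #include <stdbool.h>
        
        /*
         * A mapping of the given order covers [gfn, gfn + (1UL << order) - 1].
         * E.g. an order-9 (2M) mapping at GFN 0x1f0 covers up to 0x3ef, so
         * it overlaps a remapped range of [0x200, 0x2ff] even though its
         * base GFN is below 0x200; comparing only the base GFN against
         * min_remapped would miss this overlap.
         */
        static bool overlaps_remapped_range(unsigned long gfn,
                                            unsigned int order,
                                            unsigned long min_remapped,
                                            unsigned long max_remapped)
        {
            return gfn + (1UL << order) > min_remapped &&
                   gfn <= max_remapped;
        }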
    
    Note that there's no need to call get_gfn_type_access() ahead of the
    check against the remapped range boundaries: None of its outputs are
    needed earlier, and p2m_reset_altp2m() doesn't require the lock to be
    held. In fact this avoids a latent lock order violation: With per-GFN
    locking p2m_reset_altp2m() not only doesn't require the GFN lock to be
    held, but holding such a lock would actually not be allowed, as the
    function acquires a P2M lock.
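    
    To make the ordering rule concrete, here is a minimal standalone
    sketch using pthread mutexes in place of Xen's mm-locks machinery
    (names and the outer/inner assignment are assumptions made purely
    for illustration):
    
        #include <pthread.h>
        #include <stdio.h>
        
        /* Assumed global order for this sketch: P2M (outer) before GFN
         * (inner).  Acquiring them the other way round risks deadlock
         * against a thread that follows the documented order. */
        static pthread_mutex_t p2m_lock = PTHREAD_MUTEX_INITIALIZER;
        static pthread_mutex_t gfn_lock = PTHREAD_MUTEX_INITIALIZER;
        
        static void reset_altp2m(void)   /* models p2m_reset_altp2m() */
        {
            pthread_mutex_lock(&p2m_lock);   /* takes a P2M lock inside */
            /* ... rebuild the altp2m ... */
            pthread_mutex_unlock(&p2m_lock);
        }
        
        int main(void)
        {
            /* Fixed flow: decide on the reset first, with no GFN lock
             * held, so reset_altp2m() starts the lock chain itself. */
            reset_altp2m();
        
            pthread_mutex_lock(&gfn_lock);   /* per-GFN lock held ... */
            /* Calling reset_altp2m() here would acquire the outer lock
             * while holding the inner one - the latent order violation
             * the reordering in this change avoids. */
            pthread_mutex_unlock(&gfn_lock);
        
            puts("lock-order sketch done");
            return 0;
        }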
    
    Local variables are moved into a narrower scope (one is deleted
    altogether) to make their actual live ranges easier to see.
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
---
 xen/arch/x86/mm/p2m.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index c1cff37709..444761d31b 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -2481,9 +2481,6 @@ int p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,
                                 p2m_type_t p2mt, p2m_access_t p2ma)
 {
     struct p2m_domain *p2m;
-    p2m_access_t a;
-    p2m_type_t t;
-    mfn_t m;
     unsigned int i;
     unsigned int reset_count = 0;
     unsigned int last_reset_idx = ~0;
@@ -2496,15 +2493,17 @@ int p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,
 
     for ( i = 0; i < MAX_ALTP2M; i++ )
     {
+        p2m_type_t t;
+        p2m_access_t a;
+
         if ( d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
             continue;
 
         p2m = d->arch.altp2m_p2m[i];
-        m = get_gfn_type_access(p2m, gfn_x(gfn), &t, &a, 0, NULL);
 
         /* Check for a dropped page that may impact this altp2m */
         if ( mfn_eq(mfn, INVALID_MFN) &&
-             gfn_x(gfn) >= p2m->min_remapped_gfn &&
+             gfn_x(gfn) + (1UL << page_order) > p2m->min_remapped_gfn &&
              gfn_x(gfn) <= p2m->max_remapped_gfn )
         {
             if ( !reset_count++ )
@@ -2515,8 +2514,6 @@ int p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,
             else
             {
                 /* At least 2 altp2m's impacted, so reset everything */
-                __put_gfn(p2m, gfn_x(gfn));
-
                 for ( i = 0; i < MAX_ALTP2M; i++ )
                 {
                     if ( i == last_reset_idx ||
@@ -2530,16 +2527,19 @@ int p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,
                 break;
             }
         }
-        else if ( !mfn_eq(m, INVALID_MFN) )
+        else if ( !mfn_eq(get_gfn_type_access(p2m, gfn_x(gfn), &t, &a, 0,
+                                              NULL), INVALID_MFN) )
         {
             int rc = p2m_set_entry(p2m, gfn, mfn, page_order, p2mt, p2ma);
 
             /* Best effort: Don't bail on error. */
             if ( !ret )
                 ret = rc;
-        }
 
-        __put_gfn(p2m, gfn_x(gfn));
+            __put_gfn(p2m, gfn_x(gfn));
+        }
+        else
+            __put_gfn(p2m, gfn_x(gfn));
     }
 
     altp2m_list_unlock(d);
--
generated by git-patchbot for /home/xen/git/xen.git#staging