
[PATCH v3 3/4] x86/shadow: slightly consolidate sh_unshadow_for_p2m_change() (part III)


  • To: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Fri, 12 Aug 2022 09:44:54 +0200
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Tim Deegan <tim@xxxxxxx>
  • Delivery-date: Fri, 12 Aug 2022 07:45:01 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

In preparation for reactivating the presently dead 2M page path of the
function, also deal with the case of replacing an L1 page table all in
one go. Note that the prior comparison of MFNs to bypass the removal of
shadows was insufficient (but largely benign, the code being dead so
far): at the very least the R/W bit also needs considering there (to be
on the safe side, compare the full [virtual] PTEs).
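(Illustration only, not part of the patch: a minimal standalone sketch
of why comparing MFNs alone is insufficient. The constants and the
pte_mfn() helper are hand-rolled to match the architectural PTE layout,
just so the snippet builds outside the hypervisor.)

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT    12
    #define _PAGE_PRESENT 0x001ULL
    #define _PAGE_RW      0x002ULL
    /* 52-bit physical address field of an x86-64 PTE. */
    #define PTE_ADDR_MASK 0x000ffffffffff000ULL

    static uint64_t pte_mfn(uint64_t pte)
    {
        return (pte & PTE_ADDR_MASK) >> PAGE_SHIFT;
    }

    int main(void)
    {
        /* Same frame, but the mapping goes from R/O to R/W: */
        uint64_t opte = (0x1234ULL << PAGE_SHIFT) | _PAGE_PRESENT;
        uint64_t npte = opte | _PAGE_RW;

        /* An MFN-only check (the old approach) sees no change ... */
        printf("MFNs equal:      %d\n", pte_mfn(opte) == pte_mfn(npte)); /* 1 */
        /* ... while the full-PTE comparison correctly flags one. */
        printf("full PTEs equal: %d\n", opte == npte);                   /* 0 */
        return 0;
    }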

While adjusting the first conditional in the loop to use the new local
variable "nflags", also drop the mfn_valid() check: if anything we'd
need to compare against INVALID_MFN, but that can't come out of
l1e_get_mfn().
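(Again just an illustration, with the same hand-rolled constants:
l1e_get_mfn() merely masks and shifts the PTE's address bits, so even a
degenerate all-ones PTE can't yield INVALID_MFN, leaving nothing useful
for mfn_valid() or an INVALID_MFN check to catch at that point.)

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT    12
    #define PTE_ADDR_MASK 0x000ffffffffff000ULL
    #define INVALID_MFN   (~0ULL)

    /* Stand-in for l1e_get_mfn(): extract the address bits, nothing more. */
    static uint64_t pte_mfn(uint64_t pte)
    {
        return (pte & PTE_ADDR_MASK) >> PAGE_SHIFT;
    }

    int main(void)
    {
        /* Even ~0 as PTE yields a 40-bit MFN, never INVALID_MFN: */
        printf("%d\n", pte_mfn(~0ULL) == INVALID_MFN); /* prints 0 */
        return 0;
    }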

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
v3: Compare (virtual) PTEs, not MFNs. Correct MFN increment at the
    bottom of the loop. Respect PAT bit.
v2: Split from previous bigger patch.
---
The two mfn_add()s dealing with PAT aren't pretty, but short of also
having mfn_sub() the cast there is pretty much unavoidable (the
alternatives don't really look any neater).
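For reference, a standalone sketch (not the hypervisor code; constants
and the fold_pse_pat() helper are made up for illustration) of the
PSE/PAT folding the hunk below performs: on a 2M mapping the PAT bit
lives at bit 12 (_PAGE_PSE_PAT) and hence surfaces as the low bit of the
MFN returned by l1e_get_mfn(), while in an L1 entry PAT sits at bit 7,
sharing its position with _PAGE_PSE - which is what the BUILD_BUG_ON()
asserts.

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT    12
    #define _PAGE_PSE     0x080ULL  /* at L1 level this bit position is PAT */
    #define _PAGE_PSE_PAT 0x1000ULL /* PAT bit of 2M/1G mappings */

    /*
     * Fold a superpage's PSE_PAT bit into L1-style (mfn, flags): the 2M
     * MFN is 512-aligned, so its low bit can only stem from PSE_PAT.  If
     * that bit is clear, drop _PAGE_PSE so the flags compare like those
     * of a PAT-less L1 entry; if set, strip it from the MFN and let the
     * retained _PAGE_PSE bit stand in for the L1 PAT bit (the two alias).
     */
    static void fold_pse_pat(uint64_t *mfn, uint64_t *flags)
    {
        if ( !(*mfn & (_PAGE_PSE_PAT >> PAGE_SHIFT)) )
            *flags &= ~_PAGE_PSE;
        else
            *mfn -= _PAGE_PSE_PAT >> PAGE_SHIFT;
    }

    int main(void)
    {
        uint64_t mfn = 0x40000 | 1, flags = _PAGE_PSE; /* 2M page, PAT set */

        fold_pse_pat(&mfn, &flags);
        /* Prints mfn=0x40000 flags=0x80, i.e. PAT now in its L1 position. */
        printf("mfn=%#llx flags=%#llx\n",
               (unsigned long long)mfn, (unsigned long long)flags);
        return 0;
    }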

--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -844,40 +844,65 @@ static void cf_check sh_unshadow_for_p2m
      * scheme, that's OK, but otherwise they must be unshadowed.
      */
     case 2:
-        if ( !(oflags & _PAGE_PSE) )
-            break;
-
-        ASSERT(!p2m_is_grant(p2mt));
-
         {
             unsigned int i;
             mfn_t nmfn = l1e_get_mfn(new);
-            l1_pgentry_t *npte = NULL;
+            unsigned int nflags = l1e_get_flags(new);
+            l1_pgentry_t *npte = NULL, *opte = NULL;
+
+            BUILD_BUG_ON(_PAGE_PAT != _PAGE_PSE);
 
+            if ( !(nflags & _PAGE_PRESENT) )
+                nmfn = INVALID_MFN;
             /* If we're replacing a superpage with a normal L1 page, map it */
-            if ( (l1e_get_flags(new) & _PAGE_PRESENT) &&
-                 !(l1e_get_flags(new) & _PAGE_PSE) &&
-                 mfn_valid(nmfn) )
+            else if ( !(nflags & _PAGE_PSE) )
                 npte = map_domain_page(nmfn);
+            else if ( !(mfn_x(nmfn) & (_PAGE_PSE_PAT >> PAGE_SHIFT)) )
+                nflags &= ~_PAGE_PSE;
+            else
+                nmfn = mfn_add(nmfn, -(long)(_PAGE_PSE_PAT >> PAGE_SHIFT));
+
+            /* If we're replacing a normal L1 page, map it as well. */
+            if ( !(oflags & _PAGE_PSE) )
+                opte = map_domain_page(omfn);
+            else if ( !(mfn_x(omfn) & (_PAGE_PSE_PAT >> PAGE_SHIFT)) )
+                oflags &= ~_PAGE_PSE;
+            else
+                omfn = mfn_add(omfn, -(long)(_PAGE_PSE_PAT >> PAGE_SHIFT));
 
             gfn &= ~(L1_PAGETABLE_ENTRIES - 1);
 
             for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
             {
-                if ( !npte ||
-                     !(l1e_get_flags(npte[i]) & _PAGE_PRESENT) ||
-                     !mfn_eq(l1e_get_mfn(npte[i]), omfn) )
+                if ( opte )
+                {
+                    oflags = l1e_get_flags(opte[i]);
+                    if ( !(oflags & _PAGE_PRESENT) )
+                        continue;
+                    omfn = l1e_get_mfn(opte[i]);
+                }
+
+                if ( npte )
+                {
+                    nflags = l1e_get_flags(npte[i]);
+                    nmfn = nflags & _PAGE_PRESENT
+                           ? l1e_get_mfn(npte[i]) : INVALID_MFN;
+                }
+
+                if ( !mfn_eq(nmfn, omfn) || nflags != oflags )
                 {
                     /* This GFN->MFN mapping has gone away */
                     sh_remove_all_shadows_and_parents(d, omfn);
                     if ( sh_remove_all_mappings(d, omfn, _gfn(gfn + i)) )
                         flush = true;
                 }
+
                 omfn = mfn_add(omfn, 1);
+                nmfn = mfn_add(nmfn, !mfn_eq(nmfn, INVALID_MFN));
             }
 
-            if ( npte )
-                unmap_domain_page(npte);
+            unmap_domain_page(opte);
+            unmap_domain_page(npte);
         }
 
         break;




 

