
[Xen-changelog] [xen-3.0-testing] [TOOLS] Fix PAE save/restore/migrate: we must flush



# HG changeset patch
# User kaf24@xxxxxxxxxxxxxxxxxxxx
# Node ID 98a4aad0751a6d0f594b8f17864ea9cfb9fe4d15
# Parent  79286c3c783cb65abf4899c89814cef55f64e2d1
[TOOLS] Fix PAE save/restore/migrate: we must flush
all pending 'mmu updates' before moving page directories
below 4GB.
Signed-off-by: Keir Fraser <keir@xxxxxxxxxxxxx>
xen-unstable changeset:   10345:b0ba792f393520a4262aa06f5ab2395efa1a32c2
xen-unstable date:        Tue Jun 13 17:30:30 2006 +0100
---
 tools/libxc/xc_linux_restore.c |   22 ++++++++++++++--------
 1 files changed, 14 insertions(+), 8 deletions(-)
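
For context, the restore path batches machine-to-physical ("machphys") table updates through an xc_mmu_t queue and only pushes them to Xen when xc_finish_mmu_updates() is called; the fix moves that flush ahead of the PAE code that reallocates page directories below 4GB, presumably because the queued entries name machine frames as they stand at queueing time and the reallocation would otherwise invalidate them. The fragment below is a minimal sketch of that ordering using the xen-3.0-era libxc batching calls visible in the diff (xc_add_mmu_update, xc_finish_mmu_updates); it is not part of the patch, and the helper name, the mfn/pfn parameters and the local page-shift constant are illustrative only.

/*
 * Minimal sketch of the ordering this patch enforces (illustrative, not
 * the patched code).  'mmu' is a batching context as obtained from
 * xc_init_mmu_updates(); 'mfn' and 'pfn' are placeholder frame numbers.
 */
#include <xenctrl.h>        /* xc_mmu_t, xc_*_mmu_update*(), MMU_MACHPHYS_UPDATE */

#define X86_PAGE_SHIFT 12   /* stand-in for libxc's own PAGE_SHIFT (4kB pages) */

static int flush_before_pae_relocation(int xc_handle, xc_mmu_t *mmu,
                                       unsigned long mfn, unsigned long pfn)
{
    /* Queue a machine->physical mapping update; nothing reaches Xen yet. */
    if (xc_add_mmu_update(xc_handle, mmu,
                          (((unsigned long long)mfn) << X86_PAGE_SHIFT) |
                          MMU_MACHPHYS_UPDATE, pfn))
        return -1;

    /*
     * Flush the queued updates *before* the PAE path moves page
     * directories below 4GB: the pending entries refer to the current
     * machine frames, so reallocating first would leave stale updates
     * queued behind the new layout.
     */
    if (xc_finish_mmu_updates(xc_handle, mmu))
        return -1;

    /* ... PAE "copy page directories to frames below 4GB" step follows ... */
    return 0;
}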

diff -r 79286c3c783c -r 98a4aad0751a tools/libxc/xc_linux_restore.c
--- a/tools/libxc/xc_linux_restore.c    Tue Jun 13 15:58:40 2006 +0100
+++ b/tools/libxc/xc_linux_restore.c    Tue Jun 13 17:38:16 2006 +0100
@@ -429,6 +429,15 @@ int xc_linux_restore(int xc_handle, int 
         n+= j; /* crude stats */
     }
 
+    /*
+     * Ensure we flush all machphys updates before potential PAE-specific
+     * reallocations below.
+     */
+    if (xc_finish_mmu_updates(xc_handle, mmu)) {
+        ERR("Error doing finish_mmu_updates()");
+        goto out;
+    }
+
     DPRINTF("Received all pages (%d races)\n", nraces);
 
     if(pt_levels == 3) { 
@@ -523,14 +532,11 @@ int xc_linux_restore(int xc_handle, int 
             }
         }
 
-    }
-
-
-    if (xc_finish_mmu_updates(xc_handle, mmu)) { 
-        ERR("Error doing finish_mmu_updates()"); 
-        goto out;
-    } 
-
+        if (xc_finish_mmu_updates(xc_handle, mmu)) { 
+            ERR("Error doing finish_mmu_updates()"); 
+            goto out;
+        } 
+    }
 
     /*
      * Pin page tables. Do this after writing to them as otherwise Xen
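
The trailing context above is the start of the page-table pinning step that follows the flush in xc_linux_restore.c. As a rough illustration of that step (again, not part of this patch), the sketch below pins a single PAE L3 directory via the MMUEXT batching helper in the same libxc; the helper name and the mfn argument are placeholders, and the real restore code accumulates many pin operations per hypercall across all page-table levels.

/*
 * Illustrative only: pin one PAE L3 page directory for domain 'dom' after
 * its entries have been written, using libxc's MMUEXT batching helper.
 */
#include <xenctrl.h>        /* xc_mmuext_op(), struct mmuext_op, MMUEXT_PIN_L3_TABLE */

static int pin_l3_table(int xc_handle, domid_t dom, unsigned long mfn)
{
    struct mmuext_op pin;

    pin.cmd      = MMUEXT_PIN_L3_TABLE;  /* validate and pin as an L3 table */
    pin.arg1.mfn = mfn;                  /* frame must already hold valid entries */

    /* One-element batch; the restore code batches many ops per call. */
    return xc_mmuext_op(xc_handle, &pin, 1, dom);
}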

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog


 

