
[Xen-changelog] [xen-unstable] x86: Fix shadow code's handling of p2m superpage changes



# HG changeset patch
# User Keir Fraser <keir.fraser@xxxxxxxxxx>
# Date 1218626039 -3600
# Node ID d96bf4cd0f3789f5db9af735692bf54204df2a41
# Parent  641e10533c89fdba208e650d2a6205396ae20509
x86: Fix shadow code's handling of p2m superpage changes

When a p2m superpage entry is shattered, it's important not to
unshadow any parts of the 2MB region whose GFN->MFN mappings are
unchanged afterwards.
Otherwise shattering a superpage that contains the guest's top-level
pagetable will cause the guest to be killed.

Signed-off-by: Tim Deegan <Tim.Deegan@xxxxxxxxxx>
---
 xen/arch/x86/mm/shadow/common.c |   40 +++++++++++++++++++++++++++++++---------
 1 files changed, 31 insertions(+), 9 deletions(-)

diff -r 641e10533c89 -r d96bf4cd0f37 xen/arch/x86/mm/shadow/common.c
--- a/xen/arch/x86/mm/shadow/common.c   Wed Aug 13 12:12:08 2008 +0100
+++ b/xen/arch/x86/mm/shadow/common.c   Wed Aug 13 12:13:59 2008 +0100
@@ -3357,23 +3357,45 @@ shadow_write_p2m_entry(struct vcpu *v, u
         }
     }
 
-    /* If we're removing a superpage mapping from the p2m, remove all the
-     * MFNs covered by it from the shadows too. */
+    /* If we're removing a superpage mapping from the p2m, we need to check 
+     * all the pages covered by it.  If they're still there in the new 
+     * scheme, that's OK, but otherwise they must be unshadowed. */
     if ( level == 2 && (l1e_get_flags(*p) & _PAGE_PRESENT) &&
          (l1e_get_flags(*p) & _PAGE_PSE) )
     {
         unsigned int i;
-        mfn_t mfn = _mfn(l1e_get_pfn(*p));
+        cpumask_t flushmask;
+        mfn_t omfn = _mfn(l1e_get_pfn(*p));
+        mfn_t nmfn = _mfn(l1e_get_pfn(new));
+        l1_pgentry_t *npte = NULL;
         p2m_type_t p2mt = p2m_flags_to_type(l1e_get_flags(*p));
-        if ( p2m_is_valid(p2mt) && mfn_valid(mfn) )
-        {
+        if ( p2m_is_valid(p2mt) && mfn_valid(omfn) )
+        {
+            cpus_clear(flushmask);
+
+            /* If we're replacing a superpage with a normal L1 page, map it */
+            if ( (l1e_get_flags(new) & _PAGE_PRESENT)
+                 && !(l1e_get_flags(new) & _PAGE_PSE) 
+                 && mfn_valid(nmfn) )
+                npte = map_domain_page(mfn_x(nmfn));
+            
             for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
             {
-                sh_remove_all_shadows_and_parents(v, mfn);
-                if ( sh_remove_all_mappings(v, mfn) )
-                    flush_tlb_mask(d->domain_dirty_cpumask);
-                mfn = _mfn(mfn_x(mfn) + 1);
+                if ( !npte 
+                     || !p2m_is_ram(p2m_flags_to_type(l1e_get_flags(npte[i])))
+                     || l1e_get_pfn(npte[i]) != mfn_x(omfn) )
+                {
+                    /* This GFN->MFN mapping has gone away */
+                    sh_remove_all_shadows_and_parents(v, omfn);
+                    if ( sh_remove_all_mappings(v, omfn) )
+                        cpus_or(flushmask, flushmask, d->domain_dirty_cpumask);
+                }
+                omfn = _mfn(mfn_x(omfn) + 1);
             }
+            flush_tlb_mask(flushmask);
+            
+            if ( npte )
+                unmap_domain_page(npte);
         }
     }
 
