
[Xen-devel] P2M superpage fixes



Hello,

This patch addresses three issues in the P2M superpage code:

- Return -EINVAL if the guest requests a page_order that the current implementation does not support (only orders 0 and 9, i.e. 4KB and 2MB, are handled); see the sketch after this list.

- Check for errors (as the code used to) from guest_physmap_add_page() in populate_physmap(); a small sketch of this pattern follows the diff.

- Re-insert the shadow code that was missing from the original patch submitted to xen-devel, so that existing mappings are removed from the shadows when a 2MB P2M page is removed.
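
For reference, here is a minimal standalone sketch (not Xen code; check_p2m_page_order() is a made-up name) of the order check added to guest_physmap_add_entry() below, assuming only 4KB (order 0) and 2MB (order 9) mappings are supported:

#include <stdio.h>
#include <errno.h>

static int check_p2m_page_order(unsigned int page_order)
{
    /* Only 4KB (order 0) and 2MB (order 9) mappings are handled. */
    if ( page_order && (page_order != 9) )
    {
        fprintf(stderr, "P2M page order %u not supported\n", page_order);
        return -EINVAL;
    }
    return 0;
}

int main(void)
{
    printf("order 0: %d\n", check_p2m_page_order(0));  /* 0 */
    printf("order 9: %d\n", check_p2m_page_order(9));  /* 0 */
    printf("order 1: %d\n", check_p2m_page_order(1));  /* -EINVAL */
    return 0;
}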


Signed-off-by: Gianluca Guida <gianluca.guida@xxxxxxxxxxxxx>

diff -r 5cd4fe68b6c2 xen/arch/x86/mm/p2m.c
--- a/xen/arch/x86/mm/p2m.c     Tue Jul 08 17:25:04 2008 +0100
+++ b/xen/arch/x86/mm/p2m.c     Tue Jul 08 22:22:10 2008 +0100
@@ -939,6 +939,14 @@ guest_physmap_add_entry(struct domain *d
 
     P2M_DEBUG("adding gfn=%#lx mfn=%#lx\n", gfn, mfn);
 
+    if ( page_order && (page_order != 9) )
+    {
+        /* The current implementation supports only 4KB and 2MB pages. */
+        gdprintk(XENLOG_ERR, "P2M page order %d not supported.\n",
+                 page_order);
+        return -EINVAL;
+    }
+
     omfn = gfn_to_mfn(d, gfn, &ot);
     if ( p2m_is_ram(ot) )
     {
diff -r 5cd4fe68b6c2 xen/arch/x86/mm/shadow/common.c
--- a/xen/arch/x86/mm/shadow/common.c   Tue Jul 08 17:25:04 2008 +0100
+++ b/xen/arch/x86/mm/shadow/common.c   Tue Jul 08 22:22:10 2008 +0100
@@ -3354,6 +3354,26 @@ shadow_write_p2m_entry(struct vcpu *v, u
         }
     }
 
+    /* If we're removing a superpage mapping from the p2m, remove all the
+     * MFNs covered by it from the shadows too. */
+    if ( level == 2 && (l1e_get_flags(*p) & _PAGE_PRESENT) &&
+         (l1e_get_flags(*p) & _PAGE_PSE) )
+    {
+        unsigned int i;
+        mfn_t mfn = _mfn(l1e_get_pfn(*p));
+        p2m_type_t p2mt = p2m_flags_to_type(l1e_get_flags(*p));
+        if ( p2m_is_valid(p2mt) && mfn_valid(mfn) )
+        {
+            for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
+            {
+                sh_remove_all_shadows_and_parents(v, mfn);
+                if ( sh_remove_all_mappings(v, mfn) )
+                    flush_tlb_mask(d->domain_dirty_cpumask);
+                mfn = _mfn(mfn_x(mfn) + 1);
+            }
+        }
+    }
+
     /* Update the entry with new content */
     safe_write_pte(p, new);
 
diff -r 5cd4fe68b6c2 xen/common/memory.c
--- a/xen/common/memory.c       Tue Jul 08 17:25:04 2008 +0100
+++ b/xen/common/memory.c       Tue Jul 08 22:22:10 2008 +0100
@@ -122,7 +122,8 @@ static void populate_physmap(struct memo
         }
 
         mfn = page_to_mfn(page);
-        guest_physmap_add_page(d, gpfn, mfn, a->extent_order);
+        if ( guest_physmap_add_page(d, gpfn, mfn, a->extent_order) )
+            goto out;
 
         if ( !paging_mode_translate(d) )
         {
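
As a side note, here is a minimal standalone sketch (not Xen code; populate() and add_page() are hypothetical stand-ins for populate_physmap() and guest_physmap_add_page()) of the error-handling pattern the xen/common/memory.c hunk restores: stop populating on the first failed physmap insertion instead of ignoring the error.

#include <stdio.h>
#include <errno.h>

/* Stand-in for guest_physmap_add_page(): fails for unsupported orders,
 * mirroring the new -EINVAL check in guest_physmap_add_entry(). */
static int add_page(unsigned long gpfn, unsigned long mfn, unsigned int order)
{
    (void)gpfn; (void)mfn;
    return (order && order != 9) ? -EINVAL : 0;
}

/* Stand-in for the populate_physmap() loop: propagate the first failure. */
static int populate(unsigned long nr_extents, unsigned int order)
{
    unsigned long i;
    for ( i = 0; i < nr_extents; i++ )
    {
        int rc = add_page(i << order, 0x100000 + (i << order), order);
        if ( rc )
            return rc;   /* previously this error was silently dropped */
    }
    return 0;
}

int main(void)
{
    printf("order 9: %d, order 1: %d\n", populate(4, 9), populate(4, 1));
    return 0;
}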