
Re: [Xen-devel] bogus gfn - mfn - gfn - mfn checks in guest_physmap_add_entry



On Thu, Nov 25, Keir Fraser wrote:

> On 25/11/2010 15:03, "Olaf Hering" <olaf@xxxxxxxxx> wrote:
> 
> >> The guest_physmap_add_entry code, and the p2m audit code, would be made
> >> more reliable if, say, alloc_domheap_pages and/or free_domheap_pages
> >> zapped the m2p entries for MFNs they touched.
> >> 
> >> I think originally that wasn't done because the alloc is quickly
> >> followed by another write of the m2p but that's probably over-keen
> >> optimization.
> > 
> > Could it be done like that? (not yet compile-tested)
> > The mfn is probably always valid.
> 
> If you zap m2p in free_domheap_pages(), you shouldn't need to do it again in
> alloc_domheap_pages().

What about the initial allocation? I have to double check, but I think
the array already contains gfn numbers.

... checking ...

It seems the change below is indeed enough to invalidate the entries.
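
For reference, two mappings are involved here: the per-domain p2m
(gfn -> mfn) and the global machine_to_phys_mapping array (mfn -> gfn).
Below is a minimal standalone sketch of the invariant the change is
meant to restore -- the table size and helpers are made up for
illustration, not the real Xen definitions:

#include <stdio.h>

#define INVALID_M2P_ENTRY (~0UL)
#define TOY_MAX_MFN 16

/* Toy stand-in for the global mfn -> gfn table. */
static unsigned long machine_to_phys_mapping[TOY_MAX_MFN];

static void set_gpfn_from_mfn(unsigned long mfn, unsigned long gfn)
{
    machine_to_phys_mapping[mfn] = gfn;
}

static unsigned long get_gpfn_from_mfn(unsigned long mfn)
{
    return machine_to_phys_mapping[mfn];
}

int main(void)
{
    unsigned long mfn = 5;

    set_gpfn_from_mfn(mfn, 0x1000);             /* page backs gfn 0x1000 */
    set_gpfn_from_mfn(mfn, INVALID_M2P_ENTRY);  /* freed: zap the entry */

    /* A freed page must not look like it still has a gfn. */
    printf("mfn %lu has %s gfn\n", mfn,
           get_gpfn_from_mfn(mfn) == INVALID_M2P_ENTRY ? "no" : "a stale");
    return 0;
}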

Olaf

Subject: xenpaging: update machine_to_phys_mapping during page-in and page deallocation

The machine_to_phys_mapping array needs updating during page-in and
during page deallocation.  If a page is gone, a call to
get_gpfn_from_mfn() will still return the old gfn of an already
paged-out page.  This happens when the entire guest RAM is paged out
before xen_vga_populate_vram() runs.  Then XENMEM_populate_physmap is
called with gfn 0xff000.  A new page is allocated with
alloc_domheap_pages().  This new page does not have a gfn yet.
However, in guest_physmap_add_entry() the passed mfn still maps to an
old gfn (perhaps from another, older guest).  That old gfn is in
paged-out state in this guest's context and has no mfn anymore.  As a
result, the ASSERT() triggers because p2m_is_ram() is true for the
p2m_ram_paging* types.
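
To make the failure mode concrete, here is a small standalone model of
the check (not the actual guest_physmap_add_entry() code; the table
sizes, helper names and the reduced p2m type set are invented for
illustration):

#include <assert.h>
#include <stdio.h>

#define INVALID_M2P_ENTRY (~0UL)
#define TOY_NPAGES 8

enum p2m_type { p2m_invalid, p2m_ram_rw, p2m_ram_paged };

/* Toy mfn -> gfn table and toy per-domain gfn -> (mfn, type) table. */
static unsigned long m2p[TOY_NPAGES];
static unsigned long p2m_mfn[TOY_NPAGES];
static enum p2m_type p2m_type_of[TOY_NPAGES];

static int p2m_is_ram(enum p2m_type t)
{
    /* As in Xen, "ram" includes the paged-out types. */
    return t == p2m_ram_rw || t == p2m_ram_paged;
}

int main(void)
{
    unsigned long mfn = 3, old_gfn = 2, new_gfn = 5;

    /* History: mfn 3 once backed gfn 2 in some guest; that gfn was
     * paged out and the page freed, but m2p[3] was never zapped. */
    m2p[mfn]             = old_gfn;
    p2m_mfn[old_gfn]     = INVALID_M2P_ENTRY;  /* no mfn anymore */
    p2m_type_of[old_gfn] = p2m_ram_paged;      /* paged-out state */

    /* The guest_physmap_add_entry()-style check on the fresh page: */
    unsigned long ogfn = m2p[mfn];
    if ( (ogfn != INVALID_M2P_ENTRY) && (ogfn != new_gfn) )
    {
        /* Looks aliased; query the "old" mapping. */
        if ( p2m_is_ram(p2m_type_of[ogfn]) )
            /* This is where Xen's ASSERT(mfn_valid(omfn)) fires. */
            assert(p2m_mfn[ogfn] != INVALID_M2P_ENTRY);
    }
    printf("not reached while the stale m2p entry exists\n");
    return 0;
}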

If the machine_to_phys_mapping array is updated properly, both loops in
guest_physmap_add_entry() turn into no-ops for the new page and the
mfn/gfn mapping will be done at the end of the function.
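
A toy illustration of that fall-through, assuming the m2p entry was
already zapped when the page was freed (again not Xen code):

#include <stdio.h>

#define INVALID_M2P_ENTRY (~0UL)

int main(void)
{
    unsigned long ogfn = INVALID_M2P_ENTRY;  /* fresh page: m2p zapped */
    unsigned long gfn  = 0xff000;            /* gfn being installed */

    if ( (ogfn != INVALID_M2P_ENTRY) && (ogfn != gfn) )
        printf("would have to unmap an old gfn first\n");  /* skipped */
    else
        printf("no-op: proceed to set_p2m_entry()/set_gpfn_from_mfn()\n");
    return 0;
}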

The same thing needs to happen during a page-in: the current gfn must
be written for the page.

If XENMEM_add_to_physmap is used with XENMAPSPACE_gmfn,
get_gpfn_from_mfn() will return an apparently valid gfn.  As a result,
guest_physmap_remove_page() is called.  The ASSERT in p2m_remove_page()
then triggers because the passed mfn does not match the old mfn for the
passed gfn.
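
The sequence being described is roughly the following (a paraphrase of
the XENMEM_add_to_physmap handler in xen/arch/x86/mm.c, with locking
and error handling elided; treat it as a sketch, not the literal code):

    gpfn = get_gpfn_from_mfn(mfn);   /* stale entry => "valid" old gfn */
    if ( gpfn != INVALID_M2P_ENTRY )
        /* Unmap from old location, if any. */
        guest_physmap_remove_page(d, gpfn, mfn, 0);
    /* p2m_remove_page() then hits its ASSERT: the p2m entry for gpfn
     * does not point back at this mfn. */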

Signed-off-by: Olaf Hering <olaf@xxxxxxxxx>

---
v3:
  invalidate machine_to_phys_mapping[] during page deallocation
v2:
  call set_gpfn_from_mfn only if mfn is valid

 xen/arch/x86/mm/p2m.c   |   13 ++++++++++---
 xen/common/page_alloc.c |    9 +++++++++
 2 files changed, 19 insertions(+), 3 deletions(-)

--- xen-4.0.1-testing.orig/xen/arch/x86/mm/p2m.c
+++ xen-4.0.1-testing/xen/arch/x86/mm/p2m.c
@@ -2598,9 +2598,16 @@ void p2m_mem_paging_resume(struct domain
 
     /* Fix p2m entry */
     mfn = gfn_to_mfn(d, rsp.gfn, &p2mt);
-    p2m_lock(d->arch.p2m);
-    set_p2m_entry(d, rsp.gfn, mfn, 0, p2m_ram_rw);
-    p2m_unlock(d->arch.p2m);
+    if (mfn_valid(mfn))
+    {
+        p2m_lock(d->arch.p2m);
+        set_p2m_entry(d, rsp.gfn, mfn, 0, p2m_ram_rw);
+        set_gpfn_from_mfn(mfn_x(mfn), rsp.gfn);
+        p2m_unlock(d->arch.p2m);
+    } else {
+        gdprintk(XENLOG_ERR, "invalid mfn %lx for gfn %lx p2mt %x flags %lx\n",
+            mfn_x(mfn), rsp.gfn, p2mt, (unsigned long)rsp.flags);
+    }
 
     /* Unpause domain */
     if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
--- xen-4.0.1-testing.orig/xen/common/page_alloc.c
+++ xen-4.0.1-testing/xen/common/page_alloc.c
@@ -1178,9 +1178,18 @@ void free_domheap_pages(struct page_info
 {
     int            i, drop_dom_ref;
     struct domain *d = page_get_owner(pg);
+    unsigned long mfn;
 
     ASSERT(!in_irq());
 
+    /* This page no longer has a gfn: invalidate its m2p entries. */
+    mfn = page_to_mfn(pg);
+    if (mfn_valid(mfn))
+    {
+        for ( i = 0; i < (1 << order); i++ )
+            set_gpfn_from_mfn(mfn + i, INVALID_M2P_ENTRY);
+    }
+
     if ( unlikely(is_xen_heap_page(pg)) )
     {
         /* NB. May recursively lock from relinquish_memory(). */
