
[Xen-changelog] [xen-unstable] x86: fix domain cleanup



# HG changeset patch
# User Keir Fraser <keir.fraser@xxxxxxxxxx>
# Date 1225113763 0
# Node ID 11c86c51a697dab2e4a49efe3dda139ea206f423
# Parent  101e50cffc7825065f4dd39610728a2ba3ea68b4
x86: fix domain cleanup

The preemptible page type handling changes modified free_page_type()'s
behavior without adjusting its call site in relinquish_memory(): any
type reference left pending when leaving a hypercall handler is
associated with a page reference, and on success free_page_type()
drops the type refcount; hence relinquish_memory() must now also
drop the page reference.

Also, the recursion avoidance during domain shutdown got broken
(probably by me when merging the patch up to a newer snapshot): the
avoidance logic in mm.c should short-circuit the levels below the one
currently being processed, rather than the top level itself.

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxxxx>
---
 xen/arch/x86/domain.c |    1 +
 xen/arch/x86/mm.c     |   14 +++++++-------
 2 files changed, 8 insertions(+), 7 deletions(-)

diff -r 101e50cffc78 -r 11c86c51a697 xen/arch/x86/domain.c
--- a/xen/arch/x86/domain.c     Mon Oct 27 13:20:52 2008 +0000
+++ b/xen/arch/x86/domain.c     Mon Oct 27 13:22:43 2008 +0000
@@ -1687,6 +1687,7 @@ static int relinquish_memory(
             {
                 if ( free_page_type(page, x, 0) != 0 )
                     BUG();
+                put_page(page);
                 break;
             }
         }
diff -r 101e50cffc78 -r 11c86c51a697 xen/arch/x86/mm.c
--- a/xen/arch/x86/mm.c Mon Oct 27 13:20:52 2008 +0000
+++ b/xen/arch/x86/mm.c Mon Oct 27 13:22:43 2008 +0000
@@ -1343,7 +1343,7 @@ static void free_l1_table(struct page_in
 
 static int free_l2_table(struct page_info *page, int preemptible)
 {
-#ifdef CONFIG_COMPAT
+#if defined(CONFIG_COMPAT) || defined(DOMAIN_DESTRUCT_AVOID_RECURSION)
     struct domain *d = page_get_owner(page);
 #endif
     unsigned long pfn = page_to_mfn(page);
@@ -1351,6 +1351,11 @@ static int free_l2_table(struct page_inf
     unsigned int  i = page->nr_validated_ptes - 1;
     int err = 0;
 
+#ifdef DOMAIN_DESTRUCT_AVOID_RECURSION
+    if ( d->arch.relmem == RELMEM_l3 )
+        return 0;
+#endif
+
     pl2e = map_domain_page(pfn);
 
     ASSERT(page->nr_validated_ptes);
@@ -1381,7 +1386,7 @@ static int free_l3_table(struct page_inf
     int rc = 0;
 
 #ifdef DOMAIN_DESTRUCT_AVOID_RECURSION
-    if ( d->arch.relmem == RELMEM_l3 )
+    if ( d->arch.relmem == RELMEM_l4 )
         return 0;
 #endif
 
@@ -1423,11 +1428,6 @@ static int free_l4_table(struct page_inf
     l4_pgentry_t *pl4e = page_to_virt(page);
     unsigned int  i = page->nr_validated_ptes - !page->partial_pte;
     int rc = 0;
-
-#ifdef DOMAIN_DESTRUCT_AVOID_RECURSION
-    if ( d->arch.relmem == RELMEM_l4 )
-        return 0;
-#endif
 
     do {
         if ( is_guest_l4_slot(d, i) )

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog


 

