
[Xen-changelog] [xen master] fix locking in offline_page()



commit d4837a56da4a59259dd0cf9f3bdc073159d81d7a
Author:     Jan Beulich <jbeulich@xxxxxxxx>
AuthorDate: Tue Dec 3 12:40:57 2013 +0100
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Tue Dec 3 12:40:57 2013 +0100

    fix locking in offline_page()
    
    Coverity ID 1055655
    
    Apart from the Coverity-detected lock order reversal (a domain's
    page_alloc_lock taken with the heap lock already held), calling
    put_page() with heap_lock held is a bad idea too: a possible
    descendant of put_page() is free_heap_pages(), which wants to take
    this very lock.
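    
    A minimal, self-contained sketch of the two hazards described above,
    using plain pthread mutexes in place of Xen's spinlocks (the _stub
    functions, reverse_order_path() and the main() harness are made-up
    stand-ins for illustration only, not the actual Xen code):
    
        #include <pthread.h>
    
        static pthread_mutex_t heap_lock = PTHREAD_MUTEX_INITIALIZER;
        static pthread_mutex_t page_alloc_lock = PTHREAD_MUTEX_INITIALIZER;
    
        /* Hazard 1: offline_page() took the domain's page_alloc_lock with
         * heap_lock already held ... */
        static void offline_path(void)
        {
            pthread_mutex_lock(&heap_lock);
            pthread_mutex_lock(&page_alloc_lock);  /* heap_lock -> page_alloc_lock */
            pthread_mutex_unlock(&page_alloc_lock);
            pthread_mutex_unlock(&heap_lock);
        }
    
        /* ... while whatever path Coverity compared against nests the two
         * locks the other way round; two CPUs running these paths
         * concurrently can deadlock (ABBA). */
        static void reverse_order_path(void)
        {
            pthread_mutex_lock(&page_alloc_lock);
            pthread_mutex_lock(&heap_lock);        /* page_alloc_lock -> heap_lock */
            pthread_mutex_unlock(&heap_lock);
            pthread_mutex_unlock(&page_alloc_lock);
        }
    
        /* Hazard 2: put_page() may descend into free_heap_pages(), which
         * takes heap_lock itself -- so calling it with heap_lock already
         * held self-deadlocks on a non-recursive lock. */
        static void free_heap_pages_stub(void)
        {
            pthread_mutex_lock(&heap_lock);
            pthread_mutex_unlock(&heap_lock);
        }
    
        static void put_page_stub(void)
        {
            free_heap_pages_stub();  /* safe only if heap_lock is not held */
        }
    
        int main(void)
        {
            /* Run each path once on a single thread so the sketch
             * terminates; the deadlocks only arise with concurrency or
             * with heap_lock held around put_page_stub(). */
            offline_path();
            reverse_order_path();
            put_page_stub();
            return 0;
        }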
    
    From all I can tell, the region over which heap_lock was held was far
    too large: all we need to protect are the calls to mark_page_offline()
    and reserve_heap_page() (and I'd even question the need for the
    former). Hence, by slightly re-arranging the if/else-if chain, we can
    drop the lock much earlier, so that it no longer covers the two
    put_page() invocations.
    
    While at it, do a little other cleanup: put the "pod_replace" code
    path inline rather than at its own label, and drop the effectively
    unused variable "ret".
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Tim Deegan <tim@xxxxxxx>
    Acked-by: Keir Fraser <keir@xxxxxxx>
---
 xen/common/page_alloc.c |   39 ++++++++++++++++++---------------------
 1 file changed, 18 insertions(+), 21 deletions(-)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index c82aba6..9497623 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -957,7 +957,6 @@ int offline_page(unsigned long mfn, int broken, uint32_t *status)
 {
     unsigned long old_info = 0;
     struct domain *owner;
-    int ret = 0;
     struct page_info *pg;
 
     if ( !mfn_valid(mfn) )
@@ -1007,16 +1006,28 @@ int offline_page(unsigned long mfn, int broken, uint32_t *status)
     if ( page_state_is(pg, offlined) )
     {
         reserve_heap_page(pg);
-        *status = PG_OFFLINE_OFFLINED;
+
+        spin_unlock(&heap_lock);
+
+        *status = broken ? PG_OFFLINE_OFFLINED | PG_OFFLINE_BROKEN
+                         : PG_OFFLINE_OFFLINED;
+        return 0;
     }
-    else if ( (owner = page_get_owner_and_reference(pg)) )
+
+    spin_unlock(&heap_lock);
+
+    if ( (owner = page_get_owner_and_reference(pg)) )
     {
         if ( p2m_pod_offline_or_broken_hit(pg) )
-            goto pod_replace;
+        {
+            put_page(pg);
+            p2m_pod_offline_or_broken_replace(pg);
+            *status = PG_OFFLINE_OFFLINED;
+        }
         else
         {
             *status = PG_OFFLINE_OWNED | PG_OFFLINE_PENDING |
-              (owner->domain_id << PG_OFFLINE_OWNER_SHIFT);
+                      (owner->domain_id << PG_OFFLINE_OWNER_SHIFT);
             /* Release the reference since it will not be allocated anymore */
             put_page(pg);
         }
@@ -1024,7 +1035,7 @@ int offline_page(unsigned long mfn, int broken, uint32_t *status)
     else if ( old_info & PGC_xen_heap )
     {
         *status = PG_OFFLINE_XENPAGE | PG_OFFLINE_PENDING |
-          (DOMID_XEN << PG_OFFLINE_OWNER_SHIFT);
+                  (DOMID_XEN << PG_OFFLINE_OWNER_SHIFT);
     }
     else
     {
@@ -1043,21 +1054,7 @@ int offline_page(unsigned long mfn, int broken, uint32_t *status)
     if ( broken )
         *status |= PG_OFFLINE_BROKEN;
 
-    spin_unlock(&heap_lock);
-
-    return ret;
-
-pod_replace:
-    put_page(pg);
-    spin_unlock(&heap_lock);
-
-    p2m_pod_offline_or_broken_replace(pg);
-    *status = PG_OFFLINE_OFFLINED;
-
-    if ( broken )
-        *status |= PG_OFFLINE_BROKEN;
-
-    return ret;
+    return 0;
 }
 
 /*
--
generated by git-patchbot for /home/xen/git/xen.git#master
