
[Xen-devel] [PATCH 2/3] xmalloc: don't evaluate ADD_REGION without holding the pool lock



It is not safe to add a region to the pool without holding the pool lock, but
this is exactly what may happen if two threads race entering xmem_pool_alloc()
before init_region is set.

This patch instead checks init_region under the lock, drops the lock if it
needs to allocate a page, re-takes the lock, adds the region, and then
confirms that init_region is still unset before pointing it at the newly
added region. Thus a race may still cause an extra region to be added, but
there will be no pool metadata corruption.
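
For clarity, here is a stand-alone sketch of the pattern the patch applies,
using pthreads and malloc() in place of Xen's spinlocks and pool->get_mem().
All names below (pool_ensure_init_region, add_region, struct region, ...) are
illustrative only, not the actual TLSF code:

  #include <pthread.h>
  #include <stdlib.h>

  struct region {
      struct region *next;
      /* payload follows */
  };

  struct pool {
      pthread_mutex_t lock;
      struct region *regions;      /* all regions owned by the pool */
      struct region *init_region;  /* first region, set exactly once */
      size_t init_size;
  };

  /* Caller must hold p->lock: this mutates the pool's metadata. */
  static void add_region(struct pool *p, struct region *r)
  {
      r->next = p->regions;
      p->regions = r;
  }

  static int pool_ensure_init_region(struct pool *p)
  {
      struct region *r;

      pthread_mutex_lock(&p->lock);
      if ( p->init_region == NULL )
      {
          /* The allocation may block, so drop the lock around it. */
          pthread_mutex_unlock(&p->lock);
          r = malloc(p->init_size);
          if ( r == NULL )
              return -1;
          pthread_mutex_lock(&p->lock);
          add_region(p, r);                /* safe: lock held again */
          /*
           * Re-check: another thread may have installed its own region
           * while the lock was dropped.  Worst case an extra region has
           * been added, but no metadata is corrupted.
           */
          if ( p->init_region == NULL )
              p->init_region = r;
      }
      pthread_mutex_unlock(&p->lock);
      return 0;
  }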

Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
---
Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Cc: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
Cc: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
Cc: Jan Beulich <jbeulich@xxxxxxxx>
Cc: Julien Grall <julien.grall@xxxxxxx>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>
Cc: Tim Deegan <tim@xxxxxxx>
Cc: Wei Liu <wl@xxxxxxx>
---
 xen/common/xmalloc_tlsf.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/xen/common/xmalloc_tlsf.c b/xen/common/xmalloc_tlsf.c
index 6d889b7bdc..71597c3590 100644
--- a/xen/common/xmalloc_tlsf.c
+++ b/xen/common/xmalloc_tlsf.c
@@ -380,18 +380,22 @@ void *xmem_pool_alloc(unsigned long size, struct xmem_pool *pool)
     int fl, sl;
     unsigned long tmp_size;
 
+    spin_lock(&pool->lock);
     if ( pool->init_region == NULL )
     {
+        spin_unlock(&pool->lock);
         if ( (region = pool->get_mem(pool->init_size)) == NULL )
             goto out;
+        spin_lock(&pool->lock);
         ADD_REGION(region, pool->init_size, pool);
-        pool->init_region = region;
+        /* Re-check since the lock was dropped */
+        if ( pool->init_region == NULL )
+            pool->init_region = region;
     }
 
     size = (size < MIN_BLOCK_SIZE) ? MIN_BLOCK_SIZE : ROUNDUP_SIZE(size);
     /* Rounding up the requested size and calculating fl and sl */
 
-    spin_lock(&pool->lock);
  retry_find:
     MAPPING_SEARCH(&size, &fl, &sl);
 
-- 
2.20.1.2.gb21ebb671

