[Xen-devel] [PATCH] track free pages live rather than count pages in all nodes/zones
Trying to fix a livelock condition in tmem that occurs only when the system is totally out of memory requires the ability to easily determine whether all zones in all nodes are empty, and this must be checked at a fairly high frequency. So, to avoid walking all the zones in all the nodes each time, I'd like a fast way to determine whether "free_pages" is zero. This patch tracks the sum of the free pages across all nodes/zones. I believe the value is modified only while heap_lock is held, so it need not be atomic, though I haven't verified that for certain. I suspect this will also be useful in other future memory-utilization code, e.g. page sharing.

This has had limited testing, though I did drive free memory down to zero and back up and down a few times with debug on, and no asserts were triggered. On the chance that it looks good as is or is trivially modified:

Signed-off-by: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>

Thanks,
Dan

diff -r 0fb962a5dad3 xen/common/page_alloc.c
--- a/xen/common/page_alloc.c   Mon Dec 07 14:10:27 2009 +0000
+++ b/xen/common/page_alloc.c   Mon Dec 07 18:22:44 2009 -0700
@@ -222,6 +222,7 @@ static heap_by_zone_and_order_t *_heap[M
 #define heap(node, zone, order) ((*_heap[node])[zone][order])

 static unsigned long *avail[MAX_NUMNODES];
+static long total_avail_pages;

 static DEFINE_SPINLOCK(heap_lock);

@@ -350,6 +351,8 @@ static struct page_info *alloc_heap_page
     ASSERT(avail[node][zone] >= request);
     avail[node][zone] -= request;
+    total_avail_pages -= request;
+    ASSERT(total_avail_pages >= 0);

     spin_unlock(&heap_lock);

@@ -445,6 +448,8 @@ static int reserve_offlined_page(struct
             continue;

         avail[node][zone]--;
+        total_avail_pages--;
+        ASSERT(total_avail_pages >= 0);

         page_list_add_tail(cur_head,
                            test_bit(_PGC_broken, &cur_head->count_info) ?

@@ -497,6 +502,7 @@ static void free_heap_pages(
     spin_lock(&heap_lock);

     avail[node][zone] += 1 << order;
+    total_avail_pages += 1 << order;

     /* Merge chunks as far as possible. */
     while ( order < MAX_ORDER )

@@ -816,6 +822,7 @@ static unsigned long avail_heap_pages(
 static unsigned long avail_heap_pages(
     unsigned int zone_lo, unsigned int zone_hi, unsigned int node)
 {
+#ifndef NDEBUG
     unsigned int i, zone;
     unsigned long free_pages = 0;

@@ -831,7 +838,16 @@ static unsigned long avail_heap_pages(
             free_pages += avail[i][zone];
     }

+    ASSERT ( free_pages == total_avail_pages );
     return free_pages;
+#else
+    return total_avail_pages;
+#endif
+}
+
+unsigned long total_free_pages(void)
+{
+    return total_avail_pages;
+}

 void __init end_boot_allocator(void)

diff -r 0fb962a5dad3 xen/include/xen/mm.h
--- a/xen/include/xen/mm.h      Mon Dec 07 14:10:27 2009 +0000
+++ b/xen/include/xen/mm.h      Mon Dec 07 18:22:44 2009 -0700
@@ -62,6 +62,7 @@ unsigned int online_page(unsigned long m
 unsigned int online_page(unsigned long mfn, uint32_t *status);
 int offline_page(unsigned long mfn, int broken, uint32_t *status);
 int query_page_offline(unsigned long mfn, uint32_t *status);
+unsigned long total_free_pages(void);

 void scrub_heap_pages(void);

Attachment: freepages.patch

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel