[Xen-devel] [PATCH] xen/timers: Fix memory leak with cpu hot unplug
timer_softirq_action() realloc's itself a larger timer heap whenever necessary,
which includes bootstrapping from the empty dummy_heap.  Nothing ever freed
this allocation.

CPU hot unplug and plug has the side effect of zeroing the percpu data area,
which clears ts->heap.  This in turn causes new timers to be put on the list
rather than the heap, and timer_softirq_action() to bootstrap itself again.

In practice, this leaks ts->heap every time a CPU is hot unplugged and
replugged.

In the cpu notifier, free the heap after migrating all other timers away.

Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
---
CC: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
CC: Ian Jackson <ian.jackson@xxxxxxxxxx>
CC: Jan Beulich <JBeulich@xxxxxxxx>
CC: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
CC: Stefano Stabellini <sstabellini@xxxxxxxxxx>
CC: Tim Deegan <tim@xxxxxxx>
CC: Wei Liu <wei.liu2@xxxxxxxxxx>
CC: Julien Grall <julien.grall@xxxxxxx>

This patch textually depends on "xen/timers: Document and improve the
representation of the timer heap metadata", which was necessary to understand
the problem well enough to fix it, but backporting over this change isn't too
complicated (should the cleanup patch not want to be backported).
---
 xen/common/timer.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/xen/common/timer.c b/xen/common/timer.c
index 98f2c48..afcb1b0 100644
--- a/xen/common/timer.c
+++ b/xen/common/timer.c
@@ -631,6 +631,10 @@ static int cpu_callback(
     case CPU_UP_CANCELED:
     case CPU_DEAD:
         migrate_timers_from_cpu(cpu);
+        ASSERT(heap_metadata(ts->heap)->size == 0);
+        if ( heap_metadata(ts->heap)->limit )
+            xfree(ts->heap);
+        ts->heap = dummy_heap;
         break;
     default:
         break;
-- 
2.1.4

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel