[Xen-changelog] [xen-unstable] x86: On CPU offline, fix master waiting for slave to be fully dead.
# HG changeset patch
# User Keir Fraser <keir@xxxxxxx>
# Date 1299324881 0
# Node ID d3d29df8f082d77c5fa8c790b4c97064943aef2d
# Parent  0662bff7dabb59cad54cd04f49733c87a18510ee
x86: On CPU offline, fix master waiting for slave to be fully dead.

On two back-to-back CPU offline operations, on second offline the
cpu_state var will be CPU_STATE_DEAD from the first offline. Hence
__cpu_die() will incorrectly not wait for the second slave to fully
die and set cpu_state itself. The fix is to set cpu_state to a new
value, CPU_STATE_DYING, earlier during CPU offline, before __cpu_die()
starts to execute.

Original diagnosis and patch by Liu, Jinsong <jinsong.liu@xxxxxxxxx>

Signed-off-by: Keir Fraser <keir@xxxxxxx>
---

diff -r 0662bff7dabb -r d3d29df8f082 xen/arch/x86/smpboot.c
--- a/xen/arch/x86/smpboot.c	Fri Mar 04 17:33:32 2011 +0000
+++ b/xen/arch/x86/smpboot.c	Sat Mar 05 11:34:41 2011 +0000
@@ -74,7 +74,8 @@
 
 static int cpu_error;
 static enum cpu_state {
-    CPU_STATE_DEAD = 0, /* slave -> master: I am completely dead */
+    CPU_STATE_DYING,    /* slave -> master: I am dying */
+    CPU_STATE_DEAD,     /* slave -> master: I am completely dead */
     CPU_STATE_INIT,     /* master -> slave: Early bringup phase 1 */
     CPU_STATE_CALLOUT,  /* master -> slave: Early bringup phase 2 */
     CPU_STATE_CALLIN,   /* slave -> master: Completed phase 2 */
@@ -834,6 +835,8 @@
     extern void fixup_irqs(void);
     int cpu = smp_processor_id();
 
+    set_cpu_state(CPU_STATE_DYING);
+
     local_irq_disable();
     clear_local_APIC();
     /* Allow any queued timer interrupts to get serviced */
@@ -861,6 +864,7 @@
 
     while ( cpu_state != CPU_STATE_DEAD )
     {
+        BUG_ON(cpu_state != CPU_STATE_DYING);
        mdelay(100);
         cpu_relax();
         process_pending_softirqs();
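For readers skimming the changelog, here is a small self-contained model (plain C with pthreads) of the offline handshake the patch repairs. It is only a sketch: the CPU_STATE_DYING/CPU_STATE_DEAD values and the "mark DYING before anyone starts waiting" ordering come from the patch above, while the threads, sleeps, and helper names (slave_finish_dying, master_wait_for_death) are illustrative stand-ins for the real master and slave CPUs, not Xen code.

/* Minimal model of the CPU-offline handshake fixed by this patch.
 * Illustrative only; build with: gcc -pthread demo.c */
#include <assert.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

enum cpu_state {
    CPU_STATE_DYING,   /* slave -> master: I am dying */
    CPU_STATE_DEAD,    /* slave -> master: I am completely dead */
    CPU_STATE_ONLINE
};

/* volatile is enough for this one-variable demo; Xen itself relies on its
 * own barriers around cpu_state. */
static volatile enum cpu_state cpu_state = CPU_STATE_ONLINE;

/* Stand-in for the dying CPU parking itself and then reporting that it is
 * fully dead (the slave side of the handshake). */
static void *slave_finish_dying(void *arg)
{
    (void)arg;
    usleep(100 * 1000);            /* pretend teardown work */
    cpu_state = CPU_STATE_DEAD;    /* slave -> master: I am completely dead */
    return NULL;
}

/* Stand-in for __cpu_die() on the master: wait until the slave reports
 * CPU_STATE_DEAD, insisting it is at least CPU_STATE_DYING meanwhile. */
static void master_wait_for_death(void)
{
    while ( cpu_state != CPU_STATE_DEAD )
    {
        assert(cpu_state == CPU_STATE_DYING);  /* mirrors the new BUG_ON() */
        usleep(10 * 1000);
    }
}

int main(void)
{
    /* Two back-to-back offlines.  Step 1 is the store the patch adds:
     * without it, the second round starts with cpu_state still equal to
     * CPU_STATE_DEAD from the first round, and the master's wait in step 3
     * would fall through before the slave had actually died. */
    for ( int round = 1; round <= 2; round++ )
    {
        /* Step 1: mark the offline as in progress before any waiting starts
         * (in Xen this happens early, before __cpu_die() executes). */
        cpu_state = CPU_STATE_DYING;

        /* Step 2: the slave finishes dying asynchronously. */
        pthread_t slave;
        pthread_create(&slave, NULL, slave_finish_dying, NULL);

        /* Step 3: the master waits for the final report. */
        master_wait_for_death();
        pthread_join(&slave, NULL);
        printf("offline round %d: slave fully dead\n", round);
    }
    return 0;
}

The point of the early store is visible in the wait loop: on a second back-to-back offline, cpu_state would otherwise still read CPU_STATE_DEAD from the first round, so __cpu_die() would return without the slave having actually finished dying.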