
[Xen-devel] [PATCH] Disable HPET broadcast mode on kexec



# HG changeset patch
# User Ian Campbell <ian.campbell@xxxxxxxxxx>
# Date 1254298855 0
# Node ID 5215da46d60f95d57244e709cb3b189caffec50c
# Parent  6472342c8ab0789b844714bcf557e9e5eeacca42
Disable HPET broadcast mode on kexec.

Without this the new kernel cannot receive timer interrupts from the
legacy sources. Hangs are observed in the second kernel's
"check_timer()" routing or at "Checking 'hlt' instruction."

Signed-off-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
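
For context (this note is not part of the patch): while the
LegacyReplacement Route bit in the HPET general configuration register
is set, HPET timer 0 is wired onto IRQ0 in place of the PIT (and timer
1 onto IRQ8 in place of the RTC), which is why the second kernel's PIT
probe in check_timer() never sees an interrupt. A minimal sketch of
the register manipulation the hpet.c hunk performs, with stubbed MMIO
accessors standing in for Xen's hpet_read32()/hpet_write32() (the
offsets and bit values follow the HPET specification; everything else
here is illustrative, not Xen code):

#include <stdint.h>

#define HPET_CFG         0x010   /* general configuration register   */
#define HPET_CFG_LEGACY  0x002   /* LegacyReplacement Route enable   */

static uint32_t fake_cfg = HPET_CFG_LEGACY;   /* pretend HPET state  */

/* Stubs: real code would do MMIO reads/writes at these offsets. */
static uint32_t hpet_read32(unsigned off)          { (void)off; return fake_cfg; }
static void hpet_write32(uint32_t v, unsigned off) { (void)off; fake_cfg = v; }

/* Hand IRQ0 back to the PIT by turning legacy replacement off. */
static void disable_legacy_route(void)
{
    uint32_t cfg = hpet_read32(HPET_CFG);
    cfg &= ~HPET_CFG_LEGACY;
    hpet_write32(cfg, HPET_CFG);
}

int main(void)
{
    disable_legacy_route();
    return fake_cfg & HPET_CFG_LEGACY;   /* exits 0 once the bit is clear */
}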

diff -r 6472342c8ab0 -r 5215da46d60f xen/arch/x86/crash.c
--- a/xen/arch/x86/crash.c      Wed Sep 30 08:51:21 2009 +0100
+++ b/xen/arch/x86/crash.c      Wed Sep 30 08:20:55 2009 +0000
@@ -25,6 +25,7 @@
 #include <public/xen.h>
 #include <asm/shared.h>
 #include <asm/hvm/support.h>
+#include <asm/hpet.h>
 
 static atomic_t waiting_for_crash_ipi;
 static unsigned int crashing_cpu;
@@ -83,6 +84,9 @@
 
     nmi_shootdown_cpus();
 
+    if ( hpet_broadcast_is_available() )
+        hpet_disable_legacy_broadcast();
+
     disable_IO_APIC();
 
     hvm_cpu_down();
diff -r 6472342c8ab0 -r 5215da46d60f xen/arch/x86/hpet.c
--- a/xen/arch/x86/hpet.c       Wed Sep 30 08:51:21 2009 +0100
+++ b/xen/arch/x86/hpet.c       Wed Sep 30 08:20:55 2009 +0000
@@ -604,8 +604,9 @@
 void hpet_disable_legacy_broadcast(void)
 {
     u32 cfg;
+    unsigned long flags;
 
-    spin_lock_irq(&legacy_hpet_event.lock);
+    spin_lock_irqsave(&legacy_hpet_event.lock, flags);
 
     legacy_hpet_event.flags |= HPET_EVT_DISABLE;
 
@@ -619,7 +620,7 @@
     cfg &= ~HPET_CFG_LEGACY;
     hpet_write32(cfg, HPET_CFG);
 
-    spin_unlock_irq(&legacy_hpet_event.lock);
+    spin_unlock_irqrestore(&legacy_hpet_event.lock, flags);
 
     smp_send_event_check_mask(&cpu_online_map);
 }
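
The locking change in the hpet.c hunk is needed because the crash path
runs with interrupts already disabled (possibly in NMI context):
spin_unlock_irq() re-enables interrupts unconditionally on the way
out, whereas the irqsave/irqrestore pair puts back whatever state the
caller had. A user-space sketch of the difference (nothing below is
Xen code; the lock itself is elided and a bool stands in for the CPU
interrupt flag):

#include <stdbool.h>
#include <stdio.h>

static bool irqs_enabled = true;   /* stands in for the CPU interrupt flag */

/* Analogue of spin_unlock_irq(): re-enables unconditionally. */
static void unlock_irq(void) { irqs_enabled = true; }

/* Analogues of spin_lock_irqsave()/spin_unlock_irqrestore():
 * remember the previous state and restore exactly that. */
static bool lock_irqsave(void)
{
    bool flags = irqs_enabled;
    irqs_enabled = false;
    return flags;
}
static void unlock_irqrestore(bool flags) { irqs_enabled = flags; }

int main(void)
{
    irqs_enabled = false;   /* crash/NMI path: IRQs are already off */

    bool flags = lock_irqsave();
    /* ... critical section: shut down the HPET broadcast ... */
    unlock_irqrestore(flags);
    printf("irqrestore leaves IRQs %s\n", irqs_enabled ? "on" : "off");

    unlock_irq();
    printf("plain _irq unlock leaves IRQs %s\n", irqs_enabled ? "on" : "off");
    return 0;
}

Called from a context that already has interrupts off, the irqrestore
variant leaves them off, which is exactly what the crash path needs.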

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

