
Re: [Xen-devel] pv 2.6.31 (kernel.org) and save/migrate fails, domU BUG



On Tue, 2009-11-24 at 14:27 +0000, Ian Campbell wrote:
> 
> I'm still seeing other problems with resume, the system is hung on
> restore and the RCU stall detection logic is triggering, unfortunately
> arch_trigger_all_cpu_backtrace is not Xen compatible (uses APIC
> directly) so I don't get much useful info out of it. It's most likely
> a symptom of the actual problem rather than a problem with RCU per-se
> anyhow. 

tick_resume() is never called on secondary processors. Presumably this is
because, on native, they are offlined for suspend, so resuming their tick
devices is normally taken care of in the CPU onlining path. Under Xen we keep
all CPUs online over a suspend, so nothing ever resumes their tick devices.
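
For reference, this is roughly where the boot CPU picks up the notification
during resume in the 2.6.31-era generic timekeeping/tick code. This is a
paraphrased sketch from memory, not the literal source, with unrelated
details elided:

/* kernel/time/timekeeping.c -- runs on the CPU driving the resume (CPU0) */
static int timekeeping_resume(struct sys_device *dev)
{
        /* ... restore clocksource/xtime state ... */

        /* only the local (boot) CPU ever sees this on resume */
        clockevents_notify(CLOCK_EVT_NOTIFY_RESUME, NULL);
        return 0;
}

/* kernel/time/tick-common.c -- the notifier only acts on the local CPU */
static int tick_notify(struct notifier_block *nb, unsigned long reason,
                       void *dev)
{
        switch (reason) {
        case CLOCK_EVT_NOTIFY_RESUME:
                tick_resume();  /* resumes this CPU's tick device only */
                break;
        /* ... other cases elided ... */
        }
        return NOTIFY_OK;
}

On native hardware the secondary CPUs get their tick devices set up again
when they are re-onlined after resume, which is why the generic code never
needs to cross-call; under Xen that onlining path is never taken.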

This patch papers over the issue for me, but I will investigate a more
generic, less hacky way of doing the same.

tick_suspend() is also only called on the boot CPU, which I presume should
be fixed too.
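
If that does need fixing, a suspend-side counterpart could presumably mirror
the resume hook in the patch below. The following is an untested sketch only;
the names xen_vcpu_notify_suspend and xen_arch_pre_suspend_ticks, and where
such a hook would be called from, are assumptions rather than anything in
this patch:

static void xen_vcpu_notify_suspend(void *data)
{
        unsigned long reason = (unsigned long)data;

        /* Boot processor presumably handled via generic timekeeping_suspend() */
        if (smp_processor_id() == 0)
                return;

        clockevents_notify(reason, NULL);
}

/* hypothetical hook, to be called before the suspend hypercall */
void xen_arch_pre_suspend_ticks(void)
{
        smp_call_function_many(cpu_online_mask, xen_vcpu_notify_suspend,
                               (void *)CLOCK_EVT_NOTIFY_SUSPEND, 1);
}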

Ian.

diff --git a/arch/x86/xen/suspend.c b/arch/x86/xen/suspend.c
index 6343a5d..cdfeed2 100644
--- a/arch/x86/xen/suspend.c
+++ b/arch/x86/xen/suspend.c
@@ -1,4 +1,5 @@
 #include <linux/types.h>
+#include <linux/clockchips.h>
 
 #include <xen/interface/xen.h>
 #include <xen/grant_table.h>
@@ -46,7 +50,19 @@ void xen_post_suspend(int suspend_cancelled)
 
 }
 
+static void xen_vcpu_notify_restore(void *data)
+{
+       unsigned long reason = (unsigned long)data;
+
+       /* Boot processor notified via generic timekeeping_resume() */
+       if (smp_processor_id() == 0)
+               return;
+
+       clockevents_notify(reason, NULL);
+}
+
 void xen_arch_resume(void)
 {
-       /* nothing */
+       smp_call_function_many(cpu_online_mask, xen_vcpu_notify_restore,
+                              (void *)CLOCK_EVT_NOTIFY_RESUME, 1);
 }
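
One possible simplification of the above, again just a sketch rather than
part of the posted patch: on_each_cpu() runs the callback on every online
CPU including the current one, so the smp_processor_id() check in
xen_vcpu_notify_restore() already covers the boot CPU and the explicit
cpumask can go away:

void xen_arch_resume(void)
{
        on_each_cpu(xen_vcpu_notify_restore,
                    (void *)CLOCK_EVT_NOTIFY_RESUME, 1);
}

on_each_cpu() also takes care of disabling preemption around the cross-CPU
calls itself.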


