
Re: [Xen-users] Debian/squeeze: domU live migration hangs



* Jean-Francois Malouin <Jean-Francois.Malouin@xxxxxxxxxxxxxxxxx> [20101102 16:41]:
> Replying to myself,

Continuing my soliloquy,

Updated the Debian/squeeze dom0 kernel from 2.6.32-26 to 2.6.32-27.
Looks like this resolves the save/migration issue.
Still seeing some delays before the domUs' network comes up, though...
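For anyone wondering whether their dom0 is already on the fixed build, here is a small version-comparison sketch. `ver_ge` is just a local helper, not Xen or Debian tooling, and the hard-coded `installed` string stands in for what `dpkg-query` would report:

```shell
# Hedged sketch: check whether the installed dom0 kernel package is at
# least the 2.6.32-27 build that appeared to fix the save/migration hang.
# ver_ge is a local helper, not part of any Xen or Debian tooling.
ver_ge() {
    # True if $1 >= $2 under GNU version ordering (sort -V).
    [ "$(printf '%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]
}

# In practice the version would come from, e.g.:
#   dpkg-query -W -f='${Version}' linux-image-2.6.32-5-xen-amd64
installed="2.6.32-27"

if ver_ge "$installed" "2.6.32-27"; then
    echo "dom0 kernel looks new enough; live migration should be safe to retry"
else
    echo "upgrade the dom0 kernel first"
fi
```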

jf

> 
> * Jean-Francois Malouin <Jean-Francois.Malouin@xxxxxxxxxxxxxxxxx> [20101102 14:53]:
> > 
> > Hi,
> > 
> > In view of the problems I was having with domU network timeouts after a
> > live migration (I posted about that problem here a while ago and got
> > nothing but a few private emails in reply), I finally updated my
> > Debian/squeeze dom0s last night to a new kernel, from 2.6.32-23 to
> > 2.6.32-26.
> > 
> > Now live migration just hangs... Any ideas?
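For anyone trying to reproduce this, the invocation in question looks roughly like the sketch below. The guest name "mydomu" and target host "dom0b" are placeholders; the commands are only echoed so the sketch can be run safely outside a Xen dom0:

```shell
# Sketch of the live-migration invocation; "mydomu" and "dom0b" are
# placeholder names. Commands are echoed rather than executed so this
# is harmless to run anywhere.
DOMU=mydomu
TARGET=dom0b
CMD="xm migrate --live $DOMU $TARGET"
echo "$CMD"
# While it hangs, xend's log on the source dom0 is the first place to look:
echo "tail -f /var/log/xen/xend.log"
```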
> 
> Forgot to add the kernel log from the domU (same kernel as the dom0):
> 
> [10765.336002] BUG: soft lockup - CPU#0 stuck for 61s! [xenwatch:16]
> [10765.336002] Modules linked in: xt_tcpudp iptable_filter ip_tables x_tables snd_pcm snd_timer snd soundcore snd_page_alloc evdev pcspkr ext3 jbd mbcache dm_mod raid1 md_mod xen_netfront xen_blkfront
> [10765.336002] CPU 0:
> [10765.336002] Modules linked in: xt_tcpudp iptable_filter ip_tables x_tables snd_pcm snd_timer snd soundcore snd_page_alloc evdev pcspkr ext3 jbd mbcache dm_mod raid1 md_mod xen_netfront
> [10765.336002] Pid: 16, comm: xenwatch Not tainted 2.6.32-5-xen-amd64 #1
> [10765.336002] RIP: e030:[<ffffffff81068705>]  [<ffffffff81068705>] lock_hrtimer_base+0x3a/0x3c
> [10765.336002] RSP: e02b:ffff8800ffaadd70  EFLAGS: 00000246
> [10765.336002] RAX: ffff880002a28680 RBX: 0000000000000000 RCX: 0000000000000006
> [10765.336002] RDX: ffff8800fcd70850 RSI: ffff8800ffaadda0 RDI: ffff880002a2f820
> [10765.336002] RBP: ffff880002a2f820 R08: 0000000000000000 R09: 0000000000000000
> [10765.336002] R10: ffff8800fcd70c50 R11: ffffffff8122b649 R12: ffff8800ffaadda0
> [10765.336002] R13: 0000000000000002 R14: ffff8800fcd708a0 R15: ffff8800ffaaddf0
> [10765.336002] FS:  00007fedc5e08710(0000) GS:ffff8800039bf000(0000) knlGS:0000000000000000
> [10765.336002] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
> [10765.336002] CR2: 00007fae1857c120 CR3: 0000000002fe3000 CR4: 0000000000002660
> [10765.336002] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> [10765.336002] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> [10765.336002] Call Trace:
> [10765.336002]  [<ffffffff8106875b>] ? hrtimer_try_to_cancel+0x16/0x43
> [10765.336002]  [<ffffffff8122b649>] ? serial8250_suspend+0x0/0x48
> [10765.336002]  [<ffffffff81068794>] ? hrtimer_cancel+0xc/0x16
> [10765.336002]  [<ffffffffa0009147>] ? netfront_suspend+0x19/0x1d [xen_netfront]
> [10765.336002]  [<ffffffff811f569b>] ? xenbus_dev_suspend+0x1f/0x3b
> [10765.336002]  [<ffffffff81233872>] ? dpm_suspend_start+0x359/0x45b
> [10765.336002]  [<ffffffff811f2ca0>] ? shutdown_handler+0x15f/0x25c
> [10765.336002]  [<ffffffff8130b475>] ? mutex_lock+0xd/0x31
> [10765.336002]  [<ffffffff811f47ad>] ? xenwatch_thread+0x117/0x14a
> [10765.336002]  [<ffffffff81065afe>] ? autoremove_wake_function+0x0/0x2e
> [10765.336002]  [<ffffffff811f4696>] ? xenwatch_thread+0x0/0x14a
> [10765.336002]  [<ffffffff81065831>] ? kthread+0x79/0x81
> [10765.336002]  [<ffffffff81012baa>] ? child_rip+0xa/0x20
> [10765.336002]  [<ffffffff81011d61>] ? int_ret_from_sys_call+0x7/0x1b
> [10765.336002]  [<ffffffff8101251d>] ? retint_restore_args+0x5/0x6
> [10765.336002]  [<ffffffff8102ddac>] ? pvclock_clocksource_read+0x3a/0x8b
> [10765.336002]  [<ffffffff81012ba0>] ? child_rip+0x0/0x20
> 
> Googling dug up this:
> 
> http://www.linux-archive.org/debian-kernel/442963-bug-600992-further-logs.html
> 
> which is pretty much what I'm seeing on my systems.
> jf
> 
> 
> > 
> > Xen-related Debian packages (all from the Debian repository, except drbd,
> > which is from Linbit):
> > 
> > ~# dpkg -l \*xen\* | grep ^i
> > drbd8-module-2.6.32-5-xen-amd64    2:8.3.8-0+2.6.32-26  RAID 1 over tcp/ip for Linux kernel module
> > libxenstore3.0                     4.0.1-1              Xenstore communications library for Xen
> > linux-headers-2.6.32-5-common-xen  2.6.32-26            Common header files for Linux 2.6.32-5-xen
> > linux-headers-2.6.32-5-xen-amd64   2.6.32-26            Header files for Linux 2.6.32-5-xen-amd64
> > linux-image-2.6.32-5-xen-amd64     2.6.32-26            Linux 2.6.32 for 64-bit PCs, Xen dom0 support
> > xen-hypervisor-4.0-amd64           4.0.1-1              The Xen Hypervisor on AMD64
> > xen-qemu-dm-4.0                    4.0.1-1              Xen Qemu Device Model virtual machine hardware emulator
> > xen-tools                          4.2-1                Tools to manage Xen virtual servers
> > xen-utils-4.0                      4.0.1-1              XEN administrative tools
> > xen-utils-common                   4.0.0-1              XEN administrative tools - common files
> > xenstore-utils                     4.0.1-1              Xenstore utilities for Xen
> > xenwatch                           0.5.4-2              Virtualization utilities, mostly for Xen
> > 
> > ~# xm info
> > host                   : dom0
> > release                : 2.6.32-5-xen-amd64
> > version                : #1 SMP Wed Oct 20 02:22:18 UTC 2010
> > machine                : x86_64
> > nr_cpus                : 8
> > nr_nodes               : 2
> > cores_per_socket       : 4
> > threads_per_core       : 1
> > cpu_mhz                : 2000
> > hw_caps                : bfebfbff:28100800:00000000:00001b40:009ce3bd:00000000:00000001:00000000
> > virt_caps              : hvm hvm_directio
> > total_memory           : 12279
> > free_memory            : 3934
> > node_to_cpu            : node0:0-3
> >                          node1:4-7
> > node_to_memory         : node0:1996
> >                          node1:1938
> > node_to_dma32_mem      : node0:1995
> >                          node1:0
> > max_node_id            : 1
> > xen_major              : 4
> > xen_minor              : 0
> > xen_extra              : .1
> > xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
> > xen_scheduler          : credit
> > xen_pagesize           : 4096
> > platform_params        : virt_start=0xffff800000000000
> > xen_changeset          : unavailable
> > xen_commandline        : dom0_mem=2048M dom0_max_vcpus=2 loglvl=all guest_loglvl=all console=tty0
> > cc_compiler            : gcc version 4.4.5 20100824 (prerelease) (Debian 4.4.4-11)
> > cc_compile_by          : waldi
> > cc_compile_domain      : debian.org
> > cc_compile_date        : Fri Sep  3 15:38:12 UTC 2010
> > xend_config_format     : 4
> > 
> > thanks
> > jf
> > -- 
> > <° >< Jean-François Malouin          McConnell Brain Imaging Centre        
> > Systems/Network Administrator       Montréal Neurological Institute
> > 3801 Rue University, Suite WB219          Montréal, Québec, H3A 2B4
> > Phone: 514-398-8924                               Fax: 514-398-8948
> > 
> > _______________________________________________
> > Xen-users mailing list
> > Xen-users@xxxxxxxxxxxxxxxxxxx
> > http://lists.xensource.com/xen-users
> 

-- 
<° >< Jean-François Malouin          McConnell Brain Imaging Centre        
Systems/Network Administrator       Montréal Neurological Institute
3801 Rue University, Suite WB219          Montréal, Québec, H3A 2B4
Phone: 514-398-8924                               Fax: 514-398-8948
