Re: [Xen-devel] WARNings in guest during xl save/restore
On Fri, Nov 21, 2014 at 11:08:37AM +0100, Juergen Gross wrote:
> Hi,
>
> during tests of my "linear p2m list" patches I stumbled over some
> WARNs issued during xl save and xl restore of a pv-domU with
> unpatched linux 3.18-rc5:

Boris had a patch for this, I think..

>
> during save I saw multiple entries like:
> [ 176.900393] WARNING: CPU: 0 PID: 9 at arch/x86/xen/enlighten.c:968 clear_local_APIC+0xa5/0x2b0()
> [ 176.900393] Modules linked in: cfg80211 rfkill nfsd auth_rpcgss oid_registry nfs_acl nfs lockd grace fscache sunrpc evdev x86_pkg_temp_thermal thermal_sys snd_pcm coretemp snd_timer crc32_pclmul aesni_intel snd xts soundcore aes_i586 lrw gf128mul ablk_helper pcspkr cryptd fuse autofs4 ext4 crc16 mbcache jbd2 crc32c_intel
> [ 176.900393] CPU: 0 PID: 9 Comm: migration/0 Tainted: G W 3.18.0-rc5 #30
> [ 176.900393] 00000009 c14c40b2 00000000 c1054b10 c1599538 00000000 00000009 c158bdc2
> [ 176.900393] 000003c8 c103c925 c103c925 000003c8 00000002 00000000 c15d25eb e8867e64
> [ 176.900393] c1054bd9 00000009 00000000 c103c925 00000000 c103cb54 00000002 00000000
> [ 176.900393] Call Trace:
> [ 176.900393] [<c14c40b2>] ? dump_stack+0x3e/0x4e
> [ 176.900393] [<c1054b10>] ? warn_slowpath_common+0x90/0xc0
> [ 176.900393] [<c103c925>] ? clear_local_APIC+0xa5/0x2b0
> [ 176.900393] [<c103c925>] ? clear_local_APIC+0xa5/0x2b0
> [ 176.900393] [<c1054bd9>] ? warn_slowpath_null+0x19/0x20
> [ 176.900393] [<c103c925>] ? clear_local_APIC+0xa5/0x2b0
> [ 176.900393] [<c103cb54>] ? disable_local_APIC+0x24/0x90
> [ 176.900393] [<c103ccde>] ? lapic_suspend+0x11e/0x170
> [ 176.900393] [<c1360ff9>] ? syscore_suspend+0x79/0x220
> [ 176.900393] [<c107e782>] ? set_next_entity+0x62/0x80
> [ 176.900393] [<c13165cd>] ? xen_suspend+0x2d/0x110
> [ 176.900393] [<c10045df>] ? xen_mc_flush+0x13f/0x170
> [ 176.900393] [<c10d5619>] ? multi_cpu_stop+0xa9/0xd0
> [ 176.900393] [<c10d5570>] ? cpu_stop_should_run+0x50/0x50
> [ 176.900393] [<c10d5771>] ? cpu_stopper_thread+0x71/0x100
> [ 176.900393] [<c1074214>] ? finish_task_switch+0x34/0xd0
> [ 176.900393] [<c14c513d>] ? __schedule+0x23d/0x7f0
> [ 176.900393] [<c10897f4>] ? __wake_up_common+0x44/0x70
> [ 176.900393] [<c14c8962>] ? _raw_spin_lock_irqsave+0x12/0x60
> [ 176.900393] [<c1071f22>] ? smpboot_thread_fn+0xd2/0x170
> [ 176.900393] [<c1071e50>] ? SyS_setgroups+0x110/0x110
> [ 176.900393] [<c106e801>] ? kthread+0xa1/0xc0
> [ 176.900393] [<c14c8ea1>] ? ret_from_kernel_thread+0x21/0x30
> [ 176.900393] [<c106e760>] ? kthread_create_on_node+0x120/0x120
> [ 176.900393] ---[ end trace b38596d5cfdcde8d ]---
>
> and during restore:
> [ 176.900393] WARNING: CPU: 0 PID: 9 at arch/x86/xen/enlighten.c:968 lapic_resume+0xc6/0x270()
> [ 176.900393] Modules linked in: cfg80211 rfkill nfsd auth_rpcgss oid_registry nfs_acl nfs lockd grace fscache sunrpc evdev x86_pkg_temp_thermal thermal_sys snd_pcm coretemp snd_timer crc32_pclmul aesni_intel snd xts soundcore aes_i586 lrw gf128mul ablk_helper pcspkr cryptd fuse autofs4 ext4 crc16 mbcache jbd2 crc32c_intel
> [ 176.900393] CPU: 0 PID: 9 Comm: migration/0 Tainted: G W 3.18.0-rc5 #30
> [ 176.900393] 00000009 c14c40b2 00000000 c1054b10 c1599538 00000000 00000009 c158bdc2
> [ 176.900393] 000003c8 c103c1e6 c103c1e6 000003c8 c1030020 00000002 0000001b 00000000
> [ 176.900393] c1054bd9 00000009 00000000 c103c1e6 00000000 c16432c0 0108cdfe c15d25dc
> [ 176.900393] Call Trace:
> [ 176.900393] [<c14c40b2>] ? dump_stack+0x3e/0x4e
> [ 176.900393] [<c1054b10>] ? warn_slowpath_common+0x90/0xc0
> [ 176.900393] [<c103c1e6>] ? lapic_resume+0xc6/0x270
> [ 176.900393] [<c103c1e6>] ? lapic_resume+0xc6/0x270
> [ 176.900393] [<c1030020>] ? mcheck_cpu_init+0x170/0x4f0
> [ 176.900393] [<c1054bd9>] ? warn_slowpath_null+0x19/0x20
> [ 176.900393] [<c103c1e6>] ? lapic_resume+0xc6/0x270
> [ 176.900393] [<c1360e66>] ? syscore_resume+0x46/0x160
> [ 176.900393] [<c1009012>] ? xen_timer_resume+0x42/0x60
> [ 176.900393] [<c131661c>] ? xen_suspend+0x7c/0x110
> [ 176.900393] [<c10d5619>] ? multi_cpu_stop+0xa9/0xd0
> [ 176.900393] [<c10d5570>] ? cpu_stop_should_run+0x50/0x50
> [ 176.900393] [<c10d5771>] ? cpu_stopper_thread+0x71/0x100
> [ 176.900393] [<c1074214>] ? finish_task_switch+0x34/0xd0
> [ 176.900393] [<c14c513d>] ? __schedule+0x23d/0x7f0
> [ 176.900393] [<c10897f4>] ? __wake_up_common+0x44/0x70
> [ 176.900393] [<c14c8962>] ? _raw_spin_lock_irqsave+0x12/0x60
> [ 176.900393] [<c1071f22>] ? smpboot_thread_fn+0xd2/0x170
> [ 176.900393] [<c1071e50>] ? SyS_setgroups+0x110/0x110
> [ 176.900393] [<c106e801>] ? kthread+0xa1/0xc0
> [ 176.900393] [<c14c8ea1>] ? ret_from_kernel_thread+0x21/0x30
> [ 176.900393] [<c106e760>] ? kthread_create_on_node+0x120/0x120
> [ 176.900393] ---[ end trace b38596d5cfdcde93 ]---
>
> While this doesn't seem to be critical (the system is running after the
> restore), I assume disabling/enabling a local APIC on a pv-domain isn't
> something we want to happen...
>
>
> Juergen

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
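For illustration, the kind of guard Juergen is hinting at could look roughly like the sketch below: skip the local APIC suspend/resume syscore work entirely when running as a Xen PV guest, since a PV guest has no local APIC and every APIC register access goes through Xen's stub APIC ops, which is what produces the enlighten.c:968 WARNs above. This is only a sketch of the idea, not necessarily the patch Boris has; it assumes the check would sit in the lapic syscore hooks and uses the existing xen_pv_domain() helper.

    #include <xen/xen.h>            /* xen_pv_domain() */

    /* Sketch only: a Xen PV guest has no local APIC, so there is nothing
     * to save on suspend and nothing to restore on resume.
     */
    static int lapic_suspend(void)
    {
            if (xen_pv_domain())
                    return 0;       /* skip; avoids the stub APIC ops WARN */

            /* ... existing code saving the local APIC registers ... */
            return 0;
    }

    static void lapic_resume(void)
    {
            if (xen_pv_domain())
                    return;         /* nothing was saved, nothing to restore */

            /* ... existing code restoring the local APIC registers ... */
    }

Whether the real fix takes this form or instead gives PV guests dedicated no-op APIC handlers is a separate design choice; the sketch only shows where the suspend/resume path and the PV check would meet.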