
[Xen-users] OpenSUSE 11.2 Xen HVM guest crash causing BUG() in dom0



Hello,

I was talking on IRC with a user who was having problems with HVM guests 
(FreeBSD FreeNAS) on OpenSUSE 11.2 with Xen 3.4.1.

Andy/Jan: Have you guys seen this before? Is it normal to get 
this kind of BUG() in dom0 when a guest crashes?

dom0 kernel: 2.6.31.8-0.1-xen x86_64
Xen: Xen-3.4.1_19718_04-2.1 x86_64

So basically the HVM guest crashes, and then dom0 hits a BUG().

Also reported here:
https://bugzilla.novell.com/show_bug.cgi?id=573305

First:
(XEN) Domain 4 reported crashed by domain 0 on cpu#0:

then lots of:

(XEN) domain_crash called from p2m.c:1091
(XEN) p2m_pod_demand_populate: Out of populate-on-demand memory!

And then:

[ 5293.824815] BUG: soft lockup - CPU#0 stuck for 61s! [qemu-dm:6405]
[ 5293.824815] Modules linked in: tun nf_conntrack_ipv4 nf_defrag_ipv4 xt_state 
nf_conntrack xt_physdev iptable_filter ip_tables x_tables netbk blkbk 
blkback_pagemap blktap xenbus_be edd nls_utf8 cifs i915 drm i2c_algo_bit video 
output bridge stp llc fuse loop dm_mod usb_storage 3c59x 8250_pnp 8250_pci 
intel_agp pcspkr heci(C) serio_raw iTCO_wdt wmi sr_mod sg iTCO_vendor_support 
8250 agpgart container button i2c_i801 serial_core i2c_core usbhid hid raid456 
raid6_pq async_xor async_memcpy async_tx xor raid1 raid0 uhci_hcd ehci_hcd 
xenblk cdrom xennet fan processor ide_pci_generic ide_core ata_generic thermal 
thermal_sys hwmon
[ 5293.824815] CPU 0:
[ 5293.824815] Modules linked in: tun nf_conntrack_ipv4 nf_defrag_ipv4 xt_state 
nf_conntrack xt_physdev iptable_filter ip_tables x_tables netbk blkbk 
blkback_pagemap blktap xenbus_be edd nls_utf8 cifs i915 drm i2c_algo_bit video 
output bridge stp llc fuse loop dm_mod usb_storage 3c59x 8250_pnp 8250_pci 
intel_agp pcspkr heci(C) serio_raw iTCO_wdt wmi sr_mod sg iTCO_vendor_support 
8250 agpgart container button i2c_i801 serial_core i2c_core usbhid hid raid456 
raid6_pq async_xor async_memcpy async_tx xor raid1 raid0 uhci_hcd ehci_hcd 
xenblk cdrom xennet fan processor ide_pci_generic ide_core ata_generic thermal 
thermal_sys hwmon
[ 5293.824815] Pid: 6405, comm: qemu-dm Tainted: G         C 2.6.31.8-0.1-xen 
#1 7484A8U
[ 5293.824815] RIP: e030:[<ffffffff8000802a>]  [<ffffffff8000802a>] 
0xffffffff8000802a
[ 5293.824815] RSP: e02b:ffff880028791cc0  EFLAGS: 00000246
[ 5293.824815] RAX: 00000000ffffffea RBX: 8000000000000427 RCX: ffffffff8000802a
[ 5293.824815] RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffff88002f4e5000
[ 5293.824815] RBP: ffff880028791d38 R08: 0000000000000000 R09: ffff880028791d00
[ 5293.824815] R10: 0000000000000004 R11: 0000000000000246 R12: ffff88002f4e5010
[ 5293.824815] R13: 00007f2512e9d000 R14: 00000000000016f6 R15: ffff88002f4e5000
[ 5293.824815] FS:  00007f251ede76f0(0000) GS:ffffc90000000000(0000) 
knlGS:0000000000000000
[ 5293.824815] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
[ 5293.824815] CR2: 00007fd847703000 CR3: 000000001bd9d000 CR4: 0000000000002660
[ 5293.824815] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 5293.824815] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 5293.824815] Call Trace:
[ 5293.824815]  [<ffffffff80026759>] __direct_remap_pfn_range+0x1b9/0x200
[ 5293.824815]  [<ffffffff80026813>] direct_remap_pfn_range+0x43/0x50
[ 5293.824815]  [<ffffffff80311a08>] privcmd_ioctl+0x618/0x740
[ 5293.824815]  [<ffffffff80180772>] proc_reg_unlocked_ioctl+0x92/0x160
[ 5293.824815]  [<ffffffff8012b510>] vfs_ioctl+0x30/0xd0
[ 5293.824815]  [<ffffffff8012b6f0>] do_vfs_ioctl+0x90/0x430
[ 5293.824815]  [<ffffffff8012bb29>] sys_ioctl+0x99/0xb0
[ 5293.824815]  [<ffffffff8000c868>] system_call_fastpath+0x16/0x1b
[ 5293.824815]  [<00007f251d26d7e7>] 0x7f251d26d7e7

Full log here:
http://pastebin.com/m24b0e01
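For context, and this is my assumption rather than something confirmed from the log: 
populate-on-demand is only active when an HVM guest is configured with maxmem larger 
than memory, so Xen allocates the initial amount at boot and populates the rest on 
demand. If the guest (here FreeBSD, which may lack a working Xen balloon driver) 
touches more pages than the PoD pool can supply, you get exactly the 
"Out of populate-on-demand memory!" / domain_crash messages above. A hypothetical 
guest config that would enable PoD might look like:

    # Hypothetical HVM guest config (illustration only, not the reporter's actual config).
    # With maxmem > memory, Xen boots the guest with `memory` MB populated
    # and fills the remaining pages on demand (populate-on-demand).
    # A guest OS without a balloon driver can exhaust the PoD pool.
    builder = 'hvm'
    memory  = 512       # initial allocation in MB
    maxmem  = 1024      # maxmem > memory activates PoD

If that is what is happening, setting memory equal to maxmem should avoid the PoD 
path entirely; but whether the guest crash should then cascade into a dom0 soft 
lockup is still the open question.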

-- Pasi


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

