
[Xen-users] Re: OpenSUSE 11.2 Xen HVM guest crash causing BUG() in dom0



On Sat, Jan 23, 2010 at 05:04:37PM +0000, Andrew Lyon wrote:
> Pasi,
> 
> I'm not sure if that kernel rpm version includes the swiotlb fixes
> that made 2.6.31 reliable on all of my Xen systems. It would be a
> good idea for the user to try the latest KOTD rpm for kernel-xen, as
> that should be up to date with the git tree, which definitely has
> the necessary fixes.
> 

ftp://ftp.suse.com/pub/projects/kernel/kotd/master/x86_64/
Seems to have 2.6.32-based kernels.

Do you know if there are newer 2.6.31-based builds?
2.6.31.8-0.1-xen in OpenSUSE 11.2 seems to be dated 18 Dec 2009.
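
FWIW, to check exactly what the installed rpm contains, something
like this should work (kernel-xen being the package name, as above):

rpm -qi kernel-xen | grep 'Build Date'
rpm -q --changelog kernel-xen | head -n 20

The top of the changelog should show whether the swiotlb fixes made
it into that build.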

> Since Jan fixed the swiotlb problems I've not had a single crash
> with .31, and I've tested it very thoroughly, so I would be
> surprised if there are still undiscovered bugs.
> 

Yeah.. that's what I said too :)
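
Btw, about the "Out of populate-on-demand memory" errors below: AFAIK
PoD is only used when an HVM guest is started with memory < maxmem;
Xen backs the gap with PoD pages that are supposed to be handed back
by the guest's balloon driver. A FreeBSD guest without PV drivers
never balloons down, so the pool can run dry and the hypervisor then
crashes the domain. A guest config along these lines (values made up
for illustration) would set that scenario up:

# hypothetical HVM guest config snippet
memory = 512    # initial allocation; the rest is populate-on-demand
maxmem = 1024   # what the guest sees as its total memory

Setting memory equal to maxmem should avoid PoD entirely, but the
dom0 soft lockup afterwards still looks like a separate bug.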

-- Pasi

> Andy
> 
> On 23/01/2010, Pasi Kärkkäinen <pasik@xxxxxx> wrote:
> > Hello,
> >
> > I was talking on IRC with a user who had problems with HVM guests
> > (FreeBSD FreeNAS) on OpenSUSE 11.2 Xen 3.4.1.
> >
> > Andy/Jan: Have you guys seen this before? Is it normal to get
> > this kind of BUG() in dom0 when a guest crashes?
> >
> > dom0 kernel: 2.6.31.8-0.1-xen x86_64
> > Xen: Xen-3.4.1_19718_04-2.1 x86_64
> >
> > So basically the HVM guest crashes, and then dom0 hits a BUG().
> >
> > Also reported here:
> > https://bugzilla.novell.com/show_bug.cgi?id=573305
> >
> > First:
> > (XEN) Domain 4 reported crashed by domain 0 on cpu#0:
> >
> > then lots of:
> >
> > (XEN) domain_crash called from p2m.c:1091
> > (XEN) p2m_pod_demand_populate: Out of populate-on-demand memory!
> >
> > And then:
> >
> > [ 5293.824815] BUG: soft lockup - CPU#0 stuck for 61s! [qemu-dm:6405]
> > [ 5293.824815] Modules linked in: tun nf_conntrack_ipv4 nf_defrag_ipv4
> > xt_state nf_conntrack xt_physdev iptable_filter ip_tables x_tables netbk
> > blkbk blkback_pagemap blktap xenbus_be edd nls_utf8 cifs i915 drm
> > i2c_algo_bit video output bridge stp llc fuse loop dm_mod usb_storage 3c59x
> > 8250_pnp 8250_pci intel_agp pcspkr heci(C) serio_raw iTCO_wdt wmi sr_mod sg
> > iTCO_vendor_support 8250 agpgart container button i2c_i801 serial_core
> > i2c_core usbhid hid raid456 raid6_pq async_xor async_memcpy async_tx xor
> > raid1 raid0 uhci_hcd ehci_hcd xenblk cdrom xennet fan processor
> > ide_pci_generic ide_core ata_generic thermal thermal_sys hwmon
> > [ 5293.824815] CPU 0:
> > [ 5293.824815] Modules linked in: tun nf_conntrack_ipv4 nf_defrag_ipv4
> > xt_state nf_conntrack xt_physdev iptable_filter ip_tables x_tables netbk
> > blkbk blkback_pagemap blktap xenbus_be edd nls_utf8 cifs i915 drm
> > i2c_algo_bit video output bridge stp llc fuse loop dm_mod usb_storage 3c59x
> > 8250_pnp 8250_pci intel_agp pcspkr heci(C) serio_raw iTCO_wdt wmi sr_mod sg
> > iTCO_vendor_support 8250 agpgart container button i2c_i801 serial_core
> > i2c_core usbhid hid raid456 raid6_pq async_xor async_memcpy async_tx xor
> > raid1 raid0 uhci_hcd ehci_hcd xenblk cdrom xennet fan processor
> > ide_pci_generic ide_core ata_generic thermal thermal_sys hwmon
> > [ 5293.824815] Pid: 6405, comm: qemu-dm Tainted: G         C 2.6.31.8-0.1-xen #1 7484A8U
> > [ 5293.824815] RIP: e030:[<ffffffff8000802a>]  [<ffffffff8000802a>] 0xffffffff8000802a
> > [ 5293.824815] RSP: e02b:ffff880028791cc0  EFLAGS: 00000246
> > [ 5293.824815] RAX: 00000000ffffffea RBX: 8000000000000427 RCX: ffffffff8000802a
> > [ 5293.824815] RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffff88002f4e5000
> > [ 5293.824815] RBP: ffff880028791d38 R08: 0000000000000000 R09: ffff880028791d00
> > [ 5293.824815] R10: 0000000000000004 R11: 0000000000000246 R12: ffff88002f4e5010
> > [ 5293.824815] R13: 00007f2512e9d000 R14: 00000000000016f6 R15: ffff88002f4e5000
> > [ 5293.824815] FS:  00007f251ede76f0(0000) GS:ffffc90000000000(0000) knlGS:0000000000000000
> > [ 5293.824815] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
> > [ 5293.824815] CR2: 00007fd847703000 CR3: 000000001bd9d000 CR4: 0000000000002660
> > [ 5293.824815] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > [ 5293.824815] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> > [ 5293.824815] Call Trace:
> > [ 5293.824815]  [<ffffffff80026759>] __direct_remap_pfn_range+0x1b9/0x200
> > [ 5293.824815]  [<ffffffff80026813>] direct_remap_pfn_range+0x43/0x50
> > [ 5293.824815]  [<ffffffff80311a08>] privcmd_ioctl+0x618/0x740
> > [ 5293.824815]  [<ffffffff80180772>] proc_reg_unlocked_ioctl+0x92/0x160
> > [ 5293.824815]  [<ffffffff8012b510>] vfs_ioctl+0x30/0xd0
> > [ 5293.824815]  [<ffffffff8012b6f0>] do_vfs_ioctl+0x90/0x430
> > [ 5293.824815]  [<ffffffff8012bb29>] sys_ioctl+0x99/0xb0
> > [ 5293.824815]  [<ffffffff8000c868>] system_call_fastpath+0x16/0x1b
> > [ 5293.824815]  [<00007f251d26d7e7>] 0x7f251d26d7e7
> >
> > Full log here:
> > http://pastebin.com/m24b0e01
> >
> > -- Pasi
> >
> >
> 
> -- 
> Sent from my mobile device

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

