
Re: [Xen-devel] kernel BUG at arch/x86/xen/mmu.c:1860!


  • To: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
  • From: Teck Choon Giam <giamteckchoon@xxxxxxxxx>
  • Date: Wed, 5 Jan 2011 23:30:34 +0800
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Wed, 05 Jan 2011 07:31:36 -0800
  • Domainkey-signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; b=Cc/az+71iLvFnUzTpXn6yUmnei07kdJQjR7xpVSZ7+lNamYVdULsBpN3bNLQpeE3iA uefV0UyxMKFiSlrxsfh8gz3UYekuvXBUU3W9m7fIlqs8l3H2ZEl6fDnZfXe9fEbmEOba 86shnl1Iz7IF7JCZiUgspLCQKILcFy/UEBkFA=
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>



On Wed, Jan 5, 2011 at 3:24 AM, Teck Choon Giam <giamteckchoon@xxxxxxxxx> wrote:


On Tue, Jan 4, 2011 at 9:48 PM, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
On Sun, 2010-12-26 at 08:16 +0000, Teck Choon Giam wrote:
>
> Triggered BUG() in line 1860:
>
> static void pin_pagetable_pfn(unsigned cmd, unsigned long pfn)
> {
>         struct mmuext_op op;
>         op.cmd = cmd;
>         op.arg1.mfn = pfn_to_mfn(pfn);
>         if (HYPERVISOR_mmuext_op(&op, 1, NULL, DOMID_SELF))
>                 BUG(); <<THIS ONE?
> }

A failure to pin/unpin is usually accompanied by a log message from the
hypervisor. Please can you attempt to capture the full host log, e.g.
using a serial console.

See http://wiki.xen.org/xenwiki/XenParavirtOps under "Are there more
debugging options I could enable to troubleshoot booting problems?" for
some details.
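The settings from that wiki page boil down to a couple of boot-loader entries; a sketch, assuming the box's serial port is COM1 at 115200 baud (adjust to your hardware, and keep your existing arguments where the "..." is):

```
# Xen hypervisor line: mirror the console to the first serial port, verbose
kernel /xen.gz ... com1=115200,8n1 console=com1,vga loglvl=all guest_loglvl=all sync_console

# Dom0 kernel line: send its console to the Xen console
module /vmlinuz-2.6.32.27 ... console=hvc0 earlyprintk=xen
```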

I will once I get the serial console cable and have a system to capture the log during my next visit to the DC.


Hi Ian,

Here is another console output, in addition to the one I posted earlier:

EXT3-fs warning: maximal mount count reached, running e2fsck is recommended
    [previous message repeated 7 more times]
INIT: Id "s1" respawning too fast: disabled for 5 minutes
EXT3-fs warning: maximal mount count reached, running e2fsck is recommended
    [previous message repeated 23 more times]
hrtimer: interrupt took 3096797 ns
EXT3-fs warning: maximal mount count reached, running e2fsck is recommended
    [previous message repeated 11 more times]
(XEN) mm.c:2364:d0 Bad type (saw 7400000000000001 != exp 1000000000000000) for mfn 1ee744 (pfn 1c399)
(XEN) mm.c:2733:d0 Error while pinning mfn 1ee744
------------[ cut here ]------------
kernel BUG at arch/x86/xen/mmu.c:1860!
invalid opcode: 0000 [#1] SMP
last sysfs file: /sys/block/dm-17/dev
CPU 1
Modules linked in: ext4 jbd2 crc16 gfs2 dlm configfs xt_physdev iptable_filter ip_tables x_tables bridge stp be2iscsi iscsi_]
Pid: 19758, comm: dmsetup Not tainted 2.6.32.27-0.xen.pvops.choon.centos5 #1 PowerEdge 860
RIP: e030:[<ffffffff8100cb5b>]  [<ffffffff8100cb5b>] pin_pagetable_pfn+0x53/0x59
RSP: e02b:ffff88003a615dc8  EFLAGS: 00010282
RAX: 00000000ffffffea RBX: 000000000001c399 RCX: 00000000000000e1
RDX: 00000000deadbeef RSI: 00000000deadbeef RDI: 00000000deadbeef
RBP: ffff88003a615de8 R08: 0000000000000cc8 R09: ffff880000000000
R10: 00000000deadbeef R11: 0000000000000246 R12: 0000000000000003
R13: 000000000001c399 R14: ffff88001c1d86c0 R15: 0000003db7400258
FS:  00007f6cbd5b6710(0000) GS:ffff88002806c000(0000) knlGS:0000000000000000
CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 0000003db7400258 CR3: 000000003a5ed000 CR4: 0000000000002660
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process dmsetup (pid: 19758, threadinfo ffff88003a614000, task ffff88001c1d86c0)
Stack:
 0000000000000000 00000000001ee744 000000013e64d518 000000000001c399
<0> ffff88003a615e08 ffffffff8100e07c ffff880027a2c580 ffff88003a56fdd0
<0> ffff88003a615e18 ffffffff8100e0af ffff88003a615e58 ffffffff810a402f
Call Trace:
 [<ffffffff8100e07c>] xen_alloc_ptpage+0x64/0x69
 [<ffffffff8100e0af>] xen_alloc_pte+0xe/0x10
 [<ffffffff810a402f>] __pte_alloc+0x70/0xce
 [<ffffffff810a41cd>] handle_mm_fault+0x140/0x8b9
 [<ffffffff8131be4d>] do_page_fault+0x252/0x2e2
 [<ffffffff81319dd5>] page_fault+0x25/0x30
Code: 48 b8 ff ff ff ff ff ff ff 7f 48 21 c2 48 89 55 e8 48 8d 7d e0 be 01 00 00 00 31 d2 41 ba f0 7f 00 00 e8 e9 c7 ff ff 8
RIP  [<ffffffff8100cb5b>] pin_pagetable_pfn+0x53/0x59
 RSP <ffff88003a615dc8>
---[ end trace 63676fea977b3461 ]---
BUG: soft lockup - CPU#1 stuck for 61s! [dmsetup:19758]
Modules linked in: ext4 jbd2 crc16 gfs2 dlm configfs xt_physdev iptable_filter ip_tables x_tables bridge stp be2iscsi iscsi_]
CPU 1:
Modules linked in: ext4 jbd2 crc16 gfs2 dlm configfs xt_physdev iptable_filter ip_tables x_tables bridge stp be2iscsi iscsi_]
Pid: 19758, comm: dmsetup Tainted: G      D    2.6.32.27-0.xen.pvops.choon.centos5 #1 PowerEdge 860
RIP: e030:[<ffffffff813199d3>]  [<ffffffff813199d3>] _spin_lock+0x19/0x20
RSP: e02b:ffff88003a615a68  EFLAGS: 00000297
RAX: 0000000000000025 RBX: 000000003d2fd000 RCX: 0000000000000004
RDX: 0000000000000024 RSI: 0000000000000004 RDI: ffff880027a2c600
RBP: ffff88003a615a68 R08: 0000000000000000 R09: ffffffff816dd100
R10: 3030303030303030 R11: 0000000000000120 R12: ffff880027a2c580
R13: 0000000000000004 R14: ffff880027a2c5e0 R15: ffffffff816dd100
FS:  00007f720a0f36e0(0000) GS:ffff88002806c000(0000) knlGS:0000000000000000
CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 00007f7209c9c898 CR3: 0000000001001000 CR4: 0000000000002660
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Call Trace:
 [<ffffffff811660dd>] ? free_cpumask_var+0x9/0xb
 [<ffffffff8100dde1>] xen_exit_mmap+0x199/0x1d7
 [<ffffffff810a8137>] exit_mmap+0x5f/0x14b
 [<ffffffff81048648>] mmput+0x46/0xb2
 [<ffffffff8104c552>] exit_mm+0xfd/0x108
 [<ffffffff8100f799>] ? xen_irq_enable_direct_end+0x0/0x7
 [<ffffffff8104d7ee>] do_exit+0x1f3/0x67b
 [<ffffffff8131a908>] oops_end+0xba/0xc2
 [<ffffffff810163a1>] die+0x55/0x5e
 [<ffffffff8131a192>] do_trap+0x110/0x11f
 [<ffffffff810142c8>] do_invalid_op+0x97/0xa0
 [<ffffffff8100cb5b>] ? pin_pagetable_pfn+0x53/0x59
 [<ffffffff810138bb>] invalid_op+0x1b/0x20
 [<ffffffff8100cb5b>] ? pin_pagetable_pfn+0x53/0x59
 [<ffffffff8100cb57>] ? pin_pagetable_pfn+0x4f/0x59
 [<ffffffff8100e07c>] xen_alloc_ptpage+0x64/0x69
 [<ffffffff8100e0af>] xen_alloc_pte+0xe/0x10
 [<ffffffff810a402f>] __pte_alloc+0x70/0xce
 [<ffffffff810a41cd>] handle_mm_fault+0x140/0x8b9
 [<ffffffff8131be4d>] do_page_fault+0x252/0x2e2
 [<ffffffff81319dd5>] page_fault+0x25/0x30
Kernel panic - not syncing: softlockup: hung tasks
Pid: 19758, comm: dmsetup Tainted: G      D    2.6.32.27-0.xen.pvops.choon.centos5 #1
Call Trace:
 <IRQ>  [<ffffffff8104aa97>] panic+0xa0/0x15f
 [<ffffffff81319dd5>] ? page_fault+0x25/0x30
 [<ffffffff8101640f>] ? show_trace_log_lvl+0x4c/0x58
 [<ffffffff8101642b>] ? show_trace+0x10/0x12
 [<ffffffff81011755>] ? show_regs+0x44/0x48
 [<ffffffff8107f202>] softlockup_tick+0x173/0x182
 [<ffffffff810539bf>] run_local_timers+0x18/0x1a
 [<ffffffff81053bde>] update_process_times+0x30/0x54
 [<ffffffff81068821>] tick_sched_timer+0x70/0x99
 [<ffffffff8105f52e>] __run_hrtimer+0x53/0xb3
 [<ffffffff8105f772>] hrtimer_interrupt+0xae/0x192
 [<ffffffff8100f3a3>] xen_timer_interrupt+0x37/0x181
 [<ffffffff81082898>] ? check_for_new_grace_period+0x97/0xa5
 [<ffffffff811c870f>] ? unmask_evtchn+0x34/0xd6
 [<ffffffff8108318c>] ? __rcu_process_callbacks+0xf2/0x2ae
 [<ffffffff8107f708>] handle_IRQ_event+0x2d/0xb7
 [<ffffffff81081079>] handle_percpu_irq+0x3c/0x69
 [<ffffffff811c8640>] __xen_evtchn_do_upcall+0xe1/0x168
 [<ffffffff811c92d1>] xen_evtchn_do_upcall+0x2e/0x41
 [<ffffffff81013c7e>] xen_do_hypervisor_callback+0x1e/0x30
 <EOI>  [<ffffffff813199d3>] ? _spin_lock+0x19/0x20
 [<ffffffff811660dd>] ? free_cpumask_var+0x9/0xb
 [<ffffffff8100dde1>] ? xen_exit_mmap+0x199/0x1d7
 [<ffffffff810a8137>] ? exit_mmap+0x5f/0x14b
 [<ffffffff81048648>] ? mmput+0x46/0xb2
 [<ffffffff8104c552>] ? exit_mm+0xfd/0x108
 [<ffffffff8100f799>] ? xen_irq_enable_direct_end+0x0/0x7
 [<ffffffff8104d7ee>] ? do_exit+0x1f3/0x67b
 [<ffffffff8131a908>] ? oops_end+0xba/0xc2
 [<ffffffff810163a1>] ? die+0x55/0x5e
 [<ffffffff8131a192>] ? do_trap+0x110/0x11f
 [<ffffffff810142c8>] ? do_invalid_op+0x97/0xa0
 [<ffffffff8100cb5b>] ? pin_pagetable_pfn+0x53/0x59
 [<ffffffff810138bb>] ? invalid_op+0x1b/0x20
 [<ffffffff8100cb5b>] ? pin_pagetable_pfn+0x53/0x59
 [<ffffffff8100cb57>] ? pin_pagetable_pfn+0x4f/0x59
 [<ffffffff8100e07c>] ? xen_alloc_ptpage+0x64/0x69
 [<ffffffff8100e0af>] ? xen_alloc_pte+0xe/0x10
 [<ffffffff810a402f>] ? __pte_alloc+0x70/0xce
 [<ffffffff810a41cd>] ? handle_mm_fault+0x140/0x8b9
 [<ffffffff8131be4d>] ? do_page_fault+0x252/0x2e2
 [<ffffffff81319dd5>] ? page_fault+0x25/0x30
(XEN) Domain 0 crashed: rebooting machine in 5 seconds.

Thanks.

Kindest regards,
Giam Teck Choon
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

