
[Xen-devel] [Xen-users] kernel panic in domU

Hi All,

I'm using Xen 4.0.1, with separate kernels for Dom0 and DomU, and pygrub to boot the DomU kernel. I have tried everything I know to make this work, but so far it still fails.
The kernel prints a panic message and then dies. The following is the trace. It suggests that panic() invoked xen_panic_event, which in turn entered hypercall_page to report the crash to the hypervisor.

cs:eip: 0061:c04023a7 hypercall_page+0x3a7
flags: 00001286 i s nz p
ss:esp: 0069:dfc1ff14
eax: 00000000    ebx: 00000002    ecx: dfc1ff18    edx: 00000000
esi: 00000000    edi: 00000000    ebp: dfc1ff28
 ds:     007b     es:     007b     fs:     00d8     gs:     0000
Code (instr addr c04023a7)
cc cc cc cc cc cc cc cc cc cc cc cc cc cc b8 1d 00 00 00 cd 82 <c3> cc cc cc cc cc cc cc cc cc cc

 c04036f2 00000003 c086ecc8 00000000 00000000 dfc1ff44 c06f7e45 c092b99c
 c0874c24 dfc22cc0 c087379c dfc22cc0 dfc1ff54 c06f7e89 ffffffff 00000000
 dfc1ff70 c06f3e55 c07da467 c092b99c dfc1ff7c dfc22cc0 c087379c dfc1ffa4
 c043eba9 c07da8ae c0406fab c040b33b dfc10000 00000000 00000001 dfc1ff90

Call Trace:
  [<c04023a7>] hypercall_page+0x3a7  <--
  [<c04036f2>] xen_panic_event+0x1d
  [<c06f7e45>] notifier_call_chain+0x26
  [<c06f7e89>] atomic_notifier_call_chain+0xf
  [<c06f3e55>] panic+0x59
  [<c043eba9>] do_exit+0x5c
  [<c0406fab>] xen_restore_fl_direct_reloc+0x4
  [<c040b33b>] do_softirq+0xd5
  [<c043f1f1>] sys_exit+0x13
  [<c0409389>] syscall_call+0x7

I am also including some logs from the console.

cpuidle: using governor ladder
cpuidle: using governor menu
usbcore: registered new interface driver hiddev
usbcore: registered new interface driver usbhid
usbhid: v2.6:USB HID core driver
nf_conntrack version 0.5.0 (7992 buckets, 31968 max)
CONFIG_NF_CT_ACCT is deprecated and will be removed soon. Please use
nf_conntrack.acct=1 kernel parameter, acct=1 nf_conntrack module option or
sysctl net.netfilter.nf_conntrack_acct=1 to enable it.
ip_tables: (C) 2000-2006 Netfilter Core Team
TCP cubic registered
Initializing XFRM netlink socket
NET: Registered protocol family 17
Bridge firewalling registered
Using IPI No-Shortcut mode
registered taskstats version 1
XENBUS: Device with no driver: device/vbd/51714
XENBUS: Device with no driver: device/vbd/51713
XENBUS: Device with no driver: device/console/0
  Magic number: 1:252:3141
Freeing unused kernel memory: 464k freed
Write protecting the kernel text: 3048k
Write protecting the kernel read-only data: 1464k
Loading, please wait...
<30>udev[493]: starting version 167
Begin: Loading essential drivers ... JINHO: 1. xlblk_init called
JINHO: 2. xlblk_init called
JINHO: 3. xlblk_init called
blkfront: xvda2: barriers enabled (tag)
Setting capacity to 8388608
blkfront: xvda1: barriers enabled (tag)
Setting capacity to 2097152
xvda1: detected capacity change from 0 to 1073741824
Begin: Running /scripts/init-premount ... done.
Begin: Mounting root file system ... Begin: Running /scripts/local-top ... done.
Begin: Running /scripts/local-premount ... done.
kjournald starting.  Commit interval 5 seconds
EXT3-fs: mounted filesystem with writeback data mode.
Begin: Running /scripts/local-bottom ... done.
Begin: Running /scripts/init-bottom ... done.
run-init: /sbin/init: No such file or directory
Kernel panic - not syncing: Attempted to kill init!
Pid: 1, comm: run-init Not tainted #1
Call Trace:
 [<c06f3efc>] ? printk+0xf/0x13
 [<c06f3e35>] panic+0x39/0xf1
 [<c043eba9>] do_exit+0x5c/0x5cf
 [<c0406fab>] ? xen_restore_fl_direct_end+0x0/0x1
 [<c040b33b>] ? do_softirq+0xd5/0xde
 [<c043f1f1>] complete_and_exit+0x0/0x17
 [<c0409389>] syscall_call+0x7/0xb
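This second panic looks like the real root cause: the initramfs mounted the root filesystem fine, but run-init found no /sbin/init on it, so PID 1 exited and the kernel panicked with "Attempted to kill init!". A minimal, self-contained sketch of the check run-init is effectively making (the directory here is a stand-in; with the real guest you would loop-mount the disk image from dom0 and inspect it the same way):

```shell
# Simulate run-init's requirement: the new root must contain an
# executable /sbin/init. An empty directory stands in for a guest
# root that is missing init.
ROOT=$(mktemp -d)

check_init() {
    if [ -x "$1/sbin/init" ]; then
        echo "init present"
    else
        echo "init missing"    # the state producing the panic above
    fi
}

check_init "$ROOT"             # prints: init missing

# Populate a minimal init and re-check.
mkdir -p "$ROOT/sbin"
printf '#!/bin/sh\n' > "$ROOT/sbin/init"
chmod +x "$ROOT/sbin/init"
check_init "$ROOT"             # prints: init present

rm -rf "$ROOT"
```

In practice this means: mount the domU disk image (or LVM volume) from dom0 and confirm /sbin/init exists and is executable on the partition the guest mounts as root, and that the root= line in the pygrub grub config points at the right device (the log shows blkfront presenting xvda1 and xvda2, so root= must match one of those).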

Please help me solve this problem; I have already spent a week on it.

Thank you,


Jinho Hwang
PhD Student
Department of Computer Science
The George Washington University
Washington, DC 20052
hwang.jinho@xxxxxxxxx (email)
276.336.0971 (Cell)
202.994.4875 (fax)
070.8285.6546 (myLg070)