
[Xen-devel] 3.0.3-testing #11686 crash after domU started


  • To: xen-devel@xxxxxxxxxxxxxxxxxxx
  • From: Michele <vo.sinh@xxxxxxxxx>
  • Date: Thu, 5 Oct 2006 19:08:58 +0200
  • Delivery-date: Thu, 05 Oct 2006 10:09:29 -0700
  • Domainkey-signature: a=rsa-sha1; q=dns; c=nofws; s=beta; d=gmail.com; h=received:message-id:date:from:to:subject:mime-version:content-type:content-transfer-encoding:content-disposition; b=dPJGidkSkQ0c4E6QF6u3cv408sKG65uASlTj4MBSDXq1OFi/8c5GlYpxwIV/Q7UpIv7b3jA9m3tQtAxllGLPIYdYlKosbGkgO86ZbmxLlZIFQrQdnXkRWhCZujEgb0MQEgv+/Ani4/ignSPJ+OpTK9qvVZvDRM90sXSBjACcPc8=
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Hi list,

I got this after compiling the latest 3.0.3-testing changeset (11686).
I ran make tools xen, then make install-tools and make install-xen,
followed by a make kernels and a make install-kernels, so the default
kernel config should be in use and everything should be OK.
The domU started there is an older 3.0.3, or maybe 3.0.2; I'll give you
more info once the machine has been rebooted. After the crash it's
impossible even to access the console over the serial interface.
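For reference, the build and install sequence described above would look roughly like this (a sketch only; it assumes a stock 3.0.3-testing source tree and the standard top-level Makefile targets of that release):

```shell
# From the top of the xen-3.0.3-testing tree:

# Build the hypervisor and the userspace tools
make tools xen

# Install the tools and the hypervisor
make install-tools
make install-xen

# Build and install the dom0/domU kernels with the shipped default configs
make kernels
make install-kernels
```

The exact target names may differ slightly between changesets; check the top-level Makefile of the tree you are building.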

regards,
Michele

[root@localhost ~]# service xend start
Breaking affinity for irq 20
Breaking affinity for irq 259
Breaking affinity for irq 260
Breaking affinity for irq 261
Breaking affinity for irq 14
Breaking affinity for irq 265
Breaking affinity for irq 266
Breaking affinity for irq 267
Breaking affinity for irq 17
Breaking affinity for irq 262
Breaking affinity for irq 263
Breaking affinity for irq 264
Bridge firewalling registered
[root@localhost ~]# service xendomains start
Restoring Xen domains: admin.
Starting auto Xen domains: admin(skip)/etc/init.d/xendomains: line 67:
log_success_msg: command not found
[root@localhost ~]# xm console admin
----------- [cut here ] --------- [please bite here ] ---------
Kernel BUG at drivers/xen/netfront/netfront.c:717
invalid opcode: 0000 [1] SMP
CPU 0
Modules linked in: ipv6 binfmt_misc ide_generic dm_snapshot dm_zero
dm_mirror dm_mod raid1 ext3 jbd ide_disk
Pid: 8, comm: xenwatch Not tainted 2.6.16.29-xenU #2
RIP: e030:[<ffffffff8111b44b>] <ffffffff8111b44b>{network_alloc_rx_buffers+495}
RSP: e02b:ffff88000135fdf8  EFLAGS: 00010082
RAX: 0000000000000000 RBX: ffff880005c4c980 RCX: 0000000000008bc7
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000002058
RBP: ffff88000f820500 R08: 0000000000000000 R09: 0000000000002c97
R10: 0000000000000000 R11: 0000000000000000 R12: ffff88000f87a838
R13: ffff88000f820628 R14: 000000000000040b R15: ffff88000f8258e0
FS:  0000000000000000(0000) GS:ffffffff8124a000(0000) knlGS:0000000000000000
CS:  e033 DS: 0000 ES: 0000
Process xenwatch (pid: 8, threadinfo ffff88000135e000, task ffff88000134a100)
Stack: ffff88000f820000 000088000135fed8 000000c900000100 ffff88000160bee8
      00000000000000c9 000000370134a318 ffffffff812134a0 ffff88000135e000
      ffff88000135fe88 0000000000000001
Call Trace: <ffffffff8111b9b4>{backend_changed+456}
      <ffffffff8110f700>{xenwatch_thread+0}
<ffffffff8103b072>{keventd_create_kthread+0}
      <ffffffff8110ed1f>{xenwatch_handle_callback+21}
<ffffffff8110f82d>{xenwatch_thread+301}
      <ffffffff8103b44d>{autoremove_wake_function+0}
<ffffffff8103b072>{keventd_create_kthread+0}
      <ffffffff8110f700>{xenwatch_thread+0} <ffffffff8103b315>{kthread+212}
      <ffffffff8100b7ee>{child_rip+8}
<ffffffff8103b072>{keventd_create_kthread+0}
      <ffffffff8103b241>{kthread+0} <ffffffff8100b7e6>{child_rip+0}

Code: 0f 0b 68 db 4b 1c 81 c2 cd 02 4c 63 e2 4a 89 9c e5 78 09 00
RIP <ffffffff8111b44b>{network_alloc_rx_buffers+495} RSP <ffff88000135fdf8>
BUG: xenwatch/8, lock held at task exit time!
[ffffffff812134a0] {xenwatch_mutex}
.. held by:          xenwatch:    8 [ffff88000134a100, 110]
... acquired at:               xenwatch_thread+0xa8/0x145

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

