Re: [Xen-users] Xen 3.2.0 on debian etch, many kernel panics
On Mon, Mar 03, 2008 at 12:55:27 +0100, Jordi Moles wrote:
> Hi,
>
> i'm trying xen 3.2.0 on debian etch. I installed everything from the
> repositories. I added the lenny repositories so that i could install the
> latest packages and have xen 3.2.0 on my machine from the debian
> repositories.
>
> these are the packages installed:
>
> ii linux-image-2.6.18-6-xen-amd64    2.6.18.dfsg.1-18etch1  Linux 2.6.18 image on AMD64
> ii linux-modules-2.6.18-6-xen-amd64  2.6.18.dfsg.1-18etch1  Linux 2.6.18 modules on AMD64
> ii xen-hypervisor-3.2-1-amd64        3.2.0-2                The Xen Hypervisor on AMD64
> ii xen-tools                         3.9-2                  Tools to manage Debian XEN virtual servers
> ii xen-utils-3.2-1                   3.2.0-2                XEN administrative tools
> ii xen-utils-common                  3.1.0-1                XEN administrative tools - common files
>
> Everything was installed without any trouble or warning or error
> reported in the logs, but now... well... the whole machine keeps hanging
> with kernel panics.
>
> This is an example:
>
> *************************
>
> Mar 3 12:28:22 x03glus01 kernel: device vif5.1 entered promiscuous mode
> Mar 3 12:28:22 x03glus01 kernel: audit(1204543702.112:14): dev=vif5.1 prom=256 old_prom=0 auid=4294967295
> Mar 3 12:28:22 x03glus01 kernel: device vif5.0 entered promiscuous mode
> Mar 3 12:28:22 x03glus01 kernel: audit(1204543702.116:15): dev=vif5.0 prom=256 old_prom=0 auid=4294967295
> Mar 3 12:28:22 x03glus01 kernel: ADDRCONF(NETDEV_UP): vif5.1: link is not ready
> Mar 3 12:28:22 x03glus01 kernel: ADDRCONF(NETDEV_UP): vif5.0: link is not ready
> Mar 3 12:28:23 x03glus01 kernel: ----------- [cut here ] --------- [please bite here ] ---------
> Mar 3 12:28:23 x03glus01 kernel: Kernel BUG at drivers/xen/core/evtchn.c:481
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Try limiting dom0 to a single cpu:

# vi /etc/xen/xend-config.sxp
(dom0-cpus 1)

If anybody knows how to fix (or circumvent in any other way) this bug,
please tell us.
AFAIK the bug is Debian specific only.

> Mar 3 12:28:23 x03glus01 kernel: invalid opcode: 0000 [1] SMP
> Mar 3 12:28:23 x03glus01 kernel: CPU 1
> Mar 3 12:28:23 x03glus01 kernel: Modules linked in: xt_tcpudp xt_physdev iptable_filter ip_tables x_tables bridge netloop button ac battery ipmi_si ipmi_devintf ipmi_msghandler ipv6 dm_snapshot dm_mirror dm_mod loop serio_raw i2c_i801 i2c_core serial_core psmouse pcspkr shpchp pci_hotplug evdev ext3 jbd mbcache ide_generic ide_cd cdrom sd_mod piix ahci libata generic ehci_hcd scsi_mod ide_core uhci_hcd e1000 fan raid1 md_mod
> Mar 3 12:28:23 x03glus01 kernel: Pid: 37, comm: xenwatch Not tainted 2.6.18-6-xen-amd64 #1
> Mar 3 12:28:23 x03glus01 kernel: RIP: e030:[<ffffffff80360fe1>] [<ffffffff80360fe1>] retrigger+0x26/0x3e
> Mar 3 12:28:23 x03glus01 kernel: RSP: e02b:ffff8801ef297d88 EFLAGS: 00010046
> Mar 3 12:28:23 x03glus01 kernel: RAX: 0000000000000000 RBX: 0000000000009000 RCX: ffffffffff578000
> Mar 3 12:28:23 x03glus01 kernel: RDX: 000000000000002f RSI: ffff8801ef297d30 RDI: 0000000000000120
> Mar 3 12:28:23 x03glus01 kernel: RBP: ffffffff804cd480 R08: 0000000000000060 R09: ffff8801ef253e80
> Mar 3 12:28:23 x03glus01 kernel: R10: ffff8801eafb4cc0 R11: ffffffff80360fbb R12: 0000000000000120
> Mar 3 12:28:23 x03glus01 kernel: R13: ffffffff804cd4bc R14: 0000000000000000 R15: 0000000000000008
> Mar 3 12:28:23 x03glus01 kernel: FS: 00002ac9f9b2c6e0(0000) GS:ffffffff804c3080(0000) knlGS:0000000000000000
> Mar 3 12:28:23 x03glus01 kernel: CS: e033 DS: 0000 ES: 0000
> Mar 3 12:28:23 x03glus01 kernel: Process xenwatch (pid: 37, threadinfo ffff8801ef296000, task ffff8801ef289080)
> Mar 3 12:28:23 x03glus01 kernel: Stack: ffffffff802a0679 ffff8801edf75500 ffff8801edf75500 0000000000000000
> Mar 3 12:28:23 x03glus01 kernel: ffff8801ef297de0 000000000000040b ffffffff8036db4c 0000000000000000
> Mar 3 12:28:23 x03glus01 kernel: ffffffff8036dfc4 ffff8801ef297ea4
> Mar 3 12:28:23 x03glus01 kernel:
> Call Trace:
> Mar 3 12:28:23 x03glus01 kernel: [<ffffffff802a0679>] enable_irq+0x9d/0xbc
> Mar 3 12:28:23 x03glus01 kernel: [<ffffffff8036db4c>] __netif_up+0xc/0x15
> Mar 3 12:28:23 x03glus01 kernel: [<ffffffff8036dfc4>] netif_map+0x2a6/0x2d8
> Mar 3 12:28:23 x03glus01 kernel: [<ffffffff8035c325>] bus_for_each_dev+0x61/0x6e
> Mar 3 12:28:23 x03glus01 kernel: [<ffffffff803667ce>] xenwatch_thread+0x0/0x145
> Mar 3 12:28:23 x03glus01 kernel: [<ffffffff803667ce>] xenwatch_thread+0x0/0x145
> Mar 3 12:28:23 x03glus01 kernel: [<ffffffff8036830e>] frontend_changed+0x2ba/0x4f9
> Mar 3 12:28:23 x03glus01 kernel: [<ffffffff803667ce>] xenwatch_thread+0x0/0x145
> Mar 3 12:28:23 x03glus01 kernel: [<ffffffff8028f865>] keventd_create_kthread+0x0/0x61
> Mar 3 12:28:23 x03glus01 kernel: [<ffffffff80365bdc>] xenwatch_handle_callback+0x15/0x48
> Mar 3 12:28:23 x03glus01 kernel: [<ffffffff803668fb>] xenwatch_thread+0x12d/0x145
> Mar 3 12:28:23 x03glus01 kernel: [<ffffffff8028fa28>] autoremove_wake_function+0x0/0x2e
> Mar 3 12:28:23 x03glus01 kernel: [<ffffffff8028f865>] keventd_create_kthread+0x0/0x61
> Mar 3 12:28:23 x03glus01 kernel: [<ffffffff803667ce>] xenwatch_thread+0x0/0x145
> Mar 3 12:28:23 x03glus01 kernel: [<ffffffff8023352b>] kthread+0xd4/0x107
> Mar 3 12:28:23 x03glus01 kernel: [<ffffffff8025c830>] child_rip+0xa/0x12
> Mar 3 12:28:23 x03glus01 kernel: [<ffffffff8028f865>] keventd_create_kthread+0x0/0x61
> Mar 3 12:28:23 x03glus01 kernel: [<ffffffff80233457>] kthread+0x0/0x107
> Mar 3 12:28:23 x03glus01 kernel: [<ffffffff8025c826>] child_rip+0x0/0x12
> Mar 3 12:28:23 x03glus01 kernel:
> Mar 3 12:28:23 x03glus01 kernel:
> Mar 3 12:28:23 x03glus01 kernel: Code: 0f 0b 68 74 db 41 80 c2 e1 01 f0 0f ab 91 00 08 00 00 b8 01
> Mar 3 12:28:23 x03glus01 kernel: RIP [<ffffffff80360fe1>] retrigger+0x26/0x3e
> Mar 3 12:28:23 x03glus01 kernel: RSP <ffff8801ef297d88>
>
> ************************
>
> This actually happens in several new machines i'm testing.
> I've run disk and memory tests on them and no problem reported at all.
> They are intel quad core, 8 virtual cpus.
>
> Do you have any idea where my problem is?
>
> Thank you.
>
> _______________________________________________
> Xen-users mailing list
> Xen-users@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-users

--
WBR, i.m.chubin
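For anyone wanting to try the (dom0-cpus 1) workaround suggested above, here is a minimal sketch of the edit. It operates on a scratch copy so it can be tested safely; on a real dom0 you would point CONF at /etc/xen/xend-config.sxp (the path named in the reply) and restart xend, or reboot, for the change to take effect. The sample config lines are assumptions for illustration, not taken from any actual install.

```shell
# Sketch only: edits a scratch copy of the config. On a real host set
# CONF=/etc/xen/xend-config.sxp (back it up first) and restart xend after.
CONF=$(mktemp)
# Assumed sample contents standing in for a stock xend-config.sxp:
printf '(dom0-min-mem 196)\n(dom0-cpus 0)\n' > "$CONF"

# Replace an existing (dom0-cpus N) entry, or append one if none exists.
if grep -q '^(dom0-cpus' "$CONF"; then
    sed -i 's/^(dom0-cpus .*/(dom0-cpus 1)/' "$CONF"
else
    echo '(dom0-cpus 1)' >> "$CONF"
fi

grep '^(dom0-cpus' "$CONF"   # prints: (dom0-cpus 1)
rm -f "$CONF"
```

The grep/sed/append pattern is used rather than a blind append because xend reads the first matching S-expression, so a leftover (dom0-cpus 0) line earlier in the file could mask the new setting.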