
Re: [Xen-users] Xen 4.0 crashes with pvops kernel


  • To: xen-users@xxxxxxxxxxxxxxxxxxx, Cris Daniluk <cris.daniluk@xxxxxxxxx>
  • From: Boris Derzhavets <bderzhavets@xxxxxxxxx>
  • Date: Mon, 14 Jun 2010 13:11:05 -0700 (PDT)
  • Cc:
  • Delivery-date: Mon, 14 Jun 2010 13:12:33 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

I would escalate this to xen-devel, and to Yu Ke in particular.

Boris.

--- On Mon, 6/14/10, Cris Daniluk <cris.daniluk@xxxxxxxxx> wrote:

From: Cris Daniluk <cris.daniluk@xxxxxxxxx>
Subject: [Xen-users] Xen 4.0 crashes with pvops kernel
To: xen-users@xxxxxxxxxxxxxxxxxxx
Date: Monday, June 14, 2010, 12:55 PM

Hi,

I'm trying to get Xen 4.0 going with a pvops-enabled kernel on an IBM
x3500 7977 server. I've tried several different distros, including
CentOS 5.5, RHEL6 beta, FC12 and FC13. In each of them, I can run a
Xenlinux (2.6.18) kernel, including the Xen-enabled distro kernels in
CentOS 5.5 and FC12. However, if I try to run a pvops kernel, I get a
panic. The CPUs are detected fine, but the boot runs into trouble
shortly thereafter.

I can boot the pvops-enabled kernel directly and everything works
fine. I only have trouble when booting it as a dom0. I've got two
identical servers and it is a problem on both, so I don't think
there's bad RAM. I also tried this with the latest 4.0-testing branch
and had the same experience.

Here's my console output from FC12, running a kernel I compiled from
2.6.32.15. Please let me know what additional debugging info is needed.
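In case it helps, a serial console usually captures more of the early
output than VGA does. A sketch of the relevant GRUB legacy boot entry,
assuming a com1 serial port at 115200 baud (the option names below are
the standard Xen 4.0 and pvops dom0 ones; adjust paths and the serial
port to match the actual setup):

```
# /boot/grub/menu.lst entry (sketch) -- log everything over serial
title Xen 4.0 / pvops dom0 (serial debug)
    root (hd0,0)
    # Xen: route hypervisor console to com1, raise log levels,
    # and flush synchronously so nothing is lost at crash time
    kernel /xen.gz com1=115200,8n1 console=com1 loglvl=all guest_loglvl=all sync_console
    # dom0 kernel: use the Xen hvc0 console and early Xen printk
    module /vmlinuz-2.6.32.15 console=hvc0 earlyprintk=xen
    module /initrd-2.6.32.15.img
```

With sync_console the output is slow but complete, which matters when
the panic kills init before the log buffer is drained.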

ACPI: bus type pci registered
PCI: MCFG configuration 0: base e0000000 segment 0 buses 0 - 27
PCI: MCFG area at e0000000 reserved in E820
PCI: Using MMCONFIG at e0000000 - e1bfffff
PCI: Using configuration type 1 for base access
bio: create slab <bio-0> at 0
ERROR: Unable to locate IOAPIC for GSI 9
ACPI: Interpreter enabled
ACPI: (supports S0 S4 S5)
ACPI: Using IOAPIC for interrupt routing
ACPI: No dock devices found.
(XEN) mm.c:797:d0 Non-privileged (0) attempt to map I/O space 000fec80
BUG: unable to handle kernel paging request at ffffc900001b0000
IP: [<ffffffff81281df4>] acpi_ex_system_memory_space_handler+0x1c6/0x1e6
PGD 3fd5a067 PUD 3fd5b067 PMD 3fd5c067 PTE 0
Oops: 0002 [#1] SMP
last sysfs file:
CPU 3
Modules linked in:
Pid: 1, comm: swapper Not tainted 2.6.32.15 #1 IBM eServer x3500-[7977AC1]-
RIP: e030:[<ffffffff81281df4>]  [<ffffffff81281df4>]
acpi_ex_system_memory_space_handler+0x1c6/0x1e6
RSP: e02b:ffff88003ee876c0  EFLAGS: 00010246
RAX: 000000000000002e RBX: ffff88003efc5880 RCX: 0000000000000001
RDX: 0000000000000000 RSI: ffffffff81228a14 RDI: 80000000fec80273
RBP: ffff88003ee87700 R08: ffff880002697220 R09: 0000000000000100
R10: 0000000000000001 R11: ffffea0000dc7708 R12: ffffc900001b0000
R13: 0000000000000000 R14: 0000000000000020 R15: ffff88003ee87848
FS:  0000000000000000(0000) GS:ffff880002685000(0000) knlGS:0000000000000000
CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: ffffc900001b0000 CR3: 0000000001001000 CR4: 0000000000002660
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process swapper (pid: 1, threadinfo ffff88003ee86000, task ffff88003ee88000)
Stack:
ffff88003ee876f0 0000000000000100 ffff880000000000 ffff88003fdeeea0
<0> ffffffff81281c2e ffff88003ee11ea0 ffff88003fdeef78 0000000000000000
<0> ffff88003ee87770 ffffffff8127a40e ffff88003ee87720 ffffffff810380f5
Call Trace:
[<ffffffff81281c2e>] ? acpi_ex_system_memory_space_handler+0x0/0x1e6
[<ffffffff8127a40e>] acpi_ev_address_space_dispatch+0x170/0x1be
[<ffffffff810380f5>] ? ioremap_nocache+0x17/0x19
[<ffffffff8127f033>] acpi_ex_access_region+0x235/0x242
[<ffffffff8127a40e>] ? acpi_ev_address_space_dispatch+0x170/0x1be
[<ffffffff8100ee7d>] ? xen_force_evtchn_callback+0xd/0xf
[<ffffffff8127f137>] acpi_ex_field_datum_io+0xf7/0x189
[<ffffffff8100f5ff>] ? xen_restore_fl_direct_end+0x0/0x1
[<ffffffff8127f425>] acpi_ex_write_with_update_rule+0xb5/0xc0
[<ffffffff8127f5ee>] acpi_ex_insert_into_field+0x1be/0x1e0
[<ffffffff8100f5ff>] ? xen_restore_fl_direct_end+0x0/0x1
[<ffffffff8127dab0>] acpi_ex_write_data_to_field+0x1a4/0x1c2
[<ffffffff8128fb5c>] ? acpi_ut_allocate_object_desc_dbg+0x40/0x78
[<ffffffff81281eb7>] acpi_ex_store_object_to_node+0xa3/0xe6
[<ffffffff81278785>] ? acpi_ds_create_operand+0x1f7/0x20a
[<ffffffff812820a6>] acpi_ex_store+0xc3/0x255
[<ffffffff8127fe88>] acpi_ex_opcode_1A_1T_1R+0x361/0x4bc
[<ffffffff812806f2>] ? acpi_ex_resolve_operands+0x1f2/0x4d4
[<ffffffff812773e3>] acpi_ds_exec_end_op+0xef/0x3dc
[<ffffffff81289b9e>] acpi_ps_parse_loop+0x7c0/0x946
[<ffffffff81288c88>] acpi_ps_parse_aml+0x9f/0x2de
[<ffffffff8128a42c>] acpi_ps_execute_method+0x1e9/0x2b9
[<ffffffff8128598a>] acpi_ns_evaluate+0xe6/0x1ad
[<ffffffff8128d957>] acpi_ut_evaluate_object+0xb7/0x1e0
[<ffffffff8100f5ff>] ? xen_restore_fl_direct_end+0x0/0x1
[<ffffffff8128b680>] acpi_rs_get_method_data+0x1f/0x45
[<ffffffff81271e27>] ? get_root_bridge_busnr_callback+0x0/0x40
[<ffffffff8128af7a>] acpi_walk_resources+0x56/0xc9
[<ffffffff81457c63>] acpi_pci_root_add+0x70/0x273
[<ffffffff8126deed>] acpi_device_probe+0x50/0x122
[<ffffffff812ecb1a>] driver_probe_device+0xea/0x217
[<ffffffff812ecca4>] __driver_attach+0x5d/0x81
[<ffffffff812ecc47>] ? __driver_attach+0x0/0x81
[<ffffffff812ebfbe>] bus_for_each_dev+0x53/0x88
[<ffffffff812ec8aa>] driver_attach+0x1e/0x20
[<ffffffff812ec4e9>] bus_add_driver+0xd5/0x23c
[<ffffffff812ecfa4>] driver_register+0x9d/0x10e
[<ffffffff81863968>] ? acpi_pci_root_init+0x0/0x28
[<ffffffff8126e9f2>] acpi_bus_register_driver+0x43/0x45
[<ffffffff81863981>] acpi_pci_root_init+0x19/0x28
[<ffffffff8100a069>] do_one_initcall+0x5e/0x159
[<ffffffff818366bc>] kernel_init+0x165/0x1bf
[<ffffffff81013d2a>] child_rip+0xa/0x20
[<ffffffff81012f11>] ? int_ret_from_sys_call+0x7/0x1b
[<ffffffff8101369d>] ? retint_restore_args+0x5/0x6
[<ffffffff81013d20>] ? child_rip+0x0/0x20
Code: 83 fe 08 75 33 eb 0e 41 83 fe 20 74 1b 41 83 fe 40 75 25 eb 1c
49 8b 07 41 88 04 24 eb 1a 49 8b 07 66 41 89 04 24 eb 10 49 8b 07 <41>
89 04 24 eb 07 49 8b 07 49 89 04 24 31 c0 48 83 c4 18 5b 41
RIP  [<ffffffff81281df4>] acpi_ex_system_memory_space_handler+0x1c6/0x1e6
RSP <ffff88003ee876c0>
CR2: ffffc900001b0000
---[ end trace a22d306b065d4a66 ]---
Kernel panic - not syncing: Attempted to kill init!
Pid: 1, comm: swapper Tainted: G      D    2.6.32.15 #1
Call Trace:
[<ffffffff81467068>] panic+0x7a/0x133
[<ffffffff810624f6>] ? exit_ptrace+0xa1/0x121
[<ffffffff8105afdd>] do_exit+0x7a/0x6d3
[<ffffffff8146a2df>] oops_end+0xbf/0xc7
[<ffffffff81037831>] no_context+0x1f3/0x202
[<ffffffff8100ed11>] ? xen_set_pte_at+0x37/0x109
[<ffffffff810379bd>] __bad_area_nosemaphore+0x17d/0x1a0
[<ffffffff8100c7bd>] ? __raw_callee_save_xen_pmd_val+0x11/0x1e
[<ffffffff810379f3>] bad_area_nosemaphore+0x13/0x15
[<ffffffff8146b75e>] do_page_fault+0x14f/0x2a0
[<ffffffff81469775>] page_fault+0x25/0x30
[<ffffffff81228a14>] ? rb_insert_color+0xbc/0xe5
[<ffffffff81281df4>] ? acpi_ex_system_memory_space_handler+0x1c6/0x1e6
[<ffffffff81281c2e>] ? acpi_ex_system_memory_space_handler+0x0/0x1e6
[<ffffffff8127a40e>] acpi_ev_address_space_dispatch+0x170/0x1be
[<ffffffff810380f5>] ? ioremap_nocache+0x17/0x19
[<ffffffff8127f033>] acpi_ex_access_region+0x235/0x242
[<ffffffff8127a40e>] ? acpi_ev_address_space_dispatch+0x170/0x1be
[<ffffffff8100ee7d>] ? xen_force_evtchn_callback+0xd/0xf
[<ffffffff8127f137>] acpi_ex_field_datum_io+0xf7/0x189
[<ffffffff8100f5ff>] ? xen_restore_fl_direct_end+0x0/0x1
[<ffffffff8127f425>] acpi_ex_write_with_update_rule+0xb5/0xc0
[<ffffffff8127f5ee>] acpi_ex_insert_into_field+0x1be/0x1e0
[<ffffffff8100f5ff>] ? xen_restore_fl_direct_end+0x0/0x1
[<ffffffff8127dab0>] acpi_ex_write_data_to_field+0x1a4/0x1c2
[<ffffffff8128fb5c>] ? acpi_ut_allocate_object_desc_dbg+0x40/0x78
[<ffffffff81281eb7>] acpi_ex_store_object_to_node+0xa3/0xe6
[<ffffffff81278785>] ? acpi_ds_create_operand+0x1f7/0x20a
[<ffffffff812820a6>] acpi_ex_store+0xc3/0x255
[<ffffffff8127fe88>] acpi_ex_opcode_1A_1T_1R+0x361/0x4bc
[<ffffffff812806f2>] ? acpi_ex_resolve_operands+0x1f2/0x4d4
[<ffffffff812773e3>] acpi_ds_exec_end_op+0xef/0x3dc
[<ffffffff81289b9e>] acpi_ps_parse_loop+0x7c0/0x946
[<ffffffff81288c88>] acpi_ps_parse_aml+0x9f/0x2de
[<ffffffff8128a42c>] acpi_ps_execute_method+0x1e9/0x2b9
[<ffffffff8128598a>] acpi_ns_evaluate+0xe6/0x1ad
[<ffffffff8128d957>] acpi_ut_evaluate_object+0xb7/0x1e0
[<ffffffff8100f5ff>] ? xen_restore_fl_direct_end+0x0/0x1
[<ffffffff8128b680>] acpi_rs_get_method_data+0x1f/0x45
[<ffffffff81271e27>] ? get_root_bridge_busnr_callback+0x0/0x40
[<ffffffff8128af7a>] acpi_walk_resources+0x56/0xc9
[<ffffffff81457c63>] acpi_pci_root_add+0x70/0x273
[<ffffffff8126deed>] acpi_device_probe+0x50/0x122
[<ffffffff812ecb1a>] driver_probe_device+0xea/0x217
[<ffffffff812ecca4>] __driver_attach+0x5d/0x81
[<ffffffff812ecc47>] ? __driver_attach+0x0/0x81
[<ffffffff812ebfbe>] bus_for_each_dev+0x53/0x88
[<ffffffff812ec8aa>] driver_attach+0x1e/0x20
[<ffffffff812ec4e9>] bus_add_driver+0xd5/0x23c
[<ffffffff812ecfa4>] driver_register+0x9d/0x10e
[<ffffffff81863968>] ? acpi_pci_root_init+0x0/0x28
[<ffffffff8126e9f2>] acpi_bus_register_driver+0x43/0x45
[<ffffffff81863981>] acpi_pci_root_init+0x19/0x28
[<ffffffff8100a069>] do_one_initcall+0x5e/0x159
[<ffffffff818366bc>] kernel_init+0x165/0x1bf
[<ffffffff81013d2a>] child_rip+0xa/0x20
[<ffffffff81012f11>] ? int_ret_from_sys_call+0x7/0x1b
[<ffffffff8101369d>] ? retint_restore_args+0x5/0x6
[<ffffffff81013d20>] ? child_rip+0x0/0x20

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users

