 [Xen-devel] Fw: Re: [fedora-virt] Re: [Fedora-xen] Dom0 kernels
 
To: xen-devel@xxxxxxxxxxxxxxxxxxx
From: Boris Derzhavets <bderzhavets@xxxxxxxxx>
Date: Tue, 25 Aug 2009 23:53:58 -0700 (PDT)
Delivery-date: Tue, 25 Aug 2009 23:54:38 -0700
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
CC fixed.
 Boris
 
 --- On Wed, 8/26/09, Boris Derzhavets <bderzhavets@xxxxxxxxx> wrote:
 
 From: Boris Derzhavets <bderzhavets@xxxxxxxxx>
 Subject: Re: [fedora-virt] Re: [Fedora-xen] Dom0 kernels
 To: fedora-xen@xxxxxxxxxx, fedora-virt@xxxxxxxxxx, "M A Young" <m.a.young@xxxxxxxxxxxx>
 Cc: xen-devel@xxxxxxxxxxxxx
 Date: Wednesday, August 26, 2009, 2:50 AM
 
 
With the upgraded rpm I can load Dom0. However, the dmesg report contains:
 Initializing cgroup subsys cpuset
 Initializing cgroup subsys cpu
 Linux version 2.6.31-0.1.2.58.rc7.git1.xendom0.fc11.x86_64 (root@ServerXenSRC) (gcc version 4.4.0 20090506 (Red Hat 4.4.0-4) (GCC) ) #1 SMP Wed Aug 26 08:14:58 MSD 2009
 Command line: root=/dev/mapper/vg_serverxensrc-LogVol00  ro console=tty0
 KERNEL supported cpus:
 Intel GenuineIntel
 AMD AuthenticAMD
 Centaur CentaurHauls
 BIOS-provided physical RAM map:
 Xen: 0000000000000000 - 000000000009ec00 (usable)
 Xen: 000000000009ec00 - 0000000000100000 (reserved)
 Xen: 0000000000100000 - 00000000cff80000 (usable)
 Xen: 00000000cff80000 - 00000000cff8e000 (ACPI data)
 Xen: 00000000cff8e000 - 00000000cffe0000 (ACPI NVS)
Xen: 00000000cffe0000 - 00000000d0000000 (reserved)
 Xen: 00000000fee00000 - 00000000fee01000 (reserved)
 Xen: 00000000ffe00000 - 0000000100000000 (reserved)
 Xen: 0000000100000000 - 00000001f1a6b000 (usable)
 DMI 2.4 present.
 AMI BIOS detected: BIOS may corrupt low RAM, working around it.
 e820 update range: 0000000000000000 - 0000000000010000 (usable) ==> (reserved)
 last_pfn = 0x1f1a6b max_arch_pfn = 0x400000000
 last_pfn = 0xcff80 max_arch_pfn = 0x400000000
 initial memory mapped : 0 - 20000000
 init_memory_mapping: 0000000000000000-00000000cff80000
 0000000000 - 00cff80000 page 4k
 kernel direct mapping tables up to cff80000 @ 100000-785000
 init_memory_mapping: 0000000100000000-00000001f1a6b000
 0100000000 - 01f1a6b000 page 4k
 
 . . . . . . .
 
 ======================================================
[ INFO: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected ]
 2.6.31-0.1.2.58.rc7.git1.xendom0.fc11.x86_64 #1
 ------------------------------------------------------
 khubd/28 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
 (&retval->lock){......}, at: [<ffffffff8112a240>] dma_pool_alloc+0x45/0x321
 
 and this task is already holding:
 (&ehci->lock){-.....}, at: [<ffffffff813d4654>] ehci_urb_enqueue+0xb4/0xd5c
 which would create a new lock dependency:
 (&ehci->lock){-.....} -> (&retval->lock){......}
 
 but this new dependency connects a HARDIRQ-irq-safe lock:
 (&ehci->lock){-.....}
 ... which became HARDIRQ-irq-safe at:
 [<ffffffff8109908b>] __lock_acquire+0x254/0xc0e
 [<ffffffff81099b33>] lock_acquire+0xee/0x12e
 [<ffffffff8150b987>] _spin_lock+0x45/0x8e
 [<ffffffff813d325c>] ehci_irq+0x41/0x441
[<ffffffff813b7e5f>] usb_hcd_irq+0x59/0xcc
 [<ffffffff810ca810>] handle_IRQ_event+0x62/0x148
 [<ffffffff810ccda3>] handle_level_irq+0x90/0xf9
 [<ffffffff81017078>] handle_irq+0x9a/0xba
 [<ffffffff8130a0c6>] xen_evtchn_do_upcall+0x10c/0x1bd
 [<ffffffff8101527e>] xen_do_hypervisor_callback+0x1e/0x30
 [<ffffffffffffffff>] 0xffffffffffffffff
 
 to a HARDIRQ-irq-unsafe lock:
 (purge_lock){+.+...}
 ... which became HARDIRQ-irq-unsafe at:
 ...  [<ffffffff810990ff>] __lock_acquire+0x2c8/0xc0e
 [<ffffffff81099b33>] lock_acquire+0xee/0x12e
 [<ffffffff8150b987>] _spin_lock+0x45/0x8e
 [<ffffffff811233a4>] __purge_vmap_area_lazy+0x63/0x198
 [<ffffffff81124c74>] vm_unmap_aliases+0x18f/0x1b2
 [<ffffffff8100eeb3>] xen_alloc_ptpage+0x5a/0xa0
[<ffffffff8100ef97>] xen_alloc_pte+0x26/0x3c
 [<ffffffff81118381>] __pte_alloc_kernel+0x6f/0xdd
 [<ffffffff811241a1>] vmap_page_range_noflush+0x1c5/0x315
 [<ffffffff81124332>] map_vm_area+0x41/0x6b
 [<ffffffff8112448b>] __vmalloc_area_node+0x12f/0x167
 [<ffffffff81124553>] __vmalloc_node+0x90/0xb5
 [<ffffffff811243c8>] __vmalloc_area_node+0x6c/0x167
 [<ffffffff81124553>] __vmalloc_node+0x90/0xb5
 [<ffffffff811247ca>] __vmalloc+0x28/0x3e
 [<ffffffff818504c9>] alloc_large_system_hash+0x12f/0x1fb
 [<ffffffff81852b4e>] vfs_caches_init+0xb8/0x140
 [<ffffffff8182b061>] start_kernel+0x3ef/0x44c
 [<ffffffff8182a2d0>] x86_64_start_reservations+0xbb/0xd6
 [<ffffffff8182e6d1>] xen_start_kernel+0x5ab/0x5b2
 [<ffffffffffffffff>] 0xffffffffffffffff
 
other info that might help us debug this:
 
 2 locks held by khubd/28:
 #0:  (usb_address0_mutex){+.+...}, at: [<ffffffff813b2e82>] hub_port_init+0x8c/0x7ee
 #1:  (&ehci->lock){-.....}, at: [<ffffffff813d4654>] ehci_urb_enqueue+0xb4/0xd5c
 
 the HARDIRQ-irq-safe lock's dependencies:
 -> (&ehci->lock){-.....} ops: 0 {
 IN-HARDIRQ-W at:
 [<ffffffff8109908b>] __lock_acquire+0x254/0xc0e
 [<ffffffff81099b33>] lock_acquire+0xee/0x12e
[<ffffffff8150b987>] _spin_lock+0x45/0x8e
 [<ffffffff813d325c>] ehci_irq+0x41/0x441
 [<ffffffff813b7e5f>] usb_hcd_irq+0x59/0xcc
 [<ffffffff810ca810>] handle_IRQ_event+0x62/0x148
 [<ffffffff810ccda3>] handle_level_irq+0x90/0xf9
 
 .  .  .  .  .  .
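
For anyone not familiar with this class of lockdep report: &ehci->lock is taken inside the EHCI interrupt handler (ehci_irq), so lockdep treats it as HARDIRQ-safe, while purge_lock is taken with interrupts enabled (here via the vm_unmap_aliases path on the Xen pvops side) and is therefore HARDIRQ-unsafe. Acquiring the dma_pool lock under ehci->lock links the safe lock to the unsafe one, which can deadlock if the EHCI interrupt fires on a CPU that already holds the unsafe lock. Below is a minimal, self-contained sketch of the pattern being flagged, not the actual driver code; the names demo_irq, irq_safe_lock and irq_unsafe_lock are purely illustrative.

/*
 * Hypothetical illustration of the reported lock ordering:
 *   irq_safe_lock   ~ &ehci->lock  (taken in hard-IRQ context)
 *   irq_unsafe_lock ~ purge_lock   (taken with interrupts enabled)
 */
#include <linux/spinlock.h>
#include <linux/interrupt.h>

static DEFINE_SPINLOCK(irq_safe_lock);    /* acquired in the IRQ handler      */
static DEFINE_SPINLOCK(irq_unsafe_lock);  /* acquired with interrupts enabled */

/* IRQ handler: this is what makes irq_safe_lock HARDIRQ-safe for lockdep. */
static irqreturn_t demo_irq(int irq, void *dev_id)
{
	spin_lock(&irq_safe_lock);
	/* ... service the device ... */
	spin_unlock(&irq_safe_lock);
	return IRQ_HANDLED;
}

/* Process context: takes irq_unsafe_lock without disabling interrupts. */
static void demo_lazy_purge(void)
{
	spin_lock(&irq_unsafe_lock);          /* HARDIRQ-unsafe */
	/* ... purge lazily unmapped areas ... */
	spin_unlock(&irq_unsafe_lock);
}

/*
 * Process context: holding the IRQ-safe lock and then reaching code that
 * takes the IRQ-unsafe one creates the dependency
 * irq_safe_lock -> irq_unsafe_lock.  If another CPU holds irq_unsafe_lock
 * with interrupts enabled when demo_irq() fires and spins on
 * irq_safe_lock, the two CPUs can deadlock.
 */
static void demo_enqueue(void)
{
	unsigned long flags;

	spin_lock_irqsave(&irq_safe_lock, flags);
	demo_lazy_purge();                     /* nests the unsafe lock */
	spin_unlock_irqrestore(&irq_safe_lock, flags);
}
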
 
Jeremy's current version is rc6. I believe this issue has already been noticed on xen-devel.
 Boris
 
 
 
 
-----Inline Attachment Follows-----
 
 
 
Attachment: dmesg.log.gz (GNU Zip compressed data)
 _______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
 