
[Xen-devel] HVM + IGD Graphics + 4GB RAM = Soft Lockup



I'm having an issue passing through an Intel on-board graphics
adapter.  This is on a Dell OptiPlex 780 with 8GB of RAM.  The
pass-through works perfectly fine if I have 2GB of RAM assigned to the
HVM domU.  If I try to assign 3GB or 4GB of RAM, I get the following on
the console:

[   41.222073] br0: port 2(vif1.0) entering forwarding state
[   41.269854] (cdrom_add_media_watch()
file=/usr/src/packages/BUILD/kernel-xen-2.6.31.14/linux-2.6.31/drivers/xen/blkback/cdrom.c,
 line=108) nodename:backend/vbd/1/768
[   41.269864] (cdrom_is_type()
file=/usr/src/packages/BUILD/kernel-xen-2.6.31.14/linux-2.6.31/drivers/xen/blkback/cdrom.c,
 line=95) type:0
[  244.340384] INFO: task qemu-dm:3210 blocked for more than 120
seconds.
[  244.340394] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[  244.340398] qemu-dm       D 00000000fe0c55d4     0  3210   3119
0x00000000
[  244.340403]  ffff8800e09f3968 0000000000000282 ffff8800e09f3898
ffff8800e09f38e8
[  244.340408]  ffff8800e09f38b8 ffff8800e09f3930 000000000000a380
ffff8801e5f74728
[  244.340412]  000000000000a380 000000000000a380 000000000000a380
0000000000007d00
[  244.340417] Call Trace:
[  244.340432]  [<ffffffff8046d720>] io_schedule+0x70/0xd0
[  244.340438]  [<ffffffff8014958e>] sync_buffer+0x4e/0x70
[  244.340443]  [<ffffffff8046ddd7>] __wait_on_bit+0x67/0xb0
[  244.340448]  [<ffffffff8046dea2>] out_of_line_wait_on_bit+0x82/0xb0
[  244.340452]  [<ffffffff80149522>] __wait_on_buffer+0x32/0x50
[  244.340457]  [<ffffffff801a0c27>] __ext3_get_inode_loc+0x2c7/0x350
[  244.340462]  [<ffffffff801a0d35>] ext3_iget+0x85/0x410
[  244.340467]  [<ffffffff801a7d98>] ext3_lookup+0xc8/0x150
[  244.340472]  [<ffffffff801248c2>] real_lookup+0x102/0x180
[  244.340477]  [<ffffffff80126d80>] do_lookup+0xd0/0x100
[  244.340482]  [<ffffffff80127c58>] __link_path_walk+0x7f8/0xf40
[  244.340486]  [<ffffffff801285b6>] path_walk+0x66/0xd0
[  244.340490]  [<ffffffff8012878b>] do_path_lookup+0x6b/0xb0
[  244.340494]  [<ffffffff80129a5d>] do_filp_open+0x10d/0xb30
[  244.340499]  [<ffffffff80116983>] do_sys_open+0x73/0x150
[  244.340503]  [<ffffffff80116ace>] sys_open+0x2e/0x50
[  244.340508]  [<ffffffff8000c8c8>] system_call_fastpath+0x16/0x1b
[  244.340523]  [<00007ffee4ff2267>] 0x7ffee4ff2267
[  244.344364] BUG: soft lockup - CPU#0 stuck for 192s! [xend:3119]
[  244.344364] Modules linked in: usbbk gntdev netbk blkbk
blkback_pagemap blktap edd af_packet bridge stp llc microcode fuse loop
ppdev i2c_i801 parport_pc iTCO_wdt intel_agp e1000e wmi heci(C) i2c_core
dcdbas serio_raw pcspkr sg parport iTCO_vendor_support agpgart 8250_pci
8250_pnp button 8250 serial_core linear dm_snapshot dm_mod xenblk cdrom
xennet fan processor ide_pci_generic ide_core thermal thermal_sys hwmon
ata_generic pciback xenbus_be
[  244.344364] CPU 0:
[  244.344364] Modules linked in: usbbk gntdev netbk blkbk
blkback_pagemap blktap edd af_packet bridge stp llc microcode fuse loop
ppdev i2c_i801 parport_pc iTCO_wdt intel_agp e1000e wmi heci(C) i2c_core
dcdbas serio_raw pcspkr sg parport iTCO_vendor_support agpgart 8250_pci
8250_pnp button 8250 serial_core linear dm_snapshot dm_mod xenblk cdrom
xennet fan processor ide_pci_generic ide_core thermal thermal_sys hwmon
ata_generic pciback xenbus_be
[  244.344364] Pid: 3119, comm: xend Tainted: G         C
2.6.31.14-0.4-xen #1 OptiPlex 780                 
[  244.344364] RIP: e030:[<ffffffff8000848a>]  [<ffffffff8000848a>]
0xffffffff8000848a
[  244.344364] RSP: e02b:ffff8801e50d9d40  EFLAGS: 00000282
[  244.344364] RAX: 0000000000000000 RBX: ffffffff80008480 RCX:
ffffffff8000848a
[  244.344364] RDX: 00007f5e3f2851d1 RSI: 0000000000a4b738 RDI:
000000000123e000
[  244.344364] RBP: ffff8801e50d9e28 R08: 00007f5e41d34318 R09:
000000000127c5e2
[  244.344364] R10: 000000000130b070 R11: 0000000000000282 R12:
ffff8801e52819c0
[  244.344364] R13: 0000000000305000 R14: 0000000000000000 R15:
ffffffff80311e30
[  244.344364] FS:  00007f5e33fff910(0000) GS:ffffc90000000000(0000)
knlGS:0000000000000000
[  244.344364] CS:  e033 DS: 0000 ES: 0000 CR0: 0000000080050033
[  244.344364] CR2: 0000000000b4a2c8 CR3: 00000001e04c0000 CR4:
0000000000002660
[  244.344364] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
0000000000000000
[  244.344364] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7:
0000000000000400
[  244.344364] Call Trace:
[  244.344364]  [<ffffffff80311f43>] privcmd_ioctl+0x113/0x740
[  244.344364]  [<ffffffff80180f52>] proc_reg_unlocked_ioctl+0x92/0x160
[  244.344364]  [<ffffffff8012b9c0>] vfs_ioctl+0x30/0xd0
[  244.344364]  [<ffffffff8012bba0>] do_vfs_ioctl+0x90/0x430
[  244.344364]  [<ffffffff8012bfd9>] sys_ioctl+0x99/0xb0
[  244.344364]  [<ffffffff8000c8c8>] system_call_fastpath+0x16/0x1b
[  244.344364]  [<00007f5e40d87547>] 0x7f5e40d87547
[  260.150678] tun: Universal TUN/TAP device driver, 1.6
[  260.150683] tun: (C) 1999-2004 Max Krasnyansky <maxk@xxxxxxxxxxxx>
[  260.177987] device tap1.0 entered promiscuous mode
[  260.178016] br0: port 3(tap1.0) entering forwarding state
[  260.640363] pciback: vpci: 0000:00:02.0: assign to virtual slot 0
[  260.641060] pciback: vpci: 0000:00:02.1: assign to virtual slot 0
func 1
[  260.641563] pciback: vpci: 0000:00:1a.0: assign to virtual slot 1
[  260.642101] pciback: vpci: 0000:00:1a.1: assign to virtual slot 1
func 1
[  260.642680] pciback: vpci: 0000:00:1a.2: assign to virtual slot 1
func 2
[  260.643403] pciback: vpci: 0000:00:1a.7: assign to virtual slot 1
func 7
[  260.644122] pciback: vpci: 0000:00:1b.0: assign to virtual slot 2
[  422.413643] br0: port 3(tap1.0) entering disabled state
[  422.484938] device tap1.0 left promiscuous mode
[  422.484947] br0: port 3(tap1.0) entering disabled state
[  422.661714] br0: port 2(vif1.0) entering disabled state
[  422.677042] br0: port 2(vif1.0) entering disabled state

I'm running openSUSE 11.3 as my dom0 on Xen 4.0.1 (4.0.1_01-79.2,
changeset 21326).  The kernel is 2.6.31.14-0.4-xen, not the pvops
kernel.  Sometimes the domU starts correctly; more often, it does not.
When it does start correctly, I then get the following messages
repeatedly in xm dmesg:

(XEN) [VT-D]iommu.c:845: iommu_fault_status: Fault Overflow
(XEN) [VT-D]iommu.c:848: iommu_fault_status: Primary Pending Fault
(XEN) [VT-D]iommu.c:823: DMAR:[DMA Write] Request device [00:02.0] fault
addr bf4aa000, iommu reg = ffff82c3fff56000
(XEN) DMAR:[fault reason 05h] PTE Write access is not set
(XEN) print_vtd_entries: iommu = ffff830237cf88b0 bdf = 0:2.0 gmfn =
bf4aa
(XEN)     root_entry = ffff830237ce4000
(XEN)     root_entry[0] = 6f33001
(XEN)     context = ffff830006f33000
(XEN)     context[10] = 101_2259ec001
(XEN)     l3 = ffff8302259ec000
(XEN)     l3_index = 2
(XEN)     l3[2] = 2259e9003
(XEN)     l2 = ffff8302259e9000
(XEN)     l2_index = 1fa
(XEN)     l2[1fa] = 2259dc003
(XEN)     l1 = ffff8302259dc000
(XEN)     l1_index = aa
(XEN)     l1[aa] = 0
(XEN)     l1[aa] not present


The qemu-dm log file for the HVM domU is attached.  Any hints on what
may be going on would be appreciated.  I'm happy to provide more
detailed debug information, if necessary.
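
In case it helps, the domU config boils down to roughly the following
(the name, disk and device-model paths are placeholders I've
substituted; only the memory value, the bridge and the pci list
correspond to the setup described above):

kernel       = "/usr/lib/xen/boot/hvmloader"
builder      = "hvm"
name         = "igd-test"                          # placeholder
memory       = 4096        # fine at 2048, soft lockup at 3072/4096
vif          = [ "bridge=br0" ]
disk         = [ "phy:/dev/vg0/igd-test,hda,w" ]   # placeholder
device_model = "/usr/lib64/xen/bin/qemu-dm"
# IGD (00:02.x) plus the USB controllers and HD audio, all bound to
# pciback in dom0, as in the kernel log above
pci          = [ "00:02.0", "00:02.1", "00:1a.0", "00:1a.1",
                 "00:1a.2", "00:1a.7", "00:1b.0" ]
gfx_passthru = 1           # may or may not be relevant to the lockup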

Thanks,
Nick





Attachment: qemu-dm-L7.log
Description: Text document

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

