
Re: [Xen-devel] pv-ops domU not working with MSI interrupts on Nehalem


  • To: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
  • From: Bruce Edge <bruce.edge@xxxxxxxxx>
  • Date: Fri, 8 Oct 2010 10:56:53 -0700
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Fri, 08 Oct 2010 10:57:55 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

On Fri, Oct 1, 2010 at 2:11 PM, Konrad Rzeszutek Wilk
<konrad.wilk@xxxxxxxxxx> wrote:
> On Mon, Sep 27, 2010 at 08:52:39AM -0700, Bruce Edge wrote:
>> One of our developers who is working on a tachyon driver is
>> complaining that the pvops domU kernel is not working for these MSI
>> interrupts.
>> This is using the current head of xen/2.6.32.x on both a single
>> Nehalem 920 and a dual E5540. This behavior is consistent with Xen
>> 4.0.1, 4.0.2.rc1-pre and 4.1.
>
>
> I just checked on my SuperMicro X8DTN, this combination
>  - For Dom0, git commit fe999249 (2.6.32.18)
>  - For DomU, devel/xen-pcifront-0.6 or devel/xen-pcifront-0.7
>  - For Hypervisor I used cs 21976, but found that the latest (22155) works too
>

Booting the above combination of xen/dom0/domU logs the following
messages on the dom0 console as soon as domU starts (no custom drivers
loaded yet):

(XEN) tmem: all pools frozen for all domains
(XEN) tmem: all pools thawed for all domains
(XEN) tmem: all pools frozen for all domains
(XEN) tmem: all pools thawed for all domains
mapping kernel into physical memory
about to get started...
(XEN) traps.c:2310:d1 Domain attempted WRMSR 000000000000008b from
0x0000000a00000000 to 0x0000000000000000.
(XEN) traps.c:2310:d1 Domain attempted WRMSR 000000000000008b from
0x0000000a00000000 to 0x0000000000000000.
(XEN) traps.c:2310:d1 Domain attempted WRMSR 000000000000008b from
0x0000000a00000000 to 0x0000000000000000.
(XEN) traps.c:2310:d1 Domain attempted WRMSR 000000000000008b from
0x0000000a00000000 to 0x0000000000000000.
(XEN) traps.c:2310:d1 Domain attempted WRMSR 000000000000008b from
0x0000000a00000000 to 0x0000000000000000.
(XEN) traps.c:2310:d1 Domain attempted WRMSR 000000000000008b from
0x0000000a00000000 to 0x0000000000000000.
[ 1784.608283] ------------[ cut here ]------------
[ 1784.608336] WARNING: at kernel/lockdep.c:2323
trace_hardirqs_on_caller+0x131/0x190()
[ 1784.608418] Hardware name: X8ST3
[ 1784.608445] Modules linked in: xt_physdev ipmi_msghandler ipv6
xenfs xen_gntdev xen_evtchn xen_pciback tun serio_raw joydev bridge
stp llc ioatdma dca usb_storage usbhid hid e1000e
[ 1784.608669] Pid: 11, comm: xenwatch Not tainted
2.6.32.18-pv-ops-stable-debug #1
[ 1784.608725] Call Trace:
[ 1784.608744]  <IRQ>  [<ffffffff81069fbb>] warn_slowpath_common+0x7b/0xc0
[ 1784.608807]  [<ffffffff815de490>] ? _spin_unlock_irq+0x30/0x40
[ 1784.608853]  [<ffffffff8106a014>] warn_slowpath_null+0x14/0x20
[ 1784.608899]  [<ffffffff810a74b1>] trace_hardirqs_on_caller+0x131/0x190
[ 1784.608945]  [<ffffffff810a751d>] trace_hardirqs_on+0xd/0x10
[ 1784.608991]  [<ffffffff815de490>] _spin_unlock_irq+0x30/0x40
[ 1784.609039]  [<ffffffff813b9736>] add_to_net_schedule_list_tail+0x86/0xd0
[ 1784.609085]  [<ffffffff813ba948>] netif_be_int+0x38/0x160
[ 1784.609123]  [<ffffffff810dd750>] handle_IRQ_event+0x50/0x160
[ 1784.609170]  [<ffffffff810e05d9>] handle_level_irq+0x99/0x140
[ 1784.609217]  [<ffffffff813adc09>] __xen_evtchn_do_upcall+0x1b9/0x1f0
[ 1784.609263]  [<ffffffff813ae06d>] xen_evtchn_do_upcall+0x3d/0x60
[ 1784.609311]  [<ffffffff8101537e>] xen_do_hypervisor_callback+0x1e/0x30
[ 1784.609356]  <EOI>  [<ffffffff8100940a>] ? hypercall_page+0x40a/0x1010
[ 1784.609416]  [<ffffffff8100940a>] ? hypercall_page+0x40a/0x1010
[ 1784.609461]  [<ffffffff813b1a13>] ? xb_write+0x103/0x240
[ 1784.609499]  [<ffffffff813b21c0>] ? xs_talkv+0x80/0x1f0
[ 1784.609537]  [<ffffffff813b249b>] ? xs_single+0x4b/0x60
[ 1784.609575]  [<ffffffff813b2b28>] ? xenbus_read+0x48/0x70
[ 1784.609613]  [<ffffffff813bcf6e>] ? frontend_changed+0x47e/0x760
[ 1784.609659]  [<ffffffff813b3e32>] ? xenbus_otherend_changed+0xd2/0x190
[ 1784.609736]  [<ffffffff81010aff>] ? xen_restore_fl_direct_end+0x0/0x1
[ 1784.609782]  [<ffffffff810a991d>] ? lock_release+0xed/0x230
[ 1784.609820]  [<ffffffff813b4540>] ? frontend_changed+0x10/0x20
[ 1784.609866]  [<ffffffff813b1df6>] ? xenwatch_thread+0x56/0x160
[ 1784.609912]  [<ffffffff81090e70>] ? autoremove_wake_function+0x0/0x40
[ 1784.609958]  [<ffffffff813b1da0>] ? xenwatch_thread+0x0/0x160
[ 1784.610004]  [<ffffffff81090b36>] ? kthread+0x96/0xa0
[ 1784.610041]  [<ffffffff8101522a>] ? child_rip+0xa/0x20
[ 1784.610077]  [<ffffffff81014b90>] ? restore_args+0x0/0x30
[ 1784.610114]  [<ffffffff81015220>] ? child_rip+0x0/0x20
[ 1784.610149] ---[ end trace db9e4f4f3b76b033 ]---

-Bruce

> with which I passed in PCI devices with legacy IRQ, MSI, and MSI-X. I 
> tried
> a combination of doing this with IOMMU (VT-d) and without - in both cases 
> these devices:
>
> 00:1d.0 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI 
> Controller #1 (rev 02)
> 00:1d.1 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI 
> Controller #2 (rev 02)
> 00:1d.2 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI 
> Controller #3 (rev 02)
> 00:1d.7 USB Controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI 
> Controller #1 (rev 02)
> 03:00.0 Ethernet controller: Intel Corporation 82572EI Gigabit Ethernet 
> Controller (Copper) (rev 06)
> 0a:00.1 Ethernet controller: Intel Corporation 82575EB Gigabit Network 
> Connection (rev 02)
>
> worked just fine (either defining pci=["..."] or just using pci-attach).
>
> But if I use the latest xen/next or xen/stable-2.6.32.x it does not look
> that happy :-(
>
>
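For anyone following along, the two attach methods Konrad mentions look
roughly like this - a sketch assuming a guest named "domU" and the BDFs
from the lspci listing above (substitute your own domain name and device
addresses):

```shell
# Static passthrough: list the devices in the guest config file,
# e.g. /etc/xen/domU.cfg (dom0 must have hidden them via pciback first):
#
#   pci = [ '03:00.0', '0a:00.1' ]

# Or hot-attach a single device from dom0 after the guest has booted:
xm pci-attach domU 03:00.0
```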

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

