
Re: [Xen-devel] [BUG] domU kernel crash at igbvf module loading / __msix_mask_irq



On Wed, Nov 27, 2013 at 09:08:58PM +0000, Andrew Cooper wrote:
> On 27/11/2013 20:53, Norbert Marx wrote:
> > Hello,
> >
> > on a Supermicro H8DGU server with the latest BIOS, PCI passthrough fails.
> >
> > I tried to pass two igbvf devices through to the domU. pciback is
> > configured and everything looks good until igbvf tries to initialize
> > the PCI device. I see the same error with Xen 4.3, 4.3.1, and the
> > current 4.4-unstable, with Linux kernels 3.12.0, 3.12.1, and 3.9.
> 
> What about the dom0 kernel?  This looks suspiciously like the new memory
> protection for MSI-X config tables.  With this model, pciback marks the
> MSI-X table region as read-only after it has set it up appropriately, with
> the knowledge that pcifront should indirect all requests.

The fix for Linux to work with this model is commit 0e4ccb1505a9e29c50170742ce26ac4655baab2d:
Author: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Date:   Wed Nov 6 16:16:56 2013 -0500

    PCI: Add x86_msi.msi_mask_irq() and msix_mask_irq()

> 
> What about Xen 4.2?

And with that fix, Linux should work with that version of Xen as well.
> 
> ~Andrew
> 
> >
> > DomU config:
> > kernel = "/boot/gentoo-DomU"
> > memory = 2048
> > name   = "domU"
> > nic    = 2
> > vcpus  = 1
> > #pci    = ['02:10.0,msitranslate=1,permissive=1'] <= same crash
> > #pci    = ['02:10.0,permissive=1'] <= same crash
> > pci    = ['02:10.0', '02:10.1']
> > vif    = ['bridge=xenbr1','bridge=xenbr2']
> > disk   = ['phy:/dev/loop4,xvda1,w', 'phy:/dev/loop5,xvda2,w', 'phy:/dev/loop6,xvda3,w']
> > root   = "/dev/xvda1 ro rootfstype=ext4 iommu=soft xen-pcifront.verbose_request=1"
> >
> > DomU crash log:
> >
> > [   71.124852] pcifront pci-0: write dev=0000:00:00.0 - offset 72 size 2 val c002
> > [   71.124888] BUG: unable to handle kernel paging request at ffffc9000015400c
> > [   71.124900] IP: [<ffffffff8121ea05>] __msix_mask_irq+0x21/0x24
> > [   71.124911] PGD 784a0067 PUD 784a1067 PMD 784a2067 PTE 8010000000000464
> > [   71.124919] Oops: 0003 [#1] SMP
> > [   71.124923] Modules linked in: igbvf(+)
> > [   71.124929] CPU: 0 PID: 2114 Comm: insmod Not tainted 3.12.1-gentoo-DomU #6
> > [   71.124934] task: ffff8800784c3080 ti: ffff880077324000 task.ti: ffff880077324000
> > [   71.124939] RIP: e030:[<ffffffff8121ea05>]  [<ffffffff8121ea05>] __msix_mask_irq+0x21/0x24
> > [   71.124947] RSP: e02b:ffff880077325bb0  EFLAGS: 00010286
> > [   71.124951] RAX: 0000000000000001 RBX: ffff880078741000 RCX: 0000000000000001
> > [   71.124957] RDX: ffffc9000015400c RSI: 0000000000000001 RDI: ffff880077770180
> > [   71.124961] RBP: ffff880077770180 R08: 0000000000000200 R09: ffff88007873fc00
> > [   71.124967] R10: 0000000000000000 R11: ffff88007873fc00 R12: 0000000000000000
> > [   71.124972] R13: ffff8800771138a0 R14: 0000000000000000 R15: ffffc9000015400c
> > [   71.124980] FS:  00007fb8d2c37700(0000) GS:ffff88007f200000(0000) knlGS:0000000000000000
> > [   71.124985] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
> > [   71.124989] CR2: ffffc9000015400c CR3: 000000007712e000 CR4: 0000000000040660
> > [   71.124994] Stack:
> > [   71.124996]  ffffffff8121f9dd ffff88007707c000 ffff880078741840 0000000000000002
> > [   71.125004]  ffff880078741840 ffffffffa0005140 00000000c0020010 ffff880078741000
> > [   71.125011]  ffff880078741000 0000000000000000 ffff880078741098 ffff88007707c7c0
> > [   71.125018] Call Trace:
> > [   71.125023]  [<ffffffff8121f9dd>] ? pci_enable_msix+0x27d/0x353
> > [   71.125032]  [<ffffffffa00017b6>] ? igbvf_probe+0x323/0x8d9 [igbvf]
> > [   71.125039]  [<ffffffff8141fe9d>] ? _raw_spin_unlock_irqrestore+0x42/0x5b
> > [   71.125047]  [<ffffffff8121300d>] ? pci_device_probe+0x60/0x9d
> > [   71.125056]  [<ffffffff812af74d>] ? driver_probe_device+0x1b3/0x1b3
> > [   71.125060]  [<ffffffff812af62c>] ? driver_probe_device+0x92/0x1b3
> > [   71.125060]  [<ffffffff812af7a0>] ? __driver_attach+0x53/0x73
> > [   71.125060]  [<ffffffff812add94>] ? bus_for_each_dev+0x4e/0x7f
> > [   71.125060]  [<ffffffff812aedf2>] ? bus_add_driver+0xe5/0x22d
> > [   71.125060]  [<ffffffff812afcfa>] ? driver_register+0x82/0xb5
> > [   71.125060]  [<ffffffffa0008000>] ? 0xffffffffa0007fff
> > [   71.125060]  [<ffffffff81002092>] ? do_one_initcall+0x78/0x102
> > [   71.125060]  [<ffffffff810db633>] ? free_hot_cold_page+0x100/0x109
> > [   71.125060]  [<ffffffff811083b3>] ? kfree+0xb6/0xc8
> > [   71.125060]  [<ffffffff810fd8f4>] ? __vunmap+0x8c/0xc4
> > [   71.125060]  [<ffffffff810ad804>] ? load_module+0x18d3/0x1b9a
> > [   71.125060]  [<ffffffff810ab117>] ? mod_kobject_put+0x42/0x42
> > [   71.125060]  [<ffffffff81118716>] ? vfs_read+0xf7/0x13e
> > [   71.125060]  [<ffffffff810adbad>] ? SyS_finit_module+0x4e/0x62
> > [   71.125060]  [<ffffffff81420c8f>] ? tracesys+0xe1/0xe6
> > [   71.125060] Code: 83 c4 18 5b 5d 41 5c 41 5d c3 8b 47 08 0f b7 57 02 83 e0 fe c1 e2 04 89 c1 83 c9 01 83 c2 0c 85 f6 0f 45 c1 48 63 d2 48 03 57 28 <89> 02 c3 48 8b 46 10 48 83 ef 48 48 85 c0 74 02 ff e0 48 c7 c0
> > [   71.125060] RIP  [<ffffffff8121ea05>] __msix_mask_irq+0x21/0x24
> > [   71.125060]  RSP <ffff880077325bb0>
> > [   71.125060] CR2: ffffc9000015400c
> > [   71.125060] ---[ end trace 66e59b16e50eead2 ]---
> >
> > I also tried the patch from
> > http://lists.xen.org/archives/html/xen-devel/2013-11/msg03752.html
> > ("[Xen-devel] [PATCH v6] x86: properly handle MSI-X unmask operation
> > from guests"), but without success.
> >
> > I have attached more logs. Any suggestions for fixing this issue?
> >
> > Kind regards,
> > Norbert
> >
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@xxxxxxxxxxxxx
> > http://lists.xen.org/xen-devel
> 
