
Re: [Xen-devel] PCI Passthrough Problems/Questions



> Nick,
> 
> I think the issue is that 02:00.0 was mapped twice. Could you try with
> the patch below? Then post the Xen log. Please also post the full output
> of 'lspci -v' on your system.
> 

I applied the patch, with one minor change.  The line:

dprintk(XENLOG_ERR VTDPREFIX,
        "context_present: %x:%x.%x:pdev->domain=%d domain=%d\n",
        bus, PCI_SLOT(devfn), PCI_FUNC(devfn), pdev->domain,
        domain->domain_id);

should be:

dprintk(XENLOG_ERR VTDPREFIX,
        "context_present: %x:%x.%x:pdev->domain=%d domain=%d\n",
        bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
        pdev->domain->domain_id, domain->domain_id);

(notice the pdev->domain->domain_id instead of pdev->domain).
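
To spell out why that change is needed (this is a throwaway example, not
Xen code): pdev->domain is a pointer to a struct domain, and handing a
pointer to a %d specifier is undefined behaviour -- in practice it prints
some truncated piece of the pointer value rather than the domain id.
Something along these lines, with stand-in struct definitions:

/* Throwaway illustration of the format-specifier bug; these structs are
 * stand-ins, not the real Xen struct domain / struct pci_dev. */
#include <stdio.h>
#include <stdint.h>

struct domain  { uint16_t domain_id; };
struct pci_dev { struct domain *domain; };

int main(void)
{
    struct domain d0    = { .domain_id = 0 };
    struct pci_dev pdev = { .domain = &d0 };

    /* Broken: %d is given a struct domain *, which is undefined behaviour
     * and typically prints part of the pointer, not the domain id: */
    /* printf("pdev->domain=%d\n", pdev.domain); */

    /* Fixed: dereference and print the id itself. */
    printf("pdev->domain=%d\n", pdev.domain->domain_id);
    return 0;
}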

The domU no longer generates the error about failing to assign the device
to the IOMMU, but now it just crashes silently.  Here is the xm dmesg output:

(XEN) [VT-D]iommu.c:1511: d0:PCI: unmap bdf = 2:0.0
(XEN) [VT-D]iommu.c:1340: bus: 2, devfn: 0[VT-D]iommu.c:1368: d1:PCI: map bdf = 2:0.0
(XEN) [VT-D]iommu.c:1371: domain_conext_mapping_one ret: 0
(XEN) [VT-D]iommu.c:1378: Upstream bridge for 1:0 is 2.
(XEN) [VT-D]iommu.c:1383: d1:PCI: map PCIe2PCI bdf = 1:0.0
(XEN) [VT-D]iommu.c:1394: d1:PCI: map secbus (2) with devfn 0
(XEN) [VT-D]iommu.c:1249: context_present: 2:0.0:pdev->domain=0 domain=1
(XEN) [VT-D]iommu.c:1415: Return value: 0
(XEN) [VT-D]io.c:300: d1: bind: m_gsi=16 g_gsi=36 device=5 intx=0
(XEN) [VT-D]iommu.c:1511: d0:PCI: unmap bdf = 2:0.1
(XEN) [VT-D]iommu.c:1340: bus: 2, devfn: 1[VT-D]iommu.c:1368: d1:PCI: map bdf = 2:0.1
(XEN) [VT-D]iommu.c:1371: domain_conext_mapping_one ret: 0
(XEN) [VT-D]iommu.c:1378: Upstream bridge for 1:0 is 2.
(XEN) [VT-D]iommu.c:1383: d1:PCI: map PCIe2PCI bdf = 1:0.0
(XEN) [VT-D]iommu.c:1394: d1:PCI: map secbus (2) with devfn 0
(XEN) [VT-D]iommu.c:1415: Return value: 0
(XEN) [VT-D]io.c:300: d1: bind: m_gsi=16 g_gsi=40 device=6 intx=0
(XEN) [VT-D]iommu.c:1511: d1:PCI: unmap bdf = 2:0.1
(XEN) [VT-D]iommu.c:1340: bus: 2, devfn: 1[VT-D]iommu.c:1368: d0:PCI: map bdf = 2:0.1
(XEN) [VT-D]iommu.c:1371: domain_conext_mapping_one ret: 0
(XEN) [VT-D]iommu.c:1378: Upstream bridge for 1:0 is 2.
(XEN) [VT-D]iommu.c:1383: d0:PCI: map PCIe2PCI bdf = 1:0.0
(XEN) [VT-D]iommu.c:1394: d0:PCI: map secbus (2) with devfn 0
(XEN) [VT-D]iommu.c:1415: Return value: 0
(XEN) [VT-D]iommu.c:1511: d1:PCI: unmap bdf = 2:0.0
(XEN) [VT-D]iommu.c:1340: bus: 2, devfn: 0[VT-D]iommu.c:1368: d0:PCI: map bdf = 2:0.0
(XEN) [VT-D]iommu.c:1371: domain_conext_mapping_one ret: 0
(XEN) [VT-D]iommu.c:1378: Upstream bridge for 1:0 is 2.
(XEN) [VT-D]iommu.c:1383: d0:PCI: map PCIe2PCI bdf = 1:0.0
(XEN) [VT-D]iommu.c:1394: d0:PCI: map secbus (2) with devfn 0
(XEN) [VT-D]iommu.c:1249: context_present: 2:0.0:pdev->domain=1 domain=0
(XEN) [VT-D]iommu.c:1415: Return value: 0
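
If I'm reading the trace right, each assignment maps three context entries:
the device itself, the upstream PCIe-to-PCI bridge (1:0.0), and then devfn 0
on the secondary bus (bus 2). For function 0 that last step lands on 2:0.0
again, which was mapped two steps earlier -- which looks like the double
mapping you mentioned, and is exactly where the context_present message
shows up. A toy model of that ordering (the table and helper below are made
up for illustration, not the real iommu.c code):

/* Toy model of the mapping order shown in the log above; nothing here is
 * real Xen code -- the table and helper are invented for illustration. */
#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

#define BDF(bus, dev, fn)  (((bus) << 8) | ((dev) << 3) | (fn))

static bool context_present[1 << 16];      /* one flag per bus:dev.fn */

static void map_context_entry(int dom, unsigned bus, unsigned dev, unsigned fn)
{
    uint16_t bdf = BDF(bus, dev, fn);

    if ( context_present[bdf] )
        printf("context_present: %x:%x.%x while mapping into d%d\n",
               bus, dev, fn, dom);
    context_present[bdf] = true;
    printf("d%d: map bdf = %x:%x.%x\n", dom, bus, dev, fn);
}

int main(void)
{
    /* Assigning 02:00.0 to domain 1, in the order the log traces it. */
    map_context_entry(1, 2, 0, 0);   /* the device itself                */
    map_context_entry(1, 1, 0, 0);   /* upstream PCIe2PCI bridge 01:00.0 */
    map_context_entry(1, 2, 0, 0);   /* secbus (2), devfn 0 == 02:00.0   */
    return 0;
}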

I've pasted the relevant xend.log output into this pastebin:

http://pastebin.com/b4bwdBPq

I have some extra dprintk calls that I threw in there, so there may be a
little more output than with the patch you sent.

-Nick






 

