
Re: [Xen-devel] Multi-bridged PCIe devices (Was: Re: iommuu/vt-d issues with LSI MegaSAS (PERC5i))



On Wed, 11 Sep 2013 12:53:09 +0100, "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
> On 11.09.13 at 13:05, Gordan Bobic <gordan@xxxxxxxxxx> wrote:
>> I found this:
>>
>>   http://lists.xen.org/archives/html/xen-devel/2010-06/msg00093.html
>>
>> while looking for a solution to a similar problem. I am
>> facing a similar issue with LSI (8408E, 3081E-R) and
>> Adaptec (31605) SAS cards. Was there ever a proper, more general
>> fix or workaround for this issue?
>>
>> These SAS cards experience these problems in dom0. When running
>> a vanilla kernel on bare metal, they work fine without intel_iommu
>> set. As soon as I set intel_iommu=on, the same thing happens (on
>> bare metal, not in dom0).
>>
>> Clearly something is badly broken with multiple layers
>> of bridges when it comes to the IOMMU in my setup (Intel 5520 PCIe
>> root hub -> NF200 bridge -> Intel 80333 bridge -> SAS controller).

> The link above has some (hackish) workarounds - did you try
> them?

Not yet. The thing that bothers me is that the workaround
involves hard-coding the PCI device ID, which is _nasty_
and unstable.

> The link above, however, doesn't indicate any relationship to
> multiple bridges being in between, so it may not match what
> you're observing.

The impression I got was that the "invisible" devices the
previous thread was referring to were bridges on the SAS
card. But I may have misunderstood.

> In any event, seeing a hypervisor log with "iommu=debug" might
> shed further light on this: For one, we might be able to see which
> exact devices are present in the ACPI tables. And we would see
> which device(s) eventual faults originate from.

What also bothers me is that this happens in dom0 even with
iommu=dom0-passthrough set, and
iommu=dom0-passthrough,workaround_bios_bug doesn't help
either.
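(For anyone wanting to reproduce this: on a Debian-style GRUB2 setup
the options go on the hypervisor line, not the dom0 kernel line --
a sketch, file paths and variable names vary by distro:

  # /etc/default/grub
  GRUB_CMDLINE_XEN_DEFAULT="iommu=debug,dom0-passthrough,workaround_bios_bug"

  # regenerate the config and reboot:
  update-grub    # or: grub2-mkconfig -o /boot/grub2/grub.cfg

The iommu= sub-options are comma-separated, as above; iommu=debug is
what makes the hypervisor considerably more talkative about VT-d.)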

And lo and behold, I do have phantom PCI devices after all:
lspci shows no device at 0000:0f:01.0.
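Both of the usual views come up empty (plain lspci and sysfs, nothing
exotic):

  lspci -s 0f:01.0                          # prints nothing
  ls /sys/bus/pci/devices/ | grep 0f:01.0   # absent from sysfs too

Yet every fault below names exactly that BDF as the requester: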


(XEN) [VT-D]iommu.c:887: iommu_fault_status: Primary Pending Fault
(XEN) [VT-D]iommu.c:865: DMAR:[DMA Write] Request device [0000:0f:01.0] fault addr 857f15000, iommu reg = ffff82c3ffd54000
(XEN) DMAR:[fault reason 02h] Present bit in context entry is clear
(XEN) print_vtd_entries: iommu ffff83043fff5600 dev 0000:0f:01.0 gmfn 857f15
(XEN)     root_entry = ffff83043ffe5000
(XEN)     root_entry[f] = e6f7001
(XEN)     context = ffff83000e6f7000
(XEN)     context[8] = 0_0
(XEN)     ctxt_entry[8] not present
(XEN) [VT-D]iommu.c:887: iommu_fault_status: Primary Pending Fault
(XEN) [VT-D]iommu.c:865: DMAR:[DMA Write] Request device [0000:0f:01.0] fault addr 858a35000, iommu reg = ffff82c3ffd54000
(XEN) DMAR:[fault reason 02h] Present bit in context entry is clear
(XEN) print_vtd_entries: iommu ffff83043fff5600 dev 0000:0f:01.0 gmfn 858a35
(XEN)     root_entry = ffff83043ffe5000
(XEN)     root_entry[f] = e6f7001
(XEN)     context = ffff83000e6f7000
(XEN)     context[8] = 0_0
(XEN)     ctxt_entry[8] not present
(XEN) [VT-D]iommu.c:887: iommu_fault_status: Primary Pending Fault
(XEN) [VT-D]iommu.c:865: DMAR:[DMA Write] Request device [0000:0f:01.0] fault addr 471df8000, iommu reg = ffff82c3ffd54000
(XEN) DMAR:[fault reason 02h] Present bit in context entry is clear
(XEN) print_vtd_entries: iommu ffff83043fff5600 dev 0000:0f:01.0 gmfn 471df8
(XEN)     root_entry = ffff83043ffe5000
(XEN)     root_entry[f] = e6f7001
(XEN)     context = ffff83000e6f7000
(XEN)     context[8] = 0_0
(XEN)     ctxt_entry[8] not present
(XEN) [VT-D]iommu.c:887: iommu_fault_status: Primary Pending Fault
(XEN) [VT-D]iommu.c:865: DMAR:[DMA Write] Request device [0000:0f:01.0] fault addr 46fc22000, iommu reg = ffff82c3ffd54000
(XEN) DMAR:[fault reason 02h] Present bit in context entry is clear
(XEN) print_vtd_entries: iommu ffff83043fff5600 dev 0000:0f:01.0 gmfn 46fc22
(XEN)     root_entry = ffff83043ffe5000
(XEN)     root_entry[f] = e6f7001
(XEN)     context = ffff83000e6f7000
(XEN)     context[8] = 0_0
(XEN)     ctxt_entry[8] not present
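For what it's worth, the indices in the dump are self-consistent with
the phantom BDF: root_entry[f] is bus 0x0f, and the context table is
indexed by devfn, so device 01, function 0 lands in slot 8 -- hence
"ctxt_entry[8] not present". The arithmetic, in case anyone wants to
check along (standard PCI devfn packing):

  # devfn = (device << 3) | function; 0f:01.0 -> slot 8
  $ printf 'context index = %d\n' $(( (0x01 << 3) | 0x0 ))
  context index = 8

So the fault handler is looking up exactly the function that lspci
says does not exist.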

Gordan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

