
Re: [Xen-devel] Multi-bridged PCIe devices (Was: Re: iommu/vt-d issues with LSI MegaSAS (PERC5i))



This got me thinking - if the problem is a broken IOMMU implementation,
is the IOMMU _actually_ required for PCI passthrough to HVM
guests if all the memory holes and BARs are made exactly the same
in dom0 and domU? If vBAR=pBAR, then surely there is no memory
range remapping to be done anyway - which means that there
is no need for the strict IOMMU requirements (over and above
the requirements and caveats of PV domUs).
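As context for why vBAR=pBAR removes the remapping step: a BAR is just a
config-space register whose decoded base address tells the device where its
MMIO/IO window lives, so if the guest sees the same raw value as the host,
DMA and MMIO addresses line up without translation. A minimal sketch of
decoding a raw 32-bit BAR dword, per the PCI spec layout (my own
illustration, not code from this thread):

```python
def decode_bar(value: int) -> dict:
    """Decode a raw 32-bit PCI BAR register value.

    Bit 0 distinguishes I/O-space from memory-space BARs.
    For memory BARs: bits 2:1 give the width (00 = 32-bit, 10 = 64-bit),
    bit 3 is the prefetchable flag, and the base is the value with the
    low four flag bits masked off.
    """
    if value & 0x1:  # I/O space BAR: base is value with low 2 bits masked
        return {"space": "io", "base": value & ~0x3}
    bar_type = (value >> 1) & 0x3
    return {
        "space": "mem",
        "base": value & ~0xF,
        "width": 64 if bar_type == 0x2 else 32,
        "prefetchable": bool(value & 0x8),
    }

# Example: a 32-bit, non-prefetchable memory BAR based at 0xFEB00000
print(decode_bar(0xFEB00000))
```

If the emulated config space hands the guest exactly these physical values
(vBAR=pBAR), the guest programs the device with addresses that are already
correct on the host side.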

In turn, this would enable PCI passthrough (incl. secondary VGA,
unless I am very much mistaken) to HVM guests while running xen
with iommu=0. It shifts the design from virtualization to
partitioning, which I see having obvious advantages and no
disadvantages (e.g. VM migration doesn't work with PCI
passthrough anyway).

The reason I am mentioning this is because I'm working on
a vhole=phole+vBAR=pBAR patch set anyway, and this would
be a neat logical extension that would help me work around
yet more problems on what appears to be a fairly common
hardware implementation bug.

Gordan

On Wed, 11 Sep 2013 12:25:18 +0100, Gordan Bobic <gordan@xxxxxxxxxx> wrote:
It looks like I'm definitely not the first person to
hit this problem:


http://www.gossamer-threads.com/lists/xen/users/168557?do=post_view_threaded#168557

No responses or workarounds suggested back then. :(

Gordan

On Wed, 11 Sep 2013 12:05:35 +0100, Gordan Bobic <gordan@xxxxxxxxxx> wrote:
I found this:

http://lists.xen.org/archives/html/xen-devel/2010-06/msg00093.html

while looking for a solution to a similar problem. I am
facing a similar issue with LSI (8408E, 3081E-R) and
Adaptec (31605) SAS cards. Was there ever a proper, more general
fix or workaround for this issue?

These SAS cards exhibit the problem in dom0. When running
a vanilla kernel on bare metal, they work fine without intel_iommu
set. As soon as I set intel_iommu, the same failure occurs (on
bare metal, not dom0).
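For anyone trying to reproduce the bare-metal test above, this corresponds
to toggling the standard VT-d kernel parameter; a sketch of the relevant
boot configuration (exact file and layout vary by distro):

```shell
# Vanilla kernel, bare metal: SAS cards work with no IOMMU parameter.
# Adding the standard VT-d parameter to the kernel command line
# reproduces the failure:
#
#   intel_iommu=on
#
# e.g. in /etc/default/grub (then regenerate grub.cfg):
GRUB_CMDLINE_LINUX="intel_iommu=on"
```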

Clearly there is something badly broken with multiple layers
of bridges when it comes to IOMMU in my setup (Intel 5520 PCIe
root hub -> NF200 bridge -> Intel 80333 bridge -> SAS controller).

I tried iommu=dom0-passthrough and it doesn't appear to have
helped.

I am not seeing similar problems with other PCIe devices that
are also, in theory, doing DMA (e.g. GPUs), but LSI and Adaptec
controllers appear to be affected for some reason.

Is there anything else I could try/do to make this work?

Gordan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

