
Re: [Xen-devel] Multi-bridged PCIe devices (Was: Re: iommuu/vt-d issues with LSI MegaSAS (PERC5i))

On Wed, Sep 11, 2013 at 01:19:44PM +0100, Gordan Bobic wrote:
> On Wed, 11 Sep 2013 12:57:10 +0100, "Jan Beulich"
> <JBeulich@xxxxxxxx> wrote:
> >>>>On 11.09.13 at 13:44, Gordan Bobic <gordan@xxxxxxxxxx> wrote:
> >>This got me thinking - if the problem is broken IOMMU
> >>implementation,
> >> is the IOMMU _actually_ required for PCI passthrough to HVM
> >> guests if all the memory holes and BARs are made exactly the same
> >> in dom0 and domU? If vBAR=pBAR, then surely there is no memory
> >> range remapping to be done anyway - which means that there
> >> is no need for the strict IOMMU requirements (over and above
> >> the requirements and caveats of PV domUs).
> >
> >But with this you ignore the need to handle device bus mastering
> >activities. In order to work without IOMMU, the guest's memory
> >addresses would also require guest-physical = machine-physical.
> Hmm... that would be harder to achieve, mainly due to legacy
> stuff like base memory. But if (fingers crossed) DMA doesn't
> occur below 1MB, maybe the map can be bodged to emulate the
> addresses below 1MB, carve out as small a chunk of 32-bit
> memory as we can get away with for each guest, and map the
> rest with vmem=pmem in 64-bit memory range. But you are
> right - that gets way more complicated than I originally
> envisaged - all for the sake of supporting legacy
> and Steam boot-loader systems. :(
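[A rough, purely illustrative sketch of the hybrid layout described above, assuming Python and made-up names (`build_guest_map`, the 64 MB window size) that appear nowhere in this thread: emulate the legacy region below 1 MB, carve out a small 32-bit window, and identity-map ("vmem = pmem") the rest, so DMA addresses the guest programs into a device reach the right machine pages without an IOMMU.]

```python
# Hypothetical sketch, NOT Xen code: one way the proposed guest memory
# layout could be expressed. Region names and sizes are assumptions.

MB = 1 << 20
GB = 1 << 30

def build_guest_map(machine_chunk_base, machine_chunk_size, low32_window=64 * MB):
    """Return (name, guest_start, size, mapping_kind) tuples.

    machine_chunk_base/size describe the 64-bit machine-memory chunk
    reserved for this guest; "identity" regions have guest == machine
    address, which is what bus-mastering DMA needs without an IOMMU.
    """
    return [
        # Legacy base memory: emulated, hoping no device DMAs below 1 MB.
        ("low-1MB (emulated)", 0, 1 * MB, "emulated"),
        # Small carve-out for devices limited to 32-bit BARs/DMA.
        ("32-bit window", 1 * MB, low32_window, "carved-out"),
        # Bulk of guest RAM: identity-mapped in the 64-bit range.
        ("64-bit RAM (vmem=pmem)", machine_chunk_base, machine_chunk_size, "identity"),
    ]

for name, start, size, kind in build_guest_map(8 * GB, 4 * GB):
    print(f"{name:24s} @ {start:#014x} +{size:#x} [{kind}]")
```

The point the sketch makes is Jan's: only the "identity" region is safe for device bus mastering, which is why each guest needs its own disjoint machine-memory chunk.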

I don't know the details, but many years ago there was a patch
to allow PCI passthru to HVM guests without an IOMMU.

I think it's linked from the Xen pcipassthrough wiki page.
It had the limitation that only one HVM guest was supported for PCI passthru.

-- Pasi

Xen-devel mailing list