
Re: [Xen-users] [Xen VGA Passthrough] AMD R9 290X GPU???



On 2014-09-12 13:14, Peter Kay wrote:
It's possible that the reason mixing multiple cards doesn't work is
because ATI drivers allegedly try and initialise cards, even if
they're claimed elsewhere.

Can you elaborate on that? Do you mean that the fglrx driver in dom0
tries to initialize the devices that are bound to xen-pciback?

Or do you mean the ATI driver in domU tries to initialize the ATI
GPUs that haven't even been passed to that domU?

Or something else entirely?
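
For context, when I say "bound to xen-pciback" I mean the devices are
hidden from dom0 roughly like this (a sketch; the PCI address is only
an example):

    # Either on the dom0 kernel command line, before any dom0 driver loads:
    #   xen-pciback.hide=(0000:01:00.0)
    # or at runtime, provided nothing in dom0 has claimed the device yet:
    modprobe xen-pciback
    xl pci-assignable-add 0000:01:00.0
    # Confirm the device is now owned by the pciback stub:
    xl pci-assignable-list

The question is whether fglrx pokes at the hardware anyway, despite
the stub driver owning it.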

Most of my attempts so far have been with KVM. With that, an HD6950
passes through just fine once a NoSnoop patch is applied, but having a
low-end Nvidia card in the host Linux breaks things badly.

Are you saying that passthrough of an ATI card causes the host
to crash even though the host is running an Nvidia card?

I've not managed to get the 6950 working in Xen at all, possibly
because its BIOS is >64KB. Under KVM it was necessary to supply the
extracted BIOS in a file.

AFAIK this is only necessary for VGA passthrough (i.e. for
seeing the SeaBIOS splash screen in the guest and getting
video output before the GPU driver loads in the guest).
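
For anyone who does need the ROM file, extracting the video BIOS and
handing it to qemu goes roughly like this (a sketch - the PCI address,
file name and the use of vfio-pci are my assumptions, not necessarily
what Peter used):

    # Dump the video BIOS via sysfs (best done while the card is not the
    # primary adapter actively in use, or the read may fail):
    echo 1 > /sys/bus/pci/devices/0000:01:00.0/rom
    cat /sys/bus/pci/devices/0000:01:00.0/rom > /root/hd6950.rom
    echo 0 > /sys/bus/pci/devices/0000:01:00.0/rom

    # Then point qemu at the extracted image, e.g.:
    #   -device vfio-pci,host=01:00.0,romfile=/root/hd6950.rom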

I've never even tried using VGA passthrough, only secondary
PCI passthrough - I can live with the video output being confined
to the emulated GPU with VNC output and only getting output from
the GPU once the driver loads.

A GTX480 soft modded to a Quadro 6000 is
working fine with both qemu-traditional and qemu upstream, although
upstream seems decidedly less stable. That's with an x64 unpatched
Windows 8.

I never tried qemu-traditional, and I only used XP x64 and 7 x64.
Unfortunately, NF200 bridges break IOMMU operation, so I have to
make sure the domU memory doesn't overlap the IOMEM regions. I
have a bodged patch that achieves this, but am hoping to switch
to using the patch that limits RAM below 4GB soon, which should
achieve the same thing (I must prevent the domU from accessing
memory above 2688MB, as this is where my IOMEM areas are).
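
Once that patch lands, I expect the relevant bit of the domU config to
look something like the following (a guess on my part - newer xl.cfg has
an mmio_hole= option for upstream-qemu HVM guests, and whether that is
the same mechanism as the patch I'm waiting for is an assumption; the
sizes below are just my case):

    # Hypothetical xl.cfg fragment: enlarge the MMIO hole below 4GB so
    # that no guest RAM is placed between 2688MB and 4096MB (1408MB hole).
    builder              = "hvm"
    device_model_version = "qemu-xen"    # mmio_hole= needs upstream qemu
    memory               = 8192
    mmio_hole            = 1408          # guest RAM below 4GB ends at 2688MB
    pci                  = [ "05:00.0" ] # example BDF of the passed-through GPU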

In fairness, I could make do without either patch by simply
using the Windows 7 bcdedit badmem option in domU to mark memory
between 2688MB and 4096MB as bad. I'd lose 1408MB per domU, but
with only a handful of domUs and 96GB of RAM I can live with
that.
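
For completeness, the badmem route would look something like this from
an elevated command prompt inside the domU (a sketch - the page-frame
arithmetic is mine, and the list would have to be generated by a script,
since bcdedit wants every 4KB page listed individually):

    :: 2688MB / 4KB = PFN 0xA8000, 4096MB / 4KB = PFN 0x100000, so every
    :: page frame from 0xA8000 to 0xFFFFF has to go on the list (only the
    :: first few are shown here).
    bcdedit /set {badmemory} badmemorylist 0xA8000 0xA8001 0xA8002
    bcdedit /set badmemoryaccess no
    :: Reboot the domU for the list to take effect.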

I've spent a large amount of time messing around with motherboards,
kernels and suchlike, and my notes so far are:

1) If your criterion is passthrough of any type, KVM is a better option
than Xen. It works and it's also easily possible to identify IOMMU
isolation groups, aiding stability.

I'm going to guess this requires IOMMU access control (PCIe ACS, if
I have the acronym right) to work - which it doesn't on some PCIe
bridges (NF200), and is broken on many BIOSes even when the hardware
itself isn't hopelessly broken.
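
For what it's worth, on a kernel new enough to expose them, those
isolation groups can be inspected straight from sysfs (a minimal
sketch; output obviously varies by machine):

    # Print each IOMMU group and the devices it contains; devices that
    # share a group cannot safely be assigned to different VMs.
    for g in /sys/kernel/iommu_groups/*; do
        echo "IOMMU group ${g##*/}:"
        for d in "$g"/devices/*; do
            echo "  $(lspci -nns "${d##*/}")"
        done
    done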

2) If you want AMD GPU passthrough, use KVM; it's solid and you'll
save yourself a huge amount of pain.

Guest reboots work without side effects?

3) Quadro passthrough on Xen is stable, but the official NVidia
drivers must be installed, and the adapter must be assigned at VM
bootup.

4) Xen's virtualisation seems better, if possibly slower than KVM.
Fewer exceptions, and the virtual USB is fairly decent.

Last time I tested, Xen was somewhat faster than KVM. I would
have preferred to use KVM because, unlike the Xen dom0, with
KVM the host domain isn't running as a virtual domain, which
has performance and driver compatibility benefits for the host
domain.

5) The number of issues and broken BIOSes in motherboards is huge.

This is really the key issue, and coincides with my experience, too.
Worse, apart from a handful of big brand, expensive servers/workstations
that are certified for VGA/PCI passthrough, there is practically no
comprehensive list of motherboards that are known to work properly
with one or more GPUs passed to different VMs.

The impression I get is that on the whole AMD motherboards fare
a little better, but that could be purely down to more Xen users
using them.

For my next system, I will probably be buying an Intel workstation or
server board. It's clear that the more exciting virtualisation
improvements are happening on Intel first, and, being Intel, their
BIOSes generally work, even if there are various quirks. I've just
been mildly stung by an xw4600 motherboard, which claims to support
VT-d, but is broken. Back to the S3210SHLC it is, then - I'm not keen
on the maximum 4x PCIe speed I'm limited to graphics-wise, but it
does actually work.

In fairness, it has been demonstrated many times that PCIe speed is
not particularly relevant for gaming loads. For compute loads that
are heavily reliant on shipping data to/from the GPU it will make
a difference, but if your compute vs. I/O ratio is that low,
the performance will be pretty horrible anyway.
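
On the subject of boards that claim VT-d but don't deliver, the
quickest sanity check I know of is to look at what the hypervisor and
kernel report at boot (a sketch; the exact messages differ between
versions):

    # Under Xen, check whether the hypervisor found and enabled the IOMMU:
    xl dmesg | grep -i -e 'i/o virtualisation' -e vt-d -e iommu

    # On bare metal or under KVM, look for the DMAR/IOMMU initialisation lines:
    dmesg | grep -i -e dmar -e iommu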

(yes, I should run something much more modern, but I'm looking at how
the latest Haswell/Haswell-EP chips work out before wasting money...)

I have had a similarly problematic experience with EVGA's flagship
SR-2 motherboard. None of the issues were insurmountable, but the
NF200 bridges bypassing the IOMMU for DMA and thus clobbering the
host's IOMEM region was particularly annoying. But once the Xen
developers here came up with a hunch that this was what was happening,
working around the problem wasn't particularly difficult.

I will write up an extensive howto with the exact patches, software
versions and hardware modifications required to make it all work
properly at some point (probably as soon as I upgrade to a version
of Xen with the max-mem-below-4GB option implemented).

Gordan

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users


 

