
Re: [Xen-users] [Xen VGA Passthrough] AMD R9 290X GPU???




On 12 September 2014 14:46, Gordan Bobic <gordan@xxxxxxxxxx> wrote:
On 2014-09-12 13:14, Peter Kay wrote:
It's possible that the reason mixing multiple cards doesn't work is
that ATI drivers allegedly try to initialise every card, even if
it's already claimed elsewhere.
Most of my attempts so far have been with KVM. With that, an HD6950
passes through just fine once a NoSnoop patch is applied, but having a
low-end Nvidia card in the host Linux breaks things badly.

Are you saying that passthrough of an ATI card causes the host
to crash even though the host is running an Nvidia card?
No. The lower-end Nvidia cards (e.g. the GT210) have a (dom0/KVM host) Linux driver that causes instability when passthrough is used, due to VGA arbitration. Using the official Nvidia driver is a bad idea; Nouveau is slightly better, IIRC, but has other issues. Not sure if higher-end Nvidia cards fix this.
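
If you want to verify which host driver has actually claimed each card, sysfs tells you directly. A minimal Python sketch (standard Linux sysfs paths, nothing KVM- or Xen-specific; adjust to taste):

    #!/usr/bin/env python3
    # Minimal sketch: list VGA-class PCI devices and the host kernel
    # driver (if any) bound to each, using standard sysfs paths.
    # Handy for spotting nouveau/nvidia grabbing a card meant for a guest.
    import os

    PCI_ROOT = "/sys/bus/pci/devices"

    for dev in sorted(os.listdir(PCI_ROOT)):
        devpath = os.path.join(PCI_ROOT, dev)
        with open(os.path.join(devpath, "class")) as f:
            pci_class = f.read().strip()
        # 0x0300xx = VGA-compatible controller, 0x0302xx = 3D controller
        if not pci_class.startswith(("0x0300", "0x0302")):
            continue
        driver_link = os.path.join(devpath, "driver")
        driver = (os.path.basename(os.readlink(driver_link))
                  if os.path.islink(driver_link) else "(unclaimed)")
        print(dev, pci_class, driver)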

I've never even tried using VGA passthrough, only secondary
PCI passthrough - I can live with the video output being confined
to the emulated GPU with VNC output and only getting output from
the GPU once the driver loads.

I haven't managed to get a 6950 working in Xen full stop, either as primary or secondary. It's fine in KVM.
1) If your criterion is passthrough of any type, KVM is a better option
than Xen. It works, and it's also easy to identify IOMMU
isolation groups, which aids stability.

I'm going to guess this requires PCIe ACS (Access Control Services)
to work - which it doesn't on some PCIe bridges (NF200), and is
broken in many BIOSes even when the hardware itself isn't impossibly
broken.

To be fair, you'll have just as many issues with KVM on an NF200 motherboard. I think there are workarounds there, but I haven't been keeping up.
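
On point 1 above, identifying the IOMMU isolation groups doesn't need any special tooling once the IOMMU is actually enabled; the kernel exposes them in sysfs. A minimal Python sketch (standard paths, nothing KVM-specific):

    #!/usr/bin/env python3
    # Minimal sketch: print each IOMMU group and the PCI devices in it.
    # Devices that share a group can generally only be passed through
    # together, so this shows at a glance how well isolated a board is.
    import os

    GROUPS = "/sys/kernel/iommu_groups"

    if not os.path.isdir(GROUPS):
        raise SystemExit("no IOMMU groups - is the IOMMU enabled in the "
                         "firmware and on the kernel command line?")

    for group in sorted(os.listdir(GROUPS), key=int):
        devices = sorted(os.listdir(os.path.join(GROUPS, group, "devices")))
        print("group %s: %s" % (group, ", ".join(devices)))

If every GPU, its audio function and half the chipset land in one group, ACS is the thing that's missing.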
2) If you want AMD GPU passthrough, use KVM, it's solid and you'll
save yourself a huge amount of pain.

Guest reboots work without side effects?

Yes. It's solid.
Last time I tested, Xen was somewhat faster than KVM. I would
have preferred to use KVM because, unlike the Xen dom0, the KVM
host domain isn't running as a virtual domain, which has
performance and driver compatibility benefits for the host
domain.

I think the benchmarks differ, but show KVM having a bit of an edge. I prefer the manageability of Xen; KVM is currently better at hot-plug. It's a bit of a mess how Xen has migrated from xm to xl without maintaining all the functionality, never mind doing so in a solid manner.

5) The number of issues and broken BIOSes in motherboards is huge.

This is really the key issue, and coincides with my experience, too.
Worse, apart from a handful of big brand, expensive servers/workstations
that are certified for VGA/PCI passthrough, there is practically no
comprehensive list of motherboards that are known to work properly
with one or more GPUs passed to different VMs.

The impression I get is that on the whole AMD motherboards fare
a little better, but that could be purely down to more Xen users
using them.

I will have to re-read whether that is the case, as my impression is that more people were using Intel, and that is where development was first targeted.
In fairness, it has been demonstrated many times that PCIe speed is
not particularly relevant for gaming loads. For compute loads that
are heavily reliant on shipping data to/from the GPU it will make
a difference, but if your compute vs. I/O ratio is that low,
the performance will be pretty horrible anyway.

This is true to some extent. My testing seems to show that the difference between PCI-e 1.x 16x and 8x is minimal, that the difference between 8x and 4x is (sadly) noticeable but not catastrophic, and that below that things become a bit treacle-like - although a fast card at 1x may fare a lot better than a slow card at 8x.
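
For anyone who wants to check what link their card actually negotiated (as opposed to what the slot is wired for), sysfs exposes that too. A minimal Python sketch - the device address is only an example, the attributes are standard PCIe sysfs:

    #!/usr/bin/env python3
    # Minimal sketch: report negotiated vs. maximum PCIe link speed and
    # width for one device, via standard sysfs attributes.
    import os

    DEV = "/sys/bus/pci/devices/0000:01:00.0"  # example address - adjust

    def attr(name):
        with open(os.path.join(DEV, name)) as f:
            return f.read().strip()

    print("current link: %s x%s" % (attr("current_link_speed"),
                                    attr("current_link_width")))
    print("maximum link: %s x%s" % (attr("max_link_speed"),
                                    attr("max_link_width")))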
PK
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users