
Re: [Xen-users] GPU passthrough on Xen 4.4.0, FLReset-

  • To: xen-users@xxxxxxxxxxxxx
  • From: Gordan Bobic <gordan@xxxxxxxxxx>
  • Date: Wed, 04 Jun 2014 10:52:36 +0100
  • Delivery-date: Wed, 04 Jun 2014 09:53:01 +0000
  • List-id: Xen user discussion <xen-users.lists.xen.org>

On 2014-06-04 10:13, Kuba wrote:
I have a pair of 780Ti cards in my system (each passed to a different
VM) and it works just fine. I never managed to get any ATI cards to
work properly in the same setup. Probably best to stay away from
those if you want something that "just works".

Dear Gordan,

would you mind sharing some details about your 780Ti-based system?
I've got a (genuine) Quadro 4000-based system with working GPU
passthrough, but I still have some issues with it, so any additional
info would be greatly appreciated :) Here's my list of questions:

1. What is your dom0 kernel version?


2. What is your domU OS/version?


3. What is your xen version?

4.3.0, with a very-not-for-public-consumption patch to
work around a memory stomp caused by NF200 PCIe bridges
on my motherboard. Unless you have a buggy motherboard
or one that features extra PCIe bridges, this shouldn't
be a problem.

4. Do you have GPLPV drivers installed? Which version?

Yes. Not sure about the version (probably at least 4-5
months old). It works fine both with and without GPLPV
drivers (disk and network I/O is much faster with them,
of course).

5. Do you have any issues with restarting the domU? Do you have to do
something before rebooting the domU, like ejecting the GPU?

No, it just works.

6. Do you have any issues with multi-monitor setups?

I could never get it working with ATI, but never had a problem
with Nvidia.

I use IBM T221 monitors, which are implicitly
a multi-monitor setup: if you want a refresh rate
above 15Hz, each monitor appears as either 2 or 4 separate
monitors which you have to stitch together. I use my T221
with DL-DVI adapters, which present the monitor as
2x 1920x2400@48Hz.

So yes - it works just fine for me.

7. Can you have more than 4GB of RAM assigned to the domUs with GPU passthrough?

The patch I have makes 2.5GB of RAM go missing in each
VM (my bodge was to just mark all of the RAM between
1GB and 4GB in domU e820 map as reserved in hvmloader).
This is purely to work around the NF200 bridges, otherwise
domU ends up stomping all over the real PCI memory hole
area and crashing the machine. I have 96GB of RAM in the
machine so I can live with 5GB of it going missing. I
expect the need to use the patch will go away once the
patch that provides memory sizing configuration below
4GB makes it into a release.

Without said patch, due to my machine's physical
memory layout I'd have to limit the domUs to 2688MB,
since that is where the first PCI BAR is mapped.
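The arithmetic behind the workaround can be sketched as follows. This is purely illustrative (a toy e820 map, not the actual hvmloader patch); the region boundaries are assumptions for the sake of the example:

```python
# Toy model of a domU e820 map with the 1 GiB..4 GiB range marked
# reserved, so the guest stays clear of the host's PCI memory hole.
# In practice less RAM is lost, since part of that range would have
# been MMIO hole anyway.

GiB = 1 << 30

# (start, end, type) entries, loosely following the e820 format.
e820 = [
    (0x0,     640 * 1024, "usable"),
    (1 * GiB, 4 * GiB,    "reserved"),  # the bodge: keep the guest out
    (4 * GiB, 8 * GiB,    "usable"),    # remaining RAM relocated high
]

def usable_bytes(e820_map):
    """Sum the bytes of all regions marked usable."""
    return sum(end - start for start, end, t in e820_map if t == "usable")

print("usable:", usable_bytes(e820) // (1 << 20), "MiB")
print("reserved hole:", (3 * GiB) // (1 << 20), "MiB")
```

With two such domUs, the reserved ranges account for the roughly 5GB of "missing" RAM mentioned above.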

8. Do you have any hardware with exclamation marks in your hardware
manager in domU?

No. I only see the Xen virtual device showing up as having
no driver installed, but this is normal. It's a commercial-only
optional extra that isn't particularly important.

9. Which qemu flavour do you use?


10. In fact - could you please post the domU config file and your xl
info output? :)




disk=[ '/dev/zvol/ssd/edi,raw,hda,rw' ]

vif=[ 'mac=00:16:3e:4e:c5:0c,bridge=br0,model=e1000', ]


# GPU, PCI audio, USB
pci = [ '07:00.0', '07:00.1', '00:1b.0', '00:1a.1' ]


# Without my bodgy patch, this is for PV domains only


xl info:
# xl info
host                   : normandy
release                : 3.9.9-2.el6xen.x86_64
version                : #1 SMP Tue Jul 16 15:52:11 BST 2013
machine                : x86_64
nr_cpus                : 24
max_cpu_id             : 23
nr_nodes               : 2
cores_per_socket       : 6
threads_per_core       : 2
cpu_mhz                : 3321
hw_caps : bfebfbff:2c100800:00000000:00003f00:029ee3ff:00000000:00000001:00000000
virt_caps              : hvm hvm_directio
total_memory           : 98295
free_memory            : 1361
sharing_freed_memory   : 0
sharing_used_memory    : 0
outstanding_claims     : 0
free_cpus              : 0
xen_major              : 4
xen_minor              : 3
xen_extra              : .0
xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          :
xen_commandline : noreboot dom0_vcpus_pin iommu=dom0-passthrough unrestricted_guest=1 msi=1
cc_compiler            : gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-3)
cc_compile_by          : root
cc_compile_domain      : shatteredsilicon.net
cc_compile_date        : Thu Sep  5 10:55:26 BST 2013
xend_config_format     : 4

11. Where did you find the necessary information to hard-mod the card?
Is there some particular post on some forum?

Read this thread (very, very long); this post is a good place to start:

The 680 mods are fairly early on in the thread. I suggest you go
with the Tesla K10 mod, which only requires removing one resistor
from the back of the card; there's no need to even remove the heatsink.

The 480 BIOS-only mods are somewhere in between, or you can just
skip straight to here:

12. I have three issues with my setup, have you noticed any of these?
a. Can't have more than 3.5GB RAM.

That's either a QEMU bug that has been fixed at some point
since I heard of it, or the bug caused by NF200 (or similar)
PCIe bridge(s).

b. Rebooting domU with more than one monitor and without ejecting the
gpu first causes very strange artefacts on the displays (everything
works just fine with only one monitor).

I have never seen this. If I had to guess, it would be that something
is causing a memory stomp over your PCI memory holes. Does this also
happen when you reduce domU memory to 1GB? If it does, it's probably
a PCIe bridge issue (or buggy IOMMU/BIOS).
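One way to check where the host's PCI memory windows sit relative to your domU memory size is to look at /proc/iomem in dom0. A hedged diagnostic sketch (the sample data below is made up for illustration, not from any real machine):

```python
# Parse /proc/iomem-style text and list PCI bus windows below 4 GiB,
# so they can be compared against the domU memory size.
import re

SAMPLE_IOMEM = """\
00000000-0009ffff : System RAM
000a0000-000bffff : PCI Bus 0000:00
00100000-a7ffffff : System RAM
a8000000-fbffffff : PCI Bus 0000:00
fc000000-fcffffff : pnp 00:0a
"""

def pci_windows_below_4g(iomem_text):
    """Return (start, end) ranges labelled as PCI bus windows below 4 GiB."""
    windows = []
    for line in iomem_text.splitlines():
        m = re.match(r"([0-9a-f]+)-([0-9a-f]+) : PCI Bus", line.strip())
        if m:
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            if start < (1 << 32):
                windows.append((start, end))
    return windows

for start, end in pci_windows_below_4g(SAMPLE_IOMEM):
    print(f"PCI window: {start:#x}-{end:#x} (starts at {start // (1 << 20)} MiB)")
```

In this made-up sample the first large window starts at 0xa8000000, i.e. 2688 MiB, which is the kind of boundary a domU's RAM must stay below when the guest cannot relocate memory around the hole.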

c. Have some yellow exclamation marks in hardware manager.

As mentioned above, I am not seeing anything like that on my system.
The setup I have has a modified 780Ti in each domU and an 8800GT in
dom0, and everything "just works".


Xen-users mailing list


