
Re: [Xen-users] PCI/VGA passthrough: differences between Xen and ESXi?

  • To: Eric Shelton <eshelton@xxxxxxxxx>
  • From: David TECHER <davidtecher@xxxxxxxx>
  • Date: Wed, 3 Apr 2013 20:49:18 +0100 (BST)
  • Cc: "xen-users@xxxxxxxxxxxxx" <xen-users@xxxxxxxxxxxxx>, Patrick Proniewski <patpro@xxxxxxxxxx>
  • Delivery-date: Wed, 03 Apr 2013 19:51:02 +0000
  • List-id: Xen user discussion <xen-users.lists.xen.org>

Thanks for this wonderful story, but I was asking:

1) for your revision number/changeset for Xen 4.3 (xl info should return something):

hg clone -r ????


make ???

patch -p1 <????

make ???

2) for any domU configuration file?
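Something along these lines, just to show the shape of what I am after -- every value below (name, memory, disk path, PCI BDF addresses) is a placeholder, not anyone's actual setup:

```
# Hypothetical HVM domU config sketch for VGA passthrough.
# All values are placeholders.
builder = "hvm"
name = "win7-gpu"
memory = 4096
vcpus = 2
disk = [ 'phy:/dev/vg0/win7,hda,w' ]
# BDF addresses of the GPU and its HDMI audio function:
pci = [ '01:00.0', '01:00.1' ]
# Pass the GPU through as the primary/boot VGA device:
gfx_passthru = 1
boot = "c"
```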

From: Eric Shelton <eshelton@xxxxxxxxx>
To: David TECHER <davidtecher@xxxxxxxx>
Cc: "xen-users@xxxxxxxxxxxxx" <xen-users@xxxxxxxxxxxxx>; Patrick Proniewski <patpro@xxxxxxxxxx>
Sent: Wednesday, 3 April 2013, 18:01
Subject: Re: [Xen-users] PCI/VGA passthrough: differences between Xen and ESXi?

1) I applied the patch to 4.3 unstable (back in February).  I recall
it applying cleanly (thanks to Dr. Wettstein's work in updating the
original patches).  However, there is a build issue when the patched
files are used to build minios, as its libraries do not provide
iopl().  On my initial pass, I may have #ifdef-ed out the iopl() calls
during minios build time, on the presumption that this code path is
not exercised by minios.  However, not being fully confident in that
presumption, in some later work I identified a more appropriate
mechanism: an existing hypervisor call for iopl.  I think the revised
code was written, but it remains untested.

2) I am using what I saw as the core aspects of that script:
unbind_devices(), re-enabling the devices, and the use of vbetool to
reinitialize the display (which returns the text console in my case).
Although I am doing passthrough of a USB3 controller as well, I did
not bother with rebinding the USB driver, as I simply surrendered use
of those ports to Windows.  I am also not waiting for the VM to exit;
instead, I split the single script into before and after scripts that
are invoked manually.
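Roughly, the split looks like the sketch below.  This is an
illustrative dry run, not my actual scripts: run() only prints the
commands it would execute (swap echo for eval to really run them, as
root), and the BDF and host driver name are placeholders.

```shell
#!/bin/sh
# Dry-run sketch of the "before" (unbind) and "after" (restore) steps.
# BDF and HOST_DRV are placeholders -- adjust for your GPU.
BDF="0000:01:00.0"
HOST_DRV="radeon"

run() { echo "+ $*"; }   # replace 'echo' with 'eval' to actually execute

before_vm() {
    # Detach the GPU from its host driver and hand it to pciback
    run "echo $BDF > /sys/bus/pci/devices/$BDF/driver/unbind"
    run "echo $BDF > /sys/bus/pci/drivers/pciback/new_slot"
    run "echo $BDF > /sys/bus/pci/drivers/pciback/bind"
}

after_vm() {
    # Return the device to dom0 and reinitialize the text console
    run "echo $BDF > /sys/bus/pci/drivers/pciback/unbind"
    run "echo $BDF > /sys/bus/pci/drivers/$HOST_DRV/bind"
    run "vbetool post"
}

before_vm
after_vm
```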

I think the runtime unbind, bind, and vbetool approach, versus giving
the PCI devices over to pciback at boot time, makes a difference in
whether you can stop and restart the passthrough VM.  With this
configuration, it is no problem for me.  I have heard many reports of
people starting the Windows VM once, but not being able to start it
again without a reboot.
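By comparison, the boot-time approach seizes the device before any
dom0 driver can bind it, with something like this on the dom0 kernel
command line (the BDFs below are placeholders):

```
xen-pciback.hide=(0000:01:00.0)(0000:01:00.1)
```

With that, the device sits on pciback from boot and there is nothing
to unbind or reinitialize at runtime.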

3+) Tonight I can pull together what I am using and post it up here.
Very little modification of Dr. Wettstein's patch was done.  However,
at this stage, it seems to be the little things that make the
difference between GPU passthrough working or not.

My understanding is that GPUs contain many intentional quirks in
order to comply with certain HDCP requirements.  Unfortunately, I
think the universe of possible secret register access sequences, etc.,
will make it impossible, even with 1:1 mapping in addition, to fully
convince the display driver, which would be needed to achieve
virtualized HDCP compliance (I am interested in a virtualized HTPC
environment).  From what I have heard of your work, it sounds like
NVidia's quirks are especially difficult to deal with (not
necessarily intentionally).  It was enough to make me abandon my
existing NVidia GPU in favor of an AMD solution.

Also, as I think I noted in an earlier post, one of the qemu
developers has, perhaps independently, implemented the necessary
quirks for AMD cards.  His code achieves much the same thing as the
patch I used.  I think there is at least a patch against mainline
qemu on qemu-devel, if it has not been incorporated already.  He has
also been doing a fair amount of PCIe work, which I think may be
important for NVidia, assuming their use of extended config space is
causing some of the difficulties.

It would be nice to see enough code get into Xen 4.3 to get AMD cards
working, but with the switch away from qemu-traditional, which is what
the patch is written for, this may now depend on the qemu-devel work
being merged into mainline qemu.

- Eric

On Wed, Apr 3, 2013 at 4:45 AM, David TECHER <davidtecher@xxxxxxxx> wrote:
> Thanks for clarifying.
> I am applying the original patches for nvidia.  That is why I have the
> 3GB limitation.
> Question 1:
> In your link
> http://lists.xen.org/archives/html/xen-users/2013-02/msg00410.html
> Did you apply the following patch
> on Xen 4.3 unstable or Xen 4.2 stable?
> Question 2:
> Do you use this script
> ftp://ftp.enjellic.com/pub/xen/run-passthrough
> to start/stop your domU?
> Question 3:
> Can you show a domU config file?
> Thanks for letting me know.
> ________________________________
> From: Eric Shelton <eshelton@xxxxxxxxx>
> To: "xen-users@xxxxxxxxxxxxx" <xen-users@xxxxxxxxxxxxx>; Patrick Proniewski
> <patpro@xxxxxxxxxx>; davidtecher@xxxxxxxx
> Sent: Wednesday, 3 April 2013, 01:26
> Subject: Re: [Xen-users] PCI/VGA passthrough: differences between Xen and
> ESXi?
> David:
> What are the circumstances in which your comment about "Xen - limited
> to 3GB for RAM" applies?  Is it in the case in which one uses multiple
> GPU cards?
> There are a number of HVM tutorial examples with well over 3GB:
> http://wiki.xen.org/wiki/Secondary_GPU_Passthrough (6GB Win7 HVM with
> passthrough of ATI 6970 and USB controller)
> http://wiki.xen.org/wiki/Comprehensive_Xen_Debian_Wheezy_PCI_Passthrough_Tutorial
> (6GB Win7 HVM with passthrough of ATI 68XX and multiple USB
> controllers)
> None of these tutorials mention a 3GB limitation.
> The only recent reference to a 3GB HVM issue during passthrough is here:
> http://comments.gmane.org/gmane.comp.emulators.xen.user/77214
> which appears to link the issue with use of nvidia patches from here:
> http://www.davidgis.fr/download/xen-4.2_rev25240_gfx-passthrough-patchs.tar.bz2
> From what I gather, nvidia cards perform a bit of undocumented voodoo
> (by design) that has not been fully accounted for.
> Elsewhere:
> http://new-wiki.xen.org/old-wiki/xenwiki/XenPCIpassthrough.html
> it sounds like there was a PV domain issue arising from e820 for which
> a reliable workaround was incorporated into the mainline code some
> time ago.
> Patrick:
> From the above examples, it seems like >3GB may be fine if you use
> ATI/AMD GPUs.  I have had fairly good experiences with passthrough of
> a single AMD 6570 with the patch mentioned here:
> http://lists.xen.org/archives/html/xen-users/2013-02/msg00410.html
> although I did not explore how much or how little memory I could
> assign to it.  Note, however, that patch may make certain assumptions,
> such as that only one GPU is being passed through.  I'm not sure there
> is any unpatched release version (for example, 4.2.1) that reliably
> does GPU passthrough (PCI passthrough of less wonky devices has worked
> for some time).
> I think the proposed multi-GPU setup may be putting you out on the
> bleeding edge (on the other hand, it sounds like it is simply not
> available under ESXi).  It's probably not impossible.  Maybe someone
> has already done it.  However, how interested and/or comfortable are
> you with code & configuration tweaking, debug, and experimentation?
> - Eric
>>> Patrick,
>>> 1) Xen's vocabulary
>>> dom0 = the main (first) privileged domain, from which all virtual machines
>>> are managed -- like your Mac OS X
>>> domU = any other virtual machine.
>>> This is not the real meaning but here I try to stay "understandable". But
>>> for better understanding please refer to http://en.wikipedia.org/wiki/Xen
>>> My understanding is that your Mac OS X would have to be replaced by Linux
>>> so you can run Xen. But for this part I am not a Mac OS X expert.
>>> 2) More info - the details listed below are only available for a Xen dom0
>>> running on Linux.
>>> [Extracted]:....I've ended my testing with ESXi, because I was not able
>>> to passthrough sound device, and USB....
>>> [Comment]: With Xen you can :)
>>> [Extracted]: ...with stock ATI Radeon...
>>> [Comment]: ...Xen offers a VGA passthrough feature for ATI Radeon cards.
>>> Here is a link to a YouTube video of Crysis 3 running on a Windows 7
>>> 64-bit domU with an HD 7970:
>>> http://www.youtube.com/watch?v=GTnchEG4YtI&feature=player_embedded
>>> As you can see
>>> - I can use PCI passthrough to add an Xbox 360 controller while playing
>>> Crysis 3 (see 09:12 in the video)
>>> - I have sound too :)
>>> It works on a Linux domU too. Here is another YouTube video link:
>>> http://www.youtube.com/watch?feature=player_embedded&v=KzqOIMaBgX0
>>> But you need to apply a few patches to enable this feature with Xen 4.2
>>> and later. It is not very complicated to do.
>>> Here are my instructions for doing it, though I know that there are
>>> currently better ways to test it:
>>> http://www.davidgis.fr/blog/index.php?2013/03/13/935-xen-43-vga-passthrough-ati-card-hd-7970-changeset-26706
>>> [Extracted]...memory allocation
>>> [Comment]...Xen - limited to 3GB of RAM. A few developers are working on
>>> removing this limitation.
>>> [Extracted]...no snapshot possible
>>> [Comment]...If you use Xen with LVM, then snapshotting is easy to
>>> manage ;)
>>> [Extracted] no sleep
>>> [Comment] Works on Xen for both Linux and Windows. The main point is that
>>> you can reboot a domU without restarting the dom0.
>>> [Extracted] Is it possible to create a virtualized desktop with VGA and
>>> PCI passthrough for 2 or 3 VMs running simultaneously (i.e. each one with
>>> its own video card)?
>>> [Comment] Yes, it is doable. I have never tested it, but from experience
>>> shared with other Xen users I know that it should run fine.
>>> Hope it helps.
>>> Kind regards.
>>> David.

Xen-users mailing list
