
Re: [Xen-users] Xen 4.3 Passthrough Problems & Documentation

Thanks for the reply Gordon.

On Mon, Jul 15, 2013 at 11:02 AM, Gordan Bobic <gordan@xxxxxxxxxx> wrote:
On Sun, 14 Jul 2013 17:58:46 -0400, Casey DeLorme <cdelorme@xxxxxxxxx> wrote:

- Upstream qemu fails to load virtual machines with VGA passthrough
and a large amount of memory (3600MB+ in my case breaks the DomU).

I may be wrong, but my understanding is that the PCI passthrough
related BAR memory patch was for qemu-traditional, not upstream.

The patch points to files in the qemu-xen-dir, not qemu-xen-traditional-dir, so I am pretty sure it is for upstream. If it weren't, it should have had no effect when added; instead it broke my HVM using upstream qemu.

- Does anyone know exactly what Windows device ejection does to the
hardware, or how we can do the same from Linux (such as Dom0)?

I suspect it does "whatever the driver does", rather than something
defined by a standard of some sort.

FWIW, ejecting a device only ever even succeeded for me on Win7.
If I try to eject a GPU in XP, it refuses to do so because the
"device is busy".

I had ejections working fine with 4.2 and Windows 8. However, upstream qemu provides much smoother performance for a number of things, so ideally I would like to use it instead of traditional.

**A note on GPLPV:**

Using the latest GPLPV, and so far it works excellently. To be honest
I don't notice a difference with regards to disk IO, solid state is
already fast, but the Windows Experience index jumps from a 6.6 to a

Really? I found the difference is _enormous_. Booting domU takes
seconds rather than minutes, and running any kind of anti-virus
grinds the machine to a halt without PV disk drivers.

Actually this is exactly one of the things upstream qemu addressed: Windows 8 boot time on 4.2 traditional was upwards of 2 minutes even with GPLPV installed. GPLPV made almost no visible difference to my boot times; software may or may not be running faster. I'm sure it is, but I don't notice the difference.

SSDs are fast, so between fast and faster the line gets blurry, I guess. If I were using an HDD it would probably be a different story.

**Testing sysfs reset:**

Modern Linux kernels expose sysfs `reset` files that can be used to
reset (some) PCI devices manually:

- Kernel Docs


I decided to give this a try to see if it would allow me to reset the
adapter from within Linux, where I could then tie a script to automate
the reset process when a DomU is rebooted.
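A manual reset via that sysfs file boils down to a one-line write. The sketch below is hypothetical: the BDF `0000:01:00.0` is a placeholder for the card's actual PCI address, and the `reset` file only exists when the kernel knows a reset method for the device.

```shell
#!/bin/sh
# Minimal sketch: manually reset a PCI device through its sysfs reset file.
# "0000:01:00.0" is a placeholder BDF -- substitute your GPU's address.
BDF="0000:01:00.0"
RESET="/sys/bus/pci/devices/$BDF/reset"

if [ -w "$RESET" ]; then
    # Writing "1" asks the kernel to reset the function using whatever
    # method it supports (FLR, PM reset, or a secondary-bus reset).
    echo 1 > "$RESET"
    echo "reset issued for $BDF"
else
    echo "no writable reset file for $BDF (device absent or reset unsupported)"
fi
```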

The planned scenario:

- Windows boots and initializes the graphics card
- I shut down Windows and the card remains initialized
- I reset the graphics card's state by:
    - unbinding it from pciback
    - issuing a reset
    - rebinding it
- Booting Windows should initialize a fresh card
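The unbind/reset/rebind steps in that scenario could be scripted roughly as follows. This is a sketch under the assumption that the GPU is normally held by pciback; the BDF is again a placeholder.

```shell
#!/bin/sh
# Sketch of the planned reset cycle: detach from pciback, reset, reattach.
# "0000:01:00.0" is a placeholder for the card's PCI address.
BDF="0000:01:00.0"
DEV="/sys/bus/pci/devices/$BDF"
PCIBACK="/sys/bus/pci/drivers/pciback"

reset_cycle() {
    [ -e "$DEV" ] || { echo "device $BDF not present, skipping"; return 0; }
    # 1. detach the function from pciback
    echo "$BDF" > "$PCIBACK/unbind"
    # 2. ask the kernel to reset it
    echo 1 > "$DEV/reset"
    # 3. hand it back to pciback for the next domU boot
    echo "$BDF" > "$PCIBACK/bind"
}

reset_cycle
```

Tying this to a domU reboot would then just be a matter of calling it from whatever hook fires when the guest shuts down.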

I think you'll find this process is entirely at the mercy of
what the driver does in domU. Quadro drivers seem to handle
this very gracefully.

This is good news, because I am hoping Linux handles things the same way.
Primary passthrough might work better because it re-executes
the BIOS which may well get the card to a clean state, but
I am purely guessing since I gave up on ATI cards
some time ago for a number of reasons.

That is possible; I never had luck getting primary passthrough working before, so maybe I will try again. However, then I have to use traditional qemu again, so ideally I'd rather use upstream and work around secondary passthrough.

- I am led to believe that Windows ejection is probably working because it
is using AMD drivers.

Ejecting a Quadro card on Win7 "worked" for me, but I never
actually saw any benefit from doing so with Quadro cards
since they work fine after a domU reboot anyway.

If I could achieve that with an AMD card I would be happy, but I haven't found any good instructions on how to mod a GTX into a Quadro that don't involve hardware modifications.

- The reset in Linux fails when the device has no driver, so the reset
probably triggers a driver operation

You have a reset option under /sys/ when the driver is loaded?
I've never seen that. I thought it was specifically related
to FLreset PCI level functionality.

Supposedly the reset files were added as an alternative to `do_flr`? I did read a little bit about it, but haven't found much by way of documentation around it yet.
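One quick way to check whether FLR is even on the table for a given function is to look for `FLReset+` in its PCIe Device Capabilities via lspci. A small sketch (the BDF is a placeholder, and a missing `FLReset+` doesn't rule out the sysfs reset file, since the kernel can fall back to other reset methods):

```shell
#!/bin/sh
# Check whether a PCI function advertises Function Level Reset.
# "01:00.0" is a placeholder BDF.
BDF="01:00.0"
if command -v lspci >/dev/null 2>&1; then
    if lspci -vv -s "$BDF" 2>/dev/null | grep -q "FLReset+"; then
        echo "FLR advertised by $BDF"
    else
        echo "no FLReset+ seen for $BDF (a reset file may still exist via another method)"
    fi
else
    echo "lspci not installed"
fi
```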

- The driver operation probably fails because it is not tied to an AMD

And you have definitely confirmed that it does something (or even
exists) when the fglrx driver is claiming the device?

I have not, but if it's anything like Windows then this is exactly what should be happening, right? I am basing this on that thought, plus the fact that the reset throws an error if no driver is attached. It's all speculation right now; I was hoping someone with knowledge about pciback or sysfs could confirm it.

If that is the case, there is a strong possibility that attaching it to, say, the radeon or fglrx driver would handle a reset properly.

I did test resetting emulated graphics in a virtual machine successfully, so I can say that the reset appears to do "something".

Another option I have not yet tested would be loading the radeon driver
to bind and unbind the device before adding it back to pciback, which may
cause the proper reset chain to occur. I didn't see it in the
drivers list though, and wouldn't know where to begin loading it
without causing problems.
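If the radeon module were available, the bind/unbind dance might look something like this. Entirely untested speculation, matching the idea above: letting the native driver's probe and remove paths run may leave the card in a saner state than a bare reset. The BDF is a placeholder.

```shell
#!/bin/sh
# Speculative sketch: briefly hand the card to the native radeon driver
# so its probe/remove paths run, then return it to pciback.
# "0000:01:00.0" is a placeholder BDF.
BDF="0000:01:00.0"
DRV="/sys/bus/pci/drivers"

rebind_via_radeon() {
    [ -e "/sys/bus/pci/devices/$BDF" ] || { echo "device absent, skipping"; return 0; }
    modprobe radeon || return 1          # load driver -> creates $DRV/radeon
    echo "$BDF" > "$DRV/pciback/unbind"
    echo "$BDF" > "$DRV/radeon/bind"     # probe should (re)initialize the card
    echo "$BDF" > "$DRV/radeon/unbind"   # remove path may quiesce it
    echo "$BDF" > "$DRV/pciback/bind"
}

rebind_via_radeon
```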

Well, you can modprobe fglrx and see if/what it breaks. :)

Good idea, I will have to install fglrx first, but hopefully that will load the driver into `/sys/bus/pci/drivers`.

If anyone knows how to cause a D0>D3>D0 power change to a device
through sysfs let me know because I would like to try that next.

Hmm... Abusing power management - I like the idea. :)
It is not likely to work if the card takes auxiliary power
input, though. :(

Hmm, good point, it does take two auxiliary power inputs. I thought D0/D3 transitions were for device hibernation; does auxiliary power prevent that from being possible?
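For the record, the closest standard sysfs knob I know of is runtime power management via `power/control`. A speculative sketch of abusing it for a D0 > D3hot > D0 cycle follows; whether an idle, driverless GPU actually drops to D3hot, and whether a card on auxiliary power meaningfully powers down, is exactly the open question. The BDF is a placeholder.

```shell
#!/bin/sh
# Speculative sketch: let runtime PM suspend an idle PCI device to D3hot,
# then force it back on. "0000:01:00.0" is a placeholder BDF.
BDF="0000:01:00.0"
DEV="/sys/bus/pci/devices/$BDF"

power_cycle() {
    [ -e "$DEV" ] || { echo "device absent, skipping"; return 0; }
    echo auto > "$DEV/power/control"    # allow runtime suspend when idle
    sleep 2
    cat "$DEV/power/runtime_status"     # hope for "suspended" (D3hot)
    echo on > "$DEV/power/control"      # force resume back to D0
}

power_cycle
```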

Xen-users mailing list


