
Re: [Xen-users] Reporting success with VGA Passthrough, and some other question/issues, mainly with Audio

  • To: xen-users@xxxxxxxxxxxxx
  • From: Gordan Bobic <gordan@xxxxxxxxxx>
  • Date: Mon, 06 Jan 2014 10:19:50 +0000
  • Delivery-date: Mon, 06 Jan 2014 10:20:23 +0000
  • List-id: Xen user discussion <xen-users.lists.xen.org>

My xl vm config refers to the disk as follows:

disk=[ '/dev/zvol/ssd/guest1,raw,hda,rw' ]

which points to ZFS pool "ssd", volume "guest1". Volumes in
ZFS are similar to LVM volumes (i.e. you use them as block devices),
only they are managed by ZFS, with all the features that implies.
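For reference, creating such a zvol looks roughly like this (the size
and names here are illustrative, not taken from my actual setup):

```shell
# Create a 40G ZFS volume on pool "ssd"; it shows up as a block
# device at /dev/zvol/ssd/guest1, much like an LVM LV would.
zfs create -V 40G ssd/guest1
```

after which the xl disk line above just points at the device node.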

I don't use any virt* components - I never saw the point.

Any particular other configuration highlights you are interested in?


On 2014-01-05 23:21, Etzion Bar-Noy wrote:
Following your comment regarding ZFS - I tried placing virtual disks
as files, and Xen didn't like it that much (hung during VM startup).
The system was vastly modified later on (newer kernel, custom RPM
building of Xen 4.4 from the git repository, breaking most of the
virt* components, etc.). Now I am on the native 'xl' interface,
without any additional layer, and I have not tried to run a VM from a
file again, so I have no idea as to its behaviour over ZFS.

I do use 'tap2:aio' over ZFS volumes, and get wonderful performance.
It's a nice little 34TB system with lots of RAM for a complete lab
solution. Care to share your configuration?


On Sun, Jan 5, 2014 at 11:59 AM, Gordan Bobic <gordan@xxxxxxxxxx> wrote:

On 01/04/2014 08:38 PM, Zir Blazer wrote:

1) VGA Passthrough

At first, I used xl pci-assignable-add to manually add the GPU and
the HDMI audio device every time I rebooted Dom0, but decided to add
these to the syslinux.cfg file to skip that step. Either way, I
didn't have issues making the Radeon itself free to pass to the VM
(as I am using my Xeon Haswell's integrated GPU as the main Video
Card, and never installed the Radeon Drivers on Dom0) when I used
xl create and had the pci = option in the VM's CFG file. However, it
either BSODed, or Windows was unable to use it as it appeared with a
yellow exclamation mark in Device Manager.

All xl pci-assignable-add does is detach the device from its
current driver and assign it to the xen-pciback driver. The problem
is that once the driver for the device loads, it initializes the
device, which may leave it in a state that the driver in domU cannot
deal with gracefully. This is particularly the case with ATI GPUs.
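(For reference, the equivalent rebind can be done by hand through
sysfs - a rough sketch, with a placeholder BDF:)

```shell
BDF=0000:01:00.0    # placeholder PCI address - substitute your GPU's

# detach the device from whatever driver currently owns it
echo "$BDF" > /sys/bus/pci/devices/$BDF/driver/unbind

# hand it to xen-pciback (the module registers itself as "pciback")
echo "$BDF" > /sys/bus/pci/drivers/pciback/new_slot
echo "$BDF" > /sys/bus/pci/drivers/pciback/bind
```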

I'm not sure how your distro does it, but on EL6 I have a
multi-pronged approach:

options xen-pciback permissive=1

(Two GPUs, each with HDMI audio, two USB controllers and a sound
card, if you're wondering.)


modprobe xen-pciback
modprobe nvidia

This ensures that the Nvidia devices to be passed through are
grabbed by xen-pciback _BEFORE_ the Nvidia driver loads. I don't
know if this is actually necessary with Nvidia - I don't think it
is, but I still have it as a hangover from my futile attempts to get
ATI cards to work properly for my setup before I finally gave up on
them (both Linux and Windows drivers are just too fundamentally
crippled and broken for my use-case).
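Roughly, the EL6 setup amounts to something like this (the hide=
list here is a placeholder - the real one names the two GPUs with
their HDMI audio functions, the USB controllers and the sound card):

```shell
# /etc/modprobe.d/xen-pciback.conf
# permissive=1 relaxes config-space write filtering for passed-through
# devices; hide= makes pciback claim them before any other driver can.
options xen-pciback permissive=1 hide=(0000:01:00.0)(0000:01:00.1)

# then, early in boot (e.g. from an init script), load pciback first:
modprobe xen-pciback
modprobe nvidia
```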

Googling around, I found that the latest version of QEMU broke VGA
Passthrough, and that using qemu-xen-traditional fixed it, which I
thought I was using. However, there was a problem with that: with
device_model = qemu-xen-traditional, as most currently available Xen
VGA Passthrough guides tell you to use, I got this error:

WARNING: ignoring device_model directive.
WARNING: Use "device_model_override" instead if you really want a
non-default device_model

I ignored that because the VM was successfully created, and
when I replaced device_model = qemu-dm with device_model_override =
qemu-xen-traditional, it threw another error which made it not even
create the VM. However, I recently discovered that I instead had
to use device_model_version = qemu-xen-traditional. It worked pretty
flawlessly with that. Basically, there are a lot of guides, and
even the Xen wiki, that are severely outdated in this area. I spent
weeks figuring out what I was doing wrong due to bad documentation;
maybe that's because I didn't dig deep enough earlier, but still,
most of the easily accessible data and Google results are for older
versions, and some critical options like device_model have changed.
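For the record, the stanza that works with current xl looks like
this (an illustrative fragment, with placeholder PCI addresses):

```
builder = 'hvm'
device_model_version = 'qemu-xen-traditional'
pci = [ '01:00.0', '01:00.1' ]   # GPU and its HDMI audio function
```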

http://wiki.xen.org/wiki/Xen_Configuration_File_Options [1] - the old
one, which I was using
http://xenbits.xen.org/docs/unstable/man/xl.cfg.5.html [2] - what I
should have used in the first place

This is really worth writing about, because I'm sure that someone
will sooner or later stumble here (I saw several people with this
issue on Google), as some guides assume you're using a specific
Linux distribution with an older Xen version instead of something
bleeding edge.

Have you changed the incorrect info on the wiki? If you haven't,
please do so - it is a wiki, after all.

After finally being able to see the Windows Desktop on the Monitor
connected to the Radeon 5770, I installed the Radeon 5770 Drivers
from Device Manager with an INF file instead of the full Catalyst
Control Center, as I hear that that causes more possible BSOD
issues. Additionally, after around one week of playing around with
the GPU on the VM (even leaving a game open during the whole night),
I don't seem to notice issues, and the games I tried (Path of Exile,
League of Legends) worked flawlessly with it. I only had a single
GPU crash, with loss of Monitor signal and the VM destroying itself,
but that may not be necessarily attributable to Xen.

ATI drivers are buggy in all sorts of ways. Issues I have had:

1) GPU-Z causes a crash (the host may survive it with PCI ACS
support; my hardware lacks it, so it crashes the whole host)

2) Automatic power management is broken, at least on my 7970 - the
fan doesn't spin up according to the driver's power curve, possibly
because sensor access is broken in a VM (potentially related to why
GPU-Z crashes). The net result is that the fan sits at 20% whatever
you do until the GPU hits 90C. At that point the card's on-board
power management kicks in and cranks the fan up to 100%. On a
standard reference design card, the fan running at over 80% produces
huge noise and vibration - enough to make the disks in the machine
start to generate hundreds of pending sectors every day. The only
way to wake the card up and get it back below 100% fan speed is to
manually force the fan speed using the CCC (which is difficult if
you don't have it installed for fear of it BSOD-ing the VM).

That's the issues with drivers being broken, I'm not going to go
into the issues of the driver needlessly crippling the capabilities
right now because they are probably less relevant to your use case.

Also, I didn't notice any issues with the so-called "Soft Reset"
or FLR, but maybe that was because I built Xen with the Radeon patch
applied.

Does that issue a bus reset to reset the card?
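(If not, a manual reset via sysfs after xl destroy might be worth a
try - a sketch with a placeholder BDF; the kernel uses FLR if the
device supports it, otherwise a fallback such as a secondary bus
reset:)

```shell
BDF=0000:01:00.0    # placeholder - the passed-through GPU
echo 1 > /sys/bus/pci/devices/$BDF/reset
```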

For example, I used xl destroy to abruptly shut down the VM. The
Monitor shows a frozen image of the last frame before I did that,
yet I am able to create it again, with the Video Card passing
through with no issues. Neither did I notice any performance
degradation, albeit I'm not sure if that is only under reboot
scenarios or also applies when you shut down and create again the
VM/DomU without restarting Dom0.

Applies to most scenarios. Weirdly, I found it doesn't happen on
all cards. For example, on a 7450 it is possible to reboot the VM
without performance degradation and video corruption. On a 7970 I
never managed to reboot the domU without it breaking. I hypothesized
that this could be due to the 7450 having no auxiliary power input,
which makes it more susceptible to actually getting reset via
secondary means (e.g. via PCIe power saving cutting off power to the
slot), whereas the 7970's auxiliary power inputs keep it alive when
the power to the slot is switched off - which is arguably a hardware
bug in the ATI cards' power management.

Overall, it seems very functional and quite reliable. The only
issue I find is that when I create the VM, I need to have the VM
window (check attached file) on the main monitor as the active
window, otherwise it seems that after the Windows XP splash screen,
when it changes video resolution, it usually either BSODs, or
doesn't initialize the Video Card properly, and instead the Monitor
stays in Standby while the VM window displays the Windows Desktop as
if there was no VGA Passthrough being done.

Not sure what you are describing here. If you set the domU output
to VNC this shouldn't matter. I only ever check the VNC output when
troubleshooting, e.g. to see if there's a crash. I don't think I've
checked it since I switched to Nvidia cards.

2) VNC vs SDL, Keyboard and Mouse focus

I have tried with both VNC and SDL and I prefer the latter. When I
use SDL, the VM window automatically pops up, though that is rather
unneeded as it later black-screens when the Radeon takes control of
video output on the other Monitor. With VNC I have to manually run
vncviewer to be able to control the VM. Most important is that with
SDL, when I click on the VM's black window, control of the Keyboard
and Mouse goes to the VM, while with VNC I never managed to get
control of the Keyboard. The Mouse pointer works without having to
make the VM window the active one, as if I was using a freemouse
tool on a windowed game; however, the VM's black window surface on
Dom0 doesn't let me reach the entire Desktop surface of the VM, so
it is rather useless.

VNC works fine for me when I use it, but most of the time I use a
separate mouse/keyboard passed to the VM. My main setup is 3 GPUs,
monitors, sound cards, keyboards and mice all running off of one
physical machine.

The only thing that annoys me about using SDL is that I have the
black screen always open while the VM is working, and having to
click every time I want to switch control to the VM is rather
annoying. Is there any way to change Keyboard and Mouse focus from
Dom0 to a DomU and vice versa, as if they were consoles? For
example, I may want to press Ctrl + Alt + F1 to get control of Dom0,
then use Ctrl + Alt + F2 to switch control to the VM. This would
increase usability.

There is no way to do that. As I said, I use separate
mouse/keyboard/monitor for each VGA passthrough VM I use.

3) Disk Images

As can be seen in the CFG file I copypasted, I'm using file: for
my IMG Disk Images. However, some other documentation, like that on
the Xen wiki, mentions that I can use tap:tapdisk:aio: instead. Is
there any reason why I should pick one over another? Do they have
specific format support or anything I should be careful about?

Besides, is there any easy way to mount the IMG Disk Image files on
Linux (including NTFS partitions) so I can retrieve or add files
while the VM is not running? I still haven't learned to set up
networking on Linux and need a workable way to move data from and to
the VM Disk Images.

You need to install the ntfs-3g package and do something like:

losetup /dev/loop0 /path/to/file.img
kpartx -a /dev/loop0
mount /dev/mapper/loop0p1 /path/to/mountpoint

and the reverse to tear it down (umount, then kpartx -d /dev/loop0,
then losetup -d /dev/loop0). Make sure this is never mounted while
the VM is using it or you will very thoroughly destroy the FS.

I know about LVM partitions, but find them a bit harder to manage
than Disk Images. Plus performance is currently adequate this way.

Personally I use ZFS for everything nowadays. Try it and you will
never look back.

4) Audio emulation

While the Windows XP VM works nicely with games, I have the issue
that there is no audio coming from it, which seems to be the most
important missing thing before I can call my VM "production-ready"
for games, as if it were my old computer. I didn't try to pass
through the integrated Realtek Sound Card, but that would be rather
stupid, as I need sound in both the current Linux Dom0, which I'm
using for simple things like browsing, and the Windows XP VM for
games. This means I have to rely on emulated Audio devices, which as
far as I know are limited. soundhw = 'ac97' seems to work: Windows
XP recognizes the Sound Card and installs the Drivers for it with no
issues. It also has the HDMI Audio device passed to it along with
the GPU. However, after googling a lot, I didn't find any easy way
to get audio from a DomU to Dom0 to get mixed, so I could rely on
emulated Sound Cards instead of needing one physical card per VM
like you need with GPUs.
The problem is that support for Intel HD Audio has only been added
very recently in qemu, and traditional qemu doesn't have it. The
other half of that problem is that XP and later don't have drivers
for other emulated devices that traditional qemu supports (e.g. the
once ubiquitous SB16).

As far as I know, there are other VMMs like VirtualBox where you
usually get sound from the VMs easily, and they also use emulated
Sound Cards as Xen does. Is there any reason why getting audio
doesn't seem to be easy to do on Xen?

If you use the default qemu (a.k.a. upstream) rather than
traditional, this may have the Intel HDA support and you can use
Intel HDA driver in domU with it. There is no known driver that
works properly with Windows XP or later for any of the old emulated
devices traditional qemu can provide.
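So, assuming your build's upstream qemu has the HDA device compiled
in, the relevant config fragment would look something like this
(untested here):

```
device_model_version = 'qemu-xen'   # upstream qemu, not traditional
soundhw = 'hda'                     # emulated Intel HD Audio
```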

I know that many people consider USB Sound Cards cheap enough to
pass to the VM, but I don't have easy access to those, so I would
need a Software way to get sound from multiple VMs using a single
pair of Headphones connected to the integrated Realtek Sound Card.

USB audio devices are _cheap_ and easily available. I use these and
they work great:

http://amzn.to/1gzLVAX [3]

And with buy-one-get-one-free you get two for less than the price
of a beer.

Note - you may find it preferable to pass through the USB
controller via PCI passthrough, rather than USB device via USB
passthrough (USB passthrough seems to chew through about 5% of a CPU
core per device).
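Concretely, the two options look like this in the domU config
(addresses and IDs are placeholders):

```
# Option A: pass through the whole USB controller as a PCI device
pci = [ '00:1d.0' ]

# Option B: per-device USB passthrough (costs ~5% of a core per device)
usb = 1
usbdevice = 'host:046d:c52b'
```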

Xen-users mailing list
http://lists.xen.org/xen-users [4]

[1] http://wiki.xen.org/wiki/Xen_Configuration_File_Options
[2] http://xenbits.xen.org/docs/unstable/man/xl.cfg.5.html
[3] http://amzn.to/1gzLVAX
[4] http://lists.xen.org/xen-users
