
Re: [Xen-users] Reporting success with VGA Passthrough, and some other question/issues, mainly with Audio



ZVOLs are sparse / thinly provisioned. I use 4KB block size with dedupe and ZLE compression on mirrored SSDs.
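
For reference, creating a zvol with those properties would look something like this (the 20G size is just illustrative, and the pool/volume names are the ones from the config quoted further down):

zfs create -s -V 20G -o volblocksize=4k -o compression=zle -o dedup=on ssd/guest1

The -s flag is what makes it sparse, so space is only taken from the pool as blocks are actually written.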


Etzion Bar-Noy <etzion@xxxxxxxxxxxx> wrote:

I didn't know you could use disks like that in an 'xl' configuration, without a prefix. My disks look like this:
disk = [ "tap2:aio:/dev/share/VMS/vmware.lun,hda,w","tap2:aio:/dev/share/VMS/vmware-2.lun,hdb,w" ]

(This is a VMware ESXi, nested under Xen.) I do use zvols. I thought you might have used disk files, because they consume less space. I assume you enabled compression, right?
My concern was that volumes consume their entire size from the pool as soon as they are provisioned, while files are usually thin.

Thanks
Etzion


On Mon, Jan 6, 2014 at 12:19 PM, Gordan Bobic <gordan@xxxxxxxxxx> wrote:
My xl vm config refers to the disk as follows:

disk=[ '/dev/zvol/ssd/guest1,raw,hda,rw' ]

which points to ZFS pool "ssd", volume "guest1". Volumes in
ZFS are similar to LVM volumes (i.e. you use them as block devices),
only managed by ZFS, with all the features that implies.

I don't use any virt* components - I never saw the point.

Any particular other configuration highlights you are interested
in?

Gordan


On 2014-01-05 23:21, Etzion Bar-Noy wrote:
Following your comment regarding ZFS - I tried placing virtual disks
as files, and Xen didn't like it that much (it hung during VM startup).
The system was vastly modified later on (newer kernel, custom RPM
build of Xen 4.4 from the Git repository, breaking most of the virt*
components, etc.). Now I am on the native 'xl' interface, without any
additional management layer, and I have not tried to run a VM from a
file again, so I have no idea how that behaves over ZFS.

I do use 'tap2:aio' over ZFS volumes, and get wonderful performance.
It's a nice little 34TB system with lots of RAM for a complete lab
solution. Care to share your configuration?

Etzion

On Sun, Jan 5, 2014 at 11:59 AM, Gordan Bobic <gordan@xxxxxxxxxx>
wrote:

On 01/04/2014 08:38 PM, Zir Blazer wrote:

1) VGA Passthrough

At first I used xl pci-assignable-add to manually hand over the GPU and
its HDMI audio device every time I rebooted Dom0, but I decided to add
these to the syslinux.cfg file to skip that step. Either way, I had no
issues making the Radeon itself free to pass to the VM (as I was using
my Haswell Xeon's integrated GPU as the main video card and hadn't
installed the Radeon drivers on Dom0) when I used xl create with the
pci = option in the VM's CFG file. However, the guest either BSODed, or
Windows was unable to use the card and it showed up with a yellow
exclamation mark in Device Manager.

All xl pci-assignable-add does is detach the device from its
current driver and assign it to the xen-pciback driver. The problem
is that once the driver for the device loads, it initializes the
device, which may leave it in a state that the driver in domU cannot
deal with gracefully. This is particularly the case with ATI GPUs.

I'm not sure how your distro does it, but on EL6 I have a
multi-pronged approach:

/etc/modprobe.d/xen-pciback.conf (the options line is a single line):
options xen-pciback permissive=1 hide=(00:1a.1)(07:00.0)(07:00.1)(0d:00.0)(0d:00.1)(09:00.0)(0f:00.0)

(Two GPUs, each with HDMI audio, two USB controllers and a sound
card, if you're wondering.)

/etc/sysconfig/modules/xen-pciback.modules
#!/bin/sh

modprobe xen-pciback
modprobe nvidia

This ensures that the Nvidia devices to be passed through are
grabbed by xen-pciback _BEFORE_ the Nvidia driver loads. I don't
know if this is actually necessary with Nvidia - I don't think it
is, but I still have it as a hangover from my futile attempts to get
ATI cards to work properly for my setup before I finally gave up on
them (both Linux and Windows drivers are just too fundamentally
crippled and broken for my use-case).
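
Since you mention adding the devices via syslinux.cfg: if xen-pciback is built into the dom0 kernel rather than loaded as a module, the same hiding can (as far as I recall) go on the dom0 kernel command line instead of modprobe.d, roughly like this (the BDFs are just examples):

xen-pciback.permissive=1 xen-pciback.hide=(07:00.0)(07:00.1)

That goes on the Linux (dom0) kernel arguments, not on the Xen hypervisor line. If xen-pciback is a module, the modprobe.d approach above is the one that applies.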

Googling around I found that the latest version of QEMU broke VGA
Passthrough, and that using qemu-xen-traditional fixed it, which I
thought I was already using. However, there was a problem with that.
Using device_model = qemu-xen-traditional, as recommended by most of
the Xen VGA Passthrough guides currently available, I got this error:

WARNING: ignoring device_model directive.
WARNING: Use "device_model_override" instead if you really want a
non-default device_model

I ignored that because the VM was successfully created, and besides,
when I replaced device_model = qemu-dm with device_model_override =
qemu-xen-traditional, it threw another error that stopped the VM from
being created at all. However, I recently discovered that I instead had
to use device_model_version = qemu-xen-traditional. It worked pretty
much flawlessly with that. Basically, there are a lot of guides, and
even the Xen wiki, that are severely outdated in this area. I spent
weeks trying to figure out what I was doing wrong because of bad
documentation, maybe because I didn't dig deep enough earlier, but
still, most of the easily accessible material and Google results are
for older versions, and some critical options like device_model have
changed.

http://wiki.xen.org/wiki/Xen_Configuration_File_Options [1] - the old
parameters I was using
http://xenbits.xen.org/docs/unstable/man/xl.cfg.5.html [2] - what I
should have used in the first place
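
To spell out what finally worked, the relevant lines of my guest config under the current xl syntax are roughly the following (the BDFs are just placeholders for the Radeon and its HDMI audio function):

builder = 'hvm'
device_model_version = 'qemu-xen-traditional'
pci = [ '07:00.0', '07:00.1' ]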

This is really worth writing about, because I'm sure someone else will
sooner or later stumble over the same thing (I saw several people with
this issue on Google), as some guides assume you're using a specific
Linux distribution with an older Xen version instead of something
bleeding edge.

Have you changed the incorrect info on the wiki? If you haven't,
please do so - it is a wiki, after all.

After finally being able to see the Windows desktop on the monitor
plugged into the Radeon 5770, I installed the Radeon 5770 drivers from
Device Manager with an INF file instead of the full Catalyst Control
Center, as I hear the latter causes more BSOD issues. Additionally,
after around one week of playing around with the GPU in the VM (even
leaving a game open all night), I haven't noticed any issues, and the
games I tried (Path of Exile, League of Legends) worked flawlessly with
it. I only had a single GPU crash, with loss of monitor signal and the
VM destroying itself, but that may not necessarily be attributable to
Xen.

ATI drivers are buggy in all sorts of ways. Issues I have had
include:

1) GPU-Z causes a crash (the host may survive it with PCI ACL
support, my hardware lacks it so it crashes the whole host)

2) Automatic power management is broken, at least on my 7970 - the
fan doesn't spin up according to the driver's power curve, possibly
because sensor access is broken in a VM (potentially related to why
GPU-Z crashes). The net result is that the fan sits at 20% whatever you
do until the GPU hits 90C. At that point the card's on-board power
management kicks in and cranks the fan up to 100%. On a standard
reference design card, the fan running at over 80% produces enormous
noise and vibration - enough to make the disks in the machine start
generating hundreds of pending sectors every day. The only way to wake
the card up and get it back below 100% fan speed is to manually force
the fan speed using the CCC (which is difficult if you don't have it
installed for fear of it BSOD-ing the VM).

Those are the issues with the drivers being broken; I'm not going to go
into the driver needlessly crippling the card's capabilities right now,
because that is probably less relevant to your use case.

Also, I didn't notice any issues with the so-called "Soft Reset" or
FLR, but maybe that was because I built Xen with the Radeon patch
included.

Does that issue a bus reset to reset the card?

For example, I used xl destroy to abruptly shut down the VM. The
monitor shows a frozen image of whatever was last displayed, yet I am
able to create the VM again, with the video card passed through, with
no issues. I also didn't notice any performance degradation, although
I'm not sure whether that issue applies only to reboot scenarios or
also to shutting down and re-creating the VM/DomU without restarting
Dom0.

Applies to most scenarios. Weirdly, I found it doesn't happen on
all cards. For example, on a 7450 it is possible to reboot the VM
without performance degradation and video corruption; on a 7970 I
never managed to reboot the domU without it breaking. I hypothesized
that this could be because the 7450 has no auxiliary power input,
which makes it more susceptible to actually getting reset via
secondary means (e.g. PCIe power saving cutting power to the slot),
whereas the 7970's auxiliary power inputs keep it alive when the power
to the slot is switched off - which is arguably a hardware bug in the
ATI cards' power management.

Overall, it seems very functional and quite reliable. The only actual
issue I find is that when I create the VM, I need to have the VM window
(check the attached file) active on the main monitor; otherwise it
seems that after the Windows XP splash screen, when the video
resolution changes, it usually either BSODs or doesn't initialize the
video card properly, and instead the monitor stays in standby while the
VM window displays the Windows desktop as if there were no VGA
passthrough at all.

Not sure what you are describing here. If you set the domU output
to VNC this shouldn't matter. I only ever check the VNC output when
troubleshooting, e.g. to see if there's a crash. I don't think I've
checked it since I switched to Nvidia cards.

2) VNC vs SDL, Keyboard and Mouse focus

I have tried both VNC and SDL and I prefer the latter. When I use
SDL, the VM window automatically pops up, though that is rather
unnecessary as it later goes black once the Radeon takes control of the
video output on the other monitor. With VNC I have to manually launch
vncviewer to be able to control the VM. Most importantly, with SDL,
when I click on the VM's black window, control of the keyboard and
mouse goes to the VM, while with VNC I never managed to get control of
the keyboard. The mouse pointer works without having to make the VM
window the active one, as if I were using a free-mouse tool in a
windowed game; however, the VM's black window surface on Dom0 doesn't
let me cover the entire desktop surface of the VM, so it is rather
useless.

VNC works fine for me when I use it, but most of the time I use a
separate mouse/keyboard passed to the VM. My main setup is 3 GPUs,
monitors, sound cards, keyboards and mice all running off of one
physical machine.

The only thing that annoys me about using SDL is that the black screen
is always open while the VM is running, and having to click it every
time I want to switch control to the VM is rather annoying. Isn't there
any way to change keyboard and mouse focus from Dom0 to a DomU and
vice versa, as if they were consoles? For example, I might want to use
Ctrl + Alt + F1 to get control of Dom0, then use Ctrl + Alt + F2 to
switch control to the VM. This would increase usability.

There is no way to do that. As I said, I use separate
mouse/keyboard/monitor for each VGA passthrough VM I use.

3) Disk Images

As can be seen in the CFG file I copy-pasted, I'm using file: for my
IMG disk images. However, some other documentation, like the blktap2
page on the Xen wiki, mentions that I can use tap:tapdisk:aio:. Is
there any reason why I should pick one over the other? Do they have
specific format support or anything else I should be careful with?
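
To be concrete, the two spellings in question would be along these lines (the path and device name are placeholders; tap2:aio is the variant used in the disk line elsewhere in this thread):

disk = [ 'file:/var/lib/xen/images/winxp.img,hda,w' ]
disk = [ 'tap2:aio:/var/lib/xen/images/winxp.img,hda,w' ]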

Besides, is there any easy way to mount the IMG disk image files on
Linux (including NTFS partitions) so I can retrieve or add files when
the VM is not running? I still haven't learned to set up networking
with Linux, and I need a workable way to move data to and from the VM
disk images.

You need to install the ntfs-3g package and do something like:

# attach the image to a loop device, map its partitions, then mount the first one
losetup /dev/loop0 /path/to/file.img
kpartx -a /dev/loop0
mount /dev/mapper/loop0p1 /path/to/mountpoint

Make sure this is never mounted while the VM is using it or you will
very thoroughly destroy the FS.
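
The reverse, before starting the VM again, would be:

umount /path/to/mountpoint
kpartx -d /dev/loop0
losetup -d /dev/loop0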

I know about LVM volumes, but find them a bit harder to manage than
disk images. Plus, performance is currently adequate this way.

Personally I use ZFS for everything nowadays. Try it and you will
never look back.

4) Audio emulation

While the Windows XP VM works nicely with games, I have the issue that
there is no audio coming from it, which seems to be the most important
missing piece before I can call my VM "production-ready" for playing
games as if it were my old computer. I didn't try to pass through the
integrated Realtek sound card, but that would be rather stupid, as I
need sound both in the current Linux Dom0, which I'm using for simple
tasks like browsing, and in the Windows XP VM for games. This means I
have to rely on emulated audio devices, which as far as I know are
common. soundhw = 'ac97' seems to work: Windows XP recognizes the sound
card and installs the drivers for it with no issues. The VM also has
the HDMI device passed to it along with the GPU. However, after
googling a lot, I didn't find any easy way to get audio from a DomU to
Dom0 to be mixed, so that I could rely on emulated sound cards instead
of needing one per VM the way you do with video cards.

The problem is that support for Intel HD Audio has only been added
very recently in qemu, and traditional qemu doesn't have it. The
other half of that problem is that XP and later don't have drivers
for other emulated devices that traditional qemu supports (e.g. the
once ubiquitous SB16).

As far as I know, there are other VMMs, like VirtualBox, where you can
usually get sound from the VMs easily, and they also use QEMU-emulated
sound cards as Xen does. Is there any reason why getting audio doesn't
seem to be easy to do on Xen?

If you use the default qemu (a.k.a. upstream) rather than
traditional, it may have the Intel HDA support, and you can use the
Intel HDA driver in domU with it. There is no known driver that works
properly with Windows XP or later for any of the old emulated devices
traditional qemu can provide.
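
If your xl is recent enough, the relevant guest config lines for that would be something like the following - I haven't checked the exact soundhw value against your Xen build, so treat it as a sketch:

device_model_version = 'qemu-xen'
soundhw = 'hda'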

I know that many people consider a USB sound card cheap enough to pass
to the VM, but I don't have easy access to those, so I would need a
software way to get sound from multiple VMs into the single pair of
headphones connected to the integrated Realtek sound card.

USB audio devices are _cheap_ and easily available. I use these and
they work great:

http://amzn.to/1gzLVAX [3]


And with buy-one-get-one-free you get two for less than the price
of a beer.

Note - you may find it preferable to pass through the USB controller
via PCI passthrough, rather than the USB device via USB passthrough
(USB passthrough seems to chew through about 5% of a CPU core per
device).
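
For illustration, the two approaches in the guest config look roughly like this (the BDF and the vendor:product IDs are placeholders):

# whole USB controller via PCI passthrough
pci = [ '00:1a.0' ]

# versus a single device via emulated USB passthrough
usb = 1
usbdevice = [ 'host:0d8c:013c' ]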

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users [4]



Links:
------
[1] http://wiki.xen.org/wiki/Xen_Configuration_File_Options
[2] http://xenbits.xen.org/docs/unstable/man/xl.cfg.5.html
[3] http://amzn.to/1gzLVAX
[4] http://lists.xen.org/xen-users

