
Re: [Xen-users] Multiseat workstation with one VM per user



On 2015-01-05 12:08, Luis P. Mendes wrote:
Hi,

2015-01-01 2:22 GMT+00:00 Gordan Bobic <gordan@xxxxxxxxxx>:

On 2014-12-31 12:38, Luis P. Mendes wrote:

- base system (dom0) as lean as possible, just for Xen


Trying to lean things out to a great extent is generally a
waste of time. On any recent Linux distribution the base
install is sufficiently large that it's a losing game, and
saving a few GB of disk space is not worth the effort.

- one Slackware VM and one Ubuntu VM with direct access to hardware
via PV


You need to clarify what exactly you mean by this. Getting
hardware passthrough working at all can be hit and miss and
is very hardware dependent. There are so many hardware and
firmware bugs around that luck is a large factor in hardware
selection.
I'd like to have both VMs running in paravirtualized mode, as in
http://wiki.xen.org/wiki/Xen_Project_Software_Overview#Xen_Project_Paravirtualization_.28PV.29
with direct access to a dedicated graphics card and a USB controller or USB devices.




- other VMs for occasional use, which can run in virtualized hardware.


- three fanless graphic cards, for example AMD Radeon 6450. One for
base system (could be a cheaper one), and one dedicated (passthrough)
to Slackware VM, and similar for the third one for the Ubuntu VM.
I'd be using HDMI as the output interface for the two VMs and VGA
for the base system, in case of necessity.

I've read http://wiki.xen.org/wiki/Xen_VGA_Passthrough and
http://wiki.xen.org/wiki/Xen_VGA_Passthrough_Tested_Adapters

but still would like your opinions, as it's my first time with Xen
and I'm not fully aware of all the corner cases I could run into.


People's experience with ATI cards is at best mixed. I never got
it fully working. Most people find it works OK on the first boot
of the VMs, but as soon as you need to reboot VMs things fall
apart pretty quickly with cards not being reinitialized properly
on a reboot. That's with Windows VMs. With Linux VMs, a lot would
depend on how up to the job the radeon driver is. Last I checked,
it wasn't.
I've no experience with this, but from what I've read I always got the
impression that Nvidia's proprietary blob was better than AMD/ATI's on
Linux, but that the open-source ATI radeon driver was better than
Nvidia's.
The reboot problem you mention is something I have to take into consideration.

I have no idea how/if Linux drivers in domU handle GPU passthrough.
It is not a particularly commonly used arrangement.
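
For what it's worth, the xl plumbing itself is the easy part. Roughly
something like the below (the PCI addresses, LV and file names are
placeholders, check yours with lspci; the devices also need to be
bound to xen-pciback first):

  # in dom0: mark the devices assignable (BDFs are examples)
  xl pci-assignable-add 01:00.0   # the GPU
  xl pci-assignable-add 01:00.1   # the GPU's HDMI audio function
  xl pci-assignable-add 00:1d.0   # a USB controller

  # in the PV guest config, e.g. /etc/xen/slackware.cfg
  name   = "slackware"
  memory = 4096
  vcpus  = 4
  kernel = "/boot/vmlinuz-xen"    # or bootloader = "pygrub"
  disk   = [ 'phy:/dev/vg0/slackware,xvda,w' ]
  vif    = [ 'bridge=xenbr0' ]
  pci    = [ '01:00.0', '01:00.1', '00:1d.0' ]

Whether the radeon driver in the guest then does anything useful with
the card once it shows up is the part I can't vouch for.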

If all you are after is Linux-on-Linux kind of a setup, you would
probably be a lot better off with something like LXC, OpenVZ or
VServer for separating server tasks.
But AFAICT, with Linux-on-Linux containers both sides have to share the
same OS and the same kernel (maybe a kernel with a minor version
difference is supported). As I want to have Slackware and Ubuntu, I
don't think I can use LXC or OpenVZ for that.

Yes you can. With containers you can run any distro's userspace
in the chroot. The kernel is shared, but that is largely
irrelevant (and a good thing in terms of performance, unless you are
concerned about security related to kernel level exploits).
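
With LXC, for example, the kernel comes from the host but the
container's root filesystem can be any distro's userspace. Something
along these lines (the container name is made up, and the download
template needs network access):

  # create an Ubuntu userspace on whatever distro the host runs
  lxc-create -n ubuntu-seat -t download -- -d ubuntu -r trusty -a amd64
  lxc-start -n ubuntu-seat
  lxc-attach -n ubuntu-seat   # drops you into the Ubuntu userspace

The Slackware side would just be another rootfs directory tree; only
the kernel is shared.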

If all you need is a multi-seat workstation, you don't need
virtualization at all, you can just configure multiple Xorg
instances to access different GPU/keyboard/mouse sets.
As stated above, since I want to have one system with Slackware and
another one with Ubuntu, I don't see any other way than to use
virtualization.

See above. You don't actually need full virtualization for that.
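
The crude way to do it is one xorg.conf per seat, each tied to its own
GPU by BusID, plus a separate X server per seat on its own VT. A rough
sketch (the BusID and file names are placeholders):

  # /etc/X11/xorg.conf.seat1 (fragment)
  Section "Device"
      Identifier "GPU-seat1"
      Driver     "radeon"
      BusID      "PCI:1:0:0"
  EndSection

  Section "ServerFlags"
      Option "AutoAddDevices" "false"   # don't grab the other seat's input
  EndSection

  # start the second seat's server on its own VT
  X :1 -config xorg.conf.seat1 vt8 &

With AutoAddDevices off you then list each seat's keyboard and mouse
explicitly in InputDevice sections; on a systemd-based distro, logind's
multi-seat support (loginctl attach) gets you much the same with less
manual work.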

7. (repetition) Is NetBSD, with its lower power requirements, up to
this task?


There is no gain. Getting this kind of a setup to work reliably at
all on OS-es that are used (and thus debugged) by thousands of people
is difficult enough without getting bogged down in OS-es that only
a handful of people use in a similar scenario.

In conclusion:

One workstation, with native disk and graphics card access for each of
the two main VMs, running as fast as if they were native.


As fast as native? Not going to happen. Fast enough? Sure. I have
a triple seat gaming machine that works quite well, but that is
very different from what you are proposing above (Nvidia GPUs,
Windows guests).
So, for graphics do you get native performance with the passthrough?

I haven't benchmarked it, but the deterioration is sufficiently
small to be tolerable. After I virtualized, I upgraded from a GTX680
to a GTX780Ti to make up for any performance drop in gaming.

But for the rest, are you able to measure how much slower the guest
systems are?

It depends on the workload. With a highly parallel workload
capable of saturating the host, the performance can be quite
dire:

http://www.altechnative.net/2012/08/04/virtual-performance-part-1-vmware/

I do regular performance testing with MySQL under parallel, saturation
level loads and the performance degradation hasn't changed to any
significant extent since that article was written.
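
To give an idea of what I mean by saturation level: a sysbench OLTP
run with at least as many threads as the machine has cores, e.g.
(table size, thread count and credentials are just examples):

  sysbench --test=oltp --mysql-user=test --mysql-db=test \
           --oltp-table-size=1000000 prepare
  sysbench --test=oltp --mysql-user=test --mysql-db=test \
           --oltp-table-size=1000000 --num-threads=16 \
           --max-time=300 --max-requests=0 run

Run it once on bare metal and once in the guest and compare the
transactions per second.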

Gordan


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users

 

