Re: [Xen-users] Multiseat workstation with one VM per user
Hi,

2015-01-01 2:22 GMT+00:00 Gordan Bobic <gordan@xxxxxxxxxx>:
>
> On 2014-12-31 12:38, Luis P. Mendes wrote:
>
>> - base system (dom0) as lean as possible, just for Xen
>
> Trying to lean things out to a great extent is generally a
> waste of time. On any recent Linux distribution the base
> install is sufficiently large that it's a losing game, and
> saving a few GB of disk space is not worth the effort.
>
>> - one Slackware VM and one Ubuntu VM with direct access to hardware
>> via PV
>
> You need to clarify what exactly you mean by this. Getting
> hardware passthrough working at all can be hit and miss and
> is very hardware dependent. There are so many hardware and
> firmware bugs around that luck is a large factor in hardware
> selection.

I'd like to have both VMs in paravirtualized mode, as in
http://wiki.xen.org/wiki/Xen_Project_Software_Overview#Xen_Project_Paravirtualization_.28PV.29
with direct access to a dedicated graphics card and a USB controller
or individual USB devices. Something like the sketch below is what I
have in mind.
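(Just to illustrate: a rough xl guest config of the kind I mean. The
name, paths, sizes and PCI addresses are placeholders I invented, and
I still have to check whether a PV guest will take the Radeon this
way at all.)

    # /etc/xen/slackware.cfg: hypothetical xl config for a PV guest
    # (all paths and PCI addresses below are made-up placeholders)
    name    = "slackware"
    kernel  = "/boot/vmlinuz-pv"          # PV kernel for the guest
    ramdisk = "/boot/initrd-pv.img"
    memory  = 4096
    vcpus   = 4
    disk    = [ 'phy:/dev/vg0/slackware,xvda,w' ]
    vif     = [ 'bridge=xenbr0' ]
    # Hand the dedicated Radeon and one USB host controller to the
    # guest (both must first be made assignable in dom0):
    pci     = [ '0000:01:00.0', '0000:00:14.0' ]

The Ubuntu VM would get an equivalent file with its own card and
controller.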
>
>> - other VMs for occasional use, which can run in virtualized
>> hardware.
>>
>> - three fanless graphics cards, for example AMD Radeon 6450. One
>> for the base system (could be a cheaper one), one dedicated
>> (passthrough) to the Slackware VM, and similarly the third one for
>> the Ubuntu VM. I'd be using HDMI as the output interface for the
>> two VMs and VGA for the base system, in case of necessity.
>>
>> I've read http://wiki.xen.org/wiki/Xen_VGA_Passthrough [1] and
>> http://wiki.xen.org/wiki/Xen_VGA_Passthrough_Tested_Adapters [2]
>> but still would like your opinions, as it's my first time with Xen
>> and I'm not fully aware of all the corners I could face.
>
> People's experience with ATI cards is at best mixed. I never got
> it fully working. Most people find it works OK on the first boot
> of the VMs, but as soon as you need to reboot VMs things fall
> apart pretty quickly, with cards not being reinitialized properly
> on a reboot. That's on Windows VMs. With Linux VMs, a lot would
> depend on how up to the job the radeon driver is. Last I checked,
> it wasn't.

I've no experience with this, but from what I've read I always got
the impression that the nvidia proprietary blob was better than
AMD/ATI's on Linux, but that the open-source radeon driver was better
than Nvidia's. The reboot problem you mention is something I have to
take into consideration.

> If all you are after is a Linux-on-Linux kind of setup, you would
> probably be a lot better off with something like LXC, OpenVZ or
> VServer for separating server tasks.

But AFAICT, Linux-on-Linux guests should have the same OS and the
same kernel (maybe a kernel with a minor update difference is
supported). As I want to have Slackware and Ubuntu, I don't think I
can use LXC or OpenVZ for that.

> If all you need is a multi-seat workstation, you don't need
> virtualization at all, you can just configure multiple Xorg
> instances to access different GPU/keyboard/mouse sets.

As stated above, since I want one system with Slackware and another
with Ubuntu, I don't see any other way than virtualization.

>> Now, what I'd like to know:
>>
>> 1. Is Slackware 14.1 or current with the xen package from
>> http://slackbuilds.org/repository/14.1/system/xen/ [3] as stable as
>> Slackware without it? I've been using Slackware for ten years as a
>> rock solid Linux. Would I gain anything in having another OS as
>> dom0? Is NetBSD up to the task?
>
> I think this is the first time I heard of anyone using Slackware in
> at least 10 years. Most people these days prefer to have a package
> management system in their OS.

Slackware does not give me any headache when installing programs,
although it has no built-in dependency management system. There are
some efforts to accomplish this: https://github.com/dslackw/slpkg
But, in the past, I had problems with .deb and .rpm packages when I
tried to install programs not available as packages. For me, it's
easier to install them in Slackware.

>> 6. I've read that it's more stable to pass through USB devices
>> individually than USB host controllers. Is this still the case? As
>> I'd like each of the two of us to have two USB 3.0 ports in
>> exclusivity.
>
> Passing USB devices has been hit and miss for me. Passing PCIe
> devices that are USB host controllers, on the other hand, has
> worked well.

Ok, better to know this. Then I'll probably hand over whole
controllers, along the lines of the sketch below.
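(Again only a sketch. The PCI address is a placeholder; it assumes
dom0 has the pciback driver available, and that the board puts each
pair of USB 3.0 ports on its own controller, which lspci will tell.)

    # Find the USB host controllers and their PCI addresses:
    lspci | grep -i usb

    # Detach a controller from dom0 and mark it assignable:
    xl pci-assignable-add 0000:00:14.0
    xl pci-assignable-list

    # Then either hot-plug it into a running guest ...
    xl pci-attach slackware 0000:00:14.0
    # ... or list it in the guest config, as in the earlier sketch:
    #   pci = [ '0000:00:14.0' ]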
>
>> 7. (repetition) Is NetBSD with its lower power requirements up to
>> this task?
>
> There is no gain. Getting this kind of setup to work reliably at
> all on OSes that are used (and thus debugged) by thousands of
> people is difficult enough, without getting bogged down in OSes
> that only a handful of people use in a similar scenario.
>
>> In conclusion:
>>
>> One workstation, with native disk and graphics card access for
>> each of the two main VMs, running as fast as if they were native.
>
> As fast as native? Not going to happen. Fast enough? Sure. I have a
> triple-seat gaming machine that works quite well, but that is very
> different from what you are proposing above (Nvidia GPUs, Windows
> guests).

So, for graphics, do you get native performance with the
passthrough? And for the rest, were you able to measure how much
slower the guest systems are? I was thinking of nothing more
elaborate than the comparison sketched below.
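(Hypothetical commands, nothing rigorous: run the same tests in dom0
and inside a guest and compare the numbers. Crude, but it should show
the order of magnitude of the overhead.)

    # Rough CPU comparison; run in dom0, then inside each guest:
    openssl speed -evp aes-128-cbc

    # Rough disk comparison: write 1 GiB, bypassing the page cache:
    dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct
    rm /tmp/ddtest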
Luis

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users