
Re: [Xen-users] gaming on multiple OS of the same machine?



Hi Peter,

You are correct, I meant to type "without" the NF200 chip.  I will now explain in detail:

I checked every major manufacturer's high-end boards, looking for the best feature-to-price ratio and aiming for as many PCIe slots as I could get.  Pretty much every single board with more than 3 PCIe (x16) slots came with some form of PCIe switch, and most of these switches break IOMMU in one way or another.

The ASRock Extreme7 Gen3 came with two of them, the NF200 and the PLX PEX8606.  The NF200 is completely incompatible with IOMMU; anything sitting behind it creates a layer of fail between your card and success.

The PLX was an entirely different kind of problem: it merges device "functions" under shared device identifiers.  If you run "lspci" you get a list of your devices, identified by "bus:device.function".
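
To make that concrete, "lspci" output looks roughly like this (the numbers and names below are made up purely for illustration):

    02:00.0 VGA compatible controller: ATI Radeon HD 6000 series
    02:00.1 Audio device: ATI Radeon HDMI Audio
    05:00.0 Ethernet controller: Broadcom NetLink Gigabit Ethernet

"02:00.0" reads as bus 02, device 00, function 0; the card's HDMI audio shows up as function 1 of the same device.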

The last two PCIe slots on the ASRock Extreme7 Gen3 shared functionality with onboard components; for example, the second-to-last slot was shared with my dual onboard LAN and the ASMedia SATA controller.  When I used xen-pciback.hide to remove just the graphics card, it removed the other two components as well (treating them as one "device").  As a result I lost internet and four drive ports' worth of storage.
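
For anyone following along, the hiding itself is done on the Dom0 kernel command line, or as a module parameter if pciback is built as a module (the BDF below is just a placeholder):

    # pciback built into the kernel: append to the kernel line in GRUB
    xen-pciback.hide=(02:00.0)

    # pciback built as a module
    modprobe xen-pciback hide='(02:00.0)'

Either form takes a list of BDFs, so you can hide several devices at once.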

In conclusion, I've already tried to find a way to build a 3-4x gaming machine and failed due to hardware problems; I never even got a chance to run into software issues.

*************

In my opinion it would be cheaper, in both time and money, to buy two computers and reproduce one setup on the other than to try to get four machines running on a single physical system.

A 4-core i7 with hyperthreading is treated as 8 vCPUs, and while you "could" give two to each Windows machine, that leaves nothing for the control OS (Dom0) or the hypervisor (Xen) itself.  They would share CPUs of course, but in my opinion you're bound to run into resource contention at high load.
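
To put rough numbers on it, the usual way to keep Dom0 out of the guests' way is to cap and pin it on the Xen command line and then pin each DomU to its own cores in its config; this is a sketch, not a tuned recipe:

    # Xen hypervisor boot options (GRUB)
    dom0_max_vcpus=2 dom0_vcpus_pin

    # in each Windows DomU config
    vcpus = 2
    cpus  = "2-3"    # pin this guest to cores 2 and 3

With a 4-core/8-thread chip, four guests at 2 vCPUs each already eat all 8 threads, which is exactly the squeeze I'm describing.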

*************

I didn't think the dual-GPU cards would work, for the same reason my PLX chip caused trouble.  While such a card has two GPUs, it is probably treated as a single "device" with multiple "functions", and you can't share a single device between multiple machines.

So, you will need a motherboard with four distinct PCIe x16 slots that are not tied to a PCIe switch such as the PLX or NF200 chip.
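
One way to check a board before committing to it: lspci's tree view makes an intermediate switch obvious, because the GPUs hang off an extra bridge instead of sitting directly on a root port.  The layout below is illustrative, not from a real board:

    $ lspci -tv
    -[0000:00]-+-01.0-[01]----00.0  GPU directly on a root port (good)
               \-03.0-[02-04]--+-08.0-[03]----00.0  GPU behind a PLX bridge
                               \-09.0-[04]----00.0  second GPU behind the same bridge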

I can't say that such a board doesn't exist, but my understanding is that no board manufacturer is producing consumer hardware specifically for virtualization, and an NF200 or PLX is beneficial to anyone running a single-OS system with a multi-GPU configuration (SLI/Crossfire), which accounts for the majority of their target market.

*************

Armed with this knowledge, here is where you may run into problems:

-  Finding a board with 4x PCIe x16 slots not tied to a PCIe switch
-  Sparing enough USB ports for all machines' input devices
-  Being limited to around 3GB of RAM per HVM unless you buy 8GB RAM sticks (see the sample config after this list)
-  Needing a 6-core i7 to power all systems without potential resource conflicts
-  Encountering bugs nobody else has when you reach that 3rd or 4th HVM
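
For reference, the sample config mentioned above: a bare-bones Windows HVM along these lines might look something like this (names, paths and the BDF are placeholders):

    builder = 'hvm'
    name    = 'windows-gamer-1'
    memory  = 3072                          # ~3GB per HVM, per the RAM point above
    vcpus   = 2
    cpus    = "2-3"                         # give it its own cores
    disk    = [ 'phy:/dev/vg0/win1,hda,w' ]
    vif     = [ 'bridge=xenbr0' ]
    pci     = [ '02:00.0' ]                 # the GPU hidden from Dom0 with xen-pciback.hide
    boot    = 'c'

Multiply that by three or four guests and the slot, USB and RAM limits above add up very quickly.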

If I were in your shoes, I would build two systems.  Others have already reported success with that approach, so you'll have an easier time getting it set up, and you won't buy all that hardware only to run into some limitation you hadn't planned on.

*************

I started my system with stock air cooling and 2x 120mm fans in a cheap mid-tower case.  The CPU never went over 60C, the GPU doesn't overheat either, and the ambient temperature is around 70F (about 21C).

I did upgrade my stock CPU cooler to a Corsair H70 Core self-contained liquid cooling system; it was inexpensive, my CPU stays around 40C on average, and it's even quieter than before.

I have never run more than one GPU in my computers before, so I don't know whether some special magic happens when you have two or more that makes them suddenly run even hotter, but I have to imagine that's not the case unless you're doing some serious overclocking.

*************

The ASRock Extreme4 Gen3 does have enough PCIe slots that I could connect three GPUs and still have space for a single-slot PCIe device, but I only have a 650W power supply and no need for more than one Windows instance.

*************

Secondary VS Primary:

Secondary cards become available after the system boots up.  Primary cards are in use from the moment the system starts; the primary card is where you will see POST at boot time and the Windows logo.

Secondary passthrough works great for gaming, brings up the display once the machine has booted without any problems, and takes practically no extra effort on your part to set up.

Primary passthrough requires custom ATI patches, and the patches that exist may not work for all cards.

I began looking into primary passthrough very recently, because I use my machine for more than just games and ran into a problem: software like CAD packages, Photoshop, and 3D sculpting tools uses OpenGL and only works with the primary GPU, which means it either won't run or runs without GPU acceleration (slowly).
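
In config terms the difference is small: secondary passthrough is just an ordinary PCI assignment, while primary passthrough additionally needs the gfx_passthru switch (and, for ATI cards, the patches mentioned above).  Roughly, with a placeholder BDF:

    # secondary passthrough - the card appears as an extra display adapter
    pci = [ '02:00.0' ]

    # primary passthrough - the passed-through card becomes the guest's boot VGA
    pci = [ '02:00.0' ]
    gfx_passthru = 1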

*************

A lot to take in, but I hope my answers help a bit.  If you have more questions I'll be happy to share what knowledge I can.

~Casey

On Sun, May 13, 2012 at 7:30 AM, Peter Vandendriessche <peter.vandendriessche@xxxxxxxxx> wrote:
On Sat, May 12, 2012 at 12:54 AM, Andrew Bobulsky <rulerof@xxxxxxxxx> wrote:
Hello Peter,

I've done exactly this, and I can affirm that it kicks ass ;)

Wonderful, that's the best answer the existential question can have. :)


Make sure that you actually have the cores to give to those DomUs.
Specifically, if you plan on making each guest a dual core machine,
and have 4 guests, get an 8 core chip.

8 core or 8 threads? I was planning to get one of those 4core/8thread CPUs via hyperthreading. Sufficient or not?
I read in the documentation that 2 threads are reserved for the Windows graphics anyway, so it would have to be 4 virtual cores either way.


> 2) Is it possible to split dual GPUs, e.g. drive 4 OSes with 2x Radeon HD 6990 (=4 GPUs in 2 PCI-e slots)?
Alas, no. Not because Xen or IOMMU won't allow it, but because of the
architecture of the 6990. While the individual GPUs /can/ be split up
from the standpoint of PCIe, all of the video outputs are hardwired to
the "primary" GPU.  So while it would work in theory, there's nowhere
to plug in the second monitor.

So, are there any other dual GPUs that do work here? They don't have to be high-end (low power is even preferred), but given that most graphics cards are 2 PCIe slots high, the choice between 2 dual cards and 4 single cards makes a HUGE difference in motherboard options, case requirements, cooling solutions, and connectivity (PCIe wifi, PCIe USB controllers, ...), so anything that delivers 4 discrete GPUs via 2 PCIe slots would be far better than any other option.


I suggest picking up a Highpoint RocketU 1144A USB3
controller. It provides four USB controllers on one PCIe 4x card,
essentially giving you four different PCIe devices, one for each port,
that can be assigned to individual VMs.

If the 2x dual GPU option works, then that's certainly possible. Otherwise, I'll really need all PCIe slots for the GPUs (used or covered). And that is a problem in itself, as I'd want to use wifi for networking and it needs a PCIe slot.


If you're still only in these
conceptual stages of your build, I may have some suggestions for you
if you like.

I am still in the conceptual stage, and I'm very much willing to listen.

Currently I'm mainly wondering how to get 4 GPUs cooled cheaply. Watercooling is overkill (I don't need high-end graphics like the HD 6990 anyway, I just want to play games at medium-low resolutions for the coming years), but with air cooling they will block each other's airflow and steal the slots for an extra wifi card or for the USB controller. So anything to get 2x dual GPU working here would be great.


Now that I think of it, you'll have the least amount of hassle by
doing "secondary VGA passthrough," which is just assigning a video
card to a vm as you would any other PCIe device. I'll readily admit
that this is nowhere near as cool as primary passthrough, but it
involves the least amount of work.

Where can I find information on the difference between these? Google suggests that primary/secondary VGA passthrough is passing the primary/secondary GPU to the VM, but that doesn't seem to make sense here...


Best regards,
Peter

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users
