
Re: [Xen-users] gaming on multiple OS of the same machine?



That's excellent!

Thanks for that info, it is *very* helpful.

I'm currently having a problem where, after installing the GPLPV
drivers (from here:
http://wiki.univention.de/index.php?title=Installing-signed-GPLPV-drivers
), my system BSODs during winload on atikmpag.sys.

You're running GPLPV... are you running all of the drivers, or just select ones?


On Sat, May 12, 2012 at 1:48 PM, Casey DeLorme <cdelorme@xxxxxxxxx> wrote:
> Hi Andrew,
>
> You mean the Windows DomU configuration, right?  I put it up on pastebin
> here along with a couple other configuration files:
> http://pastebin.com/9E1g1BHf
>
> I'm just using normal LV partitions and passing them to an HVM; there is no
> special trick, so any LVM guide should put you on the right track.
>
> I named my SSD VG "xen" so my drives are all found at /dev/xen/lvname.
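>
> For reference, the disk entry is the only Xen-specific piece.  A minimal
> sketch of the shape (not my exact pastebin config, just the LV name I
> described above):
>
>     # HVM guest backed by a logical volume (sketch only)
>     builder = 'hvm'
>     memory  = 6144
>     vcpus   = 4
>     disk    = [ 'phy:/dev/xen/windows,hda,w' ]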
>
> **********
>
> The only convoluted part is my Dom0 installation, since I used EFI boot and
> an LV for root (/).  I have two 256MB partitions, one FAT32 for EFI and one
> ext4 for /boot, with the rest of the disk given over to LVM.  I did the LVM
> setup right in the installer: I added the SSD partition as a Physical
> Volume (PV) to a Volume Group (VG), then threw on a few partitions.
>
> I created a Linux root partition of 8GB, a home partition of 20GB, and a
> swap partition of 2GB.  I mapped those in the configuration, then went
> ahead and made a 140GB partition for Windows, and two 4GB partitions for
> pfSense and nginx.
>
> Once the system is installed, the standard LVM tools can be used:
> lvcreate, lvresize, lvremove, the lv/vg/pvdisplay commands, and so on.
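>
> As a rough sketch (assuming the VG is named "xen" as above), the volumes
> I described could be created like so:
>
>     # Dom0 volumes
>     lvcreate -L 8G   -n root    xen
>     lvcreate -L 20G  -n home    xen
>     lvcreate -L 2G   -n swap    xen
>     # guest volumes
>     lvcreate -L 140G -n windows xen
>     lvcreate -L 4G   -n pfsense xen
>     lvcreate -L 4G   -n nginx   xen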
>
> My Disk IO is not optimal, which might be because I run four systems off the
> same drive at the same time, so if you intend to use many systems you may
> want to split them across multiple physical disks.  However, I have reason
> to believe my IO problems are a Xen bug; I just haven't had time to
> test/prove it.
>
> **********
>
> When you pass an LV to an HVM it is treated like a physical disk, so the
> guest will create a partition table, MBR code, and partitions inside the LV
> (partitions within partitions).
>
> When I get some free time I want to write up a pretty verbose guide on LVM
> specifically for Xen; there are plenty of things I've learned about
> accessing those nested partitions too (a taste of it below).
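>
> For example, to get at a guest's partitions from the Dom0 (a sketch using
> kpartx, assuming the Windows LV is /dev/xen/windows; only do this while
> the guest is shut down):
>
>     kpartx -av /dev/xen/windows     # map the nested partitions
>     # they appear under /dev/mapper, e.g. xen-windows1 or xen-windowsp1
>     # depending on your kpartx version
>     mount /dev/mapper/xen-windows1 /mnt   # needs ntfs-3g for a Windows LV
>     umount /mnt
>     kpartx -d /dev/xen/windows      # remove the mappings again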
>
> Some things I learned recently with Xen: IDE drives (hdX) only allow four
> passed devices, so if you have more than 3 storage partitions you will want
> to use SCSI (sdX) for them, but SCSI drives are not bootable.  Hence my
> configuration has "hda" for the boot drive (LV partition) and sdX for all
> storage drives (LV partitions), where X increments alphabetically (a, b, c,
> d, etc).
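>
> In config terms that looks something like this (a sketch with made-up LV
> names):
>
>     disk = [
>         'phy:/dev/xen/windows,hda,w',     # bootable, emulated IDE
>         'phy:/dev/xen/storage1,sda,w',    # extra storage, emulated SCSI
>         'phy:/dev/xen/storage2,sdb,w',
>     ]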
>
> **********
>
> Hope that helps a bit; let me know if you have any other questions or if
> that didn't answer them correctly.
>
> ~Casey
>
>
> On Sat, May 12, 2012 at 1:10 PM, Andrew Bobulsky <rulerof@xxxxxxxxx> wrote:
>>
>> Hello Casey,
>>
>> Quick question!
>>
>> What does the config file entry for the LVM-type setup you have going on
>> for the guest disk look like?  Might you be able to point me to a
>> guide that'll show me how to set up a disk like that?
>>
>> Thanks!
>>
>> -Andrew Bobulsky
>>
>> On Fri, May 11, 2012 at 6:51 PM, Casey DeLorme <cdelorme@xxxxxxxxx> wrote:
>> > Hello Peter,
>> >
>> >
>> > Question #1: Performance
>> >
>> > With x86 virtualization, hardware such as CPUs and memory is mapped
>> > rather than layered, so there should be almost no difference in speed
>> > from running natively.
>> >
>> > I am running Windows 7 HVM with an ATI Radeon 6870.  My system has 12GB
>> > of RAM and a Core i7 2600.  I gave Windows 4 vcores and 6GB of memory;
>> > the Windows Experience Index gives me 7.5 for CPU and 7.6 for RAM.  With
>> > VGA passthrough I get 7.8 for both graphics scores.  I am running all my
>> > systems on LVM partitions on an OCZ Vertex 3 drive; without PV drivers
>> > Windows scored 6.2 for HDD speed, with PV drivers it jumped to 7.8.
>> >
>> > Scores aside, performance with CPU/RAM is excellent.  I am hoping to
>> > create a demo video of my system when I get some time (busy with
>> > college).
>> >
>> > My biggest concern right now is that Disk IO ranges from excellent to
>> > abysmal, but I have a feeling the displayed values and actual speeds
>> > might differ.  I'll be putting together an extensive test for this
>> > later, but let's just say IO speeds vary (even with PV drivers).  The
>> > Disk IO does not appear to have any effect on games in my experience,
>> > so it may only be write speeds.  I have not run any proper disk
>> > benchmarks yet.
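>> >
>> > When I do, the first pass will probably be something as simple as this
>> > (a crude sequential write test, nowhere near a real benchmark):
>> >
>> >     # oflag=direct bypasses the page cache so the number reflects the
>> >     # actual disk path
>> >     dd if=/dev/zero of=testfile bs=1M count=1024 oflag=direct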
>> >
>> >
>> > Question #2: GPU Assignment
>> >
>> > I have no idea how Dual GPU cards work, so I can't really answer this
>> > question.
>> >
>> > I can advise you to be on the lookout for motherboards with NF200
>> > chipsets or strange PCI switches.  I bought an ASRock Extreme7 Gen3, a
>> > great board, but the NF200 is completely incompatible with VT-d, so I
>> > ended up with only one PCIe slot I could pass.  I can recommend the
>> > ASRock Extreme4 Gen3, which I have right now; if I had enough money to
>> > buy a bigger PSU and a second GPU I would be doing exactly what you are
>> > planning.
>> >
>> >
>> > Question #3:  Configuration
>> >
>> > There are two approaches to device connection: USB passthrough and PCI
>> > passthrough.  I haven't tried USB passthrough, but I have a feeling it
>> > wouldn't work with complex devices that require OS drivers, such as
>> > Bluetooth receivers or an Xbox 360 Wireless adapter.
>> >
>> > I took the second approach of passing the USB controllers, but this
>> > will vary by hardware.  The ASRock Extreme4 Gen3 has four USB PCI
>> > controllers.  I have no idea how you would check this from the manual;
>> > I found out when I ran "lspci" from the Linux Dom0.
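>> >
>> > If you want to check yours, it goes roughly like this with the xl
>> > toolstack (a sketch; the PCI address is made up, use whatever lspci
>> > reports on your board):
>> >
>> >     lspci | grep -i usb             # find the controllers' addresses
>> >     xl pci-assignable-add 00:1a.0   # detach one controller from Dom0
>> >     # then in the guest config:
>> >     pci = [ '00:1a.0' ]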
>> >
>> > I had no luck with USB 3.0; many devices weren't functional when
>> > connected to it, so I left my four USB 3.0 ports with my Dom0 and
>> > passed all my USB 2.0 ports.
>> >
>> > Again this is hardware specific: one of the buses had four ports, the
>> > other had only two, so I bought a 4-port USB PCI plate and attached the
>> > additional USB pins from the board to turn the 2-port controller into a
>> > 6-port one.
>> >
>> > I use a ton of USB devices on my Windows system; Disk IO blows, but
>> > everything else functions great.  With PCI-passed USB I am able to use
>> > an Xbox 360 Wireless Adapter, two wireless USB keyboards in different
>> > areas of the room, a Hauppauge HD PVR, a Logitech C910 HD webcam, and a
>> > Logitech wireless mouse.  I had Bluetooth but got rid of it; the device
>> > itself went bad and was causing my system to blue-screen.
>> >
>> > When I tested USB 3.0, I got no video from my Hauppauge HD PVR or my
>> > Logitech C910 webcam, and various other devices failed to function
>> > correctly when connected.
>> >
>> >
>> > Question #4:  Other?
>> >
>> > I am 100% certain you could get a system running two Windows 7 HVMs
>> > for gaming, but you may need to daisy-chain some USB devices if you
>> > want more than just a keyboard and mouse for each.
>> >
>> > Also, if you are not confident in your ability to work with *nix, I
>> > wouldn't advise it.  I had spent two years tinkering with web servers
>> > on Debian, so I thought I would have an easy time of things.
>> >
>> > I tried it on a week off, and it ended up taking me two months to
>> > complete my setup.  The results are spectacular, but be prepared to
>> > spend many hours debugging unless you find a really good guide.
>> >
>> > I would recommend going for two Windows guests on one rig, then
>> > duplicating that rig for the second machine.  If you are successful
>> > with the first machine, you can easily copy the process, and it will
>> > save you hours of attempting to get a whole four gaming machines
>> > working on one system.
>> >
>> >
>> > As stated, I only run one gaming machine, but I do have two other HVMs
>> > running: one manages my household's network and the other is a private
>> > web/file server.  So, performance-wise, Xen can do a lot.
>> >
>> > Best of luck,
>> >
>> > ~Casey
>> >
>> > On Fri, May 11, 2012 at 6:17 PM, Peter Vandendriessche
>> > <peter.vandendriessche@xxxxxxxxx> wrote:
>> >>
>> >> Hi,
>> >>
>> >> I am new to Xen and I was wondering if the following construction would
>> >> be
>> >> feasible with the current Xen.
>> >>
>> >> I would like to put 2/3/4 new computers in my house, mainly for gaming.
>> >> Instead of buying 2/3/4 different computers, I was thinking of building
>> >> one computer with a 4/6/8-core CPU, 2/3/4 GPUs, and 2/3/4 small SSDs,
>> >> attaching 2/3/4 monitors, 2/3/4 keyboards, and 2/3/4 mice to it, and
>> >> running VGA passthrough.  This would save me money on hardware, and it
>> >> would also save quite some space on the desk where I want to put them.
>> >>
>> >> If this is possible, I have a few additional questions about this:
>> >>
>> >> 1) Would the speed on each virtual machine be effectively that of a
>> >> 2-core
>> >> CPU with 1 GPU? What about memory speed/latency?
>> >> 2) Is it possible to split dual GPUs, e.g. drive 4 OSes with 2x Radeon
>> >> HD
>> >> 6990 (=4 GPUs in 2 PCI-e slots)?
>> >> 3) How should one configure the machine such that each OS receives only
>> >> the input from its own keyboard/mouse?
>> >> 4) Any other problems or concerns that you can think of?
>> >>
>> >> Thanks in advance,
>> >> Peter
>> >>
>> >>
>
>

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users


 

