
Re: [Xen-users] New to Xen, looking for advice regarding system configuration



Hello, uh, Braindead!

In my company, I virtualized 30 servers of mixed genetics (Windows
2003, Windows 2008 R2, Debian, Ubuntu, and Gentoo -- G is fast
replacing D+U here) on XenServer (Free Edition).

The benefit of XenServer would be Citrix's easy-to-use XenCenter,
which provides you with a graphical console to the Windows VMs.
(Unfortunately, XenCenter is Windows-only).

For your storage, if you are not using a battery-backed server-grade
controller, I suggest using Openfiler to act as a cache.

BTW, Gentoo Linux runs very well paravirtualized on XenServer. I'm
currently writing up a HOWTO on deploying fully paravirtualized Gentoo
VMs on XenServer (including how to install the latest Citrix
xe-guest-utilities).
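
Until that HOWTO is done, here is a rough sketch of the xe CLI steps involved. This is a hypothetical outline, not the final recipe: the template name, PV-args, and device paths are placeholders you would substitute for your own setup.

```shell
# Hedged sketch: create a paravirtualized guest on XenServer via the
# xe CLI. Names below are placeholders, not values from this thread.

# Create a VM from a generic template; capture its UUID
VM=$(xe vm-install template="Other install media" new-name-label="gentoo-pv")

# Switch the VM from HVM boot to paravirtualized boot via pygrub
xe vm-param-set uuid="$VM" HVM-boot-policy=""
xe vm-param-set uuid="$VM" PV-bootloader=pygrub
xe vm-param-set uuid="$VM" PV-args="console=hvc0"

# Boot it; the text console is then reachable with:
#   xe console uuid="$VM"
xe vm-start uuid="$VM"
```

(pygrub reads the guest's own grub config from its disk, so the Gentoo kernel lives inside the VM, not in dom0.)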

Rgds,


On 2011-07-12, Braindead <Braindead@xxxxxxxxxxxx> wrote:
> On Tue, 12 Jul 2011 21:08:37 +0700 "Fajar A. Nugraha" <list@xxxxxxxxx>
> wrote:
>> On Tue, Jul 12, 2011 at 7:59 PM, Braindead <Braindead@xxxxxxxxxxxx>
>> wrote:
>> > My main purpose would be to support my software development and
>> > consulting work. So I need to be able to run various OS's. I
>> > don't develop games so no need for any fancy graphics. I'm used to
>> > the limitations of virtual machines (use VMWare a lot for dev
>> > purposes) and I'm fairly sure Xen can do everything I need.
>> >
>>
>> You haven't said why you want to move away from vmware. If we know
>> what your priorities are, we might be able to give better advice. For
>> example, if you're used to vmware-style GUI, but want an open-source
>> license, XCP might be a better choice. But if you want something you
>> can tinker with, or use bleeding-edge technology, then starting with
>> a distro that includes Xen would be a better choice.
>
> I use VMWare Workstation at work, and I use VirtualBox on Linux a bit.  I
> only mention VMWare to note that I'm used to the concepts of VMs.  I prefer
> running *nix, Gentoo to be precise.
>
> My home server is running a ton of services (subversion, mail, http, backup,
> router, ossec, nagios, dns, dhcp..etc) and for sanity's sake I'd like to
> break that up into multiple servers.  I also need a few windows boxes
> (various configs, versions).
>
> Goal is to consolidate things into one box, and have a complete backup box
> as well.  Thus virtualization.  The Xen 'near bare metal' performance is
> what I'm interested in, and definitely into optimizing every aspect I can
> which is why I use a source distro.
>
>
>> > I expect to have 2-3 virtual machines running most of the time,
>> > possibly 2 working hard (for example restoring a gig+ database
>> > backup on one while programming/doing other tasks on another).
>> > I'll be purchasing 2 identical machines, one as a backup, so I
>> > don't need any extra 'robustness' that a server motherboard/system
>> > would provide. Which leads into the following question.
>> >
>> > Would it make sense to spend extra bucks on a multi processor
>> > motherboard rather than going with a single Core i7 or the like? I
>> > think there are i/o bandwidth benefits with multi processor boards,
>>
>> Are there?
>
> Not sure, which is why I'm askin ;-)
>
>> IIRC the main selling point of server-grade motherboards used to be the
>> ability to use ECC RAM. But now some motherboards for i7 support ECC
>> RAM and SATA III.
>
> Many include system monitoring and alerting capabilities in the BIOS (or at
> least used to, it's been a while since I've worked on server grade
> hardware).
>
>
>> > however, due to a lot of database grinding I tend to do, I suspect
>> > that disk i/o is a limiting factor in my case, which I'll try to
>> > deal with somewhat by RAID0 over 4-5 fast drives. I don't need any
>> > redundancy as all variable data (code and the like) is on remote
>> > servers and already fully backed up.
>>
>> ... which brings another point. If you know you're I/O-starved anyway,
>> why not use SSD? Pure SSD implementation can easily give 10-100x IOPS
>> of HDD. And since you say you'll have an identical machine as backup,
>> if you're worried about SSD lifetime, you can have HDD on the backup
>> machine.
>
> Well, I'd love to...but ;-) My current dev machine has 2TB of databases
> sitting on it that I may need access to at any given time.  I could move
> them onto SSD as needed, however that would take a lot of time, much more
> than just accessing them directly on the slower media.  Might be doable as
> some sort of hybrid setup (some SSDs, some regular HDs), however that
> would likely just confuse me.
>
>
>> Another option would be using SSD as cache, with something like
>> facebook's flashcache. This setup would reduce the possibility of data
>> loss (since SSD will only be cache), and have the additional benefit
>> of higher capacity (compared to pure SSD setup), but is also more
>> complex and (depending on how you look at it) "experimental".
>
> Isn't that what the 'hybrid' drives are?  I'd think those would work outta
> the box, should look just like a regular drive to the OS I'd think?
>
>> > Do folks generally install X11 on Dom0 so they can get a gui
>> > VNC/remote desktop into Windows DomU machines? Or is there some
>> > other mechanism available?
>>
>> Generally speaking you don't need full-blown X desktop on dom0. It can
>> be headless with "minimum" software installed. VNC console of domU is
>> provided by QEMU, not by X desktop on dom0.
>
> Ah, one question that has a simple answer ;-)
>
> Thanks for the thoughts and suggestions.  I know hardware config is complex,
> and depends highly on how things are used...
>
> _______________________________________________
> Xen-users mailing list
> Xen-users@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-users
>
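
P.S. on the flashcache idea raised above: a minimal sketch of what that setup looks like, assuming flashcache has already been built for your dom0 kernel. The device paths and cache name here are placeholders, not anything from this thread.

```shell
# Hedged sketch: put an SSD in front of an HDD as a writeback cache
# with Facebook's flashcache. /dev/sdb (SSD) and /dev/sdc (HDD) are
# placeholders. Note: writeback ("-p back") mode means dirty data can
# be lost if the SSD dies before it is flushed to the HDD.
flashcache_create -p back dbcache /dev/sdb /dev/sdc

# The combined device appears under device-mapper and is then used
# like any ordinary block device:
mkfs.ext4 /dev/mapper/dbcache
mount /dev/mapper/dbcache /srv/databases
```

That is also roughly what the 'hybrid' drives do, except the caching policy is baked into the drive firmware instead of being tunable from the OS.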


-- 
Pandu E Poluan - IT Optimizer
My website: http://pandu.poluan.info/
