
Re: [Xen-users] RAID-1 strategy for a Xen/CentOS server?


  • To: "Tom Brown" <xensource.com@xxxxxxxxxxxxxxxxxxx>
  • From: "Bob Tomkins" <bob.r350@xxxxxxxxx>
  • Date: Fri, 7 Dec 2007 12:31:47 -0800
  • Cc: xen-users@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Fri, 07 Dec 2007 12:32:38 -0800
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

Hi Tom,

On Dec 7, 2007 12:05 PM, Tom Brown <xensource.com@xxxxxxxxxxxxxxxxxxx> wrote:
I was going to say "add memory", and that's always a help with respect to
reducing disk I/O ... but Zimbra seems to be a mail-server platform ...
and MTA software often _forces_ disk I/Os via fsync() type calls. Still,
making sure that the VM has lots of memory will reduce the disk reads, and
allow non-sync()'d I/O to be cached for a bit longer and written to disk
when convenient.

You're correct re: Zimbra's disk I/O intensity. Whether that's an obstacle that "more memory" will overcome, or merely mitigate, I simply don't know at this point.

FWIW, "this box" has 4GB RAM atm, expandale to 8GB.

frankly, IMHO and in my experience, things that eat up disk I/O aren't
particularly well suited to running on virtual machines. XEN generally(*)
is for cutting up a big machine into smaller pieces which can be more
conveniently administered... partly due to the isolation between
machines...

That's my goal.  In particular, isolating Zimbra, which I understand has some "quirks & oddities" that I'd rather not have "polluting" the rest of my Production server.
 
... but disk drives do not "isolate" well unless you are dedicating
spindles to individual domains... if a domain starts maxing out its
attached drives, any other domain using that drive is going to see its
load average go up as processes trying to use those drives get stuck in a
long queue waiting for disk I/O.

Clear.
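
(Noted for later: as I understand it, dedicating a spindle to a domU just means handing the whole block device to that guest in its config file, along the lines of the fragment below -- file and device names made up.)

    # /etc/xen/zimbra.cfg -- fragment only, names made up
    disk = [ 'phy:/dev/sdb,xvda,w' ]   # whole physical disk passed through to the guest as xvda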

(* - there are situations where inserting a "shim" between the physical
hardware and the O/S is useful... being able to back up a Windows box, or
replicate it via DRBD in real time are examples that require having a
"virtual" layer. The consistency of the virtual machine is also good, as
you can pick up a domU and start it on another physical box and see the
same virtual machine.)

Clear.
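
(For my own reference, I believe the DRBD piece of that looks roughly like the resource definition below -- hostnames, disks and addresses entirely made up.)

    resource r0 {
      protocol C;                      # synchronous replication
      on dom0-a {
        device    /dev/drbd0;          # replicated device the domU's vbd sits on
        disk      /dev/sda5;           # backing partition on this host
        address   192.168.1.10:7788;
        meta-disk internal;
      }
      on dom0-b {
        device    /dev/drbd0;
        disk      /dev/sda5;
        address   192.168.1.11:7788;
        meta-disk internal;
      }
    }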

What is the point in "raid'ing" swap space? There's nothing in swap you
really need to preserve. If your target is trying to increase
reliability...

It is.  And the answer to your question seems to vary greatly depending on whom you listen to.  In my case, the reference is:

http://www.linuxjournal.com/article/5898

"If you are using RAID-1 to help to ensure that your system stays up in the event of a hard disk partition failure, you should consider raiding your swap partition(s). If the disk or partition you are using for swap goes bad, your machine may crash. Using a RAID-1 device for a swap partition can help prevent that crash. If one of the mirrored swap partitions goes bad, the kernel automatically will fail over to the other, and your system should keep running until you can fix the disk problem."
 
I'd find other approaches ... but even so, the same logic
applies, either have the VM swap to a dedicated vbd or have it do LVM on
its existing vbds and swap to one of those ... the same raid rules
apply.

This might be relevant to your HW vs SW raid. Generally HW raid will make
it simpler to replace a drive without having to reboot the system... but
it tends to have constraints of its own...

I gather the Devil's in the Details of the prior two paragraphs.  In principle I understand, but have yet to touch-and-feel in practice.
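
(For my own notes, I read the "LVM on its existing vbds" option as roughly the following, run inside the guest -- device and volume names made up.)

    # inside the domU, on a spare vbd
    pvcreate /dev/xvdb
    vgcreate vg_guest /dev/xvdb
    lvcreate -L 1G -n lv_swap vg_guest
    mkswap /dev/vg_guest/lv_swap
    swapon /dev/vg_guest/lv_swap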

My broad-brushstroke goals -- for the whole system, as well as for individual DomUs -- include:

(1) Maximize failover capability
(2) Minimize potential for data loss due to HW failure
(3) Maximize the performance of the system

(I've given up on running at less than 10W total power utilization ... ;-) )


> And, if DomU hosts the RAID mirror, what's the recommended file system
> choice -- or is that dictated by Xen as a preference (I haven't got to that
> either yet ...)

AFAIK, none of this has anything to do with xen. In theory the folks
behind your target software (like zimbra) should have recommendations.

You're correct -- you've answered the question as I misstated it.  I *meant* to ask: what's the recommended FS for the Dom0 Xen host?

Regards,

Bob
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users

 

