
RE: [Xen-users] Linux software RAID1 (Dom0 or domU)?

  • To: <xen-users@xxxxxxxxxxxxxxxxxxx>
  • From: "Jensen Nathan A Capt USAFA/DFCS" <nathan.jensen@xxxxxxxxxxxx>
  • Date: Thu, 1 Feb 2007 16:04:37 -0700
  • Delivery-date: Thu, 01 Feb 2007 15:04:42 -0800
  • List-id: Xen user discussion <xen-users.lists.xensource.com>
  • Thread-index: AcdGNcxQxiJB0ttHRriQmWpTDu1eRgAGr8Ag
  • Thread-topic: [Xen-users] Linux software RAID1 (Dom0 or domU)?

Great conversation!

This dovetails nicely with some of the design decisions I am weighing
right now.

I have the opportunity to design a new datacenter from the ground up,
and Xen virtualization sounds like an awesome way to go. However, I
would like to make things as reliable as possible (without necessarily
making them complicated), since the control center is NOT co-located
with the datacenter.  Money is also somewhat tight.

My initial thoughts go something like this:

1.  Build several robust servers with large RAID storage arrays
2.  The only purpose of these machines is to serve storage
3.  Use LVM to create logical volumes on the RAID servers as needed
4.  Buy commodity servers to serve as Dom0s
5.  Dom0s can see block devices on the large RAID arrays using AoE
6.  DomUs can be created on the commodity servers using that RAID
storage
7.  Use an NFS share to hold all of the Xen config files
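As a rough sketch of how steps 3 and 5 could fit together using the
standard vblade/aoetools userland -- all volume, shelf, and interface
names here are hypothetical, and the privileged commands are shown as
comments since they need real hardware:

```shell
# Hypothetical AoE export of an LVM volume (names are illustrative).
SHELF=0; SLOT=1
LV=/dev/vg_raid/web01-disk
# On the storage server: carve out a logical volume and export it.
#   lvcreate -L 20G -n web01-disk vg_raid       # step 3: LV on the RAID box
#   vbladed $SHELF $SLOT eth0 $LV               # step 5: export it over AoE
# On each Dom0: load the driver and discover the exported devices.
#   modprobe aoe && aoe-discover
# The device then appears under the same name on every Dom0:
DEV=/dev/etherd/e$SHELF.$SLOT
echo "$DEV"
```

Because the /dev/etherd/e<shelf>.<slot> name is derived from the export,
not from which Dom0 discovered it, every commodity server sees the same
block device path.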

What this buys me:

1.  If a commodity server dies, just point a different commodity server
at the right partition and restore
2.  All commodity servers can see the RAID LVM volumes -- I can now do
hot-swapping of domU instances -- I don't think I'll need a cluster
filesystem like GFS
3.  Data storage should be fairly reliable using the RAID servers
4.  I can recover from any failure (except extreme hardware failure on
a RAID server) without leaving the remote control center
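Point 1 works because a guest description like the following -- a
hypothetical Xen config file kept on the NFS share, with all names and
paths illustrative -- references only the network-visible AoE device,
so any Dom0 that has discovered that shelf can start it unchanged:

```python
# Hypothetical domU config, e.g. /etc/xen/web01.cfg on the NFS share.
# The disk line points at an AoE device (shelf 0, slot 1), so the guest
# is not tied to any particular Dom0.
name   = "web01"
memory = 512
kernel = "/boot/vmlinuz-2.6-xen"
disk   = ["phy:/dev/etherd/e0.1,xvda,w"]   # AoE-backed block device
vif    = ["bridge=xenbr0"]
root   = "/dev/xvda ro"
```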

My fear is:

1.  I am totally hosed if my RAID server tanks

What is your take on this configuration?  Does anyone have
recommendations for quickly recovering from a RAID server crash in this
scenario?  What about AoE?  Would I be better served to pay the money
and go fibre-channel?

Thanks for the discussion,

-----Original Message-----
From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
[mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Luke S.
Sent: Thursday, February 01, 2007 12:17 PM
To: Mark Williamson
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] Linux software RAID1 (Dom0 or domU)?

On Thu, 1 Feb 2007, Mark Williamson wrote:
> I think I'd do RAID in dom0...  It makes configuration changes easy to
> do without fiddling with the guest and will free up some space in the
> block ring (although I'm not convinced that'll make much difference to

Hm.  I'm interested in this discussion because I have a fibre-channel
setup with lots of disks, but no RAID head.  Right now, we are running
software RAID in Dom0 as you suggest; the primary problem with this is
that hot-migration won't work: all my Dom0s can see all the disks
(depending on how I set up zoning), but the MD devices only exist on
the server that created them, so I believe hot-migration (or even
having half of one md on one server and half of the same md on another
server) just won't work.

One way to solve this would be to make clvm support mirroring, but I'm
too dumb to write that code myself.

The other solution would be to make every physical disk a clvm
VolumeGroup, then make sure that every DomU gets two equally sized
partitions from different volume groups.  The DomU can then mirror or
stripe the disks as required.
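A minimal sketch of that one-VG-per-disk layout, with all disk and
volume names hypothetical (the live commands are commented out because
they need real shared disks and root privileges):

```shell
# Sketch: one VG per physical disk, mirrored inside the guest with md.
# On a Dom0 that can see shared fibre-channel disks sda and sdb:
#   pvcreate /dev/sda /dev/sdb
#   vgcreate vg_disk0 /dev/sda                  # one VG per spindle
#   vgcreate vg_disk1 /dev/sdb
#   lvcreate -L 10G -n web01-a vg_disk0         # one mirror leg
#   lvcreate -L 10G -n web01-b vg_disk1         # other leg, other spindle
# Hand both LVs to the guest; inside the DomU, mirror them with md:
#   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/xvda /dev/xvdb
LEG_A=vg_disk0/web01-a
LEG_B=vg_disk1/web01-b
echo "mirror legs: $LEG_A $LEG_B"
```

Since the md array is assembled inside the DomU rather than in Dom0, it
travels with the guest, which sidesteps the migration problem described
above.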

(This has actually been a pretty hot internal prgmr.com debate; the
solution, we all agree, is to get a hardware-RAID device, but we
haven't done so yet, mostly because I'm cheap.  Disks and fibre
switches are commodity; RAID heads are decidedly not.)

Xen-users mailing list

