
Re: [Xen-users] Advice on redundant SAN/NAS storage for Xen


  • To: "Fajar A. Nugraha" <fajar@xxxxxxxxx>, "Chris 'Xenon' Hanson" <xenon@xxxxxxxxxxxxxx>
  • From: Andrew Lyon <andrew.lyon@xxxxxxxxx>
  • Date: Sat, 30 May 2009 13:05:15 +0100
  • Cc: Xen User-List <xen-users@xxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Sat, 30 May 2009 05:05:58 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

On Sat, May 30, 2009 at 6:30 AM, Fajar A. Nugraha <fajar@xxxxxxxxx> wrote:
> On Sat, May 30, 2009 at 3:30 AM, Chris 'Xenon' Hanson
> <xenon@xxxxxxxxxxxxxx> wrote:
>>  I'm planning to expand my Xen servers at my datacenter into a cluster
>> with high availability and reliability. As part of this, I want to move
>> all DomU storage to a common SAN or NAS infrastructure and make all the
>> Dom0s basically identical. In this way, I can move DomUs around between
>> Dom0s as needed for performance or reliability reasons. If a Dom0 server
>> fails, I can just bring up its DomUs on different servers with no loss.
>
> Simple goal, not-so-simple implementation.
>
>>  The best design I can think of is this:
>>
>> Two machines running Linux configured as SANs, using something like ATA
>> over Ethernet (AoE) to link them to a pair of GigE switches that then
>> link to every Dom0 box. The pair of SAN boxes each export a block of raw
>> storage that the Dom0 machine then RAIDs together as RAID1 and provides
>> to Xen and the DomU as a block device. The Dom0 gets network-portable
>> storage, with RAID reliability and redundancy.
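
For what it's worth, the dom0 side of that would look roughly like the
below (hostnames, shelf/slot numbers and LV names are just made-up
examples, I haven't tested this):

  # on each SAN box: export a logical volume as an AoE target
  san1# vbladed 0 1 eth0 /dev/vg0/domu1
  san2# vbladed 1 1 eth0 /dev/vg0/domu1

  # on the dom0: the aoe module exposes the targets as /dev/etherd/eX.Y,
  # and mdadm mirrors the two targets into a single md device
  dom0# modprobe aoe
  dom0# mdadm --create /dev/md10 --level=1 --raid-devices=2 \
          /dev/etherd/e0.1 /dev/etherd/e1.1

  # the domU config then only sees the one mirrored block device
  disk = [ 'phy:/dev/md10,xvda,w' ]
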
>>
>>  The other way might be to have the Dom0 and Xen pass through both block
>> devices to the DomU and let the DomU RAID them together. I'm not sure if
>> either is better. Maybe RAID on the DomU would allow the DomU to be
>> migrated more easily?
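
The domU-side variant would be handing both network block devices
straight to the guest and running md inside it, something like this
(again only a sketch, with made-up device names):

  # domU config on the dom0: pass both AoE devices through
  disk = [ 'phy:/dev/etherd/e0.1,xvdb,w',
           'phy:/dev/etherd/e1.1,xvdc,w' ]

  # inside the domU: mirror them
  domU# mdadm --create /dev/md0 --level=1 --raid-devices=2 \
           /dev/xvdb /dev/xvdc
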
>
> RAID might be the weakest link here. Think about what will happen if:
> - one of the SAN boxes gets disconnected -> RAID will (hopefully) cope
> with it well and keep using the live SAN
> - some time later, the dead SAN becomes available again -> RAID won't
> automatically re-add it
> - then the other SAN dies.
>
> These are big IFs, but you get the idea.
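
That re-add step is indeed a manual one with md, something along these
lines (sketch only, same made-up device names as above):

  # after the failed SAN comes back, the mirror is still degraded
  dom0# cat /proc/mdstat
  dom0# mdadm /dev/md10 --re-add /dev/etherd/e1.1

  # and it's only redundant again once the resync has finished
  dom0# watch cat /proc/mdstat
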
>
>>
>>  Is there a better and less messy way to provide redundant SAN-type
>> storage to Xen DomUs? The main criteria are:
>>
>>  Immune to failure of a single switch or SAN box.
>>  Allow DomUs to be moved seamlessly to other Dom0s without messy
>> reconfiguration.
>
> Being immune to a SAN box failure is hard.
> The common way to do it in enterprise-level storage is to have the high
> availability inside the SAN box itself: it does RAID and has multiple
> controllers in a cluster/HA setup, so that it is "immune" enough to
> disk or controller failure. I don't think there's a viable way to
> achieve that with your planned setup. Feel free to correct me if I'm
> wrong.
>
> --
> Fajar
>
> _______________________________________________
> Xen-users mailing list
> Xen-users@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-users
>

Have a look at DRBD. I've not used it myself, but the idea of having
two sets of disks (local or SAN) backing a single block device seems
more robust than having two dom0s accessing the same storage.

http://www.gridvm.org/drbd-lvm-gnbd-and-xen-for-free-and-reliable-san.html
http://lists.xensource.com/archives/html/xen-users/2008-11/msg00828.html
http://openqrm.com/storage-cluster.png
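
The basic idea is one DRBD resource per domU disk, mirrored between the
two storage boxes over a dedicated link; the resource definition is only
a few lines, roughly like this (hostnames, LV paths and addresses below
are made-up placeholders, not a tested config):

  # /etc/drbd.conf (drbd 8.x style)
  resource domu1 {
    protocol C;                   # synchronous replication
    on san1 {
      device    /dev/drbd0;
      disk      /dev/vg0/domu1;   # local backing LV
      address   192.168.10.1:7789;
      meta-disk internal;
    }
    on san2 {
      device    /dev/drbd0;
      disk      /dev/vg0/domu1;
      address   192.168.10.2:7789;
      meta-disk internal;
    }
  }

I believe the first link above then exports the drbd device to the
dom0s with gnbd, so the dom0s only ever see the already-mirrored device
rather than the raw disks.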

Andy

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

