
Re: [Xen-users] best practices in using shared storage for XEN VirtualMachines and auto-failover?


  • To: Jeff Sturm <jeff.sturm@xxxxxxxxxx>
  • From: Rudi Ahlers <Rudi@xxxxxxxxxxx>
  • Date: Fri, 15 Oct 2010 11:11:25 +0200
  • Cc: xen-users <xen-users@xxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Fri, 15 Oct 2010 02:13:12 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

On Thu, Oct 14, 2010 at 3:42 PM, Jeff Sturm <jeff.sturm@xxxxxxxxxx> wrote:
>> -----Original Message-----
>> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-
>> bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Rudi Ahlers
>> Sent: Thursday, October 14, 2010 7:25 AM
>> To: xen-users
>> Subject: [Xen-users] best practices in using shared storage for XEN
>> VirtualMachines and auto-failover?
>>
>> Hi all,
>>
>> Can anyone please tell me what would be best practice to use shared
>> storage with virtual machines, especially when it involves high
>> availability / automated failover between 2 XEN servers?
>
> With 2 servers, I hear good things about DRBD, if you don't want to go
> the SAN route.  If you have a SAN make sure it is sufficiently
> redundant--i.e. two (or more) power supplies, redundant Ethernet, spare
> controllers, etc.  And of course RAID 10 or similar RAID level to guard
> against single-drive failure.
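For reference, a minimal DRBD resource definition for a two-node setup
would look roughly like the sketch below -- the hostnames, backing
devices and addresses are made up for illustration, not a tested
config:

    # /etc/drbd.d/vm1.res -- one resource per domU disk
    resource vm1 {
      protocol C;                    # synchronous replication
      on xen1 {
        device    /dev/drbd0;
        disk      /dev/vg0/vm1;      # local backing LV
        address   10.0.0.1:7789;     # dedicated replication link
        meta-disk internal;
      }
      on xen2 {
        device    /dev/drbd0;
        disk      /dev/vg0/vm1;
        address   10.0.0.2:7789;
        meta-disk internal;
      }
    }

The domU then uses /dev/drbd0 as its disk, and only the node holding
the Primary role may write to it.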

I am planning on setting up a SAN with a few Gluster / CLVM servers -
just need to decide which one first, but I'm aiming for high
availability + load balancing + ease-of-upgrade-with-no-downtime. Each
server will run RAID10 (maybe RAID6?).
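For the RAID10 part, something like this mdadm invocation is what I
have in mind on each storage node (device names are illustrative):

    # 4-disk RAID10 array; adjust member devices to taste
    mdadm --create /dev/md0 --level=10 --raid-devices=4 \
          /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
    mdadm --detail --scan >> /etc/mdadm.conf   # persist across reboots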


> Pay close attention to power and networking.  With 4 NICs available per
> host, I'd go for a bonded pair for general network traffic, and a
> multipath pair for I/O.  Use at least two switches.  If you get it right
> you should be able to lose one switch or one power circuit and maintain
> connectivity to your critical hosts.
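For reference, an active-backup bond on a RHEL-style system looks
roughly like this (interface and bond names are assumed; active-backup
mode needs no switch-side configuration, so the two slaves can plug
into different switches):

    # /etc/modprobe.conf
    alias bond0 bonding
    options bond0 mode=active-backup miimon=100

    # /etc/sysconfig/network-scripts/ifcfg-eth0
    # (eth1 gets an identical file with DEVICE=eth1;
    #  ifcfg-bond0 carries the IP address as usual)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none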

So would you bond eth0 & eth1, and then eth2 & eth3 together? But then
connect the bonded eth0+1 to one switch, and eth2+3 to another switch
for failover? Or would you have eth0 & eth2 on one switch, and eth1 &
eth3 on the other? Is this actually possible? I presume the 2 switches
should also be connected together (preferably via fiber?) with Spanning
Tree set up? Or should I separate the 2 networks, and connect them
individually to the internet?

>
> In my experience with high availability, the #1 mistake I see is
> overthinking the esoteric failure modes and missing the simple stuff.
> The #2 mistake is inadequate monitoring to detect single device
> failures.  I've seen a lot of mistakes that are simple to correct:
>
> - Plugging a bonded Ethernet pair into the same switch.
> - Connecting dual power supplies to the same PDU.
> - Oversubscribing a power circuit.  When a power supply fails, power
> draw on the remaining supply will increase--make sure this increase
> doesn't overload and trip a breaker.
> - Ignoring a drive failure until the 2nd drive fails.
>
> You can use any of a variety of clustering tools, like heartbeat, to
> automate the domU failover.  Make sure you can't get into split-brain
> mode, where a domU can start on two nodes at once--that would quickly
> corrupt a shared filesystem.  With any shared storage configuration,
> node fencing is generally an essential requirement.
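A minimal heartbeat v1 configuration along those lines might look like
this -- node names, the heartbeat NIC and the stonith parameters are
all assumptions (check the parameter order against your stonith
plugin's docs), and it presumes the stock xendomains init script is
usable as an LSB resource:

    # /etc/ha.d/ha.cf
    node xen1 xen2
    bcast eth3                      # dedicated heartbeat interface
    keepalive 2
    deadtime 30
    auto_failback off
    # fencing, so a dead node is powered off before takeover:
    stonith_host * external/ipmi xen2 10.0.0.12 admin secret

    # /etc/ha.d/haresources -- xen1 normally owns the domUs
    xen1 xendomains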
>
>> What is the best way to connect a NAS / SAN to these 2 servers for
>> this kind of setup to work flawlessly? The NAS can export iSCSI, NFS,
>> SMB, etc. I'm sure I could even use ATAoE if needed
>
> For my money I'd go with iSCSI (or AoE), partition my block storage and
> export whole block devices as disk images for the domU guests.  If your
> SAN can't easily partition your storage, consider a clustered logical
> volume manager like CLVM on RHCS.
>
> -Jeff
>
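To make the whole-LUN approach concrete, with open-iscsi it would look
something like this (the portal address and IQN are made up):

    # discover and log in to the target
    iscsiadm -m discovery -t sendtargets -p 10.0.0.50
    iscsiadm -m node -T iqn.2010-10.com.example:vm1 -p 10.0.0.50 --login

    # /etc/xen/vm1.cfg -- hand the raw block device to the guest
    disk = [ 'phy:/dev/disk/by-path/ip-10.0.0.50:3260-iscsi-iqn.2010-10.com.example:vm1-lun-0,xvda,w' ]

Exporting one LUN (or one clustered LV) per domU means the guests never
share a filesystem, which sidesteps most of the corruption risk Jeff
mentions.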

I am considering CLVM or Gluster - just need to play with both and
decide which one I prefer :)
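In case it helps anyone else comparing the two, a replicated Gluster
volume on the newer releases that ship the gluster CLI (3.1+) is only
a few commands -- hostnames and brick paths below are made up:

    gluster peer probe stor2        # join the second node to the pool
    gluster volume create vmstore replica 2 \
            stor1:/export/brick1 stor2:/export/brick1
    gluster volume start vmstore
    # on the Xen hosts:
    mount -t glusterfs stor1:/vmstore /var/lib/xen/images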




-- 
Kind Regards
Rudi Ahlers
SoftDux

Website: http://www.SoftDux.com
Technical Blog: http://Blog.SoftDux.com
Office: 087 805 9573
Cell: 082 554 7532

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

