
Re: [Xen-users] as promised description of my XEN HA setup


  • To: Bart Coninckx <bart.coninckx@xxxxxxxxxx>
  • From: Frank S Fejes III <frank@xxxxxxxxx>
  • Date: Sat, 3 Jul 2010 07:30:12 -0500
  • Cc: xen-users@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Sat, 03 Jul 2010 05:31:52 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

This is fascinating because it is almost exactly what I have been
doing (though the design has waffled between a single highly redundant
storage server and two DRBD-redundant storage servers).  We use
Dell 5448 switches instead of the ProCurves, a 10Gbit crossover between
the two DRBD machines, and we also add offsite DRBD replication, but
otherwise it's the same design. Oh, and I use heartbeat over pacemaker
because it's so easy. :)

IMO this is a very logical approach that is both supportable and
scalable.  The only real trick in my experience is getting the
initiator's multipath device configuration handled automatically as
part of the IET LUN provisioning process.
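
For what it's worth, the kind of hook I have in mind looks roughly like
this (an untested sketch; target/LUN numbers, volume paths and hostnames
are all made up):

    # on the storage server: expose the new LV as an extra LUN on an
    # existing IET target
    ietadm --op new --tid=1 --lun=5 \
        --params Path=/dev/vg_drbd/domu5,Type=blockio

    # on each hypervisor: rescan the iSCSI sessions on both paths and
    # let multipathd rebuild its maps
    iscsiadm -m session --rescan
    multipath -r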

Well done!  Hopefully others can share their own designs and experiences.

--frank

On Sat, Jul 3, 2010 at 5:58 AM, Bart Coninckx <bart.coninckx@xxxxxxxxxx> wrote:
> Hi all,
>
> In threads posted by (I believe) Jonathan Tripley, I promised to post my new
> XEN HA setup. I hope it can be of some use to some people.
>
> In this particular case I'm forced to use SLES 10SP3 with XEN 3.2, which
> excludes the possibility of using things like cLVM (which I don't think I need
> anyway).
>
> So:
>
> Storage:
> I use two HP ML370 G5 machines with DRBD and heartbeat on them. They are
> linked by two bonded Gigabit NICs for syncing. They export storage with IET
> across two other NICs with IPs in different segments. DRBD sits on top of
> LVM, and another LVM layer sits on top of DRBD so that I can create an LV for
> each DomU.
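>
> To make the stacking a bit more concrete, here is a stripped-down example
> (hostnames, IPs and volume names are just placeholders, not my actual config):
>
>     # /etc/drbd.conf (fragment): the DRBD device sits on an LV of the
>     # local VG and syncs over the bonded link
>     resource r0 {
>       protocol C;
>       on stor1 {
>         device    /dev/drbd0;
>         disk      /dev/vg_local/drbd_backing;
>         address   10.0.0.1:7788;
>         meta-disk internal;
>       }
>       on stor2 {
>         device    /dev/drbd0;
>         disk      /dev/vg_local/drbd_backing;
>         address   10.0.0.2:7788;
>         meta-disk internal;
>       }
>     }
>
>     # on the primary: a second VG on top of the DRBD device, one LV per DomU
>     pvcreate /dev/drbd0
>     vgcreate vg_drbd /dev/drbd0
>     lvcreate -L 20G -n domu1 vg_drbd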
>
> Network:
> Switches are HP ProCurve 1810s. Not the fastest switches, but also not the
> most expensive ones. I will report later on whether they can handle it all.
>
> Hypervisors:
> Different machines, but for the moment all have 4 NICs. One NIC is for the
> trusted LAN, two are used as iSCSI initiators, and one is for DomUs in the
> DMZ. I use multipathing on top of the iSCSI paths for redundancy and
> supposedly extra speed (this hasn't been proven yet). The paths run over
> different switches for redundancy.
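>
> On the initiator side that amounts to something along these lines (the
> portal IPs are made up):
>
>     # discover and log in to the target over both segments
>     iscsiadm -m discovery -t sendtargets -p 192.168.1.10
>     iscsiadm -m discovery -t sendtargets -p 192.168.2.10
>     iscsiadm -m node -L all
>
>     # /etc/multipath.conf (fragment): friendly names make the maps
>     # easier to reference from the DomU config files
>     defaults {
>         user_friendly_names yes
>     }
>
>     # check that both paths show up under a single map
>     multipath -ll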
>
> DomUs:
> Currently HVMs. There will be about 10 in the end. They use phy: devices
> pointing to the multipath devices. The config files are synced across the
> hypervisors (no network storage, to avoid a SPOF).
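>
> A disk line in one of those config files then looks something like this
> (the mapper name is a placeholder for whatever multipath calls the device):
>
>     # /etc/xen/domu1 (fragment)
>     disk = [ 'phy:/dev/mapper/mpath0,hda,w' ]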
>
> HA:
> (to do) Pacemaker will take care of monitoring DomUs and failing them over.
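>
> I expect that to end up looking more or less like this per guest (crm
> syntax; the timeouts are a first guess, nothing tested yet):
>
>     primitive vm_domu1 ocf:heartbeat:Xen \
>         params xmfile="/etc/xen/domu1" \
>         op monitor interval="30s" timeout="60s" \
>         op start timeout="120s" \
>         op stop timeout="180s"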
>
> Backup:
> It seems the only safe way to back up DomUs is by shutting them down, so what
> I do is make sure the storage servers can ssh to the hypervisors with public
> key auth. They shut down the guest, create a snapshot volume of the relevant
> LV for that particular machine (a script finds out where it is running), start
> the guest again and dd the snapshot to a file server over ssh. Then the
> snapshot is deleted.
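>
> In script form the flow is roughly this (a simplified sketch; the helper
> that locates the guest and all names and sizes are placeholders):
>
>     #!/bin/bash
>     # runs on the storage server, which holds the LVs and can ssh to
>     # the hypervisors with public key auth
>     DOMU=domu1
>     HYP=$(/usr/local/bin/find-hypervisor "$DOMU")   # hypothetical helper
>
>     ssh root@"$HYP" "xm shutdown -w $DOMU"          # wait for clean shutdown
>     lvcreate -s -L 5G -n "${DOMU}_snap" "/dev/vg_drbd/$DOMU"
>     ssh root@"$HYP" "xm create /etc/xen/$DOMU"      # guest runs while we copy
>     dd if="/dev/vg_drbd/${DOMU}_snap" bs=1M | \
>         ssh backup@fileserver "cat > /backup/$DOMU.img"
>     lvremove -f "/dev/vg_drbd/${DOMU}_snap"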
>
>
> There you go, hope this can inspire people.  ;-)
>
> B.
>

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

