Re: [Xen-users] Looking for tips about Physical Migration on XEN


  • To: Xen-users <xen-users@xxxxxxxxxxxxxxxxxxx>
  • From: Chris de Vidal <chris@xxxxxxxxxx>
  • Date: Mon, 19 Jun 2006 15:42:06 -0700 (PDT)
  • Delivery-date: Mon, 19 Jun 2006 15:42:49 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

--- Igor Morgado <igormorgado.listas@xxxxxxxxx> wrote:
> Hi people.

Hi person!  ;-)


> I'm new to Xen and I'm looking into how to do a physical migration on Xen. I
> know that there are a lot of choices (that is the first problem).
> 
> My environment is simple:
> 
> 2 physical servers, each one running one instance of Xen. Each host has 2
> gigabit cards: one to talk to the world, the other for the hosts to talk to
> each other.
> 
> I want to run every VM on both hosts, so that if one fails the other can
> take over the work (something like Heartbeat), but I want to choose when to
> migrate each VM from one host to the other, with the least downtime
> possible.
> 
> How can I proceed? If there are any URLs about this, please point me to
> them. I'm looking for environments already tested in production, with good
> migration performance/speed.

I don't have URLs, tested environments, or migration performance results, but I
am planning to implement a similar setup, so I can offer tips.

It sounds as if you want high availability along with live migration: two
slightly different goals, but both should be reachable.

I've learned from the OpenVZ message boards that TCP has a timeout of about 2
minutes, so live migration isn't always necessary.  Because of this, I am
probably going to install DRBD.  To fail over with DRBD, the partition must be
unmounted, which means the virtual machine must be suspended, precluding live
migration.  But because TCP gives you those 2 minutes, this is acceptable for
me.
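To make the DRBD approach concrete, a manual failover on the surviving node
might look roughly like this.  This is only a sketch: the resource name "r0",
the mount point, and the domain config path are all hypothetical, and the exact
commands depend on your DRBD version.

```shell
# On the surviving node, after the old primary has died
# (hypothetical resource name, mount point, and domU config):
drbdadm primary r0                   # promote the local DRBD replica
mount /dev/drbd0 /xen/domains/vm1    # mount the now-writable partition
xm create /etc/xen/vm1.cfg           # start the guest on this node
```

Run in reverse (shut down the guest, unmount, `drbdadm secondary r0`) to fail
back once the original node returns.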

The advantage of this setup is you only need two nodes (important for me).

If you still require live migration (as in the case of a game server), it seems
you must have an external NFS, iSCSI, or AoE server, because both Xen node
servers need to access the storage at the same time.  I think AoE is the
simplest and best-performing, and with the vblade daemon it's free and works on
any server.  I'd use 2 storage servers and install Heartbeat+DRBD on them for
the ultimate in HA.  As an alternative, you can use shared SCSI storage.
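Exporting a volume over AoE with vblade is a one-liner; the shelf/slot numbers,
interface, and device below are assumptions for illustration.

```shell
# On the storage server: export a block device as AoE shelf 0, slot 1
# over eth1 (hypothetical device and interface):
vblade 0 1 eth1 /dev/sdb1 &

# On each Xen host: load the AoE driver and discover exports; the
# volume then appears as /dev/etherd/e0.1 (aoe-discover is in aoetools)
modprobe aoe
aoe-discover
```

Point the guest's disk line at /dev/etherd/e0.1 and both hosts see the same
block device, which is what live migration needs.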

So for live migration and high availability: four nodes.  Two are the front-end
Xen hosts and two are back-end storage hosts.  Run Heartbeat on the front-end
nodes and Heartbeat+DRBD on the back end -- or use shared SCSI storage instead.
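With shared storage in place, moving a running guest between the two front-end
hosts is a single command.  The domain name and target hostname here are
placeholders, and xend on the target must have relocation enabled in
xend-config.sxp.

```shell
# Live-migrate the running domain "vm1" from this host to the other
# Xen host over the private gigabit link (hostname is hypothetical):
xm migrate --live vm1 xen-host2
```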

Earlier this month I thought I'd figured out how to do 2-node HA + live
migration (read the archives) using AoE in place of DRBD and software RAID
inside each Xen host.  The problem with this is that a slight network
interruption will trigger a RAID resync.  You could install the "Fast RAID"
patch if your Xen guest is running Linux, or you could just live with it;
perhaps network interruptions are infrequent enough not to worry about.
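For reference, the mirror in that 2-node scheme would be assembled from a local
partition plus the AoE device, something like the following.  All device names
are assumptions; the point is that when the network link drops, the AoE member
falls out of the array and a full resync follows when it returns.

```shell
# Mirror a local partition against the remote AoE-exported partition
# (hypothetical devices):
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      /dev/sda3 /dev/etherd/e0.1

mdadm --detail /dev/md0    # check degraded/resync state after a blip
```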

You should be very careful to avoid split-brain situations; for this reason I'm
probably going to forgo Heartbeat and use "meatware" heartbeat instead (that
is, if a node dies, I log in and manually bring up the Xen guest on the other
node).  I'll monitor health with something like Nagios.


Hope that helps!

CD

TenThousandDollarOffer.com

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users