
Re: [Xen-users] Distributed xen or cluster?



lists@xxxxxxxxxxxx wrote:
>>  Exactly what kind of redundancy are you looking for? 
>>     
>
> Here's a small example.
>
> I have a GFS cluster of servers which serve up LAMP services, with two 
> redundant LVS servers in front as a load balancer. 
Ah, so you're familiar with GFS and LVS. From your earlier post I wasn't
sure whether you were a newbie or someone more experienced :)

> The one problem I haven't bothered with is that if there is a failure, the 
> user has to reconnect because the session gets messed up. Otherwise, it's 
> fully redundant.
>   

That is the nature of TCP. To achieve full redundancy, the protocol, the
client, and/or the server implementation usually need to adapt. For example:
- With HTTP, using LAMP servers whose session data lives on shared
storage or in a database gives you a "redundant" setup, in the sense
that you can connect to any server and get the same session (see the
php.ini sketch below this list). There is still a possible failure,
though: the client won't retry the request if the data transfer is
interrupted in the middle.
- NFS can handle server failover better than HTTP. NFS over TCP will
automatically reconnect if the connection drops, and retry the failed
request. This setup still has one possible problem: if an NFS-over-TCP
client is moved from one server to another it works, but if it is moved
back to the first server within a short time (say several minutes) it
does not. To work around that particular issue you can use NFS over UDP
(see the mount example below).
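
As an illustration of the shared-session idea, here is a minimal
php.ini sketch. It assumes PHP file-based sessions and a GFS mount at
/gfs/sessions shared by all web nodes (the path is just a placeholder):

    ; keep session files on shared storage so every web node sees them
    session.save_handler = files
    session.save_path    = "/gfs/sessions"

Storing sessions in a replicated database achieves the same thing; the
point is that no single web node owns the session state.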
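
And a sketch of the NFS client side. The server name and export path
are just examples:

    # NFS over TCP: the client reconnects and retries by itself
    mount -t nfs -o proto=tcp,hard,intr nfsserver:/export/data /mnt/data

    # NFS over UDP: avoids the stale-connection issue when failing back quickly
    mount -t nfs -o proto=udp,hard,intr nfsserver:/export/data /mnt/data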

> With my virtualization testing, things aren't so much fun. When a server goes 
> down, or needs to be rebooted, or anything which causes it to have to be 
> down, all guests on that server go down with it of course. Migrating over to 
> another machine is pointless because it takes way too much work to migrate 
> just to reboot a server.
>
> Of course, what would be best would be proper redundancy, so that there are 
> multiple containers working together as one. If one goes down, the others 
> simply keep going and no servers go down. 
>
>   

So you want to achieve the same level of "redundancy" with VMs/domUs as
you would get with (from my examples above) HTTP or NFS? Then the answer
is: it's not possible.

With HTTP or NFS, two servers can share the same data (via GFS, for
example). This means they serve the same data, so for a failover to
occur the client simply needs to (re)connect to the other server (or,
more accurately, be reconnected to it by the load balancer).
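
On the load-balancer side, that reconnection is just LVS pointing the
virtual service at whichever real servers are still alive. A rough
ipvsadm sketch (the addresses are placeholders; in practice you would
let ldirectord or keepalived manage this based on health checks):

    # virtual HTTP service on the VIP, round-robin scheduling
    ipvsadm -A -t 192.168.1.100:80 -s rr
    # two real servers behind it, NAT mode
    ipvsadm -a -t 192.168.1.100:80 -r 10.0.0.11:80 -m
    ipvsadm -a -t 192.168.1.100:80 -r 10.0.0.12:80 -m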

With generic VMs (as in Windows, Linux on ext3, or anything else that
uses a non-cluster FS), however, sharing the same data is not possible.
You cannot have two generic VMs using the same backend storage, because
that will lead to data corruption. An exception is when the VM itself
uses a cluster FS like GFS, but that's another story.

What IS possible, though, is LIVE migration (see the sketch after this
list). For this to work:
- the domU's backend storage lives on shared storage (a SAN LUN, cLVM,
GFS, NFS, whatever) accessible by both dom0s
- at any given time, only one dom0 starts a particular domU
- moving domUs between dom0s is done with live migration. The migration
is transparent to the domU (i.e. it does not need a reboot) and to the
clients connected to it (at most they see something like a short
network glitch, which the network stack handles correctly).
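
A rough sketch of what that looks like with Xen 3.x. The VM name, host
name, and the cLVM volume path are just examples, and you should check
the relocation settings against your own security policy:

    # /etc/xen/myvm.cfg on BOTH dom0s -- the disk points at shared storage (cLVM here)
    disk = [ 'phy:/dev/vg_shared/myvm-disk,xvda,w' ]

    # /etc/xen/xend-config.sxp on the receiving dom0 -- allow relocation
    (xend-relocation-server yes)
    (xend-relocation-port 8002)

    # then, on the dom0 currently running the guest:
    xm migrate --live myvm dom0-b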

BTW, that is also the basic principle behind VMware ESX. They use their
own cluster FS for the VM backend storage, but the rest is similar.

Regards,

Fajar




