
Re: Two redundant Xen servers with one SAN



Maybe a different approach can help: if the large amount of data is essentially at rest most of the time, you could share it as plain NFSv4 exports (better file-locking mechanism than NFSv3) and expose them to a Kubernetes cluster running small and fast "VMs" (pods, Docker instances); see the sketch below.
You can run two redundant Kubernetes controllers and two Kubernetes nodes, and you can set up this scenario with two bare-metal servers. If a server dies, the cluster survives by automatically rescheduling the pods (the "VMs") on the second node.
This is nearly a full-HA solution. But it doesn't make use of Xen, so you would have to change discussion lists :D
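Just as a rough illustration of the NFS side (the server address, export path, size and names below are placeholders, not tested values), a PersistentVolume along these lines could expose the shared directory to the cluster:

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: shared-data
  spec:
    capacity:
      storage: 5Ti
    accessModes:
      - ReadWriteMany
    mountOptions:
      - nfsvers=4.1
    nfs:
      server: 10.0.0.10
      path: /export/data

Pods on either node can claim it ReadWriteMany, which is what lets rescheduled pods keep seeing the same data.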



On 24/06/21 16:20, mabi wrote:
Thank you GD for the details regarding a DRBD setup. I was also thinking of such a solution, but the underlying VMs will have virtual drives which are in the TB range, probably 5-10 TB. As far as I know, a sync/resync of the DRBD-LV for such a VM would take ages even over a 10 Gbit/s fiber link. This is the reason why I was thinking I should go for a SAN.


‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Thursday, June 24th, 2021 at 4:04 PM, GD <g.d.monnezza@xxxxxxxxxx> wrote:

"I should have been more precise, I am looking into building a active/passive dual node Xen installation "

In this case, you can configure a simple but robust architecture I'm already running:
Two Xen servers, storing VMs in LVs built on top of DRBD. It's up to you whether to create a DRBD-LV pair for each VM (as I did in my setup) or a single DRBD-backed volume group.
Both Xen hosts are running, but the VMs run on the host that has the DRBD resource(s) active. The Xen hosts share the VM config files (and other things such as iptables forwarding rules) in a csync'd directory.
When the "active" Xen host dies, you can quickly boot up the VMs on the other Xen host after switching the DRBD volumes into "primary" mode (see the sketch below).
If the Xen VM configs are the same, you'll end up with the same VMs with the same IPs (if they are in a LAN they work immediately, otherwise you have to change the routing on the previous hop).
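A minimal sketch of that manual failover on the surviving host, assuming a per-VM DRBD resource named "vm1" and a matching guest config at /etc/xen/vm1.cfg (both names are just examples):

  # promote the DRBD resource to primary on the surviving host
  drbdadm primary vm1
  # start the guest from the csync'd config directory
  xl create /etc/xen/vm1.cfg
  # sanity checks
  cat /proc/drbd
  xl list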

In this scenario, you can also go further (as I did) and run a "cross-configuration", with both Xen servers running VMs on their active DRBD resources while holding the passive side of the other server's resources. This way you can run half of the VMs on one server and the other half on the other server ;)
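The cross-configuration falls out naturally when you use per-VM resources; a minimal resource definition could look roughly like this (hostnames, devices and addresses are placeholders):

  resource vm1 {
      device    /dev/drbd1;
      disk      /dev/vg0/vm1-disk;
      meta-disk internal;
      on xen1 {
          address 10.0.0.1:7789;
      }
      on xen2 {
          address 10.0.0.2:7789;
      }
  }

Each resource can be primary on either node independently, so half of them can be active on xen1 and the other half on xen2.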

Side notes:
- On a 1 Gbit/s Ethernet link it works for low overall disk write rates; a 10 Gbit/s fiber/Ethernet link is required for high disk write rates, such as on storage servers.
- Be careful about disk write speed: I experienced a dramatic slowdown on Debian 10 when booting the Xen kernel (compared against a regular kernel boot).

Hope it helps
g
 
On 24/06/21 13:16, mabi wrote:
Thanks Florian and GD for your answers.

I should have been more precise: I am looking into building an active/passive dual-node Xen installation. I don't need active/active, as I believe that is also more dangerous.

Also, I was thinking that I would manually fail over the VMs whenever necessary, in order not to rely on additional external tools such as corosync/pacemaker and hence avoid more complexity.

So as you can see, at least for a start, I am trying to keep things as simple as possible. If I understand correctly, that should be possible with Xen, and all I need is multipathd and CLVM. Is this correct?

Then regarding CLVM, I checked Debian buster but could not find any CLVM-related packages. Is CLVM maybe not available on Debian?

Regards,
Mabi

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐

On Thursday, June 24th, 2021 at 12:28 PM, Florian Heigl <florian.heigl@xxxxxxxxx> wrote:

Hi,

you can look into solutions based on cLVM or OCFS2 + Corosync/Pacemaker.

Don't forget to set up multipathd so your system can handle the link/controller failovers.
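As a rough starting point only (the right settings depend on the array and its vendor recommendations), a minimal /etc/multipath.conf could be as simple as:

  defaults {
      user_friendly_names yes
      find_multipaths     yes
  }

After that, "multipath -ll" should list the LUN with all of its paths.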

This has been done, and it has been (and still is) a commonplace solution. There are also lots of blog posts that you can dig up once you search the right way.

I would avoid running VMs natively on SAN LUNs due to the risk of "the cluster stack had an error / was misconfigured". If you fail over manually, that is less of an issue. Cluster stacks like Pacemaker can protect the VMs a bit using SCSI reservations.

There was also Remus for running two VMs in lockstep for HA, but it expected no shared storage and was never polished by anyone to be worthwhile for production use.

A fair warning: most homegrown HA setups, as they are commonly done in the ISP industry, tend to blow up much more often than a proper solution should.

It might be better to pick something pre-made for that purpose if you don't have the SAN/Cluster experience.

E.g. XenServer/XCP or Oracle VM 3.

Good luck!

Florian

On 24.06.2021 at 10:09, mabi mabi@xxxxxxxxxxxxx wrote:

Hello,

Is it possible with Xen on Debian 10 to have two Xen servers both directly attached in a redundant way through HBA interfaces to a single SAN?

The goal here would be to achieve higher availability of the VMs in case one Xen server is down for maintenance or because it is defective. This would mean that the virtual machines can continue to run on the second available Xen server. The SAN would be used to store the virtual machine images directly via LVM, I guess.
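In case it helps frame the question, a minimal sketch of that LVM idea, assuming the SAN LUN shows up via multipath as /dev/mapper/mpatha (names and sizes are placeholders, and coordinating the shared volume group between the two hosts is exactly the open point):

  pvcreate /dev/mapper/mpatha
  vgcreate vg_san /dev/mapper/mpatha
  lvcreate -L 100G -n vm1-disk vg_san
  # the Xen guest config would then point at the LV:
  #   disk = [ 'phy:/dev/vg_san/vm1-disk,xvda,w' ]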

I did not find any Xen documentation or third-party howtos on how to do that. Does anyone have any pointers to some documentation or hints? Or maybe this is simply not possible?

Best regards,

Mabi



 

