
Re: [Xen-users] easy high availability storage



Hi,

I have been following all this discussion about HA storage.
It looks to me like there are a lot of complicated solutions to the problem.

I would guess it would not be very complicated to do all of this in the virtual disk driver of Xen itself: basically just duplicate all read and write access to the disk image, and handle the resync there.

Maybe we should ask the developer team whether they could do it.

The question is: are there other people who also want an easy solution, or is it just me?

Roland


On Thu, 26 Nov 2009 16:29:43 +0100, Nick Couchman <Nick.Couchman@xxxxxxxxx> wrote:

On 2009/11/26 at 02:26, Peter Braun <xenware@xxxxxxxxx> wrote:
Hi,

It's not my document.

I don't think Fajar was suggesting that it was your document, or blaming you for what he saw as a couple of possible problems in the configuration described in the document. He was just pointing out that there are weaknesses with manual fencing that can lead to corruption of your filesystem, so when using GFS you need to set up real fencing devices to avoid major filesystem problems.


Actually I've tried to install an H/A SAN according to this document, but
without success.

I'm looking for an open-source H/A SAN solution - and this is close to my goal.


The basics:

1) 2 machines with some HDD space synchronized between them with DRBD.

OK so far...or, if you can afford some sort of storage on the back end capable of being presented to two hosts simultaneously (like FC-based storage), you can use that, too.


2)    Now I'm a little bit confused: what should I use on top of the DRBD block device?
       - some cluster FS like OCFS2 or GFS?
       - LVM?

Your "SAN controllers" need not actually have a filesystem on them, unless you're trying to run DRBD on the same systems that are running your domUs. If you have separate machines doing all of the SAN functionality, simply synchronize with DRBD and present with iSCSI. Then you can use a CRM tool (like Heartbeat or one of the others out there) to manage switching IP addresses between your redundant SAN heads. At this stage you need not worry about OCFS2 or GFS. On your Xen servers, use an iSCSI client or iSCSI HBA to connect to the IP address(es).
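To make that concrete, here is a rough sketch of what the two SAN heads might look like - hostnames (san1/san2), IPs, and the backing partition /dev/sdb1 are made-up placeholders, and the target config assumes the iSCSI Enterprise Target (ietd); adjust for whatever target you actually run. The DRBD resource keeps the block device in sync, and the target exports the DRBD device rather than the raw disk:

    # /etc/drbd.conf (fragment, DRBD 8.x syntax) - placeholder hosts/IPs/disks
    resource r0 {
        protocol C;                      # synchronous replication between the heads
        on san1 {
            device    /dev/drbd0;
            disk      /dev/sdb1;         # local backing partition
            address   192.168.10.1:7788;
            meta-disk internal;
        }
        on san2 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   192.168.10.2:7788;
            meta-disk internal;
        }
    }

    # /etc/ietd.conf on the active head - export the DRBD device
    Target iqn.2009-11.local.san:xen.lun0
        Lun 0 Path=/dev/drbd0,Type=blockio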


3)    create files with dd on the cluster FS and export them with iSCSI?
       create LVM partitions and export them with iSCSI?

This is where you choose between a cluster-aware FS and a cluster-aware volume manager on the servers running Xen (or both). Either one should be fine - I use a cluster-aware FS (OCFS2) and use files for each of the domU disks. One thing to note with a cluster-aware volume manager: if you go this route, you still need to figure out a way to synchronize your domU configurations between the machines. This may mean using a tool like rsync, or creating a volume on your volume manager that's just for configuration files and putting a cluster-aware FS on it so it can be mounted on all of the Xen servers.
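For example (paths and hostnames here are just placeholders), with the OCFS2 volume mounted at the same point on every Xen host, a domU config can point straight at a file image on the shared mount, and the configs themselves can be pushed to the other host with rsync:

    # /etc/xen/vm01.cfg (fragment) - disk image lives on the shared OCFS2 mount
    disk = [ 'file:/srv/xen/images/vm01.img,xvda,w' ]

    # keep domU configs in sync with the other Xen host (hypothetical hostname)
    rsync -av /etc/xen/*.cfg xenhost2:/etc/xen/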


4)    How to make the iSCSI target highly available?
       - configure iSCSI on a virtual IP/another IP and run it as an HA service
       - configure separate iSCSI targets on both SAN hosts and connect them to the Xen servers as multipath?

I think either one of these should work, though you need to make sure that the latter of the two options is okay given the iSCSI target daemon that you use - with buffering and caching you may run into some issues with writes not being committed to the disk in the proper order, which could lead to corruption.
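With the first option (a floating IP managed by the cluster manager), the Xen hosts simply log in to that one address with open-iscsi - something like this, where the IP and IQN are the placeholders from the earlier sketch:

    # discover and log in to the target behind the virtual IP
    iscsiadm -m discovery -t sendtargets -p 192.168.10.100
    iscsiadm -m node -T iqn.2009-11.local.san:xen.lun0 -p 192.168.10.100 --login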


5)    Heartbeat configuration

Yep...on the SAN controllers for the IPs and iSCSI targets, and possibly on the Xen servers, as well, for the cluster-aware FS or the cluster-aware volume manager.
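On the SAN heads, an old-style Heartbeat (v1) setup can be as small as the sketch below - node names, interface, and service IP are placeholders, and the name of the iSCSI target init script varies by distribution. The haresources line promotes DRBD, brings up the service IP, and starts the target, in that order, on whichever node is active:

    # /etc/ha.d/ha.cf (fragment)
    node san1 san2
    bcast eth1                 # dedicated heartbeat link between the heads
    auto_failback off

    # /etc/ha.d/haresources
    san1 drbddisk::r0 IPaddr::192.168.10.100/24/eth0 iscsi-target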


VM machines with iSCSI HDD space on the SAN should survive a reboot/non-availability
of one of the SAN hosts without interruption and without noticing that the SAN is
degraded.

Is that even possible?

Absolutely - the primary issue is getting that failover set up correctly such that the small interruption to iSCSI service that your Xen servers will experience does not cause them to think the target has gone away or cause them to miss iSCSI disk writes.
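On the initiator side this mostly comes down to the open-iscsi timeouts - how long the session keeps I/O queued before errors are passed back up to the domU. Something along these lines in iscsid.conf (the values are only illustrative) gives the failover time to complete before any writes are failed:

    # /etc/iscsi/iscsid.conf (fragment) - keep I/O queued while the target fails over
    node.session.timeo.replacement_timeout = 120
    node.conn[0].timeo.noop_out_interval = 5
    node.conn[0].timeo.noop_out_timeout = 30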

-Nick







 

