
Re: [Xen-users] Cluster xen

  • To: <xen-users@xxxxxxxxxxxxx>
  • From: <admin@xxxxxxxxxxx>
  • Date: Wed, 7 Mar 2012 20:57:42 -0600
  • Delivery-date: Thu, 08 Mar 2012 02:59:06 +0000
  • Importance: Normal
  • List-id: Xen user discussion <xen-users.lists.xen.org>
  • Thread-index: Acz8dq8L2CpRKssBRqKcg32mE4clggAXxKmw

If you need to deploy more than two XCP or XenServer hosts per pool, then
you need to use a shared iSCSI or NFS target instead of DRBD.  I recommend
building something ZFS based using Nexenta, OpenSolaris, or OpenIndiana.
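As a rough sketch of what that looks like on an OpenSolaris/OpenIndiana-style box (pool name, disk device names, and sizes below are placeholders; Nexenta wraps most of this in its own management UI):

```shell
# Create a mirrored pool (disk names are illustrative)
zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0

# NFS export for Xen storage
zfs create tank/xen-nfs
zfs set sharenfs=on tank/xen-nfs

# iSCSI: carve out a zvol and expose it via COMSTAR
zfs create -V 500G tank/xen-iscsi
svcadm enable stmf
stmfadm create-lu /dev/zvol/rdsk/tank/xen-iscsi
stmfadm add-view <LU-name-printed-by-create-lu>
itadm create-target
```

The LU name and target IQN printed by `stmfadm`/`itadm` are what you would point the Xen hosts at.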

We experimented with Gluster, but we could never get the performance out of
it that we could easily get from ZFS based solutions.  We also played with
FreeNAS and OpenFiler, but neither of those could match the performance of
the ZFS based options either.

Here is a good article about building a ZFS based storage solution, which
can be used as either an iSCSI or NFS target.

You could build the exact box shown in that AnandTech article and then toss
Nexenta Community Edition on it for free.  It would provide you with a fast,
reliable iSCSI/NFS target for your Xen pools.  Good luck.
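On the XCP/XenServer side, attaching such a target as a shared SR for the pool is done with `xe sr-create` (the hostnames, paths, and IQN below are placeholders for your own setup):

```shell
# NFS SR shared across the pool
xe sr-create type=nfs shared=true name-label="zfs-nfs" \
  device-config:server=nas.example.com \
  device-config:serverpath=/tank/xen-nfs

# Or an iSCSI SR (LVM over iSCSI); the SCSIid can be
# discovered first with "xe sr-probe type=lvmoiscsi ..."
xe sr-create type=lvmoiscsi shared=true name-label="zfs-iscsi" \
  device-config:target=nas.example.com \
  device-config:targetIQN=<iqn-of-the-target> \
  device-config:SCSIid=<scsi-id-from-sr-probe>
```

With `shared=true`, every host in the pool sees the SR, which is what enables live migration between them.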

-----Original Message-----
From: xen-users-bounces@xxxxxxxxxxxxx
[mailto:xen-users-bounces@xxxxxxxxxxxxx] On Behalf Of Matthieu Roudon
Sent: Wednesday, March 07, 2012 9:24 AM
To: Bart Coninckx
Cc: xen-users@xxxxxxxxxxxxx
Subject: Re: [Xen-users] Cluster xen

Thanks for all your answers

I think I am going to try configuring my cluster using DRBD.

On the other hand, DRBD works well with two servers, but it is not suited 
to more than two.

So, how would we go about scaling beyond two servers?


On 06/03/12 21:31, Bart Coninckx wrote:
> On 03/06/12 00:08, Outback Dingo wrote:
>> On Mon, Mar 5, 2012 at 5:59 PM, Bart 
>> Coninckx<bart.coninckx@xxxxxxxxxx>  wrote:
>>> On 03/05/12 23:41, netz-haut - stephan seitz wrote:
>>> On Monday, 05.03.2012 at 20:52 +0100, Bart Coninckx wrote:
>>>>> To help this fellow along, I will translate his post:
>>>>> Hello,
>>>>> I would like to implement a cluster with Xen or Xen server with 
>>>>> two Dell
>>>>> R710 servers.
>>>>> I would like to build a cluster using the entire added diskspace 
>>>>> of the
>>>>> two
>>>>> servers, as well as the total memory.
>>>>> What are your experiences and configurations for this?
>>>>> Thanks in advance,
>>>>> Regards,
>>>>> Mat
>>>>> Well Mat,
>>>>> I usually use DRBD and Pacemaker for this. You can load balance the
>>>>> cluster
>>>>> resources (being Xen DomU's) across the two nodes. For live 
>>>>> migration you
>>>>> need dual primary.
>>>>> For dual primary you need stonith.
>>>>> Read up on http://www.clusterlabs.org
>>>>> B.
>>>> http://www.cloudstack.org/
>>>>> _______________________________________________
>>>>> Xen-users mailing list
>>>>> Xen-users@xxxxxxxxxxxxx
>>>>> http://lists.xen.org/xen-users
>>> I'm not familiar (yet) with Cloudstack, but
>>> http://cloudstack.org/cloudstack/requirements.html seems to state that
>>> the requirements are more than two servers. I guess this solution won't
>>> do Mat a lot of good, does it?
>>> cheers,
>>> B.
>>> You're right. To get useful benefits from using CloudStack, one 
>>> would need
>>> *at least* one gateway/firewall, one management node, a primary
>>> storage (local; redundant setups like drbd master/master are not
>>> supported out of the box and need to be "hacked"), one secondary
>>> storage (NFS, e.g.) and at least one switch capable of dynamic VLAN
>>> registration (I forget, 802.something).
>>> Personally, I love CloudStack, but for the OP's needs, drbd
>>> master/master with stonith (which could be done via IPMI / iDRAC)
>>> would be a much better solution. For a two node cluster, I think the
>>> Remus and/or Kemari projects are worth a try for failover scenarios.
>>> I'm actually having the same challenge as OP. What if you would run
>>> cloudstack ON a drbd master/master as a series of virtual machines? 
>>> The main
>>> benefit would be easy provisioning, central management, snapshotting 
>>> etc
>>> B.
>> Well, at that point just throw XCP into the mix... it'll run from 
>> local disk.
> Was thinking about that,
> thx!
> B.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users

