
Re: [Xen-users] Configuring Xen + DRBD + Corosync + Pacemaker



Hi Lars,

Thanks for the reply.

So, I am following the DRBD + Corosync configuration. I think the only difference is that I don't have a SCSI HD. My Pacemaker configuration lines are below. I am getting this error:

ERROR: ocf:tripadvisor:iSCSITarget: could not parse meta-data:
ERROR: ocf:tripadvisor:iSCSITarget: no such resource agent

That is because I don't have the resource agent "ocf:tripadvisor:iSCSITarget" installed.

Could I change to another resource agent? Which one?
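For example, could the standard agent shipped in the resource-agents package work? A sketch of what I mean, reusing the same parameters as my res_target_r9 primitive below (I am not sure the additional_parameters are handled identically by this agent):

    primitive res_target_r9 ocf:heartbeat:iSCSITarget \
        params implementation="tgt" tid="1" \
        iqn="iqn.2012-12.com.cloud:storage.example.xsg" \
        incoming_username="target_r9" incoming_password="target_r9" \
        additional_parameters="MaxRecvDataSegmentLength=131072 MaxXmitDataSegmentLength=131072" \
        op monitor interval="10s"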

Thanks in advance.
Felipe

crm configure

primitive res_ip_float ocf:heartbeat:IPaddr2 params ip="192.168.188.75" cidr_netmask="20" op monitor interval="10s"

primitive res_portblock_r9_block ocf:heartbeat:portblock params action="block" portno="3260" ip="192.168.188.75" protocol="tcp"

primitive res_portblock_r9_unblock ocf:heartbeat:portblock params action="unblock" portno="3260" ip="192.168.188.75" protocol="tcp"

primitive res_drbd_r9 ocf:linbit:drbd params drbd_resource="r9"

ms ms_drbd_r9 res_drbd_r9 meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"

primitive res_target_r9 ocf:tripadvisor:iSCSITarget params implementation="tgt" tid="1" iqn="iqn.2012-12.com.cloud:storage.example.xsg" incoming_username="target_r9" incoming_password="target_r9" additional_parameters="MaxRecvDataSegmentLength=131072 MaxXmitDataSegmentLength=131072" op monitor interval="10s"

primitive res_lun_r9_lun1 ocf:heartbeat:iSCSILogicalUnit params target_iqn="iqn.2012-12.com.cloud:storage.example.xsg" lun="1" path="/dev/drbd/by-res/r9" scsi_id="r9_1" op monitor interval="10s"

group rg_r9 res_portblock_r9_block res_target_r9 res_lun_r9_lun1 res_ip_float res_portblock_r9_unblock

colocation c_r9_on_drbd inf: rg_r9 ms_drbd_r9:Master

order o_drbd_before_r9 inf: ms_drbd_r9:promote rg_r9:start
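
Once this is loaded, I plan to sanity-check it with the standard Pacemaker tools:

    crm_verify -LV
    crm_mon -1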




On Tue, Dec 11, 2012 at 11:45 AM, Lars Kurth <lars.kurth@xxxxxxx> wrote:
Felipe,
this may not entirely answer your question, but you may want to check out https://vimeo.com/46125363 and http://www.slideshare.net/xen_com_mgr/oscon-2012-from-datacenter-to-the-cloud-featuring-xen-and-xcp/152 (from slide 152)
Regards
Lars


On 11/12/2012 12:22, Felipe Gutierrez wrote:
Hi everyone,

I need some help setting up my failover configuration.
My goal is to have a redundant system using Xen + DRBD + Corosync + Pacemaker.

On Xen I will have one virtual machine. When this machine's network goes down, I will do a live migration to the second machine.
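
Something like this is what I have in mind for the VM resource later on (just a sketch; the config file path is made up, and I understand allow-migrate is what enables live migration with the ocf:heartbeat:Xen agent):

    primitive res_vm1 ocf:heartbeat:Xen \
        params xmfile="/etc/xen/vm1.cfg" \
        meta allow-migrate="true" \
        op monitor interval="10s" \
        op migrate_to interval="0" timeout="300s" \
        op migrate_from interval="0" timeout="300s"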
The first thing I will need is a crossover cable, won't I? Is it really necessary? Anyway, I did it: eth0 is the crossover and eth1 is the network.

In my mind I will have one partition configured for DRBD, and there I will install the Xen virtual machines. Corosync will monitor the connection through the network interface (eth1). When that connection fails, the live migration will run over the crossover cable. For this I will need to configure Pacemaker with the crossover cable, won't I? I still need to do that... and I don't know how.
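
From what I have read, the crossover cable would become a second Corosync ring. A sketch of the totem section in corosync.conf (the subnet addresses are made up for my setup):

    totem {
        version: 2
        rrp_mode: passive
        interface {
            ringnumber: 0
            bindnetaddr: 192.168.188.0   # eth1, the regular network
            mcastaddr: 226.94.1.1
            mcastport: 5405
        }
        interface {
            ringnumber: 1
            bindnetaddr: 10.0.0.0        # eth0, the crossover cable
            mcastaddr: 226.94.1.2
            mcastport: 5407
        }
    }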

I am configuring another DRBD partition to share the .cfg files (the Xen domain configurations) between the nodes. My reference for this is http://publications.jbfavre.org/virtualisation/cluster-xen-corosync-pacemaker-drbd-ocfs2.en
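
For that partition I have something like this DRBD resource in mind (dual-primary, so both nodes can mount the OCFS2 filesystem as in the guide; device, disk and addresses are made up):

    resource xencfg {
        device    /dev/drbd1;
        disk      /dev/sda3;
        meta-disk internal;
        net {
            allow-two-primaries;   # required for dual-primary / OCFS2
        }
        startup {
            become-primary-on both;
        }
        on node1 {
            address 10.0.0.1:7789;
        }
        on node2 {
            address 10.0.0.2:7789;
        }
    }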

Is this configuration plausible and correct, or is there a better one?

Thanks in advance,
Felipe
--
--
-- Felipe Oliveira Gutierrez
-- Felipe.o.Gutierrez@xxxxxxxxx
-- https://sites.google.com/site/lipe82/Home/diaadia





--
--
-- Felipe Oliveira Gutierrez
-- Felipe.o.Gutierrez@xxxxxxxxx
-- https://sites.google.com/site/lipe82/Home/diaadia

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users