Heartbeat v2 + DRBD
Prerequisites
- Setup a minimal CentOS 5 install on both nodes
- be sure that both nodes can resolve each other's names correctly (either through DNS or /etc/hosts ; see the example snippet after this list)
- yum update (as usual …)
- yum install heartbeat drbd kmod-drbd (available in the extras repository)
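If you rely on /etc/hosts rather than DNS, a minimal file matching the addresses used below could look like this on both nodes (assuming node1/node2 are the exact names the machines use) :
vi /etc/hosts
127.0.0.1      localhost.localdomain localhost
192.168.0.11   node1
192.168.0.12   node2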
Current situation
* node1 192.168.0.11/24 , source disk /dev/sdb that will be replicated
* node2 192.168.0.12/24 , target disk /dev/sdb
DRBD Configuration
vi /etc/drbd.conf
global { usage-count no; }
resource repdata {
  protocol C;
  startup { wfc-timeout 0; degr-wfc-timeout 120; }
  disk { on-io-error detach; } # or panic, …
  net { cram-hmac-alg "sha1"; shared-secret "Cent0Sru!3z"; } # don't forget to choose a secret for auth !
  syncer { rate 10M; }
  on node1 {
    device /dev/drbd0;
    disk /dev/sdb;
    address 192.168.0.11:7788;
    meta-disk internal;
  }
  on node2 {
    device /dev/drbd0;
    disk /dev/sdb;
    address 192.168.0.12:7788;
    meta-disk internal;
  }
}
Copy the config file to the second node :
# scp /etc/drbd.conf root@node2:/etc/
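Before going further you can let drbdadm parse the file and dump its view of the configuration on each node ; this syntax check is an addition to the original procedure :
# drbdadm dump repdata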
- Initialize the meta-data area on disk before starting drbd (on both nodes !)
# drbdadm create-md repdata
- start drbd on both nodes :
# service drbd start (on node1)
# service drbd start (on node2)
- as /proc/drbd shows, both nodes come up as Secondary, which is normal ; we now decide which node will act as primary (node1), which initiates the first full sync between the two nodes :
# drbdadm -- --overwrite-data-of-peer primary repdata
- watch the synchronisation progress :
# watch -n 1 cat /proc/drbd
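While the sync runs you should see something like the line below for device 0 (illustrative only ; the exact fields and the version header vary between DRBD releases, e.g. st: vs ro: for the roles) :
 0: cs:SyncSource st:Primary/Secondary ds:UpToDate/Inconsistent
Once the sync has finished, the state settles to cs:Connected with ds:UpToDate/UpToDate on both nodes.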
- we can now format /dev/drbd0 and mount it on node1 :
# mkfs.ext3 /dev/drbd0 ; mkdir /repdata ; mount /dev/drbd0 /repdata
- create some fake data on node 1 :
# for i in {1..5};do dd if=/dev/zero of=/repdata/file$i bs=1M count=100;done
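If you want to check later that the replica really is identical, you can record checksums inside the replicated filesystem itself, so they travel with the data (this verification step is an addition to the original procedure) :
# md5sum /repdata/file* > /repdata/checksums.md5
After switching the device to the other node (next step), md5sum -c /repdata/checksums.md5 should report OK for every file.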
- now switch manually to the second node :
# umount /repdata ; drbdadm secondary repdata (on node1)
# mkdir /repdata ; drbdadm primary repdata ; mount /dev/drbd0 /repdata (on node2)
# ls /repdata/
file1  file2  file3  file4  file5  lost+found
Great, data was replicated … now let's delete/add some files :
# rm /repdata/file2 ; dd if=/dev/zero of=/repdata/file6 bs=100M count=2
- Now switch back to the first node :
# umount /repdata/ ; drbdadm secondary repdata (on node2)
# drbdadm primary repdata ; mount /dev/drbd0 /repdata (on node1)
# ls /repdata/
file1  file3  file4  file5  file6  lost+found
OK … DRBD is working … let's be sure that it will always be started at boot (on both nodes) :
# chkconfig drbd on
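You can confirm that the runlevel settings took effect (an extra check, not in the original write-up) :
# chkconfig --list drbd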
Heartbeat V2 Configuration
vi /etc/ha.d/ha.cf
keepalive 1     # interval in seconds between heartbeat packets
deadtime 30     # seconds of silence before a node is declared dead
warntime 10     # seconds before issuing a "late heartbeat" warning
initdead 120    # extra grace period at boot time
bcast eth0      # heartbeat over broadcast on eth0
node node1
node node2
crm yes         # enable the v2 cluster resource manager
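The single bcast link above is itself a single point of failure for cluster communication ; if both machines have a spare interface, you can declare an additional unicast path (eth1 and the 192.168.1.x addresses below are illustrative assumptions, not part of the original setup) :
ucast eth1 192.168.1.11
ucast eth1 192.168.1.12
Heartbeat ignores a ucast directive that points at one of its own addresses, so the same ha.cf can still be copied verbatim to both nodes.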
vi /etc/ha.d/authkeys (must be mode 600, see below) :
auth 1
1 sha1 MySecret
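Heartbeat refuses to start if authkeys is readable by group or others, so set the permissions explicitly :
# chmod 600 /etc/ha.d/authkeys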
Start the heartbeat service on node1 :
# service heartbeat start
Starting High-Availability services:
Check the cluster status :
# crm_mon
Now replicate ha.cf and authkeys to node2 and start heartbeat there :
# scp /etc/ha.d/ha.cf /etc/ha.d/authkeys root@node2:/etc/ha.d/
# service heartbeat start
Verify the cluster again with crm_mon ; both nodes should now be online :
=====
Last updated: Wed Sep 12 16:20:39 2007
Current DC: node1.centos.org (6cb712e4-4e4f-49bf-8200-4f15d6bd7385)
2 Nodes configured.
0 Resources configured.
=====
Node: node1 (6cb712e4-4e4f-49bf-8200-4f15d6bd7385): online
Node: node2 (f6112aae-8e2b-403f-ae93-e5fd4ac4d27e): online
We can now populate the CIB with our resources. The group below defines a virtual IP address (192.168.0.110), the drbddisk resource for repdata and an ext3 Filesystem mounted on /repdata, plus a location constraint that prefers node1 :
vi /var/lib/heartbeat/crm/cib.xml
<cib generated="false" admin_epoch="0" epoch="25" num_updates="1" have_quorum="true" ignore_dtd="false" num_peers="0" cib-last-written="Sun Sep 16 19:47:18 2007" cib_feature_revision="1.3" ccm_transition="1">
 <configuration>
   <crm_config/>
   <nodes>
     <node id="6cb712e4-4e4f-49bf-8200-4f15d6bd7385" uname="node1" type="normal"/>
     <node id="f6112aae-8e2b-403f-ae93-e5fd4ac4d27e" uname="node2" type="normal"/>
   </nodes>
   <resources>
     <group id="My-DRBD-group" ordered="true" collocated="true">
       <primitive id="IP-Addr" class="ocf" type="IPaddr2" provider="heartbeat">
         <instance_attributes id="IP-Addr_instance_attrs">
           <attributes>
             <nvpair id="IP-Addr_target_role" name="target_role" value="started"/>
             <nvpair id="2e967596-73fe-444e-82ea-18f61f3848d7" name="ip" value="192.168.0.110"/>
           </attributes>
         </instance_attributes>
       </primitive>
       <instance_attributes id="My-DRBD-group_instance_attrs">
         <attributes>
           <nvpair id="My-DRBD-group_target_role" name="target_role" value="started"/>
         </attributes>
       </instance_attributes>
       <primitive id="DRBD_data" class="heartbeat" type="drbddisk" provider="heartbeat">
         <instance_attributes id="DRBD_data_instance_attrs">
           <attributes>
             <nvpair id="DRBD_data_target_role" name="target_role" value="started"/>
             <nvpair id="93d753a8-e69a-4ea5-a73d-ab0d0367f001" name="1" value="repdata"/>
           </attributes>
         </instance_attributes>
       </primitive>
       <primitive id="FS_repdata" class="ocf" type="Filesystem" provider="heartbeat">
         <instance_attributes id="FS_repdata_instance_attrs">
           <attributes>
             <nvpair id="FS_repdata_target_role" name="target_role" value="started"/>
             <nvpair id="96d659dd-0881-46df-86af-d2ec3854a73f" name="fstype" value="ext3"/>
             <nvpair id="8a150609-e5cb-4a75-99af-059ddbfbc635" name="device" value="/dev/drbd0"/>
             <nvpair id="de9706e8-7dfb-4505-b623-5f316b1920a3" name="directory" value="/repdata"/>
           </attributes>
         </instance_attributes>
       </primitive>
     </group>
   </resources>
   <constraints>
     <rsc_location id="runs_on_pref_node" rsc="My-DRBD-group">
       <rule id="prefered_runs_on_pref_node" score="100">
         <expression attribute="#uname" id="786ef2b1-4289-4570-8923-4c926025e8fd" operation="eq" value="node1"/>
       </rule>
     </rsc_location>
   </constraints>
 </configuration>
</cib>
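Editing cib.xml by hand is only safe while heartbeat is stopped ; on a running cluster you would feed the resources section to the CRM with cibadmin instead (a sketch, assuming the <resources> fragment above was saved to /root/resources.xml) :
# cibadmin -C -o resources -x /root/resources.xml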