MC ServiceGuard Cluster
Creating an MC ServiceGuard Cluster
Create the VG on node1
pvcreate -f /dev/rdsk/cxtxdx
mkdir /dev/vg_name
mknod /dev/vg_name/group c 64 0x010000
vgcreate /dev/vg_name /dev/dsk/cxtxdx        # list all disks that belong to the VG
vgchange -a y vg_name
lvcreate -L xxx vg_name                      # size in MB
vgchange -a n vg_name
vgexport -p -s -m /tmp/vg_name.map vg_name   # -p: preview only, writes the map file
scp /tmp/vg_name.map node2:/tmp
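For illustration, the same sequence with hypothetical concrete values (disk c1t2d0, volume group vg01, a 1 GB logical volume; all names are placeholders, not from this setup) might look like this:
pvcreate -f /dev/rdsk/c1t2d0
mkdir /dev/vg01
mknod /dev/vg01/group c 64 0x010000   # the minor number (here 0x010000) must be unique per VG
vgcreate /dev/vg01 /dev/dsk/c1t2d0
vgchange -a y vg01
lvcreate -L 1024 vg01
vgchange -a n vg01
vgexport -p -s -m /tmp/vg01.map vg01
scp /tmp/vg01.map node2:/tmp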
Import the VG on node2
mkdir /dev/vg_name
mknod /dev/vg_name/group c 64 0xXX0000
vgimport -s -m /tmp/vg_name.map vg_name
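An optional sanity check on node2 (assuming the VG is not yet under cluster control) might be:
vgchange -a y vg_name    # activate temporarily
vgdisplay -v vg_name     # the LV layout should match node1
vgchange -a n vg_name    # deactivate again before the cluster takes over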
Configure the cluster on node1
cmquerycl -C /etc/cmcluster/cmclconf -n node1 -n node2
vi /etc/cmcluster/conf_ascii_file
cmcheckconf -C /etc/cmcluster/conf_ascii_file
vgchange -a y vg_name
cmapplyconf -v -C /etc/cmcluster/conf_ascii_file
vgchange -a n vg_name
cmruncl
cmviewcl -v
cmhaltcl
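For orientation, the cluster ASCII file edited above typically contains parameters along these lines; all names and addresses here are hypothetical placeholders, not from this setup:
CLUSTER_NAME             cluster1
NODE_NAME                node1
  NETWORK_INTERFACE      lan0
    HEARTBEAT_IP         192.168.1.1
NODE_NAME                node2
  NETWORK_INTERFACE      lan0
    HEARTBEAT_IP         192.168.1.2
HEARTBEAT_INTERVAL       1000000    # microseconds
NODE_TIMEOUT             2000000    # microseconds
MAX_CONFIGURED_PACKAGES  10
VOLUME_GROUP             /dev/vg_name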
Configure the package on node1
mkdir /etc/cmcluster/pkg_name
cd /etc/cmcluster/pkg_name
cmmakepkg -p pkg_name.conf     # package configuration template
cmmakepkg -s pkg_name.cntl     # package control script template
vi pkg_name.conf
vi pkg_name.cntl
scp -r /etc/cmcluster/pkg_name node2:/etc/cmcluster
cmcheckconf -C conf -P pkg_name.conf
cmapplyconf -C conf -P pkg_name.conf
cmruncl
cmviewcl -v
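In pkg_name.cntl, mainly the storage, IP, and service entries need to be filled in. A hypothetical excerpt, using the array variables of the legacy control script template (values are assumptions for illustration):
VG[0]="vg_name"
LV[0]="/dev/vg_name/lvol1"; FS[0]="/mnt/pkg_name"; FS_MOUNT_OPT[0]="-o rw"; FS_TYPE[0]="vxfs"
IP[0]="192.168.1.10"       # relocatable package IP
SUBNET[0]="192.168.1.0"
SERVICE_NAME[0]="pkg_name_srv"
SERVICE_CMD[0]="/usr/local/bin/app_start"   # hypothetical application start command
SERVICE_RESTART[0]="-r 2"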
Adding nodes to the cluster
The following steps need to be performed:
- set up the same network configuration (heartbeat, failover interfaces)
- update /etc/hosts (if a quorum server is used, do not forget to add it!)
- edit /etc/cmcluster/conf_ascii_file and copy it to all nodes
- edit the cmclnodelist and copy it to all nodes
- if a quorum server is used, add all nodes to the authfile
- if a lock LUN is used, make sure all nodes use the same device; if necessary, adjust the cluster conf so that the same LUN is addressed
- check and apply the new cluster configuration
- reconfigure package failover for the new nodes (see the command sketch after this list)
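A possible command sequence for taking a new node (hypothetically node3) into the running cluster, following the reconfiguration steps described in the section below:
cmgetconf -c cluster-name /etc/cmcluster/tmp.cluster.config
cmquerycl -C /etc/cmcluster/cluster.config -c cluster-name -n node1 -n node2 -n node3
vi /etc/cmcluster/cluster.config
cmcheckconf -v -C /etc/cmcluster/cluster.config
cmapplyconf -v -C /etc/cmcluster/cluster.config
cmrunnode node3    # start cluster services on the new node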
Important cluster and system files:
tail -f /etc/cmcluster/pkg_name/pkg_name.cntl.log
tail -f /var/adm/syslog/syslog.log
Setting up a quorum server on Linux
Mount the SGLX CD and install the RPM:
cd x86_x86-64/[DISTRIBUTION]/Serviceguard/IA32
rpm -ivh qs-A.04.00.04-0.xxxx.i386.rpm
Authorize the hosts that are allowed to connect to the quorum server:
vi /usr/local/qs/conf/qs_authfile
...
node1.example.net
node2.example.net
nodeXX.example.net
...
Create the log directory
mkdir -p /var/log/qs
Add an entry to /etc/inittab
vi /etc/inittab
qs:345:respawn:/usr/local/qs/bin/qs >/var/log/qs/qs 2>/var/log/qs/qs_error
Start the ServiceGuard quorum server:
telinit q
Verify that the service has started:
ps aux | grep qs
and check the two "LISTEN" ports:
netstat -an --inet
...
tcp 0 0 0.0.0.0:60277 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:1238 0.0.0.0:* LISTEN
...
...
Note: port 1238 corresponds to the registered "hacl-qs" service.
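You can confirm this mapping locally (the exact comment text in /etc/services may differ):
grep hacl-qs /etc/services
hacl-qs    1238/tcp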
If the nodes get a "permission denied" error, kill the quorum server processes:
pkill qs
They are restarted automatically by init and re-read the qs_authfile.
Reconfiguring a Running MC ServiceGuard Cluster
If you have a cluster that is already up and running and you need to change the cluster configuration, here is how (in this example, we are using a quorum server).
Dump the running cluster configuration to a file:
cmgetconf -c cluster-name /etc/cmcluster/tmp.cluster.config
Now you can use cmquerycl to fetch information about all the nodes that you want to have in your cluster configuration:
cmquerycl -v -C /etc/cmcluster/cluster.config -c cluster-name -n nodeA -n nodeB -n nodeC -q quorumsvr
In case of errors, read the log files of all servers that need to be in the cluster, correct any errors, and try again. The cluster.config file is now created, and you need to edit it for your configuration.
After you have edited the cluster.config file, you need to run cmcheckconf and cmapplyconf for the changes to take effect. Again, in case of errors, direct your attention to the log files and ITRC.
cmcheckconf -v -C /etc/cmcluster/cluster.config
cmapplyconf -v -C /etc/cmcluster/cluster.config
cmapplyconf will ask you a few questions, and among other things you need to confirm the changes.
To verify the cluster setup and start the nodes, the commands to run are:
cmviewcl [-v]
cmrunnode
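cmrunnode takes the node name as an argument; for example, to start cluster services on a node that was just added:
cmrunnode nodeC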
Creating a Cluster Package
Node1
Create the new cluster package directory:
mkdir /etc/cmcluster/[PackageName]
chmod o+r /etc/cmcluster/[PackageName]
chmod o+x /etc/cmcluster/[PackageName]
Create the config file:
cd /etc/cmcluster/config
cmmakepkg -p [PackageName].conf
Edit the config file:
vi [PackageName].conf
and define the following values (the numbers are the approximate line positions in the template):
13   PACKAGE_NAME  PackageName
96   NODE_NAME     Node1
97   NODE_NAME     Node2
143  RUN_SCRIPT    /etc/cmcluster/PackageName/PackageName.cntl
145  HALT_SCRIPT   /etc/cmcluster/PackageName/PackageName.cntl
Copy the new package config file to all other cluster nodes:
scp /etc/cmcluster/config/[PackageName].conf NodeX:/etc/cmcluster/config
Node1 + Node2
Link the standard control and main scripts on all nodes:
cd /etc/cmcluster/[PackageName]
ln -s /ccf/local/bin/PKG.cntl PackageName.cntl
ln -s /ccf/local/bin/main.sh /etc/cmcluster/PackageName/PackageName_main.sh
Create/edit the new package environment file:
vi PackageName.cntl.env
with the following values:
# CCF site specific environment for PackageName
CCF_Rechner=$(/usr/bin/uname -n)
CCF_Comment="Data Center Integration Package"
CCF_Startup="/etc/cmcluster/${CCF_Package}/${CCF_Package}_main.sh"
IP0="10.128.198.196" ; SUBNET0="10.128.198.0"   # [PackageName].ccf-it.de
VG0="/dev/VG_Package"
LV0="/dev/VG_Package/lvol1"; FS0="/test/PackageName";          FS_MOUNT_OPT0=""; FS_TYPE0="vxfs"
LV1="/dev/VG_Package/lvol2"; FS1="/test/PackageName/share";    FS_MOUNT_OPT1=""; FS_TYPE1="vxfs"
LV2="/dev/VG_Package/lvol3"; FS2="/test/PackageName/oradata1"; FS_MOUNT_OPT2=""; FS_TYPE2="vxfs"
LV3="/dev/VG_Package/lvol4"; FS3="/test/PackageName/redo1";    FS_MOUNT_OPT3=""; FS_TYPE3="vxfs"
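Note that ${CCF_Package} is not set in this file itself; presumably the linked control script sets it before sourcing the environment, along these lines (an assumption about the site-specific CCF framework, not confirmed by this document):
CCF_Package="PackageName"                                 # assumption: set by PKG.cntl
. /etc/cmcluster/${CCF_Package}/${CCF_Package}.cntl.env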
Copy the environment file to all nodes:
cd /etc/cmcluster/PackageName
scp PackageName.cntl.env Node1:`pwd`
Verify and apply the Serviceguard cluster configuration
Check the new configuration:
cd /etc/cmcluster/config
cmcheckconf -v -P PackageName.conf
Checking existing configuration ... Done
Gathering configuration information ... Done
Parsing package file: [PackageName].conf.
Package [PackageName] already exists. It will be modified.
Maximum configured packages parameter is 100.
Configuring 16 package(s).
84 package(s) can be added to this cluster.
199 access policies can be added to this cluster.
Modifying the package configuration for package [PackageName].
Verification completed with no errors found.
Use the cmapplyconf command to apply the configuration.
If no errors are reported, the new configuration can be applied:
cmapplyconf -v -P [PackageName].conf
Checking existing configuration ... Done
Gathering configuration information ... Done
Parsing package file: [PackageName].conf.
Attempting to add package [PackageName].
Maximum configured packages parameter is 100.
Configuring 16 package(s).
84 package(s) can be added to this cluster.
199 access policies can be added to this cluster.
Modify the package configuration ([y]/n)? y
Adding the package configuration for package [PackageName].
Completed the cluster update.
Check the new cluster configuration
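for example with:
cmviewcl -v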
Create the mount points
Please note that all defined mount points must exist!
Deleting a Cluster Package
To delete a cluster package, change to the package's directory as shown below:
cd /etc/cmcluster/pkgname
cmdeleteconf -p pkgname
vgexport <vgname>    # on all nodes in the cluster
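If the package is still running, it should normally be halted first, e.g.:
cmhaltpkg pkgname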
If you have an extra server acting as an arbitrator node in your cluster, you should also do the following:
On the arbitrator node, run cmgetconf to get an updated cluster ASCII file:
cmgetconf