...
- It should be noted that this is a highly complex topic.
- This document does not cover every clustering and failover scenario, nor is it intended to walk through the exact installation steps.
- It is advisable to keep the cluster nodes on their own subnet as each node broadcasts "heartbeats" to the other node to ensure it is still alive.
System requirements
- Two high-powered servers (8 GB RAM minimum), each with three NICs
- Dedicated networks for each service
- A fencing device on each server, such as Dell DRAC or HP iLO. This allows a misbehaving cluster node to be powered off immediately, preventing data corruption and triggering proper failover.
Example Diagram
The following is a diagram of the clustering scenario we will be setting up:
Install OS
Install CentOS 5.6 on two identical servers. You may also configure the NICs at this point. One NIC will be for the iSCSI storage network (eth2), one will be for clustering heartbeat (eth1), and one NIC will be for all other communications, including sipXecs (eth0).
The default gateway must be set on the main communication interface (eth0). This is done automatically during the initial setup but can be changed post-install, to the detriment of the system. DO NOT set the default gateway on any interface other than the primary communication interface (eth0).
It is also advisable to set the hostname at this time. The hostname MUST be the same on both systems; otherwise sipXecs will fail to function when it is switched to the other node.
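On CentOS 5 the hostname and default gateway typically live in /etc/sysconfig/network; the sketch below uses placeholder values (the hostname and gateway address are examples only, and the same HOSTNAME must appear on both nodes):

    # /etc/sysconfig/network
    NETWORKING=yes
    HOSTNAME=sipx.example.org        # identical on both cluster nodes
    GATEWAY=192.168.1.1              # must be reachable via eth0 only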
When prompted for software selection, deselect Desktop - Gnome and select Clustering and Storage Clustering. Perform the install.
Post Installation
After each system reboots you'll need to disable the firewall and turn off SELinux. This is done when the Setup Agent appears. Choose Firewall, then disable all security settings.
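If you prefer to make these changes from the command line rather than the Setup Agent, the equivalent steps look roughly like this (a sketch; run on both nodes):

    chkconfig iptables off                                        # keep the firewall from starting at boot
    service iptables stop                                         # stop it for the current session
    setenforce 0                                                  # put SELinux in permissive mode immediately
    sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config  # disable SELinux across reboots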
After you have disabled the security settings perform a system update on both systems:
    yum update
After the update, reboot both systems. Once you've rebooted, install the xauth package to enable X11 forwarding (used later):
    yum install xauth
Connect to Shared iSCSI disk
Each server needs to have iSCSI turned on to connect. Run the following commands to enable iSCSI:
    chkconfig iscsi on
    service iscsi start
On each server you need to connect to a blank (fresh) shared iSCSI disk. For example, if your iSCSI shared disk is located at IP address 172.16.5.10 you would run the following command on each server:
    iscsiadm -m discovery -t sendtargets -p 172.16.5.10
    service iscsi restart
You should now see an additional drive on the server, usually /dev/sda or similar.
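To confirm which device name the new disk received, you can list the block devices on each server, for example:

    fdisk -l                # the iSCSI LUN should appear as a new disk, e.g. /dev/sda
    cat /proc/partitions    # alternative listing of block devices known to the kernel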
Configure LVM to allow for filesystem clustering
Edit /etc/lvm/lvm.conf on each node and change the following line:
    locking_type = 1
to
    locking_type = 3
and run the following command for this change to take effect:
    vgscan
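If you would rather script this edit on both nodes than change the file by hand, a sed one-liner along these lines should work (a sketch; verify the resulting file before running vgscan):

    sed -i 's/^\( *locking_type\) *= *1/\1 = 3/' /etc/lvm/lvm.conf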
Add Clustered Network Script
...
This will open the CentOS cluster configuration system:
Click on Create New Configuration to begin configuring the cluster.
Enter the name of the cluster you wish to create. For this example we will create a cluster called uc_cluster:
For a simple two node cluster it is not necessary to use a quorum disk. Click OK.
...
We now need to add all of our cluster information into the cluster configuration manager. The first thing to add is the cluster nodes. To do this, click on Cluster Nodes and click Add a Cluster Node. Be sure to use IP addresses and not DNS host names:
Click OK then add the second node in the same fashion.
...
In the system requirements section of this document it was noted that you would need some sort of manual fencing device to enable the cluster to power down a troublesome node. To add a fencing device, click on Fence Devices then click on Add a Fencing Device. You will now need to select the type of fencing device you have installed:
Once you have selected your fencing device you will need to enter a descriptive name, IP address, username, and password to connect to the device.
Add one fencing device into the cluster manager for each iLO/DRAC you have (for two servers, you'd have two fencing devices).
Fencing, however, is beyond the scope of this document. For demonstration purposes we will use manual fencing.
...
Each cluster node needs to have its own fencing device assigned to itself so that the cluster manager knows how to power down the device. To add a fencing device to a cluster node, click on Cluster Nodes, click on the cluster node you wish to modify, then click Manage Fencing For This Node. You will now see the following:
Click Add a New Fence Level, then click the fence level that was just created (Fence-Level-1), click Add a New Fence to this Level, and select the fence device associated with that server. Click OK to return to the cluster node fence configuration:
Click Close to return to the cluster configuration system. Repeat the cluster node fence configuration for the other node in this cluster.
...
Once you've created all the necessary resources you must now assign them to a cluster service.
To add a service, click on Services then click Add a Service. Enter a descriptive name for the service, such as sipXpbx, then click OK. The service configuration window will now appear:
You'll notice there are two options regarding resources: Create a new resource for this service and Add a Shared Resource to this service. Since we already created the resources we do not need to create them again, so click Add a Shared Resource to this service which will prompt you to select an existing resource:
You first need to select the IP Address that you created earlier and click OK. This will be the top resource in this service, meaning this resource will start before all other resources start.
Once you've added the IP address, you will need to add the network device startup script (clusnet) by clicking Add a Shared Resource to this service. Select the clusnet resource created earlier, then click OK. Select the clusnet resource you just added, click Attach a Shared Resource to the selection, then select the PGSQL-8 resource you created earlier and click OK. Now add the Phonelogd and sipXecs resources the same way, as subordinates of the PostgreSQL resource.
Your resource hierarchy should look similar to this:
For now we don't want this service to start automatically when the cluster service starts, so uncheck the Autostart This Service checkbox, then click Close.
...
For the initial propagation of the cluster configuration to the second cluster node you will need to manually copy /etc/cluster/cluster.conf from the primary node to the secondary node. This can be done with SCP or by copying and pasting the contents of the file.
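For example, using scp from the primary node (node2 below is a placeholder for the secondary node's hostname or IP address):

    scp /etc/cluster/cluster.conf root@node2:/etc/cluster/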
Start Clustering Service
To start the clustering service and all of the necessary dependencies, run the following commands on both nodes:
    service cman start
    service rgmanager start
    service clvmd start
    service gfs start
    vgscan
    chkconfig cman on
    chkconfig clvmd on
    chkconfig rgmanager on
    chkconfig gfs on
To verify that the cluster service is up and running, run the clustat command. You should see output similar to the following:
...
GFS requires LVM to operate properly, so we will need to create a LVM physical volume (PV), volume group (VG), and logical volumes (LV). This only needs to be done on one node of the cluster, preferably the primary node.
Enter the following commands to create the PV and the VG (assuming /dev/sda for the shared disk):
...
This creates a 5 GB logical volume with the name etc_sipxpbx in the volume group sipx-vol.
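As a point of reference, a 5 GB volume with that name could be created along these lines (a sketch; adjust the size and names to match your own layout):

    lvcreate -L 5G -n etc_sipxpbx sipx-vol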
Now we need to create the rest of these volumes. Bear in mind that the PostgreSQL database (/var/lib/pgsql) and the sipXecs data store (/var/sipxdata) will need the most space.
Run the following commands, changing the space values per your needs:
...
Enter all the necessary information and then when prompted, select Exit to prompt.
Disable sipXecs services from starting at bootup:
...
You'll also need to change the IP address of your primary node's communication interface (eth0) to an IP address that's different from the sipXecs system's IP address (but on the same subnet). This allows sipXecs to function on either node.
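On CentOS 5 this change is usually made in the interface configuration file for eth0; a sketch with placeholder addresses:

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (primary node)
    DEVICE=eth0
    BOOTPROTO=static
    IPADDR=192.168.1.21       # the node's own address, not the clustered sipXecs address
    NETMASK=255.255.255.0
    ONBOOT=yes

Restart networking afterwards (service network restart) for the change to take effect.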
DNS Caching Nameserver Configuration
Because running sipxecs-setup-system would have broken many things, we'll have to perform a few DNS-related steps manually.
The first thing we need to do on both nodes is remove the file /etc/named.caching-nameserver.conf and add our own /etc/named.conf with the following contents:
...
To start sipXecs you'll need to start system-config-cluster then click on the Cluster Management tab at the top of the window. Here you'll see the active nodes and services:
To start the sipXpbx service, click on the service and then click the Enable button. This will start all the necessary services.