Step by Step Two-Node RHEL6 Cluster Configuration
Virtual Lab Environment
1. Two Nodes
2. SAN Storage Server (software iscsi-target)
Two Nodes
1. rhelsrv1
OS: RHEL6 Server
IP: 172.168.100.101
Role: iSCSI initiator (client)
2. rhelsrv2
OS: RHEL6
IP: 172.168.100.102
Role: iSCSI initiator (client)
3. client1
OS: RHEL6 Desktop
IP: 172.168.100.103
Role: iSCSI target (SAN storage)
4. Cluster Name: iscsicluster
IP: 172.168.100.200
Required Software
Set up the yum repo on all three nodes.
From the VirtualBox interface, attach the RHEL6 ISO as a CD-ROM to each VM, then mount it:
mount /dev/sr0 /media
Create /etc/yum.repos.d/rhel6dvd.repo with the following contents:
[Server]
name=Server
baseurl=file:///media/Server
enabled=1
gpgcheck=0
[HighAvailability]
name=HighAvailability
baseurl=file:///media/HighAvailability
enabled=1
gpgcheck=0
[LoadBalancer]
name=LoadBalancer
baseurl=file:///media/LoadBalancer
enabled=1
gpgcheck=0
[ScalableFileSystem]
name=ScalableFileSystem
baseurl=file:///media/ScalableFileSystem
enabled=1
gpgcheck=0
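After creating the repo file, it is worth verifying that yum can actually see the new repositories. A quick check (assuming the ISO is still mounted at /media):

```shell
# Clear yum's cache so it re-reads the new repo file,
# then list the repositories it now knows about.
yum clean all
yum repolist
# The Server, HighAvailability, LoadBalancer and
# ScalableFileSystem repos should appear in the list.
```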
Now prepare the iscsi-target as SAN storage on host client1
(turn off SELinux and iptables)
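The parenthetical above can be done as follows. This is a lab-only sketch; permissive SELinux and a stopped firewall are not appropriate for production:

```shell
# Put SELinux into permissive mode now and across reboots.
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

# Stop iptables now and keep it off across reboots.
service iptables stop
chkconfig iptables off
```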
1. Attach another virtual disk to client1. It will appear as /dev/sdb (a SCSI device).
2. Create one or more partitions, e.g. /dev/sdb1, /dev/sdb2, etc. Each partition will be seen by the iscsi-client host as a single block device; for example, /dev/sdb1 on the target will appear as /dev/sdb on the initiator if the initiator already has an sda.
Or you can create an LVM logical volume instead:
pvcreate /dev/sdb1
vgcreate vg1 /dev/sdb1
lvcreate -n vg1lv1 -L 5G vg1
3. [root@client1]# yum install scsi-target-utils
4. Now edit the /etc/tgt/targets.conf file:
<target iqn.2011-08.mydomain.client1:storage1>
    backing-store /dev/sdb1
    backing-store /dev/vg1/vg1lv1
</target>
5. Start tgtd service
[root@client1]# service tgtd start
6. Now check whether the target has picked up the device:
[root@client1]# tgt-admin --show
You should see a LUN whose path is /dev/sdb1.
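One step the original list does not mention: to make the target survive a reboot of client1, enable tgtd at boot as well:

```shell
# Enable tgtd in the default runlevels so the LUNs
# are exported again after client1 reboots.
chkconfig tgtd on
```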
iSCSI target configuration is done.
Now prepare the rhelsrv1 and rhelsrv2 nodes
(turn off SELinux and iptables)
1. Install the following package groups (on both nodes):
[root@rhelsrv1]# yum groupinstall "High Availability Management"
[root@rhelsrv1]# yum groupinstall "High Availability"
[root@rhelsrv1]# yum groupinstall "Load Balancer"
[root@rhelsrv1]# yum groupinstall "Scalable Filesystems"
2. [root@rhelsrv1]# yum install iscsi-initiator-utils
3. [root@rhelsrv1]# service iscsi start
4. [root@rhelsrv1]# service iscsid start
5. Now run fdisk -l or check dmesg to find /dev/sdb.
For LVM volumes, run vgscan or lvs; they will display the disk exported by the iscsi-target server.
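If /dev/sdb does not show up after starting iscsid, the initiator usually still has to discover and log in to the target first. A sketch, assuming client1 (the target) is reachable at 172.168.100.103:

```shell
# Discover the targets exported by client1.
iscsiadm -m discovery -t sendtargets -p 172.168.100.103

# Log in to the discovered target so its LUNs appear
# as local block devices (e.g. /dev/sdb).
iscsiadm -m node -T iqn.2011-08.mydomain.client1:storage1 -p 172.168.100.103 -l

# Verify the new disk.
fdisk -l
```

Repeat the same discovery and login on rhelsrv2 so both nodes see the shared disk.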
6. Now create a GFS2 file system so the disk can be used by the cluster:
[root@rhelsrv1]# mkfs.gfs2 -p lock_dlm -t iscsicluster:sanstorage -j 2 /dev/sdb
Here iscsicluster is the name of the cluster we will configure later,
sanstorage is an arbitrary storage name,
and -j 2 is the number of journals: one journal per node, so 2 for two nodes.
This creates the gfs2 file system.
7. Now edit the /etc/fstab file and add the following line to mount the gfs2 file system automatically:
/dev/sdb /SAN gfs2 acl 0 0
Here /SAN is the mount point and gfs2 is the type of file system.
[root@rhelsrv1]# service gfs2 start
This will mount the /dev/sdb file system on the /SAN directory.
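To have the mount come back automatically after a reboot, you can also enable the gfs2 init script, along with the cluster manager it depends on (GFS2 with lock_dlm requires cman to be running):

```shell
# Enable the cluster stack and the gfs2 init script at boot.
chkconfig cman on
chkconfig gfs2 on
```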
8. On node2, i.e. rhelsrv2, simply add the same line to /etc/fstab:
/dev/sdb /SAN gfs2 acl 0 0
9. [root@rhelsrv2]# service gfs2 start
It will mount the file system on /SAN.
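The mkfs.gfs2 step above referenced the cluster name iscsicluster, which has to exist before lock_dlm will work. A minimal /etc/cluster/cluster.conf sketch for the two nodes (the node names and the lack of fence devices are lab-only assumptions; real clusters need fencing configured):

```shell
# Write a minimal two-node cluster.conf on both nodes.
# two_node="1" lets the cluster reach quorum with only two members.
cat > /etc/cluster/cluster.conf <<'EOF'
<?xml version="1.0"?>
<cluster name="iscsicluster" config_version="1">
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="rhelsrv1" nodeid="1"/>
    <clusternode name="rhelsrv2" nodeid="2"/>
  </clusternodes>
</cluster>
EOF

# Start the cluster manager on each node.
service cman start
```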