This article explains how to install and configure GlusterFS on CentOS 7 running on AArch64. GlusterFS is an open-source, scale-out filesystem that aggregates multiple storage servers over InfiniBand or TCP into one large network filesystem.
Requirements
To configure GlusterFS, you need two or more servers (AArch64) running CentOS 7. The servers can be either physical or virtual. I'm using two virtual servers here, with host names 'gfs1' and 'gfs2'. Both need working network connectivity, and each node needs a storage device. In the examples used in this article, each node has a 2 GB virtual storage disk.
Add the IP address and hostname of each server to /etc/hosts on both nodes:
45.79.161.123 gfs1
45.79.174.123 gfs2
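If you prefer to script this step, the same entries can be appended from the shell on each node (the IPs below are the example addresses used in this article; substitute your own):
[root@gfs1 ~]# echo "45.79.161.123 gfs1" >> /etc/hosts
[root@gfs1 ~]# echo "45.79.174.123 gfs2" >> /etc/hosts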
GlusterFS Installation
Before proceeding with the installation, enable both the EPEL (Extra Packages for Enterprise Linux) and GlusterFS repositories on both servers so that external dependencies can be resolved during installation. If you enable only the GlusterFS repository and not EPEL, you are likely to hit the following error while installing glusterfs-server:
Error: Package: glusterfs-server-3.7.0-2.el7.x86_64 (glusterfs-epel)
Requires: liburcu-cds.so.1()(64bit)
Error: Package: glusterfs-server-3.7.0-2.el7.x86_64 (glusterfs-epel)
Requires: liburcu-bp.so.1()(64bit)
Enabling the EPEL repository in CentOS:
Use wget to fetch the release package and install it with rpm. (The epel-release package is noarch, so the same rpm works on AArch64 even though it is fetched from the x86_64 directory of the mirror.)
[root@gfs1 ~]# wget http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
--2015-05-26 10:35:33-- http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
Resolving dl.fedoraproject.org (dl.fedoraproject.org)... 209.132.181.24, 209.132.181.25, 209.132.181.23, ...
Connecting to dl.fedoraproject.org (dl.fedoraproject.org)|209.132.181.24|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 14524 (14K) [application/x-rpm]
Saving to: epel-release-7-5.noarch.rpm
100%[======================================>] 14,524 --.-K/s in 0.06s
2015-05-26 10:35:33 (239 KB/s) - 'epel-release-7-5.noarch.rpm' saved [14524/14524]
[root@localhost ~]# rpm -ivh epel-release-7-5.noarch.rpm
warning: epel-release-7-5.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 352c64e5: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:epel-release-7-5 ################################# [100%]
Enabling the GlusterFS repository:
[root@gfs1 ~]# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
--2015-05-26 10:37:49-- http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
Resolving download.gluster.org (download.gluster.org)... 50.57.69.89
Connecting to download.gluster.org (download.gluster.org)|50.57.69.89|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1055 (1.0K) [text/plain]
Saving to: /etc/yum.repos.d/glusterfs-epel.repo
100%[======================================>] 1,055 --.-K/s in 0s
2015-05-26 10:37:49 (81.2 MB/s) - /etc/yum.repos.d/glusterfs-epel.repo saved [1055/1055]
Follow the steps below on both servers.
Install the glusterfs-server package on both:
[root@gfs1 ~]# yum install glusterfs-server
Now, start the glusterd daemon:
[root@gfs1 ~]# service glusterd start
Redirecting to /bin/systemctl start glusterd.service
Verify that the service has started successfully:
[root@gfs1 ~]# service glusterd status
Redirecting to /bin/systemctl status glusterd.service
glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled)
Active: active (running) since Tue 2015-05-26 10:42:08 UTC; 38s ago
Process: 13418 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid (code=exited, status=0/SUCCESS)
Main PID: 13419 (glusterd)
CGroup: /system.slice/glusterd.service
13419 /usr/sbin/glusterd -p /var/run/glusterd.pid
May 26 10:42:08 localhost.localdomain systemd[1]: Started GlusterFS, a cluste...
Hint: Some lines were ellipsized, use -l to show in full.
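Note that the 'Loaded:' line above reports the unit as disabled, so glusterd will not start automatically after a reboot. To have it start at boot, enable it on both nodes:
[root@gfs1 ~]# systemctl enable glusterd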
Disable SELinux, in case it is enabled, by setting SELINUX=disabled or SELINUX=permissive in the file /etc/sysconfig/selinux.
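If you prefer not to edit the file by hand, here is a minimal sketch that switches the system to permissive mode, both in the config file and immediately in the running kernel (it assumes SELinux is currently set to enforcing; it edits /etc/selinux/config, the real file that /etc/sysconfig/selinux normally points to):
[root@gfs1 ~]# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
[root@gfs1 ~]# setenforce 0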
Next, flush the iptables rules:
[root@gfs1 ~]# iptables -F
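Flushing iptables leaves the firewall wide open, which is acceptable for a test setup. On CentOS 7 the active firewall is usually firewalld; if you would rather keep it enabled, a hedged alternative is to open the ports GlusterFS uses (24007-24008 for the management daemons, plus one port per brick starting at 49152):
[root@gfs1 ~]# firewall-cmd --permanent --add-port=24007-24008/tcp
[root@gfs1 ~]# firewall-cmd --permanent --add-port=49152-49153/tcp
[root@gfs1 ~]# firewall-cmd --reload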
Now, create identical partitions on both nodes. I'm using the 2 GB disk /dev/xvdc here.
fdisk /dev/xvdc
Create a new partition using the 'n' option. Choose 'p' for a primary partition, then accept the defaults that follow. When done, enter 'w' to write the partition table to disk and exit.
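The same fdisk dialog can be scripted. Below is a minimal sketch that pipes the equivalent answers into fdisk (new, primary, partition 1, default start and end sectors, write); treat it as illustrative and double-check the device name first, since it rewrites the partition table:
[root@gfs1 ~]# printf 'n\np\n1\n\n\nw\n' | fdisk /dev/xvdc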
Create a filesystem on the newly created partition:
mkfs.ext4 /dev/xvdc1
Mount it on a directory called /data/brick:
[root@gfs1 ~]# mkdir -p /data/brick
[root@gfs1 ~]# mount /dev/xvdc1 /data/brick
Add an entry to /etc/fstab to retain the mount after a reboot:
[root@gfs1 ~]# echo "/dev/xvdc1 /data/brick ext4 defaults 0 0" >> /etc/fstab
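A quick way to confirm the fstab entry works before the next reboot: unmount the brick, let mount re-read fstab, and check that the filesystem came back:
[root@gfs1 ~]# umount /data/brick
[root@gfs1 ~]# mount -a
[root@gfs1 ~]# df -h /data/brick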
Now, we need to configure the trusted pool.
Configuration
We need to create a trusted storage pool from the gluster servers that provide the bricks for the volumes.
Execute the command below on the first server:
[root@gfs1 ~]# gluster peer probe gfs2
peer probe: success.
Execute on the second server:
[root@gfs2 ~]# gluster peer probe gfs1
peer probe: success.
Verify the storage pool:
[root@gfs1 ~]# gluster pool list
UUID Hostname State
4d1d974d-4c75-424c-a788-7f0e71002e02 gfs2 Connected
473b1bc5-b8c0-4cea-ac86-568a77d0edf0 localhost Connected
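You can also check the peers from either node with gluster peer status, which reports the hostname, UUID, and connection state of every other node in the pool:
[root@gfs1 ~]# gluster peer status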
Setting up a GlusterFS volume:
To set up the volume, it is sufficient to execute the commands below on only one of the servers. I'm using the first server (gfs1) here.
[root@gfs1 ~]# gluster volume status
No volumes present
I'm creating a replicated volume in this example, as it provides high availability and reliability. For more details on the different volume types, refer to the Gluster community documentation.
[root@gfs1 ~]# mkdir /data/brick/gvol0
[root@gfs1 ~]# gluster volume create gvol0 replica 2 gfs1:/data/brick/gvol0 gfs2:/data/brick/gvol0
volume create: gvol0: success: please start the volume to access data.
Start the newly created volume:
[root@localhost ~]# gluster volume start gvol0
volume start: gvol0: success
Verify the details:
[root@localhost ~]# gluster volume info
Volume Name: gvol0
Type: Replicate
Volume ID: 4a61822d-75cf-402b-bad4-19ae57626673
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gfs1:/data/brick/gvol0
Brick2: gfs2:/data/brick/gvol0
Options Reconfigured:
performance.readdir-ahead: on
Yes, you are almost there! You just have to mount the newly created volume on any mount point and start using it. Mount it on both nodes:
[root@gfs1 ~]# mount -t glusterfs gfs1:/gvol0 /mnt
[root@gfs2 ~]# mount -t glusterfs gfs1:/gvol0 /mnt
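To make the client mounts survive a reboot as well, you can add them to /etc/fstab too. The sketch below also passes backupvolfile-server, an option of the GlusterFS mount helper that lets the client fetch the volume layout from gfs2 when gfs1 is unreachable; verify the exact option name against your installed version, as it has varied across releases:
[root@gfs1 ~]# echo "gfs1:/gvol0 /mnt glusterfs defaults,_netdev,backupvolfile-server=gfs2 0 0" >> /etc/fstab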
Copy some data to the mounted volume from either server and verify that it is accessible from the other server as well.
[root@gfs1 ~]# cp /var/log/yum.log /mnt
[root@gfs2 mnt]# ls
yum.log
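Since gvol0 is a replicated volume, the file should also appear inside the brick directory on both nodes, which you can confirm directly (but always read and write through the mount point, never through the bricks themselves):
[root@gfs1 ~]# ls /data/brick/gvol0
[root@gfs2 ~]# ls /data/brick/gvol0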
Conclusion
Congratulations! You have completed the configuration of GlusterFS on your CentOS 7 systems. The mount point now acts as a single filesystem that can be used to create, edit, or delete files from either node. The entire installation and setup process is quite simple and does not take much time. For additional resources on GlusterFS, refer to gluster.org.