GlusterFS Setup

Cluster Node 1: gluster01

Cluster Node 2: gluster02

Cluster Node 3: gluster03

IMPORTANT NOTE: Ensure the glusterfs security group is attached to all GlusterFS nodes so that every port needed for GlusterFS communication is open.

ALLOW 24007:24007   (glusterd management port)
ALLOW 49152:49155   (brick ports, one per brick)
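If the nodes run a host firewall instead of (or in addition to) the security group, equivalent iptables rules might look like the sketch below; the 10.0.0.0/24 source subnet is an assumption, substitute your gluster nodes' network:

```shell
# glusterd management port (source subnet 10.0.0.0/24 is a placeholder)
iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 24007 -j ACCEPT
# brick ports, one per brick on this node
iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 49152:49155 -j ACCEPT
```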

Server Config

1) No entries were added to the /etc/hosts file; we rely on DNS for name resolution across all services.

2) Install Server Components on all cluster nodes

# apt-get install glusterfs-server

3) From one of the hosts, peer with the other two nodes. It doesn't matter which server you use, but we will be performing these commands from our gluster01 server.

# gluster peer probe gluster02
peer probe: success

# gluster peer probe gluster03
peer probe: success

4) The output above means the peering was successful. We can check that the nodes are communicating at any time by typing:

root@gluster01:/gluster-storage# gluster peer status
Number of Peers: 2

Hostname: gluster02
Port: 24007
Uuid: 9947788d-f454-4108-843a-8ffb6b1c6b67
State: Peer in Cluster (Connected)

Hostname: gluster03
Port: 24007
Uuid: ab5f725e-ff89-469d-a4e5-2a42ab3293e9
State: Peer in Cluster (Connected)

5) Create a storage volume (volume1) with 3 replica copies to ensure redundancy. The brick directory /gluster-storage will be created on each node if it doesn't already exist.

# gluster volume create volume1 replica 3 transport tcp gluster01:/gluster-storage gluster02:/gluster-storage gluster03:/gluster-storage force
volume create: volume1: success: please start the volume to access data

6) Start the volume

# gluster volume start volume1
volume start: volume1: success

7) Check volume status

root@gluster01:/# gluster volume status
Status of volume: volume1
Gluster process                                         Port    Online  Pid
Brick gluster01:/gluster-storage                        49154   Y       3168
Brick gluster02:/gluster-storage                        49154   Y       11095
Brick gluster03:/gluster-storage                        49152   Y       6355
NFS Server on localhost                                 2049    Y       3185
Self-heal Daemon on localhost                           N/A     Y       3180
NFS Server on gluster02                                 2049    Y       11107
Self-heal Daemon on gluster02                           N/A     Y       11112
NFS Server on gluster03                                 N/A     N       N/A
Self-heal Daemon on gluster03                           N/A     N       N/A
There are no active volume tasks
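For monitoring, the Online column of this output is easy to script against. A minimal sketch (the check_bricks helper is ours, not from these notes), run here against a canned excerpt with one brick down:

```shell
# Print any Brick line whose Online column (second-from-last field) is not Y;
# exit non-zero if at least one brick is offline.
check_bricks() {
    awk '/^Brick/ { if ($(NF - 1) != "Y") { print "OFFLINE: " $2; bad = 1 } } END { exit bad }'
}

# Canned `gluster volume status` excerpt (hypothetical values):
status='Brick gluster01:/gluster-storage    49154   Y       3168
Brick gluster02:/gluster-storage    N/A     N       N/A'

if printf '%s\n' "$status" | check_bricks; then
    health=ok
else
    health=degraded
fi
echo "$health"   # -> degraded, after an OFFLINE line for gluster02
```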

8) Check volume info

admin@gluster02:~$ sudo gluster volume info
[sudo] password for admin: 
Volume Name: volume1
Type: Replicate
Volume ID: 8bfc56bf-1b59-4461-91a5-c8965a75ceea
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gluster01:/gluster-storage
Brick2: gluster02:/gluster-storage
Brick3: gluster03:/gluster-storage
Options Reconfigured:

Client Config

1) Install client components on gluster02 and gluster03

# apt-get install glusterfs-client

2) Mount the glusterfs volume

mkdir /storage-pool

# from the instance gluster02
mount -t glusterfs gluster01:/volume1 /storage-pool

# from the instance gluster03
mount -t glusterfs gluster01:/volume1 /storage-pool
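Mounting from a single server creates a mount-time dependency on that host. The glusterfs backupvolfile-server mount option lets the client fall back to another node; this is a sketch, with hostnames assumed from this cluster:

```shell
# If gluster01 is down at mount time, fetch the volume file from gluster02
# instead; once mounted, the client talks to all replica bricks directly.
mount -t glusterfs -o backupvolfile-server=gluster02 gluster01:/volume1 /storage-pool
```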

3) Check df

root@gluster02:/home/admin/scripts/configs# df -h
Filesystem                                 Size  Used Avail Use% Mounted on
udev                                       2.0G   12K  2.0G   1% /dev
tmpfs                                      396M  940K  395M   1% /run
/dev/vda1                                   20G   11G  8.3G  57% /
none                                       4.0K     0  4.0K   0% /sys/fs/cgroup
none                                       5.0M     0  5.0M   0% /run/lock
none                                       2.0G   24K  2.0G   1% /run/shm
none                                       100M     0  100M   0% /run/user
gluster01:/volume1                         8.5G  1.2G  6.9G  15% /storage-pool

root@gluster03:/home/admin/scripts/scripts# mount
/dev/vda1 on / type ext4 (rw)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/cgroup type tmpfs (rw)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
none on /run/user type tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755)
none on /sys/fs/pstore type pstore (rw)
rpc_pipefs on /run/rpc_pipefs type rpc_pipefs (rw)
systemd on /sys/fs/cgroup/systemd type cgroup (rw,noexec,nosuid,nodev,none,name=systemd)
gluster01:/volume1 on /storage-pool type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)


a) Write data under /storage-pool on the gluster client. It is automatically replicated and appears under /storage-pool on the gluster02 and gluster03 instances.

b) Reboot the instances

The reboots were performed and the instances auto-mounted the glusterfs filesystem,
but the mount took about 20 seconds to complete once the server was reachable over ssh.
With one instance shut down, the mount took about 1 minute to complete.
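Given that delay, boot scripts that depend on /storage-pool should wait for the mount rather than assume it is there. A minimal sketch using /proc/mounts; the helper name and timeout are our own, not from the original notes:

```shell
# Wait until $1 appears as a mount point in /proc/mounts, polling once per
# second for up to $2 seconds (default 60). Returns 0 on success, 1 on timeout.
wait_for_mount() {
    path=$1
    timeout=${2:-60}
    i=0
    while [ "$i" -lt "$timeout" ]; do
        # field 2 of /proc/mounts is the mount point
        if awk -v m="$path" '$2 == m { found = 1 } END { exit !found }' /proc/mounts; then
            return 0
        fi
        sleep 1
        i=$((i + 1))
    done
    return 1
}

# Usage in a boot script: block until the gluster volume is up before using it
# wait_for_mount /storage-pool 90 && cp -a /srv/backup /storage-pool/
```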

Access Control

Restrict which clients may mount the volume by setting auth.allow to a comma-separated list of client IP addresses:

root@gluster01:~# gluster volume set volume1 auth.allow ,,
volume set: success

fstab entry

root@gluster02:/home/admin/scripts/configs# cat /etc/fstab
LABEL=cloudimg-rootfs   /               ext4        defaults        0 0
gluster01:/volume1      /storage-pool   glusterfs   defaults,_netdev,log-level=WARNING,log-file=/var/log/gluster.log 0 0

root@gluster03:/home/admin/scripts/scripts# cat /etc/fstab
LABEL=cloudimg-rootfs   /               ext4        defaults        0 0
gluster01:/volume1      /storage-pool   glusterfs   defaults,_netdev,log-level=WARNING,log-file=/var/log/gluster.log 0 0
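Flattened or mistyped fstab lines fail silently at boot, so a quick field check helps. The check_fstab_line helper below is our own sketch, not part of the original setup:

```shell
# Validate one fstab line: exactly 6 fields, and glusterfs mounts must carry
# _netdev (so the mount waits for networking) and a clean options string.
check_fstab_line() {
    printf '%s\n' "$1" | awk '{
        if (NF != 6)                              { print "bad field count: " NF; exit 1 }
        if ($3 == "glusterfs" && $4 !~ /_netdev/) { print "missing _netdev"; exit 1 }
        if ($4 ~ /,,|,$/)                         { print "stray comma in options"; exit 1 }
        print "ok"
    }'
}

# Server name gluster01 is an assumption; any node in the pool works here.
line='gluster01:/volume1 /storage-pool glusterfs defaults,_netdev,log-level=WARNING 0 0'
check_fstab_line "$line"   # -> ok
```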

rb/glusterfs.txt · Last modified: 08/09/2018 02:07 by andrew