====== Gluster FS Setup ======

Cluster Node 1: gluster01.site.example.com ( 10.11.12.13 ) \\
Cluster Node 2: gluster02.site.example.com \\
Cluster Node 3: gluster03.site.example.com \\

IMPORTANT NOTE: Please ensure the glusterfs security group is added to all glusterfs nodes so that every port needed for glusterfs communication is open. \\

<code>
glusterfs
ALLOW 24007:24007 from 10.0.0.0/8
ALLOW 49152:49155 from 10.0.0.0/8
</code>
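If the nodes are not behind a provider security group, the same policy can be approximated with a host firewall. A minimal sketch using ufw (assuming ufw is the firewall in use; the port range matches the brick ports in the rules above):

<code>
# Management traffic for glusterd from the internal network
ufw allow proto tcp from 10.0.0.0/8 to any port 24007
# Brick ports used by this deployment
ufw allow proto tcp from 10.0.0.0/8 to any port 49152:49155
</code>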


==== Server Config ====


1) No entries were added to the /etc/hosts file; we rely on DNS for name resolution of all services.
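Because everything depends on DNS, it is worth confirming that each node resolves correctly before peering; a quick check with getent:

<code>
getent hosts gluster01.site.example.com
getent hosts gluster02.site.example.com
getent hosts gluster03.site.example.com
</code>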

2) Install the server components on all cluster nodes:

<code>
# apt-get install glusterfs-server
</code>
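After installing, it may help to confirm the gluster management daemon is running. The service name below is the one the Debian/Ubuntu package uses; adjust for other distributions:

<code>
# service glusterfs-server status
</code>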

3) From one of the hosts, peer with the other hosts. It doesn't matter which server you use, but we will be performing these commands from our gluster01 server. \\

<code>
# gluster peer probe gluster02.site.example.com
peer probe: success

# gluster peer probe gluster03.site.example.com
peer probe: success
</code>

4) The success messages mean that peering worked. We can check that the nodes are communicating at any time by typing:
<code>
root@gluster01:/gluster-storage# gluster peer status
Number of Peers: 2

Hostname: 10.10.22.11
Port: 24007
Uuid: 9947788d-f454-4108-843a-8ffb6b1c6b67
State: Peer in Cluster (Connected)

Hostname: gluster03.site.example.com
Port: 24007
Uuid: ab5f725e-ff89-469d-a4e5-2a42ab3293e9
State: Peer in Cluster (Connected)
</code>
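Recent GlusterFS releases also offer a one-line-per-node summary of the same membership information, which can be a quicker check:

<code>
# gluster pool list
</code>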

5) Create a storage volume (volume1) with 3 replica copies to ensure redundancy. The brick directory /gluster-storage01 will be created on each node if it doesn't already exist. The trailing force is needed because the bricks live on the root partition, which gluster refuses by default.

<code>
# gluster volume create volume1 replica 3 transport tcp gluster01.site.example.com:/gluster-storage01 gluster02.site.example.com:/gluster-storage01 gluster03.site.example.com:/gluster-storage01 force
volume create: volume1: success: please start the volume to access data
</code>

6) Start the volume:
<code>
# gluster volume start volume1
volume start: volume1: success
</code>

7) Check the volume status:

<code>
root@gluster01:/# gluster volume status
Status of volume: volume1
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick gluster01.site.example.com:/gluster-storage01     49154   Y       3168
Brick gluster02.site.example.com:/gluster-storage01     49154   Y       11095
Brick gluster03.site.example.com:/gluster-storage01     49152   Y       6355
NFS Server on localhost                                 2049    Y       3185
Self-heal Daemon on localhost                           N/A     Y       3180
NFS Server on 10.10.22.11                               2049    Y       11107
Self-heal Daemon on 10.10.22.11                         N/A     Y       11112
NFS Server on gluster03.site.example.com                N/A     N       N/A
Self-heal Daemon on gluster03.site.example.com          N/A     N       N/A

There are no active volume tasks
</code>

8) Check the volume info:

<code>
admin@gluster02:~$ sudo gluster volume info
[sudo] password for admin:

Volume Name: volume1
Type: Replicate
Volume ID: 8bfc56bf-1b59-4461-91a5-c8965a75ceea
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gluster01.site.example.com:/gluster-storage01
Brick2: gluster02.site.example.com:/gluster-storage01
Brick3: gluster03.site.example.com:/gluster-storage01
Options Reconfigured:
auth.allow: 10.10.22.11,10.10.22.12,10.11.12.13
admin@gluster02:~$
</code>


==== Client Config ====

1) Install the client components on gluster02 and gluster03:
<code>
# apt-get install glusterfs-client
</code>

2) Mount the glusterfs volume:
<code>
mkdir /storage-pool

# from the instance gluster02
mount -t glusterfs gluster02.site.example.com:/volume1 /storage-pool

# from the instance gluster03
mount -t glusterfs gluster03.site.example.com:/volume1 /storage-pool
</code>
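A mount like this fails if the named volfile server is down at mount time. The backupvolfile-server option used in the fstab entries later on this page also works on the command line; a sketch from gluster02:

<code>
# Fall back to gluster01 if gluster02 cannot serve the volfile
mount -t glusterfs -o backupvolfile-server=gluster01.site.example.com \
      gluster02.site.example.com:/volume1 /storage-pool
</code>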

3) Check df:

<code>
root@gluster02:/home/admin/scripts/configs# df -h
Filesystem                           Size  Used Avail Use% Mounted on
udev                                 2.0G   12K  2.0G   1% /dev
tmpfs                                396M  940K  395M   1% /run
/dev/vda1                             20G   11G  8.3G  57% /
none                                 4.0K     0  4.0K   0% /sys/fs/cgroup
none                                 5.0M     0  5.0M   0% /run/lock
none                                 2.0G   24K  2.0G   1% /run/shm
none                                 100M     0  100M   0% /run/user
gluster02.site.example.com:/volume1  8.5G  1.2G  6.9G  15% /storage-pool
root@gluster02:/home/admin/scripts/configs#
</code>

<code>
root@gluster03:/home/admin/scripts/scripts# mount
/dev/vda1 on / type ext4 (rw)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/cgroup type tmpfs (rw)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
none on /run/user type tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755)
none on /sys/fs/pstore type pstore (rw)
rpc_pipefs on /run/rpc_pipefs type rpc_pipefs (rw)
systemd on /sys/fs/cgroup/systemd type cgroup (rw,noexec,nosuid,nodev,none,name=systemd)
gluster03.site.example.com:/volume1 on /storage-pool type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
root@gluster03:/home/admin/scripts/scripts#
</code>

==== Redundancy Tests ====

a) Write data under /storage-pool on the gluster client. It is automatically replicated to /storage-pool on the gluster02 and gluster03 instances.
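A minimal way to exercise this (the file names are arbitrary examples):

<code>
# On gluster02: create a handful of test files on the mounted volume
touch /storage-pool/repl-test-{1..5}

# On gluster03: the same files should appear almost immediately
ls -l /storage-pool/repl-test-*
</code>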

b) Rebooted the instances.

The reboots were performed and the instances auto-mounted the glusterfs filesystem, although each mount took about 20 seconds to complete once the server was reachable over ssh. When the gluster01.site.example.com instance was shut down, the mount took about 1 minute to complete.


==== Access Control ====

Restrict which client IPs may mount the volume with the auth.allow volume option:

<code>
root@gluster01:~# gluster volume set volume1 auth.allow 10.10.22.11,10.10.22.12,10.11.12.13
volume set: success
root@gluster01:~#
</code>
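The setting can be confirmed afterwards; as the volume info output above shows, it is listed under Options Reconfigured:

<code>
# gluster volume info volume1 | grep auth.allow
auth.allow: 10.10.22.11,10.10.22.12,10.11.12.13
</code>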

==== fstab entry ====

<code>
root@gluster02:/home/admin/scripts/configs# cat /etc/fstab
LABEL=cloudimg-rootfs   /        ext4   defaults        0 0
gluster02.site.example.com:/volume1 /storage-pool glusterfs defaults,_netdev,log-level=WARNING,log-file=/var/log/gluster.log,backupvolfile-server=gluster01.site.example.com 0 0

root@gluster03:/home/admin/scripts/scripts# cat /etc/fstab
LABEL=cloudimg-rootfs   /        ext4   defaults        0 0
gluster03.site.example.com:/volume1 /storage-pool glusterfs defaults,_netdev,log-level=WARNING,log-file=/var/log/gluster.log,backupvolfile-server=gluster01.site.example.com 0 0
</code>
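The entry can be verified without a reboot by unmounting and letting mount -a re-read fstab:

<code>
umount /storage-pool
mount -a
df -h /storage-pool
</code>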