In this blog post we will see how GFS2 (Global File System 2) on Oracle Linux 9 allows multiple cluster nodes to concurrently access a shared block storage device, making it ideal for highly available applications in a clustered environment (e.g., with KVM, OLVM, or Pacemaker clusters).
Prerequisites:
System Requirements
Oracle Linux 9 with UEK kernel
Shared block storage (iSCSI, FC, or shared disk)
Cluster infrastructure (Pacemaker/Corosync)
Fencing configured (GFS2 requires it)
1. Install Required Packages (on all nodes):
# dnf install -y gfs2-utils lvm2-lockd dlm pcs pacemaker fence-agents-all
Note: the old lvm2-cluster package no longer exists on Oracle Linux 9; lvmlockd (lvm2-lockd) together with dlm replaces it for clustered LVM locking.
2. Enable and Start Cluster Services:
# systemctl enable --now pcsd
# echo "yourpassword" | passwd --stdin hacluster
# pcs host auth node1 node2 node3 -u hacluster -p yourpassword
# pcs cluster setup gfscluster node1 node2 node3
# pcs cluster start --all
# pcs cluster enable --all
Note: pcs 0.10 and later (which Oracle Linux 9 ships) dropped the --name option; the cluster name is simply the first argument to pcs cluster setup.
3. Configure Shared Storage: If using LVM, enable cluster-aware locking:
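With the cluster up, it is worth confirming membership and setting the quorum policy GFS2 expects before touching any storage. A quick verification sketch, using the node names from the example above:

```shell
# All three nodes should report "Online" before any storage work
pcs status

# Check corosync ring health on the local node
corosync-cfgtool -s

# Recommended for GFS2: freeze (rather than stop) resources on quorum loss,
# so in-flight GFS2 I/O is not abandoned mid-operation
pcs property set no-quorum-policy=freeze
```

The no-quorum-policy=freeze setting matters because GFS2 cannot safely continue or cleanly stop while quorum is lost; freezing keeps the mounted filesystems intact until quorum returns.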
On Oracle Linux 9 the old clvmd method (lvmconf --enable-cluster and lvm2-lvmetad) has been removed; shared LVM now uses lvmlockd. On all nodes, set use_lvmlockd = 1 in /etc/lvm/lvm.conf, then run the lock managers as cloned cluster resources:
# pcs resource create dlm ocf:pacemaker:controld clone interleave=true ordered=true
# pcs resource create lvmlockd ocf:heartbeat:lvmlockd clone interleave=true ordered=true
4. Create the GFS2 Filesystem:
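The /dev/gfs_vg/gfs_lv device used in the next step has to exist first. A sketch of creating it as a shared volume group under lvmlockd; the /dev/sdb path is an assumption, substitute your actual shared LUN:

```shell
# Run once, on one node, against the shared device
pvcreate /dev/sdb
vgcreate --shared gfs_vg /dev/sdb   # --shared marks the VG for lvmlockd

# Start the VG lockspace (run on every node)
vgchange --lockstart gfs_vg

# Carve out the logical volume for GFS2 (once, on one node)
lvcreate -n gfs_lv -l 100%FREE gfs_vg
```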
# mkfs.gfs2 -p lock_dlm -t gfscluster:gfsdata -j 3 /dev/gfs_vg/gfs_lv
-t gfscluster:gfsdata → the cluster name (must match the Pacemaker cluster name) and the filesystem's lock-table name
-j 3 → one journal per node that will mount the filesystem
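If the cluster later grows beyond three nodes, journals can be added to the live filesystem without reformatting. A hedged sketch (the /mnt/gfs mount point is the one created in step 5):

```shell
# Add one more journal so a fourth node can mount; run on a node
# where the filesystem is already mounted
gfs2_jadd -j 1 /mnt/gfs

# Inspect the journal index to confirm the new journal is present
gfs2_edit -p jindex /dev/gfs_vg/gfs_lv | grep journal
```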
5. Mount GFS2 on All Nodes:
Create the mount point, add an entry to /etc/fstab, and mount:
# mkdir /mnt/gfs
# vi /etc/fstab
/dev/gfs_vg/gfs_lv  /mnt/gfs  gfs2  noatime  0 0
# mount -a
(noatime is recommended for GFS2, since atime updates generate cluster-wide lock traffic.)
6. Fencing Is Mandatory: GFS2 will not mount if fencing is not configured. Example fencing (IPMI):
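Instead of /etc/fstab, the mount can be managed by Pacemaker itself, so it is started, stopped, and fenced together with the rest of the cluster. A sketch, assuming a dlm controld clone resource (here dlm-clone) is already running; resource names are illustrative:

```shell
# Clustered mount as a cloned resource on all nodes
pcs resource create gfs2-fs ocf:heartbeat:Filesystem \
    device=/dev/gfs_vg/gfs_lv directory=/mnt/gfs fstype=gfs2 \
    options=noatime op monitor interval=10s on-fail=fence \
    clone interleave=true

# Only mount where the lock manager is up
pcs constraint order start dlm-clone then gfs2-fs-clone
pcs constraint colocation add gfs2-fs-clone with dlm-clone
```

The advantage of this approach is ordering: Pacemaker guarantees the dlm lockspace is available before any node attempts the mount, which a plain fstab entry cannot express.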
# pcs stonith create fence-node1 fence_ipmilan ip=IPMI1 username=admin password=pass lanplus=1 pcmk_host_list="node1"
(The ip/username/password parameter names are the current fence_ipmilan spellings; the older ipaddr/login/passwd aliases still work but are deprecated.)
Repeat for every node that is going to be part of the cluster.
7. Check Status: Verify GFS2 is mounted and functioning on all nodes:
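Before trusting GFS2 with data, it is worth verifying that fencing actually works, since GFS2 depends on it for correctness. A sketch; note that test-fencing power-cycles the target node, so do this in a maintenance window:

```shell
# Fencing devices should all show as Started
pcs stonith status

# Test-fence a node from another node; node2 will be rebooted
pcs stonith fence node2

# After node2 rejoins the cluster, confirm it remounted the filesystem
ssh node2 df -h /mnt/gfs
```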
# df -h /mnt/gfs
Conclusion: A GFS2 (Global File System 2) cluster offers significant benefits for high-availability and shared-storage environments. It allows multiple nodes in a cluster to access the same file system simultaneously on shared block storage, ensuring data consistency and eliminating the need for data replication across servers. This concurrent access is crucial for clustered applications, virtual machine images, and databases that require real-time coordination. GFS2 integrates tightly with cluster management tools like Pacemaker and Corosync, enabling robust failover, fencing, and resource control. By centralizing data storage and supporting coordinated access, a GFS2 cluster enhances scalability, simplifies storage management, and minimizes downtime, making it an ideal solution for enterprise-grade workloads.
BR, ZAHEER