gfs2(5) | File Formats Manual | gfs2(5) |
gfs2 - GFS2 reference guide
Overview of the GFS2 filesystem
GFS2 is a clustered filesystem, designed for sharing data between multiple nodes connected to a common shared storage device. It can also be used as a local filesystem on a single node; however, since the design is aimed at clusters, this will usually result in lower performance than a filesystem designed specifically for single-node use.
GFS2 is a journaling filesystem and one journal is required for each node that will mount the filesystem. The one exception is spectator mounts, which are equivalent to mounting a read-only block device; they can neither recover a journal nor write to the filesystem, so they do not require a journal assigned to them.
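For example, a node that only needs read-only access could use a spectator mount (a minimal sketch; the device path and mount point are illustrative):

mount -t gfs2 -o spectator /dev/vg_cluster/lv_gfs2 /mnt/gfs2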
The LockProtoName must be one of the supported locking protocols; currently these are lock_nolock and lock_dlm.
The default lock protocol name is initially written to disk when the filesystem is created with mkfs.gfs2(8) (the -p option). It can be changed on-disk using the gfs2_tool(8) utility's sb proto command.
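For example, to change the on-disk default to lock_dlm on an unmounted filesystem (a sketch; the device path is illustrative):

gfs2_tool sb /dev/vg_cluster/lv_gfs2 proto lock_dlm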
The lockproto mount option should be used only under special circumstances in which you want to temporarily use a different lock protocol without changing the on-disk default. Using the incorrect lock protocol on a cluster filesystem mounted from more than one node will almost certainly result in filesystem corruption.
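For example, to temporarily mount a filesystem without cluster locking on a single node, when you are certain no other node has it mounted (a sketch; paths are illustrative):

mount -t gfs2 -o lockproto=lock_nolock /dev/vg_cluster/lv_gfs2 /mnt/gfs2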
The format of LockTableName is lock-module-specific. For lock_dlm, the format is clustername:fsname. For lock_nolock, the field is ignored.
The default cluster/filesystem name is initially written to disk when the filesystem is created with mkfs.gfs2(8) (the -t option). It can be changed on-disk using the gfs2_tool(8) utility's sb table command.
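For example, to change the on-disk lock table name on an unmounted filesystem (a sketch; the names and device path are illustrative):

gfs2_tool sb /dev/vg_cluster/lv_gfs2 table newcluster:newfs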
The locktable mount option should be used only under special circumstances in which you want to mount the filesystem in a different cluster, or mount it as a different filesystem name, without changing the on-disk default.
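For example, to mount under a different cluster and filesystem name for one mount only (a sketch; the names and paths are illustrative):

mount -t gfs2 -o locktable=othercluster:otherfs /dev/vg_cluster/lv_gfs2 /mnt/gfs2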
This is turned on automatically by the lock_nolock module.
GFS2 doesn't support errors=remount-ro or data=journal. It is not possible to switch support for user and group quotas on and off independently of each other. Some of the error messages are rather cryptic; if you encounter one of these messages, check firstly that gfs_controld is running and secondly that you have enough journals on the filesystem for the number of nodes in use.
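If a filesystem has too few journals for the number of nodes, additional journals can be added to a mounted filesystem with gfs2_jadd(8); for example, to add two journals (the mount point is illustrative):

gfs2_jadd -j 2 /mnt/gfs2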
mount(8) for general mount options, chmod(1) and chmod(2) for access permission flags, acl(5) for access control lists, lvm(8) for volume management, ccs(7) for cluster management, umount(8), initrd(4).
The GFS2 documentation has been split into a number of sections:
gfs2_edit(8)    A GFS2 debug tool (use with caution)
fsck.gfs2(8)    The GFS2 file system checker
gfs2_grow(8)    Growing a GFS2 file system
gfs2_jadd(8)    Adding a journal to a GFS2 file system
mkfs.gfs2(8)    Make a GFS2 file system
gfs2_quota(8)   Manipulate GFS2 disk quotas
gfs2_tool(8)    Tool to manipulate a GFS2 file system (obsolete)
tunegfs2(8)     Tool to manipulate GFS2 superblocks
GFS2 clustering is driven by the dlm, which depends on dlm_controld to provide clustering from userspace. dlm_controld clustering is built on corosync cluster/group membership and messaging.
Follow these steps to manually configure and run gfs2/dlm/corosync.
1. create /etc/corosync/corosync.conf and copy to all nodes
In this sample, replace cluster_name and IP addresses, and add nodes as needed. If using only two nodes, uncomment the two_node line. See corosync.conf(5) for more information.
totem {
        version: 2
        secauth: off
        cluster_name: abc
}

nodelist {
        node {
                ring0_addr: 10.10.10.1
                nodeid: 1
        }
        node {
                ring0_addr: 10.10.10.2
                nodeid: 2
        }
        node {
                ring0_addr: 10.10.10.3
                nodeid: 3
        }
}

quorum {
        provider: corosync_votequorum
        # two_node: 1
}

logging {
        to_syslog: yes
}
2. start corosync on all nodes
systemctl start corosync
Run corosync-quorumtool to verify that all nodes are listed.
3. create /etc/dlm/dlm.conf and copy to all nodes
* To use no fencing, use this line:
enable_fencing=0
* To use no fencing, but exercise fencing functions, use this line:
fence_all /bin/true
The "true" binary will be executed for all nodes and will succeed (exit 0) immediately.
* To use manual fencing, use this line:
fence_all /bin/false
The "false" binary will be executed for all nodes and will fail (exit 1) immediately.
When a node fails, manually run: dlm_tool fence_ack <nodeid>
* To use stonith/pacemaker for fencing, use this line:
fence_all /usr/sbin/dlm_stonith
The "dlm_stonith" binary will be executed for all nodes. If stonith/pacemaker systems are not available, dlm_stonith will fail and this config becomes the equivalent of the previous /bin/false config.
* To use an APC power switch, use these lines:
device apc /usr/sbin/fence_apc ipaddr=1.1.1.1 login=admin password=pw
connect apc node=1 port=1
connect apc node=2 port=2
connect apc node=3 port=3
Other network switch based agents are configured similarly.
* To use sanlock/watchdog fencing, use these lines:
device wd /usr/sbin/fence_sanlock path=/dev/fence/leases
connect wd node=1 host_id=1
connect wd node=2 host_id=2
unfence wd
See fence_sanlock(8) for more information.
* For other fencing configurations see dlm.conf(5) man page.
4. start dlm_controld on all nodes
systemctl start dlm
Run "dlm_tool status" to verify that all nodes are listed.
5. if using clvm, start clvmd on all nodes
systemctl start clvmd
6. make new gfs2 file systems
mkfs.gfs2 -p lock_dlm -t cluster_name:fs_name -j num /path/to/storage
The cluster_name must match the name used in step 1 above. The fs_name must be a unique name in the cluster. The -j option is the number of journals to create; there must be one for each node that will mount the fs.
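For example, for the three-node cluster named abc from step 1, an invocation might look like this (the filesystem name and device path are illustrative):

mkfs.gfs2 -p lock_dlm -t abc:gfs2vol -j 3 /dev/vg_cluster/lv_gfs2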
7. mount gfs2 file systems
mount /path/to/storage /mountpoint
Run "dlm_tool ls" to verify the nodes that have each fs mounted.
8. shut down
umount -a -t gfs2
systemctl stop clvmd
systemctl stop dlm
systemctl stop corosync
More setup information:
dlm_controld(8),
dlm_tool(8),
dlm.conf(5),
corosync(8),
corosync.conf(5)