glocktop(8) | System Manager's Manual | glocktop(8) |
glocktop - Display or print active GFS2 locks.
glocktop [OPTIONS]
The glocktop tool is used to display active GFS2 inter-node locks, also known as glocks. Simply put, it's a tool to filter and interpret the contents of the glocks debugfs file. The glocks debugfs file shows all glocks known to GFS2, their holders, and technical data such as flags. The glocktop tool will only show the glocks that are important: glocks that are being held or for which there are waiters. It also interprets the debugfs file of DLM (Distributed Lock Manager).
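To illustrate the filtering described above, here is a minimal Python sketch. It is illustrative only, not glocktop's actual implementation: the debugfs path and the interpretation of the H (holding) and W (waiting) holder flags are assumptions. It reads a glocks debugfs file and prints only the stanzas that have a current holder or a waiter.

#!/usr/bin/env python3
# Illustrative sketch only (not glocktop's implementation): print the glock
# stanzas from a glocks debugfs file that have a holder or a waiter.
# The path mentioned below and the meaning of the H/W holder flags are assumptions.
import sys

def interesting_stanzas(path):
    stanza, wanted = [], False
    with open(path) as f:
        for line in f:
            if line.startswith("G:"):          # a new glock stanza begins
                if stanza and wanted:
                    yield "".join(stanza)
                stanza, wanted = [line], False
            elif stanza:
                stanza.append(line)
                if line.lstrip().startswith("H:") and "f:" in line:
                    flags = line.split("f:", 1)[1].split()[0]
                    if "H" in flags or "W" in flags:   # held or waited for
                        wanted = True
        if stanza and wanted:
            yield "".join(stanza)

if __name__ == "__main__":
    # e.g. /sys/kernel/debug/gfs2/<cluster>:<fsname>/glocks (path is an assumption)
    for s in interesting_stanzas(sys.argv[1]):
        print(s, end="")

glocktop's real output, shown in the example below, adds DLM information and interpretation lines on top of this kind of filtered view.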
# glocktop
@ nate_bob1 Wed Jan 27 07:24:14 2016 @host-050
G: s:EX n:9/1 f:Iqb t:EX d:EX/0 a:0 v:0 r:2 m:200 (journal)
D: Granted EX on node 2 to pid 17468 [ended]
H: s:EX f:eH e:0 p:17468 [(ended)] gfs2_glock_nq_num+0x5b/0xa0 [gfs2]
U: (N/A:Journl) H journal 1 Held:Exclusive [Queued, Blocking]
U: (N/A:Journl) H ---> held by pid 17468 [(ended)] (Granted EX on node 2 to pid 17468 [ended])
G: s:SH n:1/1 f:Iqb t:SH d:EX/0 a:0 v:0 r:2 m:200 (non-disk)
D: Granted PR on node 2 to pid 17468 [ended]
H: s:SH f:eEH e:0 p:17468 [(ended)] gfs2_glock_nq_num+0x5b/0xa0 [gfs2]
U: (N/A:Not EX) H non-disk 1 Held:Shared [Queued, Blocking]
U: (N/A:Not EX) H ---> held by pid 17468 [(ended)] (Granted PR on node 2 to pid 17468 [ended])
G: s:EX n:2/181ec f:yIqob t:EX d:EX/0 a:1 v:0 r:3 m:200 (inode)
D: Granted EX on this node to pid 17468 [ended]
H: s:EX f:H e:0 p:17468 [(ended)] init_per_node+0x17d/0x280 [gfs2]
I: n:12/98796 t:8 f:0x00 d:0x00000201 s:24
U: (N/A:System) H inode 181ec Held:Exclusive [Dirty, Queued, Blocking]
U: (N/A:System) H ---> held by pid 17468 [(ended)] (Granted EX on this node to pid 17468 [ended])
G: s:EX n:2/181ed f:Iqob t:EX d:EX/0 a:0 v:0 r:3 m:200 (inode)
D: Granted EX on this node to pid 17468 [ended]
H: s:EX f:H e:0 p:17468 [(ended)] init_per_node+0x1b0/0x280 [gfs2]
I: n:13/98797 t:8 f:0x00 d:0x00000200 s:1048576
U: (N/A:System) H inode 181ed Held:Exclusive [Queued, Blocking]
U: (N/A:System) H ---> held by pid 17468 [(ended)] (Granted EX on this node to pid 17468 [ended])
G: s:SH n:2/183f5 f:ldrIqob t:EX d:UN/0 a:0 v:0 r:5 m:10 (inode)
D: Granted PR on node 2 to pid 17511 [python]
H: s:EX f:W e:0 p:17511 [python] gfs2_unlink+0x7e/0x250 [gfs2]
I: n:1/99317 t:4 f:0x00 d:0x00000003 s:2048
U: W inode 183f5 Is:Shared, Want:Exclusive [Demote pending, Reply pending, Queued, Blocking]
U: W ---> waiting pid 17511 [python] (Granted PR on node 2 to pid 17511 [python])
C: gfs2_unlink+0xdc/0x250 [gfs2]
C: vfs_unlink+0xa0/0xf0
C: do_unlinkat+0x163/0x260
C: sys_unlink+0x16/0x20
G: s:SH n:2/805b f:Iqob t:SH d:EX/0 a:0 v:0 r:3 m:200 (inode)
D: Granted PR on node 2 to pid 17468 [ended]
H: s:SH f:eEcH e:0 p:17468 [(ended)] init_journal+0x185/0x500 [gfs2]
I: n:5/32859 t:8 f:0x01 d:0x00000200 s:134217728
U: (N/A:Not EX) H inode 805b Held:Shared [Queued, Blocking]
U: (N/A:Not EX) H ---> held by pid 17468 [(ended)] (Granted PR on node 2 to pid 17468 [ended])
S    glocks  nondisk    inode     rgrp    iopen    flock  quota  jrnl     Total
S ---------  -------  -------  -------  -------  -------  -----  ----  --------
S Unlocked:        1        5        4        0        0      0     0        10
S   Locked:        2      245        6       58        0      0     1       313
S    Total:        3      250       10       58        0      0     1       323
S
S  Held EX:        0        2        0        0        0      0     1         3
S  Held SH:        1        1        0       57        0      0     0        59
S  Held DF:        0        0        0        0        0      0     0         0
S G Waiting:       0        1        0        0        0      0     0         1
S P Waiting:       0        1        0        0        0      0     0         1
S  DLM wait: 0
@ nate_bob0 Wed Jan 27 07:24:14 2016 @host-050
G: s:EX n:2/180e9 f:yIqob t:EX d:EX/0 a:1 v:0 r:3 m:200 (inode)
D: Granted EX on this node to pid 17465 [ended]
H: s:EX f:H e:0 p:17465 [(ended)] init_per_node+0x17d/0x280 [gfs2]
I: n:9/98537 t:8 f:0x00 d:0x00000201 s:24
U: (N/A:System) H inode 180e9 Held:Exclusive [Dirty, Queued, Blocking]
U: (N/A:System) H ---> held by pid 17465 [(ended)] (Granted EX on this node to pid 17465 [ended])
G: s:UN n:2/609b4 f:lIqob t:EX d:EX/0 a:0 v:0 r:3 m:200 (inode)
D: Granted EX on node 2 to pid 14367 [ended]
H: s:EX f:W e:0 p:16297 [delete_workqueu] gfs2_delete_inode+0x9d/0x450 [gfs2]
U: W inode 609b4 Is:Unlocked, Want:Exclusive [Queued, Blocking]
U: W ---> waiting pid 16297 [delete_workqueu] (Granted EX on node 2 to pid 14367 [ended])
C: gfs2_delete_inode+0xa5/0x450 [gfs2]
C: generic_delete_inode+0xde/0x1d0
C: generic_drop_inode+0x65/0x80
C: gfs2_drop_inode+0x37/0x40 [gfs2]
G: s:SH n:2/19 f:Iqob t:SH d:EX/0 a:0 v:0 r:3 m:200 (inode)
D: Granted PR on this node to pid 17465 [ended]
H: s:SH f:eEcH e:0 p:17465 [(ended)] init_journal+0x185/0x500 [gfs2]
I: n:4/25 t:8 f:0x01 d:0x00000200 s:134217728
U: (N/A:Not EX) H inode 19 Held:Shared [Queued, Blocking]
U: (N/A:Not EX) H ---> held by pid 17465 [(ended)] (Granted PR on this node to pid 17465 [ended])
G: s:EX n:2/180ea f:Iqob t:EX d:EX/0 a:0 v:0 r:3 m:200 (inode)
D: Granted EX on this node to pid 17465 [ended]
H: s:EX f:H e:0 p:17465 [(ended)] init_per_node+0x1b0/0x280 [gfs2]
I: n:10/98538 t:8 f:0x00 d:0x00000200 s:1048576
U: (N/A:System) H inode 180ea Held:Exclusive [Queued, Blocking]
U: (N/A:System) H ---> held by pid 17465 [(ended)] (Granted EX on this node to pid 17465 [ended])
G: s:EX n:9/0 f:Iqb t:EX d:EX/0 a:0 v:0 r:2 m:200 (journal)
D: Granted EX on this node to pid 17465 [ended]
H: s:EX f:eH e:0 p:17465 [(ended)] gfs2_glock_nq_num+0x5b/0xa0 [gfs2]
U: (N/A:Journl) H journal 0 Held:Exclusive [Queued, Blocking]
U: (N/A:Journl) H ---> held by pid 17465 [(ended)] (Granted EX on this node to pid 17465 [ended])
G: s:UN n:2/4fe12 f:ldIqob t:EX d:UN/0 a:0 v:0 r:4 m:10 (inode)
H: s:EX f:W e:0 p:17523 [python] gfs2_rename+0x344/0x8b0 [gfs2]
H: s:SH f:AW e:0 p:17527 [python] gfs2_permission+0x176/0x210 [gfs2]
U: W inode 4fe12 Is:Unlocked, Want:Exclusive [Demote pending, Queued, Blocking]
U: W ---> waiting pid 17523 [python]
C: gfs2_permission+0x17f/0x210 [gfs2]
C: __link_path_walk+0xb3/0x1000
C: path_walk+0x6a/0xe0
C: filename_lookup+0x6b/0xc0
U: W ---> waiting pid 17527 [python]
C: do_unlinkat+0x107/0x260
C: sys_unlink+0x16/0x20
C: system_call_fastpath+0x16/0x1b
C: 0xffffffffffffffff
G: s:SH n:1/1 f:Iqb t:SH d:EX/0 a:0 v:0 r:2 m:200 (non-disk)
D: Granted PR on node 2 to pid 14285 [ended]
D: Granted PR on this node to pid 17465 [ended]
H: s:SH f:eEH e:0 p:17465 [(ended)] gfs2_glock_nq_num+0x5b/0xa0 [gfs2]
U: (N/A:Not EX) H non-disk 1 Held:Shared [Queued, Blocking]
U: (N/A:Not EX) H ---> held by pid 17465 [(ended)] (Granted PR on node 2 to pid 14285 [ended]) (Granted PR on this node to pid 17465 [ended])
S    glocks  nondisk    inode     rgrp    iopen    flock  quota  jrnl     Total
S ---------  -------  -------  -------  -------  -------  -----  ----  --------
S Unlocked:        1        8        7        0        0      0     0        16
S   Locked:        2      208        3       41        0      0     1       256
S    Total:        3      216       10       41        0      0     1       272
S
S  Held EX:        0        2        0        0        0      0     1         3
S  Held SH:        1        1        0       41        0      0     0        43
S  Held DF:        0        0        0        0        0      0     0         0
S G Waiting:       0        2        0        0        0      0     0         2
S P Waiting:       0        3        0        0        0      0     0         3
S  DLM wait: 0
From this example output, we can see there are two GFS2 file systems mounted on system host-050: nate_bob1 and nate_bob0. In nate_bob1, six glocks are shown, but we can ignore all the ones marked (N/A:...) because they are either system files or held in SHared mode, and therefore other nodes can hold the lock in SHared mode as well.
There is one glock, for inode 183f5, that has a process waiting to hold it. The lock is currently held in SHared mode (s:SH on the G: line), but process 17511 (python) wants to hold the lock in EXclusive mode (s:EX on the H: line, whose f:W flag marks it as a waiter). That process has a call stack indicating it is trying to acquire the glock from gfs2_unlink. The DLM says the lock is currently granted on node 2 in PR (Protected Read) mode.
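The same fields can be pulled out programmatically. Below is a small illustrative sketch (not part of glocktop) that reports each holder's pid, requested mode, and whether it is holding or waiting; treating the H holder flag as "holding" and the W flag as "waiting" is an assumption based on the output above.

# Illustrative sketch only: pick holders and waiters out of one glock stanza.
import re

def describe(stanza_lines):
    glock = stanza_lines[0]
    state = re.search(r"s:(\w+)", glock).group(1)        # current glock state, e.g. SH
    print(f"glock {glock.split()[2]} state {state}")
    for line in stanza_lines[1:]:
        line = line.strip()
        if not line.startswith("H:"):
            continue
        want = re.search(r"s:(\w+)", line).group(1)       # mode this holder wants
        flags = re.search(r"f:(\S+)", line).group(1)
        pid = re.search(r"p:(\d+)", line).group(1)
        role = "waiting" if "W" in flags else "holding" if "H" in flags else "?"
        print(f"  pid {pid} {role}, wants {want}")

# Applied to the 2/183f5 stanza from the example above:
describe([
    "G: s:SH n:2/183f5 f:ldrIqob t:EX d:UN/0 a:0 v:0 r:5 m:10",
    "H: s:EX f:W e:0 p:17511 [python] gfs2_unlink+0x7e/0x250 [gfs2]",
])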
For file system nate_bob0, seven glocks are listed. All but two are uninteresting: locks 2/609b4 and 2/4fe12 have processes waiting to hold them.
In the summary data for nate_bob0, the G Waiting and P Waiting rows show there are 3 processes waiting for 2 inode glocks (so one of those glocks has more than one process waiting).
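A tally along those lines could be sketched as follows (illustrative only; the exact accounting glocktop performs is an assumption): for each glock type, count the glocks that have at least one waiter (G Waiting) and the total number of waiting processes (P Waiting).

# Illustrative sketch of a "G Waiting" / "P Waiting" style tally.
from collections import defaultdict

def tally_waiters(stanzas):
    """stanzas: iterable of glock stanzas, each a list of lines (G: line first)."""
    g_waiting = defaultdict(int)   # glocks with at least one waiter, per type
    p_waiting = defaultdict(int)   # waiting processes, per type
    for lines in stanzas:
        # glocktop labels each G: line with its type, e.g. "... m:200 (inode)"
        gtype = lines[0].rsplit("(", 1)[-1].strip(")\n ")
        waiters = sum(
            1 for l in lines[1:]
            if l.lstrip().startswith("H:") and "f:" in l
            and "W" in l.split("f:", 1)[1].split()[0]
        )
        if waiters:
            g_waiting[gtype] += 1
            p_waiting[gtype] += waiters
    return g_waiting, p_waiting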
Since DLM wait is 0 in the summary data for both GFS2 mount points, nobody is waiting for DLM to grant a lock.
Since the GFS2 debugfs files are completely separate from the DLM debugfs files, and locks can change status in a few nanoseconds, there will always be a lag between the GFS2 view of a lock and the DLM view of a lock. If there is some kind of long-term hang, the two views are more likely to match. Under ordinary conditions, however, by the time glocktop gets around to fetching the DLM status of a lock, the information may already have changed. Therefore, don't be surprised if the DLM's view of a lock is at odds with the corresponding glock.
Since iopen glocks are held by the thousands, glocktop skips most of the information related to them unless there's a waiter. For that reason, iopen lock problems may be difficult to debug with glocktop.
It doesn't handle very large numbers (millions) of glocks.