PACEMAKER-SCHEDULER(7) | Pacemaker Configuration | PACEMAKER-SCHEDULER(7)
pacemaker-schedulerd - Pacemaker scheduler options
[no-quorum-policy=select] [symmetric-cluster=boolean] [maintenance-mode=boolean] [start-failure-is-fatal=boolean] [enable-startup-probes=boolean] [shutdown-lock=boolean] [shutdown-lock-limit=time] [stonith-enabled=boolean] [stonith-action=select] [stonith-timeout=time] [have-watchdog=boolean] [concurrent-fencing=boolean] [startup-fencing=boolean] [priority-fencing-delay=time] [cluster-delay=time] [batch-limit=integer] [migration-limit=integer] [stop-all-resources=boolean] [stop-orphan-resources=boolean] [stop-orphan-actions=boolean] [remove-after-stop=boolean] [pe-error-series-max=integer] [pe-warn-series-max=integer] [pe-input-series-max=integer] [node-health-strategy=select] [node-health-base=integer] [node-health-green=integer] [node-health-yellow=integer] [node-health-red=integer] [placement-strategy=select]
Cluster options used by Pacemaker's scheduler
no-quorum-policy = select [stop]
What to do when the cluster does not have quorum. Allowed values: stop, freeze, ignore, demote, suicide
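For example, a two-node cluster is often configured to freeze rather than stop resources when quorum is lost. A minimal sketch using crm_attribute(8), run on a cluster node:

    # Keep resources running where they are, but start nothing new, without quorum
    crm_attribute --type crm_config --name no-quorum-policy --update freeze

    # Check the currently configured value
    crm_attribute --type crm_config --name no-quorum-policy --query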
symmetric-cluster = boolean [true]
Whether resources can run on any node by default
maintenance-mode = boolean [false]
Whether the cluster should refrain from monitoring, starting, and stopping resources
start-failure-is-fatal = boolean [true]
When true, the cluster will immediately ban a resource from a node if it fails to start there. When false, the cluster will instead check the resource's fail count against its migration-threshold.
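For example, to let a failed start be retried on the same node a few times before the resource is banned, both settings can be combined as sketched below ("my-rsc" is an illustrative resource ID):

    # Count start failures against migration-threshold instead of banning immediately
    crm_attribute --type crm_config --name start-failure-is-fatal --update false

    # Allow my-rsc to fail up to 3 times on a node before it must move elsewhere
    crm_resource --resource my-rsc --meta --set-parameter migration-threshold --parameter-value 3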
enable-startup-probes = boolean [true]
Whether the cluster should check for active resources during start-up
shutdown-lock = boolean [false]
When true, resources active on a node when it is cleanly shut down are kept "locked" to that node (not allowed to run elsewhere) until they start again on that node after it rejoins (or for at most shutdown-lock-limit, if set). Stonith resources and Pacemaker Remote connections are never locked. Clone and bundle instances and the promoted role of promotable clones are currently never locked, though support could be added in a future release.
shutdown-lock-limit = time [0]
If shutdown-lock is true and this is set to a nonzero time duration, shutdown locks will expire after this much time has passed since the shutdown was initiated, even if the node has not rejoined.
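For example, to keep a rebooted node's resources locked to it for at most 30 minutes, the two options can be set together. A sketch using cibadmin(8); the property-set and nvpair IDs are illustrative:

    cibadmin --modify --allow-create --scope crm_config --xml-text '
      <cluster_property_set id="cib-bootstrap-options">
        <nvpair id="opt-shutdown-lock" name="shutdown-lock" value="true"/>
        <nvpair id="opt-shutdown-lock-limit" name="shutdown-lock-limit" value="30m"/>
      </cluster_property_set>'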
stonith-enabled = boolean [true]
Whether nodes may be fenced as part of recovery. If false, unresponsive nodes are immediately assumed to be harmless, and resources that were active on them may be recovered elsewhere. This can result in a "split-brain" situation, potentially leading to data loss and/or service unavailability.
stonith-action = select [reboot]
Action to send to fence device when a node needs to be fenced ("poweroff" is a deprecated alias for "off"). Allowed values: reboot, off, poweroff
stonith-timeout = time [60s]
This value is not used by Pacemaker, but is kept for backward compatibility, and certain legacy fence agents might use it.
have-watchdog = boolean [false]
This is set automatically by the cluster according to whether SBD is detected to be in use. User-configured values are ignored. The value `true` is meaningful if diskless SBD is used and `stonith-watchdog-timeout` is nonzero; in that case, if fencing is required, watchdog-based self-fencing will be performed via SBD without requiring an explicitly configured fencing resource.
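For example, with diskless SBD, watchdog self-fencing becomes usable once stonith-watchdog-timeout is nonzero; have-watchdog itself is only ever read back. A sketch (the 10s value is illustrative):

    # stonith-watchdog-timeout is the only user-configured piece here
    crm_attribute --type crm_config --name stonith-watchdog-timeout --update 10s

    # have-watchdog is maintained by the cluster; query it, do not set it
    crm_attribute --type crm_config --name have-watchdog --query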
concurrent-fencing = boolean [false]
Allow performing fencing operations in parallel
startup-fencing = boolean [true]
Whether to fence unseen nodes at start-up. Setting this to false may lead to a "split-brain" situation, potentially leading to data loss and/or service unavailability.
priority-fencing-delay = time [0]
Apply the specified delay to fencing actions targeting the lost nodes with the highest total resource priority, in case our partition does not hold the majority of cluster nodes, so that the more significant nodes are likelier to win any fencing match. This is especially meaningful in a split-brain situation in a two-node cluster. A promoted resource instance counts as base priority + 1 in this calculation if its base priority is not 0. Any static or random delays introduced by `pcmk_delay_base/max` configured for the corresponding fencing resources are added to this delay, so this delay should be significantly greater than (safely, twice) the maximum `pcmk_delay_base/max`. By default, priority fencing delay is disabled.
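For example, in a two-node cluster, the node hosting a high-priority resource can be given the edge in a post-split fencing race, as sketched below ("db" is an illustrative resource ID; keep the delay well above any configured pcmk_delay_base/max):

    # Give the important resource a nonzero priority so its node outweighs the peer
    crm_resource --resource db --meta --set-parameter priority --parameter-value 10

    # Fencing aimed at the higher-priority node is delayed by 60s
    crm_attribute --type crm_config --name priority-fencing-delay --update 60s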
cluster-delay = time [60s]
The node elected Designated Controller (DC) will consider an action failed if it does not get a response from the node executing the action within this time (after considering the action's own timeout). The "correct" value will depend on the speed and load of your network and cluster nodes.
batch-limit = integer [0]
The "correct" value will depend on the speed and load of your network and cluster nodes. If set to 0, the cluster will impose a dynamically calculated limit when any node has a high load.
migration-limit = integer [-1]
The number of live migration actions that the cluster is allowed to execute in parallel on a node (-1 means no limit)
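For example, batch-limit and migration-limit can be used together to throttle cluster activity during a rolling update, as in this sketch (values are illustrative):

    # At most 10 actions in flight across the whole cluster
    crm_attribute --type crm_config --name batch-limit --update 10

    # At most 2 parallel live migrations per node
    crm_attribute --type crm_config --name migration-limit --update 2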
stop-all-resources = boolean [false]
Whether the cluster should stop all active resources
stop-orphan-resources = boolean [true]
Whether to stop resources that were removed from the configuration
stop-orphan-actions = boolean [true]
Whether to cancel recurring actions that were removed from the configuration
remove-after-stop = boolean [false]
Values other than default are poorly tested and potentially dangerous. This option will be removed in a future release.
pe-error-series-max = integer [-1]
The number of scheduler inputs resulting in errors to save. Zero to disable, -1 to store unlimited.
pe-warn-series-max = integer [5000]
The number of scheduler inputs resulting in warnings to save. Zero to disable, -1 to store unlimited.
pe-input-series-max = integer [4000]
The number of other scheduler inputs to save. Zero to disable, -1 to store unlimited.
node-health-strategy = select [none]
Requires external entities to create node attributes (named with the prefix "#health") with values "red", "yellow", or "green". Allowed values: none, migrate-on-red, only-green, progressive, custom
node-health-base = integer [0]
Only used when node-health-strategy is set to progressive.
node-health-green = integer [0]
Only used when node-health-strategy is set to custom or progressive.
node-health-yellow = integer [0]
Only used when node-health-strategy is set to custom or progressive.
node-health-red = integer [-INFINITY]
Only used when node-health-strategy is set to custom or progressive.
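For example, under the progressive strategy an external monitor can degrade a node's health score and drive resources away, as sketched below ("#health-smart" is an illustrative attribute name; attrd_updater(8) must run on the affected node):

    # Score health transitions: each yellow attribute costs 10 points
    crm_attribute --type crm_config --name node-health-strategy --update=progressive
    crm_attribute --type crm_config --name node-health-yellow --update=-10

    # An external agent reports a failing disk on the local node
    attrd_updater --name "#health-smart" --update red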
placement-strategy = select [default]
How the cluster should allocate resources to nodes. Allowed values: default, utilization, minimal, balanced
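For example, with the utilization strategy, placement compares user-defined capacity attributes on nodes against requirements declared on resources. A sketch ("cpu", "node1", and "big-app" are illustrative names):

    # Prefer nodes that can actually satisfy the resource's declared demand
    crm_attribute --type crm_config --name placement-strategy --update utilization

    # node1 offers 8 units of "cpu"; big-app consumes 4 of them
    crm_attribute --node node1 --utilization --name cpu --update 8
    crm_resource --resource big-app --utilization --set-parameter cpu --parameter-value 4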
Andrew Beekhof <andrew@beekhof.net>
07/09/2023 | Pacemaker Configuration