PCS(8)                     System Administration Utilities                     PCS(8)
pcs - pacemaker/corosync configuration system
pcs [-f file] [-h] [commands]...
Control and configure pacemaker and corosync.
Example: Create a new resource called 'VirtualIP' with IP address 192.168.0.99, netmask of 32, monitored every 30 seconds, on eth2: pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.99 cidr_netmask=32 nic=eth2 op monitor interval=30s
If --strict is specified, the command will also fail if other resources would be affected.
If --promoted is used the scope of the command is limited to the Promoted role and promotable clone id must be used (instead of the resource id).
If --wait is specified, pcs will wait up to 'n' seconds for the resource to move and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.
NOTE: This command has been changed in pcs-0.11. It is equivalent to command 'resource move <resource id> --autodelete' from pcs-0.10.9. Legacy functionality of the 'resource move' command is still available as 'resource move-with-constraint <resource id>'.
If you want the resource to preferably avoid running on some nodes but be able to failover to them use 'pcs constraint location avoids'.
If lifetime is specified then the constraint will expire after that time, otherwise it defaults to infinity and the constraint can be cleared manually with 'pcs resource clear' or 'pcs constraint delete'. Lifetime is expected to be specified as ISO 8601 duration (see https://en.wikipedia.org/wiki/ISO_8601#Durations).
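For example, to move a resource with a constraint that expires after 90 minutes (the resource and node names are illustrative):

```
pcs resource move-with-constraint VirtualIP node2 lifetime=PT1H30M
```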
If --promoted is used the scope of the command is limited to the Promoted role and promotable clone id must be used (instead of the resource id).
If --wait is specified, pcs will wait up to 'n' seconds for the resource to move and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.
If you want the resource to preferably avoid running on some nodes but be able to failover to them use 'pcs constraint location avoids'.
If --promoted is used the scope of the command is limited to the Promoted role and promotable clone id must be used (instead of the resource id).
If lifetime is specified then the constraint will expire after that time, otherwise it defaults to infinity and the constraint can be cleared manually with 'pcs resource clear' or 'pcs constraint delete'. Lifetime is expected to be specified as ISO 8601 duration (see https://en.wikipedia.org/wiki/ISO_8601#Durations).
If --wait is specified, pcs will wait up to 'n' seconds for the resource to move and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.
If you want the resource to preferably avoid running on some nodes but be able to failover to them use 'pcs constraint location avoids'.
If --promoted is used the scope of the command is limited to the Promoted role and promotable clone id must be used (instead of the resource id).
If --expired is specified, only constraints with expired lifetimes will be removed.
If --wait is specified, pcs will wait up to 'n' seconds for the operation to finish (including starting and/or moving resources if appropriate) and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.
If an operation (op) is specified it will update the first found operation with the same action on the specified resource. If no operation with that action exists then a new operation will be created. (WARNING: all existing options on the updated operation will be reset if not specified.) If you want to create multiple monitor operations you should use the 'op add' & 'op remove' commands.
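For example, updating a monitor operation replaces the first matching operation, while 'op add' keeps existing operations in place (the resource name and option values are illustrative):

```
# Replaces the first existing monitor operation;
# unspecified options on that operation are reset
pcs resource update WebServer op monitor interval=20s timeout=30s

# Adds a second monitor operation alongside any existing ones
pcs resource op add WebServer monitor interval=60s timeout=30s
```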
If --agent-validation is specified, resource agent validate-all action will be used to validate resource options.
If --wait is specified, pcs will wait up to 'n' seconds for the changes to take effect and then return 0 if the changes have been processed or 1 otherwise. If 'n' is not specified it defaults to 60 minutes.
Set options are: id, score
Expression looks like one of the following:
op <operation name> [interval=<interval>]
resource [<standard>]:[<provider>]:[<type>]
defined|not_defined <node attribute>
<node attribute> lt|gt|lte|gte|eq|ne
[string|integer|number|version] <value>
date gt|lt <date>
date in_range [<date>] to <date>
date in_range <date> to duration <duration options>
date-spec <date-spec options>
<expression> and|or <expression>
(<expression>)
You may specify all or any of 'standard', 'provider' and 'type' in a resource expression. For example: 'resource ocf::' matches all resources of 'ocf' standard, while 'resource ::Dummy' matches all resources of 'Dummy' type regardless of their standard and provider.
Dates are expected to conform to ISO 8601 format.
Duration options are: hours, monthdays, weekdays, yeardays, months, weeks, years, weekyears, moon. Value for these options is an integer.
Date-spec options are: hours, monthdays, weekdays, yeardays, months, weeks, years, weekyears, moon. Value for these options is an integer or a range written as integer-integer.
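For example, a set of operation defaults applying only during working hours could be created with a date-spec rule (the set id and timeout value are illustrative):

```
pcs resource op defaults set create id=work-hours-defaults meta timeout=60s \
  rule date-spec hours=9-16 weekdays=1-5
```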
NOTE: Defaults do not apply to resources / stonith devices which override them with their own defined values.
You can use --before or --after to specify the position of the added resources relatively to some resource already existing in the group. By adding resources to a group they are already in and specifying --after or --before you can move the resources in the group.
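For example, to place a resource after a specific member of an existing group (all names are illustrative):

```
pcs resource group add WebGroup WebFS --after WebIP
```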
If --wait is specified, pcs will wait up to 'n' seconds for the operation to finish (including moving resources if appropriate) and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.
If you wish to update a resource encapsulated in the bundle, use the 'pcs resource update' command instead and specify the resource id.
If --wait is specified, pcs will wait up to 'n' seconds for the operation to finish (including moving resources if appropriate) and then return 0 on success or 1 on error. If 'n' is not specified it defaults to 60 minutes.
Set options are: id, score
Expression looks like one of the following:
resource [<standard>]:[<provider>]:[<type>]
date gt|lt <date>
date in_range [<date>] to <date>
date in_range <date> to duration <duration options>
date-spec <date-spec options>
<expression> and|or <expression>
(<expression>)
You may specify all or any of 'standard', 'provider' and 'type' in a resource expression. For example: 'resource ocf::' matches all resources of 'ocf' standard, while 'resource ::Dummy' matches all resources of 'Dummy' type regardless of their standard and provider.
Dates are expected to conform to ISO 8601 format.
Duration options are: hours, monthdays, weekdays, yeardays, months, weeks, years, weekyears, moon. Value for these options is an integer.
Date-spec options are: hours, monthdays, weekdays, yeardays, months, weeks, years, weekyears, moon. Value for these options is an integer or a range written as integer-integer.
NOTE: Defaults do not apply to resources / stonith devices which override them with their own defined values.
If the named resource is part of a group, or one numbered instance of a clone or bundled resource, the clean-up applies to the whole collective resource unless --strict is given.
If a resource id / stonith id is not specified then all resources / stonith devices will be cleaned up.
If a node is not specified then resources / stonith devices on all nodes will be cleaned up.
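For example, to clean up failed operations of a single resource on one node only (the resource and node names are illustrative):

```
pcs resource cleanup WebServer node=node1
```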
If the named resource is part of a group, or one numbered instance of a clone or bundled resource, the refresh applies to the whole collective resource unless --strict is given.
If a resource id / stonith id is not specified then all resources / stonith devices will be refreshed.
If a node is not specified then resources / stonith devices on all nodes will be refreshed.
Nodes are specified by their names and optionally their addresses. If no addresses are specified for a node, pcs will configure corosync to communicate with that node using an address provided in 'pcs host auth' command. Otherwise, pcs will configure corosync to communicate with the node using the specified addresses.
Transport knet:
This is the default transport. It allows configuring traffic encryption
and compression as well as using multiple addresses (links) for nodes.
Transport options are: ip_version, knet_pmtud_interval, link_mode
Link options are: link_priority, linknumber, mcastport, ping_interval,
ping_precision, ping_timeout, pong_count, transport (udp or sctp)
Each 'link' followed by options sets options for one link in the order the
links are defined by nodes' addresses. You can set link options for a
subset of links using a linknumber. See examples below.
Compression options are: level, model, threshold
Crypto options are: cipher, hash, model
By default, encryption is enabled with cipher=aes256 and hash=sha256. To
disable encryption, set cipher=none and hash=none.
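For example, to create a cluster with knet traffic encryption disabled (the cluster and node names are illustrative):

```
pcs cluster setup newcluster node1 node2 transport knet crypto cipher=none hash=none
```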
Transports udp and udpu:
These transports are limited to one address per node. They do not support
traffic encryption nor compression.
Transport options are: ip_version, netmtu
Link options are: bindnetaddr, broadcast, mcastaddr, mcastport, ttl
Totem and quorum can be configured regardless of the transport used.
Totem options are: block_unlisted_ips, consensus, downcheck,
fail_recv_const, heartbeat_failures_allowed, hold, join, max_messages,
max_network_delay, merge, miss_count_const, send_join,
seqno_unchanged_const, token, token_coefficient, token_retransmit,
token_retransmits_before_loss_const, window_size
Quorum options are: auto_tie_breaker, last_man_standing,
last_man_standing_window, wait_for_all
Transports and their options, link, compression, crypto and totem options are all documented in corosync.conf(5) man page; knet link options are prefixed 'knet_' there, compression options are prefixed 'knet_compression_' and crypto options are prefixed 'crypto_'. Quorum options are documented in votequorum(5) man page.
--no-cluster-uuid will not generate a unique ID for the cluster. --enable will configure the cluster to start on nodes boot. --start will start the cluster right after creating it. --wait will wait up to 'n' seconds for the cluster to start. --no-keys-sync will skip creating and distributing pcsd SSL certificate and key and corosync and pacemaker authkey files. Use this if you provide your own certificates and keys.
Local only mode:
By default, pcs connects to all specified nodes to verify they can be used
in the new cluster and to send cluster configuration files to them. If
this is not what you want, specify the --corosync_conf option
followed by a file path. Pcs will save corosync.conf to the specified
file and will not connect to cluster nodes. These are the tasks that pcs
skips in that case:
* make sure the nodes are not running or configured to run a cluster
already
* make sure cluster packages are installed on all nodes and their versions
are compatible
* make sure there are no cluster configuration files on any node (run 'pcs
cluster destroy' and remove pcs_settings.conf file on all nodes)
* synchronize corosync and pacemaker authkeys, /etc/corosync/authkey and
/etc/pacemaker/authkey respectively, and the corosync.conf file
* authenticate the cluster nodes against each other ('pcs cluster auth' or
'pcs host auth' command)
* synchronize pcsd certificates (so that pcs web UI can be used in an HA
mode)
Examples:
Create a cluster with default settings:
pcs cluster setup newcluster node1 node2
Create a cluster using two links:
pcs cluster setup newcluster node1 addr=10.0.1.11 addr=10.0.2.11 node2
addr=10.0.1.12 addr=10.0.2.12
Set link options for all links. Link options are matched to the links in
order. The first link (link 0) has sctp transport, the second link (link
1) has mcastport 55405:
pcs cluster setup newcluster node1 addr=10.0.1.11 addr=10.0.2.11 node2
addr=10.0.1.12 addr=10.0.2.12 transport knet link transport=sctp link
mcastport=55405
Set link options for the second and fourth links only. Link options are
matched to the links based on the linknumber option (the first link is
link 0):
pcs cluster setup newcluster node1 addr=10.0.1.11 addr=10.0.2.11
addr=10.0.3.11 addr=10.0.4.11 node2 addr=10.0.1.12 addr=10.0.2.12
addr=10.0.3.12 addr=10.0.4.12 transport knet link linknumber=3
mcastport=55405 link linknumber=1 transport=sctp
Create a cluster using udp transport with a non-default port:
pcs cluster setup newcluster node1 node2 transport udp link
mcastport=55405
If --corosync_conf is specified, update cluster configuration in a file specified by <path>.
All options are documented in corosync.conf(5) man page. There
are different transport options for transport types. Compression and
crypto options are only available for knet transport. Totem options can
be set regardless of the transport type.
Transport options for knet transport are: ip_version, knet_pmtud_interval,
link_mode
Transport options for udp and udpu transports are: ip_version, netmtu
Compression options are: level, model, threshold
Crypto options are: cipher, hash, model
Totem options are: block_unlisted_ips, consensus, downcheck,
fail_recv_const, heartbeat_failures_allowed, hold, join, max_messages,
max_network_delay, merge, miss_count_const, send_join,
seqno_unchanged_const, token, token_coefficient, token_retransmit,
token_retransmits_before_loss_const, window_size
If --corosync_conf is specified, update cluster configuration in a file specified by <path>.
If --force is specified, the existing UUID will be overwritten.
Example:
pcs cluster cib > original.xml
cp original.xml new.xml
pcs -f new.xml constraint location apache prefers node2
pcs cluster cib-push new.xml diff-against=original.xml
The new node is specified by its name and optionally its addresses. If no addresses are specified for the node, pcs will configure corosync to communicate with the node using an address provided in 'pcs host auth' command. Otherwise, pcs will configure corosync to communicate with the node using the specified addresses.
Use 'watchdog' to specify a path to a watchdog on the new node, when SBD is enabled in the cluster. If SBD is configured with shared storage, use 'device' to specify a path to the shared device(s) on the new node.
If --start is specified also start cluster on the new node, if --wait is specified wait up to 'n' seconds for the new node to start. If --enable is specified configure cluster to start on the new node on boot. If --no-watchdog-validation is specified, validation of watchdog will be skipped.
WARNING: By default, it is tested whether the specified watchdog is supported. This may cause a restart of the system when a watchdog with no-way-out-feature enabled is present. Use --no-watchdog-validation to skip watchdog validation.
WARNING: This command permanently removes any cluster configuration that has been created. It is recommended to run 'pcs cluster stop' before destroying the cluster. To prevent accidental running of this command, --force or interactive user response is required in order to proceed.
Example: Create a device for nodes node1 and node2
pcs stonith create MyFence fence_virt pcmk_host_list=node1,node2
Example: Use port p1 for node n1 and ports p2 and p3 for node n2
pcs stonith create MyFence fence_virt 'pcmk_host_map=n1:p1;n2:p2,p3'
If an operation (op) is specified it will update the first found operation with the same action on the specified stonith device. If no operation with that action exists then a new operation will be created. (WARNING: all existing options on the updated operation will be reset if not specified.) If you want to create multiple monitor operations you should use the 'op add' & 'op remove' commands.
If --agent-validation is specified, stonith agent validate-all action will be used to validate stonith device options.
If --wait is specified, pcs will wait up to 'n' seconds for the changes to take effect and then return 0 if the changes have been processed or 1 otherwise. If 'n' is not specified it defaults to 60 minutes.
List currently configured default values for operations. If --all is specified, also list expired sets of values. If --full is specified, also list ids. If --no-expire-check is specified, do not evaluate whether sets of values are expired.
Set default values for operations.
NOTE: Defaults do not apply to resources / stonith devices which override
them with their own defined values.
Create a new set of default values for resource / stonith device operations. You may specify a rule describing resources / stonith devices and / or operations to which the set applies.
Set options are: id, score
Expression looks like one of the following:
op <operation name> [interval=<interval>]
resource [<standard>]:[<provider>]:[<type>]
defined|not_defined <node attribute>
<node attribute> lt|gt|lte|gte|eq|ne
[string|integer|number|version] <value>
date gt|lt <date>
date in_range [<date>] to <date>
date in_range <date> to duration <duration options>
date-spec <date-spec options>
<expression> and|or <expression>
(<expression>)
You may specify all or any of 'standard', 'provider' and 'type' in a resource expression. For example: 'resource ocf::' matches all resources of 'ocf' standard, while 'resource ::Dummy' matches all resources of 'Dummy' type regardless of their standard and provider.
Dates are expected to conform to ISO 8601 format.
Duration options are: hours, monthdays, weekdays, yeardays, months, weeks, years, weekyears, moon. Value for these options is an integer.
Date-spec options are: hours, monthdays, weekdays, yeardays, months, weeks, years, weekyears, moon. Value for these options is an integer or a range written as integer-integer.
NOTE: Defaults do not apply to resources / stonith devices which override them with their own defined values.
Delete specified option sets.
Delete specified option sets.
Add, remove or change values in the specified set of default values for resource / stonith device operations. Unspecified options will be kept unchanged. If you wish to remove an option, set it to an empty value, i.e. 'option_name='.
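For example, to change one value and remove another in an existing set (the set id and option names are illustrative):

```
pcs resource op defaults set update my-op-defaults meta timeout=90s record-pending=
```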
NOTE: Defaults do not apply to resources / stonith devices which override them with their own defined values.
Add, remove or change default values for operations. This is a simplified command useful for cases when you only manage one set of default values. Unspecified options will be kept unchanged. If you wish to remove an option, set it to an empty value, i.e. 'option_name='.
NOTE: Defaults do not apply to resources / stonith devices which override them with their own defined values.
Example: pcs stonith meta test_stonith failure-timeout=50 resource-stickiness=
List currently configured default values for resources / stonith devices. If --all is specified, also list expired sets of values. If --full is specified, also list ids. If --no-expire-check is specified, do not evaluate whether sets of values are expired.
Set default values for resources / stonith devices.
NOTE: Defaults do not apply to resources / stonith devices which override
them with their own defined values.
Create a new set of default values for resources / stonith devices. You may specify a rule describing resources / stonith devices to which the set applies.
Set options are: id, score
Expression looks like one of the following:
resource [<standard>]:[<provider>]:[<type>]
date gt|lt <date>
date in_range [<date>] to <date>
date in_range <date> to duration <duration options>
date-spec <date-spec options>
<expression> and|or <expression>
(<expression>)
You may specify all or any of 'standard', 'provider' and 'type' in a resource expression. For example: 'resource ocf::' matches all resources of 'ocf' standard, while 'resource ::Dummy' matches all resources of 'Dummy' type regardless of their standard and provider.
Dates are expected to conform to ISO 8601 format.
Duration options are: hours, monthdays, weekdays, yeardays, months, weeks, years, weekyears, moon. Value for these options is an integer.
Date-spec options are: hours, monthdays, weekdays, yeardays, months, weeks, years, weekyears, moon. Value for these options is an integer or a range written as integer-integer.
NOTE: Defaults do not apply to resources / stonith devices which override them with their own defined values.
Delete specified option sets.
Delete specified option sets.
Add, remove or change values in the specified set of default values for resources / stonith devices. Unspecified options will be kept unchanged. If you wish to remove an option, set it to an empty value, i.e. 'option_name='.
NOTE: Defaults do not apply to resources / stonith devices which override them with their own defined values.
Add, remove or change default values for resources / stonith devices. This is a simplified command useful for cases when you only manage one set of default values. Unspecified options will be kept unchanged. If you wish to remove an option, set it to an empty value, i.e. 'option_name='.
NOTE: Defaults do not apply to resources / stonith devices which override them with their own defined values.
Make the cluster forget failed operations from history of the resource / stonith device and re-detect its current state. This can be useful to purge knowledge of past failures that have since been resolved.
If the named resource is part of a group, or one numbered instance of a clone or bundled resource, the clean-up applies to the whole collective resource unless --strict is given.
If a resource id / stonith id is not specified then all resources / stonith devices will be cleaned up.
If a node is not specified then resources / stonith devices on all nodes will be cleaned up.
Make the cluster forget the complete operation history (including failures) of the resource / stonith device and re-detect its current state. If you are interested in forgetting failed operations only, use the 'pcs resource cleanup' command.
If the named resource is part of a group, or one numbered instance of a clone or bundled resource, the refresh applies to the whole collective resource unless --strict is given.
If a resource id / stonith id is not specified then all resources / stonith devices will be refreshed.
If a node is not specified then resources / stonith devices on all nodes will be refreshed.
Show current failcount for resources and stonith devices, optionally filtered by a resource / stonith device, node, operation and its interval. If --full is specified do not sum failcounts per resource / stonith device and node. Use 'pcs resource cleanup' or 'pcs resource refresh' to reset failcounts.
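For example, to show failcounts of one resource on a specific node without summing them (the resource and node names are illustrative):

```
pcs resource failcount show WebServer node1 --full
```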
WARNING: If this node is not actually powered off or it does have access to shared resources, data corruption/cluster failure can occur. To prevent accidental running of this command, --force or interactive user response is required in order to proceed.
NOTE: It is not checked if the specified node exists in the cluster in order to be able to work with nodes not visible from the local cluster partition.
WARNING: Cluster has to be restarted in order to apply these changes.
WARNING: By default, it is tested whether the specified watchdog is supported. This may cause a restart of the system when a watchdog with no-way-out-feature enabled is present. Use --no-watchdog-validation to skip watchdog validation.
Example of enabling SBD in a cluster where the watchdog on node1 is /dev/watchdog2, on node2 /dev/watchdog1, and /dev/watchdog0 on all other nodes; the device on node1 is /dev/sdb and /dev/sda on all other nodes; and the watchdog timeout is set to 10 seconds:
pcs stonith sbd enable watchdog=/dev/watchdog2@node1 watchdog=/dev/watchdog1@node2 watchdog=/dev/watchdog0 device=/dev/sdb@node1 device=/dev/sda SBD_WATCHDOG_TIMEOUT=10
WARNING: Cluster has to be restarted in order to apply these changes.
WARNING: All content on device(s) will be overwritten.
WARNING: Listing available watchdogs may cause a restart of the system when a watchdog with no-way-out-feature enabled is present.
WARNING: If you want to change "host" option of qdevice model net, use "pcs quorum device remove" and "pcs quorum device add" commands to set up configuration properly unless old and new host is the same machine.
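For example, changing the qdevice host is done by removing and re-adding the device (the host name and algorithm are illustrative):

```
pcs quorum device remove
pcs quorum device add model net host=new-qnetd.example.com algorithm=ffsplit
```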
WARNING: If the nodes are not actually powered off or they do have access to shared resources, data corruption/cluster failure can occur. To prevent accidental running of this command, --force or interactive user response is required in order to proceed.
Various pcs commands accept the --force option. Its purpose is to override some of the checks that pcs performs or some of the errors that may occur when a pcs command is run. When such an error occurs, pcs will print the error with a note that it may be overridden. The exact behavior of the option is different for each pcs command. Using the --force option can lead to situations that would normally be prevented by the logic of pcs commands, and therefore its use is strongly discouraged unless you know what you are doing.
This section summarizes the most important changes in commands done in pcs-0.11.x compared to pcs-0.10.x. For detailed description of current commands see above.
This section summarizes the most important changes in commands done in pcs-0.10.x compared to pcs-0.9.x. For detailed description of current commands see above.
http://clusterlabs.org/doc/
corosync_overview(8), votequorum(5), corosync.conf(5), corosync-qdevice(8), corosync-qdevice-tool(8), corosync-qnetd(8), corosync-qnetd-tool(8)
pacemaker-controld(7), pacemaker-fenced(7), pacemaker-schedulerd(7), crm_mon(8), crm_report(8), crm_simulate(8)
2023-03-01                              pcs 0.11.5