TEAMD.CONF(5) — Team daemon configuration
teamd.conf — libteam daemon configuration file
teamd uses a JSON-format configuration file.
Default: 0 (disabled)
broadcast — Simple runner which directs the team device to transmit packets via all ports.
roundrobin — Simple runner which directs the team device to transmit packets in a round-robin fashion.
random — Simple runner which directs the team device to transmit packets on a randomly selected port.
activebackup — Watches for link changes and selects an active port to be used for data transfers.
loadbalance — For passive load balancing, the runner only sets up a BPF hash function which determines the port used to transmit each packet. For active load balancing, the runner moves hashes among the available ports, trying to reach a perfect balance.
lacp — Implements the 802.3ad LACP protocol. Can use the same Tx port selection possibilities as the loadbalance runner.
Default: 0 (disabled)
Default for activebackup runner: 1
Default: 0
Default: 0 (disabled)
Default for activebackup runner: 1
Default: 0
ethtool — Uses the libteam library to watch for port ethtool state changes.
arp_ping — ARP requests are sent through a port. If an ARP reply is received, the link is considered to be up.
nsna_ping — Similar to arp_ping, except that it uses the IPv6 Neighbor Solicitation / Neighbor Advertisement mechanism. This makes it an alternative to arp_ping that is handy in pure-IPv6 environments.
See examples for more information.
Default: None
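As a sketch (the interval, missed_max, and target address shown are illustrative assumptions, not defaults), an nsna_ping link watch for a pure-IPv6 environment could look like:

{
    "device": "team0",
    "runner": {"name": "activebackup"},
    "link_watch": {
        "name": "nsna_ping",
        "interval": 100,
        "missed_max": 30,
        "target_host": "fe80::1"
    },
    "ports": {"eth1": {}, "eth2": {}}
}

The target_host must be an IPv6 address reachable through the monitored ports.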
same_all — All ports will always have the same hardware address as the associated team device.
by_active — Team device adopts the hardware address of the currently active port. This is useful when the port device is not able to change its hardware address.
only_active — Only the active port adopts the hardware address of the team device. The others have their own.
Default: same_all
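For example, a team whose port devices cannot change their hardware address could use the by_active policy inside the runner section (a sketch; the device and port names are illustrative):

{
    "device": "team0",
    "runner": {
        "name": "activebackup",
        "hwaddr_policy": "by_active"
    },
    "link_watch": {"name": "ethtool"},
    "ports": {"eth1": {}, "eth2": {}}
}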
Default: 0
Default: false
eth — Uses source and destination MAC addresses.
vlan — Uses VLAN id.
ipv4 — Uses source and destination IPv4 addresses.
ipv6 — Uses source and destination IPv6 addresses.
ip — Uses source and destination IPv4 and IPv6 addresses.
l3 — Uses source and destination IPv4 and IPv6 addresses.
tcp — Uses source and destination TCP ports.
udp — Uses source and destination UDP ports.
sctp — Uses source and destination SCTP ports.
l4 — Uses source and destination TCP and UDP and SCTP ports.
Default: None
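Hash fragments can be combined. For instance, a sketch of a loadbalance runner hashing over L2, L3, and L4 headers at once (device and port names are illustrative):

{
    "device": "team0",
    "runner": {
        "name": "loadbalance",
        "tx_hash": ["eth", "ip", "l4"]
    },
    "ports": {"eth1": {}, "eth2": {}}
}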
Default: 50
Default: true
Default: 65535
Default: 0
lacp_prio — The aggregator with the highest priority according to the LACP standard will be selected. Aggregator priority is affected by the per-port option lacp_prio.
lacp_prio_stable — Same as the previous policy, except that the selected aggregator is not replaced as long as it remains usable.
bandwidth — Selects the aggregator with the highest total bandwidth.
count — Selects the aggregator with the highest number of ports.
port_config — The aggregator with the highest priority according to the per-port options prio and sticky will be selected. This means the aggregator containing the port with the highest priority is selected, unless at least one port in the currently selected aggregator is sticky.
Default: lacp_prio
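As a sketch, an LACP configuration that keeps the currently selected aggregator as long as it stays usable could set the policy in the runner section (device and port names are illustrative):

{
    "device": "team0",
    "runner": {
        "name": "lacp",
        "agg_select_policy": "lacp_prio_stable",
        "tx_hash": ["eth", "ipv4", "ipv6"]
    },
    "link_watch": {"name": "ethtool"},
    "ports": {"eth1": {}, "eth2": {}}
}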
Default: 255
Default: 0
Default: 0
Default: 0
Default: 0
Default: 3
Default: 0.0.0.0
Default: false
Default: false
Default: None
Default: false
Default: 3
{
    "device": "team0",
    "runner": {"name": "roundrobin"},
    "ports": {"eth1": {}, "eth2": {}}
}
Very basic configuration.
{
    "device": "team0",
    "runner": {"name": "activebackup"},
    "link_watch": {"name": "ethtool"},
    "ports": {
        "eth1": {
            "prio": -10,
            "sticky": true
        },
        "eth2": {
            "prio": 100
        }
    }
}
This configuration uses the active-backup runner with the ethtool link watcher. Port eth2 has the higher priority, but the sticky flag ensures that once eth1 becomes active, it stays active as long as its link remains up.
{
    "device": "team0",
    "runner": {"name": "activebackup"},
    "link_watch": {
        "name": "ethtool",
        "delay_up": 2500,
        "delay_down": 1000
    },
    "ports": {
        "eth1": {
            "prio": -10,
            "sticky": true
        },
        "eth2": {
            "prio": 100
        }
    }
}
Similar to the previous one; the only difference is that link changes are not propagated to the runner immediately, but are delayed.
{
    "device": "team0",
    "runner": {"name": "activebackup"},
    "link_watch": {
        "name": "arp_ping",
        "interval": 100,
        "missed_max": 30,
        "target_host": "192.168.23.1"
    },
    "ports": {
        "eth1": {
            "prio": -10,
            "sticky": true
        },
        "eth2": {
            "prio": 100
        }
    }
}
This configuration uses ARP ping link watch.
{
    "device": "team0",
    "runner": {"name": "activebackup"},
    "link_watch": [
        {
            "name": "arp_ping",
            "interval": 100,
            "missed_max": 30,
            "target_host": "192.168.23.1"
        },
        {
            "name": "arp_ping",
            "interval": 50,
            "missed_max": 20,
            "target_host": "192.168.24.1"
        }
    ],
    "ports": {
        "eth1": {
            "prio": -10,
            "sticky": true
        },
        "eth2": {
            "prio": 100
        }
    }
}
Similar to the previous one, only this time two link watchers are used at the same time.
{
    "device": "team0",
    "runner": {
        "name": "loadbalance",
        "tx_hash": ["eth", "ipv4", "ipv6"]
    },
    "ports": {"eth1": {}, "eth2": {}}
}
Configuration for hash-based passive Tx load balancing.
{
    "device": "team0",
    "runner": {
        "name": "loadbalance",
        "tx_hash": ["eth", "ipv4", "ipv6"],
        "tx_balancer": {
            "name": "basic"
        }
    },
    "ports": {"eth1": {}, "eth2": {}}
}
Configuration for active Tx load balancing using basic load balancer.
{
    "device": "team0",
    "runner": {
        "name": "lacp",
        "active": true,
        "fast_rate": true,
        "tx_hash": ["eth", "ipv4", "ipv6"]
    },
    "link_watch": {"name": "ethtool"},
    "ports": {"eth1": {}, "eth2": {}}
}
Configuration for connecting to an LACP-capable counterpart.
Jiri Pirko is the original author and current maintainer of libteam.
libteam | 2013-07-09