cpudist(8) System Manager's Manual cpudist(8)

NAME
cpudist - On- and off-CPU task time as a histogram.

SYNOPSIS
cpudist [-h] [-O] [-T] [-m] [-P] [-L] [-p PID] [-I] [-e] [interval] [count]

DESCRIPTION
This measures the time a task spends on the CPU before being descheduled, and shows the times as a histogram. Tasks that spend a very short time on the CPU can indicate excessive context switching and poor workload distribution, and may point to a shared source of contention that keeps tasks switching in and out as it becomes available (such as a mutex).

Similarly, the tool can also measure the time a task spends off-CPU before it is scheduled again. This can be helpful in identifying long blocking and I/O operations, or alternatively very short descheduling times due to short-lived locks or timers.

By default, CPU idle time is excluded by simply excluding PID 0.

This tool uses in-kernel eBPF maps for storing timestamps and the histogram, for efficiency. Despite this, the overhead of this tool may become significant for some workloads: see the OVERHEAD section.
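As an illustration of this approach, the following simplified bcc Python sketch (not this tool's actual source) records a timestamp in a BPF hash map when a task is switched onto a CPU at the sched:sched_switch tracepoint, and adds the elapsed on-CPU time to an in-kernel log2 histogram when the task is switched out; PID 0 (idle) is skipped, matching the default behavior described above. The off-CPU case works analogously, timing the opposite transitions.

    #!/usr/bin/env python3
    # Simplified sketch of in-kernel timing with bcc (not cpudist's source).
    from bcc import BPF
    from time import sleep

    prog = """
    BPF_HASH(start, u32, u64);   // PID -> timestamp of when it got on-CPU
    BPF_HISTOGRAM(dist);         // log2 histogram of on-CPU time, in usecs

    TRACEPOINT_PROBE(sched, sched_switch) {
        u64 ts = bpf_ktime_get_ns();
        u32 prev = args->prev_pid;
        u32 next = args->next_pid;

        // prev is leaving the CPU: account its on-CPU time (skip idle, PID 0)
        if (prev != 0) {
            u64 *tsp = start.lookup(&prev);
            if (tsp) {
                dist.increment(bpf_log2l((ts - *tsp) / 1000));
                start.delete(&prev);
            }
        }

        // next is getting on the CPU: remember when
        if (next != 0)
            start.update(&next, &ts);
        return 0;
    }
    """

    b = BPF(text=prog)
    print("Tracing on-CPU time... Hit Ctrl-C to end.")
    try:
        sleep(99999999)
    except KeyboardInterrupt:
        pass
    b["dist"].print_log2_hist("usecs")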

Since this uses BPF, only the root user can use this tool.

REQUIREMENTS
CONFIG_BPF and bcc.

OPTIONS
-h        Print usage message.
-O        Measure off-CPU time instead of on-CPU time.
-T        Include timestamps on output.
-m        Output histogram in milliseconds.
-P        Print a histogram for each PID (tgid from the kernel's perspective).
-L        Print a histogram for each TID (pid from the kernel's perspective).
-p PID    Only show this PID (filtered in kernel for efficiency).
-I        Include CPU idle time (by default it is excluded).
-e        Show extension summary (average/total/count).
interval  Output interval, in seconds.
count     Number of outputs.

EXAMPLES
Summarize on-CPU time as a histogram:
# cpudist
Summarize off-CPU time as a histogram:
# cpudist -O
Print 1 second summaries, 10 times:
# cpudist 1 10
Print 1 second summaries in milliseconds, and include timestamps:
# cpudist -mT 1
Trace PID 185 only, with 1 second summaries:
# cpudist -p 185 1
Include CPU idle time:
# cpudist -I
Show the extension summary (average/total/count):
# cpudist -e

FIELDS
usecs         Microsecond range
msecs         Millisecond range
count         How many times a task event fell into this range
distribution  An ASCII bar chart to visualize the distribution (count column)

OVERHEAD
This traces scheduler tracepoints, which can become very frequent. While eBPF has very low overhead, and this tool uses in-kernel maps for efficiency, the frequency of scheduler events for some workloads may be high enough that the overhead of this tool becomes significant. Measure in a lab environment to quantify the overhead before use.
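The "cs" column of vmstat(1) reports the system-wide context switch rate, which can give a rough sense of how frequent these scheduler events are before tracing, for example:
# vmstat 1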

SOURCE
This is from bcc.

https://github.com/iovisor/bcc

Also look in the bcc distribution for a companion _example.txt file containing example usage, output, and commentary for this tool.

OS
Linux

STABILITY
Unstable - in development.

AUTHOR
Sasha Goldshtein, Rocky Xing

SEE ALSO
pidstat(1), runqlat(8)

USER COMMANDS    2016-06-28    cpudist(8)