DOKK Library

Concise Linux - An Introduction to Linux Use and Administration

Authors tuxcademy

License CC-BY-SA-4.0

            Concise Linux
An Introduction to Linux Use and Administration

                      $ echo tux
                      $ ls
                      $ /bin/su -

  tuxcademy – Linux and Open Source learning materials for everyone
These training materials have been certified by the Linux Professional Institute (LPI) under the auspices of the LPI ATM programme. They are suitable for preparation for the LPIC-1 certification.

The Linux Professional Institute does not endorse specific exam preparation materials or techniques—refer to for details.




                                         The tuxcademy project aims to supply freely available high-quality training materials on
                                         Linux and Open Source topics – for self-study, school, higher and continuing education
                                         and professional training.
                                         Please visit ! Do contact us with questions or suggestions.

                                      Concise Linux       An Introduction to Linux Use and Administration
                                      Revision: lxk1:807d647231c25323:2015-08-21
                                                adm1:33e55eeadba676a3:2015-08-08 10–18, 26–27
                                                adm2:0cd20ee1646f650c:2015-08-21 20–25
                                                grd1:be27bba8095b329b:2015-08-04 1–9, B
                                                grd2:6eb247d0aa1863fd:2015-08-05 19

                                      © 2015 Linup Front GmbH           Darmstadt, Germany
                                      © 2016 tuxcademy (Anselm Lingnau)             Darmstadt, Germany
                                      Linux penguin “Tux” © Larry Ewing (CC-BY licence)

All representations and information contained in this document have been compiled to the best of our knowledge and carefully tested. However, mistakes cannot be ruled out completely. To the extent of applicable law, the authors and the tuxcademy project assume no responsibility or liability resulting in any way from the use of this material or parts of it or from any violation of the rights of third parties.

Reproduction of trade marks, service marks and similar monikers in this document, even if not specially marked, does not imply the stipulation that these may be freely usable according to trade mark protection laws. All trade marks are used without a warranty of free usability and may be registered trade marks of third parties.

This document is published under the “Creative Commons BY-SA 4.0 International” licence. You may copy and distribute it and make it publicly available as long as the following conditions are met:
                                      Attribution You must make clear that this document is a product of the tux-
                                            cademy project.

                                      Share-Alike You may alter, remix, extend, or translate this document or modify
                                           or build on it in other ways, as long as you make your contributions available
                                           under the same licence as the original.
Further information and the full legal licence grant may be found at

                                      Authors: Tobias Elsner, Thomas Erker, Anselm Lingnau
                                      Technical Editor: Anselm Lingnau ⟨ ⟩
                                      Typeset in Palatino, Optima and DejaVu Sans Mono


1 Introduction                                                                                 15
1.1 What is Linux? . . . . . . . . . . . . . . . . . . . . 16
1.2 Linux History . . . . . . . . . . . . . . . . . . . . 16
1.3 Free Software, “Open Source” and the GPL . . . . . . . 18
1.4 Linux—The Kernel . . . . . . . . . . . . . . . . . . . 21
1.5 Linux Properties . . . . . . . . . . . . . . . . . . . 23
1.6 Linux Distributions . . . . . . . . . . . . . . . . . 26

2 Using the Linux System                                      31
2.1 Logging In and Out . . . . . . . . . . . . . . . . . . . 32
2.2 Switching On and Off . . . . . . . . . . . . . . . . . . 34
2.3 The System Administrator . . . . . . . . . . . . . . . 34

3 Who’s Afraid Of The Big Bad Shell?                                                           37
3.1 Why? . . . . . . . . . . . . . . . . . . . . . . . . . 38
   3.1.1 What Is The Shell? . . . . . . . . . . . . . . . 38
3.2 Commands . . . . . . . . . . . . . . . . . . . . . . . 40
   3.2.1 Why Commands? . . . . . . . . . . . . . . . . . . 40
   3.2.2 Command Structure . . . . . . . . . . . . . . . . 40
   3.2.3 Command Types . . . . . . . . . . . . . . . . . . 41
   3.2.4 Even More Rules . . . . . . . . . . . . . . . . . 42

4 Getting Help                                                                                 45
4.1 Self-Help . . . . . . . . . . . . . . . . . . . . . . 46
4.2 The help Command and the --help Option . . . . . . . . 46
4.3 The On-Line Manual . . . . . . . . . . . . . . . . . . 46
   4.3.1 Overview . . . . . . . . . . . . . . . . . . . . 46
   4.3.2 Structure . . . . . . . . . . . . . . . . . . . . 47
   4.3.3 Chapters . . . . . . . . . . . . . . . . . . . . 48
   4.3.4 Displaying Manual Pages . . . . . . . . . . . . . 48
4.4 Info Pages . . . . . . . . . . . . . . . . . . . . . . 49
4.5 HOWTOs . . . . . . . . . . . . . . . . . . . . . . . . 50
4.6 Further Information Sources . . . . . . . . . . . . . 50

5 The vi Editor                                                                                53
5.1 Editors . . . . . . . . . . . . . . . . . . . . . . . 54
5.2 The Standard—vi . . . . . . . . . . . . . . . . . . . 54
   5.2.1 Overview . . . . . . . . . . . . . . . . . . . . 54
   5.2.2 Basic Functions . . . . . . . . . . . . . . . . . 55
   5.2.3 Extended Commands . . . . . . . . . . . . . . . . 58
5.3 Other Editors . . . . . . . . . . . . . . . . . . . . 60
4                                                                                          Contents

6 Files: Care and Feeding                                    63
6.1 File and Path Names . . . . . . . . . . . . . . . . . 64
   6.1.1 File Names . . . . . . . . . . . . . . . . . . . 64
   6.1.2 Directories . . . . . . . . . . . . . . . . . . . 65
   6.1.3 Absolute and Relative Path Names . . . . . . . . 66
6.2 Directory Commands . . . . . . . . . . . . . . . . . . 67
   6.2.1 The Current Directory: cd & Co. . . . . . . . . . 67
   6.2.2 Listing Files and Directories—ls . . . . . . . . 68
   6.2.3 Creating and Deleting Directories: mkdir and rmdir . . 69
6.3 File Search Patterns . . . . . . . . . . . . . . . . . 70
   6.3.1 Simple Search Patterns . . . . . . . . . . . . . 70
   6.3.2 Character Classes . . . . . . . . . . . . . . . . 72
   6.3.3 Braces . . . . . . . . . . . . . . . . . . . . . 73
6.4 Handling Files . . . . . . . . . . . . . . . . . . . . 74
   6.4.1 Copying, Moving and Deleting—cp and Friends . . . 74
   6.4.2 Linking Files—ln and ln -s . . . . . . . . . . . 76
   6.4.3 Displaying File Content—more and less . . . . . . 80
   6.4.4 Searching Files—find . . . . . . . . . . . . . . 81
   6.4.5 Finding Files Quickly—locate and slocate . . . . 84

7 Standard I/O and Filter Commands                           87
7.1 I/O Redirection and Command Pipelines . . . . . . . . 88
   7.1.1 Standard Channels . . . . . . . . . . . . . . . . 88
   7.1.2 Redirecting Standard Channels . . . . . . . . . . 89
   7.1.3 Command Pipelines . . . . . . . . . . . . . . . . 92
7.2 Filter Commands . . . . . . . . . . . . . . . . . . . 94
7.3 Reading and Writing Files . . . . . . . . . . . . . . 94
   7.3.1 Outputting and Concatenating Text Files—cat and tac . . 94
   7.3.2 Beginning and End—head and tail . . . . . . . . . 96
   7.3.3 Just the Facts, Ma’am—od and hexdump . . . . . . 97
7.4 Text Processing . . . . . . . . . . . . . . . . . . . 100
   7.4.1 Character by Character—tr, expand and unexpand . . 100
   7.4.2 Line by Line—fmt, pr and so on . . . . . . . . . 103
7.5 Data Management . . . . . . . . . . . . . . . . . . . 108
   7.5.1 Sorted Files—sort and uniq . . . . . . . . . . . 108
   7.5.2 Columns and Fields—cut, paste etc. . . . . . . . 113

8 More About The Shell                                      119
8.1 Simple Commands: sleep, echo, and date . . . . . . . . 120
8.2 Shell Variables and The Environment . . . . . . . . . 121
8.3 Command Types—Reloaded . . . . . . . . . . . . . . . . 123
8.4 The Shell As A Convenient Tool . . . . . . . . . . . . 124
8.5 Commands From A File . . . . . . . . . . . . . . . . . 128
8.6 The Shell As A Programming Language . . . . . . . . . 129
   8.6.1 Foreground and Background Processes . . . . . . . 132

9 The File System                                           137
9.1 Terms . . . . . . . . . . . . . . . . . . . . . . . . 138
9.2 File Types . . . . . . . . . . . . . . . . . . . . . . 138
9.3 The Linux Directory Tree . . . . . . . . . . . . . . . 139
9.4 Directory Tree and File Systems . . . . . . . . . . . 147
9.5 Removable Media . . . . . . . . . . . . . . . . . . . 148

10 System Administration                                    151
10.1 Introductory Remarks . . . . . . . . . . . . . . . . 152
10.2 The Privileged root Account . . . . . . . . . . . . . 152
10.3 Obtaining Administrator Privileges . . . . . . . . . 154
10.4 Distribution-specific Administrative Tools . . . . . 156

11 User Administration                                                                    159
11.1 Basics . . . . . . . . . . . . . . . . . . . . . . . 160
   11.1.1 Why Users? . . . . . . . . . . . . . . . . . . . 160
   11.1.2 Users and Groups . . . . . . . . . . . . . . . . 161
   11.1.3 People and Pseudo-Users . . . . . . . . . . . . 163
11.2 User and Group Information . . . . . . . . . . . . . 163
   11.2.1 The /etc/passwd File . . . . . . . . . . . . . . 163
   11.2.2 The /etc/shadow File . . . . . . . . . . . . . . 166
   11.2.3 The /etc/group File . . . . . . . . . . . . . . 168
   11.2.4 The /etc/gshadow File . . . . . . . . . . . . . 169
   11.2.5 The getent Command . . . . . . . . . . . . . . . 170
11.3 Managing User Accounts and Group Information . . . . 170
   11.3.1 Creating User Accounts . . . . . . . . . . . . . 171
   11.3.2 The passwd Command . . . . . . . . . . . . . . . 172
   11.3.3 Deleting User Accounts . . . . . . . . . . . . . 174
   11.3.4 Changing User Accounts and Group Assignment . . 174
   11.3.5 Changing User Information Directly—vipw . . . . 175
   11.3.6 Creating, Changing and Deleting Groups . . . . . 175

12 Access Control                                                                         179
12.1 The Linux Access Control System . . . . . . . . . . . 180
12.2 Access Control For Files And Directories . . . . . . 180
   12.2.1 The Basics . . . . . . . . . . . . . . . . . . . 180
   12.2.2 Inspecting and Changing Access Permissions . . . 181
   12.2.3 Specifying File Owners and Groups—chown and chgrp . . 182
   12.2.4 The umask . . . . . . . . . . . . . . . . . . . 183
12.3 Access Control Lists (ACLs) . . . . . . . . . . . . . 185
12.4 Process Ownership . . . . . . . . . . . . . . . . . . 185
12.5 Special Permissions for Executable Files . . . . . . 185
12.6 Special Permissions for Directories . . . . . . . . . 186
12.7 File Attributes . . . . . . . . . . . . . . . . . . . 188

13 Process Management                                                                     191
13.1 What Is A Process? . . . . . . . . . . . . . . . . . 192
13.2 Process States . . . . . . . . . . . . . . . . . . . 193
13.3 Process Information—ps . . . . . . . . . . . . . . . 194
13.4 Processes in a Tree—pstree . . . . . . . . . . . . . 195
13.5 Controlling Processes—kill and killall . . . . . . . 196
13.6 pgrep and pkill . . . . . . . . . . . . . . . . . . . 197
13.7 Process Priorities—nice and renice . . . . . . . . . 199
13.8 Further Process Management Commands—nohup and top . . 199

14 Hard Disks (and Other Secondary Storage)                                               201
14.1 Fundamentals . . . . . . . . . . . . . . . . . . . . 202
14.2 Bus Systems for Mass Storage . . . . . . . . . . . . 202
14.3 Partitioning . . . . . . . . . . . . . . . . . . . . 205
   14.3.1 Fundamentals . . . . . . . . . . . . . . . . . . 205
   14.3.2 The Traditional Method (MBR) . . . . . . . . . . 206
   14.3.3 The Modern Method (GPT) . . . . . . . . . . . . 207
14.4 Linux and Mass Storage . . . . . . . . . . . . . . . 208
14.5 Partitioning Disks . . . . . . . . . . . . . . . . . 210
   14.5.1 Fundamentals . . . . . . . . . . . . . . . . . . 210
   14.5.2 Partitioning Disks Using fdisk . . . . . . . . . 212
   14.5.3 Formatting Disks Using GNU parted . . . . . . . 215
   14.5.4 gdisk . . . . . . . . . . . . . . . . . . . . . 216
   14.5.5 More Partitioning Tools . . . . . . . . . . . . 217
14.6 Loop Devices and kpartx . . . . . . . . . . . . . . . 217
14.7 The Logical Volume Manager (LVM) . . . . . . . . . . 219

15 File Systems: Care and Feeding                           223
15.1 Creating a Linux File System . . . . . . . . . . . . 224
   15.1.1 Overview . . . . . . . . . . . . . . . . . . . . 224
   15.1.2 The ext File Systems . . . . . . . . . . . . . . 226
   15.1.3 ReiserFS . . . . . . . . . . . . . . . . . . . . 234
   15.1.4 XFS . . . . . . . . . . . . . . . . . . . . . . 235
   15.1.5 Btrfs . . . . . . . . . . . . . . . . . . . . . 237
   15.1.6 Even More File Systems . . . . . . . . . . . . . 238
   15.1.7 Swap Space . . . . . . . . . . . . . . . . . . . 239
15.2 Mounting File Systems . . . . . . . . . . . . . . . . 240
   15.2.1 Basics . . . . . . . . . . . . . . . . . . . . . 240
   15.2.2 The mount Command . . . . . . . . . . . . . . . 240
   15.2.3 Labels and UUIDs . . . . . . . . . . . . . . . . 242
15.3 The dd Command . . . . . . . . . . . . . . . . . . . 244

16 Booting Linux                                            247
16.1 Fundamentals . . . . . . . . . . . . . . . . . . . . 248
16.2 GRUB Legacy . . . . . . . . . . . . . . . . . . . . . 251
   16.2.1 GRUB Basics . . . . . . . . . . . . . . . . . . 251
   16.2.2 GRUB Legacy Configuration . . . . . . . . . . . 252
   16.2.3 GRUB Legacy Installation . . . . . . . . . . . . 253
   16.2.4 GRUB 2 . . . . . . . . . . . . . . . . . . . . . 254
   16.2.5 Security Advice . . . . . . . . . . . . . . . . 255
16.3 Kernel Parameters . . . . . . . . . . . . . . . . . . 255
16.4 System Startup Problems . . . . . . . . . . . . . . . 257
   16.4.1 Troubleshooting . . . . . . . . . . . . . . . . 257
   16.4.2 Typical Problems . . . . . . . . . . . . . . . . 257
   16.4.3 Rescue Systems and Live Distributions . . . . . 259

17 System-V Init and the Init Process                       261
17.1 The Init Process . . . . . . . . . . . . . . . . . . 262
17.2 System-V Init . . . . . . . . . . . . . . . . . . . . 262
17.3 Upstart . . . . . . . . . . . . . . . . . . . . . . . 268
17.4 Shutting Down the System . . . . . . . . . . . . . . 270

18 Systemd                                                  275
18.1 Overview . . . . . . . . . . . . . . . . . . . . . . 276
18.2 Unit Files . . . . . . . . . . . . . . . . . . . . . 277
18.3 Unit Types . . . . . . . . . . . . . . . . . . . . . 281
18.4 Dependencies . . . . . . . . . . . . . . . . . . . . 282
18.5 Targets . . . . . . . . . . . . . . . . . . . . . . . 284
18.6 The systemctl Command . . . . . . . . . . . . . . . . 286
18.7 Installing Units . . . . . . . . . . . . . . . . . . 289

19 Time-controlled Actions—cron and at                      291
19.1 Introduction . . . . . . . . . . . . . . . . . . . . 292
19.2 One-Time Execution of Commands . . . . . . . . . . . 292
   19.2.1 at and batch . . . . . . . . . . . . . . . . . . 292
   19.2.2 at Utilities . . . . . . . . . . . . . . . . . . 294
   19.2.3 Access Control . . . . . . . . . . . . . . . . . 294
19.3 Repeated Execution of Commands . . . . . . . . . . . 295
   19.3.1 User Task Lists . . . . . . . . . . . . . . . . 295
   19.3.2 System-Wide Task Lists . . . . . . . . . . . . . 296
   19.3.3 Access Control . . . . . . . . . . . . . . . . . 297
   19.3.4 The crontab Command . . . . . . . . . . . . . . 297
   19.3.5 Anacron . . . . . . . . . . . . . . . . . . . . 298

20 System Logging                                           301
20.1 The Problem . . . . . . . . . . . . . . . . . . . . . 302
20.2 The Syslog Daemon . . . . . . . . . . . . . . . . . . 302
20.3 Log Files . . . . . . . . . . . . . . . . . . . . . . 305
20.4 Kernel Logging . . . . . . . . . . . . . . . . . . . 306
20.5 Extended Possibilities: Rsyslog . . . . . . . . . . . 306
20.6 The “Next Generation”: Syslog-NG . . . . . . . . . . 310
20.7 The logrotate Program . . . . . . . . . . . . . . . . 314

21 System Logging with Systemd and “The Journal”              319
21.1 Fundamentals . . . . . . . . . . . . . . . . . . . . . 320
21.2 Systemd and journald . . . . . . . . . . . . . . . . . . 321
21.3 Log Inspection . . . . . . . . . . . . . . . . . . . . . 323

22 TCP/IP Fundamentals                                                                           329
22.1 History and Introduction . . . . . . . . . . . . . . 330
   22.1.1 The History of the Internet . . . . . . . . . . 330
   22.1.2 Internet Administration . . . . . . . . . . . . 330
22.2 Technology . . . . . . . . . . . . . . . . . . . . . 332
   22.2.1 Overview . . . . . . . . . . . . . . . . . . . . 332
   22.2.2 Protocols . . . . . . . . . . . . . . . . . . . 333
22.3 TCP/IP . . . . . . . . . . . . . . . . . . . . . . . 335
   22.3.1 Overview . . . . . . . . . . . . . . . . . . . . 335
   22.3.2 End-to-End Communication: IP and ICMP . . . . . 336
   22.3.3 The Base for Services: TCP and UDP . . . . . . . 339
   22.3.4 The Most Important Application Protocols . . . . 342
22.4 Addressing, Routing and Subnetting . . . . . . . . . 344
   22.4.1 Basics . . . . . . . . . . . . . . . . . . . . . 344
   22.4.2 Routing . . . . . . . . . . . . . . . . . . . . 345
   22.4.3 IP Network Classes . . . . . . . . . . . . . . . 346
   22.4.4 Subnetting . . . . . . . . . . . . . . . . . . . 346
   22.4.5 Private IP Addresses . . . . . . . . . . . . . . 347
   22.4.6 Masquerading and Port Forwarding . . . . . . . . 348
22.5 IPv6 . . . . . . . . . . . . . . . . . . . . . . . . 349
   22.5.1 IPv6 Addressing . . . . . . . . . . . . . . . . 350

23 Linux Network Configuration                                                                   355
23.1 Network Interfaces . . . . . . . . . . . . . . . . . 356
   23.1.1 Hardware and Drivers . . . . . . . . . . . . . . 356
   23.1.2 Configuring Network Adapters Using ifconfig . . 357
   23.1.3 Configuring Routing Using route . . . . . . . . 358
   23.1.4 Configuring Network Settings Using ip . . . . . 360
23.2 Persistent Network Configuration . . . . . . . . . . 361
23.3 DHCP . . . . . . . . . . . . . . . . . . . . . . . . 364
23.4 IPv6 Configuration . . . . . . . . . . . . . . . . . 365
23.5 Name Resolution and DNS . . . . . . . . . . . . . . . 366

24 Network Troubleshooting                                                                       371
24.1 Introduction . . . . . . . . . . . . . . . . . . . . 372
24.2 Local Problems . . . . . . . . . . . . . . . . . . . 372
24.3 Checking Connectivity With ping . . . . . . . . . . . 372
24.4 Checking Routing Using traceroute And tracepath . . . 375
24.5 Checking Services With netstat And nmap . . . . . . . 378
24.6 Testing DNS With host And dig . . . . . . . . . . . . 381
24.7 Other Useful Tools For Diagnosis . . . . . . . . . . 383
   24.7.1 telnet and netcat . . . . . . . . . . . . . . . 383
   24.7.2 tcpdump . . . . . . . . . . . . . . . . . . . . 385
   24.7.3 wireshark . . . . . . . . . . . . . . . . . . . 385

25 The Secure Shell                                                                              387

25.1 Introduction . . . . . . . . . . . . . . . . . . . . 388
25.2 Logging Into Remote Hosts Using ssh . . . . . . . . . 388
25.3 Other Useful Applications: scp and sftp . . . . . . . 391
25.4 Public-Key Client Authentication . . . . . . . . . . 392
25.5 Port Forwarding Using SSH . . . . . . . . . . . . . . 394
   25.5.1 X11 Forwarding . . . . . . . . . . . . . . . . . 394
   25.5.2 Forwarding Arbitrary TCP Ports . . . . . . . . . 395

    26 Software Package Management Using Debian Tools                                                  399
    26.1 Overview. . . . . . . . . . . . . . . .                           .   .   .   .   .   .   .   400
    26.2 The Basis: dpkg . . . . . . . . . . . . . .                       .   .   .   .   .   .   .   400
        26.2.1 Debian Packages . . . . . . . . . . .                       .   .   .   .   .   .   .   400
        26.2.2 Package Installation . . . . . . . . . .                    .   .   .   .   .   .   .   401
        26.2.3 Deleting Packages . . . . . . . . . .                       .   .   .   .   .   .   .   402
        26.2.4 Debian Packages and Source Code . . . .                     .   .   .   .   .   .   .   403
        26.2.5 Package Information. . . . . . . . . .                      .   .   .   .   .   .   .   403
        26.2.6 Package Verification . . . . . . . . . .                    .   .   .   .   .   .   .   406
    26.3 Debian Package Management: The Next Generation                    .   .   .   .   .   .   .   407
        26.3.1 APT . . . . . . . . . . . . . . .                           .   .   .   .   .   .   .   407
        26.3.2 Package Installation Using apt-get . . . . .                .   .   .   .   .   .   .   407
        26.3.3 Information About Packages . . . . . . .                    .   .   .   .   .   .   .   409
        26.3.4 aptitude . . . . . . . . . . . . . .                        .   .   .   .   .   .   .   410
    26.4 Debian Package Integrity . . . . . . . . . .                      .   .   .   .   .   .   .   412
    26.5 The debconf Infrastructure . . . . . . . . .                      .   .   .   .   .   .   .   413
    26.6 alien: Software From Different Worlds . . . . .                 .   .   .   .   .   .   .   414

    27 Package Management with RPM and YUM                                                             417
    27.1 Introduction. . . . . . . . . . . . . . .                         .   .   .   .   .   .   .   418
    27.2 Package Management Using rpm . . . . . . . .                      .   .   .   .   .   .   .   419
        27.2.1 Installation and Update . . . . . . . .                     .   .   .   .   .   .   .   419
        27.2.2 Deinstalling Packages . . . . . . . . .                     .   .   .   .   .   .   .   419
        27.2.3 Database and Package Queries . . . . . .                    .   .   .   .   .   .   .   420
        27.2.4 Package Verification . . . . . . . . . .                    .   .   .   .   .   .   .   422
        27.2.5 The rpm2cpio Program . . . . . . . . .                      .   .   .   .   .   .   .   422
    27.3 YUM . . . . . . . . . . . . . . . . .                             .   .   .   .   .   .   .   423
        27.3.1 Overview . . . . . . . . . . . . .                          .   .   .   .   .   .   .   423
        27.3.2 Package Repositories . . . . . . . . .                      .   .   .   .   .   .   .   423
        27.3.3 Installing and Removing Packages Using YUM                  .   .   .   .   .   .   .   424
        27.3.4 Information About Packages . . . . . . .                    .   .   .   .   .   .   .   426
        27.3.5 Downloading Packages. . . . . . . . .                       .   .   .   .   .   .   .   428

    A Sample Solutions                                                                                 429

    B Example Files                                                                                    449

    C LPIC-1 Certification                                                                             453
    C.1 Overview. . . . . . . .            .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   453
    C.2 Exam LPI-101 . . . . . .           .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   453
    C.3 Exam LPI-102 . . . . . .           .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   454
    C.4 LPI Objectives In This Manual      .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   455

    D Command Index                                                                                    469

    Index                                                                                              475

List of Tables

 4.1    Manual page sections . . . . . . . . . . . . . . . . . . . . . . . . . . .                                             47
 4.2    Manual Page Topics . . . . . . . . . . . . . . . . . . . . . . . . . . . .                                             48

 5.1    Insert-mode commands for vi . . .          .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   56
 5.2    Cursor positioning commands in vi          .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   57
 5.3    Editing commands in vi . . . . . .         .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   58
 5.4    Replacement commands in vi . . .           .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   58
 5.5    ex commands in vi . . . . . . . . . .      .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   60

 6.1    Some file type designations in ls      .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   68
 6.2    Some ls options . . . . . . . . . .    .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   68
 6.3    Options for cp . . . . . . . . . . .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   74
 6.4    Keyboard commands for more . .         .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   80
 6.5    Keyboard commands for less . .         .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   81
 6.6    Test conditions for find . . . . . .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   82
 6.7    Logical operators for find . . . . .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   83

 7.1    Standard channels on Linux . . . . . .             .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .    89
 7.2    Options for cat (selection) . . . . . . .          .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .    94
 7.3    Options for tac (selection) . . . . . . .          .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .    95
 7.4    Options for od (excerpt) . . . . . . . . .         .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .    97
 7.5    Options for tr . . . . . . . . . . . . . .         .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   100
 7.6    Characters and character classes for tr            .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   101
 7.7    Options for pr (selection) . . . . . . . .         .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   104
 7.8    Options for nl (selection) . . . . . . . .         .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   105
 7.9    Options for wc (selection) . . . . . . . .         .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   107
 7.10   Options for sort (selection) . . . . . . .         .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   110
 7.11   Options for join (selection) . . . . . . .         .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   115

 8.1    Important Shell Variables . . . . . . . . . . . . . . . . . . . . . . . . 122
 8.2    Key Strokes within bash . . . . . . . . . . . . . . . . . . . . . . . . . . 127
 8.3    Options for jobs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134

 9.1    Linux file types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
 9.2    Directory division according to the FHS . . . . . . . . . . . . . . . . 146

 12.1 The most important file attributes . . . . . . . . . . . . . . . . . . . . 188

 14.1 Different SCSI variants . . . . . . . . . . . . . . . . . . . . . . . . . . 204
 14.2 Partition types for Linux (hexadecimal) . . . . . . . . . . . . . . . . 206
 14.3 Partition type GUIDs for GPT (excerpt) . . . . . . . . . . . . . . . . 208

 18.1 Common targets for systemd (selection) . . . . . . . . . . . . . . . . 284
 18.2 Compatibility targets for System-V init . . . . . . . . . . . . . . . . . 285

 20.1 syslogd facilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
 20.2 syslogd priorities (with ascending urgency) . . . . . . . . . . . . . . 303

     20.3 Filtering functions for Syslog-NG . . . . . . . . . . . . . . . . . . . . 312

     22.1   Common application protocols based on TCP/IP           .   .   .   .   .   .   .   .   .   .   .   343
     22.2   Addressing example . . . . . . . . . . . . . . . .     .   .   .   .   .   .   .   .   .   .   .   345
     22.3   Traditional IP Network Classes . . . . . . . . . .     .   .   .   .   .   .   .   .   .   .   .   346
     22.4   Subnetting Example . . . . . . . . . . . . . . . . .   .   .   .   .   .   .   .   .   .   .   .   347
     22.5   Private IP address ranges according to RFC 1918        .   .   .   .   .   .   .   .   .   .   .   347

     23.1 Options within /etc/resolv.conf . . . . . . . . . . . . . . . . . . . . . 367

     24.1 Important ping options . . . . . . . . . . . . . . . . . . . . . . . . . . 374

List of Figures

 1.1    Ken Thompson and Dennis Ritchie with a PDP-11 . . . . . . . . . .                             17
 1.2    Linux development . . . . . . . . . . . . . . . . . . . . . . . . . . . .                     18
 1.3    Organizational structure of the Debian project . . . . . . . . . . . .                        27

 2.1    The login screens of some common Linux distributions . . . . . . .                            32
 2.2    Running programs as a different user in KDE . . . . . . . . . . . . .                         35

 4.1    A manual page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                     48

 5.1    vi’s modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .        56

 7.1    Standard channels on Linux . . . . . . . . . . . . . . . . . . . . . . .                      88
 7.2    The tee command . . . . . . . . . . . . . . . . . . . . . . . . . . . . .                     93

 8.1    Synchronous command execution in the shell . . . . . . . . . . . . . 133
 8.2    Asynchronous command execution in the shell . . . . . . . . . . . . 133

 9.1    Content of the root directory (SUSE) . . . . . . . . . . . . . . . . . . 140

 13.1 The relationship between various process states . . . . . . . . . . . 193

 15.1 The /etc/fstab file (example) . . . . . . . . . . . . . . . . . . . . . . . 241

 17.1 A typical /etc/inittab file (excerpt) . . . . . . . . . . . . . . . . . . . 263
 17.2 Upstart configuration file for job rsyslog . . . . . . . . . . . . . . . . 269

 18.1 A systemd unit file: console-getty.service . . . . . . . . . . . . . . . . 279

 20.1 Example configuration for logrotate (Debian GNU/Linux 8.0) . . . 315

 21.1 Complete log output of journalctl . . . . . . . . . . . . . . . . . . . . 326

 22.1   Protocols and service interfaces . . . . . . . . . . . . . .      .   .   .   .   .   .   .   334
 22.2   ISO/OSI reference model . . . . . . . . . . . . . . . . .         .   .   .   .   .   .   .   334
 22.3   Structure of an IP datagram . . . . . . . . . . . . . . . .       .   .   .   .   .   .   .   337
 22.4   Structure of an ICMP packet . . . . . . . . . . . . . . . .       .   .   .   .   .   .   .   338
 22.5   Structure of a TCP Segment . . . . . . . . . . . . . . . .        .   .   .   .   .   .   .   339
 22.6   Starting a TCP connection: The Three-Way Handshake                .   .   .   .   .   .   .   340
 22.7   Structure of a UDP datagram . . . . . . . . . . . . . . .         .   .   .   .   .   .   .   341
 22.8   The /etc/services file (excerpt) . . . . . . . . . . . . . . .    .   .   .   .   .   .   .   342

 23.1 /etc/resolv.conf example . . . . . . . . . . . . . . . . . . . . . . . . . 367
 23.2 The /etc/hosts file (SUSE) . . . . . . . . . . . . . . . . . . . . . . . . . 368

 26.1 The aptitude program . . . . . . . . . . . . . . . . . . . . . . . . . . . 411

Preface

This manual offers a concise introduction to the use and administration of Linux.
It is aimed at students who have had some experience using other operating systems
and want to transition to Linux, but is also suitable for use at schools and
universities.
    Topics include a thorough introduction to the Linux shell, the vi editor, and the
most important file management tools as well as a primer on basic administration
tasks like user, permission, and process management. We present the organisation
of the file system and the administration of hard disk storage, describe the system
boot procedure, the configuration of services, the time-based automation of tasks
and the operation of the system logging service. The course is rounded out by
an introduction to TCP/IP and the configuration and operation of Linux hosts as
network clients, with particular attention to troubleshooting, and chapters on the
Secure Shell and printing to local and network printers.
    Together with the subsequent volume, Concise Linux—Advanced Topics, this
manual covers all of the objectives of the Linux Professional Institute’s LPIC-1 cer-
tificate exams and is therefore suitable for exam preparation.
    This courseware package is designed to support the training course as effi-
ciently as possible, by presenting the material in a dense, extensive format for
reading along, revision or preparation. The material is divided into self-contained
chapters detailing a part of the curriculum; a chapter’s goals and prerequisites
are summarized clearly at its beginning, while at the end there is a summary and
(where appropriate) pointers to additional literature or web pages with further
information.

B Additional material or background information is marked by the “light-
  bulb” icon at the beginning of a paragraph. Occasionally these paragraphs
  make use of concepts that are really explained only later in the courseware,
  in order to establish a broader context of the material just introduced; these
  “lightbulb” paragraphs may be fully understandable only when the course-
  ware package is perused for a second time after the actual course.

A Paragraphs with the “caution sign” direct your attention to possible prob-
  lems or issues requiring particular care. Watch out for the dangerous bends!

C Most chapters also contain exercises, which are marked with a “pencil” icon
  at the beginning of each paragraph. The exercises are numbered, and sam-
  ple solutions for the most important ones are given at the end of the course-
  ware package. Each exercise features a level of difficulty in brackets. Exer-
  cises marked with an exclamation point (“!”) are especially recommended.
    Excerpts from configuration files, command examples and examples of computer
output appear in typewriter type. In multiline dialogs between the user and
the computer, user input is given in bold typewriter type in order to avoid
misunderstandings. The “…” symbol appears where part of a command’s output
had to be omitted. Occasionally, additional line breaks had to be added to make
things fit; these appear as a continuation mark at the end of the broken line.
When command syntax is discussed, words enclosed in angle brackets (“⟨Word⟩”)
denote “variables” that can assume different values; material in brackets
(“[-f ⟨file⟩]”) is optional. Alternatives are separated using a vertical bar (“-a|-b”).
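To illustrate how such a synopsis maps to concrete command lines, here is a short shell sketch. The synopsis shown is a simplified one written for this illustration, not the complete synopsis of the real date command:

```shell
# Simplified synopsis in the notation described above:  date [-u] [+<format>]
# Everything in brackets is optional; <format> is a placeholder for a value
# that the user substitutes.
date                 # both optional parts omitted
date -u              # with the optional -u (UTC) switch
date +%Y             # a concrete value (%Y) substituted for <format>
```

Each invocation is obtained by dropping the brackets and, where a ⟨placeholder⟩ appears, substituting an actual value.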
                              Important concepts are emphasized using “marginal notes” so they can be
                          easily located; definitions of important terms appear in bold type in the text
                          as well as in the margin.
                             References to the literature and to interesting web pages appear as “[GPL91]”
                         in the text and are cross-referenced in detail at the end of each chapter.
                             We endeavour to provide courseware that is as up-to-date, complete and error-
                         free as possible. In spite of this, problems or inaccuracies may creep in. If you
                         notice something that you think could be improved, please do let us know, e.g.,
                         by sending e-mail to


                         (For simplicity, please quote the title of the courseware package, the revision ID
                         on the back of the title page and the page number(s) in question.) Thank you very
                         much in advance!

                        LPIC-1 Certification
                        These training materials are part of a recommended curriculum for LPIC-1 prepa-
                        ration. Refer to Appendix C for further information.


1.1     What is Linux? . . . . . . . . . . .        .   .   .   .   .   .   .   .   .   .   16
1.2     Linux History . . . . . . . . . . .         .   .   .   .   .   .   .   .   .   .   16
1.3     Free Software, “Open Source” and the GPL    .   .   .   .   .   .   .   .   .   .   18
1.4     Linux—The Kernel . . . . . . . . .          .   .   .   .   .   .   .   .   .   .   21
1.5     Linux Properties . . . . . . . . . .        .   .   .   .   .   .   .   .   .   .   23
1.6     Linux Distributions . . . . . . . . .       .   .   .   .   .   .   .   .   .   .   26

      • Knowing about Linux, its properties and its history
      • Differentiating between the Linux kernel and Linux distributions
      • Understanding the terms “GPL”, “free software”, and “open-source software”

      • Knowledge of other operating systems is useful to appreciate similarities
        and differences

1 Introduction

                         1.1      What is Linux?
                         Linux is an operating system. As such, it manages a computer’s basic function-
                         ality. Application programs build on the operating system. It forms the interface
                         between the hardware and application programs as well as the interface between
                         the hardware and people (users). Without an operating system, the computer is
                         unable to “understand” or process our input.
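The operating system’s role as an interface can be sketched in a few lines of Python. Nothing here is from the manual itself: the file name and its contents are invented for the sketch, which creates its own temporary file. The point is that even “reading a file” is not something a program does itself, but a request to the kernel:

```python
import os
import tempfile

# Create a throwaway file first (path and contents invented for this sketch).
path = os.path.join(tempfile.mkdtemp(), "demo.txt")
with open(path, "w") as f:       # behind the scenes: open(2) and write(2)
    f.write("tux\n")

# An application never pokes the disk hardware directly; it asks the kernel
# via system calls, which Python's os module wraps thinly:
fd = os.open(path, os.O_RDONLY)  # open(2): the kernel returns a file descriptor
data = os.read(fd, 4096)         # read(2): the kernel copies the bytes to us
os.close(fd)                     # close(2): the kernel releases the descriptor
print(data)                      # the bytes the kernel handed back
```

Every one of these steps goes through the operating system; without it, the program would have no way to “understand” the disk at all.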
                             Various operating systems differ in the way they go about these tasks. The
                         functions and operation of Linux are inspired by the Unix operating system.

                         1.2      Linux History
                         The history of Linux is something special in the computer world. While most other
                         operating systems are commercial products produced by companies, Linux was
                         started by a university student as a hobby project. In the meantime, hundreds of
                         professionals and enthusiasts all over the world collaborate on it—from hobbyists
                         and computer science students to operating systems experts funded by major IT
                         corporations to do Linux development. The basis for the existence of such a project
                         is the Internet: Linux developers make extensive use of services like electronic
                         mail, distributed version control, and the World Wide Web and, through these,
                         have made Linux what it is today. Hence, Linux is the result of an international
                         collaboration across national and corporate boundaries, now as then led by Linus
                         Torvalds, its original author.
                              To explain the background of Linux, we need to digress for a bit: Unix,
                         the operating system that inspired Linux, was begun in 1969. It was developed by
                          Ken Thompson and his colleagues at Bell Laboratories (the US telecommunication
                         giant AT&T’s research institute)1 . Unix caught on rapidly especially at universi-
                         ties, because Bell Labs furnished source code and documentation at cost (due to
                         an anti-trust decree, AT&T was barred from selling software). Unix was, at first,
                         an operating system for Digital Equipment’s PDP-11 line of minicomputers, but
                         was ported to other platforms during the 1970s—a reasonably feasible endeavour,
                          since the Unix software, including the operating system kernel, was mostly
                          written in Dennis Ritchie’s purpose-built C programming language. Possibly most
                         important of all Unix ports was the one to the PDP-11’s successor platform, the
                          VAX, at the University of California in Berkeley, which came to be distributed as
                         “BSD” (short for Berkeley Software Distribution). By and by, various computer man-
                         ufacturers developed different Unix derivatives based either on AT&T code or on
                         BSD (e. g., Sinix by Siemens, Xenix by Microsoft (!), SunOS by Sun Microsystems,
                          HP-UX by Hewlett-Packard or AIX by IBM). Even AT&T was finally allowed to
                          market Unix—the commercial versions System III and (later) System V. This led to
                         a fairly incomprehensible multitude of different Unix products. A real standardi-
                         sation never happened, but it is possible to distinguish roughly between BSD-like
                         and System-V-like Unix variants. The BSD and System V lines of development
                          were mostly unified by “System V Release 4”, which exhibited the characteristics
                         of both factions.
                             The very first parts of Linux were developed in 1991 by Linus Torvalds, then
                         a 21-year-old student in Helsinki, Finland, when he tried to fathom the capabil-
                         ities of his new PC’s Intel 386 processor. After a few months, the assembly lan-
                          guage experiments had matured into a small but workable operating system
                          kernel that could be used in a Minix system—Minix was a small Unix-like operating
                         system that computer science professor Andrew S. Tanenbaum of the Free Uni-
                         versity of Amsterdam, the Netherlands, had written for his students. Early Linux
                         had properties similar to a Unix system, but did not contain Unix source code.
                         Linus Torvalds made the program’s source code available on the Internet, and the
                            1 The name “Unix” is a pun on “Multics”, the operating system that Ken Thompson and his col-

                         leagues worked on previously. Early Unix was a lot simpler than Multics. How the name came to be
                         spelled with an “x” is no longer known.

Figure 1.1: Ken Thompson (sitting) and Dennis Ritchie (standing) with a
            PDP-11, approx. 1972. (Photograph courtesy of Lucent Technologies.)

idea was eagerly taken up and developed further by many programmers. Version
0.12, issued in January, 1992, was already a stable operating system kernel. There
was—thanks to Minix—the GNU C compiler (gcc), the bash shell, the emacs editor
and many other GNU tools. The operating system was distributed world-wide by
anonymous FTP. The number of programmers, testers and supporters grew very
rapidly. This catalysed a rate of development only dreamed of by powerful soft-
ware companies. Within months, the tiny kernel grew into a full-blown operating
system with fairly complete (if simple) Unix functionality.
   The “Linux” project is not finished even today. Linux is constantly updated
and added to by hundreds of programmers throughout the world, catering to
millions of satisfied private and commercial users. In fact it is inappropriate to
say that the system is developed “only” by students and other amateurs—many
contributors to the Linux kernel hold important posts in the computer industry
and are among the most professionally reputable system developers anywhere.
By now it is fair to claim that Linux is the operating system with the widest
range of supported hardware ever, not just with respect to the platforms it is running
on (from PDAs to mainframes) but also with respect to driver support on, e. g., the
Intel PC platform. Linux also serves as a research and development platform for
new operating systems ideas in academia and industry; it is without doubt one of
the most innovative operating systems available today.

C 1.1 [4] Use the Internet to locate the famous (notorious?) discussion between
  Andrew S. Tanenbaum and Linus Torvalds, in which Tanenbaum says that,
  with something like Linux, Linus Torvalds would have failed his (Tanen-
  baum’s) operating systems course. What do you think of the controversy?

C 1.2 [2] Give the version number of the oldest version of the Linux source
  code that you can locate.


Figure 1.2: Linux development, measured by the size of linux- *.tar.gz . Each marker corresponds to a Linux
            version. During the 10 years between Linux 2.0 and Linux 2.6.18, the size of the compressed Linux
            source code has roughly increased tenfold.

1.3    Free Software, “Open Source” and the GPL

From the very beginning of its development, Linux was placed under the GNU
General Public License (GPL) promulgated by the Free Software Foundation (FSF).
The FSF was founded by Richard M. Stallman, the author of the Emacs editor
and other important programs, with the goal of making high-quality software
“freely” available—in the sense that users are “free” to inspect it, to change it
and to redistribute it with or without changes, not necessarily in the sense that
it does not cost anything². In particular, he was after a freely available Unix-like
operating system, hence “GNU” as a (recursive) acronym for “GNU’s Not Unix”.
The main tenet of the GPL is that software distributed under it may be changed
as well as sold at any time, but that the (possibly modified) source code must
always be passed along—thus Open Source—and that the recipient must receive
the same rights of modification and redistribution. Thus there is little point in
selling GPL software “per seat”, since the recipient must be allowed to copy and
install the software as often as wanted. (It is of course permissible to sell support
for the GPL software “per seat”.) New software resulting from the extension or
modification of GPL software must, as a “derived work”, also be placed under the
GPL.
    Therefore, the GPL governs the distribution of software, not its use, and
allows the recipient to do things that he would not be allowed to do otherwise—for
example, the right to copy and distribute the software, which according to
copyright law is the a priori prerogative of the copyright owner. Consequently,
it differs markedly from the “end user license agreements” (EULAs) of “proprietary”
software, whose owners try to take away a recipient’s rights to do various things. (For
example, some EULAs try to forbid a software recipient from talking critically—or
                                        2 The   FSF says “free as in speech, not as in beer”

at all—about the product in public.)

B The GPL is a license, not a contract, since it is a one-sided grant of rights
  to the recipient (albeit with certain conditions attached). The recipient of
  the software does not need to “accept” the GPL explicitly. The common
  EULAs, on the other hand, are contracts, since the recipient of the software
  is supposed to waive certain rights in exchange for being allowed to use the
  software. For this reason, EULAs must be explicitly accepted. The legal
  barriers for this may be quite high—in many jurisdictions (e. g., Germany),
  any EULA restrictions must be known to the buyer before the actual sale in
  order to become part of the sales contract. Since the GPL does not in any
  way restrict a buyer’s rights (in particular as far as use of the software is
  concerned) compared to what they would have to expect when buying any
  other sort of goods, these barriers do not apply to the GPL; the additional
  rights that the GPL confers on the buyer are a kind of extra bonus.

B Currently two versions of the GPL are in widespread use. The newer version 3
  (also called “GPLv3”) was published in July, 2007, and differs from the
  older version 2 (also “GPLv2”) by more precise language dealing with areas
  such as software patents, the compatibility with other free licenses, and
  the introduction of restrictions on making changes to theoretically “free”
  devices impossible by excluding them through special hardware (“Tivoisa-
  tion”, after a Linux-based personal video recorder whose kernel is impossi-
  ble to alter or exchange). In addition, GPLv3 allows its users to add further
  clauses. Within the community, the GPLv3 was not embraced with unan-
  imous enthusiasm, and many projects, in particular the Linux kernel, have
  intentionally stayed with the simpler GPLv2. Many other projects are made
  available under the GPLv2 “or any newer version”, so you get to decide
  which version of the GPL you want to follow when distributing or modify-
  ing such software.

   Neither should you confuse GPL software with “public-domain” software.
The latter belongs to nobody, everybody can do with it what he wants. A GPL
program’s copyright still rests with its developer or developers, and the GPL
states very clearly what one may do with the software and what one may not.

B It is considered good form among free software developers to place contri-
  butions to a project under the same license that the project is already using,
  and in fact most projects insist on this, at least for code that is supposed to
  become part of the “official” version. Indeed, some projects insist on “copy-
  right assignments”, where the code author signs the copyright over to the
  project (or a suitable organisation). The advantage of this is that, legally,
  only the project is responsible for the code and that licensing violations—
  where only the copyright owner has legal standing—are easier to prose-
  cute. A side effect that is either desired or else explicitly unwanted is that
  copyright assignment makes it easier to change the license for the complete
  project, as this is an act that only the copyright owner may perform.

B In case of the Linux operating system kernel, which explicitly does not re-
  quire copyright assignment, a licensing change is very difficult to impossible
  in practice, since the code is a patchwork of contributions from more than
  a thousand authors. The issue was discussed during the GPLv3 process,
  and there was general agreement that it would be a giant project to ascer-
  tain the copyright provenance of every single line of the Linux source code,
  and to get the authors to agree to a license change. In any case, some Linux
  developers would be violently opposed, while others are impossible to find
  or even deceased, and the code in question would have to be replaced by
  something similar with a clear copyright. At least Linus Torvalds is still in
  the GPLv2 camp, so the problem does not (yet) arise in practice.
20                                                                                               1 Introduction

         GPL and Money         The GPL does not stipulate anything about the price of the product. It is utterly
                           legal to give away copies of GPL programs, or to sell them for money, as long
                           as you provide source code or make it available upon request, and the software
                            recipient gets the GPL rights as well. Therefore, GPL software is not necessarily
                            free of charge.
                                You can find out more by reading the GPL [GPL91], which incidentally must
                            accompany every GPL-licensed product (including Linux).
     Other “free” licenses     There are other “free” software licenses which give similar rights to the soft-
                           ware recipient, for example the “BSD license” which lets appropriately licensed
                           software be included in non-free products. The GPL is considered the most thor-
                           ough of the free licenses in the sense that it tries to ensure that code, once pub-
                           lished under the GPL, remains free. Every so often, companies have tried to include
                           GPL code in their own non-free products. However, after being admonished by
                           (usually) the FSF as the copyright holder, these companies have always complied
                           with the GPL’s requirements. In various jurisdictions the GPL has been validated
                           in courts of law—for example, in the Frankfurt (Germany) Landgericht (state court),
                           a Linux kernel developer obtained a judgement against D-Link (a manufacturer of
                           network components, in this case a Linux-based NAS device) in which the latter
                           was sued for damages because they did not adhere to the GPL conditions when
                           distributing the device [GPL-Urteil06].

                          B Why does the GPL work? Some companies that thought the GPL condi-
                             tions onerous have tried to declare it invalid, or have it declared so. For exam-
                            ple, it was called “un-American” or “unconstitutional” in the United States;
                            in Germany, anti-trust law was used in an attempt to prove that the GPL
                            amounts to price fixing. The general idea seems to be that GPL-ed soft-
                            ware can be used by anybody if something is demonstrably wrong with the
                            GPL. All these attacks ignore one fact: Without the GPL, nobody except the
                            original author has the right to do anything with the code, since actions like
                            sharing (let alone selling) the code are the author’s prerogative. So if the
                            GPL goes away, all other interested parties are worse off than they were.

                          B A lawsuit where a software author sues a company that distributes his GPL
                            code without complying with the GPL would approximately look like this:
                                Judge What seems to be the problem?
                                Software Author Your Lordship, the defendant has distributed my soft-
                                    ware without a license.
                                Judge (to the defendant’s counsel) Is that so?
                                At this point the defendant can say “yes”, and the lawsuit is essentially over
                                (except for the verdict). They can also say “no” but then it is up to them
                                to justify why copyright law does not apply to them. This is an uncom-
                                fortable dilemma and the reason why few companies actually do this to
                                themselves—most GPL disagreements are settled out of court.

                          B If a manufacturer of proprietary software violates the GPL (e. g., by includ-
                            ing a few hundreds of lines of source code from a GPL project in their prod-
                            uct), this does not imply that all of that product’s code must now be released
                            under the terms of the GPL. It only implies that they have distributed GPL
                             code without a license. The manufacturer can solve this problem in various ways:
                                  • They can remove the GPL code and replace it by their own code. The
                                    GPL then becomes irrelevant for their software.
                                  • They can negotiate with the GPL code’s copyright holder (if he is avail-
                                    able and willing to go along) and, for instance, agree to pay a license
                                    fee. See also the section on multiple licenses below.
                                  • They can release their entire program under the GPL voluntarily and
                                    thereby comply with the GPL’s conditions (the most unlikely method).

      Independently of this there may be damages payable for the prior violations.
      The copyright status of the proprietary software, however, is not affected in
      any way.

   When is a software package considered “free” or “open source”? There are
no definite criteria, but a widely-accepted set of rules are the Debian Free Software
Guidelines [DFSG]. The FSF summarizes its criteria as the Four Freedoms which
must hold for a free software package:

   • The freedom to use the software for any purpose (freedom 0)
   • The freedom to study how the software works, and to adapt it to one’s re-
     quirements (freedom 1)
   • The freedom to pass the software on to others, in order to help one’s neigh-
     bours (freedom 2)
   • The freedom to improve the software and publish the improvements, in or-
     der to benefit the general public (freedom 3)
Access to the source code is a prerequisite for freedoms 1 and 3. Of course, com-
mon free-software licenses such as the GPL or the BSD license conform to these
criteria.
   In addition, the owner of a software package is free to distribute it under dif-
ferent licenses at the same time, e.g., the GPL and, alternatively, a “commercial”
license that frees the recipient from the GPL restrictions such as the duty to make
available the source code for modifications. This way, private users and free soft-
ware authors can enjoy the use of a powerful programming library such as the
“Qt” graphics package (published by Qt Software—formerly Troll Tech—, a Nokia
subsidiary), while companies that do not want to make their own source code
available may “buy themselves freedom” from the GPL.

C 1.3 [!2] Which of the following statements concerning the GPL are true and
  which are false?
        1. GPL software may not be sold.
        2. GPL software may not be modified by companies in order to base their
           own products on it.
        3. The owner of a GPL software package may distribute the program un-
           der a different license as well.
        4. The GPL is invalid, because one sees the license only after having ob-
           tained the software package in question. For a license to be valid, one
           must be able to inspect it and accept it before acquiring the software.

C 1.4 [4] Some software licenses require that when a file from a software distri-
  bution is changed, it must be renamed. Is software distributed under such a
  license considered “free” according to the DFSG? Do you think this practice
  makes sense?

1.4     Linux—The Kernel
Strictly speaking, the name “Linux” only applies to the operating system “kernel”,
which performs the actual operating system tasks. It takes care of elementary
functions like memory and process management and hardware control. Applica-
tion programs must call upon the kernel to, e.g., access files on disk. The kernel
validates such requests and in doing so can enforce that nobody gets to access

                         other users’ private files. In addition, the kernel ensures that all processes in the
                         system (and hence all users) get the appropriate fraction of the available CPU time.
                Versions    Of course there is not just one Linux kernel, but there are many different ver-
                         sions. Until kernel version 2.6, we distinguished stable “end-user versions” and
                         unstable “developer versions” as follows:
           stable version        • In version numbers such as 1.𝑥.𝑦 or 2.𝑥.𝑦, 𝑥 denotes a stable version if it is
                                   even. There should be no radical changes in stable versions; mistakes should
                                   be corrected, and every so often drivers for new hardware components or
                                   other very important improvements are added or “back-ported” from the
                                   development kernels.
     development version         • Versions with odd 𝑥 are development versions which are unsuitable for pro-
                                   ductive use. They may contain inadequately tested code and are mostly
                                   meant for people wanting to take active part in Linux development. Since
                                   Linux is constantly being improved, there is a constant stream of new ker-
                                   nel versions. Changes concern mostly adaptations to new hardware or the
                                    optimization of various subsystems, sometimes even completely new exten-
                                    sions.
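For pre-2.6 kernels, the even/odd rule can be checked mechanically. The following shell snippet is an illustration, not an official tool; it classifies version strings by the parity of the second component:

```shell
#!/bin/sh
# Classify a historical (pre-2.6) kernel version as "stable" or
# "development", based on the parity of the x in version numbers 2.x.y.
classify() {
    minor=$(echo "$1" | cut -d. -f2)   # extract the second component
    if [ $((minor % 2)) -eq 0 ]; then
        echo "$1: stable"
    else
        echo "$1: development"
    fi
}

classify 2.4.19    # even x: stable series
classify 2.5.70    # odd x: development series
```

Running it prints “2.4.19: stable” and “2.5.70: development”, matching the rule described above.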
              kernel 2.6 The procedure has changed in kernel 2.6: Instead of starting version 2.7 for new
                          development after a brief stabilisation phase, Linus Torvalds and the other kernel
                          developers decided to keep Linux development closer to the stable versions. This
                          is supposed to avoid the divergence of developer and stable versions that grew to
                          be an enormous problem in the run-up to Linux 2.6—most notably because corpo-
                          rations like SUSE and Red Hat took great pains to backport interesting properties
                          of the developer version 2.5 to their versions of the 2.4 kernel, to an extent where,
                          for example, a SUSE 2.4.19 kernel contained many hundreds of differences to the
                          “vanilla” 2.4.19 kernel.
                              The current procedure consists of “test-driving” proposed changes and en-
                          hancements in a new kernel version which is then declared “stable” in a shorter
                          timeframe. For example, after version 2.6.37 there is a development phase during
                          which Linus Torvalds accepts enhancements and changes for the 2.6.38 version.
                          Other kernel developers (or whoever else fancies it) have access to Linus’ internal
                          development version, which, once it looks reasonable enough, is made available
                           as the “release candidate” 2.6.38-rc1. This starts the stabilisation phase, where
                          this release candidate is tested by more people until it looks stable enough to be
                          declared the new version 2.6.38 by Linus Torvalds. Then follows the 2.6.39 devel-
                          opment phase and so on.

                              B In parallel to Linus Torvalds’ “official” version, Andrew Morton maintains
                                 a more experimental version, the so-called “-mm tree”. This is used to test
                                larger and more sweeping changes until they are mature enough to be taken
                                into the official kernel by Linus.

                              B Some other developers maintain the “stable” kernels. As such, there might
                                 be kernels numbered 2.6.38.1, 2.6.38.2, …, which each contain only small
                                and straightforward changes such as fixes for grave bugs and security is-
                                sues. This gives Linux distributors the opportunity to rely on kernel ver-
                                sions maintained for longer periods of time.

             version 3.0         On 21 July 2011, Linus Torvalds officially released version 3.0 of the Linux ker-
                              nel. This was really supposed to be version 2.6.40, but he wanted to simplify the
                              version numbering scheme. “Stable” kernels based on 3.0 are accordingly num-
                              bered 3.0.1, 3.0.2, …, and the next kernels in Linus’ development series are 3.1-rc1,
                              etc. leading up to 3.1 and so forth.
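Such multi-part version numbers sort numerically rather than alphabetically; GNU sort’s -V (“version sort”) option takes this into account. A brief illustration with release numbers from the 3.0 stable series:

```shell
#!/bin/sh
# Plain alphabetical sorting would place 3.0.10 before 3.0.2;
# GNU "sort -V" compares the numeric components instead.
printf '3.0.10\n3.0.2\n3.0.1\n' | sort -V
# prints 3.0.1, 3.0.2, 3.0.10 (one per line)
```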

                              B Linus Torvalds insists that there was no big difference in functionality be-
                                tween the 2.6.39 and 3.0 kernels—at least not more so than between any
                                 two other consecutive kernels in the 2.6 series—but that this was just a
                                 renumbering. Linux’s 20th anniversary was put forward as one occasion for it.

   You can obtain source code for “official” kernels on the Internet from However, only very few Linux distributors use the original kernel
sources. Distribution kernels are usually modified more or less extensively, e. g.,
by integrating additional drivers or features that are desired by the distribution
but not part of the standard kernel. The Linux kernel used in SUSE’s Linux Enter-
prise Server 8, for example, reputedly contained approximately 800 modifications
to the “vanilla” kernel source. (The changes to the Linux development process
have succeeded to an extent where the difference is not as great today.)
   Today most kernels are modular. This was not always the case; former kernels
consisted of a single piece of code fulfilling all necessary functions such as the
support of particular devices. If you wanted to add new hardware or make use
of a different feature like a new type of file system, you had to compile a new
kernel from sources—a very time-consuming process. To avoid this, the kernel
was endowed with the ability to integrate additional features by way of modules.
   Modules are pieces of code that can be added to the kernel dynamically (at run-
time) as well as removed. Today, if you want to use a new network adapter, you do
not have to compile a new kernel but merely need to load a new kernel module.
Modern Linux distributions support automatic hardware recognition, which can
analyze a system’s properties and locate and configure the correct driver modules.
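At runtime, the kernel lists loaded modules in /proc/modules (the lsmod command presents the same data). The sketch below parses a canned sample of that format rather than the live file, so it works regardless of which machine it runs on; the module names and sizes are invented for illustration:

```shell
#!/bin/sh
# Each line of /proc/modules begins with the module name and its size in
# bytes; further fields give the use count and dependency information.
# The sample data below is invented, not taken from a real system.
sample='e1000e 254000 0 - Live 0x0000000000000000
ext4 580000 1 - Live 0x0000000000000000'

# Summarise name and size, roughly as lsmod does:
echo "$sample" | awk '{ printf "%-8s %8s bytes\n", $1, $2 }'
```

On a real system you would read /proc/modules directly, or simply run lsmod.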

C 1.5 [1] What is the version number of the current stable Linux kernel? The
  current developer kernel? Which Linux kernel versions are still being supported?

1.5      Linux Properties
As a modern operating system kernel, Linux has a number of properties, some
of which are part of the “state of the art” (i. e., exhibited by similar systems in an
equivalent form) and some of which are unique to Linux.

    • Linux supports a large selection of processors and computer architectures,
      ranging from mobile phones (the very successful “Android” operating sys-
      tem by Google, like some other similar systems, is based on Linux) through
      PDAs and tablets, all sorts of new and old PC-like computers and server
      systems of various kinds up to the largest mainframe computers (the vast
      majority of the machines on the list of the fastest computers in the world is
      running Linux).

      B A huge advantage of Linux in the mobile arena is that, unlike Mi-
        crosoft Windows, it supports the energy-efficient and powerful ARM
        processors that most mobile devices are based upon. In 2012, Microsoft
        released an ARM-based, partially Intel-compatible, version of Win-
        dows 8 under the name of “Windows RT”, but that did not exactly
        prove popular in the market.

    • Of all currently available operating systems, Linux supports the broadest
      selection of hardware. For the very newest components there may not be hardware
      drivers available immediately, but on the other hand Linux still works with
      devices that systems like Windows have long since left behind. Thus, your
      investments in printers, scanners, graphic boards, etc. are protected opti-
      mally.
    • Linux supports “preemptive multitasking”, that is, several processes are
      running—virtually or, on systems with more than one CPU, even actually—
      in parallel. These processes cannot obstruct or damage one another; the ker-
      nel ensures that every process is allotted CPU time according to its priority.

                                     B This is nothing special today; when Linux was new, this was much
                                       more remarkable.

                                     On carefully configured systems this may approach real-time behaviour,
                                     and in fact there are Linux variants that are being used to control industrial
                                     plants requiring “hard” real-time ability, as in guaranteed (quick) response
                                     times to external events.
                  several users    • Linux supports several users on the same system, even at the same time
                                     (via the network or serially connected terminals, or even several screens,
                                     keyboards, and mice connected to the same computer). Different access per-
                                     missions may be assigned to each user.
                                   • Linux can effortlessly be installed alongside other operating systems on the
                                     same computer, so you can alternately start Linux or another system. By
                                      means of “virtualisation”, a Linux system can be split into independent
                                     parts that look like separate computers from the outside and can run Linux
                                     or other operating systems. Various free or proprietary solutions are avail-
                                     able that enable this.
                     efficiency    • Linux uses the available hardware efficiently. The dual-core CPUs common
                                     today are as fully utilised as the 4096 CPU cores of a SGI Altix server. Linux
                                     does not leave working memory (RAM) unused, but uses it to cache data
                                     from disk; conversely, available working memory is used reasonably in or-
                                     der to cope with workloads that are much larger than the amount of RAM
                                     inside the computer.
     POSIX, System V and BSD       • Linux is source-code compatible with POSIX, System V and BSD and hence
                                     allows the use of nearly all Unix software available in source form.
                   file systems    • Linux not only offers powerful “native” file systems with properties such
                                     as journaling, encryption, and logical volume management, but also allows
                                     access to the file systems of various other operating systems (such as the
                                     Microsoft Windows FAT, VFAT, and NTFS file systems), either on local disks
                                     or across the network on remote servers. Linux itself can be used as a file
                                     server in Linux, Unix, or Windows networks.
                        TCP/IP     • The Linux TCP/IP stack is arguably among the most powerful in the indus-
                                     try (which is due to the fact that a large fraction of R&D in this area is done
      based on Linux). It supports IPv4 and IPv6 and all important options and
      protocols.
    • Linux offers powerful and elegant graphical environments for daily work
                                     and, in X11, a very popular network-transparent base graphics system. Ac-
                                     celerated 3D graphics is supported on most popular graphics cards.
      productivity applications    • All important productivity applications are available—office-type pro-
                                     grams, web browsers, programs to access electronic mail and other com-
                                     munication media, multimedia tools, development environments for a di-
                                     verse selection of programming languages, and so on. Most of this software
                                     comes with the system at no cost or can be obtained effortlessly and cheaply
                                     over the Internet. The same applies to servers for all important Internet pro-
                                     tocols as well as entertaining games.
                                The flexibility of Linux not only makes it possible to deploy the system on all
                            sorts of PC-class computers (even “old chestnuts” that do not support current
                            Windows can serve well in the kids’ room, as a file server, router, or mail server),
                             but also helps it make inroads in the “embedded systems” market, meaning com-
                            plete appliances for network infrastructure or entertainment electronics. You will,
                            for example, find Linux in the popular AVM FRITZ!Box and similar WLAN, DSL
                            or telephony devices, in various set-top boxes for digital television, in PVRs, digi-
                            tal cameras, copiers, and many other devices. Your author has seen the bottle bank

in the neighbourhood supermarket boot Linux. This is very often not trumpeted
all over the place, but, in addition to the power and convenience of Linux itself,
the manufacturers appreciate the fact that, unlike comparable operating systems,
Linux does not require licensing fees “per unit sold”.
    Another advantage of Linux and free software is the way the community deals
with security issues. In practice, security issues are as unavoidable in free software
as they are in proprietary code—at least nobody so far has written and published
a software system of interesting size that proved completely free of them in the
long run. In particular, it would be improper to claim that free software has no
security issues. The differences are more likely to be found on a philosophical level:
   • As a rule, a vendor of proprietary software has no interest in fixing security
     issues in their code—they will try to cover up problems and to talk down
     possible dangers for as long as they possibly can, since constantly publish-
     ing “patches” means, in the best case, terrible PR (“where there is smoke,
     there must be a fire”; the competition, which just happens not to be in the
     spotlight of scrutiny at the moment, is having a secret laugh), and, in the
     worst case, great expense and lots of hassle if exploits are around that make
     active use of the security holes. Besides, there is the usual danger of intro-
     ducing three new errors while fixing one known one, which is why fixing
      bugs in released software is normally not an economically viable propo-
      sition.
   • A free-software publisher does not gain anything by sitting on information
     about security issues, since the source code is generally available, and ev-
     erybody can find the problems. It is, in fact, a matter of pride to fix known
      security issues as quickly as possible. The fact that the source code is pub-
      licly available also implies that third parties find it easy to audit code for
     problems that can be repaired proactively. (A common claim is that the
     availability of source code exerts a very strong attraction on crackers and
     other unsavoury vermin. In fact, these low-lifes do not appear to have major
     difficulties identifying large numbers of security issues in proprietary sys-
     tems such as Windows, whose source code is not generally available. Thus
     any difference, if it exists, must be minute indeed.)
   • Especially as far as software dealing with cryptography (the encryption and
     decryption of confidential information) is concerned, there is a strong argu-
     ment that availability of source code is an indispensable prerequisite for
     trust that a program really does what it is supposed to do, i. e., that the
     claimed encryption algorithm has been implemented completely and cor-
     rectly. Linux does have an obvious advantage here.
    Linux is used throughout the world by private and professional users—
companies, research establishments, universities. It plays an important role par-
ticularly as a system for web servers (Apache), mail servers (Sendmail, Postfix),
file servers (NFS, Samba), print servers (LPD, CUPS), ISDN routers, X terminals,
scientific/engineering workstations etc. Linux is an essential part of industrial IT
departments. Widespread adoption of Linux in public administration, such as the
city of Munich, also serves as a signal. In addition, many reputable IT companies
such as IBM, Hewlett-Packard, Dell, Oracle, Sybase, Informix, SAP, Lotus etc. are
adapting their products to Linux or selling Linux versions already. Furthermore,
ever more computers (“netbooks”) come with Linux or are at least tested for
Linux compatibility by their vendors.

C 1.6 [4] Imagine you are responsible for IT in a small company (20–30 employ-
  ees). In the office there are approximately 20 desktop PCs and two servers (a
  file and printer server and a mail and Web proxy server). So far everything
  runs on Windows. Consider the following scenarios:

                                   • The file and printer server is replaced by a Linux server using Samba
                                     (a Linux/Unix-based server for Windows clients).
                                   • The mail and proxy server is replaced by a Linux server.
                                   • The twenty office desktop PCs are replaced by Linux machines.

                                 Comment on the different scenarios and draw up short lists of their advan-
                                 tages and disadvantages.

                           1.6    Linux Distributions
                         Linux in the proper sense of the word only consists of the operating system ker-
                         nel. To accomplish useful work, a multitude of system and application programs,
                          libraries, documentation etc. is necessary. “Distributions” are nothing but up-to-
                         date selections of these together with special programs (usually tools for instal-
                         lation and maintenance) provided by companies or other organisations, possibly
                         together with other services such as support, documentation, or updates. Distri-
                         butions differ mostly in the selection of software they offer, their administration
                         tools, extra services, and price.

     Red Hat and Fedora          “Fedora” is a freely available Linux distribution developed under the guid-
                                 ance of the US-based company, Red Hat. It is the successor of the “Red Hat
                                 Linux” distribution; Red Hat has withdrawn from the private end-user mar-
                                 ket and aims their “Red Hat” branded distributions at corporate customers.
                                  Red Hat was founded in 1993 and became a publicly-traded corporation
                                 in August, 1999; the first Red Hat Linux was issued in 1994, the last (ver-
                                 sion 9) in late April, 2004. “Red Hat Enterprise Linux” (RHEL), the current
                                 product, appeared for the first time in March, 2002. Fedora, as mentioned, is
                                 a freely available offering and serves as a development platform for RHEL;
                                 it is, in effect, the successor of Red Hat Linux. Red Hat only makes Fedora
                                 available for download; while Red Hat Linux was sold as a “boxed set” with
                                 CDs and manuals, Red Hat now leaves this to third-party vendors.

                  SUSE           The SUSE company was founded 1992 under the name “Gesellschaft für
                                 Software und Systementwicklung” as a Unix consultancy and accordingly
                                 abbreviated itself as “S.u.S.E.” One of its products was a version of Patrick
                                 Volkerding’s Linux distribution, Slackware, that was adapted to the Ger-
                                 man market. (Slackware, in turn, derived from the first complete Linux
                                 distribution, “Softlanding Linux System” or SLS.) S.u.S.E. Linux 1.0 came
                                 out in 1994 and slowly differentiated from Slackware, for example by taking
                                 on Red Hat features such as the RPM package manager or the /etc/sysconfig
                                 file. The first version of S.u.S.E. Linux that no longer looked like Slackware
                                 was version 4.2 of 1996. SuSE (the dots were dropped at some point) soon
                                 gained market leadership in German-speaking Europe and published SuSE
                                 Linux in a “box” in two flavours, “Personal” and “Professional”; the latter
                                 was markedly more expensive and contained more server software. Like
                                 Red Hat, SuSE offered an enterprise-grade Linux distribution called “SuSE
                                 Linux Enterprise Server” (SLES), with some derivatives like a specialised
                                 server for mail and groupware (“SuSE Linux OpenExchange Server” or
                                 SLOX). In addition, SuSE endeavoured to make their distribution available
                                 on IBM’s mid-range and mainframe computers.
         Novell takeover         In November 2003, the US software company Novell announced their in-
                                 tention of taking over SuSE for 210 million dollars; the deal was concluded
                                 in January 2004. (The “U” went uppercase on that occasion). Like Red Hat,
                                 SUSE has by now taken the step to open the “private customer” distribution
                                 and make it freely available as “openSUSE” (earlier versions appeared for
                                 public download only after a delay of several months). Unlike Red Hat,

           Figure 1.3: Organizational structure of the Debian project. (Graphic by Martin F. Krafft.)
           [Diagram: the developers elect a project leader; delegates, a technical committee, and
           teams such as release, FTP masters, security, policy, documentation/i18n, and quality
           assurance carry out the project’s work under the umbrella of Software in the Public
           Interest.]

      Novell/SUSE still offers a “boxed” version containing additional propri-
      etary software. Among others, SUSE still sells SLES and a corporate desktop
      platform called “SUSE Linux Enterprise Desktop” (SLED).
       In early 2011, Novell was acquired by Attachmate, which in turn was taken
       over by Micro Focus in 2014. Both are companies whose main field of
       business is traditional mainframe computers and which so far have not
       distinguished themselves in the Linux and open-source arena. These
       maneuverings, however, have had fairly little influence on SUSE and its
       products. A particular property of SUSE distributions is “YaST”, a
       comprehensive graphical administration tool.

       Unlike the two big Linux distribution companies Red Hat and Novell/SUSE,
       the Debian project is a collaboration of volunteers whose goal is to make
       available a high-quality Linux distribution called “Debian GNU/Linux”.
      The Debian project was announced on 16 August 1993 by Ian Murdock; the
      name is a contraction of his first name with that of his then-girlfriend (now
      ex-wife) Debra (and is hence pronounced “debb-ian”). By now the project
      includes more than 1000 volunteers.
      Debian is based on three documents:
        • The Debian Free Software Guidelines (DFSG) define which software the
          project considers “free”. This is important, since only DFSG-free soft-
          ware can be part of the Debian GNU/Linux distribution proper. The
          project also distributes non-free software, which is strictly separated
          from the DFSG-free software on the distribution’s servers: The latter
           is in a subdirectory called main , the former in non-free . (There is an inter-
          mediate area called contrib ; this contains software that by itself would
          be DFSG-free but does not work without other, non-free, components.)

                                     • The Social Contract describes the project’s goals.
                                     • The Debian Constitution describes the project’s organisation.
                     versions     At any given time there are at least three versions of Debian GNU/Linux:
                                  New or corrected versions of packages are put into the unstable branch.
                                  If, for a certain period of time, no grave errors have appeared in a pack-
                                  age, it is copied to the testing branch. Every so often the content of test-
                                  ing is “frozen”, tested very thoroughly, and finally released as stable . A
                                  frequently-voiced criticism of Debian GNU/Linux is the long timespan be-
                                  tween stable releases; many, however, consider this an advantage. The De-
                                  bian project makes Debian GNU/Linux available for download only; media
                                  are available from third-party vendors.
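The three branches correspond to separate package trees on the Debian servers, which a system selects through its APT configuration. The following is an illustrative sketch (the mirror address and exact lines are examples, not prescriptions from this manual) of entries in the /etc/apt/sources.list file, combining a branch with the software areas mentioned above:

```
# Illustrative /etc/apt/sources.list entries -- the mirror address is
# an example, not an official recommendation:
#
#   deb http://deb.debian.org/debian stable main contrib non-free
#   deb http://deb.debian.org/debian testing main
```

A machine following stable would use the first kind of line; replacing the branch name switches the package tree the system draws from.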
                                  By virtue of its organisation, its freedom from commercial interests, and its
                                  clean separation between free and non-free software, Debian GNU/Linux is
           derivative projects    a sound basis for derivative projects. Some of the more popular ones include
                                  Knoppix (a “live CD” which makes it possible to test Linux on a PC without
                                  having to install it first), SkoleLinux (a version of Linux especially adapted to
                                  the requirements of schools), or commercial distributions such as Xandros.
                                  Limux, the desktop Linux variant used in the Munich city administration,
                                  is also based on Debian GNU/Linux.

                     Ubuntu       One of the most popular Debian derivatives is Ubuntu, which is offered
                                  by the British company, Canonical Ltd., founded by the South African
                                  entrepreneur Mark Shuttleworth. (“Ubuntu” is a word from the Zulu lan-
                         goal     guage and roughly means “humanity towards others”.) The goal of Ubuntu
                                  is to offer, based on Debian GNU/Linux, a current, capable, and easy-to-
                                  understand Linux which is updated at regular intervals. This is facilitated,
                                  for example, by Ubuntu being offered on only three computer architec-
                                  tures as opposed to Debian’s ten, and by restricting itself to a subset of the
                                  software offered by Debian GNU/Linux. Ubuntu is based on the unstable
                                  branch of Debian GNU/Linux and uses, for the most part, the same tools
                                  for software distribution, but Debian and Ubuntu software packages are
                                  not necessarily mutually compatible.
           Ubuntu vs. Debian      Some Ubuntu developers are also active participants in the Debian project,
                                  which ensures a certain degree of exchange. On the other hand, not all De-
                                  bian developers are enthusiastic about the shortcuts Ubuntu takes every so
                                  often in the interest of pragmatism, where Debian might look for more com-
                                  prehensive solutions even if these require more effort. In addition, Ubuntu
                                  does not appear to feel as indebted to the idea of free software as does De-
                                  bian; while all of Debian’s infrastructure tools (such as the bug management
                                  system) are available as free software, this is not always the case for those
                                  of Ubuntu.
     Ubuntu vs. SUSE/Red Hat      Ubuntu not only wants to offer an attractive desktop system, but also take
                                  on the more established systems like RHEL or SLES in the server space, by
                                  offering stable distributions with a long life cycle and good support. It is
                                  unclear how Canonical Ltd. intends to make money in the long run; for the
                                  time being the project is mostly supported out of Mark Shuttleworth’s pri-
                                  vate coffers, which are fairly well-filled since he sold his Internet certificate
                                  authority, Thawte, to Verisign …

           More distributions  In addition to these distributions there are many more, such as Mageia or Linux
                            Mint as smaller “generally useful” distributions, various “live systems” for dif-
                            ferent uses from firewalls to gaming or multimedia platforms, or very compact
                            systems usable as routers, firewalls, or rescue systems.
              Commonalities    Even though there is a vast number of distributions, most look fairly similar in
                            daily life. This is due, on the one hand, to the fact that they use the same basic
                            programs—for example, the command line interpreter is nearly always bash . On

the other hand, there are standards that try to counter uncontrolled proliferation;
the “Filesystem Hierarchy Standard” (FHS) and the “Linux Standard Base” (LSB)
must be mentioned in this context.
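Since bash is nearly universal, you can quickly check which shell your own account uses. A small sketch (the exact version banner will differ from system to system):

```shell
# The SHELL environment variable names the login shell configured
# for the current account:
echo "$SHELL"

# The shell itself reports its version:
bash --version | head -n 1
```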

C 1.7 [2] Some Linux hardware platforms have been enumerated above. For
  which of those platforms are there actual Linux distributions available?
  (Hint: )

   • Linux is a Unix-like operating system.
   • The first version of Linux was developed by Linus Torvalds and made avail-
     able on the Internet as “free software”. Today, hundreds of developers all
     over the world contribute to updating and extending the system.
   • The GPL is the best-known “free software” license. It tries to ensure that
     the recipients of software can modify and redistribute the package, and that
     these “freedoms” are passed on to future recipients. GPL software may also
     be sold.
   • To the user, “open source” means approximately the same as “free software”.
   • There are other free licenses besides the GPL. Software may also be dis-
     tributed by the copyright owner under several licenses at the same time.
   • Linux is actually just the operating system kernel. We distinguish “stable”
     and “development kernels”; with the former, the second part of the version
     number is even and with the latter, odd. Stable kernels are meant for end
     users, while development kernels are not necessarily functional, represent-
     ing interim versions of Linux development.
   • There are numerous Linux distributions bringing together a Linux kernel
     and additional software, documentation and installation and administra-
     tion tools.


2 Using the Linux System

2.1     Logging In and Out . . . . . . . . . . . . . . . . . . . 32
2.2     Switching On and Off . . . . . . . . . . . . . . . . . . 34
2.3     The System Administrator. . . . . . . . . . . . . . . . . 34

      • Logging on and off the system
      • Understanding the difference between normal user accounts and the system
        administrator’s account

      • Basic knowledge of using computers is helpful


         Figure 2.1: The login screens of some common Linux distributions

                 2.1      Logging In and Out
                   The Linux system distinguishes between different users. Consequently, it may
                   be impossible to start working right after the computer has been switched on.
                   First you have to tell the computer who you are—you need to “log in” (or “on”).
                   Based on the information you provide, the system can decide what you may do
     access rights (or may not do). Of course you need access rights to the system (an “account”) –
                   the system administrator must have entered you as a valid user and assigned you
                   a user name (e. g., joe ) and a password (e. g., secret ). The password is supposed to
                   ensure that only you can use your account; you must keep it secret and should not
                   make it known to anybody else. Whoever knows your user name and password
                   can pretend to be you on the system, read (or delete) all your files, send electronic
                   mail in your name and generally get up to all kinds of shenanigans.

                  B Modern Linux distributions want to make it easy on you and allow you to
                    skip the login process on a computer that only you will be using anyway. If
                    you use such a system, you will not have to log in explicitly, but the computer
                    boots straight into your session. You should of course take advantage of this
                    only if you do not foresee that third parties have access to your computer;
                    refrain from this in particular on laptop computers or other mobile systems
                    that tend to get lost or stolen.

                  Logging in in a graphical environment These days it is common for Linux worksta-
                 tions to present a graphical environment (as they should), and the login process
                 takes place in a graphical environment as well. Your computer shows a dialog

that lets you enter your user name and password (Figure 2.1 shows some repre-
sentative examples.)

B Don’t wonder if you only see asterisks when you’re entering your password.
  This does not mean that your computer misunderstands your input, but that
  it wants to make life more difficult for people who are watching you over
  your shoulder in order to find out your password.

   After you have logged in, the computer starts a graphical session for you, in
which you have convenient access to your application programs by means of
menus and icons (small pictures on the “desktop” background). Most graphical
environments for Linux support “session management” in order to restore your
session the way it was when you finished it the time before (as far as possible,
anyway). That way you do not need to remember which programs you were
running, where their windows were placed on the screen, and which files you
had been using.

Logging out in a graphical environment If you are done with your work or want
to free the computer for another user, you need to log out. This is also important
because the session manager needs to save your current session for the next time.
How logging out works in detail depends on your graphical environment, but as
a rule there is a menu item somewhere that does everything for you. If in doubt,
consult the documentation or ask your system administrator (or a knowledgeable
colleague).

Logging in on a text console Unlike workstations, server systems often support
only a text console or are installed in draughty, noisy machine rooms, where you
don’t want to spend more time than absolutely necessary. So you will prefer to log
into such a computer via the network. In both cases you will not see a graphical
login screen, but the computer asks you for your user name and password directly.
For example, you might simply see something like

computer login: _

(if we stipulate that the computer in question is called “computer ”). Here you must
enter your user name and finish it off with the ↩ key. The computer will con-
tinue by asking you for your password:

Password: _

Enter your password here. (This time you won’t even see asterisks—simply noth-
ing at all.) If both the user name and password were correct, the system will ac-
cept your login. It starts the command line interpreter (the shell), and you can
enter commands and invoke programs. After logging in you will be placed in
your “home directory”, where you will be able to find your files.
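A few harmless commands you might try directly after logging in, to see who the system thinks you are and where it has put you (the actual names and paths depend on your account):

```shell
whoami         # prints the user name you logged in as
echo "$HOME"   # the path of your home directory
pwd            # directly after login, this is the same directory
```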

B If you use the “secure shell”, for example, to log in to another machine over
  the network, the user name question is usually skipped, since unless you
  specify otherwise the system will assume that your user name on the re-
  mote computer will be the same as on the computer you are initiating the
  session from. The details are beyond the scope of this manual; the secure
  shell is discussed in detail in the Linup Front training manual Linux Admin-
  istration II.

Logging out on a text console On the text console, you can log out using, for
example, the logout command:

$ logout

                             Once you have logged out, on a text console the system once more displays the
                             start message and a login prompt for the next user. With a secure shell session,
                             you simply get another command prompt from your local computer.

                              C 2.1 [!1] Try logging in to the system. After that, log out again. (You will find
                                a user name and password in your system documentation, or—in a training
                                centre—your instructor will tell you what to use.)

                              C 2.2 [!2] What happens if you give (a) a non-existing user name, (b) a wrong
                                password? Do you notice anything unusual? What reasons could there be
                                for the system to behave as it does?

                             2.2     Switching On and Off
                             A Linux computer can usually be switched on by whoever is able to reach the
                             switch (local policy may vary). On the other hand, you should not switch off a
                             Linux machine on a whim—there might be data left in main memory that really
                             belong on disk and will be lost, or—which would be worse—the data on the hard
                             disk could get completely addled. Besides, other users might be logged in to the
                             machine via the network, be surprised by the sudden shutdown, and lose valu-
                             able work. For this reason, important computers are usually only “shut down”
                             by the system administrator. Single-user workstations, though, can usually be
                             shut down cleanly via the graphical desktop; depending on the system’s settings
                              normal user privileges may suffice, or you may have to enter the administrator’s
                              password.

                              C 2.3 [2] Check whether you can shut down your system cleanly as a normal
                                (non-administrator) user, and if so, try it.

                             2.3     The System Administrator
                             As a normal user, your privileges on the system are limited. For example, you may
                             not write to certain files (most files, actually—mostly those that do not belong to
                             you) and not even read some files (e. g., the file containing the encrypted pass-
                             words of all users). However, there is a user account for system administration
                             which is not subject to these restrictions—the user “root ” may read and write all
                             files, and do various other things normal users aren’t entitled to. Having admin-
                             istrator (or “root”) rights is a privilege as well as a danger—therefore you should
                             only log on to the system as root if you actually want to exercise these rights, not
                             just to read your mail or surf the Internet.

                             A Simply pretend you are Spider-Man: “With great power comes great re-
                               sponsibility”. Even Spider-Man wears his Spandex suit only if he must …

                                    In particular, you should avoid logging in as root via the graphical user inter-
                                face, since all of the GUI will run with root ’s privileges. This is a possible security
                                risk—GUIs like KDE contain lots of code which is not vetted as thoroughly for
                                security holes as the textual shell (which is, by comparison, relatively compact).
     Assuming root ’s identity Normally you can use the command “/bin/su - ” to assume root ’s identity (and thus
                               root ’s privileges). su asks for root ’s password and then starts a new shell, which
                               lets you work as if you had logged in as root directly. You can leave the shell again
                               using the exit command.
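Whether you currently hold administrator privileges can be checked with the id command. A small sketch (the su part is shown as comments, since it prompts interactively for a password):

```shell
id -u     # prints 0 if you are root, your own user ID otherwise
id -un    # the corresponding user name

# A typical interactive sequence:
#   $ /bin/su -      # asks for root's password, starts a root shell
#   # id -u          # now prints 0
#   # exit           # returns to your normal shell
```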

            Figure 2.2: Running programs as a different user in KDE

E You should get used to invoking su via its full path name—“/bin/su - ”. Oth-
  erwise, a user could trick you by calling you to her computer, getting you to
  enter “su ” in one of her windows and to input the root password. What you
  don’t realize at that moment is that the clever user wrote her own “Trojan”
  su command—which doesn’t do anything except write the password to a
  file, output the “wrong password” error message and remove itself. When
  you try again (gritting your teeth) you get the correct su —and your user
  possesses the coveted administrator’s privileges …

   You can usually tell that you actually have administrator privileges by look-
ing at the shell prompt—for root , it customarily ends with the “# ” character. (For
normal users, the shell prompt usually ends in “$ ” or “> ”).

      In Ubuntu you can’t even log in as root by default. Instead, the system al-
      lows the first user created during installation to execute commands with
      administrator privileges by prefixing them with the sudo command. With

       $ sudo chown joe file.txt

      for example, he could sign over the file.txt file to user joe – an operation
      that is restricted to the system administrator.
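While changing a file’s owner is restricted to the administrator, merely inspecting ownership is not. A small sketch (GNU stat is assumed for the -c option):

```shell
# Create a scratch file; it belongs to whoever created it:
tmpfile=$(mktemp)

ls -l "$tmpfile"        # the third column names the owning user
stat -c %U "$tmpfile"   # GNU stat: print only the owner's name

rm "$tmpfile"
```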

       Recent versions of Debian GNU/Linux offer a similar arrangement to
       Ubuntu’s.

 B Incidentally, with the KDE GUI, it is very easy to start arbitrary programs
  as root : Select “Run command” from the “KDE” menu (usually the entry
  at the very left of the command panel—the “Start” menu on Windows sys-
  tems), and enter a command in the dialog window. Before executing the
  command, click on the “Settings” button; an area with additional settings
  appears, where you can check “As different user” (root is helpfully set up as
  the default value). You just have to enter the root password at the bottom
  (Figure 2.2).

     kdesu   B Alternatively, you can put “kdesu ” in front of the actual command in the dia-
               log window (or indeed any shell command line in a KDE session). This will
                ask you for root ’s password before starting the command with administrator
                privileges.

             C 2.4 [!1] Use the su command to gain administrator privileges, and change
               back to your normal account.

             C 2.5 [5] (For programmers.) Write a convincing “Trojan” su program. Use it
               to try and fool your system administrator.

             C 2.6 [2] Try to run the id program as root in a terminal session under KDE, us-
               ing “Run command …”. Check the appropriate box in the extended settings
               to do so.

             Commands in this Chapter
             exit     Quits a shell                                             bash (1) 34
             kdesu    Starts a program as a different user on KDE      KDE: help:/kdesu 35
             logout   Terminates a shell session                                bash (1) 33
             su       Starts a shell using a different user’s identity            su (1) 34
             sudo     Allows normal users to execute certain commands with administrator
                      privileges                                                sudo (8) 35

                • Before using a Linux system, you have to log in giving your user name and
                  password. After using the system, you have to log out again.
                • Normal access rights do not apply to user root , who may do (essentially)
                  everything. These privileges should be used as sparingly as possible.
                • You should not log in to the GUI as root but use (e. g.) su to assume admin-
                  istrator privileges if necessary.

3 Who’s Afraid Of The Big Bad Shell?

3.1     Why? . . . . . . . . . . . . . . . . . . . . . . . . . . 38
        3.1.1 What Is The Shell? . . . . . . . . . . . . . . . . 38
3.2     Commands . . . . . . . . . . . . . . . . . . . . . . . 40
        3.2.1 Why Commands? . . . . . . . . . . . . . . . . . 40
        3.2.2 Command Structure . . . . . . . . . . . . . . . . 40
        3.2.3 Command Types . . . . . . . . . . . . . . . . . 41
        3.2.4 Even More Rules . . . . . . . . . . . . . . . . . 42

      •   Appreciating the advantages of a command-line user interface
      •   Knowing about common Linux shells
      •   Working with Bourne-Again Shell (Bash) commands
      •   Understanding the structure of Linux commands

      • Basic knowledge of using computers is helpful


                          3.1      Why?
                          More so than other modern operating systems, Linux (like Unix) is based on the
                           idea of entering textual commands via the keyboard. This may sound antediluvian
                           to some, especially if one is used to systems like Windows, which have been trying
                          for 15 years or so to brainwash their audience into thinking that graphical user
                          interfaces are the be-all and end-all. For many people who come to Linux from
                          Windows, the comparative prominence of the command line interface is at first
                           a “culture shock” like that suffered by a 21st-century person if they suddenly got
                          transported to King Arthur’s court – no cellular coverage, bad table manners, and
                          dreadful dentists!
                             However, things aren’t as bad as all that. On the one hand, nowadays there
                          are graphical interfaces even for Linux, which are equal to what Windows or Ma-
                          cOS X have to offer, or in some respects even surpass these as far as convenience
                          and power are concerned. On the other hand, graphical interfaces and the text-
                          oriented command line are not mutually exclusive, but in fact complementary
                          (according to the philosophy “the right tool for every job”).
                             At the end of the day this only means that you as a budding Linux user will
                          do well to also get used to the text-oriented user interface, known as the “shell”.
                          Of course nobody wants to prevent you from using a graphical desktop for every-
                          thing you care to do. The shell, however, is a convenient way to perform many
                          extremely powerful operations that are rather difficult to express graphically. To
                          reject the shell is like rejecting all gears except first in your car1 . Sure, you’ll get
                          there eventually even in first gear, but only comparatively slowly and with a hor-
                          rible amount of noise. So why not learn how to really floor it with Linux? And if
                          you watch closely, we’ll be able to show you another trick or two.

                          3.1.1     What Is The Shell?
                          Users cannot communicate directly with the operating system kernel. This is only
                          possible through programs accessing it via “system calls”. However, you must be
                          able to start such programs in some way. This is the task of the shell, a special user
                          program that (usually) reads commands from the keyboard and interprets them
                          (for example) as commands to be executed. Accordingly, the shell serves as an
                          “interface” to the computer that encloses the actual operating system like a shell
                          (as in “nutshell”—hence the name) and hides it from view. Of course the shell is
                          only one program among many that access the operating system.

                          B Even today’s graphical “desktops” like KDE can be considered “shells”. In-
                            stead of reading text commands via the keyboard, they read graphical com-
                            mands via the mouse—but as the text commands follow a certain “gram-
                            mar”, the mouse commands do just the same. For example, you select ob-
                            jects by clicking on them and then determine what to do with them: open-
                            ing, copying, deleting, …

                             Even the very first Unix—end-1960s vintage—had a shell. The oldest shell to
                          be found outside museums today was developed in the mid-1970s for “Unix ver-
          Bourne shell    sion 7” by Stephen L. Bourne. This so-called “Bourne shell” contains most basic
                          functions and was in very widespread use, but is very rarely seen in its original
                C shell   form today. Other classic Unix shells include the C shell, created at the University
                          of California in Berkeley and (very vaguely) based on the C programming lan-
             Korn shell   guage, and the largely Bourne-shell compatible, but functionally enhanced, Korn
                          shell (by David Korn, also at AT&T).
     Bourne-again shell      Standard on Linux systems is the Bourne-again shell, bash for short. It was
                          developed under the auspices of the Free Software Foundation’s GNU project by
                          Brian Fox and Chet Ramey and unifies many functions of the Korn and C shells.
                             1 This metaphor is for Europeans and other people who can manage a stick shift; our American
                          readers of course all use those wimpy automatic transmissions. It’s like they were all running Windows.

B Besides the mentioned shells, there are many more. On Unix, a shell is sim- shells: normal programs
  ply an application program like all others, and you need no special privi-
  leges to write one—you simply need to adhere to the “rules of the game”
  that govern how a shell communicates with other programs.

   Shells may be invoked interactively to read user commands (normally on a “ter-
minal” of some sort). Most shells can also read commands from files containing
pre-cooked command sequences. Such files are called “shell scripts”.              shell scripts
   A shell performs the following steps:
   1. Read a command from the terminal (or the file)
   2. Validate the command
   3. Run the command directly or start the corresponding program

   4. Output the result to the screen (or elsewhere)
   5. Continue at step 1.
In addition to this standard command loop, a shell generally contains further fea-
tures such as a programming language. This includes complex command struc- programming language
tures involving loops, conditions, and variables (usually in shell scripts, less fre-
quently in interactive use). A sophisticated method for recycling recently used
commands also makes a user’s life easier.
   Shell sessions can generally be terminated using the exit command. This also Terminating shell sessions
applies to the shell that you obtained immediately after logging in.
   Although, as we mentioned, there are several different shells, we shall concen-
trate here on bash as the standard shell on most Linux distributions. The LPI exams
also refer to bash exclusively.

B If there are several shells available on the system (the usual case), you can Changing shell
  use the following commands to switch between them:
      sh    for the classic Bourne shell (if available—on most Linux systems, sh refers
              to the Bourne-again shell).
      bash    for the Bourne-again shell (bash).
      ksh    for the Korn shell.
      csh    for the C shell.
      tcsh    for the “Tenex C shell”, an extended and improved version of the nor-
              mal C shell. On many Linux systems, the csh command really refers to
              tcsh .

B In case you cannot remember which shell you are currently running, the
  “echo $0 ” command should work in any shell and output the current shell’s
  name.

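A minimal sketch of this trick, assuming that sh and bash are both installed (true on almost all Linux systems):

```shell
# $0 names the program that was invoked -- here, the shell itself.
for shell in sh bash; do
    "$shell" -c 'echo "This is: $0"'
done
# → This is: sh
# → This is: bash
```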
C 3.1 [2] How many different shells are installed on your system? Which ones?
  (Hint: Check the file /etc/shells .)

C 3.2 [2] Log off and on again and check the output of the “echo $0 ” command
  in the login shell. Start a new shell using the “bash ” command and enter
  “echo $0 ” again. Compare the output of the two commands. Do you notice
  anything unusual?

                             3.2     Commands
                             3.2.1    Why Commands?
                             A computer’s operation, no matter which operating system it is running, can be
                             loosely described in three steps:
                                1. The computer waits for user input
                                2. The user selects a command and enters it via the keyboard or mouse
                                3. The computer executes the command
                             In a Linux system, the shell displays a “prompt”, meaning that commands can be
                             entered. This prompt usually consists of a user and host (computer) name, the
                             current directory, and a final character:

                             joe@red:/home > _

                             In this example, user joe works on computer red in the /home directory.

                             3.2.2    Command Structure
                              A command is essentially a sequence of characters which ends with a press
                             of the ↩ key and is subsequently evaluated by the shell. Many commands are
                             vaguely inspired by the English language and form part of a dedicated “command
                   syntax    language”. Commands in this language must follow certain rules, a “syntax”, for
                             the shell to be able to interpret them.
                    words       To interpret a command line, the shell first tries to divide the line into words.
     First word: command     Just like in real life, words are separated by spaces. The first word on a line is usu-
                parameters   ally the actual command. All other words on the line are parameters that explain
                             what is wanted in more detail.

                             A DOS and Windows users may be tripped up here by the fact that the shell
                               distinguishes between uppercase and lowercase letters. Linux commands
                               are usually spelled in lowercase letters only (exceptions prove the rule) and
                               not understood otherwise. See also Section 3.2.4.

                             B When dividing a command into words, one space character is as good as
                               many – the difference does not matter to the shell. In fact, the shell does
                               not even insist on spaces; tabulator characters are also allowed, which is
                               however mostly of importance when reading commands from files, since
                                the shell will not let you enter a tab character directly (not without jumping
                               through hoops, anyway).

                              B You may even use the line terminator ( ↩ ) to distribute a long command
                                across several input lines, but you must put a backslash (“\ ”) immediately in
                                front of it so the shell will not consider your command finished already.
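A short sketch of such a continuation:

```shell
# The backslash hides the line terminator from the shell, so both
# physical lines form a single logical command:
echo This sentence is distributed \
     across two input lines
# → This sentence is distributed across two input lines
```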

                                A command’s parameters can be roughly divided into two types:
                  options       • Parameters starting with a dash (“- ”) are called options. These are usually,
                                     er, optional—the details depend on the command in question. Figuratively
                                     speaking, they are “switches” that allow certain aspects of the command to
                                  be switched on or off. If you want to pass several options to a command,
                                  they can (often) be accumulated behind a single dash, i. e., the options se-
                                  quence “-a -l -F ” corresponds to “-alF ”. Many programs have more options
                                  than can be conveniently mapped to single characters, or support “long op-
                                  tions” for readability (frequently in addition to equivalent single-character
                                  options). Long options most often start with two dashes and cannot be ac-
                                  cumulated: “foo --bar --baz ”.

   • Parameters with no leading dash are called arguments. These are often the arguments
     names of files that the command should process.
   The general command structure can be displayed as follows:                          command structure

   • Command—“What to do?”
   • Options—“How to do it?”
   • Arguments—“What to do it with?”
Usually the options follow the command and precede the arguments. However,
not all commands insist on this—with some, arguments and options can be mixed
arbitrarily, and they behave as if all options came immediately after the command.
With others, options are taken into account only when they are encountered while
the command line is processed in sequence.
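For example, with ls (option bundling works for most but not all commands, and the long options shown assume GNU ls):

```shell
# Three separate single-character options ...
ls -a -l -F /tmp
# ... bundled into one word -- same effect:
ls -alF /tmp
# Long options are written out and cannot be bundled:
ls --all --classify /tmp
```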

A The command structure of current Unix systems (including Linux) has
  grown organically over a period of almost 40 years and thus exhibits vari-
  ous inconsistencies and small surprises. We too believe that there ought to
  be a thorough clean-up, but 30 years’ worth of shell scripts are difficult to
  ignore completely … Therefore be prepared to get used to little weirdnesses
  every so often.

3.2.3     Command Types
In shells, there are essentially two kinds of commands:

Internal commands These commands are made available by the shell itself. The
      Bourne-again shell contains approximately 30 such commands, which can
      be executed very quickly. Some commands (such as exit or cd ) alter the state
      of the shell itself and thus cannot be provided externally.
External commands The shell does not execute these commands by itself but
     launches executable files, which within the file system are usually found
     in directories like /bin or /usr/bin . As a user, you can provide your own pro-
     grams, which the shell will execute like all other external commands.
   You can use the type command to find out the type of a command. If you pass External or internal?
a command name as the argument, it outputs the type of command or the corre-
sponding file name, such as

$ type echo
echo is a shell builtin
$ type date
date is /bin/date

(echo is an interesting command which simply outputs its parameters:

$ echo Thou hast it now, king, Cawdor, Glamis, all
Thou hast it now, king, Cawdor, Glamis, all

datedisplays the current date and time, possibly adjusted to the current time zone
and language setup:

$ date
Mon May   7 15:32:03 CEST 2012

You will find out more about echo and date in Chapter 8.)
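type can also emit its verdict as a single word via the bash-specific -t option, which is convenient in scripts:

```shell
# type -t prints a single classifying word (a bash feature):
# alias, keyword, function, builtin, or file.
bash -c 'for cmd in echo cd date; do
    printf "%s is a %s\n" "$cmd" "$(type -t "$cmd")"
done'
# → echo is a builtin
# → cd is a builtin
# → date is a file
```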
  You can obtain help for internal Bash commands via the help command:                 help

                          $ help type
                          type: type [-afptP] name [name ...]
                            For each NAME, indicate how it would be interpreted if used as a
                            command name.

                            If the -t option is used, `type' outputs a single word which is one of
                            `alias', `keyword', `function', `builtin', `file' or `', if NAME is an

                          C 3.3 [2] With bash, which of the following programs are provided externally
                            and which are implemented within the shell itself: alias , echo , rm , test ?

                         3.2.4    Even More Rules
                          As mentioned above, the shell distinguishes between uppercase and lowercase
                          letters when commands are input. This does not apply to commands only, but
                          consequentially to options and parameters (usually file names) as well.
                              Furthermore, you should be aware that the shell treats certain characters in the
           space character    input specially. Most importantly, the already-mentioned space character is used
                              to separate words on the command line. Other characters with a special meaning
                              include, for example, “$ ”, “* ”, “? ”, and the quote characters themselves.
                              If you want to use any of these characters without the shell interpreting them
     “Escaping” characters    according to their special meaning, you need to “escape” them. You can use the
                              backslash “\ ” to escape a single special character, or else single or double quotes
                              (' …' , " …" ) to escape several special characters. For example:

                          $ touch 'New File'

                         Due to the quotes this command applies to a single file called New File . Without
                         quotes, two files called New and File would have been involved.
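A sketch of the equivalent alternatives (run in an empty scratch directory):

```shell
cd "$(mktemp -d)"            # work in an empty scratch directory
touch 'New File'             # single quotes protect the space
touch "New File"             # double quotes work the same way here
touch New\ File              # a backslash escapes just the next character
ls                           # all three commands touched the same file
# → New File
```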

                          B We can’t explain all the other special characters here. Most of them will
                            show up elsewhere in this manual – or else check the Bash documentation.

                         Commands in this Chapter
                         bash     The “Bourne-Again-Shell”, an interactive command interpreter
                                                                                             bash (1) 38, 39
                         csh      The “C-Shell”, an interactive command interpreter                csh (1) 39
                         date     Displays the date and time                                      date (1) 41
                         echo     Writes all its parameters to standard output, separated by spaces
                                                                                        bash (1), echo (1) 41
                         help     Displays on-line help for bash commands                         bash (1) 41
                         ksh      The “Korn shell”, an interactive command interpreter           ksh (1) 39
                         sh       The “Bourne shell”, an interactive command interpreter            sh (1) 39
                         tcsh     The “Tenex C shell”, an interactive command interpreter         tcsh (1) 39
                         type     Determines the type of command (internal, external, alias) bash (1) 41

  • The shell reads user commands and executes them.
  • Most shells have programming language features and support shell scripts
    containing pre-cooked command sequences.
  • Commands may have options and arguments. Options determine how the
    command operates, and arguments determine what it operates on.
  • Shells differentiate between internal commands, which are implemented in
    the shell itself, and external commands, which correspond to executable files
    that are started in separate processes.

Getting Help

4.1  Self-Help . . . . . . . . .         . . .    .   .   .   .   .   .   .   .   .   .   .   46
4.2  The help Command and the --help     Option   .   .   .   .   .   .   .   .   .   .   .   46
4.3  The On-Line Manual . . . .          . . .    .   .   .   .   .   .   .   .   .   .   .   46
   4.3.1 Overview . . . . . .            . . .    .   .   .   .   .   .   .   .   .   .   .   46
   4.3.2 Structure . . . . . . .         . . .    .   .   .   .   .   .   .   .   .   .   .   47
   4.3.3 Chapters . . . . . . .          . . .    .   .   .   .   .   .   .   .   .   .   .   48
   4.3.4 Displaying Manual Pages .       . . .    .   .   .   .   .   .   .   .   .   .   .   48
4.4 Info Pages . . . . . . . .           . . .    .   .   .   .   .   .   .   .   .   .   .   49
4.5 HOWTOs. . . . . . . . .              . . .    .   .   .   .   .   .   .   .   .   .   .   50
4.6 Further Information Sources . .      . . .    .   .   .   .   .   .   .   .   .   .   .   50

      • Being able to handle manual and info pages
      • Knowing about and finding HOWTOs
      • Being familiar with the most important other information sources

      • Linux Overview
      • Basic command-line Linux usage (e. g., from the previous chapters)


                           4.1     Self-Help
                           Linux is a powerful and intricate system, and powerful and intricate systems are,
                           as a rule, complex. Documentation is an important tool to manage this complex-
                           ity, and many (unfortunately not all) aspects of Linux are documented very exten-
                           sively. This chapter describes some methods to access this documentation.

                           B “Help” on Linux in many cases means “self-help”. The culture of free soft-
                             ware implies not unnecessarily imposing on the time and goodwill of other
                             people who are spending their free time in the community by asking things
                             that are obviously explained in the first few paragraphs of the manual. As
                             a Linux user, you do well to have at least an overview of the available doc-
                             umentation and the ways of obtaining help in cases of emergency. If you
                             do your homework, you will usually experience that people will help you
                             out of your predicament, but any tolerance towards lazy individuals who
                             expect others to tie themselves in knots on their behalf, on their own time,
                             is not necessarily very pronounced.

                           B If you would like to have somebody listen around the clock, seven days a
                             week, to your not-so-well-researched questions and problems, you will have
                             to take advantage of one of the numerous “commercial” support offerings.
                             These are available for all common distributions and are offered either by
                             the distribution vendor themselves or else by third parties. Compare the
                             different service vendors and pick one whose service level agreements and
                             pricing suit you.

                           4.2     The help Command and the --help Option
     Internal bash commands In bash , internal commands are described in more detail by the help command,
                           giving the command name in question as an argument:

                           $ help exit
                           exit: exit [n]
                               Exit the shell with a status of N.
                               If N is omitted, the exit status
                               is that of the last command executed.
                           $ _

                            B More detailed explanations are available from the shell’s manual page and
                              info documentation. These information sources will be covered later in this
                              chapter.

                             Many external commands (programs) support a --help option instead. Most
                           commands display a brief listing of their parameters and syntax.

                           B Not every command reacts to --help ; frequently the option is called -h or -? ,
                             or help will be output if you specify any invalid option or otherwise illegal
                             command line. Unfortunately there is no universal convention.
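A typical example, using GNU ls (the exact wording differs from program to program):

```shell
# Many GNU programs begin their --help output with a usage line,
# followed by a (sometimes very long) list of options.
ls --help | head -n 2
```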

                           4.3     The On-Line Manual
                           4.3.1   Overview
                           Nearly every command-line program comes with a “manual page” (or “man
                           page”), as do many configuration files, system calls etc. These texts are generally
             Command man   installed with the software, and can be perused with the “man ⟨name⟩” command.

                         Table 4.1: Manual page sections

          Section        Content
        NAME             Command name and brief description
       SYNOPSIS          Description of the command syntax
     DESCRIPTION         Verbose description of the command’s effects
       OPTIONS           Available options
      ARGUMENTS          Available arguments
         FILES           Auxiliary files
      EXAMPLES           Sample command lines
       SEE ALSO          Cross-references to related topics
     DIAGNOSTICS         Error and warning messages
      COPYRIGHT          Authors of the command
         BUGS            Known limitations of the command

Here, ⟨name⟩ is the command or file name that you would like explained. “man
bash ”, for example, produces a list of the aforementioned internal shell commands.
   However, the manual pages have some disadvantages: Many of them are only
available in English; there are sets of translations for different languages which are
often incomplete. Besides, the explanations are frequently very complex. Every
single word can be important, which does not make the documentation accessi-
ble to beginners. In addition, especially with longer documents the structure can
be obscure. Even so, the value of this documentation cannot be overestimated.
Instead of deluging the user with a large amount of paper, the on-line manual is
always available with the system.

B Many Linux distributions pursue the philosophy that there should be a
  manual page for every command that can be invoked on the command line.
  This does not apply to the same extent to programs belonging to the graph-
  ical desktop environments KDE and GNOME, many of which not only do
  not come with a manual page at all, but which are also very badly docu-
  mented even inside the graphical environment itself. The fact that many of
  these programs have been contributed by volunteers is only a weak excuse.

4.3.2    Structure
The structure of the man pages loosely follows the outline given in Table 4.1, even Man page outline
though not every manual page contains every section mentioned there. In partic-
ular, the EXAMPLES are frequently given short shrift.

B The BUGS heading is often misunderstood: Read bugs within the imple-
  mentation get fixed, of course; what is documented here are usually restric-
  tions which follow from the approach the command takes, which are not able
  to be lifted with reasonable effort, and which you as a user ought to know
  about. For example, the documentation for the grep command points out
  that various constructs in the regular expression to be located may lead to
  the grep process using very much memory. This is a consequence of the way
  grep implements searching and not a trivial, easily fixed error.

    Man pages are written in a special input format which can be processed for text
display or printing by a program called groff . Source code for the manual pages is
stored in the /usr/share/man directory in subdirectories called man 𝑛, where 𝑛 is one
of the chapter numbers from Table 4.2.

B You can integrate man pages from additional directories by setting the
  MANPATH environment variable, which contains the directories which will be
  searched by man , in order. The manpath command gives hints for setting up
  MANPATH.

                                              Table 4.2: Manual Page Topics

                        No.      Topic
                           1     User commands
                           2     System calls
                           3     C language library functions
                           4     Device files and drivers
                           5     Configuration files and file formats
                           6     Games
                           7     Miscellaneous (e. g. groff macros, ASCII tables, …)
                           8     Administrator commands
                           9     Kernel functions
                           n     “New” commands

                   4.3.3       Chapters
        Chapters Every manual page belongs to a “chapter” of the conceptual “manual” (Table 4.2).
                   Chapters 1, 5 and 8 are most important. You can give a chapter number on the man
                   command line to narrow the search. For example, “man 1 crontab ” displays the
                   man page for the crontab command, while “man 5 crontab ” explains the format of
                   crontab files. When referring to man pages, it is customary to append the chap-
                   ter number in parentheses; we differentiate accordingly between crontab (1), the
                   crontab command manual, and crontab (5), the description of the file format.
           man -a      With the -a option, man displays all man pages matching the given name; without
                    this option, only the first page found (generally from chapter 1) will be displayed.

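The chapter mechanism can be sketched non-interactively with the -w option, which merely prints where the matching page's source file lives instead of displaying it (the guard covers systems without man installed):

```shell
# -w shows which file a lookup would match, without opening a pager.
command -v man >/dev/null || { echo "man is not installed"; exit 0; }
man -w man        # first match for "man" (normally from chapter 1)
man -w 1 man      # the same lookup, explicitly restricted to chapter 1
```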
                   4.3.4       Displaying Manual Pages
                   The program actually used to display man pages on a text terminal is usually
                   less , which will be discussed in more detail later on. At this stage it is important
                   to know that you can use the cursor keys ↑ and ↓ to navigate within a man
                   page. You can search for keywords inside the text by pressing / —after entering
                   the word and pressing the return key, the cursor jumps to the next occurrence of
                   the word (if it does occur at all). Once you are happy, you can quit the display
                   using q to return to the shell.

                   B Using the KDE web browser, Konqueror, it is convenient to obtain nicely for-
                     matted man pages. Simply enter the URL “man:/ ⟨name⟩” (or even “# ⟨name⟩”)

     Figure 4.1: A manual page in a text terminal (left) and in Konqueror (right)

      in the browser’s address line. This also works on the KDE command line
      (Figure 2.2).
   Before rummaging aimlessly through innumerable man pages, it is often sen-
sible to try to access general information about a topic via apropos . This command Keyword search
works just like “man -k ”; both search the “NAME” sections of all man pages for
a keyword given on the command line. The output is a list of all manual pages
containing the keyword in their name or description.
   A related command is whatis . This also searches all manual pages, but for a whatis
command (file, …) name rather than a keyword—in other words, the part of the
“NAME” section to the left of the dash. This displays a brief description of the
desired command, system call, etc.; in particular the second part of the “NAME”
section of the manual page(s) in question. whatis is equivalent to “man -f ”.

C 4.1 [!1] View the manual page for the ls command. Use the text-based man
  command and—if available—the Konqueror browser.

C 4.2 [2] Which manual pages on your system deal (at least according to their
  “NAME” sections) with processes?

C 4.3 [5] (Advanced.) Use a text editor to write a manual page for a hypotheti-
  cal command. Read the man (7) man page beforehand. Check the appearance
  of the man page on the screen (using “groff -Tascii -man ⟨file⟩ | less ”) and
  as printed output (using something like “groff -Tps -man ⟨file⟩ | gv - ”).

4.4     Info Pages
For some commands—often more complicated ones—there are so-called “info
pages” instead of (or in addition to) the more usual man pages. These are usu-
ally more extensive and based on the principles of hypertext, similar to the World hypertext
Wide Web.

B The idea of info pages originated with the GNU project; they are therefore
  most frequently found with software published by the FSF or otherwise be-
  longing to the GNU project. Originally there was supposed to be only info
  documentation for the “GNU system”; however, since GNU also takes on
  board lots of software not created under the auspices of the FSF, and GNU
  tools are being used on systems pursuing a more conservative approach,
  the FSF has relented in many cases.
   Analogously to man pages, info pages are displayed using the “info ⟨command⟩”
command (the package containing the info program may have to be installed
explicitly). Furthermore, info pages can be viewed using the emacs editor or dis-
played in the KDE web browser, Konqueror, via URLs like “info:/ ⟨command⟩”.

B One advantage of info pages is that, like man pages, they are written in
  a source format which can conveniently be processed either for on-screen
  display or for printing manuals using PostScript or PDF. Instead of groff ,
  the TEX typesetting program is used to prepare output for printing.

C 4.4 [!1] Look at the info page for the ls program. Try the text-based info
  browser and, if available, the Konqueror browser.

C 4.5 [2] Info files use a crude (?) form of hypertext, similar to HTML files on
  the World Wide Web. Why aren’t info files written in HTML to begin with?

                            4.5     HOWTOs
                            Both manual and info pages share the problem that the user must basically know
                            the name of the program to use. Even searching with apropos is frequently nothing
but a game of chance. Besides, not every problem can be solved using one single
command. Accordingly, “problem-oriented” rather than “command-oriented”
documentation is often called for. The HOWTOs are designed to help
                            with this.
                                HOWTOs are more extensive documents that do not restrict themselves to sin-
                            gle commands in isolation, but try to explain complete approaches to solving
                            problems. For example, there is a “DSL HOWTO” detailing ways to connect a
                            Linux system to the Internet via DSL, or an “Astronomy HOWTO” discussing as-
                            tronomy software for Linux. Many HOWTOs are available in languages other
than English, even though the translations often lag behind the English-language
originals.
    Most Linux distributions furnish the HOWTOs (or significant subsets) as pack-
ages to be installed locally. They end up in a distribution-specific directory—/usr/
share/doc/howto for SUSE distributions, /usr/share/doc/HOWTO for Debian GNU/Linux—
typically either as plain text or else HTML files. Current versions of all HOWTOs,
in other formats such as PostScript or PDF as well, can be found on the Web on
the site of the “Linux Documentation Project” ( ) which also offers other
                            Linux documentation.
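A quick look at these locations from the shell (the exact directory names vary between distributions, and the HOWTO directories exist only if the corresponding package is installed):

```shell
# Per-package documentation: README files, changelogs, examples, ...
ls /usr/share/doc | head -5

# Locally installed HOWTOs, if present (distribution-specific):
# ls /usr/share/doc/HOWTO    # Debian GNU/Linux
# ls /usr/share/doc/howto    # SUSE
```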

                            4.6     Further Information Sources
     Additional information You will find additional documentation and example files for (nearly) every in-
                        stalled software package under /usr/share/doc or /usr/share/doc/packages (depend-
                        ing on your distribution). Many GUI applications (such as those from the KDE or
                        GNOME packages) offer “help” menus. Besides, many distributions offer special-
                        ized “help centers” that make it convenient to access much of the documentation
                        on the system.
                           Independently of the local system, there is a lot of documentation available on
                  WWW the Internet, among other places on the WWW and in USENET archives.
                 USENET    Some of the more interesting web sites for Linux include:

                                              The “Linux Documentation Project”, which is in charge of
                                  man pages and HOWTOs (among other things).
                     A general “portal” for Linux enthusiasts.
                   “free-form text information database for everything
                                  pertaining to Linux” (in German).
                    Linux Weekly News—probably the best web site for Linux news of
                                  all sorts. Besides a daily overview of the newest developments, products,
                                  security holes, Linux advocacy in the press, etc., on Thursdays there is an
                                  extensive on-line magazine with well-researched background reports about
                                  the preceding week’s events. The daily news is freely available, while the
                                  weekly issues must be paid for (various pricing levels starting at US-$ 5 per
                                  month). One week after their first appearance, the weekly issues are made
                                  available for free as well.
                                  This site publishes announcements of new (predominantly
                                  free) software packages, which are often available for Linux. In addition to
                                  this there is a database allowing queries for interesting projects or software
                                  packages.
                                  A site collecting “headlines” from other interesting Linux
                                  sites, including LWN and Freshmeat.

   If there is nothing to be found on the Web or in Usenet archives, it is possible to
ask questions in mailing lists or Usenet groups. In this case you should note that
many users of these forums consider it very bad form to ask questions answered
already in the documentation or in a “FAQ” (frequently asked questions) re-
source. Try to prepare a detailed description of your problem, giving relevant
excerpts of log files, since a complex problem like yours is difficult to diagnose at
a distance (and you will surely be able to solve non-complex problems by your-
self).

B A news archive is accessible on (formerly DejaNews).

B Interesting news groups for Linux can be found in the English-language
  comp.os.linux.* or the German-language de.comp.os.unix.linux.* hierarchies.
  Many Unix groups are appropriate for Linux topics; a question about the
  shell should be asked in a group dedicated to shell programming rather
  than a Linux group, since shells are usually not specific to Linux.

B Linux-oriented mailing lists can be found, for example, at majordomo@vger.
  kernel.org . You should send an e-mail message including “subscribe LIST ” to
  this address in order to subscribe to a list called LIST. A commented list of
  all available mailing lists on the system may be found at http://vger.kernel.
  org/vger-lists.html .

B An established strategy for dealing with seemingly inexplicable problems is
  to search for the error message in question using Google (or another search search engine
  engine you trust). If you do not obtain a helpful result outright, leave out
  those parts of your query that depend on your specific situation (such as
  domain names that only exist on your system). The advantage is that Google
  indexes not just the common web pages, but also many mailing list archives,
  and chances are that you will encounter a dialogue where somebody else
  had a problem very like yours.

    Incidentally, the great advantage of open-source software is not only the large
amount of documentation, but also the fact that most documentation is restricted Free documentation
as little as the software itself. This facilitates collaboration between software
developers and documentation authors, and the translation of documentation
into different languages is easier. In fact, there is ample opportunity for non-
programmers to help with free software projects, e. g., by writing good documen-
tation. The free-software scene should try to give the same respect to documen-
tation authors that it does to programmers—a paradigm shift that has begun but
is by no means finished yet.

Commands in this Chapter
apropos Shows all manual pages whose NAME sections contain a given keyword
                                                                apropos (1) 49
groff   Sophisticated typesetting program                     groff (1) 47, 49
help    Displays on-line help for bash commands                    bash (1) 46
info    Displays GNU Info pages on a character-based terminal      info (1) 49
less    Displays texts (such as manual pages) by page              less (1) 48
man     Displays system manual pages                                 man (1) 46
manpath Determines the search path for system manual pages      manpath (1) 47
whatis  Locates manual pages with a given keyword in its description
                                                                 whatis (1) 49

      • “help ⟨command⟩” explains internal bash commands. Many external com-
        mands support a --help option.
      • Most programs come with manual pages that can be perused using man .
        apropos searches all manual pages for keywords, whatis looks for manual
        page names.
      • For some programs, info pages are an alternative to manual pages.
      • HOWTOs form a problem-oriented kind of documentation.
      • There is a multitude of interesting Linux resources on the World Wide Web
        and USENET.

The vi Editor

5.1  Editors. . . . . . . .             .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   54
5.2  The Standard—vi . . . .            .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   54
   5.2.1 Overview . . . .               .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   54
   5.2.2 Basic Functions . .            .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   55
   5.2.3 Extended Commands              .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   58
5.3 Other Editors . . . . .             .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   60

      • Becoming familiar with the vi editor
      • Being able to create and change text files

      • Basic shell operation (qv. Chapter 2)


                                     5.1     Editors
                                   Most operating systems offer tools to create and change text documents. Such
                                    programs are commonly called “editors” (from the Latin “edere”, “to work on”).
                                      Generally, text editors offer functions considerably exceeding simple text input
                                   and character-based editing. Good editors allow users to remove, copy or insert
                                   whole words or lines. For long files, it is helpful to be able to search for partic-
                                   ular sequences of characters. By extension, “search and replace” commands can
                                   make tedious tasks like “replace every x by a u ” considerably easier. Many editors
                                   contain even more powerful features for text processing.
     Difference to word processors    In contrast to widespread “word processors” such as Writer or
                                   Microsoft Word, text editors usually do not offer markup elements such as various
                                   fonts (Times, Helvetica, Courier, …), type attributes (boldface, italic, underlined,
                                   …), typographical features (justified type, …) and so on—they are predominantly
                                   intended for the creation and editing of pure text files, where these things would
                                   really be a nuisance.

                                     B Of course there is nothing wrong with using a text editor to prepare input
                                       files for typesetting systems such as groff or LATEX that offer all these typo-
                                       graphic features. However, chances are you won’t see much of these in your
                                       original input—which can really be an advantage: After all, much of the ty-
                                       pography serves as a distraction when writing, and authors are tempted to
                                       fiddle with a document’s appearance while inputting text, rather than con-
                                       centrating on its content.

               Syntax highlighting   B Most text editors today support syntax highlighting, that is, identifying cer-
                                       tain elements of a program text (comments, variable names, reserved words,
                                       strings) by colours or special fonts. This does look spiffy, even though the
                                       question of whether it really helps with programming has not yet been an-
                                       swered through suitable psychological studies.

                                        In the rest of the chapter we shall introduce the possibly most important Linux
                                      editor, vi . However, we shall restrict ourselves to the most basic functionality; it
                                      would be easy to conduct a multi-day training course on this editor alone. As with
                                     the shells, the choice of text editor is up to a user’s own personal preference.

                                      C 5.1 [2] Which text editors are installed on your system? How can you find
                                        out?

                                     5.2     The Standard—vi
                                     5.2.1    Overview
                                    The only text editor that is probably part of every Linux system is called vi (from
                                    “visual”, not the Roman numeral VI—usually pronounced “vee-i”). For practical reasons, this
                vi : today a clone usually doesn’t mean the original vi (which was part of BSD and is decidedly long
                                   in the tooth today), but more modern derivatives such as vim (from “vi improved”)
                                   or elvis ; these editors are, however, sufficiently close to the original vi , to be all
                                   lumped together.
                                       vi , originally developed by Bill Joy for BSD, was one of the first “screen-
                                   oriented” editors in widespread use for Unix. This means that it allowed users to
                                   use the whole screen for editing rather than let them edit just one line at a time.
                                   This is today considered a triviality, but used to be an innovation—which is not to
                                   say that earlier programmers were too stupid to figure it out, but that text termi-
                                   nals allowing free access to arbitrary points on the screen (a mandatory feature for
                                   programs like vi ) had only just become affordable. Out of consideration for older

systems using teletypes or “glass ttys” (terminals that could only add material at
the bottom of the screen), vi also supports a line-oriented editor under the name
of ex .
   Even with the advanced terminals of that time, one could not rely on the
availability of keyboards with special keys for cursor positioning or advanced Keyboard restrictions
functions—today’s standard PC keyboards would have been considered luxuri-
ous, if not overloaded. This justifies vi ’s unusual concepts of operation, which
today could rightly be considered antediluvian. It cannot be taken amiss if peo-
ple reject vi because of this. In spite of this, having rudimentary knowledge of
vi cannot possibly hurt, even if you select a different text editor for your daily
work—which you should by all means do if vi does not agree with you. It is not
as if there was no choice of alternatives, and we shall not get into childish games
such as “Whoever does not use vi is not a proper Linux user”. Today’s graphical
desktops such as KDE do contain very nice and powerful text editors.

B There is, in fact, an editor which is even cruder than vi —the ed program.
  The title “the only editor that is guaranteed to be available on any Unix sys-
  tem” rightfully belongs to ed instead of vi , but ed as a pure line editor with
  a teletype-style user interface is too basic for even hardcore Unix advocates.
  (ed can be roughly compared with the DOS program, EDLIN ; ed , however, is
  vastly more powerful than the Redmond offering.) The reason why ed is still
  available in spite of the existence of dozens of more convenient text editors
  is unobvious, but very Unix-like: ed accepts commands on its standard input
  and can therefore be used in shell scripts to change files programmatically.
  ed allows editing operations that apply to the whole file at once and is, thus,
  more powerful than its colleague, the “stream editor” sed , which copies its
  standard input to its standard output with certain modifications; normally
  one would use sed and revert to ed for exceptional cases, but ed is still useful
  every so often.
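The difference can be sketched in a short shell session: sed streams a modified copy of its input to standard output, while ed (where installed) rewrites the file itself from a command script. The file name /tmp/sample.txt is just an example:

```shell
# Create a small sample file
printf 'old line 1\nold line 2\n' > /tmp/sample.txt

# sed: copies its input to standard output with modifications;
# the file itself is left untouched
sed 's/old/new/' /tmp/sample.txt

# ed: reads commands on standard input and rewrites the file in
# place -- handy in shell scripts (uncomment if ed is installed)
# printf '1,$s/old/new/\nw\nq\n' | ed -s /tmp/sample.txt
```

Afterwards /tmp/sample.txt still contains the “old” lines, which is exactly the point: sed edits the stream, not the file.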

5.2.2   Basic Functions
The Buffer Concept vi works in terms of so-called buffers. If you invoke vi with buffers
a file name as an argument, the content of that file will be read into a buffer. If no
file exists by that name, an empty buffer is created.
    All the modifications made with the editor are only applied inside the buffer.
To make these modifications permanent, the buffer content must be explicitly
written back to the file. If you really want to discard the modifications, simply
leave vi without storing the buffer content—the file on the storage medium will
remain unchanged.
    In addition to a file name as an argument, you can pass options to vi as usual.
Refer to the documentation for the details.

Modes As mentioned earlier, one of the characteristics of vi is its unusual man-
ner of operation. vi supports three different working “modes”:
Command mode All keyboard input consists of commands that do not appear
    on screen and mostly do not need to be finalized using the return key. Af-
    ter invoking vi , you end up in this mode. Be careful: Any key press could
    invoke a command.

Insert mode All keyboard input is considered text and displayed on the screen.
      vi behaves like a “modern” editor, albeit with restricted navigation and cor-
      rection facilities.
Command-line mode This is used to enter long commands. These usually start
    with a colon (“: ”) and are finished using the return key.

In insert mode, nearly all navigation or correction commands are disabled, which
requires frequent alternation between insert and command modes. The fact that

                                                                  Insert Mode

                                                              [Esc]                   a, i, o, ...

                          vi command
                                                              Command Mode
                                   ZZ, ...

                                                                    :                 [Return]

                                                         Command-Line Mode

                                                   Figure 5.1: vi ’s modes

                                         Table 5.1: Insert-mode commands for vi

                         Command        Result
                               a        Appends new text after the cursor
                              A         Appends new text at the end of the line
                               i        Inserts new text at the cursor position
                               I        Inserts new text at the beginning of the line
                               o        Inserts a new line below the line containing the cursor
                              O         Inserts a new line above the line containing the cursor

                    it may be difficult to find out which mode the editor is currently in (depending on
                    the vi implementation used and its configuration) does not help to make things
                    easier for beginners. An overview of vi modes may be found in Figure 5.1.

                    B Consider: vi started when keyboards consisting only of the “letter block” of
                      modern keyboards were common (127 ASCII characters). There was really
                      no way around the scheme used in the program.

     command mode      After invoking vi without a file name you end up in command mode. In con-
                    trast to most other editors, direct text input is not possible. There is a cursor at the
                    top left corner of the screen above a column filled with tildes. The last line, also
                    called the “status line”, displays the current mode (maybe), the name of the file
                    currently being edited (if available) and the current cursor position.

                    B If your version of vi does not display status information, try your luck with
                       Esc :set showmode ↩ .

                        Shortened by a few lines, this looks similar to the following:


                               Table 5.2: Cursor positioning commands in vi

      Command                     Cursor moves …
          h       or       ←      one character to the left
          l       or    →         one character to the right
          k        or      ↑      one character up
              j   or       ↓      one character down
                   0              to the beginning of the line
                   $              to the end of the line
                  w               to the next word
                   b              to the previous word
      f   ⟨character⟩             to the next ⟨character⟩ on the line
          Ctrl + F                to the next page (screenful)
          Ctrl + B                to the previous page
                  G               to the last line of the file
              ⟨n⟩      G          to line no. ⟨n⟩

~
~
~
"Empty Buffer"                                      0,0-1

    Only after a command such as a (“append”), i (“insert”), or o (“open”)
will vi change into “insert mode”. The status line displays something like “-- insert mode
INSERT -- ”, and keyboard input will be accepted as text.
    The possible commands to enter insert mode are listed in Table 5.1; note that
lower-case and upper-case commands are different. To leave insert mode and go
back to command mode, press the Esc key. In command mode, enter Z Z to
write the buffer contents to disk and quit vi .
    If you would rather discard the modifications you made, you need to quit the
editor without saving the buffer contents first. Use the command : q! ↩ . The
leading colon emphasises that this is a command-line mode command.
    When : is entered in command mode, vi changes to command-line mode. command-line mode
You can recognize this by the colon appearing in front of the cursor on the bottom
line of the screen. All further keyboard input is appended to that colon, until the
command is finished with the return key ( ↩ ); vi executes the command and
reverts to command mode. In command-line mode, vi processes the line-oriented
commands of its alter ego, the ex line editor.
    There is an ex command to save an intermediate version of the buffer called :
w (“write”). Commands : x and : wq save the buffer contents and quit the editor;
both commands are therefore identical to the Z Z command.

Movement Through the Text In insert mode, newly entered characters will be put
into the current line. The return key starts a new line. You can move about the text
using cursor keys, but you can remove characters only on the current line using
 ⇐ —an inheritance of vi ’s line-oriented predecessors. More extensive navigation
is only possible in command mode (Table 5.2).
    Once you have directed the cursor to the proper location, you can begin cor-
recting text in command mode.

Deleting characters The d command is used to delete characters; it is always
followed by another character that specifies exactly what to delete (Table 5.3). To
make editing easier, you can prefix a repeat count to each of the listed commands. repeat count
For example, the 3 x command will delete the next three characters.
    If you have been too eager and deleted too much material, you can revert the
last change (or even all changes one after the other) using the u (“undo”) com- undo

                                                   Table 5.3: Editing commands in vi

                       Command                  Result
                                    x           Deletes the character below the cursor
                                    X           Deletes the character to the left of the cursor
                            r   ⟨char⟩          Replaces the character below the cursor by ⟨char⟩
                                d       w       Deletes from cursor to end of current word
                                d       $       Deletes from cursor to end of current line
                                d       0       Deletes from cursor to start of current line
                        d       f   ⟨char⟩      Deletes from cursor to next occurrence of ⟨char⟩ on the
                                                current line
                                d       d       Deletes current line
                                d       G       Deletes from current line to end of text
                            d       1       G   Deletes from current line to beginning of text

                                                 Table 5.4: Replacement commands in vi

                      Command                   Result
                                c       w       Replace from cursor to end of current word
                                c       $       Replace from cursor to end of current line
                                c       0       Replace from cursor to start of current line
                        c       f   ⟨char⟩      Replace from cursor to next occurrence of ⟨char⟩ on the
                                                current line
                            c       / abc       Replace from cursor to next occurrence of character se-
                                                quence abc

                mand. This is subject to appropriate configuration settings.

                 Overwriting The c command (“change”) serves to overwrite a selected
                 part of the text. c is a “combination command” similar to d , requiring an addi-
                 tional specification of what to overwrite. vi will remove that part of the text before
                 changing to insert mode automatically. You can enter new material and return to
                 command mode using Esc (Table 5.4).

                5.2.3           Extended Commands
                 Cutting, Copying, and Pasting Text A frequent operation in text editing is to move
                 or copy existing material elsewhere in the document. vi offers handy combination
                 commands to do this, which take specifications similar to those for the c com-
                 mand. y (“yank”) copies material to an interim buffer without changing the
                 original text, whereas d moves it to the interim buffer, i. e., it is removed from
                 its original place and only available in the interim buffer afterwards. (We have
                 introduced this as “deletion” above.)
                     Of course there is a command to re-insert (or “paste”) material from an interim
                 buffer. This is done using p (to insert after the current cursor position) or P (to
                 insert at the current cursor position).
      26 buffers     A peculiarity of vi is that there is not just one interim buffer but 26. This makes
                 it easy to paste different texts (phrases, …) to different places in the file. The in-
                 terim buffers are called “a ” through “z ” and can be invoked using a combination
                 of double quotes and buffer names. The command sequence ” c y 4 w , for
                 instance, copies the next four words to the interim buffer called c ; the command
                  sequence ” g p inserts the contents of interim buffer g after the current cursor
                  position.

Regular-Expression Text Search Like every good editor, vi offers well-thought-
out search commands. “Regular expressions” make it possible to locate character
sequences that fit elaborate search patterns. To start a search, enter a slash / in
command mode. This will appear on the bottom line of the terminal followed by
the cursor. Enter the search pattern and start the search using the return key. vi
will start at the current cursor position and work towards the end of the docu-
ment. To search towards the top, the search must be started using ? instead of /
. Once vi has found a matching character sequence, it stops the search and places
the cursor on the first character of the sequence. You can repeat the same search
towards the end using n (“next”) or towards the beginning using N .

Searching and Replacing Locating character sequences is often not all that is
desired. Therefore, vi also allows replacing found character sequences by others.
The following ex command can be used:

 :   [⟨start line⟩, ⟨end line⟩]s/ ⟨regexp⟩/ ⟨replacement⟩[/g ]

The parts of the command within square brackets are optional. What do the dif-
ferent components of the command mean?
   ⟨Start line⟩ and ⟨end line⟩ determine the range of lines to be searched. Without
these, only the current line will be looked at! Instead of line numbers, you can
use a dot to specify the current line or a dollar sign to specify the last line—but do
not confuse the meanings of these characters with their meanings within regular
expressions. For example,

:5,$s/red/blue/

replaces the first occurrence of red on each line by blue , where the first four lines
are not considered. The command

:5,$s/red/blue/g

replaces every occurrence of red in those lines by blue . (Watch out: Even Fred
Flintstone will become Fblue Flintstone .)

B Instead of line numbers, “. ”, and “$ ”, vi also allows regular expressions
  within slashes as start and end markers:

        :/^BEGIN/,/^END/s/red/blue/

  replaces red by blue only in lines located between a line starting with BEGIN
  and the next line starting with END.

    After the command name s and a slash, you must enter the desired regular
expression. After another slash, ⟨replacement⟩ gives a character sequence by which
the original text is to be replaced.
    There is a special function for this argument: With a & character you can
“reference back” to the text matched by the ⟨regexp⟩ in every actual case. That is,
“ :s/bull/& frog/ ” changes every bull within the search range to a bull frog—a task
which will probably give genetic engineers trouble for some time to come.
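Since sed shares this substitution syntax, the effect of the & back reference can be previewed on the command line (the sample text is made up):

```shell
# "&" stands for whatever text the regular expression matched.
echo 'the bull charged' | sed 's/bull/& frog/'
# prints: the bull frog charged
```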

Command-line Mode Commands So far we have described some command-line
mode (or “ex mode”) commands. There are several more, all of which can be
accessed from command mode by prefixing them with a colon and finishing them
with the return key (Table 5.5).

C 5.2 [5] (For systems with vim , e. g., the SUSE distributions.) Find out how to
  access the interactive vim tutorial and work through it.

                                     Table 5.5: ex commands in vi

             Command                        Result
             : w   ⟨file name⟩              Writes the complete buffer content to the
                                            designated file
             : w!   ⟨file name⟩             Writes to the file even if it is write-
                                            protected (if possible)
             : e   ⟨file name⟩              Reads the designated file into the buffer
             : e #                          Reads the last-read file again
             : r   ⟨file name⟩              Inserts the content of the designated file
                                            after the line containing the cursor
             : !   ⟨shell command⟩          Executes the given shell command and re-
                                            turns to vi afterwards
             : r!   ⟨shell command⟩         Inserts the output of ⟨shell command⟩ after
                                            the line containing the cursor
             : s/ ⟨regexp⟩/ ⟨replacement⟩   Searches for ⟨regexp⟩ and replaces it
                                            by ⟨replacement⟩
             : q                            Quits vi
             : q!                           Quits vi even if the buffer contents is
                                            unsaved
             : x   or   : wq                Saves the buffer contents and quits vi

     5.3      Other Editors
     We have already alluded to the fact that your choice of editor is just as much down
     to your personal preferences and probably says as much about you as a user as
     your choice of car: Do you drive a polished BMW or are you happy with a dented
     Astra? Or would you rather prefer a Land Rover? As far as choice is concerned,
     the editor market offers no less than the vehicle market. We have presented the
     possibly most important Linux editor, but of course there are many others. kate
     on KDE and gedit on GNOME, for example, are straightforward and easy-to-learn
     editors with a graphical user interface that are perfectly adequate for the require-
     ments of a normal user. GNU Emacs, on the other hand, is an extremely powerful
     and customisable editor for programmers and authors, and its extensive “ecosystem” of
     extensions leaves few desires uncatered for. Do browse through the package lists
     of your distribution and check whether you will find the editor of your dreams there.

     Commands in this Chapter
     ed        Primitive (but useful) line-oriented text editor               ed (1) 55
     elvis     Popular “clone” of the vi editor                            elvis (1) 54
     ex        Powerful line-oriented text editor (really vi )                vi (1) 54
     sed       Stream-oriented editor, copies its input to its output making changes in
               the process                                                   sed (1) 55
     vi        Screen-oriented text editor                                    vi (1) 54
     vim       Popular “clone” of the vi editor                              vim (1) 54

   • Text editors are important for changing configuration files and program-
     ming. They often offer special features to make these tasks easier.
   • vi is a traditional, very widespread and powerful text editor with an id-
     iosyncratic user interface.

Files: Care and Feeding

6.1  File and Path Names
     6.1.1  File Names
     6.1.2  Directories
     6.1.3  Absolute and Relative Path Names
6.2  Directory Commands
     6.2.1  The Current Directory: cd & Co.
     6.2.2  Listing Files and Directories—ls
     6.2.3  Creating and Deleting Directories: mkdir and rmdir
6.3  File Search Patterns
     6.3.1  Simple Search Patterns
     6.3.2  Character Classes
     6.3.3  Braces
6.4  Handling Files
     6.4.1  Copying, Moving and Deleting—cp and Friends
     6.4.2  Linking Files—ln and ln -s
     6.4.3  Displaying File Content—more and less
     6.4.4  Searching Files—find
     6.4.5  Finding Files Quickly—locate and slocate

Goals
      • Being familiar with Linux conventions concerning file and directory names
      • Knowing the most important commands to work with files and directories
      • Being able to use shell filename search patterns

Prerequisites
      • Using a shell (qv. Chapter 2)
      • Use of a text editor (qv. Chapter 5)


                           6.1      File and Path Names
                           6.1.1     File Names
                          One of the most important services of an operating system like Linux consists
                          of storing data on permanent storage media like hard disks or USB keys and re-
                          trieving them later. To make this bearable for humans, similar data are usually
                           collected into “files” that are stored on the medium under a name.

                           B Even if this seems trivial to you, it is by no means a given. In former times,
                             some operating systems made it necessary to know abominations like track
                             numbers on a disk in order to retrieve one’s data.

                           Thus, before we can explain to you how to handle files, we need to explain to
                        you how Linux names files.
                            In Linux file names, you are essentially allowed to use any character that your
                        computer can display (and then some). However, since some of the characters
                        have a special meaning, we would recommend against their use in file names.
                        Only two characters are completely disallowed, the slash and the zero byte (the
                        character with ASCII value 0). Other characters like spaces, umlauts, or dollar
                        signs may be used freely, but must usually be escaped on the command line by
                        means of a backslash or quotes in order to avoid misinterpretations by the shell.
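A brief sketch of such escaping, using a scratch directory and invented file names:

```shell
cd "$(mktemp -d)"            # scratch directory so no real files are touched
touch 'Beatles Songs'        # quotes keep the shell from splitting the name
touch Beatles\ Again         # a backslash escapes the single space
ls                           # both names show up, spaces intact
```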

                            A An easy trap for beginners to fall into is the fact that Linux distinguishes
                              uppercase and lowercase letters in file names. Unlike Windows, where
                              uppercase and lowercase letters in file names are displayed but treated the
                              same, Linux considers x-files and X-Files two different file names.
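You can see this distinction with two invented names in a scratch directory (on a case-sensitive Linux file system):

```shell
cd "$(mktemp -d)"
touch x-files X-Files        # two different files as far as Linux is concerned
ls | wc -l                   # counts 2 entries
```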

                            Under Linux, file names may be “quite long”—there is no definite upper
                         bound, since the maximum depends on the “file system”, which is to say the
                         specific way bytes are arranged on the medium (there are several methods on
                         Linux). A typical upper limit is 255 characters—but since such a name would
                         take somewhat more than three lines on a standard text terminal this shouldn’t
                         really cramp your style.
                             A further difference from DOS and Windows computers is that Linux does not
                          use suffixes to characterise a file’s “type”. Hence, the dot is a completely ordinary
                          character within a file name. You are free to store a text as mumble.txt , but
                          mumble would be just as acceptable in principle. This should of course not turn you
                          off using suffixes completely; you do after all make it easier to identify the file
                          later.
                           B Some programs insist on their input files having specific suffixes. The C
                             compiler, gcc , for example, considers files with names ending in “.c ” C
                             source code, those ending in “.s ” assembly language source code, and
                             those ending in “.o ” precompiled object files.

                               You may freely use umlauts and other special characters in file names. However, if files are to be used on other systems it is best to stay away from special
                           characters in file names, as it is not guaranteed that they will show up as the same
                           characters elsewhere.

                          A What happens to special characters also depends on your locale settings,
                                   since there is no general standard for representing characters exceeding the
                                   ASCII character set (128 characters covering mostly the English language,
                                   digits and the most common special characters). Widely used encodings
                                    are, for example, ISO 8859-1 and ISO 8859-15 (popularly known as ISO-Latin-
                                    1 and ISO-Latin-9, respectively … don’t ask) as well as ISO 10646, casually
                                    and not quite correctly called “Unicode” and usually encoded as “UTF-8”.
                                   File names you created while encoding 𝑋 was active may look completely
                                   different when you look at the directory while encoding 𝑌 is in force. The
                                   whole topic is nothing you want to think about during meals.

A Should you ever find yourself facing a pile of files whose names are encoded
   according to the wrong character set, the convmv program, which can convert
   file names between various character encodings, may be able to help
  you. (You will probably have to install it yourself since it is not part of
  the standard installation of most distributions.) However, you should re-
  ally get down to this only after working through the rest of this chapter, as
  we haven’t even explained the regular mv yet …

   All characters from the following set may be used freely in file names:


However, you should pay attention to the following hints:
   • To allow moving files between Linux and older Unix systems, the length of
     a file name should be at most 14 characters. (Make that “ancient”, really.)
   • File names should always start with one of the letters or a digit; the other
     four characters can be used without problems only inside a file name.

These conventions are easiest to understand by looking at some examples. Allow-
able file names would be, for instance:


In contrast, problems would be possible (if not likely or even assured) with:

-10°F                                       Starts with “- ”, includes special character
.profile                                                                   Will be hidden
3/4-metre                                                       Contains illegal character
Smörrebröd                                                             Contains umlauts

   As another peculiarity, file names starting with a dot (“. ”) will be skipped in
some places, for example when the files within a directory are listed—files with
such names are considered “hidden”. This feature is often used for files contain-
ing settings for programs and which should not distract users from more impor-
tant files in directory listings.

B For DOS and Windows experts: These systems allow “hiding” files by
  means of a “file attribute” which can be set independently of the file’s
  name. Linux and Unix do not support such a thing.
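A quick sketch of hidden files, using invented names in a scratch directory:

```shell
cd "$(mktemp -d)"
touch .settings visible      # one hidden file, one ordinary file
ls                           # lists only "visible"
ls -a                        # also shows ".", "..", and ".settings"
```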

6.1.2    Directories
Since potentially many users may work on the same Linux system, it would be
problematic if each file name could occur just once. It would be difficult to make
clear to user Joe that he cannot create a file called letter.txt since user Sue already
has a file by that name. In addition, there must be a (convenient) way of ensuring
that Joe cannot read all of Sue’s files and the other way round.
   For this reason, Linux supports the idea of hierarchical “directories” which are
used to group files. File names do not need to be unique within the whole system,
but only within the same directory. This means in particular that the system can
assign different directories to Joe and Sue, and that within those they may call
their files whatever they please without having to worry about each other’s files.

                         In addition, we can forbid Joe from accessing Sue’s directory (and vice versa) and
                         no longer need to worry about the individual files within them.
                            On Linux, directories are simply files, even though you cannot access them
                         using the same methods you would use for “plain” files. However, this implies
                         that the rules we discussed for file names (see the previous section) also apply to
                          the names of directories. You merely need to learn that the slash (“/ ”) serves to
                         separate file names from directory names and directory names from one another.
                         joe/letter.txt would be the file letter.txt in the directory joe .
                            Directories may contain other directories (this is the term “hierarchical” we
                          mentioned earlier), which results in a tree-like structure (inventively called a
                          “directory tree”). A Linux system has a special directory which forms the root of the
                         tree and is therefore called the “root directory”. Its name is “/ ” (slash).

                           B In spite of its name, the root directory has nothing to do with the system
                             administrator, root . It’s just that their names are similar.

                           B The slash does double duty here—it serves both as the name of the root
                             directory and as the separator between other directory names. We’ll come
                             back to this presently.

                              The basic installation of common Linux distributions usually contains tens of
                           thousands of files in a directory hierarchy that is mostly structured according to
                           certain conventions. We shall tell you more about this directory hierarchy in Chap-
                           ter 9.

                           6.1.3    Absolute and Relative Path Names
                         Every file in a Linux system is described by a name which is constructed by start-
                         ing at the root directory and mentioning every directory down along the path to
                         the one containing the file, followed by the name of the file itself. For example,
                         /home/joe/letter.txt names the file letter.txt , which is located within the joe direc-
                         tory, which in turn is located within the home directory, which in turn is a direct
                         descendant of the root directory. A name that starts with the root directory is
                          called an “absolute path name”—we talk about “path names” since the name
                          describes a “path” through the directory tree, which may contain directory and file
                         names (i. e., it is a collective term).
                             Each process within a Linux system has a “current directory” (often also called
                         “working directory”). File names are searched within this directory; letter.txt
                         is thus a convenient abbreviation for “the file called letter.txt in the current di-
                         rectory”, and sue/letter.txt stands for “the file letter.txt within the sue directory
                         within the current directory”. Such names, which start from the current directory,
                          are called “relative path names”.

                           B It is trivial to tell absolute from relative path names: A path name starting
                             with a “/ ” is absolute; all others are relative.

                           B The current directory is “inherited” between parent and child processes. So
                             if you start a new shell (or any program) from a shell, that new shell uses
                             the same current directory as the shell you used to start it. In your new
                             shell, you can change into another directory using the cd command, but the
                             current directory of the old shell does not change—if you leave the new
                             shell, you are back to the (unchanged) current directory of the old shell.
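This inheritance, and its one-way nature, is easy to demonstrate with a subshell, which is a child process of the current shell:

```shell
cd /tmp
( cd /; pwd )                # the child changes to / and prints "/"
pwd                          # the parent still prints "/tmp"
```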

                               There are two convenient shortcuts in relative path names (and even absolute
                           ones): The name “.. ” always refers to the directory above the directory in question
                           in the directory tree—for example, in the case of /home/joe , /home . This frequently
                           allows you to refer conveniently to files in a “side branch” of the directory tree
                           as viewed from the current directory, without having to resort to absolute path
                           names. Assume /home/joe has the subdirectories letters and novels . With letters
                           as the current directory, you could refer to the ivanhoe.txt file within the novels

directory by means of the relative path name ../novels/ivanhoe.txt , without having
to use the unwieldy absolute path name /home/joe/novels/ivanhoe.txt .
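Rebuilding the example in a scratch directory shows the “side branch” reference at work (the mktemp directory stands in for /home/joe ):

```shell
base=$(mktemp -d)                       # stands in for /home/joe
mkdir -p "$base/letters" "$base/novels"
touch "$base/novels/ivanhoe.txt"
cd "$base/letters"
ls ../novels                            # finds ivanhoe.txt without an absolute path
```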
   The second shortcut does not make quite as obvious sense: the “. ” name within
a directory always stands for the directory itself. It is not immediately clear why
one would need a method to refer to a directory which one has already reached,
but there are situations where this comes in quite handy. For example, you may
know (or could look up in Chapter 8) that the shell searches program files for
external commands in the directories listed in the environment variable PATH . If
you, as a software developer, want to invoke a program, let’s call it prog , which (a)
resides in a file within the current directory, and (b) this directory is not listed in
PATH (always a good idea for security reasons), you can still get the shell to start
your file as a program by saying

$ ./prog

without having to enter an absolute path name.
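A minimal sketch, with a throwaway script standing in for prog :

```shell
cd "$(mktemp -d)"
printf '#!/bin/sh\necho it works\n' > prog   # stand-in for your program
chmod +x prog                                # make it executable
./prog                                       # runs although "." is not in PATH
# prints: it works
```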

B As a Linux user you have a “home directory” which you enter immediately
  after logging in to the system. The system administrator determines that
  directory’s name when they create your user account, but it is usually called
  the same as your user name and located below /home —something like /home/
  joe for the user joe .

6.2     Directory Commands
6.2.1      The Current Directory: cd & Co.
You can use the cd shell command to change the current directory: Simply give
the desired directory as a parameter:

$ cd letters                                            Change to the letters directory
$ cd ..                                                  Change to the directory above

   If you do not give a parameter you will end up in your home directory:

$ cd
$ pwd
/home/joe

You can output the absolute path name of the current directory using the pwd
(“print working directory”) command.
   Possibly you can also see the current directory as part of your prompt: Depending
on your system settings there might be something like

joe@red:~/letters> _

where ~/letters is short for /home/joe/letters ; the tilde (“~ ”) stands for the current
user’s home directory.

B The “cd - ” command changes to the directory that used to be current before
  the most recent cd command. This makes it convenient to alternate between
  two directories.
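For example (any two directories will do):

```shell
cd /tmp
cd /
cd -                         # back to /tmp; cd - also prints where it went
pwd                          # confirms /tmp
```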

C 6.1 [2] In the shell, is cd an internal or an external command? Why?

C 6.2 [3] Read about the pushd , popd , and dirs commands in the bash man page.
  Convince yourself that these commands work as described there.

                                     Table 6.1: Some file type designations in ls

                           File type          Colour    Suffix (ls -F )   Type letter (ls -l )
                           plain file          black         none                  -
                           executable file     green           *                   -
                           directory            blue           /                   d
                           link                cyan            @                   l

                                              Table 6.2: Some ls options

     Option                       Result
     -a   or --all                Displays hidden files as well
     -i   or --inode              Displays the unique file number (inode number)
     -l   or --format=long        Displays extra information
     -o   or --no-color           Omits colour-coding the output
     -p   or -F                   Marks file type by adding a special character
     -r   or --reverse            Reverses sort order
     -R   or --recursive          Recurses into subdirectories (DOS: DIR/S )
     -S   or --sort=size          Sorts files by size (largest first)
     -t   or --sort=time          Sorts files by modification time (newest first)
     -X   or --sort=extension     Sorts files by extension (“file type”)

                                  6.2.2    Listing Files and Directories—ls
                                To find one’s way around the directory tree, it is important to be able to find out
                                which files and directories are located within a directory. The ls (“list”) command
                                does this.
                                    Without options, this information is output as a multi-column table sorted by
                                file name. With colour screens being the norm rather than the exception today, it
                                has become customary to display the names of files of different types in various
                                colours. (We have not talked about file types yet; this topic will be mentioned in
                                Chapter 9.)

                                  B Thankfully, by now most distributions have agreed about the colours to use.
                                    Table 6.1 shows the most common assignment.

                                  B On monochrome monitors—which can still be found—the options -F or -p
                                    recommend themselves. These will cause special characters to be appended
                                    to the file names according to the file’s type. A subset of these characters is
                                    given in Table 6.1.

                                     You can display hidden files (whose names begin with a dot) by giving the -a
                                  (“all”) option. Another very useful option is -l (a lowercase “L”, for “long”, rather
                                  than the digit “1”). This displays not only the file names, but also some additional
                                  information about each file.

                                  B Some Linux distributions pre-set abbreviations for some combinations of
                                    helpful options; the SUSE distributions, for example, use a simple l as an
                                    abbreviation of “ls -alF ”. “ll ” and “la ” are also abbreviations for ls variants.

                                    Here is an example of ls without and with -l :

                                  $ ls
                                  file.txt   file2.dat
                                  $ ls -l

-rw-r--r--    1   joe users   4711   Oct 4 11:11 file.txt
-rw-r--r--    1   joe users    333   Oct 2 13:21 file2.dat

In the first case, all visible (non-hidden) files in the directory are listed; the second
case adds the extra information.
    The different parts of the long format have the following meanings: The first Long format
character gives the file type (see Chapter 9); plain files have “- ”, directories “d ”
and so on (“type character” in Table 6.1).
    The next nine characters show the access permissions. Next come a reference
counter, the owner of the file (joe here), and the file’s group (users ). After the
size of the file in bytes, you can see the date and time of the last modification of the
file’s content. On the very right there is the file’s name.

A Depending on the language you are using, the date and time columns in par-
  ticular may look completely different than the ones in our example (which
  we generated using the minimal language environment “C ”). This is usu-
  ally not a problem in interactive use, but may prove a major nuisance if you
  try to take the output of “ls -l ” apart in a shell script. (Without wanting to
  anticipate the training manual Advanced Linux, we recommend setting the
  language environment to a defined value in shell scripts.)

B If you want to see the extra information for a directory (such as /tmp ), “ls -l
  /tmp ” doesn’t really help, because ls will list the data for all the files within
  /tmp . Use the -d option to suppress this and obtain the information about
  /tmp itself.
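The difference is easy to check; with -d , ls prints exactly one line describing the directory itself:

```shell
ls -ld /tmp                  # a single line; the leading "d" marks a directory
```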

  ls supports many more options than the ones mentioned here; a few of the
more important ones are shown in Table 6.2.

        In the LPI exams, Linux Essentials and LPI-101, nobody expects you to know
        all 57 varieties of ls options by heart. However, you may wish to commit the
        most important half dozen or so—the content of Table 6.2, approximately—to
        memory.

C 6.3 [1] Which files does the /boot directory contain? Does the directory have
  subdirectories and, if so, which ones?

C 6.4 [2] Explain the difference between ls with a file name argument and ls
  with a directory name argument.

C 6.5 [2] How do you tell ls to display information about a directory rather
  than the files in that directory, if a directory name is passed to the program?
  (Hint: Documentation.)

6.2.3     Creating and Deleting Directories: mkdir and rmdir
To keep your own files in good order, it makes sense to create new directories. You
can keep files in these “folders” according to their subject matter (for example).
Of course, for further structuring, you can create further directories within such
directories—your ambition will not be curbed by arbitrary limits.
    To create new directories, the mkdir command is available. It requires one or
more directory names as arguments, otherwise you will only obtain an error mes-
sage instead of a new directory. To create nested directories in a single step, you
can use the -p option, otherwise the command assumes that all directories in a
path name except the last one already exist. For example:

                            $ mkdir pictures/holiday
                            mkdir: cannot create directory `pictures/holiday': No such file 
                              or directory
                            $ mkdir -p pictures/holiday
                            $ cd pictures
                             $ ls -F
                             holiday/

                                 Sometimes a directory is no longer required. To reduce clutter, you can remove
                            it using the rmdir (“remove directory”) command.
                                As with mkdir , at least one path name of a directory to be deleted must be given.
                            In addition, the directories in question must be empty, i. e., they may not contain
                            entries for files, subdirectories, etc. Again, only the last directory in every name
                            will be removed:
                            $ rmdir pictures/holiday
                            $ ls -F

                            With the -p option, all empty subdirectories mentioned in a name can be removed
                            in one step, beginning with the one on the very right.

                            $ mkdir -p pictures/holiday/summer
                            $ rmdir pictures/holiday/summer
                             $ ls -F pictures
                             holiday/
                            $ rmdir -p pictures/holiday
                            $ ls -F pictures
                            ls: pictures: No such file or directory

C 6.6 [!2] In your home directory, create a directory grd1-test with subdirecto-
  ries dir1 , dir2 , and dir3 . Change into directory grd1-test/dir1 and create (e. g.,
  using a text editor) a file called hello containing “hello”. In grd1-test/dir2 ,
  create a file howdy containing “howdy”. Check that these files do exist. Delete
  the subdirectory dir3 using rmdir . Next, attempt to remove the subdirectory
  dir2 using rmdir . What happens, and why?

                            6.3     File Search Patterns
                            6.3.1    Simple Search Patterns
You will often want to apply a command to several files at the same time. For
example, if you want to copy all files whose names start with “p ” and end with
“.c ” from the prog1 directory to the prog2 directory, it would be quite tedious to
have to name every single file explicitly—at least if you need to deal with more
than a couple of files! It is much more convenient to use the shell’s search patterns.
   If you specify a parameter containing an asterisk on the shell command line—
like

p*.c

—the shell replaces this parameter in the actual program invocation by a sorted list
of all file names that “match” the parameter. “Match” means that in the actual file
name there may be an arbitrary-length sequence of arbitrary characters in place
of the asterisk. For example, names like

prog.c p1.c p-wibble.c p.c

are eligible (note in particular the last name in the example—“arbitrary length”
does include “length zero”!). The only character the asterisk will not match is—
can you guess it?—the slash; it is usually better to restrict a search pattern like the
asterisk to the current directory.

B You can test these search patterns conveniently using echo . The
       $ echo prog1/p*.c

       command will output the matching file names without any obligation or
       consequence of any kind.

B If you really want to apply a command to all files in the directory tree starting
  with a particular directory, there are ways to do that, too. We will discuss
  this in Section 6.4.4.

   The search pattern “* ” describes “all files in the current directory”—excepting
hidden files whose name starts with a dot. To avert possibly inconvenient sur-
prises, search patterns diligently ignore hidden files unless you explicitly ask for
them to be included by means of something like “.* ”.
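A quick way to see this difference is a throwaway directory (a sketch; the file names here are made up):

```shell
# "*" skips hidden files; ".*" asks for them explicitly.
dir=$(mktemp -d)        # scratch directory, so nothing of value is at stake
cd "$dir"
touch visible .hidden
echo *                  # expands to: visible
echo .*                 # includes .hidden (and, in most shells, "." and "..")
```

Note that “.* ” usually also matches the “. ” and “.. ” entries, which is occasionally a nuisance of its own.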

A You may have encountered the asterisk at the command line of operating
  systems like DOS or Windows1 and may be used to specifying the “*.* ”
  pattern to refer to all files in a directory. On Linux, this is not correct—the
  “*.* ” pattern matches “all files whose name contains a dot”, but the dot isn’t
  mandatory. The Linux equivalent, as we said, is “* ”.

   A question mark as a search pattern stands for exactly one arbitrary character
(again excluding the slash). A pattern like

prog?.c

thus matches the names

prog1.c proga.c progX.c

(among others). Note that there must be one character—the “nothing” option
does not exist here.
 You should take particular care to remember a very important fact: The expan-
 sion of search patterns is the responsibility of the shell! The commands that you
 execute usually know nothing about search patterns and don’t care about them,
 either. All they get to see are lists of path names, but not where they come
 from—i. e., whether they have been typed in directly or resulted from the ex-
 pansion of search patterns.

  1 You’re   probably too young for CP/M.

              B Incidentally, nobody says that the results of search patterns always need to
                be interpreted as path names. For example, if a directory contains a file
                 called “-l ”, a “ls * ” in that directory will yield an interesting and perhaps
                surprising result (see Exercise 6.9).

              B What happens if the shell cannot find a file whose name matches the search
                pattern? In this case the command in question is passed the search pattern
                as such; what it makes of that is its own affair. Typically such search patterns
                are interpreted as file names, but the “file” in question is not found and an
                error message is issued. However, there are commands that can do useful
                things with search patterns that you pass them—with them, the challenge
                is really to ensure that the shell invoking the command does not try to cut
                in with its own expansion. (Cue: quotes)
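The find command (discussed in Section 6.4.4) is one such case: it wants to see the pattern itself, so the pattern must be quoted to stop the shell’s own expansion. A minimal sketch, with made-up file names:

```shell
# find should receive the pattern *.c literally, so that it can match
# files in subdirectories too; the quotes keep the shell from expanding it.
dir=$(mktemp -d)
mkdir "$dir/sub"
touch "$dir/main.c" "$dir/sub/util.c"
find "$dir" -name '*.c'     # finds both main.c and sub/util.c
```

Without the quotes, the shell would try to expand *.c against the current directory before find ever runs.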

              6.3.2      Character Classes
A somewhat more precise specification of the matching characters in a search pat-
tern is offered by “character classes”: In a search pattern of the form

prog[123].c

the square brackets match exactly those characters that are enumerated within
them (no others). The pattern in the example therefore matches

prog1.c prog2.c prog3.c

but not
prog.c                                           There needs to be exactly one character
prog4.c                                                           4 was not enumerated
proga.c                                                                        a neither
prog12.c                                                  Exactly one character, please

   As a more convenient notation, you may specify ranges as in

prog[0-9].c
[A-Z]*.txt

The square brackets in the first line match all digits, the ones in the second all
uppercase letters.

A Note that in the common character encodings the letters are not contiguous:
  A pattern like

prog[A-z].c

       not only matches progQ.c and progx.c , but also prog_.c . (Check an ASCII table,
       e. g. using “man ascii ”.) If you want to match “uppercase and lowercase
       letters only”, you need to use

prog[A-Za-z].c


A A construct like

prog[A-Za-z].c

       does not catch umlauts, even if they look suspiciously like letters.

   As a further convenience, you can specify negated character classes, which are
interpreted as “all characters except these”: Something like

prog[!A-Za-z].c

matches all names where the character between “g ” and “. ” is not a letter. As
usual, the slash is excepted.

6.3.3     Braces
The expansion of braces in expressions like

{red,yellow,blue}.txt

is often mentioned in conjunction with shell search patterns, even though it is
really just a distant relative. The shell replaces this by

red.txt yellow.txt blue.txt

In general, a word on the command line that contains several comma-separated
pieces of text within braces is replaced by as many words as there are pieces of
text between the braces, where in each of these words the whole brace expression
is replaced by one of the pieces. This replacement is purely based on the command
line text and is completely independent of the existence or non-existence of any files or
directories—unlike search patterns, which always produce only those names that
actually exist as path names on the system.
    You can have more than one brace expression in a word, which will result in
the cartesian product, in other words all possible combinations:

$ echo {a,b,c}{1,2,3}.dat
a1.dat a2.dat a3.dat b1.dat b2.dat b3.dat c1.dat c2.dat c3.dat

   This is useful, for example, to create new directories systematically; the usual
search patterns cannot help there, since they can only find things that already
exist:

$ mkdir -p revenue/200{8,9}/q{1,2,3,4}

C 6.7 [!1] The current directory contains the files
        prog.c    prog1.c   prog2.c   progabc.c   prog
        p.txt     p1.txt    p21.txt   p22.txt     p22.dat

        Which of these names match the search patterns (a) prog*.c , (b) prog?.c , (c)
        p?*.txt , (d) p[12]* , (e) p* , (f) *.* ?

C 6.8 [!2] What is the difference between “ls ” and “ls * ”? (Hint: Try both in a
  directory containing subdirectories.)

C 6.9 [2] Explain why the following command leads to the output shown:

                                               Table 6.3: Options for cp

     Option                 Result
     -b   (backup)          Makes backup copies of existing target files by appending a tilde to their names
     -f   (force)           Overwrites existing target files without prompting
     -i   (interactive)     Asks (once per file) whether existing target files should be overwritten
     -p   (preserve)        Tries to preserve all attributes of the source file for the copy
     -R   (recursive)       Copies directories with all their content
     -u   (update)          Copies only if the source file is newer than the target file (or the target file doesn’t exist)
     -v   (verbose)         Displays all activity on screen

                                         $ ls
                                         -l file1     file2   file3
                                         $ ls *
                                         -rw-r--r--   1 joe users 0 Dec 19 11:24 file1
                                         -rw-r--r--   1 joe users 0 Dec 19 11:24 file2
                                         -rw-r--r--   1 joe users 0 Dec 19 11:24 file3

                                 C 6.10 [2] Why does it make sense for “* ” not to match file names starting with
                                   a dot?

                                 6.4       Handling Files
                                 6.4.1     Copying, Moving and Deleting—cp and Friends
You can copy arbitrary files using the cp (“copy”) command. There are two basic
approaches.
   If you tell cp the source and target file names (two arguments), then a 1 ∶ 1 copy
of the content of the source file will be placed in the target file. Normally cp does
not ask whether it should overwrite the target file if it already exists, but just does
it—caution (or the -i option) is called for here.
                                     You can also give a target directory name instead of a target file name. The
                                 source file will then be copied to that directory, keeping its old name.

                                 $ cp list list2
                                 $ cp /etc/passwd .
                                 $ ls -l
                                 -rw-r--r-- 1 joe       users    2500   Oct 4 11:11 list
                                 -rw-r--r-- 1 joe       users    2500   Oct 4 11:25 list2
                                 -rw-r--r-- 1 joe       users    8765   Oct 4 11:26 passwd

                                 In this example, we first created an exact copy of file list under the name list2 .
                                 After that, we copied the /etc/passwd file to the current directory (represented by
                                 the dot as a target directory name). The most important cp options are listed in
                                 Table 6.3.
   Instead of a single source file, a longer list of source files (or a shell wildcard
pattern) is allowed. However, this way it is not possible to copy a file to a different
name, but only to a different directory. While in DOS it is possible to use “COPY
*.TXT *.BAK ” to make a backup copy of every TXT file to a file with the same name
and a BAK suffix, the Linux command “cp *.txt *.bak ” usually fails with an error
message.

B To understand this, you have to visualise how the shell executes this com-
  mand. It tries first to replace all wildcard patterns with the corresponding
  file names, for example *.txt by letter1.txt and letter2.txt . What happens
  to *.bak depends on the expansion of *.txt and on whether there are match-
  ing file names for *.bak in the current directory—but the outcome will al-
  most never be what a DOS user would expect! Usually the shell will pass
  the cp command the unexpanded *.bak wildcard pattern as the final argu-
  ment; this fails from the point of view of cp , since *.bak is unlikely to be the
  name of an existing directory.
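If you do want the effect of the DOS command, a small shell loop that builds each target name individually does the trick (a sketch; the file names are made up):

```shell
# Copy every .txt file to a .bak file with the same base name.
dir=$(mktemp -d)
cd "$dir"
touch letter1.txt letter2.txt
for f in *.txt; do
    cp "$f" "${f%.txt}.bak"    # ${f%.txt} strips the .txt suffix
done
ls    # the directory now also contains letter1.bak and letter2.bak
```

Here the loop, not cp , is what pairs each source name with its target name.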

    While the cp command makes an exact copy of a file, physically duplicating the
file on the storage medium or creating a new, identical copy on a different storage
medium, the mv (“move”) command serves to move a file to a different place or
change its name. This is strictly an operation on directory contents, unless the file
is moved to a different file system—for example from a hard disk partition to a
USB key. In this case it is necessary to move the file around physically, by copying
it to the new place and removing it from the old.
    The syntax and rules of mv are identical to those of cp —you can again specify
a list of source files instead of merely one, and in this case the command expects
a directory name as the final argument. The main difference is that mv lets you
rename directories as well as files.
    The -b , -f , -i , -u , and -v options of mv correspond to the eponymous ones de-
scribed with cp .

$ mv passwd list2
$ ls -l
-rw-r--r-- 1 joe     users   2500   Oct 4 11:11 list
-rw-r--r-- 1 joe     users   8765   Oct 4 11:26 list2

In this example, the original file list2 is replaced by the renamed file passwd . Like
cp , mv does not ask for confirmation if the target file name exists, but overwrites
the file mercilessly.
     The command to delete files is called rm (“remove”). To delete a file, you must
have write permission in the corresponding directory. Therefore you are “lord of
the manor” in your own home directory, where you can remove even files that do
not properly belong to you.

A Write permission on a file, on the other hand, is completely irrelevant as far
  as deleting that file is concerned, as is the question to which user or group
  the file belongs.

   rm goes about its work just as ruthlessly as cp or mv —the files in question are
obliterated from the file system without confirmation. You should be especially
careful, in particular when shell wildcard patterns are used. Unlike in DOS, the
dot in a Linux file name is a character without special significance. For this rea-
son, the “rm * ” command deletes all non-hidden files from the current directory.
Subdirectories will remain unscathed; with “rm -r * ” they can also be removed.
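A safe way to convince yourself of this behaviour is, again, a throwaway directory (names made up):

```shell
# "rm *" removes plain files only; directories and hidden files survive.
dir=$(mktemp -d)
cd "$dir"
mkdir sub
touch file sub/inner .hidden
rm * 2>/dev/null || true    # complains about "sub", but removes "file"
ls -A                       # sub and .hidden are still there
rm -r *                     # now "sub" (and its content) goes away, too
```

The hidden file survives even “rm -r * ”, since “* ” never matches it.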

A As the system administrator, you can trash the whole system with a com-
  mand such as “rm -rf / ”; utmost care is required! It is easy to type “rm -rf
  foo * ” instead of “rm -rf foo* ”.

  Where rm removes all files whose names are passed to it, “rm -i ” proceeds a little
more carefully:

$ rm -i lis*
rm: remove 'list'? n
rm: remove 'list2'? y
$ ls -l
-rw-r--r-- 1 joe users       2500   Oct 4 11:11 list

                             The example illustrates that, for each file, rm asks whether it should be removed
                             (“y ” for “yes”) or not (“n ” for “no”).

                              B Desktop environments such as KDE usually support the notion of a “dust-
                                bin” which receives files deleted from within the file manager, and which
                                makes it possible to retrieve files that have been removed inadvertently.
                                There are similar software packages for the command line.

                             In addition to the -i and -r options, rm allows cp ’s -v and -f options, with similar
                             meanings.
                              C 6.11 [!2] Create, within your home directory, a copy of the file /etc/services
                                called myservices . Rename this file to srv.dat and copy it to the /tmp directory
                                (keeping the new name intact). Remove both copies of the file.

                              C 6.12 [1] Why doesn’t mv have an -R option (like cp has)?

                               C 6.13 [!2] Assume that one of your directories contains a file called “-file ”
                                 (with a dash at the start of the name). How would you go about removing
                                 this file?

                              C 6.14 [2] If you have a directory where you do not want to inadvertently fall
                                 victim to a “rm * ”, you can create a file called “-i ” there, as in

                                     $ > -i

                                     (will be explained in more detail in Chapter 7). What happens if you now
                                     execute the “rm * ” command, and why?

                             6.4.2     Linking Files—ln and ln -s
Linux allows you to create references to files, so-called “links”, and thus to assign
several names to the same file. But what purpose does this serve? The applica-
tions range from shortcuts for file and directory names to a “safety net” against
unwanted file deletions, to convenience for programmers, to space savings for
large directory trees that should be available in several versions with only small
differences.
   The ln (“link”) command assigns a new name (second argument) to a file in
addition to its existing one (first argument):

                              $ ln list list2
                              $ ls -l
                              -rw-r--r-- 2 joe       users   2500   Oct 4 11:11 list
                              -rw-r--r-- 2 joe       users   2500   Oct 4 11:11 list2

The directory now appears to contain a new file called list2 . Actually, there are
just two references to the same file. This is hinted at by the reference counter in
the second column of the “ls -l ” output. Its value is 2, denoting that the file really
has two names. Whether the two file names really refer to the same file can only be
decided using the “ls -i ” command. If this is the case, the file number in the first
column must be identical for both files. File numbers, also called inode numbers,
identify files uniquely within their file system:

                              $ ls -i
                              876543 list     876543 list2

B “Inode” is short for “indirection node”. Inodes store all the information that
  the system has about a file, except for the name. There is exactly one inode
  per file.

If you change the content of one of the files, the other’s content changes as well,
since in fact there is only one file (with the unique inode number 876543). We only
gave that file another name.

B Directories are simply tables mapping file names to inode numbers. Obvi-
  ously there can be several entries in a table that contain different names but
  the same inode number. A directory entry with a name and inode number
  is called a “link”.

   You should realise that, for a file with two links, it is quite impossible to find
out which name is “the original”, i. e., the first parameter within the ln command.
From the system’s point of view both names are completely equivalent and indis-
tinguishable.
A Incidentally, links to directories are not allowed on Linux. The only excep-
  tions are “. ” and “.. ”, which the system maintains for each directory. Since
  directories are also files and have their own inode numbers, you can keep
  track of how the file system fits together internally. (See also Exercise 6.19).

   Deleting one of the two files decrements the number of names for file no.
876543 (the reference counter is adjusted accordingly). Not until the reference
counter reaches the value of 0 will the file’s content actually be removed.

$ rm list
$ ls -li
876543 -rw-r--r--    1   joe   users   2500   Oct 4 11:11 list2

B Since inode numbers are only unique within the same physical file system
  (disk partition, USB key, …), such links are only possible within the same
  file system where the file resides.

B The explanation about deleting a file’s content was not exactly correct: If the
  last file name is removed, a file can no longer be opened, but if a process is
  still using the file it can go on to do so until it explicitly closes the file or ter-
  minates. In Unix software this is a common idiom for handling temporary
  files that are supposed to disappear when the program exits: You create
  them for reading and writing and “delete” them immediately afterwards
  without closing them within your program. You can then write data to the
  file and later jump back to the beginning to reread them.
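  The same idiom can even be sketched in the shell—at least on Linux, where an open but deleted file remains reachable through /proc (a sketch, not something you would do in production scripts):

```shell
# Create a temporary file, open it on descriptor 3, then delete its name.
tmp=$(mktemp)
exec 3<>"$tmp"              # open for reading and writing
rm "$tmp"                   # the name is gone immediately ...
echo "scratch data" >&3     # ... but we can still write through the descriptor
cat /proc/self/fd/3         # ... and read the data back (Linux-specific)
exec 3>&-                   # closing the descriptor finally frees the storage
```

As long as descriptor 3 stays open, the file’s content persists even though no directory entry refers to it any more.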

B You can invoke ln not just with two file name arguments but also with one
  or with many. In the first case, a link with the same name as the original
  will be created in the current directory (which should really be different
  from the one where the file is located), in the second case all named files
  will be “linked” under their original names into the directory given as the
  last argument (think mv ).

    You can use the “cp -l ” command to create a “link farm”. This means that
instead of copying the files to the destination (as would otherwise be usual), links
to the originals will be created:

$ mkdir prog-1.0.1                                                      New directory
$ cp -l prog-1.0/* prog-1.0.1

                        The advantage of this approach is that the files still exist only once on the disk, and
                        thus take up space only once. With today’s prices for disk storage this may not be
                        compellingly necessary—but a common application of this idea, for example, con-
                        sists of making periodic backup copies of large file hierarchies which should ap-
                        pear on the backup medium (disk or remote computer) as separate, date-stamped
                        file hierarchies. Experience teaches that most files only change very rarely, and
                        if these files then need to be stored just once instead of over and over again, this
                        tends to add up over time. In addition, the files do not need to be written to the
                        backup medium time and again, and that can save considerable time.

                          B Backup packages that adopt this idea include, for example, Rsnapshot or
                            Dirvish.

                        A This approach should be taken with a certain amount of caution. Using
                          links may let you “deduplicate” identical files, but not identical directo-
                          ries. This means that for every date-stamped file hierarchy on the backup
                          medium, all directories must be created anew, even if the directories only
                          contain links to existing files. This can lead to very complicated directory
                          structures and, in the extreme case, to consistency checks on the backup
                          medium failing because the computer does not have enough virtual mem-
                          ory to check the directory hierarchy.

                        A You will also need to watch out if – as alluded to in the example – you make
                          a “copy” of a program’s source code as a link farm (which in the case of,
                          e. g., the Linux source code could really pay off): Before you can modify a
                          file in your newly-created version, you will need to ensure that it is really a
                          separate file and not just a link to the original (which you will very probably
                          not want to change). This means that you either need to manually replace
                          the link to the file by an actual copy of the file, or else use an editor which
                          writes modified versions as separate files automatically2 .

    This is not all, however: There are two different kinds of link in Linux systems.
The type explained above is the default case for the ln command and is called a
“hard link”. It always uses a file’s inode number for identification. In addition,
there are symbolic links (also called “soft links” in contrast to “hard links”). Sym-
bolic links are really files containing the name of the link’s “target file”, together
with a flag signifying that the file is a symbolic link and that accesses should be
redirected to the target file. Unlike with hard links, the target file does not “know”
about the symbolic link. Creating or deleting a symbolic link does not impact the
target file in any way; when the target file is removed, however, the symbolic link
“dangles”, i. e., points nowhere (accesses elicit an error message).
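The following sketch (with made-up names) shows a link going stale:

```shell
dir=$(mktemp -d)
cd "$dir"
echo hello >target
ln -s target pointer        # pointer refers to target by name
cat pointer                 # works and prints "hello"
rm target                   # only the name "target" is removed ...
cat pointer                 # ... so this now fails: pointer dangles
```

The dangling link itself still exists (it shows up in “ls -l ” with its target name); only accesses through it fail.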
   In contrast to hard links, symbolic links allow links to directories as well as files
on different physical file systems. In practice, symbolic links are often preferred,
since it is easier to keep track of the linkage by means of the path name.

                         B Symbolic links are popular if file or directory names change but a certain
                           backwards compatibility is desired. For example, it was agreed that user
                           mailboxes (that store unread e-mail) should be stored in the /var/mail di-
                           rectory. Traditionally, this directory was called /var/spool/mail , and many
                           programs hard-code this value internally. To ease a transition to /var/mail ,
                           a distribution can set up a symbolic link under the name of /var/spool/mail
                           which points to /var/mail . (This would be impossible using hard links, since
                           hard links to directories are not allowed.)
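The arrangement can be sketched in a scratch directory standing in for the real file system root (so as not to touch the actual system):

```shell
# A compatibility link: the old path keeps working after the move.
root=$(mktemp -d)
mkdir -p "$root/var/mail" "$root/var/spool"
ln -s ../mail "$root/var/spool/mail"     # old name points at the new location
touch "$root/var/mail/joe"               # a mailbox under the new name ...
ls "$root/var/spool/mail"                # ... is also visible under the old one
```

Programs that still use the old path transparently end up in the new directory.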

                            To create a symbolic link, you must pass the -s option to ln :

                         $ ln -s /var/log short
                         $ ls -l
                             2 If you use Vim (a. k. a. vi ), you can add the “set backupcopy=auto,breakhardlink ” command to the .vimrc
                         file in your home directory.

-rw-r--r--   1   joe   users   2500   Oct 4 11:11 list2
lrwxrwxrwx   1   joe   users     14   Oct 4 11:40 short -> /var/log
$ cd short
$ pwd -P
/var/log
Besides the -s option to create “soft links”, the ln command supports (among oth-
ers) the -b , -f , -i , and -v options discussed earlier on.
   To remove symbolic links that are no longer required, delete them using rm just
like plain files. This operation applies to the link rather than the link’s target.

$ cd
$ rm short
$ ls

   As you have seen above, “ls -l ” will, for symbolic links, also display the file
that the link is pointing to. With the -L and -H options, you can get ls to resolve
symbolic links directly:

$ mkdir dir
$ echo XXXXXXXXXX >dir/file
$ ln -s file dir/symlink
$ ls -l dir
total 4
-rw-r--r-- 1 hugo users 11 Jan 21 12:29 file
lrwxrwxrwx 1 hugo users 5 Jan 21 12:29 symlink -> file
$ ls -lL dir
-rw-r--r-- 1 hugo users 11 Jan 21 12:29 file
-rw-r--r-- 1 hugo users 11 Jan 21 12:29 symlink
$ ls -lH dir
-rw-r--r-- 1 hugo users 11 Jan 21 12:29 file
lrwxrwxrwx 1 hugo users 5 Jan 21 12:29 symlink -> file
$ ls -l dir/symlink
lrwxrwxrwx 1 hugo users 5 Jan 21 12:29 dir/symlink -> file
$ ls -lH dir/symlink
-rw-r--r-- 1 hugo users 11 Jan 21 12:29 dir/symlink

The difference between -L and -H is that the -L option always resolves symbolic links
and displays information about the actual file (the name shown is still always the
one of the link, though). The -H option, as illustrated by the last three commands in
the example, does that only for links that have been directly given on the command
line.
   By analogy to “cp -l ”, the “cp -s ” command creates link farms based on sym-
bolic links. These, however, are not quite as useful as the hard-link-based ones
shown above. “cp -a ” copies directory hierarchies as they are, keeping symbolic
links as they are; “cp -L ” arranges to replace symbolic links by their targets in the
copy, and “cp -P ” precludes that.
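A small sketch of the difference between “cp -L ” and “cp -P ” (file names made up):

```shell
dir=$(mktemp -d)
cd "$dir"
echo data >orig
ln -s orig link
cp -L link copyL    # copyL is a regular file containing the data
cp -P link copyP    # copyP is itself a symbolic link to orig
ls -l copyL copyP
```

After this, copyL has its own copy of the content, while copyP shares the fate of orig just as link does.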

C 6.15 [!2] In your home directory, create a file with arbitrary content (e. g.,
  using “echo Hello >~/hello ” or a text editor). Create a hard link to that file
  called link . Make sure that the file now has two names. Try changing the
  file with a text editor. What happens?

C 6.16 [!2] Create a symbolic link called ~/symlink to the file in the previous ex-
  ercise. Check whether accessing the file via the symbolic link works. What
  happens if you delete the file (name) the symbolic link is pointing to?

                                                          Table 6.4: Keyboard commands for more

                                         Key                  Result
                                          ↩                   Scrolls up a line
                                          Space               Scrolls up a screenful
                                              b               Scrolls back a screenful
                                              h               Displays help
                                              q               Quits more
                                   / ⟨word⟩ ↩                 Searches for ⟨word⟩
                               !   ⟨command⟩ ↩                Executes ⟨command⟩ in a subshell
                                              v               Invokes editor (vi )
                                       Ctrl       +   l       Redraws the screen

                       C 6.17 [!2] What directory does the .. link in the “/ ” directory point to?

                       C 6.18 [3] Consider the following command and its output:
                               $ ls -ai /
                                    2 .                   330211   etc         1   proc   4303 var
                                    2 ..                       2   home    65153   root
                                 4833 bin                 244322   lib    313777   sbin
                               228033 boot                460935   mnt    244321   tmp
                               330625 dev                 460940   opt    390938   usr

                               Obviously, the / and /home directories have the same inode number. Since
                               the two evidently cannot be the same directory, can you explain this phe-
                               nomenon?

                       C 6.19 [3] We mentioned that hard links to directories are not allowed. What
                         could be a reason for this?

                       C 6.20 [3] How can you tell from the output of “ls -l ~ ” that a subdirectory of
                         ~ contains no further subdirectories?

                       C 6.21 [2] How do “ls -lH ” and “ls -lL ” behave if a symbolic link points to a
                         different symbolic link?

                       C 6.22 [3] What is the maximum length of a “chain” of symbolic links? (In
                         other words, if you start with a symbolic link to a file, how often can you
                         create a symbolic link that points to the previous symbolic link?)

                       C 6.23 [4] (Brainteaser/research exercise:) What requires more space on disk,
                         a hard link or a symbolic link? Why?

                       6.4.3       Displaying File Content—more and less
A convenient display of text files on screen is possible using the more command,
which lets you view long documents page by page. The output is stopped after
one screenful, and “--More-- ” appears in the final line (possibly followed by the
percentage of the file already displayed). The output is continued after a key press.
The meanings of various keys are explained in Table 6.4.
    more also understands some options. With -s (“squeeze”), runs of empty lines
are compressed to just one; the -l option ignores page ejects (usually represented
by “^L ”) which would otherwise stop the output. The -n ⟨number⟩ option sets the
number of screen lines to ⟨number⟩; otherwise more takes the number from the
terminal definition pointed to by TERM .
                          more ’s output is still subject to vexing limitations such as the general impossibil-
                      ity of moving back towards the beginning of the output. Therefore, the improved

                                          Table 6.5: Keyboard commands for less

                              Key                       Result
                              ↓ or e or j or ↩          Scrolls up one line
                              f or Space                Scrolls up one screenful
                              ↑ or y or k               Scrolls back one line
                              b                         Scrolls back one screenful
                              Home or g                 Jumps to the beginning of the text
                              End or Shift + g          Jumps to the end of the text
                              ⟨percent⟩ p               Jumps to position ⟨percent⟩ (in %) of the text
                              h                         Displays help
                              q                         Quits less
                              / ⟨word⟩ ↩                Searches for ⟨word⟩ towards the end
                              n                         Continues search towards the end
                              ? ⟨word⟩ ↩                Searches for ⟨word⟩ towards the beginning
                              Shift + n                 Continues search towards the beginning
                              ! ⟨command⟩ ↩             Executes ⟨command⟩ in a subshell
                              v                         Invokes editor (vi )
                              r or Ctrl + l             Redraws the screen

version less (a weak pun—think “less is more”) is more [sic!] commonly seen today.
less lets you use the cursor keys to move around the text as usual, and its search
routines have been extended to allow searching both towards the end and towards
the beginning of the text. The most common keyboard commands are
summarised in Table 6.5.
    As mentioned in Chapter 4, less usually serves as the display program for man-
ual pages via man . All the commands are therefore available when perusing man-
ual pages.

6.4.4       Searching Files—find
Who does not know the following feeling: “There used to be a file foobar … but
where did I put it?” Of course you can tediously sift through all your directories
by hand. But Linux would not be Linux if it did not have something handy to help
you with this.
    The find command searches the directory tree recursively for files matching a
set of criteria. “Recursively” means that it considers subdirectories, their subdirec-
tories and so on. find ’s result consists of the path names of matching files, which
can then be passed on to other programs. The following example introduces the
command structure:

$ find . -user joe -print

This searches the current directory including all subdirectories for files belonging
to the user joe . The -print command displays the result (a single file in our case)
on the terminal. For convenience, if you do not specify what to do with matching
files, -print will be assumed.
    Note that find needs some arguments to go about its task.

Starting Directory The starting directory should be selected with care. If you
pick the root directory, the required file(s)—if they exist—will surely be found,
but the search may take a long time. Of course you can only search those files and
directories for which you have the appropriate privileges.

                                    B An absolute path name for the start directory causes the file names in the
                                      output to be absolute; a relative path name for the start directory accord-
                                      ingly produces relative path names.

                                        Instead of a single start directory, you can specify a list of directories that will
                                    be searched in turn.

                                    Test Conditions These options describe the requirements on the files in detail.
                                    Table 6.6 shows the most important tests. The find documentation explains many
                                    more.

                                                             Table 6.6: Test conditions for find

                                         Test    Description
                                        -name    Specifies a file name pattern. All shell search pattern characters
                                                 are allowed. The -iname option ignores case differences.
                                        -type    Specifies a file type (see Section 9.2). This includes:
                                                   b   block device file
                                                   c   character device file
                                                   d   directory
                                                   f   plain file
                                                   l   symbolic link
                                                   p   FIFO (named pipe)
                                                   s   Unix domain socket
                                        -user    Specifies a user that the file must belong to. User names as well
                                                 as numeric UIDs can be given.
                                        -group   Specifies a group that the file must belong to. As with -user , a
                                                 numeric GID can be specified as well as a group name.
                                        -size    Specifies a particular file size. Plain numbers signify 512-byte
                                                 blocks; bytes or kibibytes can be given by appending c or k , re-
                                                 spectively. A preceding plus or minus sign stands for a lower or
                                                 upper limit; -size +10k , for example, matches all files bigger than
                                                 10 KiB.
                                         -atime   (engl. access) allows searching for files based on the time of last
                                                  access (reading or writing). This and the next two tests take their
                                                  argument in days; …min instead of …time (e.g., -amin ) produces
                                                  1-minute accuracy.
                                        -mtime   (engl. modification) selects according to the time of modification.
                                         -ctime   (engl. change) selects according to the time of the last inode
                                                  change (including changes to the content, permission changes, re-
                                                  naming, etc.)
                                         -perm    Specifies a set of permissions that a file must match. The per-
                                                  missions are given as an octal number (see the chmod command).
                                                  To search for files that have particular permission bits set regard-
                                                  less of the rest, the octal number must be preceded by a minus
                                                  sign; e.g., -perm -20 matches all files with group write permission,
                                                  regardless of their other permissions.
                                        -links   Specifies a reference count value that eligible files must match.
                                         -inum   Finds links to a file with a given inode number.

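Several of the tests from Table 6.6 can be combined in one invocation, in which case they all have to match. A sketch (the size and age limits are made up):

```shell
# regular files below the current directory that are bigger than
# 10 KiB and have not been modified for more than a week
find . -type f -size +10k -mtime +7
```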
                                        If multiple tests are given at the same time, they are implicitly ANDed together—
                                    all of them must match. find does support additional logical operators (see Ta-
                                    ble 6.7).
                                        In order to avoid mistakes when evaluating logical operators, the tests are best
                                    enclosed in parentheses. The parentheses must of course be escaped from the shell:

                                    $ find . \( -type d -o -name "A*" \) -print

                                       Table 6.7: Logical operators for find

       Option    Operator     Meaning
       !             Not      The following test must not match
       -a            And      Both tests to the left and right of -a must match
       -o             Or      At least one of the tests to the left and right of -o must match

$ _

This example lists all names that either refer to directories or that begin with “A ”
or both.

Actions As mentioned before, the search results can be displayed on the screen
using the -print option. In addition to this, there are two options, -exec and -ok ,
which execute commands incorporating the file names. The single difference
between -ok and -exec is that -ok asks the user for confirmation before actually exe-
cuting the command; with -exec , this is tacitly assumed. We will restrict ourselves
to discussing -exec .
     There are some general rules governing the -exec option:
   • The command following -exec must be terminated with a semicolon (“; ”).
     Since the semicolon is a special character in most shells, it must be escaped
     (e.g., as “\; ” or using quotes) in order to make it visible to find .
   • Two braces (“{} ”) within the command are replaced by the file name that
     was found. It is best to enclose the braces in quotes to avoid problems with
     spaces in file names.
For example:

$ find . -user joe -exec ls -l '{}' \;
-rw-r--r-- 1 joe users      4711 Oct 4 11:11 file.txt
$ _

This example searches for all files within the current directory (and below) be-
longing to user joe , and executes the “ls -l ” command for each of them. The
following makes more sense:

$ find . -atime +13 -exec rm -i '{}' \;

This interactively deletes all files within the current directory (and below) that
have not been accessed for two weeks.

B Sometimes—say, in the last example above—it is very inefficient to use -
  exec to start a new process for every single file name found. In this case,
  the xargs command, which collects as many file names as possible before
  actually executing a command, can come in useful:

       $ find . -atime +13 | xargs -r rm -i

      xargs  reads its standard input up to a (configurable) maximum of characters
      or lines and uses this material as arguments for the specified command (here
      rm ). On input, arguments are separated by space characters (which can be
      escaped using quotes or “\ ”) or newlines. The command is invoked as often

             as necessary to exhaust the input.—The -r option ensures that rm is executed
             only if find actually sends a file name; otherwise it would be executed at least
             once, even without any arguments.

     B Weird filenames can get the find /xargs combination in trouble, for example
       ones that contain spaces or, indeed, newlines, which may be mistaken for
       separators. The silver bullet consists of using the “-print0 ” option to find ,
       which outputs the file names just as “-print ” does, but uses null bytes to
       separate them instead of newlines. Since the null byte is not a valid character
       in path names, confusion is no longer possible. xargs must be invoked using
       the “-0 ” option to understand this kind of input:

             $ find . -atime +13 -print0 | xargs -0r rm -i

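B Current versions of find can also do the collecting themselves: if the command given to -exec is terminated with a plus sign instead of a semicolon, find passes as many file names as possible to each command invocation, much like xargs . A sketch (using rm -f rather than the interactive rm -i , which makes little sense for bulk deletion):

```shell
# one rm invocation handles many file names at once;
# "{}" must come directly before the "+"
find . -atime +13 -exec rm -f '{}' +
```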
     C 6.24 [!2] Find all files on your system which are longer than 1 MiB, and
       output their names.

     C 6.25 [2] How could you use find to delete a file with an unusual name (e. g.,
       containing invisible control characters or umlauts that older shells cannot
       deal with)?

     C 6.26 [3] (Second time through the book.) How would you ensure that files
       in /tmp which belong to you are deleted once you log out?

     6.4.5     Finding Files Quickly—locate and slocate
     The find command searches files according to many different criteria but needs to
     walk the complete directory tree below the starting directory. Depending on the
     tree size, this may take considerable time. For the typical application—searching
     files with particular names—there is an accelerated method.
         The locate command lists all files whose names match a given shell wildcard
     pattern. In the most trivial case, this is a simple string of characters:

     $ locate letter.txt

     A Although locate is a fairly important service (as emphasised by the fact that
       it is part of the LPIC1 curriculum), not all Linux distributions include it as
       part of the default installation.

             For example, if you are using a SUSE distribution, you must explicitly install
             the findutils-locate package before being able to use locate .

     The “* ”, “? ”, and “[ …] ” characters mean the same thing to locate as they do to
     the shell. But while a query without wildcard characters locates all file names that
     contain the pattern anywhere, a query with wildcard characters returns only those
     names which the pattern describes completely—from beginning to end. Therefore
     pattern queries to locate usually start with “* ”:

     $ locate "*/letter.t*"

B Be sure to put quotes around locate queries including shell wildcard char-
  acters, to keep the shell from trying to expand them.

The slash (“/ ”) is not handled specially:

$ locate Letters/granny

    locate is so fast because it does not walk the file system tree, but checks a
“database” of file names that must have been previously created using the updatedb
program. This means that locate does not catch files that have been added to
the system since the last database update, and conversely may output the names
of files that have been deleted in the meantime.

B You can get locate to return existing files only by using the “-e ” option, but
  this negates locate ’s speed advantage.

   The updatedb program constructs the database for locate . Since this may take
considerable time, your system administrator usually sets this up to run when the
system does not have a lot to do, anyway, presumably late at night.

B The cron service which is necessary for this will be explained in detail in
  Advanced Linux. For now, remember that most Linux distributions come
  with a mechanism which causes updatedb to be run every so often.

As the system administrator, you can tell updatedb which files to consider when
setting up the database. How that happens in detail depends on your distribution:
updatedb itself does not read a configuration file, but takes its settings from the
command line and (partly) environment variables. Even so, most distributions
call updatedb from a shell script which usually reads a file like /etc/updatedb.conf or
/etc/sysconfig/locate , where appropriate environment variables can be set up.

B You may find such a file, e.g., in /etc/cron.daily (details may vary according
  to your distribution).

   You can, for instance, cause updatedb to search certain directories and omit oth-
ers; the program also lets you specify “network file systems” that are used by sev-
eral computers and that should have their own database in their root directories,
such that only one computer needs to construct the database.
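B With the widespread mlocate implementation, for example, such settings live in /etc/updatedb.conf . A sketch (the variable names below are mlocate's; the values, and indeed the whole mechanism, differ between distributions):

```shell
# /etc/updatedb.conf (mlocate-style sketch)
PRUNE_BIND_MOUNTS="yes"
# file system types updatedb should not descend into
PRUNEFS="nfs nfs4 proc sysfs tmpfs"
# directories to keep out of the database
PRUNEPATHS="/tmp /var/spool /media /mnt"
```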

B An important configuration setting is the identity of the user that runs
  updatedb . There are essentially two possibilities:

         • updatedb runs as root and can thus enter every file in its database. This
           also means that users can ferret out file names in directories that they
           would not otherwise be able to look into, for example, other users’
           home directories.
         • updatedb runs with restricted privileges, such as those of user nobody . In
           this case, only names within directories readable by nobody end up in
           the database.

B The slocate program—an alternative to the usual locate —circumvents this
  problem by storing a file’s owner, group and permissions in the database in
  addition to the file’s name. It outputs a file name only if the user who runs
  slocate can, in fact, access the file in question. slocate comes with an updatedb
  program, too, but this is merely another name for slocate itself.

B In many cases, slocate is installed such that it can also be invoked using the
  locate command.

     C 6.27 [!1] README is a very popular file name. Give the absolute path names of
       all files on your system called README .

     C 6.28 [2] Create a new file in your home directory and convince yourself by
       calling locate that this file is not listed (use an appropriately outlandish file
       name to make sure). Call updatedb (possibly with administrator privileges).
       Does locate find your file afterwards? Delete the file and repeat these steps.

     C 6.29 [1] Convince yourself that the slocate program works, by searching for
       files like /etc/shadow as normal user.

     Commands in this Chapter
     cd       Changes a shell’s current working directory                      bash (1) 67
     convmv   Converts file names between character encodings                convmv (1) 64
     cp       Copies files                                                       cp (1) 74
     find     Searches files matching certain given criteria      find (1), Info: find 81
     less     Displays texts (such as manual pages) by page                    less (1) 80
     ln       Creates (“hard” or symbolic) links                                 ln (1) 76
     locate   Finds files by name in a file name database                    locate (1) 84
     ls       Lists file information or directory contents                       ls (1) 67
     mkdir    Creates new directories                                         mkdir (1) 69
     more     Displays text data by page                                       more (1) 80
     mv       Moves files to different directories or renames them               mv (1) 75
     pwd      Displays the name of the current working directory pwd (1), bash (1) 67
     rm       Removes files or directories                                       rm (1) 75
     rmdir    Removes (empty) directories                                     rmdir (1) 70
     slocate  Searches file by name in a file name database, taking file permissions into
              account                                                       slocate (1) 85
     updatedb Creates the file name database for locate                   updatedb (1) 85
     xargs    Constructs command lines from its standard input
                                                                 xargs (1), Info: find 83

     Summary

        • Nearly all possible characters may occur in file names. For portability’s sake,
          however, you should restrict yourself to letters, digits, and some special
          characters.
        • Linux distinguishes between uppercase and lowercase letters in file names.
        • Absolute path names always start with a slash and mention all directories
          from the root of the directory tree to the directory or file in question. Relative
          path names start from the “current directory”.
        • You can change the current directory of the shell using the cd command.
          You can display its name using pwd .
        • ls displays information about files and directories.
        • You can create or remove directories using mkdir and rmdir .
        • The cp , mv and rm commands copy, move, and delete files and directories.
        • The ln command allows you to create “hard” and “symbolic” links.
        • more and less display files (and command output) by pages on the terminal.
        • find searches for files or directories matching certain criteria.

7 Standard I/O and Filter Commands

7.1     I/O Redirection and Command Pipelines
        7.1.1 Standard Channels
        7.1.2 Redirecting Standard Channels
        7.1.3 Command Pipelines
7.2     Filter Commands
7.3     Reading and Writing Files
        7.3.1 Outputting and Concatenating Text Files—cat and tac
        7.3.2 Beginning and End—head and tail
        7.3.3 Just the Facts, Ma’am—od and hexdump
7.4     Text Processing
        7.4.1 Character by Character—tr , expand and unexpand
        7.4.2 Line by Line—fmt , pr and so on
7.5     Data Management
        7.5.1 Sorted Files—sort and uniq
        7.5.2 Columns and Fields—cut , paste etc.

Goals
      • Mastering shell I/O redirection
      • Knowing the most important filter commands

Prerequisites
      • Shell operation (see Chapter 2)
      • Use of a text editor (see Chapter 5)
      • File and directory handling (see Chapter 6)


                                [Figure 7.1: Standard channels on Linux: keyboard → stdin → process → stdout → screen]

                            7.1     I/O Redirection and Command Pipelines
                            7.1.1       Standard Channels
                            Many Linux commands—like grep and friends—are designed to read input data,
                            manipulate it in some way, and output the result of these manipulations. For
                            example, if you enter

                            $ grep xyz

                            you can type lines of text on the keyboard, and grep will only let those pass that
                             contain the character sequence “xyz”:

                            $ grep xyz
                            abc def
                            xyz 123
                            xyz 123
                            aaa bbb
                             Ctrl + d

                           (The key combination at the end lets grep know that the input is at an end.)
                               We say that grep reads data from “standard input”—in this case, the keyboard—
                           and writes to “standard output”—in this case, the console screen or, more likely,
                           a terminal program in a graphical desktop environment. The third of these
                           “standard channels” is “standard error output”; while the “payload data” grep
                           produces are written to standard output, standard error output takes any error
                           messages (e. g., about a non-existent input file or a syntax error in the regular
                           expression).
                               In this chapter you will learn how to redirect a program’s standard output to
                           a file or take a program’s standard input from a file. Even more importantly, you
                           will learn how to feed one program’s output directly (without the detour via a
                           file) into another program as that program’s input. This opens the door to using
                           the Linux commands, which taken on their own are all fairly simple, as building
                           blocks to construct very complex applications. (Think of a Lego set.)

                            B We will not be able to exhaust this topic in this chapter. Do look forward
                              to the manual, Advanced Linux, where constructing shell scripts with the
                              commands from the Unix “toolchest” plays a very important rôle! Here is
                              where you learn the very important fundamentals of cleverly combining
                              Linux commands even on the command line.

                                       Table 7.1: Standard channels on Linux

        Channel     Name                      Abbreviation      Device       Use
            0       standard input            stdin             keyboard     Input for programs
            1       standard output           stdout            screen       Output of programs
            2       standard error output     stderr            screen       Programs’ error messages

   The standard channels are summarised once more in Table 7.1. In the patois,
they are normally referred to using their abbreviated names—stdin , stdout
and stderr for standard input, standard output, and standard error output. These
channels are respectively assigned the numbers 0, 1, and 2, which we are going to
use later on.
   The shell can redirect these standard channels for individual commands, with-
out the programs in question noticing anything. The programs always use the
standard channels, even though the output might no longer be written to the
screen or terminal window but to some arbitrary other file. That file could be a
different device,
minal window but some arbitrary other file. That file could be a different device,
like a printer—but it is also possible to specify a text file which will receive the
output. That file does not even have to exist but will be created if required.
   The standard input channel can be redirected in the same way. A program no
longer receives its input from the keyboard, but takes it from the specified file,
which can refer to another device or a file in the proper sense.

B The keyboard and screen of the “terminal” you are working on (no matter
  whether this is a Linux text console, a “genuine” terminal on a serial port,
  a terminal window in a graphical environment, or a network session using,
  say, the secure shell) can be accessed by means of the /dev/tty file—if you
  want to read data this means the keyboard, for output the screen (the other
  way round would be quite silly). The

        $ grep xyz /dev/tty

         would be equivalent to our example earlier on in this section. You can find
         out more about such “special files” in Chapter 9.

7.1.2     Redirecting Standard Channels
You can redirect the standard output channel using the shell operator “> ” (the
“greater-than” sign). In the following example, the output of “ls -laF ” is redi-
rected to a file called filelist ; the screen output consists merely of

$ ls -laF >filelist
$ __

If the filelist file does not exist it is created. Should a file by that name exist,
however, its content will be overwritten. The shell arranges for this even before
the program in question is invoked—the output file will thus be created even if
the actual command invocation contained typos, or if the program did not indeed
write any output at all (in which case the filelist file will remain empty).

B If you want to avoid overwriting existing files using shell output redirection,
  you can give the bash command “set -o noclobber ”. In this case, if output is
  redirected to an existing file, an error occurs.
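A short sketch of the effect (the file name is made up):

```shell
set -o noclobber
echo one > data.txt          # works: the file is new
echo two > data.txt          # refused: data.txt already exists
echo two >| data.txt         # ">|" overrides noclobber for this one redirection
set +o noclobber             # switch the protection off again
```

The “>| ” operator is useful if you want noclobber as your default but need to overwrite a particular file deliberately.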

   You can look at the filelist file in the usual way, e. g., using less :

$ less filelist
total 7

                                  drwxr-xr-x   12   joe    users    1024   Aug   26   18:55   ./
                                  drwxr-xr-x    5   root   root     1024   Aug   13   12:52   ../
                                  drwxr-xr-x    3   joe    users    1024   Aug   20   12:30   photos/
                                  -rw-r--r--    1   joe    users       0   Sep    6   13:50   filelist
                                  -rw-r--r--    1   joe    users   15811   Aug   13   12:33   pingu.gif
                                  -rw-r--r--    1   joe    users   14373   Aug   13   12:33   hobby.txt
                                  -rw-r--r--    2   joe    users    3316   Aug   20   15:14   chemistry.txt

If you look closely at the content of filelist , you can see a directory entry for
filelist with size 0. This is due to the shell’s way of doing things: When parsing
the command line, it notices the output redirection first and creates a new filelist
file (or removes its content). After that, the shell executes the command, in this
case ls , while connecting ls ’s standard output to the filelist file instead of the
screen.

B The file’s length in the ls output is 0 because the ls command looked at the
  file information for filelist before anything was written to that file—even
  though there are three other entries above that of filelist . This is because
  ls first reads all directory entries, then sorts them by file name, and only
  then starts writing to the file. Thus ls sees the newly created (or emptied)
  file filelist , with no content so far.
   If you want to append a command’s output to an existing file without replacing
its previous content, use the >> operator. If that file does not exist, it will be
created in this case, too.

                                  $ date >> filelist
                                  $ less filelist
                                  total 7
                                  drwxr-xr-x 12 joe     users       1024   Aug   26   18:55   ./
                                  drwxr-xr-x   5 root   root        1024   Aug   13   12:52   ../
                                  drwxr-xr-x   3 joe    users       1024   Aug   20   12:30   photos/
                                  -rw-r--r--   1 joe    users          0   Sep    6   13:50   filelist
                                  -rw-r--r--   1 joe    users      15811   Aug   13   12:33   pingu.gif
                                  -rw-r--r--   1 joe    users      14373   Aug   13   12:33   hobby.txt
                                  -rw-r--r--   2 joe    users       3316   Aug   20   15:14   chemistry.txt
                                  Wed Oct 22 12:31:29 CEST 2003

In this example, the current date and time were appended to the filelist file.
   Another way to redirect the standard output of a command is by using “back-
ticks” (` …` ). This is also called command substitution: The standard output of a
command in backticks will be inserted into the command line in place of the com-
mand (and backticks); whatever results from the replacement will be executed.
For example:

$ cat dates                                                   Our little diary
22/12 Get presents
23/12 Get Christmas tree
24/12 Christmas Eve
$ date +%d/%m                                                What’s the date?
23/12
$ grep ^`date +%d/%m` dates                                      What’s up?
23/12 Get Christmas tree

                                  B A possibly more convenient syntax for “`date` ” is “$(date) ”. This makes it
                                    easier to nest such calls. However, this syntax is only supported by modern
                                    shells such as bash .
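The nesting advantage can be seen in a small sketch: with $(…) the inner substitution needs no escaping, while the backtick form would require backslashes before the inner backticks.

```shell
# The inner $(expr 3 + 4) runs first; its output (7) becomes an
# argument of the outer expr. With backticks you would have to
# write:  expr 2 \* `expr 3 + 4`  -- escaping every further level.
result=$(expr 2 \* $(expr 3 + 4))
echo "$result"                      # prints 14
```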
   You can use < , the “less-than” sign, to redirect the standard input channel. This
will read the content of the specified file instead of keyboard input:

$ wc -w <frog.txt

In this example, the wc filter command counts the words in file frog.txt .

B There is no variant of the < operator that concatenates several input files; to
  pass the content of several files as a command’s input you need to use cat :

       $ cat file1 file2 file3 | wc -w

      (We shall find out more about the “| ” operator in the next section.) Most
      programs, however, do accept one or more file names as command line
      arguments.

B You can, however, use the << operator to take input data for a command
  from the lines following the command invocation in the shell. This is less
  interesting for interactive use than it is for shell scripts, but must be men-
  tioned here for completeness. The feature is called a “here document”. For
  example, in

       $ grep Linux <<END
       Roses are red,
       Violets are blue,
       Linux is lovely,
       I know this is true.
       END

      the input to grep consists of the lines following the grep call up to the line
      containing only “END ”. The output of the command is

       Linux is lovely,

B If you specify the “end string” of a here document without quotes, shell
  variables will be evaluated and command substitution (using ` …` or $( …) )
  will be performed on the lines of the here document. However, if the end
  string is quoted (single or double quotes), the here document will be pro-
  cessed verbatim. Compare the output of

       $ cat <<EOF
       Today's date: `date`
       EOF

      to that of

       $ cat <<"EOF"
       Today's date: `date`
       EOF

      Finally: If the here document is introduced by “<<- ” instead of “<< ”, all tab
      characters will be removed from the beginning of the here document’s lines.
      This lets you indent here documents properly in shell scripts.
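As a sketch, an indented here document inside an if block; note that the indentation must consist of actual tab characters for “<<- ” to strip it:

```shell
# The here document below is indented with TAB characters;
# '<<-' removes leading tabs from each line (and from the end marker),
# so the output is simply "one" and "two" without indentation.
if true; then
	cat <<-EOF
		one
		two
	EOF
fi
```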
   Of course, standard input and standard output may be redirected at the same
time. The output of the word-count example is written to a file called wordcount :

$ wc -w <frog.txt >wordcount
$ cat wordcount

   Besides the standard input and standard output channels, there is also the stan-
dard error output channel. If errors occur during a program’s operation, the cor-
responding messages will be written to that channel. That way you will see them
even if standard output has been redirected to a file. If you want to redirect stan-
dard error output to a file as well, you must state the channel number for the
redirection operator—this is optional for stdin (0< ) and stdout (1> ) but mandatory
for stderr (2> ).
   You can use the >& operator to redirect a channel to a different one:

make >make.log 2>&1

redirects standard output and standard error output of the make command to
make.log .
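A sketch of the channel numbers in action (the file names out.log and err.log are made up for the illustration):

```shell
# Standard output and standard error end up in different files:
# the listing of the existing file goes to out.log, the complaint
# about the missing one goes to err.log.
ls /etc/passwd /no-such-file >out.log 2>err.log || true
cat out.log
cat err.log
rm -f out.log err.log
```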

                             B Watch out: Order is important here! The two commands
                                     make >make.log 2>&1
                                     make 2>&1 >make.log

                                     lead to completely different results. In the second case, standard error out-
                                     put will be redirected to wherever standard output goes (/dev/tty , where
                                     standard error output would go anyway), and then standard output will
                                     be sent to make.log , which, however, does not change the target for standard
                                     error output.
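The difference can be made visible with a command that writes to both channels (here sh -c stands in for make ; the log file names are made up):

```shell
# Variant 1: stdout goes to the file first, THEN stderr is duplicated
# onto it -- both messages end up in both.log.
sh -c 'echo out; echo err >&2' >both.log 2>&1
cat both.log                 # contains "out" and "err"

# Variant 2: stderr is duplicated onto the CURRENT stdout (the
# terminal) first; only "out" lands in the file, "err" stays on
# the screen.
sh -c 'echo out; echo err >&2' 2>&1 >only-out.log
cat only-out.log             # contains just "out"
rm -f both.log only-out.log
```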

                             C 7.1 [2] You can use the -U option to get ls to output a directory’s entries with-
                               out sorting them. Even so, after “ls -laU >filelist ”, the entry for filelist in
                               the output file gives length zero. What could be the reason?

                              C 7.2 [!2] Compare the output of the commands “ls /tmp ” and “ls /tmp >ls-tmp.txt ”
                                (where, in the second case, we consider the content of the ls-tmp.txt file to
                                be the output). Do you notice something? If so, how could you explain
                                the phenomenon?

                             C 7.3 [!2] Why isn’t it possible to replace a file by a new version in one step,
                               for example using “grep xyz file >file ”?

                             C 7.4 [!1] And what is wrong with “cat foo >>foo ”, assuming a non-empty file
                               foo ?

                             C 7.5 [2] In the shell, how would you output an error message such that it goes
                               to standard error output?

7.1.3     Command Pipelines

Output redirection is frequently used to store the result of a program in order to
continue processing it with a different command. However, this type of interme-
diate storage is not only quite tedious, but you must also remember to get rid of
the intermediate files once they are no longer required. Therefore, Linux offers a
way of linking commands directly via pipes: A program’s output automatically
becomes another program’s input.
   This direct connection of several commands into a pipeline is done using the
| operator. Instead of first redirecting the output of “ls -laF ” to a file and then
looking at that file using less , you can do the same thing in one step without an
intermediate file:

                           Figure 7.2: The tee command


$ ls -laF | less
total 7
drwxr-xr-x 12 joe     users       1024   Aug   26   18:55   ./
drwxr-xr-x   5 root   root        1024   Aug   13   12:52   ../
drwxr-xr-x   3 joe    users       1024   Aug   20   12:30   photos/
-rw-r--r--   1 joe    users        449   Sep    6   13:50   filelist
-rw-r--r--   1 joe    users      15811   Aug   13   12:33   pingu.gif
-rw-r--r--   1 joe    users      14373   Aug   13   12:33   hobby.txt
-rw-r--r--   2 joe    users       3316   Aug   20   15:14   chemistry.txt

These command pipelines can be almost any length. Besides, the final result can
be redirected to a file:
$ cut -d: -f1 /etc/passwd | sort | pr -2 >userlst

This command pipeline takes all user names from the first colon-separated col-
umn of the /etc/passwd file, sorts them alphabetically and writes them to the userlst
file in two columns. The commands used here will be described in the remainder
of this chapter.
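The same pipeline in miniature, with a three-line stand-in for /etc/passwd (the file name mini-passwd is made up for the illustration):

```shell
# Three /etc/passwd-style lines: user name in the first colon-
# separated field.
printf 'joe:x:1000\nroot:x:0\nanna:x:1001\n' > mini-passwd
cut -d: -f1 mini-passwd | sort      # anna, joe, root -- one per line
rm -f mini-passwd
```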
    Sometimes it is helpful to store the data stream inside a command pipeline at
a certain point, for example because the intermediate result at that stage is useful
for different tasks. The tee command copies the data stream and sends one copy
to standard output and another copy to a file. The command name should be
obvious if you know anything about plumbing (see Figure 7.2).
    The tee command with no options creates the specified file or overwrites it if it
exists; with -a (“append”), the output can be appended to an existing file.

$ ls -laF | tee list | less
total 7
drwxr-xr-x 12 joe     users       1024   Aug   26   18:55   ./
drwxr-xr-x   5 root   root        1024   Aug   13   12:52   ../
drwxr-xr-x   3 joe    users       1024   Aug   20   12:30   photos/
-rw-r--r--   1 joe    users        449   Sep    6   13:50   filelist
-rw-r--r--   1 joe    users      15811   Aug   13   12:33   pingu.gif
-rw-r--r--   1 joe    users      14373   Aug   13   12:33   hobby.txt
-rw-r--r--   2 joe    users       3316   Aug   20   15:14   chemistry.txt

In this example the content of the current directory is written both to the list file
and the screen. (The list file does not show up in the ls output because it is only
created afterwards by tee .)

C 7.6 [!2] How would you write the same intermediate result to several files
  at the same time?

                                                Table 7.2: Options for cat (selection)

                            Option    Result
                               -b     (“number non-blank lines”) Numbers all non-blank lines in
                                      the output, starting at 1.
                               -E     (“end-of-line”) Displays a $ at the end of each line (useful
                                      to detect otherwise invisible space characters).
                               -n     (“number”) Numbers all lines in the output, starting at 1.
                               -s     (“squeeze”) Replaces sequences of empty lines by a single
                                      empty line.
                               -T     (“tabs”) Displays tab characters as “^I ”.
                               -v     (“visible”) Makes control characters 𝑐 visible as “^ 𝑐”, char-
                                      acters 𝛼 with character codes greater than 127 as “M- 𝛼”.
                               -A     (“show all”) Same as -vET .

7.2     Filter Commands

One of the basic ideas of Unix—and, consequently, Linux—is the “toolkit princi-
ple”. The system comes with a great number of system programs, each of which
performs a (conceptually) simple task. These programs can be used as “building
blocks” to construct other programs, to save the authors of those programs from
having to develop the requisite functions themselves. For example, not every pro-
gram contains its own sorting routines, but many programs avail themselves of
the sort command provided by Linux. This modular structure has several advan-
tages:

    • It makes life easier for programmers, who do not need to develop (or incor-
      porate) new sorting routines all the time.

    • If sort receives a bug fix or performance improvement, all programs using
      sort benefit from it, too—and in most cases do not even need to be changed.

                      Tools that take their input from standard input and write their output to standard
                      output are called “filter commands” or “filters” for short. Without input redirec-
                      tion, a filter will read its input from the keyboard. To finish off keyboard input for
                      such a program, you must enter the key sequence Ctrl + d , which is interpreted
                      as “end of file” by the terminal driver.

 B Note that the last applies to keyboard input only. Files on the disk may of
   course contain the Ctrl + d character (ASCII 4), without the system believing
   that the file ended at that point. This is in contrast to a certain very popular
   operating system, which traditionally has a somewhat quaint notion of the
   meaning of the Control-Z (ASCII 26) character even in text files …

                         Many “normal” commands, such as the aforementioned grep , operate like fil-
                      ters if you do not specify input file names for them to work on.
                         In the remainder of the chapter you will become familiar with a selection of the
                      most important such commands. Some commands have crept in that are not tech-
                      nically genuine filter commands, but all of them form important building blocks
                      for pipelines.

                      7.3     Reading and Writing Files
                      7.3.1    Outputting and Concatenating Text Files—cat and tac
The cat (“concatenate”) command is really intended to join several files named on
the command line into one. If you pass just a single file name, the content of that

                           Table 7.3: Options for tac (selection)

       Option     Result
         -b       (“before”) The separator is considered to occur (and be
                  output) in front of a part, not behind it.
         -r       (“regular expression”) The separator is interpreted as a reg-
                  ular expression.
         -s   𝑠   (“separator”) Defines a different separator 𝑠 (in place of \n ).
                  The separator may be several characters long.

file will be written to standard output. If you do not pass a file name at all, cat
reads its standard input—this may seem useless, but cat offers options to number
lines, make line ends and special characters visible or compress runs of blank lines
into one (Table 7.2).
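A few of the options from Table 7.2 in action (GNU coreutils cat assumed; the file name demo.txt is made up for the illustration):

```shell
# A five-line file: "one", three blank lines, "two".
printf 'one\n\n\n\ntwo\n' > demo.txt
cat -s demo.txt    # squeezes the run of blank lines into one
cat -n demo.txt    # numbers every line, including blank ones
cat -E demo.txt    # marks each line end with '$'
rm -f demo.txt
```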

B It goes without saying that only text files lead to sensible screen output with
  cat . If you apply the command to other types of files (such as the binary file
  /bin/cat ), it is more than probable—on a text terminal at least—that the shell
  prompt will consist of unreadable characters once the output is done. In this
  case you can restore the normal character set by (blindly) typing reset . If you
  redirect cat output to a file this is of course not a problem.

B The “Useless Use of cat Award” goes to people using cat where it is extra-
  neous. In most cases, commands do accept file names and don’t just read
  their standard input, so cat is not required to pass a single file to them on
  standard input. A command like “cat data.txt | grep foo ” is unnecessary if
  you can just as well write “grep foo data.txt ”. Even if grep could only read its
  standard input, “grep foo <data.txt ” would be shorter and would not involve
  an additional cat process. However, the whole issue is a bit more subtle; see
  Exercise 7.21.

   The tac command’s name is “cat backwards”, and it works like that, too: It
reads a number of named files or its standard input and outputs the lines it has
read in reverse order:

$ tac <<END
one
two
three
END
three
two
one

However, this is where the similarity ends: tac does not support the same
options as cat but features its own (Table 7.3). For example, you can use the -s
option to set up an alternative separator which the program will use when re-
versing the input—normally the separator is a newline character, so the input is
reversed line by line. Consider, for example

$ echo A:B:C:D | tac -s :
D
C:B:A:$ _

(where the new shell prompt is appended directly to the last output line). This
output, which at first glance looks totally weird, can be explained as follows: The
input consists of the four parts “A: ”, “B: ”, “C: ”, and “D\n ” (the separator, here “: ”
is considered to belong to the immediately preceding part, and the final newline

     character is contributed by echo ). These parts are output in reverse order, i. e.,
     “D\n ” comes first and then the other three, with no other intervening separators
     (since every part contains a perfectly workable separator already); the next shell
     prompt is appended immediately (without a new line) to the output. The -b option
     considers the separator to belong to the following part rather than the preceding
     one; with “tac -s : -b ”, our example would produce the following output:

:D
:C:BA$ _

     (think it through!).

     C 7.7 [2] How can you check whether a directory contains files with “weird”
       names (e. g., ones with spaces at the end or invisible control characters in
       the middle)?

     7.3.2      Beginning and End—head and tail
     Sometimes you are only interested in part of a file: The first few lines to check
     whether it is the right file, or, in particular with log files, the last few entries. The
     head and tail commands deliver exactly that—by default, the first ten and the last
     ten lines of every file passed as an argument, respectively (or else as usual the first
     or last ten lines of their standard input). The -n option lets you specify a different
     number of lines: “head -n 20 ” returns the first 20 lines of its standard input, “tail
     -n 5 data.txt ” the last 5 lines of file data.txt .
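For example, using seq to generate a hundred numbered lines (the file name numbers.txt is made up for the illustration):

```shell
seq 1 100 > numbers.txt
head -n 3 numbers.txt    # 1, 2, 3
tail -n 2 numbers.txt    # 99, 100
rm -f numbers.txt
```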

     B Tradition dictates that you can specify the number 𝑛 of desired lines directly
       as “- 𝑛”. Officially this is no longer allowed, but the Linux versions of head
       and tail still support it.
        You can use the -c option to specify that the count should be in bytes, not lines:
     “head -c 20 ” displays the first 20 bytes of standard input, no matter how many
     lines they occupy. If you append a “b ”, “k ”, or “m ” (for “blocks”, “kibibytes”, and
     “mebibytes”, respectively) to the count, the count will be multiplied by 512, 1024,
     or 1048576, respectively.

     B head also lets you use a minus sign: “head -c -20 ” displays all of its standard
       input but the last 20 bytes.

      B By way of revenge, tail can do something that head does not support: If the
        number of lines starts with “+ ”, it displays everything starting with the given
        line:

              $ tail -n +3 file                                        Everything from line 3
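A quick check of the “+ ” form on generated input:

```shell
# Everything from line 8 onwards of a ten-line input:
seq 1 10 | tail -n +8      # prints 8, 9, 10
```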

        The tail command also supports the important -f option. This makes tail wait
     after outputting the current end of file, to also output data that is appended later
     on. This is very useful if you want to keep an eye on some log files. If you pass
     several file names to tail -f , it puts a header line in front of each block of output
     lines telling what file the new data was written to.

     C 7.8 [!2] How would you output just the 13th line of the standard input?
     C 7.9 [3] Check out “tail -f ”: Create a file and invoke “tail -f ” on it. Then,
       from another window or virtual console, append something to the file us-
       ing, e. g., “echo >>… ”, and observe the output of tail . What does it look like
       when tail is watching several files simultaneously?

                                           Table 7.4: Options for od (excerpt)

         Option      Result
         -A   𝑟      Base of the offset at the beginning of the line. Valid values are: d (decimal), o (octal), x
                     (hexadecimal), n (no offset at all).
         -j   𝑜      Skip 𝑜 bytes at the beginning of the input, then start writing output.
         -N   𝑛      Output at most 𝑛 bytes.
         -t   𝑡      Use type specification 𝑡. Several -t options may occur, and one line will be output for
                     each of them in the requisite format.
                     Possible values for 𝑡: a (named character), c (ASCII character), d (signed decimal number),
                     f (floating-point number), o (octal number), u (unsigned decimal number), x (hexadeci-
                     mal number).
                     You can append a digit to all options except a and c . This specifies how many bytes
                     of the input should be interpreted as a unit. Details for this and for letter-based width
                     specifiers can be found in od (1).
                     If you append a z to an option, the printable characters of that line will be displayed to
                     the right.
         -v          Outputs all duplicate lines as well.
         -w   𝑤      Writes 𝑤 bytes per line; default value is 16.

C 7.10 [3] What happens to “tail -f ” if the file being observed shrinks?

C 7.11 [3] Explain the output of the following commands:
         $ echo Hello >/tmp/hello
         $ echo "Hiya World" >/tmp/hello

        when you have started the command

         $ tail -f /tmp/hello

        in a different window after the first echo above.

7.3.3         Just the Facts, Ma’am—od and hexdump
cat , tac , head , and tail work best with text files: Arbitrary binary files can in
principle be processed, but the last three programs in particular prefer dealing with
files that consist of noticeable lines. Even so, it is often useful to be able to check
exactly what is in a file. A suitable tool is the od (“octal dump”) command, which
can display arbitrary data in different formats. Binary data can be displayed byte
by byte or word by word in octal, hexadecimal, decimal or ASCII coding. The
standard display style of od is as follows:

 $ od /etc/passwd | head -3
 0000000 067562 072157 074072 030072 030072 071072 067557 035164
 0000020 071057 067557 035164 061057 067151 061057 071541 005150
 0000040 060563 064163 067562 072157 074072 030072 030072 071072

At the very left there is the (octal) offset in the file where the output line starts.
The eight following numbers each correspond to two bytes from the file, printed
in octal. This is only useful in very specific circumstances.
   Fortunately od supports options that let you change the output format in very
many ways (Table 7.4). Most important is the -t option, which describes the for-
mat of the data lines. For byte-by-byte hexadecimal output, you could use, for
example:

                              $ od -txC /etc/passwd
                              0000000 72 6f 6f 74 3a 78 3a 30 3a 30 3a 72 6f 6f 74 3a
                              0000020 2f 72 6f 6f 74 3a 2f 62 69 6e 2f 62 61 73 68 0a
                              0000040 73 61 73 68 72 6f 6f 74 3a 78 3a 30 3a 30 3a 72

                              (the offset remains octal). Here, x specifies “hexadecimal”, and C specifies “byte-
                              wise”. If you want to see the characters themselves in addition to the hexadecimal
                              numbers, you can append a z :

                              $ od -txCz   /etc/passwd
                              0000000 72   6f 6f 74 3a 78 3a 30 3a 30 3a 72 6f 6f 74 3a                   >root:x:0:0:root:<
                              0000020 2f   72 6f 6f 74 3a 2f 62 69 6e 2f 62 61 73 68 0a                   >/root:/bin/bash.<
                              0000040 73   61 73 68 72 6f 6f 74 3a 78 3a 30 3a 30 3a 72                   >sashroot:x:0:0:r<

                             Non-printable characters (here the 0a —a newline character—at the end of the sec-
                             ond line) are replaced by “. ”.
   You can also concatenate several type specifiers or put them into separate -t
options. This gives you one line per type specifier:

                              $ od -txCc /etc/passwd
                              0000000 72 6f 6f 74 3a 78   3a 30 3a 30 3a 72 6f 6f 74 3a
                                        r   o   o    t    :   x   :   0   :   0   :   r                    o   o   t    :
                              0000020 2f 72 6f 6f 74 3a   2f 62 69 6e 2f 62 61 73 68 0a
                                        /   r   o    o    t   :   /   b   i   n   /   b                    a   s   h   \n
                              0000040 73 61 73 68 72 6f   6f 74 3a 78 3a 30 3a 30 3a 72
                                        s   a   s    h    r   o   o   t   :   x   :   0                    :   0   :    r

(which is identical to “od -txC -tc /etc/passwd ”).
   A sequence of lines that would be equal to the last previously-output line is
replaced by an asterisk (“* ”) at the left margin:

$ od -tx -N 64 /dev/zero
0000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
*
0000100

                              (/dev/zero produces an unlimited supply of null bytes, and the -N option to od limits
                              the output to 64 of them.) The -v option suppresses the abbreviation:

$ od -tx -N 64 -v /dev/zero
0000000 00 00 00 00 00 00 00   00   00   00   00   00   00   00   00   00
0000020 00 00 00 00 00 00 00   00   00   00   00   00   00   00   00   00
0000040 00 00 00 00 00 00 00   00   00   00   00   00   00   00   00   00
0000060 00 00 00 00 00 00 00   00   00   00   00   00   00   00   00   00
0000100

   The hexdump (or hd ) program does a very similar job. It supports output formats
that are very like those of od , even though the corresponding options are com-
pletely different. For example, the command

$ hexdump -o -s 446 -n 64 /etc/passwd

is mostly equivalent to the first od example above. Most od options have fairly
similar counterparts in hexdump .

   A major difference between hexdump and od is hexdump ’s support for output for-
mats. These let you specify in much more detail than is possible with od what the
output should look like. Consider the following example:

$ cat hexdump.txt
0123456789ABC…XYZabc…xyz
$ hexdump -e '"%x-"' hexdump.txt
33323130-37363534-…-a7a79-$

The following points are notable:
   • The “"%x" ” output format writes 4 bytes’ worth of the input in a hexadeci-
     mal representation—“30 ” is the hexadecimal equivalent of 48, the numerical
     value of the “0 ” character in ASCII. The “- ” suffix is output after each value.
   • The 4 bytes are output in reverse order. This is an artefact of the Intel pro-
     cessor architecture.
   • The double quotes are part of the syntax of hexdump and need to be protected
     using single quotes (or equivalent) lest the shell remove them.
   • The $ at the end is the next command prompt; hexdump does not output new-
     line characters of its own.
Conveniently for programmers, the possible output formats derive from those
used by the printf (3) function found in programming languages like C, Perl, awk ,
and so on (even bash supports a printf command). Check the documentation for
details.
   hexdump output formats are much more sophisticated than the simple example
shown above. As usual with printf , you get to specify a “field width” for the
output:

$ hexdump -e '"%10x-"' hexdump.txt
  33323130- 37363534-  a7a79-$

(In this case every sequence of hexadecimal digits—eight characters in length—
appears flush-right in a ten-character field.)
   You can also specify how often a format will be “executed”:

$ hexdump -e '4 "%x-" "\n"' hexdump.txt

The 4 preceding the format says that the “"%x-" ” format is to be applied four
times. After that, we continue with the next format—“"\n" ”—, which produces
a newline character. After that, hexdump starts again from the front.
   In addition, you can determine how many bytes a format should process (with
the numerical formats you usually have a choice of 1, 2, and 4):

$ hexdump -e '/2 "%x-" "\n"' hexdump.txt


(“/2 ” is an abbreviation for “1/2 ”, which is why every %x format appears just once
per line.) Repeat count and byte count may be combined:
100                                                                                7 Standard I/O and Filter Commands

                                               Table 7.5: Options for tr

      Option              Result
      -c   (complement)   Replaces all characters not in ⟨s1 ⟩ by characters from ⟨s2 ⟩
      -d   (delete)       Removes all characters in ⟨s1 ⟩ without substitution
      -s   (squeeze)      Runs of identical characters from ⟨s2 ⟩ are replaced by a single character

                                 $ hexdump -e '4/2 "%x-" "\n"' hexdump.txt

                                And you may also mix different output formats:

                                 $ hexdump -e '"%2_ad" "%2.2s" 3/2 " %x" " %1.1s" "\n"' 
                                 0 01 3332 3534 3736 8
                                 9 9A 4342 4544 4746 H

                                In this case we output the first two characters from the file as characters (rather
                                than numerical codes) (“"%2.2s" ”), then three times the codes of two characters
                                in hexadecimal form (“3/2 " %x" ”) followed by another character as a character
                                (“"%1.1s" ”) and a newline character. Then we start again from the front. The
                                “"%2_ad" ” at the beginning of the line outputs the current offset in the file (counted
                                in bytes from the start of the file) in decimal form in a 2-character field.

                                 C 7.12 [2] What is the difference between the “a ” and “c ” type specifiers of od ?

                                 C 7.13 [3] The /dev/random “device” returns random bytes (see Section 9.3). Use
                                   od with /dev/random to assign a decimal random number from 0 to 65535 to
                                   the shell variable r .

                                7.4      Text Processing
                                7.4.1     Character by Character—tr , expand and unexpand
                                  The tr command is used to replace single characters by different ones inside a text,
                                  or to delete them outright. tr is strictly a filter command: it does not take filename
                                  arguments but works with the standard channels only.
                                      For substitutions, the command syntax is “tr ⟨s1 ⟩ ⟨s2 ⟩”. The two parameters
                                 are character strings describing the substitution: In the simplest case the first char-
                                 acter in ⟨s1 ⟩ will be substituted by the first character in ⟨s2 ⟩, the second character
                                 in ⟨s1 ⟩ by the second in ⟨s2 ⟩, and so on. If ⟨s1 ⟩ is longer than ⟨s2 ⟩, the “excess”
                                 characters in ⟨s1 ⟩ are replaced by the final character in ⟨s2 ⟩; if ⟨s2 ⟩ is longer than
                                 ⟨s1 ⟩, the extra characters in ⟨s2 ⟩ are ignored.
                                     A little example by way of illustration:

                                 $ tr AEiu aey <example.txt >new1.txt

                                   Table 7.6: Characters and character classes for tr

       Class          Meaning
       \a             Control-G (ASCII 7), audible alert
       \b             Control-H (ASCII 8), backspace
       \f             Control-L (ASCII 12), form feed
       \n             Control-J (ASCII 10), line feed
       \r             Control-M (ASCII 13), carriage return
       \t             Control-I (ASCII 9), tabulator character
       \v             Control-K (ASCII 11), vertical tabulator
       \ 𝑘𝑘𝑘          the character with octal code 𝑘𝑘𝑘
       \\             a backslash
       [ 𝑐* 𝑛 ]       in ⟨s2 ⟩: 𝑛 times character 𝑐
       [ 𝑐*]          in ⟨s2 ⟩: character 𝑐 as often as needed to make ⟨s2 ⟩ as long as ⟨s1 ⟩
       [:alnum:]      all letters and digits
       [:alpha:]      all letters
       [:blank:]      all horizontal whitespace characters
       [:cntrl:]      all control characters
       [:digit:]      all digits
       [:graph:]      all printable characters (excluding space)
       [:lower:]      all lowercase letters
       [:print:]      all printable characters (including space)
       [:punct:]      all punctuation characters
       [:space:]      all horizontal or vertical whitespace characters
       [:upper:]      all capital letters
        [:xdigit:]     all hexadecimal digits (0 –9 , A –F , a –f )
        [: 𝑐:]         all characters equivalent to 𝑐 (at this point only 𝑐 itself)

This command reads file example.txt and replaces all “A” characters by “a”, all “E”
characters by “e”, and all “i” and “u” characters by “y”. The result is stored in file
new1.txt .
   It is permissible to express sequences of characters by ranges of the form “𝑚- 𝑛”,
where 𝑚 must precede 𝑛 in the character collating order. With the -c option, tr
does not replace the content of ⟨s1 ⟩ but its “complement”, all characters not con-
tained in ⟨s1 ⟩. The command

$ tr -c A-Za-z ' ' <example.txt >new1.txt

replaces all non-letters in example.txt by spaces.

B It is also possible to use character classes of the form [: 𝑘:] (the valid class
  names are shown in Table 7.6); in many cases this makes sense in order
  to construct commands that work in different language environments. In
  a German-language environment, for example, the character class “[:alpha:] ”
  contains the umlauts, while a “home-cooked” range like “A-Za-z ”, which
  works for English, does not. There are some other restrictions on character
  classes which you can look up in the tr documentation (see “info tr ”).
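Character classes are also handy for case conversion. Here is a minimal sketch (the sample string is made up); it translates whatever the current locale considers uppercase letters into their lowercase counterparts:

```shell
# Translate all uppercase letters to lowercase using POSIX character classes;
# non-letters pass through unchanged.
echo 'Hello WORLD 123' | tr '[:upper:]' '[:lower:]'
```

In the C locale this prints “hello world 123”.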

   To delete characters, you need only specify ⟨s1 ⟩: The                                  Deleting characters

$ tr -d a-z <example.txt >new2.txt

command removes all lowercase letters from example.txt . Furthermore, you can
replace runs of equivalent input characters by a single output character: The

$ tr -s '\n' <example.txt >new3.txt

                         command removes empty lines by replacing sequences of newline characters by
                         a single one.
                             The -s option (“squeeze”) also makes it possible to substitute two different in-
                          put characters by the same output character and, at the same time, to replace
                          runs of that character by a single one (as with -s and a single argument). The
                          following turns all “A” and “E” characters (and sequences of those) into a single
                          “X” in new3.txt :

                         $ tr -s AE X <example.txt >new3.txt

                          The “tabulator”—a good old typewriter feature—is a convenient way of pro-
                       ducing indentation when programming (or entering text in general). By conven-
                       tion, “tabulator stops” are set in certain columns (usually every 8 columns, i. e.,
                       at positions 8, 16, 24, …), and common editors move to the next tabulator stop
                       to the right when the Tab key is pressed—if you press Tab when the cursor is
                       at column 11 on the screen, the cursor goes to column 16. In spite of this, the
                       resulting “tabulator characters” (or “tabs”) will be written to the file verbatim,
                       and many programs cannot interpret them correctly. The expand command helps
                       here: It reads the files named as its parameters (or else—you knew it—its stan-
                       dard input) and writes them to its standard output with all tabs replaced by the
                       appropriate number of spaces to keep the tabulator stops every 8 columns. With
                       the -t option you can define a different “scale factor” for the tabulator stops; a
                       common value is, e. g., “-t 4 ”, which sets up tabulator stops at columns 4, 8, 12,
                       16, etc.

                          B If you give several comma-separated numbers with -t , tabulator stops will
                            be set at the named columns exactly: “expand -t 4,12,32 ” sets tabulator stops
                            at columns 4, 12 and 32. Additional tabs in an input line will be replaced
                            by single spaces.

                         B The -i (“initial”) option causes only tabs at the beginning of the line to be
                           expanded to spaces.
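A quick sanity check of the expansion (a made-up one-line input; GNU coreutils expand assumed):

```shell
# "a", a tab, "b": with tab stops every 4 columns the tab after "a"
# becomes three spaces, so "b" ends up in column 5.
printf 'a\tb\n' | expand -t 4
```

Piping the result through “cat -A ” confirms that no tab characters remain.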

                          The unexpand command more or less reverses the effect of expand : All runs of tabs and spaces
                         at the beginning of the input lines (as usual taken from named files or standard in-
                         put) are replaced by the shortest sequence of tabs and spaces resulting in the same
                         indentation. A line starting with a tab, two spaces, another tab and nine spaces
                         will, for example—assuming standard tabulator stops every eight columns—be
                         replaced by a line starting with three tabs and a space. The -a (“all”) option causes
                         all sequences of two or more tabs and spaces to be “optimized”, not just those at
                         the beginning of a line.
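Again a minimal check with made-up input (GNU coreutils unexpand assumed):

```shell
# Eight leading spaces collapse into a single tab at the default tab stops
# (every 8 columns); the rest of the line is left alone.
printf '        hello\n' | unexpand
```

Piping the result through “od -c ” would show the leading “\t ”.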

                         C 7.14 [!2] The famous Roman general Julius Caesar supposedly used the fol-
                           lowing cipher to transmit secret messages: The letter “A” was replaced by
                           “D”, “B” by “E” and so on; “X” was replaced by “A”, “Y” by “B” and “Z”
                           by “C” (if we start from today’s 26-letter alphabet, disregarding the fact that
                            the ancient Romans did not use J, K, W, or Y). Imagine you are a program-
                            mer in Caesar’s legion. Which tr commands would you use to encrypt the
                           general’s messages and to decrypt them again?

                         C 7.15 [3] What tr command would you use to replace all vowels in a text by
                           a single one? Consider the (German) children’s game:

                                DREI CHINESEN MIT DEM KONTRABASS
                                DRAA CHANASAN MAT DAM KANTRABASS

                         C 7.16 [3] How would you transform a text file such that all punctuation is
                           removed and every word appears on a line on its own?

C 7.17 [2] Give a tr command to remove the characters “a ”, “z ”, and “- ” from
  the standard input.

C 7.18 [1] How would you convince yourself that unexpand really replaces
  spaces and tabulator characters by an “optimal” sequence?

7.4.2    Line by Line—fmt , pr and so on
While the commands from the previous section considered their input characters
singly or in small groups, Linux also contains many commands that deal with
whole input lines. Some of them are introduced in this and the subsequent sec-
tions.
   The fmt program wraps the input lines (as usual taken from the files mentioned
on the command line, or the standard input) such that they have a given maximal
line length—75 characters, unless otherwise specified using the -w option. It is
quite concerned with producing pleasant-looking output.
   Let us consider some examples of fmt (the frog0.txt file is equivalent to frog.txt ,
except that the first line of each paragraph is indented by two spaces):

$ head frog0.txt
The Frog King, or Iron Henry

  In olden times when wishing still helped one, there lived a king
whose daughters were all beautiful, but the youngest was so beautiful
that the sun itself, which has seen so much, was astonished whenever
it shone in her face.

  Close by the king's castle lay a great dark forest, and under an old
lime-tree in the forest was a well, and when the day was very warm,
the king's child went out into the forest and sat down by the side of

In the first example we reduce the line length to 40 characters:

$ fmt -w 40 frog0.txt
The Frog King, or Iron Henry

  In olden times when wishing still
  helped one, there lived a king
whose daughters were all beautiful,
but the youngest was so beautiful that
the sun itself, which has seen so much,
was astonished whenever it shone in

Note that the second line of the first paragraph is indented by spaces for no ap-
parent reason. This is due to the fact that fmt usually only considers those ranges
of lines for wrapping that are indented the same. The indented first line of the
first paragraph of the example file is therefore considered its own paragraph, and
all resulting lines are indented like the input paragraph’s first (and only) line.
The second and subsequent input lines are considered an independent, additional
“paragraph”, and wrapped accordingly (using the indentation of the second line).

B fmt tries to keep empty lines, the word spacing, and the indentation from
  the input. It prefers line breaks at the end of a sentence and tries to avoid
  them after the first and before the last word of a sentence. The “end of a
  sentence” according to fmt is either the end of a paragraph or a word ending
  with “. ”, “? ”, or “! ”, followed by two (!) spaces.

B There is more information about the way fmt works in the info page of the
  program (Hint to find it: fmt is part of the GNU “coreutils” collection.)

                                     Table 7.7: Options for pr (selection)

      Option   Result
      -𝑛       Creates 𝑛-column output (any positive integer value is permissible, but only 2 to 5 usu-
               ally make sense)
      -h   𝑡   (header) Outputs 𝑡 instead of the source file name at the top of each page
      -l   𝑛   (length) Sets the number of lines per page, default value is 66
      -n       (number) Labels each line with a five-digit line number separated from the rest of
               the line by a tab character
      -o   𝑛   (offset) Indents the text 𝑛 characters from the left margin
      -t       (omit) Suppresses the header and footer lines (5 each)
      -w   𝑛   (width) Sets the number of characters per line, default value is 72

                              In the next example we use the -c (“crown-margin mode”) option to avoid the
                            phenomenon we just explained:

                             $ fmt -c -w 40 frog0.txt
                             The Frog King, or Iron Henry

                               In olden times when wishing still
                             helped one, there lived a king whose
                             daughters were all beautiful, but the
                             youngest was so beautiful that the
                             sun itself, which has seen so much,
                             was astonished whenever it shone in

                            Here the indentation of the (complete) paragraph is taken from the first two lines
                            of the input; their indentation is kept, and the subsequent input lines follow the
                            indentation of the second.
                               Finally, an example featuring long lines:

                             $ fmt -w 100 frog0.txt
                             The Frog King, or Iron Henry

                               In olden times when wishing still helped one, there lived a king
                             whose daughters were all beautiful, but the youngest was so beautiful that the sun itself,
                             which has seen so much, was astonished whenever it shone in her face.

                               Close by the king's castle lay a great dark forest, and under an old
                             lime-tree in the forest was a well, and when the day was very warm, the king's child went out
                             into the forest and sat down by the side of the cool fountain, and when she was bored she took
                             a golden ball, and threw it up on high and caught it, and this ball was her favourite plaything.

                            We could have used -c here as well to avoid the “short” first lines of the para-
                            graphs. Without this option, the first line is once more considered a paragraph of
                            its own, and not amalgamated with the subsequent lines.
                                The name of the pr (“print”) command may be misleading at first. It does not,
                            as might be surmised, output files to the printer—this is the domain of the lpr
                            command. Instead, pr manages formatting a text for printed output, including
                            page breaks, indentation and header and footer lines. You can either specify input
                            files on the command line or have pr process its standard input (Table 7.7).
                                Here is a somewhat more complex example to illustrate pr ’s workings:

                             $ fmt -w 34 frog.txt | pr -h "Grimm Fairy-Tales" -2

                                                Table 7.8: Options for nl (selection)

         Option                             Result
         -b   𝑠           (body style)      Numbers the body lines according to 𝑠. Possible values for 𝑠 are a (num-
                                            ber all lines), t (number only non-blank lines), n (number no lines at
                                            all), and p ⟨regex⟩ (number only the lines matching regular expression
                                            ⟨regex⟩). The default value is t .
         -d   𝑝[𝑞]         (delimiter)      Use the two characters 𝑝𝑞 instead of “\: ” in delimiter lines. If only 𝑝 is
                                            given, 𝑞 remains set to “: ”.
         -f   𝑠          (footer style)     Formats the footer lines according to 𝑠. The possible values of 𝑠 corre-
                                            spond to those of -b . The default value is n .
         -h   𝑠         (header style)      Similar to -f , for header lines.
         -i   𝑛           (increment)       Increments the line number by 𝑛 for every line.
         -n   𝑓      (number format)        Determines the line number format. Possible values for 𝑓 : ln (flush-
                                            left with no leading zeroes), rn (flush-right with no leading zeroes), rz
                                            (flush-right with leading zeroes).
          -p                     (page)      Does not reset the line number to its original value between logical
                                             pages.
          -v   𝑛                             Starts numbering at line number 𝑛.
          -w   𝑛               (width)       Outputs an 𝑛-character line number (according to -n ).

2004-09-13 08:42                          Grimm Fairy-Tales                 Page 1

The Frog King, or Iron Henry                  »Whatever you will have, dear
                                              frog,« said she, »My clothes, my
In olden times when wishing                   pearls and jewels, and even the
still helped one, there lived a               golden crown which I am wearing.«
king whose daughters were all
beautiful, but the youngest                   The frog answered, »I do not care
was so beautiful that the sun                 for your clothes, your pearls
itself, which has seen so much,               and jewels, nor for your golden
was astonished whenever it shone              crown, but if you will love me
in her face.                                  and let me be your companion and

Here we use fmt to format the text of the Frog King in a long narrow column, and
pr to display the text in two columns.
    The nl command specialises in line numbering. If nothing else is specified, it
numbers the non-blank lines of its input (which as usual will be taken from named
files or else standard input) in sequence:

$ nl frog.txt
     1 The Frog King, or Iron Henry

     2    In olden times when wishing still helped one, there lived a king whose
     3    daughters were all beautiful, but the youngest was so beautiful that
     4    the sun itself, which has seen so much, was astonished whenever it
     5    shone in her face.

     6    Close by the king's castle lay a great dark forest, and under an old

This by itself is nothing you would not manage using “cat -b ”. For one, though,
nl allows for much closer control of the line numbering process:

                             $ nl -b a -n rz -w 5 -v 1000 -i 10 frog.txt
                            01000      The Frog King, or Iron Henry
                            01020      In olden times when wishing still helped one, there lived a king whose
                            01030      daughters were all beautiful, but the youngest was so beautiful that
                            01040      the sun itself, which has seen so much, was astonished whenever it
                            01050      shone in her face.
                            01070      Close by the king's castle lay a great dark forest, and under an old
                            01080      lime-tree in the forest was a well, and when the day was very warm,
                            01090      the king's child went out into the forest and sat down by the side of

                            Taken one by one, the options imply the following (see also Table 7.8): “-b a ” causes
                            all lines to be numbered, not just—as in the previous example—the non-blank
                            ones. “-n rz ” formats line numbers flush-right with leading zeroes, “-w 5 ” caters
                            for a five-column line number, and “-i 10 ” increments the line number by 10 per
                            line (not, as usual, 1).
                                In addition, nl can also handle per-page line numbers. This is organized using
                             the “magical” strings “\:\:\: ”, “\:\: ” and “\: ”, as shown in the following example:

                             $ cat nl-test
                             \:\:\:
                             Header of first page
                             \:\:
                             First line of first page
                             Second line of first page
                             Last line of first page
                             \:
                             Footer of first page
                             \:\:\:
                             Header of second page
                             (Two lines high)
                             \:\:
                             First line of second page
                             Second line of second page
                             Second-to-last line of second page
                             Last line of second page
                             \:
                             Footer of second page
                             (Two lines high)

                             Each (logical) page has a header and footer as well as a “body” containing the text
                            proper. The header is introduced using “\:\:\: ”, and separated from the body
                            using “\:\: ”. The body, in turn, ends at a “\: ” line. Header and footer may also
                            be omitted.
                               By default, nl numbers the lines on each page starting at 1; header and footer
                            lines are not numbered:
                            $ nl nl-test

                                      Header of first page

                                  1    First line of first page
                                  2    Second line of first page
                                  3    Last line of first page

                                      Footer of first page

                          Table 7.9: Options for wc (selection)

      Option              Result
     -l   (lines)        outputs line count
     -w   (words)        outputs word count
     -c   (characters)   outputs character count

           Header of second page
           (Two lines high)

      1     First line of second page
      2     Second line of second page
      3     Second-to-last line of second page
      4     Last line of second page

           Footer of second page
           (Two lines high)

The “\: …” separator lines are replaced by blank lines in the output.
   The name of the wc command is an abbreviation of “word count”. In spite of
this moniker, wc can determine not just a word count, but also a count of total
characters and lines in the input (files, standard input). This is done using the
options in Table 7.9. A “word”, from wc ’s point of view, is a sequence of one or
more letters. Without an option, all three values are output in the order given in
Table 7.9:

$ wc frog.txt
 144 1397 7210 frog.txt

With the options in Table 7.9, you can limit wc ’s output to only some of the values:

$ ls | wc -l

The example shows how to use wc to determine the number of entries in the current
directory by counting the lines in the output of the ls command.
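As a further illustration, here is a minimal sketch (made-up input) showing that wc -w counts whitespace-separated words, no matter how much whitespace separates them:

```shell
# Three words in spite of the double space between "two" and "three":
printf 'one two  three\n' | wc -w
```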

C 7.19 [1] Number the lines of file frog.txt with an increment of 2 per line
  starting at 100.

C 7.20 [3] How can you number the lines of a file in reverse order, similar to this:
         144   The Frog King, or Iron Henry
         142   In olden times when wishing still helped one, there lived a king whose
         141   daughters were all beautiful, but the youngest was so beautiful that

       (Hint: Two reversals give the original.)

C 7.21 [!2] How does the output of the “wc a.txt b.txt c.txt ” command differ
  from that of the “cat a.txt b.txt c.txt | wc ” command?

                      7.5      Data Management
                      7.5.1     Sorted Files—sort and uniq
                       The sort command lets you sort the lines of text files according to predetermined
                       criteria. The default setting is ascending (from A to Z) according to the ASCII
                       values1 of the first few characters of each line. This is why special characters such
                       as German umlauts are frequently sorted incorrectly. For example, the character
                       code of “Ä” is 143, so that character ends up far beyond “Z” with its character code
                       of 90. Even the lowercase letter “a” is considered “greater than” the uppercase
                       letter “Z”.

                       B Of course, sort can adjust itself to different languages and cultures. To sort
                         according to German conventions, set one of the environment variables LANG ,
                          LC_ALL , or LC_COLLATE to a value such as “de ”, “de_DE ”, or “de_DE.UTF-8 ” (the
                         actual value depends on your distribution). If you want to set this up for
                         a single sort invocation only, do

                              $ … | LC_COLLATE=de_DE.UTF-8 sort

                              The value of LC_ALL has precedence over the value of LC_COLLATE and that,
                              again, has precedence over the value of LANG . As a side effect, German sort
                              order causes the case of letters to be ignored when sorting.
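The difference is easy to observe with a made-up word list:

```shell
# In the C/POSIX locale, all capitals sort before all lowercase letters:
printf 'banana\nApple\napple\nBanana\n' | LC_ALL=C sort
```

A language-specific locale such as de_DE.UTF-8 (if installed on your system) would instead interleave the words case-insensitively.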

                        Unless you specify otherwise, the sort proceeds “lexicographically” considering
                        all of the input line. That is, if the initial characters of two lines compare equal,
                        the first differing character within the line governs their relative positioning. Of
                        course sort can sort not just according to the whole line, but more specifically ac-
      Sorting by fields cording to the values of certain “columns” or fields of a (conceptual) table. Fields
                        are numbered starting at 1; with the “-k 2 ” option, the first field would be ignored
                        and the second field of each line considered for sorting. If the values of two lines
                        are equal in the second field, the rest of the line will be looked at, unless you spec-
                        ify the last field to be considered using something like “-k 2,3 ”. Incidentally, it is
                        permissible to specify several -k options with the same sort command.
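A minimal sketch with made-up two-field data illustrates -k :

```shell
# Sort by the second field only (-k 2,2); with equal second fields,
# no further comparison would take place.
printf 'carol 20\nalice 30\nbob 10\n' | sort -k 2,2
```

This outputs the lines in the order bob, carol, alice, since only the numbers in the second field are compared (lexicographically).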

                       B In addition, sort supports an obsolete form of position specification: Here
                         fields are numbered starting at 0, the initial field is specified as “+ 𝑚” and
                          the final field as “- 𝑛”. As a further difference from the modern form, the
                         final field is specified “exclusively”—you give the first field that should not
                         be taken into account for sorting. The examples above would, respectively,
                         be “+1 ”, “+1 -3 ”, and “+1 -2 ”.

                       The space character serves as the separator between fields. If several spaces occur
                      in sequence, only the first is considered a separator; the others are considered
                      part of the value of the following field. Here is a little example, namely the list
                      of participants for the annual marathon run of the Lameborough Track & Field
                      Club. To start, we ensure that we use the system’s standard language environment
                      (“POSIX ”) by resetting the corresponding environment variables. (Incidentally, the
                      fourth column gives a runner’s bib number.)

                       $ unset LANG LC_ALL LC_COLLATE
                       $ cat participants.dat
                       Smith      Herbert Pantington AC                 123   Men
                       Prowler    Desmond Lameborough TFC               13    Men
                       Fleetman   Fred     Rundale Sportsters           217   Men
                       Jumpabout Mike      Fairing Track Society        154   Men

de Leaping   Gwen       Fairing Track Society   26    Ladies
Runnington   Vivian     Lameborough TFC         117   Ladies
Sweat        Susan      Rundale Sportsters      93    Ladies
Runnington   Kathleen   Lameborough TFC         119   Ladies
Longshanks   Loretta    Pantington AC           55    Ladies
O'Finnan     Jack       Fairing Track Society   45    Men
Oblomovsky   Katie      Rundale Sportsters      57    Ladies

                         1 Of course ASCII only goes up to 127. What is really meant here is ASCII together with whatever
                      extension for the characters with codes from 128 up is currently used, for example ISO-8859-1, also
                      known as ISO-Latin-1.

7.5 Data Management                                                                      109

Let’s try a list sorted by last name first. This is easy in principle, since the last
names are at the front of each line:
$ sort participants.dat
Fleetman   Fred     Rundale Sportsters          217   Men
Jumpabout Mike      Fairing Track Society       154   Men
Longshanks Loretta Pantington AC                55    Ladies
O'Finnan   Jack     Fairing Track Society       45    Men
Oblomovsky Katie    Rundale Sportsters          57    Ladies
Prowler    Desmond Lameborough TFC              13    Men
Runnington Kathleen Lameborough TFC             119   Ladies
Runnington Vivian   Lameborough TFC             117   Ladies
Smith      Herbert Pantington AC                123   Men
Sweat      Susan    Rundale Sportsters          93    Ladies
de Leaping Gwen     Fairing Track Society       26    Ladies

You will surely notice the two small problems with this list: “Oblomovsky” should
really be in front of “O’Finnan”, and “de Leaping” should end up at the front of
the list, not the end. These will disappear if we specify “English” sorting rules:

$ LC_COLLATE=en_GB sort participants.dat
de Leaping Gwen     Fairing Track Society       26    Ladies
Fleetman   Fred     Rundale Sportsters          217   Men
Jumpabout Mike      Fairing Track Society       154   Men
Longshanks Loretta Pantington AC                55    Ladies
Oblomovsky Katie    Rundale Sportsters          57    Ladies
O'Finnan   Jack     Fairing Track Society       45    Men
Prowler    Desmond Lameborough TFC              13    Men
Runnington Kathleen Lameborough TFC             119   Ladies
Runnington Vivian   Lameborough TFC             117   Ladies
Smith      Herbert Pantington AC                123   Men
Sweat      Susan    Rundale Sportsters          93    Ladies

(en_GB is short for “British English”; en_US , for “American English”, would also work
here.) Let’s sort according to the first name next:

$ sort -k 2,2 participants.dat
Smith      Herbert Pantington AC                123   Men
Sweat      Susan    Rundale Sportsters          93    Ladies
Prowler    Desmond Lameborough TFC              13    Men
Fleetman   Fred     Rundale Sportsters          217   Men
O'Finnan   Jack     Fairing Track Society       45    Men
Jumpabout Mike      Fairing Track Society       154   Men
Runnington Kathleen Lameborough TFC             119   Ladies
Oblomovsky Katie    Rundale Sportsters          57    Ladies
de Leaping Gwen     Fairing Track Society       26    Ladies
Longshanks Loretta Pantington AC                55    Ladies
Runnington Vivian   Lameborough TFC             117   Ladies

110                                                                                        7 Standard I/O and Filter Commands

                                            Table 7.10: Options for sort (selection)

       Option                             Result
       -b                     (blank)     Ignores leading blanks in field contents
       -d                (dictionary)     Sorts in “dictionary order”, i. e., only letters, digits, and spaces are taken
                                          into account
       -f                        (fold)   Makes uppercase and lowercase letters equivalent
       -i                     (ignore)    Ignores non-printing characters
       -k   ⟨field⟩[, ⟨field’⟩] (key)     Sort according to ⟨field⟩ (up to and including ⟨field’⟩)
       -n                   (numeric)     Considers the field value as a number and sorts according to its numeric
                                          value; leading blanks are ignored
       -o   ⟨file⟩           (output)     Writes the result to the named ⟨file⟩, whose name may match the original
                                          input file
       -r                   (reverse)     Sorts in descending order, i. e., Z to A
       -t ⟨char⟩         (terminate)      The ⟨char⟩ character is used as the field separator
       -u                   (unique)      Writes only the first of a sequence of equal output lines

                                      This illustrates the property of sort mentioned above: The first of a sequence of
                                      spaces is considered the separator; the others are made part of the following field’s
                                      value. As you can see, the first names are listed alphabetically, but only among
                                      entries whose last names have the same length. This can be fixed using the -b
                                      option, which ignores leading whitespace in the sort field:

                                      $ sort -b -k 2,2 participants.dat
                                      Prowler    Desmond Lameborough TFC            13    Men
                                      Fleetman   Fred     Rundale Sportsters        217   Men
                                      Smith      Herbert Pantington AC              123   Men
                                      O'Finnan   Jack     Fairing Track Society     45    Men
                                      Runnington Kathleen Lameborough TFC           119   Ladies
                                      Oblomovsky Katie    Rundale Sportsters        57    Ladies
                                      de Leaping Gwen     Fairing Track Society     26    Ladies
                                      Longshanks Loretta Pantington AC              55    Ladies
                                      Jumpabout Mike      Fairing Track Society     154   Men
                                      Sweat      Susan    Rundale Sportsters        93    Ladies
                                      Runnington Vivian   Lameborough TFC           117   Ladies

                                     This sorted list still has a little blemish; see Exercise 7.24.
More detailed field specification       The sort field can be specified in even more detail, as the following example
                                     shows:

                                      $ sort -br   -k 2.2 participants.dat
                                      Sweat        Susan    Rundale Sportsters      93    Ladies
                                      Fleetman     Fred     Rundale Sportsters      217   Men
                                      Longshanks   Loretta Pantington AC            55    Ladies
                                      Runnington   Vivian   Lameborough TFC         117   Ladies
                                      Jumpabout    Mike     Fairing Track Society   154   Men
                                      Prowler      Desmond Lameborough TFC          13    Men
                                      Smith        Herbert Pantington AC            123   Men
                                      de Leaping   Gwen     Fairing Track Society   26    Ladies
                                      Oblomovsky   Katie    Rundale Sportsters      57    Ladies
                                      Runnington   Kathleen Lameborough TFC         119   Ladies
                                      O'Finnan     Jack     Fairing Track Society   45    Men

                                    Here, the participants.dat file is sorted in descending order (-r ) according to the
                                    second character of the second table field, i. e., the second character of the first
                                    name (very meaningful!). In this case as well it is necessary to ignore leading
                                    spaces using the -b option. (The blemish from Exercise 7.24 still manifests itself
                                    here.)
                                       With the -t (“terminate”) option you can select an arbitrary character to serve
                    field separator as the field separator. This is a good idea in principle, since the fields then may

contain spaces. Here is a more usable (if less readable) version of our example file:

Smith:Herbert:Pantington AC:123:Men
Prowler:Desmond:Lameborough TFC:13:Men
Fleetman:Fred:Rundale Sportsters:217:Men
Jumpabout:Mike:Fairing Track Society:154:Men
de Leaping:Gwen:Fairing Track Society:26:Ladies
Runnington:Vivian:Lameborough TFC:117:Ladies
Sweat:Susan:Rundale Sportsters:93:Ladies
Runnington:Kathleen:Lameborough TFC:119:Ladies
Longshanks:Loretta: Pantington AC:55:Ladies
O'Finnan:Jack:Fairing Track Society:45:Men
Oblomovsky:Katie:Rundale Sportsters:57:Ladies

Sorting by first name now leads to correct results using “LC_COLLATE=en_GB sort -t:
-k2,2 ”. It is also a lot easier to sort, e. g., by a participant’s number (now field 4, no
matter how many spaces occur in their club’s name):

$ sort -t: -k4 participants0.dat
Runnington:Vivian:Lameborough TFC:117:Ladies
Runnington:Kathleen:Lameborough TFC:119:Ladies
Smith:Herbert:Pantington AC:123:Men
Prowler:Desmond:Lameborough TFC:13:Men
Jumpabout:Mike:Fairing Track Society:154:Men
Fleetman:Fred:Rundale Sportsters:217:Men
de Leaping:Gwen:Fairing Track Society:26:Ladies
O'Finnan:Jack:Fairing Track Society:45:Men
Longshanks:Loretta: Pantington AC:55:Ladies
Oblomovsky:Katie:Rundale Sportsters:57:Ladies
Sweat:Susan:Rundale Sportsters:93:Ladies

Of course the “number” sort is done lexicographically, unless otherwise specified—“117”
and “123” are put before “13”, and that in turn before “154”. This can be fixed by
giving the -n option to force a numeric comparison:

$ sort -t: -k4 -n participants0.dat
Prowler:Desmond:Lameborough TFC:13:Men
de Leaping:Gwen:Fairing Track Society:26:Ladies
O'Finnan:Jack:Fairing Track Society:45:Men
Longshanks:Loretta: Pantington AC:55:Ladies
Oblomovsky:Katie:Rundale Sportsters:57:Ladies
Sweat:Susan:Rundale Sportsters:93:Ladies
Runnington:Vivian:Lameborough TFC:117:Ladies
Runnington:Kathleen:Lameborough TFC:119:Ladies
Smith:Herbert:Pantington AC:123:Men
Jumpabout:Mike:Fairing Track Society:154:Men
Fleetman:Fred:Rundale Sportsters:217:Men

These and some more important options for sort are shown in Table 7.10; studying
the program’s documentation is well worthwhile. sort is a versatile and powerful
command which will save you a lot of work.
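As a sketch of how several of the options from Table 7.10 combine, consider a small invented results file (name and contents are hypothetical, not from the participants list):

```shell
# Hypothetical results file: name:score
printf '%s\n' 'ada:17' 'grace:3' 'alan:102' >scores.txt

# -t: sets the field separator, -k 2,2 picks the score field,
# -n compares numerically, -r reverses the order
sort -t: -k 2,2 -nr scores.txt
# alan:102
# ada:17
# grace:3
```

Without -n, the lexicographic order would put “102” before “17” and “3”.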
    The uniq command does the important job of letting through only the first of a            uniq   command
sequence of equal lines in the input (or the last, just as you prefer). What is con-
sidered “equal” can, as usual, be specified using options. uniq differs from most
of the programs we have seen so far in that it does not accept an arbitrary number
of named input files but just one; a second file name, if it is given, is considered
the name of the desired output file (if not, standard output is assumed). If no file
is named in the uniq call, uniq reads standard input (as it ought).

         uniq works best if the input lines are sorted such that all equal lines occur one
      after another. If that is not the case, it is not guaranteed that each line occurs only
      once in the output:

      $ cat uniq-test
      $ uniq uniq-test

      Compare this to the output of “sort -u ”:

      $ sort -u uniq-test
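To make the difference concrete, here is a small sketch with an invented file (fruit.txt) containing non-adjacent duplicates:

```shell
# Invented input with non-adjacent duplicate lines
printf '%s\n' apple pear apple apple pear >fruit.txt

uniq fruit.txt      # collapses only *adjacent* duplicates
# apple
# pear
# apple
# pear

sort -u fruit.txt   # sorts first, so every line really appears once
# apple
# pear
```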

      C 7.22 [!2] Sort the list of participants in participants0.dat (the file with colon
        separators) according to the club’s name and, within clubs, the last and first
        names of the runners (in that order).

      C 7.23 [3] How can you sort the list of participants by club name in ascending
        order and, within clubs, by number in descending order? (Hint: Read the
        sort documentation.)
      C 7.24 [!2] What is the “blemish” alluded to in the examples and why does it
        occur?

      C 7.25 [2] A directory contains files with the following names:
             01-2002.txt    01-2003.txt   02-2002.txt   02-2003.txt
             03-2002.txt    03-2003.txt   04-2002.txt   04-2003.txt
             11-2002.txt    11-2003.txt   12-2002.txt   12-2003.txt

            Give a sort command to sort the output of ls into “chronologically correct”
            order.

      C 7.26 [3] How can you produce a sorted list of all words in a text file? Each
        word should occur only once in the list. (Hint: Exercise 7.16)

7.5.2     Columns and Fields—cut , paste etc.
While you can locate and “cut out” lines of a text file using grep , the cut command Cutting columns
works through a text file “by column”. This works in one of two ways:
   One possibility is the absolute treatment of columns. These columns corre- Absolute columns
spond to single characters in a line. To cut out such columns, the column number
must be given after the -c option (“column”). To cut several columns in one step,
these can be specified as a comma-separated list. Even column ranges may be
specified:

$ cut -c 12,1-5 participants.dat
de LeG

In this example, the first letter of the first name and the first five letters of the
last name are extracted. It also illustrates the notable fact that the output always
contains the columns in the same order as the input. Even if the selected column
ranges overlap, every input character is output at most once:

$ cut -c 1-5,2-6,3-7 participants.dat
de Leap

   The second method is to cut relative fields, which are delimited by separator Relative fields
characters. If you want to cut delimited fields, cut needs the -f (“field”) option
and the desired field number. The same rules as for columns apply. The -c and -f
options are mutually exclusive.
   The default separator is the tab character; other separators may be specified separators
with the -d option (“delimiter”):

$ cut -d: -f 1,4 participants0.dat
de Leaping:26

In this way, the participants’ last names (column 1) and numbers (column 4) are
taken from the list. For readability, only the first few lines are displayed.
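Field ranges work just like column ranges. Here is a sketch on a single invented record shaped like a participants0.dat line:

```shell
# One hypothetical record in the same colon-separated format
line='Smith:Herbert:Pantington AC:123:Men'

printf '%s\n' "$line" | cut -d: -f2-3   # a closed range of fields
# Herbert:Pantington AC
printf '%s\n' "$line" | cut -d: -f4-    # an open-ended range, up to the last field
# 123:Men
```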

B Incidentally, using the --output-delimiter option you can specify a different
  separator character for the output fields than is used for the input fields:

        $ cut -d: --output-delimiter=': ' -f 1,4 participants0.dat
        Smith: 123
        Prowler: 13
        Fleetman: 217
        Jumpabout: 154
        de Leaping: 26

                                      B If you really want to change the order of columns and fields, you have to
                                        bring in the big guns, such as awk or perl ; you could do it using the paste
                                        command, which will be introduced presently, but that is rather tedious.

       Suppressing no-field lines      When files are treated by fields (rather than columns), the -s option (“sepa-
                                    rator”) is helpful. If “cut -f ” encounters lines that do not contain the separator
                                    character, these are normally output in their entirety; -s suppresses these lines.
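A minimal sketch of the difference, using an invented two-line file:

```shell
# Invented input: one line contains the separator, one does not
printf '%s\n' 'alpha:beta' 'no separator here' >mixed.txt

cut -d: -f1 mixed.txt      # lines without “:” pass through unchanged
# alpha
# no separator here

cut -s -d: -f1 mixed.txt   # -s suppresses lines without the separator
# alpha
```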
             Joining lines of files    The paste command joins the lines of the specified files. It is thus frequently
                                    used together with cut . As you will have noticed immediately, paste is not a filter
                                    command. You may however give a minus sign in place of one of the input file-
                                    names for paste to read its standard input at that point. Its output always goes to
                                    standard output.
           Join files “in parallel”    As we said, paste works by lines. If two file names are specified, the first line
                                    of the first file and the first of the second are joined (using a tab character as the
                                    separator) to form the first line of the output. The same is done with all other lines
                         separator in the files. To specify a different separator, use the -d option.
                                       By way of an example, we can construct a version of the list of marathon run-
                                    ners with the participants’ numbers in front:

                                      $ cut -d: -f4 participants0.dat >number.dat
                                      $ cut -d: -f1-3,5 participants0.dat \
                                      >   | paste -d: number.dat - >p-number.dat
                                      $ cat p-number.dat
                                      123:Smith:Herbert:Pantington AC:Men
                                      13:Prowler:Desmond:Lameborough TFC:Men
                                      217:Fleetman:Fred:Rundale Sportsters:Men
                                      154:Jumpabout:Mike:Fairing Track Society:Men
                                      26:de Leaping:Gwen:Fairing Track Society:Ladies
                                      117:Runnington:Vivian:Lameborough TFC:Ladies
                                      93:Sweat:Susan:Rundale Sportsters:Ladies
                                      119:Runnington:Kathleen:Lameborough TFC:Ladies
                                      55:Longshanks:Loretta: Pantington AC:Ladies
                                      45:O'Finnan:Jack:Fairing Track Society:Men
                                      57:Oblomovsky:Katie:Rundale Sportsters:Ladies

                                      This file may now conveniently be sorted by number using “sort -n p-number.dat ”.
                Join files serially       With -s (“serial”), the given files are processed in sequence. First, all the lines
                                      of the first file are joined into one single line (using the separator character), then
                                      all lines from the second file make up the second line of the output etc.

                                      $ cat list1
                                      Wood
                                      Bell
                                      Potter
                                      $ cat list2
                                      Keeper
                                      Chaser
                                      Seeker
                                      $ paste -s list*
                                      Wood    Bell     Potter
                                      Keeper Chaser Seeker

                                    All files matching the list* wildcard pattern—in this case, list1 and list2 —are
                                    joined using paste . The -s option causes every line of these files to make up one
                                    column of the output.
      “Relational” joining of files    The join command joins the lines of files, too, but in a much more sophisticated
                                    manner. Instead of just joining the first lines, second lines, …, it considers one
                                    designated field per line and joins two lines only if the values in these fields are
                                    equal. Hence, join implements the eponymous operator from relational algebra,

                                       Table 7.11: Options for join (selection)

       Option               Result
       -j1 𝑛                Uses field 𝑛 of the first file as the “join field” (𝑛 ≥ 1). Synonym: -1 𝑛.
       -j2 𝑛                Uses field 𝑛 of the second file as the “join field” (𝑛 ≥ 1). Synonym: -2 𝑛.
       -j 𝑛        (join)   Abbreviation for “-j1 𝑛 -j2 𝑛”
       -o 𝑓     (output)    Output line specification. 𝑓 is a comma-separated sequence of field specifica-
                            tions, where each field specification is either the digit “0 ” or a field number 𝑚.𝑛.
                            “0 ” is the “join field”, 𝑚 is 1 or 2, and 𝑛 is a field number in the first or second
                            file, respectively.
       -t   𝑐               The 𝑐 character will be used as the field separator for input and output.

as seen in SQL databases—even though the actual operation is a lot cruder and
more inefficient than with a “real” database.
   Even so, the join command does come in useful. Imagine that the Example
big day has arrived and the Lameborough TFC’s marathon has been run. The
umpires have been diligent and have not only timed how long everybody took, but
also entered the times into a file times.dat . The first column is always a participant’s
number, the second the time achieved (in whole seconds, for simplicity):

$ cat times.dat

Now we want to join this file with the list of participants, in order to assign each
time to the corresponding participant. To do so, we must first sort the result file
by participant number:

$ sort -n times.dat >times-s.dat

Next we can use join to join the lines of file times-s.dat to the corresponding lines of
the modified list of participants from the paste example—join presumes by default
that the input files are sorted by the value of the “join field”, and that the “join
field” is the first field of each line.
$ cat p-number.dat
123:Smith:Herbert:Pantington AC:Men
13:Prowler:Desmond:Lameborough TFC:Men
217:Fleetman:Fred:Rundale Sportsters:Men
154:Jumpabout:Mike:Fairing Track Society:Men
26:de Leaping:Gwen:Fairing Track Society:Ladies
117:Runnington:Vivian:Lameborough TFC:Ladies
93:Sweat:Susan:Rundale Sportsters:Ladies
119:Runnington:Kathleen:Lameborough TFC:Ladies
55:Longshanks:Loretta: Pantington AC:Ladies
45:O'Finnan:Jack:Fairing Track Society:Men
57:Oblomovsky:Katie:Rundale Sportsters:Ladies
$ sort -n p-number.dat \
> | join -t: times-s.dat - >p-times.dat

      $ cat p-times.dat
      13:8832:Prowler:Desmond:Lameborough TFC:Men
      26:9129:de Leaping:Gwen:Fairing Track Society:Ladies
      45:8445:O'Finnan:Jack:Fairing Track Society:Men
      57:9111:Oblomovsky:Katie:Rundale Sportsters:Ladies
      93:8641:Sweat:Susan:Rundale Sportsters:Ladies
      117:8954:Runnington:Vivian:Lameborough TFC:Ladies
      119:8830:Runnington:Kathleen:Lameborough TFC:Ladies
      123:8517:Smith:Herbert:Pantington AC:Men
      154:8772:Jumpabout:Mike:Fairing Track Society:Men
      217:8533:Fleetman:Fred:Rundale Sportsters:Men

      The resulting file p-times.dat now just needs to be sorted by time:

      $ sort -t: -k2,2 p-times.dat
      45:8445:O'Finnan:Jack:Fairing Track Society:Men
      123:8517:Smith:Herbert:Pantington AC:Men
      217:8533:Fleetman:Fred:Rundale Sportsters:Men
      93:8641:Sweat:Susan:Rundale Sportsters:Ladies
      154:8772:Jumpabout:Mike:Fairing Track Society:Men
      119:8830:Runnington:Kathleen:Lameborough TFC:Ladies
      13:8832:Prowler:Desmond:Lameborough TFC:Men
      117:8954:Runnington:Vivian:Lameborough TFC:Ladies
      57:9111:Oblomovsky:Katie:Rundale Sportsters:Ladies
      26:9129:de Leaping:Gwen:Fairing Track Society:Ladies

      This is a nice example of how Linux’s standard tools make even fairly complicated
      text and data processing possible. In “real life”, one would use shell scripts to
      prepare these processing steps and automate them as far as possible.
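The -o option from Table 7.11 can be sketched with two invented, presorted lookup files (names and contents are hypothetical):

```shell
# Invented lookup tables, both already sorted on the join field
printf '%s\n' '1:alice' '2:bob'   >names.txt
printf '%s\n' '1:admin' '2:guest' >roles.txt

# “0” is the join field itself; 1.2 and 2.2 are field 2 of each file
join -t: -o 0,1.2,2.2 names.txt roles.txt
# 1:alice:admin
# 2:bob:guest
```

The same separator (-t:) is used for input and output.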

      C 7.27 [!2] Generate a new version of the participants.dat file (the one with
        fixed-width columns) in which the participant numbers and club affiliations
        do not occur.

      C 7.28 [!2] Generate a new version of the participants0.dat file (the one with
        fields separated using colons) in which the participant numbers and club
        affiliations do not occur.

      C 7.29 [3] Generate a version of participants0.dat in which the fields are not
        separated by colons but by the string “,␣ ” (a comma followed by a space

      C 7.30 [3] How many groups are used as primary groups by users on your
        system? (The primary group of a user is the fourth field in /etc/passwd .)

Commands in this Chapter
cat      Concatenates files (among other things)                          cat (1) 94
cut      Extracts fields or columns from its input                       cut (1) 112
expand   Replaces tab characters in its input by an equivalent number of spaces
                                                                      expand (1) 102
fmt      Wraps the lines of its input to a given width                   fmt (1) 103
hd       Abbreviation for hexdump                                     hexdump (1) 98
head     Displays the beginning of a file                                head (1) 96
hexdump Displays file contents in hexadecimal (octal, …) form         hexdump (1) 98
join     Joins the lines of two files according to relational algebra join (1) 114
od       Displays binary data in decimal, octal, hexadecimal, … formats
                                                                           od (1) 97
paste    Joins lines from different input files                        paste (1) 114
pr       Prepares its input for printing—with headers, footers, etc.      pr (1) 104
reset    Resets a terminal’s character set to a “reasonable” value       tset (1) 95
sort     Sorts its input by line                                        sort (1) 107
tac      Displays a file back to front                                    tac (1) 95
tail     Displays a file’s end                                           tail (1) 96
tr       Substitutes or deletes characters on its standard input          tr (1) 100
unexpand “Optimises” tabs and spaces in its input lines             unexpand (1) 102
uniq     Replaces sequences of identical lines in its input by single specimens
                                                                        uniq (1) 111
wc       Counts the characters, words and lines of its input              wc (1) 107

Summary

   • Every Linux program supports the standard I/O channels stdin , stdout , and
     stderr .
   • Standard output and standard error output can be redirected using opera-
     tors > and >> , standard input using operator < .
   • Pipelines can be used to connect the standard output and input of programs
     directly (without intermediate files).
   • Using the tee command, intermediate results of a pipeline can be stored in
     files.
   • Filter commands (or “filters”) read their standard input, manipulate it, and
     write the results to standard output.
   • The tr command substitutes or deletes single characters. expand and unexpand
     convert tabs to spaces and vice-versa.
   • With pr , you can prepare data for printing—not actually print it.
   • wc can be used to count the lines, words and characters of the standard input
     (or a number of named files).
   • sort is a versatile program for sorting.
   • The cut command cuts specified ranges of columns or fields from every line
     of its input.
   • With paste , the lines of files can be joined.

More About The Shell

8.1     Simple Commands: sleep , echo , and date . . .   .   .   .   .   .   .   .   .   .   120
8.2     Shell Variables and The Environment. . . .       .   .   .   .   .   .   .   .   .   121
8.3     Command Types—Reloaded . . . . . . .             .   .   .   .   .   .   .   .   .   123
8.4     The Shell As A Convenient Tool. . . . . .        .   .   .   .   .   .   .   .   .   124
8.5     Commands From A File . . . . . . . .             .   .   .   .   .   .   .   .   .   128
8.6     The Shell As A Programming Language. . .         .   .   .   .   .   .   .   .   .   129
      8.6.1 Foreground and Background Processes .        .   .   .   .   .   .   .   .   .   132

Goals
      • Knowing about shell variables and environment variables
      • Handling foreground and background processes

Prerequisites
      • Basic shell knowledge (Chapter 3)
      • File management and simple filter commands (Chapter 6, Chapter 7)
      • Use of a text editor (Chapter 5)

grd1-shell2.tex   (be27bba8095b329b )

                        8.1     Simple Commands: sleep , echo , and date
                        To give you some tools for experiments, we shall now explain some very simple
                        commands:
                        sleep This command does nothing for the number of seconds specified as the
                        argument. You can use it if you want your shell to take a little break:

                         $ sleep 10
                                                                Nothing happens for approximately 10 seconds
                         $ _

      Output arguments echo    The command echo outputs its arguments (and nothing else), separated by
                        spaces. It is still interesting and useful, since the shell replaces variable references
                        (see Section 8.2) and similar things first:

                         $ p=Planet
                         $ echo Hello $p
                         Hello Planet
                         $ echo Hello ${p}oid
                         Hello Planetoid

                        (The second echo illustrates what to do if you want to append something directly
                        to the value of a variable.)

                         B If echo is called with the -n option, it does not write a line terminator at the
                           end of its output:

                               $ echo -n Hello
                               Hello$ _

          date and time date   The date command displays the current date and time. You have consider-
                        able leeway in determining the format of the output—call “date --help ”, or read
                        the online documentation using “man date ”.

                         B (When reading through this manual for the second time:) In particular, date
                           serves as a world clock, if you first set the TZ environment variable to the
                           name of a time zone or important city (usually capital):

                               $ date
                               Thu Oct 5 14:26:07 CEST 2006
                               $ export TZ=Asia/Tokyo
                               $ date
                               Thu Oct 5 21:26:19 JST 2006
                               $ unset TZ

                               You can find out about valid time zone and city names by rooting around
                               in /usr/share/zoneinfo .

      Set the system time While every user is allowed to read the system time, only the system administra-
                        tor root may change the system time using the date command and an argument of
                        the form MMDDhhmm , where MM is the calendar month, DD the calendar day, hh the hour,
                        and mm the minute. You can optionally add two digits for the year (plus possibly an-
                        other two for the century) and the seconds (separated by a dot), which should,
                        however, prove necessary only in very rare cases.

$ date
Thu Oct 5 14:28:13 CEST 2006
$ date 08181715
date: cannot set date: Operation not permitted
Fri Aug 18 17:15:00 CEST 2006

B The date command only changes the internal time of the Linux system. This
  time will not necessarily be transferred to the CMOS clock on the computer’s
  mainboard, so a special command may be required to do so. Many distri-
  butions will do this automatically when the system is shut down.

C 8.1 [!3] Assume now is 22 October 2003, 12:34 hours and 56 seconds. Study
  the date documentation and state formatting instructions to achieve the fol-
  lowing output:
        1. 22-10-2003
        2. 03-294 (WK43) (Two-digit year, number of day within year, calendar week)
        3. 12h34m56s

C 8.2 [!2] What time is it now in Los Angeles?

8.2     Shell Variables and The Environment
Like most common shells, bash has features otherwise found in programming lan-
guages. For example, it is possible to store pieces of text or numbers in variables
and retrieve them later. Variables also control various aspects of the operation of
the shell itself.
   Within the shell, a variable is set by means of a command like “foo=bar ” (this Setting variables
command sets the foo variable to the textual value bar ). Take care not to insert
spaces in front of or behind the equals sign! You can retrieve the value of the
variable by using the variable name with a dollar sign in front:

$ foo=bar
$ echo foo
foo
$ echo $foo
bar

(note the difference).
    We distinguish environment variables from shell variables. Shell variables environment variables
are only visible in the shell in which they have been defined. On the other hand, shell variables
environment variables are passed to the child process when an external command
is started and can be used there. (The child process does not have to be a shell;
every Linux process has environment variables). All the environment variables of
a shell are also shell variables but not vice versa.
    Using the export command, you can declare an existing shell variable an envi- export
ronment variable:
$ foo=bar                                                    foo is now a shell variable
$ export foo                                     foo   is now an environment variable

Or you define a new variable as a shell and environment variable at the same time:

                                                  Table 8.1: Important Shell Variables

                               Variable      Meaning
                                  PWD        Name of the current directory
                                EDITOR       Name of the user’s favourite editor
                                  PS1        Shell command prompt template
                                  UID        Current user’s numerical user ID
                                 HOME        Current user’s home directory
                                 PATH        List of directories containing executable programs that are
                                             eligible as external commands
                                LOGNAME      Current user’s user name (again)

                          $ export foo=bar

                          The same works for several variables simultaneously:

                          $ export foo baz
                          $ export foo=bar baz=quux

                             You can display all environment variables using the export command (with no
                          parameters). The env command (also with no parameters) also displays the cur-
                          rent environment. All shell variables (including those which are also environment
                          variables) can be displayed using the set command. The most common variables
                          and their meanings are shown in Table 8.1.

                          B The set command also does many other strange and wonderful things. You
                            will encounter it again in the Linup Front training manual Advanced Linux,
                            which covers shell programming.

                          B env , too, is actually intended to manipulate the process environment rather
                            than just display it. Consider the following example:

                                $ env foo=bar bash                                 Launch child shell with foo
                                $ echo $foo
                                bar
                                $ exit                                                   Back to the parent shell
                                $ echo $foo
                                                                                                     Not defined
                                $ _

                          B At least with bash (and relations) you don’t really need env to execute com-
                            mands with an extended environment – a simple

                                $ foo=bar bash

                                does the same thing. However, env also allows you to remove variables from
                                the environment temporarily (how?).

      Delete a variable      If you have had enough of a shell variable, you can delete it using the unset
                          command. This also removes it from the environment. If you want to remove a
                          variable from the environment but keep it on as a shell variable, use “export -n ”:

                          $ export foo=bar                                      foo is an environment variable
                          $ export -n foo                                          foo is a shell variable (only)
                          $ unset foo                                               foo is gone and lost forever

8.3      Command Types—Reloaded
One application of shell variables is controlling the shell itself. Here’s another ex- Controlling the shell
ample: As we discussed in Chapter 3, the shell distinguishes internal and external
commands. External commands correspond to executable programs, which the
shell looks for in the directories that make up the value of the PATH environment
variable. Here is a typical value for PATH :

$ echo $PATH
/home/joe/bin:/usr/local/bin:/usr/bin:/bin:/usr/games

Individual directories are separated in the list by colons, therefore the list in the
example consists of five directories. If you enter a command like

$ ls

the shell knows that this isn’t an internal command (it knows its internal com-
mands) and thus begins to search the directories in PATH , starting with the leftmost
directory. In particular, it checks whether the following files exist:

/home/joe/bin/ls                                                               Nope …
/usr/local/bin/ls                                                      Still no luck …
/usr/bin/ls                                                          Again no luck …
/bin/ls                                                                        Gotcha!
                                               The directory /usr/games is not checked.

This implies that the /bin/ls file will be used to execute the ls command.

B Of course this search is a fairly involved process, which is why the shell
  prepares for the future: If it has once identified the /bin/ls file as the im-
  plementation of the ls command, it remembers this correspondence for the
  time being. This process is called “hashing”, and you can see that it did take
  place by applying type to the ls command.

        $ type ls
        ls is hashed (/bin/ls)

B The hash command tells you which commands your bash has “hashed” and
  how often they have been invoked in the meantime. With “hash -r ” you can
  delete the shell’s complete hashing memory. There are a few other options
  which you can look up in the bash manual or find out about using “help hash ”.
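A quick way to watch this is to run a command in a child bash and then list its hash table; the sketch below does exactly that (the path shown for ls will differ from system to system):

```shell
# Run "ls" once inside a child bash, then print the hash table;
# the table now records where ls was found (path differs per system)
bash -c 'ls /dev/null > /dev/null
hash'
```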

B Strictly speaking, the PATH variable does not even need to be an environment
  variable—for the current shell a shell variable would do just fine (see Exer-
  cise 8.5). However it is convenient to define it as an environment variable so
  the shell’s child processes (often also shells) use the desired value.

  If you want to find out exactly which program the shell uses for a given external
command, you can use the which command:

$ which grep

which uses the same method as the shell—it starts at the first directory in PATH and
checks whether the directory in question contains an executable file with the same
name as the desired command.

      B which knows nothing about the shell’s internal commands; even though
        something like “which test ” returns “/usr/bin/test ”, this does not imply
        that this program will, in fact, be executed, since internal commands have
        precedence. If you want to know for sure, you need to use the “type ” shell
        command.
         The whereis command not only returns the names of executable programs, but
      also documentation (man pages), source code and other interesting files pertain-
      ing to the command(s) in question. For example:

      $ whereis passwd
      passwd: /usr/bin/passwd /etc/passwd /etc/ /usr/share/passwd 
        /usr/share/man/man1/passwd.1.gz /usr/share/man/man1/passwd.1ssl.gz 

      This uses a hard-coded method which is explained (sketchily) in whereis (1).

      C 8.3 [!2] Convince yourself that passing (or not passing) environment and
        shell variables to child processes works as advertised, by working through
        the following command sequence:

            $ foo=bar                                              foo is a shell variable
            $ bash                                              New shell (child process)
            $ echo $foo
                                                                         foo is not defined
            $ exit                                                 Back to the parent shell
            $ export foo                                   foo is an environment variable
            $ bash                                               New shell (child process)
            $ echo $foo
            bar                                   Environment variable was passed along
            $ exit                                              Back to the parent shell

      C 8.4 [!2] What happens if you change an environment variable in the child
        process? Consider the following command sequence:

            $ foo=bar                                              foo is a shell variable
            $ bash                                              New shell (child process)
            $ echo $foo
            bar                                   Environment variable was passed along
            $ foo=baz                                                        New value
            $ exit                                              Back to the parent shell
            $ echo $foo                                               What do we get??

      C 8.5 [2] Make sure that the shell’s command line search works even if PATH is
        a “only” simple shell variable rather than an environment variable. What
        happens if you remove PATH completely?

      C 8.6 [!1] Which executable programs are used to handle the following com-
        mands: fgrep , sort , mount , xterm

      C 8.7 [!1] Which files on your system contain the documentation for the
        “crontab ” command?

      8.4    The Shell As A Convenient Tool
      Since the shell is the tool many Linux users use most often, its developers have
      spared no trouble to make its use convenient. Here are some more useful trifles:

Command Editor You can edit command lines like in a simple text editor. Hence,
you can move the cursor around in the input line and delete or add characters
arbitrarily before finishing the input using the return key. The behaviour of this
editor can be adapted to that of the most popular editors on Linux (Chapter 5)
using the “set -o vi ” and “set -o emacs ” commands.

Aborting Commands With so many Linux commands around, it is easy to con-
fuse a name or pass a wrong parameter. Therefore you can abort a command
while it is being executed. You simply need to press the Ctrl + c keys at the same
time.

The History The shell remembers ever so many of your most recent commands as
part of the “history”, and you can move through this list using the ↑ and ↓ cur-
sor keys. If you find a previous command that you like you can either re-execute
it unchanged using ↩ , or else edit it as described above. You can search the list
“incrementally” using Ctrl + r —simply type a sequence of characters, and the
shell shows you the most recently executed command containing this sequence.
The longer your sequence, the more precise the search.

B When you log out of the system, the shell stores the history in the hidden
  file ~/.bash_history and makes it available again after your next login. (You
  may use a different file name by setting the HISTFILE variable to the name in
  question.)

B A consequence of the fact that the history is stored in a “plain” file is that you
  can edit it using a text editor (Chapter 5 tells you how). So in case you acci-
  dentally enter your password on the command line, you can (and should!)
  remove it from the history manually—in particular, if your system is one of
  the more freewheeling ones where home directories are visible to anybody.

B By default, the shell remembers the last 500 commands; you can change this
B By default, the shell remembers the last 500 commands; you can change this
  by putting the desired number into the HISTSIZE variable. The HISTFILESIZE
  variable specifies how many commands to write to the HISTFILE file – usu-
  ally 500 as well.
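For instance, to keep a longer history you could set both variables; a sketch (2000 is an arbitrary choice, and the assignments belong in ~/.bashrc if they are to survive logouts):

```shell
# Keep more history (sketch; the values are arbitrary)
HISTSIZE=2000          # commands the running shell remembers
HISTFILESIZE=2000      # commands saved to the history file
```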

   Besides the arrow keys you can access the history also via “magical” character
sequences in new commands. The shell replaces these character sequences first,
immediately after the command line has been read. Replacement proceeds in two
steps:
   • At first the shell determines which command from the history to use for
     the replacement. The !! sequence stands for the immediately preceding
     command, !- 𝑛 refers to the 𝑛th command before the current one (!-2 , for
     example, to the penultimate one), and ! 𝑛 to the command with number 𝑛
     in the history. (The history command outputs the whole history including
     numbers for the commands.) !xyz selects the most recent command starting
     with xyz , and !?xyz the most recent command containing xyz .
   • After that, the shell decides which part of the selected command will be
     “recycled” and how. If you do not specify anything else, the complete com-
     mand will be inserted; otherwise there are various selection methods. All
     these selection methods are separated from the command selection charac-
     ter sequence by a colon (“: ”).
      𝑛 Selects the 𝑛-th word. Word 0 is the command itself.
      ^   Selects the first word (immediately after the command).
      $   Selects the final word.
      𝑚- 𝑛 Selects words 𝑚 through 𝑛.

                              𝑛* Selects all words starting at word 𝑛.
                              𝑛- Selects all words starting at word 𝑛 except for the final one.
                              Some examples for clarity:

                              !-2:$   Picks the final word of the penultimate command.
                              !!:0-    Picks the complete immediately preceding command except for the
                                      final word.
                              !333^   Picks the first word from command 333.
                              The final example, incidentally, is not a typo; if the first character from
                              the intra-command selection is from the list ^$*-% you may leave out the
                              colon.—If you like, look at the bash documentation (section HISTORY) to
                              find out what else the shell has in store. As far as we (and the LPI) are con-
                              cerned you do not need to learn all of this off by heart.

                         B The history is one of the things that bash took over from the C shell, and
                           whoever did not use Unix during the 1980s may have some trouble imag-
                           ining what the world looked like before interactive command line editing
                           was invented. (For Windows users, this time doesn’t even go that far back.)
                           During that time, the history with all its ! selectors and transformations was
                           widely considered the best idea since sliced bread; today its documentation
                           exudes the sort of morbid fascination one would otherwise associate with
                           the user manual for a Victorian steam engine.

                         B Some more remarks concerning the history command: An invocation like
                               $ history 33

                              (with a number as the parameter) only outputs that many history lines.
                              “history -c ” empties the history completely. There are some more options;
                              check the bash documentation or try “help history ”.

         Completing command Autocompletion A massive convenience is bash ’s ability to automatically com-
       and file names       plete command and file names. If you hit the Tab key, the shell completes an
                        incomplete input if the continuation can be identified uniquely. For the first word
                        of a command, bash considers all executable programs, within the rest of the com-
                        mand line all the files in the current or specified directory. If several commands
                        or files exist whose names start out equal, the shell completes the name as far as
                        possible and then signals acoustically that the command or file name may still be
                        incomplete. Another Tab press then lists the remaining possibilities.

                         B It is possible to adapt the shell’s completion mechanism to specific pro-
                            grams. For example, on the command line of an FTP client it might offer
                           the names of recently visited FTP servers in place of file names. Check the
                           bash documentation for details.

                           Table 8.2 gives an overview of the most important key strokes within bash .

                        Multiple Commands On One Line You are perfectly free to enter several com-
                        mands on the same input line. You merely need to separate them using a semi-
                        colon:

                         $ echo Today is; date
                         Today is
                         Fri 5 Dec 12:12:47 CET 2008

                        In this instance the second command will be executed once the first is done.

                       Table 8.2: Key Strokes within bash

     Key Stroke               Function
      ↑  or ↓                 Scroll through most recent commands
      Ctrl+r                  Search command history
       ← or →                 Move cursor within current command line
       Home or Ctrl + a       Jump to the beginning of the command line
       End or Ctrl + e        Jump to the end of the command line
       ⇐ or Del               Delete character in front of/under the cursor,
                              respectively
      Ctrl   +   t            Swap the two characters in front of and under
                              the cursor
      Ctrl   +   l            Clear the screen
      Ctrl   +   c            Interrupt a command
      Ctrl   +   d            End the input (for login shells: log off)

Conditional Execution Sometimes it is useful to make the execution of the second
command depend on whether the first was executed correctly or not. Every Unix
process yields a return value which states whether it was executed correctly or return value
whether errors of whatever kind have occurred. In the former case, the return
value is 0; in the latter, it is different from 0.

B You can find the return value of a child process of your shell by looking at
  the $? variable:

      $ bash                                                 Start a child shell …
      $ exit 33                                      … and exit again immediately
      $ echo $?
      33                                             The value from our exit above
      $ _

      But this really has no bearing on the following.

    With && as the “separator” between two commands (where there would other-
wise be the semicolon), the second command is only executed when the first has
exited successfully. To demonstrate this, we use the shell’s -c option, with which
you can pass a command to the child shell on the command line (impressive, isn’t
it?):

$ bash -c "exit 0" && echo "Successful"
Successful
$ bash -c "exit 33" && echo "Successful"
                                                         Nothing -- 33 isn’t success!

    Conversely, with || as the “separator”, the second command is only executed
if the first did not finish successfully:

$ bash -c "exit 0" || echo "Unsuccessful"
$ bash -c "exit 33" || echo "Unsuccessful"
Unsuccessful
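The two operators may also be combined on a single line; here is a small sketch using the true and false commands (which do nothing except succeed and fail, respectively):

```shell
# Report either outcome in one line
true  && echo "first: success"  || echo "first: failure"
false && echo "second: success" || echo "second: failure"
```

The first line prints “first: success”, the second “second: failure”.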

C 8.8 [3] What is wrong about the command “echo "Hello!" ”? (Hint: Experi-
  ment with commands of the form “!-2 ” or “!ls ”.)

                8.5        Commands From A File
                You can store shell commands in a file and execute them en bloc. (Chapter 5 ex-
                plains how to conveniently create files.) You just need to invoke the shell and pass
                the file name as a parameter:

                 $ bash my-commands

      shell script Such a file is also called a shell script, and the shell has extensive programming
                features that we can only outline very briefly here. (The Linup Front training
                manual Advanced Linux explains shell programming in great detail.)

                 B You can avoid having to prepend the bash command by inserting the magical
                   incantation

                       #!/bin/bash

                   as the first line of your file and making the file “executable”:

                        $ chmod +x my-commands

                       (You will find out more about chmod and access rights in Chapter 12.) After
                       this, the

                        $ ./my-commands

                       command will suffice.
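Put together, a complete session might look like this (the file name my-commands and its content are, of course, just examples):

```shell
# Create a minimal script, make it executable, and run it
printf '#!/bin/bash\necho Hello from my-commands\n' > my-commands
chmod +x my-commands
./my-commands
```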

                    If you invoke a shell script as above, whether with a prepended bash or as an
        subshell executable file, it is executed in a subshell, a shell that is a child process of the
                current shell. This means that changes to, e. g., shell or environment variables
                do not influence the current shell. For example, assume that the file assignment
                contains the line

                 foo=bar

                Consider the following command sequence:

                 $ foo=quux
                 $ bash assignment                                                   Contains foo=bar
                 $ echo $foo
                 quux                                      No change; assignment was only in subshell

                This is generally considered a feature, but every now and then it would be quite
                desirable to have commands from a file affect the current shell. That works, too:
                The source command reads the lines in a file exactly as if you would type them
                directly into the current shell—all changes to variables (among other things) hence
                take effect in your current shell:

                 $ foo=quux
                 $ source assignment                                                 Contains foo=bar
                 $ echo $foo
                 bar                                                            Variable was changed!

                   A different name for the source command, by the way, is “. ”. (You read correctly
                – dot!) Hence

                 $ source assignment

is equivalent to

$ . assignment

B Like program files for external commands, the files to be read using source
  or . are searched in the directories given by the PATH variable.

8.6     The Shell As A Programming Language
Being able to execute shell commands from a file is a good thing, to be sure.
However, it is even better to be able to structure these shell commands such that
they do not have to do the same thing every time, but—for example—can ob-
tain command-line parameters. The advantages are obvious: In often-used pro-
cedures you save a lot of tedious typing, and in seldom-used procedures you can
avoid mistakes that might creep in because you accidentally leave out some im-
portant step. We do not have space here for a full explanation of the shell as a
programming language, but fortunately there is enough room for a few brief ex-
amples.

Command-line parameters When you pass command-line parameters to a shell
script, the shell makes them available in the variables $1 , $2 , …. Consider the Single parameters
following example:

$ cat hello
echo Hello $1, are you free $2?
$ ./hello Joe today
Hello Joe, are you free today?
$ ./hello Sue tomorrow
Hello Sue, are you free tomorrow?

The $* variable contains all parameters at once, and the number of parameters is in $# :     All parameters

$ cat parameter
echo $# parameters: $*
$ ./parameter
0 parameters:
$ ./parameter dog
1 parameters: dog
$ ./parameter dog cat mouse tree
4 parameters: dog cat mouse tree

Loops The for command lets you construct loops that iterate over a list of words
(separated by white space):

$ for i in 1 2 3
> do
>    echo And $i!
> done
And 1!
And 2!
And 3!

Here, the i variable assumes each of the listed values in turn as the commands
between do and done are executed.
   This is even more fun if the words are taken from a variable:

                             $ list='4 5 6'
                             $ for i in $list
                             > do
                             >    echo And $i!
                             > done
                             And 4!
                             And 5!
                             And 6!

      Loop over parameters      If you omit the “in …”, the loop iterates over the command line parameters:

                             $ cat sort-wc
                             # Sort files according to their line count
                             for f
                             do
                                 echo `wc -l <"$f"` lines in $f
                             done | sort -n
                             $ ./sort-wc /etc/passwd /etc/fstab /etc/motd

                             (The “wc -l ” command counts the lines of its standard input or the file(s) passed
                             on the command line.) Do note that you can redirect the standard output of a loop
                             to sort using a pipe line!
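Output redirection works for a whole loop, too; here is a sketch that collects the loop’s output in a file (the file name loop-output.txt is made up):

```shell
# Everything the loop writes to standard output lands in one file
for i in 1 2 3
do
    echo "line $i"
done > loop-output.txt
cat loop-output.txt
```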

                             Alternatives You can use the aforementioned && and || operators to execute cer-
                             tain commands only under specific circumstances. The

                             # grepcp REGEX
                             rm -rf backup; mkdir backup
                             for f in *.txt
                             do
                                  grep $1 "$f" && cp "$f" backup
                             done

                             script, for example, copies a file to the backup directory only if its name ends in
                             .txt (the for loop ensures this) and it contains at least one line matching the
                             regular expression that is passed as a parameter.
                      test      A useful tool for alternatives is the test command, which can check a large
                             variety of conditions. It returns an exit code of 0 (success), if the condition holds,
                             else a non-zero exit code (failure). For example, consider

                             # filetest NAME1 NAME2 ...
                             for name
                             do
                                test -d "$name" && echo $name: directory
                                test -f "$name" && echo $name: file
                                test -L "$name" && echo $name: symbolic link
                             done

                             This script looks at a number of file names passed as parameters and outputs for
                             each one whether it refers to a directory, a (plain) file, or a symbolic link.
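Besides file properties, test can also compare strings and numbers; a brief sketch:

```shell
# "=" compares strings, "-lt"/"-eq"/"-gt" compare integers
test "abc" = "abc" && echo "strings are equal"
test 5 -lt 10      && echo "5 is less than 10"
test -e /etc       && echo "/etc exists"
```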

                             A The test command exists both as a free-standing program in /bin/test and
                               as a built-in command in bash and other shells. These variants can differ
                               subtly especially as far as more outlandish tests are concerned. If in doubt,
                               read the documentation.

   You can use the if command to make more than one command depend on a                    if
condition (in a convenient and readable fashion). You may write “[ …] ” instead
of “test …”:
# filetest2 NAME1 NAME2 ...
for name
do
     if [ -L "$name" ]
     then
          echo $name: symbolic link
     elif [ -d "$name" ]
     then
          echo $name: directory
     elif [ -f "$name" ]
     then
          echo $name: file
     else
          echo $name: no idea
     fi
done

If the command after the if signals “success” (exit code 0), the commands after
then will be executed, up to the next elif , else , or fi . If on the other hand it sig-
nals “failure”, the command after the next elif will be evaluated next and its exit
code will be considered. The shell continues the pattern until the matching fi is
reached. Commands after the else are executed if none of the if or elif commands
resulted in “success”. The elif and else branches may be omitted if they are not
required.

More loops With the for loop, the number of trips through the loop is fixed at
the beginning (the number of words in the list). However, we often need to deal
with situations where it is not clear at the beginning how often a loop should be
executed. To handle this, the shell offers the while loop, which (like if ) executes       while
a command whose success or failure determines what to do about the loop: On
success, the “dependent” commands will be executed, on failure execution will
continue after the loop.
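A minimal sketch of a while loop whose trip count is controlled by a variable:

```shell
# The loop body runs as long as the bracketed test succeeds
i=3
while [ $i -gt 0 ]
do
    echo $i
    i=$((i-1))        # shell arithmetic: decrement i
done
echo "Liftoff"
```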
   The following script reads a file like

Aunt delightful tea cosy
Uncle great football

(whose name is passed on the command line) and constructs a thank-you e-mail
message from each line (Linux is very useful in daily life):

# birthday FILE
while IFS=: read name email present
do
    (echo $name
      echo ""
      echo "Thank you very much for $present!"
      echo "I enjoyed it very much."
      echo ""
      echo "Best wishes"
      echo "Tim") | mail -s "Many thanks!" $email
done <$1

The read command reads the input file line by line and splits each line at the colons      read
132                                                                                 8 More About The Shell

                   (variable IFS ) into the three fields name , email , and present which are then made avail-
                   able as variables inside the loop. Somewhat counterintuitively, the input redirec-
                   tion for the loop can be found at the very end.
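
The splitting that read performs can be observed in isolation; the sample data below is made up:

```shell
# IFS=: as a prefix applies only to this one read command
line="Alice:alice@example.net:woolly socks"
IFS=: read name email present <<EOF
$line
EOF
echo "name=$name, email=$email, present=$present"
# → name=Alice, email=alice@example.net, present=woolly socks
```

Note that the last variable (present) receives the whole rest of the line, spaces included.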

                   A Please test this script with innocuous e-mail addresses only, lest your rela-
                     tions become confused!

                   C 8.9 [1] What is the difference (as far as loop execution is concerned) between
                           for f; do …; done

                           for f in $*; do …; done

                           ? (Try it, if necessary)

                   C 8.10 [2] In the sort-wc script, why do we use the command
                           wc -l <$f

                           instead of
                           wc -l $f

                           ?

                   C 8.11 [2] Alter the grepcp script such that the list of files to be considered
                     is also taken from the command line. (Hint: The shift shell command re-
                     moves the first command line parameter ($1 ) and pulls all the others up to
                     close the gap. After a shift , the previous $2 is now $1 , $3 is $2 , and so on.)
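
The effect of shift can be tried out directly with set, which assigns the positional parameters (the words here are arbitrary):

```shell
set -- one two three     # set the positional parameters $1, $2, $3
echo "before: $1 $2 $3"
shift                    # drop $1; the others move up
echo "after:  $1 $2"
# → before: one two three
# → after:  two three
```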

                   C 8.12 [2] Why does the filetest script output
                           $ ./filetest foo
                           foo: file
                           foo: symbolic link

                           for symbolic links (instead of just »foo: symbolic link «)?

                   8.6.1      Foreground and Background Processes
                    After a command has been entered, it is processed by the shell. The shell exe-
                    cutes internal commands directly; for external commands, the shell generates a
      child process child process, which is used to execute the command and terminates itself af-
                    terwards. In Unix, a process is a running programm; the same program can be
                    executed several times simultaneously (e. g., by different users) and corresponds
                    with several processes. Every process can generate child processes (even if most
                    of them—unlike shells—don’t).
                        Usually, the shell waits until the child process has done its work and termi-
                    nates. You can tell by the fact that no new shell prompt is displayed while the
                    child process is running. After the child process has exited, the shell reads and
                     processes its return value, and only then does it display a new shell prompt. The
                    execution of the shell and the child process is, so to speak, synchronised. This
                    “synchronous” manner of processing commands is displayed in Figure 8.1; from
                    the user’s point of view it looks like the following:

                   $ sleep 10
                                                             Nothing happens for approximately 10 seconds
                   $ _
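
You can verify this synchronous behaviour by timing the command with date (a small sketch; on a second boundary the result may come out one second high):

```shell
start=$(date +%s)     # seconds since the epoch
sleep 2               # the shell waits here
end=$(date +%s)
echo "elapsed: $((end-start)) seconds"
```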

           Figure 8.1: Synchronous command execution in the shell





          Figure 8.2: Asynchronous command execution in the shell

                                                 Table 8.3: Options for jobs

                          Option           Meaning
                          -l   (long)      Adds PIDs to the output
                          -n   (notify)    Displays only processes that have been terminated since
                                           the last invocation of jobs
                          -p   (process)   Displays only PIDs

                       If you do not want the shell to wait until the child process has finished, you
                    have to append an ampersand (& ) to the command line. Then, while the child
                    process is executed in the background, a short message appears on the terminal,
                    immediately followed by the shell’s command prompt:

                    $ sleep 10 &
                    [2] 6210
                                                                                  And then immediately:
                    $ _

                    This mode of operation is called “asynchronous”, since the shell does not wait
                    idly for the child process to finish (qv. Figure 8.2).

                    B The “[2] 6210 ” means that the system has created the process with the num-
                      ber (or “process ID”) 6210 as “job” number 2. These numbers will probably
                      differ on your system.

                    B Syntactically, the & really acts like a semicolon, and can therefore serve as a
                      separator between commands. See Exercise 8.14.
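
The practical upshot is that the shell is free again immediately, and the special parameter $! holds the process ID of the most recently started background process:

```shell
sleep 1 &                         # start a background process
echo "background PID: $!"         # printed immediately, before sleep finishes
wait                              # wait for all background jobs to finish
echo "done"
```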

                       Here are some hints for successful background process operation:
                       • The background process should not expect keyboard input, since the shell
                         cannot determine to which process—foreground or background—any key-
                         board input should be assigned. If necessary, input can be taken from a file.
                         This is covered more extensively in Chapter 7.
                       • The background process should not direct output to the terminal, since
                         these may be mixed up with the output of foreground processes or dis-
                         carded altogether. Again, there is more about this in Chapter 7.
                       • If the parent process (the shell) is aborted, all its children (and consequently
                         their children etc.) will in many cases be terminated as well. Only processes
                          that completely disavow their parents are exempted from this; this applies,
                         e. g., to processes that perform system services in the background.
      Job control       When several processes are executed in the background from the same shell,
                    it is easy to lose track. Therefore the shell makes available an (internal) command
                    that you can use to find out about the state of background processes—jobs . If jobs
                    is invoked without options, its output consists of a list of job numbers, process
                    states and command lines. This looks approximately like the following:

                    $ jobs
                    [1]    Done                  sleep
                    $ _

                    In this case, job number 1 has already finished (“Done”), otherwise the message
                    “Running ” would have appeared. The jobs command supports various options, the
                    most important of which are shown in Table 8.3.
                       The shell makes it possible to stop a foreground process using Ctrl + z . This
                    process is displayed by jobs with a “Stopped ” status and can be continued as a back-
                    ground process using the bg command. (Otherwise, processes stay stopped until

hell freezes over, or the next system restart, whichever occurs earlier.) For exam-
ple, “bg %5 ” will send job 5 to the background, where it will continue to run.
    Conversely, you can select one of a number of background processes and fetch
it back to the foreground using the fg command. The syntax of the fg command
is equivalent to that of the bg command.
    You can terminate a foreground process from the shell with the Ctrl + c key
sequence. A background process can be terminated directly using the kill com-
mand followed by a job number with a leading percent character (similar to bg ).
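
In a script it is often more robust to address a background process by its PID (from $!) than by job number; a sketch:

```shell
sleep 100 &              # a long-running background process
pid=$!
kill $pid                # send SIGTERM
wait $pid                # collect its exit status
echo "exit status: $?"   # 128 + 15 (SIGTERM) = 143
# → exit status: 143
```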

C 8.13 [2] Use a suitably spectacular program (such as the OpenGL demo gears
   under X11 in the SUSE distributions, alternatively, for example, “xclock
   -update 1 ”) to experiment with background processes and job control. Make
  sure that you are able to start background processes, to stop foreground
  processes using Ctrl + z and send them to the background using bg , to list
  background processes using jobs and so on.

C 8.14 [3] Describe (and explain) the differences between the following three
  command lines:
      $   sleep 5 ; sleep 5
      $   sleep 5 ; sleep 5 &
      $   sleep 5 & sleep 5 &

Commands in this Chapter
.       Reads a file containing shell commands as if they had been entered on
        the command line                                             bash (1) 128
bg      Continues a (stopped) process in the background              bash (1) 134
date    Displays the date and time                                   date (1) 120
env     Outputs the process environment, or starts programs with an adjusted
        environment                                                   env (1) 122
export  Defines and manages environment variables                    bash (1) 121
fg      Fetches a background process back to the foreground          bash (1) 134
gears   Displays turning gears on X11                              gears (1) 135
hash    Shows and manages “seen” commands in bash                    bash (1) 123
history Displays recently used bash command lines                    bash (1) 125
jobs    Reports on background jobs                                   bash (1) 134
kill    Terminates a background process                    bash (1), kill (1) 135
set     Manages shell variables and options                          bash (1) 122
source  Reads a file containing shell commands as if they had been entered on
        the command line                                             bash (1) 128
test    Evaluates logical expressions on the command line
                                                           test (1), bash (1) 130
unset   Deletes shell or environment variables                       bash (1) 122
whereis Searches executable programs, manual pages, and source code for given
        programs                                                 whereis (1) 123
which   Searches programs along PATH                               which (1) 123
xclock  Displays a graphical clock                               xclock (1x) 135

       • The sleep command waits for the number of seconds specified as the argu-
         ment.
       • The echo command outputs its arguments.
       • The date and time may be determined using date .
       • Various bash features support interactive use, such as command and file
         name autocompletion, command line editing, alias names and variables.
       • External programs can be started asynchronously in the background. The
         shell then immediately prints another command prompt.

9 The File System

9.1       Terms . . . . . . . . . .           .   .   .   .   .   .   .   .   .   .   .   .   .   .   138
9.2       File Types. . . . . . . . .         .   .   .   .   .   .   .   .   .   .   .   .   .   .   138
9.3       The Linux Directory Tree . . .      .   .   .   .   .   .   .   .   .   .   .   .   .   .   139
9.4       Directory Tree and File Systems.    .   .   .   .   .   .   .   .   .   .   .   .   .   .   147
9.5       Removable Media. . . . . .          .   .   .   .   .   .   .   .   .   .   .   .   .   .   148

Goals

      •   Understanding the terms “file” and “file system”
      •   Recognising the different file types
      •   Knowing your way around the directory tree of a Linux system
      •   Knowing how external file systems are integrated into the directory tree

Prerequisites

      • Basic Linux knowledge (from the previous chapters)
      • Handling files and directories (Chapter 6)

138                                                                                      9 The File System

                                                Table 9.1: Linux file types

                         Type                     ls -l     ls -F   Create using …
                         plain file                  -      name    diverse programs
                         directory                   d      name/   mkdir
                         symbolic link               l      name@   ln -s
                         device file             b   or c   name    mknod
                         FIFO (named pipe)           p      name|   mkfifo
                         Unix-domain socket          s      name=   no command

                   9.1     Terms
             file Generally speaking, a file is a self-contained collection of data. There is no re-
                    striction on the type of the data within the file; a file can be a short text of a few
                    letters or a multi-megabyte archive containing a user’s complete life works. Files do not
                   need to contain plain text. Images, sounds, executable programs and lots of other
                   things can be placed on a storage medium as files. To guess at the type of data
            file   contained in a file you can use the file command:

                   $ file /bin/ls /usr/bin/groups /etc/passwd
                    /bin/ls:         ELF 32-bit LSB executable, Intel 80386, 
                      version 1 (SYSV), dynamically linked (uses shared libs), 
                      for GNU/Linux 2.4.1, stripped
                   /usr/bin/groups: Bourne shell script text executable
                   /etc/passwd:     ASCII text

                   B file guesses the type of a file based on rules in the /usr/share/file directory.
                     /usr/share/file/magic contains a clear-text version of the rules. You can define
                      your own rules by putting them into the /etc/magic file. Check magic (5) for
                      details.
                   To function properly, a Linux system normally requires several thousand different
                   files. Added to that are the many files created and owned by the system’s various
                   users.
      file system     A file system determines the method of arranging and managing data on a
                  storage medium. A hard disk basically stores bytes that the system must be able
                  to find again somehow—and as efficiently and flexibly as possible at that, even
                   for very large files. The details of file system operation may differ (Linux knows
                  lots of different file systems, such as ext2 , ext3 , ext4 , ReiserFS, XFS, JFS, btrfs, …)
                  but what is presented to the user is largely the same: a tree-structured hierarchy
                  of file and directory names with files of different types. (See also Chapter 6.)

                   B In the Linux community, the term “file system” carries several meanings. In
                     addition to the meaning presented here—“method of arranging bytes on a
                     medium”—, a file system is often considered what we have been calling a
                     “directory tree”. In addition, a specific medium (hard disk partition, USB
                     key, …) together with the data on it is often called a “file system”—in the
                     sense that we say, for example, that hard links (Section 6.4.2) do not work
                     “across file system boundaries”, that is, between two different partitions on
                     hard disk or between the hard disk and a USB key.

                   9.2     File Types
                   Linux systems subscribe to the basic premise “Everything is a file”. This may seem
                   confusing at first, but is a very useful concept. Six file types may be distinguished
                   in principle:
9.3 The Linux Directory Tree                                                                          139

Plain files This group includes texts, graphics, sound files, etc., but also exe-
      cutable programs. Plain files can be generated using the usual tools like
      editors, cat , shell output redirection, and so on.
Directories Also called “folders”; their function, as we have mentioned, is to help
     structure storage. A directory is basically a table giving file names and as-
     sociated inode numbers. Directories are created using the mkdir command.
Symbolic links Contain a path specification redirecting accesses to the link to
    a different file (similar to “shortcuts” in Windows). See also Section 6.4.2.
    Symbolic links are created using ln -s .

Device files These files serve as interfaces to arbitrary devices such as disk drives.
     For example, the file /dev/fd0 represents the first floppy drive. Every write
     or read access to such a file is redirected to the corresponding device. De-
     vice files are created using the mknod command; this is usually the system
     administrator’s prerogative and is thus not explained in more detail in this

FIFOs Often called “named pipes”. Like the shell’s pipes, they allow the direct
     communication between processes without using intermediate files. A pro-
     cess opens the FIFO for writing and another one for reading. Unlike the
     pipes that the shell uses for its pipelines, which behave like files from a pro-
     gram’s point of view but are “anonymous”—they do not exist within the file
     system but only between related processes—, FIFOs have file names and can
     thus be opened like files by arbitrary programs. Besides, FIFOs may have
     access rights (pipes may not). FIFOs are created using the mkfifo command.
Unix-domain sockets Like FIFOs, Unix-domain sockets are a method of inter-
     process communication. They use essentially the same programming in-
     terface as “real” network communications across TCP/IP, but only work
     for communication peers on the same computer. On the other hand, Unix-
     domain sockets are considerably more efficient than TCP/IP. Unlike FIFOs,
     Unix-domain sockets allow bi-directional communications—both partici-
     pating processes can send as well as receive data. Unix-domain sockets are
     used, e. g., by the X11 graphic system, if the X server and clients run on the
     same computer. There is no special program to create Unix-domain sockets.

C 9.1 [3] Check your system for examples of the various file types. (Table 9.1
  shows you how to recognise the files in question.)
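
Most of the file types from Table 9.1 can be created in a scratch directory and checked with the test operators from the previous chapter (the file names below are arbitrary):

```shell
cd "$(mktemp -d)"        # scratch directory
touch plain              # plain file
mkdir dir                # directory
ln -s plain link         # symbolic link
mkfifo fifo              # FIFO (named pipe)
ls -lF                   # type letters in column 1, suffixes added by -F
[ -f plain ] && [ -d dir ] && [ -L link ] && [ -p fifo ] && echo "all created"
# final line → all created
```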

9.3     The Linux Directory Tree
A Linux system consists of hundreds of thousands of files. In order to keep track,
there are certain conventions for the directory structure and the files comprising a
Linux system, the Filesystem Hierarchy Standard (FHS). Most distributions adhere FHS
to this standard (possibly with small deviations). The FHS describes all directories
immediately below the file system’s root as well as a second level below /usr .
    The file system tree starts at the root directory, “/ ” (not to be confused with root directory
/root , the home directory of user root ). The root directory contains either just sub-
/root , the home directory of user root ). The root directory contains either just sub-
directories or else additionally, if no /boot directory exists, the operating system
kernel.
    You can use the “ls -la / ” command to list the root directory’s subdirectories.
The result should look similar to Figure 9.1. The individual subdirectories follow
FHS and therefore contain approximately the same files on every distribution. We
shall now take a closer look at some of the directories:

$ cd /
$ ls -l
total 125
drwxr-xr-x    2 root     root    4096 Dez 20 12:37 bin
drwxr-xr-x    2 root     root    4096 Jan 27 13:19 boot
lrwxrwxrwx    1 root     root      17 Dez 20 12:51 cdrecorder 
                                                                                              -> /media/cdrecorder
lrwxrwxrwx    1   root   root      12   Dez   20   12:51   cdrom -> /media/cdrom
drwxr-xr-x   27   root   root   49152   Mär    4   07:49   dev
drwxr-xr-x   40   root   root    4096   Mär    4   09:16   etc
lrwxrwxrwx    1   root   root      13   Dez   20   12:51   floppy -> /media/floppy
drwxr-xr-x    6   root   root    4096   Dez   20   16:28   home
drwxr-xr-x    6   root   root    4096   Dez   20   12:36   lib
drwxr-xr-x    6   root   root    4096   Feb    2   12:43   media
drwxr-xr-x    2   root   root    4096   Mär   21    2002   mnt
drwxr-xr-x   14   root   root    4096   Mär    3   12:54   opt
dr-xr-xr-x   95   root   root       0   Mär    4   08:49   proc
drwx------   11   root   root    4096   Mär    3   16:09   root
drwxr-xr-x    4   root   root    4096   Dez   20   13:09   sbin
drwxr-xr-x    6   root   root    4096   Dez   20   12:36   srv
drwxrwxrwt   23   root   root    4096   Mär    4   10:45   tmp
drwxr-xr-x   13   root   root    4096   Dez   20   12:55   usr
drwxr-xr-x   17   root   root    4096   Dez   20   13:02   var

                                   Figure 9.1: Content of the root directory (SUSE)

                                  B There is considerable consensus about the FHS, but it is just as “binding”
                                    as anything on Linux, i. e., not that much. On the one hand, there certainly
                                    are Linux systems (for example the one on your broadband router or PVR)
                                    that are mostly touched only by the manufacturer and where conforming
                                    to every nook and cranny of the FHS does not gain anything. On the other
                                    hand, you may do whatever you like on your own system, but must be pre-
                                    pared to bear the consequences—your distributor assures you to keep to his
                                    side of the FHS bargain, but also expects you not to complain if you are not
                                    playing completely by the rules and problems do occur. For example, if you
                                    install a program in /usr/bin and the file in question gets overwritten during
                                    the next system upgrade, this is your own fault since, according to the FHS,
                                    you are not supposed to put your own programs into /usr/bin (/usr/local/bin
                                    would have been correct).

                                 The Operating System Kernel—/boot The /boot directory contains the actual op-
                                 erating system: vmlinuz is the Linux kernel. In the /boot directory there are also
                                 other files required for the boot loader (usually GRUB).
                                    On some systems, /boot is placed on its own separate partition. This can be
                                 necessary if the actual file system is encrypted or otherwise difficult to reach for
                                 the boot loader, possibly because special drivers are required to access a hardware
                                 RAID system.

                                 General Utilities—/bin In /bin there are the most important executable programs
                                 (mostly system programs) which are necessary for the system to boot. This in-
                                 cludes, for example, mount and mkdir . Many of these programs are so essential
                                 that they are needed not just during system startup, but also when the system
                                 is running—like ls and grep . /bin also contains programs that are necessary to get
                                 a damaged system running again if only the file system containing the root direc-
                                 tory is available. Additional programs that are not required on boot or for system

repair can be found in /usr/bin .

Special System Programs—/sbin Like /bin , /sbin contains programs that are nec-
essary to boot or repair the system. However, for the most part these are system
configuration tools that can really be used only by root . “Normal” users can use
some of these programs to query the system, but can’t change anything. As with
/bin , there is a directory called /usr/sbin containing more system programs.

System Libraries—/lib This is where the “shared libraries” used by programs
in /bin and /sbin reside, as files and (symbolic) links. Shared libraries are pieces
of code that are used by various programs. Such libraries save a lot of resources,
since many processes use the same basic parts, and these basic parts must then be
loaded into memory only once; in addition, it is easier to fix bugs in such libraries
when they are in the system just once and all programs fetch the code in question
from one central file. Incidentally, below /lib/modules there are kernel modules, kernel modules
i. e., kernel code which is not necessarily in use—device drivers, file systems, or
network protocols. These modules can be loaded by the kernel when they are
needed, and in many cases also be removed after use.

Device Files—/dev This directory and its subdirectories contain a plethora of en-
tries for device files. Device files form the interface between the shell (or, gener- Device files
ally, the part of the system that is accessible to command-line users or program-
mers) to the device drivers inside the kernel. They have no “content” like other
files, but refer to a driver within the kernel via “device numbers”.

B In former times it was common for Linux distributors to include an entry in
  /dev for every conceivable device. So even a laptop Linux system included
  the device files required for ten hard disks with 63 partitions each, eight
  ISDN adapters, sixteen serial and four parallel interfaces, and so on. Today
  the trend is away from overfull /dev directories with one entry for every
  imaginable device and towards systems more closely tied to the running
  kernel, which only contain entries for devices that actually exist. The magic
  word in this context is udev (short for userspace /dev ) and will be discussed in
  more detail in Linux Administration I.

    Linux distinguishes between character devices and block devices. A character character devices
device is, for instance, a terminal, a mouse or a modem—a device that provides block devices
or processes single characters. A block device treats data in blocks—this includes
hard disks or floppy disks, where bytes cannot be read singly but only in groups
of 512 (or some such). Depending on their flavour, device files are labelled in “ls
-l ” output with a “c ” or “b ”:

crw-rw-rw-   1   root   root   10,   4   Oct   16   11:11   amigamouse
brw-rw----   1   root   disk    8,   1   Oct   16   11:11   sda1
brw-rw----   1   root   disk    8,   2   Oct   16   11:11   sda2
crw-rw-rw-   1   root   root    1,   3   Oct   16   11:11   null

Instead of the file length, the list contains two numbers. The first is the “major
device number” specifying the device’s type and governing which kernel driver
is in charge of this device. For example, all SCSI hard disks have major device
number 8. The second number is the “minor device number”. This is used by the
driver to distinguish between different similar or related devices or to denote the
various partitions of a disk.
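
On Linux, /dev/null always has major number 1 and minor number 3, which you can check with ls or stat (%t and %T print the numbers in hexadecimal):

```shell
ls -l /dev/null
# numbers appear where the file length would be for a regular file
stat -c 'major %t, minor %T' /dev/null
# → major 1, minor 3
```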
    There are several notable pseudo devices. The null device, /dev/null , is like a pseudo devices
“dust bin” for program output that is not actually required, but must be directed
somewhere. With a command like

$ program >/dev/null

      the program’s standard output, which would otherwise be displayed on the ter-
      minal, is discarded. If /dev/null is read, it pretends to be an empty file and returns
       end-of-file at once. /dev/null must be accessible to all users for reading and
       writing.
          The “devices” /dev/random and /dev/urandom return random bytes of “crypto-
      graphic quality” that are created from “noise” in the system—such as the in-
      tervals between unpredictable events like key presses. Data from /dev/random is
      suitable for creating keys for common cryptographic algorithms. The /dev/zero
      file returns an unlimited supply of null bytes; you can use these, for example, to
      create or overwrite files with the dd command.
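
For example, four 512-byte blocks of null bytes (the file name is arbitrary):

```shell
f=$(mktemp)                                  # temporary scratch file
dd if=/dev/zero of="$f" bs=512 count=4 2>/dev/null
wc -c <"$f"                                  # 4 × 512 = 2048 bytes
rm "$f"
# → 2048
```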

      Configuration Files—/etc The /etc directory is very important; it contains the
      configuration files for most programs. Files /etc/inittab and /etc/init.d/* , for ex-
      ample, contain most of the system-specific data required to start system services.
       Here is a more detailed description of the most important files—except for a few
      of them, only user root has write permission but everyone may read them.
      /etc/fstab This describes all mountable file systems and their properties (type,
             access method, “mount point”).
      /etc/hosts  This file is one of the configuration files of the TCP/IP network. It maps
             the names of network hosts to their IP addresses. In small networks and on
             freestanding hosts this can replace a name server.
      /etc/inittab  The /etc/inittab file is the configuration file for the init program and
             thus for the system start.
      /etc/init.d/* This directory contains the “init scripts” for various system services.
             These are used to start up or shut down system services when the system is
             booted or switched off.

                    On Red Hat distributions, this directory is called /etc/rc.d/init.d .

      /etc/issue  This file contains the greeting that is output before a user is asked to
             log in. After the installation of a new system this frequently contains the
             name of the vendor.
      /etc/motd This file contains the “message of the day” that appears after a user has
             successfully logged in. The system administrator can use this file to notify
             users of important facts and events1 .
      /etc/mtab This is a list of all mounted file systems including their mount points.
            /etc/mtab differs from /etc/fstab in that it contains all currently mounted file
            systems, while /etc/fstab contains only settings and options for file systems
             that might be mounted—typically on system boot but also later. Even that
             list is not exhaustive, since you can mount file systems via the command
             line where and how you like.

             B We’re really not supposed to put that kind of information in a file
               within /etc , where files ought to be static. Apparently, tradition has
               carried the day here.
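As a sketch of what /etc/fstab records, here is one made-up entry split into its six fields (device, mount point, file system type, mount options, dump flag, fsck pass number):

```shell
# A sample /etc/fstab-style entry (invented device and mount point).
entry='/dev/sda2  /home  ext4  defaults  0  2'
set -- $entry                  # split the line into positional parameters
echo "mount $1 at $2 as $3"    # prints: mount /dev/sda2 at /home as ext4
```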

      /etc/passwd  In /etc/passwd there is a list of all users that are known to the system, to-
             gether with various items of user-specific information. In spite of the name
             of the file, on modern systems the passwords are not stored in this file but
             in another one called /etc/shadow . Unlike /etc/passwd , that file is not readable
             by normal users.
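The record format is easy to take apart. Here is a sketch with an invented user; the seven colon-separated fields are login name, password, UID, GID, GECOS (comment), home directory, and login shell:

```shell
# A sample /etc/passwd record (made-up user). The "x" in the second
# field says the real password hash lives in /etc/shadow instead.
rec='hugo:x:1000:100:Hugo Schulz:/home/hugo:/bin/bash'
echo "$rec" | cut -d: -f1,6,7     # login name, home directory, shell
# prints hugo:/home/hugo:/bin/bash
```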
         1 There is a well-known claim that the only thing all Unix systems in the world have in common
      is the “message of the day” asking users to remove unwanted files since all the disks are 98% full.
9.3 The Linux Directory Tree                                                                               143

Accessories—/opt This directory is really intended for third-party software—
complete packages prepared by vendors that are supposed to be installable with-
out conflicting with distribution files or locally-installed files. Such software pack-
ages occupy a subdirectory /opt/ ⟨package⟩. By rights, the /opt directory should be
completely empty after a distribution has been installed on an empty disk.

“Unchanging Files”—/usr In /usr there are various subdirectories containing
programs and data files that are not essential for booting or repairing the system
or otherwise indispensable. The most important directories include:
/usr/bin    System programs that are not essential for booting or otherwise important

/usr/sbin    More system programs for root
/usr/lib    Further libraries (not used for programs in /bin or /sbin )
/usr/local Directory for files installed by the local system administrator. Corre-
      sponds to the /opt directory—the distribution may not put anything here

/usr/share Architecture-independent data. In principle, a Linux network consist-
      ing, e. g., of Intel, SPARC and PowerPC hosts could share a single copy of
      /usr/share on a central server. However, today disk space is so cheap that no
      distribution takes the trouble of actually implementing this.
/usr/share/doc    Documentation, e. g., HOWTOs

/usr/share/info   Info pages
/usr/share/man    Manual pages (in subdirectories)
/usr/src    Source code for the kernel and other programs (if available)

B The name /usr is often erroneously considered an acronym of “Unix system
  resources”. Originally this directory derives from the time when computers
  often had a small, fast hard disk and another one that was bigger but slower.
  All the frequently-used programs and files went to the small disk, while the
  big disk (mounted as /usr ) served as a repository for files and programs
  that were either less frequently used or too big. Today this separation can
  be exploited in another way: With care, you can put /usr on its own partition
  and mount that partition “read-only”. It is even possible to import /usr from Read-only /usr
  a remote server, even though the falling prices for disk storage no longer
  make this necessary (the common Linux distributions do not support this,
  though).

A Window into the Kernel—/proc This is one of the most interesting and impor-
tant directories. /proc is really a “pseudo file system”: It does not occupy space on pseudo file system
disk, but its subdirectories and files are created by the kernel if and when someone
is interested in their content. You will find lots of data about running processes
as well as other information the kernel possesses about the computer’s hardware.
For instance, in some files you will find a complete hardware analysis. The most
important files include:
/proc/cpuinfo    This contains information about the CPU’s type and clock frequency.

/proc/devicesThis is a complete list of devices supported by the kernel including
      their major device numbers. This list is consulted when device files are created.
/proc/dma A list of DMA channels in use. On today’s PCI-based systems this is
      neither very interesting nor important.
144                                                                                9 The File System

              /proc/interrupts A list of all hardware interrupts in use. This contains the inter-
                    rupt number, number of interrupts triggered and the drivers handling that
                    particular interrupt. (An interrupt occurs in this list only if there is a driver
                    in the kernel claiming it.)

              /proc/ioports   Like /proc/interrupts , but for I/O ports.
              /proc/kcore This file is conspicuous for its size. It makes available the computer’s
                    complete RAM and is required for debugging the kernel. This file requires
      root privileges for reading. You would do well to stay away from it!

/proc/loadavg This file contains three numbers measuring the CPU load during
      the last 1, 5 and 15 minutes. These values are usually output by the uptime
      command.
/proc/meminfo Displays the memory and swap usage. This file is used by the free
      command.

/proc/mounts Another list of all currently mounted file systems, mostly identical
      to /etc/mtab .

              /proc/scsi In this directory there is a file called scsi listing the available SCSI de-
                    vices. There is another subdirectory for every type of SCSI host adapter in
                    the system containing a file 0 (1 , 2 , …, for multiple adapters of the same type)
                    giving information about the SCSI adapter.

/proc/version   Contains the version number and compilation date of the current
      kernel.

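Since /proc entries are ordinary files as far as reading goes, the usual tools work on them. A small sketch (Linux only, of course):

```shell
# Kernel data in /proc is read just like any other file.
cut -d' ' -f1-3 /proc/loadavg        # the 1-, 5- and 15-minute load averages
grep -c '^processor' /proc/cpuinfo   # how many CPUs the kernel sees
```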
               B Back when /proc had not been invented, programs like the process status
                 display tool, ps , which had to access kernel information, needed to include
                 considerable knowledge about internal kernel data structures as well as the
                 appropriate access rights to read the data in question from the running ker-
                 nel. Since these data structures used to change fairly rapidly, it was often
                 necessary to install a new version of these programs along with a new ver-
                 sion of the kernel. The /proc file system serves as an abstraction layer be-
                 tween these internal data structures and the utilities: Today you just need
                 to ensure that after an internal change the data formats in /proc remain the
                 same—and ps and friends continue working as usual.

              Hardware Control—/sys The Linux kernel has featured this directory since ver-
              sion 2.6. Like /proc , it is made available on demand by the kernel itself and al-
              lows, in an extensive hierarchy of subdirectories, a consistent view on the available
              hardware. It also supports management operations on the hardware via various
              special files.

               B Theoretically, all entries in /proc that have nothing to do with individual
                 processes should slowly migrate to /sys . When this strategic goal is going
                 to be achieved, however, is anybody’s guess.

                Dynamically Changing Files—/var This directory contains dynamically changing
                files, distributed across different directories. When executing various programs,
                the user often creates data (frequently without being aware of the fact). For ex-
                ample, the man command causes compressed manual page sources to be uncom-
                pressed, while formatted man pages may be kept around for a while in case they
                are required again soon. Similarly, when a document is printed, the print data
                must be stored before being sent to the printer, e. g., in /var/spool/cups . Files in
      log files /var/log record login and logout times and other system events (the “log files”),
                /var/spool/cron contains information about regular automatic command invoca-
                tions, and users’ unread electronic mail is kept in /var/mail .

B Just so you heard about it once (it might be on the exam): On Linux, the
  system log files are generally handled by the “syslog” service. A program
  called syslogd accepts messages from other programs and sorts these ac-
  cording to their origin and priority (from “debugging help” to “error” and
  “emergency, system is crashing right now”) into files below /var/log , where
  you can find them later on. Besides writing to files, the syslog service can
  also send its messages elsewhere, such as to the console or via the network to
  another computer serving as a central “management station” that consoli-
  dates all log messages from your data center.

B Besides the syslogd , some Linux distributions also contain a klogd service.
  Its job is to accept messages from the operating system kernel and to pass
  them on to syslogd . Other distributions do not need a separate klogd since
  their syslogd can do that job itself.

B The Linux kernel emits all sorts of messages even before the system is booted
  far enough to run syslogd (and possibly klogd ) to accept them. Since the mes-
  sages might still be important, the Linux kernel stores them internally, and
  you can access them using the dmesg command.
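The kernel ring buffer and the syslog files are separate sinks, and both can be inspected from the command line. A quick sketch (on some systems reading the ring buffer requires root, when kernel.dmesg_restrict is set):

```shell
# Last few kernel messages from the ring buffer:
dmesg | tail -n 3
# A few of the log files that syslog maintains:
ls /var/log | head -n 5
```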

Transient Files—/tmp Many utilities require temporary file space, for example
some editors or sort . In /tmp , all programs can deposit temporary data. Many
distributions can be set up to clean out /tmp when the system is booted; thus you
should not put anything of lasting importance there.

B According to tradition, /tmp is emptied during system startup but /var/tmp
  isn’t. You should check what your distribution does.

Server Files—/srv      Here you will find files offered by various server programs,
such as
drwxr-xr-x    2 root       root        4096 Sep 13 01:14 ftp
drwxr-xr-x    5 root       root        4096 Sep 9 23:00 www

This directory is a relatively new invention, and it is quite possible that it does
not yet exist on your system. Unfortunately there is no other obvious place for
web pages, an FTP server’s documents, etc., that the FHS authors could agree on
(the actual reason for the introduction of /srv ), so that on a system without /srv ,
these files could end up somewhere completely different, e. g., in subdirectories
of /usr/local or /var .

Access to CD-ROM or Floppies—/media This directory is often generated auto-
matically; it contains additional empty directories, like /media/cdrom and /media/
floppy , that can serve as mount points for CD-ROMs and floppies. Depending
on your hardware setup you should feel free to add further directories such as
/media/dvd , if these make sense as mount points and have not been preinstalled by
your distribution vendor.

Access to Other Storage Media—/mnt This directory (also empty) serves as a
mount point for short-term mounting of additional storage media. With some
distributions, such as those by Red Hat, media mountpoints for CD-ROM, floppy,
… might show up here instead of below /media .

User Home Directories—/home This directory contains the home directories of
all users except root (whose home directory is located elsewhere).

B If you have more than a few hundred users, it is sensible, for privacy protec-
  tion and efficiency, not to keep all home directories as immediate children
  of /home . You could, for example, use the users’ primary group as a criterion
  for further subdivision:

                     Table 9.2: Directory division according to the FHS

                     static                            dynamic
             local   /etc , /bin , /sbin , /lib        /dev , /var/log
           remote    /usr , /opt                       /home , /var/mail


      Administrator’s Home Directory—/root The system administrator’s home direc-
      tory is located in /root . This is a completely normal home directory similar to that
      of the other users, with the marked difference that it is not located below /home but
      immediately below the root directory (/ ).
          The reason for this is that /home is often located on a file system on a separate
      partition or hard disk. However, root must be able to access their own user envi-
      ronment even if the separate /home file system is not accessible for some reason.

      Lost property—lost+found (ext file systems only; not mandated by FHS.) This di-
      rectory is used for files that look reasonable but do not seem to belong to any
      directory. The file system consistency checker creates links to such files in the
      lost+found directory on the same file system, so the system administrator can fig-
      ure out where the file really belongs; lost+found is created “on the off-chance” for
      the file system consistency checker to find in a fixed place (by convention, on the
      ext file systems, it always uses inode number 11).

      B Another motivation for the directory arrangement is as follows: The FHS di-
        vides files and directories roughly according to two criteria—do they need
        to be available locally or can they reside on another computer and be ac-
        cessed via the network, and are their contents static (do files only change
        by explicit administrator action) or do they change while the system is run-
        ning? (Table 9.2)
            The idea behind this division is to simplify system administration: Direc-
            tories can be moved to file servers and maintained centrally. Directories
            that do not contain dynamic data can be mounted read-only and are more
            resilient to crashes.

      C 9.2 [1] How many programs does your system contain in the “usual” places?

      C 9.3 If grep is called with more than one file name on the command line,
        it outputs the name of the file in question in front of every matching line.
        This is possibly a problem if you invoke grep with a shell wildcard pattern
        (such as “*.txt ”), since the exact format of the grep output cannot be fore-
        seen, which may mess up programs further down the pipeline. How can
        you enforce output of the file name, even if the search pattern expands to a
        single file name only? (Hint: There is a very useful “file” in /dev .)

      C 9.4 The “cp foo.txt /dev/null ” command does basically nothing, but the
        “mv foo.txt /dev/null ”—assuming suitable access permissions—replaces
        /dev/null by foo.txt . Why?

C 9.5 [2] On your system, which (if any) software packages are installed below
  /opt ? Which ones are supplied by the distribution and which ones are third-
  party products? Should a distribution install a “teaser version” of a third-
  party product below /opt or elsewhere? What do you think?

C 9.6 [1] Why is it inadvisable to make backup copies of the directory tree
  rooted at /proc ?

9.4     Directory Tree and File Systems
A Linux system’s directory tree usually extends over more than one partition on
disk, and removable media like CD-ROM disks, USB keys as well as portable MP3
players, digital cameras and so on must be taken into account. If you know your
way around Microsoft Windows, you are probably aware that this problem is
solved there by means of identifying different “drives” by means of letters—on
Linux, all available disk partitions and media are integrated in the directory tree
starting at “/ ”.
   In general, nothing prevents you from installing a complete Linux system
on a single hard disk partition. However, it is common to put at least the /home partitioning
directory—where users’ home directories reside—on its own partition. The ad-
vantage of this approach is that you can re-install the actual operating system,
your Linux distribution, completely from scratch without having to worry about
the safety of your own data (you simply need to pay attention at the correct mo-
ment, namely when you pick the target partition(s) for the installation in your
distribution’s installer.) This also simplifies the creation of backup copies.
   On larger server systems it is also quite usual to assign other directories, typi- server systems
cally /tmp , /var/tmp , or /var/spool , their own partitions. The goal is to prevent users
from disturbing system operations by filling important partitions completely. For
example, if /var is full, no log messages can be written to disk, so we want to
keep users from filling up the file system with large amounts of unread mail, un-
printed print jobs, or giant files in /var/tmp . On the other hand, all these partitions
tend to clutter up the system.

B More information and strategies for partitioning are presented in the Linup
  Front training manual, Linux Administration I.

    The /etc/fstab file describes how the system is assembled from various disk        /etc/fstab
partitions. During startup, the system arranges for the various file systems to be
made available—the Linux insider says “mounted”—in the correct places, which
you as a normal user do not need to worry about. What you may in fact be inter-
ested in, though, is how to access your CD-ROM disks and USB keys, and these
need to be mounted, too. Hence we do well to cover this topic briefly even though
it is really administrator country.
    To mount a medium, you require both the name of the device file for the
medium (usually a block device such as /dev/sda1 ) and a directory somewhere in
the directory tree where the content of the medium should appear—the so-called
mount point. This can be any directory.

B The directory doesn’t even have to be empty, although you cannot access the
  original content once you have mounted another medium “over” it. (The
  content reappears after you unmount the medium.)

A In principle, somebody could mount a removable medium over an important
  system directory such as /etc (ideally with a file called passwd containing
  a root entry without a password). This is why mounting of file systems in
  arbitrary places within the directory tree is restricted to the system adminis-
  trator, who will have no need for shenanigans like these, as they are already
  root .

      B Earlier on, we called the “device file for the medium” /dev/sda1 . This is really
        the first partition on the first SCSI disk drive in the system—the real name
        may be completely different depending on the type of medium you are us-
        ing. Still it is an obvious name for USB keys, which for technical reasons are
        treated by the system as if they were SCSI devices.

         With this information—device name and mount point—a system administra-
      tor can mount the medium as follows:

      # mount /dev/sda1 /media/usb

      This means that a file called file on the medium would appear as /media/usb/file
      in the directory tree. With a command such as

      # umount /media/usb                                                     Note: no ‘‘n’’

      the administrator can also unmount the medium again.
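Invoked without any arguments at all, mount does not mount anything but simply lists the file systems that are currently mounted, which makes for a handy sanity check before and after such operations:

```shell
# With no arguments, mount lists what is currently mounted,
# one "DEVICE on MOUNTPOINT type TYPE (OPTIONS)" line per file system.
mount | head -n 3
```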

      9.5     Removable Media
      The explicit mounting of removable media is a tedious business, and the explicit
      unmounting before removing a medium even more so—but especially the latter
      can lead to problems if you remove the medium physically before Linux is com-
      pletely finished with it. Linux does try to speed up the system by not executing
      slow operations like writing to media immediately but later, when the “right mo-
      ment” has arrived, and if you pull out your USB key before the data have actually
      been written there, you have in the best case gained nothing, and in the worst case
      the data on there have descended into chaos.
          As a user of a graphical desktop interface on a modern Linux system, you have
      it easy: If you insert or plug in a medium—no matter whether it is an audio CD,
      USB key, or digital camera—, a dialog appears suggesting various interesting ac-
      tions that you can perform on the medium. “Mounting” is usually one of those,
      and the system also figures out a nice mount point for you. It is just as easy to
      remove the medium later by means of an icon on the desktop background or the
      desktop environment’s control panel. We don’t need to cover this in detail here.
          Things look different on the command line, though, where you must mount
      and unmount removable media explicitly as discussed in the previous section.
      As we said, as a normal user you are not allowed to do this for arbitrary media in
      arbitrary places, but only for media that your system administrator has prepared
      for this and then only at “pre-cooked” mount points. You can recognise these
      because they have been marked with the user or users options:

      $ grep user /etc/fstab
      /dev/hdb        /media/cdrom0    udf,iso9660 ro,user,noauto 0 0
      /dev/sda1       /media/usb       auto    user,noauto        0 0
      /dev/mmcblk0p1 /media/sd         auto    user,noauto        0 0

      For the details of /etc/fstab entries we need to refer you to the Linup Front training
      manual, Linux Administration I (O. K., fstab (5) also works, but our manual is nicer);
      here and now we shall restrict ourselves to pointing out that in our example three
      types of removable media are available, namely CD-ROM disks (the first entry),
      USB-based media such as USB keys, digital cameras or MP3 players (the second
      entry), and SD cards (the third entry). As a “normal user”, you have to stick to the
      given mount points and can (after inserting the medium in question) say things like
      $ mount /dev/hdb                                                    for the CD-ROM
      $ mount /media/cdrom0                                                           ditto

$ mount /dev/sda1                                                    for the USB key
$ mount /media/sd                                                     for the SD card

That is, Linux expects either the device name or the mount point; the matching
counterpart always derives from the /etc/fstab entry. Unmounting using umount
works similarly.

B The user option in /etc/fstab makes this work (it also produces some other ef-
  fects that we shall not be treating in detail here). The users option is roughly
  the same; the difference between the two—and you may want to remem-
  ber this, as it may occur on the exam—is that, with user , only the user who
  mounted the file system originally may unmount it again. With users , any
  user may do so (!). (And root can do it all the time, anyway.)

C 9.7 [1] Insert a floppy in the drive, mount it, copy a file (like /etc/passwd ) to
  the floppy, and unmount the floppy again. (If your system is “legacy-free”
  and no longer sports a floppy disk drive, then do the same with a USB key
  or a similar suitable removable medium.)

Commands in this Chapter
dmesg     Outputs the content of the kernel message buffer           dmesg (8)    145
file      Guesses the type of a file’s content, according to rules    file (1)    138
free      Displays main memory and swap space usage                   free (1)    144
klogd     Accepts kernel log messages                                klogd (8)    145
mkfifo    Creates FIFOs (named pipes)                               mkfifo (1)    139
mknod     Creates device files                                       mknod (1)    139
syslogd   Handles system log messages                              syslogd (8)    145
uptime    Outputs the time since the last system boot as well as the system      load
          averages                                                  uptime (1)    144

   • Files are self-contained collections of data stored under a name. Linux uses
     the “file” abstraction also for devices and other objects.
   • The method of arranging data and administrative information on a disk is
     called a file system. The same term covers the complete tree-structured hi-
     erarchy of directories and files in the system or a specific storage medium
     together with the data on it.
   • Linux file systems contain plain files, directories, symbolic links, device files
     (two kinds), FIFOs, and Unix-domain sockets.
   • The Filesystem Hierarchy Standard (FHS) describes the meaning of the most
     important directories in a Linux system and is adhered to by most Linux
     distributions.
   • Removable media must be mounted into the Linux directory tree to be ac-
     cessible, and be unmounted after use. The mount and umount commands are
     used to do this. Graphical desktop environments usually offer more conve-
     nient methods.

10 System Administration

10.1   Introductory Remarks . . . . . . .           .   .   .   .   .   .   .   .   .   .   .   152
10.2   The Privileged root Account . . . . .        .   .   .   .   .   .   .   .   .   .   .   152
10.3   Obtaining Administrator Privileges . .       .   .   .   .   .   .   .   .   .   .   .   154
10.4   Distribution-specific Administrative Tools   .   .   .   .   .   .   .   .   .   .   .   156

   • Reviewing a system administrator’s tasks
   • Being able to log on as the system administrator
   • Being able to assess the advantages and disadvantage of (graphical) admin-
     istration tools

   • Basic Linux skills
   • Administration skills for other operating systems are helpful


                           10.1     Introductory Remarks
                        As a mere user of a Linux system, you are well off: You sit down in front of your
                        computer, everything is configured correctly, all the hardware is supported and
                        works. You have no care in the world since you can call upon a system adminis-
                        trator who will handle all administrative tasks for you promptly and thoroughly
                         (that’s what we hope your environment is like, anyway).
                           Should you be (or strive to be) the system administrator yourself—within your
                        company or the privacy of your home—then you have your work cut out for you:
                        You must install and configure the system and connect any peripherals. Having
                        done that, you need to keep the system running, for example by checking the sys-
                        tem logs for unusual events, regularly getting rid of old log files, making backup
                        copies, installing new software and updating existing programs, and so on.
                           Today, in the age of Linux distributions with luxurious installation tools, sys-
                        tem installation is no longer rocket science. However, an ambitious administrator
                        can spend lots of time mobilising every last resource on their system. In general,
                changes system administration mostly takes place when a noticeable change occurs, for
                        example when new hardware or software is to be integrated, new users arrive or
                        existing ones disappear, or hardware problems arise.

                   Tools   B Many Linux distributions these days contain specialised tools to facilitate
                             system administration. These tools perform different tasks ranging from
                             user management and creating file systems to complete system updates.
                             Utilities like these can make these tasks a lot easier but sometimes a lot more
                             difficult. Standard procedures are simplified but for specialised settings you
                             should know the exact relationships between system components. Further-
                             more, most of these tools are only available for certain distributions.

                             The administration of a Linux system, as of any other computer system, re-
            responsibility quires a considerable amount of responsibility and care. You should not see your-
                       self as a demigod (at least) but as a service provider. No matter whether you are
                       the only system administrator—say, on your own computer—or working in a team
         communication of colleagues to support a company network: communication is paramount. You
                       should get used to documenting configuration changes and other administrative
                       decisions in order to be able to retrace them later. The Linux way of directly edit-
                       ing text files makes this convenient, since you can comment configuration settings
                       right where they are made (a luxury not usually enjoyed with graphical adminis-
                       tration tools). Do so.

                           10.2     The Privileged root Account
                           For many tasks, the system administrator needs special privileges. Accordingly,
                           he can make use of a special user account called root . As root , a user is the so-called
                super user super user. In brief: He may do anything.
                               The normal file permissions and security precautions do not apply to root . He
                            has unlimited privileges, allowing him nearly unbounded access to all data, de-
                            vices and system components. He can institute system changes that all other
                            users are prohibited from by the Linux kernel’s security mechanisms. This means
                            that, as root , you can change every file on the system no matter who it belongs to.
                            While normal users cannot wreak damage (e. g., by destroying file systems or
                            manipulating other users’ files), root is not thus constrained.

                           B In many cases, these extensive system administrator privileges are really
                             a liability. For example, when making backup copies it is necessary to be
                             able to read all files on the system. However, this by no means implies that
                             the person making the backup (possibly an intern) should be empowered to
                             open all files on the system with a text editor, to read them or change them—
                             or start a network service which might be accessible from anywhere in the

      world. There are various ways of giving out administrator privileges only in
      controlled circumstances (such as sudo , a system which lets normal users ex- sudo
      ecute certain commands using administrator privileges), of selectively giving
      particular privileges to individual processes rather than operating on an
      “all or nothing” principle (cue POSIX capabilities), or of doing away with POSIX capabilities
      the idea of an “omnipotent” system administrator completely (for instance,
      SELinux—“security-enhanced Linux”—a freely available software package SELinux
      by the American intelligence agency, NSA, contains a “role-based” access
      control system that can get by without an omnipotent system administrator).

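To give a flavour of how sudo restricts privileges, here is a minimal sudoers rule (such rules live in /etc/sudoers and are edited with visudo; the user name and command path here are invented examples):

```
# /etc/sudoers excerpt (hypothetical): user "joe" may run exactly one
# command, the backup program, with root privileges -- and nothing else.
joe    ALL = (root) /usr/local/bin/backup
```

joe would then type “sudo /usr/local/bin/backup” and authenticate with his own password; every other privileged command remains off-limits to him.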
    Why does Linux contain security precautions in the first place? The most im-         Why Security?
portant reason is for users to be able to determine the access privileges that apply
to their own files. By setting permission bits (using the chmod command), users
can specify that certain files may be read, written to, or executed by certain other
users (or by none). This helps safeguard the privacy and integrity of their data. You
would certainly not approve of other users being able to read your private e-mail
or change the source code of an important program behind your back.
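As a small illustration of the chmod mechanism just mentioned (the file name is invented):

```shell
# A throwaway directory keeps the demonstration self-contained.
dir=$(mktemp -d)
touch "$dir/notes.txt"
chmod 640 "$dir/notes.txt"     # owner: read/write, group: read, others: nothing
stat -c '%a' "$dir/notes.txt"  # prints 640 (GNU coreutils stat)
```

Here the owner keeps full access to notes.txt, group members may only read it, and everybody else is locked out entirely.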
    The security mechanisms are also supposed to keep users from damaging the
system. Access to many of the device files in /dev corresponding to hardware com-        Access control for devices
ponents such as hard disks is constrained by the system. If normal users could ac-
cess disk storage directly, all sorts of mayhem might occur (a user might overwrite
the complete content of a disk or, having obtained information about the layout
of the filesystem on the disk, access files that are none of his business). Instead,
the system forces normal users to access the disks via the file system and protects
their data in that way.
    It is important to stress that damage is seldom caused on purpose. The system’s
security mechanisms serve mostly to save users from unintentional mistakes and
misunderstandings; only in the second instance are they meant to protect the pri-
vacy of users and data.
    On the system, users can be pooled into groups to which you may assign their         groups
own access privileges. For example, a team of software developers could have
read and write permission to a number of files, while other users are not allowed
to change these files. Every user can determine for their own files how permissive
or restrictive access to them should be.
    The security mechanisms also prevent normal users from performing certain
actions such as the invocation of specific system calls from a program. For exam-        Privileged system calls
ple, there is a system call that will halt the system, which is executed by programs
such as shutdown when the system is to be powered down or rebooted. If normal
users were allowed to invoke this routine from their own programs, they could
inadvertently (or intentionally) stop the system at any time.
    The administrator frequently needs to circumvent these security mechanisms
in order to maintain the system or install updated software versions. The root
account is meant to allow exactly this. A good administrator can do his work
without regard for the usual access permissions and other constraints, since these
do not apply to root . The root account is not better than a normal user account
because it has more privileges; the restriction of these privileges to root is a secu-
rity measure. Since the operating system’s reasonable and helpful protection and
security mechanisms do not apply to the system administrator, working as root
is very risky. You should therefore use root to execute only those commands that
really require the privileges.

B Many of the security problems of other popular operating systems can be
  traced back to the fact that normal users generally enjoy administrator priv-
  ileges. Thus, programs such as “worms” or “Trojan horses”, which users
  often execute by accident, find it easy to establish themselves on the sys-
  tem. With a Linux system that is correctly installed and operated, this is
                                         hardly possible since users read their e-mail without administrator privileges,
                                         but administrator privileges are required for all system-wide configuration
                                         changes.

                                  B Of course, Linux is not magically immune against malicious pests like
                                    “mail worms”; somebody could write and make popular a mail program
                                    that would execute “active content” such as scripts or binary programs
                                     within messages, as some such programs do on other operating systems.
                                    On Linux, such a “malicious” program from elsewhere could remove all
                                    the caller’s files or try to introduce “Trojan” code to his environment, but
                                    it could not harm other users nor the system itself—unless it exploited a
                                    security vulnerability in Linux that would let a local user gain administrator
                                    privileges “through the back door” (such vulnerabilities are detected now
                                    and again, and patches are promptly published which you should install in
                                    a timely manner).

                                  C 10.1 [2] What is the difference between a user and an administrator? Name
                                    examples for tasks and actions (and suitable commands) that are typically
                                    performed from a user account and the root account, respectively.

                                  C 10.2 [!1] Why should you, as a normal user, not use the root account for your
                                    daily work?

                                  C 10.3 What about access control on your computer at home? Do you work
                                    from an administrator account?

                                  10.3     Obtaining Administrator Privileges
                                  There are two ways of obtaining administrator privileges:
                                     1. You can log in as user root directly. After entering the correct root password
                                        you will obtain a shell with administrator privileges. However, you should
                                        avoid logging in to the GUI as root , since then all graphical applications in-
                                        cluding the X server would run with root privileges, which is not necessary
                                        and can lead to security problems. Nor should direct root logins be allowed
                                        across the network.

                                       B You can determine which terminals are eligible for direct root login
                                         by listing them in the /etc/securetty file. The default setting is usually
                                          “all virtual consoles and /dev/ttyS0 ” (the latter for users of the “serial
                                          console”).
                                     2. You can, from a normal shell, use the su command to obtain a new shell with
                                        administrator privileges. su , like login , asks for a password and opens the
                                        root shell only after the correct root password has been input. In GUIs like
                                        KDE there are similar methods.
                                  (See also Introduction to Linux for Users and Administrators.)
      Single-user systems, too!       Even if a Linux system is used by a single person only, it makes sense to create
                                  a normal account for this user. During everyday work on the system as root , most
                                  of the kernel’s normal security precautions are circumvented. That way errors can
                                  occur that affect the whole system. You can avoid this danger by logging into
                                  your normal account and starting a root shell via “/bin/su - ” if and when required.

                                  B Using su , you can also assume the identity of arbitrary other users (here hugo )
                                    by invoking it like

                                         $ /bin/su - hugo
      You need to know the target user’s password unless you are calling su as
      user root .

   The second method is preferable to the first for another reason, too: If you use
the su command to become root after logging in to your own account, su creates a
message like

Apr   1 08:18:21 HOST su: (to root) user1 on /dev/tty2

in the system log (such as /var/log/messages ). This entry means that user user1 suc- system log
cessfully executed su to become root on terminal 2. If you log in as root directly,
no such message is logged; there is no way of figuring out which user has fooled
around with the root account. On a system with several administrators it is often
important to retrace who entered the su command when.
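Where exactly this message ends up depends on the distribution’s syslog configuration; a tolerant way of looking for such entries might be:

```shell
# show_su_entries: print su-related log lines from whichever common
# syslog file exists on this system. The file locations differ per
# distribution, and systemd-based systems use "journalctl" instead.
show_su_entries() {
    for f in /var/log/messages /var/log/auth.log /var/log/syslog; do
        [ -r "$f" ] && grep 'su:' "$f"
    done
    return 0    # "nothing found" is not an error here
}
show_su_entries
```

On a system where nobody has used su yet, this simply prints nothing.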

       Ubuntu is one of the “newfangled” distributions that deprecate—and, in the
       default setup, even disable—logging in as root . Instead, particular users
      may use the sudo mechanism to execute individual commands with admin-
      istrator privileges. Upon installation, you are asked to create a “normal”
      user account, and that user account is automatically endowed with “indi-
      rect” administrator privileges.

      When installing Debian GNU/Linux, you can choose between assigning a
      password to the root account and thereby enabling direct administrator lo-
      gins, and declining this and, as on Ubuntu, giving sudo -based administrator
      privileges to the first unprivileged user account created as part of the instal-
      lation process.

   On many systems, the shell prompt differs between root and the other users. shell prompt
The classic root prompt contains a hash mark (# ), while other users see a prompt
containing a dollar sign ($ ) or greater-than sign (> ). The # prompt is supposed
to remind you that you are root with all ensuing privileges. However, the shell
prompt is easily changed, and it is your call whether to follow this convention or not.

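Since the prompt is configurable, a more reliable check is the id command; a UID of 0 always means root:

```shell
# "id -u" prints the numerical user ID of the current shell process;
# 0 means root, anything else is a normal account.
if [ "$(id -u)" -eq 0 ]; then
    echo 'working as root -- careful!'
else
    echo 'normal user account'
fi
```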
B Of course, if you are using sudo , you never get to see a prompt for root .
    Like all powerful tools, the root account can be abused. Therefore it is impor-      Misuse of root
tant for you as the system administrator to keep the root password secret. It
should only be passed on to users who are trusted both professionally and per-
sonally (or who can be held responsible for their actions). If you are the sole user
of the system this problem does not apply to you.
    Too many cooks spoil the broth! This principle also applies to system admin-         Administration: alone or by
istration. The main benefit of “private” use of the root account is not that the         many
possibility of misuse is minimised (even though this is surely a consequence).
More importantly, the sole user of the root account knows the complete
system configuration. If somebody besides the administrator can, for example,
change important system files, then the system configuration could be changed
without the administrator’s knowledge. In a commercial environment, it is nec-
essary to have several suitably privileged employees for various reasons—for ex-
ample, safeguarding system operation during holidays or sudden severe illness
of the administrator—and this requires close cooperation and communication.
    If there is only one system administrator who is responsible for system con-
figuration, you can be sure that one person really knows what is going on on the
system (at least in theory), and the question of accountability also has an obvi-        accountability
ous answer. The more users have access to root , the greater is the probability that
somebody will commit an error as root at some stage. Even if all users with root
access possess suitable administration skills, mistakes can happen to anybody.
Prudence and thorough training are the only precautions against accidents.
           There are a few other useful tools for team-based system administration.
           For example, Debian GNU/Linux and Ubuntu support a package called
           etckeeper , which allows storing the complete content of the /etc directory in
           a revision control system such as Git or Mercurial. Revision control systems
           (which we cannot cover in detail here) make it possible to track changes to
           files in a directory hierarchy in a very detailed manner, to comment them
           and, if necessary, to undo them. With Git or Mercurial it is even possible to
           store a copy of the /etc directory on a completely different computer and to
           keep it in sync automatically—great protection from accidents.
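The idea behind etckeeper can be sketched with plain Git (the directory and commit message below are invented; etckeeper automates steps of this kind for /etc and ties them into the package manager):

```shell
# Track configuration changes in Git, the way etckeeper does for /etc.
dir=$(mktemp -d)                        # stand-in for /etc in this sketch
echo 'PermitRootLogin no' > "$dir/sshd_config"
git -C "$dir" init -q
git -C "$dir" add sshd_config
git -C "$dir" -c user.name=admin -c user.email=admin@example.org \
    commit -q -m 'initial configuration'
git -C "$dir" log --oneline             # one commit documenting the change
```

Every later change to a file in the directory can be committed with a comment, inspected with git log, and undone with git revert.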

      C 10.4 [2] What methods exist to obtain administrator rights? Which method
        is better? Why?

      C 10.5 [!2] On a conventionally configured system, how can you recognise
        whether you are working as root ?

      C 10.6 [2] Log in as a normal user (e. g., test ). Change over to root and back to
        test . How do you work best if you frequently need to change between both
        these accounts (for example, to check on the results of a new configuration)?

      C 10.7 [!2] Log in as a normal user and change to root using su . Where do you
        find a log entry documenting this change? Look at that message.

      10.4     Distribution-specific Administrative Tools
      Many Linux distributions try to stand out in the crowd by providing more or less
      ingenious tools that are supposed to simplify system administration. These tools
      are usually tailored to the distributions in question. Here are a few comments
      about typical specimens:

           A familiar sight to SUSE administrators is “YaST”, the graphical adminis-
           tration interface of the SUSE distributions (it also runs on a text screen). It
           allows the extensive configuration of many aspects of the system either by
           directly changing the configuration files concerned or by manipulating ab-
           stract configuration files below /etc/sysconfig which are then used to adapt
           the real configuration files by means of the SuSEconfig tool. For some tasks
           such as network configuration, the files below /etc/sysconfig are the actual
           configuration files.

           Unfortunately, YaST is not a silver bullet for all problems of system admin-
           istration. Even though many aspects of the system are amenable to YaST-
           based administration, important settings may not be accessible via YaST, or
           the YaST modules in question simply do not work correctly. The danger
           zone starts where you try to administer the computer partly through YaST
            and partly through changing configuration files manually: YaST does exer-
            cise some care not to overwrite your changes (which wasn’t the case in the
            past—up until SuSE 6 or so, YaST and SuSEconfig used to be quite reckless),
           but will then not perform its own changes such that they really take effect in
           the system. In other places, manual changes to the configuration files will
           actually show up in YaST. Hence you have to have some “insider knowl-
           edge” and experience in order to assess which configuration files you may
           change directly and which your grubby fingers had better not touch.

           Some time ago, Novell released the YaST source code under the GPL (in
           SUSE’s time it used to be available but not under a “free” licence). However,
           so far no other distribution of consequence has adapted YaST to its purposes,
           let alone made it a standard tool (SUSE fashion).
B The Webmin package by Jamie Cameron allows the
  convenient administration of various Linux distributions (or Unix versions)
  via a web-based interface. Webmin is very extensive and offers special fa-
  cilities for administering “virtual” servers (for web hosters and their cus-
  tomers). However you may have to install it yourself, since most distribu-
  tions do not provide it. Webmin manages its own users, which means that
  you can extend administrator privileges to users who do not have interac-
  tive system access. (Whether that is a smart idea is a completely different question.)

   Most administration tools like YaST and Webmin share the same disadvantages:
   • They are not extensive enough to take over all aspects of system adminis-
     tration, and as an administrator you have to have detailed knowledge of their
     limits in order to be able to decide where to intervene manually.
   • They make system administration possible for people whose expertise is
     not adequate to assess the possible consequences of their actions or to find
     and correct mistakes. Creating a user account using an administration tool
     is certainly not a critical job and surely more convenient than editing four
     different system files using vi , but other tasks such as configuring a fire-
     wall or mail server are not suitable for laypeople even using a convenient
     administration tool. The danger is that inexperienced administrators will
     use an administration tool to attempt tasks which do not look more com-
     plicated than others but which, without adequate background knowledge,
     may endanger the safety and/or reliability of the system.
   • They usually do not offer a facility to version control or document any
     changes made, and thus complicate teamwork and auditing by requiring
     logs to be kept externally.
   • They are often intransparent, i. e., they do not provide documentation about
     the actual steps they take on the system to perform administrative tasks.
     This keeps the knowledge about the necessary procedures buried in the pro-
     grams; as the administrator you have no direct way of “learning” from the
     programs like you could by observing an experienced administrator. Thus
      the administration tools keep you artificially stupid.
   • As an extension of the previous point: If you need to administer several
     computers, common administration tools force you to execute the same
     steps repeatedly on every single machine. Many times it would be more
     convenient to write a shell script automating the required procedure, and to
     execute it automatically on every computer using, e. g., the “secure shell”,
     but the administration tool does not tell you what to put into this shell
     script. Therefore, viewed in a larger context, their use is inefficient.
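The scripting idea from the last point might be sketched like this (the host names and the remote command are invented; the RUN variable allows a dry run without actually connecting anywhere):

```shell
# run_on_all: execute the same command on several machines via ssh.
# The host list and the command are made-up examples.
hosts='red.example.org green.example.org blue.example.org'

run_on_all() {
    cmd=$1
    for host in $hosts; do
        ${RUN:-ssh} "$host" "$cmd"    # RUN=echo turns this into a dry run
    done
}

# Dry run: print what would be executed instead of connecting.
RUN=echo run_on_all 'apt-get -y upgrade'
```

Once such a script exists, adding a tenth or a hundredth machine costs nothing, and the script itself documents the procedure.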
    From various practical considerations like these we would like to recommend
against relying too much on the “convenient” administration tools provided by
the distributions. They are very much like training wheels on a bicycle: They
work effectively against falling over too early and provide a very large sense of
achievement very quickly, but the longer the little ones zoom about with them, the
more difficult it becomes to get them used to “proper” bike-riding (here: doing
administration in the actual configuration files, including all advantages such as
documentation, transparency, auditing, team capability, transportability, …).
    Excessive dependence on an administration tool also leads to excessive depen-
dence on the distribution featuring that tool. This may not seem like a real liabil-
ity, but on the other hand one of the more important advantages of Linux is the fact
that there are multiple independent vendors. So, if one day you should be fed up
with the SUSE distributions (for whatever reason) and want to move over to Red
Hat or Debian GNU/Linux, it would be very inconvenient if your administrators
       knew only YaST and had to relearn Linux administration from scratch. (Third-
       party administration tools like Webmin do not exhibit this problem to the same degree.)

      C 10.8 [!2] Does your distribution provide an administration tool (such as
        YaST)? What can you do with it?

      C 10.9 [3] (Continuation of the previous exercise—when working through the
        manual for the second time.) Find out how your administration tool works.
        Can you change the system configuration manually so the administration
        tool will notice your changes? Only under some circumstances?

      C 10.10 [!1] Administration tools like Webmin are potentially accessible to ev-
         erybody with a browser. Which advantages and disadvantages result from this?

      Commands in this Chapter
      su      Starts a shell using a different user’s identity        su (1) 154
      sudo    Allows normal users to execute certain commands with administrator
              privileges                                            sudo (8) 152

         • Every computer installation needs a certain amount of system administra-
           tion. In big companies, universities and similar institutions these services
           are provided by (teams of) full-time administrators; in smaller companies
           or private households, (some) users usually serve as administrators.
         • Linux systems are, on the whole, straightforward to administer. Work arises
           mostly during the initial installation and, during normal operation, when
           the configuration changes noticeably.
         • On Linux systems, there usually is a privileged user account called root , to
           which the normal security mechanisms do not apply.
         • As an administrator, one should not work as root exclusively, but use a nor-
           mal user account and assume root privileges only if necessary.
         • Administration tools such as YaST or Webmin can help perform some ad-
           ministrative duties, but are no substitute for administrator expertise and
           may have other disadvantages as well.

User Administration

11.1 Basics . . . . . . . . . . . . . . . . . .            .   .   .   .   .   .   160
    11.1.1 Why Users? . . . . . . . . . . . . . .          .   .   .   .   .   .   160
    11.1.2 Users and Groups . . . . . . . . . . .          .   .   .   .   .   .   161
    11.1.3 People and Pseudo-Users . . . . . . . . .       .   .   .   .   .   .   163
11.2 User and Group Information . . . . . . . . . .        .   .   .   .   .   .   163
    11.2.1 The /etc/passwd File . . . . . . . . . . .      .   .   .   .   .   .   163
    11.2.2 The /etc/shadow File . . . . . . . . . . .      .   .   .   .   .   .   166
    11.2.3 The /etc/group File . . . . . . . . . . .       .   .   .   .   .   .   168
    11.2.4 The /etc/gshadow File . . . . . . . . . . .     .   .   .   .   .   .   169
    11.2.5 The getent Command . . . . . . . . . .          .   .   .   .   .   .   170
11.3 Managing User Accounts and Group Information . .      .   .   .   .   .   .   170
    11.3.1 Creating User Accounts . . . . . . . . .        .   .   .   .   .   .   171
    11.3.2 The passwd Command . . . . . . . . . .          .   .   .   .   .   .   172
    11.3.3 Deleting User Accounts . . . . . . . . .        .   .   .   .   .   .   174
    11.3.4 Changing User Accounts and Group Assignment     .   .   .   .   .   .   174
    11.3.5 Changing User Information Directly—vipw . . .   .   .   .   .   .   .   175
    11.3.6 Creating, Changing and Deleting Groups . . .    .   .   .   .   .   .   175

   • Understanding the user and group concepts of Linux
   • Knowing how user and group information is stored on Linux
   • Being able to use the user and group administration commands

   • Knowledge about handling configuration files

                      11.1     Basics
                      11.1.1    Why Users?
                      Computers used to be large and expensive, but today an office workplace without
                      its own PC (“personal computer”) is nearly inconceivable, and a computer is likely
                      to be encountered in most domestic “dens” as well. And while it may be sufficient
                      for a family to agree that Dad, Mom and the kids will put their files into different
                      directories, this will no longer do in companies or universities—once shared disk
                      space or other facilities are provided by central servers accessible to many users,
                      the computer system must be able to distinguish between different users and to
                      assign different access rights to them. After all, Ms Jones from the Development
                      Division has as little business looking at the company’s payroll data as Mr Smith
                      from Human Resources has accessing the detailed plans for next year’s products.
                      And a measure of privacy may be desired even at home—the Christmas present
                      list or teenage daughter’s diary (erstwhile fitted with a lock) should not be open
                      to prying eyes as a matter of course.

                      B We shall be discounting the fact that teenage daughter’s diary may be visible
                        to the entire world on Facebook (or some such); and even if that is the case,
                        the entire world should surely not be allowed to write to teenage daughter’s
                         diary. (Which is why even Facebook supports the notion of different users.)

                          The second reason for distinguishing between different users follows from the
                      fact that various aspects of the system should not be visible, much less change-
                      able, without special privileges. Therefore Linux manages a separate user iden-
                      tity (root ) for the system administrator, which makes it possible to keep informa-
                      tion such as users’ passwords hidden from “common” users. The bane of older
                      Windows systems—programs obtained by e-mail or indiscriminate web surfing
                      that then wreak havoc on the entire system—will not plague you on Linux, since
                      anything you can execute as a common user will not be in a position to wreak
                      system-wide havoc.

                      A Unfortunately this is not entirely correct: Every now and then a bug comes
                        to light that enables a “normal user” to do things otherwise restricted to
                        administrators. This sort of error is extremely nasty and usually corrected
                        very quickly after having been found, but there is a considerable chance that
                        such a bug has remained undetected in the system for an extended period
                        of time. Therefore, on Linux (as on all other operating systems) you should
                        strive to run the most current version of critical system parts like the kernel
                        that your distributor supports.

                      A Even the fact that Linux safeguards the system configuration from unau-
                        thorised access by normal users should not entice you to shut down your
                        brain. We do give you some advice (such as not to log in to the graphical
                        user interface as root ), but you should keep thinking along. E-mail messages
                        asking you to view web site 𝑋 and enter your credit card number and PIN
                        there can reach you even on Linux, and you should disregard them in the
                        same way as everywhere else.

      user accounts      Linux distinguishes between different users by means of different user ac-
                      counts. The common distributions typically create two user accounts during
                      installation, namely root for administrative tasks and another account for a “nor-
                      mal” user. You (as the administrator) may add more accounts later, or, on a client
                      PC in a larger network, they may show up automatically from a user account
                      database stored elsewhere.

                      B Linux distinguishes between user accounts, not users. For example, no one
                        keeps you from using a separate user account for reading e-mail and surf-
                        ing the web, if you want to be 100% sure that things you download from the
       Net have no access to your important data (which might otherwise happen
       in spite of the user/administrator divide). With a little cunning you can
       even display a browser and e-mail program running under your “surfing
       account” among your “normal” programs1 .

    Under Linux, every user account is assigned a unique number, the so-called
user ID (or UID, for short). Every user account also features a textual user name UID
(such as root or joe ) which is easier to remember for humans. In most places where user name
it counts—e. g., when logging in, or in a list of files and their owners—Linux will
use the textual name whenever possible.

B The Linux kernel does not know anything about textual user names; process
  data and the ownership data in the filesystem use the UID exclusively. This
  may lead to difficulties if a user is deleted while he still owns files on the
  system, and the UID is reassigned to a different user. That user “inherits”
  the previous UID owner’s files.

B There is no technical problem with assigning the same (numerical) UID to
  different user names. These users have equal access to all files owned by that
  UID, but every user can have his own password. You should not actually
  use this (or if you do, use it only with great circumspection).

11.1.2      Users and Groups
To work with a Linux computer you need to log in first. This allows the system
to recognise you and to assign you the correct access rights (of which more later).
Everything you do during your session (from logging in to logging out) happens
under your user account. In addition, every user has a home directory, where home directory
only they can store and manage their own files, and where other users often have
no read permission and very emphatically no write permission. (Only the system
administrator—root —may read and write all files.)

A Depending on which Linux distribution you use (cue: Ubuntu) it may be
  possible that you do not have to log into the system explicitly. This is be-
  cause the computer “knows” that it will usually be you and simply assumes
  that this is going to be the case. You are trading security for convenience; this
  particular deal probably makes sense only where you can stipulate with rea-
  sonable certainty that nobody except you will switch on your computer—
  and hence should be restricted by rights to the computer in your single-
  person household without a cleaner. We told you so.

   Several users who want to share access to certain system resources or files can
form a group. Linux identifies group members either fixedly by name or tran-
siently by a login procedure similar to that for users. Groups have no “home di-
rectories” like users do, but as the administrator you can of course create arbitrary
directories meant for certain groups and having appropriate access rights.
   Groups, too, are identified internally using numerical identifiers (“group IDs”
or GIDs).

B Group names relate to GIDs as user names to UIDs: The Linux kernel only
  knows about the former and stores only the former in process data or the
  file system.

   Every user belongs to a primary group and possibly several secondary or addi-
tional groups. In a corporate setting it would, for example, be possible to introduce
project-specific groups and to assign the people collaborating on those projects
to the appropriate group in order to allow them to manage common data in a
directory only accessible to group members.
   1 Which of course is slightly more dangerous again, since programs running on the same screen

can communicate with one another.
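
The project directory just described can be sketched as follows. The group name project1 and the location are assumptions, and the steps that need root privileges are shown as comments:

```shell
# Sketch: a directory for shared group work. Because of the setgid bit,
# new files created inside inherit the directory's group.
dir=${PROJECT_DIR:-./project1}   # on a real system: e.g. /srv/project1
mkdir -p "$dir"
# groupadd project1              # requires root; "project1" is hypothetical
# usermod -aG project1 joe      # add a member (requires root)
# chgrp project1 "$dir"          # requires root (or membership in the group)
chmod 2770 "$dir"                # rwxrws---: owner and group only, setgid set
ls -ld "$dir"
```

The leading 2 in the mode sets the setgid bit, which is what makes files created in the directory belong to the directory's group rather than the creator's primary group.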

                For the purposes of access control, all groups carry equivalent weight—every
             user always enjoys all rights deriving from all the groups that he is a member of.
             The only difference between the primary and secondary groups is that files newly
             created by a user are usually2 assigned to his primary group.

             B Up to (and including) version 2.4 of the Linux kernel, a user could be a mem-
               ber of at most 32 additional groups; since Linux 2.6 the number of secondary
               groups is unlimited.

                You can find out a user account’s UID, the primary and secondary groups and
             the corresponding GIDs by means of the id program:

             $ id
uid=1000(joe) gid=1000(joe) groups=24(cdrom),29(audio),44(video),…
             $ id root
             uid=0(root) gid=0(root) groups=0(root)

             B With the options -u , -g , and -G , id lets itself be persuaded to output just the
               account’s UID, the GID of the primary group, or the GIDs of the secondary
               groups. (These options cannot be combined.) With the additional option -n
               you get names instead of numbers:

                      $ id -G
                      1000 24 29 44
                      $ id -Gn
                      joe cdrom audio video

              B The groups command yields the same result as the “id -Gn ” command.
                 You can use the last command to find who logged into your computer and
             when (and, in the case of logins via the network, from where):

             $ last
             joe        pts/1          pcjoe.example.c      Wed   Feb   29   10:51   still logged in
             bigboss    pts/0          pc01.example.c       Wed   Feb   29   08:44   still logged in
             joe        pts/2          pcjoe.example.c      Wed   Feb   29   01:17 - 08:44 (07:27)
             sue        pts/0          :0                   Tue   Feb   28   17:28 - 18:11 (00:43)
             reboot     system boot 3.2.0-1-amd64           Fri Feb      3 17:43 - 13:25 (4+19:42)

             For network-based sessions, the third column specifies the name of the ssh client
             computer. “:0 ” denotes the graphical screen (the first X server, to be exact—there
             might be more than one).

             B Do also note the reboot entry, which tells you that the computer was started.
               The third column contains the version number of the Linux operating sys-
               tem kernel as provided by “uname -r ”.

                With a user name, last provides information about a particular user:

              $ last joe
             joe        pts/1          pcjoe.example.c      Wed Feb 29 10:51   still logged in
             joe        pts/2          pcjoe.example.c      Wed Feb 29 01:17 - 08:44 (07:27)
                 2 The exception occurs where the owner of a directory has decreed that new files and subdirectories

             within this directory are to be assigned to the same group as the directory itself. We mention this
             strictly for completeness.

B You might be bothered (and rightfully so!) by the fact that this somewhat
  sensitive information is apparently made available on a casual basis to arbi-
  trary system users. If you (as the administrator) want to protect your users’
  privacy somewhat better than your Linux distribution does by default, you
  can use the
       # chmod o-r /var/log/wtmp

      command to remove general read permissions from the file that last con-
      sults for the telltale data. Users without administrator privileges then get to
      see something like

       $ last
       last: /var/log/wtmp: Permission denied

11.1.3    People and Pseudo-Users
Besides “natural” persons—the system’s human users—the user and group con-
cept is also used to allocate access rights to certain parts of the system. This means
that, in addition to the personal accounts of the “real” users like you, there are fur-
ther accounts that do not correspond to actual human users but are assigned to
administrative functions internally. They define functional “roles” with their own
accounts and groups.
   After installing Linux, you will find several such pseudo-users and groups in
the /etc/passwd and /etc/group files. The most important role is that of the root user
(which you know) and its eponymous group. The UID and GID of root are 0 (zero).

B root ’s privileges are tied to UID 0; GID 0 does not confer any additional
  access privileges.

    Further pseudo-users belong to certain software systems (e. g., news for Usenet
news using INN, or postfix for the Postfix mail server) or certain components or
devices (such as printers, tape or floppy drives). You can access these accounts,
if necessary, like other user accounts via the su command. These pseudo-users
are helpful as file or directory owners, in order to fit the access rights tied to file
ownership to special requirements without having to use the root account. The
same applies to groups; the members of the disk group, for example, have block-
level access to the system’s disks.

C 11.1 [1] How does the operating system kernel differentiate between various
  users and groups?

C 11.2 [2] What happens if a UID is assigned to two different user names? Is
  that allowed?

C 11.3 [1] What is a pseudo-user? Give examples!

C 11.4 [2] (On the second reading.) Is it acceptable to assign a user to group
  disk who you would not want to trust with the root password? Why (not)?

11.2     User and Group Information
11.2.1    The /etc/passwd File
The /etc/passwd file is the system user database. There is an entry in this file for
every user on the system—a line consisting of attributes like the Linux user name,

                   “real” name, etc. After the system is first installed, the file contains entries for
                   most pseudo-users.
                      The entries in /etc/passwd have the following format:

                    ⟨user name⟩: ⟨password⟩: ⟨UID⟩: ⟨GID⟩: ⟨GECOS⟩: ⟨home directory⟩: ⟨shell⟩

                   ⟨user name⟩ This name should consist of lowercase letters and digits; the first char-
                          acter should be a letter. Unix systems often consider only the first eight
                          characters—Linux does not have this limitation but in heterogeneous net-
                          works you should take it into account.

                        A Resist the temptation to use umlauts, punctuation and similar special
                          characters in user names, even if the system lets you do so—not all
                          tools that create new user accounts are picky, and you could of course
                          edit /etc/passwd by hand. What seems to work splendidly at first glance
                          may lead to problems elsewhere later.

                         B You should also stay away from user names consisting of only upper-
                           case letters or only digits. The former may give their owners trouble
                           logging in (see Exercise 11.6), the latter can lead to confusion, espe-
                            cially if the numerical user name does not equal the account’s numeri-
                            cal UID. Commands such as “ls -l ” will display the UID if there is no
                           corresponding entry for it in /etc/passwd , and it is not exactly straight-
                           forward to tell UIDs from purely numerical user names in ls output.

                   ⟨password⟩ Traditionally, this field contains the user’s encrypted password. Today,
                         most Linux distributions use “shadow passwords”; instead of storing the
                          password in the publicly readable /etc/passwd file, it is stored in /etc/shadow
                         which can only be accessed by the administrator and some privileged pro-
                         grams. In /etc/passwd , a “x ” calls attention to this circumstance. Every user
                         can avail himself of the passwd program to change his password.
                    ⟨UID⟩ The numerical user identifier—a number between 0 and 2³² − 1. By con-
                        vention, UIDs from 0 to 99 (inclusive) are reserved for the system, UIDs
                        from 100 to 499 are for use by software packages if they need pseudo-user
                        accounts. With most popular distributions, “real” users’ UIDs start from
                        500 (or 1000).
                         Precisely because the system differentiates between users not by name but
                         by UID, the kernel treats two accounts as completely identical if they con-
                         tain different user names but the same UID—at least as far as the access
                          privileges are concerned. Commands that display a user name (e. g., “ls -l ”
                          or id ) show the one used when the user logged in.

                    ⟨GID⟩ The GID of the user’s primary group after logging in.

                              The Novell/SUSE distributions (among others) assign a single group
                              such as users as the shared primary group of all users. This method is
                              quite established as well as easy to understand.

                              Many distributions, such as those by Red Hat or Debian GNU/Linux,
                              create a new group whenever a new account is created, with the GID
                              equalling the account’s UID. The idea behind this is to allow more
                              sophisticated assignments of rights than with the approach that puts
                              all users into the same group users . Consider the following situation:
                              Jim (user name jim ) is the personal assistant of CEO Sue (user name
                              sue ). In this capacity he sometimes needs to access files stored inside
                              Sue’s home directory that other users should not be able to get at. The
                              method used by Red Hat, Debian & co., “one group per user”, makes it
                              straightforward to put user jim into group sue and to arrange for Sue’s

           files to be readable for all group members (the default case) but not oth-
           ers. With the “one group for everyone” approach it would have been
           necessary to introduce a new group completely from scratch, and to
           reconfigure the jim and sue accounts accordingly.

      By virtue of the assignment in /etc/passwd , every user must be a member of
      at least one group.

      B The user’s secondary groups (if applicable) are determined from en-
        tries in the /etc/group file.

⟨GECOS⟩ This is the comment field, also known as the “GECOS field”.

      B GECOS stands for “General Electric Comprehensive Operating Sys-
        tem” and has nothing whatever to do with Linux, except that in the
        early days of Unix this field was added to /etc/passwd in order to keep
        compatibility data for a GECOS remote job entry service.

      This field contains various bits of information about the user, in particular
      his “real” name and optional data such as the office number or telephone
      number. This information is used by programs such as mail or finger . The
       full name is often included in the sender’s address by news and mail software.

      B Theoretically there is a program called chfn that lets you (as a user)
        change the content of your GECOS field. Whether that works in any
        particular case is a different question, since at least in a corporate set-
        ting one does not necessarily want to allow people to change their
        names at a whim.

⟨home directory⟩ This directory is that user’s personal area for storing his own files.
      A newly created home directory is by no means empty, since a new user
      normally receives a number of “profile” files as his basic equipment. When
      a user logs in, his shell uses his home directory as its current directory, i. e.,
      immediately after logging in the user is deposited there.
⟨shell⟩ The name of the program to be started by login after successful authentication—
       this is usually a shell. The seventh field extends through the end of the line.

      B The user can change this entry by means of the chsh program. The
        eligible programs (shells) are listed in the /etc/shells file. If a user is
        not supposed to have an interactive shell, an arbitrary program, with
        arguments, can be entered here (a common candidate is /bin/true ). This
        field may also remain empty, in which case the standard shell /bin/sh
        will be started.

      B If you log in to a graphical environment, various programs will be
        started on your behalf, but not necessarily an interactive shell. The
        shell entry in /etc/passwd comes into its own, however, when you in-
        voke a terminal emulator such as xterm or konsole , since these programs
        usually check it to identify your preferred shell.

Some of the fields shown here may be empty. Absolutely necessary are only the
user name, UID, GID and home directory. For most user accounts, all the fields
will be filled in, but pseudo-users might use only part of the fields.
   The home directories are usually located below /home and take their name from
their owner’s user name. In general this is a fairly sensible convention which
makes a given user’s home directory easy to find. In theory, a home directory
might be placed anywhere in the file system under a completely arbitrary name.

B On large systems it is common to introduce one or more additional levels
  of directories between /home and the “user name” directory, such as

                    /home/hr/joe                                      Joe from Human Resources
                    /home/devel/sue                                        Sue from Development
                    /home/exec/bob                                                  Bob the CEO

                   There are several reasons for this. On the one hand this makes it easier to
                   keep one department’s home directory on a server within that department,
                   while still making it available to other client computers. On the other hand,
                   Unix (and some Linux) file systems used to be slow dealing with directories
                   containing very many files, which would have had an unfortunate impact
                   on a /home with several thousand entries. However, with current Linux file
                   systems (ext3 with dir_index and similar) this is no longer an issue.

   Note that as an administrator you should not really be editing /etc/passwd by
hand. There are a number of programs that will help you create and maintain user
accounts.

              B In principle it is also possible to store the user database elsewhere than in
                /etc/passwd . On systems with very many users (thousands), storing user
                data in a relational database is preferable, while in heterogeneous networks
                a shared multi-platform user database, e. g., based on an LDAP directory,
                might recommend itself. The details of this, however, are beyond the scope
                of this course.

             11.2.2      The /etc/shadow File
             For security, nearly all current Linux distributions store encrypted user passwords
             in the /etc/shadow file (“shadow passwords”). This file is unreadable for normal
             users; only root may write to it, while members of the shadow group may read it in
             addition to root . If you try to display the file as a normal user an error occurs.

              B Use of /etc/shadow is not mandatory but highly recommended. However
                there may be system configurations where the additional security afforded
                by shadow passwords is nullified, for example if NIS is used to export user
                data to other hosts (especially in heterogeneous Unix environments).

               Again, this file contains one line for each user, with the following format:

              ⟨user name⟩: ⟨password⟩: ⟨change⟩: ⟨min⟩: ⟨max⟩
               : ⟨warn⟩: ⟨grace⟩: ⟨lock⟩: ⟨reserved⟩

              For example (an illustrative entry; the password hash is abbreviated):

              joe:$1$tDIB…:15308:0:99999:7:::

             Here is the meaning of the individual fields:
             ⟨user name⟩ This must correspond to an entry in the /etc/passwd file. This field
                    “joins” the two files.
             ⟨password⟩ The user’s encrypted password. An empty field generally means that
                   the user can log in without a password. An asterisk or an exclamation point
                    prevents the user in question from logging in. It is common to lock users’ ac-
                    counts without deleting them entirely by placing an asterisk or exclamation
                   point at the beginning of the corresponding password.
             ⟨change⟩ The date of the last password change, in days since 1 January 1970.

⟨min⟩ The minimal number of days that must have passed since the last password
      change before the password may be changed again.
⟨max⟩ The maximal number of days that a password remains valid without hav-
     ing to be changed. After this time has elapsed the user must change his password.
⟨warn⟩ The number of days before the expiry of the ⟨max⟩ period that the user will
     be warned about having to change his password. Generally, the warning
     appears when logging in.
⟨grace⟩ The number of days, counting from the expiry of the ⟨max⟩ period, after
      which the account will be locked if the user does not change his password.
      (During the time from the expiry of the ⟨max⟩ period and the expiry of this
       grace period the user may log in but must immediately change his password.)
⟨lock⟩ The date on which the account will be definitively locked, again in days
       since 1 January 1970.
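
The day counts in the ⟨change⟩ and ⟨lock⟩ fields are easy to convert. A quick sketch for computing today's value:

```shell
# /etc/shadow counts days since 1970-01-01; dividing the Unix time stamp
# (seconds since the same epoch) by the seconds per day gives the count.
today=$(( $(date +%s) / 86400 ))
echo "Today is day $today"
```

A ⟨change⟩ value of, say, 15308 therefore corresponds to a password change in late 2011.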
Some brief remarks concerning password encryption are in order. You might
think that if passwords are encrypted they can also be decrypted again. This would
open all of the system’s accounts to a clever cracker who manages to obtain a copy
of /etc/shadow . However, in reality this is not the case, since password “encryption”
is a one-way street. It is impossible to recover the decrypted representation of a
Linux password from the “encrypted” form because the method used for encryp-
tion prevents this. The only way to “crack” the encryption is by encrypting likely
passwords and checking whether they match what is in /etc/shadow .

B Let’s assume you select the characters of your password from the 95 vis-
  ible ASCII characters (uppercase and lowercase letters are distinguished).
  This means that there are 95 different one-character passwords, 95² = 9025
  two-character passwords, and so on. With eight characters you are already
  up to 6.6 quadrillion (6.6 ⋅ 10¹⁵ ) possibilities. Stipulating that you can trial-
  encrypt 10 million passwords per second (not entirely unrealistic on current
  hardware), this means you would require approximately 21 years to work
  through all possible passwords. If you are in the fortunate position of own-
  ing a modern graphics card, another acceleration by a factor of 50–100 is
  quite feasible, which makes that about two months. And then of course
  there are handy services like Amazon’s EC2, which will provide you (or
  random crackers) with almost arbitrary CPU power, or the friendly neigh-
  bourhood Russian bot net … so don’t feel too safe.
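
The 21-year figure can be checked with a few lines of awk, using the rates assumed above:

```shell
# 95 printable ASCII characters, 8-character passwords, 10^7 trial
# encryptions per second: how long does an exhaustive search take?
awk 'BEGIN {
    total   = 95^8                  # number of 8-character passwords
    seconds = total / 1e7           # at 10 million trials per second
    printf "%.1f years\n", seconds / (365.25 * 86400)
}'
```

This prints “21.0 years”, matching the estimate in the text.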

B There are a few other problems. The traditional method (usually called
  “crypt” or “DES”—the latter because it is based on, but not identical to, the
  eponymous encryption method3 ) should no longer be used if you can avoid
  it. It has the unpleasant property of only looking at the first eight characters
  of the entered password, and clever crackers can nowadays buy enough disk
  space to build a pre-encrypted cache of the 50 million (or so) most common
  passwords. To “crack” a password they only need to search their cache for
  the encrypted password, which can be done extremely quickly, and read off
  the corresponding clear-text password.

B To make things even more laborious, when a newly entered password is
  encrypted the system traditionally adds a random element (the so-called
    3 If you must know exactly: The clear-text password is used as the key (!) to encrypt a constant

string (typically a sequence of zero bytes). A DES key is 56 bits, which just happens to be 8 characters
of 7 bits each (as the leftmost bit in each character is ignored). This process is repeated for a total of
25 rounds, with the previous round’s output serving as the new input. Strictly speaking the encryption
scheme used isn’t quite DES but changed in a few places, to make it less feasible to construct a special
password-cracking computer from commercially available DES encryption chips.

                           “salt”) which selects one of 4096 different possibilities for the encrypted
                           password. The main purpose of the salt is to avoid random hits result-
                           ing from user 𝑋, for some reason or other, getting a peek at the content
                           of /etc/shadow and noting that his encrypted password looks just like that
                           of user 𝑌 (hence letting him log into user 𝑌’s account using his own clear-
                           text password). For a pleasant side effect, the disk space required for the
                           cracker’s pre-encrypted dictionary from the previous paragraph is blown
                           up by a factor of 4096.

                     B Nowadays, password encryption is commonly based on the MD5 algorithm,
                       allows for passwords of arbitrary length and uses a 48-bit salt instead of
                       the traditional 12 bits. Kindly enough, the encryption works much more
                       slowly than “crypt”, which is irrelevant for the usual purpose (checking a
                       password upon login—you can still encrypt several hundred passwords per
                       second) but does encumber clever crackers to a certain extent. (You should
                       not let yourself be bothered by the fact that cryptographers poo-poo the
                       MD5 scheme as such due to its insecurity. As far as password encryption is
                       concerned, this is fairly meaningless.)
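
If OpenSSL is installed, you can produce such an MD5-based hash yourself; the salt and password below are made-up illustration values:

```shell
# Generate an MD5-crypt hash of the kind stored in /etc/shadow.
# The output starts with "$1$", followed by the salt and the hash proper.
openssl passwd -1 -salt 0abcdefg secret
```

Running the command twice with the same salt yields the same hash, which is exactly how the system verifies a password at login: it re-encrypts the entered password with the stored salt and compares the results.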

                     A You should not expect too much of the various password administration pa-
                       rameters. They are being used by the text console login process, but whether
                       other parts of the system (such as the graphical login screen) pay them any
                       notice depends on your setup. Nor is there usually an advantage in forc-
                       ing new passwords on users at short intervals—this usually results in a se-
                       quence like bob1 , bob2 , bob3 , …, or users alternate between two passwords.
                       A minimal interval that must pass before a user is allowed to change their
                       password again is outright dangerous, since it may give a cracker a “win-
                        dow” for illicit access even though the user knows their password has been compromised.

                         The problem you need to cope with as a system administrator is usually not
                     people trying to crack your system’s passwords by “brute force”. It is much more
                     promising, as a rule, to use “social engineering”. To guess your password, the
                     clever cracker does not start at a , b , and so on, but with your spouse’s first name,
                     your kids’ first names, your car’s plate number, your dog’s birthday et cetera. (We
                     do not in any way mean to imply that you would use such a stupid password. No,
                     no, not you by any means. However, we are not quite so positive about your boss
                     …) And then there is of course the time-honoured phone call approach: “Hi, this
                     is the IT department. We’re doing a security systems test and urgently require
                     your user name and password.”
                         There are diverse ways of making Linux passwords more secure. Apart from
                     the improved encryption scheme mentioned above, which by now is used by de-
                     fault by most Linux distributions, these include complaining about (too) weak
                     passwords when they are first set up, or proactively running software that will
                     try to identify weak encrypted passwords, just like clever crackers would (Cau-
                     tion: Do this in your workplace only with written (!) pre-approval from your
                     boss!). Other methods avoid passwords completely in favour of constantly chang-
                     ing magic numbers (as in SecurID) or smart cards. All of this is beyond the scope
                      of this manual, and therefore we refer you to the Linup Front manual Linux Security.

                     11.2.3    The /etc/group File
                      By default, Linux keeps group information in the /etc/group file. This file contains
                      a one-line entry for each group in the system, which like the entries in /etc/passwd
                     consists of fields separated by colons (: ). More precisely, /etc/group contains four
                     fields per line.

                      ⟨group name⟩: ⟨password⟩: ⟨GID⟩: ⟨members⟩

   Their meaning is as follows:
⟨group name⟩ The name of the group, for use in directory listings, etc.
⟨password⟩ An optional password for this group. This lets users who are not mem-
      bers of the group via /etc/passwd or /etc/group assume membership of the
      group using newgrp . A “* ”, being an invalid character, prevents normal users
      from changing to the group in question. A “x ” refers to the separate pass-
      word file /etc/gshadow .
⟨GID⟩ The group’s numerical group identifier.

⟨members⟩ A comma-separated list of user names. This list contains all users who
    have this group as a secondary group, i. e., who are members of this group
    but have a different value in the GID field of their /etc/passwd entry. (Users
    with this group as their primary group may also be listed here but that is not necessary.)

   A /etc/group file could, for example, look like this (an illustrative excerpt):

root:x:0:
bin:x:1:daemon
users:x:100:
project1:x:101:joe,sue

The entries for the root and bin groups are entries for administrative groups, sim-
ilar to the system’s pseudo-user accounts. Many files are assigned to groups like
this. The other groups contain user accounts.
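
A group's member list (the fourth field) can be queried directly; the helper name is our own, and audio in the usage line is just an example:

```shell
# Print the secondary members of a group by extracting field 4 of its
# entry. Uses the getent command described in section 11.2.5.
group_members() {
    getent group "$1" | cut -d: -f4
}
# Typical use:  group_members audio
```

Remember that this lists only secondary members; users with the group as their primary group appear in /etc/passwd instead.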
    Like UIDs, GIDs are counted from a specific value, typically 100. For a valid
entry, at least the first and third field (group name and GID) must be filled in.
Such an entry assigns a GID (which might occur in a user’s primary GID field in
/etc/passwd ) a textual name.
    The password and/or membership fields must only be filled in for groups that
are assigned to users as secondary groups. The users listed in the membership
list are not asked for a password when they want to change GIDs using the newgrp
command. If an encrypted password is given, users without an entry in the mem-
bership list can authenticate using the password to assume membership of the
group.

B In practice, group passwords are hardly if ever used, as the administrative
  overhead barely justifies the benefits to be derived from them. On the one
  hand it is more convenient to assign the group directly to the users in ques-
  tion (since, from version 2.6 of the Linux kernel on, there is no limit to the
  number of secondary groups a user can join), and on the other hand a single
  password that must be known by all group members does not exactly make
  for bullet-proof security.

B If you want to be safe, ensure that there is an asterisk (“* ”) in every group
  password slot.

11.2.4     The /etc/gshadow File
As for the user database, there is a shadow password extension for the group
database. The group passwords, which would otherwise be encrypted but read-
able for anyone in /etc/group (similar to /etc/passwd ), are stored in the separate file
/etc/gshadow . This also contains additional information about the group, for ex-
ample the names of the group administrators who are entitled to add or remove
members from the group.

                               11.2.5     The getent Command
                               Of course you can read and process the /etc/passwd , /etc/shadow , and /etc/group files,
                               like all other text files, using programs such as cat , less or grep (OK, OK, you need
                               to be root to get at /etc/shadow ). There are, however, some practical problems:
                                   • You may not be able to see the whole truth: Your user database (or parts of
                                     it) might be stored on an LDAP server, SQL database, or a Windows domain
                                     controller, and there really may not be much of interest in /etc/passwd .
                                   • If you want to look for a specific user’s entry, it is slightly inconvenient to
                                     type this using grep if you want to avoid “false positives”.
                               The getent command makes it possible to query the various databases for user and
                               group information directly. With
                                $ getent passwd

                               you will be shown something that looks like /etc/passwd , but has been assembled
                               from all sources of user information that are currently configured on your com-
                               puter. With
                                $ getent passwd hugo

                               you can obtain user hugo ’s entry, no matter where it is actually stored. Instead
                               of passwd , you may also specify shadow , group , or gshadow to consult the respective
                               database. (Naturally, even with getent you can only access shadow and gshadow as
                               user root .)

                                B The term “database” is understood as “totality of all sources from where
                                  the C library can obtain information on that topic (such as users)”. If you
                                  want to know exactly where that information comes from (or might come
                                  from), then read nsswitch.conf (5) and examine the /etc/nsswitch.conf file on
                                  your system.

                                B You may also specify several user or group names. In that case, information
                                  on all the named users or groups will be output:

                                      $ getent passwd hugo susie fritz
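
Since getent emits the same colon-separated format as the files themselves, its
output combines well with cut or awk in scripts. A small sketch (the root
entry exists on every system; field 3 holds the UID):

```shell
# Print root's UID, no matter which configured source the entry
# actually comes from (/etc/passwd, LDAP, ...).
getent passwd root | cut -d: -f3    # -> 0
```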

                                C 11.5 [1] Which value will you find in the second column of the /etc/passwd
                                  file? Why do you find that value there?

                                C 11.6 [2] Switch to a text console (using, e. g., Alt + F1 ) and try logging in
                                  but enter your user name in uppercase letters. What happens?

                                C 11.7 [2] How can you check that there is an entry in the shadow database for
                                  every entry in the passwd database? (pwconv only considers the /etc/passwd and
                                   /etc/shadow files, and also rewrites the /etc/shadow file, which we don’t want.)

                               11.3      Managing User Accounts and Group Information
                                After a new Linux distribution has been installed, there is often just the root ac-
                                count for the system administrator and the pseudo-users’ accounts. Any other
                                user accounts must be created first (and most distributions today will gently but
                                firmly nudge the installing person to create at least one “normal” user account).
    As the administrator, it is your job to create and manage the accounts for all
 required users (real and pseudo). To facilitate this, Linux comes with several tools
 for user management. With them, this is mostly a straightforward task, but it is
 important that you understand the background.

11.3.1       Creating User Accounts
The procedure for creating a new user account is always the same (in principle)
and consists of the following steps:
     1. You must create entries in the /etc/passwd (and possibly /etc/shadow ) files.

     2. If necessary, an entry (or several) in the /etc/group file is necessary.
     3. You must create the home directory, copy a basic set of files into it, and
        transfer ownership of the lot to the new user.
     4. If necessary, you must enter the user in further databases, e. g., for disk quo-
        tas, database access privilege tables and special applications.

All files involved in adding a new account are plain text files. You can perform
each step manually using a text editor. However, as this is a job that is as tedious
as it is elaborate, it behooves you to let the system help you, by means of the
useradd command.
    In the simplest case, you pass useradd merely the new user’s user name. Optionally,
you can enter various other user parameters; for unspecified parameters
(typically the UID), “reasonable” default values will be chosen automatically. On
request, the user’s home directory will be created and endowed with a basic set of
files that the program takes from the /etc/skel directory. The useradd command’s
syntax is:

useradd    [⟨options⟩] ⟨user name⟩

The following options (among others) are available:
-c   ⟨comment⟩ GECOS field entry
-d   ⟨home directory⟩ If this option is missing, /home/ ⟨user name⟩ is assumed

-e   ⟨date⟩ On this date the account will be deactivated automatically (format YYYY-MM-DD )
-g   ⟨group⟩ The new user’s primary group (name or GID). This group must exist.
-G   ⟨group⟩[,⟨group⟩]… Supplementary groups (names or GIDs). These groups
         must also exist.
-s   ⟨shell⟩ The new user’s login shell
-u   ⟨UID⟩ The new user’s numerical UID. This UID must not be already in use,
        unless the “-o ” option is given

-m   Creates the home directory and copies the basic set of files to it. These files
       come from /etc/skel , unless a different directory was named using “-k ⟨directory⟩”
For instance, the

# useradd -c "Joe Smith" -m -d /home/joe -g devel \
>    -k /etc/skel.devel joe

command creates an account by the name of joe for a user called Joe Smith, and
assigns it to the devel group. joe ’s home directory is created as /home/joe , and the
files from /etc/skel.devel are being copied into it.

B With the -D option (on SUSE distributions, --show-defaults ) you may set de-
  fault values for some of the properties of new user accounts. Without addi-
  tional options, the default values are displayed:

                      # useradd -D

                      You can change these values using the -g , -b , -f , -e , and -s options, respectively:

                      # useradd -D -s /usr/bin/zsh                             zsh   as the default shell

                     The final two values in the list cannot be changed.

                B useradd is a fairly low-level tool. In real life, you as an experienced adminis-
                  trator will likely not be adding new user accounts by means of useradd , but
                  through a shell script that incorporates your local policies (just so you don’t
                  have to remember them all the time). Unfortunately you will have to come
                  up with this shell script by yourself—at least unless you are using Debian
                  GNU/Linux or one of its derivatives (see below).
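
Such a local wrapper might look like the following sketch. All policy values
in it (the devel group, the skeleton directory, the login shell) are invented
examples, and the echo turns it into a dry run that merely prints the useradd
command it would execute; remove the echo to actually create accounts:

```shell
# Hypothetical site policy wrapper around useradd (dry-run version).
cat > add-local-user <<'EOF'
#!/bin/sh
set -e
user="$1"
[ -n "$user" ] || { echo "usage: add-local-user NAME" >&2; exit 1; }
# Local policy: home directory, devel group, site skeleton, bash shell.
# "echo" makes this a dry run; drop it to really create the account.
echo useradd -m -g devel -k /etc/skel.devel -s /bin/bash "$user"
EOF
sh add-local-user joe
```

Running it for joe prints the useradd invocation that embodies the local
policy, so the policy lives in one place instead of in the administrator’s
memory.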
                  Watch out: Even though every serious Linux distribution comes with a program
               called useradd , the implementations differ in their details.
                     The Red Hat distributions include a fairly run-of-the-mill version of useradd ,
                     without bells and whistles, which provides the features discussed above.
                     The SUSE distributions’ useradd is geared towards optionally adding users to
                     a LDAP directory rather than the /etc/passwd file. (This is why the -D option
                     cannot be used to query or set default values like it can elsewhere—it is
                     already spoken for to do LDAPy things.) The details are beyond the scope
                     of this manual.
                     On Debian GNU/Linux and Ubuntu, useradd does exist but the recom-
                     mended method to create new user accounts is a program called adduser
                     (thankfully this is not confusing). The advantage of adduser is that it plays
                   according to Debian GNU/Linux’s rules, and furthermore makes it possible
                     to execute arbitrary other actions for a new account besides creating the
                     actual account. For example, one might create a directory in a web server’s
                     document tree so that the new user (and nobody else) can publish files
                     there, or the user could automatically be authorised to access a database
                     server. You can find the details in adduser (8) and adduser.conf (5).
   After it has been created using useradd , the new account is not yet accessible;
the system administrator must first set up a password. We shall be explaining this
in the next section.
               11.3.2       The passwd Command
               The passwd command is used to set up passwords for users. If you are logged in as
               root , then

                # passwd joe

asks for a new password for user joe (you must enter it twice, as it will not be
echoed to the screen).
                  The passwd command is also available to normal users, to let them change their
               own passwords (changing other users’ passwords is root ’s prerogative):

$ passwd
Changing password for joe.
(current) UNIX password: secret123
Enter new UNIX password: 321terces
Retype new UNIX password: 321terces
passwd: password updated successfully

Normal users must enter their own password correctly once before being allowed
to set a new one. This is supposed to make life difficult for practical jokers who
play around on your computer if you had to step out very urgently and didn’t
have time to engage the screen lock.
   On the side, passwd serves to manage various settings in /etc/shadow . For exam-
ple, you can look at a user’s “password state” by calling the passwd command with
the -S option:

# passwd -S bob
bob LK 10/15/99 0 99999 7 0

The first field in the output is (once more) the user name, followed by the password
state: “PS ” or “P ” if a password is set, “LK ” or “L ” for a locked account, and “NP ” for
an account with no password at all. The other fields are, respectively, the date of
the last password change, the minimum and maximum interval for changing the
password, the expiry warning interval and the “grace period” before the account
is locked completely after the password has expired. (See also Section 11.2.2.)
    You can change some of these settings by means of passwd options. Here are a
few examples:

#   passwd    -l    joe                                                  Lock the account
#   passwd    -u    joe                                               Unlock the account
#   passwd    -n    7 joe                          Password change at most every 7 days
#   passwd    -x    30 joe                        Password change at least every 30 days
#   passwd    -w    3 joe                     3 days grace period before password expires

E Locking and unlocking accounts by means of -l and -u works by putting
  a “! ” in front of the encrypted password in /etc/shadow . Since “! ” cannot
  result from password encryption, it is impossible to enter something upon
  login that matches the “encrypted password” in the user database—hence
  access via the usual login procedure is prevented. Once the “! ” is removed,
  the original password is back in force. (Astute, innit?) However, you should
  keep in mind that users may be able to gain access to the system by other
  means that do not refer to the encrypted password in the user database,
  such as the secure shell with public-key authentication.
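
The effect of passwd -l can be illustrated on a sample shadow-style entry; the
user name and the hash below are invented placeholders, so nothing on the real
system is touched:

```shell
# A shadow-style entry locked by "passwd -l": "!" precedes the hash.
line='joe:!$6$abcdefgh$XXXXXXXX:16000:0:99999:7:::'
pw=$(echo "$line" | cut -d: -f2)
case "$pw" in
  '!'*) echo "account is locked" ;;       # "!" can never come out of crypt
  *)    echo "account is not locked" ;;
esac
```

Unlocking with passwd -u simply strips the leading “! ” again, restoring the
original hash.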

    Changing the remaining settings in /etc/shadow requires the chage command:

#   chage    -E    2009-12-01 joe                         Lock account from 1 Dec 2009
#   chage    -E    -1 joe                                             Cancel expiry date
#   chage    -I    7 joe                       Grace period 1 week from password expiry
#   chage    -m    7 joe                                            Like passwd -n (Grr.)
#   chage    -M    7 joe                                       Like passwd -x (Grr, grr.)
#   chage    -W    3 joe                                   Like passwd -w (Grr, grr, grr.)

(chage can change all settings that passwd can change, and then some.)

B If you cannot remember the option names, invoke chage with the name of
  a user account only. The program will present you with a sequence of the
  current values to change or confirm.

                         You cannot retrieve a clear-text password even if you are the administrator.
                      Even checking /etc/shadow doesn’t help, since this file stores all passwords already
                      encrypted. If a user forgets their password, it is usually sufficient to reset their
                      password using the passwd command.

                      B Should you have forgotten the root password and not be logged in as root by
                        any chance, your last option is to boot Linux to a shell, or boot from a rescue
                        disk or CD. (See Chapter 16.) After that, you can use an editor to clear the
                        ⟨password⟩ field of the root entry in /etc/passwd .

                      C 11.8 [3] Change user joe ’s password. How does the /etc/shadow file change?
                        Query that account’s password state.

                      C 11.9 [!2] The user dumbo has forgotten his password. How can you help him?

                      C 11.10 [!3] Adjust the settings for user joe ’s password such that he can change
                        his password after at least a week, and must change it after at most two
                        weeks. There should be a warning two days before the two weeks are up.
                        Check the settings afterwards.

                      11.3.3       Deleting User Accounts
                      To delete a user account, you need to remove the user’s entries from /etc/passwd and
                      /etc/shadow , delete all references to that user in /etc/group , and remove the user’s
                      home directory as well as all other files created or owned by that user. If the
user has, e. g., a mail box for incoming messages in /var/mail , that should also be
removed.
   Again there is a suitable command to automate these steps. The userdel com-
mand removes a user account completely. Its syntax:

                      userdel [-r ]   ⟨user name⟩

                      The -r option ensures that the user’s home directory (including its content) and
                      his mail box in /var/mail will be removed; other files belonging to the user—e. g.,
                       crontab files—must be deleted manually. A quick way to locate and remove files
                      belonging to a certain user is the

                      find / -uid     ⟨UID⟩ -delete

                      command. Without the -r option, only the user information is removed from the
                      user database; the home directory remains in place.
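
Before running the find command with -delete against / , it is prudent to do a
dry run with -print (note that -delete must come last, since find evaluates its
expression left to right). The sketch below exercises the same idea safely on a
scratch directory using the current UID:

```shell
# Dry run: list (rather than delete) files below a scratch directory
# that belong to the current UID.
dir=$(mktemp -d)
touch "$dir/a" "$dir/b"
find "$dir" -uid "$(id -u)" -type f -print
```

Once the list looks right, replacing -print with -delete (and the scratch
directory with / and the current UID with the deleted user’s UID) performs the
actual cleanup.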

                      11.3.4       Changing User Accounts and Group Assignment
                      User accounts and group assignments are traditionally changed by editing the
                      /etc/passwd and /etc/group files. However, many systems contain commands like
                      usermod and groupmod for the same purpose, and you should prefer these since they
                      are safer and—mostly—more convenient to use.
                          The usermod program accepts mostly the same options as useradd , but changes
                      existing user accounts instead of creating new ones. For example, with

                      usermod -g    ⟨group⟩ ⟨user name⟩

                      you could change a user’s primary group.
                           Caution! If you want to change an existing user account’s UID, you could edit
                      the ⟨UID⟩ field in /etc/passwd directly. However, you should at the same time trans-
                      fer that user’s files to the new UID using chown : “chown -R tux /home/tux ” re-confers

ownership of all files below user tux ’s home directory to user tux , after you have
changed the UID for that account. If “ls -l ” displays a numerical UID instead of
a textual name, this implies that there is no user name for the UID of these files.
You can fix this using chown .

11.3.5    Changing User Information Directly—vipw
The vipw command invokes an editor (vi or a different one) to edit /etc/passwd di-
rectly. At the same time, the file in question is locked in order to keep other users
from simultaneously changing the file using, e. g., passwd (which changes would
be lost). With the -s option, /etc/shadow can be edited.

B The actual editor that is invoked is determined by the value of the VISUAL
  environment variable, alternatively that of the EDITOR environment variable;
  if neither exists, vi will be launched.
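
This precedence can be mimicked in the shell with nested parameter expansion,
which is also a handy pattern for your own scripts:

```shell
# VISUAL wins over EDITOR; vi is the fallback if neither is set.
unset VISUAL EDITOR              # clean slate for the demonstration
editor="${VISUAL:-${EDITOR:-vi}}"
echo "$editor"                   # -> vi
EDITOR=nano
editor="${VISUAL:-${EDITOR:-vi}}"
echo "$editor"                   # -> nano
VISUAL=emacs
editor="${VISUAL:-${EDITOR:-vi}}"
echo "$editor"                   # -> emacs (VISUAL takes precedence)
```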

C 11.11 [!2] Create a user called test . Change to the test account and create a
  few files using touch , including a few in a different directory than the home
  directory (say, /tmp ). Change back to root and change test ’s UID. What do
  you see when listing user test ’s files?

C 11.12 [!2] Create a user called test1 using your distribution’s graphical tool
  (if available), test2 by means of the useradd command, and another, test3 ,
  manually. Look at the configuration files. Can you work without problems
  using any of these three accounts? Create a file using each of the new accounts.

C 11.13 [!2] Delete user test2 ’s account and ensure that there are no files left
  on the system that belong to that user.

C 11.14 [2] Change user test1 ’s UID. What else do you need to do?
C 11.15 [2] Change user test1 ’s home directory from /home/test1 to /home/user/
  test1 .

11.3.6    Creating, Changing and Deleting Groups
Like user accounts, you can create groups using any of several methods. The
“manual” method is much less tedious here than when creating new user ac-
counts: Since groups do not have home directories, it is usually sufficient to edit
the /etc/group file using any text editor, and to add a suitable new line (see be-
low for vigr ). When group passwords are used, another entry must be added to
/etc/gshadow .
    Incidentally, there is nothing wrong with creating directories for groups.
Group members can place the fruits of their collective labour there. The approach
is similar to creating user home directories, although no basic set of configuration
files needs to be copied.
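
Setting up such a group directory might look like the following sketch. On a
real system you would use the project group (e. g., devel ) instead of the
caller’s own group; assigning a foreign group requires root:

```shell
# Sketch of a shared group directory.
mkdir shared
chgrp "$(id -gn)" shared    # assign the group (here: the caller's own)
chmod 2770 shared           # rwx for owner and group; the leading 2 is
                            # the setgid bit (see Chapter 12), so files
                            # created inside inherit the directory's group
stat -c '%a' shared         # -> 2770
```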
    For group management, there are, by analogy to useradd , usermod , and userdel ,
the groupadd , groupmod , and groupdel programs that you should use in favour of edit-
ing /etc/group and /etc/gshadow directly. With groupadd you can create new groups        groupadd
simply by giving the correct command parameters:

groupadd [-g   ⟨GID⟩] ⟨group name⟩

The -g option allows you to specify a given group number. As mentioned be-
fore, this is a positive integer. The values up to 99 are usually reserved for system
groups. If -g is not specified, the next free GID is used.
    You can edit existing groups with groupmod without having to write to /etc/group directly:

                          groupmod [-g   ⟨GID⟩] [-n ⟨name⟩] ⟨group name⟩

                          The “-g ⟨GID⟩” option changes the group’s GID. Unresolved file group assign-
                          ments must be adjusted manually. The “-n ⟨name⟩” option sets a new name for the
                          group without changing the GID; manual adjustments are not necessary.
                             There is also a tool to remove group entries. This is unsurprisingly called
                           groupdel :

                          groupdel   ⟨group name⟩

                          Here, too, it makes sense to check the file system and adjust “orphaned” group
                          assignments for files with the chgrp command. Users’ primary groups may not be
                          removed—the users in question must either be removed beforehand, or they must
                          be reassigned to a different primary group.
                              The gpasswd command is mainly used to manipulate group passwords in a way
                          similar to the passwd command. The system administrator can, however, delegate
                           the administration of a group’s membership list to one or more group administrators.
                           Group administrators also use the gpasswd command:

                          gpasswd -a   ⟨user⟩ ⟨group⟩

                          adds the ⟨user⟩ to the ⟨group⟩, and

                          gpasswd -d   ⟨user⟩ ⟨group⟩

                          removes him again. With

                          gpasswd -A   ⟨user⟩,… ⟨group⟩

                           the system administrator can nominate users who are to serve as group administrators.

                                The SUSE distributions haven’t included gpasswd for some time. Instead
                                there are modified versions of the user and group administration tools that
                                can handle an LDAP directory.

                             As the system administrator, you can change the group database directly using
                           the vigr command. It works like vipw , by invoking an editor for “exclusive” access
                          to /etc/group . Similarly, “vigr -s ” gives you access to /etc/gshadow .

                          C 11.16 [2] What are groups needed for? Give possible examples.

                          C 11.17 [1] Can you create a directory that all members of a group can access?

                          C 11.18 [!2] Create a supplementary group test . Only user test1 should be a
                            member of that group. Set a group password. Log in as user test1 or test2
                            and try to change over to the new group.

Commands in this Chapter
adduser    Convenient command to create new user accounts (Debian)
                                                                      adduser (8)   172
chfn       Allows users to change the GECOS field in the user database
                                                                          chfn (1) 165
getent     Gets entries from administrative databases                   getent (1) 170
gpasswd    Allows a group administrator to change a group’s membership and up-
           date the group password                                     gpasswd (1) 176
groupadd     Adds user groups to the system group database            groupadd (8) 175
groupdel     Deletes groups from the system group database            groupdel (8) 176
groupmod     Changes group entries in the system group database groupmod (8) 175
groups     Displays the groups that a user is a member of               groups (1) 162
id         Displays a user’s UID and GIDs                                    id (1) 162
last       Lists recently-logged-in users                                 last (1) 162
useradd    Adds new user accounts                                      useradd (8) 171
userdel    Removes user accounts                                       userdel (8) 174
usermod    Modifies the user database                                  usermod (8) 174
vigr       Allows editing /etc/group or /etc/gshadow with “file locking”, to avoid con-
           flicts                                                         vipw (8) 176

Summary

   •   Access to the system is governed by user accounts.
   •   A user account has a numerical UID and (at least) one textual user name.
   •   Users can form groups. Groups have names and numerical GIDs.
   •   “Pseudo-users” and “pseudo-groups” serve to further refine access rights.
   •   The central user database is (normally) stored in the /etc/passwd file.
   •   The users’ encrypted passwords are stored—together with other password
       parameters—in the /etc/shadow file, which is unreadable for normal users.
   •   Group information is stored in the /etc/group and /etc/gshadow files.
   •   Passwords are managed using the passwd program.
   •   The chage program is used to manage password parameters in /etc/shadow .
   •   User information is changed using vipw or—better—using the specialised
       tools useradd , usermod , and userdel .
   •   Group information can be manipulated using the groupadd , groupmod , groupdel
       and gpasswd programs.

12 Access Control

12.1 The Linux Access Control System . . . . . . . . .            .   .   .   .   .   180
12.2 Access Control For Files And Directories . . . . . .         .   .   .   .   .   180
    12.2.1 The Basics . . . . . . . . . . . . . . .               .   .   .   .   .   180
    12.2.2 Inspecting and Changing Access Permissions. . .        .   .   .   .   .   181
    12.2.3 Specifying File Owners and Groups—chown and chgrp      .   .   .   .   .   182
    12.2.4 The umask . . . . . . . . . . . . . . .                .   .   .   .   .   183
12.3 Access Control Lists (ACLs) . . . . . . . . . . .            .   .   .   .   .   185
12.4 Process Ownership . . . . . . . . . . . . . .                .   .   .   .   .   185
12.5 Special Permissions for Executable Files . . . . . . .       .   .   .   .   .   185
12.6 Special Permissions for Directories . . . . . . . .          .   .   .   .   .   186
12.7 File Attributes . . . . . . . . . . . . . . . .              .   .   .   .   .   188

Goals

   •   Understanding the Linux access control/privilege mechanisms
   •   Being able to assign access permissions to files and directories
   •   Knowing about the “umask”, SUID, SGID and the “sticky bit”
   •   Knowing about file attributes in the ext file systems

Prerequisites

   • Knowledge of Linux user and group concepts (see Chapter 11)
   • Knowledge of Linux files and directories


                              12.1      The Linux Access Control System
                              Whenever several users have access to the same computer system there must be
                             an access control system for processes, files and directories in order to ensure that
                            user 𝐴 cannot access user 𝐵’s private files just like that. To this end, Linux imple-
                            ments the standard system of Unix privileges.
                                In the Unix tradition, every file and directory is assigned to exactly one user
                             (its owner) and one group. Every file supports separate privileges for its owner,
                            the members of the group it is assigned to (“the group”, for short), and all other
                            users (“others”). Read, write and execute privileges can be enabled individually
                            for these three sets of users. The owner may determine a file’s access privileges.
                            The group and others may only access a file if the owner confers suitable privileges
                             to them. The sum total of a file’s access permissions is also called its access mode.
                                In a multi-user system which stores private or group-internal data on a gen-
                            erally accessible medium, the owner of a file can keep others from reading or
                             modifying his files by instituting suitable access control. The rights to a file can be
                            determined separately and independently for its owner, its group and the others.
                            Access permissions allow users to map the responsibilities of a group collabora-
                            tive process to the files that the group is working with.

                              12.2      Access Control For Files And Directories
                              12.2.1    The Basics
                            For each file and each directory in the system, Linux allows separate access rights
                            for each of the three classes of users—owner, members of the file’s group, others.
                            These rights include read permission, write permission, and execute permission.
                                 As far as files are concerned, these permissions control approximately what
                            their names suggest: Whoever has read permission may look at the file’s content,
                            whoever has write permission is allowed to change its content. Execute permis-
                            sion is necessary to launch the file as a process.

                              B Executing a binary “machine-language program” requires only execute per-
                                mission. For files containing shell scripts or other types of “interpreted”
                                programs, you also need read permission.

                                   For directories, things look somewhat different: Read permission is required
                              to look at a directory’s content—for example, by executing the ls command. You
                              need write permission to create, delete, or rename files in the directory. “Execute”
                              permission stands for the possibility to “use” the directory in the sense that you
                              can change into it using cd , or use its name in path names referring to files farther
                              down in the directory tree.

                              B In directories where you have only read permission, you may read the file
                                names but cannot find out anything else about the files. If you have only “ex-
                                ecute permission” for a directory, you can access files as long as you know
                                their names.

                              Usually it makes little sense to assign write and execute permission to a directory
                              separately; however, it may be useful in certain special cases.
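The distinction between read and execute permission on directories can be tried out directly in a shell. The names demo and secret.txt below are invented for the illustration, and the failures shown are what an unprivileged user sees (root bypasses these checks):

```shell
mkdir demo
touch demo/secret.txt
chmod 755 demo            # start from a known access mode

chmod u=r demo            # read permission only for the owner
ls demo                   # listing the names still works
cat demo/secret.txt 2>/dev/null \
  || echo "cannot access: no execute permission on demo"

chmod u=x demo            # execute permission only
cat demo/secret.txt       # access by name works
ls demo 2>/dev/null \
  || echo "cannot list: no read permission on demo"

chmod u=rwx demo          # restore full access
```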

                              A It is important to emphasise that write permission on a file is completely
                                immaterial if the file is to be deleted—you need write permission to the direc-
                                tory that the file is in and nothing else! Since “deleting” a file only removes
                                a reference to the actual file information (the inode) from the directory, this
                                is purely a directory operation. The rm command does warn you if you’re
                                trying to delete a file that you do not have write permission for, but if you
                                confirm the operation and have write permission to the directory involved,
                                nothing will stand in the way of the operation’s success. (Like any other
       Unix-like system, Linux has no way of “deleting” a file outright; you can
       only remove all references to a file, in which case the Linux kernel decides
       on its own that no one will be able to access the file any longer, and gets rid
       of its content.)
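A minimal experiment confirms this (the names dir1 and doomed.txt are made up):

```shell
mkdir dir1
touch dir1/doomed.txt
chmod a-w dir1/doomed.txt   # nobody may write to the file itself
rm -f dir1/doomed.txt       # -f suppresses rm's warning prompt
ls dir1                     # the file is gone: only write permission
                            # on dir1 mattered
```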

B If you do have write permission to the file but not its directory, you cannot
  remove the file completely. You can, however, truncate it down to 0 bytes
  and thereby remove its content, even though the file itself still exists in
  principle.
   For each user, Linux determines the “most appropriate” access rights. For ex-
ample, if the members of a file’s group do not have read permission for the file
but “others” do, then the group members may not read the file. The (admittedly
enticing) rationale that, if all others may look at the file, then the group members,
who are in some sense also part of “all others”, should be allowed to read it as
well, does not apply.

12.2.2       Inspecting and Changing Access Permissions
You can obtain information about the rights, user and group assignment that
apply to a file using “ls -l ”:

$ ls -l
-rw-r--r--      1   joe   users    4711   Oct 4 11:11 file.txt
drwxr-x---      2   joe   group2   4096   Oct 4 11:12 testdir

The string of characters in the first column of the table details the access permis-
sions for the owner, the file’s group, and others (the very first character is just the
file type and has nothing to do with permissions). The third column gives the
owner’s user name, and the fourth that of the file’s group.
    In the permissions string, “r ”, “w ”, and “x ” signify existing read, write, and
execute permission, respectively. If there is just a “- ” in the list, then the corre-
sponding category does not enjoy the corresponding privilege. Thus, “rw-r--r-- ”
stands for “read and write permission for the owner, but read permission only for
group members and others”.
    As the file owner, you may set access permissions for a file using the chmod com-             chmod   command
mand (from “change mode”). You can specify the three categories by means of the
abbreviations “u ” (user) for the owner (yourself), “g ” (group) for the file’s group’s
members, and “o ” (others) for everyone else. The permissions themselves are
given by the already-mentioned abbreviations “r ”, “w ”, and “x ”. Using “+ ”, “- ”,
and “= ”, you can specify whether the permissions in question should be added to
any existing permissions, “subtracted” from the existing permissions, or used to
replace whatever was set before. For example:

$   chmod   u+x file                                             Execute permission for owner
$   chmod   go+w file                              sets write permission for group and others
$   chmod   g+rw file                                sets read and write permission for group
$   chmod   g=rw,o=r file                                      sets read and write permission,
                                                           removes group execute permission;
                                                           sets just read permission for others
$ chmod a+w file                                                            equivalent to ugo+w

B In fact, permission specifications can be considerably more complex. Con-
  sult the info documentation for chmod to find out all the details.
    A file’s owner is the single user (apart from root ) who is allowed to change a
file’s or directory’s access permissions. This privilege is independent of the actual
permissions; the owner may take away all their own permissions, but that does
not keep them from giving them back later.
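This can be seen in a short session (mine.txt is a made-up name):

```shell
touch mine.txt
chmod 000 mine.txt          # the owner removes all permissions, even their own
stat -c %a mine.txt         # prints 0
chmod 644 mine.txt          # the owner may hand the rights back at any time
stat -c %a mine.txt         # prints 644
```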
    The general syntax of the chmod command is

      chmod   [⟨options⟩] ⟨permissions⟩ ⟨name⟩ …

      You can give as many file or directory names as desired. The most important
      options include:

      -R   If a directory name is given, the permissions of files and directories inside this
               directory will also be changed (and so on all the way down the tree).
      --reference= ⟨name⟩   Uses the access permissions of file ⟨name⟩. In this case, no
              ⟨permissions⟩ are given on the command line.

      B You may also specify a file’s access mode “numerically” instead of “symbol-
        ically” (what we just discussed). In practice this is very common for setting
        all permissions of a file or directory at once, and works like this: The three
        permission triples are represented as a three-digit octal number—the first
        digit describes the owner’s rights, the second those of the file’s group, and
        the third those that apply to “others”. Each of these digits derives from
        the sum of the individual permissions, where read permission has value 4,
        write permission 2, and execute permission 1. Here are a few examples for
        common access modes in “ls -l ” and octal form:
              rw-r--r--   644
              r--------   400
              rwxr-xr-x   755

      B Using numerical access modes, you can only set all permissions at once—
        there is no way of setting or removing individual rights while leaving the
        others alone, like you can do with the “+ ” and “- ” operators of the symbolic
        representation. Hence, the command

              $ chmod 644 file

              is equivalent to the symbolic

              $ chmod u=rw,go=r file
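You can check such equivalences yourself using stat, whose %a format prints a file's access mode in octal (the file name sample.txt is invented):

```shell
touch sample.txt
chmod 644 sample.txt
stat -c %a sample.txt         # 644
chmod u=rw,go=r sample.txt    # the symbolic equivalent
stat -c %a sample.txt         # still 644
chmod u=rwx,go=rx sample.txt
stat -c %a sample.txt         # 755
```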

      12.2.3      Specifying File Owners and Groups—chown and chgrp
      The chown command lets you set the owner and group of a file or directory. This
      command takes the desired owner’s user name and/or group name and the file
      or directory name the change should apply to. It is called like

      chown ⟨user name⟩[:⟨group name⟩] ⟨name⟩ …
      chown :⟨group name⟩ ⟨name⟩ …

      If both a user and group name are given, both are changed; if just a user name is
      given, the group remains as it was; if a user name followed by a colon is given,
      then the file is assigned to the user’s primary group. If just a group name is given
      (with the colon in front), the owner remains unchanged. For example:

      # chown joe:devel letter.txt
      # chown www-data foo.html                                           new user www-data
      # chown :devel /home/devel                                           new group devel

      B chown also supports an obsolete syntax where a dot is used in place of the
        colon.

   To “give away” files to other users or arbitrary groups you need to be root . The
main reason for this is that normal users could otherwise annoy one another if
the system uses quotas (i. e., every user may only use a certain amount of storage
space).
   Using the chgrp command, you can change a file’s group even as a normal
user—as long as you own the file and are a member of the new group:
chgrp   ⟨group name⟩ ⟨name⟩ …

B Changing a file’s owner or group does not change the access permissions
  for the various categories.
    chown and chgrp also support the -R option to apply changes recursively to part
of the directory hierarchy.

B Of course you can also change a file’s permissions, group, and owner using
  most of the popular file browsers (such as Konqueror or Nautilus).

C 12.1 [!2] Create a new file. What is that file’s group? Use chgrp to assign the
  file to one of your secondary groups. What happens if you try to assign the
  file to a group that you are not a member of?

C 12.2 [4] Compare the mechanisms that various file browsers (like Kon-
  queror, Nautilus, …) offer for setting a file’s permissions, owner, group, …
  Are there notable differences?

12.2.4      The umask
New files are usually created using the (octal) access mode 666 (read and write
permission for everyone). New directories are assigned the access mode 777.
Since this is not always what is desired, Linux offers a mechanism to remove cer-
tain rights from these access modes. This is called “umask”.

B Nobody knows exactly where this name comes from—even though there
  are a few theories that all sound fairly implausible.
   The umask is an octal number whose complement is ANDed bitwise to the
standard access mode—666 or 777—to arrive at the new file’s or directory’s actual
access mode. In other words: You can consider the umask an access mode contain- umask interpretation
ing exactly those rights that the new file should not have. Here’s an example—let
the umask be 027:
             1.                        Umask value: 027 ----w-rwx
             2.       Complement of umask value: 750 rwxr-x---
             3.            A new file’s access mode: 666 rw-rw-rw-
             4. Result (2 and 3 ANDed together): 640 rw-r-----
The third column shows the octal value, the fourth a symbolic representation. The
AND operation in step 4 can also be read off the fourth column of the second and
third lines: the result has a letter in each position that has a letter in both the
second and the third line; wherever either of them has a dash (“- ”), the result is
a dash as well.

B If you’d rather not bother with the complement and AND, you can simply
  imagine that each digit of the umask is subtracted from the corresponding
  digit of the actual access mode and negative results are considered as zero
  (so no “borrowing” from the place to the left). For our example—access
  mode 666 and umask 027—this means something like
                                       666 ⊖ 027 = 640,
        since 6 ⊖ 0 = 6, 6 ⊖ 4 = 2, and 6 ⊖ 7 = 0.
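The subtraction rule is easy to verify on newly created files and directories; the subshell below keeps the changed umask from leaking into your interactive session (the names fresh.txt and fresh.dir are invented):

```shell
(
  umask 027
  touch fresh.txt           # new file: 666 with 027 taken away
  stat -c %a fresh.txt      # 640
  mkdir fresh.dir           # new directory: 777 with 027 taken away
  stat -c %a fresh.dir      # 750
)
```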

       umask shell command       The umask is set using the umask shell command, either by invoking it di-
                              rectly or via a shell startup file—typically ~/.profile , ~/.bash_profile , or ~/.bashrc .
            process attribute The umask is a process attribute similar to the current directory or the process
                              environment, i. e., it is passed to child processes, but changes in a child process do
                              not modify the parent process’s settings.
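A quick demonstration that the umask is a per-process attribute:

```shell
umask 022                   # set a known value in the current shell
( umask 077 )               # a subshell changes only its own copy
umask                       # the parent still reports 0022
```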
                       syntax    The umask command takes a parameter specifying the desired umask:

                               umask [-S |⟨umask⟩]

      symbolic representation The umask may be given as an octal number or in a symbolic representation sim-
                               ilar to that used by chmod —deviously enough, the symbolic form contains the per-
                               missions that should be left (rather than those to be taken away):

                               $ umask 027                                                       … is equivalent to …
                               $ umask u=rwx,g=rx,o=

                               This means that in the symbolic form you must give the exact complement of the
                               value that you would specify in the octal form—exactly those rights that do not
                               occur in the octal specification.
                                   If you specify no value at all, the current umask is displayed. If the -S option
                               is given, the current umask is displayed in symbolic form (where, again, the re-
                               maining permissions are set):

                                $ umask
                                0027
                                $ umask -S
                                u=rwx,g=rx,o=

         execute permission?      Note that you can only remove permissions using the umask. There is no way
                               of making files executable by default.
            umask and chmod       Incidentally, the umask also influences the chmod command. If you invoke chmod
                               with a “+ ” mode (e. g., “chmod +w file ”) without referring to the owner, group or oth-
                               ers, this is treated like “a+ ”, but the permissions set in the umask are not modified.
                               Consider the following example:

                               $ umask 027
                               $ touch file
                               $ chmod +x file
                               $ ls -l file
                               -rwxr-x---   1 tux      users       0 May 25 14:30 file

                               The “chmod +x ” sets execute permission for the user and group, but not the others,
                               since the umask contains the execute bit for “others”. Thus with the umask you
                               can take precautions against giving overly excessive permissions to files.

                               B Theoretically, this also works for the chmod operators “- ” and “= ”, but this
                                 does not make a lot of sense in practice.

                                C 12.3 [!1] State a numerical umask that leaves the user all permissions but
                                  removes all permissions from group members and others. What is the cor-
                                  responding symbolic umask?

                               C 12.4 [2] Convince yourself that the “chmod +x ” and “chmod a+x ” commands
                                 indeed differ from each other as advertised.

12.3      Access Control Lists (ACLs)
As mentioned above, Linux allows you to assign permissions for a file’s owner,
group, and all others separately. For some applications, though, this three-tier
system is too simple-minded, or the more sophisticated permission schemes of
other operating systems must be mapped to Linux. Access control lists (ACLs)
can be used for this.
   On most file systems, Linux supports “POSIX ACLs” according to IEEE 1003.1e
(draft 17) with some Linux-specific extensions. This lets you specify additional
groups and users for files and directories, who then can be assigned read, write,
and execute permissions that differ from those of the file’s group and “others”.
Other rights, such as that to assign permissions, are still restricted to a file’s owner
(or root ) and cannot be delegated even with ACLs. The setfacl and getfacl
commands are used to set and query ACLs.
   ACLs are a fairly new and rarely-used addition to Linux, and their use is subject
to certain restrictions. The kernel does oversee compliance with them, but, for
instance, not every program is able to copy ACLs along with a file’s content—you
may have to use a specially-adapted tar (star ) for backups of a file system using
ACLs. ACLs are supported by Samba, so Windows clients get to see the correct
permissions, but if you export file systems to other (proprietary) Unix systems, it
may be possible that your ACLs are ignored by Unix clients that do not support

B You can read up on ACLs on Linux in acl (5) as well as getfacl (1) and
  setfacl (1).

   Detailed knowledge of ACLs is not required for the LPIC-1 exams.

12.4      Process Ownership
Linux considers not only the data on a storage medium as objects that can be
owned. The processes on the system have owners, too.
   Many commands create a process in the system’s memory. During normal use,
there are always several processes that the system protects from each other. Every
process together with all data within its virtual address space is assigned to a Processes have owners
user, its owner. This is most often the user who started the process—but processes
created using administrator privileges may change their ownership, and the SUID
mechanism (Section 12.5) can also have a hand in this.
   The owners of processes are displayed by the ps program if it is invoked using
the -u option.

# ps -u
bin      89 0.0     1.0 788        328   ?    S   13:27   0:00   rpc.portmap
test1 190 0.0       2.0 1100        28   3    S   13:27   0:00   bash
test1 613 0.0       1.3 968         24   3    S   15:05   0:00   vi XF86.tex
nobody 167 0.0      1.4 932         44   ?    S   13:27   0:00   httpd
root      1 0.0     1.0 776         16   ?    S   13:27   0:03   init [3]
root      2 0.0     0.0    0         0   ?   SW   13:27   0:00   (kflushd)

12.5      Special Permissions for Executable Files
When listing files using the “ls -l ” command, you may sometimes encounter per-
mission sets that differ from the usual rwx , such as

-rwsr-xr-x    1 root shadow    32916 Dec 11 20:47 /usr/bin/passwd

                           What does that mean? We have to digress here for a bit:
                             Assume that the passwd program carries the usual access mode:

                           -rwxr-xr-x   1 root shadow   32916 Dec 11 20:47 /usr/bin/passwd

                         A normal (unprivileged) user, say joe , wants to change his password and invokes
                         the passwd program. Next, he receives the message “permission denied”. What is
                         the reason? The passwd process (which uses joe ’s privileges) tries to open the /etc/
                         shadow file for writing and fails, since only root may write to that file—this cannot
                         be different since otherwise, everybody would be able to manipulate passwords
                         arbitrarily and, for example, change the root password.
                SUID bit    By means of the set-UID bit (frequently called “SUID bit”, for short) a program
                         can be caused to run not with the invoker’s privileges but those of the file owner—
                         here, root . In the case of passwd , the process executing passwd has write permission
                         to /etc/shadow , even though the invoking user, not being a system administrator,
                         generally doesn’t. It is the responsibility of the author of the passwd program to en-
                         sure that no monkey business goes on, e. g., by exploiting programming errors to
                         change arbitrary files except /etc/shadow , or entries in /etc/shadow except the pass-
                         word field of the invoking user. On Linux, by the way, the set-UID mechanism
                         works only for binary programs, not shell or other interpreter scripts.

                           B Bell Labs used to hold a patent on the SUID mechanism, which was invented
                             by Dennis Ritchie [SUID]. Originally, AT&T distributed Unix with the
                             caveat that license fees would be levied after the patent had been granted;
                             however, due to the logistical difficulties of charging hundreds of Unix in-
                             stallations small amounts of money retroactively, the patent was released
                             into the public domain.

                SGID bit      By analogy to the set-UID bit there is a SGID bit, which causes a process to be
                          executed with the program file’s group and the corresponding privileges (usually
                          to access other files assigned to that group) rather than the invoker’s group setting.
             chmod syntax     The SUID and SGID modes, like all other access modes, can be changed using
                          the chmod program, by giving symbolic permissions such as u+s (sets the SUID bit)
                          or g-s (deletes the SGID bit). You can also set these bits in octal access modes by
                          adding a fourth digit at the very left: The SUID bit has the value 4, the SGID bit
                          the value 2—thus you can assign the access mode 4755 to a file to make it readable
                          and executable to all users (the owner may also write to it) and to set the SUID bit.
                ls output     You can recognise set-UID and set-GID programs in the output of “ls -l ” by
                          the symbolic abbreviations “s ” in place of “x ” for executable files.
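You can try this safely on a scratch file of your own; setting the SUID bit on it is harmless as long as nobody executes the file (prog is an invented name):

```shell
touch prog
chmod 755 prog
chmod u+s prog              # set the SUID bit symbolically
stat -c %a prog             # 4755
ls -l prog                  # the mode shows as -rwsr-xr-x
chmod u-s,g+s prog          # clear SUID, set SGID instead
stat -c %a prog             # 2755
```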

                           12.6     Special Permissions for Directories
                           There is another exception from the principle of assigning file ownership accord-
                           ing to the identity of its creator: a directory’s owner can decree that files created
                           in that directory should belong to the same group as the directory itself. This can
      SGID for directories be specified by setting the SGID bit on the directory. (As directories cannot be
                           executed, the SGID bit is available to be used for such things.)
                               A directory’s access permissions are not changed via the SGID bit. To create a
                           file in such a directory, a user must have write permission in the category (owner,
                           group, others) that applies to him. If, for example, a user is neither the owner of a
                           directory nor a member of the directory’s group, the directory must be writable for
                           “others” for him to be able to create files there. A file created in a SGID directory
                           then belongs to that directory’s group, even if the user is not a member of that
                           group at all.

                           B The typical application for the SGID bit on a directory is a directory that is
                             used as file storage for a “project group”. (Only) the members of the project
                             group are supposed to be able to read and write all files in the directory, and
      to create new files. This means that you need to put all users collaborating
      on the project into a project group (a secondary group will suffice):

       # groupadd project                                           Create new group
       # usermod -a -G project joe                                 joe into the group
       # usermod -a -G project sue                                             sue too

       Now you can create the directory and assign it to the new group. The owner
       and group are given all permissions, the others none; you also set the SGID bit:

        # mkdir /home/project
        # chgrp project /home/project
        # chmod u=rwx,g=srwx /home/project

       Now, if user joe creates a file in /home/project , that file should be assigned
       to group project :

       $ id
       uid=1000(joe) gid=1000(joe) groups=101(project),1000(joe)
       $ touch /tmp/joe.txt                                  Test: standard    directory
       $ ls -l /tmp/joe.txt
       -rw-r--r-- 1 joe joe 0 Jan 6 17:23 /tmp/joe.txt
       $ touch /home/project/joe.txt                                 project   directory
       $ ls -l /home/project/joe.txt
       -rw-r--r-- 1 joe project 0 Jan 6 17:24 /home/project/joe.txt

      There is just a little fly in the ointment, which you will be able to discern by
      looking closely at the final line in the example: The file does belong to the
      correct group, but other members of group project may nevertheless only
      read it. If you want all members of group project to be able to write to it as
      well, you must either apply chmod after the fact (a nuisance) or else set the
      umask such that group write permission is retained (see Exercise 12.6).

    The SGID mode only changes the system’s behaviour when new files are cre-
ated. Existing files work just the same as everywhere else. This means, for in-
stance, that a file created outside the SGID directory keeps its existing group as-
signment when moved into it (whereas on copying, the new copy would be put
into the directory’s group).
    The chgrp program works as always in SGID directories, too: the owner of a
file can assign it to any group he is a member of. If the owner is not a member of
the directory’s group, he cannot put the file into that group using chgrp ; he must
create it afresh within the directory.

B It is possible to set the SUID bit on a directory—this permission does not
  signify anything, though.

  Linux supports another special mode for directories, where only a file’s owner
may delete or remove files within that directory:

drwxrwxrwt     7 root   root   1024 Apr    7 10:07 /tmp

This t mode, the “sticky bit”, can be used to counter a problem which arises when
public directories are in shared use: Write permission to a directory lets a user
delete other users’ files, regardless of their access mode and owner! For example,
the /tmp directories are common ground, and many programs create their tempo-
rary files there. To do so, all users have write permission to that directory. This
implies that any user has permission to delete files there.

                                      Table 12.1: The most important file attributes

                    Attribute    Meaning
                         A       atime is not updated (interesting for mobile computers)
                         a       (append-only) The file can only be appended to
                         c       The file’s content is compressed transparently (not implemented)
                         d       The file will not be backed up by dump
                         i       (immutable) The file cannot be changed at all
                         j       Write operations to the file’s content are passed through the journal
                                 (ext3 only)
                          s       File data will be overwritten with zeroes on deletion (not implemented)
                         S       Write operations to the file are performed “synchronously”, i. e.,
                                 without buffering them internally
                         u       The file may be “undeleted” after deletion (not implemented)

                        Usually, when deleting or renaming a file, the system does not consider that
                    file’s access permissions. If the “sticky bit” is set on a directory, a file in that di-
                    rectory can subsequently be deleted only by its owner, the directory’s owner, or
                    root . The “sticky bit” can be set or removed by specifying the symbolic +t and -t
                    modes; in the octal representation it has the value 1 in the same digit as the SUID
                    and SGID bits.

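For example, you can recreate /tmp-style permissions on a directory of your own (shared is an invented name):

```shell
mkdir shared
chmod 1777 shared           # world-writable, plus the sticky bit
stat -c %a shared           # 1777
ls -ld shared               # drwxrwxrwt …
```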
                    B The “sticky bit” derives its name from an additional meaning it used to have
                      in earlier Unix systems: At that time, programs were copied to swap space
                      in their entirety when started, and removed completely after having termi-
                      nated. Program files with the sticky bit set would be left in swap space
                      instead of being removed. This would accelerate subsequent invocations of
                      those programs since no copy would have to be done. Like most current
                      Unix systems, Linux uses demand paging, i. e., it fetches only those parts
                      of the code from the program’s executable file that are really required, and
                      does not copy anything to swap space at all; on Linux, the sticky bit never
                      had its original meaning.

                    C 12.5 [2] What does the special “s ” privilege mean? Where do you find it?
                      Can you set this privilege on a file that you created yourself?

                    C 12.6 [!1] Which umask invocation can be used to set up a umask that would, in
                      the project directory example above, allow all members of the project group
                      to read and write files in the project directory?

                    C 12.7 [2] What does the special “t ” privilege mean? Where do you find it?

                    C 12.8 [4] (For programmers.) Write a C program that invokes a suitable com-
                      mand (such as id ). Set this program SUID root (or SGID root ) and observe
                      what happens when you execute it.

                     C 12.9 If you leave them alone for a few minutes with a root shell, clever
                       users might try to stash a SUID root shell somewhere in the system, in order
                       to assume administrator privileges when desired. Does that work with bash ?
                       With other shells?

                   12.7      File Attributes
      file attributes Besides the access permissions, the ext2 and ext3 file systems support further file
attributes enabling access to special file system features. The most important file
attributes are summarised in Table 12.1.
   Most interesting are perhaps the “append-only” and “immutable” attributes,            a   and i attributes
which you can use to protect log files and configuration files from modification;
only root may set or reset these attributes, and once set they also apply to processes
running as root .

B In principle, an attacker who has gained root privileges may reset these at-
  tributes. However, attackers do not necessarily consider that possibility, so
  the attributes may still slow them down.
    The A attribute may also be useful; you can use it on mobile computers to ensure     A   attribute
that the disk isn’t always running, in order to save power. Usually, whenever
a file is read, its “atime”—the time of last access—is updated, which of course
entails an inode write operation. Certain files are very frequently looked at in
the background, such that the disk never gets to rest, and you can help here by
judiciously applying the A attribute.

B The c , s and u attributes sound very nice in theory, but are not (yet) sup-
  ported by “normal” kernels. There are some more or less experimental en-
  hancements making use of these attributes, and in part they are still pipe
  dreams.

   You can set or reset attributes using the chattr command. This works rather           chattr
like chmod : A preceding “+ ” sets one or more attributes, “- ” deletes one or more
attributes, and “= ” causes the named attributes to be the only enabled ones. The
-R option, as in chmod , lets chattr operate on all files in any subdirectories passed
as arguments and their nested subdirectories. Symbolic links are ignored in the
process.

# chattr +a /var/log/messages                                           Append only
# chattr -R +j /data/important                                    Data journaling …
# chattr -j /data/important/notso                                  … with exception

   With the lsattr command, you can review the attributes set on a file. The pro-        lsattr
gram behaves similarly to “ls -l ”:

# lsattr /var/log/messages
-----a----------- /var/log/messages

Every dash stands for a possible attribute. lsattr supports various options such
as -R , -a , and -d , which generally behave like the eponymous options to ls .

C 12.10 [!2] Convince yourself that the a and i attributes work as advertised.

C 12.11 [2] Can you make all dashes disappear in the lsattr output for a given
  file?

      Commands in this Chapter
      chattr  Sets file attributes for ext2 and ext3 file systems       chattr (1)   189
      chgrp   Sets the assigned group of a file or directory             chgrp (1)   182
      chmod   Sets access modes for files and directories                chmod (1)   181
      chown   Sets the owner and/or assigned group of a file or directory
                                                                         chown (1)   182
      getfacl Displays ACL data                                        getfacl (1)   185
      lsattr  Displays file attributes on ext2 and ext3 file systems    lsattr (1)   189
      setfacl Enables ACL manipulation                                 setfacl (1)   185
      star    POSIX-compatible tape archive with ACL support              star (1)   185

         • Linux supports file read, write and execute permissions, where these per-
           missions can be set separately for a file’s owner, the members of the file’s
           group and “all others”.
         • The sum total of a file’s permissions is also called its access mode.
         • Every file (and directory) has an owner and a group. Access rights—read,
           write, and execute permission—are assigned to these two categories and
           “others” separately. Only the owner is allowed to set access rights.
         • Access rights do not apply to the system administrator (root ). He may read
           or write all files.
         • File permissions can be manipulated using the chmod command.
         • Using chown , the system administrator can change the user and group as-
           signment of arbitrary files.
         • Normal users can use chgrp to assign their files to different groups.
         • The umask can be used to limit the standard permissions when files and
           directories are being created.
         • The SUID and SGID bits allow the execution of programs with the privileges
           of the file owner or file group instead of those of the invoker.
         • The SGID bit on a directory causes new files in that directory to be assigned
           the directory’s group (instead of the primary group of the creating user).
         • The “sticky bit” on a directory lets only the owner (and the system admin-
           istrator) delete files.
         • The ext file systems support special additional file attributes.

      [SUID] Dennis M. Ritchie. “Protection of data file contents”. US patent 4,135,240.

Process Management

13.1   What Is A Process? . . . . . . . . . . . . . .         .   .   .   .   .   192
13.2   Process States . . . . . . . . . . . . . . . .         .   .   .   .   .   193
13.3   Process Information—ps . . . . . . . . . . . .         .   .   .   .   .   194
13.4   Processes in a Tree—pstree . . . . . . . . . . .       .   .   .   .   .   195
13.5   Controlling Processes—kill and killall . . . . . . .   .   .   .   .   .   196
13.6   pgrep and pkill . . . . . . . . . . . . . . . .        .   .   .   .   .   197
13.7   Process Priorities—nice and renice . . . . . . . . .   .   .   .   .   .   199
13.8   Further Process Management Commands—nohup and top      .   .   .   .   .   199

   •   Knowing the Linux process concept
   •   Using the most important commands to query process information
   •   Knowing how to signal and stop processes
   •   Being able to influence process priorities

   • Linux commands

192                                                                                        13 Process Management

                                13.1     What Is A Process?
                                A process is, in effect, a “running program”. Processes have code that is executed,
                                and data on which the code operates, but also various attributes the operating system uses
                                to manage them, such as:
            process number         • The unique process number (PID or “process identity”) serves to identify
                                     the process and can only be assigned to a single process at a time.
      parent process number        • All processes know their parent process number, or PPID. Every process can
                                     spawn others (“children”) that then contain a reference to their procreator.
                                     The only process that does not have a parent process is the “pseudo” process
                                     with PID 0, which is generated during system startup and creates the “init”
                                     process with a PID of 1, which in turn is the ancestor of all other processes
                                     in the system.

                       user        • Every process is assigned to a user and a set of groups. These are impor-
                     groups          tant to determine the access the process has to files, devices, etc. (See Sec-
                                     tion 12.4.) Besides, the user the process is assigned to may stop, terminate,
                                     or otherwise influence the process. The owner and group assignments are
                                     passed on to child processes.

                                   • The system splits the CPU time into little chunks (“time slices”), each of
                                     which lasts only for a fraction of a second. The current process is entitled to
                                     such a time slice, and afterwards the system decides which process should
                                     be allowed to execute during the next time slice. This decision is made by
                     priority        the appropriate “scheduler” based on the priority of a process.

                                     B In multi-processor systems, Linux also takes into account the particu-
                                       lar topology of the computer in question when assigning CPU time to
                                       processes—it is simple to run a process on any of the different cores
                                       of a multi-core CPU which share the same memory, while the “migra-
                                       tion” of a process to a different processor with separate memory entails
                                        a noticeable administrative overhead and is therefore less often worthwhile.

             other attributes      • A process has other attributes—a current directory, a process environment,
                                     …—which are also passed on to child processes.
          process file system You can consult the /proc file system for this type of information. This process file
                                system is used to make available data from the system kernel which is collected at
                                run time and presented by means of directories and files. In particular, there are
                                various directories using numbers as names; every such directory corresponds to
                                one process and its name to the PID of that process. For example:

                                dr-xr-xr-x   3   root   root    0   Oct 16 11:11 1
                                dr-xr-xr-x   3   root   root    0   Oct 16 11:11 125
                                dr-xr-xr-x   3   root   root    0   Oct 16 11:11 80

                                In the directory of a process, there are various “files” containing process informa-
                                tion. Details may be found in the proc (5) man page.
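For example, a process can look up its own data there. The following sketch inspects the current shell’s entry (field names as documented in proc (5)):

```shell
# Inspect the /proc entry of the current shell: name, state, and parent PID
pid=$$
grep -E '^(Name|State|PPid):' "/proc/$pid/status"
# The cwd symbolic link points at the process's current directory
readlink "/proc/$pid/cwd"
```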

                 job control    B The job control available in many shells is also a form of process management—
                                  a “job” is a process whose parent process is a shell. From the corresponding
                                  shell, its jobs can be controlled using commands like jobs , bg , and fg , as well
                                  as the key combinations Ctrl + z and Ctrl + c (among others). More in-
                                  formation is available from the manual page of the shell in question, or
                                   from the Linup Front training manual, Introduction to Linux for Users and
                                   Administrators.

          [Figure 13.1: The relationship between various process states (diagram
          omitted; it shows the transitions between the “runnable” and “operating”
          states)]

C 13.1 [3] How can you view the environment variables of any of your pro-
  cesses? (Hint: /proc file system.)

C 13.2 [2] (For programmers.) What is the maximum possible PID? What hap-
  pens when this limit is reached? (Hint: Look for the string “PID_MAX ” in the
  files below /usr/include/linux .)

13.2      Process States
Another important property of a process is its process state. A process in mem- process state
ory waits to be executed by the CPU. This state is called “runnable”. Linux uses
pre-emptive multitasking, i. e., a scheduler distributes the available CPU time to pre-emptive multitasking
waiting processes in pieces called “time slices”. If a process is actually execut-
ing on the CPU, this state is called “operating”, and after its time slice is over the
process reverts to the “runnable” state.

B From an external point of view, Linux does not distinguish between these
  two process states; the process in question is always marked “runnable”.

    It is quite possible that a process requires further input or needs to wait for
peripheral device operations to complete; such a process cannot be assigned CPU
time, and its state is considered to be “sleeping”. Processes that have been stopped
by means of Ctrl + z using the shell’s job control facility are in state “stopped”.
Once the execution of a process is over, it terminates itself and makes a return return code
code available, which it can use to signal, for example, whether it completed suc-
cessfully or not (for a suitable definition of “success”).
    Once in a while processes appear that are marked as zombies using the “Z” zombies
state. These “living dead” usually exist only for a brief instant. A process becomes
a zombie when it finishes and dies for good once its parent process has queried
its return code. If a zombie does not disappear from the process table this means
that its parent should really have picked up the zombie’s return code but didn’t.
A zombie cannot be removed from the process table. Because the original pro-
cess no longer exists and takes up neither RAM nor CPU time, a zombie
has no impact on the system except for an unattractive entry in the process list.
Persistent or very numerous zombies usually indicate programming errors in the
parent process; when the parent process terminates they should do so as well.

B Zombies disappear when their parent process disappears because “or-
  phaned” processes are “adopted” by the init process. Since the init process
              spends most of its time waiting for other processes to terminate so that it
              can collect their return code, the zombies are then disposed of fairly quickly.

      B Of course, zombies take up room in the process table that might be required
        for other processes. If that proves a problem, look at the parent process.

      C 13.3 [2] Start a xclock process in the background. In the $! shell variable you
        will find the PID of that process (it always contains the PID of the most re-
        cently launched background process). Check the state of that process by
        means of the “grep ^State: /proc/$!/status ” command. Stop the xclock by
        moving it to the foreground and stopping it using Ctrl + z . What is the
        process state now? (Alternatively, you may use any other long-running pro-
        gram in place of xclock .)

      C 13.4 [4] (When going over this manual for the second time.) Can you create
        a zombie process on purpose?

      13.3       Process Information—ps
      You would normally not access the process information in /proc directly but use
      the appropriate commands to query it.
         The ps (“process status”) command is available on every Unix-like system.
       Without any options, all processes running on the current terminal are output. The
      resulting list contains the process number PID , the terminal TTY , the process state
      STAT , the CPU time used so far TIME and the command being executed.

       $ ps
         PID TTY STAT TIME COMMAND
         997   1 S    0:00 -bash
       1005   1 R    0:00 ps
      $ _

      There are two processes currently executing on the tty1 terminal: Apart from the
      bash with PID 997, which is currently sleeping (state “S ”), a ps command is executed
      using PID 1005 (state “R ”). The “operating” state mentioned above is not being
      displayed in ps output.
          The syntax of ps is fairly confusing. Besides Unix98-style options (like -l ) and
      GNU-style long options (such as --help ), it also allows BSD-style options without
       a leading dash. Here is a selection of the possible parameters:
      a   (“all”) displays all processes with a terminal
      --forest   displays the process hierarchy
      l   (“long”) outputs extra information such as the priority
      r   (“running”) displays only runnable processes
      T   (“terminal”) displays all processes on the current terminal
      U   ⟨name⟩ (“user”) displays processes owned by user ⟨name⟩
      x   also displays processes without a terminal
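A brief sketch of these options in action (output trimmed with head for brevity):

```shell
# BSD-style options take no leading dash: a = all with a terminal, x = those without
ps ax | head -5

# The l option adds columns such as PRI (priority) and NI (nice value)
ps lax | head -3
```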

      B The unusual syntax of ps derives from the fact that AT&T’s ps traditionally
        used leading dashes on options while BSD’s didn’t (and the same option
        can have quite different results in both flavours). When the big reunification
        came in System V Release 4, one could hang on to most options with their
        customary meaning.

   If you give ps a PID, only information pertaining to the process in question will
be displayed (if it exists):

$ ps 1
    1 ?         Ss      0:00 init [2]

With the -C option, ps displays information about the process (or processes) based
on a particular command:

$ ps -C   konsole
    PID   TTY           TIME   CMD
   4472   ?         00:00:10   konsole
  13720   ?         00:00:00   konsole
  14045   ?         00:00:14   konsole

(Alternatively, grep would help here as well.)

C 13.5 [!2] What does the information obtainable with the ps command mean?
  Invoke ps without an option, then with the a option, and finally with the ax
  option. What does the x option do?

C 13.6 [3] The ps command allows you to determine the output format your-
  self by means of the -o option. Study the ps (1) manual page and specify a
  ps command line that will output the PID, PPID, the process state and the
  command name of all processes.
13.4        Processes in a Tree—pstree
If you do not want to obtain every bit of information about a process but are rather
interested in the relationships between processes, the pstree command is helpful.      pstree
pstree displays a process tree in which the child processes are shown as depending
on their parent process. The processes are displayed by name:

$ pstree
     |           |-kblockd/0
     |           `-2*[pdflush]
     |         |-kdeinit-+-bash---bash
     |         |          |-2*[bash]
     |         |          |-bash---less
                    |         |          |-bash-+-pstree
                    |         |          |      `-xdvi---xdvi.bin---gs
                    |         |          `-bash---emacs---emacsserver
                    |         |-kdeinit---3*[bash]
                    |         |-kteatime
                    |         `-tclsh

            Identical processes are collected in brackets and a count and “*” are displayed.
            The most important options of pstree include:
            -p   displays PIDs along with process names
            -u   displays process owners’ user name
            -G   makes the display prettier by using terminal graphics characters—whether this
                   is in fact an improvement depends on your terminal
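A quick sketch (pstree belongs to the psmisc package and may be missing, so the script falls back to the ps approximation mentioned below):

```shell
# Show the current shell's subtree with PIDs; fall back to ps if pstree is absent
if command -v pstree >/dev/null 2>&1; then
    pstree -p $$
else
    ps --forest -o pid,ppid,comm
fi
```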

             B You can also obtain an approximated tree structure using “ps --forest ”. The
               tree structure is part of the COMMAND column in the output.

            13.5          Controlling Processes—kill and killall
      signals The kill command sends signals to selected processes. The desired signal can be
            specified either numerically or by name; you must also pass the process number
            in question, which you can find out using ps :

             $   kill   -15 4711                                  Send signal SIGTERM to process 4711
             $   kill   -TERM 4711                                                        Same thing
             $   kill   -SIGTERM 4711                                               Same thing again
             $   kill   -s TERM 4711                                                Same thing again
             $   kill   -s SIGTERM 4711                                             Same thing again
             $   kill   -s 15 4711                                                        Guess what

                 Here are the most important signals with their numbers and meaning:
            SIGHUP   (1, “hang up”) causes the shell to terminate all of its child processes that
                     use the same controlling terminal as itself. For background processes with-
                     out a controlling terminal, this is frequently used to cause them to re-read
                     their configuration files (see below).

             SIGINT (2, “interrupt”) Interrupts the process; equivalent to the Ctrl + c key combination.
             SIGKILL (9, “kill”) Terminates the process and cannot be ignored; the “emergency brake”.
            SIGTERM     (15, “terminate”) Default for kill and killall ; terminates the process.
            SIGCONT     (18, “continue”) Lets a process that was stopped using SIGSTOP continue.
            SIGSTOP     (19, “stop”) Stops a process temporarily.

            SIGTSTP     (20, “terminal stop”) Equivalent to the     Ctrl   +   z   key combination.

            A You shouldn’t get hung up on the signal numbers, which are not all guaran-
              teed to be the same on all Unix versions (or even Linux platforms). You’re
              usually safe as far as 1, 9, or 15 are concerned, but for everything else you
              should rather be using the names.

Unless otherwise specified, the signal SIGTERM (“terminate”) will be sent, which
(usually) ends the process. Programs can be written such that they “trap” signals
(handle them internally) or ignore them altogether. Signals that a process neither
traps nor ignores usually cause it to crash hard. Some (few) signals are ignored
by default.
   The SIGKILL and SIGSTOP signals are not handled by the process but by the kernel
and hence cannot be trapped or ignored. SIGKILL terminates a process without
giving it a chance to object (as SIGTERM would), and SIGSTOP stops the process such
that it is no longer given CPU time.
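The difference can be sketched in the shell: a process may trap SIGTERM and react to it, which would be impossible with SIGKILL:

```shell
# A child shell that traps SIGTERM; the same could not be done for SIGKILL
sh -c 'trap "echo caught SIGTERM; exit 0" TERM; sleep 5 & wait $!' &
pid=$!
sleep 1              # give the child time to install its trap
kill -TERM $pid      # the trap fires and the child exits cleanly
wait $pid
```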
   kill does not always stop processes. Background processes which provide sys-
tem services without a controlling terminal—daemons—usually reread their con- daemons
figuration files without a restart if they are sent SIGHUP (“hang up”).
   You can apply kill , like many other Linux commands, only to processes that
you actually own. Only root is not subject to this restriction.
   Sometimes a process will not even react to SIGKILL . The reason for this is ei-
ther that it is a zombie (which is already dead and cannot be killed again) or else
blocked in a system call. The latter situation occurs, for example, if a process waits
for a write or read operation on a slow device to finish.
   An alternative to the kill command is the killall command. killall acts just killall
like kill —it sends a signal to the process. The difference is that the process must
be named instead of addressed by its PID, and that all processes of the same name
are signalled. If no signal is specified, it sends SIGTERM by default (like kill ). killall
outputs a warning if there was nothing to signal to under the specified name.
   The most important options for killall include:
-i killall  will query you whether it is actually supposed to signal the process in
       question.
-l   outputs a list of all available signals.
-w   waits until the signalled processes have actually terminated. killall
       checks every second whether the process still exists, and only terminates
       once it is gone.

A Be careful with killall if you get to use Solaris or BSD every now and then.
  On these systems, the command does exactly what its name suggests—it
  kills all processes.
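On Linux, signalling by name can be sketched like this (killall belongs to the psmisc package and may be missing, so the script falls back to pkill):

```shell
# Start two identically named background processes, then signal them all by name
sleep 60 & sleep 60 &
sleep 1                      # give them a moment to start
if command -v killall >/dev/null 2>&1; then
    killall sleep
else
    pkill -x sleep           # fallback where psmisc is not installed
fi
wait 2>/dev/null             # collect the terminated children
pgrep -x -u "$(id -u)" sleep >/dev/null || echo "no sleep processes left"
```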

C 13.7 [2] Which signals are being ignored by default? (Hint: signal (7))

13.6        pgrep   and pkill
Useful as ps and kill are, it can sometimes be difficult to identify exactly the
processes of interest. Of course you can look through the output of ps using
grep , but making this “foolproof” without allowing too many false positives
is inconvenient at best and tricky at worst. Happily, Kjetil Torgrim Homme has
taken this burden off us and developed the pgrep program, which enables us to
search the process list conveniently. A command like

$ pgrep -u root sshd

will, for example, list the PIDs of all sshd processes belonging to root .

B By default, pgrep restricts itself to outputting PIDs. Use the -l option to get it
  to show the command name, too. With -a it will list the full command line.

B The -d option allows you to specify a separator (the default is “\n ”):

              $ pgrep -d, -u hugo bash

              You can obtain more detailed information on the processes by feeding the
              PIDs to ps :

              $ ps up $(pgrep -d, -u hugo bash)

              (The p option lets you give ps a comma-separated list of PIDs of interest.)
         pgrep ’s parameter is really an (extended) regular expression (as with egrep )
      which is used to examine the process names. Hence something like

      $ pgrep '^([bd]a|t?c|k|z|)sh$'

      will look for the common shells.

      B Normally pgrep considers only the process name (the first 15 characters of the
         process name, to be exact). Use the -f option to search the whole command line.
           You can add search criteria by means of options. Here is a small selection:
      -G   Consider only processes belonging to the given group(s). (Groups can be spec-
             ified using names or GIDs.)
      -n   Only display the newest (most recently started) of the found processes.
      -o   Only display the oldest (least recently started) of the found processes.
      -P   Consider only processes whose parent processes have one of the given PIDs.
      -t   Consider only processes whose controlling terminal is listed. (Terminal names
             should be given without the leading “/dev/ ”.)
      -u   Consider only processes with the given (effective) UIDs.

      B If you specify search criteria but no regular expression for the process name,
        all processes matching the search criteria will be listed. If you omit both you
        will get an error message.
         The pkill command behaves like pgrep , except that it does not list the found
      processes’ PIDs but sends them a signal directly (by default, SIGTERM ). As in kill
      you can specify another signal:

      # pkill -HUP syslogd

      The --signal option would also work:

      # pkill --signal HUP syslogd

       B The advantage of pkill compared to killall is that pkill can be much more
         specific in selecting the processes to signal.

      C 13.8 [!1] Use pgrep to determine the PIDs of all processes belonging to user
        hugo . (If you don’t have a user hugo , then specify some other user instead.)

      C 13.9 [2] Use two separate terminal windows (or text consoles) to start one
        “sleep 60 ” command each. Use pkill to terminate (a) the first command
        started, (b) the second command started, (c) the command started in one
        of the two terminal windows.

13.7      Process Priorities—nice and renice
In a multi-tasking operating system such as Linux, CPU time must be shared
among various processes. This is the scheduler’s job. There is normally more
than one runnable process, and the scheduler must allot CPU time to runnable
processes according to certain rules. The deciding factor for this is the priority priority
of a process. The priority of a process changes dynamically according to its prior
behaviour—“interactive” processes, i. e., ones that do I/O, are favoured over those
that just consume CPU time.
   As a user (or administrator) you cannot set process priorities directly. You can
merely ask the kernel to prefer or penalise processes. The “nice value” quantifies
the degree of favouritism exhibited towards a process, and is passed along to child
processes.
   A new process’s nice value can be specified with the nice command. Its syntax nice

nice [- ⟨nice   value⟩] ⟨command⟩ ⟨parameter⟩ …

(nice is used as a “prefix” for another command).
   The possible nice values are numbers between −20 and +19. A negative nice possible nice values
value increases the priority, a positive value decreases it (the higher the value, the
“nicer” you are towards the system’s other users by giving your own processes a
lower priority). If no nice value is specified, the default value of +10 is assumed.
Only root may start processes with a negative nice value (negative nice values are
not generally nice for other users).
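For example, the following sketch starts a child with nice value +15 and has the child report its own nice value (the NI column of ps):

```shell
# Start a command with nice value +15; the child prints its own NI value
nice -15 sh -c 'ps -o ni= -p $$'     # historical syntax, as described above
nice -n 15 sh -c 'ps -o ni= -p $$'   # equivalent modern syntax
```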
   The priority of a running process can be influenced using the renice command. renice
You call renice with the desired new nice value and the PID (or PIDs) of the pro-
cess(es) in question:

renice [- ⟨nice   value⟩] ⟨PID⟩ …

Again, only the system administrator may assign arbitrary nice values. Normal
users may only increase the nice value of their own processes using renice —for
example, it is impossible to revert a process started with nice value 5 back to nice
value 0, while it is absolutely all right to change its nice value to 10. (Think of a
ratchet that only moves in one direction.)
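A sketch with a harmless background process (sleep as a stand-in for a real job):

```shell
# Raise the nice value of a running background process
sleep 30 &
pid=$!
renice 10 $pid               # a normal user may only increase the value
ps -o pid,ni,comm -p $pid    # the NI column now shows the new value
kill $pid
wait 2>/dev/null
```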

C 13.10 [2] Try to give a process a higher priority. This may possibly not
  work—why? Check the process priority using ps .

13.8      Further Process Management Commands—nohup
          and top
When you invoke a command using nohup , that command will ignore a SIGHUP sig- Ignoring SIGHUP
nal and thus survive the demise of its parent process:

nohup   ⟨command⟩ …

The process is not automatically put into the background but must be placed there
by appending a & to the command line. If the program’s standard output is a ter-
minal and the user has not specified anything else, the program’s output together
with its standard error output will be redirected to the nohup.out file. If the current
directory is not writable for the user, the file is created in the user’s home directory
instead.
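A minimal sketch; the output is redirected explicitly because nohup only creates nohup.out when standard output is a terminal:

```shell
# Run a job immune to SIGHUP; redirect output explicitly so the example
# works even when standard output is not a terminal
nohup sh -c 'sleep 1; echo job finished' > job.log 2>&1 &
wait $!
cat job.log
rm -f job.log
```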

      top      top unifies the functions of many process management commands in a single
            program. It also provides a process table which is constantly being updated. You
            can interactively execute various operations; an overview is available using h .
            For example, it is possible to sort the list according to several criteria, send signals
            to processes ( k ), or change the nice value of a process ( r ).

            Commands in this Chapter
            kill      Terminates a background process                      bash (1), kill (1)   196
            killall   Sends a signal to all processes matching the given name
                                                                                 killall (1)    197
            nice      Starts programs with a different nice value                    nice (1)   199
            nohup     Starts a program such that it is immune to SIGHUP signals nohup (1)       199
            pgrep     Searches processes according to their name or other criteria
                                                                                   pgrep (1)    197
            pkill     Signals to processes according to their name or other criteria
                                                                                   pkill (1)    198
            ps        Outputs process status information                               ps (1)   194
            pstree    Outputs the process tree                                    pstree (1)    195
            renice    Changes the nice value of running processes                 renice (8)    199
            top       Screen-oriented tool for process monitoring and control         top (1)   199

               • A process is a program that is being executed.
               • Besides a program text and the corresponding data, a process has attributes
                 such as a process number (PID), parent process number (PPID), owner,
                 groups, priority, environment, current directory, …
               • All processes derive from the init process (PID 1).
               • ps can be used to query process information.
               • The pstree command shows the process hierarchy as a tree.
               • Processes can be controlled using signals.
               • The kill and killall commands send signals to processes.
               • The nice and renice commands are used to influence process priorities.
               • ulimit limits the resource usage of a process.
               • top is a convenient user interface for process management.

Hard Disks (and Other Secondary Storage)

14.1 Fundamentals . . . . . . . . . .           .   .   .   .   .   .   .   .   .   .   .   202
14.2 Bus Systems for Mass Storage . . . .       .   .   .   .   .   .   .   .   .   .   .   202
14.3 Partitioning . . . . . . . . . . .         .   .   .   .   .   .   .   .   .   .   .   205
    14.3.1 Fundamentals . . . . . . . .         .   .   .   .   .   .   .   .   .   .   .   205
    14.3.2 The Traditional Method (MBR) . .     .   .   .   .   .   .   .   .   .   .   .   206
    14.3.3 The Modern Method (GPT) . . .        .   .   .   .   .   .   .   .   .   .   .   207
14.4 Linux and Mass Storage . . . . . .         .   .   .   .   .   .   .   .   .   .   .   208
14.5 Partitioning Disks. . . . . . . . .        .   .   .   .   .   .   .   .   .   .   .   210
    14.5.1 Fundamentals . . . . . . . .         .   .   .   .   .   .   .   .   .   .   .   210
    14.5.2 Partitioning Disks Using fdisk . .   .   .   .   .   .   .   .   .   .   .   .   212
    14.5.3 Formatting Disks using GNU parted    .   .   .   .   .   .   .   .   .   .   .   215
    14.5.4 gdisk . . . . . . . . . . .          .   .   .   .   .   .   .   .   .   .   .   216
    14.5.5 More Partitioning Tools . . . .      .   .   .   .   .   .   .   .   .   .   .   217
14.6 Loop Devices and kpartx . . . . . .        .   .   .   .   .   .   .   .   .   .   .   217
14.7 The Logical Volume Manager (LVM) . .       .   .   .   .   .   .   .   .   .   .   .   219

   • Understanding how Linux deals with secondary storage devices based on
     ATA, SATA, and SCSI.
   • Understanding MBR-based and GPT-based partitioning
   • Knowing about Linux partitioning tools and how to use them
   • Being able to use loop devices

   • Basic Linux knowledge
   • Knowledge about Linux hardware support

202                                                     14 Hard Disks (and Other Secondary Storage)

                14.1          Fundamentals
                RAM is fairly cheap these days, but even so not many computers can get by with-
                out the permanent storage of programs and data on mass storage devices. These
                include (among others):
                    • Hard disks with rotating magnetic platters
                    • “Solid-state disks” (SSDs) that look like hard disks from the computer’s
                      point of view, but use flash memory internally
                    • USB thumb drives, SD cards, CF cards, and other interchangeable media
                      based on flash memory
                    • RAID systems that aggregate hard disks and present them as one big storage
                    • SAN devices which provide “virtual” disk drives on the network
                    • File servers that offer file access at an abstract level (CIFS, NFS, …)
                 In this chapter we shall explain the basics of Linux support for the first three en-
                 tries in the list—hard disks, SSDs and flash-based portable media like USB thumb
                 drives. RAID systems and SANs are discussed in the Linup Front training man-
                 ual, Linux Storage and File Systems; file servers are discussed in Linux Infrastructure.
                14.2          Bus Systems for Mass Storage
                 IDE, ATA and SATA Until not so long ago, hard disks and optical drives such
            IDE as CD-ROM and DVD readers and writers used to be connected via an “IDE con-
                 troller”, of which self-respecting PCs had at least two (with two “channels” each).

                 B “IDE” is really an abbreviation of “Integrated Drive Electronics”. The “inte-
                   grated drive electronics” alluded to here lets the computer see the disk as a
                   sequence of numbered blocks without having to know anything about sec-
                   tors, cylinders, and read/write heads, even though this elaborate charade
                   is still kept up between the BIOS, disk controller, and disks. However, for a
                   long time this description has applied to all hard disks, not just those with
                   the well-known “IDE” interface, which by now is officially called “ATA”,
                   short for “AT Attachment”1 .

                 Computers bought new these days usually still contain IDE interfaces, but the
                 method of choice to connect hard disks and optical drives today is a serial ver-
      Serial ATA sion of the ATA standard, imaginatively named “Serial ATA” (SATA, for short).
                  Since the introduction of SATA (around 2003), traditional ATA (or “IDE”) has
                  commonly been called “P-ATA”, short for “parallel ATA”. The difference applies to the
                 cable, which for traditional ATA is an inconvenient-to-place and electrically not
                 100% fortunate 40- or 80-lead ribbon cable which can transfer 16 data bits in par-
                 allel, and which links several devices at once to the controller. SATA, on the other
                 hand, utilises narrow flat seven-lead cables for serial transmission, one per device
                 (usually produced in a cheerful orange colour).

                  B SATA cables and connectors are mechanically more delicate than the cor-
                    responding P-ATA hardware, but they make up for that through other
                    advantages: They impede the air flow in a PC case less, have different
                    connectors at each end so they cannot be installed wrongly, and cannot be
                    plugged in the wrong way round. In addition, the illogical separation be-
                    tween 2.5-inch and 3.5-inch disk drives, which required different connectors
                    for P-ATA, goes away.
                   1 Anyone   remember the IBM PC/AT?

B Interestingly, serial ATA allows considerably faster data transfers than tra-
  ditional ATA, even though with the former all bits are transferred in “single
  file” rather than 16 at a go in parallel. This is due to the electrical proper-
  ties of the interface, which uses differential transmission and a signalling
  voltage of only 0.5 V instead of 5 V. (This is why cables may be longer, too—
   1 m instead of formerly 45 cm.) Current SATA interfaces can theoretically
   transfer up to 16 GiBit/s (SATA 3.2), which due to encoding and other im-
   pediments comes out as approximately 2 GiB/s—rather more than single
  disk drives can keep up with at sustained rates, but useful for RAID sys-
  tems that access multiple disk drives at the same time, and for fast SSDs. It
  is unlikely that SATA speeds will evolve further, since the trend with SSDs
  is towards connecting them directly via PCIe2 .

B Besides the higher speed and more convenient cabling, SATA also offers
  the advantage of “hot-swapping”: It is possible to disconnect a SATA disk
  drive and connect another one in its place, without having to shut down the
  computer. This of course presupposes that the computer can do without
  the data on the drive in question—typically because it is part of a RAID-1
   or RAID-5, where the data on the new drive can be reconstructed based
  on other drives in the system. With traditional ATA, this was impossible (or
  only possible by jumping through hoops).

B External SATA (“eSATA”) is a derivative of SATA for use with external
  drives. It has different connectors and electrical specifications, which are
  much more robust mechanically and better suited for hot-swapping. In
  the meantime, it has been almost completely ousted from the market by
  USB 3.x, but can still be found in older hardware.

SCSI and SAS The “Small Computer System Interface” or SCSI (customary pro-
nunciation: “SCUZ-zy”) has served for more than 25 years to connect hard disks,
tape drives and other mass storage devices, but also peripherals such as scanners,
to “small” computers3 . SCSI buses and connectors exist in a confusing variety,
beginning with the “traditional” 8-bit bus and ranging from the fast 16-bit vari-
eties to new, even faster serial implementations (see below). They also differ in
the maximum number of devices per bus, and in physical parameters such as the
maximum cable length and the allowable distances between devices on the cable.
Nicely enough, most of the variants are compatible or can be made compatible
(possibly with loss of efficiency!). Varieties such as FireWire (IEEE-1394) or Fibre
Channel are treated like SCSI by Linux.

B Nowadays, most work goes into the serial SCSI implementations, most no-
  tably “Serial Attached SCSI” (SAS). As with SATA, data transfer is poten-
  tially faster (at the moment, SAS is slightly slower than the fastest parallel
  SCSI version, Ultra-640 SCSI) and electrically much less intricate. In partic-
  ular, the fast parallel SCSI versions are plagued by clocking problems that
  derive from the electrical properties of the cables and termination, and that
  do not exist with SAS (where the pesky termination is no longer necessary
  at all).

B SAS and SATA are fairly closely related; the most notable differences are that
  SAS allows things like accessing a drive via several cable paths for redun-
  dancy (“multipath I/O”; SATA requires jumping through hoops for this),
  supports more extensive diagnosis and logging functions, and is based on
  a higher signalling voltage, which allows for longer cables (up to 8 m) and
  physically larger servers.
   2 SATA in a strict sense allows speeds up to 6 GiBit/s; the higher speed of SATA 3.2 is already
achieved by means of PCIe. This “SATA Express” specification defines an interface that can carry
SATA signals as well as PCIe, such that compatible devices can be connected not only to SATA Express
controllers, but also to older hosts which support “only” SATA with up to 6 GiBit/s.
   3 Nobody has ever defined the meaning of “small” in this context, but it must be something like
“can be bodily lifted by at most two people”.

                                            Table 14.1: Different SCSI variants

                      Name                   Width    Transfer rate Devices   Explanation
                      SCSI-1                  8 bit     ≤ 5 MiB/s         8   “Ancestor”
                      SCSI-2 “Fast”           8 bit      10 MiB/s         8
                      SCSI-2 “Wide”          16 bit      20 MiB/s        16
                      SCSI-3 “Ultra”          8 bit      20 MiB/s         8
                      SCSI-3 “Ultrawide”     16 bit      40 MiB/s        16
                      Ultra2 SCSI            16 bit      80 MiB/s        16   LVD bus
                      Ultra-160 SCSI         16 bit     160 MiB/s        16   LVD bus
                      Ultra-320 SCSI         16 bit     320 MiB/s        16   LVD bus
                      Ultra-640 SCSI         16 bit     640 MiB/s        16   LVD bus

                     B SATA and SAS are compatible to an extent where you can use SATA disk
                       drives on a SAS backplane (but not vice-versa).

      prevalence       “Pure-bred” SCSI, as far as PCs are concerned, is found mostly in servers; work-
                      stations and “consumer PCs” tend to use IDE or SATA for mass storage and USB
                      for other devices. Devices based on IDE and USB are much cheaper to manu-
                      facture than SCSI-based devices—IDE disks, for example, cost about a third or a
                      fourth of the price of comparably large SCSI disks.

                     B We do need to mention that SCSI disks are usually designed especially for
                       use in servers, and are therefore optimised for high speed and longevity.
                       SATA disks for workplace PCs do not come with the same warranties, are
                       not supposed to rival a jet fighter for noise, and should support fairly fre-
                       quent starting and stopping.

                        As a Linux administrator, you should know about SCSI even if you do not run
                     any SCSI-based systems, since from the point of view of the Linux kernel, in ad-
                     dition to SATA many USB or FireWire devices are accessed like SCSI devices and
                      use the same infrastructure.

           SCSI ID    B Every device on a SCSI bus requires a unique “SCSI ID”. This number
                        between 0 and 7 (15 on the wide buses) is used to address the device.
                        Most “true” SCSI devices sport jumpers or a switch to select it; with Fibre
                        Channel, USB, or SATA devices that are accessed via the SCSI infrastructure,
                        the system arranges for suitable unique SCSI IDs to be assigned.

      host adapter   B To use SCSI, a PC needs at least one host adapter (or “host”). Motherboard-
       SCSI BIOS       based and better expansion card host adapters contain a SCSI BIOS which
                       lets the system boot from a SCSI device. You can also use this to check which
                       SCSI IDs are available and which are used, and which SCSI device, if any,
                       should be used for booting.

                     B The host adapter counts as a device on the SCSI bus—apart from itself you
                       can connect 7 (or 15) other devices.

       boot order    B If your BIOS can boot from SCSI devices, you can also select in the boot order
                       whether the ATA disk C: should be preferred to any (potentially) bootable
                       SCSI devices.

                     B Most important for the correct function of a parallel SCSI system is appro-
      termination      priate termination of the SCSI bus. This can either be ensured via a special
                       plug (“terminator”) or switched on or off on individual devices. Erroneous
                       termination is the possible origin of all sorts of SCSI problems. If you do
                       experience difficulties with SCSI, always check first that termination is in
                       order. SAS does not require termination.

USB With the new fast USB variants, few if any compromises will be needed
when connecting mass storage devices—reading and writing speeds are bounded
by the storage device, not (as with USB 1.1 and USB 2.0) by the bus. Linux manages
USB-based storage devices exactly like SCSI devices.

C 14.1 [1] How many hard disks or SSDs does your computer contain? What
  is their capacity? How are they connected to the computer (SATA, …)?
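For the exercise, the kernel's own bookkeeping is a good starting point; a brief sketch (device names and sizes will of course differ on your system, and lsblk may not be installed everywhere):

```shell
# All block devices known to the kernel, with sizes in 1-KiB units
cat /proc/partitions

# If available: a friendlier overview of whole disks only
lsblk -d -o NAME,SIZE,TYPE 2>/dev/null || true
```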

14.3      Partitioning
14.3.1     Fundamentals
Mass storage devices such as hard disks or SSDs are commonly “partitioned”, i. e.,
subdivided into several logical storage devices that the operating system can then
access independently. This does not only make it easier to use data structures that
are appropriate to the intended use—sometimes partitioning is the only way to
make a very large storage medium fully accessible, if limits within the operating
system preclude the use of the medium “as a whole” (even though this sort of
problem tends to be rare today).
   Advantages of partitioning include the following:
   • Logically separate parts of the system may be separated. For example, you
     could put your users’ data on a different partition from that used by the op-
     erating system itself. This makes it possible to reinstall the operating system
     from scratch without endangering your users’ data. Given the often rudi-
     mentary “upgrade” functionality of even current distributions this is very
     important. Furthermore, if inconsistencies occur in a file system then only
     one partition may be impacted at first.

   • The structure of the file system may be adapted to the data to be stored.
     Most file systems keep track of data by means of fixed-size “blocks”, where
     every file, no matter how small, occupies at least a single block. With a 4 KiB
     block size this implies that a 500-byte file only occupies 1/8 of its block—the
     rest goes to waste. If you know that a directory will contain mostly small
     files (cue: mail server), it may make sense to put this directory on a parti-
     tion using smaller blocks (1 or 2 KiB). This can reduce waste considerably.
     Some database servers, on the other hand, like to work on “raw” partitions
     (without any file system) that they manage themselves. An operating sys-
     tem must make that possible, too.
   • “Runaway” processes or incautious users can use up all the space available
     on a file system. At least on important server systems it makes sense to
     allow user data (including print jobs, unread e-mail, etc.) only on partitions
     that may get filled up without getting the system itself in trouble, e.g., by
     making it impossible to append information to important log files.
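The effect of keeping user data on its own partition is easy to observe with df, which reports usage separately for every mounted file system; a brief sketch (mount points are illustrative):

```shell
# One line per mounted file system: a full /home leaves / untouched
df -h

# Restrict the output to the file system a particular directory lives on
df -h /var/log
```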
   There are currently two competing methods to partition hard disks for PCs.
The traditional method goes back to the 1980s when the first hard disks (with
awesome sizes like 5 or 10 MB) appeared. Recently a more modern method was
introduced; this does away with various limitations of the traditional approach,
but in some cases requires special tools.

B Hard disks are virtually always partitioned, even though at times only one
  partition will be created. With USB thumb drives, one sometimes eschews
  partitioning altogether.

                                           Table 14.2: Partition types for Linux (hexadecimal)

                                         Type      Description
                                           81      Linux data
                                           82      Linux swap space
                                           86      RAID super block (old style)
                                           8E      Linux LVM
                                           E8      LUKS (encrypted partition)
                                           EE      “Protective partition” for GPT-partitioned disk
                                           FD      RAID super block with autodetection
                                           FE      Linux LVM (old style)

                        14.3.2       The Traditional Method (MBR)
                           The traditional method stores partitioning information inside the “master boot
                           record” (MBR), the first sector (number 0) of a hard disk. (Traditionally, PC hard
                           disk sectors are 512 bytes long, but see below.) The space there—64 bytes starting
       primary partitions at offset 446—is sufficient for four primary partitions. If you want to create more
      extended partition than four partitions, you must use one of these primary partitions as an extended
        logical partitions partition. An extended partition may contain further logical partitions.

                        B The details about logical partitions are not stored inside the MBR, but at the
                          start of the partition (extended or logical) in question, i. e., they are scattered
                          around the hard disk.

                        Partition entries today usually store the starting sector number of the partition on
                        the disk as well as the length of the partition in question in sectors4 . Since these
                        values are 32-bit numbers, given the common 512-byte sectors this results in a
                        maximum partition size of 2 TiB.
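The 2 TiB figure follows directly from the on-disk format, as shell arithmetic confirms:

```shell
# 2^32 addressable sectors × 512 bytes per sector = 2^41 bytes
echo $((2**32 * 512))             # prints 2199023255552

# … which is exactly 2 TiB (1 TiB = 2^40 bytes)
echo $((2**32 * 512 / 2**40))     # prints 2
```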

                        B There are hard disks available today which are larger than 2 TiB. Such disks
                          cannot be made fully accessible using MBR partitioning. One common ruse
                          consists in using disks whose sectors are 4096 bytes long instead of 512. This
                          will let you have 16-TiB disks even with MBR, but not every operating sys-
                          tem supports such “4Kn” drives (Linux from kernel 2.6.31, Windows from
                          8.1 or Server 2012).

                        B 4-KiB sectors are useful on hard disks even without considering partitions.
                          The larger sectors are more efficient for storing larger files and allow better
                          error correction. Therefore the market offers “512e” disks which use 4-KiB
                          sectors internally but pretend to the outside that they really have 512-byte
                          sectors. This means that if a single 512-byte sector needs to be rewritten, the
                          adjoining 7 sectors must be read and also rewritten (a certain, but usually
                          bearable, loss of efficiency, since data is most often written in larger chunks).
                          When partitioning, you will have to pay attention that the 4-KiB blocks that
                          Linux uses internally for hard disk access coincide with the disk’s internal
                          4-KiB sectors—if that is not the case, then to write one 4-KiB Linux block two
                          4-KiB disk sectors might have to be read and rewritten, and that would not
                          be good. (Fortunately, the partitioning tools help you watch out for this.)

                            Besides the starting address and length of the (primary) partitions, the parti-
           partition type tion table contains a partition type which loosely describes the type of data man-
                         agement structure that might appear on the partition. A selection of Linux parti-
                         tion types appears in Table 14.2.
                            4 In former times, partitions used to be described in terms of the cylinder, head, and sector addresses

                        of the sectors in question, but this has been deprecated for a very long time.
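You can inspect those 64 bytes yourself with dd. The sketch below reads them from a freshly created scratch image (all zeroes, hence an empty table) so that no real disk is touched; on a real system you would substitute something like if=/dev/sda, which requires root (the command only reads):

```shell
# Create a 1 MiB scratch “disk” image
dd if=/dev/zero of=/tmp/disk.img bs=1M count=1 2>/dev/null

# Dump the MBR partition table: 64 bytes at offset 446,
# i.e. four 16-byte primary partition entries
dd if=/tmp/disk.img bs=1 skip=446 count=64 2>/dev/null | od -A d -t x1
```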

14.3.3     The Modern Method (GPT)
In the late 1990s, Intel developed a new partitioning method that should do away
with the limitations of the MBR approach, namely “GUID Partition Table” or GPT.

B GPT was developed hand-in-hand with UEFI and is part of the UEFI spec-
  ification today. You can, however, use a BIOS-based Linux system to access
  GPT-partitioned disks and vice-versa.

B GPT uses 64-bit sector addresses and thus allows a maximum disk size of
  8 ZiB—zebibytes, in case you haven’t run into that prefix. 1 ZiB is 2⁷⁰ bytes,
  or, roughly speaking, about one million million tebibytes. This should last
  even the NSA for a while. (Disk manufacturers, who prefer to talk powers of
  ten rather than powers of two, will naturally sell you an 8-ZiB disk as a 9.4-
  zettabyte disk.)
   With GPT, the first sector of the disk remains reserved as a “protective MBR”
which designates the whole disk as partitioned from a MBR point of view. This
avoids problems if a GPT-partitioned disk is connected to a computer that can’t
handle GPT.
   The second sector (address 1) contains the “GPT header” which stores man-
agement information for the whole disk. Partitioning information is usually con-
tained in the third and subsequent sectors.

B The GPT header points to the partitioning information, which could there-
  fore be stored anywhere on the disk. It is, however, reasonable to place
  it immediately after the GPT header. The UEFI specification stipulates a
  minimum of 16 KiB for partitioning information (regardless of the disk’s
  sector size).
B On a disk with 512-byte sectors, with a 16 KiB space for partitioning infor-
  mation the first otherwise usable sector on the disk is the one at address 34.
  You should, however, avoid placing the disk’s first partition at that address
  because that will get you in trouble with 512e disks. The next correctly-
  aligned sector is the one at address 40.

B For safety reasons, GPT replicates the partitioning information at the end of
  the disk.
   Traditionally, partition boundaries are placed at the start of a new “track” on
the disk. Tracks, of course, are a relic from the hard disk paleolithic, since con-
temporary disks are addressed linearly (in other words, the sectors are numbered
consecutively from the start of the disk to the end)—but the idea of describing a
disk by means of a combination of a number of read/write heads, a number of
“cylinders”, and a number of sectors per “track” (a track is the concentric circle a
single head describes on a given cylinder) has continued to be used for a remark-
ably long time. Since the maximum number of sectors per track is 63, this means
that the first partition would start at block 63, and that is, of course, disastrous for
512e disks.

B Since Windows Vista it is common to have the first partition start 1 MiB after
  the start of the disk (with 512-byte sectors, at sector 2048). This isn’t a bad
  idea for Linux, either, since the ample free space between the partition table
  and the first partition can be used to store the GRUB boot loader. (The space
  between the MBR and sector 63 was quite sufficient earlier, too.)
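Both alignment figures mentioned in this section can be checked with shell arithmetic:

```shell
# 1 MiB expressed in 512-byte sectors: the customary first-partition start
echo $((1024 * 1024 / 512))   # prints 2048

# Sector 40 of a 512e disk sits on a 4096-byte boundary (remainder 0) …
echo $((40 * 512 % 4096))     # prints 0

# … whereas the traditional starting sector 63 does not
echo $((63 * 512 % 4096))     # prints 3584
```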
   Partition table entries are at least 128 bytes long and, apart from 16-byte GUIDs
for the partition type and the partition itself and 8 bytes each for a starting and
ending block number, contain 8 bytes for “attributes” and 72 bytes for a partition
name. It is debatable whether 16-byte GUIDs are required for partition types, but
on the one hand the scheme is called “GUID partition table” after all, and on the
other hand this ensures that we won’t run out of partition types anytime soon. A
selection is displayed in Table 14.3.

                              GUID                     Description
               00000000-0000-0000-0000-000000000000    Unused entry
               C12A7328-F81F-11D2-BA4B-00A0C93EC93B    EFI system partition (ESP)
               21686148-6449-6E6F-744E-656564454649    BIOS boot partition
               0FC63DAF-8483-4772-8E79-3D69D8477DE4    Linux file system
               A19D880F-05FC-4D3B-A006-743F0F84911E    Linux RAID partition
               0657FD6D-A4AB-43C4-84E5-0933C84B4F4F    Linux swap space
               E6D6D379-F507-44C2-A23C-238F2A3DF928    Linux LVM
               933AC7E1-2EB4-4F13-B844-0E14E2AEF915    /home partition
               3B8F8425-20E0-4F3B-907F-1A25A76F98E8    /srv partition
               7FFEC5C9-2D00-49B7-8941-3EA10A5586B7    dm-crypt partition
               CA7D7CCB-63ED-4C53-861C-1742536059CC    LUKS partition

                      Table 14.3: Partition type GUIDs for GPT (excerpt)

      B Linux can use GPT-partitioned media. This needs the “EFI GUID Partition
        support” option enabled in the kernel, but with current distributions this
        is the case. Whether the installation procedure allows you to create GPT-
        partitioned disks is a different question, just like the question of whether
         the boot loader will be able to deal with them. But that is neither here nor there.

      14.4      Linux and Mass Storage
      If a mass storage device is connected to a Linux computer, the Linux kernel tries
      to locate any partitions. It then creates block-oriented device files in /dev for the
      device itself and its partitions. You can subsequently access the partitions’ device
      files and make the directory hierarchies there available as part of the computer’s
      file system.

      B A new mass storage device may have no partitions at all. In this case you
        can use suitable tools to create partitions. This will be explained later in this
        chapter. The next step after partitioning consists of generating file systems
        on the partitions. This is explained in detail in Chapter 15.

           The device names for mass storage are usually /dev/sda , /dev/sdb , …, in the order
       the devices are recognised. Partitions are numbered; the /dev/sda device therefore
       contains partitions like /dev/sda1 , /dev/sda2 , … A maximum of 15 partitions per de-
       vice is possible. If /dev/sda is partitioned according to the MBR scheme, /dev/sda1
       to /dev/sda4 correspond to the primary partitions (possibly including an extended
       partition), while any logical partitions are numbered starting with /dev/sda5 (re-
       gardless of whether there are four primary partitions on the disk or fewer).
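Which names the kernel has actually assigned can be seen without any partitioning tools; a sketch (the devices listed will differ per system):

```shell
# Whole devices as the kernel sees them (sda, sdb, …, plus virtual
# devices such as loop0)
ls /sys/block

# Devices and their partitions, with major/minor numbers and sizes
cat /proc/partitions
```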

      B The “s ” in /dev/sda derives from “SCSI”. Today, almost all mass storage de-
        vices in Linux are managed as SCSI devices.

      B For P-ATA disks there is another, more specific mechanism. This accesses
        the IDE controllers inside the computer directly—the two drives connected
        to the first controller are called /dev/hda and /dev/hdb , the ones connected to
        the second /dev/hdc and /dev/hdd . (These names are used independently of
        whether the drives actually exist or not—if you have a single hard disk and
        a CD-ROM drive on your system, you do well to connect the one to one
        controller and the other to the other so they will not be in each other’s way.
        Therefore you will have /dev/hda for the disk and /dev/hdc for the CD-ROM
        drive.) Partitions on P-ATA disks are, again, called /dev/hda1 , /dev/hda2 and
        so on. In this scheme, 63 (!) partitions are allowed.

B If you still use a computer with P-ATA disks, you will notice that in the
  vast majority of cases the SCSI infrastructure is used for those, too (note the
  /dev/sda style device names). This is useful for convenience and consistency.
  Some very few P-ATA controllers are not supported by the SCSI infrastruc-
  ture, and must use the old P-ATA specific infrastructure.

B The migration of an existing Linux system from “traditional” P-ATA drivers
  to the SCSI infrastructure should be well-considered and involve changing
  the configuration in /etc/fstab such that file systems are not mounted via
  their device files but via volume labels or UUIDs that are independent of
  the partitions’ device file names. (See Section 15.2.3.)
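The UUIDs in question can be displayed with blkid; a hypothetical sketch (the device name and the UUID are invented for illustration):

```
# blkid /dev/sda1
/dev/sda1: UUID="2f9b8c1a-4e0d-4b7e-9c3a-1a2b3c4d5e6f" TYPE="ext4"

The matching /etc/fstab line then refers to the UUID
instead of the device file:

UUID=2f9b8c1a-4e0d-4b7e-9c3a-1a2b3c4d5e6f  /home  ext4  defaults  0  2
```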

    The Linux kernel’s mass storage subsystem uses a three-tier architecture. At
the bottom there are drivers for the individual SCSI host adapters, SATA or USB
controllers and so on, then there is a generic “middle layer”, on top of which there
are drivers for the various devices (disks, tape drives, …) that you might encounter
on a SCSI bus. This includes a “generic” driver which is used to access devices
without a specialised driver such as scanners or CD-ROM recorders. (If you can
still find any of those anywhere.)

B Every SCSI host adapter supports one or more buses (“channels”). Up to
  7 (or 15) other devices can be connected to each bus, and every device can
  support several “logical unit numbers” (or LUNs), such as the individual CDs LUNs
  in a CD-ROM changer (rarely used). Every SCSI device in the system can
  thus be described by a quadruple (⟨host⟩, ⟨channel⟩, ⟨ID⟩, ⟨LUN⟩). Usually
  (⟨host⟩, ⟨channel⟩, ⟨ID⟩) is sufficient.
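B If the lsscsi utility is installed (on most distributions it is a separate
  package), it displays these quadruples together with the corresponding
  device files. The output here is merely an example:

       # lsscsi
       [0:0:0:0]    disk    ATA      ST9320423AS      0002  /dev/sda
       [2:0:0:0]    cd/dvd  HL-DT-ST DVDRAM GH22NS50  TN03  /dev/sr0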

B In former times you could find information on SCSI devices in the /proc/
  scsi/scsi file. This is no longer available on current systems unless the
  kernel was compiled using “Legacy /proc/scsi support”.

B Nowadays, information about “SCSI controllers” is available in /sys/class/
  scsi_host (one directory per controller). This is unfortunately not quite as
  accessible as it used to be. You can still snoop around:

       # cd /sys/class/scsi_host/host0/device
       # ls
       power scsi_host subsystem target0:0:0       uevent
       # cd target0:0:0; ls
       0:0:0:0 power subsystem uevent
       # ls 0:0:0:0/block

      A peek into /sys/bus/scsi/devices will also be instructive:

       # ls /sys/bus/scsi/devices
       0:0:0:0 10:0:0:0 host1     host2   host4   target0:0:0   target10:0:0
       1:0:0:0 host0      host10 host3    host5   target1:0:0

   Device names such as /dev/sda , /dev/sdb , etc. have the disadvantage of not being
very illuminating. In addition, they are assigned to devices in the order of their
appearance. So if today you connect first your MP3 player and then your digital
camera, they might be assigned the device files /dev/sdb and /dev/sdc ; if tomorrow
you start with the digital camera and continue with the MP3 player, the names
might be the other way round. This is of course a nuisance. Fortunately, udev
assigns some symbolic names on top of the traditional device names. These can
be found in /dev/block :

      # ls -l /dev/block/8:0
      lrwxrwxrwx 1 root root 6 Jul 12 14:02 /dev/block/8:0 -> ../sda
      # ls -l /dev/block/8:1
      lrwxrwxrwx 1 root root 6 Jul 12 14:02 /dev/block/8:1 -> ../sda1
      # ls -l /dev/disk/by-id/ata-ST9320423AS_5VH5TBTC
      lrwxrwxrwx 1 root root 6 Jul 12 14:02 /dev/disk/by-id/ 
        ata-ST9320423AS_5VH5TBTC -> ../../sda
      # ls -l /dev/disk/by-id/ata-ST9320423AS_5VH5TBTC-part1
      lrwxrwxrwx 1 root root 6 Jul 12 14:02 /dev/disk/by-id/ 
        ata-ST9320423AS_5VH5TBTC-part1 -> ../../sda1
      # ls -l /dev/disk/by-path/pci-0000:00:1d.0-usb- 
      lrwxrwxrwx 1 root root 6 Jul 12 14:02 /dev/disk/by-path/ 
        pci-0000:00:1d.0-usb-0:1.4:1.0-scsi-0:0:0:0 -> ../../sdb
      # ls -l /dev/disk/by-uuid/c59fbbac-9838-4f3c-830d-b47498d1cd77
      lrwxrwxrwx 1 root root 10 Jul 12 14:02 /dev/disk/by-uuid/ 
        c59fbbac-9838-4f3c-830d-b47498d1cd77 -> ../../sda1
      # ls -l /dev/disk/by-label/root
      lrwxrwxrwx 1 root root 10 Jul 12 14:02 /dev/disk/by-label/root 
       -> ../../sda1

      These device names are derived from data such as the (unique) serial number of
      the disk drive, its position on the PCIe bus, or the UUID or name of the file system,
      and are independent of the name of the actual device file.

      C 14.2 [!2] On your system there are two SATA hard disks. The first disk has
        two primary and two logical partitions. The second disk has one primary
        and three logical partitions. Which are the device names for these partitions
        on Linux?

      C 14.3 [!1] Examine the /dev directory on your system. Which storage me-
        dia are available and what are the corresponding device files called? (Also
        check /dev/block and /dev/disk .)

      C 14.4 [1] Plug a USB thumb drive into your computer. Check whether new
        device files have been added to /dev . If so, which ones?

      14.5      Partitioning Disks
      14.5.1    Fundamentals
      Before you partition the (possibly sole) disk on a Linux system, you should briefly
      consider what a suitable partitioning scheme might look like and how big the
      partitions ought to be. Changes after the fact are tedious and inconvenient at best
      and may at worst necessitate a complete re-install of the system (which would be
      exceedingly tedious and inconvenient). (See Section 14.7 for an alternative, much
      less painful approach.)
         Here are a few basic suggestions for partitioning:
         • Apart from the partition with the root directory / , you should provide at
            least one separate partition for the file system containing the /home directory.
           This lets you cleanly separate the operating system from your own data, and
           facilitates distribution upgrades or even switching from one Linux distribu-
           tion to a completely different one.

      B If you follow this approach, you should probably also use symbolic
        links to move the /usr/local and /opt directories to (for example) /home/
        usr- local and /home/opt . This way, these directories, which also contain
        data provided by you, are on “your” partition and can more easily be
        included in regular backups.

   • It is absolutely possible to fit a basic Linux system into a 2 GiB partition, but,
     considering today’s (low) costs per gigabyte for hard disk storage, there is
     little point in scrimping and saving. With something like 30 GiB, you’re sure
     to be on the safe side and will have enough room for log files, downloaded
     distribution packages during a larger update, and so on.
   • On server systems, it may make sense to provide separate partitions for /tmp ,
     /var , and possibly /srv . The general idea is that arbitrary users can put data
     into these directories (besides outright files, this could include unread or
     unsent e-mail, queued print jobs, and so on). If these directories are on
     separate partitions, users cannot fill up the system in general and thereby
     create problems.
   • You should provide swap space of approximately the same size as the com-
     puter’s RAM, up to a maximum of 8 GiB or thereabouts. Much more is
     pointless, but on workstations and mobile computers you may want to avail
     yourself of the possibility to “suspend” your computer instead of shutting
     it down, in order to speed up a restart and end up exactly where you were
     before—and the infrastructures enabling this like to use the swap space to
     save the RAM content.

      B There used to be a rule of thumb saying that the swap space should be
        about twice or three times the available RAM size. This rule of thumb
        comes from traditional Unix systems, where RAM works as “cache”
        for the swap space. Linux doesn’t work that way, instead RAM and
        swap space are added—on a computer with 4 GiB of RAM and 2 GiB
        of swap space, you get to run processes to the tune of 6 GiB or so. With
        8 GiB of RAM, providing 16 to 24 GiB of swap space would be absurd.

      B You should dimension the RAM of a computer (especially a server) to
        be big enough that practically no swap space is necessary during nor-
        mal operations; on an 8-GiB server, you won’t usually need 16 GiB of
        swap space, but a gigabyte or two to be on the safe side will certainly
        not hurt (especially considering today’s prices for disk storage). That
        way, if RAM gets tight, the computer will slow down before processes
        crash outright because they cannot get memory from the operating
        system.

   • If you have several (physical) hard disks, it can be useful to spread the sys-
      tem across the available disks in order to increase the access speed to indi-
     vidual components.

      B Traditionally, one would place the root file system (/ with the essential
        subdirectories /bin , /lib , /etc , and so on) on one disk and the /usr direc-
        tory with its subdirectories on a separate file system on another disk.
        However, the trend on Linux is decisively away from the (artificial)
        separation between /bin and /usr/bin or /lib and /usr/lib and towards
        a root file system which is created as a RAM disk on boot. Whether the
        traditional separation of / and /usr will gain us a lot in the future is up
        for debate.

      B What will certainly pay off is to spread swap space across several disks.
        Linux always uses the least-used disk for swapping.

                 Provided that there is enough empty space on the medium, new partitions
              can be created and included (even while the system is running). This procedure
              consists of the following steps:
                    1. Back up the current boot sectors and data on the hard disk in question
                    2. Partition the disk using fdisk (or a similar program)
                    3. Possibly create file systems on the new partitions (“formatting”)
                    4. Make the new file systems accessible using mount or /etc/fstab

              B Items 3 and 4 on this list will be considered in more detail in Chapter 15.
              Data and boot-sector contents can be saved using the dd program (among others).

              # dd if=/dev/sda of=/dev/st0

              will, for example, save all of the sda hard disk to magnetic tape.
                  You should be aware that the partitioning of a storage medium has nothing to
              do with the data stored on it. The partition table simply specifies where on the
              disk the Linux kernel can find the partitions and hence the file structures. Once the
              Linux kernel has located a partition, the content of the partition table is irrelevant
              until it searches again, possibly during the next system boot. This gives you—
              if you are courageous (or foolhardy)—far-reaching opportunities to tweak your
              system while it is running. For example, you can by all means enlarge partitions
              (if after the end of the partition there is unused space or a partition whose contents
              you can do without) or make them smaller (and possibly place another partition
              in the space thus freed up). As long as you exercise appropriate care this will be
              reasonably safe.

              B This should of course in no way discourage you from making appropriate
                backup copies before doing this kind of open-heart surgery.

              B In addition, the file systems on the disks must play along with such shenani-
                gans (many Linux file systems can be enlarged without loss of data, and
                some of them even be made smaller), or you must be prepared to move the
                data out of the way, generate a new file system, and then fetch the data back.

              14.5.2      Partitioning Disks Using fdisk
      fdisk   fdisk is an interactive program for manipulating disk partition tables. It can also
              create “foreign” partition types such as DOS partitions. Drives are addressed us-
              ing the corresponding device files (such as /dev/sda for the first disk).

              B fdisk confines itself to entering a partition into the partition table and setting
                the correct partition type. If you create a DOS or NTFS partition using fdisk ,
                this means just that the partition exists in the partition table, not that you can
                boot DOS or Windows NT now and write files to that partition. Before doing
                that, you must create a file system, i. e., write the appropriate management
                data structures to the disk. Using Linux-based tools you can do this for
                many but not all non-Linux file systems.

                   After invoking “fdisk ⟨device⟩”, the program comes back with a succinct prompt
               # fdisk /dev/sdb                                                 New (empty) disk
              Welcome to fdisk (util-linux 2.25.2).
              Changes will remain in memory only, until you decide to write them.
              Be careful before using the write command.

              Device does not contain a recognized partition table.

Created a new DOS disklabel with disk identifier 0x68d97339.

Command (m for help): _

The   m   command will display a list of the available commands.

B fdisk lets you partition hard disks according to the MBR or GPT schemes.
  It recognises an existing partition table and adjusts itself accordingly. On
  an empty (unpartitioned) disk fdisk will by default create an MBR partition
  table, but you can change this afterwards (we’ll show you how in a little
  while).

   You can create a new partition using the “n ” command:

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-2097151, default 2048): ↩
Last sector, +sectors or +size{K,M,G,T,P} (2048-2097151, 
  default 2097151): +500M

Created a new partition 1 of type 'Linux' and of size 500 MiB.

Command (m for help): _

The p command displays the current partition table. This could look like this:

Command (m for help): p
Disk /dev/sdb: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x68d97339

Device      Boot Start     End Sectors   Size Id Type
/dev/sdb1         2048 1026047 1024000   500M 83 Linux

B You can change the partition type using the t command. You must select the
  desired partition and can then enter the code (as a hexadecimal number).
  The L command displays the whole list.

You can delete a partition you no longer want by means of the d command. When
you’re done, you can write the partition table to disk and quit the program using
w . With q , you can quit the program without rewriting the partition table.

B After storing the partition table, fdisk tries to get the Linux kernel to reread
  the new partition table; this works well with new or unused disks, but fails
  if a partition on the disk is currently in use (as a mounted file system, active
  swap space, …). This means you can repartition the disk containing the /
  file system only by rebooting the system. One of the rare occasions when
  a Linux system needs to be rebooted …
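B If fdisk could not get the kernel to reread the table (and the disk con-
  taining the / file system is not involved), you can often trigger a rescan
  by hand, for instance using partprobe (from the parted package) or
  blockdev (from util-linux). The device name here is an example:

       # partprobe /dev/sdb
       # blockdev --rereadpt /dev/sdb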

  Like all Linux commands, fdisk supports a number of command-line options. Options
The most important of those include:

      -l   displays the partition table of the selected disk and then terminates the
              program.
      -u   (“units”) lets you select the units of measure used when displaying partition
              tables. The default is “sectors”; if you specify “-u=cylinders ”, cylinders will
              be used instead (but there is no good reason for that today).

      B If you use fdisk in MBR mode, it tries to observe the usual conventions and
        arrange the partitions such that they work properly on 4Kn and 512e hard
        disks. You should follow the program’s suggestions wherever possible, and
        not deviate from them unless there are very compelling reasons.

         If you partition a hard disk according to the GPT standard and there is no GPT-
      style partition table on the disk yet, you can generate one using the g command
       (Warning: A possibly existing MBR partition table will be overwritten in the
       process.)

      Command (m for help): g
      Created a new GPT disklabel (GUID: C2A556FD-7C39-474A-B383-963E09AA7269)

      (The GUID shown here applies to the disk as a whole.) Afterwards you can use the
       n command to create partitions in the usual way, even if the dialog looks slightly
       different:

      Command (m for help): n
      Partition number (1-128, default 1): 1
      First sector (2048-2097118, default 2048): ↩
       Last sector, +sectors or +size{K,M,G,T,P} (2048-2097118, default 
        2097118): +32M

      Created a new partition 1 of type 'Linux filesystem' and of size 32 MiB.

      The partition type selection is different, too, because it is about GUIDs rather than
      two-digit hexadecimal numbers:

      Command (m for help): t
      Selected partition 1
      Partition type (type L to list all types): L
         1   EFI System              C12A7328-F81F-11D2-BA4B-00A0C93EC93B
        14   Linux swap              0657FD6D-A4AB-43C4-84E5-0933C84B4F4F
        15   Linux filesystem        0FC63DAF-8483-4772-8E79-3D69D8477DE4
        16   Linux server data       3B8F8425-20E0-4F3B-907F-1A25A76F98E8
        17   Linux root (x86)        44479540-F297-41B2-9AF7-D131D5F0458A
        18   Linux root (x86-64)     4F68BCE3-E8CD-4DB1-96E7-FBCAF984B709
      Partition type (type L to list all types): _

      C 14.5 [!2] Create an empty 1 GiB file using the
              # dd if=/dev/zero of=$HOME/test.img bs=1M count=1024

              command. Use fdisk to “partition” the file according to the MBR scheme:
               Create two Linux partitions of 256 MiB and 512 MiB, respectively, and
               create a swap partition in the remaining space.

      C 14.6 [!2] Repeat the preceding exercise, but create a GPT partition table in-
        stead. Assume that the 512-MiB partition will contain a /home directory.

14.5.3       Partitioning Disks Using GNU parted
Another popular program for partitioning storage media is the GNU project’s
parted . Featurewise, it is roughly comparable with fdisk , but it has a few useful
extra features.

B Unlike fdisk , parted does not come pre-installed with most distributions, but
  can generally be added after the fact from the distribution’s servers.

   Similar to fdisk , parted must be started with the name of the medium to be
partitioned as a parameter.

# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) _

You can create a new partition using mkpart . This works either interactively …

(parted) mkpart
Partition name? []? Test
File system type? [ext2]? ext4
Start? 211MB
End? 316MB

… or directly when the command is invoked:

(parted) mkpart primary ext4 211MB 316MB

B You can abbreviate the commands down to an unambiguous prefix. Hence,
  mkp will work instead of mkpart (mk would collide with the mklabel command).

B The file system type will only be used to guess a partition type. You will
  still need to manually create a file system on the partition later on.

You can examine the partition table using the print command:

(parted) p
Disk /dev/sdb: 1074MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number   Start    End     Size    File system   Name      Flags
 1       1049kB   106MB   105MB
 2       106MB    211MB   105MB
 3       211MB    316MB   105MB   ext4          primary

(parted) _

(This also shows you where the magic numbers “211MB ” and “316MB ” came from,
earlier on.)

B print has a few interesting subcommands: “print devices ” lists all available
  block devices, “print free ” displays free (unallocated) space, and “print all ”
  outputs the partition tables of all block devices.

You can get rid of unwanted partitions using rm . Use name to give a name to a
partition (only for GPT). The quit command stops the program.

      A Important: While fdisk updates the partition table on the disk only once you
        leave the program, parted does it on an ongoing basis. This means that the
        addition or removal of a partition takes effect on the disk immediately.

          If you use parted on a new (unpartitioned) disk, you must first create a partition
       table:

      (parted) mklabel gpt

      creates a GPT-style partition table, and

      (parted) mklabel msdos

      one according to the MBR standard. There is no default value; without a partition
      table, parted will refuse to execute the mkpart command.
         If you inadvertently delete a partition that you would rather have kept, parted
      can help you find it again. You will just need to tell it approximately where on the
      disk the partition used to be:

      (parted) rm 3                                                                 Oops.
      (parted) rescue 200MB 350MB
      Information: A ext4 primary partition was found at 211MB -> 316MB.
      Do you want to add it to the partition table?

      Yes/No/Cancel? yes

      For this to work, there must be a file system on the partition, because parted looks
      for data blocks that appear to be the start of a file system.
         In addition to the interactive mode, parted allows you to pass commands im-
      mediately on the Linux command line, like this:

      # parted /dev/sdb mkpart primary ext4 316MB 421MB
      Information: You may need to update /etc/fstab.

      C 14.7 [!2] Repeat Exercise 14.5 using parted rather than fdisk , for the MBR as
        well as the GPT scheme.

      C 14.8 [2] (If you have worked through Chapter 15 already.) Generate Linux
        file systems on the partitions on the “disk” from the earlier exercises. Re-
        move these partitions. Can you restore them using parted ’s rescue command?

      14.5.4    gdisk
      The gdisk program specialises in GPT-partitioned disks and can do a few useful
      things the previously explained programs can’t. You may however have to install
      it specially.
          The elementary functions of gdisk correspond to those of fdisk and parted , and
      we will not reiterate those (read the documentation and do a few experiments). A
      few features of gdisk , however, are of independent interest:
         • You can use gdisk to convert an MBR-partitioned medium to a GPT-partitioned
           medium. This presupposes that there is enough space at the start and the
           end of the medium for GPT partition tables. With media where, according
           to current conventions, the first partition starts at sector 2048, the former is
           not a problem, but the latter may be. You will possibly have to ensure that
           the last 33 sectors on the medium are not assigned to a partition.

       For the conversion it is usually sufficient to start gdisk with the device file
       name of the medium in question as a parameter. You will either receive
       a warning that no GPT partition table was found and that gdisk used the
       MBR partition table instead (at this point you can quit the program using
       w and you’re done), or that an intact MBR, but a damaged GPT partition
       table was found (then you tell gdisk to follow the MBR, and can then quit
       the program using w and you’re done).
   • The other direction is also possible. To do this, you must use the r command
     in gdisk to change to the menu for “recovery/transformation commands”,
     and select the g command there (“convert GPT into MBR and exit”). After-
     wards you can quit the program using w and convert the storage medium
     this way.

C 14.9 [!2] Repeat Exercise 14.5 using gdisk rather than fdisk and generate a
  GPT partition table.

C 14.10 [2] Create (e. g., using fdisk ) an MBR-partitioned “disk” and use gdisk
  to convert it to GPT partitioning. Make sure that a correct “protective MBR”
  was created.

14.5.5    More Partitioning Tools
Most distributions come with alternative ways of partitioning disks. Most of them distributions
offer the cfdisk program as an alternative to fdisk . This is screen-oriented and thus
somewhat more convenient to use. Even easier to use are graphical programs,
such as SUSE’s YaST or “DiskDruid” on Red Hat distributions.

B Also worth mentioning is sfdisk , a completely non-interactive partitioning
  program. sfdisk takes partitioning information from an input file and is
  therefore useful for unattended partitioning, e. g., as part of an automated
  installation procedure. Besides, you can use sfdisk to create a backup copy
  of your partitioning information and either print it as a table or else store it
  on a disk or CD as a file. If the worst happens, this copy can then be restored
  using sfdisk .
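For example, you can dump a disk’s partition table into a text file using the
-d option and feed that file back to sfdisk later to restore it (the device and
file names are examples):

       # sfdisk -d /dev/sda > sda-partitions.txt                Save partition table
       # sfdisk /dev/sda < sda-partitions.txt                   Restore it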

B sfdisk only works for MBR-partitioned disks. There is a corresponding pro-
  gram called sgdisk which does an equivalent job for GPT-partitioned disks.
  However, sfdisk and sgdisk are not compatible—their option structures are
  completely different.

14.6     Loop Devices and kpartx
Linux has the useful property of being able to treat files like storage media. This
means that if you have a file you can partition it, generate file systems, and gener-
ally treat the “partitions” on that file as if they were partitions on a “real” hard
disk. In real life, this can be useful if you want to access CD-ROMs or DVDs
without having a suitable drive in your computer (it is also faster). For learn-
ing purposes, it means that you can perform various experiments without having
to obtain extra hard disks or mess with your computer.
   A CD-ROM image can be created straightforwardly from an existing CD-ROM CD-ROM image
using dd :

# dd if=/dev/cdrom of=cdrom.iso bs=1M

You can subsequently make the image directly accessible:

      # mount -o loop,ro cdrom.iso /mnt

      In this example, the content of the CD-ROM will appear within the /mnt directory.
          You can also use the dd command to create an empty file:

      # dd if=/dev/zero of=disk.img bs=1M count=1024

      That file can then be “partitioned” using one of the common partitioning tools:

      # fdisk disk.img

      Before you can do anything with the result, you will have to ensure that there are
      device files available for the partitions (unlike with “real” storage media, this is
      not automatic for simulated storage media in files). To do so, you will first need a
      device file for the file as a whole. This—a so-called “loop device”—can be created
      using the losetup command:

      # losetup -f disk.img
      # losetup -a
      /dev/loop0: [2050]:93 (/tmp/disk.img)

      losetup  uses device file names of the form “/dev/loop 𝑛”. The “-f ” option makes the
      program search for the first free name. “losetup -a ” outputs a list of the currently
      active loop devices.
          Once you have assigned your disk image to a loop device, you can create device
      files for its partitions. This is done using the kpartx command.

      B You may have to install kpartx first. On Debian and Ubuntu, the package is
        called kpartx .
      The command to create device files for the partitions on /dev/loop0 is

      # kpartx -av /dev/loop0
      add map loop0p1 (254:0): 0 20480 linear /dev/loop0 2048
      add map loop0p2 (254:1): 0 102400 linear /dev/loop0 22528

       (without the “-v ” option, kpartx keeps quiet). The device files then appear in
      the /dev/mapper directory:

      # ls /dev/mapper
      control loop0p1      loop0p2

      Now nothing prevents you from, e. g., creating file systems on these “partitions”
      and mounting them into your computer’s directory structure. See Chapter 15.
         When you no longer need the device files for the partitions, you can remove
      them again using the

      # kpartx -dv /dev/loop0
      del devmap : loop0p2
      del devmap : loop0p1

      command. An unused loop device can be released using

      # losetup -d /dev/loop0

      B The
                # losetup -D

            command releases all loop devices.

C 14.11 [!2] Use the test “disk” from Exercise 14.5. Assign it a loop device
  using losetup and make its partitions accessible with kpartx . Convince your-
  self that the correct device files show up in /dev/mapper . Then release the
  partitions and the loop device again.

14.7      The Logical Volume Manager (LVM)
Partitioning a disk and creating file systems on it seems like a simple and obvious
thing to do. On the other hand, you are committing yourself: The partition scheme
of a hard disk can only be changed with great difficulty and, if the disk in question
contains the root file system, may even involve the use of a “rescue system”. In
addition, there is no compelling reason why you should be constrained in your
system architecture by trivialities such as the limited capacity of hard disks and
the fact that file systems can be at most as large as the partitions they are sitting on.
    One method to transcend these limitations is the use of the “Logical Volume
Manager” (LVM). LVM provides an abstraction layer between disks (and disk par-
titions) and file systems—instead of creating file systems directly on partitions,
you can contribute partitions (or whole disks) to a “pool” of disk space and then
allocate space from that pool to create file systems. Single file systems can freely
use space which is located on more than one physical disk.
    In LVM terminology, disks and disk partitions are considered “physical vol-
umes” (PV) which are made part of a “volume group” (VG). There may be more
than one VG on the same computer. The space within a VG is available for cre-
ating “logical volumes” (LV), which can then hold arbitrary file systems or swap
space.

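A minimal LVM setup might look like this (the partition, volume group, and
volume names are arbitrary examples):

       # pvcreate /dev/sdb1 /dev/sdc1              Prepare the physical volumes
       # vgcreate vg0 /dev/sdb1 /dev/sdc1          Create a volume group from them
       # lvcreate -L 20G -n home vg0               Create a 20-GiB logical volume
       # mkfs.ext4 /dev/vg0/home                   Create a file system on it
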
B When creating LVs, you can cause their storage space to be spread across
  several physical disks (“striping”) or multiple copies of their data to
  be stored in several places within the VG at the same time (“mirroring”).
  The former is supposed to decrease retrieval time (even if there is a danger
  of losing data whenever any of the disks in the volume group fail), the latter
  is supposed to reduce the risk of losing data (even if you are paying for this
  with increased access times). In real life, you will probably prefer to rely on
  (hardware or software) RAID instead of using LVM’s features.
   One of the nicer properties of LVM is that LVs can be changed in size while the
system is running. If a file system is running out of space, you can first enlarge
the underlying LV (as long as your VG has unused space available—otherwise you
would first need to install another disk and add it to the VG). Afterwards you can
enlarge the file system on the LV in question.

B This presumes that the file system in question enables size changes after the
  fact. With the popular file systems, e. g., ext3 or ext4 , this is the case. They
  even allow their size to be increased while the file system is mounted. (You
  will need to unmount the file system to reduce the size.)
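Enlarging an ext4 file system on an LV, for instance, takes two steps (or just
one, if you use lvextend ’s -r option, which invokes the appropriate file sys-
tem resizer itself). The names are, again, examples:

       # lvextend -L +10G /dev/vg0/home            Enlarge the LV by 10 GiB
       # resize2fs /dev/vg0/home                   Grow the file system to match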

B If you use a file system that does not let itself be enlarged, you will have
  to bite the bullet, copy the data elsewhere, recreate the file system with the
  new size, and copy the data back.
   If a disk within your VG should start acting up, you can migrate the LVs from
that disk to another within the VG (if you still have or can make enough space).
After that, you can withdraw the flaky disk from the VG, install a new disk, add
that to the VG and migrate the LVs back.
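Migrating the data off a flaky disk might look like this, assuming /dev/sdb1
is the physical volume in question within the volume group vg0 :

       # pvmove /dev/sdb1                          Move its data to other PVs in the VG
       # vgreduce vg0 /dev/sdb1                    Remove the PV from the VG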

B You can do that, too, while the system is running and with your users none
  the wiser—at least as long as you have invested enough loose change into
  making your hard disks “hot-swappable”.

   Also nice are “snapshots”, which you can use for backup copies without having
to take your system offline for hours (which would otherwise be necessary
                    to ensure that nothing changes while the backup is being performed). You can
                    “freeze” the current state of an LV on another (new) LV—which takes a couple of
                    seconds at most—and then make a copy of that new LV in your own time while
                    normal operations continue on the old LV.

                    B The “snapshot” LV only needs to be big enough to hold the amount of
                      changes to the original LV you expect while the backup is being made (plus
                      a sensible safety margin), since only the changes are being stored inside the
                      new LV. Hence, nobody prevents you from making a snapshot of your 10 TB
                      file system even if you don’t have another 10 TB of free disk space: If you
                      only expect 10 GB of data to be changed while you’re writing the copy to
                      tape, a snapshot LV of 20–30 GB should be fairly safe.
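
   A snapshot-based backup might be sketched as follows (the names and sizes are
examples only):

# lvcreate -s -L 20G -n data-snap /dev/vg0/data   Freeze the current state
# mount -o ro /dev/vg0/data-snap /mnt             Mount the frozen state
# tar -czf /backup/data.tar.gz -C /mnt .          Copy it at leisure
# umount /mnt
# lvremove /dev/vg0/data-snap                     Discard the snapshot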

                    B As a matter of fact it is now possible to create writable snapshots. This is
                      useful, for example, if you are working with “virtual machines” that share
  a basic OS installation but differ in details. Writable snapshots make it
  possible to keep the basic installation in one LV shared by all virtual
  machines and then store each virtual machine’s specific configuration in a
  writable snapshot of its own. (You shouldn’t overstretch this approach,
  though; if you change the LV containing the basic installation, the virtual
  machines won’t see those changes.)

                       On Linux, LVM is a special application of the “device mapper”, a system com-
                    ponent enabling the flexible use of block devices. The device mapper also pro-
                    vides other useful features such as encrypted disks or space-saving storage provi-
                    sioning for “virtual servers”. Unfortunately we do not have room in this training
                    manual to do LVM and the device mapper justice, and refer you to the manual,
                    Linux Storage and File Systems (STOR).

                    Commands in this Chapter
                    cfdisk    Character-screen based disk partitioner                   cfdisk (8)   217
                    gdisk     Partitioning tool for GPT disks                            gdisk (8)   216
                    kpartx    Creates block device maps from partition tables           kpartx (8)   218
                    losetup   Creates and maintains loop devices                       losetup (8)   218
                    sfdisk    Non-interactive hard disk partitioner                     sfdisk (8)   217
                    sgdisk    Non-interactive hard disk partitioning tool for GPT disks
                                                                                        sgdisk (8)   217
Summary

   • Linux supports all notable types of mass storage device—magnetic hard
     disks (SATA, P-ATA, SCSI, SAS, Fibre Channel, USB, …), SSDs, USB thumb
     drives, SD cards, …
   • Storage media such as hard disks may be partitioned. Partitions allow the
     independent management of parts of a hard disk, e. g., with different file
     systems or operating systems.
   • Linux can deal with storage media partitioned according to the MBR and
     GPT schemes.
   • Linux manages most storage media like SCSI devices. There is an older
     infrastructure for P-ATA disks which is only rarely used.
   • Linux offers various tools for partitioning such as fdisk , parted , gdisk , cfdisk ,
     or sfdisk . Various distributions also provide their own tools.
   • Loop devices make block-oriented devices from files. Partitions on loop de-
     vices can be made accessible using kpartx .
   • The Logical Volume Manager (LVM) decouples physical storage space on
     media from logical storage structures. It enables the flexible management
     of mass storage, e. g., to create file systems which are larger than a single
     physical storage medium. Snapshots help create backup copies and provi-
     sion storage space for virtual machines.


15 File Systems: Care and Feeding

15.1 Creating a Linux File System                                            224
     15.1.1 Overview                                                         224
     15.1.2 The ext File Systems                                             226
     15.1.3 ReiserFS                                                         234
     15.1.4 XFS                                                              235
     15.1.5 Btrfs                                                            237
     15.1.6 Even More File Systems                                           238
     15.1.7 Swap space                                                       239
15.2 Mounting File Systems                                                   240
     15.2.1 Basics                                                           240
     15.2.2 The mount Command                                                240
     15.2.3 Labels and UUIDs                                                 242
15.3 The dd Command                                                          244

   •   Knowing the most important file systems for Linux and their properties
   •   Being able to generate file systems on partitions and storage media
   •   Knowing about file system maintenance tools
   •   Being able to manage swap space
   •   Being able to mount local file systems
   •   Knowing how to set up disk quotas for users and groups

   • Competent use of the commands to handle files and directories
   • Knowledge of mass storage on Linux and partitioning (Chapter 14)
   • Existing knowledge about the structure of PC disk storage and file systems
     is helpful


                   15.1      Creating a Linux File System
                   15.1.1     Overview
                   After having created a new partition, you must “format” that partition, i. e., write
                   the data structures necessary to manage files and directories onto it. What these
                   data structures look like in detail depends on the “file system” in question.

                    B Unfortunately, the term “file system” is overloaded on Linux. It can refer to:
                            1. A method to arrange data and management information on a medium
                               (“the ext3 file system”)
                            2. Part of the file hierarchy of a Linux system which sits on a particular
                               medium or partition (“the root file system”, “the /var file system”)
                            3. All of the file hierarchy, across media boundaries

                   The file systems (meaning 1 above) common on Linux may differ considerably.
                   On the one hand, there are file systems that were developed specifically for Linux,
                   such as the “ext filesystems” or the Reiser file system, and on the other hand there
                   are file systems that originally belonged to other operating systems but that Linux
                   supports (to a greater or lesser degree) for compatibility. This includes the file
                   systems of DOS, Windows, OS X, and various Unix variants as well as “network
                    file systems” such as NFS or SMB which allow access to file servers via the local
                    area network.
                       Many file systems “native” to Linux are part of the tradition of file systems com-
                   mon on Unix, like the Berkeley Fast Filesystem (FFS), and their data structures are
                   based on those. However, development did not stop there; more modern influ-
                   ences are constantly being integrated in order to keep Linux current with the state
                   of the art.

                   B Btrfs (pronounced “butter eff ess”) by Chris Mason (Fusion-IO) is widely
                     considered the answer to the famous ZFS of Solaris. (The source code for
                      ZFS is available but cannot be integrated into the Linux kernel directly, due
                      to licensing considerations.) Its focus is on “fault tolerance, repairs and simple
                     administration”. By now it seems to be mostly usable, at least some distri-
                     butions rely on it.

                       With Linux file systems it is common to have a superblock at the beginning
                   of the file system. This contains information pertaining to the file system as a
                   whole—such as when it was last mounted or unmounted, whether it was un-
                   mounted “cleanly” or because of a system crash, and so on. The superblock nor-
                   mally also points to other parts of the management data structures, like where the
                   inodes or free/occupied block lists are to be found and which parts of the medium
                   are available for data.

                   B It is usual to keep spare copies of the superblock elsewhere on the file sys-
                     tem, in case something happens to the original. This is what the ext file
                     systems do.

                   B On disk, there is usually a “boot sector” in front of the superblock, into
                     which you can theoretically put a boot loader (Chapter 16). This makes it
                     possible to, e. g., install Linux on a computer alongside Windows and use
                     the Windows boot manager to start the system.

                       On Linux, file systems (meaning 2 above) are created using the mkfs command.
                   mkfs is independent of the actual file system (meaning 1) desired; it invokes the real
                   routine specific to the file system (meaning 1) in question, mkfs. ⟨file system name⟩.
                   You can select a file system type by means of the -t option—with “mkfs -t ext2 ”,
                   for example, the mkfs.ext2 program would be started (another name for mke2fs ).

    When the computer has been switched off inadvertently or crashed, you have
to consider that the file system might be in an inconsistent state (even though this
happens very rarely in real life, even after crashes). File system errors can occur
because write operations are cached inside the computer’s RAM and may be lost
if the system is switched off before they could be written to disk. Other errors can
come up when the system gives up the ghost in the middle of an unbuffered write
operation.
    Besides data loss, problems can include structural errors within the file system
management data. These can be located and repaired using suitable programs.
Possible defects include:
   • Erroneous directory entries
   • Erroneous inode entries
   • Files that do not occur in any directory
   • Data blocks belonging to several different files

Most but not all such problems can be repaired automatically without loss of data;
generally, the file system can be brought back to a consistent state.

B On boot, the system will find out whether it has not been shut down cor-
  rectly by checking a file system’s state. During a regular shutdown, the file
  systems are unmounted and the “valid flag” in every file system’s superblock
  will be set. On boot, this superblock information may be used to automatically
  check these possibly-erroneous file systems and repair them if
  necessary—before the system tries to mount a file system whose valid flag
  is not set, it tries to do a file system check.

B With all current Linux distributions, the system initialisation scripts exe-
  cuted by init after booting contain all necessary commands to perform a
  file system check.

    If you want to check the consistency of a file system, you do not need to wait
for the next reboot; you can launch a file system check at any time. Should a file
system contain errors, however, it can only be repaired if it is not currently mounted. This
restriction is necessary so that the kernel and the repair program do not “collide”.
This is another argument in favour of the automatic file system checks during
booting.
    Actual consistency checks are performed using the fsck command. Like mkfs ,
depending on the type of the file system to be checked this command uses a spe-
cific sub-command called fsck. ⟨type⟩—e.g., fsck.ext2 for ext2 . fsck identifies the
required sub-command by examining the file system in question. Using the

# fsck /dev/sdb1

command, for example, you can check the file system on /dev/sdb1 .

B The simple command
       # fsck

      checks all file systems listed in /etc/fstab with a non-zero value in the sixth
      (last) column in sequence. (If several different values exist, the file systems
      are checked in ascending order.) /etc/fstab is explained in more detail in
      Section 15.2.2.

B fsck supports a -t option which at first sight resembles mkfs but has a dif-
  ferent meaning: A command like

                        # fsck -t ext3

                        checks all file systems in /etc/fstab that are marked as type ext3 there.

                    The most important options of fsck include:
                -A   (All) causes fsck to check all file systems mentioned in /etc/fstab .

                       B This obeys the checking order in the sixth column of the file. If several
                         file systems share the same value in that column, they are checked in
                         parallel if they are located on different physical disks.

                -R   With -A , the root file system is not checked (which is useful if it is already
                       mounted for writing).
                -V   Outputs verbose messages about the check run.
                -N   Displays what fsck would do without actually doing it.
                -s   Inhibits parallel checking of multiple file systems. The “fsck ” command with-
                        out any parameters is equivalent to “fsck -A -s ”.
                    Besides its own options, you can pass additional options to fsck which it will
                forward to the specific checking program. These must occur after the name of the
                file system(s) to be checked and possibly a “-- ” separator. The -a , -f , -p and -v
                options are supported by most such programs. Details may be found within the
                documentation for the respective programs. The

                # fsck /dev/sdb1 /dev/sdb2 -pv

                 command, for example, would check the file systems on the /dev/sdb1 and /dev/sdb2 partitions
                automatically, fix any errors without manual intervention and report verbosely on
                its progress.

                B At program termination, fsck passes information about the file system state
                  to the shell:

                        0    No error was found in the file system
                        1    Errors were found and corrected
                        2    Severe errors were found and corrected. The system should be rebooted
                        4    Errors were found but not corrected
                        8    An error occurred while the program was executed
                        16    Usage error (e. g., bad command line)
                        128   Error in a shared library function
                        It is conceivable to analyse these return values in an init script and deter-
                        mine how to continue with the system boot. If several file systems are being
                        checked (using the -A option), the return value of fsck is the logical OR of
                        the return values of the individual checking programs.
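
                     In a shell script, such an analysis might look roughly like this (the policy of
                 stopping the boot when errors remain uncorrected is merely an example):

                        fsck -A -s
                        status=$?
                        if [ $status -ge 4 ]; then
                            echo "File system errors remain; manual repair required" >&2
                            exit 1                             Abort the boot process
                        fi

                 The test uses -ge 4 because, per the table above, return values of 4 and up
                 indicate uncorrected errors or worse.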

                15.1.2         The ext File Systems
                History and Properties The original “extended file system” for Linux was imple-
                mented in April, 1992, by Rémy Card. It was the first file system designed specif-
                ically for Linux (although it did take a lot of inspiration from general Unix file
                systems) and did away with various limitations of the previously popular Minix
                file system.

B The Minix file system had various nasty limits such as a maximum file sys-
  tem size of 64 MiB and file names of at most 14 characters. (To be fair, Minix
  was introduced when the IBM PC XT was considered a hot computer and
  64 MiB, for PCs, amounted to an unimaginably vast amount of disk storage.
  By 1990, that assumption had begun to crumble.) ext allowed file systems
  of up to 2 GiB—quite useful at the time, but naturally somewhat ridiculous
  today.

B The arrival of the ext file system marks another important improvement
  to the Linux kernel, namely the introduction of the “virtual file system
  switch”, or VFS. The VFS abstracts file system operations such as the open-
  ing and closing of files or the reading and writing of data, and as such
  enables the coexistence of different file system implementations in Linux.

A The original ext file system is no longer used today. From here on, when we
  talk about “the ext file systems”, we refer to ext2 and everything newer than
  that.
   The subsequent version, ext2 (the “second extended file system”), which was
begun by Rémy Card in January, 1993, amounted to a considerable rework of the
original “extended file system”. The development of ext2 made use of many ideas
from the BSD “Berkeley Fast Filesystem”. ext2 is still being maintained and makes
eminent sense for certain applications.

B Compared to ext , ext2 pushes various size limits—with the 4 KiB block size
  typical for Intel-based Linux systems, file systems can be 16 TiB and single
  files 2 TiB in size. Another important improvement in ext2 was the intro-
  duction of separate timestamps for the last access, last content modification
  and last inode modification, which achieved compatibility to “traditional”
  Unix in this respect.

B From the beginning, ext2 was geared towards continued development and
  improvement: Most data structures contained surplus space which was
  later used for important extensions. These include ACLs and “extended
  attributes”.
   Since the end of the 1990s, Stephen Tweedie worked on a successor to ext2 ,
which was made part of the Linux kernel at the end of 2001 under the name of
ext3 . (That was Linux 2.4.15.) The most important differences between ext2 and
ext3 include:

   • ext3 supports journaling.
   • ext3 allows enlarging file systems while they are mounted.
   • ext3 supports more efficient internal data structures for directories with
     many entries.
Even so it is largely compatible with ext2 . It is usually possible to access ext3 file
systems as ext2 file systems (which implies that the new features cannot be used)
and vice-versa.
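
   For example, an ext3 file system can usually be mounted as an ext2 file system
(the journal is then simply not used); the device and mount point here are
examples:

# mount -t ext2 /dev/sdb1 /mnt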

B “Journaling” solves a problem that can be very tedious with the increasing
  size of file systems, namely that an unforeseen system crash makes it neces-
  sary to do a complete consistency check of the file system. The Linux kernel
  does not perform write operations immediately, but buffers the data in RAM
  and writes them to disk when that is convenient (e. g., when the read/write
  head of the disk drive is in the appropriate place). In addition, many write
  operations involve writing data to various places on the disk, e. g., one or
  more data blocks, the inode table, and the list of available blocks on the
  disk. If the power fails in the right (or wrong) moment, such an operation
  can remain only half-done—the file system is “inconsistent” in the sense
  that a data block can be assigned to a file in the inode, but not marked used
  in the free-block list. This can lead to serious problems later on.

      B A journaling file system like ext3 considers every write access to the disk
        as a “transaction” which must be performed completely or not at all. By
        definition, the file system is consistent before and after a transaction is per-
        formed. Every transaction is first written into a special area of the file sys-
        tem called the journal. If it has been entirely written, it is marked “complete”
        and, as such, it is official. The Linux kernel can do the actual write opera-
        tions later.—If the system crashes, a journaling file system does not need to
        undergo a complete file system check, which with today’s file system sizes
        could take hours or even days. Instead, the journal is considered and any
        transactions marked “complete” are transferred to the actual file system.
        Transactions not marked “complete” are thrown out.

      A Most journaling file systems use the journal to log changes to the file sys-
        tem’s “metadata”, i. e., directories, inodes, etc. For efficiency, the actual file
        data are normally not written to the journal. This means that after a crash
        and reboot you will have a consistent file system without having to spend
        hours or days on a complete consistency check. However, your file contents
        may have been scrambled—for example, a file might contain obsolete data
        blocks because the updated ones couldn’t be written before the crash. This
        problem can be mitigated by writing the data blocks to disk first and then
        the metadata to the journal, but even that is not without risk. ext3 gives
        you the choice between three operating modes—writing everything to the
        journal (mount option data=journal ), writing data blocks directly and then
        metadata to the journal (data=ordered ), or no restrictions (data=writeback ). The
        default is data=ordered .
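
         The journaling mode is selected when the file system is mounted, for example
         (the device and mount point are examples):

         # mount -t ext3 -o data=journal /dev/sdb1 /mnt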

      B Writing metadata or even file data twice—once to the journal, and then later
        to the actual file system—involves a certain loss of performance compared
        to file systems like ext2 , which ignore the problem. One approach to fix
        this consists of log-structured file systems, in which the journal makes up the
        actual file system. Within the Linux community, this approach has so far not
        prevailed. Another approach is exemplified by “copy-on-write filesystems”
        like Btrfs.

      A Using a journaling file system like ext3 does not absolve you from having to
        perform complete consistency checks every so often. Errors in a file system’s
        data structures might arise through disk hardware errors, cabling problems,
        or the dreaded cosmic rays (don’t laugh) and might otherwise remain un-
        noticed until they wreak havoc. For this reason, the ext file systems force a
        file system check every so often when the system is booted (usually when
        you can least afford it). You will see how to tweak this later in this chapter.

      A With server systems that are rarely rebooted and that you cannot simply
        take offline for a few hours or days for a prophylactic file system check, you
        may have a big problem. We shall also come back to this.
         The apex of ext file system evolution is currently represented by ext4 , which has
      been developed since 2006 under the guidance of Theodore Ts’o. This has been
       considered stable since 2008 (kernel version 2.6.28). As with ext3 and ext2 , backward
       compatibility was an important goal: ext2 and ext3 file systems can be mounted
      as ext4 file systems and will profit from some internal improvements in ext4 . On
      the other hand, the ext4 code introduces some changes that result in file systems
      no longer being accessible as ext2 and ext3 . Here are the most important improve-
      ments in ext4 as compared to ext3 :
         • Instead of maintaining the data blocks of individual files as lists of block
           numbers, ext4 uses “extents”, i. e., groups of physically contiguous blocks
           on disk. This leads to a considerable simplification of space management
            and to greater efficiency, but makes file systems using extents incompatible
            with ext3 . It also avoids fragmentation, i. e., the wild scattering of blocks
            belonging to the same file across the whole file system.

     • When data is written, actual blocks on the disk are assigned as late as pos-
       sible. This also helps prevent fragmentation.
     • User programs can advise the operating system how large a file is going
        to be. Again, this can be used to assign contiguous file space and mitigate
        fragmentation.
     • Ext4 uses checksums to safeguard the journal. This increases reliability and
        avoids some hairy problems when the journal is replayed after a system
        crash.
     • Various optimisations of internal data structures increase the speed of con-
       sistency checks.
     • Timestamps now carry nanosecond resolution and roll over in 2242 (rather
       than 2038).
     • Some size limits have been increased—directories may now contain 64,000
       or more subdirectories (previously 32,000), files can be as large as 16 TiB,
       and file systems as large as 1 EiB.
In spite of these useful improvements, according to Ted Ts’o ext4 is not to be con-
sidered an innovation, but rather a stopgap until even better file systems like Btrfs
become available.
   All ext file systems include powerful tools for consistency checks and file sys-
tem repairs. This is very important for practical use.

Creating ext file systems To create an ext2 , ext3 , or ext4 file system, it is easiest to use
the mkfs command with a suitable -t option:

# mkfs -t ext2 /dev/sdb1                                                ext2   file system
# mkfs -t ext3 /dev/sdb1                                                ext3   file system
# mkfs -t ext4 /dev/sdb1                                                ext4   file system

After the -t option and its parameter, you can specify further parameters which
will be passed to the program performing the actual operation—in the case of the
ext file systems, the mke2fs program. (In spite of the e2 in its name, it can also create
ext3 and ext4 file systems.)

B The following commands would also work:
         # mkfs.ext2 /dev/sdb1                                          ext2   file system
         # mkfs.ext3 /dev/sdb1                                          ext3   file system
         # mkfs.ext4 /dev/sdb1                                          ext4   file system

        These are exactly the commands that mkfs would invoke. All three com-
        mands are really symbolic links referring to mke2fs ; mke2fs looks at the name
        used to call it and behaves accordingly.

B You can even call the mke2fs command directly:

         # mke2fs /dev/sdb1

        (Passing no options will get you an ext2 file system.)

   The following options for mke2fs are useful (and possibly important for the
exam):
-b   ⟨size⟩ determines the block size. Typical values are 1024, 2048, or 4096. On
         partitions of interesting size, the default is 4096.

                           -c   checks the partition for damaged blocks and marks them as unusable.

                                   B Current hard disks can notice “bad blocks” and replace them by blocks
                                     from a “secret reserve” without the operating system even noticing (at
                                      least as long as you don’t ask the disk directly). As long as this works,
                                      “mke2fs -c ” does not provide an advantage. The command will only
                                     find bad blocks when the secret reserve is exhausted, and at that point
                                     you would do well to replace the disk, anyway. (A completely new
                                     hard disk would at this point be a warranty case. Old chestnuts are
                                     only fit for the garbage.)

                           -i   ⟨count⟩ determines the “inode density”; an inode is created for every ⟨count⟩
                                    bytes of space on the disk. The value must be a multiple of the block size
                                     (option -b ); there is no point in selecting a ⟨count⟩ that is less than the block
                                    size. The minimum value is 1024, the default is the current block size.
                            -m   ⟨percentage⟩ sets the percentage of data blocks reserved for root (default: 5%).
                            -S   causes mke2fs to rewrite just the superblocks and group descriptors and
                                    leaves the inodes intact.
                            -j   creates a journal and, hence, an ext3 or ext4 file system.

                                   B It is best to create an ext4 file system using one of the precooked calls
                                      like “mkfs -t ext4 ”, since mke2fs then knows what it is supposed to do. If
                                     you must absolutely do it manually, use something like
                                           # mke2fs -j -O extents,uninit_bg,dir_index /dev/sdb1

   The ext file systems (still) need at least one complete data block for every file, no
matter how small. Thus, if you create an ext file system on which you intend
to store many small files (cue: mail or Usenet server), you may want to select a
smaller block size in order to avoid internal fragmentation. (On the other hand,
disk space is really quite cheap today.)
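
   Taken together, an invocation using several of these options might look like this
(the values are examples only):

# mke2fs -b 2048 -i 4096 -m 1 -j /dev/sdb1

This creates an ext3 file system with 2 KiB blocks, one inode for every 4 KiB of
disk space, and 1 % of the data blocks reserved for root .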

                            B The inode density (-i option) determines how many files you can create on
                              the file system—since every file requires an inode, there can be no more
                              files than there are inodes. The default, creating an inode for every single
                              data block on the disk, is very conservative, but from the point of view of
                              the developers, the danger of not being able to create new files for lack of
                               inodes seems to be more of a problem than wasted space due to unused
                               inodes.

                            B Various file system objects require inodes but no data blocks—such as de-
                              vice files, FIFOs or short symbolic links. Even if you create as many inodes
                               as data blocks, you can still run out of inodes before running out of data
                               blocks.

                            B Using the mke2fs -F option, you can “format” file system objects that are not
                              block device files. For example, you can create CD-ROMs containing an ext2
                              file system by executing the command sequence

                                    #   dd if=/dev/zero of=cdrom.img bs=1M count=650
                                    #   mke2fs -F cdrom.img
                                    #   mount -o loop cdrom.img /mnt
                                    #                                                        … copy stuff to /mnt …
                                    #   umount /mnt
                                    #   cdrecord -data cdrom.img

                                   (/dev/zero is a “device” that produces arbitrarily many zero bytes.) The re-
                                   sulting CD-ROMs contain “genuine” ext2 file systems with all permissions,
                                   attributes, ACLs etc., and can be mounted using
15.1 Creating a Linux File System                                                                     231

         # mount -t ext2 -o ro /dev/scd0 /media/cdrom

        (or some such command); you should replace /dev/scd0 by the device name
        of your optical drive. (It is best to avoid using an ext3 file system here, since
        the journal would be an utter waste of space. An ext4 file system, though,
        can be created without a journal.)

Repairing ext file systems e2fsck is the consistency checker for ext file systems.
There are usually symbolic links such as fsck.ext2 so that it can be invoked from fsck .

B Like mke2fs , e2fsck also works for ext3 and ext4 file systems.

B You can of course invoke the program directly, which might save you a little
  typing when passing options. On the other hand, you can only specify the
  name of one single partition (strictly speaking, one single block device).

The most important options for e2fsck include:

-b   ⟨number⟩ reads the super block from block ⟨number⟩ of the partition (rather
        than the first super block)

-B   ⟨size⟩ gives the size of a block group between two copies of the super block;
         with the ext file systems, backup copies of the super block are usually placed
         every 8192 blocks, on larger disks every 32768 blocks. (You can query this
         using the tune2fs command explained below; look for “blocks per group” in
         the output of “tune2fs -l ”.)
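
If you do not know where the backup copies are on a given file system, mke2fs
can report them without writing anything; a sketch using a file-backed image
(name and size are examples):

```shell
# A scratch image; name and size are examples
dd if=/dev/zero of=test.img bs=1M count=32
# -n: dry run -- print what would be done, including the locations of
# the backup super blocks, without actually writing a file system
mke2fs -F -n -b 1024 test.img
```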

-f   forces a file system to be checked even if its super block claims that it is clean
-l   ⟨file⟩ reads the list of bad blocks from the ⟨file⟩ and marks these blocks as “used”
-c   (“check”) searches the file system for bad blocks
-p   (“preen”) causes errors to be repaired automatically with no further user intervention
-v   (“verbose”) outputs information about the program’s execution status and the
        file system while the program is running
   The device file specifies the partition whose file system is to be checked. If that
partition does not contain an ext file system, the command aborts. e2fsck performs
the following steps:

     1. The command line arguments are checked
     2. The program checks whether the file system in question is mounted

     3. The file system is opened
     4. The super block is checked for readability
     5. The data blocks are checked for errors
     6. The super block information on inodes, blocks and sizes are compared with
        the current system state
     7. Directory entries are checked against inodes
     8. Every data block that is marked “used” is checked for existence and whether
        it is referred to exactly once by some inode

     9. The number of links within directories is checked with the inode link coun-
        ters (must match)
232                                                                            15 File Systems: Care and Feeding

                             10. The total number of blocks must equal the number of free blocks plus the
                                 number of used blocks
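
   A minimal run-through, using a throwaway image file so that no real partition
is at risk (the file name is an example):

```shell
# A throwaway file system image (the name is an example)
dd if=/dev/zero of=check.img bs=1M count=16
mke2fs -F -q check.img
# -f forces a full check even though the file system is marked clean;
# -p repairs anything repairable without asking questions
e2fsck -f -p check.img
```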

               B e2fsck returns an exit code with the same meaning as the standard fsck exit
                 codes.

                               It is impossible to list all the file system errors that e2fsck can handle. Here are
                           a few important examples:
                              • Files whose inodes are not referenced from any directory are placed in the
                                file system’s lost+found directory using the inode number as the file name
                                and can be moved elsewhere from there. This type of error can occur, e. g.,
                                if the system crashes after a file has been created but before the directory
                                entry could be written.
                              • An inode’s link counter is greater than the number of links pointing to this
                                inode from directories. e2fsck corrects the inode’s link counter.

                              • e2fsck finds free blocks that are marked used (this can occur, e. g., when the
                                system crashes after a file has been deleted but before the block count and
                                bitmaps could be updated).
                              • The total number of blocks is incorrect (free and used blocks together are
                                different from the total number of blocks).

                               Not all errors are straightforward to repair. What to do if the super block is
                           unreadable? Then the file system can no longer be mounted, and e2fsck often fails
                           as well. You can then use a copy of the super block, one of which is included with
                           every block group on the partition. In this case you should boot a rescue system
                           and invoke fsck from there. With the -b option, e2fsck can be forced to consider a
                            particular block as the super block. The command in question then becomes, for
                            example:

                           # e2fsck -f -b 8193 /dev/sda2

                           B If the file system cannot be automatically repaired using fsck , it is pos-
                             sible to modify the file system directly. However, this requires very de-
                             tailed knowledge of file system structures which is beyond the scope of
                             this course.—There are two useful tools to aid with this. First, the dumpe2fs
                              program makes visible the internal management data structures of an ext
                             file system. The interpretation of its output requires the aforementioned
                             detailed knowledge. An ext file system may be repaired using the debugfs
                             file system debugger.

                           A You should keep your hands off programs like debugfs unless you know ex-
                             actly what you are doing. While debugfs enables you to manipulate the file
                             system’s data structures on a very low level, it is easy to damage a file sys-
                             tem even more by using it injudiciously. Now that we have appropriately
                             warned you, we may tell you that

                                  # debugfs /dev/sda1

                                 will open the ext file system on /dev/sda1 for inspection (debugfs , reasonably,
                                 enables writing to the file system only if it was called with the -w option).
                                 debugfs displays a prompt; “help ” gets you a list of available commands.
                                 These are also listed in the documentation, which is in debugfs (8).
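
                                  For a quick read-only look, debugfs can also execute a single command
                                  via its -R option; a sketch on a scratch image (without -w , nothing can
                                  be modified):

```shell
# A scratch file system to inspect (the name is an example)
dd if=/dev/zero of=dbg.img bs=1M count=16
mke2fs -F -q dbg.img
# -R executes a single debugfs command non-interactively; "stats"
# prints the super block, much like "tune2fs -l"
debugfs -R stats dbg.img
```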

Querying and Changing ext File System Parameters If you have created a parti-
tion and put an ext file system on it, you can change some formatting parameters
after the fact. This is done using the tune2fs command, which should be used with
utmost caution and should never be applied on a file system mounted for writing:

tune2fs    [⟨options⟩] ⟨device⟩

The following options are important:
-c   ⟨count⟩ sets the maximum number of times the file system may be mounted
         between two routine file system checks. The default value set by mke2fs is a
         random number somewhere around 30 (so that not all file systems are pre-
         emptively checked at the same time). The value 0 means “infinitely many”.
-C   ⟨count⟩ sets the current “mount count”. You can use this to cheat fsck or (by
         setting it to a larger value than the current maximum set up using -c ) force
         a file system check during the next system boot.
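
A sketch of the -c /-C interplay, demonstrated on a scratch image (name and
numbers are examples):

```shell
# A scratch file system (name and numbers are examples)
dd if=/dev/zero of=tune.img bs=1M count=16
mke2fs -F -q tune.img
# Check at most every 30 mounts; setting the current mount count
# above that maximum forces a check at the next "boot"
tune2fs -c 30 -C 31 tune.img
tune2fs -l tune.img | grep "ount count"
```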

-e   ⟨behaviour⟩ determines the behaviour of the system in case of errors. The fol-
         lowing possibilities exist:
        continue   Go on as normal
        remount-ro   Disallow further writing to the file system
        panic   Force a kernel panic

        In every case, a file system consistency check will be performed during the
        next reboot.
-i   ⟨interval⟩⟨unit⟩ sets the maximum time between two routine file system checks.
         ⟨interval⟩ is an integer; the ⟨unit⟩ is d for days, w for weeks and m for months.
         The value 0 means “infinitely long”.
-l   displays super block information.
-m   ⟨percent⟩ sets the percentage of data blocks reserved for root (or the user speci-
         fied using the -u option). The default value is 5%.

-L   ⟨name⟩ sets a partition name (up to 16 characters). Commands like mount and
        fsck make it possible to refer to partitions by their names rather than the
        names of their device files.
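
A sketch of labelling, again using a scratch image (the label and the file name
are examples):

```shell
# A scratch file system (label and file name are examples)
dd if=/dev/zero of=label.img bs=1M count=16
mke2fs -F -q label.img
# Assign a label of up to 16 characters
tune2fs -L scratch label.img
# On a real block device you could then mount by label, e.g. (as root):
#   mount LABEL=scratch /mnt
```
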
   To upgrade an existing ext3 file system to an ext4 file system, you need to exe-
cute the commands

# tune2fs -O extents,uninit_bg,dir_index /dev/sdb1
# e2fsck -fDp /dev/sdb1

(stipulating that the file system in question is on /dev/sdb1 ). Make sure to change
/etc/fstab such that the file system is mounted as ext4 later on (see Section 15.2).

B Do note, though, that all existing files will still be using ext3 structures—
  improvements like extents will only be used for files created later. The
  e4defrag defragmentation tool is supposed to convert older files but is not
  quite ready yet.

B If you have the wherewithal, you should not upgrade a file system “in place”
  but instead back up its content, recreate the file system as ext4 , and then
  restore the content. The performance of ext4 is considerably better on “native”
  ext4 file systems than on converted ext3 file systems—this can amount to a
  factor of 2.

      B If you have ext2 file systems lying around that you would like to convert into
        ext3 file systems: This is easily done by creating a journal. tune2fs will do
        that for you, too:

             # tune2fs -j /dev/sdb1

            Again, you will have to adjust /etc/fstab if necessary.

      C 15.1 [!2] Generate an ext4 file system on a suitable medium (hard disk par-
        tition, USB thumb drive, file created using dd ).

      C 15.2 [2] Change the maximum mount count of the filesystem created in Ex-
        ercise 15.1 to 30. In addition, 30% of the space available on the file system
        should be reserved for user test .

      15.1.3    ReiserFS
      Overview ReiserFS is a Linux file system meant for general use. It was developed
      by a team under the direction of Hans Reiser and debuted in Linux 2.4.1 (that was
      in 2001). This made it the first journaling file system available for Linux. ReiserFS
      also contained some other innovations that the most popular Linux file system at
      the time, ext2 , did not offer:
         • Using a special tool, ReiserFS file systems could be changed in size. Enlarge-
           ment was even possible while the file system was mounted.
         • Small files and the ends of larger files could be packed together to avoid
           “internal fragmentation” which arises in file systems like ext2 because space
           on disk is allocated based on the block size (usually 4 KiB). With ext2 and
           friends, even a 1-byte file requires a full 4-KiB block, which could be consid-
           ered wasteful (a 4097-byte file requires two data blocks, and that is almost
           as bad). With ReiserFS, several such files could share one data block.

            B There is nothing in principle that would keep the ext developers from
              adding this “tail packing” feature to the ext file systems. This was dis-
              cussed, and the consensus was that by now, disk space is cheap enough
              that the added complexity would not be worth the trouble.

         • Inodes aren’t pregenerated when the file system is created, but are allocated
           on demand. This avoids a pathological problem possible with the ext file
           systems, where there are blocks available in the file system but all inodes
           are occupied and no new files can be generated.

           B The ext file systems mitigate this problem by allocating one inode per
             data block per default (the inode density corresponds to the block size).
             This makes it difficult to provoke the problem.

         • ReiserFS uses trees instead of lists (like ext2 ) for its internal management
           data structures. This makes it more efficient for directories with many files.

           B Ext3 and in particular ext4 can by now do that too.
            As a matter of fact, ReiserFS uses the same tree structure not just for di-
            rectory entries, but also for inodes, file metadata and file block lists, which
            leads to a performance increase in places but to a decrease in others.

            For a long time, ReiserFS used to be the default file system for the SUSE
            distributions (and SUSE contributed to the project’s funding). Since 2006,
            Novell/SUSE has moved from ReiserFS to ext3 ; very new SLES versions use
            Btrfs for their root file system.

A In real life you should give the Reiser file system (and its designated succes-
  sor, Reiser4) a wide berth unless you need to manage older systems using
  it. This is less to do with the fact that Hans Reiser was convicted of his
  wife’s murder (which of course does not speak in his favour as a human
  being, but things like these do happen not just among Linux kernel devel-
  opers), but more with the fact that the Reiser file system does have its good
  points but is built on a fairly brittle base. For example, certain directory
  operations in ReiserFS break basic assumptions that are otherwise univer-
  sally valid for Unix-like file systems. This means, for instance, that mail
  servers storing mailboxes on a ReiserFS file system are less resilient against
  system crashes than ones using different file systems. Another grave prob-
  lem, which we will talk about briefly later on, is the existence of technical
  flaws in the file system repair program. Finally—and that may be the most
  serious problem—nobody seems to maintain the code any longer.

Creating ReiserFS file systems mkreiserfs serves to create a ReiserFS file system.
The possible specification of a logical block size is currently ignored; the block size
is always 4 KiB. With dumpreiserfs you can determine information about ReiserFS
file systems on your disk. resize_reiserfs makes it possible to change the size of
currently-unused ReiserFS partitions. Mounted partitions may be resized using a
command like “mount -o remount,resize= ⟨block count⟩ ⟨mount point⟩”.

Consistency Checks for ReiserFS For the Reiser file system, too, there is a check-
ing and repair program, namely reiserfsck .
   reiserfsck performs a consistency check and tries to repair any errors found,
much like e2fsck . This program is only necessary if the file system is really dam-
aged. Should a Reiser file system merely have been unmounted uncleanly, the
kernel will automatically try to restore it according to the journal.

A reiserfsck has some serious issues. One is that when the tree structure needs
  to be reconstructed (which may happen in certain situations) it gets com-
  pletely mixed up if data files (!) contain blocks that might be misconstrued
  as another ReiserFS file system’s superblock. This will occur if you have
  an image of a ReiserFS file system in a file used as a ReiserFS-formatted
  “virtual” hard disk for a virtualisation environment such as VirtualBox or
  VMware. This effectively disqualifies the ReiserFS file system for serious
  work. You have been warned.

C 15.3 [!1] What is the command to create a Reiser file system on the first
  logical partition of the second disk?

15.1.4    XFS
The XFS file system was donated to Linux by SGI (the erstwhile Silicon Graphics,
Inc.); it is the file system used by SGI’s Unix variant, IRIX, which is able to handle
very large files efficiently. All Linux distributions of consequence offer XFS sup-
port, even though few deploy it by default; you may have to install the XFS tools
separately.

B In some circles, “XFS” is the abbreviation of “X11 Font Server”. This can
  occur in distribution package names. Don’t let yourself be confused.

   You can create an XFS file system on an empty partition (or file) using the

# mkfs -t xfs /dev/sda2

      command (insert the appropriate device name). Of course, the real work is done
      by a program called mkfs.xfs . You can control it using various options; consult the
      documentation (xfs (5) and mkfs.xfs (8)).

       B If performance is your goal, you can, for example, create the journal on an-
         other (physical) storage medium by using an option like “-l logdev=/dev/sdb1,size=10000b ”.
        (The actual file system should of course not be on /dev/sdb , and the partition
        for the journal should not otherwise be used.)

         The XFS tools contain a fsck.xfs (which you can invoke using “fsck -t xfs ”), but
      this program doesn’t really do anything at all—it is merely there to give the sys-
      tem something to call when “all” file systems are to be checked (which is easier
      than putting a special exception for XFS into fsck ). In actual fact, XFS file sys-
      tems are checked automatically on mounting if they have not been unmounted
      cleanly. If you want to check the consistency of an XFS or have to repair one, use
      the xfs_repair (8) program—“xfs_repair -n ” checks whether repairs are required;
      without the option, any repairs will be performed outright.

      B In extreme cases xfs_repair may not be able to repair the file system. In such a
        situation you can use xfs_metadump to create a dump of the filesystem’s meta-
        data and send that to the developers:

             # xfs_metadump /dev/sdb1 sdb1.dump

            (The file system must not be mounted when you do this.) The dump is a
            binary file that does not contain actual file data and where all file names
            have been obfuscated. Hence there is no risk of inadvertently passing along
            confidential data.

      B A dump that has been prepared using xfs_metadump can be written back
        to a file system (on a “real” storage medium or an image in a file) using
        xfs_mdrestore . This will not include file contents as these aren’t part of the
        dump to begin with. Unless you are an XFS developer, this command will
        not be particularly interesting to you.

         The xfs_info command outputs information about a (mounted) XFS file system:

      # xfs_info /dev/sdb1
      meta-data=/dev/sdb1              isize=256     agcount=4, agsize=16384 blks
               =                       sectsz=512    attr=2, projid32bit=1
               =                       crc=0         finobt=0
      data     =                       bsize=4096    blocks=65536, imaxpct=25
               =                       sunit=0       swidth=0 blks
      naming   =version 2              bsize=4096    ascii-ci=0 ftype=0
       log      =internal               bsize=4096    blocks=853, version=2
               =                       sectsz=512    sunit=0 blks, lazy-count=1
       realtime =none                   extsz=4096    blocks=0, rtextents=0

      You can see, for example, that the file system consists of 65536 blocks of 4 KiB each
      (bsize and blocks in the data section), while the journal occupies 853 4-KiB blocks
       in the same file system (internal , bsize and blocks in the log section).

       B The same information is output by mkfs.xfs after creating a new XFS file
         system.

         You should avoid copying XFS file systems using dd (or at least proceed very
      cautiously). This is because every XFS file system contains a unique UUID, and
      programs like xfsdump (which makes backup copies) can get confused if they run
      into two independent file systems using the same UUID. To copy XFS file systems,
      use xfsdump and xfsrestore or else xfs_copy instead.

15.1.5        Btrfs
Btrfs is considered the up-and-coming Linux file system for the future. It com-
bines the properties traditionally associated with a Unix-like file system with some
innovative ideas that are partly based on Solaris’s ZFS. Besides some features oth-
erwise provided by the Logical Volume Manager (LVM; Section 14.7)—such as the
creation of file systems that span several physical storage media—or provided
by the Linux kernel’s RAID support—such as the redundant storage of data on
several physical media—this includes transparent data compression, consistency
checks on data blocks by means of checksums, and various others. The “killer
feature” is probably snapshots that can provide views of different versions of files
or complete file hierarchies simultaneously.

B Btrfs is several years younger than ZFS, and its design therefore contains a
  few neat ideas that hadn’t been invented yet when ZFS was first introduced.
  ZFS is currently considered the “state of the art” in file systems, but it is to
  be expected that some time in the not-too-distant future it will be overtaken
  by Btrfs.

B Btrfs is based, in principle, on the idea of “copy on write”. This means that
  if you create a snapshot of a Btrfs file system, nothing is copied at all; the
  system only notes that a copy exists. The data is accessible both from the
  original file system and the snapshot, and as long as data is just being read,
  the file systems can share the complete storage. Once write operations hap-
  pen either in the original file system or the snapshot, only the data blocks
  being modified are copied. The data itself is stored in efficient data struc-
  tures called B-trees.

   Btrfs file systems are created with mkfs , as usual:

# mkfs -t btrfs /dev/sdb1

B You can also mention several storage media, which will all be made part
  of the new file system. Btrfs stores metadata such as directory information
  redundantly on several media; by default, data is spread out across various
  disks (“striping”) in order to accelerate access1 . You can, however, request
  other storage arrangements:

          # mkfs -t btrfs -L MyBtrfs -d raid1 /dev/sdb1 /dev/sdc1

         This example generates a Btrfs file system which encompasses the /dev/sdb1
         and /dev/sdc1 disks and is labeled “MyBtrfs”. Data is stored redundantly on
         both disks (“-d raid1 ”).

B Within Btrfs file systems you can create “subvolumes”, which serve as a type
  of partition at the file system level. Subvolumes are the units of which you
  will later be able to make snapshots. If your system uses Btrfs as its root file
  system, the command

          # btrfs subvolume create /home

         would, for instance, allow you to keep your own data within a separate sub-
         volume. Subvolumes do not take a lot of space, so you should not hesitate
         to create more of them rather than fewer—in particular, one for every direc-
         tory of which you might later want to create independent snapshots, since
         it is not possible to make directories into subvolumes after the fact.

  1 In   other words, Btrfs uses RAID-1 for metadata and RAID-0 for data.

              B You can create a snapshot of a subvolume using
                      # btrfs subvolume snapshot /mnt/sub /mnt/sub-snap

                      The snapshot (here, /mnt/sub- snap ) is at first indistinguishable from the origi-
                      nal subvolume (here, /mnt/sub ); both contain the same files and are writable.
                      At first no extra storage space is being used—only if you change files in the
                       original or snapshot or create new ones, the system copies whatever is re-
                       quired.

                 Btrfs makes on-the-fly consistency checks and tries to fix problems as they are
              detected. The “btrfs scrub start ” command starts a house-cleaning operation that
              recalculates the checksums on all data and metadata on a Btrfs file system and
              repairs faulty blocks according to a different copy if required. This can, of course,
              take a long time; with “btrfs scrub status ” you can query how it is getting on, with
              “btrfs scrub cancel ” you can interrupt it, and restart it later with “btrfs scrub resume ”.
                 There is a fsck.btrfs program, but it does nothing beyond outputting a message
              that it doesn’t do anything. The program is required because something needs
              to be there to execute when all file systems are checked for consistency during
              startup. To really check or repair Btrfs file systems there is the “btrfs check ” com-
              mand. By default this does only a consistency check, and if it is invoked with the
“--repair ” option, it tries to actually repair any problems it found.
                 Btrfs is very versatile and complex and we can only give you a small glimpse
              here. Consult the documentation (starting at btrfs (8)).

              C 15.4 [!1] Generate a Btrfs file system on an empty partition, using “mkfs -t
                btrfs ”.

              C 15.5 [2] Within your Btrfs file system, create a subvolume called sub0 . Create
                some files within sub0 . Then create a snapshot called snap0 . Convince your-
                self that sub0 and snap0 have the same content. Remove or change a few files
                in sub0 and snap0 , and make sure that the two subvolumes are independent
                of each other.

              15.1.6      Even More File Systems
               tmpfs is a flexible implementation of a “RAM disk file system”, which stores files
              not on disk, but in the computer’s virtual memory. They can thus be accessed
              more quickly, but seldom used files can still be moved to swap space. The size of
              a tmpfs is variable up to a set limit. There is no special program for generating a
              tmpfs , but you can create it simply by mounting it: For example, the

              # mount -t tmpfs -o size=1G,mode=0700 tmpfs /scratch

           command creates a tmpfs of at most 1 GiB under the name of /scratch , which can
           only be accessed by the owner of the /scratch directory. (We shall be coming back
           to mounting file systems in Section 15.2.)
               A popular file system for older Windows PCs, USB sticks, digital cameras, MP3
           players and other “storage devices” without big ideas about efficiency and flexi-
            bility is Microsoft’s venerable VFAT file system. Naturally, Linux can mount, read,
           and write media formatted thusly, and also create such file systems, for example

              # mkfs -t vfat /dev/mcblk0p1

(insert the appropriate device name again). At this point you will no longer be sur-
prised to hear that mkfs.vfat is just another name for the mkdosfs program, which
can create all sorts of MS-DOS and Windows file systems—including the file sys-
tem used by the Atari ST of blessed memory. (As there are Linux variants running
on Atari computers, this is not quite as far-fetched as it may sound.)

B mkdosfs supports various options allowing you to determine the type of file
  system created. Most of these are of no practical consequence today, and
  mkdosfs will do the Right Thing in most cases, anyway. We do not want to
   digress into a taxonomy of FAT file system variants and restrict ourselves to
  pointing out that the main difference between FAT and VFAT is that file sys-
  tems of the latter persuasion allow file names that do not follow the older,
  strict 8 + 3 scheme. The “file allocation table”, the data structure that re-
  members which data blocks belong to which file and that gave the file sys-
   tem its name, also exists in various flavours, of which mkdosfs selects the one
  most suitable to the medium in question—floppy disks are endowed with a
  12-bit FAT, and hard disk (partitions) or (today) USB sticks of considerable
   capacity get 32-bit FATs; in the latter case the resulting file system is called
   “FAT32 ”.
   NTFS, the file system used by Windows NT and its successors including Win-
dows Vista, is a bit of an exasperating topic. Obviously there is considerable
interest in enabling Linux to handle NTFS partitions—everywhere but on Mi-
crosoft’s part, where so far one has not deigned to explain to the general public
how NTFS actually works. (It is well-known that NTFS is based on BSD’s “Berke-
ley Fast Filesystem”, which is reasonably well understood, but in the meantime
Microsoft butchered it into something barely recognisable.) In the Linux com-
munity there have been several attempts to provide NTFS support by trying to
understand NTFS on Windows, but complete success is still some way off. At
the moment there is a kernel-based driver with good support for reading, but
questionable support for writing, and another driver running in user space which
according to the grapevine works well for reading and writing. Finally, there are
the “ntfsprogs”, a package of tools for managing NTFS file systems, which also
allow rudimentary access to data stored on them. Further information is available
from http://www.linux- .

15.1.7    Swap space
In addition to the file system partitions, you should always create a swap parti- swap partition
tion. Linux can use this to store part of the content of system RAM; the effective
amount of working memory available to you is thus greater than the amount of
RAM in your computer.
   Before you can use a swap partition you must “format” it using the mkswap command:

# mkswap /dev/sda4

This writes some administrative data to the partition.
   When the system is started, it is necessary to “activate” a swap partition. This
corresponds to mounting a partition with a file system and is done using the swapon
command:

# swapon /dev/sda4

The partition should subsequently be mentioned in the /proc/swaps file:

# cat /proc/swaps
Filename               Type         Size    Used   Priority
/dev/sda4              partition    2144636 380    -1
240                                                                  15 File Systems: Care and Feeding

                  After use the swap partition can be deactivated using swapoff :

                   # swapoff /dev/sda4

                   B The system usually takes care of activating and deactivating swap parti-
                     tions, as long as you put them into the /etc/fstab file. See Section 15.2.2.

                      You can operate up to 32 swap partitions (up to and including kernel version
                   2.4.10: 8) in parallel; the maximum size depends on your computer’s architecture
                   and isn’t documented anywhere exactly, but “stupendously gigantic” is a reason-
                   able approximation. It used to be just a little less than 2 GiB for most Linux
                   platforms.

                   B If you have several disks, you should spread your swap space across all of
                     them, which should increase speed noticeably.

                   B Linux can prioritise swap space. This is worth doing if the disks containing
                     your swap space have different speeds, because Linux will prefer the faster
                     disks. Read up on this in swapon (8).
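                      The last two notes can be put into practice directly in /etc/fstab : the pri=
                      mount option sets a swap area’s priority (the command-line equivalent is
                      “swapon -p ”). A sketch with two hypothetical disks; giving both areas the
                      same priority makes Linux use them in parallel, which spreads the load:

```
/dev/sda2   none   swap   sw,pri=5   0   0
/dev/sdb2   none   swap   sw,pri=5   0   0
```

                      Higher numbers mean higher priority; areas of equal priority are used
                      round-robin, and lower-priority areas only once the higher ones are full.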

                   B Besides partitions, you can also use files as swap space. Since Linux 2.6 this
                     isn’t even any slower! This allows you to temporarily provide space for rare
                     humongous workloads. You must initially create a swap file as a file full of
                     zeros, for instance by using

                          # dd if=/dev/zero of=swapfile bs=1M count=256

                         before preparing it using the mkswap command and activating it with swapon .
                         (Desist from tricks using dd or cp ; a swap file may not contain “holes”.)
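                          Putting the steps together, the whole procedure for a hypothetical
                          256-MiB swap file could be sketched like this (the /swapfile path is an
                          example; all of these commands require root privileges):

```shell
# Create a 256 MiB file full of zeros -- it must not contain holes
dd if=/dev/zero of=/swapfile bs=1M count=256
# Swap files should not be readable by ordinary users
chmod 600 /swapfile
# Write the swap signature, then activate the file as swap space
mkswap /swapfile
swapon /swapfile
# ... and deactivate it again when it is no longer needed
swapoff /swapfile
```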

                   B You can find information about the currently active swap areas in the /proc/
                     swaps file.

                  15.2      Mounting File Systems
                  15.2.1     Basics
                  To access data stored on a medium (hard disk, USB stick, floppy, …), it would in
                  principle be possible to access the device files directly. This is in fact being done,
                  for example when accessing tape drives. However, the well-known file manage-
                  ment commands (cp , mv , and so on) can only access files via the directory tree.
                  To use these commands, storage media must be made part of the directory tree
                  (“mounted”) using their device files. This is done using the mount command.
                     The place in the directory tree where a file system is to be mounted is called a
      mount point mount point. This can be any directory; it does not even have to be empty, but you
                  will not be able to access the original directory content while another file system
                  is mounted “over” it.

                   B The content reappears once the file system is unmounted using umount . Even
                     so you should restrain yourself from mounting stuff on /etc and other im-
                     portant system directories …

                  15.2.2     The mount Command
                  The mount command mounts file systems into the directory tree. It can also be
used to display the currently mounted file systems, simply by calling it without
arguments:

proc        /proc           proc defaults                     0   0
/dev/sda2   /               ext3 defaults,errors=remount-ro   0   1
/dev/sda1   none            swap sw                           0   0
/dev/sda3   /home           ext3 defaults,relatime            0   1
/dev/sr0    /media/cdrom0   udf,iso9660 ro,user,exec,noauto   0   0
/dev/sdb1   /media/usb      auto user,noauto                  0   0
/dev/fd0    /media/floppy   auto user,noauto,sync             0   0

                       Figure 15.1: The /etc/fstab file (example)

$ mount
/dev/sda2 on / type ext3 (rw,relatime,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)

  To mount a medium, for example a hard disk partition, you must specify its
device file and the desired mount point:

# mount -t ext2 /dev/sda5 /home

It is not mandatory to specify the file system type using the -t option, since the
kernel can generally figure it out for itself. If the partition is mentioned in /etc/
fstab , it is sufficient to give either the mount point or the device file:

# mount /dev/sda5                                                     One possibility …
# mount /home                                                           … and another

    Generally speaking, the /etc/fstab file describes the composition of the whole /etc/fstab
file system structure from various file systems that can be located on different
partitions, disks etc. In addition to the device names and corresponding mount
points, you can specify various options used to mount the file systems. The allow-
able options depend on the file system; many options are to be found in mount (8).
    A typical /etc/fstab file could look similar to Figure 15.1. The root partition
usually occupies the first line. Besides the “normal” file systems, pseudo file sys-
tems such as devpts or proc and the swap areas are mentioned here.
    The third field describes the type of the file system in question. Entries like ext3 type
and iso9660 speak for themselves (if mount cannot decide what to do with the type
specification, it tries to delegate the job to a program called /sbin/mount.⟨type⟩), swap
refers to swap space (which does not require mounting), and auto means that mount
should try to determine the file system’s type.

B To guess, mount utilises the content of the /etc/filesystems file, or, if that file
  does not exist, the /proc/filesystems file. (/proc/filesystems is also read if /etc/
  filesystems ends with a line containing just an asterisk.) In any case, mount
  processes only those lines that are not marked nodev . For your edification,
  here is a snippet from a typical /proc/filesystems file:

       nodev   sysfs
       nodev   rootfs
       nodev   usbfs
       nodev   nfs
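
       A minimal /etc/filesystems illustrating the mechanism might read as fol-
       lows (the file system names are merely examples): mount would try ext4
       and vfat in this order and then, because of the asterisk line, fall back to
       the kernel’s own list in /proc/filesystems .

```
ext4
vfat
*
```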


                B The kernel generates /proc/filesystems dynamically based on those file sys-
                  tems for which it actually contains drivers. /etc/filesystems is useful if you
                  want to specify an order for mount ’s guesswork that deviates from the one
                  resulting from /proc/filesystems (which you cannot influence).

                B Before mount refers to /etc/filesystems , it tries its luck with the libblkid and
                  libvolume_id libraries, both of which are (among other things) able to deter-
                  mine which type of file system exists on a medium. You can experiment
                  with these libraries using the command line programs blkid and vol_id :

                          # blkid /dev/sdb1
                          /dev/sdb1: LABEL="TESTBTRFS" UUID="d38d6bd1-66c3-49c6-b272-eabdae877368"
                            UUID_SUB="3c093524-2a83-4af0-8290-c22f2ab44ef3" TYPE="btrfs"
                            PARTLABEL="Linux filesystem"

      options        The fourth field contains the options, including:
                defaults    Is not really an option, but merely a place holder for the standard options
                         (see mount (8)).
                noauto   Opposite of auto , keeps a file system from being mounted automatically
                         when the system is booted.
                user   In principle, only root can mount storage devices (normal users may only
                        use the simple mount command to display information), unless the user op-
                        tion is set. In this case, normal users may say “mount ⟨device⟩” or “mount
                        ⟨mount point⟩”; this will mount the named device on the designated mount
                        point. The user option will allow the mounting user to unmount the device
                        (root , too); there is a similar option users that allows any user to unmount
                        the device.
                sync   Write operations are not buffered in RAM but written to the medium directly.
                       The end of the write operation is only signaled to the application program
                       once the data have actually been written to the medium. This is useful for
                       floppies or USB thumb drives, which might otherwise be inadvertently re-
                       moved from the drive while unwritten data is still buffered in RAM.
                ro   This file system is mounted for reading only, not writing (opposite of rw ).
                exec   Executable files on this file system may be invoked. The opposite is noexec ;
                        exec is given here because the user option implies the noexec option (among
                        others).
                As you can see in the /dev/sr0 entry, later options can override earlier ones: user
                implies the noexec option, but the exec farther to the right on the line overrides
                this default.

                15.2.3       Labels and UUIDs
              We showed you how to mount file systems using device names such as /dev/hda1 .
              This has the disadvantage, though, that the correspondence between device files
              and actual devices is not necessarily fixed: As soon as you remove or repartition a
              disk or add another, the correspondence may change and you will have to adjust
              the configuration in /etc/fstab . With some device types, such as USB media, you
              cannot by design rely on anything. This is where labels and UUIDs come in.
        label    A label is a piece of arbitrary text of up to 16 characters that is placed in a file
              system’s super block. If you have forgotten to assign a label when creating the

file system, you can add one (or modify an existing one) at any time using e2label .
The command
# e2label /dev/sda3 home

(for example) lets you refer to /dev/sda3 as LABEL=home , e. g., using

# mount -t ext2 LABEL=home /home

The system will then search all available partitions for a file system containing this
label.

B You can do the same using the -L option of tune2fs :
       # tune2fs -L home /dev/sda3

B The other file systems have their ways and means to set labels, too. With
  Btrfs, for example, you can either specify one when the file system is gener-
  ated (option “-L ”) or use

       # btrfs filesystem label /dev/sdb1 MYLABEL

   If you have very many disks or computers and labels do not provide the re-
quired degree of uniqueness, you can fall back to a “universally unique identifier”
or UUID. A UUID typically looks like                                                UUID

$ uuidgen
bea6383f-22a7-453f-8ef5-a5b895c8ccb0

and is generated automatically and randomly when a file system is created. This
ensures that no two file systems share the same UUID. Other than that, UUIDs
are used much like labels, except that you now need to use
UUID=bea6383f-22a7-453f-8ef5-a5b895c8ccb0 (gulp). You can also set UUIDs by means of tune2fs , or create
completely new ones using

# tune2fs -U random /dev/hda3

This should seldom prove necessary, though; it is useful, for example, if you have
replaced a disk or cloned a file system and want the copy to have a distinct UUID.

B Incidentally, you can determine a file system’s UUID using
       # tune2fs -l /dev/hda2 | grep UUID
       Filesystem UUID:              4886d1a2-a40d-4b0e-ae3c-731dd4692a77

B With other file systems (XFS, Btrfs) you can query a file system’s UUID (blkid
  is your friend) but not necessarily change it.

B The
       # lsblk -o +UUID

      command gives you an overview of all your block devices and their UUIDs.

B You can also access swap partitions using labels or UUIDs:
       # swapon -L swap1
       # swapon -U 88e5f06d-66d9-4747-bb32-e159c4b3b247

                           You can find the UUID of a swap partition using blkid or lsblk , or check the
                           /dev/disk/by-uuid directory. If your swap partition has neither a UUID nor
                           a label, you can use mkswap to assign one.
                       You can also use labels and UUIDs in the /etc/fstab file (one might indeed claim
                    that this is the whole point of the exercise). Simply put

                    LABEL=home

                    or

                    UUID=bea6383f-22a7-453f-8ef5-a5b895c8ccb0

                    into the first field instead of the device name. Of course this also works for swap
                    partitions.

                     C 15.6 [!2] Consider the entries in the files /etc/fstab and /etc/mtab . How do they
                        differ?

                    15.3       The dd Command
                    dd is a command for copying files “by block”. It is used with particular preference
                    to create “images”, that is to say complete copies of file systems—for example,
                    when preparing for the complete restoration of the system in case of a catastrophic
                    disk failure.
                        dd (short for “copy and convert”2 ) reads data block by block from an input file
                    and writes it unchanged to an output file. The data’s type is of no consequence.
                     Neither does it matter to dd whether the files in question are regular files or device
                     files.
                        Using dd , you can create a quickly-restorable backup copy of your system par-
                    tition as follows:
                     # dd if=/dev/sda2 of=/data/sda2.dump

                    This saves the second partition of the first SCSI disk to a file called /data/sda2.
                    dump —this file should of course be located on another disk. If your first disk is
                    damaged, you can easily and very quickly restore the original state after replacing
                    it with an identical (!) drive:

                     # dd if=/data/sda2.dump of=/dev/sda2

                      (If /dev/sda is your system disk, you must of course have booted from a rescue or
                      live system.)
                          For this to work, the new disk drive’s geometry must match that of the old one.
      partition table In addition, the new disk drive needs a partition table that is equivalent to the old
                       one. You can save the partition table using dd as well (at least for MBR-partitioned
                       disks):

                     # dd if=/dev/sda of=/media/floppy/mbr_sda.dump bs=512 count=1

                    Used like this, dd does not save all of the hard disk to floppy disk, but writes every-
                    thing in chunks of 512 bytes (bs=512 )—one chunk (count=1 ), to be exact. In effect, all
                     of the MBR is written to the floppy. This kills two birds with one stone: the
                    boot loader’s stage 1 also ends up back on the hard disk after the MBR is restored:
                       2 Seriously! The dd command is inspired by a corresponding command on IBM mainframes (hence

                    the parameter syntax, which according to Unix standards is quite quaint), which was called CC (as in
                    “copy and convert”), but on Unix the cc name was already spoken for by the C compiler.

# dd if=/media/floppy/mbr_sda.dump of=/dev/sda

You do not need to specify a chunk size here; the file is just written once and is
(hopefully) only 512 bytes in size.

A Caution: The MBR does not contain partitioning information for logical par-
  titions! If you use logical partitions, you should use a program like sfdisk
  to save all of the partitioning scheme—see below.

B To save partitioning information for GPT-partitioned disks, use, for exam-
  ple, gdisk (the b command).

B dd can also be used to make the content of CD-ROMs or DVDs permanently
  accessible from hard disk. The command “dd if=/dev/cdrom of=/data/cdrom1.iso ”
  places the content of the CD-ROM on disk. Since the file is an ISO image and
  hence contains a file system that the Linux kernel can interpret, it can also be
  mounted. After “mount -o loop,ro /data/cdrom1.iso /mnt ” you can access the
  image’s content. You can of course make this permanent using /etc/fstab .
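
B A corresponding /etc/fstab entry for such an image might look like this (the
  image file name and mount point are taken from the example above):

```
/data/cdrom1.iso   /mnt   iso9660   loop,ro   0   0
```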

Commands in this Chapter
blkid    Locates and prints block device attributes                     blkid (8) 242
dd       “Copy and convert”, copies files or file systems block by block and does
         simple conversions                                                dd (1) 244
debugfs File system debugger for fixing badly damaged file systems. For gurus
         only!                                                       debugfs (8) 232
dumpe2fs Displays internal management data of the ext2 file system. For gurus
         only!                                                      dumpe2fs (8) 232
dumpreiserfs Displays internal management data of the Reiser file system. For
         gurus only!                                           dumpreiserfs (8) 235
e2fsck   Checks ext2 and ext3 file systems for consistency            e2fsck (8) 231
e2label Changes the label on an ext2/3 file system                   e2label (8) 242
fsck     Organises file system consistency checks                        fsck (8) 225
lsblk    Lists available block devices                                  lsblk (8) 243
mkdosfs Creates FAT-formatted file systems                         mkfs.vfat (8) 238
mke2fs   Creates ext2 or ext3 file systems                            mke2fs (8) 229
mkfs     Manages file system creation                                    mkfs (8) 224
mkfs.vfat Creates FAT-formatted file systems                       mkfs.vfat (8) 238
mkfs.xfs Creates XFS-formatted file systems                         mkfs.xfs (8) 235
mkreiserfs Creates Reiser file systems                           mkreiserfs (8) 235
mkswap   Initialises a swap partition or file                         mkswap (8) 239
mount    Includes a file system in the directory tree        mount (8), mount (2) 240
reiserfsck Checks a Reiser file system for consistency           reiserfsck (8) 235
resize_reiserfs Changes the size of a Reiser file system resize_reiserfs (8) 235
swapoff Deactivates a swap partition or file                         swapoff (8) 239
swapon   Activates a swap partition or file                           swapon (8) 239
tune2fs Adjusts ext2 and ext3 file system parameters           tune2fs (8) 232, 243
vol_id   Determines file system types and reads labels and UUIDs
                                                                      vol_id (8) 242
xfs_mdrestore Restores an XFS metadata dump to a filesystem image
                                                              xfs_mdrestore (8) 236
xfs_metadump Produces metadata dumps from XFS file systems
                                                               xfs_metadump (8) 236

Summary

       • After partitioning, a file system must be created on a new partition before
         it can be used. To do so, Linux provides the mkfs command (with a number
         of file-system-specific auxiliary tools that do the actual work).
       • Improperly unmounted file systems may exhibit inconsistencies. If Linux
         notes such file systems when it boots, these will be checked automatically
         and, if possible, repaired. These checks can also be triggered manually us-
         ing programs such as fsck and e2fsck .
       • The mount command serves to integrate file systems into the directory tree.
       • With dd , partitions can be backed up at block level.

Booting Linux

16.1 Fundamentals . . . . . . . . . . .            .   .   .   .   .   .   .   .   .   .   248
16.2 GRUB Legacy . . . . . . . . . . .             .   .   .   .   .   .   .   .   .   .   251
    16.2.1 GRUB Basics . . . . . . . . .           .   .   .   .   .   .   .   .   .   .   251
    16.2.2 GRUB Legacy Configuration . . . .       .   .   .   .   .   .   .   .   .   .   252
    16.2.3 GRUB Legacy Installation . . . . .      .   .   .   .   .   .   .   .   .   .   253
    16.2.4 GRUB 2 . . . . . . . . . . .            .   .   .   .   .   .   .   .   .   .   254
    16.2.5 Security Advice . . . . . . . .         .   .   .   .   .   .   .   .   .   .   255
16.3 Kernel Parameters . . . . . . . . .           .   .   .   .   .   .   .   .   .   .   255
16.4 System Startup Problems . . . . . . .         .   .   .   .   .   .   .   .   .   .   257
    16.4.1 Troubleshooting . . . . . . . .         .   .   .   .   .   .   .   .   .   .   257
    16.4.2 Typical Problems . . . . . . . .        .   .   .   .   .   .   .   .   .   .   257
    16.4.3 Rescue systems and Live Distributions   .   .   .   .   .   .   .   .   .   .   259

Goals
   • Knowing the GRUB Legacy and GRUB 2 boot loaders and how to configure
     them
   • Being able to diagnose and fix system start problems

Prerequisites
   • Basic knowledge of the PC startup procedure
   • Handling of configuration files

                         16.1      Fundamentals
                         When you switch on a Linux computer, an interesting and intricate process takes
                         place during which the computer initialises and tests itself before launching the
                         actual operating system (Linux). In this chapter, we consider this process in some
                         detail and explain how to adapt it to your requirements and to find and repair
                         problems if necessary.

                          B The word “to boot” is short for “to pull oneself up by one’s bootstraps”.
                            While, as Newton tells us, this is a physical impossibility, it is a good image
                            for what goes on, namely that the computer gets itself started from the most
                            basic beginnings.

                             Immediately after the computer is switched on, its firmware—depending on
                          the computer’s age, either the “basic input/output system” (BIOS) or the “unified
                          extensible firmware interface” (UEFI)—takes control. What happens next depends
                          on the firmware.

                         BIOS startup On BIOS-based systems, the BIOS searches for an operating system
                         on media like CD-ROM or hard disk, depending on the boot order specified in the
                         BIOS setup. On disks (hard or floppy), the first 512 bytes of the boot medium will
                         be read. These contain special information concerning the system start. Generally,
             boot sector this area is called the boot sector; a hard disk’s boot sector is also called the master
      master boot record boot record (MBR).

                          B We already came across the MBR when discussing the eponymous disk par-
                            titioning scheme in Chapter 14. We’re now looking at the part of the MBR
                            that does not contain partitioning information.

                            The first 446 bytes of the MBR contain a minimal startup program which in
            boot loader turn is responsible for starting the operating system—the boot loader. The rest
                      is occupied by the partition table. 446 bytes are not enough for the complete boot
                      loader, but they suffice for a small program which can fetch the rest of the boot
                       loader from disk using the BIOS. In the space between the MBR and the start of
                       the first partition—at least sector 63, today more likely sector 2048—there is enough
                      room for the rest of the boot loader. (We shall come back to that topic presently.)
                          Modern boot loaders for Linux (in particular, the “Grand Unified Boot loader”
                 GRUB or GRUB) can read common Linux file systems and are therefore able to find the
                      operating system kernel on a Linux partition, load it into RAM and start it there.

          boot manager    B GRUB serves not just as a boot loader, but also as a boot manager. As such,
                            it can, according to the user’s preferences, launch various Linux kernels or
                            even other operating systems.

                          B Bootable CD-ROMs or DVDs play an important role for the installation or
                            update of Linux systems, or as the basis of “live systems” that run directly
                            from read-only media without having to be installed on disk. To boot a
                             Linux computer from CD, you must in the simplest case ensure that the
                             CD-ROM drive comes before the hard disk in the firmware’s boot order,
                             and start the computer while the desired CD is in the drive.

                          B In the BIOS tradition, booting off CD-ROMs follows different rules than
                            booting off hard disk (or floppy disk). The “El Torito” standard (which
                            specifies these rules) basically defines two approaches: One method is to
                            include an image of a bootable floppy disk on the CD-ROM (it may be as big
                            as 2.88 MiB), which the BIOS finds and boots; the other method is to boot
                            directly off the CD-ROM, which requires a specialised boot loader (such as
                            ISOLINUX for Linux).

B With suitable hardware and software (usually part of the firmware today),
  a PC can boot via the network. The kernel, root file system, and everything
  else can reside on a remote server, and the computer itself can be diskless
  and hence ear-friendly. The details would be a bit too involved and are irrel-
  evant for LPIC-1 in any case; if necessary, look for keywords such as “PXE”
  or “Linux Terminal Server Project”.

UEFI boot procedure UEFI-based systems do not use boot sectors. Instead, the
UEFI firmware itself contains a boot manager which exploits information about
the desired operating system which is held in non-volatile RAM (NVRAM). Boot
loaders for the different operating systems on the computer are stored as regular
files on an “EFI system partition” (ESP), where the firmware can read and start
them. The system either finds the name of the desired boot loader in NVRAM, or
else falls back to the default name, /EFI/BOOT/BOOTX64.EFI . (The X64 here stands for
“64-bit Intel-style PC”. Theoretically, UEFI also works for 32-bit systems, but that
doesn’t mean it is a great idea.) The operating-system specific boot loader then
takes care of the rest, as in the BIOS startup procedure.

B The ESP must officially contain a FAT32 file system (there are Linux distri-
  butions that use FAT16, but that leads to problems with Windows 7, which
  requires FAT32). A size of 100 MiB is generally sufficient, but some UEFI
  implementations have trouble with FAT32 ESPs which are smaller than
  512 MiB, and the Linux mkfs command will default to FAT16 for partitions
  of up to 520 MiB. With today’s prices for hard disks, there is little reason
  not to play it safe and create an ESP of around 550 MiB.

B In principle it is possible to simply write a complete Linux kernel as BOOTX64.
  EFI on the ESP and thus manage without any boot loader at all. PC-based
  Linux distributions don’t usually do this, but this approach is interesting for
  embedded systems.

B Many UEFI-based systems also allow BIOS-style booting from MBR-parti-
  tioned disks, i. e., with a boot sector. This is called “compatibility support
  module” or CSM. Sometimes this method is used automatically if a tradi-
  tional MBR is found on the first recognised hard disk. This precludes an
  UEFI boot from an ESP on an MBR-partitioned disk and is not 100% ideo-
  logically pure.

B UEFI-based systems boot from CD-ROM by looking for a file called /EFI/
  BOOT/BOOTX64.EFI —like they would for disks. (It is feasible to produce CD-
  ROMs that boot via UEFI on UEFI-based systems and via El Torito on BIOS-
  based systems.)

   “UEFI Secure Boot” is supposed to prevent computers being infected with UEFI Secure Boot
“root kits” that usurp the startup procedure and take over the system before the
actual operating system is being started. Here the firmware refuses to start boot
loaders that have not been cryptographically signed using an appropriate key. Ap-
proved boot loaders, in turn, are responsible for only launching operating system
kernels that have been cryptographically signed using an appropriate key, and
approved operating system kernels are expected to insist on correct digital sig-
natures for dynamically loadable drivers. The goal is for the system to run only
“trusted” software, at least as far as the operating system is concerned.

B A side effect is that this way one gets to handicap or exclude potentially un-
  desirable operating systems. In principle, a company like Microsoft could
  exert pressure on the PC industry to only allow boot loaders and operating
  systems signed by Microsoft; since various anti-trust agencies would take a
  dim view to this, it is unlikely that such a step would become part of offi-
  cial company policy. It is more likely that the manufacturers of PC mother-
  boards and UEFI implementations concentrate their testing and debugging

                     efforts on the “boot Windows” application, and that Linux boot loaders will
                     be difficult or impossible to get to run simply due to inadvertent firmware

                  Linux supports UEFI Secure Boot in various ways. There is a boot loader called
          Shim “Shim” (developed by Matthew Garrett) which a distributor can have signed by
               Microsoft. UEFI starts Shim and Shim then starts another boot loader or operating
               system kernel. These can be signed or unsigned; the security envisioned by UEFI
               Secure Boot is, of course, only obtainable with the signatures. You can install your
               own keys and then sign your own (self-compiled) kernels.

                 B The details of this would carry us too far afield here. Consult the Linup
                   Front training manual, Linux System Customisation.

      PreLoader An alternative to Shim is “PreLoader” (by James Bottomley, distributed by the
               Linux Foundation). PreLoader is simpler than Shim and makes it possible to ac-
               credit a (possibly unsigned) subsequent boot loader with the system, and boot it
               later without further enquiries.

               Hard disks: MBR vs. GPT The question of which partitioning scheme a hard
               disk is using and the question of whether the computer boots via the BIOS (or
               CSM) or UEFI really don’t have a lot to do with each other. At least with Linux it
               is perfectly possible to boot a BIOS-based system from a GPT-partitioned disk or
               a UEFI-based system from an MBR-partitioned disk (the latter possibly via CSM).

                B To start a BIOS-based system from a GPT-partitioned disk it makes sense to
                  create a “BIOS boot partition” to hold that part of the boot loader that does
                  not fit into the MBR. The alternative—using the empty space between the
                  MBR and the start of the first partition—is not reliable for GPT-partitioned
                  disks, since the GPT partition table takes up at least part of this space and/
                  or the first partition might start immediately after the GPT partition table.
                  The BIOS boot partition does not need to be huge at all; 1 MiB is probably
                  amply enough.

               After the boot loader The boot loader loads the Linux operating system kernel
               and passes the control to it. With that, it is itself extraneous and can be removed
               from the system; the firmware, too, will be ignored from now on—the kernel is
               left to its own devices. In particular, it must be able to access all drivers required
               to initialise the storage medium containing the root file system, as well as that file
               system itself (the boot loader used the firmware to access the disk), typically at
               least a driver for an IDE, SATA, or SCSI controller and the file system in question.
               These drivers must be compiled into the kernel or—the preferred method today—
               will be taken from “early userspace”, which can be configured without having to
               recompile the kernel. (As soon as the root file system is available, everything is
               peachy because all drivers can be read from there.) The boot loader’s tasks also
               include reading the early-userspace data.

                B The “early userspace” used to be called an “initial RAM disk”, because the
                  data was read into memory en bloc as a (usually read-only) medium, and
                  treated by the kernel like a block-oriented disk. There used to be special
                  compressed file systems for this application. The method most commonly
                  used today stipulates that the early-userspace data is available as a cpio
                  archive which the kernel extracts directly into the disk block cache, as if
                  you had read each file in the archive directly from a (hypothetical) storage
                  medium. This makes it easier to get rid of the early userspace once it is no
                  longer required.

                B The kernel uses cpio instead of tar because cpio archives in the format used
                  by the kernel are better-standardised and easier to unpack than tar archives.

    As soon as the “early userspace” is available, a program called /init is invoked.
This is in charge of the remaining system initialisation, which includes tasks such
as the identification of the storage medium that should be made available as the
root file system, the loading of any required drivers to access that medium and the
file system (these drivers, of course, also come from early userspace), possibly the
(rudimentary) configuration of the network in case the root file system resides on
a remote file server, and so on. Subsequently, the early userspace puts the desired
root file system into place at “/ ” and transfers control to the actual init program—
today most often either System-V init (Chapter 17) or systemd (Chapter 18), in
each case under the name of /sbin/init . (You can use the kernel command line
option init= to pick a different program.)

B If no early userspace exists, the operating system kernel makes the storage
  medium named on its command line using the root= option available as the
  root file system, and starts the program given by the init= option, by default
  /sbin/init .
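 B The command line the boot loader passed can be inspected on a running
   system in /proc/cmdline . A small helper along these lines extracts individual
   parameters; the function name and the sample command line are made up
   for illustration:

```shell
# get_param NAME CMDLINE -- print the value of NAME=... from a kernel
# command line, or nothing if the parameter is absent.
get_param() {
    printf '%s' "$2" | tr ' ' '\n' | sed -n "s/^$1=//p" | head -n 1
}

# Sample command line; on a live system use:
#   get_param root "$(cat /proc/cmdline)"
sample='ro root=/dev/sda2 init=/sbin/init quiet'
get_param root "$sample"    # prints /dev/sda2
get_param init "$sample"    # prints /sbin/init
```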

C 16.1 [2] Whereabouts on an MBR-partitioned hard disk may a boot loader
  reside? Why?

16.2        GRUB Legacy
16.2.1       GRUB Basics
Many distributions nowadays use GRUB as their standard boot loader. It has var-
ious advantages compared to LILO, most notably the fact that it can handle the
common Linux file systems. This means that it can read the kernel directly from a
file such as /boot/vmlinuz , and is thus immune against problems that can develop if
you install a new kernel or make other changes to your system. Furthermore, on
the whole GRUB is more convenient—for example offering an interactive GRUB
shell featuring various commands and thus allowing changes to the boot setup
for special purposes or in case of problems.

A The GRUB shell allows access to the file system without using the usual
  access control mechanism. It should therefore never be made available to
  unauthorised people, but be protected by a password (on important com-
  puters, at least). See also Section 16.2.5.

    Right now there are two widespread versions of GRUB: The older version
(“GRUB Legacy”) is found in older Linux distributions—especially those with an
“enterprise” flavour—while the newer distributions tend to rely on the more
modern version GRUB 2 (Section 16.2.4).
    The basic approach taken by GRUB Legacy follows the procedure outlined in
Section 16.1. During a BIOS-based startup, the BIOS finds the first part (“stage 1”)
of the boot loader in the MBR of the boot disk (all 446 bytes of it). Stage 1 is able
to find the next stage based on sector lists stored inside the program (as part of
the 446 bytes) and the BIOS disk access functions1 .
    The “next stage” is usually stage 1.5, which is stored in the otherwise un-
used space immediately after the MBR and before the start of the first partition.
Stage 1.5 has rudimentary support for Linux file systems and can find GRUB’s
“stage 2” within the file system (normally below /boot/grub ). Stage 2 may be any-
where on the disk. It can read file systems, too, and it fetches its configuration
file, displays the menu, and finally loads and starts the desired operating system
(in the case of Linux, possibly including the “early userspace”).
   1 At least as long as the next stage can be found within the first 1024 “cylinders” of the disk. There

are historical reasons for this and it can, if necessary, be enforced through appropriate partitioning.

                            B Stage 1 could read stage 2 directly, but this would be subject to the same
                              restrictions as reading stage 1.5 (no file system access and only within the
                              first 1024 cylinders). This is why things aren’t usually arranged that way.

                            B GRUB can directly load and start most Unix-like operating systems for x86
                              computers, including Linux, Minix, NetBSD, GNU Hurd, Solaris, Reac-
                              tOS, Xen, and VMware ESXi2 . The relevant standard is called “multiboot”.
                              GRUB starts multiboot-incompatible systems (notably Windows) by invok-
                              ing the boot loader of the operating system in question—a procedure called
                              “chain loading”.

                               To make GRUB Legacy work with GPT-partitioned disks, you need a BIOS boot
                            partition to store its stage 1.5. There is a version of GRUB Legacy that can deal with
                             UEFI systems, but for UEFI boot you are generally better off using a different boot
                             loader.

                            16.2.2        GRUB Legacy Configuration
      /boot/grub/menu.lst   The main configuration file for GRUB Legacy is usually stored as /boot/grub/menu.
                            lst . It contains basic configuration as well as the settings for the operating systems
                            to be booted. This file might look as follows:

                            default 1
                            timeout 10

                            title linux
                               kernel (hd0,1)/boot/vmlinuz root=/dev/sda2
                               initrd (hd0,1)/boot/initrd
                            title failsafe
                               kernel (hd0,1)/boot/vmlinuz.bak root=/dev/sda2 apm=off acpi=off
                               initrd (hd0,1)/initrd.bak
                            title someothersystem
                               root (hd0,2)
                               chainloader +1
                            title floppy
                               root (fd0)
                               chainloader +1

                            The individual parameters have the following meaning:
                            default   Denotes the default system to be booted. Caution: GRUB counts from 0!
                                     Thus, by default, the configuration above launches the failsafe entry.
                             timeout This is how many seconds the GRUB menu will be displayed before the
                                   default entry will be booted.
                            title    Opens an operating system entry and specifies its name, which will be dis-
                                     played within the GRUB menu.
                             kernel    Specifies the Linux kernel to be booted. (hd0, 1)/boot/vmlinuz , for example,
                                      means that the kernel is to be found in /boot/vmlinuz on the second partition
                                      of the zeroth hard disk, thus in our example, for linux , on /dev/sda2 . Caution:
                                     The zeroth hard disk is the first hard disk in the BIOS boot order! There is
                                     no distinction between IDE and SCSI! And: GRUB starts counting at 0 …
                                      Incidentally, GRUB takes the exact mapping of the individual drives from
                                      its device map file.
                                      After the kernel location, arbitrary kernel parameters can be passed. This
                                      includes the root= entry.
                              2 The   “U” in GRUB must stand for something, after all.

initrd   Denotes the location of the cpio archive used for the “early userspace”.
root   Determines the system partition for foreign operating systems. You can also
        specify media that only occasionally contain something bootable, such as
        the floppy disk drive—this will let you boot from floppy even though the
        floppy disk is disabled in the BIOS boot order.
chainloader +1   Denotes the boot loader to be loaded from the foreign system’s sys-
         tem partition. Generally this is the content of that partition’s boot loader.
makeactive    Marks the specified partition temporarily as “bootable”. Some operat-
         ing systems (not Linux) require this in order to be able to boot off the par-
         tition in question. By the way: GRUB supports a few more such directives,
         for example map , which makes it possible to fool a system into believing it
         is installed on a different hard disk (than, e. g., the often disdained second
         disk) than it actually is.
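   Because default counts from 0, it is easy to pick the wrong entry. The 0-based
index of each title in menu.lst can be listed with a quick one-liner; here a here-
document stands in for the real file, using the titles from the example above:

```shell
# Print each menu entry with the index that "default" would use (0-based):
awk '/^title/ { print i++ ": " $2 }' <<'EOF'
title linux
title failsafe
title someothersystem
title floppy
EOF
```

With the example configuration this prints 0: linux , 1: failsafe , and so on,
confirming that “default 1 ” does indeed select the failsafe entry.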

16.2.3       GRUB Legacy Installation
Here “installation” does not refer to the installation of an RPM package but the
installation of the GRUB boot sector, or stage 1 (and very likely the stage 1.5). This
is very seldom required, for example during the original installation of the system
(where the installation procedure of your distribution will do it for you).
    The installation is done using the grub command, which invokes the GRUB
shell. It is most convenient to use a “batch” file, since otherwise you would have to
start from the very beginning after an erroneous input. Some distributions (e. g.,
those by SUSE/Novell) already come with a suitable file. In this case, the instal-
lation procedure might look like

# grub --batch --device-map=/boot/grub/ < /etc/grub.inst

The --device-map option creates a file under the specified name, if none
exists already.
   The /etc/grub.inst file could have the following content:                              /etc/grub.inst

root (hd0,1)
setup (hd0)

Here, root denotes the partition containing GRUB’s “home directory” (usually
/boot/grub —the other parts of GRUB necessary for the installation will be looked
for in this directory).

A The partition you specify using root here has nothing to do with the partition
  containing your Linux distribution’s root directory, which you specify using
  root= in your Linux kernels’ menu entries. At least not necessarily. See also
  Section 16.3.
   setup installs GRUB on the specified device, here in hd0 ’s MBR. GRUB’s setup
command is a simplified version of a more general command called install ; setup
should work in most cases.

B Alternatively, you may use the grub-install script to install the GRUB com-             grub-install
  ponents. This comes with some distributions.
   Inside the GRUB shell it is straightforward to figure out how to specify a hard
disk in the root or kernel directives. The GRUB shell command find is useful here:
# grub

grub> find /boot/vmlinuz

                              16.2.4     GRUB 2
                               GRUB 2 is a completely new implementation of the boot loader that did not make
                               particular concessions to GRUB Legacy compatibility. GRUB 2 was officially re-
                               leased in June 2012, even though various distributions used earlier versions by
                               default before that.
                                    The LPIC-1 certificate requires knowledge of GRUB 2 from version 3.5 of the
                                    exam (starting on 2 July 2012).

                                 As before, GRUB 2 consists of several stages that build on each other:
                                 • Stage 1 (boot.img ) is placed inside the MBR (or a partition’s boot sector) on
                                   BIOS-based systems. It can read the first sector of stage 1.5 by means of the
                                   BIOS, and that in turn will read the remainder of stage 1.5.
                                 • Stage 1.5 (core.img ) goes either between the MBR and the first partition
                                   (on MBR-partitioned disks) or else into the BIOS boot partition (on GPT-
                                   partitioned disks). Stage 1.5 consists of a first sector which is tailored to
                                   the boot medium (disk, CD-ROM, network, …) as well as a “kernel” that
                                   provides rudimentary functionality like device and file access, processing
                                   a command line, etc., and an arbitrary list of modules.

                                     B This modular structure makes it easy to adapt stage 1.5 to size restrictions.
                                 • GRUB 2 no longer includes an explicit stage 2; advanced functionality will
                                   be provided by modules and loaded on demand by stage 1.5. The modules
                                   can be found in /boot/grub , and the configuration file in /boot/grub/grub.cfg .

                              B On UEFI-based systems, the boot loader sits on the ESP in a file called
                                EFI/ ⟨operating system⟩/grubx64.efi , where ⟨operating system⟩ is something
                                like debian or fedora . Have a look at the /boot/efi/EFI directory on your
                                 UEFI-based Linux system.

                              B Again, the “x64 ” in “grubx64.efi ” stands for “64-bit PC”.
         configuration file      The configuration file for GRUB 2 looks markedly different from that for GRUB
                              Legacy, and is also rather more complicated (it resembles a bash script more than
                              a GRUB Legacy configuration file). The GRUB 2 authors assume that system man-
                              agers will not create and maintain this file manually. Instead there is a command
             grub-mkconfig    called grub-mkconfig which can generate a grub.cfg file. To do so, it makes use of
                              a set of auxiliary tools (shell scripts) in /etc/grub.d , which, e. g., search /boot for
                              Linux kernels to add to the GRUB boot menu. (grub-mkconfig writes the new con-
               update-grub    figuration file to its standard output; the update-grub command calls grub-mkconfig
                              and redirects its output to /boot/grub/grub.cfg .)
                                 You should therefore not modify /boot/grub/grub.cfg directly, since your distri-
                              bution is likely to invoke update-grub after, e. g., installing a kernel update, which
                              would overwrite your changes to grub.cfg .
                                 Usually you can, for instance, add more items to the GRUB 2 boot menu by
                              editing the /etc/grub.d/40_custom file. grub-mkconfig will copy the content of this file
                              verbatim into the grub.cfg file. As an alternative, you could add your configuration
                              settings to the /boot/grub/custom.cfg file, which will be read by grub.cfg if it exists.
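   For example, an additional chain-loading entry for a foreign system might be
added by making /etc/grub.d/40_custom look like this (the menu title and the par-
tition named in set root are, of course, hypothetical):

```
#!/bin/sh
exec tail -n +3 $0
# Everything below this line is copied into grub.cfg verbatim.
menuentry 'Some other system' {
    insmod part_msdos
    set root='(hd0,msdos3)'
    chainloader +1
}
```

The first two lines are the file's standard preamble; grub-mkconfig runs the script,
and “tail -n +3 ” simply emits everything from the third line onwards.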
                                 For completeness’ sake, here is an excerpt from a typical grub.cfg file. By anal-
                              ogy to the example in Section 16.2.2, a menu entry to start Linux might look like
                              this for GRUB 2:
                              menuentry 'Linux' --class gnu-linux --class os {
                                insmod gzio
                                insmod part_msdos
                                insmod ext2

    set root='(hd0,msdos2)'
    linux /boot/vmlinuz root=/dev/hda2
    initrd /boot/initrd.img

(grub-mkconfig usually produces more complicated stuff.) Do note that the GRUB
modules for decompression (gzio ), for MS-DOS-like partitioning support (part_msdos )
and the ext2 file system must be loaded explicitly. With GRUB 2, partition num-
bering starts at 1 (it used to be 0 for GRUB Legacy), so (hd0,msdos2) refers to the
second MS-DOS partition on the first hard disk. Instead of kernel , linux is used to
start a Linux kernel.

16.2.5     Security Advice
The GRUB shell offers many features, in particular access to the file system with-
out the root password! Even entering boot parameters may prove dangerous, since
it is easy to boot Linux directly into a root shell. GRUB makes it possible to close
these loopholes by requiring a password.
    For GRUB Legacy, the password is set in the menu.lst file. Here, the entry
“password --md5 ⟨encrypted password⟩” must be added to the global section. You
can obtain the encrypted password via the grub-md5-crypt command (or “md5crypt ”
within the GRUB shell) and then use, e. g., the GUI to “copy and paste” it to the file.
Afterwards, the password will need to be input whenever something is changed
interactively in the GRUB menu.

B You can also prevent particular systems from being booted by adding the
  lock option to the appropriate specific section within menu.lst . GRUB will
  query for the password when that system is to be booted. All other systems
  can still be started without a password.
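 B Put together, the relevant menu.lst lines might look like this (the MD5 hash
   shown is only a placeholder, not a real one):

```
# Global section: password required for interactive changes
password --md5 $1$xxxxxxxx$yyyyyyyyyyyyyyyyyyyyyy

# This entry can only be booted after entering the password:
title someothersystem
   lock
   root (hd0,2)
   chainloader +1
```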

C 16.2 [2] Which file contains your boot loader’s configuration? Create a new
  entry that will launch another operating system. Make a backup copy of the
  file first.

C 16.3 [!3] Prevent a normal user from circumventing init and booting directly
  into a shell. How do you generate a password request when a particular
  operating system is to be booted?

16.3       Kernel Parameters
Linux can accept a command line from the boot loader and evaluate it during the
kernel start procedure. The parameters on this command line can configure de-
vice drivers and change various kernel options. This mechanism for Linux kernel
runtime configuration is particularly helpful with the generic kernels on Linux
distribution boot disks, when a system with problematic hardware needs to be
booted. To do this, LILO supports the append=… option, while GRUB lets you ap-
pend parameters to the kernel specification.
   Alternatively, you can enter parameters interactively as the system is being
booted. You may have to grab GRUB’s attention quickly enough (e. g., by press-
ing a cursor or shift key while the boot menu or splash screen is displayed). Af-
terwards you can navigate to the desired menu entry and type e . GRUB then
presents you with the desired entry, which you can edit to your heart’s content
before continuing the boot operation.
   There are various types of parameters. The first group overwrites hardcoded
                          defaults, such as root or rw . Another group of parameters serves to configure de-
                          vice drivers. If one of these parameters occurs on the command line, the initial-
                          isation function for the device driver in question is called with the arguments
                          specified there rather than the built-in default values.

                         B Nowadays most Linux distributions use modular kernels that have only
                           very few device drivers built in. Modular device drivers cannot be con-
                           figured from the kernel command line.

                         B During booting, if there are problems with a device driver that is built into
                           the kernel, you can usually disable this driver by specifying the number 0
                           as the parameter for the corresponding boot parameter.

      general settings       Finally, there are parameters governing general settings. These include, e. g.,
                          init or reserve . We shall be discussing some typical parameters from the multitude
                         of possible settings. Further parameters can be found within the kernel sources’
                         documentation area. Specific details for particular hardware must be researched
                         in the manual or on the Internet.
                         ro   This causes the kernel to mount the root partition read-only
                         rw   This causes the kernel to mount the root partition with writing enabled, even if
                                the kernel executable or the boot loader configuration file specify otherwise
                         init= ⟨program⟩     Runs ⟨program⟩ (e. g., /bin/bash ) instead of the customary /sbin/init
                         ⟨runlevel⟩ Boots into runlevel ⟨runlevel⟩, where ⟨runlevel⟩ is generally a number
                               between 1 and 5. Otherwise the initial runlevel is taken from /etc/inittab .
                               (Irrelevant for computers running systemd.)
                         single   Boots to single-user mode.
                         maxcpus= ⟨number⟩   On a multi-processor (or, nowadays, multi-core) system, use
                                  only as many CPUs as specified. This is useful for troubleshooting or per-
                                  formance measurements.
                         mem= ⟨size⟩  Specifies the amount of memory to be used. On the one hand, this is
                                  useful if the kernel cannot recognise the correct size by itself (fairly unlikely
                                  these days) or you want to check how the system behaves with little mem-
                                   ory. The ⟨size⟩ is a number, optionally followed by a unit (“G ” for
                                   gibibytes, “M ” for mebibytes, or “K ” for kibibytes).

                                 A A typical mistake is something like mem=512 . Linux is thrifty about sys-
                                   tem resources, but even it can’t quite squeeze itself into 512 bytes (!) of
                                   memory.
                         panic= ⟨seconds⟩  Causes an automatic reboot after ⟨seconds⟩ in case of a catastrophic
                                  system crash (called a “kernel panic” in the patois, Edsger Dijkstra’s dictum,
                                  “The use of anthropomorphic terminology when dealing with computing
                                  systems is a symptom of professional immaturity”, notwithstanding).
                         hd 𝑥=noprobe  Causes the kernel to ignore the disk-like device /dev/hd 𝑥 (IDE disk, CD-
                                  ROM, …) completely. It is not sufficient to disable the device in the BIOS, as
                                  Linux will find and access it even so.
                         noapic    and similar parameters like nousb , apm=off , and acpi=off tell Linux not to use
                                  certain kernel features. These options can help getting Linux to run at all
                                  on unusual computers, in order to analyse problems in these areas more
                                  thoroughly and sort them out.
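                          Several of these parameters can be combined on a single kernel command line.
                          In a GRUB Legacy entry (reusing the disk layout from the earlier menu.lst example)
                          this might look as follows:

```
title linux-debug
   kernel (hd0,1)/boot/vmlinuz root=/dev/sda2 ro single mem=512M panic=30 acpi=off
   initrd (hd0,1)/boot/initrd
```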
                         A complete list of all parameters available on the kernel command line is given in
                         the file Documentation/kernel- parameters.txt , which is part of the Linux source code.
                         (However, before you install kernel sources just to get at this file, you should prob-
                         ably look for it on the Internet.)

B Incidentally, if the kernel notices command-line options that do not corre-
  spond to kernel parameters, it passes them to the init process as environ-
  ment variables.

16.4      System Startup Problems
16.4.1    Troubleshooting
Usually things are simple: You switch on the computer, stroll over to the coffee
machine (or not—see Section 17.1), and when you come back you are greeted by
the graphical login screen. But what to do if things don’t work out that way?
   The diagnosis of system startup problems sometimes isn’t all that easy—all
sorts of messages zoom by on the screen or (with some distributions) are not dis-
played at all, but hidden behind a nice little picture. The system logging service
(syslogd ) is also started only after a while. Fortunately, though, Linux does not
leave you out in the cold if you want to look at the kernel boot messages at leisure.
   For logging purposes, the system startup process can be divided into two
phases. The “early” phase begins with the first signs of life of the kernel and
continues up to the instant where the system logging service is activated. The
“late” phase begins just then and finishes, in principle, when the computer is shut down.
   The kernel writes early-phase messages into an internal buffer that can be dis-
played using the dmesg command. Various distributions arrange for these messages
to be passed on to the system logging service as soon as possible so they will show
up in the “official” log.
   The system logging service, which we are not going to discuss in detail here,
runs during the “late” phase. It will be covered in the Linup Front training man-
ual, Linux Administration II (and the LPI-102 exam). For now it will be sufficient
to know that most distributions place most messages sent to the system logging
service into the /var/log/messages file. This is also where messages from the boot
process end up if they have been sent after the logging service was started.

      On Debian GNU/Linux, /var/log/messages contains only part of the system
      messages, namely anything that isn’t a grave error message. If you would
      like to see everything you must look at /var/log/syslog —this contains all mes-
      sages except (for privacy reasons) those dealing with authentication. The
      “early phase” kernel messages, too, incidentally.

B Theoretically, messages sent after init was started but before the system log-
  ging service was launched might get lost. This is why the system logging
  service is usually among the first services started after init .

16.4.2    Typical Problems
Here are some of the typical snags you might encounter on booting:
The computer does not budge at all If your computer does nothing at all, it
     probably suffers from a hardware problem. (If you’re diagnosing such a
     case by telephone, then do ask the obvious questions such as “Is the power
     cable plugged into the wall socket?”—perhaps the cleaning squad was des-
     perate to find a place to plug in their vacuum cleaner—, and “Is the power
     switch at the back of the case switched to On?”. Sometimes the simple
     things will do.) The same is likely when it just beeps or flashes its LEDs
     rhythmically but does not appear to actually start booting.

     B The beeps or flashes can allow the initiated to come up with a rough di-
       agnosis of the problem. Details of hardware troubleshooting, though,
       are beyond the scope of this manual.

      Things go wrong before the boot loader starts The firmware performs various
           self-tests and outputs error messages to the screen if things are wrong (such
           as malfunctioning RAM chips). We shall not discuss how to fix these prob-
           lems. If everything works fine, your computer ought to identify the boot
           disk and launch the boot loader.

      The boot loader does not finish This could be because the firmware cannot
           find it (e. g., because the drive it resides on does not show up in the
           firmware boot order) or it is damaged. In the former case you should ensure
           that your firmware does the Right Thing (not our topic). In the latter case
           you should receive at least a rudimentary error message, which together
           with the boot loader’s documentation should allow you to come up with a
           diagnosis.

           B GRUB as a civilised piece of software produces clear-text error mes-
             sages which are explained in more detail in the GRUB info documen-
             tation.

            The cure for most of the fundamental (as opposed to configuration-related)
            boot loader problems, if they cannot obviously be reduced to disk or BIOS
            errors, consists of booting the system from CD-ROM—the distribution’s
            “rescue system” or a “live distribution” such as Knoppix suggest
            themselves—and re-installing the boot loader.

           B The same applies to problems like a ruined partition table in the MBR.
             Should you ever accidentally overwrite your MBR, you can restore a
             backup (you do have one, don’t you?) using dd or re-instate the par-
             titioning using sfdisk (you do have a printout of your partition table
             stashed away somewhere, don’t you?) and rewrite the boot loader.

           B In case of the ultimate worst-case partition table scenario, there are
             programs which will search the whole disk looking for places that look
             like file system superblocks, and (help) recover the partition scheme
             that way. We’re keeping our fingers crossed on your behalf that you
             will never need to run such a program.

      The kernel doesn’t start Once the boot loader has done its thing the kernel
           should at least start (which generally leads to some activity on the screen).
           Distribution kernels are generic enough to run on most PCs, but there may
           still be problems, e. g., if you have a computer with extremely modern hard-
           ware which the kernel doesn’t yet support (which is fatal if, for example, a
           driver for the disk controller is missing) or you have messed with the initial
           RAM disk (Shame, if you didn’t know what you were doing!). It may be
           possible to reconfigure the BIOS (e. g., by switching a SATA disk controller
           into a “traditional” IDE-compatible mode) or to deactivate certain parts of
           the kernel (see Section 16.3) in order to get the computer to boot. It makes
           sense to have another computer around so you can search the Internet for
           help and explanations.

           B If you are fooling around with the kernel or want to install a new ver-
             sion of your distribution kernel, do take care to have a known-working
             kernel around. If you always have a working kernel in your boot
             loader menu, you can save yourself from the tedious job of slinging
             CDs about.

      Other problems Once the kernel has finished its initialisations, it hands control
           off to the “init” process. You will find out more about this in Chapter 17.
           However, you should be out of the woods by then.

16.4.3    Rescue systems and Live Distributions
As a system administrator, you should always keep a “rescue system” for your
distribution handy, since usually you need it exactly when you are least in a posi-
tion to obtain it quickly. (This applies in particular if your Linux machine is your
only computer.) A rescue system is a pared-down version of your distribution
which you can launch from a CD or DVD (formerly a floppy disk or disks) and
which runs in a RAM disk.

B Should your Linux distribution not come with a separate rescue system on
  floppy disk or CD, then get a “live distribution” such as Knoppix. Live dis-
  tributions are started from CD (or DVD) and work on your computer with-
  out needing to be installed first. You can find Knoppix as an ISO image on or, every so often, as a freebie with computer magazines.

   The advantage of rescue systems and live distributions is that they work
without involving your hard disk. Thus you can do things like fsck
your root file system, which are forbidden while your system is running from
hard disk. Here are a few problems and their solutions:
Hosed the kernel? Boot the rescue system and re-install the corresponding package.
     In the simplest case, you can enter your installed system’s root file system
     from the rescue system like so:

       # mount -o rw /dev/sda1 /mnt                          Device name may differ
       # chroot /mnt
       # _                               We are now seeing the installed distribution

      After this you can activate the network interface or copy a kernel package
      from a USB key or CD-ROM and install it using the package management
      tool of your distribution.

Forgot the root password? Boot the rescue system and change to the installed dis-
     tribution as above. Then do
       # passwd

      (You could of course fix this problem without a rescue system by restarting
      your system with “init=/bin/bash rw ” as a kernel parameter.)

B Live distributions such as Knoppix are also useful to check in the computer
  store whether Linux supports the hardware of the computer you have been
  drooling over for a while already. If Knoppix recognises a piece of hardware,
  you can as a rule get it to run with other Linux distributions too. If Knoppix
  does not recognise a piece of hardware, this may not be a grave problem
  (there might be a driver for it somewhere on the Internet that Knoppix does
  not know about) but you will at least be warned.

B If there is a matching live version of your distribution—with Ubuntu, for
  example, the live CD and the installation CD are identical—, things are es-
  pecially convenient, since the live distribution will typically recognise the
  same hardware that the installable distribution does.

      Commands in this Chapter
      dmesg    Outputs the content of the kernel message buffer            dmesg (8) 257
      grub-md5-crypt Determines MD5-encrypted passwords for       GRUB Legacy
                                                                  grub-md5-crypt (8) 255

       Summary
          • A boot loader is a program that can load and start an operating system.
          • A boot manager is a boot loader that lets the user pick one of several operating
            systems or operating system installations.
          • GRUB is a powerful boot manager with special properties—such as the pos-
            sibility of accessing arbitrary files and a built-in command shell.
          • The GRUB shell helps to install GRUB as well as to configure individual boot
            entries.

17 System-V Init and the Init Process

17.1   The Init Process . . . .       .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   262
17.2   System-V Init . . . . .        .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   262
17.3   Upstart . . . . . . .          .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   268
17.4   Shutting Down the System       .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   270

Goals
   •   Understanding the System-V Init infrastructure
   •   Knowing /etc/inittab structure and syntax
   •   Understanding runlevels and init scripts
   •   Being able to shut down or restart the system in an orderly fashion

Prerequisites
   • Basic Linux system administration knowledge
   • Knowledge of system start procedures (Chapter 16)


                     17.1      The Init Process
                     After the firmware, the boot loader, the operating system kernel and (possibly)
                     the early userspace have done their thing, the “init process” takes over the reins.
                     Its job is to finish system initialisation and supervise the ongoing operation of the
                      system. For this, Linux locates and starts a program called /sbin/init .

                     B The init process always has process ID 1. If there is an early userspace, it in-
                       herits this from the process that was created to run /init , and subsequently
                       goes on to replace its program text by that of the init process.

                     B Incidentally, the init process enjoys a special privilege: it is the only pro-
                       cess that cannot be aborted using “kill -9 ”. (It can decide to shuffle off this
                       mortal coil of its own accord, though.)

                     B If the init process really quits, the kernel keeps running. There are purists
                       who start a program as the init process that will set up packet filtering rules
                       and then exit. Such a computer makes a practically impregnable firewall,
                       but is somewhat inconvenient to reconfigure without a rescue system …

                     B You can tell the Linux kernel to execute a different program as the init pro-
                       cess by specifying an option like “init=/sbin/myinit ” on boot. There are no
                       special properties that this program must have, but you should remember
                       that, if it ever finishes, you do not get another one without a reboot.
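In practice, such an option is appended to the kernel line of the boot loader configuration. The following GRUB 2 menu entry is only a sketch; the kernel image, initrd, and root device names are placeholders and will differ on your system:

```
menuentry 'Linux (shell as init)' {
    # Hypothetical entry: boots straight into a root shell instead of /sbin/init.
    linux /boot/vmlinuz root=/dev/sda1 rw init=/bin/bash
    initrd /boot/initrd.img
}
```

Remember that the shell started this way is process 1; if you exit it, you will have to reboot.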

                     17.2      System-V Init
                     Basics   The traditional infrastructure that most Linux distributions used to use
                      is called “System-V init” (pronounced “sys-five init”). The “V” is a roman nu-
                     meral 5, and it takes its name from the fact that it mostly follows the example of
                     Unix System V, where something very similar showed up for the first time. That
                     was during the 1980s.

                     B For some time there was the suspicion that an infrastructure designed ap-
                       proximately 30 years ago was no longer up to today’s demands on a Linux
                       computer’s init system. (Just as a reminder: When System-V init was new,
                       the typical Unix system was a VAX with 30 serial terminals.) Modern com-
                       puters must, for example, be able to deal with frequent changes of hardware
                       (cue USB), and that is something that System-V init finds relatively difficult
                       to handle. Hence there were several suggestions for alternatives to System-V
                       init. One of these—systemd by Lennart Poettering and Kay Sievers—seems
                       to have won out and is the current or upcoming standard of practically all
                       Linux distributions of importance (we discuss it in more detail in Chap-
                       ter 18). Another is Upstart by Scott James Remnant (see Section 17.3).

                         Among the characteristic features of System-V init are runlevels, which describe
                     the system’s state and the services that it offers. Furthermore, the init process en-
                     sures that users can log in on virtual consoles, directly-connected serial terminals,
                     etc., and manages system access via modems if applicable. All of this is configured
                      by means of the /etc/inittab file.
                         The syntax of /etc/inittab (Figure 17.1), like that of many other Linux configu-
                     ration files, is somewhat idiosyncratic (even if it is really AT&T’s fault). All lines
                     that are not either empty or comments— starting with “# ” as usual—consist of
                     four fields separated by colons:
                     Label The first field’s purpose is to identify the line uniquely. You may pick an
                          arbitrary combination of up to four characters. (Do yourself a favour and
                          stick with letters and digits.) The label is not used for anything else.

# Standard runlevel
id:5:initdefault:

# First script to be executed
si::bootwait:/etc/init.d/boot
# runlevels
l0:0:wait:/etc/init.d/rc 0
l1:1:wait:/etc/init.d/rc 1
l2:2:wait:/etc/init.d/rc 2
l3:3:wait:/etc/init.d/rc 3
#l4:4:wait:/etc/init.d/rc 4
l5:5:wait:/etc/init.d/rc 5
l6:6:wait:/etc/init.d/rc 6

ls:S:wait:/etc/init.d/rc S

# Ctrl-Alt-Del
ca::ctrlaltdel:/sbin/shutdown -r -t 4 now

# Terminals
1:2345:respawn:/sbin/mingetty   --noclear tty1
2:2345:respawn:/sbin/mingetty   tty2
3:2345:respawn:/sbin/mingetty   tty3
4:2345:respawn:/sbin/mingetty   tty4
5:2345:respawn:/sbin/mingetty   tty5
6:2345:respawn:/sbin/mingetty   tty6

# Serial terminal
# S0:12345:respawn:/sbin/agetty -L 9600 ttyS0 vt102

# Modem
# mo:235:respawn:/usr/sbin/mgetty -s 38400 modem

                 Figure 17.1: A typical /etc/inittab file (excerpt)

           B This is not 100% true for lines describing terminals, where according
             to convention the label corresponds to the name of the device file in
             question, but without the “tty ” at the beginning, hence 1 for tty1 or S0
             for ttyS0 . Nobody knows exactly why.

      Runlevels The runlevels this line applies to. We haven’t yet explained in detail
           how runlevels work, so excuse us for the moment for limiting ourselves to
           telling you that they are usually named with digits and the line in question
           will be considered in all runlevels whose digit appears in this field.

           B In addition to the runlevels with digits as names there is one called “S ”.
             More details follow below.

      Action The third field specifies how to handle the line. The most important pos-
           sibilities include
            respawn    The process described by this line will immediately be started again
                    once it has finished. Typically this is used for terminals which, after the
                    current user is done with their session, should be presented brand-new
                    to the next user.
            wait    The process described by this line is executed once when the system
                    changes to the runlevel in question, and init waits for it to finish.
            bootwait    The process described by this line will be executed once during
                    system startup. init waits for it to finish. The runlevel field on this line
                    will be ignored.
            initdefault   The runlevel field of this line specifies which runlevel the system
                     should try to reach after booting.
                   B With LSB-compliant distributions, this field usually says “5 ” if the
                     system should accept logins on the graphical screen, otherwise
                     “3 ”. See below for details.
                   B If this entry (or the whole file /etc/inittab ) is missing, you will need
                     to state a run level on the console.
            ctrlaltdel    Specifies what the system should do if the init process is being
                    sent a SIGINT —which usually happens if anyone presses the Ctrl + Alt
                    + Del combination. Normally this turns out to be some kind of shutdown
                    (see Section 17.4).

           B There are a few other actions. powerwait , powerfail , powerokwait , and
             powerfailnow , for example, are used to interface System-V init with
             UPSs. The details are in the documentation (init (8) and inittab (5)).

      Command The fourth field describes the command to be executed. It extends to
          the end of the line and you can put whatever you like.
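The four-field format can be illustrated with a little shell parsing. The entry used here is a made-up mingetty line, not taken from any particular distribution’s file:

```shell
# Split a sample inittab entry into its four colon-separated fields.
# The last variable receives the whole remainder, so colons inside the
# command would not break the parse of the first three fields.
line='1:2345:respawn:/sbin/mingetty tty1'
IFS=: read -r label runlevels action cmd <<EOF
$line
EOF
echo "label=$label"          # prints label=1
echo "runlevels=$runlevels"  # prints runlevels=2345
echo "action=$action"        # prints action=respawn
echo "cmd=$cmd"              # prints cmd=/sbin/mingetty tty1
```

This mirrors how init itself interprets the line: the label identifies it, the runlevels select when it applies, the action says how, and the rest is the command.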
         If you have made changes to /etc/inittab , these do not immediately take effect.
      You must execute the “telinit q ” command first in order to get init to reread the
      configuration file.

      The Boot Script With System-V init, the init process starts a shell script, the boot
      script, typically /etc/init.d/boot (Novell/SUSE), /etc/rc.d/init.d/boot (Red Hat), or
      /etc/init.d/rcS (Debian). (The exact name occurs in /etc/inittab ; look for an entry
      whose action is bootwait .)
          The boot script performs tasks such as checking and possibly correcting the
      file systems listed in /etc/fstab , initialising the system name and Linux clock, and
      other important prerequisites for stable system operation. Next, kernel modules
      will be loaded if required, file systems mounted and so on. The specific actions
      and their exact order depend on the Linux distribution in use.

B Today, boot usually confines itself to executing the files in a directory such as
  /etc/init.d/boot.d (SUSE) in turn. The files are executed in the order of their
  names. You can put additional files in this directory in order to execute
  custom code during system initialisation.

C 17.1 [2] Can you find out where your distribution keeps the scripts that the
  boot script executes?

C 17.2 [!2] Name a few typical tasks performed by the boot script. In which
  order should these be executed?

Runlevels After executing the boot script, the init process attempts to place the
system in one of the various runlevels. Exactly which one is given by /etc/inittab
or determined at the system’s boot prompt and passed through to init by the kernel.
   The various runlevels and their meaning have by now been standardised across
most distributions roughly as follows:
1 Single-user mode with no networking

2 Multi-user mode with no network servers
3 Multi-user mode with network servers
4 Unused; may be configured individually if required

5 As runlevel 3, but with GUI login
6 Reboot
0 System halt

B The system runs through the S (or s ) runlevel during startup, before it
  changes over to one out of runlevels 2 to 5. If you put the system into
  runlevel 1 you will finally end up in runlevel S .

When the system is started, the preferred runlevels are 3 or 5—runlevel 5 is typical
for workstations running a GUI, while runlevel 3 makes sense for server systems
that may not even contain a video interface. In runlevel 3 you can always start a
GUI afterwards or redirect graphics output to another computer by logging into
your server from that machine over the network.

      These predefined runlevels derive from the LSB standard. Not all distribu-
      tions actually enforce them; Debian GNU/Linux, for example, mostly leaves
      runlevel assignment to the local administrator.

B You may use runlevels 7 to 9 but you will have to configure them yourself.
   During system operation, the runlevel can be changed using the telinit com-
mand. This command can only be executed as root : “telinit 5 ”, for example,
changes immediately to runlevel 5. All currently running services that are no longer
required in the new runlevel will be stopped, while non-running services that are
required in the new runlevel will be started.

B You may use init in place of telinit (the latter is just a symbolic link to the
  former, anyway). The program checks its PID when it starts, and if it is not 1,
  it behaves like telinit , else init .

   The runlevel command displays the previous and current runlevel:

                           # runlevel
                           N 5

                          Here the system is currently in runlevel 5, which, as the value “N ” for the “previous
                          runlevel” suggests, was entered right after the system start. Output such as “5 3 ”
                          would mean that the last runlevel change consisted of bringing the system from
                          runlevel 5 to runlevel 3.
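The two-token output of runlevel can be picked apart with ordinary shell parameter expansion. Here we use a canned string instead of the real command’s output, so the example runs anywhere:

```shell
# Parse "previous current" as printed by the runlevel command.
# The string is canned; on a live system you would use out=$(runlevel).
out='N 5'
prev=${out% *}   # everything before the last space
curr=${out#* }   # everything after the first space
echo "previous=$prev current=$curr"   # prints previous=N current=5
```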

                           B We have concealed some more runlevels from you, namely the “on-demand
                             runlevels” A , B , and C . You may make entries in /etc/inittab which are meant
                              for any of these three runlevels and use the ondemand action, such as

                                   sp:AB:ondemand:/usr/local/sbin/special-service      (hypothetical)

                                  If you say something like

                                  # telinit A

                                 these entries are executed, but the actual runlevel does not change: If you
                                 were in runlevel 3 before the telinit command, you will still be there when
                                 it finishes. a , b , and c are synonyms for A , B , and C.

                           C 17.3 [!2] Display the current runlevel. What exactly is being output? Change
                             to runlevel 2. Check the current runlevel again.

                           C 17.4 [2] Try the on-demand runlevels: Add a line to /etc/inittab which ap-
                             plies to, e. g., runlevel A . Make init reread the inittab file. Then enter the
                             »telinit A « command.

                             Init Scripts The services available in the various runlevels are started and
                             stopped using the scripts in the /etc/init.d (Debian, Ubuntu, SUSE) or /etc/
                             rc.d/init.d (Red Hat) directories. These scripts are executed when changing from
                             one runlevel to another, but may also be invoked manually. You may also add
                              your own scripts. All these scripts are collectively called init scripts.
                                  The init scripts usually support parameters such as start ,
                              stop , status , restart , or reload , which you can use to start, stop, …, the correspond-
                              ing services. The “/etc/init.d/network restart ” command might conceivably deac-
                              tivate a system’s network cards and restart them with an updated configuration.
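A minimal sketch of such a parameter dispatcher, using a made-up service name; real init scripts additionally handle PID files, exit codes, and status reporting:

```shell
# Skeleton of an init-script style dispatcher for a hypothetical service.
# Modelled on the usual start/stop/restart convention, not on any
# particular distribution's script.
myservice() {
    case "$1" in
        start)   echo "Starting myservice" ;;
        stop)    echo "Stopping myservice" ;;
        restart) myservice stop
                 myservice start ;;
        *)       echo "Usage: myservice {start|stop|restart}" ;;
    esac
}

myservice restart   # prints "Stopping myservice" then "Starting myservice"
```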
                                 Of course you do not need to start all services manually when the system is
                              started or you want to switch runlevels. For each runlevel 𝑟 there is a rc 𝑟.d di-
                              rectory in /etc (Debian and Ubuntu), /etc/rc.d (Red Hat), or /etc/init.d (SUSE).
                             The services for each runlevel and the runlevel transitions are defined in terms of
                             these directories, which contain symbolic links to the scripts in the init.d direc-
                             tory. These links are used by a script, typically called /etc/init.d/rc , to start and
                             stop services when a runlevel is entered or exited.
                                 This is done according to the names of the links, in order to determine the start-
                             ing and stopping order of the services. There are dependencies between various
                             services—there would not be much point in starting network services such as the
                              Samba or web servers before the system’s basic networking support has been ac-
                              tivated. The services for a runlevel are activated by calling all symbolic links in
                             its directory that start with the letter “S ”, in lexicographical order with the start
                             parameter. Since the link names contain a two-digit number after the “S ”, you
                             can predetermine the order of invocation by carefully choosing these numbers.
                             Accordingly, to deactivate the services within a runlevel, all the symbolic links
                              starting with the letter “K ” are called in lexicographical order with the stop pa-
                              rameter.

   If a running service is also supposed to run in the new run level, an extraneous
restart can be avoided. Therefore, before invoking a K link, the rc script checks
whether there is an S link for the same service in the new runlevel’s directory. If
so, the stopping and immediate restart are skipped.
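The lexicographical ordering can be demonstrated without touching the real runlevel directories. This sketch builds a throwaway directory with invented link names and walks the S entries the way an rc script would:

```shell
# Demonstrate start ordering in a fake runlevel directory.
# The service names are invented; real directories contain symbolic
# links into init.d, but plain files sort identically.
dir=$(mktemp -d)
touch "$dir/S10network" "$dir/S20syslog" "$dir/S30apache" "$dir/K80apache"

order=""
for link in "$dir"/S*; do          # the glob expands in sorted order
    name=$(basename "$link")
    order="$order$name "
    echo "would run: $name start"
done
rm -rf "$dir"
```

The K80apache entry is skipped because only names starting with “S” match; the two-digit numbers after the letter determine the sequence.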

      Debian GNU/Linux takes a different approach: Whenever a new runlevel
      𝑟 is entered, all symbolic links in the new directory (/etc/rc 𝑟.d ) are executed.
      Links beginning with a “K ” are passed stop and links beginning with a “S ”
      are passed start as the parameter.

   To configure services in a runlevel or to create a new runlevel, you can in princi-
ple manipulate the symbolic links directly. However, most distributions deprecate
this approach and offer tools of their own:

      The Red Hat distributions use a program called chkconfig to configure run-
      levels. “chkconfig --level 35 quota on ”, for example, enables the quota service
      not in runlevel 35, but in runlevels 3 and 5. “chkconfig --list ” gives a
      convenient overview of the configured runlevels.

      The SUSE distributions use a program called insserv to order the services
      in each runlevel. It uses information contained in the init scripts to calcu-
      late a sequence for starting and stopping the services in each runlevel that
      takes the dependencies into account. In addition, YaST2 offers a graphical
      “runlevel editor”, and there is a chkconfig program which however is just a
      front-end for insserv .

      Nor do you have to create links by hand on Debian GNU/Linux—you
      may use the update-rc.d program. However, manual intervention is still
      allowed—update-rc.d ’s purpose is really to allow Debian packages to inte-
      grate their init scripts into the boot sequence. With the

       # update-rc.d mypackage defaults

      command, the /etc/init.d/mypackage script will be started in runlevels 2, 3, 4,
      and 5 and stopped in runlevels 0, 1 and 6. You can change this behaviour by
      means of options. If you do not specify otherwise, update-rc.d uses the se-
      quence number 20 to calculate the position of the service—contrary to SUSE
      and Red Hat, this is not automated. The insserv command is available on
      Debian GNU/Linux as an optional package; if it is installed, it can man-
      age at least those init scripts that do contain the necessary metadata like it
      would on the SUSE distributions. However, this is not enabled by default.

C 17.5 [!2] What do you have to do to make the syslog service reread its con-
  figuration?

C 17.6 [1] How can you conveniently review the current runlevel configura-
  tion?
C 17.7 [!2] Remove the cron service from runlevel 2.

Single-User Mode In single-user mode (runlevel S ), only the system administra-
tor may work on the system console. There is no way of changing to other virtual
consoles. The single-user mode is normally used for administrative work, espe-
cially if the file system needs to be repaired or the quota system set up.

       B You can mount the root file system read-only on booting, by passing the ro
         option on the kernel command line. If you boot the system to single-user
        mode, you can also disable writing to the root file system “on the fly”, using
        the remount and ro mount options: “mount -o remount,ro / ” remounts the root
        partition read-only; “mount -o remount,rw / ” undoes it again.

      B To remount a file system “read-only” while the system is running, no pro-
        cess may have opened a file on the file system for writing. This means that
        all such programs must be terminated using kill . These are likely to be
        daemons such as syslogd or cron .

        It depends on your distribution whether or not you get to leave single-user
      mode, and how.

            To leave single-user mode, Debian GNU/Linux recommends a reboot
            rather than something like »telinit 2 «. This is because entering single-
             user mode kills all processes that are not required in single-user mode.
            This removes some essential background processes that were started when
            the system passed through runlevel S during boot, which is why it is unwise
            to change from runlevel S to a multi-user runlevel.

      C 17.8 [!1] Put the system into single-user mode (Hint: telinit ). What do you
        need to do to actually enter single-user mode?

       C 17.9 [1] Convince yourself that you really are the single user on the sys-
         tem while single-user mode is active, and that no background processes are
         running.
      17.3     Upstart
      While System-V init traditionally stipulates a “synchronous” approach—the init
      system changes its state only through explicit user action, and the steps taken
      during a state change, like init scripts, are performed in sequence—, Upstart uses
      an “event-based” philosophy. This means that the system is supposed to react to
       external events (like plugging in a USB device). This happens “asynchronously”.
      Starting and stopping services creates new events, so that—and that is one of the
      most important differences between System-V init and Upstart—a service can be
      restarted automatically if it crashes unexpectedly. (System-V init, on the other
      hand, wouldn’t be bothered at all.)
         Upstart has been deliberately designed to be compatible with System-V init, at
      least to a point where init scripts for services can be reused without changes.

            Upstart was developed by Scott James Remnant, at the time an employee of
            Canonical (the company behind Ubuntu) and accordingly debuted in that
             distribution. Since Ubuntu 6.10 (“Edgy Eft”) it has been the standard init
             system on Ubuntu, although it used to be run in a System-V compatible
             mode at first; since Ubuntu 9.10 (“Karmic Koala”) it has been running in
             “native” mode.

            It turns out that Ubuntu is currently in the process of switching over to sys-
            temd (see Chapter 18).

            Since version 3.5 of the LPIC-1 certificate exams (as of 2 July 2012) you are
            expected to know that Upstart exists and what its major properties are. Con-
            figuration and operational details are not required.

# rsyslog - system logging daemon
# rsyslog is an enhanced multi-threaded replacement for the traditional
# syslog daemon, logging messages from applications

description      "system logging daemon"

start on filesystem
stop on runlevel [06]

respawn
expect fork

exec rsyslogd -c4

               Figure 17.2: Upstart configuration file for job rsyslog

B Upstart is also purported to accelerate the boot process by being able to
  initialise services in parallel. In actual practice this isn’t the case, as the
  limiting factor during booting is, for the most part, the speed with which
  blocks of data can be moved from disk to RAM. At the Linux Plumbers
  Conference 2008, Arjan van de Ven and Auke Kok demonstrated that it is
  possible to boot an Asus EeePC all the way to a usable desktop (i. e., not a
  Windows-like desktop with a churning hard disk in the background) within
  5 seconds. This work was based on System-V init rather than Upstart.

    Upstart configuration is based on the idea of “jobs” that take on the role of
init scripts (although init scripts, as we mentioned, are also supported). Upstart
distinguishes “tasks”—jobs that run for a limited time and then shut themselves
down—and “services”—jobs that run permanently “in the background”.

B Tasks can be long-running, too. The main criterion is that services—think
  of a mail, database, or web server—do not terminate of their own accord
  while tasks do.

Jobs are configured using files within the /etc/init directory. The names of these
files derive from the job name and the “.conf ” suffix. See Figure 17.2 for an
example.
    One of the main objectives of Upstart is to avoid the large amounts of template-
like code typical for most System-V init scripts. Accordingly, the Upstart configu-
ration file confines itself to stating how the service is to be started (“exec rsyslogd
-c4 ”). In addition, it specifies that the service is to be restarted in case it crashes
(“respawn ”) and how Upstart can find out which process to track (“expect fork ” says
that the rsyslog process puts itself into the background by creating a child process
and then exiting—Upstart must then watch out for that child process).—Compare
this to /etc/init.d/syslogd (or similar) on a typical Linux based on System-V init.
    While with “classic” System-V init the system administrator assigns a “global”
order in which the init scripts for a particular runlevel are to be executed, with
Upstart the jobs decide “locally” where they want to place themselves within a
network of dependencies. The “start on …” and “stop on …” lines stipulate events
that lead to the job being started or stopped. In our example, rsyslog is started as
soon as the file system is available, and stopped when the system transitions to
the “runlevels” 0 (halt) or 6 (reboot). System-V init’s runlevel directories with
symbolic links are no longer required.
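For comparison with the service in Figure 17.2, a task-style job can be sketched as follows. The job name, description, and command are hypothetical (no distribution ships this job); the task, start on, and exec stanzas are real Upstart keywords:

```
# /etc/init/cleanup-tmp.conf -- hypothetical Upstart job (illustration only)
description "remove stale temporary files at boot"

start on filesystem     # run once the file systems are available

task                    # a task: runs to completion and then stops

exec find /tmp -mindepth 1 -delete
```

Because it is declared a task, Upstart considers the job finished once the command exits, rather than restarting it.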

B Upstart supports runlevels mostly for compatibility with Unix tradition and
  to ease the migration of System-V init based systems to Upstart. They are

            not required in principle, but at the moment are still necessary to shut down
            the system (!).

      B Newer implementations of System-V init also try to provide dependencies
        between services in the sense that init script 𝑋 is always executed after init
        script 𝑌 and so on. (This amounts to a scheme for automatic assignment
        of the priority numbers within the runlevel directories.) This is done using
        metadata contained in standardised comments at the beginning of the init
        scripts. The facilities that this approach provides do fall short of those of
        Upstart, though.

         On system boot, Upstart creates the startup event as soon as its own initialisa-
      tion is complete. This makes it possible to execute other jobs. The complete boot
      sequence derives from the startup event and from events being created through
      the execution of further jobs and expected by others.

      B For example, on Ubuntu 10.04 the startup event invokes the mountall task
        which makes the file systems available. Once that is finished, the filesystem
        event is created (among others), which in turn triggers the start of the rsyslog
        service from Figure 17.2.

         With Upstart, the initctl command is used to interact with the init process:

      # initctl list                                        Which jobs are running now?
      alsa-mixer-save stop/waiting
      avahi-daemon start/running, process 578
      mountall-net stop/waiting
      rc stop/waiting
      rsyslog start/running, process 549
      # initctl stop rsyslog                                                   Stop a job
      rsyslog stop/waiting
      # initctl status rsyslog                                        What is its status?
      rsyslog stop/waiting
      # initctl start rsyslog                                                Restart a job
      rsyslog start/running, process 2418
      # initctl restart rsyslog                                            Stop and start
      rsyslog start/running, process 2432

      B The “initctl stop ”, “initctl start ”, “initctl status ”, and “initctl restart ”
        commands can be abbreviated to “stop ”, “start ”, ….

      17.4     Shutting Down the System
      A Linux computer should not simply be powered off, as that could lead to data
      loss—possibly there are data in RAM that ought to be written to disk but are still
      waiting for the proper moment in time. Besides, there might be users logged in on
      the machine via the network, and it would be bad form to surprise them with an
      unscheduled system halt or restart. The same applies to users taking advantage
      of services that the computer offers on the Net.

      B It is seldom necessary to shut down a Linux machine that should really run
        continuously. You can install or remove software with impunity and also re-
        configure the system fairly radically without having to restart the operating
        system. The only cases where this is really necessary include kernel changes
        (such as security updates) or adding new or replacing defective hardware
        inside the computer case.

B The first case (kernel changes) is being worked on. The kexec infrastructure
  makes it possible to load a second kernel into memory and jump into it
  directly (without the detour via a system reboot). Thus it is quite possible
  that in the future you will always be able to run the newest kernel without
  actually having to reboot your machine.

B With the correct kind of (expensive) hardware you can also mostly sort out
  the second case: Appropriate server systems allow you to swap CPUs, RAM
  modules, and disks in and out while the computer is running.

  There are numerous ways of shutting down or rebooting the system:

   • By valiantly pushing the system’s on/off switch. If you keep it pressed until
     the computer is audibly shutting down, the system will be switched off. You
     should only do this in cases of acute panic (fire in the machine hall or a
     sudden water influx).
   • Using the shutdown command. This is the cleanest method of shutting down
     or rebooting.
   • For System-V init: The “telinit 0 ” command can be used to switch to run-
     level 0. This is equivalent to a shutdown.
   • Using the halt command. This is really a direct order to the kernel to halt the
     system, but many distributions arrange for halt to call shutdown if the system
     is not in runlevels 0 or 6 already.

      B There is a reboot command for reboots, which like halt usually relies on
       shutdown . (In fact, halt and reboot are really the same program.)

The commands are all restricted to the system administrator.

B The key combination Ctrl + Alt + Del may also work if it is configured ap-
  propriately in /etc/inittab (see Section 17.1).

B Graphical display managers often offer an option to shut down or reboot
  the system. You may have to configure whether the root password must be
  entered or not.

B Finally, modern PCs may interpret a (short) press on the on/off switch as
  “Please shut down cleanly” rather than “Please crash right now”.

   Normally you will be using the second option, the shutdown command. It en-
sures that all logged-in users are made aware of the impending system halt, pre-
vents new logins, and, depending on its options, performs any requisite actions to
shut down the system:

# shutdown -h +10

for example will shut down the system in ten minutes’ time. With the -r option,
the system will be restarted. With no option, the system will go to single-user
mode after the delay has elapsed.

B You may also give the time of shutdown/reboot as an absolute time:
      # shutdown -h 12:00                                               High Noon

B For shutdown , the now keyword is a synonym of “+0 ”—immediate action. Do
  it only if you are sure that nobody else is using the system.

  Here is exactly what happens when the shutdown command is given:

                              1. All users receive a broadcast message saying that the system will be shut
                                down, when, and why.
                             2. The shutdown command automatically creates the /etc/nologin file, which is
                                checked by login (or, more precisely, the PAM infrastructure); its existence
                                prevents new user logins (except for root ).

                                B For consolation, users that the system turns away are shown the
                                  content of the /etc/nologin file.

                                The file is usually removed automatically when the system starts up again.
                             3. The system changes to runlevel 0 or 6. All services will be terminated by
                                means of their init scripts (more exactly, all services that do not occur in
                                runlevels 0 or 6, which is usually all of them).
                             4. All still-running processes are first sent SIGTERM . They may intercept this sig-
                                nal and clean up after themselves before terminating.
                             5. Shortly afterwards, all processes that still exist are forcibly terminated by
                                SIGKILL .

                             6. The file systems are unmounted and the swap spaces are deactivated.
                             7. Finally, all system activities are finished. Then either a warm start is initi-
                                ated or the computer shut off using APM or ACPI. If that doesn’t work, the
                                message “System halted ” is displayed on the console. At that point you can
                                hit the switch yourself.

                          B You may pass some text to shutdown after the shut-down delay time, which
                            is displayed to any logged-in users:

                                  # shutdown -h 12:00 '
                                  System halt for hardware upgrade.
                                  Sorry for the inconvenience!'

                          B If you have executed shutdown and then change your mind after all, you can
                            cancel a pending shutdown or reboot using

                                 # shutdown -c "No shutdown after all"

                                (of course you may compose your own explanatory message).

                             By the way: The mechanism that shutdown uses to notify users of an impending
                           system halt (or similar) is available for your use. The command is called wall (short
                          for “write to all”):

                          $ wall "Cake in the break room at 3pm!"

                          will produce a message of the form

                          Broadcast message from hugo@red (pts/1) (Sat Jul 18 00:35:03 2015):

                          Cake in the break room at 3pm!

                          on the terminals of all logged-in users.

                          B If you send the message as a normal user, it will be received by all users who
                            haven’t blocked their terminal for such messages using “mesg n ”. If you want
                            to reach those users, too, you must send the message as root .

B Even if you’re not logged in on a text terminal but are instead using a graphi-
  cal environment: Today’s desktop environments will pick up such messages
  and show them in an extra window (or something; that will depend on the
  desktop environment).

B If you’re root and the parameter of wall looks like the name of an existing
  file, that file will be read and its content sent as the message:

       # echo "Cake in the break room at 3pm!" >cake.txt
       # wall cake.txt

      You don’t get to do this as an ordinary user, but you can still pass the mes-
      sage on wall ’s standard input. (You can do that as root , too, of course.) Don’t
      use this for War and Peace.

B If you’re root , you can suppress the header line “Broadcast message …” using
  the -n option (short for --nobanner ).

C 17.10 [!2] Shut down your system 15 minutes from now and tell your users
  that this is simply a test. How do you prevent the actual shutdown (so that
  it really is simply a test)?

B What happens if you (as root ) pass wall the name of a non-existent file as its
  argument?

C 17.11 [2] wall is really a special case of the write command, which you can use
  to “chat” with other users of the same computer in an unspeakably primitive
  fashion. Try write , in the easiest case between two different users in different
  windows or consoles. (write was a lot more interesting back when one had
  a VAX with 30 terminals.)

Commands in this Chapter
chkconfig   Starts or shuts down system services (SUSE, Red Hat)
                                                                 chkconfig (8) 267
halt     Halts the system                                             halt (8) 271
initctl  Supervisory tool for Upstart                              initctl (8) 270
insserv  Activates or deactivates init scripts (SUSE)              insserv (8) 267
reboot   Restarts the computer                                      reboot (8) 271
runlevel  Displays the previous and current run level             runlevel (8) 265
shutdown  Shuts the system down or reboots it, with a delay and warnings for
         logged-in users                                          shutdown (8) 271
update-rc.d Installs and removes System-V style init script links (Debian)
                                                               update-rc.d (8) 267

Summary

       • After starting, the kernel initialises the system and then hands off control to
         the /sbin/init program as the first userspace process.
       • The init process controls the system and takes care, in particular, of acti-
         vating background services and managing terminals and virtual consoles.
       • The system distinguishes various “runlevels” (operating states) that are de-
         fined through different sets of running services.
       • A single-user mode is available for large or intrusive administrative tasks.
       • The shutdown command is a convenient way of shutting down or rebooting
         the system (and it’s friendly towards other users, too).
       • You can use the wall command to send a message to all logged-in users.
       • Linux systems seldom need to be rebooted—actually only when a new op-
         erating system kernel or new hardware has been installed.


18 Systemd

18.1   Overview. . . . . .          .    .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   276
18.2   Unit Files . . . . . .       .    .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   277
18.3   Unit Types . . . . .         .    .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   281
18.4   Dependencies . . . .         .    .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   282
18.5   Targets. . . . . . .         .    .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   284
18.6   The systemctl Command        .    .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   286
18.7   Installing Units. . . .      .    .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   .   289

Goals
   • Understanding the systemd infrastructure
   • Knowing the structure of unit files
   • Understanding and being able to configure targets

Prerequisites
   • Knowledge of Linux system administration
   • Knowledge of system start procedures (Chapter 16)
   • Knowledge about System-V init (Chapter 17)


                   18.1      Overview
                   Systemd, by Lennart Poettering and Kay Sievers, is another alternative to the old-
                   fashioned System-V init system. Like Upstart, systemd transcends the rigid lim-
                   itations of System-V init, but implements various concepts for the activation and
                   control of services with much greater rigour than Upstart.

                    B Systemd is considered the future standard init system by all mainstream
                      distributions. On many of them—such as Debian, Fedora, RHEL, CentOS,
                      openSUSE, and SLES—it is now provided by default. Even Ubuntu, origi-
                      nally the main instigator of Upstart, has by now declared for systemd.
                       While System-V init and Upstart use explicit dependencies among services—
                   for instance, services using the system log service can only be started once that
                    service is running—, systemd turns the dependencies around. A service requiring
                   the system log service doesn’t do this because the log service needs to be running,
                   but because it itself wants to send messages to the system log. This means it must
                   access the communication channel that the system log service provides. Hence it
                   is sufficient if systemd itself creates that communication channel and passes it to
                   the system log service once that becomes available—the service wanting to send
                   messages to the log will wait until its messages can actually be accepted. Hence,
                   systemd can in principle create all communication channels first and then start
                   all services simultaneously without regard to any dependencies whatsoever. The
                   dependencies will sort themselves out without any explicit configuration.

                    B This approach also works when the system is running: If a service is ac-
                      cessed that isn’t currently running, systemd can start it on demand.

                    B The same approach can in principle also be used for file systems: If a service
                      wants to open a file on a file system that is currently unavailable, the access
                      is suspended until the file system can actually be accessed.
                       Systemd uses “units” as an abstraction for parts of the system to be managed,
                    such as services, communication channels, or devices. “Targets” replace SysV
                    init’s runlevels and are used to collect related units. For example, there is a
                    multi-user.target that corresponds to the traditional runlevel 3. Targets can depend
                    on the availability of devices—for instance, a bluetooth.target could be requested
                    when a USB Bluetooth adapter is plugged in, and it could launch the requisite
                    software. (System-V init starts the Bluetooth software as soon as it is configured,
                    irrespective of whether Bluetooth hardware is actually available.)
                      In addition, systemd offers more interesting properties that System-V init and
                   Upstart cannot match, including:
                       • Systemd supports service activation “on demand”, not just depending on
                         hardware that is recognised (as in the Bluetooth example above), but also
                         via network connections, D-Bus requests or the availability of certain paths
                         within the file system.
                       • Systemd allows very fine-grained control of the services it launches, con-
                         cerning, e. g., the process environment, resource limits, etc. This includes
                         security improvements, e. g., providing only a limited view on the file sys-
                         tem for certain services, or providing services with a private /tmp directory
                         or networking environment.

                         B With SysV init this can be handled on a case-by-case basis within the
                           init scripts, but by comparison this is very primitive and tedious.

                       • Systemd uses the Linux kernel’s cgroups mechanism to ensure, e. g., that
                         stopping a service actually stops all related processes.
                       • If desired, systemd handles services’ logging output; the services only need
                         to write messages to standard output.

   • Systemd makes configuration maintenance easier, by cleanly separating dis-
     tribution default and local customisations.
   • Systemd contains a number of tools in C that handle system initialisation
     and do approximately what distribution-specific “runlevel S” scripts would
     otherwise do. Using them can speed up the boot process considerably and
     also improves cross-distribution standardisation.
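The fine-grained control and isolation mentioned above can be sketched in a unit file. The unit name example.service and its command are made up; PrivateTmp, ProtectSystem, and LimitNOFILE are actual systemd directives, though their availability depends on the systemd version:

```
# [Service] section of a hypothetical example.service
[Service]
ExecStart=/usr/sbin/exampled
PrivateTmp=yes         # the service sees its own, initially empty /tmp
ProtectSystem=full     # /usr, /boot, and /etc appear read-only to the service
LimitNOFILE=4096       # resource limit: at most 4096 open file descriptors
```

With SysV init, comparable restrictions would have to be coded by hand in each init script.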
Systemd is designed to offer maximum compatibility with System-V init and other
“traditions”. For instance, it supports the init scripts of System-V init if no na-
tive configuration file is available for a service, or it takes the file systems to be
mounted on startup from the /etc/fstab file.
    You can use the systemctl command to interact with a running systemd, e. g.,
to start or stop services explicitly:

# systemctl status rsyslog.service
● rsyslog.service - System Logging Service
   Loaded: loaded (/lib/systemd/system/rsyslog.service; enabled)
   Active: active (running) since Thu 2015-07-16 15:20:38 CEST; 
     3h 12min ago
     Docs: man:rsyslogd(8)
 Main PID: 497 (rsyslogd)
   CGroup: /system.slice/rsyslog.service
           └─497 /usr/sbin/rsyslogd -n
# systemctl stop rsyslog.service
Warning: Stopping rsyslog.service, but it can still be activated by:
# systemctl start rsyslog.service

Systemd calls such change requests for the system state “jobs”, and puts them into
a queue.

B Systemd considers status change requests “transactions”. If a unit is being
  started or stopped, it (and any units that depend on it) are put into a tempo-
  rary transaction. Then systemd checks that the transaction is consistent—in
  particular, that no circular dependencies exist. If that isn’t the case, systemd
  tries to repair the transaction by removing jobs that are not essential in order
  to break the cycle(s). Non-essential jobs that would lead to running services
  being stopped are also removed. Finally, systemd checks whether the jobs
  within the transaction would conflict with other jobs that are already in the
  queue, and refuses the transaction if that is the case. Only if the transaction
  is consistent and the minimisation of its impact on the system is complete,
  will its jobs be entered into the queue.

C 18.1 [!1] Use “systemctl status ” to get a picture of the units that are active on
  your computer. Check the detailed status of some interesting-looking units.

18.2      Unit Files
One of the more important advantages of systemd is that it uses a unified file
format for all configuration files—no matter whether they are about services to
be started, devices, communication channels, file systems, or other artefacts that
systemd manages.

               B This is in stark contrast to the traditional infrastructure based on System-V
                 init, where almost every functionality is configured in a different way: per-
                 manently running background services in /etc/inittab , runlevels and their
                 services via init scripts, file systems in /etc/fstab , services that are run on
                 demand in /etc/inetd.conf , … Every single such file is syntactically different
                 from all others, while with systemd, only the details of the possible (and
                  sensible) configuration settings differ—the basic file format is always the
                  same.

                 A very important observation is: Unit files are “declarative”. This means that
               they simply describe what the desired configuration looks like—unlike System-V
               init’s init scripts, which contain executable code that tries to achieve the desired
               state.

                B Init scripts usually consist of huge amounts of boilerplate code which de-
                  pends on the distribution in question, but which you still need to read and
                  understand line by line if there is a problem or you want to do something
                 unusual. For somewhat more complex background services, init scripts of a
                 hundred lines or more are not unusual. Unit files for systemd, though, usu-
                 ally get by with a dozen lines or two, and these lines are generally pretty
                 straightforward to understand.

               B Of course unit files occasionally contain shell commands, for example to
                 explain how a specific service should be started or stopped. These, however,
                 are generally fairly obvious one-liners.

              Syntax The basic syntax of unit files is explained in systemd.unit (5). You can find
              an example for a unit file in Figure 18.1. A typical characteristic is the subdivision
              into sections that start with a title in square brackets1 . All unit files (no matter
              what they are supposed to do) can include [Unit] and [Install] sections (see be-
              low). Besides, there are sections that are specific to the purpose of the unit.
                 As usual, blank lines and comment lines are ignored. Comment lines can start
              with a # or ; . Over-long lines can be wrapped with a \ at the end of the line,
              which will be replaced by a space character when the file is read. Uppercase and
              lowercase letters are important!
                 Lines which are not section headers, empty lines, nor comment lines contain
               “options” according to a “⟨name⟩ = ⟨value⟩” pattern. Various options may occur
              several times, and systemd’s handling of that depends on the option: Multiple
              options often form a list; if you specify an empty value, all earlier settings will be
              ignored. (If that is the case, the documentation will say so.)

               B Options that are not listed in the documentation will be flagged with a warn-
                 ing by systemd and otherwise ignored. If a section or option name starts
                 with “X- ”, it is ignored completely (options in an “X- ” section do not need
                 their own “X- ” prefix).

               B Yes/no settings in unit files can be given in a variety of ways. 1 , true , yes ,
                 and on stand for “yes”, 0 , false , no , and off for “no”.

               B Times can also be specified in various ways. Simple integers will be inter-
                 preted as seconds2 . If you append a unit, then that unit applies (allowed
                 units include us , ms , s , min , h , d , w in increasing sequence from microseconds
                 to weeks—see systemd.time (7)). You can concatenate several time specifica-
                 tions with units (as in “10min 30s ”), and these times will be added (here,
                 630 seconds).
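                Both conventions can be illustrated with a small, hypothetical [Service] fragment; RemainAfterExit and TimeoutStartSec are real systemd options:

                ```
                [Service]
                RemainAfterExit=yes         # boolean: 1, true, or on would also work
                TimeoutStartSec=10min 30s   # time with units: 630 seconds altogether
                ```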
                1 The syntax is inspired by the .desktop files of the “XDG Desktop Entry Specification” [XDG-DS14],

              which in turn have been inspired by the INI files of Microsoft Windows.
                2 Most of the time, anyway—there are (documented) exceptions.

#   This file is part of systemd.
#   systemd is free software; you can redistribute it and/or modify
#   it under the terms of the GNU Lesser General Public License as
#   published by the Free Software Foundation; either version 2.1
#   of the License, or (at your option) any later version.

[Unit]
Description=Console Getty
After=systemd-user-sessions.service plymouth-quit-wait.service

[Service]
ExecStart=-/sbin/agetty --noclear --keep-baud console 115200,38400,9600 $TERM


              Figure 18.1: A systemd unit file: console- getty.service

                            Searching and finding settings Systemd tries to locate unit files along a list of
                            directories that is hard-coded in the program. Directories nearer the front of the
                            list have precedence over directories nearer the end.

                            B The details are system-dependent, at least to a certain degree. The usual list
                              is normally something like

                                   /etc/systemd/system                                          Local configuration
                                   /run/systemd/system                          Dynamically generated unit files
                                   /lib/systemd/system                         Unit files for distribution packages

      Local customisation       Systemd offers various clever methods for customising settings without having
                            to change the unit files generally provided by your distribution—which would be
                            inconvenient if the distribution updates the unit files. Imagine you want to change
                            a few settings in the example.service file:
                               • You can copy the distribution’s example.service file from /lib/systemd/system
                                 to /etc/systemd/system and make any desired customisations. The unit file
                                 furnished by the distribution will then not be considered at all.
                               • You can create a directory /etc/systemd/system/example.service.d containing a
                                 file—for example, local.conf . The settings in that file override settings with
                                 the same name in /lib/systemd/system/example.service , but any settings not
                                 mentioned in local.conf stay intact.

                                  B Take care to include any required section titles in local.conf , such that
                                    the options can be identified correctly.

                                  B Nobody keeps you from putting several files into /etc/systemd/system/
                                    example.service.d . The only prerequisite is that file names must end in
                                    .conf . Systemd makes no stipulation about the order in which these
                                    files are read—it is best to ensure that every option occurs in just one
                                    single file.
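                                  A drop-in file of this kind might look as follows (a sketch; the
                                  example.service unit and the EXAMPLE_OPTS variable are invented for
                                  illustration):

```ini
# /etc/systemd/system/example.service.d/local.conf (hypothetical)
# Note the [Service] section header, so the option can be attributed
# correctly; all other settings of example.service remain in force.
[Service]
Environment=EXAMPLE_OPTS=--verbose
```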

                           Template unit files Sometimes several services can use the same or a very similar
                           unit file. In this case it is convenient not to have to maintain several copies of
                           the same unit file. Consider, for example, the terminal definition lines in /etc/
                           inittab —it would be good not to have to have one unit file per terminal.
                               Systemd supports this by means of unit files with names like example@.service .
             Instantiation You could, for example, have a file called getty@.service and then configure a vir-
                           tual console on /dev/tty2 simply by creating a symbolic link from getty@tty2.service
                           to getty@.service . When this console is to be activated, systemd reads the getty@
                           .service file and replaces the %I key, wherever it finds it, by whatever comes be-
                           tween @ and . in the name of the unit file, i. e., tty2 . The result of that replacement
                           is then put into force as the configuration.

                            B In fact, systemd replaces not just %I but also some other sequences (and that
                              not just in template unit files). The details may be found in systemd.unit (5),
                              in the “Specifiers” section.
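                             As an illustration, a (heavily abridged, hypothetical) template unit
                             might use %I like this:

```ini
# getty@.service (sketch, not the real file): for getty@tty2.service,
# systemd substitutes "tty2" for every occurrence of %I.
[Unit]
Description=Getty on %I

[Service]
ExecStart=-/sbin/agetty --noclear %I $TERM
```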

                            Basic settings All unit files may contain the [Unit] and [Install] sections. The
                            former contains general information about the unit, the latter provides details for
                            its installation (for example, to introduce explicit dependencies—which we shall
                            discuss later).
                                Here are some of the more important options from the [Unit] section (the com-
                            plete list is in systemd.unit (5)):
                            Description A description of the unit (as free text). Will be used to make user in-
                                  terfaces more friendly.

Documentation    A space-separated list of URLs containing documentation for the
         unit. The allowed protocol schemes include http: , https: , file: , info: , and
         man: (the latter three refer to locally-installed documentation). An empty
         value clears the list.
OnFailure    A space-separated list of other units which will be activated if this unit
         transitions into the failed state.
SourcePath    The path name of a configuration file from which this unit file has been
         generated. This is useful for tools that create unit files for systemd from
         external configuration files.
ConditionPathExists     Checks whether there is a file (or directory) under the given
         absolute path name. If not, the unit will be classed as failed . If there is a
         ! in front of the path name, then a file (or directory) with that name must
         not exist. (There are loads of other “Condition …” tests—for example, you can
         have the execution of units depend on whether the system has a particular
         computer architecture, is running in a virtual environment, is running on
         AC or battery power or on a computer with a particular name, and so on.
         Read up in systemd.unit (5).)
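A [Unit] section combining several of these options might look like this
(the file names are invented for the sake of the example):

```ini
# Hypothetical unit fragment: the unit is started only if
# /etc/example.conf exists and the "disabled" marker file does not.
[Unit]
Description=Example daemon
Documentation=man:example(8)
ConditionPathExists=/etc/example.conf
ConditionPathExists=!/etc/example/disabled
```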

C 18.2 [!2] Browse the unit files of your system under /lib/systemd/system (or
  /usr/lib/systemd/system , depending on the distribution). How many different
  Condition … options can you find?

18.3         Unit Types
Systemd supports a wide variety of “units”, or system components that it can
manage. These are easy to tell apart by the extensions of the names of the corre-
sponding unit files. As mentioned in Section 18.2, all units share the same basic
file format. Here is a list of the most important unit types:
.service    A process on the computer that is executed and managed by systemd.
         This includes both background services that stay active for a long time (pos-
         sibly until the system is shut down), and processes that are only executed
         once (for example when the system is booting).

         B When a service is invoked by name (such as example ) but no correspond-
           ing unit file (here, example.service ) can be found, systemd looks for a
           System-V init script with the same name and generates a service unit
           for that on the fly. (The compatibility is fairly wide-ranging but not
           100% complete.)

.socket    A TCP/IP or local socket, i. e., a communication end point that client pro-
         grams can use to contact a server. Systemd uses socket units to activate
         background services on demand.

         B Socket units always come with a corresponding service unit which will
           be started when systemd notes activity on the socket in question.
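          A minimal socket unit for on-demand activation might be sketched as
          follows (the port number and the matching example.service are
          assumptions):

```ini
# example.socket (sketch): systemd listens on TCP port 12345 and
# starts example.service when the first connection arrives.
[Socket]
ListenStream=12345

[Install]
WantedBy=sockets.target
```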

.mount    A “mount point” on the system, i. e., a directory where a file system should
         be mounted.

         B The names of these units are derived from the path name by means
           of replacing all slashes (“/ ”) with hyphens (“- ”) and all other non-
           alphanumeric (as per ASCII) characters with a hexadecimal replace-
           ment such as \x2d (“. ” is only converted if it is the first charac-
           ter of a path name). The name of the root directory (“/ ”) becomes
                     “- ”, but slashes at the start or end of all other names are removed.
                     The directory name /home/lost+found , for instance, becomes
                     home-lost\x2bfound .

               B You can try this replacement using the “systemd-escape -p ” command:
                      $ systemd-escape -p /home/lost+found
                      home-lost\x2bfound
                      $ systemd-escape -pu home-lost\\x2bfound
                      /home/lost+found

                    The “-p ” option marks the parameter as a path name. The “-u ” option
                    undoes the replacement.
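                     For simple ASCII paths, the replacement rule can be approximated with
                     standard tools. The following shell sketch (the path_to_unit helper is
                     made up; it handles only “/ ” and “+ ”, whereas systemd escapes every
                     non-alphanumeric byte, and the root-directory special case is omitted)
                     illustrates the transformation:

```shell
# Hypothetical helper: approximate systemd's path-to-unit-name mapping
# for simple ASCII paths. Only "/" and "+" are handled here; use
# systemd-escape for the real thing.
path_to_unit() {
  printf '%s\n' "$1" |
    sed -e 's|^/||' -e 's|/$||' -e 's|/|-|g' -e 's|+|\\x2b|g'
}

path_to_unit /home/lost+found    # prints: home-lost\x2bfound
```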

      .automount    Declares that a mount point should be mounted on demand (instead
               of prophylactically when the system is booted). The names of these units
               result from the same path name transformation. The details of mounting
               must be described by a corresponding mount unit.
      .swap    Describes swap space on the system. The names of these units result from
               the path name transformation applied to the device or file name in question.
      .target    A “target”, or synchronisation point for other units during system boot
               or when transitioning into other system states. Vaguely similar to System-V
               init’s runlevels. See Section 18.5.
      .path    Observes a file or a directory and starts another unit (by default, a service
               unit of the same name) when, e. g., changes to the file have been noticed or
               a file has been added to an otherwise empty directory.
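       A path unit of this kind might be sketched as follows (the directory
       and service name are invented):

```ini
# example.path (sketch): when a file shows up in the spool directory,
# systemd starts the example.service unit of the same name.
[Path]
DirectoryNotEmpty=/var/spool/example

[Install]
WantedBy=multi-user.target
```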

      .timer    Starts another unit (by default, a service unit of the same name) at a cer-
               tain point in time or repeatedly at certain intervals. This makes systemd a
               replacement for cron and at .
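       For instance, a timer unit replacing a daily cron job might be sketched
       like this (the unit names are hypothetical; see systemd.timer (5) for
       the real option set):

```ini
# example.timer (sketch): starts example.service once a day and
# catches up on runs that were missed while the machine was off.
[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```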
      (There are a few other unit types, but to explain all of them here would be carrying
      things too far.)

      C 18.3 [!2] Look for examples for all of these units on your system. Examine
        the unit files. If necessary, consult the manpages for the various types.

      18.4         Dependencies
      As we have mentioned before, systemd can mostly get by without explicit depen-
      dencies because it is able to exploit implicit dependencies (e. g., on communication
      channels). Even so, it is sometimes necessary to specify explicit dependencies.
      Various options in the [Unit] section of a service file (e.g., example.service ) allow
      you to do just that. For example:
      Requires    Specifies a list of other units. If the current unit is activated, the listed
               units are also activated. If one of the listed units is deactivated or its acti-
               vation fails, then the current unit will also be deactivated. (In other words,
               the current unit “depends on the listed units”.)

               B The Requires dependencies have nothing to do with the order in which
                 the units are started or stopped—you will have to configure that sepa-
                 rately with After or Before . If no explicit order has been specified, sys-
                 temd will start all units at the same time.
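                The combination described here might be sketched as follows in
                example.service (the mydb.service dependency is invented):

```ini
# Fragment of a hypothetical example.service: the unit depends on
# mydb.service and, additionally, waits for it to be started first.
[Unit]
Requires=mydb.service
After=mydb.service
```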

         B You can specify these dependencies without changing the unit file, by
           creating a directory called /etc/systemd/system/example.service.requires
           and adding symbolic links to the desired unit files to it. A directory
               # ls -l /etc/systemd/system/example.service.requires
                lrwxrwxrwx 1 root root 34 Jul 17 15:57 syslog.service -> …

              corresponds to the setting
               Requires = syslog.service

              in example.service .

Wants    A weaker form of Requires . This means that the listed units will be started to-
         gether with the current unit, but if their activation fails this has no influence
         on the process as a whole. This is the recommended method of making the
         start of one unit depend on the start of another one.

         B Here, too, you can specify the dependencies “externally” by creating a
           directory called example.service.wants .

Conflicts    The reverse of Requires —the units listed here will be stopped when the
         current unit is started, and vice versa.

         B Like Requires , Conflicts makes no stipulation to the order in which units
           are started or stopped.

         B If a unit 𝑈 conflicts with another unit 𝑉 and both are to be started at
           the same time, this operation fails if both units are an essential part of
           the operation. If one (or both) units are not essential parts of the op-
            eration, the operation is modified: if only one unit is not mandatory,
            that one will not be started; if both are not mandatory, the one men-
           tioned in Conflicts will be started and the one whose unit file contains
           the Conflicts option will be stopped.

Before    (and After ) These lists of units determine the starting order. If example.
         service contains the “Before=example2.service ” option and both units are be-
         ing started, the start of example2.service will be delayed until example.service
         has been started. After is the converse of Before , i. e., if example2.service con-
         tains the option “After=example.service ” and both units are being started, the
         same effect results—example2.service will be delayed.

         B Notably, this has nothing to do with the dependencies in Requires and
           Conflicts . It is common, for example, to list units in both Requires and
           After . This means that the listed unit will be started before the one
           whose unit file contains these settings.

         When deactivating units, the reverse order is observed. If a unit with a Before
         or After dependency on another unit is deactivated, while the other is being
         started, then the deactivation takes place before the activation no matter in
         which direction the dependency is pointing. If there is no Before or After
          dependency between two units, they will be started or stopped simultaneously.

                       Table 18.1: Common targets for systemd (selection)

        Target                  Description
        basic.target            Basic system startup is finished (file systems, swap
                                space, sockets, timers etc.)
        ctrl-alt-del.target     Is executed when Ctrl + Alt + Del was pressed. Often
                                the same as reboot.target .
        default.target          Target which systemd attempts to reach on sys-
                                tem startup. Usually either multi-user.target or
                                graphical.target .
        emergency.target        Starts a shell on the system console. For emer-
                                gencies. Is usually activated by means of the
                                “systemd.unit=emergency.target ” on the kernel command
                                line.
        getty.target            Activates the statically-defined getty instances (for ter-
                                minals). Corresponds to the getty lines in /etc/inittab
                                on System-V init.
        graphical.target        Establishes a graphical login prompt. Depends on
                                multi-user.target .
        halt.target             Stops the system (without powering it down).
        multi-user.target       Establishes a multi-user system without a graphical lo-
                                gin prompt. Used by graphical.target .
        network-online.target   Serves as a dependency for units that require network
                                services (not ones that provide network services), such
                                as mount units for remote file systems. How exactly the
                                system determines whether the network is available de-
                                pends on the method for network configuration.
        poweroff.target         Stops the system and powers it down.
        reboot.target           Restarts the system.
        rescue.target           Performs basic system initialisation and then starts a
                                rescue shell.

      C 18.4 [!1] What advantage do we expect from being able to configure depen-
        dencies via symbolic links in directories like example.service.requires instead
        of the example.service unit file?

      C 18.5 [2] Check your system configuration for examples of Requires , Wants and
         Conflicts dependencies, with or without corresponding Before and After de-
         pendencies.
      18.5      Targets
      Targets in systemd are roughly similar to runlevels in System-V init: a possibil-
      ity of conveniently describing a set of services. While System-V init allows only
      a relatively small number of runlevels and their configuration is fairly involved,
      systemd makes it possible to define various targets very easily.
         Unit files for targets have no special options (the standard set of options for
      [Unit] and [Install] should be enough). Targets are only used to aggregate other
      units via dependencies or create standardised names for synchronisation points in
      dependencies ( , for example, can be used to start units depending
      on local file systems only once these are actually available). An overview of the
      most important targets is in Table 18.1.
         In the interest of backwards compatibility to System-V init, systemd defines a
      number of targets that correspond to the classical runlevels. Consider Table 18.2.

                   Table 18.2: Compatibility targets for System-V init

                   Target                 Equivalent
                   runlevel0.target       poweroff.target
                   runlevel1.target       rescue.target
                   runlevel2.target       multi-user.target
                   runlevel3.target       multi-user.target
                   runlevel4.target       multi-user.target
                   runlevel5.target       graphical.target
                   runlevel6.target       reboot.target

   You can set the default target which systemd will attempt to reach on system default target
boot by creating a symbolic link from /etc/systemd/system/default.target to the de-
sired target’s unit file:

# cd /etc/systemd/system
# ln -sf /lib/systemd/system/multi-user.target default.target

(This is the moral equivalent to the initdefault line in the /etc/inittab file of System-
V init.) A more convenient method is the “systemctl set-default ” command:

# systemctl get-default
multi-user.target
# systemctl set-default graphical
Removed symlink /etc/systemd/system/default.target.
Created symlink from /etc/systemd/system/default.target to
 /lib/systemd/system/graphical.target.
# systemctl get-default
graphical.target

(As you can see, that doesn’t do anything other than tweak the symbolic link.)
   To activate a specific target (like changing to a specific runlevel on System-V Activate specific target
init), use the “systemctl isolate ” command:

# systemctl isolate multi-user

(“.target ” will be appended to the parameter if necessary.) This command
starts all units that the target depends upon and stops all other units.

B “systemctl isolate ” works only for units in whose [Unit] sections the “AllowIsolate ”
  option is switched on.

  To stop the system or to change to the rescue mode (System-V init aficionados
would call this “single-user mode”) there are the shortcuts

#   systemctl   rescue
#   systemctl   halt
#   systemctl   poweroff                                Like halt , but with power-down
#   systemctl   reboot

These commands correspond roughly to their equivalents using “systemctl isolate ”,
but also output a warning to logged-in users. You can (and should!) of course
keep using the shutdown command.
   You can return to the default operating state using

# systemctl default

                                  C 18.6 [!2] Which other services does the multi-user.target depend on? Do
                                   these units depend on other units in turn?

                                 C 18.7 [2] Use “systemctl isolate ” to change your system to the rescue (single-
                                   user) mode, and “systemctl default ” to come back to the standard mode.
                                   (Hint: Do this from a text console.)

                                 C 18.8 [2] Restart your system using “systemctl reboot ” and then once again
                                   with shutdown . Consider the difference.

                                 18.6        The systemctl Command
                                 The systemctl command is used to control systemd. We have already seen a few
                                 applications, and here is a more systematic list. This is, however, still only a small
                                 excerpt of the complete description.
                                    The general structure of systemctl invocations is

                                 # systemctl   ⟨subcommand⟩ ⟨parameters⟩ …

                                  systemctl supports a fairly large zoo of subcommands. The allowable parameters
                                 (and options) depend on the subcommand in question.

      unit names as parameters   B Often unit names are expected as parameters. These can be specified
                                   either with a file name extension (like, e. g., example.service ) or without
                                   (example ). In the latter case, systemd appends an extension that it considers
                                   appropriate—with the start command, for example, “.service ”, with the
                                   isolate command on the other hand, “.target ”.

                                  Commands for units        The following commands deal with units and their man-
                                  agement.
                                  list-units Displays the units systemd knows about. You may specify a unit type
                                       (service , socket , …) or a comma-separated list of unit types using the -t op-
                                         tion, in order to confine the output to units of the type(s) in question. You
                                         can also pass a shell search pattern in order to look for specific units:

                                          # systemctl list-units "ssh*"
                                          UNIT        LOAD   ACTIVE SUB     DESCRIPTION
                                          ssh.service loaded active running OpenBSD Secure Shell server

                                          LOAD   = Reflects whether the unit definition was properly loaded.
                                          ACTIVE = The high-level unit activation state, i.e. generalization
                                                   of SUB.
                                          SUB    = The low-level unit activation state, values depend on
                                                   unit type.

                                           1 loaded units listed. Pass --all to see loaded but inactive units,
                                           too. To show all installed unit files use 'systemctl list-unit-files'.

                                         B As usual, quotes are a great idea here, so the shell will not vandalise
                                           the search patterns that are meant for systemd.

                                 start   Starts one or more units mentioned as parameters.

         B You can use shell search patterns here, too. The search patterns only
           work for units that systemd knows about; inactive units that are not
           in a failed state will not be searched, nor will units instantiated from
           templates whose exact names are not known before the instantiation.
           You should not overtax the search patterns.

stop   Stops one or more units mentioned as parameters (again with search patterns).
reload    Reloads the configuration for the units mentioned as parameters (if the pro-
         grams underlying these units go along). Search patterns are allowed.

         B This concerns the configuration of the background services them-
           selves, not the configuration of the services from systemd’s point of
           view. If you want systemd to reload its own configuration with respect
            to the background services, you must use the “systemctl daemon-reload ” command.

         B What exactly happens on a “systemctl reload ” depends on the back-
           ground service in question (it usually involves a SIGHUP ). You can con-
           figure this in the unit file for the service.

restart     Restarts the units mentioned as parameters (search patterns are allowed).
         If a unit doesn’t yet run, it is simply started.
try-restart    Like restart , but units that don’t run are not started.
reload-or-restart    (and reload-or-try-restart ) Reloads the configuration of the
         named units (as per reload ), if the units allow this, or restarts them (as
         per restart or try-restart ) if they don’t.

         B Instead of reload-or-try-restart you can say force-reload for convenience
           (this is at least somewhat shorter).

isolate   The unit in question is started (including its dependencies) and all other
         units are stopped. Corresponds to a runlevel change on System-V init.
kill   Sends a signal to one or several processes of the unit. You can use the
        --kill-who option to specify which process is targeted. (The options include
        main , control , and all —the latter is the default—, and main and control are
        explained in more detail in systemctl (1).) Using the --signal option (-s for
        short) you can determine which signal is to be sent.
status    Displays the current status of the named unit(s), followed by its most recent
         log entries. If you do not name any units, you will get an overview of all
         units (possibly restricted to particular types using the -t option).

         B The log excerpt is usually abridged to 10 lines, and long lines will be
            shortened. You can change this using the --lines (-n ) and --full (-l ) options.

         B “status ” is used for human consumption. If you want output that is
           easy to process by other programs, use “systemctl show ”.

cat    Displays the configuration file(s) for one or more units (including fragments
        in configuration directories specifically for that unit). Comments with the
        file names in question are inserted for clarity.
help   Displays documentation (such as man pages) for the unit(s) in question. For example,

          $ systemctl help syslog

            invokes the manual page for the system log service, regardless of which
            program actually provides the log service.

      B With most distributions, commands like
             #   service   example   start
             #   service   example   stop
             #   service   example   restart
             #   service   example   reload

            work independently of whether the system uses systemd or System-V init.
         In the next section, there are a few commands that deal with installing and
      deinstalling units.

      Other commands Here are a few commands that do not specifically deal with
      particular units (or groups of units).
      daemon-reload This command causes systemd to reload its configuration. This in-
            cludes regenerating unit files that have been created at runtime from other
            configuration files on the system, and reconstructing the dependency tree.

            B Communication channels that systemd manages on behalf of back-
              ground services will persist across the reload.
      daemon-reexec Restarts the systemd program. This saves systemd’s internal state and
            restores it later.

            B This command is mostly useful if a new version of systemd has been
               installed (or for debugging systemd). Here, too, communication channels
              that systemd manages on behalf of background services will persist
              across the reload.
       is-system-running     Outputs the current state of the system. Possible answers in-
             clude:
             initializing The system is in the early boot stage (the basic.target , rescue.target ,
                  or emergency.target targets have not yet been reached).
            starting  The system is in the late boot stage (there are still jobs in the queue).
            running The system is running normally.
            degraded The system is running normally, but one or more units are in a
                 failed state.
             maintenance One of the rescue.target or emergency.target targets is active.
            stopping The system is being shut down.

      C 18.9 [!2] Use systemctl to stop, start, and restart a suitably innocuous service
        (such as cups.service ) and to reload its configuration.

      C 18.10 [2] The runlevel command of System-V init outputs the system’s cur-
        rent runlevel. What would be an approximate equivalent for systemd?

      C 18.11 [1] What is the advantage of
             # systemctl kill example.service

             # killall example

            (or “pkill example ”)?

18.7      Installing Units
To make a new background service available using systemd, you need a unit file,
for example example.service . (Thanks to backwards compatibility, a System-V init
script would also do, but we won’t go there just now.) You need to place this in a
suitable directory (we recommend /etc/systemd/system ). Next, it should appear when you
invoke “systemctl list-unit-files ”:

# systemctl list-unit-files
UNIT FILE                                  STATE
proc-sys-fs-binfmt_misc.automount          static
org.freedesktop.hostname1.busname          static
org.freedesktop.locale1.busname            static

example.service                            disabled

The disabled state means that the unit is available in principle, but is not being
started automatically.
   You can “activate” the unit, or mark it to be started when needed (e. g., during Activating units
system startup or if a certain communication channel is being accessed), by issuing
the “systemctl enable ” command:

# systemctl enable example
Created symlink from /etc/systemd/system/multi-user.target.wants/
 example.service to /etc/systemd/system/example.service.

The command output tells you what happens here: A symbolic link to the ser-
vice’s unit file from the /etc/systemd/system/multi-user.target.wants directory en-
sures that the unit will be started as a dependency of the multi-user.target .

B You may ask yourself how systemd knows that the example unit should be
  integrated in the multi-user.target (and not some other target). The answer
  to that is: The example.service file has an [Install] section saying

       [Install]
       WantedBy=multi-user.target

After an enable , systemd does the equivalent of a “systemctl daemon-reload ”. How-
ever, no units will be started (or stopped).

B You could just as well create the symbolic links by hand. You would, how-
  ever, have to take care of the “systemctl daemon-reload ” yourself, too.

B If you want the unit to be started immediately, you can either give the
       # systemctl start example

      command immediately afterwards, or you invoke “systemctl enable ” with the
      --now option.

B You can start a unit directly (using “systemctl start ”) without first activating
  it with “systemctl enable ”. The former actually starts the service, while the
  latter only arranges for it to be started at an appropriate moment (e. g., when
  the system is booted, or a specific piece of hardware is connected).

   You can deactivate a unit again with “systemctl disable ”. As with enable , sys-
temd does an implicit daemon-reload .

                       B Here, too, the unit will not be stopped if it is currently running. (You are
                         just preventing it from being activated later on.) Use the --now option or an
                         explicit “systemctl stop ”.

                       B The “systemctl reenable ” command is equivalent to a “systemctl disable ” im-
                         mediately followed by a “systemctl enable ” for the units in question. This lets
                         you do a “factory reset” of units.

      Masking a unit      The “systemctl mask ” command lets you “mask” a unit. This means to block it
                       completely. This will not only prevent it from starting automatically, but will also
                       keep it from being started by hand. “systemctl unmask ” reverts that operation.

                       B Systemd implements this by linking the name of the unit file in /etc/systemd/
                         system symbolically to /dev/null . Thus, eponymous files in directories that
                         systemd considers later (like /lib/systemd/system ) will be completely ignored.
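   The mechanism is easy to observe without touching a live system; in the following sketch a throwaway directory stands in for /etc/systemd/system, and the unit name example.service is assumed:

```shell
# Simulate the masking mechanism in a scratch directory; the real link
# would be /etc/systemd/system/example.service.
dir=$(mktemp -d)
ln -s /dev/null "$dir/example.service"   # what "systemctl mask" sets up
readlink "$dir/example.service"          # prints: /dev/null
rm "$dir/example.service"                # what "systemctl unmask" does
rmdir "$dir"
```

On a real system you would of course let systemctl manage these links, which also takes care of the implicit “systemctl daemon-reload”.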

                       C 18.12 [!2] What happens if you execute “systemctl disable cups ”? (Watch the
                         commands being output.) Reactivate the service again.

                       C 18.13 [2] How can you “mask” units whose unit files are in /etc/systemd/
                         system ?

                       Commands in this Chapter
                       systemctl   Main control utility for systemd                  systemctl (1)   277, 286

Summary

                          • Systemd is a modern alternative to System-V init.
                          • “Units” are system components managed by systemd. They are configured
                            using unit files.
                          • Unit files bear a vague resemblance to Microsoft Windows INI files.
                          • Systemd supports flexible mechanisms for local configuration and the au-
                            tomatic creation of similar unit files from “templates”.
                          • Systemd lets you manage a multitude of different units—services, mount
                            points, timers, …
                          • Dependencies between units can be expressed in various ways.
                          • “Targets” are units that vaguely correspond to System-V init’s runlevels.
                            They are used to group related services and for synchronisation.
                          • You can use the systemctl command to control systemd.
                          • Systemd contains powerful tools to install and deinstall units.


19 Time-controlled Actions—cron and at

19.1 Introduction . . . . . . . . . . . . . . . . . . . . . 292
19.2 One-Time Execution of Commands . . . . . . . . . . . . 292
     19.2.1 at and batch . . . . . . . . . . . . . . . . . 292
     19.2.2 at Utilities . . . . . . . . . . . . . . . . . 294
     19.2.3 Access Control . . . . . . . . . . . . . . . . 294
19.3 Repeated Execution of Commands . . . . . . . . . . . . 295
     19.3.1 User Task Lists . . . . . . . . . . . . . . . . 295
     19.3.2 System-Wide Task Lists . . . . . . . . . . . . 296
     19.3.3 Access Control . . . . . . . . . . . . . . . . 297
     19.3.4 The crontab Command . . . . . . . . . . . . . . 297
     19.3.5 Anacron . . . . . . . . . . . . . . . . . . . . 298

Goals
   • Executing commands at some future time using at
   • Executing commands periodically using cron
   • Knowing and using anacron

Prerequisites
   • Using Linux commands
   • Editing files

                        19.1       Introduction
An important component of system administration consists of automating repeated procedures. One conceivable task would be for the mail server of the company network to dial in to the ISP periodically to fetch incoming messages. In addition, all members of a project group might receive a written reminder half an hour before the weekly project meeting. Administrative tasks like file system checks or system backups can profitably be executed automatically at night, when system load is noticeably lower.
   To facilitate this, Linux offers two services which will be discussed in the following sections.

                        19.2       One-Time Execution of Commands
                        19.2.1      at   and batch
                        Using the at service, arbitrary shell commands may be executed once at some time
                        in the future (time-shifted). If commands are to be executed repeatedly, the use
                        of cron (Section 19.3) is preferable.
                           The idea behind at is to specify a time at which a command or command se-
                        quence will be executed. Roughly like this:

                         $ at 01:00
                         warning: commands will be executed using /bin/sh
                         at> tar cvzf /dev/st0 $HOME
                         at> echo "Backup done" | mail -s Backup $USER
                         at> Ctrl + D
                         Job 123 at 2003-11-08 01:00

                         This would write a backup copy of your home directory to the first tape drive at
                         1 A. M. (don’t forget to insert a tape) and then mail a completion notice to you.
   at’s argument specifies when the command(s) are to be run. Times like “⟨HH⟩:⟨MM⟩” denote the next possible such time: If the command “at 14:00” is given at 8 A. M., it refers to the same day; if at 4 P. M., to the next.

                         B You can make these times unique by appending today or tomorrow : “at 14:00
                           today ”, given before 2 P. M., refers to today, “at 14:00 tomorrow ”, to tomorrow.

                          Other possibilities include Anglo-Saxon times such as 01:00am or 02:20pm as well as
                          the symbolic names midnight (12 A. M.), noon (12 P. M.), and teatime (4 P. M.) (!); the
                          symbolic name now is mostly useful together with relative times (see below).
   In addition to times, at also understands date specifications in the format “⟨MM⟩⟨DD⟩⟨YY⟩” and “⟨MM⟩/⟨DD⟩/⟨YY⟩” (according to American usage, with the month before the day) as well as “⟨DD⟩.⟨MM⟩.⟨YY⟩” (for Europeans). Besides, American-style dates like “⟨month name⟩ ⟨day⟩” and “⟨month name⟩ ⟨day⟩ ⟨year⟩” may also be spelled out. If you specify just a date, commands will be executed on the day in question at the current time; you can also combine a date and time specification, but must give the date after the time:

                         $ at 00:00 January 1 2005
                         warning: commands will be executed using /bin/sh
                         at> echo 'Happy New Year!'
                         at> Ctrl + D
                         Job 124 at 2005-01-01 00:00

                          Besides “explicit” time and date specification, you can give “relative” times
                        and dates by passing an offset from some given point in time:

$ at now + 5 minutes

executes the command(s) five minutes from now, while

$ at noon + 2 days

refers to 12 P. M. on the day after tomorrow (as long as the at command is given
before 12 P. M. today). at supports the units minutes , hours , days and weeks .

B Only a single offset in a single measurement unit is allowed: Combinations
  such as

      $ at noon + 2 hours 30 minutes

  or

      $ at noon + 2 hours + 30 minutes

  are, unfortunately, disallowed. Of course you can express any reasonable
  offset in minutes …

   at reads the commands from standard input, i. e., usually the keyboard; with the “-f ⟨file⟩” option you can specify a file instead.

B at tries to run the commands in an environment that is as similar as possible
  to the one current when at was called. The current working directory, the
  umask, and the current environment variables (excepting TERM, DISPLAY, and _)
  are saved and reactivated before the commands are executed.

Any output of the commands executed by at—standard output and standard error output—is sent to you by e-mail.

B If you have assumed another user’s identity using su before calling at , the
  commands will be executed using that identity. The output mails will still
  be sent to you, however.

   While you can use at to execute commands at some particular point in time, the (otherwise analogous) batch command makes it possible to execute a command sequence “as soon as possible”. When that will actually be depends on the current system load; if the system is very busy just then, batch jobs must wait.

B An at -style time specification on batch is allowed but not mandatory. If it is
  given, the commands will be executed “some time after” the specified time,
  just as if they had been submitted using batch at that time.

B batch is not suitable for environments in which users compete for resources
  such as CPU time. Other systems must be employed in these cases.

C 19.1 [!1] Assume now is 1 March, 3 P. M. When will the jobs submitted using
  the following commands be executed?
          1. at 17:00
          2. at 02:00pm
          3. at teatime tomorrow
          4. at now + 10 hours

C 19.2 [1] Use the logger command to write a message to the system log 3 min-
  utes from now.

                         19.2.2      at   Utilities
The system appends at-submitted jobs to a queue. You can inspect the contents of that queue using atq (you will see only your own jobs unless you are root):

                         $ atq
                         123       2003-11-08 01:00 a hugo
                         124       2003-11-11 11:11 a hugo
                         125       2003-11-08 21:05 a hugo

                         B The “a ” in the list denotes the “job class”, a letter between “a ” and “z ”. You
                           can specify a job class using the -q option to at ; jobs in classes with “later”
                           letters are executed with a higher nice value. The default is “a ” for at jobs
                           and “b ” for batch jobs.

                         B A job that is currently being executed belongs to the special job class “= ”.
   You can use atrm to cancel a job. To do so you must specify its job number, which you are told on submission or can look up using atq. If you want to check on the commands making up the job, you can do that with “at -c ⟨job number⟩”.
   The entity in charge of actually executing at jobs is a daemon called atd. It is generally started on system boot and waits in the background for work. When starting atd, several options can be specified:
                         -b   (“batch”) Determines the minimum interval between two batch job executions.
                                 The default is 60 seconds.
                         -l   (“load”) Determines a limit for the system load, above which batch jobs will not
                                 be executed. The default is 0.8.
                         -d   (“debug”) Activates “debug” mode, i. e., error messages will not be passed to
                                 syslogd but written to standard error output.

                         The atd daemon requires the following directories:
                              • at jobs are stored in /var/spool/atjobs . Its access mode should be 700, the
                                owner is at .
                              • The /var/spool/atspool directory serves to buffer job output. Its owner should
                                be at and access mode 700, too.

                         C 19.3 [1] Submit a few jobs using at and display the job queue. Cancel the
                           jobs again.

                         C 19.4 [2] How would you create a list of at jobs which is not sorted according
                           to job number but according to execution time (and date)?

                         19.2.3      Access Control
The /etc/at.allow and /etc/at.deny files determine who may submit jobs using at and batch. If the /etc/at.allow file exists, only the users listed in there are entitled to submit jobs. If the /etc/at.allow file does not exist, the users not listed in /etc/at.deny may submit jobs. If neither one nor the other exists, at and batch are only available to root.

                                 Debian GNU/Linux comes with a /etc/at.deny file containing the names of
                                 various system users (including alias , backup , guest , and www-data ). This pre-
                                 vents these users from using at .

      Here, too, the Ubuntu defaults correspond to the Debian GNU/Linux defaults.

      Red Hat includes an empty /etc/at.deny file; this implies that any user may
      submit jobs.

      The openSUSE default corresponds (interestingly) to that of Debian GNU/Linux
      and Ubuntu—various system users are not allowed to use at . (The explic-
      itly excluded user www-data , for example, doesn’t exist on openSUSE; Apache
      uses the identity of the wwwrun user.)

C 19.5 [1] Who may use at and batch on your system?

19.3      Repeated Execution of Commands
19.3.1    User Task Lists
Unlike the at commands, the cron daemon’s purpose is to execute jobs at periodic
intervals. cron , like atd , should be started during system boot using an init script.
No action is required on your side, though, because cron and atd are essential parts
of a Linux system. All major distributions install them by default.
   Every user has their own task list (commonly called crontab), which is stored in the /var/spool/cron/crontabs directory (on Debian GNU/Linux and Ubuntu; on SUSE: /var/spool/cron/tabs, on Red Hat: /var/spool/cron) under that user’s name. The commands described there are executed with that user’s permissions.

B You do not have direct access to your task lists in the cron directory, so you
  will have to use the crontab utility instead (see below). See also: Exercise 19.6.

    crontab files are organised by lines; every line describes a (recurring) point in time and a command that should be executed at that time. Empty lines and comments (starting with a “#”) will be ignored. The remaining lines consist of five time fields and the command to be executed; the time fields describe the minute (0–59), hour (0–23), day of month (1–31), month (1–12 or the English name), and weekday (0–7, where 0 and 7 stand for Sunday, or the English name), respectively, at which the command is to be executed. Alternatively, an asterisk (“*”) is allowed, which means “whatever”. For example,

58 17 * * * echo "News is coming on soon"

means that the command will be executed daily at 5.58 P. M. (day, month, and weekday are arbitrary).
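
Such a line splits into its five time fields plus the command. A naive shell sketch illustrates the layout (globbing must be disabled so the “*” fields survive; real crontab parsing also handles lists, ranges, and variable assignments):

```shell
# Split a crontab line into its fields (a simplification):
line='58 17 * * * echo "News is coming on soon"'
set -f                   # no pathname expansion, so "*" stays literal
set -- $line             # word-split into positional parameters
echo "minute=$1 hour=$2 day=$3 month=$4 weekday=$5"
# prints: minute=58 hour=17 day=* month=* weekday=*
```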

B The command will be executed whenever hour, minute, and month match
  exactly and at least one of the two day specifications—day of month or
  weekday—applies. The specification

       1 0 13 * 5 echo "Shortly after midnight"

      says that the message will be output on any 13th of the month as well as
      every Friday, not just every Friday the 13th.

B The final line of a crontab file must end in a newline character, lest it be
  ignored.

   In the time fields, cron accepts not just single numbers, but also comma-separated lists. The “0,30” specification in the minute field would thus lead to the command being executed every “full half” hour. Besides, ranges can be specified: “8-11” is equivalent to “8,9,10,11”, “8-10,14-16” corresponds to “8,9,10,14,15,16”.

                                   Also allowed is a “step size” in ranges. “0-59/10 ” in the minute field is equivalent
                                   to “0,10,20,30,40,50 ”. If—like here—the full range of values is being covered, you
                                   could also write “*/10 ”.
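
The step-size arithmetic is easy to check; seq spells out the same list of minutes:

```shell
# Minutes covered by "0-59/10" (or "*/10") in the minute field:
seq 0 10 59 | paste -sd, -
# prints: 0,10,20,30,40,50
```
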
   The names allowed in month and weekday specifications each consist of the first three letters of the English month or weekday name (e. g., may, oct, sun, or wed). Ranges and lists of names are not permissible.
   The rest of the line denotes the command to be executed, which will be passed by cron to /bin/sh (or the shell specified in the SHELL variable, see below).

                                    B Percent signs (% ) within the command must be escaped using a backslash
                                      (as in “\% ”), lest they be converted to newline characters. In that case, the
                                      command is considered to extend up to the first (unescaped) percent sign;
                                      the following “lines” will be fed to the command as its standard input.

B By the way: If you as the system administrator would rather not have a
  command execution logged using syslogd (as cron is wont to do), you can
  suppress this by putting a “-” as the first character of the line.

   Besides commands with repetition specifications, crontab lines may also include assignments to environment variables. These take the form “⟨variable⟩=⟨value⟩” (where, unlike in the shell, there may be spaces before and after the “=”). If the ⟨value⟩ contains spaces, it should be surrounded by quotes. The following variables are pre-set automatically:
                                    SHELL   This shell is used to execute the commands. The default is /bin/sh , but other
                                            shells are allowed as well.

                                    LOGNAME   The user name is taken from /etc/passwd and cannot be changed.
                                    HOME    The home directory is also taken from /etc/passwd . However, changing its
                                             value is allowed.
                                    MAILTO cron    sends e-mail containing command output to this address (by default,
                                            they go to the owner of the crontab file). If cron should send no messages at
                                            all, the variable must be set to a null value, i. e., MAILTO="" .
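
Putting the pieces together, a user’s crontab might look like this (the script path and job choices are made up for illustration):

```
SHELL=/bin/bash
MAILTO=""                      # no mail, ever

# Mondays to Fridays at 7.30 A.M., fetch mail (hypothetical script):
30 7 * * 1-5   /home/hugo/bin/fetch-mail

# Every ten minutes, append a timestamp to a log file:
*/10 * * * *   date >> /tmp/stamp.log
```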

                                    19.3.2       System-Wide Task Lists
In addition to the user-specific task lists, there is also a system-wide task list. This resides in /etc/crontab and belongs to root, who is the only user allowed to change it. /etc/crontab’s syntax is slightly different from that of the user-specific crontab files; between the time fields and the command to be executed there is the name of the user with whose privileges the command is supposed to be run.

                                    B Various Linux distributions support a /etc/cron.d directory; this directory
                                      may contain files which are considered “extensions” of /etc/crontab . Soft-
                                      ware packages installed via the package management mechanism find it
                                      easier to make use of cron if they do not have to add or remove lines to
                                      /etc/crontab .
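
A package might thus ship a file of its own instead of editing /etc/crontab. Note the additional user-name field, just as in /etc/crontab itself (the file name and path here are invented):

```
# Hypothetical /etc/cron.d/nightly-backup:
15 3 * * *   root   /usr/local/sbin/nightly-backup
```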

                                    B Another popular extension are files called /etc/cron.hourly , /etc/cron.daily
                                      and so on. In these directories, software packages (or the system admin-
                                      istrator) can deposit files whose content will be executed hourly, daily, …
                                      These files are “normal” shell scripts rather than crontab -style files.

    cron reads its task lists—from user-specific files, the system-wide /etc/crontab, and the files within /etc/cron.d, if applicable—once on starting and then keeps them in memory. However, the program checks every minute whether any crontab files have changed. The “mtime”, the last-modification time, is used for this. If cron does notice some modification, the task list is automatically reconstructed. In this case, no explicit restart of the daemon is necessary.

C 19.6 [2] Why are users not allowed to directly access their task lists in /var/
  spool/cron/crontabs (or wherever your distribution keeps them)? How does
  crontab access these files?

C 19.7 [1] How can you arrange for a command to be executed on Friday, the
  13th, only?

C 19.8 [3] How does the system ensure that the tasks in /etc/cron.hourly , /etc/
  cron.daily , … are really executed once per hour, once per day, etc.?

19.3.3     Access Control
Which users may work with cron to begin with is specified, in a manner similar
to that of at , in two files. The /etc/cron.allow file (sometimes /var/spool/cron/allow )
lists those users who are entitled to use cron . If that file does not exist but the /etc/
cron.deny (sometimes /var/spool/cron/deny ) file does, that file lists those users who
may not enjoy automatic job execution. If neither of the files exists, it depends on
the configuration whether only root may avail himself of cron ’s services or whether
cron is “free for all”, and any user may use it.

19.3.4     The crontab Command
Individual users cannot change their crontab files manually, because the system
hides these files from them. Only the system-wide task list in /etc/crontab is subject
to root ’s favourite text editor.
   Instead of invoking an editor directly, all users should use the crontab command. This lets them create, inspect, modify, and remove task lists. With

$ crontab -e

you can edit your crontab file using the editor which is mentioned in the VISUAL or
EDITOR environment variables—alternatively, the vi editor. After the editor termi-
nates, the modified crontab file is automatically installed. Instead of the -e option,
you may also specify the name of a file whose content will be installed as the task
list. The “- ” file name stands for standard input.
    With the -l option, crontab outputs your crontab file to standard output; with
the -r option, an existing task list is deleted with prejudice.

B With the “-u ⟨user name⟩” option, you can refer to another user (expect to
  be root to do so). This is particularly important if you are using su ; in this
  case you should always use -u to ensure that you are working on the correct
  crontab file.

C 19.9 [!1] Use the crontab program to register a cron job that appends the
  current date to the file /tmp/date.log once per minute. How can you make it
  append the date every other minute?

C 19.10 [1] Use crontab to print the content of your task list to the standard
  output. Then delete your task list.

C 19.11 [2] (For administrators:) Arrange that user hugo may not use the cron
  service. Check that your modification is effective.

      19.3.5    Anacron
      Using cron you can execute commands repeatedly at certain points in time. This
      obviously works only if the computer is switched on at the times in question –
      there is little point in configuring a 2am cron job on a workstation PC when that
      PC is switched off outside business hours to save electricity. Mobile computers,
      too, are often powered on or off at odd times, which makes it difficult to schedule
      the periodic automated clean-up tasks a Linux system needs.
         The anacron program (originally by Itai Tzur, now maintained by Pascal Hakim),
      like cron , can execute jobs on a daily, weekly, or monthly basis. (In fact, arbitrary
      periods of 𝑛 days are fine.) The only prerequisite is that, on the day in question,
      the computer be switched on long enough for the jobs to be executed—the exact
      time of day is immaterial. However, anacron is activated at most once a day; if you
      need a higher frequency (hours or minutes) there is no way around cron .

      B Unlike cron , anacron is fairly primitive as far as job management is concerned.
        With cron , potentially every user can create jobs; with anacron , this is the
        system administrator’s privilege.

   The jobs for anacron are specified in the /etc/anacrontab file. In addition to the customary comments and blank lines (which will be ignored) it may contain assignments to environment variables of the form

⟨variable⟩ = ⟨value⟩

and job descriptions of the form

      7 10 weekly run-parts /etc/cron.weekly

      where the first number (here 7 ) stands for the period (in days) between invocations
      of the job. The second number (10 ) denotes how many minutes after the start of
      anacron the job should be launched. Next is a name for the job (here, weekly ) and
      finally the command to be executed. Overlong lines can be wrapped with a “\ ” at
      the end of the line.
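
A small /etc/anacrontab might therefore look like this (the variable settings are optional; run-parts simply executes all scripts in the given directory):

```
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# period  delay  job-identifier  command
1         5      cron.daily      run-parts /etc/cron.daily
7         10     cron.weekly     run-parts /etc/cron.weekly
@monthly  15     cron.monthly    run-parts /etc/cron.monthly
```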

      B The job name may contain any characters except white space and the slash.
        It is used to identify the job in log messages, and anacron also uses it as the
        name of the file in which it logs the time the job was last executed. (These
        files are usually placed in /var/spool/anacron .)

         When anacron is started, it reads /etc/anacrontab and, for each job, checks
      whether it was run within the last 𝑡 days, where 𝑡 is the period from the job
      definition. If not, then anacron waits the number of minutes given in the job
      definition and then launches the shell command.
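
The core of that check amounts to comparing a timestamp file’s age against the job’s period. A sketch in shell (GNU date and touch assumed; a scratch file stands in for the real timestamp file in /var/spool/anacron):

```shell
# Is a weekly (period 7 days) job due? Pretend it last ran 10 days ago.
dir=$(mktemp -d)
stamp="$dir/weekly"                  # stand-in for /var/spool/anacron/weekly
period=7
touch -d '10 days ago' "$stamp"
age=$(( ( $(date +%s) - $(date -r "$stamp" +%s) ) / 86400 ))
[ "$age" -ge "$period" ] && echo "job is due"
rm -r "$dir"
```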

      B You can specify a job name on anacron ’s command line to execute only that
        job (if any). Alternatively, you can specify shell search patterns on the com-
        mand line in order to launch groups of (skilfully named) jobs with one
        anacron invocation. Not specifying any job names at all is equivalent to the
        job name, “* ”.

      B You may also specify the time period between job invocations symbolically:
        Valid values include @daily , @weekly , @monthly , @yearly and @annually (the last
        two are equivalent).

B In the definition of an environment variable, white space to the left of the “=”
  is ignored. To the right of the “=”, it becomes part of the variable’s value.
  Definitions are valid until the end of the file or until the same variable is
  redefined.

B Some “environment variables” have special meaning to anacron. With
  RANDOM_DELAY, you can specify an additional random delay¹ for the job launches:
  When you set the variable to a number 𝑡, then a random number of minutes
  between 0 and 𝑡 will be added to the delay given in the job description.
  START_HOURS_RANGE lets you denote a range of hours (on a clock) during which
  jobs will be started. Something like

      START_HOURS_RANGE=10-12

  allows new jobs to be started only between 10am and 12pm. Like cron,
  anacron sends job output to the address given by the MAILTO variable,
  otherwise to the user executing anacron (usually root).

   Usually anacron executes the jobs independently and without attention to over-
laps. Using the -s option, jobs are executed “serially”, such that anacron starts a
new job only when the previous one is finished.
   Unlike cron , anacron is not a background service, but is launched when the sys-
tem is booted in order to execute any leftover jobs (the delay in minutes is used to
postpone the jobs until the system is running properly, in order to avoid slowing
down the start procedure). Later on you can execute anacron once a day from cron
in order to ensure that it does its thing even if the system is running for a longer
period of time than normally expected.

B It is perfectly feasible to install cron and anacron on the same system. While
  anacron usually executes the jobs in /etc/cron.daily , /etc/cron.weekly , and /etc/
  cron.monthly that are really meant for cron , the system ensures that anacron
  does nothing while cron is active. (See also Exercise 19.13.)

C 19.12 [!2] Convince yourself that anacron is working as claimed. (Hint: If
  you don’t want to wait for days, try cleverly manipulating the time stamps
  in /var/spool/anacron .)

C 19.13 [2] On a long-running system that has both cron and anacron installed,
  how do you avoid anacron interfering with cron ? (Hint: Examine the content
  of /etc/cron.daily and friends.)

Commands in this Chapter
anacron Executes periodic job even if the computer does not run all the time
                                                                anacron (8) 298
at      Registers commands for execution at a future point in time at (1) 292
atd     Daemon to execute commands in the future using at            atd (8) 294
atq     Queries the queue of commands to be executed in the future
                                                                     atq (1) 293
atrm    Cancels commands to be executed in the future               atrm (1) 294
batch   Executes commands as soon as the system load permits batch (1) 293
crontab Manages commands to be executed at regular intervals crontab (1) 297
  1 Duh!
Summary

       • With at , you can register commands to be executed at some future (fixed)
         point in time.
       • The batch command allows the execution of commands as soon as system
         load allows.
       • atq and atrm help manage job queues. The atd daemon causes the actual
         execution of jobs.
       • Access to at and batch is controlled using the /etc/at.allow and /etc/at.deny
         files.
       • The cron daemon allows the periodic repetition of commands.
       • Users can maintain their own task lists (crontab s).
       • A system-wide task list exists in /etc/crontab and—on many distribu-
         tions—in the /etc/cron.d directory.
       • Access to cron is managed similarly to at , using the /etc/cron.allow and /etc/
         cron.deny files.
       • The crontab command is used to manage crontab files.

20 System Logging

20.1   The Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
20.2   The Syslog Daemon . . . . . . . . . . . . . . . . . . . . . . . . 302
20.3   Log Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
20.4   Kernel Logging . . . . . . . . . . . . . . . . . . . . . . . . . 306
20.5   Extended Possibilities: Rsyslog . . . . . . . . . . . . . . . . . 306
20.6   The “next generation”: Syslog-NG . . . . . . . . . . . . . . . . 310
20.7   The logrotate Program . . . . . . . . . . . . . . . . . . . . . . 314

Goals
   • Knowing the syslog daemon and how to configure it
   • Being able to manage log files using logrotate
   • Understanding how the Linux kernel handles log messages

Prerequisites
   • Basic knowledge of the components of a Linux system
   • Handling configuration files


                         20.1      The Problem
                         Application programs need to tell their users something now and then. The com-
                         pletion of a task or an error situation or warning must be reported in a suitable
                         manner. Text-oriented programs output appropriate messages on their “termi-
                         nal”; GUI-based programs might use “alert boxes” or status lines whose contents
                         change.
                            The operating system kernel and the system and network services running in
                         the background, however, are not connected to user terminals. If such a process
                         wants to output a message, it might write it to the system console’s screen; on X11,
                         such messages might show up in the xconsole window.
                            In multi-user mode, writing a system message to the system console only is
                         not sufficient. Firstly, it is not clear that the message will actually be read by root ;
                         secondly, these screen messages cannot be saved and may easily get lost.

                         20.2      The Syslog Daemon
                         The solution to this problem is the syslog daemon, or syslogd . Instead of
                         outputting a message directly, system messages with a specific meaning can be
                         output using the syslog() function, which is part of the Linux C runtime library.
                         Such messages are accepted by syslogd via the local socket /dev/log .

                         B Kernel messages are really handled by a different program called klogd .
                           This program preprocesses the messages and usually passes them along to
                           syslogd . See Section 20.4.

                         syslogd proves very useful when debugging. It logs the different system messages
                         and is—as its name suggests—a daemon program. The syslogd program is usually
                         started via an init script while the system is booted. When it receives messages, it
                         can write them to a file or send them on across the network to another computer
                         which manages a centralised log.

                         B The common distributions (Debian GNU/Linux, Ubuntu, Red Hat Enter-
                           prise Linux, Fedora, openSUSE, …) have all been using, for various lengths
                           of time, a package called “Rsyslog”, which is a more modern implementa-
                           tion of a syslogd with more room for configuration. The additional capabil-
                           ities are, however, not essential for getting started and/or passing the LPI
                           exam. If you skip the first part of the Rsyslog configuration file, the remain-
                           der corresponds, to a very large extent, to what is discussed in this chapter.
                           There is more about Rsyslog in Section 20.5.

                                Certain versions of the Novell/SUSE distributions, in particular the SUSE
                                Linux Enterprise Server, use the Syslog-NG package instead of syslogd .
                                This is configured in a substantially different manner. For
                               the LPIC-1 exam, you need to know that Syslog-NG exists and roughly what
                               it does; see Section 20.6.

                             The administrator decides what to do with individual messages. The configuration
                         file /etc/syslog.conf specifies which messages go where.

                         B By default, Rsyslog uses /etc/rsyslog.conf as its configuration file. This is
                            largely compatible with what syslogd would use. Simply ignore all lines
                            starting with a dollar sign ($ ).

                            The configuration file consists of two columns and might look like this:

                         kern.warn;*.err;authpriv.none      /dev/tty10
                         kern.warn;*.err;authpriv.none     |/dev/xconsole
                         *.emerg                            *

                                Table 20.1: syslogd facilities

 Facility    Meaning
 authpriv    Confidential security subsystem messages
   cron      Messages from cron and at
  daemon     Messages from daemon programs with no more specific facility
    ftp      FTP daemon messages
   kern      System kernel messages
    lpr      Printer subsystem messages
   mail      Mail subsystem messages
   news      Usenet news subsystem messages
  syslog     syslogd messages
   user      Messages about users
   uucp      Messages from the UUCP subsystem
  local 𝑟    (0 ≤ 𝑟 ≤ 7) Freely usable for local messages

               Table 20.2: syslogd priorities (with ascending urgency)

 Priority    Meaning
   none      No priority in the proper sense—serves to exclude all messages from
             a certain facility
   debug     Message about internal program states when debugging
    info     Logging of normal system operations
  notice     Documentation of particularly noteworthy situations during normal
             system operations
 warning     (or warn ) Warnings about occurrences which are not serious but are
             no longer part of normal operations
    err      Error messages of all kinds
   crit      Critical error messages (the dividing line between this and err is not
             strictly defined)
  alert      “Alarming” messages requiring immediate attention
  emerg      Final message before a system crash

*.=warn;*.=err                        -/var/log/warn
*.crit                                 /var/log/warn
*.*;mail.none;news.none               -/var/log/messages

The first column of each line determines which messages will be selected, and the
second column says where these messages go. The first column’s format is

⟨facility⟩. ⟨priority⟩[; ⟨facility⟩. ⟨priority⟩]…

where the ⟨facility⟩ denotes the system program or component giving rise to the
message. This could be the mail server, the kernel itself or the programs managing
access control to the system. Table 20.1 shows the valid facilities. If you specify
an asterisk (“* ”) in place of a facility, this serves as placeholder for any facility. It
is not easily possible to define additional facilities; the “local” facilities local0 to
local7 should, however, suffice for most purposes.
    The ⟨priority⟩ specifies how serious the message is. The valid priorities are
summarised in Table 20.2.

B Who gets to determine what facility or priority is attached to a message?
  The solution is simple: Whoever uses the syslog() function, namely the de-
  veloper of the program in question, must assign a facility and priority to
  their code’s messages. Many programs allow the administrator to at least
  redefine the message facility.

                                      A selection criterion of the form mail.info means “all messages of the mail sub-
                                   system with a priority of info and above”. If you just want to capture messages
                                  of a single priority, you can do this using a criterion such as mail.=info . The as-
                                  terisk (“* ”) stands for any priority (you could also specify “debug ”). A preceding !
                                  implies logical negation: mail.!info deselects messages from the mail subsystem
                                  at a priority of info and above; this makes most sense in combinations such as
                                  mail.*;mail.!err , to select certain messages of low priority. ! and = may be com-
                                  bined; mail.!=info deselects (exactly) those messages from the mail subsystem with
                                  priority info .
                                      You may also specify multiple facilities with the same priority, as in mail,news.info ;
                                   this expression selects messages of priority info and above that belong to the mail
                                   or news facilities.
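To illustrate, here are a few combined selectors as they might appear in /etc/syslog.conf (the target file names are made up for the example):

```
# Mail messages below priority err only:
mail.*;mail.!err        -/var/log/mail.info
# Exactly priority info, from mail or news:
mail,news.=info         -/var/log/spool.info
# Exactly priority warn from anywhere except the kernel:
*.=warn;kern.none       /var/log/oddities
```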
                                      Now for the right-hand column, the messages’ targets. Log messages can be
                                   handled in different ways:
                                       • They can be written to a file. The file name must be specified as an absolute
                                         path. If there is a - in front of the path, then unlike normal syslogd opera-
                                         tion, the file will not immediately be written to on disk. This means that
                                         in case of a system crash you might lose pending log messages—for fairly
                                         unimportant messages such as those of priority notice and below, or for mes-
                                         sages from “chatty” facilities such as mail and news, this may not really be
                                         a problem.
                                          The file name may also refer to a device file (e. g., /dev/tty10 in the example
                                          above).
                                       • Log messages can be written to a named pipe (FIFO). The FIFO name must
                                         be given as an absolute path with a preceding “| ”. One such FIFO is /dev/
                                         xconsole .

                                       • They can be passed across the network to another syslogd . This is specified
                                         as the name or IP address of the target system with a preceding @ character.
                                         This is especially useful if a critical system state occurs that renders the local
                                          log file inaccessible; to deprive malicious crackers of a way to hide their
                                         traces; or to collect the log messages of all hosts in a network on a single
                                         computer and process them there.
                                         On the target host, the syslogd must have been started using the -r (“remote”)
                                         option in order to accept forwarded messages. How to do that depends on
                                         your Linux distribution.
                                       • They can be sent directly to users. The user names in question must be given
                                         as a comma-separated list. The message will be displayed on the listed
                                         users’ terminals if they are logged in when the message arrives.
                                       • They can be sent to all logged-in users by specifying an asterisk (“* ”) in place
                                         of a login name.
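In summary, the possible targets might be combined in a configuration like this sketch (the host name and user names are invented for the example):

```
# To a file (the leading “-” suppresses immediate syncing):
*.info                  -/var/log/messages
# To a device file:
kern.warn                /dev/tty10
# To a named pipe:
kern.warn               |/dev/xconsole
# Across the network to another syslogd:
*.crit                  @loghost.example.com
# To specific users’ terminals:
*.emerg                 root,hugo
# To all logged-in users:
*.emerg                 *
```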
                                     As a rule, after installation your system already contains a running syslogd and
                                    a fairly usable /etc/syslog.conf . If you want to log more messages, for example
                                    because specific problems are occurring, you should edit the syslog.conf file and
                                    then send syslogd a SIGHUP signal to get it to re-read its configuration file.

                                    B You can test the syslogd mechanism using the logger program. An invocation
                                      of the form
                                          $ logger -p local0.err -t TEST "Hello World"

                                         produces a log message of the form

                                          Aug   7 18:54:34 red TEST: Hello World

                                         Most modern programming languages make it possible to access the
                                         syslog() function.

C 20.1 [2] Find out when somebody last assumed root ’s identity using su .

C 20.2 [!2] Reconfigure syslogd such that, in addition to the existing configu-
  ration, it writes all (!) messages to a new file called /var/log/test . Test your
  configuration.

C 20.3 [3] (Requires two computers and a working network connection.) Re-
  configure syslogd on the first computer such that it accepts log messages
  from the network. Reconfigure syslogd on the second computer such that it
  sends messages from facility local0 to the first computer. Test the configuration.

C 20.4 [2] How can you implement a logging mechanism that is safe from
  attackers that assume control of the logging computer? (An attacker can
  always prevent further messages from being logged. We want to ensure
  that the attacker cannot change or delete messages that have already been
  written.)

20.3      Log Files
Log files are generally created below /var/log . The specific file names vary—refer
to the syslog.conf file if you’re in doubt. Here are some examples:

      Debian GNU/Linux collects all messages except those to do with authentication
      in the /var/log/syslog file. There are separate log files for the auth , daemon ,
      kern , lpr , mail , user , and uucp facilities, predictably called auth.log etc. On top
      of that, the mail system uses files called mail.info , mail.warn , and mail.err ,
      which respectively contain only those messages with priority info etc. (and
      above). Debugging messages from all facilities except for authpriv , news , and
      mail end up in /var/log/debug , and messages of priority info , notice , and warn
      from all facilities except those just mentioned as well as cron and daemon in
      /var/log/messages .

      The defaults on Ubuntu correspond to those on Debian GNU/Linux.

      On Red Hat distributions, all messages with a priority of info or above,
      except those from authpriv and cron , are written to /var/log/messages , while
      messages from authpriv are written to /var/log/secure and those from cron to
      /var/log/cron . All messages from the mail system end up in /var/log/maillog .

      OpenSUSE logs all messages except those from iptables and the news and
      mail facilities to /var/log/messages . Messages from iptables go to /var/log/
      firewall . Messages that are not from iptables and have priority warn , err ,
      or crit are also written to /var/log/warn . Furthermore, there are the /var/
      log/localmessages file for messages from the local* facilities, the /var/log/
      NetworkManager file for messages from the NetworkManager program, and the
      /var/log/acpid file for messages from the ACPI daemon. The mail system
      writes its log both to /var/log/mail (all messages) and to the files mail.info ,
      mail.warn , and mail.err (the latter for the priorities err and crit ),
      while the news system writes its log to news/news.notice , news/news.err , and
      news/news.crit (according to the priority)—there is no overview log file for
      news. (If you think this is inconsistent and confusing, you are not alone.)

A Some log files contain messages concerning users’ privacy and should thus
  only be readable by root . In most cases, the distributions tend to err towards
  caution and restrict the access rights to all log files.

                            You can peruse the log files created by syslogd using less ; tail lends itself to
                        long files (possibly using the -f option). There are also special tools for reading
                        log files, the most popular of which include logsurfer and xlogmaster .
                            The messages written by syslogd normally contain the date and time, the host
                        name, a hint about the process or component that created the message, and the
                        message itself. Typical messages might look like this:

                             Mar 31 09:56:09 red modprobe: modprobe: Can't locate ...
                             Mar 31 11:10:08 red su: (to root) user1 on /dev/pts/2
                             Mar 31 11:10:08 red su: pam-unix2: session started for ...

                                You can remove an overly large log file using rm or save it first by renaming it
                             with an extension like .old . A new log file will be created when syslogd is next
                             restarted. However, there are more convenient methods.
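The manual procedure just described can be sketched as follows (using a scratch file under /tmp so the commands are harmless to try; on a real system you would operate on the actual file in /var/log and then restart or signal syslogd ):

```shell
LOG=/tmp/demo.log               # stand-in for a real log file
echo "old message" > "$LOG"     # pretend the log has grown too large

mv "$LOG" "$LOG.old"            # save the old log under a new name
: > "$LOG"                      # recreate an empty log file
# A real syslogd would still have the renamed file open and must be
# told (e.g. via SIGHUP) to reopen its output files.
```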

                             20.4     Kernel Logging
                             The Linux kernel does not send its log messages to syslogd but puts them into
                             an internal “ring buffer”. They can be read from there in various ways—via a
                             specialised system call, or the /proc/kmsg “file”. Traditionally, a program called
                             klogd is used to read /proc/kmsg and pass the messages on to syslogd .

                             B Rsyslog gets by without a separate klogd program, because it takes care of
                               kernel log messages directly by itself. Hence, if you can’t find a klogd on your
                                system, this is very likely because it is using rsyslog.
                                During system startup, syslogd and possibly klogd are not immediately available—
                             they must be started as programs and thus cannot handle the kernel’s start mes-
                             sages directly. The dmesg command makes it possible to access the kernel log buffer
                             retroactively and look at the system start log. With a command such as

                             # dmesg >boot.msg

                             you can write these messages to a file and send it to a kernel developer.

                             B Using the dmesg command you can also delete the kernel ring buffer (-c op-
                               tion) and set a priority for direct notifications: messages meeting or exceed-
                               ing this priority will be sent to the console immediately (-n option). Kernel
                               messages have priorities from 0 to 7 corresponding to the syslogd priorities
                               from emerg down to debug . The command

                                    # dmesg -n 1

                                   for example causes only emerg messages to be written to the console directly.
                                   All messages will be written to /proc/kmsg in every case—here it is the job of
                                   postprocessing software such as syslogd to suppress unwanted messages.

                              C 20.5 [2] What does dmesg output tell you about the hardware in your com-
                                puter?

                             20.5     Extended Possibilities: Rsyslog
                             Rsyslog by Rainer Gerhards has replaced the traditional BSD syslogd on most com-
                             mon Linux distributions. Besides greater efficiency, rsyslog’s goal is supporting
                             various sources and sinks for log messages. For example, it writes messages not
                             just to text files and terminals, but also a wide selection of databases.

B According to its own web site, “rsyslog” stands for “rocket-fast syslog”.
  Of course one should not overestimate the value of that kind of self-
  aggrandisement, but in this case the self-praise is not entirely unwarranted.

   The basic ideas behind rsyslog are as follows:
   • “Sources” pass messages on to “rulesets”. There is one standard built-in
     ruleset (RSYSLOG_DefaultRuleset ), but you as the user get to define others.

   • Every ruleset may contain arbitrarily many rules (even none at all, even
     though that does not make a great deal of sense).
   • A rule consists of a “filter” and an “action list”. Filters make yes-no deci-
     sions about whether the corresponding action list will be executed.

   • For each message, all the rules in the ruleset will be executed in order from
     the first to the last (and no others). All rules will always be executed, no
     matter how the filter decisions go, although there is a “stop processing”
     statement.
   • An action list may contain many actions (at least one). Within an action
     list, no further filters are allowed. The actions determine what happens to
     matching log messages.
   • The exact appearance of log messages in the output may be controlled
     through “templates”.
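These ideas can be sketched in a small RainerScript ruleset; the ruleset name and file path here are made up for the example:

```
ruleset(name="example") {
    # one rule: a filter plus an action list
    if $msg contains "FOO" then {
        action(type="omfile" file="/var/log/foo.log")
        stop    # end processing of this message here
    }
}
```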
Rsyslog’s configuration can be found in the /etc/rsyslog.conf file. In this file you
may use three different styles of configuration setting in parallel:
   • The traditional /etc/syslog.conf syntax (“sysklogd”).
   • An obsolete rsyslog syntax (“legacy rsyslog”). You can recognise this by the
     commands that start with dollar signs ($ ).

   • The current rsyslog syntax (“RainerScript”). This is best suited for complex
     configurations.

The first two flavours are line-based. In the current syntax, line breaks are irrelevant.
   For very simple applications you can still—and should!—use the sysklogd
syntax (as discussed in the previous sections). If you want to set configuration
parameters or express complex control flows, RainerScript is more appropriate. You
should avoid the obsolete rsyslog syntax (even though various Linux distributions
still use it in their default configurations); unfortunately, various features of rsyslog
are only accessible using that syntax.

B As usual, empty lines and comment lines will be ignored. Comment lines
  include both lines (and parts of lines) that start with a # (the comment then
  stops at the end of the line) and C-style comments that reach from a /* ,
  disregarding line breaks, until a */ .
B C-style comments may not be nested1 , but # comments may occur inside
  C-style comments. That makes C-style comments particularly useful to
  “comment out” large swathes of a configuration file in order to make it invisible
  to rsyslog.

  Rsyslog offers various features that surpass those of BSD syslogd . For example,
you can use extended filter expressions for messages:

:msg, contains, "FOO"            /var/log/foo.log

  1 You   don’t get to do that in C, either, so it shouldn’t be a major nuisance.

      Extended filter expressions always consist of a colon at the left margin, a “prop-
      erty” that rsyslog takes from the message, a filter operator (here, contains ), and a
      search term. In our example, all log messages whose text contains the character
      sequence FOO will be written to the /var/log/foo.log file.

      B Apart from msg (the log message proper), the “properties” you may use
        include, for example, hostname (the name of the computer sending the mes-
        sage), fromhost (the name of the computer that forwarded the message to
        rsyslog), pri (the category and priority of the message as an undecoded
        number), pri-text (the category and priority as a text string, with the num-
        ber appended, as in “local0.err<133> ”), syslogfacility and syslogseverity as
        well as syslogfacility-text and syslogseverity-text for direct access to the
        category and priority, timegenerated (when the message was received) or
        inputname (the rsyslog module name of the source of the message). There are
        various others; look at rsyslog’s documentation.

      B The allowable comparison operators are contains , isequal , startswith , regex ,
        and eregex . These speak for themselves, except for the latter two—regex con-
        siders its parameter as a simple and eregex as an “extended” regular expres-
        sion according to POSIX. All comparison operators take upper and lower
        case into account.

      A The startswith comparison is useful because it is considerably more efficient
        than a regular expression that is anchored to the start of the message (as
        long as you’re looking for a constant string, anyway). You should, however,
        be careful, because what you consider the start of the message and what
        rsyslog thinks of that can be quite different. If rsyslog receives a message
        via the syslog service, this will, for example, look like

             <131>Jul 22 14:25:50 root: error found

            As far as rsyslog is concerned, msg does not start (as one might naively as-
            sume) at the e of error , but with the space character in front of it. So if you
            are looking for messages that start with error , you should say

             :msg, startswith, " error"   /var/log/error.log

       B There is a nice addition on the “action side” of simple rules: With traditional
         syslogd , you have already seen that an entry like

              *.*    @logserver.example.com

             will forward log messages to a remote host via the (UDP-based) syslog pro-
             tocol. With rsyslog, you may also write

              *.*    @@logserver.example.com

             to transmit log messages via TCP. This is potentially more reliable, especially
             if firewalls are involved.

      B At the other end of the TCP connection, of course, there must be a suitably
         configured rsyslog listening for messages. You can ensure this, for example,
         with lines like
              module(load="imtcp" MaxSessions="500")
              input(type="imtcp" port="514")

            In the obsolete syntax,

        $ModLoad imtcp
        $InputTCPMaxSessions 500
        $InputTCPServerRun 514

       does the same thing.

A Do consider that only the UDP port 514 is officially reserved for the syslog
  protocol. The TCP port 514 is really used for a different purpose2 . You can
  specify a different port just in case:

        *.*    @@logserver.example.com:10514

        (and that works for UDP, too, if necessary). The changes required on the
       server side will be easy for you to figure out on your own.
    The next level of complexity is filters based on expressions that may contain
arbitrary Boolean, arithmetic, or string operations. These always start with an if
at the very left of a new line:

if $syslogfacility-text == "local0" and $msg startswith " FOO"           
  and ($msg contains "BAR" or $msg contains "BAZ") 
  then /var/log/foo.log

(in your file this should all be on one line). With this rule, messages of category
local0 will be written to the /var/log/foo.log file as long as they start with FOO and
also contain either BAR or BAZ (or both). (Watch for the dollar signs at the start of
the property names.)
   Rsyslog supports a large number of modules that determine what should hap-
pen to log messages. You might, for example, forward important messages by
e-mail. To do so, you might put something like

template(name="mailBody" type="string" string="ALERT\\r\\n%msg%")
if $msg contains "disk error" then {
    action(type="ommail" server="" port="25"
           mailfrom="" mailto=""
           subject.text="disk error detected"
           body.enable="on" template="mailBody")
}

into your /etc/rsyslog.conf .

B If you have an older version of rsyslog (before 8.5.0) you will need to use the
  obsolete syntax to configure the ommail module. That might, for example,
  look like
        $ModLoad ommail
        $template mailSubject,"disk error detected"
        $template mailBody,"ALERT\\r\\n%msg%"
        $ActionMailSubject mailSubject
        $ActionExecOnlyOnceEveryInterval 3600
        if $msg contains "disk error" then :ommail:;mailBody
         $ActionExecOnlyOnceEveryInterval 0

   2 … even though nobody nowadays is still interested in the remote-shell service. Nobody reasonable, anyway.

                     B Rsyslog’s SMTP implementation is fairly primitive, since it supports neither
                       encryption nor authentication. This means that the mail server you specify
                       in the rsyslog configuration must be able to accept mail from rsyslog even
                       without encryption or authentication.

                       By the way, rsyslog can handle Linux kernel log messages directly. You simply
                     need to enable the imklog input module:

                      module(load="imklog")
                    or (obsolete syntax)

                     $ModLoad imklog

                    A separate klogd process is not necessary.
                       Detailed information on rsyslog is available, for example, in the online docu-
                    mentation [rsyslog].

                     C 20.6 [!3] (If your distribution doesn’t use rsyslog already.) Install rsyslog
                       and create a configuration that is as close to your existing syslogd configura-
                       tion as possible. Test it with (for example) logger . Where do you see room
                       for improvement?

                     C 20.7 [2] PAM, the login and authentication system, logs sign-ons and sign-
                       offs in the following format:

                           kdm: :0[5244]: (pam_unix) session opened for user hugo by (uid=0)
                           kdm: :0[5244]: (pam_unix) session closed for user hugo

                          Configure rsyslog such that whenever a particular user (e. g. you) logs on
                          or off, a message is displayed on the system administrator’s (root ’s) terminal
                          if they are logged on. (Hint: PAM messages appear in the authpriv category.)

                     C 20.8 [3] (Cooperate with another class member if necessary.) Configure
                       rsyslog such that all log messages from one computer are passed to another
                       computer by means of a TCP connection. Test this connection using logger .

                    20.6      The “next generation”: Syslog-NG
                    Syslog-NG (“NG” for “new generation”) is a compatible, but extended reimplementation
                    of a syslog daemon by Balazs Scheidler. The main advantages of
                    Syslog-NG compared to the traditional syslogd include:
                        • Filtering of messages based on their content (not just categories and priorities)
                        • Chaining of several filters is possible
                        • A more sophisticated input/output system, including forwarding by TCP
                          and to subprocesses
                    The program itself is called syslog-ng .

                     B For syslog clients there is no difference: You can replace a syslogd with
                       Syslog-NG without problems.

                       You can find information about Syslog-NG in its manual pages as well as on
                    [syslog-ng]. This includes documentation as well as a very useful FAQ collection.

Configuration file   Syslog-NG reads its configuration from a file, normally /etc/
syslog-ng/syslog-ng.conf .
   Unlike syslogd , Syslog-NG distinguishes various “entry types” in its configuration
file.
Global options These settings apply to all message sources or the Syslog-NG
     daemon itself.
Message sources Syslog-NG can read messages in various ways: from Unix-
    domain sockets or UDP like syslogd , but also, for example, from files, FIFOs,
    or TCP sockets. Every message source is assigned a name.
Filters Filters are Boolean expressions based on internal functions that can, for
      example, refer to the origin, category, priority, or textual content of a log
      message. Filters are also named.
Message sinks Syslog-NG includes all logging methods of syslogd and then some.
Log paths A “log path” connects one or several message sources, filters, and
     sinks: If messages arrive from the sources and pass the filter (or filters),
     they will be forwarded to the specified sink(s). At the end of the day, the
     configuration file consists of a number of such log paths.
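Putting these entry types together, a minimal configuration that connects one source, one filter, and one sink via a log path might look like this (the names s_local , f_mail , and d_mail are arbitrary identifiers chosen for this sketch):

```
source s_local { unix-stream("/dev/log"); internal(); };

filter f_mail { facility(mail); };

destination d_mail { file("/var/log/mail.log"); };

# The log path ties source, filter, and destination together
log { source(s_local); filter(f_mail); destination(d_mail); };
```

Messages arriving on /dev/log that carry the mail category pass the filter and end up in /var/log/mail.log ; everything else is ignored by this particular log path.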

Options You can specify various “global” options that control Syslog-NG’s gen-
eral behaviour or determine default values for individual message sources or
sinks (specific options for the sources or sinks take priority). A complete list is
part of the Syslog-NG documentation. The general options include various settings
for handling DNS and the forwarding or rewriting of messages’ sender host
names.

B If Syslog-NG on host 𝐴 receives a message from host 𝐵, it checks the
  keep_hostnames() option. If its value is yes , 𝐵 will be kept as the host name for
  the log. If not, the outcome depends on the chain_hostnames() option; if this
  is no , then 𝐴 will be logged as the host name; if it is yes , then Syslog-NG will
  log 𝐵/𝐴. This is particularly important if the log is then forwarded to yet
  another host.
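The host-name handling just described is governed by global options. A sketch of such an options block (the values shown here are purely illustrative):

```
options {
    keep_hostnames(no);    # replace the sender's host name when relaying
    chain_hostnames(yes);  # log "B/A" for a message from B relayed via A
    use_dns(no);           # do not resolve sender IP addresses via DNS
};
```

Disabling DNS lookups, for instance, avoids log stalls when the name server is unreachable, at the price of seeing IP addresses instead of host names.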

Message Sources In Syslog-NG, message sources are defined using the source
keyword. A message source collects one or more “drivers”. To accomplish the
same as a “normal” syslogd , you would include the line
source src { unix-stream("/dev/log"); internal(); };

in your configuration; this tells Syslog-NG to listen to the Unix-domain socket
/dev/log . internal() refers to messages that Syslog-NG creates by itself.

B A Syslog-NG message source corresponding to the -r option of syslogd might
  look like this:
       source s_remote { udp(ip(0.0.0.0) port(514)); };

       Since that is the default setting,

       source s_remote { udp(); };

       would also do.

B With ip() , you can let Syslog-NG listen on specific local IP addresses only.
  With syslogd , this isn’t possible.
   The following source specification lets Syslog-NG replace the klogd program:
source kmsg { file("/proc/kmsg" log_prefix("kernel: ")); };

B All message sources support another parameter, log_msg_size() , which spec-
  ifies the maximum message length in bytes.
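Since Syslog-NG can also read from TCP sockets (which syslogd cannot), a TCP message source might be declared as in the following sketch (the port and the listening address are illustrative):

```
source s_tcp { tcp(ip(0.0.0.0) port(514)); };
```

Such a source is typically paired with a TCP forwarding sink on the sending host, so messages travel over a reliable connection instead of UDP.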

                            Table 20.3: Filtering functions for Syslog-NG

        Syntax                                   Description
        facility( ⟨category⟩[, ⟨category⟩ … ])   Matches messages with one of the listed
                                                 categories
        level( ⟨priority⟩[, ⟨priority⟩ … ])      Matches messages with one of the listed
                                                 priorities
        priority( ⟨priority⟩[, ⟨priority⟩ … ])   Same as level()
        program( ⟨regex⟩)                        Matches messages where the name of the
                                                 sending program matches ⟨regex⟩
        host( ⟨regex⟩)                           Matches messages whose sending host
                                                 matches ⟨regex⟩
        match( ⟨regex⟩)                          Matches messages which match the ⟨regex⟩
        filter( ⟨name⟩)                          Invokes another filtering rule and returns
                                                 its value
        netmask( ⟨IP address⟩/ ⟨netmask⟩)        Checks whether the IP address is in the
                                                 given network

      Filters Filters are used to sift through log messages or distribute them to various
      sinks. They rely on internal functions that consider specific aspects of messages;
       these functions can be joined using the logical operators and , or , and not . A list of
       possible functions is shown in Table 20.3.
          You might, for example, define a filter that matches all messages from host green
      containing the text error :

      filter f_green { host("green") and match("error"); };

       B With the level() (or priority() ) function, you can specify either one or more
        priorities separated by commas, or else a range of priorities like “warn ..
        emerg ”.
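A filter using such a priority range, combined via filter() with the f_green filter defined above, might be sketched as follows (the name f_green_urgent is made up for this example):

```
# Matches messages that pass f_green and also have priority warn or above
filter f_green_urgent { filter(f_green) and level(warn..emerg); };
```

This illustrates the filter chaining mentioned earlier: a named filter can serve as a building block inside other filters.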

      Message Sinks Like sources, sinks consist of various “drivers” for logging meth-
      ods. For example, you can write messages to a file:

      destination d_file { file("/var/log/messages"); };

      You can also specify a “template” that describes in which format the message
      should be written to the sink in question. When doing so, you can refer to
      “macros” that make various parts of the message accessible. For instance:

       destination d_file {
           file("/var/log/messages"
               template("$HOUR:$MIN:$SEC $TZ $HOST [$LEVEL] $MSG\n"));
       };
       The $YEAR , $MONTH , etc. macros will be replaced by the obvious values. $TZ is the
       current time zone, $LEVEL the message priority, and $MSG the message itself (including
       the sender’s process ID). A complete list of macros is part of Syslog-NG’s documentation.

      B The template_escape() parameter controls whether quotes (' and " ) should
        be “escaped” in the output. This is important if you want to feed the log
        messages to, say, an SQL server.
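A sink that uses template_escape() might be sketched like this (the file name and the INSERT statement are purely illustrative of feeding a log into an SQL loader):

```
destination d_sql {
    file("/var/log/sql.log"
        template("INSERT INTO logs VALUES ('$HOST', '$LEVEL', '$MSG');\n")
        template_escape(yes));
};
```

With escaping enabled, a stray quote inside a log message cannot break out of the quoted string in the generated SQL.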

   Unlike syslogd , Syslog-NG allows forwarding messages using TCP. This is not
just more convenient when firewalls are involved, but also ensures that no log
messages can get lost (which might happen with UDP). You could define a TCP
forwarding sink like this