Lucky's Bookshelf

Book Summary: Efficient Linux at the Command Line by Daniel J. Barrett

Posted on June 4, 2024
Topics: Software Engineering

Rating: 7.4/10.

Efficient Linux at the Command Line: Boost Your Command-Line Skills by Daniel J. Barrett

A book on the more advanced aspects of the bash shell and bash scripting: how child processes handle local and environment variables, the different forms of command substitution, why changing directories in a shell script doesn't persist after it exits, how xargs works, and other nuances that many developers use without fully understanding. It is useful for filling in the gaps and explaining the mechanics of these lesser-known corners of the Linux command line.

The second half of the book offers examples of building complex shell scripts, which I have mixed feelings about. The author tries to be clever by chaining commands together in esoteric ways rather than using normal control flow, producing what he calls "brash one-liners". I'm not sure of the point: most of these tasks are straightforward to write in 5-10 lines of Python, while a long chain of Linux utilities is difficult to understand. Each command takes single-character arguments like -I, -f, and -d, and the utilities operate line by line on fields separated by character delimiters. This often means referring to fixed field positions, like $4 in awk, instead of a meaningful name. Since everything is done with string manipulation, things break if a filename unexpectedly contains a space, and various hacks are needed to work around this kind of issue. The second half of this book feels more like a demonstration of why not to use bash for complex tasks than anything practical.
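The whitespace fragility mentioned above is easy to reproduce; a minimal sketch (filenames are illustrative):

```shell
# Work in a scratch directory; one filename deliberately contains a space.
work=$(mktemp -d)
cd "$work"
touch "my file.txt" "plain.txt"

# Naive: word-splitting the output of ls breaks "my file.txt" into two words.
naive_count=0
for f in $(ls); do naive_count=$((naive_count + 1)); done

# Robust: let the shell glob; each filename stays a single word.
glob_count=0
for f in *.txt; do glob_count=$((glob_count + 1)); done
```

Here the naive loop sees three words for two files, which is exactly the class of bug the book's one-liners must hack around.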

Chapter 1. Commands can be combined with the pipe operator; note that some commands, like ls, change their output when redirected (ls prints one filename per line instead of its usual multi-column layout). The cut command extracts columns by character position or delimiter, with the default delimiter being a tab. The uniq command can deduplicate adjacent lines or count them; cut, sort, and uniq are often combined, for example, to find duplicate files by hash.
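A small sketch of the cut and sort | uniq patterns (input values are illustrative):

```shell
# cut extracts fields by delimiter (the default delimiter is a tab).
user=$(echo 'alice:x:1000:1000' | cut -d: -f1)

# uniq only collapses *adjacent* duplicates, so sort must run first;
# uniq -c prefixes each line with its count, and sort -rn ranks by count.
top=$(printf 'b\na\nb\nb\na\n' | sort | uniq -c | sort -rn | head -n 1 | awk '{print $2}')
```

The same sort | uniq -c | sort -rn skeleton, fed with file hashes instead of letters, is the duplicate-file finder the chapter describes.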

Chapter 2. The shell handles filename globbing itself, so the program sees the expanded list of filenames exactly as if you had typed them manually. The * wildcard is most common, but ? matches a single character; this pattern matching works only on file and directory names. The syntax name=value (with no spaces around =) sets a variable, and $name accesses it. Like globs, variables are evaluated by the shell before being passed to a command. Use alias to shorten command names; an alias may even share the name of the command it wraps. The > operator redirects stdout to a file (2> redirects stderr), and < redirects a file into stdin. Single quotes treat everything inside as literal; double quotes treat most things as literal but still evaluate variables. The shell searches $PATH for the executable to run; put commands in .bashrc to run them at startup.
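The globbing, quoting, and redirection rules in one runnable sketch (filenames illustrative):

```shell
cd "$(mktemp -d)"
touch a.txt b.txt notes.log

# The shell expands the glob before echo runs; echo only sees the names.
txt_files=$(echo *.txt)

# name=value with no spaces around =; $name reads it back.
greeting='hello'
single=$(echo '$greeting')   # single quotes: everything literal
double=$(echo "$greeting")   # double quotes: variables still expand

# > redirects stdout to a file; < feeds a file to stdin.
echo 'one line' > out.txt
line_count=$(wc -l < out.txt)
```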

Chapter 3: The shell stores a history of commands, which you can view with the history command, and you can configure various behaviors, such as how many commands to store. The simplest way to access the history is by repeatedly pressing the up arrow key. History expansion offers quite cryptic expressions, such as !! for the previous command, to select specific entries from the history. Use Ctrl-r to search through the history and execute commands. By default, the shell uses Emacs shortcuts to edit commands, but it is possible to switch to Vim style.
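The configurable behaviors mentioned are controlled by shell variables set in .bashrc; the values below are illustrative, not recommendations from the book:

```shell
# In ~/.bashrc: control how much history bash keeps.
HISTSIZE=10000           # commands held in memory for this session
HISTFILESIZE=20000       # commands persisted in ~/.bash_history
HISTCONTROL=ignoredups   # drop consecutive duplicate entries
```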

Chapter 4: Navigating Directories. The cd command, by default, takes you to your home directory. You can use aliases to make cd navigate to frequently used directories. Setting the CDPATH variable allows you to cd into directories from anywhere. The command cd - toggles between two directories, while pushd and popd enable you to navigate directories as a stack.
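The directory-stack behavior can be sketched in a few lines (directory names illustrative):

```shell
base=$(mktemp -d)
mkdir -p "$base/src" "$base/docs"
cd "$base/src"

pushd "$base/docs" > /dev/null   # jump to docs, pushing src onto the stack
in_docs=$(basename "$PWD")
popd > /dev/null                 # pop the stack: back in src
in_src=$(basename "$PWD")
```

cd - offers the same round trip for exactly two directories; pushd/popd generalize it to an arbitrarily deep stack.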

Chapter 5. A sequence of numbers can be generated with the seq command or brace expansion like {1..10}. The find command recursively lists all files under a directory and can optionally execute a command on each one. grep searches within files and has many options; by default it uses regular expressions. tac prints lines in reverse order and rev reverses the characters within each line, while paste is the opposite of cut, joining files with tab separators. awk processes input line by line: if a line matches a pattern, it executes an action on it, making it good for line-wise reformatting; it can also perform calculations and handle data structures with arrays. sed performs regex-based find and replace operations.
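One-line demonstrations of the chapter's main tools (inputs illustrative):

```shell
# Generate a sequence with seq (brace expansion {1..3} is equivalent in bash).
nums=$(seq 3 | tr '\n' ' ')

# awk runs pattern { action } per line; $2 is the second whitespace field.
second=$(echo 'alpha beta gamma' | awk '{print $2}')

# awk arithmetic: sum the second column across all lines.
total=$(printf '1 10\n2 20\n3 30\n' | awk '{sum += $2} END {print sum}')

# sed: regex-based find and replace.
fixed=$(echo 'color' | sed 's/color/colour/')
```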

Chapter 6: Every command runs in a child process that receives a copy of the parent's environment, and nothing it changes propagates back. Therefore, setting a local variable inside a script or changing the directory has no effect outside once the script exits. Use export to turn a local variable into an environment variable that is copied to children. There are some small differences between the startup file (.bash_profile) and the initialization file (.bashrc) regarding whether aliases are visible to children, but they are mostly identical.
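Both rules are directly observable (variable names illustrative):

```shell
unexported='parent only'
export SHARED='copied to children'

# A child shell receives copies of exported variables only.
child_shared=$(bash -c 'echo "$SHARED"')
child_local=$(bash -c 'echo "$unexported"')   # empty: not exported

# A cd inside a subshell (the same applies to a script) does not move the parent.
before=$PWD
(cd /tmp)
after=$PWD
```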

Chapter 7: There are several ways of chaining commands. The && operator runs the next command only if the previous one succeeds, while || runs the second command only if the first fails; the ; operator runs all commands regardless of whether any fail. Use $(...) to substitute a command's output into another command line, and <(...) (process substitution) to present a command's output as a file when another command's input must come from a file. Wrap a command in sudo bash -c when it includes a redirection operator; otherwise the redirection is performed outside of sudo and fails. Piping into | bash lets you generate commands on stdout and execute them all. The xargs command runs a command for each line of input; a common use case is performing an action on every file in a directory, and the -0 option (paired with find -print0) correctly handles filenames with spaces. Append & to the end of a command to put the job in the background; the bg and fg commands move jobs to the background and foreground, respectively.
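The chaining and substitution operators in a runnable sketch (values illustrative):

```shell
# && and || short-circuit on success and failure respectively.
ok=$(true && echo yes)
fallback=$(false || echo recovered)

# $( ) splices a command's output into another command line.
nested=$(echo "result: $(expr 2 + 3)")

# find -print0 | xargs -0 keeps filenames with spaces intact.
dir=$(mktemp -d)
touch "$dir/has space.txt" "$dir/plain.txt"
file_count=$(find "$dir" -type f -print0 | xargs -0 -n 1 echo | wc -l)
```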

Chapter 8. Examples of building one-liners in Bash that perform complex manipulations one piece at a time, e.g., checking for non-matching files where every text file is supposed to have a matching jpg.
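One way to sketch that txt/jpg task with comm and process substitution (not necessarily the book's exact one-liner; filenames illustrative):

```shell
cd "$(mktemp -d)"
touch a.txt b.txt c.txt a.jpg c.jpg    # b.txt has no matching jpg

# Strip extensions to get two sorted lists of basenames (ls output is sorted);
# comm -23 keeps lines that appear only in the first list.
missing=$(comm -23 <(ls *.txt | sed 's/\.txt$//') <(ls *.jpg | sed 's/\.jpg$//'))
```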

Chapter 9. Examples of using files to build useful utilities, like a password manager or automatically running whois on a list of URLs. The best format for text files is one record per line with fields separated by tabs, making it easy to extract a specific field with awk.
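The tab-separated lookup at the heart of the password-manager idea reduces to one awk call (field layout and values illustrative):

```shell
# One record per line, fields separated by tabs: site, user, password.
db=$(mktemp)
printf 'example.com\talice\ts3cret\nother.org\tbob\thunter2\n' > "$db"

# awk -F'\t' splits on tabs; match on the site field, print the password field.
pw=$(awk -F'\t' '$1 == "example.com" {print $3}' "$db")
```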

Chapter 10. The terminal can open browser tabs and fetch web pages with curl and wget, and manipulate the clipboard with xclip. These can improve the experience of the password-manager script built in the previous chapter.

Chapter 11. cron is useful for running jobs regularly, and at for running something once at a specific time in the future. rsync is like cp but only copies files that have changed, and it works across remote directories. make avoids duplicate work when files depend on other files by keeping track of which ones have changed.
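The cron and make behaviors can be sketched together (the crontab line and paths are hypothetical; the Makefile is minimal):

```shell
# A crontab entry: minute hour day-of-month month day-of-week command,
# e.g. run a (hypothetical) backup script daily at 06:30:
#   30 6 * * * /home/me/backup.sh

# make rebuilds a target only when its prerequisite is newer.
cd "$(mktemp -d)"
printf 'out.txt: in.txt\n\tcp in.txt out.txt\n' > Makefile
echo data > in.txt
first_run=$(make)     # runs the recipe and echoes it
second_run=$(make)    # target is already up to date: nothing to do
```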




