Linux Shell Scripting Cookbook

By Sarath Lakshman

About this book

GNU/Linux is a remarkable operating system that comes with a complete development environment that is stable, reliable, and extremely powerful. The shell, being the native interface to the operating system, is capable of controlling the entire system. There are numerous Linux shell commands that are documented yet hard to understand; the man pages are helpful, but they are lengthy and give few clues about the key areas where a command is useful. Proper use of shell commands can solve many complex tasks with a few lines of code, but most Linux users lack the know-how to use the shell to its full potential.

Linux Shell Scripting Cookbook is a collection of essential command-line recipes with detailed descriptions tuned to practical applications. It covers most of the commands on Linux with a variety of use cases, accompanied by plenty of examples. This book helps you perform complex data manipulations involving tasks such as text processing, file management, backups, and more by combining a few commands.

Linux Shell Scripting Cookbook shows you how to capitalize on all the aspects of Linux using the shell scripting language. This book teaches you how to use commands to perform simple tasks all the way to scripting complex tasks such as managing large amounts of data on a network.

It guides you through implementing some of the most common commands in Linux, with recipes that handle operations or properties related to files, such as searching and mining inside a file with grep. It also shows you how utilities such as sed, awk, grep, and cut can be combined to solve text-processing problems. The focus is on saving time by automating, with a few lines of script, many activities that we would otherwise perform interactively through a browser.

This book will take you from a clear problem description to a fully functional program. The recipes contained within the chapter will introduce the reader to specific problems and provide hands-on solutions.

Publication date:
January 2011


Chapter 1. Shell Something Out

In this chapter, we will cover:

  • Printing in the terminal

  • Playing with variables and environment variables

  • Doing math calculations with the shell

  • Playing with file descriptors and redirection

  • Arrays and associative arrays

  • Visiting aliases

  • Grabbing information about the terminal

  • Getting, setting dates, and delays

  • Debugging the script

  • Functions and arguments

  • Reading output of a sequence of commands in a variable

  • Reading "n" characters without pressing Return

  • Field separators and iterators

  • Comparisons and tests



UNIX-like systems are amazing operating system designs. Even after many decades, the UNIX-style architecture for operating systems serves as one of the best designs. One of the most important features of this architecture is the command-line interface or the shell. The shell environment helps users to interact with and access core functions of the operating system. The term scripting is more relevant in this context. Scripting is usually supported by interpreter-based programming languages. Shell scripts are files in which we write a sequence of commands that we need to perform. And the script file is executed using the shell utility.

In this book we are dealing with Bash (Bourne Again Shell), which is the default shell environment for most GNU/Linux systems. Since GNU/Linux is the most prominent operating system based on a UNIX-style architecture, most of the examples and discussions are written by keeping Linux systems in mind.

The primary purpose of this chapter is to give readers an insight into the shell environment and familiarize them with the basic features of the shell. Commands are typed and executed in a shell terminal. When a terminal is opened, a prompt is displayed. It is usually in the following format:

username@hostname$

Or:

root@hostname #

Or simply as $ or #.

$ represents regular users and # represents the administrative user root. Root is the most privileged user in a Linux system.

A shell script is a text file that typically begins with a shebang, as follows:

#!/bin/bash

For any scripting language in a Linux environment, a script starts with a special line called the shebang: a line in which #! is prefixed to the interpreter path. /bin/bash is the interpreter command path for Bash.

Execution of a script can be done in two ways. Either we can run the script as a command-line argument for sh, or we can grant the script execution permission and run it as a self-executable.

The script can be run with the filename as a command-line argument as follows:

$ sh script.sh # Assuming the script is in the current directory.

$ sh /home/path/script.sh # Using the full path of the script.

If a script is run as a command-line argument for sh, the shebang in the script is of no use.

In order to self-execute, a shell script requires executable permission. When run as a self-executable, the script makes use of the shebang: it is run with the interpreter path that is appended to #! in the shebang. The execution permission for the script can be set as follows:

$ chmod a+x script.sh

This command gives the file the executable permission for all users. The script can be executed as:

$ ./script.sh # ./ represents the current directory

$ /home/path/script.sh # Full path of the script is used

The shell program will read the first line and see that the shebang is #!/bin/bash. It will identify /bin/bash and execute the script internally as:

$ /bin/bash script.sh

When a terminal is opened, it initially executes a set of commands to define various settings, such as the prompt text and colors. This set of commands (run commands) is read from a shell script called .bashrc, which is located in the home directory of the user (~/.bashrc). The Bash shell also maintains a history of commands run by the user, available in the file ~/.bash_history. ~ is shorthand for the user's home directory path.

In Bash, each command or command sequence is delimited by using a semicolon or a new line. For example:

$ cmd1 ; cmd2

This is equivalent to:

$ cmd1
$ cmd2

Finally, the # character is used to denote the beginning of unprocessed comments. A comment section starts with # and proceeds up to the end of that line. The comment lines are most often used to provide comments about the code in the file or to stop a line of code from being executed.
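As a small illustration, a comment can occupy a whole line or follow a command on the same line:

```shell
#!/bin/bash
# This entire line is a comment and is ignored by the shell
echo "hello"      # A comment can also follow a command
# echo "disabled" # Prefixing a line with # stops it from executing
```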

Now let's move on to the basic recipes in this chapter.


Printing in the terminal

The terminal is an interactive utility by which a user interacts with the shell environment. Printing text in the terminal is a basic task that most shell scripts and utilities need to perform regularly. Printing can be performed via various methods and in different formats.

How to do it...

echo is the basic command for printing in the terminal.

echo puts a newline at the end of every invocation by default:

$ echo "Welcome to Bash"
Welcome to Bash

Simply using double-quoted text with the echo command prints the text in the terminal. Similarly, text without double-quotes also gives the same output:

$ echo Welcome to Bash
Welcome to Bash

Another way to do the same task is by using single quotes:

$ echo 'text in quote'

These methods may look similar, but some of them have got a specific purpose and side effects too. Consider the following command:

$ echo "cannot include exclamation - ! within double quotes"

This will return the following:

bash: !: event not found

Hence, if you want to print !, do not use it within double quotes, or else escape the ! with the special escape character (\) prefixed to it.

$ echo Hello world !


$ echo 'Hello world !'


$ echo "Hello world \!" #Escape character \ prefixed.

When using echo with double-quotes, you should add set +H before issuing echo so that you can use !.

The side effects of each of the methods are as follows:

  • When using echo without quotes, we cannot use a semicolon as it acts as a delimiter between commands in the bash shell.

  • echo hello;hello is interpreted as two commands: echo hello and hello.

  • When using echo with single quotes, variables inside the quotes (for example, $var) will not be expanded by Bash, but will be displayed as is.

    This means:

    $ echo '$var' will return $var


    $ echo $var will return the value of the variable $var if defined or nothing at all if it is not defined.
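A minimal sketch of the difference, using a variable named msg for illustration:

```shell
msg="Bash"
echo "Welcome to $msg"   # Double quotes expand the variable: Welcome to Bash
echo 'Welcome to $msg'   # Single quotes print it literally: Welcome to $msg
```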

Another command for printing in the terminal is the printf command. printf uses the same format specifications as the printf function in the C programming language. For example:

$ printf "Hello world"

printf takes quoted text or arguments delimited by spaces. We can use formatted strings with printf. We can specify string width, left or right alignment, and so on. By default, printf does not append a newline as echo does. We have to specify a newline when required, as shown in the following script:


#!/bin/bash
printf "%-5s %-10s %-4s\n" No Name Mark
printf "%-5s %-10s %-4.2f\n" 1 Sarath 80.3456
printf "%-5s %-10s %-4.2f\n" 2 James 90.9989
printf "%-5s %-10s %-4.2f\n" 3 Jeff 77.564

We will receive the formatted output:

No    Name       Mark
1     Sarath     80.35
2     James      91.00
3     Jeff       77.56

%s, %c, %d, and %f are format substitution characters for which an argument can be placed after the quoted format string.

%-5s can be described as a string substitution with left alignment (- represents left alignment) with width equal to 5. If - was not specified, the string would have been aligned to the right. The width specifies the number of characters reserved for that variable. For Name, the width reserved is 10. Hence, any name will reside within the 10-character width reserved for it and the rest of the characters will be filled with space up to 10 characters in total.

For floating point numbers, we can pass additional parameters to round off the decimal places.

For marks, we have formatted the string as %-4.2f, where .2 specifies rounding off to two decimal places. Note that for every line of the format string a \n newline is issued.

There's more...

Note that flags for echo and printf (such as -e, -n, and so on) should appear before any strings in the command; otherwise, Bash will treat the flags as just another string.
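For example, -n is honored only when it precedes the text; placed after the text, it is printed as an ordinary argument:

```shell
echo -n "no trailing newline"   # -n before the string is treated as a flag
echo ""                          # end the line for readability
echo "hello" -n                  # prints: hello -n
```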

Escaping newline in echo

By default, echo has a newline appended at the end of its output text. This can be avoided by using the -n flag. echo can also accept escape sequences in double-quoted strings as argument. For using escape sequences, use echo as echo -e "string containing escape sequences". For example:

echo -e "1\t2\t3"

Printing colored output

Producing colored output on the terminal is very interesting stuff. We produce colored output using escape sequences.

Color codes are used to represent each color. For example, reset=0, black=30, red=31, green=32, yellow=33, blue=34, magenta=35, cyan=36, and white=37.

In order to print colored text, enter the following:

echo -e "\e[1;31m This is red text \e[0m"

Here \e[1;31m is the escape string that sets the color to red and \e[0m resets the color back. Replace 31 with the required color code.

For a colored background, the commonly used color codes are: reset = 0, black = 40, red = 41, green = 42, yellow = 43, blue = 44, magenta = 45, cyan = 46, and white = 47.

In order to print a colored background, enter the following:

echo -e "\e[1;42m Green Background \e[0m"

Playing with variables and environment variables

Variables are essential components of every programming language and are used to hold varying data. Scripting languages usually do not require declaring a variable's type before use; it can be assigned directly. In Bash, the value of every variable is a string, whether it is assigned with quotes or without. There are also special variables used by the shell environment and the operating system environment to store special values, which are called environment variables.

Let's look at the recipes.

Getting ready

Variables are named with the usual naming constructs. When an application is executing, it is passed a set of variables called environment variables. From the terminal, to view all the environment variables related to that terminal process, issue the env command. For every process, the environment variables in its runtime can be viewed by:

cat /proc/$PID/environ

Set the PID with the process ID of the relevant process (PID is always an integer).

For example, assume that an application called gedit is running. We can obtain the process ID of gedit with the pgrep command as follows:

$ pgrep gedit
12501

You can obtain the environment variables associated with the process by executing the following command:

$ cat /proc/12501/environ

Note that many environment variables are stripped off for convenience. The actual output may contain numerous variables.

The above command returns a list of environment variables and their values. Each variable is represented as a name=value pair, separated by a null character (\0). By substituting the \0 character with \n, you can reformat the output to show each name=value pair on its own line. The substitution can be made using the tr command as follows:

$ cat /proc/12501/environ  | tr '\0' '\n'

Now, let's see how to assign and manipulate variables and environment variables.

How to do it...

A variable can be assigned as follows:

var=value
var is the name of a variable and value is the value to be assigned. If value does not contain any white space characters (like a space), it need not be enclosed in quotes, else it must be enclosed in single or double quotes.

Note that var = value and var=value are different. It is a common mistake to write var =value instead of var=value. The latter is the assignment operation, whereas the former is an equality operation.

Printing the contents of a variable is done by prefixing the variable name with $, as follows:

var="value" #Assignment of value to variable var.

echo $var


echo ${var}

The output is as follows:

value
We can use variable values inside printf or echo in double quotes (assuming, for illustration, that fruit and count are defined):

fruit=apple
count=5
echo "We have $count ${fruit}(s)"

The output is as follows:

We have 5 apple(s)

Environment variables are variables that are not defined in the current process, but are received from the parent process. For example, HTTP_PROXY is an environment variable. This variable defines which proxy server should be used for an Internet connection.

Usually, it is set as follows (with an example proxy address):

$ HTTP_PROXY=192.168.0.2:3128
$ export HTTP_PROXY
The export command is used to set an environment variable. Any application executed from the current shell after the export will receive this variable. We can export custom variables for our own purposes in an application or shell script. There are many standard environment variables available to the shell by default.

For example, PATH. A typical PATH variable will contain:

$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

When given a command for execution, the shell automatically searches for the executable in the list of directories in the PATH environment variable (directory paths are delimited by the ":" character). Usually, $PATH is defined in /etc/environment, /etc/profile, or ~/.bashrc. When we need to add a new path to the PATH environment variable, we use:

export PATH="$PATH:/home/user/bin"

Or, alternately, we can use:

$ PATH="$PATH:/home/user/bin"
$ export PATH

$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/home/user/bin

Here we have added /home/user/bin to PATH.

Some of the well-known environment variables are: HOME, PWD, USER, UID, SHELL, and so on.
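A quick sketch that prints a few of them:

```shell
echo "User: $USER"
echo "Home: $HOME"
echo "Shell: $SHELL"
echo "Working directory: $PWD"
```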

There's more...

Let's see some more tips associated with regular and environment variables.

Finding length of string

Get the length of a variable value as follows:

length=${#var}
For example:

$ var=12345678901234567890
$ echo ${#var}
20

length is the number of characters in the string.

Identifying the current shell

Display the currently used shell as follows:

echo $SHELL

Or, you can also use:

echo $0

For example:

$ echo $SHELL
/bin/bash

$ echo $0
/bin/bash

Check for super user

UID is an important environment variable that can be used to check whether the current script has been run as root user or regular user. For example:

if [ $UID -ne 0 ]; then
  echo "Non root user. Please run as root."
else
  echo "Root user"
fi

The UID for the root user is 0.

Modifying the Bash prompt string (user@hostname:~$)

When we open a terminal or run a shell, we see a prompt string like user@hostname: /home/$. Different GNU/Linux distributions have slightly different prompts and different colors. We can customize the prompt text using the PS1 environment variable. The default prompt text for the shell is set using a line in the ~/.bashrc file.

  • We can list the line used to set the PS1 variable as follows:

    $ cat ~/.bashrc | grep PS1
    PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
  • In order to set a custom prompt string, enter:

    user@hostname: ~$ PS1="PROMPT>"
    PROMPT> Type commands here # Prompt string changed.
  • We can use colored text by using special escape sequences like \e[1;31m (refer to the Printing in the terminal recipe of this chapter).

There are also certain special characters that expand to system parameters. For example, \u expands to the username, \h expands to the hostname, and \w expands to the current working directory.
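Putting these together, a hypothetical green prompt can be sketched as follows (the \[ and \] markers tell Bash that the enclosed escape codes print nothing, so line wrapping stays correct):

```shell
# Green user@host:cwd prompt; add the line to ~/.bashrc to make it permanent
PS1='\[\e[1;32m\]\u@\h:\w\$ \[\e[0m\]'
```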


Doing math calculations with the shell

Arithmetic operations are an essential requirement for every programming language. The Bash shell comes with a variety of methods for arithmetic operations.

Getting ready

The Bash shell environment can perform basic arithmetic operations using the commands let, (( )), and []. The two utilities expr and bc are also very helpful in performing advanced operations.

How to do it...

A numeric value can be assigned to a regular variable, where it is stored as a string. However, we have methods to operate on it as a number:

no1=4;
no2=5;
The let command can be used to perform basic operations directly.

While using let, we use variable names without the $ prefix, for example:

let result=no1+no2
echo $result
  • Increment operation:

    $ let no1++
  • Decrement operation:

    $ let no1--
  • Shorthands:

    let no+=6
    let no-=6

    These are equal to let no=no+6 and let no=no-6 respectively.

  • Alternate methods:

    The [] operator can be used similar to the let command as follows:

    result=$[ no1 + no2 ]

    Using the $ prefix inside [] operators is legal, for example:

    result=$[ $no1 + 5 ]

    (( )) can also be used; variable names inside (( )) may be used with or without the $ prefix, as follows:

    result=$(( no1 + 50 ))

    expr can also be used for basic operations:

    result=`expr 3 + 4`
    result=$(expr $no1 + 5)

    None of the above methods supports floating point numbers; they operate on integers only.

    bc, the precision calculator, is an advanced utility for mathematical operations. It has a wide range of options. We can perform floating point operations and use advanced functions as follows:

    echo "4 * 0.56" | bc
    result=`echo "$no * 1.5" | bc`
    echo $result

    Additional parameters can be passed to bc through stdin, prefixed to the operation with semicolons as delimiters.

    • Specifying decimal precision (scale): In the following example the scale=2 parameter sets the number of decimal places to 2. Hence the output of bc will contain a number with two decimal places:

      echo "scale=2;3/8" | bc
    • Base conversion with bc: We can convert from one base number system to another one. Let's convert from decimal to binary, and binary to octal:

      # Number conversion
      no=100
      echo "obase=2;$no" | bc
      1100100
      no=1100100
      echo "obase=10;ibase=2;$no" | bc
      100
    • Calculating squares and square roots can be done as follows:

      echo "sqrt(100)" | bc #Square root
      echo "10^2" | bc #Square

Playing with file descriptors and redirection

File descriptors are integers that are associated with file input and output. They keep track of opened files. The best-known file descriptors are stdin, stdout, and stderr. We can redirect the contents of one file descriptor to another. The following recipe will give examples on how to manipulate and redirect with file descriptors.

Getting ready

While writing scripts we use standard input (stdin), standard output (stdout), and standard error (stderr) frequently. Redirection of output to a file by filtering the contents is one of the essential things we need to perform. While a command outputs some text, it can be either an error or an output (non-error) message. We cannot distinguish whether it is output text or an error text by just looking at it. However, we can handle them with file descriptors. We can extract text that is attached to a specific descriptor.

File descriptors are integers associated with an opened file or data stream. File descriptors 0, 1, and 2 are reserved as follows:

  • 0 – stdin (standard input)

  • 1 – stdout (standard output)

  • 2 – stderr (standard error)

How to do it...

Redirecting or saving output text to a file can be done as follows:

$ echo "This is a sample text 1" > temp.txt

This would store the echoed text in temp.txt by truncating the file: the contents are emptied before writing.

Next, consider the following example:

$ echo "This is sample text 2" >> temp.txt

This would append the text into the file.

The > and >> operators are different. Both redirect text to a file, but the first empties the file before writing to it, whereas the latter appends the output to the end of the existing file.

View the contents of the file as follows:

$ cat temp.txt
This is a sample text 1
This is sample text 2

When we use a redirection operator, the output text is not printed in the terminal but is directed to a file. Redirection operators take standard output by default; to use a specific file descriptor, you must prefix the descriptor number to the operator.

> is equivalent to 1> and similarly it applies for >> (equivalent to 1>>).

Let's see what a standard error is and how you can redirect it. stderr messages are printed when commands output an error message. Consider the following example:

$ ls +
ls: cannot access +: No such file or directory

Here + is an invalid argument and hence an error is returned.


Successful and unsuccessful command

When a command terminates after an error, it returns a non-zero exit status; it returns zero when it terminates after successful completion. The return status can be read from the special variable $? (run echo $? immediately after the command execution statement to print the exit status).
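For example (exact non-zero codes vary by command; GNU ls uses 2 for "serious trouble" such as an inaccessible argument):

```shell
ls / > /dev/null    # a command that succeeds
echo $?             # prints 0
ls + 2> /dev/null   # an invalid argument makes ls fail
echo $?             # prints a non-zero status
```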

The following command prints the stderr text to the screen rather than to a file (and because the stdout output is empty here, an empty file out.txt is generated):

$ ls + > out.txt
ls: cannot access +: No such file or directory

In the following command, the 2> operator redirects stderr, so the error message goes into out.txt instead:

$ ls + 2> out.txt # works

You can redirect stderr exclusively to a file and stdout to another file as follows:

$ cmd 2>stderr.txt 1>stdout.txt

It is also possible to redirect stderr and stdout to a single file by converting stderr to stdout using this preferred method:

$ cmd > output.txt 2>&1

or an alternate approach:

$ cmd &> output.txt 

Sometimes the output may contain unnecessary information (such as debug messages). If you don't want the output terminal burdened with the stderr details, redirect the stderr output to /dev/null, which discards it completely. For example, consider that we have three files a1, a2, and a3, but a1 does not have read-write-execute permission for the user. When you need to print the contents of files starting with a, you can use the cat command.

Set up the test files as follows:

$ echo a1 > a1 
$ cp a1 a2 ; cp a2 a3;
$ chmod 000 a1  #Deny all permissions

While displaying contents of the files using wildcards (a*), it will show an error message for file a1 as it does not have the proper read permission:

$ cat a*
cat: a1: Permission denied

Here cat: a1: Permission denied belongs to stderr data. We can redirect stderr data into a file, whereas stdout remains printed in the terminal. Consider the following code:

$ cat a* 2> err.txt #stderr is redirected to err.txt

$ cat err.txt
cat: a1: Permission denied

Take a look at the following code:

$ some_command 2> /dev/null

In this case, the stderr output is dumped to the /dev/null file. /dev/null is a special device file where any data received by the file is discarded. The null device is often called the bit bucket or black hole.

When redirection is performed for stderr or stdout, the redirected text flows into a file. Because the text has already been redirected into the file, no text remains to flow to the next command through the pipe (|), and the next command in the sequence receives empty stdin.

However, there is a tricky way to redirect data to a file as well as provide a copy of redirected data as stdin for the next set of commands. This can be done using the tee command. For example, to print the stdout in the terminal as well as redirect stdout into a file, the syntax for tee is as follows:

command | tee FILE1 FILE2

In the following code, stdin data is received by the tee command. It writes a copy of the stdin data to the file out.txt and sends another copy as stdin for the next command. The cat -n command puts a line number on each line received from stdin and writes it to stdout:

$ cat a* | tee out.txt | cat -n
cat: a1: Permission denied
     1  a1
     2  a1

Examine the contents of out.txt as follows:

$ cat out.txt
a1
a1

Note that cat: a1: Permission denied does not appear in out.txt, because it was sent to stderr, and tee reads only from stdin.

By default, the tee command overwrites the file, but it can append instead when the -a option is provided, for example:

$ cat a* | tee -a out.txt | cat -n

Commands appear with arguments in the format: command FILE1 FILE2… or simply command FILE.

We can use stdin as a command argument by using - as the filename argument for the command, as follows:

$ cmd1 | cmd2 | cmd -

For example:

$ echo who is this | tee -
who is this
who is this

Alternately, we can use /dev/stdin as the filename argument to read from stdin.

Similarly, use /dev/stderr for standard error and /dev/stdout for standard output. These are special device files that correspond to stdin, stderr, and stdout.

There's more...

A command that reads stdin for input can receive data in multiple ways. Also, it is possible to specify file descriptors of our own using cat and pipes, for example:

$ cat file | cmd
$ cmd1 | cmd2

Redirection from file to command

By using redirection, we can read data from a file as stdin as follows:

$ cmd < file

Redirecting from a text block enclosed within a script

Sometimes we need to redirect a block of text (multiple lines of text) as standard input. Consider a particular case where the source text is placed within the shell script. A practical usage example is writing a log file header data. It can be performed as follows:

cat <<EOF >log.txt
This is a test log file
Function: System statistics
EOF

The lines that appear between cat <<EOF >log.txt and the next EOF line will appear as stdin data. Print the contents of log.txt as follows:

$ cat log.txt
This is a test log file
Function: System statistics

Custom file descriptors

A file descriptor is an abstract indicator for accessing a file. Each file access is associated with a special number called a file descriptor. 0, 1, and 2 are reserved descriptor numbers for stdin, stdout, and stderr.

We can create our own custom file descriptors using the exec command. If you are already familiar with file programming with any other programming languages, you might have noticed modes for opening files. Usually, three modes are used:

  • Read mode

  • Write with truncate mode

  • Write with append mode

< is an operator used to read from the file to stdin. > is the operator used to write to a file with truncation (data is written to the target file after truncating the contents). >> is an operator used to write to a file with append (data is appended to the existing file contents and the contents of the target file will not be lost). File descriptors can be created with one of the three modes.

Create a file descriptor for reading a file, as follows:

$ exec 3<input.txt # open for reading with descriptor number 3

We could use it as follows:

$ echo this is a test line > input.txt
$ exec 3<input.txt

Now you can use file descriptor 3 with commands, for example, cat <&3:

$ cat <&3
this is a test line

If a second read is required, we cannot simply reuse file descriptor 3; it must be reassigned with exec before making the second read.
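A sketch of reading twice by reassigning the descriptor (exec 3<&- closes it when done; input.txt is the file created above):

```shell
echo "this is a test line" > input.txt
exec 3< input.txt
cat <&3              # first read consumes the stream
exec 3< input.txt    # reassign descriptor 3 before the second read
cat <&3
exec 3<&-            # close the descriptor
```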

Create a file descriptor for writing (truncate mode) as follows:

$ exec 4>output.txt # open for writing

For example:

$ exec 4>output.txt
$ echo newline >&4
$ cat output.txt
newline

Create a file descriptor for writing (append mode) as follows:

$ exec 5>>input.txt

For example:

$ exec 5>>input.txt
$ echo appended line >&5
$ cat input.txt
this is a test line
appended line

Arrays and associative arrays

Arrays are a very important component for storing a collection of data as separate entities using indexes.

Getting ready

Bash supports regular arrays as well as associative arrays. Regular arrays can use only integers as their array index, whereas associative arrays can also take a string as the array index.

Associative arrays are very useful in many types of manipulations. Associative array support came with version 4.0 of Bash. Therefore, older versions of Bash will not support associative arrays.

How to do it...

An array can be defined in many ways. Define an array using a list of values in a line, as follows:

array_var=(1 2 3 4 5 6)
#Values will be stored in consecutive locations starting from index 0.

Alternately, define an array as a set of index-value pairs as follows:

array_var[0]="test1"
array_var[1]="test2"
array_var[2]="test3"
array_var[3]="test4"
array_var[4]="test5"
array_var[5]="test6"
Print the contents of an array at a given index using:

$ echo ${array_var[0]}
$ echo ${array_var[$index]}

Print all of the values in an array as a list using:

$ echo ${array_var[*]}
test1 test2 test3 test4 test5 test6

Alternately, you can use:

$ echo ${array_var[@]}
test1 test2 test3 test4 test5 test6

Print the length of an array (the number of elements in an array), as follows:

$ echo ${#array_var[*]}
6

There's more...

Associative arrays have been introduced to Bash from version 4.0. They are useful entities to solve many problems using the hashing technique. Let's go into more details.

Defining associative arrays

In an associative array, we can use any text data as an array index. However, ordinary arrays can only use integers for array indexing.

Initially, a declaration statement is required to declare a variable name as an associative array. A declaration can be made as follows:

$ declare -A ass_array

After the declaration, elements can be added to the associative array using two methods, as follows:

  1. By using inline index-value list method, we can provide a list of index-value pairs:

    $ ass_array=([index1]=val1 [index2]=val2)
  2. Alternately, you could use separate index-value assignments:

    $ ass_array[index1]=val1
    $ ass_array[index2]=val2

For example, consider the assignment of prices for fruits using an associative array:

$ declare -A fruits_value
$ fruits_value=([apple]='100 dollars' [orange]='150 dollars')

Display the content of an array as follows:

$ echo "Apple costs ${fruits_value[apple]}"
Apple costs 100 dollars

Listing of array indexes

Arrays have indexes for indexing each of the elements. Ordinary and associative arrays differ in terms of index type. We can obtain the list of indexes in an array as follows:

$ echo ${!array_var[*]}

Or, we can also use:

$ echo ${!array_var[@]}

In the previous fruits_value array example, consider the following:

$ echo ${!fruits_value[*]}
orange apple

This will work for ordinary arrays too.
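Tying these pieces together, here is a small sketch that iterates over the fruits_value keys and prints each price; note that the key order of an associative array is unspecified, so the keys are sorted here for stable output:

```shell
#!/bin/bash
# Requires Bash 4.0+ for associative arrays
declare -A fruits_value
fruits_value=([apple]='100 dollars' [orange]='150 dollars')

# ${!fruits_value[@]} expands to the keys; sort them for deterministic order
for fruit in $(printf '%s\n' "${!fruits_value[@]}" | sort); do
    echo "$fruit costs ${fruits_value[$fruit]}"
done
```

This prints "apple costs 100 dollars" followed by "orange costs 150 dollars".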


Visiting aliases

An alias is basically a shortcut that takes the place of typing a long command sequence.

Getting ready

Aliases can be implemented in multiple ways, either by using functions or by using the alias command.

How to do it...

An alias can be implemented as follows:

$ alias new_command='command sequence'

Giving a shortcut to the install command, apt-get install, can be done as follows:

$ alias install='sudo apt-get install'

Therefore, we can use install pidgin instead of sudo apt-get install pidgin.

The alias command is temporary: an alias lasts only until the current terminal session is closed. To make these shortcuts permanent, add the statement to the ~/.bashrc file. Commands in ~/.bashrc are executed whenever a new shell process is spawned.

$ echo 'alias cmd="command seq"' >> ~/.bashrc

To remove an alias, remove its entry from ~/.bashrc or use the unalias command. Another method is to define a function with a new command name and write it in ~/.bashrc.

We can alias rm so that it will delete the original and keep a copy in a backup directory:

alias rm='cp $@ ~/backup; rm $@'

When you create an alias, if the item being aliased already exists, it will be replaced by this newly aliased command for that user.

There's more...

There are situations when aliasing can also be a security breach. See how to identify them:

Escaping aliases

The alias command can be used to alias any important command, and you may not always want to run the command using the alias. We can ignore any aliases currently defined by escaping the command we want to run. For example:

$ \command

The \ character escapes the command, running it without any aliased changes. While running privileged commands in an untrusted environment, it is always a good security practice to ignore aliases by prefixing the command with \. An attacker might have aliased a privileged command with his own custom command to steal the critical information that the user provides to the command.


Grabbing information about terminal

While writing command-line shell scripts, we will often need to manipulate information about the current terminal, such as the number of columns and rows, the cursor position, masked password fields, and so on. This recipe helps you learn how to collect and manipulate terminal settings.

Getting ready

tput and stty are utilities that can be used for terminal manipulations. Let's see how to use them to perform different tasks.

How to do it...

Get the number of columns and rows in a terminal as follows:

tput cols
tput lines

In order to print the current terminal name, use:

tput longname

For moving the cursor to a position 100,100 you can enter:

tput cup 100 100

Set the background color for terminal as follows:

tput setb no

no can be a value in the range of 0 to 7.

Set the foreground color for text as follows:

tput setf no

no can be a value in the range of 0 to 7.

In order to make the text bold use:

tput bold

Start and end underlining by using:

tput smul
tput rmul

In order to delete everything from the cursor to the end of the screen, use:

tput ed

While typing a password, we should not display the characters typed. In the following example, we will see how to do it using stty:

echo -e "Enter password: "
stty -echo
read password
stty echo
echo Password read.

The -echo option above disables echoing of typed characters to the terminal, whereas echo re-enables it.


Getting, setting dates, and delays

Many applications require printing dates in different formats, setting the date and time, and performing manipulations based on the date and time. Delays are commonly used to provide a wait time (for example, 1 second) during a program's execution. Scripting contexts, such as performing a monitoring task every five seconds, require an understanding of how to write delays in a program. This recipe will show you how to work with dates and time delays.

Getting ready

Dates can be printed in a variety of formats. We can also set dates from the command line. In UNIX-like systems, dates are stored as an integer in seconds since 1970-01-01 00:00:00 UTC. This is called epoch or UNIX time. Let's see how to read dates and set them.

How to do it...

You can read the date as follows:

$ date
Thu May 20 23:09:04 IST 2010

The epoch time can be printed as follows:

$ date +%s

Epoch is defined as the number of seconds that have elapsed since midnight proleptic Coordinated Universal Time (UTC) of January 1, 1970, not counting leap seconds. Epoch time is useful when you need to calculate the difference between two dates or times: find the epoch values for the two given timestamps and take their difference to get the total number of seconds between the two dates.
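As a sketch of this technique (assuming GNU date, which supports the --date option), the difference between two timestamps can be computed from their epoch values:

```shell
# Difference in seconds between two timestamps (GNU date assumed)
t1=$(date --date "2010-11-18 08:07:21 UTC" +%s)
t2=$(date --date "2010-11-18 08:09:21 UTC" +%s)
echo "Difference: $(( t2 - t1 )) seconds"    # Difference: 120 seconds
```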

We can find out epoch from a given formatted date string. You can use dates in multiple date formats as input. Usually, you don't need to bother about the date string format that you use if you are collecting the date from a system log or any standard application generated output. You can convert a date string into epoch as follows:

$ date --date "Thu Nov 18 08:07:21 IST 2010" +%s

The --date option is used to provide a date string as input. However, we can use any date formatting options to print output. Feeding input date from a string can be used to find out the weekday, given the date.

For example:

$ date --date "Jan 20 2001" +%A

The date format strings are listed in the following table:

Date component              Format
Weekday                     %a (for example: Sat), %A (for example: Saturday)
Month                       %b (for example: Nov), %B (for example: November)
Day                         %d (for example: 31)
Date in format (mm/dd/yy)   %D (for example: 10/18/10)
Year                        %y (for example: 10), %Y (for example: 2010)
Hour                        %I or %H (for example: 08)
Minute                      %M (for example: 33)
Second                      %S (for example: 10)
Nanosecond                  %N (for example: 695208515)
Epoch UNIX time in seconds  %s (for example: 1290049486)

Use a combination of format strings prefixed with + as an argument for the date command to print the date in the format of your choice. For example:

$ date "+%d %B %Y"
20 May 2010

We can set the date and time as follows:

# date -s "Formatted date string"

For example:

# date -s "21 June 2009 11:01:22"

Sometimes we need to check the time taken by a set of commands. We can display it as follows:

start=$(date +%s)
commands;
statements;
end=$(date +%s)
difference=$(( end - start ))
echo Time taken to execute commands is $difference seconds.

An alternate method would be to use time <scriptpath> to get the time that it took to execute the script.

There's more...

Producing time intervals is essential when writing monitoring scripts that execute in a loop. Let's see how to generate time delays.

Producing delays in a script

In order to delay execution in a script for a period of time, use sleep: $ sleep no_of_seconds.

For example, the following script counts from 0 to 40 by using tput and sleep:

echo -n Count:
tput sc

count=0;
while true;
do
if [ $count -lt 40 ];
then let count++;
sleep 1;
tput rc
tput ed
echo -n $count;
else exit 0;
fi
done
In the above example, a variable count is initialized to 0 and is incremented on every loop execution. The echo statement prints the text. We use tput sc to store the cursor position. On every loop execution, we write the new count in the terminal by restoring the cursor position with tput rc. tput ed clears the text from the current cursor position onwards, so that the old number can be cleared and the new count written. A delay of 1 second is provided in the loop by using the sleep command.


Debugging the script

Debugging is one of the critical features every programming language should implement to produce trace-back information when something unexpected happens. Debugging information can be used to read and understand what caused the program to crash or to act in an unexpected fashion. Bash provides certain debugging options that every sysadmin should know. There are also some other tricky ways to debug.

Getting ready

No special utilities are required to debug shell scripts. Bash comes with certain flags that can print arguments and inputs taken by the scripts. Let's see how to do it.

How to do it...

Add the -x option to enable debug tracing of a shell script as follows:

$ bash -x script.sh

Running the script with the -x flag will print each source line along with its current state. Note that you can also use sh -x script.

The -x flag outputs every line of the script to stdout as it is executed. However, we may want to observe only some portions of the source, with commands and arguments printed only in certain sections. In such cases, we can use the set built-in to enable and disable debug printing within the script.

  • set -x: Displays arguments and commands upon their execution

  • set +x: Disables debugging

  • set -v: Displays input lines as they are read

  • set +v: Disables printing input

For example:

for i in {1..6};
do
set -x
echo $i
set +x
done
echo "Script executed"

In the above script, debug information for echo $i will only be printed as debugging is restricted to that section using -x and +x.

The above debugging methods are provided by bash built-ins. But they always produce debugging information in a fixed format. In many cases, we need debugging information in our own format. We can set up such a debugging style by passing the _DEBUG environment variable.

Look at the following example code:

function DEBUG()
{
[ "$_DEBUG" == "on" ] && $@ || :
}

for i in {1..10}
do
DEBUG echo $i
done

We can run the above script with debugging set to "on" as follows:

$ _DEBUG=on ./script.sh

We prefix DEBUG before every statement where debug information is to be printed. If _DEBUG=on is not passed to the script, debug information will not be printed. In Bash, the command ':' tells the shell to do nothing.

There's more...

We can also use other convenient ways to debug scripts. We can make use of the shebang in a trickier way to debug scripts.

Shebang hack

The shebang can be changed from #!/bin/bash to #!/bin/bash -xv to enable debugging without any additional flags (the -xv flags themselves do it).


Functions and arguments

Like any other scripting languages, Bash also supports functions. Let's see how to define and use functions.

How to do it...

A function can be defined as follows:

function fname()
{
statements;
}

Or alternately,

fname()
{
statements;
}

A function can be invoked just by using its name:

$ fname ; # executes function

Arguments can be passed to functions and can be accessed by our script:

fname arg1 arg2 ; # passing args

Following is the definition of the function fname. In the fname function, we have included various ways of accessing the function arguments.

fname()
{
  echo $1, $2; #Accessing arg1 and arg2
  echo "$@"; # Printing all arguments as list at once
  echo "$*"; # Similar to $@, but arguments taken as single entity
  return 0; # Return value
}

Similarly, arguments can be passed to scripts and can be accessed by the script. $0 is the name of the script:

  • $1 is the first argument

  • $2 is the second argument

  • $n is the nth argument

  • "$@" expands as "$1" "$2" "$3" and so on

  • "$*" expands as "$1c$2c$3", where c is the first character of IFS

  • "$@" is the most used one. "$*" is used rarely since it gives all arguments as a single string.
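A small hypothetical function (not from the book) makes the difference between "$@" and "$*" visible:

```shell
# Hypothetical demo: "$@" preserves argument boundaries, "$*" joins them
showargs() {
    for a in "$@"; do echo "via \$@: [$a]"; done
    for a in "$*"; do echo "via \$*: [$a]"; done
}
showargs "one two" three
# via $@: [one two]
# via $@: [three]
# via $*: [one two three]
```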

There's more...

Let's explore more tips on Bash functions.

Recursive function

Functions in Bash also support recursion (a function can call itself). For example, F() { echo $1; F hello; sleep 1; }.


Fork bomb

:(){ :|:& };:

This recursive function calls itself indefinitely, spawning processes until it ends up in a denial of service attack. The & postfixed to the function call puts the subprocess in the background. This is dangerous code, as it forks processes endlessly; hence it is called a fork bomb.

You may find it difficult to interpret the above code. See the Wikipedia page for more details on and an interpretation of the fork bomb.

It can be prevented by restricting the maximum number of processes that can be spawned, in the config file /etc/security/limits.conf.

Exporting functions

A function can be exported like environment variables using export such that the scope of the function can be extended to subprocesses, as follows:

export -f fname
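A quick sketch showing the effect: a child bash process inherits an exported function (the function name greet here is illustrative, not from the book):

```shell
greet() { echo "Hello from an exported function"; }
export -f greet
bash -c 'greet'    # the child shell can call greet because it was exported
```

Without the export -f line, the bash -c child would fail with "greet: command not found".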

Reading command return value (status)

We can get the return value of a command or function as follows:

cmd;
echo $?;

$? will give the return value of the command cmd.

The return value is called exit status. It can be used to analyze whether a command completed its execution successfully or unsuccessfully. If the command exits successfully, the exit status will be zero, else it will be non-zero.

We can check whether a command terminated successfully or not as follows:

CMD="command" #Substitute with command for which you need to test exit status
$CMD
if [ $? -eq 0 ];
then
echo "$CMD executed successfully"
else
echo "$CMD terminated unsuccessfully"
fi

Passing arguments to commands

Arguments to commands can be passed in different formats. Suppose -p and -v are the available options and -k NO is another option that takes a number. The command also takes a filename as an argument. It can be executed in multiple ways as follows:

$ command -p -v -k 1 file


$ command -pv -k 1 file


$ command -vpk 1 file


$ command file -pvk 1

Reading the output of a sequence of commands

One of the best-designed features of shell scripting is the ease of combining many commands or utilities to produce output. The output of one command can appear as the input of another, which passes its output to another command, and so on. The output of this combination can be read in a variable. This recipe illustrates how to combine multiple commands and how its output can be read.

Getting ready

Input is usually fed into a command through stdin or arguments. Output appears as stderr or stdout. While we combine multiple commands, we usually use stdin to give input and stdout for output.

Commands used this way are called filters. We connect each filter using pipes. The piping operator is "|". An example is as follows:

$ cmd1 | cmd2 | cmd3 

Here we combine three commands. The output of cmd1 goes to cmd2 and output of cmd2 goes to cmd3 and the final output (which comes out of cmd3) will be printed or it can be directed to a file.

How to do it...

Have a look at the following code:

$ ls | cat -n > out.txt

Here the output of ls (the listing of the current directory) is passed to cat -n. cat -n puts line numbers on the input received through stdin. Its output is redirected to the out.txt file.

We can read the output of a sequence of commands combined by pipes as follows:

cmd_output=$(COMMANDS)
This is called the subshell method. For example:

cmd_output=$(ls | cat -n)
echo $cmd_output

Another method, called back quotes, can also be used to store the command output, as follows:

cmd_output=`COMMANDS`
For example:

cmd_output=`ls | cat -n`
echo $cmd_output

The back quote is different from the single-quote character. It is the character on the ~ key on the keyboard.

There's more...

There are multiple ways of grouping commands. Let's go through few of them.

Spawning a separate process with subshell

Subshells are separate processes. A subshell can be defined using the ( ) operators as follows:

(cd /bin; ls);

When some commands are executed in a subshell none of the changes occur in the current shell; changes are restricted to the subshell. For example, when the current directory in a subshell is changed using the cd command, the directory change is not reflected in the main shell environment.

The pwd command prints the path of the working directory.

The cd command changes the current directory to the given directory path.
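A short sketch demonstrating this isolation (assuming /tmp exists, as on any Linux system):

```shell
cd /tmp
echo "Before:   $PWD"
( cd /; echo "Subshell: $PWD" )
echo "After:    $PWD"    # still /tmp; the subshell's cd did not leak out
```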

Subshell quoting to preserve spacing and newline character

When we read the output of a command into a variable using a subshell or the back-quotes method, we should always quote it in double quotes to preserve the spacing and newline characters (\n). For example:

$ cat text.txt
1
2
3

$ out=$(cat text.txt)
$ echo $out
1 2 3 # Lost \n spacing in 1,2,3 

$ out="$(cat text.txt)"
$ echo $out
1
2
3

Reading "n" characters without pressing Return

read is an important Bash command that can be used to read text from keyboard or standard input. We can use read to interactively read an input from the user, but read is capable of much more. Let's look at a new recipe to illustrate some of the most important options available with the read command.

Getting ready

Most input libraries in any programming language read input from the keyboard, with string input terminated when Return is pressed. There are certain critical situations when Return cannot be pressed and termination is instead based on a number of characters or a single character. For example, in a game, a ball is moved up when + is pressed. Pressing + and then pressing Return every time to acknowledge the + press is not efficient. The read command provides a way to accomplish this task without having to press Return.

How to do it...

The following statement will read "n" characters from input into the variable variable_name:

read -n number_of_chars variable_name

For example:

$ read -n 2 var
$ echo $var

Many other options are possible with read. Let's take a look at these.

Read a password in non-echoed mode as follows:

read -s var

Display a message with read using:

read -p "Enter input:"  var

Read the input after a timeout as follows:

read -t timeout var

For example:

$ read -t 2 var
#Read the string that is typed within 2 seconds into variable var.

Use a delimiter character to end the input line as follows:

read -d delim_char var

For example:

$ read -d ":" var
hello:#var is set to hello

Field separators and iterators

The Internal Field Separator (IFS) is an important concept in shell scripting and is very useful while manipulating text data. We will now discuss delimiters that separate different data elements in a single data stream. IFS is an environment variable that stores delimiting characters; it is the default delimiter string used by the running shell environment.

Consider the case where we need to iterate through words in a string or comma separated values (CSV). In the first case, we will use IFS=" " and in the second, IFS=",". Let's see how to do it.

Getting ready

Consider the case of CSV data:

data="name,sex,rollno,location"
#To read each of the items in a variable, we can use IFS.
oldIFS=$IFS
IFS=, # IFS is now a comma
for item in $data;
do
    echo Item: $item
done
IFS=$oldIFS


The output is as follows:

Item: name
Item: sex
Item: rollno
Item: location

The default value of IFS is a space component (newline, tab, or a space character).

When IFS is set as "," the shell interprets the comma as a delimiter character, therefore, the $item variable takes substrings separated by a comma as its value during the iteration.

If IFS were not set as "," then it would print the entire data as a single string.

How to do it...

Let's go through another example usage of IFS, taking the /etc/passwd file into consideration. In the /etc/passwd file, every line contains items delimited by ":". Each line in the file corresponds to an attribute related to a user.

Consider the input: root:x:0:0:root:/root:/bin/bash. The last entry on each line specifies the default shell for the user. In order to print users and their default shells, we can use the IFS hack as follows:

#!/bin/bash
#Description: Illustration of IFS
line="root:x:0:0:root:/root:/bin/bash"
oldIFS=$IFS;
IFS=":"
count=0
for item in $line;
do
    [ $count -eq 0 ] && user=$item;
    [ $count -eq 6 ] && shell=$item;
    let count++
done;
IFS=$oldIFS
echo $user\'s shell is $shell;

The output will be:

root's shell is /bin/bash

Loops are very useful in iterating through a sequence of values. Bash provides many types of loops. Let's see how to use them.

For loop:

for var in list;
do
    commands; # use $var
done

list can be a string, or a sequence.

We can generate different sequences easily.

echo {1..50} can generate a list of numbers from 1 to 50.

echo {a..z} or {A..Z} generates lists of letters, or we can generate a partial list using {a..h}. Similarly, by combining these, we can concatenate data.
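For example, placing two brace ranges side by side expands to their cross product:

```shell
echo {a..c}{1..2}
# a1 a2 b1 b2 c1 c2
```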

In the following code, in each iteration, the variable i will hold a character in the range a to z:

for i in {a..z}; do actions; done;

The for loop can also take the format of the for loop in C. For example:

for((i=0;i<10;i++))
{
    commands; # Use $i
}

While loop:

while condition
do
    commands;
done

For an infinite loop, use true as the condition.

Until loop:

A special loop called until is available with Bash. It executes the loop until the given condition becomes true. For example:

x=0;
until [ $x -eq 9 ]; # [ $x -eq 9 ] is the condition
do
    let x++; echo $x;
done

Comparisons and tests

Flow control in a program is handled by comparison and test statements. Bash also comes with several options to perform tests that are compatible with the UNIX system-level features.

Getting ready

We can use if, if else, and logical operators to perform tests and certain comparison operators to compare data items. There is also a command called test available to perform tests. Let's see how to use those commands.

How to do it...

If condition:

if condition;
then
    commands;
fi

else if and else:

if condition;
then
    commands;
elif condition;
then
    commands;
else
    commands;
fi

Nesting is also possible with if and else. if conditions can be lengthy. We can use logical operators to make them shorter as follows:

[ condition ] && action; # action executes if condition is true.
[ condition ] || action; # action executes if condition is false.

&& is the logical AND operation and || is the logical OR operation. This is a very helpful trick while writing Bash scripts. Now let's go into conditions and comparisons operations.
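Two one-liners illustrating the short-circuit forms (using /tmp, which exists on any Linux system, and a path assumed not to exist):

```shell
[ -d /tmp ] && echo "/tmp is a directory"            # runs: the test succeeds
[ -d /no/such/dir ] || echo "that path is missing"   # runs: the test fails
```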

Mathematical comparisons:

Usually, conditions are enclosed in square brackets [ ]. Note that there must be a space between [ or ] and the operands; an error is shown if no space is provided. An example is as follows:

[ $var -eq 0 ]    # correct: spaces around the operands
[$var -eq 0]      # incorrect: will show an error

Performing mathematical conditions over variables or values can be done as follows:

[ $var -eq 0 ]  # It returns true when $var equal to 0.
[ $var -ne 0 ] # It returns true when $var not equals 0

Other important operators are:

  • -gt: Greater than

  • -lt: Less than

  • -ge: Greater than or equal to

  • -le: Less than or equal to

Multiple test conditions can be combined as follows:

[ $var1 -ne 0 -a $var2 -gt 2 ]  # using AND -a
[ $var -ne 0 -o $var2 -gt 2 ] # OR -o

Filesystem related tests:

We can test different filesystem related attributes using different condition flags as follows:

  • [ -f $file_var ]: Returns true if the given variable holds a regular filepath or filename.

  • [ -x $var ]: Returns true if the given variable holds a file path or filename which is executable.

  • [ -d $var ]: Returns true if the given variable holds a directory path or directory name.

  • [ -e $var ]: Returns true if the given variable holds an existing file.

  • [ -c $var ]: Returns true if the given variable holds path of a character device file.

  • [ -b $var ]: Returns true if the given variable holds path of a block device file.

  • [ -w $var ]: Returns true if the given variable holds path of a file which is writable.

  • [ -r $var ]: Returns true if the given variable holds path of a file which is readable.

  • [ -L $var ]: Returns true if the given variable holds path of a symlink.

An example of the usage is as follows:

if [ -e $fpath ]; then
    echo File exists;
else
    echo Does not exist;
fi

String comparisons:

While using string comparison, it is best to use double square brackets, since the use of single brackets can sometimes lead to errors; it is better to avoid them.

Two strings can be compared to check whether they are the same, as follows:

  • [[ $str1 = $str2 ]]: Returns true when str1 equals str2, that is, the text contents of str1 and str2 are the same

  • [[ $str1 == $str2 ]]: It is an alternative method for the string equality check

We can check whether two strings are not the same as follows:

  • [[ $str1 != $str2 ]]: Returns true when str1 and str2 do not match

We can find out the alphabetically smaller or larger string as follows:

  • [[ $str1 > $str2 ]]: Returns true when str1 is alphabetically greater than str2

  • [[ $str1 < $str2 ]]: Returns true when str1 is alphabetically lesser than str2


    Note that a space is provided after and before =. If space is not provided, it is not a comparison, but it becomes an assignment statement.

  • [[ -z $str1 ]]: Returns true if str1 holds an empty string

  • [[ -n $str1 ]]: Returns true if str1 holds a non-empty string

It is easier to combine multiple conditions using the logical operators && and || as follows:

if [[ -n $str1 ]] && [[ -z $str2 ]] ;

For example:

str1="Not empty "
str2=""
if [[ -n $str1 ]] && [[ -z $str2 ]];
then
    echo str1 is non-empty and str2 is empty string.
fi

The output is as follows:

str1 is non-empty and str2 is empty string.

The test command can be used to perform condition checks. It helps avoid the use of many brackets. The same set of test conditions enclosed within [ ] can be used with the test command.

For example:

if  [ $var -eq 0 ]; then echo "True"; fi
can be written as
if  test $var -eq 0 ; then echo "True"; fi

About the Author

  • Sarath Lakshman

    Sarath Lakshman is a 23 year old who was bitten by the Linux bug during his teenage years. He is a software engineer working in the ZCloud engineering group at Zynga, India. He is a life hacker who loves to explore innovations. He is a GNU/Linux enthusiast and hacktivist of free and open source software. He spends most of his time hacking with computers and having fun with his great friends. Sarath is well known as the developer of SLYNUX (2005), a user friendly GNU/Linux distribution for Linux newbies. The free and open source software projects he has contributed to are PiTiVi Video editor, SLYNUX GNU/Linux distro, Swathantra Malayalam Computing, School-Admin, Istanbul, and the Pardus Project. He has authored many articles for the Linux For You magazine on various domains of FOSS technologies. He has contributed to several open source projects through multiple Google Summer of Code projects. Currently, he is exploring his passion for scalable distributed systems in his spare time. Sarath can be reached via his website

