Working with Processes
As a developer, you are already intuitively familiar with processes. They are the fruits of your labor: after writing and debugging code, your program finally executes, transforming into a beautiful operating system process!
A process on Linux can be a long-running application, a quick shell command like
ls, or anything that the kernel spawns to do some work on the system. If something is getting done in Linux, a process is doing it. Your web browser, text editor, vulnerability scanner, and even things like reading files and the commands you’ve learned so far all spawn a process.
Linux’s process model is important to understand because the abstraction it gives you – the Linux process – is what all the commands and tools you’ll use to manage processes depend on. Gone are the details you’re used to seeing from a developer’s perspective: variables, functions, and threads have all been encapsulated as “a process.” You’re left with a different, external set of knobs to manipulate and gauges to check: process ID, status, resource usage, and all the other process attributes we’ll be covering in this chapter.
First, we’ll take a close look at the process abstraction itself, and then we’ll dive into useful, practical things you can do with Linux processes. While we’re covering the practical aspects, we’ll pause to add detail to a few aspects that are a common source of problems, like permissions, and give you some heuristics for troubleshooting processes.
In this chapter, you’ll learn about the following topics:
- What a Linux process is, and how to see the processes currently running on your system
- The attributes a process has, so you know what information you can gather while troubleshooting
- Common commands for viewing and finding processes
- More advanced topics that can come in handy for a developer actually writing programs that execute as Linux processes: signals and inter-process communication, the /proc virtual filesystem, seeing open file handles with the lsof command, and how processes are created in Linux
You’ll also get a practical review of everything you’ve learned in an example troubleshooting session that uses the theory and commands we cover in this chapter. Now, let’s dive into what exactly a Linux process is.
When we refer to a “process” in Linux, we’re referring to the operating system’s internal model of what exactly a running program is. Linux needs a general abstraction that works for all programs, which can encapsulate the things the operating system cares about. A process is that abstraction, and it enables the OS to track some of the important context around programs that are executing; namely:
- Memory usage
- Processor time used
- Other system resource usage (disk access, network usage)
- Communication between processes
- Related processes that a program starts, for example, firing off a shell command
You can get a listing of all system processes (at least the ones your user is allowed to see) by running the ps program with the aux flags:
Figure 2.1: List of system processes
What is a Linux process made of?
- Process ID (PID in the ps output above). PID 1 is the init system – the original parent of all other processes, which bootstraps the system. The kernel starts this as one of the first things it does after starting to execute. When a process is created, it gets the next available process ID, in sequential order. Because it is so important to the normal functioning of the operating system, init cannot be killed, even by the root user. Different Unix operating systems use different init systems – for example, most Linux distributions use
systemd, while macOS uses
launchd, and many other Unixes use SysV. Regardless of the specific implementation, we’ll refer to this process by the name of the role it fills: “init.”
- Parent Process PID (PPID). Each process is spawned by a parent. If the parent process dies while the child is alive, the child becomes an “orphan.” Orphaned processes are re-parented to init (PID 1).
- Status (STAT in the ps output above). Running man ps will show you an overview of the possible status values:
- D – uninterruptible sleep (usually IO)
- I – idle kernel thread
- R – running or runnable (on run queue)
- S – interruptible sleep (waiting for an event to complete)
- T – stopped by job control signal
- t – stopped by debugger during tracing
- X – dead (should never be seen)
- Z – defunct (“zombie”) process, terminated but not reaped by its parent
- Priority status (“niceness” – does this process allow other processes to take priority over it?).
- A process owner (USER in the ps output above); the effective user ID.
- Effective Group ID (EGID), which is used for group permission checks.
- An address map of the process’s memory space.
- Resource usage – open files, network ports, and other resources the process is using (VSZ and RSS for memory usage in the ps output above).
(Citation: from the Unix and Linux System Administration Handbook, 5th edition, p.91.)
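You can see several of these attributes from inside a running program. Here's a minimal Python sketch that asks the kernel for this process's own ID, parent, owner, and working directory:

```python
import os

# Each value below comes from the kernel's model of this process --
# the same data that ps displays in its columns.
print("PID: ", os.getpid())    # this process's ID
print("PPID:", os.getppid())   # the parent that spawned us
print("EUID:", os.geteuid())   # effective user ID (process owner)
print("EGID:", os.getegid())   # effective group ID
print("CWD: ", os.getcwd())    # current working directory
```

Run it twice and you'll see a different PID each time, while the PPID stays the same if you launch it from the same shell.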
Let’s take a closer look at a few of the process attributes that are most important for developers and occasional troubleshooters to understand.
Process ID (PID)
Each process is uniquely identifiable by its process ID, which is just a unique integer that is assigned to a process when it starts. Much like a relational database with IDs that uniquely identify each row of data, the Linux operating system keeps track of each process by its PID.
A PID is by far the most useful label for you to use when interacting with processes.
Effective User ID (EUID) and Effective Group ID (EGID)
As you’ll see in Chapter 5, Introducing Files, files have user and group ownership set on them, which determines who their permissions apply to. If a file’s ownership and permissions are essentially a lock, then a process with the right user/group permissions is like a key that opens the lock and allows access to the file. We’ll dive deeper into this later, when we talk about permissions.
Environment variables
You’ve probably used environment variables in your applications – they’re a way for the operating system environment that launches your process to pass in data that the process needs. This commonly includes things like configuration directives (
LOG_DEBUG=1) and secret keys (
AWS_SECRET_KEY), and every programming language has some way to read them out from the context of the program.
For example, this Python script gets the user’s home directory from the HOME environment variable, and then prints it:

```python
import os

home_dir = os.environ['HOME']
print("The home directory for this user is", home_dir)
```

Running it produces output like this:

```
The home directory for this user is /home/dcohen
```
Current working directory
A process has a “current working directory,” just like your shell (which is just a process, anyway). Typing pwd in your shell prints its current working directory, and every process has one. The working directory for a process can change, so don’t rely on it too much.
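To see that the working directory really is mutable per-process state, here's a small Python sketch that changes it and changes back:

```python
import os
import tempfile

# A process's working directory is mutable state: os.chdir() changes it,
# and relative paths are resolved against it from then on.
start = os.getcwd()
with tempfile.TemporaryDirectory() as tmp:
    os.chdir(tmp)                      # like running `cd` in a shell
    print("working directory is now:", os.getcwd())
    os.chdir(start)                    # change back before tmp is deleted
print("back in:", os.getcwd())
```

Note that the change is invisible to every other process, including the shell that launched the script.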
This concludes our overview of the process attributes that you should know about. In the next section, we’ll step away from theory and look at some commands you can use to start working with processes right away.
Practical commands for working with Linux processes
Here are some of the commands you’ll use most often:
- ps – Shows processes on the system; you saw an example of this command earlier in the chapter. Flags modify which process attributes are displayed as columns. This command is usually used with filters to control how much output you get, for example, ps aux | head -n 10 to cut your output down to just the top 10 lines. A few more useful tricks:
  - ps -eLf shows thread information for processes
  - ps -ejH is useful for seeing the relationships between parent and child processes visually (children are indented under their parents)
Figure 2.2: Examples of outputs of the ps command with flags
- pgrep – Finds process IDs by name. Can use regular expressions.
Figure 2.3: Examples of outputs of the pgrep command with flags
- top – An interactive program that polls all processes (once a second, by default) and outputs a sorted list of resource usage (you can configure what it sorts by). Also displays total system resource usage. Press Q or use Ctrl + C to quit. You’ll see an example of this command’s output later in this chapter.
- iotop – Like top, but for disk IO. Extremely useful for finding IO-hungry processes. Not installed on all systems by default, but available via most package managers.
Figure 2.4: Example of output of the iotop command
- Like top, but for network IO. Groups network usage by process, which is incredibly convenient. Available via most package managers.
Advanced process concepts and tools
This marks the beginning of the “advanced” section of this chapter. While you don’t need to master all the concepts in this section to work effectively with Linux processes, they can be extremely helpful. If you have a few extra minutes, we recommend at least familiarizing yourself with each one.
Signals
How does systemctl tell your web server to re-read its configuration files? How can you politely ask a process to shut down cleanly? And how can you kill a malfunctioning process immediately, because it’s bringing your production application to its knees?
In Unix and Linux, all of this is done with signals. Signals are numerical messages that can be sent between programs. They’re a way for processes to communicate with each other and with the operating system, allowing processes to send and receive specific messages.
These messages can be used to communicate a variety of things to a process, for example, indicating that a particular event has happened or that a specific action or response is required.
Practical uses of signals
Let’s look at a few examples of the practical value that the signal mechanism enables. Signals can be used to implement inter-process communication; for example, one process can send a signal to another process indicating that it’s finished with a particular task and that the other process can now start working. This allows processes to coordinate their actions and work together in a smooth and efficient manner, much like execution threads in programming languages (but without the associated memory sharing).
Another common application of process signals is to handle program errors. For example, a process can be designed to catch the
SIGSEGV signal, which indicates a segmentation fault. When a process receives this signal, it can trap that signal and then take action to log the error, dump core for debugging purposes, or clean up any resources that were being used before shutting down gracefully.
Process signals can also be used to implement graceful shutdowns. For example, when a system is shutting down, a signal can be sent to all processes to give them a chance to save their state and clean up any resources they were using, via “trapping” signals.
If the receiving process has a handler function for the signal that’s being sent, then that handler function is run. That’s how programs re-read their configuration without restarting, and finish their database writes and close their file handles after receiving the shutdown signal.
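Here's what installing such a handler looks like in practice, as a minimal Python sketch: we trap SIGTERM so the process can do cleanup work instead of dying immediately.

```python
import os
import signal

# A minimal sketch of "trapping" a signal: install a handler so that
# SIGTERM triggers cleanup instead of immediate termination.
cleaned_up = False

def handle_term(signum, frame):
    global cleaned_up
    cleaned_up = True  # a real program would flush writes, close handles, etc.

signal.signal(signal.SIGTERM, handle_term)   # register the handler
os.kill(os.getpid(), signal.SIGTERM)         # send ourselves the signal
print("handler ran:", cleaned_up)
```

Without the signal.signal() call, the default action for SIGTERM would terminate the process before the print statement ever runs.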
The kill command
However, it’s not just processes that communicate via signals: the frighteningly named (and, technically speaking, incorrectly named) kill program allows users to send signals to processes, too.
One of the most common uses of user-sent signals, via the kill command, is to interrupt a process that is no longer responding. For example, if a process is stuck in an infinite loop, a “kill” signal can be sent to force it to stop.
The kill command allows you to send a signal to a process by specifying its PID. If the process you’d like to terminate has PID 2600, you’d run:

kill 2600

This command sends signal 15 (SIGTERM, or “terminate”) to the process, which then has a chance to trap the signal and shut down cleanly.
As you can see from the included table of standard signal numbers, the default signal that kill sends is “terminate” (signal 15, SIGTERM), not “kill” (signal 9, SIGKILL). The kill program is not just for killing processes but for sending any kind of signal. It’s really confusingly named and I’m sorry about that – it’s just one of those idiosyncrasies of Unix and Linux that you’ll get used to.
For example, to send SIGHUP (signal 1) to the same process, you’d run:

kill -1 2600

Running man signal will give you a list of signals that you can send:
Figure 2.6: Example of output of the man signal command
It pays – sometimes quite literally, in engineering interviews – to be familiar with a few of these:
- SIGHUP (1) – “hangup”: interpreted by many applications – for example, nginx – as “re-read your configuration because I’ve made changes to it.”
- SIGINT (2) – “interrupt”: often interpreted the same as SIGTERM – “please shut down cleanly.”
- SIGTERM (15) – “terminate”: nicely asks a process to shut down.
- SIGUSR1 (30) and SIGUSR2 (31) – sometimes used for application-defined messaging. For example, SIGUSR1 asks nginx to re-open the log files it’s writing to, which is useful if you’ve just rotated them.
- SIGKILL (9) – cannot be trapped and handled by processes. If this signal is sent to a program, the operating system kills that program immediately. Any cleanup code, like flushing writes or safe shutdown, is not performed, so this is generally a last resort, since it could lead to data corruption.
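The difference between a trappable SIGTERM and an untrappable SIGKILL can be demonstrated directly. In this Python sketch, a child process ignores SIGTERM, so only SIGKILL stops it:

```python
import os
import signal
import time

# Sketch: a child that ignores SIGTERM can only be stopped with SIGKILL,
# which the kernel delivers without giving the process a chance to react.
pid = os.fork()
if pid == 0:                                        # child process
    signal.signal(signal.SIGTERM, signal.SIG_IGN)   # ignore polite requests
    while True:
        time.sleep(1)
else:                                               # parent process
    time.sleep(0.2)                      # let the child set up its handler
    os.kill(pid, signal.SIGTERM)         # politely ask; the child ignores it
    time.sleep(0.2)
    os.kill(pid, 0)                      # signal 0 = "are you alive?" (yes)
    os.kill(pid, signal.SIGKILL)         # last resort; cannot be ignored
    _, status = os.waitpid(pid, 0)
    print("child terminated by signal", os.WTERMSIG(status))
```

This mirrors the kill / kill -9 escalation you'll see in the troubleshooting session later in the chapter.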
The /proc virtual filesystem
If you want to explore Linux a bit deeper, feel free to poke around the /proc directory. That’s definitely beyond the basics, but it’s a directory that contains a filesystem subtree for every process, where live information about the processes is looked up as you read those files.
In practice, this knowledge can come in handy during troubleshooting when you’ve identified a misbehaving (or mysterious) process and want to know exactly what it’s doing in real time.
You can learn a lot about a process by poking around in its
/proc subdirectory and casually googling.
Many of the tools we show you in this chapter actually use
/proc to gather process information, and only show you a subset of what’s there. If you want to see everything and do the filtering yourself,
/proc is the place to look.
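For example, this Linux-only Python sketch reads a process's command name and a few status attributes straight out of /proc, the same place ps and top get their data:

```python
import os

# Sketch (Linux-only): read live process data directly from /proc.
pid = os.getpid()
proc_dir = f"/proc/{pid}"
if os.path.isdir(proc_dir):  # guard so this is a no-op on non-Linux systems
    with open(f"{proc_dir}/comm") as f:
        print("command name:", f.read().strip())
    with open(f"{proc_dir}/status") as f:
        for line in f:
            # a few of the process attributes discussed earlier
            if line.startswith(("State:", "PPid:", "Uid:", "VmRSS:")):
                print(line.strip())
```

Swap in any other PID you're allowed to see (for example, one found with pgrep) to inspect a different process.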
lsof – show file handles that a process has open
The lsof command shows all files that a process has opened for reading and writing. This is useful because it only takes one small bug for a program to leak file handles (internal references to files that it has requested access to). This can lead to resource usage issues, file corruption, and a long list of strange behavior.
Thankfully, getting a list of files that a process has open is easy. Just run lsof and pass the -p flag with a PID (you’ll usually have to run this as root). This will return the list of files that the process (in this case, with PID 1589) has open:
lsof -p 1589
Figure 2.7: Example of list of files opened by the 1589 process using the lsof -p 1589 command
The above is the output for an nginx web server process. The first line shows you the current working directory for the process: in this case, the root directory (/). You can also see that it has file handles open on its own binary (/usr/sbin/nginx) and on various shared libraries.
Further down, you might notice a few more interesting filepaths:
Figure 2.8: Further opened files of the 1589 process
This listing includes the log files nginx is writing to, and socket files (Unix, IPv4, and IPv6) that it’s reading and writing to. In Unix and Linux, network sockets are just a special kind of file, which makes it easy to use the same core toolset across a wide variety of use cases – tools that work with files are extremely powerful in an environment where almost everything is represented as a file.
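In fact, the raw data behind lsof -p lives in /proc as well: every handle a process holds open appears as a numbered symlink in /proc/&lt;pid&gt;/fd. A small Linux-only Python sketch:

```python
import os
import tempfile

# Sketch (Linux-only): list this process's open file handles by reading
# the symlinks in /proc/<pid>/fd -- essentially what lsof -p does.
tmp = tempfile.NamedTemporaryFile()      # hold one file open deliberately
fd_dir = f"/proc/{os.getpid()}/fd"
open_files = []
if os.path.isdir(fd_dir):
    for fd in os.listdir(fd_dir):
        try:
            open_files.append(os.readlink(f"{fd_dir}/{fd}"))
        except OSError:
            pass  # fds can come and go while we walk the directory
    print(f"this process has {len(open_files)} handles open")
tmp.close()
```

Besides regular files, you'll typically see entries for the terminal, pipes, and sockets, since all of these are represented as file handles.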
How processes are created
Except for the very first process (init, PID 1), all processes are created by a parent process, which essentially makes a copy of itself and then “forks” (splits) that copy off. When a process is forked, it typically inherits its parent’s permissions, environment variables, and other attributes.
Although this default behavior can be prevented and changed, it’s a bit of a security risk: software that you run manually receives the permissions of your current user (or even root privileges, if you use
sudo). All child processes that might be created by that process – for example, during installation, compilation, and so on – inherit those permissions.
Imagine a web server process that was started with root privileges (so it could bind to a network port) and environment variables containing cloud authentication keys (so it could grab data from the cloud). When this main process forks off a child process that needs neither root privileges nor sensitive environment variables, it’s an unnecessary security risk to pass those along to the child. As a result, dropping privileges and clearing environment variables is a common pattern in services spawning child processes.
From a security perspective, it is important to keep this in mind to prevent situations where information such as passwords or access to sensitive files could be leaked. While it is outside the scope of this book to go into details of how to avoid this, it’s important to be aware of this if you’re writing software that’s going to run on Linux systems.
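The environment-scrubbing half of this pattern is easy to demonstrate. In this Python sketch, a child process started the default way inherits a (made-up) secret from our environment, while a child given an explicit, minimal environment does not:

```python
import os
import subprocess
import sys

# Sketch: children inherit the parent's environment by default; passing an
# explicit env mapping is the standard way to avoid leaking secrets.
os.environ["AWS_SECRET_KEY"] = "hunter2"   # pretend secret (made-up value)

code = "import os; print('AWS_SECRET_KEY' in os.environ)"
inherited = subprocess.run([sys.executable, "-c", code],
                           capture_output=True, text=True)
scrubbed = subprocess.run([sys.executable, "-c", code],
                          capture_output=True, text=True,
                          env={"PATH": os.environ.get("PATH", "")})
print("inherited:", inherited.stdout.strip())   # True  -> secret leaked
print("scrubbed: ", scrubbed.stdout.strip())    # False -> child sees nothing
```

Dropping root privileges works the same way in spirit: the parent deliberately gives up something before (or right after) creating the child, rather than relying on the child to behave.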
Review – example troubleshooting session
To begin with, we want to see what’s happening on the system. You just learned that you can see a live view of processes running on a system by running the interactive
top command. Let’s try that now.
Figure 2.9: Example of output of the top command
By default, the
top command sorts processes by CPU usage, so we can simply look at the first listed process to find the offending one. Indeed, the top process is using 94% of one CPU’s available processing time.
As a result of running
top, we’ve gotten a few useful pieces of information:
- The problem is CPU usage, as opposed to some other kind of resource contention.
- The offending process is PID 1763, and the command being run (listed in the COMMAND column) is
bzip2, which is a compression program.
We determine that this bzip2 process doesn’t need to be running here, and we decide to stop it. Using the kill command, we ask the process to terminate:

kill 1763
After waiting a few seconds, we check to see if this (or any other) bzip2 process is running:

pgrep bzip2
Unfortunately, we see that the same PID is still running. It’s time to get serious:
kill -9 1763
This orders the operating system to kill the process without allowing the process to trap (and potentially ignore) the signal. A
SIGKILL (signal #9) simply kills the process where it stands.
Now that you’ve killed the offending process, the server is running smoothly again and you can start tracking down the developer who thought it was a good idea to compress large source directories on this machine.
- We looked at resource usage (via top in this example). This can be any of the other tools we discussed, depending on which resource is being exhausted.
- We found a PID to investigate.
- We acted on that process. In this example, no further investigation was necessary and we sent a signal asking it to shut down (15, SIGTERM), then forced it to stop with SIGKILL (9).
In this chapter, we took a close look at the process abstraction that Linux wraps around executing programs. You’ve seen the common components that all processes have and learned the basic commands you need to find and inspect running processes. With these tools, you’ll be able to identify when a process is misbehaving, and more importantly, which process is misbehaving.