This first chapter is probably the most important and least technical in the book. Most chapters in this book cover specific issues and the commands necessary to troubleshoot those issues. This chapter, however, covers troubleshooting best practices that can be applied to any issue.
You can think of this chapter as covering the principles behind the practices applied throughout the rest of the book.
Before covering the best practices of troubleshooting, it is important to understand the different styles of troubleshooting. In my experience, people tend to use one of two primary styles of troubleshooting, which are as follows:
The Data Collector
The Educated Guesser
Each of these styles has its own strengths and weaknesses. Let's have a look at the characteristics of these styles.
I like to call the first style of troubleshooting the Data Collector. The Data Collector is someone who generally utilizes a systematic approach to solve issues. The systematic troubleshooting approach is generally characterized as follows:
Asking specific questions to parties reporting issues, expecting detailed answers
Running commands to identify system performance for most issues
Running through a predefined set of troubleshooting steps before stepping into action
The strength of this style is that it is effective, no matter what level of engineer or administrator is using it. By going through issues systematically, collecting each data point, and understanding the results before executing any resolution, the Data Collector is able to resolve issues that they might not necessarily be familiar with.
The weakness of this style is that the data collection is not usually the fastest method to resolve issues. Depending on the issue, collecting data can take a long time and some of that data might not be necessary to find the resolution.
I like to call the second style of troubleshooting the Educated Guesser. The Educated Guesser is someone who generally utilizes an intuitive approach to solve issues. The intuitive approach is generally characterized by the following:
Drawing on past experience to identify issues quickly
Requiring minimal information before taking action
Focusing on resolution over methodical data collection
The strength of this style of troubleshooting is that it allows you to come up with resolutions sooner. When confronted with an issue, this type of troubleshooter tends to pull from experience and requires minimal information to find a resolution.
The weakness of this style is that it relies heavily on experience, and thus requires time before being effective. When focusing on resolution, this troubleshooter might also attempt multiple actions to resolve the issue, which can make it seem like the Educated Guesser does not fully understand the issue at hand.
There is a third and often-overlooked style of troubleshooting, one that utilizes both the systematic and intuitive styles. I like to call this style the Adaptor. The Adaptor has a personality that enables them to switch between the systematic and intuitive troubleshooting styles. This combined style is often faster than the Data Collector style and more detail-oriented than the Educated Guesser style, because Adaptors are able to apply the troubleshooting style appropriate for the task at hand.
While it is easy to say that one method is better than the other, the fact of the matter is that picking the appropriate troubleshooting style depends greatly on the person. It is important to understand which troubleshooting style best fits your own personality. By understanding which style fits you better, you can learn and use techniques that fit that style. You can also learn and adopt techniques from other styles to apply troubleshooting steps that you would normally overlook.
This book will show both the Data Collector and Educated Guesser styles of troubleshooting, periodically highlighting which personality style the steps best fit.
Troubleshooting is a process that is both rigid and flexible. The rigidity of the troubleshooting process is based on the fact that there are basic steps to be followed. In this way, I like to equate the troubleshooting process to the scientific method, where the scientific method has a specific list of steps that must be followed.
The flexibility of the troubleshooting process is that these steps can be followed in any order that makes sense. Unlike the scientific method, the troubleshooting process often has the goal of resolving the issue quickly. Sometimes, in order to resolve an issue quickly, you might need to skip a step or execute them out of order. For example, with the troubleshooting process, you might need to resolve the immediate issue, and then identify the root cause of that issue.
The following list shows the five steps that make up the troubleshooting process. Each of these steps could also include several sub-tasks, which may or may not be relevant to the issue. Take these steps with a grain of salt, as not every issue can be placed into the same bucket. They are meant to be used as a best practice but, as with all things, should be adapted to the issue at hand:
Understanding the problem statement.
Establishing a hypothesis.
Trial and error.
Getting help.
Documentation.
With the scientific method, the first step is to establish a problem statement, which is another way of saying: to identify and understand the goal of the experiment. With the troubleshooting process, the first step is to understand the problem being reported. The better we understand an issue, the easier it is to resolve the issue.
There are a number of tasks we can perform that will help us understand issues better. This first step is where the Data Collector personality stands out. Data Collectors, by nature, will gather as much data as they can before moving on to the next step, whereas Educated Guessers generally tend to run through this step quickly and move on, which can sometimes cause critical pieces of information to be missed.
Adaptors tend to understand which data collecting steps are necessary and which ones are not. This allows them to collect data as a Data Collector would, but without spending time gathering data that does not add value to the issue at hand.
The first sub-task in this troubleshooting step is asking the right questions.
Whether via human or automated processes such as a ticket system, the reporter of the issue is often a great source of information.
When they receive a ticket, the Educated Guesser personality will often read the heading of the ticket, make an assumption about the issue, and move to the next stage of understanding the issue. The Data Collector personality will generally open the ticket and read its full details.
While it depends on the ticketing and monitoring system, in general, there can be useful information within a ticket. Unless the issue is a common one and the header alone tells you everything you need to know, it is generally a good idea to read the ticket description. Even small amounts of information might help with particularly tricky issues.
Gathering additional information from humans, however, can be inconsistent. This varies greatly depending on the environment being supported. In some environments, the person reporting an issue can provide all of the details required to resolve the issue. In other environments, they might not understand the issue and simply explain the symptoms.
No matter what troubleshooting style fits your personality best, being able to get important information from the person reporting the issue is an important skill. Intuitive problem solvers such as the Educated Guesser or Adaptor tend to find this process easier as compared to Data Collector personalities, not because these personalities are necessarily better at obtaining details from people but rather because they are able to identify patterns with less information. Data Collectors, however, can get the information they need from those reporting the issue if they are prepared to ask troubleshooting questions.
Don't be afraid to ask obvious questions
My first technical job was in a webhosting technical support call center. There I often received calls from users who did not want to perform the basic troubleshooting steps and simply wanted the issue escalated. These users simply felt that they had performed all of the troubleshooting steps themselves and had found an issue beyond first level support.
While sometimes this was true, more often, the issue was something basic that they had overlooked. In that role, I quickly learned that even if the user is reluctant to answer basic or obvious questions, at the end of the day, they simply want their issue resolved. If that meant going through repetitive steps, that was OK, as long as the issue was resolved.
Even today, as the escalation point for senior engineers, I find that engineers (even those with years of troubleshooting experience under their belts) often overlook simple, basic steps.
Asking simple questions that might seem basic is sometimes a great time saver; so don't be afraid to ask them.
One of the best ways to gather information and understand an issue is to experience it. The second sub-task, when an issue is reported, is to duplicate the issue.
While users can be a source of a lot of information, they are not always the most reliable; oftentimes a user might experience an error and overlook it or simply forget to relay the error when reporting the issue.
Often, one of the first questions I will ask a user is how to recreate the issue. If the user is able to provide this information, I will be able to see any errors and often identify the resolution of the issue faster.
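For example, if the reported issue is a web page returning an error, one quick way to experience it yourself is to request the page directly. The following is a minimal sketch; the URL and the output shown are hypothetical:

$ curl -I http://www.example.com/reports
HTTP/1.1 500 Internal Server Error

Seeing the error firsthand, rather than relying on a second-hand description, often reveals details the reporter overlooked.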
Sometimes duplicating the issue is not possible
While it is always best to duplicate the issue, it is not always possible. Every day, I work with many teams; sometimes, those teams are within the company but many times they are external vendors. Every so often during a critical issue, I will see someone make a blanket statement such as "If we can't duplicate it, we cannot troubleshoot it."
While it is true that duplicating an issue is sometimes the only way to find the root cause, I often hear this statement abused. Duplicating an issue should be viewed like a tool; it is simply one of many tools in your troubleshooting tool belt. If it is not available, then you simply have to make do with another tool.
There is a significant difference between not being able to find a resolution and not attempting to find a resolution due to the inability to duplicate an issue. The latter is not only unhelpful, but also unprofessional.
Most likely, you are reading this book to learn techniques and commands to troubleshoot Red Hat Enterprise Linux systems. The third sub-task in understanding the problem statement is just that: running investigative commands to identify the cause of the issue. Before executing investigatory commands, however, it is important to note that the previous sub-tasks are listed in a logical order.
It is a best practice to first ask the user reporting an issue some basic details of the issue, then after obtaining enough information, duplicate the issue. Once the issue has been duplicated, the next logical step is to run the necessary commands to troubleshoot and investigate the cause of the issue.
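Which commands to run depends entirely on the issue, but as a hedged starting point, a first pass at general system health might look something like the following sketch:

$ df -h                            # check file system utilization
$ free -m                          # check available memory
$ tail -n 50 /var/log/messages     # review recent system log entries

Commands like these are explored further in the next chapter; the point here is that investigation starts with broad checks and narrows from there.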
It is very common to find yourself returning to previous steps during the troubleshooting process. After you have identified some key errors, you might find that you must ask the original reporter for additional information. When troubleshooting, do not be afraid to take a few steps backwards in order to gain clarity of the issue at hand.
With the scientific method, once a problem statement has been formulated, it is time to establish a hypothesis. With the troubleshooting process, after you have identified the issue and gathered information about it, such as errors, the system's current state, and so on, it is likewise time to establish what you believe caused or is causing the issue.
Some issues, however, might not require much of a hypothesis. It is common that errors in log files or the system's current state will answer why the issue occurred. In such scenarios, you can simply resolve the issue and move on to the Documentation step.
For issues that are not cut and dried, you will need to put together a hypothesis of the root cause. This is necessary because the next step after forming a hypothesis is attempting to resolve the issue, and it is difficult to resolve an issue if you do not have at least a theory of its root cause.
Here are a few techniques that can be used to help form a hypothesis.
While performing data collection during the previous steps, you might start to see patterns. Patterns can be something as simple as similar log entries across multiple services, the type of failure that occurred (such as, multiple services going offline), or even a reoccurring spike in system resource utilization.
These patterns can be used to formulate a theory of the issue. To drive the point home, let's go through a real-world scenario.
You are managing a server that both runs a web application and receives e-mails. You have a monitoring system that detected an error with the web service and created a ticket. While investigating the ticket, you also receive a call from an e-mail user stating they are getting e-mail bounce backs.
When you ask the user to read the error to you, they mention the following error:
No space left on device.
Let's break down this scenario:
A ticket from our monitoring solution has told us Apache is down
We have also received reports from e-mail users with errors indicative of a file system being full
Could all of this mean that Apache is down because the file system is full? Possibly. Should we investigate it? Absolutely!
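To test this hypothesis, a quick check of file system utilization is in order. The following sketch shows what such a check might return; the device, sizes, and mount point are hypothetical:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        40G   40G     0 100% /

A Use% of 100 on the file system that both Apache and the mail service write to would support the hypothesis.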
The above breakdown leads into the next technique for forming a hypothesis. It might sound simple, but it is often forgotten: "Have I seen something like this before?"
With the previous scenario, the error reported in the e-mail bounce back was one that generally indicates a file system is full. How do we know this? Simple: we have seen it before. Maybe we have seen that same error with e-mail bounce backs, or maybe we have seen the error from other services. The point is, the error is familiar, and the error generally means one thing.
Remembering common errors can be extremely useful for intuitive types such as the Educated Guesser and Adaptor; it is something they tend to do naturally. For the Data Collector, a handy trick is to keep a reference table of common errors.
From my experience, most Data Collectors tend to keep a set of notes that contain things such as common commands or steps for procedures. Adding common errors and the meaning behind those errors is a great way for systematic thinkers such as Data Collectors to establish a hypothesis faster.
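As a sketch, such a reference table might look like the following; the meanings listed are common causes rather than guarantees:

Error message                   Common meaning
No space left on device         File system full, or out of inodes
Permission denied               Insufficient file or directory permissions
Connection refused              Service not listening, or blocked by a firewall
Name or service not known       DNS resolution failure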
Overall, establishing a hypothesis is important for all types of troubleshooters. This is the area where intuitive thinkers such as Educated Guessers and Adaptors excel. Generally, those types of troubleshooters will form a hypothesis sooner, even if those hypotheses are not always correct.
In the scientific method, once a hypothesis is formed, the next stage is experimentation. With troubleshooting, this equates to attempting to resolve the issue.
Some issues are simple and can be resolved using a standard procedure or steps from experience. Other issues, however, are not as simple. Sometimes, the hypothesis turns out to be wrong or the issue ends up being more complicated than initially thought.
In such cases, it might take multiple attempts to resolve the issue. I personally like to think of this as similar to trial and error. In general, you might have an idea of what is wrong (the hypothesis) and an idea on how to resolve it. You attempt to resolve it (trial), and if that doesn't work (error), you move on to the next possible solution.
To those taking up a new role as a Linux Systems Administrator, if there were only one piece of advice I could give, it would be one that most have learned the hard way: back everything up before making changes.
Many times as systems administrators we find ourselves needing to change a configuration file or delete a few unneeded files in order to solve an issue. Unfortunately, we might think we know what needs to be removed or changed but are not always correct.
If a backup was taken, then the change can simply be reverted to its previous state; without a backup, however, reverting changes is not as easy.
A backup can consist of many things. It can be a full system backup using something like rdiff-backup, a VM snapshot, or something as simple as creating a copy of a file.
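As a minimal sketch, backing up a single configuration file before editing it can be as simple as the following; the paths are hypothetical:

# cp -p /etc/httpd/conf/httpd.conf /etc/httpd/conf/httpd.conf.bak
# tar -czf /root/httpd-conf-$(date +%F).tar.gz /etc/httpd/conf/

The first command preserves the file's ownership and timestamps in the copy; the second captures the entire configuration directory in a dated archive.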
In many cases at this point the issue is resolved, but much like each step in the troubleshooting process, it depends on the issue at hand. While getting help is not exactly a troubleshooting step, it is often the next logical step if you cannot solve the issue on your own.
When looking for help, there are generally six resources available:
Books
Team Wikis or Runbooks
Google
Man pages
Red Hat kernel docs
People (such as teammates, mentors, or vendors)
Books (such as this one) are good for referencing commands or troubleshooting steps for particular types of issues. Other books, such as those that specialize in a specific technology, are good for referencing how that technology works. In previous years, it was not uncommon to see a senior admin with a bookshelf full of technical books at his or her disposal.
In today's world, as books are more frequently seen in a digital format, they are even easier to use as references. The digital format makes them searchable and allows readers to find specific sections faster than a traditional printed version.
Before Team Wikis became common, many operations groups had physical books called Runbooks. These books are a list of processes and procedures used daily by the operations team to keep the production environments operating normally. Sometimes, these Runbooks would contain information for provisioning new servers and sometimes they would be dedicated to troubleshooting.
In today's world, these Runbooks have mostly been replaced by Team Wikis, which often contain the same content but are online. They also tend to be searchable and easier to keep up to date, which means they are frequently more relevant than a traditional printed Runbook.
The benefit of Team Wikis and Runbooks is that they not only address issues that are specific to your environment, but also document the resolutions for those issues. There are many ways to configure services such as Apache, and there are even more ways that external systems create dependencies on these services.
In some environments, you might be able to simply restart Apache whenever there is an issue, but in others, you might actually have to go through several prerequisite steps. If there is a specific process that needs to be followed before restarting a service, it is a best practice to document the process in either a Team Wiki or Runbook.
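As an illustration, a hypothetical Runbook entry for restarting Apache on a RHEL 7 system might require confirming that no report jobs are mid-run before the restart; the job name here is invented:

# ps -ef | grep '[r]eport'    # prerequisite: confirm no report jobs are running
# systemctl restart httpd     # restart Apache only once the check comes back clean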
Google is such a common tool for systems administrators that at one point Google offered search portals dedicated to topics such as Linux. Google has since deprecated these search portals, but that doesn't mean systems administrators use Google or other search engines for troubleshooting any less.
In fact, in today's world, it is not uncommon to hear the words "I would Google it" in technical interviews.
A few tips for those new to using Google for systems administration tasks are:
If you copy and paste a full error message (removing the server-specific text), you will likely find more relevant results:
For example, searching for kdumpctl: No memory reserved for crash kernel returns 600 results, whereas searching for memory reserved for crash kernel returns 449,000 results.
You can find an online version of most man pages by searching for man followed by a command, such as man netstat.
You can wrap an error in double quotes to refine search results to those that contain the same error.
Asking what you're looking for in the form of a question usually returns tutorials, for example, "How do you restart Apache on RHEL 7?"
While Google can be a great resource, the results should always be taken with a grain of salt. Often, while searching for an error on Google, you might find a suggested command that offers little explanation and simply says "run this and it will fix it." Be very cautious when running these commands; any command you execute on a system should be one you are familiar with. You should always know what a command does before executing it.
When Google is not available, and even sometimes when it is, the best source of information on commands or Linux in general is the man pages. The man pages are core Linux manual documents that are accessible via the man command.
To look up documentation for the netstat command, for example, simply run the following:
$ man netstat
NETSTAT(8)           Linux System Administrator's Manual           NETSTAT(8)

NAME
       netstat - Print network connections, routing tables, interface
       statistics, masquerade connections, and multicast memberships
As you can see, this command outputs not only information on what the netstat command is, but also a quick synopsis of usage information, such as the following:
SYNOPSIS
       netstat [address_family_options] [--tcp|-t] [--udp|-u] [--udplite|-U]
               [--raw|-w] [--listening|-l] [--all|-a] [--numeric|-n]
               [--numeric-hosts] [--numeric-ports] [--numeric-users]
               [--symbolic|-N] [--extend|-e[--extend|-e]] [--timers|-o]
               [--program|-p] [--verbose|-v] [--continuous|-c] [--wide|-W]
               [delay]
Also, it gives detailed descriptions of each flag and what it does:
       --route, -r
              Display the kernel routing tables. See the description in
              route(8) for details. netstat -r and route -e produce the
              same output.

       --groups, -g
              Display multicast group membership information for IPv4 and
              IPv6.

       --interfaces=iface, -I=iface, -i
              Display a table of all network interfaces, or the specified
              iface.
In general, the base manual pages for the core system and libraries are distributed with the man-pages package. The man pages for specific commands such as ps are distributed as part of that command's installation package. This is because the documentation of individual commands and components is left to the package maintainers.
This can mean that some commands are not documented to the level of others. In general, however, the man pages are extremely useful sources of information and can answer most day-to-day questions.
In the previous example, we can see that the man page for netstat includes a few sections of information. In general, man pages have a consistent layout with some common sections that can be found within most man pages. The following is a simple list of some of these common sections:
The Name section generally contains the name of the command and a very brief description of the command. The following is the name section from the ps command's man page:
NAME
       ps - report a snapshot of the current processes.
The Synopsis section of a command's man page will generally list the command followed by the possible command flags or options. A very good example of this section can be seen in the netstat command's synopsis:
SYNOPSIS
       netstat [address_family_options] [--tcp|-t] [--udp|-u] [--raw|-w]
               [--listening|-l] [--all|-a] [--numeric|-n] [--numeric-hosts]
               [--numeric-ports] [--numeric-users] [--symbolic|-N]
               [--extend|-e[--extend|-e]] [--timers|-o] [--program|-p]
               [--verbose|-v] [--continuous|-c]
This section can be very useful as a quick reference for command syntax.
The Description section will often contain a longer description of the command as well as a list and explanation of the various command options. The following snippet is from the cat command's man page:
DESCRIPTION
       Concatenate FILE(s), or standard input, to standard output.

       -A, --show-all
              equivalent to -vET

       -b, --number-nonblank
              number nonempty output lines, overrides -n
The description section is very useful, since it goes beyond simply looking up options. This section is often where you will find documentation about the nuances of commands.
Often man pages will also include examples of using the command:
EXAMPLES
       cat f - g
              Output f's contents, then standard input, then g's contents.
The preceding is a snippet from the cat command's man page. We can see, in this example, how to use cat to read from files and standard input in one command.
This section is often where I find new ways of using commands that I've used many times before.
Along with man pages, Linux systems generally also contain info documentation, which is designed to provide additional documentation that goes beyond what is in the man pages. Much like man pages, info documentation is included with a command's package, and the quality/quantity of the documentation can vary by package.
The method for invoking the info documentation is similar to that for man pages; simply execute the info command followed by the subject you wish to view:
$ info gzip
GNU Gzip: General file (de)compression
**************************************

This manual is for GNU Gzip (version 1.5, 10 June 2014), and documents
commands for compressing and decompressing data.

   Copyright (C) 1998-1999, 2001-2002, 2006-2007, 2009-2012 Free
Software Foundation, Inc.
In addition to using man pages and info documentation to look up commands, these tools can also be used to view documentation on other items such as system calls or configuration files.
As an example, if you were to use man to search for the term signal, you would see the following:
$ man signal
SIGNAL(2)              Linux Programmer's Manual              SIGNAL(2)

NAME
       signal - ANSI C signal handling

SYNOPSIS
       #include <signal.h>

       typedef void (*sighandler_t)(int);

       sighandler_t signal(int signum, sighandler_t handler);

DESCRIPTION
       The behavior of signal() varies across UNIX versions, and has also
       varied historically across different versions of Linux. Avoid its
       use: use sigaction(2) instead. See Portability below.

       signal() sets the disposition of the signal signum to handler,
       which is either SIG_IGN, SIG_DFL, or the address of a
       programmer-defined function (a "signal handler").
Signal is a very important system call and a core concept of Linux. Knowing that it is possible to use the man and info commands to look up core Linux concepts and behaviors can be very useful during troubleshooting.
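Two hedged examples of going beyond command lookups: man pages are divided into numbered sections (section 5, for instance, covers file formats), and when you do not know the exact page name, you can search page names and descriptions:

$ man 5 crontab    # the crontab file format, rather than the crontab command
$ man -k signal    # search man page names and descriptions for "signal"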
In addition to man pages, Red Hat distributions also have a package called kernel-doc. This package contains quite a bit of information on how the internals of the system work.
The kernel documentation is a set of text files that are placed in /usr/share/doc/kernel-doc-<kernel-version>/ and categorized by the topics they cover. This resource is quite useful for deeper troubleshooting, such as adjusting kernel tunables or understanding how ext4 file systems utilize the journal.
By default, the kernel-doc package is not installed; however, it can easily be installed using the yum command:

# yum install kernel-doc
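Once installed, the documentation can be browsed like any other set of text files. As a sketch (the exact directory name depends on the installed kernel version):

$ ls /usr/share/doc/kernel-doc-*/Documentation/
$ less /usr/share/doc/kernel-doc-*/Documentation/sysctl/vm.txt

The sysctl subdirectory, for example, documents the kernel tunables exposed under /proc/sys/.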
Whether you are asking a friend or a team leader, there is a certain etiquette to asking others for help. The following is a list of things that people tend to expect when asked to help solve an issue. When I am asked for help, I would expect you to:
Try to resolve it yourself: When escalating an issue, it is always best to have at least attempted the Understanding the problem statement and Establishing a hypothesis steps of the troubleshooting process.
Document what you've tried: Documentation is key to escalating issues or getting help. The better you document the steps tried and errors found, the faster it will be for others to identify and resolve the issue.
Explain what you think the issue is and what was reported: When you escalate the issue, one of the first things to point out is your hypothesis. Often this can help expedite resolution by leading the next person to a possible solution without having to perform data collection activities.
Mention anything else that happened on this system recently: Issues often come in pairs; it is important to highlight all the factors at play on the affected system or systems.
The preceding list, while not exhaustive, is important, as each of these key pieces of information can help the next person troubleshoot the issue effectively.
When escalating issues, it is always best to follow up with that other person to find out what they did and how they did it. This is important as it will show the person you asked that you are willing to learn more, which many times will lead to them taking time to explain how they resolved and identified the issue.
Interactions like these will give you more knowledge and help build your systems administration skills and experience.
Documentation is a critical step in the troubleshooting process. At every point during the process, it is key to take notes and document the actions being performed. Why is it important to document? There are three main reasons:
When escalating the issue, the more information you have written down, the more you can pass on to the next person
If the issue is a reoccurring issue, the documentation can be used to update a Team Wiki or Runbook
If, in your environment, you perform Root Cause Analysis (RCA), all of this information will be required for a RCA
Depending on the environment, the documentation can be anything from simple notes saved in a text file on a local system to notes required by a ticket system. Each work environment is different, but a general rule is that there is no such thing as too much documentation.
For Data Collectors, this step is fairly natural, as most Data Collector personalities will generally keep quite a few notes for their own personal use. For Educated Guessers, this step might seem unnecessary. However, for any issue that is reoccurring or needs to be escalated, documentation is critical.
What kind of information should be documented? The following list is a good starting point but as with most things in troubleshooting, it depends on the environment and the issue:
The problem statement, as you understand it
The hypothesis of what is causing the issue
Data collected during the information gathering steps:
Specific errors found
Relevant system metrics (for example, CPU, memory, and disk utilization)
Commands executed during the information gathering steps (within reason; there is no need to include every routine command)
Steps taken during attempts to resolve the issue, including specific commands executed
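As a filled-in sketch of the preceding items, with every detail invented for illustration, a set of troubleshooting notes might look like the following:

Problem:    Apache down on web01 (ticket #1234); users also report e-mail bounce backs
Hypothesis: File system full after a log cleanup cron job was disabled
Data:       df -h showed / at 100%; bounce backs report "No space left on device"
Actions:    Removed rotated logs older than 30 days; re-enabled the cron job;
            restarted httpd
Result:     Service restored; monitoring alert cleared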
With the preceding items well documented, if the issue reoccurs, it is relatively simple to take the documentation and move it to a Team Wiki. The benefit to this is that a Wiki article can be used by other team members who need to resolve the same issue during reoccurrences.
One of the three reasons listed previously for documentation is to use the documentation during Root Cause Analysis, which leads to our next topic: establishing a root cause analysis.
Root cause analysis is a process that is performed after incidents occur. The goal of the RCA process is to identify the root cause of an incident and identify any corrective actions that could prevent the same incident from occurring again. These corrective actions might range from something as simple as user training to reconfiguring Apache across all web servers.
The RCA process is not unique to technology and is a widely practiced process in fields such as aviation and occupational safety. In these fields, an incident is often more than simply a few computers being offline. They are incidents where a person's life might have been at risk.
Different work environments might implement RCA processes differently but at the end of the day there are a few key elements in every good RCA:
The problem as it was reported
The actual root cause of the problem
A timeline of events and actions taken
Any key data points
A plan of action to prevent the incident from reoccurring
One of the first steps in the troubleshooting process is to identify the problem; this information is a key piece of any RCA. Its importance can vary depending on the issue. Sometimes, this information will show whether or not the issue was correctly identified. Most times, it can serve as an estimate of the impact of the issue.
Understanding the impact of an issue can be very important. For some companies and issues, it could mean lost revenue; for others, it could mean damage to their brand; and depending on the issue, it could mean nothing at all.
The importance of this element of a root cause analysis is self-explanatory. However, sometimes it might not be possible to identify a root cause. In this chapter and in Chapter 12, Root Cause Analysis of an Unexpected Reboot, I will discuss how to handle issues where a full root cause is unavailable.
If we use an aviation incident as an example, it is easy to see how a timeline of events, such as when the plane took off, when the passengers boarded, and when the maintenance crew finished their evaluation, can be useful. A timeline for technology incidents can also be very useful, as it can be used to identify the length of impact and when key actions were taken.
A good timeline should consist of times and major events of the incident. The following is an example timeline of a technology incident:
At 08:00, Joe B. phoned the NOC helpline reporting an outage with e-mail servers in Tempe
At 08:15, John C. logged into the e-mail servers in Tempe and noticed they were running out of available memory
At 08:17, as per the Runbook, John C. began rebooting the e-mail servers one by one
In addition to a timeline of events, the RCA should also include key data points. To use the aviation example again, a key data point would be the weather conditions during the incident, the work hours of those involved, or the condition of the aircraft.
Our timeline example contained a few key data points:
Time of incident: 08:00
Condition of e-mail servers: Running out of available memory
Affected service: E-mail
Whether the data points are on their own or within a timeline, it is important to ensure those data points are well documented in the RCA.
The entire point of performing a root cause analysis is to establish why an incident occurred and the plan of action to prevent it from happening again.
Unfortunately, this is an area that I see many RCAs neglect. An RCA process can be useful when implemented well; however, when implemented poorly, it can turn into a waste of time and resources.
Often with poor implementations, you will find that RCAs are required for every incident, big or small. The problem with this is that it leads to a reduction in the quality of the RCAs. An RCA should only be performed when an incident causes significant impact. For example, hardware failures are not preventable; you can proactively identify pending hard drive failures using tools such as smartd, but apart from replacing the drives, you cannot always prevent them from failing. Requiring an RCA for every hardware failure and replacement is an example of a poorly implemented RCA process.
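For example, a drive's health can be checked with the smartctl command from the smartmontools package; the output below is illustrative:

# smartctl -H /dev/sda
SMART overall-health self-assessment test result: PASSED

Tracking these results over time helps identify failing drives before they fail outright, without requiring an RCA for each replacement.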
When an engineer is required to establish a root cause for something as common as a hardware failure, they begin to neglect the root cause process. When engineers neglect the RCA process for one type of incident, that neglect can spread to other types of incidents, causing the quality of RCAs to suffer.
An RCA should be reserved for incidents with significant impact. Minor or routine incidents should never have an RCA requirement; they should, however, be tracked. By tracking the number of hard drives that have been replaced, along with the make and model of those drives, it is possible to identify hardware quality issues. The same is true for routine incidents such as resetting user passwords. By tracking these types of incidents, it is possible to identify possible areas of improvement.
To give a better understanding of the RCA process, let's use a hypothetical problem seen in production environments: an application has crashed. After logging into the system, you find that the application crashed because the file system it was attempting to write to was full.
The root cause is not always the obvious cause
Was the root cause of the issue the fact that the file system was full? No. While the file system being full might have caused the application to crash, this is what is called a contributing factor. A contributing factor, such as the file system being full, can be corrected, but doing so will not prevent the issue from reoccurring.
At this point, it is important to identify why the file system was full. On further investigation, you find that a co-worker had disabled a cron job that removes old application files. After the cron job was disabled, the available space on the file system slowly kept decreasing. Eventually, the file system was 100 percent utilized.
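A hedged sketch of how such a finding might surface: listing the application user's cron jobs could show the cleanup job commented out. The user name and script path here are hypothetical:

# crontab -l -u appuser
#0 2 * * * /opt/app/bin/cleanup.sh

The leading # on the job line means cron will never run it, so old application files accumulate until the file system fills.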
In this case, the root cause of the issue was the disabled cron job.
Let's look at another hypothetical situation, where an issue causes an outage. Since the issue caused significant impact, it will absolutely require an RCA. The problem is, in order to resolve the issue, you will need to perform an activity that eliminates the possibility of performing an accurate RCA.
These situations sometimes require a judgment call as to whether to live with the outage a little longer or to resolve the outage and sacrifice any chance of an accurate RCA. Unfortunately, there is no single answer for these situations; the correct answer depends on both the issue and the environment affected.
While working on financial systems, I often found myself having to make this decision. With mission-critical systems, the answer was almost always to restore service rather than perform the root cause analysis. However, whenever possible, it is always preferable to first capture data, even if that data cannot be reviewed immediately.
The final section of this chapter covers one of the most important best practices I can suggest: understanding your environment.
Some believe that a systems administrator's job stops at the applications installed on the system and that the systems administrator should only be concerned with the operating system and the operating system's components, such as networking or file systems.
I do not follow this philosophy. In reality, it is often the case that a systems administrator will come to understand how an application works in production better than the development team that created it.
From my experience, in order to truly support a server, you must understand the services and applications running on that server. For example, in many enterprise environments, the systems administrator is expected to handle the configuration and management of the web server (for example, Apache or Nginx). However, the same administrator is not expected to manage the application (for example, Java or C) behind Apache.
What makes Apache different from a Java application? The answer is nothing, really; at the end of the day, they are both applications running on the server. I have seen many administrators simply wash their hands of an issue once it is related to an application. Yet if the issue is related to Apache, they spring into action.
In the end, if those administration groups were to partner with the development group, issues could be solved faster. It is the administrator's responsibility to understand and help troubleshoot issues with any software loaded on their systems, whether that software was distributed with the OS or installed later by an application team.
In this chapter, you learned that there are two main styles of troubleshooting, intuitive (Educated Guessers) and systematic (Data Collectors). We covered which troubleshooting steps work best for those two styles and that it is possible for some (Adaptors) to utilize both styles of troubleshooting.
In the following chapters of this book, as we troubleshoot real-life scenarios, I will utilize both the intuitive and systematic troubleshooting steps highlighted in the processes discussed in this chapter.
This chapter did not get into technical specifics; the next chapter will be full of technical details, as we cover and explore common Linux commands used for troubleshooting.