In any enterprise, big or small, it is very important to monitor the health of all of its components, such as servers, network devices, and databases, and to make sure that things are working as intended. Monitoring is critical for any business that depends on infrastructure, and it does this by raising signals that enable the necessary actions in case of any failures.
In a real production environment, monitoring can be very complex with many components and configurations. There might be different security zones, different ways in which servers are set up, or the same database might have been used in many different ways with servers listening to various service ports.
Before diving into setting up monitoring and logging for Hadoop, it is very important to understand the basics of monitoring, how it works, and some commonly used tools in the market. In Hadoop, we can monitor the resources, services, and also collect the metrics of various Hadoop counters. In this book, we will be looking at monitoring and metrics collection.
In this chapter, we will begin our journey by exploring the open source monitoring tools that we use in enterprises, and learn how to configure them.
The following topics will be covered in this chapter:
Some of the widely used monitoring tools
Installing and configuring Nagios
Installing and configuring Ganglia
Understanding how system logging works
If we have tested our code and verified that the functionality is fine, then why do we need monitoring?
The production load might differ from what we tested, and there could be human errors during day-to-day operations: someone could execute a wrong command or add a wrong configuration. There could also be hardware or network failures that make your application unavailable. How long can you afford to keep the application down? Maybe a few minutes or a few hours, but what about the revenue loss, or what if it is a critical application carrying out financial transactions? We need to respond to failures as soon as possible, and this can be done only with early detection and notifications.
In the market, there are many tools available for monitoring, but it is important to evaluate how well each one fits your environment, your scale, and your alerting needs.
Some of the monitoring tools available in the market are BandwidthD, EasyNetMonitor, Zenoss, NetXMS, Splunk, and many more.
Of the many tools available, Nagios and Ganglia are the most widely deployed for monitoring Hadoop clusters. Many Hadoop vendors, such as Cloudera and Hortonworks, use Nagios and Ganglia for monitoring their clusters.
Nagios is a powerful monitoring system that provides you with instant awareness about your organization's mission-critical IT infrastructure.
By using Nagios, you can monitor your hosts and services, get notified of failures and recoveries, and detect problems before they affect your users.
The Nagios architecture was designed with flexibility and scalability in mind. It consists of a central server, referred to as the monitoring server, and clients, the Nagios agents, which run on each node that needs to be monitored.
Checks can be performed on services, ports, memory, disk, and so on, using either active checks or passive checks. Active checks are initiated by the Nagios server, whereas passive checks are initiated by the client. This flexibility allows us to have programmable APIs and customizable plugins for monitoring.
Nagios is an enterprise class monitoring solution, which can manage a large number of nodes. It can be scaled easily, and it has the ability to write custom plugins for your applications. Nagios is quite flexible and powerful, and it supports many configurations and components.
Tip
Nagios is such a vast and extensive product that this chapter is in no way a reference manual for it. This chapter is written with the primary aim of setting up monitoring, as quickly as possible, and familiarizing the readers with it.
Always set up a separate host as the monitoring node/server and do not install other critical services on it. The number of hosts that are monitored can be a few thousand, with each host having from 15 to 20 checks that can be either active or passive.
Before starting with the installation of Nagios, make sure that Apache HTTP Server version 2.0 is running and that gcc and gd have been installed. Make sure that you are logged in as root or as a user with sudo privileges. Nagios runs on many platforms, such as RHEL, Fedora, Windows, and CentOS; however, in this book we will use the CentOS 6.5 platform.
$ ps -ef | grep httpd
$ service httpd status
$ rpm -qa | grep gcc
$ rpm -qa | grep gd
Let's look at the installation of Nagios and how we can set it up. The following steps apply to RHEL, CentOS, Fedora, and Ubuntu:
Download Nagios and the Nagios plugin from the Nagios repository, which can be found at http://www.nagios.org/download/.
The latest stable version of Nagios at the time of writing this chapter was nagios-4.0.8.tar.gz. You can download it either from http://sourceforge.net/ or from any other commercial site, but a few sites might ask for registration.
Create a Nagios user to manage the Nagios interface. You have to execute the commands as either root or with sudo privileges.
$ sudo /usr/sbin/useradd -m nagios
$ passwd nagios
Create a new nagcmd group so that external commands can be submitted through the web interface. (If you prefer, you can download the file directly into the user's home directory.) Add the Nagios user and the Apache user as part of the group.
$ sudo /usr/sbin/groupadd nagcmd
$ sudo /usr/sbin/usermod -a -G nagcmd nagios
$ sudo /usr/sbin/usermod -a -G nagcmd apache
Let's start with the configuration.
Navigate to the directory where the package was downloaded. The downloaded package will be either in the Downloads folder or in the present working directory.
$ tar zxvf nagios-4.0.8.tar.gz
$ cd nagios-4.0.8/
$ ./configure --with-command-group=nagcmd
Tip
On Red Hat, the ./configure command might not work and might hang while displaying a message. If this happens, add --enable-redhat-pthread-workaround to the ./configure command as a workaround for the preceding problem.
$ make all
$ sudo make install
$ sudo make install-init
$ sudo make install-config
$ sudo make install-commandmode
After installing Nagios, we need to do a minimal level of configuration. Explore the /usr/local/nagios/etc directory for a few samples. Update /usr/local/nagios/etc/objects/contacts.cfg with the e-mail address at which you want to receive the alerts.
Next, we need to configure the web interface through which we will monitor and manage the services. Install the Nagios web configuration file in the Apache configuration directory using the following command:
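As an illustration, the relevant part of contacts.cfg looks like the following sketch; the contact name matches the default shipped with Nagios, and the e-mail address is a placeholder you should replace:

```
define contact {
    contact_name    nagiosadmin          ; default contact in the sample file
    use             generic-contact      ; inherit the standard contact template
    alias           Nagios Admin
    email           admin@example.com    ; replace with your alert address
}
```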
$ sudo make install-webconf
The preceding command will work only in the extracted directory of Nagios. Make sure that you have extracted Nagios from the TAR file and are in that directory.
Create a nagadm account for logging into the Nagios web interface using the following command:
$ sudo htpasswd -c /usr/local/nagios/etc/htpasswd.users nagadm
Reload Apache and restart Nagios to read the changes, using the following commands:
$ sudo service httpd restart
$ sudo /etc/init.d/nagios restart
Open http://localhost/nagios/ in any browser on your machine.
If you see a message, such as Return code of 127 is out of bounds – plugin may be missing on the right panel, then this means that your configuration is correct as of now. This message indicates that the Nagios plugins are missing, and we will show you how to install these plugins in the next step.
Nagios provides many useful plugins to get us started with monitoring all the basics. We can write our custom checks and integrate them with other plugins, such as check_disk, check_load, and many more. Download the latest stable version of the plugins and then extract them. The following commands help you in extracting and installing the Nagios plugins:
$ tar zxvf nagios-plugins-2.x.x.tar.gz
$ cd nagios-plugins-2.x.x/
$ ./configure --with-nagios-user=nagios --with-nagios-group=nagios
$ make
$ sudo make install
After the installation of the core and the plugin packages, we will be ready to start nagios.
Before starting the Nagios service, make sure that there are no configuration errors by using the following command:
$ sudo /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
Start the nagios service by using the following commands:
$ sudo service nagios start
$ sudo chkconfig --add nagios
$ sudo chkconfig nagios on
There are many configuration files in Nagios; the major ones are located under the /usr/local/nagios/etc directory. The other configuration files, under the /usr/local/nagios/etc/objects directory, define the monitored objects, such as hosts, services, contacts, commands, and time periods.
The nagios.cfg file under /usr/local/nagios/etc/ is the main configuration file, with various directives that define which other files are included; for example, cfg_dir=<directory_name>.
Nagios will recursively process all the configuration files in the subdirectories of the directory that you specify with this directive as follows:
cfg_dir=/usr/local/nagios/etc/commands
cfg_dir=/usr/local/nagios/etc/services
cfg_dir=/usr/local/nagios/etc/hosts
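For instance, a minimal host definition dropped into one of these directories would be picked up automatically on the next reload. The filename, host name, and address below are hypothetical examples:

```
# /usr/local/nagios/etc/hosts/datanode1.cfg (example file)
define host {
    use        linux-server       ; template from the sample object files
    host_name  datanode1
    alias      Hadoop Data Node 1
    address    192.168.0.11
}
```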
The Nagios server can do an active or a passive check. If the Nagios server proactively initiates a check, then it is an active check. Otherwise, it is a passive check.
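A passive result is submitted by writing a formatted line to the Nagios external command file. The following is a minimal sketch, assuming the default command-file path of a source install and a host/service pair named remotehost and Root Partition; adjust all three to your setup:

```shell
#!/bin/sh
# Passive check format:
# [timestamp] PROCESS_SERVICE_CHECK_RESULT;<host>;<service>;<return_code>;<plugin_output>
CMDFILE=/usr/local/nagios/var/rw/nagios.cmd   # default external command pipe
NOW=$(date +%s)                               # epoch timestamp Nagios expects
LINE="[$NOW] PROCESS_SERVICE_CHECK_RESULT;remotehost;Root Partition;0;DISK OK"
# Submit only if the command pipe exists (that is, Nagios is running)
if [ -p "$CMDFILE" ]; then
    echo "$LINE" > "$CMDFILE"
fi
echo "$LINE"
```

A return code of 0 means OK; 1 and 2 map to WARNING and CRITICAL, the same convention active check plugins use.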
The following are the steps for setting up monitoring for clients:
Download the NRPE addon from http://www.nagios.org and then install check_nrpe.
Create a host and a service definition for the host to be monitored by creating a new configuration file, /usr/local/nagios/etc/objects/clusterhosts.cfg, for that particular group of nodes.
Tip
Configuring a disk check
Host definition sample:
define host {
    use             linux-server
    host_name       remotehost
    alias           Remote Host
    address         192.168.0.1
    contact_groups  admins
}
Service definition sample:
define service {
    use                  generic-service
    service_description  Root Partition
    contact_groups       admins
    check_command        check_nrpe!check_disk
}
Communication among the NRPE components works as follows:
The check_nrpe plugin on the Nagios server executes the check on the remote host's NRPE daemon.
The check result is returned to the Nagios server through the NRPE daemon on the remote host.
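On the server side, the check_nrpe command referenced by check_command is typically defined as follows; the plugin path assumes the default install prefix used earlier in this chapter:

```
define command {
    command_name  check_nrpe
    command_line  /usr/local/nagios/libexec/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
}
```

Here $HOSTADDRESS$ is filled in from the host definition, and $ARG1$ is whatever follows the `!` in the service's check_command (check_disk in the sample above).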
On each of the client hosts, perform the following steps:
Install the Nagios Plugins and the NRPE addon, as explained earlier.
Create an account to run nagios from, which can be under any username:
[client]# useradd nagios
[client]# passwd nagios
Install nagios-plugins with the LD flags:
[client]# tar xvfz nagios-plugins-2.x.x.tar.gz; cd nagios-plugins-2.x.x/
[client]# export LDFLAGS=-ldl
[client]# ./configure --with-nagios-user=nagios --with-nagios-group=nagios --enable-redhat-pthread-workaround
[client]# make; make install
Change the ownership of the directories where nagios was installed to the nagios user:
[client]# chown nagios.nagios /usr/local/nagios
[client]# chown -R nagios.nagios /usr/local/nagios/libexec/
Install NRPE and run it as a daemon:
[client]# tar xvfz nrpe-2.x.tar.gz; cd nrpe-2.x
[client]# ./configure
[client]# make all; make install-plugin; make install-daemon; make install-daemon-config; make install-xinetd
Start the service after creating the /etc/xinetd.d/nrpe file with the IP of the server:
[client]# service xinetd restart
Modify the /usr/local/nagios/etc/nrpe.cfg configuration file:
command[check_disk]=/usr/local/nagios/libexec/check_disk -w 20% -c 10% -p /dev/hda1
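To see what the -w 20% and -c 10% thresholds mean, here is a rough shell sketch of the decision the check_disk plugin makes. The real plugin is a compiled C program; this illustration uses df against / rather than /dev/hda1:

```shell
#!/bin/sh
# Read the used-space percentage for / from df, then apply check_disk-style
# thresholds: WARNING below 20% free, CRITICAL below 10% free.
USED_PCT=$(df -P / | awk 'NR==2 { gsub("%", "", $5); print $5 }')
FREE_PCT=$((100 - USED_PCT))
if [ "$FREE_PCT" -lt 10 ]; then
    STATUS=CRITICAL
elif [ "$FREE_PCT" -lt 20 ]; then
    STATUS=WARNING
else
    STATUS=OK
fi
# Plugins print one status line; Nagios also reads the exit code (0/1/2)
echo "DISK $STATUS - ${FREE_PCT}% free on /"
```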
After getting a good insight into Nagios, we are ready to understand its deployment in the Hadoop clusters.
The second tool that we will look into is Ganglia. It is a beautiful tool for aggregating stats and plotting them nicely. Nagios gives us events and alerts; Ganglia aggregates the metrics and presents them in a meaningful way. What if you want to look at the total CPU or memory across a cluster of 2,000 nodes, or the total free disk space on 1,000 nodes? Plotting the CPU and memory for one node is easy, but aggregating them for a group of nodes requires a tool that can do this.
Ganglia is an open source, distributed monitoring platform for collecting metrics across the cluster. It can do aggregation on CPU, memory, disk I/O, and many more components across a group of nodes. There are alternate tools, such as Cacti and Munin, but Ganglia scales very well for large enterprises.
Some of the key features of Ganglia are as follows:
You can view historical and real time metrics of a single node or for an entire cluster
You can use the data to make decisions on the cluster sizing and the performance
We will now discuss some components of Ganglia.
Ganglia Monitoring Daemon (gmond): It runs on the nodes that need to be monitored, captures state changes, and sends updates to a central daemon by using XDR.
Ganglia Meta Daemon (gmetad): It collects data from gmond and the other gmetad daemons. The data is indexed and stored on disk in a round-robin fashion. There is also a Ganglia frontend for a meaningful display of the collected information.
Let's begin by setting up Ganglia, and see what the important parameters that need to be taken care of are. Ganglia can be downloaded from http://ganglia.sourceforge.net/. Perform the following steps to install Ganglia:
Install gmond on the nodes that need to be monitored:
$ sudo apt-get install ganglia-monitor
Configure /etc/ganglia/gmond.conf:
globals {
  daemonize = yes
  setuid = yes
  user = ganglia
  debug_level = 0
  max_udp_msg_len = 1472
  mute = no
  deaf = no
  host_dmax = 0
  cleanup_threshold = 600
  gexec = no
  send_metadata_interval = 0
}
udp_send_channel {
  host = gmetad.cluster1.com
  port = 8649
}
udp_recv_channel {
  port = 8649
}
tcp_accept_channel {
  port = 8649
}
Restart the Ganglia service:
$ service ganglia-monitor restart
Install gmetad on the master node. It can be downloaded from http://ganglia.sourceforge.net/:
$ sudo apt-get install gmetad
Update the gmetad.conf file, which specifies where gmetad collects the data from, along with the data source:
$ vi /etc/ganglia/gmetad.conf
data_source "my cluster" 120 localhost
Update the gmond.conf file on all the nodes so that they point to the master node and use the same cluster name.
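The pieces of /etc/ganglia/gmond.conf that matter for this step are the cluster name and the send channel. The host name below is the example master used earlier; the cluster name must match the data_source name in gmetad.conf:

```
cluster {
  name = "my cluster"          # must match the data_source in gmetad.conf
}
udp_send_channel {
  host = gmetad.cluster1.com   # the master node running gmetad
  port = 8649
}
```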
Logging is an important part of any application or a system, as it tells you about the progress, errors, states of services, security breaches, and repeated user failures, and this helps you in troubleshooting and analyzing these events. The important features about logs are collecting, transporting, storing, alerting, and analyzing the events.
Logs can be generated in many ways. They can be generated either through system facilities, such as syslog or through applications that can directly write their logs. In either case, the collection of the logs must be organized so that they can be easily retrieved when needed.
The logs can be transferred from multiple nodes to a central location, so that instead of parsing logs on hundreds of servers individually, you can maintain them in an easy way by central logging. The size of the logs transferred across the network, and how often we need to transfer them, are also matters of concern.
The storage needs will depend upon the retention policy of the logs, and the cost will also vary according to the storage media or the location of storage, such as cloud storage or local storage.
The logs collected need to be parsed, and alerts should be sent for any errors. The errors need to be detected within a stipulated time frame and remediation should be provided.
Analyzing the logs to identify the traffic patterns of a website is also important. The logs of an Apache web server hosting a website can be analyzed to find out which IPs visited the site, and with which user agents or operating systems. All of this information can be used to target advertisements at various sections of the Internet user base.
Logging on a Linux system is controlled by the syslogd daemon and, more recently, by the rsyslogd daemon. There is one more logger, called klogd, which logs kernel messages.
The syslogd daemon is configured through /etc/syslog.conf, and each line of the file has the format facility.priority log_location.
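A few sample lines in the facility.priority log_location form; the remote host name is a placeholder, and the @ prefix forwards matching messages over the network to a central log host:

```
authpriv.*      /var/log/secure
mail.info       /var/log/mail
kern.crit       @loghost.example.com
*.emerg         *
```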
The logging facilities and priorities are described in the following tables:

Facility | Description
---|---
auth, authpriv | These are the security / authorization messages.
cron | These are the clock daemon messages (cron and at).
kern | These are the kernel messages.
local0 – local7 | These are reserved for local use.
mail | This is the e-mail system.
The table shown here describes the priority:
Priority | Description
---|---
debug | This displays the debugging information.
info | This displays the general informative messages.
warning | This displays the warning messages.
err | This displays an error condition.
crit | This displays the critical condition.
alert | This displays an immediate action that is required.
emerg | This displays that the system is no longer available.
For example, the logging events for an e-mail event can be configured as follows:
mail.* /var/log/mail
This line logs all the e-mail messages to the /var/log/mail file.
Here's another example: restart the logging daemon, and it will start capturing the logs from the various daemons and applications. Use the following command (syslog or rsyslog, depending on which daemon your system runs):
$ sudo service rsyslog restart
This chapter has built the base for monitoring, logging, and log collection. We talked about monitoring concepts and how to set up Nagios and Ganglia. We also discussed how the configuration files are structured and how they can be segregated into various sections for ease of use.
Using this as a baseline, we will move on to understand the Hadoop services, the ports used by Hadoop, and then configure monitoring for them in the upcoming chapters of this book.
In the next chapter, we will deal with the Hadoop daemons and services.