Chapter 6. Hadoop Metrics and Visualization Using Ganglia

In this chapter, we will look at Hadoop metrics and the visualization of various components, such as CPU, memory, and disk, by using Ganglia. This chapter builds on the earlier chapters covering monitoring and the installation of Ganglia. Hadoop is a distributed platform with various services running across the cluster, and these services provide many metrics that tap into the Hadoop counters and other functional parameters.

In this chapter, we will look at the metrics for various Hadoop components.

The following topics will be covered in this chapter:

  • Hadoop metrics contexts

  • Metrics collection under DFS context

  • Metrics collection under mapred context

  • Metrics collection under RPC, JVM, and other contexts

  • Visualizing the metrics with Ganglia

Hadoop metrics


In Hadoop, there are many daemons running, such as DataNode, NameNode, and JobTracker, and each of these daemons captures a lot of information about the components it works on. Similarly, in the YARN framework, we have ResourceManager, NodeManager, and ApplicationMaster, each of which exposes metrics; these are explained in the following sections under Metrics2. For example, a DataNode collects metrics such as the number of blocks it holds, which it reports to the NameNode, the number of replicated blocks, and metrics about read/write operations from clients. In addition to this, there can be metrics related to events, and so on. Hence, it is very important to gather these metrics to keep the Hadoop cluster working well and to debug issues if something goes wrong.

Therefore, Hadoop has a metrics system for collecting all this information. There are two versions of the metrics system, Metrics1 and Metrics2, used by Hadoop 1.x and Hadoop 2.x, respectively. The hadoop-metrics.properties and hadoop-metrics2.properties files configure these two systems and are discussed in the following sections.

Metrics contexts


Metrics are more relevant to the maintainers of a Hadoop cluster than to its users. Many users run MapReduce jobs on a cluster; they are concerned with MapReduce counters and not with the metrics, which are daemon specific. MapReduce counters report the number of mappers and reducers, the number of bytes read from or written to HDFS and non-HDFS file systems, how many spills happened, information about the shuffle phase, and so on. However, for Hadoop administrators, the metrics about the daemons are of more concern, as they help in understanding the cluster better.

Named contexts

Each of the daemons has a group of contexts associated with it. Some of the available contexts for Hadoop 1.x and Hadoop 2.x are listed here:

Hadoop 1.x:

  • jvm: for Java Virtual Machine

  • dfs: for Distributed File System

  • mapred: for JobTracker and TaskTracker

  • rpc: for Remote Procedure Calls

Hadoop 2.x:

  • yarn: for the YARN components

  • jvm: for Java Virtual Machine

  • dfs: for Distributed File System

Metrics system design


Hadoop provides a framework to collect internal events and metrics and report them to an external system. The external system could be as simple as a file, or a tool such as Ganglia. The new Hadoop Metrics2 framework has been revamped to integrate better with Ganglia.

The best features of the framework are its pluggable output plugins and the ability to reconfigure it without the need to restart the daemons.

The metrics framework has three main parts (a short configuration sketch mapping them onto property keys follows this list):

  • Producers: These are the sources of the metrics; they generate the metrics that flow into the framework

  • Consumers: These are the sinks of the framework; they consume the metrics generated by the producers

  • Pollers: These poll the sources at a configured period and deliver the data to the sinks, or consumers
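
In configuration terms, the producers are the sources, the consumers are the sinks, and the polling interval is the period option. A minimal sketch of how these roles map onto keys in hadoop-metrics2.properties (the file name namenode-metrics.out is only an illustration):

# syntax: [prefix].[source|sink].[instance].[option]
# the FileSink consumer (sink) for the NameNode daemon
namenode.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
namenode.sink.file.filename=namenode-metrics.out
# poll the NameNode sources every 10 seconds
namenode.period=10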

Metrics configuration


The Hadoop daemons expose runtime metrics, which can be collected using plugins. The old Metrics1 system has been replaced by the new Metrics2 system, which supports the following:

  • Metrics collection using multiple plugins

  • Better integration with JMX

  • Better filters for cutting out noise

Before configuring metrics, it is important to understand which metrics and servlets are supported by each Hadoop version. For example, the servlet at /metrics works only with Metrics1 and the new servlet at /jmx works with both Metrics1 and Metrics2.
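
A quick way to see what a daemon currently exposes is to query its /jmx servlet with a browser or curl; the optional qry parameter narrows the output to a single MBean. The host and the bean name here are only examples:

http://192.168.1.70:50070/jmx
http://192.168.1.70:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem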

We need to configure a source, consumer, and poller for the framework:

  • Source or producer: A metric source class must implement the following interface:

    org.apache.hadoop.metrics2.MetricsSource
    
  • Consumer or sink: A consumer or sink class must implement the following interface (a minimal sketch of both interfaces follows this list):

    org.apache.hadoop.metrics2.MetricsSink
    

    For example, the configuration for JobTracker sink and filter is as follows:

    jobtracker.sink.file.class=org.apache.hadoop.metrics2...
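
To make the two interfaces concrete, here is a minimal, hypothetical sketch of a producer and a consumer written against the Hadoop 2.x Metrics2 API; the class names MyMetricsSource and StdoutSink, the record name, and the gauge are invented purely for illustration:

import org.apache.commons.configuration.SubsetConfiguration;
import org.apache.hadoop.metrics2.AbstractMetric;
import org.apache.hadoop.metrics2.MetricsCollector;
import org.apache.hadoop.metrics2.MetricsRecord;
import org.apache.hadoop.metrics2.MetricsSink;
import org.apache.hadoop.metrics2.MetricsSource;
import org.apache.hadoop.metrics2.lib.Interns;

// A producer: the framework snapshots this source every polling period.
public class MyMetricsSource implements MetricsSource {
  @Override
  public void getMetrics(MetricsCollector collector, boolean all) {
    collector.addRecord("MyRecord")
             .setContext("myapp")
             .addGauge(Interns.info("MyGauge", "An example gauge"), 42);
  }
}

// A consumer: receives the snapshotted records and writes them out.
class StdoutSink implements MetricsSink {
  @Override
  public void init(SubsetConfiguration conf) {
    // read any sink options defined in hadoop-metrics2.properties
  }

  @Override
  public void putMetrics(MetricsRecord record) {
    for (AbstractMetric metric : record.metrics()) {
      System.out.println(record.context() + "." + record.name()
          + "." + metric.name() + " = " + metric.value());
    }
  }

  @Override
  public void flush() {
    // nothing is buffered in this sketch
  }
}

Such a sink can then be wired in from hadoop-metrics2.properties with a line of the form namenode.sink.stdout.class=StdoutSink, where the instance name stdout is arbitrary.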

Configuring Metrics2


For Hadoop version 2, which uses the YARN framework, the metrics can be configured using the hadoop-metrics2.properties file in the Hadoop configuration directory under $HADOOP_HOME (for example, $HADOOP_HOME/etc/hadoop):

*.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
*.period=10
namenode.sink.file.filename=namenode-metrics.out
datanode.sink.file.filename=datanode-metrics.out
jobtracker.sink.file.filename=jobtracker-metrics.out
tasktracker.sink.file.filename=tasktracker-metrics.out
maptask.sink.file.filename=maptask-metrics.out
reducetask.sink.file.filename=reducetask-metrics.out
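
Metrics2 also supports the filters mentioned earlier. The keys follow the same [prefix].[source|sink].[instance].[option] pattern; the following is a hedged sketch using the GlobFilter class shipped with Hadoop, and the last line with its Gc* pattern is only an illustration:

*.source.filter.class=org.apache.hadoop.metrics2.filter.GlobFilter
*.record.filter.class=${*.source.filter.class}
*.metric.filter.class=${*.source.filter.class}
# for example, drop garbage collection metrics from the namenode file sink only
namenode.sink.file.metric.filter.exclude=Gc*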

We can also script the collection; for example, the following stanzas (in the style of a Splunk inputs.conf file) run a collection script against each daemon's /jmx URL at a regular interval:

# namenode
[script://./bin/hadoop_metrics.sh http://192.168.1.70:50070/jmx]
disabled = 0
interval = 10
sourcetype = hadoop_metrics
index = hadoop_metrics

# datanode
[script://./bin/hadoop_metrics.sh http://192.168.1.70:50075/jmx]
disabled = 0
interval = 10
sourcetype = hadoop_metrics
index = hadoop_metrics

# jobtracker
[script://./bin/hadoop_metrics.sh http://192.168.1.70:50030...

Exploring the metrics contexts


So far, we have seen that there are various metrics contexts, such as JVM, DFS, and RPC. Let's explore some examples depicting what each context looks like and what it logs; a sketch of the file-output configuration that produces such records follows this list:

  • JVM context: The JVM context contains stats about JVM memory, threads, heap memory, and so on:

    jvm.metrics: hostName=dn1.cluster1.com, processName=DataNode, sessionId=,logError=0,logFatal=0,logInfo=159,logWarn=0, memHeapCommittedM=9.4,memHeapUsedM=12.63,memNonHeapCommittedM=28.75,memNonHeapUsedM=19.7356,threadsBlocked=0, threadsNew=0, threadsRunnable=3, threadsTerminated=0, threadsTimedWaiting=2, threadsWaiting=1
  • DFS context: The DFS context reports NameNode statistics such as the total number of files, capacity, blocks, and so on:

    dfs.FSNamesystem: hostName=nn1.cluster1.com, sessionId=, BlocksTotal=440, CapacityRemainingGB=100, CapacityTotalGB=254, CapacityUsedGB=0, FilesTotal=160, PendingReplicationBlocks=0, ScheduledReplicationBlocks=0, TotalLoad=1, UnderReplicatedBlocks=20
  • Mapred context...
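
Records such as the preceding ones are produced when a context is pointed at a file. A minimal Metrics1 sketch, assuming the FileContext output class and illustrative file paths:

# hadoop-metrics.properties (Metrics1)
jvm.class=org.apache.hadoop.metrics.file.FileContext
jvm.period=10
jvm.fileName=/tmp/jvm-metrics.log

dfs.class=org.apache.hadoop.metrics.file.FileContext
dfs.period=10
dfs.fileName=/tmp/dfs-metrics.log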

Hadoop Ganglia integration


Ganglia is an enterprise-grade metrics collection and visualization tool that works very well with Nagios and Hadoop. In addition to the basic statistics about CPU, memory, and disk, a Hadoop cluster needs more finely grained metrics, which the Hadoop metrics framework can provide to Ganglia.

Until now, we have seen that metrics can be collected to a file or sent to another tool, such as Splunk, depending upon the class configured. We can configure which class handles the metrics updates.

For Ganglia, we use GangliaContext, which is an implementation of MetricsContext. Ganglia versions higher than 3.0 provide this integration and work very well for collecting the Hadoop metrics.

In Ganglia, the metrics can be collected for NameNode, JobTracker, MapReduce tasks, JVM, RPC, DataNodes, and the new YARN framework.

Hadoop metrics configuration for Ganglia

Firstly, we need to define a sink class, as per Ganglia version 3.1:

*.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31

Secondly...

Hadoop configuration


Now, we must set up the Hadoop configuration files to point to the Ganglia servers.

Metrics1

Update the hadoop-metrics.properties file with the following lines:

dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
dfs.period=10
dfs.servers=192.168.1.10:8649

mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
mapred.period=10
mapred.servers=192.168.1.10:8649

jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
jvm.period=10
jvm.servers=192.168.1.10:8649

rpc.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
rpc.period=10
rpc.servers=192.168.1.10:8649

ugi.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
ugi.period=10
ugi.servers=192.168.1.10:8649
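
The servers value (192.168.1.10:8649 in these examples) must match the address and port on which a gmond daemon is listening, as configured in the earlier Ganglia chapters. A minimal reminder sketch of the relevant gmond.conf channels, assuming a unicast setup:

udp_send_channel {
  host = 192.168.1.10
  port = 8649
}

udp_recv_channel {
  port = 8649
}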

Metrics2

Update the hadoop-metrics2.properties file with the following lines:

namenode.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
namenode.sink.ganglia.period=30
namenode.sink.ganglia.servers=192.168.1.10:8649

datanode.sink.ganglia.class=org.apache.hadoop.metrics2...

Ganglia graphs


Once the configuration is in place and the services have started, we can see the metrics being collected and plotted by using the Ganglia web interface. The screenshots in this section show the Ganglia web UI with cluster-level and per-host graphs for the collected Hadoop metrics.

Metrics APIs


For reporting metrics, Hadoop has packages that provide APIs for both Metrics1 and Metrics2. These provide the flexibility to use client libraries and different modules from within an application.

The org.apache.hadoop.metrics package

This package provides the following sub-packages:

  • org.apache.hadoop.metrics.spi: The abstract Service Provider Interface package. Those wishing to integrate the metrics API with a particular metrics client library should extend this package.

  • org.apache.hadoop.metrics.file: An implementation package that writes the metric data to a file or sends it to the standard output stream.

  • org.apache.hadoop.metrics.ganglia: An implementation package that sends the metrics data to Ganglia.

The new Metrics2 provides a lot more packages for the implementation.

The org.apache.hadoop.metrics2 package

  • org.apache.hadoop.metrics2.annotation: This package contains the public annotation interfaces for simpler metrics instrumentation (a short example follows this list).

  • org.apache.hadoop.metrics2.filter: This is the built...
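
As an illustration of the annotation package mentioned above, here is a small, hypothetical source written with the Metrics2 annotations; the class MyAppMetrics, its counter, and the registered names are invented for the example:

import org.apache.hadoop.metrics2.MetricsSystem;
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

// The annotations let the framework build the MetricsSource for us.
@Metrics(about = "Example application metrics", context = "myapp")
public class MyAppMetrics {

  @Metric("Number of requests served")
  MutableCounterLong requestsServed;

  public static MyAppMetrics create() {
    MetricsSystem ms = DefaultMetricsSystem.instance();
    // register() instantiates the annotated mutable metrics and adds the source
    return ms.register("MyAppMetrics", "Example metrics source", new MyAppMetrics());
  }

  public void incrRequests() {
    requestsServed.incr();
  }
}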

Summary


In this chapter, we looked at how to collect metrics, the different metrics contexts and their groups, and the package APIs used to integrate with Ganglia for graphing the metrics. In the next chapter, we will look at monitoring some of the other components of Hadoop, such as Hive and HBase, along with some performance improvement and tuning tips.

