Planning and Setting Up Hadoop Clusters

In the last chapter, we looked at big data problems and the history of Hadoop, along with an overview of Hadoop's architecture and its commercial offerings. This chapter focuses on hands-on, practical knowledge of how to set up Hadoop in different configurations. Apache Hadoop can be set up in the following three configurations:

  • Developer mode: Developer mode can be used to run programs in a standalone manner. This arrangement does not require any Hadoop daemon processes, and JARs can be run directly. This mode is useful for developers who wish to debug their MapReduce code.
  • Pseudo cluster (single-node Hadoop): A pseudo cluster is a single-node cluster with capabilities similar to those of a standard cluster; it is also used for the development and testing of programs before they are deployed on a production cluster. Pseudo...

Technical requirements

You will need the Eclipse development environment and Java 8 installed on the system where you will run and tweak these examples. If you prefer to use Maven, you will need it installed to compile the code. To run the examples, you also need Apache Hadoop 3.1 set up on a Linux system. Finally, to use this book's Git repository, you need Git installed.
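
Before moving on, it is worth confirming that each tool is available on your PATH. A quick sanity check (the version strings in the output will vary with your installation; the hadoop@base0 prompt matches the one used later in this chapter) looks like this:

hadoop@base0:/$ java -version
hadoop@base0:/$ mvn -version
hadoop@base0:/$ git --version
hadoop@base0:/$ hadoop version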

The code files of this chapter can be found on GitHub:
https://github.com/PacktPublishing/Apache-Hadoop-3-Quick-Start-Guide/tree/master/Chapter2

Check out the following video to see the code in action:

http://bit.ly/2Jofk5P

Prerequisites for Hadoop setup

In this section, we will look at the necessary prerequisites for setting up Apache Hadoop in cluster or pseudo mode. Often, teams are forced to go through a major reinstallation of Hadoop and a data migration of their clusters due to improper planning of their cluster requirements. Hadoop can be installed on Windows as well as Linux; however, most production Hadoop installations run on Unix- or Linux-based platforms.

Preparing hardware for Hadoop

One important aspect of Hadoop setup is defining the hardware requirements and sizing before the start of a project. Although Apache Hadoop can run on commodity hardware, most of the implementations utilize server-class hardware for their Hadoop...

Running Hadoop in standalone mode

Now that you have successfully unzipped Hadoop, let's try and run a Hadoop program in standalone mode. As we mentioned in the introduction, Hadoop's standalone mode does not require any running daemons; you can run your MapReduce program directly from its compiled JAR. We will look at how you can write MapReduce programs in Chapter 4, Developing MapReduce Applications. For now, it's time to run a program we have already prepared. To download, compile, and run the sample program, simply take the following steps:

Please note that this is not a mandatory requirement for setting up Apache Hadoop. You do not need Maven or a Git repository set up to compile or run Hadoop; we are using them only to run some simple examples.
  1. You will need Maven and Git on your machine to proceed. Apache Maven can be set up with the following command:
  ...
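
The command itself is elided in this excerpt. On a Debian- or Ubuntu-based system (an assumption; other distributions use their own package managers), Maven and Git can typically be installed and the book's examples fetched as follows:

hadoop@base0:/$ sudo apt-get install maven git
hadoop@base0:/$ git clone https://github.com/PacktPublishing/Apache-Hadoop-3-Quick-Start-Guide.git

As a standalone-mode smoke test that needs no daemons at all, you can also run the examples JAR that ships with the Hadoop distribution (the 3.1.0 version suffix is an assumption; match it to your download):

hadoop@base0:/$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.0.jar wordcount input/ output/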

Setting up a pseudo Hadoop cluster

In the last section, we managed to run Hadoop in standalone mode. In this section, we will create a pseudo Hadoop cluster on a single node. So, let's try and set up the HDFS daemons on a system in pseudo-distributed mode. When we set up HDFS in pseudo-distributed mode, we run the NameNode and the DataNode on the same machine, but before we start the HDFS instances, we need to set the configuration files correctly. We will study the different configuration files in the next chapter. First, open core-site.xml with the following command:

hadoop@base0:/$  vim etc/hadoop/core-site.xml

Now, set the default file system URI using the fs.defaultFS property (fs.default.name is its deprecated alias from earlier Hadoop versions). The core site file is responsible for storing all of the configuration related to Hadoop Core. Replace the content of the file with the following snippet:

<configuration...
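
The snippet is truncated in this excerpt. A minimal core-site.xml for a pseudo-distributed setup, assuming HDFS listening on the conventional port 9000 on localhost, would look like the following:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

With the configuration in place, HDFS can then be formatted and started with the standard scripts bundled with Hadoop:

hadoop@base0:/$ bin/hdfs namenode -format
hadoop@base0:/$ sbin/start-dfs.sh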

Planning and sizing clusters

Once you start working on problems and implementing Hadoop clusters, you'll have to deal with the issue of sizing. It's not just the storage and computation capacity of the cluster that needs to be considered, but also the SLAs associated with the Hadoop runtime. A cluster can be categorized based on its workload as follows:

  • Lightweight: This category is intended for low computation and storage requirements, and is most useful for well-defined datasets with no growth
  • Balanced: A balanced cluster has storage and computation requirements that grow over time
  • Storage-centric: This category is focused more on storing data and less on computation; it is mostly used for archival purposes, as well as minimal processing
  • Computation-centric: This cluster is intended for CPU- or GPU-intensive work, such as analytics, prediction...
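
To make the sizing exercise concrete, here is a back-of-the-envelope storage calculation (all input figures are illustrative assumptions, not recommendations). Suppose a balanced cluster ingests 10 TB of new data per month, HDFS runs with its default replication factor of 3, and you reserve roughly 25% headroom for intermediate and temporary files. The first year then needs about 10 × 12 × 3 × 1.25 = 450 TB of raw disk; at 8 × 4 TB disks per DataNode, that works out to 450 / 32 ≈ 15 worker nodes.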

Setting up Hadoop in cluster mode

In this section, we will focus on setting up a Hadoop cluster. We will also go over other important aspects of a Hadoop cluster, such as sizing guidelines, setup instructions, and so on. A Hadoop cluster can be set up with Apache Ambari, which offers a much simpler, semi-automated, and less error-prone configuration of a cluster. However, at the time of writing, the latest version of Ambari supports only older Hadoop versions, so to set up Hadoop 3.1 we must do so manually. By the time this book is out, you may be able to use a much simpler installation process. You can read about installing older Hadoop versions in the Ambari installation guide.
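
As a preview of the manual route, the main structural difference from the pseudo-distributed setup is that the worker hosts are listed in etc/hadoop/workers, one hostname per line, and the same configuration files are copied to every node. A sketch (the hostnames are placeholders) looks like this:

hadoop@base0:/$ cat etc/hadoop/workers
node1.cluster.local
node2.cluster.local
node3.cluster.local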

Before you set up a Hadoop cluster, it would be good to check the sizing of a cluster so that you can plan better, and avoid reinstallation due to incorrectly estimated cluster size. Please refer to the...

Diagnosing the Hadoop cluster

As you get deeper into configuration and analysis, you will start facing new issues as you progress. These might include exceptions coming from programs, failing nodes, or even random errors. In this section, we will cover how they can be identified and addressed. Note that we will look at debugging MapReduce programs in Chapter 4, Developing MapReduce Applications; this section is more focused on debugging issues pertaining to the Hadoop cluster.

Working with log files

Logging in Hadoop uses a rolling-file mechanism: when a log file reaches its size limit, it is rotated out on a first-in, first-out basis. There are different types of log files intended for developers, administrators, and other users. You can find out the location of these log...
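
The listing is truncated in this excerpt. By default, the daemon logs are written under $HADOOP_HOME/logs (this can be overridden through the HADOOP_LOG_DIR variable in etc/hadoop/hadoop-env.sh), and each file follows the hadoop-<user>-<daemon>-<host>.log naming pattern. A quick way to inspect, say, the NameNode log on our example machine would be:

hadoop@base0:/$ ls $HADOOP_HOME/logs
hadoop@base0:/$ tail -f $HADOOP_HOME/logs/hadoop-hadoop-namenode-base0.log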

Summary

In this chapter, we covered the installation and setup of Apache Hadoop. We started with the prerequisites for setting up a Hadoop cluster. We then went through the different Hadoop configurations available to users, covering developer mode, the pseudo-distributed single-node setup, and the cluster setup. We learned how each of these configurations can be set up, and we ran an example application in each of them. Finally, we covered how to diagnose a Hadoop cluster by understanding its log files and the different debugging tools available. In the next chapter, we will start looking at the Hadoop Distributed File System in detail.
