Chapter 10. Cluster Planning

In this chapter, we will cover the following recipes:

  • Disk space calculations

  • Nodes needed in the cluster

  • Memory requirements

  • Sizing the cluster as per SLA

  • Network design

  • Estimating the cost of the Hadoop cluster

  • Hardware and software options

Introduction


In this chapter, we will look at cluster planning and some of the important aspects of cluster utilization.

Although this is a recipe book, it is good to have an understanding of the Hadoop cluster layout, network components, operating system, disk arrangements, and memory. We will try to cover some of the fundamental concepts of cluster planning and a few formulas to estimate the cluster size.

Let's say we are ready with our big data initiative and want to take the plunge into the Hadoop world. The first primary concerns are: what size of cluster do we need? How many nodes, and with what configuration? What will the roadmap be in terms of the software/application stack, and what will be the initial investment? What hardware should we choose, and should we go with the vanilla Apache Hadoop distribution or a vendor-specific one?

There are no straightforward answers to these questions, and no magic formulas. Many times, these decisions are influenced by market statistics or by an organizational...

Disk space calculations


In this recipe, we will calculate the disk storage needed for the Hadoop cluster. Once we know our storage requirement, we can plan the number of nodes in the cluster and narrow down the hardware options.

The intent of this recipe is not to tune performance, but to plan for capacity. Users are encouraged to read Chapter 9, HBase Administration, for more on optimizing the Hadoop cluster.

Getting ready

To step through the recipe in this section, we need a Hadoop cluster set up and running, with at least HDFS configured correctly. It is recommended to complete the first two chapters before starting this recipe.

How to do it...

  1. Connect to the master1.cyrus.com master node in the cluster and switch to the user hadoop.

  2. On the master node, execute the following command:

    $ hdfs dfsadmin -report
    

    This command gives you an understanding of how the storage in the cluster is represented. The total cluster storage is the sum of the storage from each of the...
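As a rough, back-of-the-envelope illustration of the capacity arithmetic (the ingest rate, retention period, and overhead figures below are assumed values for this sketch, not figures from the book), the raw storage requirement can be estimated from the daily ingest, the HDFS replication factor, and a headroom allowance:

    $ DAILY_GB=1500        # assumed daily ingest, in GB
    $ REPL=3               # default HDFS replication factor
    $ DAYS=365             # assumed retention period
    $ RAW_GB=$(( DAILY_GB * REPL * DAYS ))
    $ TOTAL_GB=$(( RAW_GB + RAW_GB / 4 ))   # ~25% headroom for temporary/intermediate data
    $ echo "Estimated capacity: $(( TOTAL_GB / 1024 )) TB"
    Estimated capacity: 2005 TB

With these assumed inputs, the estimate works out to roughly 2 PB, which is the ballpark figure used in the next recipe.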

Nodes needed in the cluster


In this recipe, we will look at the number of nodes needed in the cluster based upon the storage requirements.

From the initial Disk space calculations recipe, we estimated that we need about 2 PB of storage for our cluster. In this recipe, we will estimate the number of nodes required for running a stable Hadoop cluster.

Getting ready

To step through the recipe, the user needs to understand the Hadoop cluster daemons and their roles. It is recommended to have a running cluster with healthy HDFS and at least two Datanodes.

How to do it...

  1. Connect to the master1.cyrus.com master node in the cluster and switch to the user hadoop.

  2. Execute the command as shown here to see the Datanodes available and the disk space on each node:

    $ hdfs dfsadmin -report
  3. From the preceding command, we can tell the storage available per node, but we cannot tell the number of disks that make up that storage. Refer to the following screenshot for details:

  4. Log in to the Datanode dn6.cyrus.com and...
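To illustrate how the node count follows from the storage target (the per-node disk layout below is an assumption for this sketch, not the book's recommendation), divide the required capacity by the usable storage per Datanode:

    $ TOTAL_TB=2048          # target raw capacity (~2 PB) from the previous recipe
    $ DISKS=12               # assumed data disks per Datanode (JBOD)
    $ DISK_TB=4              # assumed size of each disk
    $ USABLE_TB=$(( DISKS * DISK_TB * 3 / 4 ))   # reserve ~25% per node for non-DFS use
    $ echo "Datanodes needed: $(( (TOTAL_TB + USABLE_TB - 1) / USABLE_TB ))"
    Datanodes needed: 57

A few extra nodes should be added on top of this figure to allow for growth and to tolerate node failures without running out of space.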

Memory requirements


In this recipe, we will look at the memory requirements per node in the cluster, with particular attention to the memory on Datanodes as a function of storage.

A large cluster with many Datanodes is of little use if the nodes do not have sufficient memory to serve requests. The Namenode stores the entire metadata in memory and also has to process the block reports sent by the Datanodes in the cluster. The larger the cluster, the larger the block reports, and the more resources the Namenode will require.

The intent of this recipe is not memory tuning, but to give an estimate of the memory required per node.

Getting ready

To complete this recipe, the user must have completed the Disk space calculations recipe and the Nodes needed in the cluster recipe. For better understanding, the user must have a running cluster with HDFS and YARN configured and must have played around with Chapter 1, Hadoop Architecture and Deployment and Chapter 2, Maintaining...
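As a rough illustration of the Namenode side of the estimate (the block size, the capacity figure, and the commonly quoted rule of thumb of roughly 1 GB of heap per million blocks are assumptions for this sketch, not the book's figures), the heap can be related to the number of blocks the cluster will hold:

    $ TOTAL_TB=2048                               # target capacity from the earlier recipes
    $ BLOCK_MB=128                                # default HDFS block size
    $ BLOCKS=$(( TOTAL_TB * 1024 * 1024 / BLOCK_MB ))
    $ echo "Approximate blocks: ${BLOCKS}"
    Approximate blocks: 16777216
    $ echo "Approximate Namenode heap: $(( BLOCKS / 1000000 + 1 )) GB"
    Approximate Namenode heap: 17 GB

Real clusters usually hold many small files, so the actual file and block counts, and therefore the Namenode heap requirement, can be considerably higher than this best-case figure.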

Sizing the cluster as per SLA


In this recipe, we will look at how service-level agreements can impact our decision on sizing the cluster. In an organization, there will be multi-tenant clusters, funded by different business units, each asking for a guarantee of its share.

A good thing about YARN is that multiple users can run different kinds of jobs, such as MapReduce, Hive, Pig, HBase, Spark, and so on. While YARN guarantees the resources needed to start a job, it does not control how the job will finish. Users can still step on each other and impact SLAs.

Getting ready

For this recipe, the users must have completed the Memory requirements and Nodes needed in the cluster recipes. It is good to have a running cluster with HDFS and YARN to run quick commands for reference. It is also good to understand the scheduler recipes covered in Chapter 5, Schedulers.

How to do it...

  1. Connect to the master1.cyrus.com master node and switch to the user hadoop.

  2. Run a teragen and terasort on the cluster using...
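As an illustration of what such a baseline run might look like (the examples JAR path and the data size are assumptions here; the path differs between distributions and versions), teragen writes synthetic rows, terasort sorts them, and the wall-clock times can be compared against the agreed SLA:

    $ JAR=$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar
    $ time hadoop jar $JAR teragen 100000000 /benchmarks/teragen      # 100 million 100-byte rows (~10 GB)
    $ time hadoop jar $JAR terasort /benchmarks/teragen /benchmarks/terasort
    $ hadoop jar $JAR teravalidate /benchmarks/terasort /benchmarks/teravalidate

Running the same job again while other tenants' workloads are active shows how far the shared cluster can be pushed before the SLA is missed.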

Network design


In this recipe, we will look at the network design for the Hadoop cluster and the things to consider when planning it.

Getting ready

Make sure that the user has a running cluster with HDFS and YARN and has at least two nodes in the cluster.

How to do it...

  1. Connect to the master1.cyrus.com Namenode and switch to the user hadoop.

  2. Execute the following commands to check the link speed, live traffic, and protocol statistics:

    $ ethtool eth0
    $ iftop
    $ netstat -s
    
  3. Always have a separate network for Hadoop traffic by using VLANs.

  4. Ensure the DNS resolution works for both forward and reverse lookup.

  5. Run a caching-only DNS server within the Hadoop network so that records are cached for faster resolution.

  6. Consider NIC teaming or bonding for better performance.

  7. Use dedicated core switches and rack top switches.

  8. Consider having static IPs per node in the cluster.

  9. Disable IPv6 on all nodes and use only IPv4 (a minimal sketch follows this list).

  10. Increasing the size of the cluster will mean more connections and more data across nodes...
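For step 9, one common way to disable IPv6 on Linux and make the Hadoop JVMs prefer IPv4 is sketched below; the file locations and the hadoop-env.sh path are assumptions and can vary between distributions:

    $ # run as root on each node
    $ echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
    $ echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
    $ sysctl -p
    $ # make the Hadoop daemons prefer the IPv4 stack
    $ echo 'export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"' >> $HADOOP_HOME/etc/hadoop/hadoop-env.sh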

Estimating the cost of the Hadoop cluster


In this recipe, we will estimate the cost of the Hadoop cluster and see what factors to take into account. The exact figures will vary with the hardware and software choices.

The cost of a Hadoop cluster is a combination of servers, network components, power consumption, man-hours for maintenance, software license costs, and cooling costs.

How to do it...

In this recipe, there is nothing to execute by logging into the cluster; it is more of an estimation governed by the factors mentioned below.

Each server in the Hadoop cluster will fall into one of three broad categories: Master nodes, Datanodes, and Edge nodes.

Costing master nodes: these are intensive on memory and CPU, but need less disk space.

  • Two OS disks in Raid 1 configuration

  • Two disks for logs and two disks for Namenode metadata

  • At least two network cards bonded together, with a minimum of 1 Gbps

  • RAM 128 GB, higher if the HBase master is co-located on a Namenode

  • CPU cores per master...
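Once the per-node specifications are settled, a rough total-cost estimate can be sketched as below; every price, count, and period here is an assumed value for illustration only, not a figure from the book:

    $ DATANODES=60;  DN_COST=6000      # assumed per-Datanode hardware cost (USD)
    $ MASTERS=3;     MN_COST=9000      # assumed per-master cost
    $ EDGE=2;        EDGE_COST=4000    # assumed per-edge-node cost
    $ NETWORK=25000                    # switches, cabling, racks
    $ OPEX_PER_YEAR=40000              # power, cooling, support, admin hours, licenses
    $ CAPEX=$(( DATANODES*DN_COST + MASTERS*MN_COST + EDGE*EDGE_COST + NETWORK ))
    $ echo "CAPEX: \$$CAPEX  3-year TCO: \$$(( CAPEX + 3*OPEX_PER_YEAR ))"
    CAPEX: $420000  3-year TCO: $540000

The same arithmetic can be rerun with quotes from different vendors to compare total cost of ownership rather than just the purchase price.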

Hardware and software options


In this recipe, we will discuss the hardware and software options to take into account when planning the Hadoop cluster.

There are many hardware and software vendors, and the options can be overwhelming, but some important things that must be taken into account are as follows:

  1. Run benchmark tests on hardware from different vendors, for example HP, IBM, or Dell, and compare them for throughput per unit cost (see the sketch after this list).

  2. What is the roadmap for the hardware you choose? How long will the vendor support it?

  3. Every year, new hardware will offer better value per unit of compute. What will the buyback strategy for the old hardware be? Will the vendor take back the old hardware and provide the new hardware at discounted rates?

  4. Does the hardware have tightly coupled components, which could be difficult to replace in isolation?

  5. What software options does the user have in terms of vendors? Should we go with the HDP, Cloudera, or MapR distribution, or use the vanilla Apache Hadoop distribution?

  6. Total cost...
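For the benchmark comparison mentioned in step 1, a minimal sketch using the TestDFSIO job that ships with Hadoop is shown below (the JAR path and file sizes are assumptions and vary by distribution and version); run the same test on each candidate hardware profile and compare the reported throughput against cost:

    $ JAR=$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-*-tests.jar
    $ hadoop jar $JAR TestDFSIO -write -nrFiles 10 -fileSize 1000   # write 10 files of 1000 MB each
    $ hadoop jar $JAR TestDFSIO -read  -nrFiles 10 -fileSize 1000   # read them back
    $ cat TestDFSIO_results.log                                     # throughput and average I/O rate
    $ hadoop jar $JAR TestDFSIO -clean                              # remove the test data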
