Deep Dive into the Hadoop Distributed File System

In the previous chapter, we saw how you can set up a Hadoop cluster in different modes, including standalone mode, pseudo-distributed cluster mode, and full cluster mode. We also covered some aspects of debugging clusters. In this chapter, we will take a deep dive into Hadoop's Distributed File System. The Apache Hadoop release comes with its own HDFS (Hadoop Distributed File System); however, Hadoop also supports other filesystems, such as the local filesystem, WebHDFS, and the Amazon S3 filesystem. The complete list of supported filesystems can be seen here (https://wiki.apache.org/hadoop/HCFS).

In this section, we will primarily focus on HDFS, and we will cover the following aspects of Hadoop's filesystems:

  • How HDFS works
  • Key features of HDFS
  • Data flow patterns of HDFS
  • Configuration for HDFS
  • Filesystem CLIs
  • Working with data structures...

Technical requirements

You will need the Eclipse development environment and Java 8 installed on your system so that you can run and tweak these examples. If you prefer to use Maven, you will need Maven installed to compile the code. To run the examples, you also need an Apache Hadoop 3.1 setup on a Linux system. Finally, to use the Git repository of this book, you need to install Git.

The code files of this chapter can be found on GitHub:
https://github.com/PacktPublishing/Apache-Hadoop-3-Quick-Start-Guide/tree/master/Chapter3

Check out the following video to see the code in action:

http://bit.ly/2Jq5b8N

How HDFS works

When we set up a Hadoop cluster, Hadoop creates a virtual layer on top of your local filesystem (such as a Windows- or Linux-based filesystem). As you might have noticed, HDFS does not map to any physical filesystem on the operating system; instead, Hadoop offers an abstraction on top of your local filesystem to provide a fault-tolerant distributed filesystem service with HDFS. The overall design and access pattern in HDFS is similar to that of a Linux-based filesystem. The following diagram shows the high-level architecture of HDFS:

We covered the NameNode, Secondary NameNode, and DataNode in Chapter 1, Hadoop 3.0 - Background and Introduction. Each file sent to HDFS is sliced into a number of blocks that need to be distributed across the cluster. The NameNode maintains the registry (or name table) of all of the blocks and the nodes on which they reside; this metadata is persisted in the local filesystem path specified with dfs.namenode.name.dir in hdfs-site...
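
To see this mapping on a live cluster, you can query the effective configuration and inspect how a given file has been sliced into blocks. The following is a minimal sketch; the file path is a hypothetical example:

hrishikesh@base0:/$ hdfs getconf -confKey dfs.namenode.name.dir
hrishikesh@base0:/$ hdfs fsck /user/hrishikesh/data.txt -files -blocks -locations

The first command prints the directory where the NameNode persists its name table; the second reports each block of the given file and the DataNodes holding its replicas.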

Key features of HDFS

In this section, we will go over some of the marquee features of HDFS that offer advantages for Hadoop users. We have already covered some of the features of HDFS, such as erasure coding and high availability, in Chapter 1, Hadoop 3.0 - Background and Introduction, so we will not be covering them again here.

Achieving multi-tenancy in HDFS

HDFS supports multi-tenancy through its Linux-like Access Control Lists (ACLs) on its filesystem. The filesystem-specific commands are covered in the next section. When you are working across multiple tenants, it boils down to controlling access for different users through the HDFS command-line interface, as sketched in the example that follows. So, the HDFS administrator can add tenant spaces to HDFS through its...
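
As a minimal sketch of isolating a tenant with ACLs (the tenant directory and user name are hypothetical, and on some Hadoop versions ACL support must first be enabled via dfs.namenode.acls.enabled in hdfs-site.xml):

hrishikesh@base0:/$ hdfs dfs -mkdir -p /tenants/tenant-a
hrishikesh@base0:/$ hdfs dfs -setfacl -m user:alice:rwx /tenants/tenant-a
hrishikesh@base0:/$ hdfs dfs -getfacl /tenants/tenant-a

The -setfacl command grants the given user read/write/execute access to the tenant directory, and -getfacl displays the resulting ACL entries for verification.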

Data flow patterns of HDFS

In this section, we will look at the different types of data flow patterns in HDFS. HDFS serves as storage for all processed data. The data may arrive with different velocity and variety; it may require extensive processing before it is ready for consumption by an application. Apache Hadoop provides frameworks such as MapReduce and YARN to process the data. We will be covering the data variety and velocity aspect in a later part of this chapter. Let's look at the different data flow patterns that are possible with HDFS.

HDFS as primary storage with cache

HDFS can be used as primary data storage. In fact, in many implementations of Hadoop, that has been the case. The data is usually supplied...
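
One way to realize such a cache with HDFS itself is its centralized cache management facility, which pins frequently read paths into off-heap memory on the DataNodes. The following minimal sketch uses a hypothetical pool name and path:

hrishikesh@base0:/$ hdfs cacheadmin -addPool hot-data
hrishikesh@base0:/$ hdfs cacheadmin -addDirective -path /tenants/tenant-a/lookup -pool hot-data
hrishikesh@base0:/$ hdfs cacheadmin -listDirectives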

HDFS configuration files

Unlike a lot of software, Apache Hadoop provides just a few configuration files that give you the flexibility to configure your Hadoop cluster. Among them are two primary files that influence the overall functioning of HDFS:

  • core-site.xml: This file is primarily used to configure Hadoop I/O; all of the common settings of HDFS and MapReduce go here.
  • hdfs-site.xml: This file is the main file for all HDFS configuration. Anything pertaining to NameNode, SecondaryNameNode, or DataNode can be found here.

The core-site file has more than 315 parameters that can be set; we will look at the different configurations in the administration section. The full list can be seen here (https://hadoop.apache.org/docs/r3.1.0/hadoop-project-dist/hadoop-common/core-default.xml). We will cover some important parameters that you may need for configuration, with a short illustrative snippet after the following table:

Property...
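
As a minimal sketch of how such parameters are set, the following core-site.xml fragment configures fs.defaultFS (the default filesystem URI) and hadoop.tmp.dir (the base for Hadoop's temporary storage); the host name and local path are hypothetical:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://base0:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/var/hadoop/tmp</value>
  </property>
</configuration>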

Hadoop filesystem CLIs

Hadoop provides a command-line shell for its filesystem, which could be HDFS or any other filesystem supported by Hadoop. There are different ways in which the commands can be invoked:

hrishikesh@base0:/$ hadoop fs -<command> <parameter>
hrishikesh@base0:/$ hadoop dfs -<command> <parameter>
hrishikesh@base0:/$ hdfs dfs -<command> <parameter>

Although all of these commands can be used on HDFS, the first command listed is for Hadoop FS, which can be either HDFS or any other filesystem used by Hadoop. The second and third commands are specific to HDFS; however, the second command is deprecated and has been replaced by the third. Most filesystem commands are inspired by Linux shell commands, with minor differences in syntax; the HDFS CLI follows a POSIX-like filesystem interface. A few common commands are sketched below.
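
As a quick illustration of day-to-day usage, the following commands create a directory, upload a local file, list the directory, and read the file back; the paths and file name are hypothetical:

hrishikesh@base0:/$ hdfs dfs -mkdir -p /user/hrishikesh/input
hrishikesh@base0:/$ hdfs dfs -put data.txt /user/hrishikesh/input
hrishikesh@base0:/$ hdfs dfs -ls /user/hrishikesh/input
hrishikesh@base0:/$ hdfs dfs -cat /user/hrishikesh/input/data.txt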

...

Working with data structures in HDFS

When you work with Apache Hadoop, one of the key design decisions you make is identifying the most appropriate data structures for storing your data in HDFS. In general, Apache Hadoop provides different storage formats for any kind of data, be it text data, image data, or any other binary data format. We will be looking at the different data structures supported by HDFS, as well as by other ecosystem components, in this section.

Understanding SequenceFile

Hadoop SequenceFile is one of the most commonly used file formats for HDFS storage. SequenceFile is a binary file format that persists all of the data passed to Hadoop as <key, value> pairs in serialized form, depicted in...
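
To make the format concrete, the following is a minimal Java sketch that writes a few <key, value> pairs to a SequenceFile using the Hadoop 3.x client API; the output path and record contents are hypothetical examples:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SequenceFileWriteDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical output path; on a cluster this resolves against fs.defaultFS
        Path path = new Path("/user/hrishikesh/demo.seq");
        try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                SequenceFile.Writer.file(path),
                SequenceFile.Writer.keyClass(IntWritable.class),
                SequenceFile.Writer.valueClass(Text.class))) {
            for (int i = 0; i < 5; i++) {
                // Each record is appended as a serialized <key, value> pair
                writer.append(new IntWritable(i), new Text("record-" + i));
            }
        }
    }
}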

Summary

In this chapter, we took a deep dive into HDFS. We looked at how HDFS works and at its key features. We examined the different data flow patterns of HDFS, in which HDFS can take on different roles. This was supported by coverage of the various HDFS configuration files and their key attributes. We also looked at the various command-line interface commands for HDFS and the Hadoop shell. Finally, we looked at the data structures used by HDFS, with some examples.

In the next chapter, we will study the creation of a new MapReduce application with Apache Hadoop MapReduce.
