
You're reading from Mastering Hadoop 3

Product type: Book
Published in: Feb 2019
Reading level: Expert
Publisher: Packt
ISBN-13: 9781788620444
Edition: 1st
Authors (2):

Chanchal Singh

Chanchal Singh has over half a decade of experience in product development and architecture design. He has worked closely with the leadership teams of various companies, including directors, CTOs, and founding members, to define their technical road maps. He is the founder of and a speaker at the meetup groups Big Data and AI Pune Meetup and Experience Speaks. He is a co-author of the book Building Data Streaming Applications with Apache Kafka. He has a Bachelor's degree in Information Technology from the University of Mumbai and a Master's degree in Computer Application from Amity University. He was also part of the Entrepreneur Cell at IIT Mumbai. His LinkedIn profile can be found under the username Chanchal Singh.

Manish Kumar

Manish Kumar works as Director of Technology and Architecture at VSquare. He has over 13 years' experience in providing technology solutions to complex business problems. He has worked extensively on web application development, IoT, big data, cloud technologies, and blockchain. Manish has co-authored three books: Mastering Hadoop 3 (this book), Artificial Intelligence for Big Data, and Building Streaming Applications with Apache Kafka.

Hadoop logical view


The Hadoop logical view can be divided into multiple sections, which can be read as a logical sequence of steps that starts with ingress/egress and ends at the data storage medium.

The following diagram shows the Hadoop platform logical view:

We will touch upon each of the sections shown in the preceding diagram. When designing any Hadoop application, you should think in terms of these sections and make technological choices according to the use case problems you are trying to solve. Let's look at them one by one:

  • Ingress/egress/processing: Any interaction with the Hadoop platform should be viewed in terms of the following:
    • Ingesting (ingress) data 
    • Reading (egress) data
    • Processing already ingested data

These actions can be automated with tools or custom code, or performed manually by users uploading data to or downloading data from Hadoop. Sometimes, user-triggered actions indirectly result in ingress/egress or in the processing of data.
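
For instance, here is a minimal sketch of programmatic ingress and egress using the HDFS Java client (org.apache.hadoop.fs.FileSystem); the NameNode address and the paths used are placeholder assumptions, not values from this chapter:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsIngressEgress {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Assumed NameNode address; in practice this comes from core-site.xml
            conf.set("fs.defaultFS", "hdfs://namenode:8020");
            FileSystem fs = FileSystem.get(conf);

            // Ingress: upload a local file into HDFS
            fs.copyFromLocalFile(new Path("/tmp/events.log"),
                                 new Path("/data/raw/events.log"));

            // Egress: download the file from HDFS back to the local filesystem
            fs.copyToLocalFile(new Path("/data/raw/events.log"),
                               new Path("/tmp/events-copy.log"));
            fs.close();
        }
    }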

  • Data integration components: For ingress/egress or data processing in Hadoop, you need data integration components. These components are tools, software, or custom code that help integrate the underlying Hadoop data with user views or actions. From the user's perspective, they give end users a unified view of the data in Hadoop across different distributed Hadoop folders, files, and data formats. They also provide end users and applications with an entry point for using or manipulating Hadoop data through different data access interfaces and data processing engines; we will explore both of those terms next. In a nutshell, tools such as Hue and software (libraries) such as Sqoop, Java Hadoop clients, and Hive Beeline clients are examples of data integration components (see the directory-listing sketch after this list).
  • Data access interfaces: Data access interfaces allow you to access underlying Hadoop data using different languages, such as SQL or NoSQL, APIs such as REST and Java APIs, or different data formats, such as search data formats and streams. Sometimes, the interface you use to access data from Hadoop is tightly coupled with an underlying data processing engine. For example, if you're using Spark SQL, it is bound to use the Spark processing engine; similarly, a search interface is bound to use a search engine such as Solr or Elasticsearch (see the Hive JDBC sketch after this list).
  • Data processing engines: Hadoop as a platform provides different processing engines to manipulate underlying data. These processing engines use system resources in different ways and offer completely different SLA guarantees. For example, the MapReduce processing engine is more disk-I/O-bound (keeping RAM usage under control) and is suitable for batch-oriented data processing, while Spark, an in-memory processing engine, is less disk-I/O-bound, depends more on RAM, and is better suited to stream or micro-batch processing (see the word count sketch after this list). You should choose processing engines for your application based on the types of data sources you are dealing with and the SLAs you need to satisfy.
  • Resource management frameworks: Resource management frameworks expose abstract APIs to interact with underlying resource managers for task and job scheduling in Hadoop. They ensure that there is a well-defined set of steps for submitting jobs in Hadoop via designated resource managers such as YARN or Mesos, and they help achieve optimal performance by utilizing the underlying resources systematically. Examples of such frameworks are Tez and Slider. Sometimes, data processing engines use these frameworks to interact with underlying resource managers, and sometimes they have their own custom libraries to do so (see the YarnClient sketch after this list).
  • Task and resource management: Task and resource management has one primary goal: sharing a large cluster of machines across different, simultaneously running applications. There are two major resource managers in Hadoop: YARN and Mesos. Both are built with the same goal, but they use different scheduling and resource allocation mechanisms. For example, YARN runs its workloads as Unix processes, while Mesos is Linux-container-based.
  • Data input/output: The data input/output layer is primarily responsible for the different file formats, compression techniques, and data serialization used for Hadoop storage (see the SequenceFile sketch after this list).
  • Data storage medium: HDFS is the primary data storage medium used in Hadoop. It is a Java-based, high-performance distributed filesystem that stores its data on top of the underlying native (Unix) filesystem. In the next section, we will study Hadoop distributions along with their benefits.
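
To make the preceding sections concrete, the following sketches use the standard Hadoop Java APIs; every hostname, path, and table name in them is a placeholder assumption for illustration. First, a data integration component in miniature: a Java Hadoop client that recursively walks distributed HDFS folders and presents one flat, unified listing of the files underneath them.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.LocatedFileStatus;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.RemoteIterator;

    public class UnifiedListing {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            // Recursively walk /data and flatten the directory tree into one view
            RemoteIterator<LocatedFileStatus> files =
                    fs.listFiles(new Path("/data"), true);
            while (files.hasNext()) {
                LocatedFileStatus status = files.next();
                System.out.printf("%s (%d bytes)%n",
                        status.getPath(), status.getLen());
            }
            fs.close();
        }
    }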
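
Next, the SQL data access interface. Beeline is essentially a JDBC client for HiveServer2, so a minimal Java equivalent looks like the following sketch (the server address and the users table are assumptions):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveSqlAccess {
        public static void main(String[] args) throws Exception {
            // HiveServer2 JDBC endpoint; requires the hive-jdbc driver on the classpath
            String url = "jdbc:hive2://hiveserver:10000/default";
            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                         "SELECT id, name FROM users LIMIT 10")) {
                while (rs.next()) {
                    System.out.println(rs.getInt("id") + "\t" + rs.getString("name"));
                }
            }
        }
    }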
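
For the data processing engines, here is the classic MapReduce word count as a compact sketch of a disk-I/O-bound batch job; intermediate map output is spilled to disk between the map and reduce phases rather than held in RAM:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {
        public static class TokenMapper
                extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();
            @Override
            protected void map(LongWritable key, Text value, Context ctx)
                    throws IOException, InterruptedException {
                // Emit (token, 1) for each word in the input line
                for (String token : value.toString().split("\\s+")) {
                    if (!token.isEmpty()) {
                        word.set(token);
                        ctx.write(word, ONE);
                    }
                }
            }
        }

        public static class SumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) sum += v.get();
                ctx.write(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenMapper.class);
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }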
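
For resource management, the sketch below performs the first step of the YARN job-submission protocol by hand: asking the ResourceManager for a new application ID through the YarnClient API. Frameworks such as Tez wrap this protocol so that you rarely write it yourself.

    import org.apache.hadoop.yarn.api.records.ApplicationId;
    import org.apache.hadoop.yarn.client.api.YarnClient;
    import org.apache.hadoop.yarn.client.api.YarnClientApplication;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public class YarnHello {
        public static void main(String[] args) throws Exception {
            YarnClient yarnClient = YarnClient.createYarnClient();
            yarnClient.init(new YarnConfiguration());
            yarnClient.start();

            // Ask the ResourceManager to allocate a new application id -- the
            // first step of submitting any job to YARN
            YarnClientApplication app = yarnClient.createApplication();
            ApplicationId appId = app.getNewApplicationResponse().getApplicationId();
            System.out.println("Allocated application id: " + appId);

            yarnClient.stop();
        }
    }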
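
Finally, the data input/output layer in one place: writing a block-compressed SequenceFile combines a container file format, a compression codec, and Writable serialization. The output path and the choice of gzip here are assumptions for illustration:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.SequenceFile.CompressionType;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.GzipCodec;
    import org.apache.hadoop.util.ReflectionUtils;

    public class CompressedSequenceFileWrite {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Path path = new Path("/data/serialized/sample.seq");
            CompressionCodec codec =
                    ReflectionUtils.newInstance(GzipCodec.class, conf);
            try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                    SequenceFile.Writer.file(path),
                    SequenceFile.Writer.keyClass(IntWritable.class),
                    SequenceFile.Writer.valueClass(Text.class),
                    // BLOCK compression batches many records per compressed block
                    SequenceFile.Writer.compression(CompressionType.BLOCK, codec))) {
                for (int i = 0; i < 100; i++) {
                    writer.append(new IntWritable(i), new Text("record-" + i));
                }
            }
        }
    }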

 

 
