Hands-On Beginner's Guide on Big Data and Hadoop 3 [Video]
Do you struggle to store and handle big data sets? This course will teach you to handle big data sets smoothly using Hadoop 3.
The course starts by covering the basic commands that big data developers use on a daily basis. Then, you'll focus on HDFS architecture and the command-line tools a developer uses frequently. Next, you'll use Flume to import data from other ecosystems into the Hadoop ecosystem, making it available for storage and analysis with MapReduce. You'll also learn to import and export data between an RDBMS and HDFS using Sqoop. Then, you'll learn about Apache Pig, which is used to process the data ingested with Flume and Sqoop; here you'll also learn to load, transform, and store data in Pig relations. Finally, you'll dive into Hive functionality and learn to load, update, and delete content in Hive.
By the end of the course, you'll have gained enough knowledge to work with big data using Hadoop. So grab the course and handle big data sets with ease.
The code bundle for this course is available at https://github.com/PacktPublishing/Hands-On-Beginner-s-Guide-on-Big-Data-and-Hadoop-3-.
Style and Approach
This course takes a practical approach, getting you started with HDFS to store data efficiently, Sqoop to transfer bulk data, and YARN to manage cluster resources efficiently. You'll gain the hands-on knowledge to analyze and process big data sets with MapReduce functions.
|Course Length|3 hours 2 minutes|
|Date of Publication|26 Jul 2018|