
Apache Oozie Essentials

Learn
  • Install and configure Oozie from source code on your Hadoop cluster
  • Dive into the world of Oozie with Java MapReduce jobs
  • Schedule Hive ETL and data ingestion jobs
  • Import data from a database into HDFS through Sqoop jobs
  • Create and process data pipelines with Pig and Hive scripts as per business requirements
  • Run machine learning Spark jobs on Hadoop
  • Create quick Oozie jobs using Hue
  • Make the most of Oozie by configuring its security capabilities
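To give a flavour of what these workflows look like, here is a minimal sketch of an Oozie workflow definition that runs a Hive ETL script. It is illustrative only: the application name, script name, and property names are placeholders, not examples from the book.

```xml
<!-- Minimal Oozie workflow sketch: run one Hive script, then end.
     Names such as "etl-wf" and "daily_etl.hql" are hypothetical. -->
<workflow-app name="etl-wf" xmlns="uri:oozie:workflow:0.5">
    <start to="hive-etl"/>
    <action name="hive-etl">
        <hive xmlns="uri:oozie:hive-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <script>daily_etl.hql</script>
        </hive>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Hive ETL failed: ${wf:errorMessage(wf:lastErrorNode())}</message>
    </kill>
    <end name="end"/>
</workflow-app>
```

Actions for MapReduce, Pig, Sqoop, and Spark follow the same pattern, each with its own action element and XML namespace.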
About

As more and more organizations discover the value of big data analytics, interest in platforms that provide storage, computation, and analytic capabilities is booming. Hadoop caters to the resulting need for large-scale data management, and Oozie fills the need for a Hadoop job scheduler, acting much like cron to orchestrate the jobs that analyze your data.

Apache Oozie Essentials starts with the basics, from installing and configuring Oozie from source code on your Hadoop cluster to managing complex clusters. You will learn how to create data ingestion and machine learning workflows.

This book is sprinkled with examples and exercises to help you take your big data learning to the next level. You will discover how to write workflows to run your MapReduce, Pig, Hive, and Sqoop scripts and schedule them to run at a specific time, or for a specific business requirement, using a coordinator. The book has engaging real-life exercises and examples to get you in the thick of things. Lastly, you'll get a grasp of how to embed Spark jobs, which can be used to run your machine learning models on Hadoop.

By the end of the book, you will have a good knowledge of Apache Oozie. You will be capable of using Oozie to handle large Hadoop workflows and even improve the availability of your Hadoop environment.
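The scheduling side mentioned above is handled by a coordinator application. As a hedged sketch of the idea (the name, dates, and paths below are placeholders, not taken from the book), a coordinator that runs a workflow once a day might look like this:

```xml
<!-- Coordinator sketch: trigger a workflow application daily.
     "daily-etl-coord", the date window, and the app-path are hypothetical. -->
<coordinator-app name="daily-etl-coord" frequency="${coord:days(1)}"
                 start="2015-12-01T00:00Z" end="2016-12-01T00:00Z"
                 timezone="UTC" xmlns="uri:oozie:coordinator:0.4">
    <action>
        <workflow>
            <app-path>${nameNode}/user/${wf_user}/apps/etl-wf</app-path>
        </workflow>
    </action>
</coordinator-app>
```

Coordinators can also be triggered by data availability rather than time alone, which is how data ingestion pipelines typically wait for their input datasets.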

Features
  • Teaches you everything you need to know to get started with Apache Oozie from scratch and manage your data pipelines effortlessly
  • Learn to write data ingestion workflows with the help of real-life examples from the author's personal experience
  • Embed Spark jobs to run your machine learning models on top of Hadoop
Page Count 164
Course Length 4 hours 55 minutes
ISBN 9781785880384
Date of Publication 11 Dec 2015

Authors

Jagat Jasjit Singh

Jagat Jasjit Singh works for one of the largest telecom companies in Melbourne, Australia, as a big data architect. He has over 10 years of total experience and has been working with the Hadoop ecosystem for more than 5 years. He is skilled in Hadoop, Spark, Oozie, Hive, Pig, Scala, machine learning, HBase, Falcon, Kafka, GraphX, Flume, Knox, Sqoop, Mesos, Marathon, Chronos, OpenStack, and Java, and has experience with a variety of Australian and European customer implementations. He actively writes on big data and IoT technologies on his personal blog (http://jugnu.life). Jugnu (a Punjabi word) is a firefly that glows at night and illuminates the world with its tiny light. Jagat believes in this same philosophy of sharing knowledge to make the world a better place. You can connect with him on LinkedIn at https://au.linkedin.com/in/jagatsingh.