Apache Spark Fundamentals [video]

More Information
  • History of Apache Spark and the introduction of Spark components
  • Learn how to get started with Apache Spark
  • Introduction to Apache Hadoop, its processes and components – HDFS, YARN, and MapReduce
  • Introduction to the Scala programming language and its fundamentals, such as classes, objects, and collections
  • Apache Spark programming fundamentals and Resilient Distributed Datasets (RDD)
  • See which operations perform transformations and actions on RDDs
  • Find out how to load and save data in Spark
  • Write a Spark application in Scala and execute it on a Hadoop cluster
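As a taste of the RDD topics in the outline above, here is a minimal, hypothetical word-count sketch in Scala that contrasts transformations (lazy) with actions (eager). It is not taken from the course itself; it assumes the spark-core dependency is on the classpath and runs in local mode (on a real Hadoop cluster, the master would instead be set by spark-submit):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    // Local mode with all available cores; a cluster deployment would set the master via spark-submit
    val conf = new SparkConf().setAppName("WordCount").setMaster("local[*]")
    val sc   = new SparkContext(conf)

    val lines = sc.parallelize(Seq("spark makes big data simple", "spark runs on hadoop"))

    // Transformations (flatMap, map, reduceByKey) are lazy: they only record a lineage graph
    val counts = lines
      .flatMap(_.split(" "))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    // collect() is an action: it triggers execution and returns results to the driver
    counts.collect().foreach { case (word, n) => println(s"$word: $n") }

    sc.stop()
  }
}
```

Replacing `collect()` with `saveAsTextFile(path)` would write the result to HDFS instead of the driver, which is the pattern used when the output is too large to fit in driver memory.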

This video is a comprehensive tutorial to help you learn all the fundamentals of Apache Spark, one of the trending big data processing frameworks on the market today. We will introduce you to the various components of the Spark framework to efficiently process, analyze, and visualize data.

You will also get a brief introduction to Apache Hadoop and the Scala programming language before you start writing Spark programs. You will learn Apache Spark programming fundamentals such as Resilient Distributed Datasets (RDDs), and see which operations perform transformations and actions on them. We'll show you how to load and save data from various sources, such as different file formats, NoSQL stores, and RDBMS databases. We'll also explain advanced Spark programming concepts, such as working with key-value pairs and accumulators. Finally, you'll discover how to create an effective Spark application and execute it on a Hadoop cluster to analyze data and gain insights that inform business decisions.
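The pair-RDD and accumulator concepts mentioned above can be sketched as follows. This is an illustrative example of my own, not course material; it again assumes spark-core on the classpath and a local master:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object PairsAndAccumulators {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("PairsDemo").setMaster("local[*]"))

    // An accumulator is a shared counter: executors add to it, the driver reads it
    val badRecords = sc.longAccumulator("badRecords")

    val raw = sc.parallelize(Seq("alice,3", "bob,5", "corrupt-line", "alice,2"))

    // Parse lines into key-value pairs, counting unparseable lines via the accumulator
    val pairs = raw.flatMap { line =>
      line.split(",") match {
        case Array(name, value) => Some((name, value.toInt))
        case _                  => badRecords.add(1); None
      }
    }

    // reduceByKey is available only on pair RDDs: it sums the values per key
    val totals = pairs.reduceByKey(_ + _).collect().toMap
    println(s"totals = $totals, bad records = ${badRecords.value}")

    sc.stop()
  }
}
```

Note that the accumulator value is only reliable after an action such as `collect()` has forced the lazy transformations to run.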

By the end of this video, you will be well-versed in the fundamentals of Apache Spark and able to apply them in your own Spark applications.

Style and Approach

Filled with examples, this course will help you learn the fundamentals of Apache Spark and get started with it quickly. You will learn to build Spark applications and execute them on a Hadoop cluster.

  • Leverage the power of Apache Spark to perform efficient data processing and analytics on your data in real-time
  • Process and analyze streams of data with ease and perform machine learning efficiently
  • A comprehensive tutorial to help you get the most out of the trending Big Data framework for all your data processing needs
Course Length 2 hours 18 minutes
ISBN 9781787283862
Date Of Publication 29 Jun 2017


Nishant Garg

Nishant Garg has over 17 years' software architecture and development experience in various technologies, such as Java Enterprise Edition, SOA, Spring, Hadoop, Hive, Flume, Sqoop, Oozie, Spark, Shark, YARN, Impala, Kafka, Storm, Solr/Lucene, NoSQL databases (such as HBase, Cassandra, and MongoDB), and MPP databases (such as GreenPlum). He received his MS in software systems from the Birla Institute of Technology and Science, Pilani, India, and is currently working as a technical architect for the Big Data R&D Group with Impetus Infotech Pvt. Ltd. Previously, Nishant has enjoyed working with some of the most recognizable names in IT services and financial industries, employing full software life cycle methodologies such as Agile and SCRUM. Nishant has also undertaken many speaking engagements on big data technologies and is also the author of Apache Kafka and HBase Essentials, Packt Publishing.