About this video

Processing big data in real time is challenging because of scalability, data consistency, and fault-tolerance requirements. Big Data Processing with Apache Spark teaches you how to use Spark to make your overall analytical workflow faster and more efficient. You'll explore the core concepts and tools of the Spark ecosystem, such as Spark Streaming and its API, the machine learning extension (MLlib), and Structured Streaming.

You'll begin by learning data processing fundamentals using Resilient Distributed Datasets (RDDs) and the SQL, Dataset, and DataFrame APIs. After grasping these fundamentals, you'll move on to using the Spark Streaming APIs to consume data in real time from TCP sockets, and you'll integrate Amazon Web Services (AWS) for stream consumption.

By the end of this course, you'll not only understand how to use the machine learning extension and Structured Streaming, but you'll also be able to apply Spark in your own upcoming big data projects.

Publication date: January 2019
Duration: 3 hours 30 minutes

About the Authors

  • Manuel Ignacio Franco Galeano

    Manuel Ignacio Franco Galeano is a computer scientist from Colombia. He works for Fender Musical Instruments as a lead engineer in Dublin, Ireland. He holds a master's degree in computer science from University College Dublin (UCD). His areas of interest and research are music information retrieval, data analytics, distributed systems, and blockchain technologies.

  • Nimish Narang

    Nimish Narang graduated from UBC in 2016 with a degree in biology and computer science. He has developed mobile apps for Android and iOS since 2015. He has focused on data analysis and machine learning for the past two years and has previously published Keras and Professional Scala with Packt.
