Learning Apache Spark 2
The Spark juggernaut keeps rolling and gains more momentum every day. Spark provides its key capabilities in the form of Spark SQL, Spark Streaming, Spark ML, and GraphX, all accessible from Java, Scala, Python, and R. Deploying these capabilities correctly is crucial, whether on a standalone cluster or as part of an existing Hadoop installation configured with YARN or Mesos.
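To illustrate the point about deployment modes, here is a minimal Scala sketch (not taken from the course) of creating a SparkSession; the application name and the master URLs in the comments are illustrative assumptions, and in practice only the master URL changes between local, standalone, YARN, and Mesos runs.

```scala
import org.apache.spark.sql.SparkSession

object DeployModes {
  def main(args: Array[String]): Unit = {
    // The same application can target different cluster managers by
    // changing only the master URL (values below are illustrative):
    //   local[*]            - run locally using all available cores
    //   spark://host:7077   - standalone Spark cluster
    //   yarn                - Hadoop YARN (settings come from the Hadoop config)
    //   mesos://host:5050   - Apache Mesos
    val spark = SparkSession.builder()
      .appName("deploy-mode-demo")
      .master("local[*]") // swap for the cluster manager in use
      .getOrCreate()

    println(s"Running Spark ${spark.version}")
    spark.stop()
  }
}
```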
After installation, the next part of the journey is working with the key components: the core APIs, clustering, the machine learning APIs, data pipelines, and parallel programming. It is important to understand why each framework component matters, how widely it is used, how stable it is, and which use cases it suits.
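As a small taste of the DataFrame and Spark SQL components mentioned above, the following sketch reads a CSV file into a DataFrame and queries it with SQL; the file path and column names are invented for illustration and are not taken from the course.

```scala
import org.apache.spark.sql.SparkSession

object SqlQuickStart {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("sql-quickstart")
      .master("local[*]")
      .getOrCreate()

    // Hypothetical input file and schema, used only to illustrate the API.
    val sales = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("data/sales.csv")

    // Register the DataFrame as a temporary view and query it with Spark SQL.
    sales.createOrReplaceTempView("sales")
    spark.sql("SELECT region, SUM(amount) AS total FROM sales GROUP BY region")
      .show()

    spark.stop()
  }
}
```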
Once we understand the individual components, we will work through a couple of real-life advanced analytics examples, such as building a recommendation system and predicting customer churn. The objective of these real-life examples is to give the reader confidence in using Spark for real-world problems.
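To give a concrete feel for what a recommendation example involves, here is a hedged sketch using Spark ML's ALS (Alternating Least Squares) estimator on an in-memory ratings DataFrame. The column names and toy ratings are assumptions made for illustration; they are not the course's dataset or solution.

```scala
import org.apache.spark.ml.recommendation.ALS
import org.apache.spark.sql.SparkSession

object RecommenderSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("als-recommender-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Toy explicit ratings: (userId, itemId, rating). Invented for illustration.
    val ratings = Seq(
      (1, 10, 5.0f), (1, 20, 1.0f),
      (2, 10, 4.0f), (2, 30, 2.0f),
      (3, 20, 5.0f), (3, 30, 4.0f)
    ).toDF("userId", "itemId", "rating")

    // Collaborative filtering with ALS from the spark.ml package.
    val als = new ALS()
      .setMaxIter(5)
      .setRegParam(0.1)
      .setUserCol("userId")
      .setItemCol("itemId")
      .setRatingCol("rating")

    val model = als.fit(ratings)

    // Predict a rating for a (user, item) pair not seen during training.
    model.transform(Seq((1, 30)).toDF("userId", "itemId")).show()

    spark.stop()
  }
}
```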
|Course Length|10 hours 40 minutes|
|Date of Publication|27 Mar 2017|