You're reading from Apache Spark for Data Science Cookbook

Published in: Dec 2016 by Packt Publishing
ISBN-13: 9781785880100
Edition: 1st

Author: Padma Priya Chitturi

Padma Priya Chitturi is Analytics Lead at Fractal Analytics Pvt Ltd and has over five years of experience in Big Data processing. Currently, she is part of capability development at Fractal, responsible for developing solutions to analytical problems across multiple business domains at large scale. Prior to this, she worked on an airline product at Amadeus Software Labs, on a real-time processing platform serving one million user requests per second. She has worked on realizing large-scale deep networks (following Jeffrey Dean's work in Google Brain) for image classification on Spark. She works closely with Big Data technologies such as Spark, Storm, Cassandra, and Hadoop, and has been an open source contributor to Apache Storm.
Building standalone applications


This recipe explains how to develop and build Spark standalone applications using programming languages such as Scala, Java, Python, and R. The sample application in this recipe is written in Scala.

Getting ready

Install an IDE for application development (Eclipse is preferred). Install SBT, a build tool for Scala projects similar to Maven. Create a Scala project, add the necessary library dependencies to the build.sbt file, and import the project into Eclipse.
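As a starting point, a minimal build.sbt for this recipe might look like the following. The versions shown are illustrative and should match your installed Scala and Spark versions (the book's examples target Scala 2.10):

```scala
// build.sbt -- a minimal sketch; version numbers are illustrative
name := "SparkContextExample"

version := "1.0"

scalaVersion := "2.10.4"

// "provided" keeps Spark itself out of the assembly JAR,
// since the cluster supplies it at runtime
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.6.0" % "provided"
```

To use `sbt assembly`, the sbt-assembly plugin must also be declared in project/plugins.sbt.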

How to do it…

  1. Develop a Spark standalone application using the Eclipse IDE as follows:

           import org.apache.spark.SparkContext
           import org.apache.spark.SparkConf

           object SparkContextExample {
             def main(args: Array[String]) {
               val file = "hdfs://namenode:9000/stocks.txt"
               val conf = new SparkConf()
                 .setAppName("Counting Lines")
                 .setMaster("spark://master:7077")
               val sc = new SparkContext(conf)
               // Read the file as an RDD with a minimum of 2 partitions
               val data = sc.textFile(file, 2)
               val totalLines = data.count()
               println("Total number of Lines: %s".format(totalLines))
               sc.stop()
             }
           }
    
  2. Now go to the project directory and build the project with sbt assembly and sbt package manually, or build it using Eclipse:

            cd ~/SparkProject/SparkContextExample
            sbt assembly
            sbt package
    

How it works…

sbt assembly compiles the program and generates a fat JAR named SparkContextExample-assembly-<version>.jar, which bundles the application classes together with their dependencies. sbt package generates a thin JAR, SparkContextExample_2.10-1.0.jar, containing only the application classes. Both JARs are written to ~/SparkProject/SparkContextExample/target/scala-2.10. Submit SparkContextExample-assembly-<version>.jar to the Spark cluster using the spark-submit shell script in the bin directory of SPARK_HOME.
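A submission might look like the following sketch; the master URL, paths, and JAR version are illustrative and must match your own cluster and build output:

```
$SPARK_HOME/bin/spark-submit \
  --class SparkContextExample \
  --master spark://master:7077 \
  ~/SparkProject/SparkContextExample/target/scala-2.10/SparkContextExample-assembly-1.0.jar
```

Since the application sets its own master via setMaster, the --master flag can be omitted; passing it on the command line is generally preferred so the code stays portable across clusters.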

There's more…

We can develop a variety of complex Spark standalone applications to analyze data in various ways. When working with third-party libraries, include the corresponding dependencies in the build.sbt file. Running sbt update downloads them and adds them to the project classpath.
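For example, pulling in a third-party library such as spark-csv (chosen here purely for illustration) is a one-line addition to build.sbt; the group, artifact, and version must match the library you actually use:

```scala
// Illustrative third-party dependency in build.sbt
libraryDependencies += "com.databricks" %% "spark-csv" % "1.5.0"
```

After editing build.sbt, run sbt update so the dependency is resolved before the next build.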

See also

The Apache Spark documentation covers how to build standalone Spark applications. Please refer to this documentation page: https://spark.apache.org/docs/latest/quick-start.html#self-contained-applications.
