
Chapter 3. Let Us Spark

This chapter provides instructions so that the reader becomes familiar with installing Apache Spark in standalone mode, along with its dependencies. We will then start our first interaction with Apache Spark through a couple of hands-on exercises using the Spark CLI, also known as the REPL.

We will move on to discuss the Spark components and the common terminology associated with Spark, and then finally discuss the life cycle of a Spark job in a clustered environment. We will also explore the execution of Spark jobs graphically, from the creation of the DAG to the execution of the smallest unit of tasks, using the utilities provided in the Spark web UI.

Finally, we will conclude the chapter by discussing the different methods of Spark job configuration and submission using the spark-submit tool and the REST APIs.

Getting started with Spark


In this section, we will run Apache Spark in local or standalone mode. First, we will set up Scala, which is a prerequisite for Apache Spark. After the Scala setup, we will set up and run Apache Spark and perform some basic operations on it. So let's start.

Since Apache Spark is written in Scala, it needs Scala to be set up on the system. You can download Scala from http://www.scala-lang.org/download/ (we will set up Scala 2.11.8 in the following examples).

Once Scala is downloaded, we can set it up on a Linux system by extracting the archive to a suitable location, for example /usr/local/scala-2.11.8.

Also, it is recommended to set the SCALA_HOME environment variable and add Scala binaries to the PATH variable. You can set it in the .bashrc file or /etc/environment file as follows:

export SCALA_HOME=/usr/local/scala-2.11.8
export PATH=$PATH:/usr/local/scala-2.11.8/bin
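Once the PATH is updated (and the shell has been re-sourced), a quick way to confirm that Scala is set up correctly is to start the Scala REPL by typing scala and check the version it reports. A minimal check, assuming the 2.11.8 build installed above, looks like this:

scala> scala.util.Properties.versionString
res0: String = version 2.11.8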


Now that we have set up the Scala environment successfully, it is time to download Apache Spark. You can download...

Spark REPL also known as CLI


In Chapter 1, Introduction to Spark, we learnt that one of the advantages of Apache Spark over the MapReduce framework is interactive processing. Apache Spark achieves this using the Spark REPL.

The Spark REPL or Spark shell, also known as the Spark CLI, is a very useful tool for exploring Spark programming. REPL is an acronym for Read-Evaluate-Print Loop. It is an interactive shell used by programmers to interact with a framework. Apache Spark also comes with a REPL that beginners can use to understand the Spark programming model.

To launch the Spark REPL, we will execute the command that we executed in the previous section:

$SPARK_HOME/bin/spark-shell

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel).
16/11/01 16:38:43 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/11...
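As the startup banner above suggests, the logging verbosity can be reduced from within the shell; for example, to see only errors:

scala> sc.setLogLevel("ERROR")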

Some basic exercises using Spark shell


Note that the Spark shell is available only in Scala. However, we have kept the examples easy for Java developers to understand.

Checking Spark version

Execute the following command to check the Spark version using spark-shell:

scala> sc.version
res0: String = 2.1.1

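In Spark 2.x, the shell also creates a SparkSession named spark alongside sc, so the same check can be performed on it (the output below assumes the 2.1.1 build used above):

scala> spark.version
res1: String = 2.1.1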

Creating and filtering RDD

Let's start by creating an RDD of strings:

scala> val stringRdd = sc.parallelize(Array("Java","Scala","Python","Ruby","JavaScript","Java"))
stringRdd: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[0] at parallelize at <console>:24

Now, we will filter this RDD to keep only those strings that start with the letter J:

scala> val filteredRdd = stringRdd.filter(s => s.startsWith("J"))
filteredRdd: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[2] at filter at <console>:26

In the first chapter, we learnt that if an operation on an RDD returns an RDD, it is a transformation; otherwise, it is an action.

The...
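To make the distinction concrete, collect() is an action: it does not return an RDD, but instead brings the filtered elements back to the driver. Continuing with the filteredRdd created above (the res counter will depend on what you have run so far in the session):

scala> filteredRdd.collect()
res2: Array[String] = Array(Java, JavaScript, Java)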

Spark components


Before moving any further, let's first understand the common terminology associated with Spark:

  • Driver: This is the main program that oversees the end-to-end execution of a Spark job or program. It negotiates resources with the cluster's resource manager in order to delegate and orchestrate the program into the smallest possible data-local parallel units of execution.
  • Executors: In any Spark job, there can be one or more executors, that is, processes that execute the smaller tasks delegated by the driver. The executors process the data, preferably local to the node, and store the results in memory, disk, or both.
  • Master: Apache Spark follows a master-slave architecture, and hence the master refers to the cluster node executing the driver program.
  • Slave: In a distributed cluster mode, slave refers to the nodes on which the executors run, and hence there can be (and usually are) more than one slave in the cluster.
  • Job: This is a collection of operations performed on any set of...
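To relate these terms to code, the following sketch (not from the book; the object name WordLengths is purely illustrative) shows a minimal driver program. The process running the main method is the driver, the map transformation is broken into tasks that run on the executors, and collect() brings the results back to the driver:

import org.apache.spark.{SparkConf, SparkContext}

object WordLengths {
  def main(args: Array[String]): Unit = {
    // the process running this main method is the driver
    val sc = new SparkContext(new SparkConf().setAppName("word-lengths").setMaster("local[2]"))
    // the map transformation is split into tasks executed on the executors
    val lengths = sc.parallelize(Seq("Java", "Scala", "Python")).map(_.length)
    // collect is an action: the results are brought back to the driver
    println(lengths.collect().mkString(", "))
    sc.stop()
  }
}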

Spark Driver Web UI


This section covers some important aspects of the Spark driver's UI. We will look at the statistics of the jobs we executed using the Spark shell on the Spark UI.

As described in the Getting started with Spark section, the Spark driver's UI runs at http://localhost:4040/ (unless you make any changes to the default settings).

When you start the Spark shell, you can open this URL in a browser to view the Spark driver's UI.

SparkContext is the entry point to every Spark application. Every Spark application is launched with a SparkContext and can contain only one SparkContext.

The Spark shell, being a Spark application, starts with a SparkContext, and every SparkContext launches its own web UI. The default port is 4040. The Spark UI can be enabled or disabled, or launched on a separate port, using the following properties:

Property            Default value
spark.ui.enabled    true
spark.ui.port       4040

For example, a Spark shell application with the Spark UI running on port 5050 can be launched as follows:

spark-shell --conf spark.ui.port=5050
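From inside a shell launched this way, the address the UI actually bound to can be checked on the SparkContext; the host name in the output will depend on your machine:

scala> sc.uiWebUrl
res0: Option[String] = Some(http://<driver-host>:5050)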

If multiple Spark applications...

Spark job configuration and submission


When a Spark job is launched, it creates a SparkConf object and passes it to the constructor of SparkContext. The SparkConf object contains a near-exhaustive list of customizable parameters that can tune a Spark job as per the cluster resources. The SparkConf object becomes immutable once it is passed to the SparkContext constructor, so it is important not only to identify, but also to set, all the required SparkConf parameters before creating the SparkContext object.
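As a sketch of this ordering constraint (the application name, master, and port below are illustrative values only), all SparkConf settings go in before the SparkContext is constructed:

import org.apache.spark.{SparkConf, SparkContext}

// set everything on the SparkConf first ...
val conf = new SparkConf()
  .setAppName("conf-demo")
  .setMaster("local[2]")
  .set("spark.ui.port", "5050")

// ... because the configuration is effectively frozen once the SparkContext is created
val sc = new SparkContext(conf)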

There are different ways in which a Spark job can be configured.

Spark's conf directory provides the default configurations used to execute a Spark job. The SPARK_CONF_DIR parameter can be used to override the default location of the conf directory, which is usually SPARK_HOME/conf. Some of the configuration files expected in this folder are spark-defaults.conf, spark-env.sh, and log4j.properties. Log4j is used by Spark as its logging mechanism and can be configured by modifying the log4j...

Spark REST APIs


Representational state transfer (REST) architecture is used very often while developing web services. These days, many frameworks, such as Hadoop, Apache Storm, and so on, provide RESTful web services that help users interact with or monitor the framework. Apache Spark, being in the same league, also provides REST web services that can be used to monitor Spark applications. In this section, we will learn about the REST APIs provided by Apache Spark.

The response type of the Spark REST APIs is JSON, which can be used to design custom monitoring over Spark applications. The REST endpoints are mounted at /api/v1, which means that for a SparkContext with its UI running at http://localhost:4040, the REST endpoints will be mounted at http://localhost:4040/api/v1.
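As an illustration (not the book's own example), the applications endpoint can be queried from any HTTP client. A minimal Scala snippet, assuming a SparkContext with its UI on the default port 4040, could look like this:

// list the applications known to this driver as raw JSON
val json = scala.io.Source.fromURL("http://localhost:4040/api/v1/applications").mkString
println(json)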

It is time to explore some of the REST APIs provided by Apache Spark. For that, launch the Spark shell and run any of the jobs mentioned in the Spark REPL also known as CLI section:

Execute the following command to list the applications...

Summary


In this chapter, we learnt how to set up an Apache Spark standalone cluster and interact with it using the Spark CLI. We then focused on becoming familiar with the various Spark components and how a Spark job gets executed in a clustered environment. Different Spark job configurations and the Spark web UI were also discussed, along with the use of the REST APIs for job submission and status monitoring.

In the next chapter, we will discuss RDDs and their operations in more detail.
