In this chapter, we will cover how to install and configure Spark, either as a local instance, a multi-node cluster, or in a virtual environment. You will learn the following recipes:
- Installing Spark requirements
- Installing Spark from sources
- Installing Spark from binaries
- Configuring a local instance of Spark
- Configuring a multi-node instance of Spark
- Installing Jupyter
- Configuring a session in Jupyter
- Working with Cloudera Spark images
We cannot begin a book on Spark (well, on PySpark) without first specifying what Spark is. Spark is a powerful, flexible, open source, data processing and querying engine. It is extremely easy to use and provides the means to solve a huge variety of problems, ranging from processing unstructured, semi-structured, or structured data, through streaming, up to machine learning. With over 1,000 contributors from over 250 organizations (not to mention over 3,000 Spark Meetup community members worldwide), Spark is now one of the largest open source projects in the portfolio of the Apache Software Foundation.
The origins of Spark date back to 2009, when Matei Zaharia developed the first versions of the Spark processing engine at UC Berkeley as part of his PhD thesis; the project was open sourced the following year. Since then, Spark has become extremely popular, and its popularity stems from a number of reasons:
- It is fast: It is estimated that Spark is 100 times faster than Hadoop when working purely in memory, and around 10 times faster when reading or writing data to a disk.
- It is flexible: You can leverage the power of Spark from a number of programming languages; Spark natively supports interfaces in Scala, Java, Python, and R.
- It is extendible: As Spark is an open source package, you can easily extend it by introducing your own classes or extending the existing ones.
- It is powerful: Many machine learning algorithms are already implemented in Spark so you do not need to add more tools to your stack—most of the data engineering and data science tasks can be accomplished while working in a single environment.
- It is familiar: Data scientists and data engineers who are accustomed to using Python's `pandas`, or R's `data.frames` or `data.tables`, should have a much gentler learning curve (although differences between these data types do exist). Moreover, if you know SQL, you can also use it to wrangle data in Spark!
- It is scalable: Spark can run locally on your machine (with all the limitations such a solution entails). However, the same code that runs locally can be deployed to a cluster of thousands of machines with little-to-no changes.
For the remainder of this book, we will assume that you are working in a Unix-like environment such as Linux (throughout this book, we will use Ubuntu Server 16.04 LTS) or macOS (running macOS High Sierra); all the code provided has been tested in these two environments. For this chapter (and some other ones, too), an internet connection is also required as we will be downloading a bunch of binaries and sources from the internet.
Note
We will not be focusing on installing Spark in a Windows environment as it is not truly supported by the Spark developers. However, if you are inclined to try, you can follow some of the instructions you will find online, such as from the following link: http://bit.ly/2Ar75ld.
Knowing how to use the command line and how to set some environment variables on your system is useful, but not really required—we will guide you through the steps.
Spark requires a handful of environments to be present on your machine before you can install and use it. In this recipe, we will focus on getting your machine ready for Spark installation.
To execute this recipe, you will need a bash Terminal and an internet connection.
Also, before we start any work, you should clone the GitHub repository for this book. The repository contains all the code (in the form of notebooks) and all the data you will need to follow the examples in this book. To clone the repository, go to http://bit.ly/2ArlBck, click on the Clone or download button, and copy the URL that shows up by clicking on the icon next to it.
Next, go to your Terminal and issue the following command:
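The command takes the following form; this is a sketch with a placeholder, so substitute the repository URL you copied in the previous step:

```bash
# Clone the book's repository to your local disk; replace <REPOSITORY-URL>
# with the URL copied from the Clone or download button.
git clone <REPOSITORY-URL>
```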
If your `git` environment is set up properly, the whole GitHub repository should clone to your disk. No other prerequisites are required.
There are truly just two main requirements for installing PySpark: Java and Python. Additionally, you can also install Scala and R if you want to use those languages, and we will also check for Maven, which we will use to compile the Spark sources.
To do this, we will use the `checkRequirements.sh` script to check for all the requirements; the script is located in the `Chapter01` folder of the GitHub repository.
The following code block shows the high-level portions of the script found in the `Chapter01/checkRequirements.sh` file. Note that some portions of the code were omitted here for brevity:
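The excerpt below is an illustrative sketch of the script's version requirements; apart from `_java_required`, which is referenced later in this recipe, the variable names are assumptions, so check the repository for the exact code:

```bash
#!/bin/bash
# Minimum versions required by Spark 2.3.1 (values as discussed below;
# most variable names here are illustrative).
_java_required="1.8"
_python_required="3.4"
_r_required="3.1"
_scala_required="2.11"
_mvn_required="3.3.9"
```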
First, we will specify all the required packages and their required minimum versions; looking at the preceding code, you can see that Spark 2.3.1 requires Java 1.8+ and Python 3.4 or higher (and we will always be checking for these two environments). Additionally, if you want to use R or Scala, the minimal requirements for these two packages are 3.1 and 2.11, respectively. Maven, as mentioned earlier, will be used to compile the Spark sources, and for doing that, Spark requires at least the 3.3.9 version of Maven.
Note
You can check the Spark requirements here: https://spark.apache.org/docs/latest/index.html You can check the requirements for building Spark here: https://spark.apache.org/docs/latest/building-spark.html.
Next, we parse the command-line arguments:
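A minimal sketch of what such parsing could look like, assuming short flags `-r`, `-s`, and `-m` toggle the optional R, Scala, and Maven checks (the flag names are an assumption; consult the script for the actual ones):

```bash
# Hypothetical flags: -r (check R), -s (check Scala), -m (check Maven).
_check_R_req=false
_check_Scala_req=false
_check_Maven_req=false

while getopts "rsm" opt; do
    case "$opt" in
        r) _check_R_req=true ;;
        s) _check_Scala_req=true ;;
        m) _check_Maven_req=true ;;
    esac
done
```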
You, as the user, can specify whether you additionally want to check for the R, Scala, and Maven dependencies. To do so, run the following code from your command line (the following code will check for all of them):
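Assuming the hypothetical flags sketched above, checking everything might look as follows:

```bash
./checkRequirements.sh -r -s -m
```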
The following is also a perfectly valid usage:
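With the same assumed flags, the order in which they are passed does not matter:

```bash
./checkRequirements.sh -m -s -r
```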
Next, we call three functions: `printHeader`, `checkJava`, and `checkPython`. The `printHeader` function is nothing more than a simple way for the script to state what it does, and it is not that interesting, so we will skip it here; it is fairly self-explanatory, however, so you are welcome to peruse the relevant portions of the `checkRequirements.sh` script yourself.
Next, we will check whether Java is installed. First, we just print to the Terminal that we are performing checks on Java (this is common across all of our functions, so we will only mention it here):
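A minimal sketch of such an announcement, assuming the function simply echoes a banner before running its checks (the exact wording in `checkRequirements.sh` may differ):

```bash
echo
echo "##############################"
echo "Checking Java"
echo "##############################"
echo
```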
Following this, we will check if the Java environment is installed on your machine:
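A sketch of the detection logic described next; the variable name `_java`, used to carry the path of the binary through the rest of the examples in this recipe, is an assumption:

```bash
# Look for java on the PATH first; if that fails, fall back to JAVA_HOME.
if type -p java > /dev/null; then
    echo "Java executable found in PATH"
    _java=java
elif [[ -n "$JAVA_HOME" ]] && [[ -x "$JAVA_HOME/bin/java" ]]; then
    echo "Java executable found in JAVA_HOME"
    _java="$JAVA_HOME/bin/java"
else
    echo "No Java environment found. Install Java 1.8+ and re-run this script."
    exit 1
fi
```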
First, we use the `type` command to check if the `java` command is available; the `type -p` command returns the location of the `java` binary if it exists. This also implies that the `bin` folder containing the Java binaries has been added to the `PATH`.
Note
If you are certain you have the binaries installed (be it Java, Python, R, Scala, or Maven), you can jump to the Updating PATH section in this recipe to see how to let your computer know where these binaries live.
If this fails, we will revert to checking if the `JAVA_HOME` environment variable is set, and if it is, we will try to see if it contains the required `java` binary: `[[ -x "$JAVA_HOME/bin/java" ]]`. Should this fail, the program will print a message that no Java environment could be found and will exit (without checking for other required packages, such as Python).
If, however, the Java binary is found, then we can check its version:
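A sketch of the version extraction described below, reusing the assumed `_java` variable from the earlier detection sketch:

```bash
# java -version writes to stderr, hence the 2>&1 redirection.
_java_version=$("$_java" -version 2>&1 | awk -F '"' '/version/ {print $2}')
echo "Java version: $_java_version"
```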
We first execute the `java -version` command in the Terminal, which would normally produce an output similar to the following:
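For example, on a machine with Oracle Java 8 installed, the output might look like this (the version and build numbers are illustrative and will differ on your system):

```
java version "1.8.0_152"
Java(TM) SE Runtime Environment (build 1.8.0_152-b16)
Java HotSpot(TM) 64-Bit Server VM (build 25.152-b16, mixed mode)
```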
We then pipe that output to `awk` to split the rows (the `-F` switch) at the double-quote (`"`) character (and we will only use the first line of the output, as we filter the rows down to those that contain `/version/`), and take the second element (`$2`) as the version of the Java binaries installed on our machine. We store it in the `_java_version` variable, which we also print to the screen using the `echo` command.
Note
If you do not know what `awk` is or how to use it, we recommend this book from Packt: http://bit.ly/2BtTcBV.
Finally, we check if the `_java_version` we just obtained is lower than `_java_required`. If this evaluates to true, we will stop the execution and instead tell you to install the required version of Java.
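One way to express such a comparison in bash is shown below; this is only a sketch (using `sort -V` to compare version strings), and the script in the repository may implement it differently:

```bash
# If the smallest of the two versions (per sort -V) is not the required one,
# the installed Java is too old.
if [[ "$(printf '%s\n' "$_java_required" "$_java_version" | sort -V | head -n1)" != "$_java_required" ]]; then
    echo "Java $_java_required or newer is required; found $_java_version. Please upgrade."
    exit 1
fi
```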
The logic implemented in the `checkPython`, `checkR`, `checkScala`, and `checkMaven` functions follows in a very similar way. The only differences are in which binary we call and in the way we check the versions:
- For Python, we run `"$_python" --version 2>&1 | awk -F ' ' '{print $2}'`, as checking the Python version (for the Anaconda distribution) prints out the following to the screen: `Python 3.5.2 :: Anaconda 2.4.1 (x86_64)`
- For R, we use `"$_r" --version 2>&1 | awk -F ' ' '/R version/ {print $3}'`, as checking R's version would write (a lot) to the screen; we only use the line that starts with `R version`: `R version 3.4.2 (2017-09-28) -- "Short Summer"`
- For Scala, we utilize `"$_scala" -version 2>&1 | awk -F ' ' '{print $5}'`, given that checking Scala's version prints the following: `Scala code runner version 2.11.8 -- Copyright 2002-2016, LAMP/EPFL`
- For Maven, we check `"$_mvn" --version 2>&1 | awk -F ' ' '/Apache Maven/ {print $3}'`, as Maven prints out the following (and more!) when asked for its version: `Apache Maven 3.5.2 (138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T00:58:13-07:00)`
If you want to learn more, you should now be able to read the other functions with ease.
If any of your dependencies are not installed, you need to install them before continuing with the next recipe. It goes beyond the scope of this book to guide you step-by-step through the installation process of all of these, but here are some helpful links to show you how to do it.
Installing Java is pretty straightforward.
On macOS, go to https://www.java.com/en/download/mac_download.jsp and download the version appropriate for your system. Once downloaded, follow the instructions to install it on your machine. If you require more detailed instructions, check this link: http://bit.ly/2idEozX.
On Linux, check the following link http://bit.ly/2jGwuz1 for Linux Java installation instructions.
We have been using (and highly recommend) the Anaconda version of Python, as it comes with the most commonly used packages included with the installer. It also comes built in with the `conda` package management tool, which makes installing other packages a breeze.
You can download Anaconda from http://www.continuum.io/downloads; select the appropriate version that will fulfill Spark's requirements. For macOS installation instructions, you can go to http://bit.ly/2zZPuUf, and for a Linux installation manual, you can go to http://bit.ly/2ASLUvg.
R is distributed via the Comprehensive R Archive Network (CRAN). The macOS version can be downloaded from https://cran.r-project.org/bin/macosx/, whereas the Linux one is available at https://cran.r-project.org/bin/linux/.
Download the version appropriate for your machine and follow the installation instructions on the screen. For the macOS version, you can choose to install just the R core packages without the GUI and everything else as Spark does not require those.
Installing Scala is even simpler.
Go to http://bit.ly/2Am757R and download the `sbt-*.*.*.tgz` archive (at the time of writing this book, the latest version was `sbt-1.0.4.tgz`). Next, in your Terminal, navigate to the folder you have just downloaded Scala to and issue the following commands:
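The commands below are a sketch: they unpack the downloaded archive and move it to a system-wide location. The target directory (/opt/scala/) and the name of the extracted folder are assumptions, so adjust them to your system:

```bash
# Unpack the archive (the extracted folder name may differ between sbt versions).
tar -xvf sbt-1.0.4.tgz
# Move it to a permanent location of your choice, for example /opt/scala/
sudo mv sbt-1.0.4/ /opt/scala/
```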
That's it. Now, you can skip to the Updating PATH section in this recipe to update your `PATH`.
Maven's installation is quite similar to that of Scala. Go to https://maven.apache.org/download.cgi and download the `apache-maven-*.*.*-bin.tar.gz` archive. At the time of writing this book, the newest version was 3.5.2. Similarly to Scala, open the Terminal, navigate to the folder you have just downloaded the archive to, and type:
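Again, a sketch under the same assumptions (the /opt/apache-maven/ target directory is our choice, not a requirement):

```bash
# Unpack the Maven archive.
tar -xvf apache-maven-3.5.2-bin.tar.gz
# Move it to a permanent location of your choice, for example /opt/apache-maven/
sudo mv apache-maven-3.5.2/ /opt/apache-maven/
```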
Once again, that is all you need to do to install Maven. Check the next subsection for instructions on how to update your `PATH`.
Unix-like operating systems (and Windows, too) use the concept of a `PATH` to search for binaries (or executables, in the case of Windows). The `PATH` is nothing more than a list of folders separated by the colon character (`:`) that tells the operating system where to look for binaries.
To add something to your `PATH` (and make it a permanent change), you need to edit either the `.bash_profile` (macOS) or `.bashrc` (Linux) file; these are located in the home folder of your user. Thus, to add both the Scala and Maven binaries to the `PATH`, you can do the following (on macOS):
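A sketch of the macOS command, assuming you moved the binaries to /opt/scala/ and /opt/apache-maven/ as in the earlier examples (adjust the paths to your actual install locations):

```bash
# Append the Scala (sbt) and Maven bin folders to the PATH on macOS.
echo 'export PATH=/opt/scala/bin:/opt/apache-maven/bin:$PATH' >> ~/.bash_profile
```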
On Linux, the equivalent looks as follows:
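Again assuming the same install locations:

```bash
# Append the Scala (sbt) and Maven bin folders to the PATH on Linux.
echo 'export PATH=/opt/scala/bin:/opt/apache-maven/bin:$PATH' >> ~/.bashrc
```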
The preceding commands simply append to the end of either the `.bash_profile` or `.bashrc` file using the `>>` redirection operator.
Once you execute the preceding commands, restart your Terminal, and:
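Print the `PATH` to verify it (a simple check, not part of the book's script):

```bash
echo $PATH
```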
It should now include paths to both the Scala and Maven binaries.