Apache Spark 2.x Cookbook

You're reading from Apache Spark 2.x Cookbook

Product type: Book
Published: May 2017
ISBN-13: 9781787127265
Pages: 294
Edition: 1st
Author: Rishi Yadav

Table of Contents (19 chapters)

Title Page
Credits
About the Author
About the Reviewer
www.PacktPub.com
Customer Feedback
Preface
Getting Started with Apache Spark
Developing Applications with Spark
Spark SQL
Working with External Data Sources
Spark Streaming
Getting Started with Machine Learning
Supervised Learning with MLlib — Regression
Supervised Learning with MLlib — Classification
Unsupervised Learning
Recommendations Using Collaborative Filtering
Graph Processing Using GraphX and GraphFrames
Optimizations and Performance Tuning

Building the Spark source code with Maven


Installing Spark from binaries works fine in most cases. In advanced cases, such as the following (but not limited to these), compiling from source is the better option:

  • Compiling for a specific Hadoop version
  • Adding the Hive integration
  • Adding the YARN integration

Getting ready

The following are the prerequisites for this recipe to work:

  • Java 1.8 or a later version
  • Maven 3.x
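A quick way to check both prerequisites from the shell is sketched below. The `version_ge` helper is hypothetical (not from this recipe) and relies on GNU `sort -V`:

```shell
#!/bin/sh
# Hypothetical prerequisite check, not from the book: compare dotted
# version strings using GNU sort -V. version_ge VER MIN succeeds if VER >= MIN.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

# Extract the version numbers reported by java and mvn (if installed).
java_ver=$(java -version 2>&1 | sed -n 's/.*version "\([0-9._]*\)".*/\1/p')
mvn_ver=$(mvn -version 2>/dev/null | sed -n 's/^Apache Maven \([0-9.]*\).*/\1/p')

version_ge "$java_ver" "1.8" && echo "Java OK: $java_ver"
version_ge "$mvn_ver" "3.0" && echo "Maven OK: $mvn_ver"
```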

How to do it...

The following are the steps to build the Spark source code with Maven:

  1. Increase the MaxPermSize of the heap:
$ echo 'export _JAVA_OPTIONS="-XX:MaxPermSize=1G"' >> /home/hduser/.bashrc
  2. Open a new terminal window and download the Spark source code from GitHub:
$ wget https://github.com/apache/spark/archive/branch-2.1.zip
  3. Unpack the archive:
$ unzip branch-2.1.zip
  4. Rename the unzipped folder to spark:
$ mv spark-branch-2.1 spark
  5. Move to the spark directory:
$ cd spark
  6. Compile the sources with YARN enabled, Hadoop version 2.7, and Hive enabled, and skip the tests for faster compilation:
$ mvn -Pyarn -Phadoop-2.7 -Dhadoop.version=2.7.0 -Phive -DskipTests clean package
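The Spark build is memory-hungry, and Spark's own build documentation suggests raising Maven's memory limits before compiling. This is an optional extra, not one of the recipe's steps; the exact values are the upstream suggestions and can be adjusted:

```shell
# Optional: give Maven more memory before running the build above.
# These values follow Spark's upstream "Building Spark" docs.
export MAVEN_OPTS="-Xmx2g -XX:ReservedCodeCacheSize=512m"
```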
  7. Move the conf folder to /etc/spark so that it can be turned into a symbolic link:
$ sudo mv spark/conf /etc/spark
  8. Move the spark directory to /opt, as it's an add-on software package:
$ sudo mv spark /opt/infoobjects/spark
  9. Change the ownership of the spark home directory to root:
$ sudo chown -R root:root /opt/infoobjects/spark
  10. Change the permissions of the spark home directory to 0755 (user: rwx, group: r-x, world: r-x):
$ sudo chmod -R 755 /opt/infoobjects/spark
  11. Move to the spark home directory:
$ cd /opt/infoobjects/spark
  12. Create a symbolic link:
$ sudo ln -s /etc/spark conf
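The conf relocation in steps 7 and 12 can be played out end-to-end in a scratch directory, which is handy for seeing why the two steps fit together. The paths below are stand-ins for the real /etc/spark and /opt/infoobjects/spark, not part of the recipe:

```shell
#!/bin/sh
# Illustration of the conf-to-/etc pattern from steps 7 and 12, played out
# in a scratch directory; $root stands in for the real filesystem root.
root=$(mktemp -d)
mkdir -p "$root/opt/infoobjects/spark/conf" "$root/etc"

# Step 7 analogue: move the config directory out to the etc area.
mv "$root/opt/infoobjects/spark/conf" "$root/etc/spark"

# Step 12 analogue: link it back into the spark home as conf.
ln -s "$root/etc/spark" "$root/opt/infoobjects/spark/conf"

# The spark home now has a conf entry that resolves to the etc copy.
readlink "$root/opt/infoobjects/spark/conf"
```

Config edits made under /etc/spark are then visible through the spark home's conf path, which is the point of the symlink.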
  13. Put the Spark executables in the path by editing .bashrc:
$ echo "export PATH=\$PATH:/opt/infoobjects/spark/bin" >> /home/hduser/.bashrc
  14. Create the log directory in /var:
$ sudo mkdir -p /var/log/spark
  15. Make hduser the owner of Spark's log directory:
$ sudo chown -R hduser:hduser /var/log/spark
  16. Create Spark's tmp directory:
$ mkdir /tmp/spark
  17. Configure Spark with the help of the following command lines:
$ cd /etc/spark
$ echo "export HADOOP_CONF_DIR=/opt/infoobjects/hadoop/etc/hadoop" >> spark-env.sh
$ echo "export YARN_CONF_DIR=/opt/infoobjects/hadoop/etc/hadoop" >> spark-env.sh
$ echo "export SPARK_LOG_DIR=/var/log/spark" >> spark-env.sh
$ echo "export SPARK_WORKER_DIR=/tmp/spark" >> spark-env.sh
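After these commands, /etc/spark/spark-env.sh should contain four export lines like the following (the Hadoop paths assume the /opt/infoobjects layout used elsewhere in this book):

```shell
# Expected contents of /etc/spark/spark-env.sh after step 17.
export HADOOP_CONF_DIR=/opt/infoobjects/hadoop/etc/hadoop
export YARN_CONF_DIR=/opt/infoobjects/hadoop/etc/hadoop
export SPARK_LOG_DIR=/var/log/spark
export SPARK_WORKER_DIR=/tmp/spark
```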