In this chapter, we will set up Spark and configure it. This chapter is divided into the following recipes:
Installing Spark from binaries
Building the Spark source code with Maven
Launching Spark on Amazon EC2
Deploying Spark on a cluster in standalone mode
Deploying Spark on a cluster with Mesos
Deploying Spark on a cluster with YARN
Using Tachyon as an off-heap storage layer
Apache Spark is a general-purpose cluster computing system to process big data workloads. What sets Spark apart from its predecessors, such as MapReduce, is its speed, ease-of-use, and sophisticated analytics.
Apache Spark was originally developed at AMPLab, UC Berkeley, in 2009. It was made open source in 2010 under the BSD license and switched to the Apache 2.0 license in 2013. Toward the later part of 2013, the creators of Spark founded Databricks to focus on Spark's development and future releases.
Talking about speed, Spark can achieve sub-second latency on big data workloads. To achieve such low latency, Spark makes use of memory for storage. In MapReduce, memory is used primarily for the actual computation; Spark uses memory both to compute and to store objects.
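To make this concrete, here is a minimal sketch from the Spark shell (the words file on HDFS is a hypothetical example path, and sc is the SparkContext that the shell creates for you): caching keeps an RDD's partitions in executor memory, so repeated actions are served from memory instead of being recomputed from disk.
scala> val words = sc.textFile("hdfs://localhost:9000/user/hduser/words")
scala> words.cache() // ask Spark to keep the partitions in memory
scala> words.count // first action: reads from HDFS and fills the cache
scala> words.count // second action: served from memory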
Spark also provides a unified runtime connecting to various big data storage sources, such as HDFS, Cassandra, HBase, and S3. It also provides a rich set of higher-level libraries for different big data compute tasks, such as machine learning, SQL processing, graph processing, and real-time streaming. These libraries make development faster and can be combined in an arbitrary fashion.
Though Spark is written in Scala, and this book only focuses on recipes in Scala, Spark also supports Java and Python.
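As a hedged illustration of how these libraries combine, the following minimal Scala sketch (assuming Spark 1.4 with the spark-sql module on the classpath; the Person class, data, and application name are hypothetical) uses the same SparkContext for both core RDD operations and Spark SQL in one program:
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

case class Person(name: String, age: Int)

object CombinedExample {
  def main(args: Array[String]) {
    // The master URL is supplied by spark-submit (or use local[*] for a local test)
    val sc = new SparkContext(new SparkConf().setAppName("CombinedExample"))
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._
    // Core RDD API: build a small distributed dataset
    val people = sc.parallelize(Seq(Person("Alice", 30), Person("Bob", 25)))
    // Spark SQL on the same data, in the same application, with no extra copy
    people.toDF().registerTempTable("people")
    sqlContext.sql("SELECT name FROM people WHERE age > 26").show()
    sc.stop()
  }
}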
Spark is an open source community project, and everyone uses the pure open source Apache distributions for deployments, unlike Hadoop, which has multiple distributions available with vendor enhancements.
The following figure shows the Spark ecosystem:

The Spark runtime runs on top of a variety of cluster managers, including YARN (Hadoop's compute framework), Mesos, and Spark's own cluster manager called standalone mode. Tachyon is a memory-centric distributed file system that enables reliable file sharing at memory speed across cluster frameworks. In short, it is an off-heap storage layer in memory, which helps share data across jobs and users. Mesos is a cluster manager, which is evolving into a data center operating system. YARN is Hadoop's compute framework that has a robust resource management feature that Spark can seamlessly use.
Spark can either be built from the source code, or precompiled binaries can be downloaded from http://spark.apache.org. For a standard use case, binaries are good enough, and this recipe will focus on installing Spark using binaries.
All the recipes in this book are developed using Ubuntu Linux but should work fine on any POSIX environment. Spark expects Java to be installed and the JAVA_HOME environment variable to be set.
In Linux/Unix systems, there are certain standards for the location of files and directories, which we are going to follow in this book. The following is a quick cheat sheet:
Directory | Description
---|---
/bin | Essential command binaries
/etc | Host-specific system configuration
/opt | Add-on application software packages
/var | Variable data
/tmp | Temporary files
/home | User home directories
At the time of writing this, Spark's current version is 1.4. Please check the latest version from Spark's download page at http://spark.apache.org/downloads.html. Binaries are built against the most recent and stable version of Hadoop. To use a specific version of Hadoop, the recommended approach is to build from sources, which will be covered in the next recipe.
The following are the installation steps:
Open the terminal and download binaries using the following command:
$ wget http://d3kbcqa49mib13.cloudfront.net/spark-1.4.0-bin-hadoop2.4.tgz
Unpack binaries:
$ tar -zxf spark-1.4.0-bin-hadoop2.4.tgz
Rename the folder containing binaries by stripping the version information:
$ sudo mv spark-1.4.0-bin-hadoop2.4 spark
Move the configuration folder to the /etc folder so that it can be made a symbolic link later:
$ sudo mkdir -p /etc/spark
$ sudo mv spark/conf/* /etc/spark
Create your company-specific installation directory under /opt. As the recipes in this book are tested on the infoobjects sandbox, we are going to use infoobjects as the directory name. Create the /opt/infoobjects directory:
$ sudo mkdir -p /opt/infoobjects
Move the spark directory to /opt/infoobjects as it's an add-on software package:
$ sudo mv spark /opt/infoobjects/
Change the ownership of the spark home directory to root:
$ sudo chown -R root:root /opt/infoobjects/spark
Change the permissions of the spark home directory, 0755 = user:read-write-execute group:read-execute world:read-execute:
$ sudo chmod -R 755 /opt/infoobjects/spark
Move to the spark home directory:
$ cd /opt/infoobjects/spark
Create the symbolic link:
$ sudo ln -s /etc/spark conf
Append to PATH in .bashrc:
$ echo "export PATH=$PATH:/opt/infoobjects/spark/bin" >> /home/hduser/.bashrc
Open a new terminal.
Create the log directory in /var:
$ sudo mkdir -p /var/log/spark
Make hduser the owner of the Spark log directory:
$ sudo chown -R hduser:hduser /var/log/spark
Create the Spark tmp directory:
$ mkdir /tmp/spark
Configure Spark with the help of the following command lines:
$ cd /etc/spark
$ echo "export HADOOP_CONF_DIR=/opt/infoobjects/hadoop/etc/hadoop" >> spark-env.sh
$ echo "export YARN_CONF_DIR=/opt/infoobjects/hadoop/etc/hadoop" >> spark-env.sh
$ echo "export SPARK_LOG_DIR=/var/log/spark" >> spark-env.sh
$ echo "export SPARK_WORKER_DIR=/tmp/spark" >> spark-env.sh
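Once the PATH and environment variables are in place, a quick smoke test from the Spark shell (started with spark-shell) helps confirm the installation; a minimal sketch, with arbitrary numbers:
scala> val nums = sc.parallelize(1 to 1000) // distribute a small local collection
scala> nums.filter(_ % 2 == 0).count // should return 500
scala> sc.version // should print 1.4.0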
Installing Spark using binaries works fine in most cases. For advanced cases, such as the following (but not limited to), compiling from the source code is a better option:
Compiling for a specific Hadoop version
Adding the Hive integration
Adding the YARN integration
The following are the prerequisites for this recipe to work:
Java 1.6 or a later version
Maven 3.x
The following are the steps to build the Spark source code with Maven:
Increase MaxPermSize for the heap:
$ echo "export _JAVA_OPTIONS=\"-XX:MaxPermSize=1G\"" >> /home/hduser/.bashrc
Open a new terminal window and download the Spark source code from GitHub:
$ wget https://github.com/apache/spark/archive/branch-1.4.zip
Unpack the archive (the GitHub archive extracts as spark-branch-1.4, so rename it to spark for convenience):
$ unzip branch-1.4.zip
$ mv spark-branch-1.4 spark
Move to the spark directory:
$ cd spark
Compile the sources with these flags: Yarn enabled, Hadoop version 2.4, Hive enabled, and skipping tests for faster compilation:
$ mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -Phive -DskipTests clean package
Move the conf folder to the /etc folder so that it can be made a symbolic link later (run this from the directory above spark):
$ cd ..
$ sudo mv spark/conf /etc/spark
Move the spark directory to /opt/infoobjects as it's an add-on software package:
$ sudo mv spark /opt/infoobjects/spark
Change the ownership of the spark home directory to root:
$ sudo chown -R root:root /opt/infoobjects/spark
Change the permissions of the spark home directory, 0755 = user:rwx group:r-x world:r-x:
$ sudo chmod -R 755 /opt/infoobjects/spark
Move to the spark home directory:
$ cd /opt/infoobjects/spark
Create a symbolic link:
$ sudo ln -s /etc/spark conf
Put the Spark executable in the path by editing .bashrc:
$ echo "export PATH=$PATH:/opt/infoobjects/spark/bin" >> /home/hduser/.bashrc
Create the log directory in /var:
$ sudo mkdir -p /var/log/spark
Make hduser the owner of the Spark log directory:
$ sudo chown -R hduser:hduser /var/log/spark
Create the Spark tmp directory:
$ mkdir /tmp/spark
Configure Spark with the help of the following command lines:
$ cd /etc/spark
$ echo "export HADOOP_CONF_DIR=/opt/infoobjects/hadoop/etc/hadoop" >> spark-env.sh
$ echo "export YARN_CONF_DIR=/opt/infoobjects/hadoop/etc/hadoop" >> spark-env.sh
$ echo "export SPARK_LOG_DIR=/var/log/spark" >> spark-env.sh
$ echo "export SPARK_WORKER_DIR=/tmp/spark" >> spark-env.sh
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute instances in the cloud. Amazon EC2 provides different types of instances to meet all compute needs, such as general-purpose instances, micro instances, memory-optimized instances, storage-optimized instances, and others, and there is a free tier of micro instances to try.
The spark-ec2 script comes bundled with Spark and makes it easy to launch, manage, and shut down clusters on Amazon EC2.
Before you start, you need to do the following things:
Log in to the Amazon AWS account (http://aws.amazon.com).
Click on Security Credentials under your account name in the top-right corner.
Click on Access Keys and Create New Access Key:
Note down the access key ID and secret access key.
Now go to Services | EC2.
Click on Key Pairs in left-hand menu under NETWORK & SECURITY.
Click on Create Key Pair and enter kp-spark as the key-pair name:
Download the private key file and copy it to the /home/hduser/keypairs folder.
Set the environment variables to reflect the access key ID and secret access key (please replace the sample values with your own):
$ echo "export AWS_ACCESS_KEY_ID=\"AKIAOD7M2LOWATFXFKQ\"" >> /home/hduser/.bashrc
$ echo "export AWS_SECRET_ACCESS_KEY=\"+Xr4UroVYJxiLiY8DLT4DLT4D4sxc3ijZGMx1D3pfZ2q\"" >> /home/hduser/.bashrc
$ echo "export PATH=$PATH:/opt/infoobjects/spark/ec2" >> /home/hduser/.bashrc
Spark comes bundled with scripts to launch the Spark cluster on Amazon EC2. Let's launch the cluster using the following command:
$ cd /home/hduser
$ spark-ec2 -k <key-pair> -i <key-file> -s <num-slaves> launch <cluster-name>
Launch the cluster with the example value:
$ spark-ec2 -k kp-spark -i /home/hduser/keypairs/kp-spark.pem --hadoop-major-version 2 -s 3 launch spark-cluster
Sometimes, the default availability zones are not available; in that case, retry sending the request by specifying the specific availability zone you are requesting:
$ spark-ec2 -k kp-spark -i /home/hduser/keypairs/kp-spark.pem -z us-east-1b --hadoop-major-version 2 -s 3 launch spark-cluster
If your application needs to retain data after the instances shut down, attach an EBS volume to it (for example, a 10 GB space):
$ spark-ec2 -k kp-spark -i /home/hduser/keypairs/kp-spark.pem --hadoop-major-version 2 --ebs-vol-size 10 -s 3 launch spark-cluster
If you use Amazon spot instances, here's the way to do it:
$ spark-ec2 -k kp-spark -i /home/hduser/keypairs/kp-spark.pem --spot-price=0.15 --hadoop-major-version 2 -s 3 launch spark-cluster
Note
Spot instances allow you to name your own price for Amazon EC2 computing capacity. You simply bid on spare Amazon EC2 instances and run them whenever your bid exceeds the current spot price, which varies in real-time based on supply and demand (source: amazon.com).
After everything is launched, check the status of the cluster by going to the web UI URL that will be printed at the end.
Now, to access the Spark cluster on EC2, let's connect to the master node using secure shell protocol (SSH):
$ spark-ec2 -k kp-spark -i /home/hduser/keypairs/kp-spark.pem login spark-cluster
The master node contains several directories, such as ephemeral-hdfs and persistent-hdfs. Check the HDFS version in the ephemeral instance:
$ ephemeral-hdfs/bin/hadoop version
Hadoop 2.0.0-cdh4.2.0
Check the HDFS version in the persistent instance with the following command:
$ persistent-hdfs/bin/hadoop version
Hadoop 2.0.0-cdh4.2.0
Change the logging level in the Spark configuration:
$ cd spark/conf
The default log level, INFO, is too verbose, so change it to ERROR by editing log4j.properties and setting log4j.rootCategory to ERROR.
Copy the configuration to all slave nodes after the change:
$ spark-ec2/copy-dir spark/conf
Destroy the Spark cluster:
$ spark-ec2 destroy spark-cluster
Compute resources in a distributed environment need to be managed so that resource utilization is efficient and every job gets a fair chance to run. Spark comes along with its own cluster manager conveniently called standalone mode. Spark also supports working with YARN and Mesos cluster managers.
The cluster manager that should be chosen is mostly driven by both legacy concerns and whether other frameworks, such as MapReduce, are sharing the same compute resource pool. If your cluster has legacy MapReduce jobs running, and all of them cannot be converted to Spark jobs, it is a good idea to use YARN as the cluster manager. Mesos is emerging as a data center operating system to conveniently manage jobs across frameworks, and is very compatible with Spark.
If the Spark framework is the only framework in your cluster, then standalone mode is good enough. As Spark evolves as a technology, you will see more and more use cases of Spark serving as the standalone framework for all big data compute needs. For example, some jobs may be using Apache Mahout at present because MLlib does not yet have the specific machine learning algorithm that the job needs. As soon as MLlib gets this algorithm, the job can be moved to Spark.
Let's consider a cluster of six nodes as an example setup: one master and five slaves (replace them with actual node names in your cluster):
Master: m1.zettabytes.com
Slaves: s1.zettabytes.com, s2.zettabytes.com, s3.zettabytes.com, s4.zettabytes.com, s5.zettabytes.com
Since Spark's standalone mode is the default, all you need to do is have Spark binaries installed on both the master and slave machines. Put /opt/infoobjects/spark/sbin in the path on every node:
$ echo "export PATH=$PATH:/opt/infoobjects/spark/sbin" >> /home/hduser/.bashrc
Start the standalone master server (SSH to master first):
hduser@m1.zettabytes.com~] start-master.sh
The master, by default, starts on port 7077, which slaves use to connect to it. It also has a web UI at port 8080.
Now, SSH to each slave node and start the workers:
hduser@s1.zettabytes.com~] spark-class org.apache.spark.deploy.worker.Worker spark://m1.zettabytes.com:7077
Argument (for fine-grained configuration, the following parameters work with both master and slaves) | Meaning
---|---
-i <ipaddress>, --ip <ipaddress> | IP address/DNS name the service listens on
-p <port>, --port <port> | Port the service listens on
--webui-port <port> | Port for the web UI (by default, 8080 for the master and 8081 for the worker)
-c <cores>, --cores <cores> | Total CPU cores that Spark applications can use on the machine (worker only)
-m <memory>, --memory <memory> | Total RAM that Spark applications can use on the machine (worker only)
-d <dir>, --work-dir <dir> | Directory to use for scratch space and job output logs
Rather than manually starting the master and worker daemons on each node, you can also accomplish this using the cluster launch scripts.
First, create the conf/slaves file on the master node and add one line per slave hostname (this example uses five slave nodes; replace these with the DNS names of the slave nodes in your cluster):
hduser@m1.zettabytes.com~] echo "s1.zettabytes.com" >> conf/slaves
hduser@m1.zettabytes.com~] echo "s2.zettabytes.com" >> conf/slaves
hduser@m1.zettabytes.com~] echo "s3.zettabytes.com" >> conf/slaves
hduser@m1.zettabytes.com~] echo "s4.zettabytes.com" >> conf/slaves
hduser@m1.zettabytes.com~] echo "s5.zettabytes.com" >> conf/slaves
Once the slave machine is set up, you can call the following scripts to start/stop cluster:
Script name | Purpose
---|---
start-master.sh | Starts the master instance on the host machine
start-slaves.sh | Starts a worker instance on each node listed in the conf/slaves file
start-all.sh | Starts both the master and the workers
stop-master.sh | Stops the master instance
stop-slaves.sh | Stops the worker instances on all nodes in the conf/slaves file
stop-all.sh | Stops both the master and the workers
Connect an application to the cluster through the Scala code:
val sparkContext = new SparkContext(new SparkConf().setMaster("spark://m1.zettabytes.com:7077").setAppName("myapp"))
Connect to the cluster through Spark shell:
$ spark-shell --master spark://master:7077
In standalone mode, Spark follows the master-slave architecture, very much like Hadoop MapReduce and YARN. The compute master daemon is called Spark master and runs on one master node. Spark master can be made highly available using ZooKeeper. You can also add more standby masters on the fly, if needed.
The compute slave daemon is called worker and is on each slave node. The worker daemon does the following:
Reports the availability of compute resources on a slave node, such as the number of cores, memory, and others, to Spark master
Spawns the executor when asked to do so by Spark master
Restarts the executor if it dies
There is, at most, one executor per application per slave machine.
Both Spark master and worker are very lightweight. Typically, a memory allocation of 500 MB to 1 GB is sufficient. This value can be set in conf/spark-env.sh by setting the SPARK_DAEMON_MEMORY parameter. For example, the following configuration sets the memory to 1 GB for both the master and worker daemons. Make sure you have sudo access before running it:
$ echo "export SPARK_DAEMON_MEMORY=1g" >> /opt/infoobjects/spark/conf/spark-env.sh
By default, each slave node has one worker instance running on it. Sometimes, you may have a few machines that are more powerful than others. In that case, you can spawn more than one worker on that machine by the following configuration (only on those machines):
$ echo "export SPARK_WORKER_INSTANCES=2" >> /opt/infoobjects/spark/conf/spark-env.sh
Spark worker, by default, uses all cores on the slave machine for its executors. If you would like to limit the number of cores the worker can use, you can set it to that number (for example, 12) by the following configuration:
$ echo "export SPARK_WORKER_CORES=12" >> /opt/infoobjects/spark/conf/spark-env.sh
Spark worker, by default, uses all the available RAM on the machine (leaving 1 GB for the operating system). Note that you cannot specify how much memory each individual executor will use (you control that from the driver configuration). To assign another value for the total memory (for example, 24 GB) to be used by all executors combined, execute the following setting:
$ echo "export SPARK_WORKER_MEMORY=24g" >> /opt/infoobjects/spark/conf/spark-env.sh
There are some settings you can configure at the driver level (the sketch after this list shows the same settings applied programmatically):
To specify the maximum number of CPU cores to be used by a given application across the cluster, set the spark.cores.max configuration in Spark submit or Spark shell as follows:
$ spark-submit --conf spark.cores.max=12
To specify the amount of memory each executor should be allocated (the minimum recommendation is 8 GB), set the spark.executor.memory configuration in Spark submit or Spark shell as follows:
$ spark-submit --conf spark.executor.memory=8g
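The same two limits can also be set programmatically on SparkConf before creating the SparkContext; a minimal sketch (the application name is hypothetical):
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setMaster("spark://m1.zettabytes.com:7077")
  .setAppName("ResourceCappedApp")
  .set("spark.cores.max", "12") // cap on total cores used across the cluster
  .set("spark.executor.memory", "8g") // memory allocated to each executor
val sparkContext = new SparkContext(conf)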
The following diagram depicts the high-level architecture of a Spark cluster:

Visit http://spark.apache.org/docs/latest/spark-standalone.html to find more configuration options.
Mesos is slowly emerging as a data center operating system to manage all compute resources across a data center. Mesos runs on any computer running the Linux operating system and is built using the same principles as the Linux kernel. Let's see how we can install Mesos.
Mesosphere provides a binary distribution of Mesos. The most recent package for the Mesos distribution can be installed from the Mesosphere repositories by performing the following steps:
On Ubuntu OS with the trusty release, add the Mesosphere repository:
$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv E56151BF
$ DISTRO=$(lsb_release -is | tr '[:upper:]' '[:lower:]')
$ CODENAME=$(lsb_release -cs)
$ sudo vi /etc/apt/sources.list.d/mesosphere.list
deb http://repos.mesosphere.io/ubuntu trusty main
Update the repositories:
$ sudo apt-get -y update
Install Mesos:
$ sudo apt-get -y install mesos
To integrate Spark with Mesos, make the Spark binaries available to Mesos and configure the Spark driver to connect to Mesos.
Use the Spark binaries from the first recipe and upload them to HDFS:
$ hdfs dfs -put spark-1.4.0-bin-hadoop2.4.tgz spark-1.4.0-bin-hadoop2.4.tgz
The master URL for a single-master Mesos cluster is mesos://host:5050, and for a ZooKeeper-managed Mesos cluster, it is mesos://zk://host:2181.
Set the following variables in spark-env.sh:
$ sudo vi spark-env.sh
export MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so
export SPARK_EXECUTOR_URI=hdfs://localhost:9000/user/hduser/spark-1.4.0-bin-hadoop2.4.tgz
Run from the Scala program:
val conf = new SparkConf().setMaster("mesos://host:5050")
val sparkContext = new SparkContext(conf)
Run from the Spark shell:
$ spark-shell --master mesos://host:5050
To run in the coarse-grained mode, set the spark.mesos.coarse property:
conf.set("spark.mesos.coarse","true")
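Putting the pieces together, here is a hedged sketch of a driver connecting to a single-master Mesos cluster in the coarse-grained mode; the spark.executor.uri setting mirrors the SPARK_EXECUTOR_URI variable above, and the application name is hypothetical:
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setMaster("mesos://host:5050")
  .setAppName("MesosExample")
  .set("spark.mesos.coarse", "true") // long-running executors instead of one Mesos task per Spark task
  .set("spark.executor.uri", "hdfs://localhost:9000/user/hduser/spark-1.4.0-bin-hadoop2.4.tgz")
val sc = new SparkContext(conf)
println(sc.parallelize(1 to 100).sum()) // quick check: should print 5050.0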
Yet Another Resource Negotiator (YARN) is Hadoop's compute framework that runs on top of HDFS, which is Hadoop's storage layer.
YARN follows the master-slave architecture. The master daemon is called ResourceManager and the slave daemon is called NodeManager. Besides these, application life-cycle management is handled by ApplicationMaster, which can be spawned on any slave node and stays alive for the lifetime of an application.
When Spark is run on YARN, ResourceManager performs the role of Spark master and NodeManagers work as executor nodes.
When running Spark with YARN, each Spark executor runs as a YARN container.
Running Spark on YARN requires a binary distribution of Spark that has YARN support. In both Spark installation recipes, we have taken care of it.
To run Spark on YARN, the first step is to set the configuration:
HADOOP_CONF_DIR: to write to HDFS
YARN_CONF_DIR: to connect to the YARN ResourceManager
$ cd /opt/infoobjects/spark/conf (or /etc/spark)
$ sudo vi spark-env.sh
export HADOOP_CONF_DIR=/opt/infoobjects/hadoop/etc/hadoop
export YARN_CONF_DIR=/opt/infoobjects/hadoop/etc/hadoop
The following command launches Spark in the yarn-client mode:
$ spark-submit --class path.to.your.Class --master yarn-client [options] <app jar> [app options]
Here's an example:
$ spark-submit --class com.infoobjects.TwitterFireHose --master yarn-client --num-executors 3 --driver-memory 4g --executor-memory 2g --executor-cores 1 target/sparkio.jar 10
The following command launches Spark shell in the yarn-client mode:
$ spark-shell --master yarn-client
The command to launch in the yarn-cluster mode is as follows:
$ spark-submit --class path.to.your.Class --master yarn-cluster [options] <app jar> [app options]
Here's an example:
$ spark-submit --class com.infoobjects.TwitterFireHose --master yarn-cluster --num-executors 3 --driver-memory 4g --executor-memory 2g --executor-cores 1 target/sparkio.jar 10
Spark applications on YARN run in two modes:
yarn-client mode: the Spark driver runs in the client process outside the YARN cluster, and ApplicationMaster is used only to negotiate resources from ResourceManager
yarn-cluster mode: the Spark driver runs inside ApplicationMaster, which is spawned by a NodeManager on a slave node
The yarn-cluster mode is recommended for production deployments, while the yarn-client mode is good for development and debugging, when you would like to see immediate output. There is no need to specify the Spark master in either mode as it's picked from the Hadoop configuration; the master parameter is either yarn-client or yarn-cluster.
The following figure shows how Spark is run with YARN in the client mode:

The following figure shows how Spark is run with YARN in the cluster mode:

In the YARN mode, the following configuration parameters can be set:
--num-executors: the number of executors to allocate
--executor-memory: RAM per executor
--executor-cores: CPU cores per executor
Spark RDDs are a great way to store datasets in memory, but they can end up with multiple copies of the same data across different applications. Tachyon solves some of the challenges of Spark RDD management. A few of them are:
An RDD only exists for the duration of the Spark application
The same process performs the compute and the RDD in-memory storage; so, if a process crashes, the in-memory storage also goes away
Different jobs cannot share an RDD even if they are for the same underlying data, for example, an HDFS block; this leads to:
Slow writes to disk
Duplication of data in memory, higher memory footprint
If the output of one application needs to be shared with another application, it's slow due to the replication on disk
Tachyon provides an off-heap memory layer to solve these problems. This layer, being off-heap, is immune to process crashes and is also not subject to garbage collection. This also lets RDDs be shared across applications and outlive a specific job or session; in essence, one single copy of data resides in memory, as shown in the following figure:

Let's download and compile Tachyon (Tachyon, by default, comes configured for Hadoop 1.0.4, so it needs to be compiled from sources for the right Hadoop version). Replace <version> with the current version; at the time of writing this book, the current version is 0.6.4:
$ wget https://github.com/amplab/tachyon/archive/v<version>.zip
Unarchive the source code:
$ unzip v<version>.zip
Remove the version from the tachyon source folder name for convenience:
$ mv tachyon-<version> tachyon
Change the directory to the tachyon folder, compile the sources, and prepare the journal, ramdisk, and configuration file:
$ cd tachyon
$ mvn -Dhadoop.version=2.4.0 clean package -DskipTests=true
$ cd conf
$ sudo mkdir -p /var/tachyon/journal
$ sudo chown -R hduser:hduser /var/tachyon/journal
$ sudo mkdir -p /var/tachyon/ramdisk
$ sudo chown -R hduser:hduser /var/tachyon/ramdisk
$ mv tachyon-env.sh.template tachyon-env.sh
$ vi tachyon-env.sh
Comment the following line:
export TACHYON_UNDERFS_ADDRESS=$TACHYON_HOME/underfs
Uncomment the following line:
export TACHYON_UNDERFS_ADDRESS=hdfs://localhost:9000
Change the following properties:
-Dtachyon.master.journal.folder=/var/tachyon/journal/
export TACHYON_RAM_FOLDER=/var/tachyon/ramdisk
$ sudo mkdir -p /var/log/tachyon
$ sudo chown -R hduser:hduser /var/log/tachyon
$ vi log4j.properties
Replace ${tachyon.home} with /var/log/tachyon.
Create a new core-site.xml file in the conf directory:
$ sudo vi core-site.xml
<configuration>
  <property>
    <name>fs.tachyon.impl</name>
    <value>tachyon.hadoop.TFS</value>
  </property>
</configuration>
$ cd ~
$ sudo mv tachyon /opt/infoobjects/
$ sudo chown -R root:root /opt/infoobjects/tachyon
$ sudo chmod -R 755 /opt/infoobjects/tachyon
Add <tachyon home>/bin to the path:
$ echo "export PATH=$PATH:/opt/infoobjects/tachyon/bin" >> /home/hduser/.bashrc
Restart the shell and format Tachyon:
$ tachyon format
$ tachyon-start.sh local //you need to enter root password as RamFS needs to be formatted
Tachyon's web interface is at http://hostname:19999.
Run the sample program to see whether Tachyon is running fine:
$ tachyon runTest Basic CACHE_THROUGH
You can stop Tachyon any time by running the following command:
$ tachyon-stop.sh
Run Spark on Tachyon:
$ spark-shell
scala> val words = sc.textFile("tachyon://localhost:19998/words")
scala> words.count
scala> words.saveAsTextFile("tachyon://localhost:19998/w2")
scala> val person = sc.textFile("hdfs://localhost:9000/user/hduser/person")
scala> import org.apache.spark.api.java._
scala> person.persist(StorageLevels.OFF_HEAP)
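Because the data now lives in Tachyon rather than inside a single application's heap, a second, completely separate Spark application can read the same in-memory copy; a minimal sketch from a new Spark shell session:
scala> val shared = sc.textFile("tachyon://localhost:19998/w2") // output written by the previous session
scala> shared.count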
Visit http://www.cs.berkeley.edu/~haoyuan/papers/2013_ladis_tachyon.pdf to learn about the origins of Tachyon.