Learning Akka

By Jason Goodwin

About this book

Software today has to work with more data, more users, more cores, and more servers than ever. Akka is a distributed computing toolkit that enables developers to build correct concurrent and distributed applications using Java and Scala with ease, applications that scale across servers and respond to failure by self-healing. As well as simplifying development, Akka enables multiple concurrency development patterns with particular support and architecture derived from Erlang’s concept of actors (lightweight concurrent entities). Akka is written in Scala, which has become the programming language of choice for development on the Akka platform.

Learning Akka aims to be a comprehensive walkthrough of Akka. This book will take you on a journey through all the concepts of Akka that you need in order to get started with concurrent and distributed applications and even build your own.

Beginning with the concept of Actors, the book will take you through concurrency in Akka. Moving on to networked applications, this book will explain the common pitfalls in these difficult problem areas while teaching you how to use Akka to overcome these problems with ease.

The book is an easy to follow example-based guide that will strengthen your basic knowledge of Akka and aid you in applying the same to real-world scenarios.

Publication date:
December 2015


Chapter 1. Starting Life as an Actor

This book is primarily intended for intermediate to senior-level developers wishing to explore Akka and build fault-tolerant, distributed systems in Scala or modern versions of Java.

This book has been written for the engineer who is faced with building applications that are fast, stable, and elastic, meaning they can scale to meet thousands or tens of thousands of concurrent users. With more users having access to the Internet on faster devices and networks, today, more than ever, we need our applications to handle many concurrent users working with larger datasets, and to meet higher expectations of application stability and performance.

This book does not assume that you have a deep understanding of concurrency concepts; it tries to introduce everything you need to know to start a project from scratch, work with concurrency abstractions, and test and build standalone or networked applications using Akka. While this book should give you everything you need in those regards, it's not meant for an absolute beginner and does assume some programming proficiency.

Here is a quick overview of what you'll need and what you'll get out of this book.

  • Requirements:

    • Intermediate Scala or Java experience

    • A computer

    • Internet connectivity

  • Recommendations (but you can learn as you go):

    • If using Java, some exposure to Java 8 lambdas

    • Git and GitHub experience for assignments

  • What you'll learn:

    • Learn to build distributed and concurrent applications

    • Learn techniques for building fault-tolerant systems

    • Learn techniques for sharing code between projects and teams

    • Learn several concepts and patterns to aid in distributed system design


What's in this book?

To meet the modern challenges a platform developer may face, this book puts a strong focus not only on Akka but also on distributed and concurrent computing concepts. It is my intention to give you a toolkit to understand the problems you'll face while trying to scale these distributed and concurrent applications.

These pages are not a reiteration of the Akka documentation. If you want a desk reference or manual, the 460-page Akka documentation will serve that purpose well. This book is not simply a book about Akka; it is a book about building concurrent and distributed systems with Akka.

This book will take you on a journey to show you a new way of working with distributed and concurrent applications. It will arm you with an understanding of the tools, and then show you how to use them. It will demonstrate how to build clusters of applications that talk to each other over the network, where computing nodes can be added or removed to scale to the needs of your users. We'll learn how to build pools of workers that handle huge jobs at scale. We will talk about important theorems and common approaches in distributed systems and show how they affect our design decisions, and we will discuss problems related to network reliability and demonstrate how to build applications that are resilient to them.


Chapter overview

At the heart of Akka is an implementation of the Actor Model, which is a theoretical model of concurrent computation. In this chapter, we will introduce core concepts in Akka by looking at the history of Akka and the actor model. This will give you insight into what Akka is and help you understand the problems it tries to solve. Then, the goals of this book will be introduced, along with the recurring examples that will be used.

After covering these concepts, the chapter will move into setting up your development environment with the tools you need to start building. We will set up our environment, Integrated Development Environment (IDE), and our first Akka project, including unit testing.


What is Akka

This section will introduce Akka and the actor model. Akka, purportedly named after a mountain in Sweden, is often referred to as a distribution toolkit—a collection of tools that are used to do work across remote computing resources. Akka is a modern implementation of the actor model of concurrency. Akka today could be seen as an evolution of other technologies, borrowing from Erlang's actor model implementation while introducing many new features to aid with building applications that can handle today's high-scale problems.

Actor Model origins

To better understand what Akka is and how it is used, we will take a brief trip through time looking at the Actor model to understand what it is and how it has evolved into a framework for building fault-tolerant distributed systems in Akka today.

The actor model of concurrency was originally a theoretical model of concurrent computation proposed in a paper called A Universal Modular Actor Formalism for Artificial Intelligence in 1973. We will look at the actor model's qualities here to understand its benefits in aiding our ability to reason about concurrent computation while protecting against common pitfalls in shared state.

What's an Actor anyway?

First, let's define what an Actor is. In the actor model, an actor is a concurrency primitive; more simply stated, an actor can be thought of as a worker, like a process or thread, that can do work and take action. It might be helpful to think of an actor as a person in an organization who has a role and responsibilities in that organization, say, a sushi restaurant. Restaurant staff do various pieces of work throughout the day, such as preparing dishes for customers.

Actors and Message passing

One of the qualities of an object in object-oriented languages is that it can be directly invoked: one object can examine or change another object's fields, or invoke its methods. This is fine if a single thread is doing it, but if multiple threads try to read and change values at the same time, synchronization and locks are needed.

Actors differ from objects in that they cannot be directly read, changed, and invoked. Instead, Actors can only communicate with the world outside of them through message passing. Message passing simply means that an actor can be sent a message (object in our case) and can, itself, send messages or reply with a message. While you may draw parallels to passing a parameter to a method, and receiving a return value, message passing is fundamentally different because it happens asynchronously. An actor begins processing a message, and replies to the message, on its own terms when it is ready.

The actor processes messages one at a time, synchronously. The mailbox is essentially a queue of work outstanding for the worker to process. When an actor processes a message, the actor can respond by changing its internal state, creating more actors, or sending more messages to other actors.

The term Actor System is often used in implementations to describe a collection of actors and everything related to them including addresses, mailboxes, and configuration.

To reiterate these key concepts:

  • Actor: A worker concurrency primitive, which synchronously processes messages. Actors can hold state, which can change.

  • Message: A piece of data used to communicate with processes (for example, Actors).

  • Message-passing: A software development paradigm where messages are passed to invoke behavior instead of directly invoking the behavior.

  • Mailing address: Where messages are sent for an actor to process when the actor is free.

  • Mailbox: The place messages are stored until an actor is able to process the message. This can be viewed as a queue of messages.

  • Actor system: A collection of actors, their addresses, mailboxes, and configuration, etc.
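To make these definitions concrete, here is a toy, single-threaded sketch in plain Scala (deliberately not Akka's API): the mailbox is just a queue, and the actor drains it one message at a time, mutating state that nothing outside can touch. The ToyActor name and the "increment" message are invented for illustration.

```scala
import scala.collection.mutable

// A toy model of an actor (not Akka): private state, a mailbox queue,
// and one-message-at-a-time processing.
class ToyActor {
  private val mailbox = mutable.Queue[Any]() // pending messages
  private var count = 0                      // internal state, never shared

  // "Message passing": the only way to interact with the actor.
  def tell(msg: Any): Unit = mailbox.enqueue(msg)

  // Process a single message from the mailbox, if any.
  def processOne(): Unit =
    if (mailbox.nonEmpty) mailbox.dequeue() match {
      case "increment" => count += 1 // state changes happen only in here
      case _           => ()         // ignore unknown messages
    }

  def state: Int = count
}

val actor = new ToyActor
actor.tell("increment")
actor.tell("increment")
actor.processOne()
actor.processOne()
println(actor.state) // 2
```

A real actor framework adds the part this sketch leaves out: a scheduler that runs each actor's processing loop concurrently with every other actor's.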

It might not be obvious yet, but the actor model is much easier to reason about than imperative object-oriented concurrent applications. Taking a real-world example and modeling it in an actor system will help to demonstrate this benefit. Consider a sushi restaurant. We have three actors in this example: a customer, a waiter, and the sushi chef.

Our example starts with the customer telling our waiter their order. The waiter writes this down on a piece of paper and places the message in the chef's mailbox (sticks it in the kitchen window). When the chef is free, the chef will pick up the message (order) and start preparing the sushi. The chef will continue to process the message until it's done. When the sushi is prepared, the chef will put this message (plate) in the kitchen window (the waiter's mailbox) for the waiter to pick up. The chef can now go work on other orders.

When the waiter has a free minute, the waiter can pick up the food message from the window and deliver it to the customer's mailbox (for example, the table). When the customer is ready, they will process the message by eating the food.

It's easy to reason about the restaurant using the actor model. If you take a moment to imagine more customers coming into the restaurant, you can imagine the waiter taking orders one at a time and handing them off to the chef, the chef processing them, and the waiter delivering food, all concurrently. This is one of the great benefits of the actor model; it's very easy to reason about concurrency when everyone has their own tasks. Modeling real applications with the actor model is not much different than what we have done in this example.

The next benefit of the actor model is the elimination of shared state. Because actors process one message at a time, state can be stored inside an actor safely. If you have not worked on concurrent systems, this may be harder to see immediately, but we can demonstrate it quite easily. If we try to do two operations that read, modify, and write a value at the same time, one of the operations will be lost unless we carefully employ synchronization and locking. It's a very easy mistake to make.

Let's take a look at a non-atomic increment operation called from two threads at the same time, to see what happens when state is shared across threads. We'll have multiple threads read a value from memory, and then write an incremented value back to memory. This is a race condition and can be partly solved by ensuring mutually exclusive access to the value in memory. Let's actually demonstrate this with a Scala example:

If we try to concurrently increment an integer 100000 times with multiple threads, there is a good chance that we will lose some writes.

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

var i, j = 0
(1 to 100000).foreach(_ => Future { i = i + 1 }) // concurrent increments
(1 to 100000).foreach(_ => j = j + 1)            // single-threaded increments
Thread.sleep(1000) // wait for the futures to finish before printing
println(s"$i $j")

Both i and j are incremented 100000 times using this very simple function: x = x + 1. i is incremented from multiple threads concurrently, while j is incremented by only one thread. We wait for a second before printing to ensure all of the updates are done. If you expect the output to be 100000 100000, you're in for a surprise: j will be 100000, but i will almost certainly fall short.

Shared state is not safe. Values are read by two threads and then saved back incremented. Because the same value is read by multiple threads, increment operations are lost along the way. This is a race condition, and it is one of the fundamental problems with shared-state concurrency models.

We can demonstrate what may be happening with the race condition more clearly by reasoning about the reads and write operations:

Thread 1 reads value in memory - value read as 10
Thread 2 reads value in memory - value read as 10
Thread 1 writes value in memory - value set to 11 (10 + 1)
Thread 2 writes value in memory - value set to 11 (10 + 1) !! LOST INCREMENT !!

Two increments were attempted, but the value only advanced by one: Thread 2's write overwrote Thread 1's, because both threads read the value before either had written it back.

For shared state in memory to work correctly, we have to apply locks and synchronization to stop threads from reading and writing a value at the same time. This introduces complexity: locking code is hard to reason about and hard to get right.
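For contrast, here is a sketch of the conventional shared-state fix: using an atomic counter from java.util.concurrent instead of manual locks. Unlike the broken example above, no increments are lost, but this approach only scales to trivially small critical sections.

```scala
import java.util.concurrent.atomic.AtomicInteger
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

val counter = new AtomicInteger(0)

// incrementAndGet is an atomic read-modify-write, so concurrent
// increments cannot clobber each other.
val updates = (1 to 100000).map(_ => Future { counter.incrementAndGet() })

// Wait for every future to complete instead of sleeping.
Await.result(Future.sequence(updates), 60.seconds)
println(counter.get) // 100000
```

The atomic counter works here because the whole operation is a single read-modify-write; once the shared state is any more complex than one integer, you are back to designing lock protocols by hand.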

The biggest threat is that your code will often appear correct in testing, yet fail intermittently once many concurrent users are working with it. Such bugs are easily missed because testing often excludes high-traffic situations. Dion Almaer once blogged that most Java applications are so rife with concurrency bugs that they only work by accident. Actors help safeguard against these problems by reducing shared state: if you move state inside an actor, access to that state can be limited to the actor alone (effectively, only one thread can access it). If you treat all messages as immutable, then you can effectively eliminate shared state in your actor system and build safer applications.
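The actor-style alternative can be sketched without Akka by confining the mutable counter to a single-threaded executor, which plays the role of an actor's mailbox loop: senders submit "messages" (closures), and only one thread ever touches the state, so no locking is needed.

```scala
import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// One thread stands in for the actor: every update to `count` is
// queued to it, so the increments are processed one at a time.
val actorThread =
  ExecutionContext.fromExecutorService(Executors.newSingleThreadExecutor())

var count = 0
val sends = (1 to 100000).map(_ => Future { count += 1 }(actorThread))

Await.result(Future.sequence(sends), 60.seconds)
actorThread.shutdown()
println(count) // 100000 - no increments lost, and no locks in sight
```

This is exactly the guarantee an actor gives you, minus the supervision, addressing, and distribution features that Akka layers on top.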

The concepts in this section represent the core of the actor model. Chapter 2, Actors and Concurrency and Chapter 3, Getting the Message Across, will cover concurrency, actors, and message passing in greater detail.

The Evolution of supervision and fault tolerance in Erlang

The actor model has evolved over time since its introduction in the aforementioned paper. It was a noted influence on programming language design (Scheme, for example).

There was a noteworthy appearance of the actor model when Ericsson produced an implementation in the 1980s in the Erlang programming language, for use in embedded telecom applications. The concept of fault tolerance through supervision was introduced here. Ericsson, using Erlang and the actor model, produced an often-cited appliance, the AXD301, which managed to achieve a remarkable nine nines of availability (99.9999999% uptime). That's about 3.1 seconds of downtime in 100 years. The team working on the AXD301 claimed to have done this through the elimination of shared state (as we have covered) and by introducing a fault-tolerance mechanism in Erlang: supervision.

Fault tolerance is gained in the actor model through supervision. Supervision is more or less moving the responsibility of responding to failure outside of the thing that can fail. Practically speaking, this means that an actor can have other child actors that it is responsible for supervising; it monitors the child actor for failures and can take actions regarding the child actor's lifecycle. When an error is encountered in a running actor, the default supervision behavior is to restart (effectively recreate) the actor that encountered the failure. This response to failure, recreating the failing component, assumes that an unexpected error may be the result of bad state, and that throwing away and recreating the failing piece of the application can restore it to working order. It is possible to write custom responses as supervision strategies, so almost any action can be taken to restore working order to the application.
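Stripped of Akka's machinery, the idea can be sketched in a few lines of plain Scala: a parent owns its child worker, and its "strategy" on failure is to throw the child away and recreate it, discarding any bad state. The names here (Worker, Supervisor) are invented for illustration and are not Akka's API.

```scala
// A child worker that accumulates state and can fail on bad input.
class Worker {
  private var state = 0
  def handle(msg: Int): Int = {
    if (msg < 0) throw new IllegalArgumentException("bad message")
    state += msg
    state
  }
}

// A toy supervisor: it forwards messages to its child and, when the
// child throws, applies a restart strategy by recreating the child.
class Supervisor {
  private var child = new Worker
  var restarts = 0

  def forward(msg: Int): Option[Int] =
    try Some(child.handle(msg))
    catch {
      case _: Exception =>
        child = new Worker // restart: fresh instance, bad state discarded
        restarts += 1
        None
    }
}

val sup = new Supervisor
println(sup.forward(5))  // Some(5)
println(sup.forward(-1)) // None - the failure triggered a restart
println(sup.forward(2))  // Some(2) - fresh child; the old state of 5 is gone
```

In Akka the same shape appears as a parent actor declaring a supervision strategy for its children, with restart as the default response to failure.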

Fault Tolerance in relation to distributed systems will be addressed as a general cross-cutting concern throughout the book with an emphasis on fault tolerance in Akka and distributed systems in Chapter 4, Actor Lifecycle – Handling State and Failure.

The Evolution of distribution and location transparency

Business today demands systems that can serve traffic to thousands of users concurrently, and a single machine is often not enough to do that. Further, multi-core processors are becoming more prevalent, so distributing work across those cores is becoming important to ensure our software can take advantage of the hardware it runs on.

Akka takes the actor model and continues to evolve it by introducing an important capability for today's engineers: distribution across the network. Akka presents itself as a toolkit for fault-tolerant distribution; that is, a toolkit for working across the physical boundaries of servers to scale almost indefinitely while maintaining high availability. In recent releases, many of the features added to Akka solve problems related to networked systems: Akka Cluster was introduced, which allows an actor system to span multiple machines transparently, and Akka IO and Akka HTTP are now in the core libraries to help us interact with other systems more easily. One of Akka's key contributions to the actor model is the concept of location transparency: an actor's mailing address can actually be a remote location, but the location is transparent to the developer, so the code produced is more or less identical either way.

Akka extends what Erlang did with the actor model and breaks down the physical barriers of the actor system: it adds remoting and location transparency, meaning the mailbox of an actor could be on a remote machine, and Akka would abstract away the transmission of the message over the network.
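As an illustration of what that looks like, an actor's address in Akka is expressed as a path, and a remote actor differs only in the address portion of the path, so the code that sends to it is unchanged. The system name (akkademy) and host below are made up for illustration; akka.tcp is the remoting protocol of the Akka 2.x releases current when this book was written.

```
akka://akkademy/user/someActor                    local actor
akka.tcp://akkademy@10.0.0.1:2552/user/someActor  same actor, reached remotely
```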

More recently, Akka introduced Cluster. Akka Cluster uses modern approaches influenced by the Amazon Dynamo paper, similar to those found in databases such as Cassandra and Riak. With Cluster, an actor system can exist across multiple machines, and nodes gossip about state with the other members, allowing an elastic Akka cluster with no single point of failure. This is an incredible feature that makes creating elastic, fault-tolerant systems quite simple.

Typesafe, the company that provides technologies like Scala and Akka, continues to push distributed computing forward with a plethora of networking tools such as Akka IO and Akka HTTP. Further, Typesafe has been involved in the Reactive Streams proposal, and Akka has one of the first implementations, producing non-blocking back-pressure for asynchronous processing.

We will cover many of these items in detail throughout the course of this book. Chapter 4, Actor Lifecycle – Handling State and Failure and Chapter 5, Scaling Up will cover remoting in greater detail. Cluster will be covered in Chapter 6, Successfully Scaling Out – Clustering. Reactive Streams will be covered in Chapter 7, Handling Mailbox Problems.


What we will build

We will produce two primary services throughout the book, and we recommend that you follow along. There is a homework section at the end of every chapter that will give you exercises to help put the material to use; complete the activities before you head on to the next chapter. Post them to GitHub if you want to share your progress or have a good open-source idea.

We can define two main pieces of software we will focus on developing in the book. One example will be used to demonstrate how to handle state and distribution and the other will be used to demonstrate how to get work done.

Example 1 – handling distributed state

We're going to look at how to build a scalable, distributed, in-memory database that we will store data in from the other example. To be clear, we will build a highly available key-value store similar to Redis or memcached. The database that you build will handle all of the concurrency, clustering, and distribution concerns needed for this to really work. Many of the skills you build will involve learning how to separate and distribute the data and load of our database across a cluster, so we can take advantage of the hardware and scale out to utilize multiple machines. You'll get a real taste of the design challenges and common solutions in real-world situations. We're also going to look at how to build a client library to interact with our Akka-based database so anyone on the JVM can use it. It is highly recommended that you build a database like this for yourself, put it on GitHub, and show it off on your resume.

If it sounds like a lot of work, good news: this will all actually be fairly simple to do using the Akka toolkit. We will take you from zero to hero in no time.

Example 2 – getting lots of work done

For an example of doing lots of work at scale, we will produce an article-reading API that will take a blog or news article, rip out the main text body, and store it in our database for later consumption.

For a use case, imagine a mobile device will have a reader on it requesting articles from popular RSS feeds from our service and presenting the main body text in a nice reader experience that can reflow the text to fit the display. Our service will do the extraction of that body text from major RSS feeds so the user has a nice fast experience on the device and never has to wait. If you want to see a real example of this on a device, check out Flipboard for iOS: it is a great example of what a consumer of our service might look like.

Now that we've covered the content that is in this book, let's get started by setting up your environment, and building an actor!


Setting up your environment

Before we really dig into Akka, we're going to cover setting up your environment and scaffolding a project. You can refer back to this section in later chapters of the book as we will create a few projects along the way.

Choosing a language

The Scala and Java APIs are more or less one to one, so use whichever language you are comfortable with. If you know both languages, Scala certainly has the more idiomatic API, but both are very serviceable choices. An actor built in Java is accessible from Scala through the Scala actor API and vice versa, so there is no need to decide which to build on immediately; do whatever will get you where you are going faster. Right now your focus is on learning Akka, not a language. You'll be able to pick up the other API later without much effort once you know Akka.

Installing Java – Oracle JDK8

This book will forego all older versions of Java and focus only on Java8. If you are a Java developer but not familiar with Java8 features, you should take some time to familiarize yourself with lambdas and the stream API as covered in this tutorial: http://www.oracle.com/webfolder/technetwork/tutorials/obe/java/Lambda-QuickStart/index.html

You'll see that lambdas are used heavily in this book, so you will benefit from taking the time to get acquainted with them.

Installing on Windows

Download and install the Windows JDK8 installer (exe) from Oracle: http://www.oracle.com/technetwork/java/javase/downloads/index.html.

Follow the instructions.

Installing on OS X

Download and install the OS X JDK8 installer (dmg) from Oracle: http://www.oracle.com/technetwork/java/javase/downloads/index.html.

Follow the instructions.

Installing on Linux or Unix (Universal instructions)

There are a couple of approaches that can be used for *nix installations. You can use the universal installer, or use a package manager such as yum for Red Hat Enterprise Linux (RHEL) based distributions or apt-get for Debian-based distributions. Package manager instructions vary from distribution to distribution, but can be found online if desired.

The universal installer will work on all systems, so it is covered here. This is the most basic installation you can get away with: it will install the JDK and enable it for your current user, but will not change your system. This is suitable for servers or desktop environments. If you want to change your system's default JDK/JRE, or make it available to other users as the default, follow the installation instructions for your particular distribution.

Download the Linux tar.gz JDK distribution from Oracle: http://www.oracle.com/technetwork/java/javase/downloads/index.html.

It will likely be in a file named something like jdk-8u31-linux-x64.tar.gz. Decompress the tar.gz file in an appropriate location such as /opt:

sudo cp jdk-8u31-linux-x64.tar.gz /opt
cd /opt
sudo tar -xvf jdk-8u31-linux-x64.tar.gz

You'll want to set your user's Java home to the Java8 folder:

echo 'export JAVA_HOME=/opt/jdk1.8.0_31' >> ~/.profile

Also ensure that Java bin is on the path:

echo 'export PATH=$PATH:/opt/jdk1.8.0_31/bin' >> ~/.profile

Now your IDE and sbt/Activator can use the JDK to build and run the applications we create.

Ensuring Java is configured in your environment

Regardless of the OS you're on, you'll want to ensure that JAVA_HOME is set and also that the Java binary is on the path. You shouldn't need to do this unless you use the universal installer but you should validate in a new terminal that JAVA_HOME is set in the environment, and that the JDK bin folder is on the path.
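A quick way to validate this, assuming a POSIX shell, is the following snippet, which prints whether each piece is visible (the exact paths will differ on your system):

```shell
# Print whether JAVA_HOME is set and whether the java/javac binaries resolve.
if [ -n "$JAVA_HOME" ]; then echo "JAVA_HOME=$JAVA_HOME"; else echo "JAVA_HOME is not set"; fi
if command -v java >/dev/null 2>&1; then java -version; else echo "java is not on the PATH"; fi
if command -v javac >/dev/null 2>&1; then javac -version; else echo "javac is not on the PATH (JRE only?)"; fi
```

If javac is missing but java is present, you likely have only a JRE installed rather than the full JDK.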

Installing Scala

If you're using Scala, then you'll want to have Scala and the REPL installed on your system. At the time of writing, the current Scala version (2.11) compiles to Java 1.6 byte-code, so JDK8 is not strictly required for Scala itself. There is talk of future versions of Scala requiring JDK8, so this may change.

Scala does not need to be installed on its own: Typesafe Activator contains Scala and all of the tools we will need to work with it, and we will install it next.

Installing Typesafe Activator

Typesafe Activator is a bundle that contains Scala, Akka, Play, the Simple Build Tool (SBT), and some extra features such as project scaffolding and templates.


Windows

Download Typesafe Activator from Typesafe: http://www.typesafe.com/get-started.

Run the installer and follow the onscreen instructions.

Linux/Unix/OS X

Download Typesafe Activator from Typesafe: http://www.typesafe.com/get-started.

Unzip the file in an appropriate location such as /opt:

cd /opt
sudo unzip typesafe-activator-1.2.12.zip

Make the Activator executable:

sudo chmod 755 /opt/activator-1.2.12/activator

Add the Activator to your path:

echo 'export PATH=$PATH:/opt/activator-1.2.12' >> ~/.profile

Log out and back in. Ensure you can run the following on the command line:

activator --version

That should display text similar to this:

sbt launcher version 0.13.5


OS X

On OS X, Activator can be installed either by following the Linux/Unix instructions above or by using brew. This section will cover the brew installation:

Open a terminal.

Place the following in your terminal (copied from http://brew.sh). This will install the Homebrew OS X Package manager.

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Finally, place the following in your terminal and press Enter:

brew install typesafe-activator

Check that the Activator is available on the command line:

activator --version

Creating a new project

We will use the Activator to quickly scaffold projects in this book. We can generate a project from any number of templates. We will only use the basic Java and Scala templates in this book. Feel free to explore other options. Typesafe has many user-submitted Activator templates that will demonstrate various technologies and approaches used together.

To create a new project from an Activator template, type the following in a terminal/command prompt:

activator new

You will see the following.

Choose from these featured templates or enter a template name:

  • minimal-akka-java-seed

  • minimal-akka-scala-seed

  • minimal-java

  • minimal-scala

  • play-java

  • play-scala


You can hit Tab to view a list of all templates.

Select the minimal-scala or minimal-java project depending on your language preference. You will be prompted to name your application next, call it akkademy-db.

Enter a name for your application (just press Enter for minimal-scala) > akkademy-db.

To confirm that the project and your environment are set up correctly, change into the folder and run activator test.

cd akkademy-db
activator test

You will see output indicating that the project compiled and the test ran. If there are any problems, you may have to head to Stack Overflow and sort out your environment before proceeding.

You will see the following success message if all went well:

[info] Passed: Total 1, Failed 0, Errors 0, Passed 1
[success] Total time: 3 s, completed 22-Jan-2015 9:44:21 PM

Installing an IDE

We have our environment set up and running, and we can actually start to work on the code. If you want to use a simple text editor, feel free to skip this section. Emacs and Sublime are good choices for text editors and have syntax highlighting and integrations that can provide autocomplete. If you want to get an IDE up and running, we'll cover setting up Eclipse and IntelliJ here.

Install IntelliJ CE

If you choose to use an IDE, IntelliJ is the recommended IDE. If you're using another IDE, I still strongly recommend you attempt to use IntelliJ. While writing this book, I've worked with many Java developers who transitioned to working with SBT projects and almost all of them switched to IntelliJ and never looked back.

IntelliJ now has built-in SBT support, which makes it a fast IDE to get going with for your Akka projects; setup and configuration of the IDE are virtually non-existent, and it will just work with the technology we use in this book.

Steps for getting the project up and running:

  1. Download and install IntelliJ CE (free).

  2. After installing, choose to open a project. Select the akkademy-db folder.

  3. Select Java 1.8 as the project SDK if you're using Java. If you're using Scala 2.11, Java 6 or 7 will also work. Turn on the Auto Import feature. Hit OK.


If using Eclipse, it is recommended that you download Scala IDE, which contains all of the required plugins to work with our sbt/Akka projects in either Java or Scala. Even if you're only using Java, you may find you want to inspect some Scala along the way.

Installing Eclipse (Scala-Ide)

Download Scala IDE from http://scala-ide.org; it is a packaged version of Eclipse with the sbt and Scala plugins integrated.

Unzip the downloaded file. You can move the unzipped folder to another location if desired, such as /opt (Linux) or ~/Applications (OS X).

Run the Eclipse binary. Choose a workspace folder or select the default.

Check that the Java JDK is correctly selected in Preferences: Java | Compiler.

Preparing the project for Eclipse

In order to open the project in Eclipse, we must first generate an Eclipse project.

First, we must add the sbteclipse plugin to our environment. Open your global sbt plugins file (create it if it's not there). It is located in ~/.sbt/{version}/plugins/plugins.sbt, where version is the sbt version, 0.13 at the time of writing: ~/.sbt/0.13/plugins/plugins.sbt

Include the following in the file. If the file contains multiple lines, ensure there is a blank line between each of them (sbt requires this).

addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "3.0.0")

You may want to ensure this is current by checking the sbteclipse GitHub project: https://github.com/typesafehub/sbteclipse/

Once you have the plugin installed, you need to generate the Eclipse project.

In the terminal, navigate to the project we created earlier (akkademy-db). In the root of the project, run activator eclipse to generate the Eclipse project structure.

You will see the following success message if all went well:

[info] Successfully created Eclipse project files for project(s):
[info] akkademy-db
Importing the project into Eclipse

In Eclipse, select File | Import.

Choose General | Existing Projects into Workspace.

Select the folder and click Next.


Note that if you change build.sbt, you will need to regenerate the project and may need to re-import it.


Creating your first Akka application – setting up the SBT project

Now that we have covered setting up your environment and how to create a project, we can proceed with creating some actor code in Akka, and then look at how to validate that code. We will be using the simple build tool (SBT), which is the preferred build tool for Scala projects and is also the build tool that the Play Framework and Activator use under the hood. It's not complex, and we will use it only for managing dependencies and for building, testing, and running applications, so it should not be an obstacle to learning Akka.

Adding Akka to build.sbt

We will now open the application (either Java or Scala) in our favorite IDE. The scaffolding Activator created is not for an Akka project, so we will need to add the Akka dependencies first. We will add both the core Akka module (akka-actor) and the Akka test kit, which contains tools that make it easier to test actors.

In the build.sbt file, you will see something roughly like the following (the Java project is shown here). Note that the dependencies are actually Maven dependencies; any Maven dependency can easily be added, as we'll cover shortly. The Java and Scala projects will be more or less identical; however, the Java project will have a JUnit dependency instead of ScalaTest:

name := """akkademy-db-java"""

version := "1.0"

scalaVersion := "2.11.1"

libraryDependencies ++= Seq(
  "junit"        % "junit"           % "4.11" % "test",
  "com.novocode" % "junit-interface" % "0.10" % "test"
)

To include Akka, we need to add new dependencies.

Your dependencies should look something like this for Java:

libraryDependencies ++= Seq(
  "com.typesafe.akka" % "akka-actor_2.11"  % "2.3.6",
  "junit"             % "junit"            % "4.11" % "test",
  "com.novocode"      % "junit-interface"  % "0.10" % "test"
)

And something like this for Scala:

name := """akkademy-db-scala"""

version := "1.0"

scalaVersion := "2.11.1"

libraryDependencies ++= Seq(
  "com.typesafe.akka" %% "akka-actor"   % "2.3.6",
  "com.typesafe.akka" %% "akka-testkit" % "2.3.6" % "test",
  "org.scalatest"     %% "scalatest"    % "2.1.6" % "test"
)

A note on getting the right Scala version with %%

As Scala does not have binary compatibility across major versions, libraries will often be built and published across several versions of Scala. To have SBT try to resolve the dependency built for the correct Scala version for your project, you can change the dependency declared in the build.sbt file to use two % symbols after the group ID instead of specifying the Scala version in the artifact id.

For example, in a Scala 2.11 project, these two dependencies are equivalents as shown in the following code:

  "com.typesafe.akka" % "akka-actor_2.11" % "2.3.3"
  "com.typesafe.akka" %% "akka-actor" % "2.3.3"
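Conceptually, %% just appends an underscore and the project's Scala binary version to the artifact ID before the dependency is resolved. A tiny sketch in plain Java (a hypothetical helper, just to illustrate the name expansion sbt performs):

```java
public class CrossVersion {
    // Hypothetical helper mimicking what sbt's %% operator does to an
    // artifact ID: it suffixes the Scala binary version so the binary
    // built for your project's Scala version is resolved.
    static String crossArtifact(String artifactId, String scalaBinaryVersion) {
        return artifactId + "_" + scalaBinaryVersion;
    }

    public static void main(String[] args) {
        // "akka-actor" with %% in a Scala 2.11 project resolves "akka-actor_2.11"
        System.out.println(crossArtifact("akka-actor", "2.11"));
    }
}
```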

Adding other Dependencies from Maven Central

Any Maven dependency can be added here—for example, from http://www.mvnrepository.com. On that site, every artifact has an SBT tab that gives you the exact line to add for the dependency.

Creating your first Actor

In this section, we will create an actor that receives a message and updates its internal state by storing the values from the message into a map. This is the humble beginnings of our distributed database.

Making the Message first

We're going to begin our in-memory database with a SetRequest message that will store a key (String) and a value (any Object) in memory. You can think of it as a combination of both an insert and an update in one, or like the set operation on a Map.
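If the insert-or-update semantics are unclear, java.util.Map's put method behaves the same way; this short stdlib-only snippet shows a second put with the same key overwriting the first:

```java
import java.util.HashMap;
import java.util.Map;

public class SetSemantics {
    public static void main(String[] args) {
        Map<String, Object> map = new HashMap<>();
        map.put("key", "first");   // insert: key absent, so a new entry is created
        map.put("key", "second");  // update: same key, so the value is overwritten
        System.out.println(map.get("key")); // prints "second"
        System.out.println(map.size());     // prints 1 - still a single entry
    }
}
```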

Remember, our actor has to get the message from its mailbox and check what the instruction is in that message. We use the class/type of the message to determine what the instruction is. The contents of that message type describe the exact details of how to fulfill the contract of the API; in this case, we will describe the key as a String and the value as an Object inside the message so that we know what to store.

Messages should always be immutable in order to avoid strange and unexpected behavior, primarily by ensuring you and your team don't do unsafe things across execution contexts/threads. Remember also that these messages may not be destined simply for a local actor, but for an actor on another machine. If possible, mark everything val (Scala) or final (Java) and use immutable collections and types such as those found in Google Guava (Java) or the Scala Standard Library.


Here is our SetRequest message in Java as an immutable object. This is a fairly standard approach to immutable objects in Java. It will be a familiar sight to any skilled Java developer; you should generally prefer immutability in all of your code.

package com.akkademy.messages;

public class SetRequest {
    private final String key;
    private final Object value;

    public SetRequest(String key, Object value) {
        this.key = key;
        this.value = value;
    }

    public String getKey() {
        return key;
    }

    public Object getValue() {
        return value;
    }
}
In Scala we have a much more succinct way of defining immutable messages—the case class. The case class lets us create an immutable message; values can only be set once in the constructor and then are read from the fields:

package com.akkademy.messages

case class SetRequest(key: String, value: Object)

That's it for the messages.

Defining Actor response to the Message

Now that we have the message created, we can create the actor and describe the behavior that the actor will take in response to our message. In our very early example here, we are going to do two things:

  1. Log the message.

  2. Store the contents of any SetRequest message for later retrieval.

We will build on the example in future chapters to let us retrieve stored messages so that this actor can be used as a thread-safe caching abstraction (and eventually a full-on distributed key-value store).

We'll have a look at the Java 8 actor first.

Java – AkkademyDb.java

The following code denotes Actor response to the message in Java:

package com.akkademy;

import akka.actor.AbstractActor;
import akka.event.Logging;
import akka.event.LoggingAdapter;
import akka.japi.pf.ReceiveBuilder;
import com.akkademy.messages.SetRequest;
import java.util.HashMap;
import java.util.Map;

public class AkkademyDb extends AbstractActor {
    protected final LoggingAdapter log =
            Logging.getLogger(context().system(), this);
    protected final Map<String, Object> map = new HashMap<>();

    private AkkademyDb() {
        receive(ReceiveBuilder
            .match(SetRequest.class, message -> {
                log.info("Received set request - key: {} value: {}",
                        message.getKey(), message.getValue());
                map.put(message.getKey(), message.getValue());
            })
            .matchAny(o -> log.info("received unknown message {}", o))
            .build()
        );
    }
}

The actor is a Java class that extends AbstractActor (the Java 8 Akka actor API). We create the logger and the map in the class as protected members so we can access them in test cases later in the chapter.

In the constructor we call receive. The receive method takes a ReceiveBuilder which has several methods that we call chained together to produce the final ReceiveBuilder. With this, we describe how the actor should behave in response to different message types. We define two behaviors here and we will look at them one at a time.

First, we define the behavior to respond to any SetRequest messages with:

match(SetRequest.class, message -> {
    log.info("Received set request - key: {} value: {}",
            message.getKey(), message.getValue());
    map.put(message.getKey(), message.getValue());
})

The ReceiveBuilder match method in the Java 8 API is somewhat similar to a case statement, except that we can match on class types. More formally, this is pattern matching.

The match method call, then, says: if the message is of type SetRequest.class, take that message, log it, and put a new record in the map using the key and value of that SetRequest message.

Second, we define a catch-all to simply log any unknown message.

matchAny(o -> log.info("received unknown message {}", o))
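To see the match/matchAny dispatch idea without any Akka machinery, here is a stdlib-only Java sketch (hypothetical names, not Akka's API) that matches on a message's class the way the ReceiveBuilder chain does:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class MiniReceive {
    final List<Object> handled = new ArrayList<>();

    // Roughly what match(SomeClass.class, handler) does: run the handler
    // only when the message is an instance of the given class.
    <T> boolean match(Class<T> type, Object message, Consumer<T> handler) {
        if (type.isInstance(message)) {
            handler.accept(type.cast(message));
            return true;
        }
        return false;
    }

    void receive(Object message) {
        // First behavior handles String messages; the fallback plays
        // the role of matchAny, catching everything else.
        boolean matched = match(String.class, message,
                s -> handled.add("string:" + s));
        if (!matched) {
            handled.add("unknown:" + message);
        }
    }

    public static void main(String[] args) {
        MiniReceive r = new MiniReceive();
        r.receive("hello");
        r.receive(42);
        System.out.println(r.handled); // [string:hello, unknown:42]
    }
}
```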
Scala – AkkademyDb.scala

Scala is a natural fit as the language has pattern matching as a first-class language construct. We'll have a look at the Scala equivalent code now:

package com.akkademy

import akka.actor.Actor
import akka.event.Logging
import scala.collection.mutable.HashMap
import com.akkademy.messages.SetRequest

class AkkademyDb extends Actor {
  val map = new HashMap[String, Object]
  val log = Logging(context.system, this)

  override def receive = {
    case SetRequest(key, value) => {
      log.info("received SetRequest - key: {} value: {}", key, value)
      map.put(key, value)
    }
    case o => log.info("received unknown message: {}", o)
  }
}

In the Scala API, we mix in the Actor trait, define the map and logger as we did in Java, and then implement the receive method. The receive method on the Actor super-type returns a Receive, which, in the Akka source, is defined as a partial function as follows:

type Receive = scala.PartialFunction[scala.Any, scala.Unit]
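A partial function is simply a function that is defined for only some inputs and can report whether it applies to a given one. Java has no direct equivalent, but a minimal sketch (a hypothetical stand-in, just for intuition about the Receive type) might look like this:

```java
import java.util.function.Consumer;
import java.util.function.Predicate;

public class PartialFunctionSketch {
    // Minimal stand-in for Scala's PartialFunction[Any, Unit]: a guard
    // saying whether the function is defined at an input, plus the
    // action to run when it is.
    static class Partial {
        final Predicate<Object> isDefinedAt;
        final Consumer<Object> apply;

        Partial(Predicate<Object> isDefinedAt, Consumer<Object> apply) {
            this.isDefinedAt = isDefinedAt;
            this.apply = apply;
        }
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        // Defined only for String messages, like `case s: String => ...`
        Partial receive = new Partial(
            m -> m instanceof String,
            m -> log.append("got ").append(m)
        );
        Object message = "hi";
        if (receive.isDefinedAt.test(message)) {
            receive.apply.accept(message);
        }
        System.out.println(log); // prints "got hi"
    }
}
```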

We define the behavior for the response to the SetRequest message using pattern matching to produce the partial function. We can extract the key and the value variables for clearer code using pattern matching semantics:

case SetRequest(key, value)

The behavior is to simply log the request, and then to set the key/value in the map.

    case SetRequest(key, value) => {
      log.info("received SetRequest - key: {} value: {}", key, value)
      map.put(key, value)
    }

Finally, we add a catch-all case to simply log unknown messages:

    case o => log.info("received unknown message: {}", o)

That's it for the actor. Now we have to validate we did everything correctly.

Validating the code with unit tests

While books covering frameworks may print to the console or create web pages as suitable evidence that the code is working, we're going to use unit tests to validate code and to demonstrate its use. Library code and services often don't have an API that is easy to interact with or otherwise observe, so testing is generally how these components are validated in almost every project. This is an important skill for any serious developer to have under their belt.

Akka Testkit

Akka provides a test kit that provides almost anything you would ever need to test your actor code. We included the test kit dependencies earlier when we set up our project. For reference, the SBT dependency to place in build.sbt is as follows:

  "com.typesafe.akka" %% "akka-testkit" % "2.3.6" % "test"

We're going to use the generic TestActorRef from the test kit here instead of a normal ActorRef (which we will look at in the next chapter). TestActorRef does two things: it makes the actor's API synchronous, so we don't need to think about concurrency in our tests, and it gives us access to the underlying Actor object.

To be clear, Akka hides the actual Actor (AkkademyDb) and instead gives a reference to the actor that you send messages to. This encapsulates the actor to enforce message passing as nobody can access the actual object instance.
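The encapsulation idea can be sketched without Akka: the real object is private to a small wrapper, and the only public operation is delivering a message. The classes below are hypothetical, stdlib-only stand-ins, not Akka's API:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class RefSketch {
    // The "actor": callers never get a reference to this object.
    static class Db {
        int messagesHandled = 0;
        void onMessage(Object msg) { messagesHandled++; }
    }

    // The "reference": the only handle callers ever see. It accepts
    // messages via tell but never leaks the underlying Db instance.
    static class Ref {
        private final Db db = new Db();
        private final Queue<Object> mailbox = new ArrayDeque<>();

        void tell(Object message) {
            mailbox.add(message);
            // For the sketch we process immediately; Akka does this
            // asynchronously on a dispatcher thread.
            Object m;
            while ((m = mailbox.poll()) != null) db.onMessage(m);
        }

        // For this sketch only, so we can observe that messages arrived;
        // a real ActorRef exposes nothing like this.
        int delivered() { return db.messagesHandled; }
    }

    public static void main(String[] args) {
        Ref ref = new Ref();
        ref.tell("set key value"); // all interaction is message passing
        System.out.println(ref.delivered()); // prints 1
    }
}
```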

Next we will look at the source code, and then explain it line by line.


Here is the Java test source code:

package com.akkademy;

import static org.junit.Assert.assertEquals;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.testkit.TestActorRef;
import com.akkademy.messages.SetRequest;
import org.junit.Test;

public class AkkademyDbTest {
    ActorSystem system = ActorSystem.create();

    @Test
    public void itShouldPlaceKeyValueFromSetMessageIntoMap() {
        TestActorRef<AkkademyDb> actorRef =
                TestActorRef.create(system, Props.create(AkkademyDb.class));
        actorRef.tell(new SetRequest("key", "value"), ActorRef.noSender());
        AkkademyDb akkademyDb = actorRef.underlyingActor();
        assertEquals(akkademyDb.map.get("key"), "value");
    }
}

And here is the Scala test source code:

package com.akkademy

import akka.actor.ActorSystem
import akka.testkit.TestActorRef
import akka.util.Timeout
import com.akkademy.messages.SetRequest
import org.scalatest.{BeforeAndAfterEach, FunSpecLike, Matchers}
import scala.concurrent.duration._

class AkkademyDbSpec extends FunSpecLike with Matchers with BeforeAndAfterEach {
  implicit val system = ActorSystem()

  describe("akkademyDb") {
    describe("given SetRequest") {
      it("should place key/value into map") {
        val actorRef = TestActorRef(new AkkademyDb)
        actorRef ! SetRequest("key", "value")
        val akkademyDb = actorRef.underlyingActor
        akkademyDb.map.get("key") should equal(Some("value"))
      }
    }
  }
}

This is the first time we are looking at interacting with an actor, so there is some new code and behavior; some of it is test-specific and some of it relates to interacting with the actor.

We've described an actor system as a place where actors and their addresses reside. The first thing we need to do before creating the actor is to get a reference to an actor system. We create one as a field in the test:

ActorSystem system = ActorSystem.create();
implicit val system = ActorSystem()

After creating the actor system, we can now create our actor in the actor system. As mentioned, we're going to use Akka Testkit to create a TestActorRef which has a synchronous API, and lets us get at the underlying actor. We create the actor in our actor system here:

TestActorRef<AkkademyDb> actorRef = TestActorRef.create(system, Props.create(AkkademyDb.class));
val actorRef = TestActorRef(new AkkademyDb)

We call the Akka Testkit TestActorRef create method, passing in the actor system we created (it is passed implicitly in Scala) and a reference to the class. We will look at actor creation in further chapters. Actor instances are hidden away, so the act of creating an actor in our actor system returns an ActorRef (in this case, a TestActorRef) that we can send messages to. The system and class reference are enough for Akka to create this simple actor in our actor system, so we have successfully created our first actor.

We communicate with an actor via message passing. We place a message into an actor's mailbox with 'tell' or, in Scala, '!', which is still read as 'tell'. In Java, we specify that there is nobody to respond to as a parameter of the tell method; in Scala, outside of an actor, this is implicit.

actorRef.tell(new SetRequest("key", "value"), ActorRef.noSender());
actorRef ! SetRequest("key", "value")

Because we are using TestActorRef, the call to tell will not continue until the request is processed. This is fine for a look at our first actor but it's important to note that this example does not expose the asynchronous nature of the Actor's API. This is not the usual behavior; tell is an asynchronous operation that returns immediately in normal usage.

Finally, we need to ensure that the behavior is correct by asserting that the actor placed the value into its map. To do this, we get the reference to the underlying Actor instance, and inspect the map by calling get("key") and ensuring the value is there.

AkkademyDb akkademyDb = actorRef.underlyingActor();
assertEquals(akkademyDb.map.get("key"), "value");
val akkademyDb = actorRef.underlyingActor
akkademyDb.map.get("key") should equal(Some("value"))

That's it for the creation of our first simple test case. This basic pattern can be built on for unit testing Actors synchronously. As we go through the book, we will look at more extensive unit-testing examples as well as asynchronous integration testing of our actors.

Running the test

We're almost there! Now that we've built our tests, we can go to the command line and run 'activator' to start the Activator CLI. Next, we can run 'clean' to tidy up any garbage and then 'test' to fire off the tests. To do this in one step, we can run 'activator clean test'.

You should see something like the following for the Java Junit test:

[INFO] [01/12/2015 23:09:24.893] [pool-7-thread-1] [akka://default/user/$$a] Received Set request: Set{key='key', value=value}
[info] Passed: Total 1, Failed 0, Errors 0, Passed 1
[success] Total time: 7 s, completed 12-Jan-2015 11:09:25 PM

And if you're using Scala, then ScalaTest will give you slightly nicer output:

[info] AkkademyDbSpec:
[info] akkademyDb
[info] - should place key/value from Set message into map
[info] Run completed in 1 second, 990 milliseconds.
[info] Total number of tests run: 1
[info] Suites: completed 1, aborted 0
[info] Tests: succeeded 1, failed 0, canceled 0, ignored 0, pending 0
[info] All tests passed.

The output will tell you some information about how many tests were run, how many tests failed, and, if there are any errors, it will indicate where the failures occurred so that you can investigate. Once you have a test in place on a behavior, you can be confident that any changes or refactorings you apply did not break the behavior.



To ensure you have a good grasp on the content, an assignment will be given at the end of each chapter:

  • Place the Akka documentation in your Bookmark bar. Also place Hacker News there and read it every day.

  • Come up with an idea for a service you want to build and provide on the Internet. Preferably the service should involve processing some input and storing it or returning it.

  • Create a repository in GitHub for your project. Check your project into GitHub. If you've never worked with Git or GitHub, now is a good time as the source from this book is available there! You can use it to post and display the work you do in this book. Tag your README with LEARNINGAKKAJG so others can search for your project on GitHub to see what you've done.

  • Create an actor. Have the actor store the last string that it was sent.

  • Write a unit test to confirm the actor will receive a message correctly.

  • Write a unit test to confirm the actor behaves correctly if it is sent two messages.

  • Push your project to GitHub.

  • Check out the book source code from http://www.github.com/jasongoodwin/learning-akka



We have officially started our journey into building scalable and distributed applications using Akka. In this chapter, we looked at a brief history of the actor model to understand what Akka is and where it came from. We also learned how to set up an SBT project for our Akka code. We set up our environment to work with SBT projects and created an actor. Then, we tested the behavior of our actor with a unit test.

In the following few chapters, our example application will really start to take shape as we expand it with a client and distribute it across cores and processes.

About the Author

  • Jason Goodwin

    Jason Goodwin is a developer who is primarily self-taught. His entrepreneurial spirit led him to study business at school, but he started programming when he was 15 and always had a high level of interest in technology. This interest led his career to take a few major changes away from the business side and back into software development. His journey has led him to working on high-scale distributed systems. He likes to create electronic music in his free time.

    He was first introduced to an Akka project at a Scala/Akka shop—mDialog—that built video ad insertion software for major publishers. The company was eventually acquired by Google. He has also been an influential technologist in introducing Akka to a major Canadian telco to help them serve their customers with more resilient and responsive software. He has experience teaching Akka and functional and concurrent programming concepts to small teams there. He is currently working via Adecco at Google.


