
Learning Functional Data Structures and Algorithms

By Atul S. Khot , Raju Kumar Mishra
About this book
Functional data structures have the power to improve the codebase of an application and improve efficiency. With the advent of functional programming, and with powerful functional languages such as Scala, Clojure, and Elixir becoming part of important enterprise applications, functional data structures have gained an important place in the developer toolkit. Immutability is a cornerstone of functional programming. Immutable and persistent data structures are thread safe by definition and hence very appealing for writing robust concurrent programs.

How do we express traditional algorithms in a functional setting? Won't we end up copying too much? Do we trade performance for versioned data structures? This book attempts to answer these questions by looking at functional implementations of traditional algorithms.

It begins with a refresher and consolidation of what functional programming is all about. Next, you'll get to know about lists, the workhorse data type for most functional languages. We show what structural sharing means and how it helps to make immutable data structures efficient and practical. Scala is the primary implementation language for most of the examples; at times, we also present Clojure snippets to illustrate the underlying fundamental theme. While writing code, we use ADTs (abstract data types). Stacks, queues, trees, and graphs are all familiar ADTs; you will see how these ADTs are implemented in a functional setting. We look at implementation techniques such as amortization and lazy evaluation to ensure efficiency. By the end of the book, you will be able to write efficient functional data structures and algorithms for your applications.
Publication date:
February 2017
Publisher
Packt
Pages
318
ISBN
9781785888731

 

Chapter 1.  Why Functional Programming?

What is functional programming (FP)? Why is it talked about so much?

A programming paradigm is a style of programming. FP is a programming paradigm characterized by the absence of side effects.

In FP, functions are the primary means of structuring code. The FP paradigm advocates pure functions and stresses immutable data structures. So we don't mutate variables; we pass state via function parameters. Functional languages give us lazy evaluation and use recursion instead of explicit loops. Functions are first-class citizens, like numbers or strings: we pass functions as argument values, just as we pass a numeric or string argument. This ability to pass functions as arguments allows us to compose behavior, that is, cobble together something entirely new from existing functions.
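As a small illustration (our own sketch, with hypothetical names, not an example from the book), here is a Scala snippet passing one function to another:

```scala
object FirstClass {
  // A higher order function: applies f twice to x.
  def twice(f: Int => Int, x: Int): Int = f(f(x))

  def main(args: Array[String]): Unit = {
    val addThree: Int => Int = _ + 3 // a function is just a value
    println(twice(addThree, 10))     // prints 16
  }
}
```

The function value addThree is handed to twice exactly as a number or string would be.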

In this chapter, we will take a whirlwind tour of functional programming. We will look at bits of code and images to understand the concepts. This will also lay a nice foundation for the rest of the book. We will use the functional paradigm and see how it changes the way we think about data structures and algorithms.

This chapter starts with a look at the concept of abstraction. We will see why abstractions are important in programming. FP is a declarative style of programming, similar to Structured Query Language (SQL). Because it is declarative, we use it to say what we want the computer to do rather than how it should do it. We will also see how this style helps us stay away from writing common, repetitive boilerplate code.

Passing functions as arguments to other, higher order functions is the central idea in FP; we look at this next. We will also see how to stay away from null checks. Controlled state change allows us to reason about our code better. Immutability is the key to creating code that is easier to reason about.

Next, we will see how recursion helps us realize looping without mutating any variables. We will wrap up the chapter with a look at lazy evaluation, copy-on-write, and functional composition.

 

The imperative way


We keep contrasting FP with the imperative style of programming. What do we mean by imperative style, though?

The imperative programming style is embodied by a sequence of commands modifying a program's state. A simple example of this is a for loop. Consider the following pseudo code snippet to print all the elements of an array:

  x = [1,2,3,4...] // an array; x.size gives the number of array elements
  for (int i = 0; i < x.size; ++i) {
      println(x[i])
  }


The for loop establishes an initial state by setting the variable i to 0. The variable is incremented every time the loop repeats; this is what we mean by the state being modified. We keep reading and modifying the state, that is, the loop variable, until there are no elements left in the array.

FP advocates staying away from any state modification. It gives us tools so we don't worry about how to loop over a collection; instead, we focus on what we need to do with each element of the collection.

 

Higher level of abstraction


FP allows us to work at a higher level of abstraction. Abstraction is selective ignorance. The world we know of runs on abstractions. If we say, "Give me sweet condensed milk frozen with embedded dry fruits", someone might really have a hard time understanding it. Instead, just say "ice cream"! Aha! It just falls into place and everyone is happy.

Abstractions are everywhere. An airplane is an abstraction, so is a sewing machine or a train. When we wish to travel by train, we buy a ticket, board the train, and get off at the destination. We really don't worry about how the train functions. We simply can't be expected to deal with all the intricacies of a train engine. As long as it serves our purpose, we ignore details related to how an engine really works.

What are the benefits of an abstraction? We don't get bogged down in unnecessary details. Instead, we focus on the task at hand by applying higher level programming abstractions.

Compare the preceding for loop with the functional code snippet:

scala> val x = Array(1,2,3,4,5,6) 
x: Array[Int] = Array(1, 2, 3, 4, 5, 6) 
scala> x.foreach(println _) 
1 
2 
... 

We simply focus on the task at hand (print each element of an array) and don't care about the mechanics of a for loop. The functional version is more abstract.

As software engineers, when we implement an algorithm and run it, we are intentionally ignoring many details.

We know that the underlying stack of abstractions (compiler, runtime, and hardware) somehow works and faithfully translates the algorithm into runnable code.

Applying higher level abstractions is commonly done in functional languages. For example, consider the following Clojure REPL snippet:

user=> ((juxt (comp str *) (comp str +)) 1 2 3 4 5) 
["120" "15"] 

We are juxtaposing two functions; each is, in turn, a composition of the str function with an arithmetic function.

We just don't worry about how it works internally. We just use high-level abstractions cobbled together with existing pieces of abstract functionality.

We will be looking at abstract data types (ADT) closely in this book. For example, when we talk about a stack, we think of the Last In First Out (LIFO) order of access. However, this ADT is realized by implementing the stack via a data structure, such as a linked list or an array.

Consider the First In First Out (FIFO) ADT in action. The FIFO queue is the normal queue we encounter in life, for example, queuing up at a booking counter. The earlier you get into a queue, the sooner you get serviced.

The FIFO queue is an ADT: we think of it purely in terms of its access order. In practice, however, the queue can be backed by either an array or a linked list.
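To make this concrete, here is a minimal sketch (our own illustration, with hypothetical names) of a persistent FIFO queue in Scala, backed by two immutable lists:

```scala
// `front` holds elements ready to dequeue; `back` holds newly
// enqueued elements in reverse order.
final case class Fifo[A](front: List[A], back: List[A]) {
  def enqueue(a: A): Fifo[A] = Fifo(front, a :: back)

  def dequeue: (A, Fifo[A]) = front match {
    case h :: t => (h, Fifo(t, back))
    case Nil => // front exhausted: reverse back once to restore FIFO order
      back.reverse match {
        case h :: t => (h, Fifo(t, Nil))
        case Nil    => sys.error("dequeue on an empty queue")
      }
  }
}

object FifoDemo {
  def main(args: Array[String]): Unit = {
    val q = Fifo[Int](Nil, Nil).enqueue(1).enqueue(2).enqueue(3)
    val (x, q2) = q.dequeue
    println(x)             // 1: first in, first out
    println(q2.dequeue._1) // 2
  }
}
```

Enqueue is constant time, and each element is reversed at most once on its way out; amortization of exactly this kind is a theme we return to later in the book.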

 

Functional programming is declarative


When we use SQL, we just express our intent. For example, consider this:

mysql> select count(*) from book where author like '%wodehouse%'; 

We just say what we are looking for. The actual mechanism that gets the answer is hidden from us. The following is a simplistic but suitable example to prove the point.

The SQL engine will have to loop over the table and check whether the author column contains the wodehouse string. We really don't need to worry about the search algorithm. The book table resides on a disk somewhere, and the number of rows that need to be filtered could easily exceed the available memory. The engine handles all such complexities for us, though.

We just declare our intent. The following Scala snippet is declarative. It counts the number of even elements in the input list:

scala> val list = List(1, 2, 3, 4, 5, 6) 
list: List[Int] = List(1, 2, 3, 4, 5, 6) 
 
scala> list.count( _ % 2 == 0 ) 
res0: Int = 3 

The code uses a higher order function, namely count. This takes another function, a predicate, as an argument. It loops over each list element, invokes the predicate, and returns the number of elements for which the predicate holds.

Here is another example of Clojure code that shows how to generate a combination of values from two lists:

user=> (defn fun1 [list1 list2] 
  #_=>   (for [x list1 y list2] 
  #_=>     (list x y))) 
#'user/fun1 
user=> (fun1 '(1 2 3) '(4 5 6)) 
((1 4) (1 5) (1 6) (2 4) (2 5) (2 6) (3 4) (3 5) (3 6)) 

Note the code used to generate the combinations. We use a for comprehension to just state what we need done, and it is done for us.

 

No boilerplate


Boilerplate code consists of sections of the same code written again and again. For example, writing loops is boilerplate, as is writing getters and setters for private class members.

As the following code shows, the loop is implicit:

scala> List(1, 2, 3, 4, 5) partition(_ % 2 == 0) 
res3: (List[Int], List[Int]) = (List(2, 4),List(1, 3, 5)) 

We just wish to separate the odd and even numbers, so we specify the criteria via a function, an anonymous function in this case.

What is boilerplate? It is a for loop, for example. In the imperative world, we code the loop ourselves. We need to tell the system how to iterate over a data structure.

Isn't the Scala code just to the point? We say what we need and the loop is implied for us. There is no need to write a for loop, no need to invent a name for the loop variable, and so on. We just got rid of the boilerplate.

Here is a Clojure snippet that shows how to multiply each element of a vector by 2:

user=> (map * (repeat 2) [1 2 3 4 5]) 
(2 4 6 8 10) 

The map function hides the loop from us. The (repeat 2) call generates an infinite lazy sequence of 2s.

So we are just saying this: for the input sequence [1 2 3 4 5], create another lazy sequence of 2s; then use the map function to multiply the two sequences element by element and output the result.

Compare this with an imperative language implementation. We would have needed a loop and a list to collect the result. Instead, we just say what needs to be done.

 

Higher order functions


Unix shells allow us to express computations as pipelines. Small programs called filters are piped together in unforeseen ways to compose new behavior. For example, refer to this code:

~> seq 100 | paste -s -d '*' | bc 

This prints a very big number (obviously, as we just generated the numbers from 1 to 100 and multiplied them together). There is looping involved, of course: we generate the numbers from 1 to 100, join them with * via paste, and pass the expression on to bc for evaluation. Now consider the following code:

scala> val x = (1 to 10).toList 
x: List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8, 9, 10) 
 
scala> x.foldLeft(1) { (r,c) => r * c } 
res2: Int = 3628800     

Contrast this with writing a for loop with a counter and iterating over the elements of the collection one by one. With foldLeft, we simply don't have to worry about visiting all the elements; instead, we think about how to combine each element into the result.
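To mirror the shell pipeline's product of 1 to 100, which overflows Int, here is a sketch (our own illustration) using BigInt:

```scala
object BigProduct {
  // foldLeft threads a BigInt accumulator through 1..100.
  val product: BigInt = (1 to 100).foldLeft(BigInt(1)) { (acc, n) => acc * n }

  def main(args: Array[String]): Unit = {
    println(product.toString.length) // 100! has 158 decimal digits
  }
}
```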

Here is the equivalent Clojure code:

user=> (apply * (range 1 11) ) 
3628800 


Scala's foldLeft and Clojure's apply are higher order functions. They help us avoid writing boilerplate code. The ability to supply a function brings a lot of flexibility to the table.

Eschewing null checks

Higher order functions help us succinctly express logic. FP also gives us an idiom for handling absent values without resorting to null checks. Refer to http://www.infoq.com/presentations/Null-References-The-Billion-Dollar-Mistake-Tony-Hoare on why nulls are a bad idea; the argument comes directly from their inventor.

Here is a Scala collection with some strings and numbers thrown in:

scala> val funnyBag = List("1", "2", "three", "4", "one hundred seventy five") 
funnyBag: List[String] = List(1, 2, three, 4, one hundred seventy five) 

We want to extract the numbers out of this collection and sum them up:

scala> def toInt(in: String): Option[Int] = { 
     |   try { 
     |     Some(Integer.parseInt(in.trim)) 
     |   } catch { 
     |       case e: Exception => None 
     |   } 
     | } 
toInt: (in: String)Option[Int] 

We return an Option container as a return value. If the string was successfully parsed as a number, we return Some[Int], holding the converted number.


In the case of a failure, we return None. Now the following higher order function skips None and flattens Some, pulling the result value out of the container:

scala> funnyBag.flatMap(toInt) 
res1: List[Int] = List(1, 2, 4) 

We can now do further processing on the resulting number list, for example, summing it up:

scala> funnyBag.flatMap(toInt).sum 
res2: Int = 7 
 

Controlling state changes


Let me share a debugging nightmare with you. We had a multithreaded system written in C++. The state was carefully shared, with concurrent access protected by explicit mutex locks. A team member--ugh--forgot to acquire a lock on a shared data structure and all hell broke loose.

The team member was a senior programmer; he knew what he was doing. He just forgot the locking. It took us some nights full of stack traces to figure out what the issue was.

Writing concurrent programs using shared memory communication can very easily go wrong.

In the book Java Concurrency in Practice (http://jcip.net/), the authors show us how easy it is for internal mutable state to escape. Tools such as Eclipse make it easy to generate getters, and before you know it, a reference escapes and all hell could break loose.

Encapsulation alone does not save us. The mutable state, in this case an array, could be inadvertently exposed: a getter method hands the array reference to the outside world, two or more threads then try mutating it, and everything goes for a toss.

We cannot ignore concurrency anymore. Program design is going to be ruled by the machine design; having a multicore machine is the norm.

It is too hard to make sure the state changes are controlled. If instead, we know that a data structure does not change once it is created, reasoning becomes far easier. There is no need to acquire/release locks as a shared state never changes.
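As a small sketch of this (our own example, not from the book), two threads can read a shared immutable list without any locking:

```scala
import java.util.concurrent.Executors

object SafeSharing {
  val shared: List[Int] = List(1, 2, 3, 4, 5) // never mutated after creation

  def main(args: Array[String]): Unit = {
    val pool = Executors.newFixedThreadPool(2)
    // No locks needed: both tasks only read `shared`.
    for (_ <- 1 to 2) pool.execute(() => println(shared.sum))
    pool.shutdown()
  }
}
```

Because the list can never change, there is no lock to forget; the race condition simply cannot exist.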

 

Recursion aids immutability


Instead of writing a loop using a mutable loop variable, functional languages advocate recursion as an alternative. Recursion is a widely used technique in imperative programming languages, too. For example, quicksort and binary tree traversal algorithms are expressed recursively. Divide and conquer algorithms naturally translate into recursion.

When we start writing recursive code, we don't need mutable loop variables:

scala> import scala.annotation.tailrec 
import scala.annotation.tailrec 
scala> def factorial(k: Int): Int = { 
     |   @tailrec 
     |   def fact(n: Int, acc: Int): Int = n match { 
     |     case 1 => acc 
     |     case _ => fact(n-1, n*acc) 
     |   } 
     |   fact(k, 1) 
     | } 
factorial: (k: Int)Int 
 
scala> factorial(5) 
res0: Int = 120  

Note the @tailrec annotation. Scala gives us an option to ensure that tail call optimization (TCO) is applied; the compiler emits an error if the annotated call is not in tail position. TCO rewrites a recursive tail call as a loop, so no additional stack frames are used; this eliminates the possibility of a stack overflow error.

Here is the equivalent Clojure code:

user=> (defn factorial [n] 
  #_=>   (loop [cur n fac 1] 
  #_=>     (if (= cur 1) 
  #_=>      fac 
  #_=>       (recur (dec cur) (* fac cur) )))) 
#'user/factorial 
user=> (factorial 5) 
120 


Clojure's special form, recur, ensures that TCO kicks in.

Note how starkly different these versions are from the one we would write in an imperative paradigm.

Instead of explicit looping, we use recursion so we wouldn't need to change any state, that is, we wouldn't need any mutable variables; this aids immutability.

 

Copy-on-write


What if we have never mutated data? When we need to update, we could copy and update the data. Consider the following Scala snippet:

scala> val x = 1 to 10 
x: scala.collection.immutable.Range.Inclusive = Range(1, 2, 3, 4, 5, 6, 7, 8, 9, 10) 
 
scala> x  map (_ / 2) filter ( _ > 0 ) partition ( _ < 2 ) 
res4: (scala.collection.immutable.IndexedSeq[Int], scala.collection.immutable.IndexedSeq[Int]) = (Vector(1, 1),Vector(2, 2, 3, 3, 4, 4, 5)) 


This is copy-on-write semantics: we make a new data structure every time a change happens to the original data structure. Note the way the filter works. Just one element, the first one, is removed, but we cannot simply remove the element from the input vector itself and pass it on to the partition logic.

This solves the problem of the leaking getter. As data structures are immutable, they could be freely shared among different threads of execution. The state is still shared, but just for reading!

What happens when the input is too large? Wouldn't we end up copying too much data? Not really! There is a behind-the-scenes technique called structural sharing, so most of the copying is avoided while the immutability semantics are still preserved. We will be looking at this feature closely in Chapter 3, Lists.
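Structural sharing is easy to observe with Scala's List (a sketch of our own): prepending builds a new list whose tail is the old list, by reference:

```scala
object Sharing {
  val xs: List[Int] = List(2, 3, 4)
  val ys: List[Int] = 1 :: xs // a new cell in front; nothing is copied

  def main(args: Array[String]): Unit = {
    println(ys.tail eq xs) // true: the tail IS the original list
  }
}
```

The `eq` reference check confirms that ys shares xs rather than holding a copy of it.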

 

Laziness and deferred execution


To deal with excessive copying, we can resort to deferred processing, also known as lazy collections. A collection is lazy when its elements are not all realized at the time of creation; instead, elements are computed on demand.

Let's write a program to generate numbers from 1 to 100. We wish to check which numbers are evenly divisible by 2, 3, 4, and 5.

Let's generate a lazy collection of the input numbers:

scala> val list = (1 to 100).toList.view 
list: scala.collection.SeqView[Int,List[Int]] = SeqView(...) 

We convert an existing Scala collection into a lazy one by calling the view function. Note that the list elements are not printed out, as these are not yet computed.

The following snippet shows a very simple predicate method that checks whether the number n is evenly divisible by d:

scala> def isDivisibleBy(d: Int)(n: Int) = { 
     |   println(s"Checking ${n} by ${d}") 
     |   n % d == 0 
     | } 
isDivisibleBy: (d: Int)(n: Int)Boolean 

We write the method isDivisibleBy in curried form: as a series of functions, each taking one argument. We do this so we can partially apply the function to the divisor argument d. This form helps us easily generate functions for the divisors 2, 3, 4, and 5:

scala> val by2 = isDivisibleBy(2) _ 
by2: Int => Boolean = <function1> 
 
scala> val by3 = isDivisibleBy(3) _ 
by3: Int => Boolean = <function1> 
 
scala> val by4 = isDivisibleBy(4) _ 
by4: Int => Boolean = <function1> 
 
scala> val by5 = isDivisibleBy(5) _ 
by5: Int => Boolean = <function1> 

We can test the preceding functions by entering the code on the REPL, as shown here:

scala> by3(9) 
Checking 9 by 3 
res2: Boolean = true 
 
scala> by4(11) 
Checking 11 by 4 
res3: Boolean = false 

Now we write our checker:

scala> val result = list filter by2 filter by3 filter by4 filter by5 
result: scala.collection.SeqView[Int,List[Int]] = SeqViewFFFF(...) 
scala> result.force 
Checking 1 by 2 
Checking 2 by 2 
Checking 2 by 3 
Checking 3 by 2 
Checking 4 by 2 
... 
Checking 60 by 2 
Checking 60 by 3 
Checking 60 by 4 
Checking 60 by 5 
... 
res1: List[Int] = List(60) 

Note that as soon as 2 is checked by 2 and okayed, it is checked by 3. The checks for each element are interleaved, and no intermediate collections are copied.

Note the force method; this is the opposite of the view method. The force method converts the collection back into a strict one. For a strict collection, all the elements are processed. Once the processing is done, a collection with just the number 60 is returned.

 

Composing functions


We programmers love reusing code. We use libraries and frameworks to build on what already exists; no one wants to reinvent the wheel. For example, here is how we could weed out zeroes from a Clojure vector:

user=> (filter (complement zero?) [0 1 2 0 3 4]) 
(1 2 3 4) 


We composed behavior by composing predicate functions; we did this using complement to negate the zero? predicate function. The zero? predicate returns true if its input is 0; if not, it returns false.
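A rough Scala analogue (our own sketch) composes a new predicate by negating an existing one:

```scala
object Complement {
  val isZero: Int => Boolean = _ == 0
  // The "complement" of isZero, built from it rather than written afresh.
  val nonZero: Int => Boolean = x => !isZero(x)

  def main(args: Array[String]): Unit = {
    println(List(0, 1, 2, 0, 3, 4).filter(nonZero)) // List(1, 2, 3, 4)
  }
}
```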

Given a list of numbers, the following Scala snippet checks whether the sequence is strictly increasing:

scala> val x = List(1, 2, 3, 4, 5, 6, 7) 
x: List[Int] = List(1, 2, 3, 4, 5, 6, 7) 
 
scala> val y = x zip x.drop(1) 
y: List[(Int, Int)] = List((1,2), (2,3), (3,4), (4,5), (5,6), (6,7)) 
 
scala> y forall (x => x._1 < x._2) 
res2: Boolean = true 

Just imagine how we would do this in an imperative language.

Using zip, we pair each number with its successor. We then pass in a function that checks whether the first element of each pair is less than the second.

Here goes the Clojure implementation. First we define a function that takes a list of two elements, de-structures it into its elements, and checks whether these two elements are strictly increasing:

user=> (defn check? [list] 
  #_=>   (let [[x y] list] 
  #_=>     (> y x))) 

Here is a quick test:

user=> (check? '(21 2)) 
false 
user=> (check? '(1 2)) 
true 

Tip

Note that the check? function is a pure function. It works just on its input and nothing else.

Next comes the pair generation; here, each element is paired with its successor:

user=> (defn gen-pairs [list] 
  #_=>   (let [x list 
  #_=>         y (rest list)] 
  #_=>    (partition 2 (interleave x y)))) 

Testing it gives the following:

user=> (gen-pairs '(1 2 3 4)) 
((1 2) (2 3) (3 4)) 

Now comes the magic! We compose these two functions to check whether the input is strictly increasing:

user=> (defn strictly-increasing? [list] 
  #_=>   (every? check? (gen-pairs list))) 
#'user/strictly-increasing? 

Testing it gives the following:

user=> (strictly-increasing? '(1 2 3 4)) 
true 
user=> (strictly-increasing? '(1 2 3 4 2)) 
false 

See how succinct it becomes when we start composing functions.

Note that the check? and every? functions are pure. The check? predicate works on its input only. The gen-pairs function is also pure; it is, in turn, composed of the interleave and partition functions.

The every? function is also a higher order function. We never wrote any loops (not even any recursion). We composed behavior out of existing pieces. The fun thing is the pieces don't need to know about the composition. We just combine independent processing pieces together.
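The same pipeline can be sketched in Scala (our own illustration, with hypothetical names), composing the two steps with andThen:

```scala
object Increasing {
  // Step 1: pair each element with its successor.
  val genPairs: List[Int] => List[(Int, Int)] = xs => xs.zip(xs.drop(1))
  // Step 2: check every pair.
  val allIncreasing: List[(Int, Int)] => Boolean =
    _.forall { case (a, b) => a < b }
  // Compose: neither piece knows about the other.
  val strictlyIncreasing: List[Int] => Boolean = genPairs andThen allIncreasing

  def main(args: Array[String]): Unit = {
    println(strictlyIncreasing(List(1, 2, 3, 4)))    // true
    println(strictlyIncreasing(List(1, 2, 3, 4, 2))) // false
  }
}
```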

For a great treatment of the advantages FP brings to the table, we wholeheartedly recommend Functional Thinking: Paradigm Over Syntax (http://shop.oreilly.com/product/0636920029687.do).

 

Summary


We looked at some prominent reasons to adopt the FP paradigm.

Firstly, we saw what imperative programming is and the notion of state modification.

FP allows us to program at a higher level of abstraction. We looked at some common examples of applying such abstractions.

We saw how FP encourages us to compose systems from existing building blocks. These blocks themselves, in turn, could have been composed out of other smaller blocks. This is an incredibly powerful way to reuse code.

The declarative style of programming is easily seen in how SQL queries work. This allows us to work at a higher level of abstraction.

FP promotes this same declarative style. For example, we normally use implied loops. Implied loops in FP are similar to how Unix shell filters process data.

Controlling changes to a program's state is way too hard. We saw how important this is, given the multithreaded world we developers live in. We saw how FP makes it a breeze by dealing with mostly immutable data structures.

In the next chapter, we will look at some fundamental concepts in data structures and algorithms.

About the Authors
  • Atul S. Khot

    Atul S. Khot is a self-taught programmer and has written software programs in C and C++. Having programmed extensively in Java and dabbled in multiple languages, these days he is increasingly hooked on Scala, Clojure, and Erlang. Atul is a frequent speaker at software conferences and a past Dr. Dobb's product award judge. He is the author of Scala Functional Programming Patterns and Learning Functional Data Structures and Algorithms, published by Packt Publishing.

    Browse publications by this author
  • Raju Kumar Mishra

    Raju Kumar Mishra is a consultant and corporate trainer for big data and programming. After completing his B.Tech at the Indian Institute of Technology (ISM) Dhanbad, he worked for Tata Steel. His deep passion for mathematics, data science, and programming took him to the Indian Institute of Science (IISc). After graduating from IISc in computational science, he worked for Oracle as a performance engineer and software developer. He is an Oracle-certified associate for Java 7, a Hortonworks-certified Apache Hadoop Java developer, and holds a Developer Certification for Apache Spark (O'Reilly School of Technology and Databricks) as well as Revolution R Enterprise-certified Specialist certifications. He has also passed the Financial Risk Manager (FRM I) exam, and his interest in mathematics helped him clear the CT3 (Actuarial Science) exam.

    Browse publications by this author