Learning Functional Data Structures and Algorithms

Chapter 1. Why Functional Programming?

What is functional programming (FP)? Why is it talked about so much?

A programming paradigm is a style of programming. FP is a programming paradigm characterized by the absence of side effects.

In FP, functions are the primary means of structuring code. The FP paradigm advocates pure functions and stresses immutable data structures. So we don't mutate variables; instead, we pass state via function parameters. Functional languages give us lazy evaluation and use recursion instead of explicit loops. Functions are first-class citizens, like numbers or strings: we pass functions as argument values, just like numeric or string arguments. This ability to pass functions as arguments allows us to compose behavior, that is, cobble together something entirely new from existing functions.

In this chapter, we will take a whirlwind tour of functional programming. We will look at bits of code and images to understand the concepts. This will also lay a nice foundation for the rest of the book. We will use the functional paradigm and see how it changes the way we think about data structures and algorithms.

This chapter starts with a look at the concept of abstraction. We will see why abstractions are important in programming. FP is a declarative style of programming, similar to Structured Query Language (SQL). Because it is declarative, we use it to say what we want the computer to do, rather than how it should do it. We will also see how this style helps us stay away from writing common, repetitive boilerplate code.

Passing functions as arguments to other, higher order functions is the central idea in FP; we look at this next. We will also see how to stay away from null checks. Controlling state change allows us to reason about our code more easily. Immutability is the key to creating code that is easier to reason about.

Next, we will see how recursion helps us realize looping without mutating any variables. We will wrap up the chapter with a look at lazy evaluation, copy-on-write, and functional composition.

The imperative way


We keep contrasting FP with the imperative style of programming. What do we mean by imperative style, though?

The imperative programming style is embodied by a sequence of commands modifying a program's state. A simple example of this is a for loop. Consider the following pseudo code snippet to print all the elements of an array:

x = [1,2,3,4...] // an array; x.size gives the number of elements
for( int i = 0; i < x.size; ++i ) {
    println(x[i])
}

Here is a pictorial rendering of the concepts:

As the figure shows, the for loop establishes an initial state by setting the variable i to 0. The variable is incremented every time the loop is repeated; this is what we mean by the state being modified. We keep reading and modifying the state, that is, the loop variable, until there are no elements left in the array.

FP advocates staying away from any state modification. It gives us tools so we don't worry about how to loop over a collection; instead, we focus on what we need to do with each element of the collection.

Higher level of abstraction


FP allows us to work at a higher level of abstraction. Abstraction is selective ignorance. The world we know of runs on abstractions. If we say, "Give me sweet condensed milk frozen with embedded dry fruits", someone might really have a hard time understanding it. Instead, just say "ice cream"! Aha! It just falls into place and everyone is happy.

Abstractions are everywhere. An airplane is an abstraction, so is a sewing machine or a train. When we wish to travel by train, we buy a ticket, board the train, and get off at the destination. We really don't worry about how the train functions. We simply can't be expected to deal with all the intricacies of a train engine. As long as it serves our purpose, we ignore details related to how an engine really works.

What are the benefits of an abstraction? We don't get bogged down in unnecessary details. Instead, we focus on the task at hand by applying higher level programming abstractions.

Compare the preceding for loop with the functional code snippet:

scala> val x = Array(1,2,3,4,5,6) 
x: Array[Int] = Array(1, 2, 3, 4, 5, 6) 
scala> x.foreach(println _) 
1 
2 
... 

We simply focus on the task at hand (print each element of an array) and don't care about the mechanics of a for loop. The functional version is more abstract.

As software engineers, when we implement an algorithm and run it, we are intentionally ignoring many details: the compiler, the runtime, the operating system, and the hardware sit in layers beneath our code, like a sandwich.

We know that this sandwich stack somehow works and faithfully translates the algorithm into runnable code.

Applying higher level abstractions is commonly done in functional languages. For example, consider the following Clojure REPL snippet:

user=> ((juxt (comp str *) (comp str +)) 1 2 3 4 5) 
["120" "15"] 

We are juxtaposing two functions; each is in turn composed of the str function and an arithmetic function. The first multiplies its arguments and turns the product into a string ("120"); the second does the same with their sum ("15").

We just don't worry about how it works internally. We just use high-level abstractions cobbled together with existing pieces of abstract functionality.

We will be looking at abstract data types (ADTs) closely in this book. For example, when we talk about a stack, we think of the Last In First Out (LIFO) order of access. However, this ADT is realized by implementing the stack via a data structure, such as a linked list or an array.
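
As a rough illustration (our own sketch, not the implementation we develop later in the book), a LIFO stack could be backed by an immutable Scala List like this:

// A persistent LIFO stack backed by an immutable List.
// push and pop return new stacks and never mutate the original.
class Stack[A](private val elems: List[A] = Nil) {
  def push(a: A): Stack[A] = new Stack(a :: elems)        // O(1) prepend
  def pop: (Option[A], Stack[A]) = elems match {
    case Nil     => (None, this)                          // empty stack
    case x :: xs => (Some(x), new Stack(xs))              // last in, first out
  }
}

val (top, rest) = new Stack[Int]().push(1).push(2).pop    // top == Some(2)

The caller only sees push and pop, the LIFO contract; the backing List is an implementation detail.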

Here is a figure showing the First In First Out (FIFO) ADT in action. The FIFO queue is the normal queue we encounter in life, for example, queuing up at a booking counter. The earlier you get into a queue, the sooner you get serviced.

The FIFO queue is an ADT: we think of it in terms of its access order. However, as shown in the preceding diagram, we can implement the queue backed by either an array or a linked list.
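
Here is a minimal sketch of our own (the Fifo name is ours, and this is not the queue implementation studied later) of a FIFO queue backed by two immutable lists:

// A persistent FIFO queue: enqueue onto `in`, dequeue from `out`;
// when `out` runs dry, we reverse `in` once to restore FIFO order.
case class Fifo[A](in: List[A] = Nil, out: List[A] = Nil) {
  def enqueue(a: A): Fifo[A] = copy(in = a :: in)
  def dequeue: (Option[A], Fifo[A]) = out match {
    case x :: xs => (Some(x), copy(out = xs))
    case Nil => in.reverse match {
      case Nil     => (None, this)                 // queue is empty
      case x :: xs => (Some(x), copy(in = Nil, out = xs))  // earliest element first
    }
  }
}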

Functional programming is declarative


When we use SQL, we just express our intent. For example, consider this:

mysql> select count(*) from book where author like '%wodehouse%'; 

We just say what we are looking for. The actual mechanism that gets the answer is hidden from us. The following is a simplistic but suitable example to prove the point.

The SQL engine will have to loop over the table and check whether the author column contains the wodehouse string. We really don't need to worry about the search algorithm. The book table resides on a disk somewhere, and the number of rows that need to be filtered could easily exceed the available memory. The engine handles all such complexities for us, though.

We just declare our intent. The following Scala snippet is declarative. It counts the number of even elements in the input list:

scala> val list = List(1, 2, 3, 4, 5, 6) 
list: List[Int] = List(1, 2, 3, 4, 5, 6) 
 
scala> list.count( _ % 2 == 0 ) 
res0: Int = 3 

The code uses a higher order function, namely count. This takes another function, a predicate, as an argument. The line loops over each list element, invokes the predicate for it, and returns the count of elements for which the predicate holds.
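
There is no magic here; count itself could be written in terms of a fold. The following countIf is a sketch of our own, not the standard library's actual implementation:

// Counting elements that satisfy a predicate, without writing an explicit loop.
def countIf[A](xs: List[A])(p: A => Boolean): Int =
  xs.foldLeft(0)((acc, x) => if (p(x)) acc + 1 else acc)

countIf(List(1, 2, 3, 4, 5, 6))(_ % 2 == 0)   // 3, same as list.count(_ % 2 == 0)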

Here is another example of Clojure code that shows how to generate a combination of values from two lists:

user=> (defn fun1 [list1 list2] 
  #_=>   (for [x list1 y list2] 
  #_=>     (list x y))) 
#'user/fun1 
user=> (fun1 '(1 2 3) '(4 5 6)) 
((1 4) (1 5) (1 6) (2 4) (2 5) (2 6) (3 4) (3 5) (3 6)) 

Note the code used to generate the combinations. We use a for comprehension to just state what we need done, and it is done for us.
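
The same declarative style carries over to Scala. Here is a rough equivalent of fun1 (our own translation) using a for comprehension:

// Generate all pairs (x, y) from the two input lists, declaratively.
def combinations(list1: List[Int], list2: List[Int]): List[(Int, Int)] =
  for (x <- list1; y <- list2) yield (x, y)

combinations(List(1, 2, 3), List(4, 5, 6))
// List((1,4), (1,5), (1,6), (2,4), (2,5), (2,6), (3,4), (3,5), (3,6))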

No boilerplate


Boilerplate code consists of sections of code repeated again and again. For example, writing loops is boilerplate, as is writing getters and setters for private class members.

As the following code shows, the loop is implicit:

scala> List(1, 2, 3, 4, 5) partition(_ % 2 == 0) 
res3: (List[Int], List[Int]) = (List(2, 4),List(1, 3, 5)) 

We wish to separate the odd and even numbers, so we just specify the criterion via a function, an anonymous function in this case. This is shown in the following image:

What is boilerplate? It is a for loop, for example. In the imperative world, we code the loop ourselves. We need to tell the system how to iterate over a data structure.

Isn't the Scala code just to the point? We say what we need and the loop is implied for us. No need to write a for loop, no need to invent a name for the loop variable, and so on. We just got rid of the boilerplate.

Here is a Clojure snippet that shows how to multiply each element of a vector by 2:

user=> (map * (repeat 2) [1 2 3 4 5]) 
(2 4 6 8 10) 

The map function hides the loop from us. The (repeat 2) function call generates an infinite lazy sequence of 2s.

So we are just saying this: for the input sequence [1 2 3 4 5], create another lazy sequence of 2's. Then use the map function to multiply these two sequences and output the result. The following figure depicts the flow:

Compare this with an imperative language implementation. We would have needed a loop and a list to collect the result. Instead, we just say what needs to be done.
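
To make the contrast concrete, here is a rough Scala sketch of our own showing the two styles side by side:

import scala.collection.mutable.ListBuffer

// Imperative: we manage the loop and the result collection ourselves.
val doubled = ListBuffer[Int]()
for (x <- List(1, 2, 3, 4, 5)) doubled += x * 2

// Declarative: the loop is implied; we only state the transformation.
List(1, 2, 3, 4, 5).map(_ * 2)   // List(2, 4, 6, 8, 10)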

Higher order functions


Unix shells allow us to express computations as pipelines. Small programs called filters are designed to work together, so they can be piped together in unforeseen ways. For example, refer to this code:

~> seq 100 | paste -s -d '*' | bc 

The result is a very big number (obviously, as we just generated the numbers from 1 to 100 and multiplied them together). There is looping involved, of course: seq generates the numbers from 1 to 100, paste joins them with * as the separator, and bc evaluates the resulting expression. Now consider the following code:

scala> val x = (1 to 10).toList 
x: List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8, 9, 10) 
 
scala> x.foldLeft(1) { (r,c) => r * c } 
res2: Int = 3628800     

In the imperative style, we would write a for loop with a counter and iterate over the elements of the collection one by one. With foldLeft, we simply don't have to worry about visiting all the elements; instead, we think about how to process each element.
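
That loop-and-counter version would look roughly like this (a sketch of our own, not code from later chapters):

// Imperative product: a mutable accumulator updated inside an explicit loop.
var product = 1
for (i <- 1 to 10) {
  product *= i
}
// product == 3628800, the same result foldLeft computed without any mutation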

Here is the equivalent Clojure code:

user=> (apply * (range 1 11) ) 
3628800 

The following figure shows how the code works:

Scala's foldLeft and Clojure's apply are higher order functions. They help us avoid writing boilerplate code. The ability to supply a function brings a lot of flexibility to the table.

Eschewing null checks

Higher order functions help us express logic succinctly. They also give us an idiom for expressing logic without resorting to null checks. Refer to http://www.infoq.com/presentations/Null-References-The-Billion-Dollar-Mistake-Tony-Hoare for why nulls are a bad idea; it comes directly from their inventor.

Here is a Scala collection of strings, some of which represent numbers:

scala> val funnyBag = List("1", "2", "three", "4", "one hundred seventy five") 
funnyBag: List[String] = List(1, 2, three, 4, one hundred seventy five) 

We want to extract the numbers out of this collection and sum them up:

scala> def toInt(in: String): Option[Int] = { 
     |   try { 
     |     Some(Integer.parseInt(in.trim)) 
     |   } catch { 
     |       case e: Exception => None 
     |   } 
     | } 
toInt: (in: String)Option[Int] 

The toInt method returns an Option container. If the string is successfully parsed as a number, we return Some[Int], holding the converted number.

The following diagram shows the execution flow:

In the case of a failure, we return None. Now the following higher order function skips None and just flattens Some. The flattening operation pulls the result value out of Some:

scala> funnyBag.flatMap(toInt) 
res1: List[Int] = List(1, 2, 4) 

We can now do further processing on the resulting number list, for example, summing it up:

scala> funnyBag.flatMap(toInt).sum 
res2: Int = 7 
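
To appreciate what flatMap buys us, here is a sketch of our own (sumParsable is not a library method) that does the same thing by handling each Option by hand, reusing the toInt and funnyBag definitions from earlier:

// Handling each Option explicitly: the very boilerplate that flatMap spares us.
def sumParsable(xs: List[String]): Int =
  xs.foldLeft(0) { (acc, s) =>
    toInt(s) match {
      case Some(n) => acc + n   // parsed successfully
      case None    => acc       // skip anything that is not a number
    }
  }

sumParsable(funnyBag)   // 7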

Controlling state changes


Let me share a debugging nightmare with you. We had a multithreaded system written in C++. The state was carefully shared, with concurrent access protected by explicit mutex locks. A team member (ugh!) forgot to acquire a lock on a shared data structure, and all hell broke loose.

The team member was a senior programmer; he knew what he was doing. He just forgot the locking. It took us some nights full of stack traces to figure out what the issue was.

Writing concurrent programs using shared memory communication can very easily go wrong.

In the book Java Concurrency in Practice, the authors show us how easy it is for internal mutable state to escape (http://jcip.net/). Tools such as Eclipse make it easy to generate getters, and before you know it, a reference escapes and all hell breaks loose.

The encapsulation may look fine, but the mutable state, in this case an array, can be inadvertently exposed. For example, through a getter method, the array reference can be obtained by the outside world. Two or more threads could then try mutating it, and everything goes for a toss.
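
Here is a rough Scala sketch of our own (not an example from Java Concurrency in Practice) of how such a getter leaks the reference:

// The mutable array is private, but the getter hands out the live reference.
class Scores {
  private val points = Array(10, 20, 30)
  def getPoints: Array[Int] = points   // the reference escapes here
}

val s = new Scores
s.getPoints(0) = 99   // outside code mutates the "private" array;
                      // two threads doing this at once is a data race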

We cannot ignore concurrency anymore. Program design is increasingly ruled by the machine's design; having a multicore machine is the norm.

It is too hard to make sure state changes are controlled by hand. If, instead, we know that a data structure does not change once it is created, reasoning becomes far easier. There is no need to acquire and release locks, as the shared state never changes.

Recursion aids immutability


Instead of writing a loop using a mutable loop variable, functional languages advocate recursion as an alternative. Recursion is a widely used technique in imperative programming languages, too. For example, quicksort and binary tree traversal algorithms are expressed recursively. Divide and conquer algorithms naturally translate into recursion.

When we start writing recursive code, we don't need mutable loop variables:

scala> import scala.annotation.tailrec 
import scala.annotation.tailrec 
scala> def factorial(k: Int): Int = { 
     |   @tailrec 
     |   def fact(n: Int, acc: Int): Int = n match { 
     |     case 1 => acc 
     |     case _ => fact(n-1, n*acc) 
     |   } 
     |   fact(k, 1) 
     | } 
factorial: (k: Int)Int 
 
scala> factorial(5) 
res0: Int = 120  

Note the @tailrec annotation. Scala gives us an option to ensure that tail call optimization (TCO) is applied. TCO rewrites a recursive tail call as a loop, so in reality no additional stack frames are consumed; this eliminates the possibility of a stack overflow error.

Here is the equivalent Clojure code:

user=> (defn factorial [n] 
  #_=>   (loop [cur n fac 1] 
  #_=>     (if (= cur 1) 
  #_=>      fac 
  #_=>       (recur (dec cur) (* fac cur) )))) 
#'user/factorial 
user=> (factorial 5) 
120 

The following diagram shows how recursive calls use stack frames:

Clojure's special form, recur, ensures that TCO kicks in.

Note how these versions are starkly different from the one we would write in an imperative paradigm.

Instead of explicit looping, we use recursion so we don't need to change any state, that is, we don't need any mutable variables; this aids immutability.

Copy-on-write


What if we never mutated data? When we need to update, we copy the data and update the copy. Consider the following Scala snippet:

scala> val x = 1 to 10 
x: scala.collection.immutable.Range.Inclusive = Range(1, 2, 3, 4, 5, 6, 7, 8, 9, 10) 
 
scala> x  map (_ / 2) filter ( _ > 0 ) partition ( _ < 2 ) 
res4: (scala.collection.immutable.IndexedSeq[Int], scala.collection.immutable.IndexedSeq[Int]) = (Vector(1, 1),Vector(2, 2, 3, 3, 4, 4, 5)) 

Here is a figure showing all of the copying in action:

This is copy-on-write semantics: we make a new data structure every time a change happens to the original data structure. Note the way the filter works: just one element (the first one, the 0) is removed, but we cannot simply remove the element from the input vector itself and pass it on to the partition logic.

This solves the problem of the leaking getter. As data structures are immutable, they can be freely shared among different threads of execution. The state is still shared, but just for reading!

What happens when the input is too large? Wouldn't we end up copying too much data? Not really! There is a behind-the-scenes technique called structural sharing, so most of the copying is avoided while immutability semantics are still preserved. We will look at this feature closely in Chapter 3, Lists.
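
As a tiny preview (a sketch of our own; the details come in Chapter 3), prepending to an immutable list already shows structural sharing at work:

val a = List(2, 3, 4)
val b = 1 :: a      // List(1, 2, 3, 4)

// No element of `a` was copied: b's tail *is* a, so the two immutable
// lists safely share the same underlying nodes.
b.tail eq a         // true (reference equality)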

Laziness and deferred execution


To deal with excessive copying, we can resort to a feature called deferred processing, also known as lazy collections. A collection is lazy when its elements are not all realized at the time of creation; instead, elements are computed on demand.

Let's write a program to generate numbers from 1 to 100. We wish to check which numbers are evenly divisible by 2, 3, 4, and 5.

Let's generate a lazy collection of the input numbers:

scala> val list = (1 to 100).toList.view 
list: scala.collection.SeqView[Int,List[Int]] = SeqView(...) 

We convert an existing Scala collection into a lazy one by calling the view method. Note that the list elements are not printed out, as they are not yet computed.

The following snippet shows a very simple predicate method that checks whether the number n is evenly divisible by d:

scala> def isDivisibleBy(d: Int)(n: Int) = { 
     |   println(s"Checking ${n} by ${d}") 
     |   n % d == 0 
     | } 
isDivisibleBy: (d: Int)(n: Int)Boolean 

We write the isDivisibleBy method in curried form, that is, as a series of parameter lists, each taking one argument. We do this so we can partially apply the function to the divisor argument d. This form helps us easily generate functions for the divisors 2, 3, 4, and 5:

scala> val by2 = isDivisibleBy(2) _ 
by2: Int => Boolean = <function1> 
 
scala> val by3 = isDivisibleBy(3) _ 
by3: Int => Boolean = <function1> 
 
scala> val by4 = isDivisibleBy(4) _ 
by4: Int => Boolean = <function1> 
 
scala> val by5 = isDivisibleBy(5) _ 
by5: Int => Boolean = <function1> 

We can test the preceding functions by entering the code on the REPL, as shown here:

scala> by3(9) 
Checking 9 by 3 
res2: Boolean = true 
 
scala> by4(11) 
Checking 11 by 4 
res3: Boolean = false 

Now we write our checker:

scala> val result = list filter by2 filter by3 filter by4 filter by5 
result: scala.collection.SeqView[Int,List[Int]] = SeqViewFFFF(...) 
scala> result.force 
Checking 1 by 2 
Checking 2 by 2 
Checking 2 by 3 
Checking 3 by 2 
Checking 4 by 2 
... 
Checking 60 by 2 
Checking 60 by 3 
Checking 60 by 4 
Checking 60 by 5 
... 
res1: List[Int] = List(60) 

Note that as soon as 2 is checked by 2 and okayed, it is checked by 3. All the checks for an element happen in one pass, and the intermediate copying is elided.

Note the force method; it is the opposite of the view method: it converts the lazy view back into a strict collection, processing all the elements. Once the processing is done, a collection with just the number 60 is returned.
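
For contrast, here is a rough sketch (using the by2 to by5 functions defined earlier) of what happens without the view:

// Strict: each filter walks the whole collection and builds an intermediate list,
// so every element is checked by 2 before any element is checked by 3.
(1 to 100).toList filter by2 filter by3 filter by4 filter by5

// Lazy: the view interleaves the checks element by element, in a single pass.
((1 to 100).toList.view filter by2 filter by3 filter by4 filter by5).force

Both yield List(60); the lazy version simply avoids building the intermediate collections.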

Composing functions


We programmers love reusing code. We use libraries and frameworks so that we can build on something that already exists; no one wants to reinvent the wheel. For example, here is how we could weed out zeroes from a Clojure vector:

user=> (filter (complement zero?) [0 1 2 0 3 4]) 
(1 2 3 4) 

The following diagram shows the functional composition:

We composed behavior by composing predicate functions; we did this using complement to negate the zero? predicate function. The zero? predicate returns true if its input is 0; if not, it returns false.

Given a list of numbers, the following Scala snippet checks whether the sequence is strictly increasing:

scala> val x = List(1, 2, 3, 4, 5, 6, 7) 
x: List[Int] = List(1, 2, 3, 4, 5, 6, 7) 
 
scala> val y = x zip x.drop(1) 
y: List[(Int, Int)] = List((1,2), (2,3), (3,4), (4,5), (5,6), (6,7)) 
 
scala> y forall (x => x._1 < x._2) 
res2: Boolean = true 

Just imagine how we would do this in an imperative language.

Using zip, we get each number and its successor as a pair. We pass in a function that checks whether the first element of each pair is less than the second.

Here goes the Clojure implementation. First we define a function that takes a list of two elements, destructures it into its elements, and checks whether the two are strictly increasing:

user=> (defn check? [list] 
  #_=>   (let [[x y] list] 
  #_=>     (> y x))) 

Here is a quick test:

user=> (check? '(21 2)) 
false 
user=> (check? '(1 2)) 
true 

Tip

Note that the check? function is a pure function. It works just on its input and nothing else.

Next comes the pair generation; here, each element is paired with its successor:

user=> (defn gen-pairs [list] 
  #_=>   (let [x list 
  #_=>         y (rest list)] 
  #_=>    (partition 2 (interleave x y)))) 

Testing it gives the following:

user=> (gen-pairs '(1 2 3 4)) 
((1 2) (2 3) (3 4)) 

Now comes the magic! We compose these two functions to check whether the input is strictly increasing:

user=> (defn strictly-increasing? [list] 
  #_=>   (every? check? (gen-pairs list))) 
#'user/strictly-increasing? 

Testing it gives the following:

user=> (strictly-increasing? '(1 2 3 4)) 
true 
user=> (strictly-increasing? '(1 2 3 4 2)) 
false 

See how succinct it becomes when we start composing functions:

Note that check? and every? are pure functions: check? works only on its input. The gen-pairs function is also pure; it is, in turn, composed of the interleave and partition functions.

The every? function is also a higher order function. We never wrote any loops (not even any recursion). We composed behavior out of existing pieces. The fun thing is the pieces don't need to know about the composition. We just combine independent processing pieces together.
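
The same composition reads naturally in Scala as well; here is a rough sketch of our own (genPairs and strictlyIncreasing are our names, not the book's):

// Compose two small, independent pieces into the final check.
def genPairs(xs: List[Int]): List[(Int, Int)] = xs.zip(xs.drop(1))

val strictlyIncreasing: List[Int] => Boolean =
  (genPairs _).andThen(_.forall { case (a, b) => a < b })

strictlyIncreasing(List(1, 2, 3, 4))      // true
strictlyIncreasing(List(1, 2, 3, 4, 2))   // false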

For a great treatment of the advantages FP brings to the table, we wholeheartedly recommend Functional Thinking: Paradigm Over Syntax (http://shop.oreilly.com/product/0636920029687.do).

Summary


We looked at some prominent reasons to adopt the FP paradigm.

First, we saw what imperative programming is and looked at the notion of state modification.

FP allows us to program at a higher level of abstraction. We looked at some common examples of applying such abstractions.

We saw how FP encourages us to compose systems from existing building blocks. These blocks themselves, in turn, could have been composed out of other smaller blocks. This is an incredibly powerful way to reuse code.

The declarative style of programming is easily seen in how SQL queries work. This allows us to work at a higher level of abstraction.

FP promotes this same declarative style. For example, we normally use implied loops, much as Unix shell filters process data through a pipeline.

Controlling changes to a program's state by hand is hard. We saw how important this is, given the multithreaded world we developers live in, and how FP makes it far easier by working mostly with immutable data structures.

In the next chapter, we will look at some fundamental concepts in data structures and algorithms.
