Functional programming defines a computation using expressions and evaluation; often these are encapsulated in function definitions. It de-emphasizes or avoids the complexity of state change and mutable objects. This tends to create programs that are more succinct and expressive. In this chapter, we'll introduce some of the techniques that characterize functional programming. We'll identify some of the ways to map these features to Python. Finally, we'll also address some ways in which the benefits of functional programming accrue when we use these design patterns to build Python applications.
Python has numerous functional programming features. It is not a purely functional programming language, but it offers enough of the right kinds of features to confer the benefits of functional programming. It also retains all the optimization power of an imperative programming language.
We'll also look at a problem domain that we'll use for many of the examples in this book. We'll try to stick closely to Exploratory Data Analysis (EDA) because its algorithms are often good examples of functional programming. Furthermore, the benefits of functional programming accrue rapidly in this problem domain.
Our goal is to establish some essential principles of functional programming. The more serious Python code will begin in Chapter 2, Introducing Some Functional Features.
It's difficult to be definitive on the universe of programming paradigms. For our purposes, we will distinguish between only two of the many paradigms: functional programming and imperative programming. One important distinguishing feature between these two is the concept of state.
In an imperative language, such as Python, the state of the computation is reflected by the values of the variables in the various namespaces; some kinds of statements make a well-defined change to the state by adding or changing (or even removing) a variable. A language is imperative because each statement is a command, which changes the state in some way.
Our general focus is on the assignment statement and how it changes the state. Python has other statements, such as global or nonlocal, which modify the rules for variables in a particular namespace. Statements such as def, class, and import change the processing context. Other statements, such as try, except, if, elif, and else, act as guards that modify how a collection of statements will change the computation's state. Statements such as for and while similarly wrap a block of statements so that the statements can make repeated changes to the state of the computation. The focus of all these various statement types, however, is on changing the state of the variables.
Ideally, each assignment statement advances the state of the computation from an initial condition toward the desired final outcome. Proving that each statement properly advances the computation can be challenging. One approach is to define the final state, identify a statement that will establish this final state, and then deduce the precondition required for this final statement to work. This design process can be iterated until an acceptable initial state is derived.
In a functional language, we replace the state—the changing values of variables—with a simpler notion of evaluating functions. Each function evaluation creates a new object or objects from existing objects. Since a functional program is a composition of functions, we can design lower-level functions that are easy to understand, and then design higher-level compositions that can also be easier to visualize than a complex sequence of statements.
Function evaluation more closely parallels mathematical formalisms. Because of this, we can often use simple algebra to design an algorithm, which clearly handles the edge cases and boundary conditions. This makes us more confident that the functions work. It also makes it easy to locate test cases for formal unit testing.
It's important to note that functional programs tend to be relatively succinct, expressive, and efficient compared to imperative (object-oriented or procedural) programs. The benefit isn't automatic; it requires a careful design. This design effort for functional programming is often easier than for procedural programming.
We can subdivide imperative languages into a number of discrete categories. In this section, we'll glance quickly at the procedural versus object-oriented distinction. What's important here is to see how object-oriented programming is a subset of imperative programming. The distinction between procedural and object-orientation doesn't reflect the kind of fundamental difference that functional programming represents.
We'll use code examples to illustrate the concepts. For some, this will feel like reinventing the wheel. For others, it provides a concrete expression of abstract concepts.
For some kinds of computations, we can ignore Python's object-oriented features and write simple numeric algorithms. For example, we might write something like the following to sum a range of numbers that share a common property:
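One possible version of this strictly procedural loop, a sketch built around the variables s and n discussed next, is:

```python
s = 0
for n in range(1, 10):
    if n % 3 == 0 or n % 5 == 0:
        s += n  # accumulate multiples of three or five
print(s)
```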
The sum s includes only numbers that are multiples of three or five. We've made this program strictly procedural, avoiding any explicit use of Python's object features. The program's state is defined by the values of the variables s and n. The variable n takes on values such that 1 ≤ n < 10. Because the loop is an ordered exploration of the values of n, we can prove that it will terminate when n == 10. Similar code would work in C or Java, using their primitive (non-object) data types.
We can exploit Python's Object-Oriented Programming (OOP) features and create a similar program:
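A sketch of this object-oriented variant, which accumulates the matching values in a list object, m, and then sums it, might be:

```python
m = list()
for n in range(1, 10):
    if n % 3 == 0 or n % 5 == 0:
        m.append(n)  # grow the stateful collection
print(sum(m))
```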
This program produces the same result, but it accumulates a stateful collection object, m, as it proceeds. The state of the computation is defined by the values of the variables m and n.
The syntax of m.append(n) and sum(m) can be confusing. It causes some programmers to insist (wrongly) that Python is somehow not purely object-oriented because it has a mixture of the function() and object.method() syntax. Rest assured, Python is purely object-oriented. Some languages, such as C++, allow the use of primitive data types such as int, float, and long, which are not objects. Python doesn't have these primitive types. The presence of prefix syntax, sum(m), doesn't change the nature of the language.
To be pedantic, we could fully embrace the object model by defining a subclass of the list class. This new class will include a sum method:
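A sketch of this subclass might look like the following; the body of the sum() method shown here is one reasonable way to fold the items into a total:

```python
class Summable_List(list):
    def sum(self):
        s = 0
        for v in self:
            s += v  # accumulate each item of the list
        return s
```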
If we initialize the variable m with an instance of the Summable_List() class instead of the list() class, we can use the m.sum() method instead of the sum(m) function. This kind of change can help to clarify the idea that Python is truly and completely object-oriented. The use of prefix function notation is purely syntactic sugar.
All three of these examples rely on variables to explicitly show the state of the program. They rely on the assignment statements to change the values of the variables and advance the computation toward completion. We can insert assert statements throughout these examples to demonstrate that the expected state changes are implemented properly.
The point is not that imperative programming is broken in some way. The point is that functional programming leads to a change in viewpoint, which can, in many cases, be very helpful. We'll show a function view of the same algorithm. Functional programming doesn't make this example dramatically shorter or faster.
In a functional sense, the sum of the multiples of three and five can be defined in two parts:
- The sum of a sequence of numbers
- A sequence of values that pass a simple test condition, for example, being multiples of three and five
The sum of a sequence has a simple, recursive definition:
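A sketch of this definition, using the sumr() name that appears in the examples that follow, might be:

```python
def sumr(seq):
    if len(seq) == 0:
        return 0                       # base case: an empty sequence sums to 0
    return seq[0] + sumr(seq[1:])      # recursive case: first value plus the sum of the rest
```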
We've defined the sum of a sequence in two cases: the base case states that the sum of a zero-length sequence is 0, while the recursive case states that the sum of a sequence is the first value plus the sum of the rest of the sequence. Since the recursive definition depends on a shorter sequence, we can be sure that it will (eventually) devolve to the base case.
Here are some examples of how this function works:
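An interactive session along these lines might look like this; the particular values are arbitrary:

```python
>>> sumr([7, 11])
18
>>> 7 + sumr([11])
18
>>> 18 + sumr([])
18
```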
The first example computes the sum of a list with multiple items. The second example shows how the recursion rule works by adding the first item, seq[0], to the sum of the remaining items, sumr(seq[1:]). Eventually, the computation of the result involves the sum of an empty list, which is defined as zero.
The + operator on the last line of the preceding example and the initial value of 0 in the base case characterize the equation as a sum. If we change the operator to * and the initial value to 1, it would just as easily compute a product. We'll return to this simple idea of generalization in the following chapters.
Similarly, a sequence of values can have a simple, recursive definition, as follows:
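A sketch of this recursive until() function, with the parameters n, filter_func, and v described next, might be:

```python
def until(n, filter_func, v):
    if v == n:
        return []                                   # base case: reached the upper bound
    if filter_func(v):
        return [v] + until(n, filter_func, v + 1)   # keep v, then the remaining values
    else:
        return until(n, filter_func, v + 1)         # skip v, use only the remaining values
```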
In this function, we've compared a given value, v, against the upper bound, n. If v reaches the upper bound, the resulting list must be empty. This is the base case for the given recursion.
There are two more cases defined by the given filter_func() function. If the value of v is passed by the filter_func() function, we'll create a very small list, containing one element, and append the remaining values of the until() function to this list. If the value of v is rejected by the filter_func() function, this value is ignored and the result is simply defined by the remaining values of the until() function.
We can see that the value of v will increase from an initial value until it reaches n, assuring us that we'll reach the base case soon.
Here's how we can use the until() function to generate the multiples of three and five. First, we'll define a handy lambda object to filter values:
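A sketch of this lambda, using the mult_3_5 name that appears later in this section, might be:

```python
mult_3_5 = lambda x: x % 3 == 0 or x % 5 == 0
```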
(We will use lambdas to emphasize succinct definitions of simple functions. Anything more complex than a one-line expression requires the def statement.)
We can see how this lambda works from the interactive prompt in the following example:
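For example, given the lambda sketched above:

```python
>>> mult_3_5(3)
True
>>> mult_3_5(4)
False
>>> mult_3_5(5)
True
```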
This function can be used with the until() function to generate a sequence of values, which are multiples of three and five.
The until() function for generating a sequence of values works as follows:
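Given the sketches above, the result might look like this; note that the starting value of 0 is itself a multiple of three, so it appears in the output:

```python
>>> until(10, mult_3_5, 0)
[0, 3, 5, 6, 9]
```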
We can use our recursive sumr() function to compute the sum of this sequence of values. The functions sumr() and until() are defined as simple recursive functions, and the mult_3_5() lambda provides the filter; the values are computed without resorting to intermediate variables to store the state of the computation.
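Putting the sketches together:

```python
>>> sumr(until(10, mult_3_5, 0))
23
```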
We'll return to the ideas behind this purely functional, recursive definition in several places. It's important to note here that many functional programming language compilers can optimize these kinds of simple recursive functions. Python can't do the same optimizations.
We'll continue this example with a mostly functional version of the previous example to compute the sum of multiples of three and five. Our hybrid functional version might look like the following:
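One way to write this hybrid version, matching the generator expression discussed next, is the following sketch:

```python
print(sum(n for n in range(1, 10) if n % 3 == 0 or n % 5 == 0))
```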
We've used nested generator expressions to iterate through a collection of values and compute the sum of these values. The range(1, 10) object is iterable and, consequently, behaves like a kind of generator expression; it generates the sequence of values 1, 2, 3, ..., 9. The more complex expression n for n in range(1, 10) if n%3==0 or n%5==0 is also an iterable expression. It produces the set of values 3, 5, 6, and 9. The variable n is bound to each value, more as a way of expressing the contents of the set than as an indicator of the state of the computation. The sum() function consumes the iterable expression, creating a final object, 23.
Note
The bound variable doesn't exist outside the generator expression. The variable n isn't visible elsewhere in the program.
The if clause of the expression can be extracted into a separate function, allowing us to easily repurpose this for other rules. We could also use a higher-order function named filter() instead of the if clause of the generator expression. We'll save this for Chapter 5, Higher-Order Functions.
The variable n in this example isn't directly comparable to the variable n in the first two imperative examples. A for statement (outside a generator expression) creates a proper variable in the local namespace. The generator expression does not create a variable in the same way as a for statement does:
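For example, in a fresh interactive session, the generator expression leaves no variable n behind; the traceback details may differ slightly:

```python
>>> sum(n for n in range(1, 10) if n % 3 == 0 or n % 5 == 0)
23
>>> n
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'n' is not defined
```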
The variable n doesn't exist outside the binding in the generator expression. It doesn't define the state of the computation.
In some cases, it might help to look at the intermediate objects as a history of the computation. What's important is that the history of a computation is not fixed. When the operations are commutative and associative, changes to the order of evaluation might lead to different intermediate objects being created. This can offer performance improvements with no changes to the correctness of the results.
Consider this expression:
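A simple sum of four integers illustrates the point:

```python
>>> 1 + 2 + 3 + 4
10
```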
We are looking at a variety of potential computation histories with the same result. Because the + operator is commutative and associative, there are a large number of candidate histories that lead to the same result.
Of the candidate sequences, there are two important alternatives, which are as follows:
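The two groupings look like this:

```python
>>> ((1 + 2) + 3) + 4
10
>>> 1 + (2 + (3 + 4))
10
```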
In the first case, we fold in values working from left to right. This is the way Python works implicitly. Intermediate objects 3 and 6 are created as part of this evaluation.
In the second case, we fold from right to left. In this case, intermediate objects 7 and 9 are created. In the case of simple integer arithmetic, the two results have identical performance; there's no optimization benefit.
When we work with something like appending items to a list, we might see some optimization improvements when we change the association rules.
Here's a simple example:
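A sketch of how we might compare two groupings of building a list with the + operator, using the standard timeit module; actual timings will vary from machine to machine:

```python
import timeit

# Left-to-right association of the list concatenations
print(timeit.timeit("((([]+[1])+[2])+[3])+[4]"))

# Right-to-left association of the same concatenations
print(timeit.timeit("[]+([1]+([2]+([3]+[4])))"))
```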
In this case, there's some benefit to working from left to right.
What's important for functional design is the idea that the + operator (or the add() function) can be used in any order to produce the same results. The + operator has no hidden side effects that restrict the way this operator can be used.
When we use Python for functional programming, we embark on a path that will involve a hybrid that's not strictly functional. Python is not Haskell, OCaml, or Erlang. For that matter, our underlying processor hardware is not functional; it's not even strictly object-oriented. CPUs are generally procedural.
All programming languages rest on abstractions, libraries, frameworks and virtual machines. These abstractions, in turn, may rely on other abstractions, libraries, frameworks and virtual machines. The most apt metaphor is this: the world is carried on the back of a giant turtle. The turtle stands on the back of another giant turtle. And that turtle, in turn, is standing on the back of yet another turtle. It's turtles all the way down.
- Anonymous
There's no practical end to the layers of abstractions.
More importantly, the presence of abstractions and virtual machines doesn't materially change our approach to designing software to exploit the functional programming features of Python.
Even within the functional programming community, there are both purer and less pure functional programming languages. Some languages make extensive use of monads to handle stateful things such as file system input and output. Other languages rely on a hybridized environment that's similar to the way we use Python. In Python, software can be generally functional, with carefully chosen procedural exceptions.
Our functional Python programs will rely on the following three stacks of abstractions:
- Our applications will be functions—all the way down—until we hit the objects
- The underlying Python runtime environment that supports our functional programming is objects—all the way down—until we hit the libraries
- The libraries that support Python are a turtle on which Python stands
The operating system and hardware form their own stack of turtles. These details aren't relevant to the problems we're going to solve.