While writing concurrent/parallel code, handling state is difficult in an imperative program (something you will have seen by now). Modern languages and platforms borrow idioms and practices from the functional programming community that enable better state management and facilitate strong concurrency models. In this chapter, we will see what those are and try to understand, through code samples, how to best leverage some of those features to our advantage (striving for full coverage would stretch this chapter into an entire book). We will also see how C# has evolved as a language to bring the best of both worlds (imperative and functional), and to help you apply functional thinking to model real-world scenarios. This chapter will also cover Language Integrated Query (LINQ) as a mechanism for writing compositional code. Through this journey, we will uncover some good design practices, leveraging the functional constructs...
Functional programming is a programming paradigm that involves composing algorithms to be treated along the same lines as mathematical function evaluation. This implies that the output of these functions depends purely on the inputs provided. Moreover, any data structures the algorithm needs to create the output are transient, with a lifetime confined to the function scope, thus helping avoid state mutation. It is also a powerful declarative programming paradigm, which leverages expressions, in place of procedural in-line statements, for readable code.
Let's delve a bit deeper to understand the consequences of being functional, as illustrated by the definitions given in the preceding section. Now, when we try to relate functions from both these worlds (mathematical and imperative programming), we see a strong disparity, as the latter mutates state with commands in the source language, thereby bringing in side effects (though desirable from an imperative programming standpoint). This violates one of the fundamental prerequisites of functional programming: referential transparency, the property that an expression always yields the same value and can therefore be replaced by that value without changing the program's behavior. With side effects, the same expression (when run at different times) yields different values depending on the executing program's state. This affects the predictability of the program, which is definitely not desirable. On the other hand, a pure function in the mathematical world (say f, one without such side effects) would yield the same result f(x) each time for the same value of x, say sin(x). This characteristic is attributed to...
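To make the contrast concrete, here is a minimal sketch (the names `Square`, `NextSquare`, and the `counter` field are illustrative, not from the text) contrasting a pure function with one that reads and mutates external state:

```csharp
using System;

class ReferentialTransparency
{
    // Pure: the output depends only on the input; calling it never changes program state.
    public static int Square(int x) => x * x;

    // Impure: the output depends on mutable state outside the function,
    // so the same call can yield different values over time.
    static int counter = 0;
    public static int NextSquare(int x) => x * x + counter++;

    static void Main()
    {
        Console.WriteLine(Square(4));     // always 16 - referentially transparent
        Console.WriteLine(NextSquare(4)); // 16 on the first call
        Console.WriteLine(NextSquare(4)); // 17 on the second - not referentially transparent
    }
}
```

Because `Square(4)` can always be replaced by `16`, the compiler and the reader can both rely on it; no such substitution is safe for `NextSquare`.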
Functions are the fundamental processing units in functional programming, and since they can be used like any other value, functions can be stored in variables, properties, objects, and collections. The term first-class function was coined by Christopher Strachey.
Note
Higher-order functions are functions that can either take other functions as arguments or return them as results.
C# supports higher-order functions (both named and anonymous), which are treated like ordinary variables with a function type.
C# provides the capability to define both generic functions and strongly typed delegates. The delegate type carries the method signature of a function prototype, and its instances become function pointers. Thus, you can manipulate a function variable whose function method signature matches with that of the function prototype.
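As a brief sketch of the two mechanisms just described (the delegate and method names here are illustrative): a custom delegate type carries a function prototype's signature, while the generic `Func<,>` types let us write higher-order functions directly:

```csharp
using System;

class DelegatesDemo
{
    // A strongly typed delegate: any method whose signature matches
    // this prototype (int -> int) can be assigned to it.
    delegate int UnaryOp(int x);

    // A higher-order function: takes a function and returns a new function
    // that applies it twice.
    public static Func<int, int> Twice(Func<int, int> f) => x => f(f(x));

    public static int Increment(int x) => x + 1;

    static void Main()
    {
        UnaryOp op = Increment;                   // delegate instance as a function pointer
        Func<int, int> addTwo = Twice(Increment); // function built from another function

        Console.WriteLine(op(41));     // 42
        Console.WriteLine(addTwo(40)); // 42
    }
}
```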
In addition to generic function types, C# 2.0 introduced anonymous methods/delegates and iterators (with the yield...
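As a small illustration of the iterator feature mentioned above (the `Evens` method is an assumed example, not from the text), `yield return` lets a method produce a lazy sequence, one element per request:

```csharp
using System;
using System.Collections.Generic;

class IteratorDemo
{
    // An iterator: 'yield return' produces elements lazily; the loop body
    // runs only as far as the consumer pulls elements.
    public static IEnumerable<int> Evens(int count)
    {
        for (int i = 0; i < count; i++)
            yield return 2 * i;
    }

    static void Main()
    {
        foreach (int n in Evens(4))
            Console.Write(n + " "); // 0 2 4 6
    }
}
```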
λ-calculus is a mathematical formalism for denoting computation in an abstract form using functions. It brings forth a formal notation and transformation rules for the representation (function abstraction) and manipulation (function application) of lambda terms. The key to this formalism is variable binding and substitution. Alonzo Church created the lambda calculus as part of his investigations into the foundations of mathematical logic.
The λ-calculus provides a simple semantics for computation using computable functions, based on the Church-Turing thesis (readers are urged to take a look at the history of the lambda calculus, its motivation, and its mathematical implications at a slightly deeper level to better appreciate this formalism and its consequences, rather than being sheer consumers), by incorporating the following simplifications and concepts:
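The two core operations, abstraction and application, map directly onto C# lambda expressions. A loose illustration (the variable names are ours):

```csharp
using System;

class LambdaCalculusDemo
{
    static void Main()
    {
        // Function abstraction: the term (λx. x + 1) becomes a C# lambda.
        Func<int, int> succ = x => x + 1;

        // Function application: (λx. x + 1) 5 reduces, by substitution, to 6.
        Console.WriteLine(succ(5)); // 6

        // Currying: λx. λy. x + y - a one-argument function that
        // returns another one-argument function.
        Func<int, Func<int, int>> add = x => y => x + y;
        Console.WriteLine(add(2)(3)); // 5
    }
}
```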
Recursion is no alien concept to any seasoned programmer. In functional programming, recursion is leveraged to accomplish iteration/looping. A recursive function invokes itself, performing an operation repeatedly until the base case is reached. Tail-call-based recursion is a common pattern. Recursion typically involves adding stack frames to the call stack, thus growing the stack; you can run out of stack space during deep recursions. The compiler does its own share of optimizations (predominantly tail-call optimization/elimination) to conserve stack space and improve throughput. But the functional world (with its first-class and higher-order functions) gives us more flexibility to wire such optimizations into our recursive functions. Let's see how this is achieved with the following factorial example:
//Regular Recursion
Func<int, int> factorial = (n) =>
{
    Func<int, int> fIterator = null; //Work-around for ...
    fIterator = x => x <= 1 ? 1 : x * fIterator(x - 1);
    return fIterator(n);
};
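For comparison, a tail-recursive formulation threads an accumulator through the calls so that the recursive call is the last operation performed, leaving no pending multiplication on the stack frame. This is a sketch of the idea only; note that the C# compiler does not guarantee tail-call elimination on all platforms, so deep recursion can still overflow the stack:

```csharp
using System;

class TailFactorial
{
    // Tail-recursive: the recursive call is the final operation, so the
    // partial product travels in 'acc' instead of pending stack frames.
    public static int FactorialAcc(int n, int acc = 1) =>
        n <= 1 ? acc : FactorialAcc(n - 1, acc * n);

    static void Main()
    {
        Console.WriteLine(FactorialAcc(5)); // 120
    }
}
```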
Now that we have taken a detailed look at the core functional programming constructs, it's time to indulge in some power play (with code, of course). Let's learn to play the game with some hardcore sample programs.
This was inspired by Peter Norvig's (former Research Director at Google) technical blog, How to Write a Spelling Corrector. What is interesting is the way the solution has been envisaged. The solution employs probability theory at its core to find all possible corrections for a word of length n, by accounting for user errors in the form of typos arising from omitted characters (deletes), misplaced characters (replaces and transposes), and inserted characters (inserts).
You can refer to this technical blog on How to Write a Spelling Corrector by Peter Norvig for the following:
Note
For a word of length n, there will be n deletions, n-1 transpositions, 26n replacements, and 26(n+1) insertions.
To have a fair shot at determining the corrections, we find all possible corrections...
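The candidate-generation step described above can be sketched in C# as follows. This is a hedged adaptation, not Norvig's original Python: the type, method, and variable names (`SpellingEdits`, `Edits1`, `Alphabet`) are ours, and only the edit-distance-1 enumeration is shown:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class SpellingEdits
{
    const string Alphabet = "abcdefghijklmnopqrstuvwxyz";

    // All candidate corrections at edit distance 1 from the given word:
    // n deletes, n-1 transposes, 26n replaces, and 26(n+1) inserts.
    public static IEnumerable<string> Edits1(string word)
    {
        // Split the word at every position into a (left, right) pair.
        var splits = Enumerable.Range(0, word.Length + 1)
                               .Select(i => (L: word.Substring(0, i), R: word.Substring(i)))
                               .ToList();

        var deletes    = splits.Where(s => s.R.Length > 0)
                               .Select(s => s.L + s.R.Substring(1));
        var transposes = splits.Where(s => s.R.Length > 1)
                               .Select(s => s.L + s.R[1] + s.R[0] + s.R.Substring(2));
        var replaces   = splits.Where(s => s.R.Length > 0)
                               .SelectMany(s => Alphabet.Select(c => s.L + c + s.R.Substring(1)));
        var inserts    = splits.SelectMany(s => Alphabet.Select(c => s.L + c + s.R));

        return deletes.Concat(transposes).Concat(replaces).Concat(inserts).Distinct();
    }

    static void Main()
    {
        Console.WriteLine(SpellingEdits.Edits1("the").Count());
    }
}
```

Each of the four LINQ pipelines maps one error category from the text onto a projection over the same list of splits, which keeps the code close to the mathematical description.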
Before we conclude this chapter, we would like to give you a rough idea about how Language Integrated Query (LINQ) works under the hood, in a schematic manner. As we know, LINQ is a declarative language embedded inside a multi-paradigm language. The primary advantage of LINQ is its alignment with the rich type system of C#. Syntactically, LINQ is very similar to the SQL language, and its evaluation model is very similar to that of an SQL engine. As an example, let us explore a LINQ query that retrieves information regarding a set of employees by querying the Employee and Department tables. The query returns an anonymous type consisting of the employee name, the department name, and the location of the employee. We are using the comprehension syntax in this particular example:
var empInfo = from emp in db.Employee
              join dept in db.Department on emp.deptid equals dept.nid
              select new { EmployeeName = emp.Name, DepartmentName = dept.Name, emp.Location };
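The compiler translates such a comprehension query into chained extension-method calls. A rough, self-contained illustration of that translation is shown below; the in-memory `Employee`/`Department` classes and their sample values are our stand-ins for the database tables, and the `Join` call approximates what the query expression desugars to:

```csharp
using System;
using System.Linq;

class LinqDesugarDemo
{
    public class Employee { public string Name; public int deptid; public string Location; }
    public class Department { public int nid; public string Name; }

    static void Main()
    {
        // In-memory stand-ins for the db.Employee and db.Department tables.
        var employees = new[]
        {
            new Employee { Name = "Asha", deptid = 1, Location = "Pune" }
        };
        var departments = new[]
        {
            new Department { nid = 1, Name = "Engineering" }
        };

        // The comprehension query desugars (roughly) to a Join call:
        var empInfo = employees.Join(
            departments,
            emp  => emp.deptid,   // outer key selector
            dept => dept.nid,     // inner key selector
            (emp, dept) => new    // result selector (projection)
            {
                EmployeeName   = emp.Name,
                DepartmentName = dept.Name,
                emp.Location
            });

        foreach (var e in empInfo)
            Console.WriteLine($"{e.EmployeeName} - {e.DepartmentName} - {e.Location}");
    }
}
```

Seeing the method-call form makes the evaluation model explicit: `Join` builds a lookup over the inner sequence keys and streams matching pairs into the projection, much like a hash join in an SQL engine.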
While...
The functional programming model and its idioms help a programmer write better code for the many-core architecture of modern CPUs. The C# programming language and the .NET platform have incorporated FP constructs into the language to help write certain kinds of code in a functional manner. Mastery of lambda expressions and functions, type inference, expression trees, LINQ, and so on helps us structure our code better, if used judiciously, by mixing OOP and FP code. Mixing OOP and FP to write code is termed object/functional programming, and most modern-day languages, such as C#, Scala, Java (after version 8), Ruby, and so on, support this idiom. In the next chapter, we will implement some GoF design patterns using object/functional programming, and also pick up some OOP/FP programming idioms such as map/reduce.