Let's write some tests to verify each technique (simple recursive, memoized, and channeled) works properly. We'll use TDD to help us design and write better code.
The same or similar tests are run again, along with new test code for the next piece of the feature. The process is repeated as many times as necessary until each unit functions according to the desired specifications.
We can start by using a table of input values and their corresponding expected results to verify that the function under test is working properly:
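A minimal sketch of such a table-driven test follows. The actual fibTests table is defined in chapter1/_01_fib/ex1_test.go, as noted later in this section; the test name and the values shown here are illustrative only, assuming the indexing used in this chapter, where FibSimple(8) returns 13:
package fib

import "testing"

type fibTest struct {
    a        int
    expected int
}

// Illustrative values; assumes fib(1) = 0 and fib(2) = 1, so that FibSimple(8) == 13.
var fibTests = []fibTest{
    {1, 0}, {2, 1}, {3, 1}, {4, 2}, {5, 3}, {6, 5}, {7, 8}, {8, 13},
}

func TestFibSimple(t *testing.T) {
    for _, ft := range fibTests {
        if v := FibSimple(ft.a); v != ft.expected {
            t.Errorf("FibSimple(%d) returned %d, expected %d", ft.a, v, ft.expected)
        }
    }
}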
We use the range statement to iterate through the table, row by row, and check each calculated result (v := FibSimple(ft.a)) against the expected value (ft.expected) from that row. Only if there is a mismatch do we report the error.
In the first benchmark test, we examine the performance of computing the eighth number in the Fibonacci sequence. Note that we pass the -bench=. argument, which means run all benchmark tests. The ./... argument means to run all the tests in this directory and all the child directories as well:
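go test -bench=. ./...
A benchmark of the eighth number, mirroring the BenchmarkFibMemoized and BenchmarkFibChanneled functions shown later in this section, would presumably look something like the following sketch (the function name is assumed):
// Sketch only: modeled on the memoized and channeled benchmarks later in this section.
func BenchmarkFibSimple(b *testing.B) {
    fn := FibSimple
    for i := 0; i < b.N; i++ {
        _ = fn(8) // compute the eighth number in the sequence on every iteration
    }
}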
When we request the eighth number in the sequence, the simple recursive implementation runs faster than the memoized and channeled (optimized) versions: 213 ns/op compared to 1302 ns/op and 2224 ns/op, respectively. In fact, when the simple version only has to execute once, as it does for the first number in the sequence, it takes just 3.94 ns/op.
One very cool feature of Go's benchmark testing facility is that it is smart enough to figure out how many times to execute the function under test. The value of b.N will increase each time until the benchmark runner is satisfied with the stability of the benchmark. The faster the function runs under test, the more times the benchmark facility will run it. The more times the benchmark facility runs a function, the more accurate the performance metric, for example, 3.94 ns/op.
Take the FibSimple test, for example. When it is passed 1, only a single call is needed. Since that takes only 3.94 ns/op, we see it executed 10,000,000 times. However, when FibSimple is passed 42, it takes 2,509,110,502 ns to complete one operation, and the benchmark facility is smart enough to run it only once. That way, we can be assured that running benchmark tests is as accurate as possible and that they run within a reasonable time. How nice is that?
Since the FibSimple implementation is recursive and has not been optimized, we can test our assumption that the time it takes to calculate each successive number in the sequence grows exponentially. We do this using a common testing technique: an unexported helper function, benchmarkFibSimple, which the test driver never invokes directly; instead, we call it from a set of exported benchmark functions, each with a different input value:
func benchmarkFibSimple(i int, b *testing.B) {
    for n := 0; n < b.N; n++ {
        FibSimple(i)
    }
}

func BenchmarkFibSimple1(b *testing.B)  { benchmarkFibSimple(1, b) }
func BenchmarkFibSimple2(b *testing.B)  { benchmarkFibSimple(2, b) }
func BenchmarkFibSimple3(b *testing.B)  { benchmarkFibSimple(3, b) }
func BenchmarkFibSimple4(b *testing.B)  { benchmarkFibSimple(4, b) }
func BenchmarkFibSimple20(b *testing.B) { benchmarkFibSimple(20, b) }
func BenchmarkFibSimple42(b *testing.B) { benchmarkFibSimple(42, b) }
We benchmark the first four numbers in the sequence, then 20, and finally 42. Since it takes about 3 seconds for my computer to calculate the 42nd number in the sequence, I decided not to go any higher. There's no need to wait longer than that when we can easily see the exponential growth pattern without having to wait more than a minute for our results.
Our benchmark testing has proven that our simple recursive implementation of the Fibonacci sequence behaves as expected: its runtime grows exponentially, which equates to poor performance.
Let's look at a few ways to increase performance.
We have observed that our FibSimple implementation always returns the same result, given the same input(s), and that there are no side effects in the environment in which it runs. For example, if we pass FibSimple a value of 8, we know that the result will be 13 every time. We used this fact to leverage a caching technique called memoization to create the FibMemoized function.
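As an illustration only (not necessarily how FibMemoized or its MemoizeFcn helper is implemented in this chapter), memoizing the recurrence directly can be as simple as caching previously computed values in a map:
// Illustrative memoization sketch; assumes the same indexing as FibSimple,
// where fib(1) = 0 and fib(2) = 1, so that fib(8) == 13.
var fibCache = map[int]int{1: 0, 2: 1}

func fibMemo(n int) int {
    if v, ok := fibCache[n]; ok {
        return v // cache hit: no recursive calls needed
    }
    v := fibMemo(n-1) + fibMemo(n-2)
    fibCache[n] = v // remember the result for subsequent calls
    return v
}
Each value is computed at most once, turning the exponential recursion into linear work at the cost of the cache's memory.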
Now, let's write some tests to see how effective MemoizeFcn is.
Since our fibTests structure has been defined in another test in our package, in chapter1/_01_fib/ex1_test.go, we don't need to define it again. This way, we only define the test table once, and we can reuse it in subsequent Fibonacci function implementations to get a reasonable apples-to-apples comparison of each solution.
Here's the basic unit test for the FibMemoized function:
func TestMemoized(t *testing.T) {
    for _, ft := range fibTests {
        if v := FibMemoized(ft.a); v != ft.expected {
            t.Errorf("FibMemoized(%d) returned %d, expected %d", ft.a, v, ft.expected)
        }
    }
}
It won't report an error unless there is a bug in our code. That's one of the great things about running unit tests: you don't hear about them unless something breaks.
Note
We should write unit tests in order to:
- Ensure that what you implement meets your feature requirements
- Leverage testing to help you think about how best to implement your solution
- Produce quality tests that can be used in your continuous integration process
- Verify that your implementation meets interface requirements with other parts of your application
- Make developing integration tests easier
- Safeguard your work against other developers, who might implement a component that could break your code in production
Here are the benchmark tests. As before, in the FibSimple example, we examine the performance of computing the eighth number in the Fibonacci sequence:
func BenchmarkFibMemoized(b *testing.B) {
    fn := FibMemoized
    for i := 0; i < b.N; i++ {
        _ = fn(8)
    }
}
func benchmarkFibMemoized(i int, b *testing.B) {
    for n := 0; n < b.N; n++ {
        FibMemoized(i)
    }
}

func BenchmarkFibMemoized1(b *testing.B)  { benchmarkFibMemoized(1, b) }
func BenchmarkFibMemoized2(b *testing.B)  { benchmarkFibMemoized(2, b) }
func BenchmarkFibMemoized3(b *testing.B)  { benchmarkFibMemoized(3, b) }
func BenchmarkFibMemoized4(b *testing.B)  { benchmarkFibMemoized(4, b) }
func BenchmarkFibMemoized20(b *testing.B) { benchmarkFibMemoized(20, b) }
func BenchmarkFibMemoized42(b *testing.B) { benchmarkFibMemoized(42, b) }
As before, we carry out a test calling FibMemoized, using 1, 2, 3, 4, 20, and 42 as input.
Here's the complete test listing for the FibChanneled function:
package fib
import "testing"
func TestChanneled(t *testing.T) {
    for _, ft := range fibTests {
        if v := FibChanneled(ft.a); v != ft.expected {
            t.Errorf("FibChanneled(%d) returned %d, expected %d", ft.a, v, ft.expected)
        }
    }
}

func BenchmarkFibChanneled(b *testing.B) {
    fn := FibChanneled
    for i := 0; i < b.N; i++ {
        _ = fn(8)
    }
}

func benchmarkFibChanneled(i int, b *testing.B) {
    for n := 0; n < b.N; n++ {
        FibChanneled(i)
    }
}
func BenchmarkFibChanneled1(b *testing.B)  { benchmarkFibChanneled(1, b) }
func BenchmarkFibChanneled2(b *testing.B)  { benchmarkFibChanneled(2, b) }
func BenchmarkFibChanneled3(b *testing.B)  { benchmarkFibChanneled(3, b) }
func BenchmarkFibChanneled4(b *testing.B)  { benchmarkFibChanneled(4, b) }
func BenchmarkFibChanneled20(b *testing.B) { benchmarkFibChanneled(20, b) }
func BenchmarkFibChanneled42(b *testing.B) { benchmarkFibChanneled(42, b) }
We performed two optimizations on our original Fibonacci sequence logic: one using a caching technique and one using Go's concurrency features, and we wrote both implementations ourselves. More optimizations are possible, and in some cases optimization techniques can be combined to produce even faster code.
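For reference, one common way to combine goroutines and channels for this problem looks like the following sketch; it is illustrative only and not necessarily how this chapter's FibChanneled is written:
// Illustrative sketch: a goroutine streams Fibonacci numbers over a channel
// until told to quit. Assumes the same indexing used above (fib(8) == 13).
func fibGenerator(ch chan<- int, quit <-chan struct{}) {
    a, b := 0, 1
    for {
        select {
        case ch <- a:
            a, b = b, a+b // advance to the next number in the sequence
        case <-quit:
            return
        }
    }
}

func fibViaChannel(n int) int {
    ch := make(chan int)
    quit := make(chan struct{})
    go fibGenerator(ch, quit)
    var v int
    for i := 0; i < n; i++ {
        v = <-ch // receive the next Fibonacci number
    }
    close(quit) // stop the generator goroutine
    return v
}
The benchmark numbers quoted earlier show why this approach is not automatically a win: channel operations carry their own overhead, which is why the channeled version was the slowest of the three for the eighth number.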
What if all we had to do was write a simple recursive version and then when we compiled our Go code, the Go compiler would automatically generate object code with performance optimizations?
Note
Lazy evaluation: An evaluation strategy that delays the evaluation of an expression until its value is needed, which can improve performance by avoiding needless calculations.
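Go evaluates eagerly, but the idea can be emulated with a closure. Here's a minimal sketch using a hypothetical lazyInt helper that defers a computation until its result is first requested and caches it afterward:
// lazyInt is a hypothetical helper: compute runs only if and when the returned
// function is first called, and the result is cached for later calls.
func lazyInt(compute func() int) func() int {
    cached, done := 0, false
    return func() int {
        if !done {
            cached = compute() // evaluated lazily, at most once
            done = true
        }
        return cached
    }
}
A caller can wrap an expensive calculation in lazyInt and pay for it only if, and when, the value is actually used.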