Developer Environment for Go
Go is a modern programming language built for 21st-century application development. Hardware and technology have advanced significantly over the past decade, and most other languages do not take advantage of these advances. As we shall see throughout the book, Go allows us to build network applications that take advantage of the concurrency and parallelism made available by multicore systems.
In this chapter, we will look at some of the topics required to work through the rest of the book, such as:
- Go configuration—GOROOT, GOPATH, and so on.
- Go package management
- Project structure used throughout the book
- Container technology and how to use Docker
- Writing tests in Go
GOROOT
In order to run or build a Go project, we need to have access to the Go binary and its libraries. A typical installation of Go (instructions can be found at https://golang.org/dl/) on Unix-based systems will place the Go binary at /usr/bin/go. However, it is possible to install Go on a different path. In that case, we need to set the GOROOT environment variable to point to our Go installation path and also append it to our PATH environment variable.
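For example, if Go were installed under /opt/go (a hypothetical path used purely for illustration) rather than the default location, we would configure the shell as follows:

$ export GOROOT=/opt/go
$ export PATH=$PATH:$GOROOT/bin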
GOPATH
Programmers tend to work on many projects, and it is good practice to keep source code separate from non-programming files. It is a common practice to have the source code in a separate location or workspace. Every programming language has its own conventions for how language-related projects should be set up, and Go is no exception.
GOPATH is the most important environment variable the developer has to set. It tells the Go compiler where to find the source code for the project and its dependencies. There are conventions within the GOPATH that need to be followed, and they concern folder hierarchies.
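A minimal GOPATH setup might look like the following; the workspace location $HOME/go-workspace is only an example, and any directory will do:

$ mkdir -p $HOME/go-workspace
$ export GOPATH=$HOME/go-workspace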
src/
This is the directory that will contain the source code of our projects and their dependencies. In general, we want our source code to have version control and be hosted on the cloud. It would also be great if we or anyone else could easily use our project. This requires a little extra setup on our part.
Let's imagine that our project is hosted at http://git-server.com/user-name/my-go-project. We want to clone and build this project on our local system. To make it work properly, we need to clone it to $GOPATH/src/git-server.com/user-name/my-go-project. When we build a Go project with dependencies for the first time, we will see that the src/ folder has many directories and subdirectories that contain the dependencies of our project.
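For instance, cloning the hypothetical project above into its qualified path might look like this:

$ mkdir -p $GOPATH/src/git-server.com/user-name
$ cd $GOPATH/src/git-server.com/user-name
$ git clone https://git-server.com/user-name/my-go-project.git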
pkg/
Go is a compiled programming language; we have the source code and code for the dependencies that we want to use in our project. In general, every time we build a binary, the compiler has to read the source code of our project and dependencies and then compile it to machine code. Compiling unchanged dependencies every time we compile our main program would lead to a very slow build process. This is the reason that object files exist; they allow us to compile dependencies into reusable machine code that can be readily included in our Go binary.
These object files are stored in $GOPATH/pkg; they follow a directory structure similar to that of src/, except that they are within a subdirectory. These directories tend to follow the naming pattern of <OS>_<CPU-Architecture>, because we can build executable binaries for multiple systems:
$ tree $GOPATH/pkg
pkg
└── linux_amd64
    ├── github.com
    │   ├── abbot
    │   │   └── go-http-auth.a
    │   ├── dimfeld
    │   │   └── httppath.a
    │   ├── oklog
    │   │   └── ulid.a
    │   ├── rcrowley
    │   │   └── go-metrics.a
    │   ├── sirupsen
    │   │   └── logrus.a
    │   ├── sony
    │   │   └── gobreaker.a
    └── golang.org
        └── x
            ├── crypto
            │   ├── bcrypt.a
            │   ├── blowfish.a
            │   └── ssh
            │       └── terminal.a
            ├── net
            │   └── context.a
            └── sys
bin/
Go compiles and builds our projects into executable binaries and places them in this directory. Depending on the build specs, they might be executable on your current system or on other systems. In order to use the binaries available in the bin/ directory, we need to set the GOBIN environment variable to $GOPATH/bin.
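For example, assuming the workspace from earlier, we could make the compiled binaries directly runnable from the shell like so:

$ export GOBIN=$GOPATH/bin
$ export PATH=$PATH:$GOBIN
$ go install git-server.com/user-name/my-go-project
$ my-go-project # Runs the binary installed into $GOBIN.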
Package management
In the days of yore, all programs were written from scratch; every utility function and every library needed to run the code had to be written by hand. Nowadays, we don't want to deal with such low-level details on a regular basis; it would be unimaginable to write all the required libraries and utilities from scratch. Go comes with a rich standard library, which will be enough for most of our needs. However, it is possible that we might need a few extra libraries or features not provided by the standard library. Such libraries should be available on the internet, and we can download and add them to our project to start using them.
In the previous section, GOPATH, we discussed how all our projects are saved into qualified paths of the $GOPATH/src/git-server.com/user-name/my-go-project form. This is true for any and all dependencies we might have. There are multiple ways to handle dependencies in Go. Let's look at some of them.
go get
go get is the utility provided by the Go toolchain for package management. We can install a new package/library by running the following command:
$ go get git-server.com/user-name/library-we-need
This will download and build the source code and then install it as a binary executable (if it can be used as a standalone executable). The go get utility also installs all the dependencies required by the retrieved package.
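go get also accepts a -u flag, which updates the package and its dependencies to their latest versions instead of reusing what is already on disk:

$ go get -u git-server.com/user-name/library-we-need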
glide
Glide is one of the most widely used package management tools in the Go community. It addresses the limitations of go get, but it needs to be installed manually by the developer. The following is a simple way to install and use glide:
$ curl https://glide.sh/get | sh
$ mkdir new-project && cd new-project
$ glide create
$ glide get github.com/last-ent/skelgor # A helper project to generate project skeleton.
$ glide install # In case any dependencies or configuration were manually added.
$ glide up # Update dependencies to latest versions of the package.
$ tree
.
├── glide.lock
├── glide.yaml
└── vendor
    └── github.com
        └── last-ent
            └── skelgor
                ├── LICENSE
                ├── main.go
                └── README.md
In case you do not wish to install glide via curl and sh, other options are available and described in more detail on the project page at https://github.com/masterminds/glide.
go dep
go dep is a new dependency management tool being developed by the Go community. It requires Go 1.7 or newer to compile, and it is ready for production use. However, it is still undergoing changes and hasn't yet been merged into the official Go toolchain.
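A typical workflow with dep, assuming it has been installed, might look like the following sketch; dep init generates the Gopkg.toml manifest and Gopkg.lock lock files, and dep ensure syncs the vendor/ directory with them:

$ cd $GOPATH/src/git-server.com/user-name/my-go-project
$ dep init # Creates Gopkg.toml, Gopkg.lock, and vendor/.
$ dep ensure # Installs the dependencies described in Gopkg.toml.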
Structuring a project
A project might have more than just the source code for the project, for example, configuration files and project documentation. Depending upon preferences, the way the project is structured can drastically change. However, the most important thing to remember is that the entry point to the whole program is through the main function, which is implemented within main.go as a convention.
The application we will be building in this book will have the following initial structure:
$ tree
.
├── common
│   ├── helpers.go
│   └── test_helpers.go
└── main.go
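As a rough sketch of how these files fit together (the helper function Greet shown here is hypothetical and purely for illustration), main.go acts as the entry point and imports the common package:

// main.go (illustrative sketch, not the book's actual code)
package main

import (
    "fmt"

    "github.com/last-ent/distributed-go/common"
)

func main() {
    // Greet is a hypothetical helper defined in common/helpers.go.
    fmt.Println(common.Greet("Gopher"))
}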
Working with the book's code
The source code discussed throughout the book can be obtained in two ways:
- Using go get -u github.com/last-ent/distributed-go
- Downloading the code bundle from the website and extracting it to $GOPATH/src/github.com/last-ent/distributed-go
The code for the complete book should now be available at $GOPATH/src/github.com/last-ent/distributed-go, and the code specific to each chapter will be available in that particular chapter's directory.
For example,
Code for Chapter 1 -> $GOPATH/src/github.com/last-ent/distributed-go/chapter1
Code for Chapter 2 -> $GOPATH/src/github.com/last-ent/distributed-go/chapter2
And so on.
Whenever we discuss code in any particular chapter, it is implied that we are in the respective chapter's folder.
Containers
Throughout the book, we will be writing Go programs that will be compiled into binaries and run directly on our system. However, in later chapters we will be using docker-compose to build and run multiple Go applications. These applications can run without any real problem on our local system; however, our ultimate goal is to be able to run these programs on servers and to be able to access them over the internet.
During the 1990s and early 2000s, the standard way to deploy applications to the internet was to get a server instance, copy the code or binary onto the instance, and then start the program. This worked great for a while, but soon complications began to arise. Here are a few of them:
- Code that worked on the developer's machine might not work on the server.
- Programs that ran perfectly on a server instance might fail upon applying the latest patch to the server's OS.
- For every new instance added as part of a service, various installation scripts had to be run to bring the new instance on par with all the other instances. This could be a very slow process.
- Extra care had to be taken to ensure that the new instance and all the software versions installed on it were compatible with the APIs being used by our program.
- It was also important to ensure that all config files and important environment variables were copied to the new instance; otherwise, the application might fail with little or no clue.
- Usually, the versions of the program that ran on the local system, the test system, and the production system were all configured differently, which meant it was possible for our application to fail on one of the three types of systems. If such a situation occurred, we would end up spending extra time and effort trying to figure out whether the issue was specific to one particular instance, one particular system, and so on.
It would be great if we could prevent such situations from arising in a sensible manner. Containers try to solve this problem using OS-level virtualization. What does this mean?
All programs and applications are run in a section of memory known as user space. This allows the operating system to ensure that a program is not able to cause major hardware or software issues. This allows us to recover from any program crashes that might occur in the user space applications.
The real advantage of containers is that they allow us to run applications in isolated user spaces, and we can even customize the following attributes of user spaces (a brief example follows the list):
- Connected devices such as network adapters and TTY
- CPU and RAM resources
- Files and folders accessible from host OS
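As a quick preview of what such customization looks like in practice with Docker, which we introduce shortly, the following invocation (my-image is a hypothetical image name) caps the container at one CPU and 512 MB of RAM and mounts a single host folder:

$ docker run --cpus=1 --memory=512m -v $HOME/data:/data my-image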
However, how does this help us solve the problems we stated earlier? For that, let's take a deeper look at Docker.
Docker
Modern software development makes extensive use of containers for product development and product deployment to server instances. Docker is a container technology promoted by Docker, Inc (https://www.docker.com), and as of this writing, it is the most predominantly used container technology. The other major alternative is rkt developed by CoreOS (https://coreos.com/rkt), though in this book, we will only be looking at Docker.
Docker versus Virtual Machine (VM)
Looking at the description of Docker so far, we might wonder if it is yet another Virtual Machine. However, this is not the case, because a VM requires us to run a complete guest OS on top of our machine, or hypervisor, as well as all the required binaries. In the case of Docker, we use OS level virtualization, which allows us to run our containers in isolated user spaces.
The biggest advantage of a VM is that we can run different types of OSes on a system, for example, Windows, FreeBSD, and Linux. However, in the case of Docker, we can run any flavor of Linux, and the only limitation is that it has to be Linux.
The biggest advantage of a Docker container is that it runs natively on Linux as a discrete process, making it lightweight and unaware of all the capabilities of the host OS.
Understanding Docker
Before we start using Docker, let's take a brief look at how Docker is meant to be used, how it is structured, and what the major components of the complete system are.
The following list should help us understand the architecture of the Docker pipeline:
- Dockerfile: It consists of instructions on how to build an image that runs our program.
- Docker client: This is a command-line program used to interact with the Docker daemon.
- Docker daemon: This is the daemon application that listens for commands to manage building or running containers and pushing images to a Docker registry. It is also responsible for configuring container networks, volumes, and so on.
- Docker images: Docker images contain all the steps necessary to build a container binary that can be executed on any Linux machine with Docker installed.
- Docker registry: The Docker registry is responsible for storing and retrieving the Docker images. We can use a public Docker registry or a private one. Docker Hub is used as the default Docker registry.
- Docker Container: The Docker container is different from the Container we have been discussing so far. A Docker container is a runnable instance of a Docker image. A Docker container can be created, started, stopped, and so on.
- Docker API: The Docker client we discussed earlier is a command-line interface to the Docker API. This means that the Docker daemon need not be running on the same machine as the Docker client. The default setup that we will be using throughout the book talks to the Docker daemon on the local system using UNIX sockets or a network interface.

Testing Docker setup
Let's ensure that our Docker setup works perfectly. For our purpose, Docker Community Edition should suffice (https://www.docker.com/community-edition). Once we have it installed, we will check if it works by running a few basic commands.
Let's start by checking what version we have installed:
$ docker --version
Docker version 17.12.0-ce, build c97c6d6
Let's try to dig deeper into details about our Docker installation:
$ docker info
Containers: 38
Running: 0
Paused: 0
Stopped: 38
Images: 24
Server Version: 17.12.0-ce
Let's try to run a Docker image. If you remember the discussion regarding the Docker registry, you know that we do not need to build a Docker image using a Dockerfile in order to run a Docker container. We can directly pull an image from Docker Hub (the default Docker registry) and run it as a container:
$ docker run docker/whalesay cowsay Welcome to GopherLand!
Unable to find image 'docker/whalesay:latest' locally
Trying to pull repository docker.io/docker/whalesay ...
sha256:178598e51a26abbc958b8a2e48825c90bc22e641de3d31e18aaf55f3258ba93b: Pulling from docker.io/docker/whalesay
e190868d63f8: Pull complete
909cd34c6fd7: Pull complete
0b9bfabab7c1: Pull complete
a3ed95caeb02: Pull complete
00bf65475aba: Pull complete
c57b6bcc83e3: Pull complete
8978f6879e2f: Pull complete
8eed3712d2cf: Pull complete
Digest: sha256:178598e51a26abbc958b8a2e48825c90bc22e641de3d31e18aaf55f3258ba93b
Status: Downloaded newer image for docker.io/docker/whalesay:latest
 ________________________
< Welcome to GopherLand! >
 ------------------------
    \
     \
      \
                    ##        .
              ## ## ##       ==
           ## ## ## ##      ===
       /""""""""""""""""___/ ===
  ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~
       \______ o          __/
        \    \          __/
          \____\________/
The preceding command could also have been executed as two explicit steps, as shown here, though merely using docker run ..., as before, is more convenient, since it pulls the image automatically when it is missing:
$ docker pull docker/whalesay && docker run docker/whalesay cowsay Welcome to GopherLand!
Once we have built up a set of images, we can list them all, and we can do the same for Docker containers:
$ docker images
REPOSITORY                  TAG     IMAGE ID      CREATED      SIZE
docker.io/docker/whalesay   latest  6b362a9f73eb  2 years ago  247 MB

$ docker container ls --all
CONTAINER ID  IMAGE            COMMAND                 CREATED        STATUS                    PORTS  NAMES
a1b1efb42130  docker/whalesay  "cowsay Welcome to..."  5 minutes ago  Exited (0) 5 minutes ago         frosty_varahamihira
Finally, it is important to note that as we keep using docker to build and run images and containers, we will start accumulating a backlog of "dangling" images, which we might never use again. However, they will end up eating storage space. In order to get rid of such "dangling" images, we can use the following command:
$ docker rmi --force $(docker images -q -f dangling=true)
# The inner command produces the list of hashes for all dangling images.
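Newer Docker releases also provide a dedicated subcommand that achieves the same cleanup:

$ docker image prune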
Dockerfile
Now that we have the basics of Docker under our belt, let's look at the Dockerfile we will be using as a template in this book.
Next, let's look at an example:
FROM golang:1.10
# The base image we want to use to build our docker image from.
# Since this image is specialized for golang it will have GOPATH = /go

ADD . /go/src/hello
# We copy files & folders from our system onto the docker image

RUN go install hello
# Next we can create an executable binary for our project with the command
# 'go install'

ENV NAME Bob
# Environment variable NAME will be picked up by the program 'hello'
# and printed to console.

ENTRYPOINT /go/bin/hello
# Command to execute when we start the container

# EXPOSE 9000
# Generally used for network applications. Allows us to connect to the
# application running inside the container from host system's localhost.
main.go
Let's create a bare-minimum Go program so that we can use it in the Docker image. It will take the NAME environment variable, print <NAME> is your uncle., and then quit:
package main

import (
    "fmt"
    "os"
)

func main() {
    fmt.Println(os.Getenv("NAME") + " is your uncle.")
}
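Note that os.Getenv returns an empty string when NAME is unset. A slightly more defensive variant, purely illustrative and not the version baked into our image, could fall back to a default using os.LookupEnv:

// main.go (illustrative variant with a default value)
package main

import (
    "fmt"
    "os"
)

func main() {
    name, ok := os.LookupEnv("NAME")
    if !ok {
        name = "Nobody" // Fallback when the variable is unset.
    }
    fmt.Println(name + " is your uncle.")
}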
Now that we have all the code in place, let's build the Docker image using the Dockerfile file:
$ cd docker
$ tree
.
├── Dockerfile
└── main.go"
0 directories, 2 files $ # -t tag lets us name our docker images so that we can easily refer to them $ docker build . -t hello-uncle Sending build context to Docker daemon 3.072 kB Step 1/5 : FROM golang:1.9.1 ---> 99e596fc807e Step 2/5 : ADD . /go/src/hello ---> Using cache ---> 64d080d7eb39 Step 3/5 : RUN go install hello ---> Using cache ---> 13bd4a1f2a60 Step 4/5 : ENV NAME Bob ---> Using cache ---> cc432fe8ffb4 Step 5/5 : ENTRYPOINT /go/bin/hello ---> Using cache ---> e0bbfb1fe52b Successfully built e0bbfb1fe52b $ # Let's now try to run the docker image. $ docker run hello-uncle Bob is your uncle. $ # We can also change the environment variables on the fly. $ docker run -e NAME=Sam hello-uncle Sam is your uncle.
Testing in Go
Testing is an important part of programming, whether it is in Go or in any other language. Go has a straightforward approach to writing tests, and in this section, we will look at some important tools to help with testing.
There are certain rules and conventions we need to follow to test our code. They can be listed as follows:
- Source files and associated test files are placed in the same package/folder
- The name of the test file for any given source file is <source-file-name>_test.go
- Test functions need to have the "Test" prefix, and the next character in the function name should be capitalized
In the remainder of this section, we will look at three files and their associated tests:
- variadic.go and variadic_test.go
- addInt.go and addInt_test.go
- nil_test.go (there isn't any source file for these tests)
Along the way, we will introduce any further concepts we might use.
variadic.go
In order to understand the first set of tests, we need to understand what a variadic function is and how Go handles it. Let's start with the definition:
Given that Go is a statically typed language, the only limitation imposed by the type system on a variadic function is that the indefinite number of arguments passed to it should be of the same data type. However, this does not prevent the function from also taking other, regularly typed parameters. The function receives the arguments as a slice of elements if any arguments are passed, or as nil when none are passed.
Let's look at the code to get a better idea:
// variadic.go

package main

func simpleVariadicToSlice(numbers ...int) []int {
    return numbers
}

func mixedVariadicToSlice(name string, numbers ...int) (string, []int) {
    return name, numbers
}

// Does not work.
// func badVariadic(name ...string, numbers ...int) {}
We use the ... prefix before the data type to define a function as variadic. Note that we can have only one variadic parameter per function, and it has to be the last parameter. We can see this error if we uncomment the line for badVariadic and try to test the code.
variadic_test.go
We would like to test the two valid functions, simpleVariadicToSlice and mixedVariadicToSlice, against the rules described in the previous section. However, for the sake of brevity, we will test the following:
- simpleVariadicToSlice: with no arguments, with three arguments, and also to look at how to pass a slice to a variadic function
- mixedVariadicToSlice: to accept a simple argument and a variadic argument
Let's now look at the code to test these two functions:
// variadic_test.go

package main

import "testing"

func TestSimpleVariadicToSlice(t *testing.T) {
    // Test for no arguments
    if val := simpleVariadicToSlice(); val != nil {
        t.Error("value should be nil", nil)
    } else {
        t.Log("simpleVariadicToSlice() -> nil")
    }

    // Test for random set of values
    vals := simpleVariadicToSlice(1, 2, 3)
    expected := []int{1, 2, 3}
    isErr := false
    for i := 0; i < 3; i++ {
        if vals[i] != expected[i] {
            isErr = true
            break
        }
    }
    if isErr {
        t.Error("value should be []int{1, 2, 3}", vals)
    } else {
        t.Log("simpleVariadicToSlice(1, 2, 3) -> []int{1, 2, 3}")
    }

    // Test for a slice
    vals = simpleVariadicToSlice(expected...)
    isErr = false
    for i := 0; i < 3; i++ {
        if vals[i] != expected[i] {
            isErr = true
            break
        }
    }
    if isErr {
        t.Error("value should be []int{1, 2, 3}", vals)
    } else {
        t.Log("simpleVariadicToSlice([]int{1, 2, 3}...) -> []int{1, 2, 3}")
    }
}

func TestMixedVariadicToSlice(t *testing.T) {
    // Test for simple argument & no variadic arguments
    name, numbers := mixedVariadicToSlice("Bob")
    if name == "Bob" && numbers == nil {
        t.Log("Received as expected: Bob, <nil slice>")
    } else {
        t.Errorf("Received unexpected values: %s, %s", name, numbers)
    }
}
Running tests in variadic_test.go
Let's run these tests and see the output. We'll use the -v flag while running the tests to see the output of each individual test:
$ go test -v ./{variadic_test.go,variadic.go}
=== RUN   TestSimpleVariadicToSlice
--- PASS: TestSimpleVariadicToSlice (0.00s)
        variadic_test.go:10: simpleVariadicToSlice() -> nil
        variadic_test.go:26: simpleVariadicToSlice(1, 2, 3) -> []int{1, 2, 3}
        variadic_test.go:41: simpleVariadicToSlice([]int{1, 2, 3}...) -> []int{1, 2, 3}
=== RUN   TestMixedVariadicToSlice
--- PASS: TestMixedVariadicToSlice (0.00s)
        variadic_test.go:49: Received as expected: Bob, <nil slice>
PASS
ok      command-line-arguments  0.001s
addInt.go
The tests in variadic_test.go elaborated on the rules for variadic functions. However, you might have noticed that TestSimpleVariadicToSlice ran three tests in its function body, but go test treats it as a single test. Go provides a good way to run multiple tests within a single function, and we shall look at it in addInt_test.go.
For this example, we will use a very simple function as shown in this code:
// addInt.go

package main

func addInt(numbers ...int) int {
    sum := 0
    for _, num := range numbers {
        sum += num
    }
    return sum
}
addInt_test.go
You might have also noticed in TestSimpleVariadicToSlice that we duplicated a lot of logic, while the only varying factors were the input and expected values. One style of testing, known as table-driven testing, defines a table of all the data required to run the tests, iterates over the "rows" of the table, and runs tests against them.
Let's look at the tests, which exercise addInt with no arguments and with variadic arguments:
// addInt_test.go

package main

import (
    "testing"
)

func TestAddInt(t *testing.T) {
    testCases := []struct {
        Name     string
        Values   []int
        Expected int
    }{
        {"addInt() -> 0", []int{}, 0},
        {"addInt([]int{10, 20, 100}) -> 130", []int{10, 20, 100}, 130},
    }

    for _, tc := range testCases {
        t.Run(tc.Name, func(t *testing.T) {
            sum := addInt(tc.Values...)
            if sum != tc.Expected {
                t.Errorf("%d != %d", sum, tc.Expected)
            } else {
                t.Logf("%d == %d", sum, tc.Expected)
            }
        })
    }
}
Running tests in addInt_test.go
Let's now run the tests in this file; we expect each row of the testCases table to be treated as a separate test:
$ go test -v ./{addInt.go,addInt_test.go}
=== RUN   TestAddInt
=== RUN   TestAddInt/addInt()_->_0
=== RUN   TestAddInt/addInt([]int{10,_20,_100})_->_130
--- PASS: TestAddInt (0.00s)
    --- PASS: TestAddInt/addInt()_->_0 (0.00s)
        addInt_test.go:23: 0 == 0
    --- PASS: TestAddInt/addInt([]int{10,_20,_100})_->_130 (0.00s)
        addInt_test.go:23: 130 == 130
PASS
ok      command-line-arguments  0.001s
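A handy side effect of naming subtests is that go test's -run flag can select them individually; the pattern is an unanchored regular expression matched against the slash-separated test name. For example, the following runs only the table row whose name contains 130:

$ go test -v -run "TestAddInt/130" ./{addInt.go,addInt_test.go}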
nil_test.go
We can also create tests that are not specific to any particular source file; the only criterion is that the filename has the <text>_test.go form. The tests in nil_test.go illustrate some features of the language that developers might find useful while writing tests. They are as follows:
- httptest.NewServer: Imagine the case where we have to test our code against a server that sends back some data. Starting and coordinating a full-blown server just to access some data is hard. The httptest.NewServer function solves this issue for us.
- t.Helper: If we use the same logic to pass or fail a lot of testCases, it would make sense to segregate this logic into a separate function. However, this would skew the test run call stack. We can see this by commenting out t.Helper() in the tests and rerunning go test.
We can also format our command-line output to print pretty results. We will show a simple example of adding a tick mark for passed cases and a cross mark for failed cases.
In the test, we will run a test server, make GET requests on it, and then test the expected output versus actual output:
// nil_test.go

package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
    "net/http/httptest"
    "testing"
)

const passMark = "\u2713"
const failMark = "\u2717"

func assertResponseEqual(t *testing.T, expected string, actual string) {
    t.Helper() // comment this line to see tests fail due to 'if expected != actual'
    if expected != actual {
        t.Errorf("%s != %s %s", expected, actual, failMark)
    } else {
        t.Logf("%s == %s %s", expected, actual, passMark)
    }
}

func TestServer(t *testing.T) {
    testServer := httptest.NewServer(
        http.HandlerFunc(
            func(w http.ResponseWriter, r *http.Request) {
                path := r.RequestURI
                if path == "/1" {
                    w.Write([]byte("Got 1."))
                } else {
                    w.Write([]byte("Got None."))
                }
            }))
    defer testServer.Close()

    for _, testCase := range []struct {
        Name     string
        Path     string
        Expected string
    }{
        {"Request correct URL", "/1", "Got 1."},
        {"Request incorrect URL", "/12345", "Got None."},
    } {
        t.Run(testCase.Name, func(t *testing.T) {
            res, err := http.Get(testServer.URL + testCase.Path)
            if err != nil {
                t.Fatal(err)
            }

            actual, err := ioutil.ReadAll(res.Body)
            res.Body.Close()
            if err != nil {
                t.Fatal(err)
            }

            assertResponseEqual(t, testCase.Expected, fmt.Sprintf("%s", actual))
        })
    }

    t.Run("Fail for no reason", func(t *testing.T) {
        assertResponseEqual(t, "+", "-")
    })
}
Running tests in nil_test.go
We run three tests, where two test cases will pass and one will fail. This way we can see the tick mark and cross mark in action:
$ go test -v ./nil_test.go
=== RUN   TestServer
=== RUN   TestServer/Request_correct_URL
=== RUN   TestServer/Request_incorrect_URL
=== RUN   TestServer/Fail_for_no_reason
--- FAIL: TestServer (0.00s)
    --- PASS: TestServer/Request_correct_URL (0.00s)
        nil_test.go:55: Got 1. == Got 1. ✓
    --- PASS: TestServer/Request_incorrect_URL (0.00s)
        nil_test.go:55: Got None. == Got None. ✓
    --- FAIL: TestServer/Fail_for_no_reason (0.00s)
        nil_test.go:59: + != - ✗
FAIL
exit status 1
FAIL    command-line-arguments  0.003s
Summary
In this chapter, we started by looking at the fundamental setup for running Go projects successfully. Then we looked at how to install dependencies for our Go projects and how to structure our project. We also looked at the important concepts behind containers, what problems they solve, and how we will be using them in the book, along with an example. Next, we looked at how to write tests in Go, and along the way, we learned a few interesting concepts when dealing with variadic functions and other useful test functions.
In the next chapter, we will start looking at one of the core fundamentals of Go programming—goroutines and the important details to keep in mind when using them.