
Go Machine Learning Projects

By Xuanyi Chew
About this book
Go is the perfect language for machine learning; it helps to clearly describe complex algorithms, and it also helps developers to understand how to run efficient, optimized code. This book will teach you how to implement machine learning in Go to make programs that are easy to deploy, and code that is not only easy to understand and debug, but whose performance can also be measured. The book begins by guiding you through setting up your machine learning environment with Go libraries and capabilities. You will then plunge into regression analysis of a real-life house pricing dataset and build a classification model in Go to classify emails as spam or ham. Using Gonum, Gorgonia, and STL, you will explore time series analysis along with decomposition, and clean up your personal Twitter timeline by clustering tweets. In addition to this, you will learn how to recognize handwriting using neural networks and convolutional neural networks. Lastly, you'll learn how to choose the most appropriate machine learning algorithms to use for your projects with the help of a facial detection project. By the end of this book, you will have developed a solid machine learning mindset, a strong grasp of the powerful Go toolkit, and a sound understanding of the practical implementations of machine learning algorithms in real-world projects.
Publication date: November 2018
Publisher: Packt
Pages: 348
ISBN: 9781788993401

 

How to Solve All Machine Learning Problems

Welcome to the book Go Machine Learning Projects.

This is a rather odd book. It's not a book about how machine learning (ML) works. In fact, it was originally decided that we would assume the reader is familiar with the ML algorithms I introduce in these chapters. I feared that doing so would yield a rather empty book: if the reader already knows the ML algorithm, all that is left is to apply it in the correct context of the problem! The ten or so chapters in this book would be over in under 30 pages—anyone who has written a grant report for a government agency will have experience writing such things.

So what is this book going to be about?

It's going to be about applying ML algorithms within a specific, given context of a problem. These problems are concrete, and are specified by me on a whim. But in order to explore how ML algorithms can be applied to problems, the reader must first be familiar with both the algorithms and the problems! So this book has to strike a very delicate balance between understanding the problem and understanding the specific algorithm used to solve it.

But before we go too far, what is a problem? And what do I mean when I say algorithm? And what's with this machine learning business?

 

What is a problem?

In colloquial use, a problem is something to be overcome. When people say they have money problems, the problem may be overcome simply by having more money. When someone has a mathematical problem, the problem may be overcome by mathematics. The thing or process used to overcome a problem is called a solution.

At this point, it may seem a little strange for me to define such a common word. Ah, but precision and clarity of mind are required in order to solve problems with ML. You have to be precise about what exactly you are trying to solve.

Problems can be broken down into subproblems. But at some point, it no longer makes sense to break down those problems any further. I put it to the reader that there are different types of problems out there. So numerous are the types of problems that it is not worthwhile enumerating them all. Nonetheless, the urgency of a problem should be considered.

If you're building a photo organization tool (perhaps you are planning to rival Google Photos or Facebook), then recognizing faces in a photo is less urgent than knowing where to store, and how to retrieve a photo. If you do not know how to solve the latter, all the knowledge in the world to solve the former would be wasted.

I argue that urgency, despite its subjectivity, is a good metric to use when considering the subproblems of a larger problem. To use a set of more concrete examples, consider three scenarios that all require some sort of ML solution, but where the solutions required are of different urgency. These are clearly made-up examples, with little or no bearing on real life; their entire purpose is to make a point.

First, consider a real estate intelligence business. The entire survival of the business depends on being able to correctly predict the prices of houses to be sold, although perhaps they also make their money on some form of secondary market. To them, the ML problem faced is urgent. They would have to fully understand the ins and outs of the solution; otherwise they risk going out of business. In the view of the popular urgency/importance split, the ML problem can also be considered important and urgent.

Second, consider an online supermarket. They would like to know which groupings of products sell best together so they can bundle them to appear more competitive. This is not the core business activity, hence the ML problem faced is less urgent than in the previous example. Some knowledge about how the solution works would still be necessary: imagine their algorithm says that they should bundle diarrhea medication with their home brand food products. They'd need to be able to understand how the solution came to that conclusion.

Lastly, consider the aforementioned photo application. Facial recognition is a nice bonus feature, but not the main feature. Therefore, the ML problem is least urgent amongst the three.

Different urgencies lead to different requirements when it comes to solving the problems.

 

What is an algorithm?

The previous section has been pretty diligent in the use of the term algorithm. Throughout this book, the term is liberally sprinkled, but is always used judiciously. But what is an algorithm?

To answer that, well, we must first ask: what is a program? A program is a series of steps to be performed by a computer. An algorithm is a set of rules that will solve a problem. An ML algorithm is hence a set of rules for solving a problem, implemented as a program on a computer.

One of the most eye-opening moments for me in truly and deeply understanding what exactly an algorithm is came from an experience I had about 15 years ago. I was staying over at a friend's place. My friend had a seven-year-old child, and she was exasperated at trying to get her child to learn programming, as he had been too stubborn to learn the discipline of syntax. The root cause, I surmised, was that the child had not understood the idea of an algorithm. So the following morning, we tasked the child with making his own breakfast. Except he wasn't to make his own breakfast himself: he was to write down a series of steps that his mother would follow to the letter.

The breakfast was simple—a bowl of cornflakes in milk. Nonetheless, it took the child some eleven attempts to get a bowl of cereal. It ended in tears and plenty of milk and cereal on the countertop, but it was a lesson well learned for the child.

This may seem like wanton child abuse, but it served me well too. In particular, the child said to his mother and me, in paraphrase, "But you already know how to make cereal; why do you need instructions to do so?" His mum responded, "Think of this as teaching me how to make computer games." Here we have a meta notion of an algorithm: the child giving instructions on how to make cereal teaches the child how to program, and is itself an algorithm!

An ML algorithm can refer to the algorithm that is learned, or to the algorithm that teaches the machine to use the correct algorithm. For most of this book, we shall refer to the latter, but it's quite useful to think of the former as well, if only as a mental exercise of sorts. For the most part, since Turing, we can substitute the word algorithm with machine.

Take some time to go through these sentences after reading the following section. It will help in clarifying what I mean upon a second read.

 

What is machine learning?

So what, then, is ML? As the name may hint, it's a machine learning to do something. Machines cannot learn in the same way humans can, but they can definitely emulate some parts of human learning. But what are they supposed to learn? Different algorithms learn different things, but the shared theme is that the machine learns a program. Or, to put it in less specific terms, the machine learns to do the correct thing.

What then is the correct thing? Not wanting to open a philosophical can of worms, the correct thing is what we, as human programmers of the computer, define as the correct thing.

There are multiple classification schemes for ML systems, but amongst the most common is one that splits ML into two types: supervised learning and unsupervised learning. Throughout this book we will see examples of both, but it's my opinion that such forms of classification sit squarely in the good-to-know-but-not-operationally-important area of the brain. I say so because, outside of a few well-known algorithms, unsupervised learning is still very much under active research. Supervised learning algorithms are too, but they have been used in industry for longer than unsupervised ones. That is not to say that unsupervised learning is not of value—a few algorithms have escaped the ivory towers of academia and have proven to be quite useful. We shall explore K-means and k-Nearest Neighbors (KNN) in one of the chapters.

Let's suppose for now that we have a machine learning algorithm. The algorithm is a black box: we don't know what goes on inside. We feed it some data, and through its internal mechanisms it produces an output. The output may not be correct, so the algorithm checks whether the output is correct or not. If it is not, the algorithm changes its internal mechanism and tries again and again until the output is correct. This is how machine learning algorithms work in general. This process is called training.

There are notions of what correct means, of course. In supervised learning situations, we humans provide the machine with examples of correct data. In unsupervised learning situations, the notion of correctness relies on other metrics, such as distances between values. Each algorithm has its own specifics, but in general machine learning algorithms work as described.
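
As a rough illustration of that loop, here is a deliberately tiny sketch in Go. It is a toy of my own making, not any library's API: the entire "internal mechanism" is a single number w, and the adjustment rule is a simple hand-rolled update. It assumes a supervised setting, where the correct outputs are known.

package main

import "fmt"

func main() {
    // The "world" as the machine sees it: inputs and their correct outputs.
    xs := []float64{1, 2, 3, 4}
    ys := []float64{3, 6, 9, 12} // the correct rule happens to be y = 3x

    // The black box's internal mechanism is, here, just one number.
    w := 0.0

    // Training: produce an output, check it, adjust, and try again.
    for epoch := 0; epoch < 100; epoch++ {
        for i, x := range xs {
            out := w * x        // produce an output
            err := ys[i] - out  // check it against the correct answer
            w += 0.01 * err * x // not correct yet, so change the mechanism a little
        }
    }
    fmt.Printf("learned w = %.2f, so for x = 5 the model says %.2f\n", w, w*5)
}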

 

Do you need machine learning?

Perhaps the most surprising question to ask is whether you need machine learning to solve your problem at all. There is, after all, a good reason why this section is the fourth in the chapter—we must understand what exactly a problem is, and what an algorithm is, before we can raise the question: do you need machine learning?

The first question to ask is of course: do you have a problem you need to solve? I assume the answer is yes, because we live in the world and are part of the world. Even ascetics have problems they need solved. But perhaps the question should be more specific: do you have a problem that can be solved with machine learning?

I've consulted a fair bit, and in my early days of consulting, I'd eagerly say yes to most enquiries. Ah, the things one does when one is young and inexperienced. The problems would often show up after I said yes. It turns out that many of these consulting enquiries would have been better served by a more thorough understanding of the business domain and of computer science in general.

A common variant of a problem that is brought to me often requires information retrieval solutions, not machine learning solutions. Consider the following request I received several years ago:

Hi Xuanyi,
I am XXXX. We met at YYYY meetup a few months ago. My company is currently building a machine learning system that extracts relationships between entities. Wondering if you may be interested to catch up for coffee?

Naturally, this piqued my interest—relationship extraction is a particularly challenging task in machine learning. I was young, and ever so eager to get my hands on tough problems. So I sat down with the company, and we worked out what was needed based on surface information. I suggested several models, all of which were greeted with enthusiasm. We finally settled on an SVM-based model. Then I got to work.

The first step in any machine learning project is to collect data. So I did. Much to my surprise, the data was already neatly classified, and the entities already identified. Further, the entities had a static, unchanging relationship: one type of entity would have a permanent relationship with another type of entity. What was the machine learning problem?

I brought this up after one and a half months' worth of data gathering. What was going on? We had clean data, and we had clean relationships. All new data had clean relationships. Where was the need for machine learning?

It later emerged that the data came from manual data input, which was at the time required by law. The entity relationships were defined fairly strictly. The only data requirement they really had was a cleaned-up database entity-relationship diagram. Because their database structure was so convoluted, they could not see that all they needed to do was define a foreign-key relationship to enforce the relationship. When I had requested the data, it had come from individual SQL queries. There was no need for machine learning!

To their DBA's credit, that was what their DBA had been saying all along.

This taught me a lesson: always find out whether someone really needs a machine learning solution before spending time working on it.

I've since settled on a very easy way of determining whether someone needs machine learning. These are my rules of thumb:

  1. Can the problem be phrased in this form: "Given X, I want to predict Y"?
  2. A what-question is generally suspect. A what-question looks like this: "I want to know what our conversion rate is for product XYZ."
 

The general problem solving process

Only if these rules of thumb are satisfied will I engage further. The general problem solving process then goes as follows for me:

  1. Identify the problems clearly.
  2. Translate the problems into more concrete statements.
  3. Gather data.
  4. Perform exploratory data analysis.
  5. Determine the correct machine learning solution to use.
  6. Build a model.
  7. Train the model.
  8. Test the model.

Throughout the chapters of this book, the pattern above will be followed. The exploratory data analysis sections are only written out for the first few chapters; it's implicit that the same work would have been done in the later chapters.

I have attempted to make it clear in the section headings what exactly we are trying to solve, but writing is a difficult task, so I may miss some.

 

What is a model?

All models are wrong; but some are useful.

Now it would be very remarkable if any system existing in the real world could be exactly represented by any simple model. However, cunningly chosen parsimonious models often do provide remarkably useful approximations. For example, the law PV = RT relating pressure P, volume V, and temperature T of an "ideal" gas via a constant R is not exactly true for any real gas, but it frequently provides a useful approximation and furthermore its structure is informative since it springs from a physical view of the behavior of gas molecules.

For such a model there is no need to ask the question "Is the model true?". If "truth" is to be the "whole truth" the answer must be "No". The only question of interest is "Is the model illuminating and useful?".
- George Box (1978)

Model trains are a fairly common hobby, despite being lampooned by the likes of The Big Bang Theory. A model train is not a real train. For one, the sizes are different. Model trains do not work exactly the same way real trains do. There are gradations of model trains, each being more similar to an actual train than the previous.

A model is in that sense a representation of reality. What do we represent it with? By and large, numbers. A model is a bunch of numbers that describes reality, and a bit more.

Every time I try to explain what a model is, I inevitably get responses along the lines of "You can't just reduce us to a bunch of numbers!" So what do I mean by "numbers"?

Consider a right angle triangle. How do we describe all right angle triangles? We might say something like this:

A + B + C = 180°, where one of A, B, or C is 90°

This says that the sum of all angles in a right angle triangle adds up to 180 degrees, and that there exists an angle that is 90 degrees. This is sufficient to describe all right angle triangles in Cartesian space.

But is the description the triangle itself? It is not. This issue has plagued philosophers ever since the days of Aristotle. We shall not enter into a philosophical discussion for such a discussion will only serve to prolong the agony of this chapter.

So for our purposes in this chapter, we'll say that a model is the values that describe reality, and the algorithm that produces those values. These values are typically numbers, though they may be of other types as well.
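
To make that definition a little more tangible, here is a minimal sketch in Go of a model in this sense: a handful of stored numbers, plus the procedure that turns them into a statement about the world. The type and names are purely illustrative, not taken from any library or later chapter.

package main

import "fmt"

// lineModel holds the values that "describe reality": just two numbers.
type lineModel struct {
    slope, intercept float64
}

// predict is the other half of the model: the algorithm that turns the
// stored values into a claim about the world.
func predict(m lineModel, x float64) float64 {
    return m.slope*x + m.intercept
}

func main() {
    m := lineModel{slope: 2, intercept: 1} // the "bunch of numbers"
    fmt.Println(predict(m, 3))             // the model's claim about the world: 7
}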

What is a good model?

A model is a bunch of values that describe the world, alongside the program that produces these values. That much, we have concluded from the previous section. Now we have to pass some value judgments on models - whether a model is good or bad.

A good model needs to describe the world accurately. This is said in the most generic way possible. Described thus, this statement about a good model encompasses many notions. We shall have to make this abstract idea a bit more concrete to proceed.

A machine learning algorithm is trained on a bunch of data. To the machine, this bunch of data is the world. But to us, the data that we feed into the machine for training is not the world; to us humans, there is much more to the world than what the machine may know about. So when I say "a good model needs to describe the world accurately", there are two senses of the word "world" that apply - the world as the machine knows it, and the world as we know it.

The machine has only seen portions of the world as we know it; there are parts of the world that the machine has not seen. A machine learning model is therefore good when it is able to provide the correct outputs for inputs it has not seen yet.

As a concrete example, let's once again suppose that we have a machine learning algorithm that determines whether an image is of a hot dog or not. We feed the model images of hot dogs and hamburgers. To the machine, the world is simply images of hot dogs and hamburgers. What happens when we pass in, as input, an image of vegetables? A good model would be able to generalize and say that it's not a hot dog. A poor model would simply crash.

And thus with this analogy, we have defined a good model to be one that generalizes well to unknown situations.

Often, as part of the process of building machine learning systems, we want to put this notion to the test. So we split our dataset into a testing dataset and a training dataset. The machine is trained on the training dataset, and to test how good the model is once training has completed, we then feed the testing dataset to the machine. It is assumed, of course, that the machine has never seen the testing dataset. A good machine learning model would hence be able to generate the correct output for the testing dataset, despite never having seen it.
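
A minimal sketch of such a split in Go might look like the following. The 70/30 ratio, the function name, and the use of plain float64 slices are my own choices for illustration; the later chapters use their own data structures.

package main

import (
    "fmt"
    "math/rand"
)

// splitDataset shuffles the row indices and carves the rows into a
// training set and a testing set, so the split is not biased by row order.
func splitDataset(xs, ys []float64, trainFrac float64) (trainX, trainY, testX, testY []float64) {
    cut := int(trainFrac * float64(len(xs)))
    for i, j := range rand.Perm(len(xs)) {
        if i < cut {
            trainX = append(trainX, xs[j])
            trainY = append(trainY, ys[j])
        } else {
            testX = append(testX, xs[j])
            testY = append(testY, ys[j])
        }
    }
    return trainX, trainY, testX, testY
}

func main() {
    xs := []float64{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
    ys := []float64{2, 4, 6, 8, 10, 12, 14, 16, 18, 20}
    trainX, trainY, testX, testY := splitDataset(xs, ys, 0.7)
    fmt.Println("train:", trainX, trainY)
    fmt.Println("test (never seen during training):", testX, testY)
}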

 

On writing and chapter organization

A note on the writing in this book. As you may already have guessed, I have decided to settle upon a more conversational tone. I find this tone to be friendlier to a reader who may be intimidated by machine learning algorithms. I am also, if you have not yet noticed, quite opinionated in my writing. I strive to make clear, through my writing, what is and what ought to be. Application of Hume's fork is at the discretion of the reader. But as a quick guide, when I talk about algorithms and how they work, those are is statements. When I talk about what should be done, and about the organization of code, those are ought statements.

There are two general patterns in the design of the chapters of this book. First, the problems get harder as the chapters go on. Second, one may optionally want to mentally divide the chapters into three parts. Part 1 (Chapter 2, Linear Regression - House Price Prediction; Chapter 3, Classification - Spam Email Detection; Chapter 4, Decomposing CO2 Trends Using Time Series Analysis; Chapter 7, Convolutional Neural Networks - MNIST Handwriting Recognition; and Chapter 8, Basic Facial Detection) corresponds to readers who have an urgent ML problem. Part 2 (Chapter 5, Clean Up Your Personal Twitter Timeline by Clustering Tweets; Chapter 9, Hot Dog or Not Hot Dog - Using External Services; Chapter 6, Neural Networks - MNIST Handwriting Recognition; Chapter 7, Convolutional Neural Networks - MNIST Handwriting Recognition; and Chapter 8, Basic Facial Detection) is for those whose ML problems are akin to the second example. Part 3, the last two chapters, is for people whose machine learning problems are not as urgent, but still require a solution.

Up to Chapter 8, Basic Facial Detection, each chapter will typically have one or two sections dedicated to explaining the algorithm itself. I strongly believe that one cannot write any meaningful program without at least a basic understanding of the algorithm one is using. Sure, you may use an algorithm that has been provided by someone else, but without a proper understanding, it's meaningless. Sometimes meaningless programs may produce results, just as an elephant may sometimes appear to know how to do arithmetic, or a pet dog may appear to perform a sophisticated feat, like speaking.

This also means that I may sound rather dismissive of some ideas. For example, I am dismissive of using ML algorithms to predict stock prices. I do not believe that doing so will be a productive endeavor, because of my understanding of the basic processes that generate prices in a stock market, and of the confounding effects of time.

Time, however, will tell if I am right or wrong. It may well be that one day someone will invent an ML algorithm that works perfectly well on dynamical systems, but not right now, not right here. We are at the dawn of a new era of computation, and we must strive to understand things as best as we can. Often you can learn quite a bit from history. Therefore, I also strive to insert important historical anecdotes on how certain things came to be. These are by no means a comprehensive survey; in fact, they appear without much regularity. However, I do rather hope that they add to the flavor of the book.

 

Why Go?

This book is a book on ML using Go. Go is a rather opinionated programming language. There's the Go way, or no other way at all. This may sound rather fascist, but it has resulted in a very enjoyable programming experience. It also makes working in teams rather efficient.

Further, Go is a fairly efficient language when compared to Python. I have moved on almost exclusively to using Go to do my ML and data science work.

Go also has the benefit of working well cross-platform. At work, developers may choose to work on different operating systems. Go works well across all of them. The programs that are written in Go can be trivially cross-compiled for other platforms. This makes deployment a lot easier. There's no unnecessary mucking around with Docker or Kubernetes.

Are there drawbacks to using Go for ML? Only if you are a library author. In general, using Go's ML libraries is painless. But in order for it to be painless, you must let go of any previous ways you programmed.

 

Quick start

First, install Go, which can be found at https://golang.org. The site provides comprehensive guides to Go. And now, the quick start.

 

Functions

Functions are the main way that anything is computed in Go.

This is a function:

func addInt(a, b int) int { return a + b }

We call func addInt(a, b int) int the function signature. The function signature is composed of the function name, the parameters, and the return type(s).

The name of the function is addInt. Note the formatting being used: the function name is in camelCase, which is the preferred casing of names in Go. Capitalizing the first letter of a name, as in AddInt, indicates that it should be exported. By and large, in this book we shan't worry about exported or unexported names, as we will mostly be using functions; but if you are writing a package, then it matters. Exported names are available from outside a package.

Next, note that a and b are parameters, and both have the type int. We'll discuss types in a bit, but the same function can also be written as:

func addInt(a int, b int) int { return a + b }

Following the parameters is the return type. The function addInt returns an int. This means that when the function is called correctly, like so:

 z := addInt(1, 2) 

z will have a type int.

After the return type comes the function body, denoted by {...}. When {...} is written in this book, it means that the content of the function body is not important for the discussion at hand. Some parts of the book may have snippets of function bodies without the signature func foo(...); those snippets are the code under discussion, and it's expected that the reader will be able to piece together the full function from the context in the book.

A Go function may return multiple results. The function signature looks something like this:

 func divUint(a, b uint) (uint, error) { ... }
func divUint(a, b uint) (retVal uint, err error) { ... }

Again, the difference is mainly in naming the return values. In the second example, the return values are named retVal and err respectively. retVal is of type uint and err is of type error.
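
The bodies above are elided. For completeness, here is one way divUint might be implemented and called; the zero-divisor check and the error message are my own illustrative choices, not code taken from later chapters.

package main

import (
    "errors"
    "fmt"
)

// divUint returns both a result and an error, the common Go pattern for
// operations that can fail.
func divUint(a, b uint) (uint, error) {
    if b == 0 {
        return 0, errors.New("division by zero")
    }
    return a / b, nil
}

func main() {
    q, err := divUint(10, 3) // the caller receives both return values
    if err != nil {
        fmt.Println("failed:", err)
        return
    }
    fmt.Println("quotient:", q) // quotient: 3
}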

 

Variables

This is a variable declaration:

var a int

It says that a is an int. That means a can contain any value that has the type int. Typical int values would be 0, 1, 2, and so on and so forth. It may seem odd to read the previous sentence, but typical is used correctly: all values of type int are typical ints.

This is a variable declaration, followed by putting a value into the variable:

s := "Hello World"

Here we're saying: define s as a string, and let its value be "Hello World". The := syntax can only be used within a function body. The main reason for this is to spare the programmer from having to type var s string = "Hello World".

A note about the use of variables: variables in Go should be thought of as buckets with names on them, in that they hold values. The names are important insofar as they inform readers about the values they are supposed to hold. However, names do not necessarily have to cross barriers. I frequently name a return value retVal, but give it a different name elsewhere. A concrete example is shown here:

func foo(...) (retVal int) { ... return retVal }

func main() {
    something := foo()
    ...
}

I have taught programming and ML for a number of years now, and I believe that this is a hump every programmer has got to get over. Frequently students or junior team members may get confused by the difference in naming. They would rather prefer something like this:

func foo(...) (something int) { ... return something }

func main() {
    something := foo()
    ...
}

This is fine. However, speaking strictly from experience, this tends to dampen the ability to think abstractly, which is a useful skill to have, especially in ML. My advice is: get used to using different names; it makes you think more abstractly.

In particular, names do not persist past what my friend James Koppel calls an abstraction barrier. What is an abstraction barrier? A function is an abstraction barrier. Whatever happens inside the function body, happens inside the function body and cannot be accessed by other things in the language. Therefore if you name a value fooBar inside the function body, the meaning of fooBar is only valid within the function.

Later we will see another form of abstraction barrier—the package.

Values

A value is what a program deals with. If you wrote a calculator program, then the values of the program are numbers. If you wrote a text search program, then the values are strings.

The programs we deal with nowadays as programmers are much more complicated than calculators. We deal with different types of values, ranging from number types (int, float64, and so on) to text (string).

A variable holds a value:

var a int = 1

The preceding line indicates that a is a variable that holds an int with the value 1. We've seen previous examples with the "Hello World" string.

Types

Like all major programming languages (yes, including Python and JavaScript), values in Go are typed. Unlike Python or JavaScript, however, Go's functions and variables are also typed, and strongly so. What this means is that the following code will not compile:

var a int
a = "Hello World"

This sort of behavior is known outside the academic world as strongly typed. Within academic circles, the term strongly typed is generally meaningless.

Go allows programmers to define their own types too:

 type email string

Here, we're defining a new type email. The underlying kind of data is a string.

Why would you want to do this? Consider this function:

 func emailSomeone(address, person string) { ... }

If both are string, it would be very easy to make a mistake—we might accidentally do something like this:

var address, person string
address = "John Smith"
person = "john@smith.com"
emailSomeone(address, person)

In fact, you could even do this: emailSomeone(person, address) and the program would still compile correctly!

Imagine, however, if emailSomeone is defined thus:

func emailSomeone(address email, person string) {...}

Then the following will fail to compile:

var address email
var person string
person = "John Smith"
address = "john@smith.com"
emailSomeone(person, address)

This is a good thing—it prevents bad things from happening. No more shall be said on this matter.

Go also allows programmers to define their own complex types:

type Record struct {
    Name string
    Age  int
}

Here, we defined a new type called Record. It's a struct that contains two values: Name of type string and Age of type int.

What is a struct? Simply put, a struct is a data structure. The Name and Age in Record are called the fields of the struct.

A struct, if you come from Python, is equivalent to a tuple, but acts as a NamedTuple, if you are familiar with those. The closest equivalent in JavaScript is that it's an object. Likewise the closest equivalent in Java is that it's a plain old Java object. The closest equivalent in C# would be a plain old CLR object. In C++, the equivalent would be plain old data.

Note my careful use of the words closest equivalent and equivalent. The reason why I have delayed the introduction of struct is that, in most modern languages that the reader is likely to come from, there is some form of Java-esque object orientation. A struct is not a class. It's just a definition of how data is arranged in memory. Hence the comparison with Python's tuples instead of Python's classes, or even Python's new data classes.

Given a value that is of type Record, one might want to extract its inner data. This can be done as so:

r := Record{
    Name: "John Smith",
    Age:  20,
}
r.Name

The snippet here showcases a few things:

  • How to write a struct-kinded value—simply write the name of the type, and then fill in the fields.
  • How to read the fields of a struct—the .Name syntax is used.

Throughout this book, I shall use .FIELDNAME as a notation for accessing a field of a particular data structure. It is expected that the reader will be able to tell which data structure I am talking about from context. Occasionally I may use the full term, like r.Name, to make it clear which field I am talking about.

Methods

Let's say we wrote these functions, and we have defined email as before:

 type email string

func check(a email) { ... }
func send(a email, msg string) { ... }

Observe that email is always the type of the first parameter in these functions.

Calling the functions looks something like this:

e := "john@smith.com"
check(e)
send(e, "Hello World")

We may want to make that into a method of the email type. We can do so as follows:

type email string

func (e email) check() { ... }
func (e email) send(msg string) { ... }

(e email) is called the receiver of the method.

Having defined the methods thus, we may then proceed to call them:

e := "john@smith.com"
e.check()
e.send("Hello World")

Observe the difference between the functions and the methods: check(e) becomes e.check(), and send(e, "Hello World") becomes e.send("Hello World"). What's the difference, other than syntax? The answer is: not much.

A method in Go is exactly the same as a function in Go, with the receiver of the method as the first parameter of the function. It is unlike methods of classes in object-oriented programming languages.
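
One way to see this equivalence is Go's method expression syntax. This is a small aside for illustration only:

e := email("john@smith.com")
e.check()      // the method call
email.check(e) // the same call, written as an ordinary function whose first argument is the receiver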

So why bother with methods? For one, it solves the expression problem quite neatly. To see how, we'll look at the feature of Go that ties everything together nicely: interfaces.

Interfaces

An interface is a set of methods. We can define an interface by listing out the methods it's expected to support. For example, consider the following interface:

var a interface {
    check()
}

Here we are defining a to be a variable that has the type interface{ check() }. What on earth does that mean?

It means that you can put any value into a, as long as the value has a type that has a method called check().

Why is this valuable? It's valuable when considering multiple types that do similar things. Consider the following:

 type complicatedEmail struct {...}

func (e complicatedEmail) check() {...}
func (e complicatedEmail) send(a string) {...}

type simpleEmail string

func (e simpleEmail) check() {...}
func (e simpleEmail) send(a string) {...}

Now we want to write a function do, which does two things:

  • Check that an email address is correct
  • Send "Hello World" to the email

You would need two do functions:

func doC(a complicatedEmail) {
    a.check()
    a.send("Hello World")
}

func doS(a simpleEmail) {
    a.check()
    a.send("Hello World")
}

Instead, if that's all the bodies of the functions are, we may opt to do this:

func do(a interface {
    check()
    send(a string)
}) {
    a.check()
    a.send("Hello World")
}

This is quite hard to read. So let's give the interface a name:

type checkSender interface {
    check()
    send(a string)
}

Then we can simply redefine do to be the following:

func do(a checkSender) {
    a.check()
    a.send("Hello World")
}

A note on naming interfaces in Go: it is customary to name interfaces with an -er suffix. If a type implements check(), then the interface should be called checker. This encourages interfaces to be small; an interface should only define a small number of methods, and larger interfaces are a sign of poor program design.
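
As a small sketch of that convention (purely illustrative; the book's code need not be organized this way), the interface above could be split into two one-method interfaces, which the combined interface then embeds:

type checker interface {
    check()
}

type sender interface {
    send(a string)
}

// checkSender from before is simply the combination of the two.
type checkSender interface {
    checker
    sender
}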

Packages and imports

Finally, we come to the concept of packages and imports. For the majority of the book, the projects described live in something called a main package. The main package is a special package. Compiling a main package will yield an executable file that you can run.

Having said that, it's also often a good idea to organize your code into multiple packages. Packages are a form of abstraction barrier that we discussed previously with regards to variables and names. Exported names are accessible from outside the package. Exported fields of structs are also accessible from outside the package.

To import a package, you need to invoke an import statement at the top of the file:

package main
import "PACKAGE LOCATION"

Throughout this book I will be explicit about what to import, especially for external libraries that cannot be found in the Go standard library; we will be using a number of those.

Go enforces code hygiene. If you import a package and don't use it, your program will not compile. Again, this is a good thing, as it makes it less likely that you will confuse yourself at a later point in time. I personally use a tool called goimports to manage my imports for me: upon saving a file, goimports adds the missing import statements and removes any unused packages from the import statements.

To install goimports, run the following command in your Terminal:

go get golang.org/x/tools/cmd/goimports
 

Let's Go!

In this chapter we've covered what a problem is and how to model a problem as a machine learning problem. Then we learned the basics of using Go. In the next chapter we will dive into our first problem: linear regression.

I highly encourage you to practice some Go beforehand. But if you already know how to use Go, well, let's Go!

About the Author
  • Xuanyi Chew

    Xuanyi Chew is the Chief Data Scientist of a Sydney-based logistics startup. He is the primary author of Gorgonia, an open source deep learning package for Go. He's been practicing machine learning for the past 12 years, typically applying it to help startups. His goal in life is to make artificial general intelligence a reality. He enjoys learning new things.
