Chapter 5. Wrapping Up

In the previous chapter, we learned about another interface to TensorFlow, as well as RNN models. This chapter wraps up our look at TensorFlow, reviewing how far we've come and where you can go from here. First, we'll review our progress on the font classification problem; then we'll briefly look at TensorFlow beyond deep learning and see where it is headed. By the end of the chapter, you will be familiar with the following concepts:

  • Research evaluation

  • A quick review of all the models

  • The future of TensorFlow

  • Some more TensorFlow projects

Let's now begin by looking at research evaluation in detail.

Research evaluation


In this section, we'll compare our models on the font classification problem. First, we should remind ourselves what the data looks like. Then, we'll inspect the simple logistic regression, dense neural network, and convolutional neural network models. You've come a long way in modeling with TensorFlow.

Before we move on from deep learning, however, let's look back and see how the models compare on the font classification problem. First, let's look at the data again, so we don't lose sight of the problem. In fact, let's look at one image that includes all the letters and digits from every font, just to see what shapes we have:

# One look at a letter/digit from each font
# Best to reshape as one large array, then plot
import numpy as np

# train is the array of 36x36 training images prepared in the earlier chapters
all_letters = np.zeros([5*36, 62*36])
for font in range(5):
    for letter in range(62):
        # Copy one example of each character into a 5x62 grid of 36x36 tiles
        all_letters[font*36:(font+1)*36,
                letter*36:(letter+1)*36] = \
                train[9*(font*62 + letter)]

This would be a lot of subplots for Matplotlib to handle...
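Instead, one simple option is to display the assembled array as a single grayscale image. This is a minimal sketch, assuming the all_letters array built above and a standard Matplotlib setup; it is not necessarily the exact plotting code from the earlier chapters:

import matplotlib.pyplot as plt

# Show the whole 5x62 grid of characters as one grayscale image
plt.figure(figsize=(18, 3))
plt.imshow(all_letters, cmap='gray')
plt.axis('off')
plt.show()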

A quick review of all the models


Let's recap each of the models we built to classify these fonts, along with some of their strengths and weaknesses:

At a glance, recall that we slowly built up more complicated models and took into account the structure of the data to improve our accuracy.

The logistic regression model

First, we started with a simple logistic regression model:

This model has (36x36 pixels + 1 bias) x 5 classes = 6,485 parameters that we need to train. After 1,000 training epochs, it achieved about 40 percent validation accuracy (your results may vary). This is relatively poor, but the model has some advantages.
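As a quick back-of-the-envelope check of that count (just the arithmetic, not code from the book):

pixels = 36 * 36                  # 1,296 input features per flattened image
classes = 5                       # five fonts to tell apart
params = (pixels + 1) * classes   # weights plus one bias per class
print(params)                     # 6485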

Let's glance back at the code:

# Assuming the usual setup from the earlier chapters:
import tensorflow as tf
sess = tf.InteractiveSession()

# These will be inputs
## Input pixels, flattened (36x36 = 1296)
x = tf.placeholder("float", [None, 1296])
## Known labels, one-hot over the 5 fonts
y_ = tf.placeholder("float", [None, 5])

# Variables
W = tf.Variable(tf.zeros([1296, 5]))
b = tf.Variable(tf.zeros([5]))

# Just initialize
# (tf.initialize_all_variables is the older name for
#  tf.global_variables_initializer in TF 1.x)
sess.run(tf.initialize_all_variables())

# Define model: softmax (multinomial logistic) regression
y = tf.nn.softmax(tf.matmul(x, W) + b)
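Here is a minimal sketch of a loss and training loop that could produce the accuracy quoted above, assuming 1,000 epochs of plain gradient descent; the array names train, onehot, test, and onehot_test stand in for the data prepared in earlier chapters and are not necessarily the book's exact variable names:

# Cross-entropy loss on the raw logits, plus a gradient-descent step
logits = tf.matmul(x, W) + b
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

# Accuracy: fraction of images whose predicted font matches the label
correct = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct, "float"))

for epoch in range(1000):
    sess.run(train_step,
             feed_dict={x: train.reshape([-1, 1296]), y_: onehot})

print(sess.run(accuracy,
               feed_dict={x: test.reshape([-1, 1296]), y_: onehot_test}))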

The future of TensorFlow


In this section, we'll look at how TensorFlow is changing, who is starting to use it, and how you can make an impact by contributing to it.

Since it was released in late 2015, TensorFlow has already seen several more releases.

TensorFlow is constantly being updated. Although it isn't an official Google product, it is open source and hosted on GitHub. At the time of writing, TensorFlow is at release 1.2. Recent releases have added distributed computing capabilities. These are beyond the scope of this book, but generally speaking, they allow computation across multiple GPUs on multiple machines for maximum parallelization. TensorFlow is under heavy development, so more features are always just around the corner, and it is becoming more popular every day.

Several software companies have released machine learning frameworks recently, but TensorFlow stands out in adoption. Internally, Google is practicing what it preaches: its acclaimed DeepMind team has switched to...

Summary


In this chapter, we reviewed how we climbed from a humble logistic regression model to flying high with a deep convolutional neural network to classify fonts, recalling each model and its accuracy along the way. We also took some time to discuss where TensorFlow is headed. Congratulations! You're now well-versed in TensorFlow. You've applied it to multiple research problems and models in this series, and learned how widely applicable it is.

The next step is to deploy TensorFlow in one of your own projects. Happy modeling!
