Hands-On Neural Networks with TensorFlow 2.0

Product type: Book
Published: Sep 2019
Publisher: Packt
ISBN-13: 9781789615555
Pages: 358
Edition: 1st
Author: Paolo Galeone

Table of Contents (15 chapters)

Preface
Section 1: Neural Network Fundamentals
    What is Machine Learning?
    Neural Networks and Deep Learning
Section 2: TensorFlow Fundamentals
    TensorFlow Graph Architecture
    TensorFlow 2.0 Architecture
    Efficient Data Input Pipelines and Estimator API
Section 3: The Application of Neural Networks
    Image Classification Using TensorFlow Hub
    Introduction to Object Detection
    Semantic Segmentation and Custom Dataset Builder
    Generative Adversarial Networks
    Bringing a Model to Production
Other Books You May Enjoy

Image Classification Using TensorFlow Hub

We have discussed the image classification task throughout the previous chapters of this book. We have seen how to define a convolutional neural network by stacking several convolutional layers, and how to train it using Keras. We also looked at eager execution and saw that using AutoGraph is straightforward.

So far, the convolutional architecture we have used has been a LeNet-like architecture with an expected input size of 28 × 28, trained end to end every time so that the network learns to extract the features needed to solve the Fashion-MNIST classification task.
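As a reminder, the following is a minimal sketch of such a LeNet-like stack using tf.keras; the exact layer sizes here are illustrative assumptions, not the precise model used in the earlier chapters:

(tf2)

import tensorflow as tf

# Illustrative LeNet-like stack for 28 x 28 grayscale inputs
# (the layer sizes are assumptions, not the book's exact model).
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (5, 5), activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPool2D((2, 2)),
    tf.keras.layers.Conv2D(64, (5, 5), activation="relu"),
    tf.keras.layers.MaxPool2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),  # 10 Fashion-MNIST classes (logits)
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)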

Building a classifier from scratch, defining the architecture layer by layer, is an excellent didactic exercise that lets you experiment with how different layer configurations change the network's performance. However, in real-life scenarios, the amount...

Getting the data

The task we are going to solve in this chapter is a classification problem on a dataset of flowers, which is available in tensorflow-datasets (tfds). The dataset's name is tf_flowers and it consists of images of five different flower species at different resolutions. Using tfds, gathering the data is straightforward, and we can get the dataset's information by looking at the info variable returned by the tfds.load invocation, as shown here:

(tf2)

import tensorflow_datasets as tfds

dataset, info = tfds.load("tf_flowers", with_info=True)
print(info)

The preceding code produces the following dataset description:

tfds.core.DatasetInfo(
    name='tf_flowers',
    version=1.0.0,
    description='A large set of images of flowers',
    urls=['http://download.tensorflow.org/example_images/flower_photos.tgz'],
    features=FeaturesDict...
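tf_flowers ships as a single train split, so to obtain training, validation, and test subsets we can slice it. The following is a sketch that assumes a tensorflow-datasets version supporting the split-slicing syntax:

(tf2)

import tensorflow_datasets as tfds

# tf_flowers only provides a "train" split; carve out validation
# and test subsets using tfds split slicing (assumes a tfds version
# that supports the "train[:80%]" syntax).
(train, validation, test), info = tfds.load(
    "tf_flowers",
    split=["train[:80%]", "train[80%:90%]", "train[90%:]"],
    with_info=True,
)
print(info.features["label"].num_classes)  # 5 flower species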

Transfer learning

Only academic labs and a few large companies have the budget and computing power required to train an entire CNN from scratch, starting from random weights, on a massive dataset such as ImageNet.

Since this expensive and time-consuming work has already been done, it is a smart idea to reuse parts of the trained model to solve our classification problem.

In fact, it is possible to transfer what the network has learned on one dataset to a new one, reusing that knowledge.

Transfer learning is the process of learning a new task by relying on a previously learned task: the learning process can be faster, more accurate, and require less training data.

The idea behind transfer learning is a powerful one, and it can be applied with great success to convolutional neural networks.

In fact, all convolutional architectures for classification have a fixed structure, and we can reuse...
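The following is a minimal sketch of this reuse pattern with TensorFlow Hub; the module URL, the 299 × 299 input size, and the single-layer head are assumptions made for illustration, and any ImageNet feature-vector module works the same way:

(tf2)

import tensorflow as tf
import tensorflow_hub as hub

# Pre-trained feature extractor, kept frozen (non-trainable):
# only the classification head defined below is learned.
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(299, 299, 3)),  # Inception v3 input size
    hub.KerasLayer(
        "https://tfhub.dev/google/imagenet/inception_v3/feature_vector/4",
        trainable=False,
    ),
    tf.keras.layers.Dense(5),  # 5 tf_flowers classes (logits)
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)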

Fine-tuning

Fine-tuning is a different approach to transfer learning. Both share the goal of transferring the knowledge learned on one dataset and task to a different dataset and task. As shown in the previous section, transfer learning reuses the pre-trained model without making any changes to its feature extraction part; that part is treated as a non-trainable component of the network.

Fine-tuning, instead, consists of adjusting the pre-trained network's weights by continuing backpropagation through them.
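In code, the difference from the previous sketch is small: the feature extractor is marked as trainable, and a low learning rate protects the pre-trained weights from large, destructive updates (the module URL and learning rate are, again, illustrative assumptions):

(tf2)

import tensorflow as tf
import tensorflow_hub as hub

# Same feature extractor as before, but trainable: backpropagation
# now updates the pre-trained weights too (fine-tuning).
fine_tuned = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(299, 299, 3)),
    hub.KerasLayer(
        "https://tfhub.dev/google/imagenet/inception_v3/feature_vector/4",
        trainable=True,  # the key difference from transfer learning
    ),
    tf.keras.layers.Dense(5),  # 5 tf_flowers classes (logits)
])
# A small learning rate avoids destroying the pre-trained features.
fine_tuned.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)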

When to fine-tune

Fine-tuning a network requires the right hardware: backpropagating gradients through a deeper network requires more information to be held in memory. Very deep networks have been...
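A common compromise when memory is limited is to fine-tune only the top of the feature extractor. The following sketch uses tf.keras.applications instead of a Hub module because it exposes the backbone's individual layers; the cutoff point is an arbitrary assumption:

(tf2)

import tensorflow as tf

# ImageNet-pre-trained backbone without its classification head.
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", pooling="avg"
)
# Freeze all but the last layers: no weight gradients are computed
# or stored for the frozen part, reducing the memory required
# during backpropagation.
for layer in base.layers[:-30]:  # -30 is an illustrative cutoff
    layer.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(5),  # 5 tf_flowers classes (logits)
])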

Summary

In this chapter, the concepts of transfer learning and fine-tuning were introduced. Training a very deep convolutional neural network from scratch, starting from random weights, requires equipment that is only found in academia and a few big companies. Moreover, it is a costly process: finding an architecture that achieves state-of-the-art results on a classification task requires designing and training multiple models, and repeating the training process for each of them in search of the hyperparameter configuration that achieves the best results.

For this reason, transfer learning is the recommended practice to follow. It is especially useful when prototyping new solutions since it speeds up the training time and reduces the training costs.

TensorFlow Hub is the library and model repository offered by the TensorFlow ecosystem. It contains an online catalog...

Exercises

  1. Describe the concept of transfer learning.
  2. When can the transfer learning process bring good results?
  3. What are the differences between transfer learning and fine-tuning?
  4. If a model has been trained on a small dataset with low variance (similar examples), is it an excellent candidate to be used as a fixed-feature extractor for transfer learning?
  5. The flower classifier built in the transfer learning section has no performance evaluation on the test dataset: add it.
  6. Extend the flower classifier source code, making it log the metrics on TensorBoard. Use the summary writers that are already defined.
  7. Extend the flower classifier to save the training status using a checkpoint (and its checkpoint manager).
  8. Create a second checkpoint for the model that reached the highest validation accuracy.
  9. Since the model suffers from overfitting, a good test is to reduce the number of neurons...