Computer Vision Projects with OpenCV and Python 3

Product type: Book
Published in: Dec 2018
Publisher: Packt
ISBN-13: 9781789954555
Pages: 182
Edition: 1st
Author: Matthew Rever

Table of Contents (9 chapters)

  • Preface
  • Setting Up an Anaconda Environment
  • Image Captioning with TensorFlow
  • Reading License Plates with OpenCV
  • Human Pose Estimation with TensorFlow
  • Handwritten Digit Recognition with scikit-learn and TensorFlow
  • Facial Feature Tracking and Classification with dlib
  • Deep Learning Image Classification with TensorFlow
  • Other Books You May Enjoy

Image Captioning with TensorFlow

This chapter provides a brief overview of generating a detailed English-language description of an image. Using an image captioning model based on TensorFlow, we will move beyond single words or short phrases to detailed captions that accurately describe the image. We will first use a pre-trained model for image captioning and then see how to retrain the model from scratch on a set of images.

In this chapter, we will cover the following:

  • Image captioning introduction
  • Google Brain im2txt captioning model
  • Running our captioning code in Jupyter
  • Retraining the model

Technical requirements

Introduction to image captioning

Image captioning is a process in which textual description is generated based on an image. To better understand image captioning, we need to first differentiate it from image classification.

Difference between image classification and image captioning

Image classification is a relatively simple process that only tells us what is in an image. For example, if there is a boy on a bike, image classification will not give us a description; it will just provide the result as boy or bike. Image classification can tell us whether there is a woman or a dog in the image, or an action, such as snowboarding. This is not a desirable result as there is no description of what exactly is going on in the image...
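To make the contrast concrete, here is a toy sketch. Both function bodies are hypothetical stand-ins, not real models; a real classifier would run a CNN on the image, and a real captioner would feed CNN features into an LSTM:

```python
# Toy contrast between image classification and image captioning.
# The return values are hard-coded stand-ins for real model outputs.

def classify(image_path):
    # A classifier only names what is present in the image.
    return ["boy", "bike"]

def caption(image_path):
    # A captioner describes what is actually going on.
    return "a boy riding a bike down the street"

print(classify("photo.jpg"))  # labels only
print(caption("photo.jpg"))   # full description
```

The point is the shape of the output: the classifier stops at labels, while the captioner relates them in a sentence.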

Google Brain im2txt captioning model

Google Brain's im2txt model was used by Google in its paper on the 2015 MSCOCO Image Captioning Challenge, and it will form the foundation of the image captioning code that we will implement in our project.

The im2txt code lives in Google's TensorFlow models repository on GitHub, at https://github.com/tensorflow/models/tree/master/research/im2txt.

In the research directory, we will find the im2txt directory, which contains the code Google used in the paper on the 2015 MSCOCO Image Captioning Challenge, available for free at https://arxiv.org/abs/1609.06647. The paper covers RNNs, LSTMs, and the fundamental algorithms in detail.

We can see how a CNN is used for image classification and learn how an LSTM RNN is used to generate the caption output sequentially, one word at a time.

We can download the code from the GitHub link; however, it has not been set up to run easily, as it does not include a pre-trained model...

Running the captioning code on Jupyter

Let's now run our own version of the code in a Jupyter Notebook. We can start up our own Jupyter Notebook and load the Section_1-Tensorflow_Image_Captioning.ipynb file from the GitHub repository (https://github.com/PacktPublishing/Computer-Vision-Projects-with-OpenCV-and-Python-3/blob/master/Chapter01/Section_1-Tensorflow_Image_Captioning.ipynb).

Once we load the file on a Jupyter Notebook, it will look something like this:

In the first part, we are going to load some essential libraries, including math, os, and tensorflow. We will also use the handy %pylab inline magic to easily read and display images within the Notebook.

Select the first code block:

# load essential libraries
import math
import os

import tensorflow as tf

%pylab inline

When we hit Ctrl + Enter to execute the code in the cell, we will get the following output...

Retraining the captioning model

So, now that we have seen the image captioning code in action, we are going to retrain the image captioner on our own desired data. Be aware, however, that retraining is very time consuming and needs over 100 GB of hard drive space if we want it to finish in a reasonable time. Even with a good GPU, it may take a few days to a week to complete. Assuming we have the resources, let's start retraining the model.

In the Notebook, the first step is to download the pre-trained Inception model. The webbrowser module makes it easy to open the URL in a browser, which triggers the download of the file:

# First download pretrained Inception (v3) model

import webbrowser
webbrowser.open("http://download.tensorflow.org/models/inception_v3_2016_08_28.tar.gz")

# Completely unzip tar.gz file to get inception_v3...
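If we prefer not to go through a browser, the same download and extraction can be scripted directly. This is a sketch using only the Python standard library; the URL is the one from the text, and the checkpoint filename inside the archive (inception_v3.ckpt) is an assumption:

```python
# Download the Inception v3 tarball and extract its contents without
# opening a browser; uses only the Python standard library.
import os
import tarfile
import urllib.request

INCEPTION_URL = ("http://download.tensorflow.org/models/"
                 "inception_v3_2016_08_28.tar.gz")

def fetch_inception(url=INCEPTION_URL, dest_dir="."):
    """Download the archive if not already present, then extract it."""
    archive = os.path.join(dest_dir, os.path.basename(url))
    if not os.path.exists(archive):
        urllib.request.urlretrieve(url, archive)
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(dest_dir)  # assumed to yield inception_v3.ckpt
    return os.path.join(dest_dir, "inception_v3.ckpt")
```

Checking for the archive before downloading keeps the step safe to re-run if the Notebook cell is executed more than once.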

Summary

In this chapter, we were introduced to different image captioning methods. We learned about the Google Brain im2txt captioning model. While working on the project, we were able to run our pre-trained model on a Jupyter Notebook and analyze the model based on the results. In the last section of the chapter, we retrained our image captioning model from scratch.

In the next chapter, we will cover reading license plates with OpenCV.
