5. Recurrent Neural Network (RNN)

We're now going to look at the last of our three artificial neural networks: the RNN.

RNNs are a family of networks that are suitable for learning representations of sequential data like text in natural language processing (NLP) or a stream of sensor data in instrumentation. While each MNIST data sample is not sequential in nature, it is not hard to imagine that every image can be interpreted as a sequence of rows or columns of pixels. Thus, a model based on RNNs can process each MNIST image as a sequence of 28-element input vectors with timesteps equal to 28. The following listing shows the code for the RNN model in Figure 1.5.1:

Figure 1.5.1: RNN model for MNIST digit classification

Listing 1.5.1: rnn-mnist-1.5.1.py

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, SimpleRNN
from tensorflow.keras.utils import to_categorical, plot_model
from tensorflow.keras.datasets import mnist

# load mnist dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# compute the number of labels
num_labels = len(np.unique(y_train))

# convert to one-hot vector
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)

# resize and normalize
image_size = x_train.shape[1]
x_train = np.reshape(x_train,[-1, image_size, image_size])
x_test = np.reshape(x_test,[-1, image_size, image_size])
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255

# network parameters
input_shape = (image_size, image_size)
batch_size = 128
units = 256
dropout = 0.2

# model is RNN with 256 units, input is a 28-dim vector with 28 timesteps
model = Sequential()
model.add(SimpleRNN(units=units,
                    dropout=dropout,
                    input_shape=input_shape))
model.add(Dense(num_labels))
model.add(Activation('softmax'))
model.summary()
plot_model(model, to_file='rnn-mnist.png', show_shapes=True)

# loss function for one-hot vector
# use of sgd optimizer
# accuracy is good metric for classification tasks
model.compile(loss='categorical_crossentropy',
              optimizer='sgd',
              metrics=['accuracy'])
# train the network
model.fit(x_train, y_train, epochs=20, batch_size=batch_size)

_, acc = model.evaluate(x_test,
                        y_test,
                        batch_size=batch_size,
                        verbose=0)
print("\nTest accuracy: %.1f%%" % (100.0 * acc))

There are two main differences between the RNN classifier and the two previous models. The first is the input_shape = (image_size, image_size), which is actually input_shape = (timesteps, input_dim): a sequence of input_dim-dimensional vectors of length timesteps. The second is the use of a SimpleRNN layer to represent an RNN cell with units=256. The units variable represents the number of output units. Whereas a CNN is characterized by the convolution of kernels across the input feature map, the RNN output is a function not only of the present input but also of the previous output or hidden state. Since the previous output is in turn a function of the previous input, the current output is also a function of the previous output and input, and so on. The SimpleRNN layer in Keras is a simplified version of the true RNN. The following equation describes the output of SimpleRNN:

$\mathbf{h}_t = \tanh(\mathbf{b} + \mathbf{W}\mathbf{h}_{t-1} + \mathbf{U}\mathbf{x}_t)$    (Equation 1.5.1)

In this equation, b is the bias, while W and U are called recurrent kernel (weights for the previous output) and kernel (weights for the current input), respectively. Subscript t is used to indicate the position in the sequence. For a SimpleRNN layer with units=256, the total number of parameters is 256 + 256 × 256 + 256 × 28 = 72,960, corresponding to b, W, and U contributions.
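As a quick sanity check, here is a minimal NumPy sketch of Equation 1.5.1 for a single timestep (the names and shapes below are purely illustrative, matching the 256 units and 28-dimensional inputs used above); it also reproduces the parameter count:

import numpy as np

units, input_dim = 256, 28            # matches the model above

b = np.zeros(units)                   # bias: 256 parameters
W = np.zeros((units, units))          # recurrent kernel: 256 x 256 parameters
U = np.zeros((units, input_dim))      # kernel: 256 x 28 parameters

def simple_rnn_step(x_t, h_prev):
    # Equation 1.5.1: h_t = tanh(b + W h_{t-1} + U x_t)
    return np.tanh(b + W @ h_prev + U @ x_t)

h_t = simple_rnn_step(np.zeros(input_dim), np.zeros(units))
print(h_t.shape)                      # (256,)
print(b.size + W.size + U.size)       # 72960, as computed above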

The following figure shows the diagrams of both SimpleRNN and RNN when used for classification tasks. What makes SimpleRNN simpler than an RNN is the absence of the output values $\mathbf{o}_t = \mathbf{V}\mathbf{h}_t + \mathbf{c}$ before the softmax is computed:

Figure 1.5.2: Diagram of SimpleRNN and RNN

RNNs might initially be harder to understand than MLPs or CNNs. In an MLP, the perceptron is the fundamental unit; once the concept of the perceptron is understood, an MLP is just a network of perceptrons. In a CNN, the kernel is a patch or window that slides through the feature map to generate another feature map. In an RNN, the most important concept is the self-loop: there is, in fact, just one cell.

The illusion of multiple cells arises because a cell appears at each timestep when the network is unrolled, but it is the same cell reused repeatedly. The underlying weights of an RNN are shared across all timesteps.
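As a hedged illustration of this weight sharing (the toy sizes below are chosen only for this sketch and are not part of the book's code), the weights of a Keras SimpleRNN layer do not depend on the number of timesteps:

import numpy as np
from tensorflow.keras.layers import SimpleRNN

rnn = SimpleRNN(units=8)
short_seq = np.zeros((1, 5, 3), dtype='float32')    # 5 timesteps, 3 features
long_seq = np.zeros((1, 50, 3), dtype='float32')    # 50 timesteps, 3 features

rnn(short_seq)    # first call builds the layer's weights
# kernel: (3, 8), recurrent kernel: (8, 8), bias: (8,)
print([w.shape for w in rnn.get_weights()])

rnn(long_seq)     # ten times more timesteps, the exact same weights are reused
print([w.shape for w in rnn.get_weights()])          # unchanged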

The summary in Listing 1.5.2 indicates that using a SimpleRNN requires fewer parameters.

Listing 1.5.2: Summary of an RNN MNIST digit classifier

Layer (type)                   Output Shape       Param #
=================================================================
simple_rnn_1 (SimpleRNN)       (None, 256)        72960
dense_1 (Dense)                (None, 10)         2570
activation_1 (Activation)      (None, 10)         0
=================================================================
Total params: 75,530
Trainable params: 75,530
Non-trainable params: 0

Figure 1.5.3 shows the graphical description of the RNN MNIST digit classifier. The model is very concise:


Figure 1.5.3: The RNN MNIST digit classifier graphical description

Table 1.5.1 shows that the SimpleRNN has the lowest accuracy among the networks presented:

Layers    Optimizer    Regularizer     Train Accuracy (%)    Test Accuracy (%)
256       SGD          Dropout(0.2)    97.26                 98.00
256       RMSprop      Dropout(0.2)    96.72                 97.60
256       Adam         Dropout(0.2)    96.79                 97.40
512       SGD          Dropout(0.2)    97.88                 98.30

Table 1.5.1: The different SimpleRNN network configurations and performance measures

In many deep neural networks, other members of the RNN family are more commonly used. For example, Long Short-Term Memory (LSTM) networks have been used in both machine translation and question answering problems. LSTM addresses the problem of long-term dependency, or remembering past information that is relevant to the present output.

The internal structure of an LSTM cell is more complex than that of an RNN or a SimpleRNN. Figure 1.5.4 shows a diagram of LSTM. LSTM uses not only the present input and the past output or hidden state; it also introduces a cell state, $\mathbf{s}_t$, that carries information from one cell to the next. The information flow between cell states is controlled by three gates, $\mathbf{f}_t$, $\mathbf{i}_t$, and $\mathbf{q}_t$. The three gates determine which information should be retained or replaced, and how much of the past and current input should contribute to the current cell state or output. We will not discuss the details of the internal structure of the LSTM cell in this book. However, an intuitive guide to LSTMs can be found at http://colah.github.io/posts/2015-08-Understanding-LSTMs.

The LSTM() layer can be used as a drop-in replacement for SimpleRNN(). If LSTM is overkill for the task at hand, a simpler variant called a Gated Recurrent Unit (GRU) can be used. A GRU simplifies LSTM by combining the cell state and hidden state, and by reducing the number of gates by one. The GRU() layer can also be used as a drop-in replacement for SimpleRNN().

Figure 1.5.4: Diagram of LSTM. The parameters are not shown for clarity.
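As a hedged sketch of this drop-in replacement (reusing the shapes and hyperparameters of Listing 1.5.1; this is not a listing from the book), swapping in LSTM or GRU only changes the recurrent layer:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, LSTM, GRU

# same hyperparameters as in Listing 1.5.1
input_shape = (28, 28)   # (timesteps, input_dim)
units = 256
dropout = 0.2
num_labels = 10

model = Sequential()
# drop-in replacement for SimpleRNN; use GRU(...) instead for the lighter cell
model.add(LSTM(units=units, dropout=dropout, input_shape=input_shape))
model.add(Dense(num_labels))
model.add(Activation('softmax'))
model.summary()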

There are many other ways to configure RNNs. One way is to make the RNN model bidirectional. By default, RNNs are unidirectional, in the sense that the current output is influenced only by past states and the current input.

In bidirectional RNNs, future states can also influence the present and past states by allowing information to flow backward. Past outputs are updated as needed depending on the new information received. RNNs can be made bidirectional by calling a wrapper function. For example, the implementation of bidirectional LSTM is Bidirectional(LSTM()).
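For example, a minimal sketch of a bidirectional LSTM classifier for the same MNIST sequence input (again reusing the earlier hyperparameters; not a listing from the book) looks like this:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, LSTM, Bidirectional

input_shape = (28, 28)   # (timesteps, input_dim)
units = 256
dropout = 0.2
num_labels = 10

model = Sequential()
# the wrapper runs one LSTM forward and one backward, then concatenates them
model.add(Bidirectional(LSTM(units=units, dropout=dropout),
                        input_shape=input_shape))
model.add(Dense(num_labels))
model.add(Activation('softmax'))
model.summary()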

For all types of RNNs, increasing the number of units will also increase the capacity. Another way of increasing the capacity is to stack RNN layers. It should be noted, though, that as a general rule of thumb, the capacity of the model should only be increased if needed. Excess capacity may contribute to overfitting and, as a result, may lead to both longer training times and slower prediction.
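As an illustrative sketch of stacking (reusing the earlier hyperparameters; not a listing from the book), the lower layer must return the full sequence so that the next RNN layer still receives one vector per timestep:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, SimpleRNN

input_shape = (28, 28)   # (timesteps, input_dim)
units = 256
dropout = 0.2
num_labels = 10

model = Sequential()
# return_sequences=True emits the hidden state at every timestep
model.add(SimpleRNN(units=units, dropout=dropout,
                    return_sequences=True, input_shape=input_shape))
# the second RNN layer returns only the last hidden state for classification
model.add(SimpleRNN(units=units, dropout=dropout))
model.add(Dense(num_labels))
model.add(Activation('softmax'))
model.summary()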
