Understanding RNNs

A common feature of all the neural networks seen so far is that they don't have a memory. Networks formed by either fully connected layers or convolutional layers process each input independently, in isolation from the others. In RNNs, however, "the past" is taken into account by using the previous output as the state; an RNN layer therefore has two inputs, one of which is the standard input (the current vector), the other being the output of the previous step, as seen in the following diagram:

Figure 5.2 – RNN loop unfolded

The RNN implements this memory feature with an internal loop over the entire sequence of elements. Let's explain it with some pseudocode, as follows:

state = 0
for input in input_sequence:
    # combine the current input with the previous state
    output = f(input, state)
    # feed the output back in as the state for the next step
    state = output
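
To make this concrete, here is a minimal, runnable NumPy sketch of one possible f: a simple RNN cell that mixes the current input with the previous state through a tanh activation. The weight names (W_x, W_h, b) and the dimensions are illustrative assumptions, not taken from AutoKeras:

import numpy as np

input_dim, state_dim = 4, 3
rng = np.random.default_rng(0)
W_x = rng.standard_normal((state_dim, input_dim))  # input-to-state weights
W_h = rng.standard_normal((state_dim, state_dim))  # state-to-state weights
b = np.zeros(state_dim)                            # bias

def f(x, state):
    # one recurrent step: combine current input with previous state
    return np.tanh(W_x @ x + W_h @ state + b)

input_sequence = rng.standard_normal((5, input_dim))  # 5 timesteps
state = np.zeros(state_dim)
for x in input_sequence:
    output = f(x, state)
    state = output  # the output becomes the next step's state

In Keras, this recurrence is what layers such as tf.keras.layers.SimpleRNN implement for you, so you rarely write the loop by hand.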

There are several types of RNN architectures with much more...
