Now that we have familiarized ourselves with the basic idea of what a recurrent layer does, and have gone over some specific use cases (speech recognition, machine translation, and image captioning) where variations of such time-dependent models may be used, the following diagram provides a visual summary of some of the sequential tasks we discussed, along with the type of RNN that's suited for each job:
Next, we will dive deeper into the governing equations, as well as the learning mechanism behind RNNs.
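Before formalizing those equations, the distinction between the RNN variants in the diagram can be made concrete with a minimal NumPy sketch. All names and dimensions below are illustrative: the same loop of hidden-state updates serves a many-to-many task (such as machine translation) by emitting every hidden state, and a many-to-one task (such as sequence classification) by keeping only the last one.

```python
import numpy as np

def rnn_forward(x_seq, W_xh, W_hh, b_h):
    """Run a vanilla recurrent layer over a sequence and return all hidden states."""
    h = np.zeros(W_hh.shape[0])            # initial hidden state
    states = []
    for x_t in x_seq:                      # one update per time step
        h = np.tanh(W_xh @ x_t + W_hh @ h + b_h)
        states.append(h)
    return np.stack(states)

# Toy dimensions, chosen purely for illustration
rng = np.random.default_rng(0)
T, input_dim, hidden_dim = 5, 3, 4
x_seq = rng.normal(size=(T, input_dim))
W_xh = 0.1 * rng.normal(size=(hidden_dim, input_dim))
W_hh = 0.1 * rng.normal(size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)

states = rnn_forward(x_seq, W_xh, W_hh, b_h)
print(states.shape)      # (5, 4): a many-to-many task uses all T states
print(states[-1].shape)  # (4,):   a many-to-one task uses only the final state
```

The weight matrices here are shared across every time step, which is exactly the property the governing equations in the next section will make precise.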