Time Series Analysis with Python Cookbook

Product type: Book
Published: Jun 2022
Publisher: Packt
ISBN-13: 9781801075541
Pages: 630
Edition: 1st
Author: Tarek A. Atwan

Table of Contents (18 chapters)

Preface
Chapter 1: Getting Started with Time Series Analysis
Chapter 2: Reading Time Series Data from Files
Chapter 3: Reading Time Series Data from Databases
Chapter 4: Persisting Time Series Data to Files
Chapter 5: Persisting Time Series Data to Databases
Chapter 6: Working with Date and Time in Python
Chapter 7: Handling Missing Data
Chapter 8: Outlier Detection Using Statistical Methods
Chapter 9: Exploratory Data Analysis and Diagnosis
Chapter 10: Building Univariate Time Series Models Using Statistical Methods
Chapter 11: Additional Statistical Modeling Techniques for Time Series
Chapter 12: Forecasting Using Supervised Machine Learning
Chapter 13: Deep Learning for Time Series Forecasting
Chapter 14: Outlier Detection Using Unsupervised Machine Learning
Chapter 15: Advanced Techniques for Complex Time Series
Index
Other Books You May Enjoy

Forecasting with an RNN using Keras

RNNs first entered the spotlight through Natural Language Processing (NLP), as they were designed for sequential data, where past observations, such as words, strongly influence the next word in a sentence. This need for a neural network that retains memory (a hidden state) inspired the RNN architecture. Time series data is likewise sequential: past observations influence future observations, so it too calls for a network with memory. By contrast, an artificial neural network like the one in Figure 13.1 is a Feed-Forward Artificial Neural Network (FFN): as the arrows pointing from nodes in one layer to the next indicate, information flows in one direction only, with no cycles. In an RNN, a feedback loop feeds the output of a node or neuron back (the recurrent part) as input, allowing the network to carry information forward from a prior time step, acting as a memory. Figure 13.3 shows a recurrent cell...
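The feedback loop described above can be sketched in a few lines of plain NumPy. This is an illustrative toy, not the book's code: a single recurrent cell with randomly initialized weights (all names here are my own, hypothetical choices), updating a hidden state h step by step over a sequence. It mirrors what a Keras SimpleRNN layer computes internally.

```python
import numpy as np

# Illustrative sketch of one recurrent cell (assumed names, not from the book).
# The hidden state h is the network's memory: at each time step it mixes the
# new input x_t with the previous state h_{t-1} -- the feedback loop that
# distinguishes an RNN from a feed-forward network.

rng = np.random.default_rng(42)

n_features, n_units = 1, 4
W_x = rng.normal(scale=0.5, size=(n_features, n_units))  # input-to-hidden weights
W_h = rng.normal(scale=0.5, size=(n_units, n_units))     # hidden-to-hidden (recurrent) weights
b = np.zeros(n_units)                                    # bias

def rnn_forward(sequence):
    """Return the final hidden state after processing the whole sequence."""
    h = np.zeros(n_units)  # memory starts empty
    for x_t in sequence:
        # Recurrence: the new state depends on the input AND the prior state.
        h = np.tanh(x_t @ W_x + h @ W_h + b)
    return h

seq = rng.normal(size=(10, n_features))  # 10 time steps, 1 feature each
h_final = rnn_forward(seq)
print(h_final.shape)  # one value per hidden unit
```

Because h carries information across steps, reordering the input sequence changes the final state, which is exactly why this architecture suits ordered data like time series; a feed-forward network summing the same inputs would be blind to their order.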
