Hands-On Machine Learning on Google Cloud Platform

Product type: Book
Published: April 2018
Publisher: Packt
ISBN-13: 9781788393485
Pages: 500
Edition: 1st
Authors (3): Giuseppe Ciaburro, V Kishore Ayyadevara, Alexis Perrier

Table of Contents (18 Chapters)

Preface
1. Introducing the Google Cloud Platform
2. Google Compute Engine
3. Google Cloud Storage
4. Querying Your Data with BigQuery
5. Transforming Your Data
6. Essential Machine Learning
7. Google Machine Learning APIs
8. Creating ML Applications with Firebase
9. Neural Networks with TensorFlow and Keras
10. Evaluating Results with TensorBoard
11. Optimizing the Model through Hyperparameter Tuning
12. Preventing Overfitting with Regularization
13. Beyond Feedforward Networks – CNN and RNN
14. Time Series with LSTMs
15. Reinforcement Learning
16. Generative Neural Networks
17. Chatbots

Long short-term memory networks

LSTM is a particular RNN architecture, originally conceived by Hochreiter and Schmidhuber in 1997. This type of neural network has recently been rediscovered in the context of deep learning because it is free from the vanishing gradient problem, and in practice it offers excellent results and performance.
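
As an illustrative sketch only (this is not code from the chapter), an LSTM layer can be defined in a few lines with Keras, which the book uses elsewhere; the layer size, input shape, and random training data below are made-up placeholders:

# Minimal sketch, assuming TensorFlow/Keras is installed.
# The shapes (10 timesteps, 8 features) and 32 memory cells are illustrative only.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

model = Sequential([
    LSTM(32, input_shape=(10, 8)),   # 32 LSTM cells reading sequences of 10 steps, 8 features each
    Dense(1)                         # single regression output
])
model.compile(optimizer='adam', loss='mse')

# Random data, just to show the expected tensor shapes (samples, timesteps, features)
X = np.random.rand(100, 10, 8).astype('float32')
y = np.random.rand(100, 1).astype('float32')
model.fit(X, y, epochs=2, verbose=0)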

The vanishing gradient problem affects the training of ANNs with gradient-based learning methods. In gradient-based methods such as backpropagation, weights are adjusted in proportion to the gradient of the error. Because of the way these gradients are calculated, their magnitude decreases exponentially as we proceed towards the deeper layers. The problem is that in some cases the gradient becomes vanishingly small, effectively preventing the weight from changing its value. In the worst case...
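
A toy calculation (not taken from the book) makes the exponential shrinkage concrete: backpropagation multiplies one chain-rule factor per layer, and with a sigmoid activation each factor is at most 0.25, so the product decays roughly geometrically with depth. The pre-activation value and weight below are arbitrary choices for illustration.

# Toy illustration of gradient decay across layers; x and w are arbitrary.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(x):
    s = sigmoid(x)
    return s * (1.0 - s)          # never larger than 0.25

x = 0.5                           # an arbitrary pre-activation value
w = 0.8                           # an arbitrary weight with |w| < 1
grad = 1.0
for layer in range(1, 31):
    grad *= w * sigmoid_derivative(x)   # one chain-rule factor per layer
    if layer % 10 == 0:
        print("after %2d layers the gradient factor is about %.3e" % (layer, grad))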
