The Deep Learning Architect's Handbook

Product type: Book
Published: Dec 2023
Publisher: Packt
ISBN-13: 9781803243795
Pages: 516
Edition: 1st
Author: Ee Kin Chin

Table of Contents (25 chapters)

Preface
Part 1 – Foundational Methods
Chapter 1: Deep Learning Life Cycle
Chapter 2: Designing Deep Learning Architectures
Chapter 3: Understanding Convolutional Neural Networks
Chapter 4: Understanding Recurrent Neural Networks
Chapter 5: Understanding Autoencoders
Chapter 6: Understanding Neural Network Transformers
Chapter 7: Deep Neural Architecture Search
Chapter 8: Exploring Supervised Deep Learning
Chapter 9: Exploring Unsupervised Deep Learning
Part 2 – Multimodal Model Insights
Chapter 10: Exploring Model Evaluation Methods
Chapter 11: Explaining Neural Network Predictions
Chapter 12: Interpreting Neural Networks
Chapter 13: Exploring Bias and Fairness
Chapter 14: Analyzing Adversarial Performance
Part 3 – DLOps
Chapter 15: Deploying Deep Learning Models to Production
Chapter 16: Governing Deep Learning Models
Chapter 17: Managing Drift Effectively in a Dynamic Environment
Chapter 18: Exploring the DataRobot AI Platform
Chapter 19: Architecting LLM Solutions
Index
Other Books You May Enjoy

Exploring autoencoder variations

For tabular data, the network structure can be quite straightforward: an MLP whose encoder uses multiple fully connected layers that gradually shrink the number of features, and whose decoder uses multiple fully connected layers that gradually expand the representation back to the same dimension and size as the input.
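
As a rough illustration, the following is a minimal PyTorch sketch of such a tabular autoencoder. The input width of 20 features and the 16 → 8 → 4 layer sizes are illustrative assumptions, not values from the chapter.

```python
import torch
import torch.nn as nn

class TabularAutoencoder(nn.Module):
    def __init__(self, num_features=20):  # 20 is an assumed feature count
        super().__init__()
        # Encoder: fully connected layers that gradually shrink the features
        self.encoder = nn.Sequential(
            nn.Linear(num_features, 16), nn.ReLU(),
            nn.Linear(16, 8), nn.ReLU(),
            nn.Linear(8, 4),  # bottleneck representation
        )
        # Decoder: fully connected layers that expand back to the input size
        self.decoder = nn.Sequential(
            nn.Linear(4, 8), nn.ReLU(),
            nn.Linear(8, 16), nn.ReLU(),
            nn.Linear(16, num_features),  # same dimension and size as the input
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TabularAutoencoder()
x = torch.randn(32, 20)                      # a batch of 32 tabular rows
loss = nn.functional.mse_loss(model(x), x)   # reconstruction objective
```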

For time-series or sequential data, RNN-based autoencoders can be used. One of the most cited research works related to RNN-based autoencoders is a version in which LSTM-based encoders and decoders are used: Sequence to Sequence Learning with Neural Networks by Ilya Sutskever, Oriol Vinyals, and Quoc V. Le (https://arxiv.org/abs/1409.3215). Rather than stacking encoder and decoder LSTMs vertically, with each layer consuming the hidden-state output sequence of the layer below, the decoder continues the sequential flow from the final hidden state of the encoder LSTM and outputs the reconstructed input in reversed order...
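
A minimal PyTorch sketch of this idea follows. The single-layer LSTMs, the hidden size of 32, and the zero vector used as the decoder's start input are illustrative assumptions rather than the paper's exact setup; the key point is that the decoder picks up the encoder's final (hidden, cell) state and is trained to emit the input sequence in reverse.

```python
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    def __init__(self, num_features=8, hidden_size=32):  # assumed sizes
        super().__init__()
        self.encoder = nn.LSTM(num_features, hidden_size, batch_first=True)
        self.decoder = nn.LSTM(num_features, hidden_size, batch_first=True)
        self.output = nn.Linear(hidden_size, num_features)

    def forward(self, x):                      # x: (batch, seq_len, features)
        seq_len = x.size(1)
        # Encode the whole sequence; keep only the final hidden/cell state.
        _, state = self.encoder(x)
        # Decode step by step, continuing sequentially from the encoder state.
        step = torch.zeros_like(x[:, :1])      # zero "start" input (assumption)
        outputs = []
        for _ in range(seq_len):
            out, state = self.decoder(step, state)
            step = self.output(out)            # feed each prediction back in
            outputs.append(step)
        recon = torch.cat(outputs, dim=1)
        # Train against the reversed input, per the reversed-order reconstruction.
        return recon, torch.flip(x, dims=[1])

model = LSTMAutoencoder()
x = torch.randn(4, 10, 8)                     # 4 sequences of length 10
recon, target = model(x)
loss = nn.functional.mse_loss(recon, target)
```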
