The Deep Learning Architect's Handbook

Product type: Book
Published: Dec 2023
Publisher: Packt
ISBN-13: 9781803243795
Pages: 516
Edition: 1st
Author: Ee Kin Chin

Table of Contents (25 chapters)

Preface
Part 1 – Foundational Methods
Chapter 1: Deep Learning Life Cycle
Chapter 2: Designing Deep Learning Architectures
Chapter 3: Understanding Convolutional Neural Networks
Chapter 4: Understanding Recurrent Neural Networks
Chapter 5: Understanding Autoencoders
Chapter 6: Understanding Neural Network Transformers
Chapter 7: Deep Neural Architecture Search
Chapter 8: Exploring Supervised Deep Learning
Chapter 9: Exploring Unsupervised Deep Learning
Part 2 – Multimodal Model Insights
Chapter 10: Exploring Model Evaluation Methods
Chapter 11: Explaining Neural Network Predictions
Chapter 12: Interpreting Neural Networks
Chapter 13: Exploring Bias and Fairness
Chapter 14: Analyzing Adversarial Performance
Part 3 – DLOps
Chapter 15: Deploying Deep Learning Models to Production
Chapter 16: Governing Deep Learning Models
Chapter 17: Managing Drift Effectively in a Dynamic Environment
Chapter 18: Exploring the DataRobot AI Platform
Chapter 19: Architecting LLM Solutions
Index
Other Books You May Enjoy

Building a CNN autoencoder

Let’s start by going through what a transpose convolution is. Figure 5.3 shows an example transpose convolution operation on a 2x2 input with a 2x2 convolutional filter and a stride of 1.

Figure 5.3 – A transposed convolutional filter operation

In Figure 5.3, each element of the 2x2 input is marked with a number from 1 to 4; these numbers map each input value to its contribution in the 3x3 output. The kernel applies its full set of weights to every individual input value in a sliding-window manner, and the outputs of the four resulting operations are shown in the bottom part of the figure. These four partial outputs are then added elementwise at their overlapping positions, and a bias is applied to produce the final output. This process shows how a 2x2 input can be upsampled to a 3x3 size without relying entirely on padding.
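To make this concrete, here is a minimal sketch of the same operation using PyTorch’s nn.ConvTranspose2d (PyTorch and the specific kernel values are assumptions for illustration, not taken from the figure). A 2x2 input and a 2x2 kernel with a stride of 1 and no padding produce a 3x3 output, with the overlapping contributions summed exactly as described above:

```python
import torch
import torch.nn as nn

# The 2x2 input, with its four elements corresponding to positions 1 to 4
x = torch.tensor([[1., 2.],
                  [3., 4.]]).reshape(1, 1, 2, 2)

# A transposed convolution with a 2x2 kernel, stride 1, and no padding;
# bias is disabled so the output shows only the summed kernel contributions
tconv = nn.ConvTranspose2d(in_channels=1, out_channels=1,
                           kernel_size=2, stride=1, bias=False)

# Fix the kernel weights so the result is reproducible (illustrative values)
with torch.no_grad():
    tconv.weight.copy_(torch.tensor([[10., 20.],
                                     [30., 40.]]).reshape(1, 1, 2, 2))

y = tconv(x)
print(y.shape)  # torch.Size([1, 1, 3, 3]): the 2x2 input becomes a 3x3 output
print(y)        # overlapping contributions from the four products are summed
```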

Let’s implement...
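As a rough sketch of where this is heading (illustrative only, with an assumed 28x28 single-channel input and arbitrary layer widths rather than the book’s actual implementation), a small CNN autoencoder can pair a strided-convolution encoder with a transposed-convolution decoder:

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Illustrative CNN autoencoder: the encoder downsamples with strided
    convolutions and the decoder upsamples with transposed convolutions."""
    def __init__(self):
        super().__init__()
        # Encoder: 1x28x28 -> 16x14x14 -> 32x7x7
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )
        # Decoder mirrors the encoder: 32x7x7 -> 16x14x14 -> 1x28x28
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=3, stride=2,
                               padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=3, stride=2,
                               padding=1, output_padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvAutoencoder()
x = torch.randn(8, 1, 28, 28)   # a batch of 8 single-channel images
reconstruction = model(x)
print(reconstruction.shape)     # torch.Size([8, 1, 28, 28])
```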
