The Deep Learning Architect's Handbook

Product type: Book
Published: Dec 2023
Publisher: Packt
ISBN-13: 9781803243795
Pages: 516
Edition: 1st
Author: Ee Kin Chin

Table of Contents (25 chapters)

Preface
Part 1 – Foundational Methods
Chapter 1: Deep Learning Life Cycle
Chapter 2: Designing Deep Learning Architectures
Chapter 3: Understanding Convolutional Neural Networks
Chapter 4: Understanding Recurrent Neural Networks
Chapter 5: Understanding Autoencoders
Chapter 6: Understanding Neural Network Transformers
Chapter 7: Deep Neural Architecture Search
Chapter 8: Exploring Supervised Deep Learning
Chapter 9: Exploring Unsupervised Deep Learning
Part 2 – Multimodal Model Insights
Chapter 10: Exploring Model Evaluation Methods
Chapter 11: Explaining Neural Network Predictions
Chapter 12: Interpreting Neural Networks
Chapter 13: Exploring Bias and Fairness
Chapter 14: Analyzing Adversarial Performance
Part 3 – DLOps
Chapter 15: Deploying Deep Learning Models to Production
Chapter 16: Governing Deep Learning Models
Chapter 17: Managing Drift Effectively in a Dynamic Environment
Chapter 18: Exploring the DataRobot AI Platform
Chapter 19: Architecting LLM Solutions
Index
Other Books You May Enjoy

Exploring the foundations of neural networks using an MLP

A deep learning architecture is created when at least three perceptron layers are used, excluding the input layer. A perceptron is a single-layer network consisting of neuron units. Each neuron unit holds a bias variable and acts as a node to which connections (edges) attach. Neurons interact with the neurons of an adjacent layer through these connections, each of which carries a weight. A perceptron layer is also known as a fully connected layer or dense layer, and MLPs are also known as feedforward neural networks or fully connected neural networks.
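
To make this concrete, here is a minimal sketch of such an MLP in PyTorch (the framework choice and the layer sizes are assumptions for illustration, not the book's own example). It stacks three dense layers beyond the input, the minimum described above, and each nn.Linear stores one weight per connection and one bias per neuron:

import torch.nn as nn

# Minimal MLP sketch with illustrative sizes: three fully connected (dense)
# layers beyond the input layer.
mlp = nn.Sequential(
    nn.Linear(3, 8),   # input features (3) -> first hidden layer (8 neurons)
    nn.ReLU(),
    nn.Linear(8, 8),   # first hidden layer -> second hidden layer
    nn.ReLU(),
    nn.Linear(8, 1),   # second hidden layer -> output layer (1 neuron)
)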

Let’s refer back to the MLP figure from the previous chapter to get a better idea.

Figure 2.1 – Simple deep learning architecture, also called an MLP

The figure shows how three data column inputs are passed into the input layer, then propagated to the hidden layer, and finally to the output layer. Although not...
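
As a rough illustration of that propagation (with made-up input values, an assumed hidden-layer width, and an assumed ReLU activation, since the excerpt cuts off before specifying these), the forward pass reduces to a weight multiplication plus a bias at each layer:

import numpy as np

rng = np.random.default_rng(0)

# Three data column inputs, as in the figure (values are made up).
x = np.array([0.2, -1.0, 0.5])

# Input layer -> hidden layer: one weight per connection, one bias per neuron.
W_hidden = rng.normal(size=(4, 3))                  # 3 inputs feeding 4 hidden neurons
b_hidden = np.zeros(4)
hidden = np.maximum(0.0, W_hidden @ x + b_hidden)   # ReLU activation (assumed)

# Hidden layer -> output layer.
W_out = rng.normal(size=(1, 4))                     # 4 hidden neurons feeding 1 output
b_out = np.zeros(1)
output = W_out @ hidden + b_out
print(output)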
