The Deep Learning Architect's Handbook

Product type: Book
Published: Dec 2023
Publisher: Packt
ISBN-13: 9781803243795
Pages: 516
Edition: 1st
Author: Ee Kin Chin

Table of Contents (25 chapters)

Preface

Part 1 – Foundational Methods
Chapter 1: Deep Learning Life Cycle
Chapter 2: Designing Deep Learning Architectures
Chapter 3: Understanding Convolutional Neural Networks
Chapter 4: Understanding Recurrent Neural Networks
Chapter 5: Understanding Autoencoders
Chapter 6: Understanding Neural Network Transformers
Chapter 7: Deep Neural Architecture Search
Chapter 8: Exploring Supervised Deep Learning
Chapter 9: Exploring Unsupervised Deep Learning

Part 2 – Multimodal Model Insights
Chapter 10: Exploring Model Evaluation Methods
Chapter 11: Explaining Neural Network Predictions
Chapter 12: Interpreting Neural Networks
Chapter 13: Exploring Bias and Fairness
Chapter 14: Analyzing Adversarial Performance

Part 3 – DLOps
Chapter 15: Deploying Deep Learning Models to Production
Chapter 16: Governing Deep Learning Models
Chapter 17: Managing Drift Effectively in a Dynamic Environment
Chapter 18: Exploring the DataRobot AI Platform
Chapter 19: Architecting LLM Solutions

Index
Other Books You May Enjoy

Discovering the counterfactual explanation strategy

Counterfactual explanation, or counterfactual reasoning, is a method of understanding and explaining outcomes by considering alternative, "what-if" scenarios. In the context of prediction explanations, it involves identifying changes to the input data that would lead to a different outcome; ideally, the minimal such changes. In the context of NN interpretation, it involves visualizing the opposite of the target label or of intermediate latent features. This approach works well because it closely mirrors how humans naturally explain events and assess causality, which ultimately helps us better comprehend the model's underlying decision-making process.
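To make the idea concrete, the following is a minimal sketch (not code from this book) of one common way to search for a counterfactual: gradient-based optimization of the input itself. It assumes a differentiable PyTorch classifier named model over tabular features, a single input tensor x, and a desired alternative class target; the L1 distance penalty and the dist_weight hyperparameter are illustrative choices that encourage the smallest possible change to the original input.

# Illustrative gradient-based counterfactual search (assumed PyTorch model).
import torch
import torch.nn.functional as F

def find_counterfactual(model, x, target, dist_weight=0.1, steps=500, lr=0.01):
    model.eval()
    x = x.detach()
    x_cf = x.clone().requires_grad_(True)      # candidate counterfactual, optimized in place
    optimizer = torch.optim.Adam([x_cf], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x_cf.unsqueeze(0))
        # Push the prediction toward the desired counterfactual class...
        pred_loss = F.cross_entropy(logits, torch.tensor([target]))
        # ...while keeping the change to the original input as small as possible.
        dist_loss = torch.norm(x_cf - x, p=1)
        loss = pred_loss + dist_weight * dist_loss
        loss.backward()
        optimizer.step()

    return x_cf.detach()

The difference x_cf - x then highlights which features had to change, and by how much, for the model to predict the alternative class, which is exactly the kind of "what would have had to be different" statement a counterfactual explanation provides.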

Humans tend to think in terms of cause and effect, and we often explore alternative possibilities to make sense of events or decisions. For example, when trying to understand why a certain decision was made,...
