Causal Inference and Discovery in Python

Score-based causal discovery

In this section, we’ll introduce score-based methods for causal discovery. We’ll discuss the mechanics of the GES algorithm and implement it using gCastle.

Tabula rasa – starting fresh

The very first step of the PC algorithm was to build a fully connected graph. GES starts at the other end of the spectrum, but first things first.

GES is a two-stage procedure: first it adds edges, then it prunes the graph.

The algorithm starts with a blank slate – an entirely disconnected graph – and iteratively adds new edges. At each step, it computes a score that expresses how well each candidate graph models the observed distribution, and the edge that leads to the highest score is added to the graph.
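
To make the forward phase concrete, here is a minimal, self-contained sketch (not the book's or gCastle's code) that greedily adds one directed edge at a time using a per-node BIC score for linear-Gaussian data. Note that the actual GES algorithm searches over equivalence classes of graphs (CPDAGs) rather than individual DAGs; the names node_bic, creates_cycle, and forward_phase are illustrative.

import numpy as np

def node_bic(X, node, parents):
    """BIC contribution of one node given its parents (linear-Gaussian model)."""
    n = X.shape[0]
    y = X[:, node]
    if parents:
        Z = np.column_stack([np.ones(n), X[:, parents]])
        resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    else:
        resid = y - y.mean()
    k = len(parents) + 1  # number of free parameters
    return -0.5 * n * np.log(resid.var() + 1e-12) - 0.5 * k * np.log(n)

def creates_cycle(parents, i, j):
    """Would adding the edge i -> j create a directed cycle (i.e., is i reachable from j)?"""
    stack, seen = [j], set()
    while stack:
        node = stack.pop()
        if node == i:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(c for c in parents if node in parents[c])
    return False

def forward_phase(X):
    """Greedily add the single best-scoring edge until no addition improves the score."""
    d = X.shape[1]
    parents = {j: [] for j in range(d)}  # start from the fully disconnected graph
    while True:
        best_gain, best_edge = 0.0, None
        for i in range(d):
            for j in range(d):
                if i == j or i in parents[j] or creates_cycle(parents, i, j):
                    continue
                gain = node_bic(X, j, parents[j] + [i]) - node_bic(X, j, parents[j])
                if gain > best_gain:
                    best_gain, best_edge = gain, (i, j)
        if best_edge is None:  # no single edge addition improves the score
            return parents
        parents[best_edge[1]].append(best_edge[0])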

When no more improvement can be achieved, the pruning phase begins. In this phase, the algorithm iteratively removes edges and checks whether the score improves. The phase continues until no further improvement is possible.
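
To run the full two-phase procedure in practice, we can hand the data to gCastle. The snippet below is a minimal sketch that assumes gCastle's GES estimator follows the library's usual learn() / causal_matrix interface and accepts a BIC criterion; the data comes from a small hypothetical linear SEM (x -> y -> z, plus a direct edge x -> z) rather than from the book's datasets.

import numpy as np
from castle.algorithms import GES

rng = np.random.default_rng(42)
n_samples = 1000

# Hypothetical linear-Gaussian SEM: x -> y -> z, plus a direct edge x -> z
x = rng.normal(size=n_samples)
y = 0.8 * x + rng.normal(scale=0.5, size=n_samples)
z = 0.7 * y + 0.3 * x + rng.normal(scale=0.5, size=n_samples)
X = np.column_stack([x, y, z])

ges = GES(criterion='bic')  # score-based greedy equivalence search with a BIC score
ges.learn(X)

print(ges.causal_matrix)  # adjacency matrix of the recovered graph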
