Causal Inference and Discovery in Python

Product type: Book
Published in: May 2023
Publisher: Packt
ISBN-13: 9781804612989
Pages: 456
Edition: 1st
Author: Aleksander Molak

Table of Contents (21 chapters)

Preface
Part 1: Causality – an Introduction
Chapter 1: Causality – Hey, We Have Machine Learning, So Why Even Bother?
Chapter 2: Judea Pearl and the Ladder of Causation
Chapter 3: Regression, Observations, and Interventions
Chapter 4: Graphical Models
Chapter 5: Forks, Chains, and Immoralities
Part 2: Causal Inference
Chapter 6: Nodes, Edges, and Statistical (In)dependence
Chapter 7: The Four-Step Process of Causal Inference
Chapter 8: Causal Models – Assumptions and Challenges
Chapter 9: Causal Inference and Machine Learning – from Matching to Meta-Learners
Chapter 10: Causal Inference and Machine Learning – Advanced Estimators, Experiments, Evaluations, and More
Chapter 11: Causal Inference and Machine Learning – Deep Learning, NLP, and Beyond
Part 3: Causal Discovery
Chapter 12: Can I Have a Causal Graph, Please?
Chapter 13: Causal Discovery and Machine Learning – from Assumptions to Applications
Chapter 14: Causal Discovery and Machine Learning – Advanced Deep Learning and Beyond
Chapter 15: Epilogue
Index
Other Books You May Enjoy

Exchangeability

In this section, we’ll introduce the exchangeability assumption (also known as the ignorability assumption) and discuss its relation to confounding.

Exchangeable subjects

The main idea behind exchangeability is the following: the treated subjects, had they been untreated, would have experienced the same average outcome as the untreated did (being actually untreated), and vice versa (Hernán & Robins, 2020).

Formally speaking, exchangeability is usually defined as:

{Y⁰, Y¹} ⫫ T | Z

In the preceding formula, Y⁰ and Y¹ are counterfactual outcomes under T = 0 and T = 1 respectively, and Z is a vector of control variables. If you’re getting a feeling of confusion or even circularity when thinking about this definition, you’re most likely not alone. According to Pearl (2009), many people find this definition difficult to understand.

At the same time, the core idea behind it is simple: the treated and the untreated need to share all the relevant characteristics...
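To build some intuition, here is a minimal simulated sketch (not from the book; the data-generating process and all numbers are illustrative). A confounder Z drives both treatment assignment and the outcome, so the treated and the untreated are not exchangeable marginally, but they are exchangeable within strata of Z:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Confounder Z affects both treatment assignment and the outcome.
z = rng.binomial(1, 0.5, size=n)

# Potential outcomes: treatment adds a constant effect of 1.0.
y0 = 2.0 * z + rng.normal(0.0, 1.0, size=n)  # outcome if untreated
y1 = y0 + 1.0                                # outcome if treated

# Treatment is more likely when Z = 1, so treated and untreated
# differ marginally -- unconditional exchangeability fails.
t = rng.binomial(1, np.where(z == 1, 0.8, 0.2))
y = np.where(t == 1, y1, y0)  # observed outcome

# Naive comparison is biased upward by confounding.
naive = y[t == 1].mean() - y[t == 0].mean()

# Within each stratum of Z, treated and untreated are comparable,
# so the stratified (Z-weighted) estimate recovers the true effect.
strata_effects = [
    y[(t == 1) & (z == s)].mean() - y[(t == 0) & (z == s)].mean()
    for s in (0, 1)
]
adjusted = np.average(strata_effects,
                      weights=[np.mean(z == 0), np.mean(z == 1)])

print(round(naive, 2), round(adjusted, 2))  # naive is biased; adjusted ≈ 1.0
```

Conditioning on Z is exactly what makes the conditional exchangeability statement in the formula above hold in this toy setup.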

…and more

In this short section, we’ll introduce and briefly discuss three assumptions: the modularity assumption, the stable unit treatment value assumption (SUTVA), and the consistency assumption.

Modularity

Imagine that you’re standing on the rooftop of a tall building and you’re dropping two apples. Halfway down, there’s a net that catches one of the apples.

The net performs an intervention for one of the apples, yet the second apple remains unaffected.

That’s the essence of the modularity assumption, also known as the independent mechanisms assumption.

Speaking more formally, if we perform an intervention on a single variable <mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:m="http://schemas.openxmlformats.org/officeDocument/2006/math"><mml:mi>X</mml:mi></mml:math>, the structural equation for this variable will be changed (for example, set to a constant), yet all other structural equations in our system of interest will remain untouched.

The modularity assumption is central to do-calculus, as it’s at the core of the logic of interventions.
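A small simulated sketch can make this concrete (a hypothetical structural causal model, not one from the book). We intervene on X by replacing only X’s structural equation with a constant; the mechanisms for Z and Y are left untouched:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# A toy SCM with independent mechanisms: Z -> X -> Y.
def f_z(noise):
    return noise

def f_x(z, noise):
    return 0.5 * z + noise

def f_y(x, noise):
    return 2.0 * x + noise

def sample(do_x=None):
    z = f_z(rng.normal(size=n))
    # do(X = x) swaps out only X's structural equation;
    # the equations for Z and Y stay exactly as they were.
    x = np.full(n, float(do_x)) if do_x is not None else f_x(z, rng.normal(size=n))
    y = f_y(x, rng.normal(size=n))
    return z, x, y

z_obs, _, _ = sample()            # observational regime
z_do, _, y_do = sample(do_x=1.0)  # interventional regime, do(X = 1)

# Z's mechanism is unaffected by the intervention on X,
# while Y now reflects the fixed value of X.
print(round(z_obs.mean(), 2), round(z_do.mean(), 2), round(y_do.mean(), 2))
```

Comparing the two regimes, the distribution of Z is unchanged under do(X = 1) while the distribution of Y shifts, mirroring the apple that the net never touched.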

Let’s see...

Call me names – spurious relationships in the wild

Don’t you feel that when we talk about spurious relationships and unobserved confounding, it’s almost like we’re talking about good old friends now? Maybe they are trouble sometimes, yet they just feel so familiar it’s hard to imagine the future without them.

We will start this section with a reflection on naming conventions regarding bias/spurious relationships/confounding across the fields. In the second part of the section, we’ll discuss selection bias as a special subtype of spuriousness that plays an important role in epidemiology.

Names, names, names

Oh boy! Reading about causality across domains can be a confusing experience! Some authors suggest using the term confounding only when there’s a common cause of the treatment and the outcome (Peters et al., 2017, p. 172; Hernán & Robins, 2020, p. 103); others also allow using this term in other cases of spuriousness...

Wrapping it up

In this chapter, we talked about the challenges that we face while using causal inference methods in practice. We discussed important assumptions and proposed potential solutions to some of the discussed challenges. We got back to the topic of confounding and showed examples of selection bias.

The four most important concepts from this chapter are identifiability, the positivity assumption, modularity, and selection bias.

Are you ready to add some machine learning sauce to all we’ve learned so far?

References

Altucher, J., and Altucher, C. A. (2014). The Power of No: Because One Little Word Can Bring Health, Abundance, and Happiness. Hay House.

Balestriero, R., Pesenti, J., and LeCun, Y. (2021). Learning in High Dimension Always Amounts to Extrapolation. arXiv, abs/2110.09485.

Cinelli, C., and Hazlett, C. (2020). Making Sense of Sensitivity: Extending Omitted Variable Bias. Journal of the Royal Statistical Society, Series B: Statistical Methodology 81(1), 39-67.

Curth, A., Svensson, D., Weatherall, J., and van der Schaar, M. (2021). Really Doing Great at Estimating CATE? A Critical Look at ML Benchmarking Practices in Treatment Effect Estimation. Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks.

Chernozhukov, V., Cinelli, C., Newey, W., Sharma, A., and Syrgkanis, V. (2022). Long Story Short: Omitted Variable Bias in Causal Machine Learning (Working Paper No. 30302; Working Paper Series). National Bureau of Economic Research.

Donnely...
