Exchangeability

In this section, we’ll introduce the exchangeability assumption (also known as the ignorability assumption) and discuss its relation to confounding.

Exchangeable subjects

The main idea behind exchangeability is the following: the treated subjects, had they been untreated, would have experienced the same average outcome as the untreated subjects actually did, and vice versa (Hernán & Robins, 2020).

Formally speaking, exchangeability is usually defined as:

$$\{Y^0, Y^1\} \perp\!\!\!\perp T \mid Z$$

In the preceding formula, $Y^0$ and $Y^1$ are counterfactual outcomes under $T = 0$ and $T = 1$ respectively, and $Z$ is a vector of control variables. If you’re getting a feeling of confusion or even circularity when thinking about this definition, you’re most likely not alone. According to Pearl (2009), many people see this definition as difficult to understand.
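To build some intuition, here’s a minimal simulation sketch (a toy example of my own, with made-up numbers, not code from the book). A single binary confounder $Z$ drives both treatment assignment and the potential outcomes, so the naive difference in means is biased; within each stratum of $Z$, however, the treated and untreated are exchangeable, and standardizing over $Z$ recovers the true average effect of 2:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# A single binary confounder Z affects both treatment assignment and the outcome
z = rng.binomial(1, 0.5, size=n)

# Potential outcomes: Y^0 depends on Z, and the true individual effect is +2
y0 = 1.0 + 3.0 * z + rng.normal(0.0, 1.0, size=n)
y1 = y0 + 2.0

# Treatment is more likely when Z = 1, so T and (Y^0, Y^1) are marginally
# dependent -- but they are independent within each stratum of Z
t = rng.binomial(1, np.where(z == 1, 0.8, 0.2))

# Observed (factual) outcome
y = np.where(t == 1, y1, y0)

# The naive contrast is biased by confounding (roughly 3.8 here instead of 2)
naive = y[t == 1].mean() - y[t == 0].mean()

# Standardizing the stratum-specific contrasts over P(Z) recovers the ATE (~2)
adjusted = sum(
    (y[(t == 1) & (z == v)].mean() - y[(t == 0) & (z == v)].mean()) * np.mean(z == v)
    for v in (0, 1)
)

print(f"naive: {naive:.2f}, adjusted: {adjusted:.2f}")
```

The point of the sketch is only to illustrate the definition: once we condition on $Z$, treatment assignment carries no information about the potential outcomes, so comparing treated and untreated units within strata of $Z$ is a fair comparison.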

At the same time, the core idea behind it is simple: the treated and the untreated need to share all the relevant characteristics...

…and more

In this short section, we’ll introduce and briefly discuss three assumptions: the modularity assumption, the stable unit treatment value assumption (SUTVA), and the consistency assumption.

Modularity

Imagine that you’re standing on the rooftop of a tall building and you’re dropping two apples. Halfway down, there’s a net that catches one of the apples.

The net performs an intervention on one of the apples, yet the second apple remains unaffected.

That’s the essence of the modularity assumption, also known as the independent mechanisms assumption.

Speaking more formally, if we perform an intervention on a single variable $X$, the structural equation for this variable will be changed (for example, set to a constant), yet all other structural equations in our system of interest will remain untouched.

The modularity assumption is central to do-calculus, as it lies at the core of the logic of interventions.
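As a minimal sketch of this idea (a toy SCM of my own, not a model from the book), the intervention $do(X = x_0)$ below replaces only the structural equation for $X$; the equations for the remaining variables are reused exactly as they are, which is what the modularity assumption licenses:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# A toy structural causal model: Z -> X -> Y (illustrative equations)
def sample(do_x=None):
    z = rng.normal(0.0, 1.0, size=n)                  # Z := noise
    if do_x is None:
        x = 2.0 * z + rng.normal(0.0, 1.0, size=n)    # X := 2Z + noise (observational regime)
    else:
        x = np.full(n, do_x)                          # do(X = x0): only X's equation is replaced...
    y = 3.0 * x + rng.normal(0.0, 1.0, size=n)        # ...Y's (and Z's) equations stay the same
    return z, x, y

z_obs, x_obs, y_obs = sample()             # observational data
z_int, x_int, y_int = sample(do_x=1.0)     # interventional data under do(X = 1)

# Z's marginal distribution is untouched by the intervention (modularity),
# while Y changes only through the new value of X
print(z_obs.mean(), z_int.mean())   # both approximately 0
print(y_int.mean())                 # approximately 3.0 = 3 * 1.0
```

Under $do(X = 1)$, the distribution of $Z$ is unchanged and $Y$ changes only through the new value of $X$: each mechanism behaves as an independent module.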

Let’s see...

Call me names – spurious relationships in the wild

Don’t you feel that when we talk about spurious relationships and unobserved confounding, it’s almost like we’re talking about good old friends now? Maybe they are trouble sometimes, yet they just feel so familiar it’s hard to imagine the future without them.

We will start this section with a reflection on naming conventions regarding bias/spurious relationships/confounding across the fields. In the second part of the section, we’ll discuss selection bias as a special subtype of spuriousness that plays an important role in epidemiology.

Names, names, names

Oh boy! Reading about causality across domains can be a confusing experience! Some authors suggest using the term confounding only when there’s a common cause of the treatment and the outcome (Peters et al., 2017, p. 172; Hernán & Robins, 2020, p. 103); others allow using this term also in other cases of spuriousness...
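One of those other cases is selection bias, which we flagged at the beginning of this section as a special subtype of spuriousness. As a small illustrative sketch (my own toy example, not from the book), conditioning on a variable that is a common effect of two independent causes – a collider – induces an association between them even though neither causes the other and they share no common cause:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

# Two independent traits (hypothetical example)
talent = rng.normal(0.0, 1.0, size=n)
looks = rng.normal(0.0, 1.0, size=n)

# Selection into the sample depends on both traits -> selection is a collider
selected = (talent + looks + rng.normal(0.0, 0.5, size=n)) > 1.5

# Marginally, the two traits are uncorrelated...
print(np.corrcoef(talent, looks)[0, 1])                        # approximately 0

# ...but restricting the analysis to the selected units induces
# a spurious (here, negative) association between them
print(np.corrcoef(talent[selected], looks[selected])[0, 1])    # noticeably below 0
```

Restricting an analysis to units that made it into the sample is exactly this kind of conditioning, which is one reason selection bias plays such an important role in epidemiology.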

Wrapping it up

In this chapter, we talked about the challenges that we face while using causal inference methods in practice. We discussed important assumptions and proposed potential solutions to some of the discussed challenges. We got back to the topic of confounding and showed examples of selection bias.

The four most important concepts from this chapter are identifiability, the positivity assumption, modularity, and selection bias.

Are you ready to add some machine learning sauce to all we’ve learned so far?

References

Altucher, J., and Altucher, C. A. (2014). The Power of No: Because One Little Word Can Bring Health, Abundance, and Happiness. Hay House.

Balestriero, R., Pesenti, J., and LeCun, Y. (2021). Learning in High Dimension Always Amounts to Extrapolation. arXiv, abs/2110.09485.

Cinelli, C., and Hazlett, C. (2020). Making Sense of Sensitivity: Extending Omitted Variable Bias. Journal of the Royal Statistical Society, Series B: Statistical Methodology, 82(1), 39-67.

Curth, A., Svensson, D., Weatherall, J., and van der Schaar, M. (2021). Really Doing Great at Estimating CATE? A Critical Look at ML Benchmarking Practices in Treatment Effect Estimation. Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks.

Chernozhukov, V., Cinelli, C., Newey, W., Sharma, A., and Syrgkanis, V. (2022). Long Story Short: Omitted Variable Bias in Causal Machine Learning (Working Paper No. 30302; Working Paper Series). National Bureau of Economic Research.

Donnely...
