
You're reading from Causal Inference and Discovery in Python

Product type: Book
Published in: May 2023
Publisher: Packt
ISBN-13: 9781804612989
Edition: 1st
Author: Aleksander Molak

Aleksander Molak is a Machine Learning Researcher and Consultant who gained experience working with Fortune 100, Fortune 500, and Inc. 5000 companies across Europe, the USA, and Israel, designing and building large-scale machine learning systems. On a mission to democratize causality for businesses and machine learning practitioners, Aleksander is a prolific writer, creator, and international speaker. As a co-founder of Lespire, an innovative provider of AI and machine learning training for corporate teams, Aleksander is committed to empowering businesses to harness the full potential of cutting-edge technologies that allow them to stay ahead of the curve.

Estimand first!

In this section, we’re going to introduce the notion of an estimand – an essential building block in the causal inference process.

We live in a world of estimators

In statistical inference and machine learning, we often talk about estimates and estimators. Estimates are basically our best guesses regarding some quantities of interest given (finite) data. Estimators are computational devices or procedures that allow us to map between a given (finite) data sample and an estimate of interest.

Let’s imagine you just got a new job. You’re interested in estimating how much time you’ll need to get from your home to your new office. You decide to record your commute times over 5 days. The data you obtain looks like this:

<math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><mrow><mrow><mrow><mo>[</mo><mn>22.1</mn><mo>,</mo><mn>23.7</mn><mo>,</mo><mn>25.2</mn><mo>,</mo><mn>20.0</mn><mo>,</mo><mn>21.8</mn><mo>]</mo></mrow></mrow></mrow></math>

One thing you can do is take the arithmetic average of these numbers, which will give you the so-called sample mean, your estimate of the true average commute time. You might feel that this is not enough...
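In code, the estimator is the averaging procedure and the estimate is the number it returns for this particular sample:

```python
# Commute times (in minutes) recorded over 5 days.
times = [22.1, 23.7, 25.2, 20.0, 21.8]

# The estimator: the arithmetic mean of a finite sample.
# Applying it to our data yields the estimate (the sample mean).
sample_mean = sum(times) / len(times)
print(round(sample_mean, 2))  # 22.56
```

A different estimator (say, the sample median) applied to the same data would produce a different estimate of the same underlying quantity.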

The back-door criterion

The back-door criterion is most likely the best-known technique to find causal estimands given a graph. And the best part is that you already know it!

In this section, we’re going to learn how the back-door criterion works. We’ll study its logic and learn about its limitations. This knowledge will allow us to find good causal estimands in a broad class of cases. Let’s start!

What is the back-door criterion?

The back-door criterion aims at blocking spurious paths between our treatment and outcome nodes. At the same time, we want to make sure that we leave all directed paths unaltered and are careful not to create new spurious paths.

Formally speaking, a set of variables, <mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:m="http://schemas.openxmlformats.org/officeDocument/2006/math"><mml:mi mathvariant="script">Z</mml:mi></mml:math>, satisfies the back-door criterion, given a graph <mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:m="http://schemas.openxmlformats.org/officeDocument/2006/math"><mml:mi>G</mml:mi></mml:math> and a pair of variables (<mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:m="http://schemas.openxmlformats.org/officeDocument/2006/math"><mml:mi>X</mml:mi></mml:math>, <mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:m="http://schemas.openxmlformats.org/officeDocument/2006/math"><mml:mi>Y</mml:mi></mml:math>), if no node in <mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:m="http://schemas.openxmlformats.org/officeDocument/2006/math"><mml:mi mathvariant="script">Z</mml:mi></mml:math> is a descendant of <mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:m="http://schemas.openxmlformats.org/officeDocument/2006/math"><mml:mi>X</mml:mi></mml:math>, and <mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:m="http://schemas.openxmlformats.org/officeDocument/2006/math"><mml:mi mathvariant="script">Z</mml:mi></mml:math> blocks all the paths between <mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:m="http://schemas.openxmlformats.org/officeDocument/2006/math"><mml:mi>X</mml:mi></mml:math> and <mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:m="http://schemas.openxmlformats.org/officeDocument/2006/math"><mml:mi>Y</mml:mi></mml:math> that contain an arrow into <mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:m="http://schemas.openxmlformats.org/officeDocument/2006/math"><mml:mi>X</mml:mi></mml:math> (Pearl, Glymour, and Jewell, 2016).

In the preceding definition, <mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:m="http://schemas.openxmlformats.org/officeDocument/2006/math"><mml:mi>X</mml:mi><mml:mo>→</mml:mo><mml:mo>…</mml:mo><mml:mo>→</mml:mo><mml:mi>Y</mml:mi></mml:math> means that...
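When a set Z satisfies the back-door criterion, the causal effect is identified by the adjustment formula P(Y | do(X = x)) = Σ_z P(Y | x, z) P(z). Here is a minimal numeric sketch with a single binary confounder Z (the probabilities below are made up for illustration, not taken from any study):

```python
# Toy model where Z satisfies the back-door criterion for (X, Y):
# Z -> X, Z -> Y, and X -> Y. All numbers are illustrative.
p_z = {0: 0.6, 1: 0.4}          # P(Z)
p_y1_given_xz = {               # P(Y=1 | X, Z)
    (0, 0): 0.10, (0, 1): 0.40,
    (1, 0): 0.30, (1, 1): 0.70,
}

def p_y1_do_x(x):
    # Back-door adjustment: sum over z of P(Y=1 | x, z) * P(z)
    return sum(p_y1_given_xz[(x, z)] * p_z[z] for z in p_z)

# Average treatment effect under the adjustment formula
ate = p_y1_do_x(1) - p_y1_do_x(0)
print(round(ate, 2))  # 0.24
```

Note how the adjustment averages the conditional outcome over the marginal distribution of Z, rather than over P(Z | X), which is exactly what blocking the back-door path achieves.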

The front-door criterion

In this section, we’re going to discuss the front-door criterion – a device that allows us to obtain valid causal estimands in (some) cases where the back-door criterion fails.

Can GPS lead us astray?

In their 2020 study, Louisa Dahmani and Véronique Bohbot from McGill University showed that there’s a link between GPS usage and spatial memory decline (Dahmani and Bohbot, 2020). Moreover, the effect is dose-dependent, which means that the more you use GPS, the more spatial memory decline you experience.

The authors argue that their results suggest a causal link between GPS usage and spatial memory decline. We already know, however, that variables that appear connected in the data are not necessarily causally connected.

The authors also know this, so they decided to add a longitudinal component to their design. This means that they observed people over a period of time, and they noticed that those participants who used more GPS had...

Are there other criteria out there? Let’s do-calculus!

In the real world, not all causal graphs will have a structure that allows the use of the back-door or front-door criteria. Does this mean that we cannot do anything about them?

Fortunately, no. Back-door and front-door criteria are special cases of a more general framework called do-calculus (Pearl, 2009). Moreover, do-calculus has been proven to be complete (Shpitser and Pearl, 2006), meaning that if there is an identifiable causal effect in a given DAG, <mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:m="http://schemas.openxmlformats.org/officeDocument/2006/math"><mml:mi>G</mml:mi></mml:math>, it can be found using the rules of do-calculus.

What are these rules?

The three rules of do-calculus

Before we can answer the question, we need to define some new helpful notation.

Given a DAG <mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:m="http://schemas.openxmlformats.org/officeDocument/2006/math"><mml:mi>G</mml:mi></mml:math>, we can say that <mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:m="http://schemas.openxmlformats.org/officeDocument/2006/math"><mml:msub><mml:mrow><mml:mi>G</mml:mi></mml:mrow><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>X</mml:mi></mml:mrow><mml:mo>-</mml:mo></mml:mover></mml:mrow></mml:msub></mml:math> is a modification of <mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:m="http://schemas.openxmlformats.org/officeDocument/2006/math"><mml:mi>G</mml:mi></mml:math>, where we removed all the incoming edges to the node <mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:m="http://schemas.openxmlformats.org/officeDocument/2006/math"><mml:mi>X</mml:mi></mml:math>. We will call <mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:m="http://schemas.openxmlformats.org/officeDocument/2006/math"><mml:msub><mml:mrow><mml:mi>G</mml:mi></mml:mrow><mml:mrow><mml:munder underaccent="false"><mml:mrow><mml:mi>X</mml:mi></mml:mrow><mml:mo>_</mml:mo></mml:munder></mml:mrow></mml:msub></mml:math> a modification of <mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:m="http://schemas.openxmlformats.org/officeDocument/2006/math"><mml:mi>G</mml:mi></mml:math>, where we removed all the outgoing edges from the node <mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:m="http://schemas.openxmlformats.org/officeDocument/2006/math"><mml:mi>X</mml:mi></mml:math>.

For example, <mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:m="http://schemas.openxmlformats.org/officeDocument/2006/math"><mml:msub><mml:mrow><mml:mi>G</mml:mi></mml:mrow><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>X</mml:mi></mml:mrow><mml:mo>-</mml:mo></mml:mover><mml:munder underaccent="false"><mml:mrow><mml:mi>Z</mml:mi></mml:mrow><mml:mo>_</mml:mo></mml:munder></mml:mrow></mml:msub></mml:math> will denote a DAG, <mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:m="http://schemas.openxmlformats.org/officeDocument/2006/math"><mml:mi>G</mml:mi></mml:math>, where we removed all the incoming edges to the...
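The two graph "surgeries" above are easy to express on a DAG stored as a set of edges (a minimal sketch, with a made-up three-node graph):

```python
# Graph surgery used in do-calculus notation:
# removing incoming edges to X corresponds to G with an overbar on X,
# removing outgoing edges from X corresponds to G with an underbar on X.
edges = {("Z", "X"), ("X", "Y"), ("Z", "Y")}

def remove_incoming(edges, node):
    # Drop every edge that points *into* the given node.
    return {(u, v) for (u, v) in edges if v != node}

def remove_outgoing(edges, node):
    # Drop every edge that points *out of* the given node.
    return {(u, v) for (u, v) in edges if u != node}

print(sorted(remove_incoming(edges, "X")))  # [('X', 'Y'), ('Z', 'Y')]
print(sorted(remove_outgoing(edges, "X")))  # [('Z', 'X'), ('Z', 'Y')]
```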

Wrapping it up

We learned a lot in this chapter, and you deserve some serious applause for coming this far!

We started with the notion of d-separation and showed how it is linked to the idea of an estimand. We then discussed what causal estimands are and what role they play in the causal inference process.

Next, we discussed two powerful methods of causal effect identification, the back-door and front-door criteria, and applied them to our ice cream and GPS usage examples.

Finally, we presented the powerful framework of do-calculus, a generalization of the back-door and front-door criteria, and introduced a family of methods called instrumental variables, which can help us identify causal effects where other methods fail.

The set of methods we learned in this chapter gives us a powerful causal toolbox that we can apply to real-world problems.

In the next chapter, we’ll demonstrate how to properly structure an end-to-end...

Answer

Controlling for B (Figure 6.7) essentially removes A’s influence on X and Y. If we remove A from the graph, it will not change anything (up to noise) in our estimate of the relationship strength between X and Y. Note that in a graph with a removed node A, controlling for B becomes irrelevant (it does not hurt us to do so, but there’s no benefit to it either).

References

Carroll, R. J., Ruppert, D., Crainiceanu, C. M., Tosteson, T. D., and Karagas, M. R. (2004). Nonlinear and Nonparametric Regression and Instrumental Variables. Journal of the American Statistical Association, 99(467), 736-750.

Cunningham, S. (2021). Causal Inference: The Mixtape. Yale University Press.

Dahmani, L., and Bohbot, V. D. (2020). Habitual use of GPS negatively impacts spatial memory during self-guided navigation. Scientific reports, 10(1), 6310.

Griesbauer, E. M., Manley, E., Wiener, J. M., and Spiers, H. J. (2022). London taxi drivers: A review of neurocognitive studies and an exploration of how they build their cognitive map of London. Hippocampus, 32(1), 3-20.

Hejtmánek, L., Oravcová, I., Motýl, J., Horáček, J., and Fajnerová, I. (2018). Spatial knowledge impairment after GPS guided navigation: Eye-tracking study...

Hernán, M. A., and Robins, J. M. (2020). Causal Inference: What If. Boca Raton: Chapman and Hall/CRC.

