Inverse probability weighting (IPW)

In this section, we’ll discuss IPW. We’ll see how IPW can be used to de-bias our causal estimates, and we’ll implement it using DoWhy.

Many faces of propensity scores

Although propensity scores might not be the best choice for matching, they still might be useful in other contexts. IPW is a method that allows us to control for confounding by creating so-called pseudo-populations within our data. Pseudo-populations are created by upweighting the underrepresented and downweighting the overrepresented groups in our dataset.
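To make this concrete, here's a minimal from-scratch sketch of the idea (the chapter's implementation uses DoWhy; the function and variable names below are ours): we fit a propensity model and then weight every unit by the inverse of the probability of the treatment it actually received.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_ate(X, t, y):
    """A sketch of an IPW estimate of the ATE.

    X: covariate matrix, t: binary treatment vector, y: outcome vector.
    """
    # Step 1: estimate propensity scores e(X) = P(T=1 | X)
    e = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
    # Clip to keep weights from exploding when scores approach 0 or 1
    e = np.clip(e, 0.01, 0.99)
    # Step 2: weight treated units by 1/e(X) and untreated units by
    # 1/(1 - e(X)). This is what creates the pseudo-population:
    # underrepresented groups are upweighted, overrepresented ones downweighted.
    return np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))
```

In DoWhy, a comparable estimator is available by passing method_name="backdoor.propensity_score_weighting" to estimate_effect() (to the best of our knowledge of the API).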

Imagine that we want to estimate the effect of drug D. If males and females react differently to D, and we have 2 males and 6 females in the treatment group but 12 males and 2 females in the control group, we might end up with a situation similar to the one we saw in Chapter 1: the drug appears beneficial for the population as a whole, yet harmful to both females and males when each group is analyzed separately!

This is Simpson’s paradox at its...
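To see how IPW's reweighting repairs the imbalance in the toy example above, we can work through the numbers by hand (a sketch assuming sex is the only confounder):

```python
# Propensity of treatment given sex (the only confounder in this toy example)
e_male = 2 / (2 + 12)    # = 1/7
e_female = 6 / (6 + 2)   # = 3/4

# IPW weights: 1/e for treated units, 1/(1 - e) for controls
w_treated_male = 1 / e_male             # 7.0
w_control_male = 1 / (1 - e_male)       # 7/6, approx. 1.17
w_treated_female = 1 / e_female         # 4/3, approx. 1.33
w_control_female = 1 / (1 - e_female)   # 4.0

# Pseudo-population sizes: within each sex, the two arms now match
print(2 * w_treated_male, 12 * w_control_male)     # 14.0 vs. 14.0
print(6 * w_treated_female, 2 * w_control_female)  # 8.0 vs. 8.0
```

After weighting, males and females contribute equally to the treatment and control arms, so a weighted comparison of outcomes is no longer confounded by sex.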

S-Learner – the Lone Ranger

With this section, we begin our journey into the world of meta-learners. We’ll learn why ATE is sometimes not enough and we’ll introduce heterogeneous treatment effects (HTEs) (also known as conditional average treatment effects or individualized treatment effects). We’ll discuss what meta-learners are, and – finally – we’ll implement one (S-Learner) to estimate causal effects on a simulated dataset with interactions (we’ll also use it on real-life experimental data in Chapter 10).

By the end of this section, you will have a solid understanding of what CATE is, understand the main ideas behind meta-learners, and learn how to implement S-Learner using DoWhy and EconML on your own.

Ready?

The devil’s in the detail

In the previous sections, we computed two different types of causal effects: ATE and ATT. Both ATE and ATT provide us with information about the estimated average causal effect...
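To make the mechanics concrete before we dig into the details, here's a minimal from-scratch sketch of the S-Learner logic (the chapter implements it with DoWhy and EconML; the function and model choices below are ours): fit a single model with the treatment appended as an ordinary feature, then subtract its predictions under control from its predictions under treatment.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def s_learner_cate(X, t, y, X_new):
    """S-Learner sketch: one model, with treatment as just another feature."""
    model = GradientBoostingRegressor()
    model.fit(np.column_stack([X, t]), y)
    # Predict each unit's outcome under T=1 and under T=0, then subtract
    y1 = model.predict(np.column_stack([X_new, np.ones(len(X_new))]))
    y0 = model.predict(np.column_stack([X_new, np.zeros(len(X_new))]))
    return y1 - y0  # estimated CATE for each row of X_new
```

Note that the treatment enters as one feature among many, so the model is free to downplay or even ignore it, a weakness we'll return to in the next section.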

T-Learner – together we can do more

In this section, we’ll learn what T-Learner is and how it’s different from S-Learner. We’ll implement the model using DoWhy and EconML and compare its performance with the model from the previous section. Finally, we’ll discuss some of the drawbacks of T-Learner before concluding the section.

Forcing the split on treatment

The basic motivation behind T-Learner is to overcome the main limitation of S-Learner. If S-Learner can learn to ignore the treatment, why not make it impossible to ignore the treatment?

This is precisely what T-Learner does. Instead of fitting one model on all observations (treated and untreated), we now fit two models – one on the treated units only, and the other on the untreated units only.
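A minimal from-scratch sketch of this two-model logic (EconML's TLearner estimator wraps the same idea; the names below are ours):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def t_learner_cate(X, t, y, X_new):
    """T-Learner sketch: a separate outcome model for each treatment arm."""
    model_1 = GradientBoostingRegressor().fit(X[t == 1], y[t == 1])  # treated only
    model_0 = GradientBoostingRegressor().fit(X[t == 0], y[t == 0])  # controls only
    # CATE estimate: the difference between the two arms' predictions
    return model_1.predict(X_new) - model_0.predict(X_new)
```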

In a sense, this is equivalent to forcing the first split in a tree-based model to be a split on the treatment variable. Figure 9.12 presents this concept visually:

...

X-Learner – a step further

In this section, we’ll introduce X-Learner – a meta-learner built to make better use of the information available in the data. We’ll learn how X-Learner works and implement the model using our familiar DoWhy pipeline.

Finally, we’ll compute the effect estimates on the full earnings dataset and compare the results with S- and T-Learners. We’ll close this section with a set of recommendations on when using X-Learner can be beneficial and a summary of all three sections about meta-learners.

Let’s start!

Squeezing the lemon

Have you noticed something?

Every time we built a meta-learner so far, we estimated two potential outcomes separately (using a single model in the case of S-Learner, and two models in the case of T-Learner) and then subtracted them in order to obtain CATE.

In a sense, we never tried to use our estimators to actually estimate CATE. We were rather estimating both potential outcomes...
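X-Learner takes that extra step. In its standard formulation, it imputes an individual-level effect for every unit (the observed outcome minus the control model's prediction for treated units, and the treated model's prediction minus the observed outcome for controls), fits new models directly on those imputed effects, and blends them using propensity scores. Here's a minimal usage sketch with EconML (the synthetic data and model choices are ours, just to make the snippet runnable):

```python
import numpy as np
from econml.metalearners import XLearner
from sklearn.ensemble import (GradientBoostingClassifier,
                              GradientBoostingRegressor)

# Tiny synthetic dataset with a heterogeneous effect, for illustration only
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))
t = rng.binomial(1, 0.5, size=500)
y = X[:, 0] + t * (1 + X[:, 1]) + rng.normal(size=500)  # effect varies with X[:, 1]

x_learner = XLearner(
    models=GradientBoostingRegressor(),             # stage 1: per-arm outcome models
    cate_models=GradientBoostingRegressor(),        # stage 2: models for imputed effects
    propensity_model=GradientBoostingClassifier(),  # blends the two CATE estimates
)
x_learner.fit(y, t, X=X)         # outcome, treatment, covariates
cate_pred = x_learner.effect(X)  # per-unit CATE estimates
```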

Wrapping it up

Congrats on finishing Chapter 9!

We presented a lot of information in this chapter! Let’s summarize!

We started with the basics and introduced the matching estimator. On the way, we defined ATE, ATT, and ATC.

Then, we moved to propensity scores. We learned that the propensity score is the probability of being treated, which we compute for each observation. Next, we showed that although it might be tempting to use propensity scores for matching, in reality it's a risky idea. We said that propensity scores can shine in other scenarios, and we introduced propensity score weighting, which allows us to construct pseudo-populations by weighting observations accordingly in order to deconfound our data (note that it does not help when we have unobserved confounding).

Next, we started our journey with meta-learners. We said that ATE can sometimes hide important information from us and we defined CATE. This opened the door for us to explore the world of HTEs, where units...


