
Course Lecture Notes

Introduction to Causal Inference


from a Machine Learning Perspective

Brady Neal

August 26, 2020


Preface

Prerequisites There is one main prerequisite: basic probability. This course assumes
you’ve taken an introduction to probability course or have had equivalent experience.
Topics from statistics and machine learning will pop up in the course from time to
time, so some familiarity with those will be helpful but is not necessary. For example, if
cross-validation is a new concept to you, you can learn it relatively quickly at the point in
the book that it pops up. And we give a primer on some statistics terminology that we’ll
use in Section 2.4.
Active Reading Exercises Research shows that one of the best techniques to remember
material is to actively try to recall information that you recently learned. You will see
“active reading exercises” throughout the book to help you do this. They’ll be marked by
the Active reading exercise: heading.
Many Figures in This Book As you will see, there are a ridiculous number of figures in
this book. This is on purpose: it is to help give you as much visual intuition as possible.
We will sometimes copy the same figures, equations, etc. that you might have seen in
preceding chapters so that we can make sure the figures are always right next to the text
that references them.
Sending Me Feedback This is a book draft, so I greatly appreciate any feedback you’re
willing to send my way. If you’re unsure whether I’ll be receptive to it or not, don’t be.
Please send any feedback to me at [email protected] with “[Causal Book]” in the
beginning of your email subject. Feedback can be at the word level, sentence level, section
level, chapter level, etc. Here’s a non-exhaustive list of useful kinds of feedback:
- Typoz.
- Some part is confusing.
- You notice your mind starts to wander, or you don’t feel motivated to read some part.
- Some part seems like it can be cut.
- You feel strongly that some part absolutely should not be cut.
- Some parts are not connected well. Moving from one part to the next, you notice that there isn’t a natural flow.
- A new active reading exercise you thought of.

Bibliographic Notes Although we do our best to cite relevant results, we don’t want to
disrupt the flow of the material by digging into exactly where each concept came from.
There will be complete sections of bibliographic notes in the final version of this book,
but they won’t come until after the course has finished.
Contents

Preface ii

Contents iii

1 Motivation: Why You Might Care 1


1.1 Simpson’s Paradox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Applications of Causal Inference . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Correlation Does Not Imply Causation . . . . . . . . . . . . . . . . . . . . 3
1.3.1 Nicolas Cage and Pool Drownings . . . . . . . . . . . . . . . . . . . 3
1.3.2 Why is Association Not Causation? . . . . . . . . . . . . . . . . . . 4
1.4 Main Themes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

2 Potential Outcomes 6
2.1 Potential Outcomes and Individual Treatment Effects . . . . . . . . . . . . 6
2.2 The Fundamental Problem of Causal Inference . . . . . . . . . . . . . . . . 7
2.3 Getting Around the Fundamental Problem . . . . . . . . . . . . . . . . . . 8
2.3.1 Average Treatment Effects and Missing Data Interpretation . . . . 8
2.3.2 Ignorability and Exchangeability . . . . . . . . . . . . . . . . . . . 9
2.3.3 Conditional Exchangeability and Unconfoundedness . . . . . . . . 10
2.3.4 Positivity/Overlap and Extrapolation . . . . . . . . . . . . . . . . . 12
2.3.5 No interference, Consistency, and SUTVA . . . . . . . . . . . . . . 13
2.3.6 Tying It All Together . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.4 Fancy Statistics Terminology Defancified . . . . . . . . . . . . . . . . . . . 15
2.5 A Complete Example with Estimation . . . . . . . . . . . . . . . . . . . . . 16

3 The Flow of Association and Causation in Graphs 19


3.1 Graph Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.2 Bayesian Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.3 Causal Graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.4 Two-Node Graphs and Graphical Building Blocks . . . . . . . . . . . . . . 23
3.5 Chains and Forks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.6 Colliders and their Descendants . . . . . . . . . . . . . . . . . . . . . . . . 26
3.7 d-separation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.8 Flow of Association and Causation . . . . . . . . . . . . . . . . . . . . . . 29

4 Causal Models 31
4.1 The do-operator and Interventional Distributions . . . . . . . . . . . . . . 31
4.2 The Main Assumption: Modularity . . . . . . . . . . . . . . . . . . . . . . 33
4.3 Truncated Factorization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.3.1 Example Application and Revisiting “Association is Not Causation” 35
4.4 The Backdoor Adjustment . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.4.1 Relation to Potential Outcomes . . . . . . . . . . . . . . . . . . . . . 38
4.5 Structural Causal Models (SCMs) . . . . . . . . . . . . . . . . . . . . . . . 39
4.5.1 Structural Equations . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.5.2 Interventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.5.3 Collider Bias and Why to Not Condition on Descendants of Treatment 42
4.6 Example Applications of the Backdoor Adjustment . . . . . . . . . . . . . 43
4.6.1 Association vs. Causation in a Toy Example . . . . . . . . . . . . . 43
4.6.2 A Complete Example with Estimation . . . . . . . . . . . . . . . . 44
4.7 Assumptions Revisited . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

5 Randomized Experiments 47
5.1 Comparability and Covariate Balance . . . . . . . . . . . . . . . . . . . . . 47
5.2 Exchangeability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
5.3 No Backdoor Paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

6 General Identification 50
6.1 Coming Soon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

7 Estimation 51
7.1 Coming Soon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

8 Counterfactuals 52
8.1 Coming Soon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

Bibliography 53

Alphabetical Index 55
List of Figures

1.1 Causal structure for when to prefer treatment B for COVID-27 . . . . . . . . . 2


1.2 Causal structure for when to prefer treatment A for COVID-27 . . . . . . . . . 2
1.3 Number of Nicolas Cage movies correlates with number of pool drownings . 3
1.4 Causal structure with getting lit as a confounder . . . . . . . . . . . . . . . . . 4

2.2 Causal structure for ignorable treatment assignment mechanism . . . . . . . . 9


2.1 Causal structure of 𝑋 confounding the effect of 𝑇 on 𝑌 . . . . . . . . . . . . . 9
2.3 Causal structure of confounding through 𝑋 . . . . . . . . . . . . . . . . . . . . 11
2.4 Causal structure for conditional exchangeability given 𝑋 . . . . . . . . . . . . 11
2.5 The Identification-Estimation Flowchart . . . . . . . . . . . . . . . . . . . . . . 16
3.1 Terminology machine gun . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.3 Directed graph . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

3.2 Undirected graph . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19


3.4 Four node DAG where 𝑋4 locally depends on only 𝑋3 . . . . . . . . . . . . . . 20
3.5 Four node DAG with many independencies . . . . . . . . . . . . . . . . . . . . 21
3.6 Two connected node DAG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.7 Basic graph building blocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.9 Two connected node DAG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.10 Chain with association . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.8 Two unconnected node DAG . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.11 Fork with association . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.12 Chain with blocked association . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.13 Fork with blocked association . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.14 Immorality with association blocked by collider . . . . . . . . . . . . . . . . . 26
3.15 Immorality with association unblocked . . . . . . . . . . . . . . . . . . . . . . 26
3.16 Assumptions flowchart from statistical independencies to causal dependencies 30

4.1 The Identification-Estimation Flowchart (extended) . . . . . . . . . . . . . . . 31


4.2 Illustration of the difference between conditioning and intervening . . . . . . 32
4.3 Causal mechanism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.4 Intervention as edge deletion in causal graphs . . . . . . . . . . . . . . . . . . 34
4.5 Causal structure for application of truncated factorization . . . . . . . . . . . . 35
4.6 Manipulated graph for three nodes . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.7 Graph for structural equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.9 Causal structure before simple intervention . . . . . . . . . . . . . . . . . . . . 40
4.8 Causal graph for several structural equations . . . . . . . . . . . . . . . . . . . 40
4.10 Causal structure after simple intervention . . . . . . . . . . . . . . . . . . . . . 41
4.11 Causal graph for completely blocking causal flow . . . . . . . . . . . . . . . . 42
4.12 Causal graph for partially blocking causal flow . . . . . . . . . . . . . . . . . . 42
4.13 Causal graph where a conditioned collider induces bias . . . . . . . . . . . . . 42
4.14 Causal graph where child of a mediator is conditioned on . . . . . . . . . . . . 42
4.15 Magnified causal graph where child of a mediator is conditioned on . . . . . . 43
4.16 Causal graph for M-bias . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.17 Causal graph for toy example . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.18 Causal graph for blood pressure example with collider . . . . . . . . . . . . . 44
4.19 Causal graph for M-bias with unobserved variables . . . . . . . . . . . . . . . 45
5.1 Causal structure of confounding through 𝑋 . . . . . . . . . . . . . . . . . . . . 49
5.2 Causal structure when we randomize treatment . . . . . . . . . . . . . . . . . 49

List of Tables

1.1 Simpson’s paradox in COVID-27 data . . . . . . . . . . . . . . . . . . . . . . . 1

2.1 Causal Inference as Missing Data Problem . . . . . . . . . . . . . . . . . . . . . 9

3.1 Exponential number of parameters for modeling factors . . . . . . . . . . . . . 20

Listings

2.1 Python code for estimating the ATE . . . . . . . . . . . . . . . . . . . . . . 17


2.2 Python code for estimating the ATE using the coefficient of linear regression 17

4.1 Python code for estimating the ATE, without adjusting for the collider . . 45
1 Motivation: Why You Might Care

1.1 Simpson’s Paradox

Consider a purely hypothetical future where there is a new disease known as COVID-27
that is prevalent in the human population. In this purely hypothetical future, there are
two treatments that have been developed: treatment A and treatment B. Treatment B is
more scarce than treatment A, so the split of those currently receiving treatment A vs.
treatment B is roughly 73%/27%. You are in charge of choosing which treatment your
country will exclusively use, in a country that only cares about minimizing loss of life.
You have data on the percentage of people who die from COVID-27,
given the treatment they were assigned and given their condition at the
time treatment was decided. Their condition is a binary variable: either
mild or severe. In this data, 16% of those who receive A die, whereas
19% of those who receive B die. However, when we examine the people
with mild condition separately from the people with severe condition,
the numbers reverse order. In the mild subpopulation, 15% of those who
receive A die, whereas 10% of those who receive B die. In the severe
subpopulation, 30% of those who receive A die, whereas 20% of those
who receive B die. We depict these percentages and the corresponding
counts in Table 1.1.

Table 1.1: Simpson’s paradox in COVID-27 data. The percentages denote the mortality
rates in each of the groups. Lower is better. The numbers in parentheses are the
corresponding counts. This apparent paradox stems from the interpretation that treatment
A looks better when examining the whole population, but treatment B looks better in all
subpopulations.

                                  Condition
Treatment      Mild               Severe             Total
A              15% (210/1400)     30% (30/100)       16% (240/1500)
B              10% (5/50)         20% (100/500)      19% (105/550)

The apparent paradox stems from the fact that, in Table 1.1, the “Total” column could be
interpreted to mean that we should prefer treatment A, whereas the “Mild” and “Severe”
columns could both be interpreted to mean that we should prefer treatment B.¹ In fact,
the answer is that if we know someone’s condition, we should give them treatment B,
and if we do not know their condition, we should give them treatment A. Just kidding...
that doesn’t make any sense. So really, what treatment should you choose for your country?

Either treatment A or treatment B could be the right answer, depending on the causal
structure of the data. In other words, causality is essential to solve Simpson’s paradox.
For now, we will just give the intuition for when you should prefer treatment A vs. when
you should prefer treatment B, but it will be made more formal in Chapter 4.

¹ A key ingredient necessary to find Simpson’s paradox is the non-uniformity of allocation
of people to the groups. 1400 of the 1500 people who received treatment A had mild
condition, whereas 500 of the 550 people who received treatment B had severe condition.
Because people with mild condition are less likely to die, this means that the total
mortality rate for those with treatment A is lower than what it would have been if mild
and severe conditions were equally split among them. The opposite bias is true for
treatment B.
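To see the reversal in Table 1.1 concretely, here is a short Python sketch (my own illustration, not one of the book’s listings) that recomputes the mortality rates from the raw counts in the table:

```python
# A quick numerical check of Table 1.1, using the raw (deaths, group size)
# counts for each treatment/condition cell.
data = {
    ("A", "mild"): (210, 1400), ("A", "severe"): (30, 100),
    ("B", "mild"): (5, 50),     ("B", "severe"): (100, 500),
}

for treatment in ("A", "B"):
    for condition in ("mild", "severe"):
        deaths, total = data[(treatment, condition)]
        print(f"{treatment}/{condition}: {deaths / total:.0%}")
    deaths = sum(data[(treatment, c)][0] for c in ("mild", "severe"))
    total = sum(data[(treatment, c)][1] for c in ("mild", "severe"))
    print(f"{treatment}/total: {deaths / total:.0%}")
# A/mild 15%, A/severe 30%, A/total 16%; B/mild 10%, B/severe 20%, B/total 19%
```

Treatment A wins on the total column only because most of its patients had the mild condition, exactly as described in the footnote above.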

Scenario 1 If the condition 𝐶 is a cause of the treatment 𝑇 (Figure 1.1), treatment B is
more effective at reducing mortality 𝑌. An example scenario is where doctors decide to
give treatment A to most people who have mild conditions. And they save the more
expensive and more limited treatment B for people with severe conditions. Because
having severe condition causes one to be more likely to die (𝐶 → 𝑌 in Figure 1.1) and
causes one to be more likely to receive treatment B (𝐶 → 𝑇 in Figure 1.1), treatment B
will be associated with higher mortality in the total population. In other words, treatment
B is associated with a higher mortality rate simply because condition is a common cause
of both treatment and mortality. Here, condition confounds the effect of treatment on
mortality. To correct for this confounding, we must examine the relationship of 𝑇 and 𝑌
among patients with the same conditions. This means that the better treatment is the one
that yields lower mortality in each of the subpopulations (the “Mild” and “Severe”
columns in Table 1.1): treatment B.

Figure 1.1: Causal structure of scenario 1, where condition 𝐶 is a common cause of
treatment 𝑇 and mortality 𝑌. Given this causal structure, treatment B is preferable.

Scenario 2 If the prescription² of treatment 𝑇 is a cause of the condition 𝐶 (Figure 1.2),
treatment A is more effective. An example scenario is where treatment B is so scarce that
it requires patients to wait a long time after they were prescribed the treatment before
they can receive the treatment. Treatment A does not have this problem. Because the
condition of a patient with COVID-27 worsens over time, the prescription of treatment B
actually causes patients with mild conditions to develop severe conditions, causing a
higher mortality rate. Therefore, even if treatment B is more effective than treatment A
once administered (positive effect along 𝑇 → 𝑌 in Figure 1.2), because prescription of
treatment B causes worse conditions (negative effect along 𝑇 → 𝐶 → 𝑌 in Figure 1.2),
treatment B is less effective in total. Note: Because treatment B is more expensive,
treatment B is prescribed with 0.27 probability, while treatment A is prescribed with 0.73
probability; importantly, treatment prescription is independent of condition in this scenario.

² 𝑇 refers to the prescription of the treatment, rather than the subsequent reception of
the treatment.

Figure 1.2: Causal structure of scenario 2, where treatment 𝑇 is a cause of condition 𝐶.
Given this causal structure, treatment A is preferable.
In sum, the more effective treatment is completely dependent on the
causal structure of the problem. In Scenario 1, where 𝐶 was a cause of
𝑇 (Figure 1.1), treatment B was more effective. In Scenario 2, where 𝑇
was a cause of 𝐶 (Figure 1.2), treatment A was more effective. Without
causality, Simpson’s paradox cannot be resolved. With causality, it is not
a paradox at all.

1.2 Applications of Causal Inference

Causal inference is essential to science, as we often want to make causal
claims, rather than merely associational claims. For example, if we
are choosing between treatments for a disease, we want to choose the
treatment that causes the most people to be cured, without causing too
many bad side effects. If we want a reinforcement learning algorithm to
maximize reward, we want it to take actions that cause it to achieve the
maximum reward. If we are studying the effect of social media on mental
health, we are trying to understand what the main causes of a given
mental health outcome are and order these causes by the percentage of
the outcome that can be attributed to each cause.

Causal inference is essential for rigorous decision-making. For example,
say we are considering several different policies to implement to reduce
greenhouse gas emissions, and we must choose just one due to budget
constraints. If we want to be maximally effective, we should carry out
causal analysis to determine which policy will cause the largest reduc-
tion in emissions. As another example, say we are considering several
interventions to reduce global poverty. We want to know which policies
will cause the largest reductions in poverty.
Now that we’ve gone through the general example of Simpson’s paradox
and a few specific examples in science and decision-making, we’ll move
to how causal inference is so different from prediction.

1.3 Correlation Does Not Imply Causation

Many of you will have heard the mantra “correlation does not imply
causation.” In this section, we will quickly review that and provide you
with a bit more intuition about why this is the case.

1.3.1 Nicolas Cage and Pool Drownings

It turns out that the yearly number of people who drown by falling into swimming pools
has a high degree of correlation with the yearly number of films that Nicolas Cage
appears in [1]. See Figure 1.3 for a graph of this data. Does this mean that Nicolas Cage
encourages bad swimmers to hop in the pool in his films? Or does Nicolas Cage feel more
motivated to act in more films when he sees how many drownings are happening that
year, perhaps to try to prevent more drownings? Or is there some other explanation? For
example, maybe Nicolas Cage is interested in increasing his popularity among causal
inference practitioners, so he travels back in time to convince his past self to do just the
right number of movies for us to see this correlation, but not too close of a match as that
would arouse suspicion and potentially cause someone to prevent him from rigging the
data this way. We may never know for sure.

[1]: Vigen (2015), Spurious correlations

Figure 1.3: The yearly number of movies Nicolas Cage appears in correlates with the yearly number of pool drownings [1].

Of course, all of the possible explanations in the preceding paragraph seem quite
unlikely. Rather, it is likely that this is a spurious correlation, where there is no causal
relationship. We’ll soon move on to a more illustrative example that will help clarify how
spurious correlations can arise.

1.3.2 Why is Association Not Causation?

Before moving to the next example, let’s be a bit more precise about
terminology. “Correlation” is often colloquially used as a synonym
for statistical dependence. However, “correlation” is technically only a
measure of linear statistical dependence. We will largely be using the
term association to refer to statistical dependence from now on.
Causation is not binary. For any given amount of association, it does not
need to be “all the association is causation” or “no causation.” It is possible
to have some causation while having a large amount of association. The
phrase “association is not causation” simply means that the amount of
association and the amount of causation can be different. Some amount
of association and zero causation is a special case of “association is not
causation.”
Say you happen upon some data that relates wearing shoes to bed and
waking up with a headache, as one does. It turns out that most times
that someone wears shoes to bed, that person wakes up with a headache.
And most times someone doesn’t wear shoes to bed, that person doesn’t
wake up with a headache. It is not uncommon for people to interpret
data like this (with associations) as meaning that wearing shoes to bed
causes people to wake up with headaches, especially if they are looking
for a reason to justify not wearing shoes to bed. A careful journalist might
make claims like “wearing shoes to bed is associated with headaches”
or “people who wear shoes to bed are at higher risk of waking up with
headaches.” However, the main reason to make claims like that is that
most people will internalize claims like that as “if I wear shoes to bed,
I’ll probably wake up with a headache.”
We can explain how wearing shoes to bed and headaches are associated without either
being a cause of the other. It turns out that they are both caused by a common cause:
drinking the night before. We depict this in Figure 1.4. You might also hear this kind of
variable referred to as a “confounder” or a “lurking variable.” We will call this kind of
association confounding association since the association is facilitated by a confounder.

Figure 1.4: Causal structure, where drinking the night before is a common cause of
sleeping with shoes on and of waking up with a headache.

The total association observed can be made up of both confounding association and
causal association. It could be the case that wearing shoes to bed does have some small
causal effect on waking up with a headache. Then, the total association would not be
solely confounding association nor solely causal association. It would be a mixture of
both. For example, in Figure 1.4, causal association flows along the arrow from
shoe-sleeping to waking up with a headache. And confounding association flows along
the path from shoe-sleeping to drinking to headachening (waking up with a headache).
We will make the graphical interpretation of these different kinds of association clear in
Chapter 3.
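To build intuition for how a confounder alone can generate association, here is a small simulation of the causal structure in Figure 1.4. This is my own sketch, not the book’s; the probabilities are made up, and the only assumption baked in is that shoe-wearing has zero causal effect on headaches.

```python
# A hypothetical simulation of Figure 1.4: drinking is a common cause of
# wearing shoes to bed and of waking up with a headache, and shoes have
# NO causal effect on headaches.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
drink = rng.binomial(1, 0.3, size=n)            # common cause (confounder)
shoes = rng.binomial(1, 0.1 + 0.8 * drink)      # caused only by drinking
headache = rng.binomial(1, 0.1 + 0.6 * drink)   # caused only by drinking

# Strong association despite zero causation (a difference of roughly 0.45):
print(headache[shoes == 1].mean() - headache[shoes == 0].mean())

# Conditioning on the confounder removes the association (both roughly 0):
for d in (0, 1):
    h, s = headache[drink == d], shoes[drink == d]
    print(h[s == 1].mean() - h[s == 0].mean())
```

The second loop previews the idea of “blocking” confounding association by conditioning, which Chapter 3 makes precise.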

The Main Problem The main problem motivating causal inference is that association is
not causation.³ If the two were the same, then causal inference would be easy. Traditional
statistics and machine learning would already have causal inference solved, as measuring
causation would be as simple as just looking at measures such as correlation and
predictive performance in data. A large portion of this book will be about better
understanding and solving this problem.

³ As we’ll see in Chapter 5, if we randomly assign the treatment in a controlled
experiment, association actually is causation.

1.4 Main Themes

There are several overarching themes that will keep coming up through-
out this book. These themes will largely be comparisons of two different
categories. As you are reading, it is important that you understand which
categories different sections of the book fit into and which categories
they do not fit into.
Statistical vs. Causal Even with an infinite amount of data, we some-
times cannot compute some causal quantities. In contrast, much of
statistics is about addressing uncertainty in finite samples. When given
infinite data, there is no uncertainty. However, association, a statistical
concept, is not causation. There is more work to be done in causal infer-
ence, even after starting with infinite data. This is the main distinction
motivating causal inference. We have already made this distinction in
this chapter and will continue to make this distinction throughout the
book.
Identification vs. Estimation Identification of causal effects is unique
to causal inference. It is the problem that remains to solve, even when we
have infinite data. However, causal inference also shares estimation with
traditional statistics and machine learning. We will largely begin with
identification of causal effects (in Chapters 2, 4 and 6) before moving to
estimation of causal effects (in Chapter 7). The exceptions are Section 2.5
and Section 4.6.2, where we carry out complete examples with estimation
to give you an idea of what the whole process looks like early on.
Interventional vs. Observational If we can intervene/experiment,
identification of causal effects is relatively easy. This is simply because
we can actually take the action that we want to measure the causal effect
of and simply measure the effect after we take that action. Observational
data is where it gets more complicated because confounding is almost
always introduced into the data.
Assumptions There will be a large focus on what assumptions we are
using to get the results that we get. Each assumption will have its own
box to help make it difficult to not notice. Clear assumptions should make
it easy to see where critiques of a given causal analysis or causal model
will be. The hope is that presenting assumptions clearly will lead to more
lucid discussions about causality.
2 Potential Outcomes

In this chapter, we will ease into the world of causality. We will see that new concepts
and corresponding notations need to be introduced to clearly describe causal concepts.
These concepts are “new” in the sense that they may not exist in traditional statistics or
math, but they should be familiar in that we use them in our thinking and describe them
with natural language all the time.

Familiar statistical notation We will use 𝑇 to denote the random variable for treatment,
𝑌 to denote the random variable for the outcome of interest and 𝑋 to denote covariates.
In general, we will use uppercase letters to denote random variables (except in maybe
one case) and lowercase letters to denote values that random variables take on. Much of
what we consider will be settings where 𝑇 is binary. Know that, in general, we can extend
things to work in settings where 𝑇 can take on more than two values or where 𝑇 is
continuous.

2.1 Potential Outcomes and Individual Treatment Effects
We will now introduce the first causal concept to appear in this book. These concepts are
sometimes characterized as being unique to the Neyman-Rubin [2–4] causal model (or
potential outcomes framework), but they are not. For example, these same concepts are
still present (just under different notation) in the framework that uses causal graphs
(Chapters 3 and 4). It is important that you spend some time ensuring that you
understand these initial causal concepts. If you have not studied causal inference before,
they will be unfamiliar to see in mathematical contexts, though they may be quite familiar
intuitively because we commonly think and communicate in causal language.

[2]: Splawa-Neyman (1923 [1990]), ‘On the Application of Probability Theory to
Agricultural Experiments. Essay on Principles. Section 9.’
[3]: Rubin (1974), ‘Estimating causal effects of treatments in randomized and
nonrandomized studies.’
[4]: Sekhon (2008), ‘The Neyman-Rubin Model of Causal Inference and Estimation via
Matching Methods’

Scenario 1 Consider the scenario where you are unhappy. And you are
considering whether or not to get a dog to help make you happy. If you
become happy after you get the dog, does this mean the dog caused you
to be happy? Well, what if you would have also become happy had you
not gotten the dog? In that case, the dog was not necessary to make you
happy, so its claim to a causal effect on your happiness is weak.
Scenario 2 Let’s switch things up a bit. Consider that you will still be
happy if you get a dog, but now, if you don’t get a dog, you will remain
unhappy. In this scenario, the dog has a pretty strong claim to a causal
effect on your happiness.
In both the above scenarios, we have used the causal concept known as
potential outcomes. Your outcome 𝑌 is happiness: 𝑌 = 1 corresponds to
happy while 𝑌 = 0 corresponds to unhappy. Your treatment 𝑇 is whether
or not you get a dog: 𝑇 = 1 corresponds to you getting a dog while 𝑇 = 0 corresponds to
you not getting a dog. We denote by 𝑌(1) the potential outcome of happiness you would
observe if you were to get a dog (𝑇 = 1). Similarly, we denote by 𝑌(0) the potential
outcome of happiness you would observe if you were to not get a dog (𝑇 = 0). In
scenario 1, 𝑌(1) = 1 and 𝑌(0) = 1. In contrast, in scenario 2, 𝑌(1) = 1 and 𝑌(0) = 0.
More generally, the potential outcome 𝑌(𝑡) denotes what your outcome
would be, if you were to take treatment 𝑡 . A potential outcome 𝑌(𝑡) is
distinct from the observed outcome 𝑌 in that not all potential outcomes
are observed. Rather all potential outcomes can potentially be observed.
The one that is actually observed depends on the value that the treatment
𝑇 takes on.
In the previous scenarios, there was only a single individual in the whole population:
you. However, generally, there are many individuals¹ in the population of interest. We
will denote the treatment, covariates, and outcome of the 𝑖th individual using 𝑇𝑖, 𝑋𝑖, and
𝑌𝑖. Then, we can define the individual treatment effect (ITE)² for individual 𝑖:

𝜏𝑖 ≜ 𝑌𝑖(1) − 𝑌𝑖(0)    (2.1)

Whenever there is more than one individual in a population, 𝑌(𝑡) is a random variable
because different individuals will have different potential outcomes. In contrast, 𝑌𝑖(𝑡) is
usually treated as non-random³ because the subscript 𝑖 means that we are conditioning
on so much individualized (and context-specific) information, that we restrict our focus
to a single individual (in a specific context) whose potential outcomes are deterministic.

¹ “Unit” is often used in the place of “individual” as the units of the population are not
always people.
² The ITE is also known as the individual causal effect, unit-level causal effect, or
unit-level treatment effect.
³ Though, 𝑌𝑖(𝑡) can be treated as random.
ITEs are some of the main quantities that we care about in causal
inference. For example, in scenario 2 above, you would choose to get
a dog because the causal effect of getting a dog on your happiness is
positive: 𝑌(1) − 𝑌(0) = 1 − 0 = 1. In contrast, in scenario 1, you might
choose to not get a dog because there is no causal effect of getting a dog
on your happiness: 𝑌(1) − 𝑌(0) = 1 − 1 = 0.
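To make the notation concrete, here is a toy illustration of mine (not the book’s) in which we pretend to know both potential outcomes for a handful of individuals, so that Equation 2.1 is just a subtraction. The next section explains why we can never actually observe both columns.

```python
# A toy illustration: ITEs computed from (hypothetically known) potential
# outcomes for four individuals.
import numpy as np

y1 = np.array([1, 1, 0, 1])  # Y_i(1): happiness if individual i gets a dog
y0 = np.array([1, 0, 0, 0])  # Y_i(0): happiness if individual i does not

ite = y1 - y0        # Equation 2.1: tau_i = Y_i(1) - Y_i(0)
print(ite)           # [0 1 0 1]
print(ite.mean())    # averaging ITEs previews the ATE of Section 2.3.1: 0.5
```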
Now that we’ve introduced potential outcomes and ITEs, we can intro-
duce the main problems that pop up in causal inference that are not
present in fields where the main focus is on association or prediction.

2.2 The Fundamental Problem of Causal Inference

It is impossible to observe all potential outcomes for a given individual [3]. Consider the
dog example. You could observe 𝑌(1) by getting a dog and observing your happiness
after getting a dog. Alternatively, you could observe 𝑌(0) by not getting a dog and
observing your happiness.

However, you cannot observe both 𝑌(1) and 𝑌(0), unless you have a time
machine that would allow you to go back in time and choose the version
of treatment that you didn’t take the first time. You cannot simply get
a dog, observe 𝑌(1), give the dog away, and then observe 𝑌(0) because
the second observation will be influenced by all the actions you took
between the two observations and anything else that changed since the
first observation.
This is known as the fundamental problem of causal inference [5]. It is fundamental
because if we cannot observe both 𝑌𝑖(1) and 𝑌𝑖(0), then we cannot observe the causal
effect 𝑌𝑖(1) − 𝑌𝑖(0). This problem is unique to causal inference because, in causal
inference, we care about making causal claims, which are defined in terms of potential
outcomes. For contrast, consider machine learning. In machine learning, we often only
care about predicting the observed outcome 𝑌, so there is no need for potential outcomes,
which means machine learning does not have to deal with this fundamental problem
that we must deal with in causal inference.

[5]: Holland (1986), ‘Statistics and Causal Inference’
The potential outcomes that you do not (and cannot) observe are known
as counterfactuals because they are counter to fact (reality). “Potential
outcomes” are sometimes referred to as “counterfactual outcomes,” but
we will never do that in this book because a potential outcome 𝑌(𝑡)
does not become counter to fact until another potential outcome 𝑌(𝑡′) is
observed. The potential outcome that is observed is sometimes referred
to as a factual. Note that there are no counterfactuals or factuals until the
outcome is observed. Before that, there are only potential outcomes.

2.3 Getting Around the Fundamental Problem

I suspect this section is where this chapter might start to get a bit unclear.
If that is the case for you, don’t worry too much, and just continue to the
next chapter, as it will build up parallel concepts in a hopefully more
intuitive way.

2.3.1 Average Treatment Effects and Missing Data Interpretation

We know that we can’t access individual treatment effects, but what about average
treatment effects? We get the average treatment effect (ATE)⁴ by taking an average over
the ITEs:

𝜏 ≜ 𝔼[𝑌𝑖(1) − 𝑌𝑖(0)] = 𝔼[𝑌(1) − 𝑌(0)],    (2.2)

where the average is over the individuals 𝑖 if 𝑌𝑖(𝑡) is deterministic. If 𝑌𝑖(𝑡) is random,
the average is also over any other randomness.

⁴ The ATE is also known as the “average causal effect (ACE).”
Okay, but how would we actually compute the ATE? Let’s look at
some made-up data in Table 2.1 for this. If you like examples, feel free to
substitute in the COVID-27 example from Section 1.1 or the dog-happiness
example from Section 2.1. We will take this table as the whole population
of interest. Because of the fundamental problem of causal inference, this
is fundamentally a missing data problem. All of the question marks in
the table indicate that we do not observe that cell.
A natural quantity that comes to mind is the associational difference:
𝔼[𝑌|𝑇 = 1] − 𝔼[𝑌|𝑇 = 0]. By linearity of expectation, we have that the
ATE 𝔼[𝑌(1) − 𝑌(0)] = 𝔼[𝑌(1)] − 𝔼[𝑌(0)]. Then, maybe 𝔼[𝑌(1)] − 𝔼[𝑌(0)]
equals 𝔼[𝑌|𝑇 = 1] − 𝔼[𝑌|𝑇 = 0]. Unfortunately, this is not true in general.
If it were, that would mean that causation is simply association. 𝔼[𝑌|𝑇 =
1] − 𝔼[𝑌|𝑇 = 0] is an associational quantity, whereas 𝔼[𝑌(1)] − 𝔼[𝑌(0)]
is a causal quantity. They are not equal due to confounding, which we discussed in
Section 1.3. The graphical interpretation of this, depicted in Figure 2.1, is that 𝑋
confounds the effect of 𝑇 on 𝑌 because there is this 𝑇 ← 𝑋 → 𝑌 path that non-causal
association flows along.⁵

⁵ Keep reading to Chapter 3, where we will flesh out and formalize this graphical
interpretation.

Table 2.1: Example data to illustrate that the fundamental problem of causal inference
can be interpreted as a missing data problem.

𝑖    𝑇    𝑌    𝑌(1)    𝑌(0)    𝑌(1) − 𝑌(0)
1    0    0    ?       0       ?
2    1    1    1       ?       ?
3    1    0    0       ?       ?
4    0    0    ?       0       ?
5    0    1    ?       1       ?
6    1    1    1       ?       ?

Figure 2.1: Causal structure of 𝑋 confounding the effect of 𝑇 on 𝑌.

2.3.2 Ignorability and Exchangeability

Well, what assumption(s) would make it so that the ATE is simply the associational
difference? This is equivalent to saying “what makes it valid to calculate the ATE by
taking the sum of the 𝑌(0) column, ignoring the question marks, and subtracting that
from the sum of the 𝑌(1) column, ignoring the question marks?”⁶ This ignoring of the
question marks (missing data) is known as ignorability. Assuming ignorability is like
ignoring how people ended up selecting the treatment they selected and just assuming
they were randomly assigned their treatment; we depict this graphically in Figure 2.2 by
the lack of a causal arrow from 𝑋 to 𝑇. We will now state this assumption formally.

⁶ Active reading exercise: verify that this procedure is equivalent to 𝔼[𝑌|𝑇 = 1] −
𝔼[𝑌|𝑇 = 0] in the data in Table 2.1.

Assumption 2.1 (Ignorability / Exchangeability)

(𝑌(1), 𝑌(0)) ⫫ 𝑇

Figure 2.2: Causal structure when the treatment assignment mechanism is ignorable.
Notably, this means there’s no arrow from 𝑋 to 𝑇, which means there is no confounding.

This assumption is key to causal inference because it allows us to reduce the ATE to the
associational difference:

𝔼[𝑌(1)] − 𝔼[𝑌(0)] = 𝔼[𝑌(1) | 𝑇 = 1] − 𝔼[𝑌(0) | 𝑇 = 0]    (2.3)
                  = 𝔼[𝑌 | 𝑇 = 1] − 𝔼[𝑌 | 𝑇 = 0]    (2.4)

The ignorability assumption is used in Equation 2.3. We will talk more about Equation
2.4 when we get to Section 2.3.5.
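One way to work through the active reading exercise above (footnote 6) in code is the following sketch of mine (not from the book). The observed entries of the 𝑌(1) column in Table 2.1 are exactly the 𝑌 values of the rows with 𝑇 = 1 (and similarly for 𝑌(0) and 𝑇 = 0), so averaging each column while ignoring the question marks gives the associational difference.

```python
# Computing the associational difference E[Y | T = 1] - E[Y | T = 0]
# from the T and Y columns of Table 2.1.
import numpy as np

t = np.array([0, 1, 1, 0, 0, 1])  # T column of Table 2.1
y = np.array([0, 1, 0, 0, 1, 1])  # Y column of Table 2.1

assoc_diff = y[t == 1].mean() - y[t == 0].mean()
print(assoc_diff)  # 2/3 - 1/3 = 1/3
```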
Another perspective on this assumption is that of exchangeability. Ex-
changeability means that the treatment groups are exchangeable in
the sense that if they were swapped, the new treatment group would
observe the same outcomes as the old treatment group, and the new
control group would observe the same outcomes as the old control
group. Formally, this assumption means 𝔼[𝑌(1)|𝑇 = 0] = 𝔼[𝑌(1)|𝑇 = 1]
and 𝔼[𝑌(0)|𝑇 = 1] = 𝔼[𝑌(0)|𝑇 = 0], respectively. Then, this implies
𝔼[𝑌(1)|𝑇 = 𝑡] = 𝔼[𝑌(1)] and 𝔼[𝑌(0)|𝑇 = 𝑡] = 𝔼[𝑌(0)], for all 𝑡, which is nearly
equivalent⁷ to Assumption 2.1.

⁷ Technically, this is mean exchangeability, which is a weaker assumption than the full
exchangeability that we describe in Assumption 2.1 because it only constrains the first
moment of the distribution. Generally, we only need mean ignorability/exchangeability
for average treatment effects, but it is common to assume complete independence, as in
Assumption 2.1.

An important intuition to have about exchangeability is that it guarantees that the
treatment groups are comparable. In other words, the treatment groups are the same in
all relevant aspects other than the treatment. This intuition is what underlies the concept
of “controlling for” or “adjusting for” variables, which we will discuss shortly when we
get to conditional exchangeability.
We have leveraged Assumption 2.1 to identify causal effects. To identify
a causal effect is to reduce a causal expression to a purely statistical
expression. In this chapter, that means to reduce an expression from
one that uses potential outcome notation to one that uses only statistical
notation such as 𝑇 , 𝑋 , 𝑌 , expectations, and conditioning. This means that
we can calculate the causal effect from just the observational distribution
𝑃(𝑋 , 𝑇, 𝑌).

Definition 2.1 (Identifiability) A causal quantity (e.g. 𝔼[𝑌(𝑡)]) is identifiable if we can
compute it from a purely statistical quantity (e.g. 𝔼[𝑌 | 𝑡]).

We have seen that ignorability is extremely important (Equation 2.3), but


how realistic of an assumption is it? In general, it is completely unrealistic
because there is likely to be confounding in most data we observe (causal
structure shown in Figure 2.1). However, we can make this assumption
realistic by running randomized experiments, which force the treatment
to not be caused by anything but a coin toss, so then we have the causal
structure shown in Figure 2.2. We cover randomized experiments in
greater depth in Chapter 5.
We have covered two prominent perspectives on this main assumption
(2.1): ignorability and exchangeability. Mathematically, these mean the
same thing, but their names correspond to different ways of thinking
about the same assumption. Exchangeability and ignorability are only
two names for this assumption. We will see more aliases after we cover
the more realistic, conditional version of this assumption.

2.3.3 Conditional Exchangeability and Unconfoundedness

In observational data, it is unrealistic to assume that the treatment groups


are exchangeable. In other words, there is no reason to expect that the
groups are the same in all relevant variables other than the treatment.
However, if we control for relevant variables by conditioning, then maybe
the subgroups will be exchangeable. We will clarify what the “relevant
variables” are in Chapter 3, but for now, let’s just say they are all of the
covariates 𝑋 . Then, we can state conditional exchangeability formally.

Assumption 2.2 (Conditional Exchangeability / Unconfoundedness)

(𝑌(1), 𝑌(0)) ⫫ 𝑇 | 𝑋

The idea is that although the treatment and potential outcomes may
be unconditionally associated (due to confounding), within levels of 𝑋 ,
they are not associated. In other words, there is no confounding within
levels of 𝑋 because controlling for 𝑋 has made the treatment groups
comparable. We’ll now give a bit of graphical intuition for the above. We
will not draw the rigorous connection between the graphical intuition
and Assumption 2.2 until Chapter 3; for now, it is just meant to aid
intuition.
We do not have exchangeability in the data because 𝑋 is a common cause of 𝑇 and 𝑌.
We illustrate this in Figure 2.3. Because 𝑋 is a common cause of 𝑇 and 𝑌, there is
non-causal association between 𝑇 and 𝑌. This non-causal association flows along the
𝑇 ← 𝑋 → 𝑌 path; we depict this with a red dashed arc.

Figure 2.3: Causal structure of 𝑋 confounding the effect of 𝑇 on 𝑌. We depict the
confounding with a red dashed line.

However, we do have conditional exchangeability in the data. This is because, when we
condition on 𝑋, there is no longer any non-causal association between 𝑇 and 𝑌. The
non-causal association is now “blocked” at 𝑋 by conditioning on 𝑋. We illustrate this
blocking in Figure 2.4 by shading 𝑋 to indicate it is conditioned on and by showing the
red dashed arc being blocked there.

Figure 2.4: Illustration of conditioning on 𝑋 leading to no confounding.

Conditional exchangeability is the main assumption necessary for causal inference.
Armed with this assumption, we can identify the causal effect within levels of 𝑋, just like
we did with (unconditional) exchangeability:

𝔼[𝑌(1) − 𝑌(0) | 𝑋] = 𝔼[𝑌(1) | 𝑋] − 𝔼[𝑌(0) | 𝑋]    (2.5)
                   = 𝔼[𝑌(1) | 𝑇 = 1, 𝑋] − 𝔼[𝑌(0) | 𝑇 = 0, 𝑋]    (2.6)
                   = 𝔼[𝑌 | 𝑇 = 1, 𝑋] − 𝔼[𝑌 | 𝑇 = 0, 𝑋]    (2.7)

In parallel to before, we get Equation 2.5 by linearity of expectation. And we now get
Equation 2.6 by conditional exchangeability. If we want the marginal effect that we had
before when assuming (unconditional) exchangeability, we can get that by simply
marginalizing out 𝑋:

𝔼[𝑌(1) − 𝑌(0)] = 𝔼𝑋 𝔼[𝑌(1) − 𝑌(0) | 𝑋]    (2.8)
               = 𝔼𝑋 [𝔼[𝑌 | 𝑇 = 1, 𝑋] − 𝔼[𝑌 | 𝑇 = 0, 𝑋]]    (2.9)

This marks an important result for causal inference, so we’ll give it its own proposition
box. The proof we give above leaves out some details. Read through to Section 2.3.6
(where we redo the proof with all details specified) to get the rest of the details. We will
call this result the adjustment formula.

Theorem 2.1 (Adjustment Formula) Given the assumptions of unconfoundedness, positivity,
consistency, and no interference, we can identify the average treatment effect:

𝔼[𝑌(1) − 𝑌(0)] = 𝔼𝑋 [𝔼[𝑌 | 𝑇 = 1, 𝑋] − 𝔼[𝑌 | 𝑇 = 0, 𝑋]]
Conditional exchangeability (Assumption 2.2) is a core assumption for


causal inference and goes by many names. For example, the following
are reasonably commonly used to refer to the same assumption: un-
confoundedness, conditional ignorability, no unobserved confounding,
selection on observables, no omitted variable bias, etc. We will use the
name “unconfoundedness” a fair amount throughout this book.
The main reason for moving from exchangeability (Assumption 2.1) to
conditional exchangeability (Assumption 2.2) was that it seemed like a
more realistic assumption. However, we often cannot know for certain
if conditional exchangeability holds. There may be some unobserved
confounders that are not part of 𝑋 , meaning conditional exchangeability
is violated. Fortunately, that is not a problem in randomized experiments
(Chapter 5). Unfortunately, it is something that we must always be conscious of in
observational data. Intuitively, the best thing we can do is to observe and fit as many
covariates into 𝑋 as possible to try to ensure unconfoundedness.⁸

⁸ As we will see in Chapters 3 and 4, it is not necessarily true that conditioning on more
covariates always helps our causal estimates be less biased.
2.3.4 Positivity/Overlap and Extrapolation

While conditioning on many covariates is attractive for achieving uncon-


foundedness, it can actually be detrimental for another reason that has
to do with another important assumption that we have yet to discuss:
positivity. We will get to why at the end of this section. Positivity is the
condition that all subgroups of the data with different covariates have
some probability of receiving any value of treatment. Formally, we define
positivity for binary treatment as follows.

Assumption 2.3 (Positivity / Overlap / Common Support) For all values of covariates 𝑥
present in the population of interest (i.e. 𝑥 such that 𝑃(𝑋 = 𝑥) > 0),

0 < 𝑃(𝑇 = 1 | 𝑋 = 𝑥) < 1

To see why positivity is important, let’s take a closer look at Equation 2.9:

𝔼[𝑌(1) − 𝑌(0)] = 𝔼𝑋 [𝔼[𝑌 | 𝑇 = 1, 𝑋] − 𝔼[𝑌 | 𝑇 = 0, 𝑋]]    (2.9 revisited)
In short, if we have a positivity violation, then we will be conditioning
on a zero probability event. This is because there will be some value
of 𝑥 with non-zero probability for which 𝑃(𝑇 = 1 | 𝑋 = 𝑥) = 0 or
𝑃(𝑇 = 0 | 𝑋 = 𝑥) = 0. This means that for some value of 𝑥 that we
are marginalizing out in the above equation, 𝑃(𝑇 = 1 , 𝑋 = 𝑥) = 0 or
𝑃(𝑇 = 0 , 𝑋 = 𝑥) = 0, and these are the two events that we condition on
in Equation 2.9.
To clearly see how a positivity violation translates to division by zero,
let’s rewrite the right-hand side of Equation 2.9. For discrete covariates
and outcome, it can be rewritten as follows:
$$\sum_x P(X = x) \left( \sum_y y\, P(Y = y \mid T = 1, X = x) \;-\; \sum_y y\, P(Y = y \mid T = 0, X = x) \right) \tag{2.10}$$

Then, applying Bayes’ rule, this can be further rewritten:

$$\sum_x P(X = x) \left( \sum_y y\, \frac{P(Y = y, T = 1, X = x)}{P(T = 1 \mid X = x)\, P(X = x)} \;-\; \sum_y y\, \frac{P(Y = y, T = 0, X = x)}{P(T = 0 \mid X = x)\, P(X = x)} \right) \tag{2.11}$$
In Equation 2.11, we can clearly see why positivity is essential. If
𝑃(𝑇 = 1 | 𝑋 = 𝑥) = 0 for any level of covariates 𝑥 with non-zero prob-
ability, then there is division by zero in the first term in the equation,
so 𝔼𝑋 𝔼[𝑌 | 𝑇 = 1 , 𝑋] is undefined. Similarly, if 𝑃(𝑇 = 1 | 𝑋 = 𝑥) = 1
for any level of 𝑥 , then 𝑃(𝑇 = 0 | 𝑋 = 𝑥) = 0, so there is division by
zero in the second term and 𝔼𝑋 𝔼[𝑌 | 𝑇 = 0 , 𝑋] is undefined. With
either of these violations of the positivity assumption, the causal effect is
undefined.

Intuition That’s the math for why we need the positivity assumption,
but what’s the intuition? Well, if we have a positivity violation, that
means that within some subgroup of the data, everyone always receives
treatment or everyone always receives the control. It wouldn’t make
sense to be able to estimate a causal effect of treatment vs. control in that
subgroup since we see only treatment or only control. We never see the
alternative in that subgroup.
Another name for positivity is overlap. The intuition for this name is that we want the
covariate distribution of the treatment group to overlap with the covariate distribution of
the control group. More specifically, we want 𝑃(𝑋 | 𝑇 = 1)⁹ to have the same support as
𝑃(𝑋 | 𝑇 = 0).¹⁰ This is why another common alias for positivity is common support.

⁹ Whenever we use a random variable (denoted by a capital letter) as the argument for 𝑃,
we are referring to the whole distribution, rather than just the scalar that something like
𝑃(𝑥 | 𝑇 = 1) refers to.
¹⁰ Active reading exercise: convince yourself that this formulation of overlap/positivity
is equivalent to the formulation in Assumption 2.3.

The Positivity-Unconfoundedness Tradeoff Although conditioning on more covariates
could lead to a higher chance of satisfying unconfoundedness, it can lead to a higher
chance of violating positivity. As we increase the dimension of the covariates, we make
the subgroups for any level 𝑥 of the covariates smaller.¹¹ As each subgroup gets smaller,
there is a higher and higher chance that either the whole subgroup will have treatment or
the whole subgroup will have control. For example, once the size of any subgroup has
decreased to one, positivity is guaranteed to not hold. See [6] for a rigorous argument of
high-dimensional covariates leading to positivity violations.

¹¹ This is related to the curse of dimensionality.
[6]: D’Amour et al. (2017), Overlap in Observational Studies with High-Dimensional
Covariates

Extrapolation Violations of the positivity assumption can actually lead to demanding
too much from models and getting very bad behavior in return. Many causal effect
estimators¹² fit a model to 𝔼[𝑌|𝑡, 𝑥] using the (𝑡, 𝑥, 𝑦) tuples as data. The inputs to these
models are (𝑡, 𝑥) pairs and the outputs are the corresponding outcomes. These models
will be forced to extrapolate in regions (using their parametric assumptions) where
𝑃(𝑇 = 1, 𝑋 = 𝑥) = 0 and regions where 𝑃(𝑇 = 0, 𝑋 = 𝑥) = 0 when they are used in the
adjustment formula (Theorem 2.1) in place of the corresponding conditional expectations.

¹² An “estimator” is a function that takes a dataset as input and outputs an estimate. We
discuss this statistics terminology more in Section 2.4.
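A rough practical diagnostic for overlap, sketched below in my own words rather than the book’s, is to estimate the propensity score 𝑃(𝑇 = 1 | 𝑋) and flag units whose estimated propensity is very close to 0 or 1. The helper name `check_overlap`, the logistic-regression propensity model, and the threshold `eps` are all arbitrary illustrative choices.

```python
# Flag units in regions of (near) positivity violation using an estimated
# propensity score P(T = 1 | X).
import numpy as np
from sklearn.linear_model import LogisticRegression

def check_overlap(t, x, eps=0.05):
    x = np.asarray(x).reshape(len(t), -1)
    propensity = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]
    extreme = (propensity < eps) | (propensity > 1 - eps)
    print(f"estimated propensities in [{propensity.min():.3f}, {propensity.max():.3f}]")
    print(f"{extreme.sum()} of {len(t)} units within {eps} of 0 or 1")
    return propensity
```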

2.3.5 No interference, Consistency, and SUTVA

There are a few additional assumptions we’ve been smuggling in through-


out this chapter. We will specify all the rest of these assumptions in this
section. The first assumption in this section is that of no interference.
No interference means that my outcome is unaffected by anyone else’s
treatment. Rather, my outcome is only a function of my own treatment.
We’ve been using this assumption implicitly throughout this chapter.
We’ll now formalize it.

Assumption 2.4 (No Interference)

𝑌𝑖(𝑡1, . . . , 𝑡𝑖−1, 𝑡𝑖, 𝑡𝑖+1, . . . , 𝑡𝑛) = 𝑌𝑖(𝑡𝑖)

Of course, this assumption could be violated. For example, if the treatment


is “get a dog” and the outcome is my happiness, it could easily be that my
happiness is influenced by whether or not my friends get dogs because
we could end up hanging out more to have our dogs play together. As you
might expect, violations of the no interference assumption are rampant in network data.
The last assumption is consistency. Consistency is the assumption that
the outcome we observe 𝑌 is actually the potential outcome under the
observed treatment 𝑇 .

Assumption 2.5 (Consistency) If the treatment is 𝑇, then the observed outcome 𝑌 is the
potential outcome under treatment 𝑇. Formally,

𝑇 = 𝑡 =⇒ 𝑌 = 𝑌(𝑡)    (2.12)

We could write this equivalently as follows:

𝑌 = 𝑌(𝑇) (2.13)

Note that 𝑇 is different from 𝑡 , and 𝑌(𝑇) is different from 𝑌(𝑡). 𝑇 is a


random variable that corresponds to the observed treatment, whereas 𝑡
is a specific value of treatment. Similarly, 𝑌(𝑡) is the potential outcome for
some specific value of treatment, whereas 𝑌(𝑇) is the potential outcome
for the actual value of treatment that we observe.
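Consistency can be read directly as a line of code. The following toy illustration is mine (not the book’s) and again pretends we could see both potential outcomes: for a binary treatment, the observed outcome is just a selection between them.

```python
# Consistency as code: the observed outcome is the potential outcome under
# the treatment actually received, Y = Y(T).
import numpy as np

t  = np.array([1, 0, 1, 0])   # observed treatments
y1 = np.array([1, 1, 0, 1])   # Y_i(1)
y0 = np.array([1, 0, 0, 0])   # Y_i(0)

y_observed = np.where(t == 1, y1, y0)   # Y = Y(T)
print(y_observed)                       # [1 0 0 0]
```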
When we were using exchangeability to prove identifiability, we actually
assumed consistency in Equation 2.4 to get the following equality:

𝔼[𝑌(1) | 𝑇 = 1] − 𝔼[𝑌(0) | 𝑇 = 0] = 𝔼[𝑌 | 𝑇 = 1] − 𝔼[𝑌 | 𝑇 = 0]

Similarly, when we were using conditional exchangeability to prove


identifiability, we assumed consistency in Equation 2.7.
It might seem like consistency is obviously true, but that is not always the
case. For example, if the treatment specification is simply “get a dog” or
“don’t get a dog,” this can be too coarse to yield consistency. It might be
that if I were to get a puppy, I would observe 𝑌 = 1 (happiness) because
I needed an energetic friend, but if I were to get an old, low-energy dog, I
would observe 𝑌 = 0 (unhappiness). However, both of these treatments
fall under the category of “get a dog,” so both correspond to 𝑇 = 1. This
means that 𝑌(1) is not well defined, since it will be 1 or 0, depending
on something that is not captured by the treatment specification. In
this sense, consistency encompasses the assumption that is sometimes
referred to as “no multiple versions of treatment.” See Sections 3.4 and 3.5 of Hernán and
Robins [7] and references therein for more discussion on this topic.

[7]: Hernán and Robins (2020), Causal Inference: What If
SUTVA You will also commonly see the stable unit-treatment value assumption (SUTVA) in the literature. SUTVA is satisfied if unit (individual) 𝑖 's outcome is simply a function of unit 𝑖 's treatment. Therefore, SUTVA is a combination of consistency and no interference (and also deterministic potential outcomes).13
13: Active reading exercise: convince yourself that SUTVA is a combination of consistency and no interference.
2.3.6 Tying It All Together

We introduced unconfoundedness (conditional exchangeability) first


because it is the main causal assumption. However, all of the assumptions
are necessary:
1. Unconfoundedness (Assumption 2.2)


2. Positivity (Assumption 2.3)
3. No interference (Assumption 2.4)
4. Consistency (Assumption 2.5)
We’ll now review the proof of the adjustment formula (Theorem 2.1)
that was done in Equation 2.5 through Equation 2.9 and list which
assumptions are used for each step. Even before we get to these equations,
we use the no interference assumption to justify that the quantity we
should be looking at for causal inference is 𝔼[𝑌(1) − 𝑌(0)], rather than
something more complex like the left-hand side of Assumption 2.4. In
the proof below, the first two equalities follow from mathematical facts,
whereas the last two follow from these key assumptions.

Proof of Theorem 2.1.

𝔼[𝑌(1) − 𝑌(0)] = 𝔼[𝑌(1)] − 𝔼[𝑌(0)]    (linearity of expectation)
                = 𝔼𝑋 [𝔼[𝑌(1) | 𝑋] − 𝔼[𝑌(0) | 𝑋]]    (law of iterated expectations)
                = 𝔼𝑋 [𝔼[𝑌(1) | 𝑇 = 1, 𝑋] − 𝔼[𝑌(0) | 𝑇 = 0, 𝑋]]    (unconfoundedness and positivity)
                = 𝔼𝑋 [𝔼[𝑌 | 𝑇 = 1, 𝑋] − 𝔼[𝑌 | 𝑇 = 0, 𝑋]]    (consistency)

That’s how all of these assumptions tie together to give us identifiability


of the ATE. We’ll soon see how to use this result to get an actual estimated
number for the ATE.

2.4 Fancy Statistics Terminology Defancified

Before we start computing concrete numbers for the ATE, we must


quickly introduce some terminology from statistics that will help clarify
the discussion. An estimand is the quantity that we want to estimate. For
example, 𝔼𝑋 [𝔼[𝑌 | 𝑇 = 1 , 𝑋] − 𝔼[𝑌 | 𝑇 = 0 , 𝑋]] is the estimand we care
about for estimating the ATE. An estimate (noun) is an approximation of
some estimand, which we get using data. We will see concrete numbers
in the next section; these are estimates. Given some estimand 𝛼 , we write
an estimate of that estimand by simply putting a hat on it: 𝛼ˆ . And an
estimator is a function that maps a dataset to an estimate of the estimand.
The process that we will use to go from data + estimand to a concrete
number is known as estimation. To estimate (verb) is to feed data into an
estimator to get an estimate.
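To make this terminology concrete, here is a minimal sketch in Python (not the book's code; the simulated data and names are made up) in which the estimand is the population mean 𝔼[𝑌], the estimator is the sample-mean function, and the number it outputs is the estimate:

import numpy as np

# Hypothetical data: 1000 draws of Y from a population with E[Y] = 2
rng = np.random.default_rng(0)
y_data = rng.normal(loc=2.0, scale=1.0, size=1000)

# Estimator: a function that maps a dataset to an estimate of the estimand E[Y]
def sample_mean_estimator(y):
    return y.mean()

# Estimation: feeding data into the estimator; the returned number is the estimate
estimate = sample_mean_estimator(y_data)
print('estimate of E[Y]:', estimate)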
In this book, we will use even more specific language that allows us to
make the distinction between causal quantities and statistical quantities.
We will use the phrase causal estimand to refer to any estimand that
contains a potential outcome or do-operator in it. We will use the phrase
statistical estimand to denote the complement: any estimand that does not
contain a potential outcome or do-operator in it. For an example, recall


the adjustment formula (Theorem 2.1):

𝔼[𝑌(1) − 𝑌(0)] = 𝔼𝑋 [𝔼[𝑌 | 𝑇 = 1, 𝑋] − 𝔼[𝑌 | 𝑇 = 0, 𝑋]] (2.14)

𝔼[𝑌(1) − 𝑌(0)] is the causal estimand that we are interested in. In order
to actually estimate this causal estimand, we must translate it into a
statistical estimand: 𝔼𝑋 [𝔼[𝑌 | 𝑇 = 1, 𝑋] − 𝔼[𝑌 | 𝑇 = 0, 𝑋]].14
14: Active reading exercise: Why can't we directly estimate a causal estimand without first translating it to a statistical estimand?
When we say "identification" in this book, we are referring to the process
of moving from a causal estimand to an equivalent statistical estimand.
When we say “estimation,” we are referring to the process of moving from
a statistical estimand to an estimate. We illustrate this in the flowchart in
Figure 2.5.

Causal Estimand →(Identification)→ Statistical Estimand →(Estimation)→ Estimate

Figure 2.5: The Identification-Estimation Flowchart – a flowchart that illustrates the process of moving from a target causal estimand to a corresponding estimate, through identification and estimation.

What do we do when we go to actually estimate quantities such as


𝔼𝑋 [𝔼[𝑌 | 𝑇 = 1, 𝑋] − 𝔼[𝑌 | 𝑇 = 0, 𝑋]]? We will often use a model (e.g.
linear regression or some more fancy predictor from machine learning)
in place of the conditional expectations 𝔼[𝑌 | 𝑇 = 𝑡, 𝑋 = 𝑥]. We will
refer to estimators that use models like this as model-assisted estimators.
Now that we’ve gotten some of this terminology out of the way, we can
proceed to an example of estimating the ATE.

2.5 A Complete Example with Estimation

Theorem 2.1 and the corresponding recent copy in Equation 2.14 give
us identification. However, we haven’t discussed estimation at all. In
this section, we will give a short example complete with estimation. We
will cover the topic of estimation of causal effects more completely in
Chapter 7.
We use Luque-Fernandez et al. [8]'s example from epidemiology. The outcome 𝑌 of interest is (systolic) blood pressure. This is an important outcome because roughly 46% of Americans have high blood pressure, and high blood pressure is associated with increased risk of mortality [9]. The "treatment" 𝑇 of interest is sodium intake. Sodium intake is a continuous variable; in order to easily apply Equation 2.14, which is specified for binary treatment, we will binarize 𝑇 by letting 𝑇 = 1 denote daily sodium intake above 3.5 grams and letting 𝑇 = 0 denote daily sodium intake below 3.5 grams.15 We will be estimating the causal effect of sodium intake on blood pressure. In our data, we also have the age of the individuals and amount of protein in their urine as covariates 𝑋 . Luque-Fernandez et al. [8] run a simulation, taking care to be sure that the range of values is "biologically plausible and as close to reality as possible."

15: As we will see, this binarization is purely pedagogical and does not reflect any limitations of adjusting for confounders.
[8]: Luque-Fernandez et al. (2018), 'Educational Note: Paradoxical collider effect in the analysis of non-communicable disease epidemiological data: a reproducible illustration and web application'
[9]: Virani et al. (2020), 'Heart Disease and Stroke Statistics—2020 Update: A Report From the American Heart Association'
Because we are using data from a simulation, we know that the true ATE
of sodium on blood pressure is 1.05. More concretely, the line of code
that generates blood pressure 𝑌 looks as follows:
blood_pressure = 1.05 * sodium + ...

Now, how do we actually estimate the ATE? First, we assume consistency,


positivity, and unconfoundedness given 𝑋 . As we recently recalled in
Equation 2.14, this means that we’ve identified the ATE as

𝔼𝑋 [𝔼[𝑌 | 𝑇 = 1, 𝑋] − 𝔼[𝑌 | 𝑇 = 0, 𝑋]] .

We then take that outer expectation over 𝑋 and replace it with an


empirical mean over the data, giving us the following:

(1/𝑛) ∑𝑖 [𝔼[𝑌 | 𝑇 = 1, 𝑥 𝑖 ] − 𝔼[𝑌 | 𝑇 = 0, 𝑥 𝑖 ]]    (2.15)

To complete our estimator, we then fit some machine learning model to


the conditional expectation 𝔼[𝑌 | 𝑡, 𝑥]. Minimizing the mean-squared
error (MSE) of predicting 𝑌 from (𝑇, 𝑋) pairs is equivalent to modeling
this conditional expectation [see, e.g., 10, Section 2.4]. Therefore, we can plug in any machine learning model for 𝔼[𝑌 | 𝑡, 𝑥], which gives us a model-assisted estimator. We'll use linear regression here, which works out nicely since blood pressure is generated as a linear combination of other variables, in this simulation. We give Python code for this below, where our data are in a Pandas DataFrame called df. We fit the model for 𝔼[𝑌 | 𝑡, 𝑥] in line 8, and we take the empirical mean over 𝑋 in lines 10-14.
[10]: Hastie et al. (2001), The Elements of Statistical Learning

Listing 2.1: Python code for estimating the ATE

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

Xt = df[['sodium', 'age', 'proteinuria']]
y = df['blood_pressure']
model = LinearRegression()
model.fit(Xt, y)

Xt1 = pd.DataFrame.copy(Xt)
Xt1['sodium'] = 1
Xt0 = pd.DataFrame.copy(Xt)
Xt0['sodium'] = 0
ate_est = np.mean(model.predict(Xt1) - model.predict(Xt0))
print('ATE estimate:', ate_est)

Full code, complete with simulation, is available at https://github.com/bradyneal/causal-book-code/blob/master/sodium_example.py.

This yields an ATE estimate of 0.85. If we were to naively regress 𝑌 on only 𝑇 , which corresponds to replacing line 5 in Listing 2.1 with Xt = df[['sodium']],16 we would get an ATE estimate of 5.33. That's a |5.33 − 1.05| / 1.05 × 100% = 407% error! In contrast, when we control for 𝑋 (as in Listing 2.1), our percent error is only |0.85 − 1.05| / 1.05 × 100% = 19%.
16: Active reading exercise: This naive version is equivalent to just taking the associational difference: 𝔼[𝑌 | 𝑇 = 1] − 𝔼[𝑌 | 𝑇 = 0]. Why?

All of the above is done using the adjustment formula with model-assisted
estimation, where we first fit a model for the conditional expectation
𝔼[𝑌 | 𝑡, 𝑥], and then we take an empirical mean over 𝑋 , using that model.
However, because we are using a linear model, this is equivalent to just
taking the coefficient in front of 𝑇 in the linear regression as the ATE
estimate. This is what we do in the following code (which gives the exact
same ATE estimate):

Listing 2.2: Python code for estimating the ATE using the coefficient of linear regression

Xt = df[['sodium', 'age', 'proteinuria']]
y = df['blood_pressure']
model = LinearRegression()
model.fit(Xt, y)
ate_est = model.coef_[0]
print('ATE estimate:', ate_est)

Continuous Treatment What if we allow the treatment, daily sodium


intake, to remain continuous, instead of binarizing it? The cool thing
about just taking the regression coefficient as the ATE estimate is that it
doesn’t require taking a difference between two values of treatment (e.g.
𝑇 = 1 and 𝑇 = 0), so it trivially generalizes to when 𝑇 is continuous. In
other words, we have compressed all of 𝔼[𝑌 | 𝑡], which is a function of 𝑡 ,
into a single value.
However, this effortless compression of all of 𝔼[𝑌 | 𝑡] for continuous 𝑡 comes at a cost: the linear parametric form we assumed. If this model were misspecified,17 our ATE estimate would be biased. And because linear models are so simple, they will likely be misspecified. For example, the following assumption is implicit in assuming that a linear model is well-specified: the treatment effect is the same for all individuals. See Morgan and Winship [11, Sections 6.2 and 6.3] for a more complete critique of using the coefficient in front of treatment as the ATE estimate.
17: By "misspecified," we mean that the functional form of the model does not match the functional form of the data generating process.
[11]: Morgan and Winship (2014), Counterfactuals and Causal Inference: Methods and Principles for Social Research
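As a minimal sketch of the continuous-treatment version (not from the book's code, and assuming the same hypothetical DataFrame df as in Listing 2.1 but with the sodium column left continuous rather than binarized), the estimator mirrors Listing 2.2: under the linearity assumption, the coefficient in front of sodium is read off as the ATE estimate, now interpreted per unit of daily sodium intake:

# Sketch only: same structure as Listing 2.2, with 'sodium' left continuous
Xt = df[['sodium', 'age', 'proteinuria']]
y = df['blood_pressure']
model = LinearRegression()
model.fit(Xt, y)
ate_est = model.coef_[0]  # slope on continuous sodium, taken as the ATE estimate under linearity
print('ATE estimate (continuous treatment):', ate_est)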
3 The Flow of Association and Causation in Graphs

3.1 Graph Terminology
3.2 Bayesian Networks
3.3 Causal Graphs
3.4 Two-Node Graphs and Graphical Building Blocks
3.5 Chains and Forks
3.6 Colliders and their Descendants
3.7 d-separation
3.8 Flow of Association and Causation

We've been using causal graphs in the previous chapters to aid intuition. In this chapter, we will introduce the formalisms that underlie this intuition. Hopefully, we have sufficiently motivated this chapter and made the utility of graphical models clear with all of the graphical interpretations of concepts in previous chapters.

3.1 Graph Terminology

In this section, we will use the terminology machine gun (see Figure 3.1). To be able to use nice convenient graph language in the following sections, rapid-firing a lot of graph terminology is a necessary evil, unfortunately.

Figure 3.1: Terminology machine gun

The term "graph" is often used to describe a variety of visualizations. For example, "graph" might refer to a visualization of a single variable function 𝑓 (𝑥), where 𝑥 is plotted on the 𝑥 -axis and 𝑓 (𝑥) is plotted on the 𝑦 -axis. Or "bar graph" might be used as a synonym for a bar
chart. However, in graph theory, the term “graph” refers to a specific
mathematical object.
A graph is a collection of nodes (also called "vertices") and edges that connect the nodes. For example, in Figure 3.2, 𝐴, 𝐵, and 𝐶 are the nodes of the graph, and the lines connecting them are the edges. Figure 3.2 is called an undirected graph because the edges do not have any direction. In contrast, Figure 3.3 is a directed graph. A directed graph's edges go out of a parent node and into a child node, with the arrows signifying which direction the edges are going. We will denote the parents of a node 𝑋 with pa(𝑋). We'll use an even simpler shorthand when the nodes are ordered so that we can denote the 𝑖 th node by 𝑋𝑖 ; in that case, we will also denote the parents of 𝑋𝑖 by pa𝑖 . Two nodes are said to be adjacent if they are connected by an edge. For example, in both Figure 3.2 and Figure 3.3, 𝐴 and 𝐶 are adjacent, but 𝐴 and 𝐷 are not.

Figure 3.2: Undirected graph (nodes 𝐴, 𝐵, 𝐶, 𝐷)
Figure 3.3: Directed graph (nodes 𝐴, 𝐵, 𝐶, 𝐷)

A path in a graph is any sequence of adjacent nodes, regardless of the direction of the edges that join them. For example, 𝐴 — 𝐶 — 𝐵 is a path in Figure 3.2, and 𝐴 → 𝐶 ← 𝐵 is a path in Figure 3.3. A directed path is a path that consists of directed edges that are all directed in the same direction (no two edges along the path both point into or both point out of the same node). For example, 𝐴 → 𝐶 → 𝐷 is a directed path in Figure 3.3, but 𝐴 → 𝐶 ← 𝐵 and 𝐶 ← 𝐴 → 𝐵 are not.
If there is a directed path that starts at node 𝑋 and ends at node 𝑌 , then 𝑋
is an ancestor of 𝑌 , and 𝑌 is a descendant of 𝑋 . We will denote descendants
of 𝑋 by de(𝑋). For example, in Figure 3.3, 𝐴 is an ancestor of 𝐵 and
𝐷 , and 𝐵 and 𝐷 are both descendants of 𝐴 (de(𝐴)). If 𝑋 is an ancestor
of itself, then some funky time travel has taken place. In seriousness, a
directed path from some node 𝑋 back to itself is known as a cycle. If there
are no cycles in a directed graph, the graph is known as a directed acyclic


graph (DAG). The graphs we focus on in this book will mostly be DAGs.
If two parents 𝑋 and 𝑌 share some child 𝑍 , but there is no edge connecting
𝑋 and 𝑌 , then 𝑋 → 𝑍 ← 𝑌 is known as an immorality. Seriously; that’s a
real term in graphical models. For example, if the 𝐴 → 𝐵 edge did not
exist in Figure 3.3, then 𝐴 → 𝐶 ← 𝐵 would be an immorality.
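To make this terminology concrete, here is a small sketch (not from the book) using the Python library networkx, with the edge set of Figure 3.3 read off from the description above (𝐴 → 𝐵, 𝐴 → 𝐶, 𝐵 → 𝐶, 𝐶 → 𝐷):

import networkx as nx

# Edges of the directed graph in Figure 3.3, as described in the text
G = nx.DiGraph([('A', 'B'), ('A', 'C'), ('B', 'C'), ('C', 'D')])

print(list(G.predecessors('C')))        # parents pa(C): ['A', 'B']
print(nx.descendants(G, 'A'))           # de(A): {'B', 'C', 'D'}
print(nx.ancestors(G, 'D'))             # ancestors of D: {'A', 'B', 'C'}
print(nx.is_directed_acyclic_graph(G))  # True: no cycles, so this is a DAG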

3.2 Bayesian Networks

It turns out that much of the work for causal graphical models was done
in the field of probabilistic graphical models. Probabilistic graphical
models are statistical models while causal graphical models are causal
models. Bayesian networks are the main probabilistic graphical model
that causal graphical models (causal Bayesian networks) inherit most of
their properties from.
Imagine that we only cared about modeling association, without any
causal modeling. We would want to model the data distribution 𝑃(𝑥 1 , 𝑥 2 , . . . , 𝑥 𝑛 ).
In general, we can use the chain rule of probability to factorize any distri-
bution:
𝑃(𝑥1 , 𝑥2 , . . . , 𝑥 𝑛 ) = 𝑃(𝑥1 ) ∏𝑖 𝑃(𝑥 𝑖 | 𝑥 𝑖−1 , . . . , 𝑥1 )    (3.1)

However, if we were to model these factors with tables, it would take an exponential number of parameters. To see this, take each 𝑥 𝑖 to be binary and consider how we would model the factor 𝑃(𝑥 𝑛 | 𝑥 𝑛−1 , . . . , 𝑥1 ). Since 𝑥 𝑛 is binary, we only need to model 𝑃(𝑋𝑛 = 1 | 𝑥 𝑛−1 , . . . , 𝑥1 ) because 𝑃(𝑋𝑛 = 0 | 𝑥 𝑛−1 , . . . , 𝑥1 ) is simply 1 − 𝑃(𝑋𝑛 = 1 | 𝑥 𝑛−1 , . . . , 𝑥1 ). Well, we would need 2^(𝑛−1) parameters to model this. As a specific example, let 𝑛 = 4. As we can see in Table 3.1, this would require 2^(4−1) = 8 parameters: 𝛼1 , . . . , 𝛼8 . This brute-force parametrization quickly becomes intractable as 𝑛 increases.

Table 3.1: Table required to model the single factor 𝑃(𝑥 𝑛 | 𝑥 𝑛−1 , . . . , 𝑥1 ) where 𝑛 = 4 and the variables are binary. The number of parameters necessary is exponential in 𝑛 .

𝑥1   𝑥2   𝑥3   𝑃(𝑥4 | 𝑥3 , 𝑥2 , 𝑥1 )
0    0    0    𝛼1
0    0    1    𝛼2
0    1    0    𝛼3
0    1    1    𝛼4
1    0    0    𝛼5
1    0    1    𝛼6
1    1    0    𝛼7
1    1    1    𝛼8

An intuitive way to more efficiently model many variables together in a joint distribution is to only model local dependencies. For example, rather than modeling the 𝑋4 factor as 𝑃(𝑥4 |𝑥3 , 𝑥2 , 𝑥1 ), we could model it as 𝑃(𝑥4 |𝑥3 ) if we have reason to believe that 𝑋4 only locally depends on 𝑋3 . In fact, in the corresponding graph in Figure 3.4, the only node that feeds into 𝑋4 is 𝑋3 . This is meant to signify that 𝑋4 only locally depends on 𝑋3 . Whenever we use a graph 𝐺 in relation to a probability distribution 𝑃 , there will always be a one-to-one mapping between the nodes in 𝐺 and the random variables in 𝑃 , so when we talk about nodes being independent, we mean the corresponding random variables are independent.

Figure 3.4: Four node DAG where 𝑋4 locally depends on only 𝑋3 .

Given a probability distribution and a corresponding directed acyclic graph (DAG), we can formalize the specification of independencies with the local Markov assumption:
Assumption 3.1 (Local Markov Assumption) Given its parents in the
DAG, a node 𝑋 is independent of all of its non-descendants.
This assumption (along with specific DAGs) gives us a lot. We will


demonstrate this in the next few equations. In our four variable example,
the chain rule of probability tells us that we can factorize any 𝑃 such that

𝑃(𝑥1 , 𝑥2 , 𝑥3 , 𝑥4 ) = 𝑃(𝑥1 ) 𝑃(𝑥2 |𝑥1 ) 𝑃(𝑥3 |𝑥2 , 𝑥1 ) 𝑃(𝑥4 |𝑥 3 , 𝑥2 , 𝑥1 ) . (3.2)

If 𝑃 is Markov with respect to the graph1 in Figure 3.4, then we can simplify the last factor:
1: A probability distribution is said to be (locally) Markov with respect to a DAG if they satisfy the local Markov assumption.

𝑃(𝑥1 , 𝑥2 , 𝑥3 , 𝑥4 ) = 𝑃(𝑥1 ) 𝑃(𝑥2 |𝑥1 ) 𝑃(𝑥3 |𝑥2 , 𝑥1 ) 𝑃(𝑥4 |𝑥3 ) .    (3.3)

If we further remove edges, removing 𝑋1 → 𝑋2 and 𝑋2 → 𝑋3 as in Figure 3.5, we can further simplify the factorization of 𝑃 :

𝑃(𝑥1 , 𝑥2 , 𝑥3 , 𝑥4 ) = 𝑃(𝑥1 ) 𝑃(𝑥2 ) 𝑃(𝑥3 |𝑥1 ) 𝑃(𝑥4 |𝑥3 ) .    (3.4)

Figure 3.5: Four node DAG with even more independencies.

With the understanding that we have hopefully built up from a few examples,2 we will now state one of the main consequences of the local Markov assumption:
2: Active reading exercise: ensure that you know how we get from Equation 3.2 to Equation 3.3 and to Equation 3.4 using the local Markov assumption.

Definition 3.1 (Bayesian Network Factorization) Given a probability distribution 𝑃 and a DAG 𝐺 , 𝑃 factorizes according to 𝐺 if

𝑃(𝑥1 , . . . , 𝑥 𝑛 ) = ∏𝑖 𝑃(𝑥 𝑖 | pa𝑖 )

Hopefully you see the resemblance between the move from Equation 3.2
to Equation 3.3 or the move to Equation 3.4 and the generalization of this
that is presented in Definition 3.1.
The Bayesian network factorization is also known as the chain rule for
Bayesian networks or Markov compatibility. For example, if 𝑃 factorizes
according to 𝐺 , then 𝑃 and 𝐺 are Markov compatible.
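As a small sketch (not from the book; the conditional probability tables below are made-up numbers), here is the factorization of Equation 3.3 for the DAG in Figure 3.4, written as code:

# Hypothetical conditional probability tables for binary X1, ..., X4 (Figure 3.4),
# where X4's factor only depends on X3

def p_x1(x1):                  # P(x1)
    return 0.6 if x1 == 1 else 0.4

def p_x2_given(x2, x1):        # P(x2 | x1)
    p1 = 0.7 if x1 == 1 else 0.2
    return p1 if x2 == 1 else 1 - p1

def p_x3_given(x3, x2, x1):    # P(x3 | x2, x1)
    p1 = {(1, 1): 0.9, (1, 0): 0.5, (0, 1): 0.4, (0, 0): 0.1}[(x2, x1)]
    return p1 if x3 == 1 else 1 - p1

def p_x4_given(x4, x3):        # P(x4 | x3)
    p1 = 0.8 if x3 == 1 else 0.3
    return p1 if x4 == 1 else 1 - p1

def joint(x1, x2, x3, x4):
    # Bayesian network factorization (Equation 3.3)
    return p_x1(x1) * p_x2_given(x2, x1) * p_x3_given(x3, x2, x1) * p_x4_given(x4, x3)

# Sanity check: the joint sums to 1 over all 2^4 configurations
from itertools import product
print(sum(joint(*xs) for xs in product([0, 1], repeat=4)))  # 1.0

Note that this factorization needs only 1 + 2 + 4 + 2 = 9 parameters, rather than the 2^4 − 1 = 15 needed for a full joint table over four binary variables.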
We have given the intuition of how the local Markov assumption implies
the Bayesian network factorization, and it turns out that the two are
actually equivalent. In other words, we could have started with the
Bayesian network factorization as the main assumption (and labeled it as
an assumption) and shown that it implies the local Markov assumption.
See Koller and Friedman [12, Chapter 3] for these proofs and more information on this topic.
[12]: Koller and Friedman (2009), Probabilistic Graphical Models: Principles and Techniques
As important as the local Markov assumption is, it only gives us infor-
mation about the independencies in 𝑃 that a DAG implies. It does not
even tell us that if 𝑋 and 𝑌 are adjacent in the DAG, then 𝑋 and 𝑌 are
dependent. And this additional information is very commonly assumed
in causal DAGs. To get this guaranteed dependence between adjacent
nodes, we will generally assume a slightly stronger assumption than the
local Markov assumption: minimality.

3 This is often equivalently stated in the


Assumption 3.2 (Minimality Assumption) 1. Given its parents in following way: if we were to remove any
the DAG, a node 𝑋 is independent of all of its non-descendants edges from the DAG, 𝑃 would not be
(Assumption 3.1). Markov with respect to the graph with
the removed edges [see, e.g., 13, Section
2. Adjacent nodes in the DAG are dependent.3
6.5.3].

[13]: Peters et al. (2017), Elements of Causal


Inference: Foundations and Learning Algo-
rithms
To see why this assumption is named "minimality," consider what we know when we know that 𝑃 is Markov with respect to a DAG 𝐺 . We know that 𝑃 satisfies a set of independencies that are specific to the structure of 𝐺 . If 𝑃 and 𝐺 also satisfy minimality, then this set of independencies is minimal in the sense that 𝑃 does not satisfy any additional independencies. This is equivalent to saying that adjacent nodes are dependent.

For example, if the DAG were simply two connected nodes 𝑋 and 𝑌 as in Figure 3.6, the local Markov assumption would tell us that we can factorize 𝑃(𝑥, 𝑦) as 𝑃(𝑥)𝑃(𝑦|𝑥), but it would also allow us to factorize 𝑃(𝑥, 𝑦) as 𝑃(𝑥)𝑃(𝑦), meaning it allows distributions where 𝑋 and 𝑌 are independent. In contrast, the minimality assumption does not allow this additional independence. Minimality would tell us to factorize 𝑃(𝑥, 𝑦) as 𝑃(𝑥)𝑃(𝑦|𝑥), and it would tell us that no additional independencies (such as 𝑋 ⊥⊥ 𝑌 ) exist in distributions 𝑃 that are minimal with respect to Figure 3.6.

Figure 3.6: Two connected nodes (𝑋 and 𝑌 )
Because removing edges in a Bayesian network is equivalent to adding independencies,4 the minimality assumption is equivalent to saying that we can't remove any more edges from the graph. In a sense, every edge is "active." More concretely, consider that 𝑃 and 𝐺 are Markov compatible and that 𝐺′ is what we get when we remove some edge from 𝐺 . If 𝑃 is also Markov with respect to 𝐺′, then 𝑃 is not minimal with respect to 𝐺 .
4: Active reading exercise: why is removing edges in a Bayesian network equivalent to adding independencies?
Armed with the minimality assumption and what it implies about how
distributions factorize when they are Markov with respect to some DAG
(Definition 3.1), we are now ready to discuss the flow of association in
DAGs. However, because everything in this section is purely statistical,
we are not ready to discuss the flow of causation in DAGs. To do that, we
must make causal assumptions. Pedagogically, this will also allow us to
use intuitive causal language when we explain the flow of association.

3.3 Causal Graphs

The previous section was all about statistical models and modeling
association. In this section, we will augment these models with causal
assumptions, turning them into causal models and allowing us to study
causation. In order to introduce causal assumptions, we must first have
an understanding of what it means for 𝑋 to be a cause of 𝑌 .

Definition 3.2 (What is a cause?) A variable 𝑋 is said to be a cause of a


variable 𝑌 if 𝑌 can change in response to changes in 𝑋 .5
5: See Section 4.5.1 for a definition using mathematical notation.

Another phrase commonly used to describe this primitive is that 𝑌


“listens” to 𝑋 . With this, we can now specify the main causal assumption
that we will use throughout this book.

Assumption 3.3 ((Strict) Causal Edges Assumption) In a directed graph,


every parent is a direct cause of all of their children.

Here, the set of direct causes of 𝑌 is everything that 𝑌 directly responds


to; if we fix all of the direct causes of 𝑌 , then changing any other cause of
𝑌 won’t induce any changes in 𝑌 . This assumption is “strict” in the sense
that every edge is “active,” just like in DAGs that satisfy minimality. In
other words, because the definition of a cause (Definition 3.2) implies
that a cause and its effect are dependent and because we are assuming
all parents are causes of their children, we are assuming that parents
and their children are dependent. So the second part of minimality
(Assumption 3.2) is baked into the strict causal edges assumption.
In contrast, the non-strict causal edges assumption would allow for
some parents to not be causes of their children. It would just assume
that children are not causes of their parents. This allows us to draw
graphs with extra edges to make fewer assumptions, just like we would
in Bayesian networks, where more edges means fewer independence
assumptions. Causal graphs are sometimes drawn with this kind of
non-minimal meaning, but the vast majority of the time, when someone
draws a causal graph, they mean that parents are causes of their children.
Therefore, unless we specify otherwise, throughout this book, we will
use “causal graph” to refer to a DAG that satisfies the strict causal edges
assumption. And we will often omit the word “strict” when we refer to
this assumption.
When we add the causal edges assumption, directed paths in the DAG
take on a very special meaning; they correspond to causation. This is in
contrast to other paths in the graph, which association may flow along,
but causation certainly may not. This will become more clear when we
go into detail on these other kinds of paths in Sections 3.5 and 3.6.
Moving forward, we will now think of the edges of graphs as causal, in
order to describe concepts intuitively with causal language. However,
all of the associational claims about statistical independence will still
hold, even when the edges do not have causal meaning like in the vanilla
Bayesian networks of Section 3.2.
As we will see in the next few sections, the main assumptions that we
need for our causal graphical models to tell us how association and
causation flow between variables are the following two:
1. Local Markov Assumption (Assumption 3.1)
2. Causal Edges Assumption (Assumption 3.3)
We will discuss these assumptions throughout the next few sections and
come back to discuss them more fully again in Section 3.8 after we’ve
established the necessary preliminaries.

3.4 Two-Node Graphs and Graphical Building Blocks

Now that we’ve gotten the basic assumptions and definitions out of the
way, we can get to the core of this chapter: the flow of association and
causation in DAGs. We can understand this flow in general DAGs by
understanding the flow in the minimal building blocks of graphs. These
minimal building blocks consist of chains (Figure 3.7a), forks (Figure 3.7b),
immoralities (Figure 3.7c), two unconnected nodes (Figure 3.8), and two
connected nodes (Figure 3.9).
Figure 3.7: Basic graph building blocks. (a) Chain: 𝑋1 → 𝑋2 → 𝑋3 ; (b) Fork: 𝑋1 ← 𝑋2 → 𝑋3 ; (c) Immorality: 𝑋1 → 𝑋2 ← 𝑋3 .

By “flow of association,” we mean whether any two nodes in a graph are


associated or not associated. Another way of saying this is whether two
nodes are (statistically) dependent or (statistically) independent. Addi-
tionally, we will study whether two nodes are conditionally independent
or not.
For each building block, we will give the intuition for why two nodes
are (conditionally) independent or not, and we will give a proof as well.
We can prove that two nodes 𝐴 and 𝐵 are conditionally independent
given some set of nodes 𝐶 by simply showing that 𝑃(𝑎, 𝑏|𝑐) factorizes
as 𝑃(𝑎|𝑐) 𝑃(𝑏|𝑐). We will now do this in the case of the simplest basic
building block: two unconnected nodes.
Given a graph that is just two unconnected nodes, as depicted in Figure 3.8,
these nodes are not associated simply because there is no edge between
them. To show this, consider the factorization of 𝑃(𝑥1 , 𝑥2 ) that the Bayesian network factorization (Definition 3.1) gives us:

𝑃(𝑥1 , 𝑥2 ) = 𝑃(𝑥1 ) 𝑃(𝑥2 )    (3.5)

Figure 3.8: Two unconnected nodes (𝑋1 and 𝑋2 )

That’s it; applying the Bayesian network factorization immediately gives


us a proof that the two nodes 𝑋1 and 𝑋2 are unassociated (independent)
in this building block. And what is the assumption that allows us to
prove this? That 𝑃 is Markov with respect to the graph in Figure 3.8.
In contrast, if there is an edge between the two nodes (as in Figure 3.9), then the two nodes are associated. The assumption we leverage here is the causal edges assumption (Assumption 3.3), which means that 𝑋1 is a cause of 𝑋2 . Since 𝑋1 is a cause of 𝑋2 , 𝑋2 must be able to change in response to changes in 𝑋1 , so 𝑋2 and 𝑋1 are associated. In general, any time two nodes are adjacent in a causal graph, they are associated.6 We will see this same concept several more times in Section 3.5 and Section 3.6.

Figure 3.9: Two connected nodes (𝑋1 → 𝑋2 )
6: Two adjacent nodes in a non-strict causal graph can be unassociated.
Now that we’ve covered the relevant two-node graphs, we’ll cover the
flow of association in the remaining graphical building blocks (three-node
graphs in Figure 3.7), starting with chain graphs.

3.5 Chains and Forks

Chains (Figure 3.10) and forks (Figure 3.11) share the same set of dependencies. In both structures, 𝑋1 and 𝑋2 are dependent, and 𝑋2 and 𝑋3 are dependent for the same reason that we discussed toward the end of Section 3.4. Adjacent nodes are always dependent when we make the causal edges assumption (Assumption 3.3). What about 𝑋1 and 𝑋3 , though? Does association flow from 𝑋1 to 𝑋3 through 𝑋2 in chains and forks?

Figure 3.10: Chain (𝑋1 → 𝑋2 → 𝑋3 ) with flow of association drawn as a dashed red arc.
Usually, yes, 𝑋1 and 𝑋3 are associated in both chains and forks. In chain graphs, 𝑋1 and 𝑋3 are usually dependent simply because 𝑋1 causes changes in 𝑋2 which then causes changes in 𝑋3 . In a fork graph, 𝑋1 and 𝑋3 are also usually dependent. This is because the same value that 𝑋2 takes on is used to determine both the value that 𝑋1 takes on and the value that 𝑋3 takes on. In other words, 𝑋1 and 𝑋3 are associated through their (shared) common cause. We use the word "usually" throughout this paragraph because there exist pathological cases where the conditional distributions 𝑃(𝑥2 |𝑥1 ) and 𝑃(𝑥3 |𝑥2 ) are misaligned in such a specific way that makes 𝑋1 and 𝑋3 not actually associated [see, e.g., 14, Section 2.2].

Figure 3.11: Fork (𝑋1 ← 𝑋2 → 𝑋3 ) with flow of association drawn as a dashed red arc.
[14]: Pearl et al. (2016), Causal inference in statistics: A primer
An intuitive graphical way of thinking about 𝑋1 and 𝑋3 being associated
in chains and forks is to visualize the flow of association. We visualize
this with a dashed red line in Figure 3.10 and Figure 3.11. In the chain
graph (Figure 3.10), association flows from 𝑋1 to 𝑋3 along the path 𝑋1 →
𝑋2 → 𝑋3 . Symmetrically, association flows from 𝑋3 to 𝑋1 along that same
path, just running opposite the arrows. In the fork graph (Figure 3.11),
association flows from 𝑋1 to 𝑋3 along the path 𝑋1 ← 𝑋2 → 𝑋3 . And
similarly, we can think of association flowing from 𝑋3 to 𝑋1 along that
same path, just as was the case with chains. In general, the flow of
association is symmetric.
Chains and forks also share the same set of independencies. When we condition on 𝑋2 in both graphs, it blocks the flow of association from 𝑋1 to 𝑋3 . This is because of the local Markov assumption; each variable only locally depends on its parents. So when we condition on 𝑋2 (𝑋3 's parent in both graphs), 𝑋3 becomes independent of 𝑋1 (and vice versa).

We will refer to this independence as an instance of a blocked path. We illustrate these blocked paths in Figure 3.12 and Figure 3.13. Conditioning blocks the flow of association in chains and forks. Without conditioning, association is free to flow in chains and forks; we will refer to this as an unblocked path. However, the situation is completely different with immoralities, as we will see in the next section.

Figure 3.12: Chain (𝑋1 → 𝑋2 → 𝑋3 ) with association blocked by conditioning on 𝑋2 .
That's all nice intuition, but what about the proof? We can prove that 𝑋1 ⊥⊥ 𝑋3 | 𝑋2 using just the local Markov assumption. We will do this by showing that 𝑃(𝑥1 , 𝑥3 | 𝑥2 ) = 𝑃(𝑥1 | 𝑥2 ) 𝑃(𝑥3 | 𝑥2 ). We'll show the proof for chain graphs. It is usually useful to start with the Bayesian network factorization. For chains, we can factorize 𝑃(𝑥1 , 𝑥2 , 𝑥3 ) as follows:

𝑃(𝑥1 , 𝑥2 , 𝑥3 ) = 𝑃(𝑥1 ) 𝑃(𝑥2 |𝑥1 ) 𝑃(𝑥3 |𝑥2 )    (3.6)

Figure 3.13: Fork (𝑋1 ← 𝑋2 → 𝑋3 ) with association blocked by conditioning on 𝑋2 .

Bayes' rule tells us that 𝑃(𝑥1 , 𝑥3 | 𝑥2 ) = 𝑃(𝑥1 , 𝑥2 , 𝑥3 ) / 𝑃(𝑥2 ), so we have

𝑃(𝑥1 , 𝑥3 | 𝑥2 ) = 𝑃(𝑥1 ) 𝑃(𝑥2 |𝑥1 ) 𝑃(𝑥3 |𝑥2 ) / 𝑃(𝑥2 )    (3.7)

Since we're looking to end up with 𝑃(𝑥1 | 𝑥2 ) 𝑃(𝑥3 | 𝑥2 ) and we already have 𝑃(𝑥3 |𝑥2 ), we must turn the rest into 𝑃(𝑥1 | 𝑥2 ). We can do this by another application of Bayes' rule:

𝑃(𝑥1 , 𝑥3 | 𝑥2 ) = (𝑃(𝑥1 , 𝑥2 ) / 𝑃(𝑥2 )) 𝑃(𝑥3 |𝑥2 )    (3.8)
               = 𝑃(𝑥1 |𝑥2 ) 𝑃(𝑥3 |𝑥2 )    (3.9)

With that, we've shown that 𝑋1 ⊥⊥ 𝑋3 | 𝑋2 . Try it yourself; prove the analog in forks.7
7: Active reading exercise: prove that 𝑋1 ⊥⊥ 𝑋3 | 𝑋2 for forks (Figure 3.13).
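As a quick empirical sanity check (a sketch, not from the book's code), we can simulate a linear-Gaussian chain 𝑋1 → 𝑋2 → 𝑋3 and see that 𝑋1 and 𝑋3 are correlated marginally but approximately uncorrelated once we restrict to a thin slice of 𝑋2 values, which approximates conditioning on 𝑋2 :

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Chain X1 -> X2 -> X3 with additive Gaussian noise
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(size=n)
x3 = x2 + rng.normal(size=n)

# Marginally, X1 and X3 are associated (association flows along the chain)
print(np.corrcoef(x1, x3)[0, 1])              # clearly nonzero

# Conditioning on X2 (approximated by a thin slice of its values) blocks the path
mask = np.abs(x2) < 0.05
print(np.corrcoef(x1[mask], x3[mask])[0, 1])  # approximately zero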
Flow of Causation The flow of association is symmetric, whereas the
flow of causation is not. Under the causal edges assumption (Assump-
tion 3.3), causation only flows in a single direction. Causation only flows
along directed paths. Association flows along any path that does not
contain an immorality.

3.6 Colliders and their Descendants

Recall from Section 3.1 that we have an immorality when we have a child
whose two parents do not have an edge connecting them (Figure 3.14).
And in this graph structure, the child is known as a bastard. No, just
kidding; it’s called a collider.
In contrast to chains and forks, in an immorality, 𝑋1 ⊥⊥ 𝑋3 . Look at the graph structure and think about it a bit. Why would 𝑋1 and 𝑋3 be associated? One isn't the descendant of the other like in chains, and they don't share a common cause like in forks. Rather, we can think of 𝑋1 and 𝑋3 simply as unrelated events that happen, which happen to both contribute to some common effect (𝑋2 ). To show this, we apply the Bayesian network factorization and marginalize out 𝑥2 :

𝑃(𝑥1 , 𝑥3 ) = ∑𝑥2 𝑃(𝑥1 , 𝑥2 , 𝑥3 )    (3.10)
           = ∑𝑥2 𝑃(𝑥1 ) 𝑃(𝑥3 ) 𝑃(𝑥2 | 𝑥1 , 𝑥3 )    (3.11)
           = 𝑃(𝑥1 ) 𝑃(𝑥3 ) ∑𝑥2 𝑃(𝑥2 | 𝑥1 , 𝑥3 )    (3.12)
           = 𝑃(𝑥1 ) 𝑃(𝑥3 )    (3.13)

Figure 3.14: Immorality (𝑋1 → 𝑋2 ← 𝑋3 ) with association blocked by a collider.

We illustrate the independence of 𝑋1 and 𝑋3 in Figure 3.14 by showing


that the association that we could have imagined as flowing along the
path 𝑋1 → 𝑋2 ← 𝑋3 is actually blocked at 𝑋2 . Because we have a collider
on the path connecting 𝑋1 and 𝑋3 , association does not flow through
that path. This is another example of a blocked path, but this time the path
is not blocked by conditioning; the path is blocked by a collider.
Figure 3.15: Immorality (𝑋1 → 𝑋2 ← 𝑋3 ) with association unblocked by conditioning on the collider.

Good Looking Men are Jerks Oddly enough, when we condition on the collider 𝑋2 , its parents 𝑋1 and 𝑋3 become dependent (depicted in Figure 3.15). An example is the easiest way to see why this is the case. Imagine that you're out dating men, and you notice that most of the nice men you meet are not very good looking, and most of the good looking men you meet are jerks. It seems that you have to choose between looks and kindness. In other words, it seems like kindness and looks are negatively associated. However, what if I also told you that there is an
important third variable here: availability (whether men are already in
a relationship or not)? And what if I told you that a man's availability is


largely determined by their looks and kindness; if they are both good
looking and kind, then they are in a relationship. The available men are
the remaining ones, the ones who are either not good looking or not
kind. You see an association between looks and kindness because you’ve
conditioned on a collider (availability). You’re only looking at men who
are not in a relationship. You can see the causal structure of this example
by taking Figure 3.15 and replacing 𝑋1 with “looks,” 𝑋3 with “kindness,”
and 𝑋2 with “availability.”
The above example naturally suggests that, when dating men, maybe
you should consider not conditioning on 𝑋2 = “not in a relationship”
and, instead, condition on 𝑋2 = “in a relationship.” However, you could
run into other variables 𝑋4 that introduce new immoralities there. Such
complexities are outside the scope of this book.
Active reading exercise: Come up with your own example of an immorality and how conditioning on the collider induces association between its parents. Hint: think of rare events for 𝑋1 and 𝑋3 where, if either of them happens, some outcome 𝑋2 will happen.

Returning to inside the scope of this book, we have that conditioning on a collider can turn a blocked path into an unblocked path. The parents 𝑋1 and 𝑋3 are not associated in the general population, but when we condition on their shared child 𝑋2 taking on a specific value, they become associated. Conditioning on the collider 𝑋2 allows association to flow along the path 𝑋1 → 𝑋2 ← 𝑋3 , despite the fact that it does not when we don't condition on 𝑋2 . We illustrate this in the move from Figure 3.14 to Figure 3.15.
We can also illustrate this with a scatter plot. In TODO, we plot the
whole population, with kindness on the x-axis and looks on the y-axis.
As you can see, the variables are not associated in the general population.
However, if we remove the ones who are already in a relationship (top
triangle), we are left with a clear negative association. This phenomenon
is known as Berkson's paradox. The fact that we see this negative association simply because we are selecting a biased subset of the general population to look at is why this is sometimes referred to as selection bias [see, e.g., 7, Chapter 8].
[7]: Hernán and Robins (2020), Causal Inference: What If
Numerical Example All of the above has been to give you intuition
about why conditioning on a collider induces association between its
parents, but we have yet to give a concrete numerical example of this.
We will give a simple one here. Consider the following data generating
process (DGP), where 𝑋1 and 𝑋3 are drawn independently from standard
normal distributions and then used to compute 𝑋2 :

𝑋1 ∼ 𝑁(0 , 1) , 𝑋3 ∼ 𝑁(0, 1) (3.14)


𝑋2 = 𝑋1 + 𝑋3 (3.15)

We’ve already stated that 𝑋1 and 𝑋3 are independent, but to juxtapose


the two calculations, let’s compute their covariance:

Cov(𝑋1 , 𝑋3 ) = 𝔼[(𝑋1 − 𝔼[𝑋1 ])(𝑋3 − 𝔼[𝑋3 ])]


= 𝔼[𝑋1 𝑋3 ] (zero mean)
= 𝔼[𝑋1 ]𝔼[𝑋3 ] (independent)
=0
Now, let’s compute their covariance, conditional on 𝑋2 :

Cov(𝑋1 , 𝑋3 | 𝑋2 = 𝑥) = 𝔼[𝑋1 𝑋3 | 𝑋2 = 𝑥] (3.16)


= 𝔼[𝑋1 (𝑥 − 𝑋1 )] (3.17)
= 𝑥 𝔼[𝑋1 ] − 𝔼[𝑋1²] (3.18)
= −1 (3.19)

Crucially, in Equation 3.17, we used Equation 3.15 to plug in for 𝑋3 in


terms of 𝑋1 and 𝑋2 (conditioned to 𝑥 ). This led to a second-order term,
which led to the calculation giving a nonzero number, which means 𝑋1
and 𝑋3 are associated, conditional on 𝑋2 .
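We can verify this with a quick simulation (a sketch, not from the book's code), using the data generating process in Equations 3.14 and 3.15 and approximating conditioning on 𝑋2 by restricting to a thin slice of its values; within the slice, 𝑋3 is forced to be close to 𝑥 − 𝑋1 , so the two become strongly negatively correlated:

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# DGP from Equations 3.14 and 3.15
x1 = rng.normal(size=n)
x3 = rng.normal(size=n)
x2 = x1 + x3

# Marginally, X1 and X3 are (approximately) uncorrelated
print(np.corrcoef(x1, x3)[0, 1])              # roughly 0

# Condition on the collider X2 by restricting to a thin slice around X2 = 0
mask = np.abs(x2) < 0.05
print(np.corrcoef(x1[mask], x3[mask])[0, 1])  # close to -1: association induced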
Descendants of Colliders Conditioning on descendants of a collider also induces association between the parents of the collider. The intuition is that if we learn something about a collider's descendant, we usually also learn something about the collider itself because there is a direct causal path from the collider to its descendants, and we know that nodes in a chain are usually associated (see Section 3.5), assuming minimality (Assumption 3.2). In other words, a descendant of a collider can be thought of as a proxy for that collider, so conditioning on the descendant is similar to conditioning on the collider itself.
Active reading exercise: We have provided several techniques for how to think about colliders: high-level examples, numerical examples, and abstract reasoning. Use at least one of them to convince yourself that conditioning on a descendant of a collider can induce association between the collider's parents.

3.7 d-separation

Before we define d-separation, we’ll codify what we mean by the con-


cept of a “blocked path,” which we’ve been discussing in the previous
sections:

Definition 3.3 (blocked path) A path between nodes 𝑋 and 𝑌 is blocked


by a (potentially empty) conditioning set 𝑍 if either of the following hold:
1. Along the path, there is a chain · · · → 𝑊 → . . . or a fork
· · · ← 𝑊 → . . ., where 𝑊 is conditioned on (𝑊 ∈ 𝑍 ).
2. There is a collider 𝑊 on the path that is not conditioned on (𝑊 ∉ 𝑍 ) and none of its descendants are conditioned on (de(𝑊) ⊈ 𝑍 ).

Then, an unblocked path is simply the complement; an unblocked path is a


path that is not blocked. The graphical intuition to have in mind is that
association flows along unblocked paths, and association does not flow
along blocked paths. If you don’t have this intuition in mind, then it is
probably worth it to reread the previous two sections, with the goal of
gaining this intuition. Now, we are ready to introduce a very important
concept: d-separation.

Definition 3.4 (d-separation) Two (sets of) nodes 𝑋 and 𝑌 are d-separated
by a set of nodes 𝑍 if all of the paths between (any node in) 𝑋 and (any node
in) 𝑌 are blocked by 𝑍 [15].
[15]: Pearl (1988), Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference
If all the paths between two nodes 𝑋 and 𝑌 are blocked, then we say that
𝑋 and 𝑌 are d-separated. Similarly, if there exists at least one path between
𝑋 and 𝑌 that is unblocked, then we say that 𝑋 and 𝑌 are d-connected.
As we will see in Theorem 3.1, d-separation is such an important concept


because it implies conditional independence. We will use the notation
𝑋 ⊥⊥_𝐺 𝑌 | 𝑍 to denote that 𝑋 and 𝑌 are d-separated in the graph 𝐺 when conditioning on 𝑍 . Similarly, we will use the notation 𝑋 ⊥⊥_𝑃 𝑌 | 𝑍 to denote that 𝑋 and 𝑌 are independent in the distribution 𝑃 when conditioning on 𝑍 .

Theorem 3.1 Given that 𝑃 is Markov with respect to 𝐺 (satisfies the local
Markov assumption, Assumption 3.1), if 𝑋 and 𝑌 are d-separated in 𝐺
conditioned on 𝑍 , then 𝑋 and 𝑌 are independent in 𝑃 conditioned on 𝑍 . We
can write this succinctly as follows:

𝑋 ⊥⊥_𝐺 𝑌 | 𝑍 =⇒ 𝑋 ⊥⊥_𝑃 𝑌 | 𝑍    (3.20)

Because this is so important, we will give Equation 3.20 a name: the global
Markov assumption. Theorem 3.1 tells us that the local Markov assumption
implies the global Markov assumption.
Markov assumption Just as we built up the intuition that suggested that the
local Markov assumption (Assumption 3.1) implies the Bayesian network
factorization (Definition 3.1) and alerted you to the fact that the Bayesian
network factorization also implies the local Markov assumption (the
two are equivalent), it turns out that the global Markov assumption also
implies the local Markov assumption. In other words, the local Markov
assumption, global Markov assumption, and the Bayesian network fac-
torization are all equivalent [see, e.g., 12, Chapter 3]. Therefore, we will use the slightly shortened phrase Markov assumption to refer to these concepts as a group, or we will simply write "𝑃 is Markov with respect to 𝐺 " to convey the same meaning.
[12]: Koller and Friedman (2009), Probabilistic Graphical Models: Principles and Techniques
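As an illustrative sketch (not from the book's code, and assuming a networkx version that provides nx.d_separated; newer releases rename it to nx.is_d_separator), we can check d-separation in the chain and immorality building blocks:

import networkx as nx

# Chain: X1 -> X2 -> X3;  Immorality: X1 -> X2 <- X3
chain = nx.DiGraph([('X1', 'X2'), ('X2', 'X3')])
immorality = nx.DiGraph([('X1', 'X2'), ('X3', 'X2')])

# In the chain, X1 and X3 are d-connected marginally and d-separated given X2
print(nx.d_separated(chain, {'X1'}, {'X3'}, set()))       # False
print(nx.d_separated(chain, {'X1'}, {'X3'}, {'X2'}))      # True

# In the immorality, the collider X2 blocks the path until we condition on it
print(nx.d_separated(immorality, {'X1'}, {'X3'}, set()))  # True
print(nx.d_separated(immorality, {'X1'}, {'X3'}, {'X2'})) # False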

3.8 Flow of Association and Causation

Now that we have covered the necessary preliminaries (chains, forks,


colliders, and d-separation), it is worth emphasizing how association and
causation flow in directed graphs. Association flows along all unblocked
paths. In causal graphs, causation flows along directed paths. Recall from
Section 1.3.2 that not only is association not causation, but causation is a
sub-category of association. That’s why association and causation both
flow along directed paths.
Regular Bayesian networks are purely statistical models, so we can only
talk about the flow of association in Bayesian networks. Association still
flows in exactly the same way in Bayesian networks as it does in causal
graphs, though. In both, association flows along chains and forks, unless
a node is conditioned on. And in both, a collider blocks the flow of
association, unless it is conditioned on. Combining these building blocks,
we get how association flows in general DAGs. We can tell if two nodes
are not associated (no association flows between them) by whether or
not they are d-separated.
Causal graphs are special in that we additionally assume that the edges
have causal meaning (causal edges assumption, Assumption 3.3). This
assumption is what introduces causality into our models, and it makes
one type of path take on a whole new meaning: directed paths. This
assumption endows directed paths with the unique role of carrying
causation along them. Additionally, this assumption is asymmetric; “𝑋
is a cause of 𝑌 ” is not the same as saying “𝑌 is a cause of 𝑋 .” This means
that there is an important difference between association and causation:
association is symmetric, whereas causation is asymmetric.
Given that we have tools to measure association, how can we isolate
causation? In other words, how can we ensure that the association we
measure is causation, say, for measuring the causal effect of 𝑋 on 𝑌 ?
Well, we can do that by ensuring that there is no non-causal association
flowing between 𝑋 and 𝑌 . This is true if 𝑋 and 𝑌 are d-separated in
the augmented graph where we remove outgoing edges from 𝑋 . This
is because all of 𝑋 's causal effect on 𝑌 would flow through its outgoing edges; once those are removed, the only association that remains
is purely non-causal association.
In Figure 3.16, we illustrate what each of the important assumptions
gives us in terms of interpreting this flow of association. First, we have
the (local/global) Markov assumption (Assumption 3.1). As we saw
in Section 3.7, this assumption allows us to know which nodes are
unassociated. In other words, the Markov assumption tells along which
paths the association does not flow. When we slightly strengthen the
Markov assumption to the minimality assumption (Assumption 3.2),
we get which paths association does flow along (except in intransitive
edges cases). When we further add in the causal edges assumption
(Assumption 3.3), we get that causation flows along directed paths.
Therefore, the following two8 assumptions are essential for graphical causal models:
1. Markov Assumption (Assumption 3.1)
2. Causal Edges Assumption (Assumption 3.3)
8: Recall that the first part of the minimality assumption is just the local Markov assumption and that the second part is contained in the causal edges assumption.

Markov Assumption → Statistical Independencies;  Minimality Assumption → Statistical Dependencies;  Causal Edges Assumption → Causal Dependencies

Figure 3.16: A flowchart that illustrates what kind of claims we can make about our data as we add each additional important assumption.
4 Causal Models

4.1 The do-operator and Interventional Distributions
4.2 The Main Assumption: Modularity
4.3 Truncated Factorization
    Example Application and Revisiting "Association is Not Causation"
4.4 The Backdoor Adjustment
    Relation to Potential Outcomes
4.5 Structural Causal Models (SCMs)
    Structural Equations
    Interventions
    Collider Bias and Why to Not Condition on Descendants of Treatment
4.6 Example Applications of the Backdoor Adjustment
    Association vs. Causation in a Toy Example
    A Complete Example with Estimation
4.7 Assumptions Revisited

Causal models are essential for identification of causal quantities. When we presented the Identification-Estimation Flowchart (Figure 2.5) back in Section 2.4, we described identification as the process of moving from a causal estimand to a statistical estimand. However, to do that, we must have a causal model. We depict this fuller version of the Identification-Estimation Flowchart in Figure 4.1.

Causal Estimand (+ Causal Model) →(Identification)→ Statistical Estimand (+ Data) →(Estimation)→ Estimate

Figure 4.1: The Identification-Estimation Flowchart – a flowchart that illustrates the process of moving from a target causal estimand to a corresponding estimate, through identification and estimation. In contrast to Figure 2.5, this version is augmented with a causal model and data.

The previous chapter gives graphical intuition for causal models, but it doesn't explain how to identify causal quantities and formalize causal models. We will do that in this chapter.

4.1 The do-operator and Interventional Distributions

The first thing that we will introduce is a mathematical operator for


intervention. In the regular notation for probability, we have conditioning,
but that isn’t the same as intervening. Conditioning on 𝑇 = 𝑡 just means
that we are restricting our focus to the subset of the population to those
who received treatment 𝑡 . In contrast, an intervention would be to take
the whole population and give everyone treatment 𝑡 . We illustrate this in
Figure 4.2. We will denote intervention with the do-operator: do(𝑇 = 𝑡).
This is the notation commonly used in graphical causal models, and it has
equivalents in potential outcomes notation. For example, we can write
the distribution of the potential outcome 𝑌(𝑡) that we saw in Chapter 2
as follows:

𝑃(𝑌(𝑡) = 𝑦) ≜ 𝑃(𝑌 = 𝑦 | do(𝑇 = 𝑡)) ≜ 𝑃(𝑌 = 𝑦 | do(𝑡))    (4.1)

Note that we shorten do(𝑇 = 𝑡) to just do(𝑡) in the last option in Equation
4.1. We will use this shorthand throughout the book. We can similarly
write the ATE (average treatment effect) when the treatment is binary as
follows:
𝔼[𝑌 | do(𝑇 = 1)] − 𝔼[𝑌 | do(𝑇 = 0)] (4.2)
Figure 4.2: Illustration of the difference between conditioning and intervening. Conditioning on 𝑇 = 1 (or 𝑇 = 0) restricts attention to the subpopulation that happened to receive that treatment, whereas intervening with do(𝑇 = 1) (or do(𝑇 = 0)) gives the treatment to the whole population.

We will often work with full distributions like 𝑃(𝑌 | do(𝑡)), rather than
their means, as this is more general; if we characterize 𝑃(𝑌 | do(𝑡)), then
we’ve characterized 𝔼[𝑌 | do(𝑡)]. We will commonly refer to 𝑃(𝑌 | do(𝑇 =
𝑡)) and other expressions with the do-operator in them as interventional
distributions.
Interventional distributions such as 𝑃(𝑌 | do(𝑇 = 𝑡)) are conceptually
quite different from the observational distribution 𝑃(𝑌). Observational
distributions such as 𝑃(𝑌) or 𝑃(𝑌, 𝑇, 𝑋) do not have the do-operator in
them. Because they don’t have the do-operator, we can observe data from
them without needing to carry out any experiment. This is why we call
data from 𝑃(𝑌, 𝑇, 𝑋) observational data. If we can reduce an expression
𝑄 with do in it (an interventional expression) to one without do in it (an
observational expression), then 𝑄 is said to be identifiable. An expression
with a do in it is fundamentally different from an expression without a
do in it, despite the fact that in do-notation, do appears after a regular
conditioning bar. As we discussed in Section 2.4, we will refer to an
estimand as a causal estimand when it contains a do-operator, and we
refer to an estimand as a statistical estimand when it doesn’t contain a
do-operator.
Whenever do(𝑡) appears after the conditioning bar, it means that ev-
erything in that expression is in the post-intervention world where the
intervention do(𝑡) occurs. For example, 𝔼[𝑌 | do(𝑡), 𝑍 = 𝑧] refers to the
expected outcome in the subpopulation where 𝑍 = 𝑧 after the whole
subpopulation has taken treatment 𝑡 . In contrast, 𝔼[𝑌 | 𝑍 = 𝑧] simply
refers to the expected value in the (pre-intervention) population where
individuals take whatever treatment they would normally take (𝑇 ). This
distinction will become important when we get to counterfactuals in
Chapter 8.
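To make the difference between conditioning and intervening concrete, here is a small simulation sketch (not from the book; the structural equations and coefficients are made up) with a single confounder 𝑋 , where conditioning means subsetting the observed data and intervening means setting 𝑇 for everyone while leaving everything else alone:

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Made-up confounded data generating process: X -> T, X -> Y, T -> Y
# The true effect of T on Y is 1.0
x = rng.normal(size=n)
t = (x + rng.normal(size=n) > 0).astype(float)

def outcome(t_values):
    return 1.0 * t_values + 2.0 * x + rng.normal(size=n)

y = outcome(t)

# Conditioning: compare the subpopulations that happened to get T = 1 vs. T = 0
print(y[t == 1].mean() - y[t == 0].mean())  # confounded; well above 1.0

# Intervening: set T for the whole population, keeping X and the noise mechanism
y_do1 = outcome(np.ones(n))
y_do0 = outcome(np.zeros(n))
print(y_do1.mean() - y_do0.mean())          # approximately 1.0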
4.2 The Main Assumption: Modularity

Before we can describe a very important assumption, we must specify


what a causal mechanism is. There are a few different ways to think about
causal mechanisms. In this section, we will refer to the causal mechanism
that generates 𝑋𝑖 as the conditional distribution of 𝑋𝑖 given all of its
causes: 𝑃(𝑥 𝑖 | pa𝑖 ). As we show graphically in Figure 4.3, the causal
mechanism that generates 𝑋𝑖 is all of 𝑋𝑖 ’s parents and their edges that go
into 𝑋𝑖 . We will give a slightly more specific description of what a causal
mechanism is in Section 4.5.1, but these suffice for now.
In order to get many causal identification results, the main assumption
we will make is that interventions are local. More specifically, we will
assume that intervening on a variable 𝑋𝑖 only changes the causal mechanism for 𝑋𝑖 ; it does not change the causal mechanisms that generate any other variables. In this sense, the causal mechanisms are modular. Other names that are used for the modularity property are independent mechanisms, autonomy, and invariance. We will now state this assumption more formally.

Figure 4.3: A causal graph with the causal mechanism that generates 𝑋𝑖 depicted inside an ellipse.
Assumption 4.1 (Modularity / Independent Mechanisms / Invariance)
If we intervene on a set of nodes 𝑆 ⊆ [𝑛],1 setting them to constants, then for all 𝑖 , we have the following:
1: We use [𝑛] to refer to the set {1, 2, . . . , 𝑛}.
1. If 𝑖 ∉ 𝑆 , then 𝑃(𝑥 𝑖 | pa𝑖 ) remains unchanged.
2. If 𝑖 ∈ 𝑆 , then 𝑃(𝑥 𝑖 | pa𝑖 ) = 1 if 𝑥 𝑖 is the value that 𝑋𝑖 was set to by
the intervention; otherwise, 𝑝(𝑥 𝑖 | pa𝑖 ) = 0.

In the second part of the above assumption, we could have alternatively


said 𝑃(𝑥 𝑖 | pa𝑖 ) = 1 if 𝑥 𝑖 is consistent with the intervention2 and 0 otherwise. More explicitly, we will say (in the future) that if 𝑖 ∈ 𝑆 , a value 𝑥 𝑖 is consistent with the intervention if 𝑥 𝑖 equals the value that 𝑋𝑖 was set to in the intervention.
2: Yes, the word "consistent" is extremely overloaded.
The modularity assumption is what allows us to encode many different
interventional distributions all in a single graph. For example, it could be
the case that 𝑃(𝑌), 𝑃(𝑌 | do(𝑇 = 𝑡)), 𝑃(𝑌 | do(𝑇 = 𝑡′)), and 𝑃(𝑌 | do(𝑇2 = 𝑡2 ))
are all completely different distributions that share almost nothing. If
this were the case, then each of these distributions would need their own
graph. However, by assuming modularity, we can encode them all with
the same graph that we use to encode the joint 𝑃(𝑌, 𝑇, 𝑇2 , . . . ), and we
can know that all of the factors (except ones that are intervened on) are
shared across these graphs.
The causal graph for interventional distributions is simply the same
graph that was used for the observational joint distribution, but with
all of the edges to the intervened node(s) removed. This is because the
probability for the intervened factor has been set to 1, so we can just
ignore that factor (this is the focus of the next section). Another way to
see that the intervened node has no causal parents is that the intervened
node is set to a constant value, so it no longer depends on any of the
variables it depends on in the observational setting (its parents). The
graph with edges removed is known as the manipulated graph.
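As a small sketch of this idea in code (not from the book, and the edge set below is hypothetical rather than exactly that of Figure 4.4), constructing the manipulated graph just means deleting the incoming edges of each intervened node; here using networkx:

import networkx as nx

def manipulated_graph(G, intervened_nodes):
    # Remove all edges going into each intervened node (do-intervention as edge deletion)
    G_do = G.copy()
    for node in intervened_nodes:
        G_do.remove_edges_from(list(G_do.in_edges(node)))
    return G_do

# Hypothetical causal graph for illustration
G = nx.DiGraph([('T3', 'T2'), ('T2', 'T'), ('T', 'Y'), ('T3', 'Y')])
print(manipulated_graph(G, ['T']).edges())   # edge T2 -> T is gone
print(manipulated_graph(G, ['T2']).edges())  # edge T3 -> T2 is gone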
For example, consider the causal graph for an observational distribution


in Figure 4.4a. Both 𝑃(𝑌 | do(𝑇 = 𝑡)) and 𝑃(𝑌 | do(𝑇 = 𝑡′)) correspond to the causal graph in Figure 4.4b, where the incoming edge to 𝑇 has been removed. Similarly, 𝑃(𝑌 | do(𝑇2 = 𝑡2 )) corresponds to the graph in Figure 4.4c, where the incoming edges to 𝑇2 have been removed. Although it is not expressed in the graphs (which only express conditional independencies and causal relations), under the modularity assumption, 𝑃(𝑌), 𝑃(𝑌 | do(𝑇 = 𝑡′)), and 𝑃(𝑌 | do(𝑇2 = 𝑡2 )) all share the exact same factors (that are not intervened on).

Figure 4.4: Intervention as edge deletion in causal graphs. (a) Causal graph for observational distribution; (b) causal graph after intervention on 𝑇 (interventional distribution); (c) causal graph after intervention on 𝑇2 (interventional distribution). Each panel contains the nodes 𝑇3 , 𝑇2 , 𝑇 , and 𝑌 .

What would it mean for the modularity assumption to be violated?


Imagine that you intervene on 𝑋𝑖 , and this causes the mechanism that
generates a different node 𝑋 𝑗 to change; an intervention on 𝑋𝑖 changes
𝑃(𝑥 𝑗 | pa 𝑗 ), where 𝑗 ≠ 𝑖 . In other words, the intervention is not local to
the node you intervene on; causal mechanisms are not invariant to when
you change other causal mechanisms; the causal mechanisms are not
modular.
This assumption is so important that Judea Pearl refers to a closely
related version (which we will see in Section 4.5.2) as The Law of
Counterfactuals (and Interventions), one of two key principles from
which all other causal results follow.3 Incidentally, taking the modularity assumption (Assumption 4.1) and the Markov assumption (the other key principle) together gives us causal Bayesian networks. We'll now move to one of the important results that follow from these assumptions.
3: The other key principle is the global Markov assumption (Theorem 3.1), which is the assumption that d-separation implies conditional independence.

4.3 Truncated Factorization

Recall the Bayesian network factorization (Definition 3.1), which tells us


that if 𝑃 is Markov with respect to a graph 𝐺 , then 𝑃 factorizes as follows:
𝑃(𝑥1 , . . . , 𝑥 𝑛 ) = ∏𝑖 𝑃(𝑥 𝑖 | pa𝑖 )   (4.3)

where pa𝑖 denotes the parents of 𝑋𝑖 in 𝐺 . Now, if we intervene on some


set of nodes 𝑆 and assume modularity (Assumption 4.1), then all of the
factors should remain the same except the factors for 𝑋𝑖 ∈ 𝑆 ; those factors

should change to 1 (for values consistent with the intervention) because


those variables have been intervened on. This is how we get the truncated
factorization.

Proposition 4.1 (Truncated Factorization) We assume that 𝑃 and 𝐺 satisfy the Markov assumption and modularity. Given a set of intervention nodes 𝑆, if 𝑥 is consistent with the intervention, then

𝑃(𝑥1 , . . . , 𝑥 𝑛 | do(𝑆 = 𝑠)) = ∏𝑖∉𝑆 𝑃(𝑥 𝑖 | pa𝑖 ) .   (4.4)

Otherwise, 𝑃(𝑥1 , . . . , 𝑥 𝑛 | do(𝑆 = 𝑠)) = 0.

The key thing that changed when we moved from the regular factorization
in Equation 4.3 to the truncated factorization in Equation 4.4 is that the
latter’s product is only over 𝑖 ∉ 𝑆 rather than all 𝑖 . In other words, the
factors for 𝑖 ∈ 𝑆 have been truncated.

4.3.1 Example Application and Revisiting “Association is Not Causation”

To see the power that the truncated factorization gives us, let’s apply it
to identify the causal effect of treatment on outcome in a simple graph.
Specifically, we will identify the causal quantity 𝑃(𝑦 | do(𝑡)). In this
example, the distribution 𝑃 is Markov with respect to the graph in Figure 4.5. The Bayesian network factorization (from the Markov assumption) gives us the following:

𝑃(𝑦, 𝑡, 𝑥) = 𝑃(𝑥) 𝑃(𝑡 | 𝑥) 𝑃(𝑦 | 𝑡, 𝑥)   (4.5)

Figure 4.5: Simple causal structure where 𝑋 confounds the effect of 𝑇 on 𝑌 and where 𝑋 is the only confounder.

When we intervene on the treatment, the truncated factorization (from adding the modularity assumption) gives us the following:

𝑃(𝑦, 𝑥 | do(𝑡)) = 𝑃(𝑥) 𝑃(𝑦 | 𝑡, 𝑥) (4.6)

Then, we simply need to marginalize out 𝑥 to get what we want:


𝑃(𝑦 | do(𝑡)) = Σ𝑥 𝑃(𝑦 | 𝑡, 𝑥) 𝑃(𝑥)   (4.7)

We assumed 𝑋 is discrete when we summed over its values, but we can


simply replace the sum with an integral if 𝑋 is continuous. Throughout
this book, that will be the case, so we usually won’t point it out.
If we massage Equation 4.7 a bit, we can clearly see how association is not
causation. The purely associational counterpart of 𝑃(𝑦 | do(𝑡)) is 𝑃(𝑦 | 𝑡).
If the 𝑃(𝑥) in Equation 4.7 were 𝑃(𝑥 | 𝑡), then we would actually recover
𝑃(𝑦 | 𝑡). We briefly show this:
Σ𝑥 𝑃(𝑦 | 𝑡, 𝑥) 𝑃(𝑥 | 𝑡) = Σ𝑥 𝑃(𝑦, 𝑥 | 𝑡)   (4.8)
                        = 𝑃(𝑦 | 𝑡)   (4.9)

This gives some concreteness to the difference between association


and causation. In this example (which is representative of a broader

phenomenon), the difference between 𝑃(𝑦 | do(𝑡)) and 𝑃(𝑦 | 𝑡) is the


difference between 𝑃(𝑥) and 𝑃(𝑥 | 𝑡).
To round this example out, say 𝑇 is a binary random variable, and we
want to compute the ATE. 𝑃(𝑦 | do(𝑇 = 1)) is the distribution for 𝑌(1), so
we can just take the expectation to get 𝔼[𝑌(1)]. Similarly, we can do the
same thing with 𝑌(0). Then, we can write the ATE as follows:
𝔼[𝑌(1) − 𝑌(0)] = Σ𝑦 𝑦 𝑃(𝑦 | do(𝑇 = 1)) − Σ𝑦 𝑦 𝑃(𝑦 | do(𝑇 = 0))   (4.10)

If we then plug in Equation 4.7 for 𝑃(𝑦 | do(𝑇 = 1)) and 𝑃(𝑦 | do(𝑇 = 0)),
we have a fully identified ATE. Given the simple graph in Figure 4.5, we
have shown how we can use the truncated factorization to identify causal
effects in Equations 4.5 to 4.7. We will now generalize this identification
process to a more general formula.
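As a sanity check on Equations 4.5 to 4.7, here is a minimal numeric sketch (not from the original text) with binary 𝑋, 𝑇, 𝑌 and made-up conditional probability tables. It builds the joint from the Bayesian network factorization and then compares the associational 𝑃(𝑦 | 𝑡) with the interventional 𝑃(𝑦 | do(𝑡)) from Equation 4.7.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discrete example with the structure of Figure 4.5: X -> T, X -> Y, T -> Y
p_x = np.array([0.7, 0.3])                              # P(x)
p_t_given_x = np.array([[0.8, 0.2],                     # P(t | x), rows indexed by x
                        [0.3, 0.7]])
p_y_given_tx = rng.dirichlet(np.ones(2), size=(2, 2))   # P(y | t, x), axes (t, x, y)

# Joint P(y, t, x) = P(x) P(t | x) P(y | t, x)   (Equation 4.5)
p_ytx = np.einsum('x,xt,txy->ytx', p_x, p_t_given_x, p_y_given_tx)

# Observational conditional P(y | t)
p_yt = p_ytx.sum(axis=2)                 # P(y, t)
p_y_given_t = p_yt / p_yt.sum(axis=0)    # normalize over y

# Interventional P(y | do(t)) = sum_x P(y | t, x) P(x)   (Equation 4.7)
p_y_do_t = np.einsum('txy,x->yt', p_y_given_tx, p_x)

print(p_y_given_t)   # association
print(p_y_do_t)      # causation; differs from the above when X confounds T and Y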

4.4 The Backdoor Adjustment

Recall from Chapter 3 that causal association flows from 𝑇 to 𝑌 along


directed paths and that non-causal association flows along any other
paths from 𝑇 to 𝑌 that aren’t blocked by either 1) a non-collider that
is conditioned on or 2) a collider that isn’t conditioned on. These non-
directed unblocked paths from 𝑇 to 𝑌 are known as backdoor paths because
they have an edge that goes in the “backdoor” of the 𝑇 node. And it turns
out that if we can block these paths by conditioning, we can identify
causal quantities like 𝑃(𝑌 | do(𝑡)).⁴

⁴ As we mentioned in Section 3.8, blocking all backdoor paths is equivalent to having d-separation in the graph where edges going out of 𝑇 are removed. This is because these are the only edges that causation flows along, so once they are removed, all that remains is non-causal association.

This is precisely what we did in the previous section. We blocked the backdoor path 𝑇 ← 𝑋 → 𝑌 in Figure 4.5 simply by conditioning on 𝑋 and marginalizing it out (Equation 4.7). In this section, we will generalize Equation 4.7 to arbitrary DAGs. But before we do that, let’s graphically consider why the quantity 𝑃(𝑦 | do(𝑡)) is purely causal.
As we discussed in Section 4.2, the graph for the interventional distribution 𝑃(𝑌 | do(𝑡)) is the same as the graph for the observational distribution 𝑃(𝑌, 𝑇, 𝑋), but with the incoming edges to 𝑇 removed. For example, if we take the graph from Figure 4.5 and intervene on 𝑇, then we get the manipulated graph in Figure 4.6. In this manipulated graph, there cannot be any backdoor paths because no edges are going into the backdoor of 𝑇. Therefore, all of the association that flows from 𝑇 to 𝑌 in the manipulated graph is purely causal.

Figure 4.6: Manipulated graph that results from intervening on 𝑇, when the original graph is the one in Figure 4.5.
With that digression aside, let’s prove that we can identify 𝑃(𝑦 | do(𝑡)).
We want to turn the causal estimand 𝑃(𝑦 | do(𝑡)) into a statistical estimand
(only relies on the observational distribution). We’ll start with assuming
we have a set of variables 𝑊 that satisfy the backdoor criterion:

Definition 4.1 (Backdoor Criterion) A set of variables 𝑊 satisfies the


backdoor criterion relative to 𝑇 and 𝑌 if the following are true:
1. 𝑊 blocks all backdoor paths from 𝑇 to 𝑌.
2. 𝑊 does not contain any descendants of 𝑇.

⁵ Active reading exercise: In a general DAG, which set of nodes related to 𝑇 will always be a sufficient adjustment set? Which set of nodes related to 𝑌 will always be a sufficient adjustment set?

Satisfying the backdoor criterion makes 𝑊 a sufficient adjustment set.5


We saw an example of 𝑋 as a sufficient adjustment set in Section 4.3.1.
Because there was only a single backdoor path in Section 4.3.1, a single
node (𝑋 ) was enough to block all backdoor paths, but, in general, there
can be multiple backdoor paths.
To introduce 𝑊 into the proof, we’ll use the usual trick of conditioning
on variables and marginalizing them out:
𝑃(𝑦 | do(𝑡)) = Σ𝑤 𝑃(𝑦 | do(𝑡), 𝑤) 𝑃(𝑤 | do(𝑡))   (4.11)

Given that 𝑊 satisfies the backdoor criterion, we can write the following:

Σ𝑤 𝑃(𝑦 | do(𝑡), 𝑤) 𝑃(𝑤 | do(𝑡)) = Σ𝑤 𝑃(𝑦 | 𝑡, 𝑤) 𝑃(𝑤 | do(𝑡))   (4.12)

This follows from the modularity assumption (Assumption 4.1). If 𝑊 is all


of the parents for 𝑌 (other than 𝑇 ), it should be clear that the modularity
assumption immediately implies 𝑃(𝑦 | do(𝑡), 𝑤) = 𝑃(𝑦 | 𝑡, 𝑤). If 𝑊 isn’t
the parents of 𝑌 but still blocks all backdoor paths another way, then this
equality is still true but requires using the graphical knowledge we built
up in Chapter 3.
In the manipulated graph (for 𝑃(𝑦 | do(𝑡), 𝑤)), all of the 𝑇 -𝑌 association
flows along the directed path(s) from 𝑇 to 𝑌 , since there cannot be
any backdoor paths because 𝑇 has no incoming edges. Similarly, in the
regular graph (for 𝑃(𝑦 | 𝑡, 𝑤)), all of the 𝑇 -𝑌 association flows along
the directed path(s) from 𝑇 to 𝑌 . This is because, even though there
exist backdoor paths, the association that would flow along them is
blocked by 𝑊 , leaving association to only flow along directed paths. In
both cases, association flows along the exact same directed paths, which
correspond to the exact same conditional distributions (by the modularity
assumption).
Although we’ve justified Equation 4.12, there is still a do in the expression:
𝑃(𝑤 | do(𝑡)). However, 𝑃(𝑤 | do(𝑡)) = 𝑃(𝑤). To see this, consider how 𝑇
might influence 𝑊 in the manipulated graph. It can’t be through any path that has an edge into 𝑇 because 𝑇 doesn’t have any incoming edges in the manipulated graph. It can’t be through any path that has an edge going out of 𝑇 because such a path would have to contain a collider that isn’t conditioned on. We know any such colliders are not conditioned on because we have assumed that 𝑊 does not contain descendants of 𝑇 (second part of the backdoor criterion).⁶ Therefore, we can write the final step:

Σ𝑤 𝑃(𝑦 | 𝑡, 𝑤) 𝑃(𝑤 | do(𝑡)) = Σ𝑤 𝑃(𝑦 | 𝑡, 𝑤) 𝑃(𝑤)   (4.13)

⁶ We will come back to what goes wrong if we condition on descendants of 𝑇 in Section 4.5.3, after we cover some important concepts that we need before we can fully explain that.

This is known as the backdoor adjustment.

Theorem 4.2 (Backdoor Adjustment) Given the modularity assumption (Assumption 4.1) and that 𝑊 satisfies the backdoor criterion (Definition 4.1), we can identify the causal effect of 𝑇 on 𝑌:

𝑃(𝑦 | do(𝑡)) = Σ𝑤 𝑃(𝑦 | 𝑡, 𝑤) 𝑃(𝑤)

Here’s a concise recap of the proof (Equations 4.11 to 4.13) without all of
the explanation/justification:

Proof.

𝑃(𝑦 | do(𝑡)) = Σ𝑤 𝑃(𝑦 | do(𝑡), 𝑤) 𝑃(𝑤 | do(𝑡))   (4.14)
            = Σ𝑤 𝑃(𝑦 | 𝑡, 𝑤) 𝑃(𝑤 | do(𝑡))   (4.15)
            = Σ𝑤 𝑃(𝑦 | 𝑡, 𝑤) 𝑃(𝑤)   (4.16)

4.4.1 Relation to Potential Outcomes

Hmm, the backdoor adjustment (Theorem 4.2) looks quite similar to


the adjustment formula (Theorem 2.1) that we saw back in the potential
outcomes chapter:

𝔼[𝑌(1) − 𝑌(0)] = 𝔼𝑊 [𝔼[𝑌 | 𝑇 = 1, 𝑊] − 𝔼[𝑌 | 𝑇 = 0, 𝑊]] (4.17)

We can derive this from the more general backdoor adjustment in a few
steps. First, we take an expectation over 𝑌 :
𝔼[𝑌 | do(𝑡)] = Σ𝑤 𝔼[𝑌 | 𝑡, 𝑤] 𝑃(𝑤)   (4.18)

Then, we notice that the sum over 𝑤 and 𝑃(𝑤) is an expectation (for discrete 𝑊, but just replace the sum with an integral if not):

𝔼[𝑌 | do(𝑡)] = 𝔼𝑊 𝔼[𝑌 | 𝑡, 𝑊] (4.19)

And finally, we look at the difference between 𝑇 = 1 and 𝑇 = 0:

𝔼[𝑌 | do(𝑇 = 1)] − 𝔼[𝑌 | do(𝑇 = 0)] = 𝔼𝑊 [𝔼[𝑌 | 𝑇 = 1, 𝑊] − 𝔼[𝑌 | 𝑇 = 0, 𝑊]]   (4.20)
Since the do-notation 𝔼[𝑌 | do(𝑡)] is just another notation for the potential
outcomes 𝔼[𝑌(𝑡)], we are done! If you remember, one of the main as-
sumptions we needed to get Equation 4.17 (Theorem 2.1) was conditional
exchangeability (Assumption 2.2), which we repeat below:

(𝑌(1), 𝑌(0)) ⊥⊥ 𝑇 | 𝑊   (4.21)

However, we had no way of knowing how to choose 𝑊 or knowing


that that 𝑊 actually gives us conditional exchangeability. Well, using
graphical causal models, we know how to choose a valid 𝑊 : we simply
choose 𝑊 so that it satisfies the backdoor criterion. Then, under the
assumptions encoded in the causal graph, conditional exchangeability
provably holds; the causal effect is provably identifiable.
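To connect this back to data, here is a minimal plug-in sketch (not from the original text) of 𝔼𝑊 [𝔼[𝑌 | 𝑇 = 1, 𝑊] − 𝔼[𝑌 | 𝑇 = 0, 𝑊]] on simulated data; the data-generating process and its coefficients are made up, with 𝑊 as the sufficient adjustment set.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Made-up simulation where W confounds the effect of T on Y; the true ATE is 2
n = 100_000
W = rng.binomial(1, 0.5, size=n)
T = rng.binomial(1, 0.2 + 0.6 * W)
Y = 2 * T + 3 * W + rng.normal(size=n)
df = pd.DataFrame({'W': W, 'T': T, 'Y': Y})

# Plug-in backdoor adjustment: E_W[ E[Y | T=1, W] - E[Y | T=0, W] ]
cond_means = df.groupby(['W', 'T'])['Y'].mean().unstack('T')
p_w = df['W'].value_counts(normalize=True)
ate_backdoor = ((cond_means[1] - cond_means[0]) * p_w).sum()

# Naive associational difference E[Y | T=1] - E[Y | T=0] (confounded)
ate_naive = df.loc[df['T'] == 1, 'Y'].mean() - df.loc[df['T'] == 0, 'Y'].mean()

print(ate_backdoor)  # close to 2
print(ate_naive)     # biased upward, because W is a common cause of T and Y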

4.5 Structural Causal Models (SCMs)

Graphical causal models such as causal Bayesian networks give us


powerful ways to encode statistical and causal assumptions, but we have
yet to explain exactly what an intervention is or exactly what a causal
mechanism is. Moving from causal Bayesian networks to full structural
causal models will give us this additional clarity along with the power to
compute counterfactuals.

4.5.1 Structural Equations

As Judea Pearl often says, the equals sign in mathematics does not convey
any causal information. Saying 𝐴 = 𝐵 is the same as saying 𝐵 = 𝐴.
Equality is symmetric. However, in order to talk about causation, we
must have something asymmetric. We need to be able to write that 𝐴
is a cause of 𝐵, meaning that changing 𝐴 results in changes in 𝐵, but
changing 𝐵 does not result in changes in 𝐴. This is what we get when we
write the following structural equation:

𝐵 := 𝑓 (𝐴) , (4.22)

where 𝑓 is some function that maps 𝐴 to 𝐵. While the usual “=” symbol
does not give us causal information, this new “:=” symbol does. This
is a major difference that we see when moving from statistical models
to causal models. Now, we have the asymmetry we need to describe
causal relations. However, the mapping between 𝐴 and 𝐵 is deterministic.
Ideally, we’d like to allow it to be probabilistic, which allows room for
some unknown causes of 𝐵 that factor into this mapping. Then, we can
write the following:
𝐵 := 𝑓 (𝐴, 𝑈) , (4.23)
where 𝑈 is some unobserved random variable. We depict this in Figure 4.7, where 𝑈 is drawn inside a dashed node to indicate that it is unobserved. The unobserved 𝑈 is analogous to the randomness that we would see by sampling units (individuals); it denotes all the relevant (noisy) background conditions that determine 𝐵. More concretely, there are analogs to every part of the potential outcome 𝑌𝑖 (𝑡): 𝐵 is the analog of 𝑌, 𝐴 = 𝑎 is the analog of 𝑇 = 𝑡, and 𝑈 is the analog of 𝑖.

Figure 4.7: Graph for simple structural equation. The dashed node 𝑈 means that 𝑈 is unobserved.
The functional form of 𝑓 does not need to be specified, and when
left unspecified, we are in the nonparametric regime because we aren’t
making any assumptions about parametric form. Although the mapping
is deterministic, because it takes a random variable 𝑈 (a “noise” or
“background conditions” variable) as input, it can represent any stochastic
mapping, so structural equations generalize the probabilistic factors
𝑃(𝑥 𝑖 | pa𝑖 ) that we’ve been using throughout this chapter. Therefore, all
the results that we’ve seen such as the truncated factorization and the
backdoor adjustment still hold when we introduce structural equations.
Cause and Causal Mechanism Revisited We have now come to the
more precise definitions of what a cause is (Definition 3.2) and what a
causal mechanism is (introduced in Section 4.2). A causal mechanism
that generates a variable is the structural equation that corresponds to
that variable. For example, the causal mechanism for 𝐵 is Equation 4.23.

Similarly, 𝑋 is a direct cause of 𝑌 if 𝑋 appears on the right-hand side of


the structural equation for 𝑌 . We say that 𝑋 is a cause of 𝑌 if 𝑋 is a direct
cause of any of the causes of 𝑌⁷ or if 𝑋 is a direct cause of 𝑌.

⁷ Trust me; the recursion ends. The base case was specified.
We only showed a single structural equation in Equation 4.23, but there
can be a large collection of structural equations in a single model, which
we will commonly label 𝑀 . For example, we write structural equations
for Figure 4.8 below:

𝑀:   𝐵 := 𝑓𝐵 (𝐴, 𝑈𝐵 )
      𝐶 := 𝑓𝐶 (𝐴, 𝐵, 𝑈𝐶 )   (4.24)
      𝐷 := 𝑓𝐷 (𝐴, 𝐶, 𝑈𝐷 )

Figure 4.8: Graph for the structural equations in Equation 4.24.

In causal graphs, the noise variables are often implicit, rather than explicitly drawn. The variables that we write structural equations for are known as endogenous variables. These are the variables whose causal mechanisms we are modeling – the variables that have parents in the causal graph. In contrast, exogenous variables are variables that do not have any parents in the causal graph; these variables are external to our causal model in the sense that we choose not to model their causes. For example, in the causal model described by Figure 4.8 and Equation 4.24, the endogenous variables are {𝐵, 𝐶, 𝐷}. And the exogenous variables are {𝐴, 𝑈𝐵 , 𝑈𝐶 , 𝑈𝐷 }.
tions in Equation 4.24.

Definition 4.2 (Structural Causal Model (SCM)) A structural causal


model is a tuple of the following sets:
1. A set of endogenous variables 𝑉
2. A set of exogenous variables 𝑈
3. A set of functions 𝑓 , one to generate each endogenous variable as a
function of other variables

For example, 𝑀 , the set of three equations above in Equation 4.24


constitutes an SCM with corresponding causal graph in Figure 4.8. Every
SCM implies an associated causal graph: for each structural equation,
draw an edge from every variable on the right-hand side to the variable
on the left-hand side.
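As an illustration, here is a minimal sketch (not from the original text) of the SCM 𝑀 in Equation 4.24 written as code; the functional forms and noise distributions are made up, since the text leaves 𝑓𝐵, 𝑓𝐶, and 𝑓𝐷 unspecified.

import numpy as np

rng = np.random.default_rng(0)

def sample_scm(n):
    # Exogenous variables: A and the noise terms (their causes are not modeled)
    A = rng.normal(size=n)
    U_B, U_C, U_D = rng.normal(size=(3, n))
    # Endogenous variables: one structural equation per variable,
    # with made-up functional forms standing in for f_B, f_C, f_D
    B = 2 * A + U_B               # B := f_B(A, U_B)
    C = A - B + U_C               # C := f_C(A, B, U_C)
    D = 0.5 * A + C**2 + U_D      # D := f_D(A, C, U_D)
    return A, B, C, D

A, B, C, D = sample_scm(10_000)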
If the causal graph contains no cycles (is a DAG) and the noise variables
𝑈 are independent, then the causal model is Markovian; the distribution
𝑃 is Markov with respect to the causal graph. If the causal graph doesn’t
contain cycles but the noise terms are dependent, then the model is semi-
Markovian. For example, if there is unobserved confounding, the model
is semi-Markovian. Finally, the graphs of non-Markovian models contain
cycles. We will largely be considering Markovian and semi-Markovian
models in this book.

4.5.2 Interventions

Interventions in SCMs are remarkably simple. The intervention do(𝑇 = 𝑡) simply corresponds to replacing the structural equation for 𝑇 with 𝑇 := 𝑡. For example, consider the following causal model 𝑀 with corresponding causal graph in Figure 4.9:

Figure 4.9: Basic causal graph

𝑀:   𝑇 := 𝑓𝑇 (𝑋 , 𝑈𝑇 )
      𝑌 := 𝑓𝑌 (𝑋 , 𝑇, 𝑈𝑌 )   (4.25)

If we then intervene on 𝑇 to set it to 𝑡 , we get the interventional SCM 𝑀𝑡


below and corresponding manipulated graph in Figure 4.10.
𝑀𝑡 :   𝑇 := 𝑡
        𝑌 := 𝑓𝑌 (𝑋 , 𝑇, 𝑈𝑌 )   (4.26)

Figure 4.10: Basic causal graph with the incoming edges to 𝑇 removed, due to the intervention do(𝑇 = 𝑡).

The fact that do(𝑇 = 𝑡) only changes the equation for 𝑇 and no other variables is a consequence of the modularity assumption; these causal mechanisms (structural equations) are modular. Assumption 4.1 states the modularity assumption in the context of causal Bayesian networks, but we need a slightly different translation of this assumption for SCMs.

Assumption 4.2 (Modularity Assumption for SCMs) Consider an SCM


𝑀 and an interventional SCM 𝑀𝑡 that we get by performing the intervention
do(𝑇 = 𝑡). The modularity assumption states that 𝑀 and 𝑀𝑡 share all of
their structural equations except the structural equation for 𝑇 , which is 𝑇 := 𝑡
in 𝑀𝑡 .

In other words, the intervention do(𝑇 = 𝑡) is localized to 𝑇 . None of the


other structural equations change because they are modular; the causal
mechanisms are independent. The modularity assumption for SCMs is
what gives us what Pearl calls The Law of Counterfactuals, which
we briefly mentioned at the end of Section 4.2, after we defined the
modularity assumption for causal Bayesian networks. But before we can
get to that, we must first introduce a bit more notation.
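As a concrete illustration of Assumption 4.2, here is a minimal sketch (not from the original text) of the SCM in Equation 4.25 with made-up functional forms; sampling from 𝑀𝑡 reuses every structural equation of 𝑀 except the one for 𝑇, which is replaced by 𝑇 := 𝑡.

import numpy as np

rng = np.random.default_rng(0)

def sample(n, t=None):
    # Exogenous variables
    X = rng.normal(size=n)
    U_T, U_Y = rng.normal(size=(2, n))
    # T's structural equation; under do(T = t) it is replaced by T := t (Equation 4.26)
    T = X + U_T if t is None else np.full(n, t)
    # Y's structural equation is shared by M and M_t (modularity); made-up f_Y
    Y = 2 * T + 3 * X + U_Y
    return X, T, Y

X, T, Y = sample(100_000)              # observational samples from M
Xt, Tt, Yt = sample(100_000, t=1.0)    # interventional samples from M_t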
In the causal inference literature, there are many different ways of writing
the unit-level potential outcome. In Chapter 2, we used 𝑌𝑖 (𝑡). However,
there are other ways such as 𝑌𝑖𝑡 or even 𝑌𝑡 (𝑢). For example, in his
prominent potential outcomes paper, Holland [5] uses the 𝑌𝑡 (𝑢) notation. In this notation, 𝑢 is the analog of 𝑖, just as we mentioned is the case for the 𝑈 in Equation 4.23 and the paragraph that followed it. This is the notation that Pearl uses for SCMs as well [see, e.g., 16, Definition 4]. So 𝑌𝑡 (𝑢) denotes the outcome that unit 𝑢 would observe if they take treatment 𝑡, given that the SCM is 𝑀. Similarly, we define 𝑌𝑀𝑡 (𝑢) as the outcome that unit 𝑢 would observe if they take treatment 𝑡, given that the SCM is 𝑀𝑡 (remember that 𝑀𝑡 is the same SCM as 𝑀 but with the structural equation for 𝑇 changed to 𝑇 := 𝑡). Now, we are ready to present one of Pearl’s two key principles from which all other causal results follow:⁸

[5]: Holland (1986), ‘Statistics and Causal Inference’
[16]: Pearl (2009), ‘Causal inference in statistics: An overview’
⁸ Active reading exercise: Can you recall which was the other key principle/assumption?

Definition 4.3 (The Law of Counterfactuals (and Interventions))

𝑌𝑡 (𝑢) = 𝑌𝑀𝑡 (𝑢)   (4.27)

This is called “The Law of Counterfactuals” because it gives us information about counterfactuals. Given an SCM with enough details about it specified, we can actually compute counterfactuals. This is a big deal because this is exactly what the fundamental problem of causal inference (Section 2.2) told us we cannot do. We won’t say more about how to do this until we get to the dedicated chapter for counterfactuals: Chapter 8.

Active reading exercise: Take what you now know about structural equations, and relate it to other parts of this chapter. For example, how do interventions in structural equations relate to the modularity assumption? How does the modularity assumption for SCMs (Assumption 4.2) relate to the modularity assumption in causal Bayesian networks (Assumption 4.1)? Does this modularity assumption for SCMs still give us the backdoor adjustment?

4.5.3 Collider Bias and Why to Not Condition on Descendants of Treatment

In defining the backdoor criterion (Definition 4.1) for the backdoor


adjustment (Theorem 4.2), not only did we specify that the adjustment
set 𝑊 blocks all backdoor paths, but we also specified that 𝑊 does not
contain any descendants of 𝑇 . Why? There are two categories of things
that could go wrong if we condition on descendants of 𝑇 :
1. We block the flow of causation from 𝑇 to 𝑌 .
2. We induce non-causal association between 𝑇 and 𝑌.
As we’ll see, it is fairly intuitive why we want to avoid the first category. The second category is a bit more complex, and we’ll break it up into two different parts, each with their own paragraph. This more complex part is actually why we delayed this explanation to after we introduced SCMs, rather than back when we introduced the backdoor criterion/adjustment in Section 4.4.

Figure 4.11: Causal graph where all causation is blocked by conditioning on 𝑀.
If we condition on a node that is on a directed path from 𝑇 to 𝑌, then we block the flow of causation along that causal path. We will refer to a node on a directed path from 𝑇 to 𝑌 as a mediator, as it mediates the effect of 𝑇 on 𝑌. For example, in Figure 4.11, all of the causal flow is blocked by 𝑀. This means that we will measure zero association between 𝑇 and 𝑌 (given that 𝑊 blocks all backdoor paths). In Figure 4.12, only a portion of the causal flow is blocked by 𝑀. This is because causation can still flow along the 𝑇 → 𝑌 edge. In this case, we will get a non-zero estimate of the causal effect, but it will still be biased, due to the causal flow that 𝑀 blocks.

Figure 4.12: Causal graph where part of the causation is blocked by conditioning on 𝑀.
If we condition on a descendant of 𝑇 that isn’t a mediator, it could unblock a path from 𝑇 to 𝑌 that was blocked by a collider. For example, this is the case with conditioning on 𝑍 in Figure 4.13. This induces non-causal association between 𝑇 and 𝑌, which biases the estimate of the causal effect. Consider the following general kind of path, where → · · · → denotes a directed path: 𝑇 → · · · → 𝑍 ← · · · ← 𝑌. Conditioning on 𝑍, or any descendant of 𝑍 in a path like this, will induce collider bias. That is, the causal effect estimate will be biased by the non-causal association that we induce when we condition on 𝑍 or any of its descendants (see Section 3.6).

Figure 4.13: Causal graph where conditioning on the collider 𝑍 induces bias.

What would happen if we condition on 𝑍 in Figure 4.14? Would that induce bias?
Recall that graphs are frequently drawn without explicitly drawing the noise variables. If we magnify part of the graph, making 𝑀’s noise variable explicit, we get Figure 4.15. Now, we see that 𝑇 → 𝑀 ← 𝑈𝑀 forms an immorality. Therefore, conditioning on 𝑍 induces an association between 𝑇 and 𝑈𝑀. This induced non-causal association is another form of collider bias. You might find this unsatisfying because 𝑌 is not one of the immoral parents here; rather, 𝑇 and 𝑈𝑀 are the ones living the immoral lifestyle. So why would this change the association between 𝑇 and 𝑌? One way to get the intuition for this is that there is now induced association flowing between 𝑇 and 𝑈𝑀 through the edge 𝑇 → 𝑀, which is also an edge that causal association is flowing along. You can think of these two types of association getting tangled up along the 𝑇 → 𝑀 edge, making the observed association between 𝑇 and 𝑌 not purely causal. See Pearl [17, Section 11.3.1 and 11.3.3] for more information on this topic.

Figure 4.14: Causal graph where the child of a mediator is conditioned on.

Note that we actually can condition on some descendants of 𝑇 without inducing non-causal associations between 𝑇 and 𝑌. For example, conditioning on descendants of 𝑇 that aren’t on any causal paths to 𝑌 won’t induce bias. However, as you can see from the above paragraph, this can get a bit tricky, so it is safest to just not condition on any descendants of 𝑇, as the backdoor criterion prescribes. Even outside of graphical causal models (e.g. in the potential outcomes literature), this rule is often applied; it is usually described as only conditioning on pretreatment covariates.

Figure 4.15: Magnified causal graph where the child of a mediator is conditioned on.

M-Bias  Unfortunately, even if we only condition on pretreatment covariates, we can still induce collider bias. Consider what would happen if we condition on the collider 𝑍2 in Figure 4.16. Doing this opens up a backdoor path, along which non-causal association can flow. This is known as M-bias due to the M shape that this non-causal association flows along when the graph is drawn with children below their parents. For many examples of collider bias, see Elwert and Winship [18].
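To see collider bias numerically, here is a minimal simulation sketch (not from the original text): 𝑇 causes 𝑌, and 𝑍 is a collider (a common effect of 𝑇 and 𝑌); the coefficients are made up. Regressing 𝑌 on 𝑇 alone recovers the causal effect (there are no backdoor paths in this toy setup), while additionally conditioning on the collider 𝑍 biases the coefficient on 𝑇.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

n = 500_000
T = rng.normal(size=n)
Y = 1.0 * T + rng.normal(size=n)     # true causal effect of T on Y is 1.0
Z = T + Y + rng.normal(size=n)       # collider: child of both T and Y

coef_unadjusted = LinearRegression().fit(T.reshape(-1, 1), Y).coef_[0]
coef_with_collider = LinearRegression().fit(np.column_stack([T, Z]), Y).coef_[0]

print(coef_unadjusted)      # close to 1.0
print(coef_with_collider)   # coefficient on T is far from 1.0 (collider bias)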

Figure 4.16: Causal graph depicting M-bias.

4.6 Example Applications of the Backdoor Adjustment

4.6.1 Association vs. Causation in a Toy Example

In this section, we posit a toy generative process and derive the bias of the associational quantity 𝔼[𝑌 | 𝑡]. We compare this to the causal quantity 𝔼[𝑌 | do(𝑡)], which gives us exactly what we want. Note that both of these quantities are actually functions of 𝑡. If the treatment were binary,
then we would just look at the difference between the quantities with
𝑇 = 1 and with 𝑇 = 0. However, because our generative processes will be
linear, 𝑑𝔼[𝑌 | 𝑡]/𝑑𝑡 and 𝑑𝔼[𝑌 | do(𝑡)]/𝑑𝑡 actually give us all the information about the treatment effect, regardless of whether treatment is continuous, binary, or multi-valued. We will assume infinite data so that we can work with expectations. This means this section has nothing to do with estimation; for estimation, see the next section.
The generative process that we consider has the causal graph in Figure 4.17
and the following structural equations:

𝑇 := 𝛼1 𝑋   (4.28)
𝑌 := 𝛽𝑇 + 𝛼2 𝑋 .   (4.29)

Note that in the structural equation for 𝑌, 𝛽 is the coefficient in front of 𝑇. This means that the causal effect of 𝑇 on 𝑌 is 𝛽. Keep this in mind as we go through these calculations.

Figure 4.17: Causal graph for toy example

From the causal graph in Figure 4.17, we can see that 𝑋 is a sufficient adjustment set. Therefore, 𝔼[𝑌 | do(𝑡)] = 𝔼𝑋 𝔼[𝑌 | 𝑡, 𝑋]. Let’s calculate the value of this quantity in our example.

𝔼𝑋 𝔼[𝑌 | 𝑡, 𝑋] = 𝔼𝑋 𝔼[𝛽𝑇 + 𝛼2 𝑋 | 𝑇 = 𝑡, 𝑋]   (4.30)
              = 𝔼𝑋 [𝛽𝑡 + 𝛼2 𝑋]   (4.31)
              = 𝛽𝑡 + 𝛼2 𝔼[𝑋]   (4.32)

Importantly, we made use of the equality that the structural equation for
𝑌 (Equation 4.29) gives us in Equation 4.30. Now, we just have to take
the derivative to get the causal effect:

𝑑 𝔼𝑋 𝔼[𝑌 | 𝑡, 𝑋] / 𝑑𝑡 = 𝛽 .   (4.33)
We got exactly what we were looking for. Now, let’s move to the associa-
tional quantity:

𝔼[𝑌 | 𝑇 = 𝑡] = 𝔼[𝛽𝑇 + 𝛼2 𝑋 | 𝑇 = 𝑡]   (4.34)
             = 𝛽𝑡 + 𝛼2 𝔼[𝑋 | 𝑇 = 𝑡]   (4.35)
             = 𝛽𝑡 + (𝛼2/𝛼1) 𝑡   (4.36)

In Equation 4.36, we made use of the equality that the structural equation
for 𝑇 (Equation 4.28) gives us. If we then take the derivative, we see that
there is confounding bias:

𝑑 𝔼[𝑌 | 𝑡] / 𝑑𝑡 = 𝛽 + 𝛼2/𝛼1 .   (4.37)
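Here is a minimal simulation sketch (not from the original text) that approximates these two slopes with a large sample; the coefficient values are arbitrary assumptions. The interventional slope is computed by doing what Section 4.5.2 describes: replacing the structural equation for 𝑇 with 𝑇 := 𝑡.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Assumed coefficient values for the toy SCM in Equations 4.28 and 4.29
alpha_1, alpha_2, beta = 2.0, 3.0, 1.5
n = 1_000_000

X = rng.normal(size=n)
T = alpha_1 * X                  # Equation 4.28
Y = beta * T + alpha_2 * X       # Equation 4.29

# Associational slope d E[Y | t] / dt: regress Y on T alone
assoc_slope = LinearRegression().fit(T.reshape(-1, 1), Y).coef_[0]

# Interventional slope d E[Y | do(t)] / dt: intervene by setting T := t
def mean_y_do(t):
    return (beta * t + alpha_2 * X).mean()   # X is unaffected by the intervention

causal_slope = mean_y_do(1.0) - mean_y_do(0.0)

print(assoc_slope)   # close to beta + alpha_2 / alpha_1 = 3.0
print(causal_slope)  # equals beta = 1.5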

To recap, 𝔼𝑋 𝔼[𝑌 | 𝑡, 𝑋] gave us the causal effect we were looking for


(Equation 4.33), whereas the associational quantity 𝔼[𝑌 | 𝑡] did not
(Equation 4.37). Now, let’s go through an example that also takes into
account estimation.

4.6.2 A Complete Example with Estimation

Recall that we estimated a concrete value for the causal effect of sodium
intake on blood pressure in Section 2.5. There, we used the potential
outcomes framework. Here, we will do the same thing, but using causal
graphs. The spoiler is that the 19% error that we saw in Section 2.5 was
due to conditioning on a collider.
First, we need to write down our causal assumptions in terms of a causal
graph. Remember that in Luque-Fernandez et al. [8]’s example from epidemiology, the treatment 𝑇 is sodium intake, and the outcome 𝑌 is blood pressure. The covariates are age 𝑊 and amount of protein in urine (proteinuria) 𝑍. Age is a common cause of both blood pressure and the body’s ability to self-regulate sodium levels. In contrast, high amounts of urinary protein are caused by high blood pressure and high sodium intake. This means that proteinuria is a collider. We depict this causal graph in Figure 4.18.

[8]: Luque-Fernandez et al. (2018), ‘Educational Note: Paradoxical collider effect in the analysis of non-communicable disease epidemiological data: a reproducible illustration and web application’

Because 𝑍 is a collider, conditioning on it induces bias. Because 𝑊 and 𝑍 were grouped together as “covariates” 𝑋 in Section 2.5, we conditioned on all of them. This is why we saw that our estimate was 19% off from the true causal effect 1.05. Now that we’ve made the causal relationships clear with a causal graph, the backdoor criterion (Definition 4.1) tells us to only adjust for 𝑊 and to not adjust for 𝑍. More precisely, we were doing the following adjustment in Section 2.5:

𝔼𝑊,𝑍 𝔼[𝑌 | 𝑡, 𝑊 , 𝑍]   (4.38)

Figure 4.18: Causal graph for the blood pressure example. 𝑇 is sodium intake. 𝑌 is blood pressure. 𝑊 is age. And, importantly, the amount of protein excreted in urine 𝑍 is a collider.

And now, we will use the backdoor adjustment (Theorem 4.2) to change
our statistical estimand to the following:

𝔼𝑊 𝔼[𝑌 | 𝑡, 𝑊] (4.39)

We have simply removed the collider 𝑍 from the variables we adjust for.
For estimation, just as we did in Section 2.5, we use a model-assisted
estimator. We replace the outer expectation over 𝑊 with an empirical
mean over 𝑊 and replace the conditional expectation 𝔼[𝑌 | 𝑡, 𝑊] with a
machine learning model (in this case, linear regression).
Just as writing down the graph has led us to simply not condition on 𝑍
in Equation 4.39, the code for estimation also barely changes. We need to
change just a single line of code in our previous program (Listing 2.1).
We display the full program with the fixed line of code below:

Listing 4.1: Python code for estimating the ATE, without adjusting for the collider

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

Xt = df[['sodium', 'age']]
y = df['blood_pressure']
model = LinearRegression()
model.fit(Xt, y)
Xt1 = pd.DataFrame.copy(Xt)
Xt1['sodium'] = 1
Xt0 = pd.DataFrame.copy(Xt)
Xt0['sodium'] = 0
ate_est = np.mean(model.predict(Xt1) - model.predict(Xt0))
print('ATE estimate:', ate_est)

Full code, complete with simulation, is available at https://github.com/bradyneal/causal-book-code/blob/master/sodium_example.py.

Namely, we’ve changed line 5 from


Xt = df[['sodium', 'age', 'proteinuria']]

in Listing 2.1 to
Xt = df[['sodium', 'age']]

in Listing 4.1. When we run this revised code, we get an ATE estimate of 1.0502, which corresponds to 0.02% error (true value is 1.05) when using a fairly large sample.⁹

⁹ Active reading exercise: Given that 𝑌 is generated as a linear function of 𝑇 and 𝑊, could we have just used the coefficient in front of 𝑇 in the linear regression as an estimate for the causal effect?

Progression of Reducing Bias  When looking at the total association between 𝑇 and 𝑌 by simply regressing 𝑌 on 𝑇, we got an estimate that was a staggering 407% off of the true causal effect, due largely to confounding bias (see Section 2.5). When we adjusted for all covariates in Section 2.5, we reduced the percent error all the way down to 19%. In this section, we saw this remaining error is due to collider bias. When we removed the collider bias, by not conditioning on the collider 𝑍, the error became non-existent.

Potential Outcomes and M-Bias  In fairness to the general culture around the potential outcomes framework, it is common to only condition on pretreatment covariates. This would prevent a practitioner who adheres to this rule from conditioning on the collider 𝑍 in Figure 4.18. However, there is no reason that there can’t be pretreatment colliders that induce M-bias (Section 4.5.3). In Figure 4.19, we depict an example of M-bias that is created by conditioning on 𝑍2. We could fix this by additionally conditioning on 𝑍1 and/or 𝑍3, but in this example, they are unobserved (indicated by the dashed lines). This means that the only way to avoid M-bias in Figure 4.19 is to not condition on the covariate 𝑍2.

Figure 4.19: Causal graph depicting M-bias that can only be avoided by not conditioning on the collider 𝑍2. This is due to the fact that the dashed nodes 𝑍1 and 𝑍3 are unobserved.

4.7 Assumptions Revisited

The first main set of assumptions is encoded by the causal graph that we
write down. Exactly what this causal graph means is determined by two
main assumptions, each of which can take on several different forms:
1. The Modularity Assumption
Different forms:
I Modularity Assumption for Causal Bayesian Networks (Assumption 4.1)
I Modularity Assumption for SCMs (Assumption 4.2)
I The Law of Counterfactuals (Definition 4.3)
2. The Markov Assumption
Different equivalent forms:
I Local Markov assumption (Assumption 3.1)
I Bayesian network factorization (Definition 3.1)
I Global Markov assumption (Theorem 3.1)

Given these two assumptions (and positivity), if the backdoor criterion (Definition 4.1) is satisfied in our assumed causal graph, then we have identification. Note that although the backdoor criterion is a sufficient condition for identification, it is not a necessary condition. We will see this more in Chapter 6.

Now that you’re familiar with causal graphical models and SCMs, it may be worth going back and rereading Chapter 2 while trying to make connections to what you’ve learned about graphical causal models in these past two chapters.
More Formal  If you’re really into fancy formalism, there are some relevant sources to check out. You can see the fundamental axioms that underlie The Law of Counterfactuals in [19, 20], or if you want a textbook, you can find them in [17, Chapter 7.3]. To see proofs of the equivalence of all three forms of the Markov assumption, see, for example, [12, Chapter 3].

[19]: Galles and Pearl (1998), ‘An Axiomatic Characterization of Causal Counterfactuals’
[20]: Halpern (1998), ‘Axiomatizing Causal Reasoning’
[17]: Pearl (2009), Causality
[12]: Koller and Friedman (2009), Probabilistic Graphical Models: Principles and Techniques

Connections to No Interference, Consistency, and Positivity  The no interference assumption (Assumption 2.4) is commonly implicit in causal graphs, since the outcome 𝑌 (think 𝑌𝑖) usually only has a single node 𝑇 (think 𝑇𝑖) for treatment as a parent, rather than having multiple treatment nodes 𝑇𝑖, 𝑇𝑖−1, 𝑇𝑖+1, etc. as parents. However, causal DAGs can be extended to settings where there is interference [21]. Consistency (Assumption 2.5) follows from the axioms of SCMs (see [17, Corollary 7.3.2] and [22]). Positivity (Assumption 2.3) is still a very important assumption that we must make, though it is sometimes neglected in the graphical models literature.

[21]: Ogburn and VanderWeele (2014), ‘Causal Diagrams for Interference’
[22]: Pearl (2010), ‘On the consistency rule in causal inference: axiom, definition, assumption, or theorem?’
Randomized Experiments 5

5.1 Comparability and Covariate Balance . . . 47
5.2 Exchangeability . . . 48
5.3 No Backdoor Paths . . . 49

Randomized experiments are noticeably different from observational studies. In randomized experiments, the experimenter has complete control over the treatment assignment mechanism (how treatment is assigned). For example, in the most simple kind of randomized experiment, the experimenter randomly assigns (e.g. via coin toss) each participant to
either the treatment group or the control group. This complete control
over how treatment is chosen is what distinguishes randomized experi-
ments from observational studies. In this simple experimental setup, the
treatment isn’t a function of covariates at all! In contrast, in observational
studies, the treatment is almost always a function of some covariate(s).
As we will see, this difference is key to whether or not confounding is
present in our data.
In randomized experiments, association is causation. This is because
randomized experiments are special in that they guarantee that there
is no confounding. As a consequence, this allows us to measure the
causal effect 𝔼[𝑌(1)] − 𝔼[𝑌(0)] via the associational difference 𝔼[𝑌 | 𝑇 =
1]− 𝔼[𝑌 | 𝑇 = 0]. In the following sections, we explain why this is the case
from a variety of different perspectives. If any one of these explanations
clicks with you, that might be good enough. Definitely stick through to
the most visually appealing explanation in Section 5.3.

5.1 Comparability and Covariate Balance

Ideally, the treatment and control groups would be the same, in all
aspects, except for treatment. This would mean they only differ in the
treatment they receive (i.e. they are comparable). This would allow us to
attribute any difference in the outcomes of the treatment and control
groups to the treatment. Saying that these treatment groups are the same
in everything other than their treatment and outcomes is the same as
saying they have the same distribution of confounders. Because people
often check for this property on observed variables (often what people
mean by “covariates”), this concept is known as covariate balance.

Definition 5.1 (Covariate Balance) We have covariate balance if the distri-


bution of covariates 𝑋 is the same across treatment groups. More formally,

𝑃(𝑋 | 𝑇 = 1) =ᵈ 𝑃(𝑋 | 𝑇 = 0)   (5.1)

Randomization implies covariate balance, across all covariates, even


unobserved ones. Intuitively, this is because the treatment is chosen at
random, regardless of 𝑋 , so the treatment and control groups should
look very similar. The proof is simple. Because 𝑇 is not at all determined
by 𝑋 (solely by a coin flip), 𝑇 is independent of 𝑋 . This means that

𝑃(𝑋 | 𝑇 = 1) =ᵈ 𝑃(𝑋). Similarly, it means 𝑃(𝑋 | 𝑇 = 0) =ᵈ 𝑃(𝑋). Therefore, we have 𝑃(𝑋 | 𝑇 = 1) =ᵈ 𝑃(𝑋 | 𝑇 = 0).
Although we have proven that randomization implies covariate balance,
we have not proven that covariate balance implies identifiability. The intuition is that covariate balance means that everything is the
same between the treatment groups, except for the treatment, so the
treatment must be the explanation for the change in 𝑌 . We’ll now prove
that 𝑃(𝑦 | do(𝑇 = 𝑡)) = 𝑃(𝑦 | 𝑇 = 𝑡). For the proof, the main property we
utilize is that covariate balance implies 𝑋 and 𝑇 are independent.

Proof. First, let 𝑋 be a sufficient adjustment set. This is the case with
randomization since we know that randomization balances everything,
not just the observed covariates. Then, we have the following from the
backdoor adjustment (Theorem 4.2):
𝑃(𝑦 | do(𝑇 = 𝑡)) = Σ𝑥 𝑃(𝑦 | 𝑡, 𝑥) 𝑃(𝑥)   (5.2)

By multiplying by 𝑃(𝑡 | 𝑥)/𝑃(𝑡 | 𝑥), we get the joint distribution in the numerator:

= Σ𝑥 𝑃(𝑦 | 𝑡, 𝑥) 𝑃(𝑡 | 𝑥) 𝑃(𝑥) / 𝑃(𝑡 | 𝑥)   (5.3)
= Σ𝑥 𝑃(𝑦, 𝑡, 𝑥) / 𝑃(𝑡 | 𝑥)   (5.4)

Now, we use the important property that 𝑋 ⊥⊥ 𝑇:

= Σ𝑥 𝑃(𝑦, 𝑡, 𝑥) / 𝑃(𝑡)   (5.5)

An application of Bayes rule and marginalization gives us the rest:

= Σ𝑥 𝑃(𝑦, 𝑥 | 𝑡)   (5.6)
= 𝑃(𝑦 | 𝑡)   (5.7)
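As a quick numerical illustration (not from the original text, with a made-up data-generating process): when 𝑇 is assigned by a coin flip, the covariate 𝑋 is balanced across groups and the simple associational difference recovers the causal effect.

import numpy as np

rng = np.random.default_rng(0)

n = 500_000
X = rng.normal(size=n)                        # a covariate that affects Y
T = rng.binomial(1, 0.5, size=n)              # randomized treatment: ignores X entirely
Y = 1.5 * T + 2.0 * X + rng.normal(size=n)    # true causal effect is 1.5

# Covariate balance: X has (approximately) the same distribution in both groups
print(X[T == 1].mean(), X[T == 0].mean())     # both close to 0

# Association is causation under randomization
print(Y[T == 1].mean() - Y[T == 0].mean())    # close to 1.5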

5.2 Exchangeability

Exchangeability (Assumption 2.1) gives us another perspective on why


randomization makes causation equal to association. To see why, consider
the following thought experiment. We decide an individual’s treatment
group using a random coin flip as follows: if the coin is heads, we assign
the individual to the treatment group (𝑇 = 1), and if the coin is tails,
we assign the individual to the control group (𝑇 = 0). If the groups are
exchangeable, we could exchange these groups, and the average outcomes
would remain the same. This is intuitively true if we chose the groups
with a coin flip. Imagine simply swapping the meaning of “heads” and
“tails” in this experiment. Would you expect that to change the results at
all? No. This is why randomized experiments give us exchangeability.

Recall from Section 2.3.2 that mean exchangeability is formally the


following:

𝔼[𝑌(1) | 𝑇 = 1] = 𝔼[𝑌(1) | 𝑇 = 0] (5.8)


𝔼[𝑌(0) | 𝑇 = 0] = 𝔼[𝑌(0) | 𝑇 = 1] (5.9)

The “exchange” is when we go from 𝑌(1) in the treatment group to 𝑌(1)


in the control group (Equation 5.8) and from 𝑌(0) in the control group to 𝑌(0) in the treatment group (Equation 5.9).
To see the proof of why association is causation in randomized ex-
periments through the lens of exchangeability, recall the proof from
Section 2.3.2. First, recall that Equation 5.8 means that both quantities in
it are equal to the marginal expected outcome 𝔼[𝑌(1)] and, similarly, that
Equation 5.9 means that both quantities in it are equal to the marginal
expected outcome 𝔼[𝑌(0)]. Then, we have the following proof:

𝔼[𝑌(1)] − 𝔼[𝑌(0)] = 𝔼[𝑌(1) | 𝑇 = 1] − 𝔼[𝑌(0) | 𝑇 = 0] (2.3 revisited)


= 𝔼[𝑌 | 𝑇 = 1] − 𝔼[𝑌 | 𝑇 = 0] (2.4 revisited)

5.3 No Backdoor Paths

The final perspective that we’ll look at to see why association is causation in randomized experiments is that of graphical causal models. In regular observational data, there is almost always confounding. For example, in Figure 5.1 we see that 𝑋 is a confounder of the effect of 𝑇 on 𝑌. Non-causal association flows along the backdoor path 𝑇 ← 𝑋 → 𝑌.

Figure 5.1: Causal structure of 𝑋 confounding the effect of 𝑇 on 𝑌.
However, if we randomize 𝑇, something magical happens: 𝑇 no longer has any causal parents, as we depict in Figure 5.2. This is because 𝑇 is purely random. It doesn’t depend on anything other than the output of a coin toss (or a quantum random number generator, if you’re into that kind of stuff). Because 𝑇 has no incoming edges, under randomization, there are no backdoor paths. So the empty set is a sufficient adjustment set. This means that all of the association that flows from 𝑇 to 𝑌 is causal. We can identify 𝑃(𝑌 | do(𝑇 = 𝑡)) by simply applying the backdoor adjustment (Theorem 4.2), adjusting for the empty set:

𝑃(𝑌 | do(𝑇 = 𝑡)) = 𝑃(𝑌 | 𝑇 = 𝑡)

Figure 5.2: Causal structure when we randomize treatment.

With that, we conclude our discussion of why association is causation in


randomized experiments. Hopefully, at least one of these three explana-
tions is intuitive to you and easy to store in long-term memory.
General Identification 6
6.1 Coming Soon . . . . . . . . . 50
6.1 Coming Soon
Estimation 7
7.1 Coming Soon . . . . . . . . . 51
7.1 Coming Soon
Counterfactuals 8
8.1 Coming Soon . . . . . . . . . 52
8.1 Coming Soon
Bibliography

Here are the references in citation order.

[1] Tyler Vigen. Spurious correlations. https://www.tylervigen.com/spurious-correlations. 2015


(cited on page 3).
[2] Jerzy Splawa-Neyman. ‘On the Application of Probability Theory to Agricultural Experiments. Essay
on Principles. Section 9.’ Trans. by D. M. Dabrowska and T. P. Speed. In: Statistical Science 5.4 (1923
[1990]), pp. 465–472 (cited on page 6).
[3] Donald B. Rubin. ‘Estimating causal effects of treatments in randomized and nonrandomized studies.’
In: Journal of educational Psychology 66.5 (1974), p. 688 (cited on pages 6, 7).
[4] Jasjeet S. Sekhon. ‘The Neyman-Rubin Model of Causal Inference and Estimation via Matching
Methods’. In: Oxford handbook of political methodology (2008), pp. 271– (cited on page 6).
[5] Paul W. Holland. ‘Statistics and Causal Inference’. In: Journal of the American Statistical Association 81.396
(1986), pp. 945–960. doi: 10.1080/01621459.1986.10478354 (cited on pages 8, 41).
[6] Alexander D’Amour, Peng Ding, Avi Feller, Lihua Lei, and Jasjeet Sekhon. Overlap in Observational
Studies with High-Dimensional Covariates. 2017 (cited on page 13).
[7] Miguel A Hernán and James M Robins. Causal Inference: What If. Boca Raton: Chapman & Hall/CRC,
2020 (cited on pages 14, 27).
[8] Miguel Angel Luque-Fernandez, Michael Schomaker, Daniel Redondo-Sanchez, Maria Jose Sanchez
Perez, Anand Vaidya, and Mireille E Schnitzer. ‘Educational Note: Paradoxical collider effect in the
analysis of non-communicable disease epidemiological data: a reproducible illustration and web
application’. In: International Journal of Epidemiology 48.2 (Dec. 2018), pp. 640–653. doi: 10.1093/ije/
dyy275 (cited on pages 16, 44).
[9] Salim S. Virani et al. ‘Heart Disease and Stroke Statistics—2020 Update: A Report From the American
Heart Association’. In: Circulation (Mar. 2020), pp. 640–653. doi: 10.1161/cir.0000000000000757
(cited on page 16).
[10] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning. Springer
Series in Statistics. New York, NY, USA: Springer New York Inc., 2001 (cited on page 17).
[11] Stephen L. Morgan and Christopher Winship. Counterfactuals and Causal Inference: Methods and Principles
for Social Research. 2nd ed. Analytical Methods for Social Research. Cambridge University Press, 2014
(cited on page 18).
[12] Daphne Koller and Nir Friedman. Probabilistic Graphical Models: Principles and Techniques. Adaptive
Computation and Machine Learning. The MIT Press, 2009 (cited on pages 21, 29, 46).
[13] J. Peters, D. Janzing, and B. Schölkopf. Elements of Causal Inference: Foundations and Learning Algorithms.
Cambridge, MA, USA: MIT Press, 2017 (cited on page 21).
[14] Judea Pearl, Madelyn Glymour, and Nicholas P Jewell. Causal inference in statistics: A primer. John Wiley
& Sons, 2016 (cited on page 25).
[15] Judea Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. San Francisco,
CA, USA: Morgan Kaufmann Publishers Inc., 1988 (cited on page 28).
[16] Judea Pearl. ‘Causal inference in statistics: An overview’. In: Statist. Surv. 3 (2009), pp. 96–146. doi:
10.1214/09-SS057 (cited on page 41).
[17] Judea Pearl. Causality. Cambridge University Press, 2009 (cited on pages 42, 46).
[18] Felix Elwert and Christopher Winship. ‘Endogenous Selection Bias: The Problem of Conditioning on a
Collider Variable.’ In: Annual review of sociology 40 (2014), pp. 31–53 (cited on page 43).
[19] David Galles and Judea Pearl. ‘An Axiomatic Characterization of Causal Counterfactuals’. In: Founda-
tions of Science 3.1 (1998), pp. 151–182. doi: 10.1023/A:1009602825894 (cited on page 46).
[20] Joseph Y. Halpern. ‘Axiomatizing Causal Reasoning’. In: Proceedings of the Fourteenth Conference on
Uncertainty in Artificial Intelligence. UAI’98. Madison, Wisconsin: Morgan Kaufmann Publishers Inc.,
1998, pp. 202–210 (cited on page 46).
[21] Elizabeth L. Ogburn and Tyler J. VanderWeele. ‘Causal Diagrams for Interference’. In: Statist. Sci. 29.4
(Nov. 2014), pp. 559–578. doi: 10.1214/14-STS501 (cited on page 46).
[22] J. Pearl. ‘On the consistency rule in causal inference: axiom, definition, assumption, or theorem?’ In:
Epidemiology 21.6 (Nov. 2010), pp. 872–875 (cited on page 46).
Alphabetical Index

adjustment formula, 11
ancestor, 19
association, 4
  causal association, 4
  confounding association, 4
associational difference, 8
average treatment effect (ATE), 8
backdoor adjustment, 37
backdoor criterion, 36
backdoor path, 36
Bayesian network, 21
  chain rule, 21
  factorization, 21
Berkson's paradox, 27
blocked path, 25, 26, 28
causal Bayesian networks, 34
causal effect
  average, 8
  individual, 7
  unit-level, 7
causal estimand, 15, 32
causal graph, 22
  non-strict, 23
  strict, 22
causal mechanism, 33, 39
cause, 22, 39
child, 19
collider, 26
collider bias, 42
common cause, 4
common support, 13
comparability, 47
confounder, 4
correlation, 4
counterfactual, 8
covariate balance, 47
curse of dimensionality, 13
cycle, 19
d-connected, 28
d-separated, 28
d-separation, 28
data generating process, 27
descendant, 19
direct cause, 22, 39
directed acyclic graph (DAG), 19
directed graph, 19
directed path, 19
do-operator, 31
edge, 19
endogenous, 40
estimand, 15
  causal, 15, 32
  statistical, 15, 32
estimate, 15
estimation, 15, 16
estimator, 15
exchangeability, 9, 48
exogenous, 40
extrapolation, 13
factual, 8
global Markov assumption, 29
graph, 19
identifiability, 10, 32
identification, 10, 16, 32
ignorability, 9
immorality, 20, 26
individual treatment effect (ITE), 7
interference, 13
interventional distribution, 31
interventional SCM, 41
local Markov assumption, 20
lurking variable, 4
M-bias, 43, 45
magnification, 42
magnify, 42
manipulated graph, 33
Markov compatibility, 21
Markovian, 40
mediator, 42
minimality, 21
misspecification, 18
model-assisted estimation, 16
model-assisted estimator, 16, 17
node, 19
non-Markovian, 40
nonparametric, 39
observational data, 32
observational distribution, 32
overlap, 13
parent, 19
path, 19
  blocked, 25, 26, 28
  unblocked, 25, 27, 28
positivity, 12
post-intervention, 32
potential outcome, 6
pre-intervention, 32
pretreatment covariates, 42
randomized control trials (RCTs), 47
randomized experiments, 47
semi-Markovian, 40
Simpson's paradox, 1
spurious correlation, 3
statistical estimand, 15, 32
structural causal model (SCM), 39
structural equation, 39
sufficient adjustment set, 37
SUTVA, 14
terminology machine gun, 19
treatment assignment mechanism, 47
truncated factorization, 34
unblocked path, 25, 27, 28
unconfoundedness, 11
undirected graph, 19
unit-level treatment effect, 7
vertex, 19
