Tuesday, March 21, 2017

Causal inference (3): Mediation and counterfactuals

This is the third post in a series that explores recent advances in causal inference, particularly those stemming from Judea Pearl's Structural Causal Model (Pearl, 2000; 2009).  My first post defines causality in terms of actions and describes how this definition imposes a bright line between causal and associative concepts that can only be bridged through assumptions.  My second post extends these ideas to a multiple-variable context and describes the assumptions required to identify causal effects in the presence of spurious causes (confounds).

In this third post, I will explore these ideas in the context of a particular kind of multiple-variable design: mediation designs.


Causal attribution


My discussion of causal inference so far has assumed that we are interested in estimating the causal relationship between a particular psychological process, such as ice cream attitudes, and a behavior, such as ice cream purchases.  Although many psychological questions fit this research prototype, not all do.  Consider the following questions:
  1. Does cognitive dissonance affect attitudes because of psychological discomfort (Cooper & Fazio, 1984) or inferential processes (Bem, 1972)?
  2. Are differences in the treatment of Black people and White people caused by implicit bias (Lai, Hoffman, & Nosek, 2013; Forscher & Devine, 2014)?
  3. Are changes in performance following effortful tasks due to depleted cognitive resources or changes in motivation (Inzlicht & Schmeichel, 2012)?
In contrast to the types of questions that I reviewed in my first two posts, these questions aren't just asking whether a cause→effect relationship exists; they take as a given the existence of a cause→effect relationship and seek to attribute this relationship to one or more sources (often called mediators).  In other words, these questions seek to explain why a cause→effect relationship occurs rather than establish whether it occurs.

How might we estimate this mediation effect (sometimes termed the indirect effect)?  As I have described in past posts, the typical approach in psychology is to conduct a study involving predictor $X$, mediator $M$, and outcome $Y$ and to analyze this study using the mediation formula.  This involves fitting the following two linear models:

$$M = aX + e_1$$
$$Y = bM + cX + e_2$$
then estimating the mediation effect as the product $ab$.
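
If it helps to see the arithmetic, here is a minimal sketch of this procedure in Python on simulated data (the coefficients and the use of plain least squares are invented purely for illustration):

```python
import numpy as np

# Simulate a study with known effects: M = 0.5*X + e1, Y = 0.4*M + 0.3*X + e2
rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)                      # predictor X
m = 0.5 * x + rng.normal(size=n)            # mediator M
y = 0.4 * m + 0.3 * x + rng.normal(size=n)  # outcome Y

# Fit M = aX + e1 by least squares
a = np.linalg.lstsq(x[:, None], m, rcond=None)[0][0]

# Fit Y = bM + cX + e2 by least squares
b, c = np.linalg.lstsq(np.column_stack([m, x]), y, rcond=None)[0]

# The mediation (indirect) effect is the product ab
print(f"a = {a:.2f}, b = {b:.2f}, ab = {a * b:.2f}")  # ab should be near 0.20
```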

However, this is an associative procedure.  As I argued in my first post on causal inference, there is a bright line separating causal and associative concepts; while the quantities estimated in models of associations may converge with the quantities derived from causal models, they need not.  What we need is a clear definition of the mediation effect in terms of causal principles.  We can then use this definition to determine when the mediation effect can be identified from associative data.

Unfortunately, we run into a problem as soon as we define the mediation effect.  Properly defined, the mediation effect is impossible.


The impossibility of the mediation effect


To understand why, it is helpful to think of causality in terms of counterfactuals.  Imagine we have a device that peers into worlds that are identical except for the modification of one particular feature at a particular point in time.  Because the only difference between these universes is the feature of interest, we can reasonably infer that differences in these universes are caused by the changes in the feature.

For example, I love ice cream and I buy a lot of it.  From a counterfactual perspective, the question of whether my attitudes cause my purchases becomes a question of what would happen if we kept everything about the universe the same except one thing: my attitudes toward ice cream.  If we have an Attitudes-O-Matic that, instead of setting my attitudes to particular values, allows us to peer into counterfactual universes where my attitudes towards ice cream vary, we could compare how much ice cream I purchase in a universe where my attitudes are positive and a universe where they are negative.  If my purchasing behavior is different in these two universes, my attitudes must cause my purchasing behavior.
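
Here is the same thought experiment as a toy Python sketch: the "rest of the universe" is a single fixed noise term, and the two universes the Attitudes-O-Matic peers into share that term while differing only in my attitudes (the structural equation and its coefficient are pure invention):

```python
import numpy as np

rng = np.random.default_rng(1)
rest_of_universe = rng.normal()  # everything else about me, held fixed

def my_purchases(attitudes):
    # Purchases depend on attitudes plus the (fixed) rest of the universe
    return 2.0 * attitudes + rest_of_universe

# Peer into two universes that differ only in my ice cream attitudes
positive_universe = my_purchases(attitudes=+1.0)
negative_universe = my_purchases(attitudes=-1.0)

# Because nothing else varies, any difference is caused by attitudes
print(positive_universe - negative_universe)  # 4.0
```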


Note that this perspective on causality is very similar to the change-oriented perspective that I introduced in my first post (and, in fact, is subsumed by the change-oriented perspective; Pearl, 2000).  I am just imagining possible universes rather than a particular change process.

Let's say that, in addition to measuring purchases in the positive- and negative-attitude universes, I also measure approach-avoidance.  Because the only thing that varies across these two universes is attitudes, I can infer that any differences in approach-avoidance across the two universes must be due to attitudes.  But the attitudes→approach-avoidance causal relationship isn't quite what I want to know.  I want to know whether approach-avoidance is responsible for the attitudes→purchasing causal effect.  Clearly these two universes aren't enough to identify the effect I'm looking for.


What happens if I invent a second machine, the Approach-O-Matic, that peers into universes where only my approach-avoidance tendencies vary?  This will allow me to identify causal effects of approach-avoidance tendencies.  However, this, too, is not quite what I want: I want to know if approach-avoidance tendencies are responsible for a particular causal effect of attitudes, not whether approach-avoidance tendencies have causal effects on their own.

Perhaps I can use the Approach-O-Matic in conjunction with the Attitudes-O-Matic.  The logic works as follows.  The Attitudes-O-Matic tells me what my approach-avoidance tendencies are like in a universe where I have negative ice cream attitudes.  Perhaps I can use the Attitudes-O-Matic to look into the universe where I have positive attitudes and then use the Approach-O-Matic to somehow pretend my approach-avoidance tendencies have taken on their value from the negative attitudes universe.

[Figure: the doubly hypothetical bizarro world.  Yes, that is Comic Sans, and yes, I used it intentionally]

My constructed universe is a doubly hypothetical bizarro world: it is a universe where my attitudes are negative with respect to approach-avoidance and positive with respect to purchases.  This bizarro world is clearly impossible: my attitudes cannot simultaneously exert a causal effect on approach-avoidance as if they were negative and exert a causal effect on purchases as if they were positive.  Nevertheless, the bizarro world forms the basis of the formal definition of the indirect effect: the difference in purchases between the bizarro world and the world where attitudes are positive.
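
In Pearl's (2001) counterfactual notation, we can write this definition down directly.  Let $Y(x, m)$ be the amount I purchase if my attitudes are set to $x$ and my approach-avoidance tendencies to $m$, and let $M(x')$ be the value my approach-avoidance tendencies take when my attitudes are set to $x'$.  With $x$ standing for positive attitudes and $x'$ for negative attitudes, the indirect effect is

$$E[Y(x, M(x'))] - E[Y(x, M(x))]$$

The first term is the bizarro world; the second term simplifies to $E[Y(x)]$, the ordinary world where my attitudes are simply positive.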


Identification through assumptions


The fact that the formal definition of the indirect effect requires the existence of an impossible universe implies that the indirect effect cannot be identified, even in an ideal experimental setting.  Fortunately, we are not entirely screwed if we wish to know whether certain effects are attributable to the action of another variable -- we just need to make certain assumptions before such effects can be identified.

Formally, the identification assumptions involve the conditional independence of counterfactual variables (Pearl, 2001).  If that sentence makes your head hurt, you are not alone.  Informally, we can ensure identifiability by consulting a set of sufficient criteria for identification (Pearl, 2012).
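
For the brave: the flavor of these assumptions (my rough paraphrase, not the full statement of Pearl's theorem) is that there exists a set of measured covariates $W$ such that

$$Y(x, m) \perp\!\!\!\perp M(x') \mid W$$

that is, the counterfactual outcome and the counterfactual mediator must be independent once we condition on $W$.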

One sufficient criterion is that all unmeasured causes of all variables are uncorrelated.  In my example involving attitudes, approach-avoidance, and purchases, this assumption amounts to the following causal graph:

[Causal graph: attitudes → approach-avoidance → purchases, with each variable receiving its own unknown causes and no correlations among those unknown causes]


In other words, if we can reasonably assume that all unknown causes are uncorrelated, then we can identify the indirect effect and the mediation analysis can proceed as normal (i.e., in the context of linear relationships, through calculating the product $ab$).  However, this is a very strong assumption that may not hold, even if attitudes in the above graph is randomized -- after all, randomization of attitudes does not guarantee that there is no relationship between the unknown causes of approach-avoidance and purchases.

Assuming that unknown causes are uncorrelated is probably too strong in most situations.  Fortunately, we can relax this assumption somewhat and still identify the indirect effect.  In practice, relaxing these assumptions involves finding a set of covariates that satisfies specific applications of the back-door criterion (for an explanation of this criterion, see the second post in this series).

More specifically, assuming that we have variables $X$, $M$, $Y$, and a set of covariates $W$, adjusting for $W$ will identify the indirect effect if the following conditions are satisfied:
  1. None of the variables in $W$ is a descendant of $X$ (i.e., none can be reached from $X$ by following one or more → arrows)
  2. $W$ blocks all back-door paths from $X$ to $M$
  3. $W$ and $X$ block all back-door paths from $M$ to $Y$
For example, the "norms" variable in the causal graph below satisfies the above three conditions.  There is no back-door path from attitudes to approach-avoidance (condition 2), and the two back-door paths between approach-avoidance and purchases, approach-avoidance←norms→purchases and approach-avoidance←attitudes→purchases, are blocked by norms and attitudes, respectively.

[Causal graph: attitudes → approach-avoidance → purchases, with attitudes → purchases, norms → approach-avoidance, and norms → purchases.  Unknown causes not explicitly shown]
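
Given this graph, here is how the adjusted analysis might look in Python, again on simulated data with invented coefficients: norms enters both regressions as a covariate, satisfying the three conditions above:

```python
import numpy as np

# Simulate the graph above: norms is a common cause of mediator and outcome
rng = np.random.default_rng(2)
n = 10_000
norms = rng.normal(size=n)                                     # covariate W
attitudes = rng.normal(size=n)                                 # predictor X
approach = 0.5 * attitudes + 0.6 * norms + rng.normal(size=n)  # mediator M
purchases = (0.4 * approach + 0.3 * attitudes + 0.7 * norms
             + rng.normal(size=n))                             # outcome Y

# M ~ X + W: the coefficient on attitudes estimates a
a = np.linalg.lstsq(np.column_stack([attitudes, norms]),
                    approach, rcond=None)[0][0]

# Y ~ M + X + W: the coefficient on approach-avoidance estimates b
b = np.linalg.lstsq(np.column_stack([approach, attitudes, norms]),
                    purchases, rcond=None)[0][0]

print(f"indirect effect ab = {a * b:.2f}")  # should be near 0.5 * 0.4 = 0.20
```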

This approach to indirect effects still relies on strong assumptions that must be justified.  If the assumptions do not hold, the inferences fall apart.


Conclusions


Causal questions that involve attributing a causal effect to a particular source involve counterfactual logic.  This counterfactual logic posits the existence of universes that cannot exist.  This means that indirect effects cannot be identified, even from experiments, unless we are willing to make certain, sometimes quite strong, assumptions.  Causal graphs combined with the back-door criterion can help us understand when our assumptions allow the indirect effect to be identified.


References

Bem, D. (1972). Self-perception theory. In Advances in Experimental Social Psychology (Vol. 6, pp. 1–62). Elsevier.

Cooper, J., & Fazio, R. (1984). A new look at dissonance theory. In Advances in Experimental Social Psychology (Vol. 17, pp. 229–266). Elsevier.

Forscher, P. S., & Devine, P. G. (2014). Breaking the prejudice habit: Automaticity and control in the context of a long-term goal. In J. Sherman, B. Gawronski, & Y. Trope (Eds.), Dual process theories of the social mind, 468-482.

Inzlicht, M., & Schmeichel, B. J. (2012). What Is Ego Depletion? Toward a Mechanistic Revision of the Resource Model of Self-Control. Perspectives on Psychological Science, 7, 450–463.

Lai, C. K., Hoffman, K. M., & Nosek, B. A. (2013). Reducing implicit prejudice. Social and Personality Psychology Compass, 7, 315–330.

Pearl, J. (2000). Causality: Models, reasoning, and inference. Cambridge, UK: Cambridge University Press.

Pearl, J. (2009). Causal inference in statistics: An overview. Statistics Surveys, 3, 96–146.

Pearl, J. (2001). Direct and indirect effects. In Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence (pp. 411–420). Morgan Kaufmann.

Pearl, J. (2012). Interpretable conditions for identifying direct and indirect effects (No. TR-R-389). UCLA Department of Computer Science.

2 comments:

  1. Nice post! Will probably require a couple of re-reads to digest fully.

    Just wanted to point interested readers to the paper you also mention in your mediation intuitions-post* (from the Everything is Fucked syllabus): www2.psych.ubc.ca/~schaller/528Readings/BullockGreenHa2010.pdf

    I found it highly disturbing, and still wonder if all these mediation analyses we commonly see are just cargo cult keystrokes.

    How do you read papers that used mediation analysis, and what determines whether they influence your beliefs?

    * https://persistentastonishment.blogspot.fi/2017/02/improving-intuitions-about-mediation.html

    Replies
    1. Yes, I was also very disturbed by that paper!

      The main assumption that the Bullock paper points out can, I believe, be understood as no correlation between the errors of M and Y (no M⟷Y path). Bullock and colleagues are exactly right that this assumption can never be verified from a given set of data, as with any causal assumption.

      Bullock and colleagues recommend directly manipulating the mediator to identify the mediation effect. What Pearl points out is that even manipulating the mediator is insufficient for identification, since you need to know the value that the mediator takes when X is manipulated to its control and experimental values. This is the issue that I describe in my "impossibility" section. I think the only way forward is to lay out the assumptions that are required to identify a mediation effect in a particular study and defend them, either by reference to outside data (which, perhaps, you can collect yourself) or through other scientific knowledge. This is, of course, not common practice -- I'm sure you have seen just as many papers as I have where the authors assume that because their statistical packages have spit out a mediation effect, that mediation effect must reflect something true about reality.

      Bullock and colleagues also discuss what they call "causal heterogeneity", which is the issue that some, but not all, people may respond to a treatment and therefore show evidence of mediation. I believe this is an issue mostly in the context of mediation analysis using linear models, since linear models assume that relationships are the same for all pairs of variables (unless you add interaction terms). Because Pearl's Structural Causal Model is nonparametric (http://persistentastonishment.blogspot.com/2017/03/causal-inference-4-its-use-in-psychology.html) it doesn't make any assumptions about linearity. In principle, you can model any sort of relationships you wish in the SCM. Of course, this is easier said than done. :) But I think the issue of confounding between the X, M, and Y variables is harder to overcome than the causal heterogeneity issue.

      I myself am certainly much more skeptical about mediation analyses. I don't think there's anything inherently wrong about the statistical procedure itself -- it just rests on assumptions that need to be justified (or the authors need to openly acknowledge that their conclusions rest on shaky premises).
