## Wednesday, March 22, 2017

Over the past week, I have written a series of posts on contemporary advances in causal inference. The series has covered a lot of ground, ranging from the foundations of causal inference (part 1), to adjustment for confounding (part 2), to causal inference for mediation designs (part 3). The wide-ranging nature of my review reflects the enormous progress of the past few decades of work in this area.

In the process of writing these posts, I have been struck by two observations. First, formal causal inference could have enormous benefits for psychological science. Second, despite these apparent benefits, formal causal inference is largely absent from the field.

I have detailed some of the benefits of causal inference in my first three posts. However, I wanted to spend a final post reflecting on these benefits. I will also describe what I perceive to be the main obstacles to adopting causal inference in psychological research.

## Tuesday, March 21, 2017

### Causal inference (3): Mediation and counterfactuals

This is the third post in a series that explores recent advances in causal inference, particularly those stemming from Judea Pearl's Structural Causal Model (Pearl, 2000; 2009). My first post defines causality in terms of actions and describes how this definition imposes a bright line between causal and associative concepts that can only be bridged through assumptions. My second post extends these ideas to a multiple-variable context and describes the assumptions required to identify causal effects in the presence of spurious causes (confounds).

In this third post, I will explore these ideas in the context of a particular kind of multiple-variable design: mediation designs.

## Saturday, March 18, 2017

### Causal inference (2): Confounding and adjustment

In my last post, I reviewed, in a non math-y way, Judea Pearl's definition of causality in terms of action: setting a variable from one value to another while leaving other variables in the system constant (Pearl, 2000; 2009). Defining causality in terms of action implies that causality is different from association, which means that the concepts of association, such as correlation, regression, and adjustment, can never, by themselves, establish causality. We can only identify a particular causal relationship by making assumptions, and our causal inferences are only as good as our justifications for these assumptions.
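To make the seeing/doing distinction concrete, here is a small NumPy simulation (my own toy example, not from Pearl or the original post) of a system in which $X$ has no causal effect on $Y$ at all, yet the two are correlated through a shared cause $U$. Setting $X$ by intervention cuts the arrow from $U$ to $X$, and the association vanishes:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# A system where X has NO causal effect on Y, but both share a cause U.
u = rng.normal(size=n)
x_obs = u + rng.normal(size=n)          # X "listens to" U
y_obs = u + rng.normal(size=n)          # Y also listens to U, not to X

# Association: observing X is informative about Y (confounded).
r_obs = np.corrcoef(x_obs, y_obs)[0, 1]

# Action: do(X = x) severs the arrow U -> X, so Y no longer tracks X.
x_do = rng.normal(size=n)               # X set by intervention, ignoring U
y_do = u + rng.normal(size=n)           # Y's own mechanism is unchanged
r_do = np.corrcoef(x_do, y_do)[0, 1]

print(f"corr under observation:  {r_obs:.2f}")   # about 0.5
print(f"corr under intervention: {r_do:.2f}")    # about 0.0
```

The nonzero observational correlation and the near-zero interventional one come from the same structural equations; only the meaning of "varying $X$" has changed.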

In this post, I will extend these ideas to multiple-variable designs and explain how we can estimate causal effects even in the presence of spurious causal influences.
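As a sketch of what such an estimate can look like in practice (a toy linear model of my own, not from the post): regressing $Y$ on $X$ alone absorbs the spurious path through a confounder $Z$, while also conditioning on $Z$ recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Structural model: Z confounds X and Y; the true effect of X on Y is 2.
z = rng.normal(size=n)
x = 1.5 * z + rng.normal(size=n)
y = 2.0 * x + 3.0 * z + rng.normal(size=n)

# Naive slope of Y on X ignores the spurious path X <- Z -> Y.
naive = np.polyfit(x, y, 1)[0]

# Adjusted estimate: regress Y on X and Z jointly, blocking that path.
design = np.column_stack([x, z, np.ones(n)])
adjusted = np.linalg.lstsq(design, y, rcond=None)[0][0]

print(f"naive slope:    {naive:.2f}")     # biased upward (~3.4)
print(f"adjusted slope: {adjusted:.2f}")  # ~2.0, the true causal effect
```

Crucially, the adjusted estimate is causal only because, in this simulation, we know $Z$ is the sole spurious influence; with real data, that is an assumption to be defended, not a fact the regression can establish.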

## Thursday, March 16, 2017

### Causal inference (1): Two-variable designs

Inspired by this project and this wonderful post by Julia Rohrer, I have been reading a lot about causal inference. Although causality lies at the heart of science, I personally received very little formal training about causal inference beyond admonishments to use randomized experiments. As it turns out, mathematicians and statisticians have made enormous strides in this area over the past few decades. Of particular importance is Judea Pearl's Structural Causal Model (Pearl, 1995; 2000; 2009), which unifies previous approaches to causality, such as Structural Equation Modeling, potential outcomes, and sufficient causes.

## Thursday, February 9, 2017

### Improving intuitions about mediation models

As part of preparing a revision for this project, I have done a lot of reading and thinking about statistical mediation models. These models are often used when you wish to find the reason a manipulation exerts its impact on another variable -- the mechanism for the effect.

As I have described elsewhere, let's say you have a predictor variable $X$, an outcome $Y$, and a variable $M$ that you think transmits the impact of $X$ on $Y$. You can think of this situation as a diagram in which $X$ affects $Y$ both directly and indirectly through $M$.
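To make that structure concrete, here is a toy linear simulation (coefficients invented for illustration). The $X \to M$ path $a$ and the $M \to Y$ path $b$ multiply to give the indirect effect; reading $a \times b$ causally assumes, among other things, no unmeasured confounding of the $M$–$Y$ relationship:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# X -> M -> Y with a direct X -> Y path: a = 0.5, b = 0.8, direct c' = 0.3
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)
y = 0.8 * m + 0.3 * x + rng.normal(size=n)

# Path a: regress M on X.
a_hat = np.polyfit(x, m, 1)[0]

# Paths b and c': regress Y on M and X together.
design = np.column_stack([m, x, np.ones(n)])
b_hat, c_hat, _ = np.linalg.lstsq(design, y, rcond=None)[0]

print(f"a = {a_hat:.2f}, b = {b_hat:.2f}, c' = {c_hat:.2f}")
print(f"indirect effect a*b = {a_hat * b_hat:.2f}")    # ~0.40
print(f"total effect a*b + c' = {a_hat * b_hat + c_hat:.2f}")  # ~0.70
```

Here the regressions recover the generating coefficients because the simulation builds in every assumption the mediation model needs; the later posts in this series discuss what happens when those assumptions fail.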
