Tuesday, June 16, 2015

Idealized vs actual psychological science

I have been reading recently about the philosophy of science, which has got me thinking about the scientific method, both as it's taught in most psychology classes and as it's commonly practiced in psychology.  This thinking has led me to the following conclusion: the version of the scientific method that is usually taught in psychology classes is a farce, to the detriment of the science as a whole.  Let me explain.

Differences between psychology's ideal and actual scientific method


In most psychology classes, the scientific method is explained as follows:

[Figure: flowchart of the idealized (confirmatory) scientific method]


This idealized version of science (which I will call confirmatory science) starts with a theory, which the scientist uses to derive predictions through deductive logic.  The scientist then figures out a method to test the theory-derived predictions.  Using the method, the scientist gathers data and compares the data to the predictions, usually using statistical analysis.  Based on the comparison of predictions to data, the scientist either increases his or her confidence in the theory or revises the theory.  Science complete.
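
To make the final comparison step concrete, here is a minimal sketch in Python (the two-group design, group labels, and effect size are all invented for illustration) of testing a theory-derived, directional prediction with a statistical test:

    # Toy version of the "compare data to predictions" step.
    # The design and all numbers are hypothetical.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Prediction (from theory): the treatment group scores higher than control.
    control = rng.normal(loc=50, scale=10, size=40)
    treatment = rng.normal(loc=55, scale=10, size=40)  # simulate a true effect

    t, p_two_tailed = stats.ttest_ind(treatment, control)
    # Convert to a one-tailed p, since the prediction is directional.
    p_one_tailed = p_two_tailed / 2 if t > 0 else 1 - p_two_tailed / 2
    print(f"t = {t:.2f}, one-tailed p = {p_one_tailed:.3f}")
    # Small p: confidence in the theory increases; otherwise, revise the theory.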

The trouble is that, at least in my experience, psychological science usually doesn't work this way.  Instead, it works something like the following, which I will call actual science (remember, of course, I'm just making claims about psychology):

[Figure: flowchart of the scientific method as actually practiced in psychology]

As in confirmatory science, actual science starts with a prediction.  However, the source of this prediction is not usually theory, but instead a hunch, real-world observation, clinical observation, or some other extra-theoretical means.

The scientist then develops a method to test the prediction, gathers data based on the method, and compares the data to the prediction, usually using statistical analysis.  If the data are consistent with the prediction, the scientist develops a theory that logically implies the original prediction as a consequence of its premises.

On the other hand, if the data are inconsistent with the prediction, the scientist has to figure out what went wrong.  Usually (but not always), the scientist concludes that the method was flawed rather than concluding that the prediction was incorrect.

What do we make of the discrepancies between these two versions of science?

First, it is useful to make the following observations about confirmatory science:

(1) Confirmatory science assumes that a strong theory already exists that can be used to derive one's predictions
(2) Confirmatory science views deductive logic as the "best" way of deriving predictions
(3) Confirmatory science assumes that the method used to test one's predictions is relatively free of error

Unfortunately, psychology has few pre-existing theories that are strong enough to serve as a logical basis to deduce predictions.  Moreover, most methods in psychology are quite error-prone, creating a logical ambiguity whereby it is difficult to know whether results that are inconsistent with predictions were obtained because the prediction was wrong or because of method error.
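
To see this ambiguity concretely, the following simulation (Python, with invented parameters) compares two failure modes: a false prediction tested with a valid measure, and a true prediction tested with a nearly invalid one. Both reach "significance" at close to chance rates, so a disappointing result cannot, by itself, tell the scientist which problem occurred:

    # Two indistinguishable failure modes: wrong prediction vs. bad method.
    # All parameters are invented for illustration.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def significance_rate(true_effect, validity, n=60, reps=2000):
        """Share of simulated two-group studies reaching p < .05.
        `validity` is the correlation between construct and measure."""
        hits = 0
        for _ in range(reps):
            construct_a = rng.normal(true_effect, 1, n)
            construct_b = rng.normal(0, 1, n)
            noise = np.sqrt(1 - validity ** 2)
            measure_a = validity * construct_a + noise * rng.normal(0, 1, n)
            measure_b = validity * construct_b + noise * rng.normal(0, 1, n)
            hits += stats.ttest_ind(measure_a, measure_b).pvalue < 0.05
        return hits / reps

    # Wrong prediction, valid measure: significant ~5% of the time (chance).
    print(significance_rate(true_effect=0.0, validity=0.9))
    # Correct prediction, nearly invalid measure: also barely above chance.
    print(significance_rate(true_effect=0.6, validity=0.1))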

The difficulty of confirmatory science in psychology leads me to make the following claim:

On its own, confirmatory science is not a good way of doing psychology

In fact, I believe the overemphasis on confirmatory science in psychology has these specific negative consequences:

(1) Confirmatory science discourages scientists from acknowledging the true sources of their ideas.  Based on my own experience, many ideas in psychology come from sources other than theory.  However, because psychology journals enshrine the deductive, confirmatory scientific method as the "best" method, scientists are encouraged to present their ideas as if they were derived from theory.  This is a farce.  Let's acknowledge the true sources of our ideas so that we can focus on building theories that might eventually serve as a better basis for prediction.

(2) Confirmatory science does not acknowledge the role of method error in psychology.  Our methods for studying people are relatively prone to error.  It is important for psychologists to explicitly recognize this fact so that we can properly interpret whether unexpected results are due to poor methods or false premises.

(3) Confirmatory science prioritizes making predictions in new data at the expense of explaining findings in old data.  Explaining findings in old data involves noticing patterns across several studies and developing theory that explains those patterns.  However, confirmatory science does not have an explicit place for theory development -- instead, it prioritizes using theory to logically derive predictions in new situations.

Addressing the non-cumulative character of psychology


In 1978, psychologist Paul Meehl made the following observation about theories in psychology:
Most of them [psychological theories] suffer the same fate that General MacArthur ascribed to old generals -- They never die, they just slowly fade away.
Meehl's observation is just as true now as it was in 1978.

There are many reasons for the non-cumulative character of psychology, and fully exploring these reasons would require a completely new post.  However, I submit that one of the reasons is that psychologists are loath to acknowledge how soft their theories truly are.

Confirmatory science is just one way of doing science, and it is only possible in the presence of strong prior theory.  Let's acknowledge that fact and devote our attention to explicitly developing such strong theories.  Until we do this, I fear that psychological theories are doomed to irrelevance, just like MacArthur's old generals.


References

Meehl, P. E. (1978). Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology. Journal of Consulting and Clinical Psychology, 46, 806–834. https://doi.org/10.1037/0022-006X.46.4.806

2 comments:

  1. To me, it seems that points (2) and (3) might be in disagreement with each other: if the data are noisy, should we be explaining findings before making sure they are not due to method error? This, among other things, would mean replication – i.e., predicting that the same pattern appears in new data, right?

    I'd be interested to hear your thoughts about the recent paper "Choosing prediction over explanation in psychology"! [pilab.psy.utexas.edu/publications/Yarkoni_Westfall_PPS_in_press.pdf]

    Replies
    1. I admit that the "Choosing prediction" paper has been on my reading list for some time! Maybe this will be a good excuse to finally get around to reading it. :)

      With regard to your first comment, I am defining "method error" to encompass both systematic and non-systematic error. Non-systematic error is less of a problem with large samples. Systematic method error (e.g., measures that are invalid because they do not tap the desired construct) is a huge problem even with unlimited samples.
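
      A quick simulated sketch of that distinction (Python, all numbers invented): with only random measurement noise, an estimated group difference converges on the true value as the sample grows, whereas an invalid measure that mostly taps a different construct stays biased no matter how large the sample gets.

          # Random error washes out with sample size; systematic error does not.
          # All numbers are invented for illustration.
          import numpy as np

          rng = np.random.default_rng(2)
          true_difference = 0.5

          for n in (50, 5000, 500000):
              group_a = rng.normal(true_difference, 1, n)
              group_b = rng.normal(0, 1, n)

              # Non-systematic error: noisy but unbiased readings of the construct.
              noisy_diff = ((group_a + rng.normal(0, 2, n)).mean()
                            - (group_b + rng.normal(0, 2, n)).mean())
              # Systematic error: the "measure" mostly taps some other construct.
              invalid_diff = ((0.2 * group_a + rng.normal(0, 1, n)).mean()
                              - (0.2 * group_b + rng.normal(0, 1, n)).mean())
              print(f"n={n}: noisy {noisy_diff:.2f}, invalid {invalid_diff:.2f}")

          # The noisy estimate approaches 0.5; the invalid one approaches 0.1.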

      Another way of looking at the criticism that I tried to articulate in this post is that psychology has a huge problem with not developing complete models of whether and how the underlying constructs relate to the measures/methods we use to assess them. This is what creates ambiguity when our results do not conform to "theory": "theory" here could mean either the scientific theory that we ultimately want to address with our data or the various meta-theories about how our methods allow us to assess our underlying constructs.

      In other words, I'm not arguing against replication at all -- I am arguing that the lack of clear attention to methodological validity adds logical ambiguity that impedes scientific progress.

      See also this paper, "Attack of the Psychometricians" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2779444/). I didn't know about it when I wrote this post, but it makes similar points, I think.
