Tuesday, September 20, 2011

Diederik Stapel and the frequency of scientific shenanigans

On August 27, two junior researchers working with the Dutch social psychologist Diederik Stapel at Tilburg University contacted a university administrator with suspicions that their senior colleague was using faked data.  Data fabrication is one of the worst forms of academic shenanigans falling under the broad umbrella of "academic misconduct", so the allegation was quite serious.  This was especially true because Diederik Stapel was in the early stages of a prolific scientific career: he served on the editorial boards of six different academic journals and had received the 2007 "Early Career Award" from the International Society for Self and Identity (ISSI).  He had also published many articles that received generous press attention, including one in Science claiming that messy environments promote discrimination.

Nevertheless, a little over a week and one university investigation later, Stapel admitted to making up data and was sacked by Tilburg University.

The revelation that Stapel committed data fabrication has sent shockwaves through academic psychology.  Beyond tarnishing Stapel's own work, the news also threatens to tarnish both the work of his colleagues and the journals with which Stapel was affiliated.  This has led to vociferous distancing by Stapel's colleagues and to the disappearance of all mentions of Stapel (outside the news of his sacking) from both the Tilburg University website and the ISSI website, as shown below:

Selected Tilburg University professor editorial activities,
before and just after Stapel admitted to data fabrication.
"Before" image taken from Google cache on 9/19/2011

ISSI Early Career Award recipients before and just after Stapel admitted to data fabrication.
"Before" image taken from Google cache on 9/19/2011

Stapel's swift diminution in internet presence illustrates the "contagion principle" of academic misconduct: when misconduct is uncovered, it threatens not only the credibility of the scientist who committed it, but also the credibility of everyone associated with that scientist.  Of course, the contagion principle is quite well-founded: fundamentally, science rests on the trust of the public and of the greater scientific community.

If Stapel were the only "bad apple" in a basket of otherwise delicious scientific fruit, the contagion principle might apply in only a limited way, to Stapel's immediate colleagues.  Unfortunately, Stapel is not alone in his shenanigans.  Just a year ago, the prominent psychology researcher Marc Hauser also admitted to misconduct and was eventually forced to resign his post at Harvard.  By themselves, even two cases might not spoil psychology's credibility, but because psychology is typically not taken as seriously as other sciences (perhaps with good reason), it simply does not enjoy a large margin for error.

This leads to an obvious (but no less important) question:  how frequent is data fabrication (and other forms of misconduct) in psychology?

This is a difficult question to answer.  The most cost-effective way to obtain an answer is through a random-sample survey, but surveys have the fundamental problem that research scientists are probably pretty damn unwilling to admit that they have made up data, even under conditions of anonymity.  Researchers can partially address this problem by comparing admission rates of personal misconduct to admission rates of witnessing the misconduct of others (though reports of witnessing misconduct might themselves be biased by factors such as professional jealousy).  Unfortunately, no such survey of research psychologists has been conducted, so the best we can do for now is draw inferences from surveys conducted in other sciences.

Enter a paper in PLoS ONE by Daniele Fanelli.  This paper aggregates the results from 18 anonymous academic misconduct surveys of randomly sampled research scientists.  It also compares the results of surveys that asked for admission of personal academic misconduct to the results of surveys that asked for admission of witnessing the misconduct of others, allowing one to indirectly assess the extent to which surveys that ask about personal misconduct are tainted by reporting bias.

The Fanelli paper is full of interesting insights.  The first that I'll highlight is from Figure 2, which shows the rate of personal admission of making up data (the worst form of misconduct) from the various surveys analyzed by the paper, as well as the overall mean estimate across the surveys.  The overall estimated rate of admission might seem low (1.97%), but remember that this is the rate of admission for the most severe form of misconduct. The estimated rate of admission when other, less severe forms of misconduct were included was much higher (9.54%).

Figure 2 from Fanelli (2009). Admission rates for personally
committing data fabrication.  Lines are 95% confidence intervals.

Compare the results regarding personal admission of data fabrication to the results regarding witnessing others commit data fabrication (below).   Here the estimated rate is 14.12%; this jumped to 28.53% when other, less severe forms of misconduct were included in the estimate.  Scary.

Figure 4 from Fanelli (2009).  Admission rates for witnessing others
commit data fabrication.  Lines are 95% confidence intervals.
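The confidence intervals in these figures are intervals around survey proportions.  As a rough illustration of how such an interval behaves (the function and the example counts below are my own hypothetical sketch, not numbers from the Fanelli paper), here is a 95% Wilson score interval for a single survey's admission rate:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion.

    More reliable than the naive normal approximation when the
    proportion is small, as admission rates here are.
    """
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical survey: 4 of 200 respondents admit to fabrication (2%).
low, high = wilson_ci(4, 200)
print(f"{low:.3f} -- {high:.3f}")  # roughly 0.008 -- 0.050
```

Note how wide the interval is relative to the 2% point estimate: with rare behaviors and modest sample sizes, individual surveys are noisy, which is exactly why pooling 18 of them, as Fanelli does, is valuable.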

What are the implications of these data for scientific psychology?  First, a large psychological society needs to step up by conducting its own randomized survey of research psychologists.  As I mentioned above, an academic misconduct survey of psychologists has yet to be conducted, which leaves both research psychologists and the public in the dark about the extent of the misconduct problem.  Second, if the problem is widespread, research psychologists need to do a better job of policing themselves.  If they don't, they will either be policed from the outside or, more likely, simply defunded.


Reference:

Fanelli, D. (2009). How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data. PLoS ONE 4(5): e5738. doi:10.1371/journal.pone.0005738

Comments:

  1. Thanks for writing this article. I just heard about Stapel and have been reading up on the problem.

    There's some irony in your article. You suggest that the ISSI is trying to distance itself from Stapel by removing his name from the list of "Early Career Award Recipients", as if it is okay to delete inconvenient facts. If this is their attitude towards the historical record, then they have no place overseeing the integrity of any field of science.

    Do you know if they made any official announcement of this award being revoked? I feel like some sort of note should be on this page of recipients. Instead, it feels like a whitewash.

