Monday, June 15, 2015

What we can learn from the LaCour data fabrication incident

Mike LaCour, author of a paper on canvassing that was later retracted
About two weeks ago, news broke that Michael LaCour, the first author of a study about how, purportedly, gay canvassers can successfully improve people's attitudes toward gay men and same-sex marriage initiatives, likely fabricated his data. Although news about fraud is always troubling, this news was particularly troubling -- after all, the study was published in Science, one of the most high-profile journals for scientific research (as the joke goes, the shorter the title, the more prestigious the journal). In addition, the methods of the study appeared to be rigorous, and the findings just "felt good" -- according to the study, brief, 20-minute conversations with canvassers who admitted they were gay created dramatic changes in attitude that persisted up to nine months.

The study received a great deal of media attention. This attention is understandable, as the study's findings made for a great story. In a sense, the study was claiming that all you really need to do to change someone's mind is be earnest and persuasive. If you try hard enough, all you need is 20 minutes of someone's time to have a lasting impact.

However, a great story is not the same as great science. The narrative also goes against the common experience of anyone who has ever argued about a morally charged topic like gay marriage. A great deal of scientific evidence also contradicts the basic findings of LaCour. For example, many classic social psychology studies suggest that people tend to filter evidence through the lens of their preconceived beliefs (Lord, Ross, & Lepper, 1979). Confrontation can occasionally be effective in changing these beliefs, but, contrary to the evidence presented by LaCour, confrontation is most effective when it comes from someone similar to you rather than different from you (Czopp & Monteith, 2003; Gulker, Mark, & Monteith, 2013). In addition, although there is a great deal of evidence that contact with gay men and other stigmatized groups can improve people's attitudes toward those groups, this contact is generally only effective when it occurs over an extended period of time (such as a year as someone's roommate; Shook & Fazio, 2008). A year as someone's roommate is quite different from a 20-minute conversation with a canvasser.

Unfortunately, as reported by Retraction Watch, the study in question was indeed probably too good to be true. David Broockman, a graduate student who was hoping to replicate the results of the original study, obtained a copy of the "raw" data from LaCour and Green and found a large number of irregularities in the data -- enough to prompt him to contact Donald Green, the second author of the original study, and to publish a white paper detailing the irregularities. Donald Green then contacted Science to request that the paper be retracted.

There are always limits to the conclusions we can draw from incidents of fraud. After all, there are likely only a few people who are willing to go to the lengths of LaCour (or Diederik Stapel, the major social science fraudster from several years ago) to fabricate their data. However, I do think that we can learn a few things:

(1) Media coverage is not an indicator of good science. The major journalistic appeal of the original LaCour and Green study was that it made for a feel-good story. The fact that there were prior reasons to disbelieve this feel-good story was not immediately relevant to the reporters who picked up the story. I don't expect science journalists to know everything there is to know about the topics they report on, but in this case, if the reporters in question had contacted any reasonably knowledgeable persuasion scientist, they would likely have discovered some reasons to be skeptical about the results of LaCour and Green. Given other recent evidence of how easily science reporters can be fooled into publishing feel-good (but false) science, I think it's safe to say that widespread media coverage has nothing to do with scientific quality.

(2) Journal prestige is not an indicator of good science. It is probably safe to assume that LaCour and Green was published in Science because the finding was splashy. Unfortunately, the truth is often not splashy. Sometimes the truth is downright mundane and disappointing. Thus, what makes it into the most prestigious journals is not the same as what is true (and there is even some evidence, e.g., Brembs et al., 2013, that what makes it into prestigious journals is less likely to be true).

(3) Scientific openness can help sort the good science from the bad. David Broockman was only able to tabulate the irregularities in LaCour and Green because he was able to obtain a publicly posted dataset from the study. The result is that, in contrast to what happened with Diederik Stapel, LaCour was caught only about a year after he likely fabricated his data. I view this quick turnaround as a major victory for the movement toward scientific openness. Although I still view the publish-or-perish incentive structure that rewards the publication of splashy findings as an obstruction to scientific advancement, it seems that scientific openness can serve as a mitigating factor.
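To make the openness point concrete, here is a minimal, purely hypothetical sketch (in Python) of the kind of sanity check that a publicly posted dataset makes possible: comparing baseline and follow-up attitude measurements and flagging implausibly high test-retest correlations. This is an illustration under assumed variable names and simulated data, not the specific analyses in Broockman's report.

```python
import numpy as np

def test_retest_reliability(baseline, followup):
    """Pearson correlation between baseline and follow-up measurements.

    Real survey attitude measures are noisy, so correlations very close
    to 1.0 across a multi-month gap are a red flag worth investigating.
    """
    baseline = np.asarray(baseline, dtype=float)
    followup = np.asarray(followup, dtype=float)
    return np.corrcoef(baseline, followup)[0, 1]

# Hypothetical example: an honest, noisy panel vs. a "too clean" one.
rng = np.random.default_rng(0)
true_attitude = rng.normal(50, 20, size=1000)          # baseline scores

noisy_followup = true_attitude + rng.normal(0, 15, size=1000)    # plausible
copied_followup = true_attitude + rng.normal(0, 0.5, size=1000)  # suspicious

print(f"plausible panel r = {test_retest_reliability(true_attitude, noisy_followup):.2f}")
print(f"suspicious panel r = {test_retest_reliability(true_attitude, copied_followup):.2f}")
```

The point of the sketch is simply that checks like this require access to the underlying data; without a posted dataset, no outside researcher can run them at all.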


References

Brembs, B., Button, K., & Munafò, M. (2013). Deep impact: Unintended consequences of journal rank. Frontiers in Human Neuroscience, 7. http://doi.org/10.3389/fnhum.2013.00291

Czopp, A. M., & Monteith, M. J. (2003). Confronting prejudice (literally): Reactions to confrontations of racial and gender bias. Personality and Social Psychology Bulletin, 29(4), 532–544. http://doi.org/10.1177/0146167202250923

Gulker, J. E., Mark, A. Y., & Monteith, M. J. (2013). Confronting prejudice: The who, what, and why of confrontation effectiveness. Social Influence, 8(4), 280–293. http://doi.org/10.1080/15534510.2012.736879

LaCour, M. J., & Green, D. P. (2014). When contact changes minds: An experiment on transmission of support for gay equality. Science, 346(6215), 1366–1369. http://doi.org/10.1126/science.1256151

Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37(11), 2098–2109. http://doi.org/10.1037/0022-3514.37.11.2098

Shook, N. J., & Fazio, R. H. (2008). Roommate relationships: A comparison of interracial and same-race living situations. Group Processes & Intergroup Relations, 11(4), 425–437. http://doi.org/10.1177/1368430208095398
