Thursday, February 11, 2016

Effect stability: (2) Simple mediation designs

In my last post, I described how a significant estimate need not be close to its population value, and how a clever simulation method developed by Schönbrodt and Perugini (2013) can be used to estimate the sample size required for an estimate to stabilize.

Schönbrodt and Perugini's method defines a point of stability (POS), a sample size beyond which one is reasonably confident that an estimate is within a specified range (labeled the corridor of stability, or COS) of its population value.  For more details on how the point of stability is estimated, you can read either my previous post or Schönbrodt and Perugini's paper.

By adapting Schönbrodt and Perugini's freely available source code, I found that, in two-group, three-group, and interaction designs, statistical stability generally requires sample sizes of around 150-250.  In this post, I will apply the same method to simple mediation designs.
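To give a flavor of what that entails, below is a minimal Python sketch of a POS simulation for a simple mediation model (X -> M -> Y), in which the quantity being stabilized is the indirect effect $ab$.  The path coefficients, the corridor half-width, and the 80% criterion are illustrative assumptions on my part, not the values used in the analyses here:

```python
# Minimal point-of-stability (POS) sketch for a simple mediation model.
# Assumed for illustration: population paths a = b = .4, c' = .2, a COS
# half-width of .10, and an 80% confidence criterion.
import numpy as np

rng = np.random.default_rng(1)

a, b, c = 0.4, 0.4, 0.2       # population paths; indirect effect = a*b
ab_pop = a * b
w = 0.10                      # half-width of the corridor of stability
n_min, n_max = 20, 500        # sample sizes at which the estimate is tracked
n_traj = 500                  # number of simulated trajectories

def indirect_effect(x, m, y):
    """OLS indirect effect: (slope of m on x) * (slope of y on m, given x)."""
    a_hat = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), m, x])
    b_hat = np.linalg.lstsq(design, y, rcond=None)[0][1]
    return a_hat * b_hat

pos = np.empty(n_traj, dtype=int)
for t in range(n_traj):
    x = rng.standard_normal(n_max)
    m = a * x + rng.standard_normal(n_max)
    y = b * m + c * x + rng.standard_normal(n_max)
    # A trajectory's POS is one past the last n at which its estimate
    # strays outside the corridor ab_pop +/- w.
    last_break = n_min - 1
    for n in range(n_min, n_max + 1):
        if abs(indirect_effect(x[:n], m[:n], y[:n]) - ab_pop) > w:
            last_break = n
    pos[t] = last_break + 1

# Sample size by which 80% of trajectories have settled into the corridor
print(int(np.percentile(pos, 80)))
```

This brute-force version refits the model at every sample size, so it is slow; the freely available code mentioned above is considerably more refined.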

Friday, February 5, 2016

Effect stability: (1) Two-group, three-group, and interaction designs

When planning the sample size needed to estimate a population parameter, most psychology researchers choose a size that allows them to infer that the parameter is non-zero -- in other words, researchers attempt to maximize statistical significance.  However, both practical and scientific interest often centers on whether the estimate is good or stable -- that is, close to its population parameter.

These two criteria, significance and stability, are not the same.  Indeed, with a sample size of 20, a correlation of $r$=.58, though clearly significant ($p$=.007), could correspond to a population correlation anywhere between .18 and .81 (its 95% confidence interval).
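To see where those numbers come from, here is a short Python check (assuming NumPy and SciPy are available) of the $p$-value, via the usual $t$ test, and of the plausible range, via a 95% Fisher-$z$ confidence interval:

```python
# Check the claim above: for r = .58 and n = 20, compute the two-tailed
# p-value and the Fisher-z 95% confidence interval for the correlation.
import numpy as np
from scipy import stats

r, n = 0.58, 20

# Two-tailed p-value from the t statistic for a correlation
t = r * np.sqrt(n - 2) / np.sqrt(1 - r**2)
p = 2 * stats.t.sf(t, df=n - 2)

# 95% CI via the Fisher z transform, back-transformed to the r scale
z = np.arctanh(r)
se = 1 / np.sqrt(n - 3)
lo, hi = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)

print(f"p = {p:.3f}, 95% CI = [{lo:.2f}, {hi:.2f}]")  # p = 0.007, CI = [0.18, 0.81]
```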

Tuesday, August 11, 2015

Reviewing peer review (and its flaws)

Peer review is viewed as the arbiter of good science.  In fact, passing peer review is typically a prerequisite for professional advancement -- a scientific paper will not be published unless it is judged worthy of publication by one or more peers, and likewise a grant will not be awarded unless a group of scientific peers judges the proposal to be of sufficiently high quality.  I would argue that, because so many means of professional advancement are conditional on satisfying reviewers, doing so is one of the most important tasks a career scientist faces.

Because of its importance to their careers, scientists have plenty of opinions about peer review.  However, peer review is not often taken as an object of scientific study.  My goal in this post, then, is to conduct a short review of peer review.  I will structure my discussion around the following three questions, after which I will give some concluding thoughts:
  1. What are the goals of peer review?
  2. What are the costs of peer review?  Who bears these costs?
  3. What are the benefits of peer review?  Who reaps these benefits?

Monday, July 27, 2015

Median publication delays at 38 APA journals

Last week, I had a paper accepted at the Journal of Personality and Social Psychology (JPSP).  This acceptance is good for me, as JPSP is one of the more prestigious journals in my field.  However, given how grueling the review process was, it's hard for me to feel happy about this acceptance -- based on my records, this paper spent about 17 months in review, and so far it has only been accepted, not published.

Of course, I am far from the only person whose paper has spent a long time in the limbo between acceptance and publication.  In fact, based on two analyses of papers in PubMed, this experience seems distressingly common.  For example, Steve Royle found that papers submitted to cell biology journals in 2013 and indexed by PubMed took about 100 days to go from receipt to acceptance and another 120 days from acceptance to publication, for a total of 220 days.  In another analysis, Daniel Himmelstein examined the time between acceptance and publication for 3,476 journals indexed by PubMed in 2014.  I didn't see an overall median lag time, but most of the lags seem to fall between 50 and 60 days.

Both of these analyses focus primarily on biology journals, and primarily on journals indexed by PubMed.  For example, if you search for "J Pers Soc Psychol", the PubMed abbreviation for JPSP, on the Himmelstein site, you will not find the journal listed -- possibly because JPSP does not report each article's receipt and acceptance dates to PubMed.  This leads me to my question: Do the Royle and Himmelstein analyses reflect the typical delays at psychology journals?
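For concreteness, the lag computation in analyses like these boils down to taking medians of date differences.  Here is a short Python sketch; the three records below are hypothetical placeholders, not data from any real journal:

```python
# Median review and publication lags from per-article milestone dates.
# The records are hypothetical examples for illustration only.
from datetime import date
from statistics import median

articles = [
    # (received, accepted, published)
    (date(2014, 1, 10), date(2014, 4, 25), date(2014, 6, 20)),
    (date(2014, 2, 3),  date(2014, 5, 14), date(2014, 7, 1)),
    (date(2014, 3, 22), date(2014, 7, 2),  date(2014, 8, 30)),
]

review_lags  = [(acc - rec).days for rec, acc, pub in articles]
publish_lags = [(pub - acc).days for rec, acc, pub in articles]

print("median receipt-to-acceptance lag (days):", median(review_lags))
print("median acceptance-to-publication lag (days):", median(publish_lags))
```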

Tuesday, July 14, 2015

Mapping "prejudice" research reveals its preoccupation with implicit bias

One of the many difficulties of doing social science is that the concepts that we study are often fuzzy.  Precisely defining concepts like "attitudes", "cognition", and the "self" can be challenging, which sometimes leads to dramatic differences in how scientists use the terms.

These challenges are only compounded when the object of study is a politically charged concept, as is true of my own chosen topic, prejudice.  I believe this fuzziness in the definition of "prejudice" has exerted a distorting influence on research on the topic, affecting the questions researchers ask, the measures they use, and the interventions they develop.

Today, I'm going to focus on a small piece of this issue by answering the following questions:
  1. When contemporary researchers choose to study "prejudice", how do they use the term?
  2. What does contemporary researchers' use of the term "prejudice" reveal about their (often unstated) definitions of the term?