Although the commentary does raise some good points (it notes that some of the replications in the reproducibility project depart from the original studies in ways that are likely problematic), I also think it's easy to lose sight of the broader context when critiquing a single project. (For those interested, there may also be some problems with the basic claims of the critique.)
You see, the conversation in psychology about reproducibility has a longer history than 2015. Indeed, there were rumblings of this conversation way back in 1962, when Jacob Cohen published a review of the power of studies from the Journal of Abnormal and Social Psychology. His conclusion was that the typical study was woefully underpowered, having "one chance in five or six of detecting small effects" and a 50-60% chance of detecting medium effects. He went on to remark that "it seems obvious that investigators are less likely to submit for publication unsuccessful than successful research", resulting in a literature that overstates the evidence for its conclusions. This is a remarkably "modern" conclusion.
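To see how stark Cohen's numbers are, here is a small Monte Carlo sketch (my own illustration, not Cohen's calculation; the 30 participants per group is a hypothetical stand-in for a typical study of the era). With a small standardized effect (d = 0.2), a two-group comparison almost never reaches significance, while a medium effect (d = 0.5) is caught only about half the time.

```python
import numpy as np

def simulated_power(d=0.2, n=30, crit=1.96, trials=20000, seed=0):
    """Estimate the power of a two-sample comparison by simulation.

    d      -- true standardized effect size (Cohen's d)
    n      -- participants per group (hypothetical, for illustration)
    crit   -- critical value; 1.96 is a normal approximation to the t cutoff
    """
    rng = np.random.default_rng(seed)
    a = rng.normal(0.0, 1.0, (trials, n))   # control groups
    b = rng.normal(d, 1.0, (trials, n))     # treatment groups, shifted by d
    # Welch-style t statistic for each simulated experiment
    t = (b.mean(axis=1) - a.mean(axis=1)) / np.sqrt(
        a.var(axis=1, ddof=1) / n + b.var(axis=1, ddof=1) / n)
    return float(np.mean(np.abs(t) > crit))

print(simulated_power(d=0.2))  # small effect: typically around 0.10-0.15
print(simulated_power(d=0.5))  # medium effect: typically around 0.45-0.55
```

Under these (assumed) conditions the small effect is detected barely more often than chance alone would suggest, roughly in line with Cohen's "one chance in five or six", and the medium effect lands near his 50-60% figure.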
Similarly, in 1975, Tony Greenwald noted that psychologists are "prejudiced against the null hypothesis", with potentially far-reaching consequences, including an accumulation of true null findings consigned to the file drawer and an accumulation of published false positives. In a similar vein, in 1978 Paul Meehl noted that the theories on which psychological data are based are "scientifically unimpressive and technologically worthless", and that psychology lacks the cumulative character of the harder sciences. Meehl identified at least 20 potential causes of psychology's non-cumulative character, including ambiguity in measurement and experimental design, a large number of potential relationships between variables, and so on.
Much later, in 2005, John Ioannidis published a series of analyses showing that, in fields with small samples, small effects, high flexibility in design, and a large number of potential relationships, most published research findings will be false. Ioannidis was commenting on medicine rather than psychology, but, as we can see from the comments of Cohen, Greenwald, Meehl, and many others, his analysis applies just as strongly to psychology.
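Ioannidis's core argument reduces to a short calculation. The sketch below is my own minimal, bias-free rendering of his positive predictive value argument (his paper also models bias and multiple competing teams): when power is low and only a small fraction of tested hypotheses are true, most significant results are false positives.

```python
def ppv(power, alpha, prior):
    """Positive predictive value: P(effect is real | test is significant).

    power -- probability of detecting a true effect (1 - beta)
    alpha -- false-positive rate of the test
    prior -- proportion of tested relationships that are actually true
    """
    true_hits = prior * power          # true effects that reach significance
    false_hits = (1 - prior) * alpha   # null effects that reach significance
    return true_hits / (true_hits + false_hits)

# Well-powered test of a plausible hypothesis: most positives are real.
print(round(ppv(power=0.8, alpha=0.05, prior=0.5), 3))  # -> 0.941
# Underpowered test of a long-shot hypothesis: most positives are false.
print(round(ppv(power=0.2, alpha=0.05, prior=0.1), 3))  # -> 0.308
```

The second scenario, with assumed values in the ballpark of Cohen's power estimates, is exactly the regime in which "most published research findings are false", before publication bias makes things worse.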
Even in this very selective review, one can see many strands of evidence, accumulated over decades, indicating that all is not well with business-as-usual in psychology research. Of course, these issues received a huge surge of attention with the publication of the Reproducibility Project: Psychology (RPP); indeed, one of the great virtues of that project is that it has brought the reproducibility conversation to the forefront of people's minds.
Thus, even if the RPP's results turn out to be flawed, the other sources of evidence pointing to a reproducibility problem still stand. Reproducibility is more than the RPP, and we should remember that when assessing this commentary.
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. http://doi.org/10.1126/science.aac4716
Gilbert, D. T., King, G., Pettigrew, S., & Wilson, T. D. (2016). Comment on “Estimating the reproducibility of psychological science.” Science, 351(6277), 1037. http://doi.org/10.1126/science.aad7243
Cohen, J. (1962). The statistical power of abnormal-social psychological research: A review. The Journal of Abnormal and Social Psychology, 65(3), 145–153. http://doi.org/10.1037/h0045186
Greenwald, A. G. (1975). Consequences of prejudice against the null hypothesis. Psychological Bulletin, 82(1), 1–20. http://doi.org/10.1037/h0076157
Meehl, P. E. (1978). Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology. Journal of Consulting and Clinical Psychology, 46(4), 806–834. http://doi.org/10.1037/0022-006X.46.4.806
Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124. http://doi.org/10.1371/journal.pmed.0020124