How should journals handle replication studies?

Recently Ben Goldacre wrote about a group of researchers (Stuart Ritchie, Chris French, and Richard Wiseman) whose null replication of 3 experiments from the infamous Bem ESP paper was rejected by JPSP – the same journal that published Bem’s paper.

JPSP is the flagship journal in my field, and I’ve published in it and I’ve reviewed for it, so I’m reasonably familiar with how it ordinarily works. It strives to publish work that is theory-advancing. I haven’t seen the manuscript, but my understanding is that the Ritchie et al. experiments were exact replications (not “replicate and extend” studies). In the usual course of things, I wouldn’t expect JPSP to accept a paper that only reported exact replication studies, even if their results conflicted with the original study.

However, the Bem paper was extraordinary in several ways. I had two slightly different lines of thinking about JPSP’s rejection.

My first thought was that, given the extraordinary nature of the Bem paper, maybe JPSP has a special obligation to go outside of its usual policy. Many scientists think that Bem’s effects are impossible, which is what created the big controversy around the paper. So in this instance, a null replication has a special significance that it usually would not. That would be especially true if the results reported by Ritchie et al. fell outside of the Bem studies’ replication interval (i.e., if they statistically conflicted; I don’t know whether or not that is the case).
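To make that criterion concrete, here is a minimal sketch of one common way such a comparison is formalized. The normal approximation and the notation for the effect estimates and their standard errors are my own illustrative assumptions, not details from the Bem or Ritchie et al. papers:

$$
\hat{d}_{\text{rep}} \;\notin\; \hat{d}_{\text{orig}} \;\pm\; z_{1-\alpha/2}\,\sqrt{SE_{\text{orig}}^{2} + SE_{\text{rep}}^{2}}
$$

Under a check like this, the replication “statistically conflicts” with the original if its effect estimate falls outside that interval; the square-root term widens the interval because both the original estimate and the replication estimate carry sampling error.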

My second line of thinking was slightly different. Some people have suggested that the Bem paper shines a light on shortcomings of our usual criteria for what constitutes good methodology. Tal Yarkoni made this argument very well. In short: the Bem paper was judged by the same standard that other papers are judged by. So the fact that an effect that most of us consider impossible was able to pass that standard should cause us to question the standard, rather than just attacking the paper.

So by that same line of thinking, maybe the rejection of the Ritchie et al. null replication should make us rethink the usual standards for how journals treat replications. Prior to electronic publication, when journal pages were scarce and expensive, the JPSP policy made sense for a flagship journal that strove to be “theory advancing.” But a consequence of that kind of policy is that exact replication studies are undervalued. Since researchers know from the outset that the more prestigious journals won’t publish exact replications, they have little incentive to invest time and energy running them. Replications still get run, but often only if a researcher can think of some novel extension, like a moderator variable or a new condition to compare the old ones to. And then the results might only get published if the extension yields a novel and statistically significant result.

But nowadays, in the era of electronic publication, why couldn’t a journal also publish an online supplement of replication studies? Call it “JPSP: Replication Reports.” It would be a home for all replication attempts of studies originally published in the journal. This would have benefits for individual investigators, for journals, and for the science as a whole.

For individual investigators, it would be an incentive to run and report exact replication studies simply to see if a published effect can be reproduced. The market – that is, hiring and tenure committees – would sort out how much credit to give people for publishing such papers, in relation to the more usual kind. Hopefully it would be greater than zero.

For journals, it would be additional content and added value to users of their online services. Imagine if every time you viewed the full text of a paper, there was a link to a catalog of all replication attempts. In addition to publishing and hosting replication reports, journals could link to replicate-and-extend studies published elsewhere (e.g., as a subset of a “cited by” index). That would be a terrific service to their customers.

For the science, it would be valuable to encourage and document replications better than we currently do. When a researcher looks up an article, they could immediately and easily see how well the effect has survived replication attempts. It would also help us organize information better for meta-analyses and the like. It would help us keep labs and journals honest by tracking phenomena like the notorious decline effect and publication bias. In the short term that might be bad for some journals (I’d guess that journals that focus on novel and groundbreaking research are going to show stronger decline curves). But in the long run, it would be another index (alongside impact factors and the like) of the quality of a journal, one that the better journals should welcome if they really think they’re doing things right. It might even lead to improvement of some of the problems that Tal discussed. If researchers, editors, and publishers knew that failed replications would be tied around the neck of published papers, there would be an incentive to improve quality and close some methodological holes.

Are there downsides that I’m not thinking of? Probably. Would there be barriers to adopting this? Almost certainly. (At a minimum, nobody likes change.) Is this a good idea? A terrible idea? Tell me in the comments.

Postscript: After I drafted this entry and was getting ready to post it, I came across this article in New Scientist about the rejection. It looks like Richard Wiseman already had a similar idea:

“My feeling is that the whole system is out of date and comes from a time when journal space was limited.” He argues that journals could publish only abstracts of replication studies in print, and provide the full manuscript online.

14 thoughts on “How should journals handle replication studies?”

  1. It’s not as if science is sagging under the weight of exact replications. As far as I can see, they’re much less common than they should be. And while a supplement to publish replications might be one solution, it would risk rendering replications as second-class studies: “This is the journal for real science. We have a special supplement for you, Replication.”

    I once had the idea that every grant should come with the requirement that the first thing you do is try to replicate a previous study which you’ve cited as justification for the grant. I think that would be unworkable but you see the point. Replications are very important.

  2. Interesting stuff, Sanjay, even though I am not a player in this field. One can only wonder though, what if similar review standards were applied to Black Eyed Peas replications? I think in that case, a more conservative approach would be best for all.

  3. Yah, I think a lot of people have kicked this idea around over the years. It seems like something that should exist. The problem of course is that publishing replications is still going to cost the journal, even if they are electronic, since they still have to be reviewed, etc. (I know reviewers aren’t paid, but editors, copyeditors, etc., are.)

  4. Neuroskeptic: 1. A replication reports supplement would have as much or as little prestige as scientists assign to it. As long as the reports are getting out there, I’m okay with that. 2. Setting aside better or worse, replications are qualitatively different from first reports of findings. For the journals, separate sections would let them keep their main identity as publishing only “groundbreaking” findings or whatever. For readers doing lit searches, it would make it easier to find original reports (which probably contain more theoretical background than a brief replication report) and to credit the original discoverers.

    Gameswithwords: that’s a good point. If subscribers got added value from replication reports and indices, and/or scientists preferred reading and publishing in journals that had them, then there might be market pressure (and a resulting business argument) to do them. But that’s certainly not guaranteed.

  5. Having been on the editorial board of JPSP for about a dozen years (way back when), I personally am very troubled by the “policy” the editor espouses of “not publishing replication studies” in this so-called (self-titled?) flagship journal. If this indeed is the flagship journal in Social Psychology, then the ship is taking on water, fast!

    A failure to replicate is not a replication, by definition. If the folk now at JPSP can’t appreciate that not-so-subtle point, then new management is possibly in order.

    Second, APA journals have published failures to replicate, and likely still do (I recall a publication from JEP:L&M circa 1979 titled “…9 failures to replicate”!).

    Finally (oh hell, there is a lot more than one final concern, but what is the use?), given the controversial nature of the paper JPSP chose to publish, and its potential importance to science in particular and human knowledge in general if these findings are, in some scientifically warranted sense, valid, then why on earth would we not want to know whether the reported effect is, to put it mildly (e.g., Rhine, etc., and 100+ years devoted to such phenomena from a “scientific” methodological approach), replicable? That in itself is big news. Really BIG news!

    The editor’s comment (to me via email) that JPSP is not in the business of publishing replications is sad at best, and bad science practice at worst. JPSP has a responsibility to all who read or are influenced by its publications to go the extra mile (god, in such cases, the rules be damned. Why can’t exceptions be made?) when results of such magnitude (vis a vis their implications for both epistemology and science) are given public forum.

    stan klein
    UCSB

  6. Oh, and btw, let’s not even start on the “decline” effect. By definition it should be, well, in decline already, and what are we to make of the N (very large #) of studies that keep replicating? Are they ignored by cosmic consciousness, and if yes, then what does cosmic consciousness have against them? Moreover, considering that every time you turn the key to start your car, a series of scientific experiments is being re-enacted, how come car performance hasn’t gotten worse over the decades of natural experimentation taking place millions of times each hour of every day? (Not to mention the use of all scientifically tested and experimentally vetted products!) Holy mackerel! Sometimes I really wonder about the social sciences.

  7. Hi Sanjay,
    Interesting idea. Some journals publish replication studies in the journal itself, but an online replication section is fine too. The main point is that readers who find the actual article have a direct link to all replication studies. I will cite your idea in a manuscript I am working on, if that is OK with you.

    Best, Uli

    1. Thanks Uli! I don’t think anything I’ve written on this blog has ever been cited in a “real” publication — that would be a cool first. There seems to be a lot of interest in the issue of replication right now, with a few people setting up websites to report replication studies. (See here: http://www.talyarkoni.org/blog/2011/11/22/tracking-replication-attempts-in-psychology-for-real-this-time/) I think it would be best if the journals took responsibility for replications of things they’ve published, but if the journals cannot be persuaded to do that, the independent sites are probably the next best thing.
