Learning exactly the wrong lesson

For several years now I have heard fellow scientists worry that the dialogue around open and reproducible science could be used against science – to discredit results that people find inconvenient and even to de-fund science. And this has not just been fretting around the periphery. I have heard these concerns raised by scientists who hold policymaking positions in societies and journals.

A recent article by Ed Yong talks about this concern in the present political climate.

In this environment, many are concerned that attempts to improve science could be judo-flipped into ways of decrying or defunding it. “It’s been on our minds since the first week of November,” says Stuart Buck, Vice President of Research Integrity at the Laura and John Arnold Foundation, which funds attempts to improve reproducibility.

The worry is that policy-makers might ask why so much money should be poured into science if so many studies are weak or wrong, or why studies should be allowed into the policy-making process if they’re inaccessible to public scrutiny. At a recent conference on reproducibility run by the National Academies of Sciences, clinical epidemiologist Hilda Bastian says that she and other speakers were told to consider these dangers when preparing their talks.

One possible conclusion is that this means we should slow down science’s movement toward greater openness and reproducibility. As Yong writes, “Everyone I spoke to felt that this is the wrong approach.” But as I said, those voices are out there, and many could take Yong’s article as reinforcing their position. So I think it is worth elaborating on why that would be the wrong approach.

Probably the least principled reason, but an entirely unavoidable practical one, is just that it would be impossible. The discussion cannot be contained. Notwithstanding some defenses of gatekeeping and critiques of science discourse on social media (where much of this discussion is happening), there is just no way to keep scientists from talking about these issues in the open.

And imagine for a moment that we nevertheless tried to contain the conversation. Would that be a good idea? Consider the “climategate” faux-scandal. Opponents of climate science cooked up an anti-transparency conspiracy out of a few emails that showed nothing of the sort. Now imagine if we actually did that – if we kept scientists from discussing science’s problems in the open. And imagine that getting out. That would be a PR disaster to dwarf any misinterpretation of open science (because the worst PR disasters are the ones based in reality).

But to me, the even more compelling consideration is that if we put science’s public image first, we are inverting our core values. The conversation around open and reproducible science cuts to fundamental questions about what science is – such as that scientific knowledge is verifiable, and that it belongs to everyone – and why science offers unique value to society. We should fully and fearlessly engage in those questions and in making our institutions and practices better. We can solve the PR problem after that. In the long run, the way to make the best possible case for science is to make science the best possible.

Rather than shying away from talking about openness and reproducibility, I believe it is more critical than ever that we all pull together to move science forward. Because if we don’t, others will make changes in our name that serve other agendas.

For example, Yong’s article describes a bill pending in Congress that would set impossibly high standards of evidence for the Environmental Protection Agency to base policy on. Those standards are wrapped in the rhetoric of open science. But as Michael Eisen says in the article, “It won’t produce regulations based on more open science. It’ll just produce fewer regulations.” This is almost certainly the intended effect.

As long as scientists – individually and collectively in our societies and journals – drag our heels on making needed reforms, there will be a vacuum that others will try to fill. Turn that around: the better the scientific community does its job of making science do what science is supposed to do – making it more open, more verifiable, more accessible to everyone – the better positioned we will be to rebut those kinds of efforts by saying, “Nope, we got this.”

Bold changes at Psychological Science

Style manuals sound like they ought to be boring things, full of arcane points about commas and whatnot. But Wikipedia’s style manual has an interesting admonition: Be bold. The idea is that if you see something that could be improved, you should dive in and start making it better. Don’t wait until you are ready to be comprehensive, and don’t fret about getting every detail perfect. That’s the path to paralysis. Wikipedia is an ongoing work in progress; your changes won’t be the last word, but you can make things better.

In a new editorial at Psychological Science, interim editor Stephen Lindsay is clearly following the be bold philosophy. He lays out a clear and progressive set of principles for evaluating research. Beware the “troubling trio” of low power, surprising results, and just-barely-significant results. Look for signs of p-hacking. Care about power and precision. Don’t mistake nonsignificant for null.
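To make concrete why the “troubling trio” and the p-hacking warnings have teeth, here is a small simulation sketch. This is purely my own illustration, not anything from Lindsay’s editorial – the function names and parameters are invented for the example. It runs many null “studies,” each measuring five outcomes with no true effect anywhere, and reports only the best-looking p-value from each, the way a motivated analyst might:

```python
import math
import random

random.seed(1)

def z_test_p(a, b):
    """Two-sample z-test p-value, assuming unit variance (true here,
    since both samples are drawn from a standard normal)."""
    n = len(a)
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2 / n)
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def min_p_over_outcomes(n=20, k=5):
    """One simulated 'study': two groups of n, k outcome measures,
    no true effect anywhere; report only the smallest p-value."""
    ps = []
    for _ in range(k):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]
        ps.append(z_test_p(a, b))
    return min(ps)

n_sims = 2000
false_pos = sum(min_p_over_outcomes() < 0.05 for _ in range(n_sims)) / n_sims
print(f"False-positive rate when reporting the best of 5 outcomes: {false_pos:.2f}")
```

With a single pre-registered outcome the false-positive rate would hover near the nominal 5%; cherry-picking the best of five independent outcomes inflates it toward 1 − 0.95^5 ≈ 23%. That inflation is exactly the kind of pattern the editorial’s principles are meant to catch.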

To people who have been paying attention to the science reform discussion of the last few years (and its longstanding precursors), none of this is new. What is new is that an editor of a prominent journal has clearly been reading and absorbing the last few years’ wave of careful and thoughtful scholarship on research methods and meta-science. And he is boldly acting on it.

I mean, yes, there are some things I am not 100% in love with in that editorial. Personally, I’d like to see more value placed on good exploratory research.* I’d like to see him discuss whether Psychological Science will be less results-oriented, since that is a major contributor to publication bias.** And I’m sure other people have their objections too.***

But… Improving science will forever be a work in progress. Lindsay has laid out a set of principles. In the short term, they will be interpreted and implemented by humans with intelligence and judgment. In the longer term, someone will eventually look at what is and is not working and will make more changes.

Are Lindsay’s changes as good as they could possibly be? The answers are (1) “duh” because obviously no and (2) “duh” because it’s the wrong question. Instead let’s ask, are these changes better than things have been? I’m not going to give that one a “duh,” but I’ll stand behind a considered “yes.”


* Part of this is because in psychology we don’t have nearly as good a foundation of implicit knowledge and accumulated wisdom for differentiating good from bad exploratory research as we do for hypothesis-testing. So exploratory research gets a bad name because somebody hacks around in a tiny dataset and calls it “exploratory research,” and nobody has the language or concepts to say why they’re doing it wrong. I hope we can fix that. For starters, we could steal more ideas from the machine learning and genomics people, though we will need to adapt them for the particular features of our scientific problems. But that’s a blog post for another day.
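As one example of the kind of idea we could borrow – and this sketch is purely my illustration, with an invented toy dataset and made-up names, not a prescription – machine learning’s train/test discipline translates naturally into an explore/confirm split: dredge one half of the data freely, then require any “finding” to hold up on the untouched half:

```python
import random

random.seed(0)

# Toy dataset: 200 'participants', 10 measured variables, one binary outcome.
# No variable truly predicts the outcome, which is what makes dredging easy.
data = [([random.gauss(0, 1) for _ in range(10)], random.random() < 0.5)
        for _ in range(200)]

explore, confirm = data[:100], data[100:]

def mean_diff(dataset, var_idx):
    """Difference in one variable's mean between the two outcome groups."""
    yes = [x[var_idx] for x, y in dataset if y]
    no = [x[var_idx] for x, y in dataset if not y]
    return sum(yes) / len(yes) - sum(no) / len(no)

# Exploration half: dredge freely and pick the most impressive variable.
best = max(range(10), key=lambda i: abs(mean_diff(explore, i)))
print(f"exploration half, variable {best}: diff = {mean_diff(explore, best):+.2f}")

# Confirmation half: the 'finding' has to survive on untouched data.
print(f"confirmation half, variable {best}: diff = {mean_diff(confirm, best):+.2f}")
```

Typically the effect that looked biggest in the exploration half shrinks on the confirmation half, because it was partly selected noise. The split makes the exploration honest without forbidding it.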

** There are some nice comments about this already on the ISCON Facebook page. Dan Simons brought up the exploratory issue; Victoria Savalei, the results-focus issue. My reactions on these issues are in part bouncing off of theirs.

*** When I got to the part about using confidence intervals to support the null, I immediately had a vision of steam coming out of some of the Twitter Bayesians’ ears.