The PoPS replication reports format is a good start

Big news today is that Perspectives on Psychological Science is going to start publishing pre-registered replication reports. The inaugural editors will be Daniel Simons and Alex Holcombe, who have done the serious legwork to make this happen. See the official announcement and blog posts by Ed Yong and Melanie Tannenbaum. (Note: this isn’t the same as the earlier plan I wrote about for Psychological Science to publish replications, but it appears to be related.)

The gist of the plan is that after getting pre-approval from the editors (mainly to filter for important but as-yet unreplicated studies), proposers will create a detailed protocol. The original authors (and maybe other reviewers?) will have a chance to review the protocol. Once it has been approved, the proposer and other interested labs will run the study. Publication will be contingent on carrying out the protocol but not on the results. Collections of replications from multiple labs will be published together as final reports.

I think this is great news. In my ideal world published replications would be more routine, and wouldn’t require all the hoopla of prior review by original authors, multiple independent replications packaged together, etc. etc. In other words, they shouldn’t be extraordinary, and they should be as easy to publish as original research, or easier. I also think every journal should take responsibility for replications of its own original reports (the Pottery Barn rule). BUT… this new format doesn’t preclude any of that from also happening elsewhere. By including all of those extras, PoPS replication reports might function as a first-tier, gold standard of replication. And by doing a lot of things right (such as focusing on effect sizes rather than tallying “successful” and “failed” replications, which is problematic) they might set an example for more mundane replication reports in other outlets.

This won’t solve everything — not by a long shot. We need to change scientific culture (by which I mean institutional incentives) so that replication is a more common and more valued activity. We need funding agencies to see it that way too. In a painful coincidence, news came out today that a cognitive neuroscientist admitted to misconduct in published research. One of the many things that commonplace replication would do is help catch or prevent fraud. But whenever I’ve asked colleagues who use fMRI whether people in their fields run direct replications, they’ve just laughed at me. There’s little incentive to run them and no money to do it even if you wanted to. All of that needs to change across many areas of science.

But you can’t solve everything at once, and the PoPS initiative is an important step forward.

What the heck is research anyway? The annual holiday post

Happy holidays, readers! Today, of course, is the day to gather around the aluminum pole with friends and family and air your grievances. And here at The Hardest Science I am adding a holiday tradition of my own to help that process along. So sometime after the fifth “it must be nice not to have to work over your long break” but before someone pins the head of household so you can all go home, gather together all your non-academic loved ones and read this to them aloud:

What the heck is research anyway?

by Brent Roberts

Recently, I was asked for the 17th time by a family member, “So, what are you going to do this summer?”  As usual, I answered, “research.”  And, as usual, I was met with that quizzical look that says, “What the heck is research anyway?”

It struck me in retrospect that I’ve done a pretty poor job of describing what research is to my family and friends.  So, I thought it might be a good idea to write an open letter that tries explaining research a little better.  You deserve an explanation.  So do other people, like parents of students and the general public.  You all pay a part of our salary, either through your taxes or the generous support of your kid’s education, and therefore should know where your money goes.

Continue reading…

All the personality blogging you could ask for

Want to see what’s new in personality research? Check out the new ARP Personality Meta-Blog that Chris Soto just set up. (That’s ARP as in Association for Research in Personality). It’s a blog aggregator that pulls from a bunch of different personality blogs. The Meta-Blog posts titles and excerpts, with links that you can follow to the original blogs for the full posts.

By and large these are blogs written by researchers for researchers, though some also mix in more outwardly focused content (particularly at Psych Your Mind). From my perspective this is a great thing. When I started The Hardest Science it felt like psychology had plenty of general-interest blogs (like those at Psychology Today) but relatively few blogs written with a researcher audience, especially compared to fields like economics and neuroscience. So I’m happy to see that changing.

Right now the Meta-Blog is pulling from 6 blogs. They are Tal Yarkoni’s [citation needed], David Funder’s funderstorms, Brent Roberts’s pigee, the collaborative Psych Your Mind, Brent Donnellan’s Trait-State Continuum, and yours truly. If you know of a blog that should be added, please contact Chris.

Personality psychology at SPSP

Melissa Ferguson and I are the program co-chairs for the upcoming SPSP conference in New Orleans, January 17-19. That means we are in charge of the scientific content of the program. (Cindy Pickett is the convention chair, meaning she’s in charge of pretty much everything else, which, I have discovered, involves a heck of a lot more work than 99% of the world realizes. If you see Cindy at the conference, please buy her a drink.) The conference is going to be awesome. You should go.

One issue that I’m particularly attuned to is the representation of personality psychology on the program. During my work as program co-chair, I heard from some people with a more centrally personality-psych background who worry that the conference is tilted too heavily toward social psych, and that as a result there won’t be enough interesting stuff to go to.

I am writing here to dispel that notion. If you are a personality psychologist and you’re wavering about going, trust me: there’ll be lots of exciting stuff for you.

SPSP has a long-standing commitment to ensuring that both of its parent disciplines are well represented at the conference. That means, first of all, that the 2 program co-chairs are picked to make sure there is broad representation at the top. So among my predecessors are folks like Veronica Benet-Martinez, Sam Gosling, Will Fleeson, etc… — people who have both the expertise and motivation to make sure that outstanding personality submissions make it onto the program. Speaking for myself, I don’t see the personality/social distinction as mapping easily onto my work (it’s both!), but hopefully most people who come from a more canonical personality perspective will see me as intellectually connected to it.

One way that directly translates into program content is through selection of reviewers. Melissa and I made sure that both the symposium and poster review panels had plenty of personality psychologists, so all personality-related submissions get a fair shake. Not every good submission made it onto the program — there was just too much good stuff (and that’s true across all topic areas). But I personally assigned every symposium submission to its reviewers, and I promise you that anything that looked personality-ish got read by someone with relevant expertise.

On top of all that, SPSP’s 2013 president is David Funder. David got to handpick speakers for a Presidential Symposium, and he’ll also give a presidential address. Those sessions will appeal to everybody at SPSP, but I think personality psychologists will feel particularly happy.

For people interested in personality psychology content, here are some highlights:

Presidential Symposium, Thu 5:00 pm – 7:00 pm. Title: “The First ‘P’ in SPSP.” David will give the opening remarks, followed by talks by Colin DeYoung on personality and neuroscience, Sarah Hampson on lifespan personality development, and Bob Krueger on how personality psychology is shaping the DSM-5. (Hardcore social folks, these are 3 dynamite researchers. I bet you’ll like this one too!)

Presidential Address, Fri 2:00 pm – 3:15 pm. David Funder gets the spotlight this time, in a talk titled “Taking the Power of the Situation Seriously.”

Award lectures, Fri 5:00 pm – 6:30 pm. The recipients of SPSP’s 3 major awards will speak at this session. Dan McAdams is the winner of the Jack Block award for personality. Dan Wegner is the winner of the Campbell award in social psych (Thalia Wheatley will be speaking on his behalf). And Jamie Pennebaker is the winner of the inaugural Distinguished Scholar Award.

Symposium Room 217-219. In order to ensure that there is always something personality-oriented for people to go to, we picked 9 symposia that we thought would be especially appealing to personality psychologists and spread them out over every timeslot. So if you want personality, personality, and more personality, you can set up camp in room 217-219 and never leave.

All the other symposium rooms. Just because we highlighted personality stuff in one room doesn’t mean that’s the only place it appears on the schedule. “Personality versus social psychology” is a clearer distinction in people’s stereotypes than in reality. Spread across the schedule are presentations on gene-environment interactions, individual differences and health, subjective well-being, motivation and self-regulation, research methods and practices, and much more.

Posters, posters, posters. There is personality-related content in every poster session. Posters were grouped by keywords (self-nominated by the submitters), so an especially high concentration will be in Session E on Saturday morning.

As long as personality psychologists keep submitting their best stuff, the high-quality representation of personality at SPSP is going to remain the rule in years to come.

Science is more interesting when it’s true

There is a great profile of Uri Simonsohn’s fraud-detection work in the Atlantic Monthly, written by Chris Shea (via Andrew Gelman). This paragraph popped out at me:

So what, then, is driving Simonsohn? His fraud-busting has an almost existential flavor. “I couldn’t tolerate knowing something was fake and not doing something about it,” he told me. “Everything loses meaning. What’s the point of writing a paper, fighting very hard to get it published, going to conferences?”

It reminded me of a story involving my colleague (and grand-advisor) Lew Goldberg. Lew was at a conference once when someone presented a result that he was certain could not be correct. After the talk, Lew stood up and publicly challenged the speaker to a bet that she’d made a coding error in the data. (The bet offer is officially part of the published scientific record. According to people who were there, it was for a case of whiskey.)

The research got published anyway, and there followed several years of back-and-forth over what Lew felt was a vague and insufficient admission of possible errors, which ended with Lew and colleagues publishing a comment on an erratum – the only time I’ve ever heard of that happening in a scientific journal. When someone asked Lew recently why he’d been so motivated to follow through, he answered in part: “Science is more interesting when it’s true.”

What is the Dutch word for “irony”?

Breathless headline-grabbing press releases based on modest findings. Investigations driven by confirmation bias. Broad generalizations based on tiny samples.

I am talking, of course, about the final report of the Diederik Stapel investigation.

Regular readers of my blog will know that I have been beating the drum for reform for quite a while. I absolutely think psychology in general, and perhaps social psychology especially, can and must work to improve its methods and practices.

But in reading the commission’s press release, which talks about “a general culture of careless, selective and uncritical handling of research and data” in social psychology, I am struck that those conclusions are based on a retrospective review of a known fraud case — a case that the commissions were specifically charged with finding an explanation for. So when they wag their fingers about a field rife with elementary statistical errors and confirmation bias, it’s a bit much for me.

I am writing this as a first reaction based on what I’ve seen in the press. At some point when I have the time and the stomach I plan to dig into the full 100-page commission report. I hope that — as is often the case when you go from a press release to an actual report — it takes a more sober and cautious tone. Because I do think that we have the potential to learn some important things by studying how Diederik Stapel did what he did. Most likely we will learn what kinds of hard questions we need to be asking of ourselves — not necessarily what the answers to those questions will be. Remember that the more we are shocked by the commission’s report, the less willing we should be to reach any sweeping generalizations from it.

So let’s all take a deep breath, face up to the Stapel case for what it is — neither exaggerating nor minimizing it — and then try to have a productive conversation about where we need to go next.

Changing software to nudge researchers toward better data analysis practice

The tools we have available to us affect the way we interact with and even think about the world. “If all you have is a hammer” etc. Along these lines, I’ve been wondering what would happen if the makers of data analysis software like SPSS, SAS, etc. changed some of the defaults and options. Sort of in the spirit of Nudge — don’t change the list of what is ultimately possible to do, but make some things easier and other things harder via defaults and options.

Would people think about their data differently? Here’s my list of how I might change regression procedures, and what I think these changes might do:

1. Let users write common transformations of variables directly into the syntax. Things like centering, z-scoring, log-transforming, multiplying variables into interactions, etc. This is already part of some packages (it’s easy to do in R), but not others. In particular, running interactions in SPSS is a huge royal pain. For example, to do a simple 2-way interaction with centered variables, you have to write all this crap *and* cycle back and forth between the code and the output along the way:

desc x1 x2.
* Run just the above, then look at the output and see what the means are, then edit the code below.
compute x1_c = x1 - [whatever the mean was].
compute x2_c = x2 - [whatever the mean was].
compute x1x2 = x1_c*x2_c.
regression /dependent y /enter x1_c x2_c x1x2.

Why shouldn’t we be able to do it all in one line like this?

regression /dependent y /enter center(x1) center(x2) center(x1)*center(x2).

The nudge: If it were easy to write everything into a single command, maybe more people would look at interactions more often. And maybe they’d stop doing median splits and then jamming everything into an ANOVA!
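For comparison, here is a minimal sketch of what the one-line version already looks like in R (the data frame dat and the variables y, x1, and x2 are hypothetical, carried over from the example above):

# Mean-center both predictors and fit the interaction in a single call.
# scale(x, scale = FALSE) centers a variable without standardizing it.
fit <- lm(y ~ scale(x1, scale = FALSE) * scale(x2, scale = FALSE), data = dat)
summary(fit)

The * in the formula expands to both main effects plus their product, so there is no separate compute step and no copying means out of the output by hand.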

2. By default, the output shows you parameter estimates and confidence intervals.

3. Either by default or with an easy-to-implement option, you can get a variety of standardized effect size estimates with their confidence intervals. And let’s not make variance-explained metrics (like R^2 or eta^2) the defaults.

The nudge: #2 and #3 are both designed to focus people on point and interval estimation, rather than null hypothesis significance testing (NHST).
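As a rough sketch of what those defaults could look like, here is how you can already coax R into reporting estimates and confidence intervals instead of significance tests (again, dat, y, x1, and x2 are hypothetical):

fit <- lm(y ~ x1 * x2, data = dat)
cbind(estimate = coef(fit), confint(fit))   # unstandardized estimates with 95% CIs

# One crude route to standardized estimates: z-score the variables first.
dat_z <- as.data.frame(scale(dat[c("y", "x1", "x2")]))
fit_z <- lm(y ~ x1 * x2, data = dat_z)
cbind(estimate = coef(fit_z), confint(fit_z))

The point of the nudge is that a table like this would come out by default, rather than requiring the user to know which helper functions to call.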

This next one is a little more radical:

4. By default the output does not show you inferential t-tests and p-values — you have to ask for them through an option. And when you ask for them, you have to state what the null hypotheses are! So if you want to test the null that some parameter equals zero (as 99.9% of research in social science does), hey, go for it — but it has to be an active request, not a passive default. And if you want to test a null hypothesis that some parameter is some nonzero value, it would be easy to do that too.

The nudge: In the way a lot of statistics is taught in psychology, NHST is the main event and effect estimation is an afterthought. This would turn it around. And by making users specify a null hypothesis, it might spur us to pause and think about how and why we are doing so, rather than just mining for asterisks to put in tables. Heck, I bet some nontrivial number of psychology researchers don’t even know that the null hypothesis doesn’t have to be the nil hypothesis. (I still remember the “aha” feeling the first time I learned that you could do that — well along into graduate school, in an elective statistics class.) If we want researchers to move toward point or range predictions with strong hypothesis testing, we should make it easier to do.
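To make the “state your null” idea concrete, here is one way to test a parameter against a nonzero null in R today, computed by hand from the estimate and its standard error (a sketch; the hypothesized slope of 0.30 is made up):

fit <- lm(y ~ x1 * x2, data = dat)
est <- coef(summary(fit))["x1", "Estimate"]
se  <- coef(summary(fit))["x1", "Std. Error"]
b0  <- 0.30                                   # hypothesized (non-nil) value for the x1 slope
t_stat <- (est - b0) / se                     # t statistic against the stated null
p_val  <- 2 * pt(-abs(t_stat), df = df.residual(fit))   # two-sided p-value
c(t = t_stat, p = p_val)

Having to write down b0 yourself is exactly the kind of active request this nudge is after; software could make that the only way to get a p-value at all.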

All of these things are possible to do in most or all software packages. But as my SPSS example under #1 shows, they’re not necessarily easy to implement in a user-friendly way. Even R doesn’t do all of these things in the standard lm function. As a result, they probably don’t get done as much as they could or should.

Any other nudges you’d make?