
[12] Preregistration: Not just for the Empiro-zealots


Posted on January 7, 2014 by Leif Nelson

I recently joined a large group of academics in co-authoring a paper looking at how political science, economics, and psychology are working to increase transparency in scientific publications. Psychology is leading, by the way.

Working on that paper (and the figure below) actually changed my mind about something. A couple of years ago, when Joe, Uri, and I wrote False Positive Psychology, we were not really advocates of preregistration (à la clinicaltrials.gov). We saw it as an implausible superstructure of unspecified regulation. Now I am an advocate. What changed?

[Figure: Transparency in Scientific Reporting]

First, let me relate an anecdote originally told by Don Green (and related with more subtlety here). He described watching a research presentation that at one point emphasized a subtle three-way interaction. Don asked, “did you preregister that hypothesis?” and the speaker said “yes.” Don, as he relates it, was amazed. Here was this super complicated pattern of results, but it had all been predicted ahead of time. That is convincing. Then the speaker said, “No. Just kidding.” Don was less amazed.

The gap between those two reactions is the reason I am trying to start preregistering my experiments. I want people to be amazed.

The single most important scientific practice that Uri, Joe, and I have emphasized is disclosure (i.e., the top panel in the figure): transparently disclose all manipulations, measures, exclusions, and how sample size was determined. We have been at least mildly persuasive, as a number of journals (e.g., Psychological Science, Management Science) now require such reporting.

Meanwhile, transparency creates a rhetorical problem for me as a researcher. When I conduct experiments, for example, I typically collect a single measure that I see as the central test of my hypothesis. But, like any curious scientist, I sometimes measure some other stuff in case I can learn a bit more about what is happening. If I report everything, then my confirmatory measure is hard to distinguish from my exploratory measures. As outlined in the figure above, a reader might reasonably think, “Leif is p-hacking.” My only defense is to say, “no, that first measure was the critical one. These other ones were bonus.” When I read claims like that, I am often imperfectly convinced.

How can Leif the researcher be more convincing to Leif the reader? By saying something like, “The reason you can tell that the first measure was the critical one is because I said that publicly before I ran the study. Here, go take a look. I preregistered it.” (i.e., the left panel of the figure).

Note that this line of thinking is not even vaguely self-righteous. It isn’t pushy. I am not saying, “you have to preregister or else!” Heck, I am not even saying that you should; I am saying that I should. In a world of transparent reporting, I choose preregistration as a way to selfishly show off that I predicted the outcome of my study. I choose to preregister in the hopes that one day someone like Don Green will ask me, and that he will be amazed.

I am new to preregistration, so I am going to make lots of mistakes. I am not going to wait until I am perfect (it would be a long wait). If you want to join me in trying to add preregistration to your research process, it is easy to get started. Go here, open an account, set up a page for your project, and, when you’re ready, preregister your study. There is even a video to help you out.


