With more than mild trepidation, we are introducing a new column called Data Replicada.
In this column, we will report the results of exact (or close) preregistered replications of recently published findings.
Anyone who has been paying attention will have noticed that the publication of exact (or close) replications has become increasingly common. So why does the world need Data Replicada? Allow us to justify our existence.
- No file drawer of failed or successful replications: Even though it is much easier than ever to conduct exact replications, it is still extremely hard to publish them. We know for a fact that many high-quality replications remain unreported. You can’t publish failures to replicate unless you convince a review team both that the failure is robust and that it is really important. And you can’t publish a successful replication, well, ever…unless it is part of a larger project that builds on that success. We will overcome that problem by reporting every Data Replicada replication attempt, regardless of whether it is successful, unsuccessful, or inconclusive. There will be no file drawer.
- A focus on non-obvious effects from well-designed studies: Science is at its most valuable when it provides us with true, non-obvious results from studies that are well-designed (and thus not susceptible to alternative explanations that render the results less informative). Thus, our focus will be on trying to replicate those studies which seem well-designed, confound-free, and at least somewhat surprising. Not all studies live at that intersection (nor should they), but when we think about the replicability of behavioral science, those are the studies we are generally thinking of.
- A focus on recently published findings: Many published replications focus on studies that were conducted a long time ago, or at least before the field became aware of the perils of p-hacking and the importance of better research practices. We will focus on trying to replicate recently published findings, so as to get a sense for whether non-obvious research published after the “renaissance” is in fact reliable.
- Informative replications: Sometimes replication attempts are simply uninformative, either because they are underpowered or because they deviate too much from the original study design. We will ensure that all of our replications are highly powered, pre-registered, and faithful to the original study. Indeed, we will only select studies that we are able to replicate faithfully (e.g., because they were conducted online or in a lab setting that we can reliably reproduce), and we will always try to reach out to the original authors before conducting the study in an effort to ensure that our replication attempt is as exact as possible. Additionally, we will always aim to run at least 2.5 times the sample size of the original study, so that even a non-significant result is likely to be informative.
- An initial focus on a literature that hasn’t been systematically replicated: Although there have been a few large-scale efforts to replicate research published in psychology, replications have not caught on in our subfield of behavioral marketing. So at least at the outset, we intend to focus our efforts on trying to replicate studies recently published in the two top behavioral marketing journals: the Journal of Consumer Research and the Journal of Marketing Research.
- Learning new things about methods and statistics: In our experience, conducting exact replications often teaches us much more than whether a particular finding will replicate. For example, sometimes it causes us to realize that a seemingly obvious (and pervasive) analytic strategy is flawed, or to confront ambiguities in how to interpret replication results. It also can teach us a lot about how much (or little) a study’s actual methods align with a study’s described methods. Conducting replications is a great way to learn about the messy business of scientific discovery, and we hope to communicate what we learn to you.
We are hoping to make Data Replicada a regular, monthly feature of Data Colada. But since it will be effortful and expensive, we only feel comfortable committing to doing it for six months or so, at which point we will re-assess. The first one will be up on Wednesday, December 11, 2019.
- Or unless it is part of a Registered Replication Report, which is also difficult to publish.
- If anyone reading this wants to help fund this effort, or knows how we can fund this effort, please email Joe: firstname.lastname@example.org. If we don’t get funding, we’ll have to draw on the paltry sum that Leif accrues from his "bucolic" music career (.html).