A number of authors have recently proposed that (i) psychological research is highly unpredictable, with identical studies obtaining surprisingly different results, and that (ii) the presence of heterogeneity decreases the replicability of psychological findings. In this post we provide evidence that contradicts both propositions. Consider these quotes: "heterogeneity persists, and to a reasonable degree, even in […]…
[75] Intentionally Biased: People Purposely Don't Ignore Information They "Should" Ignore
You can’t un-ring a bell. Once people receive information, even if it is taken back, they cannot disregard it. Teachers cannot imagine what novice students don’t know, juries cannot follow instructions to disregard evidence, negotiators cannot take the perspective of their counterpart who does not know what they know, etc. People exhibit “outcome bias”, “hindsight…
[74] In Press at Psychological Science: A New 'Nudge' Supported by Implausible Data
Today Psychological Science issued a Corrigendum (.htm) and an expression of concern (.htm) for a paper originally posted online in May 2018 (.htm). This post will spell out the data irregularities we uncovered that eventually led to the two postings from the journal today. We are not convinced that those postings are sufficient. It is…
[73] Don't Trust Internal Meta-Analysis
Researchers have increasingly been using internal meta-analysis to summarize the evidence from multiple studies within the same paper. Much of the time, this involves computing the average effect size across the studies, and assessing whether that effect size is significantly different from zero. At first glance, internal meta-analysis seems like a wonderful idea. It increases…
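To make the computation the teaser describes concrete, here is a minimal sketch of a fixed-effect internal meta-analysis: average the studies' effect sizes (weighting by inverse variance) and z-test that average against zero. The studies, sample sizes, and function names below are illustrative, not taken from the post.

```python
import math

def fixed_effect_meta(effects):
    """effects: list of (d, n1, n2) tuples, one Cohen's d per study."""
    weights, weighted_ds = [], []
    for d, n1, n2 in effects:
        # Standard large-sample variance of Cohen's d.
        var_d = (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))
        w = 1 / var_d                          # inverse-variance weight
        weights.append(w)
        weighted_ds.append(w * d)
    d_avg = sum(weighted_ds) / sum(weights)    # the "average effect size"
    se = math.sqrt(1 / sum(weights))
    z = d_avg / se                             # z-test of H0: average effect = 0
    return d_avg, se, z

# Five hypothetical studies from one paper: (Cohen's d, n per cell, n per cell).
studies = [(0.30, 50, 50), (0.10, 40, 40), (0.25, 60, 60), (-0.05, 50, 50), (0.20, 45, 45)]
d_avg, se, z = fixed_effect_meta(studies)
print(f"average d = {d_avg:.3f}, SE = {se:.3f}, z = {z:.2f}")
```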
[72] Metacritic Has A (File-Drawer) Problem
Metacritic.com scores and aggregates critics’ reviews of movies, music, and video games. The website provides a summary assessment of the critics’ evaluations, using a scale ranging from 0 to 100. Higher numbers mean that critics were more favorable. In theory, this website is pretty awesome, seemingly leveraging the wisdom of crowds to give consumers the most reliable…
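The file-drawer problem in the title is easy to simulate. A hedged sketch, using hypothetical scores and a plain unweighted mean rather than Metacritic's actual aggregation formula: if low scores are disproportionately missing from the aggregate, the mean of the scores that do appear is inflated.

```python
import random

random.seed(1)

# 200 hypothetical critic scores on a 0-100 scale.
all_scores = [min(100, max(0, random.gauss(65, 15))) for _ in range(200)]

# File drawer: suppose lower scores are less likely to be written up or indexed.
published = [s for s in all_scores if random.random() < min(1.0, s / 80)]

print(f"mean of all reviews:       {sum(all_scores) / len(all_scores):.1f}")
print(f"mean of published reviews: {sum(published) / len(published):.1f}")
```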
[71] The (Surprising?) Shape of the File Drawer
Let’s start with a question so familiar that you will have answered it before the sentence is even completed: How many studies will a researcher need to run before finding a significant (p<.05) result? (If she is studying a non-existent effect and if she is not p-hacking.) Depending on your sophistication, wariness about being asked…
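The familiar answer: under the stated assumptions, each study is significant with probability .05, so the number of studies until the first significant result follows a geometric distribution with mean 1/.05 = 20. A quick simulation of that file drawer (variable names are mine):

```python
import random

random.seed(1)

def studies_until_significant(alpha=0.05):
    n = 1
    while random.random() >= alpha:   # each study is significant w.p. alpha under the null
        n += 1
    return n

draws = [studies_until_significant() for _ in range(100_000)]
print(f"mean studies needed:  {sum(draws) / len(draws):.1f}   (theory: 20)")
print(f"P(first study works): {draws.count(1) / len(draws):.3f}   (theory: .05)")
```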
[70] How Many Studies Have Not Been Run? Why We Still Think the Average Effect Does Not Exist
We have argued that, for most effects, it is impossible to identify the average effect (datacolada.org/33). The argument is subtle (but not statistical), and given the number of well-informed people who seem to disagree, perhaps we are simply wrong. This is my effort to explain why we think identifying the average effect is so hard…
[69] Eight things I do to make my open research more findable and understandable
It is now common for researchers to post original materials, data, and/or code behind their published research. That’s obviously great, but open research is often difficult to find and understand. In this post I discuss eight things I do, in my papers, code, and data files, to combat that. Paper 1) Before all method sections, I…
[68] Pilot-Dropping Backfires (So Daryl Bem Probably Did Not Do It)
Uli Schimmack recently identified an interesting pattern in the data from Daryl Bem’s infamous “Feeling the Future” JPSP paper, in which he reported evidence for the existence of extrasensory perception (ESP; .htm) [1]. In each study, the effect size is larger among participants who completed the study earlier (blog post: .htm). Uli referred to this as the “decline…
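To make "pilot-dropping" concrete, here is a hedged sketch of one version of the strategy under a true null effect: collect data, and if dropping the earliest block of participants (the "pilot") makes the result look better, report the analysis without them. The exact procedure the post analyzes may differ, and everything below (sample sizes, the crude t statistic) is illustrative.

```python
import random
import statistics

random.seed(1)

def t_stat(xs):
    """Crude one-sample t statistic against zero, for illustration only."""
    return statistics.mean(xs) / (statistics.stdev(xs) / len(xs) ** 0.5)

def run_study(n=100, pilot=20):
    data = [random.gauss(0, 1) for _ in range(n)]   # true effect is zero
    t_all = t_stat(data)
    t_dropped = t_stat(data[pilot:])                # drop the early "pilot" block
    return max(t_all, t_dropped)                    # report whichever looks better

ts = [run_study() for _ in range(10_000)]
# 1.98 is roughly the two-sided .05 cutoff at these df; we check the upper tail only.
print(f"positive-significant rate with pilot-dropping: {sum(t > 1.98 for t in ts) / len(ts):.3f}")
```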
[67] P-curve Handles Heterogeneity Just Fine
A few years ago, we developed p-curve (see p-curve.com), a statistical tool that assesses whether a set of statistically significant findings contains evidential value or whether those results are attributable solely to the selective reporting of studies or analyses. It also estimates the true average power of a set of significant findings [1]…
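A minimal sketch of the core right-skew logic, not the full p-curve app at p-curve.com, which does considerably more: under a true null, statistically significant p-values are uniform on (0, .05), so the pp-values p/.05 are uniform on (0, 1); a real effect piles up very small p's, which Stouffer's method can detect. The example p-values below are hypothetical.

```python
from statistics import NormalDist

def p_curve_evidential_value(sig_ps):
    """sig_ps: statistically significant p-values (all < .05)."""
    nd = NormalDist()
    pps = [p / 0.05 for p in sig_ps]        # uniform on (0, 1) under a true null
    zs = [nd.inv_cdf(pp) for pp in pps]     # very small pp -> very negative z
    z_stouffer = sum(zs) / len(zs) ** 0.5   # Stouffer's method aggregates the z's
    p_skew = nd.cdf(z_stouffer)             # small p -> right skew -> evidential value
    return z_stouffer, p_skew

# Hypothetical set of significant findings from a literature:
ps = [0.001, 0.004, 0.012, 0.021, 0.038]
z, p = p_curve_evidential_value(ps)
print(f"Stouffer z = {z:.2f}, p = {p:.4f}")
```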