This is the third post in a series (.htm) in which we argue/show that meta-analytic means are often meaningless, because they often (1) include invalid tests of the hypothesis of interest to the meta-analyst and (2) combine incommensurate results. The meta-analysis we discuss here explores how dishonesty differs across four different experimental paradigms (e.g., coin…
Category: Meta Analysis
[106] Meaningless Means #2: The Average Effect of Nudging in Academic Publications is 8.7%
This post is the second in a series (.htm) in which we argue that meta-analytic means are often meaningless, because these averages (1) include invalid tests of the meta-analytic research question, and (2) aggregate incommensurable results. In each post we showcase examples of (1) and (2) in a different published meta-analysis. We seek out meta-analyses…
[105] Meaningless Means #1: The Average Effect of Nudging Is d = .43
This post is the first in a series (see its introduction: .htm) arguing that meta-analytic means are often meaningless, because (1) they include results from invalid tests of the research question of interest to the meta-analyst, and (2) they average across fundamentally incommensurable results. In this post we focus primarily on problem (2), though problem…
[104] Meaningless Means: Some Fundamental Problems With Meta-Analytic Averages
This post is an introduction to a series of posts about meta-analysis [1]. We think that many, perhaps most, meta-analyses in the behavioral sciences are invalid. In this introductory post, we make that case with arguments. In subsequent posts, we will make that case by presenting examples taken from published meta-analyses. We have recently written…
[76] Heterogeneity Is Replicable: Evidence From Maluma, MTurk, and Many Labs
A number of authors have recently proposed (i) that psychological research is highly unpredictable, with identical studies obtaining surprisingly different results, and (ii) that the presence of heterogeneity decreases the replicability of psychological findings. In this post we provide evidence that contradicts both propositions. Consider these quotes: "heterogeneity persists, and to a reasonable degree, even in […]…
[63] "Many Labs" Overestimated The Importance of Hidden Moderators
Are hidden moderators a thing? Do experiments intended to be identical lead to inexplicably different results? Back in 2014, the "Many Labs" project (htm) reported an ambitious attempt to answer these questions. More than 30 different labs ran the same set of studies and the paper presented the results side-by-side. They did not find any…
[59] PET-PEESE Is Not Like Homeopathy
PET-PEESE is a meta-analytical tool that seeks to correct for publication bias. In a footnote in my previous post (.htm), I referred to it as the homeopathy of meta-analysis. That was unfair and inaccurate. Unfair because, in the style of our President, I just called PET-PEESE a name instead of describing what I believed was…
[58] The Funnel Plot is Invalid Because of This Crazy Assumption: r(n,d)=0
The funnel plot is a beloved meta-analysis tool. It is typically used to answer the question of whether a set of studies exhibits publication bias. That’s a bad question because we always know the answer: it is “obviously yes.” Some researchers publish some null findings, but nobody publishes them all. It is also a bad…
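The post's core point can be sketched with a small simulation. In this hypothetical setup (all numbers are illustrative assumptions, not from the post), researchers plan larger samples when they anticipate smaller effects, so sample size and true effect size are correlated, i.e., r(n,d) ≠ 0. Even with zero publication bias, the funnel plot then looks asymmetric: small studies report larger effects.

```python
# Hypothetical sketch: funnel-plot asymmetry WITHOUT publication bias,
# arising purely because researchers run bigger samples when they expect
# smaller effects, violating the funnel plot's r(n, d) = 0 assumption.
import numpy as np

rng = np.random.default_rng(1)
true_d = rng.uniform(0.1, 0.8, size=200)       # each study has its own true effect
n = np.round(400 / true_d**2).astype(int)      # larger n planned for smaller effects
se = np.sqrt(2 / n)                            # rough SE of Cohen's d (two groups of n)
obs_d = true_d + rng.normal(0, se)             # observed effects; every study "published"

# A common asymmetry check: do observed effects correlate with their SEs?
r = np.corrcoef(obs_d, se)[0, 1]
print(r)  # positive: small-n studies show bigger effects, mimicking publication bias
```

Under these assumptions the correlation is strongly positive, so a funnel-plot test would flag "publication bias" in a literature that, by construction, has none.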
[33] "The" Effect Size Does Not Exist
Consider the robust phenomenon of anchoring, where people’s numerical estimates are biased towards arbitrary starting points. What does it mean to say “the” effect size of anchoring? It surely depends on moderators like domain of the estimate, expertise, and perceived informativeness of the anchor. Alright, how about “the average” effect size of anchoring? That's simple enough…
[30] Trim-and-Fill is Full of It (bias)
Statistically significant findings are much more likely to be published than non-significant ones (no citation necessary). Because overestimated effects are more likely to be statistically significant than are underestimated effects, this means that most published effects are overestimates. Effects are smaller – often much smaller – than the published record suggests. For meta-analysts the gold…
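The overestimation argument is easy to verify by simulation. In this minimal sketch (the true effect, sample size, and selection rule are illustrative assumptions), many studies of the same modest effect are run, but only the statistically significant ones get "published"; the published average then lands well above the truth.

```python
# Hypothetical sketch: publication bias inflates the published mean effect.
import numpy as np

rng = np.random.default_rng(0)
true_d, n = 0.2, 50                            # assumed true effect, per-group n
se = np.sqrt(2 / n)                            # rough SE of Cohen's d
d_hat = true_d + rng.normal(0, se, 10000)      # 10,000 simulated studies

# Selection: only |z| > 1.96 ("significant") studies make it into print.
published = d_hat[np.abs(d_hat / se) > 1.96]
print(published.mean(), "vs true effect of", true_d)
```

Under these assumptions the published mean is more than double the true effect, which is the starting problem that tools like trim-and-fill try, and per the post fail, to undo.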