This is the second in a four-part series of posts detailing evidence of fraud in four academic papers co-authored by Harvard Business School Professor Francesca Gino. It is worth reiterating that to the best of our knowledge, none of Gino’s co-authors carried out or assisted with the data collection for the studies in this series….
[109] Data Falsificada (Part 1): "Clusterfake"
This is the introduction to a four-part series of posts detailing evidence of fraud in four academic papers co-authored by Harvard Business School Professor Francesca Gino. In 2021, we and a team of anonymous researchers examined a number of studies co-authored by Gino, because we had concerns that they contained fraudulent data. We discovered evidence…
[108] MRAN is Dead, long live GRAN
Microsoft has been making daily copies of the entire CRAN website of R packages since 2014. This archive, named MRAN, allows installing older versions of packages, which is valuable for reproducibility purposes. The 15,000+ R packages on CRAN are incessantly updated. For example, the package tidyverse depends on 109 packages; these packages accumulate 63 updates, just…
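As a concrete illustration (not taken from the post itself), this is the kind of one-liner MRAN made possible: pointing install.packages() at a dated snapshot so you get the package versions that were on CRAN that day. The date and package below are placeholders, and the snapshot URL is MRAN's historical format.

    # Install the version of 'tidyverse' that CRAN served on 2019-06-01,
    # using MRAN's dated snapshot as the repository.
    # (Placeholder date/package; MRAN itself has since been retired.)
    install.packages("tidyverse",
                     repos = "https://mran.microsoft.com/snapshot/2019-06-01")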
[107] Meaningless Means #3: The Truth About Lies
This is the third post in a series (.htm) in which we argue/show that meta-analytic means are often meaningless, because they often (1) include invalid tests of the hypothesis of interest to the meta-analyst and (2) combine incommensurate results. The meta-analysis we discuss here explores how dishonesty differs across four different experimental paradigms (e.g., coin…
[106] Meaningless Means #2: The Average Effect of Nudging in Academic Publications is 8.7%
This post is the second in a series (.htm) in which we argue that meta-analytic means are often meaningless, because these averages (1) include invalid tests of the meta-analytic research question, and (2) aggregate incommensurable results. In each post we showcase examples of (1) and (2) in a different published meta-analysis. We seek out meta-analyses…
[105] Meaningless Means #1: The Average Effect of Nudging Is d = .43
This post is the first in a series (see its introduction: .htm) arguing that meta-analytic means are often meaningless, because (1) they include results from invalid tests of the research question of interest to the meta-analyst, and (2) they average across fundamentally incommensurable results. In this post we focus primarily on problem (2), though problem…
[104] Meaningless Means: Some Fundamental Problems With Meta-Analytic Averages
This post is an introduction to a series of posts about meta-analysis [1]. We think that many, perhaps most, meta-analyses in the behavioral sciences are invalid. In this introductory post, we make that case with arguments. In subsequent posts, we will make that case by presenting examples taken from published meta-analyses. We have recently written…
[103] Mediation Analysis is Counterintuitively Invalid
Mediation analysis is very common in behavioral science despite suffering from many invalidating shortcomings. While most of the shortcomings are intuitive [1], this post focuses on a counterintuitive one. It is one of those quirky statistical things that can be fun to think about, so it would merit a blog post even if it were…
[102] R on Steroids: Running WAY faster simulations in R
This post shows how to run simulations (loops) in R that can go 50 times faster than the default approach of running code like: for (k in 1:100) on your laptop. Obviously, a bit of a niche post. There are two steps. Step 1 involves running parallel rather than sequential loops [1]. This step can…
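For context, here is a minimal sketch of what Step 1 can look like, using base R's parallel package rather than necessarily the post's own code; the simulation function (a two-sample t-test) and the 100 iterations are placeholders.

    library(parallel)

    # One simulated experiment: p-value from a two-sample t-test (placeholder example)
    one_sim <- function(k) {
      x <- rnorm(50)
      y <- rnorm(50, mean = 0.2)
      t.test(x, y)$p.value
    }

    # Sequential (the default approach): for (k in 1:100) ..., or equivalently
    p_seq <- sapply(1:100, one_sim)

    # Parallel: spread the same 100 iterations across the available cores
    cl <- makeCluster(detectCores() - 1)
    p_par <- parSapply(cl, 1:100, one_sim)
    stopCluster(cl)

    mean(p_par < .05)  # e.g., the estimated power of the simulated test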
[101] Transparency Makes Research Evaluable: Evaluating a Field Experiment on Crime Published in Nature
A recently published Nature paper (.htm) examined an interesting psychological hypothesis and applied it to a policy relevant question. The authors ran an ambitious field experiment and posted all their data, code, and materials. They also were transparent in showing the results of many different analyses, including some that yielded non-significant results. This is in…