Diederik Stapel, Dirk Smeesters, and Lawrence Sanna published psychology papers with fake data. Each faked in his own idiosyncratic way; nevertheless, their data share something in common. Real data are noisy. Theirs aren’t. Gregor Mendel’s data also lack noise (yes, Mendel of pea-experiment fame). Moreover, in a mathematical sense, his data are just as…

# Author: Uri Simonsohn

## [17] No-way Interactions

This post shares a shocking and counterintuitive fact about studies looking at interactions where effects are predicted to get smaller (attenuated interactions). I needed a working example and went with Fritz Strack et al.’s (1988, .pdf) famous paper [933 Google cites], in which participants rated cartoons as funnier if they saw them while holding a…
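The arithmetic behind attenuated interactions can be sketched with a quick power calculation. This is my illustration, not the post's code: it assumes a hypothetical 2×2 design in which a simple effect of d = 0.5 SD is predicted to shrink to d/2, two-sided α = .05, 90% power, and a normal approximation to the t-test.

```python
from math import ceil
from statistics import NormalDist

# Normal-approximation sample sizes (assumptions: two-sided alpha=.05, 90% power).
z = NormalDist().inv_cdf
z_a, z_p = z(1 - 0.05 / 2), z(0.90)
d = 0.5  # hypothetical simple effect, in SD units

# Two-cell study of the simple effect: the SE of the mean difference is sqrt(2/n).
n_simple = ceil(2 * ((z_a + z_p) / d) ** 2)  # n per cell

# 2x2 study where the effect is predicted to halve: the interaction contrast is
# d - d/2 = d/2, and its SE is sqrt(4/n), twice the SE of a simple effect.
n_inter = ceil(4 * ((z_a + z_p) / (d / 2)) ** 2)  # n per cell

print(n_simple, n_inter)               # per-cell sample sizes
print((4 * n_inter) / (2 * n_simple))  # total-N ratio, roughly 16x
```

Halving the effect quadruples the required n, and the wider interaction contrast quadruples it again, which is why the total sample needed is roughly sixteen times that of the original two-cell study.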

## [15] Citing Prospect Theory

Kahneman and Tversky’s (1979) Prospect Theory (.pdf), with its 9,206 citations, is the most cited article in Econometrica, the prestigious journal in which it appeared. In fact, it is more cited than any article published in any economics journal. [1] Let’s break it down by year. To be clear, this figure shows that just in 2013, Prospect Theory got about…

## [13] Posterior-Hacking

Many believe that while p-hacking invalidates p-values, it does not invalidate Bayesian inference. Many are wrong. This blog post presents two examples from my new “Posterior-Hacking” (SSRN) paper showing that selective reporting invalidates Bayesian inference as much as it invalidates p-values. Example 1. Chronological Rejuvenation experiment In “False-Positive Psychology” (SSRN), Joe, Leif and I ran experiments to demonstrate how easy…
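The mechanism can be illustrated with a quick simulation. This is my sketch, not the paper's analysis: under a true null, a hypothetical researcher measures two dependent variables and reports whichever looks stronger.

```python
import random
from statistics import NormalDist

# Sketch: 10,000 null experiments, each with two hypothetical DVs.
# The selective reporter keeps whichever DV has the larger |z|-statistic.
random.seed(1)
crit = NormalDist().inv_cdf(0.975)  # two-sided 5% cutoff

sims = 10_000
false_pos = 0
for _ in range(sims):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)  # both effects truly zero
    z_reported = max(abs(z1), abs(z2))               # report the better-looking DV
    if z_reported > crit:
        false_pos += 1

rate = false_pos / sims
print(rate)  # near 0.0975 = 1 - 0.95**2, not the nominal 0.05
```

The selection step biases the reported data themselves, so any statistic computed from those data, a p-value, a likelihood ratio, or a posterior, inherits the bias.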

## [9] Titleogy: Some facts about titles

Naming things is fun. Not sure why, but it is. I have collaborated in the naming of people, cats, papers, a blog, its posts, and in coining the term “p-hacking.” All were fun to do. So I thought I would write a Colada on titles. To add color I collected some data. At the end…

## [4] The Folly of Powering Replications Based on Observed Effect Size

It is common for researchers running replications to set their sample size assuming the effect size the original researchers got is correct. So if the original study found an effect size of d=.73, the replicator assumes the true effect is d=.73, and sets sample size so as to have a 90% chance, say, of getting a significant…
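The setup being criticized can be sketched with a standard power formula. This is my illustration, using a normal approximation to the two-sample t-test (assumptions: two-sided α = .05, 90% target power); the hypothetical halved effect in the second call is mine, not the post's.

```python
from math import ceil
from statistics import NormalDist

z = NormalDist().inv_cdf

def n_per_group(d, alpha=0.05, power=0.90):
    """Per-group n for a two-sample comparison, normal approximation."""
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2)

print(n_per_group(0.73))      # about 40 per group IF the observed d=.73 were the truth
print(n_per_group(0.73 / 2))  # if the true effect is half as big, roughly 4x as many
```

The problem is the conditional: the observed d=.73 is a noisy, often inflated estimate, so a replication powered for it is frequently underpowered for the true effect.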

## [1] "Just Posting It" works, leads to new retraction in Psychology

The fortuitous discovery of new fake data. For a project I worked on this past May, I needed data for variables as different from each other as possible. From the data-posting journal Judgment and Decision Making I downloaded data for ten, including one from a now-retracted paper involving the estimation of coin sizes. I created…