Data Colada

Author: Uri Simonsohn

[31] Women are taller than men: Misusing Occam’s Razor to lobotomize discussions of alternative explanations

Posted on December 18, 2014 (updated February 11, 2020) by Uri Simonsohn

Most scientific studies document a pattern for which the authors provide an explanation. The job of readers and reviewers is to examine whether that pattern is better explained by alternative explanations. When alternative explanations are offered, it is common for authors to acknowledge that although, yes, each study has potential confounds, no single alternative explanation…

Read more

[29] Help! Someone Thinks I p-hacked

Posted on October 22, 2014 (updated April 22, 2020) by Uri Simonsohn

It has become more common to publicly speculate, upon noticing a paper with unusual analyses, that a reported finding was obtained via p-hacking. This post discusses how authors can persuasively respond to such speculations. Examples of public speculation of p-hacking Example 1. A Slate.com post by Andrew Gelman suspected p-hacking in a paper that collected…

Read more

[28] Confidence Intervals Don't Change How We Think about Data

Posted on October 8, 2014 (updated February 11, 2020) by Uri Simonsohn

Some journals are thinking of discouraging authors from reporting p-values and encouraging or even requiring them to report confidence intervals instead. Would our inferences be better, or even just different, if we reported confidence intervals instead of p-values? One possibility is that researchers become less obsessed with the arbitrary significant/not-significant dichotomy. We start paying more…

Read more

[24] P-curve vs. Excessive Significance Test

Posted on June 27, 2014 (updated February 12, 2020) by Uri Simonsohn

In this post I use data from the Many-Labs replication project to contrast the (pointless) inferences one arrives at using the Excessive Significance Test with the (critically important) inferences one arrives at with p-curve. The Many-Labs project is a collaboration of 36 labs around the world, each running a replication of 13 published effects in…

Read more

[23] Ceiling Effects and Replications

Posted on June 4, 2014 (updated February 11, 2020) by Uri Simonsohn

A recent failure to replicate led to an attention-grabbing debate in psychology. As you may expect from university professors, some of it involved data. As you may not expect from university professors, much of it involved saying mean things that would get a child sent to the principal's office (.pdf). The hostility in the debate has obscured an interesting…

Read more

[20] We cannot afford to study effect size in the lab

Posted on May 1, 2014 (updated January 30, 2020) by Uri Simonsohn

Methods people often say – in textbooks, task forces, papers, editorials, over coffee, in their sleep – that we should focus more on estimating effect sizes rather than testing for significance. I am kind of a methods person, and I am kind of going to say the opposite. Only kind of the opposite because it…

Read more

[19] Fake Data: Mendel vs. Stapel

Posted on April 14, 2014 (updated February 11, 2020) by Uri Simonsohn

Diederik Stapel, Dirk Smeesters, and Lawrence Sanna published psychology papers with fake data. They each faked in their own idiosyncratic way; nevertheless, their data do share something in common. Real data are noisy. Theirs aren't. Gregor Mendel's data also lack noise (yes, famous peas-experimenter Mendel). Moreover, in a mathematical sense, his data are just as…

Read more

[17] No-way Interactions

Posted on March 12, 2014 (updated February 11, 2020) by Uri Simonsohn

This post shares a shocking and counterintuitive fact about studies looking at interactions where effects are predicted to get smaller (attenuated interactions). I needed a working example and went with Fritz Strack et al.'s (1988, .html) famous paper [933 Google cites], in which participants rated cartoons as funnier if they saw them while holding a…

Read more

[15] Citing Prospect Theory

Posted on February 10, 2014 (updated February 12, 2020) by Uri Simonsohn

Kahneman and Tversky's (1979) Prospect Theory (.html), with its 9,206 citations, is the most cited article in Econometrica, the prestigious journal in which it appeared. In fact, it is more cited than any article published in any economics journal. [1] Let's break it down by year. To be clear, this figure shows that just in 2013, Prospect Theory got about…

Read more

[13] Posterior-Hacking

Posted on January 13, 2014 (updated January 30, 2020) by Uri Simonsohn

Many believe that while p-hacking invalidates p-values, it does not invalidate Bayesian inference. Many are wrong. This blog post presents two examples from my new "Posterior-Hacking" (SSRN) paper showing that selective reporting invalidates Bayesian inference as much as it invalidates p-values. Example 1. Chronological Rejuvenation experiment. In "False-Positive Psychology" (SSRN), Joe, Leif and I ran experiments to demonstrate how easy…

Read more

© 2021, Uri Simonsohn, Leif Nelson, and Joseph Simmons. For permission to reprint individual blog posts on Data Colada, please contact us via email.