Data Colada

Category: p-hacking

[91] p-hacking fast and slow: Evaluating a forthcoming AER paper deeming some econ literatures less trustworthy

Posted on September 15, 2020 (updated August 16, 2021) by Uri Simonsohn

The authors of a forthcoming AER article (.pdf), "Methods Matter: P-Hacking and Publication Bias in Causal Analysis in Economics", painstakingly harvested thousands of test results from 25 economics journals to answer an interesting question: Are studies that use some research designs more trustworthy than others? In this post I will explain why I think their…


[73] Don't Trust Internal Meta-Analysis

Posted on October 24, 2018 (updated November 18, 2020) by guest co-author Joachim Vosgerau, with Uri, Leif, & Joe

Researchers have increasingly been using internal meta-analysis to summarize the evidence from multiple studies within the same paper. Much of the time, this involves computing the average effect size across the studies, and assessing whether that effect size is significantly different from zero. At first glance, internal meta-analysis seems like a wonderful idea. It increases…
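As a minimal illustration of the computation being critiqued (a sketch, not the post's own code; every study value below is hypothetical), a fixed-effect internal meta-analysis weights each study's effect size by its inverse variance and z-tests the pooled estimate against zero:

```python
# Fixed-effect internal meta-analysis sketch: inverse-variance-weighted
# average of per-study effect sizes, z-tested against zero.
# All effect sizes and standard errors below are hypothetical.
import math

effects = [0.21, 0.14, 0.30, 0.08]   # hypothetical per-study effect sizes (d)
ses     = [0.12, 0.11, 0.13, 0.10]   # hypothetical standard errors

weights   = [1 / se**2 for se in ses]
pooled    = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
z = pooled / pooled_se
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided normal p
print(f"pooled d = {pooled:.3f}, z = {z:.2f}, p = {p:.4f}")
```

The pooled test is only as trustworthy as the process that produced the individual studies: if each study was selectively reported, averaging them compounds the bias rather than canceling it.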


[68] Pilot-Dropping Backfires (So Daryl Bem Probably Did Not Do It)

Posted on January 25, 2018 (updated November 18, 2020) by Joe, Leif, & Uri

Uli Schimmack recently identified an interesting pattern in the data from Daryl Bem’s infamous “Feeling the Future” JPSP paper, in which he reported evidence for the existence of extrasensory perception (ESP; .htm) [1]. In each study, the effect size is larger among participants who completed the study earlier (blogpost: .htm). Uli referred to this as the "decline…


[48] P-hacked Hypotheses Are Deceivingly Robust

Posted on April 28, 2016 (updated January 30, 2020) by Uri Simonsohn

Sometimes we selectively report the analyses we run to test a hypothesis. Other times we selectively report which hypotheses we tested. One popular way to p-hack hypotheses involves subgroups. Upon realizing analyses of the entire sample do not produce a significant effect, we check whether analyses of various subsamples — women, or the young, or Republicans, or…
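To see how much this inflates false positives, here is a minimal simulation (my sketch, not the post's analysis): there is no true effect anywhere, yet testing the full sample first and then a handful of hypothetical subgroups, keeping whichever comparison is significant, pushes the significance rate well above the nominal 5%:

```python
# Simulate subgroup p-hacking under a true null: test the full sample,
# then k subgroups, and count a "finding" if any test reaches p < .05.
import math, random, statistics

def t_test_p(xs, ys):
    """Two-sided Welch-style test via a normal approximation
    (adequate here because the samples are reasonably large)."""
    if min(len(xs), len(ys)) < 2:
        return 1.0  # skip degenerate subgroups
    se = math.sqrt(statistics.variance(xs) / len(xs) +
                   statistics.variance(ys) / len(ys))
    z = (statistics.mean(xs) - statistics.mean(ys)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(1)
sims, n, k = 2000, 200, 4                      # k hypothetical subgroups
hits = 0
for _ in range(sims):
    treat = [random.gauss(0, 1) for _ in range(n)]   # no true effect
    ctrl  = [random.gauss(0, 1) for _ in range(n)]
    groups = [random.randrange(k) for _ in range(n)] # e.g., gender, age bracket
    ps = [t_test_p(treat, ctrl)]                     # full-sample test first
    for g in range(k):
        ps.append(t_test_p([x for x, gi in zip(treat, groups) if gi == g],
                           [y for y, gi in zip(ctrl, groups) if gi == g]))
    hits += min(ps) < .05
print(f"false-positive rate with subgroup peeking: {hits / sims:.3f}")  # well above .05
```

The post's broader point, which this simulation does not show, is that hypotheses selected this way can then survive many robustness checks, since the selection already happened at the hypothesis level.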


[29] Help! Someone Thinks I p-hacked

Posted on October 22, 2014 (updated April 22, 2020) by Uri Simonsohn

It has become more common to publicly speculate, upon noticing a paper with unusual analyses, that a reported finding was obtained via p-hacking. This post discusses how authors can persuasively respond to such speculations. Examples of public speculation of p-hacking: Example 1. A Slate.com post by Andrew Gelman suspected p-hacking in a paper that collected…


© 2021, Uri Simonsohn, Leif Nelson, and Joseph Simmons. For permission to reprint individual blog posts on DataColada please contact us via email.