Data Colada

Table of Contents

About Research Design

  • [31] Women are taller than men: Misusing Occam’s Razor to lobotomize discussions of alternative explanations
  • [36] How to Study Discrimination (or Anything) With Names; If You Must
  • [46] Controlling the Weather
  • [51] Greg vs. Jamal: Why Didn’t Bertrand and Mullainathan (2004) Replicate?
  • [124] "Complexity": 75% of participants missed comprehension questions in AER paper critiquing Prospect Theory

About Research Tips

  • [10] Reviewers are asking for it
  • [34] My Links Will Outlive You
  • [40] Reducing Fraud in Science
  • [69] Eight things I do to make my open research more findable and understandable

Comment on media coverage

  • [65] Spotlight on Science Journalism: The Health Benefits of Volunteering

Credibility Lab

  • [95] Groundhog: Addressing The Threat That R Poses To Reproducible Research
  • [100] Groundhog 2.0: Further addressing the threat R poses to reproducible research

Data Replicada

  • [81] Data Replicada
  • [82] Data Replicada #1: Do Elevated Viewpoints Increase Risk Taking?
  • [83] Data Replicada #2: Do Self-Construal and Group Size Influence How People Make Choices on Behalf of a Group?
  • [84] Data Replicada #3: Does Self-Concept Uncertainty Influence Magazine Subscription Choice?
  • [85] Data Replicada #4: The Problem of Hidden Confounds
  • [87] Data Replicada #5: Do Human-Like Products Inspire More Holistic Judgments?
  • [89] Data Replicada #6: The Problem of (Weird) Differential Attrition
  • [90] Data Replicada #7: Does Displaying Multiple Copies of a Product Increase Its Perceived Effectiveness?
  • [92] Data Replicada #8: Is The Left-Digit Bias Stronger When Prices Are Presented Side-By-Side?
  • [94] Data Replicada #9: Are Progression Ads More Credible?
  • [97] Data Replicada #10: Does Goal Conflict Affect Time Spent on Work and Leisure?

Discuss own paper

  • [30] Trim-and-Fill is Full of It (bias)
  • [43] Rain & Happiness: Why Didn’t Schwarz & Clore (1983) ‘Replicate’?
  • [45] Ambitious P-Hacking and P-Curve 4.0
  • [49] P-Curve Won’t Do Your Laundry, But Will Identify Replicable Findings
  • [62] Two-lines: The First Valid Test of U-Shaped Relationships
  • [73] Don't Trust Internal Meta-Analysis
  • [75] Intentionally Biased: People Purposely Don't Ignore Information They "Should" Ignore

Discuss Paper by Others

  • [1] "Just Posting It" works, leads to new retraction in Psychology
  • [3] A New Way To Increase Charitable Donations: Does It Replicate?
  • [11] “Exactly”: The Most Famous Framing Effect Is Robust To Precise Wording
  • [25] Maybe people actually enjoy being alone with their thoughts
  • [38] A Better Explanation Of The Endowment Effect
  • [50] Teenagers in Bikinis: Interpreting Police-Shooting Data
  • [54] The 90x75x50 heuristic: Noisy & Wasteful Sample Sizes In The “Social Science Replication Project”
  • [60] Forthcoming in JPSP: A Non-Diagnostic Audit of Psychological Research
  • [74] In Press at Psychological Science: A New 'Nudge' Supported by Implausible Data
  • [79] Experimentation Aversion: Reconciling the Evidence
  • [82] Data Replicada #1: Do Elevated Viewpoints Increase Risk Taking?
  • [83] Data Replicada #2: Do Self-Construal and Group Size Influence How People Make Choices on Behalf of a Group?
  • [84] Data Replicada #3: Does Self-Concept Uncertainty Influence Magazine Subscription Choice?
  • [85] Data Replicada #4: The Problem of Hidden Confounds
  • [86] The Data Colada Seminar Series
  • [87] Data Replicada #5: Do Human-Like Products Inspire More Holistic Judgments?
  • [89] Data Replicada #6: The Problem of (Weird) Differential Attrition
  • [90] Data Replicada #7: Does Displaying Multiple Copies of a Product Increase Its Perceived Effectiveness?
  • [92] Data Replicada #8: Is The Left-Digit Bias Stronger When Prices Are Presented Side-By-Side?
  • [94] Data Replicada #9: Are Progression Ads More Credible?
  • [96] Madam Speaker: Are Female Presenters Treated Worse in Econ Seminars?
  • [97] Data Replicada #10: Does Goal Conflict Affect Time Spent on Work and Leisure?
  • [98] Evidence of Fraud in an Influential Field Experiment About Dishonesty
  • [99] Hyping Fisher: The Most Cited 2019 QJE Paper Relied on an Outdated Stata Default to Conclude Regression p-values Are Inadequate
  • [101] Transparency Makes Research Evaluable: Evaluating a Field Experiment on Crime Published in Nature
  • [119] A Hidden Confound in a Psych Methods Pre-registrations Critique
  • [121] Dear Political Scientists: Don't Bin, GAM Instead
  • [122] Arresting Flexibility: A QJE field experiment on police behavior with about 40 outcome variables
  • [124] "Complexity": 75% of participants missed comprehension questions in AER paper critiquing Prospect Theory
  • [125] "Complexity" 2: Don't be mean to the median

Effect size

  • [18] MTurk vs. The Lab: Either Way We Need Big Samples
  • [20] We cannot afford to study effect size in the lab
  • [28] Confidence Intervals Don't Change How We Think about Data
  • [63] "Many Labs" Overestimated The Importance of Hidden Moderators
  • [70] How Many Studies Have Not Been Run? Why We Still Think the Average Effect Does Not Exist

Fake data

  • [1] "Just Posting It" works, leads to new retraction in Psychology
  • [19] Fake Data: Mendel vs. Stapel
  • [21] Fake-Data Colada: Excessive Linearity
  • [40] Reducing Fraud in Science
  • [74] In Press at Psychological Science: A New 'Nudge' Supported by Implausible Data
  • [77] Number-Bunching: A New Tool for Forensic Data Analysis
  • [98] Evidence of Fraud in an Influential Field Experiment About Dishonesty
  • [109] Data Falsificada (Part 1): "Clusterfake"
  • [110] Data Falsificada (Part 2): "My Class Year Is Harvard"
  • [111] Data Falsificada (Part 3): "The Cheaters Are Out of Order"
  • [112] Data Falsificada (Part 4): "Forgetting The Words"
  • [114] Exhibits 3, 4, and 5
  • [117] The Impersonator: The Fake Data Were Coming From Inside the Lab
  • [118] Harvard’s Gino Report Reveals How A Dataset Was Altered

file-drawer

  • [24] P-curve vs. Excessive Significance Test
  • [55] The file-drawer problem is unfixable, and that’s OK
  • [58] The Funnel Plot is Invalid Because of This Crazy Assumption: r(n,d)=0
  • [59] PET-PEESE Is Not Like Homeopathy
  • [71] The (Surprising?) Shape of the File Drawer
  • [72] Metacritic Has A (File-Drawer) Problem
  • [73] Don't Trust Internal Meta-Analysis

Interactions

  • [17] No-way Interactions
  • [57] Interactions in Logit Regressions: Why Positive May Mean Negative
  • [80] Interaction Effects Need Interaction Controls
  • [123] Dear Political Scientists: The binning estimator violates ceteris paribus

Just fun

  • [5] The Consistency of Random Numbers
  • [8] Adventures in the Assessment of Animal Speed and Morality
  • [9] Titleogy: Some facts about titles
  • [15] Citing Prospect Theory
  • [22] You know what's on our shopping list
  • [32] Spotify Has Trouble With A Marketing Research Exam
  • [56] TWARKing: Test-Weighting After Results are Known
  • [72] Metacritic Has A (File-Drawer) Problem
  • [120] Off-Label Smirnov: How Many Subjects Show an Effect in Between-Subjects Experiments?

Lawsuits

  • [113] Data Litigada: Thank You (And An Update)
  • [114] Exhibits 3, 4, and 5
  • [116] Our (First?) Day In Court
  • [118] Harvard’s Gino Report Reveals How A Dataset Was Altered

Meta Analysis

  • [30] Trim-and-Fill is Full of It (bias)
  • [33] "The" Effect Size Does Not Exist
  • [58] The Funnel Plot is Invalid Because of This Crazy Assumption: r(n,d)=0
  • [59] PET-PEESE Is Not Like Homeopathy
  • [63] "Many Labs" Overestimated The Importance of Hidden Moderators
  • [76] Heterogeneity Is Replicable: Evidence From Maluma, MTurk, and Many Labs
  • [104] Meaningless Means: Some Fundamental Problems With Meta-Analytic Averages
  • [105] Meaningless Means #1: The Average Effect of Nudging Is d = .43
  • [106] Meaningless Means #2: The Average Effect of Nudging in Academic Publications is 8.7%
  • [107] Meaningless Means #3: The Truth About Lies

Music

  • [2] Using Personal Listening Habits to Identify Personal Music Preferences
  • [32] Spotify Has Trouble With A Marketing Research Exam
  • [72] Metacritic Has A (File-Drawer) Problem

On Bayesian Stats

  • [13] Posterior-Hacking
  • [35] The Default Bayesian Test is Prejudiced Against Small Effects
  • [42] Accepting the Null: Where to Draw the Line?
  • [78a] If you think p-values are problematic, wait until you understand Bayes Factors
  • [78b] Hyp-Chart, the Missing Link Between P-values and Bayes Factors
  • [78c] Bayes Factors in Ten Recent Psych Science Papers

Opinion

  • [52] Menschplaining: Three Ideas for Civil Criticism
  • [53] What I Want Our Field To Prioritize

p-curve

  • [24] P-curve vs. Excessive Significance Test
  • [37] Power Posing: Reassessing The Evidence Behind The Most Popular TED Talk
  • [41] Falsely Reassuring: Analyses of ALL p-values
  • [45] Ambitious P-Hacking and P-Curve 4.0
  • [49] P-Curve Won’t Do Your Laundry, But Will Identify Replicable Findings
  • [59] PET-PEESE Is Not Like Homeopathy
  • [60] Forthcoming in JPSP: A Non-Diagnostic Audit of Psychological Research
  • [61] Why p-curve excludes ps>.05
  • [66] Outliers: Evaluating A New P-Curve Of Power Poses
  • [67] P-curve Handles Heterogeneity Just Fine
  • [91] p-hacking fast and slow: Evaluating a forthcoming AER paper deeming some econ literatures less trustworthy

p-hacking

  • [29] Help! Someone Thinks I p-hacked
  • [48] P-hacked Hypotheses Are Deceivingly Robust
  • [68] Pilot-Dropping Backfires (So Daryl Bem Probably Did Not Do It)
  • [73] Don't Trust Internal Meta-Analysis
  • [91] p-hacking fast and slow: Evaluating a forthcoming AER paper deeming some econ literatures less trustworthy
  • [122] Arresting Flexibility: A QJE field experiment on police behavior with about 40 outcome variables

Preregistration

  • [12] Preregistration: Not just for the Empiro-zealots
  • [44] AsPredicted: Pre-registration Made Easy
  • [55] The file-drawer problem is unfixable, and that’s OK
  • [64] How To Properly Preregister A Study
  • [101] Transparency Makes Research Evaluable: Evaluating a Field Experiment on Crime Published in Nature
  • [115] Preregistration Prevalence
  • [119] A Hidden Confound in a Psych Methods Pre-registrations Critique
  • [122] Arresting Flexibility: A QJE field experiment on police behavior with about 40 outcome variables

Replication

  • [4] The Folly of Powering Replications Based on Observed Effect Size
  • [7] Forthcoming in the American Economic Review: A Misdiagnosed Failure-to-Replicate
  • [11] “Exactly”: The Most Famous Framing Effect Is Robust To Precise Wording
  • [23] Ceiling Effects and Replications
  • [37] Power Posing: Reassessing The Evidence Behind The Most Popular TED Talk
  • [38] A Better Explanation Of The Endowment Effect
  • [43] Rain & Happiness: Why Didn’t Schwarz & Clore (1983) ‘Replicate’?
  • [47] Evaluating Replications: 40% Full ≠ 60% Empty
  • [51] Greg vs. Jamal: Why Didn’t Bertrand and Mullainathan (2004) Replicate?
  • [54] The 90x75x50 heuristic: Noisy & Wasteful Sample Sizes In The “Social Science Replication Project”
  • [76] Heterogeneity Is Replicable: Evidence From Maluma, MTurk, and Many Labs
  • [82] Data Replicada #1: Do Elevated Viewpoints Increase Risk Taking?
  • [83] Data Replicada #2: Do Self-Construal and Group Size Influence How People Make Choices on Behalf of a Group?
  • [84] Data Replicada #3: Does Self-Concept Uncertainty Influence Magazine Subscription Choice?
  • [85] Data Replicada #4: The Problem of Hidden Confounds
  • [87] Data Replicada #5: Do Human-Like Products Inspire More Holistic Judgments?
  • [89] Data Replicada #6: The Problem of (Weird) Differential Attrition
  • [90] Data Replicada #7: Does Displaying Multiple Copies of a Product Increase Its Perceived Effectiveness?
  • [92] Data Replicada #8: Is The Left-Digit Bias Stronger When Prices Are Presented Side-By-Side?
  • [94] Data Replicada #9: Are Progression Ads More Credible?
  • [97] Data Replicada #10: Does Goal Conflict Affect Time Spent on Work and Leisure?

Reproducibility

  • [100] Groundhog 2.0: Further addressing the threat R poses to reproducible research
  • [108] MRAN is Dead, long live GRAN

ResearchBox

  • [93] ResearchBox: Open Research Made Easy

Statistical Power

  • [6] Samples Can't Be Too Large
  • [18] MTurk vs. The Lab: Either Way We Need Big Samples
  • [26] What If Games Were Shorter?
  • [54] The 90x75x50 heuristic: Noisy & Wasteful Sample Sizes In The “Social Science Replication Project”

Teaching

  • [2] Using Personal Listening Habits to Identify Personal Music Preferences
  • [26] What If Games Were Shorter?
  • [56] TWARKing: Test-Weighting After Results are Known

Unexpectedly Difficult Statistical Concepts

  • [17] No-way Interactions
  • [20] We cannot afford to study effect size in the lab
  • [27] Thirty-somethings are Shrinking and Other U-Shaped Challenges
  • [33] "The" Effect Size Does Not Exist
  • [39] Power Naps: When do Within-Subject Comparisons Help vs Hurt (yes, hurt) Power?
  • [41] Falsely Reassuring: Analyses of ALL p-values
  • [42] Accepting the Null: Where to Draw the Line?
  • [46] Controlling the Weather
  • [50] Teenagers in Bikinis: Interpreting Police-Shooting Data
  • [57] Interactions in Logit Regressions: Why Positive May Mean Negative
  • [70] How Many Studies Have Not Been Run? Why We Still Think the Average Effect Does Not Exist
  • [71] The (Surprising?) Shape of the File Drawer
  • [79] Experimentation Aversion: Reconciling the Evidence
  • [80] Interaction Effects Need Interaction Controls
  • [88] The Hot-Hand Artifact for Dummies & Behavioral Scientists
  • [91] p-hacking fast and slow: Evaluating a forthcoming AER paper deeming some econ literatures less trustworthy
  • [99] Hyping Fisher: The Most Cited 2019 QJE Paper Relied on an Outdated Stata Default to Conclude Regression p-values Are Inadequate
  • [103] Mediation Analysis is Counterintuitively Invalid
  • [120] Off-Label Smirnov: How Many Subjects Show an Effect in Between-Subjects Experiments?
  • [121] Dear Political Scientists: Don't Bin, GAM Instead
  • [123] Dear Political Scientists: The binning estimator violates ceteris paribus

© 2021, Uri Simonsohn, Leif Nelson, and Joseph Simmons. For permission to reprint individual blog posts on Data Colada, please contact us via email.