Data Colada

Author: Uri Simonsohn

[42] Accepting the Null: Where to Draw the Line?

Posted on October 28, 2015 (updated February 11, 2020) by Uri Simonsohn

We typically ask if an effect exists.  But sometimes we want to ask if it does not. For example, how many of the “failed” replications in the recent reproducibility project published in Science (.pdf) suggest the absence of an effect? Data have noise, so we can never say ‘the effect is exactly zero.’  We can…

Read more
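
The question the excerpt raises, showing that an effect is absent rather than present, is commonly handled with an equivalence test (two one-sided tests, TOST). Below is a minimal sketch using statsmodels; the simulated data and the ±0.3 equivalence bounds are illustrative choices, not values from the post:

```python
# Minimal equivalence-test (TOST) sketch: "accept the null" by showing the
# effect is inside a pre-set margin. The +/-0.3 bounds and simulated data
# are illustrative choices, not values from the post.
import numpy as np
from statsmodels.stats.weightstats import ttost_ind

rng = np.random.default_rng(42)
control = rng.normal(loc=0.0, scale=1.0, size=100)
treated = rng.normal(loc=0.05, scale=1.0, size=100)  # tiny true effect

# TOST's null: the true difference lies OUTSIDE (-0.3, 0.3); a small
# p-value therefore supports "no effect larger than the margin".
p_tost, lower, upper = ttost_ind(treated, control, low=-0.3, upp=0.3)
print(f"TOST p-value: {p_tost:.3f}")
```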

[41] Falsely Reassuring: Analyses of ALL p-values

Posted on August 24, 2015 (updated October 18, 2023) by Uri Simonsohn

It is a neat idea. Get a ton of papers. Extract all p-values. Examine the prevalence of p-hacking by assessing if there are too many p-values near p=.05. Economists have done it [SSRN], as have psychologists [.html], and biologists [.html]. These charts with distributions of p-values come from those papers: The dotted circles highlight the excess of…

Read more
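
For intuition about the "too many p-values near .05" diagnostic the excerpt describes, here is a toy sketch with simulated p-values; the cited papers instead scrape p-values from large corpora of published articles, and the simple bin comparison below is a crude stand-in for their analyses:

```python
# Toy version of the diagnostic: bin p-values in .01-wide bins and look
# for a pile-up just under .05. Simulated data, for illustration only.
import numpy as np

rng = np.random.default_rng(1)
p_null = rng.uniform(0, 1, 9000)          # null effects: uniform p-values
p_hacked = rng.uniform(0.04, 0.05, 300)   # p-hacked results just under .05
p_values = np.concatenate([p_null, p_hacked])

counts, edges = np.histogram(p_values, bins=np.arange(0, 1.01, 0.01))
print("p-values between .03 and .04:", counts[3])
print("p-values between .04 and .05:", counts[4])  # the telltale bump
```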

[40] Reducing Fraud in Science

Posted on June 29, 2015 (updated February 11, 2020) by Uri Simonsohn

Fraud in science is often attributed to incentives: we reward sexy results → fraud happens. The solution, the argument goes, is to reward other things. In this post I counter-argue, proposing three alternative solutions. Problems with the “Change the Incentives” solution: First, even if rewarding sexy results caused fraud, it does not follow that we should stop rewarding sexy results. We…

Read more

[39] Power Naps: When do Within-Subject Comparisons Help vs Hurt (yes, hurt) Power?

Posted on June 22, 2015 (updated February 11, 2020) by Uri Simonsohn

A recent Science paper (.html) used a total sample size of N=40 to arrive at the conclusion that implicit racial and gender stereotypes can be reduced while napping. N=40 is a small sample for a between-subject experiment. One needs N=92 to reliably detect that men are heavier than women (SSRN). The study, however, was within-subject; for instance, its dependent…

Read more
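
The excerpt's N=92 works out to about 46 participants per cell. A quick check using statsmodels' power routines, assuming d ≈ 0.59 for the male-female weight difference (an assumption reverse-engineered to match the excerpt's number, not a value stated in it):

```python
# Quick check of the excerpt's N=92: per-group n for 80% power in a
# two-sided between-subjects t-test. d = 0.59 is an assumed effect size
# for the male-female weight difference (not stated in the post).
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.59, alpha=0.05,
                                          power=0.80, alternative='two-sided')
print(f"n per group = {n_per_group:.1f}, total N = {2 * n_per_group:.0f}")
# -> about 46 per group, i.e., N = 92 in total
```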

[36] How to Study Discrimination (or Anything) With Names; If You Must

Posted on April 23, 2015 (updated May 23, 2020) by Uri Simonsohn

Consider these paraphrased famous findings:
  • “Because his name resembles ‘dentist,’ Dennis became one” (JPSP, .pdf)
  • “Because the applicant was black (named Jamal instead of Greg) he was not interviewed” (AER, .pdf)
  • “Because the applicant was female (named Jennifer instead of John), she got a lower offer” (PNAS, .pdf)
Everything that matters (income, age, location, religion) correlates with…

Read more

[35] The Default Bayesian Test is Prejudiced Against Small Effects

Posted on April 9, 2015 (updated November 18, 2020) by Uri Simonsohn

When considering any statistical tool I think it is useful to answer the following two practical questions: 1. “Does it give reasonable answers in realistic circumstances?” 2. “Does it answer a question I am interested in?” In this post I explain why, for me, when it comes to the default Bayesian test that's starting to…

Read more
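
A sketch of the concern in the excerpt: simulate a small but real effect and compute the default (JZS-prior) Bayes factor. The d = 0.2 effect and n = 100 per group are illustrative values, and pingouin's bayesfactor_ttest is used here as one available implementation of the default test:

```python
# Simulate a small but real effect (d = 0.2, illustrative) and compute the
# default JZS Bayes factor; pingouin's bayesfactor_ttest is one
# implementation of this default (r = 0.707 Cauchy prior).
import numpy as np
from scipy import stats
from pingouin import bayesfactor_ttest

rng = np.random.default_rng(7)
x = rng.normal(0.0, 1.0, 100)   # control group
y = rng.normal(0.2, 1.0, 100)   # treatment group, true d = 0.2

t_stat, p_val = stats.ttest_ind(y, x)
bf10 = bayesfactor_ttest(t_stat, nx=100, ny=100)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}, BF10 = {bf10:.2f}")
# BF10 < 1 would mean the default test leans toward the null
# even though a (small) effect truly exists.
```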

[34] My Links Will Outlive You

Posted on March 2, 2015 (updated January 23, 2019) by Uri Simonsohn

If you are like me, from time to time your papers include links to online references. Because the internet changes so often, by the time readers follow those links, who knows if the cited content will still be there. This blogpost shares a simple way to ensure your links live “forever.”  I got the idea…

Read more
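
One present-day way to implement the excerpt's idea, making cited links durable, is to snapshot each URL with the Internet Archive before citing it. A sketch follows, with the caveat that the post's own recipe may differ and that the response-header detail below is an assumption about the current API:

```python
# Sketch: snapshot a cited URL with the Internet Archive's Wayback Machine
# and cite the archived copy instead of (or alongside) the live link.
# The /save/ endpoint is public; the Content-Location header is an
# assumption about its current response format.
import requests

url = "https://example.com/some-cited-page"  # hypothetical reference
resp = requests.get(f"https://web.archive.org/save/{url}", timeout=60)

snapshot = resp.headers.get("Content-Location")
if snapshot:
    print("archived at:", f"https://web.archive.org{snapshot}")
else:
    print("snapshot requested; final URL:", resp.url)
```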

[33] "The" Effect Size Does Not Exist

Posted on February 9, 2015 (updated February 11, 2020) by Uri Simonsohn

Consider the robust phenomenon of anchoring, where people’s numerical estimates are biased towards arbitrary starting points. What does it mean to say “the” effect size of anchoring? It surely depends on moderators like domain of the estimate, expertise, and perceived informativeness of the anchor. Alright, how about “the average” effect size of anchoring? That’s simple enough…

Read more

[31] Women are taller than men: Misusing Occam’s Razor to lobotomize discussions of alternative explanations

Posted on December 18, 2014 (updated February 11, 2020) by Uri Simonsohn

Most scientific studies document a pattern for which the authors provide an explanation. The job of readers and reviewers is to examine whether that pattern is better explained by alternative explanations. When alternative explanations are offered, it is common for authors to acknowledge that although, yes, each study has potential confounds, no single alternative explanation…

Read more

[29] Help! Someone Thinks I p-hacked

Posted on October 22, 2014 (updated April 22, 2020) by Uri Simonsohn

It has become more common to publicly speculate, upon noticing a paper with unusual analyses, that a reported finding was obtained via p-hacking. This post discusses how authors can persuasively respond to such speculations. Examples of public speculation of p-hacking: Example 1. A Slate.com post by Andrew Gelman suspected p-hacking in a paper that collected…

Read more


© 2021, Uri Simonsohn, Leif Nelson, and Joseph Simmons. For permission to reprint individual blog posts on DataColada, please contact us via email.