Data Colada

Category: Effect size

[70] How Many Studies Have Not Been Run? Why We Still Think the Average Effect Does Not Exist

Posted on March 9, 2018 (updated February 12, 2020) by Leif Nelson

We have argued that, for most effects, it is impossible to identify the average effect (datacolada.org/33). The argument is subtle (but not statistical), and given the number of well-informed people who seem to disagree, perhaps we are simply wrong. This is my effort to explain why we think identifying the average effect is so hard….

Read more

[63] "Many Labs" Overestimated The Importance of Hidden Moderators

Posted on October 20, 2017 (updated February 16, 2020) by Uri Simonsohn

Are hidden moderators a thing? Do experiments intended to be identical lead to inexplicably different results? Back in 2014, the "Many Labs" project (htm) reported an ambitious attempt to answer these questions. More than 30 different labs ran the same set of studies and the paper presented the results side-by-side. They did not find any…

Read more

[28] Confidence Intervals Don't Change How We Think about Data

Posted on October 8, 2014 (updated February 11, 2020) by Uri Simonsohn

Some journals are thinking of discouraging authors from reporting p-values and encouraging or even requiring them to report confidence intervals instead. Would our inferences be better, or even just different, if we reported confidence intervals instead of p-values? One possibility is that researchers become less obsessed with the arbitrary significant/not-significant dichotomy. We start paying more…

Read more
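
To make the excerpt's question concrete, here is a minimal sketch (the data are simulated and the numbers illustrative, not from the post): for a two-sample t-test, the 95% confidence interval for the mean difference excludes zero exactly when p < .05, so each report can be recovered from the other.

```python
# Illustrative sketch: the 95% CI and the p-value from the same two-sample
# t-test are transformations of the same statistic. All numbers are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(0.0, 1.0, 50)
treatment = rng.normal(0.4, 1.0, 50)  # assumed true effect d = 0.4

t, p = stats.ttest_ind(treatment, control)  # pooled-variance test

# 95% CI for the mean difference, using the same pooled variance and df
diff = treatment.mean() - control.mean()
n1, n2 = len(treatment), len(control)
sp = np.sqrt(((n1 - 1) * treatment.var(ddof=1) +
              (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))
se = sp * np.sqrt(1 / n1 + 1 / n2)
crit = stats.t.ppf(0.975, df=n1 + n2 - 2)
lo, hi = diff - crit * se, diff + crit * se

print(f"p = {p:.4f}, 95% CI = ({lo:.3f}, {hi:.3f})")
print("CI excludes 0:", lo > 0 or hi < 0, "| p < .05:", p < 0.05)  # always agree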

[20] We cannot afford to study effect size in the lab

Posted on May 1, 2014 (updated January 30, 2020) by Uri Simonsohn

Methods people often say – in textbooks, task forces, papers, editorials, over coffee, in their sleep – that we should focus more on estimating effect sizes rather than testing for significance. I am kind of a methods person, and I am kind of going to say the opposite. Only kind of the opposite because it…

Read more
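
The title's claim becomes vivid once you price out the precision of an effect-size estimate. A back-of-the-envelope sketch (illustrative sample sizes and an assumed d = 0.4, not the post's own numbers): using the standard large-sample variance of Cohen's d, a typical lab sample leaves the estimate almost uninformative.

```python
# Approximate 95% CI half-width for Cohen's d, using the standard
# large-sample variance: var(d) ≈ (n1+n2)/(n1*n2) + d^2 / (2*(n1+n2)).
# The per-cell sample sizes and the assumed d = 0.4 are illustrative.
import math

def d_ci_halfwidth(d, n_per_cell):
    n1 = n2 = n_per_cell
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return 1.96 * se

for n in (20, 50, 200, 1000):
    print(f"n = {n:>4}/cell: d = 0.40 ± {d_ci_halfwidth(0.4, n):.2f}")
```

With 20 participants per cell the interval runs from roughly -0.23 to 1.03, spanning everything from "no effect" to "huge effect"; shrinking it to about ±0.10 takes on the order of a thousand participants per cell.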

[18] MTurk vs. The Lab: Either Way We Need Big Samples

Posted on April 4, 2014 (updated February 12, 2020) by Joe Simmons

Back in May 2012, we were interested in the question of how many participants a typical between-subjects psychology study needs to have an 80% chance to detect a true effect. To answer this, you need to know the effect size for a typical study, which you can’t know from examining the published literature because it…

Read more
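
The standard calculation behind the excerpt's question looks like this (a sketch using Cohen's benchmark effect sizes; the post's own estimate of the typical effect size is not reproduced here):

```python
# Per-cell n for 80% power in a two-cell between-subjects design,
# at Cohen's small/medium/large benchmarks (illustrative, not the post's d).
import math
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()
for d in (0.2, 0.5, 0.8):
    n = power_analysis.solve_power(effect_size=d, power=0.80, alpha=0.05)
    print(f"d = {d}: ~{math.ceil(n)} participants per cell")
```

Even a "medium" effect of d = 0.5 requires about 64 participants per cell; a small effect of d = 0.2 requires roughly 394.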

Recent Posts

  • [107] Meaningless Means #3: The Truth About Lies
  • [106] Meaningless Means #2: The Average Effect of Nudging in Academic Publications is 8.7%
  • [105] Meaningless Means #1: The Average Effect of Nudging Is d = .43
  • [104] Meaningless Means: Some Fundamental Problems With Meta-Analytic Averages
  • [103] Mediation Analysis is Counterintuitively Invalid


© 2021, Uri Simonsohn, Leif Nelson, and Joseph Simmons. For permission to reprint individual blog posts on DataColada please contact us via email.