Data Colada

[20] We cannot afford to study effect size in the lab


Posted on May 1, 2014 (updated January 30, 2020) by Uri Simonsohn

Methods people often say – in textbooks, task forces, papers, editorials, over coffee, in their sleep – that we should focus more on estimating effect sizes than on testing for significance.

I am kind of a methods person, and I am kind of going to say the opposite.

Only kind of the opposite because it is not that we shouldn't try to estimate effect sizes; it is that, in the lab, we can't afford to.

The sample sizes needed to estimate effect sizes are too big for most researchers most of the time.

With n=20, forget it
The median sample size in published studies in Psychology is about n=20 per cell. [1] There have been many calls over the last few decades to report and discuss effect size in experiments. Does it make sense to push for effect size reporting when we run small samples? I don't see how.

Arguably the lowest bar for claiming to care about effect size is being able to distinguish among Small, Medium, and Large effects. And with n=20 we can't even do that.

Cheatsheet: I use Cohen's d to index effect size; d is the number of standard deviations by which the means differ. Small is d=.2, Medium is d=.5, and Large is d=.8.
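For concreteness, here is a minimal R sketch of that definition (the function name cohens_d is just illustrative, and this is the textbook pooled-SD formula, not code from the post):

# Cohen's d for two independent groups: difference in means divided by the pooled SD
cohens_d <- function(x, y) {
  nx <- length(x); ny <- length(y)
  pooled_sd <- sqrt(((nx - 1) * var(x) + (ny - 1) * var(y)) / (nx + ny - 2))
  (mean(x) - mean(y)) / pooled_sd
}

# Example: two simulated cells of n=20 with a true Medium effect (d=.5)
set.seed(1)
cohens_d(rnorm(20, mean = .5), rnorm(20, mean = 0))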

The figure below shows 95% confidence intervals surrounding Small, Medium and Large estimates when n=20 (see simple R Code).

[Figure 1: 95% confidence intervals around Small (d=.2), Medium (d=.5), and Large (d=.8) estimates with n=20 per cell]

Whatever effect we get, we will not be able to rule out effects of a different qualitative size.
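Here is a minimal R sketch of the calculation behind that figure (not the linked R code; d_ci is an illustrative name, and the standard error uses the common large-sample approximation for d):

# Approximate 95% CI for d with n per cell, using
# SE(d) ~ sqrt((n1 + n2)/(n1*n2) + d^2/(2*(n1 + n2)))
d_ci <- function(d, n) {
  se <- sqrt((n + n) / (n * n) + d^2 / (2 * (n + n)))
  c(lower = d - 1.96 * se, upper = d + 1.96 * se)
}

sapply(c(Small = .2, Medium = .5, Large = .8), d_ci, n = 20)
# roughly: Small [-.42, .82], Medium [-.13, 1.13], Large [.16, 1.44]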

Four-digit n’s
It is easy to bash n=20 (please do it often). But just how big an n do we need to study effect size?

I am about to show that the answer has four digits.

It will be rhetorically useful to consider a specific effect size. Let’s go with d=.5. You need n=64 per cell to detect this effect 80% of the time.

If you run the study with n=64, you will get a confidence interval that does not include zero 80% of the time. But even if your estimate is right on the money at d=.5, that confidence interval will still include effects smaller than Small (d<.2) and larger than Large (d>.8). So n=64 is fine for testing whether the effect exists, but not for estimating its size.
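Both numbers are easy to check with a minimal R sketch (assuming a two-sample t-test with alpha=.05; this is not the post's code, and the CI again uses the large-sample approximation for d):

# Per-cell n for 80% power to detect d=.5
power.t.test(delta = .5, sd = 1, sig.level = .05, power = .80)$n  # about 64 per cell

# Approximate 95% CI around an estimate of exactly d=.5 with n=64 per cell
n <- 64; d <- .5
se <- sqrt(2 / n + d^2 / (4 * n))
d + c(-1.96, 1.96) * se  # roughly [.15, .85]: below Small and above Large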

Properly powered studies teach you almost nothing about effect size. [2]

[Figure 2: expected 95% confidence intervals for properly powered studies]
What if we go the extra mile, or three, and power the study to 99.9% by running n=205 per cell? This study will almost always produce a significant effect, yet the expected confidence interval is massive, spanning a basically small effect (d=.3) to a basically large effect (d=.7).

To get the kind of confidence interval that actually gives confidence regarding effect size, one that spans say ±0.1, we need n=3000 per cell. THREE-THOUSAND. (see simple R Code)  [3]
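For readers who want a rough check without the linked code, the same large-sample approximation reproduces the n=205 interval above and the n=1000 interval in footnote 3 (the linked R code remains the authoritative calculation behind the n=3000 figure):

# Per-cell n for 99.9% power to detect d=.5
power.t.test(delta = .5, sd = 1, sig.level = .05, power = .999)$n  # about 205 per cell

# Expected 95% CI around d=.5 for a few per-cell sample sizes
ci_half_width <- function(d, n) 1.96 * sqrt(2 / n + d^2 / (4 * n))
for (n in c(205, 1000)) {
  hw <- ci_half_width(.5, n)
  cat(sprintf("n = %4.0f per cell: 95%% CI roughly [%.2f, %.2f]\n", n, .5 - hw, .5 + hw))
}
# n = 205 gives about [.30, .70]; n = 1000 gives about [.41, .59]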

In the lab, four-digit per-cell sample sizes are not affordable.

Advocating a focus on effect size estimation, then, implies advocating for either:
1) Leaving the lab (e.g., mTurk, archival data). [4]
2) Running within-subject designs.

Some may argue effect size is so important we ought to do these things.
But that’s a case to be made, not an implication to be ignored.

UPDATE 2014 05 08: A commentary on this post is available here


Footnotes
  1. Based on the degrees of freedom reported in thousands of test statistics I scraped from Psych Science and JPSP.
  2. Unless you properly power for a trivially small effect by running a gigantic sample.
  3. If you run n=1000, the expected confidence interval spans d=.41 to d=.59.
  4. One way to get big samples is to combine many small samples. Whether one should focus on effect size in meta-analysis is not something that seems controversial enough to be interesting to discuss.
