Data Colada

Category: Unexpectedly Difficult Statistical Concepts

[46] Controlling the Weather

Posted on February 2, 2016 by Joe & Uri

Behavioral scientists have put forth evidence that the weather affects all sorts of things, including the stock market, restaurant tips, car purchases, product returns, art prices, and college admissions. It is not easy to properly study the effects of weather on human behavior. This is because weather is (obviously) seasonal, as is much of what…

Read more
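
The core problem is easy to see in a few lines of simulation: when weather and the outcome share a seasonal cycle, a naive correlation looks like a weather effect even when none exists. A minimal sketch (simulated data, invented numbers, not from any of the cited papers):

```python
import numpy as np

rng = np.random.default_rng(0)

# Five years of simulated daily data: temperature and the outcome share a
# seasonal cycle, but temperature has NO causal effect on the outcome.
days = np.arange(5 * 365)
season = np.sin(2 * np.pi * days / 365)
temperature = 15 + 10 * season + rng.normal(0, 3, days.size)
outcome = 100 + 5 * season + rng.normal(0, 3, days.size)

# Naive correlation looks like a weather effect...
print(np.corrcoef(temperature, outcome)[0, 1])   # clearly positive

# ...but vanishes once both series are residualized on the seasonal cycle.
X = np.column_stack([np.ones_like(season), season])
resid = lambda y: y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
print(np.corrcoef(resid(temperature), resid(outcome))[0, 1])  # near zero
```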

[42] Accepting the Null: Where to Draw the Line?

Posted on October 28, 2015 by Uri Simonsohn

We typically ask if an effect exists. But sometimes we want to ask if it does not. For example, how many of the “failed” replications in the recent reproducibility project published in Science (.pdf) suggest the absence of an effect? Data have noise, so we can never say ‘the effect is exactly zero.’ We can…

Read more
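
One hedged way to formalize “accepting the null” is to test whether the observed effect is reliably smaller than a benchmark too small to care about. In the sketch below, d_small = 0.20 is an arbitrary illustrative benchmark, not the line the post itself draws:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated replication with n observations per cell; true effect is zero.
n = 200
x = rng.normal(0, 1, n)
y = rng.normal(0, 1, n)

# Estimated standardized effect and its (approximate) standard error.
d_hat = (x.mean() - y.mean()) / np.sqrt((x.var(ddof=1) + y.var(ddof=1)) / 2)
se_d = np.sqrt(2 / n)

# One-sided test of H0: d >= d_small against H1: d < d_small.
d_small = 0.20  # arbitrary benchmark for "too small to matter"
p = stats.norm.cdf((d_hat - d_small) / se_d)
print(f"d_hat = {d_hat:.3f}, p(effect < {d_small}) = {p:.3f}")
```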

[41] Falsely Reassuring: Analyses of ALL p-values

Posted on August 24, 2015 by Uri Simonsohn

It is a neat idea. Get a ton of papers. Extract all p-values. Examine the prevalence of p-hacking by assessing if there are too many p-values near p=.05. Economists have done it [SSRN], as have psychologists [.html], and biologists [.html]. These charts with distributions of p-values come from those papers: The dotted circles highlight the excess of…

Read more
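
Why can analyses of ALL p-values be falsely reassuring? Because focal, potentially p-hacked tests are a small share of everything reported, so their signature gets diluted. A crude simulation of that dilution (all numbers invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Invented mixture: 10% of reported p-values are focal tests with an
# artificial pile-up just under .05 (a crude stand-in for p-hacking);
# the other 90% come from ordinary tests of true effects.
n = 100_000
focal = rng.uniform(0.035, 0.05, n // 10)
z = rng.normal(2.0, 1.0, 9 * n // 10)
other = 2 * (1 - stats.norm.cdf(np.abs(z)))

bins = np.arange(0, 0.055, 0.005)
for label, p in (("focal only  ", focal),
                 ("all p-values", np.concatenate([focal, other]))):
    counts, _ = np.histogram(p[p < 0.05], bins=bins)
    print(label, np.round(counts / counts.sum(), 2))
# The pile-up just under .05 dominates the focal-only histogram but is
# heavily diluted once all p-values are pooled together.
```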

[39] Power Naps: When do Within-Subject Comparisons Help vs Hurt (yes, hurt) Power?

Posted on June 22, 2015 by Uri Simonsohn

A recent Science paper (.html) used a total sample size of N=40 to arrive at the conclusion that implicit racial and gender stereotypes can be reduced while napping. N=40 is a small sample for a between-subject experiment. One needs N=92 to reliably detect that men are heavier than women (SSRN). The study, however, was within-subject; for instance, its dependent…

Read more
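
The arithmetic behind when within-subject comparisons help or hurt: the paired difference has variance 2σ²(1 − r), where r is the correlation between a subject’s two measurements, so everything hinges on r. A simulation sketch with illustrative numbers, not the napping study’s:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Power of a paired t-test as a function of the correlation r between a
# subject's two measurements. High r makes within-subject designs strong;
# low or negative r can make them weaker than a between-subject design.
def power_within(n, d, r, alpha=0.05, sims=2000):
    hits = 0
    cov = [[1, r], [r, 1]]
    for _ in range(sims):
        xy = rng.multivariate_normal([0, d], cov, size=n)
        t, p = stats.ttest_rel(xy[:, 1], xy[:, 0])
        hits += p < alpha
    return hits / sims

for r in (-0.5, 0.0, 0.5, 0.9):
    print(f"r = {r:+.1f}: power ≈ {power_within(40, 0.5, r):.2f}")
```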

[33] "The" Effect Size Does Not Exist

Posted on February 9, 2015 by Uri Simonsohn

Consider the robust phenomenon of anchoring, where people’s numerical estimates are biased towards arbitrary starting points. What does it mean to say “the” effect size of anchoring? It surely depends on moderators like domain of the estimate, expertise, and perceived informativeness of the anchor. Alright, how about “the average” effect-size of anchoring? That's simple enough…

Read more
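
A toy illustration of the post’s point, with invented moderator-specific effect sizes: the grand average can sit far from the effect in every actual setting, so it describes no setting anyone would run.

```python
import numpy as np

# Invented moderator-specific anchoring effects (domain x expertise).
# These values are made up purely to illustrate heterogeneity.
true_d = {"distances, novices": 1.2,
          "distances, experts": 0.3,
          "prices, novices":    0.8,
          "prices, experts":    0.1}

average = np.mean(list(true_d.values()))
print(f'"the average" effect: d = {average:.2f}')
for setting, d in true_d.items():
    print(f"  {setting}: d = {d} (average is off by {abs(d - average):.2f})")
```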

[27] Thirty-somethings are Shrinking and Other U-Shaped Challenges

Posted on September 17, 2014 by Leif and Uri

A recent Psych Science (.pdf) paper found that sports teams can perform worse when they have too much talent. For example, in Study 3 they found that NBA teams with a higher percentage of talented players win more games, but that teams with the highest levels of talented players win fewer games. The hypothesis is easy enough…

Read more
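
The textbook test for an inverted U, and the kind of analysis such claims lean on, is a regression with a quadratic term; whether a negative quadratic really demonstrates a reversal is exactly what makes these claims hard to evaluate. A sketch on simulated data (not the NBA data):

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated teams with a true inverted-U relationship between talent
# and wins (peak at talent = 0.6 by construction).
talent = rng.uniform(0, 1, 300)
wins = 40 + 30 * talent - 25 * talent**2 + rng.normal(0, 5, 300)

# Standard quadratic test: regress wins on talent and talent^2.
X = np.column_stack([np.ones_like(talent), talent, talent**2])
beta, *_ = np.linalg.lstsq(X, wins, rcond=None)
print(f"quadratic coefficient: {beta[2]:.2f}")              # negative => concave
print(f"implied peak at talent = {-beta[1] / (2 * beta[2]):.2f}")
```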

[20] We cannot afford to study effect size in the lab

Posted on May 1, 2014 by Uri Simonsohn

Methods people often say – in textbooks, task forces, papers, editorials, over coffee, in their sleep – that we should focus more on estimating effect sizes rather than testing for significance. I am kind of a methods person, and I am kind of going to say the opposite. Only kind of the opposite because it…

Read more
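
The back-of-the-envelope arithmetic that drives the argument: the standard error of Cohen’s d is roughly sqrt(2/n) with n per cell (my approximation, ignoring small-sample correction terms), so a decision-relevant confidence interval takes thousands of participants per cell:

```python
import numpy as np

# Approximate 95% CI half-width for Cohen's d in a two-cell design,
# using SE(d) ≈ sqrt(2/n) with n participants per cell.
for n in (20, 50, 100, 500, 3000):
    se = np.sqrt(2 / n)
    print(f"n = {n:>4} per cell: 95% CI width ≈ ±{1.96 * se:.2f}")
# Even n = 500 per cell leaves the estimate uncertain by about ±0.12;
# pinning d down to ±0.05 takes roughly 3,000 per cell.
```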

[17] No-way Interactions

Posted on March 12, 2014 by Uri Simonsohn

This post shares a shocking and counterintuitive fact about studies looking at interactions where effects are predicted to get smaller (attenuated interactions). I needed a working example and went with Fritz Strack et al.’s (1988, .html) famous paper [933 Google cites], in which participants rated cartoons as funnier if they saw them while holding a…

Read more
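
A simulation sketch of why attenuated interactions are so expensive: an effect of d in one condition that shrinks to d/2 in the other yields an interaction contrast of only d/2, spread across four cells. The numbers below are illustrative, not from the Strack et al. paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

def power(n_per_cell, d_full=0.8, d_half=0.4, sims=4000, alpha=0.05):
    simple_hits = inter_hits = 0
    for _ in range(sims):
        # Two-cell study of the full effect.
        a = rng.normal(d_full, 1, n_per_cell)
        b = rng.normal(0, 1, n_per_cell)
        simple_hits += stats.ttest_ind(a, b).pvalue < alpha
        # 2x2 study: full effect in one moderator level, half in the other.
        c = rng.normal(d_half, 1, n_per_cell)
        d_ = rng.normal(0, 1, n_per_cell)
        contrast = (a.mean() - b.mean()) - (c.mean() - d_.mean())
        se = 2 / np.sqrt(n_per_cell)          # SE of the contrast (sigma = 1)
        inter_hits += 2 * (1 - stats.norm.cdf(abs(contrast) / se)) < alpha
    return simple_hits / sims, inter_hits / sims

for n in (25, 100, 400):
    s, i = power(n)
    print(f"n = {n:>3}/cell: simple-effect power {s:.2f}, interaction power {i:.2f}")
```

With these invented numbers, a cell size that gives ~80% power for the simple effect leaves the attenuated interaction badly underpowered; matching the simple effect’s power takes on the order of 16 times as many participants per cell.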

Get Colada email alerts.

Join 5,108 other subscribers

Social media

We tweet new posts: @DataColada
And mastopost'em: @DataColada@mas.to
And link to them on our Facebook page

Recent Posts

  • [107] Meaningless Means #3: The Truth About Lies
  • [106] Meaningless Means #2: The Average Effect of Nudging in Academic Publications is 8.7%
  • [105] Meaningless Means #1: The Average Effect of Nudging Is d = .43
  • [104] Meaningless Means: Some Fundamental Problems With Meta-Analytic Averages
  • [103] Mediation Analysis is Counterintuitively Invalid


Posts on similar topics

Unexpectedly Difficult Statistical Concepts
  • [103] Mediation Analysis is Counterintuitively Invalid
  • [99] Hyping Fisher: The Most Cited 2019 QJE Paper Relied on an Outdated Stata Default to Conclude Regression p-values Are Inadequate
  • [91] p-hacking fast and slow: Evaluating a forthcoming AER paper deeming some econ literatures less trustworthy
  • [88] The Hot-Hand Artifact for Dummies & Behavioral Scientists
  • [80] Interaction Effects Need Interaction Controls
  • [79] Experimentation Aversion: Reconciling the Evidence
  • [71] The (Surprising?) Shape of the File Drawer
  • [70] How Many Studies Have Not Been Run? Why We Still Think the Average Effect Does Not Exist
  • [57] Interactions in Logit Regressions: Why Positive May Mean Negative
  • [50] Teenagers in Bikinis: Interpreting Police-Shooting Data


© 2021, Uri Simonsohn, Leif Nelson, and Joseph Simmons. For permission to reprint individual blog posts on DataColada please contact us via email.