Data Colada

[27] Thirty-somethings are Shrinking and Other U-Shaped Challenges

Posted on September 17, 2014 by Leif and Uri

A recent Psych Science (.pdf) paper found that sports teams can perform worse when they have too much talent. For example, in Study 3 they found that NBA teams with a higher percentage of talented players win more games, but teams with the highest levels of talent win fewer games. The hypothesis is easy enough…

Read more

[26] What If Games Were Shorter?

Posted on August 22, 2014 by Joe Simmons

The smaller your sample, the less likely your evidence is to reveal the truth. You might already know this, but most people don’t (.html), or at least they don’t appropriately apply it (.html). (See, for example, nearly every inference ever made by anyone). My experience trying to teach this concept suggests that it’s best understood…

Read more
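The sample-size point in the teaser above can be simulated directly. A minimal sketch (my own setup, not the post's: two groups, a true effect of d = 0.5 standard deviations, a two-sided z-test at p < .05):

```python
import math
import random
import statistics

def detection_rate(n, d=0.5, sims=4000, z_crit=1.96):
    """Fraction of simulated two-group studies with a true effect of d
    (in standard-deviation units) whose z-test reaches p < .05."""
    hits = 0
    for _ in range(sims):
        control = [random.gauss(0, 1) for _ in range(n)]
        treatment = [random.gauss(d, 1) for _ in range(n)]
        se = math.sqrt(statistics.variance(control) / n +
                       statistics.variance(treatment) / n)
        z = (statistics.mean(treatment) - statistics.mean(control)) / se
        hits += abs(z) > z_crit
    return hits / sims

random.seed(1)
print(detection_rate(n=10))    # small sample: usually misses the true effect
print(detection_rate(n=100))   # large sample: usually detects it
```

With 10 participants per group the true effect is detected only a small minority of the time; with 100 per group it is detected almost always. Same truth, very different evidence.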

[25] Maybe people actually enjoy being alone with their thoughts

Posted on July 22, 2014 by Leif Nelson

Recently Science published a paper concluding that people do not like sitting quietly by themselves (.html). The article received press coverage; that press coverage received blog coverage, which received Twitter coverage, which received meaningful head-nodding coverage around my department. The bulk of that coverage (e.g., 1, 2, and 3) focused on the tenth study in…

Read more

[24] P-curve vs. Excessive Significance Test

Posted on June 27, 2014 by Uri Simonsohn

In this post I use data from the Many-Labs replication project to contrast the (pointless) inferences one arrives at using the Excessive Significance Test, with the (critically important) inferences one arrives at with p-curve. The Many-Labs project is a collaboration of 36 labs around the world, each running a replication of 13 published effects in…

Read more
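The core intuition behind p-curve can be sketched in a few lines. In this illustration (mine, not the post's analysis), significant p-values from studies of a null effect are uniform on (0, .05), so about half fall below .025, while studies of a true effect produce a right-skewed p-curve with many more very small p-values:

```python
import math
import random

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def significant_p_values(d, n=30, sims=20000):
    """Two-sided p-values from simulated two-group z-tests (known sd = 1,
    n per group, true effect d); keeps only the significant ones.
    The mean difference is sampled directly from its sampling distribution."""
    ps = []
    for _ in range(sims):
        diff = random.gauss(d, math.sqrt(2 / n))
        z = diff / math.sqrt(2 / n)
        p = 2 * (1 - phi(abs(z)))
        if p < 0.05:
            ps.append(p)
    return ps

random.seed(2)
null_ps = significant_p_values(d=0.0)   # no true effect
real_ps = significant_p_values(d=0.5)   # true effect
frac_low = lambda ps: sum(p < 0.025 for p in ps) / len(ps)
# Null: roughly half of significant p-values fall below .025 (flat p-curve).
# True effect: far more than half do (right-skewed p-curve).
print(frac_low(null_ps), frac_low(real_ps))
```

P-curve asks whether the observed distribution of significant p-values looks like the flat one or the right-skewed one.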

[23] Ceiling Effects and Replications

Posted on June 4, 2014 by Uri Simonsohn

A recent failure to replicate led to an attention-grabbing debate in psychology. As you may expect from university professors, some of it involved data. As you may not expect from university professors, much of it involved saying mean things that would get a child sent to the principal's office (.pdf). The hostility in the debate has obscured an interesting…

Read more
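What a ceiling effect does to an observed difference is easy to demonstrate. A minimal sketch with made-up numbers (my illustration, not the data from the debate): two groups whose latent scores differ by 1, with responses capped at the top of the scale:

```python
import random
import statistics

def observed_gap(ceiling, n=20000):
    """Mean difference between two groups whose latent scores differ by 1,
    after each response is capped at the scale's ceiling."""
    low  = [min(random.gauss(5, 1), ceiling) for _ in range(n)]
    high = [min(random.gauss(6, 1), ceiling) for _ in range(n)]
    return statistics.mean(high) - statistics.mean(low)

random.seed(3)
print(observed_gap(ceiling=10))  # ceiling far away: gap ≈ 1, the true difference
print(observed_gap(ceiling=6))   # ceiling binds: the observed gap shrinks
```

When the ceiling bites, the higher-scoring group loses more of its distribution to the cap, so the observed difference is attenuated, which matters when comparing effect sizes across studies whose samples sit at different distances from the ceiling.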

[22] You know what's on our shopping list

Posted on May 22, 2014 by Leif Nelson

As part of an ongoing project with Minah Jung, a nearly perfect doctoral student, we asked people to estimate the percentage of people who bought some common items in their last trip to the supermarket. For each of 18 items, we simply asked people (N = 397) to report whether they had bought it on…

Read more

[21] Fake-Data Colada: Excessive Linearity

Posted on May 8, 2014 by Leif and Uri

Recently, a psychology paper (.html) was flagged as possibly fraudulent based on statistical analyses (.pdf). The author defended his paper (.html), but the university committee investigating misconduct concluded it had occurred (.pdf). In this post we present new and more intuitive versions of the analyses that flagged the paper as possibly fraudulent. We then rule…

Read more

[20] We cannot afford to study effect size in the lab

Posted on May 1, 2014 by Uri Simonsohn

Methods people often say – in textbooks, task forces, papers, editorials, over coffee, in their sleep – that we should focus more on estimating effect sizes rather than testing for significance. I am kind of a methods person, and I am kind of going to say the opposite. Only kind of the opposite because it…

Read more
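A back-of-envelope calculation (my own, using the standard large-sample approximation for the standard error of Cohen's d, not figures from the post) shows why *estimating* an effect size precisely is so much more expensive than merely detecting it:

```python
import math

def n_per_cell_for_ci_halfwidth(halfwidth, d=0.5, z=1.96):
    """Smallest per-cell n (two equal cells) for which the 95% CI on
    Cohen's d has roughly the requested half-width, using the
    approximation se(d) ≈ sqrt(2/n + d**2 / (4*n))."""
    n = 2
    while z * math.sqrt(2 / n + d ** 2 / (4 * n)) > halfwidth:
        n += 1
    return n

# Detecting d = 0.5 at 80% power takes about 64 per cell (the standard
# power calculation); estimating it to within ±0.1 takes far more:
print(n_per_cell_for_ci_halfwidth(0.1))
```

Pinning d = 0.5 down to ±0.1 requires roughly an order of magnitude more participants per cell than detecting it, which is the sense in which effect-size estimation can be unaffordable in the lab.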

[19] Fake Data: Mendel vs. Stapel

Posted on April 14, 2014 by Uri Simonsohn

Diederik Stapel, Dirk Smeesters, and Lawrence Sanna published psychology papers with fake data. They each faked in their own idiosyncratic way; nevertheless, their data share something in common. Real data are noisy. Theirs aren't. Gregor Mendel's data also lack noise (yes, famous peas-experimenter Mendel). Moreover, in a mathematical sense, his data are just as…

Read more
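The "real data are noisy" intuition can be made concrete with a toy dispersion check (my own illustration, not the analyses in the post): genuine binomial counts scatter around their expectation by about one variance unit each, while fabricated counts that hug the expectation produce far too little scatter:

```python
import random

def dispersion(counts, n=100, p=0.75):
    """Sum of squared deviations from the binomial expectation, scaled by
    the binomial variance; for real Binomial(n, p) counts this averages
    about 1 per count (a chi-square-style statistic)."""
    exp, var = n * p, n * p * (1 - p)
    return sum((c - exp) ** 2 / var for c in counts)

random.seed(4)
# 20 genuine replicates: count successes in 100 Bernoulli(.75) trials.
real = [sum(random.random() < 0.75 for _ in range(100)) for _ in range(20)]
# 20 "too perfect" fabricated replicates hugging the expected count of 75.
fake = [75, 74, 76, 75, 75, 74, 76, 75, 75, 75,
        76, 74, 75, 75, 76, 74, 75, 75, 74, 76]
print(dispersion(real))  # near 20: one unit of noise per replicate
print(dispersion(fake))  # far below 20: suspiciously quiet
```

Data that match their expectation too closely fail this kind of check, which is the sense in which Mendel's peas and Stapel's participants share a problem.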

[18] MTurk vs. The Lab: Either Way We Need Big Samples

Posted on April 4, 2014 by Joe Simmons

Back in May 2012, we were interested in the question of how many participants a typical between-subjects psychology study needs to have an 80% chance to detect a true effect. To answer this, you need to know the effect size for a typical study, which you can’t know from examining the published literature because it…

Read more


© 2021, Uri Simonsohn, Leif Nelson, and Joseph Simmons. For permission to reprint individual blog posts on DataColada please contact us via email.