Data Colada

[37] Power Posing: Reassessing The Evidence Behind The Most Popular TED Talk

Posted on May 8, 2015 (updated February 11, 2020) by Joe & Uri

A recent paper in Psych Science (.pdf) reports a failure to replicate the study that inspired a TED Talk that has been seen 25 million times. [1]  The talk invited viewers to do better in life by assuming high-power poses, just like Wonder Woman's, but the replication found that power-posing was inconsequential. If an…

Read more

[36] How to Study Discrimination (or Anything) With Names; If You Must

Posted on April 23, 2015 (updated May 23, 2020) by Uri Simonsohn

Consider these paraphrased famous findings: “Because his name resembles ‘dentist,’ Dennis became one” (JPSP, .pdf) “Because the applicant was black (named Jamal instead of Greg) he was not interviewed” (AER, .pdf) “Because the applicant was female (named Jennifer instead of John), she got a lower offer” (PNAS, .pdf) Everything that matters (income, age, location, religion) correlates with…

Read more
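
The confound the excerpt warns about is easy to simulate. In this toy sketch (invented numbers, not from the post), the name itself has zero effect on the outcome, yet a naive comparison finds one, because old-style names track age and the outcome depends on age:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy data: the name has NO causal effect, but old-style names are
# more common among older people, and the outcome depends on age.
age = rng.normal(45, 12, n)
oldstyle = rng.random(n) < 1 / (1 + np.exp(-(age - 45) / 10))
score = 100 - 0.5 * age + rng.normal(0, 5, n)   # outcome driven by age alone

gap = score[oldstyle].mean() - score[~oldstyle].mean()
print(f"apparent 'name effect': {gap:+.2f} points")  # nonzero, purely a confound
```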

[35] The Default Bayesian Test is Prejudiced Against Small Effects

Posted on April 9, 2015 (updated November 18, 2020) by Uri Simonsohn

When considering any statistical tool I think it is useful to answer the following two practical questions: 1. “Does it give reasonable answers in realistic circumstances?” 2. “Does it answer a question I am interested in?” In this post I explain why, for me, when it comes to the default Bayesian test that's starting to…

Read more
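
For readers who want to see the test in question, here is a self-contained sketch (not the post's code) of the default two-sample Bayes factor of Rouder et al. (2009), a Cauchy(0, 0.707) prior on effect size, applied to an invented just-significant result from a large study:

```python
import numpy as np
from scipy import integrate, special, stats

def jzs_bf01(t, nx, ny, r=0.707):
    """BF01 for the default (JZS) two-sample Bayes factor of
    Rouder et al. (2009): Cauchy prior with scale r on effect size."""
    n_eff = nx * ny / (nx + ny)                # effective sample size
    df = nx + ny - 2
    m0 = (1 + t**2 / df) ** (-(df + 1) / 2)    # marginal likelihood under H0
    def integrand(g):                          # g ~ InverseGamma(1/2, r^2/2)
        prior = (r**2 / 2) ** 0.5 / special.gamma(0.5) \
                * g ** (-1.5) * np.exp(-(r**2) / (2 * g))
        lik = (1 + n_eff * g) ** (-0.5) \
              * (1 + t**2 / ((1 + n_eff * g) * df)) ** (-(df + 1) / 2)
        return prior * lik
    m1, _ = integrate.quad(integrand, 0, np.inf)  # marginal likelihood under H1
    return m0 / m1

# A just-significant result from a large study (hypothetical numbers):
t, nx, ny = 2.0, 500, 500
print(f"p    = {2 * stats.t.sf(t, nx + ny - 2):.3f}")  # ~.046, significant
print(f"BF01 = {jzs_bf01(t, nx, ny):.2f}")             # > 1 here: leans toward the null
```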

[34] My Links Will Outlive You

Posted on March 2, 2015 (updated January 23, 2019) by Uri Simonsohn

If you are like me, from time to time your papers include links to online references. Because the internet changes so often, by the time readers follow those links, who knows if the cited content will still be there. This blogpost shares a simple way to ensure your links live “forever.”  I got the idea…

Read more
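
The excerpt does not say which archiving service the post recommends, so here is a generic sketch of the idea (my choice of service, not necessarily the post's): request a Wayback Machine snapshot of a cited URL and cite the permanent link instead of the live page.

```python
import requests

def archive(url: str) -> str:
    """Ask the Internet Archive's Save Page Now service to snapshot `url`,
    and return the permanent archive link to cite instead of the live page."""
    resp = requests.get(f"https://web.archive.org/save/{url}", timeout=120)
    resp.raise_for_status()
    return resp.url   # final URL after redirects is the snapshot's address

print(archive("http://datacolada.org/33"))
```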

[33] "The" Effect Size Does Not Exist

Posted on February 9, 2015 (updated February 11, 2020) by Uri Simonsohn

Consider the robust phenomenon of anchoring, where people’s numerical estimates are biased towards arbitrary starting points. What does it mean to say “the” effect size of anchoring? It surely depends on moderators like domain of the estimate, expertise, and perceived informativeness of the anchor. Alright, how about “the average” effect-size of anchoring? That's simple enough….

Read more
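
The teaser's arithmetic is worth spelling out (all numbers invented): once the true effect differs across moderator levels, "the average" effect size is just a function of the mix of studies in hand.

```python
# Invented per-domain anchoring effects: experts barely anchor, novices do.
d_expert, d_novice = 0.1, 0.8

# "The average" effect size is whatever the mix of studies makes it:
for share_novice in (0.2, 0.5, 0.8):
    avg_d = share_novice * d_novice + (1 - share_novice) * d_expert
    print(f"{share_novice:.0%} novice studies -> average d = {avg_d:.2f}")
```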

[32] Spotify Has Trouble With A Marketing Research Exam

Posted on January 12, 2015 (updated January 30, 2020) by Leif Nelson

This is really just a post-script to Colada [2], where I described a final exam question I gave in my MBA marketing research class. Students got a year’s worth of iTunes listening data for one person –me– and were asked: “What songs would this person put on his end-of-year Top 40?” I compared that list…

Read more

[31] Women are taller than men: Misusing Occam’s Razor to lobotomize discussions of alternative explanations

Posted on December 18, 2014 (updated February 11, 2020) by Uri Simonsohn

Most scientific studies document a pattern for which the authors provide an explanation. The job of readers and reviewers is to examine whether that pattern is better explained by alternative explanations. When alternative explanations are offered, it is common for authors to acknowledge that although, yes, each study has potential confounds, no single alternative explanation…

Read more

[30] Trim-and-Fill is Full of It (bias)

Posted on December 3, 2014 (updated November 13, 2024) by Uri, Joe, & Leif

Statistically significant findings are much more likely to be published than non-significant ones (no citation necessary). Because overestimated effects are more likely to be statistically significant than are underestimated effects, this means that most published effects are overestimates. Effects are smaller – often much smaller – than the published record suggests. For meta-analysts the gold…

Read more
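
The excerpt's opening claim, that selecting on statistical significance inflates published effect estimates, takes only a few lines to verify by simulation (a sketch with invented parameters, not the post's code):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

true_d, n, sims = 0.3, 20, 10_000       # invented: modest effect, small studies
d_hat = np.empty(sims)
pval = np.empty(sims)
for i in range(sims):
    x = rng.normal(true_d, 1, n)        # treatment group
    y = rng.normal(0.0, 1, n)           # control group
    _, pval[i] = stats.ttest_ind(x, y)
    d_hat[i] = x.mean() - y.mean()      # effect estimate (population sd = 1)

print(f"true effect:                  {true_d:.2f}")
print(f"mean estimate, all studies:   {d_hat.mean():.2f}")              # ~unbiased
print(f"mean estimate, p < .05 only:  {d_hat[pval < .05].mean():.2f}")  # inflated
```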

[29] Help! Someone Thinks I p-hacked

Posted on October 22, 2014 (updated April 22, 2020) by Uri Simonsohn

It has become more common to publicly speculate, upon noticing a paper with unusual analyses, that a reported finding was obtained via p-hacking. This post discusses how authors can persuasively respond to such speculations. Examples of public speculation of p-hacking Example 1. A Slate.com post by Andrew Gelman suspected p-hacking in a paper that collected…

Read more

[28] Confidence Intervals Don't Change How We Think about Data

Posted on October 8, 2014 (updated February 11, 2020) by Uri Simonsohn

Some journals are thinking of discouraging authors from reporting p-values and encouraging or even requiring them to report confidence intervals instead. Would our inferences be better, or even just different, if we reported confidence intervals instead of p-values? One possibility is that researchers become less obsessed with the arbitrary significant/not-significant dichotomy. We start paying more…

Read more
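
One mechanical point behind the post's question: for a given test, the p-value and the confidence interval are re-expressions of the same computation, so the 95% CI excludes zero exactly when p < .05. A minimal sketch (assuming SciPy 1.10 or later for the confidence_interval method):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(0.2, 1, 50)      # one hypothetical sample

res = stats.ttest_1samp(x, popmean=0)
ci = res.confidence_interval(confidence_level=0.95)
print(f"p = {res.pvalue:.3f}")
print(f"95% CI = [{ci.low:.3f}, {ci.high:.3f}]")
# Same information, two formats: the CI excludes 0 exactly when p < .05,
# so the switch by itself need not change what we conclude.
```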


Recent Posts

  • [125] "Complexity" 2: Don't be mean to the median
  • [124] "Complexity": 75% of participants missed comprehension questions in AER paper critiquing Prospect Theory
  • [123] Dear Political Scientists: The binning estimator violates ceteris paribus
  • [122] Arresting Flexibility: A QJE field experiment on police behavior with about 40 outcome variables
  • [121] Dear Political Scientists: Don't Bin, GAM Instead



© 2021, Uri Simonsohn, Leif Nelson, and Joseph Simmons. For permission to reprint individual blog posts on DataColada please contact us via email.