Data Colada

Category: Discuss own paper

[75] Intentionally Biased: People Purposely Don't Ignore Information They "Should" Ignore

Posted on January 29, 2019 (updated February 12, 2020) by guest author Berkeley Dietvorst, with Uri

You can’t un-ring a bell. Once people receive information, even if it is taken back, they cannot disregard it. Teachers cannot imagine what novice students don’t know, juries cannot follow instructions to disregard evidence, negotiators cannot take the perspective of their counterpart who does not know what they know, etc. People exhibit “Outcome bias”, “hindsight…

Read more

[73] Don't Trust Internal Meta-Analysis

Posted on October 24, 2018 (updated November 18, 2020) by guest co-author Joachim Vosgerau, with Uri, Leif, & Joe

Researchers have increasingly been using internal meta-analysis to summarize the evidence from multiple studies within the same paper. Much of the time, this involves computing the average effect size across the studies, and assessing whether that effect size is significantly different from zero. At first glance, internal meta-analysis seems like a wonderful idea. It increases…
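The averaging step the excerpt describes can be sketched in a few lines. This is a generic inverse-variance (fixed-effect) meta-analytic average with a z-test against zero, not the post's own code, and the effect sizes and standard errors below are hypothetical numbers chosen for illustration:

```python
import math

# Hypothetical effect sizes (Cohen's d) and standard errors from k = 4 studies
d  = [0.45, 0.10, 0.32, 0.05]
se = [0.20, 0.18, 0.22, 0.19]

# Fixed-effect meta-analysis: weight each study by its inverse variance
w = [1 / s**2 for s in se]
d_avg = sum(wi * di for wi, di in zip(w, d)) / sum(w)
se_avg = math.sqrt(1 / sum(w))

# Two-sided z-test of the average effect against zero
z = d_avg / se_avg
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(d_avg, z, p)
```

Note how a set of mostly unimpressive individual estimates can still produce a "significant" average, which is part of what makes internal meta-analysis look so attractive.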

Read more

[62] Two-lines: The First Valid Test of U-Shaped Relationships

Posted on September 18, 2017 (updated February 12, 2020) by Uri Simonsohn

Can you have too many options in the menu, too many talented soccer players in a national team, or too many examples in an opening sentence? Social scientists often hypothesize u-shaped relationships like these, where the effect of x on y starts positive and becomes negative, or starts negative and becomes positive. Researchers rely almost…
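The core idea can be sketched as follows. The actual two-lines test selects the breakpoint with an interval-based ("Robin Hood") algorithm; this simplified sketch assumes the breakpoint is known, and the simulated data are ours. A u-shape is supported only when both line segments have significant slopes of opposite sign:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated inverted-u data: y rises in x, peaks at x = 5, then falls
x = rng.uniform(0, 10, 500)
y = -(x - 5) ** 2 + rng.normal(0, 2, 500)

def ols_slope(xs, ys):
    # Simple-regression slope and its t-statistic
    xs = xs - xs.mean()
    b = (xs @ (ys - ys.mean())) / (xs @ xs)
    resid = (ys - ys.mean()) - b * xs
    se = np.sqrt(resid @ resid / (len(xs) - 2) / (xs @ xs))
    return b, b / se

# Two-lines sketch: fit a separate line on each side of the
# (here, assumed-known) breakpoint at x = 5
left = x < 5
b1, t1 = ols_slope(x[left], y[left])
b2, t2 = ols_slope(x[~left], y[~left])
print(b1, t1, b2, t2)  # positive then negative slope -> evidence of a u-shape
```

A quadratic regression, by contrast, can "detect" a u-shape even when the true relationship never changes sign, which is the problem the post addresses.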

Read more

[49] P-Curve Won’t Do Your Laundry, But Will Identify Replicable Findings

Posted on June 14, 2016 (updated February 12, 2020) by Uri, Joe, & Leif

In a recent critique, Bruns and Ioannidis (PLOS ONE 2016 .htm) proposed that p-curve makes mistakes when analyzing studies that have collected field/observational data. They write that in such cases "p-curves based on true effects and p-curves based on null-effects with p-hacking cannot be reliably distinguished" (abstract). In this post we show, with examples involving sex,…

Read more

[45] Ambitious P-Hacking and P-Curve 4.0

Posted on January 14, 2016 (updated February 11, 2020) by Uri, Joe, & Leif

In this post, we first consider how plausible it is for researchers to engage in more ambitious p-hacking (i.e., past the nominal significance level of p<.05). Then, we describe how we have modified p-curve (see app 4.0) to deal with this possibility. Ambitious p-hacking is hard. In “False-Positive Psychology” (SSRN), we simulated the consequences of four…
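The flavor of such a simulation can be sketched with one p-hacking technique, optional stopping: collect a batch of data, test, and collect more if the result is not yet significant. This is our illustrative reconstruction, not the paper's actual code, and it compares ordinary (p < .05) with ambitious (p < .01) p-hacking under the null:

```python
import math, random

random.seed(0)

def z_p(xs):
    # Two-sided one-sample z-test p-value, assuming known sd = 1
    n = len(xs)
    z = (sum(xs) / n) * math.sqrt(n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def phacked(threshold, max_batches=10, batch=10):
    # Optional stopping: keep adding observations (drawn from the null)
    # until p drops below the threshold, or give up
    xs = []
    for _ in range(max_batches):
        xs += [random.gauss(0, 1) for _ in range(batch)]
        if z_p(xs) < threshold:
            return True
    return False

sims = 2000
rate_05 = sum(phacked(0.05) for _ in range(sims)) / sims
rate_01 = sum(phacked(0.01) for _ in range(sims)) / sims
print(rate_05, rate_01)  # false-positive rate is far lower at the ambitious threshold
```

Even with ten peeks at the data, dragging a true null past p < .01 succeeds much less often than dragging it past p < .05, which is why ambitious p-hacking is hard.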

Read more

[43] Rain & Happiness: Why Didn’t Schwarz & Clore (1983) ‘Replicate’?

Posted on November 16, 2015 (updated February 11, 2020) by Uri Simonsohn

In my “Small Telescopes” paper, I introduced a new approach to evaluate replication results (SSRN). Among other examples, I described two studies as having failed to replicate the famous Schwarz and Clore (1983) finding that people report being happier with their lives when asked on sunny days. [Figure and text from the Small Telescopes paper (SSRN)] I…

Read more

[30] Trim-and-Fill is Full of It (bias)

Posted on December 3, 2014 (updated February 11, 2020) by Uri, Joe, & Leif

Statistically significant findings are much more likely to be published than non-significant ones (no citation necessary). Because overestimated effects are more likely to be statistically significant than are underestimated effects, this means that most published effects are overestimates. Effects are smaller – often much smaller – than the published record suggests. For meta-analysts the gold…
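The selection argument in the excerpt can be made concrete with a small simulation. The numbers here are ours, chosen for illustration: every study is an unbiased estimate of the same small true effect, but conditioning on significance inflates the published average:

```python
import numpy as np

rng = np.random.default_rng(1)

# 100,000 unbiased studies of a small true effect, each with the same noise
true_effect, se = 0.2, 0.15
estimates = rng.normal(true_effect, se, 100_000)

# "Publish" only the studies that reach significance (one-sided, for simplicity)
significant = estimates / se > 1.96

print(estimates.mean())               # close to the true effect of 0.2
print(estimates[significant].mean())  # markedly larger: the published record overestimates
```

The overestimation is purely a selection effect; no individual study is biased, yet the significant subset is. Trim-and-fill is one popular attempt to correct for this, and the post shows it falls well short.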

Read more

Get Colada email alerts.

Join 5,108 other subscribers

Social media

We tweet new posts: @DataColada
And mastopost'em: @DataColada@mas.to
And link to them on our Facebook page

Recent Posts

  • [107] Meaningless Means #3: The Truth About Lies
  • [106] Meaningless Means #2: The Average Effect of Nudging in Academic Publications is 8.7%
  • [105] Meaningless Means #1: The Average Effect of Nudging Is d = .43
  • [104] Meaningless Means: Some Fundamental Problems With Meta-Analytic Averages
  • [103] Mediation Analysis is Counterintuitively Invalid




© 2021, Uri Simonsohn, Leif Nelson, and Joseph Simmons. For permission to reprint individual blog posts on Data Colada, please contact us via email.