Data Colada

[67] P-curve Handles Heterogeneity Just Fine

Posted on January 8, 2018 (updated February 12, 2020) by Joe, Leif, & Uri

A few years ago, we developed p-curve (see p-curve.com), a statistical tool that identifies whether a set of statistically significant findings contains evidential value, or whether those results are solely attributable to the selective reporting of studies or analyses. It also estimates the true average power of a set of significant findings [1]….
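For intuition, here is a minimal sketch (in Python, with made-up p-values) of the right-skew logic behind p-curve: under the null, p-values that are significant at .05 are uniform on (0, .05), so about half should fall below .025, while a true effect produces an excess of very small p-values. This is only the simple binomial version of the idea; the actual app at p-curve.com uses more powerful tests.

```python
# Minimal sketch of p-curve's right-skew logic (binomial version only).
# Under the null, significant p-values are uniform on (0, .05), so about
# half should land below .025; evidential value shows up as an excess
# of very small p-values.
from scipy.stats import binomtest

def pcurve_binomial(p_values, alpha=0.05):
    sig = [p for p in p_values if p < alpha]      # p-curve only uses p < .05
    low = sum(p < alpha / 2 for p in sig)         # count of p < .025
    # One-sided: more p < .025 than chance expects suggests evidential value
    return binomtest(low, n=len(sig), p=0.5, alternative="greater").pvalue

print(pcurve_binomial([0.001, 0.004, 0.012, 0.021, 0.038, 0.049]))
```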


[66] Outliers: Evaluating A New P-Curve Of Power Poses

Posted on December 6, 2017 (updated February 12, 2020) by Joe, Leif, & Uri

In a forthcoming Psych Science paper, Cuddy, Schultz, & Fosse, hereafter referred to as CSF, p-curved 55 power-posing studies (.pdf | SSRN), concluding that they contain evidential value [1]. Thirty-four of those studies were previously selected and described as “all published tests” (p. 657) by Carney, Cuddy, & Yap (2015; .htm). Joe and Uri p-curved…


[65] Spotlight on Science Journalism: The Health Benefits of Volunteering

Posted on November 13, 2017 (updated January 23, 2019) by Leif Nelson

I want to comment on a recent article in the New York Times, but along the way I will comment on scientific reporting as well. I think that science reporters frequently fall short in assessing the evidence behind the claims they relay, but as I try to show, assessing evidence is not an easy task….


[64] How To Properly Preregister A Study

Posted on November 6, 2017 (updated February 12, 2020) by Joe, Leif, & Uri

P-hacking, the selective reporting of statistically significant analyses, continues to threaten the integrity of our discipline. P-hacking is inevitable whenever (1) a researcher hopes to find evidence for a particular result, (2) there is ambiguity about how exactly to analyze the data, and (3) the researcher does not perfectly plan out his/her analysis in advance….
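To see how those three ingredients combine, here is a hypothetical simulation (not from the post) of one common p-hacking tactic, optional stopping: test, and if p ≥ .05, collect more data and test again. Even with no true effect, the false-positive rate climbs well above the nominal 5%.

```python
# Hypothetical illustration of p-hacking via optional stopping.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)

def phacked_study(n_start=20, n_max=100, step=10):
    """Two-group null study; keep adding participants until p < .05 or n_max."""
    a, b = rng.normal(size=n_start), rng.normal(size=n_start)  # true effect = 0
    while True:
        if ttest_ind(a, b).pvalue < 0.05:
            return True                        # a "significant" false positive
        if len(a) >= n_max:
            return False
        a = np.append(a, rng.normal(size=step))
        b = np.append(b, rng.normal(size=step))

rate = np.mean([phacked_study() for _ in range(2000)])
print(f"False-positive rate with optional stopping: {rate:.1%}")  # well above 5%
```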


[63] "Many Labs" Overestimated The Importance of Hidden Moderators

Posted on October 20, 2017 (updated February 16, 2020) by Uri Simonsohn

Are hidden moderators a thing? Do experiments intended to be identical lead to inexplicably different results? Back in 2014, the "Many Labs" project (.htm) reported an ambitious attempt to answer these questions. More than 30 different labs ran the same set of studies and the paper presented the results side-by-side. They did not find any…


[62] Two-lines: The First Valid Test of U-Shaped Relationships

Posted on September 18, 2017 (updated February 12, 2020) by Uri Simonsohn

Can you have too many options in the menu, too many talented soccer players in a national team, or too many examples in an opening sentence? Social scientists often hypothesize u-shaped relationships like these, where the effect of x on y starts positive and becomes negative, or starts negative and becomes positive. Researchers rely almost…
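As a rough illustration of the two-lines idea (a simplified sketch with an ad-hoc median breakpoint; the test described in the post chooses the breakpoint in a data-driven way), a u-shape requires two individually significant slopes of opposite sign:

```python
# Simplified two-lines sketch: fit separate regressions below and above
# a breakpoint; call the relationship u-shaped only if both slopes are
# significant and of opposite sign.
import numpy as np
import statsmodels.api as sm

def two_lines(x, y, breakpoint=None):
    bp = np.median(x) if breakpoint is None else breakpoint  # ad-hoc choice
    out = []
    for mask in (x <= bp, x > bp):
        fit = sm.OLS(y[mask], sm.add_constant(x[mask])).fit()
        out.append((fit.params[1], fit.pvalues[1]))           # (slope, p-value)
    (b1, p1), (b2, p2) = out
    return p1 < .05 and p2 < .05 and np.sign(b1) != np.sign(b2), out

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 500)
y = -(x - 5) ** 2 + rng.normal(scale=3, size=500)  # inverted-u plus noise
print(two_lines(x, y))
```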


[61] Why p-curve excludes ps>.05

Posted on June 15, 2017 (updated February 12, 2020) by Uri, Joe, & Leif

In a recent working paper, Carter et al. (.htm) proposed that one can better correct for publication bias by including not just p<.05 results, the way p-curve does, but also p>.05 results [1]. Their paper, currently under review, aimed to provide a comprehensive simulation study that compared a variety of bias-correction methods for meta-analysis. Although the…


[60] Forthcoming in JPSP: A Non-Diagnostic Audit of Psychological Research

Posted on May 8, 2017 (updated February 12, 2020) by Leif, Joe, and Uri

A forthcoming article in the Journal of Personality and Social Psychology has made an effort to characterize changes in the behavior of social and personality researchers over the last decade (.htm). In this post, we refer to it as “the JPSP article” and to the authors as "the JPSP authors." The research team, led by…


[59] PET-PEESE Is Not Like Homeopathy

Posted on April 12, 2017 (updated November 13, 2024) by Uri Simonsohn

PET-PEESE is a meta-analytical tool that seeks to correct for publication bias. In a footnote in my previous post (.htm), I referred to it as the homeopathy of meta-analysis. That was unfair and inaccurate. Unfair because, in the style of our President, I just called PET-PEESE a name instead of describing what I believed was…
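For readers unfamiliar with the tool, here is a hedged sketch of the PET-PEESE idea (a simplified rendering, not the canonical implementation): meta-regress observed effects on their standard errors (PET) or variances (PEESE); the intercept estimates the effect that a hypothetical, infinitely precise study (SE = 0) would observe.

```python
# Simplified PET-PEESE sketch (not the exact published implementation).
import numpy as np
import statsmodels.api as sm

def pet_peese(d, se):
    """d: observed effect sizes; se: their standard errors."""
    w = 1 / se**2                                   # inverse-variance weights
    pet = sm.WLS(d, sm.add_constant(se), weights=w).fit()
    peese = sm.WLS(d, sm.add_constant(se**2), weights=w).fit()
    # Conditional estimator: report PEESE's intercept only if PET's is significant
    return (peese if pet.pvalues[0] < .05 else pet).params[0]

rng = np.random.default_rng(0)
se = rng.uniform(0.05, 0.4, 40)
d = 0.3 + rng.normal(scale=se)     # true effect 0.3, no publication bias
print(pet_peese(d, se))            # should land near 0.3
```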


[58] The Funnel Plot is Invalid Because of This Crazy Assumption: r(n,d)=0

Posted on March 21, 2017 (updated February 12, 2020) by Uri Simonsohn

The funnel plot is a beloved meta-analysis tool. It is typically used to answer the question of whether a set of studies exhibits publication bias. That’s a bad question because we always know the answer: it is “obviously yes.” Some researchers publish some null findings, but nobody publishes them all. It is also a bad…
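The assumption in the title can be illustrated with a small simulation (hypothetical, not from the post): if researchers run larger samples when they expect smaller effects, then r(n,d) < 0 and the funnel tilts even though every study is published.

```python
# No publication bias here, yet the funnel looks asymmetric, because
# sample size is chosen based on the expected effect (power analysis).
import numpy as np

rng = np.random.default_rng(0)
true_d = rng.uniform(0.1, 0.8, 1000)          # heterogeneous true effects
n = np.round(16 / true_d**2).astype(int)      # rule of thumb: per-cell n for ~80% power
se = np.sqrt(2 / n)                           # rough SE of Cohen's d
obs_d = true_d + rng.normal(scale=se)         # every study is "published"

print(np.corrcoef(n, obs_d)[0, 1])            # negative: r(n, d) != 0
print(np.corrcoef(se, obs_d)[0, 1])           # positive: classic funnel asymmetry
```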


© 2021, Uri Simonsohn, Leif Nelson, and Joseph Simmons. For permission to reprint individual blog posts on DataColada please contact us via email.