Data Colada

[58] The Funnel Plot is Invalid Because of This Crazy Assumption: r(n,d)=0

Posted on March 21, 2017 (updated February 12, 2020) by Uri Simonsohn

The funnel plot is a beloved meta-analysis tool. It is typically used to answer the question of whether a set of studies exhibits publication bias. That’s a bad question because we always know the answer: it is “obviously yes.” Some researchers publish some null findings, but nobody publishes them all. It is also a bad…
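To see why that assumption matters, here is a minimal sketch (my own, not from the post, with made-up numbers): if researchers who expect smaller effects run larger samples, then r(n,d) < 0, and the funnel comes out asymmetric even with zero publication bias.

```r
# Sketch, not the post's code: funnel asymmetry with ZERO publication bias,
# produced only by r(n, d) != 0 (larger expected effects get smaller samples).
set.seed(1)
k      <- 200                                # number of studies
d_true <- runif(k, 0.1, 0.8)                 # true effect sizes
n      <- round(40 / d_true)                 # bigger effect -> smaller n per cell
se     <- sqrt(2 / n)                        # approximate SE of Cohen's d
d_hat  <- rnorm(k, mean = d_true, sd = se)   # every study gets "published"
plot(d_hat, 1 / se, xlab = "Estimated effect size (d)",
     ylab = "Precision (1/SE)", main = "Asymmetric funnel, no publication bias")
```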

Read more

[57] Interactions in Logit Regressions: Why Positive May Mean Negative

Posted on February 23, 2017 (updated July 28, 2022) by Uri Simonsohn

Of all economics papers published this century, the 10th most cited appeared in Economics Letters, a journal with an impact factor of 0.5. It makes an inconvenient and counterintuitive point: the sign of the estimate (b̂) of an interaction in a logit/probit regression need not correspond to the sign of its effect on the…
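A minimal numerical sketch of that point (my own, with hypothetical coefficients): on the probability scale, the interaction's effect is a double difference of predicted probabilities, and its sign can flip relative to b̂.

```r
# Sketch with made-up logit coefficients: the interaction coefficient is
# positive, yet the interaction's effect on the probability scale is negative.
b0 <- 4; b1 <- -3; b2 <- -3; b12 <- 0.2          # note: b12 > 0
p  <- function(x1, x2) plogis(b0 + b1*x1 + b2*x2 + b12*x1*x2)
# Double difference: how the effect of x1 changes when x2 goes from 0 to 1
(p(1, 1) - p(0, 1)) - (p(1, 0) - p(0, 0))        # about -0.34, i.e., negative
```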

Read more

[56] TWARKing: Test-Weighting After Results are Known

Posted on January 3, 2017 (updated December 17, 2021) by Uri Simonsohn

In the last class of the semester I hold a “town-hall” meeting: an open discussion about how to improve the course (content, delivery, grading, etc.). I follow up with a required online poll to “vote” on proposed changes [1]. Grading in my class is old-school. Two tests, each 40%, homeworks 20% (graded mostly on a completion…
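As a toy illustration of the idea in the title (my own made-up scores, and just one of several possible TWARKing rules): let each student's better test carry the test weight, decided after the scores are in.

```r
# Toy sketch, made-up scores: fixed 40/40/20 weights vs. test weights chosen
# after results are known (each student's better test gets all the test weight).
scores  <- data.frame(test1 = c(70, 95), test2 = c(90, 65), hw = c(100, 80))
fixed   <- with(scores, .4 * test1 + .4 * test2 + .2 * hw)
twarked <- with(scores, .8 * pmax(test1, test2) + .2 * hw)  # hypothetical rule
cbind(fixed, twarked)   # every student weakly gains under this rule
```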

Read more

[55] The file-drawer problem is unfixable, and that’s OK

Posted on December 17, 2016 (updated February 12, 2020) by Uri Simonsohn

The “file-drawer problem” consists of researchers not publishing their p>.05 studies (Rosenthal 1979 .htm). P-hacking consists of researchers not reporting their p>.05 analyses for a given study. P-hacking is easy to stop; file-drawering is nearly impossible. Fortunately, while p-hacking is a real problem, file-drawering is not. Consequences of p-hacking vs. file-drawering: With p-hacking it’s easy to…
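A minimal simulation (my own) of the asymmetry the excerpt points to: under a true null, even mild p-hacking inflates false positives substantially, while honest studies stay at 5%, so file-drawering costs about twenty studies per published false positive.

```r
# Sketch: true null effect. P-hacking = report the best of three analyses;
# file-drawering = run honest studies and publish only the p < .05 ones.
set.seed(1)
one_p  <- function() t.test(rnorm(20), rnorm(20))$p.value
hacked <- replicate(5000, min(replicate(3, one_p())))
mean(hacked < .05)                    # ~.14: nearly 3x the nominal .05 rate
mean(replicate(5000, one_p()) < .05)  # ~.05: ~19 drawered per false positive
```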

Read more

[54] The 90x75x50 heuristic: Noisy & Wasteful Sample Sizes In The “Social Science Replication Project”

Posted on November 1, 2016 (updated February 12, 2020) by Uri Simonsohn

An impressive team of researchers is engaging in an impressive task: Replicate 21 social science experiments published in Nature and Science in 2010-2015 (.htm). The task requires making many difficult decisions, including what sample sizes to use. The authors' current plan is a simple rule: Set n for the replication so that it would have 90%…
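For concreteness, this is the kind of calculation such a rule involves (my own sketch, using base R's power.t.test() and a made-up original estimate of d = 0.5; the post's actual rule is more specific):

```r
# Sketch: per-cell n giving a two-sample t-test 90% power to detect d = 0.5,
# a hypothetical value standing in for the original study's estimate.
power.t.test(delta = 0.5, sd = 1, sig.level = .05, power = .90)$n   # ~85 per cell
```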

Read more

[53] What I Want Our Field To Prioritize

Posted on September 30, 2016 (updated November 18, 2020) by Joe Simmons

When I was a sophomore in college, I read a book by Carl Sagan called The Demon-Haunted World. By the time I finished it, I understood the difference between what is scientifically true and what is not. It was not obvious to me at the time: If a hypothesis is true, then you can use…

Read more

[52] Menschplaining: Three Ideas for Civil Criticism

Posted on September 26, 2016 (updated September 25, 2016) by Uri Simonsohn

As bloggers, commentators, reviewers, and editors, we often criticize the work of fellow academics. In this post I share three ideas for being more civil and persuasive when doing so. But first: should we comment publicly in the first place? One of the best-known social psychologists, Susan Fiske (.htm), last week circulated a draft of an invited opinion…

Read more

[51] Greg vs. Jamal: Why Didn’t Bertrand and Mullainathan (2004) Replicate?

Posted on September 6, 2016 (updated February 15, 2020) by Uri Simonsohn

Bertrand & Mullainathan (2004, .htm) is one of the best known and most cited American Economic Review (AER) papers [1]. It reports a field experiment in which resumes given typically Black names (e.g., Jamal and Lakisha) received fewer callbacks than those given typically White names (e.g., Greg and Emily). This finding is interpreted as evidence of racial discrimination…

Read more

[50] Teenagers in Bikinis: Interpreting Police-Shooting Data

Posted on July 14, 2016 (updated February 15, 2020) by Uri Simonsohn

The New York Times, on Monday, showcased (.htm) an NBER working paper (.pdf) that proposed that “blacks are 23.8 percent less likely to be shot at by police relative to whites.” (p.22) The paper involved a monumental data collection effort to address an important societal question. The analyses are rigorous, clever, and transparently reported. Nevertheless, I do…

Read more

[49] P-Curve Won’t Do Your Laundry, But Will Identify Replicable Findings

Posted on June 14, 2016 (updated February 12, 2020) by Uri, Joe, & Leif

In a recent critique, Bruns and Ioannidis (PlosONE 2016 .htm) proposed that p-curve makes mistakes when analyzing studies that have collected field/observational data. They write that in such cases “p-curves based on true effects and p-curves based on null-effects with p-hacking cannot be reliably distinguished” (abstract). In this post we show, with examples involving sex,…
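To make the logic concrete, here is a minimal simulation (my own sketch, not the authors' code or the official p-curve app at p-curve.com): when an effect is real, the significant p-values are right-skewed, which even a simple sign-style test can detect.

```r
# Sketch: significant p-values from a true effect (d = .5, n = 30 per cell)
# pile up near zero; far more than half fall below .025 (right skew).
set.seed(1)
pvals <- replicate(20000, t.test(rnorm(30, mean = .5), rnorm(30))$p.value)
sig   <- pvals[pvals < .05]          # p-curve only uses significant results
mean(sig < .025)                     # well above .5 => evidential value
binom.test(sum(sig < .025), length(sig))$p.value   # simplified skew test
```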

Read more