Data Colada

[63] "Many Labs" Overestimated The Importance of Hidden Moderators

Posted on October 20, 2017 (updated March 25, 2019) by Uri Simonsohn

Are hidden moderators a thing? Do experiments intended to be identical lead to inexplicably different results? Back in 2014, the "Many Labs" project (.pdf) reported an ambitious attempt to answer these questions. More than 30 different labs ran the same set of studies and the paper presented the results side-by-side. They did not find any…

Read more

[62] Two-lines: The First Valid Test of U-Shaped Relationships

Posted on September 18, 2017 (updated January 9, 2019) by Uri Simonsohn

Can you have too many options in the menu, too many talented soccer players in a national team, or too many examples in an opening sentence? Social scientists often hypothesize u-shaped relationships like these, where the effect of x on y starts positive and becomes negative, or starts negative and becomes positive. Researchers rely almost…

Read more
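The two-lines idea sketched in the excerpt can be illustrated with a toy simulation (assumed data, not from the post). Instead of fitting a quadratic, fit separate regressions below and above a breakpoint and ask whether the two slopes are opposite-signed. The median split used here is a crude stand-in; the actual two-lines test chooses the breakpoint with a data-driven ("Robin Hood") algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated inverted-u data: y rises, peaks near x = 0.6, then falls (toy example)
x = rng.uniform(0, 1, 500)
y = -(x - 0.6) ** 2 + rng.normal(0, 0.05, 500)

# Two-lines idea: one regression on each side of a breakpoint; evidence for a
# u-shape requires the two slopes to be significant with opposite signs.
c = np.median(x)  # crude breakpoint for illustration only
lo, hi = x < c, x >= c
slope_lo = np.polyfit(x[lo], y[lo], 1)[0]
slope_hi = np.polyfit(x[hi], y[hi], 1)[0]

print(slope_lo > 0 and slope_hi < 0)  # opposite signs -> consistent with a u-shape
```

A single quadratic term, by contrast, can come out "significant" even when y never actually reverses direction, which is the motivation for testing the two arms separately.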

[61] Why p-curve excludes ps>.05

Posted on June 15, 2017 (updated October 30, 2017) by Uri, Joe, & Leif

In a recent working paper, Carter et al (.pdf) proposed that one can better correct for publication bias by including not just p<.05 results, the way p-curve does, but also p>.05 results [1]. Their paper, currently under review, aimed to provide a comprehensive simulation study that compared a variety of bias-correction methods for meta-analysis. Although the…

Read more

[60] Forthcoming in JPSP: A Non-Diagnostic Audit of Psychological Research

Posted on May 8, 2017 (updated December 1, 2017) by Leif, Joe, and Uri

A forthcoming article in the Journal of Personality and Social Psychology has made an effort to characterize changes in the behavior of social and personality researchers over the last decade (.pdf). In this post, we refer to it as "the JPSP article" and to the authors as "the JPSP authors." The research team, led by…

Read more

[59] PET-PEESE Is Not Like Homeopathy

Posted on April 12, 2017 (updated January 23, 2019) by Uri Simonsohn

PET-PEESE is a meta-analytical tool that seeks to correct for publication bias. In a footnote in my previous post (.htm), I referred to it as the homeopathy of meta-analysis. That was unfair and inaccurate. Unfair because, in the style of our President, I just called PET-PEESE a name instead of describing what I believed was…

Read more

[58] The Funnel Plot is Invalid Because of This Crazy Assumption: r(n,d)=0

Posted on March 21, 2017 (updated January 23, 2019) by Uri Simonsohn

The funnel plot is a beloved meta-analysis tool. It is typically used to answer the question of whether a set of studies exhibits publication bias. That's a bad question because we always know the answer: it is "obviously yes." Some researchers publish some null findings, but nobody publishes them all. It is also a bad…

Read more
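The excerpt's point about r(n,d)=0 can be made concrete with a small simulation (all numbers assumed for illustration). If researchers run larger samples when they expect smaller effects, then sample size and true effect size are correlated, and a funnel plot shows the classic "asymmetry" even when every study is published:

```python
import numpy as np

rng = np.random.default_rng(7)
k = 2000  # simulated studies, ALL published: zero publication bias by construction

# Assumed: true effects vary across studies, and each study is powered for its
# own expected effect, so n and d are negatively correlated (r(n,d) != 0).
d_true = rng.uniform(0.2, 0.8, k)
n = np.ceil(16 / d_true**2)         # ~80%-power rule of thumb: n per cell ~ 16/d^2
se = np.sqrt(2 / n)                 # approximate standard error of Cohen's d
d_hat = d_true + rng.normal(0, se)  # observed effects, no selection on p < .05

# Funnel-plot "asymmetry": noisier (small-n) studies report larger effects,
# purely because n was chosen based on d, not because of publication bias.
print(np.corrcoef(d_hat, se)[0, 1])  # clearly positive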

[57] Interactions in Logit Regressions: Why Positive May Mean Negative

Posted on February 23, 2017January 25, 2019 by Uri Simonsohn

Of all economics papers published this century, the 10th most cited appeared in Economics Letters , a journal with an impact factor of 0.5.  It makes an inconvenient and counterintuitive point: the sign of the estimate (b̂) of an interaction in a logit/probit regression, need not correspond to the sign of its effect on the…

Read more

[56] TWARKing: Test-Weighting After Results are Known

Posted on January 3, 2017January 2, 2017 by Uri Simonsohn

On the last class of the semester I hold a "town-hall" meeting; an open discussion about how to improve the course (content, delivery, grading, etc). I follow-up with a required online poll to "vote" on proposed changes [1]. Grading in my class is old-school. Two tests, each 40%, homeworks 20% (graded mostly on a completion…

Read more

[55] The file-drawer problem is unfixable, and that's OK

Posted on December 17, 2016December 21, 2018 by Uri Simonsohn

The "file-drawer problem" consists of researchers not publishing their p>.05 studies (Rosenthal 1979 .pdf). P-hacking consist of researchers not reporting their p>.05 analyses for a given study. P-hacking is easy to stop. File-drawering nearly impossible. Fortunately, while p-hacking is a real problem, file-drawering is not. Consequences of p-hacking vs file-drawering With p-hacking it's easy to…

Read more

[54] The 90x75x50 heuristic: Noisy & Wasteful Sample Sizes In The "Social Science Replication Project"

Posted on November 1, 2016January 25, 2019 by Uri Simonsohn

An impressive team of researchers is engaging in an impressive task: Replicate 21 social science experiments published in Nature and Science in 2010-2015 (.htm). The task requires making many difficult decisions, including what sample sizes to use. The authors' current plan is a simple rule: Set n for the replication so that it would have 90%…

Read more
  • Previous
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • …
  • 9
  • Next

Get Colada email alerts.

Join 2,708 other subscribers

Your hosts

Uri Simonsohn (.htm)
Joe Simmons (.htm)
Leif Nelson (.htm)

Other Posts on Similar Topics

    tweeter & facebook

    We tweet new posts: @DataColada
    And link to them on our Facebook page

    search

    All posts

    • [81] Data Replicada
    • [80] Interaction Effects Need Interaction Controls
    • [79] Experimentation Aversion: Reconciling the Evidence
    • [78c] Bayes Factors in Ten Recent Psych Science Papers
    • [78b] Hyp-Chart, the Missing Link Between P-values and Bayes Factors
    • [78a] If you think p-values are problematic, wait until you understand Bayes Factors
    • [77] Number-Bunching: A New Tool for Forensic Data Analysis
    • [76] Heterogeneity Is Replicable: Evidence From Maluma, MTurk, and Many Labs
    • [75] Intentionally Biased: People Purposely Don't Ignore Information They "Should" Ignore
    • [74] In Press at Psychological Science: A New 'Nudge' Supported by Implausible Data
    • [73] Don't Trust Internal Meta-Analysis
    • [72] Metacritic Has A (File-Drawer) Problem
    • [71] The (Surprising?) Shape of the File Drawer
    • [70] How Many Studies Have Not Been Run? Why We Still Think the Average Effect Does Not Exist
    • [69] Eight things I do to make my open research more findable and understandable
    • [68] Pilot-Dropping Backfires (So Daryl Bem Probably Did Not Do It)
    • [67] P-curve Handles Heterogeneity Just Fine
    • [66] Outliers: Evaluating A New P-Curve Of Power Poses
    • [65] Spotlight on Science Journalism: The Health Benefits of Volunteering
    • [64] How To Properly Preregister A Study
    • [63] "Many Labs" Overestimated The Importance of Hidden Moderators
    • [62] Two-lines: The First Valid Test of U-Shaped Relationships
    • [61] Why p-curve excludes ps>.05
    • [60] Forthcoming in JPSP: A Non-Diagnostic Audit of Psychological Research
    • [59] PET-PEESE Is Not Like Homeopathy
    • [58] The Funnel Plot is Invalid Because of This Crazy Assumption: r(n,d)=0
    • [57] Interactions in Logit Regressions: Why Positive May Mean Negative
    • [56] TWARKing: Test-Weighting After Results are Known
    • [55] The file-drawer problem is unfixable, and that's OK
    • [54] The 90x75x50 heuristic: Noisy & Wasteful Sample Sizes In The "Social Science Replication Project"
    • [53] What I Want Our Field To Prioritize
    • [52] Menschplaining: Three Ideas for Civil Criticism
    • [51] Greg vs. Jamal: Why Didn't Bertrand and Mullainathan (2004) Replicate?
    • [50] Teenagers in Bikinis: Interpreting Police-Shooting Data
    • [49] P-Curve Won't Do Your Laundry, But Will Identify Replicable Findings
    • [48] P-hacked Hypotheses Are Deceivingly Robust
    • [47] Evaluating Replications: 40% Full ≠ 60% Empty
    • [46] Controlling the Weather
    • [45] Ambitious P-Hacking and P-Curve 4.0
    • [44] AsPredicted: Pre-registration Made Easy
    • [43] Rain & Happiness: Why Didn't Schwarz & Clore (1983) 'Replicate' ?
    • [42] Accepting the Null: Where to Draw the Line?
    • [41] Falsely Reassuring: Analyses of ALL p-values
    • [40] Reducing Fraud in Science
    • [39] Power Naps: When do Within-Subject Comparisons Help vs Hurt (yes, hurt) Power?
    • [38] A Better Explanation Of The Endowment Effect
    • [37] Power Posing: Reassessing The Evidence Behind The Most Popular TED Talk
    • [36] How to Study Discrimination (or Anything) With Names; If You Must
    • [35] The Default Bayesian Test is Prejudiced Against Small Effects
    • [34] My Links Will Outlive You
    • [33] "The" Effect Size Does Not Exist
    • [32] Spotify Has Trouble With A Marketing Research Exam
    • [31] Women are taller than men: Misusing Occam's Razor to lobotomize discussions of alternative explanations
    • [30] Trim-and-Fill is Full of It (bias)
    • [29] Help! Someone Thinks I p-hacked
    • [28] Confidence Intervals Don't Change How We Think about Data
    • [27] Thirty-somethings are Shrinking and Other U-Shaped Challenges
    • [26] What If Games Were Shorter?
    • [25] Maybe people actually enjoy being alone with their thoughts
    • [24] P-curve vs. Excessive Significance Test
    • [23] Ceiling Effects and Replications
    • [22] You know what's on our shopping list
    • [21] Fake-Data Colada: Excessive Linearity
    • [20] We cannot afford to study effect size in the lab
    • [19] Fake Data: Mendel vs. Stapel
    • [18] MTurk vs. The Lab: Either Way We Need Big Samples
    • [17] No-way Interactions
    • [16] People Take Baths In Hotel Rooms
    • [15] Citing Prospect Theory
    • [14] How To Win A Football Prediction Contest: Ignore Your Gut
    • [13] Posterior-Hacking
    • [12] Preregistration: Not just for the Empiro-zealots
    • [11] "Exactly": The Most Famous Framing Effect Is Robust To Precise Wording
    • [10] Reviewers are asking for it
    • [9] Titleogy: Some facts about titles
    • [8] Adventures in the Assessment of Animal Speed and Morality
    • [7] Forthcoming in the American Economic Review: A Misdiagnosed Failure-to-Replicate
    • [6] Samples Can't Be Too Large
    • [5] The Consistency of Random Numbers
    • [4] The Folly of Powering Replications Based on Observed Effect Size
    • [3] A New Way To Increase Charitable Donations: Does It Replicate?
    • [2] Using Personal Listening Habits to Identify Personal Music Preferences
    • [1] "Just Posting It" works, leads to new retraction in Psychology

    Pages

    • About
    • Drop That Bayes: A Colada Series on Bayes Factors
    • Policy on Soliciting Feedback From Authors
    • Table of Contents

    Get email alerts

    Data Colada - All Content Licensed: CC-BY [Creative Commons]