Data Colada

[78b] Hyp-Chart, the Missing Link Between P-values and Bayes Factors

Posted on September 11, 2019 (updated February 12, 2020) by Uri Simonsohn

Just two steps are needed to go from computing p-values to computing Bayes factors. This post explains both steps and introduces Hyp-Chart, the missing link we arrive at if we take only the first step. Hyp-Chart is a graph that shows how well the data fit the null vs. every possible alternative hypothesis [1]. Hyp-Chart…
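The excerpt describes comparing how well the data fit the null against how well they fit every possible alternative. A minimal sketch of that comparison, under assumptions that are mine, not the post's (a normal sample with known sd = 1; the sample size, observed mean, and hypothesis grid are invented):

```python
import numpy as np
from scipy import stats

n, xbar = 50, 0.30                 # invented sample size and observed mean
se = 1 / np.sqrt(n)                # standard error of the mean, assuming sd = 1
mus = np.linspace(-0.5, 1.0, 301)  # the null (mu = 0) plus a grid of alternatives

# Likelihood: how well the observed mean fits each candidate hypothesis
fit = stats.norm.pdf(xbar, loc=mus, scale=se)

null_fit = stats.norm.pdf(xbar, loc=0, scale=se)
best_mu = mus[np.argmax(fit)]      # the best-fitting hypothesis is mu = xbar
print(f"fit to null: {null_fit:.3f}; best-fitting alternative: mu = {best_mu:.2f}")
```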


[78a] If you think p-values are problematic, wait until you understand Bayes Factors

Posted on September 6, 2019 by Uri Simonsohn

Would raising the minimum wage by $4 lead to greater unemployment? Milton, a Chicago economist, has a theory (supply and demand) that says so. Milton believes the causal effect is anywhere between 1% and 10%. After the minimum wage increase of $4, unemployment goes up 1%. Milton feels bad about the unemployed but good about…
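A hedged sketch of the Bayes factor implied by Milton's example, under two assumptions that are mine, not the post's: his "1% to 10%" belief is treated as a uniform prior, and the observed 1-point rise comes with a standard error of 1:

```python
from scipy import stats
from scipy.integrate import quad

obs, se = 1.0, 1.0  # observed unemployment change and an assumed standard error

# Marginal likelihood under Milton's theory: effect uniform on [1, 10]
def m1_integrand(effect):
    return stats.norm.pdf(obs, loc=effect, scale=se) * (1 / 9)  # uniform prior density

m1, _ = quad(m1_integrand, 1, 10)
m0 = stats.norm.pdf(obs, loc=0, scale=se)  # likelihood under the null of no effect

print(f"Bayes factor, theory vs. null: {m1 / m0:.2f}")
```

With these invented numbers the Bayes factor comes out below 1, favoring the null even though the observation falls squarely inside Milton's predicted range, because the uniform prior spreads his theory's predictions thinly across many outcomes.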


[77] Number-Bunching: A New Tool for Forensic Data Analysis

Posted on May 25, 2019 (updated November 18, 2020) by Uri Simonsohn

In this post I show how one can analyze the frequency with which values get repeated within a dataset – what I call “number-bunching” – to statistically identify whether the data were likely tampered with. Unlike Benford’s law (.htm) and its generalizations, this approach examines the entire number at once, not only the first or…
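The post develops this into a formal test; here is a toy sketch only, in which the data, the average-frequency statistic, and the resampling baseline are all my illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.integers(10, 100, size=200)  # stand-in dataset of two-digit values

def avg_frequency(x):
    """Average, across observations, of how often each observation's value appears."""
    _, counts = np.unique(x, return_counts=True)
    return np.repeat(counts, counts).mean()

observed = avg_frequency(data)

# Baseline: resample the data's own values with replacement and recompute
sims = np.array([avg_frequency(rng.choice(data, size=data.size, replace=True))
                 for _ in range(2000)])
p = (sims >= observed).mean()  # how unusual is the observed amount of bunching?
print(f"observed average frequency = {observed:.2f}, resampling p = {p:.3f}")
```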


[76] Heterogeneity Is Replicable: Evidence From Maluma, MTurk, and Many Labs

Posted on April 24, 2019 (updated November 18, 2020) by Joe & Uri

A number of authors have recently proposed that (i) psychological research is highly unpredictable, with identical studies obtaining surprisingly different results, and (ii) the presence of heterogeneity decreases the replicability of psychological findings. In this post we provide evidence that contradicts both propositions. Consider these quotes: “heterogeneity persists, and to a reasonable degree, even in…
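As background for the debate the excerpt sets up, a generic simulation (not the post's analysis, with invented parameters) shows how much identical studies can differ through sampling error alone, with zero true heterogeneity:

```python
import numpy as np

rng = np.random.default_rng(0)
true_d, n = 0.4, 50  # one fixed true effect; n = 50 per cell in every study

# 1,000 exact replications of the same two-cell study
obs_d = np.array([rng.normal(true_d, 1, n).mean() - rng.normal(0, 1, n).mean()
                  for _ in range(1000)])

# With sd = 1 in both cells, the mean difference is (approximately) Cohen's d
print(f"observed d: mean = {obs_d.mean():.2f}, sd = {obs_d.std():.2f}")
# The sd of observed d is about sqrt(2/n) = .20 here, from sampling error alone
```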


[75] Intentionally Biased: People Purposely Don't Ignore Information They "Should" Ignore

Posted on January 29, 2019 (updated February 12, 2020) by Guest author Berkeley Dietvorst, with Uri

You can’t un-ring a bell. Once people receive information, even if it is taken back, they cannot disregard it. Teachers cannot imagine what novice students don’t know, juries cannot follow instructions to disregard evidence, negotiators cannot take the perspective of their counterpart who does not know what they know, etc. People exhibit “Outcome bias”, “hindsight…


[74] In Press at Psychological Science: A New 'Nudge' Supported by Implausible Data

Posted on December 5, 2018 (updated November 18, 2020) by Guest co-author: Frank Yu, with Leif and Uri

Today Psychological Science issued a Corrigendum (.htm) and an expression of concern (.htm) for a paper originally posted online in May 2018 (.htm). This post will spell out the data irregularities we uncovered that eventually led to the two postings from the journal today. We are not convinced that those postings are sufficient. It is…


[73] Don't Trust Internal Meta-Analysis

Posted on October 24, 2018 (updated November 18, 2020) by Guest co-author: Joachim Vosgerau and Uri, Leif, & Joe

Researchers have increasingly been using internal meta-analysis to summarize the evidence from multiple studies within the same paper. Much of the time, this involves computing the average effect size across the studies, and assessing whether that effect size is significantly different from zero. At first glance, internal meta-analysis seems like a wonderful idea. It increases…
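The averaging the excerpt describes is typically an inverse-variance-weighted (fixed-effect) mean tested against zero; a minimal sketch with invented effect sizes and standard errors:

```python
import numpy as np
from scipy import stats

d  = np.array([0.12, 0.25, 0.08, 0.30])  # effect sizes from a paper's studies (invented)
se = np.array([0.15, 0.14, 0.16, 0.15])  # their standard errors (invented)

w = 1 / se**2                    # inverse-variance weights
d_avg = (w * d).sum() / w.sum()  # fixed-effect meta-analytic average
se_avg = np.sqrt(1 / w.sum())
z = d_avg / se_avg
p = 2 * stats.norm.sf(abs(z))    # two-sided test of the average against zero
print(f"average d = {d_avg:.2f}, z = {z:.2f}, p = {p:.4f}")
```

The post's warning is not about this arithmetic but about its inputs: if the individual studies were selectively reported, the weighted average inherits and compounds that bias.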


[72] Metacritic Has A (File-Drawer) Problem

Posted on July 2, 2018 (updated December 21, 2018) by Joe Simmons

Metacritic.com scores and aggregates critics’ reviews of movies, music, and video games. The website provides a summary assessment of the critics’ evaluations, using a scale ranging from 0 to 100. Higher numbers mean that critics were more favorable. In theory, this website is pretty awesome, seemingly leveraging the wisdom-of-crowds to give consumers the most reliable…


[71] The (Surprising?) Shape of the File Drawer

Posted on April 30, 2018 (updated January 23, 2019) by Leif Nelson

Let’s start with a question so familiar that you will have answered it before the sentence is even completed: How many studies will a researcher need to run before finding a significant (p<.05) result? (If she is studying a non-existent effect and if she is not p-hacking.) Depending on your sophistication, wariness about being asked…
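The back-of-envelope answer treats each null study as an independent draw with a .05 chance of significance, so the number of studies until the first hit is geometric with mean 1/.05 = 20; a quick simulation (illustrative, not the post's code) shows why the mean alone misleads:

```python
import numpy as np

rng = np.random.default_rng(0)
# Studies run until the first p < .05, for a true-null, non-p-hacking researcher
needed = rng.geometric(p=0.05, size=100_000)

print(f"mean = {needed.mean():.1f}, median = {np.median(needed):.0f}, "
      f"90th percentile = {np.percentile(needed, 90):.0f}")
# The mean is ~20, but the median is ~14 and the right tail is long, which is
# the sort of skew the post's title alludes to
```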


[70] How Many Studies Have Not Been Run? Why We Still Think the Average Effect Does Not Exist

Posted on March 9, 2018 (updated February 12, 2020) by Leif Nelson

We have argued that, for most effects, it is impossible to identify the average effect (datacolada.org/33). The argument is subtle (but not statistical), and given the number of well-informed people who seem to disagree, perhaps we are simply wrong. This is my effort to explain why we think identifying the average effect is so hard….



© 2021, Uri Simonsohn, Leif Nelson, and Joseph Simmons. For permission to reprint individual blog posts on DataColada please contact us via email.