Data Colada
Menu
  • Home
  • About
  • Feedback Policy
  • Table of Contents
  • Seminar
  • Past Seminars

Category: Unexpectedly Difficult Statistical Concepts

[91] p-hacking fast and slow: Evaluating a forthcoming AER paper deeming some econ literatures less trustworthy

Posted on September 15, 2020 (updated November 18, 2020) by Uri Simonsohn

The authors of a forthcoming AER article (.pdf), "Methods Matter: P-Hacking and Publication Bias in Causal Analysis in Economics", painstakingly harvested thousands of test results from 25 economics journals to answer an interesting question: Are studies that use some research designs more trustworthy than others? In this post I will explain why I think their…

Read more

[88] The Hot-Hand Artifact for Dummies & Behavioral Scientists

Posted on May 27, 2020 (updated November 18, 2020) by Uri Simonsohn

A friend recently asked for my take on Miller and Sanjurjo's (2018 .pdf) debunking of the hot hand fallacy. In that paper, the authors provide a brilliant and surprising observation missed by hundreds of people who had thought about the issue before, including the classic Gilovich, Vallone, & Tversky (1985 .htm). In this post:…

Read more
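The core of the Miller and Sanjurjo observation can be verified with a few lines of exact enumeration (a minimal sketch using length-3 sequences of a fair coin; the sequence length is my choice for the illustration): averaging the proportion of heads-after-heads across all equally likely sequences gives 5/12, not 1/2.

```python
from itertools import product
from statistics import mean

def phh(seq):
    """Proportion of heads among flips immediately following a head;
    None if no head occurs before the last flip."""
    followers = [seq[i + 1] for i in range(len(seq) - 1) if seq[i] == 1]
    return mean(followers) if followers else None

n = 3  # sequence length for the illustration
props = [p for seq in product([0, 1], repeat=n)
         if (p := phh(seq)) is not None]

# The sequence-level proportions average to 5/12 ≈ 0.417, not 0.5:
print(mean(props))
```

Because each sequence contributes one proportion regardless of how many heads it contains, sequences like HHT (proportion 1/2) are weighted the same as HTT (proportion 0), which pulls the average below 50%.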

[80] Interaction Effects Need Interaction Controls

Posted on November 20, 2019 (updated February 11, 2020) by Uri Simonsohn

In a recent referee report I argued something I have argued in several reports before: if the effect of interest in a regression is an interaction, the control variables addressing possible confounds should be interactions as well. In this post I explain that argument using as a working example a 2011 QJE paper (.htm) that…

Read more
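The referee-report argument can be illustrated with a quick simulation (variable names and the data-generating process are made up for the sketch; this is not the post's own example): when a confound c interacts with the moderator z, controlling for c only as a main effect leaves the x·z estimate biased, while adding the c·z interaction control removes it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical setup: the moderator z interacts with the confound c,
# NOT with the predictor of interest x; x is merely correlated with c.
z = rng.integers(0, 2, n).astype(float)  # moderator (0/1)
c = rng.standard_normal(n)               # confound
x = c + rng.standard_normal(n)           # predictor of interest
y = c * z + rng.standard_normal(n)       # true model: only c*z matters

def xz_coef(*cols):
    X = np.column_stack((np.ones(n),) + cols)
    return np.linalg.lstsq(X, y, rcond=None)[0][3]  # coefficient on x*z

b_naive = xz_coef(x, z, x * z, c)        # c controlled, but only as a main effect
b_full = xz_coef(x, z, x * z, c, c * z)  # interaction control c*z added

print(round(b_naive, 2), round(b_full, 2))  # roughly 0.5 vs 0.0
```

Even though c itself is in the naive regression, the x·z term soaks up the omitted c·z effect because x and c are correlated, producing a sizable spurious interaction.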

[79] Experimentation Aversion: Reconciling the Evidence

Posted on November 7, 2019 (updated February 11, 2020) by Berkeley Dietvorst, Rob Mislavsky, and Uri Simonsohn

A PNAS paper (.htm) proposed that people object “to experiments that compare two unobjectionable policies” (their title). In our own work (.htm), we arrive at the opposite conclusion: people “don’t dislike a corporate experiment more than they dislike its worst condition” (our title). In a forthcoming PNAS letter, we identified a problem with the statistical…

Read more

[71] The (Surprising?) Shape of the File Drawer

Posted on April 30, 2018 (updated January 23, 2019) by Leif Nelson

Let’s start with a question so familiar that you will have answered it before the sentence is even completed: How many studies will a researcher need to run before finding a significant (p<.05) result? (If she is studying a non-existent effect and if she is not p-hacking.) Depending on your sophistication, wariness about being asked…

Read more
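Under the stated assumptions (a true null, no p-hacking, independent studies), the question has a textbook answer: the number of studies until the first significant result follows a geometric distribution. A short sketch of its shape — this is the setup the post starts from, not its full argument:

```python
import math

p = 0.05  # chance of a false-positive "significant" result per study

# Number of studies until the first p < .05 result is geometric(p):
pmf = lambda k: (1 - p) ** (k - 1) * p

expected = 1 / p                                     # 20 studies on average
median = math.ceil(math.log(0.5) / math.log(1 - p))  # 14 studies
mode_prob = pmf(1)                                   # k = 1 is the single most likely count

print(expected, median, mode_prob)
```

The distribution is heavily right-skewed: the mean is 20 and the median 14, yet the single most likely outcome is that the very first study "works" (probability .05), which is part of why intuitions about the file drawer go astray.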

[70] How Many Studies Have Not Been Run? Why We Still Think the Average Effect Does Not Exist

Posted on March 9, 2018 (updated February 12, 2020) by Leif Nelson

We have argued that, for most effects, it is impossible to identify the average effect (datacolada.org/33). The argument is subtle (but not statistical), and given the number of well-informed people who seem to disagree, perhaps we are simply wrong. This is my effort to explain why we think identifying the average effect is so hard….

Read more

[57] Interactions in Logit Regressions: Why Positive May Mean Negative

Posted on February 23, 2017 (updated January 25, 2019) by Uri Simonsohn

Of all economics papers published this century, the 10th most cited appeared in Economics Letters, a journal with an impact factor of 0.5. It makes an inconvenient and counterintuitive point: the sign of the estimate (b̂) of an interaction in a logit/probit regression need not correspond to the sign of its effect on the…

Read more
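The sign reversal can be reproduced with a two-line calculation (the coefficient values below are made up for the illustration): on the probability scale, the interaction effect is the double difference of predicted probabilities, and because the logistic link is nonlinear its sign need not match the interaction coefficient's.

```python
import math

logit = lambda v: 1 / (1 + math.exp(-v))

# Hypothetical coefficients: the interaction coefficient b12 is POSITIVE.
b0, b1, b2, b12 = 1.0, 2.0, 2.0, 0.1

# Predicted probability for binary x1, x2:
p = lambda x1, x2: logit(b0 + b1 * x1 + b2 * x2 + b12 * x1 * x2)

# Interaction *effect* = double difference of probabilities:
effect = (p(1, 1) - p(0, 1)) - (p(1, 0) - p(0, 0))

print(round(effect, 3))  # negative, despite b12 > 0
```

Here the main effects already push predicted probabilities near 1, so the second difference shrinks in the (1,1) cell and the double difference turns negative even though b12 > 0.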

[50] Teenagers in Bikinis: Interpreting Police-Shooting Data

Posted on July 14, 2016 (updated February 15, 2020) by Uri Simonsohn

The New York Times, on Monday, showcased (.htm) an NBER working paper (.pdf) that proposed that “blacks are 23.8 percent less likely to be shot at by police relative to whites.” (p.22) The paper involved a monumental data collection effort to address an important societal question. The analyses are rigorous, clever and transparently reported. Nevertheless, I do…

Read more

[46] Controlling the Weather

Posted on February 2, 2016 (updated January 30, 2020) by Joe & Uri

Behavioral scientists have put forth evidence that the weather affects all sorts of things, including the stock market, restaurant tips, car purchases, product returns, art prices, and college admissions. It is not easy to properly study the effects of weather on human behavior. This is because weather is (obviously) seasonal, as is much of what…

Read more

[42] Accepting the Null: Where to Draw the Line?

Posted on October 28, 2015 (updated February 11, 2020) by Uri Simonsohn

We typically ask if an effect exists. But sometimes we want to ask if it does not. For example, how many of the “failed” replications in the recent reproducibility project published in Science (.pdf) suggest the absence of an effect? Data have noise, so we can never say ‘the effect is exactly zero.’ We can…

Read more
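One standard way to draw the line — a generic sketch of equivalence testing with made-up numbers, not necessarily the criterion the post itself proposes — is to pick a smallest effect size of interest d and accept the null when the 90% confidence interval lies entirely inside ±d (the TOST logic):

```python
def supports_null(estimate, se, bound):
    """Declare the effect negligible if the 90% CI for the effect
    (normal approximation) lies entirely within [-bound, +bound]."""
    zcrit = 1.6449  # one-sided 5% critical value of the standard normal
    lo, hi = estimate - zcrit * se, estimate + zcrit * se
    return -bound < lo and hi < bound

# Hypothetical numbers: same tiny estimate, different precision.
print(supports_null(estimate=0.02, se=0.05, bound=0.20))  # True
print(supports_null(estimate=0.02, se=0.15, bound=0.20))  # False
```

The second call shows the crux: a near-zero estimate alone is not enough — without sufficient precision, the data cannot distinguish "no effect" from a meaningful one.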

Get Colada email alerts.

Join 3,344 other subscribers

Twitter & Facebook

We tweet new posts: @DataColada
And link to them on our Facebook page




Data Colada - All Content Licensed: CC-BY [Creative Commons]