Recent past and present. The leading empirical psychology journal, Psychological Science, will begin requiring authors to disclose flexibility in data collection and analysis starting in January 2014 (see editorial). The leading business school journal, Management Science, implemented a similar policy a few months ago. Both policies closely mirror the recommendations we made in our…
[9] Titleogy: Some facts about titles
Naming things is fun. Not sure why, but it is. I have collaborated in the naming of people, cats, papers, a blog, its posts, and in coining the term "p-hacking." All were fun to do. So I thought I would write a Colada on titles. To add color I collected some data. At the end…
[8] Adventures in the Assessment of Animal Speed and Morality
In surveys, most people answer most questions. That is true regardless of whether the questions are coherently constructed and reasonably articulated. That means that absurd questions still receive answers, and, in part because humans are similar to one another, those answers can even look peculiarly consistent. I asked an absurd question and was rewarded…
[7] Forthcoming in the American Economic Review: A Misdiagnosed Failure-to-Replicate
In the paper “One Swallow Doesn't Make A Summer: New Evidence on Anchoring Effects,” forthcoming in the AER, Maniadis, Tufano, and List attempted to replicate a classic study in economics. The results were entirely consistent with the original, and yet they interpreted them as a “failure to replicate.” What went wrong? This post answers that…
[6] Samples Can't Be Too Large
Reviewers, and even associate editors, sometimes criticize studies for being “overpowered” – that is, for having sample sizes that are too large. (Recently, the between-subjects sample sizes under attack were about 50-60 per cell, just a little larger than you need to have an 80% chance of detecting that men weigh more than women). This…
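For readers who want to check that kind of power arithmetic, here is a minimal sketch using statsmodels. The effect size of d = 0.6 for the male–female weight difference is an illustrative assumption for this sketch, not a figure taken from the post.

```python
# Rough power check for the "men weigh more than women" benchmark.
# ASSUMPTION: d = 0.6 is an illustrative effect size, not a value from the post.
from statsmodels.stats.power import TTestIndPower

n_per_cell = TTestIndPower().solve_power(
    effect_size=0.6,   # assumed standardized weight difference (Cohen's d)
    alpha=0.05,        # conventional two-sided significance level
    power=0.80,        # target: 80% chance of detecting the difference
    ratio=1.0,         # equal cell sizes
    alternative="two-sided",
)
print(f"~{n_per_cell:.0f} participants per cell for 80% power at d = 0.6")
```

Under that assumed d, the answer comes out a bit below the 50-60 per cell that drew criticism, which is the teaser's point: cells of that size are hardly “too large.”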
[5] The Consistency of Random Numbers
What’s your favorite number between 1 and 100? Now, think of a random number between 1 and 100. My goal for this post is to compare those two responses. Number preferences feel random. They aren’t. “Random” numbers also feel random. Those aren’t random either. I collected some data, found a pair of austere academic papers,…
[4] The Folly of Powering Replications Based on Observed Effect Size
It is common for researchers running replications to set their sample size assuming that the effect size the original researchers got is correct. So if the original study found an effect size of d = .73, the replicator assumes the true effect is d = .73 and sets sample size so as to have, say, a 90% chance of getting a significant…
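Here is a minimal sketch of the calculation a replicator following that practice would run, again using statsmodels; the numbers simply mirror the d = .73 and 90% figures in the teaser and are not a prescription.

```python
# Sample size a replicator picks when taking the original d = .73 at face value.
from statsmodels.stats.power import TTestIndPower

n_per_cell = TTestIndPower().solve_power(
    effect_size=0.73,  # observed effect size from the original study
    alpha=0.05,        # two-sided test
    power=0.90,        # the replicator's target power
    ratio=1.0,         # equal cell sizes
    alternative="two-sided",
)
print(f"~{n_per_cell:.0f} per cell if the true effect really were d = .73")
```

The catch, as the post goes on to argue, is that the observed effect size is itself noisy and often inflated, so the actual power of a replication sized this way can fall well short of 90%.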
[3] A New Way To Increase Charitable Donations: Does It Replicate?
A new paper finds that people will donate more money to help 20 people if you first ask them how much they would donate to help 1 person. This Unit Asking Effect (Hsee, Zhang, Lu, & Xu, 2013, Psychological Science) emerges because donors are naturally insensitive to the number of individuals needing help. For example,…
[2] Using Personal Listening Habits to Identify Personal Music Preferences
Not everything at Data Colada is as serious as fraudulent data. This post is way less serious than that. This post is about music and teaching. As part of their final exam, my students analyze a data set. For a few years that data set has been a collection of my personal listening data from…
[1] "Just Posting It" works, leads to new retraction in Psychology
The fortuitous discovery of new fake data. For a project I worked on this past May, I needed data for variables as different from each other as possible. From the data-posting journal Judgment and Decision Making I downloaded data for ten, including one from a now-retracted paper involving the estimation of coin sizes. I created…