[3] A New Way To Increase Charitable Donations: Does It Replicate?


Posted on October 2, 2013 (updated August 16, 2021) by Joe Simmons

A new paper finds that people will donate more money to help 20 people if you first ask them how much they would donate to help 1 person.

This Unit Asking Effect (Hsee, Zhang, Lu, & Xu, 2013, Psychological Science) emerges because donors are naturally insensitive to the number of individuals needing help. For example, Hsee et al. observed that if you ask different people how much they’d donate to help either 1 needy child or 20 needy children, you get virtually the same answer. But if you ask the same people to indicate how much they’d donate to 1 child and then to 20 children, they realize that they should donate more to help 20 than to help 1, and so they increase their donations.

If true, this is a great example of how one can use psychology to design effective interventions.

The paper reports two field experiments and a study that solicited hypothetical donations (Study 1). Because it was easy, I attempted to replicate the latter. (Here at Data Colada, we report all of our replication attempts, no matter the outcome.)

I ran two replications: a “near replication” using materials that I developed based on the authors’ description of their methods (minus a picture of a needy schoolchild), and an “exact replication” using the authors’ exact materials. (Thanks to Chris Hsee and Jiao Zhang for providing those.)

In the original study, people were asked how much they’d donate to help a kindergarten principal buy Christmas gifts for her 20 low-income pupils. There were four conditions, but I ran only the three most interesting ones:

[Image: descriptions of the three conditions]

The original study had ~45 participants per cell. To be properly powered, replications should have ~2.5 times the original sample size. I (foolishly) collected only ~100 per cell in my near replication, but corrected my mistake in the exact replication (~150 per cell). Following Hsee et al., I dropped responses more than 3 SD from the mean, though there was a complication in the exact replication that required a judgment call. My studies used MTurk participants; theirs used participants from “a nationwide online survey service.”
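
As a concrete illustration of that exclusion rule, here is a minimal Python sketch (the post itself contains no code; the function and variable names are hypothetical, and I am assuming a single trimming pass rather than an iterative one):

```python
import numpy as np

def drop_outliers(donations, n_sd=3):
    """Drop responses more than n_sd standard deviations from the mean.

    A sketch of the exclusion rule described above; whether the rule was
    applied once or iteratively is not specified, so a single pass is shown.
    """
    donations = np.asarray(donations, dtype=float)
    mean, sd = donations.mean(), donations.std(ddof=1)
    return donations[np.abs(donations - mean) <= n_sd * sd]
```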

Here are the results of the original study (some means and SEs are guesses) and my replications (full data).

[Figure: mean donations by condition, for the original study and both replications]

I successfully replicated the Unit Asking Effect, as defined by the Unit Asking vs. Control comparison: it was marginal (p = .089) in the smaller near replication and highly significant (p < .001) in the exact replication.
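
For readers who want to verify such a comparison against the posted data, here is a minimal sketch (assuming Welch's two-sample t-test on the cleaned donation amounts; the post does not say exactly which test was run, and the numbers below are purely illustrative):

```python
from scipy import stats

# Purely illustrative donation amounts; not the actual study data.
control     = [10, 15, 20, 5, 25, 10, 30, 15]
unit_asking = [20, 35, 15, 40, 25, 30, 45, 20]

# Welch's t-test (no equal-variance assumption) for Unit Asking vs. Control.
t, p = stats.ttest_ind(unit_asking, control, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
```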

There were some differences. First, my effect sizes (d = .24 and d = .48) were smaller than theirs (d = .88). Second, whereas they found that, across conditions, people were insensitive to whether they were asked to donate to 1 child or 20 children (the white $15 bar vs. the gray $18 bar), I found a large difference in my near replication and a smaller but significant difference in the exact replication. This sensitivity is important because, if people give lower donations for 1 child than for 20, they might anchor on those lower amounts, which could diminish the Unit Asking Effect.
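
For reference, the d values above could be computed with the standard pooled-SD formula for Cohen's d, sketched below (an assumption on my part; the original paper may have computed d differently):

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)
```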

In sum, my studies replicated the Unit Asking Effect.

 
