Data Colada

[82] Data Replicada #1: Do Elevated Viewpoints Increase Risk Taking?


Posted on December 11, 2019 by Joe & Leif

In this post, we report our attempt to replicate a study in a recently published Journal of Marketing Research (JMR) article entitled, “Having Control Over and Above Situations: The Influence of Elevated Viewpoints on Risk Taking” (.htm). The article’s abstract summarizes the key result: “consumers’ views of scenery from a high physical elevation induce an illusory source of control, which in turn intensifies risk taking.”

Four of the article’s seven studies were conducted on MTurk. We tried to replicate Study 1b because it has the smallest reported p-value of the four (p = .016). This study (N = 203) found that “people exposed to high-elevation sceneries [were] more willing to purchase new products than people exposed to low-elevation images” (p. 56). This effect is hypothesized to emerge because exposure to high-elevation sceneries increases risk taking, and purchasing new products is a risky endeavor [1].

We contacted the author of the paper to request the materials needed to conduct a replication. He was extremely forthcoming and polite, replying in a matter of hours, while quickly and thoroughly answering all of our subsequent questions. We are grateful to him for his help and professionalism.

The Replication

In the preregistered replication (https://aspredicted.org/9zw6i.pdf), we used the same instructions, procedures, images, questions, and participant population as the original study. Aside from sample size (we sought three times the original sample size), the only difference we are aware of between our study and the original is that our formatting of the scenery questions on Qualtrics differed slightly from the original author's (see this .pdf for an example). You can access our Qualtrics survey here (.qsf), our materials here (.pdf), our data here (.csv), our codebook here (.xlsx), and our R code here (.R).

We asked 603 MTurkers to complete an online survey that featured two ostensibly separate studies. In the first, participants saw five images, all taken from either a low or a high vertical position (depending on condition), and described their feelings about each.

Here’s an example:

In the ostensibly unrelated second study, participants rated their likelihood of purchasing each of four new products. Here is an image of one of the products:

So the original study found that those exposed to the forest image (and other low-elevation images) were less likely to say that they would purchase the heated butter knife (and other new products) than those exposed to the hilltop image (and other high-elevation images).

Here are the original results, as well as our replication results:

There are at least two ways to evaluate the results of replications.

First, we can ask whether the predicted result is significant in the (larger-sampled) replication attempt. In this case, it is not. Elevation condition did not significantly influence participants’ likelihood of purchasing the new products; indeed, the average purchase likelihood was nearly identical across the two conditions (M_High = 3.67, M_Low = 3.64, p = .802) [2] [3].

Second, we can consider what the results tell us about the possible size of the effect, and about the sample size that would be required to detect it. The figure below shows that our effect size estimate is d = .02, with a 90% confidence interval of [-.11, +.15].

If the true effect size is at the upper boundary of our confidence interval (d = .15), then you would need 1,320 participants to have an 80% chance to detect it. If the true effect size is equal to our best estimate of the true effect size (d = .02), then you would need 75,301 participants to have an 80% chance to detect it. The original study had 203 participants, giving it only a 5.2% chance of detecting d = .02, and only a 19.4% chance of detecting d = .15. The results from the replication imply that if the effect of interest exists, the sample size used in the original study would not have been large enough to be informative.
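To make this concrete, here is a minimal power-calculation sketch in base R, assuming a two-sided, two-sample t-test with alpha = .05 and equal group sizes. The Ns reported above may have been computed with a different test or tool, so the numbers below are illustrative rather than exact.

    # Approximate N needed for 80% power at each effect size
    # (power.t.test() returns n per group; double it for total N).
    power.t.test(delta = .15, sd = 1, sig.level = .05, power = .80)
    power.t.test(delta = .02, sd = 1, sig.level = .05, power = .80)

    # Approximate power of the original study (N = 203, so ~101 per group)
    # to detect each effect size.
    power.t.test(n = 203 / 2, delta = .15, sd = 1, sig.level = .05)
    power.t.test(n = 203 / 2, delta = .02, sd = 1, sig.level = .05)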

Conclusions

In sum, using three times the sample size of the original study, we find no evidence that exposure to images taken from an elevated vertical position increases risk taking, at least when risk is operationalized as the likelihood of purchasing new products.


Author feedback

When we reached out to the author for comments on the post, he politely noted two potential discrepancies between our replication attempt and the original study. 
 
First, he noted that we did not spell out all of the scale points when we assessed the dependent variable. Whereas we anchored the scale at 1 = “Extremely Unlikely” and 7 = “Extremely Likely,” he labeled all scale points: 1 = Extremely unlikely; 2 = Moderately unlikely; 3 = Slightly unlikely; 4 = Neither likely nor unlikely; 5 = Slightly likely; 6 = Moderately likely; 7 = Extremely likely. He does not think this contributed to the results. 
 
Second, he wrote, “One factor, that I think, may contribute to this null results is how much people consider the list of products to be new. Before I posted the original study, I searched the Internet looking for new products. It is possible that a product that people considered to be new in September 2017 is not considered a new product in May 2019 even when we tell them they are new products. This effect is more clear for one product, remote entryway lock system, which was a very new product in 2017 but not in 2019. This could potentially lower the risk associated with the products and subsequently contribute to the results.”
 
We agree with the author that the remote entryway lock system may not have seemed sufficiently new to at least some participants in 2019. When we remove this item from our analysis, the overall effect does increase, from d = .02 (p = .802) with all the products to d = .05 (p = .523) without the remote entryway lock system.
Footnotes.
  1. For any multi-study paper, a replication of a single study cannot speak directly to the replicability of any of the others. It is possible that those studies are more likely, or less likely, to replicate than the study we selected.
  2. We analyzed the data in two ways, both of which yield the same result: p = .802. First, following our pre-registration, we structured the data so that each participant contributed four rows, one for each of the products they rated. We regressed purchase likelihood ratings on elevation condition, including fixed effects for product and clustering standard errors by participant. Second, we conducted the analysis the same way as the original author did, running a 2 (condition) x 4 (product) mixed ANOVA. A code sketch of both analyses appears below, after these footnotes.
  3. At the end of the study, we asked participants whether they had ever completed the sceneries task on MTurk before, and whether they had ever seen the products on MTurk before (possible answers: No, Maybe, Yes). Restricting the analyses to those participants who answered “No” to both questions (N = 501) reverses the direction of the effect (so it is opposite to the original’s), though it remains nonsignificant (p = .511).
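For concreteness, here is a minimal R sketch of the two analyses described in footnote 2. It assumes a long-format data frame dat with hypothetical column names rating, condition, product, and participant; the actual variable names and analysis code are in the posted codebook (.xlsx) and R script (.R).

    library(estimatr)  # for lm_robust()
    library(afex)      # for aov_ez()

    # Analysis 1: regress purchase likelihood on elevation condition,
    # with product fixed effects and standard errors clustered by participant.
    m1 <- lm_robust(rating ~ condition, data = dat,
                    fixed_effects = ~ product,
                    clusters = participant)
    summary(m1)

    # Analysis 2: 2 (condition, between) x 4 (product, within) mixed ANOVA.
    m2 <- aov_ez(id = "participant", dv = "rating", data = dat,
                 between = "condition", within = "product")
    m2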
