Fraud in science is often attributed to incentives: we reward sexy results → fraud happens. The solution, the argument goes, is to reward other things. In this post I counter-argue, proposing three alternative solutions.
Problems with the Change the Incentives solution.
First, even if rewarding sexy results caused fraud, it does not follow that we should stop rewarding them. We should weigh costs against benefits, and rewarding the questions with the most upside has real benefits.
Second, if we started rewarding unsexy stuff, a likely consequence is that fabricateurs would keep faking, just unsexy stuff now. Fabricateurs want the lifestyle of successful scientists. [1] Changing the incentives means making that lifestyle less appealing. (Finally, a benefit to committee meetings.)
Third, the evidence for “liking sexy → fraud” is just not there. Like real research, most fake research is not sexy. Lifelong fabricateur Diederik Stapel mostly published dry experiments with “findings” in line with the rest of the literature. That we notice and remember the sexy fake studies is diagnostic of what we pay attention to, not of what causes fraud.
The evidence that incentives cause fraud comes primarily from self-reports, with fabricateurs saying “the incentives made me do it” (see, e.g., Tijdink et al. .pdf; or Stapel interviews). To me, the guilty saying “it’s not my fault” seems like weak evidence. What else could they say?
“I realized I was not cut out for this; it was either faking some science or getting a job with less status”
“I am kind of a psychopath, I had fun tricking everyone”
“A voice in my head told me to do it”
Similarly weak, to me, is the observation that fraud is more prevalent in top journals; we find fraud where we look for it. Fabricateurs faking articles that don’t get read don’t get caught….
It's good for universities to ignore the quantity of papers when hiring and promoting, and good for journals to publish interesting questions with inconclusive answers. But that won't help with fraud.
Solution 1. Retract without asking “are the data fake?”
We have a high bar for retracting articles, and a higher bar for accusing people of fraud. The latter makes sense. The former does not.
Retracting is not such a big deal; it just says “we no longer have confidence in the evidence.”
So many things can go wrong when collecting, analyzing and reporting data that this should be a relatively routine occurrence even in the absence of fraud. An accidental killing may not land the killer in prison, but the victim goes 6 ft under regardless. I'd propose a retraction doctrine like:
If something is discovered that would lead reasonable experts to believe the results did not originate in a study performed as described in a published paper, or to conclude the study was conducted with excessive sloppiness, the journal should retract the paper.
Example 1. Analyses indicate the published results are implausible for a study conducted as described (e.g., excessive linearity, implausibly similar means, or a covariate that is impossibly imbalanced across conditions). Retract. (A toy sketch of one such analysis appears after these examples.)
Example 2. Authors of a paper published in a journal that requires data sharing upon request, when asked for the data, say they have “lost the data”. Retract. [2]
Example 3. Comparing the original materials with the posted data reveals important inconsistencies (e.g., scale ranges are 1-11 in the data but 1-7 in the original). Retract. (A sketch of this check also appears below.)
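To make Example 1 concrete, here is a minimal sketch of one such analysis, an “implausibly similar means” check done by simulation. All numbers below (the reported means, SD, and cell size) are hypothetical, made up for illustration, not taken from any real paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reported results: three condition means that are nearly
# identical despite small cells and a large standard deviation.
reported_means = [5.01, 5.02, 5.03]   # made-up values
reported_sd = 1.8                     # made-up pooled SD
n_per_cell = 15                       # made-up cell size

observed_spread = max(reported_means) - min(reported_means)

# Simulate how far apart three cell means should fall by chance alone,
# assuming the population means are truly identical (the assumption most
# favorable to the authors: any real effect would only widen the spread).
n_sims = 100_000
spreads = np.empty(n_sims)
for i in range(n_sims):
    sim_means = [rng.normal(0, reported_sd, n_per_cell).mean() for _ in range(3)]
    spreads[i] = max(sim_means) - min(sim_means)

p_as_similar = (spreads <= observed_spread).mean()
print(f"P(means this similar by chance) ~ {p_as_similar:.5f}")
```

If chance would almost never produce means this similar, that is exactly the kind of finding that, under the doctrine above, should trigger a retraction without anyone having to ask whether the data are fake.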
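Example 3's comparison is simple enough to automate. A sketch, with hypothetical column names, documented scale ranges, and in-memory values standing in for a posted data file:

```python
# Compare posted data against the response scales documented in the
# original materials. Everything below is hypothetical.
documented_ranges = {"satisfaction": (1, 7), "purchase_intent": (1, 7)}

# Stand-in for the posted data file (in practice, read from the repository).
posted_data = {
    "satisfaction": [2, 5, 7, 11, 3],      # note the impossible 11 on a 1-7 scale
    "purchase_intent": [1, 4, 6, 7, 2],
}

for column, (lo, hi) in documented_ranges.items():
    impossible = [v for v in posted_data[column] if not lo <= v <= hi]
    if impossible:
        print(f"{column}: values {impossible} fall outside the documented {lo}-{hi} range")
```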
When journals reject original submissions, it is not their job to figure out why the authors ran an uninteresting study or executed it poorly. They just reject it.
When journals lose confidence in the data behind a published article, it is not their job to figure out why the authors published data that no longer inspire confidence. They should just retract it.
Employers, funders, and co-authors can worry about why an author published untrustworthy data.
Solution 2. Show receipts
Penn, my employer, reimburses me for expenses incurred at conferences.
However, I don’t get to just say “hey, I bought some tacos at that Kansas City conference, please deposit $6.16 into my checking account.” I need receipts. They trust me, but there is a paper trail in case of need.
When I submit the work I presented in Kansas City to a journal, in contrast, I do just say “hey, I collected the data this or that way.” No receipts.
The recent Science retraction, the one with canvassers & gay marriage, is a great example of the value of receipts. The statistical evidence suggested something was off, but the receipts-like paper trail helped a lot:
Author: “so and so ran the survey with such and such company”
Sleuths: "hello such and such company, can we talk with so and so about this survey you guys ran?"
Such and such company: “we don’t know any so and so, and we don’t have the capability to run the survey.”
Authors should provide as much documentation about how they run their science as they do about what they eat at conferences: where exactly the study was run, at what time and on what day, which research assistant ran it (with contact information), how exactly participants were paid, etc.
We will trust everything researchers say. Until the need to verify arises.
Solution 3. Post data, materials and code
Had the raw data not been available, the recent Science retraction would probably not have happened. Stapel would probably not have gotten caught. The cases against Sanna and Smeesters would not have moved forward. To borrow from a recent paper with Joe and Leif:
Journals that do not increase data and materials posting requirements for publications are causally, if not morally, responsible for the continued contamination of the scientific record with fraud and sloppiness.
Feedback from Ivan Oransky, co-founder of Retraction Watch
Ivan co-wrote an editorial in the New York Times on changing the incentives to reduce fraud (.html). I reached out to him for feedback. He directed me to some papers on the evidence linking incentives and fraud. I was unaware of, but also unpersuaded by, that evidence. This prompted me to add the last paragraph of the incentives section (where I am skeptical of that evidence). Despite our different takes on the role rewarding sexy findings plays in fraud, Ivan is on board with the three non-incentive solutions proposed here. I thank Ivan for the prompt response and useful feedback. (And for Retraction Watch!)
Footnotes.
1. I use the word fabricateur to refer to scientists who fabricate data. Fraudster is insufficiently specific (e.g., selling 10 bagels and calling them a dozen is fraud too), and fabricator has positive meanings (e.g., people who make things). Fabricateur has a nice ring to it. [↩]
2. Every author publishing in an American Psychological Association journal agrees to share data upon request. [↩]