It’s time for one last look at the excellent article by Yudhijit Bhattacharjee in the New York Times Magazine (published April 26, 2013) on social psychologist and scientific fraudster Diederik Stapel. We’ve already examined the strategy Stapel pursued to fabricate persuasive “results”, the particular harms his misconduct did to the graduate students he was training, and the apprehensions of the students and colleagues who suspected fraud was afoot about blowing the whistle on Stapel. To close, let’s look at some of the uncomfortable lessons the Stapel case holds for his scientific community — and perhaps for other scientific communities as well.
At the end of November, the universities unveiled their final report at a joint news conference: Stapel had committed fraud in at least 55 of his papers, as well as in 10 Ph.D. dissertations written by his students. The students were not culpable, even though their work was now tarnished. The field of psychology was indicted, too, with a finding that Stapel’s fraud went undetected for so long because of “a general culture of careless, selective and uncritical handling of research and data.” If Stapel was solely to blame for making stuff up, the report stated, his peers, journal editors and reviewers of the field’s top journals were to blame for letting him get away with it. The committees identified several practices as “sloppy science” — misuse of statistics, ignoring of data that do not conform to a desired hypothesis and the pursuit of a compelling story no matter how scientifically unsupported it may be.
The adjective “sloppy” seems charitable. Several psychologists I spoke to admitted that each of these more common practices was as deliberate as any of Stapel’s wholesale fabrications. Each was a choice made by the scientist every time he or she came to a fork in the road of experimental research — one way pointing to the truth, however dull and unsatisfying, and the other beckoning the researcher toward a rosier and more notable result that could be patently false or only partly true. What may be most troubling about the research culture the committees describe in their report are the plentiful opportunities and incentives for fraud. “The cookie jar was on the table without a lid” is how Stapel put it to me once. Those who suspect a colleague of fraud may be inclined to keep mum because of the potential costs of whistle-blowing.
The key to why Stapel got away with his fabrications for so long lies in his keen understanding of the sociology of his field. “I didn’t do strange stuff, I never said let’s do an experiment to show that the earth is flat,” he said. “I always checked — this may be by a cunning manipulative mind — that the experiment was reasonable, that it followed from the research that had come before, that it was just this extra step that everybody was waiting for.” He always read the research literature extensively to generate his hypotheses. “So that it was believable and could be argued that this was the only logical thing you would find,” he said. “Everybody wants you to be novel and creative, but you also need to be truthful and likely. You need to be able to say that this is completely new and exciting, but it’s very likely given what we know so far.”
Fraud like Stapel’s — brazen and careless in hindsight — might represent a lesser threat to the integrity of science than the massaging of data and selective reporting of experiments. The young professor who backed the two student whistle-blowers told me that tweaking results — like stopping data collection once the results confirm a hypothesis — is a common practice. “I could certainly see that if you do it in more subtle ways, it’s more difficult to detect,” Ap Dijksterhuis, one of the Netherlands’ best known psychologists, told me. He added that the field was making a sustained effort to remedy the problems that have been brought to light by Stapel’s fraud.
If the writers of this report are correct, the field of psychology failed in multiple ways here. First, psychologists were insufficiently skeptical — both of Stapel’s purported findings and of their own preconceptions — to nip his fabrications in the bud. Second, they were themselves routinely engaging in practices that were bound to mislead.
Maybe these practices don’t rise to the level of outright fabrication. However, neither do they rise to the level of rigorous and intellectually honest scientific methodology.
There could be a number of explanations for these questionable methodological choices.
Possibly some of the psychologists engaging in this “sloppy science” lack a good understanding of statistics or of what counts as a properly rigorous test of one’s hypothesis. Essentially, this is an explanation of faulty methodology on the basis of ignorance. However, it’s likely that this is culpable ignorance — that psychology researchers have a positive duty to learn what they ought to know about statistics and hypothesis testing, and to make use of the resources available to ensure that they aren’t ignorant in this particular way.
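To make concrete how one of these “sloppy” choices misleads, here is a minimal simulation (my own illustration, not anything from the article or the report) of the practice mentioned in the quoted passage: stopping data collection as soon as the result reaches significance. Even when there is no real effect at all, repeatedly peeking at the accumulating data and stopping at the first p < .05 produces “significant” findings far more often than the nominal 5% of the time.

```python
# Hypothetical illustration of "optional stopping": peek at the data as it
# accumulates and stop as soon as p < .05. With NO true effect, this inflates
# the false-positive rate well above the nominal 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 2000            # simulated studies, each with no real effect
n_min, n_max, step = 10, 100, 5
alpha = 0.05

false_positives_peeking = 0
false_positives_fixed_n = 0

for _ in range(n_experiments):
    data = rng.normal(loc=0.0, scale=1.0, size=n_max)   # null is true: mean = 0

    # "Sloppy" analyst: test after every few added participants, stop at first p < .05
    if any(stats.ttest_1samp(data[:n], 0.0).pvalue < alpha
           for n in range(n_min, n_max + 1, step)):
        false_positives_peeking += 1

    # Rigorous analyst: one pre-planned test at the full sample size
    if stats.ttest_1samp(data, 0.0).pvalue < alpha:
        false_positives_fixed_n += 1

print(f"False-positive rate with optional stopping: {false_positives_peeking / n_experiments:.2%}")
print(f"False-positive rate with a fixed sample:    {false_positives_fixed_n / n_experiments:.2%}")
```

Running a sketch like this typically shows the peeking analyst reporting a “finding” several times more often than the analyst who committed to a sample size in advance, which is exactly the kind of distortion that better statistics education is meant to guard against.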
I don’t know if efforts to improve statistics education are a part of the “sustained effort to remedy the problems that have been brought to light by Stapel’s fraud,” but I think they should be.
Another explanation for the lax methodology decried by the report is alluded to in the quoted passage: perhaps psychology researchers let the strength of their own intuitions about what they were going to see in their research results drive their methodology. Perhaps they unconsciously drifted away from methodological rigor and toward cherry-picking and misuse of statistics and the like because they knew in their hearts what the “right” answer would be. Given this kind of conviction, of course they would reject methods that didn’t yield the “right” answer in favor of those that did.
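A companion sketch (again my own hypothetical illustration, not drawn from the article) shows how cherry-picking feeds on strong intuitions: if a researcher measures many outcomes and reports only whichever one comes out “significant,” a null effect will look like a confirmation of the hypothesis in a large share of studies.

```python
# Hypothetical illustration of cherry-picking: measure many outcomes, then
# report only whichever one happens to "confirm" the intuition. With 10
# unrelated measures and no true effect, a "significant" result turns up in
# roughly 40% of studies at the .05 level.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_studies, n_measures, n_subjects, alpha = 2000, 10, 30, 0.05

studies_with_a_hit = 0
for _ in range(n_studies):
    # Two groups with no real difference on any measure
    group_a = rng.normal(size=(n_subjects, n_measures))
    group_b = rng.normal(size=(n_subjects, n_measures))
    pvalues = [stats.ttest_ind(group_a[:, m], group_b[:, m]).pvalue
               for m in range(n_measures)]
    if min(pvalues) < alpha:        # report only the "best" measure
        studies_with_a_hit += 1

print(f"Share of null studies reporting a 'significant' result: "
      f"{studies_with_a_hit / n_studies:.2%}")
```

With ten independent measures, the chance of at least one spurious hit is roughly 1 − 0.95^10, or about 40%, which is why rigorous methodology asks researchers to specify the outcome of interest before looking at the data.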
Here, too, the explanation does not provide an excuse. The scientist’s brief is not to take strong intuitions as true, but to look for evidence — especially evidence that could demonstrate that the intuitions are wrong. A good scientist should be on the alert for instances where she is being fooled by her intuitions. Rigorous methodology is one of the tools at her disposal to avoid being fooled. Organized skepticism from her fellow scientists is another.
From here, the explanations drift into waters where the researchers are even more culpable for their sloppiness. If you understand how to test hypotheses properly, and if you’re alert to the seductive power of your intuitions, the remaining reason you might engage in “sloppy science” is to make your results look less ambiguous, more certain, and more persuasive than they really are, either to your fellow scientists or to others (administrators evaluating your tenure or promotion case? the public?). Knowingly providing a misleading picture of how good your results are is lying. It may be a lie of a smaller magnitude than Diederik Stapel’s full-scale fabrications, but it’s still dishonest.
And of course, there are plenty of reasons scientists (like other human beings) might try to rationalize a little lie as being not that bad. Maybe you really needed more persuasive preliminary data than you got to land the grant without which you won’t be able to support graduate students. Maybe you needed to make your conclusions look stronger to satisfy the notoriously difficult peer reviewers at the journal to which you submitted your manuscript. Maybe you are on the verge of getting credit for a paradigm-shaking insight in your field (if only you can put up the empirical results to support it), or of beating a competing research group to the finish line for an important discovery (if only you can persuade your peers that the results you have establish that discovery).
But maybe all these excuses prioritize scientific scorekeeping to the detriment of scientific knowledge-building.
Science is supposed to be an activity aimed at building a reliable body of knowledge about the world. You can’t reconcile this with lying, whether to yourself or to your fellow scientists. This means that scientists who are committed to the task must refrain from the little lies, and that they must take serious conscious steps to ensure that they don’t lie to themselves. Anything else runs the risk of derailing the whole project.