Eugenie Samuel Reich is a reporter whose work in the Boston Globe, Nature, and New Scientist will be well-known to those with an interest in scientific conduct (and misconduct). In Plastic Fantastic: How the Biggest Fraud in Physics Shook the Scientific World, she turns her skills as an investigative reporter to writing a book-length exploration of Jan Hendrik Schön’s frauds at Bell Labs, providing a detailed picture of the conditions that made it possible for him to get away with his fraud as long as he did.
Eugenie Samuel Reich agreed to answer some questions about Plastic Fantastic and the Schön case. My questions, and her answers, after the jump.
JDS: Reading your account, I’m struck by how little I understand Schön’s motivation for his fraudulent conduct — I honestly can’t decide whether he just didn’t grasp the wrongness of what he was doing, or if he knew it was wrong but felt overpowered by other considerations (pleasing his supervisors and colleagues, securing his position, getting along). Since Schön wasn’t exactly a cooperative subject for the book, you had to piece together your picture of him from sources other than his direct testimony. (How much we could trust his testimony is another question.) What kind of challenge was it for you to figure out Schön from the sources available to you? Was it hard to fit together different people’s impressions of him to come up with a coherent picture? What questions did you find yourself left with as far as what Schön was like, what drove him, and why he did what he did?
ESR: Schön knew that what he was doing might be regarded as wrong if it came to light, because, as the book describes, he went to great lengths to avoid confrontations about his research. On the other hand, I found no evidence, including in emails he sent after he was exposed, which I obtained, that he himself regarded what he was doing as wrong. Based on this, I concluded that he didn’t understand why faking data mattered. He didn’t think it was wrong; he only thought others might think it was. To try to understand this state of mind, I give one example in the book from my own experience as a student, when I was learning to take data and made up some numbers that fit a pattern I had observed, not realizing that you cannot do this. Based on that experience, I’m convinced that for some people ethical data taking needs to be taught; it doesn’t come naturally. This means teaching not only that faking data is regarded as wrong but also conveying to students an understanding of why it is both wrong and self-defeating – wrong because it involves misleading others, and self-defeating because it negates the possibility of discovery. Schön is an example of someone who never learned this, never learned to do real science, never enjoyed discovering something surprising or discerning an unexpected pattern in his real numbers. He did, on the other hand, enjoy presenting and reporting results. I understood him in the end as someone with a moral handicap, a lack of moral understanding, in this regard.
As you imply, he was hard to portray.
JDS: One of the moments in the book that made my blood boil was when a Bell Labs MTS named Julia Hsu was scored in her performance review as “not a team player” for pointing out to her manager that good science requires time and resources. Since Schön was held up as an example of the efficiency to which she should aspire, Hsu’s concern was that something was atypical about his high output (and, perhaps, that something was fishy, from the resource-tracking end, about not taking account of the fact that many of his devices were supposedly built and experiments supposedly conducted in Konstanz, with someone else’s equipment). Did you get a sense that the priorities of the Bell Labs team were distinct in important respects from the priorities of Team Science?
ESR: In that anecdote, Hsu wasn’t herself voicing any particular suspicion of Schön. It was her manager who brought up Schön as a point of contrast to Hsu. However, her experience of being criticized for speaking up about resourcing definitely shows how the existence of fraudulent science puts pressure on honest scientists. Honest scientists need time and money to get results, and sometimes, due to bad luck and the quirks of nature rather than any shortcoming of their own, fail to get any results at all. Fraudulent science hypes the expectations of managers and journal editors, making it harder for honest scientists to compete.
JDS: To what extent do you think Schön’s fraud was encouraged, or at least allowed to go undetected, by the pressures on the managers at Bell Labs (especially in the wake of the breakup of AT&T, the transfer of Bell Labs to Lucent Technologies, the dot-com boom, and the dot-com bust)? Do you see any larger lessons about the hazards of trying to balance commercial interests and scientific interests?
ESR: As the book describes, Schön’s fraud was seriously exacerbated by the pressures on managers, whose priority in the dot-com boom was to showcase the research going on in their departments, and in the dot-com bust to reduce expenditures and protect staff from cuts. Verifying the integrity of the research wasn’t a priority because at Bell Labs integrity was assumed, with managers tending to take it on trust that staff had really obtained the results that were reported. This way of operating cannot cope with people who knowingly fabricate plausible and desirable results. Arguably, it is a risky model even when it comes to coping with people who exaggerate or hide results to a lesser extent than outright fabrication.
JDS: The “science glamor mags” — in particular, Science and Nature — played an important part in propagating Schön’s fraudulent data. How much of this was due to the breakdown of official journal rules and procedures, and how much do you think might have happened anyway, owing to the basic trust peer reviewers and editors seemed inclined to give Schön and his frequent coauthor Bertram Batlogg? How much of a role do you reckon the commercial interests of the journals played here? If Schön had only submitted to specialist journals, do you think that his crime spree would likely have ended sooner? What standing conditions at the journals do you think did the most to exacerbate the problem?
ESR: Editors at both Nature and Science are under pressure to recruit significant research findings. There is a commercial origin to this pressure at both organizations – significant papers will attract readers and advertisers to the journals. (Science magazine describes itself as non-profit, but the magazine is owned by the AAAS organization, whose tax returns I inspected – they include taxable advertising revenue from a for-profit, AAAS Science Publications). At the same time, both journals have a process of quality control that to some extent balances the commercial pressure. I report evidence in the book that this failed at both journals when it came to Schön. This was firstly because unknowing reviewers were positive in response to the aesthetically pleasing results Schön fabricated. It was also because editors tended to accept papers readily once they read positive reviews, rather than making sure Schön’s methods were completely clear, and that any technical questions (reviewers did raise some questions) were fully articulated and answered. (Science, on one occasion, published a paper by Schön after receiving only one review, which is a breach of their own procedure. The review they relied on was very short, but extremely positive.) That said, specialist journals were not immune to these kinds of issues – Schön fooled specialist reviewers too – and if he had published only in specialist journals, he would probably have continued for longer before the troubling duplications in his data came to widespread attention.
JDS: Reading this book is going to give a bunch of scientists involved in interdisciplinary research projects pause, given how many coauthors Schön had on his fraudulent papers and how surprised they seemed to be that he was committing fraud. What lessons do you draw from this case about prudent collaboration, and about the checks and oversight that ought to go on before a result is communicated beyond a research group?
ESR: Although it’s true that Schön’s fraud was enabled partly by the interdisciplinary nature of research, a close read of the story reveals that there are things colleagues from other disciplines can do, or fail to do, that help to catch fraud. For example, although Schön’s colleagues did begin to ask questions about his methods once those questions were raised by external researchers, they did not start out the research by making sure clear protocols existed. Such protocols should not have involved a lab member having to have “magical hands” or other poorly understood research skills, nor should they have involved postulating the existence of one-of-a-kind equipment that produced perfectly tuned lab materials in a mysterious and undocumented way.
JDS: One of the striking features of Schön’s M.O. was his habit of adjusting his falsified results in light of referee reports, and producing additional “corroborating” results in response to what colleagues said they would need to see to really be convinced. Did the scientists you interviewed say anything about whether they thought a different form of interaction — perhaps a more adversarial one — might have undercut Schön’s apparent strategy? Do you have a view, based on this case, of how scientists can interact productively with each other without inadvertently helping cheaters fine-tune their misrepresentations?
ESR: Yes, you picked up on a really interesting point, that criticism can help cheats to fine-tune their story. The only way to avoid this is to have someone keep track of the changing story. At the journals, reviewers are in a position to do this if editors let them, for example by making sure that one or two rounds of responses to review are, in turn, reviewed. Journal confidentiality works against this because it allows fraudulent scientists to withdraw papers after a round of review and shop them to another journal where the reviewers won’t keep track of the changing story. Online resources that track changing representations – in physics there is the Los Alamos preprint archive – can be a useful tool.
JDS: Part of what led to Schön’s unraveling is that he seemed to have made up results for systems he didn’t understand. Since he didn’t have a good understanding of much of the physical theory, he ended up reporting results that struck experts as really implausible. And because he seems not to have done much actual experimentation, he wasn’t great at making up data with all the noise and other features that experimentalists would expect. But I wonder … was the scientific community lucky that Schön wasn’t smarter? If he had understood the underlying theory and the literature better, could he have constructed more plausible frauds and gotten away with it?
ESR: I believe the answer is yes.
JDS: Near the end of the book, you make the important observation that Schön’s fraud wasn’t exposed by people who had faith in the self-correcting nature of science, but rather by people who seemed to feel that the scientific community could be fooled if they were not tenacious in their scrutiny. What they realized, in other words, was that “science” only works through the actions of individual scientists. What do you think is going on with all those scientists who have a generalized trust in science but, apparently, not much sense of personal responsibility when it comes to scrutinizing the results and interpretations of other scientists?
ESR: The problem is that science requires resources and scientists are forced to market themselves to the public to raise money for their research. The danger comes when they develop too much faith in their own cover story. Science succeeds when it remains self-critical.
JDS: Is your sense that the Schön case resulted in any lasting changes in how the community of science conducts business — whether in terms of how journal submissions are handled, or how managers interact with the scientists they’re managing, or how hands-on scientists are with their coauthors? Or, with Schön out of the game, was your sense that, for many of the scientists you talked to, everything has gone back to normal?
ESR: I think there’s less naivety, particularly at the journals, and that the community of condensed matter physicists is now probably better inoculated against the scenario of a young rising-star physicist producing up to a dozen breakthrough results in less than a year. I don’t think procedures or scientific culture have fundamentally changed.
JDS: Are there lessons that the public (largely made up of non-scientists) should draw from the Schön case? Should what happened here change our sense of how science ought to be done in academic settings and in the private sector — and of what stake the public has in making sure the science that is done is good rather than pathological?
ESR: I think the lesson for the public is an understanding that good science takes time and resources, and might sometimes involve negative results. This is not a very sexy message.
“Part of what led to Schön’s unraveling is that he seemed to have made up results for systems he didn’t understand.”
If I remember correctly, it was Schön’s built-up self-confidence and self-importance, fed by his meteoric success, that caused him to drop his guard. He had used the very same figure in two separate papers for two different crystals. A reviewer caught that, and the rest was history.
I agree with almost everything Reich says. She seems to have great insight into how science works and articulates it clearly.
But I disagree that a good fraudster might get away with it if only he did a more convincing job.
I believe that science is what is represented by the consensus opinions in graduate-level textbooks written by the recognized leaders in the field. And unless you can document cases of fraud influencing what is taught in these advanced texts (and I am not aware of any), you cannot say a fraudster has any possibility of getting away with it.
Now I’m not saying that the graduate-level texts are always right, because the definition of scientific truth is always changing and what is correct today may be false tomorrow. But that falsity is not the result of fraud.
Fraud is only one of the things that threatens science, and it should not be blown out of proportion compared to the many other issues that also threaten science.
You can’t assume that the readers of scientific journals are unsophisticates who need journal editors to protect them from bad science. Taken as a whole community, scientists are smart enough to protect themselves, and that is how science works.
Great interview – very interesting.
To have gotten as far as he did without realizing that what he was doing was truly wrong, he must have been a sociopath.
Decaf coffee thread: 24 comments.
SCIENTIFIC FRAUD thread: 4 comments.
lolz
I don’t think the fraud is the problem here – the problem is that the self-checking that could have minimized the damage the fraud caused was not performed. Checking results against your own and those of others, and seeing if the cited results cohere and make sense, is one of the important duties of scientists, and for a variety of reasons it was done poorly here. If self-checking (by science) is done this poorly on a regular basis, or under the kinds of incentives present in this case, there are likely to be lots of results that aren’t valid (or aren’t known to be invalid), and lots of wasted time, effort, and money. Eventually, we won’t have much credibility, which will further diminish the ability to find new things and increase the incentives for not checking results – or for falsifying them or reporting questionable ones.
The fraud isn’t the problem – that it was caught too late, and not by the people who probably should have caught it, is.
“On the other hand, I found no evidence, including in emails he sent after he was exposed, which I obtained, that he himself regarded what he was doing as wrong. Based on this, I concluded that he didn’t understand why faking data mattered. He didn’t think it was wrong; he only thought others might think it was.”
I had the beginnings of my doctoral research project directed by a co-investigator (of my supervisor) like this. It took me years of small incidents before I finally figured out that (1) you couldn’t trust what they said, because they hid controls and exaggerated “good” results, and (2) they hid things from others because the others would then think the data was weak – but since (in their mind) it was clearly good data, it was OK to hide the “bad” controls.
We tend to forget that 1-5% of the population meet the DSM criteria for antisocial personality disorder. Arbitrarily assume that at least 90% of these potential sociopaths get washed out, or self-select out, of science. That still leaves a remarkable number of folks with Ph.D.s who simply cannot be trusted.
Actually, I’d bet that the frequency of ASPD among scientists is not that different from the general population. Science wins in the end, but there is an irreducible and non-trivial percentage of reported results that are fake. We’d do better spending more to make people aware of this and less on trying to change human nature.