Tempering justice with mercy: the question of youthful offenders in the tribe of science.

Recently, I wrote a post about two researchers at the University of Alabama at Birmingham (UAB) who were caught falsifying data in animal studies of immune-suppressing drugs. In the post, I conveyed that this falsification was very bad indeed, and examined some of the harm it caused. I also noted that the Office of Research Integrity (ORI) meted out somewhat different penalties to the principal investigator (a ten-year voluntary exclusion from government funding and from serving in any advisory capacity with the PHS) and to her postdoc (a three-year voluntary exclusion from government funding and PHS advisory roles). Moreover, UAB had left open the possibility that the postdoc might work on other people’s research projects under very strict mentoring. (Owing to the ORI ruling, these research projects would have to be ones funded by someone other than the U.S. government, however.)

On that post, commenter Paul Browne disagreed with my suggestion that rehabilitation of the postdoc in this case might be an end worth seeking:

“While such an obvious departure from an experimental protocol — especially in an experiment involving animal use — isn’t much of an ethical gray area, I think there’s something to be said for treating early-career scientists as potentially redeemable in the aftermath of such ethical screw-ups.”

You have got to be kidding.

We’re not talking about an honest mistake, or deviating from an approved protocol with the best of intentions, or excluding a few outliers from the analysis, but rather about a decade of repeatedly lying to their funders, their IACUC, and to other scientists working in their field.

What they did almost makes me wish that science had a ceremony similar to the old military drumming out.

At the very least they should both be charged with fraud, and since they presumably included their previous falsified results in support of NIH grant applications, it shouldn’t be too hard to get a conviction.

Believe me, I understand where Paul is coming from. Given the harm that cheaters can do to the body of shared knowledge on which the scientific community relies, and to the trust within the scientific community that makes coordination of effort possible, I understand the impulse to remove cheaters from the community once and for all.

But this impulse raises a big question: Can a scientist who has made an ethical misstep be rehabilitated and reintegrated as a productive member of the scientific community? Or is your first ethical blunder grounds for permanent expulsion from the community? In practice, this isn’t just a question about the person who commits the ethical violation. It’s also a question about what other scientists in the community can stomach in dealing with the offenders — especially when the offender turns out to be a close colleague or a trainee.


In the case of a hard line — one ethical strike and you’re out — what kind of decision does this force on the scientific mentor who discovers that his or her graduate student or postdoc has crossed an ethical line? Faced with someone you judge to have talent and promise, someone you think could contribute to the scientific endeavor, someone whose mistake you are convinced was the result of a moment of bad judgment rather than evil intent or an irredeemably flawed character, what do you do?

Do you hand the matter over to university administrators or federal funders (who don’t know your trainee, might not recognize or value his or her promise, and might not be able to judge just how out of character this ethical misstep really was) and let them mete out punishment? Or do you try to address the transgression yourself, as a mentor, taking up the actual circumstances of the ethical blunder, the other options your trainee should have recognized as better ones to pursue, and the kind of harm this bad decision could bring to the trainee and to other members of the scientific community?

Clearly, there are downsides to either of these options.

One problem with handling an ethical transgression privately is that it’s hard to be sure it has really been handled in a lasting way. Given the persistent patterns of misbehavior that often come to light when big frauds are exposed, it’s hard not to wonder whether scientific mentors were aware, and perhaps even intervening in ways they hoped would be effective. In commenting on the Luk Van Parijs case in an article in the Boston Globe, one research integrity expert noted just such a pattern:

It is not unusual to see cases of fraud involving data that are tangential to the main point of a research paper, as is alleged in some of Van Parijs’s work, according to C.K. Gunsalus, a special counsel at the University of Illinois at Urbana-Champaign and a specialist on research integrity.

“It is very common, and there is also a common defense, which is ‘I have a PhD and I wouldn’t have done something so stupid,’ ” said Gunsalus. Often, she said, this defense is successful. She also said that it was common to see *a pattern of escalation, with small infractions building over time to larger ones*.

(Emphasis mine.)

It’s the building over time of ethical violations that is concerning. Is such an escalation the result of a hands-off (and eyes-off) policy from mentors and collaborators? Could intervention earlier in the game have stopped the pattern of infractions and led the researcher to cultivate more honest patterns of scientific behavior? Or is being caught by a mentor or collaborator who admonishes you privately and warns that he or she will keep an eye on you almost as good as getting away with it — an outcome with no real penalties and no paper trail that other members of the scientific community might access?

It’s even possible that some of these interventions might happen at an institutional level — the department or the university becomes aware of ethical violations and deals with them “internally” without involving “the authorities” (who, in such cases, are usually federal funding agencies). I dare say that the feds would be pretty unhappy about being kept out of the loop if the ethical violations in question occur in research supported by federal funding. But if the presumption is that getting the feds involved raises the available penalties to the draconian, it is understandable that departments and universities might want to try to address the ethical missteps while still protecting the investment they have made in a promising young researcher.

Of course, the rest of the scientific community has relevant interests here. These include an interest in being able to trust that other scientists present honest results to the community, whether in journal articles, conference presentations, grant applications, or private communications. Arguably, they also include an interest in having other members of the community expose dishonesty when they detect it. Managing an ethical infraction privately is problematic if it leaves the scientific community with misleading literature that isn’t corrected or retracted (for example).

It’s also problematic if it leaves someone with a habit of cheating in the community, presumed by all but a few of the community’s members to have a good record of integrity.

But I’m inclined to think that the impulse to deal with science’s youthful offenders privately is a response to the fear that handing them over to federal authorities has a high likelihood of ending their scientific careers forever: the fear that a first offense will be punished with the career equivalent of the death penalty.

Permanent expulsion or a slap on the wrist is not much of a range of penalties. And I suspect that neither of these options really addresses the question of whether rehabilitation is possible and in the best interests of both the individual and the scientific community.

Earlier this month, I wrote:

It’s true that scientific training seems to go on forever, but that shouldn’t mean that early career scientists are infantilized. They are, by and large, legal adults, and they ought to be striving to make decisions as adults — which means considering the potential effects of their actions and accepting the consequences of them. I’m disinclined, therefore, to view ORI judgments of scientific misconduct as akin to juvenile criminal records that are truly expunged to reflect the transient nature of the youthful offender’s transgressions. Scientists ought to have better judgment than fifteen-year-olds. Occasionally they don’t. If they want to stay a part of the scientific community that their bad choices may have harmed, they have to be prepared to make real restitution. This may include having to meet a higher burden of proof to make up for having misled one’s fellow scientists at some earlier point in time. It may be a pain, but it’s not impossible.

Indeed, I’m inclined to think that early career lapses in judgment ought not to be buried precisely because public knowledge of the problem gives the scientific community some responsibility for providing guidance to the promising young scientist who messed up. Acknowledging your mistakes sets up a context in which it may be easier to ask other folks for help in avoiding similar mistakes in the future. (Ideally, scientists would be able to ask each other for such advice as a matter of course, but there are plenty of instances where it feels like asking a question would be exposing a weakness — something that can feel very dangerous, especially to an early career scientist.)

If no errors in judgment are tolerated, people will do anything to conceal such errors. Mentors who are trying to be humane may become accomplices in the concealment. The conversations about how to make better judgments may not happen because people worry that their hypothetical situations will be scrutinized for clues about actual screw-ups.

None of this is to say that ethical violations should be without serious consequences — they shouldn’t. But this need not preclude the possibility that people can learn from their mistakes. Violators may have to meet a heavy burden to demonstrate that they have learned from their mistakes. Indeed, it is possible they may never fully regain the trust of their fellow researchers (who may read their papers and grant proposals with heightened skepticism in light of their past wrongdoing).

However, it seems perverse for the scientific community to adopt a stance that rehabilitation is impossible when so many of its members seem motivated to avoid official channels for dealing with misconduct precisely because they feel rehabilitation is possible. If the official penalty structure denies the possibility of rehabilitation, those scientists who believe in rehabilitation will take matters into their own hands. To the extent that this may exacerbate the problem, it might be good if paths to rehabilitation were given more prominence in official responses to misconduct.

Posted in Doing science for the government, Misconduct, Professional ethics, Tribe of Science.

19 Comments

  1. I think part of the point that Paul Browne seemed to be making, which wasn’t directly addressed, is that this isn’t a minor fudging of a tangential point or a one-off event; this was a systematic, repeated, and lengthy series of conscious decisions to falsify critical results. This wasn’t a kid pocketing $20 from the register when he’s short for gas one day; these two were more like Bernie Madoff. If there was ever a case for taking a hard line, this is it. Frankly, I’m dismayed that they might ever be allowed to work on government-funded projects again.

  2. Nick, fair enough. In the specific case at UAB, the postdoc in question may give us grounds to think *his* rehabilitation is extremely unlikely.
    However, especially given the collective gnashing of teeth in instances where postdocs and grad students seem to get saddled with the harshest punishments while PIs walk away relatively unscathed, I think there’s a case to be made for recognizing rehabilitation as an *option* — one that’s available and that’s exercised when appropriate — rather than taking it off the table altogether.
    And, as I noted previously, there is a difference between regaining eligibility to be awarded government grant money and regaining the trust of your professional community. The latter is much harder, and is pretty much a necessity for a successful scientific career.

  3. Basically, you have written a very nice persuasive justification for applying the principle of proportionality of punishment–best developed in the context of Anglo-Saxon criminal law–to scientific misconduct.

  4. Janet,
    You have a partner in your stance on this issue: Richard Gallagher, who wrote an editorial in the July 2009 issue of The Scientist about fairness for fraudsters.
    Personally, I hold that scientific misconduct is a crime with a much greater number of victims than the garden-variety crimes we read about in the paper every day. Moreover, scientists know better than anyone else what the consequences of scientific misconduct are, since those who decide to commit it manipulate the very trust that they were given and on which the whole scientific apparatus stands. The fraudster’s victims are his/her own colleagues (students, postdocs, technicians, co-investigators, mentors who taught him/her); the department s/he belongs to, including all its members; the school to which this department belongs, with its faculty members and administrators; the university to which this school belongs, with its staff; the state where this university is located, especially if it is a state university; the American taxpayers and the government agency that allocated the taxpayers’ money for the fraudster’s research; and all the scientists around the world who relied on, and spent money and time based on, worthless scientific data. Such far-reaching victimization should not even be considered for the possibility of rehabilitation. Fraudsters in science should look for a different vocation.

  5. S. Rivlin, one of the links I dropped in this post was my own response to Gallagher’s editorial, in which I expressed my disagreement with his call to scrub electronic archives of ORI judgments against fraudsters.
    I think scientific misconduct is serious business. But I also worry that if we create conditions where human (and humane) scientists are unwilling to report it, the problem isn’t likely to get better.
    As always, I’m happy to entertain alternative proposals to address the problem, though.

  6. I agree with you that the PI should take greater responsibility for missteps than they are usually forced to. I remember one case involving fraudulent data in a published paper where the PI excused themselves by stating they hadn’t even read the paper and so could not be expected to catch the fraud. The PI was named co-author on the paper, but was nevertheless not assigned any blame, something I and many others found rather distasteful.
    I do agree with you that people should be getting second chances – in science as in criminal law. But I also do agree with Nick above that this is one case where being lenient does not seem appropriate. On the scale of offences this is more akin to kidnapping or bank robbery than shoplifting or skimming the till.
    Finally, I do put a question mark on the “youthful” aspect you like to push. A first-time post-doc is already pushing 30, and most “young” researchers are in their mid- or late 30s already. We are all supposed to have a pretty well-honed moral compass by that time. And even at your first post-doc you’ve already spent years in a lab, doing research. It’s not like the job is completely new to you. “Youthful infractions” are what graduate students commit. By the time you’re off on your own you’re supposed to know better.

  7. Janet, scientific misconduct is already underreported due to, among other reasons, the fact that human (and humane) scientists are unwilling to report it. Frequently, bigwigs within institutions are willing accomplices in hiding scientific misconduct within their own institution. I assume that in most of these cases, the main impetus not to report scientific misconduct is not humane, but rather egotistic on the part of the one who chooses not to report it. Being more forgiving and promising rehabilitation, I am sure, will not increase the number of cases of scientific misconduct reported. Full protection and encouragement of potential whistleblowers is the first step in fighting the increasing payoff that entices fraudsters. In addition, a complementary step must be heavy punishment of those who are found guilty, along with their accomplices (including those who knew but chose to remain silent).
    We have to change the attitude, which is prevalent among scientists, of loyalty to our peers even when we know that they are actually fraudsters. This attitude is acceptable among thieves, politicians residing in the C Street House and cops in certain police departments, but it has no place among scientists.
    On a personal note: after I blew the whistle on a scientific misconduct case, I was shocked to find out that those I considered friends and colleagues chose to stay as far away from me as possible. A few even called me on the phone to criticize me for rocking the boat. Others were willing to stand strongly by the offender for their own personal gain. Of the hundreds of faculty members in our medical school, only one volunteered to stand by me, offering emotional and legal support.

  8. “But this impulse raises a big question: Can a scientist who has made an ethical misstep be rehabilitated and reintegrated as a productive member of the scientific community? Or is your first ethical blunder grounds for permanent expulsion from the community?”
    In many cases scientists can be rehabilitated, but it all depends on the nature of the blunder and the circumstances in which it takes place. In some cases a warning might be sufficient, in other cases more serious disciplinary action and perhaps additional training/retraining might be appropriate, and in others I think expulsion from the scientific community and criminal charges are warranted.
    As you might have gathered from my previous post I believe that the Judith Thomas and Juan Contreras affair falls into the latter category, and it’s worth remembering that Judith Thomas received the more severe punishment even if it was Juan Contreras who resigned.
    As Comrade PhysioProf noted, it’s all about proportionality, but for severe infringements of ethical standards there must be severe punishments; otherwise there will be the temptation to cover up unintentional or small infringements with even worse fraud rather than admit/expose them early on.

  9. While I agree in principle that there should be a punishment somewhere between “slap on the wrist” and “death penalty”, I don’t see how to implement it in a way where it doesn’t turn into one or the other extreme. The problem arises whenever a trainee (student or postdoc) applies for a position at another institute. If the person doing the hiring sees that the applicant has previously been sanctioned for academic misconduct, why would he take the risk that this applicant could become a repeat offender? If that did happen, the PI would not have the (evidently too popular) “Hoocoodanode?” defense available. And if the infraction is not disclosed on the application, it’s effectively a wrist slap.
    I agree with the earlier posters who advocate stronger punishments for PIs. It’s their job to keep track of what’s going on in the lab; when they get a grant it’s their signature on the dotted line saying that they will be responsible for those funds. It should still be possible for the PI to mount a defense against a rogue trainee, but the presumption should be that the PI is responsible for the training–otherwise, we need to accept that this whole “trainee” notion is bogus, and deal with the consequences.

  10. “Rocking the boat”… that’s what people will blame the one who exposes fraud for doing. And with all the “references” that are necessary for one’s survival in the scientific world, it will be hard for someone just starting out to blow the whistle against their bosses.

  11. “Rocking the boat”… that’s what people will blame the one who exposes fraud for doing. And with all the “references” that are necessary for one’s survival in the scientific world, it will be hard for someone just starting out to blow the whistle against their bosses.
    Ela, you’re absolutely correct. That’s why full protection of whistleblowers is needed: a mechanism that allows the whistleblower to remain anonymous. Threats of retaliation and retribution are the main weapons that fraudsters in power positions use to discourage whistleblowing. Such threats by themselves should be considered misconduct and should be punishable accordingly.

  12. Janet: “On the other hand, there is a power differential between the PI and the postdoc. It doesn’t excuse his actions, but might we consider the effect of coercion?”
    Certainly, that was what I meant by “the circumstances in which it [the fraud] takes place”. If there is evidence that the junior scientist came under a lot of pressure from the senior scientist to go along with or commit the fraud, that should be taken as a mitigating factor, even if it does not excuse their actions.
    I do think we need to consider the possibility that the senior scientist does not always initiate the fraud. It appears to be the case that Contreras initiated the fraud (or at the very least was responsible for mistakes that later became fraud), and that Thomas discovered it later and chose to continue it rather than admit to it and retract the affected papers. While it might be right that he doesn’t shoulder as much responsibility as Thomas, it does seem to me appropriate for him to face severe disciplinary measures.
    We all know that there are a lot of promising young scientists in the US, many of whom will, despite their ability and best efforts, never get the break they need to get a tenure track position or similar senior post. Should we really attempt to save the careers of the likes of Dr. Contreras, whose actions threaten not only their close colleagues but also the wider medical research community, while there are plenty of more deserving candidates available?
    Finally, what concerns me is that while UAB and ORI appear to have acted promptly once they became aware that there was a problem, and their investigation looks thorough, it took far too long for the fraud to be detected (though perhaps not in banking terms). IMHO S. Rivlin is correct that whistleblowers need more protection; whether that can be done through reform and harmonization of procedures within institutions or through greater involvement of external regulators would be an interesting topic for discussion.

  13. “Can a scientist who has made an ethical misstep be rehabilitated and reintegrated as a productive member of the scientific community?”
    “Ethical misstep”? “Blunder”?
    It appears here that data was willfully and intentionally falsified. That’s not a misstep; it’s jumping off a cliff. A blunder? How does one commit an intentional blunder?

  14. “Can a scientist who has made an ethical misstep be rehabilitated and reintegrated as a productive member of the scientific community?”
    Perhaps it is possible – but shouldn’t you rather be asking whether it is desirable?

  15. I do think we need to consider the possibility that the senior scientist does not always initiate the fraud.
    In the vast majority of misconduct, the senior scientist isn’t even aware of the fraud.

  16. Philosophically, I am opposed to any form of punishment whose goal is anything but an attempt to provoke rehabilitation of the individuals involved (unless the offense [or, more importantly, pattern of offenses] is so heinous as to warrant a “ban for life” – which I think is uncommon). With that in mind, we have to deliver a consequence proportionate to the offense without invoking career-ending measures. Clearly that alone discriminates between the two individuals described above: the magnitude of a punishment that is effective to elicit rehabilitation but which is not a career killer is clearly different for a post-doctoral fellow and an established PI.
    I also side with Dr. Isis in her argument that a postdoctoral fellow could (at least in principle) be subject to “coercion”. PIs can elicit unethical behavior in their trainees simply by fostering an excessively competitive culture that favors results at the expense of care and scientific rigor; they then look the other way when that behavior crosses lines. I suspect that this is the most common manifestation of coercion in labs that leads trainees to cut corners or explicitly fabricate data. Again, the manner in which the PI and the postdoctoral fellow are responsible for the offense is different and best addressed with different forms or levels of punitive measures.
    Overall, I think the PI often needs to be the target of more robust intervention than the trainee because the PI has failed twice – as a scientist and as a mentor.

  17. “In the vast majority of misconduct, the senior scientist isn’t even aware of the fraud.”
    And this statement is based on what? Statistics? Personal knowledge? An educated guess? An uneducated guess?

  18. David,
    My philosophy is that scientists in academia are no different from officers in a police force. There are certain types of crimes that, when committed by a police officer, guarantee that s/he will never again be hired to serve in any police force. There are types of crimes committed by scientists that should likewise forbid their rehiring into academia. The case at UAB is a perfect example of such a crime. A ban from academia is a fitting punishment.
