Faith in rehabilitation (but not in official channels): how unethical behavior in science goes unreported.

Can a scientist who has behaved unethically be rehabilitated and reintegrated as a productive member of the scientific community? Or is your first ethical blunder grounds for permanent expulsion from the community?

In practice, this isn’t just a question about the person who commits the ethical violation. It’s also a question about what other scientists in the community can stomach in dealing with offenders — especially when the offender turns out to be a close colleague or a trainee.

In the case of a hard line — one ethical strike and you’re out — what kind of burden does this place on the scientific mentor who discovers that his or her graduate student or postdoc has crossed an ethical line? Faced with someone you judge to have talent and promise, someone you think could contribute to the scientific endeavor, someone whose behavior you are convinced was the result of a moment of bad judgment rather than evil intent or an irredeemably flawed character, what do you do?

Do you hand the matter over to university administrators or federal funders (who don’t know your trainee, might not recognize or value his or her promise, and might not be able to judge just how out of character this ethical misstep really was) and let them mete out punishment? Or do you try to address the transgression yourself, as a mentor, working through the actual circumstances of the ethical blunder, the better options your trainee should have recognized and pursued, and the kind of harm this bad decision could bring to the trainee and to other members of the scientific community?

Clearly, there are downsides to either of these options.

One problem with handling an ethical transgression privately is that it’s hard to be sure it has really been handled in a lasting way. Given the persistent patterns of escalating misbehavior that often come to light when big frauds are exposed, it’s hard not to wonder whether scientific mentors were aware of the earlier infractions, and perhaps even intervening in ways they hoped would be effective.

It’s the escalation of ethical violations over time that is concerning. Is such an escalation the result of a hands-off (and eyes-off) policy from mentors and collaborators? Could intervention earlier in the game have stopped the pattern of infractions and led the researcher to cultivate more honest habits of scientific behavior? Or is being caught by a mentor or collaborator who admonishes you privately and warns that he or she will keep an eye on you almost as good as getting away with it — an outcome with no real penalties and no paper trail that other members of the scientific community might access?

It’s even possible that some of these interventions might happen at an institutional level — the department or the university becomes aware of ethical violations and deals with them “internally” without involving “the authorities” (who, in such cases, are usually federal funding agencies). I dare say that the feds would be pretty unhappy about being kept out of the loop if the ethical violations in question occur in research supported by federal funding. But if the presumption is that getting the feds involved raises the available penalties to the draconian, it is understandable that departments and universities might want to try to address the ethical missteps while still protecting the investment they have made in a promising young researcher.

Of course, the rest of the scientific community has relevant interests here. These include an interest in being able to trust that other scientists present honest results to the community, whether in journal articles, conference presentations, grant applications, or private communications. Arguably, they also include an interest in having other members of the community expose dishonesty when they detect it. Managing an ethical infraction privately is problematic if it leaves the scientific community with misleading literature that isn’t corrected or retracted (for example).

It’s also problematic if it leaves someone with a habit of cheating in the community, presumed by all but a few of the community’s members to have a good record of integrity.

But I’m inclined to think that the impulse to deal with science’s youthful offenders privately is a response to the fear that handing them over to federal authorities has a high likelihood of ending their scientific careers forever: the fear that a first offense will be punished with the career equivalent of the death penalty.

As it happens, the administrative sanctions imposed by the Office of Research Integrity hardly ever amount to permanent removal. Findings of scientific misconduct are much more likely to be punished with exclusion from federal funding for three years, or five years, or ten years. Still, in an extremely competitive environment, with multitudes of scientists competing for scarce grant dollars and permanent jobs, even a three-year debarment may be enough to seriously derail a scientific career. The mentor making the call about whether to report a trainee’s unethical behavior may judge the likely fallout as enough to end the trainee’s career.

Permanent expulsion or a slap on the wrist is not much of a range of penalties. And, neither of these options really addresses the question of whether rehabilitation is possible and in the best interests of both the individual and the scientific community.

If no errors in judgment are tolerated, people will do anything to conceal such errors. Mentors who are trying to be humane may become accomplices in the concealment. The conversations about how to make better judgments may not happen because people worry that their hypothetical situations will be scrutinized for clues about actual screw-ups.

None of this is to say that ethical violations should be without serious consequences — they shouldn’t. But this need not preclude the possibility that people can learn from their mistakes. Violators may have to meet a heavy burden to demonstrate that they have learned from their mistakes. Indeed, it is possible they may never fully regain the trust of their fellow researchers (who may go forward reading their papers and grant proposals with heightened skepticism in light of their past wrongdoing).

However, it seems perverse for the scientific community to adopt a stance that rehabilitation is impossible when so many of its members seem motivated to avoid official channels for dealing with misconduct precisely because they feel rehabilitation is possible. If the official penalty structure denies the possibility of rehabilitation, those scientists who believe in rehabilitation will take matters into their own hands. To the extent that this may exacerbate the problem, it might be good if paths to rehabilitation were given more prominence in official responses to misconduct.

_____________
This post is an updated version of an ancestor post on my other blog.

Resistance to ethics instruction: considering the hypothesis that moral character is fixed.

This week I’ve been blogging about the resistance to required ethics coursework one sometimes sees in STEM* disciplines. Since one reason for this resistance is the hunch that you can’t teach a person to be ethical once they’re past a certain (pre-college) age, my previous post noted that there’s a sizable body of research that supports ethics instruction as an intervention to help people behave more ethically.

But, as I mentioned in that post, the intuition that one’s moral character is fixed by one’s twenties can be so strong that folks don’t always believe what the empirical research says about the question.

So, as a thought experiment, let’s entertain the hypothesis that, by your twenties, your moral character is fixed — that you’re either ethical or evil by then and there’s nothing further ethics instruction can do about it. If this were the case, how would we expect scientists to respond to other scientists or scientific trainees who behave unethically?

Presumably, scientists would want the unethical members of the tribe of science identified and removed, permanently. Under the fixed-character hypothesis, the removal would have to be permanent, because there would be every reason to expect the person who behaved unethically to behave unethically again.

If we took this seriously, that would mean every college student who ever cheated on a quiz or made up data for a lab report should be barred from entry to the scientific community, and that every grown-up scientist caught committing scientific misconduct — or any ethical lapse, even those falling well short of fabrication, falsification, or plagiarism — would be excommunicated from the tribe of science forever.

That just doesn’t happen. Even Office of Research Integrity findings of scientific misconduct don’t typically lead to lifetime debarment from federal research funding. Instead, they usually lead to administrative actions imposed for a finite duration, on the order of years, not decades.

And, I don’t think the failure to impose a policy of “one strike, you’re out” for those who behave unethically is because members of the tribe of science are being held back by some naïvely optimistic outside force (like the government, or the taxpaying public, or ethics professors). Nor is it because scientists believe it’s OK to lie, cheat, and steal in one’s scientific practice; there is general agreement that scientific misconduct damages the shared body of knowledge scientists are working to build.

When dealing with members of their community who have behaved unethically, scientists usually behave as if there is a meaningful difference between a first offense and a pattern of repeated offenses. This wouldn’t make sense if scientists were truly committed to the fixed-character hypothesis.

On the other hand, it fits pretty well with the hypothesis that people may be able to learn from their mistakes — to be rehabilitated rather than simply removed from the community.

There are surely some hard cases that the tribe of science views as utterly irredeemable, but graduate students or early-career scientists whose unethical behavior is caught early are treated by many as probably redeemable.

How to successfully rehabilitate a scientist who has behaved unethically is a tricky question, and not one scientists seem inclined to speak about much. Actions by universities, funding agencies, or governmental entities like the Office of Research Integrity are part of the punishment landscape, but punishment is not the same thing as rehabilitation. Meanwhile, it’s unclear whether individual actions to address wrongdoing are effective at heading off future unethical behavior.

If it takes a village to raise a scientist, it may take concerted efforts at the level of scientific communities to rehabilitate scientists who have strayed from the path of ethical practice. We’ll discuss some of the challenges with that in the next post.

______
*STEM stands for science, technology, engineering, and mathematics.

Resistance to ethics instruction: the intuition that ethics cannot be taught.

In my last post, I suggested that required ethics coursework (especially for students in STEM* disciplines) is met with a specific sort of resistance. I also surmised that part of this resistance is the idea that ethics can’t be taught in any useful way, “the idea that being ethical is somehow innate, a mere matter of not being evil.”

In a comment on that post, ThomasB nicely illustrates that particular strain of resistance:

Certainly scientists, like everyone else in our society, must behave ethically. But what makes this a college-level class? From the description, it covers the basic do not lie-cheat-steal along with some anti-bullying and possibly a reminder to cite one’s references. All of which should have been instilled long before college.

So what is there to teach at this point? The only thing I can think of specific to science is the “publish or perish” pressure to keep the research dollars flowing in. Or possibly the psychological studies showing that highly intelligent and creative people are more inclined to be dishonest than ordinary people. Possibly because they are better at rationalizing doing what they want to do. Which is why I used the word “instilled” earlier: it seems to me that ethics comes more from the emotional centers of the brain than the conscious analytical part. As soon as we start consciously thinking about ethics, they seem to go out the window. Such as the study from one of the Ivy League schools where the students did worse at the ethics test at the end of the class than at the beginning.

So I guess the bottom line is whether the science shows that ethics classes at this point in a person’s life actually show an improvement in the person’s behavior. As Far as I know, there has been no such study done.

(Bold emphasis added.)

I think it’s reasonable to ask, before requiring an intervention (like ethics coursework), what we know about whether this sort of intervention is likely to work. I think it’s less reasonable to assume it won’t work without consulting the research on the matter.

As it happens, there has been a great deal of research on whether ethics instruction is an intervention that helps people behave more ethically — and the bulk of it shows that well-designed ethics instruction is an effective intervention.

Here’s what Bebeau et al. (1995) have to say about the question:

When people are given an opportunity to reflect on decisions and choices, they can and do change their minds about what they ought to do and how they wish to conduct their personal and professional lives. This is not to say that any instruction will be effective, or that all manner of ethical behavior can be developed with well-developed ethics instruction. But it is to say — and there is considerable evidence to show it — that ethics instruction can influence the thinking processes that relate to behavior. …

We do not claim that radical changes are likely to take place in the classroom or that sociopaths can be transformed into saints via case discussion. But we do claim that significant improvements can be made in reasoning about complex problems and that the effort is worthwhile. We are not alone in this belief: the National Institutes of Health, the National Science Foundation, the American Association for the Advancement of Science, and the Council of Biology Editors, among others, have called for increased attention to training in the responsible conduct of scientific research. Further, our belief is buttressed by empirical evidence from moral psychology. In Garrod (1993), James R. Rest summarizes the “several thousand” published studies on moral judgment and draws the following conclusions:

  • development of competence in ethical problem-solving continues well into adulthood (people show dramatic changes in their twenties, as in earlier years);
  • such changes reflect profound reconceptualization of moral issues;
  • formal education promotes ethical reasoning;
  • deliberate attempts to develop moral reasoning … can be demonstrated to be effective; and
  • studies link moral reasoning to moral behavior.

So, there’s a body of research that supports ethics instruction as an intervention to help people behave more ethically.

Indeed, part of how ethics instruction helps is by getting students to engage analytically, not just emotionally. I would argue that making ethical decisions involves moving beyond gut feelings and instincts. It means understanding how your decisions impact others, and considering the ways your interests and theirs intersect. It means thinking through possible impacts of the various choices available to you. It means understanding the obligations set up by our relations to others in personal and professional contexts.

And a methodology for approaching ethical decision-making can be taught. Practice in making ethical decisions makes it easier to make better decisions. And making these decisions in conversation with other people who may have different perspectives (rather than just following a gut feeling) forces us to work out our reasons for preferring one course of action to the alternatives. These reasons are not just something we can offer to others to defend what we did; they are also things we can consider when deciding what to do in the first place.

As always, I reckon that there are some people who will remain unmoved by the research that shows the efficacy of ethics instruction, preferring to cling to their strong intuition that college-aged humans are past the point where an intervention like an ethics class could make any impact on their ethical behavior. But if that’s an intuition that ought to guide us — if, by your twenties, you’re either a good egg or irredeemably corrupt — it’s not clear that our individual or institutional responses to unethical behavior by scientists make any sense.

That’s the subject I’ll take up in my next post.

______
*STEM stands for science, technology, engineering, and mathematics.

______
Bebeau, M. J., Pimple, K. D., Muskavitch, K. M., Borden, S. L., & Smith, D. H. (1995). Moral reasoning in scientific research: Cases for teaching and assessment. Bloomington, IN: Poynter Center for the Study of Ethics and American Institutions, Indiana University.

Garrod, A. (Ed.). (1993). Approaches to moral development: New research and emerging themes. New York: Teachers College Press.

Resistance to ethics is different from resistance to other required courses.

For academic types like myself, the end of the semester can be a weird juxtaposition of projects that are ending and new projects that are on the horizon, a juxtaposition that can be an opportunity for reflection.

I’ve just seen another offering of my “Ethics in Science” course to a (mostly successful) conclusion. Despite the fact that the class was huge (more than 100 students) for a course that is heavy on discussion, its students were significantly more active and engaged than those in the much smaller class I taught right after it. The students thought hard and well, and regularly floored me with their razor-sharp insights. All the evidence suggests that these students were pretty into it.

Meanwhile, I’m getting set for a new project that will involve developing ethics units for required courses offered in another college at my university — and one of the things I’ve been told is that the students required to take these courses (as well as some non-zero number of the professors in their disciplines) are very resistant to the inclusion of ethics coursework in courses otherwise focused on their major subjects.

I find this resistance interesting, especially given that the majority of the students in my “Ethics in Science” class were taking it because it was required for their majors.

I recognize that part of what’s going on may be a blanket resistance to required courses. Requirements can feel like an attack on one’s autonomy and individuality — rather than being able to choose what you will study, you’re told what you must study to major in a particular subject or to earn a degree from a particular university. A course that a student might have been open to enjoying were it freely chosen can become a loathed burden merely by virtue of being required. I’ve seen the effect often enough that it no longer surprises me.

However, requirements aren’t usually imposed solely to constrain students’ autonomy. There’s almost always a reason that a particular course, subject matter, or problem-solving area is being required. The students may not know that reason (or judge it to be a compelling reason if they do know it), but that doesn’t mean that there’s not a reason.

In some ways, ethics is really not much different here from other major requirements or subject matter that students bemoan, including calculus, thermodynamics, writing in the major, and significant figures. On the other hand, the moaning about some of those other requirements tends to take the form of “When am I ever going to use that?”

I don’t believe I’ve ever heard a science or engineering student say, “When am I ever going to use ethics?”

In other words, they generally accept that they should be ethical, but they also sometimes voice resistance to the idea that a course (or workshop, or online training module) about how to be ethical will be anything but a massive waste of their time.

My sense is that at least part of what’s going on here is that scientists and engineers and their ilk feel like ethics are being imposed on them from without, by university administrators or funding agencies or accrediting organizations. Worse, the people exhorting scientists, engineers, et alia to take ethics seriously often seem to take a finger-wagging approach. And this, I suspect, makes it harder to get what those business types call “buy-in” from the scientists.

The typical story I’ve heard about ethics sessions in industry (and some university settings) goes something like this:

You get a big packet with the regulations you have to follow — to get your protocols approved by the IRB and/or the IACUC, to disclose potential conflicts of interest, to protect the company’s or university’s patent rights, to fill out the appropriate paperwork for hazardous waste disposal, etc., etc. You are admonished against committing the “big three” of falsification, fabrication, and plagiarism. Sometimes, you are also admonished against sexually harassing those with whom you are working. The whole thing has the feel of being driven by the legal department’s concerns: for goodness sake, don’t do anything that will embarrass the organization or get us into hot water with regulators or funders!


Listening to the litany of things you ought not to do, it’s really easy to think: Very bad people do things like this. But I’m not a very bad person. So I can tune this out, and I can kind of ignore ethics.


The decision to tune out ethics is enabled by the fact that the people wagging the fingers at the scientists are generally outsiders (from the legal department, or the philosophy department, or wherever). These outsiders are coming in telling us how to do our jobs! And, the upshot of what they’re telling us seems to be “Don’t be evil,” and we’re not evil! Besides, these outsiders clearly don’t care about (let alone understand) the science so much as avoiding scandals or legal problems. And they don’t really trust us not to be evil.


So just nod earnestly and let’s get this over with.

One hurdle here is the need to get past the idea that being ethical is somehow innate, a mere matter of not being evil, rather than a problem-solving practice that gets better with concrete strategies and repeated use. Another hurdle is the feeling that ethics instruction is the result of meddling by outsiders.


If ethics is seen as something imposed upon scientists by a group from the outside — one that neither understands science, nor values it, nor trusts that scientists are generally not evil — then scientists will resist ethics. To get “buy-in,” scientists need to see how ethics is intimately connected to the job they’re trying to get done. In other words, scientists need to understand how ethical conduct is essential to the project of doing science. Once scientists make that connection, they will be ethical — not because someone else is telling them to be ethical, but because being ethical is required to make progress on the job of building scientific knowledge.
_____________
This post is an updated version of an ancestor post on my other blog, and was prompted by the Virtually Speaking Science discussion of philosophy in and of science scheduled for Wednesday, May 28, 2014 (starting 8 PM EDT/8 PM PDT). Watch the hashtags #VSpeak and #AskVS for more details.