Getting scientists to take ethics seriously: strategies that are probably doomed to failure.

As part of my day-job as a philosophy professor, I regularly teach a semester-long “Ethics in Science” course at my university. Among other things, the course is intended to help science majors figure out why being ethical might matter to them if they continue on their path to becoming working scientists and devote their careers to the knowledge-building biz.

And, there’s a reasonable chance that my “Ethics in Science” course wouldn’t exist but for the strings attached to federal training grants, which require that students funded by those grants receive ethics training.

The funding agencies demand the ethics training component largely in response to high-profile cases of federally funded scientists behaving badly on the public’s dime. The bad behavior suggests that some working scientists don’t take ethics seriously. The funders identify this as a problem and want their grantees to take ethics seriously. But the big question is how to get scientists to take ethics seriously.

Here are some approaches to that problem that strike me as unpromising:

  • Delivering ethical instruction that amounts to “don’t be evil” or “don’t commit this obviously wrong act”. Most scientists are not mustache-twirling villains, and few are so ignorant that they wouldn’t know that the obviously wrong acts are obviously wrong. If ethical training is delivered with the subtext of “you’re evil” or “you’re dumb,” most of the scientists to whom you’re delivering it will tune it out, since you’re clearly talking to someone else.
  • Reducing ethics to a laundry list of “thou shalt not …” Ethics is not simply a matter of avoiding bad acts — and the bad acts are not bad simply because federal regulations or your compliance officer say so. A significant component of ethics is concerned with positive action — doing good things. Presenting ethics as a set of results rather than a process — as a list of things the ethics algorithm says you shouldn’t do, rather than a set of strategies for evaluating the goodness of various courses of action you might pursue — is not very engaging. Besides, you can’t even count on this approach for good results, since refraining from the actions that are expressly forbidden is no guarantee you won’t find some not-expressly-forbidden action that’s equally bad.
  • Presenting ethics as something you have to talk about because the funders require it. If you treat the ethics-talk as just a string attached to your grant money, something you wouldn’t otherwise spend time on, you’re identifying attention to ethics as a thing that gets in the way of research rather than as something that supports it. Once you’ve fulfilled the requirement to have the ethics-talk, would you ever revisit ethics, or would you just get down to the business of research?
  • Segregating attention to ethics in a workshop, class, or training session. Is ethics something you can “do” in its entirety in a few hours, or even a whole semester? That’s the impression scientific trainees can get from an ethics training requirement that floats free of any discussion with the people training them about how to be a successful scientist. Once you’re done with your training, then, you’re done — why think about ethics again?
  • Pointing trainees to a professional code, the existence of which is supposed to prove that your scientific discipline takes ethics seriously. A professional code suggests that someone in your discipline sat down and tried to spell out ethical standards that would support your scientific activities, but the mere existence of a code doesn’t mean the members of your scientific community know what’s in it, let alone that they behave in ways that reflect its commitments. Walking the walk is different from talking the talk — and knowing that there is a code, somewhere on your professional society’s website, that you could find if you Googled it probably doesn’t even rise to the level of talking the talk.
  • Delivering ethics training with the accompanying message that scientists who aren’t willing to cut ethical corners are at a competitive career disadvantage, and that this is just how things are. Essentially, this tells trainees, “Here’s how you should behave … unless you’re really up against it, at which point you should be smart and drop the ethics to survive in this field.” And what motivated trainee doesn’t recognize that she’s always up against it? It is important, I think, to recognize that unethical behavior is often motivated at least in part by a perception of extreme career pressures rather than by the inherent evil of the scientist engaging in it. But noting the competitive advantage available to cheaters, only to throw up your hands and say, “Eh, what are you going to do?”, strikes me as shrugging off responsibility. At a minimum, members of a scientific community ought to reflect upon and discuss whether their structures of career rewards and punishments incentivize bad behavior. If they do, the community probably has a responsibility to try to change those structures.

Laying out approaches to ethics training that won’t help scientists take ethics seriously might help a trainer avoid some pitfalls, but it’s not the same as spelling out approaches that are more likely to work. That’s a topic I’ll take up in a post to come.
