The moral thermostat and the problem of cultivating ethical scientists.

Earlier this week, Ed Yong posted an interesting discussion about psychological research that suggests people have a moral thermostat, keeping them from behaving too badly — or too well:

Through three psychological experiments, Sonya Sachdeva from Northwestern University found that people who are primed to think well of themselves behave less altruistically than those whose moral identity is threatened. They donate less to charity and they become less likely to make decisions for the good of the environment.
Sachdeva suggests that the choice to behave morally is a balancing act between the desire to do good and the costs of doing so – be they time, effort or (in the case of giving to charities) actual financial costs. The point at which these balance is set by our own sense of self-worth. Tip the scales by threatening our saintly personas and we become more likely to behave selflessly to cleanse our tarnished perception. Do the opposite, and our bolstered moral identity slackens our commitment, giving us a license to act immorally. Having established our persona as a do-gooder, we feel less impetus to bear the costs of future moral actions.

I want to pause here to interject the observation that not all ethical acts are acts of altruism. Altruism is sacrificing one’s own interests for the interests of others. And, it’s quite possible to have your interests aligned well with the ethical thing to do.
There are, of course, moral philosophers (I’m looking at you, Kant) who will express some skepticism as to whether your act is a truly moral one in cases where your interests align too neatly with your actions. This is the crowd for whom your motive, rather than the impact of your action, determines the moral worth of your act. This crowd will tell you that you ought to be motivated by duty, regardless of what your other interests might be. In other words, an action that happens also to go in the same direction as your interests can still be a moral one, so long as what motivated you to perform the act was not those interests, but rather duty — so that you would have been motivated to perform the action even if your interests had happened to pull the other way.
Anyway, some moral acts involve self-sacrifice, but it’s not the case that every moral act must.
Sachdeva’s experiments (which Ed describes in his post) suggest a human tendency to rest on one’s moral laurels. Moreover, these results seem consistent with what other researchers report:

Sachdeva also cites several studies which have found that ethical behaviour provides a license for laxer morality. People who can establish their identity as a non-prejudiced person, by contradicting sexist statements or hiring someone from an ethnic minority, become more likely to make prejudiced choices later.

The general attitude that emerges from these studies seems to be that being good is a chore (since it requires effort and sometimes expenditure), but that it’s a chore that stays done longer than dishes, laundry, or those other grinding but necessary labors that we understand need attention on a regular basis.
As someone who thinks a lot about the place of ethics in the everyday interactions of scientists, I have, as you can imagine, some thoughts about this attitude.
Sadly, a track record of being ethical isn’t sufficient in a world where your fellow scientist is relying on you to honestly report the results of your current study, to refrain from screwing over the author of the manuscript you are currently reviewing, and to make decisions that are not swayed by prejudice on the hiring committee on which you currently serve. But Sachdeva’s experiments raise the possibility that your awareness of your past awesomeness, ethically speaking, could undercut your future ethical performance.
How on earth can people maintain the ethical behaviors we hope they will exercise?
As Ed notes, the research does not rule out the possibility that mere mortals could stay the ethical course. It's just a question of where consistently ethical people are setting their moral thermostats:

Sachdeva is also interested in the types of situations where people seem to break free of this self-regulating loop of morality, and where good behaviour clearly begets more good behaviour. For example, many social or political activists drop out of their causes after some cursory participation, but others seem to draw even greater fervour. Why?
Sachdeva has two explanations. The first deals with habits – many selfless actions become more routine with time (recycling, for one). As this happens, the effort involved lessens, the “costs” seem smaller, and the potential for moral licensing fades. The second explanation relates to the standards that people set for themselves. Those who satisfy their moral goals award themselves with a license to disengage more easily, but those who hold themselves to loftier standards are more likely to stay the course.

Within the community of science, there are plenty of habits scientists cultivate, some conscious and some unconscious. From the point of view of fostering more ethical behavior, it seems reasonable to say that cultivating a habit of honesty is a good thing — giving fair and accurate reports ought to be routine, rather than something that requires a great deal of conscious effort. Cultivating a habit of fairness (in evaluating the ideas and findings of others, in distributing the goods needed to do science, etc.) might also be worthwhile. The point is not to get scientists to display extraordinarily saintly behavior, but to make honesty and fairness a standard part of how scientists roll.
Then there’s the strategy of setting lofty goals. The scientific community shares a commitment to objectivity, something that involves both individual effort and coordination of the community of scientists. Objectivity is something that is never achieved perfectly, only by degrees. This sets the bar high enough that scientists’ frailties are always pretty evident, which may reduce the potential for backsliding.
At the same time, objectivity is so tightly linked with the scientific goal of building a reliable body of knowledge about the world that it’s unlikely that this lofty goal will be jettisoned simply because it’s hard to achieve.
I don’t think we can overlook the danger in latching onto goals that reveal themselves to be impossible or nearly so. Such goals won’t motivate action — or if they do, they will motivate actions like cheating as the most rational response to a rigged game. Indeed, situations in which the individual feels like her own success might require going against the interests of the community make me think that it’s vitally important for individual interests and community interests to be aligned with each other. If what is good for the community of scientists is also good for the individual scientist trying to participate in the scientific discourse and to make a contribution to the shared body of knowledge, then being good feels a lot less like altruism. And, tuning up the institutional contexts in which science is practiced to reduce the real career costs of honesty and cooperation might be the kind of thing that would lead to better behavior, too.

