Heroes, human “foibles”, and science outreach.

“Science is a way of trying not to fool yourself. The first principle is that you must not fool yourself, and you are the easiest person to fool.”

— Richard Feynman

There is a tendency sometimes to treat human beings as if they were resultant vectors arrived at by adding lots and lots of particular vectors together, an urge to try to work out whether someone’s overall contribution to their field (or to the world) was a net positive.

Unless you have the responsibility for actually putting the human being in question into the system to create good or bad effects (and I don’t kid myself that my readership is that omnipotent), I think treating human beings like resultant vectors is not a great idea.

For one thing, in focusing on the net effect, one tends to overlook that people are complicated. You end up in a situation where you might use those overall tallies to sort people into good and evil rather than noticing how in particular circumstances good and bad may turn on a decision or two.

This can also create an unconscious tendency to put a thumb on the scale when the person whose impact you’re evaluating is someone about whom you have strong feelings, whether they’re a hero to you or a villain. As a result, you may end up completely ignoring the experiences of others, or noticing them but treating them as insignificant, when a better course of action may be to recognize that it’s entirely possible that people who had a positive impact on you had a negative impact on others (and vice versa).

Science is sometimes cast as a pursuit in which people can, by participating in a logical methodology, transcend their human frailties, at least insofar as these frailties constrain our ability to get objective knowledge of the world. On that basis, you’ll hear the claim that we really ought to separate the scientific contributions of an individual from their behaviors and interactions with others. In other words, we should focus on what they did when they were being a scientist rather than on the rest of the (incidental) stuff they did while they were being a human.

This distinction rests on a problematic dichotomy between being a scientist and being a human. Because scientific knowledge is built not just through observations and experiments but also through human interactions, drawing a clear line between human behavior and scientific contributions is harder than it might at first appear.

Consider a scientist who has devised, conducted, and reported the results of many important experiments. If it turns out that some of those experimental results were faked, what do you want to say about his scientific legacy? Can you be confident in his other results? If so, on what basis can you be confident?

The coordinated effort to build a reliable body of knowledge about the world depends on a baseline level of trust between scientists. Without that trust, you are left having to take on the entire project yourself, and that seriously diminishes the chances that the knowledge you’re building will be objective.

What about behaviors that don’t involve putting misinformation into the scientific record? Are those the kinds of things we can separate from someone’s scientific contributions?

Here, the answer will depend a lot on the particulars of those behaviors. Are we talking about a scientist who dresses his dogs in ugly sweaters, or one who plays REO Speedwagon albums at maximum volume while drafting journal articles? Such peculiarities might come up in anecdotes but they probably won’t impact the credibility of one’s science. Do we have a scientist who is regularly cruel to his graduate student trainees, or who spreads malicious rumors about his scientific colleagues? That kind of behavior has the potential to damage the networks of trust and cooperation upon which the scientific knowledge-building endeavor depends, which means it probably can’t be dismissed as a mere “foible”.

What about someone who is scrupulously honest about his scientific contributions but whose behavior towards women or members of underrepresented minorities demonstrates that he does not regard them as being as capable, as smart, or as worthy of respect? What if, moreover, most of these behaviors are displayed outside of scientific contexts (owing to the general lack of women or members of underrepresented minorities in the scientific contexts this scientist encounters)? Intended or not, such attitudes and behaviors can have the effect of excluding people from the scientific community. Even if you think you’re actively working to improve outreach/inclusion, your regular treatment of people you’re trying to help as “less than” can have the effect of exclusion. It also sets a tone within your community where it’s predictable that simply having more women and members of underrepresented minorities there won’t result in their full participation, whether because you and your likeminded colleagues are disinclined to waste your time interacting with them or because they get burnt out interacting with people like you who treat them as “less than”.

This last description of a hypothetical scientist is not too far from famous physicist Richard Feynman, something that we know not just from the testimony of his contemporaries but from Feynman’s own accounts. As it happens, Feynman is enough of a hero to scientists and people who do science outreach that many seem compelled to insist that the net effect of his legacy is positive. Ironically, the efforts to paint Feynman as a net-good guy can inflict harms similar to the behavior Feynman’s defenders seem to minimize.

In an excellent, nuanced post on Feynman, Matthew Francis writes:

Richard Feynman casts the longest shadow in the collective psyche of modern physicists. He plays nearly the same role within the community that Einstein does in the world beyond science: the Physicist’s Physicist, someone almost as important as a symbol as he was as a researcher. Many of our professors in school told Feynman stories, and many of us acquired copies of his lecture notes in physics. …

Feynman was a pioneer of quantum field theory, one of a small group of researchers who worked out quantum electrodynamics (QED): the theory governing the behavior of light, matter, and their interactions. QED shows up everywhere from the spectrum of atoms to the collisions of electrons inside particle accelerators, but Feynman’s calculation techniques proved useful well beyond the particular theory.

Not only that, his explanations of quantum physics were deep and cogent, in a field where clarity can be hard to come by. …

Feynman stories that get passed around physics departments aren’t usually about science, though. They’re about his safecracking, his antics, his refusal to wear neckties, his bongos, his rejection of authority, his sexual predation on vulnerable women.

The predation in question here included actively targeting female students as sex partners, a behavior that rather conveys that you don’t view them primarily in terms of their potential to contribute to science.

While it is true that much of what we know about Richard Feynman’s behavior is the result of Feynman telling stories about himself, these stories really don’t seem to indicate awareness of the harmful impacts his behavior might have had on others. Moreover, Feynman’s tone in telling these stories suggests he assumed an audience that would be taken with his cleverness, including his positioning of women (and his ability to get into their pants) as a problem to be solved scientifically.

Apparently these are not behaviors that prevented Feynman from making significant contributions to physics. However, it’s not at all clear that these are behaviors that did no harm to the scientific community.

One take-home message of all this is that making positive contributions to science doesn’t magically cancel out harmful things you may do — including things that may have the effect of harming other scientists or the cooperative knowledge-building effort in which they’re engaged. If you’re a living scientist, this means you should endeavor not to do harm, regardless of what kinds of positive contributions you’ve amassed so far.

Another take-home message here is that it is dangerous to rest your scientific outreach efforts on scientific heroes.

If the gist of your outreach is: “Science is cool! Here’s a cool guy who made cool contributions to science!” and it turns out that your “cool guy” actually displayed some pretty awful behavior (sexist, racist, whatever), you probably shouldn’t put yourself in a position where your message comes across as:

  • These scientific contributions were worth the harm done by his behavior (including the harm it may have done in unfairly excluding people from full participation in science).
  • He may have been sexist or racist, but that was no big deal because people in his time, place and culture were pretty sexist (as if that removes the harm done by the behavior).
  • He did some things that weren’t sexist or racist, so that cancels out the things he did that were sexist or racist. Maybe he worked hard to help a sister or a daughter participate in science; how can we then say that his behavior hurt women’s inclusion in science?
  • His sexism or racism was no big deal because it seems to have been connected to a traumatic event (e.g., his wife died, he had a bad experience with a black person once), or because the problematic behavior seems to have been his way of “blowing off steam” during a period of scientific productivity.

You may be intending to convey the message that this was an interesting guy who made some important contributions to science, but the message that people may take away is that great scientific achievement totally outweighs sexism, racism, and other petty problems.

But people aren’t actually resultant vectors. If you’re a target of the racism, sexism, and other petty problems, you may not feel like they should be overlooked or forgiven on the strength of the scientific achievement.

Science outreach doesn’t just deliver messages about what science knows or about the processes by which that knowledge is built. Science outreach also delivers messages about what kind of people scientists are (and about what kinds of people can be scientists).

There is a special danger lurking here if you are doing science outreach by using a hero like Feynman and you are not a member of a group likely to have been hurt by his behavior. You may believe that the net effect of his story casts science and scientists in a light that will draw people in, but it’s possible you are fooling yourself.

Maybe you aren’t the kind of person whose opinion about science or eagerness to participate in science would be influenced by the character flaws of the “scientific heroes” on offer, but if you’re already interested in science perhaps you’re not the main target for outreach efforts. And if members of the groups who are targeted for outreach tell you that they find these “scientific heroes” and the glorification of them by science fans alienating, perhaps listening to them would help you to devise more effective outreach strategies.

Building more objective knowledge about the world requires input from others. Why should we think that ignoring such input — especially from the kind of people you’re trying to reach — would lead to better science outreach?

On the value of empathy, not othering.

Could seeing the world through the eyes of the scientist who behaves unethically be a valuable tool for those trying to behave ethically?

Last semester, I asked my “Ethics in Science” students to review an online ethics training module of the sort that many institutions use to address responsible conduct of research with their students and employees. Many of my students elected to review the Office of Research Integrity’s interactive movie The Lab, which takes you through a “choose your own adventure” scenario in an academic lab as one of four characters (a graduate student, a postdoc, the principal investigator, or the institution’s research integrity officer). The scenario centers on research misconduct by another member of the lab, and your goal is to do what you can to address the problems — and to avoid being drawn into committing misconduct yourself.

By and large, my students reported that “The Lab” was a worthwhile activity. As part of the assignment, I asked them to suggest changes, and a number of them made what I thought was a striking suggestion: players should have the option to play the character who commits the misconduct.

I can imagine some eminently sensible reasons why the team that produced “The Lab” didn’t include the cheater as a playable character. For instance, if the scenario started before the decision to cheat and the user playing this character picked the options that amount to not cheating, you would end up with a story that lacks almost all of the drama. Similarly, if you pick up with that character in the immediate aftermath of the instance of cheating and go with the “come clean/don’t dig a deeper hole” options, the story ends pretty quickly.

Setting the need for dramatic tension aside, I suspect that another reason that “The Lab” doesn’t include the cheater as a playable character is that people who are undergoing research ethics training are supposed to think of themselves as people who would not cheat. Rather, they’re supposed to think of themselves as ethical folks who would resist temptation and stand up to cheating when others do it. These training exercises bring out some of the particular challenges that might be associated with making good ethical decisions (many of them connected to seeing a bit further down the causal chain to anticipate the likely consequences of your choices), but they tend to position the cheater as just part of the environment to which the ethical researcher must respond.

I think this is a mistake. I think there may be something valuable in being able to view those who commit misconduct as more than mere antagonists or monsters.

Part of what makes “The Lab” a useful exercise is that it presents situations with a number of choices available to us, some easier and some harder, some likely to lead to interactions that are more honest and fair and others more likely to lead to problems. In real life, though, we don’t usually have the option of rewinding time and choosing a different option if our first choice goes badly. Nor do we have assurance that we’ll end up being the good guys.

It’s important to understand the temptations that the cheaters felt — the circumstances that made their unethical behaviors seem expedient, or rational, or necessary. Casting cheaters as monsters glosses over our own human vulnerability to these bad choices, which will surely make the temptations harder to handle when we encounter them. Moreover, understanding the cheaters as humans (just like the scientists who haven’t cheated) rather than “other” in some fundamental way lets us examine those temptations and then collectively create working environments with fewer of them. Though it’s part of a different discussion, Ashe Dryden describes the dangers of “othering” here quite well:

There is no critical discussion about what leads to these incidents — what parts of our culture allow these things to go unchecked for so long, how pervasive they are, and how so much of this is rewarded directly or indirectly. …

It’s important to notice what is happening here: by declaring that the people doing these things are others, it removes the need to examine our own actions. The logic assumed is that only bad people do these things and we aren’t bad people, so we couldn’t do something like this. Othering effectively absolves ourselves of any blame.

The dramatic arc of “The Lab” is definitely not centered on the cheater’s redemption, nor on cultivating empathy for him, and in the context of the particular training it offers, that’s fine. Sometimes one’s first priority is protecting or repairing the integrity of the scientific record, or ensuring a well-functioning scientific community by isolating a member who has proven himself untrustworthy.

But, that member of the community who we’re isolating, or rehabilitating, is connected to the community — connected to us — in complicated ways. Misconduct doesn’t just happen, but neither is it the case that, when someone commits it, it’s just a matter of the choices and actions of an individual in a vacuum.

The community is participating in creating the environment in which people commit misconduct. Trying to understand the ways in which behaviors, expectations, formal and informal reward systems, and the like can encourage big ethical transgressions or desensitize people to “little” lapses may be a crucial step to creating an environment where fewer people commit misconduct, whether because the cost of doing so is too high or the payoff for doing so (if you get away with it) is too low.

But seeing members of the community as connected in this way requires not seeing the research environment as static and unchangeable — and not seeing those in the community who commit misconduct as fundamentally different creatures from those who do not.

All of this makes me think that part of the voluntary exclusion deals between people who have committed misconduct and the ORI should be an allocution, in which the wrongdoer spells out the precise circumstances of the misconduct, including the pressures in the foreground when the wrongdoer chose the unethical course. This would not be an excuse but an explanation, a post-mortem of the misconduct available to the community for inspection and instruction. Ideally, others might recognize familiar situations in the allocution and then consider how close their own behavior in such situations has come to crossing ethical lines, as well as what factors seemed to help them avoid crossing those lines. As well, researchers could think together about what gives rise to the situations and the temptations within them and explore whether common practices can be tweaked to remove some of the temptations while supporting knowledge-building and knowledge builders.

Casting cheaters as monsters doesn’t do much to help people make good choices in the face of difficult circumstances. Ignoring the ways we contribute to creating those circumstances doesn’t help, either — and may even increase the risk that we’ll become like the “monsters” we decry.

Conduct of scientists (and science writers) can shape the public’s view of science.

Scientists undertake a peculiar kind of project. In striving to build objective knowledge about the world, they are tacitly recognizing that our unreflective picture of the world is likely to be riddled with mistakes and distortions. On the other hand, they frequently come to regard themselves as better thinkers — as more reliably objective — than humans who are not scientists, and end up forgetting that they have biases and blindspots of their own which they are helpless to detect without help from others who don’t share these particular biases and blindspots.

Building reliable knowledge about the world requires good methodology, teamwork, and concerted efforts to ensure that the “knowledge” you build doesn’t simply reify preexisting individual and cultural biases. It’s hard work, but it’s important to do it well — especially given a long history of “scientific” findings being used to justify and enforce preexisting cultural biases.

I think this bigger picture is especially appropriate to keep in mind in reading the response from Scientific American Blogs Editor Curtis Brainard to criticisms of a pair of problematic posts on the Scientific American Blog Network. Brainard writes:

The posts provoked accusations on social media that Scientific American was promoting sexism, racism and genetic determinism. While we believe that such charges are excessive, we share readers’ concerns. Although we expect our bloggers to cover controversial topics from time to time, we also recognize that sensitive issues require extra care, and that did not happen here. The author and I have discussed the shortcomings of the two posts in detail, including the lack of attention given to countervailing arguments and evidence, and he understood the deficiencies.

As stated at the top of every post, Scientific American does not always share the views and opinions expressed by our bloggers, just as our writers do not always share our editorial positions. At the same time, we realize our network’s bloggers carry the Scientific American imprimatur and that we have a responsibility to ensure that—differences of opinion notwithstanding—their work meets our standards for accuracy, integrity, transparency, sensitivity and other attributes.

(Bold emphasis added.)

The problem here isn’t that the posts in question advocated sound scientific views with implications that people on social media didn’t like. Rather, the posts presented claims in a way that made them look like they had much stronger scientific support than they really do — and did so in the face of ample published scientific counterarguments. Scientific American is not requiring that posts on its blog network meet a political litmus test, but rather that they embody the same kind of care, responsibility to the available facts, and intellectual honesty that science itself should display.

This is hard work, but it’s important. And engaging seriously with criticism, rather than just dismissing it, can help us do it better.

There’s an irony in the fact that one of the problematic posts ignored some significant relevant scientific literature (helpfully cited by commenters in the comments section of that very post) in the service of defending Larry Summers and his remarks on possible innate biological causes that make men better at math and science than women. The irony lies in the fact that Summers himself displayed an apparently ironclad commitment to ignoring any and all empirical findings that might challenge his intuition that there’s something innate at the heart of the gender imbalance in math and science faculty.

Back in January of 2005, Larry Summers gave a speech at a conference about what can be done to attract more women to the study of math and science, and to keep them in the field long enough to become full professors. In his talk, Summers suggested as a possible hypothesis for the relatively low number of women in math and science careers that there may be innate biological factors that make males better at math and science than females. (He also related an anecdote about his daughter naming her toy trucks as if they were dolls, but it’s fair to say he probably meant this anecdote to be illustrative rather than evidentiary.)

The talk did not go over well with the rest of the participants in the conference.

Several scientific studies were presented at the conference before Summers made his speech. All these studies presented significant evidence against the claim of an innate difference between males and females that could account for the “science gap”.

In the aftermath of this conference of yore, there were some commenters who lauded Summers for voicing “unpopular truths” and others who distanced themselves from his claims but said they supported his right to make them as an exercise of “academic freedom.”

But if Summers was representing himself as a scientist* when he made his speech, I don’t think the “academic freedom” defense works.

Summers is free to state hypotheses — even unpopular hypotheses — that might account for a particular phenomenon. But, as a scientist, he is also responsible to take account of data relevant to his hypotheses. If the data weighs against his preferred hypothesis, intellectual honesty requires that he at least acknowledge this fact. Some would argue that it could even require that he abandon his hypothesis (since science is supposed to be evidence-based whenever possible).

When news of Summers’ speech, and reactions to it, was fresh, one of the details that stuck with me was that one of the conference organizers pointed out to Summers, after he gave his speech, that there was a large body of evidence — some of it presented at that very conference — that seemed to undermine his hypothesis. Summers’ reply amounted to, “Well, I don’t find those studies convincing.”

Was Summers within his rights to not believe these studies? Sure. But, he had a responsibility to explain why he rejected them. As a part of a scientific community, he can’t just reject a piece of scientific knowledge out of hand. Doing so comes awfully close to undermining the process of communication that scientific knowledge is based upon. You aren’t supposed to reject a study because you have more prestige than the authors of the study (so, you don’t have to care what they say). You can question the experimental design, you can question the data analysis, you can challenge the conclusions drawn, but you have to be able to articulate the precise objection. Surely, rejecting a study just because it doesn’t fit with your preferred hypothesis is not an intellectually honest move.

By my reckoning, Summers did not conduct himself as a responsible scientist in this incident. But I’d argue that the problem went beyond a lack of intellectual honesty within the universe of scientific discourse. Summers is also responsible for the bad consequences that flowed from his remark.

The bad consequence I have in mind here is the mistaken view of science and its workings that Summers’ conduct conveys. Especially by falling back on a plain vanilla “academic freedom” defense here, defenders of Summers conveyed to the public at large the idea that any hypothesis in science is as good as any other. Scientists who are conscious of the evidence-based nature of their field will see the absurdity of this idea — some hypotheses are better, others worse, and whenever possible we turn to the evidence to make these discriminations. Summers compounded ignorance of the relevant data with what came across as a statement that he didn’t care what the data showed. From this, the public at large could assume he was within his scientific rights to decide which data to care about without giving any justification for this choice**, or they could infer that data has little bearing on the scientific picture of the world.

Clearly, such a picture of science would undermine the standing of the rest of the bits of knowledge produced by scientists far more intellectually honest than Summers.

Indeed, we might go further here. Not only did Summers have some responsibilities that seemed to have escaped him while he was speaking as a scientist, but we could argue that the rest of the scientists (whether at the conference or elsewhere) have a collective responsibility to address the mistaken picture of science his conduct conveyed to society at large. It’s disappointing that, nearly a decade later, we instead have to contend with the problem of scientists following in Summers’ footsteps by ignoring, rather than engaging with, the scientific findings that challenge their intuitions.

Owing to the role we play in presenting a picture of what science knows and of how scientists come to know it to a broader audience, those of us who write about science (on blogs and elsewhere) also have a responsibility to be clear about the kind of standards scientists need to live up to in order to build a body of knowledge that is as accurate and unbiased as humanly possible. If we’re not clear about these standards in our science writing, we risk misleading our audiences about the current state of our knowledge and about how science works to build reliable knowledge about our world. Our responsibility here isn’t just a matter of noticing when scientists are messing up — it’s also a matter of acknowledging and correcting our own mistakes and of working harder to notice our own biases. I’m pleased that our Blogs Editor is committed to helping us fulfill this duty.
_____
*Summers is an economist, and whether to regard economics as a scientific field is a somewhat contentious matter. I’m willing to give the scientific status of economics the benefit of the doubt, but this means I’ll also expect economists to conduct themselves like scientists, and will criticize them when they do not.

**It’s worth noting that a number of the studies that Summers seemed to be dismissing out of hand were conducted by women. One wonders what lessons the public might draw from that.

_____
A portion of this post is an updated version of an ancestor post on my other blog.

Some thoughts about human subjects research in the wake of Facebook’s massive experiment.

You can read the study itself here, plus a very comprehensive discussion of reactions to the study here.

1. If you intend to publish your research in a peer-reviewed scientific journal, you are expected to have conducted that research with the appropriate ethical oversight. Indeed, the submission process usually involves explicitly affirming that you have done so (and providing documentation, in the case of human subjects research, of approval by the relevant Institutional Review Board(s) or of the IRB’s determination that the research was exempt from IRB oversight).

2. Your judgment, as a researcher, that your research will not expose your human subjects to especially big harms does not suffice to exempt that research from IRB oversight. The best way to establish that your research is exempt from IRB oversight is to submit your protocol to the IRB and have the IRB determine that it is exempt.

3. It’s not unreasonable for people to judge that violating their informed consent (say, by not letting them know that they are human subjects in a study where you are manipulating their environment and not giving them the opportunity to opt out of being part of your study) is itself a harm to them. When we value our autonomy, we tend to get cranky when others disregard it.

4. Researchers, IRBs, and the general public needn’t judge a study to be as bad as [fill in the name of a particularly horrific instance of human subjects research] to judge the conduct of the researchers in the study unethical. We can (and should) surely ask for more than “not as bad as the Tuskegee Syphilis Experiment”.

5. IRB approval of a study means that the research has received ethical oversight, but it does not guarantee that the treatment of human subjects in the research will be ethical. IRBs can make questionable ethical judgments too.

6. It is unreasonable to suggest that you can generally substitute Terms of Service or End User License Agreements for informed consent documents, as the latter are supposed to be clear and understandable to your prospective human subjects, while the former are written in such a way that even lawyers have a hard time reading and understanding them. The TOS or EULA is clearly designed to protect the company, not the user. (Some of those users, by the way, are in their early teens, which means they probably ought to be regarded as members of a “vulnerable population” entitled to more protection, not less.)

7. Just because a company like Facebook may “routinely” engage in manipulations of a user’s environment doesn’t make that kind of manipulation automatically ethical when it is done for the purposes of research. Nor does it mean that that kind of manipulation is ethical when Facebook does it for its own purposes. As it happens, peer-reviewed scientific journals, funding agencies, and other social structures tend to hold scientists building knowledge with human subjects research to higher ethical standards than (say) corporations are held to when they interact with humans. This doesn’t necessarily mean our ethical demands of scientific knowledge-builders are too high. Instead, it may mean that our ethical demands of corporations are too low.

In the wake of the Harran plea deal, are universities embracing lab safety?

Earlier this month, prosecutors in Los Angeles reached a plea agreement with UCLA chemistry professor Patrick Harran in the criminal case against him in connection with the 2008 lab accident that resulted in the death of 23-year-old staff research assistant Sheharbano “Sheri” Sangji. Harran, who was facing more than 4 years of jail time if convicted, instead will perform 800 hours of community service and may find himself back in court in the event that his lab is found to have new safety violations in the next five years.

The Sangji family is not satisfied that the plea punishes Harran enough. My worry is whether the resolution of this case will have a positive impact on safety in academic labs and research settings.

According to The Chronicle of Higher Education,

Several [independent safety advocates] agreed that universities’ research laboratories still remain more dangerous than their corporate counterparts. Yet they also expressed confidence that the impetus for improvement brought by the first filing ever of criminal charges over a fatal university lab accident has not been diluted by the plea bargain. …

[T]he action by California prosecutors “has gotten the attention of virtually every research chemist out there,” even in states that may seem more reluctant to pursue such cases, [Neal R. Langerman, a former associate professor of chemistry at Utah State University who now heads Advanced Chemical Safety, a consulting firm] said. “This is precedent-setting, and now that the precedent is set, you really do not want to test the water, because the water is already boiling.”

As you might expect, the official statement from UCLA plays up the improvements in lab safety put into place in the wake of the accident and points to the creation of the UC Center for Laboratory Safety, which has been holding workshops and surveying lab workers on safety practices and attitudes.

I’m afraid, however, judging from the immediate reaction I’ve seen at my own institution, that we have a long way to go.

In particular, a number of science faculty (who are not chemists) seem to have been getting clear messages in the wake of “that UCLA prosecution” — they didn’t really know the details of the case, nor the names of the people involved — that our university would not be backing them up legally in the event of any safety mishap in the lab or the field. Basically, the rumblings from the higher administrative strata were: No matter how well you’ve prepared yourself, your students, your employees, no matter how many safety measures you’ve put into place, no matter what limitations you’re working with as far as equipment or facilities, if something goes wrong, it’s your ass on the line.

This does not strike me as a productive way to approach safe working conditions as a collective responsibility within an educational institution. I also suspect it’s not a stance that would hold up in court, but since it would probably take another lab tragedy and prosecution to undermine it, I’m hopeful that some sense of institutional ethics will well up and result in a more productive approach.

The most charitable explanation I can come up with is that the higher administrative strata intended to communicate that science faculty have a positive duty to ensure safe working conditions for their students and employees (and themselves). That means that science faculty need to be proactive in assessing their research settings (whether for laboratory or field research) for potential hazards, in educating themselves and those they work with about those hazards, and in having workable plans to mitigate the hazards and to respond swiftly and effectively to mishaps. All of that is sensible enough.

However, none of that means that the institution is free of responsibility. Departments, colleges, and university administrators control resources that can make the difference between a pretty safe research environment and a terribly risky one. Institutions, not individual faculty, create and maintain occupational health programs. Institutions can marshal shared resources (including safety training programs and institutional safety officers) that individual faculty cannot.

Moreover, institutions set the institutional tone — the more or less official sense of what is prioritized, of what is valued. If the strongest message about safety that reaches faculty boils down to legal liability and who will ultimately be legally liable, I’m pretty sure the institution still has a great deal of work to do in establishing a real culture of safety.

_____

Related posts:

Suit against UCLA in fatal lab fire raises question of who is responsible for safety.

Crime, punishment, and the way forward: in the wake of Sheri Sangji’s death, what should happen to Patrick Harran?

Facing felony charges in lab death of Sheri Sangji, UCLA settles, Harran stretches credulity.

Why does lab safety look different to chemists in academia and chemists in industry?

Community responsibility for a safety culture in academic chemistry.

Are safe working conditions too expensive for knowledge-builders?

Do permanent records of scientific misconduct findings interfere with rehabilitation?

We’ve been discussing how the scientific community deals with cheaters in its midst and the question of whether scientists view rehabilitation as a live option. Connected to the question of rehabilitation is the question of whether an official finding of scientific misconduct leaves a permanent mark that makes it practically impossible for someone to function within the scientific community — not because the person who has committed the misconduct is unable to straighten up and fly right, but because others in the scientific community will no longer accept that person in the scientific knowledge-building endeavor, no matter what their behavior.

A version of this worry is at the center of an editorial by Richard Gallagher that appeared in The Scientist five years ago. In it, Gallagher argued that the Office of Research Integrity should not include findings of scientific misconduct in publications that are archived online, and that traces of such findings that persist after the period of debarment from federal funding has ended are unjust. Gallagher wrote:

For the sake of fairness, these sentences must be implemented precisely as intended. This means that at the end of the exclusion period, researchers should be able to participate again as full members of the scientific community. But they can’t.

Misconduct findings against a researcher appear on the Web–indeed, in multiple places on the Web. And the omnipresence of the Web search means that reprimands are being dragged up again and again and again. However minor the misdemeanor, the researcher’s reputation is permanently tarnished, and his or her career is invariably ruined, just as surely as if the punishment were a lifetime ban.

Both the NIH Guide and The Federal Register publish findings of scientific misconduct, and are archived online. As long as this continues, the problem will persist. The director of the division of investigative oversight at ORI has stated his regret at the “collateral damage” caused by the policy (see page 32). But this is not collateral damage; it is a serious miscarriage of justice against researchers and a stain on the integrity of the system, and therefore of science.

It reminds me of the system present in US prisons, in which even after “serving their time,” prisoners will still have trouble finding work because of their criminal records. But is it fair to compare felons to scientists who have, for instance, fudged their affiliations on a grant application when they were young and naïve?

It’s worth noting that the ORI website seems currently to present information for misconduct cases where scientists haven’t yet “served out their sentences”, featuring the statement:

This page contains cases in which administrative actions were imposed due to findings of research misconduct. The list only includes those who CURRENTLY have an imposed administrative actions against them. It does NOT include the names of individuals whose administrative actions periods have expired.

In the interaction between scientists who have been found to have committed scientific misconduct and the larger scientific community, we encounter the tension between the rights of the individual scientist and the rights of the scientific community. This extends to the question of the magnitude of a particular instance of misconduct, or of whether it was premeditated or merely sloppy, or of whether the offender was young and naïve or old enough to know better. An oversight or mistake in judgment that may strike the individual scientist making it as no big deal (at least at the time) can have significant consequences for the scientific community in terms of time wasted (e.g., trying to reproduce reported results) and damaged trust.

The damaged trust is not a minor thing. Given that the scientific knowledge-building enterprise relies on conditions where scientists can trust their fellow scientists to make honest reports (whether in the literature, in grant proposals, or in less formal scientific communications), discovering a fellow scientist whose relationship with the truth is more casual is a very big deal. Flagging liars is like tagging a faulty measuring device. It doesn’t mean you throw them out, but you do need to go to some lengths to reestablish their reliability.

To the extent that an individual scientist is committed to the shared project of building a reliable body of scientific knowledge, he or she ought to understand that after a breach, one is not entitled to a full restoration of the community’s trust. Rather, that trust must be earned back. One step in earning back trust is to acknowledge the harm the community suffered (or at least risked) from the dishonesty. Admitting that you blew it, that you are sorry, and that others have a right to be upset about it, are all necessary preliminaries to making a credible claim that you won’t make the same mistake again.

On the other hand, protesting that your screw-ups really weren’t important, or that your enemies have blown them out of proportion, might be an indication that you still don’t really get why your scientific colleagues are unhappy about your behavior. In such a circumstance, although you may have regained your eligibility to receive federal grant money, you may still have some work left to do to demonstrate that you are a trustworthy member of the scientific community.

It’s true that scientific training seems to go on forever, but that shouldn’t mean that early career scientists are infantilized. They are, by and large, legal adults, and they ought to be striving to make decisions as adults — which means considering the potential effects of their actions and accepting the consequences of them. I’m disinclined, therefore, to view ORI judgments of scientific misconduct as akin to juvenile criminal records that are truly expunged to reflect the transient nature of the youthful offender’s transgressions. Scientists ought to have better judgment than fifteen-year-olds. Occasionally they don’t. If they want to stay a part of the scientific community that their bad choices may have harmed, they have to be prepared to make real restitution. This may include having to meet a higher burden of proof to make up for having misled one’s fellow scientists at some earlier point in time. It may be a pain, but it’s not impossible.

Indeed, I’m inclined to think that early career lapses in judgment ought not to be buried precisely because public knowledge of the problem gives the scientific community some responsibility for providing guidance to the promising young scientist who messed up. Acknowledging your mistakes sets up a context in which it may be easier to ask other folks for help in avoiding similar mistakes in the future. (Ideally, scientists would be able to ask each other for such advice as a matter of course, but there are plenty of instances where it feels like asking a question would be exposing a weakness — something that can feel very dangerous, especially to an early career scientist.)

Besides, there’s a practical difficulty in burying the pixel trail of a scientist’s misconduct. It’s almost always the case that other members of the scientific community are involved in alleging, detecting, investigating, or adjudicating. They know something is up. Keeping the official findings secret leaves the other concerned members of the scientific community hanging, unsure whether the ORI has done anything about the allegations (which can breed suspicion that scientists are getting away with misconduct left and right). It can also make the rumor mill seem preferable to a total lack of information on scientific colleagues prone to dishonesty toward other scientists.

Given the amount of information available online, it’s unlikely that scientists who have been caught in misconduct can fly completely under the radar. But even before the internet, there was no guarantee such a secret would stay secret. Searchable online information imposes a certain level of transparency. But if this is transparency following upon actions that deceived one’s scientific community, it might be the start of effective remediation. Admitting that you have broken trust may be the first real step in earning that trust back.

_____________
This post is an updated version of an ancestor post on my other blog.

A suggestion for those arguing about the causal explanation for fewer women in science and engineering fields.

People are complex, as are the social structures they build (including but not limited to educational institutions, workplaces, and professional communities).

Accordingly, the appropriate causal stories to account for the behaviors and choices of humans, individually and collectively, are bound to be complex. It will hardly ever be the case that there is a single cause doing all the work.

However, there are times when people seem to lose the thread when they spin their causal stories. For example:

The point of focusing on innate psychological differences is not to draw attention away from anti-female discrimination. The research clearly shows that such discrimination exists—among other things, women seem to be paid less for equal work. Nor does it imply that the sexes have nothing in common. Quite frankly, the opposite is true. Nor does it imply that women—or men—are blameworthy for their attributes.

Rather, the point is that anti-female discrimination isn’t the only cause of the gender gap. As we learn more about sex differences, we’ve built better theories to explain the non-identical distribution of the sexes among the sciences. Science is always tentative, but the latest research suggests that discrimination has a weaker impact than people might think, and that innate sex differences explain quite a lot.

What I’m seeing here is a claim that amounts to “there would still be a gender gap in the sciences even if we eliminated anti-female discrimination” — in other words, that the causal powers of innate sex differences would be enough to create a gender gap.

To this claim, I would like to suggest:

1. that there is absolutely no reason not to work to eliminate anti-female discrimination; whether or not there are other causes that are harder to change, such discrimination seems like something we can change, and it has negative effects on those subject to it;

2. that it is an empirical question whether, in the absence of anti-female discrimination, there would still be a gender gap in the sciences; given the complexity of humans and their social structures, controlled studies in psychology are models of real life that abstract away lots of details*, and when the rubber hits the road in the real phenomena we are modeling, things may play out differently.

Let’s settle the question of how much anti-female discrimination matters by getting rid of it.

_____
* This is not a special problem for psychology. All controlled experiments are abstracting away details. That’s what controlling variables is all about.

Faith in rehabilitation (but not in official channels): how unethical behavior in science goes unreported.

Can a scientist who has behaved unethically be rehabilitated and reintegrated as a productive member of the scientific community? Or is your first ethical blunder grounds for permanent expulsion from the community?

In practice, this isn’t just a question about the person who commits the ethical violation. It’s also a question about what other scientists in the community can stomach in dealing with the offenders — especially when the offender turns out to be a close colleague or a trainee.

In the case of a hard line — one ethical strike and you’re out — what kind of decision does this place on the scientific mentor who discovers that his or her graduate student or postdoc has crossed an ethical line? Faced with someone you judge to have talent and promise, someone you think could contribute to the scientific endeavor, someone whose behavior you are convinced was the result of a moment of bad judgment rather than evil intent or an irredeemably flawed character, what do you do?

Do you hand the matter on to university administrators or federal funders (who don’t know your trainee, might not recognize or value his or her promise, might not be able to judge just how out of character this ethical misstep really was) and let them mete out punishment? Or, do you try to address the transgression yourself, as a mentor, addressing the actual circumstances of the ethical blunder, the other options your trainee should have recognized as better ones to pursue, and the kind of harm this bad decision could bring to the trainee and to other members of the scientific community?

Clearly, there are downsides to either of these options.

One problem with handling an ethical transgression privately is that it’s hard to be sure it has really been handled in a lasting way. Given the persistent patterns of escalating misbehavior that often come to light when big frauds are exposed, it’s hard not to wonder whether scientific mentors were aware, and perhaps even intervening in ways they hoped would be effective.

It’s the building over time of ethical violations that is concerning. Is such an escalation the result of a hands-off (and eyes-off) policy from mentors and collaborators? Could intervention earlier in the game have stopped the pattern of infractions and led the researcher to cultivate more honest patterns of scientific behavior? Or is being caught by a mentor or collaborator who admonishes you privately and warns that he or she will keep an eye on you almost as good as getting away with it — an outcome with no real penalties and no paper-trail that other members of the scientific community might access?

It’s even possible that some of these interventions might happen at an institutional level — the department or the university becomes aware of ethical violations and deals with them “internally” without involving “the authorities” (who, in such cases, are usually federal funding agencies). I dare say that the feds would be pretty unhappy about being kept out of the loop if the ethical violations in question occur in research supported by federal funding. But if the presumption is that getting the feds involved raises the available penalties to the draconian, it is understandable that departments and universities might want to try to address the ethical missteps while still protecting the investment they have made in a promising young researcher.

Of course, the rest of the scientific community has relevant interests here. These include an interest in being able to trust that other scientists present honest results to the community, whether in journal articles, conference presentations, grant applications, or private communications. Arguably, they also include an interest in having other members of the community expose dishonesty when they detect it. Managing an ethical infraction privately is problematic if it leaves the scientific community with misleading literature that isn’t corrected or retracted (for example).

It’s also problematic if it leaves someone with a habit of cheating in the community, presumed by all but a few of the community’s members to have a good record of integrity.

But I’m inclined to think that the impulse to deal with science’s youthful offenders privately is a response to the fear that handing them over to federal authorities has a high likelihood of ending their scientific careers forever. There is a fear that a first offense will be punished with the career equivalent of the death penalty.

As it happens, administrative sanctions imposed by the Office of Research Integrity hardly ever amount to permanent removal. Findings of scientific misconduct are much more likely to be punished with exclusion from federal funding for three years, or five years, or ten years. Still, in an extremely competitive environment, with multitudes of scientists competing for scarce grant dollars and permanent jobs, even a three-year debarment may be enough to seriously derail a scientific career. The mentor making the call about whether to report a trainee’s unethical behavior may judge the likely fallout as enough to end the trainee’s career.

Permanent expulsion or a slap on the wrist is not much of a range of penalties. And, neither of these options really addresses the question of whether rehabilitation is possible and in the best interests of both the individual and the scientific community.

If no errors in judgment are tolerated, people will do anything to conceal such errors. Mentors who are trying to be humane may become accomplices in the concealment. The conversations about how to make better judgments may not happen because people worry that their hypothetical situations will be scrutinized for clues about actual screw-ups.

None of this is to say that ethical violations should be without serious consequences — they shouldn’t. But this need not preclude the possibility that people can learn from their mistakes. Violators may have to meet a heavy burden to demonstrate that they have learned from their mistakes. Indeed, it is possible they may never fully regain the trust of their fellow researchers (who may go forward reading their papers and grant proposals with heightened skepticism in light of their past wrongdoing).

However, it seems perverse for the scientific community to adopt a stance that rehabilitation is impossible when so many of its members seem motivated to avoid official channels for dealing with misconduct precisely because they feel rehabilitation is possible. If the official penalty structure denies the possibility of rehabilitation, those scientists who believe in rehabilitation will take matters into their own hands. To the extent that this may exacerbate the problem, it might be good if paths to rehabilitation were given more prominence in official responses to misconduct.

_____________
This post is an updated version of an ancestor post on my other blog.

Resistance to ethics instruction: considering the hypothesis that moral character is fixed.

This week I’ve been blogging about the resistance to required ethics coursework one sometimes sees in STEM* disciplines. As one reason for this resistance is the hunch that you can’t teach a person to be ethical once they’re past a certain (pre-college) age, my previous post noted that there’s a sizable body of research that supports ethics instruction as an intervention to help people behave more ethically.

But, as I mentioned in that post, the intuition that one’s moral character is fixed by one’s twenties can be so strong that folks don’t always believe what the empirical research says about the question.

So, as a thought experiment, let’s entertain the hypothesis that, by your twenties, your moral character is fixed — that you’re either ethical or evil by then and there’s nothing further ethics instruction can do about it. If this were the case, how would we expect scientists to respond to other scientists or scientific trainees who behave unethically?

Presumably, scientists would want the unethical members of the tribe of science identified and removed, permanently. Under the fixed-character hypothesis, the removal would have to be permanent, because there would be every reason to expect the person who behaved unethically to behave unethically again.

If we took this seriously, that would mean every college student who ever cheated on a quiz or made up data for a lab report should be barred from entry to the scientific community, and that every grown-up scientist caught committing scientific misconduct — or any ethical lapse, even those falling well short of fabrication, falsification, or plagiarism — would be excommunicated from the tribe of science forever.

That just doesn’t happen. Even Office of Research Integrity findings of scientific misconduct don’t typically lead to lifetime debarment from federal research funding. Instead, they usually lead to administrative actions imposed for a finite duration, on the order of years, not decades.

And, I don’t think the failure to impose a policy of “one strike, you’re out” for those who behave unethically is because members of the tribe of science are being held back by some naïvely optimistic outside force (like the government, or the taxpaying public, or ethics professors). Nor is it because scientists believe it’s OK to lie, cheat, and steal in one’s scientific practice; there is general agreement that scientific misconduct damages the shared body of knowledge scientists are working to build.

When dealing with members of their community who have behaved unethically, scientists usually behave as if there is a meaningful difference between a first offense and a pattern of repeated offenses. This wouldn’t make sense if scientists were truly committed to the fixed-character hypothesis.

On the other hand, it fits pretty well with the hypothesis that people may be able to learn from their mistakes — to be rehabilitated rather than simply removed from the community.

There are surely some hard cases that the tribe of science views as utterly irredeemable, but graduate students or early-career scientists whose unethical behavior is caught early are treated by many as probably redeemable.

How to successfully rehabilitate a scientist who has behaved unethically is a tricky question, and not one scientists seem inclined to speak about much. Actions by universities, funding agencies, or governmental entities like the Office of Research Integrity are part of the punishment landscape, but punishment is not the same thing as rehabilitation. Meanwhile, it’s unclear whether individual actions to address wrongdoing are effective at heading off future unethical behavior.

If it takes a village to raise a scientist, it may take concerted efforts at the level of scientific communities to rehabilitate scientists who have strayed from the path of ethical practice. We’ll discuss some of the challenges with that in the next post.

______
*STEM stands for science, technology, engineering, and mathematics.

Resistance to ethics instruction: the intuition that ethics cannot be taught.

In my last post, I suggested that required ethics coursework (especially for students in STEM* disciplines) is met with a specific sort of resistance. I also surmised that part of this resistance is the idea that ethics can’t be taught in any useful way, “the idea that being ethical is somehow innate, a mere matter of not being evil.”

In a comment on that post, ThomasB nicely illustrates that particular strain of resistance:

Certainly scientists, like everyone else in our society, must behave ethically. But what makes this a college-level class? From the description, it covers the basic do not lie-cheat-steal along with some anti-bullying and possibly a reminder to cite one’s references. All of which should have been instilled long before college.

So what is there to teach at this point? The only thing I can think of specific to science is the “publish or perish” pressure to keep the research dollars flowing in. Or possibly the psychological studies showing that highly intelligent and creative people are more inclined to be dishonest than ordinary people. Possibly because they are better at rationalizing doing what they want to do. Which is why I used the word “instilled” earlier: it seems to me that ethics comes more from the emotional centers of the brain than the conscious analytical part. As soon as we start consciously thinking about ethics, they seem to go out the window. Such as the study from one of the Ivy League schools where the students did worse at the ethics test at the end of the class than at the beginning.

So I guess the bottom line is whether the science shows that ethics classes at this point in a person’s life actually show an improvement in the person’s behavior. As far as I know, there has been no such study done.

(Bold emphasis added.)

I think it’s reasonable to ask, before requiring an intervention (like ethics coursework), what we know about whether this sort of intervention is likely to work. I think it’s less reasonable to assume it won’t work without consulting the research on the matter.

As it happens, there has been a great deal of research on whether ethics instruction is an intervention that helps people behave more ethically — and the bulk of it shows that well-designed ethics instruction is an effective intervention.

Here’s what Bebeau et al. (1995) have to say about the question:

When people are given an opportunity to reflect on decisions and choices, they can and do change their minds about what they ought to do and how they wish to conduct their personal and professional lives. This is not to say that any instruction will be effective, or that all manner of ethical behavior can be developed with well-developed ethics instruction. But it is to say — and there is considerable evidence to show it — that ethics instruction can influence the thinking processes that relate to behavior. …

We do not claim that radical changes are likely to take place in the classroom or that sociopaths can be transformed into saints via case discussion. But we do claim that significant improvements can be made in reasoning about complex problems and that the effort is worthwhile. We are not alone in this belief: the National Institutes of Health, the National Science Foundation, the American Association for the Advancement of Science, and the Council of Biology Editors, among others, have called for increased attention to training in the responsible conduct of scientific research. Further, our belief is buttressed by empirical evidence from moral psychology. In Garrod (1993), James R. Rest summarizes the “several thousand” published studies on moral judgment and draws the following conclusions:

  • development of competence in ethical problem-solving continues well into adulthood (people show dramatic changes in their twenties, as in earlier years);
  • such changes reflect profound reconceptualization of moral issues;
  • formal education promotes ethical reasoning;
  • deliberate attempts to develop moral reasoning … can be demonstrated to be effective; and
  • studies link moral reasoning to moral behavior.

So, there’s a body of research that supports ethics instruction as an intervention to help people behave more ethically.

Indeed, part of how ethics instruction helps is by getting students to engage analytically, not just emotionally. I would argue that making ethical decisions involves moving beyond gut feelings and instincts. It means understanding how your decisions impact others, and considering the ways your interests and theirs intersect. It means thinking through possible impacts of the various choices available to you. It means understanding the obligations set up by our relations to others in personal and professional contexts.

And a methodology for approaching ethical decision-making can be taught. Practice in making ethical decisions makes it easier to make better ones. And making these decisions in conversation with other people who may have different perspectives (rather than just following a gut feeling) forces us to work out our reasons for preferring one course of action to the alternatives. Those reasons aren’t just something we can offer to others to defend what we did; they are also considerations we can weigh when deciding what to do in the first place.

As always, I reckon that there are some people who will remain unmoved by the research that shows the efficacy of ethics instruction, preferring to cling to their strong intuition that college-aged humans are past the point where an intervention like an ethics class could make any impact on their ethical behavior. But if that’s an intuition that ought to guide us — if, by your twenties, you’re either a good egg or irredeemably corrupt — it’s not clear that our individual or institutional responses to unethical behavior by scientists make any sense.

That’s the subject I’ll take up in my next post.

______
*STEM stands for science, technology, engineering, and mathematics.

______
Bebeau, M. J., Pimple, K. D., Muskavitch, K. M., Borden, S. L., & Smith, D. H. (1995). Moral reasoning in scientific research: Cases for teaching and assessment. Bloomington, IN: Poynter Center for the Study of Ethics and American Institutions, Indiana University.

Garrod, A. (Ed.). (1993). Approaches to moral development: New research and emerging themes. New York: Teachers College Press.