The continuum between outright fraud and “sloppy science”: inside the frauds of Diederik Stapel (part 5).

It’s time for one last look at the excellent article by Yudhijit Bhattacharjee in the New York Times Magazine (published April 26, 2013) on social psychologist and scientific fraudster Diederik Stapel. We’ve already examined the strategy Stapel pursued to fabricate persuasive “results”, the particular harms Stapel’s misconduct did to the graduate students he was training, and the apprehensions about blowing the whistle felt by the students and colleagues who suspected fraud was afoot. To close, let’s look at some of the uncomfortable lessons the Stapel case has for his scientific community — and perhaps for other scientific communities as well.

Bhattacharjee writes:

At the end of November, the universities unveiled their final report at a joint news conference: Stapel had committed fraud in at least 55 of his papers, as well as in 10 Ph.D. dissertations written by his students. The students were not culpable, even though their work was now tarnished. The field of psychology was indicted, too, with a finding that Stapel’s fraud went undetected for so long because of “a general culture of careless, selective and uncritical handling of research and data.” If Stapel was solely to blame for making stuff up, the report stated, his peers, journal editors and reviewers of the field’s top journals were to blame for letting him get away with it. The committees identified several practices as “sloppy science” — misuse of statistics, ignoring of data that do not conform to a desired hypothesis and the pursuit of a compelling story no matter how scientifically unsupported it may be.

The adjective “sloppy” seems charitable. Several psychologists I spoke to admitted that each of these more common practices was as deliberate as any of Stapel’s wholesale fabrications. Each was a choice made by the scientist every time he or she came to a fork in the road of experimental research — one way pointing to the truth, however dull and unsatisfying, and the other beckoning the researcher toward a rosier and more notable result that could be patently false or only partly true. What may be most troubling about the research culture the committees describe in their report are the plentiful opportunities and incentives for fraud. “The cookie jar was on the table without a lid” is how Stapel put it to me once. Those who suspect a colleague of fraud may be inclined to keep mum because of the potential costs of whistle-blowing.

The key to why Stapel got away with his fabrications for so long lies in his keen understanding of the sociology of his field. “I didn’t do strange stuff, I never said let’s do an experiment to show that the earth is flat,” he said. “I always checked — this may be by a cunning manipulative mind — that the experiment was reasonable, that it followed from the research that had come before, that it was just this extra step that everybody was waiting for.” He always read the research literature extensively to generate his hypotheses. “So that it was believable and could be argued that this was the only logical thing you would find,” he said. “Everybody wants you to be novel and creative, but you also need to be truthful and likely. You need to be able to say that this is completely new and exciting, but it’s very likely given what we know so far.”

Fraud like Stapel’s — brazen and careless in hindsight — might represent a lesser threat to the integrity of science than the massaging of data and selective reporting of experiments. The young professor who backed the two student whistle-blowers told me that tweaking results — like stopping data collection once the results confirm a hypothesis — is a common practice. “I could certainly see that if you do it in more subtle ways, it’s more difficult to detect,” Ap Dijksterhuis, one of the Netherlands’ best known psychologists, told me. He added that the field was making a sustained effort to remedy the problems that have been brought to light by Stapel’s fraud.

(Bold emphasis added.)

If the writers of this report are correct, the field of psychology failed in multiple ways here. First, its members were insufficiently skeptical — both of Stapel’s purported findings and of their own preconceptions — to nip Stapel’s fabrications in the bud. Second, they were themselves routinely engaging in practices that were bound to mislead.

Maybe these practices don’t rise to the level of outright fabrication. However, neither do they rise to the level of rigorous and intellectually honest scientific methodology.

There could be a number of explanations for these questionable methodological choices.

Possibly some of the psychologists engaging in this “sloppy science” lack a good understanding of statistics or of what counts as a properly rigorous test of one’s hypothesis. Essentially, this is an explanation of faulty methodology on the basis of ignorance. However, it’s likely that this is culpable ignorance — that psychology researchers have a positive duty to learn what they ought to know about statistics and hypothesis testing, and to avail themselves of available resources to ensure that they aren’t ignorant in this particular way.
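As a concrete illustration of the kind of statistical understanding at stake, consider one of the practices named in the quoted passage above: stopping data collection as soon as the results confirm the hypothesis. The sketch below is a toy simulation of my own (in Python, using numpy and scipy), not any analysis from the Stapel investigations. It shows that even when there is no real effect at all, repeatedly testing as the data accumulate and stopping at the first p < 0.05 produces “significant” findings far more often than the nominal 5 percent of the time.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_experiment(batch=10, max_batches=10, alpha=0.05):
    """Add `batch` subjects per group at a time; run a t-test after every batch."""
    a, b = [], []
    for _ in range(max_batches):
        a.extend(rng.normal(0, 1, batch))  # both groups drawn from the same
        b.extend(rng.normal(0, 1, batch))  # distribution, so the null is true
        if stats.ttest_ind(a, b).pvalue < alpha:
            return True   # "significant" result found; stop collecting data
    return False

n_sims = 5000
false_positives = sum(one_experiment() for _ in range(n_sims))
print(f"nominal alpha: 0.05, observed false-positive rate: {false_positives / n_sims:.3f}")
# With ten looks at the accumulating data, the observed rate typically comes
# out several times higher than 0.05.
```

A researcher who doesn’t understand why this happens isn’t just being casual about bookkeeping; the practice systematically manufactures findings.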

I don’t know if efforts to improve statistics education are a part of the “sustained effort to remedy the problems that have been brought to light by Stapel’s fraud,” but I think they should be.

Another explanation for the lax methodology decried by the report is alluded to in the quoted passage: perhaps psychology researchers let the strength of their own intuitions about what they were going to see in their research results drive their methodology. Perhaps they unconsciously drifted away from methodological rigor and toward cherry-picking and misuse of statistics and the like because they knew in their hearts what the “right” answer would be. Given this kind of conviction, of course they would reject methods that didn’t yield the “right” answer in favor of those that did.

Here, too, the explanation does not provide an excuse. The scientist’s brief is not to take strong intuitions as true, but to look for evidence — especially evidence that could demonstrate that the intuitions are wrong. A good scientist should be on the alert for instances where she is being fooled by her intuitions. Rigorous methodology is one of the tools at her disposal to avoid being fooled. Organized skepticism from her fellow scientists is another.

From here, the explanations drift into waters where the researchers are even more culpable for their sloppiness. If you understand how to test hypotheses properly, and if you’re alert enough to the seductive power of your intuitions, it seems like the other reason you might engage in “sloppy science” is to make your results look less ambiguous, more certain, more persuasive than they really are, either to your fellow scientists or to others (administrators evaluating your tenure or promotion case? the public?). Knowingly providing a misleading picture of how good your results are is lying. It may be a lie of a smaller magnitude than Diederik Stapel’s full-scale fabrications, but it’s still dishonest.

And of course, there are plenty of reasons scientists (like other human beings) might try to rationalize a little lie as being not that bad. Maybe you really needed more persuasive preliminary data than you got to land the grant without which you won’t be able to support graduate students. Maybe you needed to make your conclusions look stronger to satisfy the notoriously difficult peer reviewers at the journal to which you submitted your manuscript. Maybe you are on the verge of getting credit for a paradigm-shaking insight in your field (if only you can put up the empirical results to support it), or of beating a competing research group to the finish line for an important discovery (if only you can persuade your peers that the results you have establish that discovery).

But maybe all these excuses prioritize scientific scorekeeping to the detriment of scientific knowledge-building.

Science is supposed to be an activity aimed at building a reliable body of knowledge about the world. You can’t reconcile this with lying, whether to yourself or to your fellow scientists. This means that scientists who are committed to the task must refrain from the little lies, and that they must take serious conscious steps to ensure that they don’t lie to themselves. Anything else runs the risk of derailing the whole project.

C. K. Gunsalus on responsible — and prudent — whistleblowing.

In my last post, I considered why, despite good reasons to believe that social psychologist Diederik Stapel’s purported results were too good to be true, the scientific colleagues and students who were suspicious of his work were reluctant to pursue these suspicions. Questioning the integrity of a member of your professional community is hard, and blowing the whistle on misconduct and misbehavior can be downright dangerous.

In her excellent article “How to Blow the Whistle and Still Have a Career Afterwards”, C. K. Gunsalus describes some of the challenges that come from less than warm community attitudes towards members who point out wrongdoing:

[Whistleblowers pay a high price] due to our visceral cultural dislike of tattletales. While in theory we believe the wrong-doing should be reported, our feelings about practice are more ambivalent. …

Perhaps some of this ambivalence is rooted in fear of becoming oneself the target of maliciously motivated false charges filed by a disgruntled student or former colleague. While this concern is probably overblown, it seems not far from the surface in many discussions of scientific integrity. (p. 52)

I suspect that much of this is a matter of empathy — or, more precisely, of which members of our professional community we empathize with. Maybe we have an easier time empathizing with the folks who seem to be trying to get along than with those who seem to be looking for trouble. Or maybe we have more empathy for our colleagues, with whom we share experiences and responsibilities and the expectation of long-term, durable bonds, than we have for our students.

But perhaps distaste for a tattletale is more closely connected to our distaste for the labor involved in properly investigating allegations of wrongdoing and then, if wrongdoing is established, addressing it. It would certainly be easier to assume the charges are baseless, and sometimes disinclination to investigate takes the form of finding reasons not to believe the person raising the concerns.

Still, if the psychology of scientists cannot permit them to take allegations of misbehavior seriously, there is no plausible way for science to be self-correcting. Gunsalus writes:

[E]very story has at least two sides, and a problem often looks quite different when both are in hand than when only one perspective is in view. The knowledge that many charges are misplaced or result from misunderstandings reinforces ingrained hesitancies against encouraging charges without careful consideration.

On the other hand, serious problems do occur where the right and best thing for all is a thorough examination of the problem. In most instances, this examination cannot occur without someone calling the problem to attention. Early, thorough review of potential problems is in the interest of every research organization, and conduct that leads to it should be encouraged. (p. 53)

(Bold emphasis added.)

Gunsalus’s article (which you should read in full) takes account of the negative attitudes towards whistleblowers that persist despite the importance of rooting out misconduct, and lays out a sensible strategy for bringing wrongdoing to light without losing your membership in your professional community. She offers “rules for responsible whistleblowing”:

  1. Consider alternative explanations (especially that you may be wrong).
  2. In light of #1, ask questions, do not make charges.
  3. Figure out what documentation supports your concerns and where it is.
  4. Separate your personal and professional concerns.
  5. Assess your goals.
  6. Seek advice and listen to it.

and her “step-by-step procedures for responsible whistleblowing”:

  1. Review your concern with someone you trust.
  2. Listen to what that person tells you.
  3. Get a second opinion and take that seriously, too.
  4. If you decide to initiate formal proceedings, seek strength in numbers.
  5. Find the right place to file charges; study the procedures.
  6. Report your concerns.
  7. Ask questions; keep notes.
  8. Cultivate patience!

The focus is very much on moving beyond hunches to establish clear evidence — and on avoiding self-deception. The potential whistleblower must hope that those to whom he or she is bringing concerns are themselves as committed to looking at the available evidence and avoiding self-deception.

Sometimes this is the situation, as it seems to have been in the Stapel case. In other cases, though, whistleblowers have done everything Gunsalus recommends and still found themselves without the support of their community. This is not just a bad thing for the whistleblowers. It is also a bad thing for the scientific community and the reliability of the shared body of knowledge it tries to build.
_____
C. K. Gunsalus, “How to Blow the Whistle and Still Have a Career Afterwards,” Science and Engineering Ethics, 4(1) 1998, 51-64.

Reluctance to act on suspicions about fellow scientists: inside the frauds of Diederik Stapel (part 4).

It’s time for another post in which I chew on some tidbits from Yudhijit Bhattacharjee’s incredibly thought-provoking New York Times Magazine article (published April 26, 2013) on social psychologist and scientific fraudster Diederik Stapel. (You can also look at the tidbits I chewed on in part 1, part 2, and part 3.) This time I consider the question of why it was that, despite mounting clues that Stapel’s results were too good to be true, other scientists in Stapel’s orbit were reluctant to act on their suspicions that Stapel might be up to some sort of scientific misbehavior.

Let’s look at how Bhattacharjee sets the scene in the article:

[I]n the spring of 2010, a graduate student noticed anomalies in three experiments Stapel had run for him. When asked for the raw data, Stapel initially said he no longer had it. Later that year, shortly after Stapel became dean, the student mentioned his concerns to a young professor at the university gym. Each of them spoke to me but requested anonymity because they worried their careers would be damaged if they were identified.

The bold emphasis here (and in the quoted passages that follow) is mine. I find it striking that even now, when Stapel has essentially been fully discredited as a trustworthy scientist, these two members of the scientific community feel safer not being identified. It’s not entirely obvious to me whether their worry is being identified as people who suspected that fabrication was taking place but said nothing to launch official inquiries, or whether they fear that being identified as someone who was suspicious of a fellow scientist could harm their standing in the scientific community.

If you dismiss that second possibility as totally implausible, read on:

The professor, who had been hired recently, began attending Stapel’s lab meetings. He was struck by how great the data looked, no matter the experiment. “I don’t know that I ever saw that a study failed, which is highly unusual,” he told me. “Even the best people, in my experience, have studies that fail constantly. Usually, half don’t work.”

The professor approached Stapel to team up on a research project, with the intent of getting a closer look at how he worked. “I wanted to kind of play around with one of these amazing data sets,” he told me. The two of them designed studies to test the premise that reminding people of the financial crisis makes them more likely to act generously.

In early February, Stapel claimed he had run the studies. “Everything worked really well,” the professor told me wryly. Stapel claimed there was a statistical relationship between awareness of the financial crisis and generosity. But when the professor looked at the data, he discovered inconsistencies confirming his suspicions that Stapel was engaging in fraud.

If one has suspicions about how reliable a fellow scientist’s results are, doing some empirical investigation seems like the right thing to do. Keeping an open mind and then examining the actual data might well show one’s suspicions to be unfounded.

Of course, that’s not what happened here. So, given a reason for doubt with stronger empirical support — not to mention the fact that scientists are trying to build a shared body of scientific knowledge, which unreliable papers in the literature can undermine for everyone who trusts that the work reported there was done honestly — you would think the time was right for this professor to pass on what he had found to those at the university who could investigate further. Right?

The professor consulted a senior colleague in the United States, who told him he shouldn’t feel any obligation to report the matter.

For all the talk of science, and the scientific literature, being “self-correcting,” it’s hard to imagine the precise mechanism for such self-correction in a world where no scientist who is aware of likely scientific misconduct feels any obligation to report the matter.

But the person who alerted the young professor, along with another graduate student, refused to let it go. That spring, the other graduate student examined a number of data sets that Stapel had supplied to students and postdocs in recent years, many of which led to papers and dissertations. She found a host of anomalies, the smoking gun being a data set in which Stapel appeared to have done a copy-paste job, leaving two rows of data nearly identical to each other.

The two students decided to report the charges to the department head, Marcel Zeelenberg. But they worried that Zeelenberg, Stapel’s friend, might come to his defense. To sound him out, one of the students made up a scenario about a professor who committed academic fraud, and asked Zeelenberg what he thought about the situation, without telling him it was hypothetical. “They should hang him from the highest tree” if the allegations were true, was Zeelenberg’s response, according to the student.

Some might think these students were being excessively cautious, but the sad fact is that scientists faced with allegations of misconduct against a colleague — especially if they are brought by students — frequently side with their colleague and retaliate against those making the allegations. Students, after all, are new members of one’s professional community, so green that one might not even think of them as full members. They are low status, they are learning how things work, they are judged likely to have misunderstood what they have seen. And, in contrast to one’s colleagues, students are transients. They are just passing through the training program, whereas you might hope to be with your colleagues for your whole professional life. In a case of dueling testimony, who are you more likely to believe?

Maybe the question should be whether your bias towards believing one over the other is strong enough to keep you from examining the available evidence to determine whether your trust is misplaced.

The students waited till the end of summer, when they would be at a conference with Zeelenberg in London. “We decided we should tell Marcel at the conference so that he couldn’t storm out and go to Diederik right away,” one of the students told me.

In London, the students met with Zeelenberg after dinner in the dorm where they were staying. As the night wore on, his initial skepticism turned into shock. It was nearly 3 when Zeelenberg finished his last beer and walked back to his room in a daze. In Tilburg that weekend, he confronted Stapel.

It might not be universally true, but at least some of the people who will lie about their scientific findings in a journal article will lie right to your face about whether they obtained those findings honestly. Yet lots of us think we can tell — at least with the people we know — whether they are being honest with us. This hunch can be just as wrong as the wrongest scientific hunch waiting for us to accumulate empirical evidence against it.

The students seeking Zeelenberg’s help in investigating Stapel’s misbehavior found a situation in which Zeelenberg would have to look at the empirical evidence first before he looked his colleague in the eye and asked him whether he was fabricating his results. They had already gotten him to say, at least in the abstract, that the kind of behavior they had reason to believe Stapel was committing was unacceptable in their scientific community. To make a conscious decision to ignore the empirical evidence would have meant Zeelenberg would have to see himself as displaying a kind of intellectual dishonesty — because if fabrication is harmful to science, it is harmful to science no matter who perpetrates it.

As it was, Zeelenberg likely had to make the painful concession that he had misjudged his colleague’s character and trustworthiness. But having wrong hunches in science is much less of a crime than clinging to those hunches in the face of mounting evidence against them.

Doing good science requires a delicate balance of trust and accountability. Scientists’ default position is to trust that other scientists are making honest efforts to build reliable scientific knowledge about the world, using empirical evidence and methods of inference that they display for the inspection (and critique) of their colleagues. Not to hold this default position means you have to build all your knowledge of the world yourself (which makes achieving anything like objective knowledge really hard). However, this trust is not unconditional, which is where the accountability comes in. Scientists recognize that they need to be transparent about what they did to build the knowledge — to be accountable when other scientists ask questions or disagree about conclusions — or else that trust evaporates. When the evidence warrants it, distrusting a fellow scientist is not mean or uncollegial — it’s your duty. We need the help of others to build scientific knowledge, but if they insist that we ignore evidence of their scientific misbehavior, they’re not actually helping.

Are safe working conditions too expensive for knowledge-builders?

Last week’s deadly collapse of an eight-story garment factory building in Dhaka, Bangladesh has prompted discussions about whether poor countries can afford safe working conditions for workers who make goods that consumers in countries like the U.S. prefer to buy for bargain prices.

Maybe the risk of being crushed to death (or burned to death, or what have you) is just a trade-off poor people are (or should be) willing to accept to draw a salary. At least, that seems to be the take-away message from the crowd arguing that it would cost too much to have safety regulation (and enforcement) with teeth.

It is hard not to consider how this kind of attitude might get extended to other kinds of workplaces — like, say, academic research labs — given that last week UCLA chemistry professor Patrick Harran was also scheduled to return to court for a preliminary hearing on the felony charges of labor code violations brought against him in response to the 2008 fire in his laboratory that killed his employee, Sheri Sangji.

Jyllian Kemsley has a detailed look at how Harran’s defense team has responded to the charges of specific violations of the California Labor Code, charges involving failure to provide adequate training, failure to have adequate procedures in place to correct unsafe conditions or work practices, and failure to require that workers wear appropriate clothing for the work being done. Since I’m not a lawyer, it’s hard for me to assess the likelihood that the defense responses to these charges would be persuasive to a judge, but ethically, they’re pretty weak tea.

Sadly, though, it’s weak tea of the exact sort that my scientific training has led me to expect from people directing scientific research labs in academic settings.

When safety training is confined to a single safety video that graduate students are shown when they enter a program, that tells graduate students that their safety is not a big deal in the research activities that are part of their training.

When there’s not enough space under the hood for all the workers in a lab to conduct all the activities that, for safety’s sake, ought to be conducted under the hood — and when the boss expects all those activities to happen without delay — that tells them that a sacrifice in safety to produce quick results is acceptable.

When a student-volunteer needs to receive required ionizing radiation safety training to get a film badge that will give her access to the facility where she can irradiate her cells for an experiment, and the PI, upon hearing that the next training session is three weeks away, says to the student-volunteer, “Don’t bother; use my film badge,” that tells people in the lab that the PI is unwilling to lose three weeks of unpaid labor on one aspect of a research project just to make the personnel involved a little bit safer.

When people running a lab take an attitude of “Eh, young people are going to dress how they’re going to dress” rather than imposing clear rules for their laboratories that people whose dress is unsafe for the activities they are to undertake don’t get to undertake them, that tells the personnel in the lab that whatever cost is involved in holding this line — losing a day’s worth of work, being viewed by one’s underlings as strict rather than cool — has been judged too high relative to the benefit of making personnel in the lab safer.

When university presidents or other administrators proclaim that knowledge-builders “must continue to recalibrate [their] risk tolerance” by examining their “own internal policies and ask[ing] the question—do they meet—or do they exceed—our legal or regulatory requirements,” that tells knowledge-builders at those universities that people with significantly more power than them judge efforts to make things safer for knowledge-builders (and for others, like the human subjects of their research) as an unnecessary burden. When institutions need to become leaner, or more agile, shouldn’t researchers (and human subjects) do their part by accepting more risk as the price of doing business?

To be sure, safety isn’t free. But there are also costs to being less safe in academic research settings.

For example, personnel develop lax attitudes toward risks and trainees take these attitudes with them when they go out in the world as grown-up scientists. Surrounding communities can get hurt by improper disposal of hazardous materials, or by inadequate safety measures taken by researchers working with infectious agents who then go home and cough on their families and friends. Sometimes, personnel are badly injured, or killed.

And, if academic scientists are dragging their feet on making things safer for the researchers on their team because it takes time and effort to investigate risks and make sensible plans for managing them, to develop occupational health plans, and to institute standard operating procedures that everyone on the research team knows and follows, I hope they’re noticing that facing felony charges stemming from safety problems in their labs can also take lots of time and effort.

UPDATE: The Los Angeles Times reports that Patrick Harran will stand trial after an LA County Superior Court judge denied a defense motion to dismiss the case.

Shame versus guilt in community responses to wrongdoing.

Yesterday, on the Hastings Center Bioethics Forum, Carl Elliott pondered the question of why a petition asking the governor of Minnesota to investigate ethically problematic research at the University of Minnesota has gathered hundreds of signatures from scholars in bioethics, clinical research, medical humanities, and related disciplines — but only a handful of signatures from scholars and researchers at the University of Minnesota.

At the center of the research scandal is the death of Dan Markingson, who was a human subject in a clinical trial of psychiatric drugs. Detailed background on the case can be found here, and Judy Stone has blogged extensively about the ethical dimensions of the case.

Elliott writes:

Very few signers come from the University of Minnesota. In fact, only two people from the Center for Bioethics have signed: Leigh Turner and me. This is not because any faculty member outside the Department of Psychiatry actually defends the ethics of the study, at least as far as I can tell. What seems to bother people here is speaking out about it. Very few faculty members are willing to register their objections publicly.

Why not? Well, there are the obvious possibilities – fear, apathy, self-interest, and so on. At least one person has told me she is unwilling to sign because she doesn’t think the petition will succeed. But there may be a more interesting explanation that I’d like to explore. …

Why would faculty members remain silent about such an alarming sequence of events? One possible reason is simply because they do not feel as if the wrongdoing has anything to do with them. The University of Minnesota is a vast institution; the scandal took place in a single department; if anyone is to be blamed, it is the psychiatrists and the university administrators, not them. Simply being a faculty member at the university does not implicate them in the wrongdoing or give them any special obligation to fix it. In a phrase: no guilt, hence no responsibility.

My view is somewhat different. These events have made me deeply ashamed to be a part of the University of Minnesota, in the same way that I feel ashamed to be a Southerner when I see video clips of Strom Thurmond’s race-baiting speeches or photos of Alabama police dogs snapping at black civil rights marchers. I think that what our psychiatrists did to Dan Markingson was wrong in the deepest sense. It was exploitative, cruel, and corrupt. Almost as disgraceful are the actions university officials have taken to cover it up and protect the reputation of the university. The shame I feel comes from the fact that I have worked at the University of Minnesota for 15 years. I have even been a member of the IRB. For better or worse, my identity is bound up with the institution.

These two different reactions – shame versus guilt – differ in important ways. Shame is linked with honor; it is about losing the respect of others, and by virtue of that, losing your self-respect. And honor often involves collective identity. While we don’t usually feel guilty about the actions of other people, we often do feel ashamed if those actions reflect on our own identities. So, for example, you can feel ashamed at the actions of your parents, your fellow Lutherans, or your physician colleagues – even if you feel as if it would be unfair for anyone to blame you personally for their actions.

Shame, unlike guilt, involves the imagined gaze of other people. As Ruth Benedict writes: “Shame is a reaction to other people’s criticism. A man is shamed either by being openly ridiculed or by fantasying to himself that he has been made ridiculous. In either case it is a potent sanction. But it requires an audience or at least a man’s fantasy of an audience. Guilt does not.”

As Elliott notes, one way to avoid an audience — and thus to avoid shame — is to actively participate in, or tacitly endorse, a cover-up of the wrongdoing. I’m inclined to think, however, that taking steps to avoid shame by hiding the facts, or by allowing retaliation against people asking inconvenient questions, is itself a kind of wrongdoing — the kind of thing that incurs guilt, for which no audience is required.

As well, I think the scholars and researchers at the University of Minnesota who prefer not to take a stand on how their university responds to ethically problematic research, even if it is research in someone else’s lab, or someone else’s department, underestimate the size of the audience for their actions and for their inaction.

A hugely significant segment of this audience is their trainees. Their students and postdocs (and others involved in training relationships with them) are watching them, trying to draw lessons about how to be a grown-up scientist or scholar, a responsible member of a discipline, a responsible member of a university community, a responsible citizen of the world. The people they are training are looking to them to see how to respond to problems — by addressing them, learning from them, making things right, and doing better going forward, or by lying, covering up, and punishing people who have been harmed by trying to recover costs from them (thus sending a message to others who dare to point out how they have been harmed).

There are many fewer explicit conversations about such issues than one might hope in a scientist’s training. In the absence of explicit conversations, most of what trainees have to go on is how the people training them actually behave. And sometimes, a mentor’s silence speaks as loud as words.

The ethics of naming and shaming.

Lately I’ve been pondering the practice of responding to bad behavior by calling public attention to it.

The most recent impetus for my thinking about it was this tech blogger’s response to behavior that felt unwelcoming at a conference (behavior that seems, in fact, to have run afoul of that conference’s official written policies)*, but there are plenty of other examples one might find of “naming and shaming”: the discussion (on blogs and in other media outlets) of University of Chicago neuroscientist Dario Maestripieri’s comments about female attendees of the Society for Neuroscience meeting, the Office of Research Integrity’s posting of findings of scientific misconduct investigations, the occasional instructor who promises to publicly shame students who cheat in his class, and actually follows through on the promise.

There are many forms “naming-and-shaming” might take, and many types of behavior one might identify as problematic enough that they ought to be pointed out and attended to. But there seems to be a general worry that naming-and-shaming is an unethical tactic. Here, I want to explore that worry.

Presumably, we respond to bad behavior because it’s bad — causing harm to individuals or a community (or both), undermining progress on a project or goal, and so forth. Responding to bad behavior can be useful if it stops bad behavior in progress and/or keeps similarly bad behavior from happening in the future. A response can also be useful in calling attention to the harm the behavior does (i.e., in making clear what’s bad about the behavior). And, depending on the response, it can affirm the commitment of individuals or communities to the view that the behavior in question actually is bad, and that they have a real stake in reducing it.

Rules, professional codes, conference harassment policies — these are some ways to specify at the outset what behaviors are not acceptable in the context of the meeting, game, work environment, or disciplinary pursuit. There are plenty of contexts, too, where there is no written-and-posted official enumeration of every type of unacceptable behavior. Sometimes communities make judgments on the fly about particular kinds of behavior. Sometimes, members of communities are not in agreement about these judgments, which might result in a thoughtful conversation within the community to try to come to some agreement, or the emergence of a rift that leads people to realize that the community was not as united as they once thought, or a ruling on the “actual” badness or acceptability of the behavior by those within the community who can marshal the power to make such a ruling.

Sharing a world with people who are not you is complicated, after all.

Still, I hope we can agree that there are some behaviors that count as bad behaviors. Assuming we had an unambiguous example of someone engaging in such a behavior, should we respond? How should we respond? Do we have a duty to respond?

I frequently hear people declare that one should respond to bad behavior, but that one should do so privately. The idea here seems to be that letting the bad actor know that the behavior in question was bad, and should be stopped, is enough to ensure that it will be stopped — and that the bad behavior must be a reflection of a gap in the bad actor’s understanding.

If knowing that a behavior is bad (or against the rules) were enough to ensure that those with the relevant knowledge never engage in the behavior, though, it becomes difficult to explain the highly educated researchers who get caught fabricating or falsifying data or images, the legions of undergraduates who commit plagiarism despite detailed instructions on proper citation methods, the politicians who lie. If knowledge that a certain kind of behavior is unacceptable is not sufficient to prevent that behavior, responding effectively to bad behavior must involve more than telling the perpetrator of that behavior, “What you’re doing is bad. Stop it.”

This is where penalties may be helpful in responding to bad behavior — get benched for the rest of the game, or fail the class, or get ejected from the conference, or become ineligible for funding for this many years. A penalty can convey that bad behavior is harmful enough to the endeavor or the community that its perpetrator needs a “time-out”.

Sometimes the application of penalties needs to be private (e.g., when a law like the Family Educational Rights and Privacy Act makes it illegal to apply the penalty publicly). But there are dangers in only dealing with bad behavior privately.

When fabrication, falsification, and plagiarism are “dealt with” privately, it can make it hard for a scientific community to identify papers in the scientific literature that they shouldn’t trust or researchers who might be prone to slipping back into fabricating, falsifying, or plagiarizing if they think no one is watching. (It is worth noting that large ethical lapses are frequently part of an escalating pattern that started with smaller ethical infractions.)

Worse, if bad behavior is dealt with privately, out of view of members of the community who witnessed the bad behavior in question, those members may lose faith in the community’s commitment to calling it out. Keeping penalties (if any) under wraps can convey the message that the bad behavior is actually tolerated, that official policies against it are empty words.

And sometimes, there are instances where the people within an organization or community with the power to impose penalties on bad actors seem disinclined to actually address bad behavior, using the cover of privacy as a way to opt out of penalizing the bad actors or of addressing the bad behavior in any serious way.

What’s a member of the community to do in such circumstances? Given that the bad behavior is bad because it has harmful effects on the community and its members, should those aware of the bad behavior call the community’s attention to it, in the hopes that the community can respond to it (or that the community’s scrutiny will encourage the bad actor to cease the bad behavior)?

Arguably, a community that is harmed by bad behavior has an interest in knowing when that behavior is happening, and who the bad actors are. As well, the community has an interest in stopping the bad behavior, in mitigating the harms it has already caused, and in discouraging further such behavior. Naming-and-shaming bad actors may be an effective way to secure these interests.

I don’t think this means naming-and-shaming is the only possible way to secure these interests, nor that it is always the best way to do so. Sometimes, however, it’s the tool that’s available that seems likely to do the most good.

There’s not a simple algorithm or litmus test that will tell you when shaming bad actors is the best course of action, but there are questions that are worth asking when assessing the options:

  • What are the potential consequences if this piece of bad behavior, which is observable to at least some members of the community, goes unchallenged?
  • What are the potential consequences if this piece of bad behavior, which is observable to at least some members of the community, gets challenged privately? (In particular, what are the potential consequences to the person engaging in the bad behavior? To the person challenging the behavior? To others who have had occasion to observe the behavior, or who might be affected by similar behavior in the future?)
  • What are the potential consequences if this piece of bad behavior, which is observable to at least some members of the community, gets challenged publicly? (In particular, what are the potential consequences to the person engaging in the bad behavior? To the person challenging the behavior? To others who have had occasion to observe the behavior, or who might be affected by similar behavior in the future?)

Challenging bad behavior is not without costs. Depending on your status within the community, challenging a bad actor may harm you more than the bad actor. However, not challenging bad behavior has costs, too. If the community and its members aren’t prepared to deal with bad behavior when it happens, the community has to bear those costs.
_____
* Let me be clear that this post is focused on the broader question of publicly calling out bad behavior rather than on the specific details of Adria Richards’ response to the people behind her at the tech conference, whether she ought to have found their jokes unwelcoming, whether she ought to have responded to them the way she did, or what have you. Since this post is not about whether Adria Richards did everything right (or everything wrong) in that particular instance, I’m going to be quite ruthless in pruning comments that are focused on her particular circumstances or decisions. Indeed, commenters who make any attempt to use the comments here to issue threats of violence against Richards (of the sort she is receiving via social media as I compose this post), or against anyone else, will have their information (including IP address) forwarded to law enforcement.

If you’re looking for my take on the details of the Adria Richards case, I’ll have a post up on my other blog within the next 24 hours.

Reasonably honest impressions of #overlyhonestmethods.

I suspect at least some of you who are regular Twitter users have been following the #overlyhonestmethods hashtag, with which scientists have been sharing details of their methodology that are maybe not explicitly spelled out in their published “Materials and Methods” sections. And, as with many other hashtag genres, the tweets in #overlyhonestmethods are frequently hilarious.

I was interviewed last week about #overlyhonestmethods for the Public Radio International program Living On Earth, and the length of my commentary was more or less Twitter-scaled. This means some of the nuance (at least in my head) about questions like whether I thought the tweets were an overshare that could make science look bad didn’t quite make it to the radio. Also, in response to the Living On Earth segment, one of the people with whom I regularly discuss the philosophy of science in the three-dimensional world shared some concerns about this hashtag in the hopes I’d say a bit more:

I am concerned about the brevity of the comments which may influence what one expresses.  Second there is an ego component; some may try to outdo others’ funny stories, and may stretch things in order to gain a competitive advantage.

So, I’m going to say a bit more.

Should we worry that #overlyhonestmethods tweets share information that will make scientific practice look bad to (certain segments of) the public?

I don’t think so. I suppose this may depend on what exactly the public expects of scientists.

The people doing science are human. They are likely to be working with all kinds of constraints — how close their equipment is to the limits of its capabilities (and to making scary noises), how frequently lab personnel can actually make it into the lab to tend to cell cultures, how precisely (or not) pumping rates can be controlled, how promptly (or not) the folks receiving packages can get perishable deliveries to the researchers. (Notice that at least some of these limitations are connected to limited budgets for research … which maybe means that if the public finds them unacceptable, they should lobby their Congresscritters for increased research funding.) There are also constraints that come from the limits of the human animal: with a finite attention span, without a built in chronometer or calibrated eyeballs, and with a need for sleep and possibly even recreation every so often (despite what some might have you think).

Maybe I’m wrong, but my guess is that it’s a good thing to have a public that is aware of these limitations imposed by the available equipment, reagents, and non-robot workforce.

Actually, I’m willing to bet that some of these limitations, and an awareness of them, are also really handy in scientific knowledge-building. They are departures from ideality that may help scientists nail down which variables in the system really matter in producing and controlling the phenomenon being studied. Reproducibility might be easy for a robot that can do every step of the experiment precisely every single time, but we really learn what’s going on when we drift from that. Does it matter if I use reagents from a different supplier? Can I leave the cultures to incubate a day longer? Can I successfully run the reaction in a lab that’s 10 °C warmer or 10 °C colder? Working out the tolerances helps turn an experimental protocol from a magic trick into a system where we have some robust understanding of what variables matter and of how they’re hooked to each other.
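Since the paragraph above is really a recipe for finding out which variables matter, here is a toy sketch of what “working out the tolerances” can look like in miniature. The Python below is entirely invented for illustration (the yield function is a made-up stand-in; in a real lab the “model” is the experiment itself run under the perturbed conditions), but the logic is the same: perturb one factor at a time around the baseline protocol and notice which perturbations actually move the result.

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_yield(temp_c=25, incubation_days=3, reagent_purity=0.98):
    """Hypothetical response: sensitive to temperature, tolerant of the rest."""
    signal = (100
              - 0.8 * (temp_c - 25) ** 2          # temperature matters a lot
              + 0.5 * (incubation_days - 3)       # an extra day barely matters
              + 20 * (reagent_purity - 0.98))     # supplier differences: small
    return signal + rng.normal(0, 1)              # ordinary run-to-run noise

baseline = dict(temp_c=25, incubation_days=3, reagent_purity=0.98)
perturbations = {
    "temp_c": [15, 35],              # lab 10 degrees colder or warmer
    "incubation_days": [2, 4],       # leave the cultures a day less or more
    "reagent_purity": [0.95, 0.99],  # reagents from a different supplier
}

for factor, values in perturbations.items():
    for value in values:
        result = toy_yield(**{**baseline, factor: value})
        print(f"{factor} = {value}: yield ~ {result:.1f}")
```

The factors whose perturbation barely changes the output are the ones the protocol is robust to; the ones that do move it are where the real understanding (and the honest “Materials and Methods” detail) lives.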

Does the 140 character limit mean #overlyhonestmethods tweets leave out important information, or that scientists will only use the hashtag to be candid about some of their methods while leaving others unexplored?

The need for brevity surely means that methods for which candor requires a great deal of context and/or explanation won’t be as well-represented as methods where one can be candid and pithy simultaneously. These tweeted glimpses into how the science gets done are more likely to be one-liners than shaggy-dog stories.

However, it’s hard to imagine that folks who really wanted to share wouldn’t use a series of tweets if they wanted to play along, or maybe even write a blog post about it and use the hashtag to tweet a link to that post.

What if #overlyhonestmethods becomes a game of one-upmanship and puffery, in which researchers sacrifice honesty for laughs?

Maybe there’s some of this happening, and if the point of the hashtag is for researchers to entertain each other, maybe that’s not a problem. However, if other members of one’s scientific community were actually looking to those tweets to fill in some of the important details of methodology that are elided in the terse “Materials and Methods” section of a published research paper, I hope the tweeters would, when queried, provide clear and candid information on how they actually conducted their experiments. Correcting or retracting a tweet should be less of an ego blow than correcting or retracting a published paper, I hope (and indeed, as hard as it might be to correct or retract published claims, good scientists do it when they need to).

The whole #overlyhonestmethods hashtag raises the perennial question of why it is so much is elided in published “Materials and Methods” sections. Blame is usually put on limitations of space in the journals, but it’s also reasonable to acknowledge that sometimes details-that-turn-out-to-be-important are left out because the researchers don’t fully recognize their importance. Other times, researchers may have empirical grounds for thinking these details are important, but they don’t yet have a satisfying story to tell about why they should be.

By the way, I think it would be an excellent thing if, for research that is already published, #overlyhonestmethods included the relevant DOI. These tweets would be supplementary information researchers could really use.

What if researchers use #overlyhonestmethods to disclose ethically problematic methods?

Given that Twitter is a social medium, I expect other scientists in the community watching the hashtag would challenge those methods or chime in to explain just what makes them ethically problematic. They might also suggest less ethically problematic ways to achieve the same research goals.

The researchers on Twitter could, in other words, use the social medium to exert social pressure in order to make sure other members of their scientific community understand and live up to the norms of that community.

That outcome would strike me as a very good one.

* * * * *

In addition to the ever expanding collection of tweets about methods, #overlyhonestmethods also has links to some thoughtful, smart, and funny commentary on the hashtag and the conversations around it. Check it out!

The danger of pointing out bad behavior: retribution (and the community’s role in preventing it).

There has been a lot of discussion of Dario Maestripieri’s disappointment at the unattractiveness of his female colleagues in the neuroscience community. Indeed, it’s notable how much of this discussion has been in public channels, not just private emails or conversations conducted with sound waves which then dissipate into the aether. No doubt, this is related to Maestripieri’s decision to share his hot-or-not assessment of the women in his profession in a semi-public space where it could achieve more permanence — and amplification — than it would have as an utterance at the hotel bar.

His behavior became something that any member of his scientific community with an internet connection (and a whole lot of people outside his scientific community) could inspect. The impacts of an actual, rather than hypothetical, piece of behavior, could be brought into the conversation about the climate of professional and learning communities, especially for the members of these communities who are women.

It’s worth pointing out that there is nothing especially surprising about such sexist behavior* within these communities. The people in the communities who have been paying attention have seen it before (and besides have good empirical grounds for expecting that gender biases may be a problem). But many sexist behaviors go unreported and unremarked, sometimes because of the very real fear of retribution.

What kind of retribution could there be for pointing out a piece of behavior that has sexist effects, or arguing that it is an inappropriate way for a member of the professional community to behave?

Let’s say you are an early career scientist, applying for a faculty post. As it happens, Dario Maestripieri‘s department, the University of Chicago Department of Comparative Human Development, currently has an open search for a tenure-track assistant professor. There is a non-zero chance that Dario Maestripieri is a faculty member on that search committee, or that he has the ear of a colleague who is.

It is not a tremendous stretch to hypothesize that Dario Maestripieri may not be thrilled at the public criticism he’s gotten in response to his Facebook post (including some quite close to home). Possibly he’s looking through the throngs of his Facebook friends and trying to guess which of them is the one who took the screenshot of his ill-advised post and shared it more widely. Or looking through his Facebook friends’ Facebook friends. Or considering which early career neuroscientists might be in-real-life friends or associates with his Facebook friends or their Facebook friends.

Now suppose you’re applying for that faculty position in his department and you happen to be one of his Facebook friends,** or one of their Facebook friends, or one of the in-real-life friends of either of those.

Of course, shooting down an applicant for a faculty position for the explicit reason that you think he or she may have cast unwanted attention on your behavior towards your professional community would be a problem. But there are probably enough applicants for the position, enough variation in the details of their CVs, and enough subjective judgment on the part of the members of the search committee in evaluating all those materials that it would be possible to cut all applicants who are Dario Maestripieri’s Facebook friends (or their Facebook friends, or in-real-life friends of either of those) from consideration while providing some other plausible reason for their elimination. Indeed, the circle could be broadened to eliminate candidates with letters of recommendation from Dario Maestripieri’s Facebook friends (or their Facebook friends, or in-real-life friends of either of those), candidates who have coauthored papers with Dario Maestripieri’s Facebook friends (or their Facebook friends, or in-real-life friends of either of those), and so on.

And, since candidates who don’t get the job generally aren’t told why they were found wanting — only that some other candidate was judged to be better — these other plausible reasons for shooting down a candidate would only even matter in the discussions of the search committee.

In other words, real retaliation (rejection from consideration for a faculty job) could fall on people who are merely suspected of sharing information that led to Dario Maestripieri becoming the focus of a public discussion of sexist behavior — not just on the people who have publicly spoken about his behavior. And, the retaliation would be practically impossible to prove.

If you don’t think this kind of possibility has a chilling effect on the willingness of members of a professional community to speak up when they see a relatively powerful colleague behave in ways they think are harmful, you just don’t understand power dynamics.

And even if Dario Maestripieri has no part at all in his department’s ongoing faculty search, there are other interactions within his professional community in which his suspicions about who might have exposed his behavior could come into play. Senior scientists are routinely asked to referee papers submitted to scientific journals and to serve on panels and study sections that rank applications for grants. In some of these circumstances, the identities of the scientists one is judging (e.g., for grants) are known to the scientists making the evaluations. In others, they are masked, but the scientists making the evaluations have hunches about whose work they are evaluating. If those hunches are mingled with hunches about who could have shared evidence of behavior that is now making the evaluator’s life difficult, it’s hard to imagine the grant applicant or the manuscript author getting a completely fair shake.

Let’s pause here to note that the attitude Dario Maestripieri’s Facebook posting reveals, that it’s appropriate to evaluate women in the field on their physical beauty rather than their scientific achievements, could itself be a source of bias as he does things that are part of a normal professional life, like serving on search committees, reviewing journal submissions and grant applications, evaluating students, and so forth. A bias like this could manifest itself in a preference for hiring job candidates one finds aesthetically pleasing. (Sure, academic job application packets usually don’t include a headshot, but even senior scientists have probably heard of Google Image search.) Or it could manifest itself in a preference against hiring more women (since too high a concentration of female colleagues might be perceived as increasing the likelihood that one would be taken to task for freely expressing one’s aesthetic preferences about women in the field). Again, it would be extraordinarily hard to prove the operation of such a bias in any particular case — but that doesn’t rule out the possibility that it is having an effect in activities where members of the professional community are supposed to be as objective as possible.

Objectivity, as we’ve noted before, is hard.

We should remember, though, that faculty searches are conducted by committees, rather than by a single individual with the power to make all the decisions. And, the University of Chicago Department of Comparative Human Development (as well as the University of Chicago more generally) may recognize that it is likely to come under more public scrutiny itself as a result of the scrutiny Dario Maestripieri has been getting.

Among other things, this means that the department and the university have a real interest in conducting a squeaky-clean search that avoids even the appearance of retaliation. In any search, members of the search committee have a responsibility to identify, disclose, and manage their own biases. In this search, discharging that responsibility is even more vital. In any search, members of the hiring department have a responsibility to discuss their shared needs and interests, and how these should inform the selection of the new faculty member. In this search, that discussion of needs and interests must include a discussion of the climate within the department and the larger scientific community — what it is now, and what members of the department think it should be.

In any search, members of the hiring department have an interest in sharing their opinions on who the best candidate might be, and in having a dialogue around the disagreements. In this search, if it turns out one of the disagreements about a candidate comes down to “I suspect he may have been involved in exposing my Facebook post and making me feel bad,” well, arguably there’s a responsibility to have a discussion about that.

Ask academics what it’s like to hire a colleague and it’s not uncommon to hear them describe the experience as akin to entering a marriage. You’re looking for someone with whom you might spend the next 30 years, someone who will grow with you, who will become an integral part of your department and its culture, even to the point of helping that departmental culture grow and change. This is a good reason not to choose the new hire based on the most superficial assessment of what each candidate might bring to the relationship — and to recognize that helping one faculty member avoid discomfort might not be the most important thing.

Indeed, Dario Maestripieri’s colleagues may have all kinds of reasons to engage him in uncomfortable discussions about his behavior that have nothing to do with conducting a squeaky-clean faculty search. Their reputations are intertwined, and leaving things alone rather than challenging Dario Maestripieri’s behavior may impact their own ability to attract graduate students or maintain the respect of undergraduates. These are things that matter to academic scientists — which means that Dario Maestripieri’s colleagues have an interest in pushing back for their own good and the good of the community.

The pushback, if it happens, is likely to be just as invisible publicly as any retaliation against job candidates for possibly sharing the screenshot of Dario Maestripieri’s Facebook posting. If positive effects are visible, it might make it seem less dangerous for members of the professional community to speak up about bad behavior when they see it. But if the outward appearance is that nothing has changed for Dario Maestripieri and his department, expect that there will be plenty of bad behavior that is not discussed in public because the career costs of doing so are just too high.

______
* This is not at all an issue about whether Dario Maestripieri is a sexist. This is an issue about the effects of the behavior, which have a disproportionate negative impact on women in the community. I do not know, or care, what is in the heart of the person who displays these behaviors, and it is not at all relevant to a discussion of how the behaviors affect the community.

** Given the number of his Facebook friends and their range of ages, career stages, etc., this doesn’t strike me as improbable. (At last check, I have 11 Facebook friends in common with Dario Maestripieri.)

Reading the writing on the (Facebook) wall: a community responds to Dario Maestripieri.

Imagine an academic scientist goes to a big professional meeting in his field. For whatever reason, he then decides to share the following “impression” of that meeting with his Facebook friends:

My impression of the Conference of the Society for Neuroscience in New Orleans. There are thousands of people at the conference and an unusually high concentration of unattractive women. The super model types are completely absent. What is going on? Are unattractive women particularly attracted to neuroscience? Are beautiful women particularly uninterested in the brain? No offense to anyone..

Maybe this is a lapse in judgment, but it’s no big thing, right?

I would venture, from the selection of links collected below discussing Dario Maestripieri and his recent social media foible, that this is very much A Thing. Read on to get a sense of how the discussion is unfolding within the scientific community and the higher education community:

Drugmonkey, SfN 2012: Professors behaving badly:

There is a very simple response here. Don’t do this. It’s sexist, juvenile, offensive and stupid. For a senior scientist it is yet another contribution to the othering of women in science. In his lab, in his subfield, in his University and in his academic societies. We should not tolerate this crap.

Professor Maestripieri needs to apologize for this in a very public way and take responsibility for his actions. You know, not with a nonpology of “I’m sorry you were offended” but with an “I shouldn’t have done that” type of response.

Me, at Adventures in Ethics and Science, The point of calling out bad behavior:

It’s almost like people have something invested in denying the existence of gender bias among scientists, the phenomenon of a chilly climate in scientific professions, or even the possibility that Dario Maestripieri’s Facebook post was maybe not the first observable piece of sexism a working scientist put out there for the world to see.

The thing is, that denial is also the denial of the actual lived experience of a hell of a lot of women in science

Isis the Scientist, at On Becoming a Domestic and Laboratory Goddess, What We Learn When Professorly d00ds Take to Facebook:

Dr. Maestripieri’s comments will certainly come as no great shock to the women who read them.  That’s because those of us who have been around the conference scene for a while know that this is pretty par for the course.  There’s not just sekrit, hidden sexism in academia.  A lot of it is pretty overt.  And many of us know about the pockets of perv-fest that can occur at scientific meetings.  We know which events to generally avoid.  Many of us know who to not have cocktails with or be alone with, who the ass grabbers are, and we share our lists with other female colleagues.  We know to look out for the more junior women scientists who travel with us.  I am in no way shocked that Dr. Maestripieri would be so brazen as to post his thoughts on Facebook because I know that there are some who wouldn’t hesitate to say the same sorts of things aloud. …

The real question is whether the ability to evaluate Dr. Maestripieri’s asshattery in all of its screenshot-captured glory will actually change hearts and minds.

Erin Gloria Ryan at Jezebel, University of Chicago Professor Very Disappointed that Female Neuroscientists Aren’t Sexier:

Professor Maestripieri is a multiple-award winning academic working at the University of Chicago, which basically means he is Nerd Royalty. And, judging by his impressive resume, which includes a Ph.D in Psychobiology, the 2000 American Psychological Association Distinguished Scientific Award for Early Career Contribution to Psychology, and several committees at the U of C, he’s well aware of how hard someone in his position has had to work in order to rise to the top of an extremely competitive and demanding field. So it’s confusing to me that he would fail to grasp the fact that women in his field had to perform similar work and exhibit similar levels of dedication that he did.

Women: also people! Just like men, but with different genitals!

Cory Doctorow at BoingBoing, Why casual sexism in science matters:

I’ve got a daughter who, at four and a half, wants to be a scientist. Every time she says this, it makes me swell up with so much pride, I almost bust. If she grows up to be a scientist, I want her to be judged on the reproducibility of her results, the elegance of her experimental design, and the insight in her hypotheses, not on her ability to live up to someone’s douchey standard of “super model” looks.

(Also, do check out the conversation in the comments; it’s very smart and very funny.)

Scott Jaschik at Inside Higher Education, (Mis)Judging Female Scientists:

Pity the attendees at last week’s annual meeting of the Society for Neuroscience who thought they needed to focus on their papers and the research breakthroughs being discussed. It turns out they were also being judged — at least by one prominent scientist — on their looks. At least the female attendees were. …

Maestripieri did not respond to e-mail messages or phone calls over the past two days. A spokesman for the University of Chicago said that he had decided not to comment.

Pat Campbell at Fairer Science, No offense to anyone:

I’m glad the story hit Inside Higher Ed; I find it really telling that only women are quoted … Inside Higher Ed makes this a woman’s problem not a science problem and that is a much more important issue than Dario Maestripieri’s stupid comments.

Beryl Benderly at the Science Careers Blog, A Facebook Furor:

There’s another unpleasant implication embedded in Maestripieri’s post. He apparently assumed that some of his Facebook readers would find his observations interesting or amusing. This indicates that, in at least some circles, women scientists are still not evaluated on their work but rather on qualities irrelevant to their science. …

[T]he point of the story is not one faculty member’s egregious slip.  It is the apparently more widespread attitudes that this slip reveals

Dana Smith at Brain Study, More sexism in science:

However, others still think his behavior was acceptable, writing it off as a joke and telling people to not take it so seriously. This is particularly problematic given the underlying gender bias we know to still exist in science. If we accept overt and covert discrimination against women in science we all lose out, not just women who are dissuaded from the field because of it, but also everyone who might have benefited from their future work.

Minerva Cheevy at Research Centered (Chronicle of Higher Education Blog Network), Where’s the use of looking nice?:

There’s just no winning for women in academia – if you’re unattractive, then you’re a bad female. But if you’re attractive, you’re a bad academic.

The Maroon Editorial Board at The Chicago Maroon, Changing the conversation:

[T]his incident offers the University community an opportunity to reexamine our culture of “self-deprecation”—especially in relation to the physical attractiveness of students—and how that culture can condone assumptions which are just as baseless and offensive. …

Associating the depth of intellectual interests with a perceived lack of physical beauty fosters a culture of permissiveness towards derogatory comments. Negative remarks about peers’ appearances make blanket statements about their social lives and demeanors more acceptable. Though recently the popular sentiment among students is that the U of C gets more attractive the further away it gets from its last Uncommon App class, such comments stem from the same type of confused associations—that “normal” is “attractive” and that “weird” is not. It’s about time that we distance ourselves from these kinds of normative assumptions. While not as outrageous as Maestripieri’s comments, the belief that intelligence should be related to any other trait—be it attractiveness, normalcy, or social skills—is just as unproductive and illogical.

It’s quite possible that I’ve missed other good discussions of this situation and its broader implications. If so, please feel free to share links to them in the comments.

Community responsibility for a safety culture in academic chemistry.

This is another approximate transcript of a part of the conversation I had with Chemjobber that became a podcast. This segment (from about 29:55 to 52:00) includes our discussion of what a just punishment might look like for PI Patrick Harran for his part in the Sheri Sangji case. From there, our discussion shifted to the question of how to make the culture of academic chemistry safer:

Chemjobber: One of the things that I guess I’ll ask is whether you think we’ll get justice out of this legal process in the Sheri Sangji case.

Janet: I think about this, I grapple with this, and about half the time when I do, I end up thinking that punishment — and figuring out the appropriate punishment for Patrick Harran — doesn’t even make my top-five list of things that should come out of all this. I kind of feel like a decent person should feel really, really bad about what happened, and should devote his life forward from here to making the conditions that enabled the accident that killed Sheri Sangji go away. But, you know, maybe he’s not a decent person. Who the heck can tell? And certainly, once you put things in the context where you have a legal team defending you against criminal charges — that tends to obscure the question of whether you’re a decent person or not, because suddenly you’ve got lawyers acting on your behalf in all sorts of ways that don’t look decent at all.

Chemjobber: Right.

Janet: I think the bigger question in my mind is how does the community respond? How does the chemistry department at UCLA, how does the larger community of academic chemistry, how do Patrick Harran’s colleagues at UCLA and elsewhere respond to all of this? I know that there are some people who say, “Look, he really fell down on the job safety-wise, and in terms of creating an environment for people working on his behalf, and someone died, and he should do jail time.” I don’t actually know if putting him in jail changes the conditions on the outside, and I’ve said that I think, in some ways, tucking him away in jail for however many months makes it easier for the people who are still running academic labs while he’s incarcerated to say, “OK, the problem is taken care of. The bad actor is out of the pool. Not a problem,” rather than looking at what it is about the culture of academic chemistry that has us devoting so little of our time and energy to making sure we’re doing this safely. So, if it were up to me, if I were the Queen of Just Punishment in the world of academic chemistry, I’ve said his job from here on out should be to be Safety in the Research Culture Guy. That’s what he gets to work on. He doesn’t get to go forward and conduct new research on some chemical question like none of this ever happened. Because something happened. Something bad happened, and the reason something bad happened, I think, is because of a culture in academic chemistry where it was acceptable for a PI not to pay attention to safety considerations until something bad happened. And that’s got to change.

Chemjobber: I think it will change. I should point out here that if your proposed punishment were enacted, it would be quite a punishment, because he wouldn’t get to choose what he worked on anymore, and that, to a great extent, is the joy of academic research, that it’s self-directed and that there is lots and lots of freedom. I don’t get to choose the research problems I work on, because I do it for money. My choices are more or less made by somebody else.

Janet: But they pay you.

Chemjobber: But they pay me.

Janet: I think I’d even be OK saying maybe Harran gets to do 50% of his research on self-directed research topics. But the other 50% is he has to go be an evangelist for changing how we approach the question of safety in academic research.

Chemjobber: Right.

Janet: He’s still part of the community, he’s still “one of us,” but he has to show us how we are treading dangerously close to the conditions that led to the really bad thing that happened in his lab, so we can change that.

Chemjobber: Hmm.

Janet: And not just make it an individual thing. I think all of the attempts to boil what happened down to all being the individual responsibility of the technician, or of the PI, or it’s a split between the individual responsibility of one and the individual responsibility of the other, totally misses the institutional responsibility, and the responsibility of the professional community, and how systemic factors that the community is responsible for failed here.

Chemjobber: Hmm.

Janet: And I think sometimes we need individuals to step up and say, part of me acknowledging my personal responsibility here is to point to the ways that the decisions I made within the landscape we’ve got — of what we take seriously, of what’s rewarded and what’s punished — led to this really bad outcome. I think that’s part of the power here: when academic chemists say, “I would be horrified if you jailed this guy because this could have happened in any of our labs,” I think they’re right. I think they’re right, and I think we have to ask how it is that conditions in these academic communities got to the point where we’re lucky that more people haven’t been seriously injured or killed by some of the bad things that could happen — that we don’t even know that we’re walking into because safety gets such short shrift.

Chemjobber: Wow, that’s heavy. I’m not sure whether there are industrial chemists whose primary job is to think about safety. Is part of the issue we have here that safety has been professionalized? We have industrial chemical hygienists and safety engineers. Every university has an EH&S [environmental health and safety] department. Does that make safety somebody else’s problem? And maybe if Patrick Harran were to become a safety evangelist, it would be a way of saying it’s our problem, and we all have to learn, we have to figure out a way to deal with this?

Janet: Yeah. I actually know that there exist safety officers in academic science departments, partly because I serve on some university committees with people who fill that role — so I know they exist. I don’t know how much the people doing research in those departments actually talk with those safety officers before something goes wrong, or how much of it goes beyond “Oh, there’s paperwork we need to make sure is filed in the right place in case there’s an inspection,” or something like that. But it strikes me that safety should be more collaborative. In some ways, wouldn’t that be a more gripping weekly seminar to have in a chemistry department for grad students working in the lab, even just once a month on the weekly seminar, to have a safety roundtable? “Here are the risks that we found out about in this kind of work,” or talking about unforeseen things that might happen, or how do you get started finding out about proper precautions as you’re beginning a new line of research? What’s your strategy for figuring that out? Who do you talk to? I honestly feel like this is a part of chemical education at the graduate level that is extremely underdeveloped. I know there’s been some talk about changing the undergraduate chemistry degree so that it includes something like a certificate program in chemical safety, and maybe that will fix it all. But I think the only thing that fixes it all is really making it part of the day to day lived culture of how we build new knowledge in chemistry, that the safety around how that knowledge gets built is an ongoing part of the conversation.

Chemjobber: Hmm.

Janet: It’s not something we talk about once and then never again. Because that’s not how research works. We don’t say, “Here’s our protocol. We never have to revisit it. We’ll just keep running it until we have enough data, and then we’re done.”

Chemjobber: Right.

Janet: Show me an experiment that’s like that. I’ve never touched an experiment like that in my life.

Chemjobber: So, how many times do you remember your Ph.D. advisor talking to you about safety?

Janet: Zero. He was a really good advisor, he was a very good mentor, but essentially, how it worked in our lab was that the grad students who were further on would talk to the grad students who were newer about “Here’s what you need to be careful about with this reaction,” or “If you’ve got overflow of your chemical waste, here’s who to call to do the clean-up,” or “Here’s the paperwork you fill out to have the chemical waste hauled away properly.” So, the culture was that the people who were in the lab day to day were the keepers of the safety information, and luckily I joined a lab where those grad students were very forthcoming. They wanted to share that information. You didn’t have to ask because they offered it first. I don’t think it happens that way in every lab, though.

Chemjobber: I think you’re right. The thorniness of the problem of turning chemical safety into a day to day thing, within the lab — within a specific group — is you’re relying on this group of people that are transient, and they’re human, so some people really care about it and some people tend not to care about it. I had an advisor who didn’t talk about safety all the time but did, on a number of occasions, yank us all short and say, “Hey, look, what you’re doing is dangerous!” I clearly remember specific admonishments: “Hey, that’s dangerous! Don’t do that!”

Janet: I suspect that may be more common in organic chemistry than in physical chemistry, which is my area. You guys work with stuff that seems to have a lot more potential to do interesting things in interesting ways. The other thing, too, is that in my research group we were united by a common set of theoretical approaches, but we all worked in different kinds of experimental systems which had different kinds of hazards. The folks doing combustion reactions had different things to worry about than me, working with my aqueous reaction in a flow-through reactor, while someone in the next room was working with enzymatic reactions. We were all over the map. Nothing that any of us worked with seemed to have real deadly potential, at least as we were running it, but who knows?

Chemjobber: Right.

Janet: And given that different labs have very different dynamics, that could make it hard to actually implement a desire to have safety become a part of the day to day discussions people are having as they’re building the knowledge. But this might really be a good place for departments and graduate training programs to step up. To say, “OK, you’ve got your PI who’s running his or her own fiefdom in the lab, but we’re the other professional parental unit looking out for your well-being, so we’re going to have these ongoing discussions with graduate cohorts made up of students who are working in different labs about safety and how to think about safety where the rubber hits the road.” Actually bringing those discussions out of the research group, the research group meeting, might provide a space where people can become reflective about how things go in their own labs and can see something about how things are being done differently in other labs, and start piecing together strategies, start thinking about what they want the practices to be like when they’re the grown-up chemists running their own labs. How do they want to make safety something that’s part of the job, not an add-on that’s slapped on or something that’s forgotten altogether?

Chemjobber: Right.

Janet: But of course, graduate training programs would have to care enough about that to figure out how to put the resources on it, to make it happen.

Chemjobber: I’m in profound sympathy with the people who would have to figure out how to do that. I don’t really know anything about the structure of a graduate training program other than, you know, “Do good work, and try to graduate sooner rather than later.” But I assume that in the last 20 to 30 years, there have been new mandates like “OK, you all need to have some kind of ethics component”

Janet: — because ethics coursework will keep people from cheating! Except that’s an oversimplified equation. But ethics is a requirement they’re heaping on, and safety could certainly be another. The question is how to do that sensibly rather than making it clear that we’re doing this only because there’s a mandate from someone else that we do it.

Chemjobber: One of the things that I’ve always thought about in terms of how to better inculcate safety in academic labs is maybe to have training that happens every year, that takes a week. New first-years come in and you get run through some sort of a lab safety thing where you go and you set up the experiment and weird things are going to happen. It’s kind of an artificial environment where you have to go in and run a dangerous reaction as a drill that reminds you that there are real-world consequences. I think Chembark talked about how, in Caltech Safety Day, they brought out one of the lasers and put a hole through an apple. Since Paul is an organic chemist, I don’t think he does that very often, but his response was “Oh, if I enter one of these laser labs, I should probably have my safety glasses on.” There’s a limit to the effectiveness of that sort of stuff. You have to really, really think about how to design it, and a week out of a year is a long time, and who’s going to run it? I think your idea of the older students in the lab being the ones who really do a lot of the day to day safety stuff is important. What happens when there are no older students in the lab?

Janet: That’s right, when you’re the first cohort in the PI’s lab.

Chemjobber: Or, when there hasn’t been much funding for students and suddenly now you have funding for students.

Janet: And there’s also the question of going from a sparsely populated lab to a really crowded lab when you have the funding but you don’t suddenly have more lab space. And crowded labs have different kinds of safety concerns than sparsely populated labs.

Chemjobber: That’s very true.

Janet: I also wonder whether the “grown-up” chemists, the postdocs and the PIs, ought to be involved in some sort of regular safety … I guess casting it as “training” is likely to get people’s hackles up, and they’re likely to say, “I have even less time for this than my students do.”

Chemjobber: Right.

Janet: But at the same time, pretending that they learned everything they need to know about safety in grad school? Really? Really you did? When we’re talking now about how maybe the safety training for graduate students is inadequate, you magically got the training that tells you everything you need to know from here on out about safety? That seems weird. And also, presumably, the risks of certain kinds of procedures and certain kinds of reagents — that’s something about which our knowledge continues to increase as well. So, finding ways to keep up on that, to come up with safer techniques and better responses when things do go wrong — some kind of continuing education, continuing involvement with that. If there was a way to do it to include the PIs and the people they’re employing or training, to engage them together, maybe that would be effective.

Chemjobber: Hmm.

Janet: It would at least make it seem less like, “This is education we have to give our students, this is one more requirement to throw on the pile, but we wouldn’t do it if we had the choice, because it gets in the way of making knowledge.” Making knowledge is good. I think making knowledge is important, but we’re human beings making knowledge and we’d like to live long enough to appreciate that knowledge. Graduate students shouldn’t be consumable resources in the knowledge-building the same way that chemical reagents are.

Chemjobber: Yeah.

Janet: Because I bet you the disposal paperwork on graduate students is a fair bit more rigorous than for chemical waste.