Addressing (unintended) disrespect in your professional community.

I am a believer in the power of the professional conference. Getting people in the same room to share ideas, experiences, and challenges is one of the best ways to build a sense of community, to break down geographical and generational barriers, to energize people and remind them what they love about what they’re doing.

Sometimes, though, interactions flowing from a professional conference have a way of reinforcing barriers. Sometimes a member of the community makes an attempt to express appreciation of colleagues that actually has the effect of treating those colleagues like they’re not really part of the community after all.

Last week, the 8th World Conference of Science Journalists met in Helsinki, Finland. Upon his return from the conference, journalist Nicolás Luco posted a column reflecting on his experience there. (Here’s an English translation of the column by Wladimir Labeikovsky.) Luco’s piece suggests some of the excitement of finding connections with science journalists from other countries, as well as finding common ground with journalists entering the profession in a very different decade with a panoply of different technological tools:

If I hadn’t come, I wouldn’t have had that experience. I have submerged into an atmosphere where what I had seen as the future is already taken for granted. And yet, the fundamentals [e.g., that the story is what matters] remain.

It is, without a doubt, a description of a very positive personal experience.

However, Luco’s column is also a description of his experience of female colleagues at this conference framed primarily in terms of their physical attributes: shining blonde hair, limpid blue eyes, translucent complexions, apparent youth. His description of the panel of journalists using the tools of new media to practice the fundamentals of good journalism identifies them as

four Americans: Rose, Lena, Kathleen and Erin (blonde), none older than 25

All of the other conference-goers who are identified by name are identified with surnames as well as given names. Of the two women identified by their full names in the column, we do learn that they are not blonde. It is left to the reader to imagine the hair color of Philip J. Hilts, the only male attendee mentioned by name.

I understand that Nicolás Luco was aiming to give a vivid visual description to draw his readers into his experience of being in Helsinki for this conference, and that this description was meant to convey a positive, optimistic mood about the future of science journalism.

But I also understand that these stylistic choices carry baggage that makes it harder for Rose Eveleth, and Lena Groeger, and Kathleen Raven, and Erin Podolak, the journalists on the panel, to be taken seriously within this international community of science journalists.

Their surnames matter. In a field where they want their work to be recognized, disconnecting their bylines from the valuable insights they shared as part of a conference panel is not helpful.

Moreover, I am told that the journalistic convention is to identify adults by full name, and to identify people by first name alone only when those people are children.

Eveleth, Groeger, Raven, and Podolak are not children. They may seem relatively young to a journalist who came into the profession in the age of linotype (indeed, he underestimated their ages, which actually range from 25 to 30), but they are professionals. Their ages should not be a barrier to treating them as if they are full members of the professional community of science journalists, but focusing unduly on their ages could well present such a barrier.

And, needless to say, their hair color should have no relevance at all in assessing whether they are skilled journalists with valuable insights to share.

As it happens, only days before the 8th World Conference of Science Journalists, Podolak wrote a blog post describing why she needs feminism. In that post, she wrote:

I’m a feminist for myself because yes, I want a fair shake, I want to be recognized for the value of my work and not whether or not my hair looks shiny that day. But, adding my voice to the other feminist voices out there is about more than just me. I’ve got it pretty good. I’m not trying to argue that I don’t. But I can support the women out there who are dealing with overt sexism, who are being attacked. I can try to be an ally. That to me is the real value of feminism, of standing together.

It is profoundly disheartening to take yourself to be accepted by your professional community, valued for the skills and ideas you bring to the table, only to discover that this is not how your presumptive colleagues actually see you. You would think that other journalists should be the ones most likely to appreciate the value of using new technologies to tell compelling stories. What a disappointment to find that their focus gets stuck on the surface. Who can tell whether the work has value if the hair of the journalist is shiny?

You will likely not be surprised that Eveleth, Groeger, Raven, and Podolak were frustrated by Nicolás Luco’s description of their panel, despite understanding that Luco was trying to be flattering. In an email the four sent to Luco in response to the column, they wrote:

Leading your story with a note about your attraction to blondes and then noting Erin’s hair color, is both inappropriate and, frankly, sexist. We were not there for anyone to ogle, and our physical appearance is completely irrelevant to the point of our panel. It is important for you to understand why we are upset about your tone in this piece. Women are constantly appraised for their looks, rather than their thoughts and skills, and in writing your story the way you did you are contributing to that sexism.

And, in a postscript to that email, Kathleen Raven noted:

I was under the impression that you wrote your article using hair color as a narrative tool to tie together your meetings with journalists. I appreciate this creativity, but I am worried that American women can perceive — as we have — the article as not fully respecting us as journalists in our own right.

What Eveleth, Groeger, Raven, and Podolak are up against is a larger society that values women more for their aesthetic appeal than their professional skills. That their own professional community repeats this pattern — presenting them as first young and pretty and only secondarily as good journalists — is a source of frustration. As Eveleth wrote to me:

Last I checked, being pretty has nothing to do with your skills at any kind of journalism. Having long blonde hair is not going to get Erin the story. Erin is going to get the story because she’s good at her job, because she’s got experience and passion, because she’s talented and tough and hard working. The same goes for Kathleen and Lena. 

The idea that it is not just okay, but actually complimentary to focus on a young woman’s (or really any aged woman’s) looks as leading part of her professional identity is wrong. The idea that it’s flattering to call out Erin’s hair and age before her skills is wrong. The idea that a woman’s professional skill set is made better if she is blonde and pretty is wrong. And the idea that someone who writes something like this should just be able to pass it off as “tongue in cheek” or “a cultural difference” is also wrong.

I should pause here to take note of another dimension of professional communities in this story. There is strong pressure to get along with one’s professional colleagues, to go along rather than raise a fuss. Arguably this pressure is stronger on newer members of a professional community, and on members of that community with characteristics (e.g., of gender, race, disability, etc.) that are not well represented among the more established members of that professional community.

Practically, this pressure manifests itself as an inclination to let things go, to refrain from pointing out the little instances which devalue one’s professional identity or status as a valued member of the community. Most of the time it seems easier to sigh and say to oneself, “Well, he meant well,” or, “What can you expect from someone of that generation/cultural background?” than to point out the ways that the comments hurt. It feels like a tradeoff where you should swallow some individual hurt for the good of the community.

But accepting this tradeoff is accepting that your full membership in the community (and that of others like you) is less important. To the extent that you believe that you make a real contribution to the community, swallowing your individual hurt is dancing on the edge of accepting what is arguably a harm to the professional community as a whole by letting the hurtful behaviors pass unexamined.

Eveleth, Groeger, Raven, and Podolak had more respect than that for their professional community, and for Nicolás Luco as a professional colleague. They did not just sigh and roll their eyes. Rather, they emailed Luco to explain what the problem was.

In his reply to them (which I quote with his permission), Luco makes it clear that he did not intend to do harm to anyone, especially not to Eveleth, Groeger, Raven, and Podolak, with his column. Still, he also makes it clear that he may not fully grasp just what the problem is:

I write as a known voice who can write tongue in cheek and get away with it because I am willing to laugh at myself. 

I strive to make what I write entertaining. And maybe sneak in the more serious arguments.

Sorry about my misjudgment on your ages.  But the point is: you are generations apart.

I did not include your last names because they would interrupt the flow of reading and clog the line with surnames, an obstacle.

Finally, it is so much in U.S. culture to discard the looks vis a vis the brains when the looks, as President Clinton knows so well, can be a good hook into the brains.  And since this is a personal column, in the first person singular, I can tell how I personally react at good looks.  For example, Ms. Anne Glover, was extraordinarily beautiful and charming besides being bright and political, which helps, in front of the probable mean thoughts of envious uglier looking colleagues.

Thank you, I still prize the panel as the best and most important in the Conference.

Is there a way Nicolás Luco could have described his personal experience of the conference, and of this panel within the conference that he found particularly valuable, in a way that was entertaining, even tongue-in-cheek, while avoiding the pitfalls of describing his female colleagues in terms that undercut their status in the professional community? I think so.

He might, for example, have talked about his own expectations that journalists who are generations apart would agree upon what makes good journalism good journalism. The way that these expectations were thwarted would surely be a good opportunity to laugh at oneself.

He might even have written about his own surprise that a young woman he finds attractive contributed a valuable insight — using this as an opportunity to examine this expectation and whether it’s one he ought to be carrying around with him in his professional interactions. There’s even a line in his column that seems like it might provide a hook for this bit of self-examination:

Erin, the youngest and a cancer specialist, insists that decorations don’t matter: good journalism is good journalism, period. Makes me happy.

(Bold emphasis added.)

Extending the lesson about the content of the story mattering more than the packaging to a further lesson about the professional capabilities of the storyteller mattering more than one’s reaction to her superficial appearance — that could drive home some of the value of a conference like this.

Nicolás Luco wrote the column he wrote. Eveleth, Groeger, Raven, and Podolak took him seriously as a professional colleague who is presumptively concerned to strengthen their shared community. They asked him to consider the effect of his description on members of the professional community who stand where they do, to take responsibility as a writer for even the effects of his words that he had not intended or foreseen.

Engaging with colleagues when they hurt us without meaning to is not easy work, but it’s absolutely essential to the health of a professional community. I am hopeful that this engagement will continue productively.

Strategies to address questionable statistical practices.

If you have not yet read all you want to read about the wrongdoing of social psychologist Diederik Stapel, you may be interested in reading the 2012 Tilburg Report (PDF) on the matter. The full title of the English translation is “Flawed science: the fraudulent research practices of social psychologist Diederik Stapel” (in Dutch, “Falende wetenschap: De frauduleuze onderzoekspraktijken van sociaal-psycholoog Diederik Stapel”), and it’s 104 pages long, which might make it beach reading for the right kind of person.

If you’re not quite up to the whole report, Error Statistics Philosophy has a nice discussion of some of the highlights. In that post, D. G. Mayo writes:

The authors of the Report say they never anticipated giving a laundry list of “undesirable conduct” by which researchers can flout pretty obvious requirements for the responsible practice of science. It was an accidental byproduct of the investigation of one case (Diederik Stapel, social psychology) that they walked into a culture of “verification bias”. Maybe that’s why I find it so telling. It’s as if they could scarcely believe their ears when people they interviewed “defended the serious and less serious violations of proper scientific method with the words: that is what I have learned in practice; everyone in my research environment does the same, and so does everyone we talk to at international conferences” (Report 48). …

I would place techniques for ‘verification bias’ under the general umbrella of techniques for squelching stringent criticism and repressing severe tests. These gambits make it so easy to find apparent support for one’s pet theory or hypotheses, as to count as no evidence at all (see some from their list). Any field that regularly proceeds this way I would call a pseudoscience, or non-science, following Popper. “Observations or experiments can be accepted as supporting a theory (or a hypothesis, or a scientific assertion) only if these observations or experiments are severe tests of the theory.”

You’d imagine this would raise the stakes pretty significantly for the researcher who could be teetering on the edge of verification bias: fall off that cliff and what you’re doing is no longer worthy of the name scientific knowledge-building.

Psychology, after all, is one of those fields given a hard time by people in “hard sciences,” which are popularly reckoned to be more objective, more revealing of actual structures and mechanisms in the world — more science-y. Fair or not, this might mean that psychologists have something to prove about their hardheadedness as researchers, about the stringency of their methods. Some peer pressure within the field to live up to such standards would obviously be a good thing — and certainly, it would be a better thing for the scientific respectability of psychology than an “everyone is doing it” excuse for less stringent methods.

Plus, isn’t psychology a field whose practitioners should have a grip on the various cognitive biases to which we humans fall prey? Shouldn’t psychologists understand better than most the wisdom of putting structures in place (whether embodied in methodology or in social interactions) to counteract those cognitive biases?

Remember that part of Stapel’s M.O. was keeping current with the social psychology literature so he could formulate hypotheses that fit very comfortably with researchers’ expectations of how the phenomena they studied behaved. Then, fabricating the expected results for his “investigations” of these hypotheses, Stapel caught peer reviewers being credulous rather than appropriately skeptical.

Short of themselves trying to reproduce the experiments Stapel described, how could peer reviewers avoid being fooled? Mayo has a suggestion:

Rather than report on believability, researchers need to report the properties of the methods they used: What was their capacity to have identified, avoided, admitted verification bias? The role of probability here would not be to quantify the degree of confidence or believability in a hypothesis, given the background theory or most intuitively plausible paradigms, but rather to check how severely probed or well-tested a hypothesis is — whether the assessment is formal, quasi-formal or informal. Was a good job done in scrutinizing flaws…or a terrible one?  Or was there just a bit of data massaging and cherry picking to support the desired conclusion? As a matter of routine, researchers should tell us.

I’m no social psychologist, but this strikes me as a good concrete step that could help peer reviewers make better evaluations — and that should help scientists who don’t want to fool themselves (let alone their scientific peers) to be clearer about what they really know and how well they really know it.
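To make Mayo’s suggestion concrete, here is a minimal sketch (my own illustration, not anything from her post or the Tilburg Report) of one error property a researcher could routinely report: the false-positive rate of a procedure that tests several outcome measures and reports whichever one comes up significant. The group sizes, outcome counts, and simulation settings are assumptions chosen purely for illustration.

  # Illustrative simulation: how often does a "test several outcomes,
  # report any significant one" procedure find support for a hypothesis
  # when every outcome is pure noise (the null is true)?
  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(0)

  def false_positive_rate(n_outcomes, n_per_group=30, n_sims=5_000, alpha=0.05):
      """Chance of reporting at least one p < alpha under a true null."""
      hits = 0
      for _ in range(n_sims):
          for _ in range(n_outcomes):
              treated = rng.normal(size=n_per_group)  # pure noise: no real effect
              control = rng.normal(size=n_per_group)
              if stats.ttest_ind(treated, control).pvalue < alpha:
                  hits += 1
                  break  # report the first "significant" outcome, ignore the rest
      return hits / n_sims

  for k in (1, 3, 10):
      print(f"{k:2d} outcome(s) tested: false-positive rate ~ {false_positive_rate(k):.2f}")

With one pre-specified outcome, the procedure errs at the nominal 5 percent; with ten outcomes to choose from, it finds “support” for a false hypothesis roughly 40 percent of the time. Support that comes that cheaply is exactly what Mayo counts as no evidence at all, and reporting the procedure’s error rate alongside the result would let reviewers see as much.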

The continuum between outright fraud and “sloppy science”: inside the frauds of Diederik Stapel (part 5).

It’s time for one last look at the excellent article by Yudhijit Bhattacharjee in the New York Times Magazine (published April 26, 2013) on social psychologist and scientific fraudster Diederik Stapel. We’ve already examined the strategy Stapel pursued to fabricate persuasive “results”, the particular harms Stapel’s misconduct did to the graduate students he was training, and the apprehension felt by the students and colleagues who suspected fraud was afoot at the prospect of blowing the whistle on Stapel. To close, let’s look at some of the uncomfortable lessons the Stapel case has for his scientific community — and perhaps for other scientific communities as well.

Bhattacharjee writes:

At the end of November, the universities unveiled their final report at a joint news conference: Stapel had committed fraud in at least 55 of his papers, as well as in 10 Ph.D. dissertations written by his students. The students were not culpable, even though their work was now tarnished. The field of psychology was indicted, too, with a finding that Stapel’s fraud went undetected for so long because of “a general culture of careless, selective and uncritical handling of research and data.” If Stapel was solely to blame for making stuff up, the report stated, his peers, journal editors and reviewers of the field’s top journals were to blame for letting him get away with it. The committees identified several practices as “sloppy science” — misuse of statistics, ignoring of data that do not conform to a desired hypothesis and the pursuit of a compelling story no matter how scientifically unsupported it may be.

The adjective “sloppy” seems charitable. Several psychologists I spoke to admitted that each of these more common practices was as deliberate as any of Stapel’s wholesale fabrications. Each was a choice made by the scientist every time he or she came to a fork in the road of experimental research — one way pointing to the truth, however dull and unsatisfying, and the other beckoning the researcher toward a rosier and more notable result that could be patently false or only partly true. What may be most troubling about the research culture the committees describe in their report are the plentiful opportunities and incentives for fraud. “The cookie jar was on the table without a lid” is how Stapel put it to me once. Those who suspect a colleague of fraud may be inclined to keep mum because of the potential costs of whistle-blowing.

The key to why Stapel got away with his fabrications for so long lies in his keen understanding of the sociology of his field. “I didn’t do strange stuff, I never said let’s do an experiment to show that the earth is flat,” he said. “I always checked — this may be by a cunning manipulative mind — that the experiment was reasonable, that it followed from the research that had come before, that it was just this extra step that everybody was waiting for.” He always read the research literature extensively to generate his hypotheses. “So that it was believable and could be argued that this was the only logical thing you would find,” he said. “Everybody wants you to be novel and creative, but you also need to be truthful and likely. You need to be able to say that this is completely new and exciting, but it’s very likely given what we know so far.”

Fraud like Stapel’s — brazen and careless in hindsight — might represent a lesser threat to the integrity of science than the massaging of data and selective reporting of experiments. The young professor who backed the two student whistle-blowers told me that tweaking results — like stopping data collection once the results confirm a hypothesis — is a common practice. “I could certainly see that if you do it in more subtle ways, it’s more difficult to detect,” Ap Dijksterhuis, one of the Netherlands’ best known psychologists, told me. He added that the field was making a sustained effort to remedy the problems that have been brought to light by Stapel’s fraud.

(Bold emphasis added.)

If the writers of this report are correct, the field of psychology failed in multiple ways here. First, its practitioners were insufficiently skeptical — both of Stapel’s purported findings and of their own preconceptions — to nip Stapel’s fabrications in the bud. Second, they were themselves routinely engaging in practices that were bound to mislead.

Maybe these practices don’t rise to the level of outright fabrication. However, neither do they rise to the level of rigorous and intellectually honest scientific methodology.

There could be a number of explanations for these questionable methodological choices.

Possibly some of the psychologists engaging in this “sloppy science” lack a good understanding of statistics or of what counts as a properly rigorous test of one’s hypothesis. Essentially, this is an explanation of faulty methodology on the basis of ignorance. However, it’s likely that this is culpable ignorance — that psychology researchers have a positive duty to learn what they ought to know about statistics and hypothesis testing, and to avail themselves of available resources to ensure that they aren’t ignorant in this particular way.

I don’t know if efforts to improve statistics education are a part of the “sustained effort to remedy the problems that have been brought to light by Stapel’s fraud,” but I think they should be.

Another explanation for the lax methodology decried by the report is alluded to in the quoted passage: perhaps psychology researchers let the strength of their own intuitions about what they were going to see in their research results drive their methodology. Perhaps they unconsciously drifted away from methodological rigor and toward cherry-picking and misuse of statistics and the like because they knew in their hearts what the “right” answer would be. Given this kind of conviction, of course they would reject methods that didn’t yield the “right” answer in favor of those that did.
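As a concrete illustration of how this drift plays out, here is a minimal sketch (an assumed setup of my own, not anything from the report) of the practice quoted above of stopping data collection once the results confirm a hypothesis: run subjects in batches, test after every batch, and stop the moment p < .05. Even when there is no effect at all, a method like this delivers the “right” answer far more often than its advertised error rate suggests.

  # Illustrative simulation of optional stopping: peek at the data after
  # each batch of subjects and stop as soon as the test comes out significant.
  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(1)

  def significance_rate(batch, max_n, n_sims=5_000, alpha=0.05):
      """Fraction of pure-noise experiments that end up 'significant'."""
      hits = 0
      for _ in range(n_sims):
          treated, control = [], []
          while len(treated) < max_n:
              treated.extend(rng.normal(size=batch))  # no real effect anywhere
              control.extend(rng.normal(size=batch))
              if stats.ttest_ind(treated, control).pvalue < alpha:
                  hits += 1
                  break  # the results "confirm" the hypothesis, so stop here
      return hits / n_sims

  print(f"one look at n=100 per group: {significance_rate(batch=100, max_n=100):.2f}")
  print(f"peeking every 10 subjects:   {significance_rate(batch=10, max_n=100):.2f}")

The fixed-sample test errs at the nominal 5 percent; the peeking version, drawing on the same maximum sample, errs roughly three times as often, and nothing in the published “p < .05” would reveal the difference.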

Here, too, the explanation does not provide an excuse. The scientist’s brief is not to take strong intuitions as true, but to look for evidence — especially evidence that could demonstrate that the intuitions are wrong. A good scientist should be on the alert for instances where she is being fooled by her intuitions. Rigorous methodology is one of the tools at her disposal to avoid being fooled. Organized skepticism from her fellow scientists is another.

From here, the explanations drift into waters where the researchers are even more culpable for their sloppiness. If you understand how to test hypotheses properly, and if you’re alert enough to the seductive power of your intuitions, it seems like the other reason you might engage in “sloppy science” is to make your results look less ambiguous, more certain, more persuasive than they really are, either to your fellow scientists or to others (administrators evaluating your tenure or promotion case? the public?). Knowingly providing a misleading picture of how good your results are is lying. It may be a lie of a smaller magnitude than Diederik Stapel’s full-scale fabrications, but it’s still dishonest.

And of course, there are plenty of reasons scientists (like other human beings) might try to rationalize a little lie as being not that bad. Maybe you really needed more persuasive preliminary data than you got to land the grant without which you won’t be able to support graduate students. Maybe you needed to make your conclusions look stronger to satisfy the notoriously difficult peer reviewers at the journal to which you submitted your manuscript. Maybe you are on the verge of getting credit for a paradigm-shaking insight in your field (if only you can put up the empirical results to support it), or of beating a competing research group to the finish line for an important discovery (if only you can persuade your peers that the results you have establish that discovery).

But maybe all these excuses prioritize scientific scorekeeping to the detriment of scientific knowledge-building.

Science is supposed to be an activity aimed at building a reliable body of knowledge about the world. You can’t reconcile this with lying, whether to yourself or to your fellow scientists. This means that scientists who are committed to the task must refrain from the little lies, and that they must take serious conscious steps to ensure that they don’t lie to themselves. Anything else runs the risk of derailing the whole project.

C.K. Gunsalus on responsible — and prudent — whistleblowing.

In my last post, I considered why, despite good reasons to believe that social psychologist Diederik Stapel’s purported results were too good to be true, the scientific colleagues and students who were suspicious of his work were reluctant to pursue these suspicions. Questioning the integrity of a member of your professional community is hard, and blowing the whistle on misconduct and misbehavior can be downright dangerous.

In her excellent article “How to Blow the Whistle and Still Have a Career Afterwards”, C. K. Gunsalus describes some of the challenges that come from less than warm community attitudes towards members who point out wrongdoing:

[Whistleblowers pay a high price] due to our visceral cultural dislike of tattletales. While in theory we believe the wrong-doing should be reported, our feelings about practice are more ambivalent. …

Perhaps some of this ambivalence is rooted in fear of becoming oneself the target of maliciously motivated false charges filed by a disgruntled student or former colleague. While this concern is probably overblown, it seems not far from the surface in many discussions of scientific integrity. (p. 52)

I suspect that much of this is a matter of empathy — or, more precisely, of who within our professional community gets our empathy. Maybe we have an easier time empathizing with the folks who seem to be trying to get along, rather than those who seem to be looking for trouble. Or maybe we have more empathy for our colleagues, with whom we share experiences and responsibilities and the expectation of long-term durable bonds, than we have for our students.

But perhaps distaste for a tattletale is more closely connected to our distaste for the labor involved in properly investigating allegations of wrongdoing and then, if wrongdoing is established, addressing it. It would certainly be easier to assume the charges are baseless, and sometimes disinclination to investigate takes the form of finding reasons not to believe the person raising the concerns.

Still, if the psychology of scientists cannot permit them to take allegations of misbehavior seriously, there is no plausible way for science to be self-correcting. Gunsalus writes:

[E]very story has at least two sides, and a problem often looks quite different when both are in hand than when only one perspective is in view. The knowledge that many charges are misplaced or result from misunderstandings reinforces ingrained hesitancies against encouraging charges without careful consideration.

On the other hand, serious problems do occur where the right and best thing for all is a thorough examination of the problem. In most instances, this examination cannot occur without someone calling the problem to attention. Early, thorough review of potential problems is in the interest of every research organization, and conduct that leads to it should be encouraged. (p. 53)

(Bold emphasis added.)

Gunsalus’s article (which you should read in full) takes account of negative attitudes towards whistleblowers despite the importance of rooting out misconduct and lays out a sensible strategy for bringing wrongdoing to light without losing your membership in your professional community. She offers “rules for responsible whistleblowing”:

  1. Consider alternative explanations (especially that you may be wrong).
  2. In light of #1, ask questions, do not make charges.
  3. Figure out what documentation supports your concerns and where it is.
  4. Separate your personal and professional concerns.
  5. Assess your goals.
  6. Seek advice and listen to it.

and her “step-by-step procedures for responsible whistleblowing”:

  1. Review your concern with someone you trust.
  2. Listen to what that person tells you.
  3. Get a second opinion and take that seriously, too.
  4. If you decide to initiate formal proceedings, seek strength in numbers.
  5. Find the right place to file charges; study the procedures.
  6. Report your concerns.
  7. Ask questions; keep notes.
  8. Cultivate patience!

The focus is very much on moving beyond hunches to establish clear evidence — and on avoiding self-deception. The potential whistleblower must hope that those to whom he or she is bringing concerns are themselves as committed to looking at the available evidence and avoiding self-deception.

Sometimes this is the situation, as it seems to have been in the Stapel case. In other cases, though, whistleblowers have done everything Gunsalus recommends and still found themselves without the support of their community. This is not just a bad thing for the whistleblowers. It is also a bad thing for the scientific community and the reliability of the shared body of knowledge it tries to build.
_____
C. K. Gunsalus, “How to Blow the Whistle and Still Have a Career Afterwards,” Science and Engineering Ethics, 4(1) 1998, 51-64.

The ethics of naming and shaming.

Lately I’ve been pondering the practice of responding to bad behavior by calling public attention to it.

The most recent impetus for my thinking about it was this tech blogger’s response to behavior that felt unwelcoming at a conference (behavior that seems, in fact, to have run afoul of that conference’s official written policies)*, but there are plenty of other examples one might find of “naming and shaming”: the discussion (on blogs and in other media outlets) of University of Chicago neuroscientist Dario Maestripieri’s comments about female attendees of the Society for Neuroscience meeting, the Office of Research Integrity’s posting of findings of scientific misconduct investigations, the occasional instructor who promises to publicly shame students who cheat in his class, and actually follows through on the promise.

There are many forms “naming-and-shaming” might take, and many types of behavior one might identify as problematic enough that they ought to be pointed out and attended to. But there seems to be a general worry that naming-and-shaming is an unethical tactic. Here, I want to explore that worry.

Presumably, the point of responding to bad behavior is that it’s bad — causing harm to individuals or a community (or both), undermining progress on a project or goal, and so forth. Responding to bad behavior can be useful if it stops bad behavior in progress and/or keeps similarly bad behavior from happening in the future. A response can also be useful in calling attention to the harm the behavior does (i.e., in making clear what’s bad about the behavior). And, depending on the response, it can affirm the commitment of individuals or communities that the behavior in question actually is bad, and that the individuals or communities see themselves as having a real stake in reducing it.

Rules, professional codes, conference harassment policies — these are some ways to specify at the outset what behaviors are not acceptable in the context of the meeting, game, work environment, or disciplinary pursuit. There are plenty of contexts, too, where there is no written-and-posted official enumeration of every type of unacceptable behavior. Sometimes communities make judgments on the fly about particular kinds of behavior. Sometimes, members of communities are not in agreement about these judgments, which might result in a thoughtful conversation within the community to try to come to some agreement, or the emergence of a rift that leads people to realize that the community was not as united as they once thought, or a ruling on the “actual” badness or acceptability of the behavior by those within the community who can marshal the power to make such a ruling.

Sharing a world with people who are not you is complicated, after all.

Still, I hope we can agree that there are some behaviors that count as bad behaviors. Assuming we had an unambiguous example of someone engaging in such a behavior, should we respond? How should we respond? Do we have a duty to respond?

I frequently hear people declare that one should respond to bad behavior, but that one should do so privately. The idea here seems to be that letting the bad actor know that the behavior in question was bad, and should be stopped, is enough to ensure that it will be stopped — and that the bad behavior must be a reflection of a gap in the bad actor’s understanding.

If knowing that a behavior is bad (or against the rules) were enough to ensure that those with the relevant knowledge never engage in the behavior, though, it becomes difficult to explain the highly educated researchers who get caught fabricating or falsifying data or images, the legions of undergraduates who commit plagiarism despite detailed instructions on proper citation methods, the politicians who lie. If knowledge that a certain kind of behavior is unacceptable is not sufficient to prevent that behavior, responding effectively to bad behavior must involve more than telling the perpetrator of that behavior, “What you’re doing is bad. Stop it.”

This is where penalties may be helpful in responding to bad behavior — get benched for the rest of the game, or fail the class, or get ejected from the conference, or become ineligible for funding for this many years. A penalty can convey that bad behavior is harmful enough to the endeavor or the community that its perpetrator needs a “time-out”.

Sometimes the application of penalties needs to be private (e.g., when a law like the Family Educational Rights and Privacy Act makes applying the penalty publicly illegal). But there are dangers in only dealing with bad behavior privately.

When fabrication, falsification, and plagiarism are “dealt with” privately, it can make it hard for a scientific community to identify papers in the scientific literature that they shouldn’t trust or researchers who might be prone to slipping back into fabricating, falsifying, or plagiarizing if they think no one is watching. (It is worth noting that large ethical lapses are frequently part of an escalating pattern that started with smaller ethical infractions.)

Worse, if bad behavior is dealt with privately, out of view of members of the community who witnessed the bad behavior in question, those members may lose faith in the community’s commitment to calling it out. Keeping penalties (if any) under wraps can convey the message that the bad behavior is actually tolerated, that official policies against it are empty words.

And sometimes, there are instances where the people within an organization or community with the power to impose penalties on bad actors seem disinclined to actually address bad behavior, using the cover of privacy as a way to opt out of penalizing the bad actors or of addressing the bad behavior in any serious way.

What’s a member of the community to do in such circumstances? Given that the bad behavior is bad because it has harmful effects on the community and its members, should those aware of the bad behavior call the community’s attention to it, in the hopes that the community can respond to it (or that the community’s scrutiny will encourage the bad actor to cease the bad behavior)?

Arguably, a community that is harmed by bad behavior has an interest in knowing when that behavior is happening, and who the bad actors are. As well, the community has an interest in stopping the bad behavior, in mitigating the harms it has already caused, and in discouraging further such behavior. Naming-and-shaming bad actors may be an effective way to secure these interests.

I don’t think this means naming-and-shaming is the only possible way to secure these interests, nor that it is always the best way to do so. Sometimes, however, it’s the tool that’s available that seems likely to do the most good.

There’s not a simple algorithm or litmus test that will tell you when shaming bad actors is the best course of action, but there are questions that are worth asking when assessing the options:

  • What are the potential consequences if this piece of bad behavior, which is observable to at least some members of the community, goes unchallenged?
  • What are the potential consequences if this piece of bad behavior, which is observable to at least some members of the community, gets challenged privately? (In particular, what are the potential consequences to the person engaging in the bad behavior? To the person challenging the behavior? To others who have had occasion to observe the behavior, or who might be affected by similar behavior in the future?)
  • What are the potential consequences if this piece of bad behavior, which is observable to at least some members of the community, gets challenged publicly? (In particular, what are the potential consequences to the person engaging in the bad behavior? To the person challenging the behavior? To others who have had occasion to observe the behavior, or who might be affected by similar behavior in the future?)

Challenging bad behavior is not without costs. Depending on your status within the community, challenging a bad actor may harm you more than the bad actor. However, not challenging bad behavior has costs, too. If the community and its members aren’t prepared to deal with bad behavior when it happens, the community has to bear those costs.
_____
* Let me be clear that this post is focused on the broader question of publicly calling out bad behavior rather than on the specific details of Adria Richards’ response to the people behind her at the tech conference, whether she ought to have found their jokes unwelcoming, whether she ought to have responded to them the way she did, or what have you. Since this post is not about whether Adria Richards did everything right (or everything wrong) in that particular instance, I’m going to be quite ruthless in pruning comments that are focused on her particular circumstances or decisions. Indeed, commenters who make any attempt to use the comments here to issue threats of violence against Richards (of the sort she is receiving via social media as I compose this post), or against anyone else, will have their information (including IP address) forwarded to law enforcement.

If you’re looking for my take on the details of the Adria Richards case, I’ll have a post up on my other blog within the next 24 hours.

Are scientists obligated to call out the bad work of other scientists? (A thought experiment)

Here’s a thought experiment. While it was prompted by intertubes discussions of evolutionary psychology and some of its practitioners, I take it the ethical issues are not limited to that field.

Say there’s an area of scientific research that is at a relatively early stage of its development. People working in this area of research see what they are doing as strongly connected to other, better established scientific fields, whether in terms of methodological approaches to answering questions, or the existing collections of empirical evidence on which they draw, or what have you.

There is general agreement within this community about the broad type of question that might be answered by this area of research and the sorts of data that may be useful in evaluating hypotheses. But there is also a good bit of disagreement among practitioners of this emerging field about which questions will be the most interesting (or tractable) ones to pursue, about how far one may reasonably extend the conclusions from particular bits of research, and even about methodological issues (such as what one’s null hypothesis should be).

Let me pause to note that I don’t think the state of affairs I’m describing would be out of the ordinary for a newish scientific field trying to get its footing. You have a community of practitioners trying to work out a reasonable set of strategies to answer questions about a bundle of phenomena that haven’t really been tackled by other scientific fields that are chugging merrily along. Not only do you not have the answers yet to the questions you’re asking about those phenomena, but you’re also engaged in building, testing, and refining the tools you’ll be using to try to answer those questions. You may share a commitment with others in the community that there will be a useful set of scientific tools (conceptual and methodological) to help you get a handle on those phenomena, but getting there may involve a good bit of disagreement about what tools are best suited for the task. And, there’s a possibility that in the end, there might not be any such tools that give you answers to the questions you’re asking.

Imagine yourself to be a member of this newish area of scientific research.*

What kind of obligation do you have to engage with other practitioners of this newish area of scientific research whose work you feel is not good? (What kind of “not good” are we talking about here? Possibly you perceive them to be drawing unwarranted conclusions from their studies, or using shoddy methodology, or ignoring empirical evidence that seems to contradict their claims. There’s no need to assume that they are being intentionally dishonest.) Do you have an obligation to take to the scientific literature to critique the shortcomings in their work? Do you have an obligation to communicate these critiques privately (e.g., in email correspondence)? Or is it ethically permissible not to engage with what you consider the bad examples of work in your emerging scientific field, instead keeping your head down and producing your own good examples of how to make progress in your emerging scientific field?

Do you think your obligations here are different than they might be if you were working in a well-established scientific field? (In a well-established scientific field, one might argue, the standards for good work and bad work are clearer; does this mean it takes less individual work to identify and rebut the bad work?)

Now consider the situation when your emerging scientific field is one that focuses on questions that capture the imagination not just of scientists trying to get this new field up and running, but also of the general public — to the extent that science writers and journalists are watching the output of your emerging scientific field for interesting results to communicate to the public. How does the fact that the public is paying some attention to your newish area of scientific research bear on what kind of obligation you have to engage with the practitioners in your field whose work you feel is not good?

(Is it fair that a scientist’s obligations within his or her scientific field might shift depending on whether the public cares at all about the details of the knowledge being built by that scientific field? Is this the kind of thing that might drive scientists into more esoteric fields of research?)

Finally, consider the situation when your emerging field of science has captured the public imagination, and when the science writers and journalists seem to be getting most of their information about what your field is up to and what knowledge you have built from the folks in your field whose work you feel is not good. Does this place more of an obligation upon you to engage with the practitioners doing not-good work? Does it obligate you to engage with the science writers and journalists to rebut the bad work and/or explain what is required for good scientific work in your newish field? If you suspect that science writers and journalists are acting, in this case, to amplify misunderstandings or to hype tempting results that lack proper evidential support, do you have an obligation to communicate directly to the public about the misunderstandings and/or about what proper evidential support looks like?

A question I think can be asked at every stage of this thought experiment: Does the community of practitioners of your emerging scientific field have a collective responsibility to engage with the not-so-good work, even if any given individual practitioner does not? And, if the answer to this question is “yes”, how can the community of practitioners live up to that obligation if no individual practitioner is willing to step up and do it?

_____
* For fun, you can also consider these questions from the point of view of a member of the general public: What kinds of obligations do you want the scientists in this emerging field to recognize? After all, as a member of the public, your interests might diverge in interesting ways from those of a scientist in this emerging field.

The danger of pointing out bad behavior: retribution (and the community’s role in preventing it).

There has been a lot of discussion of Dario Maestripieri’s disappointment at the unattractiveness of his female colleagues in the neuroscience community. Indeed, it’s notable how much of this discussion has been in public channels, not just private emails or conversations conducted with sound waves which then dissipate into the aether. No doubt, this is related to Maestripieri’s decision to share his hot-or-not assessment of the women in his profession in a semi-public space where it could achieve more permanence — and amplification — than it would have as an utterance at the hotel bar.

His behavior became something that any member of his scientific community with an internet connection (and a whole lot of people outside his scientific community) could inspect. The impacts of an actual, rather than hypothetical, piece of behavior could be brought into the conversation about the climate of professional and learning communities, especially for the members of these communities who are women.

It’s worth pointing out that there is nothing especially surprising about such sexist behavior* within these communities. The people in the communities who have been paying attention have seen it before (and besides have good empirical grounds for expecting that gender biases may be a problem). But many sexist behaviors go unreported and unremarked, sometimes because of the very real fear of retribution.

What kind of retribution could there be for pointing out a piece of behavior that has sexist effects, or arguing that it is an inappropriate way for a member of the professional community to behave?

Let’s say you are an early career scientist, applying for a faculty post. As it happens, Dario Maestripieri‘s department, the University of Chicago Department of Comparative Human Development, currently has an open search for a tenure-track assistant professor. There is a non-zero chance that Dario Maestripieri is a faculty member on that search committee, or that he has the ear of a colleague who is.

It is not a tremendous stretch to hypothesize that Dario Maestripieri may not be thrilled at the public criticism he’s gotten in response to his Facebook post (including some quite close to home). Possibly he’s looking through the throngs of his Facebook friends and trying to guess which of them is the one who took the screenshot of his ill-advised post and shared it more widely. Or looking through his Facebook friends’ Facebook friends. Or considering which early career neuroscientists might be in-real-life friends or associates with his Facebook friends or their Facebook friends.

Now suppose you’re applying for that faculty position in his department and you happen to be one of his Facebook friends,** or one of their Facebook friends, or one of the in-real-life friends of either of those.

Of course, shooting down an applicant for a faculty position for the explicit reason that you think he or she may have cast unwanted attention on your behavior towards your professional community would be a problem. But there are probably enough applicants for the position, enough variation in the details of their CVs, and enough subjective judgment on the part of the members of the search committee in evaluating all those materials that it would be possible to cut all applicants who are Dario Maestripieri’s Facebook friends (or their Facebook friends, or in-real-life friends of either of those) from consideration while providing some other plausible reason for their elimination. Indeed, the circle could be broadened to eliminate candidates with letters of recommendation from Dario Maestripieri’s Facebook friends (or their Facebook friends, or in-real-life friends of either of those), candidates who have coauthored papers with Dario Maestripieri’s Facebook friends (or their Facebook friends, or in-real-life friends of either of those), etc.

And, since candidates who don’t get the job generally aren’t told why they were found wanting — only that some other candidate was judged to be better — these other plausible reasons for shooting down a candidate would only even matter in the discussions of the search committee.

In other words, real retaliation (rejection from consideration for a faculty job) could fall on people who are merely suspected of sharing information that led to Dario Maestripieri becoming the focus of a public discussion of sexist behavior — not just on the people who have publicly spoken about his behavior. And, the retaliation would be practically impossible to prove.

If you don’t think this kind of possibility has a chilling effect on the willingness of members of a professional community to speak up when they see a relatively powerful colleague behave in ways they think are harmful, you just don’t understand power dynamics.

And even if Dario Maestripieri has no part at all in his department’s ongoing faculty search, there are other interactions within his professional community in which his suspicions about who might have exposed his behavior could come into play. Senior scientists are routinely asked to referee papers submitted to scientific journals and to serve on panels and study sections that rank applications for grants. In some of these circumstances, the identities of the scientists one is judging (e.g., for grants) are known to the scientists making the evaluations. In others, they are masked, but the scientists making the evaluations have hunches about whose work they are evaluating. If those hunches are mingled with hunches about who could have shared evidence of behavior that is now making the evaluator’s life difficult, it’s hard to imagine the grant applicant or the manuscript author getting a completely fair shake.

Let’s pause here to note that the attitude Dario Maestripieri’s Facebook posting reveals, that it’s appropriate to evaluate women in the field on their physical beauty rather than their scientific achievements, could itself be a source of bias as he does things that are part of a normal professional life, like serving on search committees, reviewing journal submissions and grant applications, evaluating students, and so forth. A bias like this could manifest itself in a preference for hiring job candidates one finds aesthetically pleasing. (Sure, academic job application packets usually don’t include a headshot, but even senior scientists have probably heard of Google Image search.) Or it could manifest itself in a preference against hiring more women (since too high a concentration of female colleagues might be perceived as increasing the likelihood that one would be taken to task for freely expressing one’s aesthetic preferences about women in the field). Again, it would be extraordinarily hard to prove the operation of such a bias in any particular case — but that doesn’t rule out the possibility that it is having an effect in activities where members of the professional community are supposed to be as objective as possible.

Objectivity, as we’ve noted before, is hard.

We should remember, though, that faculty searches are conducted by committees, rather than by a single individual with the power to make all the decisions. And, the University of Chicago Department of Comparative Human Development (as well as the University of Chicago more generally) may recognize that it is likely to be getting more public scrutiny as a result of the public scrutiny Dario Maestripieri has been getting.

Among other things, this means that the department and the university have a real interest in conducting a squeaky-clean search that avoids even the appearance of retaliation. In any search, members of the search committee have a responsibility to identify, disclose, and manage their own biases. In this search, discharging that responsibility is even more vital. In any search, members of the hiring department have a responsibility to discuss their shared needs and interests, and how these should inform the selection of the new faculty member. In this search, that discussion of needs and interests must include a discussion of the climate within the department and the larger scientific community — what it is now, and what members of the department think it should be.

In any search, members of the hiring department have an interest in sharing their opinions on who the best candidate might be, and to having a dialogue around the disagreements. In this search, if it turns out one of the disagreements about a candidate comes down to “I suspect he may have been involved in exposing my Facebook post and making me feel bad,” well, arguably there’s a responsibility to have a discussion about that.

Ask academics what it’s like to hire a colleague and it’s not uncommon to hear them describe the experience as akin to entering a marriage. You’re looking for someone with whom you might spend the next 30 years, someone who will grow with you, who will become an integral part of your department and its culture, even to the point of helping that departmental culture grow and change. This is a good reason not to choose the new hire based on the most superficial assessment of what each candidate might bring to the relationship — and to recognize that helping one faculty member avoid discomfort might not be the most important thing.

Indeed, Dario Maestripieri’s colleagues may have all kinds of reasons to engage him in uncomfortable discussions about his behavior that have nothing to do with conducting a squeaky-clean faculty search. Their reputations are intertwined, and leaving things alone rather than challenging Dario Maestripieri’s behavior may impact their own ability to attract graduate students or maintain the respect of undergraduates. These are things that matter to academic scientists — which means that Dario Maestripieri’s colleagues have an interest in pushing back for their own good and the good of the community.

The pushback, if it happens, is likely to be just as invisible publicly as any retaliation against job candidates for possibly sharing the screenshot of Dario Maestripieri’s Facebook posting. If positive effects are visible, it might make it seem less dangerous for members of the professional community to speak up about bad behavior when they see it. But if the outward appearance is that nothing has changed for Dario Maestripieri and his department, expect that there will be plenty of bad behavior that is not discussed in public because the career costs of doing so are just too high.

______
* This is not at all an issue about whether Dario Maestripieri is a sexist. This is an issue about the effects of the behavior, which have a disproportionate negative impact on women in the community. I do not know, or care, what is in the heart of the person who displays these behaviors, and it is not at all relevant to a discussion of how the behaviors affect the community.

** Given the number of his Facebook friends and their range of ages, career stages, etc., this doesn’t strike me as improbable. (At last check, I have 11 Facebook friends in common with Dario Maestripieri.)

Reading the writing on the (Facebook) wall: a community responds to Dario Maestripieri.

Imagine an academic scientist goes to a big professional meeting in his field. For whatever reason, he then decides to share the following “impression” of that meeting with his Facebook friends:

My impression of the Conference of the Society for Neuroscience in New Orleans. There are thousands of people at the conference and an unusually high concentration of unattractive women. The super model types are completely absent. What is going on? Are unattractive women particularly attracted to neuroscience? Are beautiful women particularly uninterested in the brain? No offense to anyone..

Maybe this is a lapse in judgment, but it’s no big thing, right?

I would venture, from the selection of links collected below discussing Dario Maestripieri and his recent social media foible, that this is very much A Thing. Read on to get a sense of how the discussion is unfolding within the scientific community and the higher education community:

Drugmonkey, SfN 2012: Professors behaving badly:

There is a very simple response here. Don’t do this. It’s sexist, juvenile, offensive and stupid. For a senior scientist it is yet another contribution to the othering of women in science. In his lab, in his subfield, in his University and in his academic societies. We should not tolerate this crap.

Professor Maestripieri needs to apologize for this in a very public way and take responsibility for his actions. You know, not with a nonpology of “I’m sorry you were offended” but with an “I shouldn’t have done that” type of response.

Me, at Adventures in Ethics and Science, The point of calling out bad behavior:

It’s almost like people have something invested in denying the existence of gender bias among scientists, the phenomenon of a chilly climate in scientific professions, or even the possibility that Dario Maestripieri’s Facebook post was maybe not the first observable piece of sexism a working scientist put out there for the world to see.

The thing is, that denial is also the denial of the actual lived experience of a hell of a lot of women in science

Isis the Scientist, at On Becoming a Domestic and Laboratory Goddess, What We Learn When Professorly d00ds Take to Facebook:

Dr. Maestripieri’s comments will certainly come as no great shock to the women who read them. That’s because those of us who have been around the conference scene for a while know that this is pretty par for the course. There’s not just sekrit, hidden sexism in academia. A lot of it is pretty overt. And many of us know about the pockets of perv-fest that can occur at scientific meetings. We know which events to generally avoid. Many of us know who to not have cocktails with or be alone with, who the ass grabbers are, and we share our lists with other female colleagues. We know to look out for the more junior women scientists who travel with us. I am in no way shocked that Dr. Maestripieri would be so brazen as to post his thoughts on Facebook because I know that there are some who wouldn’t hesitate to say the same sorts of things aloud. …

The real question is whether the ability to evaluate Dr. Maestripieri’s asshattery in all of its screenshot-captured glory will actually change hearts and minds.

Erin Gloria Ryan at Jezebel, University of Chicago Professor Very Disappointed that Female Neuroscientists Aren’t Sexier:

Professor Maestripieri is a multiple-award winning academic working at the University of Chicago, which basically means he is Nerd Royalty. And, judging by his impressive resume, which includes a Ph.D in Psychobiology, the 2000 American Psychological Association Distinguished Scientific Award for Early Career Contribution to Psychology, and several committees at the U of C, he’s well aware of how hard someone in his position has had to work in order to rise to the top of an extremely competitive and demanding field. So it’s confusing to me that he would fail to grasp the fact that women in his field had to perform similar work and exhibit similar levels of dedication that he did.

Women: also people! Just like men, but with different genitals!

Cory Doctorow at BoingBoing, Why casual sexism in science matters:

I’ve got a daughter who, at four and a half, wants to be a scientist. Every time she says this, it makes me swell up with so much pride, I almost bust. If she grows up to be a scientist, I want her to be judged on the reproducibility of her results, the elegance of her experimental design, and the insight in her hypotheses, not on her ability to live up to someone’s douchey standard of “super model” looks.

(Also, do check out the conversation in the comments; it’s very smart and very funny.)

Scott Jaschik at Inside Higher Education, (Mis)Judging Female Scientists:

Pity the attendees at last week’s annual meeting of the Society for Neuroscience who thought they needed to focus on their papers and the research breakthroughs being discussed. It turns out they were also being judged — at least by one prominent scientist — on their looks. At least the female attendees were. …

Maestripieri did not respond to e-mail messages or phone calls over the past two days. A spokesman for the University of Chicago said that he had decided not to comment.

Pat Campbell at Fairer Science, No offense to anyone:

I’m glad the story hit Inside Higher Ed; I find it really telling that only women are quoted … Inside Higher Ed makes this a woman’s problem not a science problem and that is a much more important issue than Dario Maestripieri’s stupid comments.

Beryl Benderly at the Science Careers Blog, A Facebook Furor:

There’s another unpleasant implication embedded in Maestripieri’s post. He apparently assumed that some of his Facebook readers would find his observations interesting or amusing. This indicates that, in at least some circles, women scientists are still not evaluated on their work but rather on qualities irrelevant to their science. …

[T]he point of the story is not one faculty member’s egregious slip.  It is the apparently more widespread attitudes that this slip reveals

Dana Smith at Brain Study, More sexism in science:

However, others still think his behavior was acceptable, writing it off as a joke and telling people to not take it so seriously. This is particularly problematic given the underlying gender bias we know to still exist in science. If we accept overt and covert discrimination against women in science we all lose out, not just women who are dissuaded from the field because of it, but also everyone who might have benefited from their future work.

Minerva Cheevy at Research Centered (Chronicle of Higher Education Blog Network), Where’s the use of looking nice?:

There’s just no winning for women in academia – if you’re unattractive, then you’re a bad female. But if you’re attractive, you’re a bad academic.

The Maroon Editorial Board at The Chicago Maroon, Changing the conversation:

[T]his incident offers the University community an opportunity to reexamine our culture of “self-deprecation”—especially in relation to the physical attractiveness of students—and how that culture can condone assumptions which are just as baseless and offensive. …

Associating the depth of intellectual interests with a perceived lack of physical beauty fosters a culture of permissiveness towards derogatory comments. Negative remarks about peers’ appearances make blanket statements about their social lives and demeanors more acceptable. Though recently the popular sentiment among students is that the U of C gets more attractive the further away it gets from its last Uncommon App class, such comments stem from the same type of confused associations—that “normal” is “attractive” and that “weird” is not. It’s about time that we distance ourselves from these kinds of normative assumptions. While not as outrageous as Maestripieri’s comments, the belief that intelligence should be related to any other trait—be it attractiveness, normalcy, or social skills—is just as unproductive and illogical.

It’s quite possible that I’ve missed other good discussions of this situation and its broader implications. If so, please feel free to share links to them in the comments.

Community responsibility for a safety culture in academic chemistry.

This is another approximate transcript of a part of the conversation I had with Chemjobber that became a podcast. This segment (from about 29:55 to 52:00) includes our discussion of what a just punishment might look like for PI Patrick Harran for his part in the Sheri Sangji case. From there, our discussion shifted to the question of how to make the culture of academic chemistry safer:

Chemjobber: One of the things that I guess I’ll ask is whether you think we’ll get justice out of this legal process in the Sheri Sangji case.

Janet: I think about this, I grapple with this, and about half the time when I do, I end up thinking that punishment — and figuring out the appropriate punishment for Patrick Harran — doesn’t even make my top-five list of things that should come out of all this. I kind of feel like a decent person should feel really, really bad about what happened, and should devote his life forward from here to making the conditions that enabled the accident that killed Sheri Sangji go away. But, you know, maybe he’s not a decent person. Who the heck can tell? And certainly, once you put things in the context where you have a legal team defending you against criminal charges — that tends to obscure the question of whether you’re a decent person or not, because suddenly you’ve got lawyers acting on your behalf in all sorts of ways that don’t look decent at all.

Chemjobber: Right.

Janet: I think the bigger question in my mind is how does the community respond? How does the chemistry department at UCLA, how does the larger community of academic chemistry, how do Patrick Harran’s colleagues at UCLA and elsewhere respond to all of this? I know that there are some people who say, “Look, he really fell down on the job safety-wise, and in terms of creating a safe environment for people working on his behalf, and someone died, and he should do jail time.” I don’t actually know if putting him in jail changes the conditions on the outside, and I’ve said that I think, in some ways, tucking him away in jail for however many months makes it easier for the people who are still running academic labs while he’s incarcerated to say, “OK, the problem is taken care of. The bad actor is out of the pool. Not a problem,” rather than looking at what it is about the culture of academic chemistry that has us devoting so little of our time and energy to making sure we’re doing this safely. So, if it were up to me, if I were the Queen of Just Punishment in the world of academic chemistry, I’ve said his job from here on out should be to be Safety in the Research Culture Guy. That’s what he gets to work on. He doesn’t get to go forward and conduct new research on some chemical question like none of this ever happened. Because something happened. Something bad happened, and the reason something bad happened, I think, is because of a culture in academic chemistry where it was acceptable for a PI not to pay attention to safety considerations until something bad happened. And that’s got to change.

Chemjobber: I think it will change. I should point out here that if your proposed punishment were enacted, it would be quite a punishment, because he wouldn’t get to choose what he worked on anymore, and that, to a great extent, is the joy of academic research, that it’s self-directed and that there is lots and lots of freedom. I don’t get to choose the research problems I work on, because I do it for money. My choices are more or less made by somebody else.

Janet: But they pay you.

Chemjobber: But they pay me.

Janet: I think I’d even be OK saying maybe Harran gets to do 50% of his research on self-directed research topics. But the other 50% is he has to go be an evangelist for changing how we approach the question of safety in academic research.

Chemjobber: Right.

Janet: He’s still part of the community, he’s still “one of us,” but he has to show us how we are treading dangerously close to the conditions that led to the really bad thing that happened in his lab, so we can change that.

Chemjobber: Hmm.

Janet: And not just make it an individual thing. I think all of the attempts to boil what happened down to the individual responsibility of the technician, or of the PI, or some split between the individual responsibility of one and the individual responsibility of the other, totally miss the institutional responsibility, and the responsibility of the professional community, and how systemic factors that the community is responsible for failed here.

Chemjobber: Hmm.

Janet: And I think sometimes we need individuals to step up and say, part of me acknowledging my personal responsibility here is to point to the ways that the decisions I made within the landscape we’ve got — of what we take seriously, of what’s rewarded and what’s punished — led to this really bad outcome. That’s part of the power here: when academic chemists say, “I would be horrified if you jailed this guy because this could have happened in any of our labs,” I think they’re right. I think they’re right, and I think we have to ask how it is that conditions in these academic communities got to the point where we’re lucky that more people haven’t been seriously injured or killed by some of the bad things that could happen — bad things that we don’t even know we’re walking into because safety gets such short shrift.

Chemjobber: Wow, that’s heavy. I’m sure there are industrial chemists whose primary job is to think about safety. Is part of the issue we have here that safety has been professionalized? We have industrial chemical hygienists and safety engineers. Every university has an EH&S [environmental health and safety] department. Does that make safety somebody else’s problem? And maybe if Patrick Harran were to become a safety evangelist, it would be a way of saying it’s our problem, and we all have to learn, we have to figure out a way to deal with this?

Janet: Yeah. I actually know that there exist safety officers in academic science departments, partly because I serve on some university committees with people who fill that role — so I know they exist. I don’t know how much the people doing research in those departments actually talk with those safety officers before something goes wrong, or how much of it goes beyond “Oh, there’s paperwork we need to make sure is filed in the right place in case there’s an inspection,” or something like that. But it strikes me that safety should be more collaborative. In some ways, wouldn’t it be a more gripping weekly seminar in a chemistry department for grad students working in the lab (even just once a month in the weekly seminar slot) to have a safety roundtable? “Here are the risks that we found out about in this kind of work,” or talking about unforeseen things that might happen, or how do you get started finding out about proper precautions as you’re beginning a new line of research? What’s your strategy for figuring that out? Who do you talk to? I honestly feel like this is a part of chemical education at the graduate level that is extremely underdeveloped. I know there’s been some talk about changing the undergraduate chemistry degree so that it includes something like a certificate program in chemical safety, and maybe that will fix it all. But I think the only thing that fixes it all is really making it part of the day to day lived culture of how we build new knowledge in chemistry, that the safety around how that knowledge gets built is an ongoing part of the conversation.

Chemjobber: Hmm.

Janet: It’s not something we talk about once and then never again. Because that’s not how research works. We don’t say, “Here’s our protocol. We never have to revisit it. We’ll just keep running it until we have enough data, and then we’re done.”

Chemjobber: Right.

Janet: Show me an experiment that’s like that. I’ve never touched an experiment like that in my life.

Chemjobber: So, how many times do you remember your Ph.D. advisor talking to you about safety?

Janet: Zero. He was a really good advisor, he was a very good mentor, but essentially, how it worked in our lab was that the grad students who were further on would talk to the grad students who were newer about “Here’s what you need to be careful about with this reaction,” or “If you’ve got overflow of your chemical waste, here’s who to call to do the clean-up,” or “Here’s the paperwork you fill out to have the chemical waste hauled away properly.” So, the culture was that the people who were in the lab day to day were the keepers of the safety information, and luckily I joined a lab where those grad students were very forthcoming. They wanted to share that information. You didn’t have to ask because they offered it first. I don’t think it happens that way in every lab, though.

Chemjobber: I think you’re right. The thorniness of the problem of turning chemical safety into a day to day thing within the lab — within a specific group — is that you’re relying on a group of people who are transient, and they’re human, so some people really care about it and some tend not to. I had an advisor who didn’t talk about safety all the time but did, on a number of occasions, yank us all up short and say, “Hey, look, what you’re doing is dangerous!” I clearly remember specific admonishments: “Hey, that’s dangerous! Don’t do that!”

Janet: I suspect that may be more common in organic chemistry than in physical chemistry, which is my area. You guys work with stuff that seems to have a lot more potential to do interesting things in interesting ways. The other thing, too, is that in my research group we were united by a common set of theoretical approaches, but we all worked in different kinds of experimental systems which had different kinds of hazards. The folks doing combustion reactions had different things to worry about than me, working with my aqueous reaction in a flow-through reactor, while someone in the next room was working with enzymatic reactions. We were all over the map. Nothing that any of us worked with seemed to have real deadly potential, at least as we were running it, but who knows?

Chemjobber: Right.

Janet: And given that different labs have very different dynamics, that could make it hard to actually implement a desire to have safety become a part of the day to day discussions people are having as they’re building the knowledge. But this might really be a good place for departments and graduate training programs to step up. To say, “OK, you’ve got your PI who’s running his or her own fiefdom in the lab, but we’re the other professional parental unit looking out for your well-being, so we’re going to have these ongoing discussions with graduate cohorts made up of students who are working in different labs about safety and how to think about safety where the rubber hits the road.” Actually bringing those discussions out of the research group and the research group meeting might provide a space where people can become reflective about how things go in their own labs and can see something about how things are being done differently in other labs, and start piecing together strategies, start thinking about what they want the practices to be like when they’re the grown-up chemists running their own labs. How do they want to make safety something that’s part of the job, not an add-on that’s slapped on or forgotten altogether?

Chemjobber: Right.

Janet: But of course, graduate training programs would have to care enough about that to figure out how to put the resources on it, to make it happen.

Chemjobber: I’m in profound sympathy with the people who would have to figure out how to do that. I don’t really know anything about the structure of a graduate training program other than, you know, “Do good work, and try to graduate sooner rather than later.” But I assume that in the last 20 to 30 years, there have been new mandates like “OK, you all need to have some kind of ethics component”

Janet: — because ethics coursework will keep people from cheating! Except that’s an oversimplified equation. But ethics is a requirement they’re heaping on, and safety could certainly be another. The question is how to do that sensibly rather than making it clear that we’re doing this only because there’s a mandate from someone else that we do it.

Chemjobber: One of the things that I’ve always thought about in terms of how to better inculcate safety in academic labs is maybe to have training that happens every year, that takes a week. New first-years come in and you get run through some sort of a lab safety thing where you go and you set up the experiment and weird things are going to happen. It’s kind of an artificial environment where you have to go in and run a dangerous reaction as a drill that reminds you that there are real-world consequences. I think Chembark talked about how, at Caltech’s Safety Day, they brought out one of the lasers and put a hole through an apple. Since Paul is an organic chemist, I don’t think he does that very often, but his response was “Oh, if I enter one of these laser labs, I should probably have my safety glasses on.” There’s a limit to the effectiveness of that sort of stuff. You have to really, really think about how to design it, and a week out of a year is a long time, and who’s going to run it? I think your idea of the older students in the lab being the ones who really do a lot of the day to day safety stuff is important. What happens when there are no older students in the lab?

Janet: That’s right, when you’re the first cohort in the PI’s lab.

Chemjobber: Or, when there hasn’t been much funding for students and suddenly now you have funding for students.

Janet: And there’s also the question of going from a sparsely populated lab to a really crowded lab when you have the funding but you don’t suddenly have more lab space. And crowded labs have different kinds of safety concerns than sparsely populated labs.

Chemjobber: That’s very true.

Janet: I also wonder whether the “grown-up” chemists, the postdocs and the PIs, ought to be involved in some sort of regular safety … I guess casting it as “training” is likely to get people’s hackles up, and they’re likely to say, “I have even less time for this than my students do.”

Chemjobber: Right.

Janet: But at the same time, pretending that they learned everything they need to know about safety in grad school? Really? Really you did? When we’re talking now about how maybe the safety training for graduate students is inadequate, you magically got the training that tells you everything you need to know from here on out about safety? That seems weird. And also, presumably, the risks of certain kinds of procedures and certain kinds of reagents — that’s something about which our knowledge continues to increase as well. So, finding ways to keep up on that, to come up with safer techniques and better responses when things do go wrong — some kind of continuing education, continuing involvement with that. If there was a way to do it to include the PIs and the people they’re employing or training, to engage them together, maybe that would be effective.

Chemjobber: Hmm.

Janet: It would at least make it seem less like, “This is education we have to give our students, this is one more requirement to throw on the pile, but we wouldn’t do it if we had the choice, because it gets in the way of making knowledge.” Making knowledge is good. I think making knowledge is important, but we’re human beings making knowledge and we’d like to live long enough to appreciate that knowledge. Graduate students shouldn’t be consumable resources in the knowledge-building the same way that chemical reagents are.

Chemjobber: Yeah.

Janet: Because I bet you the disposal paperwork on graduate students is a fair bit more rigorous than for chemical waste.

Why does lab safety look different to chemists in academia and chemists in industry?

Here’s another approximate transcript of the conversation I had with Chemjobber that became a podcast. In this segment (from about 19:30 to 29:30), we consider how reactions to the Sheri Sangji case sound different when they come from academic chemists than when they come from industrial chemists, and we spin some hypotheses about what might be going on behind those differences:

Chemjobber: I know that you wanted to talk about the response of industrial chemists versus academic chemists to the Sheri Sangji case.

Janet: This is one of the things that jumps out at me in the comment threads on your blog posts about the Sangji case. (Your commenters, by the way, are awesome. What a great community of commenters engaging with this stuff.) It really does seem that the commenters who are coming from industry are saying, “These conditions that we’re hearing about in the Harran lab (and maybe in academic labs in general) are not good conditions for producing knowledge as safely as we can.” And the academic commenters are saying, “Oh come on, it’s like this everywhere! Why are you going to hold this one guy responsible for something that could have happened to any of us?” It shines a light on something interesting about how academic labs building knowledge function really differently from industrial labs building knowledge.

Chemjobber: Yeah, I don’t know. It’s very difficult for me to separate out whether it’s culture or law or something else. Certainly I think there’s a culture aspect of it, which is that every large company and most small companies really try hard to have some sort of a safety culture. Whether or not they actually stick to it is a different story, but what I’ve seen is that the bigger the company, the more it really matters. Part of it, I think, is that people are older and a little bit wiser, and they’re better at looking over each other’s shoulders and saying, “What are you doing over there?” and “So, you’re planning to do that? That doesn’t sound like a great idea.” It seems like there’s less of that in academia. And then there’s the regulatory aspect of it. Industrial chemists are workers, the companies they’re working for are employers, and there’s a clear legal aspect to that. As under-resourced as OSHA is, there is an actual legal structure prepared to deal with accidents. If the Sangji incident had happened at a very large company, most people think that heads would have rolled, letters would have been placed in evaluation files, and careers would be over.

Janet: Or at least the lab would probably have been shut down until a whole bunch of stuff was changed.

Chemjobber: But in academia, it looks like things are different.

Janet: I have some hunches that perhaps support some of your hunches here about where the differences are coming from. First of all, the set-up in academia assumes radical autonomy on the part of the PI about how to run his or her lab. Much of that is for the good, as far as allowing different ways to tackle the creative problems of how to ask the scientific questions to better shake loose the piece of knowledge you’re trying to shake loose, or allowing a range of different work habits that might be successful for these people you’re training to be grown-up scientists in your scientific field. And along with that radical autonomy — your lab is your fiefdom — in a given academic chemistry department you’re also likely to have a wide array of chemical sub-fields that people are exploring. So, depending on the size of your department, you can’t necessarily count on there being more than a couple other PIs in the department who really understand your work well enough to have deep insight into whether what you’re doing is safe or really dangerous. It’s a different kind of resource that you have available right at hand — a different kind of peer pressure in the immediate professional and work environment acting on the industrial chemist than on the academic chemist. I think that probably plays some role in why PIs in academia maybe aren’t as up on the potential safety risks of new work they’re doing as they might otherwise be. And then, of course, there’s the really different kinds of rewards people are working for in industry versus academia, and how the whole tenure race ends up asking more and more of people with the same 24 hours in the day as anyone else. So, people on the tenure track start asking, “What are the things I’m really rewarded for? Because obviously, if I’m going to succeed, that’s where I have to focus my attention.”

Chemjobber: It’s funny how the “T” word keeps coming up.

Janet: By the same token, in a university system that has consistently tried to make it easier to fire faculty at whim because they’re expensive, I sort of see the value of tenure. I’m not at all arguing that tenure is something academic chemists don’t need. But it may be that the particulars of how we evaluate people for tenure are incentivizing behaviors that are not helping the safety of the people building the knowledge, or the well-being of the people who are training to be grown-ups in these professional communities.

Chemjobber: That’s right. We should just say specifically that in this particular case, Patrick Harran already had tenure, and I believe he is still a chaired professor at UCLA.

Janet: I think maybe the thing to point out is that some of these expectations, some of these standard operating procedures within disciplines in academia, are heavily shaped by the things that are rewarded for tenure, and then for promotion to full professor, and then whatever else. So, even if you’re tenured, you’re still soaking in that same culture that is informing the people who are trying to get permission to stay there permanently rather than being thanked for their six years of service and shown the door. You’re still soaking in that culture that says, “Here’s what’s really important.” Because if something else was really important, then by golly that’s how we’d be choosing who gets to stay here for reals and who’s just passing through.

Chemjobber: Yes.

Janet: I don’t know as much about the typical life cycle of the employee in industrial chemistry, but my sense is that maybe the fact that grad students and postdocs and, to some extent, technicians are sort of transient in the community of academic chemistry might make a difference as well — that they’re seen as people who are passing through, and that the people who are more permanent fixtures in that world either forget that they come in not knowing all the stuff that the people who have been there for a long, long time know, or they’re sort of making a calculation, whether they realize it or not, about how important it is to convey some of this stuff they know to transients in their academic labs.

Chemjobber: Yeah, I think that’s true. Numerically, there’s certainly a lot less turnover in industry than there is in academic labs.

Janet: I would hope so!

Chemjobber: Especially from the bench-worker perspective. It’s unfortunate that layoffs happen (topic for another podcast!), but that seems to be the main source of turnover in industry these days.