Fall semester musing on numbers.

The particular numbers on which I’m focused aren’t cool ones like pi, although I suspect they’re not entirely rational, either.

I teach at a public university in a state whose recent budget crises have been epic. That means that funding for sections of classes (and especially for the faculty who teach those sections of classes) has been tight.

My university is a teaching-focused university, which means that there has also been serious effort to ensure that the education students get at the university gives them a significant level of mastery over their major subject, helps them develop competencies, qualities of mind, and skills, and so forth. How precisely to ensure this is an interesting conversation, couched in language about learning objectives and assessments and competing models of learning. But for at least some of the things our students are supposed to learn, the official judgment has been that this will require students to write (and receive meaningful feedback on) a minimum number of words, and to do so in classes with a relatively small maximum number of students.

In a class where students are required to write, and receive feedback on, a total of at least 6000 words, it seems absolutely reasonable that you wouldn’t want more than 25 students in the class. Do you want to grade and comment on more than 150,000 words per class section you are teaching? (At my university, it’s usually three or four sections per semester.) That’s a lot of feedback, and for it to be at all useful in assisting student learning, it’s best if you don’t go mad in the process of giving it.
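For the curious, here’s the back-of-the-envelope arithmetic, taking the 6,000-word minimum, the 25-student cap, and a four-section load as given (your institution’s figures will vary):

```python
# Back-of-the-envelope grading load, using the figures from this post.
words_per_student = 6000   # minimum words each student writes (and gets feedback on)
students_per_section = 25  # the cap for these writing-intensive courses
sections_per_semester = 4  # "usually three or four sections per semester"

words_per_section = words_per_student * students_per_section
words_per_semester = words_per_section * sections_per_semester

print(f"{words_per_section:,} words per section")    # 150,000 words per section
print(f"{words_per_semester:,} words per semester")  # 600,000 words per semester
```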

There’s a recognition, then, that on a practical level, for courses that help students learn by way of a lot of writing, smaller class sizes are good. From the student’s point of view as well, there are arguably additional benefits to a smaller class size, whether being able to ask questions during lectures or class discussions, not feeling lost in the crowd, or what have you.

At least for a certain set of courses, the university recognizes that smaller classes are better and requires that the courses be no larger than 25.

But remember that tight funding? This means that the university has also put demands on departments, schools, and colleges within the university to maintain higher and higher student-faculty ratios.

If you make one set of courses small, to maintain the required student-faculty ratio, you must make other courses big — sometimes very, very big.
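To see the bean-counting at work, treat the required student-faculty ratio (roughly) as a required average section size. A minimal sketch, where the 30-student average and the section counts are invented for illustration:

```python
# If a department must average 30 students per section (an invented target),
# every section capped at 25 has to be offset by a bigger section elsewhere.
required_average = 30
cap = 25
capped_sections = 4  # sections held to the 25-student cap
big_sections = 1     # sections that absorb the difference

total_sections = capped_sections + big_sections
required_students = required_average * total_sections
students_in_capped = cap * capped_sections

# Whoever isn't in a capped section lands in the big one(s).
big_section_size = (required_students - students_in_capped) / big_sections
print(big_section_size)  # 50.0 -- double the size of the capped sections
```

Scale the invented numbers up (more capped sections, a higher required average) and the uncapped sections balloon accordingly.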

But while we’re balancing numbers and counting beans, we are still a teaching-focused university. That might mean that what supports effective teaching and learning should be a constraint on our solutions to the bean-counting problems.

We’re taking as a constraint that composition, critical thinking, and chemistry lab (among others) are courses where keeping class sizes small makes for better teaching and learning.

Is there any reason (beyond budgetary expedience) to think that the courses that are made correspondingly large are also making for better teaching and learning? Is there any subject we teach to a section of 200 that we couldn’t teach better to 30? (And here, some sound empirical research would be nice, not just anecdata.)

I can’t help but wonder if there is some other way to count the beans that would better support our teaching-focused mission, and our students.

In the wake of the Harran plea deal, are universities embracing lab safety?

Earlier this month, prosecutors in Los Angeles reached a plea agreement with UCLA chemistry professor Patrick Harran in the criminal case against him in connection with the 2008 lab accident that resulted in the death of 23-year-old staff research assistant Sheharbano “Sheri” Sangji. Harran, who was facing more than 4 years of jail time if convicted, instead will perform 800 hours of community service and may find himself back in court in the event that his lab is found to have new safety violations in the next five years.

The Sangji family is not satisfied that the plea punishes Harran enough. My worry is about whether the resolution of this case will have a positive impact on safety in academic labs and research settings.

According to The Chronicle of Higher Education,

Several [independent safety advocates] agreed that universities’ research laboratories still remain more dangerous than their corporate counterparts. Yet they also expressed confidence that the impetus for improvement brought by the first filing ever of criminal charges over a fatal university lab accident has not been diluted by the plea bargain. …

[T]he action by California prosecutors “has gotten the attention of virtually every research chemist out there,” even in states that may seem more reluctant to pursue such cases, [Neal R. Langerman, a former associate professor of chemistry at Utah State University who now heads Advanced Chemical Safety, a consulting firm] said. “This is precedent-setting, and now that the precedent is set, you really do not want to test the water, because the water is already boiling.”

As you might expect, the official statement from UCLA plays up the improvements in lab safety put into place in the wake of the accident and points to the creation of the UC Center for Laboratory Safety, which has been holding workshops and surveying lab workers on safety practices and attitudes.

I’m afraid, however, judging from the immediate reaction I’ve seen at my own institution, that we have a long way to go.

In particular, a number of science faculty (who are not chemists) seem to have been getting clear messages in the wake of “that UCLA prosecution” — they didn’t really know the details of the case, nor the names of the people involved — that our university would not be backing them up legally in the event of any safety mishap in the lab or the field. Basically, the rumblings from the higher administrative strata were: No matter how well you’ve prepared yourself, your students, your employees, no matter how many safety measures you’ve put into place, no matter what limitations you’re working with as far as equipment or facilities, if something goes wrong, it’s your ass on the line.

This does not strike me as a productive way to approach safe working conditions as a collective responsibility within an educational institution. I also suspect it’s not a stance that would hold up in court, but since it would probably take another lab tragedy and prosecution to undermine it, I’m hopeful that some sense of institutional ethics will well up and result in a more productive approach.

The most charitable explanation I can come up with is that the higher administrative strata intended to communicate that science faculty have a positive duty to ensure safe working conditions for their students and employees (and themselves). That means that science faculty need to be proactive in assessing their research settings (whether for laboratory or field research) for potential hazards, in educating themselves and those they work with about those hazards, in having workable plans to mitigate the hazards and to respond swiftly and effectively to mishaps. All of that is sensible enough.

However, none of that means that the institution is free of responsibility. Departments, colleges, and university administrators control resources that can make the difference between a pretty safe research environment and a terribly risky one. Institutions, not individual faculty, create and maintain occupational health programs. Institutions can marshal shared resources (including safety training programs and institutional safety officers) that individual faculty cannot.

Moreover, institutions set the institutional tone — the more or less official sense of what is prioritized, of what is valued. If the strongest message about safety that reaches faculty boils down to legal liability and who will ultimately be legally liable, I’m pretty sure the institution still has a great deal of work to do in establishing a real culture of safety.

_____

Related posts:

Suit against UCLA in fatal lab fire raises question of who is responsible for safety.

Crime, punishment, and the way forward: in the wake of Sheri Sangji’s death, what should happen to Patrick Harran?

Facing felony charges in lab death of Sheri Sangji, UCLA settles, Harran stretches credulity.

Why does lab safety look different to chemists in academia and chemists in industry?

Community responsibility for a safety culture in academic chemistry.

Are safe working conditions too expensive for knowledge-builders?

Reluctance to act on suspicions about fellow scientists: inside the frauds of Diederik Stapel (part 4).

It’s time for another post in which I chew on some tidbits from Yudhijit Bhattacharjee’s incredibly thought-provoking New York Times Magazine article (published April 26, 2013) on social psychologist and scientific fraudster Diederik Stapel. (You can also look at the tidbits I chewed on in part 1, part 2, and part 3.) This time I consider the question of why it was that, despite mounting clues that Stapel’s results were too good to be true, other scientists in Stapel’s orbit were reluctant to act on their suspicions that Stapel might be up to some sort of scientific misbehavior.

Let’s look at how Bhattacharjee sets the scene in the article:

[I]n the spring of 2010, a graduate student noticed anomalies in three experiments Stapel had run for him. When asked for the raw data, Stapel initially said he no longer had it. Later that year, shortly after Stapel became dean, the student mentioned his concerns to a young professor at the university gym. Each of them spoke to me but requested anonymity because they worried their careers would be damaged if they were identified.

The bold emphasis here (and in the quoted passages that follow) is mine. I find it striking that even now, when Stapel has essentially been fully discredited as a trustworthy scientist, these two members of the scientific community feel safer not being identified. It’s not entirely obvious to me whether their worry is about being identified as someone who suspected that fabrication was taking place but said nothing to launch an official inquiry, or whether they fear that being identified as someone who was suspicious of a fellow scientist could harm their standing in the scientific community.

If you dismiss that second possibility as totally implausible, read on:

The professor, who had been hired recently, began attending Stapel’s lab meetings. He was struck by how great the data looked, no matter the experiment. “I don’t know that I ever saw that a study failed, which is highly unusual,” he told me. “Even the best people, in my experience, have studies that fail constantly. Usually, half don’t work.”

The professor approached Stapel to team up on a research project, with the intent of getting a closer look at how he worked. “I wanted to kind of play around with one of these amazing data sets,” he told me. The two of them designed studies to test the premise that reminding people of the financial crisis makes them more likely to act generously.

In early February, Stapel claimed he had run the studies. “Everything worked really well,” the professor told me wryly. Stapel claimed there was a statistical relationship between awareness of the financial crisis and generosity. But when the professor looked at the data, he discovered inconsistencies confirming his suspicions that Stapel was engaging in fraud.

If one has suspicions about how reliable a fellow scientist’s results are, doing some empirical investigation seems like the right thing to do. Keeping an open mind and then examining the actual data might well show one’s suspicions to be unfounded.

Of course, that’s not what happened here. So, given a reason for doubt with stronger empirical support (not to mention the fact that scientists are trying to build a shared body of scientific knowledge, which means that unreliable papers in the literature can hurt the knowledge-building efforts of other scientists who trust that the work reported there was done honestly), you would think the time was right for this professor to pass on what he had found to those at the university who could investigate further. Right?

The professor consulted a senior colleague in the United States, who told him he shouldn’t feel any obligation to report the matter.

For all the talk of science, and the scientific literature, being “self-correcting,” it’s hard to imagine the precise mechanism for such self-correction in a world where no scientist who is aware of likely scientific misconduct feels any obligation to report the matter.

But the person who alerted the young professor, along with another graduate student, refused to let it go. That spring, the other graduate student examined a number of data sets that Stapel had supplied to students and postdocs in recent years, many of which led to papers and dissertations. She found a host of anomalies, the smoking gun being a data set in which Stapel appeared to have done a copy-paste job, leaving two rows of data nearly identical to each other.

The two students decided to report the charges to the department head, Marcel Zeelenberg. But they worried that Zeelenberg, Stapel’s friend, might come to his defense. To sound him out, one of the students made up a scenario about a professor who committed academic fraud, and asked Zeelenberg what he thought about the situation, without telling him it was hypothetical. “They should hang him from the highest tree” if the allegations were true, was Zeelenberg’s response, according to the student.

Some might think these students were being excessively cautious, but the sad fact is that scientists faced with allegations of misconduct against a colleague — especially if they are brought by students — frequently side with their colleague and retaliate against those making the allegations. Students, after all, are new members of one’s professional community, so green one might not even think of them as really members. They are low status, they are learning how things work, they are judged likely to have misunderstood what they have seen. And, in contrast to one’s colleagues, students are transients. They are just passing through the training program, whereas you might hope to be with your colleagues for your whole professional life. In a case of dueling testimony, who are you more likely to believe?

Maybe the question should be whether your bias towards believing one over the other is strong enough to keep you from examining the available evidence to determine whether your trust is misplaced.
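As an aside, the copy-paste smoking gun described above is exactly the kind of anomaly a very simple check can surface. Here is a minimal sketch of such a check; the toy data and the similarity threshold are invented for illustration, and the article doesn’t describe the students’ actual procedure:

```python
# Flag pairs of rows whose cells mostly coincide -- the kind of copy-paste
# anomaly described above. Illustrative only.
from itertools import combinations

def suspicious_pairs(rows, min_matching_fraction=0.9):
    """Return pairs of row indices that are nearly identical."""
    flagged = []
    for (i, a), (j, b) in combinations(enumerate(rows), 2):
        matches = sum(x == y for x, y in zip(a, b))
        if matches / len(a) >= min_matching_fraction:
            flagged.append((i, j))
    return flagged

# Toy data set: rows 1 and 3 differ in only one cell out of ten.
data = [
    (5, 3, 4, 2, 6, 5, 4, 3, 5, 4),
    (2, 7, 1, 4, 3, 6, 2, 5, 1, 3),
    (4, 4, 5, 3, 6, 4, 5, 2, 4, 5),
    (2, 7, 1, 4, 3, 6, 2, 5, 1, 2),
]
print(suspicious_pairs(data))  # [(1, 3)]
```

The point isn’t the particular check; it’s that looking at the actual data beats trusting your hunch about a colleague.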

The students waited till the end of summer, when they would be at a conference with Zeelenberg in London. “We decided we should tell Marcel at the conference so that he couldn’t storm out and go to Diederik right away,” one of the students told me.

In London, the students met with Zeelenberg after dinner in the dorm where they were staying. As the night wore on, his initial skepticism turned into shock. It was nearly 3 when Zeelenberg finished his last beer and walked back to his room in a daze. In Tilburg that weekend, he confronted Stapel.

It might not be universally true, but at least some of the people who will lie about their scientific findings in a journal article will lie right to your face about whether they obtained those findings honestly. Yet lots of us think we can tell — at least with the people we know — whether they are being honest with us. This hunch can be just as wrong as the wrongest scientific hunch waiting for us to accumulate empirical evidence against it.

The students seeking Zeelenberg’s help in investigating Stapel’s misbehavior found a situation in which Zeelenberg would have to look at the empirical evidence first before he looked his colleague in the eye and asked him whether he was fabricating his results. They had already gotten him to say, at least in the abstract, that the kind of behavior they had reason to believe Stapel was committing was unacceptable in their scientific community. To make a conscious decision to ignore the empirical evidence would have meant Zeelenberg would have to see himself as displaying a kind of intellectual dishonesty — because if fabrication is harmful to science, it is harmful to science no matter who perpetrates it.

As it was, Zeelenberg likely had to make the painful concession that he had misjudged his colleague’s character and trustworthiness. But having wrong hunches in science is much less of a crime than clinging to those hunches in the face of mounting evidence against them.

Doing good science requires a delicate balance of trust and accountability. Scientists’ default position is to trust that other scientists are making honest efforts to build reliable scientific knowledge about the world, using empirical evidence and methods of inference that they display for the inspection (and critique) of their colleagues. Not to hold this default position means you have to build all your knowledge of the world yourself (which makes achieving anything like objective knowledge really hard). However, this trust is not unconditional, which is where the accountability comes in. Scientists recognize that they need to be transparent about what they did to build the knowledge — to be accountable when other scientists ask questions or disagree about conclusions — or else that trust evaporates. When the evidence warrants it, distrusting a fellow scientist is not mean or uncollegial — it’s your duty. We need the help of others to build scientific knowledge, but if they insist that we ignore evidence of their scientific misbehavior, they’re not actually helping.

Scientific training and the Kobayashi Maru: inside the frauds of Diederik Stapel (part 3).

This post continues my discussion of issues raised in the article by Yudhijit Bhattacharjee in the New York Times Magazine (published April 26, 2013) on social psychologist and scientific fraudster Diederik Stapel. Part 1 looked at how expecting to find a particular kind of order in the universe may leave a scientific community more vulnerable to a fraudster claiming to have found results that display just that kind of order. Part 2 looked at some of the ways Stapel’s conduct did harm to the students he was supposed to be training to be scientists. Here, I want to point out another way that Stapel failed his students — ironically, by shielding them from failure.

Bhattacharjee writes:

[I]n the spring of 2010, a graduate student noticed anomalies in three experiments Stapel had run for him. When asked for the raw data, Stapel initially said he no longer had it. Later that year, shortly after Stapel became dean, the student mentioned his concerns to a young professor at the university gym. Each of them spoke to me but requested anonymity because they worried their careers would be damaged if they were identified.

The professor, who had been hired recently, began attending Stapel’s lab meetings. He was struck by how great the data looked, no matter the experiment. “I don’t know that I ever saw that a study failed, which is highly unusual,” he told me. “Even the best people, in my experience, have studies that fail constantly. Usually, half don’t work.”

In the next post, we’ll look at how this other professor’s curiosity about Stapel’s too-good-to-be-true results led to the unraveling of Stapel’s fraud. But I think it’s worth pausing here to say a bit more on how very odd a training environment Stapel’s research group provided for his students.

None of his studies failed. Since, as we saw in the last post, Stapel was also conducting (or, more accurately, claiming to conduct) his students’ studies, that means none of his students’ studies failed.

This is pretty much the opposite of every graduate student experience in an empirical field that I have heard described. Most studies fail. Getting to a 50% success rate with your empirical studies is a significant achievement.
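A quick calculation shows just how implausible a flawless record is. Take the professor’s estimate above that even good studies succeed only about half the time, and assume (for simplicity) that study outcomes are independent; the study counts below are illustrative:

```python
# Odds of a lab never having a failed study, assuming each study succeeds
# with probability 0.5 (the professor's estimate) independently of the others.
success_rate = 0.5
for n_studies in (10, 24, 50):
    p_all_succeed = success_rate ** n_studies
    print(f"{n_studies} studies, zero failures: p = {p_all_succeed:.2g}")
# 10 studies, zero failures: p = 0.00098
# 24 studies, zero failures: p = 6e-08
# 50 studies, zero failures: p = 8.9e-16
```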

Graduate students who are also Trekkies usually come to recognize that the travails of empirical studies are like a version of the Kobayashi Maru.

Introduced in Star Trek II: The Wrath of Khan, the Kobayashi Maru is a training simulation in which Star Fleet cadets are presented with a civilian ship in distress. Saving the civilians requires the cadet to violate treaty by entering the Neutral Zone (and in the simulation, this choice results in a Klingon attack and the boarding of the cadet’s ship). Honoring the treaty, on the other hand, means abandoning the civilians and their disabled ship in the Neutral Zone. The Kobayashi Maru is designed as a “no-win” scenario. The intent of the test is to discover how trainees face such a situation. Wikipedia notes that, owing to James T. Kirk’s performance on the test, some Trekkies also view the Kobayashi Maru as a problem whose solution depends on redefining the problem.

Scientific knowledge-building turns out to be packed with plans that cannot succeed at yielding the particular pieces of knowledge the scientists hope to discover. This is because scientists are formulating plans on the basis of what is already known to try to reveal what isn’t yet known — so knowing where to look, or what tools to use to do the looking, or what other features of the world are there to confound your ability to get clear information with those tools, is pretty hard.

Failed attempts happen. If they’re the sort of thing that will crush your spirit and leave you unable to shake it off and try it again, or to come up with a new strategy to try, then the life of a scientist will be a pretty hard life for you.

Grown-up scientists have studies fail all the time. Graduate students training to be scientists do, too. But graduate students also have mentors who are supposed to help them bounce back from failure — to figure out the most likely sources of failure, whether it’s worth trying the study again, whether a new approach would be better, whether some crucial piece of knowledge has been learned despite the failure of what was planned. Mentors give scientific trainees a set of strategies for responding to particular failures, and they also give reassurance that even good scientists fail.

Scientific knowledge is built by actual humans who don’t have perfect foresight about the features of the world as yet undiscovered, humans who don’t have perfectly precise instruments (or hands and eyes using those instruments), humans who sometimes mess up in executing their protocols. Yet the knowledge is built, and it frequently works pretty well.

In the context of scientific training, it strikes me as malpractice to send new scientists out into the world with the expectation that all of their studies should work, and without any experience grappling with studies that don’t work. Shielding his students from their Kobayashi Maru is just one more way Diederik Stapel cheated them out of a good scientific training.

Failing the scientists-in-training: inside the frauds of Diederik Stapel (part 2).

In this post, I’m continuing my discussion of the excellent article by Yudhijit Bhattacharjee in the New York Times Magazine (published April 26, 2013) on social psychologist and scientific fraudster Diederik Stapel. The last post considered how being disposed to expect order in the universe might have made other scientists in Stapel’s community less critical of his (fabricated) results than they could have been. Here, I want to shift my focus to some of the harm Stapel did beyond introducing lies to the scientific literature — specifically, the harm he did to the students he was supposed to be training to become good scientists.

I suppose it’s logically possible for a scientist to commit misconduct in a limited domain — say, to make up the results of his own research projects but to make every effort to train his students to be honest scientists. This doesn’t strike me as a likely scenario, though. Publishing fraudulent results as if they were factual is lying to one’s fellow scientists — including the generation of scientists one is training. Moreover, most research groups pursue interlocking questions, meaning that the questions the grad students are working to answer generally build on pieces of knowledge the boss has built — or, in Stapel’s case, “built.” This means that, at minimum, a fabricating PI is probably wasting his trainees’ time by letting them base their own research efforts on claims that there’s no good scientific reason to trust.

And as Bhattacharjee describes the situation for Stapel’s trainees, things for them were even worse:

He [Stapel] published more than two dozen studies while at Groningen, many of them written with his doctoral students. They don’t appear to have questioned why their supervisor was running many of the experiments for them. Nor did his colleagues inquire about this unusual practice.

(Bold emphasis added.)

I’d have thought that one of the things a scientist-in-training hopes to learn in the course of her graduate studies is not just how to design a good experiment, but how to implement it. Making your experimental design work in the real world is often much harder than it seems like it will be, but you learn from these difficulties — about the parameters you ignored in the design that turn out to be important, about the limitations of your measurement strategies, about ways the system you’re studying frustrates the expectations you had about it before you were actually interacting with it.

I’ll even go out on a limb and say that some experience doing experiments can make a significant difference in a scientist’s skill in conceiving of experimental approaches to problems.

That Stapel cut his students out of doing the experiments was downright weird.

Now, scientific trainees probably don’t have the most realistic picture of precisely what competencies they need to master to become successful grown-up scientists in a field. They trust that the grown-up scientists training them know what these competencies are, and that these grown-up scientists will make sure that they encounter them in their training. Stapel’s trainees likely trusted him to guide them. Maybe they thought that he would have them conducting experiments if that were a skill that would require a significant amount of time or effort to master. Maybe they assumed that implementing the experiments they had designed was just so straightforward that Stapel thought they were better served working to learn other competencies instead.

(For that to be the case, though, Stapel would have to be the world’s most reassuring graduate advisor. I know my impostor complex was strong enough that I wouldn’t have believed I could do an experiment my boss or my fellow grad students viewed as totally easy until I had actually done it successfully three times. If I had to bet money, it would be that some of Stapel’s trainees wanted to learn how to do the experiments, but they were too scared to ask.)

There’s no reason, however, that Stapel’s colleagues should have thought it was OK that his trainees were not learning how to do experiments by taking charge of doing their own. If they did know and they did nothing, they were complicit in a failure to provide adequate scientific training to trainees in their program. If they didn’t know, that’s an argument that departments ought to take more responsibility for their trainees and to exercise more oversight rather than leaving each trainee to the mercies of his or her advisor.

And, as becomes clear from the New York Times Magazine article, doing experiments wasn’t the only piece of standard scientific training of which Stapel’s trainees were deprived. Bhattacharjee describes the revelation when a colleague collaborated with Stapel on a piece of research:

Stapel and [Ad] Vingerhoets [a colleague of his at Tilburg] worked together with a research assistant to prepare the coloring pages and the questionnaires. Stapel told Vingerhoets that he would collect the data from a school where he had contacts. A few weeks later, he called Vingerhoets to his office and showed him the results, scribbled on a sheet of paper. Vingerhoets was delighted to see a significant difference between the two conditions, indicating that children exposed to a teary-eyed picture were much more willing to share candy. It was sure to result in a high-profile publication. “I said, ‘This is so fantastic, so incredible,’ ” Vingerhoets told me.

He began writing the paper, but then he wondered if the data had shown any difference between girls and boys. “What about gender differences?” he asked Stapel, requesting to see the data. Stapel told him the data hadn’t been entered into a computer yet.

Vingerhoets was stumped. Stapel had shown him means and standard deviations and even a statistical index attesting to the reliability of the questionnaire, which would have seemed to require a computer to produce. Vingerhoets wondered if Stapel, as dean, was somehow testing him. Suspecting fraud, he consulted a retired professor to figure out what to do. “Do you really believe that someone with [Stapel’s] status faked data?” the professor asked him.

“At that moment,” Vingerhoets told me, “I decided that I would not report it to the rector.”

Stapel’s modus operandi was to make up his results out of whole cloth — to produce “findings” that looked statistically plausible without the muss and fuss of conducting actual experiments or collecting actual data. Indeed, since the thing he was creating that needed to look plausible enough to be accepted by his fellow scientists was the analyzed data, he didn’t bother making up raw data from which such an analysis could be generated.
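Making fabricated summary statistics genuinely plausible is harder than it sounds, because real statistics leave arithmetic footprints. As an illustration of the genre of check that can catch this (not a method the article describes being applied to Stapel’s numbers), here’s a sketch of a GRIM-style test: for integer-valued responses, a reported mean times the sample size has to land near a whole number:

```python
# A GRIM-style consistency check: for integer-valued data (e.g., Likert items),
# a reported mean times N must land within rounding error of a whole number.
# Illustrative only; not the method used in this case.
def grim_consistent(reported_mean, n, decimals=2):
    """Could reported_mean (given to `decimals` places) arise from n integers?"""
    total = reported_mean * n
    # Check the whole numbers nearest the implied total.
    for candidate in (int(total) - 1, int(total), int(total) + 1):
        if round(candidate / n, decimals) == reported_mean:
            return True
    return False

print(grim_consistent(2.54, 28))  # True: 71/28 = 2.5357... rounds to 2.54
print(grim_consistent(2.55, 28))  # False: no 28 integers average to 2.55
```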

Connecting the dots here, this surely means that Stapel’s trainees must not have gotten any experience dealing with raw data or learning how to apply methods of analysis to actual data sets. This left another gaping hole in the scientific training they deserved.

It would seem that those being trained by other scientists in Stapel’s program were getting some experience in conducting experiments, collecting data, and analyzing their data — since that experimentation, data collection, and data analysis became fodder for discussion in the ethics training that Stapel led. From the article:

And yet as part of a graduate seminar he taught on research ethics, Stapel would ask his students to dig back into their own research and look for things that might have been unethical. “They got back with terrible lapses,” he told me. “No informed consent, no debriefing of subjects, then of course in data analysis, looking only at some data and not all the data.” He didn’t see the same problems in his own work, he said, because there were no real data to contend with.

I would love to know the process by which Stapel’s program decided that he was the best one to teach the graduate seminar on research ethics. I wonder if this particular teaching assignment was one of those burdens that his colleagues tried to dodge, or if research ethics was viewed as a teaching assignment requiring no special expertise. I wonder how it’s sitting with them that they let a now-famous cheater teach their grad students how to be ethical scientists.

The whole “those who can’t do, teach” adage rings hollow here.

Are safe working conditions too expensive for knowledge-builders?

Last week’s deadly collapse of an eight-story garment factory building in Dhaka, Bangladesh, has prompted discussions about whether poor countries can afford safe working conditions for workers who make goods that consumers in countries like the U.S. prefer to buy at bargain prices.

Maybe the risk of being crushed to death (or burned to death, or what have you) is just a trade-off poor people are (or should be) willing to accept to draw a salary. At least, that seems to be the take-away message from the crowd arguing that it would cost too much to have safety regulation (and enforcement) with teeth.

It is hard not to consider how this kind of attitude might get extended to other kinds of workplaces — like, say, academic research labs — given that last week UCLA chemistry professor Patrick Harran was also scheduled to return to court for a preliminary hearing on the felony charges of labor code violations brought against him in response to the 2008 fire in his laboratory that killed his employee, Sheri Sangji.

Jyllian Kemsley has a detailed look at how Harran’s defense team has responded to the charges of specific violations of the California Labor Code, charges involving failure to provide adequate training, failure to have adequate procedures in place to correct unsafe conditions or work practices, and failure to require workers wear appropriate clothing for the work being done. Since I’m not a lawyer, it’s hard for me to assess the likelihood that the defense responses to these charges would be persuasive to a judge, but ethically, they’re pretty weak tea.

Sadly, though, it’s weak tea of the exact sort that my scientific training has led me to expect from people directing scientific research labs in academic settings.

When safety training is confined to a single safety video that graduate students are shown when they enter a program, that tells graduate students that their safety is not a big deal in the research activities that are part of their training.

When there’s not enough space under the hood for all the workers in a lab to conduct all the activities that, for safety’s sake, ought to be conducted under the hood — and when the boss expects all those activities to happen without delay — that tells them that a sacrifice in safety to produce quick results is acceptable.

When a student-volunteer needs to receive required ionizing radiation safety training to get a film badge that will give her access to the facility where she can irradiate her cells for an experiment, and the PI, upon hearing that the next training session is three weeks away, says to the student-volunteer, “Don’t bother; use my film badge,” that tells people in the lab that the PI is unwilling to lose three weeks of unpaid labor on one aspect of a research project just to make the personnel involved a little bit safer.

When people running a lab take an attitude of “Eh, young people are going to dress how they’re going to dress,” rather than imposing a clear rule that personnel whose dress is unsafe for a given activity don’t get to undertake that activity, that tells the personnel in the lab that whatever cost is involved in holding this line — losing a day’s worth of work, being viewed by one’s underlings as strict rather than cool — has been judged too high relative to the benefit of making personnel in the lab safer.

When university presidents or other administrators proclaim that knowledge-builders “must continue to recalibrate [their] risk tolerance” by examining their “own internal policies and ask[ing] the question—do they meet—or do they exceed—our legal or regulatory requirements,” that tells knowledge-builders at those universities that people with significantly more power than them judge efforts to make things safer for knowledge-builders (and for others, like the human subjects of their research) as an unnecessary burden. When institutions need to become leaner, or more agile, shouldn’t researchers (and human subjects) do their part by accepting more risk as the price of doing business?

To be sure, safety isn’t free. But there are also costs to being less safe in academic research settings.

For example, personnel develop lax attitudes toward risks and trainees take these attitudes with them when they go out in the world as grown-up scientists. Surrounding communities can get hurt by improper disposal of hazardous materials, or by inadequate safety measures taken by researchers working with infectious agents who then go home and cough on their families and friends. Sometimes, personnel are badly injured, or killed.

And, if academic scientists are dragging their feet on making things safer for the researchers on their team because it takes time and effort to investigate risks and make sensible plans for managing them, to develop occupational health plans, and to institute standard operating procedures that everyone on the research team knows and follows, I hope they’re noticing that facing felony charges stemming from safety problems in their labs can also take lots of time and effort.

UPDATE: The Los Angeles Times reports that Patrick Harran will stand trial after an LA County Superior Court judge denied a defense motion to dismiss the case.

The danger of pointing out bad behavior: retribution (and the community’s role in preventing it).

There has been a lot of discussion of Dario Maestripieri’s disappointment at the unattractiveness of his female colleagues in the neuroscience community. Indeed, it’s notable how much of this discussion has been in public channels, not just private emails or conversations conducted with sound waves which then dissipate into the aether. No doubt, this is related to Maestripieri’s decision to share his hot-or-not assessment of the women in his profession in a semi-public space where it could achieve more permanence — and amplification — than it would have as an utterance at the hotel bar.

His behavior became something that any member of his scientific community with an internet connection (and a whole lot of people outside his scientific community) could inspect. The impacts of an actual, rather than hypothetical, piece of behavior could be brought into the conversation about the climate of professional and learning communities, especially for the members of these communities who are women.

It’s worth pointing out that there is nothing especially surprising about such sexist behavior* within these communities. The people in the communities who have been paying attention have seen behaviors like this before (and besides have good empirical grounds for expecting that gender biases may be a problem). But many sexist behaviors go unreported and unremarked, sometimes because of the very real fear of retribution.

What kind of retribution could there be for pointing out a piece of behavior that has sexist effects, or arguing that it is an inappropriate way for a member of the professional community to behave?

Let’s say you are an early career scientist, applying for a faculty post. As it happens, Dario Maestripieri’s department, the University of Chicago Department of Comparative Human Development, currently has an open search for a tenure-track assistant professor. There is a non-zero chance that Dario Maestripieri is a faculty member on that search committee, or that he has the ear of a colleague who is.

It is not a tremendous stretch to hypothesize that Dario Maestripieri may not be thrilled at the public criticism he’s gotten in response to his Facebook post (including some quite close to home). Possibly he’s looking through the throngs of his Facebook friends and trying to guess which of them is the one who took the screenshot of his ill-advised post and shared it more widely. Or looking through his Facebook friends’ Facebook friends. Or considering which early career neuroscientists might be in-real-life friends or associates with his Facebook friends or their Facebook friends.

Now suppose you’re applying for that faculty position in his department and you happen to be one of his Facebook friends,** or one of their Facebook friends, or one of the in-real-life friends of either of those.

Of course, shooting down an applicant for a faculty position for the explicit reason that you think he or she may have cast unwanted attention on your behavior towards your professional community would be a problem. But there are probably enough applicants for the position, enough variation in the details of their CVs, and enough subjective judgment on the part of the members of the search committee in evaluating all those materials that it would be possible to cut all applicants who are Dario Maestripieri’s Facebook friends (or their Facebook friends, or in-real-life friends of either of those) from consideration while providing some other plausible reason for their elimination. Indeed, the circle could be broadened to eliminate candidates with letters of recommendation from Dario Maestripieri’s Facebook friends (or their Facebook friends, or in-real-life friends of either of those), candidates who have coauthored papers with Dario Maestripieri’s Facebook friends (or their Facebook friends, or in-real-life friends of either of those), etc.

And, since candidates who don’t get the job generally aren’t told why they were found wanting — only that some other candidate was judged to be better — these other plausible reasons for shooting down a candidate would only even matter in the discussions of the search committee.

In other words, real retaliation (rejection from consideration for a faculty job) could fall on people who are merely suspected of sharing information that led to Dario Maestripieri becoming the focus of a public discussion of sexist behavior — not just on the people who have publicly spoken about his behavior. And, the retaliation would be practically impossible to prove.

If you don’t think this kind of possibility has a chilling effect on the willingness of members of a professional community to speak up when they see a relatively powerful colleague behave in ways they think are harmful, you just don’t understand power dynamics.

And even if Dario Maestripieri has no part at all in his department’s ongoing faculty search, there are other interactions within his professional community in which his suspicions about who might have exposed his behavior could come into play. Senior scientists are routinely asked to referee papers submitted to scientific journals and to serve on panels and study sections that rank applications for grants. In some of these circumstances, the identities of the scientists one is judging (e.g., for grants) are known to the scientists making the evaluations. In others, they are masked, but the scientists making the evaluations have hunches about whose work they are evaluating. If those hunches are mingled with hunches about who could have shared evidence of behavior that is now making the evaluator’s life difficult, it’s hard to imagine the grant applicant or the manuscript author getting a completely fair shake.

Let’s pause here to note that the attitude Dario Maestripieri’s Facebook posting reveals, that it’s appropriate to evaluate women in the field on their physical beauty rather than their scientific achievements, could itself be a source of bias as he does things that are part of a normal professional life, like serving on search committees, reviewing journal submissions and grant applications, evaluating students, and so forth. A bias like this could manifest itself in a preference for hiring job candidates one finds aesthetically pleasing. (Sure, academic job application packets usually don’t include a headshot, but even senior scientists have probably heard of Google Image search.) Or it could manifest itself in a preference against hiring more women (since too high a concentration of female colleagues might be perceived as increasing the likelihood that one would be taken to task for freely expressing one’s aesthetic preferences about women in the field). Again, it would be extraordinarily hard to prove the operation of such a bias in any particular case — but that doesn’t rule out the possibility that it is having an effect in activities where members of the professional community are supposed to be as objective as possible.

Objectivity, as we’ve noted before, is hard.

We should remember, though, that faculty searches are conducted by committees, rather than by a single individual with the power to make all the decisions. And, the University of Chicago Department of Comparative Human Development (as well as the University of Chicago more generally) may recognize that it is likely to be getting more public scrutiny as a result of the public scrutiny Dario Maestripieri has been getting.

Among other things, this means that the department and the university have a real interest in conducting a squeaky-clean search that avoids even the appearance of retaliation. In any search, members of the search committee have a responsibility to identify, disclose, and manage their own biases. In this search, discharging that responsibility is even more vital. In any search, members of the hiring department have a responsibility to discuss their shared needs and interests, and how these should inform the selection of the new faculty member. In this search, that discussion of needs and interests must include a discussion of the climate within the department and the larger scientific community — what it is now, and what members of the department think it should be.

In any search, members of the hiring department have an interest in sharing their opinions on who the best candidate might be, and to having a dialogue around the disagreements. In this search, if it turns out one of the disagreements about a candidate comes down to “I suspect he may have been involved in exposing my Facebook post and making me feel bad,” well, arguably there’s a responsibility to have a discussion about that.

Ask academics what it’s like to hire a colleague and it’s not uncommon to hear them describe the experience as akin to entering a marriage. You’re looking for someone with whom you might spend the next 30 years, someone who will grow with you, who will become an integral part of your department and its culture, even to the point of helping that departmental culture grow and change. This is a good reason not to choose the new hire based on the most superficial assessment of what each candidate might bring to the relationship — and to recognize that helping one faculty member avoid discomfort might not be the most important thing.

Indeed, Dario Maestripieri’s colleagues may have all kinds of reasons to engage him in uncomfortable discussions about his behavior that have nothing to do with conducting a squeaky-clean faculty search. Their reputations are intertwined, and leaving things alone rather than challenging Dario Maestripieri’s behavior may impact their own ability to attract graduate students or maintain the respect of undergraduates. These are things that matter to academic scientists — which means that Dario Maestripieri’s colleagues have an interest in pushing back for their own good and the good of the community.

The pushback, if it happens, is likely to be just as invisible publicly as any retaliation against job candidates for possibly sharing the screenshot of Dario Maestripieri’s Facebook posting. If positive effects are visible, it might make it seem less dangerous for members of the professional community to speak up about bad behavior when they see it. But if the outward appearance is that nothing has changed for Dario Maestripieri and his department, expect that there will be plenty of bad behavior that is not discussed in public because the career costs of doing so are just too high.

______
* This is not at all an issue about whether Dario Maestripieri is a sexist. This is an issue about the effects of the behavior, which have a disproportionate negative impact on women in the community. I do not know, or care, what is in the heart of the person who displays these behaviors, and it is not at all relevant to a discussion of how the behaviors affect the community.

** Given the number of his Facebook friends and their range of ages, career stages, etc., this doesn’t strike me as improbable. (At last check, I have 11 Facebook friends in common with Dario Maestripieri.)

Community responsibility for a safety culture in academic chemistry.

This is another approximate transcript of a part of the conversation I had with Chemjobber that became a podcast. This segment (from about 29:55 to 52:00) includes our discussion of what a just punishment might look like for PI Patrick Harran for his part in the Sheri Sangji case. From there, our discussion shifted to the question of how to make the culture of academic chemistry safer:

Chemjobber: One of the things that I guess I’ll ask is whether you think we’ll get justice out of this legal process in the Sheri Sangji case.

Janet: I think about this, I grapple with this, and about half the time when I do, I end up thinking that punishment — and figuring out the appropriate punishment for Patrick Harran — doesn’t even make my top-five list of things that should come out of all this. I kind of feel like a decent person should feel really, really bad about what happened, and should devote his life forward from here to making the conditions that enabled the accident that killed Sheri Sangji go away. But, you know, maybe he’s not a decent person. Who the heck can tell? And certainly, once you put things in the context where you have a legal team defending you against criminal charges — that tends to obscure the question of whether you’re a decent person or not, because suddenly you’ve got lawyers acting on your behalf in all sorts of ways that don’t look decent at all.

Chemjobber: Right.

Janet: I think the bigger question in my mind is how does the community respond? How does the chemistry department at UCLA, how does the larger community of academic chemistry, how do Patrick Harran’s colleagues at UCLA and elsewhere respond to all of this? I know that there are some people who say, “Look, he really fell down on the job safety-wise, and in terms of creating an environment for people working on his behalf, and someone died, and he should do jail time.” I don’t actually know if putting him in jail changes the conditions on the outside, and I’ve said that I think, in some ways, tucking him away in jail for however many months makes it easier for the people who are still running academic labs while he’s incarcerated to say, “OK, the problem is taken care of. The bad actor is out of the pool. Not a problem,” rather than looking at what it is about the culture of academic chemistry that has us devoting so little of our time and energy to making sure we’re doing this safely. So, if it were up to me, if I were the Queen of Just Punishment in the world of academic chemistry, I’ve said his job from here on out should be to be Safety in the Research Culture Guy. That’s what he gets to work on. He doesn’t get to go forward and conduct new research on some chemical question like none of this ever happened. Because something happened. Something bad happened, and the reason something bad happened, I think, is because of a culture in academic chemistry where it was acceptable for a PI not to pay attention to safety considerations until something bad happened. And that’s got to change.

Chemjobber: I think it will change. I should point out here that if your proposed punishment were enacted, it would be quite a punishment, because he wouldn’t get to choose what he worked on anymore, and that, to a great extent, is the joy of academic research, that it’s self-directed and that there is lots and lots of freedom. I don’t get to choose the research problems I work on, because I do it for money. My choices are more or less made by somebody else.

Janet: But they pay you.

Chemjobber: But they pay me.

Janet: I think I’d even be OK saying maybe Harran gets to do 50% of his research on self-directed research topics. But the other 50% is he has to go be an evangelist for changing how we approach the question of safety in academic research.

Chemjobber: Right.

Janet: He’s still part of the community, he’s still “one of us,” but he has to show us how we are treading dangerously close to the conditions that led to the really bad thing that happened in his lab, so we can change that.

Chemjobber: Hmm.

Janet: And not just make it an individual thing. I think all of the attempts to boil what happened down to all being the individual responsibility of the technician, or of the PI, or it’s a split between the individual responsibility of one and the individual responsibility of the other, totally misses the institutional responsibility, and the responsibility of the professional community, and how systemic factors that the community is responsible for failed here.

Chemjobber: Hmm.

Janet: And I think sometimes we need individuals to step up and say, part of me acknowledging my personal responsibility here is to point to the ways that the decisions I made within the landscape we’ve got — of what we take seriously, of what’s rewarded and what’s punished — led to this really bad outcome. I think that’s part of the power here is when academic chemists say, “I would be horrified if you jailed this guy because this could have happened in any of our labs,” I think they’re right. I think they’re right, and I think we have to ask how it is that conditions in these academic communities got to the point where we’re lucky that more people haven’t been seriously injured or killed by some of the bad things that could happen — that we don’t even know that we’re walking into because safety gets that short shrift.

Chemjobber: Wow, that’s heavy. I’m not sure whether there are industrial chemists whose primary job is to think about safety. Is part of the issue we have here that safety has been professionalized? We have industrial chemical hygienists and safety engineers. Every university has an EH&S [environmental health and safety] department. Does that make safety somebody else’s problem? And maybe if Patrick Harran were to become a safety evangelist, it would be a way of saying it’s our problem, and we all have to learn, we have to figure out a way to deal with this?

Janet: Yeah. I actually know that there exist safety officers in academic science departments, partly because I serve on some university committees with people who fill that role — so I know they exist. I don’t know how much the people doing research in those departments actually talk with those safety officers before something goes wrong, or how much of it goes beyond “Oh, there’s paperwork we need to make sure is filed in the right place in case there’s an inspection,” or something like that. But it strikes me that safety should be more collaborative. In some ways, wouldn’t that be a more gripping weekly seminar to have in a chemistry department for grad students working in the lab, even just once a month on the weekly seminar, to have a safety roundtable? “Here are the risks that we found out about in this kind of work,” or talking about unforeseen things that might happen, or how do you get started finding out about proper precautions as you’re beginning a new line of research? What’s your strategy for figuring that out? Who do you talk to? I honestly feel like this is a part of chemical education at the graduate level that is extremely underdeveloped. I know there’s been some talk about changing the undergraduate chemistry degree so that it includes something like a certificate program in chemical safety, and maybe that will fix it all. But I think the only thing that fixes it all is really making it part of the day to day lived culture of how we build new knowledge in chemistry, that the safety around how that knowledge gets built is an ongoing part of the conversation.

Chemjobber: Hmm.

Janet: It’s not something we talk about once and then never again. Because that’s not how research works. We don’t say, “Here’s our protocol. We never have to revisit it. We’ll just keep running it until we have enough data, and then we’re done.”

Chemjobber: Right.

Janet: Show me an experiment that’s like that. I’ve never touched an experiment like that in my life.

Chemjobber: So, how many times do you remember your Ph.D. advisor talking to you about safety?

Janet: Zero. He was a really good advisor, he was a very good mentor, but essentially, how it worked in our lab was that the grad students who were further on would talk to the grad students who were newer about “Here’s what you need to be careful about with this reaction,” or “If you’ve got overflow of your chemical waste, here’s who to call to do the clean-up,” or “Here’s the paperwork you fill out to have the chemical waste hauled away properly.” So, the culture was the people who were in the lab day to day were the keepers of the safety information, and luckily I joined a lab where those grad students were very forthcoming. They wanted to share that information. You didn’t have to ask because they offered it first. I don’t think it happens that way in every lab, though.

Chemjobber: I think you’re right. The thorniness of the problem of turning chemical safety into a day to day thing, within the lab — within a specific group — is you’re relying on this group of people that are transient, and they’re human, so some people really care about it and some people tend not to care about it. I had an advisor who didn’t talk about safety all the time but did, on a number of occasions, yank us all short and say, “Hey, look, what you’re doing is dangerous!” I clearly remember specific admonishments: “Hey, that’s dangerous! Don’t do that!”

Janet: I suspect that may be more common in organic chemistry than in physical chemistry, which is my area. You guys work with stuff that seems to have a lot more potential to do interesting things in interesting ways. The other thing, too, is that in my research group we were united by a common set of theoretical approaches, but we all worked on different kinds of experimental systems, which had different kinds of hazards. The folks doing combustion reactions had different things to worry about than I did, working with my aqueous reaction in a flow-through reactor, while someone in the next room was working with enzymatic reactions. We were all over the map. Nothing that any of us worked with seemed to have real deadly potential, at least as we were running it, but who knows?

Chemjobber: Right.

Janet: And given that different labs have very different dynamics, that could make it hard to actually implement a desire to have safety become part of the day-to-day discussions people are having as they’re building the knowledge. But this might really be a good place for departments and graduate training programs to step up. To say, “OK, you’ve got your PI who’s running his or her own fiefdom in the lab, but we’re the other professional parental unit looking out for your well-being, so we’re going to have these ongoing discussions, with graduate cohorts made up of students who are working in different labs, about safety and how to think about safety where the rubber meets the road.” Actually bringing those discussions out of the research group and the research group meeting might provide a space where people can become reflective about how things go in their own labs, can see something about how things are being done differently in other labs, and can start piecing together strategies, start thinking about what they want the practices to be like when they’re the grown-up chemists running their own labs: how they want to make safety something that’s part of the job, not an add-on that’s been slapped on or something that’s been forgotten altogether.

Chemjobber: Right.

Janet: But of course, graduate training programs would have to care enough about that to figure out how to put resources toward it, to make it happen.

Chemjobber: I’m in profound sympathy with the people who would have to figure out how to do that. I don’t really know anything about the structure of a graduate training program other than, you know, “Do good work, and try to graduate sooner rather than later.” But I assume that in the last 20 to 30 years, there have been new mandates like “OK, you all need to have some kind of ethics component” —

Janet: — because ethics coursework will keep people from cheating! Except that’s an oversimplified equation. But ethics is a requirement they’re heaping on, and safety could certainly be another. The question is how to do that sensibly rather than making it clear that we’re doing this only because there’s a mandate from someone else that we do it.

Chemjobber: One of the things that I’ve always thought about in terms of how to better inculcate safety in academic labs is maybe to have training that happens every year and takes a week. New first-years come in and get run through some sort of lab safety exercise where you go and set up the experiment and weird things are going to happen. It’s kind of an artificial environment where you have to go in and run a dangerous reaction as a drill that reminds you that there are real-world consequences. I think ChemBark talked about how, on Caltech Safety Day, they brought out one of the lasers and put a hole through an apple. Since Paul is an organic chemist, I don’t think he does that very often, but his response was “Oh, if I enter one of these laser labs, I should probably have my safety glasses on.” There’s a limit to the effectiveness of that sort of stuff. You have to really, really think about how to design it, and a week out of a year is a long time, and who’s going to run it? I think your idea of the older students in the lab being the ones who really do a lot of the day-to-day safety stuff is important. What happens when there are no older students in the lab?

Janet: That’s right, when you’re the first cohort in the PI’s lab.

Chemjobber: Or, when there hasn’t been much funding for students and suddenly now you have funding for students.

Janet: And there’s also the question of going from a sparsely populated lab to a really crowded lab when you have the funding but you don’t suddenly have more lab space. And crowded labs have different kinds of safety concerns than sparsely populated labs.

Chemjobber: That’s very true.

Janet: I also wonder whether the “grown-up” chemists, the postdocs and the PIs, ought to be involved in some sort of regular safety … I guess casting it as “training” is likely to get people’s hackles up, and they’re likely to say, “I have even less time for this than my students do.”

Chemjobber: Right.

Janet: But at the same time, pretending that they learned everything they need to know about safety in grad school? Really? Really you did? When we’re talking now about how the safety training for graduate students may be inadequate, you magically got the training that tells you everything you need to know from here on out about safety? That seems weird. And also, presumably, the risks of certain kinds of procedures and certain kinds of reagents — that’s something about which our knowledge continues to increase as well. So, finding ways to keep up on that, to come up with safer techniques and better responses when things do go wrong — some kind of continuing education, continuing involvement with that. If there were a way to do it that included the PIs and the people they’re employing or training, to engage them together, maybe that would be effective.

Chemjobber: Hmm.

Janet: It would at least make it seem less like, “This is education we have to give our students, this is one more requirement to throw on the pile, but we wouldn’t do it if we had the choice, because it gets in the way of making knowledge.” Making knowledge is good. I think making knowledge is important, but we’re human beings making knowledge and we’d like to live long enough to appreciate that knowledge. Graduate students shouldn’t be consumable resources in the knowledge-building the same way that chemical reagents are.

Chemjobber: Yeah.

Janet: Because I bet you the disposal paperwork on graduate students is a fair bit more rigorous than for chemical waste.

Why does lab safety look different to chemists in academia and chemists in industry?

Here’s another approximate transcript of the conversation I had with Chemjobber that became a podcast. In this segment (from about 19:30 to 29:30), we consider how reactions to the Sheri Sangji case sound different when they’re coming from academic chemists than when they’re coming from industry, and we spin some hypotheses about what might be going on behind those differences:

Chemjobber: I know that you wanted to talk about the response of industrial chemists versus academic chemists to the Sheri Sangji case.

Janet: This is one of the things that jumps out at me in the comment threads on your blog posts about the Sangji case. (Your commenters, by the way, are awesome. What a great community of commenters engaging with this stuff.) It really does seem that the commenters who are coming from industry are saying, “These conditions that we’re hearing about in the Harran lab (and maybe in academic labs in general) are not good conditions for producing knowledge as safely as we can.” And the academic commenters are saying, “Oh come on, it’s like this everywhere! Why are you going to hold this one guy responsible for something that could have happened to any of us?” It shines a light on something interesting: academic labs and industrial labs function really differently in how they build knowledge.

Chemjobber: Yeah, I don’t know. It’s very difficult for me to separate out whether it’s culture or law or something else. Certainly I think there’s a culture aspect to it, which is that every large company and most small companies really try hard to have some sort of a safety culture. Whether or not they actually stick to it is a different story, but what I’ve seen is that the bigger the company, the more it really matters. Part of it, I think, is that people are older and a little bit wiser, and they’re better at looking over each other’s shoulders and saying, “What are you doing over there?” and “So, you’re planning to do that? That doesn’t sound like a great idea.” It seems like there’s less of that in academia. And then there’s the regulatory aspect of it. Industrial chemists are workers, the companies they’re working for are employers, and there’s a clear legal aspect to that. Even as under-resourced as OSHA is, there is an actual legal structure prepared to deal with accidents. If the Sangji incident had happened at a very large company, most people think that heads would have rolled, letters would have been placed in evaluation files, and careers would be over.

Janet: Or at least the lab would probably have been shut down until a whole bunch of stuff was changed.

Chemjobber: But in academia, it looks like things are different.

Janet: I have some hunches that perhaps support some of your hunches here about where the differences are coming from. First of all, the set-up in academia assumes radical autonomy on the part of the PI about how to run his or her lab. Much of that is for the good, as far as allowing different ways to tackle the creative problems of how to ask the scientific questions to better shake loose the piece of knowledge you’re trying to shake loose, or allowing a range of different work habits that might be successful for these people you’re training to be grown-up scientists in your scientific field. And along with that radical autonomy — your lab is your fiefdom — in a given academic chemistry department you’re also likely to have a wide array of chemical sub-fields that people are exploring. So, depending on the size of your department, you can’t necessarily count on there being more than a couple of other PIs in the department who understand your work well enough to have deep insight into whether what you’re doing is safe or really dangerous. It’s a different kind of resource that you have available right at hand — there’s maybe a different kind of peer pressure in the immediate professional and work environment acting on the industrial chemist than on the academic chemist. I think that probably plays some role in why PIs in academia maybe aren’t as up on the potential safety risks of new work they’re doing as they might be otherwise. And then, of course, there are the really different kinds of rewards people are working for in industry versus academia, and the way the whole tenure race ends up asking more and more of people with the same 24 hours in the day as anyone else. So, people on the tenure track start asking, “What are the things I’m really rewarded for? Because obviously, if I’m going to succeed, that’s where I have to focus my attention.”

Chemjobber: It’s funny how the “T” word keeps coming up.

Janet: By the same token, in a university system that has consistently tried to make it easier to fire faculty at whim because they’re expensive, I sort of see the value of tenure. I’m not at all arguing that tenure is something academic chemists don’t need. But it may be that the particulars of how we evaluate people for tenure are incentivizing behaviors that don’t help the safety of the people building the knowledge, or the well-being of the people who are training to be grown-ups in these professional communities.

Chemjobber: That’s right. We should just say specifically that in this particular case, Patrick Harran already had tenure, and I believe he is still a chaired professor at UCLA.

Janet: I think maybe the thing to point out is that some of these expectations, some of these standard operating procedures within disciplines in academia, are heavily shaped by the things that are rewarded for tenure, and then for promotion to full professor, and then whatever else. So, even if you’re tenured, you’re still soaking in that same culture that is informing the people who are trying to get permission to stay there permanently rather than being thanked for their six years of service and shown the door. You’re still soaking in that culture that says, “Here’s what’s really important.” Because if something else was really important, then by golly that’s how we’d be choosing who gets to stay here for reals and who’s just passing through.

Chemjobber: Yes.

Janet: I don’t know as much about the typical life cycle of the employee in industrial chemistry, but my sense is that maybe the fact that grad students and postdocs and, to some extent, technicians are sort of transient in the community of academic chemistry makes a difference as well. They’re seen as people who are passing through. The people who are more permanent fixtures in that world either forget that newcomers arrive not knowing all the stuff that the people who have been there for a long, long time know, or they’re sort of making a calculation, whether they realize it or not, about how important it is to convey some of what they know to the transients in their academic labs.

Chemjobber: Yeah, I think that’s true. Numerically, there’s certainly a lot less turnover in industry than there is in academic labs.

Janet: I would hope so!

Chemjobber: Especially from the bench-worker perspective. It’s unfortunate that layoffs happen (topic for another podcast!), but that seems to be the main source of turnover in industry these days.

Safety in academic chemistry labs (with some thoughts on incentives).

Earlier this month, Chemjobber and I had a conversation that became a podcast. We covered lots of territory, from the Sheri Sangji case, to the different perspectives on lab safety in industry and academia, to broader questions about how to make attention to safety part of the culture of chemistry. Below is a transcript of a piece of that conversation (from about 07:45 to 19:25). I think there are some relevant connections here to my earlier post about strategies for delivering ethics training — a post which Jyllian Kemsley notes may have some lessons for safety-training, too.

Chemjobber: I think, academic-chemistry-wise, we might do better at looking out for premeds than we do at looking out for your typical first-year graduate student in the lab.

Janet: Yeah, and I wonder why that is, actually, given the excess of premeds. Maybe that’s the wrong place to put our attention.* But maybe the assumption is that, you know, not everyone taking a chemistry lab course is necessarily going to come into the lab knowing everything they need to know to be safe. And that’s probably a safe assumption to make even about people who are good in chemistry classes. So, that’s one of those things I think we could do a lot better at: just recognizing that there are hazards, and that people who have never been in these situations before don’t necessarily know how to handle them.

Chemjobber: Yeah, I agree. I don’t know what the best way is to make sure to inculcate that sort of lab safety stuff into graduate school. Because graduate school research is supposed to be kind of free-flowing and spontaneous — you have a project and you don’t really know where it’s going to lead you. On the other hand, a premed organic chemistry class is a really artificial environment where there is an obvious beginning and an obvious end and you stick the safety discussion right at the beginning. I remember doing this, where you pull out the MSDS that’s really scary sounding and you scare the pants off the students.

Janet: I don’t even think alarming them is necessarily the way to go, but just saying, hey, it matters how you do this, it matters where you do this, this is why it matters.

Chemjobber: Right.

Janet: And I guess in research, you’re right, there is this very open-ended, free-flowing thing. You’re trying to build knowledge that maybe doesn’t exist yet. You don’t know where it’s going to go. You don’t necessarily know what the best way to build that knowledge is going to be. I think where we fall short sometimes is that there may be an awful lot of knowledge already out there somewhere saying that if you take this approach, with these techniques or with these chemicals, here are some dangers that are known, here are some risks that someone knows about. You may not know them yet, but maybe we need to do better in the conceiving-of-the-project stage at making that part of the search of the prior literature. Not just “What do we know about this reaction mechanism?” but “What do we know about the gnarly reagents you need to be able to work with to pursue a similar kind of reaction?”

Chemjobber: Yeah. My understanding is that in the UK, before you do every experiment, there’s supposed to be a formalized written risk analysis. UK listeners can comment on whether those actually happen. But it seems like they do, because when you see online conversation about it, it’s like, “What? You guys don’t do that in the US?” No, we don’t.

Janet: There are lots of things we don’t do. We don’t have a national health service, either.

Chemjobber: But how would you make the bench-level researcher do that risk analysis? How does the PI make the bench-level researcher do that? I don’t know. … Neal Langerman, a prominent chemical safety expert, and Beryl Benderly, who writes on the Sheri Sangji case, have both talked about this idea, which is basically that we should fully and totally incentivize it by tying academic lab safety to grants and tenure. What do you think?

Janet: I think the intuition is right that if there’s no real consequence for not caring about safety, some academic researchers, making a rational calculation about what they have to do, what they’re going to be rewarded for, and what they’re going to be punished for, are going to say, “This would be nice in a perfect world. But there really aren’t enough hours in the day, and I’ve got to churn out the data, and I’ve got to get it analyzed and get the manuscript submitted, especially because I think that other group that was working on something like this might be getting close, and lord knows we don’t want to get scooped.” You know, if there’s no consequence for not doing it, if there’s no culture of doing it, if there’s no kind of repercussion among their peers and their professional community for not doing it, a large number of people are going to make the rational calculation that there’s no point in doing it.

Chemjobber: Yeah.

Janet: Maybe they’ll do it as a student exercise or something, but you know what? Students are pretty clever, and they get to a point where they actually watch what the PI who is advising them does and form something like a model of “this is what you need to do to be a successful PI.” And all the parts of what their PI does that are invisible to them? At least to a first approximation, those are not part of the model.

Chemjobber: Right. I’ve been on record as saying that I find tying lab safety to tenure especially to be really dangerous, because you’re giving an incredible incentive to hide incidents. I mean, “For everybody’s sake, sweep this under the rug!” is what might come of this. Obviously, if somebody dies, you can’t hide that.

Janet: Hard to hide unless you’ve got off-the-books grad students, which … why would you do that?

Chemjobber: Are you kidding? There’s a huge supply of them already! But my concern with tying lab safety to tenure is that I have a difficult time seeing how you would make that a metric other than “If you’ve reported an accident, you will not get tenure,” or “If you have more than two accidents a year, you will not get tenure.” For the marginal cases, the incentive becomes very high to hide these accidents.

Janet: Here’s a way it might work, though — and I know this sort of goes against the grain, since tenure committees much prefer things they can count to things they have to think about, which is why the number of publications and the impact factor somehow become way more important than the quality or importance of the publications as judged by experts in the field. But something like this might work: you could say that what we’re going to look at in evaluating safety, and your commitment to safety, for your grants and tenure is whether you’ve developed a plan. We’re going to look at what you’ve done to talk with the people in your lab about the plan, and at what you’ve done to involve them in executing it. So we’re going to look at it as maybe a part of your teaching, a part of your mentoring — and here, I know some people are going to laugh, because mentoring is another one of those things that is presumably supposed to be happening in academic chemistry programs, but whether it’s seriously evaluated, other than by counting the number of students you graduate per year … you know, maybe it’s not evaluated as rigorously as it might be. But if it became a matter of “Show us the steps you’re taking to incorporate an awareness of and a seriousness about safety into how you train these graduate students to be grown-up chemists,” that’s a different kind of thing from “Oh, and did you have any accidents or not?” Because sometimes accidents happen because you haven’t paid attention at all to safety, but sometimes accidents are really just bad luck.

Chemjobber: Right.

Janet: And you know, maybe this isn’t going to happen every place, but at places like my university, in our tenure dossiers, they take seriously things like grant proposals we have written as part of our scholarly work, whether or not they get funded. You include them so the people evaluating your tenure dossier can evaluate the quality of your grant proposals, and you get some credit for that work even if it’s a bad pay-line year. So you might get credit for a safety plan and evidence of its implementation even if it’s been a bad year as far as accidents go.

Chemjobber: I think that’s fair. You know, I think that everybody hopes that with a high-stakes thing like tenure, there’s lots of “human factor” and relatively little number-crunching.

Janet: Yeah, but you know, then you’re on the committee that has to evaluate a large number of dossiers. Human nature kicks in and counting is easier than evaluating, isn’t it?

______
* Let the record reflect that despite our joking about “excesses” of premeds, neither Chemjobber nor I have it in for premeds. Especially now that neither of us is TAing a premed course.