Safety in academic chemistry labs (with some thoughts on incentives).

Earlier this month, Chemjobber and I had a conversation that became a podcast. We covered lots of territory, from the Sheri Sangji case, to the different perspectives on lab safety in industry and academia, to broader questions about how to make attention to safety part of the culture of chemistry. Below is a transcript of a piece of that conversation (from about 07:45 to 19:25). I think there are some relevant connections here to my earlier post about strategies for delivering ethics training — a post which Jyllian Kemsley notes may have some lessons for safety-training, too.

Chemjobber: I think, academic-chemistry-wise, we might do better at looking out after premeds than we do at looking out after your typical first year graduate student in the lab.

Janet: Yeah, and I wonder why that is, actually, given the excess of premeds. Maybe that’s the wrong place to put our attention.* But maybe the assumption is that, you know, not everyone taking a chemistry lab course is necessarily going to come into the lab knowing everything they need to know to be safe. And that’s probably a safe assumption to make even about people who are good in chemistry classes. So, that’s one of those things that I think we could do a lot better at, just recognizing that there are hazards and that people who have never been in these situations before don’t necessarily know how to handle them.

Chemjobber: Yeah, I agree. I don’t know what the best way is to make sure to inculcate that sort of lab safety stuff into graduate school. Because graduate school research is supposed to be kind of free-flowing and spontaneous — you have a project and you don’t really know where it’s going to lead you. On the other hand, a premed organic chemistry class is a really artificial environment where there is an obvious beginning and an obvious end and you stick the safety discussion right at the beginning. I remember doing this, where you pull out the MSDS that’s really scary sounding and you scare the pants off the students.

Janet: I don’t even think alarming them is necessarily the way to go, but just saying, hey, it matters how you do this, it matters where you do this, this is why it matters.

Chemjobber: Right.

Janet: And I guess in research, you’re right, there is this very open-ended, free-flowing thing. You try to build knowledge that maybe doesn’t exist yet. You don’t know where it’s going to go. You don’t necessarily know what the best way to build that knowledge is going to be. I think where we fall short sometimes is that there may be an awful lot of knowledge out there somewhere, that if you take this approach, with these techniques or with these chemicals, here are some dangers that are known. Here are some risks that someone knows about. You may not know them yet, but maybe we need to do better in the conceiving-of-the-project stage at making that part of the search of prior literature. Not just, what do we know about this reaction mechanism, but what do we know about the gnarly reagents you need to be able to work with to pursue a similar kind of reaction.

Chemjobber: Yeah. My understanding is that in the UK, before you do every experiment, there’s supposed to be a formalized written risk analysis. UK listeners can comment on whether those actually happen. But it seems like they do, because, you know, when you see online conversations about it, it’s like, “What? You guys don’t do that in the US?” No, we don’t.

Janet: There’s lots of things we don’t do. We don’t have a national health service either.

Chemjobber: But how would you make the bench-level researcher do that risk analysis? How does the PI make the bench-level researcher do that? I don’t know. … Neal Langerman is a prominent chemical safety expert. Beryl Benderly is somebody who writes on the Sheri Sangji case who’s talked about this, which is basically that we should fully and totally incentivize this by tying academic lab safety to grants and tenure. What do you think?

Janet: I think that the intuition is right that if there’s not some real consequence for not caring about safety, it’s going to be the case that some academic researchers, making a rational calculation about what they have to do and what they’re going to be rewarded on and what they’re going to be punished for, are going to say, this would be nice in a perfect world. But there really aren’t enough hours in the day, and I’ve got to churn out the data, and I’ve got to get it analyzed and get the manuscript submitted, especially because I think that other group that was working on something like this might be getting close, and lord knows we don’t want to get scooped — you know, if there’s no consequence for not doing it, if there’s no culture of doing it, if there’s no kind of repercussion among their peers and their professional community for not doing it, a large number of people are going to make the rational calculation that there’s no point in doing it.

Chemjobber: Yeah.

Janet: Maybe they’ll do it as a student exercise or something, but you know what, students are pretty clever, and they get to a point where they actually watch what the PI who is advising them does, and form something like a model of “this is what you need to do to be a successful PI”. And all the parts of what their PI does that are invisible to them? At least to a first approximation, those are not part of the model.

Chemjobber: Right. I’ve been on record as saying that I find tying lab safety to tenure especially to be really dangerous, because you’re giving an incredible incentive to hide incidents. I mean, “For everybody’s sake, sweep this under the rug!” is what might come of this. Obviously, if somebody dies, you can’t hide that.

Janet: Hard to hide unless you’ve got off-the-books grad students, which … why would you do that?

Chemjobber: Are you kidding? There’s a huge supply of them already! But, my concern with tying lab safety to tenure is that I have a difficult time seeing how you would make that a metric other than, if you’ve reported an accident, you will not get tenure, or, if you have more than two accidents a year, you will not get tenure. For the marginal cases, the incentive becomes very high to hide these accidents.

Janet: Here’s a way it might work, though — and I know this sort of goes against the grain, since tenure committees much prefer something they can count to things they have to think about, which is why the number of publications and the impact factor becomes way more important somehow than the quality or importance of the publications as judged by experts in the field. But, something like this might work: if you said, what we’re going to look at in evaluating safety and commitment to safety for your grants and tenure is whether you’ve developed a plan. We’re going to look at what you’ve done to talk with the people in your lab about the plan, and at what you’ve done to involve them in executing the plan. So we’re going to look at it as maybe a part of your teaching, a part of your mentoring — and here, I know some people are going to laugh, because mentoring is another one of those things that presumably is supposed to be happening in academic chemistry programs, but whether it’s seriously evaluated or not, other than by counting the number of students who you graduate per year, is … you know, maybe it’s not evaluated as rigorously as it might be. But, if it became a matter of “Show us the steps you’re taking to incorporate an awareness and a seriousness about safety into how you train these graduate students to be grown-up chemists,” that’s a different kind of thing from, “Oh, and did you have any accidents or not?” Because sometimes the accidents are because you haven’t paid attention at all to safety, but sometimes the accidents are really just bad luck.

Chemjobber: Right.

Janet: And you know, maybe this isn’t going to happen every place, but at places like my university, in our tenure dossiers, they take seriously things like grant proposals we have written as part of our scholarly work, whether or not they get funded. You include them so the people evaluating your tenure dossier can evaluate the quality of your grant proposal, and you get some credit for that work even if it’s a bad pay-line year. So you might get credit for a safety plan and evidence of its implementation even if it’s been a bad year as far as accidents go.

Chemjobber: I think that’s fair. You know, I think that everybody hopes that with a high-stakes thing like tenure, there’s lots of “human factor” and relatively little number-crunching.

Janet: Yeah, but you know, then you’re on the committee that has to evaluate a large number of dossiers. Human nature kicks in and counting is easier than evaluating, isn’t it?

______
* Let the record reflect that despite our joking about “excesses” of premeds, neither Chemjobber nor I have it in for premeds. Especially so now that neither of us is TAing a premed course.

Science, priorities, and the challenges of sharing a world.

For scientists, doing science is often about trying to satisfy deep curiosity about how various bits of our world work. For society at large, it often seems like science ought to exist primarily to solve particular pressing problems — or at least, that this is what science ought to be doing, given that our tax dollars are going to support it. It’s not a completely crazy idea. Even if tax dollars weren’t funding lots of scientific research and the education of scientists (even at private universities), the public might expect scientists to focus their attention on pressing problems, simply because scientists have the expertise to solve these problems and other members of society don’t.

This makes it harder to get the public to care about funding science for which the pay-off is not obviously useful, especially “basic research”. You want to understand the structure of subatomic particles, or the fundamental forces at work in our universe? That’s great, but how is it going to help us live longer, or help us build more fuel-efficient vehicles, or bring smaller iPods to market? Most members of the public don’t even know what a quark is, let alone care about whether you can detect a particular kind of quark experimentally. Satisfying our curiosity about the details on the surface of Mars can strike folks not gripped by that particular curiosity as a distraction from important questions that science could be answering instead.

A typical response is to note that basic research has in the past led to unanticipated practical applications. Of course, this isn’t a way to get the public to see the intrinsic value of basic research — it merely asks them to value such research instrumentally, as sort of a mystery box that is bound to contain some payoff which we cannot describe in advance but which promises to be awesome.

Some years ago, Rick Weiss made an argument like this in the Washington Post in defense of space research and exploration. Weiss expressed concern that “Americans have lost sight of the value of non-applied, curiosity-driven research — the open-ended sort of exploration that doesn’t know exactly where it’s going but so often leads to big payoffs,” and then went through an impressive list of scientific projects that started off without any practical applications but ended up making possible all manner of useful applications. Limit basic science, the argument went, and you’re risking economic growth.

But Weiss was careful not to say the only value in scientific research is in marketable products. Rather, he offered an even more important reason for the public to support research:

Because our understanding of the world and our support of the quest for knowledge for knowledge’s sake is a core measure of our success as a civilization. Our grasp, however tentative, of what we are and where we fit in the cosmos should be a source of pride to all of us. Our scientific achievements are a measure of ourselves that our children can honor and build upon.

I find that a pretty inspiring description of science’s value, but it’s not clear that most members of the public would be similarly misty-eyed.

Scientists may already feel that they have to become the masters of spin to get even their practical research projects funded. Will the scientists also have to take on the task of convincing the public at large that a scientific understanding of ourselves and of the world we live in should be a source of pride? Will a certain percentage of the scientist’s working budget have to go to public relations? (“Knowledge: It’s not just for dilettantes anymore!”) Maybe the message that knowledge for knowledge’s sake is a fitting goal for a civilized society is the kind of thing that people would just get as part of their education. Only it’s not on the standardized tests, and it seems like that’s the only place the public wants to put up money for education any more. Sometimes not even then.

The problem here is that scientists value something that the public at large seems not to value. The scientists think the public ought to value it, but they don’t have the power to impose their will on the public in this regard any more than the public can demand that scientists stop caring about weird things like quarks. Meanwhile, the public supports science, at least to the extent that science can deliver practical results in a timely fashion. There would probably be tension in this relationship even if scientists weren’t looking to the public for funding.

Of course, when scientists do tackle real-life problems and develop real-life solutions, it’s not like the public is always so good about accepting them. Consider the mixed public reception of the vaccine against human papilloma virus (HPV). The various strains of HPV are the leading cause of cervical cancer, and are not totally benign for men, causing genital warts and penile cancers. You would think that developing a reasonably safe and effective vaccine against a virus like HPV is exactly the sort of scientific accomplishment the public might value — except that religious groups in the US voiced opposition to the HPV vaccine on the grounds that it might give young women license to engage in premarital sex rather than practicing abstinence.

(The scientist scratches her head.) Let me get this straight: Y’all want to cut funding for the basic science because you don’t think it will lead to practical applications. But when we do the research to solve what seems like a real problem — people are dying from cervical cancer — y’all tell us this is a problem you didn’t really want us to solve?

Here, to be fair, it’s not everyone who wants to opt out of the science, just a part of the population with a fair bit of political clout at particular moments in history. The central issue seems to be that our society is made up of a bunch of people (including scientists) with rather different values, which lead to rather different priorities. In thinking about where scientific funding comes from, we talk as though there were a unitary Public with whom the unitary Science transacts business. It might be easier were that really the case. Instead, the scientists get to deal with the writhing mass of contradictory impulses that is the American public. About the only thing that public knows for sure is that it doesn’t want to pay more taxes.

How can scientists direct their efforts at satisfying public wants, or addressing public needs, if the public itself can’t come to any robust agreement on what those wants and needs are? If science has to prove to the public that the research dollars are going to the good stuff, will scientists have to stretch things a little in the telling?

Or might it actually be better if the public (or the politicians acting in the public’s name) spent less time trying to micro-manage scientists as they set the direction of their research? Maybe it would make sense, if the public decided that having scientists in society was a good thing for society, to let the scientists have some freedom to pursue their own scientific interests, and to make sure they have the funding to do so.

I’m not denying that the public has a right to decide where its money goes, but I don’t think putting up the money means you get total control. Because if you demand that much control, you may end up having to do the science yourself. Also, once science delivers the knowledge, it seems like the next step is to make that knowledge available. If particular members of the public decide not to avail themselves of that knowledge (because they feel it would be morally wrong, or maybe just silly, as in the case of pet cloning), that is their decision. We shouldn’t be making life harder for the scientists for doing what good scientists do.

It’s clear that there are forces at work in American culture right now that are not altogether comfortable with all that science has to offer at the moment. Discomfort is a normal part of sharing society with others who don’t think just like you do. But hardly anyone thinks it would be a good idea to ship all the scientists off to someplace else. We like our tablet computers and our smartphones and our headache medicines and our DSL and our Splenda too much for that.

Perhaps, for a few moments, we should give the hard-working men and women of science a break and thank them for the knowledge they produce, whether we know what to do with it or not. Then, we can return to telling them about the pieces of our world we’d like more help navigating, and see whether they have any help to offer yet.

Getting scientists to take ethics seriously: strategies that are probably doomed to failure.

As part of my day-job as a philosophy professor, I regularly teach a semester-long “Ethics in Science” course at my university. Among other things, the course is intended to help science majors figure out why being ethical might matter to them if they continue on their path to becoming working scientists and devote their careers to the knowledge-building biz.

And, there’s a reasonable chance that my “Ethics in Science” course wouldn’t exist but for strings attached to training grants from federal funding agencies requiring that students funded by these training grants receive ethics training.

The funding agencies demand the ethics training component largely in response to high profile cases of federally funded scientists behaving badly on the public’s dime. The bad behavior suggests some number of working scientists who don’t take ethics seriously. The funders identify this as a problem and want the scientists who receive grants from them to take ethics seriously. But the big question is how to get scientists to take ethics seriously.

Here are some approaches to that problem that strike me as unpromising:

  • Delivering ethical instruction that amounts to “don’t be evil” or “don’t commit this obviously wrong act”. Most scientists are not mustache-twirling villains, and few are so ignorant that they wouldn’t know that the obviously wrong acts are obviously wrong. If ethical training is delivered with the subtext of “you’re evil” or “you’re dumb,” most of the scientists to whom you’re delivering it will tune it out, since you’re clearly talking to someone else.
  • Reducing ethics to a laundry list of “thou shalt not …” Ethics is not simply a matter of avoiding bad acts — and the bad acts are not bad simply because federal regulations or your compliance officer say they are bad. There is a significant component of ethics concerned with positive action — doing good things. Presenting ethics as results instead of a process — as a set of things the ethics algorithm says you shouldn’t do, rather than a set of strategies for evaluating the goodness of various courses of action you might pursue — is not very engaging. Besides, you can’t even count on this approach for good results, since refraining from particular actions that are expressly forbidden is no guarantee you won’t find some not-expressly-forbidden action that’s equally bad.
  • Presenting ethics as something you have to talk about because the funders require that you talk about it. If you treat the ethics-talk as just a string attached to your grant money, but something with which you wouldn’t waste your time otherwise, you’re identifying attention to ethics as something that gets in the way of research rather than as something that supports research. Once you’ve fulfilled the requirement to have the ethics-talk, would you ever revisit ethics, or would you just get down to the business of research?
  • Segregating attention to ethics in a workshop, class, or training session. Is ethics something the entirety of which you can “do” in a few hours, or even a whole semester? That’s the impression scientific trainees can get from an ethics training requirement that floats unconnected from any discussion with the people training them about how to be a successful scientist. Once you’re done with your training, then, you’re done — why think about ethics again?
  • Pointing trainees to a professional code, the existence of which proves that your scientific discipline takes ethics seriously. The existence of a professional code suggests that someone in your discipline sat down and tried to spell out ethical standards that would support your scientific activities, but the mere existence of a code doesn’t mean the members of your scientific community even know what’s in that code, nor that they behave in ways that reflect the commitments put forward by it. Walking the walk is different from talking the talk — and knowing that there is a code, somewhere on your professional society’s website, that you could find if you Googled it probably doesn’t even rise to the level of talking the talk.
  • Delivering ethical training with the accompanying message that scientists who aren’t willing to cut ethical corners are at a competitive career disadvantage, and that this is just how things are. Essentially, this creates a situation where you tell trainees, “Here’s how you should behave … unless you’re really up against it, at which point you should be smart and drop the ethics to survive in this field.” And, what motivated trainee doesn’t recognize that she’s always up against it? It is important, I think, to recognize that unethical behavior is often motivated at least in part by a perception of extreme career pressures rather than by the inherent evil of the scientist engaging in that behavior. But noting the competitive advantage available for cheaters only to throw up your hands and say, “Eh, what are you going to do?” strikes me as a shrugging off of responsibility. At a minimum, members of a scientific community ought to reflect upon and discuss whether the structures of career rewards and career punishments incentivize bad behavior. If they do, members of the community probably have a responsibility to try to change those structures of career rewards and career punishments.

Laying out approaches to ethics training that won’t help scientists take ethics seriously might help a trainer avoid some pitfalls, but it’s not the same as spelling out approaches that are more likely to work. That’s a topic I’ll take up in a post to come.

Wikipedia, the DSM, and Beavis.

There are some nights that Wikipedia raises more questions for me than it answers.

The other evening, reminiscing about some of the background noise of my life (viz. “Beavis and Butt-head”) when I was in graduate school, I happened to look up Cornholio. After I got over my amusement that its first six letters were enough to put my desired search target second on the list of Wikipedia’s suggestions for what I might be looking for (right between cornhole and Cornholme), I read the entry and got something of a jolt at its diagnostic tone:

After consuming large amounts of sugar and/or caffeine, Beavis sometimes undergoes a radical personality change, or psychotic break. In one episode, “Holy Cornholio”, the transformation occurred after chewing and swallowing many pain killer pills. He will raise his forearms in a 90-degree angle next to his chest, pull his shirt over his head, and then begin to yell or scream erratically, producing a stream of gibberish and strange noises, his eyes wide. This is an alter-ego named ‘Cornholio,’ a normally dormant persona. Cornholio tends to wander aimlessly while reciting “I am the Great Cornholio! I need TP for my bunghole!” in an odd faux-Spanish accent. Sometimes Beavis will momentarily talk normally before resuming the persona of Cornholio. Once his Cornholio episode is over, Beavis usually has no memory of what happened.

Regular viewers of “Beavis and Butt-head” probably suspected that Beavis had problems, but I’m not sure we knew that he had a diagnosable problem. For that matter, I’m not sure we would have classified moments of Cornholio as falling outside the broad umbrella of Things Beavis Does to Make Things Difficult for Teachers.

But, the Wikipedia editors seem to have taken a shine to the DSM (or other relevant literature on psychiatric conditions), and to have confidence that the behavior Beavis displays here is properly classified as a psychotic break.

Here, given my familiarity with the details of the DSM (hardly any), I find myself asking some questions:

  • Was the show written with the intention that the Beavis-to-Cornholio transformation be seen as a psychotic break?
  • Is it possible to give a meaningful psychiatric diagnosis of a cartoon character?
  • Does a cartoon character need a substantial inner life of some sort for a psychiatric diagnosis of that cartoon character to make any sense?
  • If psychiatric diagnoses are based wholly on outward behavioral manifestations rather than on the inner stuff that might be driving that behavior (as may be the case if it’s really possible to apply diagnostic criteria to Beavis), is this a good reason for us to be cautious about the potential value of these definitions and diagnostic criteria?
  • Is there a psychology or psychiatry classroom somewhere that is using clips of the Beavis-to-Cornholio transformation in order to teach students what a psychotic break is?

I’m definitely uncomfortable that this fictional character has a psychiatric classification thrust upon him so easily — though at least, as a fictional character, he doesn’t have to deal with any actual stigma associated with such a psychiatric classification. And, I think perhaps my unease points to a worry I have (and that Katherine Sharpe also voices in her book Coming of Age on Zoloft) about the project of assembling checklists of easy-to-assess symptoms that seem detached from the harder-to-assess conditions in someone’s head, or in his environment, that are involved in causing the symptoms in the first place.

Possibly Wikipedia’s take on Beavis is simply an indication that the relevant Wikipedia editors like the DSM a lot more than I do (or that they intended their psychiatric framing of Beavis ironically — and if so, well played, editors!). But possibly it reflects a larger society that is much more willing than I am to put behaviors into boxes, regardless of the details (or even existence) of the inner life that accompanies that behavior.

I would welcome the opinions and insight of psychiatrists, psychologists, and others who run with that crowd on this matter.