The quest for underlying order: inside the frauds of Diederik Stapel (part 1)

Yudhijit Bhattacharjee has an excellent article in the most recent New York Times Magazine (published April 26, 2013) on disgraced Dutch social psychologist Diederik Stapel. Why is Stapel disgraced? At the last count at Retraction Watch, 53 of his scientific publications (down from an initially reported 54) have been retracted because the results reported in those publications were made up. [Scroll in that Retraction Watch post for the update — apparently one of the Stapel retractions was double-counted. This is the risk when you publish so much made-up stuff.]

There’s not much to say about the badness of a scientist making results up. Science is supposed to be an activity in which people build a body of reliable knowledge about the world, grounding that knowledge in actual empirical observations of that world. Substituting the story you want to tell for those actual empirical observations undercuts that goal.

But Bhattacharjee’s article is fascinating because it goes some way to helping illuminate why Stapel abandoned the path of scientific discovery and went down the path of scientific fraud instead. It shows us some of the forces and habits that, while seemingly innocuous taken individually, can compound to reinforce scientific behavior that is not helpful to the project of knowledge-building. It reveals forces within scientific communities that make it hard for scientists to pursue suspicions of fraud to get formal determinations of whether their colleagues are actually cheating. And, the article exposes some of the harms Stapel committed beyond publishing lies as scientific findings.

It’s an incredibly rich piece of reporting, one which I recommend you read in its entirety, maybe more than once. Given just how much there is to talk about here, I’ll be taking at least a few posts to highlight bits of the article as nourishing food for thought.

Let’s start with how Stapel describes his early motivation for fabricating results to Bhattacharjee. From the article:

Stapel did not deny that his deceit was driven by ambition. But it was more complicated than that, he told me. He insisted that he loved social psychology but had been frustrated by the messiness of experimental data, which rarely led to clear conclusions. His lifelong obsession with elegance and order, he said, led him to concoct sexy results that journals found attractive. “It was a quest for aesthetics, for beauty — instead of the truth,” he said. He described his behavior as an addiction that drove him to carry out acts of increasingly daring fraud, like a junkie seeking a bigger and better high.

(Bold emphasis added.)

It’s worth noting here that other scientists — plenty of scientists who were never cheaters, in fact — have also pursued science as a quest for beauty, elegance, and order. For many, science is powerful because it is a way to find order in a messy universe, to discover simple natural laws that give rise to such an array of complex phenomena. We’ve discussed this here before, when looking at the tension between Platonist and Aristotelian strategies for getting to objective truths:

Plato’s view was that the stuff of our world consists largely of imperfect material instantiations of immaterial ideal forms — and that science makes the observations it does of many examples of material stuff to get a handle on those ideal forms.

If you know the allegory of the cave, however, you know that Plato didn’t put much faith in feeble human sense organs as a route to grasping the forms. The very imperfection of those material instantiations that our sense organs apprehend would be bound to mislead us about the forms. Instead, Plato thought we’d need to use the mind to grasp the forms.

This is a crucial juncture where Aristotle parted ways with Plato. Aristotle still thought that there was something like the forms, but he rejected Plato’s full-strength rationalism in favor of an empirical approach to grasping them. If you wanted to get a handle on the form of “horse,” for example, Aristotle thought the thing to do was to examine lots of actual specimens of horse and to identify the essence they all have in common. The Aristotelian approach probably feels more sensible to modern scientists than the Platonist alternative, but note that we’re still talking about arriving at a description of “horse-ness” that transcends the observable features of any particular horse.

Honest scientists simultaneously reach for beautiful order and the truth. They use careful observations of the world to try to discern the actual structures and forces giving rise to what they are observing. They recognize that our observational powers are imperfect, that our measurements are not infinitely precise (and that they are often at least a little inaccurate), but those observations, those measurements, are what we have to work with in discerning the order underlying them.

This is why Ockham’s razor — to prefer simple explanations for phenomena over more complicated ones — is a strategy but not a rule. Scientists go into their knowledge-building endeavor with the hunch that the world has more underlying order than is immediately apparent to us — and that careful empirical study will help us discover that order — but how things actually are provides a constraint on how much elegance there is to be found.

However, as the article in the New York Times Magazine makes clear, Stapel was not alone in expecting the world he was trying to describe in his research to yield elegance:

In his early years of research — when he supposedly collected real experimental data — Stapel wrote papers laying out complicated and messy relationships between multiple variables. He soon realized that journal editors preferred simplicity. “They are actually telling you: ‘Leave out this stuff. Make it simpler,’” Stapel told me. Before long, he was striving to write elegant articles.

The journal editors’ preference here connects to a fairly common notion of understanding. Understanding a system is being able to identify the components of that system that make a difference in producing the effects of interest — and, by extension, recognizing which components of the system don’t feature prominently in bringing about the behaviors you’re studying. Again, the hunch is that there are likely to be simple mechanisms underlying apparently complex behavior. When you really understand the system, you can point out those mechanisms and explain what’s going on while leaving the extraneous bits in the background.

Pushing to find this kind of underlying simplicity has been a fruitful scientific strategy, but it’s a strategy that can run into trouble if the mechanisms giving rise to the behavior you’re studying are in fact complicated. There’s a phrase attributed to Einstein that captures this tension nicely: as simple as possible … but not simpler.

The journal editors, by expressing to Stapel that they liked simplicity more than messy relationships between multiple variables, were surely not telling Stapel to lie about his findings to create such simplicity. They were likely conveying their view that further study, or more careful analysis of data, might yield elegant relations that were really there but elusive. However, intentionally or not, they did communicate to Stapel that simple relationships fit better with journal editors’ hunches about what the world is like than did messy ones — and that results that seemed to reveal simple relations were thus more likely to pass through peer review without raising serious objections.

So, Stapel was aware that the gatekeepers of the literature in his field preferred elegant results. He also seemed to have felt the pressure that early-career academic scientists often feel to make all of his research time productive — where the ultimate measure of productivity is a publishable result. Again, from the New York Times Magazine article:

The experiment — and others like it — didn’t give Stapel the desired results, he said. He had the choice of abandoning the work or redoing the experiment. But he had already spent a lot of time on the research and was convinced his hypothesis was valid. “I said — you know what, I am going to create the data set,” he told me.

(Bold emphasis added.)

The sunk time clearly struck Stapel as a problem. Making a careful study of the particular psychological phenomenon he was trying to understand hadn’t yielded good results — which is to say, results that would be recognized by scientific journal editors or peer reviewers as adding to the shared body of knowledge by revealing something about the mechanism at work in the phenomenon. This is not to say that experiments with negative results don’t tell scientists something about how the world is. But what negative results tell us is usually that the available data don’t support the hypothesis, or perhaps that the experimental design wasn’t a great way to obtain data to let us evaluate that hypothesis.

Scientific journals have not generally been very interested in publishing negative results, however, so scientists tend to view them as failures. They may help us to reject appealing hypotheses or to refine experimental strategies, but they don’t usually do much to help advance a scientist’s career. If negative results don’t help you get publications, without which it’s harder to get grants to fund research that could find positive results, then the time and money spent doing all that research has been wasted.

And Stapel felt — maybe because of his hunch that the piece of the world he was trying to describe had to have an underlying order, elegance, simplicity — that his hypothesis was right. The messiness of actual data from the world got in the way of proving it, but it had to be so. And this expectation of elegance and simplicity fit perfectly with the feedback he had heard before from journal editors in his field (feedback that may well have fed Stapel’s own conviction).

A career calculation paired with a strong metaphysical commitment to underlying simplicity seems, then, to have persuaded Diederik Stapel to let his hunch weigh more heavily than the data and then to commit the cardinal sin of falsifying data that could be presented to other scientists as “evidence” to support that hunch.

No one made Diederik Stapel cross that line. But it’s probably worth thinking about the ways that commitments within scientific communities — especially methodological commitments that start to take on the strength of metaphysical commitments — could have made crossing it more tempting.

Are safe working conditions too expensive for knowledge-builders?

Last week’s deadly collapse of an eight-story garment factory building in Dhaka, Bangladesh, has prompted discussions about whether poor countries can afford safe working conditions for workers who make goods that consumers in countries like the U.S. prefer to buy at bargain prices.

Maybe the risk of being crushed to death (or burned to death, or what have you) is just a trade-off poor people are (or should be) willing to accept to draw a salary. At least, that seems to be the take-away message from the crowd arguing that it would cost too much to have safety regulation (and enforcement) with teeth.

It is hard not to consider how this kind of attitude might get extended to other kinds of workplaces — like, say, academic research labs — given that last week UCLA chemistry professor Patrick Harran was also scheduled to return to court for a preliminary hearing on the felony charges of labor code violations brought against him in response to the 2008 fire in his laboratory that killed his employee, Sheri Sangji.

Jyllian Kemsley has a detailed look at how Harran’s defense team has responded to the charges of specific violations of the California Labor Code, charges involving failure to provide adequate training, failure to have adequate procedures in place to correct unsafe conditions or work practices, and failure to require that workers wear appropriate clothing for the work being done. Since I’m not a lawyer, it’s hard for me to assess the likelihood that the defense responses to these charges would be persuasive to a judge, but ethically, they’re pretty weak tea.

Sadly, though, it’s weak tea of the exact sort that my scientific training has led me to expect from people directing scientific research labs in academic settings.

When safety training is confined to a single safety video that graduate students are shown when they enter a program, that tells graduate students that their safety is not a big deal in the research activities that are part of their training.

When there’s not enough space under the hood for all the workers in a lab to conduct all the activities that, for safety’s sake, ought to be conducted under the hood — and when the boss expects all those activities to happen without delay — that tells them that a sacrifice in safety to produce quick results is acceptable.

When a student-volunteer needs to receive required ionizing radiation safety training to get a film badge that will give her access to the facility where she can irradiate her cells for an experiment, and the PI, upon hearing that the next training session is three weeks away, says to the student-volunteer, “Don’t bother; use my film badge,” that tells people in the lab that the PI is unwilling to lose three weeks of unpaid labor on one aspect of a research project just to make the personnel involved a little bit safer.

When people running a lab take an attitude of “Eh, young people are going to dress how they’re going to dress” rather than imposing a clear rule that people whose dress is unsafe for an activity don’t get to undertake that activity, that tells the personnel in the lab that whatever cost is involved in holding this line — losing a day’s worth of work, being viewed by one’s underlings as strict rather than cool — has been judged too high relative to the benefit of making personnel in the lab safer.

When university presidents or other administrators proclaim that knowledge-builders “must continue to recalibrate [their] risk tolerance” by examining their “own internal policies and ask[ing] the question—do they meet—or do they exceed—our legal or regulatory requirements,” that tells knowledge-builders at those universities that people with significantly more power than them judge efforts to make things safer for knowledge-builders (and for others, like the human subjects of their research) as an unnecessary burden. When institutions need to become leaner, or more agile, shouldn’t researchers (and human subjects) do their part by accepting more risk as the price of doing business?

To be sure, safety isn’t free. But there are also costs to being less safe in academic research settings.

For example, personnel develop lax attitudes toward risks and trainees take these attitudes with them when they go out in the world as grown-up scientists. Surrounding communities can get hurt by improper disposal of hazardous materials, or by inadequate safety measures taken by researchers working with infectious agents who then go home and cough on their families and friends. Sometimes, personnel are badly injured, or killed.

And, if academic scientists are dragging their feet on making things safer for the researchers on their team because it takes time and effort to investigate risks and make sensible plans for managing them, to develop occupational health plans and to institute standard operating procedures that everyone on the research team knows and follows, I hope they’re noticing that facing felony charges stemming from safety problems in their labs can also take lots of time and effort.

UPDATE: The Los Angeles Times reports that Patrick Harran will stand trial after an LA County Superior Court judge denied a defense motion to dismiss the case.

Are scientists obligated to call out the bad work of other scientists? (A thought experiment)

Here’s a thought experiment. While it was prompted by intertubes discussions of evolutionary psychology and some of its practitioners, I take it the ethical issues are not limited to that field.

Say there’s an area of scientific research that is at a relatively early stage of its development. People working in this area of research see what they are doing as strongly connected to other, better established scientific fields, whether in terms of methodological approaches to answering questions, or the existing collections of empirical evidence on which they draw, or what have you.

There is general agreement within this community about the broad type of question that might be answered by this area of research and the sorts of data that may be useful in evaluating hypotheses. But there is also a good bit of disagreement among practitioners of this emerging field about which questions will be the most interesting (or tractable) ones to pursue, about how far one may reasonably extend the conclusions from particular bits of research, and even about methodological issues (such as what one’s null hypothesis should be).

Let me pause to note that I don’t think the state of affairs I’m describing would be out of the ordinary for a newish scientific field trying to get its footing. You have a community of practitioners trying to work out a reasonable set of strategies to answer questions about a bundle of phenomena that haven’t really been tackled by other scientific fields that are chugging merrily along. Not only do you not have the answers yet to the questions you’re asking about those phenomena, but you’re also engaged in building, testing, and refining the tools you’ll be using to try to answer those questions. You may share a commitment with others in the community that there will be a useful set of scientific tools (conceptual and methodological) to help you get a handle on those phenomena, but getting there may involve a good bit of disagreement about what tools are best suited for the task. And, there’s a possibility that in the end, there might not be any such tools that give you answers to the questions you’re asking.

Imagine yourself to be a member of this newish area of scientific research.*

What kind of obligation do you have to engage with other practitioners of this newish area of scientific research whose work you feel is not good? (What kind of “not good” are we talking about here? Possibly you perceive them to be drawing unwarranted conclusions from their studies, or using shoddy methodology, or ignoring empirical evidence that seems to contradict their claims. There’s no need to assume that they are being intentionally dishonest.) Do you have an obligation to take to the scientific literature to critique the shortcomings in their work? Do you have an obligation to communicate these critiques privately (e.g., in email correspondence)? Or is it ethically permissible not to engage with what you consider the bad examples of work in your emerging scientific field, instead keeping your head down and producing your own good examples of how to make progress in your emerging scientific field?

Do you think your obligations here are different than they might be if you were working in a well-established scientific field? (In a well-established scientific field, one might argue, the standards for good work and bad work are clearer; does this mean it takes less individual work to identify and rebut the bad work?)

Now consider the situation when your emerging scientific field is one that focuses on questions that capture the imagination not just of scientists trying to get this new field up and running, but also of the general public — to the extent that science writers and journalists are watching the output of your emerging scientific field for interesting results to communicate to the public. How does the fact that the public is paying some attention to your newish area of scientific research bear on what kind of obligation you have to engage with the practitioners in your field whose work you feel is not good?

(Is it fair that a scientist’s obligations within his or her scientific field might shift depending on whether the public cares at all about the details of the knowledge being built by that scientific field? Is this the kind of thing that might drive scientists into more esoteric fields of research?)

Finally, consider the situation when your emerging field of science has captured the public imagination, and when the science writers and journalists seem to be getting most of their information about what your field is up to and what knowledge you have built from the folks in your field whose work you feel is not good. Does this place more of an obligation upon you to engage with the practitioners doing not-good work? Does it obligate you to engage with the science writers and journalists to rebut the bad work and/or explain what is required for good scientific work in your newish field? If you suspect that science writers and journalists are acting, in this case, to amplify misunderstandings or to hype tempting results that lack proper evidential support, do you have an obligation to communicate directly to the public about the misunderstandings and/or about what proper evidential support looks like?

A question I think can be asked at every stage of this thought experiment: Does the community of practitioners of your emerging scientific field have a collective responsibility to engage with the not-so-good work, even if any given individual practitioner does not? And, if the answer to this question is “yes”, how can the community of practitioners live up to that obligation if no individual practitioner is willing to step up and do it?

_____
* For fun, you can also consider these questions from the point of view of a member of the general public: What kinds of obligations do you want the scientists in this emerging field to recognize? After all, as a member of the public, your interests might diverge in interesting ways from those of a scientist in this emerging field.

Community responsibility for a safety culture in academic chemistry.

This is another approximate transcript of a part of the conversation I had with Chemjobber that became a podcast. This segment (from about 29:55 to 52:00) includes our discussion of what a just punishment might look like for PI Patrick Harran for his part in the Sheri Sangji case. From there, our discussion shifted to the question of how to make the culture of academic chemistry safer:

Chemjobber: One of the things that I guess I’ll ask is whether you think we’ll get justice out of this legal process in the Sheri Sangji case.

Janet: I think about this, I grapple with this, and about half the time when I do, I end up thinking that punishment — and figuring out the appropriate punishment for Patrick Harran — doesn’t even make my top-five list of things that should come out of all this. I kind of feel like a decent person should feel really, really bad about what happened, and should devote his life forward from here to making the conditions that enabled the accident that killed Sheri Sangji go away. But, you know, maybe he’s not a decent person. Who the heck can tell? And certainly, once you put things in the context where you have a legal team defending you against criminal charges — that tends to obscure the question of whether you’re a decent person or not, because suddenly you’ve got lawyers acting on your behalf in all sorts of ways that don’t look decent at all.

Chemjobber: Right.

Janet: I think the bigger question in my mind is how does the community respond? How does the chemistry department at UCLA, how does the larger community of academic chemistry, how do Patrick Harran’s colleagues at UCLA and elsewhere respond to all of this? I know that there are some people who say, “Look, he really fell down on the job safety-wise, and in terms of creating an environment for people working on his behalf, and someone died, and he should do jail time.” I don’t actually know if putting him in jail changes the conditions on the outside, and I’ve said that I think, in some ways, tucking him away in jail for however many months makes it easier for the people who are still running academic labs while he’s incarcerated to say, “OK, the problem is taken care of. The bad actor is out of the pool. Not a problem,” rather than looking at what it is about the culture of academic chemistry that has us devoting so little of our time and energy to making sure we’re doing this safely. So, if it were up to me, if I were the Queen of Just Punishment in the world of academic chemistry, I’ve said his job from here on out should be to be Safety in the Research Culture Guy. That’s what he gets to work on. He doesn’t get to go forward and conduct new research on some chemical question like none of this ever happened. Because something happened. Something bad happened, and the reason something bad happened, I think, is because of a culture in academic chemistry where it was acceptable for a PI not to pay attention to safety considerations until something bad happened. And that’s got to change.

Chemjobber: I think it will change. I should point out here that if your proposed punishment were enacted, it would be quite a punishment, because he wouldn’t get to choose what he worked on anymore, and that, to a great extent, is the joy of academic research, that it’s self-directed and that there is lots and lots of freedom. I don’t get to choose the research problems I work on, because I do it for money. My choices are more or less made by somebody else.

Janet: But they pay you.

Chemjobber: But they pay me.

Janet: I think I’d even be OK saying maybe Harran gets to do 50% of his research on self-directed research topics. But the other 50% is he has to go be an evangelist for changing how we approach the question of safety in academic research.

Chemjobber: Right.

Janet: He’s still part of the community, he’s still “one of us,” but he has to show us how we are treading dangerously close to the conditions that led to the really bad thing that happened in his lab, so we can change that.

Chemjobber: Hmm.

Janet: And not just make it an individual thing. I think all of the attempts to boil what happened down to the individual responsibility of the technician, or of the PI, or to some split of responsibility between the two, totally miss the institutional responsibility, the responsibility of the professional community, and the ways that systemic factors the community is responsible for failed here.

Chemjobber: Hmm.

Janet: And I think sometimes we need individuals to step up and say, part of me acknowledging my personal responsibility here is to point to the ways that the decisions I made within the landscape we’ve got — of what we take seriously, of what’s rewarded and what’s punished — led to this really bad outcome. That’s part of the power here: when academic chemists say, “I would be horrified if you jailed this guy because this could have happened in any of our labs,” I think they’re right. I think they’re right, and I think we have to ask how it is that conditions in these academic communities got to the point where we’re lucky more people haven’t been seriously injured or killed by some of the bad things that could happen — bad things we don’t even know we’re walking into because safety gets such short shrift.

Chemjobber: Wow, that’s heavy. I’m not sure whether there are industrial chemists whose primary job is to think about safety. Is part of the issue we have here that safety has been professionalized? We have industrial chemical hygienists and safety engineers. Every university has an EH&S [environmental health and safety] department. Does that make safety somebody else’s problem? And maybe if Patrick Harran were to become a safety evangelist, it would be a way of saying it’s our problem, and we all have to learn, we have to figure out a way to deal with this?

Janet: Yeah. I actually know that there exist safety officers in academic science departments, partly because I serve on some university committees with people who fill that role — so I know they exist. I don’t know how much the people doing research in those departments actually talk with those safety officers before something goes wrong, or how much of it goes beyond “Oh, there’s paperwork we need to make sure is filed in the right place in case there’s an inspection,” or something like that. But it strikes me that safety should be more collaborative. In some ways, wouldn’t it make for a more gripping seminar in a chemistry department (even just once a month in the weekly seminar slot) to have a safety roundtable for grad students working in the lab? “Here are the risks that we found out about in this kind of work,” or talking about unforeseen things that might happen, or how do you get started finding out about proper precautions as you’re beginning a new line of research? What’s your strategy for figuring that out? Who do you talk to? I honestly feel like this is a part of chemical education at the graduate level that is extremely underdeveloped. I know there’s been some talk about changing the undergraduate chemistry degree so that it includes something like a certificate program in chemical safety, and maybe that will fix it all. But I think the only thing that fixes it all is really making it part of the day to day lived culture of how we build new knowledge in chemistry, that the safety around how that knowledge gets built is an ongoing part of the conversation.

Chemjobber: Hmm.

Janet: It’s not something we talk about once and then never again. Because that’s not how research works. We don’t say, “Here’s our protocol. We never have to revisit it. We’ll just keep running it until we have enough data, and then we’re done.”

Chemjobber: Right.

Janet: Show me an experiment that’s like that. I’ve never touched an experiment like that in my life.

Chemjobber: So, how many times do you remember your Ph.D. advisor talking to you about safety?

Janet: Zero. He was a really good advisor, he was a very good mentor, but essentially, how it worked in our lab was that the grad students who were further on would talk to the grad students who were newer about “Here’s what you need to be careful about with this reaction,” or “If you’ve got overflow of your chemical waste, here’s who to call to do the clean-up,” or “Here’s the paperwork you fill out to have the chemical waste hauled away properly.” So, the culture was the people who were in the lab day to day were the keepers of the safety information, and luckily I joined a lab where those grad students were very forthcoming. They wanted to share that information. You didn’t have to ask because they offered it first. I don’t think it happens that way in every lab, though.

Chemjobber: I think you’re right. The thorniness of the problem of turning chemical safety into a day to day thing, within the lab — within a specific group — is you’re relying on this group of people that are transient, and they’re human, so some people really care about it and some people tend not to care about it. I had an advisor who didn’t talk about safety all the time but did, on a number of occasions, yank us all short and say, “Hey, look, what you’re doing is dangerous!” I clearly remember specific admonishments: “Hey, that’s dangerous! Don’t do that!”

Janet: I suspect that may be more common in organic chemistry than in physical chemistry, which is my area. You guys work with stuff that seems to have a lot more potential to do interesting things in interesting ways. The other thing, too, is that in my research group we were united by a common set of theoretical approaches, but we all worked in different kinds of experimental systems which had different kinds of hazards. The folks doing combustion reactions had different things to worry about than me, working with my aqueous reaction in a flow-through reactor, while someone in the next room was working with enzymatic reactions. We were all over the map. Nothing that any of us worked with seemed to have real deadly potential, at least as we were running it, but who knows?

Chemjobber: Right.

Janet: And given that different labs have very different dynamics, that could make it hard to actually implement a desire to have safety become a part of the day to day discussions people are having as they’re building the knowledge. But this might really be a good place for departments and graduate training programs to step up. To say, “OK, you’ve got your PI who’s running his or her own fiefdom in the lab, but we’re the other professional parental unit looking out for your well-being, so we’re going to have these ongoing discussions with graduate cohorts made up of students who are working in different labs about safety and how to think about safety where the rubber hits the road.” Actually bringing those discussions out of the research group, the research group meeting, might provide a space where people can become reflective about how things go in their own labs and can see something about how things are being done differently in other labs, and start piecing together strategies, start thinking about what they want the practices to be like when they’re the grown-up chemists running their own labs. How do they want to make safety something that’s part of the job, not an add-on that’s being slapped on or something that’s being forgotten altogether?

Chemjobber: Right.

Janet: But of course, graduate training programs would have to care enough about that to figure out how to put the resources on it, to make it happen.

Chemjobber: I’m in profound sympathy with the people who would have to figure out how to do that. I don’t really know anything about the structure of a graduate training program other than, you know, “Do good work, and try to graduate sooner rather than later.” But I assume that in the last 20 to 30 years, there have been new mandates like “OK, you all need to have some kind of ethics component”

Janet: — because ethics coursework will keep people from cheating! Except that’s an oversimplified equation. But ethics is a requirement they’re heaping on, and safety could certainly be another. The question is how to do that sensibly rather than making it clear that we’re doing this only because there’s a mandate from someone else that we do it.

Chemjobber: One of the things that I’ve always thought about in terms of how to better inculcate safety in academic labs is maybe to have training that happens every year, that takes a week. New first-years come in and you get run through some sort of a lab safety thing where you go and you set up the experiment and weird things are going to happen. It’s kind of an artificial environment where you have to go in and run a dangerous reaction as a drill that reminds you that there are real-world consequences. I think Chembark talked about how, on Caltech Safety Day, they brought out one of the lasers and put a hole through an apple. Since Paul is an organic chemist, I don’t think he does that very often, but his response was “Oh, if I enter one of these laser labs, I should probably have my safety glasses on.” There’s a limit to the effectiveness of that sort of stuff. You have to really, really think about how to design it, and a week out of a year is a long time, and who’s going to run it? I think your idea of the older students in the lab being the ones who really do a lot of the day to day safety stuff is important. What happens when there are no older students in the lab?

Janet: That’s right, when you’re the first cohort in the PI’s lab.

Chemjobber: Or, when there hasn’t been much funding for students and suddenly now you have funding for students.

Janet: And there’s also the question of going from a sparsely populated lab to a really crowded lab when you have the funding but you don’t suddenly have more lab space. And crowded labs have different kinds of safety concerns than sparsely populated labs.

Chemjobber: That’s very true.

Janet: I also wonder whether the “grown-up” chemists, the postdocs and the PIs, ought to be involved in some sort of regular safety … I guess casting it as “training” is likely to get people’s hackles up, and they’re likely to say, “I have even less time for this than my students do.”

Chemjobber: Right.

Janet: But at the same time, pretending that they learned everything they need to know about safety in grad school? Really? Really you did? When we’re talking now about how maybe the safety training for graduate students is inadequate, you magically got the training that tells you everything you need to know from here on out about safety? That seems weird. And also, presumably, the risks of certain kinds of procedures and certain kinds of reagents — that’s something about which our knowledge continues to increase as well. So, finding ways to keep up on that, to come up with safer techniques and better responses when things do go wrong — some kind of continuing education, continuing involvement with that. If there was a way to do it to include the PIs and the people they’re employing or training, to engage them together, maybe that would be effective.

Chemjobber: Hmm.

Janet: It would at least make it seem less like, “This is education we have to give our students, this is one more requirement to throw on the pile, but we wouldn’t do it if we had the choice, because it gets in the way of making knowledge.” Making knowledge is good. I think making knowledge is important, but we’re human beings making knowledge and we’d like to live long enough to appreciate that knowledge. Graduate students shouldn’t be consumable resources in the knowledge-building the same way that chemical reagents are.

Chemjobber: Yeah.

Janet: Because I bet you the disposal paperwork on graduate students is a fair bit more rigorous than for chemical waste.

Why does lab safety look different to chemists in academia and chemists in industry?

Here’s another approximate transcript of the conversation I had with Chemjobber that became a podcast. In this segment (from about 19:30 to 29:30), we consider how reactions to the Sheri Sangji case sound different when they’re coming from academic chemists than when they’re coming from industry, and we spin some hypotheses about what might be going on behind those differences:

Chemjobber: I know that you wanted to talk about the response of industrial chemists versus academic chemists to the Sheri Sangji case.

Janet: This is one of the things that jumps out at me in the comment threads on your blog posts about the Sangji case. (Your commenters, by the way, are awesome. What a great community of commenters engaging with this stuff.) It really does seem that the commenters who are coming from industry are saying, “These conditions that we’re hearing about in the Harran lab (and maybe in academic labs in general) are not good conditions for producing knowledge as safely as we can.” And the academic commenters are saying, “Oh come on, it’s like this everywhere! Why are you going to hold this one guy responsible for something that could have happened to any of us?” It shines a light on something interesting about how academic labs building knowledge function really differently from industrial labs building knowledge.

Chemjobber: Yeah, I don’t know. It’s very difficult for me to separate out whether it’s culture or law or something else. Certainly I think there’s a culture aspect of it, which is that every large company and most small companies really try hard to have some sort of a safety culture. Whether or not they actually stick to it is a different story, but what I’ve seen is that the bigger the company, the more it really matters. Part of it, I think, is that people are older and a little bit wiser, they’re better at looking over each other’s shoulders and saying, “What are you doing over there?” and “So, you’re planning to do that? That doesn’t sound like a great idea.” It seems like there’s less of that in academia. And then there’s the regulatory aspect of it. Industrial chemists are workers, the companies they’re working for are employers, and there’s a clear legal aspect to that. Even as under-resourced as OSHA is, there is an actual legal structure prepared to deal with accidents. If the Sangji incident had happened at a very large company, most people think that heads would have rolled, letters would have been placed in evaluation files, and careers would be over.

Janet: Or at least the lab would probably have been shut down until a whole bunch of stuff was changed.

Chemjobber: But in academia, it looks like things are different.

Janet: I have some hunches that perhaps support some of your hunches here about where the differences are coming from. First of all, the set-up in academia assumes radical autonomy on the part of the PI about how to run his or her lab. Much of that is for the good as far as allowing different ways to tackle the creative problems about how to ask the scientific questions to better shake loose the piece of knowledge you’re trying to shake loose, or allowing a range of different work habits that might be successful for these people you’re training to be grown-up scientists in your scientific field. And along with that radical autonomy — your lab is your fiefdom — in a given academic chemistry department you’re also likely to have a wide array of chemical sub-fields that people are exploring. So, depending on the size of your department, you can’t necessarily count on there being more than a couple other PIs in the department who really understand your work well enough that they would have deep insight into whether what you’re doing is safe or really dangerous. It’s a different kind of resource that you have available right at hand — there’s maybe a different kind of peer pressure that you have in your immediate professional and work environment acting on the industrial chemist than on the academic chemist. I think that probably plays some role in how PIs in academia maybe aren’t as up on potential safety risks of the new work they’re doing as they might be otherwise. And then, of course, there are the really different kinds of rewards people are working for in industry versus academia, and how the whole tenure race ends up asking more and more of people with the same 24 hours in the day as anyone else. So, people on the tenure track start asking, “What are the things I’m really rewarded for? Because obviously, if I’m going to succeed, that’s where I have to focus my attention.”

Chemjobber: It’s funny how the “T” word keeps coming up.

Janet: By the same token, in a university system that has consistently tried to make it easier to fire faculty at whim because they’re expensive, I sort of see the value of tenure. I’m not at all arguing that tenure is something that academic chemists don’t need. But, it may be that the particulars of how we evaluate people for tenure are incentivizing behaviors that are not helping the safety of the people building the knowledge or the well-being of the people who are training to be grown-ups in these professional communities.

Chemjobber: That’s right. We should just say specifically that in this particular case, Patrick Harran already had tenure, and I believe he is still a chaired professor at UCLA.

Janet: I think maybe the thing to point out is that some of these expectations, some of these standard operating procedures within disciplines in academia, are heavily shaped by the things that are rewarded for tenure, and then for promotion to full professor, and then whatever else. So, even if you’re tenured, you’re still soaking in that same culture that is informing the people who are trying to get permission to stay there permanently rather than being thanked for their six years of service and shown the door. You’re still soaking in that culture that says, “Here’s what’s really important.” Because if something else was really important, then by golly that’s how we’d be choosing who gets to stay here for reals and who’s just passing through.

Chemjobber: Yes.

Janet: I don’t know as much about the typical life cycle of the employee in industrial chemistry, but my sense is that maybe the fact that grad students and postdocs and, to some extent, technicians are sort of transient in the community of academic chemistry might make a difference as well — that they’re seen as people who are passing through, and that the people who are more permanent fixtures in that world either forget that they come in not knowing all the stuff that the people who have been there for a long, long time know, or they’re sort of making a calculation, whether they realize it or not, about how important it is to convey some of this stuff they know to transients in their academic labs.

Chemjobber: Yeah, I think that’s true. Numerically, there’s certainly a lot less turnover in industry than there is in academic labs.

Janet: I would hope so!

Chemjobber: Especially from the bench-worker perspective. It’s unfortunate that layoffs happen (topic for another podcast!), but that seems to be the main source of turnover in industry these days.

Gender bias: ethical implications of an empirical finding.

By now, you may have seen the recently published study by Moss-Racusin et al. in the Proceedings of the National Academy of Sciences entitled “Science faculty’s subtle gender biases favor male students”, or the nice discussion by Ilana Yurkiewicz of why these findings matter.

Briefly, the study involved having science faculty from research-focused universities rate materials from potential student candidates for a lab manager position. The researchers attached names to the application materials — some of them male names, some of them female names — at random, and examined how the ratings of the materials correlated with the names that were attached to them. What they found was that the same application materials got a higher ranking (i.e., a judgment that the applicant would be more qualified for the job) when the attached name was male than when it was female. Moreover, both male and female faculty ranked the same application more highly when it carried a male name.
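
To make the logic of that design concrete, here is a minimal toy simulation of a randomized-name comparison. This is my own illustrative sketch, not the study's analysis code; the number of raters, the size of the bias, and the use of a simple t-test are all assumptions made for the example. The point is just that when the materials are identical and the names are assigned at random, any systematic gap between the two groups' mean ratings is attributable to the name alone:

    # Toy simulation (illustrative only; all parameters are assumptions, not the study's data).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    n_raters = 120        # hypothetical number of faculty raters
    bias = 0.5            # hypothetical boost a male name gets, in rating points

    male_name = rng.random(n_raters) < 0.5               # names assigned at random
    materials_score = rng.normal(4.0, 1.0, n_raters)     # rating driven by the materials themselves
    ratings = np.clip(materials_score + np.where(male_name, bias, 0.0), 1, 7)

    t_stat, p_value = stats.ttest_ind(ratings[male_name], ratings[~male_name])
    print(f"mean rating with a male name:   {ratings[male_name].mean():.2f}")
    print(f"mean rating with a female name: {ratings[~male_name].mean():.2f}")
    print(f"t = {t_stat:.2f}, p = {p_value:.3g}")

In the actual study, of course, the gap showed up in real faculty ratings rather than in a simulated bias term, which is precisely what makes the finding troubling.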

It strikes me that there are some ethical implications that flow from this study to which scientists (among others) should attend:

  1. Confidence that your judgments are objective is not a guarantee that your judgments are objective, and your intent to be unbiased may not be enough. The results of this study show a pattern of difference in ratings for which the only plausible explanation is the presence of a male name or a female name for the applicant. The faculty members treated the task they were doing as an objective evaluation of candidates based on prior research experience, faculty recommendations, the applicant’s statement, GRE scores, and so forth — that they were sorting out the well-qualified from the less-well-qualified — but they didn’t do that sorting solely on the basis of the actual experience and qualifications described in the application materials. If they had, the rankings wouldn’t have displayed the gendered split they did. The faculty in the study undoubtedly did not mean to bring gender bias to the evaluative task, but the results show that they did, whether they intended to or not.
  2. If you want to build reliable knowledge about the world, it’s helpful to identify your biases so they don’t end up getting mistaken for objective findings. As I’ve mentioned before, objectivity is hard. One of the hardest things about being objective is the fact that so many of our biases are unconscious — we don’t realize that we have them. If you don’t realize that you have a bias, it’s much harder to keep that bias from creeping into your knowledge-building, from the way you frame the question you’re exploring to how you interpret data and draw conclusions from them. The biases you know about are easier to keep on a short leash.
  3. If a methodologically sound study finds that science faculty have a particular kind of bias, and if you are science faculty, you probably should assume that you might also have that bias. If you happen to have good independent evidence that you do not display the particular bias in question, that’s great — one less unconscious bias that might be messing with your objectivity. However, in the absence of such good independent evidence, the safest assumption to make is that you’re vulnerable to the bias too — even if you don’t feel like you are.
  4. If you doubt the methodological soundness of a study finding that science faculty have a particular kind of bias, it is your responsibility to identify the methodological flaws. Ideally, you’d also want to communicate with the authors of the study, and with other researchers in the field, about the flaws you’ve identified in the study methodology. This is how scientific communities work together to build a reliable body of knowledge we all can use. And, a responsible scientist doesn’t reject the conclusions of a study just because they don’t match one’s hunches about how things are. The evidence is how scientists know anything.
  5. If there’s reason to believe you have a particular kind of bias, there’s reason to examine what kinds of judgments of yours it might influence beyond the narrow scope of the experimental study. Could gender bias influence whose data in your lab you trust the most? Which researchers in your field you take most seriously? Which theories or discoveries are taken to be important, and which others are taken to be not-so-important? If so, you have to be honest with yourself and recognize the potential for this bias to interfere with your interaction with the phenomena, and with your interaction with other scientists to tackle scientific questions and build knowledge. If you’re committed to building reliable knowledge, you need to find ways to expose the operation of this bias, or to counteract its effects. (Also, to the extent that this bias might play a role in the distribution of rewards like jobs or grants in scientific careers, being honest with yourself probably means acknowledging that the scientific community does not operate as a perfect meritocracy.)

Each of these acknowledgments looks small on its own, but I will not pretend that that makes them easy. I trust that this won’t be a deal-breaker. Scientists do lots of hard things, and people committed to building reliable knowledge about the world should be ready to take on pieces of self-knowledge relevant to that knowledge-building. Even when they hurt.

Book review: The Radioactive Boy Scout.

When my three younger siblings and I were growing up, our parents had a habit of muttering, “A little knowledge is a dangerous thing.” The muttering that followed that aphorism usually had to do with the danger coming from the “little” amount of knowledge rather than a more comprehensive understanding of whatever field of endeavor was playing host to the hare-brained scheme of the hour. Now, as a parent myself, I suspect that another source of danger involved asymmetric distribution of the knowledge among the interested parties: while our parents may have had knowledge of the potential hazards of various activities, knowledge that we kids lacked, they didn’t always have detailed knowledge of what exactly we kids were up to. It may take a village to raise a child, but it can take less than an hour for a determined child to scorch the hell out of a card table with a chemistry kit. (For the record, the determined child in question was not me.)

The question of knowledge — and of gaps in knowledge — is a central theme in The Radioactive Boy Scout: The Frightening True Story of a Whiz Kid and His Homemade Nuclear Reactor by Ken Silverstein. Silverstein relates the story of David Hahn, a Michigan teen in the early 1990s who, largely free of adult guidance or supervision, worked tirelessly to build a breeder reactor in his back yard. At times this feels like a tale of youthful determination to reach a goal, a story of a self-motivated kid immersing himself in self-directed learning and doing an impressive job of identifying the resources he required. However, this is also a story about how, in the quest to achieve that goal, safety considerations can pretty much disappear.

David Hahn’s source of inspiration — not to mention his guide to many of the experimental techniques he used — was The Golden Book of Chemistry Experiments. Published in 1960, the text by Robert Brent conveys an almost ruthlessly optimistic view of the benefits chemistry and chemical experimentation can bring, whether to the individual or to humanity as a whole. Part of this optimism is what appears to modern eyes as an alarmingly cavalier attitude towards potential hazards and chemical safety. If anything, the illustrations by Harry Lazarus downplay the risks even more than does the text — across 112 pages, the only pictured items remotely resembling safety apparatus are lab coats and a protective mask for an astronaut.

You might imagine that leaving safety considerations in the subtext, or omitting them altogether, could be a problem when coupled with the typical teenager’s baseline assumption of invulnerability. In the case of a teenager teaching himself chemistry from the book, relying on it almost as a bible of the concepts, history, and experimental techniques a serious chemist ought to know, the lack of focus on potential harms might well have suggested that there was no potential for harm — or at any rate that the harm would be minor compared to the benefits of mastery. David Hahn seems to have maintained this belief despite a series of mishaps that made him a regular at his local emergency room.

Ah, youth.

Here, though, The Radioactive Boy Scout reminds us that young David Hahn was not the only party operating with far too little knowledge. Silverstein’s book expands on his earlier Harper’s article on the incident with chapters that convey just how widespread our ignorance of radioactive hazards has been for most of the history of our scientific, commercial, and societal engagement with radioactivity. At nearly every turn in this history, potential benefits have been extolled (with radium elixirs sold in the early 1900s to lower blood pressure, ease arthritis pain, and produce “sexual rejuvenescence”) and risks denied, sometimes until the body count was so large and the legal damages were so high that they could no longer be denied.

Surely part of the problem here is that the hazards of radioactivity are less immediately obvious than those of corrosive chemicals or explosive chemicals. The charred table is directly observable in a way that damage to one’s body from exposure to radioisotopes is not (partly because the table doesn’t have an immune system that kicks in to try to counter the damage). But the invisibility of these risks was also enhanced when manufacturers who used radioactive materials proclaimed their safety for both the end-user of consumer products and the workers making those products, and when the nuclear energy industry throttled the information the public got about mishaps at various nuclear reactors.

Possibly some of David Hahn’s teachers could have given him a more accurate view of the kinds of hazards he might encounter in trying to build a back yard breeder reactor … but the teen didn’t seem to feel like he could get solid mentoring from any of them, and didn’t let them in on his plans in any detail. The guidance he got from the Boy Scouts came in the form of an atomic energy merit badge pamphlet authored by the Atomic Energy Commission, a group created to promote atomic energy, and thus one unlikely to foreground the risks. (To be fair, this merit badge pamphlet did not anticipate that scouts working on the badge would actually take it upon themselves to build breeder reactors.) Presumably some of the scientists with whom David Hahn corresponded to request materials and advice on reactions would have emphasized the risks of his activities had they realized that they were corresponding with a high school student undertaking experiments in his back yard rather than with a science teacher trying to get clear on conceptual issues.

Each of these gaps of information ended up coalescing in such a way that David Hahn got remarkably close to his goal. He did an impressive job isolating radioactive materials from consumer products, performing chemical reactions to put them in suitable form for a breeder reactor, and assembling the pieces that might have initiated a chain reaction. He also succeeded in turning the back yard shed in which he conducted his work into a Superfund site. (According to Silverstein, the official EPA clean-up missed materials that his father and step-mother found hidden in their house and discarded in their household trash — which means that both the EPA and those close enough to the local landfill where the radioactive materials ended up had significant gaps in their knowledge about the hazards David Hahn introduced to the environment.)

The Radioactive Boy Scout manages to be at once an engaging walk through a challenging set of scientific problems and a chilling look at what can happen when scientific problems are stripped of their real-life context — of potential impacts, for good and for ill, that stretch across time and space and reach people who aren’t even aware of the scientific work being undertaken. It is a book I suspect my 13-year-old would enjoy very much.

I’m just not sure I’m ready to give it to her.

Book review: Uncaged.

In our modern world, many of the things that contribute to the mostly smooth running of our day-to-day lives are largely invisible to us. We tend to notice them only when they break. Uncaged, a thriller by Paul McKellips, identifies animal research as one of the activities in the background supporting the quality of life we take for granted, and explores what might happen if all the animal research in the U.S. ended overnight.

Part of the fun of a thriller is the unfolding of plot turns and the uncertainty about which of the characters who come into focus will end up being important. So as not to spoil the book for those who haven’t read it yet, I’m not going to say much about the details of the plot or the main characters.

The crisis emerges from a confluence of events and an intertwining of the actions of disparate persons acting in ignorance of each other. This complex tangle of causal factors is one of the most compelling parts of the narrative. McKellips gives us “good guys,” “bad guys,” and ordinary folks just trying to get by and to satisfy whatever they think their job description or life circumstances demand of them, weaving a tapestry where each triggers chains of events that compound in ways they could scarcely have foreseen. This is a viscerally persuasive picture of how connected we are to each other, whether by political processes, public health infrastructure, the food supply, or the germ pool.

There is much to like in Uncaged. The central characters are complex, engaging, and even surprising. McKellips is deft in his descriptions of events, especially the impacts on researchers and on members of the public of causal chains initiated by nature or by human action. Especially strong are his explanations of scientific techniques and of the rationales for animal research, which are reasonably accessible to the lay reader without being oversimplified.

Uncaged gets to the crux of the societal debate about scientific animal use in a statement from the President of the United States as, in response to a series of events, he issues an executive order halting animal research. This president spells out his take on the need — or not — for continued biomedical research with animals:

I realize that the National Institutes of Health grants billions of dollars to American universities and our brightest scientists for biomedical research each year. But there comes a point when we must ask ourselves — that we must seriously question — has our health reached the level of “good enough”? Think of all the medicine we have available to us today. It’s amazing. It’s plenty. It’s more than we have had available in the history of humanity. And for those of us who need medicines, surgeries, therapies and diagnostic tools — it is the sum total of all that we have available to us today. If it’s good enough for those of us who need it today, then perhaps it’s good enough for those who will need it tomorrow as well. Every generation has searched for the fountain of youth. But can we afford to spend more time, more money, and — frankly — more animals just to live longer? Natural selection is an uninvited guest within every family. Some of us will die old; some of us will die far too young. We cannot continue to fund the search for the fountain of youth. We must realize that certain diseases of aging — such as cancer, Alzheimer’s, and Parkinson’s — are inevitable. Our lifestyles and nutrition are environmental factors that certainly contribute to our health. How much longer can we pretend to play the role of God in our own laboratories? (58-59)

In some ways, this statement is the ethical pivot-point around which all the events of the novel — and the reader’s moral calculations — turn. How do we gauge “good enough”? Who gets to make the call, the people for whom modern medicine is more or less sufficient, or the people whose ailments still have no good treatment? What kind of process ought we as a society to use for this assessment?

These live questions end up being beside the point within the universe of Uncaged, though. The president issuing this statement has become, to all appearances, a one-man death panel.

McKellips develops a compelling and diverse selection of minor characters here: capitalists, terrorists, animal researchers, animal rights activists, military personnel, political appointees. Some of these (especially the animal rights activists) are clearly based on particular real people who are instantly recognizable to those who have been paying attention to the targeting of researchers in recent years. (If you’ve followed the extremists and their efforts less closely, entering bits of text from the communiques of the fictional animal rights organizations into a search engine is likely to help you get a look at their real-life counterparts.)

But, while McKellips’s portrayal of the animal rights activists is accurate in capturing their rhetoric, these players, central in creating the crisis to which the protagonists must respond, remain ciphers. The reader gets little sense of the events or thought processes that brought them to these positions, or of the sorts of internal conflicts that might occur within animal rights organizations — or within the hearts and minds of individual activists.

Maybe this is unavoidable — the animal rights activists one encounters on the internet often do seem like ciphers who work very hard to deny the complexities acknowledged by the researchers in Uncaged. But, perhaps naïvely, I have a hard time believing they are not more complex in real life than this.

As well, I would have liked for Uncaged to give us more of a glimpse into the internal workings of the executive branch — how the president and his cabinet made the decision to issue the executive order for a moratorium on animal research, what kinds of arguments various advisors might have offered for or against this order, what assemblage of political considerations, ideals, gut feelings, and unforeseen consequences born of incomplete information or sheer ignorance might have been at work. But maybe presidents, cabinet members, agency heads, and other political animals are ciphers, too — at least to research scientists who have to navigate the research environment these political animals establish and then rearrange.

Maybe this is an instance of the author grappling with the same challenge researchers face: you can’t build a realistic model without accurate and detailed information about the system you’re modeling. Maybe making such a large cast of characters more nuanced, and drawing us deeply into their inner lives, would have undercut the taut pacing of what is, after all, intended as an action thriller.

But to me, this feels like a missed opportunity. Ultimately, I worry that the various players in Uncaged — and, worse, their real-life counterparts: the researchers and other advocates of humane animal research, the animal rights activists, the political animals, and the various segments of the broader public — continue to see each other as ciphers rather than trying to get inside each other’s heads and figure out where their adversaries are coming from, the better to reflect upon and address the real concerns driving people. Modeling your opponents as automata has a certain efficiency, but to me it leaves the resolution feeling somewhat hollow — and it’s certainly not a strategy for engagement that I see leading to a healthy civil society in real life.

I suspect, though, that my disappointments are a side-effect of the fact that I am not a newcomer to these disputes. For readers not already immersed in the battles over research with animals, Uncaged renders researchers as complex human beings to whom one can relate. This is a good read for someone who wants a thriller that also conveys a compelling picture of what motivates various lines of biomedical research — and why such research might matter to us all.

The purpose of a funding agency (and how that should affect its response to misconduct).

In the “Ethics in Science” course I regularly teach, students spend a good bit of time honing their ethical decision-making skills by writing responses to case studies. (A recent post lays out the basic strategy we take in approaching these cases.) Over the span of the semester, my students’ responses to the cases give me pretty good data about the development of their ethical decision-making.

From time to time, they also advance claims that make me say, “Hmmm …”

Here’s one such claim, recently asserted in response to a case in which the protagonist, a scientist serving on a study section for the NIH (i.e., a committee that ranks the merit of grant proposals submitted to the NIH for funding), has to make a decision about how to respond when she detects plagiarism in a proposal:

The main purpose of the NIH is to ensure that projects with merit get funded, not to punish scientists for plagiarism.

Based on this assertion, the student argued that it wasn’t clear that the study section member had to make an official report to the NIH about the plagiarism.

I think the claim is interesting, though we would do well to unpack it a little. What, for instance, counts as a project with merit?

Is it enough that the proposed research would, if successful, contribute a new piece of knowledge to our shared body of scientific knowledge? Does the anticipated knowledge that the research would generate need to be important, and if so, according to what metric? (Clearly applicable to a pressing problem? Advancing our basic understanding of some part of our world? Surprising? Resolving an ongoing scientific debate?) Does the proposal need to convey evidence that the proposers have a good chance at being successful in conducting the research (because they have the scientific skills, the institutional resources, etc.)?

Does plagiarism count as evidence against merit here?

Perhaps we answer this question differently if we think what should be evaluated is the proposal rather than the proposer. Maybe the proposed research is well-designed, likely to work, and likely to make an important contribution to knowledge in the field — even if the proposer is judged lacking in scholarly integrity (because she seems not to know how properly to cite the words or ideas of others, or not to care to do so if she knows how).

But, one of the expectations of federal funders like the NIH is that scientists whose research is funded will write up the results and share them in the scientific literature. Among other things, this means that one of the skills a proposer needs to see a project through to successful completion (including publishing the results) is the ability to write without running afoul of basic standards of honest scholarship. A paper that communicates important results while also committing plagiarism will not bring glory to the NIH for funding the researcher.

More broadly, the fact that something (like detecting or punishing plagiarism) is not an agency’s primary goal does not mean it cannot serve that primary goal. To the extent that certain kinds of behavior in proposing research might mark a scientist as a bad risk to carry out research responsibly, it strikes me as entirely appropriate for funding agencies to flag those behaviors when they see them — and also to share that information with other funding agencies.

As well, to the extent that an agency like the NIH might punish a scientist for plagiarism, the kind of punishment it imposes is generally barring that scientist from eligibility for funding for a finite number of years. In other words, the punishment amounts to “You don’t get our money, and you don’t get to ask us for money again for the next N years.” To me, this punishment doesn’t look disproportionate, and imposing it on a plagiarist grant proposer doesn’t diverge wildly from the main goal of ensuring that projects with merit get funded.

But, as always, I’m interested in what you all think about it.

Charges against UCLA in fatal lab fire raise question of who is responsible for safety.

Right before 2011 ended (and, as it happened, right before the statute of limitations ran out), the Los Angeles County district attorney’s office filed felony charges against the University of California regents and UCLA chemistry professor Patrick Harran in connection with a December 2008 fire in Harran’s lab that resulted in the death of a 23-year-old staff research assistant, Sheharbano “Sheri” Sangji.

As reported by The Los Angeles Times:

Harran and the UC regents are charged with three counts each of willfully violating occupational health and safety standards. They are accused of failing to correct unsafe work conditions in a timely manner, to require clothing appropriate for the work being done and to provide proper chemical safety training.

Harran, 42, faces up to 4½ years in state prison, Robison said. He is out of town and will surrender to authorities when he returns, said his lawyer, Thomas O’Brien, who declined to comment further.

UCLA could be fined up to $1.5 million for each of the three counts.

[UCLA vice chancellor for legal affairs Kevin] Reed described the incident as “an unfathomable tragedy,” but not a crime.

The article notes that Sangji was working as a staff research assistant in Harran’s lab while she was applying to law schools. It mentions that she was a 2008 graduate of Pomona College but doesn’t say whether she had any particular background in chemistry.

As it happens, the work she was doing in the Harran lab presented particular hazards:

Sangji was transferring up to two ounces of t-butyl lithium from one sealed container to another when a plastic syringe came apart in her hands, spewing a chemical compound that ignites when exposed to air. The synthetic sweater she wore caught fire and melted onto her skin, causing second- and third-degree burns.

In May 2009, Cal/OSHA fined UCLA a total of $31,875 after finding that Sangji had not been trained properly and was not wearing protective clothing.

Two months before the fatal fire, UCLA safety inspectors found more than a dozen deficiencies in the same lab, according to internal investigative and inspection reports reviewed by The Times. Inspectors found that employees were not wearing requisite protective lab coats and that flammable liquids and volatile chemicals were stored improperly.

Corrective actions were not taken before the fire, the records showed.

Actions to address the safety deficiencies were taken after the fire, but these were, obviously, too late to save Sangji.

I’m not a lawyer, and I’m not interested in talking about legalities here — whether for the particular case the Los Angeles DA’s office will be pursuing against UCLA or for academic research labs more generally.

Rather, I want to talk about ethics.

Knowledge-building can be a risky business. In some situations, it involves materials that pose direct dangers to the people handling them, to the people in the vicinity, and even to people some distance away who are just trying to get on with their lives (e.g., if the hazardous materials get out into our shared environment).

Generally, scientists doing research that involves hazardous materials do what they can to find out how to mitigate the hazards. They learn appropriate ways of handling the materials, of disposing of them, of protecting themselves and others in case of accidents.

But, knowing the right ways to deal with hazardous materials is not sufficient to mitigate the risks. Proper procedures need to be implemented. Otherwise, your knowledge about the risks of hazardous materials is mostly useful in explaining bad outcomes after they happen.

So, who is ethically responsible for keeping an academic chemistry lab safe? And what exact shape does this responsibility take — that is, what should those who bear it be doing to fulfill that obligation?

What’s the responsibility of the principal investigator, the scientist leading the research project and, in most cases, heading the lab?

What’s the responsibility of the staff research assistant or technician, doing necessary labor in the lab for a paycheck?

What’s the responsibility of the graduate student in the research group, trying to learn how to do original research and to master the various skills he or she will need to become a PI someday? (It’s worth noting here that there’s a pretty big power differential between grad students and PIs, which may matter as far as how we apportion responsibility. Still, this doesn’t mean that those with less power have no ethical obligations pulling on them.)

What’s the responsibility of the institution under whose auspices the lab is operating? When a safety inspection turns up problems and generates a list of deficiencies that must be corrected, has that responsibility been discharged? When faculty members hire new staff research assistants, or technicians, or graduate students, does the institution have any specific obligations to them (as far as providing safety training, or a place to bring their safety concerns, or protective gear), or does this all fall to the PI?

And, what kind of obligations do these parties have in the case that one of the other players falls down on some of his or her obligations?

If I were still working in a chemistry lab, thinking through ethical dimensions like these before anything bad happened would not strike me as a purely academic exercise. Rather, it would be essential to ensuring that everyone stays as safe as possible.

So, let’s talk about what that would look like.