What scientists ought to do for non-scientists, and why: Obligations of scientists (part 5)

If you’re a scientist, are there certain things you’re obligated to do for society (not just for your employer)? If so, where does this obligation come from?

This is part of the discussion we started back in September about special duties or obligations scientists might have to the non-scientists with whom they share a world. If you’re just coming to the discussion now, you might want to check out the post where we set out some groundwork for the discussion, plus the three posts on scientists’ negative duties (i.e., the things scientists have an obligation not to do): our consideration of powers that scientists have and should not misuse; our discussion of scientific misconduct, the high crimes against science that scientists should never commit; and our examination of how plagiarism is not only unfair but also hazardous to knowledge-building.

In this post, finally, we lay out some of the positive duties that scientists might have.

In her book Ethics of Scientific Research, Kristin Shrader-Frechette gives a pretty forceful articulation of a set of positive duties for scientists. She asserts that scientists have a duty to do research, and a duty to use research findings in ways that serve the public good. Recall that these positive duties are in addition to scientists’ negative duty to ensure that the knowledge and technologies created by the research do not harm anyone.

Where do scientists’ special duties come from? Shrader-Frechette identifies a number of sources. For one thing, she says, there are obligations that arise from holding a monopoly on certain kinds of knowledge and services. Scientists are the ones in society who know how to work the electron microscopes and atom-smashers. They’re the ones who have the equipment and skills to build scientific knowledge. Such knowledge is not the kind of thing your average non-scientist could build for himself.

Scientists also have obligations that arise from the fact that they have a good chance of success (at least, better than anyone else) when it comes to educating the public about scientific matters or influencing public policy. The scientists who track the evidence that human activity leads to climate change, for example, are the ones who might be able to explain that evidence to the public and argue persuasively for measures that are predicted to slow climate change.

As well, scientists have duties that arise from the needs of the public. If the public’s pressing needs can only be met with the knowledge and technologies produced by scientific research – and if non-scientists cannot produce such knowledge and technologies themselves – then if scientists do no work to meet these needs, who can?

As we’ve noted before, there is, in all of this, that Spider-Man superhero ethos: with great power comes great responsibility. When scientists realize how much power their knowledge and skills give them relative to the non-scientists in society, they begin to see that their duties are greater than they might have thought.

Let’s turn to what I take to be Shrader-Frechette’s more controversial claim: that scientists have a positive duty to conduct research. Where does this obligation come from?

For one thing, she argues, knowledge itself is valuable, especially in democratic societies where it could presumably help us make better choices than we’d be able to make with less knowledge. Thus, those who can produce knowledge should produce it.

For another thing, Shrader-Frechette points out, society funds research projects (through various granting agencies and direct funding from governmental entities). Researchers who accept such research funding are not free to abstain from research. They can’t take the grants and put an addition on the house. Rather, they are obligated to perform the contracted research. This argument is pretty uncontroversial, I think, since asking for money to do the research that will lead to more scientific knowledge and then failing to use that money to build more scientific knowledge is deceptive.

But here’s the argument that I think will meet with more resistance, at least from scientists: In the U.S., in addition to funding particular pieces of scientific research, society pays the bill for training scientists. This is not just true for scientists trained at public colleges and universities. Even private universities get a huge chunk of their money to fund research projects, research infrastructure, and the scientific training they give their students from public sources, including but not limited to federal funding agencies like the National Science Foundation and the National Institutes of Health.

The American people are not putting up this funding out of the goodness of their hearts. Rather, the public invests in the training of scientists because it expects a return on this investment in the form of the vital knowledge those trained scientists go on to produce and share with the public. Since the public pays to train people who can build scientific knowledge, the people who receive this training have a duty to go forth and build scientific knowledge to benefit the public.

Finally, Shrader-Frechette says, scientists have a duty to do research because if they don’t do research regularly, they won’t remain knowledgeable in their field. Not only will they not be up on the most recent discoveries or what they mean, but they will start to lose the crucial experimental and analytic skills they developed when they were being trained as scientists. For the philosophy fans in the audience, this point in Shrader-Frechette’s argument is reminiscent of Immanuel Kant’s example of how the man who prefers not to cultivate his talents is falling down on his duties. If everyone in society chose not to cultivate her talents, each of us would need to be completely self-sufficient (since we could not receive aid from others exercising their talents on our behalf) – and even that would not be enough, since we would not be able to rely on our own talents, having decided not to cultivate them.

On the basis of Shrader-Frechette’s argument, it sounds like every member of society who has had the advantage of scientific training (paid for by your tax dollars and mine) should be working away in the scientific knowledge salt-mine, at least until science has built all the knowledge society needs it to build.

And here’s where I put my own neck on the line: I earned a Ph.D. in chemistry (conferred in January 1994, almost exactly 20 years ago). Like other students in U.S. Ph.D. programs in chemistry, I did not pay for that scientific training. Rather, as Shrader-Frechette points out, my scientific training was heavily subsidized by the American taxpayer. I have not built a bit of new chemical knowledge since the middle of 1994 (when I wrapped up one more project after completing my Ph.D.).

Have I fallen down on my positive duties as a trained scientist? Would it be fair for American taxpayers to try to recover the funds they invested in my scientific training?

We’ll take up these questions (among others) in the next installment of this series. Stay tuned!

_____
Shrader-Frechette, K. S. (1994). Ethics of scientific research. Rowman & Littlefield.
______
Posts in this series:

Questions for the non-scientists in the audience.

Questions for the scientists in the audience.

What do we owe you, and who’s “we” anyway? Obligations of scientists (part 1)

Scientists’ powers and ways they shouldn’t use them: Obligations of scientists (part 2)

Don’t be evil: Obligations of scientists (part 3)

How plagiarism hurts knowledge-building: Obligations of scientists (part 4)

What scientists ought to do for non-scientists, and why: Obligations of scientists (part 5)

What do I owe society for my scientific training? Obligations of scientists (part 6)

Are you saying I can’t go home until we cure cancer? Obligations of scientists (part 7)

Careers (not just jobs) for Ph.D.s outside the academy.

A week ago I was in Boston for the 2013 annual meeting of the History of Science Society. Immediately after the session in which I was a speaker, I attended a session (Sa31 in this program) called “Happiness beyond the Professoriate — Advising and Embracing Careers Outside the Academy.” The discussion there was specifically pitched at people working in the history of science (whether earning their Ph.D.s or advising those who are), but much of it struck me as broadly applicable to people in other fields — not just fields like philosophy, but also science, technology, engineering, and mathematics (STEM) fields.

The discourse in the session was framed in terms of recognizing, and communicating, that getting a job just like your advisor’s (i.e., as a faculty member at a research university with a Ph.D. program in your field — or, loosening it slightly, as permanent faculty at a college or university, even one not primarily focused on research or on training new members of the profession at the Ph.D. level) shouldn’t be a necessary condition for maintaining your professional identity and place in the professional community. Make no mistake, people in one’s discipline (including those training new members of the profession at the Ph.D. level) frequently do discount people as no longer really members of the profession for failing to succeed in the One True Career Path, but the panel asserted that they shouldn’t.

And, they provided plenty of compelling reasons why the “One True Career Path” approach is problematic. Chief among these, at least in fields like history, is that this approach feeds the creation and growth of armies of adjunct faculty, hoping that someday they will become regular faculty, and in the meantime working for very low wages relative to the amount of work they do (and relative to their training and expertise), experiencing serious job insecurity (sometimes not finding out whether they’ll have classes to teach until the academic term is actually underway), and enduring all manner of employer shenanigans (like having their teaching loads reduced to 50% of full time so the universities employing them are not required by law to provide health care coverage). Worse, insistence on One True Career Path fails to acknowledge that happiness is important.

Panelist Jim Grossman noted that the very language of “alternative careers” reinforces this problematic view by building in the assumption that there is a default career path. Speaking of “alternatives” instead might challenge the assumption that all options other than the default are lesser options.

Grossman identified other bits of vocabulary that ought to be excised from these discussions. He argued against speaking of “the job market” when one really means “the academic job market”. Otherwise, the suggestion is that you can’t really consider those other jobs without exiting the profession. Talking about “job placement,” he said, might have made sense back in the day when the chair of a hiring department called the chair of another department to say, “Send us your best man!” rather than conducting an actual job search. Those days are long gone.

And Grossman had lots to say about why we should stop talking about “overproduction of Ph.D.s.”

Ph.D.s, he noted, are earned by people, not produced like widgets on a factory line. Describing the number of new Ph.D.-holders each year as overproduction is claiming that there are too many — but again, this is too many relative to a specific kind of career trajectory assumed implicitly to be the only one worth pursuing. There are many sectors in the career landscape that could benefit from the talents of these Ph.D.-holders, so why are we not describing the current situation as one of “underconsumption of Ph.D.s”? Finally, the “overproduction of Ph.D.s” locution doesn’t seem helpful in a context where there seems to be no good way to stop departments from “producing” as many Ph.D.s as they want to. If market forces were enough to address this imbalance, we wouldn’t have armies of adjuncts.

Someone in the discussion pointed out that STEM fields have for some time had similar issues of Ph.D. supply and demand, suggesting that they might be ahead of the curve in developing useful responses which other disciplines could borrow. However, the situation in STEM fields differs in that industrial career paths have been treated as legitimate (and as not removing you from the profession). And, more generally, society seems to take the skills and qualities of mind developed during a STEM Ph.D. as useful and broadly applicable, while those developed during a history or philosophy Ph.D. are assumed to be hopelessly esoteric. However, it was noted that while STEM fields don’t generate the same armies of adjuncts as humanities fields, they do have what might be described as the “endless postdoc” problem.

Given that structural stagnation of the academic job market is real (and has been a reality for something like 40 years in the history of science), panelist Lynn Nyhart observed that it would be foolish for Ph.D. students not to consider — and prepare for — other kinds of jobs. As well, Nyhart argued that as long as faculty take on graduate students, they have a responsibility to help them find jobs.

Despite professing to be essentially clueless about career paths other than academia, advisors do have resources they can draw upon in helping their graduate students. Among these is the network of Ph.D. alumni from their graduate program, as well as the network of classmates from their own Ph.D. training. Chances are that a number of people in these networks are doing a wide range of different things with their Ph.D.s — and that they could provide valuable information and contacts. (Also, keeping in contact with these folks recognizes that they are still valued members of your professional community, rather than treating them as dead to you if they did not pursue the One True Career Path.)

Nyhart also recommended Versatilephd.com, especially the PhD Career Finder tab, as a valuable resource for exploring the different kinds of work for which Ph.D.s in various fields can serve as preparation. Some of the good stuff on the site is premium content, but if your university subscribes to the site your access to that premium content may already be paid for.

Nyhart noted that preparing Ph.D. students for a wide range of careers doesn’t require lowering discipline-specific standards, nor changing the curriculum — although, as Grossman pointed out, it might mean thinking more creatively about what skills, qualities of mind, and experiences existing courses impart. After all, skills that are good training for a career in academia — being a good teacher, an effective committee member, an excellent researcher, a persuasive writer, a productive collaborator — are skills that are portable to other kinds of careers.

David Attis, who has a Ph.D. in history of science and has been working in the private sector for about a decade, mentioned some practical skills worth cultivating for Ph.D.s pursuing private sector careers. These include having a tight two-minute explanation of your thesis geared to a non-specialist audience, being able to demonstrate your facility in approaching and solving non-academic problems, and being able to work on the timescale of business, not thesis writing (i.e., five hours to write a two-page memo is far too slow). Attis said that private sector employers are looking for people who can work well on teams and who can be flexible in contexts beyond teaching and research.

I found the discussion in this session incredibly useful, and I hope some of the important issues raised there will find their way to the graduate advisors and Ph.D. students who weren’t in the room for it, no matter what their academic discipline.

Scientific training and the Kobayashi Maru: inside the frauds of Diederik Stapel (part 3).

This post continues my discussion of issues raised in the article by Yudhijit Bhattacharjee in the New York Times Magazine (published April 26, 2013) on social psychologist and scientific fraudster Diederik Stapel. Part 1 looked at how expecting to find a particular kind of order in the universe may leave a scientific community more vulnerable to a fraudster claiming to have found results that display just that kind of order. Part 2 looked at some of the ways Stapel’s conduct did harm to the students he was supposed to be training to be scientists. Here, I want to point out another way that Stapel failed his students — ironically, by shielding them from failure.

Bhattacharjee writes:

[I]n the spring of 2010, a graduate student noticed anomalies in three experiments Stapel had run for him. When asked for the raw data, Stapel initially said he no longer had it. Later that year, shortly after Stapel became dean, the student mentioned his concerns to a young professor at the university gym. Each of them spoke to me but requested anonymity because they worried their careers would be damaged if they were identified.

The professor, who had been hired recently, began attending Stapel’s lab meetings. He was struck by how great the data looked, no matter the experiment. “I don’t know that I ever saw that a study failed, which is highly unusual,” he told me. “Even the best people, in my experience, have studies that fail constantly. Usually, half don’t work.”

In the next post, we’ll look at how this other professor’s curiosity about Stapel’s too-good-to-be-true results led to the unraveling of Stapel’s fraud. But I think it’s worth pausing here to say a bit more on how very odd a training environment Stapel’s research group provided for his students.

None of his studies failed. Since, as we saw in the last post, Stapel was also conducting (or, more accurately, claiming to conduct) his students’ studies, that means none of his students’ studies failed.

This is pretty much the opposite of every graduate student experience in an empirical field that I have heard described. Most studies fail. Getting to a 50% success rate with your empirical studies is a significant achievement.

Graduate students who are also Trekkies usually come to recognize that the travails of empirical studies are like a version of the Kobayashi Maru.

Introduced in Star Trek II: The Wrath of Khan, the Kobayashi Maru is a training simulation in which Starfleet cadets are presented with a civilian ship in distress. Saving the civilians requires the cadet to violate treaty by entering the Neutral Zone (and in the simulation, this choice results in a Klingon attack and the boarding of the cadet’s ship). Honoring the treaty, on the other hand, means abandoning the civilians and their disabled ship in the Neutral Zone. The Kobayashi Maru is designed as a “no-win” scenario. The intent of the test is to discover how trainees face such a situation. Wikipedia notes that, owing to James T. Kirk’s performance on the test, some Trekkies also view the Kobayashi Maru as a problem whose solution depends on redefining the problem.

Scientific knowledge-building turns out to be packed with particular plans that cannot succeed at yielding the particular pieces of knowledge the scientists hope to discover. This is because scientists are formulating plans on the basis of what is already known to try to reveal what isn’t yet known — so knowing where to look, or what tools to use to do the looking, or what other features of the world are there to confound your ability to get clear information with those tools, is pretty hard.

Failed attempts happen. If they’re the sort of thing that will crush your spirit and leave you unable to shake it off and try it again, or to come up with a new strategy to try, then the life of a scientist will be a pretty hard life for you.

Grown-up scientists have studies fail all the time. Graduate students training to be scientists do, too. But graduate students also have mentors who are supposed to help them bounce back from failure — to figure out the most likely sources of failure, whether it’s worth trying the study again, whether a new approach would be better, whether some crucial piece of knowledge has been learned despite the failure of what was planned. Mentors give scientific trainees a set of strategies for responding to particular failures, and they also give reassurance that even good scientists fail.

Scientific knowledge is built by actual humans who don’t have perfect foresight about the features of the world as yet undiscovered, humans who don’t have perfectly precise instruments (or hands and eyes using those instruments), humans who sometimes mess up in executing their protocols. Yet the knowledge is built, and it frequently works pretty well.

In the context of scientific training, it strikes me as malpractice to send new scientists out into the world with the expectation that all of their studies should work, and without any experience grappling with studies that don’t work. Shielding his students from their Kobayashi Maru is just one more way Diederik Stapel cheated them out of a good scientific training.

Failing the scientists-in-training: inside the frauds of Diederik Stapel (part 2)

In this post, I’m continuing my discussion of the excellent article by Yudhijit Bhattacharjee in the New York Times Magazine (published April 26, 2013) on social psychologist and scientific fraudster Diederik Stapel. The last post considered how being disposed to expect order in the universe might have made other scientists in Stapel’s community less critical of his (fabricated) results than they could have been. Here, I want to shift my focus to some of the harm Stapel did beyond introducing lies to the scientific literature — specifically, the harm he did to the students he was supposed to be training to become good scientists.

I suppose it’s logically possible for a scientist to commit misconduct in a limited domain — say, to make up the results of his own research projects but to make every effort to train his students to be honest scientists. This doesn’t strike me as a likely scenario, though. Publishing fraudulent results as if they were factual is lying to one’s fellow scientists — including the generation of scientists one is training. Moreover, most research groups pursue interlocking questions, meaning that the questions the grad students are working to answer generally build on pieces of knowledge the boss has built — or, in Stapel’s case, “built”. This means that, at minimum, a fabricating PI is probably wasting his trainees’ time by letting them base their own research efforts on claims that there’s no good scientific reason to trust.

And as Bhattacharjee describes the situation for Stapel’s trainees, things for them were even worse:

He [Stapel] published more than two dozen studies while at Groningen, many of them written with his doctoral students. They don’t appear to have questioned why their supervisor was running many of the experiments for them. Nor did his colleagues inquire about this unusual practice.

(Bold emphasis added.)

I’d have thought that one of the things a scientist-in-training hopes to learn in the course of her graduate studies is not just how to design a good experiment, but how to implement it. Making your experimental design work in the real world is often much harder than it seems like it will be, but you learn from these difficulties — about the parameters you ignored in the design that turn out to be important, about the limitations of your measurement strategies, about ways the system you’re studying frustrates the expectations you had about it before you were actually interacting with it.

I’ll even go out on a limb and say that some experience doing experiments can make a significant difference in a scientist’s skill conceiving of experimental approaches to problems.

That Stapel cut his students out of doing the experiments was downright weird.

Now, scientific trainees probably don’t have the most realistic picture of precisely what competencies they need to master to become successful grown-up scientists in a field. They trust that the grown-up scientists training them know what these competencies are, and that these grown-up scientists will make sure that they encounter them in their training. Stapel’s trainees likely trusted him to guide them. Maybe they thought that he would have them conducting experiments if that were a skill that would require a significant amount of time or effort to master. Maybe they assumed that implementing the experiments they had designed was just so straightforward that Stapel thought they were better served working to learn other competencies instead.

(For that to be the case, though, Stapel would have to be the world’s most reassuring graduate advisor. I know my impostor complex was strong enough that I wouldn’t have believed I could do an experiment my boss or my fellow grad students viewed as totally easy until I had actually done it successfully three times. If I had to bet money, it would be that some of Stapel’s trainees wanted to learn how to do the experiments, but they were too scared to ask.)

There’s no reason, however, that Stapel’s colleagues should have thought it was OK that his trainees were not learning how to do experiments by taking charge of doing their own. If they did know and they did nothing, they were complicit in a failure to provide adequate scientific training to trainees in their program. If they didn’t know, that’s an argument that departments ought to take more responsibility for their trainees and to exercise more oversight rather than leaving each trainee to the mercies of his or her advisor.

And, as becomes clear from the New York Times Magazine article, doing experiments wasn’t the only piece of standard scientific training of which Stapel’s trainees were deprived. Bhattacharjee describes the revelation when a colleague collaborated with Stapel on a piece of research:

Stapel and [Ad] Vingerhoets [a colleague of his at Tilburg] worked together with a research assistant to prepare the coloring pages and the questionnaires. Stapel told Vingerhoets that he would collect the data from a school where he had contacts. A few weeks later, he called Vingerhoets to his office and showed him the results, scribbled on a sheet of paper. Vingerhoets was delighted to see a significant difference between the two conditions, indicating that children exposed to a teary-eyed picture were much more willing to share candy. It was sure to result in a high-profile publication. “I said, ‘This is so fantastic, so incredible,’ ” Vingerhoets told me.

He began writing the paper, but then he wondered if the data had shown any difference between girls and boys. “What about gender differences?” he asked Stapel, requesting to see the data. Stapel told him the data hadn’t been entered into a computer yet.

Vingerhoets was stumped. Stapel had shown him means and standard deviations and even a statistical index attesting to the reliability of the questionnaire, which would have seemed to require a computer to produce. Vingerhoets wondered if Stapel, as dean, was somehow testing him. Suspecting fraud, he consulted a retired professor to figure out what to do. “Do you really believe that someone with [Stapel’s] status faked data?” the professor asked him.

“At that moment,” Vingerhoets told me, “I decided that I would not report it to the rector.”

Stapel’s modus operandi was to make up his results out of whole cloth — to produce “findings” that looked statistically plausible without the muss and fuss of conducting actual experiments or collecting actual data. Indeed, since the thing he was creating that needed to look plausible enough to be accepted by his fellow scientists was the analyzed data, he didn’t bother making up raw data from which such an analysis could be generated.

Connecting the dots here, this surely means that Stapel’s trainees must not have gotten any experience dealing with raw data or learning how to apply methods of analysis to actual data sets. This left another gaping hole in the scientific training they deserved.

It would seem that those being trained by other scientists in Stapel’s program were getting some experience in conducting experiments, collecting data, and analyzing their data — since that experimentation, data collection, and data analysis became fodder for discussion in the ethics training that Stapel led. From the article:

And yet as part of a graduate seminar he taught on research ethics, Stapel would ask his students to dig back into their own research and look for things that might have been unethical. “They got back with terrible lapses,” he told me. “No informed consent, no debriefing of subjects, then of course in data analysis, looking only at some data and not all the data.” He didn’t see the same problems in his own work, he said, because there were no real data to contend with.

I would love to know the process by which Stapel’s program decided that he was the best one to teach the graduate seminar on research ethics. I wonder if this particular teaching assignment was one of those burdens that his colleagues tried to dodge, or if research ethics was viewed as a teaching assignment requiring no special expertise. I wonder how it’s sitting with them that they let a now-famous cheater teach their grad students how to be ethical scientists.

The whole “those who can’t do, teach” adage rings hollow here.

Are safe working conditions too expensive for knowledge-builders?

Last week’s deadly collapse of an eight-story garment factory building in Dhaka, Bangladesh has prompted discussions about whether poor countries can afford safe working conditions for workers who make goods that consumers in countries like the U.S. prefer to buy for bargain prices.

Maybe the risk of being crushed to death (or burned to death, or what have you) is just a trade-off poor people are (or should be) willing to accept to draw a salary. At least, that seems to be the take-away message from the crowd arguing that it would cost too much to have safety regulation (and enforcement) with teeth.

It is hard not to consider how this kind of attitude might get extended to other kinds of workplaces — like, say, academic research labs — given that last week UCLA chemistry professor Patrick Harran was also scheduled to return to court for a preliminary hearing on the felony charges of labor code violations brought against him in response to the 2008 fire in his laboratory that killed his employee, Sheri Sangji.

Jyllian Kemsley has a detailed look at how Harran’s defense team has responded to the charges of specific violations of the California Labor Code, charges involving failure to provide adequate training, failure to have adequate procedures in place to correct unsafe conditions or work practices, and failure to require that workers wear appropriate clothing for the work being done. Since I’m not a lawyer, it’s hard for me to assess the likelihood that the defense responses to these charges would be persuasive to a judge, but ethically, they’re pretty weak tea.

Sadly, though, it’s weak tea of the exact sort that my scientific training has led me to expect from people directing scientific research labs in academic settings.

When safety training is confined to a single safety video that graduate students are shown when they enter a program, that tells graduate students that their safety is not a big deal in the research activities that are part of their training.

When there’s not enough space under the hood for all the workers in a lab to conduct all the activities that, for safety’s sake, ought to be conducted under the hood — and when the boss expects all those activities to happen without delay — that tells them that a sacrifice in safety to produce quick results is acceptable.

When a student-volunteer needs to receive required ionizing radiation safety training to get a film badge that will give her access to the facility where she can irradiate her cells for an experiment, and the PI, upon hearing that the next training session is three weeks away, says to the student-volunteer, “Don’t bother; use my film badge,” that tells people in the lab that the PI is unwilling to lose three weeks of unpaid labor on one aspect of a research project just to make the personnel involved a little bit safer.

When people running a lab take an attitude of “Eh, young people are going to dress how they’re going to dress” rather than imposing a clear rule that people whose dress is unsafe for an activity don’t get to undertake it, that tells the personnel in the lab that whatever cost is involved in holding this line — losing a day’s worth of work, being viewed by one’s underlings as strict rather than cool — has been judged too high relative to the benefit of making personnel in the lab safer.

When university presidents or other administrators proclaim that knowledge-builders “must continue to recalibrate [their] risk tolerance” by examining their “own internal policies and ask[ing] the question—do they meet—or do they exceed—our legal or regulatory requirements,” that tells knowledge-builders at those universities that people with significantly more power than them judge efforts to make things safer for knowledge-builders (and for others, like the human subjects of their research) as an unnecessary burden. When institutions need to become leaner, or more agile, shouldn’t researchers (and human subjects) do their part by accepting more risk as the price of doing business?

To be sure, safety isn’t free. But there are also costs to being less safe in academic research settings.

For example, personnel develop lax attitudes toward risks and trainees take these attitudes with them when they go out in the world as grown-up scientists. Surrounding communities can get hurt by improper disposal of hazardous materials, or by inadequate safety measures taken by researchers working with infectious agents who then go home and cough on their families and friends. Sometimes, personnel are badly injured, or killed.

And, if academic scientists are dragging their feet on making things safer for the researchers on their teams because it takes time and effort to investigate risks and make sensible plans for managing them, to develop occupational health plans, and to institute standard operating procedures that everyone on the research team knows and follows, I hope they’re noticing that facing felony charges stemming from safety problems in their labs can also take lots of time and effort.

UPDATE: The Los Angeles Times reports that Patrick Harran will stand trial after an LA County Superior Court judge denied a defense motion to dismiss the case.

Shame versus guilt in community responses to wrongdoing.

Yesterday, on the Hastings Center Bioethics Forum, Carl Elliott pondered the question of why a petition asking the governor of Minnesota to investigate ethically problematic research at the University of Minnesota has gathered hundreds of signatures from scholars in bioethics, clinical research, medical humanities, and related disciplines — but only a handful of signatures from scholars and researchers at the University of Minnesota.

At the center of the research scandal is the death of Dan Markingson, who was a human subject in a clinical trial of psychiatric drugs. Detailed background on the case can be found here, and Judy Stone has blogged extensively about the ethical dimensions of the case.

Elliott writes:

Very few signers come from the University of Minnesota. In fact, only two people from the Center for Bioethics have signed: Leigh Turner and me. This is not because any faculty member outside the Department of Psychiatry actually defends the ethics of the study, at least as far as I can tell. What seems to bother people here is speaking out about it. Very few faculty members are willing to register their objections publicly.

Why not? Well, there are the obvious possibilities – fear, apathy, self-interest, and so on. At least one person has told me she is unwilling to sign because she doesn’t think the petition will succeed. But there may be a more interesting explanation that I’d like to explore. …

Why would faculty members remain silent about such an alarming sequence of events? One possible reason is simply because they do not feel as if the wrongdoing has anything to do with them. The University of Minnesota is a vast institution; the scandal took place in a single department; if anyone is to be blamed, it is the psychiatrists and the university administrators, not them. Simply being a faculty member at the university does not implicate them in the wrongdoing or give them any special obligation to fix it. In a phrase: no guilt, hence no responsibility.

My view is somewhat different. These events have made me deeply ashamed to be a part of the University of Minnesota, in the same way that I feel ashamed to be a Southerner when I see video clips of Strom Thurmond’s race-baiting speeches or photos of Alabama police dogs snapping at black civil rights marchers. I think that what our psychiatrists did to Dan Markingson was wrong in the deepest sense. It was exploitative, cruel, and corrupt. Almost as disgraceful are the actions university officials have taken to cover it up and protect the reputation of the university. The shame I feel comes from the fact that I have worked at the University of Minnesota for 15 years. I have even been a member of the IRB. For better or worse, my identity is bound up with the institution.

These two different reactions – shame versus guilt – differ in important ways. Shame is linked with honor; it is about losing the respect of others, and by virtue of that, losing your self-respect. And honor often involves collective identity. While we don’t usually feel guilty about the actions of other people, we often do feel ashamed if those actions reflect on our own identities. So, for example, you can feel ashamed at the actions of your parents, your fellow Lutherans, or your physician colleagues – even if you feel as if it would be unfair for anyone to blame you personally for their actions.

Shame, unlike guilt, involves the imagined gaze of other people. As Ruth Benedict writes: “Shame is a reaction to other people’s criticism. A man is shamed either by being openly ridiculed or by fantasying to himself that he has been made ridiculous. In either case it is a potent sanction. But it requires an audience or at least a man’s fantasy of an audience. Guilt does not.”

As Elliott notes, one way to avoid an audience — and thus to avoid shame — is to actively participate in, or tacitly endorse, a cover-up of the wrongdoing. I’m inclined to think, however, that taking steps to avoid shame by hiding the facts, or by allowing retaliation against people asking inconvenient questions, is itself a kind of wrongdoing — the kind of thing that incurs guilt, for which no audience is required.

As well, I think the scholars and researchers at the University of Minnesota who prefer not to take a stand on how their university responds to ethically problematic research, even if it is research in someone else’s lab, or someone else’s department, underestimate the size of the audience for their actions and for their inaction.

A hugely significant segment of this audience is their trainees. Their students and postdocs (and others involved in training relationships with them) are watching them, trying to draw lessons about how to be a grown-up scientist or scholar, a responsible member of a discipline, a responsible member of a university community, a responsible citizen of the world. The people they are training are looking to them to set a good example of how to respond to problems — by addressing them, learning from them, making things right, and doing better going forward, or by lying, covering up, and punishing the people who were harmed by trying to recover costs from them (thus sending a message to others who dare to point out how they have been harmed).

There are many fewer explicit conversations about such issues than one might hope in a scientist’s training. In the absence of explicit conversations, most of what trainees have to go on is how the people training them actually behave. And sometimes, a mentor’s silence speaks as loud as words.

The danger of pointing out bad behavior: retribution (and the community’s role in preventing it).

There has been a lot of discussion of Dario Maestripieri’s disappointment at the unattractiveness of his female colleagues in the neuroscience community. Indeed, it’s notable how much of this discussion has been in public channels, not just private emails or conversations conducted with sound waves which then dissipate into the aether. No doubt, this is related to Maestripieri’s decision to share his hot-or-not assessment of the women in his profession in a semi-public space where it could achieve more permanence — and amplification — than it would have as an utterance at the hotel bar.

His behavior became something that any member of his scientific community with an internet connection (and a whole lot of people outside his scientific community) could inspect. The impacts of an actual, rather than hypothetical, piece of behavior, could be brought into the conversation about the climate of professional and learning communities, especially for the members of these communities who are women.

It’s worth pointing out that there is nothing especially surprising about such sexist behavior* within these communities. The people in these communities who have been paying attention have seen such behaviors before (and besides have good empirical grounds for expecting that gender biases may be a problem). But many sexist behaviors go unreported and unremarked, sometimes because of the very real fear of retribution.

What kind of retribution could there be for pointing out a piece of behavior that has sexist effects, or arguing that it is an inappropriate way for a member of the professional community to behave?

Let’s say you are an early-career scientist applying for a faculty post. As it happens, Dario Maestripieri’s department, the University of Chicago Department of Comparative Human Development, currently has an open search for a tenure-track assistant professor. There is a non-zero chance that Dario Maestripieri is a faculty member on that search committee, or that he has the ear of a colleague who is.

It is not a tremendous stretch to hypothesize that Dario Maestripieri may not be thrilled at the public criticism he’s gotten in response to his Facebook post (including some quite close to home). Possibly he’s looking through the throngs of his Facebook friends and trying to guess which of them is the one who took the screenshot of his ill-advised post and shared it more widely. Or looking through his Facebook friends’ Facebook friends. Or considering which early-career neuroscientists might be in-real-life friends or associates of his Facebook friends or their Facebook friends.

Now suppose you’re applying for that faculty position in his department and you happen to be one of his Facebook friends,** or one of their Facebook friends, or one of the in-real-life friends of either of those.

Of course, shooting down an applicant for a faculty position for the explicit reason that you think he or she may have cast unwanted attention on your behavior towards your professional community would be a problem. But there are probably enough applicants for the position, enough variation in the details of their CVs, and enough subjective judgment on the part of the members of the search committee in evaluating all those materials that it would be possible to cut all applicants who are Dario Maestripieri’s Facebook friends (or their Facebook friends, or in-real-life friends of either of those) from consideration while providing some other plausible reason for their elimination. Indeed, the circle could be broadened to eliminate candidates with letters of recommendation from Dario Maestripieri’s Facebook friends (or their Facebook friends, or in-real-life friends of either of those), candidates who have coauthored papers with Dario Maestripieri’s Facebook friends (or their Facebook friends, or in-real-life friends of either of those), etc.

And, since candidates who don’t get the job generally aren’t told why they were found wanting — only that some other candidate was judged to be better — these other plausible reasons for shooting down a candidate would only even matter in the discussions of the search committee.

In other words, real retaliation (rejection from consideration for a faculty job) could fall on people who are merely suspected of sharing information that led to Dario Maestripieri becoming the focus of a public discussion of sexist behavior — not just on the people who have publicly spoken about his behavior. And, the retaliation would be practically impossible to prove.

If you don’t think this kind of possibility has a chilling effect on the willingness of members of a professional community to speak up when they see a relatively powerful colleague behave in ways they think are harmful, you just don’t understand power dynamics.

And even if Dario Maestripieri has no part at all in his department’s ongoing faculty search, there are other interactions within his professional community in which his suspicions about who might have exposed his behavior could come into play. Senior scientists are routinely asked to referee papers submitted to scientific journals and to serve on panels and study sections that rank applications for grants. In some of these circumstances, the identities of the scientists one is judging (e.g., for grants) are known to the scientists making the evaluations. In others, they are masked, but the scientists making the evaluations have hunches about whose work they are evaluating. If those hunches are mingled with hunches about who could have shared evidence of behavior that is now making the evaluator’s life difficult, it’s hard to imagine the grant applicant or the manuscript author getting a completely fair shake.

Let’s pause here to note that the attitude Dario Maestripieri’s Facebook posting reveals, that it’s appropriate to evaluate women in the field on their physical beauty rather than their scientific achievements, could itself be a source of bias as he does things that are part of a normal professional life, like serving on search committees, reviewing journal submissions and grant applications, evaluating students, and so forth. A bias like this could manifest itself in a preference for hiring job candidates one finds aesthetically pleasing. (Sure, academic job application packets usually don’t include a headshot, but even senior scientists have probably heard of Google Image search.) Or it could manifest itself in a preference against hiring more women (since too high a concentration of female colleagues might be perceived as increasing the likelihood that one would be taken to task for freely expressing one’s aesthetic preferences about women in the field). Again, it would be extraordinarily hard to prove the operation of such a bias in any particular case — but that doesn’t rule out the possibility that it is having an effect in activities where members of the professional community are supposed to be as objective as possible.

Objectivity, as we’ve noted before, is hard.

We should remember, though, that faculty searches are conducted by committees, rather than by a single individual with the power to make all the decisions. And, the University of Chicago Department of Comparative Human Development (as well as the University of Chicago more generally) may recognize that it is likely to be getting more public scrutiny as a result of the public scrutiny Dario Maestripieri has been getting.

Among other things, this means that the department and the university have a real interest in conducting a squeaky-clean search that avoids even the appearance of retaliation. In any search, members of the search committee have a responsibility to identify, disclose, and manage their own biases. In this search, discharging that responsibility is even more vital. In any search, members of the hiring department have a responsibility to discuss their shared needs and interests, and how these should inform the selection of the new faculty member. In this search, that discussion of needs and interests must include a discussion of the climate within the department and the larger scientific community — what it is now, and what members of the department think it should be.

In any search, members of the hiring department have an interest in sharing their opinions on who the best candidate might be, and to having a dialogue around the disagreements. In this search, if it turns out one of the disagreements about a candidate comes down to “I suspect he may have been involved in exposing my Facebook post and making me feel bad,” well, arguably there’s a responsibility to have a discussion about that.

Ask academics what it’s like to hire a colleague and it’s not uncommon to hear them describe the experience as akin to entering a marriage. You’re looking for someone with whom you might spend the next 30 years, someone who will grow with you, who will become an integral part of your department and its culture, even to the point of helping that departmental culture grow and change. This is a good reason not to choose the new hire based on the most superficial assessment of what each candidate might bring to the relationship — and to recognize that helping one faculty member avoid discomfort might not be the most important thing.

Indeed, Dario Maestripieri’s colleagues may have all kinds of reasons to engage him in uncomfortable discussions about his behavior that have nothing to do with conducting a squeaky-clean faculty search. Their reputations are intertwined, and leaving things alone rather than challenging Dario Maestripieri’s behavior may impact their own ability to attract graduate students or maintain the respect of undergraduates. These are things that matter to academic scientists — which means that Dario Maestripieri’s colleagues have an interest in pushing back for their own good and the good of the community.

The pushback, if it happens, is likely to be just as invisible publicly as any retaliation against job candidates for possibly sharing the screenshot of Dario Maestripieri’s Facebook posting. If positive effects are visible, it might make it seem less dangerous for members of the professional community to speak up about bad behavior when they see it. But if the outward appearance is that nothing has changed for Dario Maestripieri and his department, expect that there will be plenty of bad behavior that is not discussed in public because the career costs of doing so are just too high.

______
* This is not at all an issue about whether Dario Maestripieri is a sexist. This is an issue about the effects of the behavior, which have a disproportionate negative impact on women in the community. I do not know, or care, what is in the heart of the person who displays these behaviors, and it is not at all relevant to a discussion of how the behaviors affect the community.

** Given the number of his Facebook friends and their range of ages, career stages, etc., this doesn’t strike me as improbable. (At last check, I have 11 Facebook friends in common with Dario Maestripieri.)

Community responsibility for a safety culture in academic chemistry.

This is another approximate transcript of a part of the conversation I had with Chemjobber that became a podcast. This segment (from about 29:55 to 52:00) includes our discussion of what a just punishment might look like for PI Patrick Harran for his part in the Sheri Sangji case. From there, our discussion shifted to the question of how to make the culture of academic chemistry safer:

Chemjobber: One of the things that I guess I’ll ask is whether you think we’ll get justice out of this legal process in the Sheri Sangji case.

Janet: I think about this, I grapple with this, and about half the time when I do, I end up thinking that punishment — and figuring out the appropriate punishment for Patrick Harran — doesn’t even make my top-five list of things that should come out of all this. I kind of feel like a decent person should feel really, really bad about what happened, and should devote his life forward from here to making the conditions that enabled the accident that killed Sheri Sangji go away. But, you know, maybe he’s not a decent person. Who the heck can tell? And certainly, once you put things in the context where you have a legal team defending you against criminal charges — that tends to obscure the question of whether you’re a decent person or not, because suddenly you’ve got lawyers acting on your behalf in all sorts of ways that don’t look decent at all.

Chemjobber: Right.

Janet: I think the bigger question in my mind is how does the community respond? How does the chemistry department at UCLA, how does the larger community of academic chemistry, how do Patrick Harran’s colleagues at UCLA and elsewhere respond to all of this? I know that there are some people who say, “Look, he really fell down on the job safety-wise, and in terms of creating an environment for people working on his behalf, and someone died, and he should do jail time.” I don’t actually know if putting him in jail changes the conditions on the outside, and I’ve said that I think, in some ways, tucking him away in jail for however many months makes it easier for the people who are still running academic labs while he’s incarcerated to say, “OK, the problem is taken care of. The bad actor is out of the pool. Not a problem,” rather than looking at what it is about the culture of academic chemistry that has us devoting so little of our time and energy to making sure we’re doing this safely. So, if it were up to me, if I were the Queen of Just Punishment in the world of academic chemistry, I’ve said his job from here on out should be to be Safety in the Research Culture Guy. That’s what he gets to work on. He doesn’t get to go forward and conduct new research on some chemical question like none of this ever happened. Because something happened. Something bad happened, and the reason something bad happened, I think, is because of a culture in academic chemistry where it was acceptable for a PI not to pay attention to safety considerations until something bad happened. And that’s got to change.

Chemjobber: I think it will change. I should point out here that if your proposed punishment were enacted, it would be quite a punishment, because he wouldn’t get to choose what he worked on anymore, and that, to a great extent, is the joy of academic research, that it’s self-directed and that there is lots and lots of freedom. I don’t get to choose the research problems I work on, because I do it for money. My choices are more or less made by somebody else.

Janet: But they pay you.

Chemjobber: But they pay me.

Janet: I think I’d even be OK saying maybe Harran gets to do 50% of his research on self-directed research topics. But the other 50% is he has to go be an evangelist for changing how we approach the question of safety in academic research.

Chemjobber: Right.

Janet: He’s still part of the community, he’s still “one of us,” but he has to show us how we are treading dangerously close to the conditions that led to the really bad thing that happened in his lab, so we can change that.

Chemjobber: Hmm.

Janet: And not just make it an individual thing. I think all of the attempts to boil what happened down to all being the individual responsibility of the technician, or of the PI, or it’s a split between the individual responsibility of one and the individual responsibility of the other, totally misses the institutional responsibility, and the responsibility of the professional community, and how systemic factors that the community is responsible for failed here.

Chemjobber: Hmm.

Janet: And I think sometimes we need individuals to step up and say, part of me acknowledging my personal responsibility here is to point to the ways that the decisions I made within the landscape we’ve got — of what we take seriously, of what’s rewarded and what’s punished — led to this really bad outcome. I think that’s part of the power here is when academic chemists say, “I would be horrified if you jailed this guy because this could have happened in any of our labs,” I think they’re right. I think they’re right, and I think we have to ask how it is that conditions in these academic communities got to the point where we’re lucky that more people haven’t been seriously injured or killed by some of the bad things that could happen — that we don’t even know that we’re walking into because safety gets that short shrift.

Chemjobber: Wow, that’s heavy. I’m not sure whether there are industrial chemists whose primary job is to think about safety. Is part of the issue we have here that safety has been professionalized? We have industrial chemical hygienists and safety engineers. Every university has an EH&S [environmental health and safety] department. Does that make safety somebody else’s problem? And maybe if Patrick Harran were to become a safety evangelist, it would be a way of saying it’s our problem, and we all have to learn, we have to figure out a way to deal with this?

Janet: Yeah. I actually know that there exist safety officers in academic science departments, partly because I serve on some university committees with people who fill that role — so I know they exist. I don’t know how much the people doing research in those departments actually talk with those safety officers before something goes wrong, or how much of it goes beyond “Oh, there’s paperwork we need to make sure is filed in the right place in case there’s an inspection,” or something like that. But it strikes me that safety should be more collaborative. In some ways, wouldn’t it make for a more gripping seminar in a chemistry department, even just once a month in the weekly seminar slot, to have a safety roundtable for grad students working in the lab? “Here are the risks that we found out about in this kind of work,” or talking about unforeseen things that might happen, or how do you get started finding out about proper precautions as you’re beginning a new line of research? What’s your strategy for figuring that out? Who do you talk to? I honestly feel like this is a part of chemical education at the graduate level that is extremely underdeveloped. I know there’s been some talk about changing the undergraduate chemistry degree so that it includes something like a certificate program in chemical safety, and maybe that will fix it all. But I think the only thing that fixes it all is really making it part of the day to day lived culture of how we build new knowledge in chemistry, so that the safety around how that knowledge gets built is an ongoing part of the conversation.

Chemjobber: Hmm.

Janet: It’s not something we talk about once and then never again. Because that’s not how research works. We don’t say, “Here’s our protocol. We never have to revisit it. We’ll just keep running it until we have enough data, and then we’re done.”

Chemjobber: Right.

Janet: Show me an experiment that’s like that. I’ve never touched an experiment like that in my life.

Chemjobber: So, how many times do you remember your Ph.D. advisor talking to you about safety?

Janet: Zero. He was a really good advisor, he was a very good mentor, but essentially, how it worked in our lab was that the grad students who were further on would talk to the grad students who were newer about “Here’s what you need to be careful about with this reaction,” or “If you’ve got overflow of your chemical waste, here’s who to call to do the clean-up,” or “Here’s the paperwork you fill out to have the chemical waste hauled away properly.” So, the culture was that the people who were in the lab day to day were the keepers of the safety information, and luckily I joined a lab where those grad students were very forthcoming. They wanted to share that information. You didn’t have to ask because they offered it first. I don’t think it happens that way in every lab, though.

Chemjobber: I think you’re right. The thorniness of the problem of turning chemical safety into a day to day thing within the lab — within a specific group — is that you’re relying on a group of people who are transient, and they’re human, so some people really care about it and some tend not to care about it. I had an advisor who didn’t talk about safety all the time but did, on a number of occasions, pull us up short and say, “Hey, look, what you’re doing is dangerous!” I clearly remember specific admonishments: “Hey, that’s dangerous! Don’t do that!”

Janet: I suspect that may be more common in organic chemistry than in physical chemistry, which is my area. You guys work with stuff that seems to have a lot more potential to do interesting things in interesting ways. The other thing, too, is that in my research group we were united by a common set of theoretical approaches, but we all worked in different kinds of experimental systems which had different kinds of hazards. The folks doing combustion reactions had different things to worry about than me, working with my aqueous reaction in a flow-through reactor, while someone in the next room was working with enzymatic reactions. We were all over the map. Nothing that any of us worked with seemed to have real deadly potential, at least as we were running it, but who knows?

Chemjobber: Right.

Janet: And given that different labs have very different dynamics, that could make it hard to actually implement a desire to have safety become a part of the day to day discussions people are having as they’re building the knowledge. But this might really be a good place for departments and graduate training programs to step up. To say, “OK, you’ve got your PI who’s running his or her own fiefdom in the lab, but we’re the other professional parental unit looking out for your well-being, so we’re going to have these ongoing discussions about safety, and how to think about safety where the rubber hits the road, with graduate cohorts made up of students who are working in different labs.” Actually bringing those discussions out of the research group and the research group meeting might provide a space where people can become reflective about how things go in their own labs, can see something about how things are being done differently in other labs, and can start piecing together strategies — start thinking about what they want the practices to be like when they’re the grown-up chemists running their own labs. How do they want to make safety something that’s part of the job, not an add-on that’s being slapped on or something that’s being forgotten altogether?

Chemjobber: Right.

Janet: But of course, graduate training programs would have to care enough about that to figure out how to put the resources on it, to make it happen.

Chemjobber: I’m in profound sympathy with the people who would have to figure out how to do that. I don’t really know anything about the structure of a graduate training program other than, you know, “Do good work, and try to graduate sooner rather than later.” But I assume that in the last 20 to 30 years, there have been new mandates like “OK, you all need to have some kind of ethics component…”

Janet: — because ethics coursework will keep people from cheating! Except that’s an oversimplified equation. But ethics is a requirement they’re heaping on, and safety could certainly be another. The question is how to do that sensibly rather than making it clear that we’re doing this only because there’s a mandate from someone else that we do it.

Chemjobber: One of the things that I’ve always thought about in terms of how to better inculcate safety in academic labs is maybe to have training that happens every year, that takes a week. New first-years come in and you get run through some sort of a lab safety thing where you go and you set up the experiment and weird things are going to happen. It’s kind of an artificial environment where you have to go in and run a dangerous reaction as a drill that reminds you that there are real-world consequences. I think Chembark talked about how, at Caltech’s Safety Day, they brought out one of the lasers and put a hole through an apple. Since Paul is an organic chemist, I don’t think he does that very often, but his response was “Oh, if I enter one of these laser labs, I should probably have my safety glasses on.” There’s a limit to the effectiveness of that sort of stuff. You have to really, really think about how to design it, and a week out of a year is a long time, and who’s going to run it? I think your idea of the older students in the lab being the ones who really do a lot of the day to day safety stuff is important. What happens when there are no older students in the lab?

Janet: That’s right, when you’re the first cohort in the PI’s lab.

Chemjobber: Or, when there hasn’t been much funding for students and suddenly now you have funding for students.

Janet: And there’s also the question of going from a sparsely populated lab to a really crowded lab when you have the funding but you don’t suddenly have more lab space. And crowded labs have different kinds of safety concerns than sparsely populated labs.

Chemjobber: That’s very true.

Janet: I also wonder whether the “grown-up” chemists, the postdocs and the PIs, ought to be involved in some sort of regular safety … I guess casting it as “training” is likely to get people’s hackles up, and they’re likely to say, “I have even less time for this than my students do.”

Chemjobber: Right.

Janet: But at the same time, pretending that they learned everything they need to know about safety in grad school? Really? Really you did? When we’re talking now about how maybe the safety training for graduate students is inadequate, you magically got the training that tells you everything you need to know from here on out about safety? That seems weird. And also, presumably, the risks of certain kinds of procedures and certain kinds of reagents — that’s something about which our knowledge continues to increase as well. So, finding ways to keep up on that, to come up with safer techniques and better responses when things do go wrong — some kind of continuing education, continuing involvement with that. If there was a way to do it to include the PIs and the people they’re employing or training, to engage them together, maybe that would be effective.

Chemjobber: Hmm.

Janet: It would at least make it seem less like, “This is education we have to give our students, this is one more requirement to throw on the pile, but we wouldn’t do it if we had the choice, because it gets in the way of making knowledge.” Making knowledge is good. I think making knowledge is important, but we’re human beings making knowledge and we’d like to live long enough to appreciate that knowledge. Graduate students shouldn’t be consumable resources in the knowledge-building the same way that chemical reagents are.

Chemjobber: Yeah.

Janet: Because I bet you the disposal paperwork on graduate students is a fair bit more rigorous than for chemical waste.

Why does lab safety look different to chemists in academia and chemists in industry?

Here’s another approximate transcript of the conversation I had with Chemjobber that became a podcast. In this segment (from about 19:30 to 29:30), we consider how reactions to the Sheri Sangji case sound different when they’re coming from academic chemists than when they’re coming from industry, and we spin some hypotheses about what might be going on behind those differences:

Chemjobber: I know that you wanted to talk about the response of industrial chemists versus academic chemists to the Sheri Sangji case.

Janet: This is one of the things that jumps out at me in the comment threads on your blog posts about the Sangji case. (Your commenters, by the way, are awesome. What a great community of commenters engaging with this stuff.) It really does seem that the commenters who are coming from industry are saying, “These conditions that we’re hearing about in the Harran lab (and maybe in academic labs in general) are not good conditions for producing knowledge as safely as we can.” And the academic commenters are saying, “Oh come on, it’s like this everywhere! Why are you going to hold this one guy responsible for something that could have happened to any of us?” It shines a light on something interesting about how academic labs building knowledge function really differently from industrial labs building knowledge.

Chemjobber: Yeah, I don’t know. It’s very difficult for me to separate out whether it’s culture or law or something else. Certainly I think there’s a culture aspect of it, which is that every large company and most small companies really try hard to have some sort of a safety culture. Whether or not they actually stick to it is a different story, but what I’ve seen is that the bigger the company, the more it really matters. Part of it, I think, is that people are older and a little bit wiser, they’re better at looking over each other’s shoulders and saying, “What are you doing over there?” and “So, you’re planning to do that? That doesn’t sound like a great idea.” It seems like there’s less of that in academia. And then there’s the regulatory aspect of it. Industrial chemists are workers, the companies they’re working for are employers, and there’s a clear legal aspect to that. Even as under-resourced as OSHA is, there is an actual legal structure prepared to deal with accidents. If the Sangji incident had happened at a very large company, most people think that heads would have rolled, letters would have been placed in evaluation files, and careers would be over.

Janet: Or at least the lab would probably have been shut down until a whole bunch of stuff was changed.

Chemjobber: But in academia, it looks like things are different.

Janet: I have some hunches that perhaps support some of your hunches here about where the differences are coming from. First of all, the set-up in academia assumes radical autonomy on the part of the PI about how to run his or her lab. Much of that is for the good as far as allowing different ways to tackle the creative problems about how to ask the scientific questions to better shake loose the piece of knowledge you’re trying to shake loose, or allowing a range of different work habits that might be successful for these people you’re training to be grown-up scientists in your scientific field. And along with that radical autonomy — your lab is your fiefdom — in a given academic chemistry department you’re also likely to have a wide array of chemical sub-fields that people are exploring. So, depending on the size of your department, you can’t necessarily count on there being more than a couple other PIs in the department who really understand your work well enough that they would have deep insight into whether what you’re doing is safe or really dangerous. It’s a different kind of resource that you have available right at hand — there’s maybe a different kind of peer pressure that you have in your immediate professional and work environment acting on the industrial chemist than on the academic chemist. I think that probably plays some role in why PIs in academia maybe aren’t as up on potential safety risks of new work they’re doing as they might be otherwise. And then, of course, there’s the really different kinds of rewards people are working for in industry versus academia, and how the whole tenure race ends up asking more and more of people with the same 24 hours in the day as anyone else. So, people on the tenure track start asking, “What are the things I’m really rewarded for? Because obviously, if I’m going to succeed, that’s where I have to focus my attention.”

Chemjobber: It’s funny how the “T” word keeps coming up.

Janet: By the same token, in a university system that has consistently tried to make it easier to fire faculty at whim because they’re expensive, I sort of see the value of tenure. I’m not at all arguing that tenure is something that academic chemists don’t need. But, it may be that the particulars of how we evaluate people for tenure are incentivizing behaviors that are not helping the safety of the people building the knowledge or the well-being of the people who are training to be grown-ups in these professional communities.

Chemjobber: That’s right. We should just say specifically that in this particular case, Patrick Harran already had tenure, and I believe he is still a chaired professor at UCLA.

Janet: I think maybe the thing to point out is that some of these expectations, some of these standard operating procedures within disciplines in academia, are heavily shaped by the things that are rewarded for tenure, and then for promotion to full professor, and then whatever else. So, even if you’re tenured, you’re still soaking in that same culture that is informing the people who are trying to get permission to stay there permanently rather than being thanked for their six years of service and shown the door. You’re still soaking in that culture that says, “Here’s what’s really important.” Because if something else was really important, then by golly that’s how we’d be choosing who gets to stay here for reals and who’s just passing through.

Chemjobber: Yes.

Janet: I don’t know as much about the typical life cycle of the employee in industrial chemistry, but my sense is that maybe the fact that grad students and postdocs and, to some extent, technicians are sort of transient in the community of academic chemistry might make a difference as well — that they’re seen as people who are passing through, and that the people who are more permanent fixtures in that world either forget that they come in not knowing all the stuff that the people who have been there for a long, long time know, or they’re sort of making a calculation, whether they realize it or not, about how important it is to convey some of this stuff they know to transients in their academic labs.

Chemjobber: Yeah, I think that’s true. Numerically, there’s certainly a lot less turnover in industry than there is in academic labs.

Janet: I would hope so!

Chemjobber: Especially from the bench-worker perspective. It’s unfortunate that layoffs happen (topic for another podcast!), but that seems to be the main source of turnover in industry these days.

Dueling narratives: what’s the job market like for scientists and is a Ph.D. worth it?

At the very end of August, Slate posted an essay by Daniel Lametti taking up, yet again, what the value of a science Ph.D. is in a world where the pool of careers for science Ph.D.s in academia and industry is (maybe) shrinking. Lametti, who is finishing up a Ph.D. in neuroscience, expresses optimism that the outlook is not so bleak, reading the tea leaves of some of the available survey data to conclude that unemployment is not much of a problem for science Ph.D.s. Moreover, he points to the rewards of the learning that happens in a Ph.D. program as something that might be valued in its own right rather than as a mere instrument to make a living later. (This latter argument will no doubt sound familiar.)

Of course, Chemjobber had to rain on the parade of this youthful optimism. (In the blogging biz, we call that “due diligence”.) Chemjobber critiques Lametti’s reading of the survey data (and points out some important limitations with those data), questions his assertion that a science Ph.D. is a sterling credential to get you into all manner of non-laboratory jobs, reiterates that the opportunity costs of spending years in a Ph.D. program are non-negligible, and reminds us that unemployed Ph.D. scientists do exist.

Beryl Benderly mounts similar challenges to Lametti’s take on the job market at the Science Careers blog.

You’ve seen this disagreement before. And, I reckon, you’re likely to see it again.

But this time, I feel like I’m starting to notice what may be driving these dueling narratives about how things are for science Ph.D.s. It’s not just an inability to pin down the facts about the job markets, or the employment trajectories of those science Ph.D.s. In the end, it’s not even a deep disagreement about what may be valuable in economic or non-economic ways about the training one receives in a science Ph.D. program.

Where one narrative focuses on the overall trends within STEM fields, the other focuses on individual experiences. And, it strikes me that part of what drives the dueling narratives is what feels like a tension between voicing an individual view it may be helpful to adopt for one’s own well-being and acknowledging the existence of systemic forces that tend to create unhelpful outcomes.

Of course, part of the problem in these discussions may be that we humans generally have a hard time reconciling overall trends with individual experiences. Even if the employment outlook were in fact very, very good for people in your field with Ph.D.s, if you have one of those Ph.D.s and you can’t find a job with it, the employment situation is not good for you. Similarly, if you’re a person who can find happiness (or at least satisfaction) in pretty much whatever situation you’re thrown into, a generally grim job market in your field may not bug you very much.

But I think the narratives keep missing each other because of something other than not being able to reconcile the pooled labor data with our own anecdata. I think, at their core, the two narratives are trying to do different things.

* * *

I’ve written before about some of what I found valuable in my chemistry Ph.D. program, including the opportunity to learn how scientific knowledge is made by actually making some. That’s not to say that the experience is without its challenges, and it’s hard for me to imagine taking on those challenges without a burning curiosity, a drive to go deeper than sitting in a classroom and learning the science that others have built.

It can feel a bit like a calling — like what I imagine people learning how to be artists or musicians must feel. And, if you come to this calling in a time where you know the job prospects at the other end are anything but certain, you pretty much have to do the gut-check that I imagine artists and musicians do, too:

Am I brave enough to try this, even though I know there’s a non-negligible chance that I won’t be able to make a career out of it? Is it worth it to devote these years of toil and study, with long hours and low salary, to immersing myself in this world, even knowing I might not get to stay in it?

A couple quick caveats here: I suspect it’s much easier to play music or make art “on the side” after you get home from the job that pays for your food but doesn’t feed your soul than it is to do science on the side. (Maybe this points to the need for community science workspaces?) And, it’s by no means clear that those embarking on Ph.D. training in a scientific field are generally presented with realistic expectations about the job market for Ph.D.s in their field.

Despite the fact that my undergraduate professors talked up a supposed shortage of Ph.D. chemists (one that was not reflected in the labor statistics less than a year later), I somehow came to my own Ph.D. training with the attitude that it was an open question whether I’d be able to get a job as a chemist in academia or industry or a national lab. I knew I was going to leave my graduate program with a Ph.D., and I knew I was going to work.

The rent needed to be paid, and I was well acclimated to a diet that alternated between lentils and ramen noodles, so I didn’t see myself holding out for a dream job with a really high salary and luxe benefits. A career was something I wanted, but the more pressing need was a paycheck.

Verily, by the time I completed my chemistry Ph.D., this was a very pressing need. It’s true that students in a chemistry Ph.D. program are “paid to go to school,” but we weren’t paid much. I kept my head, and credit card balance, mostly above water by being a cyclist rather than a driver, saving money for registration, insurance, parking permits, and gas that my car-owning classmates had to pay. But it took two veterinary emergencies, one knee surgery, and ultimately the binding and microfilming fee I had to pay when I submitted the final version of my dissertation to completely wipe out my savings.

I was ready to teach remedial arithmetic at a local business college for $12 an hour (and significantly less than 40 hours a week) if it came to that. Ph.D. chemist or not, I needed to pay the bills.

Ultimately, I did line up a postdoctoral position, though I didn’t end up taking it because I had my epiphany about needing to become a philosopher. When I was hunting for postdocs, though, I knew that there was still no guarantee of a tenure track job, or a gig at a national lab, or a job in industry at the end of the postdoc. I knew plenty of postdocs who were still struggling to find a permanent job. Even before my philosophy epiphany, I was thinking through other jobs I was probably qualified to do that I wouldn’t hate — because I kind of assumed it would be hard, and that the economy wouldn’t feel like it owed me anything, and that I might be lucky, but I also might not be. Seeing lots of really good people have really bad luck on the job market can do that to a person.

My individual take on the situation had everything to do with keeping me from losing it. It’s healthy to be able to recognize that bad luck is not the same as the universe (or even your chosen professional community) rendering the judgment that you suck. It’s healthy to be able to weather the bad luck rather than be crushed by it.

But, it’s probably also healthy to recognize when there may be systemic forces making it a lot harder than it needs to be to join a professional community for the long haul.

* * *

Indeed, the discussion of the community-level issues in scientific fields is frequently much less optimistic than the individual-level pep-talks people give themselves or each other.

What can you say about a profession that asks people who want to join it to sink as much as a decade into graduate school, and maybe another decade into postdoctoral positions (jobs defined as not permanent) just to meet the training prerequisite for desirable permanent jobs that may not exist in sufficient numbers to accommodate all the people who sacrificed maybe two decades at relatively low salaries for their level of education, who likely had to uproot and change their geographical location at least once, and who succeeded at the research tasks they were asked to take on during that training? And what can you say about that profession when the people asked to embark on this gamble aren’t given anything like a realistic estimate of their likelihood of success?

Much of what people do say frames this as a problem of supply and demand. There are just too many qualified candidates for the available positions, at least from the point of view of the candidates. From the point of view of a hiring department or corporation, the excess of available workers may seem like less of a problem, driving wages downward and making it easier to sell job candidates on positions in “geographically unattractive” locations.

Things might get better for the job seeker with a Ph.D. if the supply of science Ph.D.s were adjusted downward, but this would disrupt another labor pool, graduate students working to generate data for PIs in their graduate labs. Given the “productivity” expectations on those PIs, imposed by institutions and granting agencies, reducing student throughput in Ph.D. programs is likely to make things harder for those lucky enough to have secured tenure track positions in the first place.

The narrative about the community-level issues takes on a different tone depending on who’s telling it, and with which end of the power gradient they identify. Do Ph.D. programs depend on presenting a misleading picture of job prospects and quality of life for Ph.D. holders to create the big pools of student labor on which they depend? Do PIs and administrators running training programs encourage the (mistaken) belief that the academic job market is a perfect meritocracy, and that each new Ph.D.’s failure will be seen as hers alone? Are graduate students themselves to blame for not considering the employment data before embarking on their Ph.D. programs? Are they being spoiled brats when they should recognize that their unemployment numbers are much, much lower than for the population as a whole, that most employed people have nothing like tenure to protect their jobs, and indeed that most people don’t have jobs that have anything to do with their passions?

So the wrangling continues over whether things are generally good or generally bad for Ph.D. scientists, over whether the right basis for evaluating this is the life Ph.D. programs promise when they recruit students (which maybe they are only promising to the very best — or the very lucky) or the life most people (including large numbers of people who never finished college, or high school) can expect, over whether this is a problem that ought to be addressed or simply how things are.

* * *

The narratives here feel like they’re in conflict because they’re meant to do different things.

The individual-level narrative is intended to buoy the spirits of the student facing adversity, to find some glimmers of victory that can’t be taken away even by a grim employment market. It treats the background conditions as fixed, or at least as something the individual cannot change; what she can control is her reaction to them.

It’s pretty much the Iliad, but with lab coats.

The community-level narrative instead strives for a more accurate accounting of what all the individual trajectories add up to, focusing not on who has experienced personal growth but on who is employed. Here too, there is a striking assumption that The Way Things Are is a stable feature of the system, not something individual action could change — or that individual members of the community should feel any responsibility for changing.

And this is where I think there’s a need for another narrative, one with the potential to move us beyond the disagreement and disgruntlement we see each time the other two collide.

Professional communities, after all, are made up of individuals. People, not the economy, make hiring decisions. Members of professional communities make decisions about how they’re going to treat each other, and in particular about how they will treat the most vulnerable members of their community.

Graduate students are not receiving a mere service or commodity from their Ph.D. programs (“Would you like to supersize that scientific education?”). They are entering a relationship resembling an apprenticeship with the members of the professional community they’re trying to join. Arguably, this relationship means that the professional community has some responsibility for the ongoing well-being of those new Ph.D.s.

Here, I don’t think this is a responsibility to infantilize new Ph.D.s, to cover them with bubble-wrap or to create for them a sparkly artificial economy full of rainbows and unicorns. But they probably have a duty to provide help when they can.

Maybe this help would come in the form of showing compassion, rather than claiming that the people who deserve to be scientists will survive the rigors of the job market and that those who don’t weren’t meant to be in science. Maybe it would come by examining one’s own involvement in a system that defines success too narrowly, or that treats Ph.D. students as a consumable resource, or that fails to help those students cultivate a broad enough set of skills to ensure that they can find some gainful employment. Maybe it would come from professional communities finding ways to include as real members people they have trained but who have not been able to find employment in that profession.

Individuals make the communities. The aggregate of the decisions the communities make creates the economic conditions and the quality of life issues. Treating current conditions — including current ways of recruiting students or describing the careers and lives they ought to expect at the other end of their training — as fixed for all time is a way of ignoring how individuals and institutions are responsible for those conditions. And, it doesn’t do anything to help change them.

It’s useful to have discussions of how to navigate the waters of The Way Things Are. It’s also useful to try to get accurate data about the topology of those waters. But these discussions shouldn’t distract us from serious discussions of The Way Things Could Be — and of how scientific communities can get there from here.