Are you saying I can’t go home until we cure cancer? Obligations of scientists (part 7)

In the previous post in this series, we examined the question of what scientists who are trained with significant financial support from the public (which, in the U.S., means practically every scientist trained at the Ph.D. level) owe to the public providing that support. The focus there was personal: I was trained to be a physical chemist, free of charge due to the public’s investment, but I stopped making new scientific knowledge in 1994, shortly after my Ph.D. was conferred.

From a certain perspective, that makes me a deadbeat, a person who has fallen down on her obligations to society.

Maybe that perspective strikes you as perverse, but there are working scientists who seem to share it.

Consider this essay by cancer researcher Scott E. Kern raising the question of whether cancer researchers at Johns Hopkins who don’t come into the lab on a Sunday afternoon have lost sight of their obligations to people with cancer.

Kern wonders if scientists who manage to fit their laboratory research into the confines of a Monday-through-Friday work week might lack a real passion for scientific research. He muses that full weekend utilization of their modern cancer research facility might waste less money (in terms of facilities and overhead, salaries and benefits). He suggests that researchers who are not hard at work in the lab on a weekend are falling down on their moral duty to cure cancer as soon as humanly possible.

The unsupported assumptions in Kern’s piece are numerous (and far from novel). Do we know that having each research scientist spend more hours in the lab increases the rate of scientific returns? Or might there plausibly be a point of diminishing returns, where additional lab-hours produce no appreciable return? Where’s the economic calculation to consider the potential damage to the scientists from putting in 80 hours a week (to their cognitive powers, their health, their personal relationships, their experience of a life outside of work, maybe even their enthusiasm for science)? After all, lots of resources are invested in educating and training researchers — enough so that one wouldn’t want to damage those researchers on the basis of an (unsupported) hypothesis offered in the pages of Cancer Biology & Therapy.

And while Kern is doing economic calculations, he might want to consider the impact on facilities of research activity proceeding full-tilt, 24/7. Without some downtime, equipment and facilities might wear out faster than they would otherwise.

Nowhere here does Kern consider the option of hiring more researchers to work 40-hour weeks, instead of pressuring the existing research workforce into spending 60, 80, or 100 hours a week in the lab.

These researchers might still end up bringing work home (if they ever get a chance to go home).

Kern might dismiss this suggestion on purely economic grounds — organizations are more likely to want to pay for fewer employees (with benefits) who can work more hours than to pay to have the same number of hours of work done by more employees. He might also dismiss it on the basis that the people who really have the passion needed to do the research to cure cancer will not prioritize anything else in their lives above doing that research and finding that cure.

But one assumes passion of the sort Kern seems to have in mind would be the kind of thing that would drive researchers to the lab no matter what, even in the face of long hours, poor pay, grinding fatigue. If that is so, it’s not clear how the problem is solved by browbeating researchers without this passion into working more hours because they owe it to cancer patients. Indeed, Kern might consider, in light of the relative dearth of researchers with passion sufficient to fill the cancer research facilities on weekends, the necessity of making use of the research talents and efforts of people who don’t want to spend 60 hours a week in the lab. Kern’s piece suggests he’d have a preference for keeping such people out of the research ranks (despite the significant societal investment made in their scientific training), but by his own account there would hardly be enough researchers left in that case to keep research moving forward.

Might not these conditions prompt us to reconsider whether the received wisdom of scientific mentors is always so wise? Wouldn’t this be a reasonable place to reevaluate the strategy for accomplishing the grand scientific goal?

And Kern does not even consider a pertinent competing hypothesis, that people often have important insights into how to move research forward in the moments when they step back and allow their minds to wander. Perhaps less time away from one’s project means fewer of these insights — which, on its face, would be bad for the project of curing cancer.

The strong claim at the center of Kern’s essay is an ethical claim about what researchers owe cancer patients, about what cancer patients can demand from researchers (or any other members of society), and on what basis.

He writes:

During the survey period, off-site laypersons offer comments on my observations. “Don’t the people with families have a right to a career in cancer research also?” I choose not to answer. How would I? Do the patients have a duty to provide this “right”, perhaps by entering suspended animation? Should I note that examining other measures of passion, such as breadth of reading and fund of knowledge, may raise the same concern and that “time” is likely only a surrogate measure? Should I note that productive scientists with adorable family lives may have “earned” their positions rather than acquiring them as a “right”? Which of the other professions can adopt a country-club mentality, restricting their activities largely to a 35–40 hour week? Don’t people with families have a right to be police? Lawyers? Astronauts? Entrepreneurs?

Kern’s formulation of this interaction of rights and duties strikes me as odd. Essentially, he’s framing this as a question of whether people with families have a right to a career in cancer research, rather than whether cancer researchers have a right to have families (or any other parts of their lives that exist beyond their careers). Certainly, there have been those who have treated scientific careers as vocations requiring many sacrifices, who have acted as if there is a forced choice between having a scientific career and having a family (unless one has a wife to tend to that family).

We should acknowledge, however, that having a family life is just one way to “have a life.” Therefore, let’s consider the question this way: Do cancer researchers have a right to a life outside of work?

Kern’s suggestion is that this “right,” when exercised by researchers, is something that cancer patients end up paying for with their lives (unless they go into suspended animation while cancer researchers are spending time with their families or puttering around their gardens).

The big question, then, is what the researcher’s obligations are to the cancer patient — or to society in general.

If we’re to answer that question, I don’t think it’s fair to ignore the related questions: What are society’s obligations to the cancer patient? What are society’s obligations to researchers? And what are the cancer patient’s obligations in all of this?

We’ve already spent some time discussing scientists’ putative obligation to repay society’s investment in their training:

  • society has paid for the training the scientists have received (through federal funding of research projects, training programs, etc.)
  • society has pressing needs that can best (only?) be addressed if scientific research is conducted
  • those few members of society who have specialized skills that are needed to address particular societal needs have a duty to use those skills to address those needs (i.e., if you can do research and most other people can’t, then to the extent that society as a whole needs the research that you can do, you ought to do it)

Arguably, finding cures and treatments for cancer would be among those societal needs.

Once again the Spider-Man ethos rears its head: with great power comes great responsibility, and scientific researchers have great power. If cancer researchers won’t help find cures and treatments for cancer, who else can?

Here, I think we should pause to note that there is probably an ethically relevant difference between offering help and doing everything you possibly can. It’s one thing to donate a hundred bucks to charity and quite another to give all your money and sell all your worldly goods in order to donate the proceeds. It’s a different thing for a healthy person to donate one kidney than to donate both kidneys plus the heart and lungs.

In other words, there is help you can provide, but there seems also to be a level of help that it would be wrong for anyone else to demand of you. Possibly there is also a level of help that it would be wrong for you to provide even if you were willing to do so because it harms you in a fundamental and/or irreparable way.

And once we recognize that such a line exists between the maximum theoretical help you could provide and the help you are obligated to provide, I think we have to recognize that the needs of cancer patients do not — and should not — trump every other interest of other individuals or of society as a whole. If a cancer patient cannot lay claim to the heart and lungs of a cancer researcher, then neither can that cancer patient lay claim to every moment of a cancer researcher’s time.

Indeed, in this argument of duties that spring from ability, it seems fair to ask why it is not the responsibility of everyone who might get cancer to train as a cancer researcher and contribute to the search for a cure. Why should tuning out in high school science classes, or deciding to pursue a degree in engineering or business or literature, excuse one from responsibility here? (And imagine how hard it’s going to be to get kids to study for their AP Chemistry or AP Biology classes when word gets out that their success is setting them up for a career where they ought never to take a day off, go to the beach, or cultivate friendships outside the workplace. Nerds can connect the dots.)

Surely anyone willing to argue that cancer researchers owe it to cancer patients to work the kind of hours Kern seems to think would be appropriate ought to be asking what cancer patients — and the precancerous — owe here.

Does Kern think researchers owe all their waking hours to the task because there are so few of them who can do this research? Reports from job seekers over the past several years suggest that there are plenty of other trained scientists who could do this research but have not been able to secure employment as cancer researchers. Some may be employed in other research fields. Others, despite their best efforts, may not have secured research positions at all. What are their obligations here? Ought those employed in other research areas to abandon their current research to work on cancer, departments and funders be damned? Ought those who are not employed in a research field to be conducting their own cancer research anyway, without benefit of institution or facilities, research funding or remuneration?

Why would we feel scientific research skills, in particular, should make the individuals who have them so subject to the needs of others, even to the exclusion of their own needs?

Verily, if scientific researchers and the special skills they have are so very vital to providing for the needs of other members of society — vital enough that people like Kern feel it’s appropriate to criticize them for wanting any time out of the lab — doesn’t society owe it to its members to give researchers every resource they need for the task? Maybe even to create conditions in which everyone with the talent and skills to solve the scientific problems society wants solved can apply those skills and talents — and live a reasonably satisfying life while doing so?

My hunch is that most cancer patients would actually be less likely than Kern to regard cancer researchers as of merely instrumental value. I’m inclined to think that someone fighting a potentially life-threatening disease would be reluctant to deny someone else the opportunity to spend time with loved ones or to savor an experience that makes life worth living. To the extent that cancer researchers do sacrifice some aspects of the rest of their life to make progress on their work, I reckon most cancer patients appreciate these sacrifices. If more is needed for cancer patients, it seems reasonable to place this burden on society as a whole — teeming with potential cancer patients and their relatives and friends — to enable more (and more effective) cancer research to go on without drastically restricting the lives of the people qualified to conduct it, or writing off their interests in their own human flourishing.

As a group, scientists do have special capabilities with which they could help society address pressing problems. To the extent that they can help society address those problems, scientists probably should — not least because scientists are themselves part of society. But despite their special powers, scientists are still human beings with needs, desires, interests, and aspirations. A society that asks scientists to direct their skills and efforts towards solving its problems also has a duty to give scientists the same opportunities to flourish that it provides for its members who happen not to be scientists.

In the next post in this series, I’ll propose a less economic way to think about just what society might be buying when it invests in the training of scientists. My hope is that this will give us a richer and more useful picture of the obligations scientists and non-scientists have to each other as they are sharing a world.

* * * * *
Ancestors of this post first appeared on Adventures in Ethics and Science
_____

Kern, S. E. (2010). Where’s the passion? Cancer Biology & Therapy, 10(7), 655-657.
_____
Posts in this series:

Questions for the non-scientists in the audience.

Questions for the scientists in the audience.

What do we owe you, and who’s “we” anyway? Obligations of scientists (part 1)

Scientists’ powers and ways they shouldn’t use them: Obligations of scientists (part 2)

Don’t be evil: Obligations of scientists (part 3)

How plagiarism hurts knowledge-building: Obligations of scientists (part 4)

What scientists ought to do for non-scientists, and why: Obligations of scientists (part 5)

What do I owe society for my scientific training? Obligations of scientists (part 6)

Are you saying I can’t go home until we cure cancer? Obligations of scientists (part 7)

What do I owe society for my scientific training? Obligations of scientists (part 6)

One of the dangers of thinking hard about your obligations is that you may discover one that you’ve fallen down on. As we continue our discussion of the obligations of scientists, I put myself under the microscope and invite you to consider whether I’ve incurred a debt to society that I have failed to pay back.

In the last post in this series, we discussed the claim that those in our society with scientific training have a positive duty to conduct scientific research in order to build new scientific knowledge. The source of that putative duty is two-fold. On the one hand, it’s a duty that flows from the scientist’s abilities in the face of societal needs: if people trained to build new scientific knowledge won’t build the new scientific knowledge needed to address pressing problems (like how to feed the world, or hold off climate change, or keep us all from dying from infectious diseases, or what have you), we’re in trouble. On the other hand, it’s a duty that flows from the societal investment that nurtures the development of these special scientific abilities: in the U.S., it’s essentially impossible to get scientific training at the Ph.D. level that isn’t subsidized by public funding. Public funding is used to support the training of scientists because the public expects a return on that investment in the form of grown-up scientists building knowledge which will benefit the public in some way. By this logic, people who take advantage of that heavily subsidized scientific training but don’t go on to build scientific knowledge when they are fully trained are falling down on their obligation to society.

People like me.

From September 1989 through December 1993, I was in a Ph.D. program in chemistry. (My Ph.D. was conferred January 1994.)

As part of this program, I was enrolled in graduate coursework (two chemistry courses per quarter for my first year, plus another chemistry course and three math courses, for fun, during my second year). I didn’t pay a dime for any of this coursework (beyond buying textbooks and binder paper and writing implements). Instead, tuition was fully covered by my graduate tuition stipend (which also covered “units” in research, teaching, and department seminar that weren’t really classes but appeared on our transcripts as if they were). Indeed, beyond the tuition reimbursement I was paid a monthly stipend of $1000, which seemed like a lot of money at the time (despite the fact that more than a third of it went right to rent).

I was also immersed in a research lab from January 1990 onward. Working in this lab was the heart of my training as a chemist. I was given a project to start with — a set of empirical questions to try to answer about a far-from-equilibrium chemical system that one of the recently-graduated students before me had been studying. I had to digest a significant chunk of experimental and theoretical literature to grasp why the questions mattered and what the experimental challenges in answering them might be. I had to assess the performance of the experimental equipment we had on hand, spend hours with calibrations, read a bunch of technical manuals, disassemble and reassemble pumps, write code to drive the apparatus and to collect data, identify experimental constraints that were important to control (and that, strangely, were not identified as such in the experimental papers I was working from), and also, when I determined that the chemical system I had started with was much too fussy to study with the equipment the lab could afford, to identify a different chemical system that I could use to answer similar questions and persuade my advisor to approve this new plan.

In short, my time in the lab had me learning how to build new knowledge (in a particular corner of physical chemistry) by actually building new knowledge. The earliest stages of my training had me juggling the immersion into research with my own coursework and with teaching undergraduate chemistry students as a lab instructor and teaching assistant. Some weeks, this meant I was learning less about how to make new scientific knowledge than I was about how to tackle my problem sets or how to explain buffers to pre-meds. Past the first year of the program, though, my waking hours were dominated by getting experiments designed, collecting loads of data, and figuring out what it meant. There were significant stretches of time during which I got into the lab by 5 AM and didn’t leave until 8 or 9 PM, and the weekend days when I didn’t go into the lab were usually consumed with coding, catching up on relevant literature, or drafting manuscripts or thesis chapters.

Once, for fun, some of us grad students did a back-of-the-envelope calculation of our hourly wages. It was remarkably close to the minimum wage I had been paid as a high school student in 1985. Still, we were getting world-class scientific training, for free! We paid with the sweat of our brows, but wouldn’t we have to put in that time and effort to learn how to make scientific knowledge anyway? Sure, we graduate students did the lion’s share of the hands-on teaching of undergraduates in our chemistry department (undergraduates who were paying a significant tuition bill), but we were learning, from some of the best scientists in the world, how to be scientists!
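For readers curious how such a back-of-the-envelope calculation might go, here is a minimal sketch. The $1,000 monthly stipend comes from the post; the weekly hours are an assumption for illustration (the post describes 5 AM to 8 or 9 PM days plus weekend work), and the 1985 U.S. federal minimum wage was $3.35 per hour.

```python
# Back-of-the-envelope effective hourly "wage" for a grad student.
# The $1,000/month stipend is from the post; hours_per_week is an assumed
# figure for illustration only.

monthly_stipend = 1000.00      # dollars per month (from the post)
hours_per_week = 70            # assumed typical lab + coursework hours
weeks_per_month = 52 / 12      # roughly 4.33 weeks per month

hourly_rate = monthly_stipend / (hours_per_week * weeks_per_month)
print(f"Effective hourly rate: ${hourly_rate:.2f}/hour")
# With these assumptions, this comes out to roughly $3.30/hour —
# close to the 1985 U.S. federal minimum wage of $3.35/hour.
```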

Having gotten what amounts to a full-ride for that graduate training, due in significant part to public investment in scientific training at the Ph.D. level, shouldn’t I be hunkered down somewhere working to build more chemical knowledge to pay off my debt to society?

Do I have any good defense to offer for the fact that I’m not building chemical knowledge?

For the record, when I embarked on Ph.D. training in chemistry, I fully expected to be an academic chemist when I grew up. I really did imagine that I’d have a long career building chemical knowledge, training new chemists, and teaching chemistry to an audience that included some future scientists and some students who would go on to do other things but who might benefit from a better understanding of chemistry. Indeed, when I was applying to graduate programs, my chemistry professors were talking up the “critical shortage” of Ph.D. chemists. (By January of my first year in graduate school, I was reading reports that there were actually something like 30% more Ph.D. chemists than there were jobs for Ph.D. chemists, but a first-year grad student is not necessarily freaking out about the job market while she is wrestling with her experimental system.) I did not embark on a chemistry Ph.D. as a collectable. I did not set out to be a dilettante.

In the course of the research that was part of my Ph.D. training, I actually built some new knowledge and shared it with the public, at least to the extent of publishing it in journal articles (four of them, an average of one per year). It’s not clear what the balance sheet would say about this rate of return on the public’s investment in my scientific training — nor whether most taxpayers would judge the knowledge I built (about the dynamics of far-from-equilibrium chemical reactions and about ways to devise useful empirical tests of proposed reaction mechanisms) as useful knowledge.

Then again, no part of how our research was evaluated in grad school was framed in terms of societal utility. You might try to describe how your research had broader implications that someone outside your immediate subfield could appreciate if you were writing a grant to get the research funded, but solving society’s pressing scientific problems was not the sine qua non of the research agendas we were advancing for our advisors or developing for ourselves.

As my training was teaching me how to conduct serious research in physical chemistry, it was also helping me to discover that my temperament was maybe not so well suited to life as a researcher in physical chemistry. I found, as I was struggling with a grant application that asked me to describe the research agenda I expected to pursue as an academic chemist, that the questions that kept me up at night were not fundamentally questions about chemistry. I learned that no part of me was terribly interested in the amount of grant-writing and lab administration that would have been required of me as a principal investigator. Looking at the few women training me at the Ph.D. level, I surmised that I might have to delay or skip having kids altogether to survive academic chemistry — and that the competition for those faculty jobs where I’d be able to do research and build new knowledge was quite fierce.

Plausibly, had I been serious about living up to my obligation to build new knowledge by conducting research, I could have been a chemist in industry. As I was finishing up my Ph.D., the competition for industry jobs for physical chemists like me was also pretty intense. What I gathered as I researched and applied for industry jobs was that I didn’t really like the culture of industry. And, while working in industry would have been a way for me to conduct research and build new knowledge, I might have ended up spending more time solving the shareholders’ problems than solving society’s problems.

If I wasn’t going to do chemical research in an academic career and I wasn’t going to do chemical research in an industrial job, how should I pay society back for the publicly-supported scientific training I received? Should I be building new scientific knowledge on my own time, in my own garage, until I’ve built enough that the debt is settled? How much new knowledge would that take?

The fact is, none of us Ph.D. students seemed to know at the time that public money was making it possible for us to get graduate training in chemistry without paying for that training. Nor was there an explicit contract we were asked to sign as we took advantage of this public support, agreeing to work for a certain number of years upon the completion of our degrees as chemists serving the public’s interests. Rather, I think most of us saw an opportunity to pursue a subject we loved and to get the preparation we would need to become principal investigators in academia or industry if we decided to pursue those career paths. Most of us probably didn’t know enough about what those career paths would be like to have told you at the beginning of our Ph.D. training whether those career paths would suit our talents or temperaments — that was part of what we were trying to find out by pursuing graduate studies. And practically, many of us would not have been able to find out if we had had to pay the costs of our Ph.D. training ourselves.

If no one who received scientific training subsidized by the public went on to build new scientific knowledge, this would surely be a problem for society. But, do we want to say that everyone who receives such subsidized training is on the hook to pay society back by building new scientific knowledge until such time as society has all the scientific knowledge it needs?

That strikes me as too strong. However, given that I’ve benefitted directly from a societal investment in Ph.D. training that, for all practical purposes, I stopped using in 1994, I’m probably not in a good position to make an objective judgment about just what I do owe society to pay back this debt. Have I paid it back already? Is society within its rights to ask more of me?

Here, I’ve thought about the scientist’s debt to society — my debt to society — in very personal terms. In the next post in the series, we’ll revisit these questions on a slightly larger scale, looking at populations of scientists interacting with the larger society and seeing what this does to our understanding of the obligations of scientists.
______
Posts in this series:

Questions for the non-scientists in the audience.

Questions for the scientists in the audience.

What do we owe you, and who’s “we” anyway? Obligations of scientists (part 1)

Scientists’ powers and ways they shouldn’t use them: Obligations of scientists (part 2)

Don’t be evil: Obligations of scientists (part 3)

How plagiarism hurts knowledge-building: Obligations of scientists (part 4)

What scientists ought to do for non-scientists, and why: Obligations of scientists (part 5)

What do I owe society for my scientific training? Obligations of scientists (part 6)

Are you saying I can’t go home until we cure cancer? Obligations of scientists (part 7)

What scientists ought to do for non-scientists, and why: Obligations of scientists (part 5)

If you’re a scientist, are there certain things you’re obligated to do for society (not just for your employer)? If so, where does this obligation come from?

This is part of the discussion we started back in September about special duties or obligations scientists might have to the non-scientists with whom they share a world. If you’re just coming to the discussion now, you might want to check out the post where we set out some groundwork for the discussion, plus the three posts on scientists’ negative duties (i.e., the things scientists have an obligation not to do): our consideration of powers that scientists have and should not misuse, our discussion of scientific misconduct, the high crimes against science that scientists should never commit, and our examination of how plagiarism is not only unfair but also hazardous to knowledge-building.

In this post, finally, we lay out some of the positive duties that scientists might have.

In her book Ethics of Scientific Research, Kristin Shrader-Frechette gives a pretty forceful articulation of a set of positive duties for scientists. She asserts that scientists have a duty to do research, and a duty to use research findings in ways that serve the public good. Recall that these positive duties are in addition to scientists’ negative duty to ensure that the knowledge and technologies created by the research do not harm anyone.

Where do scientists’ special duties come from? Shrader-Frechette identifies a number of sources. For one thing, she says, there are obligations that arise from holding a monopoly on certain kinds of knowledge and services. Scientists are the ones in society who know how to work the electron microscopes and atom-smashers. They’re the ones who have the equipment and skills to build scientific knowledge. Such knowledge is not the kind of thing your average non-scientist could build for himself.

Scientists also have obligations that arise from the fact that they have a good chance of success (at least, better than anyone else) when it comes to educating the public about scientific matters or influencing public policy. The scientists who track the evidence that human activity leads to climate change, for example, are the ones who might be able to explain that evidence to the public and argue persuasively for measures that are predicted to slow climate change.

As well, scientists have duties that arise from the needs of the public. If the public’s pressing needs can only be met with the knowledge and technologies produced by scientific research – and if non-scientists cannot produce such knowledge and technologies themselves – then if scientists do no work to meet these needs, who can?

As we’ve noted before, there is, in all of this, that Spider-Man superhero ethos: with great power comes great responsibility. When scientists realize how much power their knowledge and skills give them relative to the non-scientists in society, they begin to see that their duties are greater than they might have thought.

Let’s turn to what I take to be Shrader-Frechette’s more controversial claim: that scientists have a positive duty to conduct research. Where does this obligation come from?

For one thing, she argues, knowledge itself is valuable, especially in democratic societies where it could presumably help us make better choices than we’d be able to make with less knowledge. Thus, those who can produce knowledge should produce it.

For another thing, Shrader-Frechette points out, society funds research projects (through various granting agencies and direct funding from governmental entities). Researchers who accept such research funding are not free to abstain from research. They can’t take the grants and put an addition on the house. Rather, they are obligated to perform the contracted research. This argument is pretty uncontroversial, I think, since asking for money to do the research that will lead to more scientific knowledge and then failing to use that money to build more scientific knowledge is deceptive.

But here’s the argument that I think will meet with more resistance, at least from scientists: In the U.S., in addition to funding particular pieces of scientific research, society pays the bill for training scientists. This is not just true for scientists trained at public colleges and universities. Even private universities get a huge chunk of their money to fund research projects, research infrastructure, and the scientific training they give their students from public sources, including but not limited to federal funding agencies like the National Science Foundation and the National Institutes of Health.

The American people are not putting up this funding out of the goodness of their hearts. Rather, the public invests in the training of scientists because it expects a return on this investment in the form of the vital knowledge those trained scientists go on to produce and share with the public. Since the public pays to train people who can build scientific knowledge, the people who receive this training have a duty to go forth and build scientific knowledge to benefit the public.

Finally, Shrader-Frechette says, scientists have a duty to do research because if they don’t do research regularly, they won’t remain knowledgeable in their field. Not only will they not be up on the most recent discoveries or what they mean, but they will start to lose the crucial experimental and analytic skills they developed when they were being trained as scientists. For the philosophy fans in the audience, this point in Shrader-Frechette’s argument is reminiscent of Immanuel Kant’s example of how the man who prefers not to cultivate his talents is falling down on his duties. If everyone in society chose not to cultivate her talents, each of us would need to be completely self-sufficient (since we could not receive aid from others exercising their talents on our behalf) – and even that would not be enough, since we would not be able to rely on our own talents, having decided not to cultivate them.

On the basis of Shrader-Frechette’s argument, it sounds like every member of society who has had the advantage of scientific training (paid for by your tax dollars and mine) should be working away in the scientific knowledge salt-mine, at least until science has built all the knowledge society needs it to build.

And here’s where I put my own neck on the line: I earned a Ph.D. in chemistry (conferred in January 1994, almost exactly 20 years ago). Like other students in U.S. Ph.D. programs in chemistry, I did not pay for that scientific training. Rather, as Shrader-Frechette points out, my scientific training was heavily subsidized by the American taxpayer. I have not built a bit of new chemical knowledge since the middle of 1994 (since I wrapped up one more project after completing my Ph.D.).

Have I fallen down on my positive duties as a trained scientist? Would it be fair for American taxpayers to try to recover the funds they invested in my scientific training?

We’ll take up these questions (among others) in the next installment of this series. Stay tuned!

_____
Shrader-Frechette, K. S. (1994). Ethics of scientific research. Rowman & Littlefield.
______
Posts in this series:

Questions for the non-scientists in the audience.

Questions for the scientists in the audience.

What do we owe you, and who’s “we” anyway? Obligations of scientists (part 1)

Scientists’ powers and ways they shouldn’t use them: Obligations of scientists (part 2)

Don’t be evil: Obligations of scientists (part 3)

How plagiarism hurts knowledge-building: Obligations of scientists (part 4)

What scientists ought to do for non-scientists, and why: Obligations of scientists (part 5)

What do I owe society for my scientific training? Obligations of scientists (part 6)

Are you saying I can’t go home until we cure cancer? Obligations of scientists (part 7)

How plagiarism hurts knowledge-building: Obligations of scientists (part 4)

In the last post, we discussed why fabrication and falsification are harmful to scientific knowledge-building. The short version is that if you’re trying to build a body of reliable knowledge about the world, making stuff up (rather than, say, making careful observations of that world and reporting those observations accurately) tends not to get you closer to that goal.

Along with fabrication and falsification, plagiarism is widely recognized as a high crime against the project of science, but the explanations for why it’s harmful generally make it look like a different kind of crime than fabrication and falsification. For example, Donald E. Buzzelli (1999) writes:

[P]lagiarism is an instance of robbing a scientific worker of the credit for his or her work, not a matter of corrupting the record. (p. 278)

Kenneth D. Pimple (2002) writes:

One ideal of science, identified by Robert Merton as “disinterestedness,” holds that what matters is the finding, not who makes the finding. Under this norm, scientists do not judge each other’s work by reference to the race, religion, gender, prestige, or any other incidental characteristic of the researcher; the work is judged by the work, not the worker. No harm would be done to the Theory of Relativity if we discovered Einstein had plagiarized it…

[P]lagiarism … is an offense against the community of scientists, rather than against science itself. Who makes a particular finding will not matter to science in one hundred years, but today it matters deeply to the community of scientists. Plagiarism is a way of stealing credit, of gaining credit where credit is not due, and credit, typically in the form of authorship, is the coin of the realm in science. An offense against scientists qua scientists is an offense against science, and in its way plagiarism is as deep an offense against scientists as falsification and fabrication are offenses against science. (p. 196)

Pimple is claiming that plagiarism is not an offense that undermines the knowledge-building project of science per se. Rather, the crime is in depriving other scientists of the reward they are due for participating in this knowledge-building project. In other words, Pimple says that plagiarism is problematic not because it is dishonest, but rather because it is unfair.

While I think Pimple is right to identify an additional component of responsible conduct of science besides honesty, namely, a certain kind of fairness to one’s fellow scientists, I also think this analysis of plagiarism misses an important way in which misrepresenting the source of words, ideas, methods, or results can undermine the knowledge-building project of science.

On the surface, plagiarism, while potentially nasty to the person whose report is being stolen, might seem not to undermine the scientific community’s evaluation of the phenomena. We are still, after all, bringing together and comparing a number of different observation reports to determine the stable features of our experience of the phenomenon. But this comparison often involves a dialogue as well. As part of the knowledge-building project, from the earliest planning of their experiments to well after results are published, scientists are engaged in asking and answering questions about the details of the experience and of the conditions under which the phenomenon was observed.

Misrepresenting someone else’s honest observation report as one’s own strips the report of accurate information for such a dialogue. It’s hard to answer questions about the little, seemingly insignificant experimental details of an experiment you didn’t actually do, or to refine a description of an experience someone else had. Moreover, such a misrepresentation further undermines the process of building more objective knowledge by failing to contribute the actual insight of the scientist who appears to be contributing his own view but is actually contributing someone else’s. And while it may appear that a significant number of scientists are marshaling their resources to understand a particular phenomenon, if some of those scientists are plagiarists, there are fewer scientists actually grappling with the problem than it would appear.

In such circumstances, we know less than we think we do.

Given the intersubjective route to objective knowledge, failing to really weigh in to the dialogue may end up leaving certain of the subjective biases of others in place in the collective “knowledge” that results.

Objective knowledge is produced when the scientific community’s members work with each other to screen out subjective biases. This means the sort of honesty required for good science goes beyond the accurate reporting of what has been observed and under what conditions. Because each individual report is shaped by the individual’s perspective, objective scientific knowledge also depends on honesty about the individual agency actually involved in making the observations. Thus, plagiarism, which often strikes scientists as less of a threat to scientific knowledge (and more of an instance of “being a jerk”), may pose just as much of a threat to the project of producing objective scientific knowledge as outright fabrication.

What I’m arguing here is that plagiarism is a species of dishonesty that can undermine the knowledge-building project of science in a direct way. Even if what has been lifted by the plagiarist is “accurate” from the point of view of the person who actually collected or analyzed the data or drew conclusions from it, separating this contribution from its true author means it doesn’t function the same way in the ongoing scientific dialogue.

In the next post, we’ll continue our discussion of the duties of scientists by looking at what the positive duties of scientists might be, and by examining the sources of these duties.
_____


Buzzelli, D. E. (1999). Serious deviation from accepted practices. Science and Engineering Ethics, 5(2), 275-282.

Pimple, K. D. (2002). Six domains of research ethics. Science and Engineering Ethics, 8(2), 191-205.
______
Posts in this series:

Questions for the non-scientists in the audience.

Questions for the scientists in the audience.

What do we owe you, and who’s “we” anyway? Obligations of scientists (part 1)

Scientists’ powers and ways they shouldn’t use them: Obligations of scientists (part 2)

Don’t be evil: Obligations of scientists (part 3)

How plagiarism hurts knowledge-building: Obligations of scientists (part 4)

What scientists ought to do for non-scientists, and why: Obligations of scientists (part 5)

What do I owe society for my scientific training? Obligations of scientists (part 6)

Are you saying I can’t go home until we cure cancer? Obligations of scientists (part 7)

Don’t be evil: Obligations of scientists (part 3)

In the last installment of our ongoing discussion of the obligations of scientists, I said the next post in the series would take up scientists’ positive duties (i.e., duties to actually do particular kinds of things). I’ve decided to amend that plan to say just a bit more about scientists’ negative duties (i.e., duties to refrain from doing particular kinds of things).

Here, I want to examine a certain minimalist view of scientists’ duties (or of scientists’ negative duties) that is roughly analogous to the old Google motto, “Don’t be evil.” For scientists, the motto would be “Don’t commit scientific misconduct.” The premise is that if X isn’t scientific misconduct, then X is acceptable conduct — at least, acceptable conduct within the context of doing science.

The next question, if you’re trying to avoid committing scientific misconduct, is how scientific misconduct is defined. For scientists in the U.S., a good place to look is to the federal agencies that provide funding for scientific research and training.

Here’s the Office of Research Integrity’s definition of misconduct:

Research misconduct means fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results. …

Research misconduct does not include honest error or differences of opinion.

Here’s the National Science Foundation’s definition of misconduct:

Research misconduct means fabrication, falsification, or plagiarism in proposing or performing research funded by NSF, reviewing research proposals submitted to NSF, or in reporting research results funded by NSF. …

Research misconduct does not include honest error or differences of opinion.

These definitions are quite similar, although NSF restricts its definition to actions that are part of a scientist’s interaction with NSF — giving the impression that the same actions committed in a scientist’s interaction with NIH would not be scientific misconduct. I’m fairly certain that NSF officials view all scientific plagiarism as bad. However, when the plagiarism is committed in connection with NIH funding, NSF leaves it to the ORI to pursue sanctions. This is a matter of jurisdiction for enforcement.

It’s worth thinking about why federal funders define (and forbid) scientific misconduct in the first place rather than leaving it to scientists as a professional community to police. One stated goal is to ensure that the money they are distributing to support scientific research and training is not being misused — and to have a mechanism with which they can cut off scientists who have proven themselves to be bad actors from further funding. Another stated goal is to protect the quality of the scientific record — that is, to ensure that the published results of the funded research reflect honest reporting of good scientific work rather than lies.

The upshot here is that public money for science comes with strings attached, and that one of those strings is that the money be used to conduct actual science.

Ensuring the proper use of the funding and protecting the integrity of the scientific record needn’t be the only goals of federal funding agencies in the U.S. in their interactions with scientists or in the way they frame their definitions of scientific misconduct, but at present these are the goals in the foreground in discussions of why federally funded scientists should avoid scientific misconduct.

Let’s consider the three high crimes identified in these definitions of scientific misconduct.

Fabrication is making up data or results rather than actually collecting them from observation or experimentation. Obviously, fabrication undermines the project of building a reliable body of knowledge about the world – faked data can’t be counted on to give us an accurate picture of what the world is really like.

A close cousin of fabrication is falsification. Here, rather than making up data out of whole cloth, falsification involves “adjusting” real data – changing the values, adding some data points, omitting other data points. As with fabrication, falsification is lying about your empirical data, representing the falsified data as an honest report of what you observed when it isn’t.

The third high crime is plagiarism, misrepresenting the words or ideas (or, for that matter, data or computer code, for example) of others as your own. Like fabrication and falsification, plagiarism is a variety of dishonesty.

Observation and experimentation are central in establishing the relevant facts about the phenomena scientists are trying to understand. Establishing such relevant facts requires truthfulness about what is observed or measured and under what conditions. Deception, therefore, undermines this aim of science. So at a minimum, scientists must embrace the norm of truthfulness or abandon the goal of building accurate pictures of reality. This doesn’t mean that honest scientists never make mistakes in setting up their experiments, making their measurements, performing data analysis, or reporting what they found to other scientists. However, when honest scientists discover these mistakes, they do what they can to correct them, so that they don’t mislead their fellow scientists even accidentally.

The importance of reliable empirical data, whether as the source of or a test of one’s theory, is why fabrication and falsification of data are rightly regarded as cardinal sins against science. Made-up data are no kind of reliable indicator of what the world is like or whether a particular theory is a good one. Similarly, “cooking” data sets to better support particular hypotheses amounts to ignoring the reality of what has actually been measured. The scientific rules of engagement with phenomena hold the scientist to account for what has actually been observed. While the scientist is always permitted to get additional data about the object of study, one cannot willfully ignore facts one finds puzzling or inconvenient. Even if these facts are not explained, they must be acknowledged.

Those who commit falsification and fabrication undermine the goal of science by knowingly introducing unreliable data into, or holding back relevant data from, the formulation and testing of theories. They sin by not holding themselves accountable to reality as observed in scientific experiments. When they falsify or fabricate in reports of research, they undermine the integrity of the scientific record. When they do it in grant proposals, they are attempting to secure funding under false pretenses.

Plagiarism, the third of the cardinal sins against responsible science, is dishonesty of another sort, namely, dishonesty about the source of words, ideas, methods, or results. A number of people who think hard about research ethics and scientific misconduct view plagiarism as importantly different in its effects from fabrication and falsification. For example, Donald E. Buzzelli (1999) writes:

[P]lagiarism is an instance of robbing a scientific worker of the credit for his or her work, not a matter of corrupting the record. (p. 278)

Kenneth D. Pimple (2002) writes:

One ideal of science, identified by Robert Merton as “disinterestedness,” holds that what matters is the finding, not who makes the finding. Under this norm, scientists do not judge each other’s work by reference to the race, religion, gender, prestige, or any other incidental characteristic of the researcher; the work is judged by the work, not the worker. No harm would be done to the Theory of Relativity if we discovered Einstein had plagiarized it…

[P]lagiarism … is an offense against the community of scientists, rather than against science itself. Who makes a particular finding will not matter to science in one hundred years, but today it matters deeply to the community of scientists. Plagiarism is a way of stealing credit, of gaining credit where credit is not due, and credit, typically in the form of authorship, is the coin of the realm in science. An offense against scientists qua scientists is an offense against science, and in its way plagiarism is as deep an offense against scientists as falsification and fabrication are offenses against science. (p. 196)

In fact, I think we can make a good argument that plagiarism does threaten the integrity of the scientific record (although I’ll save that argument for a separate post). However, I agree with both Buzzelli and Pimple that plagiarism is also a problem because it embodies a particular kind of unfairness within scientific practice. That federal funders include plagiarism by name in their definitions of scientific misconduct suggests that their goals extend further than merely protecting the integrity of the scientific record.

Fabrication, falsification, and plagiarism are clearly instances of scientific misconduct, but the United States Public Health Service (whose umbrella includes NIH) and NSF used to define scientific misconduct more broadly, as fabrication, falsification, plagiarism, and other serious deviations from accepted research practices. The “other serious deviations” clause was controversial, with a panel of the National Academy of Sciences (among others) arguing that this language was ambiguous enough that it shouldn’t be part of an official misconduct definition. Maybe, the panel worried, “serious deviations from accepted research practices” might be interpreted to include cutting-edge methodological innovations, meaning that scientific innovation would count as misconduct.

In his 1993 article, “The Definition of Misconduct in Science: A View from NSF,” Buzzelli claimed that there was no evidence that the broader definitions of misconduct had been used to lodge this kind of misconduct complaint. Since then, however, there have been instances where the ambiguity of an “other serious deviations” clause arguably has been exploited to go after a scientist for political reasons.

If the “other serious deviations” clause isn’t meant to keep scientists from innovating, what kinds of misconduct is it supposed to cover? These include things like sabotaging other scientists’ experiments or equipment, falsifying colleagues’ data, violating agreements about sharing important research materials like cultures and reagents, making misrepresentations in grant proposals, and violating the confidentiality of the peer review process. None of these activities is necessarily covered by fabrication, falsification, or plagiarism, but each of these activities can be seriously harmful to scientific knowledge-building.

Buzzelli (1993) discusses a particular deviation from accepted research practices that the NSF judged as misconduct, one where a principal investigator directing an undergraduate primatology research experience funded by an NSF grant sexually harassed student researchers and graduate assistants. Buzzelli writes:

In carrying out this project, the senior researcher was accused of a range of coercive sexual offenses against various female undergraduate students and research assistants, up to and including rape. … He rationed out access to the research data and the computer on which they were stored and analyzed, as well as his own assistance, so they were only available to students who accepted his advances. He was also accused of threatening to blackball some of the graduate students in the professional community and to damage their careers if they reported his activities. (p. 585)

Even opponents of the “other serious deviations” clause would be unlikely to argue that this PI was not behaving very badly. However, they did argue that this PI’s misconduct was not scientific misconduct — that it should be handled by criminal or civil authorities rather than funding agencies, and that it was not conduct that did harm to science per se.

Buzzelli (who, I should mention, was writing as a senior scientist in the Office of the Inspector General in the National Science Foundation) disagreed with this assessment. He argued that NSF had to get involved in this sexual harassment case in order to protect the integrity of its research funds. The PI in question, operating with NSF funds designated to provide an undergraduate training experience, used his power as a research director and mentor to make sexual demands of his undergraduate trainees. The only way for the undergraduate trainees to receive the training, mentoring, and even access to their own data that they were meant to receive in this research experience at a remote field site was for them to submit to the PI’s demands. In other words, while the PI’s behavior may not have directly compromised the shared body of scientific knowledge, it undermined the other central job of the tribe of science: the training of new scientists. Buzzelli writes:

These demands and assaults, plus the professional blackmail mentioned earlier, were an integral part of the subject’s performance as a research mentor and director and ethically compromised that performance. Hence, they seriously deviated from the practices accepted in the scientific community. (p. 647)

Buzzelli makes the case for an understanding of scientific misconduct as practices that do harm to science. Thus, practices that damage the integrity of training and supervision of associates and students – an important element of the research process – would count as misconduct. Indeed, in his 1999 article, he notes that the first official NIH definition of scientific misconduct (in 1986) used the phrase “serious deviations, such as fabrication, falsification, or plagiarism, from accepted practices in carrying out research or in reporting the results of research.” (p. 276) This language shifted in subsequent statements of the definition of scientific misconduct, for example “fabrication, falsification, plagiarism, and other serious deviations from accepted practices” in the NSF definition that was in place in 1999.

Reordering the words this way might not seem like a big shift, but as Buzzelli points out, it conveys the impression that “other serious deviations” is a fourth item in the list after the clearly enumerated fabrication, falsification, and plagiarism, an ill-defined catch-all meant to cover cases too fuzzy to enumerate in advance. The original NIH wording, in contrast, suggests that the essence of scientific misconduct is that it is an ethical deviation from accepted scientific practice. In this framing of the definition, fabrication, falsification, and plagiarism are offered as three examples of the kind of deviation that counts as scientific misconduct, but there is no claim that these three examples are the only deviations that count as scientific misconduct.

To those still worried by the imprecision of this definition, Buzzelli offers the following:

[T]he ethical import of “serious deviations from accepted practices” has escaped some critics, who have taken it to refer instead to such things as doing creative and novel research, exhibiting personality quirks, or deviating from some artificial ideal of scientific method. They consider the language of the present definition to be excessively broad because it would supposedly allow misconduct findings to be made against scientists for these inappropriate reasons.

However, the real import of "accepted practices" is that it makes the ethical standards held by the scientific community itself the regulatory standard that a federal agency will use in considering a case of misconduct against a scientist. (p. 277)

In other words, Buzzelli is arguing that a definition of scientific misconduct that is centered on practices that the scientific community finds harmful to knowledge-building is better for ensuring the proper use of research funding and protecting the integrity of the scientific record than a definition that restricts scientific misconduct to fabrication, falsification, and plagiarism. Refraining from fabrication, falsification, and plagiarism, then, would not suffice to fulfill the negative duties of a scientist.

We’ll continue our discussion of the duties of scientists with a sidebar discussion on what kind of harm I claim plagiarism does to scientific knowledge-building. From there, we will press on to discuss what the positive duties of scientists might be, as well as the sources of these duties.

_____
Buzzelli, D. E. (1993). The definition of misconduct in science: a view from NSF. Science, 259(5095), 584-648.

Buzzelli, D. (1999). Serious deviation from accepted practices. Science and Engineering Ethics, 5(2), 275-282.

Pimple, K. D. (2002). Six domains of research ethics. Science and Engineering Ethics, 8(2), 191-205.
______
Posts in this series:

Questions for the non-scientists in the audience.

Questions for the scientists in the audience.

What do we owe you, and who’s “we” anyway? Obligations of scientists (part 1)

Scientists’ powers and ways they shouldn’t use them: Obligations of scientists (part 2)

Don’t be evil: Obligations of scientists (part 3)

How plagiarism hurts knowledge-building: Obligations of scientists (part 4)

What scientists ought to do for non-scientists, and why: Obligations of scientists (part 5)

What do I owe society for my scientific training? Obligations of scientists (part 6)

Are you saying I can’t go home until we cure cancer? Obligations of scientists (part 7)

Scientists’ powers and ways they shouldn’t use them: Obligations of scientists (part 2)

In this post, we’re returning to a discussion we started back in September about whether scientists have special duties or obligations to society (or, if the notion of “society” seems too fuzzy and ill-defined to you, to the other people who are not scientists with whom they share a world) in virtue of being scientists.

You may recall that, in the post where we set out some groundwork for the discussion, I offered one reason you might think that scientists have duties that are importantly different from the duties of non-scientists:

The main arguments for scientists having special duties tend to turn on scientists being in possession of special powers. This is the scientist as Spider-Man: with great power comes great responsibility.

What kind of special powers are we talking about? The power to build reliable knowledge about the world – and in particular, about phenomena and mechanisms in the world that are not so transparent to our everyday powers of observation and the everyday tools non-scientists have at their disposal for probing features of their world. On account of their training and experience, scientists are more likely to be able to set up experiments or conditions for observation that will help them figure out the cause of an outbreak of illness, or the robust patterns in global surface temperatures and the strength of their correlation with CO2 outputs from factories and farms, or whether a particular plan for energy generation is thermodynamically plausible. In addition, working scientists are more likely to have access to chemical reagents and modern lab equipment, to beamtimes at particle accelerators, to purpose-bred experimental animals, to populations of human subjects and institutional review boards for well-regulated clinical trials.

Scientists can build specialist knowledge that the rest of us (including scientists in other fields) cannot, and many of them have access to materials, tools, and social arrangements for use in their knowledge-building that the rest of us do not. That may fall short of a superpower, but we shouldn’t kid ourselves that this doesn’t represent significant power in our world.

In her book Ethics of Scientific Research, Kristin Shrader-Frechette argues that these special abilities give rise to obligations for scientists. We can separate these into positive duties and negative duties. A positive duty is an obligation to actually do something (e.g., a duty to care for the hungry, a duty to tell the truth), while a negative duty is an obligation to refrain from doing something (e.g., a duty not to lie, a duty not to steal, a duty not to kill). There may well be context sensitivity in some of these duties (e.g., if it's a matter of self-defense, your duty not to kill may be weakened), but you get the basic difference between the two flavors of duties.

Let’s start with ways scientists ought not to use their scientific powers. Since scientists have to share a world with everyone else, Shrader-Frechette argues that this puts some limits on the research they can do. She says that scientists shouldn’t do research that causes unjustified risks to people. Nor should they do research that violates informed consent of the human subjects who participate in the research. They should not do research that unjustly converts public resources to private profits. Nor should they do research that seriously jeopardizes environmental welfare. Finally, scientists should not do biased research.

One common theme in these prohibitions is the idea that knowledge in itself is not more important than the welfare of people. Given how focused scientific activity is on knowledge-building, this may be something about which scientists need to be reminded. For the people with whom scientists share a world, knowledge is valuable instrumentally – because people in society can benefit from it. What this means is that scientific knowledge-building that harms people more than it helps them, or that harms shared resources like the environment, is on balance a bad thing, not a good thing. This is not to say that the knowledge scientists are seeking should not be built at all. Rather, scientists need to find a way to build it without inflicting those harms – because it is their duty to avoid inflicting those harms.

Shrader-Frechette observes that for research to be valuable at all to the broader public, it must produce reliable knowledge. This is a big reason scientists should avoid conducting biased research. She also notes that not doing certain research can pose a risk to the public.

There’s another way scientists might use their powers against non-scientists that’s suggested by the Mertonian norm of disinterestedness, an “ought” scientists are supposed to feel pulling at them because of how they’ve been socialized as members of their scientific tribe. Because the scientific expert has knowledge and knowledge-building powers that the non-scientist does not, she could exploit the non-scientist’s ignorance or his tendency to trust the judgment of the expert. The scientist, in other words, could put one over on the layperson for her own benefit. This is how snake oil gets sold — and arguably, this is the kind of thing that scientists ought to refrain from doing in their interactions with non-scientists.

The overall duties of the scientist, as Shrader-Frechette describes them, also include positive duties to do research and to use research findings in ways that serve the public good, as well as to ensure that the knowledge and technologies created by the research do not harm anyone. We’ll take up these positive duties in the next post in the series.
_____
Shrader-Frechette, K. S. (1994). Ethics of scientific research. Rowman & Littlefield.

On allies.

Those who cannot remember the past are condemned to repeat it.
–George Santayana

All of this has happened before, and all of this will happen again.
–a guy who turned out to be a Cylon

Let me start by putting my cards on the table: Jamie Vernon is not someone I count as an ally.

At least, he’s not someone I’d consider a reliable ally. I don’t have any reason to believe that he really understands my interests, and I don’t trust him not to sacrifice them for his own comfort. He travels in some of the same online spaces that I do and considers himself a longstanding member of the SciComm community of which I take myself to be a member, but that doesn’t mean I think he has my back. Undoubtedly, there are some issues for which we would find ourselves on the same side of things, but that’s not terribly informative; there are some issues (not many, but some) for which Dick Cheney and I are on the same side.

Here, I’m in agreement with Isis that we needn’t be friends to be able to work together in pursuit of shared goals. I’ve made similar observations about the scientific community:

We’re not all on the same page about everything. Pretending that we are misrepresents the nature of the tribe of science and of scientific activity. But given that there are some shared commitments that guide scientific methodology, some conditions without which scientific activity in the U.S. cannot flourish, these provide some common ground on which scientists ought to be more or less united … [which] opens the possibility of building coalitions, of finding ways to work together toward the goals we share even if we may not agree about what other goals are worth pursuing.

We probably can’t form workable coalitions, though, by showing open contempt for each other’s other commitments or interests. We cannot be allies by behaving like enemies. Human nature sucks like that sometimes.

But without coalitions, we have to be ready to go it alone, to work to achieve our goals with much less help. Without coalitions, we may find ourselves working against the effects of those who have chosen to pursue other goals instead. If you can’t work with me toward goal A, I may not be inclined to help you work toward goal B. If we made common cause with each other, we might be able to tailor strategies that would get us closer to both goals rather than sacrificing one for the other. But if we decide we’re not working on the same team, why on earth should we care about each other’s recommendations with respect to strategies?

Ironically, we humans seem sometimes to show more respect to people who are strangers than to people we call our friends. Perhaps it’s related to the uncertainty of our interactions going forward — the possibility that we may need to band together, or to accommodate the other’s interests to protect our own — or to the lack of much shared history to draw upon in guiding our interactions. We begin our interactions with strangers with the slate as blank as it can be. Strangers can’t be implored (at least not credibly) to consider our past good acts to excuse our current rotten behavior toward them.

We may recognize strangers as potential allies, but we don’t automatically assume that they’re allies already. Neither do we assume that they’ll view us as their allies.

Thinking about allies is important in the aftermath of Joe Hanson’s video that he says was meant to “lampoon” the personalities of famous scientists of yore and to make “a joke to call attention to the sexual harassment that many women still today experience.” It’s fair to say the joke was not entirely successful given that the scenes of Albert Einstein sexually harassing and assaulting Marie Curie arguably did harm to women in science:

Hanson’s video isn’t funny. It’s painful. It’s painful because 1) it’s such an accurate portrayal of exactly what so many of us have faced, and 2) the fact that Hanson thinks it’s “outrageous” demonstrates how many of our male colleagues don’t realize the fullness of the hostility that women scientists are still facing in the workplace. Furthermore, Hanson’s continued clinging to “can’t you take a joke” and the fact that he was “trying to be comedic” reflects the deeper issue. Not only does he not get it, his statement implies that he has no intention of trying to get it.

Hanson’s posted explanation after the negative reactions urges the people who reacted negatively to see him as an ally:

To anyone curious if I am not aware of, or not committed to preventing this kind of treatment (in whatever way my privileged perspective allows me to do so) I would urge you to check out my past writing and videos … This doesn’t excuse us, but I ask that you form your opinion of me, It’s Okay To Be Smart, and PBS Digital Studios from my body of work, and not a piece of it.

Indeed, Jamie Vernon not only vouches for Hanson’s ally bona fides but asserts his own while simultaneously suggesting that the negative reactions to Hanson’s video are themselves a problem for the SciComm community:

Accusations of discrimination were even pointed in my direction, based on a single ill-advised Tweet.  One tweet (that I now regret and apologize for) triggered a tsunami of anger, attacks, taunts, and accusations against me. 

Despite many years of speaking out on women’s issues in science, despite being an ardent supporter of women science communicators, despite being a father to two young girls for whom it is one of my supreme goals to create a more gender balanced science community, despite these things and many other examples of my attempts to be an ally to the community of women science communicators, I was now facing down the barrel of a gun determined to make an example out of me. …

“How could this be happening to me?  I’m an ally!” I thought. …

Hanson has worked incredibly hard for several years to create an identity that has proven to inspire young people.  He has thousands of loyal readers who share his work thousands of times daily on Tumblr, Facebook and Twitter.  He has championed women’s causes.  Just the week prior to the release of the infamous video, he railed against discriminatory practices among the Nobel Prize selection committees.  He is a force for good in a sea of apathy and ignorance.  Without a doubt, he is an asset to science and science communication.  In my opinion, any mention of removing him from his contract with PBS is shortsighted and reflects misdirected anger.  He deserves the opportunity to recalibrate and power on in the name of science.

Vernon assures us that he and Hanson are allies to women in science and in the SciComm community. At minimum, I believe that Vernon must have a very different understanding from mine of what is involved in being an ally.

Allies are people with whom we make common cause to pursue particular goals or to secure particular interests. Their interests and goals are not identical to ours — that’s what makes them allies.

I do not expect allies to be perfect. They, like me, are human, and I certainly mess up with some regularity. Indeed, I understand full well the difficulty of being a good ally. As Josh Witten observed to me, as a white woman I am “in one of the more privileged classes of the oppressed, arguably the least f@#$ed over of the totally f@#$ed over groups in modern western society.” This means when I try to be an ally to people of color, or disabled people, or poor people, for example, there’s a good chance I’ll step in it. I may not be playing life on the lowest difficulty setting, but I’m pretty damn close.

Happily, many people to whom I try to be an ally are willing to tell me when I step in it and to detail just how I’ve stepped in it. This gives me valuable feedback to try to do better.

Allies I trust are people who pay attention to the people to whom they're trying to give support, because allies are imperfect and because their interests and goals are not identical to those of the people they're supporting. The point of paying attention is to get some firsthand reports, from the people you're trying to help, on whether you're helping or hurting.

When good allies mess up, they do their best to respond ethically and do better going forward. Because they want to do better, they want to know when they have messed up — even though it can be profoundly painful to find out your best efforts to help have not succeeded.

Let’s pause for a moment here so I can assure you that I understand it hurts when someone tells you that you messed up. I understand it because I have experienced it. I know all about the feeling of defensiveness that pops right up, as well as the feeling that your character as a human being is being unfairly judged on the basis of limited data — indeed, in your defensiveness, you might immediately start looking for ways the person suggesting you are not acting like a good ally has messed up (including failing to communicate your mistake in language that is as gentle as possible). These feelings are natural, but being a good ally means not letting these feelings overcome your commitment to actually be helpful to the people you set out to help.

On account of these feelings, you might feel great empathy for someone else who has just stepped in it but who you think is trying to be an ally. You might feel so much empathy that you don't want to make them feel bad by calling out their mistake — or that you chide others for pointing it out. (You might even start reaching for quotations about people without sin and stones.) Following this impulse undercuts the goal of being a good ally.

As I wrote elsewhere,

If identifying problematic behavior in a community is something that can only be done by perfect people — people who have never sinned themselves, who have never pissed anyone off, who emerged from the womb incapable of engaging in bad behavior themselves — then we are screwed.

People mess up. The hope is that by calling attention to the bad behavior, and to the harm it does, we can help each other do better. Focusing on problematic behavior (especially if that behavior is ongoing and needs to be addressed to stop the harm) needn’t brand the bad actor as irredeemable, and it shouldn’t require that there’s a saint on duty to file the complaint.

An ally worth the name recognizes that while good intentions can be helpful in steering his conduct, in the end it’s the actions that matter the most. Other people don’t have privileged access to our intentions, after all. What they have to go on is how we behave, what we do — and that outward behavior can have positive or negative effects regardless of whether we intended those effects. It hurts when you step on my toe whether or not you are a good person inside. Telling me it shouldn’t hurt because you didn’t intend the harm is effectively telling me that my own experience isn’t valid, and that your feelings (that you are a good person) trump mine (that my foot hurts).

The allies I trust recognize that the trust they bank from their past good acts is finite. Those past good acts don't make it impossible for their current acts to cause real harm — in fact, they can make a current act more harmful by shattering the trust built up with the past good acts. As well, they try to understand that harm done by others can make all the banked trust easier to deplete. It may not seem fair, but it is a rational move on the part of the people they are trying to help to protect themselves from harm.

This is, by the way, a good reason for people who want to be effective allies to address the harms done by others rather than maintaining a non-intervention policy.

Being a good ally means trying very hard to understand the positions and experiences of the people with whom you’re trying to make common cause by listening carefully, by asking questions, and by refraining from launching into arguments from first principles that those experiences are imaginary or mistaken. While they ask questions, those committed to being allies don’t demand to be educated. They make an effort to do their own homework.

I expect allies worth the name not to demand forgiveness, not to insist that the people with whom they say they stand will swallow their feelings or let go of hurt on the so-called ally’s schedule. Things hurt as much and as long as they’re going to hurt. Ignoring that just adds more hurt to the pile.

The allies I trust are the ones who are focused on doing the right thing, and on helping counter the wrongs, whether or not anyone is watching, not for the street cred as an ally, but because they know they should.

The allies I believe in recognize that every day they are faced with choices about how to act — about who to be — and that how they choose can make them better or worse allies regardless of what came before.

I am not ruling out the possibility that Joe Hanson or Jamie Vernon could be reliable allies for women in science and in the SciComm community. But their professions of ally status will not be what makes them allies, nor will such professions be enough to make me trust them as allies. The proof of an ally is in how he acts — including how he acts in response to criticism that hurts. Being an ally will mean acting like one.

On the labor involved in being part of a community.

On Thursday of this week, registration for ScienceOnline Together 2014, the “flagship annual conference” of ScienceOnline opened (and closed). ScienceOnline describes itself as a “global, ongoing, online community” made up of “a diverse and growing group of researchers, science writers, artists, programmers, and educators —those who conduct or communicate science online”.

On Wednesday of this week, Isis the Scientist expressed her doubts that the science communication community for which ScienceOnline functions as a nexus is actually a “community” in any meaningful sense:

The major fundamental flaw of the SciComm “community” is that it is a professional community with inconsistent common values. En face, one of its values is the idea of promoting science. Another is promoting diversity and equality in a professional setting. But, at its core, its most fundamental value are these notions of friendship, support, and togetherness. People join the community in part to talk about science, but also for social interactions with other members of the “community”.  While I’ve engaged in my fair share of drinking and shenanigans  at scientific conferences, ScienceOnline is a different beast entirely.  The years that I participated in person and virtually, there was no doubt in my mind that this was a primarily social enterprise.  It had some real hilarious parts, but it wasn’t an experience that seriously upgraded me professionally.

People in SciComm feel confident talking about “the community” as a tangible thing with values and including people in it, even when those people don’t value the social structure in the same way. People write things that are “brave” and bloviate in ways that make each other feel good and have “deep and meaningful conversations about issues” that are at the end of the day nothing more than words. It’s a “community” that gives out platters full of cookies to people who claim to be “allies” to causes without actually having to ever do anything meaningful. Without having to outreach in any tangible way, simply because they claim to be “allies.” Deeming yourself an “ally” and getting a stack of “Get Out of Jail, Free” cards is a hallmark of the “community”.

Isis notes that the value of “togetherness” in the (putative) SciComm community is often prioritized over the value of “diversity” — and that this is a pretty efficient way to undermine the community. She suggests that focusing on friendship rather than professionalism entrenches this problem and writes “I have friends in academia, but being a part of academic science is not predicated on people being my friends.”

I’m very sympathetic to Isis’s concerns here. I don’t know that I’d say there’s no SciComm community, but that might come down to a disagreement about where the line is between a dysfunctional community and a lack of community altogether. But that’s like the definitional dispute about how many hairs one needs on one’s head to shift from the category of “bald” to the category of “not-bald” — for the case we’re trying to categorize there’s still agreement that there’s a whole lot of bare skin hanging out in the wind.

The crux of the matter, whether we have a community or are trying to have one, is whether we have a set of shared values and goals that is sufficient for us to make common cause with each other and to take each other seriously — to take each other seriously even when we offer critiques of other members of the community. For if people in the community dismiss your critiques out of hand, if they have the backs of some members of the community and not others (and whose they have and whose they don’t sorts out along lines of race, gender, class, and other dimensions that the community’s shared values and goals purportedly transcend), it’s pretty easy to wonder whether you are actually a valued member of the community, whether the community is for you in any meaningful way.

I do believe there’s something like a SciComm community, albeit a dysfunctional one. I will be going to ScienceOnline Together 2014, as I went to the seven annual meetings preceding it. Personally, even though I am a full-time academic like Dr. Isis, I do find professional value from this conference. Probably this has to do with my weird interdisciplinary professional focus — something that makes it harder for me to get all the support and inspiration and engagement I need from the official professional societies that are supposed to be aligned with my professional identity. And because of the focus of my work, I am well aware of dysfunction in my own professional community and in other academic and professional communities.

While there has been a pronounced social component to ScienceOnline as a focus of the SciComm community, ScienceOnline (and its ancestor conferences) have never felt purely social to me. I have always had a more professional agenda there — learning what’s going on in different realms of practice, getting my ideas before people who can give me useful feedback on them, trying to build myself a big-picture, nuanced understanding of science engagement and how it matters.

And in recent years, my experience of the meetings has been more like work. Last year, for example, I put a lot of effort into coordinating a kid-friendly room at the conference so that attendees with small children could have some child-free time in the sessions. It was a small step towards making the conference — and the community — more accessible and welcoming to all the people who we describe as being part of the community. There’s still significant work to do on this front. If we opt out of doing that work, we are sending a pretty clear message about who we care about having in the community and who we view as peripheral, about whose voices and interests we value and whose we do not.

Paying attention to who is being left out, to whose voices are not being heard, to whose needs are not being met, takes effort. But this effort is part of the regular required maintenance for any community that is not completely homogeneous. Skipping it is a recipe for dysfunction.

And the maintenance, it seems, is required pretty much every damn day.

Friday, in the Twitter stream for the ScienceOnline hashtag #scio14, I saw a tweet from Bug Girl saying that she felt unsafe.

To find out what was making Bug Girl feel unsafe, I went back and watched Joe Hanson’s Thanksgiving video, in which Albert Einstein was portrayed as making unwelcome advances on Marie Curie, cheered on by his host, culminating in a naked assault on Curie.

Given the recent upheaval in the SciComm community around sexual harassment — with lots of discussion, because that’s how we roll — it is surprising and shocking that this video plays sexual harassment and assault for laughs, apparently with no thought to how many women are still targets of harassment, no consideration of how chilly the climate for women in science remains.

Here’s a really clear discussion of what makes the video problematic, and here’s Joe Hanson’s response to the criticisms. I’ll be honest: it looks to me like Joe still doesn’t really understand what people (myself included) took to social media to explain to him. I’m hopeful that he’ll listen and think and eventually get it. If not, I’m hopeful that people will keep piping up to explain the problem.

But not everyone was happy that a publicly posted video (on a pretty visible platform — PBS Digital Studios — supported by taxpayers in the U.S.) was met with public critique from members of our putative community.

The objections raised on Twitter — many of them raised with obvious care as far as being focused on the harm and communicated constructively — were described variously as “drama,” “infighting,” a “witch hunt” and “burning [Joe] at the stake”. (I’m not going to link the tweets because a number of the people who made those characterizations thought about it and walked them back.)

People insisted, as they do pretty much every time, that the proper thing to do was to address the problem privately — as if that’s the only ethical way to deal with a public wrong, or as if it’s the most effective way to fix the harm. Despite what some will argue, I don’t think we have good evidence for either of those claims.

So let’s come back to regular maintenance of the community and think harder about this. I’ve written before that

if bad behavior is dealt with privately, out of view of members of the community who witnessed the bad behavior in question, those members may lose faith in the community’s commitment to calling it out.

This strikes me as good reason not to take all the communications to private channels. People watching and listening on the sidelines are gathering information on whether their so-called community shares their values, on whether it has their back.

Indeed, the people on the sidelines are also watching and listening to the folks dismissing critiques as drama. Operationally, “drama” seems to amount to “Stuff I’d rather you not discuss where I can see or hear it,” which itself shades quickly into “Stuff that really seems to bother other people, for whom I seem to be unable to muster any empathy, because they are not me.”

Let me pause to note what I am not claiming. I am not saying that every member of a community must be an active member of every conversation within that community. I am not saying that empathy requires you to personally step up and engage in every difficult dialogue every time it rolls around. Sometimes you have other stuff to do, or you know that the cost of being patient and calm is more than you can handle at the moment, or you know you need to listen and think for a while before you get it well enough to get into it.

But going to the trouble to speak up to convey that the conversation is a troublesome one to have happening in your community — that you wish people would stop making an issue of it, that they should just let it go for the sake of peace in the community — that’s something different. That’s telling the people expressing their hurt and disappointment and higher expectations that they should swallow it, that they should keep it to themselves.

For the sake of the community.

For the sake of the community of which they are clearly not really valued members, if they are the ones, always, who need to shut up and let their issues go for the greater good.

Arguably, if one is really serious about the good of the community, one should pay attention to how this kind of dismissal impacts the community. Now is as good a moment as any to start.

Scary subject matter.

This being Hallowe’en, I felt like I should serve you something scary.

But what?

Verily, we’ve talked about some scary things here:

More scary subjects have come up on my other blog, including:

Making this list, I’m very glad it’s still light out! Otherwise I might be quaking uncontrollably.

Truth be told, as someone who works with ethics for a living, I’m less afraid of monsters than I am of ordinary humans who lose sight of their duties to their fellow humans.

And frankly, when it comes to things that go bump in the night, I’m less terrified than curious …

especially since the things that go “bump” in my kitchen usually involve the intriguing trio of temperature-, pressure-, and phase-changes — which is to say, it’s nothing a little science couldn’t demystify.

Have a happy, safe, and ethical Hallowe’en!

The ethics of admitting you messed up.

Part of any human endeavor, including building scientific knowledge or running a magazine with a website, is the potential for messing up.

Humans make mistakes.

Some of them are the result of deliberate choices to violate a norm. Some of them are the result of honest misunderstandings, or of misjudgments about how much control we have over conditions or events. Some of them come about in instances where we didn’t really want the bad thing that happened to happen, but we didn’t take the steps we reasonably could have taken to avoid that outcome, either. Sometimes we don’t recognize that what we did (or neglected to do) was a mistake until we appreciate the negative impact it has.

Human fallibility seems like the kind of thing we’re not going to be able to engineer out of the organism, but we probably can do better at recognizing situations where we’re likely to make mistakes, at exercising more care in those conditions, and at addressing our mistakes once we’ve made them.

Ethically speaking, mistakes are a problem because they cause harm, or because they result from a lapse in an obligation we ought to be honoring, or both. Thus, an ethical response to messing up ought to involve addressing that harm and/or getting back on track with the obligation we fell down on. What does this look like?

1. Acknowledge the harm. This needs to be the very first thing you do. To admit you messed up, you have to recognize the mess, with no qualifications. There it is.

2. Acknowledge the experiential reports of the people you have harmed. If you’re serious about sharing a world (which is what ethics is all about), you need to take seriously what the people with whom you’re sharing that world tell you about how they feel. They have privileged access to their own lived experiences; you need to rely on their testimony of those lived experiences.

Swallow your impulse to say, “I wouldn’t feel that way,” or “I wouldn’t have made such a big deal of that if it happened to me.” Swallow as well any impulse to mount an argument from first principles about how the people telling you they were harmed should feel (especially if it’s an argument that they shouldn’t feel hurt at all). These arguments don’t change how people actually feel — except, perhaps, to make them feel worse because you don’t seem to take the actual harm to them seriously! (See “secondary trauma”.)

3. Acknowledge how what you did contributed to the harm. Spell it out without excuses. Note how your action, or your failure to act, helped bring about the bad outcome. Identify the way your action, or your failure to act, fell short of you living up to your obligations (and be clear about what you understand those obligations to be).

Undoubtedly, there will be other causal factors you can point to that also contributed to bringing about the bad outcome. Pointing them out right now will give the impression that you are dodging your responsibility. Don’t do that.

4. Say you are sorry for causing the harm/falling down on the duty. Actually, you can do this earlier in the process, but doing it again won’t hurt.

What will hurt is “I’m sorry if you were offended/if you were hurt” and similar locutions, since these suggest that you don’t take seriously the experiential reports of the people to whom you’re apologizing. (See #2 above.) If it looks like you’re denying that there really was harm (or that the harm was significant), it may also look like you’re not actually apologizing.

5. Identify steps you will take to avoid repeating this kind of mistake. This is closely connected to your post-mortem of what you did wrong this time (see #3 above). How are you going to change the circumstances, be more attentive to your duties, be more aware of the potential bad consequences that you didn’t foresee this time? Spell out the plan.

6. Identify steps you will take to address the harm of your mistake. Sometimes a sincere apology and a clear plan for not messing up in that way again is enough. Sometimes offsetting the harm and rebuilding trust will take more.

This is another good juncture at which to listen to the people telling you they were harmed. What do they want to help mitigate that harm? What are they telling you might help them trust you again?

7. Don’t demand forgiveness. Some harms hurt for a long time. Trust takes longer to establish than to destroy, and rebuilding it can take longer than it took to build the initial trust. This is a good reason to be on guard against mistakes!

8. If you get off to a bad start, admit it and stop digging. People make mistakes trying to address their mistakes. People give excuses when they should instead acknowledge their culpability. People minimize the feelings of the people to whom they’re trying to apologize. It happens, but it adds an additional layer of mistakes that you ought to address.

Catch yourself. Say, “OK, I was giving an excuse, but I should just tell you that what I did was wrong, and I’m sorry it hurt you.” Or, “That reason I gave you was me being defensive, and right now it’s your feelings I need to prioritize.” Or, “I didn’t notice before that the way I was treating you was unfair. I see now that it was, and I’m going to work hard not to treat you that way again.”

Addressing a mistake is not like winning an argument. In fact, it’s the opposite of that: It’s identifying a way that what you did wasn’t successful, or defensible, or good. But this is something we have to get good at, whether we’re trying to build reliable scientific knowledge or just to share a world with others.

——
I think this very general discussion has all sorts of specific applications, for instance to Mariette DiChristina’s message in response to the outcry over the removal of a post by DNLee.

I’m happy to entertain discussion of this particular case in the comments provided it keeps pretty close to the question of our ethical duties in explaining and apologizing. Claims about people’s intent when no clear statement of that intent has been made are out-of-bounds here (although there are plenty of online spaces where you can discuss such things if you like). So are claims about legalities (since what’s legal is not strictly congruent with what’s ethical).

Also, if you haven’t already, you should read Kate Clancy’s detailed analysis of what SciAm did well and what SciAm did poorly in responding to the situation about which DNLee was blogging and in responding to the online outcry when SciAm removed her post.

Also relevant: Melanie Tannenbaum’s excellent post on why we focus on intent when we should focus on impact.