How plagiarism hurts knowledge-building: Obligations of scientists (part 4)

In the last post, we discussed why fabrication and falsification are harmful to scientific knowledge-building. The short version is that if you’re trying to build a body of reliable knowledge about the world, making stuff up (rather than, say, making careful observations of that world and reporting those observations accurately) tends not to get you closer to that goal.

Along with fabrication and falsification, plagiarism is widely recognized as a high crime against the project of science, but the explanations for why it’s harmful generally make it look like a different kind of crime than fabrication and falsification. For example, Donald E. Buzzelli (1999) writes:

[P]lagiarism is an instance of robbing a scientific worker of the credit for his or her work, not a matter of corrupting the record. (p. 278)

Kenneth D. Pimple (2002) writes:

One ideal of science, identified by Robert Merton as “disinterestedness,” holds that what matters is the finding, not who makes the finding. Under this norm, scientists do not judge each other’s work by reference to the race, religion, gender, prestige, or any other incidental characteristic of the researcher; the work is judged by the work, not the worker. No harm would be done to the Theory of Relativity if we discovered Einstein had plagiarized it…

[P]lagiarism … is an offense against the community of scientists, rather than against science itself. Who makes a particular finding will not matter to science in one hundred years, but today it matters deeply to the community of scientists. Plagiarism is a way of stealing credit, of gaining credit where credit is not due, and credit, typically in the form of authorship, is the coin of the realm in science. An offense against scientists qua scientists is an offense against science, and in its way plagiarism is as deep an offense against scientists as falsification and fabrication are offenses against science. (p. 196)

Pimple is claiming that plagiarism is not an offense that undermines the knowledge-building project of science per se. Rather, the crime is in depriving other scientists of the reward they are due for participating in this knowledge-building project. In other words, Pimple says that plagiarism is problematic not because it is dishonest, but rather because it is unfair.

While I think Pimple is right to identify an additional component of responsible conduct of science besides honesty, namely, a certain kind of fairness to one’s fellow scientists, I also think this analysis of plagiarism misses an important way in which misrepresenting the source of words, ideas, methods, or results can undermine the knowledge-building project of science.

On the surface, plagiarism, while potentially nasty to the person whose report is being stolen, might seem not to undermine the scientific community’s evaluation of the phenomena. We are still, after all, bringing together and comparing a number of different observation reports to determine the stable features of our experience of the phenomenon. But this comparison often involves a dialogue as well. As part of the knowledge-building project, from the earliest planning of their experiments to well after results are published, scientists are engaged in asking and answering questions about the details of the experience and of the conditions under which the phenomenon was observed.

Misrepresenting someone else’s honest observation report as one’s own strips the report of accurate information for such a dialogue. It’s hard to answer questions about the little, seemingly insignificant details of an experiment you didn’t actually do, or to refine a description of an experience someone else had. Moreover, such a misrepresentation further undermines the process of building more objective knowledge: the plagiarist appears to be contributing his own view but is actually passing along someone else’s, so the genuine insight he might have added never enters the exchange. And while it may appear that a significant number of scientists are marshaling their resources to understand a particular phenomenon, if some of those scientists are plagiarists, there are fewer scientists actually grappling with the problem than it would appear.

In such circumstances, we know less than we think we do.

Given the intersubjective route to objective knowledge, failing to really weigh in on the dialogue may end up leaving some of the subjective biases of others in place in the collective “knowledge” that results.

Objective knowledge is produced when the scientific community’s members work with each other to screen out subjective biases. This means the sort of honesty required for good science goes beyond the accurate reporting of what has been observed and under what conditions. Because each individual report is shaped by the individual’s perspective, objective scientific knowledge also depends on honesty about the individual agency actually involved in making the observations. Thus, plagiarism, which often strikes scientists as less of a threat to scientific knowledge (and more of an instance of “being a jerk”), may pose just as much of a threat to the project of producing objective scientific knowledge as outright fabrication.

What I’m arguing here is that plagiarism is a species of dishonesty that can undermine the knowledge-building project of science in a direct way. Even if what has been lifted by the plagiarist is “accurate” from the point of view of the person who actually collected or analyzed the data or drew conclusions from it, separating this contribution from its true author means it doesn’t function the same way in the ongoing scientific dialogue.

In the next post, we’ll continue our discussion of the duties of scientists by looking at what the positive duties of scientists might be, and by examining the sources of these duties.
_____


Buzzelli, D. E. (1999). Serious deviation from accepted practices. Science and Engineering Ethics, 5(2), 275-282.

Pimple, K. D. (2002). Six domains of research ethics. Science and Engineering Ethics, 8(2), 191-205.
______
Posts in this series:

Questions for the non-scientists in the audience.

Questions for the scientists in the audience.

What do we owe you, and who’s “we” anyway? Obligations of scientists (part 1)

Scientists’ powers and ways they shouldn’t use them: Obligations of scientists (part 2)

Don’t be evil: Obligations of scientists (part 3)

How plagiarism hurts knowledge-building: Obligations of scientists (part 4)

What scientists ought to do for non-scientists, and why: Obligations of scientists (part 5)

What do I owe society for my scientific training? Obligations of scientists (part 6)

Are you saying I can’t go home until we cure cancer? Obligations of scientists (part 7)

Don’t be evil: Obligations of scientists (part 3)

In the last installment of our ongoing discussion of the obligations of scientists, I said the next post in the series would take up scientists’ positive duties (i.e., duties to actually do particular kinds of things). I’ve decided to amend that plan to say just a bit more about scientists’ negative duties (i.e., duties to refrain from doing particular kinds of things).

Here, I want to examine a certain minimalist view of scientists’ duties (or of scientists’ negative duties) that is roughly analogous to the old Google motto, “Don’t be evil.” For scientists, the motto would be “Don’t commit scientific misconduct.” The premise is that if X isn’t scientific misconduct, then X is acceptable conduct — at least, acceptable conduct within the context of doing science.

The next question, if you’re trying to avoid committing scientific misconduct, is how scientific misconduct is defined. For scientists in the U.S., a good place to look is to the federal agencies that provide funding for scientific research and training.

Here’s the Office of Research Integrity’s definition of misconduct:

Research misconduct means fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results. …

Research misconduct does not include honest error or differences of opinion.

Here’s the National Science Foundation’s definition of misconduct:

Research misconduct means fabrication, falsification, or plagiarism in proposing or performing research funded by NSF, reviewing research proposals submitted to NSF, or in reporting research results funded by NSF. …

Research misconduct does not include honest error or differences of opinion.

These definitions are quite similar, although NSF restricts its definition to actions that are part of a scientist’s interaction with NSF — giving the impression that the same actions committed in a scientist’s interaction with NIH would not be scientific misconduct. I’m fairly certain that NSF officials view all scientific plagiarism as bad. However, when the plagiarism is committed in connection with NIH funding, NSF leaves it to the ORI to pursue sanctions. This is a matter of jurisdiction for enforcement.

It’s worth thinking about why federal funders define (and forbid) scientific misconduct in the first place rather than leaving it to scientists as a professional community to police. One stated goal is to ensure that the money they are distributing to support scientific research and training is not being misused — and to have a mechanism with which they can cut off scientists who have proven themselves to be bad actors from further funding. Another stated goal is to protect the quality of the scientific record — that is, to ensure that the published results of the funded research reflect honest reporting of good scientific work rather than lies.

The upshot here is that public money for science comes with strings attached, and that one of those strings is that the money be used to conduct actual science.

Ensuring the proper use of the funding and protecting the integrity of the scientific record needn’t be the only goals of federal funding agencies in the U.S. in their interactions with scientists or in the way they frame their definitions of scientific misconduct, but at present these are the goals in the foreground in discussions of why federally funded scientists should avoid scientific misconduct.

Let’s consider the three high crimes identified in these definitions of scientific misconduct.

Fabrication is making up data or results rather than actually collecting them from observation or experimentation. Obviously, fabrication undermines the project of building a reliable body of knowledge about the world – faked data can’t be counted on to give us an accurate picture of what the world is really like.

A close cousin of fabrication is falsification. Here, rather than making up data out of whole cloth, falsification involves “adjusting” real data – changing the values, adding some data points, omitting other data points. As with fabrication, falsification is lying about your empirical data, representing the falsified data as an honest report of what you observed when it isn’t.

The third high crime is plagiarism, misrepresenting the words or ideas (or, for that matter, data or computer code, for example) of others as your own. Like fabrication and falsification, plagiarism is a variety of dishonesty.

Observation and experimentation are central in establishing the relevant facts about the phenomena scientists are trying to understand. Establishing such relevant facts requires truthfulness about what is observed or measured and under what conditions. Deception, therefore, undermines this aim of science. So at a minimum, scientists must embrace the norm of truthfulness or abandon the goal of building accurate pictures of reality. This doesn’t mean that honest scientists never make mistakes in setting up their experiments, making their measurements, performing data analysis, or reporting what they found to other scientists. However, when honest scientists discover these mistakes, they do what they can to correct them, so that they don’t mislead their fellow scientists even accidentally.

The importance of reliable empirical data, whether as the source of or a test of one’s theory, is why fabrication and falsification of data are rightly regarded as cardinal sins against science. Made-up data are no kind of reliable indicator of what the world is like or whether a particular theory is a good one. Similarly, “cooking” data sets to better support particular hypotheses amounts to ignoring the reality of what has actually been measured. The scientific rules of engagement with phenomena hold the scientist to account for what has actually been observed. While the scientist is always permitted to get additional data about the object of study, one cannot willfully ignore facts one finds puzzling or inconvenient. Even if these facts are not explained, they must be acknowledged.

Those who commit falsification and fabrication undermine the goal of science by knowingly introducing unreliable data into, or holding back relevant data from, the formulation and testing of theories. They sin by not holding themselves accountable to reality as observed in scientific experiments. When they falsify or fabricate in reports of research, they undermine the integrity of the scientific record. When they do it in grant proposals, they are attempting to secure funding under false pretenses.

Plagiarism, the third of the cardinal sins against responsible science, is dishonesty of another sort, namely, dishonesty about the source of words, ideas, methods, or results. A number of people who think hard about research ethics and scientific misconduct view plagiarism as importantly different in its effects from fabrication and falsification. For example, Donald E. Buzzelli (1999) writes:

[P]lagiarism is an instance of robbing a scientific worker of the credit for his or her work, not a matter of corrupting the record. (p. 278)

Kenneth D. Pimple (2002) writes:

One ideal of science, identified by Robert Merton as “disinterestedness,” holds that what matters is the finding, not who makes the finding. Under this norm, scientists do not judge each other’s work by reference to the race, religion, gender, prestige, or any other incidental characteristic of the researcher; the work is judged by the work, not the worker. No harm would be done to the Theory of Relativity if we discovered Einstein had plagiarized it…

[P]lagiarism … is an offense against the community of scientists, rather than against science itself. Who makes a particular finding will not matter to science in one hundred years, but today it matters deeply to the community of scientists. Plagiarism is a way of stealing credit, of gaining credit where credit is not due, and credit, typically in the form of authorship, is the coin of the realm in science. An offense against scientists qua scientists is an offense against science, and in its way plagiarism is as deep an offense against scientists as falsification and fabrication are offenses against science. (p. 196)

In fact, I think we can make a good argument that plagiarism does threaten the integrity of the scientific record (although I’ll save that argument for a separate post). However, I agree with both Buzzelli and Pimple that plagiarism is also a problem because it embodies a particular kind of unfairness within scientific practice. That federal funders include plagiarism by name in their definitions of scientific misconduct suggests that their goals extend further than merely protecting the integrity of the scientific record.

Fabrication, falsification, and plagiarism are clearly instances of scientific misconduct, but the United States Public Health Service (whose umbrella includes NIH) and NSF used to define scientific misconduct more broadly, as fabrication, falsification, plagiarism, and other serious deviations from accepted research practices. The “other serious deviations” clause was controversial, with a panel of the National Academy of Sciences (among others) arguing that this language was too ambiguous to be part of an official misconduct definition. The panel worried that “serious deviations from accepted research practices” might be interpreted to include cutting-edge methodological innovations, meaning that scientific innovation itself would count as misconduct.

In his 1993 article, “The Definition of Misconduct in Science: A View from NSF,” Buzzelli claimed that there was no evidence that the broader definitions of misconduct had been used to lodge this kind of misconduct complaint. Since then, however, there have been instances in which the ambiguity of an “other serious deviations” clause has arguably been exploited to go after a scientist for political reasons.

If the “other serious deviations” clause isn’t meant to keep scientists from innovating, what kinds of misconduct is it supposed to cover? These include things like sabotaging other scientists’ experiments or equipment, falsifying colleagues’ data, violating agreements about sharing important research materials like cultures and reagents, making misrepresentations in grant proposals, and violating the confidentiality of the peer review process. None of these activities is necessarily covered by fabrication, falsification, or plagiarism, but each of these activities can be seriously harmful to scientific knowledge-building.

Buzzelli (1993) discusses a particular deviation from accepted research practices that the NSF judged as misconduct, one where a principal investigator directing an undergraduate primatology research experience funded by an NSF grant sexually harassed student researchers and graduate assistants. Buzzelli writes:

In carrying out this project, the senior researcher was accused of a range of coercive sexual offenses against various female undergraduate students and research assistants, up to and including rape. … He rationed out access to the research data and the computer on which they were stored and analyzed, as well as his own assistance, so they were only available to students who accepted his advances. He was also accused of threatening to blackball some of the graduate students in the professional community and to damage their careers if they reported his activities. (p. 585)

Even opponents of the “other serious deviations” clause would be unlikely to argue that this PI was not behaving very badly. However, they did argue that this PI’s misconduct was not scientific misconduct — that it should be handled by criminal or civil authorities rather than funding agencies, and that it was not conduct that did harm to science per se.

Buzzelli (who, I should mention, was writing as a senior scientist in the Office of the Inspector General in the National Science Foundation) disagreed with this assessment. He argued that NSF had to get involved in this sexual harassment case in order to protect the integrity of its research funds. The PI in question, operating with NSF funds designated to provide an undergraduate training experience, used his power as a research director and mentor to make sexual demands of his undergraduate trainees. The only way for the undergraduate trainees to receive the training, mentoring, and even access to their own data that they were meant to receive in this research experience at a remote field site was for them to submit to the PI’s demands. In other words, while the PI’s behavior may not have directly compromised the shared body of scientific knowledge, it undermined the other central job of the tribe of science: the training of new scientists. Buzzelli writes:

These demands and assaults, plus the professional blackmail mentioned earlier, were an integral part of the subject’s performance as a research mentor and director and ethically compromised that performance. Hence, they seriously deviated from the practices accepted in the scientific community. (p. 647)

Buzzelli makes the case for an understanding of scientific misconduct as practices that do harm to science. Thus, practices that damage the integrity of training and supervision of associates and students – an important element of the research process – would count as misconduct. Indeed, in his 1999 article, he notes that the first official NIH definition of scientific misconduct (in 1986) used the phrase “serious deviations, such as fabrication, falsification, or plagiarism, from accepted practices in carrying out research or in reporting the results of research.” (p. 276) This language shifted in subsequent statements of the definition of scientific misconduct, for example “fabrication, falsification, plagiarism, and other serious deviations from accepted practices” in the NSF definition that was in place in 1999.

Reordering the words this way might not seem like a big shift, but as Buzzelli points out, it conveys the impression that “other serious deviations” is a fourth item in the list after the clearly enumerated fabrication, falsification, and plagiarism, an ill-defined catch-all meant to cover cases too fuzzy to enumerate in advance. The original NIH wording, in contrast, suggests that the essence of scientific misconduct is that it is an ethical deviation from accepted scientific practice. In this framing of the definition, fabrication, falsification, and plagiarism are offered as three examples of the kind of deviation that counts as scientific misconduct, but there is no claim that these three examples are the only deviations that count as scientific misconduct.

To those still worried by the imprecision of this definition, Buzzelli offers the following:

[T]he ethical import of “serious deviations from accepted practices” has escaped some critics, who have taken it to refer instead to such things as doing creative and novel research, exhibiting personality quirks, or deviating from some artificial ideal of scientific method. They consider the language of the present definition to be excessively broad because it would supposedly allow misconduct findings to be made against scientists for these inappropriate reasons.

However, the real import of “accepted practices” is that it makes the ethical standards held by the scientific community itself the regulatory standard that a federal agency will use in considering a case of misconduct against a scientist. (p. 277)

In other words, Buzzelli is arguing that a definition of scientific misconduct that is centered on practices that the scientific community finds harmful to knowledge-building is better for ensuring the proper use of research funding and protecting the integrity of the scientific record than a definition that restricts scientific misconduct to fabrication, falsification, and plagiarism. Refraining from fabrication, falsification, and plagiarism, then, would not suffice to fulfill the negative duties of a scientist.

We’ll continue our discussion of the duties of scientists with a sidebar discussion on what kind of harm I claim plagiarism does to scientific knowledge-building. From there, we will press on to discuss what the positive duties of scientists might be, as well as the sources of these duties.

_____
Buzzelli, D. E. (1993). The definition of misconduct in science: a view from NSF. Science, 259(5095), 584-648.

Buzzelli, D. E. (1999). Serious deviation from accepted practices. Science and Engineering Ethics, 5(2), 275-282.

Pimple, K. D. (2002). Six domains of research ethics. Science and Engineering Ethics, 8(2), 191-205.
______
Posts in this series:

Questions for the non-scientists in the audience.

Questions for the scientists in the audience.

What do we owe you, and who’s “we” anyway? Obligations of scientists (part 1)

Scientists’ powers and ways they shouldn’t use them: Obligations of scientists (part 2)

Don’t be evil: Obligations of scientists (part 3)

How plagiarism hurts knowledge-building: Obligations of scientists (part 4)

What scientists ought to do for non-scientists, and why: Obligations of scientists (part 5)

What do I owe society for my scientific training? Obligations of scientists (part 6)

Are you saying I can’t go home until we cure cancer? Obligations of scientists (part 7)

Join Virtually Speaking Science for a conversation about sexism in science and science journalism.

Today at 5 P.M. Eastern/2 P.M. Pacific, I’ll be on Virtually Speaking Science with Maryn McKenna and Tom Levenson to discuss sexual harassment, gender bias, and related issues in the world of science, science journalism, and online science communication. Listen live online or, if you have other stuff to do in that bit of spacetime, you can check out the archived recording later. If you do the Second Life thing, you can join us there at the Exploratorium and text in questions for us.

Tom has a nice post with some background to orient our conversation.

Here, I’m going to give you a few links that give you a taste of what I’ve been thinking about in preparation for this conversation, and then I’ll say a little about what I hope will come out of the conversation.

Geek Feminism Wiki Timeline of incidents from 2013 (includes tech and science blogosphere)

Danielle Lee’s story about the “urban whore” incident and Scientific American’s response to it.

Kate Clancy’s post on how Danielle Lee’s story and the revelations about former Scientific American blog editor Bora Zivkovic are connected to the rape-y Einstein bobble head video incident (with useful discussion of productive strategies for community response)

Andrew David Thaler’s post “On being an ally and being called out on your privilege”

A post I wrote with a link to research on implicit gender bias among science faculty at universities, wherein I point out that the empirical findings have some ethical implications if we’re committed to reducing gender bias

A short film exploring the pipeline problem for women in chemistry, “A Chemical Imbalance” (Transcript)

The most recent of Zuska’s excellent posts on the pipeline problem, “Rethinking the Normality of Attrition”

As far as I’m concerned, the point of our conversation is not to say science, or science journalism, or online science communication, has a bigger problem with sexual harassment or sexism or gender disparities than other professional communities or than the broader societies from which members of these professional communities are drawn. The issue, as far as I can tell, is that these smaller communities reproduce these problems from the broader society — but, they don’t need to. Recognizing that the problem exists — that we think we have merit-driven institutions, or that we’re better at being objective than the average Jo(e), but that the evidence indicates we’re not — is a crucial step on the way to fixing it.

I’m hopeful that we’ll be able to talk about more than individual incidents of sexism or harassment in our discussion. The individual incidents matter, but they don’t emerge fully formed from the hearts, minds, mouths, and hands of evil-doers. They are reflections of cultural influences we’re soaking in, of systems we have built.

Among other things, this suggests to me that any real change will require thinking hard about how to change systems rather than keeping our focus at the level of individuals. Recognizing that it will take more than good intentions and individual efforts to overcome things like unconscious bias in human interactions in the professional sphere (including but not limited to hiring decisions) would be a huge step forward.

Such progress will surely be hard, but I don’t think it’s impossible, and I suspect the effort would be worth it.

If you can, do listen (and watch). I’ll be sure to link the archived broadcast once that link is available.

Careers (not just jobs) for Ph.D.s outside the academy.

A week ago I was in Boston for the 2013 annual meeting of the History of Science Society. Immediately after the session in which I was a speaker, I attended a session (Sa31 in this program) called “Happiness beyond the Professoriate — Advising and Embracing Careers Outside the Academy.” The discussion there was specifically pitched at people working in the history of science (whether earning their Ph.D.s or advising those who are), but much of it struck me as broadly applicable to people in other fields — not just fields like philosophy, but also science, technology, engineering, and mathematics (STEM) fields.

The discourse in the session was framed in terms of recognizing, and communicating, that getting a job just like your advisor’s (i.e., as a faculty member at a research university with a Ph.D. program in your field — or, loosening it slightly, as permanent faculty at a college or university, even one not primarily focused on research or on training new members of the profession at the Ph.D. level) shouldn’t be a necessary condition for maintaining your professional identity and place in the professional community. Make no mistake, people in one’s discipline (including those training new members of the profession at the Ph.D. level) frequently do discount people as no longer really members of the profession for failing to succeed in the One True Career Path, but the panel asserted that they shouldn’t.

And, they provided plenty of compelling reasons why the “One True Career Path” approach is problematic. Chief among these, at least in fields like history, is that this approach feeds the creation and growth of armies of adjunct faculty, hoping that someday they will become regular faculty, and in the meantime working for very low wages relative to the amount of work they do (and relative to their training and expertise), experiencing serious job insecurity (sometimes not finding out whether they’ll have classes to teach until the academic term is actually underway), and enduring all manner of employer shenanigans (like having their teaching loads reduced to 50% of full time so the universities employing them are not required by law to provide health care coverage). Worse, insistence on One True Career Path fails to acknowledge that happiness is important.

Panelist Jim Grossman noted that the very language of “alternative careers” reinforces this problematic view by building in the assumption that there is a default career path. Speaking of “alternatives” instead might challenge the assumption that all options other than the default are lesser options.

Grossman identified other bits of vocabulary that ought to be excised from these discussions. He argued against speaking of “the job market” when one really means “the academic job market”. Otherwise, the suggestion is that you can’t really consider those other jobs without exiting the profession. Talking about “job placement,” he said, might have made sense back in the day when the chair of a hiring department called the chair of another department to say, “Send us your best man!” rather than conducting an actual job search. Those days are long gone.

And Grossman had lots to say about why we should stop talking about “overproduction of Ph.D.s.”

Ph.D.s, he noted, are earned by people, not produced like widgets on a factory line. Describing the number of new Ph.D.-holders each year as overproduction is claiming that there are too many — but again, this is too many relative to a specific kind of career trajectory assumed implicitly to be the only one worth pursuing. There are many sectors in the career landscape that could benefit from the talents of these Ph.D.-holders, so why are we not describing the current situation as one of “underconsumption of Ph.D.s”? Finally, the “overproduction of Ph.D.s” locution doesn’t seem helpful in a context where there seems to be no good way to stop departments from “producing” as many Ph.D.s as they want to. If market forces were enough to address this imbalance, we wouldn’t have armies of adjuncts.

Someone in the discussion pointed out that STEM fields have for some time had similar issues of Ph.D. supply and demand, suggesting that they might be ahead of the curve in developing useful responses which other disciplines could borrow. However, the situation in STEM fields differs in that industrial career paths have been treated as legitimate (and as not removing you from the profession). And, more generally, society seems to take the skills and qualities of mind developed during a STEM Ph.D. as useful and broadly applicable, while those developed during a history or philosophy Ph.D. are assumed to be hopelessly esoteric. However, it was noted that while STEM fields don’t generate the same armies of adjuncts as humanities fields, they do have what might be described as the “endless postdoc” problem.

Given that structural stagnation of the academic job market is real (and has been a reality for something like 40 years in the history of science), panelist Lynn Nyhart observed that it would be foolish for Ph.D. students not to consider — and prepare for — other kinds of jobs. As well, Nyhart argued that as long as faculty take on graduate students, they have a responsibility to help them find jobs.

Despite professing to be essentially clueless about career paths other than academia, advisors do have resources they can draw upon in helping their graduate students. Among these is the network of Ph.D. alumni from their graduate program, as well as the network of classmates from their own Ph.D. training. Chances are that a number of people in these networks are doing a wide range of different things with their Ph.D.s — and that they could provide valuable information and contacts. (Also, keeping in contact with these folks recognizes that they are still valued members of your professional community, rather than treating them as dead to you if they did not pursue the One True Career Path.)

Nyhart also recommended Versatilephd.com, especially the PhD Career Finder tab, as a valuable resource for exploring the different kinds of work for which Ph.D.s in various fields can serve as preparation. Some of the good stuff on the site is premium content, but if your university subscribes to the site your access to that premium content may already be paid for.

Nyhart noted that preparing Ph.D. students for a wide range of careers doesn’t require lowering discipline-specific standards, nor changing the curriculum — although, as Grossman pointed out, it might mean thinking more creatively about what skills, qualities of mind, and experiences existing courses impart. After all, skills that are good training for a career in academia — being a good teacher, an effective committee member, an excellent researcher, a persuasive writer, a productive collaborator — are skills that are portable to other kinds of careers.

David Attis, who has a Ph.D. in history of science and has been working in the private sector for about a decade, mentioned some practical skills worth cultivating for Ph.D.s pursuing private sector careers. These include having a tight two-minute explanation of your thesis geared to a non-specialist audience, being able to demonstrate your facility in approaching and solving non-academic problems, and being able to work on the timescale of business, not thesis writing (i.e., five hours to write a two-page memo is far too slow). Attis said that private sector employers are looking for people who can work well on teams and who can be flexible in contexts beyond teaching and research.

I found the discussion in this session incredibly useful, and I hope some of the important issues raised there will find their way to the graduate advisors and Ph.D. students who weren’t in the room for it, no matter what their academic discipline.

Scientists’ powers and ways they shouldn’t use them: Obligations of scientists (part 2)

In this post, we’re returning to a discussion we started back in September about whether scientists have special duties or obligations to society (or, if the notion of “society” seems too fuzzy and ill-defined to you, to the other people who are not scientists with whom they share a world) in virtue of being scientists.

You may recall that, in the post where we set out some groundwork for the discussion, I offered one reason you might think that scientists have duties that are importantly different from the duties of non-scientists:

The main arguments for scientists having special duties tend to turn on scientists being in possession of special powers. This is the scientist as Spider-Man: with great power comes great responsibility.

What kind of special powers are we talking about? The power to build reliable knowledge about the world – and in particular, about phenomena and mechanisms in the world that are not so transparent to our everyday powers of observation and the everyday tools non-scientists have at their disposal for probing features of their world. On account of their training and experience, scientists are more likely to be able to set up experiments or conditions for observation that will help them figure out the cause of an outbreak of illness, or the robust patterns in global surface temperatures and the strength of their correlation with CO2 outputs from factories and farms, or whether a particular plan for energy generation is thermodynamically plausible. In addition, working scientists are more likely to have access to chemical reagents and modern lab equipment, to beamtimes at particle accelerators, to purpose-bred experimental animals, to populations of human subjects and institutional review boards for well-regulated clinical trials.

Scientists can build specialist knowledge that the rest of us (including scientists in other fields) cannot, and many of them have access to materials, tools, and social arrangements for use in their knowledge-building that the rest of us do not. That may fall short of a superpower, but we shouldn’t kid ourselves that this doesn’t represent significant power in our world.

In her book Ethics of Scientific Research, Kristin Shrader-Frechette argues that these special abilities give rise to obligations for scientists. We can separate these into positive duties and negative duties. A positive duty is an obligation to actually do something (e.g., a duty to care for the hungry, a duty to tell the truth), while a negative duty is an obligation to refrain from doing something (e.g., a duty not to lie, a duty not to steal, a duty not to kill). There may well be context sensitivity in some of these duties (e.g., if it’s a matter of self-defense, your duty not to kill may be weakened), but you get the basic difference between the two flavors of duties.

Let’s start with ways scientists ought not to use their scientific powers. Since scientists have to share a world with everyone else, Shrader-Frechette argues that this puts some limits on the research they can do. She says that scientists shouldn’t do research that causes unjustified risks to people. Nor should they do research that violates informed consent of the human subjects who participate in the research. They should not do research that unjustly converts public resources to private profits. Nor should they do research that seriously jeopardizes environmental welfare. Finally, scientists should not do biased research.

One common theme in these prohibitions is the idea that knowledge in itself is not more important than the welfare of people. Given how focused scientific activity is on knowledge-building, this may be something about which scientists need to be reminded. For the people with whom scientists share a world, knowledge is valuable instrumentally – because people in society can benefit from it. What this means is that scientific knowledge-building that harms people more than it helps them, or that harms shared resources like the environment, is on balance a bad thing, not a good thing. This is not to say that the knowledge scientists are seeking should not be built at all. Rather, scientists need to find a way to build it without inflicting those harms – because it is their duty to avoid inflicting those harms.

Shrader-Frechette makes the observation that for research to be valuable at all to the broader public, it must be research that produces reliable knowledge. This is a big reason scientists should avoid conducting biased research. And, she notes that not doing certain research can also pose a risk to the public.

There’s another way scientists might use their powers against non-scientists that’s suggested by the Mertonian norm of disinterestedness, an “ought” scientists are supposed to feel pulling at them because of how they’ve been socialized as members of their scientific tribe. Because the scientific expert has knowledge and knowledge-building powers that the non-scientist does not, she could exploit the non-scientist’s ignorance or his tendency to trust the judgment of the expert. The scientist, in other words, could put one over on the layperson for her own benefit. This is how snake oil gets sold — and arguably, this is the kind of thing that scientists ought to refrain from doing in their interactions with non-scientists.

The overall duties of the scientist, as Shrader-Frechette describes them, also include positive duties to do research and to use research findings in ways that serve the public good, as well as to ensure that the knowledge and technologies created by the research do not harm anyone. We’ll take up these positive duties in the next post in the series.
_____
Shrader-Frechette, K. S. (1994). Ethics of scientific research. Rowman & Littlefield.
______
Posts in this series:

Questions for the non-scientists in the audience.

Questions for the scientists in the audience.

What do we owe you, and who’s “we” anyway? Obligations of scientists (part 1)

Scientists’ powers and ways they shouldn’t use them: Obligations of scientists (part 2)

Don’t be evil: Obligations of scientists (part 3)

How plagiarism hurts knowledge-building: Obligations of scientists (part 4)

What scientists ought to do for non-scientists, and why: Obligations of scientists (part 5)

What do I owe society for my scientific training? Obligations of scientists (part 6)

Are you saying I can’t go home until we cure cancer? Obligations of scientists (part 7)

On allies.

Those who cannot remember the past are condemned to repeat it.
–George Santayana

All of this has happened before, and all of this will happen again.
–a guy who turned out to be a Cylon

Let me start by putting my cards on the table: Jamie Vernon is not someone I count as an ally.

At least, he’s not someone I’d consider a reliable ally. I don’t have any reason to believe that he really understands my interests, and I don’t trust him not to sacrifice them for his own comfort. He travels in some of the same online spaces that I do and considers himself a longstanding member of the SciComm community of which I take myself to be a member, but that doesn’t mean I think he has my back. Undoubtedly, there are some issues for which we would find ourselves on the same side of things, but that’s not terribly informative; there are some issues (not many, but some) for which Dick Cheney and I are on the same side.

Here, I’m in agreement with Isis that we needn’t be friends to be able to work together in pursuit of shared goals. I’ve made similar observations about the scientific community:

We’re not all on the same page about everything. Pretending that we are misrepresents the nature of the tribe of science and of scientific activity. But given that there are some shared commitments that guide scientific methodology, some conditions without which scientific activity in the U.S. cannot flourish, these provide some common ground on which scientists ought to be more or less united … [which] opens the possibility of building coalitions, of finding ways to work together toward the goals we share even if we may not agree about what other goals are worth pursuing.

We probably can’t form workable coalitions, though, by showing open contempt for each other’s other commitments or interests. We cannot be allies by behaving like enemies. Human nature sucks like that sometimes.

But without coalitions, we have to be ready to go it alone, to work to achieve our goals with much less help. Without coalitions, we may find ourselves working against the effects of those who have chosen to pursue other goals instead. If you can’t work with me toward goal A, I may not be inclined to help you work toward goal B. If we made common cause with each other, we might be able to tailor strategies that would get us closer to both goals rather than sacrificing one for the other. But if we decide we’re not working on the same team, why on earth should we care about each other’s recommendations with respect to strategies?

Ironically, we humans seem sometimes to show more respect to people who are strangers than to people we call our friends. Perhaps it’s related to the uncertainty of our interactions going forward — the possibility that we may need to band together, or to accommodate the other’s interests to protect our own — or to the lack of much shared history to draw upon in guiding our interactions. We begin our interactions with strangers with the slate as blank as it can be. Strangers can’t be implored (at least not credibly) to consider our past good acts to excuse our current rotten behavior toward them.

We may recognize strangers as potential allies, but we don’t automatically assume that they’re allies already. Neither do we assume that they’ll view us as their allies.

Thinking about allies is important in the aftermath of Joe Hanson’s video that he says was meant to “lampoon” the personalities of famous scientists of yore and to make “a joke to call attention to the sexual harassment that many women still today experience.” It’s fair to say the joke was not entirely successful given that the scenes of Albert Einstein sexually harassing and assaulting Marie Curie arguably did harm to women in science:

Hanson’s video isn’t funny. It’s painful. It’s painful because 1) it’s such an accurate portrayal of exactly what so many of us have faced, and 2) the fact that Hanson thinks it’s “outrageous” demonstrates how many of our male colleagues don’t realize the fullness of the hostility that women scientists are still facing in the workplace. Furthermore, Hanson’s continued clinging to “can’t you take a joke” and the fact that he was “trying to be comedic” reflects the deeper issue. Not only does he not get it, his statement implies that he has no intention of trying to get it.

Hanson’s posted explanation after the negative reactions urges the people who reacted negatively to see him as an ally:

To anyone curious if I am not aware of, or not committed to preventing this kind of treatment (in whatever way my privileged perspective allows me to do so) I would urge you to check out my past writing and videos … This doesn’t excuse us, but I ask that you form your opinion of me, It’s Okay To Be Smart, and PBS Digital Studios from my body of work, and not a piece of it.

Indeed, Jamie Vernon not only vouches for Hanson’s ally bona fides but asserts his own while simultaneously suggesting that the negative reactions to Hanson’s video are themselves a problem for the SciComm community:

Accusations of discrimination were even pointed in my direction, based on a single ill-advised Tweet.  One tweet (that I now regret and apologize for) triggered a tsunami of anger, attacks, taunts, and accusations against me. 

Despite many years of speaking out on women’s issues in science, despite being an ardent supporter of women science communicators, despite being a father to two young girls for whom it is one of my supreme goals to create a more gender balanced science community, despite these things and many other examples of my attempts to be an ally to the community of women science communicators, I was now facing down the barrel of a gun determined to make an example out of me. …

“How could this be happening to me?  I’m an ally!” I thought. …

Hanson has worked incredibly hard for several years to create an identity that has proven to inspire young people.  He has thousands of loyal readers who share his work thousands of times daily on Tumblr, Facebook and Twitter.  He has championed women’s causes.  Just the week prior to the release of the infamous video, he railed against discriminatory practices among the Nobel Prize selection committees.  He is a force for good in a sea of apathy and ignorance.  Without a doubt, he is an asset to science and science communication.  In my opinion, any mention of removing him from his contract with PBS is shortsighted and reflects misdirected anger.  He deserves the opportunity to recalibrate and power on in the name of science.

Vernon assures us that he and Hanson are allies to women in science and in the SciComm community. At minimum, I believe that Vernon must have a very different understanding than I of what is involved in being an ally.

Allies are people with whom we make common cause to pursue particular goals or to secure particular interests. Their interests and goals are not identical to ours — that’s what makes them allies.

I do not expect allies to be perfect. They, like me, are human, and I certainly mess up with some regularity. Indeed, I understand full well the difficulty of being a good ally. As Josh Witten observed to me, as a white woman I am “in one of the more privileged classes of the oppressed, arguably the least f@#$ed over of the totally f@#$ed over groups in modern western society.” This means when I try to be an ally to people of color, or disabled people, or poor people, for example, there’s a good chance I’ll step in it. I may not be playing life on the lowest difficulty setting, but I’m pretty damn close.

Happily, many people to whom I try to be an ally are willing to tell me when I step in it and to detail just how I’ve stepped in it. This gives me valuable feedback to try to do better.

Allies I trust are people who pay attention to the people to whom they’re trying to give support because they’re imperfect and because their interests and goals are not identical. The point of paying attention is to get some firsthand reports on whether you’re helping or hurting from the people you’re trying to help.

When good allies mess up, they do their best to respond ethically and do better going forward. Because they want to do better, they want to know when they have messed up — even though it can be profoundly painful to find out your best efforts to help have not succeeded.

Let’s pause for a moment here so I can assure you that I understand it hurts when someone tells you that you messed up. I understand it because I have experienced it. I know all about the feeling of defensiveness that pops right up, as well as the feeling that your character as a human being is being unfairly judged on the basis of limited data — indeed, in your defensiveness, you might immediately start looking for ways the person suggesting you are not acting like a good ally has messed up (including failing to communicate your mistake in language that is as gentle as possible). These feelings are natural, but being a good ally means not letting these feelings overcome your commitment to actually be helpful to the people you set out to help.

On account of these feelings, you might feel great empathy for someone else who has just stepped in it but who you think is trying to be an ally. You might feel so much empathy that you don’t want to make them feel bad by calling out their mistake — or that you chide others for pointing out that mistake. (You might even start reaching for quotations about people without sin and stones.) Following this impulse undercuts the goal of being a good ally.

As I wrote elsewhere,

If identifying problematic behavior in a community is something that can only be done by perfect people — people who have never sinned themselves, who have never pissed anyone off, who emerged from the womb incapable of engaging in bad behavior themselves — then we are screwed.

People mess up. The hope is that by calling attention to the bad behavior, and to the harm it does, we can help each other do better. Focusing on problematic behavior (especially if that behavior is ongoing and needs to be addressed to stop the harm) needn’t brand the bad actor as irredeemable, and it shouldn’t require that there’s a saint on duty to file the complaint.

An ally worth the name recognizes that while good intentions can be helpful in steering his conduct, in the end it’s the actions that matter the most. Other people don’t have privileged access to our intentions, after all. What they have to go on is how we behave, what we do — and that outward behavior can have positive or negative effects regardless of whether we intended those effects. It hurts when you step on my toe whether or not you are a good person inside. Telling me it shouldn’t hurt because you didn’t intend the harm is effectively telling me that my own experience isn’t valid, and that your feelings (that you are a good person) trump mine (that my foot hurts).

The allies I trust recognize that the trust they bank from their past good acts is finite. Those past good acts don’t make it impossible for their current acts to cause real harm — in fact, a current act can be more harmful precisely because it shatters the trust built up by those past good acts. As well, they try to understand that harm done by others can make all the banked trust easier to deplete. It may not seem fair, but it is a rational move on the part of the people they are trying to help to protect themselves from harm.

This is, by the way, a good reason for people who want to be effective allies to address the harms done by others rather than maintaining a non-intervention policy.

Being a good ally means trying very hard to understand the positions and experiences of the people with whom you’re trying to make common cause by listening carefully, by asking questions, and by refraining from launching into arguments from first principles that those experiences are imaginary or mistaken. While they ask questions, those committed to being allies don’t demand to be educated. They make an effort to do their own homework.

I expect allies worth the name not to demand forgiveness, not to insist that the people with whom they say they stand will swallow their feelings or let go of hurt on the so-called ally’s schedule. Things hurt as much and as long as they’re going to hurt. Ignoring that just adds more hurt to the pile.

The allies I trust are the ones who are focused on doing the right thing, and on helping counter the wrongs, whether or not anyone is watching, not for the street cred as an ally, but because they know they should.

The allies I believe in recognize that every day they are faced with choices about how to act — about who to be — and that how they choose can make them better or worse allies regardless of what came before.

I am not ruling out the possibility that Joe Hanson or Jamie Vernon could be reliable allies for women in science and in the SciComm community. But their professions of ally status will not be what makes them allies, nor will such professions be enough to make me trust them as allies. The proof of an ally is in how he acts — including how he acts in response to criticism that hurts. Being an ally will mean acting like one.

On the labor involved in being part of a community.

On Thursday of this week, registration for ScienceOnline Together 2014, the “flagship annual conference” of ScienceOnline, opened (and closed). ScienceOnline describes itself as a “global, ongoing, online community” made up of “a diverse and growing group of researchers, science writers, artists, programmers, and educators — those who conduct or communicate science online”.

On Wednesday of this week, Isis the Scientist expressed her doubts that the science communication community for which ScienceOnline functions as a nexus is actually a “community” in any meaningful sense:

The major fundamental flaw of the SciComm “community” is that it is a professional community with inconsistent common values. En face, one of its values is the idea of promoting science. Another is promoting diversity and equality in a professional setting. But, at its core, its most fundamental value are these notions of friendship, support, and togetherness. People join the community in part to talk about science, but also for social interactions with other members of the “community”.  While I’ve engaged in my fair share of drinking and shenanigans  at scientific conferences, ScienceOnline is a different beast entirely.  The years that I participated in person and virtually, there was no doubt in my mind that this was a primarily social enterprise.  It had some real hilarious parts, but it wasn’t an experience that seriously upgraded me professionally.

People in SciComm feel confident talking about “the community” as a tangible thing with values and including people in it, even when those people don’t value the social structure in the same way. People write things that are “brave” and bloviate in ways that make each other feel good and have “deep and meaningful conversations about issues” that are at the end of the day nothing more than words. It’s a “community” that gives out platters full of cookies to people who claim to be “allies” to causes without actually having to ever do anything meaningful. Without having to outreach in any tangible way, simply because they claim to be “allies.” Deeming yourself an “ally” and getting a stack of “Get Out of Jail, Free” cards is a hallmark of the “community”.

Isis notes that the value of “togetherness” in the (putative) SciComm community is often prioritized over the value of “diversity” — and that this is a pretty efficient way to undermine the community. She suggests that focusing on friendship rather than professionalism entrenches this problem and writes “I have friends in academia, but being a part of academic science is not predicated on people being my friends.”

I’m very sympathetic to Isis’s concerns here. I don’t know that I’d say there’s no SciComm community, but that might come down to a disagreement about where the line is between a dysfunctional community and a lack of community altogether. But that’s like the definitional dispute about how many hairs one needs on one’s head to shift from the category of “bald” to the category of “not-bald” — for the case we’re trying to categorize there’s still agreement that there’s a whole lot of bare skin hanging out in the wind.

The crux of the matter, whether we have a community or are trying to have one, is whether we have a set of shared values and goals that is sufficient for us to make common cause with each other and to take each other seriously — to take each other seriously even when we offer critiques of other members of the community. For if people in the community dismiss your critiques out of hand, if they have the backs of some members of the community and not others (and whose they have and whose they don’t sorts out along lines of race, gender, class, and other dimensions that the community’s shared values and goals purportedly transcend), it’s pretty easy to wonder whether you are actually a valued member of the community, whether the community is for you in any meaningful way.

I do believe there’s something like a SciComm community, albeit a dysfunctional one. I will be going to ScienceOnline Together 2014, as I went to the seven annual meetings preceding it. Personally, even though I am a full-time academic like Dr. Isis, I do find professional value from this conference. Probably this has to do with my weird interdisciplinary professional focus — something that makes it harder for me to get all the support and inspiration and engagement I need from the official professional societies that are supposed to be aligned with my professional identity. And because of the focus of my work, I am well aware of dysfunction in my own professional community and in other academic and professional communities.

While there has been a pronounced social component to ScienceOnline as a focus of the SciComm community, ScienceOnline (and its ancestor conferences) have never felt purely social to me. I have always had a more professional agenda there — learning what’s going on in different realms of practice, getting my ideas before people who can give me useful feedback on them, trying to build myself a big-picture, nuanced understanding of science engagement and how it matters.

And in recent years, my experience of the meetings has been more like work. Last year, for example, I put a lot of effort into coordinating a kid-friendly room at the conference so that attendees with small children could have some child-free time in the sessions. It was a small step towards making the conference — and the community — more accessible and welcoming to all the people who we describe as being part of the community. There’s still significant work to do on this front. If we opt out of doing that work, we are sending a pretty clear message about who we care about having in the community and who we view as peripheral, about whose voices and interests we value and whose we do not.

Paying attention to who is being left out, to whose voices are not being heard, to whose needs are not being met, takes effort. But this effort is part of the regular required maintenance for any community that is not completely homogeneous. Skipping it is a recipe for dysfunction.

And the maintenance, it seems, is required pretty much every damn day.

Friday, in the Twitter stream for the ScienceOnline hashtag #scio14, I saw Bug Girl tweet that she was feeling unsafe.

To find out what was making Bug Girl feel unsafe, I went back and watched Joe Hanson’s Thanksgiving video, in which Albert Einstein was portrayed as making unwelcome advances on Marie Curie, cheered on by his host, culminating in a naked assault on Curie.

Given the recent upheaval in the SciComm community around sexual harassment — with lots of discussion, because that’s how we roll — it is surprising and shocking that this video plays sexual harassment and assault for laughs, apparently with no thought to how many women are still targets of harassment, no consideration of how chilly the climate for women in science remains.

Here’s a really clear discussion of what makes the video problematic, and here’s Joe Hanson’s response to the criticisms. I’ll be honest: it looks to me like Joe still doesn’t really understand what people (myself included) took to social media to explain to him. I’m hopeful that he’ll listen and think and eventually get it. If not, I’m hopeful that people will keep piping up to explain the problem.

But not everyone was happy that a publicly posted video (on a pretty visible platform — PBS Digital Studio — supported by taxpayers in the U.S.) was greeted with public critique from members of our putative community.

The objections raised on Twitter — many of them raised with obvious care as far as being focused on the harm and communicated constructively — were described variously as “drama,” “infighting,” a “witch hunt” and “burning [Joe] at the stake”. (I’m not going to link the tweets because a number of the people who made those characterizations thought about it and walked them back.)

People insisted, as they do pretty much every time, that the proper thing to do was to address the problem privately — as if that’s the only ethical way to deal with a public wrong, or as if it’s the most effective way to fix the harm. Despite what some will argue, I don’t think we have good evidence for either of those claims.

So let’s come back to regular maintenance of the community and think harder about this. I’ve written before that

if bad behavior is dealt with privately, out of view of members of the community who witnessed the bad behavior in question, those members may lose faith in the community’s commitment to calling it out.

This strikes me as good reason not to take all the communications to private channels. People watching and listening on the sidelines are gathering information on whether their so-called community shares their values, on whether it has their back.

Indeed, the people on the sidelines are also watching and listening to the folks dismissing critiques as drama. Operationally, “drama” seems to amount to “Stuff I’d rather you not discuss where I can see or hear it,” which itself shades quickly into “Stuff that really seems to bother other people, for whom I seem to be unable to muster any empathy, because they are not me.”

Let me pause to note what I am not claiming. I am not saying that every member of a community must be an active member of every conversation within that community. I am not saying that empathy requires you to personally step up and engage in every difficult dialogue every time it rolls around. Sometimes you have other stuff to do, or you know that the cost of being patient and calm is more than you can handle at the moment, or you know you need to listen and think for a while before you get it well enough to get into it.

But going to the trouble to speak up to convey that the conversation is a troublesome one to have happening in your community — that you wish people would stop making an issue of it, that they should just let it go for the sake of peace in the community — that’s something different. That’s telling the people expressing their hurt and disappointment and higher expectations that they should swallow it, that they should keep it to themselves.

For the sake of the community.

For the sake of the community of which they are clearly not really valued members, if they are the ones, always, who need to shut up and let their issues go for the greater good.

Arguably, if one is really serious about the good of the community, one should pay attention to how this kind of dismissal impacts the community. Now is as good a moment as any to start.

Scary subject matter.

This being Hallowe’en, I felt like I should serve you something scary.

But what?

Verily, we’ve talked about some scary things here:

More scary subjects have come up on my other blog, including:

Making this list, I’m very glad it’s still light out! Otherwise I might be quaking uncontrollably.

Truth be told, as someone who works with ethics for a living, I’m less afraid of monsters than I am of ordinary humans who lose sight of their duties to their fellow humans.

And frankly, when it comes to things that go bump in the night, I’m less terrified than curious …

especially since the things that go “bump” in my kitchen usually involve the intriguing trio of temperature-, pressure-, and phase-changes — which is to say, it’s nothing a little science couldn’t demystify.

Have a happy, safe, and ethical Hallowe’en!

A Hallowe’en science book recommendation for kids.

Sure, younger kids may think the real point of Hallowe’en is the candy or the costumes. But they’re likely to notice some of the scarier motifs that pop up in the decorations, and this presents an unexpected opportunity for some learning.

A Drop of Blood by Paul Showers, illustrated by Edward Miller.

The text of this book is straight-ahead science for the grade school set, explaining the key components of blood (red blood cells, white blood cells, platelets) and what they do. There are nice diagrams of how the circulatory system gets involved in transporting nutrients as well as oxygen, pictures of a white blood cell eating a germ, and a step-by-step explanation of how a scab forms.

But this unassuming text is illustrated in classic horror movie style.

All the “people” in the drawings are either vampires or … uh, whatever those greenish hunchbacked creatures who become henchmen are. And this illustration choice is brilliant! Kids who might be squicked out by blood in real life cannot resist the scary/funny/cool cartoonish vamps accompanying the text in this book. The drawing of the Count offering Igor a Band-aid for his boo-boo is heart-warming. So is the multigenerational picture that accompanies this text:

Little people do not need much blood. Cathy is one year old. She weighs twenty-four pounds. She has about one and a half pints of blood in her body. That is less than one quart.

Big people need more blood. Russell is eleven years old. He weighs eighty-eight pounds. He has about five and a half pints of blood in his body. That is a little less than three quarts.

Russell is a young vampire, while Cathy is a cute green toddler with purple circles under her eyes.

This is a really engaging book. And, the science looks pretty good.
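In fact, the numbers in that quoted passage hold together nicely. Here is a tiny back-of-the-envelope check — my own sketch, with my own variable names and constants, not anything from the book itself: both kids work out to roughly one pint of blood per sixteen pounds of body weight, and the pint-to-quart conversions match the book’s “less than one quart” and “a little less than three quarts.”

# A quick sanity check of the figures quoted from A Drop of Blood.
# The names and constants below are mine, used only for illustration.

PINTS_PER_QUART = 2  # U.S. liquid measure

kids = {
    "Cathy": {"weight_lb": 24, "blood_pints": 1.5},
    "Russell": {"weight_lb": 88, "blood_pints": 5.5},
}

for name, figures in kids.items():
    quarts = figures["blood_pints"] / PINTS_PER_QUART
    lb_per_pint = figures["weight_lb"] / figures["blood_pints"]
    print(f"{name}: {figures['blood_pints']} pints = {quarts} quarts, "
          f"about 1 pint of blood per {lb_per_pint:.0f} lb of body weight")

# Prints:
# Cathy: 1.5 pints = 0.75 quarts, about 1 pint of blood per 16 lb of body weight
# Russell: 5.5 pints = 2.75 quarts, about 1 pint of blood per 16 lb of body weight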

The ethics of admitting you messed up.

Part of any human endeavor, including building scientific knowledge or running a magazine with a website, is the potential for messing up.

Humans make mistakes.

Some of them are the result of deliberate choices to violate a norm. Some of them are the result of honest misunderstandings, or of misjudgments about how much control we have over conditions or events. Some of them come about in instances where we didn’t really want the bad thing that happened to happen, but we didn’t take the steps we reasonably could have taken to avoid that outcome, either. Sometimes we don’t recognize that what we did (or neglected to do) was a mistake until we appreciate the negative impact it has.

Human fallibility seems like the kind of thing we’re not going to be able to engineer out of the organism, but we probably can do better at recognizing situations where we’re likely to make mistakes, at exercising more care in those conditions, and at addressing our mistakes once we’ve made them.

Ethically speaking, mistakes are a problem because they cause harm, or because they result from a lapse in an obligation we ought to be honoring, or both. Thus, an ethical response to messing up ought to involve addressing that harm and/or getting back on track with the obligation we fell down on. What does this look like?

1. Acknowledge the harm. This needs to be the very first thing you do. To admit you messed up, you have to recognize the mess, with no qualifications. There it is.

2. Acknowledge the experiential report of the people you have harmed. If you’re serious about sharing a world (which is what ethics is all about), you need to take seriously what the people with whom you’re sharing that world tell you about how they feel. They have privileged access to their own lived experiences; you need to rely on their testimony of those lived experiences.

Swallow your impulse to say, “I wouldn’t feel that way,” or “I wouldn’t have made such a big deal of that if it happened to me.” Swallow as well any impulse to mount an argument from first principles about how the people telling you they were harmed should feel (especially if it’s an argument that they shouldn’t feel hurt at all). These arguments don’t change how people actually feel — except, perhaps, to make them feel worse because you don’t seem to take the actual harm to them seriously! (See “secondary trauma”.)

3. Acknowledge how what you did contributed to the harm. Spell it out without excuses. Note how your action, or your failure to act, helped bring about the bad outcome. Identify the way your action, or your failure to act, fell short of you living up to your obligations (and be clear about what you understand those obligations to be).

Undoubtedly, there will be other causal factors you can point to that also contributed to bringing about the bad outcome. Pointing them out right now will give the impression that you are dodging your responsibility. Don’t do that.

4. Say you are sorry for causing the harm/falling down on the duty. Actually, you can do this earlier in the process, but doing it again won’t hurt.

What will hurt is “I’m sorry if you were offended/if you were hurt” and similar locutions, since these suggest that you don’t take seriously the experiential reports of the people to whom you’re apologizing. (See #2 above.) If it looks like you’re denying that there really was harm (or that the harm was significant), it may also look like you’re not actually apologizing.

5. Identify steps you will take to avoid repeating this kind of mistake. This is closely connected to your post-mortem of what you did wrong this time (see #3 above). How are you going to change the circumstances, be more attentive to your duties, be more aware of the potential bad consequences that you didn’t foresee this time? Spell out the plan.

6. Identify steps you will take to address the harm of your mistake. Sometimes a sincere apology and a clear plan for not messing up in that way again is enough. Sometimes offsetting the harm and rebuilding trust will take more.

This is another good juncture at which to listen to the people telling you they were harmed. What do they want to help mitigate that harm? What are they telling you might help them trust you again?

7. Don’t demand forgiveness. Some harms hurt for a long time. Trust takes longer to establish than to destroy, and rebuilding it can take longer than it took to build the initial trust. This is a good reason to be on guard against mistakes!

8. If you get off to a bad start, admit it and stop digging. People make mistakes trying to address their mistakes. People give excuses when they should instead acknowledge their culpability. People minimize the feelings of the people to whom they’re trying to apologize. It happens, but it adds an additional layer of mistakes that you ought to address.

Catch yourself. Say, “OK, I was giving an excuse, but I should just tell you that what I did was wrong, and I’m sorry it hurt you.” Or, “That reason I gave you was me being defensive, and right now it’s your feelings I need to prioritize.” Or, “I didn’t notice before that the way I was treating you was unfair. I see now that it was, and I’m going to work hard not to treat you that way again.”

Addressing a mistake is not like winning an argument. In fact, it’s the opposite of that: It’s identifying a way that what you did wasn’t successful, or defensible, or good. But this is something we have to get good at, whether we’re trying to build reliable scientific knowledge or just to share a world with others.

——
I think this very general discussion has all sorts of specific applications, for instance to Mariette DiChristina’s message in response to the outcry over the removal of a post by DNLee.

I’m happy to entertain discussion of this particular case in the comments provided it keeps pretty close to the question of our ethical duties in explaining and apologizing. Claims about people’s intent when no clear statement of that intent has been made are out-of-bounds here (although there are plenty of online spaces where you can discuss such things if you like). So are claims about legalities (since what’s legal is not strictly congruent with what’s ethical).

Also, if you haven’t already, you should read Kate Clancy’s detailed analysis of what SciAm did well and what SciAm did poorly in responding to the situation about which DNLee was blogging and in responding to the online outcry when SciAm removed her post.

Also relevant: Melanie Tannenbaum’s excellent post on why we focus on intent when we should focus on impact.