Scientists’ powers and ways they shouldn’t use them: Obligations of scientists (part 2)

In this post, we’re returning to a discussion we started back in September about whether scientists have special duties or obligations to society (or, if the notion of “society” seems too fuzzy and ill-defined to you, to the other people who are not scientists with whom they share a world) in virtue of being scientists.

You may recall that, in the post where we set out some groundwork for the discussion, I offered one reason you might think that scientists have duties that are importantly different from the duties of non-scientists:

The main arguments for scientists having special duties tend to turn on scientists being in possession of special powers. This is the scientist as Spider-Man: with great power comes great responsibility.

What kind of special powers are we talking about? The power to build reliable knowledge about the world – and in particular, about phenomena and mechanisms in the world that are not so transparent to our everyday powers of observation and the everyday tools non-scientists have at their disposal for probing features of their world. On account of their training and experience, scientists are more likely to be able to set up experiments or conditions for observation that will help them figure out the cause of an outbreak of illness, or the robust patterns in global surface temperatures and the strength of their correlation with CO2 outputs from factories and farms, or whether a particular plan for energy generation is thermodynamically plausible. In addition, working scientists are more likely to have access to chemical reagents and modern lab equipment, to beamtimes at particle accelerators, to purpose-bred experimental animals, to populations of human subjects and institutional review boards for well-regulated clinical trials.

Scientists can build specialist knowledge that the rest of us (including scientists in other fields) cannot, and many of them have access to materials, tools, and social arrangements for use in their knowledge-building that the rest of us do not. That may fall short of a superpower, but we shouldn’t kid ourselves that this doesn’t represent significant power in our world.

In her book Ethics of Scientific Research, Kristin Shrader-Frechette argues that these special abilities give rise to obligations for scientists. We can separate these into positive duties and negative duties. A positive duty is an obligation to actually do something (e.g., a duty to care for the hungry, a duty to tell the truth), while a negative duty is an obligation to refrain from doing something (e.g., a duty not to lie, a duty not to steal, a duty not to kill). There may well be context sensitivity in some of these duties (e.g., if it's a matter of self-defense, your duty not to kill may be weakened), but you get the basic difference between the two flavors of duties.

Let’s start with ways scientists ought not to use their scientific powers. Since scientists have to share a world with everyone else, Shrader-Frechette argues that this puts some limits on the research they can do. She says that scientists shouldn’t do research that causes unjustified risks to people. Nor should they do research that violates informed consent of the human subjects who participate in the research. They should not do research that unjustly converts public resources to private profits. Nor should they do research that seriously jeopardizes environmental welfare. Finally, scientists should not do biased research.

One common theme in these prohibitions is the idea that knowledge in itself is not more important than the welfare of people. Given how focused scientific activity is on knowledge-building, this may be something about which scientists need to be reminded. For the people with whom scientists share a world, knowledge is valuable instrumentally – because people in society can benefit from it. What this means is that scientific knowledge-building that harms people more than it helps them, or that harms shared resources like the environment, is on balance a bad thing, not a good thing. This is not to say that the knowledge scientists are seeking should not be built at all. Rather, scientists need to find a way to build it without inflicting those harms – because it is their duty to avoid inflicting those harms.

Shrader-Frechette makes the observation that for research to be valuable at all to the broader public, it must be research that produces reliable knowledge. This is a big reason scientists should avoid conducting biased research. And, she notes that not doing certain research can also pose a risk to the public.

There’s another way scientists might use their powers against non-scientists that’s suggested by the Mertonian norm of disinterestedness, an “ought” scientists are supposed to feel pulling at them because of how they’ve been socialized as members of their scientific tribe. Because the scientific expert has knowledge and knowledge-building powers that the non-scientist does not, she could exploit the non-scientist’s ignorance or his tendency to trust the judgment of the expert. The scientist, in other words, could put one over on the layperson for her own benefit. This is how snake oil gets sold — and arguably, this is the kind of thing that scientists ought to refrain from doing in their interactions with non-scientists.

The overall duties of the scientist, as Shrader-Frechette describes them, also include positive duties to do research and to use research findings in ways that serve the public good, as well as to ensure that the knowledge and technologies created by the research do not harm anyone. We’ll take up these positive duties in the next post in the series.
_____
Shrader-Frechette, K. S. (1994). Ethics of scientific research. Rowman & Littlefield.
______
Posts in this series:

Questions for the non-scientists in the audience.

Questions for the scientists in the audience.

What do we owe you, and who’s “we” anyway? Obligations of scientists (part 1)

Scientists’ powers and ways they shouldn’t use them: Obligations of scientists (part 2)

Don’t be evil: Obligations of scientists (part 3)

How plagiarism hurts knowledge-building: Obligations of scientists (part 4)

What scientists ought to do for non-scientists, and why: Obligations of scientists (part 5)

What do I owe society for my scientific training? Obligations of scientists (part 6)

Are you saying I can’t go home until we cure cancer? Obligations of scientists (part 7)

On allies.

Those who cannot remember the past are condemned to repeat it.
–George Santayana

All of this has happened before, and all of this will happen again.
–a guy who turned out to be a Cylon

Let me start by putting my cards on the table: Jamie Vernon is not someone I count as an ally.

At least, he’s not someone I’d consider a reliable ally. I don’t have any reason to believe that he really understands my interests, and I don’t trust him not to sacrifice them for his own comfort. He travels in some of the same online spaces that I do and considers himself a longstanding member of the SciComm community of which I take myself to be a member, but that doesn’t mean I think he has my back. Undoubtedly, there are some issues for which we would find ourselves on the same side of things, but that’s not terribly informative; there are some issues (not many, but some) for which Dick Cheney and I are on the same side.

Here, I’m in agreement with Isis that we needn’t be friends to be able to work together in pursuit of shared goals. I’ve made similar observations about the scientific community:

We’re not all on the same page about everything. Pretending that we are misrepresents the nature of the tribe of science and of scientific activity. But given that there are some shared commitments that guide scientific methodology, some conditions without which scientific activity in the U.S. cannot flourish, these provide some common ground on which scientists ought to be more or less united … [which] opens the possibility of building coalitions, of finding ways to work together toward the goals we share even if we may not agree about what other goals are worth pursuing.

We probably can’t form workable coalitions, though, by showing open contempt for each other’s other commitments or interests. We cannot be allies by behaving like enemies. Human nature sucks like that sometimes.

But without coalitions, we have to be ready to go it alone, to work to achieve our goals with much less help. Without coalitions, we may find ourselves working against the effects of those who have chosen to pursue other goals instead. If you can’t work with me toward goal A, I may not be inclined to help you work toward goal B. If we made common cause with each other, we might be able to tailor strategies that would get us closer to both goals rather than sacrificing one for the other. But if we decide we’re not working on the same team, why on earth should we care about each other’s recommendations with respect to strategies?

Ironically, we humans seem sometimes to show more respect to people who are strangers than to people we call our friends. Perhaps it’s related to the uncertainty of our interactions going forward — the possibility that we may need to band together, or to accommodate the other’s interests to protect our own — or to the lack of much shared history to draw upon in guiding our interactions. We begin our interactions with strangers with the slate as blank as it can be. Strangers can’t be implored (at least not credibly) to consider our past good acts to excuse our current rotten behavior toward them.

We may recognize strangers as potential allies, but we don’t automatically assume that they’re allies already. Neither do we assume that they’ll view us as their allies.

Thinking about allies is important in the aftermath of Joe Hanson’s video that he says was meant to “lampoon” the personalities of famous scientists of yore and to make “a joke to call attention to the sexual harassment that many women still today experience.” It’s fair to say the joke was not entirely successful given that the scenes of Albert Einstein sexually harassing and assaulting Marie Curie arguably did harm to women in science:

Hanson’s video isn’t funny. It’s painful. It’s painful because 1) it’s such an accurate portrayal of exactly what so many of us have faced, and 2) the fact that Hanson thinks it’s “outrageous” demonstrates how many of our male colleagues don’t realize the fullness of the hostility that women scientists are still facing in the workplace. Furthermore, Hanson’s continued clinging to “can’t you take a joke” and the fact that he was “trying to be comedic” reflects the deeper issue. Not only does he not get it, his statement implies that he has no intention of trying to get it.

The explanation Hanson posted in the wake of the backlash urges those who reacted negatively to see him as an ally:

To anyone curious if I am not aware of, or not committed to preventing this kind of treatment (in whatever way my privileged perspective allows me to do so) I would urge you to check out my past writing and videos … This doesn’t excuse us, but I ask that you form your opinion of me, It’s Okay To Be Smart, and PBS Digital Studios from my body of work, and not a piece of it.

Indeed, Jamie Vernon not only vouches for Hanson’s ally bona fides but asserts his own while simultaneously suggesting that the negative reactions to Hanson’s video are themselves a problem for the SciComm community:

Accusations of discrimination were even pointed in my direction, based on a single ill-advised Tweet.  One tweet (that I now regret and apologize for) triggered a tsunami of anger, attacks, taunts, and accusations against me. 

Despite many years of speaking out on women’s issues in science, despite being an ardent supporter of women science communicators, despite being a father to two young girls for whom it is one of my supreme goals to create a more gender balanced science community, despite these things and many other examples of my attempts to be an ally to the community of women science communicators, I was now facing down the barrel of a gun determined to make an example out of me. …

“How could this be happening to me?  I’m an ally!” I thought. …

Hanson has worked incredibly hard for several years to create an identity that has proven to inspire young people.  He has thousands of loyal readers who share his work thousands of times daily on Tumblr, Facebook and Twitter.  He has championed women’s causes.  Just the week prior to the release of the infamous video, he railed against discriminatory practices among the Nobel Prize selection committees.  He is a force for good in a sea of apathy and ignorance.  Without a doubt, he is an asset to science and science communication.  In my opinion, any mention of removing him from his contract with PBS is shortsighted and reflects misdirected anger.  He deserves the opportunity to recalibrate and power on in the name of science.

Vernon assures us that he and Hanson are allies to women in science and in the SciComm community. At minimum, I believe that Vernon must have a very different understanding from mine of what is involved in being an ally.

Allies are people with whom we make common cause to pursue particular goals or to secure particular interests. Their interests and goals are not identical to ours — that’s what makes them allies.

I do not expect allies to be perfect. They, like me, are human, and I certainly mess up with some regularity. Indeed, I understand full well the difficulty of being a good ally. As Josh Witten observed to me, as a white woman I am “in one of the more privileged classes of the oppressed, arguably the least f@#$ed over of the totally f@#$ed over groups in modern western society.” This means when I try to be an ally to people of color, or disabled people, or poor people, for example, there’s a good chance I’ll step in it. I may not be playing life on the lowest difficulty setting, but I’m pretty damn close.

Happily, many people to whom I try to be an ally are willing to tell me when I step in it and to detail just how I’ve stepped in it. This gives me valuable feedback to try to do better.

Allies I trust pay attention to the people they're trying to support, precisely because allies are imperfect and because their interests and goals are not identical to those of the people they support. The point of paying attention is to get firsthand reports, from the people you're trying to help, on whether you're helping or hurting.

When good allies mess up, they do their best to respond ethically and do better going forward. Because they want to do better, they want to know when they have messed up — even though it can be profoundly painful to find out your best efforts to help have not succeeded.

Let’s pause for a moment here so I can assure you that I understand it hurts when someone tells you that you messed up. I understand it because I have experienced it. I know all about the feeling of defensiveness that pops right up, as well as the feeling that your character as a human being is being unfairly judged on the basis of limited data — indeed, in your defensiveness, you might immediately start looking for ways the person suggesting you are not acting like a good ally has messed up (including failing to communicate your mistake in language that is as gentle as possible). These feelings are natural, but being a good ally means not letting these feelings overcome your commitment to actually be helpful to the people you set out to help.

On account of these feelings, you might feel great empathy for someone else who has just stepped in it but who you think is trying to be an ally. You might feel so much empathy that you don't want to make them feel bad by calling out their mistake — or so much that you chide others for pointing out that mistake. (You might even start reaching for quotations about people without sin and stones.) Following this impulse undercuts the goal of being a good ally.

As I wrote elsewhere,

If identifying problematic behavior in a community is something that can only be done by perfect people — people who have never sinned themselves, who have never pissed anyone off, who emerged from the womb incapable of engaging in bad behavior themselves — then we are screwed.

People mess up. The hope is that by calling attention to the bad behavior, and to the harm it does, we can help each other do better. Focusing on problematic behavior (especially if that behavior is ongoing and needs to be addressed to stop the harm) needn’t brand the bad actor as irredeemable, and it shouldn’t require that there’s a saint on duty to file the complaint.

An ally worth the name recognizes that while good intentions can be helpful in steering his conduct, in the end it’s the actions that matter the most. Other people don’t have privileged access to our intentions, after all. What they have to go on is how we behave, what we do — and that outward behavior can have positive or negative effects regardless of whether we intended those effects. It hurts when you step on my toe whether or not you are a good person inside. Telling me it shouldn’t hurt because you didn’t intend the harm is effectively telling me that my own experience isn’t valid, and that your feelings (that you are a good person) trump mine (that my foot hurts).

The allies I trust recognize that the trust they bank from their past good acts is finite. Those past good acts don't make it impossible for their current acts to cause real harm — in fact, they can make a current act more harmful by shattering the trust built up with the past good acts. As well, they try to understand that harm done by others can make all the banked trust easier to deplete. It may not seem fair, but it is a rational move on the part of the people they are trying to help to protect themselves from harm.

This is, by the way, a good reason for people who want to be effective allies to address the harms done by others rather than maintaining a non-intervention policy.

Being a good ally means trying very hard to understand the positions and experiences of the people with whom you’re trying to make common cause by listening carefully, by asking questions, and by refraining from launching into arguments from first principles that those experiences are imaginary or mistaken. While they ask questions, those committed to being allies don’t demand to be educated. They make an effort to do their own homework.

I expect allies worth the name not to demand forgiveness, not to insist that the people with whom they say they stand will swallow their feelings or let go of hurt on the so-called ally’s schedule. Things hurt as much and as long as they’re going to hurt. Ignoring that just adds more hurt to the pile.

The allies I trust are the ones who are focused on doing the right thing, and on helping counter the wrongs, whether or not anyone is watching, not for the street cred as an ally, but because they know they should.

The allies I believe in recognize that every day they are faced with choices about how to act — about who to be — and that how they choose can make them better or worse allies regardless of what came before.

I am not ruling out the possibility that Joe Hanson or Jamie Vernon could be reliable allies for women in science and in the SciComm community. But their professions of ally status will not be what makes them allies, nor will such professions be enough to make me trust them as allies. The proof of an ally is in how he acts — including how he acts in response to criticism that hurts. Being an ally will mean acting like one.

What do we owe you, and who’s “we” anyway? Obligations of scientists (part 1)

Near the beginning of the month, I asked my readers — those who are scientists and those who are non-scientists alike — to share their impressions about whether scientists have any special duties or obligations to society that non-scientists don’t have. I also asked whether non-scientists have any special duties or obligations to scientists.

If you click through to those linked posts and read the comments (and check out the thoughtful responses at MetaCookBook and Antijenic Drift), you’ll see a wide range of opinions on both of these questions, each with persuasive reasons offered to back them up.

In this post and a few more that will follow (I’m estimating three more, but we’ll see how it goes), I want to take a closer look at some of these responses. I’m also going to develop some of the standard arguments that have been put forward by professional philosophers and others of that ilk that scientists do, in fact, have special duties. Working through these arguments will include getting into specifics about what precisely scientists owe the non-scientists with whom they’re sharing a world, and about the sources of these putative obligations. If we’re going to take these arguments seriously, though, I think we need to think carefully about the corresponding questions: what do individual non-scientists and society as a whole owe to scientists, and what are the sources of these obligations?

First, let’s lay some groundwork for the discussion.

Right off the bat, I must acknowledge the problem of drawing clear lines around who counts as a scientist and who counts as a non-scientist. For the purposes of getting answers to my questions, I used a fairly arbitrary definition:

Who counts as a scientist here? I'm including anyone who has been trained (past the B.A. or B.S. level) in a science, including people who may be currently involved in that training and anyone working in a scientific field (even in the absence of schooling past the B.A. or B.S. level).

There are plenty of people who would count as “scientist” under this definition who would not describe themselves as scientists — or at least as professional scientists. (I am one of those people.) On the other hand, there are some professional scientists who would say lots of the people who meet my criteria, even those who would describe themselves as professional scientists, don’t really count as members of the tribe of science.

There’s not one obvious way to draw the lines here. The world is frequently messy that way.

That said, at least some of the arguments that claim scientists have special duties make particular assumptions about scientific training. These assumptions point to a source of the putative special duties.

But maybe that just means we should be examining claims about people-whose-training-puts-them-into-a-particular-relationship-with-society having special duties, whether or not those people are all scientists, and whether or not all scientists have had training that falls into that category.

Another issue here is getting to the bottom of what it means to have an obligation.

Some obligations we have may be spelled out in writing, explicitly agreed to, with the force of law behind them, but many of our obligations are not. Many flow not from written contracts but from relationships — whether our relationships with individuals, or with professional communities, or with other sorts of communities of various sizes.

Because they flow from relationships, it’s not unreasonable to expect that when we have obligations, the persons, communities, or other entities to whom we have obligations will have some corresponding obligations to us. However, this doesn’t guarantee that the obligations on each side will be perfectly symmetrical in strength or in kind. When my kids were little, my obligations to them were significantly larger than their obligations to me. Further, as our relationships change, so will our obligations. I owe my kids different things now than I did when they were toddlers. I owe my parents different things now than I did when I was a minor living under their roof.

It’s also important to notice that obligations are not like physical laws: having an obligation is no guarantee that one will live up to it and accordingly display a certain kind of behavior. Among other things, this means that how people act is not a perfectly reliable guide to how they ought to act. It also means that someone else’s failure to live up to her obligations to me does not automatically switch off my obligations to her. In some cases it might, but there are other cases where the nature of the relationship means my obligations are still in force. (For example, if my teenage kid falls down on her obligation to treat me with minimal respect, I still have a duty to feed and shelter her.)

That obligations are not like physical laws means there’s likely to be more disagreement around what we’re actually obliged to do. Indeed, some are likely to reject putative obligations out of hand because they are socially constructed. Here, I don’t think we need to appeal to a moral realist to locate objective moral facts that could ground our obligations. I’m happy to bite the bullet. Socially constructed obligations aren’t a problem because they emerge from the social processes that are an inescapable part of sharing a world — including with people who are not exactly like ourselves. These obligations flow from our understandings of the relationships we bear to one another, and they are no less “real” for being socially constructed than are bridges.

One more bit of background to ponder: The questions I posed asked whether scientists and non-scientists have any special duties or obligations to each other. A number of respondents (mostly on the scientist side of the line, as I defined it) suggested that scientists’ duties are not special, but simply duties of the same sort everyone in society has (with perhaps some differences in the fine details).

The main arguments for scientists having special duties tend to turn on scientists being in possession of special powers. This is the scientist as Spider-Man: with great power comes great responsibility. But whether the scientist has special powers may be the kind of thing that looks very different on opposite sides of the scientist-non-scientist divide; the scientists responding to my questions don’t seem to see themselves as very different from other members of society. Moreover, nearly every superhero canon provides ample evidence that power, and the responsibility that accompanies it, can feel like a burden. (One need look no further than seasons 6 and 7 of Buffy the Vampire Slayer to wonder if taking a break from her duty to slay vamps would have made Buffy a more pleasant person with whom to share a world.)

Arguably, scientists can do some things the rest of us can’t. How does that affect the relationship between scientists and non-scientists? What kind of duties could flow from that relationship? These powers, and the corresponding responsibilities, will be the focus of the next post.

______
Posts in this series:

Questions for the non-scientists in the audience.

Questions for the scientists in the audience.

What do we owe you, and who’s “we” anyway? Obligations of scientists (part 1)

Scientists’ powers and ways they shouldn’t use them: Obligations of scientists (part 2)

Don’t be evil: Obligations of scientists (part 3)

How plagiarism hurts knowledge-building: Obligations of scientists (part 4)

What scientists ought to do for non-scientists, and why: Obligations of scientists (part 5)

What do I owe society for my scientific training? Obligations of scientists (part 6)

Are you saying I can’t go home until we cure cancer? Obligations of scientists (part 7)

Questions for the non-scientists in the audience.

Today in my "Ethics in Science" class, we took up a question that reliably gets my students (a mix of science majors and non-science majors) going: Do scientists have special obligations to society that non-scientists don't have?

Naturally, there are some follow-up questions if you lean towards an affirmative answer to that first question. For example:

  • What specifically are those special obligations?
  • Why do scientists have these particular obligations when non-scientists in their society don’t?
  • How strong are those obligations? (In other words, under what conditions would it be ethically permissible for scientists to fall short of doing what the obligations say they should do?)

I think these are important — and complex — questions, some of which go to the heart of what’s involved in scientists and non-scientists successfully sharing a world. But, it always helps me to hear the voices (and intuitions) of some of the folks besides me who are involved in this sharing-a-world project.

So, for the non-scientists in the audience, I have some questions I hope you will answer in the comments on this post.*

1. Are there special duties or obligations you think scientists have to the non-scientists with whom they’re sharing a world? If yes, what are they?

2. If you think scientists have special duties or obligations to the rest of society, why do they have them? Where did they come from? (If you don't think scientists have special duties or obligations to the rest of society, why not?)

3. What special duties or obligations (if any) do you think non-scientists have to the scientists with whom they’re sharing a world?

Who counts as a non-scientist here? I'm including anyone who has not received scientific training past the B.A. or B.S. level and who is not currently working in a scientific field (even in the absence of schooling past the B.A. or B.S. level).

That means I count as a scientist here (even though I’m not currently employed as a scientist or otherwise involved in scientific knowledge-building).

If you want to say something about these questions but you’re a scientist according to this definition, never fear! You are cordially invited to answer a corresponding set of questions, posed to the scientists with whom non-scientists are sharing a world, on my other blog.
_____
* If you prefer to answer the questions on your own blog, or in some other online space, please drop a link in the comments here, or point me to it via Twitter (@docfreeride) or email (dr.freeride@gmail.com).

Individual misconduct or institutional failing: “The Newsroom” and science.

I’ve been watching The Newsroom*, and in its second season, the storyline is treading on territory where journalism bears some striking similarities to science. Indeed, the most recent episode (first aired Sunday, August 25, 2013) raises questions about trust and accountability — both at the individual and the community levels — for which I think science and journalism may converge.

I’m not going to dig too deeply into the details of the show, but it’s possible that the ones I touch on here reach the level of spoilers. If you prefer to stay spoiler-free, you might want to stop reading here and come back after you’ve caught up on the show.

The central characters in The Newsroom are producing a cable news show, trying hard to get the news right but also working within the constraints set by their corporate masters (e.g., they need to get good ratings). A producer on the show, on loan to the New York-based team from the D.C. bureau, gets a lead for a fairly shocking story. He and some other members of the team try to find evidence to support the claims of this shocking story. As they’re doing this, they purposely keep other members of the production team out of the loop — not to deceive them or cut them out of the glory if, eventually, they’re able to break the story, but to enable these folks to look critically at the story once all the facts are assembled, to try to poke holes in it.** And, it’s worth noting, the folks actually in the loop, looking for information that bears on the reliability of the shocking claims in the story, are shown to be diligent about considering ways they could be wrong, identifying alternate explanations for details that seem to be support for the story, etc.

The production team looks at the multiple sources of information they have. They look for reasons to doubt the story. They ultimately decide to air the story.

But, it turns out the story is wrong.

Worse is why key pieces of “evidence” supporting the story are unreliable. One of the interviewees is apparently honest but unreliable. One source of leaked information is false, because the person who leaked it has a grudge against a member of the production team. And, it turns out that the producer on loan from the D.C. bureau has doctored a taped interview that is the lynchpin of the story to make it appear that the interviewee said something he didn’t say.

The producer on loan from the D.C. bureau is fired. He proceeds to sue the network for wrongful termination, claiming it was an institutional failure that led to the airing of the now-retracted big story.

The parallels to scientific knowledge-building are clear.

Scientists with a hypothesis try to amass evidence that will make it clear whether the hypothesis is correct or incorrect. Rather than getting lulled into a false sense of security by observations that seem to fit the hypothesis, scientists try to find evidence that would rule out the hypothesis. They recognize that part of their job as knowledge-builders is to exercise organized skepticism — directed at their own scientific claims as well as at the claims of other scientists. And, given how vulnerable we are to our own unconscious biases, scientists rely on teamwork to effectively weed out the “evidence” that doesn’t actually provide strong support for their claims.

Some seemingly solid evidence turns out to be faulty. Measuring devices can become unreliable, or you get stuck with a bad batch of reagent, or your collaborator sends you a sample from the wrong cell line.

And sometimes a scientist who is sure in his heart he knows what the truth is doctors the evidence to “show” that truth.

Fabricating or falsifying evidence is, without question, a crime against scientific knowledge-building. But does the community that is taken in by the fraudster bear a significant share of the blame for believing him?

Generally, I think, the scientific community will say, “No.” A scientist is presumed by other members of his community to be honest unless there’s good reason to think otherwise. Otherwise, each scientist would have to replicate every observation reported by every other scientist ever before granting it any credibility. There aren’t enough grant dollars or hours in the day for that to be a plausible way to build scientific knowledge.

But, the community of science is supposed to ensure that findings reported to the public are thoroughly scrutinized for errors, not presented as more certain than the evidence warrants. The public trusts scientists to do this vetting because members of the public generally don’t know how to do this vetting themselves. Among other things, this means that a scientific fraudster, once caught, doesn’t just burn his own credibility — he can end up burning the credibility of the entire scientific community that was taken in by his lies.

Given how hard it can be to distinguish made-up data from real data, maybe that’s not fair. Still, if the scientific community is asking for the public’s trust, that community needs to be accountable to the public — and to find ways to prevent violations of trust within the community, or at least to deal effectively with those violations of trust when they happen.

In The Newsroom, after the big story unravels, as the video-doctoring producer is fired, the executive producer of the news show says, “People will never trust us again.” It’s not just the video-doctoring producer that viewers won’t trust, but the production team who didn’t catch the problem before presenting the story as reliable. Where the episodes to date leave us, it’s uncertain whether the production team will be able to win back the trust of the public — and what it might take to win back that trust.

I think it’s a reasonable question for the scientific community, too. In the face of incidents where individual scientists break trust, what does it take for the larger community of scientific knowledge-builders to win the trust of the public?

_____
* I’m not sure it’s a great show, but I have a weakness for the cadence of Aaron Sorkin’s dialogue.

** In the show, the folks who try to poke holes in the story presented with all the evidence that seems to support it are called the “red team,” and one of the characters claims its function is analogous to that of red blood cells. This … doesn’t actually make much sense, biologically. I’m putting a pin in that, but you are welcome to critique or suggest improvements to this analogy in the comments.

How far does the tether of your expertise extend?

Talking about science in the public sphere is tricky, even with someone with a lot of training in a science.

On the one hand, there’s a sense that it would be a very good thing if the general level of understanding of science was significantly higher than it is at present — if you could count on the people in your neighborhood to have a basic grasp of where scientific knowledge comes from, as well as of the big pieces of scientific knowledge directly relevant to the project of getting through their world safely and successfully.

But there seem to be a good many people in our neighborhood who don’t have this relationship with science. (Here, depending on your ‘druthers, you can fill in an explanation in terms of inadequately inspiring science teachers and/or curricula, or kids too distracted by TV or adolescence or whatever to engage with those teachers and/or curricula.) This means that, if these folks aren’t going to go it alone and try to evaluate putative scientific claims they encounter themselves, they need to get help from scientific experts.

But who’s an expert?

It’s well and good to say that a journalism major who never quite finished his degree is less of an authority on matters cosmological than a NASA scientist, but what should we say about engineers or medical doctors with “concerns” about evolutionary theory? Is a social scientist who spent time as an officer on a nuclear submarine an expert on nuclear power? Is an actor or talk show host with an autistic child an expert on the aetiology of autism? How important is all that specialization research scientists do? To some extent, doesn’t all science follow the same rules, thus equipping any scientist to weigh in intelligently about it?

Rather than give you a general answer to that question, I thought it best to lay out the competence I personally am comfortable claiming, in my capacity as a trained scientist.

As someone trained in a science, I am qualified:

  1. to say an awful lot about the research projects I have completed (although perhaps a bit less about them when they were still underway).
  2. to say something about the more or less settled knowledge, and about the live debates, in my research area (assuming, of course, that I have kept up with the literature and professional meetings where discussions of research in this area take place).
  3. to say something about the more or less settled (as opposed to “frontier”) knowledge for my field more generally (again, assuming I have kept up with the literature and the meetings).
  4. perhaps, to weigh in on frontier knowledge in research areas other than my own, if I have been very diligent about keeping up with the literature and the meetings and about communicating with colleagues working in these areas.
  5. to evaluate scientific arguments in areas of science other than my own for logical structure and persuasiveness (though I must be careful to acknowledge that there may be premises of these arguments — pieces of theory or factual claims from observations or experiments that I’m not familiar with — that I’m not qualified to evaluate).
  6. to recognize, and be wary of, logical fallacies and other less obvious pseudo-scientific moves (e.g., I should call shenanigans on claims that weaknesses in theory T1 necessarily count as support for alternative theory T2).
  7. to recognize that experts in fields of science other than my own generally know what the heck they’re talking about.
  8. to trust scientists in fields other than my own to rein in scientists in those fields who don’t know what they are talking about.
  9. to face up to the reality that, as much as I may know about the little piece of the universe I’ve been studying, I don’t know everything (which is part of why it takes a really big community to do science).

This list of my qualifications is an expression of my comfort level more than anything else. I would argue that it’s not elitist — good training and hard work can make a scientist out of almost anyone. But, it recognizes that with as much as there is to know, you can’t be an expert on everything. Knowing how far the tether of your expertise extends — and owning up to that when people look to you as an expert — is part of being a responsible scientist.

_______
An ancestor version of this post was published on my other blog.

Ethical and practical issues for uBiome to keep working on.

Earlier this week, the Scientific American Guest Blog hosted a post by Jessica Richman and Zachary Apte, two members of the team at uBiome, a crowdfunded citizen science start-up. Back in February, as uBiome was in the middle of its crowdfunding drive, a number of bloggers (including me) voiced worries that some of the ethical issues of the uBiome project might require more serious attention. Partly in response to those critiques, Richman and Apte's post talks about their perspectives on Institutional Review Boards (IRBs) and how in their present configuration they seem suboptimal for commercial citizen science initiatives.

Their post provides food for thought, but there are some broader issues about which I think the uBiome team should think a little harder.

Ethics takes more than simply meeting legal requirements.

Consulting with lawyers to ensure that your project isn’t breaking any laws is a good idea, but it’s not enough. Meeting legal requirements is not sufficient to meet your ethical obligations (which are well and truly obligations even when they lack the force of law).

Now, it's the case that there is often something like the force of law deployed to encourage researchers (among others) not to ignore their ethical obligations. If you accept federal research funds, for example, you are entering into a contract one of whose conditions is working within federal guidelines for ethical use of animal or human subjects. If you don't want the government to enforce this agreement, you can certainly opt out of taking the federal funds.

However, opting out of federal funding does not remove your ethical duties to animals or human subjects. It may remove the government’s involvement in making you live up to your ethical obligations, but the ethical obligations are still there.

This is a tremendously important point — especially in light of a long history of human subjects research in which researchers have often not even recognized their ethical obligations to human subjects, let alone had a good plan for living up to them.

Here, it is important to seek good ethical advice (as distinct from legal advice), from an array of ethicists, including some who see potential problems with your plans. If none of the ethicists you consult see anything to worry about, you probably need to ask a few more! Take the potential problems they identify seriously. Think through ways to manage the project to avoid those problems. Figure out a way to make things right if a worst case scenario should play out.

In a lot of ways, problems that uBiome encountered with the reception of its plan seemed to flow from a lack of good — and challenging — ethical advice. There are plenty of other people and organizations doing citizen science projects that are similar enough to uBiome (from the point of view of interactions with potential subjects/participants), and many of these have experience working with IRBs. Finding them and asking for their guidance could have helped the uBiome team foresee some of the issues with which they’re dealing now, somewhat late in the game.

There are more detailed discussions of the chasm between what satisfies the law and what’s ethical at The Broken Spoke and Drugmonkey. You should, as they say, click through and read the whole thing.

Some frustrations with IRBs may be based on a misunderstanding of how they work.

An Institutional Review Board, or IRB, is a body that examines scientific protocols to determine whether they meet ethical requirements in their engagement of human subjects (including humans who provide tissue or other material to a study). The requirement for independent ethical evaluation of experimental protocols was first articulated in the World Medical Association’s Declaration of Helsinki, which states:

The research protocol must be submitted for consideration, comment, guidance and approval to a research ethics committee before the study begins. This committee must be independent of the researcher, the sponsor and any other undue influence. It must take into consideration the laws and regulations of the country or countries in which the research is to be performed as well as applicable international norms and standards but these must not be allowed to reduce or eliminate any of the protections for research subjects set forth in this Declaration. The committee must have the right to monitor ongoing studies. The researcher must provide monitoring information to the committee, especially information about any serious adverse events. No change to the protocol may be made without consideration and approval by the committee.

(Bold emphasis added.)

In their guest post, Richman and Apte assert, “IRBs are usually associated with an academic institution, and are provided free of charge to members of that institution.”

It may appear that the services of an IRB are “free” to those affiliated with the institution, but they aren’t really. Surely it costs the institution money to run the IRB — to hire a coordinator, to provide ethics training resources for IRB members and to faculty, staff, and students involved in human subjects research, to (ideally) give release time to faculty and staff on the IRB so they can actually devote the time required to consider protocols, comment upon them, provide guidance to PIs, and so forth.

Administrative costs are part of institutional overhead, and there’s a reasonable expectation that researchers whose protocols come before the IRB will take a turn serving on the IRB at some point. So IRBs most certainly aren’t free.

Now, given that the uBiome team was told they couldn’t seek approval from the IRBs at any institutions where they plausibly could claim an affiliation, and given the expense of seeking approval from a private-sector IRB, I can understand why they might have been hesitant to put money down for IRB approval up front. They started with no money for their proposed project. If the project itself ended up being a no-go due to insufficient funding, spending money on IRB approval would seem pointless.

However, it’s worth making it clear that expense is not in itself a sufficient reason to do without ethical oversight. IRB oversight costs money (even in an academic institution where those costs are invisible to PIs because they’re bundled into institutional overhead). Research in general costs money. If you can’t swing the costs (including those of proper ethical oversight), you can’t do the research. That’s how it goes.

Richman and Apte go on:

[W]e wanted to go even further, and get IRB approval once we were funded — in case we wanted to publish, and to ensure that our customers were well-informed of the risks and benefits of participation. It seemed the right thing to do.

So, we decided to wait until after crowdfunding and, if the project was successful, submit for IRB approval at that point.

Getting IRB approval at some point in the process is better than getting none at all. However, some of the worries people (including me) were expressing while uBiome was at the crowdfunding stage of the process (before IRB approval) were focused on how the lines between citizen scientist, human subject, and customer were getting blurred.

Did donors to the drive believe that, by virtue of their donations, they were guaranteed to be enrolled in the study (as sample providers)? Did they have a reasonable picture of the potential benefits of their participation? Did they have a reasonable picture of the potential risks of their participation?

These are not questions we leave to PIs. To assess them objectively, we put these questions before a neutral third party … the IRB.

If the expense of formal IRB consideration of the uBiome protocol was prohibitive during the crowdfunding stage, it surely would have gone some way to meeting ethical duties if the uBiome team had vetted the language in their crowdfunding drive with independent folks attentive to human subjects protection issues. That the ethical questions raised by their fundraising drive were so glaringly obvious to so many of us suggests that skipping this step was not a good call.


We next arrive at the issue of the for-profit IRB. Richman and Apte write:

Some might criticize the fact that we are using a private firm, one not connected with a prestigious academic institution. We beg to differ. This is the same institution that works with academic IRBs that need to coordinate multi-site studies, as well as private firms such as 23andme and pharmaceutical companies doing clinical trials. We agree that it’s kind of weird to pay for ethical review, but that is the current system, and the only option available to us.

I don’t think paying for IRB review is the ethical issue. If one were paying for IRB approval, that would be an ethical issue, and there are some well known rubber-stamp-y private IRBs out there.

Carl Elliott details some of the pitfalls of the for-profit IRB in his book White Coat, Black Hat. The most obvious of these is that, in a competition for clients, a for-profit IRB might well feel a pressure to forego asking the hard questions, to be less ethically rigorous (and more rubber-stamp-y) — else clients seeking approval would take their business to a competing IRB they saw as more likely to grant that approval with less hassle.

Market forces may provide good solutions to some problems, but it’s not clear that the problem of how to make research more ethical is one of them. Also, it’s worth noting that being a citizen science project does not in and of itself preclude review by an academic IRB – plenty of citizen science projects run by academic scientists do just that. It’s uBiome’s status as a private-sector citizen science project that led to the need to find another IRB.

That said, if folks with concerns knew which private IRB the uBiome team used (something they don’t disclose in their guest post), those folks could inspect the IRB’s track record for rigor and make a judgment from that.

Richman and Apte cite as further problems with IRBs, at least as currently constituted, lack of uniformity across committees and lack of transparency. The lack of uniformity is by design, the thought being that local control of committees should make them more responsive to local concerns (including those of potential subjects). Indeed, when research is conducted by collaborators from multiple institutions, one of the marks of good ethical design is when different local IRBs are comfortable approving the protocol. As well, at least part of the lack of transparency is aimed at human subjects protection — for example, ensuring that the privacy of human subjects is not compromised in the release of approved research protocols.

This is not to say that there is no reasonable discussion to have about striving for more IRB transparency, and more consistency between IRBs. However, such a discussion should center ethical considerations, not convenience or expediency.

Focusing on tone rather than substance makes it look like you don’t appreciate the substance of the critique.

Richman and Apte write the following of the worries bloggers raised with uBiome:

Some of the posts threw us off quite a bit as they seemed to be personal attacks rather than reasoned criticisms of our approach. …

We thought it was a bit… much, shall we say, to compare us to the Nazis (yes, that happened, read the posts) or to the Tuskegee Experiment because we funded our project without first paying thousands of dollars for IRB approval for a project that had not (and might never have) happened.

I have read all of the linked posts (here, here, here, here, here, here, here, and here) that Richman and Apte point to in leveling this complaint about tone. I don’t read them as comparing the uBiome team to Nazis or the researchers who oversaw the Tuskegee Syphilis Experiment.

I’m willing to stipulate that the tone of some of these posts was not at all cuddly. It may have made members of the uBiome team feel defensive.

However, addressing the actual ethical worries raised in these posts would have done a lot more for uBiome’s efforts to earn the public’s trust than adopting a defensive posture did.

Make no mistake, harsh language or not, the posts critical of uBiome were written by a bunch of people who know an awful lot about the ins and outs of ethical interactions with human subjects. These are also people who recognize from their professional lives that, while hard questions can feel like personal attacks, they still need to be answered. They are raising ethical concerns not to be pains, but because they think protecting human subjects matters — as does protecting the collective reputation of those who do human subjects research and/or citizen science.

Trust is easier to break than to build, which means one project’s ethical problems could be enough to sour the public on even the carefully designed projects of researchers who have taken much more care thinking through the ethical dimensions of their work. Addressing potential problems in advance seems like a better policy than hoping they’ll be no big deal.

And losing focus on the potential problems because you don’t like the way in which they were pointed out seems downright foolish.

Much of uBiome’s response to the hard questions raised about the ethics of their project has focused on tone, or on meeting examples that provide historical context for our ethical guidelines for human subject research with the protestation, “We’re not like that!” If nothing else, this suggests that the uBiome team hasn’t understood the point the examples are meant to convey, nor the patterns that they illuminate in terms of ethical pitfalls into which even non-evil scientists can fall if they’re not careful.

And it is not at all clear that the uBiome team’s tone in blog comments and on social media like Twitter has done much to help its case.

What is still lacking, amidst all their complaints about the tone of the critiques, is a clear account of how basic ethical questions (such as how uBiome will ensure that the joint roles of customer, citizen science participant, and human subject don’t lead to a compromise of autonomy or privacy) are being answered in uBiome’s research protocol.

A conversation on the substance of the critiques would be more productive here than one about who said something mean to whom.

Which brings me to my last issue:

New models of scientific funding, subject recruitment, and outreach that involve the internet are better served by teams that understand how the internet works.

Let's say you're trying to fund a project, recruit participants, and build general understanding, enthusiasm, support, and trust. Let's say that your efforts involve websites where you put out information and social media accounts where you amplify some of that information or push links to your websites or favorable media coverage.

People looking at the information you’ve put out there are going to draw conclusions based on the information you’ve made public. They may also draw speculative conclusions from the gaps — the information you haven’t made public.

You cannot, however, count on them to base their conclusions on information to which they're not privy, including what's in your heart.

There may be all sorts of good efforts happening behind the scenes to get rigorous ethical oversight off the ground. If it’s invisible to the public, there’s no reason the public should assume it’s happening.

If you want people to draw more accurate conclusions about what you’re doing, and about what potential problems might arise (and how you’re preparing to face them if they do), a good way to go is to make more information public.

Also, recognize that you’re involved in a conversation that is being conducted publicly. Among other things, this means it’s unreasonable to expect people with concerns to take them to private email in order to get further information from you. You’re the one with a project that relies on cultivating public support and trust; you need to put the relevant information out there!

(What relevant information? Certainly the information relevant to responding to concerns and critiques articulated in the above-linked blog posts would be a good place to start — which is yet another reason why it’s good to be able to get past tone and understand substance.)

In a world where people email privately to get the information that might dispel their worries, those people are the only ones whose worries are addressed. The rest of the public that’s watching (but not necessarily tweeting, blogging, or commenting) doesn’t get that information (especially if you ask the people you email not to share the content of that email publicly). You may have fully lost their trust with nary a sign in your inboxes.

Maybe you wish the dynamics of the internet were different. Some days I do, too. But unless you’re going to fix the internet prior to embarking on your brave new world of crowdfunded citizen science, paying some attention to the dynamics as they are now will help you use the internet productively rather than creating misunderstandings and distrust that then require remediation.

That could clear the way to a much more interesting and productive conversation between uBiome, other researchers, and the larger public.

When we target chemophobia, are we punching down?

Over at Pharyngula, Chris Clarke challenges those in the chemical know on their use of “dihydrogen monoxide” jokes. He writes:

Doing what I do for a living, I often find myself reading things on Facebook, Twitter, or those increasingly archaic sites called “blogs” in which the writer expresses concern about industrial effluent in our air, water, consumer products or food. Sometimes the concerns are well-founded, as in the example of pipeline breaks releasing volatile organic chemicals into your backyard. Sometimes, as in the case of concern over chemtrails or toxic vaccines, the concerns are ill-informed and spurious.

And often enough, the educational system in the United States being the way it’s been since the Reagan administration, those concerns are couched in terms that would not be used by a person with a solid grounding in science. People sometimes miss the point of dose-dependency, of acute versus chronic exposure, of the difference between parts per million and parts per trillion. Sometimes their unfamiliarity with the basic facts of chemistry causes them to make patently ridiculous alarmist statements and then double down on them when corrected.

And more times than I can count, if said statements are in a public venue like a comment thread, someone will pipe up by repeating a particular increasingly stale joke. Say it’s a discussion of contaminants in tap water allegedly stemming from hydraulic fracturing for natural gas extraction. Said wit will respond with something like:

“You know what else might be coming out of your tap? DIHYDROGEN MONOXIDE!”

Two hydrogens, one oxygen … what’s coming out of your tap here is water. Hilarious! Or perhaps not.

Clarke argues that those in the chemical know whip out the dihydrogen monoxide joke to have a laugh at the expense of someone who doesn’t have enough chemical knowledge to judge whether the conditions they find alarming really ought to alarm them. How it usually goes down, however, is that the other chemically literate people in earshot laugh while the target of the joke ends up with no better chemical understanding of things.

Really, all the target of the joke learns is that the teller of the joke has knowledge and is willing to use it to make someone else look dumb.

Clarke explains:

Ignorance of science is an evil that for the most part is foisted upon the ignorant. The dihydrogen monoxide joke depends for its humor on ridiculing the victims of that state of affairs, while offering no solution (pun sort of intended) to the ignorance it mocks. It’s like the phrase “chemophobia.” It’s a clan marker for the Smarter Than You tribe.

The dihydrogen monoxide joke punches down, in other words. It mocks people for not having had access to a good education. And the fact that many of its practitioners use it in order to belittle utterly valid environmental concerns, in the style of (for instance) Penn Jillette, makes it all the worse — even if those concerns aren’t always expressed in phraseology a chemist would find beyond reproach, or with math that necessarily works out on close examination.

There’s a weird way in which punching down with the dihydrogen monoxide joke is the evil twin of the “deficit model” in science communication.

The deficit model assumes that the focus in science communication to audiences of non-scientists should be squarely on filling in gaps in their scientific knowledge, teaching people facts and theories that they didn’t already know, as if that is the main thing they must want from science. (It’s worth noting that the deficit model seems to assume a pretty unidirectional flow of information, from the science communicator to the non-scientist.)

The dihydrogen monoxide joke, used the way Clarke describes, identifies a gap in understanding and then, instead of trying to fill it, points and laughs. If the deficit model naïvely assumes that filling gaps in knowledge will make the public cool with science, this kind of deployment of the dihydrogen monoxide joke seems unlikely to provoke any warm feelings towards science or scientists from the person with a gappy understanding.

What’s more, this kind of joking misses an opportunity to engage with what the people being mocked are really worried about and why. Are they scared of chemicals per se? Of being at the mercy of others who have information about which chemicals can hurt us (and in which amounts) and/or who have more knowledge about or control over where those chemicals are in our environment? Do they not trust scientists at all, or are they primarily concerned about whether they can trust scientists in the employ of multinational corporations?

Do their concerns have more to do with the information and understanding our policymakers have with regard to chemicals in our world — particularly about whether these policymakers have enough to keep us relatively safe, or about whether they have the political will to do so?

Actually having a conversation and listening to what people are worried about could help. It might turn out that people with the relevant scientific knowledge to laugh at the dihydrogen monoxide joke and those without share a lot of the same concerns.

Andrew Bissette notes that there are instances where the dihydrogen monoxide joke isn’t punching down but punching up: cases where its target is an educated person who should know better but uses a large platform to take advantage of the ignorant. So perhaps what we need isn’t a permanent moratorium on the joke so much as more careful thought about what we hope to accomplish with it.

Let’s return to Chris Clarke’s claim that the term “chemophobia” is “a clan marker for the Smarter Than You tribe.”

Lots of chemists in the blogosphere regularly blog and tweet about chemophobia. If they took to relentlessly tagging as “chemophobe!” people who lack access to the body of knowledge and patterns of reasoning that define chemistry, I’d agree that it was the same kind of punching down as the use of the dihydrogen monoxide joke Clarke describes. To the extent that chemists are actually doing this to assert membership in the Smarter Than You tribe, I think it’s counterproductive and mean to boot, and we should cut it out.

But, knowing the folks I do who blog and tweet about chemophobia, I’m pretty sure their goal is not to maintain clear boundaries between The Smart and The Dumb. When they fire off a #chemophobia tweet, it’s almost like they’re sending up the Batsignal, rallying their chemical community to fight some kind of crime.

So what is it these chemists — the people who have access to the body of knowledge and patterns of reasoning that define chemistry — find problematic about the “chemophobia” of others? What do they hope to accomplish by pointing it out?

Part of where they’re coming from is probably grounded in good old fashioned deficit-model reasoning, but with more emphasis on helping others learn a bit of chemistry because it’s cool. There’s usually a conviction that the basics of the chemistry that expose the coolness are not beyond the grasp of adults of normal intelligence — if only we explain them accessibly enough. Ash Jogalekar suggests more concerted efforts in this direction, proposing a lobby for chemistry (not the chemical industry) that takes account of how people feel about chemistry and what they want to know. However it’s done, the impulse to expose the cool workings of a bit of the world to those who want to understand them should be offered as a kindness. Otherwise, we’re doing it wrong.

Another part of what moves the chemists I know who are concerned with chemophobia is that they don’t want people who are not at home with chemistry to get played. They don’t want them to be vulnerable to quack doctors, nor to merchants of doubt trying to undermine sound science to advance a particular economic or political end, nor to people trying to make a buck with misleading claims, nor to legitimately confused people who think they know much more than they really do.

People with chemical know-how could help address this kind of vulnerability, being partners to help sort out the reliable information from the bogus, the overblown risks from risks that ought to be taken seriously or investigated further.

But short of teaching the folks without access to the body of knowledge and patterns of reasoning that define chemistry everything they would need to know to be their own experts (which is the deficit model again), providing this kind of help requires cultivating trust. It requires taking the people to whom you’re offering help seriously, recognizing that gaps in their chemical understanding don’t make them unintelligent or of less value as human beings.

And laughing at the expense of the people who could use your help — using your superior chemical knowledge to punch down — seems unlikely to foster that trust.

When #chemophobia isn’t irrational: listening to the public’s real worries.

This week, the Grand CENtral blog features a guest post by Andrew Bissette defending the public’s anxiety about chemicals. In lots of places (including here), this anxiety is labeled “chemophobia”; Bissette spells it “chemphobia”, but he’s talking about the same thing.

Bissette argues that the response those of us with chemistry backgrounds often take to the successful marketing of “chemical free” products, namely, pointing out that the world around us is made of chemicals, fails to engage with people’s real concerns. He writes:

Look at the history of our profession – from tetraethyl lead to thalidomide to Bhopal – and maintain with a straight face that chemphobia is entirely unwarranted and irrational. Much like mistrust of the medical profession, it is unfortunate and unproductive, but it is in part our own fault. Arrogance and paternalism are still all too common across the sciences, and it’s entirely understandable that sections of the public treat us as villains.

Of course it’s silly to tar every chemical and chemist with the same brush, but from the outside we must appear rather esoteric and monolithic. Chemphobia ought to provoke humility, not eye-rolling. If the public are ignorant of chemistry, it’s our job to engage with them – not to lecture or hand down the Truth, but simply to talk and educate. …

[A] common response to chemphobia is to define “chemicals” as something like “any tangible matter”. From the lab this seems natural, and perhaps it is; in daily life, however, I think it’s at best overstatement and at worst dishonest. Drawing a distinction between substances which we encounter daily and are not harmful under those conditions – obvious things like water and air, kitchen ingredients, or common metals – and the more exotic, concentrated, or synthetic compounds we often deal with is useful. The observation that both groups are made of the same stuff is metaphysically profound but practically trivial for most people. We treat them very differently, and the use of the word “chemical” to draw this distinction is common, useful, and not entirely ignorant. …

This definition is of course a little fuzzy at the edges. Not all “chemicals” are synthetic, and plenty of commonly-encountered materials are. Regardless, I think we can very broadly use ‘chemical’ to mean the kinds of matter you find in a lab but not in a kitchen, and I think this is how most people use it.

Crucially, this distinction tends to lead to the notion of chemicals as harmful: bleach is a chemical; it has warning stickers, you keep it under the sink, and you wear gloves when using it. Water isn’t! You drink it, you bathe in it, it falls from the sky. Rightly or wrongly, chemphobia emerges from the common usage of the word ‘chemical’.

There are some places here where I’m not in complete agreement with Bissette.

My kitchen includes a bunch of chemicals that aren’t kept under the sink or handled only with gloves, including sodium bicarbonate, acetic acid, potassium bitartrate, lecithin, pectin, and ascorbic acid. We use these chemicals in cooking because of the reactions they undergo (and the alternative reactions they prevent — those ascorbic acid crystals see a lot of use in our homemade white sangria, keeping the fruit from discoloring when it comes in contact with oxygen). And I reckon it’s not just people with PhDs in chemistry who recognize that the chemical leaveners in their quickbreads and pancakes depend on some kind of chemical reaction to produce their desired effects. Notwithstanding that recognition of chemical reactivity, many of these same folks will happily mix sodium bicarbonate with water and gulp it down if that batch of biscuits isn’t sitting well in their tummies, with nary a worry that they are ingesting something that could require a call to poison control.

Which is to say, I think Bissette puts too much weight on the assumption that there is a clear “common usage” putting all chemicals on the “bad” side of the line, even if the edges of the line are fuzzy.

Indeed, it’s hard not to believe that people in countries like the U.S. are generally moving in the direction of greater comfort with the idea that important bits of their world — including their own bodies — are composed of chemicals. (Casual talk about moody teenagers being victims of their brain chemistry is just one example of this.) Aside from the most phobic of the chemophobic, people seem OK with the idea that their bodies use chemicals (say, to digest their food) and even that our pharmacopeia relies on chemicals (that can, for example, relieve our pain or reduce inflammation).

These quibbles aside, I think Bissette has identified the concern at the center of much chemophobia: The public is bombarded with products and processes that may or may not contain various kinds of chemicals about which they have no clear information. They can’t tell from their names (if those names are even disclosed on labels) what those chemicals do. They don’t know what possible harms might come from exposure to these chemicals (or what amounts it might take for exposure to be risky). They don’t know why the chemicals are in their products — what goal they achieve, and whether that goal primarily serves the consumers, the retailers, or the manufacturers. And they don’t trust the people with enough knowledge and information to answer these questions.

Maybe some of this is the public’s distrust for scientists. People imagine scientists off in their supervillain labs, making plans to conquer non-scientists, rather than recognizing that scientists walk among them (and maybe even coach their kids’ soccer teams). This kind of distrust can be addressed by scientists actually being visible as members of their communities — and listening to concerns voiced by people in those communities.

A large part of this distrust, though, is likely distrust of corporations that claim chemistry will bring us better living but then prioritize the better living of CEOs and shareholders, cutting corners on safety testing, informative labeling, and the avoidance of environmental harms in the manufacture and use of the goodies they offer. I’m not chemophobic, but I think there’s good reason for presumptive distrust of corporations that see consumers as walking wallets rather than as folks deserving information to make their own sensible choices.

Scientists need to start addressing that element of chemophobia — and to join in putting pressure on the private sector to do a better job of earning the public’s trust.

Shame versus guilt in community responses to wrongdoing.

Yesterday, on the Hastings Center Bioethics Forum, Carl Elliott pondered the question of why a petition asking the governor of Minnesota to investigate ethically problematic research at the University of Minnesota has gathered hundreds of signatures from scholars in bioethics, clinical research, medical humanities, and related disciplines — but only a handful of signatures from scholars and researchers at the University of Minnesota.

At the center of the research scandal is the death of Dan Markingson, who was a human subject in a clinical trial of psychiatric drugs. Detailed background on the case can be found here, and Judy Stone has blogged extensively about the ethical dimensions of the case.

Elliott writes:

Very few signers come from the University of Minnesota. In fact, only two people from the Center for Bioethics have signed: Leigh Turner and me. This is not because any faculty member outside the Department of Psychiatry actually defends the ethics of the study, at least as far as I can tell. What seems to bother people here is speaking out about it. Very few faculty members are willing to register their objections publicly.

Why not? Well, there are the obvious possibilities – fear, apathy, self-interest, and so on. At least one person has told me she is unwilling to sign because she doesn’t think the petition will succeed. But there may be a more interesting explanation that I’d like to explore. …

Why would faculty members remain silent about such an alarming sequence of events? One possible reason is simply because they do not feel as if the wrongdoing has anything to do with them. The University of Minnesota is a vast institution; the scandal took place in a single department; if anyone is to be blamed, it is the psychiatrists and the university administrators, not them. Simply being a faculty member at the university does not implicate them in the wrongdoing or give them any special obligation to fix it. In a phrase: no guilt, hence no responsibility.

My view is somewhat different. These events have made me deeply ashamed to be a part of the University of Minnesota, in the same way that I feel ashamed to be a Southerner when I see video clips of Strom Thurmond’s race-baiting speeches or photos of Alabama police dogs snapping at black civil rights marchers. I think that what our psychiatrists did to Dan Markingson was wrong in the deepest sense. It was exploitative, cruel, and corrupt. Almost as disgraceful are the actions university officials have taken to cover it up and protect the reputation of the university. The shame I feel comes from the fact that I have worked at the University of Minnesota for 15 years. I have even been a member of the IRB. For better or worse, my identity is bound up with the institution.

These two different reactions – shame versus guilt – differ in important ways. Shame is linked with honor; it is about losing the respect of others, and by virtue of that, losing your self-respect. And honor often involves collective identity. While we don’t usually feel guilty about the actions of other people, we often do feel ashamed if those actions reflect on our own identities. So, for example, you can feel ashamed at the actions of your parents, your fellow Lutherans, or your physician colleagues – even if you feel as if it would be unfair for anyone to blame you personally for their actions.

Shame, unlike guilt, involves the imagined gaze of other people. As Ruth Benedict writes: “Shame is a reaction to other people’s criticism. A man is shamed either by being openly ridiculed or by fantasying to himself that he has been made ridiculous. In either case it is a potent sanction. But it requires an audience or at least a man’s fantasy of an audience. Guilt does not.”

As Elliott notes, one way to avoid an audience — and thus to avoid shame — is to actively participate in, or tacitly endorse, a cover-up of the wrongdoing. I’m inclined to think, however, that taking steps to avoid shame by hiding the facts, or by allowing retaliation against people asking inconvenient questions, is itself a kind of wrongdoing — the kind of thing that incurs guilt, for which no audience is required.

As well, I think the scholars and researchers at the University of Minnesota who prefer not to take a stand on how their university responds to ethically problematic research, even if it is research in someone else’s lab, or someone else’s department, underestimate the size of the audience for their actions and for their inaction.

A hugely significant segment of this audience is their trainees. Their students and postdocs (and others in training relationships with them) are watching them, trying to draw lessons about how to be a grown-up scientist or scholar, a responsible member of a discipline, a responsible member of a university community, a responsible citizen of the world. The people they are training are looking to them to set a good example of how to respond to problems: by addressing them, learning from them, making things right, and doing better going forward, or by lying, covering up, and punishing the people who were harmed (for instance, by trying to recover costs from them), thereby sending a message to others who might dare to point out how they have been harmed.

A scientist’s training includes far fewer explicit conversations about such issues than one might hope. In the absence of those explicit conversations, most of what trainees have to go on is how the people training them actually behave. And sometimes a mentor’s silence speaks as loudly as words.