The ethics of opting out of vaccination.

At my last visit to urgent care with one of my kids, the doctor who saw us mentioned that there is currently an epidemic of pertussis (whooping cough) in California, one that presents serious danger for the very young children (among others) hanging out in the waiting area. We double-checked that both my kids are current on their pertussis vaccinations (they are). I checked that I was current on my own pertussis vaccination back in December when I got my flu shot.

Sharing a world with vulnerable little kids, it’s just the responsible thing to do.

You’re already on the internet reading about science and health, so it will probably come as no surprise to you that California’s pertussis epidemic is a result of the downturn in vaccination in recent years, nor that this downturn has been driven in large part by parents worried that childhood vaccinations might lead to their kids getting autism, or asthma, or some other chronic disease. Never mind that study after study has failed to uncover evidence of such a link; these parents are weighing the risks and benefits (at least as they understand them) of vaccinating or opting out and trying to make the best decision they can for their children.

The problem is that the other children with whom their children are sharing a world get ignored in the calculation.

Of course, parents are accountable to the kids they are raising. They have a duty to do what is best for them, as well as they can determine what that is. They probably also have a duty to put some effort into making a sensible determination of what’s best for their kids (which may involve seeking out expert advice, and evaluating who has the expertise to be offering trustworthy advice).


But parents and kids are also part of a community, and arguably they are accountable to other members of that community. I’d argue that members of a community may have an obligation to share relevant information with each other — and, to avoid spreading misinformation, not to represent themselves as experts when they are not. Moreover, when parents make choices with the potential to impact not only themselves and their kids but also other members of the community, they have a duty to do what is necessary to minimize bad impacts on others. Among other things, this might mean keeping your unvaccinated-by-choice kids isolated from kids who haven’t been vaccinated because of their age, because of compromised immune function, or because they are allergic to a vaccine ingredient. If you’re not willing to do your part for herd immunity, you need to take responsibility for staying out of the herd.

Otherwise, you are a free-rider on the sacrifices of the other members of the community, and you are breaking trust with them.

I know from experience that this claim upsets non-vaccinating parents a lot. They imagine that I am declaring them bad people, guilty of making a conscious choice to hurt others. I am not. However, I do think they are making a choice that has the potential to cause great harm to others. If I didn’t think that pointing out the potential consequences might be valuable to these non-vaccinating parents, at least in helping them understand more fully what they’re choosing, I wouldn’t bother.

So here, let’s take a careful look at my claim that vaccination refuseniks are free-riders.


First, what’s a free-rider?


In the simplest terms, a free-rider is someone who accepts a benefit without paying for it. The free-rider is able to partake of this benefit because others have assumed the costs necessary to bring it about. But if no one was willing to assume those costs (or indeed, in some cases, if there is not a critical mass of people assuming those costs), then that benefit would not be available, either.


Thus, when I claim that people who opt out of vaccination are free-riders on society, what I’m saying is that they are receiving benefits for which they haven’t paid their fair share — and that they receive these benefits only because other members of society have assumed the costs by being vaccinated.


Before we go any further, let’s acknowledge that people who choose to vaccinate and those who do not probably have very different understandings of the risks and benefits, and especially of their magnitudes and likelihoods. Ideally, we’d be starting this discussion about the ethics of opting out of vaccination with some agreement about what the likely outcomes are, what the unlikely outcomes are, what the unfortunate-but-tolerable outcomes are, and what the to-be-avoided-at-all-costs outcomes are.


That’s not likely to happen. People don’t even accept the same facts (regardless of scientific consensus), let alone the same weightings of them in decision making.


But ethical decision making is supposed to help us get along even in a world where people have different values and interests than our own. So, plausibly, we can talk about whether certain kinds of choices fit the pattern of free-riding even if we can’t come to agreement on probabilities and a hierarchy of really bad outcomes.


So, let’s say all the folks in my community are vaccinated against measles except me. Within this community (assuming I’m not wandering off to exotic and unvaccinated lands, and that people from exotic and unvaccinated lands don’t come wandering through), my chances of getting measles are extremely low. Indeed, they are as low as they are because everyone else in the community has been vaccinated against measles — none of my neighbors can serve as a host where the virus can hang out and then get transmitted to me. (By the way, the NIH has a nifty Disease Transmission Simulator that you can play around with to get a feel for how infectious diseases and populations whose members have differing levels of immunity interact.)
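If you want a feel for the dynamics without tracking down that simulator, here is a minimal back-of-the-envelope sketch in Python of the same sort of thing. It is emphatically not the NIH tool, and every number in it (population size, contacts per case, vaccine efficacy) is made up for illustration; the point is just to watch how outbreak size responds as vaccination coverage drops.

```python
import random

def simulate_outbreak(pop_size=1000, coverage=0.95, contacts_per_case=15,
                      vaccine_efficacy=0.97, seed_cases=1):
    """Toy stochastic outbreak model (illustrative numbers only).

    Each infectious person contacts a fixed number of random people; a
    contact becomes a new case only if that person is still susceptible
    (unvaccinated, or vaccinated but not protected). Returns the total
    number of people infected before the outbreak dies out.
    """
    # Who starts out immune thanks to vaccination?
    immune = [random.random() < coverage * vaccine_efficacy
              for _ in range(pop_size)]
    # Seed a case or two (we don't bother checking whether the seeds
    # happened to be immune -- this is a toy).
    infectious = set(random.sample(range(pop_size), seed_cases))
    ever_infected = set(infectious)

    while infectious:
        new_cases = set()
        for _ in infectious:
            for contact in random.choices(range(pop_size), k=contacts_per_case):
                if not immune[contact] and contact not in ever_infected:
                    new_cases.add(contact)
        ever_infected |= new_cases
        infectious = new_cases

    return len(ever_infected)

if __name__ == "__main__":
    for coverage in (0.95, 0.85, 0.70, 0.50):
        runs = [simulate_outbreak(coverage=coverage) for _ in range(20)]
        print(f"coverage {coverage:.0%}: average outbreak size "
              f"{sum(runs) / len(runs):.0f} out of 1000")
```

Nothing in this toy should be mistaken for a real epidemiological forecast, but it makes the shape of the argument vivid: near-complete coverage lets introduced cases fizzle out, and that protection erodes quickly as more people opt out.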


I get a benefit (freedom from measles) that I didn’t pay for. The other folks in my community who got the vaccine paid for it.


In fact, it usually doesn’t require that everyone else in the community be vaccinated against measles for me to be reasonably safe from it. Owing to “herd immunity,” measles is unlikely to run through the community if the people without immunity are relatively few and well interspersed with the vaccinated people. This is a good thing, since babies in the U.S. don’t get their first vaccination against measles until 12 months, and some people are unable to get vaccinated even if they’re willing to bear the cost (e.g., because they have compromised immune systems or are allergic to an ingredient of the vaccine). And, in other cases, people may get vaccinated but the vaccines might not be fully effective — if exposed, they might still get the disease. Herd immunity tends to protect these folks from the disease — at least as long as enough of the herd is vaccinated.


If too few members of the herd are vaccinated, even some of those who have borne the costs of being vaccinated (because even very good vaccines can’t deliver 100% protection to 100% of the people who get them), or who would bear those costs were they able (owing to their age or health or access to medical care), may miss out on the benefit. Too many free-riders can spoil things even for those who are paying their fair share.


A standard reply from non-vaccinating parents is that their unvaccinated kids are not free-riders on the vaccinated mass of society because they actually get diseases like chicken pox, pertussis, and measles (and are not counting on avoiding the other diseases against which people are routinely vaccinated). In other words, they argue, they didn’t pay the cost, but they didn’t get the benefit, either.


Does this argument work?


I’m not convinced that it does. First off, even though unvaccinated kids may get a number of diseases that their vaccinated neighbors do not, it is still unlikely that they will catch everything against which we routinely vaccinate. By opting out of vaccination but living in the midst of a herd that is mostly vaccinated, non-vaccinating parents significantly reduce the chances of their kids getting many diseases compared to what the chances would be if they lived in a completely unvaccinated herd. That statistical reduction in disease is a benefit, and the people who got vaccinated are the ones paying for it.


Now, one might reply that unvaccinated kids are actually incurring harm from their vaccinated neighbors, for example if they contract measles from a recently vaccinated kid shedding the live virus from the vaccine. However, the measles virus in the MMR vaccine is an attenuated virus — which is to say, it’s quite likely that unvaccinated kids contracting measles from vaccinated kids will have a milder bout of measles than they might have if they had been exposed to a full-strength measles virus out in the wild. A milder case of measles is a benefit, at least when the alternative is a severe case of measles. Again, it’s a benefit that is available because other people bore the cost of being vaccinated.


Indeed, even if they were to catch every single disease against which we vaccinate, unvaccinated kids would still reap further benefits by living in a society with a high vaccination rate. The fact that most members of society are vaccinated means that there is much less chance that epidemic diseases will shut down schools, industries, or government offices, much more chance that hospitals and medical offices will not be completely overwhelmed when outbreaks happen, much more chance that economic productivity will not be crippled and that people will be able to work and pay the taxes that support all manner of public services we take for granted.

The people who vaccinate are assuming the costs that bring us a largely epidemic-free way of life. Those who opt out of vaccinating are taking that benefit for free.


I understand that the decision not to vaccinate is often driven by concerns about what costs those who receive the vaccines might bear, and whether those costs might be worse than the benefits secured by vaccination. Set aside for the moment the issue of whether these concerns are well grounded in fact. Instead, let’s look at the parallel we might draw:
If I vaccinate my kids, no matter what your views about the etiology of autism and asthma, you are not going to claim that my kids getting their shots raises your kids’ odds of getting autism or asthma. But if you don’t vaccinate your kids, even if I vaccinate mine, your decision does raise my kids’ chance of catching preventable infectious diseases. My decision to vaccinate doesn’t hurt you (and probably helps you in the ways discussed above). Your decision not to vaccinate could well hurt me.


The asymmetry of these choices is pretty unavoidable.


Here, it’s possible that a non-vaccinating parent might reply by saying that it ought to be possible for her to prioritize protecting her kids from whatever harms vaccination might bring to them without being accused of violating a social contract.


The herd immunity thing works for us because of an implicit social contract of sorts: those who are medically able to be vaccinated get vaccinated. Obviously, this is a social contract that views the potential harms of the diseases as more significant than the potential harms of vaccination. I would argue that under such a social contract, we as a society have an obligation to take care of those who end up paying a higher cost to achieve the shared benefit.


But if a significant number of people disagree, and think the potential harms of vaccination outweigh the potential harms of the diseases, shouldn’t they be able to opt out of this social contract?


The only way to do this without being a free-rider is to opt out of the herd altogether — or to ensure that your actions do not bring additional costs to the folks who are abiding by the social contract. If you’re planning on getting those diseases naturally, this would mean taking responsibility for keeping the germs contained and away from the herd (which, after all, contains members who are vulnerable owing to age, medical reasons they could not be vaccinated, or the chance of less than complete immunity from the vaccines). No work, no school, no supermarkets, no playgrounds, no municipal swimming pools, no doctor’s office waiting rooms, nothing while you might be able to transmit the germs. For as long as you’re contagious, you need to isolate yourself from the members of society whose default assumption is vaccination. Otherwise, you endanger members of the herd who bore the costs of achieving herd immunity while reaping benefits (of generally disease-free work, school, supermarkets, playgrounds, municipal swimming pools, doctor’s office waiting rooms, and so forth, for which you opted out of paying your fair share).


Since you’ll generally be able to transmit these diseases before the first symptoms appear — even before you know for sure that you’re infected — you will not be able to take regular contact with the vaccinators for granted.


And if you’re traveling to someplace where the diseases whose vaccines you’re opting out of are endemic, you have a duty not to bring the germs back with you to the herd of vaccinators. Does this mean quarantining yourself for some minimum number of days before your return? It probably does. Would this be a terrible inconvenience for you? Probably so, but the 10-month-old who catches the measles you bring back might also be terribly inconvenienced. Or worse.

Here, I don’t think I’m alone in judging the harm of a vaccine refusenik giving an infant pertussis as worse than the harm in making a vaccine refusenik feel bad about violating a social contract.


An alternative, one which would admittedly require some serious logistical work, might be to join a geographically isolated herd of other people opting out of vaccination, and to commit to staying isolated from the vaccinated herd. Indeed, if the unvaccinated herd showed a lower incidence of asthma and autism after a few generations, perhaps the choices of the members of the non-vaccinating herd would be vindicated.


In the meantime, however, opting out of vaccines but sharing a society with those who get vaccinated is taking advantage of benefits that others have paid for and even threatening those benefits. Like it or not, that makes you a free-rider.
* * * * *
An earlier version of this essay originally appeared on my other blog.

Book review: Coming of Age on Zoloft.

One of the interesting and inescapable features of our knowledge-building efforts is just how hard it can be to nail down objective facts. It is especially challenging to tell an objective story when the object of study is us. It’s true that we have privileged information of a particular sort (our own experience of what it is like to be us), but we simultaneously have the impediment of never being able fully to shed that experience. As well, our immediate experience is necessarily particular — none of us knows what it is like to be human in general, just what it is like to be the particular human each of us happens to be. Indeed, if you take Heraclitus seriously (he of the impossibility of stepping in the same river twice), you might not even know what it is like to be you so much as what it is like to be you so far.

All of this complicates the stories we might try to tell about how our minds are connected to our brains, what it means for those brains to be well, and what it is for us to be ourselves or not-ourselves, especially during stretches in our lives when the task that demands our attention might be figuring out who the hell we are in the first place.

Katherine Sharpe’s new book Coming of Age on Zoloft: how antidepressants cheered us up, let us down, and changed who we are, leads us into this territory while avoiding the excesses of either ponderous philosophical treatise or catchy but overly reductive cartoon neuroscience. Rather, Sharpe draws on dozens of interviews with people prescribed selective serotonin reuptake inhibitors (SSRIs) for significant stretches from adolescence through early adulthood, and on her own experiences with antidepressants, to see how depression and antidepressants feature in the stories people tell about themselves. A major thread throughout the book is the question of how our pharmaceutical approach to mental health impacts the lives of diagnosed individuals (for better or worse), but also how it impacts our broader societal attitudes toward depression and toward the project of growing up. Sharpe writes:

When I first began to use Zoloft, my inability to pick apart my “real” thoughts and emotions from those imparted by the drug made me feel bereft. The trouble seemed to have everything to do with being young. I was conscious of needing to figure out my own interests and point myself in a direction in the world, and the fact of being on medication seemed frighteningly to compound the possibilities for error. How could I ever find my way in life if I didn’t even know which feelings were mine? (xvii)

Interleaved between personal accounts, Sharpe describes some of the larger forces whose confluence helps explain the growing ubiquity of SSRIs. One of these is the concerted effort during the revisions that updated the DSM-II to the DSM-III to abandon Freud-inflected frameworks for mental disorders which saw the causal origins of depression in relationships and replace them with checklists of symptoms (to be assessed in isolation from additional facts about what might be happening in the patient’s life) which might or might not be connected to hunches about causal origins of depression based on what scientists think they know about the actions on various neurotransmitters of drugs that seem to treat the symptoms on the checklist. Suddenly being depressed was an official diagnosis based on having particular symptoms that put you in that category — and in the bargain it was no longer approached as a possibly appropriate response to external circumstances. Sharpe also discusses the rise of direct-to-consumer advertising for drugs, which told us how to understand our feelings as symptoms and encouraged us to “talk to your doctor” about getting help from them, as well as the influence of managed care — and of funding priorities within the arena of psychiatric research — in making treatment with a pill the preferred treatment over time-consuming and “unpatentable talk-treatments.” (184)

Sharpe discusses interviewees’, and her own, experiences with talk therapy, and their experiences of trying to get off SSRIs (with varying degrees of medical supervision or premeditation) to find out whether one’s depression is an unrelenting chronic illness the having of which is a permanent fact about oneself, like having Type I diabetes, or whether it might be a transient state, something with which one needs help for a while before going back to normal. Or, if not normal, at least functional enough.

The exploration in Coming of Age on Zoloft is beautifully attentive to the ways that “functional enough” depends on a person’s interaction with environment — with family and friends, with demands of school or work or unstructured days and weeks stretching before you — and on a person’s internal dialogue with oneself — about who you are, how you feel, what you feel driven to do, what feels too overwhelming to face. Sharpe offers an especially compelling glimpse at how the forces from the world and the voices from one’s head sometimes collide, producing what professionals on college campuses describe as a significant deterioration of the baseline of mental health for their incoming students:

One college president lamented that the “moments of woolgathering, dreaming, improvisation” that were seen as part and parcel of a liberal arts education a generation ago had become a hard sell for today’s brand of highly driven students. Experts agreed that undergraduates were in a bigger hurry than ever before, expected by teachers, parents, and themselves to produce more work, of higher quality, in the same finite amount of time. (253)

Such high expectations — and the broader message that productivity is a duty — set the bar high enough that failure may become an alarmingly likely outcome. (Indeed, Sharpe quotes a Manhattan psychiatrist who raises the possibility that some college students and recent graduates “are turning to pharmaceuticals to make something possible that’s not healthy or normal.” (269)) These elevated expectations seem also to be of a piece with the broader societal mindset that makes it easier to get health coverage for a medication-check appointment than for talk-therapy. Just do the cheapest, fastest thing that lets you function well enough to get back to work. Since knowing what you want or who you are is not of primary value, exploring, reflecting, or simply being is a waste of time.

Here, of course, what kind of psychological state is functional or dysfunctional surely has something to do with what our society values, with what it demands of us. To the extent that our society is made up of individual people, those values, those demands, may be inextricably linked with whether people generally have the time, the space, the encouragement, the freedom to find or choose their own values, to be the authors (to at least some degree) of their own lives.

Finding meaning — creating meaning — is, at least experientially, connected to so much more than the release or reuptake of chemicals in our brains. Yet, as Sharpe describes, our efforts to create meaning get tangled in questions about the influence of those chemicals, especially when SSRIs are part of the story.

I no longer simply grapple with who I can become and what kind of effort it will require. Now I also grapple with the question of whether I am losing something important — cheating somehow — if I use a psychopharmaceutical to reduce the amount of effort required, or to increase my stamina to keep trying … or to lower my standards enough that being where I am (rather than trying to be better along some dimension or another) is OK with me.

And, getting satisfying answers to these questions, or even strategies for approaching them, is made harder when it seems like our society is not terribly tolerant of the woolgatherers, the grumpy, the introverted, the sad. Our right to pursue happiness (where failure is an option) has been transformed to a duty to be happy. Meanwhile, the stigma of mental illness and of needing medication to treat it dances hand in hand with the stigma attached to not conforming perfectly to societal expectations and definitions of “normal”.

In the end, what can it mean to feel “normal” when I can never get first-hand knowledge of how it feels to be anyone else? Is the “normal” I’m reaching for some state from my past, or some future state I haven’t yet experienced? Will I know it when I get there? And can I reliably evaluate my own moods, personality, or plans with the organ whose functioning is in question?

With engaging interviews and sometimes achingly beautiful self-reflection, Coming of Age on Zoloft leads us through the terrain of these questions, illuminates the ways our pharmaceutical approach to depression makes them more fraught, and ultimately suggests the possibility that grappling with them may always have been important for our human flourishing, even without SSRIs in our systems.

Reading “White Coat, Black Hat” and discovering that ethicists might be black hats.

During one of my trips this spring, I had the opportunity to read Carl Elliott’s book White Coat, Black Hat: Adventures on the Dark Side of Medicine. It is not always the case that reading I do for my job also works as riveting reading for air travel, but this book holds its own against any of the appealing options at the airport bookstore. (I actually pounded through the entire thing before cracking open the other book I had with me, The Girl Who Kicked the Hornet’s Nest, in case you were wondering.)

Elliott takes up a number of topics of importance in our current understanding of biomedical research and how to do it ethically. He considers the role of human subjects for hire, of ghostwriters in the production of medical papers, of physicians who act as consultants and spokespeople for pharmaceutical companies, and of salespeople for the pharmaceutical companies who interact with scientists and physicians. There are lots of important issues here, engagingly presented and followed to some provocative conclusions. But the chapter of the book that gave me the most to think about, perhaps not surprisingly, is the chapter called “The Ethicists”.

You might think, since Elliott is writing a book that points out lots of ways that biomedical research could be more ethical, that he would present a picture where ethicists rush in and solve the problems created by unwitting research scientists, well-meaning physicians, and profit-driven pharmaceutical companies. However, Elliott presents instead reasons to worry that professional ethicists will contribute to the ethical tangles of the biomedical world rather than sorting them out. Indeed, Elliott identifies what seem to be special vulnerabilities in the psyche of the professional ethicist. For example, he writes, “There is no better way to enlist bioethicists in the cause of consumer capitalism than to convince them they are working for social justice.” (139-140) Who, after all, could be against social justice? Yet, when efforts on behalf of social justice take the form of debates on television news programs about fair access to new pharmaceuticals, the big result seems to be free advertising for the companies making those pharmaceuticals. Should bioethicists be accountable for these unforeseen results? This chapter suggests that careful bioethicists ought to foresee them, and to take responsibility.

There is an irony in professionals who see part of their job as pointing out conflicts of interest to others that they may be placing themselves right in the path of equally overwhelming conflicts of interest. Some of these have to do with the practical problem of how to fund their professional work. Universities these days are struggling with reduced budgets, which means they are encouraging their faculty to be more entrepreneurial — including by cultivating relationships that might lead to donations from the private sector. To the extent that bioethics is seen as relevant to pharmaceutical development, pharmaceutical companies, which have deeper pockets than do universities, are seen as attractive targets for fundraising.

As Elliott notes, bioethicists have seen a great deal of success in this endeavor. He writes,

For the last three decades bioethics has been vigorously generating new centers, new commissions, new journals, and new graduate programs, not to mention a highly politicized role in American public life. In the same way that sociologists saw their fortunes climb during the 1960s as the public eye turned towards social issues like poverty, crime, and education, bioethics started to ascend when medical care and scientific research began generating social questions of their own. As the field grows more prominent, bioethicists are considering a funding model familiar to the realm of business ethics, one that embraces partnership and collaboration with corporate sponsors as long as outright conflict of interest can be managed. …

Corporate funding presents a public relations challenge, of course. It looks unseemly for an ethicist to share in the profits of arms dealers, industrial polluters, or multinationals that exploit the developing world. Credibility is also a concern. Bioethicists teach about pharmaceutical company issues in university classrooms, write about those issues in books and articles, and comment on them in the press. Many bioethicists evaluate industry policies and practices for professional boards, government bodies, and research ethics committees. To critics, this raises legitimate questions about the field of bioethics itself. Where does the authority of ethicists come from, and why are corporations so willing to fund them? (140-141)

That comparison of bioethics to business, by the way, is the kind of thing that gets my attention; one of the spaces frequently assigned for “Business and Professional Ethics” courses at my university is the Arthur Anderson Conference Room. Perhaps this is a permanent teachable moment, but I can’t help worrying that the real lesson has to do with the vulnerability of the idealistic academic partner in the academic-corporate partnership.

Where does the authority of ethicists come from? I have scrawled in the margin something about appropriate academic credentials and good arguments. But connect this first question to Elliott’s second question: why are corporations so willing to fund them? Here, we need to consider the possibility that their credibility and professional status are, in a pragmatic sense, directly linked to corporations paying bioethicists for their labors. What, exactly, are those corporations paying for?

Let’s put that last question aside for a moment.

Arguably, the ethicist has some skills and training that render her a potentially useful partner for people trying to work out how to be ethical in the world. One hopes what she says would be informed by some amount of ethical education, serious scholarship, and decision-making strategies grounded in a real academic discipline.

Elliott notes that “[s]ome scholars have recoiled, emphatically rejecting the notion that their voices should count more than others’ on ethical affairs.” (142) Here, I agree if the claim is, in essence, that the interests of the bioethicists are no more important than others’. Surely the perspectives of others who are not ethicists matter, but one might reasonably expect that ethicists can add value, drawing on their experience in taking those interests, and the interest of other stakeholders, into account to make reasonable ethical decisions.

Maybe, though, those of us who do ethics for a living just tell ourselves we are engaged in a more or less objective decision-making process. Maybe the job we are doing is less like accounting and more like interpreting pictures in inkblots. As Elliott writes,

But ethical analysis does not really resemble a financial audit. If a company is cooking its books and the accountant closes his eyes to this fact in his audit, the accountant’s wrongdoing can be reliably detected and verified by outside monitors. It is not so easy with an ethics consultant. Ethicists have widely divergent views. They come from different religious standpoints, use different theoretical frameworks, and profess different political philosophies. They are also free to change their minds at any point. How do you tell the difference between an ethics consultant who has changed her mind for legitimate reasons and one who has changed her mind for money? (144)

This impression of the fundamental squishiness of the ethicist’s stock in trade seems to be reinforced in a quote Elliott takes from biologist-entrepreneur Michael West: “In the field of ethics, there are no ground rules, so it’s just one ethicist’s opinion versus another ethicist’s opinion. You’re not getting whether someone is right or wrong, because it all depends on who you pick.” (144-145)

Here, it will probably not surprise you to learn that I think these claims are only true when the ethicists are doing it wrong.

What, then, would be involved in doing it right? To start with, what one should ask from an ethicist should be more than just an opinion. One should also ask for an argument to support that opinion, an argument that makes reference to important details like interested parties, potential consequences of the various options for action on the table, the obligations of the party making the decision to the stakeholders, and so forth — not to mention consideration of possible objections to this argument. It is fair, moreover, to ask the ethicist whether the recommended plan of action is compatible with more than one ethical theory — or, for example, whether it only works in a world shared solely with other Kantians.

This would not make auditing the ethical books as easy as auditing the financial statements, but I think it would demonstrate something like rigor and lend itself to meaningful inspection by others. Along the same lines, I think it would be completely reasonable, in the case that an ethicist has gone on record as changing her mind, to ask for the argument that brought her from one position to the other. It would also be fair to ask, what argument or evidence might bring you back again?

Of course, all of this assumes an ethicist arguing in good faith. It’s not clear that what I’ve described as crucial features of sound ethical reasoning couldn’t be mimicked by someone who wanted to appear to be a good ethicist without going to the trouble of actually being one.

And if there’s someone offering you money — maybe a lot of money — for something that looks like good ethical reasoning, is there a chance you could turn from an ethicist arguing in good faith to one who just looks like she is, perhaps without even being aware of it herself?

Elliott pushes us to examine the dangers that may lurk when private-sector interests are willing to put up money for your ethical insight. Have they made a point of asking for your take primarily because your paper-trail of prior ethical argumentation lines up really well with what they would like an ethicist to say to give them cover to do what they already want to do — not because it’s ethical, necessarily, but because it’s profitable or otherwise convenient? You may think your ethical stances are stable because they are well-reasoned (or maybe even right). But how can you be sure that the stability of your stance is not influenced by the size of your consultation paycheck? How can you tell that you have actually been solicited for an honest ethical assessment — one that, potentially, could be at odds with what the corporation soliciting it wants to hear? If you tell that corporation that a certain course of action would be unethical, do you have any power to prevent them from pursuing that course of action? Do you have an incentive to tell the corporation what it wants to hear, not just to pick up your consulting fee, but to keep a seat at the table where you might hope to have a chance of nudging its behavior in a more ethical direction, even if only incrementally?

None of these are easy questions to answer objectively if you’re the ethicist in the scenario.

Indeed, even if money were not part of the equation, the very fact that people at the corporations — or researchers, or physicians, or whoever it is seeking the ethicists’ expertise — are reaching out to ethicists and identifying them as experts with something worthwhile to contribute might itself make it harder for the ethicists to deliver what they think they should. As Elliott argues, the personal relationships may end up creating conflicts of interest that are at least as hard to manage as those that occur when money changes hands. These people asking for our ethical input seem like good folks, motivated at least in part by goals (like helping people with disease) that are noble. We want them to succeed. And we kind of dig that they seem interested in what we have to say. Because we end up liking them as people, we may find it hard to tell them things they don’t want to hear.

And ultimately, Elliott is arguing, barriers to delivering news that people don’t want to hear — whether those barriers come from financial dependence, the professional prestige that comes when your talents are in demand, or developing personal relationships with the people you’re advising — are barriers to being a credible ethicist. Bioethics becomes “the public relations division of modern medicine” (151) rather than carrying on the tradition of gadflies like Socrates. If they were being Socratic gadflies and telling truth to power, Elliott suggests, we would surely be able to find at least a few examples of bioethicists who were punished for their candor. Instead, we see the ties between ethicists and the entities they advise growing closer.

This strikes close to home for me, as I aspire to do work in ethics that can have real impacts on the practice of scientific knowledge-building, the training of new scientists, and the interaction of scientists with the rest of the world. On the one hand, doing that work well seems to require understanding the details of scientific activity and the concerns of scientists and scientific trainees. But, if I “go native” in the tribe of science, Elliott seems to be saying that I could end up dropping the ball on making the kind of contribution a proper ethicist should:

Bioethicists have gained recognition largely by carving out roles as trusted advisers. But embracing the role of trusted adviser means forgoing other potential roles, such as that of the critic. It means giving up on pressuring institutions from the outside, in the manner of investigative reporters. As bioethicists seek to become trusted advisers, rather than gadflies or watchdogs, it will not be surprising if they slowly come to resemble the people they are trusted to advise. And when that happens, moral compromise will be unnecessary, because there will be little left to compromise. (170)

This is strong stuff — the kind of stuff which, if taken seriously, I hope can keep me on track to offer honest advice even when it’s not what the people or institutions to whom I’m offering it want to hear. Heeding the warnings of a gadfly like Carl Elliott might just help an ethicist do what she has to do to be able to trust herself.

Health care provider and patient/client: situations in which fulfilling your ethical duties might not be a no-brainer.

Thanks in no small part to the invitation of the fantastic Doctor Zen, I was honored this past week to be a participant in the PACE 3rd Annual Biomedical Ethics Conference. The conference brought together an eclectic mix of people who care about bioethics: nurses, counselors, physicians, physicians’ assistants, lawyers, philosophers, scientists, students, professors, and people practicing their professions out “in the world”.*

As good conferences do, this one left me with a head full of issues with which I’m still grappling. So, as bloggers sometimes do, I’m going to put one of those issues out there and invite you to grapple with it, too.

A question that kept coming up was what exactly it means for a health care provider (broadly construed) to fulfill hir duties to hir patient/client.

Of course, the folks in the ballroom could rattle off the standard ethical principles that should guide their decision-making — respect for persons (which includes respect for the autonomy of the patient-client), beneficence, non-maleficence, justice — but sometimes these principles seem to pull in different directions, which means just what one should do when the rubber hits the road is not always obvious.

For example:

1. In some states, health care professionals are “mandatory reporters” of domestic violence — that is, if they encounter a patient who they have reason to believe is a victim of domestic violence, they are obligated by law to report it to the authorities. However, it sometimes happens that getting the case into the legal system triggers retaliatory violence against the victim by the abuser. Moreover, in the aftermath of reporting, the victim may be less willing (or able) to seek further medical care. Is the best way to do one’s duty to one’s patient always to report? Or are there instances where one better fulfills those duties by not reporting (and if so, what are the foreseeable costs of such a course of action — to that patient, to the health care provider, to other patients, to the larger community)?

2. A patient with a terminal illness may feel that the best way for hir physician to respect hir autonomy would be to assist hir in ending hir life. However, physician-assisted suicide is usually interpreted as clearly counter to the requirements of non-maleficence (“do no harm”) and beneficence. In most of the U.S., it’s also illegal. Can a physician refuse to provide the patient in this situation with the sought-after assistance without being paternalistic?** Is it fair game for the physician’s discussion with the patient here to touch on personal values that it might not be fair for the patient to ask the physician to compromise? Are there foreseeable consequences of what, to the patient, looks like a personal choice that might impact the physician’s relationship with other patients, with hir professional community, or with the larger community?

3. In Texas, the law currently requires that patients seeking abortions must submit to transvaginal ultrasounds first. In other words, the law requires health care providers to subject patients to a medically unnecessary invasive procedure. The alternative is for the patient to carry to term an unwanted pregnancy. Both choices, arguably, subject the patient to violence.

Does the health care provider who is trying to uphold hir obligations to hir patient have an obligation to break the law? If it’s a bad law — here, one whose requirements make it impossible for a health care provider to fulfill hir duties to patients — ought health care providers to put their own skin in the game to change it?

Here’s what I’ve written before about how ethically to challenge bad rules:

If you’re part of a professional community, you’re supposed to abide by the rules set by the commissions and institutions governing your professional community.

If you don’t think they’re good rules, of course, one of the things you should do as a member of that professional community is make a case for changing them. However, in the meantime making yourself an exception to the rules that govern the other members of your professional community is pretty much the textbook definition of an ethical violation.

The gist here is that sneakily violating a bad rule (perhaps even while paying lip service to following it) rather than standing up and explicitly arguing against the bad rule — not just when it’s applied to you but when it’s applied to anyone else in your professional community — is wrong. It does nothing to overturn the bad rule, it involves you in deception, and it prioritizes your interests over everyone else’s.

The particular situation here is tricky, though, given that as I understand it the Texas law is a rule imposed on medical professionals by lawmakers, not a rule that the community of medical professionals created and implemented themselves the better to help them fulfill their duties to their patients. Indeed, it seems pretty clear that the lawmakers were willing to sacrifice duties that are absolutely central in the physician-patient relationship when they imposed this law.

Moreover, I think the way forward is complicated by concerns about how to ensure that patients get care that is helpful, not harmful, to them. If Texas physicians who opposed the mandatory transvaginal ultrasound requirement were to fill the jails to protest the law, who does that leave to deliver ethical care to people on the outside seeking abortions? Is this a place where the professional community as a whole ought to be pushing back against the law rather than leaving it to individual members of that community to push back?

* * * * *

If these examples have common threads, one of them is that what the law requires (or what the law allows) seems not to line up neatly with what our ethics require. Perhaps this speaks to the difficulty of getting laws to capture the tricky balancing act that acting ethically towards one’s patients/clients requires of health care professionals. Or, maybe it speaks to law makers not always being focused on creating an environment in which health care providers can deliver on their ethical duties to their patients/clients (perhaps even disagreeing with professional communities about just what those ethical duties are).

What does this mismatch mean for what patients/clients can legitimately expect from their health care providers? Or for what health care providers can realistically deliver to their patients/clients?

And, if you were a health care provider in one of these situations, what would you do?
_____
*Arguably, however, universities and their denizens are also in the world. We share the same fabric of space-time as the rest of y’all.

**Note that paternalism is likely warranted in a number of circumstances. However, when we’re talking about a patient of sound mind, maybe paternalism shouldn’t be the physician’s go-to stance.