Reading “White Coat, Black Hat” and discovering that ethicists might be black hats.

During one of my trips this spring, I had the opportunity to read Carl Elliott’s book White Coat, Black Hat: Adventures on the Dark Side of Medicine. It is not always the case that reading I do for my job also works as riveting reading for air travel, but this book holds its own against any of the appealing options at the airport bookstore. (I actually pounded through the entire thing before cracking open the other book I had with me, The Girl Who Kicked the Hornet’s Nest, in case you were wondering.)

Elliott takes up a number of topics of importance in our current understanding of biomedical research and how to do it ethically. He considers the role of human subjects for hire, of ghostwriters in the production of medical papers, of physicians who act as consultants and spokespeople for pharmaceutical companies, and of salespeople for the pharmaceutical companies who interact with scientists and physicians. There are lots of important issues here, engagingly presented and followed to some provocative conclusions. But the chapter of the book that gave me the most to think about, perhaps not surprisingly, is the chapter called “The Ethicists”.

You might think, since Elliott is writing a book that points out lots of ways that biomedical research could be more ethical, that he would present a picture where ethicists rush in and solve the problems created by unwitting research scientists, well-meaning physicians, and profit-driven pharmaceutical companies. However, Elliott presents instead reasons to worry that professional ethicists will contribute to the ethical tangles of the biomedical world rather than sorting them out. Indeed, Elliott identifies what seem to be special vulnerabilities in the psyche of the professional ethicist. For example, he writes, “There is no better way to enlist bioethicists in the cause of consumer capitalism than to convince them they are working for social justice.” (139-140) Who, after all, could be against social justice? Yet, when efforts on behalf of social justice take the form of debates on television news programs about fair access to new pharmaceuticals, the big result seems to be free advertising for the companies making those pharmaceuticals. Should bioethicists be accountable for these unforeseen results? This chapter suggests that careful bioethicists ought to foresee them, and to take responsibility.

There is an irony in the fact that professionals who see part of their job as pointing out conflicts of interest to others may be placing themselves right in the path of equally overwhelming conflicts of interest. Some of these have to do with the practical problem of how to fund their professional work. Universities these days are struggling with reduced budgets, which means they are encouraging their faculty to be more entrepreneurial — including by cultivating relationships that might lead to donations from the private sector. To the extent that bioethics is seen as relevant to pharmaceutical development, pharmaceutical companies, which have deeper pockets than do universities, are seen as attractive targets for fundraising.

As Elliott notes, bioethicists have seen a great deal of success in this endeavor. He writes,

For the last three decades bioethics has been vigorously generating new centers, new commissions, new journals, and new graduate programs, not to mention a highly politicized role in American public life. In the same way that sociologists saw their fortunes climb during the 1960s as the public eye turned towards social issues like poverty, crime, and education, bioethics started to ascend when medical care and scientific research began generating social questions of their own. As the field grows more prominent, bioethicists are considering a funding model familiar to the realm of business ethics, one that embraces partnership and collaboration with corporate sponsors as long as outright conflict of interest can be managed. …

Corporate funding presents a public relations challenge, of course. It looks unseemly for an ethicist to share in the profits of arms dealers, industrial polluters, or multinationals that exploit the developing world. Credibility is also a concern. Bioethicists teach about pharmaceutical company issues in university classrooms, write about those issues in books and articles, and comment on them in the press. Many bioethicists evaluate industry policies and practices for professional boards, government bodies, and research ethics committees. To critics, this raises legitimate questions about the field of bioethics itself. Where does the authority of ethicists come from, and why are corporations so willing to fund them? (140-141)

That comparison of bioethics to business, by the way, is the kind of thing that gets my attention; one of the spaces frequently assigned for “Business and Professional Ethics” courses at my university is the Arthur Andersen Conference Room. Perhaps this is a permanent teachable moment, but I can’t help worrying that really the lesson has to do with the vulnerability of the idealistic academic partner in the academic-corporate partnership.

Where does the authority of the ethicist come from? I have scrawled in the margin something about appropriate academic credentials and good arguments. But connect this first question to Elliott’s second question: why are corporations so willing to fund them? Here, we need to consider the possibility that ethicists’ credibility and professional status are, in a pragmatic sense, directly linked to corporations paying bioethicists for their labors. What, exactly, are those corporations paying for?

Let’s put that last question aside for a moment.

Arguably, the ethicist has some skills and training that render her a potentially useful partner for people trying to work out how to be ethical in the world. One hopes what she says would be informed by some amount of ethical education, serious scholarship, and decision-making strategies grounded in a real academic discipline.

Elliott notes that “[s]ome scholars have recoiled, emphatically rejecting the notion that their voices should count more than others’ on ethical affairs.” (142) Here, I agree if the claim is, in essence, that the interests of the bioethicists are no more important than others’. Surely the perspectives of others who are not ethicists matter, but one might reasonably expect that ethicists can add value, drawing on their experience in taking those interests, and the interests of other stakeholders, into account to make reasonable ethical decisions.

Maybe, though, those of us who do ethics for a living just tell ourselves we are engaged in a more or less objective decision-making process. Maybe the job we are doing is less like accounting and more like interpreting pictures in inkblots. As Elliott writes,

But ethical analysis does not really resemble a financial audit. If a company is cooking its books and the accountant closes his eyes to this fact in his audit, the accountant’s wrongdoing can be reliably detected and verified by outside monitors. It is not so easy with an ethics consultant. Ethicists have widely divergent views. They come from different religious standpoints, use different theoretical frameworks, and profess different political philosophies. They are also free to change their minds at any point. How do you tell the difference between an ethics consultant who has changed her mind for legitimate reasons and one who has changed her mind for money? (144)

This impression of the fundamental squishiness of the ethicist’s stock in trade seems to be reinforced in a quote Elliott takes from biologist-entrepreneur Michael West: “In the field of ethics, there are no ground rules, so it’s just one ethicist’s opinion versus another ethicist’s opinion. You’re not getting whether someone is right or wrong, because it all depends on who you pick.” (144-145)

Here, it will probably not surprise you to learn that I think these claims are only true when the ethicists are doing it wrong.

What, then, would be involved in doing it right? To start with, what one should ask from an ethicist should be more than just an opinion. One should also ask for an argument to support that opinion, an argument that makes reference to important details like interested parties, potential consequences of the various options for action on the table, the obligations of the party making the decisions to the stakeholders, and so forth — not to mention consideration of possible objections to this argument. It is fair, moreover, to ask the ethicist whether the recommended plan of action is compatible with more than one ethical theory — or, for example, if it only works in a world we share solely with other Kantians.

This would not make auditing the ethical books as easy as auditing the financial statements, but I think it would demonstrate something like rigor and lend itself to meaningful inspection by others. Along the same lines, I think it would be completely reasonable, in the case that an ethicist has gone on record as changing her mind, to ask for the argument that brought her from one position to the other. It would also be fair to ask, what argument or evidence might bring you back again?

Of course, all of this assumes an ethicist arguing in good faith. It’s not clear that what I’ve described as crucial features of sound ethical reasoning couldn’t be mimicked by someone who wanted to appear to be a good ethicist without going to the trouble of actually being one.

And if there’s someone offering you money — maybe a lot of money — for something that looks like good ethical reasoning, is there a chance you could turn from an ethicist arguing in good faith to one who just looks like she is, perhaps without even being aware of it herself?

Elliott pushes us to examine the dangers that may lurk when private-sector interests are willing to put up money for your ethical insight. Have they made a point of asking for your take primarily because your paper trail of prior ethical argumentation lines up really well with what they would like an ethicist to say to give them cover to do what they already want to do — not because it’s ethical, necessarily, but because it’s profitable or otherwise convenient? You may think your ethical stances are stable because they are well-reasoned (or maybe even right). But how can you be sure that the stability of your stance is not influenced by the size of your consultation paycheck? How can you tell that you have actually been solicited for an honest ethical assessment — one that, potentially, could be at odds with what the corporation soliciting it wants to hear? If you tell that corporation that a certain course of action would be unethical, do you have any power to prevent them from pursuing that course of action? Do you have an incentive to tell the corporation what it wants to hear, not just to pick up your consulting fee, but to keep a seat at the table where you might hope to have a chance of nudging its behavior in a more ethical direction, even if only incrementally?

None of these are easy questions to answer objectively if you’re the ethicist in the scenario.

Indeed, even if money were not part of the equation, the very fact that people at the corporations — or researchers, or physicians, or whoever it is seeking the ethicists’ expertise — are reaching out to ethicists and identifying them as experts with something worthwhile to contribute might itself make it harder for the ethicists to deliver what they think they should. As Elliott argues, the personal relationships may end up creating conflicts of interest that are at least as hard to manage as those that occur when money changes hands. These people asking for our ethical input seem like good folks, motivated at least in part by goals (like helping people with disease) that are noble. We want them to succeed. And we kind of dig that they seem interested in what we have to say. Because we end up liking them as people, we may find it hard to tell them things they don’t want to hear.

And ultimately, Elliott is arguing, barriers to delivering news that people don’t want to hear — whether those barriers come from financial dependence, the professional prestige that comes when your talents are in demand, or developing personal relationships with the people you’re advising — are barriers to being a credible ethicist. Bioethics becomes “the public relations division of modern medicine” (151) rather than carrying on the tradition of gadflies like Socrates. If they were being Socratic gadflies and telling truth to power, Elliott suggests, we would surely be able to find at least a few examples of bioethicists who were punished for their candor. Instead, we see the ties between ethicists and the entities they advise growing closer.

This strikes close to home for me, as I aspire to do work in ethics that can have real impacts on the practice of scientific knowledge-building, the training of new scientists, and the interaction of scientists with the rest of the world. On the one hand, doing this well seems to require that I understand the details of scientific activity and the concerns of scientists and scientific trainees. But, if I “go native” in the tribe of science, Elliott seems to be saying that I could end up dropping the ball when it comes to making the kind of contribution a proper ethicist should:

Bioethicists have gained recognition largely by carving out roles as trusted advisers. But embracing the role of trusted adviser means forgoing other potential roles, such as that of the critic. It means giving up on pressuring institutions from the outside, in the manner of investigative reporters. As bioethicists seek to become trusted advisers, rather than gadflies or watchdogs, it will not be surprising if they slowly come to resemble the people they are trusted to advise. And when that happens, moral compromise will be unnecessary, because there will be little left to compromise. (170)

This is strong stuff — the kind of stuff which, if taken seriously, I hope can keep me on track to offer honest advice even when it’s not what the people or institutions to whom I’m offering it want to hear. Heeding the warnings of a gadfly like Carl Elliott might just help an ethicist do what she has to do to be able to trust herself.

Crime, punishment, and the way forward: in the wake of Sheri Sangji’s death, what should happen to Patrick Harran?

When bad things happen in an academic laboratory, what should happen to people who bear responsibility for those bad things — even if they didn’t mean for them to happen?

This is the broad question I’ve been thinking about in connection with the prosecution of chemistry professor Patrick Harran and UCLA in connection with the laboratory accident that killed Sheri Sangji. Potentially, Harran could face jail time, and there has been a good bit of discussion (as in these posts at Chemjobber) about whether that’s what he deserves.

I’ll be honest: I find myself uncomfortable weighing Harran’s actions (and inaction) as worthy of jail time or not, let alone assigning the appropriate number of months or years behind bars to punish him for Sheri Sangji’s death. And, other than satisfying our appetite for retribution, I am utterly unsure whether such a penalty in this case would help. I don’t know that it would do much to change the conditions and institutions that ought to be changed in the wake of this accident. (On the matter of changing institutions, read the excellent posts at ChemBark and Chemjobber.)

Sheri Sangji’s death should alert us that things need to change. Conditions in academic labs need to change. Attitudes and behaviors of PIs, students, and technicians need to change. University departments (which are both builders of knowledge and trainers of new scientists) need to change. What kind of resolution of the prosecution of Prof. Harran could bring about the needed changes?

The best way forward should keep lab accidents like the one that killed Sheri Sangji from happening again. Of course, if we’re talking about avoiding such lab accidents, we’re assuming this one was preventable through some combination of proper safety equipment and attire, training, supervision, and the like.

Jailing the PI would certainly get the attention of other PIs and would underline the message that they are responsible for safety in their labs, as well as for addressing deficiencies identified in safety inspections (and maybe even for identifying and addressing the deficiencies themselves). Maybe jailing the PI in this case would also make Sheri Sangji’s family feel that justice had been served.

But, jailing the PI here might also move him, and the larger problem of making research activities reliably non-lethal, out of the sight of the people who really need to be focused on learning the lesson here.

Maybe jail would make him seem like more of a monster, the kind whose lab must have been much worse than ours. Or maybe his absence from the academic research milieu might simply mean the other PIs would return their focus to the pressing problems of securing funding, generating data, and cranking out manuscripts. Perhaps their institutions would be stricter about future safety inspections, but the PIs would do what they needed to do to return to business as usual. Given the extent to which universities rely on external grants secured by such scientific business-as-usual, it’s hard to imagine universities doing much to shake PIs out of this routine.

If we’re interested in justice that actually addresses the dangers of business as usual, I think there is another option we should explore.

I don’t think Prof. Harran should be allowed to continue with the lines of research he was pursuing when the accident in his lab claimed Sheri Sangji’s life. The way he conducted that research — the way he supervised activities and personnel — killed someone employed to advance the research. That’s a big enough strike to bench him and let other PIs play that knowledge-building zone.

Instead, Harran should devote the remainder of his career to creating a scientific culture — at UCLA and beyond — in which the safety of the people performing the experiments (and making the reagents, and fixing the equipment, and cleaning the glassware) is never sacrificed to the goal of getting more and faster results. His mission should be to communicate just how easy it was for a “good PI” to allow lapses in safe procedures, to assume students and staff will figure out how to be safe when using materials or techniques that are new to them, to find tasks more important than supervising lab work, to discourage questions about how to be safe.

This shouldn’t be a new service requirement on Harran in addition to his research and his teaching. This should be the core of his job.

He should not only grapple with the soul-searching a decent person does when he’s allowed conditions that have killed an underling, but also do that soul-searching in a space where the rest of the scientific community can participate and include themselves in the examination. Harran’s presence in this role — his active involvement with his department in this role — means that Sheri Sangji and the circumstances that killed her will not be forgotten.

Since research grants would be unlikely to pay for this new set of professorial professional responsibilities — and since UCLA likely bears some share of responsibility for creating the conditions that killed Sheri Sangji — UCLA should fully fund these new responsibilities of Harran’s position moving forward. As well, UCLA should provide what support is necessary to allow Harran’s colleagues (and students and other personnel in their labs) to adapt their own practices in ways that incorporate his lessons. And, it might have a meaningful impact if professional organizations like the American Chemical Society provided funds for Harran to travel and speak to others running academic labs about how to make them safer.

In short, my hunch is that the best way to achieve progress on safe conditions and practices (not to mention relationships in lab groups that help everyone promote safety) is not to separate Harran from his professional community but to return him to that community with a new mission. His new charge would be to help build a better business-as-usual.

It might not be the science career he envisioned, but I reckon it’s a job that needs doing. Harran now has ample first-hand knowledge of why it matters.

Health care provider and patient/client: situations in which fulfilling your ethical duties might not be a no-brainer.

Thanks in no small part to the invitation of the fantastic Doctor Zen, I was honored this past week to be a participant in the PACE 3rd Annual Biomedical Ethics Conference. The conference brought together an eclectic mix of people who care about bioethics: nurses, counselors, physicians, physicians’ assistants, lawyers, philosophers, scientists, students, professors, and people practicing their professions out “in the world”.*

As good conferences do, this one left me with a head full of issues with which I’m still grappling. So, as bloggers sometimes do, I’m going to put one of those issues out there and invite you to grapple with it, too.

A question that kept coming up was what exactly it means for a health care provider (broadly construed) to fulfill hir duties to hir patient/client.

Of course, the folks in the ballroom could rattle off the standard ethical principles that should guide their decision-making — respect for persons (which includes respect for the autonomy of the patient-client), beneficence, non-maleficence, justice — but sometimes these principles seem to pull in different directions, which means just what one should do when the rubber hits the road is not always obvious.

For example:

1. In some states, health care professionals are “mandatory reporters” of domestic violence — that is, if they encounter a patient who they have reason to believe is a victim of domestic violence, they are obligated by law to report it to the authorities. However, it is sometimes the case that getting the case into the legal system triggers retaliatory violence against the victim by the abuser. Moreover, in the aftermath of reporting, the victim may be less willing (or able) to seek further medical care. Is the best way to do one’s duty to one’s patient always to report? Or are there instances where one better fulfills those duties by not reporting (and if so, what are the foreseeable costs of such a course of action — to that patient, to the health care provider, to other patients, to the larger community)?

2. A patient with a terminal illness may feel that the best way for hir physician to respect hir autonomy would be to assist hir in ending hir life. However, physician-assisted suicide is usually interpreted as clearly counter to the requirements of non-maleficence (“do no harm”) and beneficence. In most of the U.S., it’s also illegal. Can a physician refuse to provide the patient in this situation with the sought-after assistance without being paternalistic?** Is it fair game for the physician’s discussion with the patient here to touch on personal values that it might not be fair for the patient to ask the physician to compromise? Are there foreseeable consequences of what, to the patient, looks like a personal choice that might impact the physician’s relationship with other patients, with hir professional community, or with the larger community?

3. In Texas, the law currently requires that patients seeking abortions must submit to transvaginal ultrasounds first. In other words, the law requires health care providers to subject patients to a medically unnecessary invasive procedure. The alternative is for the patient to carry to term an unwanted pregnancy. Both choices, arguably, subject the patient to violence.

Does the health care provider who is trying to uphold hir obligations to hir patient have an obligation to break the law? If it’s a bad law — here, one whose requirements make it impossible for a health care provider to fulfill hir duties to patients — ought health care providers to put their own skin in the game to change it?

Here’s what I’ve written before about how ethically to challenge bad rules:

If you’re part of a professional community, you’re supposed to abide by the rules set by the commissions and institutions governing your professional community.

If you don’t think they’re good rules, of course, one of the things you should do as a member of that professional community is make a case for changing them. However, in the meantime making yourself an exception to the rules that govern the other members of your professional community is pretty much the textbook definition of an ethical violation.

The gist here is that sneakily violating a bad rule (perhaps even while paying lip service to following it) rather than standing up and explicitly arguing against the bad rule — not just when it’s applied to you but when it’s applied to anyone else in your professional community — is wrong. It does nothing to overturn the bad rule, it involves you in deception, and it prioritizes your interests over everyone else’s.

The particular situation here is tricky, though, given that as I understand it the Texas law is a rule imposed on medical professionals by lawmakers, not a rule that the community of medical professionals created and implemented themselves the better to help them fulfill their duties to their patients. Indeed, it seems pretty clear that the lawmakers were willing to sacrifice duties that are absolutely central in the physician-patient relationship when they imposed this law.

Moreover, I think the way forward is complicated by concerns about how to ensure that patients get care that is helpful, not harmful, to them. If Texas physicians who opposed the mandatory transvaginal ultrasound requirement were to fill the jails to protest the law, who does that leave to deliver ethical care to people on the outside seeking abortions? Is this a place where the professional community as a whole ought to be pushing back against the law rather than leaving it to individual members of that community to push back?

* * * * *

If these examples have common threads, one of them is that what the law requires (or what the law allows) seems not to line up neatly with what our ethics require. Perhaps this speaks to the difficulty of getting laws to capture the tricky balancing act that acting ethically towards one’s patients/clients requires of health care professionals. Or, maybe it speaks to lawmakers not always being focused on creating an environment in which health care providers can deliver on their ethical duties to their patients/clients (perhaps even disagreeing with professional communities about just what those ethical duties are).

What does this mismatch mean for what patients/clients can legitimately expect from their health care providers? Or for what health care providers can realistically deliver to their patients/clients?

And, if you were a health care provider in one of these situations, what would you do?
*Arguably, however, universities and their denizens are also in the world. We share the same fabric of space-time as the rest of y’all.

**Note that paternalism is likely warranted in a number of circumstances. However, when we’re talking about a patient of sound mind, maybe paternalism shouldn’t be the physician’s go-to stance.

Getting kids interested in math careers may require a hero.

Back when I was a high school math geek, our math team would go to meets that occasionally had tables set up to encourage us to pursue various careers that would make use of our mad math skillz. The one such profession where the level of encouragement far outstripped our teenaged interest was the actuarial field. Indeed, more than the objective boringness of the field (to the extent that we had enough information to evaluate that) it may have been the vehement protests of how not-boring actuarial work and actuaries are (really!) that persuaded us that actuarial work was probably pretty boring.

Recently, I think I have hit upon something that might help actuaries turn this perception around. They need a superhero.

Seriously, if any comic book superhero of note had been an actuary as his cover job, actuarial work would have gotten an automatic boost in the estimation of teen geeks. Journalism? Cool, because that was Superman’s day job. Millionaire-industrialist-playboy-philanthropist? Definitely an acceptable career path, since that was Batman’s day job. Librarian? Cool not just because of the access to all those books and periodicals, but also because it was Batgirl’s day job. High school student? Not cool, exactly, but more tolerable on account of being Spiderman’s day job.

Having a superhero who alternated nights of crime-fighting with days assessing risk would raise the esteem of actuarial science among high school mathletes.

There are details that would need to be worked out, of course.

The name for this superhero? Let’s pencil in The Numerator. (“He always comes out on top!”)

His origin story? Probably it would involve looking up from his calculations and crying, “Egad! Crime does pay!” After which, of course, he would dedicate himself to fighting that crime (else we’re looking at the origin story of a supervillain).*

My guess is that The Numerator is going to be one of those superheroes who rely on cool gadgets and knowledge rather than on actual superhuman strength or powers — more like Batman than Spiderman. (Otherwise, we’re looking at him getting his fingers caught in a radioactive adding machine, thereby ending up with the power to shoot calculator tape from his fingers, which … I don’t think so.) His utility belt probably includes actuarial tables and a slide rule. But maybe he’s also a synesthete who can look at the numbers and smell evil.

His nemeses? Undoubtedly they will be legion — corporate crooks, purveyors of Ponzi schemes — but one of them might be Pay-Day Shark. This supervillain, tricked out in a sharkskin suit, will be happy to give you an advance on your paycheck as long as you’re ready to pay interest and fees that end up being about 400% of the amount you’re borrowing. When you can’t pay, he’ll threaten you with his tank of hungry and ill-tempered (but not laser-sight-equipped) sharks. He may even let his pretties eat one of your limbs. But Pay-Day Shark wants to help you — he’ll loan you a prosthetic limb, for a reasonable fee.

Who can save you from his clutches? The Numerator!

DC Comics? Marvel Comics? American Academy of Actuaries? I think we have something here. Let’s talk.

*It’s possible that linking actuarial science with supervillainy might also make young geeks hold it in higher esteem. Maybe someone should perform a risk-benefit analysis of this … but who?