The last two meetings of my ethics in science class have focused on some of the history of research with human subjects and on the changing statements of ethical principles or rules governing such experimentation. Looking at these statements (the Nuremberg Code and the Belmont Report especially) against the backdrop of some very serious missteps (Nazi medical experiments and the Public Health Service’s Tuskegee syphilis experiment), it’s painfully clear how much regulation is scandal-driven — a reaction to a screw-up, rather than something that researchers took the time to think about before they embarked on their research. Worse, it’s clear that researchers are perfectly capable of ignoring existing moral codes or standards to get the job done.
What some of these researchers may not have understood (but my students seem pretty well attuned to) is that in ignoring the norms that one ought, as a physician or a scientist, to be committed to, one comes perilously close to choosing not to be a physician or a scientist.
How could the Nazi physicians have engaged in medical experiments in which they exposed prisoners, against their will, to malaria, typhus, yellow fever, smallpox, diphtheria, poisons, and mustard gas? They were, after all, supposedly physicians. As members of this profession, they were supposed to be bound by the norm articulated in the Hippocratic Oath to do no harm.
Harm was done.
The physicians conducting the Nazi medical research may have seen themselves as scientists rather than physicians in this context. They may have seen their primary duty as answering particular scientific questions rather than as looking after the care of the persons involved in the experiments. Maybe they bought the official line from the Nazi regime that many of the persons used in these experiments were subhuman. Or perhaps, they knew they were human but made the pragmatic decision to do what the regime wanted, either to advance their careers or to secure their own safety from reprisals.
But you can’t intentionally harm human beings that way and still count as a physician.
Part of the point of the Nuremberg Code (and later, the Declaration of Helsinki) was to flesh out the particulars of doing no harm in research with human subjects. Nuremberg identifies risks to which a human subject may not be exposed no matter what the potential benefit of the knowledge that might be uncovered by doing so. As well, Nuremberg makes the voluntary consent of the participants essential to the ethical conduct of the research.
The Declaration of Helsinki emphasizes that the researcher’s obligations as a physician outweigh his or her interests as a scientist:
“Concern for the interests of the subject must always prevail over the interests of science and society.” (I. Basic Principles, 5)
“The physician can combine medical research with professional care, the objective being the acquisition of new medical knowledge, only to the extent that medical research is justified by its potential diagnostic or therapeutic value for the patient.” (II. Clinical Research, 6)
“In the purely scientific application of medical research carried out on a human being, it is the duty of the physician to remain the protector of the life and health of that person on whom biomedical research is being carried out.” (III. Nonclinical Biomedical Research, 1)
“In research on man, the interest of science and society should never take precedence over considerations related to the well being of the subject.” (III. Nonclinical Biomedical Research, 4)
The language I’m quoting is from the 1989 version, but these same points were present in the version adopted by the 18th World Medical Assembly in Helsinki, Finland, in June 1964.
At that point, in the U.S., the Public Health Service was more than 30 years into the Tuskegee syphilis experiment.
The thrust of the research was to establish the effects of the spontaneous evolution of syphilis on black males — so clearly, this was intended as a nontherapeutic study. And perhaps, when the study was launched in 1932 (prior to the discovery of penicillin), a nontherapeutic study made some sense, given the argument that the state-of-the-art therapy in 1932 (a one-year course of treatment with arsphenamine and bismuth or mercury) wasn’t much better than the syphilis it was intended to treat. But in 1946 (while the study was still going), penicillin was discovered — and it was discovered to be an effective treatment for syphilis.
In 1964 (while the study was still going) the WMA adopted the Declaration of Helsinki as “a guide to every physician in biomedical research involving human subjects.”
The P.H.S. physicians involved in this research were acting in violation of the Declaration of Helsinki and the Nuremberg Code. There was nothing like informed consent from the participants. None of the findings on untreated syphilis were of any “diagnostic or therapeutic value for the patient”. Failing to offer treatment when effective treatment was available gave precedence to the interests of science over considerations related to the well being of the subjects.
To the extent that they ignored their duties to the human subjects in their care, the P.H.S. researchers were not physicians.
Physicians doing biomedical research with human subjects are not the only ones potentially torn between two masters (here, their interest in building scientific knowledge vs. their duty as physicians to look after the interests of the humans in their care). Scientists in various contexts (such as working for a private concern, or working for a government) may be torn between their duty as employees to look out for the interests of their employer and their duties as scientists (e.g., their duty to be honest about their findings). Sometimes your findings are inconvenient from the point of view of the interests of your employer; they might mean that you won’t be able to bring that new drug to market, or you won’t be able to provide support for your administration’s policy objectives.
But to the extent that your duties as an employee overrule your duties as a scientist … you aren’t being a scientist any more. Once you’ve been persuaded that honesty is optional if it gets in the way of the interests of the guy who pays you, you’re a whore.
* * * * *
Lately, science educators are finding themselves in a similar situation, pulled in opposite directions by their duties as educators to give their students a good understanding of what science knows and how science knows it, and their duties to administrators who are trying not to run afoul of those whose material and political support is necessary to keep the schools running. The Arkansas Times (hat tip: Brian Leiter) has an article on the tightrope walk Arkansas’ science teachers find themselves on. The article opens with an interview with “Bob”, a geologist and science teacher whose identity is concealed lest he lose his job:
Teachers at his facility are forbidden to use the “e-word” (evolution) with the kids. They are permitted to use the word “adaptation” but only to refer to a current characteristic of an organism, not as a product of evolutionary change via natural selection. They cannot even use the term “natural selection.” Bob feared that not being able to use evolutionary terms and ideas to answer his students’ questions would lead to reinforcement of their misconceptions.
But Bob’s personal issue was more specific, and the prohibition more insidious. In his words, “I am instructed NOT to use hard numbers when telling kids how old rocks are. I am supposed to say that these rocks are VERY VERY OLD … but I am NOT to say that these rocks are thought to be about 300 million years old.”
As a person with a geology background, Bob found this restriction hard to justify, especially since the new Arkansas educational benchmarks for 5th grade include introduction of the concept of the 4.5-billion-year age of the earth. Bob’s facility is supposed to be meeting or exceeding those benchmarks.
The explanation that had been given to Bob by his supervisors was that their science facility is in a delicate position and must avoid irritating some religious fundamentalists who may have their fingers on the purse strings of various school districts. Apparently his supervisors feared that teachers or parents might be offended if Bob taught their children about the age of rocks and that it would result in another school district pulling out of their program. He closed his explanatory message with these lines:
“So my situation here is tenuous. I am under censure for mentioning numbers. … I find that my ‘fire’ for this place is fading if we’re going to dissemble about such a basic factor of modern science. I mean … the Scopes trial was how long ago now??? I thought we had fought this battle … and still it goes on.”…
Both of the directors [at the science education facility where “Bob” works] welcomed me warmly and were very forthcoming in their answers to my questions. They were, however, quite firm in their insistence that they and their facility be kept strictly anonymous if I was to write a story about Bob’s issue. We talked for over an hour about the site’s mission, their classes, and Bob’s situation specifically. Both directors agreed that “in a perfect world” they could, and would, teach evolution and deep time. However, back in the real world, they defended their stance on the prohibition of the “e-word,” reasoning that it would take too long to teach the concept of evolution effectively (especially if they had to defuse any objections) and expressing concern for the well-being of their facility. Their program depends upon public support and continued patronage of the region’s school districts, which they felt could be threatened by any political blowback from an unwanted evolution controversy.
With regard to Bob’s geologic time scale issue, the program director likened it to a game of Russian roulette. He admitted that probably very few students would have a real problem with a discussion about time on the order of millions of years, but that it might only take one child’s parents to cause major problems. He spun a scenario of a student’s returning home with stories beginning with “Millions of years ago …” that could set a fundamentalist parent on a veritable witch hunt, first gathering support of like-minded parents and then showing up at school board meetings until the district pulled out of the science program to avoid conflict. He added that this might cause a ripple effect, other districts following suit, leading to the demise of the program.
Essentially, they are not allowing Bob to teach a certain set of scientific data in order to protect their ability to provide students the good science curriculum they do teach. The directors are not alone in their opinion that discussions of deep time and the “e-word” could be detrimental to the program’s existence. They have polled teachers in the districts they serve and have heard from them more than enough times that teaching evolution would be “political suicide.” …
I learned that omission was the method of dealing with evolution in another of Arkansas’s largest, most quickly growing, and wealthiest school districts — an omission that was apparently strongly suggested by the administration. I tried to check on this, but made little progress, receiving the cold shoulder from the administration and the science department at that school. However, I spoke with a person who works for a private science education facility that does contract work for this district: “Helen” — she, like the other people I had visited, requested that she and her employers not be identified. I asked Helen about her experiences with the district’s teachers. Her story was that in preparation for teaching the students from that district, she had asked some of the teachers how they approached the state benchmarks for those items dealing with evolution. She said, “Oh, I later got in trouble for even asking,” but went on to describe their answers. Most teachers said that they did not know enough about evolution to teach it themselves, but one of them, after looking around to make sure they were safely out of anyone’s earshot, explained that the teachers are told by school administrators that it would be “good for their careers” not to mention such topics in their classes. …
How are teachers like “Bob,” administrators like “Susan,” and teacher trainers like “Randy” supposed to ensure proper science education if politicians like the governor consistently advocate the teaching of non-science?
It is telling that none of the people I spoke with were willing to be identified or to allow me to reveal their respective institutions. In the case of “Bob” and his facility’s directors, they were concerned about criticism from both sides. They did not want to lose students by offending fundamentalists or lose credibility in the eyes of the scientific community for omitting evolution.
What makes life hard for the folks interviewed for this story is that they recognize their duty to provide a quality education to the students in Arkansas schools. If they’re charged with teaching science, they have a duty actually to teach science and not to omit important bits because those bits might upset some people. They have a duty to make sure the education they are offering meets the state’s own science benchmarks. At the same time, looking out for the interests of the kids, they worry that the students’ science education might suffer even more if they were fired and replaced with unqualified science teachers (or, perhaps, with qualified teachers who didn’t take their duty to teach the science right so seriously).
Here, it’s much harder to tell what the best way to fulfill one’s duties to the students might be. But it is clear that the teachers are struggling with this — which is a good indication that they recognize that they have duties as teachers and are unwilling to sell them out.
And you can tell that even the little compromises (don’t mention the “e-word”) in the service of what might be a greater duty to the students make some of these teachers feel that they’ve died a little inside as teachers.
It is possible that the cleanest solution to a problem like this is to let people serve one master rather than forcing them to serve two. Maybe it’s just a mistake to ask a physician to do something that violates the duties of a physician, to ask a scientist to trade truth for profit, or to ask an educator to protect the political or religious sensibilities of a community at the expense of providing that community’s children with a quality education.
To the extent that we have an interest in what physicians, scientists, and educators can contribute to our society, maybe each of us has a duty to make sure we don’t put them in a position where they have to make such impossible choices.
Actually, Nazi doctors were among the most enthusiastic proponents of the eugenic programs that Hitler’s regime came up with. I’ve been meaning to write about how the entire Nazi biomedical vision made it possible for the Holocaust to occur. One of these days I’ll get around to it.
As a free-range scientist (having been released from my gilded cage) I wonder how your class would handle the problem faced by my research colleagues in the private sector. You ask whether a scientist or physician could potentially serve two masters; in the private sector, however, researchers are typically asked to serve three masters: the patient, the research goal, and the bottom line.
Given that the majority of your students will likely serve in the private sector, trying to balance the timelines and cost limitations imposed by investors, bosses, and corporations while remaining true both to your patients and your research goals is a huge challenge.
As for your suggestion that scientists and physicians be asked only to serve a single master, this is simply a non-starter. It would require a doubling of staff (one physician and one scientist) in each study and ultimately an overseer to mediate disputes (it reminds me of the old joke about why KGB officers traveled in threes: one could read, one could write, and the third was there to watch over the two intellectuals).
As in all things, trade-offs must be made: we must trust that individuals will act in an ethical way, and then set guidelines and implement oversight and enforcement mechanisms to deal with those who stray. Needless to say, we won’t always get it right, but it is the moderate middle road that will succeed most of the time and fail only infrequently.
Orac: Do you think the Nazi physicians saw their commitment to eugenics as somehow upholding their duty to do no harm to their patients? Or did they think they were avoiding harm to society as a whole (while sacrificing the interests of these patients)? Or did they fail to see the folks who were sterilized as fully human? In other words, is there any plausible way to reconcile their enthusiasm for eugenics with their duties as physicians?
Blair: I understand the practical dimension of the problem (especially in a world where research costs money), but it seems like the internal conflicts of interest the current system places on individual scientists are frequent and heavy. Sets of principles like Helsinki are useful because they remind the physician-scientist that duties to the patient are the tie-breaker for the conflicts. In some ways, people’s personal integrity (and their identity as a scientist or a teacher or a physician) may have to act as the tie-breaker in conflicts between professional ethics and the bottom line. But put into such conflicts, a fair number of people crumble. So I wonder if there might be institutional changes (maybe to how we fund drug development, science education, etc.) that could at least reduce the frequency of the conflicts placed on the individual’s shoulders.
The Helsinki Principles are useful, but if parsed they have their own difficulties. Consider Basic Principle 5 in its entirety:
“Every biomedical research project involving human subjects should be preceded by careful assessment of predictable risks in comparison with foreseeable benefits to the subject or to others. Concern for the interests of the subject must always prevail over the interests of science and society.”
In your blog you extract the final sentence, but the preceding sentence gives many a researcher an out, in that it includes a consideration of risks with respect to benefits “to the subject or to others”. In any case where risks are balanced against benefits, personal opinion/morality/ethics comes to the fore. A first-year theology student would argue differently than a tobacco executive, and both would likely find fault with a mere postdoc who just wants to fund his/her research project.
Another consideration not addressed to this point is the critical driver that pushes many a scientist over the edge: outside obligations. Since money is a driving force not only in research but also in everyday life, one could argue that a single, independently wealthy researcher is much freer to be ethical than a married parent with a mortgage who is still trying to pay off her/his student loans. The reason many a researcher crumbles is the need to meet obligations that are entirely independent of their research.
You ask if an institutional change might address this problem, and I would argue that in the modern political and financial reality the simple answer is no! Through the years our society has developed the next best thing (at this point) in universities. Researchers are provided with tenure, and (at least in Canada, from whence I write) independent granting authorities exist to supply research grants. But even then these grants go through a human filter (peer review), and not all good work can be funded. Perhaps there exist some iterative or incremental improvements, but I do not see a grand change that could make a difference; this is particularly true in a world with so many different jurisdictions, where a researcher need only cross an imaginary line in the sand to end up under an entirely different regulatory regime.
Just to step outside my comfort area and comment on the Nazi research. It is my understanding that the Nazi research on hypothermia carried out in the concentration camps had the stated goal of improving survival rates of pilots and sailors lost at sea, and was part of the war effort in a society gradually being beaten down by a long and horrible war. It has been argued (but not by me) that the scientists felt they could balance the suffering of the few against preserving the lives of the many. I am not in a position to comment and will leave that discussion to those better able to assess the information. Like the ticking bomb/torture scenario, I am glad that I will never be in a position to have to choose whether to use that data directly … but if, as mentioned in a previous post, modern life jackets are the direct result of this research, then I could potentially end up using it indirectly. Not a pleasant thought, and one which will probably occupy some private time.
Penicillin was discovered in 1928, before the Tuskegee experiment commenced.
Yes, Aussie is right about the date of the discovery of penicillin.
However, apparently its value in treating diseases like syphilis wasn’t recognized until the mid-1940s. This makes it plausible (perhaps) that one could, in 1932, justify a study of the natural course of untreated syphilis (since there were no good treatments known at that time), but much harder to justify the need for such a study a decade or so later.
Just to clarify the penicillin dates, Fleming discovered it in 1928, but as late as 1939 clinical trials were impeded by the difficulty of producing sufficient quantities. It was only with the war effort in the early 1940s that production in quantity was feasible, and then almost all supplies were diverted to the military. It was only after WW II that penicillin became available for general medical practice.