SPSP 2013 Contributed Papers: Computation and Simulation
Tweeted from the 4th biennial conference of the Society for Philosophy of Science in Practice in Toronto, Ontario, Canada, on June 29, 2013, during Concurrent Sessions VII
Tweeted from the 4th biennial conference of the Society for Philosophy of Science in Practice in Toronto, Ontario, Canada, on June 29, 2013.
Tweeted from the 4th biennial conference of the Society for Philosophy of Science in Practice in Toronto, Ontario, Canada, on June 28, 2013, during Concurrent Sessions VI
I have succumbed to what I hope is my last cold of the calendar year. (If I manage to fit in another after this, I will be tempted to claim it as a testament to my efficiency, rather than the capriciousness of my immune system.) And, seeking relief of my symptoms, I have returned to using my neti pot.
However, since last I used this handy device for nasal irrigation, I saw this news item:
Louisiana’s state health department has issued a warning about the dangers of improperly using nasal-irrigation devices called neti pots, responding to two recent deaths in the state that are thought to have resulted from “brain-eating amoebas” entering people’s brains through their sinuses while they were using the devices.
Both victims are believed to have filled their neti pots with tap water instead of manufacturer-recommended distilled or sterilized water. When they used these pots to force the water up their noses and flush out their sinus cavities — a treatment for colds and hay fever — a deadly amoeba living in the tap water, called Naegleria fowleri, worked its way from their sinuses into their brains. The parasitic organism infected the victims’ brains with a neurological disease called primary amoebic meningoencephalitis (PAME), which rapidly destroys neural tissue and typically kills sufferers in a matter of days.
OK, first thing? Every neti pot user I have spoken to since seeing this story uses tap water. I no longer have the box for my neti pot (on which the instructions for use were printed), but I cannot recall the instructions stressing — or even mentioning — that the neti pot only be used with distilled or sterilized water.
Not that I don’t routinely ignore recommendations or void warranties. It’s just that I generally do so consciously, rather than accidentally.
Anyway, a headcold sucks. Brain-eating amoebae would probably suck even more.
Commentary I have seen on this story suggests that the real danger is not so much nasal irrigation with tap water as the questionable quality of Louisiana tap water. The quality of the tap water in the San Francisco Bay Area is pretty high. So, probably I could safely continue to use tap water in my neti pot.
But, now that I have the possibility of introducing brain-eating amoebae into my brain on the brain (as it were), the magnitude of the bad outcome (amoebae eating my brain) is big enough that I’d rather reduce the risk of that happening to zero. And, I’d feel like a fool (in the moments of self-awareness that I had before my brain got eaten) if I did fall victim to this bad outcome, as unlikely as it is, by betting wrong.
Which means, I’m now boiling my tap water first before I use it to irrigate my nasal passages. And, as I get used to this new protocol, I’m risking the discomfort of applying saline solution that has not cooled down quite enough.
But so far, I haven’t seen any news items about brain tissue denatured by using a neti pot with too-hot saline solution.
Those of you who read the excellent blog White Coat Underground have probably had occasion to read PalMD’s explanation of the Quack Miranda Warning, the disclaimer found on various websites and advertisements that reads, “These statements have not been evaluated by the Food and Drug Administration. This product is not intended to diagnose, treat, cure or prevent any disease.” When found on a website that seems actually to be offering diagnosis, treatment, cure, or prevention, PalMD notes, this language seems like a warning that the big change that will be effected is that your wallet will be lightened.
In response to this, Lawrence comments:
This statement may be on every quack website but is on every legitamate website and label as well. Take vitamin C for example. Everyone knows that it can help treat & cure diseases. Vitamin C has been used for centuries to cure disease by eating various foods that are high in it. Even doctors tell you it is good to take when you are sick because it helps your body fight off the disease. So the fact that this statement is required to be on even the most obviously beneficial vitamins pretty much means that the FDA requires a companies to lie to the public and that they have failed in their one duty to encouraging truth in health. Once I realized this, it totally discredits everything the FDA says.
Sure if something is not approved by a big organization whose existance is supposed to safeguard health it makes it easier for the little con artest to step in at every opportunity, but that doesn’t mean that the big con artests arn’t doing the same thing
“Everyone knows…”
A phrase deadly to science.
I’m going to add my (less succinct) two cents.
There are plenty of things that people take to be something everyone knows. (The “everyone” is tricky, because there are enough people on the planet that it’s usually (always?) possible to find someone who doesn’t know X.) And, I’m happy to grant that, for some values of X, there are indeed many people who believe X.
But belief is not the same as knowledge.
What “everyone knows” about celebrities should help us notice the difference. Richard Gere? Jamie Lee Curtis? Even in the event that everyone has heard the same rumors, the extent of what we actually know is that there are rumors. Our propensity to believe rumors is why the team at Snopes will never want for material.
This is not to say that we have to do all of our epistemic labor ourselves. Indeed, we frequently rely on the testimony of others to help us know more than we could all by ourselves. But this division of labor introduces risks if we accept as authoritative the testimony of someone who is mistaken — or who is trying to sell us snake-oil. Plus, when we’re accepting the testimony of someone who knows X on the basis of someone else’s testimony, our connection to the actual coming-to-know of X (through a mode other than someone else’s say-so) becomes more attenuated.
At least within the realm of science, the non-testimony route to knowledge involves gathering empirical evidence under conditions that are either controlled or at least well characterized. Ideally, the effects that are observed are both repeatable in relevantly similar conditions and observable by others. Science, in its methodology, strives to ground knowledge claims in observational evidence that anyone could come to know (assuming a standard set of properly functioning sense organs). Part of how we know that we know X is that the evidence in support of X can be inspected by others. At this basic level, we don’t have to take anyone else’s word for X; the testimony of our senses (and the fact that others pointing their sense organs at the same bits of the world see the same things) gives us the support for our beliefs that we need.
Claims without something like empirical support might inspire belief, but they don’t pass scientific muster. To the extent that an agency like the FDA is committed to evaluating claims in a scientific framework, this means that they want to evaluate the details of the experiments used to generate the empirical data that are being counted as support for those claims. In other contexts, folks may be expecting, or settling for, other standards of evidence. In scientific contexts, including biomedical ones, scientific rules of evidence are what you get.
Why then, one might ask, might a physician suggest vitamin C to a patient with a cold if there isn’t sufficient scientific evidence to say we know vitamin C cures colds?
There are a few possibilities here. One is that the physician judges (on the basis of a reasonable body of empirical evidence) that taking vitamin C is unlikely to do harm to the patient with a cold. If the physician’s clinical experience is that cold patients will feel better with some intervention than with no intervention, recommending vitamin C may seem like the most benign therapeutic option.
It’s also possible that some of these physicians accept the testimony of someone else who tells them there is good reason to believe that vitamin C cures colds. Being human, physicians sometimes get burned by testimony that turns out to be unreliable.
It’s even possible that some physicians are not so clear on scientific rules of evidence, and that they make recommendations on the basis of beliefs that haven’t been rigorously tested. The more high profile of these physicians are the kinds of folks about whom PalMD frequently blogs.
We had to fetch the younger Free-Ride offspring from school yesterday midday on account of an unscheduled bout of vomiting.* Because, you know, the microbes and immune systems tend not to take account of things like our work schedules. (“Or whether we have a science test,” the younger Free-Ride offspring chimes in.)
Anyway, since experience has established me as the puke-parent** in the Free-Ride household (the one upon whom a child will vomit in instances where someone is vomited upon), I now have something of a procedure when I get home with a pukey kid. We cover the head of the bed, the pillow, and the floor area adjacent to the child’s bed with towels (since, in case of puke, it’s easier to remove and replace a towel or two than to strip the whole bed and change the sheets). We provide a nice big aluminum bowl next to the bed … just in case.
And we don’t even think about putting food into that tummy until the tummy shows no signs of erupting.
But then, what to put in the tummy — what counts as a “gentle” food for a kid recovering from a stomach bug — is a source of some controversy at Casa Free-Ride.
In the household in which I grew up, flat ginger ale and saltines were the canonical first foods after an upchuck. If they stayed down, maybe 24 hours later you’d get to try some baked custard, then eventually “real” food.
Sadly, we hardly ever have ginger ale in the house, and the Free-Ride offspring have declared saltines strange and disgusting. What this means is that I don’t have a well-established safe food with which to test tummy stability.
Indeed, as I was laying down towels, right before I was going to make a batch of baked custard, the younger Free-Ride offspring mentioned that a teacher at the after school program had said that eggs (an ingredient of baked custard) are not a good food for your tummy after vomiting.
This suggests to us that what people consider as the right kind of food to give a kid who’s been throwing up must be pretty strongly shaped by what kind of food they were given as kids trying to get better from crummy tummies. Also, it suggests that there is no clear unified theory of the optimal macronutrient composition for these foods — at least not one upon which a clear majority of grown-ups taking care of these kids agree.
My strategy, drawn from my childhood, has been: fluids with a little flavor (because water tastes funny when you’re sick), then carbohydrates with negligible fiber (the dreaded saltines), then some not-too-wobbly protein, and none of it very far from a flavor range it would be fair to describe as “bland”. Probably a banana somewhere in there, too.
But, see, now the younger Free-Ride offspring and I are wondering if this strategy is bunkum.***
So, because the younger Free-Ride offspring tells me that a PubMed search would not be a relaxing way to spend a sick day, we’re appealing to those more likely to have an actual evidence base here (Pal? Pascale? Other medical/nutrition types?) to tell us whether there is any informed-by-science consensus on what a kid ought to be fed (and in what sequence) once the puking subsides.
______
* No, we don’t have scheduled vomiting. It’s just that these stomach bugs hardly ever happen on a day when we had nothing else to do.
** The companion role to “puke-parent” is “poop-parent”. My better-half assumed that role, but hasn’t gotten any action in it since the sprogs were in diapers.
*** My current favorite alternate theory on why to eat bland foods in the wake of a stomach-bug: You don’t want to eat foods with more interesting flavors and textures, especially foods you really like, and then throw them up (if you’ve tested the tummy too soon) lest you develop a long-lasting aversion to those foods. It took me maybe a decade to get over my aversion to spaghetti and other long pastas served with tomato-based sauces … because of a stomach flu when I was about 11. On the other hand, if you develop an aversion to saltines, it doesn’t really impact your quality of life in quite the same way.
As promised, today I’m returning to this essay (PDF) by Scott E. Kern about the sorry state of cancer researchers at Johns Hopkins to consider the assumptions he seems to be making about what cancer patients can demand from researchers (or any other members of society), and on what basis.
Let’s review the paragraph of Kern’s essay that dropped his framing of the ethical issue like an anvil:
During the survey period, off-site laypersons offer comments on my observations. “Don’t the people with families have a right to a career in cancer research also?” I choose not to answer. How would I? Do the patients have a duty to provide this “right”, perhaps by entering suspended animation? Should I note that examining other measures of passion, such as breadth of reading and fund of knowledge, may raise the same concern and that “time” is likely only a surrogate measure? Should I note that productive scientists with adorable family lives may have “earned” their positions rather than acquiring them as a “right”? Which of the other professions can adopt a country-club mentality, restricting their activities largely to a 35–40 hour week? Don’t people with families have a right to be police? Lawyers? Astronauts? Entrepreneurs?
There’s a bit of weirdness here that I will note and then set aside, namely formulating the question as one of whether people with families have a right to a career in cancer research, rather than whether cancer researchers have a right to have families (or any other parts of their lives that exist beyond their careers).
Framing it this way, it’s hard not to suspect that Kern is the guy on the search committee who is poised to torpedo the job application of any researcher with the temerity to show any evidence of a life that might need balancing with work — the guy on the search committee who is open about wanting to hire workaholics who have no place else to go but the lab and thus can be expected to yield a higher research output for the same salary. Talented applicants with families (or aspirations to have them), or even hobbies, are a bad risk to a guy like this. And besides, if they need that other stuff too, how serious can they be about research?
If Hopkins has a policy of screening out applicants for research positions on the basis that they have families, or hobbies, or interests that they intend to pursue beyond their work duties, I’m sure that they make this policy clear in their job advertisements. Surely, this would be the sort of information a university would want to share with job seekers.
For our discussion here, let’s start with what I take to be the less odd formulation of the question: Do cancer researchers have a right to a life outside of work?
Kern’s suggestion is that this “right,” when exercised by researchers, is something that cancer patients end up paying for with their lives (unless they go into suspended animation while cancer researchers are spending time with their families or puttering around their gardens).
The big question, then, is what the researcher’s obligations are to the cancer patient — or to society in general.
For that matter, what are society’s obligations to the cancer patient? What are society’s obligations to researchers? And what are the cancer patient’s obligations in all of this?
I’ve written before about the assertion that scientists are morally obligated to practice science (including conducting research). I’ll quote some of the big reasons offered to bolster this assertion from my earlier post:
- society has paid for the training the scientists have received (through federal funding of research projects, training programs, etc.)
- society has pressing needs that can best (only?) be addressed if scientific research is conducted
- those few members of society who have specialized skills that are needed to address particular societal needs have a duty to use those skills to address those needs (i.e., if you can do research and most other people can’t, then to the extent that society as a whole needs the research that you can do, you ought to do it)
Needless to say, finding cures and treatments for cancer would be among those societal needs.
This is the whole Spider Man thing: with great power comes great responsibility, and scientific researchers have great power. If cancer researchers won’t help find cures and treatments for cancer, who else can?
Here, I think we should pause to note that there may well be an ethically relevant difference between offering help and doing everything you possibly can. It’s one thing to donate a hundred bucks to charity and quite another to give all your money and sell all your worldly goods in order to donate the proceeds. It’s a different thing for a healthy person to donate one kidney than to donate both kidneys plus the heart and lungs.
In other words, there is help you can provide, but there seems also to be a level of help that it would be wrong for anyone else to demand of you.*
And once we recognize that such a line exists, I think we have to recognize that the needs of cancer patients do not — and should not — trump every other interest of other individuals or of society as a whole. If a cancer patient cannot lay claim to the heart and lungs of a cancer researcher, then neither can that cancer patient lay claim to every moment of a cancer researcher’s time.
Indeed, in this argument of duties that spring from ability, it seems fair to ask why it is not the responsibility of everyone who might get cancer to train as a cancer researcher and contribute to the search for a cure. Why should tuning out in high school science classes, or deciding to pursue a degree in engineering or business or literature, excuse one from responsibility here? (And imagine how hard it’s going to be to get kids to study for their AP Chemistry or AP Biology classes when word gets out that their success is setting them up for a career where they ought never to take a day off, go to the beach, or cultivate friendships outside the workplace. Nerds can connect the dots.)
Surely anyone willing to argue that cancer researchers owe it to cancer patients to work the kind of hours Kern seems to think would be appropriate ought to be asking what cancer patients — and the precancerous — owe here.
Does Kern think researchers owe all their waking hours to the task because there are so few of them who can do this research? Reports from job seekers over the past several years suggest that there are plenty of other trained scientists who could do this research but have not secured employment as cancer researchers. Some may be employed in other research fields. Others, despite their best efforts, may not have secured research positions at all. What are their obligations here? Ought those employed in other research areas to abandon their current research to work on cancer, departments and funders be damned? Ought those who are not employed in a research field to be conducting their own cancer research anyway, without benefit of institution or facilities, research funding or remuneration?
Why would we feel scientific research skills, in particular, should make the individuals who have them so subject to the needs of others, even to the exclusion of their own needs?
Verily, if scientific researchers and the special skills they have are so very vital to providing for the needs of other members of society — vital enough that people like Kern feel it’s appropriate to harangue them for wanting any time out of the lab — doesn’t society owe it to its members to give researchers every resource they need for the task? Maybe even to create conditions in which everyone with the talent and skills to solve the scientific problems society wants solved can apply those skills and talents — and live a reasonably satisfying life while doing so?
My hunch is that most cancer patients would actually be less likely than Kern to regard cancer researchers as of merely instrumental value. I’m inclined to think that someone fighting a potentially life-threatening disease would be reluctant to deny someone else the opportunity to spend time with loved ones or to savor an experience that makes life worth living. To the extent that cancer researchers do sacrifice some aspects of the rest of their life to make progress on their work, I reckon most cancer patients appreciate these sacrifices. If more is needed for cancer patients, it seems reasonable to place this burden on society as a whole — teeming with potential cancer patients and their relatives and friends — to enable more (and more effective) cancer research to go on without enslaving the people qualified to conduct it, or writing off their interests in their own human flourishing.
Kern might spend some time talking with cancer patients about what they value in their lives — maybe even using this to help him extrapolate some of the things his fellow researchers might value in their lives — rather than just using them to prop up his appeal to pity.
_____
*Possibly there is also a level of help that it would be wrong for you to provide because it harms you in a fundamental and/or irreparable way.
Because it’s turning out to be that kind of semester, I’m late to the party in responding to this essay (PDF) by Scott E. Kern bemoaning the fact that more cancer researchers at Johns Hopkins aren’t passionate enough to be visible in the lab on a Sunday afternoon. But I’m sure as shooting going to respond.
First, make sure you read the thoughtful responses from Derek Lowe, Rebecca Montague, and Chemjobber.
Kern’s piece describes a survey he’s been conducting (apparently over the course of 25 years) in which he seemingly counts the other people in evidence in his cancer center on Saturdays and Sundays, and interviews them with “open-ended, gentle questions, such as ‘Why are YOU here? Nobody else is here!'” He also deigns to talk to the folks found working at the center 9 to 5 on weekdays to record “their insights about early morning, evening and weekend research.” Disappointingly, Kern doesn’t share even preliminary results from his survey. However, he does share plenty of disdain for the trainees and PIs who are not bustling through the center on weekends waiting for their important research to be interrupted by a guy with a clipboard conducting a survey.
Kern diagnoses the absence of all the researchers who might have been doing research as an indication of their lack of passion for scientific research. He tracks the amount of money (in terms of facilities and overhead, salaries and benefits) that is being thrown away in this horrific weekend under-utilization of resources. He suggests that the researchers who have escaped the lab on a weekend are falling down on their moral duty to cure cancer as soon as humanly possible.
Sigh.
The unsupported assumptions in Kern’s piece are numerous (and far from novel). Do we know that having each research scientist devote more hours in the lab increases the rate of scientific returns? Or might there plausibly be a point of diminishing returns, where additional lab-hours produce no appreciable return? Where’s the economic calculation to consider the potential damage to the scientists from putting in 80 hours a week (to their health, their personal relationships, their experience of a life outside of work, maybe even their enthusiasm for science)? After all, lots of resources are invested in educating and training researchers — enough so that one wouldn’t want to break them on the basis of an (unsupported) hypothesis offered in the pages of Cancer Biology & Therapy.
And while Kern is doing economic calculations, he might want to consider the impact on facilities of research activity proceeding full-tilt, 24/7. Without some downtime, equipment and facilities might wear out faster than they would otherwise.
Nowhere here does Kern consider the option of hiring more researchers to work 40 hour weeks, instead of shaming the existing research workforce into spending 60, 80, 100 hours a week in the lab.
They might still end up bringing work home (if they ever get a chance to go home).
Kern might dismiss this suggestion on purely economic grounds — organizations are more likely to want to pay for fewer employees (with benefits) who can work more hours than to pay to have the same number of hours of work done by more employees. He might also dismiss it on the basis that the people who really have the passion needed to do the research to cure cancer will not prioritize anything else in their lives above doing that research and finding that cure.
If that is so, it’s not clear how the problem is solved by browbeating researchers without this passion into working more hours because they owe it to cancer patients. Indeed, Kern might consider, in light of the relative dearth of researchers with such passion (as he defines it), the necessity of making use of the research talents and efforts of people who don’t want to spend 60 hours a week in the lab. Kern’s piece suggests he’d have a preference for keeping such people out of the research ranks, but by his own account there would hardly be enough researchers left in that case to keep research moving forward.
Might not these conditions prompt us to reconsider whether the received wisdom of scientific mentors is always so wise? Wouldn’t this be a reasonable place to reevaluate the strategy for accomplishing the grand scientific goal?
And Kern does not even consider a pertinent competing hypothesis, that people often have important insights into how to move research forward in the moments when they step back and allow their minds to wander. Perhaps less time away from one’s project means fewer of these insights.
The part of Kern’s piece that I find most worrisome is the cudgel he wields near the end:
During the survey period, off-site laypersons offer comments on my observations. “Don’t the people with families have a right to a career in cancer research also?” I choose not to answer. How would I? Do the patients have a duty to provide this “right”, perhaps by entering suspended animation? Should I note that examining other measures of passion, such as breadth of reading and fund of knowledge, may raise the same concern and that “time” is likely only a surrogate measure? Should I note that productive scientists with adorable family lives may have “earned” their positions rather than acquiring them as a “right”? Which of the other professions can adopt a country-club mentality, restricting their activities largely to a 35–40 hour week? Don’t people with families have a right to be police? Lawyers? Astronauts? Entrepreneurs?
How dare researchers go home to their families until they have cured cancer?
Indeed, Kern’s framing here warrants an examination of just what cancer patients can demand from researchers (or any other members of society), and on what basis. But that is a topic so meaty that it will require its own post.
Besides which, I have a pile of work I brought home that I have to start plowing through.
At White Coat Underground, PalMD ponders what to make of members of the same professional community with divergent views of the ethical principles that ought to guide them:
As I thought a bit more about the doctor who wrote the letter to the editor we discussed yesterday, I wondered how two similarly-trained doctors (he and I) could come to such different conclusions about ethical behavior.
The generally agreed upon set of medical ethics we work with has developed over centuries. Patient confidentiality, for example, was demanded by Hippocrates of Kos. But many of the medical ethics we work with are fairly modern developments that reflect the thinking of our surrounding society. The changing weight of patient dignity and autonomy vs. physician paternalism is such an example.
The very fact that our views (individually and collectively) of what is or is not ethical change over time is important to notice. The folks who believe there are “moral facts” in the world for us to discover might account for this in terms of improvements in our ability to perceive such moral facts (or maybe an improvement in our willingness to look for them). Myself, I’m not sure you need to be committed to the existence of objective moral facts to grant that the project of sharing a world with others may change in important and interesting ways as our societies do. And, I don’t think we can rule out the possibility that in some respects, earlier generations may have been jerks, and that we can do better ethically, or at least try to.
“Justice” makes its official entry into the list of essential ethical principles that need to guide research with human subjects (whether biomedical or not) in the Belmont Report, produced by a commission convened in response to the Tuskegee syphilis experiment. That 40-year-long study was notable for how unequally the risks of the research and the benefits from the knowledge it produced were distributed, and the public outcry when the study was exposed in the newspapers (while it was still ongoing) made it clear that the behavior of the researchers was ethically abhorrent in the eyes of a significant segment of the American public.
In Belmont, it’s worth noting, justice is one of three guiding principles (the other two being beneficence and respect for persons). The authors of Belmont acknowledge that the tensions that sometimes arise between these three principles can make it difficult to work out the best thing (ethically speaking) to do. However, attention to these three principles can help us rule certain courses of action right out (because they wouldn’t fit with any of the principles, or only kind of fit with one while violating the other two, etc.). It’s not a matter of throwing one of the three principles overboard when the tensions arise, but rather of finding a way to do the best you can by each of them.
On the matter of someone who might say, “I don’t believe justice is an essential ethical principle, so I’m going to opt out of being guided by it,” here’s my take on things:
Ethics do not begin and end with our personal commitments. Ethics are all about sharing a world with other people whose interests and needs may be quite different from our own. Ethical principles are meant to help us remember that other people’s interests and needs have value, too, and that we can’t just trash them because it’s inconvenient for us to take those interests and needs seriously. In other words, in ethics IT IS NEVER ALL ABOUT YOU.
This is not to say that there aren’t struggles (especially in a pluralistic society) about the extent of our ethical obligations to others. But you can’t opt out without opting out of that society.
And here’s where we get to the researcher or physician (my expertise is in the ethical standards guiding communities of researchers, but PalMD notes that medical ethics now embraces justice as a guiding principle). He’s free to say, “I’ll have no truck with justice,” if he is prepared as well to opt out of membership in that professional community. Alternatively, he can stay and try to make a persuasive case to his professional community that justice ought not to be one of the community’s shared ethical values; if he changes enough minds, so goes the community. (This could have implications for how willing the broader society is to tolerate this professional community, but that’s a separable issue.*)
But, he cannot claim to be part of the community while simultaneously making a unilateral decision that one of the community’s explicitly stated shared values does not apply to him.
I think Pal nicely captures why physicians (among others) should take the community standards seriously:
Why should physicians adhere to any code of ethics? Can’t we just each rely on ourselves as individuals to do what’s right?
As doctors we are given extraordinary privileges and responsibilities. Physicians have always recognized that this demands high standards of behavior. The way we act professionally must take into account not just what we each believe, but what our patients and our society believes. Ethics are easy if we all have the same values. Ethics get hard when we don’t share beliefs. And when we don’t share beliefs, we must at the very least remember our core principles, those of helping our patients, and not causing them harm; of granting them autonomy and privacy; of treating them with basic human dignity.
Even physicians have to share a world with the rest of us. Our ethics, whether as members of professional communities or of society at large, are a framework to help us share that world. Maybe you can make a case for opting out of an ethical principle you don’t care for if you are the supreme leader of your world, or have a world of your very own with no world-mates. Otherwise, it behooves you to figure out how to play well with others, even if sometimes that’s hard.
_____
*While it’s a separable issue, it’s worth noting, as I have before, that the codes of conduct, ethical principles, and such adopted by professional communities exist in part to reassure the broader public that these professional communities mean the public well and don’t plan to prey on them.
In a recent post about a study of plagiarism in the personal statements of applicants for medical residency programs, the issue of professionalism reared its head. The authors of that study identified plagiarism in these application essays as a breach of professionalism, and one likely to be a harbinger of more such breaches as the applicant’s medical career progressed. Moreover, the authors noted that:
increasing public scrutiny of physicians’ ethical behavior is likely to put pressure on training programs to enforce strict rules of conduct, beginning with the application process.
I think it’s worth taking a closer look at what “professionalism” encompasses and at why it would be important to a professional community (like the professional community of physicians). To do this, let’s go way back to an era when physicians were working very hard to distinguish themselves from some of the other thinkers and purveyors of services in the public square – the time when the physicians known as the Hippocratics were flourishing in ancient Greece.
These physicians were working to make medicine a more scientific practice. They sought not just ways to heal, but an understanding of why these treatments were effective (and of how the bodies they were treating worked). But another big part of what the Hippocratics were trying to do involved establishing standards to professionalize their healing practices – and trying to establish a public reputation that would leave the public with a good opinion of learned medicine. After all, they weren’t necessarily pursuing medical knowledge for its own sake, but because they wanted to use it to help patients (and to make a living from providing these services). However, getting patients depended on being judged trustworthy by the people who might need treatment.
Professionalism, in other words, had to do not only with the relationship between members of the professional community but also with the relationship between that professional community and the larger society in which it was embedded.
The physicians in this group we’re calling the Hippocratics left a number of writings, including a statement of their responsibilities called “The Oath”. It’s worth noting that the Hippocratic corpus contains a diversity of works that reflect some significant differences of opinion among the physicians in this community – including some works (on abortion and surgery, for example) that seem to contradict some of the specific claims of “The Oath”. Still, “The Oath” gives us pretty good insight into the kind of concerns that would motivate a community of practitioners who were trying to professionalize.
We’re going to look at “The Oath” in its entirety, with my commentary interspersed. I’m using the translation by J. Chadwick in Hippocratic Writings, edited by G.E.R. Lloyd.
I swear by Apollo the healer, by Aesculapius, by Health and all the powers of healing, and call to witness all the gods and goddesses that I may keep this Oath and Promise to the best of my ability and judgment.
In other words, it’s a serious oath.
I will pay the same respect to my master in the Science as to my parents and share my life with him and pay all my debts to him. I will regard his sons as my brothers and teach them the Science, if they desire to learn it, without fee or contract.
This is a recognition of the physician’s debt to his professional community, those who taught him. It’s also a recognition of his duty to educate the next generation of the profession.
I will hand on precepts, lectures and all other learning to my sons, to those of my master and to those pupils duly apprenticed and sworn, and to none other.
This part is all about keeping trade secrets secret. The assumption was that learned medicine involved knowledge that should not be shared with everyone, especially because a lot of people wouldn’t have the wisdom or intelligence or good character to use it appropriately. Also, given that these physicians wanted to be able to earn a living from their healing practices, they needed to keep something of a monopoly on this knowledge.
I will use my power to help the sick to the best of my ability and judgment; I will abstain from harming or wronging any man by it.
Here’s the recognition of the physician’s duty to his patients, the well-known commitment to do no harm. Obviously, this commitment is in the patients’ interests, but it’s also tied to the reputation of the professional community. Maintaining good stats, as it were, by not doing any harm should be expected to raise the public’s opinion of the profession of learned medicine.
I will not give a fatal draught to anyone if I am asked, nor will I suggest any such thing. Neither will I give a woman means to procure an abortion.
These two sentences forbid the physician’s participation in euthanasia or abortion. Note, however, that other writings in the Hippocratic corpus indicate that physicians in this tradition did participate in such procedures. Maybe this was a matter of local variations in what the physicians (and the public they served) found acceptable. Maybe there was a healthy debate among the Hippocratics about these practices.
I will be chaste and religious in my life and in my practice.
This part basically calls upon the physician to conduct himself as a good person. After all, the reputation of the whole profession would be connected, at least in the public’s view, to the reputation of individual practitioners.
I will not cut, even for the stone, but I will leave such procedures to the practitioners of that craft.
Cutting was the turf of surgeons, not physicians. Here, too, there are other writings in the Hippocratic corpus that indicate that physicians in this tradition did some surgery. However, before the germ theory of disease or the discovery of antibiotics, you might imagine that performing surgery could lead to a lot of complications, running afoul of the precept to do no harm. Again, that was going to hurt the professional community’s stats, so it seemed reasonable just to leave it to the surgeons and let them worry about maintaining their own reputation.
Whenever I go into a house, I will go to help the sick and never with the intention of doing harm or injury.
This reads as an awareness of the physician’s power and of the responsibilities that come with it. If patients are trusting the physician and giving him this privileged access, for the good of the professional community he had better live up to that trust.
I will not abuse my position to indulge in sexual contacts with the bodies of women or men, whether they be freemen or slaves.
This is more of the same. Having privileged access means you have the opportunity to abuse it, but that kind of abuse could tarnish the reputation of the whole profession, even of physicians whose conduct met the highest standards of integrity.
Whatever I see or hear, professionally or privately, which ought not to be divulged, I will keep secret and tell no one.
To modern eyes, this part might suggest a commitment to maintain patient privacy. It’s more likely, however, that this was another admonition to protect the trade secrets of the professional community.
If, therefore, I observe this Oath and do not violate it, may I prosper both in my life and in my profession, earning good repute among all men for all time. If I transgress and forswear this Oath, may my lot be otherwise.
“Swear to God and hope to die, stick a needle in my eye!” Did we mention that it’s a serious oath?
The main thing I think is worth noticing here is the extent to which professionalism is driven by a need for the professional community to build good relations with the larger society – the source of their clients. Pick any modern code of conduct from a professional society and you will see the emphasis on duties to those clients, and to the larger public those clients inhabit, but this emphasis is at least as important for the professional community as for the people their profession is meant to serve. The code describes the conduct that members should exhibit to earn the trust of the public, without which they won’t get to practice their profession – or, at any rate, they might not be viewed as having special skills worth paying for, or as being the kind of people who could be trusted not to use those special skills against you.
Professionalism is not idealistic, then, but extremely pragmatic.