Ethics and the First of April.

On the Twitters, journalist Lee Billings posted this:

In case anyone was wondering, this is an April-Fools-free zone. Misleading readers is a disreputable practice, even under auspices of fun.

I think this is a position worth pondering, especially today.

Regular readers will have noticed that I indulge in the occasional April Fool’s post.

In fact, I posted another today.

And, today’s April Fool’s post was notable (for me, anyway) in its departure from “surprising news about me and/or my blog” terrain. Instead, I was offering “commentary” on a news story that I made up — fake news that was outrageous but had just enough plausibility that the reader might entertain the possibility that it was true. (Lately, the “real” news strikes me as outrageous a lot of the time, so my suspension-of-disbelief muscles are more toned than they used to be.)

So Lee Billings is, basically, right: I was striving to mislead you, my readers, at least momentarily, for laughs.

Was it unethical for me to have done so? Have I fallen short of my duties to you by engaging in this tomfoolery?

Maybe this comes down to what we understand those duties to be.

I try always to make my own thinking on an issue clear — to explain my stand and to give you reasons for that stand. I also try to set out my uncertainties — the things I don’t know or the places I feel myself torn between different stances.

I actually did this in today’s April Fool’s post, even though I was giving my thoughts on the implications of a proposal that no one has made (yet).

When I’m responding to a news story, I accept that I have an obligation not to misrepresent the claims the story makes. This is not to say that I treat the source as authoritative — indeed, in a number of cases I have expressed my own views of the “spin” of the reporting, and of the details that are not discussed in a news story. And, I include a link to the source so readers can read it themselves, evaluate it themselves, and draw their own conclusions about whether I’ve represented the source fairly.

Today’s post had me responding to a news story that didn’t exist. Clearly, that’s a misrepresentation. Moreover, it means that the link I included to the news story didn’t actually go to the news story — more misrepresentation. However, the diligent reader who actually clicked on that link would be alerted to the fact that there was no such news story before getting into my presentation of the purported proposal or analysis of it.

Maybe this means that readers who were successfully misled by the post actually fell short of their duties to click those links and read that source material with a critical eye.

It’s possible, though, that I’m wrong about this — that you all want me to break things down so clearly and accurately that you never have to click a hyperlink, that you’d like me to dispense with ironic phrasing (yeah, right!), and so forth. My sense is that readers of this blog have been willing to shoulder their share of the cognitive burden, but if I’m mistaken about that, please use the comments to set me straight.

The other ethical worry one might have (and some have expressed) about today’s post is that my fake proposal might be taken up and advocated as a real proposal — which, in this case, I agree would be bad. If that were to happen, would I be responsible?

I guess I might. But then so might authors of dystopian fiction whose ideas are embraced (and implemented) by people who have a different view of how the world should be. Personally, I think exploring the pitfalls of bad ideas before someone thinks to implement them could help us to actually find better ideas to implement. However, I suppose where bad ideas that get implemented come from is an empirical question.

Does anyone have a good way to get the empirical data that would answer it?

Repost: The ethics of snail eradication.

Since I recently reposted an explanation of one method for dispatching snails and slugs, it seems only fair that I also repost my discussion of whether it’s ethical for me to be killing the snails in my garden to begin with.

In the comments of one of my snail eradication posts, Emily asks some important questions:

I’m curious about how exactly you reason the snail-killing out ethically alongside the vegetarianism. Does the fact that there’s simply no other workable way to deal with the pests mean the benefits of killing them outweigh the ethical problems? Does the fact that they’re molluscs make a big difference? Would you kill mice if they were pests in your house? If you wanted to eat snails, would you? Or maybe the not-wanting-to-kill-animals thing is a relatively small factor in your vegetarianism?


Quoted for truth.

Anil Dash, writing about the (old) media gnashing-of-teeth about “cyberbullying” in the aftermath of the suicide of Rutgers student Tyler Clementi:

It’s important to note that blaming technology for horrendous, violent displays of homophobia or racism or simple meanness lets adults like parents and teachers absolve themselves of the responsibility to raise kids free from these evils. By creating language like “cyberbullying”, they abdicate their own role in the hateful actions, and blame the (presumably mysterious and unknowable) new technologies that their kids use for these awful situations.

What do cancer researchers owe cancer patients?

As promised, today I’m returning to this essay (PDF) by Scott E. Kern about the sorry state of cancer researchers at Johns Hopkins to consider the assumptions he seems to be making about what cancer patients can demand from researchers (or any other members of society), and on what basis.

Let’s review the paragraph of Kern’s essay that dropped his framing of the ethical issue like an anvil:

During the survey period, off-site laypersons offer comments on my observations. “Don’t the people with families have a right to a career in cancer research also?” I choose not to answer. How would I? Do the patients have a duty to provide this “right”, perhaps by entering suspended animation? Should I note that examining other measures of passion, such as breadth of reading and fund of knowledge, may raise the same concern and that “time” is likely only a surrogate measure? Should I note that productive scientists with adorable family lives may have “earned” their positions rather than acquiring them as a “right”? Which of the other professions can adopt a country-club mentality, restricting their activities largely to a 35–40 hour week? Don’t people with families have a right to be police? Lawyers? Astronauts? Entrepreneurs?

There’s a bit of weirdness here that I will note and then set aside, namely formulating the question as one of whether people with families have a right to a career in cancer research, rather than whether cancer researchers have a right to have families (or any other parts of their lives that exist beyond their careers).

Framing it this way, it’s hard not to suspect that Kern is the guy on the search committee who is poised to torpedo the job application of any researcher with the temerity to show any evidence of a life that might need balancing with work — the guy on the search committee who is open about wanting to hire workaholics who have no place else to go but the lab and thus can be expected to yield a higher research output for the same salary. Talented applicants with families (or aspirations to have them), or even hobbies, are a bad risk to a guy like this. And besides, if they need that other stuff too, how serious can they be about research?

If Hopkins has a policy of screening out applicants for research positions on the basis that they have families, or hobbies, or interests that they intend to pursue beyond their work duties, I’m sure that they make this policy clear in their job advertisements. Surely, this would be the sort of information a university would want to share with job seekers.

For our discussion here, let’s start with what I take to be the less odd formulation of the question: Do cancer researchers have a right to a life outside of work?

Kern’s suggestion is that this “right,” when exercised by researchers, is something that cancer patients end up paying for with their lives (unless they go into suspended animation while cancer researchers are spending time with their families or puttering around their gardens).

The big question, then, is what the researcher’s obligations are to the cancer patient — or to society in general.

For that matter, what are society’s obligations to the cancer patient? What are society’s obligations to researchers? And what are the cancer patient’s obligations in all of this?

I’ve written before about the assertion that scientists are morally obligated to practice science (including conducting research). I’ll quote some of the big reasons offered to bolster this assertion from my earlier post:

  • society has paid for the training the scientists have received (through federal funding of research projects, training programs, etc.)
  • society has pressing needs that can best (only?) be addressed if scientific research is conducted
  • those few members of society who have specialized skills that are needed to address particular societal needs have a duty to use those skills to address those needs (i.e., if you can do research and most other people can’t, then to the extent that society as a whole needs the research that you can do, you ought to do it)

Needless to say, finding cures and treatments for cancer would be among those societal needs.

This is the whole Spider Man thing: with great power comes great responsibility, and scientific researchers have great power. If cancer researchers won’t help find cures and treatments for cancer, who else can?

Here, I think we should pause to note that there may well be an ethically relevant difference between offering help and doing everything you possibly can. It’s one thing to donate a hundred bucks to charity and quite another to give all your money and sell all your worldly goods in order to donate the proceeds. It’s a different thing for a healthy person to donate one kidney than to donate both kidneys plus the heart and lungs.

In other words, there is help you can provide, but there seems also to be a level of help that it would be wrong for anyone else to demand of you.*

And once we recognize that such a line exists, I think we have to recognize that the needs of cancer patients do not — and should not — trump every other interest of other individuals or of society as a whole. If a cancer patient cannot lay claim to the heart and lungs of a cancer researcher, then neither can that cancer patient lay claim to every moment of a cancer researcher’s time.

Indeed, in this argument of duties that spring from ability, it seems fair to ask why it is not the responsibility of everyone who might get cancer to train as a cancer researcher and contribute to the search for a cure. Why should tuning out in high school science classes, or deciding to pursue a degree in engineering or business or literature, excuse one from responsibility here? (And imagine how hard it’s going to be to get kids to study for their AP Chemistry or AP Biology classes when word gets out that their success is setting them up for a career where they ought never to take a day off, go to the beach, or cultivate friendships outside the workplace. Nerds can connect the dots.)

Surely anyone willing to argue that cancer researchers owe it to cancer patients to work the kind of hours Kern seems to think would be appropriate ought to be asking what cancer patients — and the precancerous — owe here.

Does Kern think researchers owe all their waking hours to the task because there are so few of them who can do this research? Reports from job seekers over the past several years suggest that there are plenty of other trained scientists who could do this research but have not secured employment as cancer researchers. Some may be employed in other research fields. Others, despite their best efforts, may not have secured research positions at all. What are their obligations here? Ought those employed in other research areas to abandon their current research to work on cancer, departments and funders be damned? Ought those who are not employed in a research field to be conducting their own cancer research anyway, without benefit of institution or facilities, research funding or remuneration?

Why would we feel scientific research skills, in particular, should make the individuals who have them so subject to the needs of others, even to the exclusion of their own needs?

Verily, if scientific researchers and the special skills they have are so very vital to providing for the needs of other members of society — vital enough that people like Kern feel it’s appropriate to harangue them for wanting any time out of the lab — doesn’t society owe it to its members to give researchers every resource they need for the task? Maybe even to create conditions in which everyone with the talent and skills to solve the scientific problems society wants solved can apply those skills and talents — and live a reasonably satisfying life while doing so?

My hunch is that most cancer patients would actually be less likely than Kern to regard cancer researchers as of merely instrumental value. I’m inclined to think that someone fighting a potentially life-threatening disease would be reluctant to deny someone else the opportunity to spend time with loved ones or to savor an experience that makes life worth living. To the extent that cancer researchers do sacrifice some aspects of the rest of their life to make progress on their work, I reckon most cancer patients appreciate these sacrifices. If more is needed for cancer patients, it seems reasonable to place this burden on society as a whole — teeming with potential cancer patients and their relatives and friends — to enable more (and more effective) cancer research to go on without enslaving the people qualified to conduct it, or writing off their interests in their own human flourishing.

Kern might spend some time talking with cancer patients about what they value in their lives — maybe even using this to help him extrapolate some of the things his fellow researchers might value in their lives — rather than just using them to prop up his appeal to pity.

_____
*Possibly there is also a level of help that it would be wrong for you to provide because it harms you in a fundamental and/or irreparable way.

Are ethical principles optional?

At White Coat Underground, PalMD ponders what to make of members of the same professional community with divergent views of the ethical principles that ought to guide them:

As I thought a bit more about the doctor who wrote the letter to the editor we discussed yesterday, I wondered how two similarly-trained doctors (he and I) could come to such different conclusions about ethical behavior.

The generally agreed upon set of medical ethics we work with has developed over centuries. Patient confidentiality, for example, was demanded by Hippocrates of Kos. But many of the medical ethics we work with are fairly modern developments that reflect the thinking of our surrounding society. The changing weight of patient dignity and autonomy vs. physician paternalism is such an example.

The very fact that our views (individually and collectively) of what is or is not ethical change over time is important to notice. The folks who believe there are “moral facts” in the world for us to discover might account for this in terms of improvements in our ability to perceive such moral facts (or maybe an improvement in our willingness to look for them). Myself, I’m not sure you need to be committed to the existence of objective moral facts to grant that the project of sharing a world with others may change in important and interesting ways as our societies do. And, I don’t think we can rule out the possibility that in some respects, earlier generations may have been jerks, and that we can do better ethically, or at least try to.

“Justice” makes its official entry into the list of essential ethical principles that need to guide research with human subjects (whether biomedical or not) in the Belmont Report, which was written in response to the Tuskegee syphilis experiment. That 40-year-long study was notable for how unequally the risks of the research and the benefits from the knowledge it produced were distributed, and the public outcry when the study was exposed in the newspapers (while it was still ongoing) made it clear that the behavior of the researchers was ethically abhorrent in the eyes of a significant segment of the American public.

In Belmont, it’s worth noting, justice is one of three guiding principles (the other two being beneficence and respect for persons). The authors of Belmont acknowledge that the tensions that sometimes arise between these three principles can make it difficult to work out the best thing (ethically speaking) to do. However, attention to these three principles can help us rule certain courses of action right out (because they wouldn’t fit with any of the principles, or only kind of fit with one while violating the other two, etc.). It’s not a matter of throwing one of the three principles overboard when the tensions arise, but rather of finding a way to do the best you can by each of them.

On the matter of someone who might say, “I don’t believe justice is an essential ethical principle, so I’m going to opt out of being guided by it,” here’s my take on things:

Ethics do not begin and end with our personal commitments. Ethics are all about sharing a world with other people whose interests and needs may be quite different from our own. Ethical principles are meant to help us remember that other people’s interests and needs have value, too, and that we can’t just trash them because it’s inconvenient for us to take those interests and needs seriously. In other words, in ethics IT IS NEVER ALL ABOUT YOU.

This is not to say that there aren’t struggles (especially in a pluralistic society) about the extent of our ethical obligations to others. But you can’t opt out without opting out of that society.

And here’s where we get to the researcher or physician (my expertise is in the ethical standards guiding communities of researchers, but PalMD notes that the current position of medical ethics now embraces justice as a guiding principle). He’s free to say, “I’ll have no truck with justice,” if he is prepared as well to opt out of membership in that professional community. Alternatively, he can stay and try to make a persuasive case to his professional community that justice ought not to be one of the community’s shared ethical values; if he changes enough minds, so goes the community. (This could have implications for how willing the broader society is to tolerate this professional community, but that’s a separable issue.*)

But, he cannot claim to be part of the community while simultaneously making a unilateral decision that one of the community’s explicitly stated shared values does not apply to him.

I think Pal nicely captures why physicians (among others) should take the community standards seriously:

Why should physicians adhere to any code of ethics? Can’t we just each rely on ourselves as individuals to do what’s right?

As doctors we are given extraordinary privileges and responsibilities. Physicians have always recognized that this demands high standards of behavior. The way we act professionally must take into account not just what we each believe, but what our patients and our society believes. Ethics are easy if we all have the same values. Ethics get hard when we don’t share beliefs. And when we don’t share beliefs, we must at the very least remember our core principles, those of helping our patients, and not causing them harm; of granting them autonomy and privacy; of treating them with basic human dignity.

Even physicians have to share a world with the rest of us. Our ethics, whether as members of professional communities or of society at large, are a framework to help us share that world. Maybe you can make a case for opting out of an ethical principle you don’t care for if you are the supreme leader of your world, or have a world of your very own with no world-mates. Otherwise, it behooves you to figure out how to play well with others, even if sometimes that’s hard.
_____
*While it’s a separable issue, it’s worth noting, as I have before, that the codes of conduct, ethical principles, and such adopted by professional communities exist in part to reassure the broader public that these professional communities mean the public well and don’t plan to prey on them.

Back-to-school crankiness.

For the Free-Ride offspring, this is only the seventh school day of the new academic year, and already the weekly newsletter from their elementary school has achieved a tone that could most charitably be described as weary:

This is a large school with over N students. Please consider the priorities of the school staff when making personal requests. It is unreasonable to meet with staff 3 times to make the same request, after you have been denied in person. Thanks you for considering the needs of the other N-1 students when making personal requests for your child.

My thoughts:

  1. If this (or really, anything the school does) succeeds in cultivating a bit more empathy and altruism from the parents, I will be impressed. And surprised.
  2. Was this item in the newsletter prompted by multiple parents engaging in this kind of won’t-take-no-for-an-answer behavior? Or just one child’s parents?
  3. If just one child’s parents, I’m suddenly curious about just what they were requesting, and why that request was shot down so decisively (not to mention why it was so important to keep asking after the first denial).
  4. Also, if just one child’s parents, I wonder if those parents recognize that this paragraph in the newsletter is about them.
  5. Finally, given the priorities of school staff, is what may amount to an admonishment to one set of parents (out of something on the order of magnitude of N sets of parents) a good use of scarce staff time?

Fasten your seat-belt. It’s shaping up to be one of those school years.

When applicants for medical residencies plagiarize.

Long-time readers of this blog will know that plagiarism is a topic that comes up with some regularity, sometimes fueled by “kids today!” stories from the mainstream media, and sometimes due to actual research on plagiarism in different educational and professional spheres.

Today, let’s have a look at a report of one such investigation, “Plagiarism in Residency Application Essays,” published July 20, 2010 in Annals of Internal Medicine. The investigators looked at the personal statements applicants wrote (or, in some cases, “wrote”) as part of their application to residency programs at Brigham and Women’s Hospital. As they describe their study:

The primary goals of this investigation were to estimate the prevalence of plagiarism in applicants’ personal statements at our institution and to determine the association of plagiarism with demographic, educational, and experience related characteristics of the applicants. (112)

The people applying to residency programs have already successfully completed medical school. The residency is an additional part of their training to help them prepare to practice a particular medical specialty. And, the personal statement is a standard part of what’s involved in applying for a residency:

All applicants to U.S. residency programs must complete an original essay known as the “personal statement.” The format is free-form, the content is not specified, and expectations may vary by specialty. Common themes include the motivation for seeking training in a chosen specialty, the factors that affect suitability for a field or program, a critical incident that affected the applicant’s career choice, and circumstances that distinguish the applicant from others. (112)

There are some fairly commonsense reasons to expect that these personal statements ought to be original work, written by the applicant rather than copied from some other source. After all, the personal essay represents the applicant to the residency program, not as a transcript or a set of test scores but as a person. The essay gives insight into why the applicant is interested in a particular medical specialty, what training experiences and life experiences might bear on his or her motivation or likelihood of success, what kind of personal qualities he or she will bring to the table.

Also, since plagiarism is explicitly forbidden, these essays may give insight into the applicant’s personal and academic integrity, or at least into his or her grasp of rudimentary rules of scholarship:

The ERAS [Electronic Residency Application Service] also warns applicants that “any substantiated findings of plagiarism may result in reporting of such findings to the programs to which [they] apply now and in the future”. Applicants must certify that work is accurate and original before an ERAS application is complete. (112)

In the study, the investigators performed an analysis of the personal statements in residency program applications to Brigham and Women’s Hospital over an interval of about 18 months. They analyzed 4975 essays using software that compared them with a database that included previously submitted essays, published works, and Internet pages.

For the purposes of the study, the researchers defined evidence of plagiarism as a match of more than 10% of an essay to an existing work. Since the software was flagging matching strings of words between the essays and the sources in the database, this methodology may well have missed instances of plagiarism where the plagiarist changed a word here or there.
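The paper’s criterion — flagging an essay when matching word strings account for more than 10% of it — can be sketched roughly as follows. This is a simplified illustration, not the actual software used in the study: the six-word “shingle” size and the function names are my assumptions, while the more-than-10% threshold is the study’s own definition.

```python
# Sketch of word-sequence ("shingle") matching of the kind
# plagiarism-detection software performs. Six-word shingles are an
# illustrative choice; the >10% threshold is from the study.

def shingles(text, n=6):
    """Return the set of n-word sequences appearing in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def match_fraction(essay, source, n=6):
    """Fraction of the essay's word sequences also found in the source."""
    essay_shingles = shingles(essay, n)
    if not essay_shingles:
        return 0.0
    return len(essay_shingles & shingles(source, n)) / len(essay_shingles)

def flag_for_review(essay, database, threshold=0.10, n=6):
    """Flag an essay if more than `threshold` of it matches any prior work."""
    return any(match_fraction(essay, source, n) > threshold
               for source in database)
```

As the limitation noted above suggests, exact-sequence matching like this is easy to defeat: changing a single word breaks every shingle that contains it, which is why a plagiarist who swapped a word here or there could slip past.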

It’s also worth noting that the authors point, in the Discussion section of the paper, to the following definition of plagiarism:

Plagiarism may be defined as “the action or practice of taking someone else’s work, idea, etc., and passing it off as one’s own; literary theft”. (114)

This definition seems (at least to my eye) to make intent an element of the crime. As we’ve discussed before, this requirement is by no means a standard part of the definition of plagiarism.

What did this research find? In the 4975 essays analyzed, they detected evidence of plagiarism (i.e., a match of more than 10%) in 5.2% of the essays, for an incidence of a little more than one plagiarized paper in 20. Rather than relying solely on the software analysis, the researchers examined the essays the software flagged for plagiarism to rule out false positives. (They found none.)
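For concreteness, the reported rate can be converted back into counts; this back-of-the-envelope arithmetic is mine, not the paper’s:

```python
essays_analyzed = 4975
plagiarism_rate = 0.052   # 5.2% of essays matched prior work at >10%

flagged = essays_analyzed * plagiarism_rate   # roughly 259 essays
one_in = 1 / plagiarism_rate                  # roughly 19, i.e. a bit more than 1 in 20
print(round(flagged), round(one_in))
```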

I’m not sure whether this frequency of plagiarism is unusually high (or unusually low). However, for a personal statement, I reckon this is higher than it should be. Again, what better source could there be for your personal statement than yourself? Still, we might want some data on the frequency of plagiarism in personal statements for other sorts of things to get a better sense of whether the results of this study indicate a special problem with people applying for medical residencies, or whether they reflect a basic human frailty of which people applying for medical residencies also partake.

The authors also report demographic trends that emerged in their results. They found a higher incidence of plagiarism among the applicants who were:

  • international (which included non-U.S. citizens and those who had attended medical school outside the U.S.)
  • older
  • fluent in languages other than English
  • applying for a residency with previous residency training under their belts

They found a lower incidence of plagiarism among the applicants who:

  • were members of Alpha Omega Alpha (a medical honor society)
  • had research experience
  • had volunteer experience
  • had higher scores on the U.S. Medical Licensing Exam Step 1

The authors offer no hypotheses about causal mechanisms that might account for these correlations, and it seems likely that more research is required to tease out the factors that might contribute to these demographic differences, not to mention strategies that might address them. (I’m guessing that the applicants with research experience and/or volunteer experience had an easier time finding stuff to write about in their personal essays.)

One might reasonably ask whether plagiarism in these personal essays is a problem that ought to worry those training the next generation of physicians. The authors of this study argue that it is. They write:

First, residency selection committees would probably find misrepresentation on the application to be a strong negative indicator of future performance as a resident. The Accreditation Council for Graduate Medical Education has deemed professionalism 1 of the 6 core competencies to be taught and assessed in undergraduate and graduate medical education. We believe that program directors would find a breach of professionalism in an application to be an unacceptable baseline from which to begin residency. Second, lapses in professionalism in medical school and residency training can be predictive of future disciplinary action by state medical boards. Third, increasing public scrutiny of physicians’ ethical behavior is likely to put pressure on training programs to enforce strict rules of conduct, beginning with the application process. (114-115)

The presumption is that honesty is a quality that physicians (and those training to be physicians) ought to display — that there is something wrong with lying not only to the patients you are treating but also to other members of your professional community. Indeed, the “professionalism” to which the authors refer is important in large part because it allows members of the larger public to recognize the professional community of physicians as possessing the necessary skills, judgment, and trustworthiness. Without this recognition, why should your average patient trust an M.D. any more than a snake-oil salesman?

In this study, as in all studies with human subjects, the researchers were required to look out for the interests of their human subjects — here, the applicants to the residency programs who wrote the personal essays that were analyzed. Protecting their interests included maintaining the anonymity of the authors of the essays in the context of the study. This, in turn, means that it’s possible that the plagiarism identified in the study may not have been identified by the residency selection committees who were also reading these essays.

Finally, near the end of the paper, the authors offer recommendations for how to address the general problem of plagiarism in applications for residency programs:

Ideally, the submission of applicant essays for comparison in a centralized database would occur at the level of ERAS, which would make this process unavoidable for applicants. This method also would eliminate the difficulties inherent in having multiple institutions using plagiarism detection software programs simultaneously, because submitted essays become part of the database for future submissions. Furthermore, manual inspection of the similarity report itself rather than simply reporting the score would allow individual program directors to make independent judgments about the seriousness of any putative offense. Finally, the mere knowledge that essays are being screened by plagiarism-detection software may substantially deter would-be plagiarizers. (119)

These recommendations are clearly leaning toward detecting plagiarism that has been committed, rather than being weighted toward prevention efforts. As they note, and as other researchers have found, an expectation that there will be a plagiarism screening may discourage applicants from committing plagiarism, but it’s possible that prevention efforts that depend on fear of detection may just end up separating the risk-averse applicants from the gamblers.

Segal S, Gelfand BJ, Hurwitz S, Berkowitz L, Ashley SW, Nadel ES, & Katz JT (2010). Plagiarism in residency application essays. Annals of Internal Medicine, 153(2), 112-120. PMID: 20643991

College kids and their plagiarism (or college professors and their quaint insistence on proper citation of sources).

Today, The New York Times has an article about students and plagiarism that I could have sworn I’ve read at least a dozen times before, at least in its general gist.

As an exercise, before you click through to read the article, grab some paper and a pencil and jot down two or three reasons you think will be offered that the current generation of college students does not grasp the wrongness of using the words and ideas of others without attribution.

Is your list ready?

Save us from the armchair philosopher with a blog.

In what is surely a contender for the photo next to the “business as usual in the blogosphere” entry in the Wiktionary, a (male) blogger has posted a list of the sexiest (all-but-one female) scientists (using photos of those scientists obtained from the web without any indication that he had also obtained proper permission to use those photos in his post), and now the blogger says he wants to know what could possibly be wrong about making such a post.
