Getting scientists to take ethics seriously: strategies that are probably doomed to failure.

As part of my day-job as a philosophy professor, I regularly teach a semester-long “Ethics in Science” course at my university. Among other things, the course is intended to help science majors figure out why being ethical might matter to them if they continue on their path to becoming working scientists and devote their careers to the knowledge-building biz.

And, there’s a reasonable chance that my “Ethics in Science” course wouldn’t exist but for strings attached to training grants from federal funding agencies requiring that students funded by these training grants receive ethics training.

The funding agencies demand the ethics training component largely in response to high profile cases of federally funded scientists behaving badly on the public’s dime. The bad behavior suggests some number of working scientists who don’t take ethics seriously. The funders identify this as a problem and want the scientists who receive grants from them to take ethics seriously. But the big question is how to get scientists to take ethics seriously.

Here are some approaches to that problem that strike me as unpromising:

  • Delivering ethical instruction that amounts to “don’t be evil” or “don’t commit this obviously wrong act”. Most scientists are not mustache-twirling villains, and few are so ignorant that they wouldn’t know that the obviously wrong acts are obviously wrong. If ethical training is delivered with the subtext of “you’re evil” or “you’re dumb,” most of the scientists to whom you’re delivering it will tune it out, since you’re clearly talking to someone else.
  • Reducing ethics to a laundry list of “thou shalt not …” Ethics is not simply a matter of avoiding bad acts — and the bad acts are not bad simply because federal regulations or your compliance officer say they are bad. There is a significant component of ethics concerned with positive action — doing good things. Presenting ethics as results instead of a process — as a set of things the ethics algorithm says you shouldn’t do, rather than a set of strategies for evaluating the goodness of various courses of action you might pursue — is not very engaging. Besides, you can’t even count on this approach for good results, since refraining from particular actions that are expressly forbidden is no guarantee you won’t find some not-expressly-forbidden action that’s equally bad.
  • Presenting ethics as something you have to talk about because the funders require that you talk about it. If you treat the ethics-talk as just a string attached to your grant money, but something with which you wouldn’t waste your time otherwise, you’re identifying attention to ethics as a thing that gets in the way of research rather than as something that supports research. Once you’ve fulfilled the requirement to have the ethics-talk, would you ever revisit ethics, or would you just get down to the business of research?
  • Segregating attention to ethics in a workshop, class, or training session. Is ethics something the entirety of which you can “do” in a few hours, or even a whole semester? That’s the impression scientific trainees can get from an ethics training requirement that floats unconnected from any discussion with the people training them about how to be a successful scientist. Once you’re done with your training, then, you’re done — why think about ethics again?
  • Pointing trainees to a professional code, the existence of which proves that your scientific discipline takes ethics seriously. The existence of a professional code suggests that someone in your discipline sat down and tried to spell out ethical standards that would support your scientific activities, but the mere existence of a code doesn’t mean the members of your scientific community even know what’s in that code, nor that they behave in ways that reflect the commitments put forward by it. Walking the walk is different from talking the talk — and knowing that there is a code, somewhere on your professional society’s website, that you could find if you Googled it probably doesn’t even rise to the level of talking the talk.
  • Delivering ethical training with the accompanying message that scientists who aren’t willing to cut ethical corners are at a competitive career disadvantage, and that this is just how things are. Essentially, this creates a situation where you tell trainees, “Here’s how you should behave … unless you’re really up against it, at which point you should be smart and drop the ethics to survive in this field.” And, what motivated trainee doesn’t recognize that she’s always up against it? It is important, I think, to recognize that unethical behavior is often motivated at least in part by a perception of extreme career pressures rather than by the inherent evil of the scientist engaging in that behavior. But noting the competitive advantage available for cheaters only to throw up your hands and say, “Eh, what are you going to do?” strikes me as a shrugging off of responsibility. At a minimum, members of a scientific community ought to reflect upon and discuss whether the structures of career rewards and career punishments incentivize bad behavior. If they do, members of the community probably have a responsibility to try to change those structures of career rewards and career punishments.

Laying out approaches to ethics training that won’t help scientists take ethics seriously might help a trainer avoid some pitfalls, but it’s not the same as spelling out approaches that are more likely to work. That’s a topic I’ll take up in a post to come.

How we decide (to falsify).

At the tail-end of a three-week vacation from all things online (something that I badly needed at the end of teaching an intensive five-week online course), the BBC news reader on the radio pulled me back in. I was driving my kid home from the end-of-season swim team banquet, engaged in a conversation about the awesome coaches, when my awareness was pierced by the words “Jonah Lehrer” and “resigned” and “falsified”.

It appears that the self-plagiarism brouhaha was not Jonah Lehrer’s biggest problem. On top of recycling work in ways that may not have conformed to his contractual obligations, Lehrer has also admitted to making up quotes in his recent book Imagine. Here are the details as I got them from the New York Times Media Decoder blog:

An article in Tablet magazine revealed that in his best-selling book, “Imagine: How Creativity Works,” Mr. Lehrer had fabricated quotes from Bob Dylan, one of the most closely studied musicians alive. …

In a statement released through his publisher, Mr. Lehrer apologized.

“The lies are over now,” he said. “I understand the gravity of my position. I want to apologize to everyone I have let down, especially my editors and readers.”

He added, “I will do my best to correct the record and ensure that my misquotations and mistakes are fixed. I have resigned my position as staff writer at The New Yorker.” …

Mr. Lehrer might have kept his job at The New Yorker if not for the Tablet article, by Michael C. Moynihan, a journalist who is something of an authority on Mr. Dylan.

Reading “Imagine,” Mr. Moynihan was stopped by a quote cited by Mr. Lehrer in the first chapter. “It’s a hard thing to describe,” Mr. Dylan said. “It’s just this sense that you got something to say.”

After searching for a source, Mr. Moynihan could not verify the authenticity of the quote. Pressed for an explanation, Mr. Lehrer “stonewalled, misled and, eventually, outright lied to me” over several weeks, Mr. Moynihan wrote, first claiming to have been given access by Mr. Dylan’s manager to an unreleased interview with the musician. Eventually, Mr. Lehrer confessed that he had made it up.

Mr. Moynihan also wrote that Mr. Lehrer had spliced together Dylan quotes from separate published interviews and, when the quotes were accurate, he took them well out of context. Mr. Dylan’s manager, Jeff Rosen, declined to comment.

In the practice of science, falsification is recognized as a “high crime” and is included in every official definition of scientific misconduct you’re likely to find. The reason for this is simple: scientists are committed to supporting their claims about what the various bits of the world are like, and about how they work, with empirical evidence from the world — so making up that “evidence” rather than going to the trouble to gather it is out of bounds.

Despite his undergraduate degree in neuroscience, Jonah Lehrer is not operating as a scientist. However, he is operating as a journalist — a science journalist at that — and journalism purports to recognize a similar kind of relationship to evidence. Presenting words as a quote from a source is making a claim that the person identified as the source actually said those things, actually made those claims or shared those insights. Presumably, a journalist includes such quotes to bolster an argument. Maybe if Jonah Lehrer had simply written a book presenting his thoughts about creativity, readers would have no special reason to believe it. Supporting his views with the (purported) utterances of someone widely recognized as a creative genius, though, might make them more credible.

(Here, Eva notes drily that this incident might serve to raise Jonah Lehrer’s credibility on the subject of creativity.)

The problem, of course, is that a fake quote can’t really add credibility in the way it appears to when the quote is authentic. Indeed, once discovered as fake, it has precisely the opposite effect. As with falsification in science, falsification in journalism can only achieve its intended goal as long as its true nature remains undetected.

There is no question in my mind about the wrongness of falsification here. Rather, the question I grapple with is: why do they do it?

In science, after falsified data is detected, one sometimes hears an explanation in terms of extreme pressure to meet a deadline (say, for a big grant application, or for submission of a tenure dossier) or to avoid being scooped on a discovery that is so close one can almost taste it … except for the damned experiments that have become uncooperative. Experiments can be hard, there is no denying it, and the awarding of scientific credit to the first across the finish line (but not to the others right behind the first) raises the prospect that all of one’s hard work may be in vain if one can’t get those experiments to work first. Given the choice between getting no tangible credit for a few years’ worth of work (because someone else got her experiments to work first) and making up a few data points, a scientist might well feel tempted to cheat. That scientific communities regard falsifying data as such a serious crime is meant to reduce that temptation.

There is another element that may play an important role in falsification, one brought to my attention some years ago in a talk given by C. K. Gunsalus: the scientist may have such strong intuitions about the bit of the world she is trying to describe that gathering the empirical data to support these intuitions seems like a formality. If you’re sure you know the answer, the empirical data are only useful insofar as they help convince others who aren’t yet convinced. The problem here is that the empirical data are how we know whether our accounts of the world fit the actual world. If all we have is hunches, with no way to weed out the hunches that don’t fit with the details of reality, we’re no longer in the realm of science.

I wonder if this is close to the situation in which Jonah Lehrer found himself. Maybe he had strong intuitions about what kind of thing creativity is, and about what a creative guy like Bob Dylan would say when asked about his own exercise of creativity. Maybe these intuitions felt like a crucial part of the story he was trying to tell about creativity. Maybe he even looked to see if he could track down apt quotes from Bob Dylan expressing what seemed to him to be the obvious Dylanesque view … but, coming up short on this quotational data, he was not prepared to leave such an important intuition dangling without visible support, nor was he prepared to excise it. So he channeled Bob Dylan and wrote the thing he was sure in his heart Bob Dylan would have said.

At the time, it might have seemed a reasonable way to strengthen the narrative. As it turns out, though, it was a course of action that so weakened it that the publisher of Imagine, Houghton Mifflin Harcourt, has recalled print copies of the book.

Blogging and recycling: thoughts on the ethics of reuse.

Owing to summer-session teaching and a sprained ankle, I have been less attentive to the churn of online happenings than I usually am, but an email from SciCurious brought to my attention a recent controversy about a blogger’s “self-plagiarism” of his own earlier writing in his blog posts (and in one of his books).

SciCurious asked for my thoughts on the matter, and what follows is very close to what I emailed her in reply this morning. I should note that these thoughts were composed before I took to the Googles to look for links or to read up on the details of the particular controversy playing out. This means that I’ve spoken to what I understand as the general lay of the ethical land here, but I have probably not addressed some of the specific details that people elsewhere are discussing.

Here’s the broad question: Is it unethical for a blogger to reuse in blog posts material she has published before (including in earlier blog posts)?

A lot of people who write blogs are using them with the clear intention (clear at least to themselves) of developing ideas for “more serious” writing projects — books, or magazine articles or what have you. I myself am leaning heavily on stuff I’ve blogged over the past seven-plus years in writing the textbook I’m trying to finish, and plan similarly to draw on old blog posts for at least two other books that are in my head (if I can ever get them out of my head and into book form).

That this is an intended outcome is part of why many blog authors who are lucky enough to get paying blogging gigs, especially those from academia, fight hard for ownership of what they post and for the explicit right to reuse what they’ve written.

So, I wouldn’t generally judge reuse of what one has written in blog posts as self-plagiarism, nor as unethical. Of course, my book(s) will explicitly acknowledge my blogs as the site-of-first-publication for earlier versions of the arguments I put forward. (My book(s) will also acknowledge the debt I owe to commenters on my posts who have pushed me to think much more carefully about the issues I’ve posted on.)

That said, if one is writing in a context where one has agreed to a rule that says, in effect, “Everything you write for us must be shiny and brand-new and never published by you before elsewhere in any form,” then one is obligated not to recycle what one has written elsewhere. That’s what it means to agree to a rule. If you think it’s a bad rule, you shouldn’t agree to it — and indeed, perhaps you should mount a reasoned argument as to why it’s a bad rule. Agreeing to follow the rule and then not following the rule, however, is unethical.

There are venues (including the Scientific American Blog Network) that are OK with bloggers of long standing dusting off posts from the archives. I’ve exercised this option more than once, though I usually make an effort to significantly update, expand, or otherwise revise those posts I recycle (if for no other reason than I don’t always fully agree with what that earlier time-slice of myself wrote).

This kind of reuse is OK with my corporate master. Does that necessarily make it ethical?

Potentially it would be unethical if it imposed a harm on my readers — that is, if they (you) were harmed by my reposting those posts of yore. But, I think that would require either that I had some sort of contract (express or implied) with my readers that I only post thoughts I have never posted before, or that my reposts mislead them about what I actually believe at the moment I hit the “publish” button. I don’t have such a contract with my readers (at least, I don’t think I do), and my revision of the posts I recycle is intended to make sure that they don’t mislead readers about what I believe.

Back-linking to the original post is probably good practice (from the point of view of making reuse transparent) … but I don’t always do this.

One reason is that the substantial revisions make the new posts substantially different — making different claims, coming to different conclusions, offering different reasons. The old post is an ancestor, but it’s not the same creature anymore.

Another reason is that some of the original posts I’m recycling are from my ancient Blogspot blog, from whose backend I am locked out after a recent Google update/migration — and I fear that the blog itself may disappear, which would leave my updated posts with back-links to nowhere. Bloggers tend to view back-links to nowhere as a very bad thing.

The whole question of “self-plagiarism” as an ethical problem is an interesting one, since I think there’s a relevant difference between self-plagiarism and ethical reuse.

Plagiarism, after all, is use of someone else’s words or ideas (or data, or source-code, etc.) without proper attribution. If you’re reusing your own words or ideas (or whatnot), it’s not like you’re misrepresenting them as your own when they’re really someone else’s.

There are instances, however, where self-reuse gets people rightly exercised. For example, some scientists reuse their own stuff to create the appearance in the scientific literature that they’ve conducted more experimental studies than they actually have, or that there are more published results supporting their hypotheses than there really are. This kind of artificial multiplication of scientific studies is ethically problematic because it is intended to mislead (and indeed, may succeed in misleading), not because the scientists involved haven’t given fair credit to the earlier time-slices of themselves. (A recent editorial for ACS Nano gives a nice discussion of other problematic aspects of “self-plagiarism” within the context of scientific publishing.)

The right ethical diagnosis of the controversy du jour may depend in part on whether journalistic ethics forbid reuse (explicitly or implicitly) — and if so, on whether (or in what conditions) bloggers count as journalists. At some level, this goes beyond what is spelled out in one’s blogging contract and turns also on the relationship between the blogger and the reader. What kind of expectations can the reader have of the blogger? What kind of expectations ought the reader to have of the blogger? To the extent that blogging is a conversation of a sort (especially when commenting is enabled), is it appropriate for that conversation to loop back to territory visited before, or is the blogger obligated always to break new ground?

And, if the readers are harmed when the blogger recycles her own back-catalogue, what exactly is the nature of that harm?

Is how to engage with the crackpot at the scientific meeting an ethical question?

There’s scientific knowledge. There are the dedicated scientists who make it, whether laboring in laboratories or in the fields, fretting over data analysis, refereeing each other’s manuscripts or second-guessing themselves.

And, well, there are some crackpots.

I’m not talking dancing-on-the-edge-of-the-paradigm folks, nor cheaters who seem to be on a quest for fame or profit. I mean the guy who has the wild idea for revolutionizing field X that actually is completely disconnected from reality.

Generally, you don’t find too much crackpottery in the scientific literature, at least not when peer review is working as it’s meant to. The referees tend to weed it out. Perhaps, as has been suggested by some critics of peer review, referees also weed out cutting edge stuff because it’s just so new and hard to fit into the stodgy old referees’ picture of what counts as well-supported by the evidence, or consistent with our best theories, or plausible. That may just be the price of doing business. One hopes that, eventually, the truth will out.

But where you do see a higher proportion of crackpottery, aside from certain preprint repositories, is at meetings. And there, face to face with the crackpot, the gate-keepers may behave quite differently than they would in an anonymous referee’s report.

Doctor Crackpot gives a talk intended to show his brilliant new solution to a nagging problem with an otherwise pretty well established theoretical approach. Jaws drop as the presentation proceeds. Then, finally, as Doctor Crackpot is aglow with the excitement of having broken the wonderful news to his people, he entertains questions.

Crickets chirp. Members of the audience look at each other nervously.

Doctor Hardass, who has been asking tough questions of presenters all day, tentatively asks a question about the mathematics of this crackpot “solution”. The other scholars in attendance inwardly cheer, thinking, “In about 10 seconds Doctor Hardass will have demonstrated to Doctor Crackpot that this could never work! Then Doctor Crackpot will back away from this ledge and reconsider!”

Ten minutes later, Doctor Crackpot is still writing equations on the board, and Doctor Hardass has been reduced to saying, “Uh huh …” Scholars start sneaking out as the chirping of the crickets competes with the squeaking of the chalk.

Granted, no one wants to hurt Doctor Crackpot’s feelings. If it’s a small enough meeting, you all probably had lunch with him, maybe even drinks the night before. He seems like a nice guy. He doesn’t seem dangerously disconnected from reality in his everyday interactions, just dangerously disconnected from reality in the neighborhood of this particular scientific question. And, as he’s been toiling in obscurity at a little backwater institution, he’s obviously lonely for scientific company and conversation. So, calling him out as a crackpot seems kind of mean.

But … it’s also a little mean not to call him out. It can feel like you’re letting him wander through the scientific community with the equivalent of spinach in his teeth while trailing toilet paper from his shoe if you leave him with the impression that his revolutionary idea has any merit. Someone has to set this guy straight … right? If you don’t, won’t he keep trying to sell this crackpot idea at future meetings?

For what it’s worth, as someone who attends philosophy conferences as well as scientific ones (plus an interesting assortment of interdisciplinary conferences of various sorts), I can attest that there is the occasional crackpot presentation from a philosopher. However, the push-back from the philosophers during the Q&A tends to be much more vigorous, and it seems to reflect a commitment that the crackpot presenter could be led back to reality if only he would listen to the reasoned arguments presented to him by the audience.

In theory, you’d expect to see the same kind of commitment among scientists: if we can agree upon the empirical evidence and seriously consider each other’s arguments about the right theoretical framework in which to interpret it, we should all end up with something like agreement on our account of the world. Using the same sorts of knowledge-building strategies, the same standards of evidence, the same logical machinery, we should be able to build knowledge about the world that holds up against tests to which others subject it — and, we should welcome that testing, since the point of all this knowledge-building is not to win the argument but to build an account that gets the world right.

In theory, the scientific norms of universalism and organized skepticism would ensure that all scientific ideas (including the ones that are, on their face, crackpot ideas) get a fair hearing, but that this “fair hearing” includes rigorous criticism to sort out the ideas worthy of further attention. (These norms would also remind scientists that any member of the scientific community has the potential to be the source of a fruitful idea, or of a crackpot idea.)

In practice, though, scientists pick their battles, just like everyone else. If your first ten-minute attempt at reaching a fellow scientist with rigorous criticism shows no signs of succeeding, you might just decide it’s too big a job to tackle before lunch. If repeated engagements with a fellow scientist suggest that he seems not to comprehend the arguments against his pet theory — and maybe that he doesn’t fully grok how the rest of the community understands the standards and strategies for scientific knowledge-building — you may have to make a calculation about whether bringing him back to the fold is a better use of your time and effort than, say, putting more time into your own research, or offering critiques to scientists who seem to understand them and take them seriously.

This is a sensible way to get through a day which seems to have too few hours for all the scientific knowledge-building there is to do, but it might have an impact on whether the scientific community functions in the way that best supports the knowledge-building project.

In the continuum of “scientific knowledge”, on whose behalf scientists are sworn to uphold standards and keep out the dross, where do meetings fall? Do the scientists in attendance have any ethical duty to give their candid assessments of crackpottery to the crackpots? Or is it OK to just snicker about it at the bar? If there’s no obligation to call the crackpot out, does that undermine the value of meetings as sources of scientific knowledge, or of the scientific communications needed to build scientific knowledge?

Could a rational decision not to engage with crackpots in one’s scientific community (because the return on the effort invested is likely to be low) morph into avoidance of other scientists with weird ideas that actually have something to them? Could it lead to avoidance of serious engagement with scientists one thinks are mistaken when it might take serious effort to spell out the nature of the mistakes?

And is there any obligation on the scientific community either to accept the crackpots as fully part of the community (meaning that their ideas and their critiques of the ideas of others ought to be taken seriously), or else to be honest with them that, while they may subscribe to the same journals and come to the same meetings, the crackpots are Not Our Kind, Dear?

End-of-semester meditations on plagiarism.

Plagiarism — presenting the words or ideas (among other things) of someone else as one’s own rather than properly citing their source — is one of the banes of my professorial existence. One of my dearest hopes at the beginning of each academic term is that this will be the term with no instances of plagiarism in the student work submitted for my evaluation.

Ten years into this academic post and I’m still waiting for that plagiarism-free term.

One school of thought posits that students plagiarize because they simply don’t understand the rules around proper citation of sources. Consequently, professorial types go to great lengths to lay out how properly to cite sources of various types. They put explicit language about plagiarism and proper citation in their syllabi. They devote hours to crafting handouts to spell out expected citation practices. They require their students to take (and pass) plagiarism tutorials developed by information literacy professionals (the people who, in my day, we called university librarians).

And, students persist in plagiarizing.

Another school of thought lays widespread student plagiarism at the feet of the new digital age.

What with all sorts of information resources available through the internets, and with copy-and-paste technology, assembling a paper that meets the minimum page length for your assignment has never been easier. Back in the olden times, our forefathers had to actually haul the sources from which they were stealing off the shelves, maybe carry them back to the dorms through the snow, find their DOS disk to boot up the dorm PC, and then laboriously transcribe those stolen passages!

And it’s not just that the copy-and-paste option exists, we are told. College students have grown up stealing music and movies online. They’ve come of age along with Wikipedia, where information is offered free for their use and without authorship credits. If “information wants to be free” (a slogan attributed to Stewart Brand in 1984), how can these young people make sense of intellectual property, and especially of the need to cite the sources from which they found the information they are using? Is not their “plagiarism” just a form of pastiche, an activity that their crusty old professors fail to recognize as creative?

Yeah, the modern world is totally different, dude. There are tales of students copying not just Wikipedia articles but also things like online FAQs, verbatim, in their papers without citing the source, and indeed while professing that they didn’t think they needed to cite them because there was no author listed. You know what source kids used to copy from in my day that didn’t list authors? The World Book Encyclopedia. Indeed, from at least seventh grade, our teachers made a big deal of teaching us how to cite encyclopedia and newspaper articles with no named authors. Every citation guide I’ve seen in recent years (including the ones that talk about proper ways to cite web pages) includes instruction on how to cite such sources.

The fact that plagiarism is perhaps less labor-intensive than it used to be strikes me as an entirely separate issue from whether kids today understand that it’s wrong. If young people are literally powerless to resist the temptations presented to them by the internet, maybe we should be getting computers out of the classroom rather than putting more computers into the classroom.

Of course, the fact that not every student plagiarizes argues against the claim that students can’t help it. Clearly, some of them can.

There is research that indicates students plagiarize less in circumstances where they know that their work is going to be scanned with plagiarism-detection software. Here, it’s not that the existence or use of the software suddenly teaches students something they didn’t already know about proper citation. Rather, the extra 28 grams of prevention comes from an expectation that the software will be checking to see if they followed the rules of scholarship that they already understood.

My own experience suggests that one doesn’t require an expensive proprietary plagiarism-detection system like Turnitin — plugging the phrases in the assignment that just don’t sound like a college student wrote them into a reasonably good search engine usually delivers the uncited sources in seconds.
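
The matching idea underneath both the proprietary checkers and the search-engine method is simple: long verbatim overlap between two texts is the tell-tale. Here is a minimal sketch of that idea in Python, using the standard library’s difflib; both passages are invented for the example, and real detection systems are, of course, far more sophisticated:

    # A toy version of the matching behind plagiarism checks: flag long
    # runs of text shared between a submission and a candidate source.
    # Both passages below are invented placeholders.
    from difflib import SequenceMatcher

    submission = ("The categorical imperative demands that we act only "
                  "according to that maxim whereby we can at the same time "
                  "will that it should become a universal law.")
    candidate_source = ("Kant's categorical imperative demands that we act "
                        "only according to that maxim whereby we can at the "
                        "same time will that it should become a universal law.")

    matcher = SequenceMatcher(None, submission, candidate_source)
    print(f"overall similarity: {matcher.ratio():.2f}")

    # Long shared runs are the red flag, not isolated common words.
    for block in matcher.get_matching_blocks():
        if block.size > 40:
            print(f"shared run ({block.size} chars): "
                  f"{submission[block.a:block.a + block.size]!r}")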

My experience also suggests that even when students are informed that you will be using software or search engines to check for plagiarism, some students still plagiarize.

Perhaps a better approach is to frame plagiarism as a violation of trust in a community that, ultimately, has an interest in being more focused on learning than on crime and punishment. This is an approach to which I’m sympathetic, which probably comes through in the version of “the talk” on academic dishonesty I give my students at the start of the semester:

Plagiarism is evil. I used to think I was a big enough person not to take it personally if someone plagiarized on an assignment for my class. I now know that I was wrong about that. I take it very personally.


For one thing, I’m here doing everything I can to help you learn this stuff that I think is really interesting and important. I know you may not believe yet that it’s interesting and important, but I hope you’ll let me try to persuade you. And, I hope you’ll put an honest effort into learning it. If you try hard and you give it a chance, I can respect that. If you make the calculation that, given the other things on your plate, you can’t put in the kind of time and effort I’m expecting and you choose to put in what you can, I’ll respect that, too. But if you decide it’s not worth your time or effort to even try, and instead you turn to plagiarism to make it look like you learned something — well, you’re saying that the stuff you’re supposedly here to learn is of no value, except to get you the grades and the credits you want. I care about that stuff. So I take it personally when you decide, despite all I’m doing here, that it’s of no value. Moreover, this is not a diploma mill where you pay your money and get your degree. If you want the three credits from my course, the terms of engagement are that you’ll have to show some evidence of learning.


Even worse, when you hand in an essay that you’ve copied from the internet, you’re telling me you don’t think I’m smart enough to tell the difference between your words and ideas and something you found in 5 minutes with Google. You’re telling me you think I’m stupid. I take that personally, too.


If you plagiarize in my course, you fail my course, and I will take it personally. Maybe that’s unreasonable, but that’s how I am. I thought I should tell you up front so that, if you can’t handle having a professor who’s such a hardass, you can explore your alternatives.

So far, none of my students have ever run screaming from this talk. Some of them even nod approvingly. The students who labor to write their papers honestly likely feel there’s something unjust about classmates who sidestep all that labor by cheating.

But students can still fully comprehend your explanation of how you view plagiarism, how personally you’ll take it, how vigorously you’ll punish it … and plagiarize.

They may even deny it to your face for 30 additional seconds after they recognize that you have them dead to rights (since given the side-by-side comparison of their assignment and the uncited source, they would need to establish psychic powers for there to be any plausible explanation besides plagiarism). And then they’ll explain that they were really pressed for time, and they need a good grade (or a passing grade) in this course, and they felt trapped by circumstances, so even though of course they know what they did is wrong, they made one bad decision, and their parents will kill them, and … isn’t there some way we could make this go away? They feel so bad now that they promise they’ve learned their lesson.

Here, I think we need to recognize that there is a relevant difference between saying you have learned a lesson and actually learning that lesson.

Indeed, one of the reasons that my university’s office of judicial affairs asks instructors to report all cases of plagiarism and cheating no matter what sanctions we apply to them (including no sanctions) is so there will be a record of whether a particular offense is really the first offense. Students who plagiarize may also lie about whether they have a record of doing so and being caught doing it. If the offenses are spread around — in different classes with different professors in different departments — you might be able to score first-time leniency half a dozen times.

Does that sound cynical? From where I sit, it’s just realistic. But this “realistic” point of view (which others in the teaching trenches share) is bound to make us tougher on the students who actually do make a single bad decision, suspecting that they might be committed cheaters, too.

Keeping the information about plagiarists secret rather than sharing it through the proper channels, in other words, can hurt students who could be helped.

There have been occasions, it should be noted, when frustrated instructors warned students that they would name and shame plagiarists, only to find (after following through on that warning) that they had run afoul of FERPA. Among other things, FERPA gives students (18 or older) some measure of control over who gets to see their academic records. If a professor announces to the world — or even to your classmates — that you’ve failed the class for plagiarizing, information from your academic records has arguably been shared without your consent.

Still, it’s hard not to feel that plagiarism is breaking trust not just with the professor but with the learning community. Does that learning community have an interest in flagging the bad actors? If you know there are plagiarists among your classmates but you don’t know who they are, does this create a situation where you can’t trust anyone? If all traces of punishment — or of efforts at rehabilitation — are hidden behind a veil of privacy, is the reasonable default assumption that people are generally living within the rules and that the rules are being enforced against the handful of violations … or is it that people are getting away with stuff?

Is there any reasonable role for the community in the punishment and rehabilitation of plagiarists?

To some, of course, this talk of harms to learning communities will seem quaint. If you see your education as an individual endeavor rather than a team sport, your classmates may as well be desks (albeit desks whose grades may be used to determine the curve). What you do, or don’t do, in your engagement with the machinery that dispenses your education (or at least your diploma) may be driven by your rational calculations about what kind of effort you’re willing to put into creating the artifacts you need to present in exchange for grades.

The artifacts that require writing can be really time-consuming to produce de novo. The writing process, after all, is hard. People who write for a living complain of writer’s block. Have you ever heard anyone complain about Google-block? Plagiarism, in other words, is a huge time-saver, not least because it relies on skills most college students already have rather than ones they need to develop to any significant extent.

Here, I’d like to offer a modest proposal for students unwilling to engage the writing process: don’t.

Take a stand for what you believe in! Don’t lurk in the shadows pretending to knuckle under to the man by turning in essays and term papers that give the appearance that you wrote them. Instead, tell your professors that writing anything original for their assignments is against your principles. Then take your F and wear it as a badge of honor!

When all those old-timey professors who fetishize the value of clear writing, original thought, and proper citation of sources die out — when your generation is running the show — surely your principled stand will be vindicated!

And, in the meantime, your professors can spend their scarce time helping your classmates who actually want to learn to write well and uphold rudimentary rules of scholarship.

Really, it’s win-win.

_____
In the interests of full-disclosure — and of avoiding accusations of self-plagiarism — I should note that this essay draws on a number of posts I have written in the past about plagiarism in academic contexts.

Who matters (or should) when scientists engage in ethical decision-making?

One of the courses I teach regularly at my university is “Ethics in Science,” a course that explores (among other things) what’s involved in being a good scientist in one’s interactions with the phenomena about which one is building knowledge, in one’s interactions with other scientists, and in one’s interactions with the rest of the world.

Some bits of this are pretty straightforward (e.g., don’t make up data out of whole cloth, don’t smash your competitor’s lab apparatus, don’t use your mad science skillz to engage in a campaign of super-villainy that brings Gotham City to its knees). But, there are other instances where what a scientist should or should not do is less straightforward. This is why we spend significant time and effort talking about — and practicing — ethical decision-making (working with a strategy drawn from Muriel J. Bebeau, “Developing a Well-Reasoned Response to a Moral Problem in Scientific Research”). Here’s how I described the basic approach in a post of yore:

Ethical decision-making involves more than having the right gut-feeling and acting on it. Rather, when done right, it involves moving past your gut-feeling to see who else has a stake in what you do (or don’t do); what consequences, good or bad, might flow from the various courses of action available to you; to whom you have obligations that will be satisfied or ignored by your action; and how the relevant obligations and interests pull you in different directions as you try to make the best decision. Sometimes it’s helpful to think of the competing obligations and interests as vectors, since they come with both directions and magnitudes — which is to say, in some cases where they may be pulling you in opposite directions, it’s still obvious which way you should go because the magnitude of one of the obligations is so much bigger than of the others.
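
For concreteness, here is a toy rendering of that vector picture in Python. The obligations and magnitudes are invented for illustration (they are not drawn from any real case), but they show how one large-magnitude obligation can settle a decision even when other pulls oppose it:

    # Toy illustration of the "obligations as vectors" analogy: each
    # obligation pulls toward an action (+1) or away from it (-1) with
    # some magnitude. All of the weights below are invented.
    obligations = {
        "duty to report data honestly": (+1, 10.0),
        "loyalty to a struggling labmate": (-1, 2.0),
        "fear of delaying the paper": (-1, 1.5),
    }

    net_pull = sum(direction * magnitude
                   for direction, magnitude in obligations.values())

    # Strongly positive despite the opposing pulls: the big obligation wins.
    print(f"net pull toward acting: {net_pull:+.1f}")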

We practice this basic strategy by using it to look at a lot of case studies. Basically, the cases describe a situation where the protagonist is trying to figure out what to do, giving you a bunch of details that seem salient to the protagonist and leaving some interesting gaps where the protagonist maybe doesn’t have some crucial information, or hasn’t looked for it, or hasn’t thought to look for it. Then we look at the interested parties, the potential consequences, the protagonist’s obligations, and the big conflicts between obligations and interests to try to work out what we think the protagonist should do.

Recently, one of my students objected to how we approach these cases.

Specifically, the student argued that we should radically restrict our consideration of interested parties — probably to no more than the actual people identified by name in the case study. Considering the interests of a university department, or of a federal funder, or of the scientific community, the student asserted, made the protagonist responsible to so many entities that the explicit information in the case study was not sufficient to identify the correct course of action.*

And, the student argued, one interested party that it was utterly inappropriate for a scientist to include in thinking through an ethical decision is the public.

Of course, I reminded the student of some reasons you might think the public would have an interest in what scientists decide to do. Members of the public share a world with scientists, and scientific discoveries and scientific activities can have impacts on things like our environment, the safety of our buildings, what our health care providers know and what treatments they are able to offer us, and so forth. Moreover, at least in the U.S., public funds play an essential role in supporting both scientific research and the training of new scientists (even at private universities) — which means that it’s hard to find an ethical decision-making situation in a scientific training environment that is completely isolated from something the public paid for.

My student was not moved by the suggestion that financial involvement should buy the public any special consideration as a scientist was trying to decide the right thing to do.

Indeed, central to the student’s argument was the idea that the interests of the public, whether with respect to science or anything else, are just too heterogeneous. Members of the public want lots of different things. Taking these interests into account could only be a distraction.

As well, the student asserted, too small a proportion of the public actually cares about what scientists are up to for the public, even if it were more homogeneous, to warrant consideration by scientists grappling with their own ethical quandaries. Even worse, the student ventured, those who do care what scientists are up to are not necessarily well-informed.

I’m not unsympathetic to the objection to the extreme case here: if a scientist felt required to somehow take into account the actual particular interests of each individual member of the public, that would make it well nigh impossible to actually make an ethical decision without the use of modeling methods and supercomputers (and even then, maybe not). However, it strikes me that it shouldn’t be totally impossible to anticipate some reasonable range of interests non-scientists have that might be impacted by the consequences of a scientist’s decision in various ways. Which is to say, the lack of total fine-grained information about the public, or of complete predictability of the public’s reactions, would surely make it more challenging to make optimal ethical decisions, but these challenges don’t seem to warrant ignoring the public altogether just so the problem you’re trying to solve becomes more tractable.

In any case, I figure that there’s a good chance some members of the public** may be reading this post. To you, I pose the following questions:

  1. Do you feel like you have an interest in what science and scientists are up to? If so, how would you describe that interest? If not, why not?
  2. Do you think scientists should treat “the public” as an interested party when they try to make ethical decisions? Why or why not?
  3. If you think scientists should treat “the public” as an interested party when they try to make ethical decisions, what should scientists be doing to get an accurate read on the public’s interests?
  4. And, for the sake of symmetry, do you think members of the public ought to take account of the interests of science or scientists when they try to make ethical decisions? Why or why not?

If, for some reason, you feel like chiming in on these questions in the comments would expose you to unwanted blowback, you can also email me your responses (dr dot freeride at gmail dot com) for me to anonymize and post on your behalf.

Thanks in advance for sharing your view on this!

_____
*Here I should note that I view the ambiguities within the case studies as a feature, not a bug. In real life, we have to make good ethical decisions despite uncertainties about what consequences will actually follow our actions, for example. Those are the breaks.

**Officially, scientists are also members of the public — even if you’re stuck in the lab most of the time!

Reading “White Coat, Black Hat” and discovering that ethicists might be black hats.

During one of my trips this spring, I had the opportunity to read Carl Elliott’s book White Coat, Black Hat: Adventures on the Dark Side of Medicine. It is not always the case that reading I do for my job also works as riveting reading for air travel, but this book holds its own against any of the appealing options at the airport bookstore. (I actually pounded through the entire thing before cracking open the other book I had with me, The Girl Who Kicked the Hornet’s Nest, in case you were wondering.)

Elliott takes up a number of topics of importance in our current understanding of biomedical research and how to do it ethically. He considers the role of human subjects for hire, of ghostwriters in the production of medical papers, of physicians who act as consultants and spokespeople for pharmaceutical companies, and of salespeople for the pharmaceutical companies who interact with scientists and physicians. There are lots of important issues here, engagingly presented and followed to some provocative conclusions. But the chapter of the book that gave me the most to think about, perhaps not surprisingly, is the chapter called “The Ethicists”.

You might think, since Elliott is writing a book that points out lots of ways that biomedical research could be more ethical, that he would present a picture where ethicists rush in and solve the problems created by unwitting research scientists, well-meaning physicians, and profit-driven pharmaceutical companies. However, Elliott presents instead reasons to worry that professional ethicists will contribute to the ethical tangles of the biomedical world rather than sorting them out. Indeed, Elliott identifies what seem to be special vulnerabilities in the psyche of the professional ethicist. For example, he writes, “There is no better way to enlist bioethicists in the cause of consumer capitalism than to convince them they are working for social justice.” (139-140) Who, after all, could be against social justice? Yet, when efforts on behalf of social justice take the form of debates on television news programs about fair access to new pharmaceuticals, the big result seems to be free advertising for the companies making those pharmaceuticals. Should bioethicists be accountable for these unforeseen results? This chapter suggests that careful bioethicists ought to foresee them, and to take responsibility.

There is an irony here: professionals who see part of their job as pointing out conflicts of interest to others may be placing themselves right in the path of equally overwhelming conflicts of interest. Some of these have to do with the practical problem of how to fund their professional work. Universities these days are struggling with reduced budgets, which means they are encouraging their faculty to be more entrepreneurial — including by cultivating relationships that might lead to donations from the private sector. To the extent that bioethics is seen as relevant to pharmaceutical development, pharmaceutical companies, which have deeper pockets than do universities, are seen as attractive targets for fundraising.

As Elliott notes, bioethicists have seen a great deal of success in this endeavor. He writes,

For the last three decades bioethics has been vigorously generating new centers, new commissions, new journals, and new graduate programs, not to mention a highly politicized role in American public life. In the same way that sociologists saw their fortunes climb during the 1960s as the public eye turned towards social issues like poverty, crime, and education, bioethics started to ascend when medical care and scientific research began generating social questions of their own. As the field grows more prominent, bioethicists are considering a funding model familiar to the realm of business ethics, one that embraces partnership and collaboration with corporate sponsors as long as outright conflict of interest can be managed. …

Corporate funding presents a public relations challenge, of course. It looks unseemly for an ethicist to share in the profits of arms dealers, industrial polluters, or multinationals that exploit the developing world. Credibility is also a concern. Bioethicists teach about pharmaceutical company issues in university classrooms, write about those issues in books and articles, and comment on them in the press. Many bioethicists evaluate industry policies and practices for professional boards, government bodies, and research ethics committees. To critics, this raises legitimate questions about the field of bioethics itself. Where does the authority of ethicists come from, and why are corporations so willing to fund them? (140-141)

That comparison of bioethics to business, by the way, is the kind of thing that gets my attention; one of the spaces frequently assigned for “Business and Professional Ethics” courses at my university is the Arthur Andersen Conference Room. Perhaps this is a permanent teachable moment, but I can’t help worrying that the real lesson has to do with the vulnerability of the idealistic academic partner in the academic-corporate partnership.

Where does the authority of ethicists come from? I have scrawled in the margin something about appropriate academic credentials and good arguments. But connect this first question to Elliott’s second question: why are corporations so willing to fund them? Here, we need to consider the possibility that their credibility and professional status are, in a pragmatic sense, directly linked to corporations paying bioethicists for their labors. What, exactly, are those corporations paying for?

Let’s put that last question aside for a moment.

Arguably, the ethicist has some skills and training that render her a potentially useful partner for people trying to work out how to be ethical in the world. One hopes what she says would be informed by some amount of ethical education, serious scholarship, and decision-making strategies grounded in a real academic discipline.

Elliott notes that “[s]ome scholars have recoiled, emphatically rejecting the notion that their voices should count more than others’ on ethical affairs.” (142) Here, I agree if the claim is, in essence, that the interests of the bioethicists are no more important than others’. Surely the perspectives of others who are not ethicists matter, but one might reasonably expect that ethicists can add value, drawing on their experience in taking those interests, and the interests of other stakeholders, into account to make reasonable ethical decisions.

Maybe, though, those of us who do ethics for a living just tell ourselves we are engaged in a more or less objective decision-making process. Maybe the job we are doing is less like accounting and more like interpreting pictures in inkblots. As Elliott writes,

But ethical analysis does not really resemble a financial audit. If a company is cooking its books and the accountant closes his eyes to this fact in his audit, the accountant’s wrongdoing can be reliably detected and verified by outside monitors. It is not so easy with an ethics consultant. Ethicists have widely divergent views. They come from different religious standpoints, use different theoretical frameworks, and profess different political philosophies. They are also free to change their minds at any point. How do you tell the difference between an ethics consultant who has changed her mind for legitimate reasons and one who has changed her mind for money? (144)

This impression of the fundamental squishiness of the ethicist’s stock in trade seems to be reinforced in a quote Elliott takes from biologist-entrepreneur Michael West: “In the field of ethics, there are no ground rules, so it’s just one ethicist’s opinion versus another ethicist’s opinion. You’re not getting whether someone is right or wrong, because it all depends on who you pick.” (144-145)

Here, it will probably not surprise you to learn that I think these claims are only true when the ethicists are doing it wrong.

What, then, would be involved in doing it right? To start with, what one should ask from an ethicist should be more than just an opinion. One should also ask for an argument to support that opinion, an argument that makes reference to important details like interested parties, potential consequences of the various options for action on the table, the obligations the party making the decision has to the stakeholders, and so forth — not to mention consideration of possible objections to this argument. It is fair, moreover, to ask the ethicist whether the recommended plan of action is compatible with more than one ethical theory — or, for example, if it only works in the world we are sharing solely with other Kantians.

This would not make auditing the ethical books as easy as auditing the financial statements, but I think it would demonstrate something like rigor and lend itself to meaningful inspection by others. Along the same lines, I think it would be completely reasonable, in the case that an ethicist has gone on record as changing her mind, to ask for the argument that brought her from one position to the other. It would also be fair to ask, what argument or evidence might bring you back again?

Of course, all of this assumes an ethicist arguing in good faith. It’s not clear that what I’ve described as crucial features of sound ethical reasoning couldn’t be mimicked by someone who wanted to appear to be a good ethicist without going to the trouble of actually being one.

And if there’s someone offering you money — maybe a lot of money — for something that looks like good ethical reasoning, is there a chance you could turn from an ethicist arguing in good faith to one who just looks like she is, perhaps without even being aware of it herself?

Elliott pushes us to examine the dangers that may lurk when private-sector interests are willing to put up money for your ethical insight. Have they made a point of asking for your take primarily because your paper-trail of prior ethical argumentation lines up really well with what they would like an ethicist to say to give them cover to do what they already want to do — not because it’s ethical, necessarily, but because it’s profitable or otherwise convenient? You may think your ethical stances are stable because they are well-reasoned (or maybe even right). But how can you be sure that the stability of your stance is not influenced by the size of your consultation paycheck? How can you tell that you have actually been solicited for an honest ethical assessment — one that, potentially, could be at odds with what the corporation soliciting it wants to hear? If you tell that corporation that a certain course of action would be unethical, do you have any power to prevent them from pursuing that course of action? Do you have an incentive to tell the corporation what it wants to hear, not just to pick up your consulting fee, but to keep a seat at the table where you might hope to have a chance of nudging its behavior in a more ethical direction, even if only incrementally?

None of these are easy questions to answer objectively if you’re the ethicist in the scenario.

Indeed, even if money were not part of the equation, the very fact that people at the corporations — or researchers, or physicians, or whoever it is seeking the ethicists’ expertise — are reaching out to ethicists and identifying them as experts with something worthwhile to contribute might itself make it harder for the ethicists to deliver what they think they should. As Elliott argues, the personal relationships may end up creating conflicts of interest that are at least as hard to manage as those that occur when money changes hands. These people asking for our ethical input seem like good folks, motivated at least in part by goals (like helping people with disease) that are noble. We want them to succeed. And we kind of dig that they seem interested in what we have to say. Because we end up liking them as people, we may find it hard to tell them things they don’t want to hear.

And ultimately, Elliott is arguing, barriers to delivering news that people don’t want to hear — whether those barriers come from financial dependence, the professional prestige that comes when your talents are in demand, or developing personal relationships with the people you’re advising — are barriers to being a credible ethicist. Bioethics becomes “the public relations division of modern medicine” (151) rather than carrying on the tradition of gadflies like Socrates. If bioethicists were being Socratic gadflies and speaking truth to power, Elliott suggests, we would surely be able to find at least a few examples of bioethicists who were punished for their candor. Instead, we see the ties between ethicists and the entities they advise growing closer.

This strikes close to home for me, as I aspire to do work in ethics that can have real impacts on the practice of scientific knowledge-building, the training of new scientists, and the interaction of scientists with the rest of the world. On the one hand, doing that work well seems to require that I understand the details of scientific activity and the concerns of scientists and scientific trainees. But if I “go native” in the tribe of science, Elliott seems to be saying, I could end up dropping the ball as far as what it means to make the kind of contribution a proper ethicist should:

Bioethicists have gained recognition largely by carving out roles as trusted advisers. But embracing the role of trusted adviser means forgoing other potential roles, such as that of the critic. It means giving up on pressuring institutions from the outside, in the manner of investigative reporters. As bioethicists seek to become trusted advisers, rather than gadflies or watchdogs, it will not be surprising if they slowly come to resemble the people they are trusted to advise. And when that happens, moral compromise will be unnecessary, because there will be little left to compromise. (170)

This is strong stuff — the kind of stuff which, if taken seriously, I hope can keep me on track to offer honest advice even when it’s not what the people or institutions to whom I’m offering it want to hear. Heeding the warnings of a gadfly like Carl Elliott might just help an ethicist do what she has to do to be able to trust herself.

Health care provider and patient/client: situations in which fulfilling your ethical duties might not be a no-brainer.

Thanks in no small part to the invitation of the fantastic Doctor Zen, I was honored this past week to be a participant in the PACE 3rd Annual Biomedical Ethics Conference. The conference brought together an eclectic mix of people who care about bioethics: nurses, counselors, physicians, physicians’ assistants, lawyers, philosophers, scientists, students, professors, and people practicing their professions out “in the world”.*

As good conferences do, this one left me with a head full of issues with which I’m still grappling. So, as bloggers sometimes do, I’m going to put one of those issues out there and invite you to grapple with it, too.

A question that kept coming up was what exactly it means for a health care provider (broadly construed) to fulfill hir duties to hir patient/client.

Of course, the folks in the ballroom could rattle off the standard ethical principles that should guide their decision-making — respect for persons (which includes respect for the autonomy of the patient-client), beneficence, non-maleficence, justice — but sometimes these principles seem to pull in different directions, which means just what one should do when the rubber hits the road is not always obvious.

For example:

1. In some states, health care professionals are “mandatory reporters” of domestic violence — that is, if they encounter a patient who they have reason to believe is a victim of domestic violence, they are obligated by law to report it to the authorities. However, getting the case into the legal system sometimes triggers retaliatory violence against the victim by the abuser. Moreover, in the aftermath of reporting, the victim may be less willing (or able) to seek further medical care. Is the best way to do one’s duty to one’s patient always to report? Or are there instances where one better fulfills those duties by not reporting (and if so, what are the foreseeable costs of such a course of action — to that patient, to the health care provider, to other patients, to the larger community)?

2. A patient with a terminal illness may feel that the best way for hir physician to respect hir autonomy would be to assist hir in ending hir life. However, physician-assisted suicide is usually interpreted as clearly counter to the requirements of non-maleficence (“do no harm”) and beneficence. In most of the U.S., it’s also illegal. Can a physician refuse to provide the patient in this situation with the sought-after assistance without being paternalistic?** Is it fair game for the physician’s discussion with the patient here to touch on personal values that it might not be fair for the patient to ask the physician to compromise? Are there foreseeable consequences of what, to the patient, looks like a personal choice that might impact the physician’s relationship with other patients, with hir professional community, or with the larger community?

3. In Texas, the law currently requires that patients seeking abortions must submit to transvaginal ultrasounds first. In other words, the law requires the health care provider to subject the patient to a medically unnecessary invasive procedure. The alternative is for the patient to carry to term an unwanted pregnancy. Both choices, arguably, subject the patient to violence.

Does the health care provider who is trying to uphold hir obligations to hir patient have an obligation to break the law? If it’s a bad law — here, one whose requirements make it impossible for a health care provider to fulfill hir duties to patients — ought health care providers to put their own skin in the game to change it?

Here’s what I’ve written before about how ethically to challenge bad rules:

If you’re part of a professional community, you’re supposed to abide by the rules set by the commissions and institutions governing your professional community.

If you don’t think they’re good rules, of course, one of the things you should do as a member of that professional community is make a case for changing them. However, in the meantime making yourself an exception to the rules that govern the other members of your professional community is pretty much the textbook definition of an ethical violation.

The gist here is that sneakily violating a bad rule (perhaps even while paying lip service to following it), rather than standing up and explicitly arguing against the bad rule — not just when it’s applied to you but when it’s applied to anyone else in your professional community — is wrong. It does nothing to overturn the bad rule, it involves you in deception, and it prioritizes your interests over everyone else’s.

The particular situation here is tricky, though, given that as I understand it the Texas law is a rule imposed on medical professionals by lawmakers, not a rule that the community of medical professionals created and implemented themselves the better to help them fulfill their duties to their patients. Indeed, it seems pretty clear that the lawmakers were willing to sacrifice duties that are absolutely central in the physician-patient relationship when they imposed this law.

Moreover, I think the way forward is complicated by concerns about how to ensure that patients get care that is helpful, not harmful, to them. If Texas physicians who opposed the mandatory transvaginal ultrasound requirement were to fill the jails to protest the law, who does that leave to deliver ethical care to people on the outside seeking abortions? Is this a place where the professional community as a whole ought to be pushing back against the law rather than leaving it to individual members of that community to push back?

* * * * *

If these examples have common threads, one of them is that what the law requires (or what the law allows) seems not to line up neatly with what our ethics require. Perhaps this speaks to the difficulty of getting laws to capture the tricky balancing act that acting ethically towards one’s patients/clients requires of health care professionals. Or maybe it speaks to lawmakers not always being focused on creating an environment in which health care providers can deliver on their ethical duties to their patients/clients (perhaps even disagreeing with professional communities about just what those ethical duties are).

What does this mismatch mean for what patients/clients can legitimately expect from their health care providers? Or for what health care providers can realistically deliver to their patients/clients?

And, if you were a health care provider in one of these situations, what would you do?
_____
*Arguably, however, universities and their denizens are also in the world. We share the same fabric of space-time as the rest of y’all.

**Note that paternalism is likely warranted in a number of circumstances. However, when we’re talking about a patient of sound mind, maybe paternalism shouldn’t be the physician’s go-to stance.

Science and ethics shouldn’t be muddled (or, advice for Jesse Bering).

Jesse Bering’s advice column is provoking some strong reactions. Most of these suggest that his use of evolutionary psychology in his answers lacks a certain scientific rigor, or that he’s being irresponsible in providing what looks like scientific cover for adult men who want to have sex with pubescent girls.

My main issue is that the very nature of Jesse Bering’s column seems bound to muddle scientific questions and ethical questions.

In response to this letter:

Dear Jesse,
I am a non-practicing heterosexual hebephile—and I think most men are—and find living in this society particularly difficult given puritanical, feminist, and parental forces against the normal male sex drive. If sex is generally good for both the body and the brain, then how is a teen having sex with an adult (versus another teen) bad for their mind? I feel like the psychological arguments surrounding the present age of consent laws need to be challenged. My focus is on consensual activity being considered always harmful in the first place. Since the legal notions of consent are based on findings from the soft sciences, shouldn’t we be a little more careful about ruining an adult life in these cases?
—Deep-thinking Hebephile

Jesse Bering offers:

  • The claim that “there are few among us who aren’t the direct descendents of those who’d be incarcerated as sex offenders today”.
  • A pointer to research on men’s measurable penile response to sexualized depictions of very young teenagers.
  • A comment that “there’s some reason to believe that a hebephilic orientation would have been biologically adaptive in the ancestral past”.
  • A mention of the worldwide variations in age-of-consent laws as indicative of deep cultural disagreements.
  • A pointer to research that “challenge[s] the popular notion that sex with underage minors is uniformly negative for all adolescents in such relationships” (although it turns out the subjects of this research were adolescent boys; given the different cultural forces acting on boys and girls, this might make a difference).
  • An anecdote about a 14-year-old boy who got to have sex with a prostitute before being killed by the Nazis in a concentration camp, and about how this made his father happy.
  • A comment that “Impressionist artist Paul Gauguin relocated to French Polynesia to satisfy his hebephilic lust with free-spirited Tahitian girls” in the 19th Century, but that now in the 21st century there’s less sympathy for this behavior.

And this is advice?*

Let’s pick up on just one strand of the scientific information referenced in Jesse Bering’s answer. If there exists scientific research that suggests that your trait is shared by others in the population, or that your trait may have been an adaptive one for your ancestors earlier in our evolutionary journey, what exactly does that mean?

Does it mean that your trait is a good one for you to have now? It does not.

Indeed, we seem to have no shortage of traits that may well have helped us dodge the extinction bullet but now are more likely to get us into trouble given our current environment. (Fondness for sweets is the one that gets me, and I still have cookies to bake.) Just because a trait, or a related behavior, comes with an evolutionary origin story doesn’t make it A-OK.

Otherwise, you could replace ethics and moral philosophy with genetics and evolutionary psychology.

Chris Clarke provides a beautiful illustration of how badly off the rails we might go if we confuse scientific explanation with moral justification — or with actual advice, for that matter.

This actually raises the question of what exactly Jesse Bering intends to accomplish with his “advice column”. Here’s what he says when describing the project:

Perhaps in lieu of offering you advice on how to handle your possibly perverted father-in-law who you suspect is an elderly frotteur, or how to be tactful while delicately informing your co-worker that she smells like a giant sewer rat, I can give you something even better—a peek at what the scientific data have to say about your particular issue. In other words, perhaps I can tell you why you’re going through what you are rather than what to do about it. I may not believe in free will, but I’m a firm believer that knowledge changes perspective, and perspective changes absolutely everything. Once you have that, you don’t need anyone else’s advice.

And good advice is really only good to the extent it aligns with actual research findings, anyway. Nearly two centuries worth of data in the behavioral sciences is available to inform our understanding of our everyday (and not so everyday) problems, yet rarely do we take advantage of this font of empirical wisdom…

That’s not to say that I can’t give you a piece of my subjective mind alongside the objective data. I’m happy to judge you mercilessly before throwing you and your awkward debacle to the wolves in the comments section. Oh, I’m only kidding—kind of. Actually, anyone who has read my stuff in the past knows that I’m a fan of the underdog and unconventional theories and ideas. Intellectual sobriety has never been a part of this blog and never will be, if I can help it, so let’s have a bit of fun.

(Bold emphasis added.)

Officially, Jesse Bering says he’s not offering advice, just information. It may end up being perspective-changing information, which will lead to the advice-asker no longer needing to ask anyone for advice. But it’s not actually advice!

As someone who teaches strategies in moral decision-making, I will note here that taking other people’s interests into account is absolutely central to being ethical. One way we can get a handle on other people’s interests is by asking others for advice. And, we don’t usually conceive of getting information about others and their interests as a one-shot deal.

On the point that good advice ought to align with “actual research findings,” I imagine Jesse Bering is taking actual research findings as our best current approximation of the facts. It’s important to recognize, though, that there are some published research findings that turn out to have been fabricated or falsified, and others that were the result of honest work but that have serious methodological shortcomings. Some scientific questions are hard. Even our best actual research findings may provide limited insight into how to answer them.

All of which is to say, it seems like what might really help someone looking for scientific information relevant to his personal problem would be a run-down of what the best available research tells us — and of what uncertainties still remain — rather than just a quirky handful of studies.

Indeed, Jesse Bering notes that he’s a fan of unconventional theories and ideas. On the one hand, it’s good to put this bias on the table. However, it strikes me that his recognition of this bias puts an extra obligation on him when he offers his services to advice seekers: an obligation to cast a heightened critical eye on the methodology used to conduct the research that supports such theories and ideas.

And maybe this comes back to the question of what the people writing to Jesse Bering for advice are actually looking for. If they want the comfort of knowing what the scientists know about X (for whatever X it is the writer is asking about), they ought to be given an accurate sense of how robust or tenuous that scientific knowledge actually is.

As well, they ought to be reminded that what we know about where X came from is a completely separate issue from whether one ought to let one’s behavior be directed by X. Scientific facts can inform our ethical decisions, but they don’t make the ethical questions go away.

_______
*Stephanie Zvan offers the best actual response to the letter-writer’s request for advice, even if it wasn’t the answer the letter-writer wanted to hear.