Blogging and recycling: thoughts on the ethics of reuse.

Owing to summer-session teaching and a sprained ankle, I have been less attentive to the churn of online happenings than I usually am, but an email from SciCurious brought to my attention a recent controversy about a blogger’s “self-plagiarism” of his own earlier writing in his blog posts (and in one of his books).

SciCurious asked for my thoughts on the matter, and what follows is very close to what I emailed her in reply this morning. I should note that these thoughts were composed before I took to the Googles to look for links or to read up on the details of the particular controversy playing out. This means that I’ve spoken to what I understand as the general lay of the ethical land here, but I have probably not addressed some of the specific details that people elsewhere are discussing.

Here’s the broad question: Is it unethical for a blogger to reuse in blog posts material she has published before (including in earlier blog posts)?

A lot of people who write blogs are using them with the clear intention (clear at least to themselves) of developing ideas for “more serious” writing projects — books, or magazine articles or what have you. I myself am leaning heavily on stuff I’ve blogged over the past seven-plus years in writing the textbook I’m trying to finish, and plan similarly to draw on old blog posts for at least two other books that are in my head (if I can ever get them out of my head and into book form).

That this is an intended outcome is part of why many blog authors who are lucky enough get paying blogging gigs, especially those of us from academia, fight hard for ownership of what they post and for the explicit right to reuse what they’ve written.

So, I wouldn’t generally judge reuse of what one has written in blog posts as self-plagiarism, nor as unethical. Of course, my book(s) will explicitly acknowledge my blogs as the site-of-first-publication for earlier versions of the arguments I put forward. (My book(s) will also acknowledge the debt I owe to commenters on my posts who have pushed me to think much more carefully about the issues I’ve posted on.)

That said, if one is writing in a context where one has agreed to a rule that says, in effect, “Everything you write for us must be shiny and brand-new and never published by you before elsewhere in any form,” then one is obligated not to recycle what one has written elsewhere. That’s what it means to agree to a rule. If you think it’s a bad rule, you shouldn’t agree to it — and indeed, perhaps you should mount a reasoned argument as to why it’s a bad rule. Agreeing to follow the rule and then not following the rule, however, is unethical.

There are venues (including the Scientific American Blog Network) that are OK with bloggers of long standing brushing off posts from the archives. I’ve exercised this option more than once, though I usually make an effort to significantly update, expand, or otherwise revise those posts I recycle (if for no other reason than I don’t always fully agree with what that earlier time-slice of myself wrote).

This kind of reuse is OK with my corporate master. Does that necessarily make it ethical?

Potentially it would be unethical if it imposed a harm on my readers — that is, if they (you) were harmed by my reposting those posts of yore. But, I think that would require either that I had some sort of contract (express or implied) with my readers that I only post thoughts I have never posted before, or that my reposts mislead them about what I actually believe at the moment I hit the “publish” button. I don’t have such a contract with my readers (at least, I don’t think I do), and my revision of the posts I recycle is intended to make sure that they don’t mislead readers about what I believe.

Back-linking to the original post is probably good practice (from the point of view of making reuse transparent) … but I don’t always do this.

One reason is that the substantial revisions make the new posts substantially different — making different claims, coming to different conclusions, offering different reasons. The old post is an ancestor, but it’s not the same creature anymore.

Another reason is that some of the original posts I’m recycling are from my ancient Blogspot blog, from whose backend I am locked out after a recent Google update/migration — and I fear that the blog itself may disappear, which would leave my updated posts with back-links to nowhere. Bloggers tend to view back-links to nowhere as a very bad thing.

The whole question of “self-plagiarism” as an ethical problem is an interesting one, since I think there’s a relevant difference between self-plagiarism and ethical reuse.

Plagiarism, after all, is use of someone else’s words or ideas (or data, or source-code, etc.) without proper attribution. If you’re reusing your own words or ideas (or whatnot), it’s not like you’re misrepresenting them as your own when they’re really someone else’s.

There are instances, however, where self-reuse presents gets people rightly exercised. For example, some scientists reuse their own stuff to create the appearance in the scientific literature that they’ve conducted more experimental studies than they actually have, or that there are more published results supporting their hypotheses than there really are. This kind of artificial multiplication of scientific studies is ethically problematic because it is intended to mislead (and indeed, may succeed in misleading), not because the scientists involved haven’t given fair credit to the earlier time-slices of themselves. (A recent editorial for ACS Nano gives a nice discussion of other problematic aspects of “self-plagiarism” within the context of scientific publishing.)

The right ethical diagnosis of the controversy du jour may depend in part on whether journalistic ethics forbid reuse (explicitly or implicitly) — and if so, on whether (or in what conditions) bloggers count as journalists. At some level, this goes beyond what is spelled out in one’s blogging contract and turns also on the relationship between the blogger and the reader. What kind of expectations can the reader have of the blogger? What kind of expectations ought the reader to have of the blogger? To the extent that blogging is a conversation of a sort (especially when commenting is enabled), is it appropriate for that conversation to loop back to territory visited before, or is the blogger obligated always to break new ground?

And, if the readers are harmed when the blogger recycles her own back-catalogue, what exactly is the nature of that harm?

Is how to engage with the crackpot at the scientific meeting an ethical question?

There’s scientific knowledge. There are the dedicated scientists who make it, whether laboring in laboratories or in the fields, fretting over data analysis, refereeing each other’s manuscripts or second-guessing themselves.

And, well, there are some crackpots.

I’m not talking dancing-on-the-edge-of-the-paradigm folks, nor cheaters who seem to be on a quest for fame or profit. I mean the guy who has the wild idea for revolutionizing field X that actually is completely disconnected from reality.

Generally, you don’t find too much crackpottery in the scientific literature, at least not when peer review is working as it’s meant to. The referees tend to weed it out. Perhaps, as has been suggested by some critics of peer review, referees also weed out cutting edge stuff because it’s just so new and hard to fit into the stodgy old referees’ picture of what counts as well-supported by the evidence, or consistent with our best theories, or plausible. That may just be the price of doing business. One hopes that, eventually, the truth will out.

But where you do see a higher proportion of crackpottery, aside from certain preprint repositories, is at meetings. And there, face to face with the crackpot, the gate-keepers may behave quite differently than they would in an anonymous referee’s report.

Doctor Crackpot gives a talk intended to show his brilliant new solution to a nagging problem with an otherwise pretty well established theoretical approach. Jaws drop as the presentation proceeds. Then, finally, as Doctor Crackpot is aglow with the excitement of having broken the wonderful news to his people, he entertains questions.

Crickets chirp. Members of the audience look at each other nervously.

Doctor Hardass, who has been asking tough questions of presenters all day, tentatively asks a question about the mathematics of this crackpot “solution”. The other scholars in attendance inwardly cheer, thinking, “In about 10 seconds Doctor Hardass will have demonstrated to Doctor Crackpot that this could never work! Then Doctor Crackpot will back away from this ledge and reconsider!”

Ten minutes later, Doctor Crackpot is still writing equations on the board, and Doctor Hardass has been reduced to saying, “Uh huh …” Scholars start sneaking out as the chirping of the crickets competes with the squeaking of the chalk.

Granted, no one wants to hurt Doctor Crackpot’s feelings. If it’s a small enough meeting, you all probably had lunch with him, maybe even drinks the night before. He seems like a nice guy. He doesn’t seem dangerously disconnected from reality in his everyday interactions, just dangerously disconnected from reality in the neighborhood of this particular scientific question. And, as he’s been toiling in obscurity at a little backwater institution, he’s obviously lonely for scientific company and conversation. So, calling him out as a crackpot seems kind of mean.

But … it’s also a little mean not to call him out. It can feel like you’re letting him wander through the scientific community with the equivalent of spinach in his teeth while trailing toilet paper from his shoe if you leave him with the impression that his revolutionary idea has any merit. Someone has to set this guy straight … right? If you don’t, won’t he keep trying to sell this crackpot idea at future meetings?

For what it’s worth, as someone who attends philosophy conferences as well as scientific ones (plus an interesting assortment of interdisciplinary conferences of various sorts), I can attest that there is the occasional crackpot presentation from a philosopher. However, the push-back from the philosophers during the Q&A seemed much more vigorous, and seemed also to reflect a commitment that the crackpot presenter could be led back to reality if only he would listen to the reasoned arguments presented to him by the audience.

In theory, you’d expect to see the same kind of commitment among scientists: if we can agree upon the empirical evidence and seriously consider each other’s arguments about the right theoretical framework in which to interpret it, we should all end up with something like agreement on our account of the world. Using the same sorts of knowledge-building strategies, the same standards of evidence, the same logical machinery, we should be able to build knowledge about the world that holds up against tests to which others subject it — and, we should welcome that testing, since the point of all this knowledge-building is not to win the argument but to build an account that gets the world right.

In theory, the scientific norms of universalism and organized skepticism would ensure that all scientific ideas (including the ones that are en face crackpot ideas) get a fair hearing, but that this “fair hearing” include rigorous criticism to sort out the ideas worthy of further attention. (These norms would also remind scientists that any member of the scientific community has the potential to be the source of a fruitful idea, or of a crackpot idea.)

In practice, though, scientists pick their battles, just like everyone else. If your first ten-minute attempt at reaching a fellow scientist with rigorous criticism shows no signs of succeeding, you might just decide it’s too big a job to tackle before lunch. If repeated engagements with a fellow scientist suggest that he seems not to comprehend the arguments against his pet theory — and maybe that he doesn’t fully grok how the rest of the community understands the standards and strategies for scientific knowledge-building — you may have to make a calculation about whether bringing him back to the fold is a better use of your time and effort than, say, putting more time into your own research, or offering critiques to scientists who seem to understand them and take them seriously.

This is a sensible way to get through a day which seems to have too few hours for all the scientific knowledge-building there is to do, but it might have an impact on whether the scientific community functions in the way that best supports the knowledge-building project.

In the continuum of “scientific knowledge”, on whose behalf scientists are sworn to uphold standards and keep out the dross, where do meetings fall? Do the scientists in attendance have any ethical duty to give their candid assessments of crackpottery to the crackpots? Or is it OK to just snicker about it at the bar? If there’s no obligation to call the crackpot out, does that undermine the value of meetings as sources of scientific knowledge, or of the scientific communications needed to build scientific knowledge?

Could a rational decision not to engage with crackpots in one’s scientific community (because the return on the effort invested is likely to be low) morph into avoidance of other scientists with weird ideas that actually have something to them? Could it lead to avoidance of serious engagement with scientists one thinks are mistaken when it might take serious effort to spell out the nature of the mistakes?

And is there any obligation from the scientific community either to accept the crackpots as fully part of the community (meaning that their ideas and their critiques of the ideas of other ought to be taken seriously), or else to be honest with them that, while they may subscribe to the same journals and come to the same meetings, the crackpots are Not Our Kind, Dear?

End-of-semester meditations on plagiarism.

Plagiarism — presenting the words or ideas (among other things) of someone else as one’s own rather than properly citing their source — is one of the banes of my professorial existence. One of my dearest hopes at the beginning of each academic term is that this will be the term with no instances of plagiarism in the student work submitted for my evaluation.

Ten years into this academic post and I’m still waiting for that plagiarism-free term.

One school of thought posits that students plagiarize because they simply don’t understand the rules around proper citation of sources. Consequently, professorial types go to great lengths to lay out how properly to cite sources of various types. They put explicit language about plagiarism and proper citation in their syllabi. They devote hours to crafting handouts to spell out expected citation practices. They require their students to take (and pass) plagiarism tutorials developed by information literacy professionals (the people who, in my day, we called university librarians).

And, students persist in plagiarizing.

Another school of thought lays widespread student plagiarism at the feet of the new digital age.

What with all sorts of information resources available through the internets, and with copy-and-paste technology, assembling a paper that meets the minimum page length for your assignment has never been easier. Back in the olden times, our forefathers had to actually haul the sources from which they were stealing off the shelves, maybe carry them back to the dorms through the snow, find their DOS disk to boot up the dorm PC, and then laboriously transcribe those stolen passages!

And it’s not just that the copy-and-paste option exists, we are told. College students have grown up stealing music and movies online. They’ve come of age along with Wikipedia, where information is offered free for their use and without authorship credits. If “information wants to be free” (a slogan attributed to Stewart Brand in 1984), how can these young people make sense of intellectual property, and especially of the need to cite the sources from which they found the information they are using? Is not their “plagiarism” just a form of pastiche, an activity that their crusty old professors fail to recognize as creative?

Yeah, the modern world is totally different, dude. There are tales of students copying not just Wikipedia articles but also things like online FAQs, verbatim, in student papers without citing the source, and indeed while professing that they didn’t think they needed to cite them because there was no author listed. You know what source kids used to copy from in my day that didn’t list authors? The World Book Encyclopedia. Indeed, from at least seventh grade, our teachers made a big deal of teaching us how to cite encyclopedia and newspaper articles with no named authors. Every citation guide I’ve seen in recent years (including the ones that talk about proper ways to cite web pages) includes instruction on how to cite such sources.

The fact that plagiarism is perhaps less labor-intensive than it used to be strikes me as an entirely separate issue from whether kids today understand that it’s wrong. If young people are literally powerless to resist the temptations presented to them by the internet, maybe we should be getting computers out of the classroom rather than putting more computers into the classroom.

Of course, the fact that not every student plagiarizes argues against the claim that students can’t help it. Clearly, some of them can.

There is research that indicates students plagiarize less in circumstances where they know that their work is going to be scanned with plagiarism-detection software. Here, it’s not that the existence or use of the software suddenly teaches students something they didn’t already know about proper citation. Rather, the extra 28 grams of prevention comes from an expectation that the software will be checking to see if they followed the rules of scholarship that they already understood.

My own experience suggests that one doesn’t require an expensive proprietary plagiarism-detection system like Turnitin — plugging the phrases in the assignment that just don’t sound like a college student wrote them into a reasonably good search engine usually delivers the uncited sources in seconds.

It also suggests that even when students are informed that you will be using software or search engines to check for plagiarism, some students still plagiarize.

Perhaps a better approach is to frame plagiarism as a violation of trust in a community that, ultimately, has an interest in being more focused on learning than on crime and punishment. This is an approach to which I’m sympathetic, which probably comes through in the version of “the talk” on academic dishonesty I give my students at the start of the semester:

Plagiarism is evil. I used to think I was a big enough person not to take it personally if someone plagiarized on an assignment for my class. I now know that I was wrong about that. I take it very personally.


For one thing, I’m here doing everything I can to help you learn this stuff that I think is really interesting and important. I know you may not believe yet that it’s interesting and important, but I hope you’ll let me try to persuade you. And, I hope you’ll put an honest effort into learning it. If you try hard and you give it a chance, I can respect that. If you make the calculation that, given the other things on your plate, you can’t put in the kind of time and effort I’m expecting and you choose to put in what you can, I’ll respect that, too. But if you decide it’s not worth your time or effort to even try, and instead you turn to plagiarism to make it look like you learned something — well, you’re saying that the stuff you’re supposedly here to learn is of no value, except to get you the grades and the credits you want. I care about that stuff. So I take it personally when you decide, despite all I’m doing here, that it’s of no value. Moreover, this is not a diploma mill where you pay your money and get your degree. If you want the three credits from my course, the terms of engagement are that you’ll have to show some evidence of learning.


Even worse, when you hand in an essay that you’ve copied from the internet, you’re telling me you don’t think I’m smart enough to tell the difference between your words and ideas and something you found in 5 minutes with Google. You’re telling me you think I’m stupid. I take that personally, too.


If you plagiarize in my course, you fail my course, and I will take it personally. Maybe that’s unreasonable, but that’s how I am. I thought I should tell you up front so that, if you can’t handle having a professor who’s such a hardass, you can explore your alternatives.

So far, none of my students have every run screaming from this talk. Some of them even nod approvingly. The students who labor to write their papers honestly likely feel there’s something unjust about classmates who sidestep all that labor by cheating.

But students can still fully comprehend your explanation of how you view plagiarism, how personally you’ll take it, how vigorously you’ll punish it … and plagiarize.

They may even deny it to your face for 30 additional seconds after they recognize that you have them dead to rights (since given the side-by-side comparison of their assignment and the uncited source, they would need to establish psychic powers for there to be any plausible explanation besides plagiarism). And then they’ll explain that they were really pressed for time, and they need a good grade (or a passing grade) in this course, and they felt trapped by circumstances, so even though of course they know what they did is wrong, they made one bad decision, and their parents will kill them, and … isn’t there some way we could make this go away? They feel so bad now that they promise they’ve learned their lesson.

Here, I think we need to recognize that there is a relevant difference between saying you have learned a lesson and actually learning that lesson.

Indeed, one of the reasons that my university’s office of judicial affairs asks instructors to report all cases of plagiarism and cheating no matter what sanctions we apply to them (including no sanctions) is so there will be a record of whether a particular offense is really the first offense. Students who plagiarize may also lie about whether they have a record of doing so and being caught doing it. If the offenses are spread around — in different classes with different professors in different departments — you might be able to score first-time leniency half a dozen times.

Does that sound cynical? From where I sit, it’s just realistic. But this “realistic” point of view (which others in the teaching trenches share) is bound to make us tougher on the students who actually do make a single bad decision, suspecting that they might be committed cheaters, too.

Keeping the information about plagiarists secret rather than sharing it through the proper channels, in other words, can hurt students who could be helped.

There have been occasions, it should be noted, when frustrated instructors warned students that they would name and shame plagiarists, only to find (after following through on that warning) that they had run afoul of FERPA. Among other things, FERPA gives students (18 or older) some measure of control about who gets to see their academic records. If a professor announces to the world — or even to your classmates — that you’ve failed a the class for plagiarizing, information from your academic records has arguably been shared without your consent.

Still, it’s hard not to feel that plagiarism is breaking trust not just with the professor but with the learning community. Does that learning community have an interest in flagging the bad actors? If you know there are plagiarists among your classmates but you don’t know who they are, does this create a situation where you can’t trust anyone? If all traces of punishment — or of efforts at rehabilitation — are hidden behind a veil of privacy, is the reasonable default assumption that people are generally living within the rules and that the rules are being enforced against the handful of violations … or is it that people are getting away with stuff?

Is there any reasonable role for the community in punishment and in rehabilitation of plagiarism?

To some, of course, this talk of harms to learning communities will seem quaint. If you see your education as an individual endeavor rather than a team sport, your classmates may as well be desks (albeit desks whose grades may be used to determine the curve). What you do, or don’t do, in your engagement with the machinery that dispenses your education (or at least your diploma) may be driven by your rational calculations about what kind of effort you’re willing to put into creating the artifacts you need to present in exchange for grades.

The artifacts that require writing can be really time-consuming to produce de novo. The writing process, after all, is hard. People who write for a living complain of writer’s block. Have you ever heard anyone complain about Google-block? Plagiarism, in other words, is a huge time-saver, not least because it relies on skills most college students already have rather than ones they need to develop to any significant extent.

Here, I’d like to offer a modest proposal for students unwilling to engage the writing process: don’t.

Take a stand for what you believe in! Don’t lurk in the shadows pretending to knuckle under to the man by turning in essays and term papers that give the appearance that you wrote them. Instead, tell your professors that writing anything original for their assignments is against your principles. Then take your F and wear it as a badge of honor!

When all those old-timey professors who fetishize the value of clear writing, original thought, and proper citation of sources die out — when your generation is running the show — surely your principled stand will be vindicated!

And, in the meantime, your professors can spend their scarce time helping your classmates who actually want to learn to write well and uphold rudimentary rules of scholarship.

Really, it’s win-win.

_____
In the interests of full-disclosure — and of avoiding accusations of self-plagiarism — I should note that this essay draws on a number of posts I have written in the past about plagiarism in academic contexts.

The purpose of a funding agency (and how that should affect its response to misconduct).

In the “Ethics in Science” course I regularly teach, students spend a good bit of time honing their ethical decision-making skills by writing responses to case studies. (A recent post lays out the basic strategy we take in approaching these cases.) Over the span of the semester, my students’ responses to the cases give me pretty good data about the development of their ethical decision-making.

From time to time, they also advance claims that make me say, “Hmmm …”

Here’s one such claim, recently asserted in response to a case in which the protagonist, a scientist serving on a study section for the NIH (i.e., a committee that ranks the merit of grant proposals submitted to the NIH for funding), has to make a decision about how to respond when she detects plagiarism in a proposal:

The main purpose of the NIH is to ensure that projects with merit get funded, not to punish scientists for plagiarism.

Based on this assertion, the student argued that it wasn’t clear that the study section member had to make an official report to the NIH about the plagiarism.

I think the claim is interesting, though I think maybe we would do well to unpack it a little. What, for instance, counts as a project with merit?

Is it enough that the proposed research would, if successful, contribute a new piece of knowledge to our shared body of scientific knowledge? Does the anticipated knowledge that the research would generate need to be important, and if so, according to what metric? (Clearly applicable to a pressing problem? Advancing our basic understanding of some part of our world? Surprising? Resolving an ongoing scientific debate?) Does the proposal need to convey evidence that the proposers have a good chance at being successful in conducting the research (because they have the scientific skills, the institutional resources, etc.)?

Does plagiarism count as evidence against merit here?

Perhaps we answer this question differently if we think what should be evaluated is the proposal rather than the proposer. Maybe the proposed research is well-designed, likely to work, and likely to make an important contribution to knowledge in the field — even if the proposer is judged lacking in scholarly integrity (because she seems not to know how properly to cite the words or ideas of others, or not to care to do so if she knows how).

But, one of the expectations of federal funders like the NIH is that scientists whose research is funded will write up the results and share them in the scientific literature. Among other things, this means that one of the scientific skills that a proposer will need to see a project through to completion (including publishing the results) successfully is the ability to write without running afoul of basic standards of honest scholarship. A paper which communicates important results while also committing plagiarism will not bring glory to the NIH for funding the researcher.

More broadly, the fact that something (like detecting or punishing plagiarism) is not a primary goal does not mean it is not a goal that might support the primary goal. To the extent that certain kinds of behavior in proposing research might mark a scientist as a bad risk to carry out research responsibly, it strikes me as entirely appropriate for funding agencies to flag those behaviors when they see them — and also to share that information with other funding agencies.

As well, to the extent that an agency like the NIH might punish a scientist for plagiarism, the kind of punishment it imposes is generally barring that scientist from eligibility for funding for a finite number of years. In other words, the punishment amounts to “You don’t get our money, and you don’t get to ask us for money again for the next N years.” To me, this punishment doesn’t look like it’s disproportional, and it doesn’t look like imposing it on a plagiarist grant proposer diverges wildly from the main goal of ensuring that projects with merit get funded.

But, as always, I’m interested in what you all think about it.

Is it worth fighting about what’s taught in high school biology class?

It is probably no surprise to my regular readers that I get a little exercised about the science wars that play out across the U.S. in various school boards and court actions. It’s probably unavoidable, given that I think about science for a living — when you’ve got a horse in the race, you end up spending a lot of time at the track.

From time to time, though, thoughtful people ask whether some of these battles are distractions from more important issues — and, specifically, whether the question of what a community decides to include in, or omit from, its high school biology curriculum ought to command so much of our energy and emotional investment.

About seven years ago, the focus was on Dover, Pennsylvania, whose school board required that the biology curriculum must include the idea of an intelligent designer (not necessarily God, but … well, not necessarily not-God) as the origin of life on Earth. Parents sued, and U.S. District Judge John E. Jones III ruled that the requirement was unconstitutional. If you missed it as it was happening, there’s a very good NOVA documentary on the court case.

As much as the outcome of this trial felt like a victory to supporters of science, some expressed concerns that the battle over the Dover biology curriculum was focusing on one kind of problem but missing many bigger problems in the process — for example, this dispatch from Dover, PA by Eyal Press, printed in The Nation in November 2005.

Press describes the Dover area as it unfolded for him in a drive-along with former Dover school board member Casey Brown:

We drove out past some cornfields, a sheep farm, a meadow and a couple of barns, along the back roads of York County, a region where between 1970 and 2000, 11 percent of the manufacturing jobs disappeared, and where in the more rural areas one in five children grows up in a low-income family (in the city of York the figure is one in three). Dover isn’t dirt poor, but neither is it wealthy. It’s the kind of place where people work hard and save what they can. Looking out at the soy, wheat and dairy farms while Brown explained that lots of older people in the area can’t afford to keep up with their mortgages and end up walking away from their homes, I was struck by the thought that this was a part of the country where, a century ago, the populist movement might have made inroads by organizing small farmers against the monopolies and trusts. These days, of course, a different sort of populism prevails, infused by religion and defining itself against “outside” forces like the ACLU.

Press also went to see what the students in Dover thought of the controversy:

What do the intended beneficiaries of the Dover school board’s actions make of the intelligent design debate? A few days before meeting Casey Brown, I drove out to Dover high school to find out. It was late in the afternoon and a couple of kids were milling about outside, waiting for rides. When I asked them what they thought of the controversy, they looked at me with blank stares that suggested I could not have posed a question of less relevance to their lives. “I think you should leave us alone,” one of them said. “Everyone just sleeps through that class anyway,” said another. I approached a third kid, who was standing alone. Nobody he knew ever talked about the issue, he told me; it was no big deal.

Press suggests that this is not just a matter of teen ennui. The schools in the area may not be up to the challenge of addressing the real needs of their students:

For the most part, though, kids in Dover seem perplexed that so much attention is being paid to what happens in a single class. It is a sentiment shared by Pat Jennings, an African-American woman who runs the Lighthouse Youth Center, an organization that offers after-school programs, recreational services and parenting and Bible study classes to kids throughout York County. The center, which is privately funded, is located in a brown-brick building in downtown York, next to a church. … A deeply religious woman who describes her faith as “very important” to her, Jennings nonetheless confessed that she hasn’t paid much attention to the evolution controversy, since she’s too busy thinking about other problems the children she serves face–drugs, gangs, lack of access to opportunity, racism. “When we are in this building there are no Latinos, blacks, Caucasian children–just children,” she explained after giving me a tour of the center. “But when I go out there”–she pointed to the street–“I’m reminded that I’m different.”

“There’s a lot of kids out there looking for something,” Jennings continued. “They have questions that need answering. They’re looking for someone to trust.” I asked her if she thought schools were providing that thing. She shook her head. “I don’t know if it’s the schools or the parents or whatever, but something is wrong. The kids I see lack discipline. They lack reading skills.” Listening to her, it was hard not to view the dust-up over intelligent design as a tragic illustration of how energy that could be poured into other problems is wasted on symbolic issues of comparatively minor significance.

Why those symbolic issues have assumed such importance in America has a lot to do with the fact that, in places like Dover, the only institutions around that seem willing to address the concerns of many people are fundamentalist churches.

I take it that Press is not primarily interested in taking scientists to task. Rather, his point seems to be that folks in Dover and places like it are much less concerned about “direction” of curriculum by fundamentalist churches because those churches are perceived as taking care of social needs that no one else — including the government — seems willing or able to address in these communities. It doesn’t seem altogether irrational to bend a little to the folks keeping things together, especially if the bending involves changing the curriculum that the high school students are going to sleep through anyway, does it?

This is a variant of the ongoing debate I have at my university about what is supposed to be going on here. As it occasionally plays out with students in my “Philosophy of Science” class, it goes roughly like this:

Me: A college education should help you understand different kinds of knowledge and reasoning. My class should help you understand what’s distinctive about scientific knowledge.

Jaded Student: Dude, I really just want to sit in the chair and do the minimum I need to do to get the three units of upper division science general education credit. Don’t bug me.

Me: You’re a college student! Learning this is good for you!

Jaded Student: I’m only in college so I can get a job that pays a decent wage. If I could do that any other way, I wouldn’t be here.

Me: How will you navigate the modern world without some understanding of science?

Jaded Student: Unless understanding science gets me a better salary it ain’t gonna happen. Learning for its own sake is for suckers.

And here’s where I want to say that, although Eyal Press is right that there are very bad things that are much larger than the details of the biology curriculum happening in communities like Dover, the fight over quality public education is central rather than merely symbolic.

Whether intelligent design is presented as legitimate and empirically supported scientific theory in the classroom is one piece of delivering quality education, but it’s not the only piece. Making sure schools have the funding they for current books, for lab supplies, for computers and internet connections is another piece. So is making sure teachers can incorporate active learning that is not completely driven by a standardized test. So is ensuring small enough classes that students can get the interaction with their teachers and their classmate that they need to learn effectively. So is finding ways to support student learning in more basic ways — say, by making sure kids get adequate nutrition so they can focus on what they’re learning rather than on gnawing hunger, and making their trips to and from school (not to mention their walks down the school corridors) safer. Each of these issues ought to be addressed. None of them strikes me as a place where it would be legitimate for us to give up rather than to fight for what kids deserve.

Education is not a dispensible luxury. Rather, it is an essential tool for people in making reasonable choices about their own lives. Education isn’t just about teaching specific skills for the workforce; it also lays a foundation with which to learn new skills to keep up with a changing economy (or, dare I say it, with one’s changing interests). Even more, education is supposed to open up a world quite apart from the world of work. The world may need ditch diggers (or repair technicians for the ditch-digging robots), but it would be a much better world if the ditch diggers (and repair technicians) not only earned a decent wage but also had enough left over to buy a few books and to think about things they wanted to think about. (Yes, I’m going on my “everyone deserves a life of the mind” rant. It happens.)

Making a better world may require choosing one’s battles. Some would suggest that the battle over science education is a high-investment, low-payoff battle. But my own sense is that the minute we decide a certain population of students don’t really need good science education, we’ve put up the white flag.

Do we help students who are in difficult socio-economic circumstances by reducing their future prospects to succeed in further science classes or pursue a career in science? Do we help these students when we throw them out into the world as voters and consumers without a clear understanding of how scientific knowledge is produced and of how it is different from other kinds of knowledge? Might it not reinforce the feeling that the larger society really doesn’t actually care much about you or your future if you find out that people with a voice didn’t even whimper as you were subjected to an “education” these people wouldn’t have allowed their own kids to suffer through?

One of the guiding ideals of science is that it is a project in which anyone can engage — provided they have the necessary training. Scientists try to work out accounts of what’s going on in the world that are tested against and built upon observation that human beings can make regardless of their home country, their socio-economic status, their race, their gender, their age. The scientific ideal of universality ought to make science a realm of work that is open to anyone willing to put in the work to become scientist. A career in science could be a real avenue for class mobility.

Unless, of course, we decide that public school students in less affluent communities (or more rural communities, or red states, or whatever) aren’t really entitled to the best science education we can give them. If keeping them fed and out of gangs and passing the standardized tests in reading and writing is the extent of our obligation to these students, maybe a sound science education is a luxury. But if this is the case, we probably ought to cut out the whole “American dream” story and admit to ourselves that this place is not a perfect meritocracy. Those who have the luxury of a quality education have an advantage over those who don’t, and by golly they should own up to that. Especially when budgets are being hammered out, or when elections are coming up.

Lately, of course, as public schools are trying to weather dramatic cuts in state and local budgets (and for those far from the action it keeps getting worse despite claims that the economy is showing signs of improvement), science instruction of any kind has come to be viewed as a frill, something that could be cut in favor of more focus on reading or math (the areas most important for the high-stakes standardized tests). Or perhaps science instruction will need to be cut because budgetary pressures require a shorter school day. Or maybe science instruction will end up being delivered in ever more overcrowded classrooms, with fewer materials for hands-on learning that might give students experience with something like scientific methods for inquiry. Sure, in a perfect world we might want to provide more opportunities for active learning and guided inquiry, but, we are told, we just can’t afford it.

But what does it cost us in the long run not to make this educational investment?

The kids in Dover, and Iowa, and Kansas, whose science classes have become the ground on which grown-ups play out their anxieties about science, are part of your future and mine. So are the kids in the public schools cutting back on science instruction for lack of funds. So are the kids in classrooms where teachers convey the message that one has to be really, really smart — smarter than they are, certainly — to understand anything about science. These kids are the electorate of tomorrow, the workforce of tomorrow, the people who will have to make sensible decisions in their everyday lives as consumers of scientific information.

Even if, as 15 year olds, they don’t fully appreciate the stand being taken on their behalf, I’m not willing to back down from taking it, just the same way I’m not willing to let jaded students out of my classes without some learning taking place. Valuing other members of our society means valuing their future options to set their own course and to find meaning in their own lives.

Making good science education is not sufficient here, but my gut says it may be necessary.

Whither mentoring?

Drugmonkey takes issue with the assertion that mentoring is dead*:

Seriously? People are complaining that mentoring in academic science sucks now compared with some (unspecified) halcyon past?

Please.

What should we say about the current state of mentoring in science, as compared to scientific mentoring in days of yore? Here are some possibilities:

Maybe there has been a decline in mentoring.

This might be because mentoring is not incentivized in the same way, or to the same degree, as publishing, grant-getting, etc. (Note, though, that some programs require evidence of successful mentoring for faculty promotion. Note also that some funding mechanisms require that the early-career scientist being funded have a mentor.)

Or it might be because no one trained the people who are expected to mentor (such as PIs) in how to mentor. (In this case, though, we might take this as a clue that the mentoring these PIs received in days of yore was not so perfect after all.)

Or, it might be that mentoring seems to PIs like a risky move given that it would require too much empathetic attachment with the trainees who are also one’s primary source of cheap labor, and whose prospects for getting a job like the PI’s are perhaps nowhere near as good as the PI (or the folks running the program) have led the trainees to believe.

Or, possibly PIs are not mentoring so well because the people they are being asked to mentor are increasingly diverse and less obviously like the PIs.

Maybe mentoring is no worse than it has ever been.

Perhaps it has always been a poorly defined part of the advisor’s job duties, not to mention one for which hardly anyone gets formal training in how to do. Moreover, the fact that it may depend on inclination and personal compatibility might make it more chancy than things like joining a lab or writing a dissertation.

Maybe mentoring has actually gotten better than it used to be.

It’s even possible that increased diversity in training populations might tend to improve mentoring by forcing PIs to be more conscious of their interactions (since they recognize that the people they are mentoring are not just like them). Similarly, awareness that trainees are facing a significantly different employment landscape than the one the mentor faced might help the mentor think harder about what kind of advice could actual be useful.

Here, I think that we might also want to recognize the possibility that what has changed is not the level of mentoring being delivered, but rather the expectations the trainees have for what kind of mentoring they should receive.

Pulling back from the question of whether mentoring has gotten better, worse, or stayed the same, there are two big issues that prevent us from being able to answer that question. One is whether we can get our hands on sensible empirical data to make anything like an apples-to-apples comparison of mentoring in different times (or, for that matter, in different places). The other is whether we’re all even talking about the same thing when we’re holding forth about mentoring and its putative decline.

Let’s take the second issue first. What do we have in mind when we say that trainees should have mentors? What exactly is it that they are supposed to get out of mentoring.

Vivian Weil [1], among others, points us to the literary origin of the term mentor, and the meanings this origin suggests, in the relationship between the characters Mentor and Telemachus in Homer’s epic poem, the Odyssey. Telemachus was the son of Odysseus; his father was off fighting the Trojan war, and his mother was busy fending off suitors (which involved a lot of weaving and unweaving), so the kid needed a parental surrogate to help him find his way through a confusing and sometimes dangerous world. Mentor took up that role.**

At the heart of mentoring, Weil argues, is the same kind of commitment to protect the interests of someone just entering the world of your discipline, and to help the mentee to develop skills sufficient to take care of himself or herself in this world:

All the activities of mentoring, but especially the nurturing activities, require interacting with those mentored, and so to be a mentor is to be involved in a relationship. The relationships are informal, fully voluntary for both members, but at least initially and for some time thereafter, characterized by a great disparity of experience and wisdom. … In situations where neophytes or apprentices are learning to “play the game”, mentors act on behalf of the interests of these less experienced, more vulnerable parties. (Weil, 473)

In the world of academic science, the guidance a mentor might offer would then be focused on the particular challenges the mentee is likely to face in graduate school, the period in which one is expected to make the transition from being a learner of scientific knowledge to being a maker of new knowledge:

On the traditional model, the mentoring relationship is usually thought of as gradual, evolving, long-term, and involving personal closeness. Conveying technical understanding and skills and encouraging investigative efforts, the mentor helps the mentee move through the graduate program, providing feedback needed for reaching milestones in a timely fashion. Mentors interpret the culture of the discipline for their mentees, and help them identify good practices amid the complexities of the research environment. (Weil, 474)

A mentor, in other words, is a competent grown-up member of the community in which the mentee is striving to become a grown-up. The mentor understands how things work, including what kinds of social interactions are central to conducting research, critically evaluating knowledge claims, and coordinating the efforts of members of the scientific community more generally.

Weil emphasizes that the the role of mentor, understood in this way, is not perfectly congruent with the role of the advisor:

While mentors advise, and some of their other activities overlap with or supplement those of an advisor, mentors should not be confused with advisors. Advising is a structured role in graduate education. Advisors are expected to perform more formal and technical functions, such as providing information about the program and degree requirements and periodic monitoring of advisees’ progress. The advisor may also have another structured role, that of research (dissertation) director, for advisors are often principal investigators or laboratory directors for projects on which advisees are working. In the role of research director, they “may help students formulate research projects and instruct them in technical aspects of their work such as design, methodology, and the use of instrumentation.” Students sometimes refer to the research or laboratory director as “boss”, conveying an employer/employee relationship rather than a mentor/mentee relationship. It is easy to see that good advising can become mentoring and, not surprisingly, advisors sometimes become mentors. Nevertheless, it is important to distinguish the institutionalized role of advisor from the informal activities of a mentor. (Weil, 474)

Mentoring can happen in an advising relationship, but the evaluation an advisor needs to do of the advisee may be in tension with the kind of support and encouragement a mentor should give. The advisor might have to sideline an advisee in the interests of the larger research project; the mentor would try to prioritize the mentee’s interests.

Add to this that the mentoring relationship is voluntary to a greater degree than the advising relationship (where you have to be someone’s advisee to get through), and the interaction is personal rather than strictly professional.

Among other things, this suggests that good advising is not necessarily going to achieve the desired goal of providing good mentoring. It also suggests that it’s a good idea to seek out multiple mentors (e.g., so in situations where an advisor cannot be a mentor due to the conflicting duties of the advisor, another mentor without these conflicts can pick up the slack).

So far, we have a description of the spirit of the relationship between mentor and mentee, and a rough idea of how that relationship might advance the welfare of the mentee, but it’s not clear that this is precise enough that we could use it to assess mentoring “in the wild”.

And surely, if we want to do more than just argue based on subjective anecdata about how mentoring for today’s scientific trainees compares to the good old days, we need to find some way to be more precise about the mentoring we have in mind, and to measure whether it’s happening. (Absent a time machine, or some stack of data collected on mentoring in the halcyon past, we probably have to acknowledge that we just don’t know how past mentoring would have measured up.)

A faculty team from the School of Nursing at Johns Hopkins University, led by Roland A. Berk [2], grappled with the issue of how to measure whether effective mentoring was going on. Here, the mentoring relationships in question were between more junior and more senior faculty members (rather than between graduate students and faculty members), and the impetus for developing a reliable way to measure mentoring effectiveness was the fact that evidence of successful mentoring activities was a criterion for faculty promotion.

Finding no consistent definition of mentoring in the literature on medical faculty mentoring programs, Berk et al. put forward this one:

A mentoring relationship is one that may vary along a continuum from informal/short-term to formal/long-term in which faculty with useful experience, knowledge, skills, and/or wisdom offers advice, information, guidance, support, or opportunity to another faculty member or student for that individual’s professional development. (Note: This is a voluntary relationship initiated by the mentee.) (Berk et al., 67)

Then, they spelled out central responsibilities within this relationship:

[F]aculty must commit to certain concrete responsibilities for which he or she will be held accountable by the mentees. Those concrete responsibilities are:

  • Commits to mentoring
  • Provides resources, experts, and source materials in the field
  • Offers guidance and direction regarding professional issues
  • Encourages mentee’s ideas and work
  • Provides constructive and useful critiques of the mentee’s work
  • Challenges the mentee to expand his or her abilities
  • Provides timely, clear, and comprehensive feedback to mentee’s questions
  • Respects mentee’s uniqueness and his or her contributions
  • Appropriately acknowledges contributions of mentee
  • Shares success and benefits of the products and activities with mentee

(Berk et al., 67)

These were then used to construct a “Mentorship Effectiveness Scale” that mentees could use to share their perceptions of how well their mentors did on each of these responsibilities.

Here, one might raise concerns that there might be a divergence between how effective a mentee thinks the mentor is in each of these areas and how effective the mentor actually is. Still, tracking the perceptions of the mentees with the instrument developed by Berk et al. provides some kind of empirical data. In discussions about whether mentoring is getting better or worse, such data might be useful.

And, if this data isn’t enough, it should be possible to work out strategies to get the data you want: Survey PIs to see what kind of mentoring they want to provide and how this compares to what kind of mentoring they feel able to provide. (If there are gaps here, follow-up questions might explore the perceived impediments to delivering certain elements of mentoring.) Survey the people running graduate programs to see what kind of mentoring they think they are (or should be) providing and what kind of mechanisms they have in place to ensure that if it doesn’t happen informally between the student and the PI, it’s happening somewhere.

To the extent that successful mentoring is already linked to tangible career rewards in some places, being able to make a reasonable assessment of it seems appropriate.

It’s possible that making it a standard thing to evaluate mentoring and to tie it to tangible career rewards (or penalties, if one does an irredeemably bad job of it) might help focus attention on mentoring as an important thing for grown-up members of the scientific community to do. This might also lead to more effort to help people learn how to mentor effectively and to offer support and remediation for people whose mentoring skills are not up to snuff.

But, I have a worry (not a huge one, but not nanoscale either). Evaluation of effective mentoring seems to rely on breaking out particular things the mentor does for the mentee, or particular kinds of interactions that take place between the two. In other words, the assessment tracks measurable proxies for a more complicated relationship.

That’s fine, but there’s a risk that a standardized assessment might end up reducing the “mentorship” that mentors offer, and that mentees seek, to these proxies. Were this to happen, we might lose sight of the broader, richer, harder-to-evaluate thing that mentoring can be — an entanglement of interests, a transmission of wisdom, and of difficult questions, and of hopes, and of fears, in what boils down to a personal relationship based on a certain kind of care.

The thing we want the mentorship relationship to be is not something that you could force two people to be in — any more than we could force two people to be in love. We feel the outcomes are important, but we cannot compel them.

And obviously, the assessable outcomes that serve as proxies for successful mentoring are better than nothing. Still, it’s not unreasonable for us to hope for more as mentees, nor to try to offer more as mentors.

After all, having someone on the inside of the world of which you are trying to become a part, someone who knows the way and can lead you through, and someone who believes in you and your potential even a little more than you believe in yourself, can make all the difference.

_____
*Drugmonkey must know that my “Ethics in Science” class will be discussing mentoring this coming week, or else he’s just looking for ways to distract me from grading.

**As it happened, Mentor was actually Athena, the goddess of wisdom and war, in disguise. Make of that what you will.

[1] Weil, V. (2001) Mentoring: Some Ethical Considerations. Science and Engineering Ethics. 7 (4): 471-482.

[2] Berk, R. A., Berg, J., Mortimer, R., Walton-Moss, B., and Yeo, T. P. (2005) Measuring the Effectiveness of Faculty Mentoring Relationships. Academic Medicine. 80: 66-71.

Who matters (or should) when scientists engage in ethical decision-making?

One of the courses I teach regularly at my university is “Ethics in Science,” a course that explores (among other things) what’s involved in being a good scientist in one’s interactions with the phenomena about which one is building knowledge, in one’s interactions with other scientists, and in one’s interactions with the rest of the world.

Some bits of this are pretty straightforward (e.g., don’t make up data out of whole cloth, don’t smash your competitor’s lab apparatus, don’t use your mad science skillz to engage in a campaign of super-villainy that brings Gotham City to its knees). But, there are other instances where what a scientist should or should not do is less straightforward. This is why we spend significant time and effort talking about — and practicing — ethical decision-making (working with a strategy drawn from Muriel J. Bebeau, “Developing a Well-Reasoned Response to a Moral Problem in Scientific Research”). Here’s how I described the basic approach in a post of yore:

Ethical decision-making involves more than having the right gut-feeling and acting on it. Rather, when done right, it involves moving past your gut-feeling to see who else has a stake in what you do (or don’t do); what consequences, good or bad, might flow from the various courses of action available to you; to whom you have obligations that will be satisfied or ignored by your action; and how the relevant obligations and interests pull you in different directions as you try to make the best decision. Sometimes it’s helpful to think of the competing obligations and interests as vectors, since they come with both directions and magnitudes — which is to say, in some cases where they may be pulling you in opposite directions, it’s still obvious which way you should go because the magnitude of one of the obligations is so much bigger than of the others.
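
To illustrate the vector metaphor (this sketch is mine, not part of the quoted post, and the particular considerations and weights are invented purely for illustration), one might assign each consideration a direction and a magnitude and look at how they combine:

    # Toy illustration of treating obligations and interests as "vectors":
    # each gets a direction (+1 = toward a course of action, -1 = against)
    # and a made-up magnitude. The signed sum shows which way the balance
    # tips. Real ethical decision-making is not this mechanical; the numbers
    # are invented for the sake of the example.

    considerations = [
        ("obligation to report results honestly", +1, 9.0),
        ("desire not to embarrass a labmate", -1, 2.0),
        ("risk of delaying the project", -1, 1.5),
    ]

    balance = sum(direction * magnitude for _, direction, magnitude in considerations)
    print("lean toward acting" if balance > 0 else "lean against acting", balance)

The point is only the one made in the quoted passage: even when obligations pull in opposite directions, the decision can still be clear if one of them vastly outweighs the others.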

We practice this basic strategy by using it to look at a lot of case studies. Basically, the cases describe a situation where the protagonist is trying to figure out what to do, giving you a bunch of details that seem salient to the protagonist and leaving some interesting gaps where the protagonist maybe doesn’t have some crucial information, or hasn’t looked for it, or hasn’t thought to look for it. Then we look at the interested parties, the potential consequences, the protagonist’s obligations, and the big conflicts between obligations and interests to try to work out what we think the protagonist should do.

Recently, one of my students objected to how we approach these cases.

Specifically, the student argued that we should radically restrict our consideration of interested parties — probably to no more than the actual people identified by name in the case study. Considering the interests of a university department, or of a federal funder, or of the scientific community, the student asserted, made the protagonist responsible to so many entities that the explicit information in the case study was not sufficient to identify the correct course of action.*

And, the student argued, one interested party that it was utterly inappropriate for a scientist to include in thinking through an ethical decision is the public.

Of course, I reminded the student of some reasons you might think the public would have an interest in what scientists decide to do. Members of the public share a world with scientists, and scientific discoveries and scientific activities can have impacts on things like our environment, the safety of our buildings, what our health care providers know and what treatments they are able to offer us, and so forth. Moreover, at least in the U.S., public funds play an essential role in supporting both scientific research and the training of new scientists (even at private universities) — which means that it’s hard to find an ethical decision-making situation in a scientific training environment that is completely isolated from something the public paid for.

My student was not moved by the suggestion that financial involvement should buy the public any special consideration as a scientist was trying to decide the right thing to do.

Indeed, central to the student’s argument was the idea that the interests of the public, whether with respect to science or anything else, are just too heterogeneous. Members of the public want lots of different things. Taking these interests into account could only be a distraction.

As well, the student asserted, too small a proportion of the public actually cares about what scientists are up to for the public, even if it were more homogeneous, to warrant being taken into account by scientists grappling with their own ethical quandaries. Even worse, the student ventured, those who do care what scientists are up to are not necessarily well-informed.

I’m not unsympathetic to the objection to the extreme case here: if a scientist felt required to somehow take into account the actual particular interests of each individual member of the public, that would make it well nigh impossible to actually make an ethical decision without the use of modeling methods and supercomputers (and even then, maybe not). However, it strikes me that it shouldn’t be totally impossible to anticipate some reasonable range of interests non-scientists have that might be impacted by the consequences of a scientist’s decision in various ways. Which is to say, the lack of total fine-grained information about the public, or of complete predictability of the public’s reactions, would surely make it more challenging to make optimal ethical decisions, but these challenges don’t seem to warrant ignoring the public altogether just so the problem you’re trying to solve becomes more tractable.

In any case, I figure that there’s a good chance some members of the public** may be reading this post. To you, I pose the following questions:

  1. Do you feel like you have an interest in what science and scientists are up to? If so, how would you describe that interest? If not, why not?
  2. Do you think scientists should treat “the public” as an interested party when they try to make ethical decisions? Why or why not?
  3. If you think scientists should treat “the public” as an interested party when they try to make ethical decisions, what should scientists be doing to get an accurate read on the public’s interests?
  4. And, for the sake of symmetry, do you think members of the public ought to take account of the interests of science or scientists when they try to make ethical decisions? Why or why not?

If, for some reason, you feel like chiming in on these questions in the comments would expose you to unwanted blowback, you can also email me your responses (dr dot freeride at gmail dot com) for me to anonymize and post on your behalf.

Thanks in advance for sharing your view on this!

_____
*Here I should note that I view the ambiguities within the case studies as a feature, not a bug. In real life, we have to make good ethical decisions despite uncertainties about what consequences will actually follow our actions, for example. Those are the breaks.

**Officially, scientists are also members of the public — even if you’re stuck in the lab most of the time!

What does a Ph.D. in chemistry get you?

A few weeks back, Chemjobber had an interesting post looking at the pros and cons of a Ph.D. program in chemistry at a time when job prospects for Ph.D. chemists are grim. The post was itself a response to a piece in the Chronicle of Higher Education by a neuroscience graduate student named Jon Bardin which advocated strongly that senior grad students look to non-traditional career pathways to have both their Ph.D.s and permanent jobs that might sustain them. Bardin also suggested that graduate students “learn to approach their education as a series of learning opportunities rather than a five-year-long job interview,” recognizing the relative luxury of having a “safe environment” in which to learn skills that are reasonably portable and useful in a wide range of career trajectories — all while taking home a salary (albeit a graduate-stipend sized one).

Chemjobber replied:

Here’s what I think Mr. Bardin’s essay elides: cost. His Ph.D. education (and mine) were paid for by the US taxpayer. Is this the best deal that the taxpayer can get? As I’ve said in the past, I think society gets a pretty good deal: they get 5+ years of cheap labor in science, (hopefully) contributions to greater knowledge and, at the end of the process, they get a trained scientist. Usually, that trained scientist can go on to generate new innovations in their independent career in industry or academia. It’s long been my supposition that the latter will pay (directly and indirectly) for the former. If that’s not the case, is this a bargain that society should continue to support? 

Mr. Bardin also shows a great deal of insouciance about the costs to himself: what else could he have done, if he hadn’t gone to graduate school? When we talk about the costs of getting a Ph.D., I believe that we don’t talk enough about the sheer length of time (5+ years) and what other training might have been taken during that time. Opportunity costs matter! An apprenticeship at a microbrewery (likely at a similar (if not higher) pay scale as a graduate student) or a 1 or 2 year teaching certification process easily fits in the half-decade that most of us seem to spend in graduate school. Are the communications skills and the problem-solving skills that he gained worth the time and the (opportunity) cost? Could he have obtained those skills somewhere else for a lower cost? 

Chemjobber also notes that while a Ph.D. in chemistry may provide tools for a range of careers, actually having a Ph.D. in chemistry on your resume is not necessarily advantageous in securing a job in one of those careers.

As you might imagine this is an issue to which I have given some thought. After all, I have a Ph.D. in chemistry and am not currently employed in a job that is at all traditional for a Ph.D. in chemistry. However, given that it has been nearly two decades since I last dipped a toe into the job market for chemistry Ph.D.s, my observations should be taken with a large grain of sodium chloride.

First off, how should one think of a Ph.D. program in chemistry? There are many reasons you might value a Ph.D. program. A Ph.D. program may be something you value primarily because it prepares you for a career of a certain sort. It may also be something you value for what it teaches you, whether about your own fortitude in facing challenges, or about how the knowledge is built. Indeed, it is possible — maybe even common — to value your Ph.D. program for more than one of these reasons at a time. And some weeks, you may value it primarily because it seemed like the path of least resistance compared to landing a “real job” right out of college.

I certainly don’t think it’s the case that valuing one of these aspects of a Ph.D. program over the others is right or wrong. But …

Economic forces in the world beyond your graduate program might be such that there aren’t as many jobs suited to your Ph.D. chemist skills as there are Ph.D. chemists competing for those jobs. Among other things, this means that earning a Ph.D. in chemistry does not guarantee you a job in chemistry on the other end.

To which, as the proud holder of a Ph.D. in philosophy, I am tempted to respond: join the club! Indeed, I daresay that recent college graduates in many, many majors have found themselves in a world where a bachelor's degree guarantees little except that the student loans will still need to be repaid.

To be fair, my sense is that the mismatch between supply of Ph.D. chemists and demand for Ph.D. chemists in the workplace is not new. I have a vivid memory of being an undergraduate chemistry major, circa 1988 or 1989, and being told that the world needed more Ph.D. chemists. I have an equally vivid memory of being a first-year chemistry graduate student, in early 1990, and picking up a copy of Chemical & Engineering News in which I read that something like 30% too many Ph.D. chemists were being produced given the number of available jobs for Ph.D. chemists. Had the memo not reached my undergraduate chemistry professors? Or had I not understood the business model inherent in the production of new chemists?

Here, I’m not interested in putting forward a conspiracy theory about how this situation came to be. My point is that even back in the last millennium, those in the know had no reason to believe that making it through a Ph.D. program in chemistry would guarantee your employment as a chemist.

So, what should we say about this situation?

One response to this situation might be to throttle production of Ph.D. chemists.

This might result in a landscape where there is a better chance of getting a Ph.D. chemist job with your Ph.D. in chemistry. But, the market could shift suddenly (up or down). Were this to happen, it would take time to adjust the Ph.D. throughput in response. As well, current PIs would have to adjust to having fewer graduate students to crank out their data. Instead, they might have to pay more technicians and postdocs. Indeed, the number of available postdocs would likely drop once the number of Ph.D.s being produced more closely matched the number of permanent jobs for holders of those Ph.D.s.

Needless to say, this might be a move that the current generation of chemists with permanent positions at the research institutions that train new chemists would find unduly burdensome.

We might also worry about whether the thinning of the herd of chemists ought to happen on the basis of bachelor's-level training. Being a successful chemistry major tends to reflect your ability to learn scientific knowledge, but it's not clear to me that this is a great predictor of how good you would be at the project of making new scientific knowledge.

In fact, the thinning of the herd wherever it happens seems to put a weird spin on the process of graduate-level education. Education, after all, tends to aim for something bigger, deeper, and broader than a particular set of job skills. This is not to say that developing skills is not an important part of an education — it is! But in addition to these skills, one might want an understanding of the field in which one is being educated and its workings. I think this is connected to how being a chemist becomes linked to our identity, a matter of who we are rather than just of what we do.

Looked at this way, we might actually wonder about who could be harmed by throttling Ph.D. program enrollments.

Shouldn’t someone who’s up for the challenge have that experience open to her, even if there’s no guarantee of a job at the other end? As long as people have accurate information with which to form reasonable expectations about their employment prospects, do we want to be paternalistic and tell them they can’t?

(There are limits here, of course. There are not unlimited resources for the training of Ph.D. chemists, nor unlimited slots in graduate programs, nor in the academic labs where graduate students might participate meaningfully in research. The point is that maybe these limits are the ones that ought to determine how many people who want to learn how to be chemists get to do that.)

Believe it or not, we had a similar conversation in a graduate seminar filled with first- and second-year students in my philosophy Ph.D. program. Even philosophy graduate students have an interest in someday finding stable employment, the better to eat regularly and live indoors. Yet my sense was that even the best graduate students in my philosophy Ph.D. program recognized that employment in a job tailor-made for a philosophy Ph.D. was a chancy thing. Certainly, there were opportunity costs to being there. Certainly, there was a chance that one might end up applying for jobs for which having a Ph.D. would be viewed as a disadvantage. But the graduate students in my philosophy program had, upon weighing the risks, decided to take the gamble.

How exactly are chemistry graduate students presumed to be different here? Maybe they are placing their bets at a table with higher payoffs, and where the game is more likely to pay off in the first place. But this is still not a situation in which one should expect that everyone is always going to win. Sometimes the house will win instead.

(Who’s the house in this metaphor? Is it the PIs who depend on cheap grad-student labor? Universities with hordes of pre-meds who need chemistry TAs and lab instructors? The public that gets a screaming deal on knowledge production when you break it down in terms of price per publishable unit? A public that includes somewhat more members with a clearer idea of how scientific knowledge is built? Specifying the identity of the house is left as an exercise for the reader.)

Maybe the relevant difference between taking a gamble on a philosophy Ph.D. and taking a gamble on a chemistry Ph.D. is that the players in the latter have, purposely or accidentally, not been given accurate information about the odds of the game.

I think it’s fair for chemistry graduate students to be angry and cynical about having been misled as far as likely prospects for employment. But given that it’s been going on for at least a couple decades (and maybe more), how the hell is it that people in Ph.D. programs haven’t already figured out the score? Is it that they expect that they will be the ones awesome enough to get those scarce jobs? Have they really not thought far enough ahead to seek information (maybe even from a disinterested source) about how plausible their life plans are before they turn up at grad school? Could it be that they have decided that they want to be chemists when they grow up without doing sensible things like reading the blogs of chemists at various stages of careers and training?

Presumably, prospective chemistry grad students might want to get ahold of the relevant facts and take account of them in their decision-making. Why this isn’t happening is somewhat mysterious to me, but for those who regard their Ph.D. training in chemistry as a means to a career end, it’s absolutely crucial — and trusting the people who stand to benefit from your labors as a graduate student to hook you up with those facts seems not to be the best strategy ever.

And, as I noted in comments on Chemjobber's post, the whole discussion suggests to me that the very best reason to pursue a Ph.D. in chemistry is because you want to learn what it is like to build new knowledge in chemistry, in an academic setting. Since being plugged into a particular kind of career (or even job) on the other end is a crap-shoot, if you don't want to learn about this knowledge-building process badly enough to put up with long hours, crummy pay, unrewarding piles of grading, and the like, then a Ph.D. program is probably not the best way to spend 5+ years of your life.

Who profits from killing Pluto?

You may recall (as I and my offspring do) the controversy about six years ago around the demotion of Pluto. There seemed to me to be reasonable arguments on both sides, and indeed, my household included pro-Pluto partisans and partisans for a new, clear definition of “planet” that might end up leaving Pluto on the ex-planet side of the line.

At the time, Neil deGrasse Tyson was probably the most recognizable advocate of the anti-Pluto position, and since then he has not been shy about reaffirming his position. I had taken this vocal (even gleeful) advocacy as just an instance of a scientist working to do effective public outreach, but recently, I’ve been made aware of reasons to believe that there may be more going on with Neil deGrasse Tyson here.

You may be familiar with the phenomenon of offshore banking, which involves depositors stashing their assets in bank accounts in countries with much lower taxes than the jurisdictions in which the depositors actually reside. Indeed, residents of the U.S. have occasionally used offshore bank accounts (and bank secrecy policies) to hide their money from the prying (and tax-assessing) eyes of the Internal Revenue Service.

Officially, those who are subject to U.S. income tax are required to declare any offshore bank accounts they might have. However, since the offshore banks themselves have generally not been required by law to report interest income on their accounts to the U.S. tax authorities, lots of account holders have kept mum about it, too.

Recently, however, the U.S. government has been more vigorous in its efforts to track down this taxable offshore income, and has put more pressure on the offshore bankers not to aid their depositors in hiding assets. International pressure seems to be pushing banks in the direction of more transparency and accountability.

What does any of this have to do with Neil deGrasse Tyson, or with Pluto?

You may recall, back when the International Astronomical Union (IAU) was formally considering the question of Pluto’s status, that Neil deGrasse Tyson was a vocal proponent of demoting Pluto from planethood. Despite his position at the Hayden Planetarium, a position in which he had rather more contact with school children and other interested non-scientists making heartfelt arguments in support of Pluto’s planethood, Neil deGrasse Tyson was utterly unmoved.

Steely in his determination to get Pluto reclassified. And forward looking. Add to that remarkably well-dressed (seriously, have you seen his vests?) for a Ph.D. astrophysicist who has spent most of his career working for museums.

The only way it makes sense is if Neil deGrasse Tyson has been stashing money someplace it can earn interest without being taxed. Given his connections, this can only mean off-world banking.

But again, what does this have to do with Pluto?

Pluto killer though he may be, Neil deGrasse Tyson is law abiding. There have so far been no legal requirements to report interest income earned in banks on other planets. But Neil deGrasse Tyson, as a forward looking kind of guy, undoubtedly recognizes that regulators are rapidly moving in the direction of requiring those subject to U.S. income tax to declare their bank accounts on other planets.

The regulators, however, seem uninterested in making any such requirements for those with assets in off-world banks that are not on planets. Which means that while Pluto is less than 1/5 the mass of Earth’s Moon, as a non-planet, it will remain a convenient place for Neil deGrasse Tyson to benefit from compound interest without increasing his tax liability.

It kind of casts his stance on Pluto in a different light, doesn’t it?

[More details in this story from the Associated Press.]

Reading “White Coat, Black Hat” and discovering that ethicists might be black hats.

During one of my trips this spring, I had the opportunity to read Carl Elliott’s book White Coat, Black Hat: Adventures on the Dark Side of Medicine. It is not always the case that reading I do for my job also works as riveting reading for air travel, but this book holds its own against any of the appealing options at the airport bookstore. (I actually pounded through the entire thing before cracking open the other book I had with me, The Girl Who Kicked the Hornet’s Nest, in case you were wondering.)

Elliott takes up a number of topics of importance in our current understanding of biomedical research and how to do it ethically. He considers the role of human subjects for hire, of ghostwriters in the production of medical papers, of physicians who act as consultants and spokespeople for pharmaceutical companies, and of salespeople for the pharmaceutical companies who interact with scientists and physicians. There are lots of important issues here, engagingly presented and followed to some provocative conclusions. But the chapter of the book that gave me the most to think about, perhaps not surprisingly, is the chapter called “The Ethicists”.

You might think, since Elliott is writing a book that points out lots of ways that biomedical research could be more ethical, that he would present a picture where ethicists rush in and solve the problems created by unwitting research scientists, well-meaning physicians, and profit-driven pharmaceutical companies. However, Elliott presents instead reasons to worry that professional ethicists will contribute to the ethical tangles of the biomedical world rather than sorting them out. Indeed, Elliott identifies what seem to be special vulnerabilities in the psyche of the professional ethicist. For example, he writes, “There is no better way to enlist bioethicists in the cause of consumer capitalism than to convince them they are working for social justice.” (139-140) Who, after all, could be against social justice? Yet, when efforts on behalf of social justice take the form of debates on television news programs about fair access to new pharmaceuticals, the big result seems to be free advertising for the companies making those pharmaceuticals. Should bioethicists be accountable for these unforeseen results? This chapter suggests that careful bioethicists ought to foresee them, and to take responsibility.

There is an irony here: professionals who see part of their job as pointing out conflicts of interest to others may be placing themselves right in the path of equally overwhelming conflicts of interest. Some of these have to do with the practical problem of how to fund their professional work. Universities these days are struggling with reduced budgets, which means they are encouraging their faculty to be more entrepreneurial — including by cultivating relationships that might lead to donations from the private sector. To the extent that bioethics is seen as relevant to pharmaceutical development, pharmaceutical companies, which have deeper pockets than do universities, are seen as attractive targets for fundraising.

As Elliott notes, bioethicists have seen a great deal of success in this endeavor. He writes,

For the last three decades bioethics has been vigorously generating new centers, new commissions, new journals, and new graduate programs, not to mention a highly politicized role in American public life. In the same way that sociologists saw their fortunes climb during the 1960s as the public eye turned towards social issues like poverty, crime, and education, bioethics started to ascend when medical care and scientific research began generating social questions of their own. As the field grows more prominent, bioethicists are considering a funding model familiar to the realm of business ethics, one that embraces partnership and collaboration with corporate sponsors as long as outright conflict of interest can be managed. …

Corporate funding presents a public relations challenge, of course. It looks unseemly for an ethicist to share in the profits of arms dealers, industrial polluters, or multinationals that exploit the developing world. Credibility is also a concern. Bioethicists teach about pharmaceutical company issues in university classrooms, write about those issues in books and articles, and comment on them in the press. Many bioethicists evaluate industry policies and practices for professional boards, government bodies, and research ethics committees. To critics, this raises legitimate questions about the field of bioethics itself. Where does the authority of ethicists come from, and why are corporations so willing to fund them? (140-141)

That comparison of bioethics to business, by the way, is the kind of thing that gets my attention; one of the spaces frequently assigned for “Business and Professional Ethics” courses at my university is the Arthur Andersen Conference Room. Perhaps this is a permanent teachable moment, but I can't help worrying that the real lesson has to do with the vulnerability of the idealistic academic partner in the academic-corporate partnership.

Where does the authority of ethicists come from? I have scrawled in the margin something about appropriate academic credentials and good arguments. But connect this first question to Elliott's second question: why are corporations so willing to fund them? Here, we need to consider the possibility that ethicists' credibility and professional status are, in a pragmatic sense, directly linked to corporations paying bioethicists for their labors. What, exactly, are those corporations paying for?

Let’s put that last question aside for a moment.

Arguably, the ethicist has some skills and training that render her a potentially useful partner for people trying to work out how to be ethical in the world. One hopes what she says would be informed by some amount of ethical education, serious scholarship, and decision-making strategies grounded in a real academic discipline.

Elliott notes that “[s]ome scholars have recoiled, emphatically rejecting the notion that their voices should count more than others’ on ethical affairs.” (142) Here, I agree if the claim is, in essence, that the interests of the bioethicists are no more important than others’. Surely the perspectives of others who are not ethicists matter, but one might reasonably expect that ethicists can add value, drawing on their experience in taking those interests, and the interests of other stakeholders, into account to make reasonable ethical decisions.

Maybe, though, those of us who do ethics for a living just tell ourselves we are engaged in a more or less objective decision-making process. Maybe the job we are doing is less like accounting and more like interpreting pictures in inkblots. As Elliott writes,

But ethical analysis does not really resemble a financial audit. If a company is cooking its books and the accountant closes his eyes to this fact in his audit, the accountant’s wrongdoing can be reliably detected and verified by outside monitors. It is not so easy with an ethics consultant. Ethicists have widely divergent views. They come from different religious standpoints, use different theoretical frameworks, and profess different political philosophies. They are also free to change their minds at any point. How do you tell the difference between an ethics consultant who has changed her mind for legitimate reasons and one who has changed her mind for money? (144)

This impression of the fundamental squishiness of the ethicist's stock in trade seems to be reinforced in a quote Elliott takes from biologist-entrepreneur Michael West: “In the field of ethics, there are no ground rules, so it’s just one ethicist’s opinion versus another ethicist’s opinion. You’re not getting whether someone is right or wrong, because it all depends on who you pick.” (144-145)

Here, it will probably not surprise you to learn that I think these claims are only true when the ethicists are doing it wrong.

What, then, would be involved in doing it right? To start with, what one should ask from an ethicist should be more than just an opinion. One should also ask for an argument to support that opinion, an argument that makes reference to important details like interested parties, potential consequences of the various options for action on the table, the obligations the party making the decision has to the stakeholders, and so forth — not to mention consideration of possible objections to this argument. It is fair, moreover, to ask the ethicist whether the recommended plan of action is compatible with more than one ethical theory — or, for example, if it only works in a world we share solely with other Kantians.

This would not make auditing the ethical books as easy as auditing the financial statements, but I think it would demonstrate something like rigor and lend itself to meaningful inspection by others. Along the same lines, I think it would be completely reasonable, in the case that an ethicist has gone on record as changing her mind, to ask for the argument that brought her from one position to the other. It would also be fair to ask, what argument or evidence might bring you back again?

Of course, all of this assumes an ethicist arguing in good faith. It’s not clear that what I’ve described as crucial features of sound ethical reasoning couldn’t be mimicked by someone who wanted to appear to be a good ethicist without going to the trouble of actually being one.

And if there’s someone offering you money — maybe a lot of money — for something that looks like good ethical reasoning, is there a chance you could turn from an ethicist arguing in good faith to one who just looks like she is, perhaps without even being aware of it herself?

Elliott pushes us to examine the dangers that may lurk when private-sector interests are willing to put up money for your ethical insight. Have they made a point of asking for your take primarily because your paper trail of prior ethical argumentation lines up really well with what they would like an ethicist to say to give them cover to do what they already want to do — not because it's ethical, necessarily, but because it's profitable or otherwise convenient? You may think your ethical stances are stable because they are well-reasoned (or maybe even right). But how can you be sure that the stability of your stance is not influenced by the size of your consultation paycheck? How can you tell that you have actually been solicited for an honest ethical assessment — one that, potentially, could be at odds with what the corporation soliciting it wants to hear? If you tell that corporation that a certain course of action would be unethical, do you have any power to prevent them from pursuing that course of action? Do you have an incentive to tell the corporation what it wants to hear, not just to pick up your consulting fee, but to keep a seat at the table where you might hope to have a chance of nudging its behavior in a more ethical direction, even if only incrementally?

None of these are easy questions to answer objectively if you’re the ethicist in the scenario.

Indeed, even if money were not part of the equation, the very fact that people at the corporations — or researchers, or physicians, or whoever it is seeking the ethicists’ expertise — are reaching out to ethicists and identifying them as experts with something worthwhile to contribute might itself make it harder for the ethicists to deliver what they think they should. As Elliott argues, the personal relationships may end up creating conflicts of interest that are at least as hard to manage as those that occur when money changes hands. These people asking for our ethical input seem like good folks, motivated at least in part by goals (like helping people with disease) that are noble. We want them to succeed. And we kind of dig that they seem interested in what we have to say. Because we end up liking them as people, we may find it hard to tell them things they don’t want to hear.

And ultimately, Elliott is arguing, barriers to delivering news that people don't want to hear — whether those barriers come from financial dependence, the professional prestige that comes when your talents are in demand, or developing personal relationships with the people you're advising — are barriers to being a credible ethicist. Bioethics becomes “the public relations division of modern medicine” (151) rather than carrying on the tradition of gadflies like Socrates. If bioethicists were being Socratic gadflies and telling truth to power, Elliott suggests, we would surely be able to find at least a few examples of bioethicists who were punished for their candor. Instead, we see the ties between ethicists and the entities they advise growing closer.

This strikes close to home for me, as I aspire to do work in ethics that can have real impacts on the practice of scientific knowledge-building, the training of new scientists, and the interaction of scientists with the rest of the world. On the one hand, getting close to the scientific community seems to help me understand the details of scientific activity and the concerns of scientists and scientific trainees. But, if I “go native” in the tribe of science, Elliott seems to be saying that I could end up dropping the ball as far as what it means to make the kind of contribution a proper ethicist should:

Bioethicists have gained recognition largely by carving out roles as trusted advisers. But embracing the role of trusted adviser means forgoing other potential roles, such as that of the critic. It means giving up on pressuring institutions from the outside, in the manner of investigative reporters. As bioethicists seek to become trusted advisers, rather than gadflies or watchdogs, it will not be surprising if they slowly come to resemble the people they are trusted to advise. And when that happens, moral compromise will be unnecessary, because there will be little left to compromise. (170)

This is strong stuff — the kind of stuff which, if taken seriously, I hope can keep me on track to offer honest advice even when it’s not what the people or institutions to whom I’m offering it want to hear. Heeding the warnings of a gadfly like Carl Elliott might just help an ethicist do what she has to do to be able to trust herself.