Some musings on Jonah Lehrer’s $20,000 “meh culpa”.

Remember some months ago when we were talking about how Jonah Lehrer was making stuff up in his “non-fiction” pop science books? This was a big enough deal that his publisher, Houghton Mifflin Harcourt, recalled print copies of Lehrer’s book Imagine, and that the media outlets for which Lehrer wrote went back through his writing for them looking for “irregularities” (like plagiarism — which one hopes is not regular, but once your trust has been abused, hopes are no longer all that durable).

Lehrer’s behavior was clearly out of bounds for anyone hoping for a shred of credibility as a journalist or non-fiction author. However, at the time, I opined in a comment:

At 31, I think Jonah Lehrer has time to redeem himself and earn back trust and stuff like that.

Well, the events of this week stand as evidence that having time to redeem oneself is not a guarantee that one will not instead dig the hole deeper.

You see, Jonah Lehrer was invited to give a talk this week at a “media learning seminar” in Miami, a talk which marked his first real public comments to a large group of journalistic peers since his fabrications and plagiarism were exposed — and a talk for which the sponsor of the conference, the Knight Foundation, paid Lehrer an honorarium of $20,000.

At the New York Times “Arts Beat” blog, Jennifer Schuessler describes Lehrer’s talk:

Mr. Lehrer … dived right in with a full-throated mea culpa. “I am the author of a book on creativity that contains several fabricated Bob Dylan quotes,” he told the crowd, which apparently could not be counted on to have followed the intense schadenfreude-laced commentary that accompanied his downfall. “I committed plagiarism on my blog, taking without credit or citation an entire paragraph from the blog of Christian Jarrett. I plagiarized from myself. I lied to a journalist named Michael Moynihan to cover up the Dylan fabrications.”

“My mistakes have caused deep pain to those I care about,” he continued. “I’m constantly remembering all the people I’ve hurt and let down.”

If the introduction had the ring of an Alcoholics Anonymous declaration, before too long Mr. Lehrer was surrendering to the higher power of scientific research, cutting back and forth between his own story and the kind of scientific terms — “confirmation bias,” “anchoring” — he helped popularize. Within minutes he had pivoted from his own “arrogance” and other character flaws to the article on flawed forensic science within the F.B.I. that he was working on when his career began unraveling, at one point likening his own corner-cutting to the overconfidence of F.B.I. scientists who fingered the wrong suspect in the 2004 Madrid bombings.

“If we try to hide our mistakes, as I did, any error can become a catastrophe,” he said, adding: “The only way to prevent big failures is a willingness to consider every little one.”

Not everyone shares the view that Lehrer’s apology constituted a full-throated mea culpa, though. At Slate, Daniel Engber shared this assessment:

Lehrer has been humbled, and yet nearly every bullet in his speech managed to fire in both directions. It was a wild display of self-negation, of humble arrogance and arrogant humility. What are these “standard operating procedures” according to which Lehrer will now do his work? He says he’ll be more scrupulous in his methods—even recording and transcribing interviews(!)—but in the same breath promises that other people will be more scrupulous of him. “I need my critics to tell me what I’ve gotten wrong,” he said, as if to blame his adoring crowds at TED for past offenses. Then he promised that all his future pieces would be fact-checked, which is certainly true but hardly indicative of his “getting better” (as he puts it, in the clammy, familiar rhetoric of self-help).

What remorse Lehrer had to share was couched in elaborate and perplexing disavowals. He tried to explain his behavior as, first of all, a hazard of working in an expert field. Like forensic scientists who misjudge fingerprints and DNA analyses, and whose failings Lehrer elaborated on in his speech, he was blind to his own shortcomings. These two categories of mistake hardly seem analogous—lab errors are sloppiness, making up quotes is willful distortion—yet somehow the story made Lehrer out to be a hapless civil servant, a well-intentioned victim of his wonky and imperfect brain.

(Bold emphasis added.)

At Forbes, Jeff Bercovici noted:

Ever the original thinker, even when he’s plagiarizing from press releases, Lehrer apologized abjectly for his actions but pointedly avoided promising to become a better person. “These flaws are a basic part of me,” he said. “They’re as fundamental to me as the other parts of me I’m not ashamed of.”

Still, Lehrer said he is aiming to return to the world of journalism, and has been spending several hours a day writing. “It’s my hope that someday my transgressions might be forgiven,” he said.

How, then, does he propose to bridge the rather large credibility gap he faces? By the methods of the technocrat, not the ethicist: “What I clearly need is a new set of rules, a stricter set of standard operating procedures,” he said. “If I’m lucky enough to write again, then whatever I write will be fully fact-checked and footnoted. Every conversation will be fully taped and transcribed.”

(Bold emphasis added.)

How do I see Jonah Lehrer’s statement? The title of this post should give you a clue. Like most bloggers, I took five years of Latin.* “Mea culpa” would describe a statement wherein the speaker (in this case, Jonah Lehrer) actually acknowledged that the blame was his for the bad thing of which he was a part. From what I can gather, Lehrer hasn’t quite done that.

Let the record reflect that the “new set of rules” and “stricter set of standard operating procedures” Lehrer described in his talk are not new, nor were they non-standard when Lehrer was falsifying and plagiarizing to build his stories. It’s not that Jonah Lehrer’s unfortunate trajectory shed light on the need for these standards, and now the journalistic community (and we consumers of journalism) can benefit from their creation. Serious journalists were already using these standards.

Jonah Lehrer, however, decided he didn’t need to use them.

This does have a taste of Leona Helmsleyesque “rules are for the little people” to it. And, I think it’s important to note that Lehrer gave the outward appearance of following the rules. He did not stand up and say, “I think these rules are unnecessary to good journalistic practice, and here’s why…” Rather, he quietly excused himself from following them.

But now, Lehrer tells us, he recognizes the importance of the rules.

That’s well and good. However, the rules he’s pointing to — taping and transcribing interviews, fact-checking claims and footnoting sources — seem designed to prevent unwitting mistakes. They could head off misremembering what interviewees said, miscommunicating whose words or insights animate part of a story, getting the facts wrong accidentally. It’s less clear that these rules can head off willful lies and efforts to mislead — which is to say, the kind of misdeeds that got Lehrer into trouble.

Moreover, that he now accepts these rules after being caught lying does not indicate that Jonah Lehrer is now especially sage about journalism. It’s remedial work.

Let’s move on from his endorsement (finally) of standards of journalistic practice to the constellation of cognitive biases and weaknesses of will that Jonah Lehrer seems to be trying to saddle with the responsibility for his lies.

Recognizing cognitive biases is a good thing. It is useful to the extent that it helps us to avoid getting fooled by them. You’ll recall that knowledge-builders, whether scientists or journalists, are supposed to do their best to avoid being fooled.

But, what Lehrer did is hard to cast in terms of ignoring strong cognitive biases. He made stuff up. He fabricated quotes. He presented other authors’ writing as his own. When confronted about his falsifications, he lied. Did his cognitive biases do all this?

What Jonah Lehrer seems to be sidestepping in his “meh culpa” is the fact that, when he had to make choices about whether to work with the actual facts or instead to make stuff up, about whether to write his own pieces (or at least to properly cite the material from others that he used) or to plagiarize, about whether to be honest about what he’d done when confronted or to lie some more, he decided to be dishonest.

If we’re to believe this was a choice his cognitive biases made for him, then his seem much more powerful (and dangerous) than the garden-variety cognitive biases most grown-up humans have.

It seems to me more plausible that Lehrer’s problem was a weakness of will. It’s not that he didn’t know what he was doing was wrong — he wasn’t fooled by his brain into believing it was OK, or else he wouldn’t have tried to conceal it. Instead, despite recognizing the wrongness of his deeds, he couldn’t muster the effort not to do them.

If Jonah Lehrer cannot recognize this — that it frequently requires conscious effort to do the right thing — it’s hard to believe he’ll be committed to putting that effort into doing the right (journalistic) thing going forward. Verily, given the trust he’s burned with his journalistic colleagues, he can expect that proving himself to be reformed will require extra effort.

But maybe what Lehrer is claiming is something different. Maybe he’s denying that he understood the right thing to do and then opted not to do it because it seemed like too much work. Maybe he’s claiming instead that he just couldn’t resist the temptation (whether of rule-breaking for its own sake or of rule-breaking as the most efficient route to secure the prestige he craved). In other words, maybe he’s saying he was literally powerless, that he could not help committing those misdeeds.

If that’s Lehrer’s claim — and if, in addition, he’s claiming that the piece of his cognitive apparatus that was so vulnerable to temptation that it seized control to make him do wrong is as integral to who Jonah Lehrer is as his cognitive biases are — the whole rehabilitation thing may be a non-starter. If this is how Lehrer understands why he did wrong, he seems to be identifying himself as a wrongdoer with a high probability of reoffending.

If he can parlay that into more five-figure speaker fees, maybe that will be a decent living for Jonah Lehrer, but it will be a big problem for the community of journalists and for the public that trusts journalists as generally reliable sources of information.

Weakness is part of Lehrer, as it is for all of us, but it is not a part he is acknowledging he could control or counteract by concerted effort, or by asking for help from others.

It’s part of him, but not in a way that makes him inclined to actually take responsibility or to acknowledge that he could have done otherwise under the circumstances.

If he couldn’t have done otherwise — and if he might not be able to when faced with similar temptation in the future — then Jonah Lehrer has no business in journalism. Until he can recognize his own agency, and the responsibility that attaches to it, the most he has to offer is one more cautionary tale.
_____
*Fact check: I have absolutely no idea how many other bloggers took five years of Latin. My evidence-free guess is that it’s not just me.

Academic tone-trolling: How does interactivity impact online science communication?

Later this week at ScienceOnline 2013, Emily Willingham and I are co-moderating a session called Dialogue or fight? (Un)moderated science communication online. Here’s the description:

Cultivating a space where commentators can vigorously disagree with a writer–whether on a blog, Twitter, G+, or Facebook, *and* remain committed to being in a real dialogue is pretty challenging. It’s fantastic when these exchanges work and become constructive in that space. On the other hand, there are times when it goes off the rails despite your efforts. What drives the difference? How can you identify someone who is commenting simply to cause trouble versus a commenter there to engage in and add value to a genuine debate? What influence does this capacity for *anyone* to engage with one another via the great leveler that is social media have on social media itself and the tenor and direction of scientific communication?

Getting ready for this session was near the top of my mind when I read a perspective piece by Dominique Brossard and Dietram A. Scheufele in the January 4, 2013 issue of Science. [1] In the article, Brossard and Scheufele raise concerns about the effects of moving the communication of science information to the public from dead-tree newspapers and magazines into online, interactive spaces.

Here’s the paragraph that struck me as especially relevant to the issues Emily and I had been discussing for our session at ScienceOnline 2013:

A recent conference presentation examined the effects of these unintended influences of Web 2.0 environments empirically by manipulating only the tone of the comments (civil or uncivil) that followed an online science news story in a national survey experiment. All participants were exposed to the same, balanced news item (covering nanotechnology as an emerging technology) and to a set of comments following the story that were consistent in terms of content but differed in tone. Disturbingly, readers’ interpretations of potential risks associated with the technology described in the news article differed significantly depending only on the tone of the manipulated reader comments posted with the story. Exposure to uncivil comments (which included name calling and other non-content-specific expressions of incivility) polarized the views among proponents and opponents of the technology with respect to its potential risks. In other words, just the tone of the comments following balanced science stories in Web 2.0 environments can significantly alter how audiences think about the technology itself. (41)

There’s lots to talk about here.

Does this research finding mean that, when you’re trying to communicate scientific information online, enabling comments is a bad idea?

Lots of us are betting that it’s not. Rather, we’re optimistic that people will be more engaged with the information when they have a chance to engage in a conversation about it (e.g., by asking questions and getting answers).

However, the research finding described in the Science piece suggests that there may be better and worse ways of managing commenting on your posts if your goal is to help your readers understand a particular piece of science.

This might involve having a comment policy that puts some things clearly out-of-bounds, like name-calling or other kinds of incivility, and then consistently enforcing this policy.

It should be noted — and has been — that some kinds of incivility wear the trappings of polite language, which means that it’s not enough to set up automatic screens that weed out comments containing particular specified naughty words. Effective promotion of civility rather than incivility might well involve having the author of the online piece and/or designated moderators as active participants in the ongoing conversation, calling out bad commenter behavior as well as misinformation, answering questions to make sure the audience really understands the information being presented, and being attentive to how the unfolding discussion is likely to be welcoming — or forbidding — to the audience one is hoping to reach.
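To make that limitation concrete, here is a minimal, purely illustrative sketch in Python of the kind of naughty-word screen described above. It is not drawn from the study or from any particular commenting platform, and the blocklist and sample comments are invented; the point is simply that politely phrased incivility passes this sort of filter untouched.

```python
# Hypothetical illustration: a naive keyword screen for comments.
# It catches name-calling that uses listed words, but politely
# phrased incivility passes untouched -- which is why active human
# moderation still matters.

BLOCKLIST = {"idiot", "moron", "shill"}  # invented example terms

def passes_keyword_screen(comment: str) -> bool:
    """Return True if the comment contains none of the blocked words."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return BLOCKLIST.isdisjoint(words)

print(passes_keyword_screen("Only an idiot would believe this study."))
# False: overt name-calling is caught.
print(passes_keyword_screen("I wouldn't expect rigor from someone with your training."))
# True: civil-sounding contempt sails right through.
```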

There are a bunch of details that are not clear from this brief paragraph in the perspective piece. Were the readers whose opinions were swayed by the tone of the comments reacting to a conversation that had already happened or were they watching as it happened? (My guess is the former, since the latter would be hard to orchestrate and coordinate with a survey.) Were they looking at a series of comments that dropped them in the middle of a conversation that might plausibly continue, or were they looking at a conversation that had reached its conclusion? Did the manipulated reader comments include any comments that appeared to be from the author of the science article, or were the research subjects responding to a conversation from which the author appeared to be absent? Potentially, these details could make a difference to the results — a conversation could impact someone reading it differently depending on whether it seems to be gearing up or winding down, just as participation from the author could carry a different kind of weight than the views of random people on the internet. I’m hopeful that future research in this area will explore just what kind of difference they might make.

I’m also guessing that the experimental subjects reading the science article and the manipulated comments that followed could not themselves participate in the discussion by posting a comment. I wonder how much being stuck on the sidelines rather than involved in the dialogue affected their views. We should remember, though, that most indicators suggest that readers of online articles — even on blogs — who actually post comments are much smaller in number than the readers who “lurk” without commenting. This means that commenters are generally a very small percentage of the readers one is trying to reach, and perhaps not very representative of those readers overall.

At this point, the take-home seems to be that social scientists haven’t discovered all the factors that matter in how an audience for online science is going to receive and respond to what’s being offered — which means that those of us delivering science-y content online should assume we haven’t discovered all those factors, either. It might be useful, though, if we are reflective about our interactions with our audiences and if we keep track of the circumstances around communicative efforts that seem to work and those that seem to fail. Cataloguing these anecdotes could surely provide fodder for some systematic empirical study, and I’m guessing it could help us think through strategies for really listening to the audiences we hope are listening to us.

* * * * *
As might be expected, Bora has a great deal to say about the implications of this particular piece of research and about commenting, comment moderation, and Web 2.0 conversations more generally. Grab a mug of coffee, settle in, and read it.

——
[1] Dominique Brossard and Dietram A. Scheufele, “Science, New Media, and the Public.” Science, 4 January 2013: Vol. 339, pp. 40-41.
DOI: 10.1126/science.1160364

Reasonably honest impressions of #overlyhonestmethods.

I suspect at least some of you who are regular Twitter users have been following the #overlyhonestmethods hashtag, with which scientists have been sharing details of their methodology that are maybe not explicitly spelled out in their published “Materials and Methods” sections. And, as with many other hashtag genres, the tweets in #overlyhonestmethods are frequently hilarious.

I was interviewed last week about #overlyhonestmethods for the Public Radio International program Living On Earth, and the length of my commentary was more or less Twitter-scaled. This means some of the nuance (at least in my head), about questions like whether I thought the tweets were an overshare that could make science look bad, didn’t quite make it to the radio. Also, in response to the Living On Earth segment, one of the people with whom I regularly discuss the philosophy of science in the three-dimensional world shared some concerns about this hashtag in the hopes I’d say a bit more:

I am concerned about the brevity of the comments which may influence what one expresses.  Second there is an ego component; some may try to outdo others’ funny stories, and may stretch things in order to gain a competitive advantage.

So, I’m going to say a bit more.

Should we worry that #overlyhonestmethods tweets share information that will make scientific practice look bad to (certain segments of) the public?

I don’t think so. I suppose this may depend on what exactly the public expects of scientists.

The people doing science are human. They are likely to be working with all kinds of constraints — how close their equipment is to the limits of its capabilities (and to making scary noises), how frequently lab personnel can actually make it into the lab to tend to cell cultures, how precisely (or not) pumping rates can be controlled, how promptly (or not) the folks receiving packages can get perishable deliveries to the researchers. (Notice that at least some of these limitations are connected to limited budgets for research … which maybe means that if the public finds them unacceptable, they should lobby their Congresscritters for increased research funding.) There are also constraints that come from the limits of the human animal: with a finite attention span, without a built-in chronometer or calibrated eyeballs, and with a need for sleep and possibly even recreation every so often (despite what some might have you think).

Maybe I’m wrong, but my guess is that it’s a good thing to have a public that is aware of these limitations imposed by the available equipment, reagents, and non-robot workforce.

Actually, I’m willing to bet that some of these limitations, and an awareness of them, are also really handy in scientific knowledge-building. They are departures from ideality that may help scientists nail down which variables in the system really matter in producing and controlling the phenomenon being studied. Reproducibility might be easy for a robot that can do every step of the experiment precisely every single time, but we really learn what’s going on when we drift from that. Does it matter if I use reagents from a different supplier? Can I leave the cultures to incubate a day longer? Can I successfully run the reaction in a lab that’s 10 °C warmer or 10 °C colder? Working out the tolerances helps turn an experimental protocol from a magic trick into a system where we have some robust understanding of what variables matter and of how they’re hooked to each other.

Does the 140-character limit mean #overlyhonestmethods tweets leave out important information, or that scientists will only use the hashtag to be candid about some of their methods while leaving others unexplored?

The need for brevity surely means that methods for which candor requires a great deal of context and/or explanation won’t be as well-represented as methods where one can be candid and pithy simultaneously. These tweeted glimpses into how the science gets done are more likely to be one-liners than shaggy-dog stories.

However, it’s hard to imagine that folks who really wanted to share wouldn’t use a series of tweets if they wanted to play along, or maybe even write a blog post about it and use the hashtag to tweet a link to that post.

What if #overlyhonestmethods becomes a game of one-upmanship and puffery, in which researchers sacrifice honesty for laughs?

Maybe there’s some of this happening, and if the point of the hashtag is for researchers to entertain each other, maybe that’s not a problem. However, if other members of one’s scientific community were actually looking to those tweets to fill in some of the important details of methodology that are elided in the terse “Materials and Methods” section of a published research paper, I hope the tweeters would, when queried, provide clear and candid information on how they actually conducted their experiments. Correcting or retracting a tweet should be less of an ego blow than correcting or retracting a published paper, I hope (and indeed, as hard as it might be to correct or retract published claims, good scientists do it when they need to).

The whole #overlyhonestmethods hashtag raises the perennial question of why so much is elided in published “Materials and Methods” sections. Blame is usually put on limitations of space in the journals, but it’s also reasonable to acknowledge that sometimes details-that-turn-out-to-be-important are left out because the researchers don’t fully recognize their importance. Other times, researchers may have empirical grounds for thinking these details are important, but they don’t yet have a satisfying story to tell about why they should be.

By the way, I think it would be an excellent thing if, for research that is already published, #overlyhonestmethods included the relevant DOI. These tweets would be supplementary information researchers could really use.

What if researchers use #overlyhonestmethods to disclose ethically problematic methods?

Given that Twitter is a social medium, I expect other scientists in the community watching the hashtag would challenge those methods or chime in to explain just what makes them ethically problematic. They might also suggest less ethically problematic ways to achieve the same research goals.

The researchers on Twitter could, in other words, use the social medium to exert social pressure in order to make sure other members of their scientific community understand and live up to the norms of that community.

That outcome would strike me as a very good one.

* * * * *

In addition to the ever-expanding collection of tweets about methods, #overlyhonestmethods also has links to some thoughtful, smart, and funny commentary on the hashtag and the conversations around it. Check it out!

The danger of pointing out bad behavior: retribution (and the community’s role in preventing it).

There has been a lot of discussion of Dario Maestripieri’s disappointment at the unattractiveness of his female colleagues in the neuroscience community. Indeed, it’s notable how much of this discussion has been in public channels, not just private emails or conversations conducted with sound waves which then dissipate into the aether. No doubt, this is related to Maestripieri’s decision to share his hot-or-not assessment of the women in his profession in a semi-public space where it could achieve more permanence — and amplification — than it would have as an utterance at the hotel bar.

His behavior became something that any member of his scientific community with an internet connection (and a whole lot of people outside his scientific community) could inspect. The impacts of an actual, rather than hypothetical, piece of behavior could be brought into the conversation about the climate of professional and learning communities, especially for the members of these communities who are women.

It’s worth pointing out that there is nothing especially surprising about such sexist behavior* within these communities. The people in the communities who have been paying attention have seen it before (and besides have good empirical grounds for expecting that gender biases may be a problem). But many sexist behaviors go unreported and unremarked, sometimes because of the very real fear of retribution.

What kind of retribution could there be for pointing out a piece of behavior that has sexist effects, or arguing that it is an inappropriate way for a member of the professional community to behave?

Let’s say you are an early career scientist applying for a faculty post. As it happens, Dario Maestripieri’s department, the University of Chicago Department of Comparative Human Development, currently has an open search for a tenure-track assistant professor. There is a non-zero chance that Dario Maestripieri is a faculty member on that search committee, or that he has the ear of a colleague who is.

It is not a tremendous stretch to hypothesize that Dario Maestripieri may not be thrilled at the public criticism he’s gotten in response to his Facebook post (including some quite close to home). Possibly he’s looking through the throngs of his Facebook friends and trying to guess which of them is the one who took the screenshot of his ill-advised post and shared it more widely. Or looking through his Facebook friends’ Facebook friends. Or considering which early career neuroscientists might be in-real-life friends or associates with his Facebook friends or their Facebook friends.

Now suppose you’re applying for that faculty position in his department and you happen to be one of his Facebook friends,** or one of their Facebook friends, or one of the in-real-life friends of either of those.

Of course, shooting down an applicant for a faculty position for the explicit reason that you think he or she may have cast unwanted attention on your behavior towards your professional community would be a problem. But there are probably enough applicants for the position, enough variation in the details of their CVs, and enough subjective judgment on the part of the members of the search committee in evaluating all those materials that it would be possible to cut all applicants who are Dario Maestripieri’s Facebook friends (or their Facebook friends, or in-real-life friends of either of those) from consideration while providing some other plausible reason for their elimination. Indeed, the circle could be broadened to eliminate candidates with letters of recommendation from Dario Maestripieri’s Facebook friends (or their Facebook friends, or in-real-life friends of either of those), candidates who have coauthored papers with Dario Maestripieri’s Facebook friends (or their Facebook friends, or in-real-life friends of either of those), etc.

And, since candidates who don’t get the job generally aren’t told why they were found wanting — only that some other candidate was judged to be better — these other plausible reasons for shooting down a candidate would only even matter in the discussions of the search committee.

In other words, real retaliation (rejection from consideration for a faculty job) could fall on people who are merely suspected of sharing information that led to Dario Maestripieri becoming the focus of a public discussion of sexist behavior — not just on the people who have publicly spoken about his behavior. And, the retaliation would be practically impossible to prove.

If you don’t think this kind of possibility has a chilling effect on the willingness of members of a professional community to speak up when they see a relatively powerful colleague behave in ways they think are harmful, you just don’t understand power dynamics.

And even if Dario Maestripieri has no part at all in his department’s ongoing faculty search, there are other interactions within his professional community in which his suspicions about who might have exposed his behavior could come into play. Senior scientists are routinely asked to referee papers submitted to scientific journals and to serve on panels and study sections that rank applications for grants. In some of these circumstances, the identities of the scientists one is judging (e.g., for grants) are known to the scientists making the evaluations. In others, they are masked, but the scientists making the evaluations have hunches about whose work they are evaluating. If those hunches are mingled with hunches about who could have shared evidence of behavior that is now making the evaluator’s life difficult, it’s hard to imagine the grant applicant or the manuscript author getting a completely fair shake.

Let’s pause here to note that the attitude Dario Maestripieri’s Facebook posting reveals, that it’s appropriate to evaluate women in the field on their physical beauty rather than their scientific achievements, could itself be a source of bias as he does things that are part of a normal professional life, like serving on search committees, reviewing journal submissions and grant applications, evaluating students, and so forth. A bias like this could manifest itself in a preference for hiring job candidates one finds aesthetically pleasing. (Sure, academic job application packets usually don’t include a headshot, but even senior scientists have probably heard of Google Image search.) Or it could manifest itself in a preference against hiring more women (since too high a concentration of female colleagues might be perceived as increasing the likelihood that one would be taken to task for freely expressing one’s aesthetic preferences about women in the field). Again, it would be extraordinarily hard to prove the operation of such a bias in any particular case — but that doesn’t rule out the possibility that it is having an effect in activities where members of the professional community are supposed to be as objective as possible.

Objectivity, as we’ve noted before, is hard.

We should remember, though, that faculty searches are conducted by committees, rather than by a single individual with the power to make all the decisions. And, the University of Chicago Department of Comparative Human Development (as well as the University of Chicago more generally) may recognize that it is likely to be getting more public scrutiny as a result of the public scrutiny Dario Maestripieri has been getting.

Among other things, this means that the department and the university have a real interest in conducting a squeaky-clean search that avoids even the appearance of retaliation. In any search, members of the search committee have a responsibility to identify, disclose, and manage their own biases. In this search, discharging that responsibility is even more vital. In any search, members of the hiring department have a responsibility to discuss their shared needs and interests, and how these should inform the selection of the new faculty member. In this search, that discussion of needs and interests must include a discussion of the climate within the department and the larger scientific community — what it is now, and what members of the department think it should be.

In any search, members of the hiring department have an interest in sharing their opinions on who the best candidate might be, and in having a dialogue around the disagreements. In this search, if it turns out one of the disagreements about a candidate comes down to “I suspect he may have been involved in exposing my Facebook post and making me feel bad,” well, arguably there’s a responsibility to have a discussion about that.

Ask academics what it’s like to hire a colleague and it’s not uncommon to hear them describe the experience as akin to entering a marriage. You’re looking for someone with whom you might spend the next 30 years, someone who will grow with you, who will become an integral part of your department and its culture, even to the point of helping that departmental culture grow and change. This is a good reason not to choose the new hire based on the most superficial assessment of what each candidate might bring to the relationship — and to recognize that helping one faculty member avoid discomfort might not be the most important thing.

Indeed, Dario Maestripieri’s colleagues may have all kinds of reasons to engage him in uncomfortable discussions about his behavior that have nothing to do with conducting a squeaky-clean faculty search. Their reputations are intertwined, and leaving things alone rather than challenging Dario Maestripieri’s behavior may impact their own ability to attract graduate students or maintain the respect of undergraduates. These are things that matter to academic scientists — which means that Dario Maestripieri’s colleagues have an interest in pushing back for their own good and the good of the community.

The pushback, if it happens, is likely to be just as invisible publicly as any retaliation against job candidates for possibly sharing the screenshot of Dario Maestripieri’s Facebook posting. If positive effects are visible, it might make it seem less dangerous for members of the professional community to speak up about bad behavior when they see it. But if the outward appearance is that nothing has changed for Dario Maestripieri and his department, expect that there will be plenty of bad behavior that is not discussed in public because the career costs of doing so are just too high.

______
* This is not at all an issue about whether Dario Maestripieri is a sexist. This is an issue about the effects of the behavior, which have a disproportionate negative impact on women in the community. I do not know, or care, what is in the heart of the person who displays these behaviors, and it is not at all relevant to a discussion of how the behaviors affect the community.

** Given the number of his Facebook friends and their range of ages, career stages, etc., this doesn’t strike me as improbable. (At last check, I have 11 Facebook friends in common with Dario Maestripieri.)

Reading the writing on the (Facebook) wall: a community responds to Dario Maestripieri.

Imagine an academic scientist goes to a big professional meeting in his field. For whatever reason, he then decides to share the following “impression” of that meeting with his Facebook friends:

My impression of the Conference of the Society for Neuroscience in New Orleans. There are thousands of people at the conference and an unusually high concentration of unattractive women. The super model types are completely absent. What is going on? Are unattractive women particularly attracted to neuroscience? Are beautiful women particularly uninterested in the brain? No offense to anyone..

Maybe this is a lapse in judgment, but it’s no big thing, right?

I would venture, from the selection of links collected below discussing Dario Maestripieri and his recent social media foible, that this is very much A Thing. Read on to get a sense of how the discussion is unfolding within the scientific community and the higher education community:

Drugmonkey, SfN 2012: Professors behaving badly:

There is a very simple response here. Don’t do this. It’s sexist, juvenile, offensive and stupid. For a senior scientist it is yet another contribution to the othering of women in science. In his lab, in his subfield, in his University and in his academic societies. We should not tolerate this crap.

Professor Maestripieri needs to apologize for this in a very public way and take responsibility for his actions. You know, not with a nonpology of “I’m sorry you were offended” but with an “I shouldn’t have done that” type of response.

Me, at Adventures in Ethics and Science, The point of calling out bad behavior:

It’s almost like people have something invested in denying the existence of gender bias among scientists, the phenomenon of a chilly climate in scientific professions, or even the possibility that Dario Maestripieri’s Facebook post was maybe not the first observable piece of sexism a working scientist put out there for the world to see.

The thing is, that denial is also the denial of the actual lived experience of a hell of a lot of women in science

Isis the Scientist, at On Becoming a Domestic and Laboratory Goddess, What We Learn When Professorly d00ds Take to Facebook:

Dr. Maestripieri’s comments will certainly come as no great shock to the women who read them.  That’s because those of us who have been around the conference scene for a while know that this is pretty par for the course.  There’s not just sekrit, hidden sexism in academia.  A lot of it is pretty overt.  And many of us know about the pockets of perv-fest that can occur at scientific meetings.  We know which events to generally avoid.  Many of us know who to not have cocktails with or be alone with, who the ass grabbers are, and we share our lists with other female colleagues.  We know to look out for the more junior women scientists who travel with us.  I am in no way shocked that Dr. Maestripieri would be so brazen as to post his thoughts on Facebook because I know that there are some who wouldn’t hesistate to say the same sorts of things aloud. …

The real question is whether the ability to evaluate Dr. Maestripieri’s asshattery in all of its screenshot-captured glory will actually actually change hearts and minds.

Erin Gloria Ryan at Jezebel, University of Chicago Professor Very Disappointed that Female Neuroscientists Aren’t Sexier:

Professor Maestripieri is a multiple-award winning academic working at the University of Chicago, which basically means he is Nerd Royalty. And, judging by his impressive resume, which includes a Ph.D in Psychobiology, the 2000 American Psychological Association Distinguished Scientific Award for Early Career Contribution to Psychology, and several committees at the U of C, he’s well aware of how hard someone in his position has had to work in order to rise to the top of an extremely competitive and demanding field. So it’s confusing to me that he would fail to grasp the fact that women in his field had to perform similar work and exhibit similar levels of dedication that he did.

Women: also people! Just like men, but with different genitals!

Cory Doctorow at BoingBoing, Why casual sexism in science matters:

I’ve got a daughter who, at four and a half, wants to be a scientist. Every time she says this, it makes me swell up with so much pride, I almost bust. If she grows up to be a scientist, I want her to be judged on the reproducibility of her results, the elegance of her experimental design, and the insight in her hypotheses, not on her ability to live up to someone’s douchey standard of “super model” looks.

(Also, do check out the conversation in the comments; it’s very smart and very funny.)

Scott Jaschik at Inside Higher Education, (Mis)Judging Female Scientists:

Pity the attendees at last week’s annual meeting of the Society for Neuroscience who thought they needed to focus on their papers and the research breakthroughs being discussed. It turns out they were also being judged — at least by one prominent scientist — on their looks. At least the female attendees were. …

Maestripieri did not respond to e-mail messages or phone calls over the past two days. A spokesman for the University of Chicago said that he had decided not to comment.

Pat Campbell at Fairer Science, No offense to anyone:

I’m glad the story hit Inside Higher Ed; I find it really telling that only women are quoted … Inside Higher Ed makes this a woman’s problem not a science problem and that is a much more important issue than Dario Maestripieri’s stupid comments.

Beryl Benderly at the Science Careers Blog, A Facebook Furor:

There’s another unpleasant implication embedded in Maestripieri’s post. He apparently assumed that some of his Facebook readers would find his observations interesting or amusing. This indicates that, in at least some circles, women scientists are still not evaluated on their work but rather on qualities irrelevant to their science. …

[T]he point of the story is not one faculty member’s egregious slip.  It is the apparently more widespread attitudes that this slip reveals

Dana Smith at Brain Study, More sexism in science:

However, others still think his behavior was acceptable, writing it off as a joke and telling people to not take it so seriously. This is particularly problematic given the underlying gender bias we know to still exist in science. If we accept overt and covert discrimination against women in science we all lose out, not just women who are dissuaded from the field because of it, but also everyone who might have benefited from their future work.

Minerva Cheevy at Research Centered (Chronicle of Higher Education Blog Network), Where’s the use of looking nice?:

There’s just no winning for women in academia – if you’re unattractive, then you’re a bad female. But if you’re attractive, you’re a bad academic.

The Maroon Editorial Board at The Chicago Maroon, Changing the conversation:

[T]his incident offers the University community an opportunity to reexamine our culture of “self-deprecation”—especially in relation to the physical attractiveness of students—and how that culture can condone assumptions which are just as baseless and offensive. …

Associating the depth of intellectual interests with a perceived lack of physical beauty fosters a culture of permissiveness towards derogatory comments. Negative remarks about peers’ appearances make blanket statements about their social lives and demeanors more acceptable. Though recently the popular sentiment among students is that the U of C gets more attractive the further away it gets from its last Uncommon App class, such comments stem from the same type of confused associations—that “normal” is “attractive” and that “weird” is not. It’s about time that we distance ourselves from these kinds of normative assumptions. While not as outrageous as Maestripieri’s comments, the belief that intelligence should be related to any other trait—be it attractiveness, normalcy, or social skills—is just as unproductive and illogical.

It’s quite possible that I’ve missed other good discussions of this situation and its broader implications. If so, please feel free to share links to them in the comments.

Dueling narratives: what’s the job market like for scientists and is a Ph.D. worth it?

At the very end of August, Slate posted an essay by Daniel Lametti taking up, yet again, what the value of a science Ph.D. is in a world where the pool of careers for science Ph.D.s in academia and industry is (maybe) shrinking. Lametti, who is finishing up a Ph.D. in neuroscience, expresses optimism that the outlook is not so bleak, reading the tea leaves of some of the available survey data to conclude that unemployment is not much of a problem for science Ph.D.s. Moreover, he points to the rewards of the learning that happens in a Ph.D. program as something that might be valued in its own right rather than as a mere instrument to make a living later. (This latter argument will no doubt sound familiar.)

Of course, Chemjobber had to rain on the parade of this youthful optimism. (In the blogging biz, we call that “due diligence”.) Chemjobber critiques Lametti’s reading of the survey data (and points out some important limitations with those data), questions his assertion that a science Ph.D. is a sterling credential to get you into all manner of non-laboratory jobs, reiterates that the opportunity costs of spending years in a Ph.D. program are non-negligible, and reminds us that unemployed Ph.D. scientists do exist.

Beryl Benderly mounts similar challenges to Lametti’s take on the job market at the Science Careers blog.

You’ve seen this disagreement before. And, I reckon, you’re likely to see it again.

But this time, I feel like I’m starting to notice what may be driving these dueling narratives about how things are for science Ph.D.s. It’s not just an inability to pin down the facts about the job markets, or the employment trajectories of those science Ph.D.s. In the end, it’s not even a deep disagreement about what may be valuable in economic or non-economic ways about the training one receives in a science Ph.D. program.

Where one narrative focuses on the overall trends within STEM fields, the other focuses on individual experiences. And, it strikes me that part of what drives the dueling narratives is what feels like a tension between voicing an individual view it may be helpful to adopt for one’s own well-being and acknowledging the existence of systemic forces that tend to create unhelpful outcomes.

Of course, part of the problem in these discussions may be that we humans generally have a hard time reconciling overall trends with individual experiences. Even if it were true that the employment outlook was very, very good for people in your field with Ph.D.s, if you have one of those Ph.D.s and you can’t find a job with it, the employment situation is not good for you. Similarly, if you’re a person who can find happiness (or at least satisfaction) in pretty much whatever situation you’re thrown into, a generally grim job market in your field may not bug you very much.

But I think the narratives keep missing each other because of something other than not being able to reconcile the pooled labor data with our own anecdata. I think, at their core, the two narratives are trying to do different things.

* * *

I’ve written before about some of what I found valuable in my chemistry Ph.D. program, including the opportunity to learn how scientific knowledge is made by actually making some. That’s not to say that the experience is without its challenges, and it’s hard for me to imagine taking on those challenges without a burning curiosity, a drive to go deeper than sitting in a classroom and learning the science that others have built.

It can feel a bit like a calling — like what I imagine people learning how to be artists or musicians must feel. And, if you come to this calling in a time where you know the job prospects at the other end are anything but certain, you pretty much have to do the gut-check that I imagine artists and musicians do, too:

Am I brave enough to try this, even though I know there’s a non-negligible chance that I won’t be able to make a career out of it? Is it worth it to devote these years of toil and study, with long hours and low salary, to immersing myself in this world, even knowing I might not get to stay in it?

A couple quick caveats here: I suspect it’s much easier to play music or make art “on the side” after you get home from the job that pays for your food but doesn’t feed your soul than it is to do science on the side. (Maybe this points to the need for community science workspaces?) And, it’s by no means clear that those embarking on Ph.D. training in a scientific field are generally presented with realistic expectations about the job market for Ph.D.s in their field.

Despite the fact that my undergraduate professors talked up a supposed shortage of Ph.D. chemists (one that was not reflected in the labor statistics less than a year later), I somehow came to my own Ph.D. training with the attitude that it was an open question whether I’d be able to get a job as a chemist in academia or industry or a national lab. I knew I was going to leave my graduate program with a Ph.D., and I knew I was going to work.

The rent needed to be paid, and I was well acclimated to a diet that alternated between lentils and ramen noodles, so I didn’t see myself holding out for a dream job with a really high salary and luxe benefits. A career was something I wanted, but the more pressing need was a paycheck.

Verily, by the time I completed my chemistry Ph.D., this was a very pressing need. It’s true that students in a chemistry Ph.D. program are “paid to go to school,” but we weren’t paid much. I kept my head, and credit card balance, mostly above water by being a cyclist rather than a driver, saving money for registration, insurance, parking permits, and gas that my car-owning classmates had to pay. But it took two veterinary emergencies, one knee surgery, and ultimately the binding and microfilming fee I had to pay when I submitted the final version of my dissertation to completely wipe out my savings.

I was ready to teach remedial arithmetic at a local business college for $12 an hour (and significantly less than 40 hours a week) if it came to that. Ph.D. chemist or not, I needed to pay the bills.

Ultimately, I did line up a postdoctoral position, though I didn’t end up taking it because I had my epiphany about needing to become a philosopher. When I was hunting for postdocs, though, I knew that there was still no guarantee of a tenure track job, or a gig at a national lab, or a job in industry at the end of the postdoc. I knew plenty of postdocs who were still struggling to find a permanent job. Even before my philosophy epiphany, I was thinking through other jobs I was probably qualified to do that I wouldn’t hate — because I kind of assumed it would be hard, and that the economy wouldn’t feel like it owed me anything, and that I might be lucky, but I also might not be. Seeing lots of really good people have really bad luck on the job market can do that to a person.

My individual take on the situation had everything to do with keeping me from losing it. It’s healthy to be able to recognize that bad luck is not the same as the universe (or even your chosen professional community) rendering the judgment that you suck. It’s healthy to be able to weather the bad luck rather than be crushed by it.

But, it’s probably also healthy to recognize when there may be systemic forces making it a lot harder than it needs to be to join a professional community for the long haul.

* * *

Indeed, the discussion of the community-level issues in scientific fields is frequently much less optimistic than the individual-level pep-talks people give themselves or each other.

What can you say about a profession that asks people who want to join it to sink as much as a decade into graduate school, and maybe another decade into postdoctoral positions (jobs defined as not permanent) just to meet the training prerequisite for desirable permanent jobs that may not exist in sufficient numbers to accommodate all the people who sacrificed maybe two decades at relatively low salaries for their level of education, who likely had to uproot and change their geographical location at least once, and who succeeded at the research tasks they were asked to take on during that training? And what can you say about that profession when the people asked to embark on this gamble aren’t given anything like a realistic estimate of their likelihood of success?

Much of what people do say frames this as a problem of supply and demand. There are just too many qualified candidates for the available positions, at least from the point of view of the candidates. From the point of view of a hiring department or corporation, the excess of available workers may seem like less of a problem, driving wages downward and making it easier to sell job candidates on positions in “geographically unattractive” locations.

Things might get better for the job seeker with a Ph.D. if the supply of science Ph.D.s were adjusted downward, but this would disrupt another labor pool, graduate students working to generate data for PIs in their graduate labs. Given the “productivity” expectations on those PIs, imposed by institutions and granting agencies, reducing student throughput in Ph.D. programs is likely to make things harder for those lucky enough to have secured tenure track positions in the first place.

The narrative about the community-level issues takes on a different tone depending on who’s telling it, and with which end of the power gradient they identify. Do Ph.D. programs depend on presenting a misleading picture of job prospects and quality of life for Ph.D. holders to create the big pools of student labor on which they depend? Do PIs and administrators running training programs encourage the (mistaken) belief that the academic job market is a perfect meritocracy, and that each new Ph.D.’s failure will be seen as hers alone? Are graduate students themselves to blame for not considering the employment data before embarking on their Ph.D. programs? Are they being spoiled brats when they should recognize that their unemployment numbers are much, much lower than for the population as a whole, that most employed people have nothing like tenure to protect their jobs, and indeed that most people don’t have jobs that have anything to do with their passions?

So the wrangling continues over whether things are generally good or generally bad for Ph.D. scientists, over whether the right basis for evaluating this is the life Ph.D. programs promise when they recruit students (which maybe they are only promising to the very best — or the very lucky) or the life most people (including large numbers of people who never finished college, or high school) can expect, over whether this is a problem that ought to be addressed or simply how things are.

* * *

The narratives here feel like they’re in conflict because they’re meant to do different things.

The individual-level narrative is intended to buoy the spirits of the student facing adversity, to find some glimmers of victory that can’t be taken away even by a grim employment market. It treats the background conditions as fixed, or at least as something the individual cannot change; what she can control is her reaction to them.

It’s pretty much the Iliad, but with lab coats.

The community-level narrative instead strives for a more accurate accounting of what all the individual trajectories add up to, focusing not on who has experienced personal growth but on who is employed. Here too, there is a striking assumption that The Way Things Are is a stable feature of the system, not something individual action could change — or that individual members of the community should feel any responsibility for changing.

And this is where I think there’s a need for another narrative, one with the potential to move us beyond the disagreement and disgruntlement we see each time the other two collide.

Professional communities, after all, are made up of individuals. People, not the economy, make hiring decisions. Members of professional communities make decisions about how they’re going to treat each other, and in particular about how they will treat the most vulnerable members of their community.

Graduate students are not receiving a mere service or commodity from their Ph.D. programs (“Would you like to supersize that scientific education?”). They are entering a relationship resembling an apprenticeship with the members of the professional community they’re trying to join. Arguably, this relationship means that the professional community has some responsibility for the ongoing well-being of those new Ph.D.s.

To be clear, I don’t think this is a responsibility to infantilize new Ph.D.s, to cover them in bubble-wrap, or to create for them a sparkly artificial economy full of rainbows and unicorns. But the professional community probably does have a duty to provide help when it can.

Maybe this help would come in the form of showing compassion, rather than claiming that the people who deserve to be scientists will survive the rigors of the job market and that those who don’t weren’t meant to be in science. Maybe it would come by examining one’s own involvement in a system that defines success too narrowly, or that treats Ph.D. students as a consumable resource, or that fails to help those students cultivate a broad enough set of skills to ensure that they can find some gainful employment. Maybe it would come from professional communities finding ways to include as real members people they have trained but who have not been able to find employment in that profession.

Individuals make the communities. The aggregate of the decisions those communities make creates the economic conditions and the quality-of-life issues. Treating current conditions — including current ways of recruiting students or describing the careers and lives they ought to expect at the other end of their training — as fixed for all time is a way of ignoring how individuals and institutions are responsible for those conditions. And it doesn’t do anything to help change them.

It’s useful to have discussions of how to navigate the waters of The Way Things Are. It’s also useful to try to get accurate data about the topology of those waters. But these discussions shouldn’t distract us from serious discussions of The Way Things Could Be — and of how scientific communities can get there from here.

Safety in academic chemistry labs (with some thoughts on incentives).

Earlier this month, Chemjobber and I had a conversation that became a podcast. We covered lots of territory, from the Sheri Sangji case, to the different perspectives on lab safety in industry and academia, to broader questions about how to make attention to safety part of the culture of chemistry. Below is a transcript of a piece of that conversation (from about 07:45 to 19:25). I think there are some relevant connections here to my earlier post about strategies for delivering ethics training — a post which Jyllian Kemsley notes may have some lessons for safety-training, too.

Chemjobber: I think, academic-chemistry-wise, we might do better at looking out after premeds than we do at looking out after your typical first year graduate student in the lab.

Janet: Yeah, and I wonder why that is, actually, given the excess of premeds. Maybe that’s the wrong place to put our attention.* But maybe the assumption is that, you know, not everyone taking a chemistry lab course is necessarily going to come into the lab knowing everything they need to know to be safe. And that’s probably a safe assumption to make even about people who are good in chemistry classes. So, that’s one of those things that I think we could do a lot better at, just recognizing that there are hazards and that people who have never been in these situations before don’t necessarily know how to handle them.

Chemjobber: Yeah, I agree. I don’t know what the best way is to make sure to inculcate that sort of lab safety stuff into graduate school. Because graduate school research is supposed to be kind of free-flowing and spontaneous — you have a project and you don’t really know where it’s going to lead you. On the other hand, a premed organic chemistry class is a really artificial environment where there is an obvious beginning and an obvious end and you stick the safety discussion right at the beginning. I remember doing this, where you pull out the MSDS that’s really scary sounding and you scare the pants off the students.

Janet: I don’t even think alarming them is necessarily the way to go, but just saying, hey, it matters how you do this, it matters where you do this, this is why it matters.

Chemjobber: Right.

Janet: And I guess in research, you’re right, there is this very open-ended, free-flowing thing. You try to build knowledge that maybe doesn’t exist yet. You don’t know where it’s going to go. You don’t necessarily know what the best way to build that knowledge is going to be. I think where we fall short sometimes is that there may be an awful lot of knowledge out there somewhere, that if you take this approach, with these techniques or with these chemicals, here are some dangers that are known. Here are some risks that someone knows about. You may not know them yet, but maybe we need to do better in the conceiving-of-the-project stage at making that part of the search of prior literature. Not just, what do we know about this reaction mechanism, but what do we know about the gnarly reagents you need to be able to work with to pursue a similar kind of reaction.

Chemjobber: Yeah. My understanding is that in the UK, before you do every experiment, there’s supposed to be a formalized written risk analysis. UK listeners can comment on whether those actually happen. But it seems like they do, because, you know, when you see online conversation of it, it’s like, “What? You guys don’t do that in the US?” No, we don’t.

Janet: There’s lots of things we don’t do. We don’t have a national health service either.

Chemjobber: But how would you make the bench-level researcher do that risk analysis? How does the PI make the bench-level researcher do that? I don’t know. … Neal Langerman, a prominent chemical safety expert, and Beryl Benderly, who writes on the Sheri Sangji case, have both talked about this — the idea being, basically, that we should fully and totally incentivize this by tying academic lab safety to grants and tenure. What do you think?

Janet: I think that the intuition is right that if there’s not some real consequence for not caring about safety, it’s going to be the case that some academic researchers, making a rational calculation about what they have to do and what they’re going to be rewarded on and what they’re going to be punished for, are going to say, this would be nice in a perfect world. But there really aren’t enough hours in the day, and I’ve got to churn out the data, and I’ve got to get it analyzed and get the manuscript submitted, especially because I think that other group that was working on something like this might be getting close, and lord knows we don’t want to get scooped — you know, if there’s no consequence for not doing it, if there’s no culture of doing it, if there’s no kind of repercussion among their peers and their professional community for not doing it, a large number of people are going to make the rational calculation that there’s no point in doing it.

Chemjobber: Yeah.

Janet: Maybe they’ll do it as a student exercise or something, but you know what, students are pretty clever, and they get to a point where they actually watch what the PI who is advising them does, and form something like a model of “this is what you need to do to be a successful PI”. And all the parts of what their PI does that are invisible to them? At least to a first approximation, those are not part of the model.

Chemjobber: Right. I’ve been on record as saying that I find tying lab safety to tenure especially to be really dangerous, because you’re giving an incredible incentive to hide incidents. I mean, “For everybody’s sake, sweep this under the rug!” is what might come of this. Obviously, if somebody dies, you can’t hide that.

Janet: Hard to hide unless you’ve got off-the-books grad students, which … why would you do that?

Chemjobber: Are you kidding? There’s a huge supply of them already! But, my concern with tying lab safety to tenure is that I have a difficult time seeing how you would make that a metric other than, if you’ve reported an accident, you will not get tenure, or, if you have more than two accidents a year, you will not get tenure. For the marginal cases, the incentive becomes very high to hide these accidents.

Janet: Here’s a way it might work, though — and I know this sort of goes against the grain, since tenure committees much prefer something they can count to things they have to think about, which is why the number of publications and the impact factor becomes way more important somehow than the quality or importance of the publications as judged by experts in the field. But, something like this might work: if you said, what we’re going to look at in evaluating safety and commitment to safety for your grants and tenure is whether you’ve developed a plan. We’re going to look at what you’ve done to talk with the people in your lab about the plan, and at what you’ve done to involve them in executing the plan. So we’re going to look at it as maybe a part of your teaching, a part of your mentoring — and here, I know some people are going to laugh, because mentoring is another one of those things that presumably is supposed to be happening in academic chemistry programs, but whether it’s seriously evaluated or not, other than by counting the number of students who you graduate per year, is … you know, maybe it’s not evaluated as rigorously as it might be. But, if it became a matter of “Show us the steps you’re taking to incorporate an awareness and a seriousness about safety into how you train these graduate students to be grown-up chemists,” that’s a different kind of thing from, “Oh, and did you have any accidents or not?” Because sometimes the accidents are because you haven’t paid attention at all to safety, but sometimes the accidents are really just bad luck.

Chemjobber: Right.

Janet: And you know, maybe this isn’t going to happen every place, but at places like my university, in our tenure dossiers, they take seriously things like grant proposals we have written as part of our scholarly work, whether or not they get funded. You include them so the people evaluating your tenure dossier can evaluate the quality of your grant proposal, and you get some credit for that work even if it’s a bad pay-line year. So you might get credit for a safety plan and evidence of its implementation even if it’s been a bad year as far as accidents go.

Chemjobber: I think that’s fair. You know, I think that everybody hopes that with a high-stakes thing like tenure, there’s lots of “human factor” and relatively little number-crunching.

Janet: Yeah, but you know, then you’re on the committee that has to evaluate a large number of dossiers. Human nature kicks in and counting is easier than evaluating, isn’t it?

______
* Let the record reflect that despite our joking about “excesses” of premeds, neither I nor Chemjobber has it in for premeds. Especially so now that neither of us is TAing a premed course.

How we decide (to falsify).

At the tail-end of a three-week vacation from all things online (something that I badly needed at the end of teaching an intensive five-week online course), the BBC news reader on the radio pulled me back in. I was driving my kid home from the end-of-season swim team banquet, engaged in a conversation about the awesome coaches, when my awareness was pierced by the words “Jonah Lehrer” and “resigned” and “falsified”.

It appears that the self-plagiarism brouhaha was not Jonah Lehrer’s biggest problem. On top of recycling work in ways that may not have conformed to his contractual obligations, Lehrer has also admitted to making up quotes in his recent book Imagine. Here are the details as I got them from the New York Times Media Decoder blog:

An article in Tablet magazine revealed that in his best-selling book, “Imagine: How Creativity Works,” Mr. Lehrer had fabricated quotes from Bob Dylan, one of the most closely studied musicians alive. …

In a statement released through his publisher, Mr. Lehrer apologized.

“The lies are over now,” he said. “I understand the gravity of my position. I want to apologize to everyone I have let down, especially my editors and readers.”

He added, “I will do my best to correct the record and ensure that my misquotations and mistakes are fixed. I have resigned my position as staff writer at The New Yorker.” …

Mr. Lehrer might have kept his job at The New Yorker if not for the Tablet article, by Michael C. Moynihan, a journalist who is something of an authority on Mr. Dylan.

Reading “Imagine,” Mr. Moynihan was stopped by a quote cited by Mr. Lehrer in the first chapter. “It’s a hard thing to describe,” Mr. Dylan said. “It’s just this sense that you got something to say.”

After searching for a source, Mr. Moynihan could not verify the authenticity of the quote. Pressed for an explanation, Mr. Lehrer “stonewalled, misled and, eventually, outright lied to me” over several weeks, Mr. Moynihan wrote, first claiming to have been given access by Mr. Dylan’s manager to an unreleased interview with the musician. Eventually, Mr. Lehrer confessed that he had made it up.

Mr. Moynihan also wrote that Mr. Lehrer had spliced together Dylan quotes from separate published interviews and, when the quotes were accurate, he took them well out of context. Mr. Dylan’s manager, Jeff Rosen, declined to comment.

In the practice of science, falsification is recognized as a “high crime” and is included in every official definition of scientific misconduct you’re likely to find. The reason for this is simple: scientists are committed to supporting their claims about what the various bits of the world are like and about how they work with empirical evidence from the world — so making up that “evidence” rather than going to the trouble to gather it is out of bounds.

Despite his undergraduate degree in neuroscience, Jonah Lehrer is not operating as a scientist. However, he is operating as a journalist — a science journalist at that — and journalism purports to recognize a similar kind of relationship to evidence. Presenting words as a quote from a source is making a claim that the person identified as the source actually said those things, actually made those claims or shared those insights. Presumably, a journalist includes such quotes to bolster an argument. Maybe if Jonah Lehrer had simply written a book presenting his thoughts about creativity, readers would have had no special reason to believe it. Supporting his views with the (purported) utterances of someone widely recognized as a creative genius, though, might make them more credible.

(Here, Eva notes drily that this incident might serve to raise Jonah Lehrer’s credibility on the subject of creativity.)

The problem, of course, is that a fake quote can’t really add credibility in the way it appears to when the quote is authentic. Indeed, once discovered as fake, it has precisely the opposite effect. As with falsification in science, falsification in journalism can only achieve its intended goal as long as its true nature remains undetected.

There is no question in my mind about the wrongness of falsification here. Rather, the question I grapple with is this: why do they do it?

In science, after falsified data is detected, one sometimes hears an explanation in terms of extreme pressure to meet a deadline (say, for a big grant application, or for submission of a tenure dossier) or to avoid being scooped on a discovery that is so close one can almost taste it … except for the damned experiments that have become uncooperative. Experiments can be hard — there is no denying it — and the awarding of scientific credit to the first across the finish line (but not to the others right behind the first) raises the prospect that all of one’s hard work may be in vain if one can’t get those experiments to work first. Given the choice between getting no tangible credit for a few years’ worth of work (because someone else got her experiments to work first) and making up a few data points, a scientist might well feel tempted to cheat. That scientific communities regard falsifying data as such a serious crime is meant to reduce that temptation.

There is another element that may play an important role in falsification, one brought to my attention some years ago in a talk given by C. K. Gunsalus: the scientist may have such strong intuitions about the bit of the world she is trying to describe that gathering the empirical data to support these intuitions seems like a formality. If you’re sure you know the answer, the empirical data are only useful insofar as they help convince others who aren’t yet convinced. The problem here is that the empirical data are how we know whether our accounts of the world fit the actual world. If all we have is hunches, with no way to weed out the hunches that don’t fit with the details of reality, we’re no longer in the realm of science.

I wonder if this is close to the situation in which Jonah Lehrer found himself. Maybe he had strong intuitions about what kind of thing creativity is, and about what a creative guy like Bob Dylan would say when asked about his own exercise of creativity. Maybe these intuitions felt like a crucial part of the story he was trying to tell about creativity. Maybe he even looked to see if he could track down apt quotes from Bob Dylan expressing what seemed to him to be the obvious Dylanesque view … but, coming up short on this quotational data, he was not prepared to leave such an important intuition dangling without visible support, nor was he prepared to excise it. So he channeled Bob Dylan and wrote the thing he was sure in his heart Bob Dylan would have said.

At the time, it might have seemed a reasonable way to strengthen the narrative. As it turns out, though, it was a course of action that so weakened it that the publisher of Imagine, Houghton Mifflin Harcourt, has recalled print copies of the book.

Blogging and recycling: thoughts on the ethics of reuse.

Owing to summer-session teaching and a sprained ankle, I have been less attentive to the churn of online happenings than I usually am, but an email from SciCurious brought to my attention a recent controversy about a blogger’s “self-plagiarism” of his own earlier writing in his blog posts (and in one of his books).

SciCurious asked for my thoughts on the matter, and what follows is very close to what I emailed her in reply this morning. I should note that these thoughts were composed before I took to the Googles to look for links or to read up on the details of the particular controversy playing out. This means that I’ve spoken to what I understand as the general lay of the ethical land here, but I have probably not addressed some of the specific details that people elsewhere are discussing.

Here’s the broad question: Is it unethical for a blogger to reuse in blog posts material she has published before (including in earlier blog posts)?

A lot of people who write blogs are using them with the clear intention (clear at least to themselves) of developing ideas for “more serious” writing projects — books, or magazine articles or what have you. I myself am leaning heavily on stuff I’ve blogged over the past seven-plus years in writing the textbook I’m trying to finish, and plan similarly to draw on old blog posts for at least two other books that are in my head (if I can ever get them out of my head and into book form).

That this is an intended outcome is part of why many blog authors who are lucky enough to get paying blogging gigs, especially those of us from academia, fight hard for ownership of what we post and for the explicit right to reuse what we’ve written.

So, I wouldn’t generally judge reuse of what one has written in blog posts as self-plagiarism, nor as unethical. Of course, my book(s) will explicitly acknowledge my blogs as the site-of-first-publication for earlier versions of the arguments I put forward. (My book(s) will also acknowledge the debt I owe to commenters on my posts who have pushed me to think much more carefully about the issues I’ve posted on.)

That said, if one is writing in a context where one has agreed to a rule that says, in effect, “Everything you write for us must be shiny and brand-new and never published by you before elsewhere in any form,” then one is obligated not to recycle what one has written elsewhere. That’s what it means to agree to a rule. If you think it’s a bad rule, you shouldn’t agree to it — and indeed, perhaps you should mount a reasoned argument as to why it’s a bad rule. Agreeing to follow the rule and then not following the rule, however, is unethical.

There are venues (including the Scientific American Blog Network) that are OK with bloggers of long standing dusting off posts from the archives. I’ve exercised this option more than once, though I usually make an effort to significantly update, expand, or otherwise revise those posts I recycle (if for no other reason than that I don’t always fully agree with what that earlier time-slice of myself wrote).

This kind of reuse is OK with my corporate master. Does that necessarily make it ethical?

Potentially it would be unethical if it imposed a harm on my readers — that is, if they (you) were harmed by my reposting those posts of yore. But, I think that would require either that I had some sort of contract (express or implied) with my readers that I only post thoughts I have never posted before, or that my reposts mislead them about what I actually believe at the moment I hit the “publish” button. I don’t have such a contract with my readers (at least, I don’t think I do), and my revision of the posts I recycle is intended to make sure that they don’t mislead readers about what I believe.

Back-linking to the original post is probably good practice (from the point of view of making reuse transparent) … but I don’t always do this.

One reason is that the substantial revisions make the new posts substantially different — making different claims, coming to different conclusions, offering different reasons. The old post is an ancestor, but it’s not the same creature anymore.

Another reason is that some of the original posts I’m recycling are from my ancient Blogspot blog, from whose backend I am locked out after a recent Google update/migration — and I fear that the blog itself may disappear, which would leave my updated posts with back-links to nowhere. Bloggers tend to view back-links to nowhere as a very bad thing.

The whole question of “self-plagiarism” as an ethical problem is an interesting one, since I think there’s a relevant difference between self-plagiarism and ethical reuse.

Plagiarism, after all, is use of someone else’s words or ideas (or data, or source-code, etc.) without proper attribution. If you’re reusing your own words or ideas (or whatnot), it’s not like you’re misrepresenting them as your own when they’re really someone else’s.

There are instances, however, where self-reuse rightly gets people exercised. For example, some scientists reuse their own stuff to create the appearance in the scientific literature that they’ve conducted more experimental studies than they actually have, or that there are more published results supporting their hypotheses than there really are. This kind of artificial multiplication of scientific studies is ethically problematic because it is intended to mislead (and indeed, may succeed in misleading), not because the scientists involved haven’t given fair credit to the earlier time-slices of themselves. (A recent editorial for ACS Nano gives a nice discussion of other problematic aspects of “self-plagiarism” within the context of scientific publishing.)

The right ethical diagnosis of the controversy du jour may depend in part on whether journalistic ethics forbid reuse (explicitly or implicitly) — and if so, on whether (or in what conditions) bloggers count as journalists. At some level, this goes beyond what is spelled out in one’s blogging contract and turns also on the relationship between the blogger and the reader. What kind of expectations can the reader have of the blogger? What kind of expectations ought the reader to have of the blogger? To the extent that blogging is a conversation of a sort (especially when commenting is enabled), is it appropriate for that conversation to loop back to territory visited before, or is the blogger obligated always to break new ground?

And, if the readers are harmed when the blogger recycles her own back-catalogue, what exactly is the nature of that harm?

Things to read on my other blog: #scio12 preparations, truthiness at NYT, and an interview with a chloroplast.

For those of you who mostly follow my writing here on “Doing Good Science,” I thought I should give you a pointer to some things I’ve posted so far this month (which is almost half-over already?!) on my other blog, “Adventures in Ethics and Science”. Feel free to jump in to the discussions in the comments over there. Or, if you prefer, go ahead and discuss them here.

The month kicked off with a bunch of posts looking forward to ScienceOnline 2012, which is next week. First, on the issue of what to pack:

Packing for #scio12: plague relief.
Packing for #scio12: what are you drinking?
Packing for #scio12: sharing space with others.
Packing for #scio12: plumbing the inky depths.

Then, a discussion of what’s special about an unconference: Looking ahead to #scio12: the nature of the unconference. In this post, I put a call out for contributions to the wikis for the two sessions I’ll be helping to moderate: one (with Amy Freitag) on “Citizens, experts, and science”, the other (with Christie Wilcox) on “Blogging Science While Female”. Those wiki pages are just calling out for ideas, questions, or useful links. (Your ideas, questions, or useful links! What are you waiting for?)

After that, my response to a recent blog post by the New York Times’s Public Editor: Straightforward answers to questions we shouldn’t even have to ask: New York Times edition.

Finally, courtesy of my elder offspring, Friday Sprog Blogging: Interview with a Chloroplast.