Movie review: Strange Culture.

The other day I was looking for a movie available via instant streaming that featured Josh Kornbluth* and I came upon Strange Culture, a documentary about the arrest of artist and SUNY-Buffalo art professor Steve Kurtz on charges of bioterrorism, mail fraud, and wire fraud in 2004, after the death of his wife, Hope.

At the time Strange Culture was released in 2007, the legal case against Steve Kurtz (and against University of Pittsburgh professor of genetics Robert Ferrell) was ongoing, so the documentary uses actors to interpret events in the case about which Kurtz could not speak on advice of counsel, alongside the usual news footage and interviews with people in the case who were able to talk freely. It also draws on a vivid graphic novel about the case (titled “Suspect Culture”), written by Timothy Stock and illustrated by Warren Heise.

The central question of the documentary is how an artist found himself the target of federal charges of bioterrorism. I should mention that I watched Strange Culture not long after I finished reading The Radioactive Boy Scout, which no doubt colored my thinking. If The Radioactive Boy Scout is a story of scientific risks taken too lightly, Strange Culture strikes me as a story of scientific risks blown far out of proportion. At the very least, I think there are questions worth pondering here about why the two cases provoked such wildly different reactions.

In 2004, as part of the Critical Art Ensemble, Steve and Hope Kurtz were working on an art installation about genetically modified agriculture for the Massachusetts Museum of Contemporary Art. The installation was designed to demonstrate (and involve museum-goers in) scientific techniques used to isolate genetic information from various food products and to identify genetically modified organisms. The larger aim was to help the audience better understand the use of biotechnology in agriculture, and to push the audience to think more deeply about the scientific decisions made by agribusiness and how they might impact everyday life.

Regardless of whether one thinks the Critical Art Ensemble was raising legitimate worries about GMOs, or ignoring potential benefits from this use of biotechnology**, there is something about the effort to give members of the public a better understanding of — and even some hands-on engagement with — the scientific techniques that I find deeply appealing. Indeed, Steve and Hope Kurtz were in active collaboration with working biologists so that they could master the scientific techniques in question and use them appropriately in assembling the installation. Their preparations included work they were doing in their home with petri dishes and commercially available incubators using benign bacteria.

However, this was where the problems began for Steve Kurtz. One night in May of 2004, Hope Kurtz died in her sleep of heart failure. Steve Kurtz dialed 911. The Buffalo first responders who answered the call saw the petri dishes, freaked out, and notified the FBI. Suddenly the Kurtz home was swarming with federal agents looking for evidence of bioterrorist activities, and Steve Kurtz was under arrest.

Watching Strange Culture, I found myself grappling with the question of just why the authorities reacted with such alarm to what they found in the Kurtz home. My recollection of the news coverage at the time was that the authorities suspected that whatever was growing in those petri dishes might have killed Hope Kurtz, but at this point indications are that her death was due to a congenital heart defect. First responders are supposed to be alert to dangers, but they should also recognize that coincidence in space and time is not the same as causation. Hope Kurtz’s death came less than three years after the September 11th attacks, and the anthrax attacks that followed close on their heels, which likely raised anxiety about the destructive potential of biological agents in the hands of someone who knows how to use them. I wonder, though, whether some amount of the reaction was not just post-9/11 hypervigilance but a deeper fear of biological material at the microscopic level. If you can grow it in a petri dish, the reaction seemed to say, it must be some seriously dangerous stuff. (I am grateful that these first responders didn’t stumble upon the forgotten leftovers in the back of my fridge and judge me a bioterrorism suspect, too.)

More baffling than the behavior of the first responders was the behavior of the federal agents who searched the Kurtz home. While they raised the specter that Steve Kurtz was producing biological weapons, they ended up leaving the place in shambles, strewn with bags of purportedly biohazardous material (as well as with the trash the agents generated over the long course of their investigation). Leaving things in this state would be puzzling if the government’s prime concern was protecting the community from harmful biological materials; it suggests that the investigative team was more interested in creating a show of government force.

Strange Culture raises, but does not answer, the question of how the government turned out to be even more alarmed by biotechnology in widespread agricultural use than was an art group aiming to raise concerns about GMOs. It suggests that scientific understanding and accurate risk assessment are problems not just for the public at large but also for the people entrusted with keeping the public safe. It also suggests that members of the public are not terribly safe if the default response from the government is an overreaction, or a presumption that members of the public have no business getting their hands dirty with science.

It’s worth noting that a 2008 ruling found there was insufficient evidence to support the charges against Steve Kurtz, and that the Department of Justice declined to appeal this ruling. You can read the Critical Art Ensemble Defense Fund press release issued at the conclusion of Steve Kurtz’s legal battle.

_____
*Yes, it’s a very particular kind of thing to want. People are like that sometimes.

**On the question of GMOs, if you haven’t yet read Christie Wilcox’s posts (here, here, and here), you really should.

Facing felony charges in lab death of Sheri Sangji, UCLA settles, Harran stretches credulity.

There have been recent developments in the criminal case against UCLA and chemistry professor Patrick Harran in connection with the laboratory accident that resulted in the death of Sheri Sangji (which we’ve discussed here and here). The positive development is that UCLA has reached a settlement agreement with prosecutors. However, Patrick Harran’s legal strategy has taken a turn that strikes me as ill-advised.

From the Los Angeles Times:

Half of the felony charges stemming from a 2008 lab accident that killed UCLA research assistant Sheri Sangji were dropped Friday when the University of California regents agreed to follow comprehensive safety measures and endow a $500,000 scholarship in her name.

“The regents acknowledge and accept responsibility for the conditions under which the laboratory operated on Dec. 29, 2008,” the agreement read in part, referring to the date that Sangji, 23, suffered fatal burns.

Charges remain against her supervisor, chemistry professor Patrick Harran. His arraignment was postponed to Sept. 5 to allow the judge to consider defense motions, including one challenging the credibility of the state’s chief investigator on the case. …

UCLA and Harran have called her death a tragic accident and said she was a seasoned chemist who chose not to wear a protective lab coat. …

In court papers this week, Harran’s lawyers said prosecutors had matched the fingerprints of Brian Baudendistel, a senior special investigator who handled the case for the state Division of Occupational Safety and Health, with the prints of a teenager who pleaded no contest to murder in Northern California in 1985.

The defense contends that the investigator, whose report formed the basis for the charges, is the same Brian A. Baudendistel who took part in a plot to rob a drug dealer of $3,000 worth of methamphetamine, then shot him. Another teenager admitted to pulling the trigger but said it was Baudendistel’s shotgun.

Baudendistel told The Times this week that it is a case of mistaken identity and that he is not the individual involved in the 1985 case.

Cal/OSHA defended the integrity of the investigation in a statement issued Friday by spokesman Dean Fryer.

“The defendants’ most recent attempt to deflect attention from the charges brought against them simply does not relate in any way to the circumstances of Ms. Sangji’s death or the actual evidence collected in Cal/OSHA’s comprehensive investigation,” it read.

Deborah Blum adds:

Should a chemist-in-training approach hazardous chemicals with extreme caution? Yes. Should she expect her employer to provide her with the necessary information and equipment to engage in such caution? Most of us would argue yes. Should chemistry professors be held to the same standard of employee safety as, say, chemical manufacturers or other industries? The most important “yes” to that question comes from Cal/OSHA senior investigator Brian Baudendistel.

Baudendistel concluded that the laboratory operation was careless enough for long enough to justify felony charges of willful negligence. The Sangji family, angered by those suggestions that Sheri’s experience should have taught her better, pushed for prosecution. Late last year the Los Angeles District Attorney’s office officially brought charges against Harran, UCLA, and the University of California system itself. …

[Harran’s] lawyers have responded to the Baudendistel report in part by focusing on Baudendistel himself. They claim to have found evidence that in 1985 he and two friends conspired to set up the murder of a drug dealer. All three boys were convicted and although, since they were juveniles, the records were sealed, attorneys were able to identify the killers through press coverage at the time. Although Baudendistel has insisted that Harran’s defense team tracked down the wrong man, they say they have a fingerprint match to prove it. They say further that a man who covers up his past history is not credible – and therefore neither is his report on the UCLA laboratory.

I am not a lawyer, so I’m not terribly interested in speculating on the arcane legal considerations that might be driving this move by Harran’s legal team. (Chemjobber speculates that it might be a long shot they’re playing amid plea negotiations that are not going well.)

As someone with a professional interest in crime and punishment within scientific communities, and in ethics more broadly, I do, however, think it’s worth examining the logic of Patrick Harran’s legal strategy.

The strategy, as I understand it, is to cast aspersions on the Cal/OSHA report on the basis of the legal history of the senior investigator who prepared it — specifically, his alleged involvement as a teenager in a 1985 murder plot.

Does a past bad act like this serve as prima facie reason to doubt the accuracy of the report of the investigation of conditions in Harran’s lab? It’s not clear how it could, especially if there were other investigators on the team, not alleged to be involved in such criminal behavior, who endorsed the claims in the report.

Unless, of course, the reason Harran’s legal team thinks we should doubt the accuracy of the report is that the senior investigator who prepared it is a habitual liar. To support the claim that he cannot be trusted, they point to a single alleged lie — denying involvement in the 1985 murder plot.

But this strikes me as a particularly dangerous strategy for Patrick Harran to pursue.

Essentially, the strategy rests on the claim that if a person has lied about some particular issue, we should assume that any claim that person makes, about whatever issue, might also be a lie. I’m not unsympathetic to this claim — trust is something that is earned, not simply assumed in the absence of clear evidence of dishonesty.

However, this same reasoning cannot help Patrick Harran’s credibility, given that he is on record describing Sheri Sangji, a 23-year-old with a bachelor’s degree, as an experienced chemist. Many have noted already that claiming Sheri Sangji was an experienced chemist is ridiculous on its face.

Thus, it’s not unreasonable to conclude that Patrick Harran lied when he described Sheri Sangji as an experienced chemist. And, if this is the case, following the reasoning advocated by his legal team, we must doubt the credibility of every other claim he has made — including claims about the safety training he did or did not provide to people in his lab, conditions in his lab in 2008 when the fatal accident happened, even whether he recommended that Sangji wear a lab coat.

If Patrick Harran was not lying when he said he believed Sheri Sangji was an experienced chemist, the other possibility is that he is incredibly stupid — certainly too stupid to be in charge of a lab where people work with potentially hazardous chemicals.

Some might posit that Harran’s claims about Sangji’s chemical experience were made on the advice of his legal team. That may well be, but I’m unclear on how lying on the advice of counsel is any less a lie. (If it is, this might well mitigate the “lie of omission” of an investigator advised by his lawyers that his juvenile record is sealed.) And if one lie is all it takes to destroy credibility, Harran is surely as vulnerable as Baudendistel.

Finally, a piece of free advice to PIs worrying that they may find themselves facing criminal charges should their students, postdocs, or technicians choose not to wear lab coats or other safety gear: It is perfectly reasonable to establish, and enforce, a lab policy that states that those choosing to opt out of the required safety equipment are also opting out of access to the laboratory.

Book review: The Radioactive Boy Scout.

When my three younger siblings and I were growing up, our parents had a habit of muttering, “A little knowledge is a dangerous thing.” The muttering that followed that aphorism usually had to do with the danger coming from the “little” amount of knowledge rather than a more comprehensive understanding of whatever field of endeavor was playing host to the hare-brained scheme of the hour. Now, as a parent myself, I suspect that another source of danger was the asymmetric distribution of knowledge among the interested parties: while our parents may have had knowledge of the potential hazards of various activities, knowledge that we kids lacked, they didn’t always have detailed knowledge of what exactly we kids were up to. It may take a village to raise a child, but it can take less than an hour for a determined child to scorch the hell out of a card table with a chemistry kit. (For the record, the determined child in question was not me.)

The question of knowledge — and of gaps in knowledge — is a central theme in The Radioactive Boy Scout: The Frightening True Story of a Whiz Kid and His Homemade Nuclear Reactor by Ken Silverstein. Silverstein relates the story of David Hahn, a Michigan teen in the early 1990s who, largely free of adult guidance or supervision, worked tirelessly to build a breeder reactor in his back yard. At times this feels like a tale of youthful determination to reach a goal, a story of a self-motivated kid immersing himself in self-directed learning and doing an impressive job of identifying the resources he required. However, this is also a story about how, in the quest to achieve that goal, safety considerations can pretty much disappear.

David Hahn’s source of inspiration — not to mention his guide to many of the experimental techniques he used — was The Golden Book of Chemistry Experiments. Published in 1960, the text by Robert Brent conveys an almost ruthlessly optimistic view of the benefits chemistry and chemical experimentation can bring, whether to the individual or to humanity as a whole. Part of this optimism is what appears to modern eyes as an alarmingly cavalier attitude towards potential hazards and chemical safety. If anything, the illustrations by Harry Lazarus downplay the risks even more than does the text — across 112 pages, the only pictured items remotely resembling safety apparatus are lab coats and a protective mask for an astronaut.

Given the typical teenager’s baseline assumption of invulnerability, you might imagine that leaving safety considerations in the subtext, or omitting them altogether, could be a problem. In the case of a teenager teaching himself chemistry from the book, relying on it almost as a bible of the concepts, history, and experimental techniques a serious chemist ought to know, the lack of focus on potential harms might well have suggested that there was no potential for harm — or at any rate that the harm would be minor compared to the benefits of mastery. David Hahn seems to have maintained this belief despite a series of mishaps that made him a regular at his local emergency room.

Ah, youth.

Here, though, The Radioactive Boy Scout reminds us that young David Hahn was not the only party operating with far too little knowledge. Silverstein’s book expands on his earlier Harper’s article on the incident with chapters that convey just how widespread our ignorance of radioactive hazards has been for most of the history of our scientific, commercial, and societal engagement with radioactivity. At nearly every turn in this history, potential benefits have been extolled (with radium elixirs sold in the early 1900s to lower blood pressure, ease arthritis pain, and produce “sexual rejuvenescence”) and risks denied, sometimes until the body count was so large and the legal damages were so high that they could no longer be denied.

Surely part of the problem here is that the hazards of radioactivity are less immediately obvious than those of corrosive or explosive chemicals. The charred table is directly observable in a way that damage to one’s body from exposure to radioisotopes is not (partly because the table doesn’t have an immune system that kicks in to try to counter the damage). But the invisibility of these risks was also enhanced when manufacturers who used radioactive materials proclaimed their safety both for the end-user of consumer products and for the workers making those products, and when the nuclear energy industry throttled the information the public got about mishaps at various nuclear reactors.

Possibly some of David Hahn’s teachers could have given him a more accurate view of the kinds of hazards he might face in trying to build a back yard breeder reactor … but the teen didn’t seem to feel like he could get solid mentoring from any of them, and didn’t let them in on his plans in any detail. The guidance he got from the Boy Scouts came in the form of an atomic energy merit badge pamphlet authored by the Atomic Energy Commission, a group created to promote atomic energy, and thus one unlikely to foreground the risks. (To be fair, this merit badge pamphlet did not anticipate that scouts working on the badge would actually take it upon themselves to build breeder reactors.) Presumably some of the scientists with whom David Hahn corresponded to request materials and advice on reactions would have emphasized the risks of his activities had they realized that they were corresponding with a high school student undertaking experiments in his back yard rather than with a science teacher trying to get clear on conceptual issues.

Each of these gaps of information ended up coalescing in such a way that David Hahn got remarkably close to his goal. He did an impressive job isolating radioactive materials from consumer products, performing chemical reactions to put them in suitable form for a breeder reactor, and assembling the pieces that might have initiated a chain reaction. He also succeeded in turning the back yard shed in which he conducted his work into a Superfund site. (According to Silverstein, the official EPA clean-up missed materials that his father and step-mother found hidden in their house and discarded in their household trash — which means that both the EPA and those close enough to the local landfill where the radioactive materials ended up had significant gaps in their knowledge about the hazards David Hahn introduced to the environment.)

The Radioactive Boy Scout manages to be at once an engaging walk through a challenging set of scientific problems and a chilling look at what can happen when scientific problems are stripped of their real-life context of potential impacts, for good and for ill, that stretch across time and space and touch people who aren’t even aware of the scientific work being undertaken. It is a book I suspect my 13-year-old would enjoy very much.

I’m just not sure I’m ready to give it to her.

How we decide (to falsify).

At the tail-end of a three-week vacation from all things online (something that I badly needed at the end of teaching an intensive five-week online course), the BBC news reader on the radio pulled me back in. I was driving my kid home from the end-of-season swim team banquet, engaged in a conversation about the awesome coaches, when my awareness was pierced by the words “Jonah Lehrer” and “resigned” and “falsified”.

It appears that the self-plagiarism brouhaha was not Jonah Lehrer’s biggest problem. On top of recycling work in ways that may not have conformed to his contractual obligations, Lehrer has also admitted to making up quotes in his recent book Imagine. Here are the details as I got them from the New York Times Media Decoder blog:

An article in Tablet magazine revealed that in his best-selling book, “Imagine: How Creativity Works,” Mr. Lehrer had fabricated quotes from Bob Dylan, one of the most closely studied musicians alive. …

In a statement released through his publisher, Mr. Lehrer apologized.

“The lies are over now,” he said. “I understand the gravity of my position. I want to apologize to everyone I have let down, especially my editors and readers.”

He added, “I will do my best to correct the record and ensure that my misquotations and mistakes are fixed. I have resigned my position as staff writer at The New Yorker.” …

Mr. Lehrer might have kept his job at The New Yorker if not for the Tablet article, by Michael C. Moynihan, a journalist who is something of an authority on Mr. Dylan.

Reading “Imagine,” Mr. Moynihan was stopped by a quote cited by Mr. Lehrer in the first chapter. “It’s a hard thing to describe,” Mr. Dylan said. “It’s just this sense that you got something to say.”

After searching for a source, Mr. Moynihan could not verify the authenticity of the quote. Pressed for an explanation, Mr. Lehrer “stonewalled, misled and, eventually, outright lied to me” over several weeks, Mr. Moynihan wrote, first claiming to have been given access by Mr. Dylan’s manager to an unreleased interview with the musician. Eventually, Mr. Lehrer confessed that he had made it up.

Mr. Moynihan also wrote that Mr. Lehrer had spliced together Dylan quotes from separate published interviews and, when the quotes were accurate, he took them well out of context. Mr. Dylan’s manager, Jeff Rosen, declined to comment.

In the practice of science, falsification is recognized as a “high crime” and is included in every official definition of scientific misconduct you’re likely to find. The reason for this is simple: scientists are committed to supporting their claims about what the various bits of the world are like, and about how they work, with empirical evidence from the world — so making up that “evidence” rather than going to the trouble of gathering it is out of bounds.

Despite his undergraduate degree in neuroscience, Jonah Lehrer is not operating as a scientist. However, he is operating as a journalist — a science journalist at that — and journalism purports to recognize a similar kind of relationship to evidence. Presenting words as a quote from a source is making a claim that the person identified as the source actually said those things, actually made those claims or shared those insights. Presumably, a journalist includes such quotes to bolster an argument. Maybe if Jonah Lehrer had simply written a book presenting his thoughts about creativity readers would have no special reason to believe it. Supporting his views with the (purported) utterances of someone widely recognized as a creative genius, though, might make them more credible.

(Here, Eva notes drily that this incident might serve to raise Jonah Lehrer’s credibility on the subject of creativity.)

The problem, of course, is that a fake quote can’t really add credibility in the way it appears to when the quote is authentic. Indeed, once discovered as fake, it has precisely the opposite effect. As with falsification in science, falsification in journalism can only achieve its intended goal as long as its true nature remains undetected.

There is no question in my mind about the wrongness of falsification here. Rather, the question I grapple with is why they do it.

In science, after falsified data is detected, one sometimes hears an explanation in terms of extreme pressure to meet a deadline (say, for a big grant application, or for submission of a tenure dossier) or to avoid being scooped on a discovery that is so close one can almost taste it … except for the damned experiments that have become uncooperative. Experiments can be hard, there is no denying it, and the awarding of scientific credit to the first across the finish line (but not to the others right behind the first) raises the prospect that all of one’s hard work may be in vain if one can’t get those experiments to work first. Given the choice between getting no tangible credit for a few years’ worth of work (because someone else got her experiments to work first) and making up a few data points, a scientist might well feel tempted to cheat. That scientific communities regard falsifying data as such a serious crime is meant to reduce that temptation.

There is another element that may play an important role in falsification, one brought to my attention some years ago in a talk given by C. K. Gunsalus: the scientist may have such strong intuitions about the bit of the world she is trying to describe that gathering the empirical data to support these intuitions seems like a formality. If you’re sure you know the answer, the empirical data are only useful insofar as they help convince others who aren’t yet convinced. The problem here is that the empirical data are how we know whether our accounts of the world fit the actual world. If all we have is hunches, with no way to weed out the hunches that don’t fit with the details of reality, we’re no longer in the realm of science.

I wonder if this is close to the situation in which Jonah Lehrer found himself. Maybe he had strong intuitions about what kind of thing creativity is, and about what a creative guy like Bob Dylan would say when asked about his own exercise of creativity. Maybe these intuitions felt like a crucial part of the story he was trying to tell about creativity. Maybe he even looked to see if he could track down apt quotes from Bob Dylan expressing what seemed to him to be the obvious Dylanesque view … but, coming up short on this quotational data, he was not prepared to leave such an important intuition dangling without visible support, nor was he prepared to excise it. So he channeled Bob Dylan and wrote the thing he was sure in his heart Bob Dylan would have said.

At the time, it might have seemed a reasonable way to strengthen the narrative. As it turns out, though, it was a course of action that so weakened it that the publisher of Imagine, Houghton Mifflin Harcourt, has recalled print copies of the book.

Book review: Suffering Succotash.

What is the deal with the picky eater?

Is she simply being willful, choosing the dinner table as a battlefield on which to fight for her right to self-determination? Or, is the behavior that those purveyors of succotash and fruit cup interpret as willfulness actually rooted in factors that are beyond the picky eater’s control? If the latter, is the picky eater doomed to a lifetime of pickiness, or is there help to be found?

These are the questions at the center of Suffering Succotash: A Picky Eater’s Quest to Understand Why We Hate the Foods We Hate. Its author, Stephanie V. W. Lucianovic, survived a childhood of picky eating, grappled with the persistence of pickiness into adulthood, went to culinary school, became a cheesemonger and food writer, and then mounted her quest for explanations of pickiness.

Her book tries to illuminate the origin story of picky eaters. Is it in their taste buds, and if so, due to the number of taste buds or to their sensitivity, to genetic factors driving their detection power or to environmental impacts on their operation? Is it rather their keen sense of smell that triggers pickiness? An overachieving gag-reflex? Their “emotional” stomachs? Or maybe how they were raised by the people feeding them when they were young? Are there good evolutionary reasons for the pickiness of picky eaters — and will this pickiness again be adaptive when the zombie apocalypse renders our food supply less safe in various ways?

As well, Lucianovic inquires into the likely fates of picky eaters. Are picky eaters destined to spawn more picky eaters? Can picky eaters find lasting love with humans who are significantly less discriminating about what they eat? Can picky eaters ever get over their pickiness? (Spoiler: The answers to the last two of these questions are both “To a significant extent, yes!”)

One of the joys of this book is how Lucianovic’s narrative weaves along the path of the science-y questions she was prompted to ask by her troubled relationship with yucky foods, as well as with the people trying to feed them to her. Lucianovic leads us on a non-scientist’s journey through science on a quest to better understand features of her everyday life that mattered to her — and which likely matter to readers who are themselves picky eaters or have picky eaters in their lives. After all, you’ve got to eat.

Suffering Succotash explores a wide swath of the science behind the foods people like, the foods people hate, and the various features that might make some of us pickier eaters than others, without ever seeming like a science book. Indeed, Lucianovic is candid about the usefulness (and limits) of the scientific literature to the lay person trying to find answers to her questions:

When you’re in search of very specific information, pawing through scientific papers is like disemboweling one of those Russian nesting dolls. The first article makes a claim and gives just enough information to be intriguing and useless, unless you look up the source article behind that claim. The source article leads to another claim, and therefore another source article that needs to be looked up, and another and another until you finally reach the tiniest of all the dolls, which hopefully is where all the answers will be found since the tiniest of all dolls can’t be opened. (31)

The literature, thankfully, was just one source of information in Lucianovic’s journey. Alongside it, she partook of a veritable smorgasbord of test-strips, questionnaires, genotyping, and interviews with scientists who work on the very aspects of how we taste food and why we react to foods the way we do. She even got to try her hand at some of the relevant laboratory techniques at the Monell Chemical Senses Center in Philadelphia.

What she found was that there are not simple scientific answers to the question of why some people are pickier eaters and others are not. Instead, there seems to be a complicated interplay of many different kinds of factors. She also discovered some of the limitations of the scientific tools at our disposal to identify potential causal factors behind pickiness or to reliably sort the picky from the not-so-picky eaters. However, in describing the shortcomings of taste-tests, the imprecision of questionnaires, the sheer number of factors that may (or may not) be at play in making peaches a food to be loathed, Lucianovic manages to convey an enthusiasm about the scientific search to understand picky eaters even a little better, not a frustration that science hasn’t nailed down The Answer yet.

There are many other strands woven into Suffering Succotash along with the scientific journey, including personal reminiscences of coping with picky eating as a kid — and then as an adult trying very hard not to be an inconvenient houseguest — interviews with other picky eaters about their own experiences with foods, a meditation on how parenting strategies might entrench or defuse pickiness, consideration of the extent to which eating preferences can be negotiable (or non-negotiable) in relationships, and practical strategies for overcoming one’s own pickiness — and for moving through a world of restaurants and friends’ dinner tables with the elements of pickiness that persist. These other strands, and the seamless (and often hilarious) manner in which Lucianovic connects them to the scientific questions and answers, make Suffering Succotash the perfect popular science book for a reader who doesn’t think he or she wants to read a popular science book.

Plus, there are recipes included. My offspring are surely not the world’s pickiest eaters, but they have strong views about a few notorious vegetables. However, when prepared according to the recipes included in Suffering Succotash, those vegetables were good enough that my kids wanted seconds, and thirds.

Book review: Uncaged.

In our modern world, many of the things that contribute to the mostly smooth running of our day-to-day lives are largely invisible to us. We tend to notice them only when they break. Uncaged, a thriller by Paul McKellips, identifies animal research as one of the activities in the background supporting the quality of life we take for granted, and explores what might happen if all the animal research in the U.S. ended overnight.

Part of the fun of a thriller is the unfolding of plot turns and the uncertainty about which characters who come into focus will end up becoming important. Therefore, in order not to spoil the book for those who haven’t read it yet, I’m not going to say much about the details of the plot or the main characters.

The crisis emerges from a confluence of events and an intertwining of the actions of disparate persons acting in ignorance of each other. This complex tangle of causal factors is one of the most compelling parts of the narrative. McKellips gives us “good guys,” “bad guys,” and ordinary folks just trying to get by and to satisfy whatever they think their job description or life circumstances demand of them, weaving a tapestry where each triggers chains of events that compound in ways they could scarcely have foreseen. This is a viscerally persuasive picture of how connected we are to each other, whether by political processes, public health infrastructure, the food supply, or the germ pool.

There is much to like in Uncaged. The central characters are complex, engaging, and even surprising. McKellips is deft in his descriptions of events, especially the impacts of causal chains initiated by nature or by human action on researchers and on members of the public. Especially strong are McKellips’s explanations of scientific techniques and rationales for animal research in ways that are reasonably accessible to the lay reader without being oversimplified.

Uncaged gets to the crux of the societal debate about scientific animal use in a statement issued by the President of the United States as, in response to a series of events, he issues an executive order halting animal research. This president spells out his take on the need — or not — for continued biomedical research with animals:

I realize that the National Institutes of Health grants billions of dollars to American universities and our brightest scientists for biomedical research each year. But there comes a point when we must ask ourselves — that we must seriously question — has our health reached the level of “good enough”? Think of all the medicine we have available to us today. It’s amazing. It’s plenty. It’s more than we have had available in the history of humanity. And for those of us who need medicines, surgeries, therapies and diagnostic tools — it is the sum total of all that we have available to us today. If it’s good enough for those of us who need it today, then perhaps it’s good enough for those who will need it tomorrow as well. Every generation has searched for the fountain of youth. But can we afford to spend more time, more money, and — frankly — more animals just to live longer? Natural selection is an uninvited guest within every family. Some of us will die old; some of us will die far too young. We cannot continue to fund the search for the fountain of youth. We must realize that certain diseases of aging — such as cancer, Alzheimer’s, and Parkinson’s — are inevitable. Our lifestyles and nutrition are environmental factors that certainly contribute to our health. How much longer can we pretend to play the role of God in our own laboratories? (58-59)

In some ways, this statement is the ethical pivot-point around which all the events of the novel — and the reader’s moral calculations — turn. How do we gauge “good enough”? Who gets to make the call, the people for whom modern medicine is more or less sufficient, or the people whose ailments still have no good treatment? What kind of process ought we as a society to use for this assessment?

These live questions end up being beside the point within the universe of Uncaged, though. The president issuing this statement has become, to all appearances, a one-man death panel.

McKellips develops a compelling and diverse selection of minor characters here: capitalists, terrorists, animal researchers, animal rights activists, military personnel, political appointees. Some of these (especially the animal rights activists) are clearly based on particular real people who are instantly recognizable to those who have been paying attention to the targeting of researchers in recent years. (If you’ve followed the extremists and their efforts less closely, entering bits of text from the communiques of the fictional animal rights organizations into a search engine is likely to help you get a look at their real-life counterparts.)

But, while McKellips’s portrayal of the animal rights activists is accurate in capturing their rhetoric, these key players, central to creating the crisis to which the protagonists must respond, remain ciphers. The reader gets little sense of the events or thought processes that brought them to these positions, or of the sorts of internal conflicts that might occur within animal rights organizations — or within the hearts and minds of individual activists.

Maybe this is unavoidable — animal rights activists on the internet often do seem like ciphers who work very hard to deny the complexities acknowledged by the researchers in Uncaged. But, perhaps naïvely, I have a hard time believing they are not more complex in real life than this.

As well, I would have liked for Uncaged to give us more of a glimpse into the internal workings of the executive branch — how the president and his cabinet made the decision to issue the executive order for a moratorium on animal research, what kinds of arguments various advisors might have offered for or against this order, what assemblage of political considerations, ideals, gut feelings, and unforeseen consequences born of incomplete information or sheer ignorance might have been at work. But maybe presidents, cabinet members, agency heads, and other political animals are ciphers, too — at least to research scientists who have to navigate the research environment these political animals establish and then rearrange.

Maybe this is an instance of the author grappling with the same challenge researchers face: you can’t build a realistic model without accurate and detailed information about the system you’re modeling. Maybe making such a large cast of characters more nuanced, and drawing us deeply into their inner lives, would have undercut the taut pacing of what is, after all, intended as an action thriller.

But to me, this feels like a missed opportunity. Ultimately, I worry that the various players in Uncaged — and worse, their real-life counterparts: the researchers and other advocates of humane animal research, the animal rights activists, the political animals, and the various segments of the broader public — continue to see each other as ciphers rather than trying to get inside each other’s heads and figure out where their adversaries are coming from, the better to be able to reflect upon and address the real concerns that are driving people. Modeling your opponents as automata has a certain efficiency, but to me it leaves the resolution feeling somewhat hollow — and it’s certainly not a strategy for engagement that I see leading to healthy civil society in real life.

I suspect, though, that my disappointments are a side-effect of the fact that I am not a newcomer to these disputes. For readers not already immersed in the battles over research with animals, Uncaged renders researchers as complex human beings to whom one can relate. This is a good read for someone who wants a thriller that also conveys a compelling picture of what motivates various lines of biomedical research — and why such research might matter to us all.

Book review: Coming of Age on Zoloft.

One of the interesting and inescapable features of our knowledge-building efforts is just how hard it can be to nail down objective facts. It is especially challenging to tell an objective story when the object of study is us. It’s true that we have privileged information of a particular sort (our own experience of what it is like to be us), but we simultaneously have the impediment of never being able fully to shed that experience. As well, our immediate experience is necessarily particular — none of us knows what it is like to be human in general, just what it is like to be the particular human each of us happens to be. Indeed, if you take Heraclitus seriously (he of the impossibility of stepping in the same river twice), you might not even know what it is like to be you so much as what it is like to be you so far.

All of this complicates the stories we might try to tell about how our minds are connected to our brains, what it means for those brains to be well, and what it is for us to be ourselves or not-ourselves, especially during stretches in our lives when the task that demands our attention might be figuring out who the hell we are in the first place.

Katherine Sharpe’s new book Coming of Age on Zoloft: How Antidepressants Cheered Us Up, Let Us Down, and Changed Who We Are leads us into this territory while avoiding the excesses of either ponderous philosophical treatise or catchy but overly reductive cartoon neuroscience. Rather, Sharpe draws on dozens of interviews with people prescribed selective serotonin reuptake inhibitors (SSRIs) for significant stretches from adolescence through early adulthood, and on her own experiences with antidepressants, to see how depression and antidepressants feature in the stories people tell about themselves. A major thread throughout the book is the question of how our pharmaceutical approach to mental health impacts the lives of diagnosed individuals (for better or worse), but also how it impacts our broader societal attitudes toward depression and toward the project of growing up. Sharpe writes:

When I first began to use Zoloft, my inability to pick apart my “real” thoughts and emotions from those imparted by the drug made me feel bereft. The trouble seemed to have everything to do with being young. I was conscious of needing to figure out my own interests and point myself in a direction in the world, and the fact of being on medication seemed frighteningly to compound the possibilities for error. How could I ever find my way in life if I didn’t even know which feelings were mine? (xvii)

Interleaved between personal accounts, Sharpe describes some of the larger forces whose confluence helps explain the growing ubiquity of SSRIs. One of these is the concerted effort, during the revisions that updated the DSM-II to the DSM-III, to abandon Freud-inflected frameworks that located the causal origins of depression in relationships, replacing them with checklists of symptoms to be assessed in isolation from additional facts about what might be happening in the patient’s life. These checklists might or might not be connected to hunches about the causal origins of depression, hunches based on what scientists think they know about how the drugs that seem to treat the symptoms on the checklist act on various neurotransmitters. Suddenly being depressed was an official diagnosis based on having particular symptoms that put you in that category — and in the bargain it was no longer approached as a possibly appropriate response to external circumstances. Sharpe also discusses the rise of direct-to-consumer advertising for drugs, which told us how to understand our feelings as symptoms and encouraged us to “talk to your doctor” about getting help from them, as well as the influence of managed care — and of funding priorities within the arena of psychiatric research — in making treatment with a pill preferred over time-consuming and “unpatentable talk-treatments.” (184)

Sharpe discusses interviewees’, and her own, experiences with talk therapy, and their experiences of trying to get off SSRIs (with varying degrees of medical supervision or premeditation) to find out whether one’s depression is an unrelenting chronic illness, the having of which is a permanent fact about oneself, like having Type 1 diabetes, or whether it might be a transient state, something with which one needs help for a while before going back to normal. Or, if not normal, at least functional enough.

The exploration in Coming of Age on Zoloft is beautifully attentive to the ways that “functional enough” depends on a person’s interaction with the environment — with family and friends, with the demands of school or work or of unstructured days and weeks stretching before you — and on a person’s internal dialogue — about who you are, how you feel, what you feel driven to do, what feels too overwhelming to face. Sharpe offers an especially compelling glimpse at how the forces from the world and the voices in one’s head sometimes collide, producing what professionals on college campuses describe as a significant deterioration of the baseline of mental health for their incoming students:

One college president lamented that the “moments of woolgathering, dreaming, improvisation” that were seen as part and parcel of a liberal arts education a generation ago had become a hard sell for today’s brand of highly driven students. Experts agreed that undergraduates were in a bigger hurry than ever before, expected by teachers, parents, and themselves to produce more work, of higher quality, in the same finite amount of time. (253)

Such high expectations — and the broader message that productivity is a duty — set the bar high enough that failure may become an alarmingly likely outcome. (Indeed, Sharpe quotes a Manhattan psychiatrist who raises the possibility that some college students and recent graduates “are turning to pharmaceuticals to make something possible that’s not healthy or normal.” (269)) These elevated expectations seem also to be of a piece with the broader societal mindset that makes it easier to get health coverage for a medication-check appointment than for talk-therapy. Just do the cheapest, fastest thing that lets you function well enough to get back to work. Since knowing what you want or who you are is not of primary value, exploring, reflecting, or simply being is a waste of time.

Here, of course, what kind of psychological state is functional or dysfunctional surely has something to do with what our society values, with what it demands of us. To the extent that our society is made up of individual people, those values, those demands, may be inextricably linked with whether people generally have the time, the space, the encouragement, the freedom to find or choose their own values, to be the authors (to at least some degree) of their own lives.

Finding meaning — creating meaning — is, at least experientially, connected to so much more than the release or reuptake of chemicals in our brains. Yet, as Sharpe describes, our efforts to create meaning get tangled in questions about the influence of those chemicals, especially when SSRIs are part of the story.

I no longer simply grapple with who I can become and what kind of effort it will require. Now I also grapple with the question of whether I am losing something important — cheating somehow — if I use a psychopharmaceutical to reduce the amount of effort required, or to increase my stamina to keep trying … or to lower my standards enough that being where I am (rather than trying to be better along some dimension or another) is OK with me.

And, getting satisfying answers to these questions, or even strategies for approaching them, is made harder when it seems like our society is not terribly tolerant of the woolgatherers, the grumpy, the introverted, the sad. Our right to pursue happiness (where failure is an option) has been transformed into a duty to be happy. Meanwhile, the stigma of mental illness, and of needing medication to treat it, dances hand in hand with the stigma attached to not conforming perfectly to societal expectations and definitions of “normal”.

In the end, what can it mean to feel “normal” when I can never get first-hand knowledge of how it feels to be anyone else? Is the “normal” I’m reaching for some state from my past, or some future state I haven’t yet experienced? Will I know it when I get there? And can I reliably evaluate my own moods, personality, or plans with the organ whose functioning is in question?

With engaging interviews and sometimes achingly beautiful self-reflection, Coming of Age on Zoloft leads us through the terrain of these questions, illuminates the ways our pharmaceutical approach to depression makes them more fraught, and ultimately suggests the possibility that grappling with them may always have been important for our human flourishing, even without SSRIs in our systems.

Blogging and recycling: thoughts on the ethics of reuse.

Owing to summer-session teaching and a sprained ankle, I have been less attentive to the churn of online happenings than I usually am, but an email from SciCurious brought to my attention a recent controversy about a blogger’s “self-plagiarism” of his own earlier writing in his blog posts (and in one of his books).

SciCurious asked for my thoughts on the matter, and what follows is very close to what I emailed her in reply this morning. I should note that these thoughts were composed before I took to the Googles to look for links or to read up on the details of the particular controversy playing out. This means that I’ve spoken to what I understand as the general lay of the ethical land here, but I have probably not addressed some of the specific details that people elsewhere are discussing.

Here’s the broad question: Is it unethical for a blogger to reuse in blog posts material she has published before (including in earlier blog posts)?

A lot of people who write blogs are using them with the clear intention (clear at least to themselves) of developing ideas for “more serious” writing projects — books, or magazine articles or what have you. I myself am leaning heavily on stuff I’ve blogged over the past seven-plus years in writing the textbook I’m trying to finish, and plan similarly to draw on old blog posts for at least two other books that are in my head (if I can ever get them out of my head and into book form).

That this is an intended outcome is part of why many blog authors who are lucky enough to get paying blogging gigs, especially those of us from academia, fight hard for ownership of what they post and for the explicit right to reuse what they’ve written.

So, I wouldn’t generally judge reuse of what one has written in blog posts as self-plagiarism, nor as unethical. Of course, my book(s) will explicitly acknowledge my blogs as the site-of-first-publication for earlier versions of the arguments I put forward. (My book(s) will also acknowledge the debt I owe to commenters on my posts who have pushed me to think much more carefully about the issues I’ve posted on.)

That said, if one is writing in a context where one has agreed to a rule that says, in effect, “Everything you write for us must be shiny and brand-new and never published by you before elsewhere in any form,” then one is obligated not to recycle what one has written elsewhere. That’s what it means to agree to a rule. If you think it’s a bad rule, you shouldn’t agree to it — and indeed, perhaps you should mount a reasoned argument as to why it’s a bad rule. Agreeing to follow the rule and then not following the rule, however, is unethical.

There are venues (including the Scientific American Blog Network) that are OK with bloggers of long standing dusting off posts from the archives. I’ve exercised this option more than once, though I usually make an effort to significantly update, expand, or otherwise revise those posts I recycle (if for no other reason than that I don’t always fully agree with what that earlier time-slice of myself wrote).

This kind of reuse is OK with my corporate master. Does that necessarily make it ethical?

Potentially it would be unethical if it imposed a harm on my readers — that is, if they (you) were harmed by my reposting those posts of yore. But, I think that would require either that I had some sort of contract (express or implied) with my readers that I only post thoughts I have never posted before, or that my reposts mislead them about what I actually believe at the moment I hit the “publish” button. I don’t have such a contract with my readers (at least, I don’t think I do), and my revision of the posts I recycle is intended to make sure that they don’t mislead readers about what I believe.

Back-linking to the original post is probably good practice (from the point of view of making reuse transparent) … but I don’t always do this.

One reason is that the substantial revisions make the new posts substantially different — making different claims, coming to different conclusions, offering different reasons. The old post is an ancestor, but it’s not the same creature anymore.

Another reason is that some of the original posts I’m recycling are from my ancient Blogspot blog, from whose backend I am locked out after a recent Google update/migration — and I fear that the blog itself may disappear, which would leave my updated posts with back-links to nowhere. Bloggers tend to view back-links to nowhere as a very bad thing.

The whole question of “self-plagiarism” as an ethical problem is an interesting one, since I think there’s a relevant difference between self-plagiarism and ethical reuse.

Plagiarism, after all, is use of someone else’s words or ideas (or data, or source-code, etc.) without proper attribution. If you’re reusing your own words or ideas (or whatnot), it’s not like you’re misrepresenting them as your own when they’re really someone else’s.

There are instances, however, where self-reuse rightly gets people exercised. For example, some scientists reuse their own stuff to create the appearance in the scientific literature that they’ve conducted more experimental studies than they actually have, or that there are more published results supporting their hypotheses than there really are. This kind of artificial multiplication of scientific studies is ethically problematic because it is intended to mislead (and indeed, may succeed in misleading), not because the scientists involved haven’t given fair credit to the earlier time-slices of themselves. (A recent editorial for ACS Nano gives a nice discussion of other problematic aspects of “self-plagiarism” within the context of scientific publishing.)

The right ethical diagnosis of the controversy du jour may depend in part on whether journalistic ethics forbid reuse (explicitly or implicitly) — and if so, on whether (or in what conditions) bloggers count as journalists. At some level, this goes beyond what is spelled out in one’s blogging contract and turns also on the relationship between the blogger and the reader. What kind of expectations can the reader have of the blogger? What kind of expectations ought the reader to have of the blogger? To the extent that blogging is a conversation of a sort (especially when commenting is enabled), is it appropriate for that conversation to loop back to territory visited before, or is the blogger obligated always to break new ground?

And, if the readers are harmed when the blogger recycles her own back-catalogue, what exactly is the nature of that harm?

Is how to engage with the crackpot at the scientific meeting an ethical question?

There’s scientific knowledge. There are the dedicated scientists who make it, whether laboring in laboratories or in the fields, fretting over data analysis, refereeing each other’s manuscripts or second-guessing themselves.

And, well, there are some crackpots.

I’m not talking dancing-on-the-edge-of-the-paradigm folks, nor cheaters who seem to be on a quest for fame or profit. I mean the guy who has the wild idea for revolutionizing field X that actually is completely disconnected from reality.

Generally, you don’t find too much crackpottery in the scientific literature, at least not when peer review is working as it’s meant to. The referees tend to weed it out. Perhaps, as has been suggested by some critics of peer review, referees also weed out cutting edge stuff because it’s just so new and hard to fit into the stodgy old referees’ picture of what counts as well-supported by the evidence, or consistent with our best theories, or plausible. That may just be the price of doing business. One hopes that, eventually, the truth will out.

But where you do see a higher proportion of crackpottery, aside from certain preprint repositories, is at meetings. And there, face to face with the crackpot, the gate-keepers may behave quite differently than they would in an anonymous referee’s report.

Doctor Crackpot gives a talk intended to show his brilliant new solution to a nagging problem with an otherwise pretty well established theoretical approach. Jaws drop as the presentation proceeds. Then, finally, as Doctor Crackpot is aglow with the excitement of having broken the wonderful news to his people, he entertains questions.

Crickets chirp. Members of the audience look at each other nervously.

Doctor Hardass, who has been asking tough questions of presenters all day, tentatively asks a question about the mathematics of this crackpot “solution”. The other scholars in attendance inwardly cheer, thinking, “In about 10 seconds Doctor Hardass will have demonstrated to Doctor Crackpot that this could never work! Then Doctor Crackpot will back away from this ledge and reconsider!”

Ten minutes later, Doctor Crackpot is still writing equations on the board, and Doctor Hardass has been reduced to saying, “Uh huh …” Scholars start sneaking out as the chirping of the crickets competes with the squeaking of the chalk.

Granted, no one wants to hurt Doctor Crackpot’s feelings. If it’s a small enough meeting, you all probably had lunch with him, maybe even drinks the night before. He seems like a nice guy. He doesn’t seem dangerously disconnected from reality in his everyday interactions, just dangerously disconnected from reality in the neighborhood of this particular scientific question. And, as he’s been toiling in obscurity at a little backwater institution, he’s obviously lonely for scientific company and conversation. So, calling him out as a crackpot seems kind of mean.

But … it’s also a little mean not to call him out. It can feel like you’re letting him wander through the scientific community with the equivalent of spinach in his teeth while trailing toilet paper from his shoe if you leave him with the impression that his revolutionary idea has any merit. Someone has to set this guy straight … right? If you don’t, won’t he keep trying to sell this crackpot idea at future meetings?

For what it’s worth, as someone who attends philosophy conferences as well as scientific ones (plus an interesting assortment of interdisciplinary conferences of various sorts), I can attest that there is the occasional crackpot presentation from a philosopher. However, the push-back from the philosophers during the Q&A seems much more vigorous, and seems also to reflect a commitment that the crackpot presenter could be led back to reality if only he would listen to the reasoned arguments presented to him by the audience.

In theory, you’d expect to see the same kind of commitment among scientists: if we can agree upon the empirical evidence and seriously consider each other’s arguments about the right theoretical framework in which to interpret it, we should all end up with something like agreement on our account of the world. Using the same sorts of knowledge-building strategies, the same standards of evidence, the same logical machinery, we should be able to build knowledge about the world that holds up against tests to which others subject it — and, we should welcome that testing, since the point of all this knowledge-building is not to win the argument but to build an account that gets the world right.

In theory, the scientific norms of universalism and organized skepticism would ensure that all scientific ideas (including the ones that are, on their face, crackpot ideas) get a fair hearing, but that this “fair hearing” include rigorous criticism to sort out the ideas worthy of further attention. (These norms would also remind scientists that any member of the scientific community has the potential to be the source of a fruitful idea, or of a crackpot idea.)

In practice, though, scientists pick their battles, just like everyone else. If your first ten-minute attempt at reaching a fellow scientist with rigorous criticism shows no signs of succeeding, you might just decide it’s too big a job to tackle before lunch. If repeated engagements with a fellow scientist suggest that he seems not to comprehend the arguments against his pet theory — and maybe that he doesn’t fully grok how the rest of the community understands the standards and strategies for scientific knowledge-building — you may have to make a calculation about whether bringing him back to the fold is a better use of your time and effort than, say, putting more time into your own research, or offering critiques to scientists who seem to understand them and take them seriously.

This is a sensible way to get through a day which seems to have too few hours for all the scientific knowledge-building there is to do, but it might have an impact on whether the scientific community functions in the way that best supports the knowledge-building project.

In the continuum of “scientific knowledge”, on whose behalf scientists are sworn to uphold standards and keep out the dross, where do meetings fall? Do the scientists in attendance have any ethical duty to give their candid assessments of crackpottery to the crackpots? Or is it OK to just snicker about it at the bar? If there’s no obligation to call the crackpot out, does that undermine the value of meetings as sources of scientific knowledge, or of the scientific communications needed to build scientific knowledge?

Could a rational decision not to engage with crackpots in one’s scientific community (because the return on the effort invested is likely to be low) morph into avoidance of other scientists with weird ideas that actually have something to them? Could it lead to avoidance of serious engagement with scientists one thinks are mistaken when it might take serious effort to spell out the nature of the mistakes?

And does the scientific community have any obligation either to accept the crackpots as fully part of the community (meaning that their ideas and their critiques of the ideas of others ought to be taken seriously), or else to be honest with them that, while they may subscribe to the same journals and come to the same meetings, the crackpots are Not Our Kind, Dear?

End-of-semester meditations on plagiarism.

Plagiarism — presenting the words or ideas (among other things) of someone else as one’s own rather than properly citing their source — is one of the banes of my professorial existence. One of my dearest hopes at the beginning of each academic term is that this will be the term with no instances of plagiarism in the student work submitted for my evaluation.

Ten years into this academic post and I’m still waiting for that plagiarism-free term.

One school of thought posits that students plagiarize because they simply don’t understand the rules around proper citation of sources. Consequently, professorial types go to great lengths to lay out how properly to cite sources of various types. They put explicit language about plagiarism and proper citation in their syllabi. They devote hours to crafting handouts to spell out expected citation practices. They require their students to take (and pass) plagiarism tutorials developed by information literacy professionals (the people who, in my day, we called university librarians).

And, students persist in plagiarizing.

Another school of thought lays widespread student plagiarism at the feet of the new digital age.

What with all sorts of information resources available through the internets, and with copy-and-paste technology, assembling a paper that meets the minimum page length for your assignment has never been easier. Back in the olden times, our forefathers had to actually haul the sources from which they were stealing off the shelves, maybe carry them back to the dorms through the snow, find their DOS disk to boot up the dorm PC, and then laboriously transcribe those stolen passages!

And it’s not just that the copy-and-paste option exists, we are told. College students have grown up stealing music and movies online. They’ve come of age along with Wikipedia, where information is offered free for their use and without authorship credits. If “information wants to be free” (a slogan attributed to Stewart Brand in 1984), how can these young people make sense of intellectual property, and especially of the need to cite the sources from which they found the information they are using? Is not their “plagiarism” just a form of pastiche, an activity that their crusty old professors fail to recognize as creative?

Yeah, the modern world is totally different, dude. There are tales of students copying not just Wikipedia articles but also things like online FAQs verbatim into their papers without citing the source, and indeed while professing that they didn’t think they needed to cite them because there was no author listed. You know what source kids used to copy from in my day that didn’t list authors? The World Book Encyclopedia. Indeed, from at least seventh grade, our teachers made a big deal of teaching us how to cite encyclopedia and newspaper articles with no named authors. Every citation guide I’ve seen in recent years (including the ones that talk about proper ways to cite web pages) includes instruction on how to cite such sources.

The fact that plagiarism is perhaps less labor-intensive than it used to be strikes me as an entirely separate issue from whether kids today understand that it’s wrong. If young people are literally powerless to resist the temptations presented to them by the internet, maybe we should be getting computers out of the classroom rather than putting more computers into the classroom.

Of course, the fact that not every student plagiarizes argues against the claim that students can’t help it. Clearly, some of them can.

There is research that indicates students plagiarize less in circumstances where they know that their work is going to be scanned with plagiarism-detection software. Here, it’s not that the existence or use of the software suddenly teaches students something they didn’t already know about proper citation. Rather, the extra 28 grams of prevention comes from an expectation that the software will be checking to see if they followed the rules of scholarship that they already understood.

My own experience suggests that one doesn’t require an expensive proprietary plagiarism-detection system like Turnitin — plugging the phrases in the assignment that just don’t sound like a college student wrote them into a reasonably good search engine usually delivers the uncited sources in seconds.

It also suggests that even when students are informed that you will be using software or search engines to check for plagiarism, some students still plagiarize.
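(For the curious, the low-tech version of this check is even easy to script. Below is a minimal sketch in Python; the phrases are made-up placeholders standing in for the actual suspect sentences, and the only real trick is wrapping each phrase in quotation marks so the search engine hunts for exact matches.)

```python
import urllib.parse
import webbrowser

# Placeholder phrases that don't sound like a college student wrote them;
# in practice you'd paste in the actual suspect sentences from the paper.
suspect_phrases = [
    "the dialectical interplay of epistemic norms and institutional praxis",
    "a hermeneutic of suspicion toward the primary sources",
]

for phrase in suspect_phrases:
    # Wrapping the phrase in quotation marks asks the search engine for
    # exact matches, which is what turns up an uncited source so quickly.
    query = urllib.parse.quote_plus(f'"{phrase}"')
    webbrowser.open(f"https://www.google.com/search?q={query}")
```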

Perhaps a better approach is to frame plagiarism as a violation of trust in a community that, ultimately, has an interest in being more focused on learning than on crime and punishment. This is an approach to which I’m sympathetic, which probably comes through in the version of “the talk” on academic dishonesty I give my students at the start of the semester:

Plagiarism is evil. I used to think I was a big enough person not to take it personally if someone plagiarized on an assignment for my class. I now know that I was wrong about that. I take it very personally.


For one thing, I’m here doing everything I can to help you learn this stuff that I think is really interesting and important. I know you may not believe yet that it’s interesting and important, but I hope you’ll let me try to persuade you. And, I hope you’ll put an honest effort into learning it. If you try hard and you give it a chance, I can respect that. If you make the calculation that, given the other things on your plate, you can’t put in the kind of time and effort I’m expecting and you choose to put in what you can, I’ll respect that, too. But if you decide it’s not worth your time or effort to even try, and instead you turn to plagiarism to make it look like you learned something — well, you’re saying that the stuff you’re supposedly here to learn is of no value, except to get you the grades and the credits you want. I care about that stuff. So I take it personally when you decide, despite all I’m doing here, that it’s of no value. Moreover, this is not a diploma mill where you pay your money and get your degree. If you want the three credits from my course, the terms of engagement are that you’ll have to show some evidence of learning.


Even worse, when you hand in an essay that you’ve copied from the internet, you’re telling me you don’t think I’m smart enough to tell the difference between your words and ideas and something you found in 5 minutes with Google. You’re telling me you think I’m stupid. I take that personally, too.


If you plagiarize in my course, you fail my course, and I will take it personally. Maybe that’s unreasonable, but that’s how I am. I thought I should tell you up front so that, if you can’t handle having a professor who’s such a hardass, you can explore your alternatives.

So far, none of my students have ever run screaming from this talk. Some of them even nod approvingly. The students who labor to write their papers honestly likely feel there’s something unjust about classmates who sidestep all that labor by cheating.

But students can still fully comprehend your explanation of how you view plagiarism, how personally you’ll take it, how vigorously you’ll punish it … and plagiarize.

They may even deny it to your face for 30 additional seconds after they recognize that you have them dead to rights (since given the side-by-side comparison of their assignment and the uncited source, they would need to establish psychic powers for there to be any plausible explanation besides plagiarism). And then they’ll explain that they were really pressed for time, and they need a good grade (or a passing grade) in this course, and they felt trapped by circumstances, so even though of course they know what they did is wrong, they made one bad decision, and their parents will kill them, and … isn’t there some way we could make this go away? They feel so bad now that they promise they’ve learned their lesson.

Here, I think we need to recognize that there is a relevant difference between saying you have learned a lesson and actually learning that lesson.

Indeed, one of the reasons that my university’s office of judicial affairs asks instructors to report all cases of plagiarism and cheating no matter what sanctions we apply to them (including no sanctions) is so there will be a record of whether a particular offense is really the first offense. Students who plagiarize may also lie about whether they have a record of doing so and being caught doing it. If the offenses are spread around — in different classes with different professors in different departments — you might be able to score first-time leniency half a dozen times.

Does that sound cynical? From where I sit, it’s just realistic. But this “realistic” point of view (which others in the teaching trenches share) is bound to make us tougher on the students who actually do make a single bad decision, suspecting that they might be committed cheaters, too.

Keeping the information about plagiarists secret rather than sharing it through the proper channels, in other words, can hurt students who could be helped.

There have been occasions, it should be noted, when frustrated instructors warned students that they would name and shame plagiarists, only to find (after following through on that warning) that they had run afoul of FERPA. Among other things, FERPA gives students (18 or older) some measure of control over who gets to see their academic records. If a professor announces to the world — or even to your classmates — that you’ve failed the class for plagiarizing, information from your academic records has arguably been shared without your consent.

Still, it’s hard not to feel that plagiarism is breaking trust not just with the professor but with the learning community. Does that learning community have an interest in flagging the bad actors? If you know there are plagiarists among your classmates but you don’t know who they are, does this create a situation where you can’t trust anyone? If all traces of punishment — or of efforts at rehabilitation — are hidden behind a veil of privacy, is the reasonable default assumption that people are generally living within the rules and that the rules are being enforced against the handful of violations … or is it that people are getting away with stuff?

Is there any reasonable role for the community in the punishment and rehabilitation of plagiarists?

To some, of course, this talk of harms to learning communities will seem quaint. If you see your education as an individual endeavor rather than a team sport, your classmates may as well be desks (albeit desks whose grades may be used to determine the curve). What you do, or don’t do, in your engagement with the machinery that dispenses your education (or at least your diploma) may be driven by your rational calculations about what kind of effort you’re willing to put into creating the artifacts you need to present in exchange for grades.

The artifacts that require writing can be really time-consuming to produce de novo. The writing process, after all, is hard. People who write for a living complain of writer’s block. Have you ever heard anyone complain about Google-block? Plagiarism, in other words, is a huge time-saver, not least because it relies on skills most college students already have rather than ones they need to develop to any significant extent.

Here, I’d like to offer a modest proposal for students unwilling to engage the writing process: don’t.

Take a stand for what you believe in! Don’t lurk in the shadows pretending to knuckle under to the man by turning in essays and term papers that give the appearance that you wrote them. Instead, tell your professors that writing anything original for their assignments is against your principles. Then take your F and wear it as a badge of honor!

When all those old-timey professors who fetishize the value of clear writing, original thought, and proper citation of sources die out — when your generation is running the show — surely your principled stand will be vindicated!

And, in the meantime, your professors can spend their scarce time helping your classmates who actually want to learn to write well and uphold rudimentary rules of scholarship.

Really, it’s win-win.

_____
In the interests of full disclosure — and of avoiding accusations of self-plagiarism — I should note that this essay draws on a number of posts I have written in the past about plagiarism in academic contexts.