Science, priorities, and the challenges of sharing a world.

For scientists, doing science is often about trying to satisfy deep curiosity about how various bits of our world work. For society at large, it often seems like science ought to exist primarily to solve particular pressing problems — or at least, that this is what science ought to be doing, given that our tax dollars are going to support it. It’s not a completely crazy idea. Even if tax dollars weren’t funding lots of scientific research and the education of scientists (even at private universities), the public might expect scientists to focus their attention on pressing problems, simply because scientists have the expertise to solve these problems and other members of society don’t.

This makes it harder to get the public to care about funding science for which the pay-off is not obviously useful, especially “basic research”. You want to understand the structure of subatomic particles, or the fundamental forces at work in our universe? That’s great, but how is it going to help us live longer, or help us build more fuel-efficient vehicles, or bring smaller iPods to market? Most members of the public don’t even know what a quark is, let alone care about whether you can detect a particular kind of quark experimentally. Satisfying our curiosity about the details on the surface of Mars can strike folks not gripped by that particular curiosity as a distraction from important questions that science could be answering instead.

A typical response is to note that basic research has in the past led to unanticipated practical applications. Of course, this isn’t a way to get the public to see the intrinsic value of basic research — it merely asks them to value such research instrumentally, as sort of a mystery box that is bound to contain some payoff which we cannot describe in advance but which promises to be awesome.

Some years ago Rick Weiss made an argument like this in the Washington Post in defense of basic research, taking space exploration as his example. Weiss expressed concern that “Americans have lost sight of the value of non-applied, curiosity-driven research — the open-ended sort of exploration that doesn’t know exactly where it’s going but so often leads to big payoffs,” then went through an impressive list of scientific projects that started off without any practical applications but ended up making possible all manner of useful applications. Limit basic science, the argument went, and you’re risking economic growth.

But Weiss was careful not to say the only value in scientific research is in marketable products. Rather, he offered an even more important reason for the public to support research:

Because our understanding of the world and our support of the quest for knowledge for knowledge’s sake is a core measure of our success as a civilization. Our grasp, however tentative, of what we are and where we fit in the cosmos should be a source of pride to all of us. Our scientific achievements are a measure of ourselves that our children can honor and build upon.

I find that a pretty inspiring description of science’s value, but it’s not clear that most members of the public would be similarly misty-eyed.

Scientists may already feel that they have to become the masters of spin to get even their practical research projects funded. Will the scientists also have to take on the task of convincing the public at large that a scientific understanding of ourselves and of the world we live in should be a source of pride? Will a certain percentage of the scientist’s working budget have to go to public relations? (“Knowledge: It’s not just for dilettantes anymore!”) Maybe the message that knowledge for knowledge’s sake is a fitting goal for a civilized society is the kind of thing that people would just get as part of their education. Only it’s not on the standardized tests, and it seems like that’s the only place the public wants to put up money for education any more. Sometimes not even then.

The problem here is that scientists value something that the public at large seems not to value. The scientists think the public ought to value it, but they don’t have the power to impose their will on the public in this regard any more than the public can demand that scientists stop caring about weird things like quarks. Meanwhile, the public supports science, at least to the extent that science can deliver practical results in a timely fashion. There would probably be tension in this relationship even if scientists weren’t looking to the public for funding.

Of course, when scientists do tackle real-life problems and develop real-life solutions, it’s not like the public is always so good about accepting them. Consider the mixed public reception of the vaccine against human papilloma virus (HPV). The various strains of HPV are the leading cause of cervical cancer, and are not totally benign for men, causing genital warts and penile cancers. You would think that developing a reasonably safe and effective vaccine against a virus like HPV is exactly the sort of scientific accomplishment the public might value — except that religious groups in the US voiced opposition to the HPV vaccine on the grounds that it might give young women license to engage in premarital sex rather than practicing abstinence.

(The scientist scratches her head.) Let me get this straight: Y’all want to cut funding for the basic science because you don’t think it will lead to practical applications. But when we do the research to solve what seems like a real problem — people are dying from cervical cancer — y’all tell us this is a problem you didn’t really want us to solve?

Here, to be fair, it’s not everyone who wants to opt out of the science, just a part of the population with a fair bit of political clout at particular moments in history. The central issue seems to be that our society is made up of a bunch of people (including scientists) with rather different values, which lead to rather different priorities. In thinking about where scientific funding comes from, we talk as though there were a unitary Public with whom the unitary Science transacts business. It might be easier were that really the case. Instead, the scientists get to deal with the writhing mass of contradictory impulses that is the American public. About the only thing that public knows for sure is that it doesn’t want to pay more taxes.

How can scientists direct their efforts at satisfying public wants, or addressing public needs, if the public itself can’t come to any robust agreement on what those wants and needs are? If science has to prove to the public that the research dollars are going to the good stuff, will scientists have to stretch things a little in the telling?

Or might it actually be better if the public (or the politicians acting in the public’s name) spent less time trying to micro-manage scientists as they set the direction of their research? Maybe it would make sense, if the public decided that having scientists in society was a good thing for society, to let the scientists have some freedom to pursue their own scientific interests, and to make sure they have the funding to do so.

I’m not denying that the public has a right to decide where its money goes, but I don’t think putting up the money means you get total control. Because if you demand that much control, you may end up having to do the science yourself. Also, once science delivers the knowledge, it seems like the next step is to make that knowledge available. If particular members of the public decide not to avail themselves of that knowledge (because they feel it would be morally wrong, or maybe just silly, as in the case of pet cloning), that is their decision. We shouldn’t be making life harder for the scientists for doing what good scientists do.

It’s clear that there are forces at work in American culture right now that are not altogether comfortable with all that science has to offer at the moment. Discomfort is a normal part of sharing society with others who don’t think just like you do. But hardly anyone thinks it would be a good idea to ship all the scientists off to someplace else. We like our tablet computers and our smartphones and our headache medicines and our DSL and our Splenda too much for that.

Perhaps, for a few moments, we should give the hard-working men and women of science a break and thank them for the knowledge they produce, whether we know what to do with it or not. Then, we can return to telling them about the pieces of our world we’d like more help navigating, and see whether they have any help to offer yet.

Getting scientists to take ethics seriously: strategies that are probably doomed to failure.

As part of my day-job as a philosophy professor, I regularly teach a semester-long “Ethics in Science” course at my university. Among other things, the course is intended to help science majors figure out why being ethical might matter to them if they continue on their path to becoming working scientists and devote their careers to the knowledge-building biz.

And, there’s a reasonable chance that my “Ethics in Science” course wouldn’t exist but for strings attached to training grants from federal funding agencies requiring that students funded by these training grants receive ethics training.

The funding agencies demand the ethics training component largely in response to high profile cases of federally funded scientists behaving badly on the public’s dime. The bad behavior suggests some number of working scientists who don’t take ethics seriously. The funders identify this as a problem and want the scientists who receive grants from them to take ethics seriously. But the big question is how to get scientists to take ethics seriously.

Here are some approaches to that problem that strike me as unpromising:

  • Delivering ethical instruction that amounts to “don’t be evil” or “don’t commit this obviously wrong act”. Most scientists are not mustache-twirling villains, and few are so ignorant that they wouldn’t know that the obviously wrong acts are obviously wrong. If ethical training is delivered with the subtext of “you’re evil” or “you’re dumb,” most of the scientists to whom you’re delivering it will tune it out, since you’re clearly talking to someone else.
  • Reducing ethics to a laundry list of “thou shalt not …” Ethics is not simply a matter of avoiding bad acts — and the bad acts are not bad simply because federal regulations or your compliance officer say they are bad. There is a significant component of ethics concerned with positive action — doing good things. Presenting ethics as results instead of a process — as a set of things the ethics algorithm says you shouldn’t do, rather than a set of strategies for evaluating the goodness of various courses of action you might pursue — is not very engaging. Besides, you can’t even count on this approach for good results, since refraining from particular actions that are expressly forbidden is no guarantee you won’t find some not-expressly-forbidden action that’s equally bad.
  • Presenting ethics as something you have to talk about because the funders require that you talk about it. If you treat the ethics-talk as just a string attached to your grant money, but something with which you wouldn’t waste your time otherwise, you’re identifying attention to ethics as a thing that gets in the way of research rather than as something that supports research. Once you’ve fulfilled the requirement to have the ethics-talk, would you ever revisit ethics, or would you just get down to the business of research?
  • Segregating attention to ethics in a workshop, class, or training session. Is ethics something the entirety of which you can “do” in a few hours, or even a whole semester? That’s the impression scientific trainees can get from an ethics training requirement that floats unconnected from any discussion with the people training them about how to be a successful scientist. Once you’re done with your training, then, you’re done — why think about ethics again?
  • Pointing trainees to a professional code, the existence of which proves that your scientific discipline takes ethics seriously. The existence of a professional code suggests that someone in your discipline sat down and tried to spell out ethical standards that would support your scientific activities, but the mere existence of a code doesn’t mean the members of your scientific community even know what’s in that code, nor that they behave in ways that reflect the commitments put forward by it. Walking the walk is different from talking the talk — and knowing that there is a code, somewhere on your professional society’s website, that you could find if you Googled it probably doesn’t even rise to the level of talking the talk.
  • Delivering ethical training with the accompanying message that scientists who aren’t willing to cut ethical corners are at a competitive career disadvantage, and that this is just how things are. Essentially, this creates a situation where you tell trainees, “Here’s how you should behave … unless you’re really up against it, at which point you should be smart and drop the ethics to survive in this field.” And, what motivated trainee doesn’t recognize that she’s always up against it? It is important, I think, to recognize that unethical behavior is often motivated at least in part by a perception of extreme career pressures rather than by the inherent evil of the scientist engaging in that behavior. But noting the competitive advantage available for cheaters only to throw up your hands and say, “Eh, what are you going to do?” strikes me as a shrugging off of responsibility. At a minimum, members of a scientific community ought to reflect upon and discuss whether the structures of career rewards and career punishments incentivize bad behavior. If they do, members of the community probably have a responsibility to try to change those structures of career rewards and career punishments.

Laying out approaches to ethics training that won’t help scientists take ethics seriously might help a trainer avoid some pitfalls, but it’s not the same as spelling out approaches that are more likely to work. That’s a topic I’ll take up in a post to come.

Wikipedia, the DSM, and Beavis.

There are some nights that Wikipedia raises more questions for me than it answers.

The other evening, reminiscing about some of the background noise of my life (viz. “Beavis and Butt-head”) when I was in graduate school, I happened to look up Cornholio. After I got over my amusement that its first six letters were enough to put my desired search target second on the list of Wikipedia’s suggestions for what I might be looking for (right between cornhole and Cornholme), I read the entry and got something of a jolt at its diagnostic tone:

After consuming large amounts of sugar and/or caffeine, Beavis sometimes undergoes a radical personality change, or psychotic break. In one episode, “Holy Cornholio”, the transformation occurred after chewing and swallowing many pain killer pills. He will raise his forearms in a 90-degree angle next to his chest, pull his shirt over his head, and then begin to yell or scream erratically, producing a stream of gibberish and strange noises, his eyes wide. This is an alter-ego named ‘Cornholio,’ a normally dormant persona. Cornholio tends to wander aimlessly while reciting “I am the Great Cornholio! I need TP for my bunghole!” in an odd faux-Spanish accent. Sometimes Beavis will momentarily talk normally before resuming the persona of Cornholio. Once his Cornholio episode is over, Beavis usually has no memory of what happened.

Regular viewers of “Beavis and Butt-head” probably suspected that Beavis had problems, but I’m not sure we knew that he had a diagnosable problem. For that matter, I’m not sure we would have classified moments of Cornholio as falling outside the broad umbrella of Things Beavis Does to Make Things Difficult for Teachers.

But, the Wikipedia editors seem to have taken a shine to the DSM (or other relevant literature on psychiatric conditions), and to have confidence that the behavior Beavis displays here is properly classified as a psychotic break.

Here, given my familiarity with the details of the DSM (hardly any), I find myself asking some questions:

  • Was the show written with the intention that the Beavis-to-Cornholio transformation be seen as a psychotic break?
  • Is it possible to give a meaningful psychiatric diagnosis of a cartoon character?
  • Does a cartoon character need a substantial inner life of some sort for a psychiatric diagnosis of that cartoon character to make any sense?
  • If psychiatric diagnoses are based wholly on outward behavioral manifestations rather than on the inner stuff that might be driving that behavior (as may be the case if it’s really possible to apply diagnostic criteria to Beavis), is this a good reason for us to be cautious about the potential value of these definitions and diagnostic criteria?
  • Is there a psychology or psychiatry classroom somewhere that is using clips of the Beavis-to-Cornholio transformation in order to teach students what a psychotic break is?

I’m definitely uncomfortable that this fictional character has a psychiatric classification thrust upon him so easily — though at least, as a fictional character, he doesn’t have to deal with any actual stigma associated with such a psychiatric classification. And, I think perhaps my unease points to a worry I have (and that Katherine Sharpe also voices in her book Coming of Age on Zoloft) about the project of assembling checklists of easy-to-assess symptoms that seem detached from the harder-to-assess conditions in someone’s head, or in his environment, that are involved in causing the symptoms in the first place.

Possibly Wikipedia’s take on Beavis is simply an indication that the relevant Wikipedia editors like the DSM a lot more than I do (or that they intended their psychiatric framing of Beavis ironically — and if so, well played, editors!). But possibly it reflects a larger society that is much more willing than I am to put behaviors into boxes, regardless of the details (or even existence) of the inner life that accompanies that behavior.

I would welcome the opinions and insight of psychiatrists, psychologists, and others who run with that crowd on this matter.

Movie review: Strange Culture.

The other day I was looking for a movie I could watch via instant streaming that featured Josh Kornbluth,* and I came upon Strange Culture. Strange Culture is a documentary about the arrest of artist and SUNY-Buffalo professor of art history Steve Kurtz on charges of bioterrorism, mail fraud, and wire fraud in 2004 after the death of his wife, Hope.

At the time Strange Culture was released in 2007, the legal case against Steve Kurtz (and against University of Pittsburgh professor of genetics Robert Ferrell) was ongoing, so the documentary uses actors to interpret events in the case about which Kurtz could not speak on advice of counsel, as well as the usual news footage and interviews of people in the case who were able to talk freely. It also draws on a vividly illustrated graphic novel about the case (titled “Suspect Culture”) written by Timothy Stock and illustrated by Warren Heise.

The central question of the documentary is how an artist found himself the target of federal charges of bioterrorism. I should mention that I watched Strange Culture not long after I finished reading The Radioactive Boy Scout, which no doubt colored my thinking. If The Radioactive Boy Scout is a story of scientific risks taken too lightly, Strange Culture strikes me as a story of scientific risks blown far out of proportion. At the very least, I think there are questions worth pondering here about why the two cases provoked such wildly different reactions.

In 2004, as part of the Critical Art Ensemble, Steve and Hope Kurtz were working on an art installation for the Massachusetts Museum of Contemporary Art on genetically modified agriculture. The nature of the installation was to demonstrate (and involve museum-goers in) scientific techniques used to isolate genetic information from various food products and to identify genetically modified organisms. The larger aim of the installation was to help the audience better understand the use of biotechnology in agriculture, and to push the audience to think more deeply about the scientific decisions made by agribusiness and how they might impact everyday life.

Regardless of whether one thinks the Critical Art Ensemble was raising legitimate worries about GMOs, or ignoring potential benefits from this use of biotechnology**, there is something about the effort to give members of the public a better understanding of — and even some hands-on engagement with — the scientific techniques that I find deeply appealing. Indeed, Steve and Hope Kurtz were in active collaboration with working biologists so that they could master the scientific techniques in question and use them appropriately in assembling the installation. Their preparations included work they were doing in their home with petri dishes and commercially available incubators using benign bacteria.

However, this was where the problems began for Steve Kurtz. One night in May of 2004, Hope Kurtz died in her sleep of heart failure. Steve Kurtz dialed 911. The Buffalo first responders who answered the call saw the petri dishes, panicked, and notified the FBI. Suddenly, the Kurtz home was swarming with federal agents looking for evidence of bioterrorist activities and Steve Kurtz was under arrest.

Watching Strange Culture, I found myself grappling with the question of just why the authorities reacted with such alarm to what they found in the Kurtz home. My recollection of the news coverage at the time was that the authorities suspected that whatever was growing in those petri dishes might have killed Hope Kurtz, but at this point indications are that her death was due to a congenital heart defect. First responders are supposed to be alert to dangers, but they should also recognize that coincidence in space and time is not the same as causation. Hope Kurtz’s death came less than three years after the September 11th attacks and the anthrax attacks that followed close on their heels, which likely raised anxiety about the destructive potential of biological agents in the hands of someone who knows how to use them. I wonder, though, whether some amount of the reaction was not just post-9/11 hypervigilance but a deeper fear of biological material at the microscopic level. If you can grow it in a petri dish, the reaction seemed to say, it must be some seriously dangerous stuff. (I am grateful that these first responders didn’t stumble upon the forgotten leftovers in the back of my fridge and judge me a bioterrorism suspect, too.)

More baffling than the behavior of the first responders was the behavior of the federal agents who searched the Kurtz home. While they raised the specter that Steve Kurtz was producing biological weapons, they ended up leaving the place in shambles, strewn with bags of purportedly biohazardous material (as well as with the trash generated by the agents over the long course of their investigation). Leaving things in this state would be puzzling if the prime concern of the government was to protect the community from harmful biological materials, suggesting that perhaps the investigative team was more interested in creating a show of government force.

Strange Culture raises, but does not answer, the question of how the government turned out to be even more alarmed by biotechnology in widespread agricultural use than was an art group aiming to raise concerns about GMOs. It suggests that scientific understanding and accurate risk assessment are problems not just for the public at large but also for the people entrusted with keeping the public safe. It also suggests that members of the public are not terribly safe if the default response from the government is an overreaction, or a presumption that members of the public have no business getting their hands dirty with science.

It’s worth noting that a 2008 ruling found there was insufficient evidence to support the charges against Steve Kurtz, and that the Department of Justice declined to appeal this ruling. You can read the Critical Art Ensemble Defense Fund press release issued at the conclusion of Steve Kurtz’s legal battle.

_____
*Yes, it’s a very particular kind of thing to want. People are like that sometimes.

**On the question of GMOs, if you haven’t yet read Christie Wilcox’s posts (here, here, and here), you really should.

Facing felony charges in lab death of Sheri Sangji, UCLA settles, Harran stretches credulity.

There have been recent developments in the criminal case against UCLA and chemistry professor Patrick Harran in connection with the laboratory accident that resulted in the death of Sheri Sangji (which we’ve discussed here and here). The positive development is that UCLA has reached a plea agreement with prosecutors. (CORRECTION: UCLA has reached a settlement agreement with the prosecutors, not a plea agreement. Sorry for the confusion.) However, Patrick Harran’s legal strategy has taken a turn that strikes me as ill-advised.

From the Los Angeles Times:

Half of the felony charges stemming from a 2008 lab accident that killed UCLA research assistant Sheri Sangji were dropped Friday when the University of California regents agreed to follow comprehensive safety measures and endow a $500,000 scholarship in her name.

“The regents acknowledge and accept responsibility for the conditions under which the laboratory operated on Dec. 29, 2008,” the agreement read in part, referring to the date that Sangji, 23, suffered fatal burns.

Charges remain against her supervisor, chemistry professor Patrick Harran. His arraignment was postponed to Sept. 5 to allow the judge to consider defense motions, including one challenging the credibility of the state’s chief investigator on the case. …

UCLA and Harran have called her death a tragic accident and said she was a seasoned chemist who chose not to wear a protective lab coat. …

In court papers this week, Harran’s lawyers said prosecutors had matched the fingerprints of Brian Baudendistel, a senior special investigator who handled the case for the state Division of Occupational Safety and Health, with the prints of a teenager who pleaded no contest to murder in Northern California in 1985.

The defense contends that the investigator, whose report formed the basis for the charges, is the same Brian A. Baudendistel who took part in a plot to rob a drug dealer of $3,000 worth of methamphetamine, then shot him. Another teenager admitted to pulling the trigger but said it was Baudendistel’s shotgun.

Baudendistel told The Times this week that it is a case of mistaken identity and that he is not the individual involved in the 1985 case.

Cal/OSHA defended the integrity of the investigation in a statement issued Friday by spokesman Dean Fryer.

“The defendants’ most recent attempt to deflect attention from the charges brought against them simply does not relate in any way to the circumstances of Ms. Sangji’s death or the actual evidence collected in Cal/OSHA’s comprehensive investigation,” it read.

Deborah Blum adds:

Should a chemist-in-training approach hazardous chemicals with extreme caution? Yes. Should she expect her employer to provide her with the necessary information and equipment to engage in such caution? Most of us would argue yes. Should chemistry professors be held to the standard of employee safety as, say, chemical manufacturers or other industries? The most important “yes” to that question comes from Cal/OSHA senior investigator Brian Baudendistal.

Baudendistal concluded that the laboratory operation was careless enough for long enough to justify felony charges of willful negligence. The Sangji family, angered by those suggestions that Sheri’s experience should have taught her better, pushed for prosecution. Late last year the Los Angeles District Attorney’s office officially brought charges against Harran, UCLA, and the University of California system itself. …

[Harran’s] lawyers have responded to the Baudendistal report in part by focusing on Baudendistal himself. They claim to have found evidence that in 1985 he and two friends conspired to set up the murder of a drug dealer. All three boys were convicted and although, since they were juveniles, the records were sealed, attorneys were able to identify the killers through press coverage at the time. Although Baudendistal has insisted that Harran’s defense team tracked down the wrong man, they say they have a fingerprint match to prove it. They say further that a man who covers up his past history is not credible – and therefore neither is his report on the UCLA laboratory.

I am not a lawyer, so I’m not terribly interested in speculating on the arcane legal considerations that might be driving this move by Harran’s legal team. (Chemjobber speculates that it might be a long shot they’re playing amid plea negotiations that are not going well.)

As someone with a professional interest in crime and punishment within scientific communities, and in ethics more broadly, I do, however, think it’s worth examining the logic of Patrick Harran’s legal strategy.

The strategy, as I understand it, is to cast aspersions on the Cal/OSHA report on the basis of the legal history of the senior investigator who prepared it — specifically, his alleged involvement as a teenager in 1985 in a murder plot.

Does a past bad act like this serve as prima facie reason to doubt the accuracy of the report of the investigation of conditions in Harran’s lab? It’s not clear how it could, especially if there were other investigators on the team, not alleged to be involved in such criminal behavior, who endorsed the claims in the report.

Unless, of course, the reason Harran’s legal team thinks we should doubt the accuracy of the report is that the senior investigator who prepared it is a habitual liar. To support the claim that he cannot be trusted, they point to a single alleged lie — denying involvement in the 1985 murder plot.

But this strikes me as a particularly dangerous strategy for Patrick Harran to pursue.

Essentially, the strategy rests on the claim that if a person has lied about some particular issue, we should assume that any claim that person makes, about whatever issue, might also be a lie. I’m not unsympathetic to this claim — trust is something that is earned, not simply assumed in the absence of clear evidence of dishonesty.

However, this same reasoning cannot help Patrick Harran’s credibility, given that he is on record describing Sheri Sangji, a 23-year-old with a bachelor’s degree, as an experienced chemist. Many have noted already that claiming Sheri Sangji was an experienced chemist is ridiculous on its face.

Thus, it’s not unreasonable to conclude that Patrick Harran lied when he described Sheri Sangji as an experienced chemist. And, if this is the case, following the reasoning advocated by his legal team, we must doubt the credibility of every other claim he has made — including claims about the safety training he did or did not provide to people in his lab, conditions in his lab in 2008 when the fatal accident happened, even whether he recommended that Sangji wear a lab coat.

If Patrick Harran was not lying when he said he believed Sheri Sangji was an experienced chemist, the other possibility is that he is incredibly stupid — certainly too stupid to be in charge of a lab where people work with potentially hazardous chemicals.

Some might posit that Harran’s claims about Sangji’s chemical experience were made on the advice of his legal team. That may well be, but I’m unclear on how lying on the advice of counsel is any less a lie. (If it is, this might well mitigate the “lie of omission” of an investigator advised by his lawyers that his juvenile record is sealed.) And if one lie is all it takes to decimate credibility, Harran is surely as vulnerable as Baudendistel.

Finally, a piece of free advice to PIs worrying that they may find themselves facing criminal charges should their students, postdocs, or technicians choose not to wear lab coats or other safety gear: It is perfectly reasonable to establish, and enforce, a lab policy that states that those choosing to opt out of the required safety equipment are also opting out of access to the laboratory.

Book review: The Radioactive Boy Scout.

When my three younger siblings and I were growing up, our parents had a habit of muttering, “A little knowledge is a dangerous thing.” The muttering that followed that aphorism usually had to do with the danger coming from the “little” amount of knowledge rather than from a more comprehensive understanding of whatever field of endeavor was playing host to the hare-brained scheme of the hour. Now, as a parent myself, I suspect that another source of danger involved asymmetric distribution of the knowledge among the interested parties: while our parents may have had knowledge of the potential hazards of various activities, knowledge that we kids lacked, they didn’t always have detailed knowledge of what exactly we kids were up to. It may take a village to raise a child, but it can take less than an hour for a determined child to scorch the hell out of a card table with a chemistry kit. (For the record, the determined child in question was not me.)

The question of knowledge — and of gaps in knowledge — is a central theme in The Radioactive Boy Scout: The Frightening True Story of a Whiz Kid and His Homemade Nuclear Reactor by Ken Silverstein. Silverstein relates the story of David Hahn, a Michigan teen in the early 1990s who, largely free of adult guidance or supervision, worked tirelessly to build a breeder reactor in his back yard. At times this feels like a tale of youthful determination to reach a goal, a story of a self-motivated kid immersing himself in self-directed learning and doing an impressive job of identifying the resources he required. However, this is also a story about how, in the quest to achieve that goal, safety considerations can pretty much disappear.

David Hahn’s source of inspiration — not to mention his guide to many of the experimental techniques he used — was The Golden Book of Chemistry Experiments. Published in 1960, the text by Robert Brent conveys an almost ruthlessly optimistic view of the benefits chemistry and chemical experimentation can bring, whether to the individual or to humanity as a whole. Part of this optimism is what appears to modern eyes as an alarmingly cavalier attitude towards potential hazards and chemical safety. If anything, the illustrations by Harry Lazarus downplay the risks even more than does the text — across 112 pages, the only pictured items remotely resembling safety apparatus are lab coats and a protective mask for an astronaut.

Coupled with the typical teenager’s baseline assumption of invulnerability, you might imagine that leaving safety considerations in the subtext, or omitting them altogether, could be a problem. In the case of a teenager teaching himself chemistry from the book, relying on it almost as a bible of the concepts, history, and experimental techniques a serious chemist ought to know, the lack of focus on potential harms might well have suggested that there was no potential for harm — or at any rate that the harm would be minor compared to the benefits of mastery. David Hahn seems to have maintained this belief despite a series of mishaps that made him a regular at his local emergency room.

Ah, youth.

Here, though, The Radioactive Boy Scout reminds us that young David Hahn was not the only party operating with far too little knowledge. Silverstein’s book expands on his earlier Harper’s article on the incident with chapters that convey just how widespread our ignorance of radioactive hazards has been for most of the history of our scientific, commercial, and societal engagement with radioactivity. At nearly every turn in this history, potential benefits have been extolled (with radium elixirs sold in the early 1900s to lower blood pressure, ease arthritis pain, and produce “sexual rejuvenescence”) and risks denied, sometimes until the body count was so large and the legal damages were so high that they could no longer be denied.

Surely part of the problem here is that the hazards of radioactivity are less immediately obvious than those of corrosive chemicals or explosive chemicals. The charred table is directly observable in a way that damage to one’s body from exposure to radioisotopes is not (partly because the table doesn’t have an immune system that kicks in to try to counter the damage). But the invisibility of these risks was also enhanced when manufacturers who used radioactive materials proclaimed their safety for both the end-user of consumer products and the workers making those products, and when the nuclear energy industry throttled the information the public got about mishaps at various nuclear reactors.

Possibly some of David Hahn’s teachers could have given him a more accurate view of the kinds of hazards he might encounter in trying to build a back yard breeder reactor … but the teen didn’t seem to feel like he could get solid mentoring from any of them, and didn’t let them in on his plans in any detail. The guidance he got from the Boy Scouts came in the form of an atomic energy merit badge pamphlet authored by the Atomic Energy Commission, a group created to promote atomic energy, and thus one unlikely to foreground the risks. (To be fair, this merit badge pamphlet did not anticipate that scouts working on the badge would actually take it upon themselves to build breeder reactors.) Presumably some of the scientists with whom David Hahn corresponded to request materials and advice on reactions would have emphasized the risks of his activities had they realized that they were corresponding with a high school student undertaking experiments in his back yard rather than with a science teacher trying to get clear on conceptual issues.

Each of these gaps of information ended up coalescing in such a way that David Hahn got remarkably close to his goal. He did an impressive job isolating radioactive materials from consumer products, performing chemical reactions to put them in suitable form for a breeder reactor, and assembling the pieces that might have initiated a chain reaction. He also succeeded in turning the back yard shed in which he conducted his work into a Superfund site. (According to Silverstein, the official EPA clean-up missed materials that his father and step-mother found hidden in their house and discarded in their household trash — which means that both the EPA and those close enough to the local landfill where the radioactive materials ended up had significant gaps in their knowledge about the hazards David Hahn introduced to the environment.)

The Radioactive Boy Scout manages to be at once an engaging walk through a challenging set of scientific problems and a chilling look at what can happen when scientific problems are stripped out of their real-life context of potential impacts for good and for ill that stretch across time and space and impact people who aren’t even aware of the scientific work being undertaken. It is a book I suspect my 13-year-old would enjoy very much.

I’m just not sure I’m ready to give it to her.

How we decide (to falsify).

At the tail-end of a three-week vacation from all things online (something that I badly needed at the end of teaching an intensive five-week online course), the BBC news reader on the radio pulled me back in. I was driving my kid home from the end-of-season swim team banquet, engaged in a conversation about the awesome coaches, when my awareness was pierced by the words “Jonah Lehrer” and “resigned” and “falsified”.

It appears that the self-plagiarism brouhaha was not Jonah Lehrer’s biggest problem. On top of recycling work in ways that may not have conformed to his contractual obligations, Lehrer has also admitted to making up quotes in his recent book Imagine. Here are the details as I got them from the New York Times Media Decoder blog:

An article in Tablet magazine revealed that in his best-selling book, “Imagine: How Creativity Works,” Mr. Lehrer had fabricated quotes from Bob Dylan, one of the most closely studied musicians alive. …

In a statement released through his publisher, Mr. Lehrer apologized.

“The lies are over now,” he said. “I understand the gravity of my position. I want to apologize to everyone I have let down, especially my editors and readers.”

He added, “I will do my best to correct the record and ensure that my misquotations and mistakes are fixed. I have resigned my position as staff writer at The New Yorker.” …

Mr. Lehrer might have kept his job at The New Yorker if not for the Tablet article, by Michael C. Moynihan, a journalist who is something of an authority on Mr. Dylan.

Reading “Imagine,” Mr. Moynihan was stopped by a quote cited by Mr. Lehrer in the first chapter. “It’s a hard thing to describe,” Mr. Dylan said. “It’s just this sense that you got something to say.”

After searching for a source, Mr. Moynihan could not verify the authenticity of the quote. Pressed for an explanation, Mr. Lehrer “stonewalled, misled and, eventually, outright lied to me” over several weeks, Mr. Moynihan wrote, first claiming to have been given access by Mr. Dylan’s manager to an unreleased interview with the musician. Eventually, Mr. Lehrer confessed that he had made it up.

Mr. Moynihan also wrote that Mr. Lehrer had spliced together Dylan quotes from separate published interviews and, when the quotes were accurate, he took them well out of context. Mr. Dylan’s manager, Jeff Rosen, declined to comment.

In the practice of science, falsification is recognized as a “high crime” and is included in every official definition of scientific misconduct you’re likely to find. The reason for this is simple: scientists are committed to supporting their claims about what the various bits of the world are like and about how they work with empirical evidence from the world — so making up that “evidence” rather than going to the trouble to gather it is out of bounds.

Despite his undergraduate degree in neuroscience, Jonah Lehrer is not operating as a scientist. However, he is operating as a journalist — a science journalist at that — and journalism purports to recognize a similar kind of relationship to evidence. Presenting words as a quote from a source is making a claim that the person identified as the source actually said those things, actually made those claims or shared those insights. Presumably, a journalist includes such quotes to bolster an argument. Maybe if Jonah Lehrer had simply written a book presenting his thoughts about creativity readers would have no special reason to believe it. Supporting his views with the (purported) utterances of someone widely recognized as a creative genius, though, might make them more credible.

(Here, Eva notes drily that this incident might serve to raise Jonah Lehrer’s credibility on the subject of creativity.)

The problem, of course, is that a fake quote can’t really add credibility in the way it appears to when the quote is authentic. Indeed, once discovered as fake, it has precisely the opposite effect. As with falsification in science, falsification in journalism can only achieve its intended goal as long as its true nature remains undetected.

There is no question in my mind about the wrongness of falsification here. Rather, the question I grapple with is why they do it.

In science, after falsified data is detected, one sometimes hears an explanation in terms of extreme pressure to meet a deadline (say, for a big grant application, or for submission of a tenure dossier) or to avoid being scooped on a discovery that is so close one can almost taste it … except for the damned experiments that have become uncooperative. Experiments can be hard, there is no denying it, and the awarding of scientific credit to the first across the finish-line (but not to the others right behind the first) raises the prospect that all of one’s hard work may be in vain if one can’t get those experiments to work first. Given the choice between getting no tangible credit for a few years’ worth of work (because someone else got her experiments to work first) and making up a few data points, a scientist might well feel tempted to cheat. That scientific communities regard falsifying data as such a serious crime is meant to reduce that temptation.

There is another element that may play an important role in falsification, one brought to my attention some years ago in a talk given by C. K. Gunsalus: the scientist may have such strong intuitions about the bit of the world she is trying to describe that gathering the empirical data to support these intuitions seems like a formality. If you’re sure you know the answer, the empirical data are only useful insofar as they help convince others who aren’t yet convinced. The problem here is that the empirical data are how we know whether our accounts of the world fit the actual world. If all we have is hunches, with no way to weed out the hunches that don’t fit with the details of reality, we’re no longer in the realm of science.

I wonder if this is close to the situation in which Jonah Lehrer found himself. Maybe he had strong intuitions about what kind of thing creativity is, and about what a creative guy like Bob Dylan would say when asked about his own exercise of creativity. Maybe these intuitions felt like a crucial part of the story he was trying to tell about creativity. Maybe he even looked to see if he could track down apt quotes from Bob Dylan expressing what seemed to him to be the obvious Dylanesque view … but, coming up short on this quotational data, he was not prepared to leave such an important intuition dangling without visible support, nor was he prepared to excise it. So he channeled Bob Dylan and wrote the thing he was sure in his heart Bob Dylan would have said.

At the time, it might have seemed a reasonable way to strengthen the narrative. As it turns out, though, it was a course of action that so weakened it that the publisher of Imagine, Houghton Mifflin Harcourt, has recalled print copies of the book.

Book review: Suffering Succotash.

What is the deal with the picky eater?

Is she simply being willful, choosing the dinner table as a battlefield on which to fight for her right to self-determination? Or, is the behavior that those purveyors of succotash and fruit cup interpret as willfulness actually rooted in factors that are beyond the picky eater’s control? If the latter, is the picky eater doomed to a lifetime of pickiness, or can help be found?

These are the questions at the center of Suffering Succotash: A Picky Eater’s Quest to Understand Why We Hate the Foods We Hate. Its author, Stephanie V. W. Lucianovic, survived a childhood of picky eating, grappled with the persistence of pickiness into adulthood, went to culinary school, became a cheesemonger and food writer, and then mounted her quest for explanations of pickiness.

Her book tries to illuminate the origin story of picky eaters. Is it in their taste buds, and if so, due to the number of taste buds or to their sensitivity, to genetic factors driving their detection power or to environmental impacts on their operation? Is it rather their keen sense of smell that triggers pickiness? An overachieving gag-reflex? Their “emotional” stomachs? Or maybe how they were raised by the people feeding them when they were young? Are there good evolutionary reasons for the pickiness of picky eaters — and will this pickiness again be adaptive when the zombie apocalypse renders our food supply less safe in various ways?

As well, Lucianovic inquires into the likely fates of picky eaters. Are picky eaters destined to spawn more picky eaters? Can picky eaters find lasting love with humans who are significantly less discriminating about what they eat? Can picky eaters ever get over their pickiness? (Spoiler: The answers to the last two of these questions are both “To a significant extent, yes!”)

One of the joys of this book is how Lucianovic’s narrative weaves along the path of the science-y questions she was prompted to ask by her troubled relationship with yucky foods, as well as with the people trying to feed them to her. Lucianovic leads us on a non-scientist’s journey through science on a quest to better understand features of her everyday life that mattered to her — and which likely matter to readers who are themselves picky eaters or have picky eaters in their lives. After all, you’ve got to eat.

Suffering Succotash explores a wide swath of the science behind the foods people like, the foods people hate, and the various features that might make some of us pickier eaters than others, without ever seeming like a science book. Indeed, Lucianovic is candid about the usefulness (and limits) of the scientific literature to the lay person trying to find answers to her questions:

When you’re in search of very specific information, pawing through scientific papers is like disemboweling one of those Russian nesting dolls. The first article makes a claim and gives just enough information to be intriguing and useless, unless you look up the source article behind that claim. The source article leads to another claim, and therefore another source article that needs to be looked up, and another and another until you finally reach the tiniest of all the dolls, which hopefully is where all the answers will be found since the tiniest of all dolls can’t be opened. (31)

The literature, thankfully, was just one source of information in Lucianovic’s journey. Alongside it, she partook of a veritable smorgasbord of test-strips, questionnaires, genotyping, and interviews with scientists who work on various aspects of how we taste food and why we react to foods the way we do. She even got to try her hand at some of the relevant laboratory techniques at the Monell Chemical Senses Center in Philadelphia.

What she found was that there are not simple scientific answers to the question of why some people are pickier eaters and others are not. Instead, there seems to be a complicated interplay of many different kinds of factors. She also discovered some of the limitations of the scientific tools at our disposal to identify potential causal factors behind pickiness or to reliably sort the picky from the not-so-picky eaters. However, in describing the shortcomings of taste-tests, the imprecision of questionnaires, the sheer number of factors that may (or may not) be at play in making peaches a food to be loathed, Lucianovic manages to convey an enthusiasm about the scientific search to understand picky eaters even a little better, not a frustration that science hasn’t nailed down The Answer yet.

There are many other strands woven into Suffering Succotash along with the scientific journey, including personal reminiscences of coping with picky eating as a kid (and then as an adult trying very hard not to be an inconvenient houseguest), interviews with other picky eaters about their own experiences with foods, a meditation on how parenting strategies might entrench or defuse pickiness, consideration of the extent to which eating preferences can be negotiable (or non-negotiable) in relationships, and practical strategies for overcoming one’s own pickiness — and for moving through a world of restaurants and friends’ dinner tables with the elements of pickiness that persist. These other strands, and the seamless (and often hilarious) manner in which Lucianovic connects them to the scientific questions and answers, make Suffering Succotash the perfect popular science book for a reader who doesn’t think he or she wants to read a popular science book.

Plus, there are recipes included. My offspring are surely not the world’s pickiest eaters, but they have strong views about a few notorious vegetables. However, when prepared according to the recipes included in Suffering Succotash, those vegetables were good enough that my kids wanted seconds, and thirds.

Book review: Uncaged.

In our modern world, many of the things that contribute to the mostly smooth running of our day-to-day lives are largely invisible to us. We tend to notice them only when they break. Uncaged, a thriller by Paul McKellips, identifies animal research as one of the activities in the background supporting the quality of life we take for granted, and explores what might happen if all the animal research in the U.S. ended overnight.

Part of the fun of a thriller is the unfolding of plot turns and the uncertainty about which characters who come into focus will end up becoming important. Therefore, in order not to spoil the book for those who haven’t read it yet, I’m not going to say much about the details of the plot or the main characters.

The crisis emerges from a confluence of events and an intertwining of the actions of disparate persons acting in ignorance of each other. This complex tangle of causal factors is one of the most compelling parts of the narrative. McKellips gives us “good guys,” “bad guys,” and ordinary folks just trying to get by and to satisfy whatever they think their job description or life circumstances demand of them, weaving a tapestry where each triggers chains of events that compound in ways they could scarcely have foreseen. This is a viscerally persuasive picture of how connected we are to each other, whether by political processes, public health infrastructure, the food supply, or the germ pool.

There is much to like in Uncaged. The central characters are complex, engaging, and even surprising. McKellips is deft in his descriptions of events, especially the impacts of causal chains initiated by nature or by human action on researchers and on members of the public. Especially strong are McKellips’s explanations of scientific techniques and rationales for animal research in ways that are reasonably accessible to the lay reader without being oversimplified.

Uncaged gets to the crux of the societal debate about scientific animal use in a statement issued by the President of the United States as, in response to a series of events, he issues an executive order halting animal research. This president spells out his take on the need — or not — for continued biomedical research with animals:

I realize that the National Institutes of Health grants billions of dollars to American universities and our brightest scientists for biomedical research each year. But there comes a point when we must ask ourselves — that we must seriously question — has our health reached the level of “good enough”? Think of all the medicine we have available to us today. It’s amazing. It’s plenty. It’s more than we have had available in the history of humanity. And for those of us who need medicines, surgeries, therapies and diagnostic tools — it is the sum total of all that we have available to us today. If it’s good enough for those of us who need it today, then perhaps it’s good enough for those who will need it tomorrow as well. Every generation has searched for the fountain of youth. But can we afford to spend more time, more money, and — frankly — more animals just to live longer? Natural selection is an uninvited guest within every family. Some of us will die old; some of us will die far too young. We cannot continue to fund the search for the fountain of youth. We must realize that certain diseases of aging — such as cancer, Alzheimer’s, and Parkinson’s — are inevitable. Our lifestyles and nutrition are environmental factors that certainly contribute to our health. How much longer can we pretend to play the role of God in our own laboratories? (58-59)

In some ways, this statement is the ethical pivot-point around which all the events of the novel — and the reader’s moral calculations — turn. How do we gauge “good enough”? Who gets to make the call, the people for whom modern medicine is more or less sufficient, or the people whose ailments still have no good treatment? What kind of process ought we as a society to use for this assessment?

These live questions end up being beside the point within the universe of Uncaged, though. The president issuing this statement has become, to all appearances, a one-man death panel.

McKellips develops a compelling and diverse selection of minor characters here: capitalists, terrorists, animal researchers, animal rights activists, military personnel, political appointees. Some of these (especially the animal rights activists) are clearly based on particular real people who are instantly recognizable to those who have been paying attention to the targeting of researchers in recent years. (If you’ve followed the extremists and their efforts less closely, entering bits of text from the communiques of the fictional animal rights organizations into a search engine is likely to help you get a look at their real-life counterparts.)

But, while McKellips’s portrayal of the animal rights activists is accurate in capturing their rhetoric, these key players, central to creating the crisis to which the protagonists must respond, remain ciphers. The reader gets little sense of the events or thought processes that brought them to these positions, or of the sorts of internal conflicts that might occur within animal rights organizations — or within the hearts and minds of individual activists.

Maybe this is unavoidable — the animal rights activists one encounters on the internet often do seem like ciphers who work very hard to deny the complexities acknowledged by the researchers in Uncaged. But, perhaps naïvely, I have a hard time believing they are not more complex in real life than this.

As well, I would have liked for Uncaged to give us more of a glimpse into the internal workings of the executive branch — how the president and his cabinet made the decision to issue the executive order for a moratorium on animal research, what kinds of arguments various advisors might have offered for or against this order, what assemblage of political considerations, ideals, gut feelings, and unforeseen consequences born of incomplete information or sheer ignorance might have been at work. But maybe presidents, cabinet members, agency heads, and other political animals are ciphers, too — at least to research scientists who have to navigate the research environment these political animals establish and then rearrange.

Maybe this is an instance of the author grappling with the same challenge researchers face: you can’t build a realistic model without accurate and detailed information about the system you’re modeling. Maybe making such a large cast of characters more nuanced, and drawing us deeply into their inner lives, would have undercut the taut pacing of what is, after all, intended as an action thriller.

But to me, this feels like a missed opportunity. Ultimately, I worry that the various players in Uncaged — and, worse, their real-life counterparts: the researchers and other advocates of humane animal research, the animal rights activists, the political animals, and the various segments of the broader public — continue to see each other as ciphers rather than trying to get inside each other’s heads and figure out where their adversaries are coming from, the better to reflect upon and address the real concerns that are driving people. Modeling your opponents as automata has a certain efficiency, but to me it leaves the resolution feeling somewhat hollow — and it’s certainly not a strategy for engagement that I see leading to healthy civil society in real life.

I suspect, though, that my disappointments are a side-effect of the fact that I am not a newcomer to these disputes. For readers not already immersed in the battles over research with animals, Uncaged renders researchers as complex human beings to whom one can relate. This is a good read for someone who wants a thriller that also conveys a compelling picture of what motivates various lines of biomedical research — and why such research might matter to us all.

Book review: Coming of Age on Zoloft.

One of the interesting and inescapable features of our knowledge-building efforts is just how hard it can be to nail down objective facts. It is especially challenging to tell an objective story when the object of study is us. It’s true that we have privileged information of a particular sort (our own experience of what it is like to be us), but we simultaneously have the impediment of never being able fully to shed that experience. As well, our immediate experience is necessarily particular — none of us knows what it is like to be human in general, just what it is like to be the particular human each of us happens to be. Indeed, if you take Heraclitus seriously (he of the impossibility of stepping in the same river twice), you might not even know what it is like to be you so much as what it is like to be you so far.

All of this complicates the stories we might try to tell about how our minds are connected to our brains, what it means for those brains to be well, and what it is for us to be ourselves or not-ourselves, especially during stretches in our lives when the task that demands our attention might be figuring out who the hell we are in the first place.

Katherine Sharpe’s new book Coming of Age on Zoloft: how antidepressants cheered us up, let us down, and changed who we are, leads us into this territory while avoiding the excesses of either ponderous philosophical treatise or catchy but overly reductive cartoon neuroscience. Rather, Sharpe draws on dozens of interviews with people prescribed selective serotonin reuptake inhibitors (SSRIs) for significant stretches from adolescence through early adulthood, and on her own experiences with antidepressants, to see how depression and antidepressants feature in the stories people tell about themselves. A major thread throughout the book is the question of how our pharmaceutical approach to mental health impacts the lives of diagnosed individuals (for better or worse), but also how it impacts our broader societal attitudes toward depression and toward the project of growing up. Sharpe writes:

When I first began to use Zoloft, my inability to pick apart my “real” thoughts and emotions from those imparted by the drug made me feel bereft. The trouble seemed to have everything to do with being young. I was conscious of needing to figure out my own interests and point myself in a direction in the world, and the fact of being on medication seemed frighteningly to compound the possibilities for error. How could I ever find my way in life if I didn’t even know which feelings were mine? (xvii)

Interleaved between personal accounts, Sharpe describes some of the larger forces whose confluence helps explain the growing ubiquity of SSRIs. One of these is the concerted effort, during the revisions that updated the DSM-II to the DSM-III, to abandon Freud-inflected frameworks for mental disorders, which located the causal origins of depression in relationships, and to replace them with checklists of symptoms to be assessed in isolation from whatever else might be happening in the patient’s life. Those checklists might or might not be connected to hunches about the causes of depression, hunches based on what scientists think they know about how drugs that seem to treat the listed symptoms act on various neurotransmitters. Suddenly being depressed was an official diagnosis based on having particular symptoms that put you in that category — and in the bargain it was no longer approached as a possibly appropriate response to external circumstances. Sharpe also discusses the rise of direct-to-consumer advertising for drugs, which told us how to understand our feelings as symptoms and encouraged us to “talk to your doctor” about getting help from them, as well as the influence of managed care — and of funding priorities within the arena of psychiatric research — in making treatment with a pill preferable to time-consuming and “unpatentable talk-treatments.” (184)

Sharpe discusses interviewees’, and her own, experiences with talk therapy, and their experiences of trying to get off SSRIs (with varying degrees of medical supervision or premeditation) to find out whether one’s depression is an unrelenting chronic illness the having of which is a permanent fact about oneself, like having Type I diabetes, or whether it might be a transient state, something with which one needs help for a while before going back to normal. Or, if not normal, at least functional enough.

The exploration in Coming of Age on Zoloft is beautifully attentive to the ways that “functional enough” depends on a person’s interactions with the environment — with family and friends, with demands of school or work or unstructured days and weeks stretching before you — and on a person’s internal dialogue — about who you are, how you feel, what you feel driven to do, what feels too overwhelming to face. Sharpe offers an especially compelling glimpse at how the forces from the world and the voices from one’s head sometimes collide, producing what professionals on college campuses describe as a significant deterioration of the baseline of mental health for their incoming students:

One college president lamented that the “moments of woolgathering, dreaming, improvisation” that were seen as part and parcel of a liberal arts education a generation ago had become a hard sell for today’s brand of highly driven students. Experts agreed that undergraduates were in a bigger hurry than ever before, expected by teachers, parents, and themselves to produce more work, of higher quality, in the same finite amount of time. (253)

Such high expectations — and the broader message that productivity is a duty — set the bar high enough that failure may become an alarmingly likely outcome. (Indeed, Sharpe quotes a Manhattan psychiatrist who raises the possibility that some college students and recent graduates “are turning to pharmaceuticals to make something possible that’s not healthy or normal.” (269)) These elevated expectations seem also to be of a piece with the broader societal mindset that makes it easier to get health coverage for a medication-check appointment than for talk-therapy. Just do the cheapest, fastest thing that lets you function well enough to get back to work. Since knowing what you want or who you are is not of primary value, exploring, reflecting, or simply being is a waste of time.

Here, of course, what kind of psychological state is functional or dysfunctional surely has something to do with what our society values, with what it demands of us. To the extent that our society is made up of individual people, those values, those demands, may be inextricably linked with whether people generally have the time, the space, the encouragement, the freedom to find or choose their own values, to be the authors (to at least some degree) of their own lives.

Finding meaning — creating meaning — is, at least experientially, connected to so much more than the release or reuptake of chemicals in our brains. Yet, as Sharpe describes, our efforts to create meaning get tangled in questions about the influence of those chemicals, especially when SSRIs are part of the story.

I no longer simply grapple with who I can become and what kind of effort it will require. Now I also grapple with the question of whether I am losing something important — cheating somehow — if I use a psychopharmaceutical to reduce the amount of effort required, or to increase my stamina to keep trying … or to lower my standards enough that being where I am (rather than trying to be better along some dimension or another) is OK with me.

And getting satisfying answers to these questions, or even strategies for approaching them, is made harder when it seems like our society is not terribly tolerant of the woolgatherers, the grumpy, the introverted, the sad. Our right to pursue happiness (where failure is an option) has been transformed into a duty to be happy. Meanwhile, the stigma of mental illness and of needing medication to treat it dances hand in hand with the stigma attached to not conforming perfectly to societal expectations and definitions of “normal”.

In the end, what can it mean to feel “normal” when I can never get first-hand knowledge of how it feels to be anyone else? Is the “normal” I’m reaching for some state from my past, or some future state I haven’t yet experienced? Will I know it when I get there? And can I reliably evaluate my own moods, personality, or plans with the organ whose functioning is in question?

With engaging interviews and sometimes achingly beautiful self-reflection, Coming of Age on Zoloft leads us through the terrain of these questions, illuminates the ways our pharmaceutical approach to depression makes them more fraught, and ultimately suggests the possibility that grappling with them may always have been important for our human flourishing, even without SSRIs in our systems.