The ideal of objectivity.

In trying to figure out what ethics ought to guide scientists in their activities, we’re really asking a question about what values scientists are committed to. Arguably, something that a scientist values may not be valued as much (if at all) by the average person in that scientist’s society.

Objectivity is a value – perhaps one of the values that scientists and non-scientists most strongly associate with science. So, it’s worth thinking about how scientists understand that value, some of the challenges in meeting the ideal it sets, and some of the historical journey that was involved in objectivity becoming a central scientific value in the first place. I’ll be splitting this discussion into three posts. This post sets the stage and considers how modern scientific practitioners describe objectivity. The next post will look at objectivity (and its challenges) in the context of work being done by Renaissance anatomists. The third post will examine how the notion of objectivity was connected to the efforts of Seventeenth Century “natural philosophers” to establish a method for building reliable knowledge about the world.

First, what do we mean by objectivity?

In everyday discussions of ethics, being objective usually means applying the rules fairly and treating everyone the same rather than showing favoritism to one party or another. Is this what scientists have in mind when they voice their commitment to objectivity? Perhaps in part. It could be connected to applying “the rules” of science (i.e., the scientific method) fairly and not letting bias creep into the production of scientific knowledge.

This seems close to the characterization of good scientific practice that we see in the National Academy of Sciences and National Research Council document, “The Nature of Science.” [1] This document describes science as an activity in which hypotheses undergo rigorous tests, whereby researchers compare the predictions of the hypotheses to verifiable facts determined by observation and experiment, and findings and corrections are announced in refereed scientific publications. It states, “Although [science’s] goal is to approach true explanations as closely as possible, its investigators claim no final or permanent explanatory truths.” (38)

Note that rigorous tests, verification of the facts (or sharing the information necessary to verify them), correction of mistakes, and reliable reports of findings all depend on honesty – you can’t perform these activities by making up your results, or presenting them in a deceptive way, for example. So being objective in the sense of following good scientific methodology requires a commitment not to mislead.

But here, in “The Nature of Science,” we see hints that there are two closely related, yet distinct, meanings of “objective”. One is what anyone applying the appropriate methodology could see. The other is a picture of what the world is really like. Getting a true picture of the world (or aiming for such a picture) means seeking objectivity in the second sense – finding the true facts. Seeking out the observational data that other scientists could verify – the first sense of objectivity – is closely tied to the experimental method scientists use and their strategies for reporting their results. Presumably, applying objective methodology would be a good strategy for generating an accurate (and thus objective) picture of the world.

But we should note a tension here that’s at least as old as the tension between Plato and his student Aristotle. What exactly are the facts about the world that anyone could see? Are sense organs like eyes all we need to see them? If such facts really exist, are they enough to help us build a true picture of the world?

In the chapter “Making Observations” from his book The Scientific Attitude [2], Fred Grinnell discusses some of the challenges of seeing what there is to see. He argues that, especially in the realms science tries to probe, seeing what’s out there is not automatic. Rather, we have to learn to see the facts that are there for anyone to observe.

Grinnell describes the difficulty students have seeing cells under a light microscope, a difficulty that persists even after students work out how to use the microscope to adjust the focus. He writes:

The students’ inability to see the cells was not a technical problem. There can be technical problems, of course – as when one takes an unstained tissue section and places it under a microscope. Under these conditions it is possible to tell that something is “there,” but not precisely what. As discussed in any histology textbook, the reason is that there are few visual features of unstained tissue sections that our eyes can discriminate. As the students were studying stained specimens, however, sufficient details of the field were observable that could have permitted them to distinguish among different cells and between cells and the noncellular elements of the tissue. Thus, for these students, the cells were visible but unseen. (10-11)

Grinnell’s example suggests that seeing cells, for example, requires more than putting your eye to the eyepiece of a microscope focused on a stained sample of cells. Rather, you need to be able to recognize those bits of your visual field as belonging to a particular kind of object – and, you may even need to have something like the concept of a cell to be able to identify what you are seeing as cells. At the very least, this suggests that we should amend our gloss of objective as “what anyone could see” to something more like “what anyone could see given a particular conceptual background and some training with the necessary scientific measuring devices.”

But Grinnell makes even this seem too optimistic. He notes that “seeing things one way means not seeing them another way,” which implies that there are multiple ways to interpret any given piece of the world toward which we point our sense organs. Moreover, he argues,

Each person’s previous experiences will have led to the development of particular concepts of things, which will influence what objects can be seen and what they will appear to be. As a consequence, it is not unusual for two investigators to disagree about their observations if the investigators are looking at the data according to different conceptual frameworks. Resolution of such conflicts requires that the investigators clarify for each other the concepts that they have in mind. (15)

In other words, scientists may need to share a bundle of background assumptions about the world to look at a particular piece of that world and agree on what they see. Much more is involved in seeing “what anyone can see” than meets the eye.

We’ll say more about this challenge in the next post, when we look at how Renaissance anatomists tried to build (and communicate) objective knowledge about the human body.
_____________

[1] “The Nature of Science,” in Panel on Scientific Responsibility and the Conduct of Research, National Academy of Sciences, National Academy of Engineering, Institute of Medicine. Responsible Science, Volume I: Ensuring the Integrity of the Research Process. Washington, DC: The National Academies Press, 1992.

[2] Frederick Grinnell, The Scientific Attitude. Guilford Press, 1992.

More on rudeness, civility, and the care and feeding of online conversations.

Late last month, I pondered the implications of a piece of research that was mentioned but not described in detail in a perspective piece in the January 4, 2013 issue of Science. [1] In its broad details, the research suggests that the comments that follow an online article about science — and particularly the perceived tone of the comments, whether civil or uncivil — can influence readers’ assessment of the science described in the article itself.

Today, an article by Paul Basken at The Chronicle of Higher Education shares some more details of the study:

The study, outlined on Thursday at the annual meeting of the American Association for the Advancement of Science, involved a survey of 2,338 Americans asked to read an article that discussed the risks of nanotechnology, which involves engineering materials at the atomic scale.

Of participants who had already expressed wariness toward the technology, those who read the sample article—with politely written comments at the bottom—came out almost evenly split. Nearly 43 percent said they saw low risks in the technology, and 46 percent said they considered the risks high.

But with the same article and comments that expressed the same reactions in a rude manner, the split among readers widened, with 32 percent seeing a low risk and 52 percent a high risk.

“The only thing that made a difference was the tone of the comments that followed the story,” said a co-author of the study, Dominique Brossard, a professor of life-science communication at the University of Wisconsin at Madison. The study found “a polarization effect of those rude comments,” Ms. Brossard said.

The study, conducted by researchers at Wisconsin and George Mason University, will be published in a coming issue of the Journal of Computer-Mediated Communication. It was presented at the AAAS conference during a daylong examination of how scientists communicate their work, especially online.

If you click through to read the article, you’ll notice that I was asked for comment on the findings. As you may guess, I had more to say on the paper (which is still under embargo) and its implications than ended up in the article, so I’m sharing my extended thoughts here.

First, I think these results are useful in reassuring bloggers who have been moderating comments that what they are doing is not just permissible (moderating comments is not “censorship,” since bloggers don’t have the power of the state, and folks can find all sorts of places on the Internet to state their views if any given blog denies them a soapbox) but also reasonable. Blogging with comments enabled assumes more than transmission of information; it assumes a conversation, and what kind of conversation it ends up being depends on what kind of behavior is encouraged or forbidden, and who feels welcome or alienated.

But, there are some interesting issues that the study doesn’t seem to address, issues that I think can matter quite a lot to bloggers.

In the study, readers (lurkers) were reacting to factual information in an online posting plus the discourse about that article in the comments. As the study is constructed, it looks like that discourse is being shaped by commenters, but not by the author of the article. It seems likely to me (and worth further empirical study!) that comment sections in which the author is engaging with commenters — not just responding to the questions they ask and the views they express, but also responding to the ways that they are interacting with other commenters and to their “tone” — have a different impact on readers than comment sections where the author of the piece that is being discussed is totally absent from the scene. To put it more succinctly, comment sections where the author is present and engaged, or absent and disengaged, communicate information to lurkers, too.

Here’s another issue I don’t think the study really addresses: While blogs usually aim to communicate with lurkers as well as readers who post comments (and every piece of evidence I’ve been shown suggests that commenters tend to be a small proportion of readers), most are aiming to reach a core audience that is narrower than “everyone in the world with an internet connection”.

Sometimes what this means is that bloggers are speaking to an audience that finds comment sections that look unruly and contentious to be welcoming, rather than alienating. This isn’t just the case for bloggers seeking an audience that likes to debate or to play rough.

Some blogs have communities that are intentionally uncivil towards casual expressions of sexism, racism, homophobia, etc. Pharyngula is a blog that has taken this approach, and just yesterday Chris Clarke posted a statement on “civility” there that leads with a commitment “not to fetishize civility over justice.” Setting the rules of engagement between bloggers and posters this way means that people in groups especially affected by sexism, racism, homophobia, etc., have a haven in the blogosphere where they don’t have to waste time politely defending the notion that they are fully human, too (or swallowing their anger and frustration at having their humanity treated as a topic of debate). Yes, some people find the environment there alienating — but the people who are alienated by unquestioned biases in most other quarters of the internet (and the physical world, for that matter) are the ones being consciously welcomed into the conversation at Pharyngula, and those who don’t like the environment can find another conversation. It’s a big blogosphere. That not every potential reader feels perfectly comfortable at a blog, in other words, is not proof that the blogger is doing it wrong.

So, where do we find ourselves?

We’re in a situation where lots of people are using online venues like blogs to communicate information and viewpoints in the context of a conversation (where readers can actively engage as commenters). We have a piece of research indicating that the tenor of the commenting (as perceived by lurkers, readers who are not commenting) can communicate as much to readers as the content of the post that is the subject of the comments. And we have lots of questions still unanswered about what kinds of engagement will have what kinds of effect on what kinds of readers (and how reliably). What does this mean for those of us who blog?

I think what it means is that we have to be really reflective about what we’re trying to communicate, who we’re trying to communicate it to, and how our level of visible engagement (or disengagement) in the conversation might make a difference. We have to acknowledge that we have information that’s gappy at best about what’s coming across to the lurkers, and be attentive to ways to get more feedback about how successfully we’re communicating what we’re trying to communicate. We have to recognize that, given all we don’t know, we may want to shift our strategies for blogging and engaging commenters, especially if we come upon evidence that they’re not working the way we thought they were.

* * * * *
In the interests of spelling out the parameters of the conversation I’d like to have here, let me note that whether or not you like the way Pharyngula sets a tone for conversations is off topic here. You are, however, welcome to share in the comments here what you find makes you feel more or less welcome to engage with online postings, whether as a commenter or a lurker.
_____

[1] Dominique Brossard and Dietram A. Scheufele, “Science, New Media, and the Public.” Science 4 January 2013:Vol. 339, pp. 40-41.
DOI: 10.1126/science.1160364

Some musings on Jonah Lehrer’s $20,000 “meh culpa”.

Remember some months ago when we were talking about how Jonah Lehrer was making stuff up in his “non-fiction” pop science books? This was a big enough deal that his publisher, Houghton Mifflin Harcourt, recalled print copies of Lehrer’s book Imagine, and that the media outlets for which Lehrer wrote went back through his writing for them looking for “irregularities” (like plagiarism — which one hopes is not regular, but once your trust has been abused, hopes are no longer all that durable).

Lehrer’s behavior was clearly out of bounds for anyone hoping for a shred of credibility as a journalist or non-fiction author. However, at the time, I opined in a comment:

At 31, I think Jonah Lehrer has time to redeem himself and earn back trust and stuff like that.

Well, the events of this week stand as evidence that having time to redeem oneself is not a guarantee that one will not instead dig the hole deeper.

You see, Jonah Lehrer was invited to give a talk this week at a “media learning seminar” in Miami, a talk which marked his first real public comments to a large group of journalistic peers since his fabrications and plagiarism were exposed — and a talk for which the sponsor of the conference, the Knight Foundation, paid Lehrer an honorarium of $20,000.

At the New York Times “Arts Beat” blog, Jennifer Schuessler describes Lehrer’s talk:

Mr. Lehrer … dived right in with a full-throated mea culpa. “I am the author of a book on creativity that contains several fabricated Bob Dylan quotes,” he told the crowd, which apparently could not be counted on to have followed the intense schadenfreude-laced commentary that accompanied his downfall. “I committed plagiarism on my blog, taking without credit or citation an entire paragraph from the blog of Christian Jarrett. I plagiarized from myself. I lied to a journalist named Michael Moynihan to cover up the Dylan fabrications.”

“My mistakes have caused deep pain to those I care about,” he continued. “I’m constantly remembering all the people I’ve hurt and let down.”

If the introduction had the ring of an Alcoholics Anonymous declaration, before too long Mr. Lehrer was surrendering to the higher power of scientific research, cutting back and forth between his own story and the kind of scientific terms — “confirmation bias,” “anchoring” — he helped popularize. Within minutes he had pivoted from his own “arrogance” and other character flaws to the article on flawed forensic science within the F.B.I. that he was working on when his career began unraveling, at one point likening his own corner-cutting to the overconfidence of F.B.I. scientists who fingered the wrong suspect in the 2004 Madrid bombings.

“If we try to hide our mistakes, as I did, any error can become a catastrophe,” he said, adding: “The only way to prevent big failures is a willingness to consider every little one.”

Not everyone shares the view that Lehrer’s apology constituted a full-throated mea culpa, though. At Slate, Daniel Engber shared this assessment:

Lehrer has been humbled, and yet nearly every bullet in his speech managed to fire in both directions. It was a wild display of self-negation, of humble arrogance and arrogant humility. What are these “standard operating procedures” according to which Lehrer will now do his work? He says he’ll be more scrupulous in his methods—even recording and transcribing interviews(!)—but in the same breath promises that other people will be more scrupulous of him. “I need my critics to tell me what I’ve gotten wrong,” he said, as if to blame his adoring crowds at TED for past offenses. Then he promised that all his future pieces would be fact-checked, which is certainly true but hardly indicative of his “getting better” (as he puts it, in the clammy, familiar rhetoric of self-help).

What remorse Lehrer had to share was couched in elaborate and perplexing disavowals. He tried to explain his behavior as, first of all, a hazard of working in an expert field. Like forensic scientists who misjudge fingerprints and DNA analyses, and whose failings Lehrer elaborated on in his speech, he was blind to his own shortcomings. These two categories of mistake hardly seem analogous—lab errors are sloppiness, making up quotes is willful distortion—yet somehow the story made Lehrer out to be a hapless civil servant, a well-intentioned victim of his wonky and imperfect brain.

(Bold emphasis added.)

At Forbes, Jeff Bercovici noted:

Ever the original thinker, even when he’s plagiarizing from press releases, Lehrer apologized abjectly for his actions but pointedly avoided promising to become a better person. “These flaws are a basic part of me,” he said. “They’re as fundamental to me as the other parts of me I’m not ashamed of.”

Still, Lehrer said he is aiming to return to the world of journalism, and has been spending several hours a day writing. “It’s my hope that someday my transgressions might be forgiven,” he said.

How, then, does he propose to bridge the rather large credibility gap he faces? By the methods of the technocrat, not the ethicist: “What I clearly need is a new set of rules, a stricter set of standard operating procedures,” he said. “If I’m lucky enough to write again, then whatever I write will be fully fact-checked and footnoted. Every conversation will be fully taped and transcribed.”

(Bold emphasis added.)

How do I see Jonah Lehrer’s statement? The title of this post should give you a clue. Like most bloggers, I took five years of Latin.* “Mea culpa” would describe a statement wherein the speaker (in this case, Jonah Lehrer) actually acknowledged that the blame was his for the bad thing of which he was a part. From what I can gather, Lehrer hasn’t quite done that.

Let the record reflect that the “new set of rules” and “stricter set of standard operating procedures” Lehrer described in his talk are not new, nor were they non-standard when Lehrer was falsifying and plagiarizing to build his stories. It’s not that Jonah Lehrer’s unfortunate trajectory shed light on the need for these standards, and now the journalistic community (and we consumers of journalism) can benefit from their creation. Serious journalists were already using these standards.

Jonah Lehrer, however, decided he didn’t need to use them.

This does have a taste of Leona Helmsleyesque “rules are for the little people” to it. And, I think it’s important to note that Lehrer gave the outward appearance of following the rules. He did not stand up and say, “I think these rules are unnecessary to good journalistic practice, and here’s why…” Rather, he quietly excused himself from following them.

But now, Lehrer tells us, he recognizes the importance of the rules.

That’s well and good. However, the rules he’s pointing to — taping and transcribing interviews, fact-checking claims and footnoting sources — seem designed to prevent unwitting mistakes. They could head off misremembering what interviewees said, miscommunicating whose words or insights animate part of a story, getting the facts wrong accidentally. It’s less clear that these rules can head off willful lies and efforts to mislead — which is to say, the kind of misdeeds that got Lehrer into trouble.

Moreover, that he now accepts these rules after being caught lying does not indicate that Jonah Lehrer is now especially sage about journalism. It’s remedial work.

Let’s move on from his endorsement (finally) of standards of journalistic practice to the constellation of cognitive biases and weaknesses of will that Jonah Lehrer seems to be trying to saddle with the responsibility for his lies.

Recognizing cognitive biases is a good thing. It is useful to the extent that it helps us to avoid getting fooled by them. You’ll recall that knowledge-builders, whether scientists or journalists, are supposed to do their best to avoid being fooled.

But, what Lehrer did is hard to cast in terms of ignoring strong cognitive biases. He made stuff up. He fabricated quotes. He presented other authors’ writing as his own. When confronted about his falsifications, he lied. Did his cognitive biases do all this?

What Jonah Lehrer seems to be sidestepping in his “meh culpa” is the fact that, when he had to make choices about whether to work with the actual facts or instead to make stuff up, about whether to write his own pieces (or at least to properly cite the material from others that he used) or to plagiarize, about whether to be honest about what he’d done when confronted or to lie some more, he decided to be dishonest.

If we’re to believe this was a choice his cognitive biases made for him, then his seem much more powerful (and dangerous) than the garden-variety cognitive biases most grown-up humans have.

It seems to me more plausible that Lehrer’s problem was a weakness of will. It’s not that he didn’t know what he was doing was wrong — he wasn’t fooled by his brain into believing it was OK, or else he wouldn’t have tried to conceal it. Instead, despite recognizing the wrongness of his deeds, he couldn’t muster the effort not to do them.

If Jonah Lehrer cannot recognize this — that it frequently requires conscious effort to do the right thing — it’s hard to believe he’ll be committed to putting that effort into doing the right (journalistic) thing going forward. Verily, given the trust he’s burned with his journalistic colleagues, he can expect that proving himself to be reformed will require extra effort.

But maybe what Lehrer is claiming is something different. Maybe he’s denying that he understood the right thing to do and then opted not to do it because it seemed like too much work. Maybe he’s claiming instead that he just couldn’t resist the temptation (whether of rule-breaking for its own sake or of rule-breaking as the most efficient route to secure the prestige he craved). In other words, maybe he’s saying he was literally powerless, that he could not help committing those misdeeds.

If that’s Lehrer’s claim — and if, in addition, he’s claiming that the piece of his cognitive apparatus that was so vulnerable to temptation that it seized control to make him do wrong is as integral to who Jonah Lehrer is as his cognitive biases are — the whole rehabilitation thing may be a non-starter. If this is how Lehrer understands why he did wrong, he seems to be identifying himself as a wrongdoer with a high probability of reoffending.

If he can parlay that into more five-figure speaker fees, maybe that will be a decent living for Jonah Lehrer, but it will be a big problem for the community of journalists and for the public that trusts journalists as generally reliable sources of information.

Weakness is part of Lehrer, as it is for all of us, but it is not a part he is acknowledging he could control or counteract by concerted effort, or by asking for help from others.

It’s part of him, but not in a way that makes him inclined to actually take responsibility or to acknowledge that he could have done otherwise under the circumstances.

If he couldn’t have done otherwise — and if he might not be able to when faced with similar temptation in the future — then Jonah Lehrer has no business in journalism. Until he can recognize his own agency, and the responsibility that attaches to it, the most he has to offer is one more cautionary tale.
_____
*Fact check: I have absolutely no idea how many other bloggers took five years of Latin. My evidence-free guess is that it’s not just me.

Intuitions, scientific methodology, and the challenge of not getting fooled.

At Context and Variation, Kate Clancy has posted some advice for researchers in evolutionary psychology who want to build reliable knowledge about the phenomena they’re trying to study. This advice, of course, is prompted in part by methodology that is not so good for scientific knowledge-building. Kate writes:

The biggest problem, to my mind, is that so often the conclusions of the bad sort of evolutionary psychology match the stereotypes and cultural expectations we already hold about the world: more feminine women are more beautiful, more masculine men more handsome; appearance is important to men while wealth is important to women; women are prone to flighty changes in political and partner preference depending on the phase of their menstrual cycles. Rather than clue people in to problems with research design or interpretation, this alignment with stereotype further confirms the study. Variation gets erased: in bad evolutionary psychology, there are only straight people, and everyone wants the same things in life. …

No one should ever love their idea so much that it becomes detached from reality.

It’s a lovely post about the challenges of good scientific methodology when studying human behavior (and why it matters to more than just scientists), so you should read the whole thing.

Kate’s post also puts me in mind of some broader issues about which scientists should remind themselves from time to time to keep themselves honest. I’m putting some of those on the table here.

Let’s start with a quotable quote from Richard Feynman:

The first principle is that you must not fool yourself, and you are the easiest person to fool.

Scientists are trying to build reliable knowledge about the world from information that they know is necessarily incomplete. There are many ways to interpret the collections of empirical data we have on hand — indeed, many contradictory ways to interpret them. This means that lots of the possible interpretations will be wrong.

You don’t want to draw the wrong conclusion from the available data, not if you can possibly avoid it. Feynman’s “first principle” is noting that we need to be on guard against letting ourselves be fooled by wrong conclusions — and on guard against the peculiar ways that we are more vulnerable to being fooled.

This means we have to talk about our attachment to intuitions. All scientists have intuitions. They surely help in motivating questions to ask about the world and strategies for finding good answers to them. But intuitions, no matter how strong, are not the same as empirical evidence.

Making things more challenging, our strong intuitions can shape what we take to be the empirical evidence. They can play a role in which results we set aside because they “couldn’t be right,” in which features of a system we pay attention to and which we ignore, in which questions we bother to ask in the first place. If we don’t notice the operation of our intuitions, and the way they impact our view of the empirical evidence, we’re making it easier to get fooled. Indeed, if our intuitions are very strong, we’re essentially fooling ourselves.

As if this weren’t enough, we humans (and, by extension, human scientists) are not always great at recognizing when we are in the grips of our intuitions. It can feel like we’re examining a phenomenon to answer a question and that we’re refraining from making any assumptions to guide our enquiry, but chances are it’s not a feeling we should trust.

This is not to say that our intuitions are guaranteed a safe haven from our noticing them. We can become aware of them and try to neutralize the extent to which they, rather than the empirical evidence, are driving the scientific story — but to do this, we tend to need help from people who have conflicting intuitions about the same bit of the world. This is a good methodological reason to take account of the assumptions and intuitions of others, especially when they conflict with our own.

What happens if there are intuitions about which we all agree — assumptions we are making (and may well be unaware that we’re making, because they seem so bleeding obvious) with which no one disagrees? I don’t know that there are any such universal human intuitions. It seems unlikely to me, but I can’t rule out the possibility. What would they mean for our efforts at scientific knowledge-building?

First, we would probably want to recognize that the universality of an intuition still wouldn’t make it into independent empirical evidence. Even if it had been the case, prior to Galileo, or Copernicus, or Aristarchus of Samos, that every human took it as utterly obvious that Earth is stationary, we recognize that this intuition could still be wrong. As it happened, it was an intuition that was questioned, though not without serious resistance.

Developing a capacity to question the obvious, and also to recognize and articulate what it is we’re taking to be obvious in order that we might question it, seems like a crucial skill for scientists to cultivate.

But, as I think comes out quite clearly in Kate’s post, there are some intuitions we have that, even once we’ve recognized them, may be extremely difficult to subject to empirical test. This doesn’t mean that the questions connected in our heads to these intuitions are outside the realm of scientific inquiry, but it would be foolish not to notice that it’s likely to be extremely difficult to find good scientific answers to these questions. We need to be wary of the way our intuitions try to stack the evidential deck. We need to acknowledge that the very fact of our having strong intuitions doesn’t count as empirical evidence in favor of them. We need to come to grips with the possibility that our intuitions could be wrong — perhaps to the extent that we recognize that empirical results that seem to support our intuitions require extra scrutiny, just to be sure.

To do any less is to ask to be fooled, and that’s the outcome scientific knowledge-building is trying to avoid.

Academic tone-trolling: How does interactivity impact online science communication?

Later this week at ScienceOnline 2013, Emily Willingham and I are co-moderating a session called Dialogue or fight? (Un)moderated science communication online. Here’s the description:

Cultivating a space where commentators can vigorously disagree with a writer–whether on a blog, Twitter, G+, or Facebook, *and* remain committed to being in a real dialogue is pretty challenging. It’s fantastic when these exchanges work and become constructive in that space. On the other hand, there are times when it goes off the rails despite your efforts. What drives the difference? How can you identify someone who is commenting simply to cause trouble versus a commenter there to engage in and add value to a genuine debate? What influence does this capacity for *anyone* to engage with one another via the great leveler that is social media have on social media itself and the tenor and direction of scientific communication?

Getting ready for this session was near the top of my mind when I read a perspective piece by Dominique Brossard and Dietram A. Scheufele in the January 4, 2013 issue of Science. [1] In the article, Brossard and Scheufele raise concerns about the effects of moving the communication of science information to the public from dead-tree newspapers and magazines into online, interactive spaces.

Here’s the paragraph that struck me as especially relevant to the issues Emily and I had been discussing for our session at ScienceOnline 2013:

A recent conference presentation examined the effects of these unintended influences of Web 2.0 environments empirically by manipulating only the tone of the comments (civil or uncivil) that followed an online science news story in a national survey experiment. All participants were exposed to the same, balanced news item (covering nanotechnology as an emerging technology) and to a set of comments following the story that were consistent in terms of content but differed in tone. Disturbingly, readers’ interpretations of potential risks associated with the technology described in the news article differed significantly depending only on the tone of the manipulated reader comments posted with the story. Exposure to uncivil comments (which included name calling and other non-content-specific expressions of incivility) polarized the views among proponents and opponents of the technology with respect to its potential risks. In other words, just the tone of the comments following balanced science stories in Web 2.0 environments can significantly alter how audiences think about the technology itself. (41)

There’s lots to talk about here.

Does this research finding mean that, when you’re trying to communicate scientific information online, enabling comments is a bad idea?

Lots of us are betting that it’s not. Rather, we’re optimistic that people will be more engaged with the information when they have a chance to engage in a conversation about it (e.g., by asking questions and getting answers).

However, the research finding described in the Science piece suggests that there may be better and worse ways of managing commenting on your posts if your goal is to help your readers understand a particular piece of science.

This might involve having a comment policy that puts some things clearly out-of-bounds, like name-calling or other kinds of incivility, and then consistently enforcing this policy.

It should be noted — and has been — that some kinds of incivility wear the trappings of polite language, which means that it’s not enough to set up automatic screens that weed out comments containing particular specified naughty words. Effective promotion of civility rather than incivility might well involve having the author of the online piece and/or designated moderators as active participants in the ongoing conversation, calling out bad commenter behavior as well as misinformation, answering questions to make sure the audience really understands the information being presented, and being attentive to how the unfolding discussion is likely to be welcoming — or forbidding — to the audience one is hoping to reach.
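
To make that limitation concrete, here is a minimal sketch (mine, not anything from the study or from any real moderation tool; the blocked-word list and sample comments are invented for illustration) of the kind of automatic screen I have in mind. Notice that it flags the name-calling but happily waves through the second comment, which is contemptuous without using a single "naughty word":

```python
# A deliberately naive keyword screen -- the blocked-word list and the
# sample comments below are invented for illustration only.
BLOCKED_WORDS = {"idiot", "moron", "stupid"}

def passes_screen(comment: str) -> bool:
    """Return True if the comment contains none of the blocked words."""
    words = {word.strip(".,!?\"'").lower() for word in comment.split()}
    return BLOCKED_WORDS.isdisjoint(words)

comments = [
    "Only an idiot would believe this study.",
    "With all due respect, perhaps you should leave the thinking to the grown-ups.",
]

for comment in comments:
    verdict = "passes" if passes_screen(comment) else "blocked"
    print(f"{verdict}: {comment}")
```

That second comment is exactly the kind of politely worded incivility a human moderator (or an engaged author) can call out, but a word filter cannot.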

There are a bunch of details that are not clear from this brief paragraph in the perspective piece. Were the readers whose opinions were swayed by the tone of the comments reacting to a conversation that had already happened or were they watching as it happened? (My guess is the former, since the latter would be hard to orchestrate and coordinate with a survey.) Were they looking at a series of comments that dropped them in the middle of a conversation that might plausibly continue, or were they looking at a conversation that had reached its conclusion? Did the manipulated reader comments include any comments that appeared to be from the author of the science article, or were the research subjects responding to a conversation from which the author appeared to be absent? Potentially, these details could make a difference to the results — a conversation could impact someone reading it differently depending on whether it seems to be gearing up or winding down, just as participation from the author could carry a different kind of weight than the views of random people on the internet. I’m hopeful that future research in this area will explore just what kind of difference they might make.

I’m also guessing that the experimental subjects reading the science article and the manipulated comments that followed could not themselves participate in the discussion by posting a comment. I wonder how much being stuck on the sidelines rather than involved in the dialogue affected their views. We should remember, though, that most indicators suggest that readers of online articles — even on blogs — who actually post comments are much smaller in number than the readers who “lurk” without commenting. This means that commenters are generally a very small percentage of the readers one is trying to reach, and perhaps not very representative of those readers overall.

At this point, the take-home seems to be that social scientists haven’t discovered all the factors that matter in how an audience for online science is going to receive and respond to what’s being offered — which means that those of us delivering science-y content online should assume we haven’t discovered all those factors, either. It might be useful, though, if we are reflective about our interactions with our audiences and if we keep track of the circumstances around communicative efforts that seem to work and those that seem to fail. Cataloguing these anecdotes could surely provide fodder for some systematic empirical study, and I’m guessing it could help us think through strategies for really listening to the audiences we hope are listening to us.

* * * * *
As might be expected, Bora has a great deal to say about the implications of this particular piece of research and about commenting, comment moderation, and Web 2.0 conversations more generally. Grab a mug of coffee, settle in, and read it.

——
[1] Dominique Brossard and Dietram A. Scheufele, “Science, New Media, and the Public.” Science 4 January 2013:Vol. 339, pp. 40-41.
DOI: 10.1126/science.1160364

Can we combat chemophobia … with home-baked bread?

This post was inspired by the session at the upcoming ScienceOnline 2013 entitled Chemophobia & Chemistry in The Modern World, to be moderated by Dr. Rubidium and Carmen Drahl.

For some reason, a lot of people seem to have an unreasonable fear of chemistry. I’m not just talking about fear of chemistry instruction, but full-on fear of chemicals in their world. Because what people think they know about chemicals is that they go boom, or they’re artificial, or they’re drugs which are maybe useful but maybe just making big pharma CEOs rich, and maybe they’re addictive and subject to abuse. Or, they are seeping into our water, our air, our food, our bodies and maybe poisoning us.

At the extreme, it strikes me that chemophobia is really just a fear of recognizing that our world is made of chemicals. I can assure you, it is!

Your computer is made of chemicals, but so are paper and ink. Snails are made of chemicals, as are plants (which carry out chemical reactions right under our noses). Also carrying out chemical reactions right under our noses are yeasts, without which many of our potables would be less potent. Indeed, our kitchens and pantries, from which we draw our ingredients and prepare our meals, are full of many impressively reactive chemicals.

And here, it actually strikes me that we might be able to ratchet down the levels of chemophobia if people find ways to return to de novo syntheses of more of what they eat — which is to say, to making their food from scratch.

For the last several months, our kitchen has been a hotbed of homemade bread. Partly this is because we had a stretch of a couple years where our only functional oven was a toaster oven, which means when we got a working full-sized oven again, we became very enthusiastic about using it.

As it turns out, when you’re baking two or three loaves of bread every week, you start looking at things like different kinds of flour on the market and figuring out how things like gluten content affect your dough — how dense a loaf it will make, how much “spring” it has in the oven, and so forth.

(Gluten is a chemical.)

Maybe you dabble with the occasional batch of biscuits or muffins or quick-bread that uses a leavening agent other than yeast — otherwise known as a chemical leavener.

(Chemical leaveners are chemicals.)

And, you might even start to pick up a feel for which chemical leaveners depend on there being an acidic ingredient (like vinegar or buttermilk) in your batter and which will do the job without an acidic ingredient in the batter.

(Those ingredients, whether acidic or not, are made of chemicals. Even the water.)
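
For what it’s worth, the chemistry behind that acid-or-no-acid distinction is simple enough to sketch: baking soda is plain sodium bicarbonate, which only gives up its carbon dioxide when it meets an acid (the acetic acid in vinegar, say, or the lactic acid in buttermilk), whereas baking powder already includes a dry acid in the mix, so moisture and heat are enough to set it off. The vinegar version of the reaction looks like this:

NaHCO₃ + CH₃COOH → CH₃COONa + H₂O + CO₂ (and those bubbles of carbon dioxide are what lift your batter)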

Indeed, many who find their inner baker will start playing around with recipes that call for more exotic ingredients like lecithin or ascorbic acid or caramel color (each one: a chemical).

It’s to the point that I have joked, while perusing the pages of “baking enhancers” in the fancy baking supply catalogs, “People start baking their own bread so they can avoid all the chemicals in the commercially baked bread, but then they get really good at baking and start improving their homemade bread with all these chemicals!”

And yes, there’s a bit of a disconnect in baking to avoid chemicals in your food and then discovering that there are certain chemicals that will make that food better. But, I’m hopeful that the process leads to a connection, wherein people who are getting back in touch with making one of the oldest kinds of foods we have can also make peace with the recognition that wholesome foods (and the people who eat them) are made of chemicals.

It’s something to chew on, anyway.

Reasonably honest impressions of #overlyhonestmethods.

I suspect at least some of you who are regular Twitter users have been following the #overlyhonestmethods hashtag, with which scientists have been sharing details of their methodology that are maybe not explicitly spelled out in their published “Materials and Methods” sections. And, as with many other hashtag genres, the tweets in #overlyhonestmethods are frequently hilarious.

I was interviewed last week about #overlyhonestmethods for the Public Radio International program Living On Earth, and the length of my commentary was more or less Twitter-scaled. This means some of the nuance (at least in my head) about questions like whether I thought the tweets were an overshare that could make science look bad didn’t quite make it to the radio. Also, in response to the Living On Earth segment, one of the people with whom I regularly discuss the philosophy of science in the three-dimensional world shared some concerns about this hashtag in the hopes I’d say a bit more:

I am concerned about the brevity of the comments which may influence what one expresses.  Second there is an ego component; some may try to outdo others’ funny stories, and may stretch things in order to gain a competitive advantage.

So, I’m going to say a bit more.

Should we worry that #overlyhonestmethods tweets share information that will make scientific practice look bad to (certain segments of) the public?

I don’t think so. I suppose this may depend on what exactly the public expects of scientists.

The people doing science are human. They are likely to be working with all kinds of constraints — how close their equipment is to the limits of its capabilities (and to making scary noises), how frequently lab personnel can actually make it into the lab to tend to cell cultures, how precisely (or not) pumping rates can be controlled, how promptly (or not) the folks receiving packages can get perishable deliveries to the researchers. (Notice that at least some of these limitations are connected to limited budgets for research … which maybe means that if the public finds them unacceptable, they should lobby their Congresscritters for increased research funding.) There are also constraints that come from the limits of the human animal: with a finite attention span, without a built in chronometer or calibrated eyeballs, and with a need for sleep and possibly even recreation every so often (despite what some might have you think).

Maybe I’m wrong, but my guess is that it’s a good thing to have a public that is aware of these limitations imposed by the available equipment, reagents, and non-robot workforce.

Actually, I’m willing to bet that some of these limitations, and an awareness of them, are also really handy in scientific knowledge-building. They are departures from ideality that may help scientists nail down which variables in the system really matter in producing and controlling the phenomenon being studied. Reproducibility might be easy for a robot that can do every step of the experiment precisely every single time, but we really learn what’s going on when we drift from that. Does it matter if I use reagents from a different supplier? Can I leave the cultures to incubate a day longer? Can I successfully run the reaction in a lab that’s 10 °C warmer or 10 °C colder? Working out the tolerances helps turn an experimental protocol from a magic trick into a system where we have some robust understanding of what variables matter and of how they’re hooked to each other.

Does the 140 character limit mean #overlyhonestmethods tweets leave out important information, or that scientists will only use the hashtag to be candid about some of their methods while leaving others unexplored?

The need for brevity surely means that methods for which candor requires a great deal of context and/or explanation won’t be as well-represented as methods where one can be candid and pithy simultaneously. These tweeted glimpses into how the science gets done are more likely to be one-liners than shaggy-dog stories.

However, it’s hard to imagine that folks who really wanted to share wouldn’t use a series of tweets if they wanted to play along, or maybe even write a blog post about it and use the hashtag to tweet a link to that post.

What if #overlyhonestmethods becomes a game of one-upmanship and puffery, in which researchers sacrifice honesty for laughs?

Maybe there’s some of this happening, and if the point of the hashtag is for researchers to entertain each other, maybe that’s not a problem. However, in the case that other members of one’s scientific community were actually looking to those tweets to fill in some of the important details of methodology that are elided in the terse “Materials and Methods” section of a published research paper, I hope the tweeters would, when queried, provide clear and candid information on how they actually conducted their experiments. Correcting or retracting a tweet should be less of an ego blow than correcting or retracting a published paper, I hope (and indeed, as hard as it might be to correct or retract published claims, good scientists do it when they need to).

The whole #overlyhonestmethods hashtag raises the perennial question of why so much is elided in published “Materials and Methods” sections. Blame is usually put on limitations of space in the journals, but it’s also reasonable to acknowledge that sometimes details-that-turn-out-to-be-important are left out because the researchers don’t fully recognize their importance. Other times, researchers may have empirical grounds for thinking these details are important, but they don’t yet have a satisfying story to tell about why they should be.

By the way, I think it would be an excellent thing if, for research that is already published, #overlyhonestmethods included the relevant DOI. These tweets would be supplementary information researchers could really use.

What if researchers use #overlyhonestmethods to disclose ethically problematic methods?

Given that Twitter is a social medium, I expect other scientists in the community watching the hashtag would challenge those methods or chime in to explain just what makes them ethically problematic. They might also suggest less ethically problematic ways to achieve the same research goals.

The researchers on Twitter could, in other words, use the social medium to exert social pressure in order to make sure other members of their scientific community understand and live up to the norms of that community.

That outcome would strike me as a very good one.

* * * * *

In addition to the ever expanding collection of tweets about methods, #overlyhonestmethods also has links to some thoughtful, smart, and funny commentary on the hashtag and the conversations around it. Check it out!

Fear of scientific knowledge about firearm-related injuries.

In the United States, a significant amount of scientific research is funded through governmental agencies, using public money. Presumably, this is not primarily aimed at keeping scientists employed and off the streets*, but rather is driven by a recognition that reliable knowledge about how various bits of our world work can be helpful to us (individually and collectively) in achieving particular goals and solving particular problems.

Among other things, this suggests a willingness to put the scientific knowledge to use once it’s built.** If we learn some relevant details about the workings of the world, taking those into account as we figure out how best to achieve our goals or solve our problems seems like a reasonable thing to do — especially if we’ve made a financial investment in discovering those relevant details.

And yet, some of the “strings” attached to federally funded research suggest that the legislators involved in approving funding for research are less than enthusiastic to see our best scientific knowledge put to use in crafting policy — or, that they would prefer that the relevant scientific knowledge not be built or communicated at all.

A case in point, which has been very much on my mind for the last month, is the way language in appropriations bills has restricted Centers for Disease Control and Prevention (CDC) and National Institutes of Health (NIH) research funds for research related to firearms.

The University of Chicago Crime Lab organized a joint letter (PDF) to the gun violence task force being headed by Vice President Joe Biden, signed by 108 researchers and scholars, which is very clear in laying out the impediments that have been put on research about the effects of guns. They identify the crucial language, which is still present in subsection (c) of sections 503 and 218 of the FY2013 Appropriations Act governing NIH and CDC funding:

None of the funds made available in this title may be used, in whole or in part, to advocate or promote gun control.

As the letter from the Crime Lab rightly notes,

Federal scientific funds should not be used to advance ideological agendas on any topic. Yet that legislative language has the effect of discouraging the funding of well-crafted scientific studies.

What is the level of this discouragement? The letter presents a table comparing major NIH research awards connected to a handful of conditions between 1973 and 2012, noting the number of reported cases of these conditions in the U.S. during this time period alongside the number of grants to study the condition. There were 212 NIH research awards to study cholera and 400 reported U.S. cases of cholera. There were 56 NIH research awards to study diphtheria and 1337 reported U.S. cases of diphtheria. There were 129 NIH research awards to study polio and 266 reported U.S. cases of polio. There were 89 NIH research awards to study rabies and 65 reported U.S. cases of rabies. But, for more than 4 million reported firearm injuries in the U.S. during this time period, there were exactly 3 NIH research awards to study firearm injuries.
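
To put those figures on a common footing (this is just my own back-of-the-envelope arithmetic on the numbers quoted above, not a calculation from the letter itself), here is a quick sketch of awards per 1,000 reported cases:

```python
# Back-of-the-envelope arithmetic on the figures quoted above:
# NIH research awards per 1,000 reported U.S. cases, 1973-2012.
# (The firearm-injury count is "more than 4 million"; 4 million is used
# here as a conservative stand-in.)
conditions = {
    "cholera":          (212, 400),
    "diphtheria":       (56, 1_337),
    "polio":            (129, 266),
    "rabies":           (89, 65),
    "firearm injuries": (3, 4_000_000),
}

for name, (awards, cases) in conditions.items():
    rate = 1000 * awards / cases
    print(f"{name:>16}: {rate:10.4f} awards per 1,000 reported cases")
```

On that scale, each of the diseases comes in somewhere between roughly 40 and 1,400 awards per 1,000 reported cases, while firearm injuries come in at well under one award per million cases.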

One possibility here is that, from 1973 to 2012, there were very few researchers interested enough in firearm injuries to propose well-crafted scientific studies of them. I suspect that the 108 signatories of the letter linked above would disagree with that explanation for this disparity in research funding.

Another possibility is that legislators want to prevent the relevant scientific knowledge from being built. The fact that they have imposed restrictions on the collection and sharing of data by the Bureau of Alcohol, Tobacco, Firearms and Explosives (in particular, data tracing illegal sales and purchases of firearms) strongly supports the hypothesis that, at least when it comes to firearms, legislators would rather be able to make policy unencumbered by pesky facts about how the relevant pieces of the world actually work.

What this suggests to me is that these legislators either don’t understand that knowing more about how the world works can help you achieve desired outcomes in that world, or that they don’t want to achieve the outcome of reducing firearm injury or death.

Perhaps these legislators don’t want researchers to build reliable knowledge about the causes of firearm injury because they fear it will get in the way of their achieving some other goal that is more important to them than reducing firearm injury or death.

Perhaps they fear that careful scientific research will turn up facts which themselves seem “to advocate or promote gun control” — at least to the extent that they show that the most effective way to reduce firearm injury and death would be to implement controls that the legislators view as politically unpalatable.

If nothing else, I find that a legislator’s aversion to scientific evidence is a useful piece of information about him or her to me, as a voter.
______
*If federal funding for research did function like a subsidy, meant to keep the researchers employed and out of trouble, you’d expect to see a much higher level of support for philosophical research. History suggests that philosophers in the public square with nothing else to keep them busy end up asking people lots of annoying questions, undermining the authority of institutions, corrupting the youth, and so forth.

**One of the challenges in getting the public on board to fund scientific research is that they can be quite skeptical that “basic research” will have any useful application beyond satisfying researchers’ curiosity.

“Are you going to raise the child picky?” Interview with Stephanie V. W. Lucianovic (part 3).

This is the last part of my interview with Stephanie V. W. Lucianovic, author of Suffering Succotash: A Picky Eater’s Quest to Understand Why We Hate the Foods We Hate, conducted earlier this month over lunch at Evvia in Palo Alto. (Here is part 1 of the interview. Here is part 2 of the interview.)

In this segment of the interview, we talk about foodies as picky eaters whose preferences get respect and about how pickiness looks from the parenting side of the transaction. Also, we notice that culinary school might involve encounters with a classic Star Trek monster.

Janet D. Stemwedel: It does seem like there are certain ways to be picky that people will not only accept but actually look at as praiseworthy. “Oh, you’ve decided to give up this really delightful food that everyone else would wallow in!” I’ll come clean: part of the reason I’m vegetarian is that I have never cared for meat. Once I moved out of my parents’ house and not eating meat became an option, I stopped eating the stuff without any kind of impressive exercise of will. And, in restaurants that are big on fake meat, I’ll end up pulling it out of my soup. The waitrons will tell me, “Oh, don’t worry, you can eat that! It’s not meat!” And I’ll say, “I can eat it, but I don’t like it, so I won’t be eating it.”

Stephanie V. W. Lucianovic: You don’t need a meat substitute if the point is that you don’t like meat.

JS: Although veggie bacon rocks.

SL: Really? Bacon, man …

JS: It’s the holy grail, taste-wise, right?

SL: There’s a thought it could be more psychological than biological.

JS: Salt and fat.

SL: And a high concentration of nutrients that you’d need to survive in the wilderness. But also, there’s the happy memory of smelling it cooking on a weekend morning, not something the scientists discount. These are learned experiences.

JS: But a favorite food can become a food you can’t deal with if you eat it right before coming down with a stomach flu.

SL: Right. It just takes one time. Except for with my husband. He had eaten a pastrami sandwich earlier in the day, then drank a lot and threw up. And his reaction was, “Oh yeah, that was a good pastrami sandwich.” As it was coming up, this is what was going through his head!

JS: Not a very picky eater.

SL: He’s such a freak! He just doesn’t get turned off to foods easily. Although he does have his bugaboos, like bologna (maybe because he didn’t grow up with it) and cheese with apples. But anyway, the aspect of choice …

JS: Like being able to say, “I can’t eat that because the dietary laws of my religion forbid it,” which generally gets some level of respect.

SL: But then there are the foodies! And that seems to be a socially sanctioned way to be a picky eater. “Oh, I would never eat that!”

JS: “I would never drink that wine! That year was horrible!”

SL: Exactly! Or, “I don’t eat Wonder Bread because it’s full of preservatives!” Foodies can certainly be moralistic, in their own way, about what they will and will not eat. But it’s annoying when they’re like that.

JS: Because their picky preferences are better than yours.

SL: It’s obnoxious.

JS: Are there some foods you don’t regret being picky about?

SL: Well, there are some foods I still don’t eat, and I’m fine with that. Bananas and raisins are right up there, and I wrote a piece for the Washington Post detailing the reasons why I’m OK not liking bananas. They’re trying to kill me in various ways — they’ve got radiation in them —

JS: We can’t grow them locally.

SL: Due to their lack of genetic diversity, they’re going to die out anyway, so it’s probably better that I never liked them. They used to come with tarantulas in them, back in the day.

JS: That’s extra protein!

SL: So, I could list a bunch of foods that I still don’t like but without regret. Braised meats? I just don’t like them. People go on and on about how great they are, but to me it’s a big mass of everything-tastes-the-same with none of it highly flavored enough for me. With stews I have the same kind of issue. I think I don’t regret not liking these kinds of food now because I recognize how far I’ve come. I like so many more things than I used to, and I can get by without it impacting my health or my social life. And, when faced with them at somebody’s house, I will eat something that has bananas or whatever in it. I’ve learned how to deal with it. But I won’t choose to have it myself at home.

JS: You won’t seek it out.

SL: But I am bringing some of these foods into my home, because I don’t want to prejudice my son against them. He likes bananas, sometimes, but often they’ll end up wasted. He’ll go through a phase where he wants them, and then another where he doesn’t want them. His interest level is at the point where I can buy two bananas at a time. I have had friends ask me, “Are you going to not feed him raisins?” Of course I’m going to give him raisins. I can touch the things!

JS: “Are you going to raise the child picky?”

SL: Right! So far, the kid likes okra, so I think we’re OK. But everything on the list I give in the book of foods I still don’t like, I have absolutely no problem not liking them, because it just doesn’t impact my life. There are just a few things out there I wish I liked more, because it would vary our diet more. For example, I don’t love green beans. I toss them with pesto sometimes, but I have just not found a way to make them where I love them. I don’t love peas either, except when Evvia does them in the summertime — huge English peas that come cold dressed with feta and scallions and dill (which I normally don’t like) and olive oil and lemon, and they’re only here for like three weeks. And they’re the best damn peas — that’s the only way I want them. The things I kind of wish I liked that I don’t, I’ve tried, and I’ll try them again, but it doesn’t really bug me.

JS: I wonder how much my regrets for the things I feel like I should be able to like but don’t are connected to the fact that I was not an especially picky eater as a kid (except for not liking meat). I kind of feel like I should like asparagus, but I don’t. It’s been so long since I’ve eaten it that I can’t even remember whether I can smell the funny asparagus metabolite in my pee.

SL: I didn’t like asparagus, and then I wanted to like it and found a recipe that worked, roasting it and dressing it with a vinaigrette and goat cheese. But then we ate a lot of it, and it was really good, and after a while I was noticing that I only ate the tips, not the woody, stringy bits.

JS: And that it still tasted like asparagus.

SL: Yeah. In the end, I tried it.

JS: For me, olives are another challenging food. I’m the only one in my household who doesn’t like them at all. So we may order a pizza with olives to share, but I’m going to pick all the olives off of mine and give them to whoever is nicest to me.

SL: How do you feel about the pizza once you’ve picked them off? Can you actually eat the pizza then?

JS: If I’m hungry enough, I can. I guess it depends. The black olive penetration on pizza is not as extreme as biting into a whole olive.

SL: No. I think the kind of olives they use for pizza are …

JS: Sort of defanged?

SL: Yeah. They’re just not as bitter as the whole olives you find.

JS: Are there foods you’ve grown to like where you still feel some residual pickiness? It sounds like asparagus may be one.

SL: Sweet potatoes and squash are two others I’m still on the fence about. I have to be very careful about how I make them. Lentils — maybe legumes more generally — are foods I don’t love unconditionally. They have to be prepared a certain way. Broccoli, too! I will only eat broccoli made according to the recipe I give in the book or, failing that, roasted but without the vinaigrette. Just because I like a food does not mean I fully accept every rendition of it. Speaking from a cook’s perspective, you just can’t disrespect vegetables. I will not eat broccoli steamed; I just don’t think it’s fair.

JS: Fair enough.

SL: I’m still pretty picky about how I like even the foods that I like.

JS: OK, death is not an option: a dish with a flavor you’re picky about and a good texture, or a dish with a texture you’re picky about and a good flavor?

SL: That’s so hard.

JS: You really want death on the table?

SL: It depends … How bad is the flavor? How good is the flavor?

JS: So, if the good is good enough, you might be able to deal with the challenging part?

SL: I think texture really gets me more. For example, I don’t have a problem with the flavor of flan or panna cotta. Very good flavors. Mango I’ve had, and the flavor is good, but it’s so gelatinous and slimy.

JS: To your palate, it’s wrong.

SL: Yeah. It just gets the gag reflex going for me more. But thinking about it now, I probably wouldn’t do bad flavor/good texture.

JS: So flavor might have a slight edge?

SL: Yeah. I’m thinking about stew: for me, bad all around. Everything is mushy and everything is one flavor, and it’s just very un-fun for me. But then there’s something like bananas, where my problem probably started as a texture issue, but because I disliked the texture so much, I started to associate the smell and the flavor with that texture, and now I don’t like anything banana flavored. I don’t like banana bread. I’ll eat it, but I don’t like it.

JS: And banana flavored cocktails would be right out.

SL: Auugh! Anything that’s a banana flavored cocktail is usually creamy too, and I have a problem with creamy cocktails. I used to be able to do the creamy cocktail in my youth, but now I think there’s something very wrong with them. Unless it’s got coffee.

JS: Did pickiness make culinary school harder?

SL: Yeah, it probably did. I noticed I wasn’t the only one who didn’t want to eat certain things. If you’re picky, you do have to really steel yourself to touch certain things that you might not want to touch, like fish. In general, I don’t like handling raw chicken, although I love to eat cooked chicken. I don’t mind handling red meats at all. There’s more blood to it — chicken, by comparison, is more pale and dead looking. So yeah, being picky probably made culinary school more challenging, but I was so into food by that point that it overrode some of it. I knew I would have to eat stuff like veal, stuff that would be difficult for me, and that it would be embarrassing if I didn’t, because the chefs told us we would have to taste everything. I was totally scared about that. But, the fact that it was probably harder for me than it was for someone who was an unabashed lover of all foods probably made it more of a moral victory. Just like becoming a foodie in the face of pickiness, I knew I had to work harder at it. I wasn’t born that way; I had to earn my stripes by getting over a lot of hurdles.

JS: It was a bigger deal because you overcame more adversity to get there.

SL: I think it meant more to me personally.

JS: Did you find that some of the stuff you learned in culinary school gave you more tools to deal with your own pickiness?

SL: Oh, yeah, because it just taught me better methods of cooking things that maybe I didn’t yet know. And, it really made me fearless about adding salt. Roberta Dowling was the director of the school, and nothing was ever salty enough for her. I started calling her the salt-vampire. There was a character on —

JS: Star Trek! I know that one!

SL: For every dish she tasted, she’d say, “Needs more salt,” even if we added all the salt the recipe called for. She tried to get us to recognize that the recipe was just a guideline. And salt really does do a lot for food. People who are not so confident in the kitchen get infuriated by “salt to taste,” but it really is all about your personal taste. What’s going on inside your mouth is so different from what may be going on in someone else’s, which means only you can determine whether it’s enough salt.

JS: Does pickiness look different when you’re on the parental side of the transaction?

SL: Yes. It’s so frustrating! It’s so, “Oh my God, don’t be like me!” I know my mom was like, “Whatever. You guys were picky. I wasn’t worried about it.” The doctor was like, “Give ’em vitamins.” I do think that writing the book, especially the chapter on children, relaxed me. On the other hand, I feel the same way a lot of other picky eaters who are parents feel: I’m just a little bit more conditioned to understand what they’re going through and not push it. But I have to be careful, because sometimes you can still fall into “No, no, no! I know you think you don’t like it now, but really, just try it and you’ll like it.” I have to remember that it’s him and what tastes good to him and what he wants to do. Later on in life, if he changes his mind about whatever it is he doesn’t like this week, great. This week he told me he didn’t like grilled cheese. My response was, “You’re no son of mine! How does a person not like grilled cheese? It was always there for me.”

JS: I think the right answer to, “I don’t like grilled cheese, Mom,” is “More for me!”

SL: Exactly! But yeah, it’s a very different perspective on pickiness. But again, I’m probably more conditioned to be understanding about it than a non-picky parent who gets a picky child might be. They just don’t even know what it’s like.

JS: It’s an interesting thing as they get older. Until this school year, I was the school lunch packer of the house for both of my kids, and I’d get the complaints along the lines of, “Why do you pack us stuff we don’t like?” Of course, I’d say, “OK, tell me what you would like,” but then within a few months they’d be sick of that. This year, I’m still packing my older kid’s lunch, since she has to get out the door early to catch a bus, but my 11-year-old has been making her own lunches, and I catch her making sandwiches whose components, two years ago, she would have claimed not to like at all. The other day, she made a sandwich on home-baked whole wheat bread with a honey-mustard marinade she dug out of the back of the fridge, and smoked gouda, and arugula. I said, “I didn’t know you liked those things.” She said, “Me neither, but they were here, and I tried them, and they were good.” Another day, she made a sandwich with some homemade lime curd, and the parent in the vicinity said, “What about some more protein on that?” so she put some peanut butter on that sandwich and later reported that it tasted kind of Thai.

SL: Of course it did!

JS: I’ll take their word for what they like (or don’t like) this week, but that’s not going to stop me from eating other stuff in front of them, and if it smells or looks good enough to them and they say, “Can I try some of that?” maybe I’ll be nice and I’ll share.

SL: That’s the way to do it, no pressure but you keep offering the stuff, exposing them to it but not getting hurt feelings if they don’t like it.

JS: And ultimately, who cares if the kid ends up liking it? If it’s less hassle for me, one less fight? I have enough fights. I don’t need more fights.

SL: You don’t really need the bragging rights, either. “Oh, my kid is so rarefied!” Who cares?

Scientific knowledge, societal judgment, and the picky eater: Interview with Stephanie V. W. Lucianovic (part 2).

We continue my interview with Stephanie V. W. Lucianovic, author of Suffering Succotash: A Picky Eater’s Quest to Understand Why We Hate the Foods We Hate, conducted earlier this month over lunch at Evvia in Palo Alto. (Here is part 1 of the interview.)

In this segment of the interview, we ponder the kind of power picky eaters find in the scientific research on pickiness, the different ways people get judgmental about what someone else is eating, and the curious fact that scientists who research picky eating seem not to be picky eaters themselves. Also, we cast aspersions on lima beans and kale.

Janet D. Stemwedel: Are there some aspects of pickiness that you’d like to see the scientists research that they don’t seem to be researching yet?

Stephanie V. W. Lucianovic: There was the question of whether there are sex differences in pickiness, which it seems like maybe they’re looking into more now. Also, and this is because of where I am right now, I’d really like to see them look into the impact of having early examples of well-prepared food, because I have a hunch this might be pretty important. I’m pretty sure there’s no silver bullet, whether you’re breast-fed or formula-fed or whatever. It can make parents feel really bad when they get a long list of things to do to help their kid not be picky, and they do everything on the list, and the kid still ends up picky. But I’d like to see more of the research suggesting that it’s not just early exposure to food but early exposure to good food. I’m also intrigued by the research suggesting that pickiness is not a choice but rather a part of your biology. Lots of my friends who are gay have likened it to coming out of the closet and accepting that who you are is not a choice. I’d like to see more pickiness research here, but maybe it’s not so much about the science as the sociology of finding acceptance as a picky eater. Also, I’m not sure to what extent scientists are taking the cultural aspects into account when they study pickiness — you figure they must. I am sick of people throwing the French back at me, saying, there’s this book written by the mother who raised her kids in France, and her kids were not picky, so, generally, kids in France are not picky. And I’m thinking, you know, I’m willing to bet that there are picky kids in France, but they just don’t talk about it. Scientifically speaking, there’s a high probability that there are picky eaters there.

JS: Right, and their parents probably just have access to enough good wine to not be as bothered by it.

SL: Or maybe their stance is just generally not to be bothered by it. Jacques Pepin said to me, “We just didn’t talk about it.” His daughter liked some things and disliked others, and he said, “You know, when she decided she liked Brussels sprouts, we didn’t get down on the floor to praise God; we just didn’t talk about it either way.” It doesn’t become a thing in the family. Parents today are so educated about food and nutrition, but it can have bad effects as well as good effects.

JS: We have the knowledge, but we don’t always know what to do with it.

SL: I’m hoping that scientists will be able to take all that they’re learning about the different facets of pickiness and put that knowledge together to develop ways to help people. People have asked me whether hypnosis works. I don’t know, and the scientists I asked didn’t know either. But there are people looking for help, and I hope that what the scientists are learning can make that help more accessible.

JS: Something occurred to me as I was reading what you wrote about the various aspects of why people like or don’t like certain flavors or different textures. I know someone who studies drugs of abuse. Just after my tenure dossier went in, I detoxed from caffeine, but I kept drinking decaffeinated coffee, because I love the taste of coffee. But, this researcher told me, “No, you don’t. You think you do, but the research we have shows that coffee is objectively aversive.” So you look at the animal studies and the research on how humans get in there and get themselves to like coffee, and all the indications are that we’re biologically predisposed not to like it.

SL: We’re not supposed to like it.

JS: But we can get this neurochemical payoff if we can get past that aversion. And I’m thinking, why on earth aren’t leafy greens doing that for us? How awesome would that be?

SL: They don’t get us high. They don’t give us the stimulant boost of caffeine. I think what your researcher friend is saying is that the benefit of caffeine is enough that it’s worth it to learn how to handle the bitterness to get the alertness. I started out with really sweet coffee drinks, with General Foods International coffees, then moved on to Starbucks drinks. I can finally drink black coffee. (I usually put milk in it, but that’s more for my stomach.) I can actually appreciate good coffees, like the ones from Hawaii. But, it’s because I worked at it — just like I worked at liking some of the foods I’ve disliked. I wanted to like it because the payoff was good. With greens, the only payoff is that they’re good for you. I reached a certain age where that was a payoff I wanted. I wanted to like Brussels sprouts because the idea of actually healthful foods became appealing to me. But there are plenty of people I know who are picky eaters who couldn’t care less about that.

JS: So, if there were more reasons apparent within our lifestyle to like leafy greens and their nutritional payoff, we’d work harder when we were in junior high and high school and college to like them? Maybe as hard as we do to become coffee drinkers?

SL: Sure! I’m trying very hard to like kale.

JS: Me too! I feel bad that I don’t like it.

SL: I know, right?

JS: I feel like I should — like a good vegetarian should like kale.

SL: Well, everyone’s trying to like it, and I’ve found some ways of liking it. But, what’s the payoff for kale? Obviously, it’s very good for you, and it’s supposed to have some specific benefits like being really good for your complexion, and cleaning out your liver. Have another glass of wine? OK, if you eat your kale. But again, “good for you” is a weird kind of payoff.

JS: It’s a payoff you have to wait for.

SL: And one you’re not necessarily always going to see. I’ve been told that eating lots of salmon also has health benefits, but I just don’t like salmon enough to eat enough of it to see those benefits.

JS: Heh. That reminds me of the stories I heard from our pediatrician that you’ve probably heard from yours, that if you feed your baby too much strained carrot, the baby might turn orange and you shouldn’t be alarmed. And of course, I was determined to sit down and feed my child enough carrots that weekend to see if I could make that happen.

SL: I’ve never seen that happen. Does it really happen?

JS: Apparently with some kids it does. I tried with mine and could not achieve the effect.

[At this point we got a little sidetracked as I offered Stephanie some of my Gigantes (baked organic Gigante beans with tomatoes, leeks, and herbed feta). I had ordered them with some trepidation because someone on Yelp had described this as a lima bean dish, and I … am not a fan of lima beans. The beans turned out to be a broad bean that bore no resemblance to the smaller, starchy lima beans of my youthful recollection.]

SL: I’ve never actually seen those lima beans fresh, just in bags in the frozen section.

JS: And assuming they still taste like we remember them, who would get them?

SL: Well, my husband is the kind of person who will eat anything, so he might. But you can also take limas and puree them with lemon juice, garlic, and olive oil and make a white bean spread. If I had to eat limas, that’s what I’d do with them. Maybe add a little mint. But I wouldn’t just eat them out of the bag, not even with butter.

JS: They’re not right.

SL: No.

JS: With so many different kinds of beans, why would you eat that one?

SL: There’s a reason why Alexander, of the terrible, horrible, no good, very bad day, had lima beans as his hated food. But, there are scientists at Monell working on flavors and acceptance of food — trying, among other things, to work out ways to make the drug cocktails less yucky for pediatric AIDS patients. They’re working on “bitter blockers” for that. (Maybe that could help with lima beans, too.) Anyway, getting Americans to eat more healthy foods …

JS: There’s probably some pill we could take for that, right?

SL: Hey, I thought we could do that with vitamins. Then I heard Michael Pollan saying, basically vitamins are pointless. (I still take them.) It’s tricky, because lots of people eat primarily for pleasure, not for health. I’m not sure why we have to see the two as being in opposition to each other; I enjoy food so much now that I find pleasure in eating foods that are good for me. But there are also plenty of people who just see food as fuel, and don’t find it any more interesting or worthy of discussion than that.

JS: At that point, why not just stock up on the nutrition bars and never do dishes again?

SL: When Anderson Cooper came out as a picky eater on his talk show, he said, “I would rather just drink my meals. I would rather have a shake.” His reaction to food was at the level where he wasn’t interested in anything more than that, at all. He’d rather go for convenience.

JS: That seems OK to me. That’s not how I am, or how the people I live with (and cook for) are, which means I can’t just blend it for meals, but that’s how it goes.

SL: For people who are like that, and know that they’re like that, if drinking meals is what works for them, that’s great. Personally, I wouldn’t want to be that way, but then again, I say that not really knowing what it’s like to be them instead of me.

JS: Do you think that interest in the causes of pickiness is driven by the amount of judgment people attach to picky eaters?

SL: Certainly, that’s my interest in it. I don’t think that’s necessarily why the scientific community is interested in it — I mean, I don’t think it bothers them very much, except in terms of understanding the psychological effects that are connected to pickiness. But yes, let’s talk about how food is the subject of judgment in general — especially among people in the Bay Area, among foodies.

JS: “Are you really going to eat that?! Do you know where that’s from?”

SL: Right, or “I won’t eat anything that wasn’t grown or raised within a 90 mile radius.” We have so many levels at which we judge what someone else is eating. My personal motivation for writing this book was to shed light on this topic because of the judgment that I saw picky eaters experience. For a while, I wouldn’t even admit my past as a picky eater. I had become a foodie and I was out here reinventing myself, but I kept my mouth shut about things I didn’t like until other people around me were admitting that they went through a picky stage of their own. Whenever I’ve written about pickiness online, the comments end up having a lot of people sharing their own stories. It seems like everyone can relate to it: “This is what I don’t like, and here’s why …” or, “I never thought I’d find anyone else who didn’t like this food for the same reason I don’t like it.” I’ve found that people can bond just as much over hating foods as they do over liking them. Let’s face it, food is often about community, so discussions of things we hate and things we love can be equally interesting to people. Even if you have the Pollyannas who say, “Who really wants to talk about something as unpleasant as what we don’t like?” guess what? We all dislike things.

JS: How many of the scientists who do research on the different aspects that contribute to pickiness outed themselves as picky eaters to you? Or do you think the scientists who study this stuff seem to be less picky than the rest of us?

SL: None of them really admitted to me that they were picky eaters. And I would ask them point blank if they were. One of the scientists working on the Duke study, Nancy Zucker, told me, “No. I ate everything as a kid, and I still do.” And, she told me her mom did some really weird things with food because her job was to sample products. The other scientist I spoke to on the Duke study admitted to not really liking tomatoes, but that was the extent of her pickiness. I got the sense from Dr. Dani Reed at Monell that she loves food and loves to cook. There were some foods, like organ meats, that she hadn’t quite accepted but that her friends were trying to get her to like. But, not a whole lot of people in this scientific community admitted to me that they were picky. I’m now thinking through everyone I interviewed, and I don’t recall any of them expressing food issues.

JS: I wonder if that’s at all connected with the research — whether doing research in this area is a way to make yourself less picky, or whether people who are picky are not especially drawn to this area of research.

SL: A lot of them would admit to having family members or friends who were picky. So then you wonder if they might have been drawn to the research because of this need to understand someone in their life.

JS: Maybe in the same way that losing a family member to leukemia could draw you to a career in oncology, having a family member who ruined family dinners by not eating what was on the plate draws you to this?

SL: Quite possibly. By and large, the scientists I spoke to about pickiness were so non-judgmental, probably because they’ve been studying it in various forms for various reasons. The rest of us are just now talking more about it and starting to notice the research that’s been amassed (on children, or breast feeding, or “intrauterine feeding” and what fetuses are “tasting” in the womb). Since Monell is the center for research on taste and smell, they are used to journalists asking them about picky eaters. They’re also used to being misquoted and having the journalists’ accounts of the science come out wrong. (For example, they hate the word “supertaster,” which the media loves.) I got the impression that they were very non-judgmental about pickiness, but none of them really described themselves as picky to me — and I asked.

JS: Maybe the picky eaters who are scientists go into some other field.

SL: Maybe. Maybe they don’t want to be involved with the food anymore.

JS: “Get it away from me! Get it away from me!”

SL: Seriously! “I lived it; I don’t need to study it!”

JS: Do you think having a scientific story to tell about pickiness makes it easier for picky eaters to push back against the societal judgment?

SL: Oh yeah. Lots of interviewers I’ve spoken to have wanted to tout this book as the science of picky eating — and let’s face it, it’s not all about the science — but people want to latch onto the scientific story because, for the lay person, when science hands down a judgment, you kind of just accept it. This is how I felt — you can’t argue with science. Science is saying, this is why I am who I am. Having scientific facts about pickiness gives you the back-up of a big-brained community saying, “We can explain at least part of why you’re the way you are, and it’s OK.” When parents can be given scientific explanations for why their kids are the way they are —

JS: And that the kid’s not just messing with you.

SL: Right! And that it’s not your fault. It’s not that you did something wrong to your kid that made your kid a picky eater. We’re really talking about two communities of picky eating, the parents of kids who are picky, and the adults who are picky eaters, and both those communities are looking for science because it’s as solid a thing as they can find to help them get through it.

JS: But here, we loop back to what you were saying earlier, as you were discussing how there’s potentially a genetic basis for pickiness, and how this kind of finding is almost analogous to finding a biological basis for sexual orientation. In both cases, you could draw the conclusion that it isn’t a choice but who you are.

SL: Exactly.

JS: But when I hear that, I’m always thinking to myself, but what if it were a choice? Why would that make us any more ready to say it’s a bad thing? Why should a biological basis be required for us to accept it? Do you think picky eaters need to have some scientific justification, or should society just be more accepting of people’s individual likes and dislikes around food?

SL: Well, a psychologist would say, the first thing a picky eater needs to do is accept that that’s who she is. Whatever the reason, whether their biology or their life history, this is who they are. The next thing is how does this impact you, and do you want to change it? If it’s something you want to change, you can then deal with it in steps. Why do we need to know that it’s not a choice? Because you get judged more for your choices. Let’s face it, you also get judged for who you are, but you get judged far more if you make what is assumed to be a choice to dislike certain foods. Then it’s like, “Why would you make that choice?” But there might also be a bully-population thing going on. There seem to be more people who like food of various kinds than who dislike them; why are they the ones who get to be right?

JS: Good question!

SL: And then there are discussions about evolution, where maybe not liking a particular food could be viewed as a weakness (because in an environment where that’s what there was to eat, you’d be out of luck). Sometimes it seems like our culture treats the not-picky eaters as fitter (evolutionarily) than the picky eaters. Of course, those who like and eat everything indiscriminately are more likely to eat something that kills them, so maybe the picky eaters will be the ultimate survivors. But definitely, the scientific story does feel like it helps fend off some of the societal criticism. Vegetarians and vegans already have some cover for their eating preferences. They have reasons they can give about ethics or environmental impacts. The scientific information can give picky eaters reasons to push back that are stronger than just individual preferences. For some reason, “I just don’t like it” isn’t treated like a good reason not to eat something.