The ideal of objectivity.

In trying to figure out what ethics ought to guide scientists in their activities, we’re really asking a question about what values scientists are committed to. Arguably, something that a scientist values may not be valued as much (if at all) by the average person in that scientist’s society.

Objectivity is a value – perhaps one of the values that scientists and non-scientists most strongly associate with science. So, it’s worth thinking about how scientists understand that value, some of the challenges in meeting the ideal it sets, and some of the historical journey that was involved in objectivity becoming a central scientific value in the first place. I’ll be splitting this discussion into three posts. This post sets the stage and considers how modern scientific practitioners describe objectivity. The next post will look at objectivity (and its challenges) in the context of work being done by Renaissance anatomists. The third post will examine how the notion of objectivity was connected to the efforts of Seventeenth Century “natural philosophers” to establish a method for building reliable knowledge about the world.

First, what do we mean by objectivity?

In everyday discussions of ethics, being objective usually means applying the rules fairly and treating everyone the same rather than showing favoritism to one party or another. Is this what scientists have in mind when they voice their commitment to objectivity? Perhaps in part. It could be connected to applying “the rules” of science (i.e., the scientific method) fairly and not letting bias creep into the production of scientific knowledge.

This seems close to the characterization of good scientific practice that we see in the National Academy of Sciences and National Research Council document, “The Nature of Science.” [1] This document describes science as an activity in which hypotheses undergo rigorous tests, whereby researchers compare the predictions of the hypotheses to verifiable facts determined by observation and experiment, and findings and corrections are announced in refereed scientific publications. It states, “Although [science’s] goal is to approach true explanations as closely as possible, its investigators claim no final or permanent explanatory truths.” (38)

Note that rigorous tests, verification of facts (or the information necessary to verify them), correction of mistakes, and reliable reports of findings all depend on honesty – you can’t perform these activities by making up your results, or presenting them in a deceptive way, for example. So being objective in the sense of following good scientific methodology requires a commitment not to mislead.

But here, in “The Nature of Science,” we see hints that there are two closely related, yet distinct, meanings of “objective”. One is what anyone applying the appropriate methodology could see. The other is a picture of what the world is really like. Getting a true picture of the world (or aiming for such a picture) means seeking objectivity in the second sense – finding the true facts. Seeking out the observational data that other scientists could verify – the first sense of objectivity – is closely tied to the experimental method scientists use and their strategies for reporting their results. Presumably, applying objective methodology would be a good strategy for generating an accurate (and thus objective) picture of the world.

But we should note a tension here that’s at least as old as the tension between Plato and his student Aristotle. What exactly are the facts about the world that anyone could see? Are sense organs like eyes all we need to see them? If such facts really exist, are they enough to help us build a true picture of the world?

In the chapter “Making Observations” from his book The Scientific Attitude [2], Fred Grinnell discusses some of the challenges of seeing what there is to see. He argues that, especially in the realms science tries to probe, seeing what’s out there is not automatic. Rather, we have to learn to see the facts that are there for anyone to observe.

Grinnell describes the difficulty students have seeing cells under a light microscope, a difficulty that persists even after students work out how to use the microscope to adjust the focus. He writes:

The students’ inability to see the cells was not a technical problem. There can be technical problems, of course – as when one takes an unstained tissue section and places it under a microscope. Under these conditions it is possible to tell that something is “there,” but not precisely what. As discussed in any histology textbook, the reason is that there are few visual features of unstained tissue sections that our eyes can discriminate. As the students were studying stained specimens, however, sufficient details of the field were observable that could have permitted them to distinguish among different cells and between cells and the noncellular elements of the tissue. Thus, for these students, the cells were visible but unseen. (10-11)

Grinnell’s example suggests that seeing cells, for example, requires more than putting your eye to the eyepiece of a microscope focused on a stained sample of cells. Rather, you need to be able to recognize those bits of your visual field as belonging to a particular kind of object – and you may even need to have something like the concept of a cell to be able to identify what you are seeing as cells. At the very least, this suggests that we should amend our gloss of objective as “what anyone could see” to something more like “what anyone could see given a particular conceptual background and some training with the necessary scientific measuring devices.”

But Grinnell makes even this seem too optimistic. He notes that “seeing things one way means not seeing them another way,” which implies that there are multiple ways to interpret any given piece of the world toward which we point our sense organs. Moreover, he argues,

Each person’s previous experiences will have led to the development of particular concepts of things, which will influence what objects can be seen and what they will appear to be. As a consequence, it is not unusual for two investigators to disagree about their observations if the investigators are looking at the data according to different conceptual frameworks. Resolution of such conflicts requires that the investigators clarify for each other the concepts that they have in mind. (15)

In other words, scientists may need to share a bundle of background assumptions about the world to look at a particular piece of that world and agree on what they see. Much more is involved in seeing “what anyone can see” than meets the eye.

We’ll say more about this challenge in the next post, when we look at how Renaissance anatomists tried to build (and communicate) objective knowledge about the human body.
_____________

[1] “The Nature of Science,” in Panel on Scientific Responsibility and the Conduct of Research, National Academy of Sciences, National Academy of Engineering, Institute of Medicine. Responsible Science, Volume I: Ensuring the Integrity of the Research Process. Washington, DC: The National Academies Press, 1992.

[2] Frederick Grinnell, The Scientific Attitude. Guilford Press, 1992.

More on rudeness, civility, and the care and feeding of online conversations.

Late last month, I pondered the implications of a piece of research that was mentioned but not described in detail in a perspective piece in the January 4, 2013 issue of Science. [1] In its broad details, the research suggests that the comments that follow an online article about science — and particularly the perceived tone of the comments, whether civil or uncivil — can influence readers’ assessment of the science described in the article itself.

Today, an article by Paul Basken at The Chronicle of Higher Education shares some more details of the study:

The study, outlined on Thursday at the annual meeting of the American Association for the Advancement of Science, involved a survey of 2,338 Americans asked to read an article that discussed the risks of nanotechnology, which involves engineering materials at the atomic scale.

Of participants who had already expressed wariness toward the technology, those who read the sample article—with politely written comments at the bottom—came out almost evenly split. Nearly 43 percent said they saw low risks in the technology, and 46 percent said they considered the risks high.

But with the same article and comments that expressed the same reactions in a rude manner, the split among readers widened, with 32 percent seeing a low risk and 52 percent a high risk.

“The only thing that made a difference was the tone of the comments that followed the story,” said a co-author of the study, Dominique Brossard, a professor of life-science communication at the University of Wisconsin at Madison. The study found “a polarization effect of those rude comments,” Ms. Brossard said.

The study, conducted by researchers at Wisconsin and George Mason University, will be published in a coming issue of the Journal of Computer-Mediated Communication. It was presented at the AAAS conference during a daylong examination of how scientists communicate their work, especially online.

If you click through to read the article, you’ll notice that I was asked for comment on the findings. As you may guess, I had more to say on the paper (which is still under embargo) and its implications than ended up in the article, so I’m sharing my extended thoughts here.

First, I think these results are useful in reassuring bloggers who have been moderating comments that what they are doing is not just permissible (moderating comments is not “censorship,” since bloggers don’t have the power of the state, and folks can find all sorts of places on the Internet to state their views if any given blog denies them a soapbox) but also reasonable. Blogging with comments enabled assumes more than transmission of information; it assumes a conversation, and what kind of conversation it ends up being depends on what kind of behavior is encouraged or forbidden, and on who feels welcome or alienated.

But, there are some interesting issues that the study doesn’t seem to address, issues that I think can matter quite a lot to bloggers.

In the study, readers (lurkers) were reacting to factual information in an online posting plus the discourse about that article in the comments. As the study is constructed, it looks like that discourse is being shaped by commenters, but not by the author of the article. It seems likely to me (and worth further empirical study!) that comment sections in which the author is engaging with commenters — not just responding to the questions they ask and the views they express, but also responding to the ways that they are interacting with other commenters and to their “tone” — have a different impact on readers than comment sections where the author of the piece that is being discussed is totally absent from the scene. To put it more succinctly, comment sections where the author is present and engaged, or absent and disengaged, communicate information to lurkers, too.

Here’s another issue I don’t think the study really addresses: While blogs usually aim to communicate with lurkers as well as readers who post comments (and every piece of evidence I’ve been shown suggests that commenters tend to be a small proportion of readers), most are aiming to reach a core audience that is narrower than “everyone in the world with an internet connection”.

Sometimes what this means is that bloggers are speaking to an audience that finds unruly, contentious comment sections welcoming rather than alienating. This isn’t just the case for bloggers seeking an audience that likes to debate or to play rough.

Some blogs have communities that are intentionally uncivil towards casual expressions of sexism, racism, homophobia, etc. Pharyngula is a blog that has taken this approach, and just yesterday Chris Clarke posted a statement on “civility” there that leads with a commitment “not to fetishize civility over justice.” Setting the rules of engagement between bloggers and commenters this way means that people in groups especially affected by sexism, racism, homophobia, etc., have a haven in the blogosphere where they don’t have to waste time politely defending the notion that they are fully human, too (or swallowing their anger and frustration at having their humanity treated as a topic of debate). Yes, some people find the environment there alienating — but the people who are alienated by unquestioned biases in most other quarters of the internet (and the physical world, for that matter) are the ones being consciously welcomed into the conversation at Pharyngula, and those who don’t like the environment can find another conversation. It’s a big blogosphere. That not every potential reader feels perfectly comfortable at a blog, in other words, is not proof that the blogger is doing it wrong.

So, where do we find ourselves?

We’re in a situation where lots of people are using online venues like blogs to communicate information and viewpoints in the context of a conversation (where readers can actively engage as commenters). We have a piece of research indicating that the tenor of the commenting (as perceived by lurkers, readers who are not commenting) can communicate as much to readers as the content of the post that is the subject of the comments. And we have lots of questions still unanswered about what kinds of engagement will have what kinds of effect on what kinds of readers (and how reliably). What does this mean for those of us who blog?

I think what it means is that we have to be really reflective about what we’re trying to communicate, who we’re trying to communicate it to, and how our level of visible engagement (or disengagement) in the conversation might make a difference. We have to acknowledge that the information we have about what’s coming across to the lurkers is gappy at best, and we have to be attentive to ways to get more feedback about how successfully we’re communicating what we’re trying to communicate. We have to recognize that, given all we don’t know, we may want to shift our strategies for blogging and engaging commenters, especially if we come upon evidence that they’re not working the way we thought they were.

* * * * *
In the interests of spelling out the parameters of the conversation I’d like to have here, let me note that whether or not you like the way Pharyngula sets a tone for conversations is off topic here. You are, however, welcome to share in the comments here what you find makes you feel more or less welcome to engage with online postings, whether as a commenter or a lurker.
_____

[1] Dominique Brossard and Dietram A. Scheufele, “Science, New Media, and the Public.” Science, 4 January 2013: Vol. 339, pp. 40-41.
DOI: 10.1126/science.1160364

Some musings on Jonah Lehrer’s $20,000 “meh culpa”.

Remember some months ago when we were talking about how Jonah Lehrer was making stuff up in his “non-fiction” pop science books? This was a big enough deal that his publisher, Houghton Mifflin Harcourt, recalled print copies of Lehrer’s book Imagine, and that the media outlets for which Lehrer wrote went back through his writing for them looking for “irregularities” (like plagiarism — which one hopes is not regular, but once your trust has been abused, hopes are no longer all that durable).

Lehrer’s behavior was clearly out of bounds for anyone hoping for a shred of credibility as a journalist or non-fiction author. However, at the time, I opined in a comment:

At 31, I think Jonah Lehrer has time to redeem himself and earn back trust and stuff like that.

Well, the events of this week stand as evidence that having time to redeem oneself is not a guarantee that one will not instead dig the hole deeper.

You see, Jonah Lehrer was invited to give a talk this week at a “media learning seminar” in Miami, a talk which marked his first real public comments before a large group of journalistic peers since his fabrications and plagiarism were exposed — and a talk for which the sponsor of the conference, the Knight Foundation, paid Lehrer an honorarium of $20,000.

At the New York Times “Arts Beat” blog, Jennifer Schuessler describes Lehrer’s talk:

Mr. Lehrer … dived right in with a full-throated mea culpa. “I am the author of a book on creativity that contains several fabricated Bob Dylan quotes,” he told the crowd, which apparently could not be counted on to have followed the intense schadenfreude-laced commentary that accompanied his downfall. “I committed plagiarism on my blog, taking without credit or citation an entire paragraph from the blog of Christian Jarrett. I plagiarized from myself. I lied to a journalist named Michael Moynihan to cover up the Dylan fabrications.”

“My mistakes have caused deep pain to those I care about,” he continued. “I’m constantly remembering all the people I’ve hurt and let down.”

If the introduction had the ring of an Alcoholics Anonymous declaration, before too long Mr. Lehrer was surrendering to the higher power of scientific research, cutting back and forth between his own story and the kind of scientific terms — “confirmation bias,” “anchoring” — he helped popularize. Within minutes he had pivoted from his own “arrogance” and other character flaws to the article on flawed forensic science within the F.B.I. that he was working on when his career began unraveling, at one point likening his own corner-cutting to the overconfidence of F.B.I. scientists who fingered the wrong suspect in the 2004 Madrid bombings.

“If we try to hide our mistakes, as I did, any error can become a catastrophe,” he said, adding: “The only way to prevent big failures is a willingness to consider every little one.”

Not everyone shares the view that Lehrer’s apology constituted a full-throated mea culpa, though. At Slate, Daniel Engber shared this assessment:

Lehrer has been humbled, and yet nearly every bullet in his speech managed to fire in both directions. It was a wild display of self-negation, of humble arrogance and arrogant humility. What are these “standard operating procedures” according to which Lehrer will now do his work? He says he’ll be more scrupulous in his methods—even recording and transcribing interviews(!)—but in the same breath promises that other people will be more scrupulous of him. “I need my critics to tell me what I’ve gotten wrong,” he said, as if to blame his adoring crowds at TED for past offenses. Then he promised that all his future pieces would be fact-checked, which is certainly true but hardly indicative of his “getting better” (as he puts it, in the clammy, familiar rhetoric of self-help).

What remorse Lehrer had to share was couched in elaborate and perplexing disavowals. He tried to explain his behavior as, first of all, a hazard of working in an expert field. Like forensic scientists who misjudge fingerprints and DNA analyses, and whose failings Lehrer elaborated on in his speech, he was blind to his own shortcomings. These two categories of mistake hardly seem analogous—lab errors are sloppiness, making up quotes is willful distortion—yet somehow the story made Lehrer out to be a hapless civil servant, a well-intentioned victim of his wonky and imperfect brain.

(Bold emphasis added.)

At Forbes, Jeff Bercovici noted:

Ever the original thinker, even when he’s plagiarizing from press releases, Lehrer apologized abjectly for his actions but pointedly avoided promising to become a better person. “These flaws are a basic part of me,” he said. “They’re as fundamental to me as the other parts of me I’m not ashamed of.”

Still, Lehrer said he is aiming to return to the world of journalism, and has been spending several hours a day writing. “It’s my hope that someday my transgressions might be forgiven,” he said.

How, then, does he propose to bridge the rather large credibility gap he faces? By the methods of the technocrat, not the ethicist: “What I clearly need is a new set of rules, a stricter set of standard operating procedures,” he said. “If I’m lucky enough to write again, then whatever I write will be fully fact-checked and footnoted. Every conversation will be fully taped and transcribed.”

(Bold emphasis added.)

How do I see Jonah Lehrer’s statement? The title of this post should give you a clue. Like most bloggers, I took five years of Latin.* “Mea culpa” would describe a statement wherein the speaker (in this case, Jonah Lehrer) actually acknowledged that the blame was his for the bad thing of which he was a part. From what I can gather, Lehrer hasn’t quite done that.

Let the record reflect that the “new set of rules” and “stricter set of standard operating procedures” Lehrer described in his talk are not new, nor were they non-standard when Lehrer was falsifying and plagiarizing to build his stories. It’s not that Jonah Lehrer’s unfortunate trajectory shed light on the need for these standards, and now the journalistic community (and we consumers of journalism) can benefit from their creation. Serious journalists were already using these standards.

Jonah Lehrer, however, decided he didn’t need to use them.

This does have a taste of Leona Helmsleyesque “rules are for the little people” to it. And, I think it’s important to note that Lehrer gave the outward appearance of following the rules. He did not stand up and say, “I think these rules are unnecessary to good journalistic practice, and here’s why…” Rather, he quietly excused himself from following them.

But now, Lehrer tells us, he recognizes the importance of the rules.

That’s well and good. However, the rules he’s pointing to — taping and transcribing interviews, fact-checking claims and footnoting sources — seem designed to prevent unwitting mistakes. They could head off misremembering what interviewees said, miscommunicating whose words or insights animate part of a story, getting the facts wrong accidentally. It’s less clear that these rules can head off willful lies and efforts to mislead — which is to say, the kind of misdeeds that got Lehrer into trouble.

Moreover, that he now accepts these rules after being caught lying does not indicate that Jonah Lehrer is now especially sage about journalism. It’s remedial work.

Let’s move on from his endorsement (finally) of standards of journalistic practice to the constellation of cognitive biases and weaknesses of will that Jonah Lehrer seems to be trying to saddle with the responsibility for his lies.

Recognizing cognitive biases is a good thing. It is useful to the extent that it helps us to avoid getting fooled by them. You’ll recall that knowledge-builders, whether scientists or journalists, are supposed to do their best to avoid being fooled.

But, what Lehrer did is hard to cast in terms of ignoring strong cognitive biases. He made stuff up. He fabricated quotes. He presented other authors’ writing as his own. When confronted about his falsifications, he lied. Did his cognitive biases do all this?

What Jonah Lehrer seems to be sidestepping in his “meh culpa” is the fact that, when he had to make choices about whether to work with the actual facts or instead to make stuff up, about whether to write his own pieces (or at least to properly cite the material from others that he used) or to plagiarize, about whether to be honest about what he’d done when confronted or to lie some more, he decided to be dishonest.

If we’re to believe this was a choice his cognitive biases made for him, then his seem much more powerful (and dangerous) than the garden-variety cognitive biases most grown-up humans have.

It seems to me more plausible that Lehrer’s problem was a weakness of will. It’s not that he didn’t know what he was doing was wrong — he wasn’t fooled by his brain into believing it was OK, or else he wouldn’t have tried to conceal it. Instead, despite recognizing the wrongness of his deeds, he couldn’t muster the effort not to do them.

If Jonah Lehrer cannot recognize this — that it frequently requires conscious effort to do the right thing — it’s hard to believe he’ll be committed to putting that effort into doing the right (journalistic) thing going forward. Verily, given the trust he’s burned with his journalistic colleagues, he can expect that proving himself to be reformed will require extra effort.

But maybe what Lehrer is claiming is something different. Maybe he’s denying that he understood the right thing to do and then opted not to do it because it seemed like too much work. Maybe he’s claiming instead that he just couldn’t resist the temptation (whether of rule-breaking for its own sake or of rule-breaking as the most efficient route to secure the prestige he craved). In other words, maybe he’s saying he was literally powerless, that he could not help committing those misdeeds.

If that’s Lehrer’s claim — and if, in addition, he’s claiming that the piece of his cognitive apparatus that was so vulnerable to temptation that it seized control to make him do wrong is as integral to who Jonah Lehrer is as his cognitive biases are — the whole rehabilitation thing may be a non-starter. If this is how Lehrer understands why he did wrong, he seems to be identifying himself as a wrongdoer with a high probability of reoffending.

If he can parlay that into more five-figure speaker fees, maybe that will be a decent living for Jonah Lehrer, but it will be a big problem for the community of journalists and for the public that trusts journalists as generally reliable sources of information.

Weakness is part of Lehrer, as it is for all of us, but it is not a part he is acknowledging he could control or counteract by concerted effort, or by asking for help from others.

It’s part of him, but not in a way that makes him inclined to actually take responsibility or to acknowledge that he could have done otherwise under the circumstances.

If he couldn’t have done otherwise — and if he might not be able to when faced with similar temptation in the future — then Jonah Lehrer has no business in journalism. Until he can recognize his own agency, and the responsibility that attaches to it, the most he has to offer is one more cautionary tale.
_____
*Fact check: I have absolutely no idea how many other bloggers took five years of Latin. My evidence-free guess is that it’s not just me.

Intuitions, scientific methodology, and the challenge of not getting fooled.

At Context and Variation, Kate Clancy has posted some advice for researchers in evolutionary psychology who want to build reliable knowledge about the phenomena they’re trying to study. This advice, of course, is prompted in part by methodology that is not so good for scientific knowledge-building. Kate writes:

The biggest problem, to my mind, is that so often the conclusions of the bad sort of evolutionary psychology match the stereotypes and cultural expectations we already hold about the world: more feminine women are more beautiful, more masculine men more handsome; appearance is important to men while wealth is important to women; women are prone to flighty changes in political and partner preference depending on the phase of their menstrual cycles. Rather than clue people in to problems with research design or interpretation, this alignment with stereotype further confirms the study. Variation gets erased: in bad evolutionary psychology, there are only straight people, and everyone wants the same things in life. …

No one should ever love their idea so much that it becomes detached from reality.

It’s a lovely post about the challenges of good scientific methodology when studying human behavior (and why it matters to more than just scientists), so you should read the whole thing.

Kate’s post also puts me in mind of some broader issues about which scientists should remind themselves from time to time to keep themselves honest. I’m putting some of those on the table here.

Let’s start with a quotable quote from Richard Feynman:

The first principle is that you must not fool yourself, and you are the easiest person to fool.

Scientists are trying to build reliable knowledge about the world from information that they know is necessarily incomplete. There are many ways to interpret the collections of empirical data we have on hand — indeed, many contradictory ways to interpret them. This means that lots of the possible interpretations will be wrong.

You don’t want to draw the wrong conclusion from the available data, not if you can possibly avoid it. Feynman’s “first principle” is noting that we need to be on guard against letting ourselves be fooled by wrong conclusions — and on guard against the peculiar ways that we are more vulnerable to being fooled.

This means we have to talk about our attachment to intuitions. All scientists have intuitions. They surely help in motivating questions to ask about the world and strategies for finding good answers to them. But intuitions, no matter how strong, are not the same as empirical evidence.

Making things more challenging, our strong intuitions can shape what we take to be the empirical evidence. They can play a role in which results we set aside because they “couldn’t be right,” in which features of a system we pay attention to and which we ignore, in which questions we bother to ask in the first place. If we don’t notice the operation of our intuitions, and the way they impact our view of the empirical evidence, we’re making it easier to get fooled. Indeed, if our intuitions are very strong, we’re essentially fooling ourselves.

As if this weren’t enough, we humans (and, by extension, human scientists) are not always great at recognizing when we are in the grips of our intuitions. It can feel like we’re examining a phenomenon to answer a question and refraining from making any assumptions to guide our inquiry, but chances are it’s not a feeling we should trust.

This is not to say that our intuitions are guaranteed safe haven from our noticing them. We can become aware of them and try to neutralize the extent to which they, rather than the empirical evidence, are driving the scientific story — but to do this, we tend to need help from people who have conflicting intuitions about the same bit of the world. This is a good methodological reason to take account of the assumptions and intuitions of others, especially when they conflict with our own.

What happens if there are intuitions about which we all agree — assumptions we are making (and may well be unaware that we’re making, because they seem so bleeding obvious) with which no one disagrees? I don’t know that there are any such universal human intuitions. It seems unlikely to me, but I can’t rule out the possibility. If they do exist, what would they mean for our efforts at scientific knowledge-building?

First, we would probably want to recognize that the universality of an intuition still wouldn’t make it into independent empirical evidence. Even if it had been the case, prior to Galileo, or Copernicus, or Aristarchus of Samos, that every human took it as utterly obvious that Earth is stationary, we recognize that this intuition could still be wrong. As it happened, it was an intuition that was questioned, though not without serious resistance.

Developing a capacity to question the obvious, and also to recognize and articulate what it is we’re taking to be obvious in order that we might question it, seems like a crucial skill for scientists to cultivate.

But, as I think comes out quite clearly in Kate’s post, there are some intuitions we have that, even once we’ve recognized them, may be extremely difficult to subject to empirical test. This doesn’t mean that the questions connected in our heads to these intuitions are outside the realm of scientific inquiry, but it would be foolish not to notice that it’s likely to be extremely difficult to find good scientific answers to these questions. We need to be wary of the way our intuitions try to stack the evidential deck. We need to acknowledge that the very fact of our having strong intuitions doesn’t count as empirical evidence in favor of them. We need to come to grips with the possibility that our intuitions could be wrong — perhaps to the extent that we recognize that empirical results that seem to support our intuitions require extra scrutiny, just to be sure.

To do any less is to ask to be fooled, and that’s the outcome scientific knowledge-building is trying to avoid.