The challenges of objectivity: lessons from anatomy.

In the last post, we talked about objectivity as a scientific ideal aimed at building a reliable picture of what the world is actually like. We also noted that this goal travels closely with the notion of objectivity as what anyone applying the appropriate methodology could see. But, as we saw, it takes a great deal of scientific training to learn to see what anyone could see.

The problem of how to see what is really there is not a new one for scientists. In her book The Scientific Renaissance: 1450-1630 [1], Marie Boas Hall describes how this issue presented itself to Renaissance anatomists. These anatomists endeavored to learn about the parts of the human body that could be detected with the naked eye and the help of a scalpel.

You might think that the subject matter of anatomy would be more straightforward for scientists to “see” than the cells Fred Grinnell describes [2] (discussed in the last post), which require preparation and staining and the twiddling of knobs on microscopes. However, the most straightforward route to gross anatomical knowledge — dissections of cadavers — had its own challenges. For one thing, cadavers (especially human cadavers) were often in short supply. When they were available, anatomists hardly ever performed solitary dissections of them. Rather, dissections were performed, quite literally, for an audience of scientific students, generally with a surgeon doing the cutting while a professor stood nearby and read aloud from an anatomical textbook describing the organs, muscles, or bones encountered at each stage of the dissection process. The hope was that the features described in the text would match the features being revealed by the surgeon doing the dissecting, but there were doubtless instances where the audio track (as it were) was not quite in sync with the visual. Also, as a practical matter, before the invention of refrigeration, dissections were seasonal, performed in the winter rather than the warmer months to retard the cadaver’s decomposition. This put limits on how much anatomical study a person could cram into any given year.

In these conditions, most of the scientists who studied anatomy logged many more hours watching dissections than performing dissections themselves. In other words, they were getting information about the systems of interest by seeing rather than by doing — and they weren’t always seeing those dissections from the good seats. Thus, we shouldn’t be surprised that anatomists greeted the invention of the printing press by producing a number of dissection guides and anatomy textbooks.

What’s the value of a good textbook? It shares detailed information compiled by another scientist, sometimes over the course of years of study, yet you can consume that information in a more timely fashion. If it has diagrams, it can give you a clearer view of what there is to observe (albeit through someone else’s eyes) than you may be able to get from the cheap seats at a dissection. And, if you should be so lucky as to get your own specimens for study, a good textbook can guide your examination of the new material before you, helping you deal with the specimen in a way that lets you see more of what there is to see (including spatial relations and points of attachment) rather than messing it up with sloppy dissection technique.

Among the most widely used anatomy texts in the Renaissance were “uncorrupted” translations of On the Use of the Parts and Anatomical Procedures by the ancient Greek anatomist Galen, and the groundbreaking new text On the Fabric of the Human Body (published in 1543) by Vesalius. The revival of Galen fit into a pattern of Renaissance celebration of the wisdom of the ancients rather than setting out to build “new” knowledge, and Hall describes the attitude of Renaissance anatomists toward his work as “Galen-worship.” Had Galen been alive during the Renaissance, he might well have been irritated at the extent to which his discussions of anatomy — based on dissections of animals, not human cadavers — were taken to be authoritative. Galen himself, as an advocate of empiricism, would have urged other anatomists to “dissect with a fresh eye,” attentive to what the book of nature (as written on the bodies of creatures to be dissected) could teach them.

As it turns out, this may be the kind of thing that’s easier to urge than to do. Hall asks,

[W]hat scientific apprentice has not, many times since the sixteenth century, preferred to trust the authoritative text rather than his own unskilled eye? (137)

Once again, it requires training to be able to see what there is to see. And surely someone who has written textbooks on the subject (even centuries before) has more training in how to see than does the novice leaning on the textbook.

Of course, the textbook becomes part of the training in how to see, which can, ironically, make it harder to be sure that what you are seeing is an accurate reflection of the world, not just of the expectations you bring to your observations of it.

The illustrations in the newer anatomy texts made it seem less urgent to anatomy students that they observe (or participate in) actual dissections for themselves. As the technique for mass-produced illustrations got better (especially with the shift from woodcuts to engravings), the illustrators could include much more detail in their images. Paradoxically, this could be a problem, as the illustrator was usually someone other than the scientist who wrote the book, and the author and illustrator were not always in close communication as the images were produced. Given a visual representation of what there is to observe and a description of what there is to observe in the text, which would a student trust more?

Bruce Bower discusses this sort of problem in his article “Objective Visions,” [3] describing the procedures used by Dutch anatomist Bernhard Albinus in the mid-1700s to create an image of the human skeleton. Bower writes:

Albinus carefully cleans, reassembles, and props up a complete male skeleton; checks the position of each bone in comparison with observations of an extremely skinny man hired to stand naked next to the skeleton; he calculates the exact spot at which an artist must sit to view the skeleton’s proportions accurately; and he covers engraving plates with cross-hatched grids so that images can be drawn square-by-square and thus be reproduced more reliably. (360)

Here, it sounds like Albinus is trying hard to create an image that accurately conveys what there is to see about the skeleton and its spatial relations. The methodology seems designed to make the image-creation faithful to the particulars of the actual specimen — in a word, objective. But, Bower continues:

After all that excruciating attention to detail, the eminent anatomist announces that his atlas portrays not a real skeleton, but an idealized version. Albinus has dictated alterations to the artist. The scrupulously assembled model is only a springboard for insights into a more “perfect” representation of the human skeleton, visible only to someone with Albinus’ anatomical acumen. (360)

Here, Albinus was trying to abstract away from the peculiarities of the particular skeleton he had staged as a model for observation in order to describe what he saw as the real thing. This is a decidedly Platonist move. Plato’s view was that the stuff of our world consists largely of imperfect material instantiations of immaterial ideal forms — and that science makes the observations it does of many examples of material stuff to get a handle on those ideal forms.

If you know the allegory of the cave, however, you know that Plato didn’t put much faith in feeble human sense organs as a route to grasping the forms. The very imperfection of those material instantiations that our sense organs apprehend would be bound to mislead us about the forms. Instead, Plato thought we’d need to use the mind to grasp the forms.

This is a crucial juncture where Aristotle parted ways with Plato. Aristotle still thought that there was something like the forms, but he rejected Plato’s full-strength rationalism in favor of an empirical approach to grasping them. If you wanted to get a handle on the form of “horse,” for example, Aristotle thought the thing to do was to examine lots of actual specimens of horse and to identify the essence they all have in common. The Aristotelian approach probably feels more sensible to modern scientists than the Platonist alternative, but note that we’re still talking about arriving at a description of “horse-ness” that transcends the observable features of any particular horse.

Whether you’re a Platonist, an Aristotelian, or something else, it seems pretty clear that scientists do decide that some features of the systems they’re studying are crucial and others are not. They distinguish what they take to be background from what they take to be the thing they’re observing. Rather than presenting every single squiggle in their visual field, they abstract away to present the piece of the world they’re interested in talking about.

And this is where the collaboration between anatomist and illustrator gets ticklish. What happens if the engraver is abstracting away from the observed particulars differently than the anatomist would? As Hall notes, the engravings in Renaissance anatomy texts did not always accurately represent what the accompanying text described. (Nor, for that matter, did the textual descriptions always get the anatomical features right — Renaissance anatomists, Vesalius included, managed to repeat some anatomical mistakes that went back to Galen, likely because they “saw” their specimens through a lens of expectations shaped by what Galen said they were going to see.)

On top of this, the fact that artists like Leonardo Da Vinci studied anatomy to improve their artistic representations of the human form spilled back to influence Renaissance scientific illustrators. These illustrators, as much as their artist contemporaries, may have looked beyond the spatial relations between bones or muscles or internal organs for hidden beauty in their subjects. While this resulted in striking illustrations, it also meant that their engravings were not always accurate representations of the cadavers that were officially their subjects.

These factors conspired to produce visually arresting anatomy texts that exerted an influence on how the anatomy students using them understood the subject, even when these students went beyond the texts to perform their own dissections. Hall writes,

[I]t is often quite easy to “see” what a textbook or manual says should be seen. (141)

Indeed, faced with a conflict between the evidence of one’s eyes pointed at a cadaver and the evidence of one’s eyes pointed at an anatomical diagram, one might easily conclude that the cadaver in question was a weird variant while the diagram captured the “standard” configuration.

Bower’s article describes efforts scientists made to come up with visual representations that were less subjective. Bower writes:

Scientists of the 19th century rapidly adopted a new generation of devices that rendered images in an automatic fashion. For instance, the boxy contraption known as the camera obscura projected images of a specimen, such as a bone or a plant, onto a surface where a researcher could trace its form onto a piece of paper. Photography soon took over and further diminished human involvement in image-making. … Researchers explicitly equated the mechanical representation of items in the natural world with a moral code of self-restraint. … A blurry photograph of a star or ragged edges on a slide of tumor tissues were deemed preferable to tidy, idealized portraits. (361)

Our naïve picture of objectivity may encourage us to think that seeing is believing, and that mechanically captured images are more reliable than those rendered by the hand of a (subjective) human, but it’s important to remember that pictures — even photographs — have points of view, depend on choices made about the conditions of their creation, and can be used as arguments to support one particular way of seeing the world over another.

In the next post, we’ll look at how Seventeenth Century “natural philosophers” labored to establish a general-use method for building reliable knowledge about the world, and at how the notion of objectivity was connected to these efforts, and to the recognizable features of “the scientific method” that resulted.
_____________

[1] Marie Boas Hall, The Scientific Renaissance: 1450-1630. Dover, 1994.

[2] Frederick Grinnell, The Scientific Attitude. Guilford Press, 1992.

[3] Bruce Bower, “Objective Visions,” Science News. 5 December 1998: Vol. 154, pp. 360-362

The ideal of objectivity.

In trying to figure out what ethics ought to guide scientists in their activities, we’re really asking a question about what values scientists are committed to. Arguably, something that a scientist values may not be valued as much (if at all) by the average person in that scientist’s society.

Objectivity is a value – perhaps one of the values that scientists and non-scientists most strongly associate with science. So, it’s worth thinking about how scientists understand that value, some of the challenges in meeting the ideal it sets, and some of the historical journey that was involved in objectivity becoming a central scientific value in the first place. I’ll be splitting this discussion into three posts. This post sets the stage and considers how modern scientific practitioners describe objectivity. The next post will look at objectivity (and its challenges) in the context of work being done by Renaissance anatomists. The third post will examine how the notion of objectivity was connected to the efforts of Seventeenth Century “natural philosophers” to establish a method for building reliable knowledge about the world.

First, what do we mean by objectivity?

In everyday discussions of ethics, being objective usually means applying the rules fairly and treating everyone the same rather than showing favoritism to one party or another. Is this what scientists have in mind when they voice their commitment to objectivity? Perhaps in part. It could be connected to applying “the rules” of science (i.e., the scientific method) fairly and not letting bias creep into the production of scientific knowledge.

This seems close to the characterization of good scientific practice that we see in the National Academy of Sciences and National Research Council document, “The Nature of Science.” [1] This document describes science as an activity in which hypotheses undergo rigorous tests, whereby researchers compare the predictions of the hypotheses to verifiable facts determined by observation and experiment, and findings and corrections are announced in refereed scientific publications. It states, “Although [science’s] goal is to approach true explanations as closely as possible, its investigators claim no final or permanent explanatory truths.” (38)

Note that rigorous tests, verification of facts (or the information necessary to verify them), correction of mistakes, and reliable reports of findings all depend on honesty – you can’t perform these activities by making up your results, or presenting them in a deceptive way, for example. So being objective in the sense of following good scientific methodology requires a commitment not to mislead.

But here, in “The Nature of Science,” we see hints that there are two closely related, yet distinct, meanings of “objective”. One is what anyone applying the appropriate methodology could see. The other is a picture of what the world is really like. Getting a true picture of the world (or aiming for such a picture) means seeking objectivity in the second sense — finding the true facts. Seeking out the observational data that other scientists could verify — the first sense of objectivity — is closely tied to the experimental method scientists use and their strategies for reporting their results. Presumably, applying objective methodology would be a good strategy for generating an accurate (and thus objective) picture of the world.

But we should note a tension here that’s at least as old as the tension between Plato and his student Aristotle. What exactly are the facts about the world that anyone could see? Are sense organs like eyes all we need to see them? If such facts really exist, are they enough to help us build a true picture of the world?

In the chapter “Making Observations” from his book The Scientific Attitude [2], Fred Grinnell discusses some of the challenges of seeing what there is to see. He argues that, especially in the realms science tries to probe, seeing what’s out there is not automatic. Rather, we have to learn to see the facts that are there for anyone to observe.

Grinnell describes the difficulty students have seeing cells under a light microscope, a difficulty that persists even after students work out how to use the microscope to adjust the focus. He writes:

The students’ inability to see the cells was not a technical problem. There can be technical problems, of course — as when one takes an unstained tissue section and places it under a microscope. Under these conditions it is possible to tell that something is “there,” but not precisely what. As discussed in any histology textbook, the reason is that there are few visual features of unstained tissue sections that our eyes can discriminate. As the students were studying stained specimens, however, sufficient details of the field were observable that could have permitted them to distinguish among different cells and between cells and the noncellular elements of the tissue. Thus, for these students, the cells were visible but unseen. (10-11)

Grinnell’s example suggests that seeing cells, for example, requires more than putting your eye to the eyepiece of a microscope focused on a stained sample of cells. Rather, you need to be able to recognize those bits of your visual field as belonging to a particular kind of object — and you may even need to have something like the concept of a cell to be able to identify what you are seeing as cells. At the very least, this suggests that we should amend our gloss of objective as “what anyone could see” to something more like “what anyone could see given a particular conceptual background and some training with the necessary scientific measuring devices.”

But Grinnell makes even this seem too optimistic. He notes that “seeing things one way means not seeing them another way,” which implies that there are multiple ways to interpret any given piece of the world toward which we point our sense organs. Moreover, he argues,

Each person’s previous experiences will have led to the development of particular concepts of things, which will influence what objects can be seen and what they will appear to be. As a consequence, it is not unusual for two investigators to disagree about their observations if the investigators are looking at the data according to different conceptual frameworks. Resolution of such conflicts requires that the investigators clarify for each other the concepts that they have in mind. (15)

In other words, scientists may need to share a bundle of background assumptions about the world to look at a particular piece of that world and agree on what they see. Much more is involved in seeing “what anyone can see” than meets the eye.

We’ll say more about this challenge in the next post, when we look at how Renaissance anatomists tried to build (and communicate) objective knowledge about the human body.
_____________

[1] “The Nature of Science,” in Panel on Scientific Responsibility and the Conduct of Research, National Academy of Sciences, National Academy of Engineering, Institute of Medicine. Responsible Science, Volume I: Ensuring the Integrity of the Research Process. Washington, DC: The National Academies Press, 1992.

[2] Frederick Grinnell, The Scientific Attitude. Guilford Press, 1992.

More on rudeness, civility, and the care and feeding of online conversations.

Late last month, I pondered the implications of a piece of research that was mentioned but not described in detail in a perspective piece in the January 4, 2013 issue of Science. [1] In its broad details, the research suggests that the comments that follow an online article about science — and particularly the perceived tone of the comments, whether civil or uncivil — can influence readers’ assessment of the science described in the article itself.

Today, an article by Paul Basken at The Chronicle of Higher Education shares some more details of the study:

The study, outlined on Thursday at the annual meeting of the American Association for the Advancement of Science, involved a survey of 2,338 Americans asked to read an article that discussed the risks of nanotechnology, which involves engineering materials at the atomic scale.

Of participants who had already expressed wariness toward the technology, those who read the sample article—with politely written comments at the bottom—came out almost evenly split. Nearly 43 percent said they saw low risks in the technology, and 46 percent said they considered the risks high.

But with the same article and comments that expressed the same reactions in a rude manner, the split among readers widened, with 32 percent seeing a low risk and 52 percent a high risk.

“The only thing that made a difference was the tone of the comments that followed the story,” said a co-author of the study, Dominique Brossard, a professor of life-science communication at the University of Wisconsin at Madison. The study found “a polarization effect of those rude comments,” Ms. Brossard said.

The study, conducted by researchers at Wisconsin and George Mason University, will be published in a coming issue of the Journal of Computer-Mediated Communication. It was presented at the AAAS conference during a daylong examination of how scientists communicate their work, especially online.

If you click through to read the article, you’ll notice that I was asked for comment on the findings. As you may guess, I had more to say on the paper (which is still under embargo) and its implications than ended up in the article, so I’m sharing my extended thoughts here.

First, I think these results are useful in reassuring bloggers who have been moderating comments that what they are doing is not just permissible (moderating comments is not “censorship,” since bloggers don’t have the power of the state, and folks can find all sorts of places on the Internet to state their views if any given blog denies them a soapbox) but also reasonable. Blogging with comments enabled assumes more than transmission of information; it assumes a conversation, and what kind of conversation it ends up being depends on what kind of behavior is encouraged or forbidden, and on who feels welcome or alienated.

But, there are some interesting issues that the study doesn’t seem to address, issues that I think can matter quite a lot to bloggers.

In the study, readers (lurkers) were reacting to factual information in an online posting plus the discourse about that article in the comments. As the study is constructed, it looks like that discourse is being shaped by commenters, but not by the author of the article. It seems likely to me (and worth further empirical study!) that comment sections in which the author is engaging with commenters — not just responding to the questions they ask and the views they express, but also responding to the ways that they are interacting with other commenters and to their “tone” — have a different impact on readers than comment sections where the author of the piece that is being discussed is totally absent from the scene. To put it more succinctly, comment sections where the author is present and engaged, or absent and disengaged, communicate information to lurkers, too.

Here’s another issue I don’t think the study really addresses: While blogs usually aim to communicate with lurkers as well as readers who post comments (and every piece of evidence I’ve been shown suggests that commenters tend to be a small proportion of readers), most are aiming to reach a core audience that is narrower than “everyone in the world with an internet connection”.

Sometimes what this means is that bloggers are speaking to an audience that finds comment sections that look unruly and contentious to be welcoming, rather than alienating. This isn’t just the case for bloggers seeking an audience that likes to debate or to play rough.

Some blogs have communities that are intentionally uncivil towards casual expressions of sexism, racism, homophobia, etc. Pharyngula is a blog that has taken this approach, and just yesterday Chris Clarke posted a statement on “civility” there that leads with a commitment “not to fetishize civility over justice.” Setting the rules of engagement between bloggers and posters this way means that people in groups especially affected by sexism, racism, homophobia, etc., have a haven in the blogosphere where they don’t have to waste time politely defending the notion that they are fully human, too (or swallowing their anger and frustration at having their humanity treated as a topic of debate). Yes, some people find the environment there alienating — but the people who are alienated by unquestioned biases in most other quarters of the internet (and the physical world, for that matter) are the ones being consciously welcomed into the conversation at Pharyngula, and those who don’t like the environment can find another conversation. It’s a big blogosphere. That not every potential reader feels perfectly comfortable at a blog, in other words, is not proof that the blogger is doing it wrong.

So, where do we find ourselves?

We’re in a situation where lots of people are using online venues like blogs to communicate information and viewpoints in the context of a conversation (where readers can actively engage as commenters). We have a piece of research indicating that the tenor of the commenting (as perceived by lurkers, readers who are not commenting) can communicate as much to readers as the content of the post that is the subject of the comments. And we have lots of questions still unanswered about what kinds of engagement will have what kinds of effect on what kinds of readers (and how reliably). What does this mean for those of us who blog?

I think what it means is that we have to be really reflective about what we’re trying to communicate, who we’re trying to communicate it to, and how our level of visible engagement (or disengagement) in the conversation might make a difference. We have to acknowledge that we have information that’s gappy at best about what’s coming across to the lurkers, and be attentive to ways to get more feedback about how successfully we’re communicating what we’re trying to communicate. We have to recognize that, given all we don’t know, we may want to shift our strategies for blogging and engaging commenters, especially if we come upon evidence that they’re not working the way we thought they were.

* * * * *
In the interests of spelling out the parameters of the conversation I’d like to have here, let me note that whether or not you like the way Pharyngula sets a tone for conversations is off topic here. You are, however, welcome to share in the comments here what you find makes you feel more or less welcome to engage with online postings, whether as a commenter or a lurker.
_____

[1] Dominique Brossard and Dietram A. Scheufele, “Science, New Media, and the Public.” Science 4 January 2013:Vol. 339, pp. 40-41.
DOI: 10.1126/science.1160364

Some musings on Jonah Lehrer’s $20,000 “meh culpa”.

Remember some months ago when we were talking about how Jonah Lehrer was making stuff up in his “non-fiction” pop science books? This was a big enough deal that his publisher, Houghton Mifflin Harcourt, recalled print copies of Lehrer’s book Imagine, and that the media outlets for which Lehrer wrote went back through his writing for them looking for “irregularities” (like plagiarism — which one hopes is not regular, but once your trust has been abused, hopes are no longer all that durable).

Lehrer’s behavior was clearly out of bounds for anyone hoping for a shred of credibility as a journalist or non-fiction author. However, at the time, I opined in a comment:

At 31, I think Jonah Lehrer has time to redeem himself and earn back trust and stuff like that.

Well, the events of this week stand as evidence that having time to redeem oneself is not a guarantee that one will not instead dig the hole deeper.

You see, Jonah Lehrer was invited to give a talk this week at a “media learning seminar” in Miami, a talk which marked his first real public comments before a large group of journalistic peers since his fabrications and plagiarism were exposed — and a talk for which the sponsor of the conference, the Knight Foundation, paid Lehrer an honorarium of $20,000.

At the New York Times “Arts Beat” blog, Jennifer Schuessler describes Lehrer’s talk:

Mr. Lehrer … dived right in with a full-throated mea culpa. “I am the author of a book on creativity that contains several fabricated Bob Dylan quotes,” he told the crowd, which apparently could not be counted on to have followed the intense schadenfreude-laced commentary that accompanied his downfall. “I committed plagiarism on my blog, taking without credit or citation an entire paragraph from the blog of Christian Jarrett. I plagiarized from myself. I lied to a journalist named Michael Moynihan to cover up the Dylan fabrications.”

“My mistakes have caused deep pain to those I care about,” he continued. “I’m constantly remembering all the people I’ve hurt and let down.”

If the introduction had the ring of an Alcoholics Anonymous declaration, before too long Mr. Lehrer was surrendering to the higher power of scientific research, cutting back and forth between his own story and the kind of scientific terms — “confirmation bias,” “anchoring” — he helped popularize. Within minutes he had pivoted from his own “arrogance” and other character flaws to the article on flawed forensic science within the F.B.I. that he was working on when his career began unraveling, at one point likening his own corner-cutting to the overconfidence of F.B.I. scientists who fingered the wrong suspect in the 2004 Madrid bombings.

“If we try to hide our mistakes, as I did, any error can become a catastrophe,” he said, adding: “The only way to prevent big failures is a willingness to consider every little one.”

Not everyone shares the view that Lehrer’s apology constituted a full-throated mea culpa, though. At Slate, Daniel Engber shared this assessment:

Lehrer has been humbled, and yet nearly every bullet in his speech managed to fire in both directions. It was a wild display of self-negation, of humble arrogance and arrogant humility. What are these “standard operating procedures” according to which Lehrer will now do his work? He says he’ll be more scrupulous in his methods—even recording and transcribing interviews(!)—but in the same breath promises that other people will be more scrupulous of him. “I need my critics to tell me what I’ve gotten wrong,” he said, as if to blame his adoring crowds at TED for past offenses. Then he promised that all his future pieces would be fact-checked, which is certainly true but hardly indicative of his “getting better” (as he puts it, in the clammy, familiar rhetoric of self-help).

What remorse Lehrer had to share was couched in elaborate and perplexing disavowals. He tried to explain his behavior as, first of all, a hazard of working in an expert field. Like forensic scientists who misjudge fingerprints and DNA analyses, and whose failings Lehrer elaborated on in his speech, he was blind to his own shortcomings. These two categories of mistake hardly seem analogous—lab errors are sloppiness, making up quotes is willful distortion—yet somehow the story made Lehrer out to be a hapless civil servant, a well-intentioned victim of his wonky and imperfect brain.

(Bold emphasis added.)

At Forbes, Jeff Bercovici noted:

Ever the original thinker, even when he’s plagiarizing from press releases, Lehrer apologized abjectly for his actions but pointedly avoided promising to become a better person. “These flaws are a basic part of me,” he said. “They’re as fundamental to me as the other parts of me I’m not ashamed of.”

Still, Lehrer said he is aiming to return to the world of journalism, and has been spending several hours a day writing. “It’s my hope that someday my transgressions might be forgiven,” he said.

How, then, does he propose to bridge the rather large credibility gap he faces? By the methods of the technocrat, not the ethicist: “What I clearly need is a new set of rules, a stricter set of standard operating procedures,” he said. “If I’m lucky enough to write again, then whatever I write will be fully fact-checked and footnoted. Every conversation will be fully taped and transcribed.”

(Bold emphasis added.)

How do I see Jonah Lehrer’s statement? The title of this post should give you a clue. Like most bloggers, I took five years of Latin.* “Mea culpa” would describe a statement wherein the speaker (in this case, Jonah Lehrer) actually acknowledged that the blame was his for the bad thing of which he was a part. From what I can gather, Lehrer hasn’t quite done that.

Let the record reflect that the “new set of rules” and “stricter set of standard operating procedures” Lehrer described in his talk are not new, nor were they non-standard when Lehrer was falsifying and plagiarizing to build his stories. It’s not that Jonah Lehrer’s unfortunate trajectory shed light on the need for these standards, and now the journalistic community (and we consumers of journalism) can benefit from their creation. Serious journalists were already using these standards.

Jonah Lehrer, however, decided he didn’t need to use them.

This does have a taste of Leona Helmsleyesque “rules are for the little people” to it. And, I think it’s important to note that Lehrer gave the outward appearance of following the rules. He did not stand up and say, “I think these rules are unnecessary to good journalistic practice, and here’s why…” Rather, he quietly excused himself from following them.

But now, Lehrer tells us, he recognizes the importance of the rules.

That’s well and good. However, the rules he’s pointing to — taping and transcribing interviews, fact-checking claims and footnoting sources — seem designed to prevent unwitting mistakes. They could head off misremembering what interviewees said, miscommunicating whose words or insights animate part of a story, getting the facts wrong accidentally. It’s less clear that these rules can head off willful lies and efforts to mislead — which is to say, the kind of misdeeds that got Lehrer into trouble.

Moreover, that he now accepts these rules after being caught lying does not indicate that Jonah Lehrer is now especially sage about journalism. It’s remedial work.

Let’s move on from his endorsement (finally) of standards of journalistic practice to the constellation of cognitive biases and weaknesses of will that Jonah Lehrer seems to be trying to saddle with the responsibility for his lies.

Recognizing cognitive biases is a good thing. It is useful to the extent that it helps us to avoid getting fooled by them. You’ll recall that knowledge-builders, whether scientists or journalists, are supposed to do their best to avoid being fooled.

But, what Lehrer did is hard to cast in terms of ignoring strong cognitive biases. He made stuff up. He fabricated quotes. He presented other authors’ writing as his own. When confronted about his falsifications, he lied. Did his cognitive biases do all this?

What Jonah Lehrer seems to be sidestepping in his “meh culpa” is the fact that, when he had to make choices about whether to work with the actual facts or instead to make stuff up, about whether to write his own pieces (or at least to properly cite the material from others that he used) or to plagiarize, about whether to be honest about what he’d done when confronted or to lie some more, he decided to be dishonest.

If we’re to believe this was a choice his cognitive biases made for him, then his seem much more powerful (and dangerous) than the garden-variety cognitive biases most grown-up humans have.

It seems to me more plausible that Lehrer’s problem was a weakness of will. It’s not that he didn’t know what he was doing was wrong — he wasn’t fooled by his brain into believing it was OK, or else he wouldn’t have tried to conceal it. Instead, despite recognizing the wrongness of his deeds, he couldn’t muster the effort not to do them.

If Jonah Lehrer cannot recognize this — that it frequently requires conscious effort to do the right thing — it’s hard to believe he’ll be committed to putting that effort into doing the right (journalistic) thing going forward. Verily, given the trust he’s burned with his journalistic colleagues, he can expect that proving himself to be reformed will require extra effort.

But maybe what Lehrer is claiming is something different. Maybe he’s denying that he understood the right thing to do and then opted not to do it because it seemed like too much work. Maybe he’s claiming instead that he just couldn’t resist the temptation (whether of rule-breaking for its own sake or of rule-breaking as the most efficient route to secure the prestige he craved). In other words, maybe he’s saying he was literally powerless, that he could not help committing those misdeeds.

If that’s Lehrer’s claim — and if, in addition, he’s claiming that the piece of his cognitive apparatus that was so vulnerable to temptation that it seized control to make him do wrong is as integral to who Jonah Lehrer is as his cognitive biases are — the whole rehabilitation thing may be a non-starter. If this is how Lehrer understands why he did wrong, he seems to be identifying himself as a wrongdoer with a high probability of reoffending.

If he can parlay that into more five-figure speaker fees, maybe that will be a decent living for Jonah Lehrer, but it will be a big problem for the community of journalists and for the public that trusts journalists as generally reliable sources of information.

Weakness is part of Lehrer, as it is for all of us, but it is not a part he is acknowledging he could control or counteract by concerted effort, or by asking for help from others.

It’s part of him, but not in a way that makes him inclined to actually take responsibility or to acknowledge that he could have done otherwise under the circumstances.

If he couldn’t have done otherwise — and if he might not be able to when faced with similar temptation in the future — then Jonah Lehrer has no business in journalism. Until he can recognize his own agency, and the responsibility that attaches to it, the most he has to offer is one more cautionary tale.
_____
*Fact check: I have absolutely no idea how many other bloggers took five years of Latin. My evidence-free guess is that it’s not just me.

Intuitions, scientific methodology, and the challenge of not getting fooled.

At Context and Variation, Kate Clancy has posted some advice for researchers in evolutionary psychology who want to build reliable knowledge about the phenomena they’re trying to study. This advice, of course, is prompted in part by methodology that is not so good for scientific knowledge-building. Kate writes:

The biggest problem, to my mind, is that so often the conclusions of the bad sort of evolutionary psychology match the stereotypes and cultural expectations we already hold about the world: more feminine women are more beautiful, more masculine men more handsome; appearance is important to men while wealth is important to women; women are prone to flighty changes in political and partner preference depending on the phase of their menstrual cycles. Rather than clue people in to problems with research design or interpretation, this alignment with stereotype further confirms the study. Variation gets erased: in bad evolutionary psychology, there are only straight people, and everyone wants the same things in life. …

No one should ever love their idea so much that it becomes detached from reality.

It’s a lovely post about the challenges of good scientific methodology when studying human behavior (and why it matters to more than just scientists), so you should read the whole thing.

Kate’s post also puts me in mind of some broader issues about which scientists should remind themselves from time to time to keep themselves honest. I’m putting some of those on the table here.

Let’s start with a quotable quote from Richard Feynman:

The first principle is that you must not fool yourself, and you are the easiest person to fool.

Scientists are trying to build reliable knowledge about the world from information that they know is necessarily incomplete. There are many ways to interpret the collections of empirical data we have on hand — indeed, many contradictory ways to interpret them. This means that lots of the possible interpretations will be wrong.

You don’t want to draw the wrong conclusion from the available data, not if you can possibly avoid it. Feynman’s “first principle” is noting that we need to be on guard against letting ourselves be fooled by wrong conclusions — and on guard against the peculiar ways that we are more vulnerable to being fooled.

This means we have to talk about our attachment to intuitions. All scientists have intuitions. They surely help in motivating questions to ask about the world and strategies for finding good answers to them. But intuitions, no matter how strong, are not the same as empirical evidence.

Making things more challenging, our strong intuitions can shape what we take to be the empirical evidence. They can play a role in which results we set aside because they “couldn’t be right,” in which features of a system we pay attention to and which we ignore, in which questions we bother to ask in the first place. If we don’t notice the operation of our intuitions, and the way they impact our view of the empirical evidence, we’re making it easier to get fooled. Indeed, if our intuitions are very strong, we’re essentially fooling ourselves.

As if this weren’t enough, we humans (and, by extension, human scientists) are not always great at recognizing when we are in the grips of our intuitions. It can feel like we’re examining a phenomenon to answer a question and that we’re refraining from making any assumptions to guide our enquiry, but chances are it’s not a feeling we should trust.

This is not to say that our intuitions are guaranteed a safe haven from our noticing them. We can become aware of them and try to neutralize the extent to which they, rather than the empirical evidence, are driving the scientific story — but to do this, we tend to need help from people who have conflicting intuitions about the same bit of the world. This is a good methodological reason to take account of the assumptions and intuitions of others, especially when they conflict with our own.

What happens if there are intuitions about which we all agree — assumptions we are making (and may well be unaware that we’re making, because they seem so bleeding obvious) with which no one disagrees? I don’t know that there are any such universal human intuitions. It seems unlikely to me, but I can’t rule out the possibility. How would they bode for our efforts at scientific knowledge-building?

First, we would probably want to recognize that the universality of an intuition still wouldn’t make it into independent empirical evidence. Even if it had been the case, prior to Galileo, or Copernicus, or Aristarchus of Samos, that every human took it as utterly obvious that Earth is stationary, we recognize that this intuition could still be wrong. As it happened, it was an intuition that was questioned, though not without serious resistance.

Developing a capacity to question the obvious, and also to recognize and articulate what it is we’re taking to be obvious in order that we might question it, seems like a crucial skill for scientists to cultivate.

But, as I think comes out quite clearly in Kate’s post, there are some intuitions we have that, even once we’ve recognized them, may be extremely difficult to subject to empirical test. This doesn’t mean that the questions connected in our heads to these intuitions are outside the realm of scientific inquiry, but it would be foolish not to notice that it’s likely to be extremely difficult to find good scientific answers to these questions. We need to be wary of the way our intuitions try to stack the evidential deck. We need to acknowledge that the very fact of our having strong intuitions doesn’t count as empirical evidence in favor of them. We need to come to grips with the possibility that our intuitions could be wrong — perhaps to the extent that we recognize that empirical results that seem to support our intuitions require extra scrutiny, just to be sure.

To do any less is to ask to be fooled, and that’s the outcome scientific knowledge-building is trying to avoid.

Academic tone-trolling: How does interactivity impact online science communication?

Later this week at ScienceOnline 2013, Emily Willingham and I are co-moderating a session called Dialogue or fight? (Un)moderated science communication online. Here’s the description:

Cultivating a space where commentators can vigorously disagree with a writer–whether on a blog, Twitter, G+, or Facebook, *and* remain committed to being in a real dialogue is pretty challenging. It’s fantastic when these exchanges work and become constructive in that space. On the other hand, there are times when it goes off the rails despite your efforts. What drives the difference? How can you identify someone who is commenting simply to cause trouble versus a commenter there to engage in and add value to a genuine debate? What influence does this capacity for *anyone* to engage with one another via the great leveler that is social media have on social media itself and the tenor and direction of scientific communication?

Getting ready for this session was near the top of my mind when I read a perspective piece by Dominique Brossard and Dietram A. Scheufele in the January 4, 2013 issue of Science. [1] In the article, Brossard and Scheufele raise concerns about the effects of moving the communication of science information to the public from dead-tree newspapers and magazines into online, interactive spaces.

Here’s the paragraph that struck me as especially relevant to the issues Emily and I had been discussing for our session at ScienceOnline 2013:

A recent conference presented an examination of the effects of these unintended influences of Web 2.0 environments empirically by manipulating only the tone of the comments (civil or uncivil) that followed an online science news story in a national survey experiment. All participants were exposed to the same, balanced news item (covering nanotechnology as an emerging technology) and to a set of comments following the story that were consistent in terms of content but differed in tone. Disturbingly, readers’ interpretations of potential risks associated with the technology described in the news article differed significantly depending only on the tone of the manipulated reader comments posted with the story. Exposure to uncivil comments (which included name calling and other non-content-specific expressions of incivility) polarized the views among proponents and opponents of the technology with respect to its potential risks. In other words, just the tone of the comments following balanced science stories in Web 2.0 environments can significantly alter how audiences think about the technology itself. (41)

There’s lots to talk about here.

Does this research finding mean that, when you’re trying to communicate scientific information online, enabling comments is a bad idea?

Lots of us are betting that it’s not. Rather, we’re optimistic that people will be more engaged with the information when they have a chance to engage in a conversation about it (e.g., by asking questions and getting answers).

However, the research finding described in the Science piece suggests that there may be better and worse ways of managing commenting on your posts if your goal is to help your readers understand a particular piece of science.

This might involve having a comment policy that puts some things clearly out-of-bounds, like name-calling or other kinds of incivility, and then consistently enforcing this policy.

It should be noted — and has been — that some kinds of incivility wear the trappings of polite language, which means that it’s not enough to set up automatic screens that weed out comments containing particular specified naughty words. Effective promotion of civility rather than incivility might well involve having the author of the online piece and/or designated moderators as active participants in the ongoing conversation, calling out bad commenter behavior as well as misinformation, answering questions to make sure the audience really understands the information being presented, and being attentive to how the unfolding discussion is likely to be welcoming — or forbidding — to the audience one is hoping to reach.
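To make that concrete, here is a minimal sketch (in Python) of the kind of naive word filter I have in mind, along with a politely worded comment that sails right through it. The blocklist and the sample comments are invented for illustration; this is not the moderation code of any actual platform.

# A toy blocklist filter. The word list and sample comments are hypothetical
# illustrations, not any platform's real moderation code.
BLOCKLIST = {"idiot", "moron", "stupid"}

def passes_naive_filter(comment: str) -> bool:
    """Return True if the comment contains none of the blocked words."""
    words = {word.strip(".,!?").lower() for word in comment.split()}
    return BLOCKLIST.isdisjoint(words)

openly_rude = "Only an idiot would believe this nanotech scare story."
politely_rude = ("With all due respect, readers like you rarely grasp the science, "
                 "so perhaps leave this discussion to the adults.")

print(passes_naive_filter(openly_rude))    # False: the blocklist catches it
print(passes_naive_filter(politely_rude))  # True: it slips through, though it is plainly dismissive

Catching that second comment takes a human (an engaged author or moderator) reading for tone and context, which is exactly the kind of judgment a list of naughty words cannot supply.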

There are a bunch of details that are not clear from this brief paragraph in the perspective piece. Were the readers whose opinions were swayed by the tone of the comments reacting to a conversation that had already happened or were they watching as it happened? (My guess is the former, since the latter would be hard to orchestrate and coordinate with a survey.) Were they looking at a series of comments that dropped them in the middle of a conversation that might plausibly continue, or were they looking at a conversation that had reached its conclusion? Did the manipulated reader comments include any comments that appeared to be from the author of the science article, or were the research subjects responding to a conversation from which the author appeared to be absent? Potentially, these details could make a difference to the results — a conversation could impact someone reading it differently depending on whether it seems to be gearing up or winding down, just as participation from the author could carry a different kind of weight than the views of random people on the internet. I’m hopeful that future research in this area will explore just what kind of difference they might make.

I’m also guessing that the experimental subjects reading the science article and the manipulated comments that followed could not themselves participate in the discussion by posting a comment. I wonder how much being stuck on the sidelines rather than involved in the dialogue affected their views. We should remember, though, that most indicators suggest that readers of online articles — even on blogs — who actually post comments are much smaller in number than the readers who “lurk” without commenting. This means that commenters are generally a very small percentage of the readers one is trying to reach, and perhaps not very representative of those readers overall.

At this point, the take-home seems to be that social scientists haven’t discovered all the factors that matter in how an audience for online science is going to receive and respond to what’s being offered — which means that those of us delivering science-y content online should assume we haven’t discovered all those factors, either. It might be useful, though, if we are reflective about our interactions with our audiences and if we keep track of the circumstances around communicative efforts that seem to work and those that seem to fail. Cataloguing these anecdotes could surely provide fodder for some systematic empirical study, and I’m guessing it could help us think through strategies for really listening to the audiences we hope are listening to us.
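In case it helps, here is a minimal sketch (in Python, with made-up field names and a made-up example entry) of how such a catalogue might be structured so that those anecdotes could eventually feed a more systematic analysis:

# A sketch of a record a blogger might keep per comment thread. Every field
# name and the example entry are hypothetical, not data from the study above.
from dataclasses import dataclass

@dataclass
class ThreadRecord:
    post_title: str
    topic: str
    moderation_policy: str      # e.g. "strict", "light-touch", "unmoderated"
    author_participated: bool   # did the author reply in the comments?
    comment_count: int
    perceived_tone: str         # e.g. "civil", "mixed", "uncivil"
    notes: str = ""             # free-text impressions of what worked or failed

catalogue = [
    ThreadRecord(
        post_title="Dialogue or fight?",
        topic="science communication",
        moderation_policy="light-touch",
        author_participated=True,
        comment_count=42,
        perceived_tone="mixed",
        notes="Stayed constructive once off-topic jabs were redirected.",
    ),
]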

* * * * *
As might be expected, Bora has a great deal to say about the implications of this particular piece of research and about commenting, comment moderation, and Web 2.0 conversations more generally. Grab a mug of coffee, settle in, and read it.

——
[1] Dominique Brossard and Dietram A. Scheufele, “Science, New Media, and the Public.” Science 4 January 2013:Vol. 339, pp. 40-41.
DOI: 10.1126/science.1160364

Can we combat chemophobia … with home-baked bread?

This post was inspired by the session at the upcoming ScienceOnline 2013 entitled Chemophobia & Chemistry in The Modern World, to be moderated by Dr. Rubidium and Carmen Drahl.

For some reason, a lot of people seem to have an unreasonable fear of chemistry. I’m not just talking about fear of chemistry instruction, but full-on fear of chemicals in their world. Because what people think they know about chemicals is that they go boom, or they’re artificial, or they’re drugs which are maybe useful but maybe just making big pharma CEOs rich, and maybe they’re addictive and subject to abuse. Or, they are seeping into our water, our air, our food, our bodies and maybe poisoning us.

At the extreme, it strikes me that chemophobia is really just a fear of recognizing that our world is made of chemicals. I can assure you, it is!

Your computer is made of chemicals, but so are paper and ink. Snails are made of chemicals, as are plants (which carry out chemical reactions right under our noses). Also carrying out chemical reactions right under our noses are yeasts, without which many of our potables would be less potent. Indeed, our kitchens and pantries, from which we draw our ingredients and prepare our meals, are full of many impressively reactive chemicals.

And here, it actually strikes me that we might be able to ratchet down the levels of chemophobia if people find ways to return to de novo syntheses of more of what they eat — which is to say, to making their food from scratch.

For the last several months, our kitchen has been a hotbed of homemade bread. Partly this is because we had a stretch of a couple years where our only functional oven was a toaster oven, which means when we got a working full-sized oven again, we became very enthusiastic about using it.

As it turns out, when you’re baking two or three loaves of bread every week, you start looking at things like different kinds of flour on the market and figuring out how things like gluten content affect your dough — how dense of a bread it will make, how much “spring” it has in the oven, and so forth.

(Gluten is a chemical.)

Maybe you dabble with the occasional batch of biscuits or muffins or quick-bread that uses a leavening agent other than yeast — otherwise known as a chemical leavener.

(Chemical leaveners are chemicals.)

And, you might even start to pick up a feel for which chemical leaveners depend on there being an acidic ingredient (like vinegar or buttermilk) in your batter and which will do the job without an acidic ingredient in the batter.

(Those ingredients, whether acidic or not, are made of chemicals. Even the water.)

Indeed, many who find their inner baker will start playing around with recipes that call for more exotic ingredients like lecithin or ascorbic acid or caramel color (each one: a chemical).

It’s to the point that I have joked, while perusing the pages of “baking enhancers” in the fancy baking supply catalogs, “People start baking their own bread so they can avoid all the chemicals in the commercially baked bread, but then they get really good at baking and start improving their homemade bread with all these chemicals!”

And yes, there’s a bit of a disconnect in baking to avoid chemicals in your food and then discovering that there are certain chemicals that will make that food better. But, I’m hopeful that the process leads to a connection, wherein people who are getting back in touch with making one of the oldest kinds of foods we have can also make peace with the recognition that wholesome foods (and the people who eat them) are made of chemicals.

It’s something to chew on, anyway.

Reasonably honest impressions of #overlyhonestmethods.

I suspect at least some of you who are regular Twitter users have been following the #overlyhonestmethods hashtag, with which scientists have been sharing details of their methodology that are maybe not explicitly spelled out in their published “Materials and Methods” sections. And, as with many other hashtag genres, the tweets in #overlyhonestmethods are frequently hilarious.

I was interviewed last week about #overlyhonestmethods for the Public Radio International program Living on Earth, and the length of my commentary was more or less Twitter-scaled. This means some of the nuance (at least in my head) about questions like whether I thought the tweets were an overshare that could make science look bad didn’t quite make it to the radio. Also, in response to the Living on Earth segment, one of the people with whom I regularly discuss the philosophy of science in the three-dimensional world shared some concerns about this hashtag in the hopes I’d say a bit more:

I am concerned about the brevity of the comments which may influence what one expresses.  Second there is an ego component; some may try to outdo others’ funny stories, and may stretch things in order to gain a competitive advantage.

So, I’m going to say a bit more.

Should we worry that #overlyhonestmethods tweets share information that will make scientific practice look bad to (certain segments of) the public?

I don’t think so. I suppose this may depend on what exactly the public expects of scientists.

The people doing science are human. They are likely to be working with all kinds of constraints — how close their equipment is to the limits of its capabilities (and to making scary noises), how frequently lab personnel can actually make it into the lab to tend to cell cultures, how precisely (or not) pumping rates can be controlled, how promptly (or not) the folks receiving packages can get perishable deliveries to the researchers. (Notice that at least some of these limitations are connected to limited budgets for research … which maybe means that if the public finds them unacceptable, they should lobby their Congresscritters for increased research funding.) There are also constraints that come from the limits of the human animal: with a finite attention span, without a built-in chronometer or calibrated eyeballs, and with a need for sleep and possibly even recreation every so often (despite what some might have you think).

Maybe I’m wrong, but my guess is that it’s a good thing to have a public that is aware of these limitations imposed by the available equipment, reagents, and non-robot workforce.

Actually, I’m willing to bet that some of these limitations, and an awareness of them, are also really handy in scientific knowledge-building. They are departures from ideality that may help scientists nail down which variables in the system really matter in producing and controlling the phenomenon being studied. Reproducibility might be easy for a robot that can do every step of the experiment precisely every single time, but we really learn what’s going on when we drift from that. Does it matter if I use reagents from a different supplier? Can I leave the cultures to incubate a day longer? Can I successfully run the reaction in a lab that’s 10 °C warmer or 10 °C colder? Working out the tolerances helps turn an experimental protocol from a magic trick into a system where we have some robust understanding of what variables matter and of how they’re hooked to each other.
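To make that thought a bit more concrete, here is a toy sketch of what “working out the tolerances” might look like as a simple parameter sweep. None of it describes any real protocol; the yield model, the variable names, and the numbers are all invented for illustration.

```python
import itertools

def reaction_yield(temp_c, incubation_days):
    """Hypothetical stand-in for an experimental outcome: pretend yield is
    quite sensitive to temperature and only mildly sensitive to an extra
    day of incubation. Purely illustrative, not a model of any real system."""
    return max(0.0, 1.0 - 0.04 * abs(temp_c - 25) + 0.01 * (incubation_days - 3))

# Nominal protocol: 25 °C, three days of incubation.
baseline = reaction_yield(temp_c=25, incubation_days=3)

# Sweep each condition around the nominal protocol and see which departures
# actually move the outcome, i.e., which variables the protocol tolerates.
for temp_c, days in itertools.product([15, 25, 35], [3, 4]):
    delta = reaction_yield(temp_c, days) - baseline
    print(f"temp={temp_c:>2} °C, incubation={days} days: yield change {delta:+.2f}")
```

A sweep like this won’t tell you why temperature matters, but it does flag which departures from the written protocol the system can tolerate and which it can’t.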

Does the 140 character limit mean #overlyhonestmethods tweets leave out important information, or that scientists will only use the hashtag to be candid about some of their methods while leaving others unexplored?

The need for brevity surely means that methods for which candor requires a great deal of context and/or explanation won’t be as well-represented as methods where one can be candid and pithy simultaneously. These tweeted glimpses into how the science gets done are more likely to be one-liners than shaggy-dog stories.

However, it’s hard to imagine that folks who really wanted to play along wouldn’t use a series of tweets, or maybe even write a blog post about it and use the hashtag to tweet a link to that post.

What if #overlyhonestmethods becomes a game of one-upmanship and puffery, in which researchers sacrifice honesty for laughs?

Maybe there’s some of this happening, and if the point of the hashtag is for researchers to entertain each other, maybe that’s not a problem. However, if other members of one’s scientific community were actually looking to those tweets to fill in some of the important details of methodology that are elided in the terse “Materials and Methods” section of a published research paper, I would hope the tweeters would, when queried, provide clear and candid information about how they actually conducted their experiments. Correcting or retracting a tweet should be less of an ego blow than correcting or retracting a published paper, I hope (and indeed, as hard as it might be to correct or retract published claims, good scientists do it when they need to).

The whole #overlyhonestmethods hashtag raises the perennial question of why so much is elided in published “Materials and Methods” sections. Blame is usually put on limitations of space in the journals, but it’s also reasonable to acknowledge that sometimes details-that-turn-out-to-be-important are left out because the researchers don’t fully recognize their importance. Other times, researchers may have empirical grounds for thinking these details are important, but they don’t yet have a satisfying story to tell about why they should be.

By the way, I think it would be an excellent thing if, for research that is already published, #overlyhonestmethods included the relevant DOI. These tweets would be supplementary information researchers could really use.

What if researchers use #overlyhonestmethods to disclose ethically problematic methods?

Given that Twitter is a social medium, I expect other scientists in the community watching the hashtag would challenge those methods or chime in to explain just what makes them ethically problematic. They might also suggest less ethically problematic ways to achieve the same research goals.

The researchers on Twitter could, in other words, use the social medium to exert social pressure in order to make sure other members of their scientific community understand and live up to the norms of that community.

That outcome would strike me as a very good one.

* * * * *

In addition to the ever-expanding collection of tweets about methods, #overlyhonestmethods also has links to some thoughtful, smart, and funny commentary on the hashtag and the conversations around it. Check it out!

Fear of scientific knowledge about firearm-related injuries.

In the United States, a significant amount of scientific research is funded through governmental agencies, using public money. Presumably, this is not primarily aimed at keeping scientists employed and off the streets*, but rather is driven by a recognition that reliable knowledge about how various bits of our world work can be helpful to us (individually and collectively) in achieving particular goals and solving particular problems.

Among other things, this suggests a willingness to put the scientific knowledge to use once it’s built.** If we learn some relevant details about the workings of the world, taking those into account as we figure out how best to achieve our goals or solve our problems seems like a reasonable thing to do — especially if we’ve made a financial investment in discovering those relevant details.

And yet, some of the “strings” attached to federally funded research suggest that the legislators involved in approving funding for research are less than enthusiastic about seeing our best scientific knowledge put to use in crafting policy — or that they would prefer that the relevant scientific knowledge not be built or communicated at all.

A case in point, which has been very much on my mind for the last month, is the way language in appropriations bills has restricted Centers for Disease Control and Prevention (CDC) and National Institutes of Health (NIH) research funds for research related to firearms.

The University of Chicago Crime Lab organized a joint letter (PDF) to the gun violence task force headed by Vice President Joe Biden, signed by 108 researchers and scholars, which is very clear in laying out the impediments that have been put on research about the effects of guns. They identify the crucial language, which is still present in subsection (c) of sections 503 and 218 of the FY2013 Appropriations Act governing NIH and CDC funding:

None of the funds made available in this title may be used, in whole or in part, to advocate or promote gun control.

As the letter from the Crime Lab rightly notes,

Federal scientific funds should not be used to advance ideological agendas on any topic. Yet that legislative language has the effect of discouraging the funding of well-crafted scientific studies.

What is the level of this discouragement? The letter presents a table comparing major NIH research awards connected to a handful of conditions between 1973 and 2012, noting the number of reported cases of these conditions in the U.S. during this time period alongside the number of grants to study the condition. There were 212 NIH research awards to study cholera and 400 reported U.S. cases of cholera. There were 56 NIH research awards to study diphtheria and 1337 reported U.S. cases of diphtheria. There were 129 NIH research awards to study polio and 266 reported U.S. cases of polio. There were 89 NIH research awards to study rabies and 65 reported U.S. cases of rabies. But, for more than 4 million reported firearm injuries in the U.S. during this time period, there were exactly 3 NIH research awards to study firearm injuries.
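To put those figures on one scale, here is a quick back-of-the-envelope calculation using only the numbers quoted above (I have rounded “more than 4 million” down to 4 million, which, if anything, understates the disparity). The arithmetic is mine, not the letter’s.

```python
# Awards and reported U.S. cases, 1973-2012, as quoted from the Crime Lab letter.
figures = {
    "cholera":          (212, 400),
    "diphtheria":       (56, 1_337),
    "polio":            (129, 266),
    "rabies":           (89, 65),
    "firearm injuries": (3, 4_000_000),  # "more than 4 million," rounded down
}

for condition, (awards, cases) in figures.items():
    cases_per_award = cases / awards
    print(f"{condition:>16}: {awards:>3} awards for {cases:>9,} cases "
          f"= roughly {cases_per_award:,.1f} reported cases per award")
```

Even with generous error bars on the case counts, a jump from a handful of cases per award to more than a million firearm injuries per award is not the sort of gap that better bookkeeping would close.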

One possibility here is that, from 1973 to 2012, there were very few researchers interested enough in firearm injuries to propose well-crafted scientific studies of them. I suspect that the 108 signatories of the letter linked above would disagree with that explanation for this disparity in research funding.

Another possibility is that legislators want to prevent the relevant scientific knowledge from being built. The fact that they have imposed restrictions on the collection and sharing of data by the Bureau of Alcohol, Tobacco, Firearms and Explosives (in particular, data tracing illegal sales and purchases of firearms) strongly supports the hypothesis that, at least when it comes to firearms, legislators would rather be able to make policy unencumbered by pesky facts about how the relevant pieces of the world actually work.

What this suggests to me is that these legislators either don’t understand that knowing more about how the world works can help you achieve desired outcomes in that world, or don’t want to achieve the outcome of reducing firearm injury or death.

Perhaps these legislators don’t want researchers to build reliable knowledge about the causes of firearm injury because they fear it will get in the way of their achieving some other goal that is more important to them than reducing firearm injury or death.

Perhaps they fear that careful scientific research will turn up facts which themselves seem “to advocate or promote gun control” — at least to the extent that they show that the most effective way to reduce firearm injury and death would be to implement controls that the legislators view as politically unpalatable.

If nothing else, as a voter I find a legislator’s aversion to scientific evidence to be a useful piece of information about him or her.
______
*If federal funding for research did function like a subsidy, meant to keep the researchers employed and out of trouble, you’d expect to see a much higher level of support for philosophical research. History suggests that philosophers in the public square with nothing else to keep them busy end up asking people lots of annoying questions, undermining the authority of institutions, corrupting the youth, and so forth.

**One of the challenges in getting the public on board to fund scientific research is that they can be quite skeptical that “basic research” will have any useful application beyond satisfying researchers’ curiosity.

Competing theories on the relation between Santa and the elves.

For many, this time of year is the height of hectic, whether due to holiday preparations or grade-filing deadlines at the end of the semester (or, for some of us, both of those together). Amidst the buzz and bustle, sometimes it’s a gift to slow down enough to find a quiet moment and listen to the people in your life. What you might hear in those moments can be a gift, too.

During a pause in my grading, my eldest child (age 13) related this conversation to me, which I am sharing with her permission.*

On a recent drive to a trumpet lesson, my father and I were speculating about the social role of Santa Claus as compared to his elves. We managed to come up with two different possible theories that took account of the many different factors that were present in Santa’s supposed habits.

My dad’s theory was that Santa was a zombie. Not one of those brain-munching decomposing corpses that constitute the modern definition of zombies, but a zombie in the voodoo sense. Basically, a flesh puppet; a person under mind control that was being used to perform a task. He came to the conclusion that the elves brought Santa back every year to play a leadership role. According to my dad, resurrecting Santa was all the elves could do autonomously.

You can read more about how to make an old-school zombie in this excellent post from the archives of Cocktail Party Physics. Kids, be sure to get a parent’s permission first!

My theory was a bit more complex, and seemed more feasible to me. I hypothesized that Santa and his elves were like an ant or bee colony, with Santa as the “queen” and the elves as the workers. I proposed that milk and cookies were like the royal jelly. If an elf was given milk and/or cookies, it would metamorphose into another Santa and would challenge the existing Santa’s dominance. What would follow would be an intense and potentially disastrous Santa-on-Santa battle.

So my kids haven’t exactly outgrown speculating about Santa, but that speculation seems to have gone in an interesting direction. One wonders how many scientific careers can be traced back to childhood conversations where a grown-up was willing to spin theories with a kid.

_____
*Not only did she give her permission for me to share it, but she typed it up herself.