James Watson’s sense of entitlement, and misunderstandings of science that need to be countered.

James Watson, who shared a Nobel Prize in 1962 for discovering the double helix structure of DNA, is in the news, offering his Nobel Prize medal at auction. As reported by the Telegraph:

Mr Watson, who shared the 1962 Nobel Prize for uncovering the double helix structure of DNA, sparked an outcry in 2007 when he suggested that people of African descent were inherently less intelligent than white people.

If the medal is sold Mr Watson said he would use some of the proceeds to make donations to the “institutions that have looked after me”, such as University of Chicago, where he was awarded his undergraduate degree, and Clare College, Cambridge.

Mr Watson said his income had plummeted following his controversial remarks in 2007, which forced him to retire from the Cold Spring Harbor Laboratory on Long Island, New York. He still holds the position of chancellor emeritus there.

“Because I was an ‘unperson’ I was fired from the boards of companies, so I have no income, apart from my academic income,” he said.

He would also use some of the proceeds to buy an artwork, he said. “I really would love to own a [painting by David] Hockney”. …

Mr Watson said he hoped the publicity surrounding the sale of the medal would provide an opportunity for him to “re-enter public life”. Since the furore in 2007 he has not delivered any public lectures.

There’s a lot I could say here about James Watson, the assumptions under which he is laboring, and the potential impacts on science and the public’s engagement with it. In fact, I have said much of it before, although not always in reference to James Watson in particular. However, given the likelihood that we’ll keep hearing the same unhelpful responses to James Watson and his ilk if we don’t grapple with some of the fundamental misunderstandings of science at work here, it’s worth covering this ground again.

First, I’ll start with some of the claims I see Watson making around his decision to auction his Nobel Prize medal:

  • He needs money, given that he has “no income beyond [his] academic income”. One might take this as an indication that academic salaries in general ought to be raised (although I’m willing to bet a few bucks that Watson’s inadequate academic income is at least as much as that of the average academic actively engaged in research and/or teaching in the U.S. today). However, Watson gives no sign of calling for such an across-the-board increase, since…
  • He connects his lack of income to being fired from boards of companies and to his inability to book public speaking engagements after his 2007 remarks on race.
  • He equates this removal from boards and lack of invitations to speak with being an “unperson”.

What comes across to me here is that James Watson sees himself as special, as entitled to seats on boards and speaker invitations. On what basis, we might ask, is he entitled to these perks, especially in the face of a scientific community just brimming with talented members currently working at the cutting edge(s) of scientific knowledge-building? It is worth noting that some who attended recent talks by Watson judged them to be nothing special.

Possibly, then, speaking engagements may have dried up at least partly because James Watson was not such an engaging speaker — with an asking price of $50,000 for a paid speaking engagement, whether you give a good talk is a relevant criterion — rather than entirely because of his remarks on race in 2007 (or earlier). However, Watson seems sure that these remarks are the proximate cause of his lack of invitations to give public talks since 2007. And, he finds this result not to be in accord with what a scientist like himself deserves.

Positioning James Watson as a very special scientist who deserves special treatment above and beyond the recognition of the Nobel committee feeds the problematic narrative of scientific knowledge as an achievement of great men (and yes, in this narrative, it is usually great men who are recognized). This narrative ignores the fundamentally social nature of scientific knowledge-building and the fact that objectivity is the result of teamwork.

Of course, it’s even more galling to have James Watson portrayed (including by himself) as an exceptional hero of science rather than as part of a knowledge-building community given the role of Rosalind Franklin’s work in determining the structure of DNA — and given Watson’s apparent contempt for Franklin, rather than regard for her as a member of the knowledge-building team, in The Double Helix.

Indeed, part of the danger of the hero narrative is that scientists themselves may start to believe it. They can come to see themselves as individuals possessing more powers of objectivity than other humans (thus fundamentally misunderstanding where objectivity comes from), with privileged access to truth, with insights that don’t need to be rigorously tested or supported with empirical evidence. (Watson’s 2007 claims about race fit in this territory.)

Scientists making authoritative claims beyond what science can support is a bigger problem. To the extent that the public also buys into the hero narrative of science, that public is likely to take what Nobel Prize winners say as authoritative, even in the absence of good empirical evidence. Here Watson keeps company with William Shockley and his claims on race, Kary Mullis and his claims on HIV, and Linus Pauling and his advocacy of mega-doses of vitamin C. Some may argue that non-scientists need to be more careful consumers of scientific claims, but it would surely help if scientists themselves would recognize the limits of their own expertise and refrain from overselling either their claims or their individual knowledge-building power.

Where Watson’s claims about race are concerned, the harm of positioning him as an exceptional scientist goes further than reinforcing a common misunderstanding of where scientific knowledge comes from. These views, asserted authoritatively by a Nobel Prize winner, give cover to people who want to believe that their racist views are justified by scientific knowledge.

As well, as I have argued before (in regard to Richard Feynman and sexism), the hero narrative can be harmful to the goal of scientific outreach given the fact that human scientists usually have some problematic features and that these problematic features are often ignored, minimized, or even justified (e.g., as “a product of the time”) in order to foreground the hero’s great achievement and sell the science. There seems to be no shortage of folks willing to label Watson’s racist views as unfortunate but also as something that should not overshadow his discovery of the structure of DNA. In order that the unfortunate views not overshadow the big scientific contribution, some of these folks would rather we stop talking about Watson’s having made the claims he has made about racial difference (although Watson shows no apparent regret for holding these views, only for having voiced them to reporters).

However, especially for people in the groups that James Watson has claimed are genetically inferior, asserting that Watson’s massive scientific achievement trumps his problematic claims about race can be alienating. His scientific achievement doesn’t magically remove the malign effects of the statements he has made from a very large soapbox, using his authority as a Nobel Prize winning scientist. Ignoring those malign effects, or urging people to ignore them because of the scientific achievement which gave him that big soapbox, sounds an awful lot like saying that including the whole James Watson package in science is more important than including black people as scientific practitioners or science fans.

The hero narrative gives James Watson’s claims more power than they deserve. The hero narrative also makes urgent the need to deem James Watson’s “foibles” forgivable so we can appreciate his contribution to knowledge. None of this is helpful to the practice of science. None of it helps non-scientists engage more responsibly with scientific claims or scientific practitioners.

Holding James Watson to account for his claims, holding him responsible for scientific standards of evidence, doesn’t render him an unperson. Indeed, it amounts to treating him as a person engaged in the scientific knowledge-building project, as well as a person sharing a world with the rest of us.

* * * * *
Michael Hendricks offers a more concise argument against the hero narrative in science.

And, if you’re not up on the role of Rosalind Franklin in the discovery of the structure of DNA, these seventh graders can get you started.

Conduct of scientists (and science writers) can shape the public’s view of science.

Scientists undertake a peculiar kind of project. In striving to build objective knowledge about the world, they are tacitly recognizing that our unreflective picture of the world is likely to be riddled with mistakes and distortions. On the other hand, they frequently come to regard themselves as better thinkers — as more reliably objective — than humans who are not scientists, and end up forgetting that they have biases and blindspots of their own which they are helpless to detect without help from others who don’t share these particular biases and blindspots.

Building reliable knowledge about the world requires good methodology, teamwork, and concerted efforts to ensure that the “knowledge” you build doesn’t simply reify preexisting individual and cultural biases. It’s hard work, but it’s important to do it well — especially given a long history of “scientific” findings being used to justify and enforce preexisting cultural biases.

I think this bigger picture is especially appropriate to keep in mind in reading the response from Scientific American Blogs Editor Curtis Brainard to criticisms of a pair of problematic posts on the Scientific American Blog Network. Brainard writes:

The posts provoked accusations on social media that Scientific American was promoting sexism, racism and genetic determinism. While we believe that such charges are excessive, we share readers’ concerns. Although we expect our bloggers to cover controversial topics from time to time, we also recognize that sensitive issues require extra care, and that did not happen here. The author and I have discussed the shortcomings of the two posts in detail, including the lack of attention given to countervailing arguments and evidence, and he understood the deficiencies.

As stated at the top of every post, Scientific American does not always share the views and opinions expressed by our bloggers, just as our writers do not always share our editorial positions. At the same time, we realize our network’s bloggers carry the Scientific American imprimatur and that we have a responsibility to ensure that—differences of opinion notwithstanding—their work meets our standards for accuracy, integrity, transparency, sensitivity and other attributes.

(Bold emphasis added.)

The problem here isn’t that the posts in question advocated sound scientific views with implications that people on social media didn’t like. Rather, the posts presented claims in a way that made them look like they had much stronger scientific support than they really do — and did so in the face of ample published scientific counterarguments. Scientific American is not requiring that posts on its blog network meet a political litmus test, but rather that they embody the same kind of care, responsibility to the available facts, and intellectual honesty that science itself should display.

This is hard work, but it’s important. And engaging seriously with criticism, rather than just dismissing it, can help us do it better.

There’s an irony in the fact that one of the problematic posts, which ignored some significant relevant scientific literature (helpfully cited by commenters on that very post), did so in the service of defending Larry Summers and his remarks suggesting innate biological factors that make men better at math and science than women. The irony lies in the fact that Larry Summers displayed an apparently ironclad commitment to ignoring any and all empirical findings that might challenge his intuition that there’s something innate at the heart of the gender imbalance in math and science faculty.

Back in January of 2005, Larry Summers gave a speech at a conference about what can be done to attract more women to the study of math and science, and to keep them in the field long enough to become full professors. In his talk, Summers suggested as a possible hypothesis for the relatively low number of women in math and science careers that there may be innate biological factors that make males better at math and science than females. (He also related an anecdote about his daughter naming her toy trucks as if they were dolls, but it’s fair to say he probably meant this anecdote to be illustrative rather than evidentiary.)

The talk did not go over well with the rest of the participants in the conference.

Several scientific studies were presented at the conference before Summers made his speech. All these studies presented significant evidence against the claim of an innate difference between males and females that could account for the “science gap”.


In the aftermath of this conference of yore, there were some commenters who lauded Summers for voicing “unpopular truths” and others who distanced themselves from his claims but said they supported his right to make them as an exercise of “academic freedom.”

But if Summers was representing himself as a scientist* when he made his speech, I don’t think the “academic freedom” defense works.


Summers is free to state hypotheses — even unpopular hypotheses — that might account for a particular phenomenon. But, as a scientist, he is also responsible for taking account of the data relevant to his hypotheses. If the data weigh against his preferred hypothesis, intellectual honesty requires that he at least acknowledge this fact. Some would argue that it could even require that he abandon his hypothesis (since science is supposed to be evidence-based whenever possible).


When news of Summers’ speech, and reactions to it, was fresh, one detail stuck with me: after the speech, one of the conference organizers pointed out to Summers that a large body of evidence — some of it presented at that very conference — seemed to undermine his hypothesis. Summers gave a reply that amounted to, “Well, I don’t find those studies convincing.”

Was Summers within his rights to not believe these studies? Sure. But, he had a responsibility to explain why he rejected them. As a part of a scientific community, he can’t just reject a piece of scientific knowledge out of hand. Doing so comes awfully close to undermining the process of communication that scientific knowledge is based upon. You aren’t supposed to reject a study because you have more prestige than the authors of the study (so, you don’t have to care what they say). You can question the experimental design, you can question the data analysis, you can challenge the conclusions drawn, but you have to be able to articulate the precise objection. Surely, rejecting a study just because it doesn’t fit with your preferred hypothesis is not an intellectually honest move.


By my reckoning, Summers did not conduct himself as a responsible scientist in this incident. But I’d argue that the problem went beyond a lack of intellectual honesty within the universe of scientific discourse. Summers is also responsible for the bad consequences that flowed from his remark.


The bad consequence I have in mind here is the mistaken view of science and its workings that Summers’ conduct conveys. Especially by falling back on a plain vanilla “academic freedom” defense here, defenders of Summers conveyed to the public at large the idea that any hypothesis in science is as good as any other. Scientists who are conscious of the evidence-based nature of their field will see the absurdity of this idea — some hypotheses are better, others worse, and whenever possible we turn to the evidence to make these discriminations. Summers compounded ignorance of the relevant data with what came across as a statement that he didn’t care what the data showed. From this, the public at large could assume he was within his scientific rights to decide which data to care about without giving any justification for this choice**, or they could infer that data has little bearing on the scientific picture of the world.

Clearly, such a picture of science would undermine the standing of the rest of the bits of knowledge produced by scientists far more intellectually honest than Summers.


Indeed, we might go further here. Not only did Summers have some responsibilities that seemed to have escaped him while he was speaking as a scientist, but we could argue that the rest of the scientists (whether at the conference or elsewhere) have a collective responsibility to address the mistaken picture of science his conduct conveyed to society at large. It’s disappointing that, nearly a decade later, we instead have to contend with the problem of scientists following in Summers’ footsteps by ignoring, rather than engaging with, the scientific findings that challenge their intuitions.

Owing to the role we play in presenting a picture of what science knows and of how scientists come to know it to a broader audience, those of us who write about science (on blogs and elsewhere) also have a responsibility to be clear about the kind of standards scientists need to live up to in order to build a body of knowledge that is as accurate and unbiased as humanly possible. If we’re not clear about these standards in our science writing, we risk misleading our audiences about the current state of our knowledge and about how science works to build reliable knowledge about our world. Our responsibility here isn’t just a matter of noticing when scientists are messing up — it’s also a matter of acknowledging and correcting our own mistakes and of working harder to notice our own biases. I’m pleased that our Blogs Editor is committed to helping us fulfill this duty.
_____
*Summers is an economist, and whether to regard economics as a scientific field is a somewhat contentious matter. I’m willing to give the scientific status of economics the benefit of the doubt, but this means I’ll also expect economists to conduct themselves like scientists, and will criticize them when they do not.

**It’s worth noting that a number of the studies that Summers seemed to be dismissing out of hand were conducted by women. One wonders what lessons the public might draw from that.

_____
A portion of this post is an updated version of an ancestor post on my other blog.

Pub-Style Science: exclusion, inclusion, and methodological disputes.

This is the second part of my transcript of the Pub-Style Science discussion about how (if at all) philosophy can (or should) inform scientific knowledge-building, wherein we discuss methodological disputes, who gets included or excluded in scientific knowledge-building, and ways the exclusion or inclusion might matter. Also, we talk about power gradients and make the scary suggestion that “the scientific method” might be a lie…

Michael Tomasson: Rubidium, you got me started on this. I made a comment on Twitter about our aspirations to build objective knowledge and that that was what science was about, and whether there’s sexism or racism or whatever other -isms around is peripheral to the holy of holies, which is the finding of objective truth. And you made … a comment.

Dr. Rubidium: I think I told you that was cute.

Michael Tomasson: Let me leverage it this way: One reason I think philosophy is important is the basics of structure, of hypothesis-driven research. The other thing I’m kind of intrigued by is part of Twitter culture and what we’re doing with Pub-Style Science is to throw the doors open to people from different cultures and different backgrounds and really say, hey, we want to have science that’s not just a white bread monoculture, but have it be a little more open. But does that mean that everyone can bring their own way of doing science? It sounds like Andrew might say, well, there’s a lot of different ways, and maybe everyone who shows up can bring their own. Maybe one person wants a hypothesis, another doesn’t. Does everybody get to do their own thing, or do we need to educate people in the one way to do science?

As I mentioned on my blog, I had never known that there was a feminist way of doing science.

Janet Stemwedel: There’s actually more than one.

Dr. Isis: We’re not all the same.

Janet Stemwedel: I think even the claim that there’s a single, easily described scientific method is kind of a tricky one. One of the things I’m interested in — one of the things that sucked me over from building knowledge in chemistry to trying to build knowledge in philosophy — is, if you look at scientific practice, scientists who are nominally studying the same thing, the same phenomena, but who’re doing it in different disciplines (say, the chemical physicists and the physical chemists) can be looking at the same thing, but they’re using very different experimental tools and conceptual tools and methodological tools to try to describe what’s going on there. There’s ways in which, when you cross a disciplinary boundary — and sometimes, when you leave your research group and go to another research group in the same department — that what you see on the ground as the method you’re using to build knowledge shifts.

In some ways, I’m inclined to say it’s an empirical question whether there’s a single unified scientific method, or whether we’ve got something more like a family resemblance kind of thing going on. There’s enough overlap in the tools that we’re going to call them all science, but whether we can give necessary and sufficient conditions that describe the whole thing, that’s still up in the air.

Andrew Brandel: I just want to add to that point, if I can. I think that one of the major topics in social sciences of science and in the philosophy of science recently has been the point that science itself, as it’s been practiced, has a history that is also built on certain kinds of power structures. So it’s not even enough to say, let’s bring lots of different kinds of people to the table, but we actually have to uncover the ways in which certain power structures have been built into the very way that we think about science or the way that the disciplines are arranged.

(23:10)
Michael Tomasson: You’ve got to expand on that. What do you mean? There’s only one good — there’s good science and there’s bad science. I don’t understand.

Janet Stemwedel: So wait, everyone who does science like you do is doing good science, and everyone who uses different approaches, that’s bad?

Michael Tomasson: Yes, exactly.

Janet Stemwedel: There’s no style choices in there at all?

Michael Tomasson: That’s what I’m throwing out there. I’m trying to explore that. I’m going to take poor Casey over here, we’re going to stamp him, turn him into a white guy in a tie and he’s going to do science the way God intended it.

Dr. Isis: This is actually a good point, though. I had a conversation with a friend recently about “Cosmos.” As they look back on the show, at all the historical scientists, who, historically has done science? Up until very recently, it has been people who were sufficiently wealthy to support the lifestyle to which they would like to become accustomed, and it’s very easy to sit and think and philosophize about how we do science when it’s not your primary livelihood. It was sort of gentleman scientists who were of the independently wealthy variety who were interested in science and were making these observations, and now that’s very much changed.

It was really interesting to me when you suggested this as a topic because recently I’ve become very pragmatic about doing science. I think I’m taking the “Friday” approach to science — you know, the movie? Danielle Lee wants to remake “Friday” as a science movie. Right now, messing with my money is like messing with my emotions. I’m about writing things in a way to get them funded and writing things in a way that gets them published, and it’s cute to think that we might change the game or make it better, but there’s also a pragmatic side to it. It’s a human endeavor, and doing things in a certain way gets certain responses from your colleagues. The thing that I see, especially watching young people on Twitter, is they try to change the game before they understand the game, and then they get smacked on the nose, and then they write it off as “science is broken”. Well, you don’t understand the game yet.

Janet Stemwedel: Although it’s complicated, I’d say. It is a human endeavor. Forgetting it’s a human endeavor is a road to nothing but pain. And you’ve got the knowledge-building thing going on, and that’s certainly at the center of science, but you’ve also got the getting credit for the awesome things you’ve done and getting paid so you can stay in the pool and keep building knowledge, because we haven’t got this utopian science island where anyone who wants to build knowledge can and all their needs are taken care of. And, you’ve got power gradients. So, there may well be principled arguments from the point of view of what’s going to incentivize practices that will result in better knowledge and less cheating and things like that, to change the game. I’d argue that’s one of the things that philosophy of science can contribute — I’ve tried to contribute that as part of my day job. But the first step is, you’ve got to start talking about the knowledge-building as an activity that’s conducted by humans rather than you put more data into the scientific method box, you turn the crank, and out comes the knowledge.

Michael Tomasson: This is horrifying. I guess what I’m concerned about is I’d hoped you’d teach the scientific method as some sort of central methodology from lab to lab. Are you saying, from the student’s point of view, whatever lab you’re in, you’ve got to figure out whatever the boss wants, and that’s what science is? Is there no skeleton key or structure that we can take from lab to lab?

Dr. Rubidium: Isn’t that what you’re doing? You’re going to instruct your people to do science the way you think it should be done? That pretty much sounds like what you just said.

Dr. Isis: That’s the point of being an apprentice, right?

Michael Tomasson: I had some fantasy that there was some universal currency or universal toolset that could be taken from one lab to another. Are you saying that I’m just teaching my people how to do Tomasson science, and they’re going to go over to Rubidium and be like, forget all that, and do things totally differently?

Dr. Rubidium: That might be the case.

Janet Stemwedel: Let’s put out there that a unified scientific method that’s accepted across scientific disciplines, and from lab to lab and all that, is an ideal. We have this notion that part of why we’re engaged in science to try to build knowledge of the world is that there is a world that we share. We’re trying to build objective knowledge, and why that matters is because we take it that there is a reality out there that goes deeper than how, subjectively, things seem to us.

(30:00)
Michael Tomasson: Yes!

Janet Stemwedel: So, we’re looking for a way to share that world, and the pictures of the method involved in doing that, the logical connections involved in doing that, that we got from the logical empiricists and Popper and that crowd — if you like, they’re giving sort of the idealized model of how we could do that. It’s analogous to the story they tell you about orbitals in intro chem. You know what happens, if you keep on going with chem, is they mess up that model. They say, it’s not that simple, it’s more complicated.

And that’s what philosophers of science do, is we mess up that model. We say, it can’t possibly be that simple, because real human beings couldn’t drive that and make it work as well as it does. So there must be something more complicated going on; let’s figure out what it is. My impression, looking at the practice through the lens of philosophy of science, is that you find a lot of diversity in the details of the methods, you find a reasonable amount of diversity in terms of what’s the right attitude to have towards our theories — if we’ve got a lot of evidence in favor of our theories, are we allowed to believe our theories are probably right about the world, or just that they’re better at churning out predictions than the other theories we’ve considered so far? We have places where you can start to look at how methodologies embraced by Western primatologists compared to Japanese primatologists — where they differ on what’s the right thing to do to get the knowledge — you could say, it’s not the case that one side is right and one side is wrong, we’ve located a trade-off here, where one camp is deciding one of the things you could get is more important and you can sacrifice the other, and the other camp is going the other direction on that.

It’s not to say we should just give up on this project of science and building objective, reliable knowledge about the world. But how we do that is not really anything like the flowchart of the scientific method that you find in the junior high science textbook. That’s like staying with the intro chem picture of the orbitals and saying, that’s all I need to know.

(32:20)
Dr. Isis: I sort of was having a little frightened moment where, as I was listening to you talk, Michael, I was having this “I don’t think that word means what you think it means” reaction. And I realize that you’re a physician and not a real scientist, but “the scientific method” is actually a narrow construct of generating a hypothesis, generating methods to test the hypothesis, generating results, and then either rejecting or failing to reject your hypothesis. This idea of going to people’s labs and learning to do science is completely tangential from the scientific method. I think we can all agree that, for most of us at our core, the scientific method is different from the culture. Now, whether I go to Tomasson’s lab and learn to label my reagents with the wrong labels because they’re a trifling, scandalous bunch who will mess up your experiment, and then I go to Rubidium’s lab and we all go marathon training at 3 o’clock in the afternoon, that’s the culture of science, that’s not the scientific method.

(34:05)
Janet Stemwedel: Maybe what we mean by the scientific method is either more nebulous or more complicated, and that’s where the disagreements come from.

If I can turn back to the example of the Japanese primatologists and the primatologists from the U.S. [1]… You’re trying to study monkeys. You want to see how they’re behaving, you want to tell some sort of story, you probably are driven by some sort of hypotheses. As it turns out, the Western primatologists are starting with the hypothesis that basically you start at the level of the individual monkey, that this is a biological machine, and you figure out how that works, and how they interact with each other if you put them in a group. The Japanese primatologists are starting out with the assumption that you look at the level of social groups to understand what’s going on.

(35:20)
And there’s this huge methodological disagreement that they had when they started actually paying attention to each other: is it OK to leave food in the clearing to draw the monkeys to where you can see them more closely?

The Western primatologists said, hell no, that interferes with the system you’re trying to study. You want to know what the monkeys would be like in nature, without you there. So, leaving food out there for them, “provisioning” them, is a bad call.

The Japanese primatologists (who are, by the way, studying monkeys that live in the islands that are part of Japan, monkeys that are well aware of the existence of humans because they’re bumping up against them all the time) say, you know what, if we get them closer to where we are, if we draw them into the clearings, we can see more subtle behaviors, we can actually get more information.

So here, there’s a methodological trade-off. Is it important to you to get more detailed observations, or to get observations that are untainted by human interference? ‘Cause you can’t get both. They’re both using the scientific method, but they’re making different choices about the kind of knowledge they’re building with that scientific method. Yet, on the surface of things, these primatologists were sort of looking at each other like, “Those guys don’t know how to do science! What the hell?”

(36:40)
Andrew Brandel: The other thing I wanted to mention to this point and, I think, to Tomasson’s question also, is that there are lots of anthropologists embedded with laboratory scientists all over the world, doing research into specifically what kinds of differences, both in the ways that they’re organized and in the ways that arguments get levied, what counts as “true” or “false,” what counts as a hypothesis, how that gets determined within these different contexts. There are broad fields of social sciences doing exactly this.

Dr. Rubidium: I think this gets to the issue: Tomasson, what are you calling the scientific method? Versus, can you really at some point separate out the idea that science is a thing — like Janet was saying, it’s a machine, you put the stuff in, give it a spin, and get the stuff out — can you really separate something called “the scientific method” from the people who do it?

I’ve taught general chemistry, and one of the first things we do is to define science, which is always exciting. It’s like trying to define art.

Michael Tomasson: So what do you come up with? What is science?

Dr. Rubidium: It’s a body of knowledge and a process — it’s two different things, when people say science. We always tell students, it’s a body of knowledge but it’s also a process, a thing you can do. I’m not saying it’s [the only] good answer, but it’s the answer we give students in class.

Then, of course, the idea is, what’s the scientific method? And everyone’s got some sort of a figure. In the gen chem book, in chapter 1, it’s always going to be in there. And it makes it seem like we’ve all agreed at some point, maybe taken a vote, I don’t know, that this is what we do.

Janet Stemwedel: And you get the laminated card with the steps on it when you get your lab coat.

Dr. Rubidium: And there’s the flowchart, usually laid out like a circle.

Michael Tomasson: Exactly!

Dr. Rubidium: It’s awesome! But that’s what we tell people. It’s kind of like the lie we tell them about orbitals, like Janet was saying, in the beginning of gen chem. But then, this is how sausages are really made. And yes, we have this method, and these are the steps we say are involved with it, but are we talking about that, which is what you learn in high school or junior high or science camp or whatever, or are you actually talking about how you run your research group? Which one are you talking about?

(39:30)
Janet Stemwedel: It can get more complicated than that. There’s also this question of: is the scientific method — whatever the heck we do to build reliable knowledge about the world using science — is that the kind of thing you could do solo, or is it necessarily a process that involves interaction with other people? So, maybe we don’t need to be up at night worrying about whether individual scientists fail to instantiate this idealized scientific method as long as the whole community collectively shakes out as instantiating it.

Michael Tomasson: Hmmm.

Casey: Isn’t this part of what a lot of scientists are doing, that it shakes out some of the human problems that come with it? It’s a messy process and you have a globe full of people performing experiments, doing research. That should, to some extent, push out some noise. We have made advances. Science works to some degree.

Janet Stemwedel: It mostly keeps the plane up in the air when it’s supposed to be in the air, and the water from being poisoned when it’s not supposed to be poisoned. The science does a pretty good job building the knowledge. I can’t always explain why it’s so good at that, but I believe that it does. And I think you’re right, there’s something — certainly in peer review, there’s this assumption that why we play with others here is that they help us catch the thing we’re missing, they help us to make sure the experiments really are reproducible, to make sure that we’re not smuggling in unconscious assumptions, whatever. I would argue, following on something Tomasson wrote in his blog post, that this is a good epistemic reason for some of the stuff that scientists rail on about on Twitter, about how we should try to get rid of sexism and racism and ableism and other kinds of -isms in the practice of science. It’s not just because scientists shouldn’t be jerks to people who could be helping them build the knowledge. It’s that, if you’ve got a more diverse community of people building the knowledge, you up the chances that you’re going to locate the unconscious biases that are sneaking in to the story we tell about what the world is like.

When the transcript continues, we do some more musing about methodology, the frailties of individual humans when it comes to being objective, and epistemic violence.

_______

[1] This discussion based on my reading of Pamela J. Asquith, “Japanese science and western hegemonies: primatology and the limits set to questions.” Naked science: Anthropological inquiry into boundaries, power, and knowledge (1996): 239-258.

* * * * *

Part 1 of the transcript.

Archived video of this Pub-Style Science episode.

Storify’d version of the simultaneous Twitter conversation.

Credibility, bias, and the perils of having too much fun.

If you’re a regular reader of this blog (or, you know, attentive at all to the world around you), you will have noticed that scientific knowledge is built by human beings, creatures that, even on the job, resemble other humans more closely than they do Mr. Spock or his Vulcan conspecifics. When an experiment yields really informative results, most human scientists don’t coolly raise an eyebrow and murmur “Fascinating.” Instead, you’re likely to see a reaction somewhere on the continuum between big smiles, shouts of delight, and a full-on end zone happy-dance. You can observe human scientists displaying similar emotional responses in other kinds of scientific situations, too — say, for example, when they find the fatal flaw in a competitor’s conclusion or experimental strategy.

Many scientists enjoy doing science. (If this weren’t so, the rest of us would have to feel pretty bad for making them do such thankless work to build knowledge that we’re not willing or able to build ourselves but from which we benefit nonetheless.) At least some scientists are enjoying more than just the careful work of forming hypotheses, making observations, comparing outcomes and predictions, and contributing to a more reliable account of the world and its workings. Sometimes the enjoyment comes from playing a particular kind of role in the scientific conversation.

Some scientists delight in the role of advancer or supporter of the new piece of knowledge that will change how we understand our world in some fundamental way. Other scientists delight in the role of curmudgeon, shooting down overly-bold claims. Some scientists relish being contrarians. Others find comfort in being upholders of consensus.

In light of this, we should probably consider whether having one of these human predilections like enjoying being a contrarian (or a consensus-supporter, for that matter) is a potential source of bias against which scientists should guard.

The basic problem is nothing new: what we observe, and how we interpret what we observe, can be influenced by what we expect to see — and, sometimes, by what we want to see. Obviously, scientists don’t always see what they want to see, else people’s grad school lab experiences would be deliriously happy rather than soul-crushingly frustrating. But sometimes what there is to see is ambiguous, and the person making the observation has to make a call. And frequently, with a finite set of data, there are multiple conclusions — not all of them compatible with each other — that can be drawn.

These are moments when our expectations and our ‘druthers might creep in as the tie-breaker.

At the scale of the larger community of science and the body of knowledge it produces, this may not be such a big deal. (As we’ve noted before, objectivity requires teamwork). Given a sufficiently diverse scientific community, there will be loads of other scientists who are likely to have different expectations and ‘druthers. In trying to take someone else’s result and use it to build more knowledge, the thought is that something like replication of the earlier result happens, and biases that may have colored the earlier result will be identified and corrected. (Especially since scientists are in competition for scarce goods like jobs, grants, and Nobel Prizes, you might start with the assumption that there’s no reason not to identify problems with the existing knowledge base. Of course, actual conditions on the ground for scientists can make things more complicated.)

But even given the rigorous assessment she can expect from the larger scientific community, each scientist would also like, individually, to be as unbiased as possible. One of the advantages of engaging with lots of other scientists, with different biases than your own, is you get better at noticing your own biases and keeping them on a shorter leash — putting you in a better place to make objective knowledge.

So, what if you discover that you take a lot of pleasure in being a naysayer or contrarian? Is coming to such self-awareness the kind of thing that should make you extra careful in coming to contrarian conclusions about the data? If you actually come to the awareness that you dig being a contrarian, does it put you in a better position to take corrective action than you would if you enjoyed being a contrarian but didn’t realize that being contrarian was what was bringing you the enjoyment?

(That’s right, a philosopher of science just made something like an argument that scientists might benefit — as scientists, not just as human beings — from self-reflection. Go figure.)

What kind of corrective action do I have in mind for scientists who discover that they may have a tilt, whether towards contrarianism or consensus-supporting? I’m thinking of a kind of scientific buddy-system, for example matching scientists with contrarian leanings to scientists who are made happier by consensus-supporting. Such a pairing would be useful for each scientist in the pair as far as vetting their evidence and conclusions: Here’s the scientist you have to convince! Here’s the colleague whose objections you need to understand and engage with before this goes any further!

After all, one of the things serious scientists are after is a good grip on how things actually are. An explanation that a scientist with different default assumptions than yours can’t easily dismiss is an explanation worth taking seriously. If, on the other hand, your “buddy” can dismiss your explanation, it would be good to know why so you can address its weaknesses (or even, if it is warranted, change your conclusions).

Such a buddy-system would probably only be workable with scientists who are serious about intellectual honesty and getting knowledge that is as objective as possible. Among other things, this means you wouldn’t want to be paired with a scientist for whom having an open mind would be at odds with the conditions of his employment.

_____
An ancestor version of this post was published on my other blog.

Strategies to address questionable statistical practices.

If you have not yet read all you want to read about the wrongdoing of social psychologist Diederik Stapel, you may be interested in reading the 2012 Tilburg Report (PDF) on the matter. The full title of the English translation is “Flawed science: the fraudulent research practices of social psychologist Diederik Stapel” (in Dutch, “Falende wetenschap: De frauduleuze onderzoekspraktijken van sociaal-psycholoog Diederik Stapel”), and it’s 104 pages long, which might make it beach reading for the right kind of person.

If you’re not quite up to the whole report, Error Statistics Philosophy has a nice discussion of some of the highlights. In that post, D. G. Mayo writes:

The authors of the Report say they never anticipated giving a laundry list of “undesirable conduct” by which researchers can flout pretty obvious requirements for the responsible practice of science. It was an accidental byproduct of the investigation of one case (Diederik Stapel, social psychology) that they walked into a culture of “verification bias”. Maybe that’s why I find it so telling. It’s as if they could scarcely believe their ears when people they interviewed “defended the serious and less serious violations of proper scientific method with the words: that is what I have learned in practice; everyone in my research environment does the same, and so does everyone we talk to at international conferences” (Report 48). …

I would place techniques for ‘verification bias’ under the general umbrella of techniques for squelching stringent criticism and repressing severe tests. These gambits make it so easy to find apparent support for one’s pet theory or hypotheses, as to count as no evidence at all (see some from their list). Any field that regularly proceeds this way I would call a pseudoscience, or non-science, following Popper. “Observations or experiments can be accepted as supporting a theory (or a hypothesis, or a scientific assertion) only if these observations or experiments are severe tests of the theory.”

You’d imagine this would raise the stakes pretty significantly for the researcher who could be teetering on the edge of verification bias: fall off that cliff and what you’re doing is no longer worthy of the name scientific knowledge-building.

Psychology, after all, is one of those fields given a hard time by people in “hard sciences,” which are popularly reckoned to be more objective, more revealing of actual structures and mechanisms in the world — more science-y. Fair or not, this might mean that psychologists have something to prove about their hardheadedness as researchers, about the stringency of their methods. Some peer pressure within the field to live up to such standards would obviously be a good thing — and certainly, it would be a better thing for the scientific respectability of psychology than an “everyone is doing it” excuse for less stringent methods.

Plus, isn’t psychology a field whose practitioners should have a grip on the various cognitive biases to which we humans fall prey? Shouldn’t psychologists understand better than most the wisdom of putting structures in place (whether embodied in methodology or in social interactions) to counteract those cognitive biases?

Remember that part of Stapel’s M.O. was keeping current with the social psychology literature so he could formulate hypotheses that fit very comfortably with researchers’ expectations of how the phenomena they studied behaved. Then, fabricating the expected results for his “investigations” of these hypotheses, Stapel caught peer reviewers being credulous rather than appropriately skeptical.

Short of trying themselves to reproduce the experiments Stapel described, how could peer reviewers avoid being fooled? Mayo has a suggestion:

Rather than report on believability, researchers need to report the properties of the methods they used: What was their capacity to have identified, avoided, admitted verification bias? The role of probability here would not be to quantify the degree of confidence or believability in a hypothesis, given the background theory or most intuitively plausible paradigms, but rather to check how severely probed or well-tested a hypothesis is — whether the assessment is formal, quasi-formal or informal. Was a good job done in scrutinizing flaws…or a terrible one? Or was there just a bit of data massaging and cherry picking to support the desired conclusion? As a matter of routine, researchers should tell us.

I’m no social psychologist, but this strikes me as a good concrete step that could help peer reviewers make better evaluations — and that should help scientists who don’t want to fool themselves (let alone their scientific peers) to be clearer about what they really know and how well they really know it.
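To make concrete why apparent support obtained this way can “count as no evidence at all,” here is a minimal simulation of one cherry-picking gambit. It is my own illustrative sketch, not anything from Mayo or the Tilburg Report, and every number and name in it is an assumption: each simulated study measures ten outcomes for which the null hypothesis is true by construction, and the “researcher” reports a finding whenever any one outcome crosses the usual significance threshold.

```python
# Illustrative sketch (not from the original post or the Tilburg Report) of how
# cherry-picking among many outcome measures manufactures "apparent support".
# All parameters (10 outcomes, 20 subjects per group, |t| > 2 cutoff) are
# assumptions chosen only to make the point visible.

import random
import statistics

def one_null_study(n_per_group=20, n_outcomes=10):
    """Simulate a study in which the null hypothesis is true for every outcome.

    Returns True if at least one outcome looks 'significant' (|t| > 2, roughly
    p < 0.05), i.e., if a cherry-picking researcher could report a positive
    finding from this study."""
    for _ in range(n_outcomes):
        treatment = [random.gauss(0, 1) for _ in range(n_per_group)]
        control = [random.gauss(0, 1) for _ in range(n_per_group)]
        diff = statistics.mean(treatment) - statistics.mean(control)
        se = (statistics.variance(treatment) / n_per_group
              + statistics.variance(control) / n_per_group) ** 0.5
        if abs(diff / se) > 2:  # crude two-sided test at about the 5% level
            return True
    return False

random.seed(1)
n_studies = 2000
reportable = sum(one_null_study() for _ in range(n_studies))
print("Nominal false-positive rate per outcome: about 5%")
print("Share of null studies with a reportable 'effect':",
      round(reportable / n_studies, 2))
```

Under these assumptions, roughly 40 percent of studies in which nothing at all is going on still deliver a reportable “effect,” even though the nominal error rate for any single test is about 5 percent. That is the severity point in miniature: a procedure this easy to pass has almost no capacity to reveal that a hypothesis is false, so passing it tells us very little, and a reviewer who is told only that the hypothesis “was supported” has no way to tell the difference.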

Reluctance to act on suspicions about fellow scientists: inside the frauds of Diederik Stapel (part 4).

It’s time for another post in which I chew on some tidbits from Yudhijit Bhattacharjee’s incredibly thought-provoking New York Times Magazine article (published April 26, 2013) on social psychologist and scientific fraudster Diederik Stapel. (You can also look at the tidbits I chewed on in part 1, part 2, and part 3.) This time I consider the question of why it was that, despite mounting clues that Stapel’s results were too good to be true, other scientists in Stapel’s orbit were reluctant to act on their suspicions that Stapel might be up to some sort of scientific misbehavior.

Let’s look at how Bhattacharjee sets the scene in the article:

[I]n the spring of 2010, a graduate student noticed anomalies in three experiments Stapel had run for him. When asked for the raw data, Stapel initially said he no longer had it. Later that year, shortly after Stapel became dean, the student mentioned his concerns to a young professor at the university gym. Each of them spoke to me but requested anonymity because they worried their careers would be damaged if they were identified.

The bold emphasis here (and in the quoted passages that follow) is mine. I find it striking that even now, when Stapel has essentially been fully discredited as a trustworthy scientist, these two members of the scientific community feel safer not being identified. It’s not entirely obvious to me whether their worry is being identified as someone who suspected fabrication was taking place but said nothing to launch official inquiries, or whether they fear that being identified as someone who was suspicious of a fellow scientist could harm their standing in the scientific community.

If you dismiss that second possibility as totally implausible, read on:

The professor, who had been hired recently, began attending Stapel’s lab meetings. He was struck by how great the data looked, no matter the experiment. “I don’t know that I ever saw that a study failed, which is highly unusual,” he told me. “Even the best people, in my experience, have studies that fail constantly. Usually, half don’t work.”

The professor approached Stapel to team up on a research project, with the intent of getting a closer look at how he worked. “I wanted to kind of play around with one of these amazing data sets,” he told me. The two of them designed studies to test the premise that reminding people of the financial crisis makes them more likely to act generously.

In early February, Stapel claimed he had run the studies. “Everything worked really well,” the professor told me wryly. Stapel claimed there was a statistical relationship between awareness of the financial crisis and generosity. But when the professor looked at the data, he discovered inconsistencies confirming his suspicions that Stapel was engaging in fraud.

If one has suspicions about how reliable a fellow scientist’s results are, doing some empirical investigation seems like the right thing to do. Keeping an open mind and then examining the actual data might well show one’s suspicions to be unfounded.

Of course, that’s not what happened here. So, given a reason for doubt now backed by stronger empirical support (not to mention the fact that scientists are trying to build a shared body of scientific knowledge, which means that unreliable papers in the literature can hurt the knowledge-building efforts of other scientists who trust that the work reported there was done honestly), you would think the time was right for this professor to pass on what he had found to those at the university who could investigate further. Right?

The professor consulted a senior colleague in the United States, who told him he shouldn’t feel any obligation to report the matter.

For all the talk of science, and the scientific literature, being “self-correcting,” it’s hard to imagine the precise mechanism for such self-correction in a world where no scientist who is aware of likely scientific misconduct feels any obligation to report the matter.

But the person who alerted the young professor, along with another graduate student, refused to let it go. That spring, the other graduate student examined a number of data sets that Stapel had supplied to students and postdocs in recent years, many of which led to papers and dissertations. She found a host of anomalies, the smoking gun being a data set in which Stapel appeared to have done a copy-paste job, leaving two rows of data nearly identical to each other.

The two students decided to report the charges to the department head, Marcel Zeelenberg. But they worried that Zeelenberg, Stapel’s friend, might come to his defense. To sound him out, one of the students made up a scenario about a professor who committed academic fraud, and asked Zeelenberg what he thought about the situation, without telling him it was hypothetical. “They should hang him from the highest tree” if the allegations were true, was Zeelenberg’s response, according to the student.

Some might think these students were being excessively cautious, but the sad fact is that scientists faced with allegations of misconduct against a colleague — especially if they are brought by students — frequently side with their colleague and retaliate against those making the allegations. Students, after all, are new members of one’s professional community, so green one might not even think of them as really members. They are low status, they are learning how things work, they are judged likely to have misunderstood what they have seen. And, in contrast to one’s colleagues, students are transients. They are just passing through the training program, whereas you might hope to be with your colleagues for your whole professional life. In a case of dueling testimony, who are you more likely to believe?

Maybe the question should be whether your bias towards believing one over the other is strong enough to keep you from examining the available evidence to determine whether your trust is misplaced.

The students waited till the end of summer, when they would be at a conference with Zeelenberg in London. “We decided we should tell Marcel at the conference so that he couldn’t storm out and go to Diederik right away,” one of the students told me.

In London, the students met with Zeelenberg after dinner in the dorm where they were staying. As the night wore on, his initial skepticism turned into shock. It was nearly 3 when Zeelenberg finished his last beer and walked back to his room in a daze. In Tilburg that weekend, he confronted Stapel.

It might not be universally true, but at least some of the people who will lie about their scientific findings in a journal article will lie right to your face about whether they obtained those findings honestly. Yet lots of us think we can tell — at least with the people we know — whether they are being honest with us. This hunch can be just as wrong as the wrongest scientific hunch waiting for us to accumulate empirical evidence against it.

The students seeking Zeelenberg’s help in investigating Stapel’s misbehavior found a situation in which Zeelenberg would have to look at the empirical evidence first before he looked his colleague in the eye and asked him whether he was fabricating his results. They had already gotten him to say, at least in the abstract, that the kind of behavior they had reason to believe Stapel was committing was unacceptable in their scientific community. To make a conscious decision to ignore the empirical evidence would have meant Zeelenberg would have to see himself as displaying a kind of intellectual dishonesty — because if fabrication is harmful to science, it is harmful to science no matter who perpetrates it.

As it was, Zeelenberg likely had to make the painful concession that he had misjudged his colleague’s character and trustworthiness. But having wrong hunches in science is much less of a crime than clinging to those hunches in the face of mounting evidence against them.

Doing good science requires a delicate balance of trust and accountability. Scientists’ default position is to trust that other scientists are making honest efforts to build reliable scientific knowledge about the world, using empirical evidence and methods of inference that they display for the inspection (and critique) of their colleagues. Not to hold this default position means you have to build all your knowledge of the world yourself (which makes achieving anything like objective knowledge really hard). However, this trust is not unconditional, which is where the accountability comes in. Scientists recognize that they need to be transparent about what they did to build the knowledge — to be accountable when other scientists ask questions or disagree about conclusions — else that trust evaporates. When the evidence warrants it, distrusting a fellow scientist is not mean or uncollegial — it’s your duty. We need the help of others to build scientific knowledge, but if they insist that we ignore evidence of their scientific misbehavior, they’re not actually helping.

Building a scientific method around the ideal of objectivity.

Modern science seems committed to the idea that seeking verifiable facts accessible to anyone is a good strategy for building a reliable picture of the world as it really is, but historically these two senses of objectivity (accessibility to any observer, and fidelity to how the world really is) have not always gone together. Peter Machamer describes a historical moment when the two were coupled in his article, “The Concept of the Individual and the Idea(l) of Method in Seventeenth-Century Natural Philosophy.” [1]

Prior to the emergence of a scientific method that stressed objectivity, Machamer says, most people thought knowledge came from divine inspiration (whether written in holy books or transmitted by religious authorities) or from ancient sources that were only shared with initiates (think alchemy, stone masonry, and healing arts here). Knowledge, in other words, was a scarce resource that not everyone could get his or her hands (or brains) on. To the extent that a person found the world intelligible at all, it was probably based on the story that someone else in a special position of authority was telling.

How did this change? Machamer argues that it changed when people started to think of themselves as individuals. The erosion of feudalism, the reformation and counter-reformation, European voyages to the New World (which included encounters with plants, animals, and people previously unknown in the Old World), and the shift from a geocentric to a heliocentric view of the cosmos all contributed to this shift by calling old knowledge and old sources of authority into question. As the old sources of knowledge became less credible (or at least less monopolistic), the individual came to be seen as a new source of knowledge.

Machamer describes two key aspects of individuality at work. One is what he calls the “Epistemic I.” This is the recognition that an individual can gain knowledge and ideas directly from his or her own interactions with the world, and that these interactions depend on senses and powers of reason that all humans have (or could have, given the opportunity to develop them). This recognition casts knowledge (and the ability to get it) as universal and democratic. The power to build knowledge is not concentrated in the hands (or eyes) of just the elite — this power is our birthright as human beings.

The other side of individuality here is what Machamer calls the “Entrepreneurial I.” This is the belief that an individual’s insights deserve credit and recognition, perhaps even payment. This recognition casts the individual who has those insights as a leader, or a teacher — definitely, as a special human worth listening to.

Pause for a moment to notice that this tension is still present in science. For all the commitment to science as an enterprise that builds knowledge from observations of the world that others must be able to make (which is the whole point of reproducibility), scientists also compete for prestige and career capital based on which individual was the first to observe (and report observing) a particular detail that anyone could see. Seeing something new is not effortless (as we’ve discussed in the last two posts), but there’s still an uneasy coexistence between the idea of scientific knowledge-building as within the powers of normal human beings and the idea of scientific knowledge-building as the activity of special human beings with uniquely powerful insights and empirical capacities.

The two “I”s that Machamer describes came together as thinkers in the 1600s tried to work out a reliable method by which individuals could replace discredited sources of “knowledge” and expand on what remained to produce their own knowledge. Lots of “natural philosophers” (what we would call scientists today) set out to formulate just such a method. The paradox here is that each thinker was selling (often literally) a way of knowing that was supposed to work for everyone, while simultaneously presenting himself as the only one clever enough to have found it.

Looking for a method that anyone could use to get the facts about the world, the thinkers Machamer describes recognized that they needed to formulate a clear set of procedures that was broadly applicable to the different kinds of phenomena in the world about which people wanted to build knowledge, that was teachable (rather than being a method that only the person who came up with it could use), and that was able to bring about consensus and halt controversy. However, in the 1600s there were many candidates for this method on offer, which meant that there was a good bit of controversy about the question of which method was the method.

Among the contenders for the method, the Baconian method involved cataloguing many experiences of phenomena, then figuring out how to classify them. The Galilean method involved representing the phenomena in terms of mechanical models (and even going so far as to build the corresponding machine). The Hobbesian method focused on analyzing compositions and divisions of substances in order to distinguish causes from effects. And these were just three contenders in a crowded field. If there was a common thread in these many methods, it was describing or representing the phenomena of interest in spatial terms. In the seventeenth century, as now, seeing is believing.

In a historical moment when people were considering the accessibility and the power of knowledge through experience, it became clear to the natural philosophers trying to develop an appropriate method that such knowledge also required control. To get knowledge, it was not enough to have just any experience — you had to have the right kind of experiences. This meant that the methods under development had to give guidance on how to track empirical data and then analyze it. As well, these methods had to invent the concept of a controlled experiment.

Whether it was in a published dialogue or an experiment conducted in a public space before witnesses, the natural philosophers developing knowledge-building methods recognized the importance of demonstration. Machamer writes:

Demonstration … consists in laying a phenomenon before oneself and others. This “laying out” exhibits the structure of the phenomenon, exhibits its true nature. What is laid out provides an experience for those seeing it. It carries informational certainty that causes assent. (94)

Interestingly, there seems to have been an assumption that once people hit on the appropriate procedure for gathering empirical facts about the phenomena, these facts would be sufficient to produce agreement among those who observed them. The ideal method was supposed to head off controversy. Disagreements were either a sign that you were using the wrong method, or that you were using the right method incorrectly. As Machamer describes it:

[T]he doctrines of method all held that disputes or controversies are due to ignorance. Controversies are stupid and accomplish nothing. Only those who cannot reason properly will find it necessary to dispute. Obviously, as noted, the ideal of universality and consensus contrasts starkly with the increasing number of disputes that engage these scientific entrepreneurs, and with the entrepreneurial claims of each that he alone has found the true method.

Ultimately, what stemmed the proliferation of competing methods was a professionalization of science, in which the practitioners essentially agreed to be guided by a shared method. The hope was that the method the scientific profession agreed upon would be the one that allowed scientists to harness human senses and intellect to best discover what the world is really like. Within this context, scientists might still disagree about the details of the method, but they took it that such disagreements ought to be resolved in such a way that the resulting methodology better approximated this ideal method.

The adoption of shared methodology and the efforts to minimize controversy are echoed in Bruce Bower’s [2] discussion of how the ideal of objectivity has been manifested in scientific practices. He writes:

Researchers began to standardize their instruments, clarify basic concepts, and write in an impersonal style so that their peers in other countries and even in future centuries could understand them. Enlightenment-influenced scholars thus came to regard facts no longer as malleable observations but as unbreakable nuggets of reality. Imagination represented a dangerous, wild force that substituted personal fantasies for a sober, objective grasp of nature. (361)

What the seventeenth century natural philosophers Machamer describes were striving for is clearly recognizable to us as objectivity — both in the form of an objective method for producing knowledge and in the form of a body of knowledge that gives a reliable picture of how the world really is. The objective scientific method they sought was supposed to produce knowledge we could all agree upon and to head off controversy.

As you might imagine, the project of building reliable knowledge about the world has pushed scientists in the direction of also building experimental and observational techniques that are more standardized and require less individual judgment across observers. But an interesting side-effect of this focus on objective knowledge as a goal of science is the extent to which scientific reports can make it look like no human observers were involved in making the knowledge being reported. The passive voice of scientific papers — these procedures were performed, these results were observed — does more than just suggest that the particular individuals that performed the procedures and observed the results are interchangeable with other individuals (who, scientists trust, would, upon performing the same procedures, see the same results for themselves). The passive voice can actually erase the human labor involved in making knowledge about the world.

This seems like a dangerous move when objectivity is not an easy goal to achieve, but rather one that requires concerted teamwork along with one’s objective method.
_____________

[1] “The Concept of the Individual and the Idea(l) of Method in Seventeenth-Century Natural Philosophy,” in Peter Machamer, Marcello Pera, and Aristides Baltas (eds.), Scientific Controversies: Philosophical and Historical Perspectives. Oxford University Press, 2000.

[2] Bruce Bower, “Objective Visions,” Science News. 5 December 1998: Vol. 154, pp. 360-362

The challenges of objectivity: lessons from anatomy.

In the last post, we talked about objectivity as a scientific ideal aimed at building a reliable picture of what the world is actually like. We also noted that this goal travels closely with the notion of objectivity as what anyone applying the appropriate methodology could see. But, as we saw, it takes a great deal of scientific training to learn to see what anyone could see.

The problem of how to see what is really there is not a new one for scientists. In her book The Scientific Renaissance: 1450-1630 [1], Marie Boas Hall describes how this issue presented itself to Renaissance anatomists. These anatomists endeavored to learn about the parts of the human body that could be detected with the naked eye and the help of a scalpel.

You might think that the subject matter of anatomy would be more straightforward for scientists to “see” than the cells Fred Grinnell describes [2] (discussed in the last post), which require preparation and staining and the twiddling of knobs on microscopes. However, the most straightforward route to gross anatomical knowledge — dissections of cadavers — had its own challenges. For one thing, cadavers (especially human cadavers) were often in short supply. When they were available, anatomists hardly ever performed solitary dissections of them. Rather, dissections were performed, quite literally, for an audience of scientific students, generally with a surgeon doing the cutting while a professor stood nearby and read aloud from an anatomical textbook describing the organs, muscles, or bones encountered at each stage of the dissection process. The hope was that the features described in the text would match the features being revealed by the surgeon doing the dissecting, but there were doubtless instances where the audio track (as it were) was not quite in sync with the visual. Also, as a practical matter, before the invention of refrigeration, dissections were seasonal, performed in the winter rather than the warmer months to retard the cadaver’s decomposition. This put limits on how much anatomical study a person could cram into any given year.

In these conditions, most of the scientists who studied anatomy logged many more hours watching dissections than performing dissections themselves. In other words, they were getting information about the systems of interest by seeing rather than by doing — and they weren’t always seeing those dissections from the good seats. Thus, we shouldn’t be surprised that anatomists greeted the invention of the printing press by producing a number of dissection guides and anatomy textbooks.

What’s the value of a good textbook? It shares detailed information compiled by another scientist, sometimes over the course of years of study, yet you can consume that information in a more timely fashion. If it has diagrams, it can give you a clearer view of what there is to observe (albeit through someone else’s eyes) than you may be able to get from the cheap seats at a dissection. And, if you should be so lucky as to get your own specimens for study, a good textbook can guide your examination of the new material before you, helping you deal with the specimen in a way that lets you see more of what there is to see (including spatial relations and points of attachment) rather than messing it up with sloppy dissection technique.

Among the most widely used anatomy texts in the Renaissance were “uncorrupted” translations of On the Use of the Parts and Anatomical Procedures by the ancient Greek anatomist Galen, and the groundbreaking new text On the Fabric of the Human Body (published in 1543) by Vesalius. The revival of Galen fit into a pattern of Renaissance celebration of the wisdom of the ancients rather than setting out to build “new” knowledge, and Hall describes the attitude of Renaissance anatomists toward his work as “Galen-worship.” Had Galen been alive during the Renaissance, he might well have been irritated at the extent to which his discussions of anatomy — based on dissections of animals, not human cadavers — were taken to be authoritative. Galen himself, as an advocate of empiricism, would have urged other anatomists to “dissect with a fresh eye,” attentive to what the book of nature (as written on the bodies of creatures to be dissected) could teach them.

As it turns out, this may be the kind of thing that’s easier to urge than to do. Hall asks,

[W]hat scientific apprentice has not, many times since the sixteenth century, preferred to trust the authoritative text rather than his own unskilled eye? (137)

Once again, it requires training to be able to see what there is to see. And surely someone who has written textbooks on the subject (even centuries before) has more training in how to see than does the novice leaning on the textbook.

Of course, the textbook becomes part of the training in how to see, which can, ironically, make it harder to be sure that what you are seeing is an accurate reflection of the world, not just of the expectations you bring to your observations of it.

The illustrations in the newer anatomy texts made it seem less urgent to anatomy students that they observe (or participate in) actual dissections for themselves. As the techniques for mass-producing illustrations got better (especially with the shift from woodcuts to engravings), the illustrators could include much more detail in their images. Paradoxically, this could be a problem, as the illustrator was usually someone other than the scientist who wrote the book, and the author and illustrator were not always in close communication as the images were produced. Given a visual representation of what there is to observe and a description of what there is to observe in the text, which would a student trust more?

Bruce Bower discusses this sort of problem in his article “Objective Visions,” [3] describing the procedures used by Dutch anatomist Bernhard Albinus in the mid-1700s to create an image of the human skeleton. Bower writes:

Albinus carefully cleans, reassembles, and props up a complete male skeleton; checks the position of each bone in comparison with observations of an extremely skinny man hired to stand naked next to the skeleton; he calculates the exact spot at which an artist must sit to view the skeleton’s proportions accurately; and he covers engraving plates with cross-hatched grids so that images can be drawn square-by-square and thus be reproduced more reliably. (360)

Here, it sounds like Albinus is trying hard to create an image that accurately conveys what there is to see about the skeleton and its spatial relations. The methodology seems designed to make the image-creation faithful to the particulars of the actual specimen — in a word, objective. But, Bower continues:

After all that excruciating attention to detail, the eminent anatomist announces that his atlas portrays not a real skeleton, but an idealized version. Albinus has dictated alterations to the artist. The scrupulously assembled model is only a springboard for insights into a more “perfect” representation of the human skeleton, visible only to someone with Albinus’ anatomical acumen. (360)

Here, Albinus was trying to abstract away from the peculiarities of the particular skeleton he had staged as a model for observation in order to describe what he saw as the real thing. This is a decidedly Platonist move. Plato’s view was that the stuff of our world consists largely of imperfect material instantiations of immaterial ideal forms — and that science makes the observations it does of many examples of material stuff to get a handle on those ideal forms.

If you know the allegory of the cave, however, you know that Plato didn’t put much faith in feeble human sense organs as a route to grasping the forms. The very imperfection of those material instantiations that our sense organs apprehend would be bound to mislead us about the forms. Instead, Plato thought we’d need to use the mind to grasp the forms.

This is a crucial juncture where Aristotle parted ways with Plato. Aristotle still thought that there was something like the forms, but he rejected Plato’s full-strength rationalism in favor of an empirical approach to grasping them. If you wanted to get a handle on the form of “horse,” for example, Aristotle thought the thing to do was to examine lots of actual specimens of horse and to identify the essence they all have in common. The Aristotelian approach probably feels more sensible to modern scientists than the Platonist alternative, but note that we’re still talking about arriving at a description of “horse-ness” that transcends the observable features of any particular horse.

Whether you’re a Platonist, an Aristotelian, or something else, it seems pretty clear that scientists do decide that some features of the systems they’re studying are crucial and others are not. They distinguish what they take to be background from what they take to be the thing they’re observing. Rather than presenting every single squiggle in their visual field, they abstract away to present the piece of the world they’re interested in talking about.

And this is where the collaboration between anatomist and illustrator gets ticklish. What happens if the engraver is abstracting away from the observed particulars differently than the anatomist would? As Hall notes, the engravings in Renaissance anatomy texts were not always accurate representations of the texts. (Nor, for that matter, did the textual descriptions always get the anatomical features right — Renaissance anatomists, Vesalius included, managed to repeat some anatomical mistakes that went back to Galen, likely because they “saw” their specimens through a lens of expectations shaped by what Galen said they were going to see.)

On top of this, the fact that artists like Leonardo Da Vinci studied anatomy to improve their artistic representations of the human form spilled back to influence Renaissance scientific illustrators. These illustrators, as much as their artist contemporaries, may have looked beyond the spatial relations between bones or muscles or internal organs for hidden beauty in their subjects. While this resulted in striking illustrations, it also meant that their engravings were not always accurate representations of the cadavers that were officially their subjects.

These factors conspired to produce visually arresting anatomy texts that exerted an influence on how the anatomy students using them understood the subject, even when these students went beyond the texts to perform their own dissections. Hall writes,

[I]t is often quite easy to “see” what a textbook or manual says should be seen. (141)

Indeed, faced with a conflict between the evidence of one’s eyes pointed at a cadaver and the evidence of one’s eyes pointed at an anatomical diagram, one might easily conclude that the cadaver in question was a weird variant while the diagram captured the “standard” configuration.

Bower’s article describes efforts scientists made to come up with visual representations that were less subjective. Bower writes:

Scientists of the 19th century rapidly adopted a new generation of devices that rendered images in an automatic fashion. For instance, the boxy contraption known as the camera obscura projected images of a specimen, such as a bone or a plant, onto a surface where a researcher could trace its form onto a piece of paper. Photography soon took over and further diminished human involvement in image-making. … Researchers explicitly equated the manual representation of items in the natural world with a moral code of self-restraint. … A blurry photograph of a star or ragged edges on a slide of tumor tissues were deemed preferable to tidy, idealized portraits. (361)

Our naïve picture of objectivity may encourage us to think that seeing is believing, and that mechanically captured images are more reliable than those rendered by the hand of a (subjective) human, but it’s important to remember that pictures — even photographs — have points of view, depend on choices made about the conditions of their creation, and can be used as arguments to support one particular way of seeing the world over another.

In the next post, we’ll look at how Seventeenth Century “natural philosophers” labored to establish a general-use method for building reliable knowledge about the world, and at how the notion of objectivity was connected to these efforts, and to the recognizable features of “the scientific method” that resulted.
_____________

[1] Marie Boas Hall, The Scientific Renaissance: 1450-1630. Dover, 1994.

[2] Frederick Grinnell, The Scientific Attitude. Guilford Press, 1992.

[3] Bruce Bower, “Objective Visions,” Science News. 5 December 1998: Vol. 154, pp. 360-362

The ideal of objectivity.

In trying to figure out what ethics ought to guide scientists in their activities, we’re really asking a question about what values scientists are committed to. Arguably, something that a scientist values may not be valued as much (if at all) by the average person in that scientist’s society.

Objectivity is a value – perhaps one of the values that scientists and non-scientists most strongly associate with science. So, it’s worth thinking about how scientists understand that value, some of the challenges in meeting the ideal it sets, and some of the historical journey that was involved in objectivity becoming a central scientific value in the first place. I’ll be splitting this discussion into three posts. This post sets the stage and considers how modern scientific practitioners describe objectivity. The next post will look at objectivity (and its challenges) in the context of work being done by Renaissance anatomists. The third post will examine how the notion of objectivity was connected to the efforts of Seventeenth Century “natural philosophers” to establish a method for building reliable knowledge about the world.

First, what do we mean by objectivity?

In everyday discussions of ethics, being objective usually means applying the rules fairly and treating everyone the same rather than showing favoritism to one party or another. Is this what scientists have in mind when they voice their commitment to objectivity? Perhaps in part. It could be connected to applying “the rules” of science (i.e., the scientific method) fairly and not letting bias creep into the production of scientific knowledge.

This seems close to the characterization of good scientific practice that we see in the National Academy of Sciences and National Research Council document, “The Nature of Science.” [1] This document describes science as an activity in which hypotheses undergo rigorous tests, whereby researchers compare the predictions of the hypotheses to verifiable facts determined by observation and experiment, and findings and corrections are announced in refereed scientific publications. It states, “Although [science’s] goal is to approach true explanations as closely as possible, its investigators claim no final or permanent explanatory truths.” (38)

Note that establishing rigorous facts, verifying those facts (or sharing the information necessary to verify them), correcting mistakes, and reliably reporting findings all depend on honesty – you can’t carry out these activities by making up your results or presenting them in a deceptive way. So being objective in the sense of following good scientific methodology requires a commitment not to mislead.

But here, in “The Nature of Science,” we see hints that there are two closely related, yet distinct, meanings of “objective”. One is what anyone applying the appropriate methodology could see. The other is a picture of what the world is really like. Getting a true picture of the world (or aiming for such a picture) means seeking objectivity in the second sense — finding the true facts. Seeking out the observational data that other scientists could verify — the first sense of objectivity — is closely tied to the experimental method scientists use and their strategies for reporting their results. Presumably, applying objective methodology would be a good strategy for generating an accurate (and thus objective) picture of the world.

But we should note a tension here that’s at least as old as the tension between Plato and his student Aristotle. What exactly are the facts about the world that anyone could see? Are sense organs like eyes all we need to see them? If such facts really exist, are they enough to help us build a true picture of the world?

In the chapter “Making Observations” from his book The Scientific Attitude [2], Fred Grinnell discusses some of the challenges of seeing what there is to see. He argues that, especially in the realms science tries to probe, seeing what’s out there is not automatic. Rather, we have to learn to see the facts that are there for anyone to observe.

Grinnell describes the difficulty students have seeing cells under a light microscope, a difficulty that persists even after students work out how to use the microscope to adjust the focus. He writes:

The students’ inability to see the cells was not a technical problem. There can be technical problems, of course — as when one takes an unstained tissue section and places it under a microscope. Under these conditions it is possible to tell that something is “there,” but not precisely what. As discussed in any histology textbook, the reason is that there are few visual features of unstained tissue sections that our eyes can discriminate. As the students were studying stained specimens, however, sufficient details of the field were observable that could have permitted them to distinguish among different cells and between cells and the noncellular elements of the tissue. Thus, for these students, the cells were visible but unseen. (10-11)

Grinnell’s example suggests that seeing cells, for example, requires more than putting your eye to the eyepiece of a microscope focused on a stained sample of cells. Rather, you need to be able to recognize those bits of your visual field as belonging to a particular kind of object — and, you may even need to have something like the concept of a cell to be able to identify what you are seeing as cells. At the very least, this suggests that we should amend our gloss of objective as “what anyone could see” to something more like “what anyone could see given a particular conceptual background and some training with the necessary scientific measuring devices.”

But Grinnell makes even this seem too optimistic. He notes that “seeing things one way means not seeing them another way,” which implies that there are multiple ways to interpret any given piece of the world toward which we point our sense organs. Moreover, he argues,

Each person’s previous experiences will have led to the development of particular concepts of things, which will influence what objects can be seen and what they will appear to be. As a consequence, it is not unusual for two investigators to disagree about their observations if the investigators are looking at the data according to different conceptual frameworks. Resolution of such conflicts requires that the investigators clarify for each other the concepts that they have in mind. (15)

In other words, scientists may need to share a bundle of background assumptions about the world to look at a particular piece of that world and agree on what they see. Much more is involved in seeing “what anyone can see” than meets the eye.

We’ll say more about this challenge in the next post, when we look at how Renaissance anatomists tried to build (and communicate) objective knowledge about the human body.
_____________

[1] “The Nature of Science,” in Panel on Scientific Responsibility and the Conduct of Research, National Academy of Sciences, National Academy of Engineering, Institute of Medicine. Responsible Science, Volume I: Ensuring the Integrity of the Research Process. Washington, DC: The National Academies Press, 1992.

[2] Frederick Grinnell, The Scientific Attitude. Guilford Press, 1992.