Pub-Style Science: dreams of objectivity in a game built around power.

This is the third and final installment of my transcript of the Pub-Style Science discussion about how (if at all) philosophy can (or should) inform scientific knowledge-building. Leading up to this part of the conversation, we were considering the possibility that the idealization of the scientific method left out a lot of the details of how real humans actually interact to build scientific knowledge …

Dr. Isis: And that’s the tricky part, I think. That’s where this becomes a messy endeavor. You think about the parts of the scientific method, and you write the scientific method out, we teach it to our students, it’s on the little card, and I think it’s one of the most amazing constructs that there is. It’s certainly a philosophy.

I have devoted my career to the scientific method, and yet it’s that last step that is the messiest. We take our results and we interpret them, we either reject or fail to reject the hypothesis, and in a lot of cases, the way we interpret the very objective data that we’re getting is based on the social and cultural constructs of who we are. And the messier part is that the who we are — you say that science is done around the world, sure, but really, who is it done by? We all get the CV, “Dear honorable and most respected professor…” And what do you do with those emails? You mark them as spam. But why? Why do we do that? There are people [doing science] around the world, and yet we reject their science-doing because of who they are and where they’re from, and our understanding, our capacity to take [our doing] of that last step of the scientific method as superior because of some pedigree of our training, which is absolutely rooted in the narrowest sliver of our population.

And that’s the part that frightens me about science. Going from lab to lab and learning things, you’re not just learning objective skills, you’re learning a political process — who do you shake hands with at meetings, who do you have lunch with, who do you have drinks with, how do you phrase your grants in a particular way so they get funded because this is the very narrow sliver of people who are reading them? And I have no idea what to do about that.

Janet Stemwedel: I think this is a place where the acknowledgement that’s embodied in editorial policies of journals like PLOS ONE, that we can’t actually reliably predict what’s going to be important, is a good step forward. That’s saying, look, what we can do is talk about whether this is a result that seems to be robust: this is how I got it; I think if you try to get it in your lab, you’re likely to get it, too; this is why it looked interesting to me in light of what we knew already. Without saying: oh, and this is going to be the best thing since sliced bread. At least that’s acknowledging a certain level of epistemic humility that it’s useful for the scientific community to put out there, to not pretend that the scientific method lets you see into the future. Because last time I checked, it doesn’t.

(46:05)
Andrew Brandel: I just want to build on this point, that this question of objective truth also is a question that is debated hotly, obviously, in science, and I will get in much trouble for my vision of what is objective and what is not objective. This question of whether, to quote a famous philosopher of science, we’re all looking at the same world through different-colored glasses, or whether there’s something more to it, if we’re actually talking about nature in different ways, if we can really learn something not even from science being practiced wherever in the world, but from completely different systems of thinking about how the world works. Because the other part of this violence is not just the ways in which certain groups have not been included in the scientific community, the professional community, which was controlled by the church and wealthy estates and things, but also with institutions like the scientific method, like certain kinds of philosophy. A lot of violence has been propagated in the name of those things. So I think it’s important to unpack not just this question of let’s get more voices to the table, but literally think about how the structures of what we’re doing themselves — the way the universities are set up, the way that we think about what science does, the way that we think about objective truth — also propagate certain kinds of violence, epistemic kinds of violence.

Michael Tomasson: Wait wait wait, this is fascinating. Epistemic violence? Expand on that.

Andrew Brandel: What I mean to say is, part of the problem, at least from the view of myself — I don’t want to actually represent anybody else — is that if we think that we’re getting to some better method of getting to objective truth, if we think that we have — even if it’s only in an ideal state — some sort of cornerstone, some sort of key to the reality of things as they are, then we can squash the other systems of thinking about the world. And that is also a kind of violence, in a way, that’s not just the violence of there’s no women at the table, there’s no different kinds of people at the table. But there’s actually another kind of power structure that’s embedded in the very way that we think about truths. So, for example, a famous anthropologist, Lévi-Strauss, would always point out that the botanists would go to places in Latin America and they would identify 14 different kinds of XYZ plant, and the people living in that jungle who aren’t scientists or don’t have that kind of sophisticated knowledge could distinguish like 45 kinds of these plants. And when the botanists took the specimens back to the lab, the local people turned out to be completely right.

So what does that mean? How do we think about these different ways [of knowing]? I think unpacking that is a big thing that social science and philosophy of science can bring to this conversation, pointing out when there is a place to critique the ways in which science becomes like an ideology.

Michael Tomasson: That just sort of blew my mind. I have to process that for a while. I want to pick up on something you’re saying and that I think Janet said before, which is really part of the spirit of what Pub-Style Science is all about, the idea that if we get more different kinds of voices into science, we’ll have a little bit better science at the other end of it.

Dr. Rubidium: Yeaaaah. We can all sit around like, I’ve got a ton of great ideas, and that’s fabulous, and new voices, and rah rah. But, where are the new voices? If the new voices, or what you would call new voices, or new opinions, or different opinions (maybe not even new, just different from the current power structure) — if those voices aren’t getting to positions of real power to effect change, it doesn’t matter how many foot soldiers you get on the ground. You have got to get people into the position of being generals. And is that happening? No. I would say no.

Janet Stemwedel: Having more different kinds of people at the table doesn’t matter if you don’t take them seriously.

Andrew Brandel: Exactly. That’s a key point.

Dr. Isis: This is the tricky thing that I sort of alluded to. And I’m not talking about diverse voices in terms of gender and racial and sexual orientation diversity and disability issues. I’m talking about just this idea of diverse voices. One of the things that is tricky, again, is that to get to play the game you have to know the rules, and trying to change the rules too early — one, I think it’s dangerous to try to change the rules before you understand what the rules even are, and two, that is the quickest way to get smacked in the nose when you’re very young. And now, to extend that to issues of actual diversity in science, at least my experience has been that some of the folks who are diverse in science are some of the biggest rule-obeyers. Because you have to be in order to survive. You can’t come in and be different as it is and decide you’re going to change the rules out from under everybody until you get into that — until you become a general, to use Dr. Rubidium’s analogy. The problem is, by the time you become the general, have you drunk so much of the Kool-Aid that you no longer remember who you were? Do you still have enough of yourself to change the system? Some of my more senior colleagues, diverse colleagues, who came up the ranks, are some of the biggest believers in the rules. I don’t know if they felt that way when they were younger folks.

Janet Stemwedel: Part of it can be, if the rules work for you, there’s less incentive to think about changing them. But this is one of those places where those of us philosophers who think about where the knowledge-building bumps up against the ethics will say: look, the ethical responsibilities of the people in the community with more power are different than the ethical responsibilities of the people in the community who are just coming up, because they don’t have as much weight to throw around. They don’t have as much power. So I talk a lot to mid-career and late-career scientists and say, hey look, you want to help build a different community, a different environment for the people you’re training? You’ve got to put some skin in the game to make that happen. You’re in a relatively safe place to throw that weight around. You do that!

And you know, I try to make these prudential arguments about, if you shift around the incentive structures [in various ways], what’s likely to produce better knowledge on the other end? That’s presumably why scientists are doing science, ’cause otherwise there’d be some job that they’d be doing that takes up less time and less brain.

Andrew Brandel: This is a question also of where ethics and epistemic issues come together, because I think that’s really part of what kind of radical politics — there’s a lot of different theories about what kind of revolution you can talk about, what a revolutionary politics might be to overthrow the system in science. But I think the issue is that it’s also an epistemic thing, that it’s also a question of producing better knowledge, and that, to bring back this point about how it’s not just about putting people in positions, it’s not just hiring an assistant professor from XYZ country or more women or these kinds of things, but it’s also a question of putting oneself sufficiently at risk, and taking seriously the possibility that I’m wrong, from radically different positions. That would really move things, I think, in a more interesting direction. That’s maybe something we can bring to the table.

Janet Stemwedel: This is the piece of Karl Popper, by the way, that scientists like as an image of what kind of tough people they are. Scientists are not trying to prove their hypotheses, they’re trying to falsify them, they’re trying to show that they’re wrong, and they’re ready to kiss even their favorite hypothesis goodbye if that’s what the evidence shows.

Some of those hypotheses that scientists need to be willing to kiss goodbye have to do with narrow views of what kind of details count as fair game for building real reliable knowledge about the world and what kind of people and what kind of training could do that, too. Scientists really have to be more evidence-attentive around issues like their own implicit bias. And for some reason that’s really hard, because scientists think that individually they are way more objective than the average bear. The real challenge of science is recognizing that we are all average bears, and it is just the coordination of our efforts within this particular methodological structure that gets us something better than the individual average bear could get by him- or herself.

Michael Tomasson: I’m going to backpedal as furiously as I can, since we’re running out of time. So I’ll give my final spiel and then we’ll go around for closing comments.

I guess I will pare down my skeleton-key: I think there’s an idea of different ways of doing science, and there’s a lot of culture that comes with it that I think is very flexible. I think what I’m getting at is, is there some universal hub for whatever different ways people are looking at science? Is there some sort of universal skeleton or structure? And I guess, if I had to backpedal furiously, I would say, what I would try to teach my folks is, number one, there is an objective world, it’s not just my opinion. When people come in and talk to me about their science and experiments, it’s not just about what I want, it’s not just about what I think, it’s that there is some objective world out there that we’re trying to describe. The second thing, the most stripped-down version of the scientific method I can think of, is that in order to understand that objective world, it helps to have a hypothesis, a preconceived notion, first to challenge.

What I get frustrated about, and this is just a very practical day-to-day thing, is I see people coming and doing experiments saying, “I have no preconceived notion of how this should go, I did this experiment, and here’s what I got.” It’s like, OK, that’s very hard to interpret unless you start from a certain place — here’s my prediction, here’s what I think was going on — and then test it.

Dr. Isis: I’ll say, Tomasson, actually this wasn’t as boring as I thought it would be. I was really worried about this one. I wasn’t really sure what we were supposed to be talking about — philosophy and science — but this one was OK. So, good on you.

But, I think that I will concur with you that science is about seeking objective truth. I think it’s a darned shame that humans are the ones doing the seeking.

Janet Stemwedel: You know, dolphin science would be completely different, though.

Dr. Rubidium: Yeah, dolphins are jerks! What are you talking about?

Janet Stemwedel: Exactly! All their journals would be behind paywalls.

Andrew Brandel: I’ll just say that I was saying to David, who I know is a regular member of your group, that I think it’s a good step in the right direction to have these conversations. We don’t get asked enough, as social scientists, even those of us who work in science settings, to talk about these issues, and to talk about what the ethical and epistemic stakes are in doing what we do. What can we bring to the table on similar kinds of questions? For me, this question of cultivating a kind of openness to being wrong is so central to thinking about the kind of science that I do. I think that these kinds of conversations are important, and we need to generate some kind of momentum. I jokingly said to Tomasson that we need a grant to pay for a workshop to get more people into these types of conversations, because I think it’s significant. It’s a step in the right direction.

Janet Stemwedel: I’m inclined to say one of the take-home messages here is that there’s a whole bunch of scientists and me, and none of you said, “Let’s not talk about philosophy at all, that’s not at all useful.” I would like some university administrators to pay attention to this. It’s possible that those of us in the philosophy department are actually contributing something that enhances not only the fortunes of philosophy majors but also the mindfulness of scientists about what they’re doing.

I’m pretty committed to the idea that there is some common core to what scientists across disciplines and across cultures are doing to build knowledge. I think the jury’s still out on what precisely the right thing to say about that common core of the scientific method is. But, I think there’s something useful in being able to step back and examine that question, rather than saying, “Science is whatever the hell we do in my lab. And as long as I keep doing all my future knowledge-building on the same pattern, nothing could go wrong.”

Dr. Rubidium: I think that for me, I’ll echo Isis’s comments: science is an endeavor done by people. And people are jerks — No! With people, then, if you have this endeavor, this job, whatever you want to call it — some people would call it a calling — once people are involved, I think it’s essential that we talk about philosophy, sociology, the behavior of people. They are doing the work. It doesn’t make sense to me, then — and I’m an analytical chemist and I have zero background in all of the social stuff — it doesn’t make sense to me that you would have this thing done by people and then actually say with a straight face, “But let’s not talk about people.” That part just doesn’t compute. So I think these conversations definitely need to continue, and I hope that we can talk more about the people behind the endeavor and more about the things attached to their thoughts and behaviors.

* * * * *

Part 1 of the transcript.

Part 2 of the transcript.

Archived video of this Pub-Style Science episode.

Storify’d version of the simultaneous Twitter conversation.

You should also check out Dr. Isis’s post on why the conversations that happen in Pub-Style Science are valuable to scientists-in-training.

Pub-Style Science: exclusion, inclusion, and methodological disputes.

This is the second part of my transcript of the Pub-Style Science discussion about how (if at all) philosophy can (or should) inform scientific knowledge-building, wherein we discuss methodological disputes, who gets included or excluded in scientific knowledge-building, and ways the exclusion or inclusion might matter. Also, we talk about power gradients and make the scary suggestion that “the scientific method” might be a lie…

Michael Tomasson: Rubidium, you got me started on this. I made a comment on Twitter about our aspirations to build objective knowledge and that that was what science was about, and whether there’s sexism or racism or whatever other -isms around is peripheral to the holy of holies, which is the finding of objective truth. And you made … a comment.

Dr. Rubidium: I think I told you that was cute.

Michael Tomasson: Let me leverage it this way: One reason I think philosophy is important is the basics of structure, of hypothesis-driven research. The other thing I’m kind of intrigued by is that part of Twitter culture and what we’re doing with Pub-Style Science is to throw the doors open to people from different cultures and different backgrounds and really say, hey, we want to have science that’s not just a white bread monoculture, but have it be a little more open. But does that mean that everyone can bring their own way of doing science? It sounds like Andrew might say, well, there’s a lot of different ways, and maybe everyone who shows up can bring their own. Maybe one person wants a hypothesis, another doesn’t. Does everybody get to do their own thing, or do we need to educate people in the one way to do science?

As I mentioned on my blog, I had never known that there was a feminist way of doing science.

Janet Stemwedel: There’s actually more than one.

Dr. Isis: We’re not all the same.

Janet Stemwedel: I think even the claim that there’s a single, easily described scientific method is kind of a tricky one. One of the things I’m interested in — one of the things that sucked me over from building knowledge in chemistry to trying to build knowledge in philosophy — is, if you look at scientific practice, scientists who are nominally studying the same thing, the same phenomena, but who’re doing it in different disciplines (say, the chemical physicists and the physical chemists) can be looking at the same thing, but they’re using very different experimental tools and conceptual tools and methodological tools to try to describe what’s going on there. There’s ways in which, when you cross a disciplinary boundary — and sometimes, when you leave your research group and go to another research group in the same department — what you see on the ground as the method you’re using to build knowledge shifts.

In some ways, I’m inclined to say it’s an empirical question whether there’s a single unified scientific method, or whether we’ve got something more like a family resemblance kind of thing going on. There’s enough overlap in the tools that we’re going to call them all science, but whether we can give necessary and sufficient conditions that describe the whole thing, that’s still up in the air.

Andrew Brandel: I just want to add to that point, if I can. I think that one of the major topics in social sciences of science and in the philosophy of science recently has been the point that science itself, as it’s been practiced, has a history that is also built on certain kinds of power structures. So it’s not even enough to say, let’s bring lots of different kinds of people to the table, but we actually have to uncover the ways in which certain power structures have been built into the very way that we think about science or the way that the disciplines are arranged.

(23:10)
Michael Tomasson: You’ve got to expand on that. What do you mean? There’s only one good — there’s good science and there’s bad science. I don’t understand.

Janet Stemwedel: So wait, everyone who does science like you do is doing good science, and everyone who uses different approaches, that’s bad?

Michael Tomasson: Yes, exactly.

Janet Stemwedel: There’s no style choices in there at all?

Michael Tomasson: That’s what I’m throwing out there. I’m trying to explore that. I’m going to take poor Casey over here, we’re going to stamp him, turn him into a white guy in a tie and he’s going to do science the way God intended it.

Dr. Isis: This is actually a good point, though. I had a conversation with a friend recently about “Cosmos.” As they look back on the show, at all the historical scientists, who, historically, has done science? Up until very recently, it has been people who were sufficiently wealthy to support the lifestyle to which they would like to become accustomed, and it’s very easy to sit and think and philosophize about how we do science when it’s not your primary livelihood. It was sort of gentleman scientists who were of the independently wealthy variety who were interested in science and were making these observations, and now that’s very much changed.

It was really interesting to me when you suggested this as a topic because recently I’ve become very pragmatic about doing science. I think I’m taking the “Friday” approach to science — you know, the movie? Danielle Lee wants to remake “Friday” as a science movie. Right now, messing with my money is like messing with my emotions. I’m about writing things in a way to get them funded and writing things in a way that gets them published, and it’s cute to think that we might change the game or make it better, but there’s also a pragmatic side to it. It’s a human endeavor, and doing things in a certain way gets certain responses from your colleagues. The thing that I see, especially watching young people on Twitter, is they try to change the game before they understand the game, and then they get smacked on the nose, and then they write it off as “science is broken”. Well, you don’t understand the game yet.

Janet Stemwedel: Although it’s complicated, I’d say. It is a human endeavor. Forgetting it’s a human endeavor is a road to nothing but pain. And you’ve got the knowledge-building thing going on, and that’s certainly at the center of science, but you’ve also got the getting credit for the awesome things you’ve done and getting paid so you can stay in the pool and keep building knowledge, because we haven’t got this utopian science island where anyone who wants to build knowledge can and all their needs are taken care of. And, you’ve got power gradients. So, there may well be principled arguments from the point of view of what’s going to incentivize practices that will result in better knowledge and less cheating and things like that, to change the game. I’d argue that’s one of the things that philosophy of science can contribute — I’ve tried to contribute that as part of my day job. But the first step is, you’ve got to start talking about the knowledge-building as an activity that’s conducted by humans rather than you put more data into the scientific method box, you turn the crank, and out comes the knowledge.

Michael Tomasson: This is horrifying. I guess what I’m concerned about is I’d hoped you’d teach the scientific method as some sort of central methodology from lab to lab. Are you saying, from the student’s point of view, whatever lab you’re in, you’ve got to figure out whatever the boss wants, and that’s what science is? Is there no skeleton key or structure that we can take from lab to lab?

Dr. Rubidium: Isn’t that what you’re doing? You’re going to instruct your people to do science the way you think it should be done? That pretty much sounds like what you just said.

Dr. Isis: That’s the point of being an apprentice, right?

Michael Tomasson: I had some fantasy that there was some universal currency or universal toolset that could be taken from one lab to another. Are you saying that I’m just teaching my people how to do Tomasson science, and they’re going to go over to Rubidium and be like, forget all that, and do things totally differently?

Dr. Rubidium: That might be the case.

Janet Stemwedel: Let’s put out there that a unified scientific method that’s accepted across scientific disciplines, and from lab to lab and all that, is an ideal. We have this notion that part of why we’re engaged in science, trying to build knowledge of the world, is that there is a world that we share. We’re trying to build objective knowledge, and why that matters is because we take it that there is a reality out there that goes deeper than how, subjectively, things seem to us.

(30:00)
Michael Tomasson: Yes!

Janet Stemwedel: So, we’re looking for a way to share that world, and the pictures of the method involved in doing that, the logical connections involved in doing that, that we got from the logical empiricists and Popper and that crowd — if you like, they’re giving sort of the idealized model of how we could do that. It’s analogous to the story they tell you about orbitals in intro chem. You know what happens, if you keep on going with chem, is they mess up that model. They say, it’s not that simple, it’s more complicated.

And that’s what philosophers of science do, is we mess up that model. We say, it can’t possibly be that simple, because real human beings couldn’t drive that and make it work as well as it does. So there must be something more complicated going on; let’s figure out what it is. My impression, looking at the practice through the lens of philosophy of science, is that you find a lot of diversity in the details of the methods, you find a reasonable amount of diversity in terms of what’s the right attitude to have towards our theories — if we’ve got a lot of evidence in favor of our theories, are we allowed to believe our theories are probably right about the world, or just that they’re better at churning out predictions than the other theories we’ve considered so far? We have places where you can start to look at how methodologies embraced by Western primatologists compared to Japanese primatologists — where they differ on what’s the right thing to do to get the knowledge — you could say, it’s not the case that one side is right and one side is wrong, we’ve located a trade-off here, where one camp is deciding one of the things you could get is more important and you can sacrifice the other, and the other camp is going the other direction on that.

It’s not to say we should just give up on this project of science and building objective, reliable knowledge about the world. But how we do that is not really anything like the flowchart of the scientific method that you find in the junior high science textbook. That’s like staying with the intro chem picture of the orbitals and saying, that’s all I need to know.

(32:20)
Dr. Isis: I sort of was having a little frightened moment where, as I was listening to you talk, Michael, I was having this “I don’t think that word means what you think it means” reaction. And I realize that you’re a physician and not a real scientist, but “the scientific method” is actually a narrow construct of generating a hypothesis, generating methods to test the hypothesis, generating results, and then either rejecting or failing to reject your hypothesis. This idea of going to people’s labs and learning to do science is completely tangential to the scientific method. I think we can all agree that, for most of us at our core, the scientific method is different from the culture. Now, whether I go to Tomasson’s lab and learn to label my reagents with the wrong labels because they’re a trifling, scandalous bunch who will mess up your experiment, and then I go to Rubidium’s lab and we all go marathon training at 3 o’clock in the afternoon, that’s the culture of science, that’s not the scientific method.

(34:05)
Janet Stemwedel: Maybe what we mean by the scientific method is either more nebulous or more complicated, and that’s where the disagreements come from.

If I can turn back to the example of the Japanese primatologists and the primatologists from the U.S. [1]… You’re trying to study monkeys. You want to see how they’re behaving, you want to tell some sort of story, you probably are driven by some sort of hypotheses. As it turns out, the Western primatologists are starting with the hypothesis that basically you start at the level of the individual monkey, that this is a biological machine, and you figure out how that works, and how they interact with each other if you put them in a group. The Japanese primatologists are starting out with the assumption that you look at the level of social groups to understand what’s going on.

(35:20)
And there’s this huge methodological disagreement that they had when they started actually paying attention to each other: is it OK to leave food in the clearing to draw the monkeys to where you can see them more closely?

The Western primatologists said, hell no, that interferes with the system you’re trying to study. You want to know what the monkeys would be like in nature, without you there. So, leaving food out there for them, “provisioning” them, is a bad call.

The Japanese primatologists (who are, by the way, studying monkeys that live in the islands that are part of Japan, monkeys that are well aware of the existence of humans because they’re bumping up against them all the time) say, you know what, if we get them closer to where we are, if we draw them into the clearings, we can see more subtle behaviors, we can actually get more information.

So here, there’s a methodological trade-off. Is it important to you to get more detailed observations, or to get observations that are untainted by human interference? ‘Cause you can’t get both. They’re both using the scientific method, but they’re making different choices about the kind of knowledge they’re building with that scientific method. Yet, on the surface of things, these primatologists were sort of looking at each other like, “Those guys don’t know how to do science! What the hell?”

(36:40)
Andrew Brandel: The other thing I wanted to mention to this point and, I think, to Tomasson’s question also, is that there are lots of anthropologists embedded with laboratory scientists all over the world, doing research into specifically what kinds of differences, both in the ways that they’re organized and in the ways that arguments get levied, what counts as “true” or “false,” what counts as a hypothesis, how that gets determined within these different contexts. There are broad fields of social sciences doing exactly this.

Dr. Rubidium: I think this gets to the issue: Tomasson, what are you calling the scientific method? Versus, can you really at some point separate out the idea that science is a thing — like Janet was saying, it’s a machine, you put the stuff in, give it a spin, and get the stuff out — can you really separate something called “the scientific method” from the people who do it?

I’ve taught general chemistry, and one of the first things we do is to define science, which is always exciting. It’s like trying to define art.

Michael Tomasson: So what do you come up with? What is science?

Dr. Rubidium: It’s a body of knowledge and a process — it’s two different things, when people say science. We always tell students, it’s a body of knowledge but it’s also a process, a thing you can do. I’m not saying it’s [the only] good answer, but it’s the answer we give students in class.

Then, of course, the idea is, what’s the scientific method? And everyone’s got some sort of a figure. In the gen chem book, in chapter 1, it’s always going to be in there. And it makes it seem like we’ve all agreed at some point, maybe taken a vote, I don’t know, that this is what we do.

Janet Stemwedel: And you get the laminated card with the steps on it when you get your lab coat.

Dr. Rubidium: And there’s the flowchart, usually laid out like a circle.

Michael Tomasson: Exactly!

Dr. Rubidium: It’s awesome! But that’s what we tell people. It’s kind of like the lie we tell them about orbitals, like Janet was saying, in the beginning of gen chem. But then, this is how sausages are really made. And yes, we have this method, and these are the steps we say are involved with it, but are we talking about that, which is what you learn in high school or junior high or science camp or whatever, or are you actually talking about how you run your research group? Which one are you talking about?

(39:30)
Janet Stemwedel: It can get more complicated than that. There’s also this question of: is the scientific method — whatever the heck we do to build reliable knowledge about the world using science — is that the kind of thing you could do solo, or is it necessarily a process that involves interaction with other people? So, maybe we don’t need to be up at night worrying about whether individual scientists fail to instantiate this idealized scientific method as long as the whole community collectively shakes out as instantiating it.

Michael Tomasson: Hmmm.

Casey: Isn’t this part of what a lot of scientists are doing, that it shakes out some of the human problems that come with it? It’s a messy process and you have a globe full of people performing experiments, doing research. That should, to some extent, push out some noise. We have made advances. Science works to some degree.

Janet Stemwedel: It mostly keeps the plane up in the air when it’s supposed to be in the air, and the water from being poisoned when it’s not supposed to be poisoned. The science does a pretty good job building the knowledge. I can’t always explain why it’s so good at that, but I believe that it does. And I think you’re right, there’s something — certainly in peer review, there’s this assumption that why we play with others here is that they help us catch the thing we’re missing, they help us to make sure the experiments really are reproducible, to make sure that we’re not smuggling in unconscious assumptions, whatever. I would argue, following on something Tomasson wrote in his blog post, that this is a good epistemic reason for some of the stuff that scientists rail on about on Twitter, about how we should try to get rid of sexism and racism and ableism and other kinds of -isms in the practice of science. It’s not just because scientists shouldn’t be jerks to people who could be helping them build the knowledge. It’s that, if you’ve got a more diverse community of people building the knowledge, you up the chances that you’re going to locate the unconscious biases that are sneaking into the story we tell about what the world is like.

When the transcript continues, we do some more musing about methodology, the frailties of individual humans when it comes to being objective, and epistemic violence.

_______

[1] This discussion based on my reading of Pamela J. Asquith, “Japanese science and western hegemonies: primatology and the limits set to questions.” Naked science: Anthropological inquiry into boundaries, power, and knowledge (1996): 239-258.

* * * * *

Part 1 of the transcript.

Archived video of this Pub-Style Science episode.

Storify’d version of the simultaneous Twitter conversation.

Pub-Style Science: philosophy, hypotheses, and the scientific method.

Last week I was honored to participate in a Pub-Style Science discussion about how (if at all) philosophy can (or should) inform scientific knowledge-building. Some technical glitches notwithstanding, it was a rollicking good conversation — so much so that I have put together a transcript for those who don’t want to review the archived video.

The full transcript is long (approaching 8000 words even excising the non-substantive smack-talk), so I’ll be presenting it here in a few chunks that I’ve split more or less at points where the topic of the discussion shifted.

In places, I’ve cleaned up the grammar a bit, attempting to faithfully capture the gist of what each speaker was saying. As well, because my mom reads this blog, I’ve cleaned up some of the more colorful language. If you prefer the PG-13 version, the archived video will give you what you need.

Simultaneously with our video-linked discussion, there was a conversation on Twitter under the #pubscience hashtag. You can see that conversation Storify’d here.

____
(05:40)
Michael Tomasson: The reason I was interested in this is because I have one very naïve view and one esoteric view. My naïve view is that there is something useful about philosophy in terms of the scientific method, and when people are in my lab, I try to beat into their heads (I mean, educate them) that there’s a certain structure to how we do science, and this is a life-raft and a tool that is essential. And I guess that’s the question, whether there is some sort of essential tool kit. We talk about the scientific method. Is that a universal? I started thinking about this talking with my brother-in-law, who’s an amateur philosopher, about different theories of epistemology, and he was shocked that I would think that science had a lock on creating knowledge. But I think we do, through the scientific method.

Janet, take us to the next level. To me, from where I am, the scientific method is the key to the city of knowledge. No?

Janet Stemwedel: Well, that’s certainly a common view, and that’s a view that, in the philosophy of science class I regularly teach, we start with — that there’s something special about whatever it is scientists are doing, something special about the way they gather very careful observations of the world, and hook them together in the right logical way, and draw inferences and find patterns, that’s a reliable way to build knowledge. But at least for most of the 20th Century, what people who looked closely at this assumption in philosophy found was that it had to be more complicated than that. So you end up with folks like Sir Karl Popper pointing out that there is a problem of induction — that deductive logic will get you absolutely guaranteed conclusions if your premises are true, but inductive inference could go wrong; the future might not be like the past we’ve observed so far.

(08:00)
Michael Tomasson: I’ve got to keep the glossary attached. Deductive and inductive?

Janet Stemwedel: Sure. A deductive argument might run something like this:

All men are mortal. Socrates is a man. Therefore, Socrates is mortal.

If it’s true that all men are mortal, and that Socrates is a man, then you are guaranteed that Socrates is also going to be mortal. The form of the argument is enough to say, if the assumptions are true, then the conclusion has to be true, and you can take that to the bank.

Inductive inference is actually most of what we seem to use in drawing inferences from observations and experiments. So, let’s say you observe a whole lot of frogs, and you observe that, after some amount of time, each of the frogs that you’ve had in your possession kicks off. After a certain number of frogs have done this, you might draw the inference that all frogs are mortal. And, it seems like a pretty good inference. But, it’s possible that there are frogs not yet observed that aren’t mortal.

Inductive inference is something we use all the time. But Karl Popper said, guess what, it’s not guaranteed in the same way deductive logic is. And this is why he thought the power of the scientific method is that scientists are actually only ever concerned to find evidence against their hypotheses. The evidence against your hypotheses lets you conclude, via deductive inference, that those hypotheses are wrong, and then you cross them off. Any hypothesis where you seem to get observational support, Popper says, don’t get too excited! Keep testing it, because maybe the next test is going to be the one where you find evidence against it, and you don’t want to get screwed over by induction. Inductive reasoning is just a little too shaky to put your faith in.
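
[Transcriber’s aside: a compact way to put the asymmetry described above, in my notation rather than anything said on camera. Write H for a hypothesis and O for an observation it predicts. The refuting inference is the deductively valid modus tollens; the “confirming” direction is the invalid move of affirming the consequent:

\[
(H \rightarrow O) \land \lnot O \;\vdash\; \lnot H
\qquad \text{(modus tollens: a failed prediction refutes } H\text{)}
\]
\[
(H \rightarrow O) \land O \;\nvdash\; H
\qquad \text{(affirming the consequent: a successful prediction proves nothing)}
\]

This is why Popper puts all the logical weight on falsification: the refuting direction is the only inference that comes with a deductive guarantee.]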

(10:05)
Michael Tomasson: That’s my understanding of Karl Popper. I learned about the core of falsifying hypotheses, and that’s sort of what I teach as truth. But I’ve heard some anti-Karl Popper folks, which I don’t really quite understand.

Let me ask Isis, because I know Isis has very strong opinions about hypotheses. You had a blog post a long time ago about hypotheses. Am I putting words in your mouth to say you think hypotheses and hypothesis testing are important?

(10:40)
Dr. Isis: No, I did. That’s sort of become the running joke here: my only contribution to lab meeting is to say, wait wait wait, what was your hypothesis? I think that having hypotheses is critical, and I’m a believer, as Dr. Tomasson knows, that a hypothesis has four parts. I think that’s fundamental, framing the question, because I think that the question frames how you do your analysis. The design and the analysis fall out of the hypothesis, so I don’t understand doing science without a hypothesis.

Michael Tomasson: Let me throw it over to Andrew … You’re coming from anthropology, you’re looking at science from 30,000 feet, where maybe in anthropology it’s tough to do hypothesis-testing. So, what do you say to this claim that the hypothesis is everything?

Andrew Brandel: I would give two basic responses. One: in the social sciences, we definitely have a different relationship to hypotheses, to the scientific method, perhaps. I don’t want to represent the entire world of social and human sciences.

Michael Tomasson: Too bad!

(12:40)
Andrew Brandel: So, there’s definitely a different relationship to hypothesis-testing — we don’t have a controlled setting. This is what a lot of famous anthropologists would talk about. The other area where we might interject is, science is (in the view of some of us) one among many different ways of viewing and organizing our knowledge about the world, and not necessarily better than some other view.

Michael Tomasson: No, it’s better! Come on!

Andrew Brandel: Well, we can debate about this. This is a debate that’s been going on for a long time, but basically my position would be that we have something to learn from all the different sciences that exist in the world, and that there are lots of different logics which condition the possibility of experiencing different kinds of things. When we ask, what is the hypothesis, when Dr. Isis is saying that is crucial for the research, we would agree with you, that that is also conditioning the responses you get. That’s both what you want and part of the problem. It’s part of a culture that operates like an ideology — too close to you to come at from within it.

Janet Stemwedel: One of the things that philosophers of science started twigging to, since the late 20th Century, is that science is not working with this scientific method that’s essentially a machine that you toss observations into and you turn the crank and on the other end out comes pristine knowledge. Science is an activity done by human beings, and human beings who do science have as many biases and blindspots as human beings who don’t do science. So, recognizing some of the challenges that are built into the kind of critter we are, trying to build reliable knowledge about the world, becomes crucial. And even in places where the scientist will say, look, I’m not doing (in this particular field) hypothesis-driven science, it doesn’t mean that there aren’t some hypotheses sort of behind the curtain directing the attention of the people trying to build knowledge. It just means that they haven’t bumped into enough people trying to build knowledge in the same area that have different assumptions to notice that they’re making assumptions in the first place.

(15:20)
Dr. Isis: I think that’s a crucial distinction. Is the science that you’re doing really not hypothesis-driven, or are you too lazy to write down a hypothesis?

To give an example, I’m writing a paper with this clinical fellow, and she’s great. She brought a draft, which is amazing, because I’m all about the paper right now. And in there, she wrote, we sought to observe this because to the best of our knowledge this has never been reported in the literature.

First of all, the phrase “to the best of our knowledge,” any time you write that you should just punch yourself in the throat, because if it wasn’t to the best of your knowledge, you wouldn’t be writing it. I mean, you wouldn’t be lying: “this has never been reported in the literature.” The other thing is, “this has never been reported in the literature” as the motivation to do it is a stupid reason. I told her, the frequency of the times of the week that I wear black underwear has never been reported in the literature. That doesn’t mean it should be.

Janet Stemwedel: Although, if it correlates with your experiment working or not — I have never met more superstitious people than experimentalists. If the experiment only works on the days you wear black underwear, you’re wearing black underwear until the paper is submitted, that’s how it’s going to be. Because the world is complicated!

Dr. Isis: The point is that it’s not that she didn’t have a hypothesis. It’s that pulling it out of her — it was like a tapeworm. It was a struggle. That to me is the question. Are we really doing science without a hypothesis, or are we making the story about ourselves? About what we know about in the literature, what the gap in the literature is, and the motivation to do the experiment, or are we writing, “we wanted to do this to see if this was the thing”? — in which case, I don’t find it very interesting.

Michael Tomasson: That’s an example of something that I try to teach, when you’re writing papers: we did this, we wanted to do that, we thought about this. It’s not really about you.

But friend of the show Cedar Riener tweets in, aren’t the biggest science projects those least likely to have clearly hypothesis-driven experiments, like HGP, BRAIN, etc.? I think the BRAIN example is a good one. We talk about how you need hypotheses to do science, and yet here’s this very high profile thing which, as far as I can tell, doesn’t really have any hypotheses driving it.

When the transcript continues: Issues of inclusion, methodological disputes, and the possibility that “the scientific method” is actually a lie.

What is philosophy of science (and should scientists care)?

Just about 20 years ago, I abandoned a career as a physical chemist to become a philosopher of science. For most of those 20 years, people (especially scientists) have been asking me what the heck the philosophy of science is, and whether scientists have any need of it.

There are lots of things philosophers of science study, but one central set of concerns is what is distinctive about science — how science differs from other human activities, what grounds its body of knowledge, what features are essential to scientific engagement with phenomena, etc. This means philosophers of science have spent a good bit of time trying to find the line between science and non-science, trying to figure out the logic with which scientific claims are grounded, working to understand the relation between theory and empirical data, and working out the common thread that unites many disparate scientific fields — assuming such a common thread exists. *

If you like, you can think of this set of philosophical projects as trying to give an account of what science is trying to do — how science attempts to construct a picture of the world that is accountable to the world in a particular way, how that picture of the world develops and changes in response to further empirical information (among other factors), and what kind of explanations can be given for the success of scientific accounts (insofar as they have been successful). Frequently, the philosopher is concerned with “Science” rather than a particular field of science. As well, some philosophers are more concerned with an idealized picture of science as an optimally rational knowledge building activity — something they will emphasize is quite different from science as actually practiced.**

Practicing scientists pretty much want to know how to attack questions in their particular field of science. If your goal is to understand the digestive system of some exotic bug, you may have no use at all for a subtle account of scientific theory change, let alone for a firm stand on the question of scientific anti-realism. You have much more use for information about how to catch the bug, how to get to its digestive system, what sorts of things you could observe, measure, or manipulate that could give you useful information about its digestive system, how to collect good data, how to tell when you’ve collected enough data to draw useful conclusions, appropriate methods for processing the data and drawing conclusions, and so forth.

A philosophy of science course doesn’t hand the entomologist any of those practical tools for studying the scientific problems around the bug’s digestive system. But philosophy of science is aimed at answering different questions than the working scientist is trying to answer. The goal of philosophy of science is not to answer scientific questions, but to answer questions about science.***

Does a working scientist need to have learned philosophy of science in order to get the scientific job done? Probably not. Neither does a scientist need to have studied Shakespeare or history to be a good scientist — but these still might be worthwhile endeavors for the scientist as a person. Every now and then it’s nice to be able to think about something besides your day job. (Recreational thinking can be fun!)

Now, there are some folks who will argue that studying philosophy of science could be detrimental to the practicing scientist. Reading Kuhn’s Structure of Scientific Revolutions with its claim that shifts in scientific paradigm have an inescapable subjective component, or even Popper’s view of the scientific method that’s meant to get around the problem of induction, might blow the young scientist’s mind and convince him that the goal of objective knowledge is unattainable. This would probably undermine his efforts to build objective knowledge in the lab.

(However, I’d argue that reading Helen Longino’s account of how we build objective knowledge — another philosophical account — might answer some of the worries raised by Popper, Kuhn, and that crowd, making the young scientist’s knowledge-building endeavors seem more promising.)

My graduate advisor in chemistry had a little story he told that was supposed to illustrate the dangers for scientists of falling in with the philosophers and historians and sociologists of science: A centipede is doing a beautiful and complicated dance. An ant walks up to the centipede and says, “That dance is lovely! How do you coordinate all your feet so perfectly to do it?” The centipede pauses to think about this and eventually replies, “I don’t know.” Then the centipede watches his feet and tries to do the dance again — and can’t!

The centipede could do the dance without knowing precisely how each foot was supposed to move relative to the others. A scientist can do science while taking the methodology of her field for granted. But having to give a philosophical account of or a justification for that methodology deeper than “this is what we do and it works pretty well for the problems we want to solve” may render that methodology strange looking and hard to keep using.

Then again, I’m told what Einstein did for physics had as much to do with proposing a (philosophical) reorganization of the theoretical territory as it did with new empirical data. So perhaps the odd scientist can put some philosophical training to good scientific use.

_____
This post is an updated version of an ancestor post on my other blog, and was prompted by the Pub-Style Science discussion of epistemology scheduled for Tuesday, April 8, 2014 (starting 9 PM EDT/6 PM PDT). Watch the hashtag #pubscience for more details.

_____
*I take it that one can identify “science” by enumerating the fields included in the category (biology, chemistry, physics, astronomy, geology, …) and then pose the question of what commonalities (if any) these examples of scientific fields have with no risk of circularity. Especially since we’re leaving it to the scientists to tell us what the sciences are. It’s quite possible that the sciences won’t end up having a common core — that there won’t be any there there.

**For the record, I find science-as-actually-practiced — in particular scientific fields, rather than generalized as “Science” — more philosophically interesting than the idealized stuff. But, as one of my labmates in graduate school used to put it, “One person’s ‘whoop-de-doo’ is another person’s life’s work.”

***Really, to answer philosophical questions about science, since historians and sociologists and anthropologists also try to answer questions about science.

Brief thoughts on uncertainty.

For context, these thoughts follow upon a very good session at ScienceOnline Together 2014 on “How to communicate uncertainty with the brevity that online communication requires.” Two of the participants in the session used Storify to collect tweets of the discussion (here and here).

About a month later, this does less to answer the question of the session title than to give you a peek into my thoughts about science and uncertainty. This may be what you’ve come to expect of me.

Humans are uncomfortable with uncertainty, at least in those moments when we notice it and where we have to make decisions that have more than entertainment value riding on them. We’d rather have certainty, since that makes it easier to enact plans that won’t be thwarted.

Science is (probably) a response to our desire for more certainty. Finding natural explanations for natural phenomena, stable patterns in our experience, gives us a handle on our world and what we can expect from it that’s less capricious than “the gods are in a mood today.”

But the scientific method isn’t magic. It’s a tool that cranks out explanations of what’s happened, predictions of what’s coming up, based on observations made by humans with our fallible human senses.

The fallibility of those human senses (plus things like the trickiness of being certain you’re awake and not dreaming) was (probably) what drove philosopher René Descartes in his famous Meditations, the work that yielded the conclusion “I think, therefore I am” and that featured not one but two proofs of the existence of a God who is not a deceiver. Descartes was not pursuing a theological project here. Rather, he was trying to explain how empirical science — science relying on all kinds of observations made by fallible humans with their fallible senses — could possibly build reliable knowledge. Trying to put empirical science on firm foundations, he engaged in his “method of doubt” to locate some solid place to stand, some thing that could not be doubted. That something was “I think, therefore I am” — in other words, if I’m here doubting that my experience is reliable, that I’m awake instead of dreaming, that I’m a human being rather than a brain in a vat, I can at least be sure that there exists a thinking thing that’s doing the doubting.

From this fact that could not be doubted, Descartes tried to climb back out of that pit of doubt and to work out the extent to which we could trust our senses (and the ways in which our senses were likely to mislead us). This involved those two proofs of the existence of a God who is not a deceiver, plus a whole complicated story of minds and brains communicating with each other (via the wiggling of our pineal glands) — which is to say, it was not entirely persuasive. Still, it was all in the service of getting us more certainty from our empirical science.

Certainty and its limits are at the heart of another piece of philosophy, "the problem of induction," this one most closely associated with David Hume. The problem here rests on our basic inability to be certain that what we have so far observed of our world will be a reliable guide to what we haven't observed yet, that the future will be like the past. Observing a hundred, or a thousand, or a million ravens that are black is not enough for us to conclude with absolute certainty that the ravens we haven't yet observed must also be black. Just because the sun rose today, and yesterday, and every day through recorded human history to date does not guarantee that it will rise tomorrow.

But while Hume pointed out the limits of what we could conclude with certainty from our observations at any given moment — limits which impelled Karl Popper to assert that the scientific attitude was one of trying to prove hypotheses false rather than seeking support for them — he also acknowledged our almost irresistible inclination to believe that the future will be like the past, that the patterns of our experience so far will be repeated in the parts of the world still waiting for us to experience them. Logic can’t guarantee these patterns will persist, but our expectations (especially in cases where we have oodles of very consistent observations) feel like certainty.
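(For readers who like the point in symbols, here is a schematic rendering of my own, not one Hume gives us: from any finite run of observations R(a1) & B(a1), R(a2) & B(a2), … , R(an) & B(an), which says that each of the n ravens observed so far is black, it does not deductively follow that ∀x (R(x) → B(x)), that is, that all ravens are black. However large n gets, the premises remain consistent with raven number n+1 being white. The logical gap never shrinks; it just comes to feel smaller.)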

Scientists are trained to recognize the limits of their certainty when they draw conclusions, offer explanations, make predictions. They are officially on the hook to acknowledge their knowledge claims as tentative, likely to be updated in the light of further information.

This care in acknowledging the limits of what careful observation and logical inference guarantee us can make it appear to people who don’t obsess over uncertainties in everyday life that scientists don’t know what’s going on. But the existence of some amount of uncertainty does not mean we have no idea what’s going on, no clue what’s likely to happen next.

What non-scientists who dismiss scientific knowledge claims on the basis of acknowledged uncertainty forget is that making decisions in the face of uncertainty is the human condition. We do it all the time. If we didn’t, we’d make no decisions at all (or else we’d be living a sustained lie about how clearly we see into our future).

Strangely, though, we seem to have a hard time reconciling our everyday pragmatism about everyday uncertainty with our suspicion about the uncertainties scientists flag in the knowledge they share with us. Maybe we’re making the jump from viewing scientific knowledge as reliable to demanding that it be perfect. Or maybe we’re just not very reflective about how easily we navigate uncertainty in our everyday decision-making.

I see this firsthand when my “Ethics in Science” students grapple with ethics case studies. At first they are freaked out by the missing details, the less-than-perfect information about what will happen if the protagonist does X or if she does Y instead. How can we make good decisions about what the protagonist should do if we can’t be certain about those potential outcomes?

My answer to them: The same way we do in real life, whose future we can’t see with any more certainty.

When there’s more riding on our decisions, we’re more likely to notice the gaps in the information that informs those decisions, the uncertainty inherent in the outcomes that will follow on what we decide. But we never have perfect information, and neither do scientists. That doesn’t mean our decision-making is hopeless, just that we need to get comfortable making do with the certainty we have.

Want good reasons to be a Creationist? You won’t find them here.

I don’t know why it surprises me when technology reporters turn out to be not only anti-science, but also deeply confused about what’s actually going on in scientific knowledge-building. Today’s reminder comes in Virginia Heffernan’s column, “Why I’m a creationist”.

There seems not to be much in the way of a coherent argument in support of Creationism in the column. As near as I can tell, Heffernan is down on science because:

  1. Science sometimes uses chains of inference that are long and complicated.
  2. Science has a hard time coming up with decisive answers to complicated questions (at least at a satisfyingly prompt rate).
  3. Science maybe provides some good reasons to worry about the environment, and she’d prefer not to worry about the environment.
  4. A scientist was mean to a religious person at some point. Some scientists just don’t seem like nice people.
  5. Science trades in hypotheses, and hypotheses aren’t facts — they could be false!
  6. Darwin based his whole theory on a tautology, “whatever survives survives”! [Nope!]
  7. Evolutionary psychology first claimed X, then claimed Y (which seems to directly contradict X), and neither of those claims seems to have especially rigorous empirical backing … so all of evolutionary theory must be wrong!
  8. Evolutionary theory just isn’t as compelling (at least to Heffernan) as a theory of human origins should be.

On item #5 there, if this is an issue for one's acceptance of evolutionary theory, it's also an issue for one's acceptance of knowledge claims from other areas of science.

This is something we can lay at the feet of the problem of induction. But, we can also notice that scientists deal quite sensibly with the problem of induction lurking in the background. Philosopher of science Heather Douglas explains this nicely in her book Science, Policy, and the Value-Free Ideal, where she describes what it means for scientists to accept a hypothesis.

To say P has been accepted is to say P belongs to the stock of established scientific knowledge, which means it satisfies criteria for standards of appraisal from within science (including what kind of empirical evidence there is for P, whether there is empirical evidence that supports not-P, etc.). Accepting P is saying that there is no reason to expect that P will be rejected after more research, and that only general inductive doubts render P uncertain.

That’s as certain as knowledge can get, at least without a divine guarantee. Needless to say, such a “guarantee” would present epistemic problems of its own.

As for Heffernan’s other reasons for preferring Creationism to science, I’m not sure I have much to say that I haven’t already said elsewhere about why they’re silly, but I invite you to mount your own critiques in the comments.

The quest for underlying order: inside the frauds of Diederik Stapel (part 1)

Yudhijit Bhattacharjee has an excellent article in the most recent New York Times Magazine (published April 26, 2013) on disgraced Dutch social psychologist Diederik Stapel. Why is Stapel disgraced? At the last count at Retraction Watch, 53 of his scientific publications have been retracted, owing to the fact that the results reported in those publications were made up. [Scroll in that Retraction Watch post for the update — apparently one of the Stapel retractions was double-counted, which brought the count down from 54. This is the risk when you publish so much made-up stuff.]

There’s not much to say about the badness of a scientist making results up. Science is supposed to be an activity in which people build a body of reliable knowledge about the world, grounding that knowledge in actual empirical observations of that world. Substituting the story you want to tell for those actual empirical observations undercuts that goal.

But Bhattacharjee’s article is fascinating because it goes some way to helping illuminate why Stapel abandoned the path of scientific discovery and went down the path of scientific fraud instead. It shows us some of the forces and habits that, while seemingly innocuous taken individually, can compound to reinforce scientific behavior that is not helpful to the project of knowledge-building. It reveals forces within scientific communities that make it hard for scientists to pursue suspicions of fraud to get formal determinations of whether their colleagues are actually cheating. And, the article exposes some of the harms Stapel committed beyond publishing lies as scientific findings.

It’s an incredibly rich piece of reporting, one which I recommend you read in its entirety, maybe more than once. Given just how much there is to talk about here, I’ll be taking at least a few posts to highlight bits of the article as nourishing food for thought.

Let’s start with how Stapel describes his early motivation for fabricating results to Bhattacharjee. From the article:

Stapel did not deny that his deceit was driven by ambition. But it was more complicated than that, he told me. He insisted that he loved social psychology but had been frustrated by the messiness of experimental data, which rarely led to clear conclusions. His lifelong obsession with elegance and order, he said, led him to concoct sexy results that journals found attractive. “It was a quest for aesthetics, for beauty — instead of the truth,” he said. He described his behavior as an addiction that drove him to carry out acts of increasingly daring fraud, like a junkie seeking a bigger and better high.

(Bold emphasis added.)

It’s worth noting here that other scientists — plenty of scientists who were never cheaters, in fact — have also pursued science as a quest for beauty, elegance, and order. For many, science is powerful because it is a way to find order in a messy universe, to discover simple natural laws that give rise to such an array of complex phenomena. We’ve discussed this here before, when looking at the tension between Platonist and Aristotelian strategies for getting to objective truths:

Plato’s view was that the stuff of our world consists largely of imperfect material instantiations of immaterial ideal forms -– and that science makes the observations it does of many examples of material stuff to get a handle on those ideal forms.

If you know the allegory of the cave, however, you know that Plato didn’t put much faith in feeble human sense organs as a route to grasping the forms. The very imperfection of those material instantiations that our sense organs apprehend would be bound to mislead us about the forms. Instead, Plato thought we’d need to use the mind to grasp the forms.

This is a crucial juncture where Aristotle parted ways with Plato. Aristotle still thought that there was something like the forms, but he rejected Plato’s full-strength rationalism in favor of an empirical approach to grasping them. If you wanted to get a handle on the form of “horse,” for example, Aristotle thought the thing to do was to examine lots of actual specimens of horse and to identify the essence they all have in common. The Aristotelian approach probably feels more sensible to modern scientists than the Platonist alternative, but note that we’re still talking about arriving at a description of “horse-ness” that transcends the observable features of any particular horse.

Honest scientists simultaneously reach for beautiful order and the truth. They use careful observations of the world to try to discern the actual structures and forces giving rise to what they are observing. They recognize that our observational powers are imperfect, that our measurements are not infinitely precise (and that they are often at least a little inaccurate), but those observations, those measurements, are what we have to work with in discerning the order underlying them.

This is why Ockham’s razor — to prefer simple explanations for phenomena over more complicated ones — is a strategy but not a rule. Scientists go into their knowledge-building endeavor with the hunch that the world has more underlying order than is immediately apparent to us — and that careful empirical study will help us discover that order — but how things actually are provides a constraint on how much elegance there is to be found.

However, as the article in the New York Times Magazine makes clear, Stapel was not alone in expecting the world he was trying to describe in his research to yield elegance:

In his early years of research — when he supposedly collected real experimental data — Stapel wrote papers laying out complicated and messy relationships between multiple variables. He soon realized that journal editors preferred simplicity. “They are actually telling you: ‘Leave out this stuff. Make it simpler,’” Stapel told me. Before long, he was striving to write elegant articles.

The journal editors' preference here connects to a fairly common notion of understanding. Understanding a system is being able to identify the components of that system that make a difference in producing the effects of interest — and, by extension, recognizing which components of the system don't feature prominently in bringing about the behaviors you're studying. Again, the hunch is that there are likely to be simple mechanisms underlying apparently complex behavior. When you really understand the system, you can point out those mechanisms and explain what's going on while leaving all the other extraneous bits in the background.

Pushing to find this kind of underlying simplicity has been a fruitful scientific strategy, but it’s a strategy that can run into trouble if the mechanisms giving rise to the behavior you’re studying are in fact complicated. There’s a phrase attributed to Einstein that captures this tension nicely: as simple as possible … but not simpler.

The journal editors, by expressing to Stapel that they liked simplicity more than messy relationships between multiple variables, were surely not telling Stapel to lie about his findings to create such simplicity. They were likely conveying their view that further study, or more careful analysis of data, might yield elegant relations that were really there but elusive. However, intentionally or not, they did communicate to Stapel that simple relationships fit better with journal editors’ hunches about what the world is like than did messy ones — and that results that seemed to reveal simple relations were thus more likely to pass through peer review without raising serious objections.

So, Stapel was aware that the gatekeepers of the literature in his field preferred elegant results. He also seemed to have felt the pressure that early-career academic scientists often feel to make all of his research time productive — where the ultimate measure of productivity is a publishable result. Again, from the New York Times Magazine article:

The experiment — and others like it — didn’t give Stapel the desired results, he said. He had the choice of abandoning the work or redoing the experiment. But he had already spent a lot of time on the research and was convinced his hypothesis was valid. “I said — you know what, I am going to create the data set,” he told me.

(Bold emphasis added.)

The sunk time clearly struck Stapel as a problem. Making a careful study of the particular psychological phenomenon he was trying to understand hadn’t yielded good results — which is to say, results that would be recognized by scientific journal editors or peer reviewers as adding to the shared body of knowledge by revealing something about the mechanism at work in the phenomenon. This is not to say that experiments with negative results don’t tell scientists something about how the world is. But what negative results tell us is usually that the available data don’t support the hypothesis, or perhaps that the experimental design wasn’t a great way to obtain data to let us evaluate that hypothesis.

Scientific journals have not generally been very interested in publishing negative results, however, so scientists tend to view them as failures. They may help us to reject appealing hypotheses or to refine experimental strategies, but they don’t usually do much to help advance a scientist’s career. If negative results don’t help you get publications, without which it’s harder to get grants to fund research that could find positive results, then the time and money spent doing all that research has been wasted.

And Stapel felt — maybe because of his hunch that the piece of the world he was trying to describe had to have an underlying order, elegance, simplicity — that his hypothesis was right. The messiness of actual data from the world got in the way of proving it, but it had to be so. And this expectation of elegance and simplicity fit perfectly with the feedback he had heard before from journal editors in his field (feedback that may well have fed Stapel’s own conviction).

A career calculation paired with a strong metaphysical commitment to underlying simplicity seems, then, to have persuaded Diederik Stapel to let his hunch weigh more heavily than the data and then to commit the cardinal sin of fabricating data that could be presented to other scientists as "evidence" to support that hunch.

No one made Diederik Stapel cross that line. But it’s probably worth thinking about the ways that commitments within scientific communities — especially methodological commitments that start to take on the strength of metaphysical commitments — could have made crossing it more tempting.

Building a scientific method around the ideal of objectivity.

While modern science seems committed to the idea that seeking verifiable facts that are accessible to anyone is a good strategy for building a reliable picture of the world as it really is, historically, these two ideas (knowledge that anyone can verify, and knowledge that captures the world as it really is) have not always gone together. Peter Machamer describes a historical moment when these two senses of objectivity were coupled in his article, "The Concept of the Individual and the Idea(l) of Method in Seventeenth-Century Natural Philosophy." [1]

Prior to the emergence of a scientific method that stressed objectivity, Machamer says, most people thought knowledge came from divine inspiration (whether written in holy books or transmitted by religious authorities) or from ancient sources that were only shared with initiates (think alchemy, stone masonry, and healing arts here). Knowledge, in other words, was a scarce resource that not everyone could get his or her hands (or brains) on. To the extent that a person found the world intelligible at all, it was probably based on the story that someone else in a special position of authority was telling.

How did this change? Machamer argues that it changed when people started to think of themselves as individuals. The erosion of feudalism, the reformation and counter-reformation, European voyages to the New World (which included encounters with plants, animals, and people previously unknown in the Old World), and the shift from a geocentric to a heliocentric view of the cosmos all contributed to this shift by calling old knowledge and old sources of authority into question. As the old sources of knowledge became less credible (or at least less monopolistic), the individual came to be seen as a new source of knowledge.

Machamer describes two key aspects of individuality at work. One is what he calls the “Epistemic I.” This is the recognition that an individual can gain knowledge and ideas directly from his or her own interactions with the world, and that these interactions depend on senses and powers of reason that all humans have (or could have, given the opportunity to develop them). This recognition casts knowledge (and the ability to get it) as universal and democratic. The power to build knowledge is not concentrated in the hands (or eyes) of just the elite — this power is our birthright as human beings.

The other side of individuality here is what Machamer calls the "Entrepreneurial I." This is the belief that an individual's insights deserve credit and recognition, perhaps even payment. This recognition casts the individual who has such insights as a leader, or a teacher — definitely, as a special human worth listening to.

Pause for a moment to notice that this tension is still present in science. For all the commitment to science as an enterprise that builds knowledge from observations of the world that others must be able to make (which is the whole point of reproducibility), scientists also compete for prestige and career capital based on which individual was the first to observe (and report observing) a particular detail that anyone could see. Seeing something new is not effortless (as we’ve discussed in the last two posts), but there’s still an uneasy coexistence between the idea of scientific knowledge-building as within the powers of normal human beings and the idea of scientific knowledge-building as the activity of special human beings with uniquely powerful insights and empirical capacities.

The two “I”s that Machamer describes came together as thinkers in the 1600s tried to work out a reliable method by which individuals could replace discredited sources of “knowledge” and expand on what remained to produce their own knowledge. Lots of “natural philosophers” (what we would call scientists today) set out to formulate just such a method. The paradox here is that each thinker was selling (often literally) a way of knowing that was supposed to work for everyone, while simultaneously presenting himself as the only one clever enough to have found it.

Looking for a method that anyone could use to get the facts about the world, the thinkers Machamer describes recognized that they needed to formulate a clear set of procedures that was broadly applicable to the different kinds of phenomena in the world about which people wanted to build knowledge, that was teachable (rather than being a method that only the person who came up with it could use), and that was able to bring about consensus and halt controversy. However, in the 1600s there were many candidates for this method on offer, which meant that there was a good bit of controversy about the question of which method was the method.

Among the contenders for the method, the Baconian method involved cataloguing many experiences of phenomena, then figuring out how to classify them. The Galilean method involved representing the phenomena in terms of mechanical models (and even going so far as to build the corresponding machine). The Hobbesian method focused on analyzing compositions and divisions of substances in order to distinguish causes from effects. And these were just three contenders in a crowded field. If there was a common thread in these many methods, it was describing or representing the phenomena of interest in spatial terms. In the seventeenth century, as now, seeing is believing.

In a historical moment when people were considering the accessibility and the power of knowledge through experience, it became clear to the natural philosophers trying to develop an appropriate method that such knowledge also required control. To get knowledge, it was not enough to have just any experience — you had to have the right kind of experiences. This meant that the methods under development had to give guidance on how to track empirical data and then analyze it. As well, the builders of these methods had to invent the concept of a controlled experiment.

Whether it was in a published dialogue or an experiment conducted in a public space before witnesses, the natural philosophers developing knowledge-building methods recognized the importance of demonstration. Machamer writes:

Demonstration … consists in laying a phenomenon before oneself and others. This "laying out" exhibits the structure of the phenomenon, exhibits its true nature. What is laid out provides an experience for those seeing it. It carries informational certainty that causes assent. (94)

Interestingly, there seems to have been an assumption that once people hit on the appropriate procedure for gathering empirical facts about the phenomena, these facts would be sufficient to produce agreement among those who observed them. The ideal method was supposed to head off controversy. Disagreements were either a sign that you were using the wrong method, or that you were using the right method incorrectly. As Machamer describes it:

[T]he doctrines of method all held that disputes or controversies are due to ignorance. Controversies are stupid and accomplish nothing. Only those who cannot reason properly will find it necessary to dispute. Obviously, as noted, the ideal of universality and consensus contrasts starkly with the increasing number of disputes that engage these scientific entrepreneurs, and with the entrepreneurial claims of each that he alone has found the true method.

Ultimately, what stemmed the proliferation of competing methods was a professionalization of science, in which the practitioners essentially agreed to be guided by a shared method. The hope was that the method the scientific profession agreed upon would be the one that allowed scientists to harness human senses and intellect to best discover what the world is really like. Within this context, scientists might still disagree about the details of the method, but they took it that such disagreements ought to be resolved in such a way that the resulting methodology better approximated this ideal method.

The adoption of shared methodology and the efforts to minimize controversy are echoed in Bruce Bower’s [2] discussion of how the ideal of objectivity has been manifested in scientific practices. He writes:

Researchers began to standardize their instruments, clarify basic concepts, and write in an impersonal style so that their peers in other countries and even in future centuries could understand them. Enlightenment-influenced scholars thus came to regard facts no longer as malleable observations but as unbreakable nuggets of reality. Imagination represented a dangerous, wild force that substituted personal fantasies for a sober, objective grasp of nature. (361)

What the seventeenth-century natural philosophers Machamer describes were striving for is clearly recognizable to us as objectivity — both in the form of an objective method for producing knowledge and in the form of a body of knowledge that gives a reliable picture of how the world really is. The objective scientific method they sought was supposed to produce knowledge we could all agree upon and to head off controversy.

As you might imagine, the project of building reliable knowledge about the world has pushed scientists in the direction of also building experimental and observational techniques that are more standardized and require less individual judgment across observers. But an interesting side-effect of this focus on objective knowledge as a goal of science is the extent to which scientific reports can make it look like no human observers were involved in making the knowledge being reported. The passive voice of scientific papers — these procedures were performed, these results were observed — does more than just suggest that the particular individuals that performed the procedures and observed the results are interchangeable with other individuals (who, scientists trust, would, upon performing the same procedures, see the same results for themselves). The passive voice can actually erase the human labor involved in making knowledge about the world.

This seems like a dangerous move when objectivity is not an easy goal to achieve, but rather one that requires concerted teamwork along with one’s objective method.
_____________

[1] “The Concept of the Individual and the Idea(l) of Method in Seventeenth-Century Natural Philosophy,” in Peter Machamer, Marcello Pera, and Aristides Baltas (eds.), Scientific Controversies: Philosophical and Historical Perspectives. Oxford University Press, 2000.

[2] Bruce Bower, “Objective Visions,” Science News. 5 December 1998: Vol. 154, pp. 360-362

The challenges of objectivity: lessons from anatomy.

In the last post, we talked about objectivity as a scientific ideal aimed at building a reliable picture of what the world is actually like. We also noted that this goal travels closely with the notion of objectivity as what anyone applying the appropriate methodology could see. But, as we saw, it takes a great deal of scientific training to learn to see what anyone could see.

The problem of how to see what is really there is not a new one for scientists. In her book The Scientific Renaissance: 1450-1630 [1], Marie Boas Hall describes how this issue presented itself to Renaissance anatomists. These anatomists endeavored to learn about the parts of the human body that could be detected with the naked eye and the help of a scalpel.

You might think that the subject matter of anatomy would be more straightforward for scientists to "see" than the cells Fred Grinnell describes [2] (discussed in the last post), which require preparation and staining and the twiddling of knobs on microscopes. However, the most straightforward route to gross anatomical knowledge — dissections of cadavers — had its own challenges. For one thing, cadavers (especially human cadavers) were often in short supply. When they were available, anatomists hardly ever performed solitary dissections of them. Rather, dissections were performed, quite literally, for an audience of scientific students, generally with a surgeon doing the cutting while a professor stood nearby and read aloud from an anatomical textbook describing the organs, muscles, or bones encountered at each stage of the dissection process. The hope was that the features described in the text would match the features being revealed by the surgeon doing the dissecting, but there were doubtless instances where the audio track (as it were) was not quite in sync with the visual. Also, as a practical matter, before the invention of refrigeration, dissections were seasonal, performed in the winter rather than the warmer months to retard the cadaver's decomposition. This put limits on how much anatomical study a person could cram into any given year.

In these conditions, most of the scientists who studied anatomy logged many more hours watching dissections than performing dissections themselves. In other words, they were getting information about the systems of interest by seeing rather than by doing — and they weren't always seeing those dissections from the good seats. Thus, we shouldn't be surprised that anatomists greeted the invention of the printing press by producing a number of dissection guides and anatomy textbooks.

What’s the value of a good textbook? It shares detailed information compiled by another scientist, sometimes over the course of years of study, yet you can consume that information in a more timely fashion. If it has diagrams, it can give you a clearer view of what there is to observe (albeit through someone else’s eyes) than you may be able to get from the cheap seats at a dissection. And, if you should be so lucky as to get your own specimens for study, a good textbook can guide your examination of the new material before you, helping you deal with the specimen in a way that lets you see more of what there is to see (including spatial relations and points of attachment) rather than messing it up with sloppy dissection technique.

Among the most widely used anatomy texts in the Renaissance were "uncorrupted" translations of On the Use of the Parts and Anatomical Procedures by the ancient Greek anatomist Galen, and the groundbreaking new text On the Fabric of the Human Body (published in 1543) by Vesalius. The revival of Galen fit into a pattern of Renaissance celebration of the wisdom of the ancients rather than setting out to build "new" knowledge, and Hall describes the attitude of Renaissance anatomists toward his work as "Galen-worship." Had Galen been alive during the Renaissance, he might well have been irritated at the extent to which his discussions of anatomy — based on dissections of animals, not human cadavers — were taken to be authoritative. Galen himself, as an advocate of empiricism, would have urged other anatomists to "dissect with a fresh eye," attentive to what the book of nature (as written on the bodies of creatures to be dissected) could teach them.

As it turns out, this may be the kind of thing that’s easier to urge than to do. Hall asks,

[W]hat scientific apprentice has not, many times since the sixteenth century, preferred to trust the authoritative text rather than his own unskilled eye? (137)

Once again, it requires training to be able to see what there is to see. And surely someone who has written textbooks on the subject (even centuries before) has more training in how to see than does the novice leaning on the textbook.

Of course, the textbook becomes part of the training in how to see, which can, ironically, make it harder to be sure that what you are seeing is an accurate reflection of the world, not just of the expectations you bring to your observations of it.

The illustrations in the newer anatomy texts made it seem less urgent to anatomy students that they observe (or participate in) actual dissections for themselves. As the technique for mass-produced illustrations got better (especially with the shift from woodcuts to engravings), the illustrators could include much more detail in their images. Paradoxically, this could be a problem, as the illustrator was usually someone other than the scientist who wrote the book, and the author and illustrator were not always in close communication as the images were produced. Given a visual representation of what there is to observe and a description of what there is to observe in the text, which would a student trust more?

Bruce Bower discusses this sort of problem in his article "Objective Visions," [3] describing the procedures used by Dutch anatomist Bernhard Albinus in the mid-1700s to create an image of the human skeleton. Bower writes:

Albinus carefully cleans, reassembles, and props up a complete male skeleton; checks the position of each bone in comparison with observations of an extremely skinny man hired to stand naked next to the skeleton; he calculates the exact spot at which an artist must sit to view the skeleton’s proportions accurately; and he covers engraving plates with cross-hatched grids so that images can be drawn square-by-square and thus be reproduced more reliably. (360)

Here, it sounds like Albinus is trying hard to create an image that accurately conveys what there is to see about the skeleton and its spatial relations. The methodology seems designed to make the image-creation faithful to the particulars of the actual specimen — in a word, objective. But, Bower continues:

After all that excruciating attention to detail, the eminent anatomist announces that his atlas portrays not a real skeleton, but an idealized version. Albinus has dictated alterations to the artist. The scrupulously assembled model is only a springboard for insights into a more "perfect" representation of the human skeleton, visible only to someone with Albinus' anatomical acumen. (360)

Here, Albinus was trying to abstract away from the peculiarities of the particular skeleton he had staged as a model for observation in order to describe what he saw as the real thing. This is a decidedly Platonist move. Plato's view was that the stuff of our world consists largely of imperfect material instantiations of immaterial ideal forms — and that science makes the observations it does of many examples of material stuff to get a handle on those ideal forms.

If you know the allegory of the cave, however, you know that Plato didn’t put much faith in feeble human sense organs as a route to grasping the forms. The very imperfection of those material instantiations that our sense organs apprehend would be bound to mislead us about the forms. Instead, Plato thought we’d need to use the mind to grasp the forms.

This is a crucial juncture where Aristotle parted ways with Plato. Aristotle still thought that there was something like the forms, but he rejected Plato’s full-strength rationalism in favor of an empirical approach to grasping them. If you wanted to get a handle on the form of “horse,” for example, Aristotle thought the thing to do was to examine lots of actual specimens of horse and to identify the essence they all have in common. The Aristotelian approach probably feels more sensible to modern scientists than the Platonist alternative, but note that we’re still talking about arriving at a description of “horse-ness” that transcends the observable features of any particular horse.

Whether you’re a Platonist, an Aristotelian, or something else, it seems pretty clear that scientists do decide that some features of the systems they’re studying are crucial and others are not. They distinguish what they take to be background from what they take to be the thing they’re observing. Rather than presenting every single squiggle in their visual field, they abstract away to present the piece of the world they’re interested in talking about.

And this is where the collaboration between anatomist and illustrator gets ticklish. What happens if the engraver is abstracting away from the observed particulars differently than the anatomist would? As Hall notes, the engravings in Renaissance anatomy texts were not always accurate representations of the texts. (Nor, for that matter, did the textual descriptions always get the anatomical features right — Renaissance anatomists, Vesalius included, managed to repeat some anatomical mistakes that went back to Galen, likely because they “saw” their specimens through a lens of expectations shaped by what Galen said they were going to see.)

On top of this, the fact that artists like Leonardo Da Vinci studied anatomy to improve their artistic representations of the human form spilled back to influence Renaissance scientific illustrators. These illustrators, as much as their artist contemporaries, may have looked beyond the spatial relations between bones or muscles or internal organs for hidden beauty in their subjects. While this resulted in striking illustrations, it also meant that their engravings were not always accurate representations of the cadavers that were officially their subjects.

These factors conspired to produce visually arresting anatomy texts that exerted an influence on how the anatomy students using them understood the subject, even when these students went beyond the texts to perform their own dissections. Hall writes,

[I]t is often quite easy to “see” what a textbook or manual says should be seen. (141)

Indeed, faced with a conflict between the evidence of one’s eyes pointed at a cadaver and the evidence of one’s eyes pointed at an anatomical diagram, one might easily conclude that the cadaver in question was a weird variant while the diagram captured the “standard” configuration.

Bower’s article describes efforts scientists made to come up with visual representations that were less subjective. Bower writes:

Scientists of the 19th century rapidly adopted a new generation of devices that rendered images in an automatic fashion. For instance, the boxy contraption known as the camera obscura projected images of a specimen, such as a bone or a plant, onto a surface where a researcher could trace its form onto a piece of paper. Photography soon took over and further diminished human involvement in image-making. … Researchers explicitly equated the manual representation of items in the natural world with a moral code of self-restraint. … A blurry photograph of a star or ragged edges on a slide of tumor tissues were deemed preferable to tidy, idealized portraits. (361)

Our naïve picture of objectivity may encourage us to think that seeing is believing, and that mechanically captured images are more reliable than those rendered by the hand of a (subjective) human, but it's important to remember that pictures — even photographs — have points of view, depend on choices made about the conditions of their creation, and can be used as arguments to support one particular way of seeing the world over another.

In the next post, we’ll look at how Seventeenth Century “natural philosophers” labored to establish a general-use method for building reliable knowledge about the world, and at how the notion of objectivity was connected to these efforts, and to the recognizable features of “the scientific method” that resulted.
_____________

[1] Marie Boas Hall, The Scientific Renaissance: 1450-1630. Dover, 1994.

[2] Frederick Grinnell, The Scientific Attitude. Guilford Press, 1992.

[3] Bruce Bower, “Objective Visions,” Science News. 5 December 1998: Vol. 154, pp. 360-362

The ideal of objectivity.

In trying to figure out what ethics ought to guide scientists in their activities, we’re really asking a question about what values scientists are committed to. Arguably, something that a scientist values may not be valued as much (if at all) by the average person in that scientist’s society.

Objectivity is a value – perhaps one of the values that scientists and non-scientists most strongly associate with science. So, it’s worth thinking about how scientists understand that value, some of the challenges in meeting the ideal it sets, and some of the historical journey that was involved in objectivity becoming a central scientific value in the first place. I’ll be splitting this discussion into three posts. This post sets the stage and considers how modern scientific practitioners describe objectivity. The next post will look at objectivity (and its challenges) in the context of work being done by Renaissance anatomists. The third post will examine how the notion of objectivity was connected to the efforts of Seventeenth Century “natural philosophers” to establish a method for building reliable knowledge about the world.

First, what do we mean by objectivity?

In everyday discussions of ethics, being objective usually means applying the rules fairly and treating everyone the same rather than showing favoritism to one party or another. Is this what scientists have in mind when they voice their commitment to objectivity? Perhaps in part. It could be connected to applying “the rules” of science (i.e., the scientific method) fairly and not letting bias creep into the production of scientific knowledge.

This seems close to the characterization of good scientific practice that we see in the National Academy of Sciences and National Research Council document, "The Nature of Science." [1] This document describes science as an activity in which hypotheses undergo rigorous tests, whereby researchers compare the predictions of the hypotheses to verifiable facts determined by observation and experiment, and findings and corrections are announced in refereed scientific publications. It states, "Although [science's] goal is to approach true explanations as closely as possible, its investigators claim no final or permanent explanatory truths." (38)

Note that rigorous tests, verification of facts (or the information necessary to verify them), correction of mistakes, and reliable reports of findings all depend on honesty – you can't perform these activities by making up your results, or presenting them in a deceptive way, for example. So being objective in the sense of following good scientific methodology requires a commitment not to mislead.

But here, in "The Nature of Science," we see hints that there are two closely related, yet distinct, meanings of "objective". One is what anyone applying the appropriate methodology could see. The other is a picture of what the world is really like. Getting a true picture of the world (or aiming for such a picture) means seeking objectivity in the second sense — finding the true facts. Seeking out the observational data that other scientists could verify — the first sense of objectivity — is closely tied to the experimental method scientists use and their strategies for reporting their results. Presumably, applying objective methodology would be a good strategy for generating an accurate (and thus objective) picture of the world.

But we should note a tension here that’s at least as old as the tension between Plato and his student Aristotle. What exactly are the facts about the world that anyone could see? Are sense organs like eyes all we need to see them? If such facts really exist, are they enough to help us build a true picture of the world?

In the chapter “Making Observations” from his book The Scientific Attitude [2], Fred Grinnell discusses some of the challenges of seeing what there is to see. He argues that, especially in the realms science tries to probe, seeing what’s out there is not automatic. Rather, we have to learn to see the facts that are there for anyone to observe.

Grinnell describes the difficulty students have seeing cells under a light microscope, a difficulty that persists even after students work out how to use the microscope to adjust the focus. He writes:

The students' inability to see the cells was not a technical problem. There can be technical problems, of course — as when one takes an unstained tissue section and places it under a microscope. Under these conditions it is possible to tell that something is "there," but not precisely what. As discussed in any histology textbook, the reason is that there are few visual features of unstained tissue sections that our eyes can discriminate. As the students were studying stained specimens, however, sufficient details of the field were observable that could have permitted them to distinguish among different cells and between cells and the noncellular elements of the tissue. Thus, for these students, the cells were visible but unseen. (10-11)

Grinnell's example suggests that seeing cells, for example, requires more than putting your eye to the eyepiece of a microscope focused on a stained sample of cells. Rather, you need to be able to recognize those bits of your visual field as belonging to a particular kind of object — and, you may even need to have something like the concept of a cell to be able to identify what you are seeing as cells. At the very least, this suggests that we should amend our gloss of objective as "what anyone could see" to something more like "what anyone could see given a particular conceptual background and some training with the necessary scientific measuring devices."

But Grinnell makes even this seem too optimistic. He notes that “seeing things one way means not seeing them another way,” which implies that there are multiple ways to interpret any given piece of the world toward which we point our sense organs. Moreover, he argues,

Each person’s previous experiences will have led to the development of particular concepts of things, which will influence what objects can be seen and what they will appear to be. As a consequence, it is not unusual for two investigators to disagree about their observations if the investigators are looking at the data according to different conceptual frameworks. Resolution of such conflicts requires that the investigators clarify for each other the concepts that they have in mind. (15)

In other words, scientists may need to share a bundle of background assumptions about the world to look at a particular piece of that world and agree on what they see. Much more is involved in seeing “what anyone can see” than meets the eye.

We’ll say more about this challenge in the next post, when we look at how Renaissance anatomists tried to build (and communicate) objective knowledge about the human body.
_____________

[1] “The Nature of Science,” in Panel on Scientific Responsibility and the Conduct of Research, National Academy of Sciences, National Academy of Engineering, Institute of Medicine. Responsible Science, Volume I: Ensuring the Integrity of the Research Process. Washington, DC: The National Academies Press, 1992.

[2] Frederick Grinnell, The Scientific Attitude. Guilford Press, 1992.