Last week I was honored to participate in a Pub-Style Science discussion about how (if at all) philosophy can (or should) inform scientific knowledge-building. Some technical glitches notwithstanding, it was a rollicking good conversation — so much so that I have put together a transcript for those who don’t want to review the archived video.
The full transcript is long (approaching 8000 words, even after excising the non-substantive smack-talk), so I’ll be presenting it here in a few chunks that I’ve split more or less at the points where the topic of the discussion shifted.
In places, I’ve cleaned up the grammar a bit while attempting to faithfully capture the gist of what each speaker was saying. As well, because my mom reads this blog, I’ve cleaned up some of the more colorful language. If you prefer the saltier, unexpurgated version, the archived video will give you what you need.
Simultaneously with our video-linked discussion, there was a conversation on Twitter under the #pubscience hashtag. You can see that conversation Storify’d here.
____
(05:40)
Michael Tomasson: The reason I was interested in this is that I have one very naïve view and one esoteric view. My naïve view is that there is something useful about philosophy in terms of the scientific method, and when people are in my lab, I try to beat into their heads (I mean, educate them) that there’s a certain structure to how we do science, and that this structure is a life-raft, an essential tool. And I guess that’s the question, whether there is some sort of essential tool kit. We talk about the scientific method. Is that a universal? I started thinking about this talking with my brother-in-law, who’s an amateur philosopher, about different theories of epistemology, and he was shocked that I would think that science had a lock on creating knowledge. But I think we do, through the scientific method.
Janet, take us to the next level. To me, from where I am, the scientific method is the key to the city of knowledge. No?
Janet Stemwedel: Well, that’s certainly a common view, and that’s a view that, in the philosophy of science class I regularly teach, we start with — that there’s something special about whatever it is scientists are doing, something special about the way they gather very careful observations of the world, and hook them together in the right logical way, and draw inferences and find patterns, that’s a reliable way to build knowledge. But at least for most of the 20th Century, what people who looked closely at this assumption in philosophy found was that it had to be more complicated than that. So you end up with folks like Sir Karl Popper pointing out that there is a problem of induction — that deductive logic will get you absolutely guaranteed conclusions if your premises are true, but inductive inference could go wrong; the future might not be like the past we’ve observed so far.
(08:00)
Michael Tomasson: I’ve got to keep the glossary attached. Deductive and inductive?
Janet Stemwedel: Sure. A deductive argument might run something like this:
All men are mortal. Socrates is a man. Therefore, Socrates is mortal.
If it’s true that all men are mortal, and that Socrates is a man, then you are guaranteed that Socrates is also going to be mortal. The form of the argument is enough to say, if the assumptions are true, then the conclusion has to be true, and you can take that to the bank.
Inductive inference is actually most of what we seem to use in drawing inferences from observations and experiments. So, let’s say you observe a whole lot of frogs, and you observe that, after some amount of time, each of the frogs that you’ve had in your possession kicks off. After a certain number of frogs have done this, you might draw the inference that all frogs are mortal. And, it seems like a pretty good inference. But, it’s possible that there are frogs not yet observed that aren’t mortal.
Inductive inference is something we use all the time. But Karl Popper said, guess what, it’s not guaranteed in the same way deductive logic is. And this is why he thought the power of the scientific method is that scientists are actually only ever concerned to find evidence against their hypotheses. The evidence against your hypotheses lets you conclude, via deductive inference, that those hypotheses are wrong, and then you cross them off. Any hypothesis where you seem to get observational support, Popper says, don’t get too excited! Keep testing it, because maybe the next test is going to be the one where you find evidence against it, and you don’t want to get screwed over by induction. Inductive reasoning is just a little too shaky to put your faith in.
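(An aside from me as transcriber, since the logic here is the crux of Popper’s point: below is a rough LaTeX sketch of the three argument forms in play. The notation is my own gloss, not anything the speakers used on air.)

```latex
% A minimal, compilable sketch of the argument forms discussed above.
% The notation is my gloss; the speakers used no formal notation.
\documentclass{article}
\begin{document}

Deduction (valid: true premises guarantee the conclusion):
\[ \forall x\,\bigl(\mathrm{Man}(x) \rightarrow \mathrm{Mortal}(x)\bigr),\quad
   \mathrm{Man}(\mathrm{Socrates}) \;\vdash\; \mathrm{Mortal}(\mathrm{Socrates}) \]

Induction (invalid: observed cases support, but never guarantee, the generalization):
\[ \mathrm{Mortal}(f_1),\ \mathrm{Mortal}(f_2),\ \ldots,\ \mathrm{Mortal}(f_n)
   \;\not\vdash\; \forall x\,\bigl(\mathrm{Frog}(x) \rightarrow \mathrm{Mortal}(x)\bigr) \]

Falsification via modus tollens (valid: Popper's deductively safe move):
\[ H \rightarrow O,\quad \neg O \;\vdash\; \neg H \]

\end{document}
```

The asymmetry Popper trades on is visible in the forms themselves: the falsifying move is deductively valid, while the confirming move is the merely inductive one.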
(10:05)
Michael Tomasson: That’s my understanding of Karl Popper. I learned that falsifying hypotheses is the core, and that’s sort of what I teach as truth. But I’ve heard there are some anti-Karl Popper folks, which I don’t really understand.
Let me ask Isis, because I know Isis has very strong opinions about hypotheses. You had a blog post a long time ago about hypotheses. Am I putting words in your mouth to say you think hypotheses and hypothesis testing are important?
(10:40)
Dr. Isis: No, I did. That’s sort of become the running joke here: my only contribution to lab meeting is to say, wait wait wait, what was your hypothesis? I think that having hypotheses is critical, and I’m a believer, as Dr. Tomasson knows, that a hypothesis has four parts. I think framing the question is fundamental, because the question frames how you do your analysis. The design and the analysis fall out of the hypothesis, so I don’t understand doing science without a hypothesis.
Michael Tomasson: Let me throw it over to Andrew … You’re coming from anthropology, you’re looking at science from 30,000 feet, where maybe in anthropology it’s tough to do hypothesis-testing. So, what do you say to this claim that the hypothesis is everything?
Andrew Brandel: I would give two basic responses. One: in the social sciences, we definitely have a different relationship to hypotheses, to the scientific method, perhaps. I don’t want to represent the entire world of social and human sciences.
Michael Tomasson: Too bad!
(12:40)
Andrew Brandel: So, there’s definitely a different relationship to hypothesis-testing — we don’t have a controlled setting. This is what a lot of famous anthropologists would talk about. The other area where we might interject is, science is (in the view of some of us) one among many different ways of viewing and organizing our knowledge about the world, and not necessarily better than some other view.
Michael Tomasson: No, it’s better! Come on!
Andrew Brandel: Well, we can debate about this. It’s a debate that’s been going on for a long time, but basically my position would be that we have something to learn from all the different sciences that exist in the world, and that there are lots of different logics which condition the possibility of experiencing different kinds of things. When we ask what the hypothesis is, and when Dr. Isis says that’s crucial for the research, we would agree with you that the hypothesis is also conditioning the responses you get. That’s both what you want and part of the problem. It’s part of a culture that operates like an ideology: too close to you to come at from within it.
Janet Stemwedel: One of the things that philosophers of science started twigging to in the late 20th Century is that science is not working with a scientific method that’s essentially a machine: you toss observations in, you turn the crank, and out the other end comes pristine knowledge. Science is an activity done by human beings, and human beings who do science have as many biases and blindspots as human beings who don’t do science. So, recognizing some of the challenges built into the kind of critter we are becomes crucial to trying to build reliable knowledge about the world. And even in places where the scientist will say, look, in this particular field I’m not doing hypothesis-driven science, that doesn’t mean that there aren’t some hypotheses sort of behind the curtain directing the attention of the people trying to build knowledge. It just means that they haven’t bumped into enough people trying to build knowledge in the same area with different assumptions to notice that they’re making assumptions in the first place.
(15:20)
Dr. Isis: I think that’s a crucial distinction. Is the science that you’re doing really not hypothesis-driven, or are you too lazy to write down a hypothesis?
To give an example, I’m writing a paper with this clinical fellow, and she’s great. She brought a draft, which is amazing, because I’m all about the paper right now. And in there, she wrote, we sought to observe this because to the best of our knowledge this has never been reported in the literature.
First of all, the phrase “to the best of our knowledge”: any time you write that, you should just punch yourself in the throat, because if it weren’t to the best of your knowledge, you wouldn’t be writing it. You wouldn’t be lying and writing “this has never been reported in the literature.” The other thing is that “this has never been reported in the literature” is a stupid reason to do the study. I told her, the number of days per week that I wear black underwear has never been reported in the literature. That doesn’t mean it should be.
Janet Stemwedel: Although, if it correlates with your experiment working or not — I have never met more superstitious people than experimentalists. If the experiment only works on the days you wear black underwear, you’re wearing black underwear until the paper is submitted; that’s how it’s going to be. Because the world is complicated!
Dr. Isis: The point is that it’s not that she didn’t have a hypothesis. It’s that pulling it out of her was like pulling out a tapeworm. It was a struggle. That to me is the question: are we really doing science without a hypothesis, or are we making the story about ourselves? Is the story about what we know from the literature, what the gap in the literature is, and the motivation to do the experiment, or are we writing, “we wanted to do this to see if this was the thing”? In which case, I don’t find it very interesting.
Michael Tomasson: That’s an example of something that I try to teach about writing papers: avoid “we did this, we wanted to do that, we thought about this.” It’s not really about you.
But friend of the show Cedar Riener tweets in: aren’t the biggest science projects the ones least likely to have clearly hypothesis-driven experiments, like HGP, BRAIN, etc.? I think the BRAIN example is a good one. We talk about how you need hypotheses to do science, and yet here’s this very high profile thing which, as far as I can tell, doesn’t really have any hypotheses driving it.
When the transcript continues: Issues of inclusion, methodological disputes, and the possibility that “the scientific method” is actually a lie.