Resistance to ethics is different from resistance to other required courses.

For academic types like me, the end of the semester can be a weird juxtaposition of projects that are ending and new projects on the horizon, a juxtaposition that can be an opportunity for reflection.

I’ve just seen another offering of my “Ethics in Science” course through to a (mostly successful) conclusion. Even though the class was huge (more than 100 students) for a course that is heavy on discussion, its students were significantly more active and engaged than those in the much smaller class I taught right after it. The students thought hard and well, and regularly floored me with their razor-sharp insights. All the evidence suggests that these students were pretty into it.

Meanwhile, I’m getting set for a new project that will involve developing ethics units for required courses offered in another college at my university — and one of the things I’ve been told is that the students required to take these courses (as well as some non-zero number of the professors in their disciplines) are very resistant to the inclusion of ethics coursework in courses otherwise focused on their major subjects.

I find this resistance interesting, especially given that the majority of the students in my “Ethics in Science” class were taking it because it was required for their majors.

I recognize that part of what’s going on may be a blanket resistance to required courses. Requirements can feel like an attack on one’s autonomy and individuality — rather than being able to choose what you will study, you’re told what you must study to major in a particular subject or to earn a degree from a particular university. A course that a student might have been open to enjoying were it freely chosen can become a loathed burden merely by virtue of being required. I’ve seen the effect often enough that it no longer surprises me.

However, requirements aren’t usually imposed solely to constrain students’ autonomy. There’s almost always a reason that the course, subject matter, or problem-solving area is being required. The students may not know that reason (or judge it to be a compelling one if they do), but that doesn’t mean that there isn’t one.

In some ways, ethics is really not much different here from other major requirements or subject matter that students bemoan, including calculus, thermodynamics, writing in the major, and significant figures. On the other hand, the moaning about some of those other requirements tends to take the form of “When am I ever going to use that?”

I don’t believe I’ve ever heard a science or engineering student say, “When am I ever going to use ethics?”

In other words, they generally accept that they should be ethical, but they also sometimes voice resistance to the idea that a course (or workshop, or online training module) about how to be ethical will be anything but a massive waste of their time.

My sense is that at least part of what’s going on here is that scientists and engineers and their ilk feel like ethics are being imposed on them from without, by university administrators or funding agencies or accrediting organizations. Worse, the people exhorting scientists, engineers, et alia to take ethics seriously often seem to take a finger-wagging approach. And this, I suspect, makes it harder to get what those business types call “buy-in” from the scientists.

The typical story I’ve heard about ethics sessions in industry (and some university settings) goes something like this:

You get a big packet with the regulations you have to follow — to get your protocols approved by the IRB and/or the IACUC, to disclose potential conflicts of interest, to protect the company’s or university’s patent rights, to fill out the appropriate paperwork for hazardous waste disposal, etc., etc. You are admonished against committing the “big three” of falsification, fabrication, and plagiarism. Sometimes, you are also admonished against sexually harassing those with whom you are working. The whole thing has the feel of being driven by the legal department’s concerns: for goodness sake, don’t do anything that will embarrass the organization or get us into hot water with regulators or funders!


Listening to the litany of things you ought not to do, it’s really easy to think: Very bad people do things like this. But I’m not a very bad person. So I can tune this out, and I can kind of ignore ethics.


The decision to tune out ethics is enabled by the fact that the people wagging the fingers at the scientists are generally outsiders (from the legal department, or the philosophy department, or wherever). These outsiders are coming in telling us how to do our jobs! And, the upshot of what they’re telling us seems to be “Don’t be evil,” and we’re not evil! Besides, these outsiders clearly don’t care about (let alone understand) the science so much as avoiding scandals or legal problems. And they don’t really trust us not to be evil.


So just nod earnestly and let’s get this over with.

One hurdle here is the need to get past the idea that being ethical is somehow innate, a mere matter of not being evil, rather than a problem-solving practice that gets better with concrete strategies and repeated use. Another hurdle is the feeling that ethics instruction is the result of meddling by outsiders.


If ethics is seen as something imposed upon scientists by a group from the outside — one that neither understands science, nor values it, nor trusts that scientists are generally not evil — then scientists will resist ethics. To get “buy-in” from the scientists, they need to see how ethics are intimately connected to the job they’re trying to get done. In other words, scientists need to understand how ethical conduct is essential to the project of doing science. Once scientists make that connection, they will be ethical — not because someone else is telling them to be ethical, but because being ethical is required to make progress on the job of building scientific knowledge.
_____________
This post is an updated version of an ancestor post on my other blog, and was prompted by the Virtually Speaking Science discussion of philosophy in and of science scheduled for Wednesday, May 28, 2014 (starting 8 PM EDT/8 PM PDT). Watch the hashtags #VSpeak and #AskVS for more details.

Pub-Style Science: dreams of objectivity in a game built around power.

This is the third and final installment of my transcript of the Pub-Style Science discussion about how (if at all) philosophy can (or should) inform scientific knowledge-building. Leading up to this part of the conversation, we were considering the possibility that the idealization of the scientific method left out a lot of the details of how real humans actually interact to build scientific knowledge …

Dr. Isis: And that’s the tricky part, I think. That’s where this becomes a messy endeavor. You think about the parts of the scientific method, and you write the scientific method out, we teach it to our students, it’s on the little card, and I think it’s one of the most amazing constructs that there is. It’s certainly a philosophy.

I have devoted my career to the scientific method, and yet it’s that last step that is the messiest. We take our results and we interpret them, we either reject or fail to reject the hypothesis, and in a lot of cases, the way we interpret the very objective data that we’re getting is based on the social and cultural constructs of who we are. And the messier part is that the who we are — you say that science is done around the world, sure, but really, who is it done by? We all get the CV, “Dear honorable and most respected professor…” And what do you do with those emails? You spam them. But why? Why do we do that? There are people [doing science] around the world, and yet we reject their science-doing because of who they are and where they’re from and our understanding, our capacity to take [our doing] of that last step of the scientific method as superior because of some pedigree of our training, which is absolutely rooted in the narrowest sliver of our population.

And that’s the part that frightens me about science. Going from lab to lab and learning things, you’re not just learning objective skills, you’re learning a political process — who do you shake hands with at meetings, who do you have lunch with, who do you have drinks with, how do you phrase your grants in a particular way so they get funded because this is the very narrow sliver of people who are reading them? And I have no idea what to do about that.

Janet Stemwedel: I think this is a place where the acknowledgement that’s embodied in editorial policies of journals like PLOS ONE, that we can’t actually reliably predict what’s going to be important, is a good step forward. That’s saying, look, what we can do is talk about whether this is a result that seems to be robust: this is how I got it; I think if you try to get it in your lab, you’re likely to get it, too; this is why it looked interesting to me in light of what we knew already. Without saying: oh, and this is going to be the best thing since sliced bread. At least that’s acknowledging a certain level of epistemic humility that it’s useful for the scientific community to put out there, to not pretend that the scientific method lets you see into the future. Because last time I checked, it doesn’t.

(46:05)
Andrew Brandel: I just want to build on this point, that this question of objective truth also is a question that is debated hotly, obviously, in science, and I will get in much trouble for my vision of what is objective and what is not objective. This question of whether, to quote a famous philosopher of science, we’re all looking at the same world through different-colored glasses, or whether there’s something more to it, if we’re actually talking about nature in different ways, if we can really learn something not even from science being practiced wherever in the world, but from completely different systems of thinking about how the world works. Because the other part of this violence is not just the ways in which certain groups have not been included in the scientific community, the professional community, which was controlled by the church and wealthy estates and things, but also with the institutions like the scientific method, like certain kinds of philosophy. A lot of violence has been propagated in the name of those things. So I think it’s important to unpack not just this question of let’s get more voices to the table, but literally think about how the structures of what we’re doing themselves — the way the universities are set up, the way that we think about what science does, the way that we think about objective truth — also propagate certain kinds of violence, epistemic kinds of violence.

Michael Tomasson: Wait wait wait, this is fascinating. Epistemic violence? Expand on that.

Andrew Brandel: What I mean to say is, part of the problem, at least from the view of myself — I don’t want to actually represent anybody else — is that if we think that we’re getting to some better method of getting to objective truth, if we think that we have — even if it’s only in an ideal state — some sort of cornerstone, some sort of key to the reality of things as they are, then we can squash the other systems of thinking about the world. And that is also a kind of violence, in a way, that’s not just the violence of there’s no women at the table, there’s no different kinds of people at the table. But there’s actually another kind of power structure that’s embedded in the very way that we think about truths. So, for example, a famous anthropologist, Levi-Strauss, would always point out that the botanists would go to places in Latin America and they would identify 14 different kinds of XYZ plant, and the people living in that jungle who aren’t scientists or don’t have that kind of sophisticated knowledge could distinguish like 45 kinds of these plants. And they took them back to the lab, and they were completely right.

So what does that mean? How do we think about these different ways [of knowing]? I think unpacking that is a big thing that social science and philosophy of science can bring to this conversation, pointing out when there is a place to critique the ways in which science becomes like an ideology.

Michael Tomasson: That just sort of blew my mind. I have to process that for a while. I want to pick up on something you’re saying and that I think Janet said before, which is really part of the spirit of what Pub-Style Science is all about: the idea that if we get more different kinds of voices into science, we’ll have a little bit better science at the other end of it.

Dr. Rubidium: Yeaaaah. We can all sit around like, I’ve got a ton of great ideas, and that’s fabulous, and new voices, and rah rah. But where are the new voices? If the new voices, or what you would call new voices, or new opinions, or different opinions (maybe not even new, just different from the current power structure), aren’t getting to positions of real power to effect change, it doesn’t matter how many foot soldiers you get on the ground. You have got to get people into the position of being generals. And is that happening? No. I would say no.

Janet Stemwedel: Having more different kinds of people at the table doesn’t matter if you don’t take them seriously.

Andrew Brandel: Exactly. That’s a key point.

Dr. Isis: This is the tricky thing that I sort of alluded to. And I’m not talking about diverse voices in terms of gender and racial and sexual orientation diversity and disability issues. I’m talking about just this idea of diverse voices. One of the things that is tricky, again, is that to get to play the game you have to know the rules, and trying to change the rules too early — one, I think it’s dangerous to try to change the rules before you understand what the rules even are, and two, that is the quickest way to get smacked in the nose when you’re very young. And now, to extend that to issues of actual diversity in science, at least my experience has been that some of the folks who are diverse in science are some of the biggest rule-obeyers. Because you have to be in order to survive. You can’t come in and be different as it is and decide you’re going to change the rules out from under everybody until you get into that — until you become a general, to use Dr. Rubidium’s analogy. The problem is, by the time you become the general, have you drunk enough of the Kool-Aid that you no longer remember who you were? Do you still have enough of yourself to change the system? Some of my more senior colleagues, diverse colleagues, who came up the ranks, are some of the biggest believers in the rules. I don’t know if they felt that way when they were younger folks.

Janet Stemwedel: Part of it can be, if the rules work for you, there’s less incentive to think about changing them. But this is one of those places where those of us philosophers who think about where the knowledge-building bumps up against the ethics will say: look, the ethical responsibilities of the people in the community with more power are different from the ethical responsibilities of the people in the community who are just coming up, because they don’t have as much weight to throw around. They don’t have as much power. So I talk a lot to mid-career and late-career scientists and say, hey look, you want to help build a different community, a different environment for the people you’re training? You’ve got to put some skin in the game to make that happen. You’re in a relatively safe place to throw that weight around. You do that!

And you know, I try to make these prudential arguments about, if you shift around the incentive structures [in various ways], what’s likely to produce better knowledge on the other end? That’s presumably why scientists are doing science, ’cause otherwise there’d be some job that they’d be doing that takes up less time and less brain.

Andrew Brandel: This is a question also of where ethics and epistemic issues also come together, because I think that’s really part of what kind of radical politics — there’s a lot of different theories about what kind of revolution you can talk about, what a revolutionary politics might be to overthrow the system in science. But I think this issue that it’s also an epistemic thing, that it’s also a question of producing better knowledge, and that, to bring back this point about how it’s not just about putting people in positions, it’s not just hiring an assistant professor from XYZ country or more women or these kinds of things, but it’s also a question of putting oneself sufficiently at risk, and taking seriously the possibility that I’m wrong, from radically different positions. That would really move things, I think, in a more interesting direction. That’s maybe something we can bring to the table.

Janet Stemwedel: This is the piece of Karl Popper, by the way, that scientists like as an image of what kind of tough people they are. Scientists are not trying to prove their hypotheses, they’re trying to falsify them, they’re trying to show that they’re wrong, and they’re ready to kiss even their favorite hypothesis goodbye if that’s what the evidence shows.

Some of those hypotheses that scientists need to be willing to kiss goodbye have to do with narrow views of what kind of details count as fair game for building real reliable knowledge about the world and what kind of people and what kind of training could do that, too. Scientists really have to be more evidence-attentive around issues like their own implicit bias. And for some reason that’s really hard, because scientists think that individually they are way more objective than the average bear. The real challenge of science is recognizing that we are all average bears, and it is just the coordination of our efforts within this particular methodological structure that gets us something better than the individual average bear could get by him- or herself.

Michael Tomasson: I’m going to backpedal as furiously as I can, since we’re running out of time. So I’ll give my final spiel and then we’ll go around for closing comments.

I guess I will pare down my skeleton-key: I think there’s an idea of different ways of doing science, and there’s a lot of culture that comes with it that I think is very flexible. I think what I’m getting at is, is there some universal hub for whatever different ways people are looking at science? Is there some sort of universal skeleton or structure? And I guess, if I had to backpedal furiously, that I would say, what I would try to teach my folks, is number one, there is an objective world, it’s not just my opinion. When people come in and talk to me about their science and experiments, it’s not just about what I want, it’s not just about what I think, it’s that there is some objective world out there that we’re trying to describe. The second thing, the most stripped-down version of the scientific method I can think of, is that in order to understand that objective world, it helps to have a hypothesis, a preconceived notion, first to challenge.

What I get frustrated about, and this is just a very practical day-to-day thing, is I see people coming and doing experiments saying, “I have no preconceived notion of how this should go, I did this experiment, and here’s what I got.” It’s like, OK, that’s very hard to interpret unless you start from a certain place — here’s my prediction, here’s what I think was going on — and then test it.

Dr. Isis: I’ll say, Tomasson, actually this wasn’t as boring as I thought it would be. I was really worried about this one. I wasn’t really sure what we were supposed to be talking about — philosophy and science — but this one was OK. So, good on you.

But, I think that I will concur with you that science is about seeking objective truth. I think it’s a darned shame that humans are the ones doing the seeking.

Janet Stemwedel: You know, dolphin science would be completely different, though.

Dr. Rubidium: Yeah, dolphins are jerks! What are you talking about?

Janet Stemwedel: Exactly! All their journals would be behind paywalls.

Andrew Brandel: I’ll just say that I was saying to David, who I know is a regular member of your group, that I think it’s a good step in the right direction to have these conversations. I don’t think we as social scientists, even those of us who work in science settings, get asked often enough to talk about these issues, and about what the ethical and epistemic stakes are in doing what we do, and what we can bring to the table on similar kinds of questions. For me, this question of cultivating a kind of openness to being wrong is so central to thinking about the kind of science that I do. I think that these kinds of conversations are important, and we need to generate some kind of momentum. I jokingly said to Tomasson that we need a grant to pay for a workshop to get more people into these types of conversations, because I think it’s significant. It’s a step in the right direction.

Janet Stemwedel: I’m inclined to say one of the take-home messages here is that there’s a whole bunch of scientists and me, and none of you said, “Let’s not talk about philosophy at all, that’s not at all useful.” I would like some university administrators to pay attention to this. It’s possible that those of us in the philosophy department are actually contributing something that enhances not only the fortunes of philosophy majors but also the mindfulness of scientists about what they’re doing.

I’m pretty committed to the idea that there is some common core to what scientists across disciplines and across cultures are doing to build knowledge. I think the jury’s still out on what precisely the right thing to say about that common core of the scientific method is. But, I think there’s something useful in being able to step back and examine that question, rather than saying, “Science is whatever the hell we do in my lab. And as long as I keep doing all my future knowledge-building on the same pattern, nothing could go wrong.”

Dr. Rubidium: I think that for me, I’ll echo Isis’s comments: science is an endeavor done by people. And people are jerks — No! With people, then, if you have this endeavor, this job, whatever you want to call it — some people would call it a calling — once people are involved, I think it’s essential that we talk about philosophy, sociology, the behavior of people. They are doing the work. It doesn’t make sense to me, then — and I’m an analytical chemist and I have zero background in all of the social stuff — it doesn’t make sense to me that you would have this thing done by people and then actually say with a straight face, “But let’s not talk about people.” That part just doesn’t compute. So I think these conversations definitely need to continue, and I hope that we can talk more about the people behind the endeavor and more about the things attached to their thoughts and behaviors.

* * * * *

Part 1 of the transcript.

Part 2 of the transcript.

Archived video of this Pub-Style Science episode.

Storify’d version of the simultaneous Twitter conversation.

You should also check out Dr. Isis’s post on why the conversations that happen in Pub-Style Science are valuable to scientists-in-training.

Pub-Style Science: exclusion, inclusion, and methodological disputes.

This is the second part of my transcript of the Pub-Style Science discussion about how (if at all) philosophy can (or should) inform scientific knowledge-building, wherein we discuss methodological disputes, who gets included or excluded in scientific knowledge-building, and ways the exclusion or inclusion might matter. Also, we talk about power gradients and make the scary suggestion that “the scientific method” might be a lie…

Michael Tomasson: Rubidium, you got me started on this. I made a comment on Twitter about our aspirations to build objective knowledge and that that was what science was about, and whether there’s sexism or racism or whatever other -isms around is peripheral to the holy of holies, which is the finding of objective truth. And you made … a comment.

Dr. Rubidium: I think I told you that was cute.

Michael Tomasson: Let me leverage it this way: One reason I think philosophy is important is the basics of structure, of hypothesis-driven research. The other thing I’m kind of intrigued by is that part of Twitter culture and what we’re doing with Pub-Style Science is to throw the doors open to people from different cultures and different backgrounds and really say, hey, we want to have science that’s not just a white bread monoculture, but have it be a little more open. But does that mean that everyone can bring their own way of doing science? It sounds like Andrew might say, well, there’s a lot of different ways, and maybe everyone who shows up can bring their own. Maybe one person wants a hypothesis, another doesn’t. Does everybody get to do their own thing, or do we need to educate people in the one way to do science?

As I mentioned on my blog, I had never known that there was a feminist way of doing science.

Janet Stemwedel: There’s actually more than one.

Dr. Isis: We’re not all the same.

Janet Stemwedel: I think even the claim that there’s a single, easily described scientific method is kind of a tricky one. One of the things I’m interested in — one of the things that sucked me over from building knowledge in chemistry to trying to build knowledge in philosophy — is, if you look at scientific practice, scientists who are nominally studying the same thing, the same phenomena, but who’re doing it in different disciplines (say, the chemical physicists and the physical chemists) can be looking at the same thing, but they’re using very different experimental tools and conceptual tools and methodological tools to try to describe what’s going on there. There’s ways in which, when you cross a disciplinary boundary — and sometimes, when you leave your research group and go to another research group in the same department — that what you see on the ground as the method you’re using to build knowledge shifts.

In some ways, I’m inclined to say it’s an empirical question whether there’s a single unified scientific method, or whether we’ve got something more like a family resemblance kind of thing going on. There’s enough overlap in the tools that we’re going to call them all science, but whether we can give necessary and sufficient conditions that describe the whole thing, that’s still up in the air.

Andrew Brandel: I just want to add to that point, if I can. I think that one of the major topics in social sciences of science and in the philosophy of science recently has been the point that science itself, as it’s been practiced, has a history that is also built on certain kinds of power structures. So it’s not even enough to say, let’s bring lots of different kinds of people to the table, but we actually have to uncover the ways in which certain power structures have been built into the very way that we think about science or the way that the disciplines are arranged.

(23:10)
Michael Tomasson: You’ve got to expand on that. What do you mean? There’s only one good — there’s good science and there’s bad science. I don’t understand.

Janet Stemwedel: So wait, everyone who does science like you do is doing good science, and everyone who uses different approaches, that’s bad?

Michael Tomasson: Yes, exactly.

Janet Stemwedel: There’s no style choices in there at all?

Michael Tomasson: That’s what I’m throwing out there. I’m trying to explore that. I’m going to take poor Casey over here, we’re going to stamp him, turn him into a white guy in a tie and he’s going to do science the way God intended it.

Dr. Isis: This is actually a good point, though. I had a conversation with a friend recently about “Cosmos.” As they look back on the show, at all the historical scientists, who, historically has done science? Up until very recently, it has been people who were sufficiently wealthy to support the lifestyle to which they would like to become accustomed, and it’s very easy to sit and think and philosophize about how we do science when it’s not your primary livelihood. It was sort of gentleman scientists who were of the independently wealthy variety who were interested in science and were making these observations, and now that’s very much changed.

It was really interesting to me when you suggested this as a topic because recently I’ve become very pragmatic about doing science. I think I’m taking the “Friday” approach to science — you know, the movie? Danielle Lee wants to remake “Friday” as a science movie. Right now, messing with my money is like messing with my emotions. I’m about writing things in a way to get them funded and writing things in a way that gets them published, and it’s cute to think that we might change the game or make it better, but there’s also a pragmatic side to it. It’s a human endeavor, and doing things in a certain way gets certain responses from your colleagues. The thing that I see, especially watching young people on Twitter, is they try to change the game before they understand the game, and then they get smacked on the nose, and then they write it off as “science is broken.” Well, you don’t understand the game yet.

Janet Stemwedel: Although it’s complicated, I’d say. It is a human endeavor. Forgetting it’s a human endeavor is a road to nothing but pain. And you’ve got the knowledge-building thing going on, and that’s certainly at the center of science, but you’ve also got the getting credit for the awesome things you’ve done and getting paid so you can stay in the pool and keep building knowledge, because we haven’t got this utopian science island where anyone who wants to build knowledge can and all their needs are taken care of. And, you’ve got power gradients. So, there may well be principled arguments from the point of view of what’s going to incentivize practices that will result in better knowledge and less cheating and things like that, to change the game. I’d argue that’s one of the things that philosophy of science can contribute — I’ve tried to contribute that as part of my day job. But the first step is, you’ve got to start talking about the knowledge-building as an activity that’s conducted by humans rather than you put more data into the scientific method box, you turn the crank, and out comes the knowledge.

Michael Tomasson: This is horrifying. I guess what I’m concerned about is I’d hoped you’d teach the scientific method as some sort of central methodology from lab to lab. Are you saying, from the student’s point of view, whatever lab you’re in, you’ve got to figure out whatever the boss wants, and that’s what science is? Is there no skeleton key or structure that we can take from lab to lab?

Dr. Rubidium: Isn’t that what you’re doing? You’re going to instruct your people to do science the way you think it should be done? That pretty much sounds like what you just said.

Dr. Isis: That’s the point of being an apprentice, right?

Michael Tomasson: I had some fantasy that there was some universal currency or universal toolset that could be taken from one lab to another. Are you saying that I’m just teaching my people how to do Tomasson science, and they’re going to go over to Rubidium and be like, forget all that, and do things totally differently?

Dr. Rubidium: That might be the case.

Janet Stemwedel: Let’s put out there that a unified scientific method that’s accepted across scientific disciplines, and from lab to lab and all that, is an ideal. We have this notion that part of why we’re engaged in science, trying to build knowledge of the world, is that there is a world that we share. We’re trying to build objective knowledge, and why that matters is because we take it that there is a reality out there that goes deeper than how, subjectively, things seem to us.

(30:00)
Michael Tomasson: Yes!

Janet Stemwedel: So, we’re looking for a way to share that world, and the pictures of the method involved in doing that, the logical connections involved in doing that, that we got from the logical empiricists and Popper and that crowd — if you like, they’re giving sort of the idealized model of how we could do that. It’s analogous to the story they tell you about orbitals in intro chem. You know what happens, if you keep on going with chem, is they mess up that model. They say, it’s not that simple, it’s more complicated.

And that’s what philosophers of science do, is we mess up that model. We say, it can’t possibly be that simple, because real human beings couldn’t drive that and make it work as well as it does. So there must be something more complicated going on; let’s figure out what it is. My impression, looking at the practice through the lens of philosophy of science, is that you find a lot of diversity in the details of the methods, you find a reasonable amount of diversity in terms of what’s the right attitude to have towards our theories — if we’ve got a lot of evidence in favor of our theories, are we allowed to believe our theories are probably right about the world, or just that they’re better at churning out predictions than the other theories we’ve considered so far? We have places where you can start to look at how methodologies embraced by Western primatologists compare to those of Japanese primatologists — where they differ on what’s the right thing to do to get the knowledge — you could say, it’s not the case that one side is right and one side is wrong, we’ve located a trade-off here, where one camp is deciding one of the things you could get is more important and you can sacrifice the other, and the other camp is going the other direction on that.

It’s not to say we should just give up on this project of science and building objective, reliable knowledge about the world. But how we do that is not really anything like the flowchart of the scientific method that you find in the junior high science textbook. That’s like staying with the intro chem picture of the orbitals and saying, that’s all I need to know.

(32:20)
Dr. Isis: I sort of was having a little frightened moment where, as I was listening to you talk, Michael, I was having this “I don’t think that word means what you think it means” reaction. And I realize that you’re a physician and not a real scientist, but “the scientific method” is actually a narrow construct of generating a hypothesis, generating methods to test the hypothesis, generating results, and then either rejecting or failing to reject your hypothesis. This idea of going to people’s labs and learning to do science is completely tangential to the scientific method. I think we can all agree that, for most of us at our core, the scientific method is different from the culture. Now, whether I go to Tomasson’s lab and learn to label my reagents with the wrong labels because they’re a trifling, scandalous bunch who will mess up your experiment, and then I go to Rubidium’s lab and we all go marathon training at 3 o’clock in the afternoon, that’s the culture of science, that’s not the scientific method.

(34:05)
Janet Stemwedel: Maybe what we mean by the scientific method is either more nebulous or more complicated, and that’s where the disagreements come from.

If I can turn back to the example of the Japanese primatologists and the primatologists from the U.S. [1]… You’re trying to study monkeys. You want to see how they’re behaving, you want to tell some sort of story, you probably are driven by some sort of hypotheses. As it turns out, the Western primatologists are starting with the hypothesis that basically you start at the level of the individual monkey, that this is a biological machine, and you figure out how that works, and how they interact with each other if you put them in a group. The Japanese primatologists are starting out with the assumption that you look at the level of social groups to understand what’s going on.

(35:20)
And there’s this huge methodological disagreement that they had when they started actually paying attention to each other: is it OK to leave food in the clearing to draw the monkeys to where you can see them more closely?

The Western primatologists said, hell no, that interferes with the system you’re trying to study. You want to know what the monkeys would be like in nature, without you there. So, leaving food out there for them, “provisioning” them, is a bad call.

The Japanese primatologists (who are, by the way, studying monkeys that live in the islands that are part of Japan, monkeys that are well aware of the existence of humans because they’re bumping up against them all the time) say, you know what, if we get them closer to where we are, if we draw them into the clearings, we can see more subtle behaviors, we can actually get more information.

So here, there’s a methodological trade-off. Is it important to you to get more detailed observations, or to get observations that are untainted by human interference? ‘Cause you can’t get both. They’re both using the scientific method, but they’re making different choices about the kind of knowledge they’re building with that scientific method. Yet, on the surface of things, these primatologists were sort of looking at each other like, “Those guys don’t know how to do science! What the hell?”

(36:40)
Andrew Brandel: The other thing I wanted to mention to this point and, I think, to Tomasson’s question also, is that there are lots of anthropologists embedded with laboratory scientists all over the world, doing research into specifically what kinds of differences, both in the ways that they’re organized and in the ways that arguments get levied, what counts as “true” or “false,” what counts as a hypothesis, how that gets determined within these different contexts. There are broad fields of social sciences doing exactly this.

Dr. Rubidium: I think this gets to the issue: Tomasson, what are you calling the scientific method? Versus, can you really at some point separate out the idea that science is a thing — like Janet was saying, it’s a machine, you put the stuff in, give it a spin, and get the stuff out — can you really separate something called “the scientific method” from the people who do it?

I’ve taught general chemistry, and one of the first things we do is to define science, which is always exciting. It’s like trying to define art.

Michael Tomasson: So what do you come up with? What is science?

Dr. Rubidium: It’s a body of knowledge and a process — it’s two different things, when people say science. We always tell students, it’s a body of knowledge but it’s also a process, a thing you can do. I’m not saying it’s [the only] good answer, but it’s the answer we give students in class.

Then, of course, the idea is, what’s the scientific method? And everyone’s got some sort of a figure. In the gen chem book, in chapter 1, it’s always going to be in there. And it makes it seem like we’ve all agreed at some point, maybe taken a vote, I don’t know, that this is what we do.

Janet Stemwedel: And you get the laminated card with the steps on it when you get your lab coat.

Dr. Rubidium: And there’s the flowchart, usually laid out like a circle.

Michael Tomasson: Exactly!

Dr. Rubidium: It’s awesome! But that’s what we tell people. It’s kind of like the lie we tell them about orbitals, like Janet was saying, in the beginning of gen chem. But then, this is how sausages are really made. And yes, we have this method, and these are the steps we say are involved with it, but are we talking about that, which is what you learn in high school or junior high or science camp or whatever, or are you actually talking about how you run your research group? Which one are you talking about?

(39:30)
Janet Stemwedel: It can get more complicated than that. There’s also this question of: is the scientific method — whatever the heck we do to build reliable knowledge about the world using science — is that the kind of thing you could do solo, or is it necessarily a process that involves interaction with other people? So, maybe we don’t need to be up at night worrying about whether individual scientists fail to instantiate this idealized scientific method as long as the whole community collectively shakes out as instantiating it.

Michael Tomasson: Hmmm.

Casey: Isn’t that part of what you get from having a lot of scientists doing it — it shakes out some of the human problems that come with it? It’s a messy process, and you have a globe full of people performing experiments, doing research. That should, to some extent, push out some of the noise. We have made advances. Science works to some degree.

Janet Stemwedel: It mostly keeps the plane up in the air when it’s supposed to be in the air, and the water from being poisoned when it’s not supposed to be poisoned. The science does a pretty good job building the knowledge. I can’t always explain why it’s so good at that, but I believe that it does. And I think you’re right, there’s something — certainly in peer review, there’s this assumption that the reason we play with others here is that they help us catch the things we’re missing, they help us to make sure the experiments really are reproducible, to make sure that we’re not smuggling in unconscious assumptions, whatever. I would argue, following on something Tomasson wrote in his blog post, that this is a good epistemic reason for some of the stuff that scientists rail on about on Twitter, about how we should try to get rid of sexism and racism and ableism and other kinds of -isms in the practice of science. It’s not just because scientists shouldn’t be jerks to people who could be helping them build the knowledge. It’s that, if you’ve got a more diverse community of people building the knowledge, you up the chances that you’re going to locate the unconscious biases that are sneaking into the story we tell about what the world is like.

When the transcript continues, we do some more musing about methodology, the frailties of individual humans when it comes to being objective, and epistemic violence.

_______

[1] This discussion is based on my reading of Pamela J. Asquith, “Japanese science and western hegemonies: primatology and the limits set to questions,” in Naked Science: Anthropological Inquiry into Boundaries, Power, and Knowledge (1996): 239-258.

* * * * *

Part 1 of the transcript.

Archived video of this Pub-Style Science episode.

Storify’d version of the simultaneous Twitter conversation.

Pub-Style Science: philosophy, hypotheses, and the scientific method.

Last week I was honored to participate in a Pub-Style Science discussion about how (if at all) philosophy can (or should) inform scientific knowledge-building. Some technical glitches notwithstanding, it was a rollicking good conversation — so much so that I have put together a transcript for those who don’t want to review the archived video.

The full transcript is long (approaching 8000 words even excising the non-substantive smack-talk), so I’ll be presenting it here in a few chunks that I’ve split more or less at points where the topic of the discussion shifted.

In places, I’ve cleaned up the grammar a bit, attempting to faithfully capture the gist of what each speaker was saying. As well, because my mom reads this blog, I’ve cleaned up some of the more colorful language. If you prefer the PG-13 version, the archived video will give you what you need.

Simultaneously with our video-linked discussion, there was a conversation on Twitter under the #pubscience hashtag. You can see that conversation Storify’d here.

____
(05:40)
Michael Tomasson: The reason I was interested in this is because I have one very naïve view and one esoteric view. My naïve view is that there is something useful about philosophy in terms of the scientific method, and when people are in my lab, I try to beat into their heads (I mean, educate them) that there’s a certain structure to how we do science, and this is a life-raft and a tool that is essential. And I guess that’s the question, whether there is some sort of essential tool kit. We talk about the scientific method. Is that a universal? I started thinking about this talking with my brother-in-law, who’s an amateur philosopher, about different theories of epistemology, and he was shocked that I would think that science had a lock on creating knowledge. But I think we do, through the scientific method.

Janet, take us to the next level. To me, from where I am, the scientific method is the key to the city of knowledge. No?

Janet Stemwedel: Well, that’s certainly a common view, and that’s a view that, in the philosophy of science class I regularly teach, we start with — that there’s something special about whatever it is scientists are doing, something special about the way they gather very careful observations of the world, and hook them together in the right logical way, and draw inferences and find patterns, that’s a reliable way to build knowledge. But at least for most of the 20th Century, what people who looked closely at this assumption in philosophy found was that it had to be more complicated than that. So you end up with folks like Sir Karl Popper pointing out that there is a problem of induction — that deductive logic will get you absolutely guaranteed conclusions if your premises are true, but inductive inference could go wrong; the future might not be like the past we’ve observed so far.

(08:00)
Michael Tomasson: I’ve got to keep the glossary attached. Deductive and inductive?

Janet Stemwedel: Sure. A deductive argument might run something like this:

All men are mortal. Socrates is a man. Therefore, Socrates is mortal.

If it’s true that all men are mortal, and that Socrates is a man, then you are guaranteed that Socrates is also going to be mortal. The form of the argument is enough to say, if the assumptions are true, then the conclusion has to be true, and you can take that to the bank.

Inductive inference is actually most of what we seem to use in drawing inferences from observations and experiments. So, let’s say you observe a whole lot of frogs, and you observe that, after some amount of time, each of the frogs that you’ve had in your possession kicks off. After a certain number of frogs have done this, you might draw the inference that all frogs are mortal. And, it seems like a pretty good inference. But, it’s possible that there are frogs not yet observed that aren’t mortal.

Inductive inference is something we use all the time. But Karl Popper said, guess what, it’s not guaranteed in the same way deductive logic is. And this is why he thought the power of the scientific method is that scientists are actually only ever concerned to find evidence against their hypotheses. The evidence against your hypotheses lets you conclude, via deductive inference, that those hypotheses are wrong, and then you cross them off. Any hypothesis where you seem to get observational support, Popper says, don’t get too excited! Keep testing it, because maybe the next test is going to be the one where you find evidence against it, and you don’t want to get screwed over by induction. Inductive reasoning is just a little too shaky to put your faith in.

(10:05)
Michael Tomasson: That’s my understanding of Karl Popper. I learned about the core of falsifying hypotheses, and that’s sort of what I teach as truth. But I’ve heard some anti-Karl Popper folks, which I don’t really quite understand.

Let me ask Isis, because I know Isis has very strong opinions about hypotheses. You had a blog post a long time ago about hypotheses. Am I putting words in your mouth to say you think hypotheses and hypothesis testing are important?

(10:40)
Dr. Isis: No, I did. That’s sort of become the running joke here: my only contribution to lab meeting is to say, wait wait wait, what was your hypothesis? I think that having hypotheses is critical, and I’m a believer, as Dr. Tomasson knows, that a hypothesis has four parts. I think that’s fundamental, framing the question, because I think that the question frames how you do your analysis. The design and the analysis fall out of the hypothesis, so I don’t understand doing science without a hypothesis.

Michael Tomasson: Let me throw it over to Andrew … You’re coming from anthropology, you’re looking at science from 30,000 feet, where maybe in anthropology it’s tough to do hypothesis-testing. So, what do you say to this claim that the hypothesis is everything?

Andrew Brandel: I would give two basic responses. One: in the social sciences, we definitely have a different relationship to hypotheses, to the scientific method, perhaps. I don’t want to represent the entire world of social and human sciences.

Michael Tomasson: Too bad!

(12:40)
Andrew Brandel: So, there’s definitely a different relationship to hypothesis-testing — we don’t have a controlled setting. This is what a lot of famous anthropologists would talk about. The other area where we might interject is, science is (in the view of some of us) one among many different ways of viewing and organizing our knowledge about the world, and not necessarily better than some other view.

Michael Tomasson: No, it’s better! Come on!

Andrew Brandel: Well, we can debate about this. This is a debate that’s been going on for a long time, but basically my position would be that we have something to learn from all the different sciences that exist in the world, and that there are lots of different logics which condition the possibility of experiencing different kinds of things. When we ask, what is the hypothesis, when Dr. Isis is saying that is crucial for the research, we would agree with you, that that is also conditioning the responses you get. That’s both what you want and part of the problem. It’s part of a culture that operates like an ideology — too close to you to come at from within it.

Janet Stemwedel: One of the things that philosophers of science started twigging to, since the late 20th Century, is that science is not working with this scientific method that’s essentially a machine that you toss observations into and you turn the crank and on the other end out comes pristine knowledge. Science is an activity done by human beings, and human beings who do science have as many biases and blindspots as human beings who don’t do science. So, recognizing some of the challenges that are built into the kind of critter we are as we try to build reliable knowledge about the world becomes crucial, and even in places where the scientist will say, look, I’m not doing (in this particular field) hypothesis-driven science, it doesn’t mean that there aren’t some hypotheses sort of behind the curtain directing the attention of the people trying to build knowledge. It just means that they haven’t bumped into enough people trying to build knowledge in the same area that have different assumptions to notice that they’re making assumptions in the first place.

(15:20)
Dr. Isis: I think that’s a crucial distinction. Is the science that you’re doing really not hypothesis-driven, or are you too lazy to write down a hypothesis?

To give an example, I’m writing a paper with this clinical fellow, and she’s great. She brought a draft, which is amazing, because I’m all about the paper right now. And in there, she wrote, we sought to observe this because to the best of our knowledge this has never been reported in the literature.

First of all, the phrase “to the best of our knowledge,” any time you write that you should just punch yourself in the throat, because if it wasn’t to the best of your knowledge, you wouldn’t be writing it. I mean, you wouldn’t be lying: “this has never been reported in the literature.” The other thing is, “this has never been reported in the literature” as the motivation to do it is a stupid reason. I told her, the frequency of the times of the week that I wear black underwear has never been reported in the literature. That doesn’t mean it should be.

Janet Stemwedel: Although, if it correlates with your experiment working or not — I have never met more superstitious people than experimentalists. If the experiment only works on the days you wear black underwear, you’re wearing black underwear until the paper is submitted, that’s how it’s going to be. Because the world is complicated!

Dr. Isis: The point is that it’s not that she didn’t have a hypothesis. It’s that pulling it out of her — it was like pulling out a tapeworm. It was a struggle. That to me is the question. Are we really doing science without a hypothesis, or are we making the story about ourselves — about what we know from the literature, what the gap in the literature is, and the motivation to do the experiment? Or are we writing, “we wanted to do this to see if this was the thing”? — in which case, I don’t find it very interesting.

Michael Tomasson: That’s an example of something that I try to teach about writing papers: not “we did this, we wanted to do that, we thought about this.” It’s not really about you.

But friend of the show Cedar Riener tweets in, aren’t the biggest science projects those least likely to have clearly hypothesis-driven experiments, like HGP, BRAIN, etc.? I think the BRAIN example is a good one. We talk about how you need hypotheses to do science, and yet here’s this very high profile thing which, as far as I can tell, doesn’t really have any hypotheses driving it.

When the transcript continues: Issues of inclusion, methodological disputes, and the possibility that “the scientific method” is actually a lie.

What is philosophy of science (and should scientists care)?

Just about 20 years ago, I abandoned a career as a physical chemist to become a philosopher of science. For most of those 20 years, people (especially scientists) have been asking me what the heck the philosophy of science is, and whether scientists have any need of it.

There are lots of things philosophers of science study, but one central set of concerns is what is distinctive about science — how science differs from other human activities, what grounds its body of knowledge, what features are essential to scientific engagement with phenomena, etc. This means philosophers of science have spent a good bit of time trying to find the line between science and non-science, trying to figure out the logic with which scientific claims are grounded, working to understand the relation between theory and empirical data, and working out the common thread that unites many disparate scientific fields — assuming such a common thread exists.*

If you like, you can think of this set of philosophical projects as trying to give an account of what science is trying to do — how science attempts to construct a picture of the world that is accountable to the world in a particular way, how that picture of the world develops and changes in response to further empirical information (among other factors), and what kind of explanations can be given for the success of scientific accounts (insofar as they have been successful). Frequently, the philosopher is concerned with “Science” rather than a particular field of science. As well, some philosophers are more concerned with an idealized picture of science as an optimally rational knowledge building activity — something they will emphasize is quite different from science as actually practiced.**

Practicing scientists pretty much want to know how to attack questions in their particular field of science. If your goal is to understand the digestive system of some exotic bug, you may have no use at all for a subtle account of scientific theory change, let alone for a firm stand on the question of scientific anti-realism. You have much more use for information about how to catch the bug, how to get to its digestive system, what sorts of things you could observe, measure, or manipulate that could give you useful information about its digestive system, how to collect good data, how to tell when you’ve collected enough data to draw useful conclusions, appropriate methods for processing the data and drawing conclusions, and so forth.

A philosophy of science course doesn’t hand the entomologist any of those practical tools for studying the scientific problems around the bug’s digestive system. But philosophy of science is aimed at answering different questions than the working scientist is trying to answer. The goal of philosophy of science is not to answer scientific questions, but to answer questions about science.***

Does a working scientist need to have learned philosophy of science in order to get the scientific job done? Probably not. Neither does a scientist need to have studied Shakespeare or history to be a good scientist — but these still might be worthwhile endeavors for the scientist as a person. Every now and then it’s nice to be able to think about something besides your day job. (Recreational thinking can be fun!)

Now, there are some folks who will argue that studying philosophy of science could be detrimental to the practicing scientist. Reading Kuhn’s Structure of Scientific Revolutions with its claim that shifts in scientific paradigm have an inescapable subjective component, or even Popper’s view of the scientific method that’s meant to get around the problem of induction, might blow the young scientist’s mind and convince him that the goal of objective knowledge is unattainable. This would probably undermine his efforts to build objective knowledge in the lab.

(However, I’d argue that reading Helen Longino’s account of how we build objective knowledge — another philosophical account — might answer some of the worries raised by Popper, Kuhn, and that crowd, making the young scientist’s knowledge-building endeavors seem more promising.)

My graduate advisor in chemistry had a little story he told that was supposed to illustrate the dangers for scientists of falling in with the philosophers and historians and sociologists of science: A centipede is doing a beautiful and complicated dance. An ant walks up to the centipede and says, “That dance is lovely! How do you coordinate all your feet so perfectly to do it?” The centipede pauses to think about this and eventually replies, “I don’t know.” Then the centipede watches his feet and tries to do the dance again — and can’t!

The centipede could do the dance without knowing precisely how each foot was supposed to move relative to the others. A scientist can do science while taking the methodology of her field for granted. But having to give a philosophical account of or a justification for that methodology deeper than “this is what we do and it works pretty well for the problems we want to solve” may render that methodology strange looking and hard to keep using.

Then again, I’m told what Einstein did for physics had as much to do with proposing a (philosophical) reorganization of the theoretical territory as it did with new empirical data. So perhaps the odd scientist can put some philosophical training to good scientific use.

_____
This post is an updated version of an ancestor post on my other blog, and was prompted by the Pub-Style Science discussion of epistemology scheduled for Tuesday, April 8, 2014 (starting 9 PM EDT/6 PM PDT). Watch the hashtag #pubscience for more details.

_____
*I take it that one can identify “science” by enumerating the fields included in the category (biology, chemistry, physics, astronomy, geology, …) and then pose the question of what commonalities (if any) these examples of scientific fields have with no risk of circularity. Especially since we’re leaving it to the scientists to tell us what the sciences are. It’s quite possible that the sciences won’t end up having a common core — that there won’t be any there there.

**For the record, I find science-as-actually-practiced — in particular scientific fields, rather than generalized as “Science” — more philosophically interesting than the idealized stuff. But, as one of my labmates in graduate school used to put it, “One person’s ‘whoop-de-doo’ is another person’s life’s work.”

***Really, to answer philosophical questions about science, since historians and sociologists and anthropologists also try to answer questions about science.

Reflections on being part of a science blogging network.

This is another post following up on a session at ScienceOnline Together 2014, this one called Blog Networks: Benefits, Role of, Next Steps, and moderated by Scientific American Blogs Editor Curtis Brainard. You should also read David Zaslavsky’s summary of the session and what people were tweeting on the session hashtag, #scioBlogNet.

My own thoughts are shaped by writing an independent science blog that less than a year later became part of one of the first “pro” science blogging networks when it launched in January 2006, moving my blog from that network to a brand new science blogging community in August 2010, and keeping that blog going while starting Doing Good Science here on the Scientific American Blog Network when it launched in July 2011. This is to say, I’ve been blogging in the context of science blogging networks for a long time, and have seen the view from a few different vantage points.

That said, my view is also very particular and likely peculiar — for example, I’m a professional philosopher (albeit one with a misspent scientific youth) blogging about science while trying to hold down a day-job as a professor in a public university during a time of state budget terror and to maintain a reasonable semblance of family life. My blogging is certainly more than a hobby — in many ways it provides vital connective tissue that helps knit together my weirdly interdisciplinary professional self into a coherent whole (and has thus been evaluated as a professional activity for the day-job) — but, despite the fact that I’m a “pro” who gets paid to blog here, it’s not something I could live on.

In my experience, a science blogging network can be a great place to get visibility and to build an audience. This can be especially useful early in one’s blogging career, since it’s a big, crowded blogosphere out there. Networks can also be handy for readers, since they deliver more variety and more of a regular flow of posts than most individual bloggers can do (especially when we’re under the weather and/or catching up on grading backlogs). It’s worth noting, though, that very large blog networks can provide a regular flow of content that frequently resembles a firehose. Some blog networks provide curation in the form of featured content or topical feeds. Many provide something like quality control, although sometimes it’s exercised primarily in the determination of who will blog in the network.

Blog networks can also have a distinctive look and feel, embodied in shared design elements, or in an atmosphere set within the commenting community, for example. Bloggers within blog networks may have an easier time finding opportunities for productive cross-pollination or coordination of efforts with their network neighbors, whether to raise political awareness or philanthropic dollars or simply to contribute many distinctive perspectives to the discussion of a particular topic. Bloggers sharing networks can also become friends (although sometimes, being humans, they develop antagonisms instead).

On a science blogging network, bloggers seem also to regularly encounter the question of what counts as a proper “science blog” — about whose content is science-y enough, and what exactly that should mean. This kind of policing of boundaries happens even here.

While the confluence of different people blogging on similar terrain can open up lots of opportunities for collaboration, there are moments when the business of running a blog network (at least when that blog network is a commercial enterprise) can be in tension with what the bloggers value about blogging in the network. Sometimes the people running the network aren’t the same as the people writing the blogs, and they end up having very different visions, interests, pressing needs, and understandings of their relationships to each other.

Sometimes bloggers and networks grow apart and can’t give each other what they need for the relationship to continue to be worthwhile going forward.

And, while blogging networks can be handy, there are other ways that online communicators and consumers of information can find each other and coordinate their efforts online. Twitter has seen the rise of tremendously productive conversations around hashtags like #scistuchat and #BlackandSTEM, and undoubtedly similarly productive conversations among science-y folk regularly coalesce on Facebook and Tumblr and in Google Hangouts. Some of these online interactions lead to face-to-face collaborations like the DIY Science Zone at GeekGirlCon and conference proposals made to traditional professional societies that get their start in online conversations.

Networks can be nice. They can even help people transition from blogging into careers in science writing and outreach. But even before blog networks, awesome people managed to find each other and to come up with awesome projects to do together. Networks can lower the activation energy for this, but there are other ways to catalyze these collaborations, too.

Brief thoughts on uncertainty.

For context, these thoughts follow upon a very good session at ScienceOnline Together 2014 on “How to communicate uncertainty with the brevity that online communication requires.” Two of the participants in the session used Storify to collect tweets of the discussion (here and here).

About a month later, this does less to answer the question of the session title than to give you a peek into my thoughts about science and uncertainty. This may be what you’ve come to expect of me.

Humans are uncomfortable with uncertainty, at least in those moments when we notice it and where we have to make decisions that have more than entertainment value riding on them. We’d rather have certainty, since that makes it easier to enact plans that won’t be thwarted.

Science is (probably) a response to our desire for more certainty. Finding natural explanations for natural phenomena, stable patterns in our experience, gives us a handle on our world and what we can expect from it that’s less capricious than “the gods are in a mood today.”

But the scientific method isn’t magic. It’s a tool that cranks out explanations of what’s happened, predictions of what’s coming up, based on observations made by humans with our fallible human senses.

The fallibility of those human senses (plus things like the trickiness of being certain you’re awake and not dreaming) was (probably) what drove philosopher René Descartes in his famous Meditations, the work that yielded the conclusion “I think, therefore I am” and that featured not one but two proofs of the existence of a God who is not a deceiver. Descartes was not pursuing a theological project here. Rather, he was trying to explain how empirical science — science relying on all kinds of observations made by fallible humans with their fallible senses — could possibly build reliable knowledge. Trying to put empirical science on firm foundations, he engaged in his “method of doubt” to locate some solid place to stand, some thing that could not be doubted. That something was “I think, therefore I am” — in other words, if I’m here doubting that my experience is reliable, that I’m awake instead of dreaming, that I’m a human being rather than a brain in a vat, I can at least be sure that there exists a thinking thing that’s doing the doubting.

From this fact that could not be doubted, Descartes tried to climb back out of that pit of doubt and to work out the extent to which we could trust our senses (and the ways in which our senses were likely to mislead us). This involved those two proofs of the existence of a God who is not a deceiver, plus a whole complicated story of minds and brains communicating with each other (via the wiggling of our pineal glands) — which is to say, it was not entirely persuasive. Still, it was all in the service of getting us more certainty from our empirical science.

Certainty and its limits are at the heart of another piece of philosophy, “the problem of induction,” this one most closely associated with David Hume. The problem here rests on our basic inability to be certain that what we have so far observed of our world will be a reliable guide to what we haven’t observed yet, that the future will be like the past. Observing a hundred, or a thousand, or a million ravens that are black is not enough for us to conclude with absolute certainty that the ravens we haven’t yet observed must also be black. Just because the sun rose today, and yesterday, and every day through recorded human history to date does not guarantee that it will rise tomorrow.

But while Hume pointed out the limits of what we could conclude with certainty from our observations at any given moment — limits which impelled Karl Popper to assert that the scientific attitude was one of trying to prove hypotheses false rather than seeking support for them — he also acknowledged our almost irresistible inclination to believe that the future will be like the past, that the patterns of our experience so far will be repeated in the parts of the world still waiting for us to experience them. Logic can’t guarantee these patterns will persist, but our expectations (especially in cases where we have oodles of very consistent observations) feel like certainty.

Scientists are trained to recognize the limits of their certainty when they draw conclusions, offer explanations, make predictions. They are officially on the hook to acknowledge their knowledge claims as tentative, likely to be updated in the light of further information.

This care in acknowledging the limits of what careful observation and logical inference guarantee us can make it appear to people who don’t obsess over uncertainties in everyday life that scientists don’t know what’s going on. But the existence of some amount of uncertainty does not mean we have no idea what’s going on, no clue what’s likely to happen next.

What non-scientists who dismiss scientific knowledge claims on the basis of acknowledged uncertainty forget is that making decisions in the face of uncertainty is the human condition. We do it all the time. If we didn’t, we’d make no decisions at all (or else we’d be living a sustained lie about how clearly we see into our future).

Strangely, though, we seem to have a hard time reconciling our everyday pragmatism about everyday uncertainty with our suspicion about the uncertainties scientists flag in the knowledge they share with us. Maybe we’re making the jump from viewing scientific knowledge as reliable to demanding that it be perfect. Or maybe we’re just not very reflective about how easily we navigate uncertainty in our everyday decision-making.

I see this firsthand when my “Ethics in Science” students grapple with ethics case studies. At first they are freaked out by the missing details, the less-than-perfect information about what will happen if the protagonist does X or if she does Y instead. How can we make good decisions about what the protagonist should do if we can’t be certain about those potential outcomes?

My answer to them: The same way we do in real life, whose future we can’t see with any more certainty.

When there’s more riding on our decisions, we’re more likely to notice the gaps in the information that informs those decisions, the uncertainty inherent in the outcomes that will follow on what we decide. But we never have perfect information, and neither do scientists. That doesn’t mean our decision-making is hopeless, just that we need to get comfortable making do with the certainty we have.

Engagement with science needs more than heroes

Narratives about the heroic scientist are not what got me interested in science.

It was (and still is) hard for me to connect with a larger-than-life figure when my own aspirations have always been pretty life-sized.

Also, there’s the fact that the scientific heroes whose stories have been told have mostly been heroes, not heroines, just one more issue making it harder for me to relate to their experiences. And when the stories of pioneering women of science are told, these stories frequently emphasize how these heroines made it against big odds, how exceptional they are. Having to be exceptional even to succeed in scientific work is not a prospect I find inviting.

While tales of great scientific pioneers never did much for me, I am enraptured with science. The hook that drew me in is the process of knowledge-building, the ways in which framing questions and engaging in logical thinking and methodical observation of a piece of the world can help us learn quite unexpected things about that world’s workings. I am intrigued by the power of this process, by the ways that it frequently rewards insight and patience.

What I didn’t really grasp when I was younger but appreciate now is the inescapably collaborative nature of the process of building scientific knowledge. The plan of attack, the observations, the troubleshooting, the evaluation of what the results do and do not show — that all comes down to teamwork of one sort or another, the product of many hands, many eyes, many brains, many voices.

We take our perfectly human capacities as individuals and bring them into concert to create a depth of understanding of our world that no heroic scientist — no Newton, no Darwin, no Einstein — could achieve on his own.

The power of science lies not in individual genius but in a method of coordinating our efforts. This is what makes me interested in what science can do — what makes it possible for me to see myself doing science. And I’m willing to bet I’m not the only one.

The heroes of science are doubtless plenty inspiring to a good segment of the population, and given the popularity of heroic narratives, I doubt they’ll disappear. But in our efforts to get people engaged with science, we shouldn’t forget the people who connect less with great men (and women) and more with the extraordinarily powerful process of science conducted by recognizably ordinary human beings. We should remember to tell the stories about the process, not just the heroes.

Incoherent ethical claims that give philosophers a bad rap

Every now and then, in the course of a broader discussion, some philosopher will make a claim that is rightly disputed by non-philosophers. Generally, this is no big deal — philosophers have just as much capacity to be wrong as other humans. But sometimes, the philosopher’s claim, delivered with an air of authority, is not only a problem in itself but also manages to convey a wrong impression about the relation between the philosophers and non-philosophers sharing a world.

I’m going to examine the general form of one such ethical claim. If you’re interested in the specific claim, you’re invited to follow the links above. We will not be discussing the specific claim here, nor the larger debate of which it is a part.

Claim: To decide to do X is always (or, at least, should always be) a very difficult and emotional step, precisely because it has significant ethical consequences.

Let’s break that down.

“Doing X has significant ethical consequences” suggests a consequentialist view of ethics, in which doing the right thing is a matter of making sure the net good consequences (for everyone affected, whether you describe them in terms of “happiness” or something else) outweigh the net bad consequences.

To say that doing X has significant ethical consequences is then to assert that (at least in the circumstances) doing X will make a significant contribution to the happiness or unhappiness being weighed.

In the original claim, the suggestion is that the contribution of doing X to the balance of good and bad consequences is negative (or perhaps that it is negative in many circumstances), and that on this account it ought to be a “difficult and emotional step”. But does this requirement make sense?

In the circumstances in which doing X shifts the balance of good and bad consequences to a net negative, the consequentialist will say you shouldn’t do X — and this will be true regardless of your emotions. Feeling negative emotions as you are deciding to do X will add more negative consequences, but they are not necessary: a calculation of the consequences of doing X versus not doing X will still rule out doing X as an ethical option even if you have no emotions associated with it at all.

On the other hand, in the circumstances in which doing X shifts the balance of good and bad consequences to a net positive, the consequentialist will say you should do X — again, regardless of your emotions. Here, too, feeling negative emotions as you are deciding to do X will add more negative consequences. If these negative emotions are strong enough, they run the risk of reducing the net positive consequences — which makes it a weird claim that one should feel negative emotions (an implication pretty clearly built into the assertion that the decision to do X should be difficult), since those emotions would serve only to diminish the net good of doing something that, in the circumstances, produces net good consequences.

By the way, this also suggests, perhaps perversely, a way that strong emotions could become a problem in circumstances in which doing X would otherwise clearly bring more negative consequences than positive ones: if the person contemplating doing X were to get a lot of happiness from doing X.

Now, maybe the idea is that negative feelings associated with the prospect of doing X are supposed to act as a brake if doing X frequently leads to more bad consequences than good ones. But I think we have to recognize feelings as consequences — as something we need to take into account in the consequentialist calculus with which we evaluate whether doing X here is ethical or not. And that makes the claim that the feelings ought always to be negative, regardless of other features of the situation that make doing X the right thing, puzzling.
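To make the bookkeeping concrete, here is a minimal sketch of the consequentialist tally being described, with the agent's feelings counted as consequences alongside everything else. The scenario and all the utility numbers are invented purely for illustration:

```python
# Toy consequentialist calculus: every consequence, including the
# agent's own feelings, contributes a signed utility to the tally.
# All values below are invented for illustration.

def net_consequences(outcomes):
    """Sum the utilities of all consequences of an action."""
    return sum(outcomes.values())

# Suppose doing X produces net good consequences for others.
do_x = {"benefit to others": 10, "minor harm": -3}

# The claim under discussion insists the agent must also feel bad
# about deciding to do X -- another (negative) consequence.
do_x_with_required_distress = dict(do_x)
do_x_with_required_distress["agent's distress"] = -4

print(net_consequences(do_x))                        # 7
print(net_consequences(do_x_with_required_distress)) # 3
```

On this accounting, the required negative feelings only shrink the net good of an action that was already the right one in the circumstances, which is exactly the puzzle the paragraph above points out.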

You could avoid worries about weighing feelings as consequences by shifting from a consequentialist ethical framework to something else, but I don’t think that’s going to be much help here.

Kantian ethics, for example, won’t pin the ethics of doing X to the net consequences, but instead it will come down to something like whether it is your duty to do X (where your duty is to respect the rational capacity in yourself and in others, to treat people as ends in themselves rather than as mere means). Your feelings are no part of what a Kantian would consider in judging whether your action is ethical or not. Indeed, Kantians stress that ethical acts are motivated by recognizing your duty precisely because feelings can be a distraction from behaving as we should.

Virtue ethicists, on the other hand, do talk about the agent’s feelings as ethically relevant. Virtuous people take pleasure in doing the right things and feel pain at the prospect of doing the wrong thing. However, if doing X is right under the circumstances, virtuous people will feel good about doing X, not conflicted about it — so the claim that doing X should always be difficult and emotional doesn’t make much sense here. Moreover, virtue ethicists describe the process of becoming virtuous as one where behaving in virtuous ways usually precedes developing emotional dispositions to feel pleasure from acting virtuously.

Long story short, it’s hard to make sense of the claim “To decide to do X is always (or, at least, should always be) a very difficult and emotional step, precisely because it has significant ethical consequences” — unless really what is being claimed is that doing X is always unethical and you should always feel bad for doing X. If that’s the claim, though, emotions are pretty secondary.

But beyond the incoherence of the claim, here’s what really bugs me about it: It seems to assert that ethicists (and philosophers more generally) are in the business of telling people how to feel. That, my friends, is nonsense. Indeed, I’m on record prioritizing changes in unethical behavior over any interference with what’s in people’s hearts. How we behave, after all, has much more impact on our success in sharing a world with each other than how we feel.

This is not to say that I don’t recognize a likely connection between what’s in people’s hearts and how they behave. For example, I’m willing to bet that improvements in our capacity for empathy would likely lead to more ethical behavior.

But telling people they should generally feel bad for making a choice that is, under the circumstances, an ethical one is hard to see as empathetic. If anything, requiring such negative emotions is a failure of empathy, and punitive to boot.

Clearly, there exist ethicists and philosophers who operate this way, but many of us try to do better. Indeed, it’s reasonable for you all to expect and demand that we do better.

Soothing jellies

One day into ScienceOnline Together 2014, my head is full of ideas and questions and hunches that weren’t there a day ago.

I’ll be posting about some of them after I’ve had some time to digest them. In the meantime, I’m looking at pictures of jellies I snapped on a recent trip to the Monterey Bay Aquarium.

In addition to being pretty interesting animals, I find them very relaxing to look at. Which is nice.

Warty comb jellies

Sea nettles

Moon jellies