Mentoring new scientists in the space between how things are and how things ought to be.

Scientists mentoring trainees often work very hard to help their trainees grasp what they need to know not only to build new knowledge, but also to succeed in the context of a career landscape where score is kept and scarce resources are distributed on the basis of scorekeeping. Many focus their protégés’ attention on the project of understanding the current landscape, noticing where score is being kept, working the system to their best advantage.

But is teaching protégés how to succeed as a scientist in the current structural social arrangements enough?

It might be enough if you’re committed to the idea that the system as it is right now is perfectly optimized for scientific knowledge-building, and for scientific knowledge-builders (and if you view all the science PhDs who can’t find permanent jobs in the research careers they’d like to have as acceptable losses). But I’d suggest that mentors can do better by their protégés.

For one thing, even if current conditions were optimal, they might well change due to influences from outside the community of knowledge-builders, as when funding levels shift at universities or at funding agencies. Expecting that the landscape will be stable over the course of a career is risky.

For another thing, it seems risky to take as given that this is the best of all possible worlds, or of all possible bundles of practices around research, communication of results, funding of research, and working conditions for scientists. Research on scientists suggests that they themselves recognize the ways in which the current system and its scorekeeping provide perverse incentives that may undercut the project of building reliable knowledge about the world. As well, the competition for scarce resources can result in a “science red in tooth and claw” dynamic that, at best, leads to the rational calculation that knowledge-builders ought to work more hours and partake of fewer off-the-clock “distractions” (like family, or even nice weather) in order not to fall behind.

Just because the scientific career landscape manifests in the particular way it does right now doesn’t mean that it must always be this way. As the body of reliable knowledge about the world is perpetually under construction, we should be able to recognize the systems and social arrangements in which scientists work as subject to modification, not carved into granite.

Restricting your focus as a mentor to imparting strategies for success given how things are may also convey to your protégés that this is the way things will always be — or that this is the way things should always be. I hope we can do better than that.

It can be a challenge to mentor with an eye to a set of conditions that don’t currently exist. Doing so involves imagining other ways of doing things. Doing it as more than a thought experiment also involves coordinating efforts with others — not just with trainees, but with established members of the professional community who have a bit more weight to throw around — to see what changes can be made and how, given the conditions you’re starting from. It may also require facing pushback from colleagues who are fine with the status quo (since it has worked well for them).

Indeed, mentoring with an eye to creating better conditions for knowledge-building and for knowledge-builders may mean agitating for changes that will primarily benefit future generations of your professional community, not your own.

But mentoring someone, welcoming them into your professional community and equipping them to be a full member of it, is not primarily about you. It is something that you do for the benefit of your protégé, and for the benefit of the professional community they are joining. Equipping your protégé for how things are is a good first step. Even better is encouraging them to imagine, to bring about, and to thrive in conditions that are better for your shared pursuit.

Grappling with the angry-making history of human subjects research, because we need to.

Teaching about the history of scientific research with human subjects bums me out.

Indeed, I get fairly regular indications from students in my “Ethics in Science” course that reading about and discussing the Nazi medical experiments and the U.S. Public Health Service’s Tuskegee syphilis experiment leaves them feeling grumpy, too.

Their grumpiness varies a bit depending on how they see themselves in relation to the researchers whose ethical transgressions are being inspected. Some of the science majors who identify strongly with the research community seem to get a little defensive, pressing me to see if these two big awful examples of human subjects research aren’t clear anomalies, the work of obvious monsters. (This is one reason I generally point out that, when it comes to historical examples of ethically problematic research with human subjects, the bench is deep: the U.S. government’s syphilis experiments in Guatemala, the MIT Radioactivity Center’s studies on kids with mental disabilities in a residential school, the harms done by scientists working with HeLa cells to Henrietta Lacks and to the family members who survived her, the National Cancer Institute- and Gates Foundation-funded studies of cervical cancer screening in India — to name just a few.) Some of the non-science majors in the class seem to look at their classmates who are science majors with a bit of suspicion.

Although I’ve been covering this material with my students since Spring of 2003, it was only a few years ago that I noticed that there was a strong correlation between my really bad mood and the point in the semester when we were covering the history of human subjects research. Indeed, I’ve come to realize that this is no mere correlation but a causal connection.

The harm that researchers have done to human subjects in order to build scientific knowledge in many of these historically notable cases makes me deeply unhappy. These cases involve scientists losing their ethical bearings and then defending indefensible actions as having been all in the service of science. It leaves me grumpy about the scientific community of which these researchers were a part (a community that did not obviously mark them as monsters or rogues). It leaves me grumpy about humanity.

In other contexts, my grumpiness might be no big deal to anyone but me. But in the context of my “Ethics in Science” course, I need to keep pessimism on a short leash. It’s kind of pointless to talk about what we ought to do if you’re feeling like people are going to be as evil as they can get away with being.

It’s important to talk about the Nazi doctors and the Tuskegee syphilis experiment so my students can see where formal statements about ethical constraints on human subject research (in particular, the Nuremberg Code and the Belmont Report) come from, what actual (rather than imagined) harms they are reactions to. To the extent that official rules and regulations are driven by very bad situations that the scientific community or the larger human community want to avoid repeating, history matters.

History also matters if scientists want to understand the attitudes of publics towards scientists in general and towards scientists conducting research with human subjects in particular. Newly-minted researchers who would never even dream of crossing the ethical lines the Nazi doctors or the Tuskegee syphilis researchers crossed may feel it deeply unfair that potential human subjects don’t default to trusting them. But that’s not how trust works. Ignoring the history of human subjects research means ignoring very real harms and violations of trust that have not faded from the collective memories of the populations that were harmed. Insisting that it’s not fair doesn’t magically earn scientists trust.

Grappling with that history, though, might help scientists repair trust and ensure that the research they conduct is actually worthy of trust.

It’s history that lets us start noticing patterns in the instances where human subjects research took a turn for the unethical. Frequently we see researchers working with human subjects whom they don’t see as fully human, or whose humanity seems less important than the piece of knowledge the researchers have decided to build. Or we see researchers who believe they are approaching questions “from the standpoint of pure science,” overestimating their own objectivity and good judgment.

This kind of behavior does not endear scientists to publics. Nor does it help researchers develop appropriate epistemic humility, a recognition that their objectivity is not an individual trait but rather a collective achievement of scientists engaging seriously with each other as they engage with the world they are trying to know. Nor does it help them build empathy.

I teach about the history of human subjects research because it is important to understand where the distrust between scientists and publics has come from. I teach about this history because it is crucial to understanding where current rules and regulations come from.

I teach about this history because I fully believe that scientists can — and must — do better.

And, because the ethical failings of past human subjects research were hardly ever the fault of monsters, we ought to grapple with this history so we can identify the places where individual human weaknesses, biases, and blind spots are likely to lead to ethical problems down the road. We need to build systems and social mechanisms to be accountable to human subjects (and to publics), to prioritize their interests, and never to lose sight of their humanity.

We can — and must — do better. But this requires that we seriously examine the ways that scientists have fallen short — even the ways that they have done evil. We owe it to future human subjects of research to learn from the ways scientists have failed past human subjects, to apply these lessons, to build something better.

Doing science is more than building knowledge: on professional development in graduate training.

Earlier this week, I was pleased to be an invited speaker at UC Berkeley’s Science Leadership and Management (SLAM) seminar series. Here’s the official description of the program:

What is SLAM?

Grad school is a great place to gain scientific expertise – but that’s hardly the only thing you’ll need in your future as a PhD. Are you ready to lead a group? Manage your coworkers? Mentor budding scientists? To address the many interpersonal issues that arise in a scientific workplace, grad students from Chemistry, Physics, and MCB founded SLAM: Science Leadership and Management.

This is a seminar series focused on understanding the many interpersonal interactions critical for success in a scientific lab, as well as some practical aspects of lab management.  The target audience for this course is upper-level science graduate students with broad interests and backgrounds, and the skills discussed will be applicable to a variety of career paths. Postdocs are also welcome to attend.

Let me say for the record that I think programs like this are tremendously important, and far too few universities with Ph.D. programs have anything like them. (Stanford has offered something similar, although more explicitly focused on career trajectories in academia, in its Future Faculty Seminar.)

In their standard configuration, graduate programs can do quite a lot to help you learn how to build new knowledge in your discipline. Mostly, you master this ability by spending years working, under the supervision of your graduate advisor, to build new knowledge in your discipline. The details of this apprenticeship vary widely, owing largely to differences in advisors’ approaches: some are very hands-on mentors, others more hands-off, some inclined towards very specific task-lists for the scientific trainees in their labs, others towards letting trainees figure out their own plans of attack or even their own projects. The promise that Ph.D. training holds out, though, is that at the end of the apprenticeship you will have the skills and capacities to go forth and build more knowledge in your field.

The challenge is that most of this knowledge-building will take place in employment contexts that expect the knowledge-builders will have other relevant skills, as well. These may include mounting collaborations, or training others, or teaching, or writing for an audience of non-experts, not to mention working effectively with others (in the lab, on committees, in other contexts) and making good ethical decisions.

To the extent that graduate training focuses solely on learning how to be a knowledge-builder, it often falls down on the job of providing reasonable professional development. This is true even in the realm of teaching, where graduate students usually gain some experience as teaching assistants but they hardly ever get any training in pedagogy.

The graduate students who organize the SLAM program at Berkeley impress me as a smart, vibrant bunch, and they have a supportive faculty advisor. But it’s striking to me that such efforts at serious professional development for grad students are usually spearheaded by grad students, rather than by the grown-up members of their departments training them to be competent knowledge-builders.

One wonders if this is because it just doesn’t occur to the grown-up members of these disciplines that such professional development could be helpful to their trainees — or because graduate programs don’t feel like they owe their graduate students professional development of this sort.

If the latter, that says something about how graduate programs see their relationship with their students, especially in scientific fields. If all you are transmitting to students is how to build new knowledge, rather than attending to other skills they will need to successfully apply their knowledge-building chops in a career after graduate school, it’s hard not to suspect that the relationship is really one that’s all about providing relatively cheap knowledge-building labor for grad school faculty.

Apprenticeships need not be that exploitative.

Indeed, if graduate programs want to compete for the best grad-school-bound undergraduates or prospective students who have done something else in the interval since their undergraduate education, offering serious professional development could help them distinguish themselves from other programs. The trick here is that trainees would need to recognize, as they’re applying to graduate programs, that professional development is something they deserve. Whoever is mentoring them and providing advice on how to choose a graduate program should at least put the issue of professional development on the radar.

If you are someone who fits that description, I hope I have just put professional development on your radar.

On the value of empathy, not othering.

Could seeing the world through the eyes of the scientist who behaves unethically be a valuable tool for those trying to behave ethically?

Last semester, I asked my “Ethics in Science” students to review an online ethics training module of the sort that many institutions use to address responsible conduct of research with their students and employees. Many of my students elected to review the Office of Research Integrity’s interactive movie The Lab, which takes you through a “choose your own adventure” scenario in an academic lab as one of four characters (a graduate student, a postdoc, the principal investigator, or the institution’s research integrity officer). The scenario centers on research misconduct by another member of the lab, and your goal is to do what you can to address the problems — and to avoid being drawn into committing misconduct yourself.

By and large, my students reported that “The Lab” was a worthwhile activity. As part of the assignment, I asked them to suggest changes, and a number of them made what I thought was a striking suggestion: players should have the option to play the character who commits the misconduct.

I can imagine some eminently sensible reasons why the team that produced “The Lab” didn’t include the cheater as a playable character. For instance, if the scenario were to start before the decision to cheat and the user playing this character picks the options that amount to not cheating, you end up with a story that lacks almost all of the drama. Similarly, if you pick up with that character in the immediate aftermath of the instance of cheating and go with the “come clean/don’t dig a deeper hole” options, the story ends pretty quickly.

Setting the need for dramatic tension aside, I suspect that another reason that “The Lab” doesn’t include the cheater as a playable character is that people who are undergoing research ethics training are supposed to think of themselves as people who would not cheat. Rather, they’re supposed to think of themselves as ethical folks who would resist temptation and stand up to cheating when others do it. These training exercises bring out some of the particular challenges that might be associated with making good ethical decisions (many of them connected to seeing a bit further down the causal chain to anticipate the likely consequences of your choices), but they tend to position the cheater as just part of the environment to which the ethical researcher must respond.

I think this is a mistake. I think there may be something valuable in being able to view those who commit misconduct as more than mere antagonists or monsters.

Part of what makes “The Lab” a useful exercise is that it presents situations with a number of choices available to us, some easier and some harder, some likely to lead to interactions that are more honest and fair and others more likely to lead to problems. In real life, though, we don’t usually have the option of rewinding time and choosing a different option if our first choice goes badly. Nor do we have assurance that we’ll end up being the good guys.

It’s important to understand the temptations that the cheaters felt — the circumstances that made their unethical behaviors seem expedient, or rational, or necessary. Casting cheaters as monsters glosses over our own human vulnerability to these bad choices, which will surely make the temptations harder to handle when we encounter them. Moreover, understanding the cheaters as humans (just like the scientists who haven’t cheated) rather than “other” in some fundamental way lets us examine those temptations and then collectively create working environments with fewer of them. Though it’s part of a different discussion, Ashe Dryden describes the dangers of “othering” here quite well:

There is no critical discussion about what leads to these incidents — what parts of our culture allow these things to go unchecked for so long, how pervasive they are, and how so much of this is rewarded directly or indirectly. …

It’s important to notice what is happening here: by declaring that the people doing these things are others, it removes the need to examine our own actions. The logic assumed is that only bad people do these things and we aren’t bad people, so we couldn’t do something like this. Othering effectively absolves ourselves of any blame.

The dramatic arc of “The Lab” is definitely not centered on the cheater’s redemption, nor on cultivating empathy for him, and in the context of the particular training it offers, that’s fine. Sometimes one’s first priority is protecting or repairing the integrity of the scientific record, or ensuring a well-functioning scientific community by isolating a member who has proven himself untrustworthy.

But, that member of the community whom we’re isolating, or rehabilitating, is connected to the community — connected to us — in complicated ways. Misconduct doesn’t just happen, but neither is it the case that, when someone commits it, it’s just a matter of the choices and actions of an individual in a vacuum.

The community is participating in creating the environment in which people commit misconduct. Trying to understand the ways in which behaviors, expectations, formal and informal reward systems, and the like can encourage big ethical transgressions or desensitize people to “little” lapses may be a crucial step to creating an environment where fewer people commit misconduct, whether because the cost of doing so is too high or the payoff for doing so (if you get away with it) is too low.

But seeing members of the community as connected in this way requires not seeing the research environment as static and unchangeable — and not seeing those in the community who commit misconduct as fundamentally different creatures from those who do not.

All of this makes me think that part of the voluntary exclusion deals between people who have committed misconduct and the ORI should be an allocution, in which the wrongdoer spells out the precise circumstances of the misconduct, including the pressures in the foreground when the wrongdoer chose the unethical course. This would not be an excuse but an explanation, a post-mortem of the misconduct available to the community for inspection and instruction. Ideally, others might recognize familiar situations in the allocution and then consider how close their own behavior in such situations has come to crossing ethical lines, as well as what factors seemed to help them avoid crossing those lines. As well, researchers could think together about what gives rise to the situations and the temptations within them and explore whether common practices can be tweaked to remove some of the temptations while supporting knowledge-building and knowledge builders.

Casting cheaters as monsters doesn’t do much to help people make good choices in the face of difficult circumstances. Ignoring the ways we contribute to creating those circumstances doesn’t help, either — and may even increase the risk that we’ll become like the “monsters” we decry.

Resistance to ethics instruction: the intuition that ethics cannot be taught.

In my last post, I suggested that required ethics coursework (especially for students in STEM* disciplines) is met with a specific sort of resistance. I also surmised that part of this resistance is the idea that ethics can’t be taught in any useful way, “the idea that being ethical is somehow innate, a mere matter of not being evil.”

In a comment on that post, ThomasB nicely illustrates that particular strain of resistance:

Certainly scientists, like everyone else in our society, must behave ethically. But what makes this a college-level class? From the description, it covers the basic do not lie-cheat-steal along with some anti-bullying and possibly a reminder to cite one’s references. All of which should have been instilled long before college.

So what is there to teach at this point? The only thing I can think of specific to science is the “publish or perish” pressure to keep the research dollars flowing in. Or possibly the psychological studies showing that highly intelligent and creative people are more inclined to be dishonest than ordinary people. Possibly because they are better at rationalizing doing what they want to do. Which is why I used the word “instilled” earlier: it seems to me that ethics comes more from the emotional centers of the brain than the conscious analytical part. As soon as we start consciously thinking about ethics, they seem to go out the window. Such as the study from one of the Ivy League schools where the students did worse at the ethics test at the end of the class than at the beginning.

So I guess the bottom line is whether the science shows that ethics classes at this point in a person’s life actually show an improvement in the person’s behavior. As far as I know, there has been no such study done.

(Bold emphasis added.)

I think it’s reasonable to ask, before requiring an intervention (like ethics coursework), what we know about whether this sort of intervention is likely to work. I think it’s less reasonable to assume it won’t work without consulting the research on the matter.

As it happens, there has been a great deal of research on whether ethics instruction is an intervention that helps people behave more ethically — and the bulk of it shows that well-designed ethics instruction is an effective intervention.

Here’s what Bebeau et al. (1995) have to say about the question:

When people are given an opportunity to reflect on decisions and choices, they can and do change their minds about what they ought to do and how they wish to conduct their personal and professional lives. This is not to say that any instruction will be effective, or that all manner of ethical behavior can be developed with well-developed ethics instruction. But it is to say — and there is considerable evidence to show it — that ethics instruction can influence the thinking processes that relate to behavior. …

We do not claim that radical changes are likely to take place in the classroom or that sociopaths can be transformed into saints via case discussion. But we do claim that significant improvements can be made in reasoning about complex problems and that the effort is worthwhile. We are not alone in this belief: the National Institutes of Health, the National Science Foundation, the American Association for the Advancement of Science, and the Council of Biology Editors, among others, have called for increased attention to training in the responsible conduct of scientific research. Further, our belief is buttressed by empirical evidence from moral psychology. In Garrod (1993), James R. Rest summarizes the “several thousand” published studies on moral judgment and draws the following conclusions:

  • development of competence in ethical problem-solving continues well into adulthood (people show dramatic changes in their twenties, as in earlier years);
  • such changes reflect profound reconceptualization of moral issues;
  • formal education promotes ethical reasoning;
  • deliberate attempts to develop moral reasoning … can be demonstrated to be effective; and
  • studies link moral reasoning to moral behavior

So, there’s a body of research that supports ethics instruction as an intervention to help people behave more ethically.

Indeed, part of how ethics instruction helps is by getting students to engage analytically, not just emotionally. I would argue that making ethical decisions involves moving beyond gut feelings and instincts. It means understanding how your decisions impact others, and considering the ways your interests and theirs intersect. It means thinking through possible impacts of the various choices available to you. It means understanding the obligations set up by our relations to others in personal and professional contexts.

And a methodology for approaching ethical decision-making can be taught. Practice in making ethical decisions makes it easier to make better decisions. And making these decisions in conversation with other people who may have different perspectives (rather than just following a gut feeling) forces us to work out our reasons for preferring one course of action to the alternatives. These reasons are not just something we can offer to others to defend what we did, but they are things we can consider when deciding what to do in the first place.

As always, I reckon that there are some people who will remain unmoved by the research that shows the efficacy of ethics instruction, preferring to cling to their strong intuition that college-aged humans are past the point where an intervention like an ethics class could make any impact on their ethical behavior. But if that’s an intuition that ought to guide us — if, by your twenties, you’re either a good egg or irredeemably corrupt — it’s not clear that our individual or institutional responses to unethical behavior by scientists make any sense.

That’s the subject I’ll take up in my next post.

______
*STEM stands for science, technology, engineering, and mathematics.

______
Bebeau, M. J., Pimple, K. D., Muskavitch, K. M., Borden, S. L., & Smith, D. H. (1995). Moral reasoning in scientific research. Cases for teaching and assessment. Bloomington, IN: Poynter Center for the Study of Ethics and Assessment.

Garrod, A. (Ed.). (1993). Approaches to moral development: New research and emerging themes. Teachers College Press.

Resistance to ethics is different from resistance to other required courses.

For academic types like myself, the end of the semester can be a weird juxtaposition of projects that are ending and new projects that are on the horizon, a juxtaposition that can be an opportunity for reflection.

I’ve just seen another offering of my “Ethics in Science” course to a (mostly successful) conclusion. Despite the fact that the class was huge (more than 100 students) for a course that is heavy on discussion, its students were significantly more active and engaged than those in the much smaller class I taught right after it. The students thought hard and well, and regularly floored me with their razor-sharp insights. All the evidence suggests that these students were pretty into it.

Meanwhile, I’m getting set for a new project that will involve developing ethics units for required courses offered in another college at my university — and one of the things I’ve been told is that the students required to take these courses (as well as some non-zero number of the professors in their disciplines) are very resistant to the inclusion of ethics coursework in courses otherwise focused on their major subjects.

I find this resistance interesting, especially given that the majority of the students in my “Ethics in Science” class were taking it because it was required for their majors.

I recognize that part of what’s going on may be a blanket resistance to required courses. Requirements can feel like an attack on one’s autonomy and individuality — rather than being able to choose what you will study, you’re told what you must study to major in a particular subject or to earn a degree from a particular university. A course that a student might have been open to enjoying were it freely chosen can become a loathed burden merely by virtue of being required. I’ve seen the effect often enough that it no longer surprises me.

However, requirements aren’t usually imposed solely to constrain students’ autonomy. There’s almost always a reason that the course, or subject-matter, or problem-solving area that’s required is being required. The students may not know that reason (or judge it to be a compelling reason if they do know it), but that doesn’t mean that there’s not a reason.

In some ways, ethics is really not much different here from other major requirements or subject matter that students bemoan, including calculus, thermodynamics, writing in the major, and significant figures. On the other hand, the moaning about some of those other requirements tends to take the form of “When am I ever going to use that?”

I don’t believe I’ve ever heard a science or engineering student say, “When am I ever going to use ethics?”

In other words, they generally accept that they should be ethical, but they also sometimes voice resistance to the idea that a course (or workshop, or online training module) about how to be ethical will be anything but a massive waste of their time.

My sense is that at least part of what’s going on here is that scientists and engineers and their ilk feel like ethics are being imposed on them from without, by university administrators or funding agencies or accrediting organizations. Worse, the people exhorting scientists, engineers, et alia to take ethics seriously often seem to take a finger-wagging approach. And this, I suspect, makes it harder to get what those business types call “buy-in” from the scientists.

The typical story I’ve heard about ethics sessions in industry (and some university settings) goes something like this:

You get a big packet with the regulations you have to follow — to get your protocols approved by the IRB and/or the IACUC, to disclose potential conflicts of interest, to protect the company’s or university’s patent rights, to fill out the appropriate paperwork for hazardous waste disposal, etc., etc. You are admonished against committing the “big three” of falsification, fabrication, and plagiarism. Sometimes, you are also admonished against sexually harassing those with whom you are working. The whole thing has the feel of being driven by the legal department’s concerns: for goodness sake, don’t do anything that will embarrass the organization or get us into hot water with regulators or funders!


Listening to the litany of things you ought not to do, it’s really easy to think: Very bad people do things like this. But I’m not a very bad person. So I can tune this out, and I can kind of ignore ethics.


The decision to tune out ethics is enabled by the fact that the people wagging the fingers at the scientists are generally outsiders (from the legal department, or the philosophy department, or wherever). These outsiders are coming in telling us how to do our jobs! And, the upshot of what they’re telling us seems to be “Don’t be evil,” and we’re not evil! Besides, these outsiders clearly don’t care about (let alone understand) the science so much as avoiding scandals or legal problems. And they don’t really trust us not to be evil.


So just nod earnestly and let’s get this over with.

One hurdle here is the need to get past the idea that being ethical is somehow innate, a mere matter of not being evil, rather than a problem-solving practice that gets better with concrete strategies and repeated use. Another hurdle is the feeling that ethics instruction is the result of meddling by outsiders.


If ethics is seen as something imposed upon scientists by a group from the outside — one that neither understands science, nor values it, nor trusts that scientists are generally not evil — then scientists will resist ethics. To get “buy-in” from the scientists, they need to see how ethics are intimately connected to the job they’re trying to get done. In other words, scientists need to understand how ethical conduct is essential to the project of doing science. Once scientists make that connection, they will be ethical — not because someone else is telling them to be ethical, but because being ethical is required to make progress on the job of building scientific knowledge.
_____________
This post is an updated version of an ancestor post on my other blog, and was prompted by the Virtually Speaking Science discussion of philosophy in and of science scheduled for Wednesday, May 28, 2014 (starting 8 PM EDT/8 PM PDT). Watch the hashtags #VSpeak and #AskVS for more details.

Pub-Style Science: dreams of objectivity in a game built around power.

This is the third and final installment of my transcript of the Pub-Style Science discussion about how (if at all) philosophy can (or should) inform scientific knowledge-building. Leading up to this part of the conversation, we were considering the possibility that the idealization of the scientific method left out a lot of the details of how real humans actually interact to build scientific knowledge …

Dr. Isis: And that’s the tricky part, I think. That’s where this becomes a messy endeavor. You think about the parts of the scientific method, and you write the scientific method out, we teach it to our students, it’s on the little card, and I think it’s one of the most amazing constructs that there is. It’s certainly a philosophy.

I have devoted my career to the scientific method, and yet it’s that last step that is the messiest. We take our results and we interpret them, we either reject or fail to reject the hypothesis, and in a lot of cases, the way we interpret the very objective data that we’re getting is based on the social and cultural constructs of who we are. And the messier part is that the who we are — you say that science is done around the world, sure, but really, who is it done by? We all get the CV, “Dear honorable and most respected professor…” And what do you do with those emails? You spam them. But why? Why do we do that? There are people [doing science] around the world, and yet we reject their science-doing because of who they are and where they’re from and our understanding, our capacity to take [our doing] of that last step of the scientific method as superior because of some pedigree of our training, which is absolutely rooted in the narrowest sliver of our population.

And that’s the part that frightens me about science. Going from lab to lab and learning things, you’re not just learning objective skills, you’re learning a political process — who do you shake hands with at meetings, who do you have lunch with, who do you have drinks with, how do you phrase your grants in a particular way so they get funded because this is the very narrow sliver of people who are reading them? And I have no idea what to do about that.

Janet Stemwedel: I think this is a place where the acknowledgement that’s embodied in editorial policies of journals like PLOS ONE, that we can’t actually reliably predict what’s going to be important, is a good step forward. That’s saying, look, what we can do is talk about whether this is a result that seems to be robust: this is how I got it; I think if you try to get it in your lab, you’re likely to get it, too; this is why it looked interesting to me in light of what we knew already. Without saying: oh, and this is going to be the best thing since sliced bread. At least that’s acknowledging a certain level of epistemic humility that it’s useful for the scientific community to put out there, to not pretend that the scientific method lets you see into the future. Because last time I checked, it doesn’t.

(46:05)
Andrew Brandel: I just want to build on this point, that this question of objective truth also is a question that is debated hotly, obviously, in science, and I will get in much trouble for my vision of what is objective and what is not objective. This question of whether, to quote a famous philosopher of science, we’re all looking at the same world through different-colored glasses, or whether there’s something more to it, if we’re actually talking about nature in different ways, if we can really learn something not even from science being practiced wherever in the world, but from completely different systems of thinking about how the world works. Because the other part of this violence is not just the ways in which certain groups have not been included in the scientific community, the professional community, which was controlled by the church and wealthy estates and things, but also with the institutions like the scientific method, like certain kinds of philosophy. A lot of violence has been propagated in the name of those things. So I think it’s important to unpack not just this question of let’s get more voices to the table, but literally think about how the structures of what we’re doing themselves — the way the universities are set up, the way that we think about what science does, the way that we think about objective truth — also propagate certain kinds of violence, epistemic kinds of violence.

Michael Tomasson: Wait wait wait, this is fascinating. Epistemic violence? Expand on that.

Andrew Brandel: What I mean to say is, part of the problem, at least from the view of myself — I don’t want to actually represent anybody else — is that if we think that we’re getting to some better method of getting to objective truth, if we think that we have — even if it’s only in an ideal state — some sort of cornerstone, some sort of key to the reality of things as they are, then we can squash the other systems of thinking about the world. And that is also a kind of violence, in a way, that’s not just the violence of there’s no women at the table, there’s no different kinds of people at the table. But there’s actually another kind of power structure that’s embedded in the very way that we think about truths. So, for example, a famous anthropologist, Levi-Strauss, would always point out that the botanists would go to places in Latin America and they would identify 14 different kinds of XYZ plant, and the people living in that jungle who aren’t scientists or don’t have that kind of sophisticated knowledge could distinguish like 45 kinds of these plants. And they took them back to the lab, and they were completely right.

So what does that mean? How do we think about these different ways [of knowing]? I think unpacking that is a big thing that social science and philosophy of science can bring to this conversation, pointing out when there is a place to critique the ways in which science becomes like an ideology.

Michael Tomasson: That just sort of blew my mind. I have to process that for a while. I want to pick up on something you’re saying and that I think Janet said before, which is really part of the spirit of what Pub-Style Science is all about, the idea that if we get more different kinds of voices into science, we’ll have a little bit better science at the other end of it.

Dr. Rubidium: Yeaaaah. We can all sit around like, I’ve got a ton of great ideas, and that’s fabulous, and new voices, and rah rah. But where are the new voices? If the new voices, or what you would call new voices, or new opinions, or different opinions (maybe not even new, just different from the current power structure), aren’t getting to positions of real power to effect change, it doesn’t matter how many foot soldiers you get on the ground. You have got to get people into the position of being generals. And is that happening? No. I would say no.

Janet Stemwedel: Having more different kinds of people at the table doesn’t matter if you don’t take them seriously.

Andrew Brandel: Exactly. That’s a key point.

Dr. Isis: This is the tricky thing that I sort of alluded to. And I’m not talking about diverse voices in terms of gender and racial and sexual orientation diversity and disability issues. I’m talking about just this idea of diverse voices. One of the things that is tricky, again, is that to get to play the game you have to know the rules, and trying to change the rules too early — one, I think it’s dangerous to try to change the rules before you understand what the rules even are, and two, that is the quickest way to get smacked in the nose when you’re very young. And now, to extend that to issues of actual diversity in science, at least my experience has been that some of the folks who are diverse in science are some of the biggest rule-obeyers. Because you have to be in order to survive. You can’t come in and be different as it is and decide you’re going to change the rules out from under everybody until you get into that — until you become a general, to use Dr. Rubidium’s analogy. The problem is, by the time you become the general, have you drunk so much of the Kool-Aid that you no longer remember who you were? Do you still have enough of yourself to change the system? Some of my more senior colleagues, diverse colleagues, who came up the ranks, are some of the biggest believers in the rules. I don’t know if they felt that way when they were younger folks.

Janet Stemwedel: Part of it can be, if the rules work for you, there’s less incentive to think about changing them. But this is one of those places where those of us philosophers who think about where the knowledge-building bumps up against the ethics will say: look, the ethical responsibilities of the people in the community with more power are different from the ethical responsibilities of the people in the community who are just coming up, because they don’t have as much weight to throw around. They don’t have as much power. So I talk a lot to mid-career and late-career scientists and say, hey look, you want to help build a different community, a different environment for the people you’re training? You’ve got to put some skin in the game to make that happen. You’re in a relatively safe place to throw that weight around. You do that!

And you know, I try to make these prudential arguments about, if you shift around the incentive structures [in various ways], what’s likely to produce better knowledge on the other end? That’s presumably why scientists are doing science, ’cause otherwise there’d be some job that they’d be doing that takes up less time and less brain.

Andrew Brandel: This is a question also of where ethics and epistemic issues also come together, because I think that’s really part of what kind of radical politics — there’s a lot of different theories about what kind of revolution you can talk about, what a revolutionary politics might be to overthrow the system in science. But I think this issue that it’s also an epistemic thing, that it’s also a question of producing better knowledge, and that, to bring back this point about how it’s not just about putting people in positions, it’s not just hiring an assistant professor from XYZ country or more women or these kinds of things, but it’s also a question of putting oneself sufficiently at risk, and taking seriously the possibility that I’m wrong, from radically different positions. That would really move things, I think, in a more interesting direction. That’s maybe something we can bring to the table.

Janet Stemwedel: This is the piece of Karl Popper, by the way, that scientists like as an image of what kind of tough people they are. Scientists are not trying to prove their hypotheses, they’re trying to falsify them, they’re trying to show that they’re wrong, and they’re ready to kiss even their favorite hypothesis goodbye if that’s what the evidence shows.

Some of those hypotheses that scientists need to be willing to kiss goodbye have to do with narrow views of what kind of details count as fair game for building real reliable knowledge about the world and what kind of people and what kind of training could do that, too. Scientists really have to be more evidence-attentive around issues like their own implicit bias. And for some reason that’s really hard, because scientists think that individually they are way more objective than the average bear. The real challenge of science is recognizing that we are all average bears, and it is just the coordination of our efforts within this particular methodological structure that gets us something better than the individual average bear could get by him- or herself.

Michael Tomasson: I’m going to backpedal as furiously as I can, since we’re running out of time. So I’ll give my final spiel and then we’ll go around for closing comments.

I guess I will pare down my skeleton-key: I think there’s an idea of different ways of doing science, and there’s a lot of culture that comes with it that I think is very flexible. I think what I’m getting at is, is there some universal hub for whatever different ways people are looking at science? Is there some sort of universal skeleton or structure? And I guess, if I had to backpedal furiously, that I would say, what I would try to teach my folks, is number one, there is an objective world, it’s not just my opinion. When people come in and talk to me about their science and experiments, it’s not just about what I want, it’s not just about what I think, it’s that there is some objective world out there that we’re trying to describe. The second thing, the most stripped-down version of the scientific method I can think of, is that in order to understand that objective world, it helps to have a hypothesis, a preconceived notion, first to challenge.

What I get frustrated about, and this is just a very practical day-to-day thing, is I see people coming and doing experiments saying, “I have no preconceived notion of how this should go, I did this experiment, and here’s what I got.” It’s like, OK, that’s very hard to interpret unless you start from a certain place — here’s my prediction, here’s what I think was going on — and then test it.

Dr. Isis: I’ll say, Tomasson, actually this wasn’t as boring as I thought it would be. I was really worried about this one. I wasn’t really sure what we were supposed to be talking about — philosophy and science — but this one was OK. So, good on you.

But, I think that I will concur with you that science is about seeking objective truth. I think it’s a darned shame that humans are the ones doing the seeking.

Janet Stemwedel: You know, dolphin science would be completely different, though.

Dr. Rubidium: Yeah, dolphins are jerks! What are you talking about?

Janet Stemwedel: Exactly! All their journals would be behind paywalls.

Andrew Brandel: I’ll just say that I was saying to David, who I know is a regular member of your group, that I think it’s a good step in the right direction to have these conversations. I don’t think we get asked as social scientists, even those of us who work in science settings, to at least talk about these issues more, and talk about, what are the ethical and epistemic stakes involved in doing what we do? What can we bring to the table on similar kinds of questions? For me, this question of cultivating a kind of openness to being wrong is so central to thinking about the kind of science that I do. I think that these kinds of conversations are important, and we need to generate some kind of momentum. I jokingly said to Tomasson that we need a grant to pay for a workshop to get more people into these types of conversations, because I think it’s significant. It’s a step in the right direction.

Janet Stemwedel: I’m inclined to say one of the take-home messages here is that there’s a whole bunch of scientists and me, and none of you said, “Let’s not talk about philosophy at all, that’s not at all useful.” I would like some university administrators to pay attention to this. It’s possible that those of us in the philosophy department are actually contributing something that enhances not only the fortunes of philosophy majors but also the mindfulness of scientists about what they’re doing.

I’m pretty committed to the idea that there is some common core to what scientists across disciplines and across cultures are doing to build knowledge. I think the jury’s still out on what precisely the right thing to say about that common core of the scientific method is. But, I think there’s something useful in being able to step back and examine that question, rather than saying, “Science is whatever the hell we do in my lab. And as long as I keep doing all my future knowledge-building on the same pattern, nothing could go wrong.”

Dr. Rubidium: I think that for me, I’ll echo Isis’s comments: science is an endeavor done by people. And people are jerks — No! With people, then, if you have this endeavor, this job, whatever you want to call it — some people would call it a calling — once people are involved, I think it’s essential that we talk about philosophy, sociology, the behavior of people. They are doing the work. It doesn’t make sense to me, then — and I’m an analytical chemist and I have zero background in all of the social stuff — it doesn’t make sense to me that you would have this thing done by people and then actually say with a straight face, “But let’s not talk about people.” That part just doesn’t compute. So I think these conversations definitely need to continue, and I hope that we can talk more about the people behind the endeavor and more about the things attached to their thoughts and behaviors.

* * * * *

Part 1 of the transcript.

Part 2 of the transcript.

Archived video of this Pub-Style Science episode.

Storify’d version of the simultaneous Twitter conversation.

You should also check out Dr. Isis’s post on why the conversations that happen in Pub-Style Science are valuable to scientists-in-training.

Pub-Style Science: exclusion, inclusion, and methodological disputes.

This is the second part of my transcript of the Pub-Style Science discussion about how (if at all) philosophy can (or should) inform scientific knowledge-building, wherein we discuss methodological disputes, who gets included or excluded in scientific knowledge-building, and ways the exclusion or inclusion might matter. Also, we talk about power gradients and make the scary suggestion that “the scientific method” might be a lie…

Michael Tomasson: Rubidium, you got me started on this. I made a comment on Twitter about our aspirations to build objective knowledge and that that was what science was about, and whether there’s sexism or racism or whatever other -isms around is peripheral to the holy of holies, which is the finding of objective truth. And you made … a comment.

Dr. Rubidium: I think I told you that was cute.

Michael Tomasson: Let me leverage it this way: One reason I think philosophy is important is the basics of structure, of hypothesis-driven research. The other thing I’m kind of intrigued by is that part of Twitter culture, and what we’re doing with Pub-Style Science, is to throw the doors open to people from different cultures and different backgrounds and really say, hey, we want to have science that’s not just a white bread monoculture, but have it be a little more open. But does that mean that everyone can bring their own way of doing science? It sounds like Andrew might say, well, there’s a lot of different ways, and maybe everyone who shows up can bring their own. Maybe one person wants a hypothesis, another doesn’t. Does everybody get to do their own thing, or do we need to educate people in the one way to do science?

As I mentioned on my blog, I had never known that there was a feminist way of doing science.

Janet Stemwedel: There’s actually more than one.

Dr. Isis: We’re not all the same.

Janet Stemwedel: I think even the claim that there’s a single, easily described scientific method is kind of a tricky one. One of the things I’m interested in — one of the things that sucked me over from building knowledge in chemistry to trying to build knowledge in philosophy — is, if you look at scientific practice, scientists who are nominally studying the same thing, the same phenomena, but who’re doing it in different disciplines (say, the chemical physicists and the physical chemists) can be looking at the same thing, but they’re using very different experimental tools and conceptual tools and methodological tools to try to describe what’s going on there. There’s ways in which, when you cross a disciplinary boundary — and sometimes, when you leave your research group and go to another research group in the same department — that what you see on the ground as the method you’re using to build knowledge shifts.

In some ways, I’m inclined to say it’s an empirical question whether there’s a single unified scientific method, or whether we’ve got something more like a family resemblance kind of thing going on. There’s enough overlap in the tools that we’re going to call them all science, but whether we can give necessary and sufficient conditions that describe the whole thing, that’s still up in the air.

Andrew Brandel: I just want to add to that point, if I can. I think that one of the major topics in social sciences of science and in the philosophy of science recently has been the point that science itself, as it’s been practiced, has a history that is also built on certain kinds of power structures. So it’s not even enough to say, let’s bring lots of different kinds of people to the table, but we actually have to uncover the ways in which certain power structures have been built into the very way that we think about science or the way that the disciplines are arranged.

(23:10)
Michael Tomasson: You’ve got to expand on that. What do you mean? There’s only one good — there’s good science and there’s bad science. I don’t understand.

Janet Stemwedel: So wait, everyone who does science like you do is doing good science, and everyone who uses different approaches, that’s bad?

Michael Tomasson: Yes, exactly.

Janet Stemwedel: There’s no style choices in there at all?

Michael Tomasson: That’s what I’m throwing out there. I’m trying to explore that. I’m going to take poor Casey over here, we’re going to stamp him, turn him into a white guy in a tie and he’s going to do science the way God intended it.

Dr. Isis: This is actually a good point, though. I had a conversation with a friend recently about “Cosmos.” As they look back on the show, at all the historical scientists, who, historically has done science? Up until very recently, it has been people who were sufficiently wealthy to support the lifestyle to which they would like to become accustomed, and it’s very easy to sit and think and philosophize about how we do science when it’s not your primary livelihood. It was sort of gentleman scientists who were of the independently wealthy variety who were interested in science and were making these observations, and now that’s very much changed.

It was really interesting to me when you suggested this as a topic because recently I've become very pragmatic about doing science. I think I'm taking the "Friday" approach to science — you know, the movie? Danielle Lee wants to remake "Friday" as a science movie. Right now, messing with my money is like messing with my emotions. I'm about writing things in a way to get them funded and writing things in a way that gets them published, and it's cute to think that we might change the game or make it better, but there's also a pragmatic side to it. It's a human endeavor, and doing things in a certain way gets certain responses from your colleagues. The thing that I see, especially watching young people on Twitter, is they try to change the game before they understand the game, and then they get smacked on the nose, and then they write it off as "science is broken." Well, you don't understand the game yet.

Janet Stemwedel: Although it’s complicated, I’d say. It is a human endeavor. Forgetting it’s a human endeavor is a road to nothing but pain. And you’ve got the knowledge-building thing going on, and that’s certainly at the center of science, but you’ve also got the getting credit for the awesome things you’ve done and getting paid so you can stay in the pool and keep building knowledge, because we haven’t got this utopian science island where anyone who wants to build knowledge can and all their needs are taken care of. And, you’ve got power gradients. So, there may well be principled arguments from the point of view of what’s going to incentivize practices that will result in better knowledge and less cheating and things like that, to change the game. I’d argue that’s one of the things that philosophy of science can contribute — I’ve tried to contribute that as part of my day job. But the first step is, you’ve got to start talking about the knowledge-building as an activity that’s conducted by humans rather than you put more data into the scientific method box, you turn the crank, and out comes the knowledge.

Michael Tomasson: This is horrifying. I guess what I’m concerned about is I’d hoped you’d teach the scientific method as some sort of central methodology from lab to lab. Are you saying, from the student’s point of view, whatever lab you’re in, you’ve got to figure out whatever the boss wants, and that’s what science is? Is there no skeleton key or structure that we can take from lab to lab?

Dr. Rubidium: Isn’t that what you’re doing? You’re going to instruct your people to do science the way you think it should be done? That pretty much sounds like what you just said.

Dr. Isis: That’s the point of being an apprentice, right?

Michael Tomasson: I had some fantasy that there was some universal currency or universal toolset that could be taken from one lab to another. Are you saying that I’m just teaching my people how to do Tomasson science, and they’re going to go over to Rubidium and be like, forget all that, and do things totally differently?

Dr. Rubidium: That might be the case.

Janet Stemwedel: Let's put out there that a unified scientific method that's accepted across scientific disciplines, and from lab to lab and all that, is an ideal. We have this notion that part of why we're engaged in science, trying to build knowledge of the world, is that there is a world that we share. We're trying to build objective knowledge, and why that matters is because we take it that there is a reality out there that goes deeper than how, subjectively, things seem to us.

(30:00)
Michael Tomasson: Yes!

Janet Stemwedel: So, we’re looking for a way to share that world, and the pictures of the method involved in doing that, the logical connections involved in doing that, that we got from the logical empiricists and Popper and that crowd — if you like, they’re giving sort of the idealized model of how we could do that. It’s analogous to the story they tell you about orbitals in intro chem. You know what happens, if you keep on going with chem, is they mess up that model. They say, it’s not that simple, it’s more complicated.

And that's what philosophers of science do: we mess up that model. We say, it can't possibly be that simple, because real human beings couldn't drive that and make it work as well as it does. So there must be something more complicated going on; let's figure out what it is. My impression, looking at the practice through the lens of philosophy of science, is that you find a lot of diversity in the details of the methods, and you find a reasonable amount of diversity in terms of what's the right attitude to have towards our theories — if we've got a lot of evidence in favor of our theories, are we allowed to believe our theories are probably right about the world, or just that they're better at churning out predictions than the other theories we've considered so far? We have places where you can start to look at how the methodologies embraced by Western primatologists compare to those embraced by Japanese primatologists — where they differ on what's the right thing to do to get the knowledge — and you could say, it's not the case that one side is right and one side is wrong; we've located a trade-off here, where one camp is deciding one of the things you could get is more important and you can sacrifice the other, and the other camp is going the other direction on that.

It's not to say we should just give up on this project of science and building objective, reliable knowledge about the world. But how we do that is not really anything like the flowchart of the scientific method that you find in the junior high science textbook. That's like staying with the intro chem picture of the orbitals and saying, that's all I need to know.

(32:20)
Dr. Isis: I sort of was having a little frightened moment where, as I was listening to you talk, Michael, I was having this "I don't think that word means what you think it means" reaction. And I realize that you're a physician and not a real scientist, but "the scientific method" is actually a narrow construct of generating a hypothesis, generating methods to test the hypothesis, generating results, and then either rejecting or failing to reject your hypothesis. This idea of going to people's labs and learning to do science is completely tangential to the scientific method. I think we can all agree that, for most of us at our core, the scientific method is different from the culture. Now, whether I go to Tomasson's lab and learn to label my reagents with the wrong labels because they're a trifling, scandalous bunch who will mess up your experiment, and then I go to Rubidium's lab and we all go marathon training at 3 o'clock in the afternoon, that's the culture of science, that's not the scientific method.

(34:05)
Janet Stemwedel: Maybe what we mean by the scientific method is either more nebulous or more complicated, and that’s where the disagreements come from.

If I can turn back to the example of the Japanese primatologists and the primatologists from the U.S. [1]… You’re trying to study monkeys. You want to see how they’re behaving, you want to tell some sort of story, you probably are driven by some sort of hypotheses. As it turns out, the Western primatologists are starting with the hypothesis that basically you start at the level of the individual monkey, that this is a biological machine, and you figure out how that works, and how they interact with each other if you put them in a group. The Japanese primatologists are starting out with the assumption that you look at the level of social groups to understand what’s going on.

(35:20)
And there’s this huge methodological disagreement that they had when they started actually paying attention to each other: is it OK to leave food in the clearing to draw the monkeys to where you can see them more closely?

The Western primatologists said, hell no, that interferes with the system you’re trying to study. You want to know what the monkeys would be like in nature, without you there. So, leaving food out there for them, “provisioning” them, is a bad call.

The Japanese primatologists (who are, by the way, studying monkeys that live in the islands that are part of Japan, monkeys that are well aware of the existence of humans because they’re bumping up against them all the time) say, you know what, if we get them closer to where we are, if we draw them into the clearings, we can see more subtle behaviors, we can actually get more information.

So here, there’s a methodological trade-off. Is it important to you to get more detailed observations, or to get observations that are untainted by human interference? ‘Cause you can’t get both. They’re both using the scientific method, but they’re making different choices about the kind of knowledge they’re building with that scientific method. Yet, on the surface of things, these primatologists were sort of looking at each other like, “Those guys don’t know how to do science! What the hell?”

(36:40)
Andrew Brandel: The other thing I wanted to mention to this point and, I think, to Tomasson's question also, is that there are lots of anthropologists embedded with laboratory scientists all over the world, doing research into exactly what kinds of differences there are: in the ways labs are organized, in the ways arguments get levied, in what counts as "true" or "false," what counts as a hypothesis, and how that gets determined within these different contexts. There are broad fields of social sciences doing exactly this.

Dr. Rubidium: I think this gets to the issue: Tomasson, what are you calling the scientific method? Versus, can you really at some point separate out the idea that science is a thing — like Janet was saying, it’s a machine, you put the stuff in, give it a spin, and get the stuff out — can you really separate something called “the scientific method” from the people who do it?

I’ve taught general chemistry, and one of the first things we do is to define science, which is always exciting. It’s like trying to define art.

Michael Tomasson: So what do you come up with? What is science?

Dr. Rubidium: It’s a body of knowledge and a process — it’s two different things, when people say science. We always tell students, it’s a body of knowledge but it’s also a process, a thing you can do. I’m not saying it’s [the only] good answer, but it’s the answer we give students in class.

Then, of course, the idea is, what’s the scientific method? And everyone’s got some sort of a figure. In the gen chem book, in chapter 1, it’s always going to be in there. And it makes it seem like we’ve all agreed at some point, maybe taken a vote, I don’t know, that this is what we do.

Janet Stemwedel: And you get the laminated card with the steps on it when you get your lab coat.

Dr. Rubidium: And there’s the flowchart, usually laid out like a circle.

Michael Tomasson: Exactly!

Dr. Rubidium: It's awesome! But that's what we tell people. It's kind of like the lie we tell them about orbitals, like Janet was saying, in the beginning of gen chem. But then, this is how sausages are really made. And yes, we have this method, and these are the steps we say are involved with it, but are we talking about that, which is what you learn in high school or junior high or science camp or whatever, or are you actually talking about how you run your research group? Which one are you talking about?

(39:30)
Janet Stemwedel: It can get more complicated than that. There’s also this question of: is the scientific method — whatever the heck we do to build reliable knowledge about the world using science — is that the kind of thing you could do solo, or is it necessarily a process that involves interaction with other people? So, maybe we don’t need to be up at night worrying about whether individual scientists fail to instantiate this idealized scientific method as long as the whole community collectively shakes out as instantiating it.

Michael Tomasson: Hmmm.

Casey: Isn’t this part of what a lot of scientists are doing, that it shakes out some of the human problems that come with it? It’s a messy process and you have a globe full of people performing experiments, doing research. That should, to some extent, push out some noise. We have made advances. Science works to some degree.

Janet Stemwedel: It mostly keeps the plane up in the air when it's supposed to be in the air, and the water from being poisoned when it's not supposed to be poisoned. The science does a pretty good job building the knowledge. I can't always explain why it's so good at that, but I believe that it does. And I think you're right, there's something — certainly in peer review, there's this assumption that why we play with others here is that they help us catch the thing we're missing, they help us to make sure the experiments really are reproducible, to make sure that we're not smuggling in unconscious assumptions, whatever. I would argue, following on something Tomasson wrote in his blog post, that this is a good epistemic reason for some of the stuff that scientists rail on about on Twitter, about how we should try to get rid of sexism and racism and ableism and other kinds of -isms in the practice of science. It's not just because scientists shouldn't be jerks to people who could be helping them build the knowledge. It's that, if you've got a more diverse community of people building the knowledge, you up the chances that you're going to locate the unconscious biases that are sneaking into the story we tell about what the world is like.

When the transcript continues, we do some more musing about methodology, the frailties of individual humans when it comes to being objective, and epistemic violence.

_______

[1] This discussion based on my reading of Pamela J. Asquith, “Japanese science and western hegemonies: primatology and the limits set to questions.” Naked science: Anthropological inquiry into boundaries, power, and knowledge (1996): 239-258.

* * * * *

Part 1 of the transcript.

Archived video of this Pub-Style Science episode.

Storify’d version of the simultaneous Twitter conversation.

Pub-Style Science: philosophy, hypotheses, and the scientific method.

Last week I was honored to participate in a Pub-Style Science discussion about how (if at all) philosophy can (or should) inform scientific knowledge-building. Some technical glitches notwithstanding, it was a rollicking good conversation — so much so that I have put together a transcript for those who don’t want to review the archived video.

The full transcript is long (approaching 8000 words even excising the non-substantive smack-talk), so I’ll be presenting it here in a few chunks that I’ve split more or less at points where the topic of the discussion shifted.

In places, I’ve cleaned up the grammar a bit, attempting to faithfully capture the gist of what each speaker was saying. As well, because my mom reads this blog, I’ve cleaned up some of the more colorful language. If you prefer the PG-13 version, the archived video will give you what you need.

Simultaneously with our video-linked discussion, there was a conversation on Twitter under the #pubscience hashtag. You can see that conversation Storify’d here.

____
(05:40)
Michael Tomasson: The reason I was interested in this is because I have one very naïve view and one esoteric view. My naïve view is that there is something useful about philosophy in terms of the scientific method, and when people are in my lab, I try to beat into their heads (I mean, educate them) that there’s a certain structure to how we do science, and this is a life-raft and a tool that is essential. And I guess that’s the question, whether there is some sort of essential tool kit. We talk about the scientific method. Is that a universal? I started thinking about this talking with my brother-in-law, who’s an amateur philosopher, about different theories of epistemology, and he was shocked that I would think that science had a lock on creating knowledge. But I think we do, through the scientific method.

Janet, take us to the next level. To me, from where I am, the scientific method is the key to the city of knowledge. No?

Janet Stemwedel: Well, that’s certainly a common view, and that’s a view that, in the philosophy of science class I regularly teach, we start with — that there’s something special about whatever it is scientists are doing, something special about the way they gather very careful observations of the world, and hook them together in the right logical way, and draw inferences and find patterns, that’s a reliable way to build knowledge. But at least for most of the 20th Century, what people who looked closely at this assumption in philosophy found was that it had to be more complicated than that. So you end up with folks like Sir Karl Popper pointing out that there is a problem of induction — that deductive logic will get you absolutely guaranteed conclusions if your premises are true, but inductive inference could go wrong; the future might not be like the past we’ve observed so far.

(08:00)
Michael Tomasson: I’ve got to keep the glossary attached. Deductive and inductive?

Janet Stemwedel: Sure. A deductive argument might run something like this:

All men are mortal. Socrates is a man. Therefore, Socrates is mortal.

If it’s true that all men are mortal, and that Socrates is a man, then you are guaranteed that Socrates is also going to be mortal. The form of the argument is enough to say, if the assumptions are true, then the conclusion has to be true, and you can take that to the bank.

Inductive inference is actually most of what we seem to use in drawing inferences from observations and experiments. So, let’s say you observe a whole lot of frogs, and you observe that, after some amount of time, each of the frogs that you’ve had in your possession kicks off. After a certain number of frogs have done this, you might draw the inference that all frogs are mortal. And, it seems like a pretty good inference. But, it’s possible that there are frogs not yet observed that aren’t mortal.

Inductive inference is something we use all the time. But Karl Popper said, guess what, it’s not guaranteed in the same way deductive logic is. And this is why he thought the power of the scientific method is that scientists are actually only ever concerned to find evidence against their hypotheses. The evidence against your hypotheses lets you conclude, via deductive inference, that those hypotheses are wrong, and then you cross them off. Any hypothesis where you seem to get observational support, Popper says, don’t get too excited! Keep testing it, because maybe the next test is going to be the one where you find evidence against it, and you don’t want to get screwed over by induction. Inductive reasoning is just a little too shaky to put your faith in.
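[A quick gloss, mine rather than anything said in the discussion: schematically, the falsifying inference Popper leans on is just modus tollens, a purely deductive form,

\[
H \rightarrow O, \quad \neg O \;\; \therefore \;\; \neg H
\]

that is, if hypothesis H implies observation O and O fails to occur, then H is false; no inductive step is needed.]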

(10:05)
Michael Tomasson: That’s my understanding of Karl Popper. I learned about the core of falsifying hypotheses, and that’s sort of what I teach as truth. But I’ve heard some anti-Karl Popper folks, which I don’t really quite understand.

Let me ask Isis, because I know Isis has very strong opinions about hypotheses. You had a blog post a long time ago about hypotheses. Am I putting words in your mouth to say you think hypotheses and hypothesis testing are important?

(10:40)
Dr. Isis: No, I did. That's sort of become the running joke here: my only contribution to lab meeting is to say, wait wait wait, what was your hypothesis? I think that having hypotheses is critical, and I'm a believer, as Dr. Tomasson knows, that a hypothesis has four parts. I think that's fundamental, framing the question, because I think that the question frames how you do your analysis. The design and the analysis fall out of the hypothesis, so I don't understand doing science without a hypothesis.

Michael Tomasson: Let me throw it over to Andrew … You’re coming from anthropology, you’re looking at science from 30,000 feet, where maybe in anthropology it’s tough to do hypothesis-testing. So, what do you say to this claim that the hypothesis is everything?

Andrew Brandel: I would give two basic responses. One: in the social sciences, we definitely have a different relationship to hypotheses, to the scientific method, perhaps. I don’t want to represent the entire world of social and human sciences.

Michael Tomasson: Too bad!

(12:40)
Andrew Brandel: So, there’s definitely a different relationship to hypothesis-testing — we don’t have a controlled setting. This is what a lot of famous anthropologists would talk about. The other area where we might interject is, science is (in the view of some of us) one among many different ways of viewing and organizing our knowledge about the world, and not necessarily better than some other view.

Michael Tomasson: No, it’s better! Come on!

Andrew Brandel: Well, we can debate about this. This is a debate that’s been going on for a long time, but basically my position would be that we have something to learn from all the different sciences that exist in the world, and that there are lots of different logics which condition the possibility of experiencing different kinds of things. When we ask, what is the hypothesis, when Dr. Isis is saying that is crucial for the research, we would agree with you, that that is also conditioning the responses you get. That’s both what you want and part of the problem. It’s part of a culture that operates like an ideology — too close to you to come at from within it.

Janet Stemwedel: One of the things that philosophers of science started twigging to, since the late 20th Century, is that science is not working with this scientific method that's essentially a machine that you toss observations into and you turn the crank and on the other end out comes pristine knowledge. Science is an activity done by human beings, and human beings who do science have as many biases and blind spots as human beings who don't do science. So, recognizing some of the challenges that are built into the kind of critter we are as we try to build reliable knowledge about the world becomes crucial. And even in cases where the scientist will say, look, I'm not doing (in this particular field) hypothesis-driven science, it doesn't mean that there aren't some hypotheses sort of behind the curtain directing the attention of the people trying to build knowledge. It just means that they haven't bumped into enough people trying to build knowledge in the same area who have different assumptions to notice that they're making assumptions in the first place.

(15:20)
Dr. Isis: I think that’s a crucial distinction. Is the science that you’re doing really not hypothesis-driven, or are you too lazy to write down a hypothesis?

To give an example, I’m writing a paper with this clinical fellow, and she’s great. She brought a draft, which is amazing, because I’m all about the paper right now. And in there, she wrote, we sought to observe this because to the best of our knowledge this has never been reported in the literature.

First of all, the phrase “to the best of our knowledge,” any time you write that you should just punch yourself in the throat, because if it wasn’t to the best of your knowledge, you wouldn’t be writing it. I mean, you wouldn’t be lying: “this has never been reported in the literature.” The other thing is, “this has never been reported in the literature” as the motivation to do it is a stupid reason. I told her, the frequency of the times of the week that I wear black underwear has never been reported in the literature. That doesn’t mean it should be.

Janet Stemwedel: Although, if it correlates with your experiment working or not — I have never met more superstitious people than experimentalists. If the experiment only works on the days you wear black underwear, you're wearing black underwear until the paper is submitted, that's how it's going to be. Because the world is complicated!

Dr. Isis: The point is that it’s not that she didn’t have a hypothesis. It’s that pulling it out of her — it was like a tapeworm. It was a struggle. That to me is the question. Are we really doing science without a hypothesis, or are we making the story about ourselves? About what we know about in the literature, what the gap in the literature is, and the motivation to do the experiment, or are we writing, “we wanted to do this to see if this was the thing”? — in which case, I don’t find it very interesting.

Michael Tomasson: That’s an example of something that I try to teach, when you’re writing papers: we did this, we wanted to do that, we thought about this. It’s not really about you.

But friend of the show Cedar Riener tweets in, aren't the biggest science projects those least likely to have clearly hypothesis-driven experiments, like HGP, BRAIN, etc.? I think the BRAIN example is a good one. We talk about how you need hypotheses to do science, and yet here's this very high-profile thing which, as far as I can tell, doesn't really have any hypotheses driving it.

When the transcript continues: Issues of inclusion, methodological disputes, and the possibility that “the scientific method” is actually a lie.

What do I owe society for my scientific training? Obligations of scientists (part 6)

One of the dangers of thinking hard about your obligations is that you may discover one that you've fallen down on. As we continue our discussion of the obligations of scientists, I put myself under the microscope and invite you to consider whether I've incurred a debt to society that I have failed to pay back.

In the last post in this series, we discussed the claim that those in our society with scientific training have a positive duty to conduct scientific research in order to build new scientific knowledge. The source of that putative duty is two-fold. On the one hand, it’s a duty that flows from the scientist’s abilities in the face of societal needs: if people trained to build new scientific knowledge won’t build the new scientific knowledge needed to address pressing problems (like how to feed the world, or hold off climate change, or keep us all from dying from infectious diseases, or what have you), we’re in trouble. On the other hand, it’s a duty that flows from the societal investment that nurtures the development of these special scientific abilities: in the U.S., it’s essentially impossible to get scientific training at the Ph.D. level that isn’t subsidized by public funding. Public funding is used to support the training of scientists because the public expects a return on that investment in the form of grown-up scientists building knowledge which will benefit the public in some way. By this logic, people who take advantage of that heavily subsidized scientific training but don’t go on to build scientific knowledge when they are fully trained are falling down on their obligation to society.

People like me.

From September 1989 through December 1993, I was in a Ph.D. program in chemistry. (My Ph.D. was conferred January 1994.)

As part of this program, I was enrolled in graduate coursework (two chemistry courses per quarter for my first year, plus another chemistry course and three math courses, for fun, during my second year). I didn’t pay a dime for any of this coursework (beyond buying textbooks and binder paper and writing implements). Instead, tuition was fully covered by my graduate tuition stipend (which also covered “units” in research, teaching, and department seminar that weren’t really classes but appeared on our transcripts as if they were). Indeed, beyond the tuition reimbursement I was paid a monthly stipend of $1000, which seemed like a lot of money at the time (despite the fact that more than a third of it went right to rent).

I was also immersed in a research lab from January 1990 onward. Working in this lab was the heart of my training as a chemist. I was given a project to start with — a set of empirical questions to try to answer about a far-from-equilibrium chemical system that one of the recently-graduated students before me had been studying. I had to digest a significant chunk of experimental and theoretical literature to grasp why the questions mattered and what the experimental challenges in answering them might be. I had to assess the performance of the experimental equipment we had on hand, spend hours with calibrations, read a bunch of technical manuals, disassemble and reassemble pumps, write code to drive the apparatus and to collect data, identify experimental constraints that were important to control (and that, strangely, were not identified as such in the experimental papers I was working from), and also, when I determined that the chemical system I had started with was much too fussy to study with the equipment the lab could afford, to identify a different chemical system that I could use to answer similar questions and persuade my advisor to approve this new plan.

In short, my time in the lab had me learning how to build new knowledge (in a particular corner of physical chemistry) by actually building new knowledge. The earliest stages of my training had me juggling the immersion into research with my own coursework and with teaching undergraduate chemistry students as a lab instructor and teaching assistant. Some weeks, this meant I was learning less about how to make new scientific knowledge than I was about how to tackle my problem sets or how to explain buffers to pre-meds. Past the first year of the program, though, my waking hours were dominated by getting experiments designed, collecting loads of data, and figuring out what it meant. There were significant stretches of time during which I got into the lab by 5 AM and didn't leave until 8 or 9 PM, and the weekend days when I didn't go into the lab were usually consumed with coding, catching up on relevant literature, or drafting manuscripts or thesis chapters.

Once, for fun, some of us grad students did a back-of-the-envelope calculation of our hourly wages. It was remarkably close to the minimum wage I had been paid as a high school student in 1985. Still, we were getting world-class scientific training, for free! We paid with the sweat of our brows, but wouldn't we have to put in that time and effort to learn how to make scientific knowledge anyway? Sure, we graduate students did the lion's share of the hands-on teaching of undergraduates in our chemistry department (undergraduates who were paying a significant tuition bill), but we were learning, from some of the best scientists in the world, how to be scientists!
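A rough reconstruction of that back-of-the-envelope arithmetic (the stipend figure is from above; the 70-hour week is an assumption standing in for the hours I actually logged, not a number from the original calculation):

\[
\frac{\$1000\ \text{per month}}{70\ \text{hours/week} \times 4.3\ \text{weeks/month}} \approx \$3.30\ \text{per hour},
\]

which lands right around the 1985 federal minimum wage of \$3.35 per hour.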

Having gotten what amounts to a full-ride for that graduate training, due in significant part to public investment in scientific training at the Ph.D. level, shouldn’t I be hunkered down somewhere working to build more chemical knowledge to pay off my debt to society?

Do I have any good defense to offer for the fact that I’m not building chemical knowledge?

For the record, when I embarked on Ph.D. training in chemistry, I fully expected to be an academic chemist when I grew up. I really did imagine that I’d have a long career building chemical knowledge, training new chemists, and teaching chemistry to an audience that included some future scientists and some students who would go on to do other things but who might benefit from a better understanding of chemistry. Indeed, when I was applying to graduate programs, my chemistry professors were talking up the “critical shortage” of Ph.D. chemists. (By January of my first year in graduate school, I was reading reports that there were actually something like 30% more Ph.D. chemists than there were jobs for Ph.D. chemists, but a first-year grad student is not necessarily freaking out about the job market while she is wrestling with her experimental system.) I did not embark on a chemistry Ph.D. as a collectable. I did not set out to be a dilettante.

In the course of the research that was part of my Ph.D. training, I actually built some new knowledge and shared it with the public, at least to the extent of publishing it in journal articles (four of them, an average of one per year). It's not clear what the balance sheet would say about this rate of return on the public's investment in my scientific training — nor whether most taxpayers would judge the knowledge I built (about the dynamics of far-from-equilibrium chemical reactions and about ways to devise useful empirical tests of proposed reaction mechanisms) as useful knowledge.

Then again, no part of how our research was evaluated in grad school was framed in terms of societal utility. You might try to describe how your research had broader implications that someone outside your immediate subfield could appreciate if you were writing a grant to get the research funded, but solving society’s pressing scientific problems was not the sine qua non of the research agendas we were advancing for our advisors or developing for ourselves.

As my training was teaching me how to conduct serious research in physical chemistry, it was also helping me to discover that my temperament was maybe not so well suited to life as a researcher in physical chemistry. I found, as I was struggling with a grant application that asked me to describe the research agenda I expected to pursue as an academic chemist, that the questions that kept me up at night were not fundamentally questions about chemistry. I learned that no part of me was terribly interested in the amount of grant-writing and lab administration that would have been required of me as a principal investigator. Looking at the few women training me at the Ph.D. level, I surmised that I might have to delay or skip having kids altogether to survive academic chemistry — and that the competition for those faculty jobs where I’d be able to do research and build new knowledge was quite fierce.

Plausibly, had I been serious about living up to my obligation to build new knowledge by conducting research, I could have been a chemist in industry. As I was finishing up my Ph.D., the competition for industry jobs for physical chemists like me was also pretty intense. What I gathered as I researched and applied for industry jobs was that I didn't really like the culture of industry. And, while working in industry would have been a way for me to conduct research and build new knowledge, I might have ended up spending more time solving the shareholders' problems than solving society's problems.

If I wasn’t going to do chemical research in an academic career and I wasn’t going to do chemical research in an industrial job, how should I pay society back for the publicly-supported scientific training I received? Should I be building new scientific knowledge on my own time, in my own garage, until I’ve built enough that the debt is settled? How much new knowledge would that take?

The fact is, none of us Ph.D. students seemed to know at the time that public money was making it possible for us to get graduate training in chemistry without paying for that training. Nor was there an explicit contract we were asked to sign as we took advantage of this public support, agreeing to work for a certain number of years upon the completion of our degrees as chemists serving the public’s interests. Rather, I think most of us saw an opportunity to pursue a subject we loved and to get the preparation we would need to become principal investigators in academia or industry if we decided to pursue those career paths. Most of us probably didn’t know enough about what those career paths would be like to have told you at the beginning of our Ph.D. training whether those career paths would suit our talents or temperaments — that was part of what we were trying to find out by pursuing graduate studies. And practically, many of us would not have been able to find out if we had had to pay the costs of our Ph.D. training ourselves.

If no one who received scientific training subsidized by the public went on to build new scientific knowledge, this would surely be a problem for society. But, do we want to say that everyone who receives such subsidized training is on the hook to pay society back by building new scientific knowledge until such time as society has all the scientific knowledge it needs?

That strikes me as too strong. However, given that I’ve benefitted directly from a societal investment in Ph.D. training that, for all practical purposes, I stopped using in 1994, I’m probably not in a good position to make an objective judgment about just what I do owe society to pay back this debt. Have I paid it back already? Is society within its rights to ask more of me?

Here, I’ve thought about the scientist’s debt to society — my debt to society — in very personal terms. In the next post in the series, we’ll revisit these questions on a slightly larger scale, looking at populations of scientists interacting with the larger society and seeing what this does to our understanding of the obligations of scientists.
______
Posts in this series:

Questions for the non-scientists in the audience.

Questions for the scientists in the audience.

What do we owe you, and who’s “we” anyway? Obligations of scientists (part 1)

Scientists’ powers and ways they shouldn’t use them: Obligations of scientists (part 2)

Don’t be evil: Obligations of scientists (part 3)

How plagiarism hurts knowledge-building: Obligations of scientists (part 4)

What scientists ought to do for non-scientists, and why: Obligations of scientists (part 5)

What do I owe society for my scientific training? Obligations of scientists (part 6)

Are you saying I can’t go home until we cure cancer? Obligations of scientists (part 7)