Pennywise and pound-foolish: misidentified cells and competitive pressures in scientific knowledge-building.

The overarching project of science is building reliable knowledge about the world, but the way this knowledge-building happens in our world is in the context of competition. For example, scientists compete with each other to be the first to make a new discovery, and they compete with each other for finite pools of grant money with which to conduct more research and make further discoveries.

I’ve heard the competitive pressures on scientists described as a useful way to motivate scientists to be clever and efficient (and not to knock off early lest some more dedicated lab get to your discovery first). But there are situations where it’s less obvious that fierce competition for scarce resources leads to choices that really align with the goal of building reliable knowledge about the world.

This week, on NPR’s Morning Edition, Richard Harris reported a pair of stories on how researchers who work with cells in culture grapple with the problem of their intended cell line being contaminated and overtaken by a different cell line. Harris tells us:

One of the worst cases involves a breast cancer cell line called MDA-435 (or MDA-MB-435). After the cell line was identified in 1976, breast cancer scientists eagerly adopted it.

When injected in animals, the cells spread the way breast cancer metastasizes in women, “and that’s not a very common feature of most breast cancer cell lines,” says Stephen Ethier, a cancer geneticist at the Medical University of South Carolina. “So as a result of that, people began asking for those cells, and so there are many laboratories all over the world, who have published hundreds of papers using the MDA-435 cell line as a model for breast cancer metastasis.”

In fact, scientists published more than a thousand papers with this cell line over the years. About 15 years ago, scientists using newly developed DNA tests took a close look at these cells. And they were shocked to discover that they weren’t from a breast cancer cell at all. The breast cancer cell line had been crowded out by skin cancer cells.

“We now know with certainty that the MDA-435 cell line is identical to a melanoma cell line,” Ethier says.

And it turns out that contamination traces back for decades. Several scientists published papers about this to alert the field, “but nevertheless, there are people out there who haven’t gotten the memo, apparently,” he says.

Decades' worth of work and more than a thousand published research papers were supposed to add up to a lot of knowledge about a particular kind of breast cancer cell, except it wasn't knowledge about breast cancer cells at all because the cells in the cell line had been misidentified. Scientists probably learned something from that work, but it isn't the knowledge they thought they had before the contamination was detected.

On the basis of the discovery that this much knowledge-building had been compromised by being based on misidentified cells, you might imagine researchers would prioritize precise identification of the cells they use. But, as Harris found, this obvious bit of quality control meets resistance. For one thing, researchers seem unwilling to pay the extra financial costs it would take:

This may all come down to money. Scientists can avoid most of these problems by purchasing cells from a company that routinely tests them. But most scientists would rather walk down the hall and borrow cells from another lab.

“Academics share their cell lines like candy because they don’t want to go back and spend another $300,” said Richard Neve from Genentech. “It is economics. And they don’t want to spend another $100 to [verify] that’s still the same cell line.”

Note here that scientists could still economize by sharing cell lines with their colleagues instead of purchasing them but paying for the tests to nail down the identity of the shared cells. However, many do not.

(Consider, though, how awkward it might be to test cells you’ve gotten from a colleague only to discover that they are not the kind of cells your colleague thought they were. How do you break the news to your colleague that their work — including published papers in scientific journals — is likely to be mistaken and misleading? And how likely would other colleagues be to share their cell lines with you, knowing that you might bring them similarly bad news as a result of their generosity?)

Journals like Nature have tried to encourage scientists to test their cell lines by adding cell-line authentication to a checklist for authors submitting papers. Most authors do not check the box indicating they have tested their cells.

One result here is that the knowledge that comes from these studies and gets reported in scientific journals may not be as solid as it seems:

When scientists at [Genentech] find an intriguing result from an academic lab, the first thing they do is try to replicate the result.

Neve said often they can’t, and misidentified cells are a common reason.

This is a problem that is not just of concern to scientists. The rest of us depend on scientists to build reliable knowledge about the world in part because it might matter for what kinds of treatments are developed for diseases that affect us. Moreover, much of this research is paid for with public money — which means the public has an interest in whether the funding is doing what it is supposed to be doing.

However, Harris notes that funding agencies seem unwilling to act decisively to address the issue of research based on misidentified cell lines:

“We are fully convinced that this is a significant enough problem that we have to take steps to address it,” Jon Lorsch, director of the NIH’s National Institute of General Medical Sciences, said during the panel discussion.

One obvious step would be to require scientists who get federal funding to test their cells. Howard Soule, chief science officer at the Prostate Cancer Foundation, said that’s what his charity requires of the scientists it funds.

There’s a commercial lab that will run this test for about $140, so “this is not going to break the bank,” Soule said.

But Lorsch at the NIH argued that it’s not so simple on the scale at which his institute hands out funding. “We really can’t go and police 10,000 grants,” Lorsch said.

“Sure you can,” Soule shot back. “How can you not?”

Lorsch said if they do police this issue, “there are dozens and dozens of other issues” that the NIH should logically police as well. “It becomes a Hydra,” Lorsch said. “You know, you chop off one head and others grow.”

Biomedical research gets more expensive all the time, and the NIH is reluctant to pile on a whole bunch of new rules. It’s a balancing act.

“If we become too draconian we’re going to end up squashing creativity and slowing down research, which is not good for the taxpayers because they aren’t going to get as much for their money,” Lorsch said.

To my eye, Lorsch’s argument against requiring researchers to test their cells focuses on the competitive aspect of scientific research to the exclusion of the knowledge-building aspect.

What does it matter if the taxpayers get more research generated and published if a significant amount of that research output is irreproducible because of misidentified cells? In the absence of tests to properly identify the cells being used, there’s no clear way to tell just by looking at the journal articles which ones are reliable and which ones are not. Post-publication quality control requires researchers to repeat experiments and compare their results to those published, something that will cost significantly more than if the initial researchers tested their cells in the first place.

However, research funding is generally awarded to build new knowledge, not to test existing knowledge claims. Scientists get credit for making new discoveries, not for determining that other scientists’ discoveries can be reproduced.

NIH could make it a condition of funding that researchers working with cell lines get those cell lines tested, and arguably this would be the most cost-efficient way to ensure results that are reliable rather than based on misidentification. I find unpersuasive Lorsch’s claim that because there are dozens of other kinds of quality control of this sort NIH could demand, it cannot demand this one. Even if there are many things to fix, it doesn’t follow that you must fix them all at once. Incremental improvements in quality control are surely better than none at all.

His further suggestion that engaging in NIH-mandated quality control will quash scientific creativity strikes me as silly. Scientists are at their most creative when they are working within constraints to solve problems. Indeed, were NIH to require that researchers test their cells, there is no reason to think this additional constraint could not be easily incorporated into researchers’ current competition for NIH funding.

The big question, really, is whether NIH is prioritizing funding a higher volume of research, or higher quality research. Presumably, the public is better served by a smaller number of published studies that make reliable claims about the actual cells researchers are working with than by a large number of published studies making hard-to-verify claims about misidentified cells.

If scientific competition is inescapable, at least let’s make sure that the incentives encourage the careful steps required to build reliable knowledge. If those careful steps are widely seen as an impediment to succeeding in the competition, we derail the goal that the competitive pressures were supposed to enhance.

James Watson’s sense of entitlement, and misunderstandings of science that need to be countered.

James Watson, who shared a Nobel Prize in 1962 for discovering the double helix structure of DNA, is in the news, offering his Nobel Prize medal at auction. As reported by the Telegraph:

Mr Watson, who shared the 1962 Nobel Prize for uncovering the double helix structure of DNA, sparked an outcry in 2007 when he suggested that people of African descent were inherently less intelligent than white people.

If the medal is sold Mr Watson said he would use some of the proceeds to make donations to the “institutions that have looked after me”, such as University of Chicago, where he was awarded his undergraduate degree, and Clare College, Cambridge.

Mr Watson said his income had plummeted following his controversial remarks in 2007, which forced him to retire from the Cold Spring Harbor Laboratory on Long Island, New York. He still holds the position of chancellor emeritus there.

“Because I was an ‘unperson’ I was fired from the boards of companies, so I have no income, apart from my academic income,” he said.

He would also use some of the proceeds to buy an artwork, he said. “I really would love to own a [painting by David] Hockney”. …

Mr Watson said he hoped the publicity surrounding the sale of the medal would provide an opportunity for him to “re-enter public life”. Since the furore in 2007 he has not delivered any public lectures.

There’s a lot I could say here about James Watson, the assumptions under which he is laboring, and the potential impacts on science and the public’s engagement with it. In fact, I have said much of it before, although not always in reference to James Watson in particular. However, given the likelihood that we’ll keep hearing the same unhelpful responses to James Watson and his ilk if we don’t grapple with some of the fundamental misunderstandings of science at work here, it’s worth covering this ground again.

First, I’ll start with some of the claims I see Watson making around his decision to auction his Nobel Prize medal:

  • He needs money, given that he has “no income beyond [his] academic income”. One might take this as an indication that academic salaries in general ought to be raised (although I’m willing to bet a few bucks that Watson’s inadequate academic income is at least as much as that of the average academic actively engaged in research and/or teaching in the U.S. today). However, Watson gives no sign of calling for such an across-the-board increase, since…
  • He connects his lack of income to being fired from boards of companies and to his inability to book public speaking engagements after his 2007 remarks on race.
  • He equates this removal from boards and lack of invitations to speak with being an “unperson”.

What comes across to me here is that James Watson sees himself as special, as entitled to seats on boards and speaker invitations. On what basis, we might ask, is he entitled to these perks, especially in the face of a scientific community just brimming with talented members currently working at the cutting edge(s) of scientific knowledge-building? It is worth noting that some who attended recent talks by Watson judged them to be nothing special.

Possibly, then, speaking engagements may have dried up at least partly because James Watson was not such an engaging speaker — with an asking price of $50,000 for a paid speaking engagement, whether you give a good talk is a relevant criterion — rather than being driven entirely by his remarks on race in 2007, or before 2007. However, Watson seems sure that these remarks are the proximate cause of his lack of invitations to give public talks since 2007. And, he finds this result not to be in accord with what a scientist like himself deserves.

Positioning James Watson as a very special scientist who deserves special treatment above and beyond the recognition of the Nobel committee feeds the problematic narrative of scientific knowledge as an achievement of great men (and yes, in this narrative, it is usually great men who are recognized). This narrative ignores the fundamentally social nature of scientific knowledge-building and the fact that objectivity is the result of teamwork.

Of course, it’s even more galling to have James Watson portrayed (including by himself) as an exceptional hero of science rather than as part of a knowledge-building community given the role of Rosalind Franklin’s work in determining the structure of DNA — and given Watson’s apparent contempt for Franklin, rather than regard for her as a member of the knowledge-building team, in The Double Helix.

Indeed, part of the danger of the hero narrative is that scientists themselves may start to believe it. They can come to see themselves as individuals possessing more powers of objectivity than other humans (thus fundamentally misunderstanding where objectivity comes from), with privileged access to truth, with insights that don’t need to be rigorously tested or supported with empirical evidence. (Watson’s 2007 claims about race fit in this territory.)

An even bigger problem arises when scientists make authoritative claims beyond what science can support. To the extent that the public also buys into the hero narrative of science, that public is likely to take what Nobel Prize winners say as authoritative, even in the absence of good empirical evidence. Here Watson keeps company with William Shockley and his claims on race, Kary Mullis and his claims on HIV, and Linus Pauling and his advocacy of mega-doses of vitamin C. Some may argue that non-scientists need to be more careful consumers of scientific claims, but it would surely help if scientists themselves would recognize the limits of their own expertise and refrain from overselling either their claims or their individual knowledge-building power.

Where Watson’s claims about race are concerned, the harm of positioning him as an exceptional scientist goes further than reinforcing a common misunderstanding of where scientific knowledge comes from. These views, asserted authoritatively by a Nobel Prize winner, give cover to people who want to believe that their racist views are justified by scientific knowledge.

As well, as I have argued before (in regard to Richard Feynman and sexism), the hero narrative can be harmful to the goal of scientific outreach given the fact that human scientists usually have some problematic features and that these problematic features are often ignored, minimized, or even justified (e.g., as “a product of the time”) in order to foreground the hero’s great achievement and sell the science. There seems to be no shortage of folks willing to label Watson’s racist views as unfortunate but also as something that should not overshadow his discovery of the structure of DNA. In order that the unfortunate views not overshadow the big scientific contribution, some of these folks would rather we stop talking about Watson’s having made the claims he has made about racial difference (although Watson shows no apparent regret for holding these views, only for having voiced them to reporters).

However, especially for people in the groups that James Watson has claimed are genetically inferior, asserting that Watson’s massive scientific achievement trumps his problematic claims about race can be alienating. His scientific achievement doesn’t magically remove the malign effects of the statements he has made from a very large soapbox, using his authority as a Nobel Prize winning scientist. Ignoring those malign effects, or urging people to ignore them because of the scientific achievement which gave him that big soapbox, sounds an awful lot like saying that including the whole James Watson package in science is more important than including black people as scientific practitioners or science fans.

The hero narrative gives James Watson’s claims more power than they deserve. The hero narrative also makes urgent the need to deem James Watson’s “foibles” forgivable so we can appreciate his contribution to knowledge. None of this is helpful to the practice of science. None of it helps non-scientists engage more responsibly with scientific claims or scientific practitioners.

Holding James Watson to account for his claims, holding him responsible for scientific standards of evidence, doesn’t render him an unperson. Indeed, it amounts to treating him as a person engaged in the scientific knowledge-building project, as well as a person sharing a world with the rest of us.

* * * * *
Michael Hendricks offers a more concise argument against the hero narrative in science.

And, if you’re not up on the role of Rosalind Franklin in the discovery of the structure of DNA, these seventh graders can get you started.

Kitchen science: evaluating methods of self-defense against onions.

Background
I hate chopping onions. They make me cry within seconds, and those tears both hurt and obscure my view of onions, knife, and fingertips (which can lead to additional injuries).

The chemical mechanism by which onions cause this agony is well known. Less well known are effective methods to prevent or mitigate this agony in order to get through chopping the quantities of onions that need to be chopped for a Thanksgiving meal.

So, I canvassed sources (on Twitter) for possible interventions and tested them.

Self-defense against onions

Materials & Methods
1 lb. yellow onions (all room temperature except 1/2 onion frozen, in a plastic sandwich bag, for 25 min)
sharp knife
cutting board
stop-watch (I used the one on my phone)
video capture (I used iMovie)
slice of bread
metal table spoon
swim goggles
tea candle
portable fan

General procedure:
1. Put proposed intervention in place.
2. Start the stop-watch and start chopping onions.
3. Stop stop-watch when onion-induced tears are agonizing; note time elapsed from start of trial.
4. Allow eyes to clear (2-5 min) before testing next intervention.

Results

Here are the interventions I tested, with the time to onion-induced eyeball agony observed:

Slice of bread in the mouth: 46 sec
Metal spoon in the mouth: 62 sec
Candle burning near cutting-board: 80 sec
Onion chilled in freezer: 86 sec
Fan blowing across cutting-board: 106 sec
Swim goggles: No agony!

Note that each intervention was tested exactly once, by a single experimental subject (me) to generate this data. If there’s any order effect from testing one intervention right after another, I haven’t controlled for it here, and your onion-induced eyeball agony may vary.

Also, I did not test these interventions against a control (which here would be me chopping an onion with no intervention). So, on the basis of this experiment, I cannot tell you persuasively that the worst of these interventions is any better than just chopping onions with no interventions. (On the basis of my recent onion-chopping recollections, I can tell you that even the slice of bread in the mouth seemed to help a little — but THIS IS SCIENCE, where we use our tearing eyes to look askance at anecdata.)
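For readers who want to fiddle with these (admittedly single-trial) numbers themselves, here is a minimal Python sketch that ranks the interventions by time-to-agony, treating the goggles’ “no agony” result as an unbounded delay. The dictionary simply transcribes the results listed above; nothing here adds statistical rigor the experiment lacks.

```python
# Times (in seconds) to onion-induced eyeball agony for each intervention,
# transcribed from the single-trial results above. Swim goggles produced
# no agony at all, recorded here as infinity.
times = {
    "slice of bread in the mouth": 46,
    "metal spoon in the mouth": 62,
    "candle burning near cutting-board": 80,
    "onion chilled in freezer": 86,
    "fan blowing across cutting-board": 106,
    "swim goggles": float("inf"),
}

# Rank interventions from most to least effective (longest delay first).
ranked = sorted(times.items(), key=lambda item: item[1], reverse=True)

for intervention, seconds in ranked:
    label = "no agony" if seconds == float("inf") else f"{seconds} sec"
    print(f"{intervention}: {label}")
```

With only one trial per intervention and no control, this ranking is suggestive at best; the sorting just makes the ordering explicit.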

Discussion

The most successful intervention in my trials was wearing goggles. This makes sense, as the goggles provide a barrier between the eyeballs and the volatile chemicals released when the onions are cut.

The fan and the burning candle deal with those volatile chemicals a different way, either by blowing them away from the eyes, or … well, with the candle, the likely mechanism is murkier. Maybe it’s that those volatile compounds get drawn to the flame and involved in the combustion reaction there? Or that the compounds released by the candle burning compete with those released by the cut onion for access to the eyeball? However it’s supposed to work, compared to the barrier-method of the goggles, the candle method was less successful. Even the fan couldn’t keep some of those volatile compounds from getting to the eyeballs and doing their teary work.

Cooling the onion was somewhat successful, too, likely because at a lower temperature the volatile compounds in the onion vaporize less readily. There may be a side effect of this method for those chopping onions for culinary use, in that freezing long enough may change the texture of the onion permanently (i.e., even when returned to room temperature).

I am not sure by what mechanism a slice of bread or a metal spoon in the mouth is supposed to protect one’s eyes from the volatile compounds released by onions. Maybe it’s just supposed to distract you from your eyes? Maybe the extra saliva produced is supposed to get involved somehow? Who knows? However, note that it was possible for us to empirically test these methods even in the absence of a proposed mechanism.

Conclusion
If you have lots of onions to chop and don’t have a proper fume hood in your kitchen, a pair of goggles that makes a secure seal around your eyes can provide some protection from onion-induced eyeball agony. Failing that, chilling the onions before chopping and/or setting up a fan to blow across your chopping surface may help.

A guide for science guys trying to understand the fuss about that shirt.

This is a companion to the last post, focused more specifically on the question of how men in science who don’t really get what the fuss over Rosetta mission Project Scientist Matt Taylor’s shirt was about could get a better understanding of the objections — and of why they might care.


Conduct of scientists (and science writers) can shape the public’s view of science.

Scientists undertake a peculiar kind of project. In striving to build objective knowledge about the world, they are tacitly recognizing that our unreflective picture of the world is likely to be riddled with mistakes and distortions. On the other hand, they frequently come to regard themselves as better thinkers — as more reliably objective — than humans who are not scientists, and end up forgetting that they have biases and blindspots of their own which they are helpless to detect without help from others who don’t share these particular biases and blindspots.

Building reliable knowledge about the world requires good methodology, teamwork, and concerted efforts to ensure that the “knowledge” you build doesn’t simply reify preexisting individual and cultural biases. It’s hard work, but it’s important to do it well — especially given a long history of “scientific” findings being used to justify and enforce preexisting cultural biases.

I think this bigger picture is especially appropriate to keep in mind in reading the response from Scientific American Blogs Editor Curtis Brainard to criticisms of a pair of problematic posts on the Scientific American Blog Network. Brainard writes:

The posts provoked accusations on social media that Scientific American was promoting sexism, racism and genetic determinism. While we believe that such charges are excessive, we share readers’ concerns. Although we expect our bloggers to cover controversial topics from time to time, we also recognize that sensitive issues require extra care, and that did not happen here. The author and I have discussed the shortcomings of the two posts in detail, including the lack of attention given to countervailing arguments and evidence, and he understood the deficiencies.

As stated at the top of every post, Scientific American does not always share the views and opinions expressed by our bloggers, just as our writers do not always share our editorial positions. At the same time, we realize our network’s bloggers carry the Scientific American imprimatur and that we have a responsibility to ensure that—differences of opinion notwithstanding—their work meets our standards for accuracy, integrity, transparency, sensitivity and other attributes.

(Bold emphasis added.)

The problem here isn’t that the posts in question advocated sound scientific views with implications that people on social media didn’t like. Rather, the posts presented claims in a way that made them look like they had much stronger scientific support than they really do — and did so in the face of ample published scientific counterarguments. Scientific American is not requiring that posts on its blog network meet a political litmus test, but rather that they embody the same kind of care, responsibility to the available facts, and intellectual honesty that science itself should display.

This is hard work, but it’s important. And engaging seriously with criticism, rather than just dismissing it, can help us do it better.

There’s an irony in the fact that one of the problematic posts which ignored some significant relevant scientific literature (helpfully cited by commenters in the comments section of that very post) was ignoring that literature in the service of defending Larry Summers and his remarks suggesting possible innate biological factors that make men better at math and science than women. The irony lies in the fact that Larry Summers displayed an apparently ironclad commitment to ignore any and all empirical findings that might challenge his intuition that there’s something innate at the heart of the gender imbalance in math and science faculty.

Back in January of 2005, Larry Summers gave a speech at a conference about what can be done to attract more women to the study of math and science, and to keep them in the field long enough to become full professors. In his talk, Summers suggested as a possible hypothesis for the relatively low number of women in math and science careers that there may be innate biological factors that make males better at math and science than females. (He also related an anecdote about his daughter naming her toy trucks as if they were dolls, but it’s fair to say he probably meant this anecdote to be illustrative rather than evidentiary.)

The talk did not go over well with the rest of the participants in the conference.

Several scientific studies were presented at the conference before Summers made his speech. All these studies presented significant evidence against the claim of an innate difference between males and females that could account for the “science gap”.


In the aftermath of this conference of yore, there were some commenters who lauded Summers for voicing “unpopular truths” and others who distanced themselves from his claims but said they supported his right to make them as an exercise of “academic freedom.”

But if Summers was representing himself as a scientist* when he made his speech, I don’t think the “academic freedom” defense works.


Summers is free to state hypotheses — even unpopular hypotheses — that might account for a particular phenomenon. But, as a scientist, he is also responsible to take account of data relevant to his hypotheses. If the data weighs against his preferred hypothesis, intellectual honesty requires that he at least acknowledge this fact. Some would argue that it could even require that he abandon his hypothesis (since science is supposed to be evidence-based whenever possible).


When news of Summers’ speech, and reactions to it, was fresh, one of the details that stuck with me was that one of the conference organizers noted to Summers, after he gave his speech, that there was a large body of evidence — some of it presented at that very conference — that seemed to undermine his hypothesis, after which Summers gave a reply that amounted to, “Well, I don’t find those studies convincing.”

Was Summers within his rights to not believe these studies? Sure. But, he had a responsibility to explain why he rejected them. As a part of a scientific community, he can’t just reject a piece of scientific knowledge out of hand. Doing so comes awfully close to undermining the process of communication that scientific knowledge is based upon. You aren’t supposed to reject a study because you have more prestige than the authors of the study (so, you don’t have to care what they say). You can question the experimental design, you can question the data analysis, you can challenge the conclusions drawn, but you have to be able to articulate the precise objection. Surely, rejecting a study just because it doesn’t fit with your preferred hypothesis is not an intellectually honest move.


By my reckoning, Summers did not conduct himself as a responsible scientist in this incident. But I’d argue that the problem went beyond a lack of intellectual honesty within the universe of scientific discourse. Summers is also responsible for the bad consequences that flowed from his remark.


The bad consequence I have in mind here is the mistaken view of science and its workings that Summers’ conduct conveys. Especially by falling back on a plain vanilla “academic freedom” defense here, defenders of Summers conveyed to the public at large the idea that any hypothesis in science is as good as any other. Scientists who are conscious of the evidence-based nature of their field will see the absurdity of this idea — some hypotheses are better, others worse, and whenever possible we turn to the evidence to make these discriminations. Summers compounded ignorance of the relevant data with what came across as a statement that he didn’t care what the data showed. From this, the public at large could assume he was within his scientific rights to decide which data to care about without giving any justification for this choice**, or they could infer that data has little bearing on the scientific picture of the world.

Clearly, such a picture of science would undermine the standing of the rest of the bits of knowledge produced by scientists far more intellectually honest than Summers.


Indeed, we might go further here. Not only did Summers have some responsibilities that seemed to have escaped him while he was speaking as a scientist, but we could argue that the rest of the scientists (whether at the conference or elsewhere) have a collective responsibility to address the mistaken picture of science his conduct conveyed to society at large. It’s disappointing that, nearly a decade later, we instead have to contend with the problem of scientists following in Summers’ footsteps by ignoring, rather than engaging with, the scientific findings that challenge their intuitions.

Owing to the role we play in presenting a picture of what science knows and of how scientists come to know it to a broader audience, those of us who write about science (on blogs and elsewhere) also have a responsibility to be clear about the kind of standards scientists need to live up to in order to build a body of knowledge that is as accurate and unbiased as humanly possible. If we’re not clear about these standards in our science writing, we risk misleading our audiences about the current state of our knowledge and about how science works to build reliable knowledge about our world. Our responsibility here isn’t just a matter of noticing when scientists are messing up — it’s also a matter of acknowledging and correcting our own mistakes and of working harder to notice our own biases. I’m pleased that our Blogs Editor is committed to helping us fulfill this duty.
_____
*Summers is an economist, and whether to regard economics as a scientific field is a somewhat contentious matter. I’m willing to give the scientific status of economics the benefit of the doubt, but this means I’ll also expect economists to conduct themselves like scientists, and will criticize them when they do not.

**It’s worth noting that a number of the studies that Summers seemed to be dismissing out of hand were conducted by women. One wonders what lessons the public might draw from that.

_____
A portion of this post is an updated version of an ancestor post on my other blog.

A suggestion for those arguing about the causal explanation for fewer women in science and engineering fields.

People are complex, as are the social structures they build (including but not limited to educational institutions, workplaces, and professional communities).

Accordingly, the appropriate causal stories to account for the behaviors and choices of humans, individually and collectively, are bound to be complex. It will hardly ever be the case that there is a single cause doing all the work.

However, there are times when people seem to lose the thread when they spin their causal stories. For example:

The point of focusing on innate psychological differences is not to draw attention away from anti-female discrimination. The research clearly shows that such discrimination exists—among other things, women seem to be paid less for equal work. Nor does it imply that the sexes have nothing in common. Quite frankly, the opposite is true. Nor does it imply that women—or men—are blameworthy for their attributes.

Rather, the point is that anti-female discrimination isn’t the only cause of the gender gap. As we learn more about sex differences, we’ve built better theories to explain the non-identical distribution of the sexes among the sciences. Science is always tentative, but the latest research suggests that discrimination has a weaker impact than people might think, and that innate sex differences explain quite a lot.

What I’m seeing here is a claim that amounts to “there would still be a gender gap in the sciences even if we eliminated anti-female discrimination” — in other words, that the causal powers of innate sex differences would be enough to create a gender gap.

To this claim, I would like to suggest:

1. that there is absolutely no reason not to work to eliminate anti-female discrimination; whether or not there are other causes that are harder to change, such discrimination seems like something we can change, and it has negative effects on those subject to it;

2. that it is an empirical question whether, in the absence of anti-female discrimination, there would still be a gender gap in the sciences; given the complexity of humans and their social structures, controlled studies in psychology are models of real life that abstract away lots of details*, and when the rubber hits the road in the real phenomena we are modeling, things may play out differently.

Let’s settle the question of how much anti-female discrimination matters by getting rid of it.

_____
* This is not a special problem for psychology. All controlled experiments are abstracting away details. That’s what controlling variables is all about.

Pub-Style Science: dreams of objectivity in a game built around power.

This is the third and final installment of my transcript of the Pub-Style Science discussion about how (if at all) philosophy can (or should) inform scientific knowledge-building. Leading up to this part of the conversation, we were considering the possibility that the idealization of the scientific method left out a lot of the details of how real humans actually interact to build scientific knowledge …

Dr. Isis: And that’s the tricky part, I think. That’s where this becomes a messy endeavor. You think about the parts of the scientific method, and you write the scientific method out, we teach it to our students, it’s on the little card, and I think it’s one of the most amazing constructs that there is. It’s certainly a philosophy.

I have devoted my career to the scientific method, and yet it’s that last step that is the messiest. We take our results and we interpret them, we either reject or fail to reject the hypothesis, and in a lot of cases, the way we interpret the very objective data that we’re getting is based on the social and cultural constructs of who we are. And the messier part is that the who we are — you say that science is done around the world, sure, but really, who is it done by? We all get the CV, “Dear honorable and most respected professor…” And what do you do with those emails? You spam them. But why? Why do we do that? There are people [doing science] around the world, and yet we reject their science-doing because of who they are and where they’re from and our understanding, our capacity to take [our doing] of that last step of the scientific method as superior because of some pedigree of our training, which is absolutely rooted in the narrowest sliver of our population.

And that’s the part that frightens me about science. Going from lab to lab and learning things, you’re not just learning objective skills, you’re learning a political process — who do you shake hands with at meetings, who do you have lunch with, who do you have drinks with, how do you phrase your grants in a particular way so they get funded because this is the very narrow sliver of people who are reading them? And I have no idea what to do about that.

Janet Stemwedel: I think this is a place where the acknowledgement that's embodied in editorial policies of journals like PLOS ONE, that we can't actually reliably predict what's going to be important, is a good step forward. That's saying, look, what we can do is talk about whether this is a result that seems to be robust: this is how I got it; I think if you try to get it in your lab, you're likely to get it, too; this is why it looked interesting to me in light of what we knew already. Without saying: oh, and this is going to be the best thing since sliced bread. At least that's acknowledging a certain level of epistemic humility that it's useful for the scientific community to put out there, not to pretend that the scientific method lets you see into the future. Because last time I checked, it doesn't.

(46:05)
Andrew Brandel: I just want to build on this point, that this question of objective truth also is a question that is debated hotly, obviously, in science, and I will get in much trouble for my vision of what is objective and what is not objective. This question of whether, to quote a famous philosopher of science, we’re all looking at the same world through different-colored glasses, or whether there’s something more to it, if we’re actually talking about nature in different ways, if we can really learn something not even from science being practiced wherever in the world, but from completely different systems of thinking about how the world works. Because the other part of this violence is not just the ways in which certain groups have not been included in the scientific community, the professional community, which was controlled by the church and wealthy estates and things, but also with the institutions like the scientific method, like certain kinds of philosophy. A lot of violence has been propagated in the name of those things. So I think it’s important to unpack not just this question of let’s get more voices to the table, but literally think about how the structures of what we’re doing themselves — the way the universities are set up, the way that we think about what science does, the way that we think about objective truth — also propagate certain kinds of violence, epistemic kinds of violence.

Michael Tomasson: Wait wait wait, this is fascinating. Epistemic violence? Expand on that.

Andrew Brandel: What I mean to say is, part of the problem, at least from the view of myself — I don't want to actually represent anybody else — is that if we think that we're getting to some better method of getting to objective truth, if we think that we have — even if it's only in an ideal state — some sort of cornerstone, some sort of key to the reality of things as they are, then we can squash the other systems of thinking about the world. And that is also a kind of violence, in a way, that's not just the violence of there's no women at the table, there's no different kinds of people at the table. But there's actually another kind of power structure that's embedded in the very way that we think about truths. So, for example, a famous anthropologist, Lévi-Strauss, would always point out that the botanists would go to places in Latin America and they would identify 14 different kinds of XYZ plant, and the people living in that jungle who aren't scientists or don't have that kind of sophisticated knowledge could distinguish like 45 kinds of these plants. And they took them back to the lab, and they were completely right.

So what does that mean? How do we think about these different ways [of knowing]? I think unpacking that is a big thing that social science and philosophy of science can bring to this conversation, pointing out when there is a place to critique the ways in which science becomes like an ideology.

Michael Tomasson: That just sort of blew my mind. I have to process that for a while. I want to pick up on something you're saying and that I think Janet said before, which is really part of the spirit of what Pub-Style Science is all about, the idea that if we get more different kinds of voices into science, we'll have a little bit better science at the other end of it.

Dr. Rubidium: Yeaaaah. We can all sit around like, I've got a ton of great ideas, and that's fabulous, and new voices, and rah rah. But where are the new voices? If the new voices — or what you would call new voices, or new opinions, or different opinions (maybe not even new, just different from the current power structure) — aren't getting to positions of real power to effect change, it doesn't matter how many foot soldiers you get on the ground. You have got to get people into the position of being generals. And is that happening? No. I would say no.

Janet Stemwedel: Having more different kinds of people at the table doesn’t matter if you don’t take them seriously.

Andrew Brandel: Exactly. That’s a key point.

Dr. Isis: This is the tricky thing that I sort of alluded to. And I’m not talking about diverse voices in terms of gender and racial and sexual orientation diversity and disability issues. I’m talking about just this idea of diverse voices. One of the things that is tricky, again, is that to get to play the game you have to know the rules, and trying to change the rules too early — one, I think it’s dangerous to try to change the rules before you understand what the rules even are, and two, that is the quickest way to get smacked in the nose when you’re very young. And now, to extend that to issues of actual diversity in science, at least my experience has been that some of the folks who are diverse in science are some of the biggest rule-obeyers. Because you have to be in order to survive. You can’t come in and be different as it is and decide you’re going to change the rules out from under everybody until you get into that — until you become a general, to use Dr. Rubidium’s analogy. The problem is, by the time you become the general, have you drunk enough of the Kool-Aid that you remember who you were? Do you still have enough of yourself to change the system? Some of my more senior colleagues, diverse colleagues, who came up the ranks, are some of the biggest believers in the rules. I don’t know if they felt that way when they were younger folks.

Janet Stemwedel: Part of it can be, if the rules work for you, there's less incentive to think about changing them. But this is one of those places where those of us philosophers who think about where the knowledge-building bumps up against the ethics will say: look, the ethical responsibilities of the people in the community with more power are different from the ethical responsibilities of the people in the community who are just coming up, because they don't have as much weight to throw around. They don't have as much power. So I talk a lot to mid-career and late-career scientists and say, hey look, you want to help build a different community, a different environment for the people you're training? You've got to put some skin in the game to make that happen. You're in a relatively safe place to throw that weight around. You do that!

And you know, I try to make these prudential arguments about, if you shift around the incentive structures [in various ways], what’s likely to produce better knowledge on the other end? That’s presumably why scientists are doing science, ’cause otherwise there’d be some job that they’d be doing that takes up less time and less brain.

Andrew Brandel: This is a question also of where ethics and epistemic issues also come together, because I think that’s really part of what kind of radical politics — there’s a lot of different theories about what kind of revolution you can talk about, what a revolutionary politics might be to overthrow the system in science. But I think this issue that it’s also an epistemic thing, that it’s also a question of producing better knowledge, and that, to bring back this point about how it’s not just about putting people in positions, it’s not just hiring an assistant professor from XYZ country or more women or these kinds of things, but it’s also a question of putting oneself sufficiently at risk, and taking seriously the possibility that I’m wrong, from radically different positions. That would really move things, I think, in a more interesting direction. That’s maybe something we can bring to the table.

Janet Stemwedel: This is the piece of Karl Popper, by the way, that scientists like as an image of what kind of tough people they are. Scientists are not trying to prove their hypotheses, they’re trying to falsify them, they’re trying to show that they’re wrong, and they’re ready to kiss even their favorite hypothesis goodbye if that’s what the evidence shows.

Some of those hypotheses that scientists need to be willing to kiss goodbye have to do with narrow views of what kind of details count as fair game for building real reliable knowledge about the world and what kind of people and what kind of training could do that, too. Scientists really have to be more evidence-attentive around issues like their own implicit bias. And for some reason that's really hard, because scientists think that individually they are way more objective than the average bear. The real challenge of science is recognizing that we are all average bears, and it is just the coordination of our efforts within this particular methodological structure that gets us something better than the individual average bear could get by him- or herself.

Michael Tomasson: I’m going to backpedal as furiously as I can, since we’re running out of time. So I’ll give my final spiel and then we’ll go around for closing comments.

I guess I will pare down my skeleton-key: I think there’s an idea of different ways of doing science, and there’s a lot of culture that comes with it that I think is very flexible. I think what I’m getting at is, is there some universal hub for whatever different ways people are looking at science? Is there some sort of universal skeleton or structure? And I guess, if I had to backpedal furiously, that I would say, what I would try to teach my folks, is number one, there is an objective world, it’s not just my opinion. When people come in and talk to me about their science and experiments, it’s not just about what I want, it’s not just about what I think, it’s that there is some objective world out there that we’re trying to describe. The second thing, the most stripped-down version of the scientific method I can think of, is that in order to understand that objective world, it helps to have a hypothesis, a preconceived notion, first to challenge.

What I get frustrated about, and this is just a very practical day-to-day thing, is I see people coming and doing experiments saying, “I have no preconceived notion of how this should go, I did this experiment, and here’s what I got.” It’s like, OK, that’s very hard to interpret unless you start from a certain place — here’s my prediction, here’s what I think was going on — and then test it.

Dr. Isis: I’ll say, Tomasson, actually this wasn’t as boring as I thought it would be. I was really worried about this one. I wasn’t really sure what we were supposed to be talking about — philosophy and science — but this one was OK. So, good on you.

But, I think that I will concur with you that science is about seeking objective truth. I think it’s a darned shame that humans are the ones doing the seeking.

Janet Stemwedel: You know, dolphin science would be completely different, though.

Dr. Rubidium: Yeah, dolphins are jerks! What are you talking about?

Janet Stemwedel: Exactly! All their journals would be behind paywalls.

Andrew Brandel: I'll just say that I was saying to David, who I know is a regular member of your group, that I think it's a good step in the right direction to have these conversations. I don't think we as social scientists, even those of us who work in science settings, get asked enough to talk about these issues, and to talk about, what are the ethical and epistemic stakes involved in doing what we do? What can we bring to the table on similar kinds of questions? For me, this question of cultivating a kind of openness to being wrong is so central to thinking about the kind of science that I do. I think that these kinds of conversations are important, and we need to generate some kind of momentum. I jokingly said to Tomasson that we need a grant to pay for a workshop to get more people into these types of conversations, because I think it's significant. It's a step in the right direction.

Janet Stemwedel: I’m inclined to say one of the take-home messages here is that there’s a whole bunch of scientists and me, and none of you said, “Let’s not talk about philosophy at all, that’s not at all useful.” I would like some university administrators to pay attention to this. It’s possible that those of us in the philosophy department are actually contributing something that enhances not only the fortunes of philosophy majors but also the mindfulness of scientists about what they’re doing.

I’m pretty committed to the idea that there is some common core to what scientists across disciplines and across cultures are doing to build knowledge. I think the jury’s still out on what precisely the right thing to say about that common core of the scientific method is. But, I think there’s something useful in being able to step back and examine that question, rather than saying, “Science is whatever the hell we do in my lab. And as long as I keep doing all my future knowledge-building on the same pattern, nothing could go wrong.”

Dr. Rubidium: I think that for me, I’ll echo Isis’s comments: science is an endeavor done by people. And people are jerks — No! With people, then, if you have this endeavor, this job, whatever you want to call it — some people would call it a calling — once people are involved, I think it’s essential that we talk about philosophy, sociology, the behavior of people. They are doing the work. It doesn’t make sense to me, then — and I’m an analytical chemist and I have zero background in all of the social stuff — it doesn’t make sense to me that you would have this thing done by people and then actually say with a straight face, “But let’s not talk about people.” That part just doesn’t compute. So I think these conversations definitely need to continue, and I hope that we can talk more about the people behind the endeavor and more about the things attached to their thoughts and behaviors.

* * * * *

Part 1 of the transcript.

Part 2 of the transcript.

Archived video of this Pub-Style Science episode.

Storify’d version of the simultaneous Twitter conversation.

You should also check out Dr. Isis’s post on why the conversations that happen in Pub-Style Science are valuable to scientists-in-training.

Pub-Style Science: exclusion, inclusion, and methodological disputes.

This is the second part of my transcript of the Pub-Style Science discussion about how (if at all) philosophy can (or should) inform scientific knowledge-building, wherein we discuss methodological disputes, who gets included or excluded in scientific knowledge-building, and ways the exclusion or inclusion might matter. Also, we talk about power gradients and make the scary suggestion that “the scientific method” might be a lie…

Michael Tomasson: Rubidium, you got me started on this. I made a comment on Twitter about our aspirations to build objective knowledge and that that was what science was about, and whether there’s sexism or racism or whatever other -isms around is peripheral to the holy of holies, which is the finding of objective truth. And you made … a comment.

Dr. Rubidium: I think I told you that was cute.

Michael Tomasson: Let me leverage it this way: One reason I think philosophy is important is the basics of structure, of hypothesis-driven research. The other thing I'm kind of intrigued by is part of Twitter culture and what we're doing with Pub-Style Science is to throw the doors open to people from different cultures and different backgrounds and really say, hey, we want to have science that's not just a white bread monoculture, but have it be a little more open. But does that mean that everyone can bring their own way of doing science? It sounds like Andrew might say, well, there's a lot of different ways, and maybe everyone who shows up can bring their own. Maybe one person wants a hypothesis, another doesn't. Does everybody get to do their own thing, or do we need to educate people in the one way to do science?

As I mentioned on my blog, I had never known that there was a feminist way of doing science.

Janet Stemwedel: There’s actually more than one.

Dr. Isis: We’re not all the same.

Janet Stemwedel: I think even the claim that there’s a single, easily described scientific method is kind of a tricky one. One of the things I’m interested in — one of the things that sucked me over from building knowledge in chemistry to trying to build knowledge in philosophy — is, if you look at scientific practice, scientists who are nominally studying the same thing, the same phenomena, but who’re doing it in different disciplines (say, the chemical physicists and the physical chemists) can be looking at the same thing, but they’re using very different experimental tools and conceptual tools and methodological tools to try to describe what’s going on there. There’s ways in which, when you cross a disciplinary boundary — and sometimes, when you leave your research group and go to another research group in the same department — that what you see on the ground as the method you’re using to build knowledge shifts.

In some ways, I’m inclined to say it’s an empirical question whether there’s a single unified scientific method, or whether we’ve got something more like a family resemblance kind of thing going on. There’s enough overlap in the tools that we’re going to call them all science, but whether we can give necessary and sufficient conditions that describe the whole thing, that’s still up in the air.

Andrew Brandel: I just want to add to that point, if I can. I think that one of the major topics in social sciences of science and in the philosophy of science recently has been the point that science itself, as it’s been practiced, has a history that is also built on certain kinds of power structures. So it’s not even enough to say, let’s bring lots of different kinds of people to the table, but we actually have to uncover the ways in which certain power structures have been built into the very way that we think about science or the way that the disciplines are arranged.

(23:10)
Michael Tomasson: You’ve got to expand on that. What do you mean? There’s only one good — there’s good science and there’s bad science. I don’t understand.

Janet Stemwedel: So wait, everyone who does science like you do is doing good science, and everyone who uses different approaches, that’s bad?

Michael Tomasson: Yes, exactly.

Janet Stemwedel: There’s no style choices in there at all?

Michael Tomasson: That’s what I’m throwing out there. I’m trying to explore that. I’m going to take poor Casey over here, we’re going to stamp him, turn him into a white guy in a tie and he’s going to do science the way God intended it.

Dr. Isis: This is actually a good point, though. I had a conversation with a friend recently about “Cosmos.” As they look back on the show, at all the historical scientists, who, historically has done science? Up until very recently, it has been people who were sufficiently wealthy to support the lifestyle to which they would like to become accustomed, and it’s very easy to sit and think and philosophize about how we do science when it’s not your primary livelihood. It was sort of gentleman scientists who were of the independently wealthy variety who were interested in science and were making these observations, and now that’s very much changed.

It was really interesting to me when you suggested this as a topic because recently I've become very pragmatic about doing science. I think I'm taking the "Friday" approach to science — you know, the movie? Danielle Lee wants to remake "Friday" as a science movie. Right now, messing with my money is like messing with my emotions. I'm about writing things in a way to get them funded and writing things in a way that gets them published, and it's cute to think that we might change the game or make it better, but there's also a pragmatic side to it. It's a human endeavor, and doing things in a certain way gets certain responses from your colleagues. The thing that I see, especially watching young people on Twitter, is they try to change the game before they understand the game, and then they get smacked on the nose, and then they write it off as "science is broken". Well, you don't understand the game yet.

Janet Stemwedel: Although it’s complicated, I’d say. It is a human endeavor. Forgetting it’s a human endeavor is a road to nothing but pain. And you’ve got the knowledge-building thing going on, and that’s certainly at the center of science, but you’ve also got the getting credit for the awesome things you’ve done and getting paid so you can stay in the pool and keep building knowledge, because we haven’t got this utopian science island where anyone who wants to build knowledge can and all their needs are taken care of. And, you’ve got power gradients. So, there may well be principled arguments from the point of view of what’s going to incentivize practices that will result in better knowledge and less cheating and things like that, to change the game. I’d argue that’s one of the things that philosophy of science can contribute — I’ve tried to contribute that as part of my day job. But the first step is, you’ve got to start talking about the knowledge-building as an activity that’s conducted by humans rather than you put more data into the scientific method box, you turn the crank, and out comes the knowledge.

Michael Tomasson: This is horrifying. I guess what I’m concerned about is I’d hoped you’d teach the scientific method as some sort of central methodology from lab to lab. Are you saying, from the student’s point of view, whatever lab you’re in, you’ve got to figure out whatever the boss wants, and that’s what science is? Is there no skeleton key or structure that we can take from lab to lab?

Dr. Rubidium: Isn’t that what you’re doing? You’re going to instruct your people to do science the way you think it should be done? That pretty much sounds like what you just said.

Dr. Isis: That’s the point of being an apprentice, right?

Michael Tomasson: I had some fantasy that there was some universal currency or universal toolset that could be taken from one lab to another. Are you saying that I’m just teaching my people how to do Tomasson science, and they’re going to go over to Rubidium and be like, forget all that, and do things totally differently?

Dr. Rubidium: That might be the case.

Janet Stemwedel: Let’s put out there that a unified scientific method that’s accepted across scientific disciplines, and from lab to lab and all that, is an ideal. We have this notion that part of why we’re engaged in science to try to build knowledge of the world is that there is a world that we share. We’re trying to build objective knowledge, and why that matters is because we take it that there is a reality out there that goes deeper than how, subjectively, things seem to us.

(30:00)
Michael Tomasson: Yes!

Janet Stemwedel: So, we’re looking for a way to share that world, and the pictures of the method involved in doing that, the logical connections involved in doing that, that we got from the logical empiricists and Popper and that crowd — if you like, they’re giving sort of the idealized model of how we could do that. It’s analogous to the story they tell you about orbitals in intro chem. You know what happens, if you keep on going with chem, is they mess up that model. They say, it’s not that simple, it’s more complicated.

And that's what philosophers of science do, is we mess up that model. We say, it can't possibly be that simple, because real human beings couldn't drive that and make it work as well as it does. So there must be something more complicated going on; let's figure out what it is. My impression, looking at the practice through the lens of philosophy of science, is that you find a lot of diversity in the details of the methods, you find a reasonable amount of diversity in terms of what's the right attitude to have towards our theories — if we've got a lot of evidence in favor of our theories, are we allowed to believe our theories are probably right about the world, or just that they're better at churning out predictions than the other theories we've considered so far? We have places where you can start to look at how methodologies embraced by Western primatologists compared to Japanese primatologists — where they differ on what's the right thing to do to get the knowledge — you could say, it's not the case that one side is right and one side is wrong, we've located a trade-off here, where one camp is deciding one of the things you could get is more important and you can sacrifice the other, and the other camp is going the other direction on that.

It's not to say we should just give up on this project of science and building objective, reliable knowledge about the world. But how we do that is not really anything like the flowchart of the scientific method that you find in the junior high science textbook. That's like staying with the intro chem picture of the orbitals and saying, that's all I need to know.

(32:20)
Dr. Isis: I sort of was having a little frightened moment where, as I was listening to you talk, Michael, I was having this “I don’t think that word means what you think it means” reaction. And I realize that you’re a physician and not a real scientist, but “the scientific method” is actually a narrow construct of generating a hypothesis, generating methods to test the hypothesis, generating results, and then either rejecting or failing to reject your hypothesis. This idea of going to people’s labs and learning to do science is completely tangential to the scientific method. I think we can all agree that, for most of us at our core, the scientific method is different from the culture. Now, whether I go to Tomasson’s lab and learn to label my reagents with the wrong labels because they’re a trifling, scandalous bunch who will mess up your experiment, and then I go to Rubidium’s lab and we all go marathon training at 3 o’clock in the afternoon, that’s the culture of science, that’s not the scientific method.

(34:05)
Janet Stemwedel: Maybe what we mean by the scientific method is either more nebulous or more complicated, and that’s where the disagreements come from.

If I can turn back to the example of the Japanese primatologists and the primatologists from the U.S. [1]… You’re trying to study monkeys. You want to see how they’re behaving, you want to tell some sort of story, you probably are driven by some sort of hypotheses. As it turns out, the Western primatologists are starting with the hypothesis that basically you start at the level of the individual monkey, that this is a biological machine, and you figure out how that works, and how they interact with each other if you put them in a group. The Japanese primatologists are starting out with the assumption that you look at the level of social groups to understand what’s going on.

(35:20)
And there’s this huge methodological disagreement that they had when they started actually paying attention to each other: is it OK to leave food in the clearing to draw the monkeys to where you can see them more closely?

The Western primatologists said, hell no, that interferes with the system you’re trying to study. You want to know what the monkeys would be like in nature, without you there. So, leaving food out there for them, “provisioning” them, is a bad call.

The Japanese primatologists (who are, by the way, studying monkeys that live in the islands that are part of Japan, monkeys that are well aware of the existence of humans because they’re bumping up against them all the time) say, you know what, if we get them closer to where we are, if we draw them into the clearings, we can see more subtle behaviors, we can actually get more information.

So here, there’s a methodological trade-off. Is it important to you to get more detailed observations, or to get observations that are untainted by human interference? ‘Cause you can’t get both. They’re both using the scientific method, but they’re making different choices about the kind of knowledge they’re building with that scientific method. Yet, on the surface of things, these primatologists were sort of looking at each other like, “Those guys don’t know how to do science! What the hell?”

(36:40)
Andrew Brandel: The other thing I wanted to mention to this point and, I think, to Tomasson’s question also, is that there are lots of anthropologists embedded with laboratory scientists all over the world, doing research into specifically what kinds of differences, both in the ways that they’re organized and in the ways that arguments get levied, what counts as “true” or “false,” what counts as a hypothesis, how that gets determined within these different contexts. There are broad fields of social sciences doing exactly this.

Dr. Rubidium: I think this gets to the issue: Tomasson, what are you calling the scientific method? Versus, can you really at some point separate out the idea that science is a thing — like Janet was saying, it’s a machine, you put the stuff in, give it a spin, and get the stuff out — can you really separate something called “the scientific method” from the people who do it?

I’ve taught general chemistry, and one of the first things we do is to define science, which is always exciting. It’s like trying to define art.

Michael Tomasson: So what do you come up with? What is science?

Dr. Rubidium: It’s a body of knowledge and a process — it’s two different things, when people say science. We always tell students, it’s a body of knowledge but it’s also a process, a thing you can do. I’m not saying it’s [the only] good answer, but it’s the answer we give students in class.

Then, of course, the idea is, what’s the scientific method? And everyone’s got some sort of a figure. In the gen chem book, in chapter 1, it’s always going to be in there. And it makes it seem like we’ve all agreed at some point, maybe taken a vote, I don’t know, that this is what we do.

Janet Stemwedel: And you get the laminated card with the steps on it when you get your lab coat.

Dr. Rubidium: And there’s the flowchart, usually laid out like a circle.

Michael Tomasson: Exactly!

Dr. Rubidium: It’s awesome! But that’s what we tell people. It’s kind of like the lie we tell them about orbitals, like Janet was saying, in the beginning of gen chem. But then, this is how sausages are really made. And yes, we have this method, and these are the steps we say are involved with it, but are we talking about that, which is what you learn in high school or junior high or science camp or whatever, or are you actually talking about how you run your research group? Which one are you talking about?

(39:30)
Janet Stemwedel: It can get more complicated than that. There’s also this question of: is the scientific method — whatever the heck we do to build reliable knowledge about the world using science — is that the kind of thing you could do solo, or is it necessarily a process that involves interaction with other people? So, maybe we don’t need to be up at night worrying about whether individual scientists fail to instantiate this idealized scientific method as long as the whole community collectively shakes out as instantiating it.

Michael Tomasson: Hmmm.

Casey: Isn’t this part of what a lot of scientists are doing, that it shakes out some of the human problems that come with it? It’s a messy process and you have a globe full of people performing experiments, doing research. That should, to some extent, push out some noise. We have made advances. Science works to some degree.

Janet Stemwedel: It mostly keeps the plane up in the air when it’s supposed to be in the air, and the water from being poisoned when it’s not supposed to be poisoned. The science does a pretty good job building the knowledge. I can’t always explain why it’s so good at that, but I believe that it does. And I think you’re right, there’s something — certainly in peer review, there’s this assumption that the reason we play with others here is that they help us catch the thing we’re missing, they help us to make sure the experiments really are reproducible, to make sure that we’re not smuggling in unconscious assumptions, whatever. I would argue, following on something Tomasson wrote in his blog post, that this is a good epistemic reason for some of the stuff that scientists rail on about on Twitter, about how we should try to get rid of sexism and racism and ableism and other kinds of -isms in the practice of science. It’s not just because scientists shouldn’t be jerks to people who could be helping them build the knowledge. It’s that, if you’ve got a more diverse community of people building the knowledge, you up the chances that you’re going to locate the unconscious biases that are sneaking into the story we tell about what the world is like.

When the transcript continues, we do some more musing about methodology, the frailties of individual humans when it comes to being objective, and epistemic violence.

_______

[1] This discussion based on my reading of Pamela J. Asquith, “Japanese science and western hegemonies: primatology and the limits set to questions.” Naked science: Anthropological inquiry into boundaries, power, and knowledge (1996): 239-258.

* * * * *

Part 1 of the transcript.

Archived video of this Pub-Style Science episode.

Storify’d version of the simultaneous Twitter conversation.

Pub-Style Science: philosophy, hypotheses, and the scientific method.

Last week I was honored to participate in a Pub-Style Science discussion about how (if at all) philosophy can (or should) inform scientific knowledge-building. Some technical glitches notwithstanding, it was a rollicking good conversation — so much so that I have put together a transcript for those who don’t want to review the archived video.

The full transcript is long (approaching 8000 words even excising the non-substantive smack-talk), so I’ll be presenting it here in a few chunks that I’ve split more or less at points where the topic of the discussion shifted.

In places, I’ve cleaned up the grammar a bit, attempting to faithfully capture the gist of what each speaker was saying. As well, because my mom reads this blog, I’ve cleaned up some of the more colorful language. If you prefer the PG-13 version, the archived video will give you what you need.

Simultaneously with our video-linked discussion, there was a conversation on Twitter under the #pubscience hashtag. You can see that conversation Storify’d here.

____
(05:40)
Michael Tomasson: The reason I was interested in this is because I have one very naïve view and one esoteric view. My naïve view is that there is something useful about philosophy in terms of the scientific method, and when people are in my lab, I try to beat into their heads (I mean, educate them) that there’s a certain structure to how we do science, and this is a life-raft and a tool that is essential. And I guess that’s the question, whether there is some sort of essential tool kit. We talk about the scientific method. Is that a universal? I started thinking about this talking with my brother-in-law, who’s an amateur philosopher, about different theories of epistemology, and he was shocked that I would think that science had a lock on creating knowledge. But I think we do, through the scientific method.

Janet, take us to the next level. To me, from where I am, the scientific method is the key to the city of knowledge. No?

Janet Stemwedel: Well, that’s certainly a common view, and that’s a view that, in the philosophy of science class I regularly teach, we start with — that there’s something special about whatever it is scientists are doing, something special about the way they gather very careful observations of the world, and hook them together in the right logical way, and draw inferences and find patterns, that’s a reliable way to build knowledge. But at least for most of the 20th Century, what people who looked closely at this assumption in philosophy found was that it had to be more complicated than that. So you end up with folks like Sir Karl Popper pointing out that there is a problem of induction — that deductive logic will get you absolutely guaranteed conclusions if your premises are true, but inductive inference could go wrong; the future might not be like the past we’ve observed so far.

(08:00)
Michael Tomasson: I’ve got to keep the glossary attached. Deductive and inductive?

Janet Stemwedel: Sure. A deductive argument might run something like this:

All men are mortal. Socrates is a man. Therefore, Socrates is mortal.

If it’s true that all men are mortal, and that Socrates is a man, then you are guaranteed that Socrates is also going to be mortal. The form of the argument is enough to say, if the assumptions are true, then the conclusion has to be true, and you can take that to the bank.
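(An editorial aside, not part of the spoken discussion: written out schematically, the syllogism is an instance of a valid form — universal instantiation plus modus ponens — so the conclusion follows from the premises no matter what the predicates mean.)

```latex
% Universal instantiation plus modus ponens: valid in virtue of form alone
\forall x\,\bigl(\mathrm{Man}(x) \rightarrow \mathrm{Mortal}(x)\bigr),\quad
\mathrm{Man}(\mathrm{socrates})
\;\vdash\; \mathrm{Mortal}(\mathrm{socrates})
```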

Inductive inference is actually most of what we seem to use in drawing inferences from observations and experiments. So, let’s say you observe a whole lot of frogs, and you observe that, after some amount of time, each of the frogs that you’ve had in your possession kicks off. After a certain number of frogs have done this, you might draw the inference that all frogs are mortal. And, it seems like a pretty good inference. But, it’s possible that there are frogs not yet observed that aren’t mortal.

Inductive inference is something we use all the time. But Karl Popper said, guess what, it’s not guaranteed in the same way deductive logic is. And this is why he thought the power of the scientific method is that scientists are actually only ever concerned to find evidence against their hypotheses. The evidence against your hypotheses lets you conclude, via deductive inference, that those hypotheses are wrong, and then you cross them off. Any hypothesis where you seem to get observational support, Popper says, don’t get too excited! Keep testing it, because maybe the next test is going to be the one where you find evidence against it, and you don’t want to get screwed over by induction. Inductive reasoning is just a little too shaky to put your faith in.
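(Another editorial aside: Popper’s asymmetry between refutation and confirmation comes down to two elementary schemas. Crossing off a hypothesis on the basis of contrary evidence is an instance of modus tollens, which is deductively valid; treating a successful prediction as proof has the form of affirming the consequent, which is not.)

```latex
% Refutation (modus tollens) is deductively valid:
H \rightarrow O,\ \neg O \;\vdash\; \neg H
% "Confirmation" (affirming the consequent) is not:
H \rightarrow O,\ O \;\nvdash\; H
```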

(10:05)
Michael Tomasson: That’s my understanding of Karl Popper. I learned about the core of falsifying hypotheses, and that’s sort of what I teach as truth. But I’ve heard some anti-Karl Popper folks, which I don’t really quite understand.

Let me ask Isis, because I know Isis has very strong opinions about hypotheses. You had a blog post a long time ago about hypotheses. Am I putting words in your mouth to say you think hypotheses and hypothesis testing are important?

(10:40)
Dr. Isis: No, I did. That’s sort of become the running joke here: my only contribution to lab meeting is to say, wait wait wait, what was your hypothesis? I think that having hypotheses is critical, and I’m a believer, as Dr. Tomasson knows, that a hypothesis has four parts. I think that’s fundamental, framing the question, because I think that the question frames how you do your analysis. The design and the analysis fall out of the hypothesis, so I don’t understand doing science without a hypothesis.

Michael Tomasson: Let me throw it over to Andrew … You’re coming from anthropology, you’re looking at science from 30,000 feet, where maybe in anthropology it’s tough to do hypothesis-testing. So, what do you say to this claim that the hypothesis is everything?

Andrew Brandel: I would give two basic responses. One: in the social sciences, we definitely have a different relationship to hypotheses, to the scientific method, perhaps. I don’t want to represent the entire world of social and human sciences.

Michael Tomasson: Too bad!

(12:40)
Andrew Brandel: So, there’s definitely a different relationship to hypothesis-testing — we don’t have a controlled setting. This is what a lot of famous anthropologists would talk about. The other area where we might interject is, science is (in the view of some of us) one among many different ways of viewing and organizing our knowledge about the world, and not necessarily better than some other view.

Michael Tomasson: No, it’s better! Come on!

Andrew Brandel: Well, we can debate about this. This is a debate that’s been going on for a long time, but basically my position would be that we have something to learn from all the different sciences that exist in the world, and that there are lots of different logics which condition the possibility of experiencing different kinds of things. When we ask, what is the hypothesis, when Dr. Isis is saying that is crucial for the research, we would agree with you, that that is also conditioning the responses you get. That’s both what you want and part of the problem. It’s part of a culture that operates like an ideology — too close to you to come at from within it.

Janet Stemwedel: One of the things that philosophers of science started twigging to, since the late 20th Century, is that science is not working with this scientific method that’s essentially a machine that you toss observations into and you turn the crank and on the other end out comes pristine knowledge. Science is an activity done by human beings, and human beings who do science have as many biases and blindspots as human beings who don’t do science. So, recognizing some of the challenges that are built into the kind of critter we are trying to build reliable knowledge about the world becomes crucial, and even places where the scientist will say, look, I’m not doing (in this particular field) hypothesis-driven science, it doesn’t mean that there aren’t some hypotheses sort of behind the curtain directing the attention of the people trying to build knowledge. It just means that they haven’t bumped into enough people trying to build knowledge in the same area that have different assumptions to notice that they’re making assumptions in the first place.

(15:20)
Dr. Isis: I think that’s a crucial distinction. Is the science that you’re doing really not hypothesis-driven, or are you too lazy to write down a hypothesis?

To give an example, I’m writing a paper with this clinical fellow, and she’s great. She brought a draft, which is amazing, because I’m all about the paper right now. And in there, she wrote, we sought to observe this because to the best of our knowledge this has never been reported in the literature.

First of all, the phrase “to the best of our knowledge,” any time you write that you should just punch yourself in the throat, because if it wasn’t to the best of your knowledge, you wouldn’t be writing it. I mean, you wouldn’t be lying: “this has never been reported in the literature.” The other thing is, “this has never been reported in the literature” as the motivation to do it is a stupid reason. I told her, the frequency of the times of the week that I wear black underwear has never been reported in the literature. That doesn’t mean it should be.

Janet Stemwedel: Although, if it correlates with your experiment working or not — I have never met more superstitious people than experimentalists. If the experiment only works on the days you wear black underwear, you’re wearing black underwear until the paper is submitted, that’s how it’s going to be. Because the world is complicated!

Dr. Isis: The point is that it’s not that she didn’t have a hypothesis. It’s that pulling it out of her — it was like a tapeworm. It was a struggle. That to me is the question. Are we really doing science without a hypothesis, or are we making the story about ourselves? About what we know about in the literature, what the gap in the literature is, and the motivation to do the experiment, or are we writing, “we wanted to do this to see if this was the thing”? — in which case, I don’t find it very interesting.

Michael Tomasson: That’s an example of something that I try to teach, when you’re writing papers: we did this, we wanted to do that, we thought about this. It’s not really about you.

But friend of the show Cedar Riener tweets in, aren’t the biggest science projects those least likely to have clearly hypothesis-driven experiments, like HGP, BRAIN, etc.? I think the BRAIN example is a good one. We talk about how you need hypotheses to do science, and yet here’s this very high-profile thing which, as far as I can tell, doesn’t really have any hypotheses driving it.

When the transcript continues: Issues of inclusion, methodological disputes, and the possibility that “the scientific method” is actually a lie.

Brief thoughts on uncertainty.

For context, these thoughts follow upon a very good session at ScienceOnline Together 2014 on “How to communicate uncertainty with the brevity that online communication requires.” Two of the participants in the session used Storify to collect tweets of the discussion (here and here).

About a month later, this does less to answer the question of the session title than to give you a peek into my thoughts about science and uncertainty. This may be what you’ve come to expect of me.

Humans are uncomfortable with uncertainty, at least in those moments when we notice it and where we have to make decisions that have more than entertainment value riding on them. We’d rather have certainty, since that makes it easier to enact plans that won’t be thwarted.

Science is (probably) a response to our desire for more certainty. Finding natural explanations for natural phenomena, stable patterns in our experience, gives us a handle on our world and what we can expect from it that’s less capricious than “the gods are in a mood today.”

But the scientific method isn’t magic. It’s a tool that cranks out explanations of what’s happened, predictions of what’s coming up, based on observations made by humans with our fallible human senses.

The fallibility of those human senses (plus things like the trickiness of being certain you’re awake and not dreaming) was (probably) what drove philosopher René Descartes in his famous Meditations, the work that yielded the conclusion “I think, therefore I am” and that featured not one but two proofs of the existence of a God who is not a deceiver. Descartes was not pursuing a theological project here. Rather, he was trying to explain how empirical science — science relying on all kinds of observations made by fallible humans with their fallible senses — could possibly build reliable knowledge. Trying to put empirical science on firm foundations, he engaged in his “method of doubt” to locate some solid place to stand, some thing that could not be doubted. That something was “I think, therefore I am” — in other words, if I’m here doubting that my experience is reliable, that I’m awake instead of dreaming, that I’m a human being rather than a brain in a vat, I can at least be sure that there exists a thinking thing that’s doing the doubting.

From this fact that could not be doubted, Descartes tried to climb back out of that pit of doubt and to work out the extent to which we could trust our senses (and the ways in which our senses were likely to mislead us). This involved those two proofs of the existence of a God who is not a deceiver, plus a whole complicated story of minds and brains communicating with each other (via the wiggling of our pineal glands) — which is to say, it was not entirely persuasive. Still, it was all in the service of getting us more certainty from our empirical science.

Certainty and its limits are at the heart of another piece of philosophy, “the problem of induction,” this one most closely associated with David Hume. The problem here rests on our basic inability to be certain that what we have so far observed of our world will be a reliable guide to what we haven’t observed yet, that the future will be like the past. Observing a hundred, or a thousand, or a million ravens that are black is not enough for us to conclude with absolute certainty that the ravens we haven’t yet observed must also be black. Just because the sun rose today, and yesterday, and everyday through recorded human history to date does not guarantee that it will rise tomorrow.

But while Hume pointed out the limits of what we could conclude with certainty from our observations at any given moment — limits which impelled Karl Popper to assert that the scientific attitude was one of trying to prove hypotheses false rather than seeking support for them — he also acknowledged our almost irresistible inclination to believe that the future will be like the past, that the patterns of our experience so far will be repeated in the parts of the world still waiting for us to experience them. Logic can’t guarantee these patterns will persist, but our expectations (especially in cases where we have oodles of very consistent observations) feel like certainty.
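(In the notation logicians use, the gap Hume identified is that enumerative induction is not a deductively valid schema: no finite run of observations, however long, entails the generalization.)

```latex
% However many black ravens we observe, the universal claim does not follow deductively
\mathrm{Black}(r_1),\ \mathrm{Black}(r_2),\ \ldots,\ \mathrm{Black}(r_n)
\;\nvdash\; \forall x\,\bigl(\mathrm{Raven}(x) \rightarrow \mathrm{Black}(x)\bigr)
```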

Scientists are trained to recognize the limits of their certainty when they draw conclusions, offer explanations, make predictions. They are officially on the hook to acknowledge their knowledge claims as tentative, likely to be updated in the light of further information.

This care in acknowledging the limits of what careful observation and logical inference guarantee us can make it appear to people who don’t obsess over uncertainties in everyday life that scientists don’t know what’s going on. But the existence of some amount of uncertainty does not mean we have no idea what’s going on, no clue what’s likely to happen next.

What non-scientists who dismiss scientific knowledge claims on the basis of acknowledged uncertainty forget is that making decisions in the face of uncertainty is the human condition. We do it all the time. If we didn’t, we’d make no decisions at all (or else we’d be living a sustained lie about how clearly we see into our future).

Strangely, though, we seem to have a hard time reconciling our everyday pragmatism about everyday uncertainty with our suspicion about the uncertainties scientists flag in the knowledge they share with us. Maybe we’re making the jump from viewing scientific knowledge as reliable to demanding that it be perfect. Or maybe we’re just not very reflective about how easily we navigate uncertainty in our everyday decision-making.

I see this firsthand when my “Ethics in Science” students grapple with ethics case studies. At first they are freaked out by the missing details, the less-than-perfect information about what will happen if the protagonist does X or if she does Y instead. How can we make good decisions about what the protagonist should do if we can’t be certain about those potential outcomes?

My answer to them: The same way we do in real life, whose future we can’t see with any more certainty.

When there’s more riding on our decisions, we’re more likely to notice the gaps in the information that informs those decisions, the uncertainty inherent in the outcomes that will follow on what we decide. But we never have perfect information, and neither do scientists. That doesn’t mean our decision-making is hopeless, just that we need to get comfortable making do with the certainty we have.