Penny-wise and pound-foolish: misidentified cells and competitive pressures in scientific knowledge-building.

The overarching project of science is building reliable knowledge about the world, but the way this knowledge-building happens in our world is in the context of competition. For example, scientists compete with each other to be the first to make a new discovery, and they compete with each other for finite pools of grant money with which to conduct more research and make further discoveries.

I’ve heard the competitive pressures on scientists described as a useful way to motivate scientists to be clever and efficient (and not to knock off early lest some more dedicated lab get to your discovery first). But there are situations where it’s less obvious that fierce competition for scarce resources leads to choices that really align with the goal of building reliable knowledge about the world.

This week, on NPR’s Morning Edition, Richard Harris reported a pair of stories on how researchers who work with cells in culture grapple with the problem of their intended cell line being contaminated and overtaken by a different cell line. Harris tells us:

One of the worst cases involves a breast cancer cell line called MDA-435 (or MDA-MB-435). After the cell line was identified in 1976, breast cancer scientists eagerly adopted it.

When injected in animals, the cells spread the way breast cancer metastasizes in women, “and that’s not a very common feature of most breast cancer cell lines,” says Stephen Ethier, a cancer geneticist at the Medical University of South Carolina. “So as a result of that, people began asking for those cells, and so there are many laboratories all over the world, who have published hundreds of papers using the MDA-435 cell line as a model for breast cancer metastasis.”

In fact, scientists published more than a thousand papers with this cell line over the years. About 15 years ago, scientists using newly developed DNA tests took a close look at these cells. And they were shocked to discover that they weren’t from a breast cancer cell at all. The breast cancer cell line had been crowded out by skin cancer cells.

“We now know with certainty that the MDA-435 cell line is identical to a melanoma cell line,” Ethier says.

And it turns out that contamination traces back for decades. Several scientists published papers about this to alert the field, “but nevertheless, there are people out there who haven’t gotten the memo, apparently,” he says.

Decades’ worth of work and more than a thousand published research papers were supposed to add up to a lot of knowledge about a particular kind of breast cancer cell, except it wasn’t knowledge about breast cancer cells at all, because the cells in the cell line had been misidentified. Scientists probably know something from that work, but it isn’t the knowledge they thought they had before the contamination was detected.

On the basis of the discovery that this much knowledge-building had been compromised by being based on misidentified cells, you might imagine researchers would prioritize precise identification of the cells they use. But, as Harris found, this obvious bit of quality control meets resistance. For one thing, researchers seem unwilling to pay the extra financial costs it would take:

This may all come down to money. Scientists can avoid most of these problems by purchasing cells from a company that routinely tests them. But most scientists would rather walk down the hall and borrow cells from another lab.

“Academics share their cell lines like candy because they don’t want to go back and spend another $300,” said Richard Neve from Genentech. “It is economics. And they don’t want to spend another $100 to [verify] that’s still the same cell line.”

Note here that scientists could still economize by sharing cell lines with their colleagues instead of purchasing them, while paying for the tests to nail down the identity of the shared cells. However, many do not.

(Consider, though, how awkward it might be to test cells you’ve gotten from a colleague only to discover that they are not the kind of cells your colleague thought they were. How do you break the news to your colleague that their work — including published papers in scientific journals — is likely to be mistaken and misleading? And how willing would other colleagues be to share their cell lines with you, knowing that you might bring them similarly bad news as a result of their generosity?)

Journals like Nature have tried to encourage scientists to test their cell lines by adding cell-line authentication to the checklist authors complete when submitting papers. Most authors do not check the box indicating they have tested their cells.

One result here is that the knowledge that comes from these studies and gets reported in scientific journals may not be as solid as it seems:

When scientists at [Genentech] find an intriguing result from an academic lab, the first thing they do is try to replicate the result.

Neve said often they can’t, and misidentified cells are a common reason.

This is a problem that is not just of concern to scientists. The rest of us depend on scientists to build reliable knowledge about the world in part because it might matter for what kinds of treatments are developed for diseases that affect us. Moreover, much of this research is paid for with public money — which means the public has an interest in whether the funding is doing what it is supposed to be doing.

However, Harris notes that funding agencies seem unwilling to act decisively to address the issue of research based on misidentified cell lines:

“We are fully convinced that this is a significant enough problem that we have to take steps to address it,” Jon Lorsch, director of the NIH’s National Institute of General Medical Sciences, said during the panel discussion.

One obvious step would be to require scientists who get federal funding to test their cells. Howard Soule, chief science officer at the Prostate Cancer Foundation, said that’s what his charity requires of the scientists it funds.

There’s a commercial lab that will run this test for about $140, so “this is not going to break the bank,” Soule said.

But Lorsch at the NIH argued that it’s not so simple on the scale at which his institute hands out funding. “We really can’t go and police 10,000 grants,” Lorsch said.

“Sure you can,” Soule shot back. “How can you not?”

Lorsch said if they do police this issue, “there are dozens and dozens of other issues” that the NIH should logically police as well. “It becomes a Hydra,” Lorsch said. “You know, you chop off one head and others grow.”

Biomedical research gets more expensive all the time, and the NIH is reluctant to pile on a whole bunch of new rules. It’s a balancing act.

“If we become too draconian we’re going to end up squashing creativity and slowing down research, which is not good for the taxpayers because they aren’t going to get as much for their money,” Lorsch said.

To my eye, Lorsch’s argument against requiring researchers to test their cells focuses on the competitive aspect of scientific research to the exclusion of the knowledge-building aspect.

What does it matter if the taxpayers get more research generated and published if a significant amount of that research output is irreproducible because of misidentified cells? In the absence of tests to properly identify the cells being used, there’s no clear way to tell just by looking at the journal articles which ones are reliable and which ones are not. Post-publication quality control requires researchers to repeat experiments and compare their results to those published, something that will cost significantly more than if the initial researchers tested their cells in the first place.
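To make the cost comparison concrete, here is a minimal back-of-envelope sketch. The $140 figure is the commercial authentication cost quoted above; the per-study replication cost and the number of studies built on a single misidentified line are hypothetical placeholders chosen purely for illustration, not figures from the reporting.

```python
# Back-of-envelope comparison: authenticate a cell line up front vs. catch a
# misidentification only after publication, by repeating the affected studies.
# The $140 authentication cost is the figure quoted in the NPR story; the other
# two numbers are hypothetical placeholders used only for illustration.

AUTHENTICATION_COST = 140       # commercial cell-line test, per the story
REPLICATION_COST = 50_000       # hypothetical cost to repeat one published study
AFFECTED_STUDIES = 20           # hypothetical number of studies built on the line

upfront = AUTHENTICATION_COST
after_the_fact = AFFECTED_STUDIES * REPLICATION_COST

print(f"Testing the cell line up front: ${upfront:,}")
print(f"Repeating {AFFECTED_STUDIES} studies to catch the error later: ${after_the_fact:,}")
print(f"The after-the-fact route costs roughly {after_the_fact / upfront:,.0f} times as much")
```

Whatever placeholder values one plugs in, the asymmetry is the point: prevention is priced in the hundreds of dollars per cell line, while post-publication detection is priced in repeated experiments.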

However, research funding is generally awarded to build new knowledge, not to test existing knowledge claims. Scientists get credit for making new discoveries, not for determining that other scientists’ discoveries can be reproduced.

NIH could make it a condition of funding that researchers working with cell lines get those cell lines tested, and arguably this would be the most cost-efficient way to ensure results that are reliable rather than based on misidentification. I find unpersuasive Lorsch’s claim that because there are dozens of other kinds of quality control NIH could demand, it cannot demand this one. Even if there are many things to fix, that doesn’t mean they must all be fixed at once. Incremental improvements in quality control are surely better than none at all.

His further suggestion that engaging in NIH-mandated quality control will quash scientific creativity strikes me as silly. Scientists are at their most creative when they are working within constraints to solve problems. Indeed, were NIH to require that researchers test their cells, there is no reason to think this additional constraint could not be easily incorporated into researchers’ current competition for NIH funding.

The big question, really, is whether NIH is prioritizing funding a higher volume of research, or higher quality research. Presumably, the public is better served by a smaller number of published studies that make reliable claims about the actual cells researchers are working with than by a large number of published studies making hard-to-verify claims about misidentified cells.

If scientific competition is inescapable, at least let’s make sure that the incentives encourage the careful steps required to build reliable knowledge. If those careful steps are widely seen as an impediment to succeeding in the competition, we derail the goal that the competitive pressures were supposed to enhance.

James Watson’s sense of entitlement, and misunderstandings of science that need to be countered.

James Watson, who shared a Nobel Prize in 1962 for discovering the double helix structure of DNA, is in the news, offering his Nobel Prize medal at auction. As reported by the Telegraph:

Mr Watson, who shared the 1962 Nobel Prize for uncovering the double helix structure of DNA, sparked an outcry in 2007 when he suggested that people of African descent were inherently less intelligent than white people.

If the medal is sold Mr Watson said he would use some of the proceeds to make donations to the “institutions that have looked after me”, such as University of Chicago, where he was awarded his undergraduate degree, and Clare College, Cambridge.

Mr Watson said his income had plummeted following his controversial remarks in 2007, which forced him to retire from the Cold Spring Harbor Laboratory on Long Island, New York. He still holds the position of chancellor emeritus there.

“Because I was an ‘unperson’ I was fired from the boards of companies, so I have no income, apart from my academic income,” he said.

He would also use some of the proceeds to buy an artwork, he said. “I really would love to own a [painting by David] Hockney”. …

Mr Watson said he hoped the publicity surrounding the sale of the medal would provide an opportunity for him to “re-enter public life”. Since the furore in 2007 he has not delivered any public lectures.

There’s a lot I could say here about James Watson, the assumptions under which he is laboring, and the potential impacts on science and the public’s engagement with it. In fact, I have said much of it before, although not always in reference to James Watson in particular. However, given the likelihood that we’ll keep hearing the same unhelpful responses to James Watson and his ilk if we don’t grapple with some of the fundamental misunderstandings of science at work here, it’s worth covering this ground again.

First, I’ll start with some of the claims I see Watson making around his decision to auction his Nobel Prize medal:

  • He needs money, given that he has “no income beyond [his] academic income”. One might take this as an indication that academic salaries in general ought to be raised (although I’m willing to bet a few bucks that Watson’s inadequate academic income is at least as much as that of the average academic actively engaged in research and/or teaching in the U.S. today). However, Watson gives no sign of calling for such an across-the-board increase, since…
  • He connects his lack of income to being fired from boards of companies and to his inability to book public speaking engagements after his 2007 remarks on race.
  • He equates this removal from boards and lack of invitations to speak with being an “unperson”.

What comes across to me here is that James Watson sees himself as special, as entitled to seats on boards and speaker invitations. On what basis, we might ask, is he entitled to these perks, especially in the face of a scientific community just brimming with talented members currently working at the cutting edge(s) of scientific knowledge-building? It is worth noting that some who attended recent talks by Watson judged them to be nothing special.

Possibly, then, speaking engagements may have dried up at least partly because James Watson was not such an engaging speaker — with an asking price of $50,000 for a paid speaking engagement, whether you give a good talk is a relevant criterion — rather than being driven entirely by his remarks on race in 2007, or before 2007. However, Watson seems sure that these remarks are the proximate cause of his lack of invitations to give public talks since 2007. And he finds this result not to be in accord with what a scientist like himself deserves.

Positioning James Watson as a very special scientist who deserves special treatment above and beyond the recognition of the Nobel committee feeds the problematic narrative of scientific knowledge as an achievement of great men (and yes, in this narrative, it is usually great men who are recognized). This narrative ignores the fundamentally social nature of scientific knowledge-building and the fact that objectivity is the result of teamwork.

Of course, it’s even more galling to have James Watson portrayed (including by himself) as an exceptional hero of science rather than as part of a knowledge-building community given the role of Rosalind Franklin’s work in determining the structure of DNA — and given Watson’s apparent contempt for Franklin, rather than regard for her as a member of the knowledge-building team, in The Double Helix.

Indeed, part of the danger of the hero narrative is that scientists themselves may start to believe it. They can come to see themselves as individuals possessing more powers of objectivity than other humans (thus fundamentally misunderstanding where objectivity comes from), with privileged access to truth, with insights that don’t need to be rigorously tested or supported with empirical evidence. (Watson’s 2007 claims about race fit in this territory.)

Scientists making authoritative claims beyond what science can support is a bigger problem. To the extent that the public also buys into the hero narrative of science, that public is likely to take what Nobel Prize winners say as authoritative, even in the absence of good empirical evidence. Here Watson keeps company with William Shockley and his claims on race, Kary Mullis and his claims on HIV, and Linus Pauling and his advocacy of mega-doses of vitamin C. Some may argue that non-scientists need to be more careful consumers of scientific claims, but it would surely help if scientists themselves would recognize the limits of their own expertise and refrain from overselling either their claims or their individual knowledge-building power.

Where Watson’s claims about race are concerned, the harm of positioning him as an exceptional scientist goes further than reinforcing a common misunderstanding of where scientific knowledge comes from. These views, asserted authoritatively by a Nobel Prize winner, give cover to people who want to believe that their racist views are justified by scientific knowledge.

As well, as I have argued before (in regard to Richard Feynman and sexism), the hero narrative can be harmful to the goal of scientific outreach given the fact that human scientists usually have some problematic features and that these problematic features are often ignored, minimized, or even justified (e.g., as “a product of the time”) in order to foreground the hero’s great achievement and sell the science. There seems to be no shortage of folks willing to label Watson’s racist views as unfortunate but also as something that should not overshadow his discovery of the structure of DNA. In order that the unfortunate views not overshadow the big scientific contribution, some of these folks would rather we stop talking about Watson’s having made the claims he has made about racial difference (although Watson shows no apparent regret for holding these views, only for having voiced them to reporters).

However, especially for people in the groups that James Watson has claimed are genetically inferior, asserting that Watson’s massive scientific achievement trumps his problematic claims about race can be alienating. His scientific achievement doesn’t magically remove the malign effects of the statements he has made from a very large soapbox, using his authority as a Nobel Prize winning scientist. Ignoring those malign effects, or urging people to ignore them because of the scientific achievement which gave him that big soapbox, sounds an awful lot like saying that including the whole James Watson package in science is more important than including black people as scientific practitioners or science fans.

The hero narrative gives James Watson’s claims more power than they deserve. The hero narrative also makes urgent the need to deem James Watson’s “foibles” forgivable so we can appreciate his contribution to knowledge. None of this is helpful to the practice of science. None of it helps non-scientists engage more responsibly with scientific claims or scientific practitioners.

Holding James Watson to account for his claims, holding him responsible for scientific standards of evidence, doesn’t render him an unperson. Indeed, it amounts to treating him as a person engaged in the scientific knowledge-building project, as well as a person sharing a world with the rest of us.

* * * * *
Michael Hendricks offers a more concise argument against the hero narrative in science.

And, if you’re not up on the role of Rosalind Franklin in the discovery of the structure of DNA, these seventh graders can get you started.

Grappling with the angry-making history of human subjects research, because we need to.

Teaching about the history of scientific research with human subjects bums me out.

Indeed, I get fairly regular indications from students in my “Ethics in Science” course that reading about and discussing the Nazi medical experiments and the U.S. Public Health Service’s Tuskegee syphilis experiment leaves them feeling grumpy, too.

Their grumpiness varies a bit depending on how they see themselves in relation to the researchers whose ethical transgressions are being inspected. Some of the science majors who identify strongly with the research community seem to get a little defensive, pressing me to see if these two big awful examples of human subject research aren’t clear anomalies, the work of obvious monsters. (This is one reason I generally point out that, when it comes to historical examples of ethically problematic research with human subjects, the bench is deep: the U.S. government’s syphilis experiments in Guatemala, the MIT Radioactivity Center’s studies on kids with mental disabilities in a residential school, the harms done by scientists working with HeLa cells to Henrietta Lacks and to the family members who survived her, the National Cancer Institute- and Gates Foundation-funded studies of cervical cancer screening in India — to name just a few.) Some of the non-science majors in the class seem to look at their classmates who are science majors with a bit of suspicion.

Although I’ve been covering this material with my students since Spring of 2003, it was only a few years ago that I noticed that there was a strong correlation between my really bad mood and the point in the semester when we were covering the history of human subjects research. Indeed, I’ve come to realize that this is no mere correlation but a causal connection.

The harm that researchers have done to human subjects in order to build scientific knowledge in many of these historically notable cases makes me deeply unhappy. These cases involve scientists losing their ethical bearings and then defending indefensible actions as having been all in the service of science. It leaves me grumpy about the scientific community of which these researchers were a part (a community in which they were not obviously marked as monsters or rogues). It leaves me grumpy about humanity.

In other contexts, my grumpiness might be no big deal to anyone but me. But in the context of my “Ethics in Science” course, I need to keep pessimism on a short leash. It’s kind of pointless to talk about what we ought to do if you’re feeling like people are going to be as evil as they can get away with being.

It’s important to talk about the Nazi doctors and the Tuskegee syphilis experiment so my students can see where formal statements about ethical constraints on human subject research (in particular, the Nuremberg Code and the Belmont Report) come from, what actual (rather than imagined) harms they are reactions to. To the extent that official rules and regulations are driven by very bad situations that the scientific community or the larger human community want to avoid repeating, history matters.

History also matters if scientists want to understand the attitudes of publics towards scientists in general and towards scientists conducting research with human subjects in particular. Newly-minted researchers who would never even dream of crossing the ethical lines the Nazi doctors or the Tuskegee syphilis researchers crossed may feel it deeply unfair that potential human subjects don’t default to trusting them. But that’s not how trust works. Ignoring the history of human subjects research means ignoring very real harms and violations of trust that have not faded from the collective memories of the populations that were harmed. Insisting that it’s not fair doesn’t magically earn scientists trust.

Grappling with that history, though, might help scientists repair trust and ensure that the research they conduct is actually worthy of trust.

It’s history that lets us start noticing patterns in the instances where human subjects research took a turn for the unethical. Frequently we see researchers working with human subjects whom they don’t see as fully human, or whose humanity seems less important than the piece of knowledge the researchers have decided to build. Or we see researchers who believe they are approaching questions “from the standpoint of pure science,” overestimating their own objectivity and good judgment.

This kind of behavior does not endear scientists to publics. Nor does it help researchers develop appropriate epistemic humility, a recognition that their objectivity is not an individual trait but rather a collective achievement of scientists engaging seriously with each other as they engage with the world they are trying to know. Nor does it help them build empathy.

I teach about the history of human subjects research because it is important to understand where the distrust between scientists and publics has come from. I teach about this history because it is crucial to understanding where current rules and regulations come from.

I teach about this history because I fully believe that scientists can — and must — do better.

And, because the ethical failings of past human subject research were hardly ever the fault of monsters, we ought to grapple with this history so we can identify the places where individual human weaknesses, biases, and blind spots are likely to lead to ethical problems down the road. We need to build systems and social mechanisms to be accountable to human subjects (and to publics), to prioritize their interests, and never to lose sight of their humanity.

We can — and must — do better. But this requires that we seriously examine the ways that scientists have fallen short — even the ways that they have done evil. We owe it to future human subjects of research to learn from the ways scientists have failed past human subjects, to apply these lessons, to build something better.

Communicating with the public, being out as a scientist.

In the previous post, I noted that scientists are not always directly engaged in the project of communicating about their scientific findings (or about the methods they used to produce those findings) to the public.

Part of this is a matter of incentives: most scientists don’t have communicating with the public as an explicit part of their job description, and they are usually better rewarded for paying attention to things that are explicit parts of their job descriptions. Part of it is training: scientists are generally taught a whole lot more about how to conduct research in their field than they are taught about effective strategies for communicating with non-scientists. Part of it is the presence of other professions (like journalists and teachers and museum curators) that are, more or less, playing the communicating-with-the-public-about-science zone. Still another part of it may be temperament: some people say that they went into science because they wanted to do research, not to deal with people. Of course, since doing research requires dealing with other people sooner or later, I’m guessing these folks are terribly bitter that scientific research did not support their preferred lifestyle of total isolation from human contact — or, that they really meant that they didn’t want to deal with people who are non-scientists.

I’d like to suggest, however, that there are very good reasons for scientists to be communicating about science with non-scientists — even if it’s not a job requirement, and there are other people playing that zone, and it doesn’t feel like it comes naturally.

The public has an interest in understanding more than it does about what science knows and how science comes to know it, about which claims are backed by evidence and which others are backed by wishful thinking or outright deception. But it’s hard to engage an adult as you would a student; members of the public are frequently just not up for didactic engagement. Dropping a lecture on them about what you perceive as their ignorance (or their “knowledge deficit,” as the people who study scientific communication and public understanding of science would call it) probably won’t be a welcome form of engagement.

In general, non-scientists neither need nor want to be able to evaluate scientific claims and evidence with the technical rigor with which scientists evaluate them. What they need more is a read on whether the scientists whose job it is to make and evaluate these claims are the kind of people they can trust.

This seems to me like a good reason for scientists to come out as scientists to their communities, their families, their friends.

Whenever there are surveys of how many Americans can name a living scientist, a significant proportion of the people surveyed just can’t name any. But I suspect a bunch of these people know actual, living scientists who walk in their midst — they just don’t know that these folks they know as people are also scientists.

If everyone who is a scientist were to bring that identity to their other human interactions, to let it be a part of what the neighbors, or the kids whose youth soccer team they coach, or the people at the school board meeting, or the people at the gym know about them, what do you think that might do to the public’s picture of who scientists are and what scientists are like? What could letting your scientific identity ride along with the rest of you do to help your non-scientist fellow travelers get an idea of what scientists do, or of what inspires them to do science? Could being open about your ties to science help people who already have independent reasons to trust you find reasons to be less reflexively distrustful of science and scientists?

These seem to me like empirical questions. Let’s give it a try and see what we find out.

Are scientists who don’t engage with the public obliged to engage with the press?

In posts of yore, we’ve had occasion to discuss the duties scientists may have to the non-scientists with whom they share a world. One of these is the duty to share the knowledge they’ve built with the public — especially if that knowledge is essential to the public’s ability to navigate pressing problems, or if the public has put up the funds for the research in which that knowledge was built.

Even if you’re inclined to think that what we have here is something that falls short of an obligation, there are surely cases where it would have good effects — not just for the public, but also for scientists — if the public were informed of important scientific findings. After all, if not knowing a key piece of knowledge, or not understanding its implications or how certain or uncertain it is, leads the public to make worse decisions (whether at the ballot box or in their everyday lives), the impacts of those worse decisions could also harm the scientists with whom they are sharing a world.

But here’s the thing: Scientists are generally trained to communicate their knowledge through journal articles and conference presentations, seminars and grant proposals, patent applications and technical documents. Moreover, these tend to be the kind of activities in scientific careers that are rewarded by the folks making the evaluations, distributing grant money, and cutting the paychecks. Very few scientists get explicit training in how to communicate about their scientific findings, or about the processes by which the knowledge is built, with the public. Some scientists manage to be able to do a good job of this despite a lack of training, others less so. And many scientists will note that there are hardly enough hours in the day to tackle all the tasks that are recognized and rewarded in their official scientific job descriptions without adding “communicating science to the public” to the stack.

As a result, much of the job of communicating to the public about scientific research and new scientific findings falls to the press.

This raises another question for scientists: If scientists have a duty to make sure the knowledge they build is shared with the public (or at least a strong interest in seeing that happen), and if scientists themselves are not taking on the communicative task of sharing it (whether because they don’t have the time or they don’t have the skills to do it effectively), do scientists have an obligation to engage with the press to whom that communicative task has fallen?

Here, of course, we encounter some longstanding distrust between scientists and journalists. Scientists sometimes worry that the journalists taking on the task of making scientific findings intelligible to the public don’t themselves understand the scientific details (or scientific methodology more generally) much better than the public does. Or, they may worry about helping a science journalist who has already decided on the story they are going to tell and who will gleefully ignore or distort facts in the service of telling that story. Or, they may worry that the discovery-of-the-week model of science that journalists frequently embrace distorts the public’s understanding of the ongoing cooperative process by which a body of scientific knowledge is actually built.

To the extent that scientists believe journalists will manage to get things wrong, they may feel like they do less harm to the public’s understanding of science if they do not engage with journalists at all.

While I think this is an understandable impulse, I don’t think it necessarily minimizes the harm.

Indeed, I think it’s useful for scientists to ask themselves: What happens if I don’t engage and journalists try to tell the story anyway, without input from scientists who know this area of scientific work and why it matters?

Of course, I also think it would benefit scientists, journalists, and the public if scientists got more support here, from training in how to work with journalists, to institutional support in their interactions with journalists, to more general recognition that communicating about science with broader audiences is a good thing for scientists (and scientific institutions) to be doing. But in a world where “public outreach” falls much further down on the scientist’s list of pressing tasks than does bringing in grant money, training new lab staff, and writing up results for submission, science journalists are largely playing the zone where communication of science to the public happens. Scientists who are playing other zones should think about how they can support science journalists in covering their zone effectively.

Successful science outreach means connecting with the people you’re trying to reach.

Let’s say you think science is cool, or fun, or important to understand (or to do) in our modern world. Let’s say you want to get others who don’t (yet) see science as cool, or fun, or important, to appreciate how cool, how fun, how important it is.

Doing that, even on a small scale, is outreach.

Maybe just talking about what you find cool, fun, and important will help some others come to see science that way. But it’s also quite possible that some of the people to whom you’re reaching out will not be won over by the same explanations, the same experiences, the same exemplars of scientific achievement that won you over.

If you want your outreach to succeed, it’s not enough to know what got you engaged with science. To engage people-who-are-not-you, you probably need to find out something about them.

Find out what their experiences with science have been like — and what their experiences with scientists (and science teachers) have been like. These experiences shape what they think about science, but also what they think about who science is for.

Find out what they find interesting and what they find off-putting.

Find out what they already know and what they want to know. Don’t assume before doing this that you know where their information is gappy or what they’re really worried about. Don’t assume that filling in gaps in their knowledge is all it will take to make them science fans.

Recognize that your audience may not be as willing as you want them to be to separate their view of science from their view of scientists. A foible of a famous scientist that is no big deal to you may be a huge deal to people you’re trying to reach who have had different experiences. Your baseline level of trust for scientists and the enterprise of scientific knowledge-building may be higher than that of people in your target audience who come from communities that have been hurt by researchers or harmed by scientific claims used to justify their marginalization.

Actually reaching people means taking their experiences seriously. Telling someone how to feel is a bad outreach strategy.

Taking the people you’re trying to reach seriously also means taking seriously their capacity to understand and to make good decisions — even when their decisions are not precisely the decisions you might make. When you feel frustration because of decisions being made out of what looks to you like ignorance, resist the impulse to punch down. Instead, ask where the decisions are coming from and try to understand them before explaining, respectfully, why you’d make a different decision.

If your efforts at outreach don’t seem to be reaching people or groups you are trying hard to reach, seriously consider the possibility that what you’re doing may not be succeeding because it’s not aligned with the wants or needs of those people or groups.

If you’re serious about reaching those people or groups, ask them how your outreach efforts are coming across to them, and take their answers seriously.

Heroes, human “foibles”, and science outreach.

“Science is a way of trying not to fool yourself. The first principle is that you must not fool yourself, and you are the easiest person to fool.”

— Richard Feynman

There is a tendency sometimes to treat human beings as if they were resultant vectors arrived at by adding lots and lots of particular vectors together, an urge to try to work out whether someone’s overall contribution to their field (or to the world) was a net positive.

Unless you have the responsibility for actually putting the human being in question into the system to create good or bad effects (and I don’t kid myself that my readership is that omnipotent), I think treating human beings like resultant vectors is not a great idea.

For one thing, in focusing on the net effect, one tends to overlook that people are complicated. You end up in a situation where you might use those overall tallies to sort people into good and evil rather than noticing how in particular circumstances good and bad may turn on a decision or two.

This can also create an unconscious tendency to put a thumb on the scale when the person whose impact you’re evaluating is someone about whom you have strong feelings, whether they’re a hero to you or a villain. As a result, you may end up completely ignoring the experiences of others, or noticing them but treating them as insignificant, when a better course of action may be to recognize that it’s entirely possible that people who had a positive impact on you had a negative impact on others (and vice versa).

Science is sometimes cast as a pursuit in which people can, by participating in a logical methodology, transcend their human frailties, at least insofar as these frailties constrain our ability to get objective knowledge of the world. On that basis, you’ll hear the claim that we really ought to separate the scientific contributions of an individual from their behaviors and interactions with others. In other words, we should focus on what they did when they were being a scientist rather than on the rest of the (incidental) stuff they did while they were being a human.

This distinction rests on a problematic dichotomy between being a scientist and being a human. Because scientific knowledge is built not just through observations and experiments but also through human interactions, drawing a clear line between human behavior and scientific contributions is harder than it might at first appear.

Consider a scientist who has devised, conducted, and reported the results of many important experiments. If it turns out that some of those experimental results were faked, what do you want to say about his scientific legacy? Can you be confident in his other results? If so, on what basis can you be confident?

The coordinated effort to build a reliable body of knowledge about the world depends on a baseline level of trust between scientists. Without that trust, you are left having to take on the entire project yourself, and that seriously diminishes the chances that the knowledge you’re building will be objective.

What about behaviors that don’t involve putting misinformation into the scientific record? Are those the kinds of things we can separate from someone’s scientific contributions?

Here, the answer will depend a lot on the particulars of those behaviors. Are we talking about a scientist who dresses his dogs in ugly sweaters, or one who plays REO Speedwagon albums at maximum volume while drafting journal articles? Such peculiarities might come up in anecdotes but they probably won’t impact the credibility of one’s science. Do we have a scientist who is regularly cruel to his graduate student trainees, or who spreads malicious rumors about his scientific colleagues? That kind of behavior has the potential to damage the networks of trust and cooperation upon which the scientific knowledge-building endeavor depends, which means it probably can’t be dismissed as a mere “foible”.

What about someone who is scrupulously honest about his scientific contributions but whose behavior towards women or members of underrepresented minorities demonstrates that he does not regard them as being as capable, as smart, or as worthy of respect? What if, moreover, most of these behaviors are displayed outside of scientific contexts (owing to the general lack of women or members of underrepresented minorities in the scientific contexts this scientist encounters)? Intended or not, such attitudes and behaviors can have the effect of excluding people from the scientific community. Even a scientist who thinks he is actively working to improve outreach and inclusion, but who regularly treats the people he is trying to help as “less than”, can end up excluding them. It also sets a tone within his community where it’s predictable that simply having more women and members of underrepresented minorities there won’t result in their full participation, whether because he and his like-minded colleagues are disinclined to waste their time interacting with them or because those colleagues get burnt out interacting with people who treat them as “less than”.

This last description of a hypothetical scientist is not too far from famous physicist Richard Feynman, something that we know not just from the testimony of his contemporaries but from Feynman’s own accounts. As it happens, Feynman is enough of a hero to scientists and people who do science outreach that many seem compelled to insist that the net effect of his legacy is positive. Ironically, the efforts to paint Feynman as a net-good guy can inflict harms similar to the behavior Feynman’s defenders seem to minimize.

In an excellent, nuanced post on Feynman, Matthew Francis writes:

Richard Feynman casts the longest shadow in the collective psyche of modern physicists. He plays nearly the same role within the community that Einstein does in the world beyond science: the Physicist’s Physicist, someone almost as important as a symbol as he was as a researcher. Many of our professors in school told Feynman stories, and many of us acquired copies of his lecture notes in physics. …

Feynman was a pioneer of quantum field theory, one of a small group of researchers who worked out quantum electrodynamics (QED): the theory governing the behavior of light, matter, and their interactions. QED shows up everywhere from the spectrum of atoms to the collisions of electrons inside particle accelerators, but Feynman’s calculation techniques proved useful well beyond the particular theory.

Not only that, his explanations of quantum physics were deep and cogent, in a field where clarity can be hard to come by. …

Feynman stories that get passed around physics departments aren’t usually about science, though. They’re about his safecracking, his antics, his refusal to wear neckties, his bongos, his rejection of authority, his sexual predation on vulnerable women.

The predation in question here included actively targeting female students as sex partners, a behavior that rather conveys that you don’t view them primarily in terms of their potential to contribute to science.

While it is true that much of what we know about Richard Feynman’s behavior is the result of Feynman telling stories about himself, these stories really don’t seem to indicate awareness of the harmful impacts his behavior might have had on others. Moreover, Feynman’s tone in telling these stories suggests he assumed an audience that would be taken with his cleverness, including his positioning of women (and his ability to get into their pants) as a problem to be solved scientifically.

Apparently these are not behaviors that prevented Feynman from making significant contributions to physics. However, it’s not at all clear that these are behaviors that did no harm to the scientific community.

One take-home message of all this is that making positive contributions to science doesn’t magically cancel out harmful things you may do — including things that may have the effect of harming other scientists or the cooperative knowledge-building effort in which they’re engaged. If you’re a living scientist, this means you should endeavor not to do harm, regardless of what kinds of positive contributions you’ve amassed so far.

Another take-home message here is that it is dangerous to rest your scientific outreach efforts on scientific heroes.

If the gist of your outreach is: “Science is cool! Here’s a cool guy who made cool contributions to science!” and it turns out that your “cool guy” actually displayed some pretty awful behavior (sexist, racist, whatever), you probably shouldn’t put yourself in a position where your message comes across as:

  • These scientific contributions were worth the harm done by his behavior (including the harm it may have done in unfairly excluding people from full participation in science).
  • He may have been sexist or racist, but that was no big deal because people in his time, place and culture were pretty sexist (as if that removes the harm done by the behavior).
  • He did some things that weren’t sexist or racist, so that cancels out the things he did that were sexist or racist. Maybe he worked hard to help a sister or a daughter participate in science; how can we then say that his behavior hurt women’s inclusion in science?
  • His sexism or racism was no big deal because it seems to have been connected to a traumatic event (e.g., his wife died, he had a bad experience with a black person once), or because the problematic behavior seems to have been his way of “blowing off steam” during a period of scientific productivity.

You may be intending to convey the message that this was an interesting guy who made some important contributions to science, but the message that people may take away is that great scientific achievement totally outweighs sexism, racism, and other petty problems.

But people aren’t actually resultant vectors. If you’re a target of the racism, sexism, and other petty problems, you may not feel like they should be overlooked or forgiven on the strength of the scientific achievement.

Science outreach doesn’t just deliver messages about what science knows or about the processes by which that knowledge is built. Science outreach also delivers messages about what kind of people scientists are (and about what kinds of people can be scientists).

There is a special danger lurking here if you are doing science outreach by using a hero like Feynman and you are not a member of a group likely to have been hurt by his behavior. You may believe that the net effect of his story casts science and scientists in a way that will draw people in, but it’s possible you are fooling yourself.

Maybe you aren’t the kind of person whose opinion about science or eagerness to participate in science would be influenced by the character flaws of the “scientific heroes” on offer, but if you’re already interested in science perhaps you’re not the main target for outreach efforts. And if members of the groups who are targeted for outreach tell you that they find these “scientific heroes” and the glorification of them by science fans alienating, perhaps listening to them would help you to devise more effective outreach strategies.

Building more objective knowledge about the world requires input from others. Why should we think that ignoring such input — especially from the kind of people you’re trying to reach — would lead to better science outreach?

Conduct of scientists (and science writers) can shape the public’s view of science.

Scientists undertake a peculiar kind of project. In striving to build objective knowledge about the world, they are tacitly recognizing that our unreflective picture of the world is likely to be riddled with mistakes and distortions. On the other hand, they frequently come to regard themselves as better thinkers — as more reliably objective — than humans who are not scientists, and end up forgetting that they have biases and blindspots of their own which they are helpless to detect without help from others who don’t share these particular biases and blindspots.

Building reliable knowledge about the world requires good methodology, teamwork, and concerted efforts to ensure that the “knowledge” you build doesn’t simply reify preexisting individual and cultural biases. It’s hard work, but it’s important to do it well — especially given a long history of “scientific” findings being used to justify and enforce preexisting cultural biases.

I think this bigger picture is especially appropriate to keep in mind in reading the response from Scientific American Blogs Editor Curtis Brainard to criticisms of a pair of problematic posts on the Scientific American Blog Network. Brainard writes:

The posts provoked accusations on social media that Scientific American was promoting sexism, racism and genetic determinism. While we believe that such charges are excessive, we share readers’ concerns. Although we expect our bloggers to cover controversial topics from time to time, we also recognize that sensitive issues require extra care, and that did not happen here. The author and I have discussed the shortcomings of the two posts in detail, including the lack of attention given to countervailing arguments and evidence, and he understood the deficiencies.

As stated at the top of every post, Scientific American does not always share the views and opinions expressed by our bloggers, just as our writers do not always share our editorial positions. At the same time, we realize our network’s bloggers carry the Scientific American imprimatur and that we have a responsibility to ensure that—differences of opinion notwithstanding—their work meets our standards for accuracy, integrity, transparency, sensitivity and other attributes.

(Bold emphasis added.)

The problem here isn’t that the posts in question advocated sound scientific views with implications that people on social media didn’t like. Rather, the posts presented claims in a way that made them look like they had much stronger scientific support than they really do — and did so in the face of ample published scientific counterarguments. Scientific American is not requiring that posts on its blog network meet a political litmus test, but rather that they embody the same kind of care, responsibility to the available facts, and intellectual honesty that science itself should display.

This is hard work, but it’s important. And engaging seriously with criticism, rather than just dismissing it, can help us do it better.

There’s an irony in the fact that one of the problematic posts which ignored some significant relevant scientific literature (helpfully cited by commenters in the comments section of that very post) was ignoring that literature in the service of defending Larry Summers and his remarks on possible innate biological causes that make men better at math and science than women. The irony lies in the fact that Larry Summers displayed an apparently ironclad commitment to ignore any and all empirical findings that might challenge his intuition that there’s something innate at the heart of the gender imbalance in math and science faculty.

Back in January of 2005, Larry Summers gave a speech at a conference about what can be done to attract more women to the study of math and science, and to keep them in the field long enough to become full professors. In his talk, Summers suggested as a possible hypothesis for the relatively low number of women in math and science careers that there may be innate biological factors that make males better at math and science than females. (He also related an anecdote about his daughter naming her toy trucks as if they were dolls, but it’s fair to say he probably meant this anecdote to be illustrative rather than evidentiary.)

The talk did not go over well with the rest of the participants in the conference.

Several scientific studies were presented at the conference before Summers made his speech. All these studies presented significant evidence against the claim of an innate difference between males and females that could account for the “science gap”.


In the aftermath of this conference of yore, there were some commenters who lauded Summers for voicing “unpopular truths” and others who distanced themselves from his claims but said they supported his right to make them as an exercise of “academic freedom.”

But if Summers was representing himself as a scientist* when he made his speech, I don’t think the “academic freedom” defense works.


Summers is free to state hypotheses — even unpopular hypotheses — that might account for a particular phenomenon. But, as a scientist, he is also responsible to take account of data relevant to his hypotheses. If the data weighs against his preferred hypothesis, intellectual honesty requires that he at least acknowledge this fact. Some would argue that it could even require that he abandon his hypothesis (since science is supposed to be evidence-based whenever possible).


When news of Summers’ speech, and reactions to it, was fresh, one of the details that stuck with me was that one of the conference organizers noted to Summers, after he gave his speech, that there was a large body of evidence — some of it presented at that very conference — that seemed to undermine his hypothesis, after which Summers gave a reply that amounted to, “Well, I don’t find those studies convincing.”

Was Summers within his rights to not believe these studies? Sure. But, he had a responsibility to explain why he rejected them. As a part of a scientific community, he can’t just reject a piece of scientific knowledge out of hand. Doing so comes awfully close to undermining the process of communication that scientific knowledge is based upon. You aren’t supposed to reject a study because you have more prestige than the authors of the study (so, you don’t have to care what they say). You can question the experimental design, you can question the data analysis, you can challenge the conclusions drawn, but you have to be able to articulate the precise objection. Surely, rejecting a study just because it doesn’t fit with your preferred hypothesis is not an intellectually honest move.


By my reckoning, Summers did not conduct himself as a responsible scientist in this incident. But I’d argue that the problem went beyond a lack of intellectual honesty within the universe of scientific discourse. Summers is also responsible for the bad consequences that flowed from his remark.


The bad consequence I have in mind here is the mistaken view of science and its workings that Summers’ conduct conveys. Especially by falling back on a plain vanilla “academic freedom” defense here, defenders of Summers conveyed to the public at large the idea that any hypothesis in science is as good as any other. Scientists who are conscious of the evidence-based nature of their field will see the absurdity of this idea — some hypotheses are better, others worse, and whenever possible we turn to the evidence to make these discriminations. Summers compounded ignorance of the relevant data with what came across as a statement that he didn’t care what the data showed. From this, the public at large could assume he was within his scientific rights to decide which data to care about without giving any justification for this choice**, or they could infer that data has little bearing on the scientific picture of the world.

Clearly, such a picture of science would undermine the standing of the rest of the bits of knowledge produced by scientists far more intellectually honest than Summers.


Indeed, we might go further here. Not only did Summers have some responsibilities that seemed to have escaped him while he was speaking as a scientist, but we could argue that the rest of the scientists (whether at the conference or elsewhere) have a collective responsibility to address the mistaken picture of science his conduct conveyed to society at large. It’s disappointing that, nearly a decade later, we instead have to contend with the problem of scientists following in Summers’ footsteps by ignoring, rather than engaging with, the scientific findings that challenge their intuitions.

Owing to the role we play in presenting a picture of what science knows and of how scientists come to know it to a broader audience, those of us who write about science (on blogs and elsewhere) also have a responsibility to be clear about the kind of standards scientists need to live up to in order to build a body of knowledge that is as accurate and unbiased as humanly possible. If we’re not clear about these standards in our science writing, we risk misleading our audiences about the current state of our knowledge and about how science works to build reliable knowledge about our world. Our responsibility here isn’t just a matter of noticing when scientists are messing up — it’s also a matter of acknowledging and correcting our own mistakes and of working harder to notice our own biases. I’m pleased that our Blogs Editor is committed to helping us fulfill this duty.
_____
*Summers is an economist, and whether to regard economics as a scientific field is a somewhat contentious matter. I’m willing to give the scientific status of economics the benefit of the doubt, but this means I’ll also expect economists to conduct themselves like scientists, and will criticize them when they do not.

**It’s worth noting that a number of the studies that Summers seemed to be dismissing out of hand were conducted by women. One wonders what lessons the public might draw from that.

_____
A portion of this post is an updated version of an ancestor post on my other blog.

Some thoughts about human subjects research in the wake of Facebook’s massive experiment.

You can read the study itself here, plus a very comprehensive discussion of reactions to the study here.

1. If you intend to publish your research in a peer-reviewed scientific journal, you are expected to have conducted that research with the appropriate ethical oversight. Indeed, the submission process usually involves explicitly affirming that you have done so (and providing documentation, in the case of human subjects research, of approval by the relevant Institutional Review Board(s) or of the IRB’s determination that the research was exempt from IRB oversight).

2. Your judgment, as a researcher, that your research will not expose your human subjects to especially big harms does not suffice to exempt that research from IRB oversight. The best way to establish that your research is exempt from IRB oversight is to submit your protocol to the IRB and have the IRB determine that it is exempt.

3. It’s not unreasonable for people to judge that violating their informed consent (say, by not letting them know that they are human subjects in a study where you are manipulating their environment and not giving them the opportunity to opt out of being part of your study) is itself a harm to them. When we value our autonomy, we tend to get cranky when others disregard it.

4. Researchers, IRBs, and the general public needn’t judge a study to be as bad as [fill in the name of a particularly horrific instance of human subjects research] to judge the conduct of the researchers in the study unethical. We can (and should) surely ask for more than “not as bad as the Tuskegee Syphilis Experiment”.

5. IRB approval of a study means that the research has received ethical oversight, but it does not guarantee that the treatment of human subjects in the research will be ethical. IRBs can make questionable ethical judgments too.

6. It is unreasonable to suggest that you can generally substitute Terms of Service or End User License Agreements for informed consent documents, as the latter are supposed to be clear and understandable to your prospective human subjects, while the former are written in such a way that even lawyers have a hard time reading and understanding them. The TOS or EULA is clearly designed to protect the company, not the user. (Some of those users, by the way, are in their early teens, which means they probably ought to be regarded as members of a “vulnerable population” entitled to more protection, not less.)

7. Just because a company like Facebook may “routinely” engage in manipulations of a user’s environment doesn’t make that kind of manipulation automatically ethical when it is done for the purposes of research. Nor does it mean that such manipulation is ethical when Facebook does it for its own purposes. As it happens, peer-reviewed scientific journals, funding agencies, and other social structures tend to hold scientists building knowledge with human subjects research to higher ethical standards than (say) corporations are held to when they interact with humans. This doesn’t necessarily mean our ethical demands of scientific knowledge-builders are too high. Instead, it may mean that our ethical demands of corporations are too low.

Resistance to ethics instruction: considering the hypothesis that moral character is fixed.

This week I’ve been blogging about the resistance to required ethics coursework one sometimes sees in STEM* disciplines. Because one reason for this resistance is the hunch that you can’t teach a person to be ethical once they’re past a certain (pre-college) age, my previous post noted that there’s a sizable body of research supporting ethics instruction as an intervention that helps people behave more ethically.

But, as I mentioned in that post, the intuition that one’s moral character is fixed by one’s twenties can be so strong that folks don’t always believe what the empirical research says about the question.

So, as a thought experiment, let’s entertain the hypothesis that, by your twenties, your moral character is fixed — that you’re either ethical or evil by then and there’s nothing further ethics instruction can do about it. If this were the case, how would we expect scientists to respond to other scientists or scientific trainees who behave unethically?

Presumably, scientists would want the unethical members of the tribe of science identified and removed, permanently. Under the fixed-character hypothesis, the removal would have to be permanent, because there would be every reason to expect the person who behaved unethically to behave unethically again.

If we took this hypothesis seriously, it would mean that every college student who ever cheated on a quiz or made up data for a lab report should be barred from entry to the scientific community, and that every grown-up scientist caught committing scientific misconduct — or any ethical lapse, even one falling well short of fabrication, falsification, or plagiarism — would be excommunicated from the tribe of science forever.

That just doesn’t happen. Even Office of Research Integrity findings of scientific misconduct don’t typically lead to lifetime debarment from federal research funding. Instead, they usually lead to administrative actions imposed for a finite duration, on the order of years, not decades.

And, I don’t think the failure to impose a policy of “one strike, you’re out” for those who behave unethically is because members of the tribe of science are being held back by some naïvely optimistic outside force (like the government, or the taxpaying public, or ethics professors). Nor is it because scientists believe it’s OK to lie, cheat, and steal in one’s scientific practice; there is general agreement that scientific misconduct damages the shared body of knowledge scientists are working to build.

When dealing with members of their community who have behaved unethically, scientists usually behave as if there is a meaningful difference between a first offense and a pattern of repeated offenses. This wouldn’t make sense if scientists were truly committed to the fixed-character hypothesis.

On the other hand, it fits pretty well with the hypothesis that people may be able to learn from their mistakes — to be rehabilitated rather than simply removed from the community.

There are surely some hard cases that the tribe of science views as utterly irredeemable, but graduate students or early-career scientists whose unethical behavior is caught early are treated by many as probably redeemable.

How to successfully rehabilitate a scientist who has behaved unethically is a tricky question, and not one scientists seem inclined to speak about much. Actions by universities, funding agencies, or governmental entities like the Office of Research Integrity are part of the punishment landscape, but punishment is not the same thing as rehabilitation. Meanwhile, it’s unclear whether individual actions to address wrongdoing are effective at heading off future unethical behavior.

If it takes a village to raise a scientist, it may take concerted efforts at the level of scientific communities to rehabilitate scientists who have strayed from the path of ethical practice. We’ll discuss some of the challenges with that in the next post.

______
*STEM stands for science, technology, engineering, and mathematics.