James Watson’s sense of entitlement, and misunderstandings of science that need to be countered.

James Watson, who shared a Nobel Prize in 1962 for discovering the double helix structure of DNA, is in the news, offering his Nobel Prize medal at auction. As reported by the Telegraph:

Mr Watson, who shared the 1962 Nobel Prize for uncovering the double helix structure of DNA, sparked an outcry in 2007 when he suggested that people of African descent were inherently less intelligent than white people.

If the medal is sold Mr Watson said he would use some of the proceeds to make donations to the “institutions that have looked after me”, such as University of Chicago, where he was awarded his undergraduate degree, and Clare College, Cambridge.

Mr Watson said his income had plummeted following his controversial remarks in 2007, which forced him to retire from the Cold Spring Harbor Laboratory on Long Island, New York. He still holds the position of chancellor emeritus there.

“Because I was an ‘unperson’ I was fired from the boards of companies, so I have no income, apart from my academic income,” he said.

He would also use some of the proceeds to buy an artwork, he said. “I really would love to own a [painting by David] Hockney”. …

Mr Watson said he hoped the publicity surrounding the sale of the medal would provide an opportunity for him to “re-enter public life”. Since the furore in 2007 he has not delivered any public lectures.

There’s a lot I could say here about James Watson, the assumptions under which he is laboring, and the potential impacts on science and the public’s engagement with it. In fact, I have said much of it before, although not always in reference to James Watson in particular. However, given the likelihood that we’ll keep hearing the same unhelpful responses to James Watson and his ilk if we don’t grapple with some of the fundamental misunderstandings of science at work here, it’s worth covering this ground again.

First, I’ll start with some of the claims I see Watson making around his decision to auction his Nobel Prize medal:

  • He needs money, given that he has “no income beyond [his] academic income”. One might take this as an indication that academic salaries in general ought to be raised (although I’m willing to bet a few bucks that Watson’s inadequate academic income is at least as much as that of the average academic actively engaged in research and/or teaching in the U.S. today). However, Watson gives no sign of calling for such an across-the-board increase, since…
  • He connects his lack of income to being fired from boards of companies and to his inability to book public speaking engagements after his 2007 remarks on race.
  • He equates this removal from boards and lack of invitations to speak with being an “unperson”.

What comes across to me here is that James Watson sees himself as special, as entitled to seats on boards and speaker invitations. On what basis, we might ask, is he entitled to these perks, especially in the face of a scientific community just brimming with talented members currently working at the cutting edge(s) of scientific knowledge-building? It is worth noting that some who attended recent talks by Watson judged them to be nothing special.

Possibly, then, speaking engagements may have dried up at least partly because James Watson was not such an engaging speaker — with an asking price of $50,000 for a paid speaking engagement, whether you give a good talk is a relevant criterion — rather than being driven entirely by his remarks on race in 2007, or before 2007. However, Watson seems sure that these remarks are the proximate cause of his lack of invitations to give public talks since 2007. And, he finds this result not to be in accord with what a scientist like himself deserves.

Positioning James Watson as a very special scientist who deserves special treatment above and beyond the recognition of the Nobel committee feeds the problematic narrative of scientific knowledge as an achievement of great men (and yes, in this narrative, it is usually great men who are recognized). This narrative ignores the fundamentally social nature of scientific knowledge-building and the fact that objectivity is the result of teamwork.

Of course, it’s even more galling to have James Watson portrayed (including by himself) as an exceptional hero of science rather than as part of a knowledge-building community given the role of Rosalind Franklin’s work in determining the structure of DNA — and given Watson’s apparent contempt for Franklin, rather than regard for her as a member of the knowledge-building team, in The Double Helix.

Indeed, part of the danger of the hero narrative is that scientists themselves may start to believe it. They can come to see themselves as individuals possessing more powers of objectivity than other humans (thus fundamentally misunderstanding where objectivity comes from), with privileged access to truth, with insights that don’t need to be rigorously tested or supported with empirical evidence. (Watson’s 2007 claims about race fit in this territory.)

Scientists making authoritative claims beyond what science can support is a problem that reaches beyond the scientists themselves. To the extent that the public also buys into the hero narrative of science, that public is likely to take what Nobel Prize winners say as authoritative, even in the absence of good empirical evidence. Here Watson keeps company with William Shockley and his claims on race, Kary Mullis and his claims on HIV, and Linus Pauling and his advocacy of mega-doses of vitamin C. Some may argue that non-scientists need to be more careful consumers of scientific claims, but it would surely help if scientists themselves would recognize the limits of their own expertise and refrain from overselling either their claims or their individual knowledge-building power.

Where Watson’s claims about race are concerned, the harm of positioning him as an exceptional scientist goes further than reinforcing a common misunderstanding of where scientific knowledge comes from. These views, asserted authoritatively by a Nobel Prize winner, give cover to people who want to believe that their racist views are justified by scientific knowledge.

As well, as I have argued before (in regard to Richard Feynman and sexism), the hero narrative can be harmful to the goal of scientific outreach given the fact that human scientists usually have some problematic features and that these problematic features are often ignored, minimized, or even justified (e.g., as “a product of the time”) in order to foreground the hero’s great achievement and sell the science. There seems to be no shortage of folks willing to label Watson’s racist views as unfortunate but also as something that should not overshadow his discovery of the structure of DNA. In order that the unfortunate views not overshadow the big scientific contribution, some of these folks would rather we stop talking about Watson’s having made the claims he has made about racial difference (although Watson shows no apparent regret for holding these views, only for having voiced them to reporters).

However, especially for people in the groups that James Watson has claimed are genetically inferior, asserting that Watson’s massive scientific achievement trumps his problematic claims about race can be alienating. His scientific achievement doesn’t magically remove the malign effects of the statements he has made from a very large soapbox, using his authority as a Nobel Prize winning scientist. Ignoring those malign effects, or urging people to ignore them because of the scientific achievement which gave him that big soapbox, sounds an awful lot like saying that including the whole James Watson package in science is more important than including black people as scientific practitioners or science fans.

The hero narrative gives James Watson’s claims more power than they deserve. The hero narrative also makes urgent the need to deem James Watson’s “foibles” forgivable so we can appreciate his contribution to knowledge. None of this is helpful to the practice of science. None of it helps non-scientists engage more responsibly with scientific claims or scientific practitioners.

Holding James Watson to account for his claims, holding him responsible for scientific standards of evidence, doesn’t render him an unperson. Indeed, it amounts to treating him as a person engaged in the scientific knowledge-building project, as well as a person sharing a world with the rest of us.

* * * * *
Michael Hendricks offers a more concise argument against the hero narrative in science.

And, if you’re not up on the role of Rosalind Franklin in the discovery of the structure of DNA, these seventh graders can get you started.

The Rosetta mission #shirtstorm was never just about that shirt.

Last week, the European Space Agency’s spacecraft Rosetta put a washing-machine-sized lander named Philae on Comet 67P/Churyumov-Gerasimenko.

Landing anything on a comet is a pretty amazing feat, so plenty of scientists and science-fans were glued to their computers watching for reports of the Rosetta mission’s progress. During the course of the interviews streamed to the public (including classrooms), Project Scientist Matt Taylor described the mission as the “sexiest mission there’s ever been”, but not “easy”. And, he conducted on-camera interviews in a colorful shirt patterned with pin-up images of scantily-clad women.

This shirt was noticed, and commented upon, by more than one woman in science and science communication.

To some viewers, Taylor’s shirt just read as a departure from the “boring” buttoned-down image the public might associate with scientists. But to many women scientists and science communicators who commented upon it, the shirt seemed to convey a lack of awareness of, or concern for, the experiences of women who have had colleagues, supervisors, teachers, and students treat them as less than real scientists, or science students, or science communicators, or science fans. It was jarring given all the subtle and not so subtle ways that some men (not all men) in science have conveyed to us that our primary value lies in being decorative or titillating, not in being capable, creative people with intelligence and skills who can make meaningful contributions to building scientific knowledge or communicating science to a wider audience.

The pin-up images of scantily clad women on the shirt Taylor wore on camera distracted people who were tuned in because they wanted to celebrate Rosetta. It jarred them, reminding them of the ways science can still be a boys’ club.

It was just one scientist, wearing just one shirt, but it was a token of a type that is far too common for many of us to ignore.

There is research on the ways that objectifying messages and images can have a significant negative effect on those in the group being objectified. Objectification, even if it’s unintentional, adds one more barrier (on top of implicit bias, stereotype threat, chilly climate, benevolent sexism, and outright harassment) to women’s participation.

Even if there weren’t a significant body of research demonstrating that the effects are real, the fact that women explicitly say that casual use of sexualizing imagery or language in professional contexts makes science less welcoming for them ought to count for more than an untested hunch that it shouldn’t make them feel this way.

And here’s the thing: this is a relatively easy barrier to remove. All it requires is thinking about whether your cheeky shirt, your wall calendar, your joke, is likely to have a negative effect on other people — including on women who are likely to have accumulated lots of indications that they are not welcomed in the scientific community on the same terms.

When Matt Taylor got feedback about the message his shirt was sending to some in his intended audience, he got it, and apologized unreservedly.

But the criticism was never about just one shirt, and what has been happening since Matt Taylor’s apology underlines that this is not a problem that starts and ends with Matt Taylor or with one bad wardrobe choice for the professional task at hand.

Despite Matt Taylor’s apology, legions of people have been asserting that he should not have apologized. They have been insisting that people objecting to his wearing that shirt while representing Rosetta and acting as an ambassador for science were wrong to voice their objections, wrong even to be affected by the shirt.

If only we could not be affected by things simply by choosing not to be affected by them. But that’s not how symbols work.

A critique of this wardrobe choice as one small piece of a scientific culture that makes it harder for women to participate fully brought forth throngs of people (including scientists) responding with a torrent of hostility and, in some cases, threats of harm. This response conveys that women are welcome in science, or science journalism, or the audience for landing a spacecraft on a comet, only as long as they shut up about any of the barriers they might encounter, while men in science should never, ever be made uncomfortable about choices they’ve made that might contribute (even unintentionally) to throwing up such barriers.

That is not a great strategy for demonstrating that science is welcoming to all.

Indeed, it’s a strategy that seems to embed a bunch of assumptions:

  • that it’s worth losing the scientific talent of women who might make the scientific climate uncomfortable for men by describing their experiences and pointing out barriers that are relatively easy to fix;
  • that men who have to be tough enough to test their hypotheses against empirical data and to withstand the rigors of peer review are not tough enough to handle it when women in their professional circle express discomfort;
  • that these men of science are incapable of empathy for others (including women) in their professional circle.

These strike me as bad assumptions. People making them seem to have a worse opinion of men who do science than the women voicing critiques have.

Voicing a critique (and sometimes steps it would be good to take going forward), rather than sighing and regarding the thing you’re critiquing as the cost of doing business, is something you do when you believe the person hearing it would want to know about the problem and address it. It comes from a place of trust — that your male colleagues aren’t trying to exclude you, and so will make little adjustments to stop doing unintentional harm once they know they’re doing it.

Matt Taylor seemed to understand the critique at least well enough to change his shirt and apologize for the unintentional harm he did. He seems willing to make that small effort to make science welcoming, rather than alienating.

Now we’re just waiting for the rest of the scientific community to join him.

You’re not rehabilitated if you keep deceiving.

Regular readers will know that I view scientific misconduct as a serious harm to both the body of scientific knowledge and the scientific community involved in building that knowledge. I also hold out hope that at least some of the scientists who commit scientific misconduct can be rehabilitated (and I’ve noted that other members of the scientific community behave in ways that suggest that they, too, believe that rehabilitation is possible).

But I think a non-negotiable prerequisite for rehabilitation is demonstrating that you really understand how what you did was wrong. This understanding needs to be more than simply recognizing that what you did was technically against the rules. Rather, you need to grasp the harms that your actions did, the harms that may continue as a result of those actions, the harms that may not be quickly or easily repaired. You need to acknowledge those harms, not minimize them or make excuses for your actions that caused the harms.

And, you need to stop behaving in the ways that caused the harms in the first place.

Among other things, this means that if you did significant harm to your scientific community, and to the students you were supposed to be training, by making up “results” rather than actually doing experiments and making and reporting accurate results, you need to recognize that you have acted deceptively. To stop doing harm, you need to stop acting deceptively. Indeed, you may need to be significantly more transparent and forthcoming with details than others who have not transgressed as you have. Owing to your past bad acts, you may just have to meet a higher burden of proof going forward.

That you have retracted the publications in which you deceived, or lost a degree for which (it is strongly suspected) you deceived, or lost your university post, or served your hours of court-ordered community service does not reset you to the normal baseline of presumptive trust. “Paying your debt to society” does not in itself obligate anyone to believe that you are now trustworthy. If you break trust, you need to earn it back, not demand it because you did your time.

You certainly can’t earn that trust back by engaging in deception to mount an argument that people should give you a break because you’ve served out your sentence.

These thoughts on how not to approach your own rehabilitation are prompted by the appearance of disgraced social scientist Diederik Stapel (discussed in several earlier posts) in the comments at Retraction Watch on a post about Diederik Stapel and his short-lived gig as an adjunct instructor for a college course. Now, there’s no prima facie reason Diederik Stapel might not be able to make a productive contribution to a discussion about Diederik Stapel.

However, Diederik Stapel was posting his comments not as Diederik Stapel but as “Paul”.

I hope it is obvious why posting comments that are supportive of yourself while making it appear that this support is coming from someone else is deceptive. Moreover, the comments seem to suggest that Stapel is not really fully responsible for the frauds he committed.

“Paul” writes:

Help! Let’s not change anything. Science is a flawless institution. Yes. And only the past two days I read about medical scientists who tampered with data to please the firm that sponsored their work and about the start of a new investigation into the work of a psychologist who produced data “too good to be true.” Mistakes abound. On a daily basis. Sure, there is nothing to reform here. Science works just fine. I think it is time for the “Men in Black” to move in to start an outside-invesigation of science and academia. The Stapel case and other, similar cases teach us that scientists themselves are able to clean-up their act.

Later, he writes (sic throughout):

Stapel was punished, he did his community service (as he writes in his latest book), he is not on welfare, he is trying to make money with being a writer, a cab driver, a motivational speaker, but not very successfully, and .. it is totally unclear whether he gets paid for his teaching (no research) an extra-curricular hobby course (2 hours a week, not more, not less) and if he gets paid, how much.

Moreover and more importantly, we do not know WHAT he teaches exactly, we have not seen his syllabus. How can people write things like “this will only inspire kids to not get caught”, without knowing what the guy is teaching his students? Will he reach his students how to become fraudsters? Really? When you have read the two books he wrote after his demise, you cannot be conclude that this is very unlikely? Will he teach his students about all the other fakes and frauds and terrible things that happen in science? Perhaps. Is that bad? Perhaps. I think it is better to postpone our judgment about the CONTENT of all this as long as we do not know WHAT he is actually teaching. That would be a Popper-like, open-minded, rationalistic, democratic, scientific attitude. Suppose a terrible criminal comes up with a great insight, an interesting analysis, a new perspective, an amazing discovery, suppose (think Genet, think Gramsci, think Feyerabend).

Is it smart to look away from potentially interesting information, because the messenger of that information stinks?

Perhaps, God forbid, Stapel is able to teach his students valuable lessons and insights no one else is willing to teach them for a 2-hour-a-week temporary, adjunct position that probably doesn’t pay much and perhaps doesn’t pay at all. The man is a failure, yes, but he is one of the few people out there who admitted to his fraud, who helped the investigation into his fraud (no computer crashes…., no questionnaires that suddenly disappeared, no data files that were “lost while moving office”, see Sanna, Smeesters, and …. Foerster). Nowhere it is written that failures cannot be great teachers. Perhaps he points his students to other frauds, failures, and ridiculous mistakes in psychological science we do not know of yet. That would be cool (and not unlikely).

Is it possible? Is it possible that Stapel has something interesting to say, to teach, to comment on?

To my eye, these comments read as saying that Stapel has paid his debt to society and thus ought not to be subject to heightened scrutiny. They seem to assert that Stapel is reformable. They also suggest that the problem is not so much with Stapel as with the scientific enterprise. While there may be systemic features of science as currently practiced that make cheating a greater temptation than it might be otherwise, suggesting that those features made Stapel commit fraud does not convey an understanding of Stapel’s individual responsibility to navigate those temptations. Putting those assertions and excuses in someone else’s mouth makes them look less self-serving than they actually are.

Hilariously, “Paul” also urges the Retraction Watch commenters expressing doubts about Stapel’s rehabilitation and moral character to contact Stapel using their real names, first here:

I guess that if people want to write Stapel a message, they can send him a personal email, using their real name. Not “Paul” or “JatdS” or “QAQ” or “nothingifnotcritical” or “KK” or “youknowbestofall” or “whatistheworldcoming to” or “givepeaceachance”.

then here:

if you want to talk to puppeteer, as a real person, using your real name, I recommend you write Stapel a personal email message. Not zwg or neuroskeptic or what arewehiding for.

Meanwhile, behind the scenes, the Retraction Watch editors accumulated clues that “Paul” was not an uninvolved party but rather Diederik Stapel portraying himself as an uninvolved party. After they contacted him to let him know that such behavior did not comport with their comment policy, Diederik Stapel posted under his real name:

Hello, my name is Diederik Stapel. I thought that in an internet environment where many people are writing about me (a real person) using nicknames it is okay to also write about me (a real person) using a nickname. ! have learned that apparently that was —in this particular case— a misjudgment. I think did not dare to use my real name (and I still wonder why). I feel that when it concerns person-to-person communication, the “in vivo” format is to be preferred over and above a blog where some people use their real name and some do not. In the future, I will use my real name. I have learned that and I understand that I –for one– am not somebody who can use a nickname where others can. Sincerely, Diederik Stapel.

He portrays this as a misunderstanding about how online communication works — other people are posting without using their real names, so I thought it was OK for me to do the same. However, to my eye it conveys that he also misunderstands how rebuilding trust works. Posting to support the person at the center of the discussion without first acknowledging that you are that person is deceptive. Arguing that that person ought to be granted more trust while dishonestly portraying yourself as someone other than that person is a really bad strategy. When you’re caught doing it, those arguments for more trust are undermined by the fact that they are themselves further instances of the deceptive behavior that broke trust in the first place.

I will allow as how Diederik Stapel may have some valuable lessons to teach, though. One of these is how not to make a convincing case that you’ve reformed.

Adjudicating “misbehavior”: how can scientists respond when they don’t get fair credit?

As I mentioned in an earlier post, I recently gave a talk at UC – Berkeley’s Science Leadership and Management (SLAM) seminar series. After the talk (titled “The grad student, the science fair, the reporter, and the lionfish: a case study of competition, credit, and communication of science to the public”), there was a discussion that I hope was at least as much fun for the audience as it was for me.

One of the questions that came up had to do with what recourse members of the scientific community have when other scientists are engaged in behavior that is problematic but that falls short of scientific misconduct.

If a scientist engages in fabrication, falsification, or plagiarism — and if you can prove that they have done so — you can at least plausibly get help from your institution, or the funder, or the federal government, in putting a stop to the bad behavior, repairing some of the damage, and making sure the wrongdoer is punished. But misconduct is a huge line to cross, so harmful to the collective project of scientific knowledge-building that, scientists hope, most scientists would never engage in it, no matter how dire the circumstances.

Other behavior that is ethically problematic in the conduct of science, however, is a lot more common. Disputes over appropriate credit for scientific contributions (which is something that came up in my talk) are sufficiently common that most people who have been in science for a while have first-hand stories they can tell you.

Denying someone fair credit for the contribution they made to a piece of research is not a good thing. But who can you turn to if someone does it to you? Can the Office of Research Integrity go after the coauthor who didn’t fully acknowledge your contribution to your joint paper (and in the process knocked you from second author to third), or will you have to suck it up?

At the heart of the question is the problem of working out what mechanisms are currently available to address this kind of problem.

Is it possible to stretch the official government definition of plagiarism (“the appropriation of another person’s ideas, processes, results, or words without giving appropriate credit”) to cover the situation where you’re being given credit but not enough?

When scientists work out who did enough to be an author on a scientific paper reporting a research finding — and how the magnitude of the various contributions should be reflected in the ordering of names in the author line — is there a clear, objective, correct answer? Are there widely accepted standards that scientists are using to assign appropriate credit? Or, do the standards vary locally, situationally? Is the lack of a clear set of shared standards the kind of thing that creates ambiguities that scientists are prepared to use to their own advantage when they can?

We’ve discussed before the absence of a single standard for authorship embraced uniformly by the Tribe of Science as a whole. Maybe making the case for such a shared standard would help scientists protect themselves from having their contributions minimized — and also help them not unintentionally minimize the contributions of others.

While we’re waiting for a shared standard to gain acceptance, however, there are a number of scientific journals that clearly spell out their own standards for who counts as an author and what kinds of contributions to research and the writing of the paper do or do not rise to the level of receiving authorship credit. If you have submitted your work to a journal with a clear policy of this sort, and if your coauthors have subverted the policy to misrepresent your contribution, you can bring the problem to the journal editors. Indeed, Retraction Watch is brimming with examples of papers that have been retracted on account of problems with who is, or is not, credited with the work that had been published.

While getting redress from a journal editor may be better than nothing, a retraction is the kind of thing that leaves a mark on a scientific reputation — and on the relationships scientists need to be able to coordinate their efforts in the project of scientific knowledge-building. I would argue, however, that not giving the other scientists you work with fair credit for their contributions is also harmful to those relationships, and to the reputations of the scientists who routinely minimize the contributions of others while inflating their own contributions.

So maybe one of the most important things scientists can do right now, given the rules and the enforcement mechanisms that currently exist, the variance in standards and the ambiguities which they create, is to be clear in communicating about contributions and credit from the very beginning of every collaboration. As people are making contributions to the knowledge being built, explicitly identifying those contributions strikes me as a good practice that can help keep other people’s contributions from escaping our notice. Talking about how the different pieces lead to better understanding of what’s going on may also help the collaborators figure out how to make more progress on their research questions by bringing additional contributions to bear.

Of course, it may be easier to spell out what particular contributions each person in the collaboration made than to rank them in terms of which contribution was the biggest or the most important. But maybe this is a good argument for an explicit authorship standard in which authors specify the details of what they contributed and sidestep the harder question of whether experimental design was more or less important than the analysis of the data in this particular collaboration.

There’s a funny kind of irony in feeling like you have better tools to combat bad behavior that happens less frequently than you do to combat bad behavior that happens all the time. Disputes about credit may feel minor enough to be tolerable most of the time, differences of opinion that can expose power gradients in scientific communities that like to think of themselves as egalitarian. But especially for the folks on the wrong end of the power gradients, the erosion of recognition for their hard work can hurt. It may even lessen their willingness to collaborate with other scientists, impoverishing the opportunities for cooperation that help the knowledge get built efficiently. Scientists are entitled to expect better of each other. When they do — and when they give voice to those expectations (and to their disappointment when their scientific peers don’t live up to them) — maybe disputes over fair credit will become rare enough that someday most people who have been in science for a while won’t have first-hand stories they can tell you about them.

Are scientists who don’t engage with the public obliged to engage with the press?

In posts of yore, we’ve had occasion to discuss the duties scientists may have to the non-scientists with whom they share a world. One of these is the duty to share the knowledge they’ve built with the public — especially if that knowledge is essential to the public’s ability to navigate pressing problems, or if the public has put up the funds for the research in which that knowledge was built.

Even if you’re inclined to think that what we have here is something that falls short of an obligation, there are surely cases where it would have good effects — not just for the public, but also for scientists — if the public were informed of important scientific findings. After all, if not knowing a key piece of knowledge, or not understanding its implications or how certain or uncertain it is, leads the public to make worse decisions (whether at the ballot box or in their everyday lives), the impacts of those worse decisions could also harm the scientists with whom they are sharing a world.

But here’s the thing: Scientists are generally trained to communicate their knowledge through journal articles and conference presentations, seminars and grant proposals, patent applications and technical documents. Moreover, these tend to be the kind of activities in scientific careers that are rewarded by the folks making the evaluations, distributing grant money, and cutting the paychecks. Very few scientists get explicit training in how to communicate about their scientific findings, or about the processes by which the knowledge is built, with the public. Some scientists manage to be able to do a good job of this despite a lack of training, others less so. And many scientists will note that there are hardly enough hours in the day to tackle all the tasks that are recognized and rewarded in their official scientific job descriptions without adding “communicating science to the public” to the stack.

As a result, much of the job of communicating to the public about scientific research and new scientific findings falls to the press.

This raises another question for scientists: If scientists have a duty (or at least a strong interest) to ensure that the knowledge they build is shared with the public, and if scientists themselves are not taking on the communicative task of sharing it (whether because they don’t have the time or they don’t have the skills to do it effectively), do scientists have an obligation to engage with the press to whom that communicative task has fallen?

Here, of course, we encounter some longstanding distrust between scientists and journalists. Scientists sometimes worry that the journalists taking on the task of making scientific findings intelligible to the public don’t themselves understand the scientific details (or scientific methodology more generally) much better than the public does. Or, they may worry about helping a science journalist who has already decided on the story they are going to tell and who will gleefully ignore or distort facts in the service of telling that story. Or, they may worry that the discovery-of-the-week model of science that journalists frequently embrace distorts the public’s understanding of the ongoing cooperative process by which a body of scientific knowledge is actually built.

To the extent that scientists believe journalists will manage to get things wrong, they may feel like they do less harm to the public’s understanding of science if they do not engage with journalists at all.

While I think this is an understandable impulse, I don’t think it necessarily minimizes the harm.

Indeed, I think it’s useful for scientists to ask themselves: What happens if I don’t engage and journalists try to tell the story anyway, without input from scientists who know this area of scientific work and why it matters?

Of course, I also think it would benefit scientists, journalists, and the public if scientists got more support here, from training in how to work with journalists, to institutional support in their interactions with journalists, to more general recognition that communicating about science with broader audiences is a good thing for scientists (and scientific institutions) to be doing. But in a world where “public outreach” falls much further down on the scientist’s list of pressing tasks than does bringing in grant money, training new lab staff, and writing up results for submission, science journalists are largely playing the zone where communication of science to the public happens. Scientists who are playing other zones should think about how they can support science journalists in covering their zone effectively.

Doing science is more than building knowledge: on professional development in graduate training.

Earlier this week, I was pleased to be an invited speaker at UC – Berkeley’s Science Leadership and Management (SLAM) seminar series. Here’s the official description of the program:

What is SLAM?

Grad school is a great place to gain scientific expertise – but that’s hardly the only thing you’ll need in your future as a PhD. Are you ready to lead a group? Manage your coworkers? Mentor budding scientists? To address the many interpersonal issues that arise in a scientific workplace, grad students from Chemistry, Physics, and MCB founded SLAM: Science Leadership and Management.

This is a seminar series focused on understanding the many interpersonal interactions critical for success in a scientific lab, as well as some practical aspects of lab management.  The target audience for this course is upper-level science graduate students with broad interests and backgrounds, and the skills discussed will be applicable to a variety of career paths. Postdocs are also welcome to attend.

Let me say for the record that I think programs like this are tremendously important, and far too few universities with Ph.D. programs have anything like them. (Stanford has offered something similar, although more explicitly focused on career trajectories in academia, in its Future Faculty Seminar.)

In their standard configuration, graduate programs can do quite a lot to help you learn how to build new knowledge in your discipline. Mostly, you master this ability by spending years working, under the supervision of your graduate advisor, to build new knowledge in your discipline. The details of this apprenticeship vary widely, owing largely to differences in advisors’ approaches: some are very hands-on mentors, others more hands-off, some inclined towards very specific task-lists for the scientific trainees in their labs, others towards letting trainees figure out their own plans of attack or even their own projects. The promise the Ph.D. training holds out, though, is that at the end of the apprenticeship you will have the skills and capacities to go forth and build more knowledge in your field.

The challenge is that most of this knowledge-building will take place in employment contexts that expect the knowledge-builders will have other relevant skills, as well. These may include mounting collaborations, or training others, or teaching, or writing for an audience of non-experts, not to mention working effectively with others (in the lab, on committees, in other contexts) and making good ethical decisions.

To the extent that graduate training focuses solely on learning how to be a knowledge-builder, it often falls down on the job of providing reasonable professional development. This is true even in the realm of teaching, where graduate students usually gain some experience as teaching assistants but they hardly ever get any training in pedagogy.

The graduate students who organize the SLAM program at Berkeley impress me as a smart, vibrant bunch, and they have a supportive faculty advisor. But it’s striking to me that such efforts at serious professional development for grad students are usually spearheaded by grad students, rather than by the grown-up members of their departments training them to be competent knowledge-builders.

One wonders if this is because it just doesn’t occur to the grown-up members of these disciplines that such professional development could be helpful to their trainees — or because graduate programs don’t feel like they owe their graduate students professional development of this sort.

If the latter, that says something about how graduate programs see their relationship with their students, especially in scientific fields. If all you are transmitting to students is how to build new knowledge, rather than attending to other skills they will need to successfully apply their knowledge-building chops in a career after graduate school, it makes it hard not to suspect that the relationship is really one that’s all about providing relatively cheap knowledge-building labor for grad school faculty.

Apprenticeships need not be that exploitative.

Indeed, if graduate programs want to compete for the best grad-school-bound undergraduates or prospective students who have done something else in the interval since their undergraduate education, making serious professional development part of what they offer could help them distinguish themselves from other programs. The trick here is that trainees would need to recognize, as they’re applying to graduate programs, that professional development is something they deserve. Whoever is mentoring them and providing advice on how to choose a graduate program should at least put the issue of professional development on the radar.

If you are someone who fits that description, I hope I have just put professional development on your radar.

Some thoughts about the suicide of Yoshiki Sasai.

In the previous post I suggested that it’s a mistake to try to understand scientific activity (including misconduct and culpable mistakes) by focusing on individual scientists, individual choices, and individual responsibility without also considering the larger community of scientists and the social structures it creates and maintains. That post was where I landed after thinking about what was bugging me about the news coverage and discussions of the recent suicide of Yoshiki Sasai, deputy director of the Riken Center for Developmental Biology in Kobe, Japan, and coauthor of retracted papers on STAP cells.

I went toward teasing out the larger, unproductive pattern I saw, on the theory that trying to find a more productive pattern might help scientific communities do better going forward.

But this also means I didn’t say much about my particular response to Sasai’s suicide and the circumstances around it. I’m going to try to do that here, and I’m not going to try to fit every piece of my response into a larger pattern or path forward.

The situation in a nutshell:

Yoshiki Sasai worked with Haruko Obokata at the Riken Center on “stimulus-triggered acquisition of pluripotency”, a method by which exposing normal cells to a stress (like a mild acid) supposedly gave rise to pluripotent stem cells. It’s hard to know how closely they worked together on this; in the papers published on STAP, Obokata was the lead author and Sasai was a coauthor. It’s worth noting that Obokata, an up-and-coming researcher, was some 20 years younger than Sasai. Sasai was a more senior scientist, serving in a leadership position at the Riken Center and as Obokata’s supervisor there.

The papers were published in a high impact journal (Nature) and got quite a lot of attention. But then the findings came into question. Other researchers trying to reproduce the findings that had been reported in the papers couldn’t reproduce them. One of the images in the papers seemed to be a duplicate of another, which was fishy. Nature investigated, Riken investigated, the papers were retracted, Obokata continued to defend the papers and to deny any wrongdoing.

Meanwhile, a Riken investigation committee said “Sasai bore heavy responsibility for not confirming data for the STAP study and for Obokata’s misconduct”. This apparently had a heavy impact on Sasai:

Sasai’s colleagues at Riken said he had been receiving mental counseling since the scandal surrounding papers on STAP, or stimulus-triggered acquisition of pluripotency, cells, which was lead-authored by Obokata, came to light earlier this year.

Kagaya [head of public relations at Riken] added that Sasai was hospitalized for nearly a month in March due to psychological stress related to the scandal, but that he “recovered and had not been hospitalized since.”

Finally, Sasai hanged himself in a Riken stairwell. One of the notes he left, addressed to Obokata, urged her to reproduce the STAP findings.

So, what is my response to all this?

I think it’s good when scientists take their responsibilities seriously, including the responsibility to provide good advice to junior colleagues.

I also think it’s good when scientists can recognize the limits of that responsibility. You can give very, very good advice — and explain with great clarity why it’s good advice — but the person you’re giving it to may still choose to do something else. It can’t be your responsibility to control another autonomous person’s actions.

I think trust is a crucial part of any supervisory or collaborative relationship. I think it’s good to be able to interact with coworkers with the presumption of trust.

I think it’s awful that it’s so hard to tell which people are not worthy of our trust before they’ve taken advantage of our trust to do something bad.

Finding the right balance between being hands-on and giving space is a challenge in the best of supervisory or mentoring relationships.

Bringing an important discovery with the potential to enable lots of research that could ultimately help lots of people to one’s scientific peers — and to the public — must feel amazing. Even if there weren’t a harsh judgment from the scientific community for retraction, I imagine that having to say, “We jumped the gun on the ‘discovery’ we told you about” would not feel good.

The danger of having your research center’s reputation tied to an important discovery is what happens if that discovery doesn’t hold up, whether because of misconduct or mistakes. And either way, this means that lots of hard work that is important in the building of the shared body of scientific knowledge (and lots of people doing that hard work) can become invisible.

Maybe it would be good to value that work on its own merits, independent of whether anyone else judged it important or newsworthy. Maybe we need to rethink the “big discoveries” and “important discoverers” way of thinking about what makes scientific work or a research center good.

Figuring out why something went wrong is important. When the something that went wrong includes people making choices, though, this always seems to come down to assigning blame. I feel like that’s the wrong place to stop.

I feel like investigations of results that don’t hold up, including investigations that turn up misconduct, should grapple with the question of how can we use what we found here to fix what went wrong? Instead of just asking, “Whose fault was this?” why not ask, “How can we address the harm? What can we learn that will help us avoid this problem in the future?”

I think it’s a problem when a particular work environment makes the people in it anxious all the time.

I think it’s a problem when being careful feels like an unacceptable risk because it slows you down. I think it’s a problem when being first feels more important than being sure.

I think it’s a problem when a mistake of judgment feels so big that you can’t imagine a way forward from it. So disastrous that you can’t learn something useful from it. So monumental that it makes you feel like not existing.

I feel like those of us who are still here have a responsibility to pay attention.

We have a responsibility to think about the impacts of the ways science is done, valued, celebrated, on the human beings who are doing science — and not just on the strongest of those human beings, but also on the ones who may be more vulnerable.

We have a responsibility to try to learn something from this.

I don’t think what we should learn is not to trust, but how to be better at balancing trust and accountability.

I don’t think what we should learn is not to take the responsibilities of oversight seriously, but to put them in perspective and to mobilize more people in the community to provide more support in oversight and mentoring.

Can we learn enough to shift away from the Important New Discovery model of how we value scientific contributions? Can we learn enough that cooperation overtakes competition, that building the new knowledge together and making sure it holds up is more important than slapping someone’s name on it? I don’t know.

I do know that, if the pressures of the scientific career landscape are harder to navigate for people with consciences and easier to navigate for people without consciences, it will be a problem for all of us.

When focusing on individual responsibility obscures shared responsibility.

Over many years of writing about ethics in the conduct of science, I’ve had occasion to consider many cases of scientific misconduct and misbehavior, instances of honest mistakes and culpable mistakes. Discussions of these cases in the media and among scientists often make them look aberrant, singular, unconnected — the Schön case, the Hauser case, Aetogate, the Sezen-Sames case, the Hwang Woo-Sook case, the Stapel case, the Van Parijs case.* They make the world of science look binary, a set of unproblematically ethical practitioners with a handful of evil interlopers who need only be identified and rooted out.

I don’t think this approach is helpful, either in preventing misconduct, misbehavior, and mistakes, or in mounting a sensible response to the people involved in them.

Indeed, despite the fact that scientific knowledge-building is inherently a cooperative activity, the tendency to focus on individual responsibility can manifest itself in assignment of individual blame on people who “should have known” that another individual was involved in misconduct or culpable mistakes. It seems that something like this view — whether imposed from without or from within — may have been a factor in the recent suicide of Yoshiki Sasai, deputy director of the Riken Center for Developmental Biology in Kobe, Japan, and coauthor of retracted papers on STAP cells.

While there seems to be widespread suspicion that the lead-author of the STAP cell papers, Haruko Obokata, may have engaged in research misconduct of some sort (something Obokata has denied), Sasai was not himself accused of research misconduct. However, in his role as an advisor to Obokata, Sasai was held responsible by Riken’s investigation for not confirming Obokata’s data. Sasai expressed shame over the problems in the retracted papers, and had been hospitalized prior to his suicide in connection to stress over the scandal.

Michael Eisen describes the similarities here to his own father’s suicide as a researcher at NIH caught up in the investigation of fraud committed by a member of his lab:

[A]s the senior scientists involved, both Sasai and my father bore the brunt of the institutional criticism, and both seem to have been far more disturbed by it than the people who actually committed the fraud.

It is impossible to know why they both responded to situations where they apparently did nothing wrong by killing themselves. But it is hard for me not to place at least part of the blame on the way the scientific community responds to scientific misconduct.

This response, Eisen notes, goes beyond rooting out the errors in the scientific record and extends to rooting out all the people connected to the misconduct event, on the assumption that fraud is caused by easily identifiable — and removable — individuals, something that can be cut out precisely like a tumor, leaving the rest of the scientific community free of the cancer. But Eisen doesn’t believe this model of the problem is accurate, and he notes the damage it can do to people like Sasai and like his own father:

Imagine what it must be like to have devoted your life to science, and then to discover that someone in your midst – someone you have some role in supervising – has committed the ultimate scientific sin. That in and of itself must be disturbing enough. Indeed I remember how upset my father was as he was trying to prove that fraud had taken place. But then imagine what it must feel like to all of a sudden become the focal point for scrutiny – to experience your colleagues and your field casting you aside. It must feel like your whole world is collapsing around you, and not everybody has the mental strength to deal with that.

Of course everyone will point out that Sasai was overreacting – just as they did with my father. Neither was accused of anything. But that is bullshit. We DO act like everyone involved in cases of fraud is responsible. We do this because when fraud happens, we want it to be a singularity. We are all so confident this could never happen to us, that it must be that somebody in a position of power was lax – the environment was flawed. It is there in the institutional response. And it is there in the whispers …

Given the horrible incentive structure we have in science today – Haruko Obokata knew that a splashy result would get a Nature paper and make her famous and secure her career if only she got that one result showing that you could create stem cells by dipping normal cells in acid – it is somewhat of a miracle that more people don’t make up results on a routine basis. It is important that we identify, and come down hard, on people who cheat (although I wish this would include the far greater number of people who overhype their results – something that is ultimately more damaging than the small number of people who out and out commit fraud).

But the next time something like this happens, I am begging you to please be careful about how you respond. Recognize that, while invariably fraud involves a failure not just of honesty but of oversight, most of the people involved are honest, decent scientists, and that witch hunts meant to pretend that this kind of thing could not happen to all of us are not just gross and unseemly – they can, and sadly do, often kill.

As I read him, Eisen is doing at least a few things here. He is suggesting that a desire on the part of scientists for fraud to be a singularity — something that happens “over there” at the hands of someone else who is bad — means that they will draw a circle around the fraud and hold everyone on the inside of that circle (and no one outside of it) accountable. He’s also arguing that the inside/outside boundary inappropriately lumps the falsifiers, fabricators, and plagiarists with those who have committed the lesser sin of not providing sufficient oversight. He is pointing out the irony that those who have erred by not providing sufficient oversight tend to carry more guilt than do those they were working with who have lied outright to their scientific peers. And he is suggesting that needed efforts to correct the scientific record and to protect the scientific community from dishonest researchers can have tragic results for people who are arguably less culpable.

Indeed, if we describe Sasai’s failure as a failure of oversight, it suggests that there is some clear benchmark for sufficient oversight in scientific research collaborations. But it can be very hard to recognize that what seemed like a reasonable level of oversight was insufficient until someone who you’re supervising or with whom you’re collaborating is caught in misbehavior or a mistake. (That amount of oversight might well have been sufficient if the person one was supervising chose to behave honestly, for example.) There are limits here. Unless you’re shadowing colleagues 24/7, oversight depends on some baseline level of trust, some presumption that one’s colleagues are behaving honestly rather than dishonestly.

Eisen’s framing of the problem, though, is still largely in terms of the individual responsibility of fraudsters (and over-hypers). This prompts arguments in response about individuals bearing responsibility for their actions and their effects (including the effects of public discussion of those actions) and about the individual scientists who are arguably victims of data fabrication and fraud. We are still in the realm of conceiving of fraudsters as “other” rather than recognizing that honest, decent scientists may be only a few bad decisions away from those they cast as monsters.

And we’re still describing the problem in terms of individual circumstances, individual choices, and individual failures.

I think Eisen is actually on the road to pointing out that a focus primarily on the individual level is unhelpful when he points to the problems of the scientific incentive structure. But I think it’s important to explicitly raise the alternate model, that fraud also flows from a collective failure of the scientific community and of the social structures it has built — what is valued, what is rewarded, what is tolerated, what is punished.

Arguably, one of the social structures implicated in scientific fraud is the “first across the finish line, first to publish in a high impact journal” model of scientific achievement. When being second to a discovery counts for exactly nothing (after lots of time, effort, and other resources have been invested), there is much incentive for haste and corner-cutting, and sometimes even outright fraud. This provides temptations for researchers — and dangers for those providing oversight to ambitious colleagues who may fall prey to such temptations. But while misconduct involves individuals making bad decisions, it happens in the context of a reward structure that exists because of collective choices and behaviors. If the structures that result from those collective choices and behaviors make some kinds of individual choices that are pathological to the shared project (building knowledge) rational choices for the individual to make under the circumstances (because they help the individual secure the reward), the community probably has an interest in examining the structures it has built.

Similarly, there are pathological individual choices (like ignoring or covering up someone else’s misconduct) that seem rational if the social structures built by the scientific community don’t enable a clear path forward within the community for scientists who have erred (whether culpably or honestly). Scientists are human. They get attached to their colleagues and tend to believe them to be capable of learning from their mistakes. Also, they notice that blowing the whistle on misconduct can lead to isolation of the whistleblower, not just the people committing the misconduct. Arguably, these are failures of the community and of the social structures it has built.

We might even go a step further and consider whether insisting on talking about scientific behavior (and misbehavior) solely in terms of individual actions and individual responsibility is part of the problem.

Seeing the scientific enterprise and the things that happen in connection with it in terms of heroes and villains and innocent bystanders can seem very natural. Taking this view also makes it seem that the most rational choice for scientists is to plot their individual courses within the status quo. The rules and the reward structures are taken almost as if they were carved in granite. How could one person change them? What would be the point of opting out of publishing in the high-impact journals, since doing so would surely only hurt the individual opting out while leaving the system intact? In a competition for individual prestige and credit for knowledge built, what could be the point of pausing to try to learn something from the culpable mistakes of other individuals rather than simply removing those individuals from the competition?

But individual scientists are not working in isolation against a fixed backdrop. Treating their social structures as if they were a fixed backdrop not only obscures that these structures result from collective choices but also prevents scientists from thinking together about other ways the institutional practice of science could be.

Whether some of the alternative arrangements they could create might be better than the status quo — from the point of view of coordinating scientific efforts, improving scientists’ quality of life, or improving the quality of the body of knowledge scientists are building — is surely an empirical question. But just as surely it is an empirical question worth exploring.

______
* It’s worth noticing that failures of safety are also frequently characterized as singular events, as in the Sheri Sangji/Patrick Harran case. As I’ve discussed at length on this blog, there is no reason to imagine the conditions in Harran’s lab that led to Sangji’s death were unique, and there is plenty of reason for the community of academic researchers to try to cultivate a culture of safety rather than individually hoping their own good luck will hold.

Conduct of scientists (and science writers) can shape the public’s view of science.

Scientists undertake a peculiar kind of project. In striving to build objective knowledge about the world, they tacitly recognize that our unreflective picture of the world is likely to be riddled with mistakes and distortions. Yet they frequently come to regard themselves as better thinkers — as more reliably objective — than humans who are not scientists, forgetting that they have biases and blind spots of their own that they cannot detect without help from others who don’t share them.

Building reliable knowledge about the world requires good methodology, teamwork, and concerted efforts to ensure that the “knowledge” you build doesn’t simply reify preexisting individual and cultural biases. It’s hard work, but it’s important to do it well — especially given a long history of “scientific” findings being used to justify and enforce preexisting cultural biases.

I think this bigger picture is especially appropriate to keep in mind in reading the response from Scientific American Blogs Editor Curtis Brainard to criticisms of a pair of problematic posts on the Scientific American Blog Network. Brainard writes:

The posts provoked accusations on social media that Scientific American was promoting sexism, racism and genetic determinism. While we believe that such charges are excessive, we share readers’ concerns. Although we expect our bloggers to cover controversial topics from time to time, we also recognize that sensitive issues require extra care, and that did not happen here. The author and I have discussed the shortcomings of the two posts in detail, including the lack of attention given to countervailing arguments and evidence, and he understood the deficiencies.

As stated at the top of every post, Scientific American does not always share the views and opinions expressed by our bloggers, just as our writers do not always share our editorial positions. At the same time, we realize our network’s bloggers carry the Scientific American imprimatur and that we have a responsibility to ensure that—differences of opinion notwithstanding—their work meets our standards for accuracy, integrity, transparency, sensitivity and other attributes.

(Bold emphasis added.)

The problem here isn’t that the posts in question advocated sound scientific views with implications that people on social media didn’t like. Rather, the posts presented claims in a way that made them look like they had much stronger scientific support than they really do — and did so in the face of ample published scientific counterarguments. Scientific American is not requiring that posts on its blog network meet a political litmus test, but rather that they embody the same kind of care, responsibility to the available facts, and intellectual honesty that science itself should display.

This is hard work, but it’s important. And engaging seriously with criticism, rather than just dismissing it, can help us do it better.

There’s an irony in the fact that one of the problematic posts, which ignored some significant relevant scientific literature (helpfully cited by commenters on that very post), did so in the service of defending Larry Summers and his remarks on possible innate biological factors that might make men better at math and science than women. The irony lies in the fact that Summers himself displayed an apparently ironclad commitment to ignoring any and all empirical findings that might challenge his intuition that there’s something innate at the heart of the gender imbalance in math and science faculty.

Back in January of 2005, Larry Summers gave a speech at a conference about what can be done to attract more women to the study of math and science, and to keep them in the field long enough to become full professors. In his talk, Summers offered, as a possible explanation for the relatively low number of women in math and science careers, the hypothesis that there may be innate biological factors that make males better at math and science than females. (He also related an anecdote about his daughter naming her toy trucks as if they were dolls, but it’s fair to say he probably meant this anecdote to be illustrative rather than evidentiary.)

The talk did not go over well with the rest of the participants in the conference.

Several scientific studies were presented at the conference before Summers made his speech. All these studies presented significant evidence against the claim of an innate difference between males and females that could account for the “science gap”.


In the aftermath of this conference of yore, there were some commenters who lauded Summers for voicing “unpopular truths” and others who distanced themselves from his claims but said they supported his right to make them as an exercise of “academic freedom.”

But if Summers was representing himself as a scientist* when he made his speech, I don’t think the “academic freedom” defense works.


Summers is free to state hypotheses — even unpopular hypotheses — that might account for a particular phenomenon. But, as a scientist, he is also responsible for taking account of the data relevant to those hypotheses. If the data weigh against his preferred hypothesis, intellectual honesty requires that he at least acknowledge this fact. Some would argue that it could even require that he abandon his hypothesis (since science is supposed to be evidence-based whenever possible).


When news of Summers’ speech and the reactions to it was fresh, one detail stuck with me. After the speech, one of the conference organizers pointed out to Summers that there was a large body of evidence — some of it presented at that very conference — that seemed to undermine his hypothesis. Summers gave a reply that amounted to, “Well, I don’t find those studies convincing.”

Was Summers within his rights not to believe these studies? Sure. But he had a responsibility to explain why he rejected them. As part of a scientific community, he can’t just reject a piece of scientific knowledge out of hand; doing so comes awfully close to undermining the process of communication that scientific knowledge is based upon. You aren’t supposed to reject a study because you have more prestige than its authors (and so don’t have to care what they say). You can question the experimental design, you can question the data analysis, you can challenge the conclusions drawn, but you have to be able to articulate the precise objection. Rejecting a study just because it doesn’t fit with your preferred hypothesis is not an intellectually honest move.


By my reckoning, Summers did not conduct himself as a responsible scientist in this incident. But I’d argue that the problem went beyond a lack of intellectual honesty within the universe of scientific discourse: Summers is also responsible for the bad consequences that flowed from his remarks.


The bad consequence I have in mind here is the mistaken view of science and its workings that Summers’ conduct conveyed. By falling back on a plain-vanilla “academic freedom” defense, Summers’ defenders conveyed to the public at large the idea that any hypothesis in science is as good as any other. Scientists who are conscious of the evidence-based nature of their field will see the absurdity of this idea — some hypotheses are better, others worse, and whenever possible we turn to the evidence to make these discriminations. Summers compounded his ignorance of the relevant data with what came across as a statement that he didn’t care what the data showed. From this, the public at large could conclude either that he was within his scientific rights to decide which data to care about without giving any justification for that choice**, or that data have little bearing on the scientific picture of the world.

Clearly, such a picture of science would undermine the standing of the knowledge produced by scientists far more intellectually honest than Summers.


Indeed, we might go further. Not only did Summers have responsibilities that seem to have escaped him while he was speaking as a scientist, but we could also argue that other scientists (whether at the conference or elsewhere) have a collective responsibility to address the mistaken picture of science his conduct conveyed to society at large. It’s disappointing that, nearly a decade later, we instead have to contend with scientists following in Summers’ footsteps by ignoring, rather than engaging with, the scientific findings that challenge their intuitions.

Owing to the role we play in presenting a picture of what science knows and of how scientists come to know it to a broader audience, those of us who write about science (on blogs and elsewhere) also have a responsibility to be clear about the kind of standards scientists need to live up to in order to build a body of knowledge that is as accurate and unbiased as humanly possible. If we’re not clear about these standards in our science writing, we risk misleading our audiences about the current state of our knowledge and about how science works to build reliable knowledge about our world. Our responsibility here isn’t just a matter of noticing when scientists are messing up — it’s also a matter of acknowledging and correcting our own mistakes and of working harder to notice our own biases. I’m pleased that our Blogs Editor is committed to helping us fulfill this duty.
_____
*Summers is an economist, and whether to regard economics as a scientific field is a somewhat contentious matter. I’m willing to give the scientific status of economics the benefit of the doubt, but this means I’ll also expect economists to conduct themselves like scientists, and will criticize them when they do not.

**It’s worth noting that a number of the studies that Summers seemed to be dismissing out of hand were conducted by women. One wonders what lessons the public might draw from that.

_____
A portion of this post is an updated version of an ancestor post on my other blog.

Do permanent records of scientific misconduct findings interfere with rehabilitation?

We’ve been discussing how the scientific community deals with cheaters in its midst and the question of whether scientists view rehabilitation as a live option. Connected to the question of rehabilitation is the question of whether an official finding of scientific misconduct leaves a permanent mark that makes it practically impossible for someone to function within the scientific community — not because the person who committed the misconduct is unable to straighten up and fly right, but because others in the scientific community will no longer accept that person in the scientific knowledge-building endeavor, no matter how they behave.

A version of this worry is at the center of an editorial by Richard Gallagher that appeared in The Scientist five years ago. In it, Gallagher argued that the Office of Research Integrity should not include findings of scientific misconduct in publications that are archived online, and that traces of such findings that persist after the period of debarment from federal funding has ended are unjust. Gallagher wrote:

For the sake of fairness, these sentences must be implemented precisely as intended. This means that at the end of the exclusion period, researchers should be able to participate again as full members of the scientific community. But they can’t.

Misconduct findings against a researcher appear on the Web–indeed, in multiple places on the Web. And the omnipresence of the Web search means that reprimands are being dragged up again and again and again. However minor the misdemeanor, the researcher’s reputation is permanently tarnished, and his or her career is invariably ruined, just as surely as if the punishment were a lifetime ban.

Both the NIH Guide and The Federal Register publish findings of scientific misconduct, and are archived online. As long as this continues, the problem will persist. The director of the division of investigative oversight at ORI has stated his regret at the “collateral damage” caused by the policy (see page 32). But this is not collateral damage; it is a serious miscarriage of justice against researchers and a stain on the integrity of the system, and therefore of science.

It reminds me of the system present in US prisons, in which even after “serving their time,” prisoners will still have trouble finding work because of their criminal records. But is it fair to compare felons to scientists who have, for instance, fudged their affiliations on a grant application when they were young and naïve?

It’s worth noting that the ORI website currently seems to present information only for misconduct cases in which scientists haven’t yet “served out their sentences”; it features the statement:

This page contains cases in which administrative actions were imposed due to findings of research misconduct. The list only includes those who CURRENTLY have an imposed administrative actions against them. It does NOT include the names of individuals whose administrative actions periods have expired.

In the interaction between scientists who have been found to have committed scientific misconduct and the larger scientific community, we encounter the tension between the rights of the individual scientist and the rights of the scientific community. This extends to the question of the magnitude of a particular instance of misconduct, or of whether it was premeditated or merely sloppy, or of whether the offender was young and naïve or old enough to know better. An oversight or mistake in judgment that may strike the individual scientist making it as no big deal (at least at the time) can have significant consequences for the scientific community in terms of time wasted (e.g., trying to reproduce reported results) and damaged trust.

The damaged trust is not a minor thing. Given that the scientific knowledge-building enterprise relies on conditions where scientists can trust their fellow scientists to make honest reports (whether in the literature, in grant proposals, or in less formal scientific communications), discovering a fellow scientist whose relationship with the truth is more casual is a very big deal. Flagging liars is like tagging a faulty measuring device. It doesn’t mean you throw them out, but you do need to go to some lengths to reestablish their reliability.

To the extent that an individual scientist is committed to the shared project of building a reliable body of scientific knowledge, he or she ought to understand that after a breach, one is not entitled to a full restoration of the community’s trust. Rather, that trust must be earned back. One step in earning back trust is to acknowledge the harm the community suffered (or at least risked) from the dishonesty. Admitting that you blew it, that you are sorry, and that others have a right to be upset about it, are all necessary preliminaries to making a credible claim that you won’t make the same mistake again.

On the other hand, protesting that your screw-ups really weren’t important, or that your enemies have blown them out of proportion, might be an indication that you still don’t really get why your scientific colleagues are unhappy about your behavior. In such a circumstance, although you may have regained your eligibility to receive federal grant money, you may still have some work left to do to demonstrate that you are a trustworthy member of the scientific community.

It’s true that scientific training seems to go on forever, but that shouldn’t mean that early career scientists are infantilized. They are, by and large, legal adults, and they ought to be striving to make decisions as adults — which means considering the potential effects of their actions and accepting the consequences of them. I’m disinclined, therefore, to view ORI judgments of scientific misconduct as akin to juvenile criminal records that are truly expunged to reflect the transient nature of the youthful offender’s transgressions. Scientists ought to have better judgment than fifteen-year-olds. Occasionally they don’t. If they want to stay a part of the scientific community that their bad choices may have harmed, they have to be prepared to make real restitution. This may include having to meet a higher burden of proof to make up for having misled one’s fellow scientists at some earlier point in time. It may be a pain, but it’s not impossible.

Indeed, I’m inclined to think that early career lapses in judgment ought not to be buried precisely because public knowledge of the problem gives the scientific community some responsibility for providing guidance to the promising young scientist who messed up. Acknowledging your mistakes sets up a context in which it may be easier to ask other folks for help in avoiding similar mistakes in the future. (Ideally, scientists would be able to ask each other for such advice as a matter of course, but there are plenty of instances where it feels like asking a question would be exposing a weakness — something that can feel very dangerous, especially to an early career scientist.)

Besides, there’s a practical difficulty in burying the pixel trail of a scientist’s misconduct. It’s almost always the case that other members of the scientific community are involved in alleging, detecting, investigating, or adjudicating it. They know something is up. Keeping the official findings secret leaves these concerned members of the scientific community hanging, unsure whether the ORI has done anything about the allegations (which can breed suspicion that scientists are getting away with misconduct left and right). In the absence of official information about which colleagues have been dishonest, the rumor mill can come to seem preferable to no information at all.

Given the amount of information available online, it’s unlikely that scientists who have been caught in misconduct can fly completely under the radar. But even before the internet, there was no guarantee such a secret would stay secret. Searchable online information imposes a certain level of transparency. But if this is transparency following upon actions that deceived one’s scientific community, it might be the start of effective remediation. Admitting that you have broken trust may be the first real step in earning that trust back.

_____________
This post is an updated version of an ancestor post on my other blog.