You’re not rehabilitated if you keep deceiving.

Regular readers will know that I view scientific misconduct as a serious harm to both the body of scientific knowledge and the scientific community involved in building that knowledge. I also hold out hope that at least some of the scientists who commit scientific misconduct can be rehabilitated (and I’ve noted that other members of the scientific community behave in ways that suggest that they, too, believe that rehabilitation is possible).

But I think a non-negotiable prerequisite for rehabilitation is demonstrating that you really understand how what you did was wrong. This understanding needs to be more than simply recognizing that what you did was technically against the rules. Rather, you need to grasp the harms that your actions did, the harms that may continue as a result of those actions, the harms that may not be quickly or easily repaired. You need to <a href="http://blogs.scientificamerican.com/doing-good-science/2014/06/29/do-permanent-records-of-scientific-misconduct-findings-interfere-with-rehabilitation/">acknowledge</a> those harms, not minimize them or make excuses for your actions that caused the harms.

And, you need to stop behaving in the ways that caused the harms in the first place.

Among other things, this means that if you did significant harm to your scientific community, and to the students you were supposed to be training, by making up “results” rather than actually doing experiments and making and reporting accurate results, you need to recognize that you have acted deceptively. To stop doing harm, you need to stop acting deceptively. Indeed, you may need to be significantly more transparent and forthcoming with details than others who have not transgressed as you have. Owing to your past bad acts, you may just have to meet a higher burden of proof going forward.

That you have retracted the publications in which you deceived, or lost a degree for which (it is strongly suspected) you deceived, or lost your university post, or served your hours of court-ordered community service does not reset you to the normal baseline of presumptive trust. “Paying your debt to society” does not in itself mean that anyone is obligated to believe that you are now trustworthy. If you break trust, you need to earn it back, not demand it because you did your time.

You certainly can’t earn that trust back by engaging in deception to mount an argument that people should give you a break because you’ve served out your sentence.

These thoughts on how not to approach your own rehabilitation are prompted by the appearance of disgraced social scientist Diederik Stapel (discussed here, here, here, here, here, and here) in the comments at Retraction Watch on a post about Diederik Stapel and his short-lived gig as an adjunct instructor for a college course. Now, there’s no prima facie reason Diederik Stapel might not be able to make a productive contribution to a discussion about Diederik Stapel.

However, Diederik Stapel was posting his comments not as Diederik Stapel but as “Paul”.

I hope it is obvious why posting comments that are supportive of yourself while making it appear that this support is coming from someone else is deceptive. Moreover, the comments seem to suggest that Stapel is not really fully responsible for the frauds he committed.

“Paul” writes:

Help! Let’s not change anything. Science is a flawless institution. Yes. And only the past two days I read about medical scientists who tampered with data to please the firm that sponsored their work and about the start of a new investigation into the work of a psychologist who produced data “too good to be true.” Mistakes abound. On a daily basis. Sure, there is nothing to reform here. Science works just fine. I think it is time for the “Men in Black” to move in to start an outside-invesigation of science and academia. The Stapel case and other, similar cases teach us that scientists themselves are able to clean-up their act.

Later, he writes (sic throughout):

Stapel was punished, he did his community service (as he writes in his latest book), he is not on welfare, he is trying to make money with being a writer, a cab driver, a motivational speaker, but not very successfully, and .. it is totally unclear whether he gets paid for his teaching (no research) an extra-curricular hobby course (2 hours a week, not more, not less) and if he gets paid, how much.

Moreover and more importantly, we do not know WHAT he teaches exactly, we have not seen his syllabus. How can people write things like “this will only inspire kids to not get caught”, without knowing what the guy is teaching his students? Will he reach his students how to become fraudsters? Really? When you have read the two books he wrote after his demise, you cannot be conclude that this is very unlikely? Will he teach his students about all the other fakes and frauds and terrible things that happen in science? Perhaps. Is that bad? Perhaps. I think it is better to postpone our judgment about the CONTENT of all this as long as we do not know WHAT he is actually teaching. That would be a Popper-like, open-minded, rationalistic, democratic, scientific attitude. Suppose a terrible criminal comes up with a great insight, an interesting analysis, a new perspective, an amazing discovery, suppose (think Genet, think Gramsci, think Feyerabend).

Is it smart to look away from potentially interesting information, because the messenger of that information stinks?

Perhaps, God forbid, Stapel is able to teach his students valuable lessons and insights no one else is willing to teach them for a 2-hour-a-week temporary, adjunct position that probably doesn’t pay much and perhaps doesn’t pay at all. The man is a failure, yes, but he is one of the few people out there who admitted to his fraud, who helped the investigation into his fraud (no computer crashes…., no questionnaires that suddenly disappeared, no data files that were “lost while moving office”, see Sanna, Smeesters, and …. Foerster). Nowhere it is written that failures cannot be great teachers. Perhaps he points his students to other frauds, failures, and ridiculous mistakes in psychological science we do not know of yet. That would be cool (and not unlikely).

Is it possible? Is it possible that Stapel has something interesting to say, to teach, to comment on?

To my eye, these comments read as saying that Stapel has paid his debt to society and thus ought not to be subject to heightened scrutiny. They seem to assert that Stapel is reformable. They also suggest that the problem is not so much with Stapel as with the scientific enterprise. While there may be systemic features of science as currently practiced that make cheating a greater temptation than it might be otherwise, suggesting that those features made Stapel commit fraud does not convey an understanding of Stapel’s individual responsibility to navigate those temptations. Putting those assertions and excuses in someone else’s mouth makes them look less self-serving than they actually are.

Hilariously, “Paul” also urges the Retraction Watch commenters expressing doubts about Stapel’s rehabilitation and moral character to contact Stapel using their real names, first here:

I guess that if people want to write Stapel a message, they can send him a personal email, using their real name. Not “Paul” or “JatdS” or “QAQ” or “nothingifnotcritical” or “KK” or “youknowbestofall” or “whatistheworldcoming to” or “givepeaceachance”.

then here:

if you want to talk to puppeteer, as a real person, using your real name, I recommend you write Stapel a personal email message. Not zwg or neuroskeptic or what arewehiding for.

Meanwhile, behind the scenes, the Retraction Watch editors accumulated clues that “Paul” was not an uninvolved party but rather Diederik Stapel portraying himself as an uninvolved party. After they contacted him to let him know that such behavior did not comport with their comment policy, Diederik Stapel posted under his real name:

Hello, my name is Diederik Stapel. I thought that in an internet environment where many people are writing about me (a real person) using nicknames it is okay to also write about me (a real person) using a nickname. ! have learned that apparently that was —in this particular case— a misjudgment. I think did not dare to use my real name (and I still wonder why). I feel that when it concerns person-to-person communication, the “in vivo” format is to be preferred over and above a blog where some people use their real name and some do not. In the future, I will use my real name. I have learned that and I understand that I –for one– am not somebody who can use a nickname where others can. Sincerely, Diederik Stapel.

He portrays this as a misunderstanding about how online communication works — other people are posting without using their real names, so I thought it was OK for me to do the same. However, to my eye it conveys that he also misunderstands how rebuilding trust works. Posting to support the person at the center of the discussion without first acknowledging that you are that person is deceptive. Arguing that that person ought to be granted more trust while dishonestly portraying yourself as someone other than that person is a really bad strategy. When you’re caught doing it, those arguments for more trust are undermined by the fact that they are themselves further instances of the deceptive behavior that broke trust in the first place.

I will allow as how Diederik Stapel may have some valuable lessons to teach, though. One of these is how not to make a convincing case that you’ve reformed.

Adjudicating “misbehavior”: how can scientists respond when they don’t get fair credit?

As I mentioned in an earlier post, I recently gave a talk at UC Berkeley’s Science Leadership and Management (SLAM) seminar series. After the talk (titled “The grad student, the science fair, the reporter, and the lionfish: a case study of competition, credit, and communication of science to the public”), there was a discussion that I hope was at least as much fun for the audience as it was for me.

One of the questions that came up had to do with what recourse members of the scientific community have when other scientists are engaged in behavior that is problematic but that falls short of scientific misconduct.

If a scientist engages in fabrication, falsification, or plagiarism — and if you can prove that they have done so — you can at least plausibly get help from your institution, or the funder, or the federal government, in putting a stop to the bad behavior, repairing some of the damage, and making sure the wrongdoer is punished. But misconduct is a huge line to cross, so harmful to the collective project of scientific knowledge-building that, scientists hope, most scientists would never engage in it, no matter how dire the circumstances.

Other behavior that is ethically problematic in the conduct of science, however, is a lot more common. Disputes over appropriate credit for scientific contributions (which is something that came up in my talk) are sufficiently common that most people who have been in science for a while have first-hand stories they can tell you.

Denying someone fair credit for the contribution they made to a piece of research is not a good thing. But who can you turn to if someone does it to you? Can the Office of Research Integrity go after the coauthor who didn’t fully acknowledge your contribution to your joint paper (and in the process knocked you from second author to third), or will you have to suck it up?

At the heart of the question is the problem of working out what mechanisms are currently available to address this kind of problem.

Is it possible to stretch the official government definition of plagiarism (“the appropriation of another person’s ideas, processes, results, or words without giving appropriate credit”) to cover the situation where you’re being given credit but not enough?

When scientists work out who did enough to be an author on a scientific paper reporting a research finding — and how the magnitude of the various contributions should be reflected in the ordering of names in the author line — is there a clear, objective, correct answer? Are there widely accepted standards that scientists are using to assign appropriate credit? Or, do the standards vary locally, situationally? Is the lack of a clear set of shared standards the kind of thing that creates ambiguities that scientists are prepared to use to their own advantage when they can?

We’ve discussed before the absence of a single standard for authorship embraced uniformly by the Tribe of Science as a whole. Maybe making the case for such a shared standard would help scientists protect themselves from having their contributions minimized — and also help them not unintentionally minimize the contributions of others.

While we’re waiting for a shared standard to gain acceptance, however, there are a number of scientific journals that clearly spell out their own standards for who counts as an author and what kinds of contributions to research and the writing of the paper do or do not rise to the level of receiving authorship credit. If you have submitted your work to a journal with a clear policy of this sort, and if your coauthors have subverted the policy to misrepresent your contribution, you can bring the problem to the journal editors. Indeed, Retraction Watch is brimming with examples of papers that have been retracted on account of problems with who is, or is not, credited with the work that had been published.

While getting redress from a journal editor may be better than nothing, a retraction is the kind of thing that leaves a mark on a scientific reputation — and on the relationships scientists need to be able to coordinate their efforts in the project of scientific knowledge-building. I would argue, however, that not giving the other scientists you work with fair credit for their contributions is also harmful to those relationships, and to the reputations of the scientists who routinely minimize the contributions of others while inflating their own contributions.

So maybe one of the most important things scientists can do right now, given the rules and the enforcement mechanisms that currently exist, the variance in standards and the ambiguities which they create, is to be clear in communicating about contributions and credit from the very beginning of every collaboration. As people are making contributions to the knowledge being built, explicitly identifying those contributions strikes me as a good practice that can help keep other people’s contributions from escaping our notice. Talking about how the different pieces lead to better understanding of what’s going on may also help the collaborators figure out how to make more progress on their research questions by bringing additional contributions to bear.

Of course, it may be easier to spell out what particular contributions each person in the collaboration made than to rank them in terms of which contribution was the biggest or the most important. But maybe this is a good argument for an explicit authorship standard in which authors specify the details of what they contributed and sidestep the harder question of whether experimental design was more or less important than the analysis of the data in this particular collaboration.

There’s a funny kind of irony in feeling like you have better tools to combat bad behavior that happens less frequently than you do to combat bad behavior that happens all the time. Disputes about credit may feel minor enough to be tolerable most of the time, differences of opinion that can expose power gradients in scientific communities that like to think of themselves as egalitarian. But especially for the folks on the wrong end of the power gradients, the erosion of recognition for their hard work can hurt. It may even lessen their willingness to collaborate with other scientists, impoverishing the opportunities for cooperation that help the knowledge get built efficiently. Scientists are entitled to expect better of each other. When they do — and when they give voice to those expectations (and to their disappointment when their scientific peers don’t live up to them) — maybe disputes over fair credit will become rare enough that someday most people who have been in science for a while won’t have first-hand stories they can tell you about them.

Some thoughts about the suicide of Yoshiki Sasai.

In the previous post I suggested that it’s a mistake to try to understand scientific activity (including misconduct and culpable mistakes) by focusing on individual scientists, individual choices, and individual responsibility without also considering the larger community of scientists and the social structures it creates and maintains. That post was where I landed after thinking about what was bugging me about the news coverage and discussions about the recent suicide of Yoshiki Sasai, deputy director of the Riken Center for Developmental Biology in Kobe, Japan, and coauthor of retracted papers on STAP cells.

I went toward teasing out the larger, unproductive pattern I saw, on the theory that trying to find a more productive pattern might help scientific communities do better going forward.

But this also means I didn’t say much about my particular response to Sasai’s suicide and the circumstances around it. I’m going to try to do that here, and I’m not going to try to fit every piece of my response into a larger pattern or path forward.

The situation in a nutshell:

Yoshiki Sasai worked with Haruko Obokata at the Riken Center on “stimulus-triggered acquisition of pluripotency”, a method by which exposing normal cells to a stress (like a mild acid) supposedly gave rise to pluripotent stem cells. It’s hard to know how closely they worked together on this; in the papers published on STAP, Obokata was the lead author and Sasai was a coauthor. It’s worth noting that Obokata, an up-and-coming researcher, was some 20 years younger than Sasai. Sasai was a more senior scientist, serving in a leadership position at the Riken Center and as Obokata’s supervisor there.

The papers were published in a high impact journal (Nature) and got quite a lot of attention. But then the findings came into question. Other researchers trying to reproduce the findings that had been reported in the papers couldn’t reproduce them. One of the images in the papers seemed to be a duplicate of another, which was fishy. Nature investigated, Riken investigated, the papers were retracted, Obokata continued to defend the papers and to deny any wrongdoing.

Meanwhile, a Riken investigation committee said “Sasai bore heavy responsibility for not confirming data for the STAP study and for Obokata’s misconduct”. This apparently had a heavy impact on Sasai:

Sasai’s colleagues at Riken said he had been receiving mental counseling since the scandal surrounding papers on STAP, or stimulus-triggered acquisition of pluripotency, cells, which was lead-authored by Obokata, came to light earlier this year.

Kagaya [head of public relations at Riken] added that Sasai was hospitalized for nearly a month in March due to psychological stress related to the scandal, but that he “recovered and had not been hospitalized since.”

Finally, Sasai hanged himself in a Riken stairwell. One of the notes he left, addressed to Obokata, urged her to reproduce the STAP findings.

So, what is my response to all this?

I think it’s good when scientists take their responsibilities seriously, including the responsibility to provide good advice to junior colleagues.

I also think it’s good when scientists can recognize the limits of that responsibility. You can give very, very good advice — and explain with great clarity why it’s good advice — but the person you’re giving it to may still choose to do something else. It can’t be your responsibility to control another autonomous person’s actions.

I think trust is a crucial part of any supervisory or collaborative relationship. I think it’s good to be able to interact with coworkers with the presumption of trust.

I think it’s awful that it’s so hard to tell which people are not worthy of our trust before they’ve taken advantage of our trust to do something bad.

Finding the right balance between being hands-on and giving space is a challenge in the best of supervisory or mentoring relationships.

Bringing an important discovery with the potential to enable lots of research that could ultimately help lots of people to one’s scientific peers — and to the public — must feel amazing. Even if there weren’t a harsh judgment from the scientific community for a retraction, I imagine that having to say, “We jumped the gun on the ‘discovery’ we told you about” would not feel good.

The danger of having your research center’s reputation tied to an important discovery is what happens if that discovery doesn’t hold up, whether because of misconduct or mistakes. And either way, this means that lots of hard work that is important in the building of the shared body of scientific knowledge (and lots of people doing that hard work) can become invisible.

Maybe it would be good to value that work on its own merits, independent of whether anyone else judged it important or newsworthy. Maybe we need to rethink the “big discoveries” and “important discoverers” way of thinking about what makes scientific work or a research center good.

Figuring out why something went wrong is important. When the something that went wrong includes people making choices, though, this always seems to come down to assigning blame. I feel like that’s the wrong place to stop.

I feel like investigations of results that don’t hold up, including investigations that turn up misconduct, should grapple with the question of how we can use what we found here to fix what went wrong. Instead of just asking, “Whose fault was this?” why not ask, “How can we address the harm? What can we learn that will help us avoid this problem in the future?”

I think it’s a problem when a particular work environment makes the people in it anxious all the time.

I think it’s a problem when being careful feels like an unacceptable risk because it slows you down. I think it’s a problem when being first feels more important than being sure.

I think it’s a problem when a mistake of judgment feels so big that you can’t imagine a way forward from it. So disastrous that you can’t learn something useful from it. So monumental that it makes you feel like not existing.

I feel like those of us who are still here have a responsibility to pay attention.

We have a responsibility to think about the impacts of the ways science is done, valued, celebrated, on the human beings who are doing science — and not just on the strongest of those human beings, but also on the ones who may be more vulnerable.

We have a responsibility to try to learn something from this.

I don’t think what we should learn is not to trust, but how to be better at balancing trust and accountability.

I don’t think what we should learn is not to take the responsibilities of oversight seriously, but to put them in perspective and to mobilize more people in the community to provide more support in oversight and mentoring.

Can we learn enough to shift away from the Important New Discovery model of how we value scientific contributions? Can we learn enough that cooperation overtakes competition, that building the new knowledge together and making sure it holds up is more important than slapping someone’s name on it? I don’t know.

I do know that, if the pressures of the scientific career landscape are harder to navigate for people with consciences and easier to navigate for people without consciences, it will be a problem for all of us.

When focusing on individual responsibility obscures shared responsibility.

Over many years of writing about ethics in the conduct of science, I’ve had occasion to consider many cases of scientific misconduct and misbehavior, instances of honest mistakes and culpable mistakes. Discussions of these cases in the media and among scientists often make them look aberrant, singular, unconnected — the Schön case, the Hauser case, Aetogate, the Sezen-Sames case, the Hwang Woo-suk case, the Stapel case, the Van Parijs case.* They make the world of science look binary, a set of unproblematically ethical practitioners with a handful of evil interlopers who need only be identified and rooted out.

I don’t think this approach is helpful, either in preventing misconduct, misbehavior, and mistakes, or in mounting a sensible response to the people involved in them.

Indeed, despite the fact that scientific knowledge-building is inherently a cooperative activity, the tendency to focus on individual responsibility can manifest itself in the assignment of individual blame to people who “should have known” that another individual was involved in misconduct or culpable mistakes. It seems that something like this view — whether imposed from without or from within — may have been a factor in the recent suicide of Yoshiki Sasai, deputy director of the Riken Center for Developmental Biology in Kobe, Japan, and coauthor of retracted papers on STAP cells.

While there seems to be widespread suspicion that the lead author of the STAP cell papers, Haruko Obokata, may have engaged in research misconduct of some sort (something Obokata has denied), Sasai was not himself accused of research misconduct. However, in his role as an advisor to Obokata, Sasai was held responsible by Riken’s investigation for not confirming Obokata’s data. Sasai expressed shame over the problems in the retracted papers, and had been hospitalized prior to his suicide in connection with stress over the scandal.

Michael Eisen describes the similarities here to his own father’s suicide as a researcher at NIH caught up in the investigation of fraud committed by a member of his lab:

[A]s the senior scientists involved, both Sasai and my father bore the brunt of the institutional criticism, and both seem to have been far more disturbed by it than the people who actually committed the fraud.

It is impossible to know why they both responded to situations where they apparently did nothing wrong by killing themselves. But it is hard for me not to place at least part of the blame on the way the scientific community responds to scientific misconduct.

This response, Eisen notes, goes beyond rooting out the errors in the scientific record and extends to rooting out all the people connected to the misconduct event, on the assumption that fraud is caused by easily identifiable — and removable — individuals, something that can be cut out precisely like a tumor, leaving the rest of the scientific community free of the cancer. But Eisen doesn’t believe this model of the problem is accurate, and he notes the damage it can do to people like Sasai and like his own father:

Imagine what it must be like to have devoted your life to science, and then to discover that someone in your midst – someone you have some role in supervising – has committed the ultimate scientific sin. That in and of itself must be disturbing enough. Indeed I remember how upset my father was as he was trying to prove that fraud had taken place. But then imagine what it must feel like to all of a sudden become the focal point for scrutiny – to experience your colleagues and your field casting you aside. It must feel like your whole world is collapsing around you, and not everybody has the mental strength to deal with that.

Of course everyone will point out that Sasai was overreacting – just as they did with my father. Neither was accused of anything. But that is bullshit. We DO act like everyone involved in cases of fraud is responsible. We do this because when fraud happens, we want it to be a singularity. We are all so confident this could never happen to us, that it must be that somebody in a position of power was lax – the environment was flawed. It is there in the institutional response. And it is there in the whispers …

Given the horrible incentive structure we have in science today – Haruko Obokata knew that a splashy result would get a Nature paper and make her famous and secure her career if only she got that one result showing that you could create stem cells by dipping normal cells in acid – it is somewhat of a miracle that more people don’t make up results on a routine basis. It is important that we identify, and come down hard, on people who cheat (although I wish this would include the far greater number of people who overhype their results – something that is ultimately more damaging than the small number of people who out and out commit fraud).

But the next time something like this happens, I am begging you to please be careful about how you respond. Recognize that, while invariably fraud involves a failure not just of honesty but of oversight, most of the people involved are honest, decent scientists, and that witch hunts meant to pretend that this kind of thing could not happen to all of us are not just gross and unseemly – they can, and sadly do, often kill.

As I read him, Eisen is doing at least a few things here. He is suggesting that a desire on the part of scientists for fraud to be a singularity — something that happens “over there” at the hands of someone else who is bad — means that they will draw a circle around the fraud and hold everyone on the inside of that circle (and no one outside of it) accountable. He’s also arguing that the inside/outside boundary inappropriately lumps the falsifiers, fabricators, and plagiarists with those who have committed the lesser sin of not providing sufficient oversight. He is pointing out the irony that those who have erred by not providing sufficient oversight tend to carry more guilt than do those they were working with who have lied outright to their scientific peers. And he is suggesting that needed efforts to correct the scientific record and to protect the scientific community from dishonest researchers can have tragic results for people who are arguably less culpable.

Indeed, if we describe Sasai’s failure as a failure of oversight, it suggests that there is some clear benchmark for sufficient oversight in scientific research collaborations. But it can be very hard to recognize that what seemed like a reasonable level of oversight was insufficient until someone who you’re supervising or with whom you’re collaborating is caught in misbehavior or a mistake. (That amount of oversight might well have been sufficient if the person one was supervising chose to behave honestly, for example.) There are limits here. Unless you’re shadowing colleagues 24/7, oversight depends on some baseline level of trust, some presumption that one’s colleagues are behaving honestly rather than dishonestly.

Eisen’s framing of the problem, though, is still largely in terms of the individual responsibility of fraudsters (and over-hypers). This prompts arguments in response about individuals bearing responsibility for their actions and their effects (including the effects of public discussion of those actions) and about the individual scientists who are arguably victims of data fabrication and fraud. We are still in the realm of conceiving of fraudsters as “other” rather than recognizing that honest, decent scientists may be only a few bad decisions away from those they cast as monsters.

And we’re still describing the problem in terms of individual circumstances, individual choices, and individual failures.

I think Eisen is actually on the road to pointing out that a focus primarily on the individual level is unhelpful when he points to the problems of the scientific incentive structure. But I think it’s important to explicitly raise the alternate model, that fraud also flows from a collective failure of the scientific community and of the social structures it has built — what is valued, what is rewarded, what is tolerated, what is punished.

Arguably, one of the social structures implicated in scientific fraud is the “first across the finish line, first to publish in a high-impact journal” model of scientific achievement. When being second to a discovery counts for exactly nothing (after lots of time, effort, and other resources have been invested), there is much incentive for haste and corner-cutting, and sometimes even outright fraud. This provides temptations for researchers — and dangers for those providing oversight to ambitious colleagues who may fall prey to such temptations. But while misconduct involves individuals making bad decisions, it happens in the context of a reward structure that exists because of collective choices and behaviors. If the structures that result from those collective choices and behaviors make some kinds of individual choices that are pathological to the shared project (building knowledge) rational choices for the individual to make under the circumstances (because they help the individual secure the reward), the community probably has an interest in examining the structures it has built.

Similarly, there are pathological individual choices (like ignoring or covering up someone else’s misconduct) that seem rational if the social structures built by the scientific community don’t enable a clear path forward within the community for scientists who have erred (whether culpably or honestly). Scientists are human. They get attached to their colleagues and tend to believe them to be capable of learning from their mistakes. Also, they notice that blowing the whistle on misconduct can lead to isolation of the whistleblower, not just the people committing the misconduct. Arguably, these are failures of the community and of the social structures it has built.

We might even go a step further and consider whether insisting on talking about scientific behavior (and misbehavior) solely in terms of individual actions and individual responsibility is part of the problem.

Seeing the scientific enterprise and things that happen in connection with it in terms of heroes and villains and innocent bystanders can seem very natural. Taking this view also makes it look like the most rational choice for scientists to plot their individual courses within the status quo. The rules, the reward structures, are taken almost as if they were carved in granite. How could one person change them? What would be the point of opting out of publishing in the high impact journals, since it would surely only hurt the individual opting out while leaving the system intact? In a competition for individual prestige and credit for knowledge built, what could be the point of pausing to try to learn something from the culpable mistakes committed by other individuals rather than simply removing those other individuals from the competition?

But individual scientists are not working in isolation against a fixed backdrop. Treating their social structures as if they were a fixed backdrop not only obscures that these structures result from collective choices but also prevents scientists from thinking together about other ways the institutional practice of science could be.

Whether some of the alternative arrangements they could create might be better than the status quo — from the point of view of coordinating scientific efforts, improving scientists’ quality of life, or improving the quality of the body of knowledge scientists are building — is surely an empirical question. But just as surely it is an empirical question worth exploring.

______
* It’s worth noticing that failures of safety are also frequently characterized as singular events, as in the Sheri Sangji/Patrick Harran case. As I’ve discussed at length on this blog, there is no reason to imagine the conditions in Harran’s lab that led to Sangji’s death were unique, and there is plenty of reason for the community of academic researchers to try to cultivate a culture of safety rather than individually hoping their own good luck will hold.

On the value of empathy, not othering.

Could seeing the world through the eyes of the scientist who behaves unethically be a valuable tool for those trying to behave ethically?

Last semester, I asked my “Ethics in Science” students to review an online ethics training module of the sort that many institutions use to address responsible conduct of research with their students and employees. Many of my students elected to review the Office of Research Integrity’s interactive movie The Lab, which takes you through a “choose your own adventure” scenario in an academic lab as one of four characters (a graduate student, a postdoc, the principal investigator, or the institution’s research integrity officer). The scenario centers on research misconduct by another member of the lab, and your goal is to do what you can to address the problems — and to avoid being drawn into committing misconduct yourself.

By and large, my students reported that “The Lab” was a worthwhile activity. As part of the assignment, I asked them to suggest changes, and a number of them made what I thought was a striking suggestion: players should have the option to play the character who commits the misconduct.

I can imagine some eminently sensible reasons why the team that produced “The Lab” didn’t include the cheater as a playable character. For instance, if the scenario were to start before the decision to cheat and the user playing this character picks the options that amount to not cheating, you end up with a story that lacks almost all of the drama. Similarly, if you pick up with that character in the immediate aftermath of the instance of cheating and go with the “come clean/don’t dig a deeper hole” options, the story ends pretty quickly.

Setting the need for dramatic tension aside, I suspect that another reason that “The Lab” doesn’t include the cheater as a playable character is that people who are undergoing research ethics training are supposed to think of themselves as people who would not cheat. Rather, they’re supposed to think of themselves as ethical folks who would resist temptation and stand up to cheating when others do it. These training exercises bring out some of the particular challenges that might be associated with making good ethical decisions (many of them connected to seeing a bit further down the causal chain to anticipate the likely consequences of your choices), but they tend to position the cheater as just part of the environment to which the ethical researcher must respond.

I think this is a mistake. I think there may be something valuable in being able to view those who commit misconduct as more than mere antagonists or monsters.

Part of what makes “The Lab” a useful exercise is that it presents situations with a number of choices available to us, some easier and some harder, some likely to lead to interactions that are more honest and fair and others more likely to lead to problems. In real life, though, we don’t usually have the option of rewinding time and choosing a different option if our first choice goes badly. Nor do we have assurance that we’ll end up being the good guys.

It’s important to understand the temptations that the cheaters felt — the circumstances that made their unethical behaviors seem expedient, or rational, or necessary. Casting cheaters as monsters glosses over our own human vulnerability to these bad choices, which will surely make the temptations harder to handle when we encounter them. Moreover, understanding the cheaters as humans (just like the scientists who haven’t cheated) rather than “other” in some fundamental way lets us examine those temptations and then collectively create working environments with fewer of them. Though it’s part of a different discussion, Ashe Dryden describes the dangers of “othering” here quite well:

There is no critical discussion about what leads to these incidents — what parts of our culture allow these things to go unchecked for so long, how pervasive they are, and how so much of this is rewarded directly or indirectly. …

It’s important to notice what is happening here: by declaring that the people doing these things are others, it removes the need to examine our own actions. The logic assumed is that only bad people do these things and we aren’t bad people, so we couldn’t do something like this. Othering effectively absolves ourselves of any blame.

The dramatic arc of “The Lab” is definitely not centered on the cheater’s redemption, nor on cultivating empathy for him, and in the context of the particular training it offers, that’s fine. Sometimes one’s first priority is protecting or repairing the integrity of the scientific record, or ensuring a well-functioning scientific community by isolating a member who has proven himself untrustworthy.

But, that member of the community who we’re isolating, or rehabilitating, is connected to the community — connected to us — in complicated ways. Misconduct doesn’t just happen, but neither is it the case that, when someone commits it, it’s just a matter of the choices and actions of an individual in a vacuum.

The community is participating in creating the environment in which people commit misconduct. Trying to understand the ways in which behaviors, expectations, formal and informal reward systems, and the like can encourage big ethical transgressions or desensitize people to “little” lapses may be a crucial step to creating an environment where fewer people commit misconduct, whether because the cost of doing so is too high or the payoff for doing so (if you get away with it) is too low.

But seeing members of the community as connected in this way requires not seeing the research environment as static and unchangeable — and not seeing those in the community who commit misconduct as fundamentally different creatures from those who do not.

All of this makes me think that part of the voluntary exclusion deals between people who have committed misconduct and the ORI should be an allocution, in which the wrongdoer spells out the precise circumstances of the misconduct, including the pressures in the foreground when the wrongdoer chose the unethical course. This would not be an excuse but an explanation, a post-mortem of the misconduct available to the community for inspection and instruction. Ideally, others might recognize familiar situations in the allocution and then consider how close their own behavior in such situations has come to crossing ethical lines, as well as what factors seemed to help them avoid crossing those lines. As well, researchers could think together about what gives rise to the situations and the temptations within them and explore whether common practices can be tweaked to remove some of the temptations while supporting knowledge-building and knowledge builders.

Casting cheaters as monsters doesn’t do much to help people make good choices in the face of difficult circumstances. Ignoring the ways we contribute to creating those circumstances doesn’t help, either — and may even increase the risk that we’ll become like the “monsters” we decry.

Do permanent records of scientific misconduct findings interfere with rehabilitation?

We’ve been discussing how the scientific community deals with cheaters in its midst and the question of whether scientists view rehabilitation as a live option. Connected to the question of rehabilitation is the question of whether an official finding of scientific misconduct leaves a permanent mark that makes it practically impossible for someone to function within the scientific community — not because the person who has committed the misconduct is unable to straighten up and fly right, but because others in the scientific community will no longer accept that person in the scientific knowledge-building endeavor, no matter what their behavior.

A version of this worry is at the center of an editorial by Richard Gallagher that appeared in The Scientist five years ago. In it, Gallagher argued that the Office of Research Integrity should not include findings of scientific misconduct in publications that are archived online, and that traces of such findings that persist after the period of debarment from federal funding has ended are unjust. Gallagher wrote:

For the sake of fairness, these sentences must be implemented precisely as intended. This means that at the end of the exclusion period, researchers should be able to participate again as full members of the scientific community. But they can’t.

Misconduct findings against a researcher appear on the Web–indeed, in multiple places on the Web. And the omnipresence of the Web search means that reprimands are being dragged up again and again and again. However minor the misdemeanor, the researcher’s reputation is permanently tarnished, and his or her career is invariably ruined, just as surely as if the punishment were a lifetime ban.

Both the NIH Guide and The Federal Register publish findings of scientific misconduct, and are archived online. As long as this continues, the problem will persist. The director of the division of investigative oversight at ORI has stated his regret at the “collateral damage” caused by the policy (see page 32). But this is not collateral damage; it is a serious miscarriage of justice against researchers and a stain on the integrity of the system, and therefore of science.

It reminds me of the system present in US prisons, in which even after “serving their time,” prisoners will still have trouble finding work because of their criminal records. But is it fair to compare felons to scientists who have, for instance, fudged their affiliations on a grant application when they were young and naïve?

It’s worth noting that the ORI website seems currently to present information for misconduct cases where scientists haven’t yet “served out their sentences”, featuring the statement:

This page contains cases in which administrative actions were imposed due to findings of research misconduct. The list only includes those who CURRENTLY have an imposed administrative actions against them. It does NOT include the names of individuals whose administrative actions periods have expired.

In the interaction between scientists who have been found to have committed scientific misconduct and the larger scientific community, we encounter the tension between the rights of the individual scientist and the rights of the scientific community. This extends to the question of the magnitude of a particular instance of misconduct, or of whether it was premeditated or merely sloppy, or of whether the offender was young and naïve or old enough to know better. An oversight or mistake in judgment that may strike the individual scientist making it as no big deal (at least at the time) can have significant consequences for the scientific community in terms of time wasted (e.g., trying to reproduce reported results) and damaged trust.

The damaged trust is not a minor thing. Given that the scientific knowledge-building enterprise relies on conditions where scientists can trust their fellow scientists to make honest reports (whether in the literature, in grant proposals, or in less formal scientific communications), discovering a fellow scientist whose relationship with the truth is more casual is a very big deal. Flagging liars is like tagging a faulty measuring device. It doesn’t mean you throw them out, but you do need to go to some lengths to reestablish their reliability.

To the extent that an individual scientist is committed to the shared project of building a reliable body of scientific knowledge, he or she ought to understand that after a breach, one is not entitled to a full restoration of the community’s trust. Rather, that trust must be earned back. One step in earning back trust is to acknowledge the harm the community suffered (or at least risked) from the dishonesty. Admitting that you blew it, that you are sorry, and that others have a right to be upset about it, are all necessary preliminaries to making a credible claim that you won’t make the same mistake again.

On the other hand, protesting that your screw-ups really weren’t important, or that your enemies have blown them out of proportion, might be an indication that you still don’t really get why your scientific colleagues are unhappy about your behavior. In such a circumstance, although you may have regained your eligibility to receive federal grant money, you may still have some work left to do to demonstrate that you are a trustworthy member of the scientific community.

It’s true that scientific training seems to go on forever, but that shouldn’t mean that early career scientists are infantilized. They are, by and large, legal adults, and they ought to be striving to make decisions as adults — which means considering the potential effects of their actions and accepting the consequences of them. I’m disinclined, therefore, to view ORI judgments of scientific misconduct as akin to juvenile criminal records that are truly expunged to reflect the transient nature of the youthful offender’s transgressions. Scientists ought to have better judgment than fifteen-year-olds. Occasionally they don’t. If they want to stay a part of the scientific community that their bad choices may have harmed, they have to be prepared to make real restitution. This may include having to meet a higher burden of proof to make up for having misled one’s fellow scientists at some earlier point in time. It may be a pain, but it’s not impossible.

Indeed, I’m inclined to think that early career lapses in judgment ought not to be buried precisely because public knowledge of the problem gives the scientific community some responsibility for providing guidance to the promising young scientist who messed up. Acknowledging your mistakes sets up a context in which it may be easier to ask other folks for help in avoiding similar mistakes in the future. (Ideally, scientists would be able to ask each other for such advice as a matter of course, but there are plenty of instances where it feels like asking a question would be exposing a weakness — something that can feel very dangerous, especially to an early career scientist.)

Besides, there’s a practical difficulty in burying the pixel trail of a scientist’s misconduct. It’s almost always the case that other members of the scientific community are involved in alleging, detecting, investigating, or adjudicating. They know something is up. Keeping the official findings secret leaves the other concerned members of the scientific community hanging, unsure whether the ORI has done anything about the allegations (which can breed suspicion that scientists are getting away with misconduct left and right). It can also make the rumor mill seem preferable to a total lack of information on scientific colleagues prone to dishonesty toward other scientists.

Given the amount of information available online, it’s unlikely that scientists who have been caught in misconduct can fly completely under the radar. But even before the internet, there was no guarantee such a secret would stay secret. Searchable online information imposes a certain level of transparency. But if this is transparency following upon actions that deceived one’s scientific community, it might be the start of effective remediation. Admitting that you have broken trust may be the first real step in earning that trust back.

_____________
This post is an updated version of an ancestor post on my other blog.

Faith in rehabilitation (but not in official channels): how unethical behavior in science goes unreported.

Can a scientist who has behaved unethically be rehabilitated and reintegrated as a productive member of the scientific community? Or is your first ethical blunder grounds for permanent expulsion from the community?

In practice, this isn’t just a question about the person who commits the ethical violation. It’s also a question about what other scientists in the community can stomach in dealing with the offenders — especially when the offender turns out to be a close colleague or a trainee.

In the case of a hard line — one ethical strike and you’re out — what kind of decision does this place on the scientific mentor who discovers that his or her graduate student or postdoc has crossed an ethical line? Faced with someone you judge to have talent and promise, someone you think could contribute to the scientific endeavor, someone whose behavior you are convinced was the result of a moment of bad judgment rather than evil intent or an irredeemably flawed character, what do you do?

Do you hand the matter on to university administrators or federal funders (who don’t know your trainee, might not recognize or value his or her promise, might not be able to judge just how out of character this ethical misstep really was) and let them mete out punishment? Or, do you try to address the transgression yourself, as a mentor, addressing the actual circumstances of the ethical blunder, the other options your trainee should have recognized as better ones to pursue, and the kind of harm this bad decision could bring to the trainee and to other members of the scientific community?

Clearly, there are downsides to either of these options.

One problem with handling an ethical transgression privately is that it’s hard to be sure it has really been handled in a lasting way. Given the persistent patterns of escalating misbehavior that often come to light when big frauds are exposed, it’s hard not to wonder whether scientific mentors were aware, and perhaps even intervening in ways they hoped would be effective.

It’s the building over time of ethical violations that is concerning. Is such an escalation the result of a hands-off (and eyes-off) policy from mentors and collaborators? Could intervention earlier in the game have stopped the pattern of infractions and led the researcher to cultivate more honest patterns of scientific behavior? Or is being caught by a mentor or collaborator who admonishes you privately and warns that he or she will keep an eye on you almost as good as getting away with it — an outcome with no real penalties and no paper-trail that other members of the scientific community might access?

It’s even possible that some of these interventions might happen at an institutional level — the department or the university becomes aware of ethical violations and deals with them “internally” without involving “the authorities” (who, in such cases, are usually federal funding agencies). I dare say that the feds would be pretty unhappy about being kept out of the loop if the ethical violations in question occur in research supported by federal funding. But if the presumption is that getting the feds involved raises the available penalties to the draconian, it is understandable that departments and universities might want to try to address the ethical missteps while still protecting the investment they have made in a promising young researcher.

Of course, the rest of the scientific community has relevant interests here. These include an interest in being able to trust that other scientists present honest results to the community, whether in journal articles, conference presentations, grant applications, or private communications. Arguably, they also include an interest in having other members of the community expose dishonesty when they detect it. Managing an ethical infraction privately is problematic if it leaves the scientific community with misleading literature that isn’t corrected or retracted (for example).

It’s also problematic if it leaves someone with a habit of cheating in the community, presumed by all but a few of the community’s members to have a good record of integrity.

But I’m inclined to think that the impulse to deal with science’s youthful offenders privately is a response to the fear that handing them over to federal authorities has a high likelihood of ending their scientific careers forever. There is a fear that a first offense will be punished with the career equivalent of the death penalty.

As it happens, administrative sanctions imposed by the Office of Research Integrity are hardly ever permanent debarment. Findings of scientific misconduct are much more likely to be punished with exclusion from federal funding for three years, or five years, or ten years. Still, in an extremely competitive environment, with multitudes of scientists competing for scarce grant dollars and permanent jobs, even a three-year debarment may be enough to seriously derail a scientific career. The mentor making the call about whether to report a trainee’s unethical behavior may judge the likely fallout as enough to end the trainee’s career.

Permanent expulsion or a slap on the wrist is not much of a range of penalties. And, neither of these options really addresses the question of whether rehabilitation is possible and in the best interests of both the individual and the scientific community.

If no errors in judgment are tolerated, people will do anything to conceal such errors. Mentors who are trying to be humane may become accomplices in the concealment. The conversations about how to make better judgments may not happen because people worry that their hypothetical situations will be scrutinized for clues about actual screw-ups.

None of this is to say that ethical violations should be without serious consequences — they shouldn’t. But this need not preclude the possibility that people can learn from their mistakes. Violators may have to meet a heavy burden to demonstrate that they have learned from their mistakes. Indeed, it is possible they may never fully regain the trust of their fellow researchers (who may go forward reading their papers and grant proposals with heightened skepticism in light of their past wrongdoing).

However, it seems perverse for the scientific community to adopt a stance that rehabilitation is impossible when so many of its members seem motivated to avoid official channels for dealing with misconduct precisely because they feel rehabilitation is possible. If the official penalty structure denies the possibility of rehabilitation, those scientists who believe in rehabilitation will take matters into their own hands. To the extent that this may exacerbate the problem, it might be good if paths to rehabilitation were given more prominence in official responses to misconduct.

_____________
This post is an updated version of an ancestor post on my other blog.

How plagiarism hurts knowledge-building: Obligations of scientists (part 4)

In the last post, we discussed why fabrication and falsification are harmful to scientific knowledge-building. The short version is that if you’re trying to build a body of reliable knowledge about the world, making stuff up (rather than, say, making careful observations of that world and reporting those observations accurately) tends not to get you closer to that goal.

Along with fabrication and falsification, plagiarism is widely recognized as a high crime against the project of science, but the explanations for why it’s harmful generally make it look like a different kind of crime than fabrication and falsification. For example, Donald E. Buzzelli (1999) writes:

[P]lagiarism is an instance of robbing a scientific worker of the credit for his or her work, not a matter of corrupting the record. (p. 278)

Kenneth D. Pimple (2002) writes:

One ideal of science, identified by Robert Merton as “disinterestedness,” holds that what matters is the finding, not who makes the finding. Under this norm, scientists do not judge each other’s work by reference to the race, religion, gender, prestige, or any other incidental characteristic of the researcher; the work is judged by the work, not the worker. No harm would be done to the Theory of Relativity if we discovered Einstein had plagiarized it…

[P]lagiarism … is an offense against the community of scientists, rather than against science itself. Who makes a particular finding will not matter to science in one hundred years, but today it matters deeply to the community of scientists. Plagiarism is a way of stealing credit, of gaining credit where credit is not due, and credit, typically in the form of authorship, is the coin of the realm in science. An offense against scientists qua scientists is an offense against science, and in its way plagiarism is as deep an offense against scientists as falsification and fabrication are offenses against science. (p. 196)

Pimple is claiming that plagiarism is not an offense that undermines the knowledge-building project of science per se. Rather, the crime is in depriving other scientists of the reward they are due for participating in this knowledge-building project. In other words, Pimple says that plagiarism is problematic not because it is dishonest, but rather because it is unfair.

While I think Pimple is right to identify an additional component of responsible conduct of science besides honesty, namely, a certain kind of fairness to one’s fellow scientists, I also think this analysis of plagiarism misses an important way in which misrepresenting the source of words, ideas, methods, or results can undermine the knowledge-building project of science.

On the surface, plagiarism, while potentially nasty to the person whose report is being stolen, might seem not to undermine the scientific community’s evaluation of the phenomena. We are still, after all, bringing together and comparing a number of different observation reports to determine the stable features of our experience of the phenomenon. But this comparison often involves a dialogue as well. As part of the knowledge-building project, from the earliest planning of their experiments to well after results are published, scientists are engaged in asking and answering questions about the details of the experience and of the conditions under which the phenomenon was observed.

Misrepresenting someone else’s honest observation report as one’s own strips the report of accurate information for such a dialogue. It’s hard to answer questions about the little, seemingly insignificant details of an experiment you didn’t actually do, or to refine a description of an experience someone else had. Moreover, such a misrepresentation further undermines the process of building more objective knowledge by failing to contribute the actual insight of the scientist who appears to be contributing his own view but is actually contributing someone else’s. And while it may appear that a significant number of scientists are marshaling their resources to understand a particular phenomenon, if some of those scientists are plagiarists, there are fewer scientists actually grappling with the problem than it would appear.

In such circumstances, we know less than we think we do.

Given the intersubjective route to objective knowledge, failing to really weigh in on the dialogue may end up leaving certain of the subjective biases of others in place in the collective “knowledge” that results.

Objective knowledge is produced when the scientific community’s members work with each other to screen out subjective biases. This means the sort of honesty required for good science goes beyond the accurate reporting of what has been observed and under what conditions. Because each individual report is shaped by the individual’s perspective, objective scientific knowledge also depends on honesty about the individual agency actually involved in making the observations. Thus, plagiarism, which often strikes scientists as less of a threat to scientific knowledge (and more of an instance of “being a jerk”), may pose just as much of a threat to the project of producing objective scientific knowledge as outright fabrication.

What I’m arguing here is that plagiarism is a species of dishonesty that can undermine the knowledge-building project of science in a direct way. Even if what has been lifted by the plagiarist is “accurate” from the point of view of the person who actually collected or analyzed the data or drew conclusions from it, separating this contribution from its true author means it doesn’t function the same way in the ongoing scientific dialogue.

In the next post, we’ll continue our discussion of the duties of scientists by looking at what the positive duties of scientists might be, and by examining the sources of these duties.
_____


Buzzelli, D. E. (1999). Serious deviation from accepted practices. Science and Engineering Ethics, 5(2), 275-282.

Pimple, K. D. (2002). Six domains of research ethics. Science and Engineering Ethics, 8(2), 191-205.
______
Posts in this series:

Questions for the non-scientists in the audience.

Questions for the scientists in the audience.

What do we owe you, and who’s “we” anyway? Obligations of scientists (part 1)

Scientists’ powers and ways they shouldn’t use them: Obligations of scientists (part 2)

Don’t be evil: Obligations of scientists (part 3)

How plagiarism hurts knowledge-building: Obligations of scientists (part 4)

What scientists ought to do for non-scientists, and why: Obligations of scientists (part 5)

What do I owe society for my scientific training? Obligations of scientists (part 6)

Are you saying I can’t go home until we cure cancer? Obligations of scientists (part 7)

Don’t be evil: Obligations of scientists (part 3)

In the last installment of our ongoing discussion of the obligations of scientists, I said the next post in the series would take up scientists’ positive duties (i.e., duties to actually do particular kinds of things). I’ve decided to amend that plan to say just a bit more about scientists’ negative duties (i.e., duties to refrain from doing particular kinds of things).

Here, I want to examine a certain minimalist view of scientists’ duties (or of scientists’ negative duties) that is roughly analogous to the old Google motto, “Don’t be evil.” For scientists, the motto would be “Don’t commit scientific misconduct.” The premise is that if X isn’t scientific misconduct, then X is acceptable conduct — at least, acceptable conduct within the context of doing science.

The next question, if you’re trying to avoid committing scientific misconduct, is how scientific misconduct is defined. For scientists in the U.S., a good place to look is to the federal agencies that provide funding for scientific research and training.

Here’s the Office of Research Integrity’s definition of misconduct:

Research misconduct means fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results. …

Research misconduct does not include honest error or differences of opinion.

Here’s the National Science Foundation’s definition of misconduct:

Research misconduct means fabrication, falsification, or plagiarism in proposing or performing research funded by NSF, reviewing research proposals submitted to NSF, or in reporting research results funded by NSF. …

Research misconduct does not include honest error or differences of opinion.

These definitions are quite similar, although NSF restricts its definition to actions that are part of a scientist’s interaction with NSF — giving the impression that the same actions committed in a scientist’s interaction with NIH would not be scientific misconduct. I’m fairly certain that NSF officials view all scientific plagiarism as bad. However, when the plagiarism is committed in connection with NIH funding, NSF leaves it to the ORI to pursue sanctions. This is a matter of jurisdiction for enforcement.

It’s worth thinking about why federal funders define (and forbid) scientific misconduct in the first place rather than leaving it to scientists as a professional community to police. One stated goal is to ensure that the money they are distributing to support scientific research and training is not being misused — and to have a mechanism with which they can cut off scientists who have proven themselves to be bad actors from further funding. Another stated goal is to protect the quality of the scientific record — that is, to ensure that the published results of the funded research reflect honest reporting of good scientific work rather than lies.

The upshot here is that public money for science comes with strings attached, and that one of those strings is that the money be used to conduct actual science.

Ensuring the proper use of the funding and protecting the integrity of the scientific record needn’t be the only goals of federal funding agencies in the U.S. in their interactions with scientists or in the way they frame their definitions of scientific misconduct, but at present these are the goals in the foreground in discussions of why federally funded scientists should avoid scientific misconduct.

Let’s consider the three high crimes identified in these definitions of scientific misconduct.

Fabrication is making up data or results rather than actually collecting them from observation or experimentation. Obviously, fabrication undermines the project of building a reliable body of knowledge about the world – faked data can’t be counted on to give us an accurate picture of what the world is really like.

A close cousin of fabrication is falsification. Here, rather than making up data out of whole cloth, falsification involves “adjusting” real data – changing the values, adding some data points, omitting other data points. As with fabrication, falsification is lying about your empirical data, representing the falsified data as an honest report of what you observed when it isn’t.

The third high crime is plagiarism, misrepresenting the words or ideas (or, for that matter, data or computer code, for example) of others as your own. Like fabrication and falsification, plagiarism is a variety of dishonesty.

Observation and experimentation are central in establishing the relevant facts about the phenomena scientists are trying to understand. Establishing such relevant facts requires truthfulness about what is observed or measured and under what conditions. Deception, therefore, undermines this aim of science. So at a minimum, scientists must embrace the norm of truthfulness or abandon the goal of building accurate pictures of reality. This doesn’t mean that honest scientists never make mistakes in setting up their experiments, making their measurements, performing data analysis, or reporting what they found to other scientists. However, when honest scientists discover these mistakes, they do what they can to correct them, so that they don’t mislead their fellow scientists even accidentally.

The importance of reliable empirical data, whether as the source of or a test of one’s theory, is why fabrication and falsification of data are rightly regarded as cardinal sins against science. Made-up data are no kind of reliable indicator of what the world is like or whether a particular theory is a good one. Similarly, “cooking” data sets to better support particular hypotheses amounts to ignoring the reality of what has actually been measured. The scientific rules of engagement with phenomena hold the scientist to account for what has actually been observed. While the scientist is always permitted to get additional data about the object of study, one cannot willfully ignore facts one finds puzzling or inconvenient. Even if these facts are not explained, they must be acknowledged.

Those who commit falsification and fabrication undermine the goal of science by knowingly introducing unreliable data into, or holding back relevant data from, the formulation and testing of theories. They sin by not holding themselves accountable to reality as observed in scientific experiments. When they falsify or fabricate in reports of research, they undermine the integrity of the scientific record. When they do it in grant proposals, they are attempting to secure funding under false pretenses.

Plagiarism, the third of the cardinal sins against responsible science, is dishonesty of another sort, namely, dishonesty about the source of words, ideas, methods, or results. A number of people who think hard about research ethics and scientific misconduct view plagiarism as importantly different in its effects from fabrication and falsification. For example, Donald E. Buzzelli (1999) writes:

[P]lagiarism is an instance of robbing a scientific worker of the credit for his or her work, not a matter of corrupting the record. (p. 278)

Kenneth D. Pimple (2002) writes:

One ideal of science, identified by Robert Merton as “disinterestedness,” holds that what matters is the finding, not who makes the finding. Under this norm, scientists do not judge each other’s work by reference to the race, religion, gender, prestige, or any other incidental characteristic of the researcher; the work is judged by the work, not the worker. No harm would be done to the Theory of Relativity if we discovered Einstein had plagiarized it…

[P]lagiarism … is an offense against the community of scientists, rather than against science itself. Who makes a particular finding will not matter to science in one hundred years, but today it matters deeply to the community of scientists. Plagiarism is a way of stealing credit, of gaining credit where credit is not due, and credit, typically in the form of authorship, is the coin of the realm in science. An offense against scientists qua scientists is an offense against science, and in its way plagiarism is as deep an offense against scientists as falsification and fabrication are offenses against science. (p. 196)

In fact, I think we can make a good argument that plagiarism does threaten the integrity of the scientific record (although I’ll save that argument for a separate post). However, I agree with both Buzzelli and Pimple that plagiarism is also a problem because it embodies a particular kind of unfairness within scientific practice. That federal funders include plagiarism by name in their definitions of scientific misconduct suggests that their goals extend further than merely protecting the integrity of the scientific record.

Fabrication, falsification, and plagiarism are clearly instances of scientific misconduct, but the United States Public Health Service (whose umbrella includes NIH) and NSF used to define scientific misconduct more broadly, as fabrication, falsification, plagiarism, and other serious deviations from accepted research practices. The “other serious deviations” clause was controversial, with a panel of the National Academy of Sciences (among others) arguing that this language was ambiguous enough that it shouldn’t be part of an official misconduct definition. Maybe, the panel worried, “serious deviations from accepted research practices” might be interpreted to include cutting-edge methodological innovations, meaning that scientific innovation itself would count as misconduct.

In his 1993 article, “The Definition of Misconduct in Science: A View from NSF,” Buzzelli claimed that there was no evidence that the broader definitions of misconduct had been used to lodge this kind of misconduct complaint. Since then, however, there have been instances in which those applying a definition containing an “other serious deviations” clause could be argued to have taken advantage of its ambiguity to go after a scientist for political reasons.

If the “other serious deviations” clause isn’t meant to keep scientists from innovating, what kinds of misconduct is it supposed to cover? These include things like sabotaging other scientists’ experiments or equipment, falsifying colleagues’ data, violating agreements about sharing important research materials like cultures and reagents, making misrepresentations in grant proposals, and violating the confidentiality of the peer review process. None of these activities is necessarily covered by fabrication, falsification, or plagiarism, but each of these activities can be seriously harmful to scientific knowledge-building.

Buzzelli (1993) discusses a particular deviation from accepted research practices that the NSF judged as misconduct, one where a principal investigator directing an undergraduate primatology research experience funded by an NSF grant sexually harassed student researchers and graduate assistants. Buzzelli writes:

In carrying out this project, the senior researcher was accused of a range of coercive sexual offenses against various female undergraduate students and research assistants, up to and including rape. … He rationed out access to the research data and the computer on which they were stored and analyzed, as well as his own assistance, so they were only available to students who accepted his advances. He was also accused of threatening to blackball some of the graduate students in the professional community and to damage their careers if they reported his activities. (p. 585)

Even opponents of the “other serious deviations” clause would be unlikely to argue that this PI was not behaving very badly. However, they did argue that this PI’s misconduct was not scientific misconduct — that it should be handled by criminal or civil authorities rather than funding agencies, and that it was not conduct that did harm to science per se.

Buzzelli (who, I should mention, was writing as a senior scientist in the Office of the Inspector General in the National Science Foundation) disagreed with this assessment. He argued that NSF had to get involved in this sexual harassment case in order to protect the integrity of its research funds. The PI in question, operating with NSF funds designated to provide an undergraduate training experience, used his power as a research director and mentor to make sexual demands of his undergraduate trainees. The only way for the undergraduate trainees to receive the training, mentoring, and even access to their own data that they were meant to receive in this research experience at a remote field site was for them to submit to the PI’s demands. In other words, while the PI’s behavior may not have directly compromised the shared body of scientific knowledge, it undermined the other central job of the tribe of science: the training of new scientists. Buzzelli writes:

These demands and assaults, plus the professional blackmail mentioned earlier, were an integral part of the subject’s performance as a research mentor and director and ethically compromised that performance. Hence, they seriously deviated from the practices accepted in the scientific community. (p. 647)

Buzzelli makes the case for an understanding of scientific misconduct as practices that do harm to science. Thus, practices that damage the integrity of training and supervision of associates and students – an important element of the research process – would count as misconduct. Indeed, in his 1999 article, he notes that the first official NIH definition of scientific misconduct (in 1986) used the phrase “serious deviations, such as fabrication, falsification, or plagiarism, from accepted practices in carrying out research or in reporting the results of research.” (p. 276) This language shifted in subsequent statements of the definition of scientific misconduct, for example “fabrication, falsification, plagiarism, and other serious deviations from accepted practices” in the NSF definition that was in place in 1999.

Reordering the words this way might not seem like a big shift, but as Buzzelli points out, it conveys the impression that “other serious deviations” is a fourth item in the list after the clearly enumerated fabrication, falsification, and plagiarism, an ill-defined catch-all meant to cover cases too fuzzy to enumerate in advance. The original NIH wording, in contrast, suggests that the essence of scientific misconduct is that it is an ethical deviation from accepted scientific practice. In this framing of the definition, fabrication, falsification, and plagiarism are offered as three examples of the kind of deviation that counts as scientific misconduct, but there is no claim that these three examples are the only deviations that count as scientific misconduct.

To those still worried by the imprecision of this definition, Buzzelli offers the following:

[T]he ethical import of “serious deviations from accepted practices” has escaped some critics, who have taken it to refer instead to such things as doing creative and novel research, exhibiting personality quirks, or deviating from some artificial ideal of scientific method. They consider the language of the present definition to be excessively broad because it would supposedly allow misconduct findings to be made against scientists for these inappropriate reasons.

However, the real import of “accepted practices” is that it makes the ethical standards held by the scientific community itself the regulatory standard that a federal agency will use in considering a case of misconduct against a scientist. (p. 277)

In other words, Buzzelli is arguing that a definition of scientific misconduct that is centered on practices that the scientific community finds harmful to knowledge-building is better for ensuring the proper use of research funding and protecting the integrity of the scientific record than a definition that restricts scientific misconduct to fabrication, falsification, and plagiarism. Refraining from fabrication, falsification, and plagiarism, then, would not suffice to fulfill the negative duties of a scientist.

We’ll continue our discussion of the duties of scientists with a sidebar discussion on what kind of harm I claim plagiarism does to scientific knowledge-building. From there, we will press on to discuss what the positive duties of scientists might be, as well as the sources of these duties.

_____
Buzzelli, D. E. (1993). The definition of misconduct in science: a view from NSF. Science, 259(5095), 584-648.

Buzzelli, D. E. (1999). Serious deviation from accepted practices. Science and Engineering Ethics, 5(2), 275-282.

Pimple, K. D. (2002). Six domains of research ethics. Science and Engineering Ethics, 8(2), 191-205.
______
Posts in this series:

Questions for the non-scientists in the audience.

Questions for the scientists in the audience.

What do we owe you, and who’s “we” anyway? Obligations of scientists (part 1)

Scientists’ powers and ways they shouldn’t use them: Obligations of scientists (part 2)

Don’t be evil: Obligations of scientists (part 3)

How plagiarism hurts knowledge-building: Obligations of scientists (part 4)

What scientists ought to do for non-scientists, and why: Obligations of scientists (part 5)

What do I owe society for my scientific training? Obligations of scientists (part 6)

Are you saying I can’t go home until we cure cancer? Obligations of scientists (part 7)

Scary subject matter.

This being Hallowe’en, I felt like I should serve you something scary.

But what?

Verily, we’ve talked about some scary things here:

More scary subjects have come up on my other blog, including:

Making this list, I’m very glad it’s still light out! Otherwise I might be quaking uncontrollably.

Truth be told, as someone who works with ethics for a living, I’m less afraid of monsters than I am of ordinary humans who lose sight of their duties to their fellow humans.

And frankly, when it comes to things that go bump in the night, I’m less terrified than curious …

especially since the things that go “bump” in my kitchen usually involve the intriguing trio of temperature, pressure, and phase changes — which is to say, it’s nothing a little science couldn’t demystify.

Have a happy, safe, and ethical Hallowe’en!