As I mentioned in an earlier post, I recently gave a talk at UC – Berkeley’s Science Leadership and Management (SLAM) seminar series. After the talk (titled “The grad student, the science fair, the reporter, and the lionfish: a case study of competition, credit, and communication of science to the public”), there was a discussion that I hope was at least as much fun for the audience as it was for me.
One of the questions that came up had to do with what recourse members of the scientific community have when other scientists are engaged in behavior that is problematic but that falls short of scientific misconduct.
If a scientist engages in fabrication, falsification, or plagiarism — and if you can prove that they have done so — you can at least plausibly get help from your institution, or the funder, or the federal government, in putting a stop to the bad behavior, repairing some of the damage, and making sure the wrongdoer is punished. But misconduct is a huge line to cross, so harmful to the collective project of scientific knowledge-building that, scientists hope, most scientists would never engage in it, no matter how dire the circumstances.
Other behavior that is ethically problematic in the conduct of science, however, is a lot more common. Disputes over appropriate credit for scientific contributions (which is something that came up in my talk) are sufficiently common that most people who have been in science for a while have first-hand stories they can tell you.
Denying someone fair credit for the contribution they made to a piece of research is not a good thing. But who can you turn to if someone does it to you? Can the Office of Research Integrity go after the coauthor who didn’t fully acknowledge your contribution to your joint paper (and in the process knocked you from second author to third), or will you have to suck it up?
At the heart of the question is the problem of working out what mechanisms are currently available to address this kind of problem.
Is it possible to stretch the official government definition of plagiarism — “the appropriation of another person’s ideas, processes, results, or words without giving appropriate credit” — to cover the situation where you’re being given credit but not enough?
When scientists work out who did enough to be an author on a scientific paper reporting a research finding — and how the magnitude of the various contributions should be reflected in the ordering of names in the author line — is there a clear, objective, correct answer? Are there widely accepted standards that scientists are using to assign appropriate credit? Or, do the standards vary locally, situationally? Is the lack of a clear set of shared standards the kind of thing that creates ambiguities that scientists are prepared to use to their own advantage when they can?
We’ve discussed before the absence of a single standard for authorship embraced uniformly by the Tribe of Science as a whole. Maybe making the case for such a shared standard would help scientists protect themselves from having their contributions minimized — and also help them not unintentionally minimize the contributions of others.
While we’re waiting for a shared standard to gain acceptance, however, there are a number of scientific journals that clearly spell out their own standards for who counts as an author and what kinds of contributions to research and the writing of the paper do or do not rise to the level of receiving authorship credit. If you have submitted your work to a journal with a clear policy of this sort, and if your coauthors have subverted the policy to misrepresent your contribution, you can bring the problem to the journal editors. Indeed, Retraction Watch is brimming with examples of papers that have been retracted on account of problems with who is, or is not, credited with the work that had been published.
While getting redress from a journal editor may be better than nothing, a retraction is the kind of thing that leaves a mark on a scientific reputation — and on the relationships scientists need to be able to coordinate their efforts in the project of scientific knowledge-building. I would argue, however, that not giving the other scientists you work with fair credit for their contributions is also harmful to those relationships, and to the reputations of the scientists who routinely minimize the contributions of others while inflating their own.
So maybe one of the most important things scientists can do right now, given the rules and the enforcement mechanisms that currently exist, the variance in standards and the ambiguities which they create, is to be clear in communicating about contributions and credit from the very beginning of every collaboration. As people are making contributions to the knowledge being built, explicitly identifying those contributions strikes me as a good practice that can help keep other people’s contributions from escaping our notice. Talking about how the different pieces lead to better understanding of what’s going on may also help the collaborators figure out how to make more progress on their research questions by bringing additional contributions to bear.
Of course, it may be easier to spell out what particular contributions each person in the collaboration made than to rank them in terms of which contribution was the biggest or the most important. But maybe this is a good argument for an explicit authorship standard in which authors specify the details of what they contributed and sidestep the harder question of whether the experimental design was more or less important than the analysis of the data in this particular collaboration.
There’s a funny kind of irony in feeling like you have better tools to combat bad behavior that happens less frequently than you do to combat bad behavior that happens all the time. Disputes about credit may feel minor enough to be tolerable most of the time, differences of opinion that can expose power gradients in scientific communities that like to think of themselves as egalitarian. But especially for the folks on the wrong end of the power gradients, the erosion of recognition for their hard work can hurt. It may even lessen their willingness to collaborate with other scientists, impoverishing the opportunities for cooperation that help the knowledge get built efficiently. Scientists are entitled to expect better of each other. When they do — and when they give voice to those expectations (and to their disappointment when their scientific peers don’t live up to them) — maybe disputes over fair credit will become rare enough that someday most people who have been in science for a while won’t have first-hand stories they can tell you about them.