As promised, I’ve been thinking about the details of Chandok v. Klessig. To recap, we have a case where a postdoc (Meena Chandok) generated some exciting scientific findings. She and her supervisor (Daniel F. Klessig), along with some coworkers, published those findings. Then, in the fullness of time, after others working with Klessig tried to reproduce those findings on the way to extending the work, Klessig decided that the results were not sufficiently reproducible.
At that point, Klessig decided that the published papers reporting those findings needed to be retracted. Retracting a paper, as we’ve had occasion to discuss before, communicates something about the results (namely that the authors cannot stand behind them anymore). By extension, a retraction can also communicate something to the scientific community about the researcher responsible for generating those results — perhaps that she was too quick to decide a result was robust and rush it into print, or that she made an honest mistake that was not discovered until after the paper was published, or that her coauthors no longer trust that her scientific reports are reliable.
The issue is complicated, I think, by the fact that there were coauthors on the papers in question. Coauthors share the labor of doing the scientific work, and they share the credit for the findings described in their paper. You might expect, therefore, that they would share responsibility for quality control on the scientific work, and for making sure that the findings are robust before the manuscript goes off to the journal. (In my first post on this case, I noted that “before the work was submitted to Cell, Klessig had one of his doctoral students try to verify it, and this attempt was at least good enough not to put the brakes on the manuscript submission.” However, given that further efforts to reproduce the findings seem not to have succeeded, I suspect opinions will vary on whether this pre-submission replication was enough quality control on the authors’ parts.) And, you might expect that it would be the rare case where a problem with a published manuscript would come to rest on the shoulders of a single author in the group.
If credit is shared, why isn’t blame?
Whatever you think ought to be the standard assumptions when a collaborative piece of scientific work does not hold up, in this particular case the blame seemed to fall on Chandok. She took issue with the implication of the retractions (among other communications) that she was unreliable as a scientific researcher. Probably she considered the importance of trust and accountability in the scientific community, recognizing that if she were not trusted by her fellow scientists and if her work were viewed as presumptively unreliable, she would not have much of a scientific career ahead of her. So, she sought legal remedy for this harm to her scientific reputation and career prospects by pursuing a defamation claim against Klessig.
There are separable issues at play here. One is the question of what is required in the eyes of the law to prove a claim of defamation. Another is what would constitute “best practices” for scientific work, both in terms of dealing with data and conclusions, and in terms of dealing with the scientists who generate the data and conclusions (and who are the main audience for the findings reported by other scientists). Here, I think “dealing with” encompasses more than simply classifying fellow scientists by whether or not you can trust their scientific output. It includes interactions with collaborators (and competitors), not to mention interactions in scientific training relationships.
We might quibble about where a postdoc falls in the process of scientific training and development. Nevertheless, if the PI supervising a postdoc is supposed to be teaching her something (rather than just using her as another pair of hands, however well trained, in the lab), he may have specific responsibilities to mentor her and help her get established as a PI herself. Sorting out what those responsibilities are — and what other responsibilities could trump them — might be useful in preventing this kind of acrimonious outcome in other cases.
We’ll return to considering the broader lessons we might draw from this situation, but first let’s continue laying out the facts of Chandok v. Klessig, 5:05-cv-01076. (Again, I’m indebted to the reader who helpfully sent me the PDF of District Judge Joseph M. Hood’s ruling in this case, which is what I’m quoting below.)
Do these claims look defamatory to you?
You may remember my post from last week involving a case where a postdoc sued her former boss for defamation when he retracted a couple of papers they had coauthored. After that post went up, a reader helpfully hooked me up with a PDF of District Judge Joseph M. Hood’s ruling on the case (Chandok v. Klessig, 5:05-cv-01076). There is a lot of interesting stuff here, and I’m working on a longer examination of the judge’s reasoning in the ruling. But, in the interim, I thought you might be interested in the statements made by the defendant in the case, Dr. Daniel F. Klessig, that the plaintiff in the case, Dr. Meena Chandok, alleged were defamatory.
In the longer post I’m working on, I’ll dig in to Judge Hood’s arguments with respect to what elements a plaintiff must establish to prove defamation, and what particular features of the scientific arena were germane to his ruling in this case. For the time being, however, I’m interested to hear what you all think about whether the 23 allegedly defamatory claims quoted below tend “to expose the plaintiff to public hatred, contempt, ridicule, or disgrace.” (13) As well, given that one element of defamation is that the defamatory statements are factually false, I’d like to hear your thoughts on the evidentiary standard a scientist should have to meet before making claims like these to other scientists.
Here, quoted from the ruling, are the 23 allegedly defamatory statements:
Does a retraction constitute defamation of your coauthor?
I’m used to reading about cases of alleged scientific misconduct in science-focused publications and in major media outlets like the New York Times and the Boston Globe. I’ve had less occasion to read about them in law journals. But today, on the front page of the New York Law Journal, there’s an article titled “Scientist’s Defamation Claims Over Colleagues’ Efforts to Discredit Her Research Are Dismissed”. (The article is available to paid subscribers. This may be a good time to make a friend with access to a law library.)
The legal action the article describes was brought by a scientist who argued she was being defamed by a collaborator who no longer stands behind work they jointly published. The defendant says the published results are not reproducible; the plaintiff says, stop defaming me!
The judge says, your case doesn’t meet the burden to prove defamation.
From the article:
Medical ghostwriting and the role of the ‘author’ who acts as the sheet.
This week the New York Times reported on the problem of drug company-sponsored ghostwriting of articles in the scientific literature:
A growing body of evidence suggests that doctors at some of the nation’s top medical schools have been attaching their names and lending their reputations to scientific papers that were drafted by ghostwriters working for drug companies — articles that were carefully calibrated to help the manufacturers sell more products.
Experts in medical ethics condemn this practice as a breach of the public trust. Yet many universities have been slow to recognize the extent of the problem, to adopt new ethical rules or to hold faculty members to account.
The last time I blogged explicitly about the problem of medical ghostwriting, the focus of the coverage seemed to be on the ways that such “authorship” let pharmaceutical companies stack the literature in favor of the drugs they were trying to sell. Obviously, this sort of practice has the potential to deliver “knowledge” that is more useful to the health of the pharmaceutical companies than to the health of the patients whose doctors are consulting the medical literature.
This time around, it strikes me that more attention is being paid to the ways that the academic scientists involved are gaming the system — specifically, putting their names on work they can’t legitimately take credit for (at least, not as much credit as they seem to be claiming). When there’s a ghostwriter in the background (working with the company-provided checklist of things to play up and things to play down in the manuscript), the scientist who puts her name on the author line starts moving into guest author territory. As we’ve noted before, guest authorship is, at its core, a deception.
Deception, of course, is at odds with the honesty and serious efforts towards objectivity scientists are supposed to bring to their communications with other scientists.
The saga of the journal comment.
Recently, Steinn brought our attention to some of the difficulties involved in getting a scientific journal to publish a “Comment” on an article. He drew on a document (PDF) by Prof. Rick Trebino of the Georgia Institute of Technology School of Physics detailing (in 123 numbered steps) his own difficulties in advancing what is supposed to be an ongoing conversation between practicing scientists in the peer reviewed scientific literature. Indeed, I think this chronology of exasperation raises some questions about just what interests journal editors are actually working towards, and about how as a result journals may be failing to play the role that the scientific community has expected them to play.
If the journals aren’t playing this role, the scientific community may well need to find another way to get the job done.
But I’m getting ahead of myself. First, let’s look at some key stretches of Trebino’s timeline:
1. Read a paper in the most prestigious journal in your field that “proves” that your entire life’s work is wrong.
2. Realize that the paper is completely wrong, its conclusions based entirely on several misconceptions. It also claims that an approach you showed to be fundamentally impossible is preferable to one that you pioneered in its place and that actually works. And among other errors, it also includes a serious miscalculation–a number wrong by a factor of about 1000–a fact that’s obvious from a glance at the paper’s main figure.
3. Decide to write a Comment to correct these mistakes–the option conveniently provided by scientific journals precisely for such situations.
How to discourage scientific fraud.
In my last post, I mentioned Richard Gallagher’s piece in The Scientist, Fairness for Fraudsters, wherein Gallagher argues that online archived publications ought to be scrubbed of the names of scientists sanctioned by the ORI for misconduct so that they don’t keep paying after they have served their sentence. There, I sketched my reasons for disagreeing with Gallagher.
But there’s another piece of his article that I’d like to consider: the alternative strategies he suggests to discourage scientific fraud.
Gallagher writes:
Cleaning up scientific competition: an interview with Sean Cutler (part 2).
Yesterday, I posted the first part of my interview with Sean Cutler, a biology professor on a mission to get the tribe of science to understand that good scientific competition is not antithetical to cooperation. Cutler argues that the problem scientists (and journal editors, and granting agencies) need to tackle is scientists who try to get an edge in the competition by unethical means. As Cutler put it (in a post at TierneyLab):
Scientists who violate these standards [e.g., not making use of information gained when reviewing manuscripts submitted for publication] are unethical – this is the proverbial no-brainer. But as my colleague and ethicist Coleen Macnamara says, “There is more to ethics than just following the rules- it’s also about helping people when assistance comes at little cost to oneself.” The “little experiment” I did was an exercise in this form of ethical competition. Yes, I could have rushed to the finish line as secretly and quickly as possible and scooped everyone, but I like to play out scenarios and live my life as an experimentalist. By bringing others on board, I turned my competitors into collaborators. The paper is better as a result and no one got scooped. A good ethical choice led to a more competitive product.
But how easy is it to change entrenched patterns of behavior? When scientists have been trained to press every competitive advantage to stay in the scientific game, what might it take to make ethical behavior seem like an asset rather than an impediment to success?
My interview with Sean Cutler continues:
Cleaning up scientific competition: an interview with Sean Cutler (part 1).
Sean Cutler is an assistant professor of plant cell biology at the University of California, Riverside and the corresponding author of a paper in Science published online at the end of April. Beyond its scientific content, this paper is interesting because of the long list of authors, and the way they ended up as coauthors on this work. As described by John Tierney,
Impediments to dialogue about animal research (part 3).
As with yesterday’s dialogue blocker (the question of whether animal research is necessary for scientific and medical advancement), today’s impediment is another substantial disagreement about the facts. A productive dialogue requires some kind of common ground between its participants, including some shared premises about the current state of affairs. One feature of the current state of affairs is the set of laws and regulations that cover animal use — but these laws and regulations are a regular source of disagreement:
Current animal welfare regulations are not restrictive enough/are too restrictive.
Freelance chemistry for fun and (illegal) profit.
You know how graduate students are always complaining that their stipends are small compared to the cost of living? It seems that some graduate students find ways to supplement that income … ways that aren’t always legal. For example, from this article in the September 8, 2008 issue of Chemical & Engineering News [1]:
Jason D. West, a third-year chemistry graduate student at the University of California, Merced, was arraigned last month on charges of conspiring to manufacture methamphetamine, manufacturing methamphetamine, and possessing stolen property. West allegedly stole approximately $10,000 worth of equipment and chemicals from the university to make the illegal drug.
West, 36, pleaded not guilty to the charges and as of press time was in jail on $1 million bail. Police have found materials traced to West at three different meth labs and in one vehicle, says Tom MacKenzie of the Merced County Sheriff’s Department.
The police ended up arresting West following an investigation by UC-Merced campus police of the whereabouts of a vacuum pump that went missing from West’s graduate lab. Graduate students take note: your advisor will miss that expensive piece of lab equipment.