If you enter a dialogue, do you risk being co-opted?

On my earlier post, “Dialogue, not debate”, commenter dave c-h posed some interesting questions:

Is there an ethical point at which engagement is functionally equivalent to assent? In other words, is there a point at which dialogue should be replaced by active resistance? If so, how do you tell where that point is? I think many activists fear that dialogue is a tactic of those who support the status quo to co-opt them into a process that is unlikely to lead to any real change because the power is unevenly divided.

Continue reading

Dialogue, not debate.

At the end of last week, I made a quick trip to UCLA to visit with some researchers who, despite having been targets of violence and intimidation, are looking for ways to engage with the public about research with animals. I was really struck by their seriousness about engaging folks on “the other side”, rather than just hunkering down to their research and hoping to be left alone.
The big thing we talked about was the need to shift the terms of engagement.

Continue reading

Psychohazard.

The other day, while surfing the web, my better half came upon this semi-official-looking symbol for psychohazards:

[Image: psychohazard warning symbol]

The verbiage underneath the symbol seems to indicate conditions that might have serious consequences for one’s picture of the world and its contents, or for one’s ability to come to knowledge about the world. A philosopher who was so inclined could go to town on this.
However, while this particular icon was new to me, this isn’t the first time I’ve seen the term “psychohazard” in use.

Continue reading

Legal and scientific burdens of proof, and scientific discourse as public controversy: more thoughts on Chandok v. Klessig.

As promised, I’ve been thinking about the details of Chandok v. Klessig. To recap, we have a case where a postdoc (Meena Chandok) generated some exciting scientific findings. She and her supervisor (Daniel F. Klessig), along with some coworkers, published those findings. Then, in the fullness of time, after others working with Klessig tried to reproduce those findings on the way to extending the work, Klessig decided that the results were not sufficiently reproducible.
At that point, Klessig decided that the published papers reporting those findings needed to be retracted. Retracting a paper, as we’ve had occasion to discuss before, communicates something about the results (namely that the authors cannot stand behind them anymore). By extension, a retraction can also communicate something to the scientific community about the researcher responsible for generating those results — perhaps that she was too quick to decide a result was robust and rush it into print, or that she made an honest mistake that was not discovered until after the paper was published, or that her coauthors no longer trust that her scientific reports are reliable.
The issue is complicated, I think, by the fact that there were coauthors on the papers in question. Coauthors share the labor of doing the scientific work, and they share the credit for the findings described in their paper. You might expect, therefore, that they would share responsibility for quality control on the scientific work, and for making sure that the findings are robust before the manuscript goes off to the journal. (In my first post on this case, I noted that “before the work was submitted to Cell, Klessig had one of his doctoral students try to verify it, and this attempt was at least good enough not to put the brakes on the manuscript submission.” However, given that further efforts to reproduce the findings seem not to have succeeded, I suspect opinions will vary on whether this pre-submission replication was enough quality control on the authors’ parts.) And, you might expect that it would be the rare case where a problem with a published manuscript would come to rest on the shoulders of a single author in the group.
If credit is shared, why isn’t blame?
Whatever you think ought to be the standard assumptions when a collaborative piece of scientific work does not hold up, in this particular case the blame seemed to fall on Chandok. She took issue with the implication of the retractions (among other communications) that she was unreliable as a scientific researcher. Probably she considered the importance of trust and accountability in the scientific community, recognizing that if she were not trusted by her fellow scientists and if her work were viewed as presumptively unreliable, she would not have much of a scientific career ahead of her. So, she sought legal remedy for this harm to her scientific reputation and career prospects by pursuing a defamation claim against Klessig.
There are separable issues at play here. One is the question of what is required in the eyes of the law to prove a claim of defamation. Another is what would constitute “best practices” for scientific work, both in terms of dealing with data and conclusions, and in terms of dealing with the scientists who generate the data and conclusions (and who are the main audience for the findings reported by other scientists). Here, I think “dealing with” encompasses more than simply classifying fellow scientists by whether or not you can trust their scientific output. It includes interactions with collaborators (and competitors), not to mention interactions in scientific training relationships.
We might quibble about where a postdoc falls in the process of scientific training and development. Nevertheless, if the PI supervising a postdoc is supposed to be teaching her something (rather than just using her as another pair of hands, however well trained, in the lab), he may have specific responsibilities to mentor her and help her get established as a PI herself. Sorting out what those responsibilities are — and what other responsibilities could trump them — might be useful in preventing this kind of acrimonious outcome in other cases.
We’ll return to considering the broader lessons we might draw from this situation, but first let’s continue laying out the facts of Chandok v. Klessig, 5:05-cv-01076. (Again, I’m indebted to the reader who helpfully sent me the PDF of District Judge Joseph M. Hood’s ruling in this case, which is what I’m quoting below.)

Continue reading

Do these claims look defamatory to you?

You may remember my post from last week involving a case where a postdoc sued her former boss for defamation when he retracted a couple of papers they coauthored. After that post went up, a reader helpfully hooked me up with a PDF of District Judge Joseph M. Hood’s ruling on the case (Chandok v. Klessig, 5:05-cv-01076). There is a lot of interesting stuff here, and I’m working on a longer examination of the judge’s reasoning in the ruling. But, in the interim, I thought you might be interested in the statements made by the defendant in the case, Dr. Daniel F. Klessig, that the plaintiff in the case, Dr. Meena Chandok, alleged were defamatory.
In the longer post I’m working on, I’ll dig into Judge Hood’s arguments with respect to what elements a plaintiff must establish to prove defamation, and what particular features of the scientific arena were germane to his ruling in this case. For the time being, however, I’m interested to hear what you all think about whether the 23 allegedly defamatory claims quoted below tend “to expose the plaintiff to public hatred, contempt, ridicule, or disgrace.” (13) As well, given that one element of defamation is that the defamatory statements are factually false, I’d like to hear your thoughts on the evidentiary standard a scientist should have to meet before making claims like these to other scientists.
Here, quoted from the ruling, are the 23 allegedly defamatory statements:

Continue reading

Does a retraction constitute defamation of your coauthor?

I’m used to reading about cases of alleged scientific misconduct in science-focused publications and in major media outlets like the New York Times and the Boston Globe. I’ve had less occasion to read about them in law journals. But today, on the front page of the New York Law Journal, there’s an article titled “Scientist’s Defamation Claims Over Colleague’s Efforts to Discredit Her Research Are Dismissed”. (The article is available to paid subscribers. This may be a good time to make a friend with access to a law library.)
The legal action the article describes was brought by a scientist who argued she was being defamed by a collaborator who no longer stands behind work they jointly published. The defendant says the published results are not reproducible; the plaintiff says, stop defaming me!
The judge says, your case doesn’t meet the burden to prove defamation.
From the article:

Continue reading

Anatomy of a scientific fraud: an interview with Eugenie Samuel Reich.

Eugenie Samuel Reich is a reporter whose work in the Boston Globe, Nature, and New Scientist will be well-known to those with an interest in scientific conduct (and misconduct). In Plastic Fantastic: How the Biggest Fraud in Physics Shook the Scientific World, she turns her skills as an investigative reporter to writing a book-length exploration of Jan Hendrik Schön’s frauds at Bell Labs, providing a detailed picture of the conditions that made it possible for him to get away with his fraud as long as he did.
Eugenie Samuel Reich agreed to answer some questions about Plastic Fantastic and the Schön case. My questions, and her answers, after the jump.

Continue reading

Book review: Plastic Fantastic: How the Biggest Fraud in Physics Shook the Scientific World.

Plastic Fantastic: How the Biggest Fraud in Physics Shook the Scientific World
by Eugenie Samuel Reich
New York: Palgrave Macmillan
2009

The scientific enterprise is built on trust and accountability. Scientists are accountable both to the world they are trying to describe and to their fellow scientists, with whom they are working to build a reliable body of knowledge. And, given the magnitude of the task, they must be able to trust the other scientists engaged in this knowledge-building activity.
When scientists commit fraud, they are breaking trust with their fellow scientists and failing to be accountable to their phenomena or their scientific community. Once a fraud has been revealed, it is easy enough to flag it as pathological science and its perpetrator as a pathological scientist. The larger question, though, is how fraud is detected by the scientific community — and what conditions allow fraud to go unnoticed.
In Plastic Fantastic: How the Biggest Fraud in Physics Shook the Scientific World, Eugenie Samuel Reich explores the scientific career of fraudster Jan Hendrik Schön, piecing together the mechanics of how he fooled the scientific community and considering the motivations that may have driven him. Beyond this portrait of a single pathological scientist, though, the book considers the responses of Schön’s mentors, colleagues, and supervisors, of journal editors and referees, of the communities of physicists and engineers. What emerges is a picture that challenges the widely held idea that science can be counted on to be self-correcting.

Continue reading

Medical ghostwriting and the role of the ‘author’ who acts as the sheet.

This week the New York Times reported on the problem of drug company-sponsored ghostwriting of articles in the scientific literature:

A growing body of evidence suggests that doctors at some of the nation’s top medical schools have been attaching their names and lending their reputations to scientific papers that were drafted by ghostwriters working for drug companies — articles that were carefully calibrated to help the manufacturers sell more products.

Experts in medical ethics condemn this practice as a breach of the public trust. Yet many universities have been slow to recognize the extent of the problem, to adopt new ethical rules or to hold faculty members to account.

The last time I blogged explicitly about the problem of medical ghostwriting, the focus of the coverage seemed to be on the ways that such “authorship” let pharmaceutical companies stack the literature in favor of the drugs they were trying to sell. Obviously, this sort of practice has the potential to deliver “knowledge” that is more useful to the health of the pharmaceutical companies than to the health of the patients whose doctors are consulting the medical literature.

This time around, it strikes me that more attention is being paid to the ways that the academic scientists involved are gaming the system — specifically, putting their names on work they can’t legitimately take credit for (at least, not as much credit as they seem to be claiming). When there’s a ghostwriter in the background (working with the company-provided checklist of things to play up and things to play down in the manuscript), the scientist who puts her name on the author line starts moving into guest author territory. As we’ve noted before, guest authorship is, at its core, a deception.

Deception, of course, is at odds with the honesty and serious efforts towards objectivity scientists are supposed to bring to their communications with other scientists.

Continue reading

Who’s a scientist?

At Philosophers’ Playground, Steve Gimbel ponders the pedagogically appropriate way to label William Dembski:

I’m wrapping up work on my textbook Methods and Models: A Historical Introduction to the Philosophy of Science and have run into a question. …
The evolutionary biology track’s final piece deals with William Dembski’s work on intelligent design theory. Therein lies the question. The way the exercises are laid out is in three parts labeled The Case, The Scientist, and Your Job. The second part is a brief biographical sketch (a paragraph, just a couple sentences about the person’s life). Not every case study has a bio — for the discovery of the top quark, for example, there is no “The” scientist — so the question is whether I should have one for Dembski.
On the one hand, having it seems to beg the question I am asking the student — is it science. By labeling him “the scientist” in the text is to send a signal to the student. At the same time not doing so seems to send the same sort of message in the opposite direction. It also seems to be a political statement whether I do or don’t. If he had a Ph.D. in biology or had done some other work, that would make it easy, but he has a Ph.D. in mathematics and another in philosophy and teaches philosophy at Southwest Baptist Seminary. He did have an NSF research fellowship at one point, but then so have many philosophers whom I would not call scientists. His arguments are aimed at the discourse within evolutionary biology, that is, he sees himself as doing science and it is his clear intent to do science. Is that enough to be a scientist? Would being a mathematician with a professional interest in complexity theory, applied statistics be sufficient? Does the applied nature, the world-pointing orientation of those field make one a scientist? What is a scientist and is William Dembski one?

That question of who is properly counted as a scientist resurfaces yet again.

Continue reading