‘My work has been plagiarized. Now what?’

I received an email from reader Doug Blank (who gave me permission to share it here and to identify him by name) about a perplexing situation:

Janet,
I thought I’d solicit your advice. Recently, I found an instance of parts of my thesis appearing in a journal article, and of the paper being presented at a conference. In fact, further exploration revealed that it had won a best paper prize! Why don’t I feel proud…
I’ve sent the following letter to the one and only email address that I found on the journal’s website, almost three weeks ago, but haven’t heard anything. I tried contacting the Editorial Advisory Board Chair (through that same email), but he doesn’t have any specific contact information anywhere available on the web, or elsewhere. He is emeritus at [name of university redacted], but they won’t tell me how to contact him. I asked a secretary there to forward my contact to him. I emailed website maintainers. Nothing yet.
Some questions from this: can one have a journal without having someone easily contactable for such issues? No telephone numbers? Who is responsible for catching this kind of thing? Reviewers? Could the community rise to the challenge? For example, could we build a site where papers that are ready for publishing get scrutinized for plagiarism? People would love that more than wikipedia!
Am I in any risk for even sending such accusatory emails? Should I contact the perp? What would he do? What can he do?
I hope to follow this through to the end. Feel free to use any of this as material. If you are interested, I’d be glad to update you. More importantly, I’d be glad to hear of advice.
Thanks!
-Doug

Doug appended the email message he sent to the elusive Editorial Advisory Board Chair (which I present here heavily redacted, just in case the guy turns up and makes an effort to set things right):

Continue reading

Some tactics always stink.

Abel and Orac and Isis have recently called attention to the flak Amy Wallace has been getting for her recent article in WIRED Magazine, “An Epidemic of Fear: How Panicked Parents Skipping Shots Endangers Us All”. Here’s the flak Wallace has gotten, as detailed in her Twitter feed (from which Abel constructed a compilation):

I’ve been called stupid, greedy, a whore, a prostitute, and a “fking lib.” I’ve been called the author of “heinous tripe.”
J.B. Handley, the founder of Generation Rescue, the anti-vaccine group that actress Jenny McCarthy helps promote, sent an essay titled “Paul Offit Rapes (intellectually) Amy Wallace and Wired Magazine.” In it, he implied that Offit had slipped me a date rape drug. “The roofie cocktails at Paul Offit’s house must be damn good,” he wrote. Later, he sent a revised version that omitted rape and replaced it with the image of me drinking Offit’s Kool-aid. That one was later posted at the anti-vaccine blog Age of Autism. You can read that blog here.
I’ve been told I’ll think differently “if you live to grow up.” I’ve been warned that “this article will haunt you for a long time.” Just now, I got an email so sexually explicit that I can’t paraphrase it here. Except to say it contained the c-word and a reference to dead fish.

Since the scientific issues around vaccination (including the lack of evidence to demonstrate a link between vaccinations and autism) are well-covered in these parts (especially at Orac’s pad and by Mike The Mad Biologist), I just want to speak briefly about the strategy that seems to be embodied by these reactions to Wallace’s article.

Continue reading

How did we do at dialogue?

In a recent post, I issued an invitation:

I am always up for a dialogue on the issue of our moral relation to animals and on the ethical use of animals in scientific research. If folks inclined towards the animal rights stance want to engage in a dialogue right here, in the comments on this post, I am happy to host it.
(I will not, however, be hosting a debate. A dialogue is different from a debate, and a dialogue is what I’m prepared to host.)

That post has received upward of 250 comments, so there was certainly some sort of exchange going on. But, did we manage to have something approaching a dialogue, or did we end up slipping into a debate?
In considering this question, I want to offer a grid I encountered in the Difficult Dialogues Initiative at San Jose State University, adapted from material from the Public Conversations Project. The grid compares characteristics of dialogues and arguments (which are not precisely the same as debates but are probably close enough for our purposes here):

Continue reading

When collaboration ends badly.

Back before I was sucked into the vortex of paper-grading, an eagle-eyed Mattababy pointed me to a very interesting post by astronomer Mike Brown. Brown details his efforts to collaborate with another team of scientists who were working on the same scientific question he was working on, what became of that attempted collaboration, and the bad feelings that followed when Brown and the other scientists ended up publishing separate papers on the question.
Here’s how Brown lays it out:

Continue reading

Physical phenomena, competing models, and evil.

Over at Starts with a Bang, Ethan Siegel expressed exasperation that Nature and New Scientist are paying attention to (and lending too much credibility to) an astronomical theory Ethan views as a non-starter, Modified Newtonian Dynamics (or MOND):

[W]hy is Nature making a big deal out of a paper like this? Why are magazines like New Scientist declaring that there are cracks in dark matter theories?
Because someone (my guess is HongSheng Zhao, one of the authors of this paper who’s fond of press releases and modifying gravity) is pimping this piece of evidence like it tells us something. Guess what? Galaxy rotation curves are the only thing MOND has ever been good for! MOND is lousy for everything else, and dark matter — which is good for everything else — is good for this too!
So thanks to a number of people for bringing these to my attention, because the record needs to be set straight. Dark matter: still fine. MOND: still horribly insufficient. Now, maybe we can get the editors and referees of journals like this to not only do quality control on the data, but also on the reasonableness of the conclusions drawn.

In a comment on that post, Steinn took issue with Ethan’s characterization of MOND:

Ethan – this is not a creationism debate.
Hong Sheng is a top dynamicist and he knows perfectly well what the issues are. The whole point of science at this level is to test models and propose falsifiable alternatives.
MOND may be wrong, but it is not evil.
Cold Dark Matter is a likelier hypothesis, by far, but it has some serious problems in detail, and the underlying microphysics is essentially unknown and plagued with poorly motivated speculation.
MOND has always approached the issue from a different perspective: that you start with What You See Is What You Get, and then look for minimal modifications to account for the discrepancies. It is a phenomenological model, and makes little attempt to be a fundamental theory of anything. Observers tend to like it because it gives direct comparison with data and is rapidly testable.
I think Leslie Sage knew what he was doing when he published this paper.

In a subsequent post, Ethan responded to Steinn:

Yes, Steinn, it is evil to present MOND as though it is a viable alternative to dark matter.
It is evil to spread information about science based only on some tiny fraction of the available data, especially when the entire data set overwhelmingly favors dark matter and crushes MOND so as to render it untenable. It isn’t evil in the same way that creationism is evil, but it is evil in the same way that pushing the steady-state-model over the Big Bang is evil.
It’s a lie based on an unfair, incomplete argument. It’s a discredited theory attacking the most valid model we have at — arguably — its only weak point. Or, to use a favorite term of mine, it is willfully ignorant to claim that MOND is reasonable in any sort of way as an alternative to dark matter. It’s possibly worse than that, because it’s selectively willful ignorance in this case.
And then I look at the effect it has. It undermines public understanding of dark matter, gravity, and the Universe, by presenting an unfeasible alternative as though it’s perfectly valid. And it isn’t perfectly valid. It isn’t even close. It has nothing to do with how good their results as scientists are; it has everything to do with the invalid, untrue, knowledge-undermining conclusions that the public receives.
And yes, I find that incredibly evil. Do you?

I have no strong views on MOND or Cold Dark Matter, but given that my professional focus includes the methodology of science and issues of ethics in science, I find this back and forth really interesting.

Continue reading

Signing a public petition means taking a public stand.

This, in turn, means that members of the public who strongly disagree with your stand may decide to track you down and let you know they disagree with you.
Apparently, this may become an issue for those who signed the Pro-Test petition in support of ethical and humane scientific research with animals. From an email sent to signatories:

[A] few websites hosted by animal rights activists have encouraged their readerships to visit the list of Pro-Test signatories in order to find names and to contact those persons to express their opposition to animal research. While your email addresses on the RaisingVoices.net website are secure and not publicly listed, the animal rights groups encourage people to use the wide array of Internet tools to find contact information and to use it.
While we regret that any person may receive negative communications as a result of this heinous effort by animal rights groups, we want to express – more than ever – our resolve that circumstances such as this are exactly why we all signed the Pro-Test Petition to begin with. Harassment of scientists and supporters of research is intolerable and only resoluteness and mutual support can overcome it.
Remember — no one on this list stands alone. We all share the support and assistance of more than 10,000 other signatories, as well as the resources of Americans for Medical Progress, Speaking of Research and Pro-Test for Science. Now more than ever, it is crucial that we find our collective voices and refuse to shy away when extremists use their predictable, tired tricks.

In case you were wondering, here’s the language from one of the ARA sites:

Continue reading

If you enter a dialogue, do you risk being co-opted?

On my earlier post, “Dialogue, not debate”, commenter dave c-h posed some interesting questions:

Is there an ethical point at which engagement is functionally equivalent to assent? In other words, is there a point at which dialogue should be replaced by active resistance? If so, how do you tell where that point is? I think many activists fear that dialogue is a tactic of those who support the status quo to co-opt them into a process that is unlikely to lead to any real change because the power is unevenly divided.

Continue reading

Dialogue, not debate.

At the end of last week, I made a quick trip to UCLA to visit with some researchers who, despite having been targets of violence and intimidation, are looking for ways to engage with the public about research with animals. I was really struck by their seriousness about engaging folks on “the other side”, rather than just hunkering down to their research and hoping to be left alone.
The big thing we talked about was the need to shift the terms of engagement.

Continue reading

Legal and scientific burdens of proof, and scientific discourse as public controversy: more thoughts on Chandok v. Klessig.

As promised, I’ve been thinking about the details of Chandok v. Klessig. To recap, we have a case where a postdoc (Meena Chandok) generated some exciting scientific findings. She and her supervisor (Daniel F. Klessig), along with some coworkers, published those findings. Then, in the fullness of time, after others working with Klessig tried to reproduce those findings on the way to extending the work, Klessig decided that the results were not sufficiently reproducible.
At that point, Klessig decided that the published papers reporting those findings needed to be retracted. Retracting a paper, as we’ve had occasion to discuss before, communicates something about the results (namely that the authors cannot stand behind them anymore). By extension, a retraction can also communicate something to the scientific community about the researcher responsible for generating those results — perhaps that she was too quick to decide a result was robust and rush it into print, or that she made an honest mistake that was not discovered until after the paper was published, or that her coauthors no longer trust that her scientific reports are reliable.
The issue is complicated, I think, by the fact that there were coauthors on the papers in question. Coauthors share the labor of doing the scientific work, and they share the credit for the findings described in their paper. You might expect, therefore, that they would share responsibility for quality control on the scientific work, and for making sure that the findings are robust before the manuscript goes off to the journal. (In my first post on this case, I noted that “before the work was submitted to Cell, Klessig had one of his doctoral students try to verify it, and this attempt was at least good enough not to put the brakes on the manuscript submission.” However, given that further efforts to reproduce the findings seem not to have succeeded, I suspect opinions will vary on whether this pre-submission replication was enough quality control on the authors’ parts.) And, you might expect that it would be the rare case where a problem with a published manuscript would come to rest on the shoulders of a single author in the group.
If credit is shared, why isn’t blame?
Whatever you think ought to be the standard assumptions when a collaborative piece of scientific work does not hold up, in this particular case the blame seemed to fall on Chandok. She took issue with the implication of the retractions (among other communications) that she was unreliable as a scientific researcher. Probably she considered the importance of trust and accountability in the scientific community, recognizing that if she were not trusted by her fellow scientists and if her work were viewed as presumptively unreliable, she would not have much of a scientific career ahead of her. So, she sought legal remedy for this harm to her scientific reputation and career prospects by pursuing a defamation claim against Klessig.
There are separable issues at play here. One is the question of what is required in the eyes of the law to prove a claim of defamation. Another is what would constitute “best practices” for scientific work, both in terms of dealing with data and conclusions, and in terms of dealing with the scientists who generate the data and conclusions (and who are the main audience for the findings reported by other scientists). Here, I think “dealing with” encompasses more than simply classifying fellow scientists by whether or not you can trust their scientific output. It includes interactions with collaborators (and competitors), not to mention interactions in scientific training relationships.
We might quibble about where a postdoc falls in the process of scientific training and development. Nevertheless, if the PI supervising a postdoc is supposed to be teaching her something (rather than just using her as another pair of hands, however well trained, in the lab), he may have specific responsibilities to mentor her and help her get established as a PI herself. Sorting out what those responsibilities are — and what other responsibilities could trump them — might be useful in preventing this kind of acrimonious outcome in other cases.
We’ll return to considering the broader lessons we might draw from this situation, but first let’s continue laying out the facts of Chandok v. Klessig, 5:05-cv-01076. (Again, I’m indebted to the reader who helpfully sent me the PDF of District Judge Joseph M. Hood’s ruling in this case, which is what I’m quoting below.)

Continue reading

Medical ghostwriting and the role of the ‘author’ who acts as the sheet.

This week the New York Times reported on the problem of drug company-sponsored ghostwriting of articles in the scientific literature:

A growing body of evidence suggests that doctors at some of the nation’s top medical schools have been attaching their names and lending their reputations to scientific papers that were drafted by ghostwriters working for drug companies — articles that were carefully calibrated to help the manufacturers sell more products.

Experts in medical ethics condemn this practice as a breach of the public trust. Yet many universities have been slow to recognize the extent of the problem, to adopt new ethical rules or to hold faculty members to account.

The last time I blogged explicitly about the problem of medical ghostwriting, the focus of the coverage seemed to be on the ways that such “authorship” let pharmaceutical companies stack the literature in favor of the drugs they were trying to sell. Obviously, this sort of practice has the potential to deliver “knowledge” that is more useful to the health of the pharmaceutical companies than to the health of the patients whose doctors are consulting the medical literature.

This time around, it strikes me that more attention is being paid to the ways that the academic scientists involved are gaming the system — specifically, putting their names on work they can’t legitimately take credit for (at least, not as much credit as they seem to be claiming). When there’s a ghostwriter in the background (working with the company-provided checklist of things to play up and things to play down in the manuscript), the scientist who puts her name on the author line starts moving into guest author territory. As we’ve noted before, guest authorship is, at its core, a deception.

Deception, of course, is at odds with the honesty and serious efforts towards objectivity scientists are supposed to bring to their communications with other scientists.

Continue reading