Bad cites. Bad science?

Do scientists see themselves, like Isaac Newton, building new knowledge by standing on the shoulders of giants? Or are they most interested in securing their own position in the scientific conversation by stepping on the feet, backs, and heads of other scientists in their community? Indeed, are some of them willfully ignorant about the extent to which their knowledge is built on someone else’s foundations?
Those are questions raised in a post from November 25, 2008 on The Scientist NewsBlog. The post examines objections raised by a number of scientists to a recent article in the journal Cell:

The paper in question, published in a June issue of Cell, described a model for understanding the genetic and cellular machinery underlying planar cell polarity (PCP), the cell-to-cell communication that epithelial cells use to align and arrange themselves to function as an organized tissue.
Developmental biologist Jeffrey Axelrod, the paper’s main author, defended the work, writing in an email to The Scientist, “our paper (Chen et al. June 2008) underwent Cell’s rigorous process of peer-review prior to publication. We stand by our conclusions as stated in the paper, as well as by our use of citations, and I encourage your readers to look at the papers in question, as they speak for themselves.”
But [University of Cambridge biologist Peter] Lawrence claims that the Axelrod paper, which identifies a transmembrane protein called Flamingo (also known as starry night or stan) as a key signaling molecule in Drosophila PCP, is largely a rehash of his own group’s work, which was published in the journal Development in 2004 and has been cited 35 times, according to ISI. (Axelrod’s Cell paper has not yet been cited in any published papers.)
“The complaint is that the main point of the [Cell paper] is what we discovered and provided evidence for four years ago,” Lawrence said. “It pretends to be much more novel than it is.”

Let me make two smallish points in passing before taking on some larger issues. First, in this last quotation, I’m bothered by the word “pretends”. At least to my ear, pretending involves intent, and intent is a tricky thing to prove. (It’s worth noting, as we’ve discussed before, that plagiarism needn’t involve intent.)
Second, we should not overestimate the virtues certified by a manuscript’s having passed through peer review, even “Cell’s rigorous process of peer-review”. That the paper was published indicates that the peer reviewers judged its claims plausible (both with respect to prior knowledge in the field and with respect to the specific evidence offered to support the claims) and interesting. There is no guarantee, on the basis of peer review, that the claims offered in the paper are true, nor that the data were reported accurately, nor that the model used to interpret the data is on the right track, nor that the paper made proper mention of prior work.
The peer reviewers make the best judgments they can with the facts at hand. They may not themselves have encyclopedic knowledge of the literature in the field.
Now, on to the bigger questions.
How vital is novelty? Given the current rewards system within the community of science, very. That’s how you establish your priority, on the basis of which your work is supposed to be cited in further discussions of the system or scientific problem.
How important is citation of prior work in conducting (and reporting) new research results?
Drawing on the ideas and results that others have published* requires that you give them credit by citing their work. This is part of the social contract within the scientific community. People publish to share what they know, and their strategies for building that knowledge, and other scientists are welcome to draw on these published resources so long as they cite these resources when they contribute their own findings to the discussion. Failing to cite earlier work is failing to give props to the scientists who advanced the state of understanding to the point at which you jumped into the fray. To the extent that citations actually count for more than ego (e.g., playing a role in how scientists and their work are evaluated in tenure and grant reviews), failing to cite the work of other scientists can hurt their careers in tangible ways.
If you are caught not citing work upon which you are building your own work, your willingness to violate the social contract and to do tangible harm to the careers of other scientists may well be held against you. Those uncited scientists, after all, may be reviewing your next manuscript or grant proposal, or even answering a request from your department for an “outside letter” evaluating your standing in your research field.
1. If you know about prior work in the area, you ought to read up on it and cite it.
2. If you don’t know about prior work in the area, you ought to do a literature search to see whether any exists (one way to automate such a check is sketched after this list). If it does, return to step 1 (read up on it and cite it).
3. If you can’t find any prior work in the area, you can go ahead and do your research to see what you can find out.
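
For anyone who wants to make step 2 routine, here is a minimal sketch. It is my own illustration, not anything from the post or the papers it discusses; it assumes Python with Biopython installed, and the search term and email address are placeholders.

    # A quick due-diligence literature check against PubMed, via NCBI's
    # E-utilities as wrapped by Biopython's Entrez module.
    from Bio import Entrez

    Entrez.email = "you@example.edu"  # placeholder; NCBI asks for a contact address

    def prior_work(query, retmax=20):
        """Return (hit count, PubMed IDs) for a search term."""
        handle = Entrez.esearch(db="pubmed", term=query, retmax=retmax)
        record = Entrez.read(handle)
        handle.close()
        return int(record["Count"]), record["IdList"]

    # Illustrative query; in the case at hand it might look something like this.
    count, pmids = prior_work("planar cell polarity AND Drosophila")
    if count == 0:
        print("No hits; go ahead and do the research (step 3).")
    else:
        print(f"{count} prior papers (sample PMIDs: {pmids[:5]}); read and cite them (step 1).")

Of course, no automated search substitutes for actually reading the papers it turns up; the point is just that “I didn’t know how to look” is getting harder to sustain.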
It seems possible that a group could be ill-informed about the state of the literature on a particular system or question, especially if it’s just moving into the area. Maybe its members aren’t clear on the best search strategies to turn up the key pieces of literature. Maybe the literature search took a back seat to getting a lab set up or getting a cranky experimental system to cooperate.
Having done the experimental research and drawn conclusions from it, obviously you’d like to report it.
Reporting your results (i.e., how you set up your system, what you observed, the conclusions you drew) conveys information to the rest of the scientific community. One thing your article might convey is that you don’t know how to do a good literature search, or that you can’t be bothered to consult the existing literature.
Usually, that’s not the main conclusion you want people to draw from your article. But authorial intent only gets you so far.
Now, from the point of view of what the scientific community collectively knows, instances in which a research group blows off the literature search and unintentionally replicates a finding that has already been reported might actually be useful — they offer truly independent replications of earlier findings, with the replicating group unbiased by what the earlier group reported, since they never saw those reports. When independent groups find the same thing, it suggests that the results are pretty robust, which is a nice feature in scientific knowledge.
And, given that only first place counts in a priority race, it seems unlikely that many research groups would have a good reason to try to replicate existing results that they know about. As helpful as replication may be from the point of view of establishing what the scientific community knows, there’s no career reward for it. In other words, the community might get something positive out of the efforts of groups who fail to do thorough literature searches — it’s just that these groups, and the groups whose results they overlooked, won’t necessarily get the career rewards they expected.
Should one do the research, publish the findings, and only after the fact discover that other scientists had been there first (and published about it), the sensible thing to do is to acknowledge those others late rather than not at all. Saying “I should have known about these earlier publications” is a classier move than denying you goofed by not finding and acknowledging them in the first place. Of course, if you feel that the earlier references report results that are substantially different, you need to make the case that there are salient differences between that earlier work and your later work. But acknowledging a mistake, cleaning it up, and moving on is crucial as far as scientific life skills go.
To be fair, in this particular case there are other worries that have been raised beyond failure to properly cite prior work. Again, from The Scientist NewsBlog:

Lawrence wrote in a letter to Cell that the paper was “seriously flawed both scientifically and ethically and in my opinion amounts to a theft of our intellectual property (especially the results and conclusions of our prior paper, Lawrence et al., 2004).” Lawrence’s letter was not published in Cell, but he sent it to The Scientist. At least four other researchers submitted letters independently – some also obtained by The Scientist – to the journal last July. Some of these also claimed that the Axelrod group’s science in constructing a model for PCP was subpar.
“I hope you will agree with me that (i) this paper is a disaster for the field (it will set the community back by several years) and (ii) it is not good for the journal either,” wrote Marek Mlodzik, chair of developmental and regenerative biology at Mount Sinai School of Medicine, in a letter to the editor of Cell, Emilie Marcus.
Mlodzik said that the Axelrod paper completely ignores some of his own previous research on PCP; specifically, a 2005 paper that proposed a similar model for PCP. “They should have cited it because of the model,” Mlodzik told The Scientist.
Mlodzik also takes issue with the science in the Cell paper, citing in his letter to the journal a couple examples where “the authors use wrong data or conceptually flawed experiments to give false credibility to their model.”

In addition to complaints from scientists who published earlier results on PCP, we have worries that the data cited in the paper are “wrong”, that the experiments discussed are “conceptually flawed”, and that the scientific reasoning used to construct or support the model is “subpar”. These may all be reasonable criticisms, or they may not. Indeed, they may be the sort of criticisms that scientists working on a particular problem could raise about other papers that passed through peer review and included meticulous citations of earlier work on the problem. Pre-publication peer review is not the end of the scientific community’s organized skepticism. Rather, published articles are fair game for critique — including critique of reported data, of the conceptual underpinnings of experimental design, of the logic behind the model offered to explain the results, and so forth. Because judgments of what is credible start out as individual matters, the scientific community needs to discuss published results together; that is how its members establish what is persuasive to others within the community.
And, fair or not, scientists are likely to find more persuasive the testimony of those they take to be honest and meticulous. Scientists who can’t even do a good literature search may have other as yet unexposed gaps in their practice that render their testimony less credible.
_____
*Indeed, one should also cite the ideas and results of others that have been communicated by other routes. Publications leave a more obvious paper trail, so that’s where most attention is focused, but proper credit for intellectual contributions ought to be accorded whenever possible.

Posted in Communication, Ethical research, Methodology, Tribe of Science.

23 Comments

  1. Citations are also very important for anyone trying to read up on a particular area of science. They help researchers find important papers in the field.

  2. A “competing” group published a paper (in a minor journal) with observations that, in another model organism, we’d published a few years earlier. They cited our paper and mentioned our observations, but not our model. Instead, they “proposed” the same model and helpfully suggested that it explained our observations. How do people come up with such strategies?

  3. I want to point out an editorial initiative by The EMBO Journal that I think deserves much more publicity. Under their new editorial guidelines, references do not count against the length of the paper, and authors are actively encouraged to cite the primary literature, not just fig-leaf their manuscripts with convenient review citations.

  4. I also want to add a couple of things: 1) Citation is so easy with EndNote etc. that I think many authors never really re-engage with papers; they may be citing from memory and from their faulty interpretations of the work instead of re-examining the published literature. 2) Authors are incredibly lazy. I just had work not cited: a statement in a paper published just this week needed to cite a single paper (mine) but instead cited a review published only a year after my work. Obviously, everyone has stories such as this, but there is very lazy authorship out there. Very, very bad. Journals also encourage poor citation, because authors cut as much useful information as they can, such as methods and references, to get in under word limits.

  5. I do find it strange that 3 reviewers presumably gave this paper the thumbs-up for Cell. Either the authors have good company in their ignorance of the earlier work, or the journal made poor decisions about reviewers. If even a single reviewer had said, “Neat story but old news (Lawrence 06)” the editor wouldn’t have been likely to let it in.
    It would be fascinating to see who was on the “Don’t Send to These Reviewers” list. If the authors listed Peter Lawrence and Marek Mlodzik, then they’re hanged. If not, it at least seems possible that they are merely (along with three reviewers!) ignorant.
    Pinko Punko, EMBO’s approach sounds wise. Would that a few more journals would follow suit. But in defense of the review-citing: why is that so grossly inappropriate, if the review provides the context in which the author understood the original paper’s impact?

  6. In an undergraduate class the importance of citing the original paper was strongly emphasized. This emphasis was repeated by my MSc advisor, who has a very strong ethical standard. As a graduate student, I am extremely frustrated that I sometimes have to go back 10 papers to find an original citation. For example, when looking at the methodology section, many papers will say “we completed x technique as described in paper b”. When I go and find paper b, the same technique is written exactly the same, except now it’s described in paper r. I have also noticed that over the last couple of years there has been an increase in the number of times review articles are cited for basic findings or background information. Is it so hard to go and get the paper that first explained a phenomenon? It’s amazing what you learn by reading those early papers. Many times review articles will cherry-pick the data that suits the message the author is trying to send.

  7. The statement was supported by a single paper in the literature; it was a specific statement about a biological observation, one made in a single paper. There was no need to cite the review. Think of “Fact X was determined (Original work)” as opposed to “Fact X for protein Y is one of a number of observations showing proteins of this class have Z function (Review)”; the statement in the work was very much the first kind. Citing reviews is by no means necessarily inappropriate; in this case it was inappropriate because the statement being supported did not call for citation of a review. For example, in the question at hand in the post here, all of the prior works slighted by the authors of Chen et al. (2008) were indeed cited in their paper, but were cited either inappropriately or insufficiently. In the case I referred to, citing the review was just as inappropriate as the omission of my work.
    I don’t do this all the time, but when I am bundling several results into one sentence for the purpose of citing a review in support of a statement, I try to make it clear that the citation is a review, especially if there is not room to cite the key results that support, for example, the introduction. I will say “reviewed in (X)” or “(X), and references therein”.

  8. I agree with Dr. Jekyll & Mrs. Hyde. How can it be that none of the reviewers noticed the paper was very similar to previously published data? It’s not like Development is an obscure journal no-one ever reads. This suggests there might be problems in the way Cell implements its peer review process.

  9. In a small but hypercompetitive, flashy field like PCP, I would wager that the competing groups were on the do-not-review list, and I’d guess that perhaps the PCP experts reviewing the work were mouse people, with a fly dev bio person as the third. This is speculation of course, but I’m reading both papers tonight to compare.

  10. Not considering subtleties of the data, Chen et al. doesn’t strike me as overtly offensive. How it strikes one may very much depend on how well one knows the field. It is likely that those in disagreement object to the strength and/or coverage of this statement, “Other investigators reached a similar conclusion using different experimental paradigms”, concerning a critical aspect of strabismus/Flamingo function.
    Still reading, so will try to be back with more. The papers are nice and thick with genetic mosaic data.

  11. Should one do the research, publish the findings, and only after the fact discover that other scientists had been there first (and published about it), the sensible thing to do is to acknowledge those others late rather than not at all. Saying “I should have known about these earlier publications” is a classier move than denying you goofed by not finding and acknowledging them in the first place.

    A related thing happened to Comrade PhysioProf earlier in his career. We published a paper in a C/N/S journal that described application of a novel technique in a particular model organism, and the results we obtained contradicted then-current dogma relating to a particular physiological process, and did so in a very interesting way.
    We really just stumbled into this particular discovery, as we were at first engaged in a tool-development exercise; we invented a hammer, and the specific nail we hit was fortuitous. Thus, we were complete newcomers to the study of this physiological process. Because of our ignorance, we were completely unaware that there was an earlier literature out there that had been attempting to establish a particular cellular/molecular mechanism for this particular physiological process in organisms other than the one we were working in.
    As a historical matter, the vast array of studies supporting the then-current dogma was perceived as contradicting the earlier literature, and the mechanism proposed in that earlier literature was abandoned. It turned out that our paper supported the viability of that abandoned model, which we didn’t even know existed. So we wrote our paper without any of this historical context, and without any citations to that work. And none of the reviewers of the paper caught this, probably because (1) the older model had been abandoned and (2) the reviewers were probably most familiar with the work done on our model organism, and not the ones that the older studies had been performed on.
    I only discovered all of this when I read a News&Views written about our paper that provided the historical context that we missed. In horror, I immediately sent e-mails to some of the major players in that older body of work apologizing for our failure to cite their work and explaining that it was purely out of ignorance and that I was completely new to the field. The awesome thing is that these very senior figures in our field have become tremendous career supporters of mine, writing me letters, inviting me to speak at conferences, and otherwise encouraging and supporting my entry into their field and career advancement.
    It sounds like this Chen et al. situation is different, in that Axelrod didn’t stumble into the Drosophila PCP field, and that he should have known what was going on. Is it possible that Axelrod’s trainee who wrote the paper, Chen, was given so much leeway that Axelrod didn’t even know what was really in the paper, and that Chen was genuinely ignorant of the Lawrence studies?

  12. The awesome thing is that these very senior figures in our field have become tremendous career supporters of mine, writing me letters, inviting me to speak at conferences, and otherwise encouraging and supporting my entry into their field and career advancement.
    You provided evidence, many years later, that their stuff was right all along….and they think you are da bomb? No wai!!!!!

  13. Not knowing how to search the literature (or indeed, not knowing you *should* search the literature) is no excuse. Unless your institution or company has closed its library and fired all of the librarians (which is actually possible), then all you need to do is ask.
    Of course, even a thorough search of the literature might miss things that were written in a different field or using different terminology.

  14. Pinko Punko
    The failure to cite previous work happens all the time. And I haven’t been in science all that long. It’s happened at least twice this year to our research group. In one case letters to the editor were written up and accepted for publication.
    In another, much more egregious case, print publication of one of our papers, already accepted (online publication), was held up to allow another far more established and famous group to write up and publish their results in print simultaneously. Despite submitting after our paper was accepted and published online, they did not cite our work, and they claimed primacy throughout their manuscript.
    But, hey, what can you do?
    CPP, I was wondering: you emailed to apologise to the original authors when you fucked up (a classy move), but did you write a letter to the editor to clarify?

  15. I’ve been there, anon.
    I think it is good to talk about these things because some authors are stuck in such an aggressive stance they don’t feel like engaging the literature honestly because they are selling their work. I think it is good to have a checklist of due diligence in dealing with the literature. Too often this is forgotten or overlooked.

  16. I think it’s setting up a false dichotomy, and ethically oversimplifying, to say that failing to cite was either ignorance or intent. Also, I think it’s easier both to preach and to practice generous citation to the extent that one’s research area is less competitive and/or the journals in which one publishes accept a higher percentage of submitted manuscripts. In a low-prestige journal, I believe, authors only make their manuscript look more important by referencing work in higher-prestige journals. This counterbalances the incentive that figures more heavily in a top-tier journal like Cell, which is not to refer to work that looks to have established something similar to what one represents one’s own work as establishing. Models may be advanced on slim grounds in a lower-tier journal, and especially if the model is one that might have occurred to a lot of people, such a publication creates no authorship in the model. Bibliographies are constructed as if to tell the history of how an understanding has been established by evidence. To the extent that a publication is unpersuasive, it establishes nothing and needs no citation. One need not even search the lower-tier journals if one believes nothing in them is ever persuasive or new. Of course, one’s rivals’ work is liable to look especially unpersuasive and unimportant to the history of progress in one’s field.

  17. I don’t know what this paper is talking about at all. I haven’t had electronic access to scientific journals for years, and I am prohibited from visiting the campus. But there is something strange there: “the cell-to-cell communication that epithelial cells use to align and arrange themselves to function as an organized tissue”. The thing is that cells, when they divide during the formation of tissues, remain attached to each other by permanent cell-to-cell bridges; that’s why they constitute an integral structure, the tissue. They do not need “to align and arrange themselves” later. An interesting problem arises: how can cell renewal in the adult organism proceed? Such a model exists: see http://www.cell-division-program.com

  18. > And, fair or not, scientists are likely to find more
    > persuasive the testimony of those they take to be
    > honest and meticulous. Scientists who can’t even do
    > a good literature search may have other as yet
    > unexposed gaps in their practice that render
    > their testimony less credible.
    Amen there.

  19. The “shoulders of giants” attribution to Newton isn’t for the coinage but for the saintly standard of humility implicit in Newton saying it. Thing is, that makes it doubly wrongheaded to associate it with Newton, who, if I remember right, fought aggressively and ungenerously behind the scenes to deny Hooke any share of the glory in what was to become the prototype of all future scientific journals, the Philosophical Transactions of the Royal Society. That Newton was just about the last person in the world to cite or acknowledge others is, I think, the current historical perspective on the nevertheless great man.
