Harvard Dean sheds (a little) more light on Hauser misconduct case.

Today ScienceInsider gave an update on the Marc Hauser misconduct case, one that seems to support the accounts of other researchers in the Hauser lab. From ScienceInsider:

In an e-mail sent earlier today to Harvard University faculty members, Michael Smith, dean of the Faculty of Arts and Sciences (FAS), confirms that cognitive scientist Marc Hauser “was found solely responsible, after a thorough investigation by a faculty member investigating committee, for eight instances of scientific misconduct under FAS standards.”

ScienceInsider reprints the Dean’s email in its entirety. Here’s the characterization of the nature of Hauser’s misconduct from that email:

Continue reading

Is objectivity an ethical duty? (More on the Hauser case.)

Today the Chronicle of Higher Education has an article that bears on the allegation of shenanigans in the research lab of Marc D. Hauser. As the article draws heavily on documents given to the Chronicle by anonymous sources, rather than on official documents from Harvard’s inquiry into allegations of misconduct in the Hauser lab, we are going to take them with a large grain of salt. However, I think the Chronicle story raises some interesting questions about the intersection of scientific methodology and ethics.

From the article:

Continue reading

Data release, ethics, and professional survival.

In recent days, there have been signs on the horizon of an impending blogwar. Prof-like Substance fired the first volley:

[A]lmost all major genomics centers are going to a zero-embargo data release policy. Essentially, once the sequencing is done and the annotation has been run, the data is on the web in a searchable and downloadable format.

Yikes.

How many other fields put their data directly on the web before those who produced it have the opportunity to analyze it? Now, obviously no one is going to yank a genome paper right out from under the group working on it, but what about comparative studies? What about searching out specific genes for multi-gene phylogenetics? Where is the line for what is permissible to use before the genome is published? How much of a grace period do people get with data that has gone public, but that they* paid for?

—–
*Obviously we are talking about grant-funded projects, so the money is taxpayer money, not any one person’s. Nevertheless, someone came up with the idea and got it funded, so there is some ownership there.
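
(To make the stakes concrete: once a genome lands in a public database, pulling it down takes only a few lines of code. Here is a minimal sketch of fetching a publicly released sequence from GenBank. This is my own illustration, not Prof-like Substance’s; it assumes Biopython is installed, and the accession number is hypothetical.)

```python
# Minimal sketch: fetching a publicly released sequence from GenBank.
# Assumes Biopython is installed; the accession number below is hypothetical.
from Bio import Entrez, SeqIO

Entrez.email = "you@example.edu"  # NCBI asks for a contact address

# Once a record is public, anyone can retrieve it, whatever their role in producing it.
handle = Entrez.efetch(db="nucleotide", id="XX123456", rettype="gb", retmode="text")
record = SeqIO.read(handle, "genbank")
handle.close()

print(record.id, len(record.seq), "bp")
```

The gap between “deposited” and “downloaded by a competitor” is, in practice, close to zero, which is exactly what makes the grace-period question pressing.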

Then, Mike the Mad Biologist fired off this reply:

Several of the large centers, including the one I work at, are funded by NIAID to sequence microorganisms related to human health and disease (analogous programs for human biology are supported by NHGRI). There’s a reason why NIH is hard-assed about data release:

Funding agencies learned this the hard way, as too many early sequencing centers resembled ‘genomic roach motels’: DNA checks in, but sequence doesn’t check out.

The funding agencies’ mission is to improve human health (or some other laudable goal), not to improve someone’s tenure package. This might seem harsh unless we remember how many of these center-based genome projects are funded. The investigator’s grant is not paying for the sequencing. In the case of NIAID, there is a white paper process. Before NIAID will approve the project, several goals have to be met in the white paper (Note: while I’m discussing NIAID, other agencies have a similar process, if different scientific objectives).

Obviously, the organism and collection of strains to be sequenced have to be relevant to human health. But the project also must have significant community input. NIAID absolutely does not want this to be an end-run around R01 grants. Consequently, these sequencing projects should not be a project that belongs to a single lab, and which lacks involvement by others in the subdiscipline (“this looks like an R01” is a pejorative). It also has to provide a community resource. In other words, data from a successful project should be used rapidly by other groups: that’s the whole point (otherwise, write an R01 proposal). The white paper should also contain a general description of the analysis goals of the project (and, ideally, who in the collaborative group will address them). If you get ‘scooped’, that’s, in part, a project planning issue.

NIAID, along with other agencies and institutes, is pushing hard for rapid public release. Why does NIAID get to call the shots? Because it’s their money.

Which brings me to the issue of ‘whose’ genomes these are. The answer is very simple: NIH’s (and by extension, the American people’s). As I mentioned above, NIH doesn’t care about your tenure package, or your dissertation (given that many dissertations and research programs are funded in part or in their entirety by NIH and other agencies, they’re already being generous†). What they want is high-quality data that are accessible to as many researchers as possible as quickly as possible. To put this (very) bluntly, medically important data should not be held hostage by career notions. That is the ethical position.

Prof-like Substance hurled back a hefty latex pillow of a rejoinder:

People feel like anything that is public is free to use, and maybe they should. But how would you feel as the researcher who assembled a group of researchers from the community, put a proposal together, drummed up support from the community outside of your research team, produced and purified the sample to be sequenced (which is not exactly just using a Sigma kit in a LOT of cases), dealt with the administration issues that crop up along the way, pushed the project through the center (another aspect woefully underappreciated), got your research community together once the data were in hand to make sense of it all and herded the cats to get the paper together? Would you feel some ownership, even if it was public dollars that funded the project?

Now what if you submitted the manuscript and then opened your copy of Science and saw that the major finding you centered the genome paper around has been plucked out by another group and published in isolation? Would you say, “well, the data’s publicly available, what’s unscrupulous about using it?”

[L]et’s couch this in the reality of the changing technology. If your choice is to have the sequencing done for free, but risk losing it right off the machine, OR to do it with your own funds (>$40,000) and have exclusive right to it until the paper is published, what are you going to choose? You can draw the line regarding big and small centers or projects all you want, but it is becoming increasingly fuzzy.

This is all to get back to my point that if major sequencing centers want to stay ahead of the curve, they have to have policies that are going to encourage, not discourage, investigators to use them.

It’s fair to say that I don’t know from genomics. However, I think the ethical landscape of this disagreement bears closer examination.

Continue reading

The value of (unrealistic) case studies in ethics education.

Dr. Isis posted a case study about a postdoc’s departure from approved practices and invited her readers to discuss it. DrugMonkey responded by decrying the ridiculousness of case studies far more black and white than what scientists encounter in real life:

This is like one of those academic misconduct cases where they say “The PI violates the confidence of review, steals research ideas that are totally inconsistent with anything she’d been doing before, sat on the paper review unfairly, called the editor to badmouth the person who she was scooping and then faked up the data in support anyway. Oh, and did we mention she kicked her cat?”.

This is the typical and useless fare at the ethical training course. Obvious, overwhelmingly clear cases in which the black hats and white hats are in full display and provide a perfect correlation with malfeasance.

The real world is messier and I think that if we are to make any advances in dealing with the real problems, the real cases of misconduct and the real cases of dodgy animal use in research, we need to cover more realistic scenarios.

I’m sympathetic to DrugMonkey’s multiple complaints: that real life is almost always more complicated than the canned case study; that hardly anyone puts in the years of study and training to become a scientist if her actual career objective is to be a super-villain; and especially that the most useful sort of ethics training for the scientist will be in day to day conversation with scientific mentors and colleagues rather than in isolated ethics courses, training modules, or workshops.

However, used properly, I think that case studies — even unrealistic ones — play a valuable role in ethics education.

Continue reading

What kind of problem is it when data do not support findings?

And, whose problem is it?

Yesterday, The Boston Globe published an article about Harvard University psychologist Marc Hauser, a researcher embarking on a leave from his appointment in the wake of a retraction and a finding of scientific misconduct in his lab. From the article:

In a letter Hauser wrote this year to some Harvard colleagues, he described the inquiry as painful. The letter, which was shown to the Globe, said that his lab has been under investigation for three years by a Harvard committee, and that evidence of misconduct was found. He alluded to unspecified mistakes and oversights that he had made, and said he will be on leave for the upcoming academic year. …

Much remains unclear, including why the investigation took so long, the specifics of the misconduct, and whether Hauser’s leave is a punishment for his actions.

The retraction, submitted by Hauser and two co-authors, is to be published in a future issue of Cognition, according to the editor. It says that, “An internal examination at Harvard University . . . found that the data do not support the reported findings. We therefore are retracting this article.’’

The paper tested cotton-top tamarin monkeys’ ability to learn generalized patterns, an ability that human infants had been found to have, and that may be critical for learning language. The paper found that the monkeys were able to learn patterns, suggesting that this was not the critical cognitive building block that explains humans’ ability to learn language. In doing such experiments, researchers videotape the animals to analyze each trial and provide a record of their raw data. …

The editor of Cognition, Gerry Altmann, said in an interview that he had not been told what specific errors had been made in the paper, which is unusual. “Generally when a manuscript is withdrawn, in my experience at any rate, we know a little more background than is actually published in the retraction,’’ he said. “The data not supporting the findings is ambiguous.’’

Gary Marcus, a psychology professor at New York University and one of the co-authors of the paper, said he drafted the introduction and conclusions of the paper, based on data that Hauser collected and analyzed.

“Professor Hauser alerted me that he was concerned about the nature of the data, and suggested that there were problems with the videotape record of the study,’’ Marcus wrote in an e-mail. “I never actually saw the raw data, just his summaries, so I can’t speak to the exact nature of what went wrong.’’

The investigation also raised questions about two other papers co-authored by Hauser. The journal Proceedings of the Royal Society B published a correction last month to a 2007 study. The correction, published after the British journal was notified of the Harvard investigation, said video records and field notes of one of the co-authors were incomplete. Hauser and a colleague redid the three main experiments and the new findings were the same as in the original paper. …

“This retraction creates a quandary for those of us in the field about whether other results are to be trusted as well, especially since there are other papers currently being reconsidered by other journals as well,’’ Michael Tomasello, co-director of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, said in an e-mail. “If scientists can’t trust published papers, the whole process breaks down.’’ …

In 1995, he [Hauser] was the lead author of a paper in the Proceedings of the National Academy of Sciences that looked at whether cotton-top tamarins are able to recognize themselves in a mirror. Self-recognition was something that set humans and other primates, such as chimpanzees and orangutans, apart from other animals, and no one had shown that monkeys had this ability.

Gordon G. Gallup Jr., a professor of psychology at State University of New York at Albany, questioned the results and requested videotapes that Hauser had made of the experiment.

“When I played the videotapes, there was not a thread of compelling evidence — scientific or otherwise — that any of the tamarins had learned to correctly decipher mirrored information about themselves,’’ Gallup said in an interview.

A quick rundown of what we get from this article:

  • Someone raised a concern about scientific misconduct that led to the Harvard inquiry, which in turn led to the discovery of “evidence of misconduct” in Hauser’s lab.
  • We don’t, however, have an identification of what kind of misconduct is suggested by the evidence (fabrication? falsification? plagiarism? other serious deviations from accepted practices?) or of who exactly committed it (Hauser or one of the other people in his lab).
  • At least one paper has been retracted because “the data do not support the reported findings”.
  • However, we don’t know the precise issue with the data here — e.g., whether the reported findings were bolstered by reported data that turned out to be fabricated or falsified (and are thus no longer included in “the data”).
  • Apparently, the editor of the journal that published the retracted paper doesn’t know the precise issue with the data either, and found the situation unusual enough to merit comment.
  • Other papers from the Hauser group may be under investigation for similar reasons at this point, and other researchers in the field seem to be nervous about those papers and their reliability in light of the ongoing inquiry and the retraction of the paper in Cognition.

There’s already been lots of good commentary on what might be going on with the Hauser case. (I say “might” because there are many facts still not in evidence to those of us not actually on the Harvard inquiry panel. As such, I think it’s necessary to refrain from drawing conclusions not supported by the facts that are in evidence.)

John Hawks situates the Hauser case in terms of the problem of subjective data.

Melody has a nice discussion of the political context of getting research submitted to journals, approved by peer reviewers, and anointed as knowledge.

David Dobbs wonders whether the effects of the Hauser case (and of the publicity it’s getting) will mean backing off from overly strong conclusions drawn from subjective data, or backing off too far from a “hot” scientific field that may still have a bead on some important phenomena in our world.

DrugMonkey critiques the Boston Globe reporting and reminds us that failure to replicate a finding is not evidence of scientific misconduct or fraud. That’s a hugely important point, and one that bears repeating. Repeatedly.

This is the kind of territory where we start to notice common misunderstandings about how science works. It’s usually not the case that we can cut nature at the joints along nicely dotted lines that indicate just where those cuts should be. Collecting reliable data and objectively interpreting that data is hard work. Sometimes as we go, we learn more about better conditions for collecting reliable data, or better procedures for interpreting the data without letting our cognitive biases do the driving. And sometimes, a data set we took to be reliable and representative of the phenomenon we’re trying to understand just isn’t.

That’s part of why scientific conclusions are always tentative. Scientists expect to update their current conclusions in the light of new results down the road — and in the light of our awareness that some of our old results just weren’t as solid or reproducible as we took them to be. It’s good to be sure your results are reproducible enough before you announce a finding to your scientific peers, but to be absolutely certain of total reproducibility, you have to solve the problem of induction, which isn’t terribly practical.

Honest scientific work can lead to incorrect conclusions, either because that honest work yielded wonky data from which to draw conclusions, or because good data can still be consistent with incorrect conclusions.

And, there’s a similar kind of disconnect we should watch out for. For the “corrected” 2007 paper in Proceedings of the Royal Society B, the Boston Globe article reports that videotapes and field notes (the sources of the data to support the reported conclusions) were “incomplete”. But, Hauser and a colleague redid the experiments and found data that supported the conclusions reported in this paper. One might think that as long as reported results are reproducible, they’re necessarily sufficiently ethical and scientifically sound and all that good stuff. That’s not how scientific knowledge-building works. The rules of the game are that you lay your data-cards on the table and base your findings on those data. Chancing upon an answer that turns out to be right but isn’t supported by the data you actually have doesn’t count, nor does having a really strong hunch that turns out to be right. In the scientific realm, empirical data is our basis for knowing what we know about the phenomena. Thus, doing the experiments over in the face of insufficient data is not “playing it safe” so much as “doing the job you were supposed to have done in the first place”.

Now, given the relative paucity of facts in this particular case, I find myself interested in a more general question: What are the ethical duties of a PI who discovers that he has published a paper whose findings are not, in fact, supported by the data?

It seems reasonable that at least one of his or her duties involves correcting the scientific literature.

This could involve retracting the paper, in essence saying, “Actually, we can’t conclude this based on the data we have. Our bad!”

It could also involve correcting the paper, saying, “We couldn’t conclude this based on the data we have; instead, we should conclude this other thing,” or, “We couldn’t conclude this based on the data we originally reported, but we’ve gone and done more experiments (or have repeated the experiments we described), obtained this data, and are now confident that on the basis of these data, the conclusion is well-supported.”

If faulty data were reported, I would think that the retraction or correction should probably explain how the data were faulty — what’s wrong with them? If the problem had its source in an honest mistake, it might also be valuable to identify that honest mistake so other researchers could avoid it themselves. (Surely this would be a kindness; is it also a duty?)

Beyond correcting the scientific literature, does the PI in this situation have other relevant duties?

Would these involve ratcheting up the scrutiny of data within the lab group in advance of future papers submitted for publication? Taking the skepticism of other researchers in the field more seriously and working that much harder to build a compelling case for conclusions from the data? (Or, perhaps, working hard to identify the ways that the data might argue against the expected conclusion?) Making serious efforts to eliminate as much subjectivity from the data as possible?

Assuming the PI hasn’t fabricated or falsified the data (and that if someone in the lab group has, that person has been benched, at least for the foreseeable future), what kind of steps ought that PI to take to make things right — not just for the particular problematic paper(s), but for his or her whole research group moving forward and interacting with other researchers in the field? How can they earn back trust?

In search of accepted practices: the final report on the investigation of Michael Mann (part 2).

When you’re investigating charges that a scientist has seriously deviated from accepted practices for proposing, conducting, or reporting research, how do you establish what the accepted practices are? In the wake of ClimateGate, this was the task facing the Investigatory Committee at Penn State University investigating the allegation (which the earlier Inquiry Committee deemed worthy of an investigation) that Dr. Michael E. Mann “engage[d] in, or participate[d] in, directly or indirectly, … actions that seriously deviated from accepted practices within the academic community for proposing, conducting, or reporting research or other scholarly activities”.

One strategy you might pursue is asking the members of a relevant scientific or academic community what practices they accept. In the last post, we looked at what the Investigatory Committee learned from its interviews about this question with Dr. Mann himself and with Dr. William Easterling, Dean, College of Earth and Mineral Sciences, The Pennsylvania State University. In this post, we turn to the committee’s interviews with three climate scientists from other institutions, none of whom had collaborated with Dr. Mann, and at least one of whom has been very vocal about his disagreements with Dr. Mann’s scientific conclusions.

Continue reading

Ethics case study: science goes to the dogs.

I want to apologize for the infrequency of my posting lately. Much of it can be laid at the feet of end-of-term grading, although today I’ve been occupied with a meeting of scientists at different career stages to which I was invited to speak about some topics I discuss here. (More about that later.) June will have more substantive ethics-y posts, honest!

Indeed, to tide you over, I want to ask for your responses to a case study I wrote for the final exam for my “Ethics in Science” class.

First, the case:

Continue reading

Ask Dr. Free-Ride: Ethically, which field of science is the worst?

A reader writes:

I was in a PhD program in materials science, in a group that did biomedical research (biomaterials end of the field), and was appalled at the level of misconduct I saw. Later, I entered an MD program. I witnessed some of the ugliest effects of ambition in the lab there.

Do you think biomedical research is somehow “ethically worse” than other fields?

I’ve always wanted to compare measurable instances of unethical behavior across different fields. As an undergraduate I remember never hearing or seeing anything strange with the folks that worked with metallurgy, and it never seemed to be an issue with my colleagues in these areas in graduate school. Whenever there is trouble it seems to come from the biomedical field. I’d love to see you write about that.

Thank you for doing what you do. Since that time I have so many regrets; your blog keeps me sane.

First, I must thank this reader for the kind words. I am thrilled (although still a bit bewildered) that what I write here is of interest and use to others, and if I can contribute to someone’s sanity while I’m thinking out loud (or on the screen, as the case may be), then I feel like this whole “blogging” thing is worthwhile.

Next, on the question of whether biomedical research is somehow “ethically worse” than research in other areas of science, the short answer is: I don’t know.

Certainly there are some high profile fraudsters — and scientists whose misbehavior, while falling short of official definitions of misconduct, also fell well short of generally accepted ethical standards — in the biomedical sciences. I’ve blogged about the shenanigans of biologists, stem cell researchers, geneticists, cancer researchers, researchers studying the role of hormones in aging, researchers studying immunosuppression, anesthesiologists, and biochemists.

But the biomedical sciences haven’t cornered the market on ethical lapses, as we’ve seen in discussions of mechanical engineers, nuclear engineers, physicists, organic chemists, paleontologists, and government geologists.

There are, seemingly, bad actors to be found in every scientific field. Of course, it is reasonable to assume that there are also plenty of honest and careful scientists in every scientific field. Maybe the list of well-publicized bad actors in biomedical research is longer, but given the large number of biomedical researchers compared to the number of researchers in all scientific fields (and also the extent to which the public might regard biomedical research as more relevant to their lives than, say, esoteric questions in organic synthesis), is it disproportionately long?

Again, that’s hard to gauge.

However, my correspondent’s broad question strikes me as raising a number of related empirical questions that it would be useful to try to answer:

Continue reading

Common ground and deeply held differences: a reply to Bruins for Animals.

In a post last month, I noted that not all (maybe even not many) supporters of animal rights are violent extremists, and that Bruins for Animals is a group committed to the animal rights position that was happy to take a public stand against the use of violence and intimidation to further the cause of animal liberation.
On Wednesday, Kristy Anderson (the co-founder of Bruins for Animals), Ashley Smith (the president), and Jill Ryther (the group’s advisor) posted a critical response to my post. In the spirit of continuing dialogue, I’d like to respond to that response.
They write:

AR activists can rightly accept praise and credit for encouraging the two sides to come together in what was an unprecedented public and civil dialogue. However, one glaring and rather twisted irony too often overlooked is the fact that those very same participants who speak against aggressive campaigns against the animal experimentation industry and who are quick to praise AR advocates’ stance on nonviolence are themselves engaged in (or are supporters of) violence and intimidation towards sentient beings on a daily basis.

Continue reading

What’s the point of peer review?

Once again, I’m going to “get meta” on that recent paper on blogs as a channel of scientific communication I mentioned in my last post. Here, the larger question I’d like to consider is how peer review — the back and forth between authors and reviewers, mediated (and perhaps even refereed) by journal editors — does, could, and perhaps should play out.

Prefacing his post about the paper, Bora writes:

First, let me get the Conflict Of Interest out of the way. I am on the Editorial Board of the Journal of Science Communication. I helped the journal find reviewers for this particular manuscript. And I have reviewed it myself. Wanting to see this journal be the best it can be, I was somewhat dismayed that the paper was published despite not being revised in any way that reflects a response to any of my criticisms I voiced in my review.

Bora’s post, in other words, drew heavily on comments he wrote for the author of the paper to consider (and, presumably, to take into account in her revision of the manuscript) before it was published.

Since, as it turns out, the published version of the paper did not include revisions addressing Bora’s criticisms, Bora went ahead and made those criticisms part of the (now public) discussion of the published paper. He still endorses those criticisms, so he chooses to share them with the larger audience the paper has now that it has been published.

Continue reading