How scientists see research ethics: ‘normal misbehavior’ (part 2).

In the last post, we started looking at the results of a 2006 study by Raymond De Vries, Melissa S. Anderson, and Brian C. Martinson [1] in which they deployed focus groups to find out what issues in research ethics scientists themselves find most difficult and worrisome. That post focused on two categories the scientists being studied identified as fraught with difficulty, the meaning of data and the rules of science. In this post, we’ll focus on the other two categories where scientists expressed concerns, life with colleagues and the pressures of production in science. We’ll also look for the take-home message from this study.

Continue reading

How scientists see research ethics: ‘normal misbehavior’ (part 1).

In the U.S., the federal agencies that fund scientific research usually discuss scientific misconduct in terms of the big three of fabrication, falsification, and plagiarism (FFP). These three are the “high crimes” against science, so far over the line as to be shocking to one’s scientific sensibilities.
But there are lots of less extreme ways to cross the line that are still — by scientists’ own lights — harmful to science. Those “normal misbehaviors” emerge in a 2006 study by Raymond De Vries, Melissa S. Anderson, and Brian C. Martinson [1]:

We found that while researchers were aware of the problems of FFP, in their eyes misconduct is generally associated with more mundane, everyday problems in the work environment. These more common problems fall into four categories: the meaning of data, the rules of science, life with colleagues, and the pressures of production in science. (43)

These four categories encompass a lot of terrain on the scientific landscape, from the challenges of building new knowledge about a piece of the world, to the stresses of maintaining properly functioning cooperative relations in a context that rewards individual achievement. As such, I’m breaking up my discussion of this study into two posts. (This one will focus on the first two categories, the meaning of data and the rules of science. Part 2 will focus on life with colleagues and the pressures of production in science.)

Continue reading

The Hellinga retractions (part 2): trust, accountability, collaborations, and training relationships.

Back in June, I wrote a post examining the Hellinga retractions. That post, which drew upon the Chemical & Engineering News article by Celia Henry Arnaud (May 5, 2008) [1], focused on the ways scientists engage with each other’s work in the published literature, and how they engage with each other more directly in trying to build on this published work. This kind of engagement is where you’re most likely to see one group of scientists reproduce the results of another — or to see their attempts to reproduce these results fail. Given that reproducibility of results is part of what supposedly underwrites the goodness of scientific knowledge, the ways scientists deal with failed attempts to reproduce results have great significance for the credibility of science.

Speaking of credibility, in that post I promised you all (and especially Abi) that there would be a part 2, drawing on the Nature news feature by Erika Check Hayden (May 15, 2008) [2]. Here it is.

In this post, I shift the focus to scientists’ relationships within a research group (rather than across research groups and through the scientific literature). In research groups in academic settings, questions of trust and accountability are complicated by differentials in experience and power (especially between graduate students and principal investigators). Academic researchers are in the business of producing not just scientific results but also new scientists. Within these training relationships, who is making the crucial scientific decisions, and on the basis of what information?

The central relationship in this story is that between Homme W. Hellinga, professor of biochemistry at Duke University, and graduate student Mary Dwyer.

Continue reading

The kind of thing that makes industry ‘science’ look bad.

In a post last week, I mentioned a set of standards put forward by Carol Henry (a consultant and former vice president for industry performance programs at the American Chemistry Council), who says they would improve the credibility of industry-funded research.
But why does industry-funded research have a credibility problem in the first place? Aren’t industry scientists (or academic scientists whose research is supported by money from industry) first and foremost scientists, committed to the project of building accurate and reliable knowledge about the world? As scientists, aren’t they just as hard-headed and devoted to objectivity — indeed, to truth — as the rest of their professional community?
I have no doubt that many industry (and industry-funded) scientists do take good knowledge-building as their most important job. And this means that some of those who depart from this commitment are making things harder for those scientists whose loyalties to their industry benefactors do not extend to misrepresenting the truth. Plus, of course, they may be misleading policy makers and the public by passing off as reliable scientific knowledge something that is not.
In the article “Tobacco Industry Influence on Science and Scientists in Germany,” [1] Thilo Grüning, Anna B. Gilmore, and Martin McKee draw on internal tobacco industry documents (released in 1998 as part of the settlement of litigation by the state of Minnesota against tobacco companies) to identify the strategies tobacco companies used to influence scientists and to distort science.

Continue reading

Standards for industry-funded research.

In the August 25, 2008 issue of Chemical & Engineering News, there’s an interview with Carol Henry (behind a paywall). Henry is a consultant who used to be vice president for industry performance programs at the American Chemistry Council (ACC). In the course of the interview, Henry laid out a set of standards for doing research that she thinks all scientists should adopt. (Indeed, these are the standards that guided Henry in managing research programs for the California Environmental Protection Agency, the U.S. Department of Energy, the American Petroleum Institute, and ACC.)
Here are Carol Henry’s research standards:

Continue reading

Cell phones, DNA damage, and questionable data.

While other ScienceBlogs bloggers (notably Revere and Orac) post periodically on the state of the scientific evidence with regard to whether cell phones have biological effects on those using them, I’ve mostly followed the discussion from the sidelines. Possibly this is because I’m a tremendous Luddite who got a cell phone under protest (and who uses a phone with maybe three functions — place a call, receive a call, and store phone numbers). Possibly it’s because in my estimation the biggest health risk posed by cell phones is that they shift the attention of the maniac driver shifting across four lanes of freeway without signaling.
What has me jumping into the fray now is a news report in Science about fraud charges that have been raised against a group of scientists whose papers offered evidence of the potential for biological harm from cell phone use. From the Science article:

Continue reading

Clinical trials — or not — of chelation therapy.

Back in July, Science ran an interesting news article about an on again, off again clinical trial of chelation therapy in the treatment of autistic children. I found the story fascinating because it highlights some of the challenges in setting up ethical research with human subjects — not to mention some of the challenges inherent in trying to help humans to make good decisions grounded in the best available scientific knowledge.

From the Science article:

Continue reading

Data paparazzi.

In a comment on another post, Blatnoi asks for my take on a recent news item in Nature:

An Italian-led research group’s closely held data have been outed by paparazzi physicists, who photographed conference slides and then used the data in their own publications.
For weeks, the physics community has been buzzing with the latest results on ‘dark matter’ from a European satellite mission known as PAMELA (Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics). Team members have talked about their latest results at several recent conferences … but beyond a quick flash of a slide, the collaboration has not shared the data. Many high-profile journals, including Nature, have strict rules about authors publicizing data before publication.
It now seems that some physicists have taken matters into their own hands. At least two papers recently appeared on the preprint server arXiv.org showing representations of PAMELA’s latest findings (M. Cirelli et al. http://arxiv.org/abs/0808.3867; 2008, and L. Bergstrom et al. http://arxiv.org/abs/0808.3725; 2008). Both have recreated data from photos taken of a PAMELA presentation on 20 August at the Identification of Dark Matter conference in Stockholm, Sweden.

I’d say this is a situation that bears closer examination.

Continue reading

Medical research with ‘legacy samples’ raises ethical questions.

In the July 18, 2008 issue of Science, I noticed a news item, “Old Samples Trip Up Tokyo Team”:

A University of Tokyo team has retracted a published research paper because it apparently failed to obtain informed consent from tissue donors or approval from an institutional review board (IRB). Other papers by the same group are under investigation by the university. Observers believe problems stem in part from guidelines that don’t sufficiently explain how to handle samples collected before Japan established informed consent procedures.

The samples in question were “legacy samples”, samples that had been previously collected for other research projects. The fact that these samples were collected before the institution of the rules for research with human subjects to which Japanese researchers are now bound complicates the ethical considerations for the researchers.

Continue reading

Aetogate aftermath: paleontologists discuss the norms of their discipline.

Finally, here is the long-awaited fourth part in my three-part series examining the Society for Vertebrate Paleontology Ethics Education Committee response to the allegations of scientific misconduct against Spencer Lucas and co-workers. Part 3 was a detailed examination of the “best practices” document (PDF) issued by this committee. In this post, I make a brief foray into the conversations paleontologists have been having online about their understanding of the accepted practices in their field.
As these conversations are ongoing (and some of them are happening on listservs to which I do not subscribe), what I present here is just a snapshot of how some members of the professional community of paleontologists (and those in related fields) describe the working rules of their professional activities and interactions. What’s striking to me, though, is that these scientists are responding to the Aetogate controversy by having these conversations.

Continue reading