This week at Bloggingheads.tv, PalMD and I have a chat about science, ethics, and alternative medicine. Plus, we have a little disagreement about what constitutes paternalism.
Go watch!
Animal research, violent attacks, and the public’s right to know.
An article in the Wall Street Journal notes the collision between researchers’ interests in personal safety and the public’s right to know how its money is being spent — specifically, when that money funds research that involves animals:
Mentoring ethics and authorship ethics.
One of my correspondents told me about a situation that raised some interesting questions about both proper attribution of authorship in scientific papers and ethical interactions between mentor and mentee in a scientific training relationship. With my correspondent’s permission, I’m sharing the case with you.
A graduate student, in chatting with a colleague in another lab, happened upon an idea for an experimental side project to do with that colleague. Because the side project fell well outside the research agenda of the student’s own research group, he first asked his advisor whether it was OK for him to work on it. The advisor was reluctant to allow the student to work on the project, but agreed to give him a relatively short window of time (on the order of weeks, not months) to pursue it and see whether he got any results.
Bad cites. Bad science?
Do scientists see themselves, like Isaac Newton, building new knowledge by standing on the shoulders of giants? Or are they most interested in securing their own position in the scientific conversation by stepping on the feet, backs, and heads of other scientists in their community? Indeed, are some of them willfully ignorant about the extent to which their knowledge is built on someone else’s foundations?
That’s a question raised in a post from November 25, 2008 on The Scientist NewsBlog. The post examines objections raised by a number of scientists to a recent article in the journal Cell:
The ethics of a low-content retraction.
Over at DrugMonkey, PhysioProf notes a recent retraction of an article from the Journal of Neuroscience. What’s interesting about this case is that the authors retract the whole article without any explanation for the retraction. As PhysioProf writes:
There is absolutely no mention of why the paper is being retracted. People who have relied on the retracted manuscript to develop their own research conceptually and/or methodologically have been given no guidance whatsoever on what aspects of the manuscript are considered unreliable, and/or why.
So, asks PhysioProf, have these authors behaved ethically?
I think that in order to get clear on what obligations the authors have to the scientific community, it may be useful to start by asking what this kind of retraction communicates to that community.
A drug company, a psychiatrist, and an inexplicable failure to disclose conflicts of interest.
Charles B. Nemeroff, M.D., Ph.D., is a psychiatrist at Emory University alleged by congressional investigators to have failed to report a third of the $2.8 million or more that he received in consulting fees from pharmaceutical companies whose drugs he was studying.
Why would congressional investigators care? For one thing, during the period of time when Nemeroff received these consulting fees, he also received $3.9 million from NIH to study the efficacy of five GlaxoSmithKline drugs in the treatment of depression. When the government ponies up money for scientific research, it has an interest in ensuring that the research will produce reliable knowledge.
GlaxoSmithKline, of course, has an interest in funding studies that show that its drugs work really well.
Injustice, misbehavior, and the scientist’s social identity (part 3).
Let’s wrap up our discussion of the Martinson et al. paper, “Scientists’ Perceptions of Organizational Justice and Self-Reported Misbehaviors”. [1] You’ll recall that the research in this paper examined three hypotheses about academic scientists:
Hypothesis 1: The greater the perceived distributive injustice in science, the greater the likelihood of a scientist engaging in misbehavior. (51)
Hypothesis 2: The greater the perceived procedural injustice in science, the greater the likelihood of a scientist engaging in misbehavior. (52)
Hypothesis 3: Perceptions of injustice are more strongly associated with misbehavior among those for whom the injustice represents a more serious threat to social identity (e.g., early-career scientists, female scientists in traditionally male fields). (52)
We’ve already looked at the methodological details of the study. We’ve also examined the findings Martinson et al. reported. (In short, they found that early-career and mid-career scientists reported more procedural injustice than distributive injustice; that early-career scientists who perceived high levels of distributive injustice were somewhat more likely to report engaging in misbehavior than those who did not; that misbehavior was most likely from mid-career scientists with high intrinsic drive who perceived a high level of procedural injustice; and that female scientists were less likely to engage in misbehavior than male scientists.)
In this post, we’re going to consider what these findings mean, and what larger conclusions can be drawn from them.
Injustice, misbehavior, and the scientist’s social identity (part 2).
Last week, we started digging into a paper by Brian C. Martinson, Melissa S. Anderson, A. Lauren Crain, and Raymond De Vries, “Scientists’ Perceptions of Organizational Justice and Self-Reported Misbehaviors”. [1] The study reported in the paper was aimed at exploring the connections between academic scientists’ perceptions of injustice (both distributive and procedural) and those scientists’ engagement in scientific misbehavior. In particular, the researchers were interested in whether differences would emerge between scientists with fragile social identities within the tribe of academic science and those with more secure social identities. At the outset, the researchers expected that scientists at early career stages and female scientists in male-dominated fields would be the most likely to have fragile social identities. They hypothesized that perceptions of injustice would increase the likelihood of misbehaving, and that this link would be even stronger among early-career scientists and female scientists.
We started with a post walking through the methodology of the study. In this post, we’ll examine the results Martinson et al. reported. Part 3 will then consider what conclusions we might draw from these findings.
First, how much injustice did the study participants report?
Injustice, misbehavior, and the scientist’s social identity (part 1).
Regular readers know that I frequently blog about cases of scientific misconduct or misbehavior. A lot of the time, discussions about problematic scientific behavior are framed in terms of interactions between individual scientists — and in particular, of what an individual scientist thinks she does or does not owe another individual scientist in terms of honesty and fairness.
In fact, the scientists in the situations we discuss might also conceive of themselves as responding not to other individuals so much as to “the system”. Unlike a flesh-and-blood colleague, “the system” is faceless, impersonal. “The system” is what you have to work within — or around.
Could scientists feel the same sort of loyalty or accountability to “the system” as they do to other individual scientists? How do scientists’ views of the fairness or unfairness of “the system” impact how they will behave toward it?
It is this last question that is the focus of a piece of research reported by Brian C. Martinson, Melissa S. Anderson, A. Lauren Crain, and Raymond De Vries in the paper “Scientists’ Perceptions of Organizational Justice and Self-Reported Misbehaviors”. [1] Focusing specifically on the world of the academic scientist, they ask: if you feel like the system won’t give you a fair break, is your behavior within it more likely to drift into misbehavior? Their findings suggest that the answer to this question is “yes”:
Our findings indicate that when scientists believe they are being treated unfairly they are more likely to behave in ways that compromise the integrity of science. Perceived violations of distributive and procedural justice were positively associated with self-reports of misbehavior among scientists. (51)
Against over-specialization.
In the 12 September 2008 issue of Science, there is a brief article titled “Do We Need ‘Synthetic Bioethics’?” [1]. The authors, Hastings Center ethicists Erik Parens, Josephine Johnston, and Jacob Moses, answer: no.