From the other end of the pipeline: views of science from Yale’s MB&B entering class of 1991.

There’s an article in the 19 September 2008 issue of Science (“And Then There Was One”) [1] that catches up with many of the 30 men and women who made up the incoming class of 1991 in the molecular biophysics and biochemistry (MB&B) Ph.D. program at Yale University. The article raises lots of interesting questions, including what counts as a successful career in science. (Not surprisingly, it depends on whom you ask.) The whole article is well worth a read no matter what stage of the science career pipeline you’re at (although it’s behind a paywall, so you may have to track it down at your local library).
Because there’s so much going on in the article, rather than try to distill it in a single blog post, I thought I would point out a few thought-provoking comments contained in it:


Prizes for women. Progress for women?

2008 is the tenth year of the L’OrĂ©al-UNESCO For Women in Science awards to remarkable female scientists from around the world. Indeed, our sister-site, ScienceBlogs.de, covered this year’s award ceremony and is celebrating women in science more generally with a For Women in Science blog. (It, like the rest of ScienceBlogs.de, is in German. Just so you know.)
In addition to the global awards, three further scholarships are given to women scientists in Germany. But the only women eligible for these scholarships are women with kids. (The rationale for this is that childcare options in Germany are not as good as they should be for working mothers, so women scientists with kids need special support.)


A drug company, a psychiatrist, and an inexplicable failure to disclose conflicts of interest.

Charles B. Nemeroff, M.D., Ph.D., is a psychiatrist at Emory University alleged by congressional investigators to have failed to report a third of the $2.8 million (or more) he received in consulting fees from pharmaceutical companies whose drugs he was studying.
Why would congressional investigators care? For one thing, during the period of time when Nemeroff received these consulting fees, he also received $3.9 million from NIH to study the efficacy of five GlaxoSmithKline drugs in the treatment of depression. When the government ponies up money for scientific research, it has an interest in ensuring that the research will produce reliable knowledge.
GlaxoSmithKline, of course, has an interest in funding studies that show that its drugs work really well.


Injustice, misbehavior, and the scientist’s social identity (part 3).

Let’s wrap up our discussion on the Martinson et al. paper, “Scientists’ Perceptions of Organizational Justice and Self-Reported Misbehaviors”. [1] You’ll recall that the research in this paper examined three hypotheses about academic scientists:

Hypothesis 1: The greater the perceived distributive injustice in science, the greater the likelihood of a scientist engaging in misbehavior. (51)

Hypothesis 2: The greater the perceived procedural injustice in science, the greater the likelihood of a scientist engaging in misbehavior. (52)

Hypothesis 3: Perceptions of injustice are more strongly associated with misbehavior among those for whom the injustice represents a more serious threat to social identity (e.g., early-career scientists, female scientists in traditionally male fields). (52)

We’ve already looked at the methodological details of the study. We’ve also examined the findings Martinson et al. reported. (In short, they found that early-career and mid-career scientists reported more procedural injustice than distributive injustice; that early-career scientists who perceived high levels of distributive injustice were somewhat more likely to report engaging in misbehavior than those who did not; that misbehavior was most likely from mid-career scientists with high intrinsic drive who perceived a high level of procedural injustice; and that female scientists were less likely to engage in misbehavior than male scientists.)

In this post, we’re going to consider what these findings mean, and what larger conclusions can be drawn from them.


Injustice, misbehavior, and the scientist’s social identity (part 2).

Last week, we started digging into a paper by Brian C. Martinson, Melissa S. Anderson, A. Lauren Crain, and Raymond De Vries, “Scientists’ Perceptions of Organizational Justice and Self-Reported Misbehaviors”. [1] The study reported in the paper was aimed at exploring the connections between academic scientists’ perceptions of injustice (both distributive and procedural) and those scientists’ engagement in scientific misbehavior. In particular, the researchers were interested in whether differences would emerge between scientists with fragile social identities within the tribe of academic science and those with more secure social identities. At the outset, the researchers expected that scientists at early career stages and female scientists in male-dominated fields would be the most likely to have fragile social identities. They hypothesized that perceptions of injustice would increase the likelihood of misbehaving, and that this link would be even stronger among early-career scientists and female scientists.

We started with a post walking through the methodology of the study. In this post, we’ll examine the results Martinson et al. reported. Part 3 will then consider what conclusions we might draw from these findings.

First, how much injustice did the study participants report?


Injustice, misbehavior, and the scientist’s social identity (part 1).

Regular readers know that I frequently blog about cases of scientific misconduct or misbehavior. A lot of times, discussions about problematic scientific behavior are framed in terms of interactions between individual scientists — and in particular, of what an individual scientist thinks she does or does not owe another individual scientist in terms of honesty and fairness.

In fact, the scientists in the situations we discuss might also conceive of themselves as responding not to other individuals so much as to “the system”. Unlike a flesh and blood colleague, “the system” is faceless, impersonal. “The system” is what you have to work within — or around.

Could scientists feel the same sort of loyalty or accountability to “the system” as they do to other individual scientists? How do scientists’ views of the fairness or unfairness of “the system” impact how they will behave toward it?

It is this last question that is the focus of a piece of research reported by Brian C. Martinson, Melissa S. Anderson, A. Lauren Crain, and Raymond De Vries in the paper “Scientists’ Perceptions of Organizational Justice and Self-Reported Misbehaviors”. [1] Focusing specifically on the world of the academic scientist, they ask: if you feel like the system won’t give you a fair break, is your behavior within it more likely to drift into misbehavior? Their findings suggest that the answer to this question is “yes”:

Our findings indicate that when scientists believe they are being treated unfairly they are more likely to behave in ways that compromise the integrity of science. Perceived violations of distributive and procedural justice were positively associated with self-reports of misbehavior among scientists. (51)


How scientists see research ethics: ‘normal misbehavior’ (part 2).

In the last post, we started looking at the results of a 2006 study by Raymond De Vries, Melissa S. Anderson, and Brian C. Martinson [1] in which they deployed focus groups to find out what issues in research ethics scientists themselves find most difficult and worrisome. That post focused on two categories the scientists being studied identified as fraught with difficulty, the meaning of data and the rules of science. In this post, we’ll focus on the other two categories where scientists expressed concerns, life with colleagues and the pressures of production in science. We’ll also look for the take-home message from this study.


How scientists see research ethics: ‘normal misbehavior’ (part 1).

In the U.S., the federal agencies that fund scientific research usually discuss scientific misconduct in terms of the big three of fabrication, falsification, and plagiarism (FFP). These three are the “high crimes” against science, so far over the line as to be shocking to one’s scientific sensibilities.
But there are lots of less extreme ways to cross the line that are still — by scientists’ own lights — harmful to science. Those “normal misbehaviors” emerge in a 2006 study by Raymond De Vries, Melissa S. Anderson, and Brian C. Martinson [1]:

We found that while researchers were aware of the problems of FFP, in their eyes misconduct is generally associated with more mundane, everyday problems in the work environment. These more common problems fall into four categories: the meaning of data, the rules of science, life with colleagues, and the pressures of production in science. (43)

These four categories encompass a lot of terrain on the scientific landscape, from the challenges of building new knowledge about a piece of the world, to the stresses of maintaining properly functioning cooperative relations in a context that rewards individual achievement. As such, I’m breaking up my discussion of this study into two posts. (This one will focus on the first two categories, the meaning of data and the rules of science. Part 2 will focus on life with colleagues and the pressures of production in science.)


The Hellinga retractions (part 2): trust, accountability, collaborations, and training relationships.

Back in June, I wrote a post examining the Hellinga retractions. That post, which drew upon the Chemical & Engineering News article by Celia Henry Arnaud (May 5, 2008) [1], focused on the ways scientists engage with each other’s work in the published literature, and how they engage with each other more directly in trying to build on this published work. This kind of engagement is where you’re most likely to see one group of scientists reproduce the results of another — or to see their attempts to reproduce these results fail. Given that reproducibility of results is part of what supposedly underwrites the goodness of scientific knowledge, the ways scientists deal with failed attempts to reproduce results have great significance for the credibility of science.

Speaking of credibility, in that post I promised you all (and especially Abi) that there would be a part 2, drawing on the Nature news feature by Erika Check Hayden (May 15, 2008) [2]. Here it is.

In this post, I shift the focus to scientists’ relationships within a research group (rather than across research groups and through the scientific literature). In research groups in academic settings, questions of trust and accountability are complicated by differentials in experience and power (especially between graduate students and principal investigators). Academic researchers are not just in the business of producing scientific results, but also of producing new scientists. Within a training relationship, who is making the crucial scientific decisions, and on the basis of what information?

The central relationship in this story is that between Homme W. Hellinga, professor of biochemistry at Duke University, and graduate student Mary Dwyer.
