Does having too much fun undermine your credibility?

Over at Crooked Timber, John Quiggin lays into climate scientist Richard Lindzen. His post begins with reasons one might be inclined to take Lindzen’s views seriously:

Unlike nearly all “sceptics”, he’s a real climate scientist who has done significant research on climate change, and, also unlike most of them, there’s no evidence that he has a partisan or financial axe to grind.

But then, we find the 2001 Newsweek interview that gives Quiggin reason for pause:

Lindzen clearly relishes the role of naysayer. He’ll even expound on how weakly lung cancer is linked to cigarette smoking. He speaks in full, impeccably logical paragraphs, and he punctuates his measured cadences with thoughtful drags on a cigarette.

And Quiggin’s response:

Anyone who could draw this conclusion in the light of the evidence, and act on it as Lindzen has done, is clearly useless as a source of advice on any issue involving the analysis of statistical evidence.

I don’t want to get into a debate here about climate science (although the neighbors will likely oblige if you ask them nicely), nor even about the proper analysis of statistical evidence. Instead, I’d like to consider whether enjoying being a contrarian (or a consensus-supporter, for that matter) is a potential source of bias against which scientists should guard.

Continue reading

Does extra data always get you closer to the truth?

As expected, Derek Lowe has a thoughtful post (with a very interesting discussion going on in the comments) about the latest “Expression of Concern” from the New England Journal of Medicine about the VIGOR Vioxx trial.
To catch you up if you’ve been watching curling rather than following the case: A clinical study of Vioxx was performed, resulting in a manuscript submitted to NEJM by 13 authors (11 academics and two scientists employed by Merck). The study was looking at whether adverse gastrointestinal events were correlated with taking Vioxx. During the course of the study, other events in participants (including cardiovascular events) were also tracked.

The point of contention is that there were three heart attacks among study participants that happened before the official ending date for the study, and were known to the authors before the paper was published in NEJM, but that were left out of the data presented in the published paper. NEJM has identified this as a problem. While not coming out and saying, “Looky here, scumbag pharmaceutical company trimming the data to sell more Vioxx, patient safety be damned!” that’s a conclusion people might be tempted to draw here.
But, as Derek points out, it’s not that simple.

Continue reading

Who’s duping whom?

Today in the Chronicle of Higher Education there’s a piece on Gerald Schatten’s role in the Korean stem cell mess. It’s an interesting piece, written without Dr. Schatten’s participation — he’s keeping quiet while the University of Pittsburgh conducts its investigation of him. (Worth noting, from the article: “Pittsburgh began investigating Mr. Schatten, at his own request, with a six-person panel that first met on December 14.”)

Given Schatten’s non-participation in the article, the portrait of him that emerges turns on the impressions of his friends and acquaintances, past collaborators and competitors. We can only guess at what might have been going on inside Schatten’s mind at crucial points as events unfolded. But perhaps, at least for the purposes of trying to spare other scientists from the professional horrors to which Schatten now finds himself subjected, it would be useful to identify some questions Schatten ought to have asked himself. After all, if we didn’t think we could learn something from experience, what the heck are we doing science for?

Continue reading

Science’s neighborhood watch

The commenters here at ScienceBlogs are da bomb! Just look at the insight they contributed to my previous post on fakery in science. Indeed, let’s use some of that insight to see if we can get a little bit further on the matter of how to discourage scientists from making it up rather than, you know, actually doing good science.
Three main strategies emerged from the comments so far:

  • Make the potential payoff of cheating very low compared to the work involved in getting away with it and the penalty you’ll face if caught (thus, making just doing good science the most cost-effective strategy).
  • Clear out the deadwood in the community of science (who are not cheating to get Nobel prizes but instead to get tenure so they can really slack off).
  • Make academic integrity and intellectual honesty important from the very beginning of scientific training (in college or earlier), so scientists know how to “get the job done” without cheating.

I like all of these, and I think it’s worth considering whether there are useful ways to combine them with one of the fraud-busting strategies mentioned in the previous post, namely, ratting out your collaborator/colleague/underling/boss if you see them breaking the rules. I’m not advocating a McCarthyite witch hunt for fakers, but something more along the lines of a neighborhood watch for the community of science.

Continue reading

Working to do human subjects research right.

Today, some news that makes me smile (and not that bitter, cynical smile): UCSF has announced that it has received full accreditation from the Association for the Accreditation of Human Research Protection Programs (AAHRPP) for its program to protect research participants.
This is a voluntary accreditation — nothing the federal government requires, for example — that undoubtedly required a great deal of work from UCSF investigators and administrators to obtain. (AAHRPP describes the process as including a preliminary self-assessment, followed by appropriate modifications of your institution’s human subjects protection program, preparation of a detailed written application, an on-site evaluation of your program by a team of experts, and review of these materials by the AAHRPP council on accreditation.) Here’s what the UCSF news report has to say about the process:

Continue reading

Is all animal research inhumane?

I received an email from a reader in response to my last post on PETA’s exposé of problems with the treatment of research animals at UNC. The reader pointed me to the website of an organization concerned with the treatment of lab animals in the Research Triangle, www.serat-nc.org. She also wrote the following:

Some people may think that PETA is extreme. However, the true “extreme” is what happens to animals in labs. If the public knew, most would be outraged. But, of course our government hides such things very well. Those researchers who abuse animals in labs (which is ALL researchers, by my definition), cannot do an about turn and go home and not abuse animals or humans at their homes. Animal researchers are abusers, and there is enough research on people who abuse to know that abuse does not occur in isolation. The entire industry must change.

There are a bunch of claims here, some of which I’m going to pretty much leave alone because I don’t have the expertise to evaluate them. Frankly, I don’t know whether even the folks we would all agree are abusing animals in the lab are full-fledged abusers who cannot help but go forth and abuse spouses, children, family pets, neighbors, and such. (I’m not a psychologist or a sociologist, after all.) And, while I’d like to believe that the public would be outraged at unambiguous cases of animal abuse, the public seems not to be outraged by quite a lot of things that I find outrageous.
I would, however, like to consider the claim that ALL researchers who do research with animals are abusing those animals.

Continue reading