Anesthesiology and addiction.

There’s an interesting story on The New Republic website at the moment, “Going Under” by Jason Zengerle, that relates the sad story of a young anesthesiologist’s descent into addiction. What I find interesting about it is the larger questions it raises about why this particular anesthesiologist’s story is not so unusual. Indeed, the article offers an:
Observation: Anesthesiologists seem to suffer from addiction in greater numbers than physicians in other specialties.
And, it lays out
Three hypotheses as to why this might be so:

Continue reading

Bad cites. Bad science?

Do scientists see themselves, like Isaac Newton, building new knowledge by standing on the shoulders of giants? Or are they most interested in securing their own position in the scientific conversation by stepping on the feet, backs, and heads of other scientists in their community? Indeed, are some of them willfully ignorant about the extent to which their knowledge is built on someone else’s foundations?
That’s a question raised in a post from November 25, 2008 on The Scientist NewsBlog. The post examines objections raised by a number of scientists to a recent article in the journal Cell:

Continue reading

Challenges of placebo-controlled trials.

Back in November, at the Philosophy of Science Association meeting in Pittsburgh, I heard a really interesting talk by Jeremy Howick of the Centre for Evidence-Based Medicine at Oxford University about the challenges of double-blind trials in medical research. I’m not going to reconstruct his talk here (since it’s his research, not mine), but I wanted to give him the credit for bringing some tantalizing details to my attention before I share them with you here.

Continue reading

Scientific literacy: a comment on Revere’s rant.

Over at Effect Measure, Revere takes issue with a science educator’s hand-wringing over what science students (and scientists) don’t know. In a piece at The Scientist, James Williams (the science educator in question) writes:

Graduates, from a range of science disciplines and from a variety of universities in Britain and around the world, have a poor grasp of the meaning of simple terms and are unable to provide appropriate definitions of key scientific terminology. So how can these hopeful young trainees possibly teach science to children so that they become scientifically literate? How will school-kids learn to distinguish the questions and problems that science can answer from those that science cannot and, more importantly, the difference between science and pseudoscience?

Revere responds:

Continue reading

Book review: Autism’s False Prophets.


Paul A. Offit, M.D., Autism’s False Prophets: Bad Science, Risky Medicine, and the Search for a Cure. Columbia University Press, 2008.
Autism’s False Prophets: Bad Science, Risky Medicine, and the Search for a Cure examines the ways that uncertainties about autism’s causes have played out in the spheres of medical treatment, liability lawsuits, political hearings, and media coverage. Offit’s introduction describes the lay of the land in 1916, as polio epidemics raged. That landscape, with its public fear and its willingness to pursue strange, expensive, and dangerous treatments, evokes a strong parallel to the current public mood about autism. It also evokes the hope that our current state is a “before” that (like polio’s “before”) will be followed by an “after” where sanity prevails about autism’s causes and treatments.

Continue reading

The Hellinga retractions (part 2): trust, accountability, collaborations, and training relationships.

Back in June, I wrote a post examining the Hellinga retractions. That post, which drew upon the Chemical & Engineering News article by Celia Henry Arnaud (May 5, 2008) [1], focused on the ways scientists engage with each other’s work in the published literature, and how they engage with each other more directly in trying to build on this published work. This kind of engagement is where you’re most likely to see one group of scientists reproduce the results of another — or to see their attempts to reproduce these results fail. Given that reproducibility of results is part of what supposedly underwrites the goodness of scientific knowledge, the ways scientists deal with failed attempts to reproduce results have great significance for the credibility of science.

Speaking of credibility, in that post I promised you all (and especially Abi) that there would be a part 2, drawing on the Nature news feature by Erika Check Hayden (May 15, 2008) [2]. Here it is.

In this post, I shift the focus to scientists’ relationships within a research group (rather than across research groups and through the scientific literature). In research groups in academic settings, questions of trust and accountability are complicated by differentials in experience and power (especially between graduate students and principal investigators). Academic researchers are not just in the business of producing scientific results, but also of producing new scientists. Within training relationships, who is making the crucial scientific decisions, and on the basis of what information?

The central relationship in this story is that between Homme W. Hellinga, professor of biochemistry at Duke University, and graduate student Mary Dwyer.

Continue reading

The Monty Hall problem and the nature of scientific discourse.

There’s a neat article [1] in the September-October 2008 issue of American Scientist (although sadly, this particular article seems not to be online) in which Brian Hayes discusses the Monty Hall problem and people’s strong resistance to the official solution to it.
Now, folks like Jason have discussed the actual puzzle about probabilities in great detail (on numerous occasions). It’s a cool problem, I believe the official solution, and I’m not personally inclined to raise skeptical doubts about it. What I really like about Hayes’s article is how he connects it to the larger ongoing discussion in which scientists engage:
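One of the nice things about the official solution is that you don’t have to take anyone’s word for it; you can check it empirically. Here’s a quick simulation sketch (the function names and structure are mine, not from Hayes’s article) comparing the “stay” and “switch” strategies:

```python
import random

def play(switch: bool) -> bool:
    """Play one round of Monty Hall; return True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Monty opens a door that hides a goat and isn't the player's pick.
    monty = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != monty)
    return pick == car

trials = 100_000
stay_wins = sum(play(switch=False) for _ in range(trials)) / trials
switch_wins = sum(play(switch=True) for _ in range(trials)) / trials
print(f"stay: {stay_wins:.3f}  switch: {switch_wins:.3f}")
# Staying wins about 1/3 of the time; switching wins about 2/3.
```

Running this, staying converges to a win rate of about 1/3 and switching to about 2/3, which is exactly what the official solution predicts — and exactly what so many people find so hard to believe.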

Continue reading

Objectivity and other people.

As a follow-up to my last post, it looks like I should offer a more detailed explanation of why exactly scientific activity is a group activity — not simply as a matter of convenience, but as a matter of necessity. Helen E. Longino has already made this case very persuasively in her book Science as Social Knowledge (specifically the chapter called “Values and Objectivity”), so I’m going to use this post to give a sketch of her argument.

The upshot of the argument is that objective knowledge requires the involvement of other people in building it. All by yourself, there is no way to move beyond subjective knowledge.
First, what do we mean by “objective”?

Continue reading

Peer review and science.

Chad Orzel takes a commenter to task for fetishizing peer review:

Saying that only peer-reviewed articles (or peer-reviewable articles) count as science only reinforces the already pervasive notion that science is something beyond the reach of “normal” people. In essence, it’s saying that only scientists can do science, and that science is the exclusive province of geeks and nerds.

That attitude is, I think, actively harmful to our society. It’s part of why we have a hard time getting students to study math and science, and finding people to teach math and science. We shouldn’t be restricting science to refereed journals, we should be trying to spread it as widely as possible.

Peer review and refereed journals are a good check on science, but they do not define the essence of science. Science is, at its core, a matter of attitude and procedure. The essence of science is looking at the world and saying “Huh. I wonder why that happens?” And then taking a systematic approach to figuring it out.

I see what Chad is saying — and to the extent that science can be said to have an “essence,” I think he’s hit on a nice way to describe it. But I’m going to speak up for peer review here.

Continue reading

Seeing is believing.

Blogging has been a bit light lately, in part because I was persuaded to teach half of a graduate seminar during the summer session. The first half of the seminar looked at philosophical approaches to epistemology (basically, a set of issues around what counts as knowledge and what could count as reasonable ways to build knowledge). The second half, which I am teaching, shifts the focus to what scientists seem to be doing when they build knowledge (or knowledge claims, or theories, or tentative findings).
In the course of our reading for this week, I came upon a couple of passages in a chapter by Karin Knorr Cetina [1] that I found really striking:

Continue reading