In response to my first entry on Steve Fuller’s essay on Chris Mooney’s book, The Republican War on Science, Bill Hooker posted this incisive comment:
Fuller seems to be suggesting that there is no good way to determine which scientists in the debate are most credible — it all comes down to deciding who to trust.
I think this misses an important piece of how scientific disputes are actually adjudicated. In the end, what makes a side in a scientific debate credible is not a matter of institutional power or commanding personality. Rather, it comes down to methodology and evidence.
So, in other words, deciding who to trust means being able to evaluate the data for yourself, which — according to the pullquote above — Mooney suggests a journalist should not do. (Right here would be a good place to admit I haven’t read TWoS.)
Don’t get me wrong, I’ve been reading Chris Mooney about as long as he’s had a blog, and I have a lot of respect for him. He’s a welcome exception to the rule that science writers don’t understand the science. I think, however, that in this case he’s wrong, both about what he should do and what he does do. It seems clear to me that he does understand the science, and does evaluate the facts for himself. I don’t, frankly, see how one can approach a scientific controversy by any other method than reference to the data. To me, “what makes it science is the epistemology” means RTFdata.
This is a question that bears closer examination: If I’m not able to directly evaluate the data, does that mean I have no good way to evaluate the credibility of the scientist pointing to the data to make a claim?
You’ll recall that I myself have pointed out the dangers — to scientists as well as laypersons — of going farther than the tether of your expertise stretches. So maybe the prospects of judging scientific credibility for oneself are even worse — maybe only chemists can judge the credibility of chemists (and only the organic chemists will be able to assess the credibility of claims in organic chemistry, while they’ll have to leave it to the physical chemists to assess the credibility of claims in physical chemistry, etc.).
This would pretty much put all of us in a position where, outside our own narrow area of expertise, we either have to commit to agnosticism or take someone else’s word for things. And this in turn might mean that science journalism becomes a mere matter of publishing press releases — or that good science journalism demands extensive scientific training (and that we probably need a separate science reporter for each specialized area of science to be covered).
But I don’t think the prospects for evaluating scientific credibility are quite that bad.
Scientific knowledge is built on empirical data, and the details of the data (what sort of data is relevant to the question at hand, what kind of data can we actually collect, what techniques are better or worse for collecting the data, how do we distinguish data from noise, etc.) can vary quite a lot in different scientific disciplines, and in different areas of research within those disciplines. However, there are commonalities in the basic patterns of reasoning that scientists in all fields use to compare their theories with their data. Some of these patterns of reasoning may be rather sophisticated, perhaps even non-intuitive. (I’m guessing certain kinds of probabilistic or statistical reasoning might fit this category.) But others will be the patterns of reasoning that get highlighted when “the scientific method” is taught.
Perhaps what Chris is doing is not so much evaluating the scientific facts as evaluating the process the scientists are describing by which they got to those facts.
This is why, in my post about how far a scientist’s expertise extends, I claimed that I (as a trained scientist) am qualified
to evaluate scientific arguments in areas of science other than my own for logical structure and persuasiveness (though I must be careful to acknowledge that there may be premises of these arguments — pieces of theory or factual claims from observations or experiments that I’m not familiar with — that I’m not qualified to evaluate).
In other words, even if I can’t evaluate someone else’s raw data to tell you directly what it means, I can evaluate the way that data is used to support or refute claims. I can recognize logical fallacies and distinguish them from instances of valid reasoning. Moreover, this is the kind of thing that a non-scientist who is good at critical thinking could evaluate as well.
One way to judge scientific credibility (or lack thereof) is to scope out the logical structure of the arguments a scientist is putting up for consideration. It is possible to judge whether arguments have the right kind of relationship to the empirical data without wallowing in that data oneself. Credible scientists can lay out:
- Here’s my hypothesis.
- Here’s what you’d expect to observe if the hypothesis is true. Here, on the other hand, is what you’d expect to observe if the hypothesis is false.
- Here’s what we actually observed (and here are the steps we took to control the other variables).
- Here’s what we can say (and with what degree of certainty) about the hypothesis in the light of these results.
- Here’s the next study we’d like to do to be even more sure.
And, not only will the logical connections between the data and what is inferred from them look plausible to the science writer who is hip to the scientific method, but they ought to look plausible to other scientists — even to scientists who might prefer different hypotheses, or different experimental approaches. If what makes something good science is its epistemology — the process by which data are used to generate and/or support knowledge claims — then even scientists who may disagree with those knowledge claims should still be able to recognize the patterns of reasoning involved as properly scientific.
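To make that argument structure concrete, here’s a minimal sketch (a toy, simulated example in Python, not drawn from any real study; the particular test and numbers are just stand-ins) of how the five steps above might map onto something a reader can check:

```python
# A toy illustration of the argument structure described above, not a real study.
# Hypothesis: the treatment raises the measured response relative to control.
# Prediction if true: the treatment mean exceeds the control mean by more than
#   chance alone would produce; if false: both samples look like draws from the
#   same population.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# "Here's what we actually observed" -- simulated measurements, with sample size
# and measurement procedure held fixed across groups (the "controls").
control = rng.normal(loc=10.0, scale=2.0, size=30)
treatment = rng.normal(loc=11.5, scale=2.0, size=30)

# "Here's what we can say, and with what degree of certainty": a two-sample t-test
# quantifies how surprising the observed difference would be if the hypothesis were false.
t_stat, p_value = stats.ttest_ind(treatment, control)

print(f"mean difference: {treatment.mean() - control.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# "Here's the next study": a larger sample or an independent replication would
# tighten the estimate -- the conclusion is held tentatively, pending more data.
```

The specifics (a t-test, normally distributed measurements) are placeholders; the point is that each step of the reasoning is laid out where anyone, in any field, can inspect it.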
If the patterns of reasoning are properly scientific, why wouldn’t all the scientists agree about the knowledge claims themselves? Perhaps they’re taking different sets of data into account, or they disagree about certain of the assumptions made in framing the question. The important thing to notice here is that scientists can disagree with each other about experimental results and scientific conclusions without thinking that the other guy is a bad scientist. The hope is that, in the fullness of time, more data and dialogue will resolve the disagreements. But good, smart, honest scientists can disagree.
This is not to say that there aren’t folks in lab coats whose thinking is sloppy. Indeed, catching sloppy thinking is the kind of thing you’d hope a good general understanding of science would help someone (like a science journalist) to do. At that point, of course, it’s good to have backup — other scientists who can give you their read on the pattern of reasoning, for example. And, to the extent that a scientist — especially one talking “on the record” about the science (whether to a reporter or to other scientists) — displays sloppy thinking, that would tend to undermine his or her credibility.
There are other kinds of evaluation you can probably make of a scientist’s credibility without being an expert in their field. Examining a scientific paper to see if the sources cited make the claims that they are purported to by the paper citing them is one way to assess credibility. Determining whether a scientist might be biased by an employer or a funding source may be harder, but I suspect many scientists are aware of these concerns and will go the extra mile to establish their credibility: they take seriously the possibility that they are seeing what they want to see, and they test their hypotheses stringently enough to answer likely objections.
It’s harder to get a good read on the credibility of scientists who present evidence and interpretations with the right sort of logical structure but who have, in fact, fabricated or falsified that evidence. Being wary of results that seem too good to be true is probably a good strategy. Also, once a scientist is caught in such misconduct, it’s entirely appropriate not to trust another word that comes from his or her mouth.
One of the things fans of science have tended to like is that it’s a route to knowledge that is, at least potentially, open to any of us. It draws on empirical data we can get at through our senses and on our powers of rational thinking. As it happens, the empirical data have gotten pretty complicated, and there’s usually a good bit of technology between the thing in the world we’re trying to observe and the sense organs we’re using to observe it. However, those powers of rational thinking are still at the center of how the scientific knowledge gets built. Those powers need careful cultivation, but to at least a first approximation they may be enough to help us tell the people doing good science from the cranks.
I’d be very interested, though, if Chris could weigh in on this and spell out what he sees as the ways that science writers ought to be judging the credibility of their sources.
This is really important and really well put. We can evaluate the logic and the methodology and whether the data warrant the conclusion. But we cannot know if the actual data are “right” until the experiments are replicated or followed up on.
This is why I run two blogs: one for the area of expertise where I can make a fairly good judgment of the data themselves, and the other blog for stuff outside of my narrow area of expertise where I can evaluate the logic and method but have to tentatively accept that data are correct until and unless they are shown to be faulty.
I am assuming that all science journalists, including Mooney (and Zimmer, for instance), are capable of judging the logic and method and tentatively accepting the data reported in the papers.
There is a basic, underlying assumption in the scientific community that scientists are honest. The assumption is that although errors and unsupported conclusions may be made, the researchers do not lie about their data or their methods. This is, of course, a consciously-applied assumption that every scientist knows is risky. After all, some scientists have infamously lied. But scientists can usually trust other scientists for the reasons you cite: once a lie is found out, no one will ever believe that person again, and his life as a scientist is over. Even the most rancorous and public disputes almost never involve charges of dishonesty. Stupidity, perhaps, but not dishonesty.
Although a layman might eventually become very good at understanding scientific papers, it’s unlikely he will ever be able to actually evaluate any of the more abstruse areas. As you point out, even for scientists outside their own discipline, things can get very confusing very quickly. However, if a science journalist follows a developing research story, he can eventually see the lay of the land as other researchers report their findings. Global warming is a prime example. It is also usually possible to follow the back-and-forth of papers, comments and responses. One important signpost for the science journalist is that the back-and-forth takes place in science journals and at scientific meetings, not usually on TV or in political speeches. Cold fusion is a good example of that. Any decent science journalist should have had some very serious doubts about cold fusion very quickly when it was reported in the general news media first rather than in the appropriate scientific journals.
Twice during my graduate studies something interesting happened. I’d mention a paper to my advisor and, suddenly, his face would get dark and somber and he’d tell me to burn the paper and to never read anything else by that author. He did not say exactly why, but I looked and realized that both of those scientists must have fiddled with the data a little bit too much – in paper after paper, data appeared too good to be true.
I will weigh in on this soon, thanks for bringing it up…
I think you’ve got a great point and I think that being able to deconstruct an argument is a key piece of analysing a scientific paper. Which is why I also think that our young political science students and journalism students should be taking college courses where they learn how to evaluate scientific papers for content, not simply learning how to grow bacteria in petri dishes. There is, of course, something to be learned from such experiments, but I think most people will end up reading about science, rather than performing the experiments themselves, and I would like them to know how to search for the logic in a research paper and evaluate it.
Very well put! I’d like to point out another ground for evaluation which is available to both scientists and non-scientists. That is what Wilson calls consilience, the general tendency of scientific findings to “hang together” and retain consistent principles even across disciplines. Any given claim will have implications reaching outside its own context, and those can be used as checks.
For example, if somebody claims to have made a Turing machine out of DNA, I can’t hope to check the chemistry, but I can see whether what the paper describes is really a Turing machine. But I can do a bit more than that — I can note the basic difficulty of the task (consider: deterministic choice of nucleic acids? based on the states of at least three other complex molecules?) and consider whether the structures he describes are complex enough to do it, and whether his account of the production seems plausible in that light. I can also check what’s *powering* the computation….
Simpler examples: if somebody claims to have a water-powered car, I don’t need to *check* his results, I know that the energy plain isn’t there, unless he’s invoking fusion. (It also helps to know a little about urban legends!) And if someone claims to have discovered a trisexual species of mouse, I know that this is so unheard of as to warrant “extraordinary proof”. More specifically: mice aren’t that different from any other mammals, and I’m pretty sure that if there were *any* trisexual mammals around, I’d have heard of them by now!
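To spell out the energy bookkeeping behind the water-powered car example, here’s a back-of-the-envelope sketch using standard (rounded) textbook enthalpies, not figures from any particular claim:

```latex
\begin{align*}
2\,\mathrm{H_2O(l)} &\;\rightarrow\; 2\,\mathrm{H_2(g)} + \mathrm{O_2(g)} & \Delta H &\approx +572~\mathrm{kJ} \\
2\,\mathrm{H_2(g)} + \mathrm{O_2(g)} &\;\rightarrow\; 2\,\mathrm{H_2O(l)} & \Delta H &\approx -572~\mathrm{kJ}
\end{align*}
```

Whatever you get back from burning the hydrogen you already paid to split the water, so before any real-world losses the net is zero at best; without some other energy input (fusion, as noted), water isn’t a fuel.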
These are all examples where knowing general facts lets people evaluate more specific facts. Of course, you have to be careful about saying “impossible”! (Would I believe an artificial strain of trisexual mice? Maybe in a few decades….) Even so, science has both basic principles and common certainties. It’s possible to do some impressive things, but there are always preconditions and limitations, and those don’t change that fast.
This is a good start but there are other ways of approaching the credibility of sources as well. One of my methodologies is quite similar to the one that Judge Jones applied in the Dover decision: Is the “scientist” actually following the norms of science, i.e., behaving like a scientist, conducting research, submitting it to journals, and so forth? Or is this a political campaign dressed up to look like science? Journalists are more than capable of detecting something like this, and there are plenty of scientists willing to help them do that.
See, this is why Janet gets paid to do philosophy of science, whereas I… comment on blogs sometimes.
“RTFdata” is too strong a claim — as is any idea that I know what Chris is doing. Quite apart from the more sociological methods that Chris and Mark Paris mention, the discussion above makes it clear that close familiarity with scientific reasoning is sufficient to evaluate methodologies and conclusions in a pretty stringent manner.
there are plenty of scientists willing to help them do that
This is a key issue in science journalism, and it really, really annoys me: too many science journalists seem to rely solely on press releases, don’t even attempt to read the actual research, and finally don’t consult scientists to check that their grasp of the subject is in the right ball-park.
Since I started reading science blogs (not ScienceBlogs, just science blogs!), I’ve learned a few cues to look out for which act as warning signs that a reporter has not done his or her job; I’ve also begun learning how to read a scientific paper at least well enough to understand what the researchers actually say. Once I’ve done what fact-checking I can as to the accuracy of an article, I try to find out what other scientists in the same discipline think. All part of the basic deal, and these days all very easy. Why are so many journalists incapable of doing as much? (Note: Mooney, Zimmer, Goldacre, etc. are exceptions, it goes without saying.)
Excellent post and comments, thanks everyone.
This from the Journal of Higher Education offers some good tips for avoiding junk science.
I’m sorry to barge in on this interesting discussion so late in the day. But I would like to add a few things I hope you might find of use or interest.
First, I read The Republican War on Science as just as much an attempt to make news as to report it – very much like George W. Bush’s declaration of a ‘War on Terror’. In both cases, one can see a wide range of evidence for the conflict indicated in the corresponding phrase but now collected together and focussed for ideological effect. There is also a self-fulfilling air to both phrases: Both Mooney and Bush want people to realize there is a war – and to take sides. As a journalistic invention, it is potentially very illuminating because it seems to offer a distinct perspective from which lots of things at once in the public eye start to make sense. But given that promise, it becomes imperative that the journalist has staked out an independent point-of-view. ‘Independent’ here simply means using standards of judgement that are his/her own, and not entirely parasitic on those of his/her informants.
‘Trust’ is a really bad word to use in this context because it suggests an either-or choice the journalist needs to make with regard to (in this case) the political and scientific informants: A journalist who aspires to the panoramic sweep suggested by ‘The Republican War on Science’ should be engaged in more of a wheat-and-chaff exercise with regard to informants. Maybe Chris did do this, but it’s not evident from the book. He sometimes – but rarely – faults the guys on his side. Indeed, Chris pretty much did a George Bush and painted the situation as the Forces of Light (Democrats and Real Scientists) against the Forces of Darkness (Republicans and Pseudo-Scientists), which ends up only preaching to the choir – which, admittedly, nowadays is pretty large.
I raised the spectre of Walter Lippmann in my Crooked Timber review because Lippmann was a great example of a journalist who carved out an independent point of view, one that was always sympathetic to US interests but always pitched at a level somewhat above the political trenches. This allowed him to criticize political leaders when necessary, without appearing to be a traitor, and also to provide a sense of consistent vision during arguably the most turbulent period in US political history – its transition to superpower status. Science journalists over the years have also aspired to something similar: Waldemar Kaempffert in the US and J.G. Crowther in the UK, for example, both projected very progressive, even futuristic, scientific visions from ongoing developments during the interwar years. In a rather different, more downbeat way, John Horgan in our time strikes me as someone aiming at Lippmann-like independence. (By the way, it’s possible to adopt an ‘independent’ viewpoint in my sense without being a ‘centrist’, just like a judge can weigh the merits of a case and still come down on one side.)
One thing that troubles me about the discussion so far is that you guys seem to be portraying Chris’ journalistic task as simply a matter of evaluating the merits of rival scientific sources, and then looking to the philosophy of science to solve your problem. Chris claims to be doing something more creative here – and so deserves to be held to a different standard, one that takes science journalism out of the realm of ‘Methodology for the Masses’. In the end, the independence of Chris’ standpoint should be judged by how he disentangles and analyses the relationship between scientific claims and political motives. In that spirit, I suggested in my review that he should have asked both sides of this ‘war’ to troubleshoot each other’s claims and motives, and then formally evaluate them, as a judge might do in a case.