(Apologies to John Hodgman for swiping his nifty title.)
There has been some discussion in these parts about just who ought to be allowed to talk about scientific issues of various sorts, and just what kind of authority we ought to grant such talk. It’s well and good to say that a journalism major who never quite finished his degree is less of an authority on matters cosmological than a NASA scientist, but what should we say about engineers or medical doctors with “concerns” about evolutionary theory? What about the property manager who has done a lot of reading? How important is all that specialization research scientists do? To some extent, doesn’t all science follow the same rules, thus equipping any scientist to weigh in intelligently about it?
Rather than give you a general answer to that question, I thought it best to lay out the competence I personally am comfortable claiming, in my capacity as a trained scientist.
As someone trained in a science, I am qualified:
- to say an awful lot about the research projects I have completed (although perhaps a bit less about them when they were still underway).
- to say something about the more or less settled knowledge, and about the live debates, in my research area (assuming, of course, that I have kept up with the literature and professional meetings where discussions of research in this area take place).
- to say something about the more or less settled (as opposed to “frontier”) knowledge for my field more generally (again, assuming I have kept up with the literature and the meetings).
- perhaps, to weigh in on frontier knowledge in research areas other than my own, if I have been very diligent about keeping up with the literature and the meetings and about communicating with colleagues working in these areas.
- to evaluate scientific arguments in areas of science other than my own for logical structure and persuasiveness (though I must be careful to acknowledge that there may be premises of these arguments — pieces of theory or factual claims from observations or experiments that I’m not familiar with — that I’m not qualified to evaluate).
- to recognize, and be wary of, logical fallacies and other less obvious pseudo-scientific moves (e.g., I should call shenanigans on claims that weaknesses in theory T1 count as support for alternative theory T2).
- to recognize that experts in fields of science other than my own generally know what the heck they’re talking about.
- to trust scientists in fields other than my own to rein in scientists in those fields who don’t know what they are talking about.
- to face up to the reality that, as much as I may know about the little piece of the universe I’ve been studying, I don’t know everything (which is part of why it takes a really big community to do science).
This list of my qualifications is an expression of my comfort level more than anything else. It’s not elitist — good training and hard work can make a scientist out of almost anyone. But, it recognizes that with as much as there is to know, you can’t be an expert on everything. Knowing how far the tether of your expertise extends is part of being a responsible scientist.
A quick note: as a semi-practicing scientist (no longer an academic, now in private industry), I would suggest that your point 8 may need tweaking.
You trust that scientists in fields other than your own will rein in scientists in those fields who don’t know what they are talking about. In my mind the problem with your point is twofold. The first is that this process only works with really obvious errors or true dim bulbs. Most of the time science is twisted in small incremental steps that, in and of themselves, aren’t wrong, just misleading or misdirected. My area of research was the use of scientific information in decision-making, and it was always the small, fine-grained cases that caused the big problems. Policy-makers are looking for an “in,” and all they need is an inch. That type of inch is almost always allowed in the field as part of the give-and-take of debate, but once it gets out into the wild it can take on a life of its own.
The second is that, in my observation, a large proportion of scientists are unwilling to make waves when big names are involved, because of the incestuous nature of research and the nature of the peer review process. It takes a lot of guts to challenge a big gun who you know might get a shot at your next paper when it goes out for peer review. As well, given the way research is funded, it doesn’t take a lot of effort for a big gun to see that a junior doesn’t get a better job or new funding. Better to keep your head down and do good work than to tilt at windmills. That’s why our justice system depends on outside enforcers (police, etc.) and neutral third parties (judges, etc.) to settle disputes.
It might be helpful for non-scientists to have some idea of the meaning of “research area” and “field”. I think this is sometimes the source of some misunderstanding – in particular, I don’t think most people have any idea how much narrower the former can be than the latter.
I’m glad to see you lay it out like that. I agree with Blair about point 8. I’m an industrial scientist (PhD in materials chemistry). As a grad student and postdoc, I suffered mightily trying to reproduce some high-profile work that turned out to be fraudulent (I won’t bore you with the details, but it involved Bell Labs a few years back).
As a result of my experience, I became pretty interested in the structure and time-course of scientific fraud and, for lack of a better term, nonsense. The takedown in the Bell Labs case was a good example of point 8 working as it should: the damage was considerable, but contained.
The recent Hwang stem cell frauds were caught by bloggers in Korea, for the most part postdocs. It is a more oblique example of point 8, because the scientific establishment was pretty loath to rain on the parade until things collapsed.
More newsworthy, and more jugular, is the debate surrounding anthropogenic global warming. The claims for consensus are true as far as they go: most in the field agree that GW is measurable. But then, for reasons that seem to be advocacy-related, this gets conflated with the idea that there is no debate about the anthropogenic part, which is not true. There is a lot of very light stepping done to avoid deep scrutiny of results, with prominent researchers hoarding data and refusing to publish detailed analysis methods or code.
Climatology is not my field, so I remain agnostic, but this sort of behavior seems to be countenanced in climatology, and I am quite certain it would not be in physics or chemistry. Looking from the outside, I am not reassured by this behavior. Extending point 5: although I am not a climatologist, I understand the statistics and data-handling procedures common to science, and am suspicious when I think that they are being violated. I see no good or honest reason not to disclose and archive data or methods once the research is published. If the science is to be of sufficient quality to influence policy, and hence great swaths of the economy, it has to be above reproach, IMO. I think that this is very important stuff, and mishandling things could cause considerable damage to the reputation of science, and to policy.
Good for you! Mostly, I want to follow up on Dave Eaton’s comment. There is a site titled “A Few Things Ill Considered” which clarifies the most common misconceptions about climate. The site “RealClimate” should resolve whatever remaining doubts you might have.