The Philosophy of Science Association Biennial Meeting included a symposium session on the release of hacked e-mails from the Climate Research Unit at the University of East Anglia. Given that we’ve had occasion to discuss ClimateGate here before, I thought I’d share my notes from this session.
Symposium: The CRU E-mails: Perspectives from Philosophy of Science.
Naomi Oreskes (UC San Diego) gave a talk called “Why We Resist the Results of Climate Science.”
She mentioned the attention brought to the discovery of errors in the IPCC report, noting that while mistakes are obviously to be avoided, it would be astonishing for a report running thousands of pages to contain no mistakes at all. (Try to find a bound dissertation — generally only in the low hundreds of pages — without at least one typo.) The public’s assumption, though, was that these mistakes, once revealed, were smoking guns — a sign that something improper must have occurred.
Oreskes noted the boundary scientists of all sorts (including climate scientists) have tried to maintain between the policy-relevant and the policy-prescriptive. This is a difficult boundary to police, though, as climate science has an inescapable moral dimension. To the extent that climate change is driven by consumption (especially but not exclusively the burning of fossil fuels), we have a situation where the people reaping the benefits are not the ones who will be paying for that benefit (since people in the developed world will have the means to respond to the effects of climate change and those in the developing world will not). The situation seems to violate our expectations of intergenerational equity (since future generations will have to cope with the consequences of the consumption of past and current generations), as well as of inter-specific equity (since the species likely to go extinct in response to climate change are not the ones contributing the most to climate change).
The moral dimension of climate change, though, doesn’t make this a scientific issue about which the public feels a sense of clarity. Rather, the moral issues are such that Americans feel like their way of life is on trial. Those creating the harmful effects have done something wrong, even if it was accidental.
And this is where the collision occurs: Americans believe they are good; climate science seems to be telling them that they are bad. (To the extent that people strongly equate capitalism with democracy and the American way of life, that’s an issue too, given that consumption and growth are part of the problem.)
The big question Oreskes left us with, then, is how else to frame the need for changes in behavior, so that such a need would not make Americans so defensive that they would reflexively reject the science. I’m not sure the session ended with a clear answer to that question.
* * * * *
Wendy S. Parker (Ohio University) gave a talk titled “The Context of Climate Science: Norms, Pressures, and Progress.” A particular issue she took up was the ideal of transparency and how it came up in the context of climate scientists’ interactions with each other and with the public.
Parker noted that there had been numerous requests for access to raw data by people climate scientists did not recognize as part of the climate science community. The CRU denied many such requests, and the ClimateGate emails made it clear that the scientists generally didn’t want to cooperate with these requests.
Here, Parker observed that while we tend to look favorably on transparency, we probably need to say more about what transparency should amount to. Are we talking about making something available and open to scrutiny (i.e., making “transparency” roughly the opposite of “secrecy”)? Are we talking about making something understandable or usable, perhaps by providing fully explained nontechnical accounts of scientific methods and findings for the media (i.e., making “transparency” roughly the opposite of “opacity”)?
What exactly do we imagine ought to be made available? Research methods? Raw and/or processed data? Computer code? Lab notebooks? E-mail correspondence?
To whom ought the materials to be made available? Other members of one’s scientific community seems like a good bet, but how about members of the public at large? (Or, for that matter, members of industry or of political lobbying groups?)
And, for that matter, why do we value transparency? What makes it important? Is it primarily a matter of ensuring the quality of the shared body of scientific knowledge, and of improving the rate of scientific progress? Or, do we care about transparency as a matter of democratic accountability? As Parker noted, these values might be in conflict. (As well, she mentioned, transparency might conflict with other social values, like the privacy of human subjects.)
Here, if the public imputed nefarious motives to the climate researchers, the scientists themselves viewed some of the requests for access to their raw data as attempts by people with political motivations to obstruct the progress (or acceptance) of their research. It was not that the scientists feared that bad science would be revealed if the data were shared, but rather that they worried that yahoos from outside the scientific community were going to waste their time, or, worse, cherry-pick the shared data to make allegations to which the scientists would then have to respond, wasting even more time.
In the numerous investigations that followed on the heels of the leak of stolen CRU e-mails, about the strongest charge against the involved climate scientists that stood was that they failed to display “the proper degree of openness”, and that they seemed to have an ethos of minimal compliance (or occasionally non-compliance) with regard to Freedom of Information Act (FOIA) requests. They were chided that the requirements of FOIA must not be seen as impositions, but as part of their social contract with the public (and something likely to make their scientific knowledge better).
Compliance, of course, takes resources (one of the most important of these being time), so it’s not free. Indeed, it’s hard not to imagine that at least some FOIA requests to climate scientists had “unintended consequences” (in terms of the expenditure of time and other resources) on climate scientists that were precisely what the requesters intended.
However, as Parker noted, FOIA originated with the intent of giving citizens access to the workings of their government — imposing it on science and scientists is a relatively new move. It is true that many scientists (although not all) conduct publicly funded research, and thereby incur some obligations to the public. But there’s a question of how far this should go — ought every bit of data generated with the aid of any government grant to be FOIA-able?
Parker discussed the ways that FOIA seems to demand an openness that doesn’t quite fit with the career reward structures currently operating within science. Yet ClimateGate and its aftermath, and the heightened public scrutiny of, and demands for openness from, climate scientists in particular, seem to be driving (or at least putting significant pressure upon) the standards for data and code sharing in climate science.
I got to ask one of the questions right after Parker’s talk. I wondered whether the level of public scrutiny on climate scientists might be enough to drive them into the arms of the “open science” camp — which would, of course, require some serious rethinking of the scientific reward structures and the valorization of competition over cooperation. As we’ve discussed on this blog on many occasions, institutional and cultural change is hard. If openness from climate scientists is important enough to the public, though, could the public decide that it’s worthwhile to put up the resources necessary to support this kind of change in climate science?
I guess it would require a public willing to pay for the goodies it demands.
* * * * *
The next talk, by Kristin Shrader-Frechette (University of Notre Dame), was titled “Scientifically Legitimate Ways to Cook and Trim Data: The Hacked and Leaked Climate Emails.”
Shrader-Frechette discussed what statisticians (among others) have to say about conditions in which it is acceptable to leave out some of your data (and indeed, arguably misleading to leave it in rather than omitting it). There was maybe not as much unanimity here as one might like.
There’s general agreement that data trimming in order to make your results fit some predetermined theory is unacceptable. There’s less agreement about how to deal with outliers. Some say that deleting them is probably OK (although you’d want to be open that you have done so). On the other hand, many of the low probability/high consequence events that science would like to get a handle on are themselves outliers.
So when and how to trim data is one of those topics where it looks like scientists are well advised to keep talking to their scientific peers, the better not to mess it up.
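To make the trimming question concrete, here is a minimal sketch (my own illustration, not any procedure discussed in the talks or used by the CRU) of one common statistical convention for flagging outliers, Tukey’s fences: values falling more than 1.5 interquartile ranges outside the quartiles are set aside. Crucially, the function returns both the kept and the removed values, reflecting the point that trimming is defensible only when it’s done openly:

```python
import statistics

def trim_outliers(data, k=1.5):
    """Flag values beyond k * IQR outside the quartiles (Tukey's fences).

    Returns (kept, removed) so that any removals can be reported
    openly rather than silently dropped.
    """
    q1, _, q3 = statistics.quantiles(data, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    kept = [x for x in data if lo <= x <= hi]
    removed = [x for x in data if x < lo or x > hi]
    return kept, removed

# A hypothetical series of measurements with one suspicious reading:
kept, removed = trim_outliers([9.8, 10.1, 10.0, 9.9, 10.2, 42.0])
```

Of course, as Shrader-Frechette’s point about low-probability/high-consequence events suggests, a rule like this can discard exactly the observations one most needs to understand — which is why the choice of `k`, and whether to trim at all, is a judgment call best made in conversation with one’s peers.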
Of the details in the leaked CRU e-mails, one that was frequently identified as a smoking gun indicating scientific shenanigans was the discussion of the “trick” to “hide the decline” in the reconstruction of climatic temperatures using proxy data from tree-rings. Shrader-Frechette noted that what was being “hidden” was not a decline in temperatures (as measured instrumentally) but rather in the temperatures reconstructed from one particular proxy — and that other proxies the climate scientists were using didn’t show this decline.
The particular incident raises a more general methodological question: scientifically speaking, is it better to include the data from proxies (once you have reason to believe it’s bad data) in your graphs? Is including it (or leaving it out) best seen as scrupulous honesty or as dishonesty?
And, does the answer differ if the graph is intended for use in an academic, bench-science presentation or a policy presentation (where it would be a very bad thing to confuse your non-expert audience)?
As she closed her talk, Shrader-Frechette noted that welfare-affecting science cannot be treated merely as pure science. She also mentioned that while FOIA applies to government-funded science, it does not apply to industry-funded science — which means that the “transparency” available to the public is pretty asymmetrical (and that industry scientists are unlikely to have to devote their time to responding to requests from yahoos for their raw data).
* * * * *
Finally, James McAllister (University of Leiden) gave a talk titled “Errors, Blunders, and the Construction of Climate Change Facts.” He spoke of four epistemic gaps climate scientists have to bridge: between distinct proxy data sources, between proxy and instrumental data, between historical time series (constructed of instrumental and proxy data) and predictive scenarios, and between predictive scenarios and reality. These epistemic gaps can be understood in the context of the two broad projects climate science undertakes: the reconstruction of past climate variation, and the forecast of the future.
As you might expect, various climate scientists have had different views about which kinds of proxy data are most reliable, and about how the different sorts of proxies ought to be used in reconstructions of past climate variation. The leaked CRU e-mails include discussions where climate scientists dedicate themselves to finding the “common denominator” in this diversity of expert opinion — not just because such a common denominator might be expected to be closer to the objective reality of things, but also because finding common ground in the diversity of opinion could be expected to enhance the core group’s credibility. Another effect, of course, is that the common denominator is also denied to outsiders, undermining their credibility (and effectively excluding them as outliers).
McAllister noted that the emails simultaneously revealed signs of internal disagreement, and of a reaching for balance. Some of the scientists argued for “wise use” of proxies and voiced judgments about how to use various types of data.
The data, of course, cannot actually speak for themselves.
As the climate scientists worked to formulate scenario-based forecasts that public policy makers would be able to use, they needed to grapple with the problems of how to handle the link between their reconstructions of past climate trends and their forecasts. They also had to figure out how to handle the link between their forecasts and reality. The e-mails indicate that some of the scientists were pretty resistant to this latter linkage — one asserted that they were “NOT supposed to be working with the assumption that these scenarios are realistic,” rather using them as internally consistent “what if?” storylines.
One thing the e-mails don’t seem to contain is any explicit discussion of what would count as an ad hoc hypothesis and why avoiding ad hoc hypotheses would be a good thing. This doesn’t mean that the climate scientists didn’t avoid them, just that it was not a methodological issue they felt they needed to be discussing with each other.
This was a really interesting set of talks, and I’m still mulling over some of the issues they raised for me. When those ideas are more than half-baked, I’ll probably write something about them here.