Norms are what we ought to do, not what we suspect everyone actually does.

In the comments on a number of recent posts, I’ve been sensing a certain level of cynicism about the realities of scientific practice, and it’s been bumming me out. (In fairness, as I reread those comment threads today, the comments aren’t as jaded as I remember them being; it’s probably that the ones with a cynical edge are staying with me a bit longer.)
I am not bummed because I think people ought to have a picture of the scientific community as one where everyone is happy and smiling, holding hands and singing and wanting to buy the world a Coke. What’s sticking in my craw a little is the “Eh, what are you gonna do?” note of resignation about some of the problematic behaviors and outcomes that are acknowledged to be even more common than the headlines would lead us to believe.
I do not think we can afford to embrace resignation here. I think that seeing the problems actually saddles us with some responsibility to do something about them.


I don’t think we can proclaim that scientific work is basically trustworthy while also saying that everyone has doubts about their collaborators (and that we cannot expect people to pursue the information that could clear up these uncertainties one way or another), that it’s a completely normal and expected thing for scientists to present their findings as much more certain and important than they are (not to mention puffing up the promise of their work to secure funding for it), and even to misrepresent who was actually involved in coming up with the big idea, doing the research, and drawing the conclusions.
Or, that peer review by other scientists in the field is how we certify this as knowledge while also holding that generally the peer reviewers are too conservative to wrap their heads around truly new findings, or that they are so specialized that they don’t know enough about anyone else’s research to understand what it is they’re evaluating, or that they just don’t have the time to do a reasonable job reviewing manuscripts, or they are so driven by their own ambitions that their reviews are not objective evaluations of scientific arguments so much as efforts to undermine their enemies or help their allies (assuming they think they can tell from the manuscript whether it comes from an ally or an enemy), besides which journal editors end up publishing what they feel like publishing regardless of the recommendations of the referees.
Or, that reproducibility is a central feature of scientific knowledge, one that gives us reason to claim we’ve gotten our hands on a real feature of the world (and a deterrent against people just making stuff up and submitting it to journals), but we all know that it’s super-hard to reproduce scientific results, and there are no career rewards for replication per se (nor do we think that any should be built into the system), and only the “important” findings will be tested for reproducibility in the course of trying to build new findings upon them (which leaves a whole lot of findings on the books but not fully verified), and when scientists report that their serious efforts to reproduce a finding have failed, the journal editors and university administrators can be counted on to do very little to recognize that the original finding shouldn’t be counted as “knowledge” anymore.
I don’t think we can have it both ways. I don’t think we can trumpet the reliability of Science and wallow in cynicism about the actual people and institutional structures involved in the production of science.
Now, scientists know that there are things they and their fellow scientists ought to be doing if they are serious about building a body of accurate and reliable knowledge about the world. If you’re down with the scientific method, simply making up data (rather than doing the experiment) is off the table. If the scientific method (which includes not only particular sorts of interactions between scientists and the phenomena they are studying, but also particular sorts of interactions between scientists and the findings reported by other scientists, interactions aimed at ensuring that the findings are an accurate account of the phenomena and are as free from bias as they can be) is how we justify our claim that a certain body of knowledge is reliable and credible, then departures from this method, including departures from the optimal scientist-on-scientist interactions, undercut the reliability and the credibility of the knowledge.
We know there are people who are departing from what they ought to be doing as scientists. Perhaps we fear that most scientists are doing this to one degree or another.
What are we going to do about this?
Frankly, I’m not sure whether we’re in a position to say with certainty if this is a matter of a few bad apples or an orchard where all the trees are diseased and overrun with codling moth caterpillars. I don’t think we have to burn it all down and start fresh. But I do think that scientists who care about building good scientific knowledge have some responsibilities to shoulder here.
How do you behave in conducting and reporting your research, and in your work with collaborators and your interactions with competitors? In a community of science where everyone behaved as you do, would it be easier or harder to open a journal and find a credible report? Would it be easier or harder for scientists in a field to get to the bottom of a difficult scientific question?
What kind of behavior do you tolerate in your scientific colleagues? What kind of behavior do you encourage? Do you give other scientists a hard time for doing things that undermine your ability to trust scientific findings or other scientists in the community? If you don’t, why not?
Those of you who are journal editors or university administrators, how do you understand your relationship to the project of building good scientific knowledge? What kinds of other interests (like operating costs, fear of lawsuits, concerns about subscriptions or publicity) get in the way of your being able to actively support the building of good scientific knowledge? Is there a way to manage those other interests better, or even to openly acknowledge the ways these interests might be in tension with the goal of building good scientific knowledge?
I’m no Pollyanna. I know there’s some bad stuff happening out there. But has science gone completely off the rails?
Maybe not. And maybe it’s time to turn from cynicism to righteous anger, to exert some peer pressure (and some personal discipline) so that hardly anyone will feel comfortable trying to get away with the stuff “everyone” does now.


12 Comments

  1. ‘Norm’ is a word used to refer both to prescriptive standards and to descriptive accounts. Conflating the two is generally a bad idea.

  2. “Norm is a word used to refer both to prescriptive standards and to descriptive accounts.”
    Oh, I suspect Dr. Free-Ride recognizes that as Ye Olde Is-Ought Problem. However one describes it, I do find it useful to recognize gaps between espoused values and lived values.
    Seems to me that’s an important step toward adjusting social norms to reconcile lived values with espoused values.
    Have I mentioned Peter Senge’s book, The Fifth Discipline, before? (At least, recently?) I found that to be a helpful eye-opener, with an insightful approach to recognizing the gaps between espoused values and lived values, and with practical advice about reconciling them.
    Cheers

  3. “I don’t think we can proclaim that scientific work is basically trustworthy…”
    Luckily, we don’t have to. The self-correcting nature of science will sort out what is and isn’t reliable.

  4. I think the (perhaps tacitly) accepted practices vary widely across science, but my viewpoint is that a lot of what is taught as “the scientific method” in junior high isn’t practiced or necessary. Part of the reason for this is that most scientific findings are just not all that important on their own. Great advances are built from the accumulation of many small ones, and if someone misrepresents their work (which, I believe, is almost always unintentional), it will simply not lead to anything further.

  5. “I think that seeing the problems actually saddles us with some responsibility to do something about them.”
    “I’m not sure whether we’re in a position to say with certainty if this is a matter of a few bad apples or an orchard where all the trees are diseased and overrun with codling moth caterpillars.”
    It’s a wild idea, but one possible way out of this possibly diseased forest is complete transparency. If scientists thought that someone — especially a critic — was looking over their shoulder in something approaching real time, maybe they would behave differently.
    I’d start in the animal labs.

  6. But seriously,
    This is a good post. Several of its points rank high on my own personal list of things that are integral to the practice of science but are not spoken of nearly enough. Combine the tensions inherent in peer review and reproducibility with the fact that nothing ever seems to work (I am defining ‘nothing’ as 5% – that’s my p-value), and you have the top three reasons why I find science so difficult.
    Maybe I am using lists this morning because I just finished logging reagents and methods in my lab notebooks.

  7. As someone who took a novel to nearly every class for the first 11 years of my education, and spent about half of the average lecture with my nose in a book, I’m delighted to read your article.

  8. I do agree that science is generally self-correcting when embedded within an overall social and political system that has a healthy regard for the truth, along with well-entrenched institutions with some degree of power and independence that actively support the quest for objective truth. However, absent those constructs, I think that even the best intentions of individual practitioners can be (and historically have been) subverted. I am not a practicing scientist, but I bring this up as a reminder to non-scientists that by their votes, voices and support (both financial and non-financial) they too can help do something. Science cannot stand up on its own if a society allows “objective truth” to continually be trumped by considerations of political gain and immediate profit.
    I do not think we are there yet, but there are reasons for concern. This, to me, is the intellectual battle of the age: I see science as the refined “tip of the iceberg” that can only survive within a system with an overall regard and respect for the confrontation of reality with logic and data. (I think that last may be a paraphrase of Jacques Monod.) And to that end, I think that scientists need to take extra care to observe the cautions you describe, because the overall credibility of science itself provides positive feedback to the underlying mechanisms of societal support.
