If you’re a regular reader of this blog (or, you know, attentive at all to the world around you), you will have noticed that scientific knowledge is built by human beings, creatures that, even on the job, resemble other humans more closely than they do Mr. Spock or his Vulcan conspecifics. When an experiment yields really informative results, most human scientists don’t coolly raise an eyebrow and murmur “Fascinating.” Instead, you’re likely to see reactions ranging from big smiles to shouts of delight to a full-on end zone happy-dance. You can observe human scientists displaying similar emotional responses in other kinds of scientific situations, too — say, for example, when they find the fatal flaw in a competitor’s conclusion or experimental strategy.
Many scientists enjoy doing science. (If this weren’t so, the rest of us would have to feel pretty bad for making them do such thankless work to build knowledge that we’re not willing or able to build ourselves but from which we benefit nonetheless.) At least some scientists are enjoying more than just the careful work of forming hypotheses, making observations, comparing outcomes and predictions, and contributing to a more reliable account of the world and its workings. Sometimes the enjoyment comes from playing a particular kind of role in the scientific conversation.
Some scientists delight in the role of advancer or supporter of the new piece of knowledge that will change how we understand our world in some fundamental way. Other scientists delight in the role of curmudgeon, shooting down overly bold claims. Some scientists relish being contrarians. Others find comfort in being upholders of consensus.
In light of this, we should probably consider whether a human predilection like enjoying the contrarian role (or the consensus-supporter role, for that matter) is a potential source of bias against which scientists should guard.
The basic problem is nothing new: what we observe, and how we interpret what we observe, can be influenced by what we expect to see — and, sometimes, by what we want to see. Obviously, scientists don’t always see what they want to see, else people’s grad school lab experiences would be deliriously happy rather than soul-crushingly frustrating. But sometimes what there is to see is ambiguous, and the person making the observation has to make a call. And frequently, with a finite set of data, there are multiple conclusions — not all of them compatible with each other — that can be drawn.
These are moments when our expectations and our ‘druthers might creep in as the tie-breaker.
At the scale of the larger community of science and the body of knowledge it produces, this may not be such a big deal. (As we’ve noted before, objectivity requires teamwork.) Given a sufficiently diverse scientific community, there will be loads of other scientists with different expectations and ’druthers. When someone tries to take an earlier result and use it to build more knowledge, the thought is that something like a replication of that earlier result happens, and biases that may have colored it will be identified and corrected. (Especially since scientists are in competition for scarce goods like jobs, grants, and Nobel Prizes, you might start with the assumption that there’s no reason not to identify problems with the existing knowledge base. Of course, actual conditions on the ground for scientists can make things more complicated.)
But even given the rigorous assessment she can expect from the larger scientific community, each scientist would also like, individually, to be as unbiased as possible. One of the advantages of engaging with lots of other scientists, with different biases than your own, is that you get better at noticing your own biases and keeping them on a shorter leash — putting you in a better place to make objective knowledge.
So, what if you discover that you take a lot of pleasure in being a naysayer or contrarian? Is that kind of self-awareness something that should make you extra careful before reaching contrarian conclusions about the data? If you actually come to the awareness that you dig being a contrarian, are you in a better position to take corrective action than you would be if you enjoyed being a contrarian but didn’t realize that the contrarianism was what was bringing you the enjoyment?
(That’s right, a philosopher of science just made something like an argument that scientists might benefit — as scientists, not just as human beings — from self-reflection. Go figure.)
What kind of corrective action do I have in mind for scientists who discover that they may have a tilt, whether towards contrarianism or consensus-supporting? I’m thinking of a kind of scientific buddy-system: for example, matching scientists with contrarian leanings to scientists who are made happier by supporting consensus. Such a pairing would be useful for each scientist in the pair when it comes to vetting their evidence and conclusions: Here’s the scientist you have to convince! Here’s the colleague whose objections you need to understand and engage with before this goes any further!
After all, one of the things serious scientists are after is a good grip on how things actually are. An explanation that a scientist with different default assumptions than yours can’t easily dismiss is an explanation worth taking seriously. If, on the other hand, your “buddy” can dismiss your explanation, it would be good to know why, so you can address its weaknesses (or even, if warranted, change your conclusions).
Such a buddy-system would probably only be workable with scientists who are serious about intellectual honesty and about getting knowledge that is as objective as possible. Among other things, this means you wouldn’t want to be paired with a scientist for whom having an open mind would be at odds with the conditions of his employment.
_____
An ancestor version of this post was published on my other blog.