Objectivity requires teamwork, but teamwork is hard.

In my last post, I set out to explain why the scientific quest to build something approaching objective knowledge requires help from other people. However, teamwork can be a challenge in the best of circumstances. And, certain aspects of scientific practice — especially how rewards are distributed — can make scientific teamwork even harder.

In this post, I’ll run down just some of the obstacles to scientists playing together effectively to build reliable knowledge about the world.

First, recall that a crucial thing individual scientists hope to get from their teammates in knowledge-building is help in identifying when they are wrong. The sociologist of science Robert Merton noted that a rule of the knowledge-building game, at least as far as Team Science is concerned, is organized skepticism, which I once described like this:

Everyone in the tribe of science can advance knowledge claims, but every such claim that is advanced is scrutinized, tested, tortured to see if it really holds up. The claims that do survive the skeptical scrutiny of the tribe get to take their place in the shared body of scientific knowledge.

In principle, each scientist tries to keep their organized skepticism turned up to a healthy level when looking at their own results, as well as the results of others. In practice, there are issues that get in the way of both self-scrutiny and scrutiny of the results of others.

It’s hard to make a scientific career in replicating the results of others.

The first thing to recognize is that as serious as scientists are about the ideal of reproducible results, reproducibility is hard. It takes a while to gain technical mastery of all the moving parts in your experimental system and to figure out which of those wiggly bits make a difference in the results you see.

In itself, this needn’t be an obstacle to scientists working well together. The problem is that scientific rewards are usually reserved for those who generate novel findings — figuring out something that wasn’t known before — rather than for those who replicate results someone else has already put forward. What matters, for the career score-keeping (which drives who gets hired, who gets grant money, who gets promoted, who wins prizes) is whether you are first across the finish line to discover X. Being second (or third, or tenth) across that finish line is a nice reassurance that the first one across had a solid finding, but it doesn’t count in the same way.

Setting up the rewards so that the only winner is the first across the finish line may also provide a disincentive to doing enough experiments yourself to be sure that your results are really robust — the other guy may be sure enough to submit his manuscript on the basis of fewer runs, or might have gotten a head-start on it.

Now, surely there are some exceptions: cases where X is such a startlingly unexpected finding that the scientific community won't really believe it until multiple researchers come forward to report that they have found it. But this is the exception rather than the rule, which means that if the second scientist to find X cannot add some additional ingredient to our understanding of it that wasn't part of the first report of X, that second researcher is out of luck.

Scientists are generally pretty smart. Among other things, this means most of them will come up with some strategy for spending their time that takes account of what activities will be rewarded. To the extent that working to replicate someone else’s results looks like a high-investment, low-yield activity, scientists may judge it prudent to spend their time doing something else.

It’s worth noting that scientists will frequently try to reproduce the results of others when those results are the starting point for a brand new piece of research of their own. These efforts can be time consuming and frustrating (see: “reproducibility is hard”). And, in the event that you discover that the other scientist’s results seem not to hold up, communicating this to the other scientist is not always viewed as a friendly gesture.

Questions about results can feel like personal attacks.

Scientists work hard to get their studies to work and to draw their best conclusions about what their observations mean — and, as we’ve just noted, they do this while racing against the clock in hopes that some other researcher doesn’t make the discovery (and secure the credit for it) first. Since scientists are human, they can get attached to those results they worked so hard to get.

It shouldn’t be a surprise, then, that they can get touchy when someone else pops into the fray to tell them that there’s a problem with those results.

If the results are wrong, scientists face the possibility that they have wasted a bunch of time, money, blood, sweat, and tears. As well, they may have to issue a correction or even a retraction of their published results, which means that the publication they’re correcting or retracting will no longer do the same work to advance their career.

In such a situation, getting defensive is understandable. However, getting defensive doesn’t do much to advance the knowledge-building project that science is supposed to be.

None of this is to say that an objection raised to one’s results should be automatically accepted as true. Organized skepticism applies to the critiques as well as to the original results.

That said, it strikes me that the best way to get the knowledge-building, error-detecting teamwork the tribe of science could use here might be to establish environments in scientific communities (from research groups to departments to disciplines) where researchers don’t take scrutiny of their results, data, methods, etc., personally — and where that scrutiny is applied to every member’s results, data, methods, etc. (since anyone can make mistakes).

When the players understand the game as aimed at building a reliable body of knowledge about the world that they can share, maybe they can be more welcoming of others pointing out their errors. When the game is understood as each scientist against all the others, pulling back to look critically at problems with one’s own work (especially when they are pointed out by a competitor) doesn’t look like such a great strategy.

(Unwillingness to take critiques of promising results seriously seems to have been a major feature of the Bengü Sezen/Dalibor Sames fraud scandal, and may also have played a role in the downfall of Harvard psychologist Marc Hauser.)

Sharing too much common ground makes it harder to be objective, too.

Part of the reason that scientists try to be alert to ways they could be deceived, even by themselves, is that the opportunities for deception are plentiful. One of the issues you have to face is that your expectations can influence what you see.

We don’t even have to dig into Thomas S. Kuhn’s The Structure of Scientific Revolutions, embrace his whole story about paradigms, or peruse the perception experiments he describes to accept that this is a possibility. The potential effect of expectations on observations is one reason that placebo-controlled trials are “double-blind” whenever possible, so neither experimental subject nor researcher is swayed by what they think ought to be occurring. Expectations also play a role in which kinds of scientific findings are accepted easily into the shared body of knowledge (because they seem to fit so naturally with what we already know) and which are resisted (because they don’t fit so well, and might even require us to identify some of the things we thought we “knew” as wrong). And, expectations can influence scientific knowledge by shaping what kinds of questions researchers ask, what kinds of methods they decide to use to tackle those questions, and what kinds of outcomes they view as within the realm of possible outcomes. (If you’re doing an experiment and you see an outcome outside the range of expectations, often the first thing you check is whether the equipment is malfunctioning.)

Working with other people helps scientists build better knowledge by giving them some information about which observed outcomes are driven by the features of the phenomenon they’re trying to understand and which are driven by subjective features (like expectations). But the other people are most helpful here if their expectations and background assumptions are not identical to our own!

If a scientific community shares all the same background assumptions, expectations, even unconscious biases, these things — and the ways they can influence how experiments are designed and what findings they produce — may become almost invisible to the scientists in that community.

What this means is that it may be a healthy thing for a community of knowledge-builders to be diverse. How diverse? Ideally, you’d want the community to achieve enough diversity that it’s hard for any individual’s background assumptions to stay in the background, because you’re always within spitting distance of someone with different background assumptions. Recognizing these as assumptions, rather than as necessarily true facts about the world, can keep the potential errors or oversights due to these assumptions on a shorter leash.

Each of these obstacles is linked to an over-arching challenge for scientific teamwork:

Teamwork often isn’t recognized or rewarded in scientific career score-keeping.

Scientists work with each other a lot to divide up aspects of complex research projects, but when it comes time to tally up the score, what sometimes seems to matter most is who ends up being first author and who’s lost in the et al. Scientists have detailed discussions about published research in their field, enacting something like post-publication peer review, but if you determine that the other guy’s argument is persuasive or that his published findings actually hold up, it can be hard to capture the contribution you’ve made to Team Science’s joint knowledge-building project on your Curriculum Vitae. Scientists even have detailed pre-publication engagement about results (sometimes as a journal submission is being peer reviewed, sometimes less formally), but helping someone else uncover her mistakes or biases may put her in a position to cross the discovery finish line before you do — and again, one doesn’t get much tangible career reward for providing this assist.

Teamwork is essential to making the knowledge scientists produce more objective. Yet the big career rewards in science seem clearly tied to individual achievement. Maybe the assumption is that some combination of competitive impulse (i.e., wanting to knock down other scientists’ putative knowledge claims) and community goodwill is enough to get scientists working together to find the errors and weed them out.

But maybe, if we think objectivity is a quality toward which scientific knowledge-building should strive, it would make sense to put some more concrete rewards in place to incentivize the scientific teamwork on which objectivity depends.
