Objectivity requires teamwork, but teamwork is hard.

In my last post, I set out to explain why the scientific quest to build something approaching objective knowledge requires help from other people. However, teamwork can be a challenge in the best of circumstances. And, certain aspects of scientific practices — especially in terms of how rewards are distributed — can make scientific teamwork even harder.

In this post, I’ll run down just some of the obstacles to scientists playing together effectively to build reliable knowledge about the world.

First, recall that a crucial thing individual scientists hope to get from their teammates in knowledge-building is help in identifying when they are wrong. The sociologist of science Robert Merton noted that a rule of the knowledge-building game, at least as far as Team Science is concerned, is organized skepticism, which I once described like this:

Everyone in the tribe of science can advance knowledge claims, but every such claim that is advanced is scrutinized, tested, tortured to see if it really holds up. The claims that do survive the skeptical scrutiny of the tribe get to take their place in the shared body of scientific knowledge.

In principle, each scientist tries to have their organized skepticism turned up to a healthy level when looking at their own results, as well as the results of others. In practice, there are issues that get in the way of both self-scrutiny and scrutiny of the results of others.

It’s hard to make a scientific career in replicating the results of others.

The first thing to recognize is that as serious as scientists are about the ideal of reproducible results, reproducibility is hard. It takes a while to gain technical mastery of all the moving parts in your experimental system and to figure out which of those wiggly bits make a difference in the results you see.

In itself, this needn’t be an obstacle to scientists working well together. The problem is that scientific rewards are usually reserved for those who generate novel findings — figuring out something that wasn’t known before — rather than for those who replicate results someone else has already put forward. What matters, for the career score-keeping (which drives who gets hired, who gets grant money, who gets promoted, who wins prizes) is whether you are first across the finish line to discover X. Being second (or third, or tenth) across that finish line is a nice reassurance that the first one across had a solid finding, but it doesn’t count in the same way.

Setting up the rewards so that the only winner is the first across the finish line may also provide a disincentive to doing enough experiments yourself to be sure that your results are really robust — the other guy may be sure enough to submit his manuscript on the basis of fewer runs, or might have gotten a head-start on it.

Now surely there are some exceptions, places perhaps where X was such a startlingly unexpected finding that the scientific community won’t really believe it until multiple researchers come forward to report that they have found it. But this is the exception rather than the rule, which means that if the second scientist to have found X cannot add some additional ingredient to our understanding of it that wasn’t part of the first report of X, that second researcher is out of luck.

Scientists are generally pretty smart. Among other things, this means most of them will come up with some strategy for spending their time that takes account of what activities will be rewarded. To the extent that working to replicate someone else’s results looks like a high-investment, low-yield activity, scientists may judge it prudent to spend their time doing something else.

It’s worth noting that scientists will frequently try to reproduce the results of others when those results are the starting point for a brand new piece of research of their own. These efforts can be time consuming and frustrating (see: “reproducibility is hard”). And, in the event that you discover that the other scientist’s results seem not to hold up, communicating this to the other scientist is not always viewed as a friendly gesture.

Questions about results can feel like personal attacks.

Scientists work hard to get their studies to work and to draw their best conclusions about what their observations mean — and, as we’ve just noted, they do this while racing against the clock in hopes that some other researcher doesn’t make the discovery (and secure the credit for it) first. Since scientists are human, they can get attached to those results they worked so hard to get.

It shouldn’t be a surprise, then, that they can get touchy when someone else pops into the fray to tell them that there’s a problem with those results.

If the results are wrong, scientists face the possibility that they have wasted a bunch of time and money, blood, sweat, and tears. As well, they may have to issue a correction or even a retraction of their published results, which means that the publication that they’re correcting or retracting will no longer do the same work to advance their career.

In such a situation, getting defensive is understandable. However, getting defensive doesn’t do much to advance the knowledge-building project that science is supposed to be.

None of this is to say that an objection raised to one’s results should be automatically accepted as true. Organized skepticism applies to the critiques as well as to the original results.

That said, though, it strikes me that the best way to the knowledge-building, error-detecting teamwork that the tribe of science could use here might be establishing environments in scientific communities (from research groups to departments to disciplines) where researchers don’t take scrutiny of their results, data, methods, etc., personally — and where the scrutiny is applied to each member’s results, data, methods, etc. (since anyone can make mistakes).

When the players understand the game as aimed at building a reliable body of knowledge about the world that they can share, maybe they can be more welcoming of others pointing out their errors. When the game is understood as each scientist against all the others, pulling back to look critically at problems with one’s own work (especially when they are pointed out by a competitor) doesn’t look like such a great strategy.

(Unwillingness to take critiques of promising results seriously seems to have been a major feature of the Bengü Sezen/Dalibor Sames fraud scandal, and may also have played a role in the downfall of Harvard psychologist Marc Hauser.)

Sharing too much common ground makes it harder to be objective, too.

Part of the reason that scientists try to be alert to ways they could be deceived, even by themselves, is that the opportunities for deception are plentiful. One of the issues you have to face is that your expectations can influence what you see.

We don’t even have to dig into Thomas S. Kuhn’s The Structure of Scientific Revolutions, embrace his whole story about paradigms, or even peruse the perception experiments he describes to accept that this is a possibility. The potential effect of expectations on observations is one reason that placebo-controlled trials are “double-blind” whenever possible, so neither experimental subject nor researcher is swayed by what they think ought to be occurring. Expectations also play a role in what kind of scientific findings are accepted easily into the shared body of knowledge (because they seem to fit so naturally with what we already know) and which ones are resisted (because they don’t fit so well, and might even require us to identify some of the things we thought we “knew” as wrong). And, expectations can influence scientific knowledge by shaping what kinds of questions researchers ask, what kinds of methods they decide to use to tackle those questions, and what kinds of outcomes they view as within the realm of possible outcomes. (If you’re doing an experiment and you see an outcome outside the range of expectations, often the first thing you check is whether the equipment is malfunctioning.)

Working with other people helps scientists build better knowledge by giving them some information about which observed outcomes are driven by the features of the phenomenon they’re trying to understand and which are driven by subjective features (like expectations). But the other people are most helpful here if their expectations and background assumptions are not identical to our own!

In the case that a scientific community shares all the same background assumptions, expectations, even unconscious biases, these things — and the ways that they can influence how experiments are designed and what findings they produce — may become almost invisible to the scientists in the community.

What this means is that it may be a healthy thing for a community of knowledge-builders to be diverse. How diverse? Ideally, you’d want the community to achieve enough diversity that it’s hard for each individual’s background assumptions to stay in the background, because you’re always in spitting distance of another individual with different background assumptions. Recognizing them as assumptions rather than necessarily true facts about the world can keep potential errors or oversights due to these assumptions on a shorter leash.

Each of these obstacles is linked to an over-arching challenge for scientific teamwork:

Teamwork often isn’t recognized or rewarded in scientific career score-keeping.

Scientists work with each other a lot to divide up aspects of complex research projects, but when it comes time to tally up the score what sometimes seems to matter most is who ends up being first author and who’s lost in the et al. Scientists have detailed discussions about published research in their field, enacting something like post-publication peer review, but if you determine that the other guy’s argument is persuasive or that his published findings actually hold up, it can be hard to capture the contribution you’ve made to Team Science’s joint knowledge-building project on your Curriculum Vitae. Scientists even have detailed pre-publication engagement about results (sometimes as a journal submission is being peer reviewed, sometimes less formally), but helping someone else uncover her mistakes or biases may put her in position to cross the discovery finish line before you do — and again, one doesn’t get much tangible career reward for providing this assist.

Teamwork is essential to making the knowledge scientists produce more objective. Yet the big career rewards in science seem clearly tied to individual achievement. Maybe the assumption is that some combination of competitive impulse (i.e., wanting to knock down other scientists’ putative knowledge claims) and community goodwill is enough to get scientists working together to find the errors and weed them out.

But maybe, if we think objectivity is a quality towards which scientific knowledge should be striving, it would make sense to put some more concrete rewards in place to incentivize the scientific teamwork on which objectivity depends.

The objectivity thing (or, why science is a team sport).

One of the qualities we expect from good science is objectivity. And, we’re pretty sure that the scientific method (whatever that is) has something to do with delivering scientific knowledge that is objective (or more objective than it would be otherwise, at any rate).

In this post, I’m here to tell you that it’s more complicated than that — at least, if you’re operating with the picture of the scientific method you were taught in middle school. What we’ll see is that objectivity requires more than a method; it takes a team.

(I’ll briefly note that my discussion of objectivity, subjectivity, and scientific knowledge building owes much to Helen E. Longino’s book, Science as Social Knowledge. If you want to get into the epistemic complexities of this issue, you may want to read through the comments on this old but related post on my other blog.)

But let’s start at the beginning. What do we mean by objectivity?

It may be useful to start with the contrast to objective: subjective. If I put forward the claim “Friday Night Lights is the best television series ever!” you may agree or disagree. However, you might also point out that this looks like the kind of claim where it seems wrong to assert there’s a definite truth value (true or false). Why? Because it seems unlikely that there’s a fact of the matter “in the world” about what is the best television series ever — that is, a fact outside my head, or your head, or someone else’s head.

“Friday Night Lights is the best television series ever!” is a subjective claim. It isn’t pointing to a fact in the world, but rather to a fact about my experience of the world. There is no reason to think your experience of the world will be the same as mine here; it’s a matter of opinion what the best TV show is.

Of course, if we want to be more precise, we can note that facts about my (subjective) experience of the world are themselves facts in the world (since I’m in the world while I’m having the experience). However, these are not facts in the world that you could verify independently. This means if you want to know how the world seems to me, you’ll have to take my word for it. Moreover, social scientists and opinion pollsters (among others) work very hard to nail down an objective picture of a population’s subjective experience, trying to quantify opinions about TV shows or political candidates or new flavors of potato chips.

Generally speaking, though, we look to science to deliver something other than mere opinions. What we hope science will find for us is a set of facts about the world outside our heads. This brings us to one sense of the word objective: what the world is really like (as opposed to merely how it seems to me).

Another sense of objective heightens the contrast with the subjective: what anyone could discover to be so. We’re looking for facts that other people could discover as well, and trying to make claims whose truth other people could verify independently. That discovery and verification is generally taken to be conducted by way of some sense organ or another, so we probably need to modify this sense of objective to “what anyone with reasonably well-functioning sense organs could discover to be so”.

There’s a connection between these two senses of “objective” that captures some of the appeal of science as a route to knowledge.

One of the big ideas behind science is that careful observation of our world can bring us to knowledge about that world. This may seem really obvious, but it wasn’t always so. Prior to the Renaissance, recognized routes to knowledge were few and far between: what was in sacred texts, or revealed by the deity (to the select few to whom the deity was revealing truths), or what was part of the stock of practical knowledge passed on by guilds (but only to other members of these guilds). If you couldn’t get your hands on the sacred texts (and read them yourself), or have a revelation, or become a part of a guild, you had to depend on others for your knowledge.

The recognition that anyone with a reasonably well-functioning set of sense organs and with the capacity to reason could discover truths about the world — cutting out the knowledge middleman, as it were — was a radical, democratizing move. (You can find a lovely historical discussion of this shift in an essay by Peter Machamer, “The Concept of the Individual and the Idea(l) of Method in Seventeenth-Century Natural Philosophy,” in the book Scientific Controversies: Philosophical and Historical Perspectives.)

But, in pointing your sense organs and your powers of reason at the world in order to know that world, there’s still the problem of separating how things actually are from how things seem to you. You want to be able to tell which parts of your experience are merely your subjective impression of things and which parts of your experience reflect the structure of the world you are experiencing.

Can the scientific method help us with this?

Again, this depends on what you mean by the scientific method. Here’s a fairly typical presentation of “the scientific method”, found on the ScienceBuddies website:

The steps of the scientific method are to:

  • Ask a Question
  • Do Background Research
  • Construct a Hypothesis
  • Test Your Hypothesis by Doing an Experiment
  • Analyze Your Data and Draw a Conclusion
  • Communicate Your Results

Except for the very last bullet point (which suggests someone to whom you communicate your results), this list of steps makes it look like you could do science — and build a new piece of knowledge — all by yourself. You decide (as you’re formulating your question) which piece of the world you want to understand better, come up with a hunch (your hypothesis), figure out a strategy for getting empirical evidence from the world that bears on that hypothesis (and, one hopes, that would help you discover whether that hypothesis is wrong), implement that strategy (with observations or experiments), and know more than you did before.

But, as useful as this set of steps may be, it’s important to remember that the scientific method isn’t an automatic procedure. The scientific method is not a knowledge-making box where you feed in data and collect reliable conclusions from the output bin.

More to the point, it’s not a procedure you can use all by yourself to make objective knowledge. The procedure is a good first step, but if you’re building objective knowledge you need other people.

Here’s the thing: we find out the difference between objective facts and subjective impressions of the world by actually sharing a world with other people whose subjective impressions about the world differ from our own. (Given the opacity of what’s in our minds, there also needs to be some kind of communication between us and these people with whom we’re sharing the world.) We discover that some things don’t seem the same to all of us: Not everyone likes Friday Night Lights. Not everyone finds knock-knock jokes hilarious. Not everyone hates the flavor of asparagus. Not everyone finds a ’66 Mustang beautiful.

But, if you had the world all to yourself, how would you be able to tell which parts of your experience of the world were objective and which were subjective? How, in other words, would you be able to distinguish the parts of your experience that were more reflective of actual features of the world you were experiencing from the parts of your experience that were more reflective of you as the experiencer?

It’s not clear to me that you could.

If you had the world to yourself, maybe making this distinction just wouldn’t matter. By definition, your experience would be universal. (Still, it might be helpful to be able to figure out whether some bits of your experience were more reliable in identifying real features of the world that mattered for your well being — judging “This fire feels great!” as you were sitting down on the blaze wouldn’t elicit an opposing view, but it might present problems for the continued functioning of your body.)

Our confidence that our experiences are tracking features of the world outside our head depends on our interaction with other people. And let’s be clear that we don’t just need other people to help us identify squishy “value judgments” about what feels good, tastes bad, is the best album, etc. Those senses we use to get knowledge about the world can deceive us, and the sensory information they deliver can be influenced by expectations and by past experiences. However, if we can compare notes with someone else, pointing her sense organs at the same piece of the world at which we’re pointing ours, we have a better chance of working out which parts of that experience are forced by features of the world bumping against human sense organs (i.e., the parts of our experiences of the world where there’s agreement) and which are due to the squishy subjective stuff (i.e., the parts of our experiences of the world where there’s a lot of disagreement).

Comparing notes with more people should get us closer to working out “what anyone could see (or smell, or taste, or hear, or feel)” in a particular domain of the world. Finding the common ground among people whose subjective experiences vary greatly doesn’t guarantee that what we agree about gives us the true facts about how the world really is, but it surely gets us a lot closer than any of us could get all by ourselves.

It’s worth noting that even if the textbook bulleted list version of the scientific method makes it look like you could go it alone, real scientific practice builds in the teamwork that makes the resulting knowledge more objective.

One place you can see this is in the ideal of reproducible experiments. If you’re to be able to claim that a particular experimental set-up produces a particular observable outcome (where you’ll probably also want to provide an explanation for why this is so), you first want to nail down that this set-up produces that outcome more than once. More than this, you’ll want to establish that this set-up produces that outcome no matter who conducts the experiment, and whether she conducts the experiment in this lab or some other lab. Without some kind of check that the results are “robust” (i.e., that they can be reproduced following the same procedure), there’s always the worry that the exciting results you’re seeing might be the result of an equipment malfunction, or a mislabeled chemical reagent — or even of your eyes deceiving you. But if others can follow the same procedures and produce the same results, the odds are better that the results are coming from the piece of the world you think they are.

Peer review, whether of the formal pre-publication sort or the less formal post-publication conversations scientific communities have, is another element of scientific practice that depends on teamwork. Here’s how I described peer review in a post of yore:

It’s worth noting that “peer review” can encompass different things.

Peer review describes the formal process through which manuscripts that have been submitted to journal editors are then sent to reviewers with relevant expertise for their evaluation. These reviewers then reply to the journal editors with their evaluation of the manuscript — whether it should be accepted, resubmitted after revision, or rejected — and their comments on particular aspects of the manuscript (this conclusion would be more solid if it were supported by this kind of analysis of the data, that data looks more equivocal than the authors seem to think it is, this part of the materials and methods is confusingly written, the introduction could be much more concise, etc., etc.). The editor passes on the feedback to the author, the author responds to that feedback (either by making changes in the manuscript or by presenting the editor with a persuasive argument that what a reviewer is asking for is off base or unreasonable), and eventually the parties end up with a version of the paper deemed good enough for publication (or the author gives up, or tries to get a more favorable hearing from another journal).

This flavor of peer review is very much focused on making sure that papers published in scientific journals meet a certain standard of quality or acceptability to the other scientists who will be reading those papers. There’s a lot of room for disagreement about what sort of quality is produced here, about how conservative reviewers can be when faced with new ideas or approaches, about how often reviewer judgments can be overturned by the judgment of editors (and whether that is on balance a good thing or a bad thing). As we’ve discussed before, the quality control here does not typically include reviewers actually trying to replicate the experiments described in the manuscripts they are reviewing.

Still, there’s something about peer review that a great many scientists think is important, at least when they want to be able to consult the literature in their discipline. If you want to see how your results fit with the results that others are reporting in similar lines of research, or if you’re looking for promising instrumental or theoretical approaches to a tenacious scientific puzzle, it’s good to have some reason to trust what’s reported in the literature. Otherwise, you have to do all the verification yourself.

And this is where a sort of peer review becomes important to the essence of science…

The scientist, looking at the world and trying to figure out some bit of it, is engaged in theorizing and observing, in developing hunches and then testing those hunches. The scientist wants to end up with a clearer understanding of how that bit of the world is behaving, and of what could explain that behavior.

And ultimately, the scientist relies on others to get that clearer understanding.

To really trust our observations, they need to be observations that others could make as well. To really buy our own explanations for what we observe, we need to be ready to put those explanations out for the inspection of others who might find some flaw in them, some untested assumption that doesn’t hold up to close scrutiny.

Science may be characterized by an attitude toward the world, an attitude that gets us asking particular kinds of questions, but the systematic approach to answering these questions requires the participation of other people working with the same basic assumptions about how we can engage with the world to understand it better. Those other people are peers, and their participation is a kind of review.

In both the ideal of reproducibility and the practice of peer review, we can see that the scientist’s commitment to producing knowledge that is as objective as possible is closely tied to an awareness that we can be wrong and a desire not to be deceived — even by ourselves.

Science is a team sport because we need other people in order to build something approaching objective knowledge.

However, teamwork is hard. In a follow-up post, I’ll take up some of the challenges scientists face in playing as a team, and how this might bear on the knowledge building scientists are trying to accomplish.

Stay tuned!

Dividing cognitive labor, sharing a world: the American public and climate science.

It’s not just scientists who think science is up to something important. Even non-scientists are inclined to think that scientific knowledge claims have a special grip on our world, that they are likely to give us information or insight that will help us move through that world more successfully.

But scientists and non-scientists alike recognize that we can separate the questions:

  1. What is the world like?
  2. What should we do?

The answer to the first question can inform (or constrain) our answer to the second question, but the common wisdom among scientists themselves is that the facts can’t tell us what to do about the facts.

Let’s talk about “Doing Good Science”.

Welcome to my shiny new blog at Scientific American! Here, we’ll be talking about what’s involved in doing good science — and about what ethics has to do with it.

Doing good science includes:

Building a reliable body of knowledge about the world and how it works.
The world is full of phenomena, and the basic hunch that gets science off the ground is that we humans can make sense of those phenomena. But accurately describing the bits of our world and untangling how they work is hard. It’s a project that requires care and attention to details. It requires being objective. It requires being honest.

Doing good science is not just a matter of not deceiving others. It also involves guarding against self-deception.

Building a well-functioning scientific community.
Scientific knowledge building is not a solo operation but a team effort. It’s not just that we need help in figuring out the many phenomena in our world (although given just how much is going on in the world, splitting up the terrain helps). Rather, building more objective knowledge requires that we have something like a community of knowers.

Honesty is a crucial piece of what a scientific community needs to do its knowledge building together, but so is fairness. When the moving parts in your knowledge-building machine are people, things get interesting.

Training new scientists.
Scientists all come from somewhere, and the training of new scientists happens in most of the places that scientific research is done. This means that the knowledge is built as the knowledge-builders are built.

Doing good science includes helping scientific trainees learn how to build reliable knowledge, and helping them support well-functioning scientific communities. Doing good science also includes treating scientists-in-training ethically.

Interacting with the larger society.
Even if all scientists lived and worked full-time on Science Island, isolated from non-scientists, they would still need ethics to get the scientific job done. But in the real world, scientists walk among us.

There are transfers of resources (including but not limited to money) from larger societies to scientific communities to support knowledge building and the training of new scientists. In turn, those scientific communities share the knowledge they have built, and deploy some of those scientists to tackle the problems the public wants or needs solved.

Sometimes it seems like scientific communities and the larger societies in which they are embedded aren’t always listening to each other. Doing good science in a world bigger than Science Island requires figuring out how to take each other’s interests, values, and questions seriously.

And, although you might quibble about whether it’s part of doing good science, being a good scientist surely involves sharing a world. Fulfilling your duties as a scientist does not excuse you from your duties as a member of the human community.

Doing good science isn’t always easy. It requires paying attention, being creative, putting your shoulder into the task, and often some amount of luck.

Then again, so does being a good human being.

* * * * *

About your blogger:

My name is Janet D. Stemwedel, and I’m an Associate Professor of Philosophy at San José State University (in San José, California). My teaching and research are focused on the philosophy of science, the responsible conduct of research, and the ways epistemology (knowledge building) and ethics are intertwined.

I didn’t always know I was going to be an academic philosopher. For a while, I thought I was going to be a chemist when I grew up, and even earned a Ph.D. in physical chemistry. However, I found that the questions that really kept me up at night were philosophical questions about science, rather than scientific questions. So, I went back to school and earned a Ph.D. in philosophy with a focus on history and philosophy of science. Among other things, this means I have great sympathy for those who feel like they have been in school forever.

I live in the San Francisco Bay Area with my better half, two offspring (currently in the 10-12 age bracket), and an adopted New Zealand White rabbit, all of whom (except maybe the rabbit) have an interest in how science works and fits into our world.

You can contact me by email (dr dot freeride at gmail dot com) or find me on Twitter (@docfreeride).

Haven’t I seen you before?

I have another blog that I’ve been writing for going on seven years now (yikes!) called Adventures in Ethics and Science. Indeed, if you want a sense of some of what we’ll be talking about here, you might be interested in some of my archived posts there:

And, if conversations with kids about science are your cup of tea, you might be interested in my Friday Sprog Blogging.

Who made your banner?

P.D. Magnus, a fellow philosopher of science who also happens to have mad design skillz, created the Doing Good Science banner.