C.K. Gunsalus on responsible — and prudent — whistleblowing.

In my last post, I considered why, despite good reasons to believe that social psychologist Diederik Stapel’s purported results were too good to be true, the scientific colleagues and students who were suspicious of his work were reluctant to pursue these suspicions. Questioning the integrity of a member of your professional community is hard, and blowing the whistle on misconduct and misbehavior can be downright dangerous.

In her excellent article “How to Blow the Whistle and Still Have a Career Afterwards”, C. K. Gunsalus describes some of the challenges that come from less than warm community attitudes towards members who point out wrongdoing:

[Whistleblowers pay a high price] due to our visceral cultural dislike of tattletales. While in theory we believe the wrong-doing should be reported, our feelings about practice are more ambivalent. …

Perhaps some of this ambivalence is rooted in fear of becoming oneself the target of maliciously motivated false charges filed by a disgruntled student or former colleague. While this concern is probably overblown, it seems not far from the surface in many discussions of scientific integrity. (p. 52)

I suspect that much of this is a matter of empathy — or, more precisely, of which members of our professional community we empathize with. Maybe we have an easier time empathizing with the folks who seem to be trying to get along, rather than those who seem to be looking for trouble. Or maybe we have more empathy for our colleagues, with whom we share experiences and responsibilities and the expectation of long-term, durable bonds, than we have for our students.

But perhaps distaste for a tattletale is more closely connected to our distaste for the labor involved in properly investigating allegations of wrongdoing and then, if wrongdoing is established, addressing it. It would certainly be easier to assume the charges are baseless, and sometimes disinclination to investigate takes the form of finding reasons not to believe the person raising the concerns.

Still, if scientists’ psychology does not permit them to take allegations of misbehavior seriously, there is no plausible way for science to be self-correcting. Gunsalus writes:

[E]very story has at least two sides, and a problem often looks quite different when both are in hand than when only one perspective is in view. The knowledge that many charges are misplaced or result from misunderstandings reinforces ingrained hesitancies against encouraging charges without careful consideration.

On the other hand, serious problems do occur where the right and best thing for all is a thorough examination of the problem. In most instances, this examination cannot occur without someone calling the problem to attention. Early, thorough review of potential problems is in the interest of every research organization, and conduct that leads to it should be encouraged. (p. 53)

(Bold emphasis added.)

Gunsalus’s article (which you should read in full) takes account of these negative attitudes towards whistleblowers, despite the importance of rooting out misconduct, and offers a sensible strategy for bringing wrongdoing to light without losing your membership in your professional community. She lays out “rules for responsible whistleblowing”:

  1. Consider alternative explanations (especially that you may be wrong).
  2. In light of #1, ask questions, do not make charges.
  3. Figure out what documentation supports your concerns and where it is.
  4. Separate your personal and professional concerns.
  5. Assess your goals.
  6. Seek advice and listen to it.

and her “step-by-step procedures for responsible whistleblowing”:

  1. Review your concern with someone you trust.
  2. Listen to what that person tells you.
  3. Get a second opinion and take that seriously, too.
  4. If you decide to initiate formal proceedings, seek strength in numbers.
  5. Find the right place to file charges; study the procedures.
  6. Report your concerns.
  7. Ask questions; keep notes.
  8. Cultivate patience!

The focus is very much on moving beyond hunches to establish clear evidence — and on avoiding self-deception. The potential whistleblower must hope that those to whom he or she is bringing concerns are themselves committed to looking at the available evidence and to avoiding self-deception.

Sometimes this is the situation, as it seems to have been in the Stapel case. In other cases, though, whistleblowers have done everything Gunsalus recommends and still found themselves without the support of their community. This is not just a bad thing for the whistleblowers. It is also a bad thing for the scientific community and the reliability of the shared body of knowledge it tries to build.
_____
C. K. Gunsalus, “How to Blow the Whistle and Still Have a Career Afterwards,” Science and Engineering Ethics, 4(1) 1998, 51-64.

Reluctance to act on suspicions about fellow scientists: inside the frauds of Diederik Stapel (part 4).

It’s time for another post in which I chew on some tidbits from Yudhijit Bhattacharjee’s incredibly thought-provoking New York Times Magazine article (published April 26, 2013) on social psychologist and scientific fraudster Diederik Stapel. (You can also look at the tidbits I chewed on in part 1, part 2, and part 3.) This time I consider the question of why it was that, despite mounting clues that Stapel’s results were too good to be true, other scientists in Stapel’s orbit were reluctant to act on their suspicions that Stapel might be up to some sort of scientific misbehavior.

Let’s look at how Bhattacharjee sets the scene in the article:

[I]n the spring of 2010, a graduate student noticed anomalies in three experiments Stapel had run for him. When asked for the raw data, Stapel initially said he no longer had it. Later that year, shortly after Stapel became dean, the student mentioned his concerns to a young professor at the university gym. Each of them spoke to me but requested anonymity because they worried their careers would be damaged if they were identified.

The bold emphasis here (and in the quoted passages that follow) is mine. I find it striking that even now, when Stapel has been thoroughly discredited as a trustworthy scientist, these two members of the scientific community feel safer not being identified. It’s not entirely obvious to me whether their worry is being identified as someone who was suspicious that fabrication was taking place but who said nothing to launch official inquiries, or whether they fear that being identified as someone who was suspicious of a fellow scientist could harm their standing in the scientific community.

If you dismiss that second possibility as totally implausible, read on:

The professor, who had been hired recently, began attending Stapel’s lab meetings. He was struck by how great the data looked, no matter the experiment. “I don’t know that I ever saw that a study failed, which is highly unusual,” he told me. “Even the best people, in my experience, have studies that fail constantly. Usually, half don’t work.”

The professor approached Stapel to team up on a research project, with the intent of getting a closer look at how he worked. “I wanted to kind of play around with one of these amazing data sets,” he told me. The two of them designed studies to test the premise that reminding people of the financial crisis makes them more likely to act generously.

In early February, Stapel claimed he had run the studies. “Everything worked really well,” the professor told me wryly. Stapel claimed there was a statistical relationship between awareness of the financial crisis and generosity. But when the professor looked at the data, he discovered inconsistencies confirming his suspicions that Stapel was engaging in fraud.

If one has suspicions about how reliable a fellow scientist’s results are, doing some empirical investigation seems like the right thing to do. Keeping an open mind and then examining the actual data might well show one’s suspicions to be unfounded.

Of course, that’s not what happened here. So, with a reason for doubt now backed by stronger empirical support (not to mention the fact that scientists are trying to build a shared body of scientific knowledge, which means that unreliable papers in the literature can hurt the knowledge-building efforts of other scientists who trust that the work reported there was done honestly), you would think the time was right for this professor to pass on what he had found to those at the university who could investigate further. Right?

The professor consulted a senior colleague in the United States, who told him he shouldn’t feel any obligation to report the matter.

For all the talk of science, and the scientific literature, being “self-correcting,” it’s hard to imagine the precise mechanism for such self-correction in a world where no scientist who is aware of likely scientific misconduct feels any obligation to report the matter.

But the person who alerted the young professor, along with another graduate student, refused to let it go. That spring, the other graduate student examined a number of data sets that Stapel had supplied to students and postdocs in recent years, many of which led to papers and dissertations. She found a host of anomalies, the smoking gun being a data set in which Stapel appeared to have done a copy-paste job, leaving two rows of data nearly identical to each other.

The two students decided to report the charges to the department head, Marcel Zeelenberg. But they worried that Zeelenberg, Stapel’s friend, might come to his defense. To sound him out, one of the students made up a scenario about a professor who committed academic fraud, and asked Zeelenberg what he thought about the situation, without telling him it was hypothetical. “They should hang him from the highest tree” if the allegations were true, was Zeelenberg’s response, according to the student.

Some might think these students were being excessively cautious, but the sad fact is that scientists faced with allegations of misconduct against a colleague — especially allegations brought by students — frequently side with their colleague and retaliate against those making the allegations. Students, after all, are new members of one’s professional community, so green that one might not even think of them as full members. They have low status, they are still learning how things work, and they are judged likely to have misunderstood what they have seen. And, in contrast to one’s colleagues, students are transients: they are just passing through the training program, whereas you might hope to be with your colleagues for your whole professional life. In a case of dueling testimony, who are you more likely to believe?

Maybe the question should be whether your bias towards believing one over the other is strong enough to keep you from examining the available evidence to determine whether your trust is misplaced.
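As an aside for readers who work with data: the “copy-paste” smoking gun described above (two rows of data that are nearly identical) is the kind of anomaly a few lines of screening code can surface. Here is a minimal sketch, assuming the responses sit in a CSV file loaded with pandas; the file name and column layout are hypothetical, and this is an illustration rather than a reconstruction of what the students actually did.

    import pandas as pd

    # Hypothetical file of per-subject responses; the name and layout are
    # illustrative only, not the actual data set from the Stapel case.
    df = pd.read_csv("study_responses.csv")

    # Exact duplicates: rows identical to an earlier row in every column.
    exact_dupes = df[df.duplicated(keep="first")]

    # Near-duplicates: pairs of rows that disagree in at most one column,
    # the kind of copy-paste-and-tweak artifact described in the article.
    # A brute-force pairwise check is fine for lab-sized samples.
    values = df.to_numpy()
    near_dupe_pairs = []
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            mismatches = int((values[i] != values[j]).sum())
            if mismatches <= 1:
                near_dupe_pairs.append((i, j))

    print(f"{len(exact_dupes)} exact duplicate rows")
    print(f"{len(near_dupe_pairs)} near-duplicate row pairs: {near_dupe_pairs}")

Surfacing an anomaly like this is only a starting point, of course; it is the kind of documentation Gunsalus recommends gathering before raising questions, not proof of misconduct on its own.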

The students waited till the end of summer, when they would be at a conference with Zeelenberg in London. “We decided we should tell Marcel at the conference so that he couldn’t storm out and go to Diederik right away,” one of the students told me.

In London, the students met with Zeelenberg after dinner in the dorm where they were staying. As the night wore on, his initial skepticism turned into shock. It was nearly 3 when Zeelenberg finished his last beer and walked back to his room in a daze. In Tilburg that weekend, he confronted Stapel.

It might not be universally true, but at least some of the people who will lie about their scientific findings in a journal article will lie right to your face about whether they obtained those findings honestly. Yet lots of us think we can tell — at least with the people we know — whether they are being honest with us. This hunch can be just as wrong as the wrongest scientific hunch waiting for us to accumulate empirical evidence against it.

The students seeking Zeelenberg’s help in investigating Stapel’s misbehavior created a situation in which Zeelenberg would have to look at the empirical evidence before he looked his colleague in the eye and asked him whether he was fabricating his results. They had already gotten him to say, at least in the abstract, that the kind of behavior they had reason to believe Stapel was engaging in was unacceptable in their scientific community. To make a conscious decision to ignore the empirical evidence would have meant Zeelenberg would have to see himself as displaying a kind of intellectual dishonesty — because if fabrication is harmful to science, it is harmful to science no matter who perpetrates it.

As it was, Zeelenberg likely had to make the painful concession that he had misjudged his colleague’s character and trustworthiness. But having wrong hunches in science is much less of a crime than clinging to those hunches in the face of mounting evidence against them.

Doing good science requires a delicate balance of trust and accountability. Scientists’ default position is to trust that other scientists are making honest efforts to build reliable scientific knowledge about the world, using empirical evidence and methods of inference that they display for the inspection (and critique) of their colleagues. Not to hold this default position means you have to build all your knowledge of the world yourself (which makes achieving anything like objective knowledge really hard). However, this trust is not unconditional, which is where the accountability comes in. Scientists recognize that they need to be transparent about what they did to build the knowledge — to be accountable when other scientists ask questions or disagree about conclusions — or else that trust evaporates. When the evidence warrants it, distrusting a fellow scientist is not mean or uncollegial — it’s your duty. We need the help of others to build scientific knowledge, but if they insist that we ignore evidence of their scientific misbehavior, they’re not actually helping.

Scientific training and the Kobayashi Maru: inside the frauds of Diederik Stapel (part 3).

This post continues my discussion of issues raised in the article by Yudhijit Bhattacharjee in the New York Times Magazine (published April 26, 2013) on social psychologist and scientific fraudster Diederik Stapel. Part 1 looked at how expecting to find a particular kind of order in the universe may leave a scientific community more vulnerable to a fraudster claiming to have found results that display just that kind of order. Part 2 looked at some of the ways Stapel’s conduct did harm to the students he was supposed to be training to be scientists. Here, I want to point out another way that Stapel failed his students — ironically, by shielding them from failure.

Bhattacharjee writes:

[I]n the spring of 2010, a graduate student noticed anomalies in three experiments Stapel had run for him. When asked for the raw data, Stapel initially said he no longer had it. Later that year, shortly after Stapel became dean, the student mentioned his concerns to a young professor at the university gym. Each of them spoke to me but requested anonymity because they worried their careers would be damaged if they were identified.

The professor, who had been hired recently, began attending Stapel’s lab meetings. He was struck by how great the data looked, no matter the experiment. “I don’t know that I ever saw that a study failed, which is highly unusual,” he told me. “Even the best people, in my experience, have studies that fail constantly. Usually, half don’t work.”

In the next post, we’ll look at how this other professor’s curiosity about Stapel’s too-good-to-be-true results led to the unraveling of Stapel’s fraud. But I think it’s worth pausing here to say a bit more about what a very odd training environment Stapel’s research group provided for his students.

None of his studies failed. Since, as we saw in the last post, Stapel was also conducting (or, more accurately, claiming to conduct) his students’ studies, that means none of his students’ studies failed.

This is pretty much the opposite of every graduate student experience in an empirical field that I have heard described. Most studies fail. Getting to a 50% success rate with your empirical studies is a significant achievement.
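To make the “too good to be true” point concrete, here is a back-of-the-envelope illustration (the numbers are mine, purely illustrative, not anything from the article): even if each study independently had a 50 percent chance of working, an unbroken run of successes becomes wildly improbable very quickly.

    # Probability that every one of n independent studies "works",
    # assuming each succeeds with probability p (illustrative numbers only).
    p = 0.5
    for n in (5, 10, 20):
        print(f"P(all {n} studies succeed) = {p ** n:.6f}")

With 20 studies in a row all “working”, that probability is already down around one in a million.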

Graduate students who are also Trekkies usually come to recognize that the travails of empirical studies are like a version of the Kobayashi Maru.

Introduced in Star Trek II: The Wrath of Khan, the Kobayashi Maru is a training simulation in which Starfleet cadets are presented with a civilian ship in distress. Saving the civilians requires the cadet to violate treaty by entering the Neutral Zone (and in the simulation, this choice results in a Klingon attack and the boarding of the cadet’s ship). Honoring the treaty, on the other hand, means abandoning the civilians and their disabled ship in the Neutral Zone. The Kobayashi Maru is designed as a “no-win” scenario; the intent of the test is to discover how trainees face such a situation. Wikipedia notes that, owing to James T. Kirk’s performance on the test, some Trekkies also view the Kobayashi Maru as a problem whose solution depends on redefining the problem.

Scientific knowledge-building turns out to be packed with plans that cannot succeed at yielding the particular pieces of knowledge scientists hope to discover. This is because scientists formulate their plans on the basis of what is already known in order to reveal what isn’t yet known — so knowing where to look, what tools to use to do the looking, or what other features of the world are there to confound your ability to get clear information with those tools is pretty hard.

Failed attempts happen. If they’re the sort of thing that will crush your spirit and leave you unable to shake it off and try again, or to come up with a new strategy, then the life of a scientist will be a pretty hard one for you.

Grown-up scientists have studies fail all the time. Graduate students training to be scientists do, too. But graduate students also have mentors who are supposed to help them bounce back from failure — to figure out the most likely sources of failure, whether it’s worth trying the study again, whether a new approach would be better, whether some crucial piece of knowledge has been learned despite the failure of what was planned. Mentors give scientific trainees a set of strategies for responding to particular failures, and they also give reassurance that even good scientists fail.

Scientific knowledge is built by actual humans who don’t have perfect foresight about the features of the world as yet undiscovered, humans who don’t have perfectly precise instruments (or hands and eyes using those instruments), humans who sometimes mess up in executing their protocols. Yet the knowledge is built, and it frequently works pretty well.

In the context of scientific training, it strikes me as malpractice to send new scientists out into the world with the expectation that all of their studies should work, and without any experience grappling with studies that don’t work. Shielding his students from their Kobayashi Maru is just one more way Diederik Stapel cheated them out of a good scientific training.

Failing the scientists-in-training: inside the frauds of Diederik Stapel (part 2).

In this post, I’m continuing my discussion of the excellent article by Yudhijit Bhattacharjee in the New York Times Magazine (published April 26, 2013) on social psychologist and scientific fraudster Diederik Stapel. The last post considered how being disposed to expect order in the universe might have made other scientists in Stapel’s community less critical of his (fabricated) results than they could have been. Here, I want to shift my focus to some of the harm Stapel did beyond introducing lies to the scientific literature — specifically, the harm he did to the students he was supposed to be training to become good scientists.

I suppose it’s logically possible for a scientist to commit misconduct in a limited domain — say, to make up the results of his own research projects but to make every effort to train his students to be honest scientists. This doesn’t strike me as a likely scenario, though. Publishing fraudulent results as if they were factual is lying to one’s fellow scientists — including the generation of scientists one is training. Moreover, most research groups pursue interlocking questions, meaning that the questions the grad students are working to answer generally build on pieces of knowledge the boss has built — or, in Stapel’s case, “built”. This means that at minimum, a fabricating PI is probably wasting his trainees’ time by letting them base their own research efforts on claims that there’s no good scientific reason to trust.

And as Bhattacharjee describes the situation for Stapel’s trainees, things for them were even worse:

He [Stapel] published more than two dozen studies while at Groningen, many of them written with his doctoral students. They don’t appear to have questioned why their supervisor was running many of the experiments for them. Nor did his colleagues inquire about this unusual practice.

(Bold emphasis added.)

I’d have thought that one of the things a scientist-in-training hopes to learn in the course of her graduate studies is not just how to design a good experiment, but how to implement it. Making your experimental design work in the real world is often much harder than it seems like it will be, but you learn from these difficulties — about the parameters you ignored in the design that turn out to be important, about the limitations of your measurement strategies, about ways the system you’re studying frustrates the expectations you had about it before you were actually interacting with it.

I’ll even go out on a limb and say that some experience doing experiments can make a significant difference in a scientist’s skill in conceiving of experimental approaches to problems.

That Stapel cut his students out of doing the experiments was downright weird.

Now, scientific trainees probably don’t have the most realistic picture of precisely what competencies they need to master to become successful grown-up scientists in a field. They trust that the grown-up scientists training them know what these competencies are, and that these grown-up scientists will make sure that they encounter them in their training. Stapel’s trainees likely trusted him to guide them. Maybe they thought that he would have them conducting experiments if that were a skill that would require a significant amount of time or effort to master. Maybe they assumed that implementing the experiments they had designed was just so straightforward that Stapel thought they were better served working to learn other competencies instead.

(For that to be the case, though, Stapel would have to be the world’s most reassuring graduate advisor. I know my impostor complex was strong enough that I wouldn’t have believed I could do an experiment my boss or my fellow grad students viewed as totally easy until I had actually done it successfully three times. If I had to bet money, it would be that some of Stapel’s trainees wanted to learn how to do the experiments, but they were too scared to ask.)

There’s no reason, however, that Stapel’s colleagues should have thought it was OK that his trainees were not learning how to do experiments by taking charge of doing their own. If they did know and they did nothing, they were complicit in a failure to provide adequate scientific training to trainees in their program. If they didn’t know, that’s an argument that departments ought to take more responsibility for their trainees and to exercise more oversight rather than leaving each trainee to the mercies of his or her advisor.

And, as becomes clear from the New York Times Magazine article, doing experiments wasn’t the only piece of standard scientific training of which Stapel’s trainees were deprived. Bhattacharjee describes the revelation when a colleague collaborated with Stapel on a piece of research:

Stapel and [Ad] Vingerhoets [a colleague of his at Tilburg] worked together with a research assistant to prepare the coloring pages and the questionnaires. Stapel told Vingerhoets that he would collect the data from a school where he had contacts. A few weeks later, he called Vingerhoets to his office and showed him the results, scribbled on a sheet of paper. Vingerhoets was delighted to see a significant difference between the two conditions, indicating that children exposed to a teary-eyed picture were much more willing to share candy. It was sure to result in a high-profile publication. “I said, ‘This is so fantastic, so incredible,’ ” Vingerhoets told me.

He began writing the paper, but then he wondered if the data had shown any difference between girls and boys. “What about gender differences?” he asked Stapel, requesting to see the data. Stapel told him the data hadn’t been entered into a computer yet.

Vingerhoets was stumped. Stapel had shown him means and standard deviations and even a statistical index attesting to the reliability of the questionnaire, which would have seemed to require a computer to produce. Vingerhoets wondered if Stapel, as dean, was somehow testing him. Suspecting fraud, he consulted a retired professor to figure out what to do. “Do you really believe that someone with [Stapel’s] status faked data?” the professor asked him.

“At that moment,” Vingerhoets told me, “I decided that I would not report it to the rector.”

Stapel’s modus operandi was to make up his results out of whole cloth — to produce “findings” that looked statistically plausible without the muss and fuss of conducting actual experiments or collecting actual data. Indeed, since the thing he was creating that needed to look plausible enough to be accepted by his fellow scientists was the analyzed data, he didn’t bother making up raw data from which such an analysis could be generated.

Connecting the dots here, this surely means that Stapel’s trainees must not have gotten any experience dealing with raw data or learning how to apply methods of analysis to actual data sets. This left another gaping hole in the scientific training they deserved.

It would seem that those being trained by other scientists in Stapel’s program were getting some experience in conducting experiments, collecting data, and analyzing their data — since that experimentation, data collection, and data analysis became fodder for discussion in the ethics training that Stapel led. From the article:

And yet as part of a graduate seminar he taught on research ethics, Stapel would ask his students to dig back into their own research and look for things that might have been unethical. “They got back with terrible lapses­,” he told me. “No informed consent, no debriefing of subjects, then of course in data analysis, looking only at some data and not all the data.” He didn’t see the same problems in his own work, he said, because there were no real data to contend with.

I would love to know the process by which Stapel’s program decided that he was the best one to teach the graduate seminar on research ethics. I wonder if this particular teaching assignment was one of those burdens that his colleagues tried to dodge, or if research ethics was viewed as a teaching assignment requiring no special expertise. I wonder how it’s sitting with them that they let a now-famous cheater teach their grad students how to be ethical scientists.

The whole “those who can’t do, teach” adage rings hollow here.

Leave the full-sized conditioner, take the ski poles: whose assessment of risks did the TSA consider in new rules for carry-ons?

At Error Statistics Philosophy, D. G. Mayo has an interesting discussion of changes that just went into effect in Transportation Security Administration rules about what air travelers can bring in their carry-on bags. Here’s how the TSA Blog describes the changes:

TSA established a committee to review the prohibited items list based on an overall risk-based security approach. After the review, TSA Administrator John S. Pistole made the decision to start allowing the following items in carry-on bags beginning April 25th:

  • Small Pocket Knives – Small knives with non-locking blades smaller than 2.36 inches and less than 1/2 inch in width will be permitted
  • Small Novelty Bats and Toy Bats
  • Ski Poles
  • Hockey Sticks
  • Lacrosse Sticks
  • Billiard Cues
  • Golf Clubs (Limit Two)

This is part of an overall Risk-Based Security approach, which allows Transportation Security Officers to better focus their efforts on finding higher threat items such as explosives. This decision aligns TSA more closely with International Civil Aviation Organization (ICAO) standards.

These similar items will still remain on the prohibited items list:

  • Razor blades and box cutters will remain prohibited in carry-on luggage.
  • Full-size baseball, softball and cricket bats are prohibited items in carry-on luggage.

As Mayo notes, this particular framing of what does or does not count as a “higher threat item” on a flight has not been warmly embraced by everyone.

Notably, the Flight Attendants Union Coalition, the Coalition of Airline Pilots Associations, some federal air marshals, and at least one CEO of an airline have gone on record against the rule change. Their objection is two-fold: removing these items from the list of items prohibited in carry-ons is unlikely to actually make screening lines at airports go any faster (since now you have to wait for the passenger arguing that there’s only 3 ounces of toothpaste left in the tube, so it should be allowed, and for the passenger arguing that her knife’s 2.4-inch blade is close enough to 2.36 inches), and allowing these items in carry-on bags on flights is likely to make those flights more dangerous for the people on them.

But that’s not the way the TSA is thinking about the risks here. Mayo writes:

By putting less focus on these items, Pistole says, airport screeners will be able to focus on looking for bomb components, which present a greater threat to aircraft. Such as:

bottled water, shampoo, cold cream, tooth paste, baby food, perfume, liquid make-up, etc. (over 3.4 oz).

They do have an argument; namely, that while liquids could be used to make explosives sharp objects will not bring down a plane. At least not so long as we can rely on the locked, bullet-proof cockpit door. Not that they’d want to permit any bullets to be around to test… And not that the locked door rule can plausibly be followed 100% of the time on smaller planes, from my experience. …

When the former TSA chief, Kip Hawley, was asked to weigh in, he fully supported Pistole; he regretted that he hadn’t acted to permit the above sports items during his service at TSA:

“They ought to let everything on that is sharp and pointy. Battle axes, machetes … bring anything you want that is pointy and sharp because while you may be able to commit an act of violence, you will not be able to take over the plane. It is as simple as that,” he said. (Link is here.)

I burst out laughing when I read this, but he was not joking:

Asked if he was using hyperbole in suggesting that battle axes be allowed on planes, Hawley said he was not.

“I really believe it. What are you going to do when you get on board with a battle ax? And you pull out your battle ax and say I’m taking over the airplane. You may be able to cut one or two people, but pretty soon you would be down in the aisle and the battle ax would be used on you.”

There does seem to be an emphasis on relying on passengers to rise up against ax-wielders, that passengers are angry these days at anyone who starts trouble. But what about the fact that there’s a lot more “air rage” these days? … That creates a genuine risk as well.

Will the availability of battle axes make disputes over the armrest more civil or less? Is the TSA comfortable with whatever happens on a flight so long as it falls short of bringing down the plane? How precisely did the TSA arrive at this particular assessment of risks that makes an 8-ounce bottle of conditioner more of a danger than a hockey stick?

And, perhaps most troubling, if the TSA is relying so heavily on the vigilance of passengers and flight crews, and on their willingness to mount a response, why does it look like it failed to seek out input from those passengers and flight crews about what kinds of in-flight risks they are willing to undertake?

Are safe working conditions too expensive for knowledge-builders?

Last week’s deadly collapse of an eight-story garment factory building in Dhaka, Bangladesh, has prompted discussions about whether poor countries can afford safe working conditions for workers who make goods that consumers in countries like the U.S. prefer to buy for bargain prices.

Maybe the risk of being crushed to death (or burned to death, or what have you) is just a trade-off poor people are (or should be) willing to accept to draw a salary. At least, that seems to be the take-away message from the crowd arguing that it would cost too much to have safety regulation (and enforcement) with teeth.

It is hard not to consider how this kind of attitude might get extended to other kinds of workplaces — like, say, academic research labs — given that last week UCLA chemistry professor Patrick Harran was also scheduled to return to court for a preliminary hearing on the felony charges of labor code violations brought against him in response to the 2008 fire in his laboratory that killed his employee, Sheri Sangji.

Jyllian Kemsley has a detailed look at how Harran’s defense team has responded to the charges of specific violations of the California Labor Code, charges involving failure to provide adequate training, failure to have adequate procedures in place to correct unsafe conditions or work practices, and failure to require workers wear appropriate clothing for the work being done. Since I’m not a lawyer, it’s hard for me to assess the likelihood that the defense responses to these charges would be persuasive to a judge, but ethically, they’re pretty weak tea.

Sadly, though, it’s weak tea of the exact sort that my scientific training has led me to expect from people directing scientific research labs in academic settings.

When safety training is confined to a single safety video that graduate students are shown when they enter a program, that tells graduate students that their safety is not a big deal in the research activities that are part of their training.

When there’s not enough space under the hood for all the workers in a lab to conduct all the activities that, for safety’s sake, ought to be conducted under the hood — and when the boss expects all those activities to happen without delay — that tells them that a sacrifice in safety to produce quick results is acceptable.

When a student-volunteer needs to receive required ionizing radiation safety training to get a film badge that will give her access to the facility where she can irradiate her cells for an experiment, and the PI, upon hearing that the next training session is three weeks away, says to the student-volunteer, “Don’t bother; use my film badge,” that tells people in the lab that the PI is unwilling to lose three weeks of unpaid labor on one aspect of a research project just to make the personnel involved a little bit safer.

When people running a lab take an attitude of “Eh, young people are going to dress how they’re going to dress” rather than imposing a clear rule that people whose dress is unsafe for the activities they are to undertake don’t get to undertake them, that tells the personnel in the lab that whatever cost is involved in holding this line — losing a day’s worth of work, being viewed by one’s underlings as strict rather than cool — has been judged too high relative to the benefit of making personnel in the lab safer.

When university presidents or other administrators proclaim that knowledge-builders “must continue to recalibrate [their] risk tolerance” by examining their “own internal policies and ask[ing] the question—do they meet—or do they exceed—our legal or regulatory requirements,” that tells knowledge-builders at those universities that people with significantly more power than they have regard efforts to make things safer for knowledge-builders (and for others, like the human subjects of their research) as an unnecessary burden. When institutions need to become leaner, or more agile, shouldn’t researchers (and human subjects) do their part by accepting more risk as the price of doing business?

To be sure, safety isn’t free. But there are also costs to being less safe in academic research settings.

For example, personnel develop lax attitudes toward risks and trainees take these attitudes with them when they go out in the world as grown-up scientists. Surrounding communities can get hurt by improper disposal of hazardous materials, or by inadequate safety measures taken by researchers working with infectious agents who then go home and cough on their families and friends. Sometimes, personnel are badly injured, or killed.

And, if academic scientists are dragging their feet on making things safer for the researchers on their teams because it takes time and effort to investigate risks and make sensible plans for managing them, to develop occupational health plans, and to institute standard operating procedures that everyone on the research team knows and follows, I hope they’re noticing that facing felony charges stemming from safety problems in their labs can also take lots of time and effort.

UPDATE: The Los Angeles Times reports that Patrick Harran will stand trial after an LA County Superior Court judge denied a defense motion to dismiss the case.

When #chemophobia isn’t irrational: listening to the public’s real worries.

This week, the Grand CENtral blog features a guest post by Andrew Bissette defending the public’s anxiety about chemicals. In lots of places (including here), this anxiety is labeled “chemophobia”; Bissette spells it “chemphobia”, but he’s talking about the same thing.

Bissette argues that the response those of us with chemistry backgrounds often take to the successful marketing of “chemical free” products, namely, pointing out that the world around us is made of chemicals, fails to engage with people’s real concerns. He writes:

Look at the history of our profession – from tetraethyl lead to thalidomide to Bhopal – and maintain with a straight face that chemphobia is entirely unwarranted and irrational. Much like mistrust of the medical profession, it is unfortunate and unproductive, but it is in part our own fault. Arrogance and paternalism are still all too common across the sciences, and it’s entirely understandable that sections of the public treat us as villains.

Of course it’s silly to tar every chemical and chemist with the same brush, but from the outside we must appear rather esoteric and monolithic. Chemphobia ought to provoke humility, not eye-rolling. If the public are ignorant of chemistry, it’s our job to engage with them – not to lecture or hand down the Truth, but simply to talk and educate. …

[A] common response to chemphobia is to define “chemicals” as something like “any tangible matter”. From the lab this seems natural, and perhaps it is; in daily life, however, I think it’s at best overstatement and at worst dishonest. Drawing a distinction between substances which we encounter daily and are not harmful under those conditions – obvious things like water and air, kitchen ingredients, or common metals – and the more exotic, concentrated, or synthetic compounds we often deal with is useful. The observation that both groups are made of the same stuff is metaphysically profound but practically trivial for most people. We treat them very differently, and the use of the word “chemical” to draw this distinction is common, useful, and not entirely ignorant. …

This definition is of course a little fuzzy at the edges. Not all “chemicals” are synthetic, and plenty of commonly-encountered materials are. Regardless, I think we can very broadly use ‘chemical’ to mean the kinds of matter you find in a lab but not in a kitchen, and I think this is how most people use it.

Crucially, this distinction tends to lead to the notion of chemicals as harmful: bleach is a chemical; it has warning stickers, you keep it under the sink, and you wear gloves when using it. Water isn’t! You drink it, you bathe in it, it falls from the sky. Rightly or wrongly, chemphobia emerges from the common usage of the word ‘chemical’.

There are some places here where I’m not in complete agreement with Bissette.

My kitchen includes a bunch of chemicals that aren’t kept under the sink or handled only with gloves, including sodium bicarbonate, acetic acid, potassium bitartrate, lecithin, pectin, and ascorbic acid. We use these chemicals in cooking because of the reactions they undergo (and the alternative reactions they prevent — those ascorbic acid crystals see a lot of use in our homemade white sangria preventing the fruit from discoloring when it comes in contact with oxygen). And, I reckon it’s not just people with PhDs in chemistry who recognize that chemical leaveners in their quickbreads and pancakes depend on some kind of chemical reaction to produce their desired effects. Notwithstanding that recognition of chemical reactivity, many of these same folks will happily mix sodium bicarbonate with water and gulp it down if that batch of biscuits isn’t sitting well in their tummies, with nary a worry that they are ingesting something that could require a call to poison control.

Which is to say, I think Bissette puts too much weight on the assumption that there is a clear “common usage” putting all chemicals on the “bad” side of the line, even if the edges of the line are fuzzy.

Indeed, it’s hard not to believe that people in countries like the U.S. are generally moving in the direction of greater comfort with the idea that important bits of their world — including their own bodies — are composed of chemicals. (Casual talk about moody teenagers being victims of their brain chemistry is just one example of this.) Aside from the most phobic of the chemophobic, people seem OK with the idea that their bodies use chemicals (say, to digest their food) and even that our pharmacopeia relies on chemicals (that can, for example, relieve our pain or reduce inflammation).

These quibbles aside, I think Bissette has identified the concern at the center of much chemophobia: The public is bombarded with products and processes that may or may not contain various kinds of chemicals about which they have no clear information. They can’t tell from their names (if those names are even disclosed on labels) what those chemicals do. They don’t know what possible harms might come from exposure to these chemicals (or what amounts it might take for exposure to be risky). They don’t know why the chemicals are in their products — what goal they achieve, and whether that goal is one that primarily serves the consumers, the retailers, or the manufacturers. And they don’t trust the people with enough knowledge and information to answer these questions.

Maybe some of this is the public’s distrust for scientists. People imagine scientists off in their supervillain labs, making plans to conquer non-scientists, rather than recognizing that scientists walk among them (and maybe even coach their kids’ soccer teams). This kind of distrust can be addressed by scientists actually being visible as members of their communities — and listening to concerns voiced by people in those communities.

A large part of this distrust, though, is likely distrust of corporations, which claim chemistry will bring us better living but then prioritize the better living of CEOs and shareholders while cutting corners on safety testing, informative labeling, and the avoidance of environmental harms in the manufacture and use of the goodies they offer. I’m not chemophobic, but I think there’s good reason for presumptive distrust of corporations that see consumers as walking wallets rather than as folks deserving information to make their own sensible choices.

Scientists need to start addressing that element of chemophobia — and to join in putting pressure on the private sector to do a better job of earning the public’s trust.

Shame versus guilt in community responses to wrongdoing.

Yesterday, on the Hastings Center Bioethics Forum, Carl Elliott pondered the question of why a petition asking the governor of Minnesota to investigate ethically problematic research at the University of Minnesota has gathered hundreds of signatures from scholars in bioethics, clinical research, medical humanities, and related disciplines — but only a handful of signatures from scholars and researchers at the University of Minnesota.

At the center of the research scandal is the death of Dan Markingson, who was a human subject in a clinical trial of psychiatric drugs. Detailed background on the case can be found here, and Judy Stone has blogged extensively about the ethical dimensions of the case.

Elliott writes:

Very few signers come from the University of Minnesota. In fact, only two people from the Center for Bioethics have signed: Leigh Turner and me. This is not because any faculty member outside the Department of Psychiatry actually defends the ethics of the study, at least as far as I can tell. What seems to bother people here is speaking out about it. Very few faculty members are willing to register their objections publicly.

Why not? Well, there are the obvious possibilities – fear, apathy, self-interest, and so on. At least one person has told me she is unwilling to sign because she doesn’t think the petition will succeed. But there may be a more interesting explanation that I’d like to explore. …

Why would faculty members remain silent about such an alarming sequence of events? One possible reason is simply because they do not feel as if the wrongdoing has anything to do with them. The University of Minnesota is a vast institution; the scandal took place in a single department; if anyone is to be blamed, it is the psychiatrists and the university administrators, not them. Simply being a faculty member at the university does not implicate them in the wrongdoing or give them any special obligation to fix it. In a phrase: no guilt, hence no responsibility.

My view is somewhat different. These events have made me deeply ashamed to be a part of the University of Minnesota, in the same way that I feel ashamed to be a Southerner when I see video clips of Strom Thurmond’s race-baiting speeches or photos of Alabama police dogs snapping at black civil rights marchers. I think that what our psychiatrists did to Dan Markingson was wrong in the deepest sense. It was exploitative, cruel, and corrupt. Almost as disgraceful are the actions university officials have taken to cover it up and protect the reputation of the university. The shame I feel comes from the fact that I have worked at the University of Minnesota for 15 years. I have even been a member of the IRB. For better or worse, my identity is bound up with the institution.

These two different reactions – shame versus guilt – differ in important ways. Shame is linked with honor; it is about losing the respect of others, and by virtue of that, losing your self-respect. And honor often involves collective identity. While we don’t usually feel guilty about the actions of other people, we often do feel ashamed if those actions reflect on our own identities. So, for example, you can feel ashamed at the actions of your parents, your fellow Lutherans, or your physician colleagues – even if you feel as if it would be unfair for anyone to blame you personally for their actions.

Shame, unlike guilt, involves the imagined gaze of other people. As Ruth Benedict writes: “Shame is a reaction to other people’s criticism. A man is shamed either by being openly ridiculed or by fantasying to himself that he has been made ridiculous. In either case it is a potent sanction. But it requires an audience or at least a man’s fantasy of an audience. Guilt does not.”

As Elliott notes, one way to avoid an audience — and thus to avoid shame — is to actively participate in, or tacitly endorse, a cover-up of the wrongdoing. I’m inclined to think, however, that taking steps to avoid shame by hiding the facts, or by allowing retaliation against people asking inconvenient questions, is itself a kind of wrongdoing — the kind of thing that incurs guilt, for which no audience is required.

As well, I think the scholars and researchers at the University of Minnesota who prefer not to take a stand on how their university responds to ethically problematic research, even if it is research in someone else’s lab, or someone else’s department, underestimate the size of the audience for their actions and for their inaction.

A hugely significant segment of this audience is their trainees. Their students and postdocs (and others involved in training relationships with them) are watching them, trying to draw lessons about how to be a grown-up scientist or scholar, a responsible member of a discipline, a responsible member of a university community, a responsible citizen of the world. The people they are training are looking to them to set a good example of how to respond to problems — whether by addressing them, learning from them, making things right, and doing better going forward, or by lying, covering up, and punishing the people who were harmed by trying to recover costs from them (thus sending a message to others who might dare to point out how they have been harmed).

There are many fewer explicit conversations about such issues than one might hope in a scientist’s training. In the absence of explicit conversations, most of what trainees have to go on is how the people training them actually behave. And sometimes, a mentor’s silence speaks as loud as words.

The ethics of naming and shaming.

Lately I’ve been pondering the practice of responding to bad behavior by calling public attention to it.

The most recent impetus for my thinking about it was this tech blogger’s response to behavior that felt unwelcoming at a conference (behavior that seems, in fact, to have run afoul of that conference’s official written policies)*, but there are plenty of other examples one might find of “naming and shaming”: the discussion (on blogs and in other media outlets) of University of Chicago neuroscientist Dario Maestripieri’s comments about female attendees of the Society for Neuroscience meeting, the Office of Research Integrity’s posting of findings of scientific misconduct investigations, the occasional instructor who promises to publicly shame students who cheat in his class, and actually follows through on the promise.

There are many forms “naming-and-shaming” might take, and many types of behavior one might identify as problematic enough that they ought to be pointed out and attended to. But there seems to be a general worry that naming-and-shaming is an unethical tactic. Here, I want to explore that worry.

Presumably, the point of responding to bad behavior is that it’s bad — causing harm to individuals or a community (or both), undermining progress on a project or goal, and so forth. Responding to bad behavior can be useful if it stops bad behavior in progress and/or keeps similarly bad behavior from happening in the future. A response can also be useful in calling attention to the harm the behavior does (i.e., in making clear what’s bad about the behavior). And, depending on the response, it can affirm the commitment of individuals or communities to the view that the behavior in question actually is bad, and that the individuals or communities see themselves as having a real stake in reducing it.

Rules, professional codes, conference harassment policies — these are some ways to specify at the outset what behaviors are not acceptable in the context of the meeting, game, work environment, or disciplinary pursuit. There are plenty of contexts, too, where there is no written-and-posted official enumeration of every type of unacceptable behavior. Sometimes communities make judgments on the fly about particular kinds of behavior. Sometimes, members of communities are not in agreement about these judgments, which might result in a thoughtful conversation within the community to try to come to some agreement, or the emergence of a rift that leads people to realize that the community was not as united as they once thought, or a ruling on the “actual” badness or acceptability of the behavior by those within the community who can marshal the power to make such a ruling.

Sharing a world with people who are not you is complicated, after all.

Still, I hope we can agree that there are some behaviors that count as bad behaviors. Assuming we had an unambiguous example of someone engaging in such a behavior, should we respond? How should we respond? Do we have a duty to respond?

I frequently hear people declare that one should respond to bad behavior, but that one should do so privately. The idea here seems to be that letting the bad actor know that the behavior in question was bad, and should be stopped, is enough to ensure that it will be stopped — and that the bad behavior must be a reflection of a gap in the bad actor’s understanding.

If knowing that a behavior is bad (or against the rules) were enough to ensure that those with the relevant knowledge never engage in the behavior, though, it becomes difficult to explain the highly educated researchers who get caught fabricating or falsifying data or images, the legions of undergraduates who commit plagiarism despite detailed instructions on proper citation methods, the politicians who lie. If knowledge that a certain kind of behavior is unacceptable is not sufficient to prevent that behavior, responding effectively to bad behavior must involve more than telling the perpetrator of that behavior, “What you’re doing is bad. Stop it.”

This is where penalties may be helpful in responding to bad behavior — get benched for the rest of the game, or fail the class, or get ejected from the conference, or become ineligible for funding for this many years. A penalty can convey that bad behavior is harmful enough to the endeavor or the community that its perpetrator needs a “time-out”.

Sometimes the application of penalties needs to be private (e.g., when a law like the Family Education Rights and Privacy Act makes applying the penalty publicly illegal). But there are dangers in only dealing with bad behavior privately.

When fabrication, falsification, and plagiarism are “dealt with” privately, it can make it hard for a scientific community to identify papers in the scientific literature that they shouldn’t trust or researchers who might be prone to slipping back into fabricating, falsifying, or plagiarizing if they think no one is watching. (It is worth noting that large ethical lapses are frequently part of an escalating pattern that started with smaller ethical infractions.)

Worse, if bad behavior is dealt with privately, out of view of members of the community who witnessed the bad behavior in question, those members may lose faith in the community’s commitment to calling it out. Keeping penalties (if any) under wraps can convey the message that the bad behavior is actually tolerated, that official policies against it are empty words.

And sometimes, there are instances where the people within an organization or community with the power to impose penalties on bad actors seem disinclined to actually address bad behavior, using the cover of privacy as a way to opt out of penalizing the bad actors or of addressing the bad behavior in any serious way.

What’s a member of the community to do in such circumstances? Given that the bad behavior is bad because it has harmful effects on the community and its members, should those aware of the bad behavior call the community’s attention to it, in the hopes that the community can respond to it (or that the community’s scrutiny will encourage the bad actor to cease the bad behavior)?

Arguably, a community that is harmed by bad behavior has an interest in knowing when that behavior is happening, and who the bad actors are. As well, the community has an interest in stopping the bad behavior, in mitigating the harms it has already caused, and in discouraging further such behavior. Naming-and-shaming bad actors may be an effective way to secure these interests.

I don’t think this means naming-and-shaming is the only possible way to secure these interests, nor that it is always the best way to do so. Sometimes, however, it’s the tool that’s available that seems likely to do the most good.

There’s not a simple algorithm or litmus test that will tell you when shaming bad actors is the best course of action, but there are questions that are worth asking when assessing the options:

  • What are the potential consequences if this piece of bad behavior, which is observable to at least some members of the community, goes unchallenged?
  • What are the potential consequences if this piece of bad behavior, which is observable to at least some members of the community, gets challenged privately? (In particular, what are the potential consequences to the person engaging in the bad behavior? To the person challenging the behavior? To others who have had occasion to observe the behavior, or who might be affected by similar behavior in the future?)
  • What are the potential consequences if this piece of bad behavior, which is observable to at least some members of the community, gets challenged publicly? (In particular, what are the potential consequences to the person engaging in the bad behavior? To the person challenging the behavior? To others who have had occasion to observe the behavior, or who might be affected by similar behavior in the future?)

Challenging bad behavior is not without costs. Depending on your status within the community, challenging a bad actor may end up harming you more than it harms the bad actor. However, not challenging bad behavior has costs, too. If the community and its members aren’t prepared to deal with bad behavior when it happens, the community has to bear those costs.
_____
* Let me be clear that this post is focused on the broader question of publicly calling out bad behavior rather than on the specific details of Adria Richards’ response to the people behind her at the tech conference, whether she ought to have found their jokes unwelcoming, whether she ought to have responded to them the way she did, or what have you. Since this post is not about whether Adria Richards did everything right (or everything wrong) in that particular instance, I’m going to be quite ruthless in pruning comments that are focused on her particular circumstances or decisions. Indeed, commenters who make any attempt to use the comments here to issue threats of violence against Richards (of the sort she is receiving via social media as I compose this post), or against anyone else, will have their information (including IP address) forwarded to law enforcement.

If you’re looking for my take on the details of the Adria Richards case, I’ll have a post up on my other blog within the next 24 hours.

Building a scientific method around the ideal of objectivity.

Modern science seems committed both to seeking verifiable facts that are accessible to anyone and to building a reliable picture of the world as it really is, but historically these two ideas have not always gone together. Peter Machamer describes a historical moment when these two senses of objectivity were coupled in his article, “The Concept of the Individual and the Idea(l) of Method in Seventeenth-Century Natural Philosophy.” [1]

Prior to the emergence of a scientific method that stressed objectivity, Machamer says, most people thought knowledge came from divine inspiration (whether written in holy books or transmitted by religious authorities) or from ancient sources that were only shared with initiates (think alchemy, stone masonry, and healing arts here). Knowledge, in other words, was a scarce resource that not everyone could get his or her hands (or brains) on. To the extent that a person found the world intelligible at all, it was probably based on the story that someone else in a special position of authority was telling.

How did this change? Machamer argues that it changed when people started to think of themselves as individuals. The erosion of feudalism, the reformation and counter-reformation, European voyages to the New World (which included encounters with plants, animals, and people previously unknown in the Old World), and the shift from a geocentric to a heliocentric view of the cosmos all contributed to this shift by calling old knowledge and old sources of authority into question. As the old sources of knowledge became less credible (or at least less monopolistic), the individual came to be seen as a new source of knowledge.

Machamer describes two key aspects of individuality at work. One is what he calls the “Epistemic I.” This is the recognition that an individual can gain knowledge and ideas directly from his or her own interactions with the world, and that these interactions depend on senses and powers of reason that all humans have (or could have, given the opportunity to develop them). This recognition casts knowledge (and the ability to get it) as universal and democratic. The power to build knowledge is not concentrated in the hands (or eyes) of just the elite — this power is our birthright as human beings.

The other side of individuality here is what Machamer calls the “Entrepreneurial I.” This is the belief that an individual’s insights deserve credit and recognition, perhaps even payment. It casts the individual with such insights as a leader, or a teacher — definitely, as a special human worth listening to.

Pause for a moment to notice that this tension is still present in science. For all the commitment to science as an enterprise that builds knowledge from observations of the world that others must be able to make (which is the whole point of reproducibility), scientists also compete for prestige and career capital based on which individual was the first to observe (and report observing) a particular detail that anyone could see. Seeing something new is not effortless (as we’ve discussed in the last two posts), but there’s still an uneasy coexistence between the idea of scientific knowledge-building as within the powers of normal human beings and the idea of scientific knowledge-building as the activity of special human beings with uniquely powerful insights and empirical capacities.

The two “I”s that Machamer describes came together as thinkers in the 1600s tried to work out a reliable method by which individuals could replace discredited sources of “knowledge” and expand on what remained to produce their own knowledge. Lots of “natural philosophers” (what we would call scientists today) set out to formulate just such a method. The paradox here is that each thinker was selling (often literally) a way of knowing that was supposed to work for everyone, while simultaneously presenting himself as the only one clever enough to have found it.

Looking for a method that anyone could use to get the facts about the world, the thinkers Machamer describes recognized that they needed to formulate a clear set of procedures that was broadly applicable to the different kinds of phenomena in the world about which people wanted to build knowledge, that was teachable (rather than being a method that only the person who came up with it could use), and that was able to bring about consensus and halt controversy. However, in the 1600s there were many candidates for this method on offer, which meant that there was a good bit of controversy about the question of which method was the method.

Among the contenders for the method, the Baconian method involved cataloguing many experiences of phenomena, then figuring out how to classify them. The Galilean method involved representing the phenomena in terms of mechanical models (and even going so far as to build the corresponding machine). The Hobbesian method focused on analyzing compositions and divisions of substances in order to distinguish causes from effects. And these were just three contenders in a crowded field. If there was a common thread in these many methods, it was describing or representing the phenomena of interest in spatial terms. In the seventeenth century, as now, seeing is believing.

In a historical moment when people were considering the accessibility and the power of knowledge through experience, it became clear to the natural philosophers trying to develop an appropriate method that such knowledge also required control. To get knowledge, it was not enough to have just any experience — you had to have the right kind of experiences. This meant that the methods under development had to give guidance on how to track empirical data and then analyze it. It also meant that the people developing these methods had to invent the concept of a controlled experiment.

Whether it was in a published dialogue or an experiment conducted in a public space before witnesses, the natural philosophers developing knowledge-building methods recognized the importance of demonstration. Machamer writes:

Demonstration … consists in laying a phenomenon before oneself and others. This “laying out” exhibits the structure of the phenomenon, exhibits its true nature. What is laid out provides an experience for those seeing it. It carries informational certainty that causes assent. (94)

Interestingly, there seems to have been an assumption that once people hit on the appropriate procedure for gathering empirical facts about the phenomena, these facts would be sufficient to produce agreement among those who observed them. The ideal method was supposed to head off controversy. Disagreements were either a sign that you were using the wrong method, or that you were using the right method incorrectly. As Machamer describes it:

[T]he doctrines of method all held that disputes or controversies are due to ignorance. Controversies are stupid and accomplish nothing. Only those who cannot reason properly will find it necessary to dispute. Obviously, as noted, the ideal of universality and consensus contrasts starkly with the increasing number of disputes that engage these scientific entrepreneurs, and with the entrepreneurial claims of each that he alone has found the true method.

Ultimately, what stemmed the proliferation of competing methods was the professionalization of science, in which the practitioners essentially agreed to be guided by a shared method. The hope was that the method the scientific profession agreed upon would be the one that allowed scientists to harness human senses and intellect to best discover what the world is really like. Within this context, scientists might still disagree about the details of the method, but they took it that such disagreements ought to be resolved in such a way that the resulting methodology better approximated this ideal method.

The adoption of shared methodology and the efforts to minimize controversy are echoed in Bruce Bower’s [2] discussion of how the ideal of objectivity has been manifested in scientific practices. He writes:

Researchers began to standardize their instruments, clarify basic concepts, and write in an impersonal style so that their peers in other countries and even in future centuries could understand them. Enlightenment-influenced scholars thus came to regard facts no longer as malleable observations but as unbreakable nuggets of reality. Imagination represented a dangerous, wild force that substituted personal fantasies for a sober, objective grasp of nature. (361)

What the seventeenth-century natural philosophers Machamer describes were striving for is clearly recognizable to us as objectivity — both in the form of an objective method for producing knowledge and in the form of a body of knowledge that gives a reliable picture of how the world really is. The objective scientific method they sought was supposed to produce knowledge we could all agree upon and to head off controversy.

As you might imagine, the project of building reliable knowledge about the world has pushed scientists in the direction of also building experimental and observational techniques that are more standardized and require less individual judgment across observers. But an interesting side-effect of this focus on objective knowledge as a goal of science is the extent to which scientific reports can make it look like no human observers were involved in making the knowledge being reported. The passive voice of scientific papers — these procedures were performed, these results were observed — does more than just suggest that the particular individuals that performed the procedures and observed the results are interchangeable with other individuals (who, scientists trust, would, upon performing the same procedures, see the same results for themselves). The passive voice can actually erase the human labor involved in making knowledge about the world.

This seems like a dangerous move when objectivity is not an easy goal to achieve, but rather one that requires concerted teamwork along with one’s objective method.
_____________

[1] “The Concept of the Individual and the Idea(l) of Method in Seventeenth-Century Natural Philosophy,” in Peter Machamer, Marcello Pera, and Aristides Baltas (eds.), Scientific Controversies: Philosophical and Historical Perspectives. Oxford University Press, 2000.

[2] Bruce Bower, “Objective Visions,” Science News, 5 December 1998, Vol. 154, pp. 360-362.