Mentoring new scientists in the space between how things are and how things ought to be.

Scientists mentoring trainees often work very hard to help their trainees grasp what they need to know not only to build new knowledge, but also to succeed in the context of a career landscape where score is kept and scarce resources are distributed on the basis of scorekeeping. Many focus their protégés’ attention on the project of understanding the current landscape, noticing where score is being kept, and working the system to their best advantage.

But is teaching protégés how to succeed as a scientist in the current structural social arrangements enough?

It might be enough if you’re committed to the idea that the system as it is right now is perfectly optimized for scientific knowledge-building, and for scientific knowledge-builders (and if you view all the science PhDs who can’t find permanent jobs in the research careers they’d like to have as acceptable losses). But I’d suggest that mentors can do better by their protégés.

For one thing, even if current conditions were optimal, they might well change due to influences from outside the community of knowledge-builders, as when funding levels change at universities or at funding agencies. Expecting that the landscape will be stable over the course of a career is risky.

For another thing, it seems risky to take as given that this is the best of all possible worlds, or of all possible bundles of practices around research, communication of results, funding of research, and working conditions for scientists. Research on scientists suggests that they themselves recognize the ways in which the current system and its scorekeeping provide perverse incentives that may undercut the project of building reliable knowledge about the world. As well, the competition for scarce resources can result in a “science red in tooth and claw” dynamic that, at best, leads to the rational calculation that knowledge-builders ought to work more hours and partake of fewer off-the-clock “distractions” (like family, or even nice weather) in order not to fall behind.

Just because the scientific career landscape manifests in the particular way it does right now doesn’t mean that it must always be this way. As the body of reliable knowledge about the world is perpetually under construction, we should be able to recognize the systems and social arrangements in which scientists work as subject to modification, not carved into granite.

Restricting your focus as a mentor to imparting strategies for success given how things are may also convey to your protégés that this is the way things will always be — or that this is the way things should always be. I hope we can do better than that.

It can be a challenge to mentor with an eye to a set of conditions that don’t currently exist. Doing so involves imagining other ways of doing things. Doing it as more than a thought experiment also involves coordinating efforts with others — not just with trainees, but with established members of the professional community who have a bit more weight to throw around — to see what changes can be made and how, given the conditions you’re starting from. It may also require facing pushback from colleagues who are fine with the status quo (since it has worked well for them).

Indeed, mentoring with an eye to creating better conditions for knowledge-building and for knowledge-builders may mean agitating for changes that will primarily benefit future generations of your professional community, not your own.

But mentoring someone, welcoming them into your professional community and equipping them to be a full member of it, is not primarily about you. It is something that you do for the benefit of your protégé, and for the benefit of the professional community they are joining. Equipping your protégé for how things are is a good first step. Even better is encouraging them to imagine, to bring about, and to thrive in conditions that are better for your shared pursuit.

Ebola, abundant caution, and sharing a world.

Today a judge in Maine ruled that quarantining nurse Kaci Hickox is not necessary to protect the public from Ebola. Hickox, who had been in Sierra Leone for a month helping to treat people infected with Ebola, had earlier been subject to a mandatory quarantine in New Jersey upon her return to the U.S., despite being free of Ebola symptoms (and so, given what scientists know about Ebola, unable to transmit the virus). She was released from that quarantine after a CDC evaluation, though New Jersey’s state health department had promised to keep her in quarantine for a full 21 days had she stayed in the state. Maine state officials originally followed New Jersey’s lead in deciding that following CDC guidelines for medical workers who have been in contact with Ebola patients required a quarantine.

The order from Judge Charles C. LaVerdiere “requires Ms. Hickox to submit to daily monitoring for symptoms, to coordinate her travel with state health officials, and to notify them immediately if symptoms appear. Ms. Hickox has agreed to follow the requirements.”

It is perhaps understandable that state officials, among others, have been responding to the Ebola virus in the U.S. with policy recommendations, and actions, driven by “an abundance of caution,” but it’s worth asking whether this is actually an overabundance.

Indeed, the reaction to a handful of Ebola cases in the U.S. is so far shaping up to be an overreaction. As Maryn McKenna details in a staggering round-up, people have been asked or forced to stay home from their jobs for 21 days (the longest Ebola incubation period) for visiting countries in Africa with no Ebola cases. Someone was placed on leave by an employer for visiting Dallas (in whose city limits there were two Ebola cases). A Haitian woman who vomited on a Boston subway platform was presumed to be Liberian, and the station was shut down. Press coverage of Ebola in the U.S. has fed the public’s panic.

How we deal with risk is a pretty personal thing. It has a lot to do with what outcomes we feel it most important to avoid (even if the probability of those outcomes is very low) and which outcomes we think we could handle. This means our thinking about risk will be connected to our individual preferences, our experiences, and what we think we know.

Sharing a world with other people, though, requires finding some common ground on what level of risk is acceptable.

Our choices about how much risk we’re willing to take on frequently have an effect on the level of risk to which those around us are subject. This comes up in discussions of vaccination, of texting-while-driving, of policy making in response to climate change. Finding the common ground — even noticing that our risk-taking decisions impact anyone but us — can be really difficult.

However, it’s bound to be even more difficult if we’re guessing at risks without taking account of what we know. Without some agreement about the facts, we’re likely to get into irresolvable conflicts. (If you want to bone up on what scientists know about Ebola, by the way, you really ought to be reading what Tara C. Smith has been writing about it.)

Our scientific information is not perfect, and it is the case that very unlikely events sometimes happen. However, striving to reduce our risk to zero might not leave us as safe as we imagine it would. If we fear any contact with anyone who has come into contact with an Ebola patient, what would this require? Permanently barring their re-entry to the U.S. from areas of outbreak? Killing possibly-infected health care workers already in the U.S. and burning their remains?

Personally, I’d prefer less dystopia in my world, not more.

And even given the actual reactions to people like Kaci Hickox from states like New Jersey and Maine, the “abundance of caution” approach has foreseeable effects that will not help protect people in the U.S. from Ebola. Mandatory quarantines that take no account of symptoms of those quarantined (nor of the conditions under which someone is infectious) are a disincentive for people to be honest about their exposure, or to come forward when symptoms present. Moreover, they provide a disincentive for health care workers to help people in areas of Ebola outbreak — where helping patients and containing the spread of the virus is, arguably, a reasonable strategy to protect other countries (like the U.S.) that do not have Ebola epidemics.

Indeed, the “abundance of caution” approach might make us less safe by ramping up our stress beyond what is warranted or healthy.

If this were a spooky story, Ebola might be the virus that got in only to reveal to us, by the story’s conclusion, that it was really our own terrified reaction to the threat that would end up harming us the most. That’s not a story we need to play out in real life.

You’re not rehabilitated if you keep deceiving.

Regular readers will know that I view scientific misconduct as a serious harm to both the body of scientific knowledge and the scientific community involved in building that knowledge. I also hold out hope that at least some of the scientists who commit scientific misconduct can be rehabilitated (and I’ve noted that other members of the scientific community behave in ways that suggest that they, too, believe that rehabilitation is possible).

But I think a non-negotiable prerequisite for rehabilitation is demonstrating that you really understand how what you did was wrong. This understanding needs to be more than simply recognizing that what you did was technically against the rules. Rather, you need to grasp the harms that your actions did, the harms that may continue as a result of those actions, the harms that may not be quickly or easily repaired. You need to acknowledge those harms, not minimize them or make excuses for your actions that caused the harms.

And, you need to stop behaving in the ways that caused the harms in the first place.

Among other things, this means that if you did significant harm to your scientific community, and to the students you were supposed to be training, by making up “results” rather than actually doing experiments and making and reporting accurate results, you need to recognize that you have acted deceptively. To stop doing harm, you need to stop acting deceptively. Indeed, you may need to be significantly more transparent and forthcoming with details than others who have not transgressed as you have. Owing to your past bad acts, you may just have to meet a higher burden of proof going forward.

That you have retracted the publications in which you deceived, or lost a degree for which (it is strongly suspected) you deceived, or lost your university post, or served your hours of court-ordered community service does not reset you to the normal baseline of presumptive trust. “Paying your debt to society” does not in itself mean that anyone is obligated to believe that you are not still untrustworthy. If you break trust, you need to earn it back, not to demand it because you did your time.

You certainly can’t earn that trust back by engaging in deception to mount an argument that people should give you a break because you’ve served out your sentence.

These thoughts on how not to approach your own rehabilitation are prompted by the appearance of disgraced social scientist Diederik Stapel (discussed here, here, here, here, here, and here) in the comments at Retraction Watch on a post about Diederik Stapel and his short-lived gig as an adjunct instructor for a college course. Now, there’s no prima facie reason Diederik Stapel might not be able to make a productive contribution to a discussion about Diederik Stapel.

However, Diederik Stapel was posting his comments not as Diederik Stapel but as “Paul”.

I hope it is obvious why posting comments that are supportive of yourself while making it appear that this support is coming from someone else is deceptive. Moreover, the comments seem to suggest that Stapel is not really fully responsible for the frauds he committed.

“Paul” writes:

Help! Let’s not change anything. Science is a flawless institution. Yes. And only the past two days I read about medical scientists who tampered with data to please the firm that sponsored their work and about the start of a new investigation into the work of a psychologist who produced data “too good to be true.” Mistakes abound. On a daily basis. Sure, there is nothing to reform here. Science works just fine. I think it is time for the “Men in Black” to move in to start an outside-invesigation of science and academia. The Stapel case and other, similar cases teach us that scientists themselves are able to clean-up their act.

Later, he writes (sic throughout):

Stapel was punished, he did his community service (as he writes in his latest book), he is not on welfare, he is trying to make money with being a writer, a cab driver, a motivational speaker, but not very successfully, and .. it is totally unclear whether he gets paid for his teaching (no research) an extra-curricular hobby course (2 hours a week, not more, not less) and if he gets paid, how much.

Moreover and more importantly, we do not know WHAT he teaches exactly, we have not seen his syllabus. How can people write things like “this will only inspire kids to not get caught”, without knowing what the guy is teaching his students? Will he reach his students how to become fraudsters? Really? When you have read the two books he wrote after his demise, you cannot be conclude that this is very unlikely? Will he teach his students about all the other fakes and frauds and terrible things that happen in science? Perhaps. Is that bad? Perhaps. I think it is better to postpone our judgment about the CONTENT of all this as long as we do not know WHAT he is actually teaching. That would be a Popper-like, open-minded, rationalistic, democratic, scientific attitude. Suppose a terrible criminal comes up with a great insight, an interesting analysis, a new perspective, an amazing discovery, suppose (think Genet, think Gramsci, think Feyerabend).

Is it smart to look away from potentially interesting information, because the messenger of that information stinks?

Perhaps, God forbid, Stapel is able to teach his students valuable lessons and insights no one else is willing to teach them for a 2-hour-a-week temporary, adjunct position that probably doesn’t pay much and perhaps doesn’t pay at all. The man is a failure, yes, but he is one of the few people out there who admitted to his fraud, who helped the investigation into his fraud (no computer crashes…., no questionnaires that suddenly disappeared, no data files that were “lost while moving office”, see Sanna, Smeesters, and …. Foerster). Nowhere it is written that failures cannot be great teachers. Perhaps he points his students to other frauds, failures, and ridiculous mistakes in psychological science we do not know of yet. That would be cool (and not unlikely).

Is it possible? Is it possible that Stapel has something interesting to say, to teach, to comment on?

To my eye, these comments read as saying that Stapel has paid his debt to society and thus ought not to be subject to heightened scrutiny. They seem to assert that Stapel is reformable. They also suggest that the problem is not so much with Stapel as with the scientific enterprise. While there may be systemic features of science as currently practiced that make cheating a greater temptation than it might be otherwise, suggesting that those features made Stapel commit fraud does not convey an understanding of Stapel’s individual responsibility to navigate those temptations. Putting those assertions and excuses in someone else’s mouth makes them look less self-serving than they actually are.

Hilariously, “Paul” also urges the Retraction Watch commenters expressing doubts about Stapel’s rehabilitation and moral character to contact Stapel using their real names, first here:

I guess that if people want to write Stapel a message, they can send him a personal email, using their real name. Not “Paul” or “JatdS” or “QAQ” or “nothingifnotcritical” or “KK” or “youknowbestofall” or “whatistheworldcoming to” or “givepeaceachance”.

then here:

if you want to talk to puppeteer, as a real person, using your real name, I recommend you write Stapel a personal email message. Not zwg or neuroskeptic or what arewehiding for.

Meanwhile, behind the scenes, the Retraction Watch editors accumulated clues that “Paul” was not an uninvolved party but rather Diederik Stapel portraying himself as an uninvolved party. After they contacted him to let him know that such behavior did not comport with their comment policy, Diederik Stapel posted under his real name:

Hello, my name is Diederik Stapel. I thought that in an internet environment where many people are writing about me (a real person) using nicknames it is okay to also write about me (a real person) using a nickname. ! have learned that apparently that was —in this particular case— a misjudgment. I think did not dare to use my real name (and I still wonder why). I feel that when it concerns person-to-person communication, the “in vivo” format is to be preferred over and above a blog where some people use their real name and some do not. In the future, I will use my real name. I have learned that and I understand that I –for one– am not somebody who can use a nickname where others can. Sincerely, Diederik Stapel.

He portrays this as a misunderstanding about how online communication works — other people are posting without using their real names, so I thought it was OK for me to do the same. However, to my eye it conveys that he also misunderstands how rebuilding trust works. Posting to support the person at the center of the discussion without first acknowledging that you are that person is deceptive. Arguing that that person ought to be granted more trust while dishonestly portraying yourself as someone other than that person is a really bad strategy. When you’re caught doing it, those arguments for more trust are undermined by the fact that they are themselves further instances of the deceptive behavior that broke trust in the first place.

I will allow as how Diederik Stapel may have some valuable lessons to teach us, though. One of these is how not to make a convincing case that you’ve reformed.

Grappling with the angry-making history of human subjects research, because we need to.

Teaching about the history of scientific research with human subjects bums me out.

Indeed, I get fairly regular indications from students in my “Ethics in Science” course that reading about and discussing the Nazi medical experiments and the U.S. Public Health Service’s Tuskegee syphilis experiment leaves them feeling grumpy, too.

Their grumpiness varies a bit depending on how they see themselves in relation to the researchers whose ethical transgressions are being inspected. Some of the science majors who identify strongly with the research community seem to get a little defensive, pressing me to see if these two big awful examples of human subject research aren’t clear anomalies, the work of obvious monsters. (This is one reason I generally point out that, when it comes to historical examples of ethically problematic research with human subjects, the bench is deep: the U.S. government’s syphilis experiments in Guatemala, the MIT Radioactivity Center’s studies on kids with mental disabilities in a residential school, the harms done to Henrietta Lacks and to the family members who survived her by scientists working with HeLa cells, and the National Cancer Institute- and Gates Foundation-funded studies of cervical cancer screening in India — to name just a few.) Some of the non-science majors in the class seem to look at their classmates who are science majors with a bit of suspicion.

Although I’ve been covering this material with my students since Spring of 2003, it was only a few years ago that I noticed that there was a strong correlation between my really bad mood and the point in the semester when we were covering the history of human subjects research. Indeed, I’ve come to realize that this is no mere correlation but a causal connection.

The harm that researchers have done to human subjects in order to build scientific knowledge in many of these historically notable cases makes me deeply unhappy. These cases involve scientists losing their ethical bearings and then defending indefensible actions as having been all in the service of science. It leaves me grumpy about the scientific community of which these researchers were a part (rather than being obviously marked as monsters or rogues). It leaves me grumpy about humanity.

In other contexts, my grumpiness might be no big deal to anyone but me. But in the context of my “Ethics in Science” course, I need to keep pessimism on a short leash. It’s kind of pointless to talk about what we ought to do if you’re feeling like people are going to be as evil as they can get away with being.

It’s important to talk about the Nazi doctors and the Tuskegee syphilis experiment so my students can see where formal statements about ethical constraints on human subject research (in particular, the Nuremberg Code and the Belmont Report) come from, what actual (rather than imagined) harms they are reactions to. To the extent that official rules and regulations are driven by very bad situations that the scientific community or the larger human community want to avoid repeating, history matters.

History also matters if scientists want to understand the attitudes of publics towards scientists in general and towards scientists conducting research with human subjects in particular. Newly-minted researchers who would never even dream of crossing the ethical lines the Nazi doctors or the Tuskegee syphilis researchers crossed may feel it deeply unfair that potential human subjects don’t default to trusting them. But that’s not how trust works. Ignoring the history of human subjects research means ignoring very real harms and violations of trust that have not faded from the collective memories of the populations that were harmed. Insisting that it’s not fair doesn’t magically earn scientists trust.

Grappling with that history, though, might help scientists repair trust and ensure that the research they conduct is actually worthy of trust.

It’s history that lets us start noticing patterns in the instances where human subjects research took a turn for the unethical. Frequently we see researchers working with human subjects whom they don’t see as fully human, or whose humanity seems less important than the piece of knowledge the researchers have decided to build. Or we see researchers who believe they are approaching questions “from the standpoint of pure science,” overestimating their own objectivity and good judgment.

This kind of behavior does not endear scientists to publics. Nor does it help researchers develop appropriate epistemic humility, a recognition that their objectivity is not an individual trait but rather a collective achievement of scientists engaging seriously with each other as they engage with the world they are trying to know. Nor does it help them build empathy.

I teach about the history of human subjects research because it is important to understand where the distrust between scientists and publics has come from. I teach about this history because it is crucial to understanding where current rules and regulations come from.

I teach about this history because I fully believe that scientists can — and must — do better.

And, because the ethical failings of past human subject research were hardly ever the fault of monsters, we ought to grapple with this history so we can identify the places where individual human weaknesses, biases, and blind spots are likely to lead to ethical problems down the road. We need to build systems and social mechanisms to be accountable to human subjects (and to publics), to prioritize their interests, and never to lose sight of their humanity.

We can — and must — do better. But this requires that we seriously examine the ways that scientists have fallen short — even the ways that they have done evil. We owe it to future human subjects of research to learn from the ways scientists have failed past human subjects, to apply these lessons, to build something better.