Ebola, abundant caution, and sharing a world.

Today a judge in Maine ruled that quarantining nurse Kaci Hickox is not necessary to protect the public from Ebola. Hickox, who had been in Sierra Leone for a month helping to treat people infected with Ebola, had earlier been subject to a mandatory quarantine in New Jersey upon her return to the U.S., despite being free of Ebola symptoms (and so, given what scientists know about Ebola, unable to transmit the virus). She was released from that quarantine after a CDC evaluation, though if she had stayed in New Jersey, the state health department had promised to keep her in quarantine for a full 21 days. Maine state officials originally followed New Jersey’s lead, deciding that following CDC guidelines for medical workers who have been in contact with Ebola patients required a quarantine.

The order from Judge Charles C. LaVerdiere “requires Ms. Hickox to submit to daily monitoring for symptoms, to coordinate her travel with state health officials, and to notify them immediately if symptoms appear. Ms. Hickox has agreed to follow the requirements.”

It is perhaps understandable that state officials, among others, have been responding to the Ebola virus in the U.S. with policy recommendations, and actions, driven by “an abundance of caution,” but it’s worth asking whether this is actually an overabundance.

Indeed, the reaction to a handful of Ebola cases in the U.S. is so far shaping up to be an overreaction. As Maryn McKenna details in a staggering round-up, people have been asked or forced to stay home from their jobs for 21 days (the longest Ebola incubation period) for visiting countries in Africa with no Ebola cases. Someone was placed on leave by an employer for visiting Dallas (in whose city limits there were two Ebola cases). A Haitian woman who vomited on a Boston subway platform was presumed to be Liberian, and the station was shut down. Press coverage of Ebola in the U.S. has fed the public’s panic.

How we deal with risk is a pretty personal thing. It has a lot to do with what outcomes we feel it most important to avoid (even if the probability of those outcomes is very low) and which outcomes we think we could handle. This means our thinking about risk will be connected to our individual preferences, our experiences, and what we think we know.

Sharing a world with other people, though, requires finding some common ground on what level of risk is acceptable.

Our choices about how much risk we’re willing to take on frequently have an effect on the level of risk to which those around us are subject. This comes up in discussions of vaccination, of texting-while-driving, of policy making in response to climate change. Finding the common ground — even noticing that our risk-taking decisions impact anyone but us — can be really difficult.

However, it’s bound to be even more difficult if we’re guessing at risks without taking account of what we know. Without some agreement about the facts, we’re likely to get into irresolvable conflicts. (If you want to bone up on what scientists know about Ebola, by the way, you really ought to be reading what Tara C. Smith has been writing about it.)

Our scientific information is not perfect, and it is the case that very unlikely events sometimes happen. However, striving to reduce our risk to zero might not leave us as safe as we imagine it would. If we fear any contact with anyone who has come into contact with an Ebola patient, what would eliminating that risk require? Permanently barring their re-entry to the U.S. from areas of outbreak? Killing possibly infected health care workers already in the U.S. and burning their remains?

Personally, I’d prefer less dystopia in my world, not more.

And even given the actual reactions to people like Kaci Hickox from states like New Jersey and Maine, the “abundance of caution” approach has foreseeable effects that will not help protect people in the U.S. from Ebola. Mandatory quarantines that take no account of symptoms of those quarantined (nor of the conditions under which someone is infectious) are a disincentive for people to be honest about their exposure, or to come forward when symptoms present. Moreover, they provide a disincentive for health care workers to help people in areas of Ebola outbreak — where helping patients and containing the spread of the virus is, arguably, a reasonable strategy to protect other countries (like the U.S.) that do not have Ebola epidemics.

Indeed, the “abundance of caution” approach might make us less safe by ramping up our stress beyond what is warranted or healthy.

If this were a spooky story, Ebola might be the virus that got in only to reveal to us, by the story’s conclusion, that it was really our own terrified reaction to the threat that would end up harming us the most. That’s not a story we need to play out in real life.

Some thoughts about the suicide of Yoshiki Sasai.

In the previous post I suggested that it’s a mistake to try to understand scientific activity (including misconduct and culpable mistakes) by focusing on individual scientists, individual choices, and individual responsibility without also considering the larger community of scientists and the social structures it creates and maintains. That post was where I landed after thinking about what was bugging me about the news coverage and discussions about the recent suicide of Yoshiki Sasai, deputy director of the Riken Center for Developmental Biology in Kobe, Japan, and coauthor of retracted papers on STAP cells.

I went toward teasing out the larger, unproductive pattern I saw, on the theory that trying to find a more productive pattern might help scientific communities do better going forward.

But this also means I didn’t say much about my particular response to Sasai’s suicide and the circumstances around it. I’m going to try to do that here, and I’m not going to try to fit every piece of my response into a larger pattern or path forward.

The situation in a nutshell:

Yoshiki Sasai worked with Haruko Obokata at the Riken Center on “stimulus-triggered acquisition of pluripotency”, a method by which exposing normal cells to a stress (like a mild acid) supposedly gave rise to pluripotent stem cells. It’s hard to know how closely they worked together on this; in the papers published on STAP, Obokata was the lead author and Sasai was a coauthor. It’s worth noting that Obokata, an up-and-coming researcher, was some 20 years younger than Sasai. Sasai was a more senior scientist, serving in a leadership position at the Riken Center and as Obokata’s supervisor there.

The papers were published in a high impact journal (Nature) and got quite a lot of attention. But then the findings came into question. Other researchers trying to reproduce the findings that had been reported in the papers couldn’t reproduce them. One of the images in the papers seemed to be a duplicate of another, which was fishy. Nature investigated, Riken investigated, the papers were retracted, Obokata continued to defend the papers and to deny any wrongdoing.

Meanwhile, a Riken investigation committee said “Sasai bore heavy responsibility for not confirming data for the STAP study and for Obokata’s misconduct”. This apparently had a heavy impact on Sasai:

Sasai’s colleagues at Riken said he had been receiving mental counseling since the scandal surrounding papers on STAP, or stimulus-triggered acquisition of pluripotency, cells, which was lead-authored by Obokata, came to light earlier this year.

Kagaya [head of public relations at Riken] added that Sasai was hospitalized for nearly a month in March due to psychological stress related to the scandal, but that he “recovered and had not been hospitalized since.”

Finally, Sasai hanged himself in a Riken stairwell. One of the notes he left, addressed to Obokata, urged her to reproduce the STAP findings.

So, what is my response to all this?

I think it’s good when scientists take their responsibilities seriously, including the responsibility to provide good advice to junior colleagues.

I also think it’s good when scientists can recognize the limits of those responsibilities. You can give very, very good advice — and explain with great clarity why it’s good advice — but the person you’re giving it to may still choose to do something else. It can’t be your responsibility to control another autonomous person’s actions.

I think trust is a crucial part of any supervisory or collaborative relationship. I think it’s good to be able to interact with coworkers with the presumption of trust.

I think it’s awful that it’s so hard to tell which people are not worthy of our trust before they’ve taken advantage of our trust to do something bad.

Finding the right balance between being hands-on and giving space is a challenge in the best of supervisory or mentoring relationships.

Bringing to one’s scientific peers — and to the public — an important discovery with the potential to enable lots of research that could ultimately help lots of people must feel amazing. Even if the scientific community didn’t judge retraction harshly, I imagine that having to say, “We jumped the gun on the ‘discovery’ we told you about” would not feel good.

The danger of having your research center’s reputation tied to an important discovery is what happens if that discovery doesn’t hold up, whether because of misconduct or mistakes. And either way, this means that lots of hard work that is important in the building of the shared body of scientific knowledge (and lots of people doing that hard work) can become invisible.

Maybe it would be good to value that work on its own merits, independent of whether anyone else judged it important or newsworthy. Maybe we need to rethink the “big discoveries” and “important discoverers” way of thinking about what makes scientific work or a research center good.

Figuring out why something went wrong is important. When the something that went wrong includes people making choices, though, this always seems to come down to assigning blame. I feel like that’s the wrong place to stop.

I feel like investigations of results that don’t hold up, including investigations that turn up misconduct, should grapple with the question of how we can use what we found here to fix what went wrong. Instead of just asking, “Whose fault was this?” why not ask, “How can we address the harm? What can we learn that will help us avoid this problem in the future?”

I think it’s a problem when a particular work environment makes the people in it anxious all the time.

I think it’s a problem when being careful feels like an unacceptable risk because it slows you down. I think it’s a problem when being first feels more important than being sure.

I think it’s a problem when a mistake of judgment feels so big that you can’t imagine a way forward from it. So disastrous that you can’t learn something useful from it. So monumental that it makes you feel like not existing.

I feel like those of us who are still here have a responsibility to pay attention.

We have a responsibility to think about the impacts of the ways science is done, valued, celebrated, on the human beings who are doing science — and not just on the strongest of those human beings, but also on the ones who may be more vulnerable.

We have a responsibility to try to learn something from this.

I don’t think what we should learn is not to trust, but how to be better at balancing trust and accountability.

I don’t think what we should learn is not to take the responsibilities of oversight seriously, but to put them in perspective and to mobilize more people in the community to provide more support in oversight and mentoring.

Can we learn enough to shift away from the Important New Discovery model of how we value scientific contributions? Can we learn enough that cooperation overtakes competition, that building the new knowledge together and making sure it holds up is more important than slapping someone’s name on it? I don’t know.

I do know that, if the pressures of the scientific career landscape are harder to navigate for people with consciences and easier to navigate for people without consciences, it will be a problem for all of us.

When focusing on individual responsibility obscures shared responsibility.

Over many years of writing about ethics in the conduct of science, I’ve had occasion to consider many cases of scientific misconduct and misbehavior, instances of honest mistakes and culpable mistakes. Discussions of these cases in the media and among scientists often make them look aberrant, singular, unconnected — the Schön case, the Hauser case, Aetogate, the Sezen-Sames case, the Hwang Woo-suk case, the Stapel case, the Van Parijs case.* They make the world of science look binary, a set of unproblematically ethical practitioners with a handful of evil interlopers who need only be identified and rooted out.

I don’t think this approach is helpful, either in preventing misconduct, misbehavior, and mistakes, or in mounting a sensible response to the people involved in them.

Indeed, despite the fact that scientific knowledge-building is inherently a cooperative activity, the tendency to focus on individual responsibility can manifest itself in the assignment of individual blame to people who “should have known” that another individual was involved in misconduct or culpable mistakes. It seems that something like this view — whether imposed from without or from within — may have been a factor in the recent suicide of Yoshiki Sasai, deputy director of the Riken Center for Developmental Biology in Kobe, Japan, and coauthor of retracted papers on STAP cells.

While there seems to be widespread suspicion that the lead author of the STAP cell papers, Haruko Obokata, may have engaged in research misconduct of some sort (something Obokata has denied), Sasai was not himself accused of research misconduct. However, in his role as an advisor to Obokata, Sasai was held responsible by Riken’s investigation for not confirming Obokata’s data. Sasai expressed shame over the problems in the retracted papers, and had been hospitalized prior to his suicide in connection with stress over the scandal.

Michael Eisen describes the similarities here to the suicide of his own father, a researcher at NIH who was caught up in the investigation of fraud committed by a member of his lab:

[A]s the senior scientists involved, both Sasai and my father bore the brunt of the institutional criticism, and both seem to have been far more disturbed by it than the people who actually committed the fraud.

It is impossible to know why they both responded to situations where they apparently did nothing wrong by killing themselves. But it is hard for me not to place at least part of the blame on the way the scientific community responds to scientific misconduct.

This response, Eisen notes, goes beyond rooting out the errors in the scientific record and extends to rooting out all the people connected to the misconduct event, on the assumption that fraud is caused by easily identifiable — and removable — individuals, something that can be cut out precisely like a tumor, leaving the rest of the scientific community free of the cancer. But Eisen doesn’t believe this model of the problem is accurate, and he notes the damage it can do to people like Sasai and like his own father:

Imagine what it must be like to have devoted your life to science, and then to discover that someone in your midst – someone you have some role in supervising – has committed the ultimate scientific sin. That in and of itself must be disturbing enough. Indeed I remember how upset my father was as he was trying to prove that fraud had taken place. But then imagine what it must feel like to all of a sudden become the focal point for scrutiny – to experience your colleagues and your field casting you aside. It must feel like your whole world is collapsing around you, and not everybody has the mental strength to deal with that.

Of course everyone will point out that Sasai was overreacting – just as they did with my father. Neither was accused of anything. But that is bullshit. We DO act like everyone involved in cases of fraud is responsible. We do this because when fraud happens, we want it to be a singularity. We are all so confident this could never happen to us, that it must be that somebody in a position of power was lax – the environment was flawed. It is there in the institutional response. And it is there in the whispers …

Given the horrible incentive structure we have in science today – Haruko Obokata knew that a splashy result would get a Nature paper and make her famous and secure her career if only she got that one result showing that you could create stem cells by dipping normal cells in acid – it is somewhat of a miracle that more people don’t make up results on a routine basis. It is important that we identify, and come down hard, on people who cheat (although I wish this would include the far greater number of people who overhype their results – something that is ultimately more damaging than the small number of people who out and out commit fraud).

But the next time something like this happens, I am begging you to please be careful about how you respond. Recognize that, while invariably fraud involves a failure not just of honesty but of oversight, most of the people involved are honest, decent scientists, and that witch hunts meant to pretend that this kind of thing could not happen to all of us are not just gross and unseemly – they can, and sadly do, often kill.

As I read him, Eisen is doing at least a few things here. He is suggesting that a desire on the part of scientists for fraud to be a singularity — something that happens “over there” at the hands of someone else who is bad — means that they will draw a circle around the fraud and hold everyone on the inside of that circle (and no one outside of it) accountable. He’s also arguing that the inside/outside boundary inappropriately lumps the falsifiers, fabricators, and plagiarists with those who have committed the lesser sin of not providing sufficient oversight. He is pointing out the irony that those who have erred by not providing sufficient oversight tend to carry more guilt than do those they were working with who have lied outright to their scientific peers. And he is suggesting that needed efforts to correct the scientific record and to protect the scientific community from dishonest researchers can have tragic results for people who are arguably less culpable.

Indeed, if we describe Sasai’s failure as a failure of oversight, it suggests that there is some clear benchmark for sufficient oversight in scientific research collaborations. But it can be very hard to recognize that what seemed like a reasonable level of oversight was insufficient until someone you’re supervising or collaborating with is caught in misbehavior or a mistake. (That amount of oversight might well have been sufficient if the person one was supervising had chosen to behave honestly, for example.) There are limits here. Unless you’re shadowing colleagues 24/7, oversight depends on some baseline level of trust, some presumption that one’s colleagues are behaving honestly rather than dishonestly.

Eisen’s framing of the problem, though, is still largely in terms of the individual responsibility of fraudsters (and over-hypers). This prompts arguments in response about individuals bearing responsibility for their actions and their effects (including the effects of public discussion of those actions) and about the individual scientists who are arguably victims of data fabrication and fraud. We are still in the realm of conceiving of fraudsters as “other” rather than recognizing that honest, decent scientists may be only a few bad decisions away from those they cast as monsters.

And we’re still describing the problem in terms of individual circumstances, individual choices, and individual failures.

I think Eisen is actually on the road to pointing out that a focus primarily on the individual level is unhelpful when he points to the problems of the scientific incentive structure. But I think it’s important to explicitly raise the alternate model, that fraud also flows from a collective failure of the scientific community and of the social structures it has built — what is valued, what is rewarded, what is tolerated, what is punished.

Arguably, one of the social structures implicated in scientific fraud is the “first across the finish line, first to publish in a high-impact journal” model of scientific achievement. When being second to a discovery counts for exactly nothing (after lots of time, effort, and other resources have been invested), there is much incentive for haste and corner-cutting, and sometimes even outright fraud. This provides temptations for researchers — and dangers for those providing oversight to ambitious colleagues who may fall prey to such temptations. But while misconduct involves individuals making bad decisions, it happens in the context of a reward structure that exists because of collective choices and behaviors. If the structures that result from those collective choices and behaviors turn individual choices that are pathological to the shared project of building knowledge into rational choices for the individual under the circumstances (because they help the individual secure the reward), the community probably has an interest in examining the structures it has built.

Similarly, there are pathological individual choices (like ignoring or covering up someone else’s misconduct) that seem rational if the social structures built by the scientific community don’t enable a clear path forward within the community for scientists who have erred (whether culpably or honestly). Scientists are human. They get attached to their colleagues and tend to believe them to be capable of learning from their mistakes. Also, they notice that blowing the whistle on misconduct can lead to isolation of the whistleblower, not just the people committing the misconduct. Arguably, these are failures of the community and of the social structures it has built.

We might even go a step further and consider whether insisting on talking about scientific behavior (and misbehavior) solely in terms of individual actions and individual responsibility is part of the problem.

Seeing the scientific enterprise and things that happen in connection with it in terms of heroes and villains and innocent bystanders can seem very natural. Taking this view also makes it look like the most rational choice for scientists is to plot their individual courses within the status quo. The rules, the reward structures, are taken almost as if they were carved in granite. How could one person change them? What would be the point of opting out of publishing in the high-impact journals, since it would surely only hurt the individual opting out while leaving the system intact? In a competition for individual prestige and credit for knowledge built, what could be the point of pausing to try to learn something from the culpable mistakes committed by other individuals rather than simply removing those other individuals from the competition?

But individual scientists are not working in isolation against a fixed backdrop. Treating their social structures as if they were a fixed backdrop not only obscures that these structures result from collective choices but also prevents scientists from thinking together about other ways the institutional practice of science could be.

Whether some of the alternative arrangements they could create might be better than the status quo — from the point of view of coordinating scientific efforts, improving scientists’ quality of life, or improving the quality of the body of knowledge scientists are building — is surely an empirical question. But just as surely it is an empirical question worth exploring.

______
* It’s worth noticing that failures of safety are also frequently characterized as singular events, as in the Sheri Sangji/Patrick Harran case. As I’ve discussed at length on this blog, there is no reason to imagine the conditions in Harran’s lab that led to Sangji’s death were unique, and there is plenty of reason for the community of academic researchers to try to cultivate a culture of safety rather than individually hoping their own good luck will hold.

When your cover photo says less about the story and more about who you imagine you’re talking to.

The choice of cover of the most recent issue of Science was not good. This provoked strong reactions and, eventually, an apology from Science’s editor-in-chief. It’s not the worst apology I’ve seen in recent days, but my reading of it suggests that there’s still a gap between the reactions to the cover and the editorial team’s grasp of those reactions.

So, in the interests of doing what I can to help close that gap, I give you the apology (in block quotes) and my response to it:

From Science Editor-in-Chief Marcia McNutt:

Science has heard from many readers expressing their opinions and concerns with the recent [11 July 2014] cover choice.

The cover showing transgender sex workers in Jakarta was selected after much discussion by a large group

I suppose the fact that the choice of the cover was discussed by many people for a long time (as opposed to by one person with no discussion) is good. But it’s no guarantee of a good choice, as we’ve seen here. It might be useful to tell readers more about what kind of group was involved in making the decision, and what kind of discussion led to the choice of this cover over the other options that were considered.

and was not intended to offend anyone,

Imagine my relief that you did not intend what happened in response to your choice of cover. And, given how predictable the response to your cover was, imagine my estimation of your competence in the science communication arena dropping several notches. How well do you know your audience? Who exactly do you imagine that audience to be? If you’re really not interested in reaching out to people like me, can I get my AAAS dues refunded, please?

but rather to highlight the fact that there are solutions for the AIDS crisis for this forgotten but at-risk group. A few have indicated to me that the cover did exactly that,

For them. For them the cover highlighted transgender sex workers as a risk group who might get needed help from research. So, there was a segment of your audience for whom your choice succeeded, apparently.

but more have indicated the opposite reaction: that the cover was offensive because they did not have the context of the story prior to viewing it, an important piece of information that was available to those choosing the cover.

Please be careful with your causal claims here. Even with the missing context provided, a number of people still find the cover harmful. This explanation of the harm, in the context of what the scientific community, and the wider world, can be like for a trans* woman, spells it out pretty eloquently.

The problem, in other words, goes deeper than the picture not effectively conveying your intended context. Instead, the cover communicated layers of context about who you imagine as your audience — and about whose reality is not really on your radar.

The people who are using social media to explain the problems they have with this cover are sharing information about who is in your audience, about what our lives in and with science are like. We are pinging you so we will be on your radar. We are trying to help you.

I am truly sorry for any discomfort that this cover may have caused anyone,

Please do not minimize the harm your choice of cover caused by describing it as “discomfort”. Doing so suggests that you still aren’t recognizing how this isn’t an event happening in a vacuum. That’s a bad way to support AAAS members who are women and to broaden the audience for science.

and promise that we will strive to do much better in the future to be sensitive to all groups and not assume that context and intent will speak for themselves.

What’s your action plan going forward? Is there good reason to think that simply trying hard to do better will get the job done? Or are you committed enough to doing better that you’re ready to revisit your editorial processes, the diversity of your editorial team, the diversity of the people beyond that team whose advice and feedback you seek and take seriously?

I’ll repeat: We are trying to help you. We criticize this cover because we expect more from Science and AAAS. This is why people have been laboring, patiently, to spell out the problems.

Please use those patient explanations and formulate a serious plan to do better.

* * * * *
For this post, I’m not accepting comments. There is plenty of information linked here for people to read and digest, and my sense is this is a topic where thinking hard for a while is likely to be more productive than jumping in with questions that the reading, digesting, and hard thinking could themselves serve to answer.

Some thoughts about human subjects research in the wake of Facebook’s massive experiment.

You can read the study itself here, plus a very comprehensive discussion of reactions to the study here.

1. If you intend to publish your research in a peer-reviewed scientific journal, you are expected to have conducted that research with the appropriate ethical oversight. Indeed, the submission process usually involves explicitly affirming that you have done so (and providing documentation, in the case of human subjects research, of approval by the relevant Institutional Review Board(s) or of the IRB’s determination that the research was exempt from IRB oversight).

2. Your judgment, as a researcher, that your research will not expose your human subjects to especially big harms does not suffice to exempt that research from IRB oversight. The best way to establish that your research is exempt from IRB oversight is to submit your protocol to the IRB and have the IRB determine that it is exempt.

3. It’s not unreasonable for people to judge that violating their informed consent (say, by not letting them know that they are human subjects in a study where you are manipulating their environment and not giving them the opportunity to opt out of being part of your study) is itself a harm to them. When we value our autonomy, we tend to get cranky when others disregard it.

4. Researchers, IRBs, and the general public needn’t judge a study to be as bad as [fill in the name of a particularly horrific instance of human subjects research] to judge the conduct of the researchers in the study unethical. We can (and should) surely ask for more than “not as bad as the Tuskegee Syphilis Experiment”.

5. IRB approval of a study means that the research has received ethical oversight, but it does not guarantee that the treatment of human subjects in the research will be ethical. IRBs can make questionable ethical judgments too.

6. It is unreasonable to suggest that you can generally substitute Terms of Service or End User License Agreements for informed consent documents, as the latter are supposed to be clear and understandable to your prospective human subjects, while the former are written in such a way that even lawyers have a hard time reading and understanding them. The TOS or EULA is clearly designed to protect the company, not the user. (Some of those users, by the way, are in their early teens, which means they probably ought to be regarded as members of a “vulnerable population” entitled to more protection, not less.)

7. Just because a company like Facebook may “routinely” engage in manipulations of a user’s environment doesn’t make that kind of manipulation automatically ethical when it is done for the purposes of research. Nor does it mean that that kind of manipulation is ethical when Facebook does it for its own purposes. As it happens, peer-reviewed scientific journals, funding agencies, and other social structures tend to hold scientists building knowledge with human subjects research to higher ethical standards than (say) corporations are held to when they interact with humans. This doesn’t necessarily mean our ethical demands of scientific knowledge-builders are too high. Instead, it may mean that our ethical demands of corporations are too low.

In the wake of the Harran plea deal, are universities embracing lab safety?

Earlier this month, prosecutors in Los Angeles reached a plea agreement with UCLA chemistry professor Patrick Harran in the criminal case against him in connection with the 2008 lab accident that resulted in the death of 23-year-old staff research assistant Sheharbano “Sheri” Sangji. Harran, who was facing more than 4 years of jail time if convicted, instead will perform 800 hours of community service and may find himself back in court in the event that his lab is found to have new safety violations in the next five years.

The Sangji family is not satisfied that the plea punishes Harran enough. My worry is about whether the resolution of this case will have a positive impact on safety in academic labs and research settings.

According to The Chronicle of Higher Education,

Several [independent safety advocates] agreed that universities’ research laboratories still remain more dangerous than their corporate counterparts. Yet they also expressed confidence that the impetus for improvement brought by the first filing ever of criminal charges over a fatal university lab accident has not been diluted by the plea bargain. …

[T]he action by California prosecutors “has gotten the attention of virtually every research chemist out there,” even in states that may seem more reluctant to pursue such cases, [Neal R. Langerman, a former associate professor of chemistry at Utah State University who now heads Advanced Chemical Safety, a consulting firm] said. “This is precedent-setting, and now that the precedent is set, you really do not want to test the water, because the water is already boiling.”

As you might expect, the official statement from UCLA plays up the improvements in lab safety put into place in the wake of the accident and points to the creation of the UC Center for Laboratory Safety, which has been holding workshops and surveying lab workers on safety practices and attitudes.

I’m afraid, however, judging from the immediate reaction I’ve seen at my own institution, that we have a long way to go.

In particular, a number of science faculty (who are not chemists) seem to have been getting clear messages in the wake of “that UCLA prosecution” — they didn’t really know the details of the case, nor the names of the people involved — that our university would not be backing them up legally in the event of any safety mishap in the lab or the field. Basically, the rumblings from the higher administrative strata were: No matter how well you’ve prepared yourself, your students, your employees, no matter how many safety measures you’ve put into place, no matter what limitations you’re working with as far as equipment or facilities, if something goes wrong, it’s your ass on the line.

This does not strike me as a productive way to approach safe working conditions as a collective responsibility within an educational institution. I also suspect it’s not a stance that would hold up in court, but since it would probably take another lab tragedy and prosecution to undermine it, I’m hopeful that some sense of institutional ethics will well up and result in a more productive approach.

The most charitable explanation I can come up with is that the higher administrative strata intended to communicate that science faculty have a positive duty to ensure safe working conditions for their students and employees (and themselves). That means that science faculty need to be proactive in assessing their research settings (whether for laboratory or field research) for potential hazards, in educating themselves and those they work with about those hazards, and in having workable plans to mitigate the hazards and to respond swiftly and effectively to mishaps. All of that is sensible enough.

However, none of that means that the institution is free of responsibility. Departments, colleges, and university administrators control resources that can make the difference between a pretty safe research environment and a terribly risky one. Institutions, not individual faculty, create and maintain occupational health programs. Institutions can marshal shared resources (including safety training programs and institutional safety officers) that individual faculty cannot.

Moreover, institutions set the institutional tone — the more or less official sense of what is prioritized, of what is valued. If the strongest message about safety that reaches faculty boils down to legal liability and who will ultimately be legally liable, I’m pretty sure the institution still has a great deal of work to do in establishing a real culture of safety.

_____

Related posts:

Suit against UCLA in fatal lab fire raises question of who is responsible for safety.

Crime, punishment, and the way forward: in the wake of Sheri Sangji’s death, what should happen to Patrick Harran?

Facing felony charges in lab death of Sheri Sangji, UCLA settles, Harran stretches credulity.

Why does lab safety look different to chemists in academia and chemists in industry?

Community responsibility for a safety culture in academic chemistry.

Are safe working conditions too expensive for knowledge-builders?

The quest for underlying order: inside the frauds of Diederik Stapel (part 1)

Yudhijit Bhattacharjee has an excellent article in the most recent New York Times Magazine (published April 26, 2013) on disgraced Dutch social psychologist Diederik Stapel. Why is Stapel disgraced? At the last count at Retraction Watch, 53 of his scientific publications (down from an earlier count of 54) have been retracted, owing to the fact that the results reported in those publications were made up. [Scroll in that Retraction Watch post for the update — apparently one of the Stapel retractions was double-counted. This is the risk when you publish so much made-up stuff.]

There’s not much to say about the badness of a scientist making results up. Science is supposed to be an activity in which people build a body of reliable knowledge about the world, grounding that knowledge in actual empirical observations of that world. Substituting the story you want to tell for those actual empirical observations undercuts that goal.

But Bhattacharjee’s article is fascinating because it goes some way toward illuminating why Stapel abandoned the path of scientific discovery and went down the path of scientific fraud instead. It shows us some of the forces and habits that, while seemingly innocuous taken individually, can compound to reinforce scientific behavior that is not helpful to the project of knowledge-building. It reveals forces within scientific communities that make it hard for scientists to pursue suspicions of fraud to get formal determinations of whether their colleagues are actually cheating. And the article exposes some of the harms Stapel committed beyond publishing lies as scientific findings.

It’s an incredibly rich piece of reporting, one which I recommend you read in its entirety, maybe more than once. Given just how much there is to talk about here, I’ll be taking at least a few posts to highlight bits of the article as nourishing food for thought.

Let’s start with how Stapel describes his early motivation for fabricating results to Bhattacharjee. From the article:

Stapel did not deny that his deceit was driven by ambition. But it was more complicated than that, he told me. He insisted that he loved social psychology but had been frustrated by the messiness of experimental data, which rarely led to clear conclusions. His lifelong obsession with elegance and order, he said, led him to concoct sexy results that journals found attractive. “It was a quest for aesthetics, for beauty — instead of the truth,” he said. He described his behavior as an addiction that drove him to carry out acts of increasingly daring fraud, like a junkie seeking a bigger and better high.

(Bold emphasis added.)

It’s worth noting here that other scientists — plenty of scientists who were never cheaters, in fact — have also pursued science as a quest for beauty, elegance, and order. For many, science is powerful because it is a way to find order in a messy universe, to discover simple natural laws that give rise to such an array of complex phenomena. We’ve discussed this here before, when looking at the tension between Platonist and Aristotelian strategies for getting to objective truths:

Plato’s view was that the stuff of our world consists largely of imperfect material instantiations of immaterial ideal forms — and that science makes the observations it does of many examples of material stuff to get a handle on those ideal forms.

If you know the allegory of the cave, however, you know that Plato didn’t put much faith in feeble human sense organs as a route to grasping the forms. The very imperfection of those material instantiations that our sense organs apprehend would be bound to mislead us about the forms. Instead, Plato thought we’d need to use the mind to grasp the forms.

This is a crucial juncture where Aristotle parted ways with Plato. Aristotle still thought that there was something like the forms, but he rejected Plato’s full-strength rationalism in favor of an empirical approach to grasping them. If you wanted to get a handle on the form of “horse,” for example, Aristotle thought the thing to do was to examine lots of actual specimens of horse and to identify the essence they all have in common. The Aristotelian approach probably feels more sensible to modern scientists than the Platonist alternative, but note that we’re still talking about arriving at a description of “horse-ness” that transcends the observable features of any particular horse.

Honest scientists simultaneously reach for beautiful order and the truth. They use careful observations of the world to try to discern the actual structures and forces giving rise to what they are observing. They recognize that our observational powers are imperfect, that our measurements are not infinitely precise (and that they are often at least a little inaccurate), but those observations, those measurements, are what we have to work with in discerning the order underlying them.

This is why Ockham’s razor — to prefer simple explanations for phenomena over more complicated ones — is a strategy but not a rule. Scientists go into their knowledge-building endeavor with the hunch that the world has more underlying order than is immediately apparent to us — and that careful empirical study will help us discover that order — but how things actually are provides a constraint on how much elegance there is to be found.

However, as the article in the New York Times Magazine makes clear, Stapel was not alone in expecting the world he was trying to describe in his research to yield elegance:

In his early years of research — when he supposedly collected real experimental data — Stapel wrote papers laying out complicated and messy relationships between multiple variables. He soon realized that journal editors preferred simplicity. “They are actually telling you: ‘Leave out this stuff. Make it simpler,’” Stapel told me. Before long, he was striving to write elegant articles.

The journal editors’ preference here connects to a fairly common notion of understanding. Understanding a system is being able to identify the components of that system that make a difference in producing the effects of interest — and, by extension, recognizing which components of the system don’t feature prominently in bringing about the behaviors you’re studying. Again, the hunch is that there are likely to be simple mechanisms underlying apparently complex behavior. When you really understand the system, you can point out those mechanisms and explain what’s going on while leaving all the other extraneous bits in the background.

Pushing to find this kind of underlying simplicity has been a fruitful scientific strategy, but it’s a strategy that can run into trouble if the mechanisms giving rise to the behavior you’re studying are in fact complicated. There’s a phrase attributed to Einstein that captures this tension nicely: as simple as possible … but not simpler.

The journal editors, by expressing to Stapel that they liked simplicity more than messy relationships between multiple variables, were surely not telling Stapel to lie about his findings to create such simplicity. They were likely conveying their view that further study, or more careful analysis of data, might yield elegant relations that were really there but elusive. However, intentionally or not, they did communicate to Stapel that simple relationships fit better with journal editors’ hunches about what the world is like than did messy ones — and that results that seemed to reveal simple relations were thus more likely to pass through peer review without raising serious objections.

So, Stapel was aware that the gatekeepers of the literature in his field preferred elegant results. He also seemed to have felt the pressure that early-career academic scientists often feel to make all of his research time productive — where the ultimate measure of productivity is a publishable result. Again, from the New York Times Magazine article:

The experiment — and others like it — didn’t give Stapel the desired results, he said. He had the choice of abandoning the work or redoing the experiment. But he had already spent a lot of time on the research and was convinced his hypothesis was valid. “I said — you know what, I am going to create the data set,” he told me.

(Bold emphasis added.)

The sunk time clearly struck Stapel as a problem. Making a careful study of the particular psychological phenomenon he was trying to understand hadn’t yielded good results — which is to say, results that would be recognized by scientific journal editors or peer reviewers as adding to the shared body of knowledge by revealing something about the mechanism at work in the phenomenon. This is not to say that experiments with negative results don’t tell scientists something about how the world is. But what negative results tell us is usually that the available data don’t support the hypothesis, or perhaps that the experimental design wasn’t a great way to obtain data to let us evaluate that hypothesis.

Scientific journals have not generally been very interested in publishing negative results, however, so scientists tend to view them as failures. They may help us to reject appealing hypotheses or to refine experimental strategies, but they don’t usually do much to help advance a scientist’s career. If negative results don’t help you get publications, without which it’s harder to get grants to fund research that could find positive results, then the time and money spent doing all that research has been wasted.

And Stapel felt — maybe because of his hunch that the piece of the world he was trying to describe had to have an underlying order, elegance, simplicity — that his hypothesis was right. The messiness of actual data from the world got in the way of proving it, but it had to be so. And this expectation of elegance and simplicity fit perfectly with the feedback he had heard before from journal editors in his field (feedback that may well have fed Stapel’s own conviction).

A career calculation paired with a strong metaphysical commitment to underlying simplicity seems, then, to have persuaded Diederik Stapel to let his hunch weigh more heavily than the data and then to commit the cardinal sin of falsifying data that could be presented to other scientists as “evidence” to support that hunch.

No one made Diederik Stapel cross that line. But it’s probably worth thinking about the ways that commitments within scientific communities — especially methodological commitments that start to take on the strength of metaphysical commitments — could have made crossing it more tempting.

Leave the full-sized conditioner, take the ski poles: whose assessment of risks did the TSA consider in new rules for carry-ons?

At Error Statistics Philosophy, D. G. Mayo has an interesting discussion of changes that just went into effect to Transportation Security Administration rules about what air travelers can bring in their carry-on bags. Here’s how the TSA Blog describes the changes:

TSA established a committee to review the prohibited items list based on an overall risk-based security approach. After the review, TSA Administrator John S. Pistole made the decision to start allowing the following items in carry-on bags beginning April 25th:

  • Small Pocket Knives – Small knives with non-locking blades smaller than 2.36 inches and less than 1/2 inch in width will be permitted
  • Small Novelty Bats and Toy Bats
  • Ski Poles
  • Hockey Sticks
  • Lacrosse Sticks
  • Billiard Cues
  • Golf Clubs (Limit Two)

This is part of an overall Risk-Based Security approach, which allows Transportation Security Officers to better focus their efforts on finding higher threat items such as explosives. This decision aligns TSA more closely with International Civil Aviation Organization (ICAO) standards.

These similar items will still remain on the prohibited items list:

  • Razor blades and box cutters will remain prohibited in carry-on luggage.
  • Full-size baseball, softball and cricket bats are prohibited items in carry-on luggage.

As Mayo notes, this particular framing of what does or does not count as a “higher threat item” on a flight has not been warmly embraced by everyone.

Notably, the Flight Attendants Union Coalition, the Coalition of Airline Pilots Associations, some federal air marshals, and at least one CEO of an airline have gone on record against the rule change. Their objection is twofold: removing these items from the list of items prohibited in carry-ons is unlikely to actually make screening lines at airports go any faster (since now you have to wait for the passenger arguing that there’s only 3 ounces of toothpaste left in the tube, so it should be allowed, and for the passenger arguing that her knife’s 2.4-inch blade is close enough to 2.36 inches), and allowing these items in carry-on bags on flights is likely to make those flights more dangerous for the people on them.

But that’s not the way the TSA is thinking about the risks here. Mayo writes:

By putting less focus on these items, Pistole says, airport screeners will be able to focus on looking for bomb components, which present a greater threat to aircraft. Such as:

bottled water, shampoo, cold cream, tooth paste, baby food, perfume, liquid make-up, etc. (over 3.4 oz).

They do have an argument; namely, that while liquids could be used to make explosives sharp objects will not bring down a plane. At least not so long as we can rely on the locked, bullet-proof cockpit door. Not that they’d want to permit any bullets to be around to test… And not that the locked door rule can plausibly be followed 100% of the time on smaller planes, from my experience. …

When the former TSA chief, Kip Hawley, was asked to weigh in, he fully supported Pistole; he regretted that he hadn’t acted to permit the above sports items during his service at TSA:

“They ought to let everything on that is sharp and pointy. Battle axes, machetes … bring anything you want that is pointy and sharp because while you may be able to commit an act of violence, you will not be able to take over the plane. It is as simple as that,” he said. (Link is here.)

I burst out laughing when I read this, but he was not joking:

Asked if he was using hyperbole in suggesting that battle axes be allowed on planes, Hawley said he was not.

“I really believe it. What are you going to do when you get on board with a battle ax? And you pull out your battle ax and say I’m taking over the airplane. You may be able to cut one or two people, but pretty soon you would be down in the aisle and the battle ax would be used on you.”

There does seem to be an emphasis on relying on passengers to rise up against ax-wielders, that passengers are angry these days at anyone who starts trouble. But what about the fact that there’s a lot more “air rage” these days? … That creates a genuine risk as well.

Will the availability of battle axes make disputes over the armrest more civil or less? Is the TSA comfortable with whatever happens on a flight so long as it falls short of bringing down the plane? How precisely did the TSA arrive at this particular assessment of risks that makes an 8 ounce bottle of conditioner more of a danger than a hockey stick?

And, perhaps most troubling, if the TSA is putting so much reliance on the vigilance and willingness to mount a response of passengers and flight crews, why does it look like they failed to seek out input from those passengers and flight crews about what kind of in-flight risks they are willing to undertake?

Who profits from killing Pluto?

You may recall (as I and my offspring do) the controversy about six years ago around the demotion of Pluto. There seemed to me to be reasonable arguments on both sides, and indeed, my household included pro-Pluto partisans and partisans for a new, clear definition of “planet” that might end up leaving Pluto on the ex-planet side of the line.

At the time, Neil deGrasse Tyson was probably the most recognizable advocate of the anti-Pluto position, and since then he has not been shy about reaffirming his position. I had taken this vocal (even gleeful) advocacy as just an instance of a scientist working to do effective public outreach, but recently, I’ve been made aware of reasons to believe that there may be more going on with Neil deGrasse Tyson here.

You may be familiar with the phenomenon of offshore banking, which involves depositors stashing their assets in bank accounts in countries with much lower taxes than the jurisdictions in which the depositors actually reside. Indeed, residents of the U.S. have occasionally used offshore bank accounts (and bank secrecy policies) to hide their money from the prying (and tax-assessing) eyes of the Internal Revenue Service.

Officially, those who are subject to U.S. income tax are required to declare any offshore bank accounts they might have. However, since the offshore banks themselves have generally not been required by law to report interest income on their accounts to the U.S. tax authorities, lots of account holders have kept mum about it, too.

Recently, however, the U.S. government has been more vigorous in its efforts to track down this taxable offshore income, and has put more pressure on the offshore bankers not to aid their depositors in hiding assets. International pressure seems to be pushing banks in the direction of more transparency and accountability.

What does any of this have to do with Neil deGrasse Tyson, or with Pluto?

You may recall, back when the International Astronomical Union (IAU) was formally considering the question of Pluto’s status, that Neil deGrasse Tyson was a vocal proponent of demoting Pluto from planethood. Despite his position at the Hayden Planetarium, a position in which he had rather more contact with school children and other interested non-scientists making heartfelt arguments in support of Pluto’s planethood, Neil deGrasse Tyson was utterly unmoved.

Steely in his determination to get Pluto reclassified. And forward looking. Add to that remarkably well-dressed (seriously, have you seen his vests?) for a Ph.D. astrophysicist who has spent most of his career working for museums.

The only way it makes sense is if Neil deGrasse Tyson has been stashing money someplace it can earn interest without being taxed. Given his connections, this can only mean off-world banking.

But again, what does this have to do with Pluto?

Pluto killer though he may be, Neil deGrasse Tyson is law abiding. There have so far been no legal requirements to report interest income earned in banks on other planets. But Neil deGrasse Tyson, as a forward looking kind of guy, undoubtedly recognizes that regulators are rapidly moving in the direction of requiring those subject to U.S. income tax to declare their bank accounts on other planets.

The regulators, however, seem uninterested in making any such requirements for those with assets in off-world banks that are not on planets. Which means that while Pluto is less than 1/5 the mass of Earth’s Moon, as a non-planet, it will remain a convenient place for Neil deGrasse Tyson to benefit from compound interest without increasing his tax liability.

It kind of casts his stance on Pluto in a different light, doesn’t it?

[More details in this story from the Associated Press.]