Some thoughts about human subjects research in the wake of Facebook’s massive experiment.

You can read the study itself here, plus a very comprehensive discussion of reactions to the study here.

1. If you intend to publish your research in a peer-reviewed scientific journal, you are expected to have conducted that research with the appropriate ethical oversight. Indeed, the submission process usually involves explicitly affirming that you have done so (and providing documentation, in the case of human subjects research, of approval by the relevant Institutional Review Board(s) or of the IRB’s determination that the research was exempt from IRB oversight).

2. Your judgment, as a researcher, that your research will not expose your human subjects to especially big harms does not suffice to exempt that research from IRB oversight. The best way to establish that your research is exempt from IRB oversight is to submit your protocol to the IRB and have the IRB determine that it is exempt.

3. It’s not unreasonable for people to judge that violating their informed consent (say, by not letting them know that they are human subjects in a study where you are manipulating their environment and not giving them the opportunity to opt out of being part of your study) is itself a harm to them. When we value our autonomy, we tend to get cranky when others disregard it.

4. Researchers, IRBs, and the general public needn’t judge a study to be as bad as [fill in the name of a particularly horrific instance of human subjects research] to judge the conduct of the researchers in the study unethical. We can (and should) surely ask for more than “not as bad as the Tuskegee Syphilis Experiment”.

5. IRB approval of a study means that the research has received ethical oversight, but it does not guarantee that the treatment of human subjects in the research will be ethical. IRBs can make questionable ethical judgments too.

6. It is unreasonable to suggest that you can generally substitute Terms of Service or End User License Agreements for informed consent documents, as the latter are supposed to be clear and understandable to your prospective human subjects, while the former are written in such a way that even lawyers have a hard time reading and understanding them. The TOS or EULA is clearly designed to protect the company, not the user. (Some of those users, by the way, are in their early teens, which means they probably ought to be regarded as members of a “vulnerable population” entitled to more protection, not less.)

7. Just because a company like Facebook may “routinely” engage in manipulations of a user’s environment doesn’t make that kind of manipulation automatically ethical when it is done for the purposes of research. Nor does it mean that that kind of manipulation is ethical when Facebook does it for its own purposes. As it happens, peer-reviewed scientific journals, funding agencies, and other social structures tend to hold scientists building knowledge with human subjects research to higher ethical standards than (say) corporations are held to when they interact with humans. This doesn’t necessarily mean our ethical demands of scientific knowledge-builders are too high. Instead, it may mean that our ethical demands of corporations are too low.

In the wake of the Harran plea deal, are universities embracing lab safety?

Earlier this month, prosecutors in Los Angeles reached a plea agreement with UCLA chemistry professor Patrick Harran in the criminal case against him in connection with the 2008 lab accident that resulted in the death of 23-year-old staff research assistant Sheharbano “Sheri” Sangji. Harran, who was facing more than 4 years of jail time if convicted, instead will perform 800 hours of community service and may find himself back in court in the event that his lab is found to have new safety violations in the next five years.

The Sangji family is not satisfied that the plea punishes Harran enough. My worry is about whether the resolution of this case will have a positive impact on safety in academic labs and research settings.

According to The Chronicle of Higher Education,

Several [independent safety advocates] agreed that universities’ research laboratories still remain more dangerous than their corporate counterparts. Yet they also expressed confidence that the impetus for improvement brought by the first filing ever of criminal charges over a fatal university lab accident has not been diluted by the plea bargain. …

[T]he action by California prosecutors “has gotten the attention of virtually every research chemist out there,” even in states that may seem more reluctant to pursue such cases, [Neal R. Langerman, a former associate professor of chemistry at Utah State University who now heads Advanced Chemical Safety, a consulting firm] said. “This is precedent-setting, and now that the precedent is set, you really do not want to test the water, because the water is already boiling.”

As you might expect, the official statement from UCLA plays up the improvements in lab safety put into place in the wake of the accident and points to the creation of the UC Center for Laboratory Safety, which has been holding workshops and surveying lab workers on safety practices and attitudes.

I’m afraid, however, judging from the immediate reaction I’ve seen at my own institution, that we have a long way to go.

In particular, a number of science faculty (who are not chemists) seem to have been getting clear messages in the wake of “that UCLA prosecution” — they didn’t really know the details of the case, nor the names of the people involved — that our university would not be backing them up legally in the event of any safety mishap in the lab or the field. Basically, the rumblings from the higher administrative strata were: No matter how well you’ve prepared yourself, your students, your employees, no matter how many safety measures you’ve put into place, no matter what limitations you’re working with as far as equipment or facilities, if something goes wrong, it’s your ass on the line.

This does not strike me as a productive way to approach safe working conditions as a collective responsibility within an educational institution. I also suspect it’s not a stance that would hold up in court, but since it would probably take another lab tragedy and prosecution to test it there, I’m hopeful that some sense of institutional ethics will well up first and result in a more productive approach.

The most charitable explanation I can come up with is that the higher administrative strata intended to communicate that science faculty have a positive duty to ensure safe working conditions for their students and employees (and themselves). That means that science faculty need to be proactive in assessing their research settings (whether for laboratory or field research) for potential hazards, in educating themselves and those they work with about those hazards, in having workable plans to mitigate the hazards and to respond swiftly and effectively to mishaps. All of that is sensible enough.

However, none of that means that the institution is free of responsibility. Departments, colleges, and university administrators control resources that can make the difference between a pretty safe research environment and a terribly risky one. Institutions, not individual faculty, create and maintain occupational health programs. Institutions can marshal shared resources (including safety training programs and institutional safety officers) that individual faculty cannot.

Moreover, institutions set the institutional tone — the more or less official sense of what is prioritized, of what is valued. If the strongest message about safety that reaches faculty boils down to legal liability and who will ultimately be legally liable, I’m pretty sure the institution still has a great deal of work to do in establishing a real culture of safety.

_____

Related posts:

Suit against UCLA in fatal lab fire raises question of who is responsible for safety.

Crime, punishment, and the way forward: in the wake of Sheri Sangji’s death, what should happen to Patrick Harran?

Facing felony charges in lab death of Sheri Sangji, UCLA settles, Harran stretches credulity.

Why does lab safety look different to chemists in academia and chemists in industry?

Community responsibility for a safety culture in academic chemistry.

Are safe working conditions too expensive for knowledge-builders?

Do permanent records of scientific misconduct findings interfere with rehabilitation?

We’ve been discussing how the scientific community deals with cheaters in its midst and the question of whether scientists view rehabilitation as a live option. Connected to the question of rehabilitation is the question of whether an official finding of scientific misconduct leaves a permanent mark that makes it practically impossible for someone to function within the scientific community — not because the person who has committed the misconduct is unable to straighten up and fly right, but because others in the scientific community will no longer accept that person in the scientific knowledge-building endeavor, no matter what their behavior.

A version of this worry is at the center of an editorial by Richard Gallagher that appeared in The Scientist five years ago. In it, Gallagher argued that the Office of Research Integrity should not include findings of scientific misconduct in publications that are archived online, and that traces of such findings that persist after the period of debarment from federal funding has ended are unjust. Gallagher wrote:

For the sake of fairness, these sentences must be implemented precisely as intended. This means that at the end of the exclusion period, researchers should be able to participate again as full members of the scientific community. But they can’t.

Misconduct findings against a researcher appear on the Web–indeed, in multiple places on the Web. And the omnipresence of the Web search means that reprimands are being dragged up again and again and again. However minor the misdemeanor, the researcher’s reputation is permanently tarnished, and his or her career is invariably ruined, just as surely as if the punishment were a lifetime ban.

Both the NIH Guide and The Federal Register publish findings of scientific misconduct, and are archived online. As long as this continues, the problem will persist. The director of the division of investigative oversight at ORI has stated his regret at the “collateral damage” caused by the policy (see page 32). But this is not collateral damage; it is a serious miscarriage of justice against researchers and a stain on the integrity of the system, and therefore of science.

It reminds me of the situation of former prisoners in the US, who even after “serving their time” still have trouble finding work because of their criminal records. But is it fair to compare felons to scientists who have, for instance, fudged their affiliations on a grant application when they were young and naïve?

It’s worth noting that the ORI website currently seems to present information only for misconduct cases where scientists haven’t yet “served out their sentences”, featuring the statement:

This page contains cases in which administrative actions were imposed due to findings of research misconduct. The list only includes those who CURRENTLY have an imposed administrative actions against them. It does NOT include the names of individuals whose administrative actions periods have expired.

In the interaction between scientists who have been found to have committed scientific misconduct and the larger scientific community, we encounter the tension between the rights of the individual scientist and the rights of the scientific community. This extends to the question of the magnitude of a particular instance of misconduct, or of whether it was premeditated or merely sloppy, or of whether the offender was young and naïve or old enough to know better. An oversight or mistake in judgment that may strike the individual scientist making it as no big deal (at least at the time) can have significant consequences for the scientific community in terms of time wasted (e.g., trying to reproduce reported results) and damaged trust.

The damaged trust is not a minor thing. Given that the scientific knowledge-building enterprise relies on conditions where scientists can trust their fellow scientists to make honest reports (whether in the literature, in grant proposals, or in less formal scientific communications), discovering a fellow scientist whose relationship with the truth is more casual is a very big deal. Flagging liars is like tagging a faulty measuring device. It doesn’t mean you throw them out, but you do need to go to some lengths to reestablish their reliability.

To the extent that an individual scientist is committed to the shared project of building a reliable body of scientific knowledge, he or she ought to understand that after a breach, one is not entitled to a full restoration of the community’s trust. Rather, that trust must be earned back. One step in earning back trust is to acknowledge the harm the community suffered (or at least risked) from the dishonesty. Admitting that you blew it, that you are sorry, and that others have a right to be upset about it, are all necessary preliminaries to making a credible claim that you won’t make the same mistake again.

On the other hand, protesting that your screw-ups really weren’t important, or that your enemies have blown them out of proportion, might be an indication that you still don’t really get why your scientific colleagues are unhappy about your behavior. In such a circumstance, although you may have regained your eligibility to receive federal grant money, you may still have some work left to do to demonstrate that you are a trustworthy member of the scientific community.

It’s true that scientific training seems to go on forever, but that shouldn’t mean that early career scientists are infantilized. They are, by and large, legal adults, and they ought to be striving to make decisions as adults — which means considering the potential effects of their actions and accepting the consequences of them. I’m disinclined, therefore, to view ORI judgments of scientific misconduct as akin to juvenile criminal records that are truly expunged to reflect the transient nature of the youthful offender’s transgressions. Scientists ought to have better judgment than fifteen-year-olds. Occasionally they don’t. If they want to stay a part of the scientific community that their bad choices may have harmed, they have to be prepared to make real restitution. This may include having to meet a higher burden of proof to make up for having misled one’s fellow scientists at some earlier point in time. It may be a pain, but it’s not impossible.

Indeed, I’m inclined to think that early career lapses in judgment ought not to be buried precisely because public knowledge of the problem gives the scientific community some responsibility for providing guidance to the promising young scientist who messed up. Acknowledging your mistakes sets up a context in which it may be easier to ask other folks for help in avoiding similar mistakes in the future. (Ideally, scientists would be able to ask each other for such advice as a matter of course, but there are plenty of instances where it feels like asking a question would be exposing a weakness — something that can feel very dangerous, especially to an early career scientist.)

Besides, there’s a practical difficulty in burying the pixel trail of a scientist’s misconduct. It’s almost always the case that other members of the scientific community are involved in alleging, detecting, investigating, or adjudicating the misconduct. They know something is up. Keeping the official findings secret leaves the other concerned members of the scientific community hanging, unsure whether the ORI has done anything about the allegations (which can breed suspicion that scientists are getting away with misconduct left and right). It can also make the rumor mill seem preferable to a total lack of information on scientific colleagues prone to dishonesty toward other scientists.

Given the amount of information available online, it’s unlikely that scientists who have been caught in misconduct can fly completely under the radar. But even before the internet, there was no guarantee such a secret would stay secret. Searchable online information imposes a certain level of transparency, and when that transparency follows actions that deceived one’s scientific community, it might be the start of effective remediation. Admitting that you have broken trust may be the first real step in earning that trust back.

_____________
This post is an updated version of an ancestor post on my other blog.

A suggestion for those arguing about the causal explanation for fewer women in science and engineering fields.

People are complex, as are the social structures they build (including but not limited to educational institutions, workplaces, and professional communities).

Accordingly, the appropriate causal stories to account for the behaviors and choices of humans, individually and collectively, are bound to be complex. It will hardly ever be the case that there is a single cause doing all the work.

However, there are times when people seem to lose the thread when they spin their causal stories. For example:

The point of focusing on innate psychological differences is not to draw attention away from anti-female discrimination. The research clearly shows that such discrimination exists—among other things, women seem to be paid less for equal work. Nor does it imply that the sexes have nothing in common. Quite frankly, the opposite is true. Nor does it imply that women—or men—are blameworthy for their attributes.

Rather, the point is that anti-female discrimination isn’t the only cause of the gender gap. As we learn more about sex differences, we’ve built better theories to explain the non-identical distribution of the sexes among the sciences. Science is always tentative, but the latest research suggests that discrimination has a weaker impact than people might think, and that innate sex differences explain quite a lot.

What I’m seeing here is a claim that amounts to “there would still be a gender gap in the sciences even if we eliminated anti-female discrimination” — in other words, that the causal powers of innate sex differences would be enough to create a gender gap.

To this claim, I would like to suggest:

1. that there is absolutely no reason not to work to eliminate anti-female discrimination; whether or not there are other causes that are harder to change, such discrimination seems like something we can change, and it has negative effects on those subject to it;

2. that it is an empirical question whether, in the absence of anti-female discrimination, there would still be a gender gap in the sciences; given the complexity of humans and their social structures, controlled studies in psychology are models of real life that abstract away lots of details*, and when the rubber hits the road in the real phenomena we are modeling, things may play out differently.

Let’s settle the question of how much anti-female discrimination matters by getting rid of it.

_____
* This is not a special problem for psychology. All controlled experiments are abstracting away details. That’s what controlling variables is all about.