Injustice, misbehavior, and the scientist’s social identity (part 3).

Let’s wrap up our discussion of the Martinson et al. paper, “Scientists’ Perceptions of Organizational Justice and Self-Reported Misbehaviors”. [1] You’ll recall that the research in this paper examined three hypotheses about academic scientists:

Hypothesis 1: The greater the perceived distributive injustice in science, the greater the likelihood of a scientist engaging in misbehavior. (51)

Hypothesis 2: The greater the perceived procedural injustice in science, the greater the likelihood of a scientist engaging in misbehavior. (52)

Hypothesis 3: Perceptions of injustice are more strongly associated with misbehavior among those for whom the injustice represents a more serious threat to social identity (e.g., early-career scientists, female scientists in traditionally male fields). (52)

We’ve already looked at the methodological details of the study. We’ve also examined the findings Martinson et al. reported. (In short, they found that early-career and mid-career scientists reported more procedural injustice than distributive injustice; that early-career scientists who perceived high levels of distributive injustice were somewhat more likely to report engaging in misbehavior than those who did not; that misbehavior was most likely among mid-career scientists with high intrinsic drive who perceived a high level of procedural injustice; and that female scientists were less likely to engage in misbehavior than male scientists.)

In this post, we’re going to consider what these findings mean, and what larger conclusions can be drawn from them.


To the extent that the research demonstrated correlations between perceived injustice and misbehavior, one upshot is that making procedures and the distribution of resources more just (and ensuring that scientists’ perceptions of procedural and distributive justice match reality) would be a way to reduce the likelihood of scientists engaging in misbehaviors.

An alternate strategy might be to focus simply on shaping scientists’ perceptions that their professional sphere is just, in terms of both the distribution of resources and the procedures by which one’s results and job performance are evaluated. However, unless high levels of distributive and procedural justice were actually in place, this changing of perceptions would amount to misleading the scientists. To me, this looks like a strategy bound to backfire (and to be clear, it is not one that Martinson et al. advocate in their paper).

Of course, some would advocate reducing distributive and procedural injustice in the world of academic science because it’s the right thing to do. Part of the point of this research, I think, is to provide consequentialist reasons for doing the right thing. Basically, they’re pointing out the ways leaving injustice in place can result in behaviors that harm the scientific community and the body of knowledge it works together to produce.

Identifying these bad consequences may be necessary simply because making academic science more just will require serious effort on several fronts:

Because our measure of procedural injustice tapped primarily aspects of the peer review systems in science, we have also demonstrated that violations of organizational justice are perceived beyond just the local institutional setting to include aspects of the peer-review system, and that such perceptions may affect behaviors with implications well beyond the local setting as well. (62)

Let’s unpack this a little.

The academic scientist does a specified job that includes research, often teaching, and sometimes “service” activities (e.g., serving on committees in her department or university). But her employer does not evaluate her job performance in isolation, based solely on internal criteria. Rather, the department and the university make their judgments based on things like how much grant money the scientist brings in and how many publications the scientist has accumulated (and in which journals, and how heavily they are cited). In other words, the local institutional evaluation piggybacks on evaluations by those who review grant proposals and those who review manuscripts submitted to journals. Procedural injustice in any of these evaluations can keep the scientist from getting a fully fair hearing.

One way to increase the perception of procedural justice, then, would be to farm out less of the evaluative labor to others:

At the institutional level, perceived injustice in distributions of responsibilities or unfairness in the decision processes that generate these distributions may contribute to an environment in which scientific misbehavior increases. In the distribution of institutional rewards, greater attention to the quality of research would foster better scientific conduct than rewards that appear to be based on the number and size of research grants, the “glamour” of one’s topics and findings, or sheer number of publications. But in judging a scientist’s research, better indices of quality are needed to counteract the increasing tendency to judge the quality of a researcher’s curriculum vitae based on the impact factor of the journals in which s/he publishes. (62)

It’s obviously quicker to count publications, compute impact factors, and track the number of grant dollars secured, and to use these as proxies for the quality of the manuscripts, grant proposals, and research itself. But hard numbers do not guarantee objective measures of quality. Moreover, even if the external evaluations have all been perfectly fair, that doesn’t excuse an institution from stepping up and making its own best evaluation of the material it has available.

Evaluating scientists on things within their control (like the quality of their grant applications) seems more fair than basing job performance evaluations on things beyond their control (like where the funding line is any given year).

We know, of course, that change is hard. Completely restructuring the criteria for job performance evaluation is a big change likely to take time and cause pain. However, Martinson et al. point to ways to increase academic scientists’ perceptions of justice that don’t require such radical restructuring:

When the means or results of decision processes are unknown or misunderstood, they are more likely to be subject to speculation, rumor, and individuals’ own value calculations. It is important, therefore, for research institutions, journals, and federal agencies to ensure that their decisions and decision processes related to rewards and responsibilities are as transparent, widely disseminated to researchers, and fair as possible. Admittedly, this might well require reassessment of some long-held precepts of peer-review and other oversight systems in science (e.g., blinded review as an unmitigated good; primacy of local IRB review in the context of multi-site studies) and a willingness to restructure these systems to more fittingly reflect the realities of the current scientific work environment. (63)

It’s true that this kind of transparency would require reassessing some long-held precepts (e.g., whether anonymous peer review actually yields more objective evaluations, or just deeper doubts about fairness). But particular systems like peer review didn’t wash up on a beach in 1563 in their present form. Surely they are amenable to some tweaking.

The authors make some other comments about justice and injustice within the realm of academic science that don’t seem to follow directly from the results of this research. However, they also seem pretty sensible. For example:

A system dependent on the expertise and labor of cadres of postdoctoral fellows and graduate students, for whom there are simply not enough positions in their scientific research area, creates perceptions of organizational injustice, if not injustice itself. (63)

This sounds like familiar ground, doesn’t it? But if basic unfairness (or something close to it) does not on its own move you to address it, you can now think of it as a state (or at least a perception) that might drive scientists, especially early-career scientists, toward behaviors that could undermine the project of science as a whole. In other words, what hurts junior scientists could, downstream, cause pain for senior scientists, too.

Even if the current state of affairs is not quite unjust (wherever you draw that line), you don’t want to wait until the perception of injustice is strong enough and widespread enough to create a problem. Prevention is easier than damage control.

There are significant costs to science and society to train all these early-career scientists, only to have them engage in compromising behavior or to abandon research altogether after substantial investments have been made in their training. Our work suggests a need for analyses of the broader environment of U.S. science, and a need for attention to how both competitive and anti-competitive elements of that environment may motivate misbehavior, damaging the integrity of scientists’ work and, by extension, the scientific record. (63)

Here, remember that the study found more mid-career than early-career scientists reporting that they engaged in misbehaviors. One frightening conclusion we (or an early-career scientist) might draw from this is that a certain amount of misbehavior is required to make it to the mid-career stage.

Even if this conclusion is wrong (that is, even if behaving well rather than badly is no impediment to success as an academic scientist), it is still a problem if academic scientists perceive that engaging in misbehavior is essential to surviving the scientific rat race.

How should the community of academic scientists address the tempting conclusion that those who misbehave succeed while those who do not wash out? One option is to make the norms of science and strategies for responsible conduct of research an explicit part of scientific training:

The scientific community must be prepared to address and correct instances or patterns of organizational injustice through constructive, not destructive means. Early introductions to expectations, work norms and rewards associated with academic careers, as well as a solid understanding of peer-review processes, will help scientists, especially those early in their careers, to recognize and deal openly with injustices. Training in constructive confrontation, conflict management, and grievance processes are valuable in dealing not only with injustice but also misbehavior in science. (63)

It seems like it would be particularly useful for those training graduate students and postdocs to address questions like: What are your options when things are unfair? How can you move forward successfully without breaking the rules yourself?

Because misbehavior is not such an effective survival strategy if you get caught at it:

As disheartening as it may be to work under conditions of unfairness, it is a potentially career-ending event to be found guilty of violating professional rules, regulations or laws. Misbehavior is itself unjust to those who conduct their research in accordance with appropriate standards and norms. (64)

Policing misbehavior is important for the well-being of the academic scientific community and of the body of knowledge that community produces. However, ensuring good conditions within that community is also important. (Hurting the engine that produces the knowledge — the community — can’t help but threaten the amount and quality of knowledge produced.)

To the extent that distributive and procedural injustice hurt the ability of scientists to work effectively with each other and build good knowledge — and to the extent that they might incentivize misbehaving over playing by the rules — the scientific community has an interest in establishing more just conditions.

The trick, of course, is getting individual scientists to think of themselves as part of a community (rather than individuals in competition with each other) long enough to establish such conditions.
______
[1] Brian C. Martinson, Melissa S. Anderson, A. Lauren Crain, and Raymond De Vries (2006). “Scientists’ Perceptions of Organizational Justice and Self-Reported Misbehaviors.” Journal of Empirical Research on Human Research Ethics 1(1), 51-66.


2 Comments

  1. “Because misbehavior is not such an effective survival strategy if you get caught at it”
    Eh — from what I can tell, you just blame it on a postdoc or grad student, and nothing happens to you; if there’s any fallout at all, which is seldom, it lands on the underling. E.g.

  2. These are some really great posts Dr FR, and thank you for taking the time to go through this so thoroughly.
    I haven’t yet had time to completely digest it all, but as soon as I get the chance, I hope to weigh in more substantively.
    Now though, I have to get back to Photoshopping this figure.
