What scientists ought to do for non-scientists, and why: Obligations of scientists (part 5)

If you’re a scientist, are there certain things you’re obligated to do for society (not just for your employer)? If so, where does this obligation come from?

This is part of the discussion we started back in September about special duties or obligations scientists might have to the non-scientists with whom they share a world. If you’re just coming to the discussion now, you might want to check out the post where we set out some groundwork for the discussion, plus the three posts on scientists’ negative duties (i.e., the things scientists have an obligation not to do): our consideration of powers that scientists have and should not misuse, our discussion of scientific misconduct (the high crimes against science that scientists should never commit), and our examination of how plagiarism is not only unfair but also hazardous to knowledge-building.

In this post, finally, we lay out some of the positive duties that scientists might have.

In her book Ethics of Scientific Research, Kristin Shrader-Frechette gives a pretty forceful articulation of a set of positive duties for scientists. She asserts that scientists have a duty to do research, and a duty to use research findings in ways that serve the public good. Recall that these positive duties are in addition to scientists’ negative duty to ensure that the knowledge and technologies created by the research do not harm anyone.

Where do scientists’ special duties come from? Shrader-Frechette identifies a number of sources. For one thing, she says, there are obligations that arise from holding a monopoly on certain kinds of knowledge and services. Scientists are the ones in society who know how to work the electron microscopes and atom-smashers. They’re the ones who have the equipment and skills to build scientific knowledge. Such knowledge is not the kind of thing your average non-scientist could build for himself.

Scientists also have obligations that arise from the fact that they have a good chance of success (at least, better than anyone else) when it comes to educating the public about scientific matters or influencing public policy. The scientists who track the evidence that human activity leads to climate change, for example, are the ones who might be able to explain that evidence to the public and argue persuasively for measures that are predicted to slow climate change.

As well, scientists have duties that arise from the needs of the public. If the public’s pressing needs can only be met with the knowledge and technologies produced by scientific research – and if non-scientists cannot produce such knowledge and technologies themselves – then if scientists do not work to meet these needs, who can?

As we’ve noted before, there is, in all of this, that Spider-Man superhero ethos: with great power comes great responsibility. When scientists realize how much power their knowledge and skills give them relative to the non-scientists in society, they begin to see that their duties are greater than they might have thought.

Let’s turn to what I take to be Shrader-Frechette’s more controversial claim: that scientists have a positive duty to conduct research. Where does this obligation come from?

For one thing, she argues, knowledge itself is valuable, especially in democratic societies where it could presumably help us make better choices than we’d be able to make with less knowledge. Thus, those who can produce knowledge should produce it.

For another thing, Shrader-Frechette points out, society funds research projects (through various granting agencies and direct funding from governmental entities). Researchers who accept such research funding are not free to abstain from research. They can’t take the grants and put an addition on the house. Rather, they are obligated to perform the contracted research. This argument is pretty uncontroversial, I think, since asking for money to do the research that will lead to more scientific knowledge and then failing to use that money to build more scientific knowledge is deceptive.

But here’s the argument that I think will meet with more resistance, at least from scientists: In the U.S., in addition to funding particular pieces of scientific research, society pays the bill for training scientists. This is not just true for scientists trained at public colleges and universities. Even private universities get a huge chunk of their money to fund research projects, research infrastructure, and the scientific training they give their students from public sources, including but not limited to federal funding agencies like the National Science Foundation and the National Institutes of Health.

The American people are not putting up this funding out of the goodness of their hearts. Rather, the public invests in the training of scientists because it expects a return on this investment in the form of the vital knowledge those trained scientists go on to produce and share with the public. Since the public pays to train people who can build scientific knowledge, the people who receive this training have a duty to go forth and build scientific knowledge to benefit the public.

Finally, Shrader-Frechette says, scientists have a duty to do research because if they don’t do research regularly, they won’t remain knowledgeable in their field. Not only will they not be up on the most recent discoveries or what they mean, but they will start to lose the crucial experimental and analytic skills they developed when they were being trained as scientists. For the philosophy fans in the audience, this point in Shrader-Frechette’s argument is reminiscent of Immanuel Kant’s example of how the man who prefers not to cultivate his talents is falling down on his duties. If everyone in society chose not to cultivate her talents, each of us would need to be completely self-sufficient (since we could not receive aid from others exercising their talents on our behalf) – and even that would not be enough, since we would not be able to rely on our own talents, having decided not to cultivate them.

On the basis of Shrader-Frechette’s argument, it sounds like every member of society who has had the advantage of scientific training (paid for by your tax dollars and mine) should be working away in the scientific knowledge salt-mine, at least until science has built all the knowledge society needs it to build.

And here’s where I put my own neck on the line: I earned a Ph.D. in chemistry (conferred in January 1994, almost exactly 20 years ago). Like other students in U.S. Ph.D. programs in chemistry, I did not pay for that scientific training. Rather, as Shrader-Frechette points out, my scientific training was heavily subsidized by the American taxpayer. I have not built a bit of new chemical knowledge since the middle of 1994 (when I wrapped up one more project after completing my Ph.D.).

Have I fallen down on my positive duties as a trained scientist? Would it be fair for American taxpayers to try to recover the funds they invested in my scientific training?

We’ll take up these questions (among others) in the next installment of this series. Stay tuned!

_____
Shrader-Frechette, K. S. (1994). Ethics of scientific research. Rowman & Littlefield.
______
Posts in this series:

Questions for the non-scientists in the audience.

Questions for the scientists in the audience.

What do we owe you, and who’s “we” anyway? Obligations of scientists (part 1)

Scientists’ powers and ways they shouldn’t use them: Obligations of scientists (part 2)

Don’t be evil: Obligations of scientists (part 3)

How plagiarism hurts knowledge-building: Obligations of scientists (part 4)

What scientists ought to do for non-scientists, and why: Obligations of scientists (part 5)

What do I owe society for my scientific training? Obligations of scientists (part 6)

Are you saying I can’t go home until we cure cancer? Obligations of scientists (part 7)

How plagiarism hurts knowledge-building: Obligations of scientists (part 4)

In the last post, we discussed why fabrication and falsification are harmful to scientific knowledge-building. The short version is that if you’re trying to build a body of reliable knowledge about the world, making stuff up (rather than, say, making careful observations of that world and reporting those observations accurately) tends not to get you closer to that goal.

Along with fabrication and falsification, plagiarism is widely recognized as a high crime against the project of science, but the explanations for why it’s harmful generally make it look like a different kind of crime than fabrication and falsification. For example, Donald E. Buzzelli (1999) writes:

[P]lagiarism is an instance of robbing a scientific worker of the credit for his or her work, not a matter of corrupting the record. (p. 278)

Kenneth D. Pimple (2002) writes:

One ideal of science, identified by Robert Merton as “disinterestedness,” holds that what matters is the finding, not who makes the finding. Under this norm, scientists do not judge each other’s work by reference to the race, religion, gender, prestige, or any other incidental characteristic of the researcher; the work is judged by the work, not the worker. No harm would be done to the Theory of Relativity if we discovered Einstein had plagiarized it…

[P]lagiarism … is an offense against the community of scientists, rather than against science itself. Who makes a particular finding will not matter to science in one hundred years, but today it matters deeply to the community of scientists. Plagiarism is a way of stealing credit, of gaining credit where credit is not due, and credit, typically in the form of authorship, is the coin of the realm in science. An offense against scientists qua scientists is an offense against science, and in its way plagiarism is as deep an offense against scientists as falsification and fabrication are offenses against science. (p. 196)

Pimple is claiming that plagiarism is not an offense that undermines the knowledge-building project of science per se. Rather, the crime is in depriving other scientists of the reward they are due for participating in this knowledge-building project. In other words, Pimple says that plagiarism is problematic not because it is dishonest, but rather because it is unfair.

While I think Pimple is right to identify an additional component of responsible conduct of science besides honesty, namely, a certain kind of fairness to one’s fellow scientists, I also think this analysis of plagiarism misses an important way in which misrepresenting the source of words, ideas, methods, or results can undermine the knowledge-building project of science.

On the surface, plagiarism, while potentially nasty to the person whose report is being stolen, might seem not to undermine the scientific community’s evaluation of the phenomena. We are still, after all, bringing together and comparing a number of different observation reports to determine the stable features of our experience of the phenomenon. But this comparison often involves a dialogue as well. As part of the knowledge-building project, from the earliest planning of their experiments to well after results are published, scientists are engaged in asking and answering questions about the details of the experience and of the conditions under which the phenomenon was observed.

Misrepresenting someone else’s honest observation report as one’s own strips the report of accurate information for such a dialogue. It’s hard to answer questions about the little, seemingly insignificant details of an experiment you didn’t actually do, or to refine a description of an experience someone else had. Moreover, such a misrepresentation further undermines the process of building more objective knowledge by failing to contribute the actual insight of the scientist who appears to be contributing his own view but is actually contributing someone else’s. And while it may appear that a significant number of scientists are marshaling their resources to understand a particular phenomenon, if some of those scientists are plagiarists, there are fewer scientists actually grappling with the problem than it would appear.

In such circumstances, we know less than we think we do.

Given the intersubjective route to objective knowledge, failing to really weigh in on the dialogue may end up leaving some of the subjective biases of others in place in the collective “knowledge” that results.

Objective knowledge is produced when the scientific community’s members work with each other to screen out subjective biases. This means the sort of honesty required for good science goes beyond the accurate reporting of what has been observed and under what conditions. Because each individual report is shaped by the individual’s perspective, objective scientific knowledge also depends on honesty about the individual agency actually involved in making the observations. Thus, plagiarism, which often strikes scientists as less of a threat to scientific knowledge (and more of an instance of “being a jerk”), may pose just as much of a threat to the project of producing objective scientific knowledge as outright fabrication.

What I’m arguing here is that plagiarism is a species of dishonesty that can undermine the knowledge-building project of science in a direct way. Even if what has been lifted by the plagiarist is “accurate” from the point of view of the person who actually collected or analyzed the data or drew conclusions from it, separating this contribution from its true author means it doesn’t function the same way in the ongoing scientific dialogue.

In the next post, we’ll continue our discussion of the duties of scientists by looking at what the positive duties of scientists might be, and by examining the sources of these duties.
_____


Buzzelli, D. E. (1999). Serious deviation from accepted practices. Science and Engineering Ethics, 5(2), 275-282.

Pimple, K. D. (2002). Six domains of research ethics. Science and Engineering Ethics, 8(2), 191-205.
______

Don’t be evil: Obligations of scientists (part 3)

In the last installment of our ongoing discussion of the obligations of scientists, I said the next post in the series would take up scientists’ positive duties (i.e., duties to actually do particular kinds of things). I’ve decided to amend that plan to say just a bit more about scientists’ negative duties (i.e., duties to refrain from doing particular kinds of things).

Here, I want to examine a certain minimalist view of scientists’ duties (or of scientists’ negative duties) that is roughly analogous to the old Google motto, “Don’t be evil.” For scientists, the motto would be “Don’t commit scientific misconduct.” The premise is that if X isn’t scientific misconduct, then X is acceptable conduct — at least, acceptable conduct within the context of doing science.

The next question, if you’re trying to avoid committing scientific misconduct, is how scientific misconduct is defined. For scientists in the U.S., a good place to look is to the federal agencies that provide funding for scientific research and training.

Here’s the Office of Research Integrity’s definition of misconduct:

Research misconduct means fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results. …

Research misconduct does not include honest error or differences of opinion.

Here’s the National Science Foundation’s definition of misconduct:

Research misconduct means fabrication, falsification, or plagiarism in proposing or performing research funded by NSF, reviewing research proposals submitted to NSF, or in reporting research results funded by NSF. …

Research misconduct does not include honest error or differences of opinion.

These definitions are quite similar, although NSF restricts its definition to actions that are part of a scientist’s interaction with NSF — giving the impression that the same actions committed in a scientist’s interaction with NIH would not be scientific misconduct. I’m fairly certain that NSF officials view all scientific plagiarism as bad. However, when the plagiarism is committed in connection with NIH funding, NSF leaves it to the ORI to pursue sanctions. This is a matter of jurisdiction for enforcement.

It’s worth thinking about why federal funders define (and forbid) scientific misconduct in the first place rather than leaving it to scientists as a professional community to police. One stated goal is to ensure that the money they are distributing to support scientific research and training is not being misused — and to have a mechanism with which they can cut off scientists who have proven themselves to be bad actors from further funding. Another stated goal is to protect the quality of the scientific record — that is, to ensure that the published results of the funded research reflect honest reporting of good scientific work rather than lies.

The upshot here is that public money for science comes with strings attached, and that one of those strings is that the money be used to conduct actual science.

Ensuring the proper use of the funding and protecting the integrity of the scientific record needn’t be the only goals of federal funding agencies in the U.S. in their interactions with scientists or in the way they frame their definitions of scientific misconduct, but at present these are the goals in the foreground in discussions of why federally funded scientists should avoid scientific misconduct.

Let’s consider the three high crimes identified in these definitions of scientific misconduct.

Fabrication is making up data or results rather than actually collecting them from observation or experimentation. Obviously, fabrication undermines the project of building a reliable body of knowledge about the world – faked data can’t be counted on to give us an accurate picture of what the world is really like.

A close cousin of fabrication is falsification. Here, rather than making up data out of whole cloth, falsification involves “adjusting” real data – changing the values, adding some data points, omitting other data points. As with fabrication, falsification is lying about your empirical data, representing the falsified data as an honest report of what you observed when it isn’t.

The third high crime is plagiarism, misrepresenting the words or ideas (or, for that matter, data or computer code, for example) of others as your own. Like fabrication and falsification, plagiarism is a variety of dishonesty.

Observation and experimentation are central in establishing the relevant facts about the phenomena scientists are trying to understand. Establishing such relevant facts requires truthfulness about what is observed or measured and under what conditions. Deception, therefore, undermines this aim of science. So at a minimum, scientists must embrace the norm of truthfulness or abandon the goal of building accurate pictures of reality. This doesn’t mean that honest scientists never make mistakes in setting up their experiments, making their measurements, performing data analysis, or reporting what they found to other scientists. However, when honest scientists discover these mistakes, they do what they can to correct them, so that they don’t mislead their fellow scientists even accidentally.

The importance of reliable empirical data, whether as the source of or a test of one’s theory, is why fabrication and falsification of data are rightly regarded as cardinal sins against science. Made-up data are no kind of reliable indicator of what the world is like or whether a particular theory is a good one. Similarly, “cooking” data sets to better support particular hypotheses amounts to ignoring the reality of what has actually been measured. The scientific rules of engagement with phenomena hold the scientist to account for what has actually been observed. While the scientist is always permitted to get additional data about the object of study, one cannot willfully ignore facts one finds puzzling or inconvenient. Even if these facts are not explained, they must be acknowledged.

Those who commit falsification and fabrication undermine the goal of science by knowingly introducing unreliable data into, or holding back relevant data from, the formulation and testing of theories. They sin by not holding themselves accountable to reality as observed in scientific experiments. When they falsify or fabricate in reports of research, they undermine the integrity of the scientific record. When they do it in grant proposals, they are attempting to secure funding under false pretenses.

Plagiarism, the third of the cardinal sins against responsible science, is dishonesty of another sort, namely, dishonesty about the source of words, ideas, methods, or results. A number of people who think hard about research ethics and scientific misconduct view plagiarism as importantly different in its effects from fabrication and falsification. For example, Donald E. Buzzelli (1999) writes:

[P]lagiarism is an instance of robbing a scientific worker of the credit for his or her work, not a matter of corrupting the record. (p. 278)

Kenneth D. Pimple (2002) writes:

One ideal of science, identified by Robert Merton as “disinterestedness,” holds that what matters is the finding, not who makes the finding. Under this norm, scientists do not judge each other’s work by reference to the race, religion, gender, prestige, or any other incidental characteristic of the researcher; the work is judged by the work, not the worker. No harm would be done to the Theory of Relativity if we discovered Einstein had plagiarized it…

[P]lagiarism … is an offense against the community of scientists, rather than against science itself. Who makes a particular finding will not matter to science in one hundred years, but today it matters deeply to the community of scientists. Plagiarism is a way of stealing credit, of gaining credit where credit is not due, and credit, typically in the form of authorship, is the coin of the realm in science. An offense against scientists qua scientists is an offense against science, and in its way plagiarism is as deep an offense against scientists as falsification and fabrication are offenses against science. (p. 196)

In fact, I think we can make a good argument that plagiarism does threaten the integrity of the scientific record (although I’ll save that argument for a separate post). However, I agree with both Buzzelli and Pimple that plagiarism is also a problem because it embodies a particular kind of unfairness within scientific practice. That federal funders include plagiarism by name in their definitions of scientific misconduct suggests that their goals extend further than merely protecting the integrity of the scientific record.

Fabrication, falsification, and plagiarism are clearly instances of scientific misconduct, but the United States Public Health Service (whose umbrella includes NIH) and NSF used to define scientific misconduct more broadly, as fabrication, falsification, plagiarism, and other serious deviations from accepted research practices. The “other serious deviations” clause was controversial, with a panel of the National Academy of Sciences (among others) arguing that this language was ambiguous enough that it shouldn’t be part of an official misconduct definition. Maybe, the panel worried, “serious deviations from accepted research practices” might be interpreted to include cutting-edge methodological innovations, meaning that scientific innovation would count as misconduct.

In his 1993 article, “The Definition of Misconduct in Science: A View from NSF,” Buzzelli claimed that there was no evidence that the broader definitions of misconduct had been used to lodge this kind of misconduct complaint. Since then, however, there have been instances in which the ambiguity of an “other serious deviations” clause could be argued to have been exploited to go after a scientist for political reasons.

If the “other serious deviations” clause isn’t meant to keep scientists from innovating, what kinds of misconduct is it supposed to cover? These include things like sabotaging other scientists’ experiments or equipment, falsifying colleagues’ data, violating agreements about sharing important research materials like cultures and reagents, making misrepresentations in grant proposals, and violating the confidentiality of the peer review process. None of these activities is necessarily covered by fabrication, falsification, or plagiarism, but each of these activities can be seriously harmful to scientific knowledge-building.

Buzzelli (1993) discusses a particular deviation from accepted research practices that the NSF judged as misconduct, one where a principal investigator directing an undergraduate primatology research experience funded by an NSF grant sexually harassed student researchers and graduate assistants. Buzzelli writes:

In carrying out this project, the senior researcher was accused of a range of coercive sexual offenses against various female undergraduate students and research assistants, up to and including rape. … He rationed out access to the research data and the computer on which they were stored and analyzed, as well as his own assistance, so they were only available to students who accepted his advances. He was also accused of threatening to blackball some of the graduate students in the professional community and to damage their careers if they reported his activities. (p. 585)

Even opponents of the “other serious deviations” clause would be unlikely to argue that this PI was not behaving very badly. However, they did argue that this PI’s misconduct was not scientific misconduct — that it should be handled by criminal or civil authorities rather than funding agencies, and that it was not conduct that did harm to science per se.

Buzzelli (who, I should mention, was writing as a senior scientist in the Office of the Inspector General in the National Science Foundation) disagreed with this assessment. He argued that NSF had to get involved in this sexual harassment case in order to protect the integrity of its research funds. The PI in question, operating with NSF funds designated to provide an undergraduate training experience, used his power as a research director and mentor to make sexual demands of his undergraduate trainees. The only way for the undergraduate trainees to receive the training, mentoring, and even access to their own data that they were meant to receive in this research experience at a remote field site was for them to submit to the PI’s demands. In other words, while the PI’s behavior may not have directly compromised the shared body of scientific knowledge, it undermined the other central job of the tribe of science: the training of new scientists. Buzzelli writes:

These demands and assaults, plus the professional blackmail mentioned earlier, were an integral part of the subject’s performance as a research mentor and director and ethically compromised that performance. Hence, they seriously deviated from the practices accepted in the scientific community. (p. 647)

Buzzelli makes the case for an understanding of scientific misconduct as practices that do harm to science. Thus, practices that damage the integrity of training and supervision of associates and students – an important element of the research process – would count as misconduct. Indeed, in his 1999 article, he notes that the first official NIH definition of scientific misconduct (in 1986) used the phrase “serious deviations, such as fabrication, falsification, or plagiarism, from accepted practices in carrying out research or in reporting the results of research.” (p. 276) This language shifted in subsequent statements of the definition of scientific misconduct, for example “fabrication, falsification, plagiarism, and other serious deviations from accepted practices” in the NSF definition that was in place in 1999.

Reordering the words this way might not seem like a big shift, but as Buzzelli points out, it conveys the impression that “other serious deviations” is a fourth item in the list after the clearly enumerated fabrication, falsification, and plagiarism, an ill-defined catch-all meant to cover cases too fuzzy to enumerate in advance. The original NIH wording, in contrast, suggests that the essence of scientific misconduct is that it is an ethical deviation from accepted scientific practice. In this framing of the definition, fabrication, falsification, and plagiarism are offered as three examples of the kind of deviation that counts as scientific misconduct, but there is no claim that these three examples are the only deviations that count as scientific misconduct.

To those still worried by the imprecision of this definition, Buzzelli offers the following:

[T]he ethical import of “serious deviations from accepted practices” has escaped some critics, who have taken it to refer instead to such things as doing creative and novel research, exhibiting personality quirks, or deviating from some artificial ideal of scientific method. They consider the language of the present definition to be excessively broad because it would supposedly allow misconduct findings to be made against scientists for these inappropriate reasons.

However, the real import of “accepted practices” is that it makes the ethical standards held by the scientific community itself the regulatory standard that a federal agency will use in considering a case of misconduct against a scientist. (p. 277)

In other words, Buzzelli is arguing that a definition of scientific misconduct that is centered on practices that the scientific community finds harmful to knowledge-building is better for ensuring the proper use of research funding and protecting the integrity of the scientific record than a definition that restricts scientific misconduct to fabrication, falsification, and plagiarism. Refraining from fabrication, falsification, and plagiarism, then, would not suffice to fulfill the negative duties of a scientist.

We’ll continue our discussion of the duties of scientists with a sidebar discussion on what kind of harm I claim plagiarism does to scientific knowledge-building. From there, we will press on to discuss what the positive duties of scientists might be, as well as the sources of these duties.

_____
Buzzelli, D. E. (1993). The definition of misconduct in science: A view from NSF. Science, 259(5095), 584-648.

Buzzelli, D. E. (1999). Serious deviation from accepted practices. Science and Engineering Ethics, 5(2), 275-282.

Pimple, K. D. (2002). Six domains of research ethics. Science and Engineering Ethics, 8(2), 191-205.
______

Join Virtually Speaking Science for a conversation about sexism in science and science journalism.

Today at 5 P.M. Eastern/2 P.M. Pacific, I’ll be on Virtually Speaking Science with Maryn McKenna and Tom Levenson to discuss sexual harassment, gender bias, and related issues in the world of science, science journalism, and online science communication. Listen live online or, if you have other stuff to do in that bit of spacetime, check out the archived recording later. If you do the Second Life thing, you can join us there at the Exploratorium and text in questions for us.

Tom has a nice post with some background to orient our conversation.

Here, I’m going to give you a few links that give you a taste of what I’ve been thinking about in preparation for this conversation, and then I’ll say a little about what I hope will come out of the conversation.

Geek Feminism Wiki Timeline of incidents from 2013 (includes tech and science blogosphere)

Danielle Lee’s story about the “urban whore” incident and Scientific American’s response to it.

Kate Clancy’s post on how Danielle Lee’s story and the revelations about former Scientific American blog editor Bora Zivkovic are connected to the rape-y Einstein bobble head video incident (with useful discussion of productive strategies for community response)

Andrew David Thaler’s post “On being an ally and being called out on your privilege”

A post I wrote with a link to research on implicit gender bias among science faculty at universities, wherein I point out that the empirical findings have some ethical implications if we’re committed to reducing gender bias

A short film exploring the pipeline problem for women in chemistry, “A Chemical Imbalance” (Transcript)

The most recent of Zuska’s excellent posts on the pipeline problem, “Rethinking the Normality of Attrition”

As far as I’m concerned, the point of our conversation is not to say science, or science journalism, or online science communication, has a bigger problem with sexual harassment or sexism or gender disparities than other professional communities or than the broader societies from which members of these professional communities are drawn. The issue, as far as I can tell, is that these smaller communities reproduce these problems from the broader society — but they don’t need to. Recognizing that the problem exists — that we think we have merit-driven institutions, or that we’re better at being objective than the average Jo(e), but that the evidence indicates we’re not — is a crucial step on the way to fixing it.

I’m hopeful that we’ll be able to talk about more than individual incidents of sexism or harassment in our discussion. The individual incidents matter, but they don’t emerge fully formed from the hearts, minds, mouths, and hands of evil-doers. They are reflections of cultural influences we’re soaking in, of systems we have built.

Among other things, this suggests to me that any real change will require thinking hard about how to change systems rather than keeping our focus at the level of individuals. Recognizing that it will take more than good intentions and individual efforts to overcome things like unconscious bias in human interactions in the professional sphere (including but not limited to hiring decisions) would be a huge step forward.

Such progress will surely be hard, but I don’t think it’s impossible, and I suspect the effort would be worth it.

If you can, do listen (and watch). I’ll be sure to link the archived broadcast once that link is available.