Pub-Style Science: philosophy, hypotheses, and the scientific method.

Last week I was honored to participate in a Pub-Style Science discussion about how (if at all) philosophy can (or should) inform scientific knowledge-building. Some technical glitches notwithstanding, it was a rollicking good conversation — so much so that I have put together a transcript for those who don’t want to review the archived video.

The full transcript is long (approaching 8000 words even excising the non-substantive smack-talk), so I’ll be presenting it here in a few chunks that I’ve split more or less at points where the topic of the discussion shifted.

In places, I’ve cleaned up the grammar a bit, attempting to faithfully capture the gist of what each speaker was saying. As well, because my mom reads this blog, I’ve cleaned up some of the more colorful language. If you prefer the uncensored version, the archived video will give you what you need.

Simultaneously with our video-linked discussion, there was a conversation on Twitter under the #pubscience hashtag. You can see that conversation Storify’d here.

____
(05:40)
Michael Tomasson: The reason I was interested in this is because I have one very naïve view and one esoteric view. My naïve view is that there is something useful about philosophy in terms of the scientific method, and when people are in my lab, I try to beat into their heads (I mean, educate them) that there’s a certain structure to how we do science, and this is a life-raft and a tool that is essential. And I guess that’s the question, whether there is some sort of essential tool kit. We talk about the scientific method. Is that a universal? I started thinking about this talking with my brother-in-law, who’s an amateur philosopher, about different theories of epistemology, and he was shocked that I would think that science had a lock on creating knowledge. But I think we do, through the scientific method.

Janet, take us to the next level. To me, from where I am, the scientific method is the key to the city of knowledge. No?

Janet Stemwedel: Well, that’s certainly a common view, and that’s a view that, in the philosophy of science class I regularly teach, we start with — that there’s something special about whatever it is scientists are doing, something special about the way they gather very careful observations of the world, and hook them together in the right logical way, and draw inferences and find patterns, that’s a reliable way to build knowledge. But at least for most of the 20th Century, what people who looked closely at this assumption in philosophy found was that it had to be more complicated than that. So you end up with folks like Sir Karl Popper pointing out that there is a problem of induction — that deductive logic will get you absolutely guaranteed conclusions if your premises are true, but inductive inference could go wrong; the future might not be like the past we’ve observed so far.

(08:00)
Michael Tomasson: I’ve got to keep the glossary attached. Deductive and inductive?

Janet Stemwedel: Sure. A deductive argument might run something like this:

All men are mortal. Socrates is a man. Therefore, Socrates is mortal.

If it’s true that all men are mortal, and that Socrates is a man, then you are guaranteed that Socrates is also going to be mortal. The form of the argument is enough to say, if the assumptions are true, then the conclusion has to be true, and you can take that to the bank.

Inductive inference is actually most of what we seem to use in drawing inferences from observations and experiments. So, let’s say you observe a whole lot of frogs, and you observe that, after some amount of time, each of the frogs that you’ve had in your possession kicks off. After a certain number of frogs have done this, you might draw the inference that all frogs are mortal. And, it seems like a pretty good inference. But, it’s possible that there are frogs not yet observed that aren’t mortal.

Inductive inference is something we use all the time. But Karl Popper said, guess what, it’s not guaranteed in the same way deductive logic is. And this is why he thought the power of the scientific method is that scientists are actually only ever concerned to find evidence against their hypotheses. The evidence against your hypotheses lets you conclude, via deductive inference, that those hypotheses are wrong, and then you cross them off. Any hypothesis where you seem to get observational support, Popper says, don’t get too excited! Keep testing it, because maybe the next test is going to be the one where you find evidence against it, and you don’t want to get screwed over by induction. Inductive reasoning is just a little too shaky to put your faith in.
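[A gloss on the logic here, with H standing for a hypothesis and O for an observation it predicts (the notation is my addition, not the speakers’): falsification runs on modus tollens, which is deductively valid, while treating a passed test as proof of the hypothesis would be the invalid pattern of affirming the consequent.

\[
\begin{array}{ll}
H \rightarrow O,\ \neg O \ \vdash\ \neg H & \text{(modus tollens: deductively valid; a failed prediction refutes } H\text{)}\\
H \rightarrow O,\ O \ \nvdash\ H & \text{(affirming the consequent: invalid; a passed test does not prove } H\text{)}
\end{array}
\]

Finding one immortal frog would refute “all frogs are mortal” by the first, valid pattern; observing yet another mortal frog can only tempt you toward the second.]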

(10:05)
Michael Tomasson: That’s my understanding of Karl Popper. I learned the core idea of falsifying hypotheses, and that’s sort of what I teach as truth. But I’ve heard from some anti-Karl Popper folks, which I don’t quite understand.

Let me ask Isis, because I know Isis has very strong opinions about hypotheses. You had a blog post a long time ago about hypotheses. Am I putting words in your mouth to say you think hypotheses and hypothesis testing are important?

(10:40)
Dr. Isis: No, I did. That’s sort of become the running joke here: my only contribution to lab meeting is to say, wait wait wait, what was your hypothesis? I think that having hypotheses is critical, and I’m a believer, as Dr. Tomasson knows, that a hypothesis has four parts. I think that’s fundamental, framing the question, because the question frames how you do your analysis. The design and the analysis fall out of the hypothesis, so I don’t understand doing science without a hypothesis.

Michael Tomasson: Let me throw it over to Andrew … You’re coming from anthropology, you’re looking at science from 30,000 feet, where maybe in anthropology it’s tough to do hypothesis-testing. So, what do you say to this claim that the hypothesis is everything?

Andrew Brandel: I would give two basic responses. One: in the social sciences, we definitely have a different relationship to hypotheses, to the scientific method, perhaps. I don’t want to represent the entire world of social and human sciences.

Michael Tomasson: Too bad!

(12:40)
Andrew Brandel: So, there’s definitely a different relationship to hypothesis-testing — we don’t have a controlled setting. This is what a lot of famous anthropologists would talk about. The other area where we might interject is, science is (in the view of some of us) one among many different ways of viewing and organizing our knowledge about the world, and not necessarily better than some other view.

Michael Tomasson: No, it’s better! Come on!

Andrew Brandel: Well, we can debate about this. This is a debate that’s been going on for a long time, but basically my position would be that we have something to learn from all the different sciences that exist in the world, and that there are lots of different logics which condition the possibility of experiencing different kinds of things. When we ask, what is the hypothesis, when Dr. Isis is saying that is crucial for the research, we would agree with you, that that is also conditioning the responses you get. That’s both what you want and part of the problem. It’s part of a culture that operates like an ideology — too close to you to come at from within it.

Janet Stemwedel: One of the things that philosophers of science started twigging to in the late 20th Century is that science is not working with a scientific method that’s essentially a machine you toss observations into, turn the crank, and out the other end comes pristine knowledge. Science is an activity done by human beings, and human beings who do science have as many biases and blind spots as human beings who don’t do science. So, recognizing some of the challenges built into the kind of critter we are becomes crucial for trying to build reliable knowledge about the world. And even in places where the scientist will say, look, I’m not doing (in this particular field) hypothesis-driven science, it doesn’t mean that there aren’t some hypotheses sort of behind the curtain directing the attention of the people trying to build knowledge. It just means that they haven’t bumped into enough people trying to build knowledge in the same area who have different assumptions to notice that they’re making assumptions in the first place.

(15:20)
Dr. Isis: I think that’s a crucial distinction. Is the science that you’re doing really not hypothesis-driven, or are you too lazy to write down a hypothesis?

To give an example, I’m writing a paper with this clinical fellow, and she’s great. She brought a draft, which is amazing, because I’m all about the paper right now. And in there, she wrote, we sought to observe this because to the best of our knowledge this has never been reported in the literature.

First of all, the phrase “to the best of our knowledge”: any time you write that you should just punch yourself in the throat, because if it wasn’t to the best of your knowledge, you wouldn’t be writing it. I mean, you wouldn’t be lying when you write “this has never been reported in the literature.” The other thing is, “this has never been reported in the literature” as the motivation to do it is a stupid reason. I told her, the frequency with which I wear black underwear in a given week has never been reported in the literature. That doesn’t mean it should be.

Janet Stemwedel: Although, if it correlates with your experiment working or not — I have never met more superstitious people than experimentalists. If the experiment only works on the days you wear black underwear, you’re wearing black underwear until the paper is submitted; that’s how it’s going to be. Because the world is complicated!

Dr. Isis: The point is that it’s not that she didn’t have a hypothesis. It’s that pulling it out of her was like pulling out a tapeworm. It was a struggle. That to me is the question: are we really doing science without a hypothesis, or are we making the story about ourselves? Is the story about what we know from the literature, what the gap in the literature is, and the motivation to do the experiment, or are we writing, “we wanted to do this to see if this was the thing”? In which case, I don’t find it very interesting.

Michael Tomasson: That’s an example of something that I try to teach when you’re writing papers: don’t write “we did this, we wanted to do that, we thought about this.” It’s not really about you.

But friend of the show Cedar Riener tweets in, aren’t the biggest science projects those least likely to have clearly hypothesis-driven experiments, like HGP, BRAIN, etc.? I think the BRAIN example is a good one. We talk about how you need hypotheses to do science, and yet here’s this very high profile thing which, as far as I can tell, doesn’t really have any hypotheses driving it.

When the transcript continues: Issues of inclusion, methodological disputes, and the possibility that “the scientific method” is actually a lie.

On speaking up when someone in your profession behaves unethically.

On Twitter recently there was some discussion of a journalist who wrote and published a piece that arguably did serious harm to its subject.

As the conversation unfolded, Kelly Hills helpfully dropped a link to the Society of Professional Journalists Code of Ethics. Even cursory inspection of this code made it quite clear that the journalist (and editor, and publisher) involved in the harmful story weren’t just making decisions that happened to turn out badly. Rather, they were acting in ways that violate the ethical standards for the journalistic profession articulated in this code.

One take-away lesson from this is that being aware of these ethical standards and letting them guide one’s work as a journalist could head off a great deal of harm.

Something else that came up in the discussion, though, was what seemed like a relative dearth of journalists standing up to challenge the unethical conduct of the journalist (and editor, and publisher) in question. Edited to add: A significant number of journalists even used social media to give the problematic piece accolades.

I follow a lot of journalists on Twitter. A handful of them condemned the unethical behavior in this case. The rest may be busy with things offline. It is worth noting that the Society of Professional Journalists Code of Ethics includes the following:

Journalists should:

  • Clarify and explain news coverage and invite dialogue with the public over journalistic conduct.
  • Encourage the public to voice grievances against the news media.
  • Admit mistakes and correct them promptly.
  • Expose unethical practices of journalists and the news media.
  • Abide by the same high standards to which they hold others.

That fourth bullet-point doesn’t quite say that journalists ought to call out bad journalistic behavior that has already been exposed by others. However, using one’s voice to condemn unethical conduct when you see it is one of the ways that people know that you’re committed to ethical conduct. (The other way people know you’re committed to ethical conduct is that you conduct yourself ethically.)

In a world where the larger public is probably going to take your professional tribe as a package deal, extending trust to the lot of you or feeling mistrust for the lot of you, reliably speaking up about problematic conduct when you see it is vital in earning the public’s trust. Moreover, criticisms from inside the professional community seem much more likely to be effective in persuading its members to embrace ethical conduct than criticisms from outside the profession. It’s just too easy for people on the inside to dismiss the critique from people on the outside with, “They just don’t understand what we do.”

There’s a connection here between what’s good for the professional community of journalists and what’s good for the professional community of scientists.

When scientists behave unethically, other scientists need to call them out — not just because the unethical behavior harms the integrity of the scientific record or the opportunities of particular members of the scientific community to flourish, or the health or safety of patients, but because this is how members of the community teetering on the brink of questionable decisions remember that the community does not tolerate such behavior. This is how they remember that those codes of conduct are not just empty words. This is how they remember that their professional peers expect them to act with integrity every single day.

If members of a professional community are not willing to demand ethical behavior from each other in this way, how can the public be expected to trust that professional community to behave ethically?

Undoubtedly, there are situations that can make it harder to take a stand against unethical behavior in your professional community, power disparities that can make calling out the bad behavior dangerous to your own standing in the professional community. As well, shared membership in a professional community creates a situation where you’re inclined to give your fellow professional the benefit of the doubt rather than starting from a place of distrust in your engagements.

But if only a handful of voices in your professional community are raised to call out problematic behavior that the public has identified and is taking very seriously, what does that communicate to the public?

Maybe that you see the behavior, don’t think it’s problematic, but can’t be bothered to explain why it’s not problematic (because the public’s concerns just don’t matter to you).

Maybe that you see the behavior, recognize that it’s problematic, but don’t actually care that much when it happens (and if the public is concerned about it, that’s their problem, not yours).

Maybe that you’re working very hard not to see the problematic behavior (which, in this case, probably means you’re also working very hard not to hear the public voicing its concerns).

Sure, there’s a possibility that you’re working very hard within your professional community to address the problematic behavior and make sure it doesn’t happen again, but if the public doesn’t see evidence of these efforts, it’s unreasonable to expect them to know they’re happening.

It’s hard for me to see how the public’s trust in a profession is supposed to be strengthened by people in the professional community not speaking out against unethical conduct of members of that professional community that the public already knows about. Indeed, I think a profession that only calls out bad behavior in its ranks that the public already knows about is skating on pretty thin ice.

It surely feels desperately unfair to all the members of a professional community working hard to conduct themselves ethically when the public judges the whole profession on the basis of the bad behavior of a handful of its members. One may be tempted to protest, “We’re not all like that!” That’s not really addressing the public’s complaint, though: The public sees at least one of you who’s “like that”; what are the rest of you doing about that?

If the public has good reason to believe that members of the profession will be swift and effective in their policing of bad behavior within their own ranks, the public is more likely to see the bad actors as outliers.

But the public is more likely to believe that members of the profession will be swift and effective in their policing of bad behavior within their own ranks when they see that happen, regularly.

Don’t be evil: Obligations of scientists (part 3)

In the last installment of our ongoing discussion of the obligations of scientists, I said the next post in the series would take up scientists’ positive duties (i.e., duties to actually do particular kinds of things). I’ve decided to amend that plan to say just a bit more about scientists’ negative duties (i.e., duties to refrain from doing particular kinds of things).

Here, I want to examine a certain minimalist view of scientists’ duties (or of scientists’ negative duties) that is roughly analogous to the old Google motto, “Don’t be evil.” For scientists, the motto would be “Don’t commit scientific misconduct.” The premise is that if X isn’t scientific misconduct, then X is acceptable conduct — at least, acceptable conduct within the context of doing science.

The next question, if you’re trying to avoid committing scientific misconduct, is how scientific misconduct is defined. For scientists in the U.S., a good place to look is to the federal agencies that provide funding for scientific research and training.

Here’s the Office of Research Integrity’s definition of misconduct:

Research misconduct means fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results. …

Research misconduct does not include honest error or differences of opinion.

Here’s the National Science Foundation’s definition of misconduct:

Research misconduct means fabrication, falsification, or plagiarism in proposing or performing research funded by NSF, reviewing research proposals submitted to NSF, or in reporting research results funded by NSF. …

Research misconduct does not include honest error or differences of opinion.

These definitions are quite similar, although NSF restricts its definition to actions that are part of a scientist’s interaction with NSF — giving the impression that the same actions committed in a scientist’s interaction with NIH would not be scientific misconduct. I’m fairly certain that NSF officials view all scientific plagiarism as bad. However, when the plagiarism is committed in connection with NIH funding, NSF leaves it to the ORI to pursue sanctions. This is a matter of jurisdiction for enforcement.

It’s worth thinking about why federal funders define (and forbid) scientific misconduct in the first place rather than leaving it to scientists as a professional community to police. One stated goal is to ensure that the money they are distributing to support scientific research and training is not being misused — and to have a mechanism with which they can cut off scientists who have proven themselves to be bad actors from further funding. Another stated goal is to protect the quality of the scientific record — that is, to ensure that the published results of the funded research reflect honest reporting of good scientific work rather than lies.

The upshot here is that public money for science comes with strings attached, and that one of those strings is that the money be used to conduct actual science.

Ensuring the proper use of the funding and protecting the integrity of the scientific record needn’t be the only goals of federal funding agencies in the U.S. in their interactions with scientists or in the way they frame their definitions of scientific misconduct, but at present these are the goals in the foreground in discussions of why federally funded scientists should avoid scientific misconduct.

Let’s consider the three high crimes identified in these definitions of scientific misconduct.

Fabrication is making up data or results rather than actually collecting them from observation or experimentation. Obviously, fabrication undermines the project of building a reliable body of knowledge about the world – faked data can’t be counted on to give us an accurate picture of what the world is really like.

A close cousin of fabrication is falsification. Here, rather than making up data out of whole cloth, falsification involves “adjusting” real data – changing the values, adding some data points, omitting other data points. As with fabrication, falsification is lying about your empirical data, representing the falsified data as an honest report of what you observed when it isn’t.

The third high crime is plagiarism, misrepresenting the words or ideas (or, for that matter, data or computer code, for example) of others as your own. Like fabrication and falsification, plagiarism is a variety of dishonesty.

Observation and experimentation are central in establishing the relevant facts about the phenomena scientists are trying to understand. Establishing such relevant facts requires truthfulness about what is observed or measured and under what conditions. Deception, therefore, undermines this aim of science. So at a minimum, scientists must embrace the norm of truthfulness or abandon the goal of building accurate pictures of reality. This doesn’t mean that honest scientists never make mistakes in setting up their experiments, making their measurements, performing data analysis, or reporting what they found to other scientists. However, when honest scientists discover these mistakes, they do what they can to correct them, so that they don’t mislead their fellow scientists even accidentally.

The importance of reliable empirical data, whether as the source of or a test of one’s theory, is why fabrication and falsification of data are rightly regarded as cardinal sins against science. Made-up data are no kind of reliable indicator of what the world is like or whether a particular theory is a good one. Similarly, “cooking” data sets to better support particular hypotheses amounts to ignoring the reality of what has actually been measured. The scientific rules of engagement with phenomena hold the scientist to account for what has actually been observed. While the scientist is always permitted to get additional data about the object of study, one cannot willfully ignore facts one finds puzzling or inconvenient. Even if these facts are not explained, they must be acknowledged.

Those who commit falsification and fabrication undermine the goal of science by knowingly introducing unreliable data into, or holding back relevant data from, the formulation and testing of theories. They sin by not holding themselves accountable to reality as observed in scientific experiments. When they falsify or fabricate in reports of research, they undermine the integrity of the scientific record. When they do it in grant proposals, they are attempting to secure funding under false pretenses.

Plagiarism, the third of the cardinal sins against responsible science, is dishonesty of another sort, namely, dishonesty about the source of words, ideas, methods, or results. A number of people who think hard about research ethics and scientific misconduct view plagiarism as importantly different in its effects from fabrication and falsification. For example, Donald E. Buzzelli (1999) writes:

[P]lagiarism is an instance of robbing a scientific worker of the credit for his or her work, not a matter of corrupting the record. (p. 278)

Kenneth D. Pimple (2002) writes:

One ideal of science, identified by Robert Merton as “disinterestedness,” holds that what matters is the finding, not who makes the finding. Under this norm, scientists do not judge each other’s work by reference to the race, religion, gender, prestige, or any other incidental characteristic of the researcher; the work is judged by the work, not the worker. No harm would be done to the Theory of Relativity if we discovered Einstein had plagiarized it…

[P]lagiarism … is an offense against the community of scientists, rather than against science itself. Who makes a particular finding will not matter to science in one hundred years, but today it matters deeply to the community of scientists. Plagiarism is a way of stealing credit, of gaining credit where credit is not due, and credit, typically in the form of authorship, is the coin of the realm in science. An offense against scientists qua scientists is an offense against science, and in its way plagiarism is as deep an offense against scientists as falsification and fabrication are offenses against science. (p. 196)

In fact, I think we can make a good argument that plagiarism does threaten the integrity of the scientific record (although I’ll save that argument for a separate post). However, I agree with both Buzzelli and Pimple that plagiarism is also a problem because it embodies a particular kind of unfairness within scientific practice. That federal funders include plagiarism by name in their definitions of scientific misconduct suggests that their goals extend further than merely protecting the integrity of the scientific record.

Fabrication, falsification, and plagiarism are clearly instances of scientific misconduct, but the United States Public Health Service (whose umbrella includes NIH) and NSF used to define scientific misconduct as fabrication, falsification, plagiarism, and other serious deviations from accepted research practices. The “other serious deviations” clause was controversial, with a panel of the National Academy of Sciences (among others) arguing that this language was ambiguous enough that it shouldn’t be part of an official misconduct definition. Maybe, the panel worried, “serious deviations from accepted research practices” might be interpreted to include cutting-edge methodological innovations, meaning that scientific innovation would count as misconduct.

In his 1993 article, “The Definition of Misconduct in Science: A View from NSF,” Buzzelli claimed that there was no evidence that the broader definitions of misconduct had been used to lodge this kind of misconduct complaint. Since then, however, there have been instances where, it could be argued, the ambiguity of an “other serious deviations” clause was exploited to pursue a scientist for political reasons.

If the “other serious deviations” clause isn’t meant to keep scientists from innovating, what kinds of misconduct is it supposed to cover? These include things like sabotaging other scientists’ experiments or equipment, falsifying colleagues’ data, violating agreements about sharing important research materials like cultures and reagents, making misrepresentations in grant proposals, and violating the confidentiality of the peer review process. None of these activities is necessarily covered by fabrication, falsification, or plagiarism, but each of these activities can be seriously harmful to scientific knowledge-building.

Buzzelli (1993) discusses a particular deviation from accepted research practices that the NSF judged as misconduct, one where a principal investigator directing an undergraduate primatology research experience funded by an NSF grant sexually harassed student researchers and graduate assistants. Buzzelli writes:

In carrying out this project, the senior researcher was accused of a range of coercive sexual offenses against various female undergraduate students and research assistants, up to and including rape. … He rationed out access to the research data and the computer on which they were stored and analyzed, as well as his own assistance, so they were only available to students who accepted his advances. He was also accused of threatening to blackball some of the graduate students in the professional community and to damage their careers if they reported his activities. (p. 585)

Even opponents of the “other serious deviations” clause would be unlikely to argue that this PI was not behaving very badly. However, they did argue that this PI’s misconduct was not scientific misconduct — that it should be handled by criminal or civil authorities rather than funding agencies, and that it was not conduct that did harm to science per se.

Buzzelli (who, I should mention, was writing as a senior scientist in the Office of the Inspector General in the National Science Foundation) disagreed with this assessment. He argued that NSF had to get involved in this sexual harassment case in order to protect the integrity of its research funds. The PI in question, operating with NSF funds designated to provide an undergraduate training experience, used his power as a research director and mentor to make sexual demands of his undergraduate trainees. The only way for the undergraduate trainees to receive the training, mentoring, and even access to their own data that they were meant to receive in this research experience at a remote field site was for them to submit to the PI’s demands. In other words, while the PI’s behavior may not have directly compromised the shared body of scientific knowledge, it undermined the other central job of the tribe of science: the training of new scientists. Buzzelli writes:

These demands and assaults, plus the professional blackmail mentioned earlier, were an integral part of the subject’s performance as a research mentor and director and ethically compromised that performance. Hence, they seriously deviated from the practices accepted in the scientific community. (p. 647)

Buzzelli makes the case for an understanding of scientific misconduct as practices that do harm to science. Thus, practices that damage the integrity of training and supervision of associates and students – an important element of the research process – would count as misconduct. Indeed, in his 1999 article, he notes that the first official NIH definition of scientific misconduct (in 1986) used the phrase “serious deviations, such as fabrication, falsification, or plagiarism, from accepted practices in carrying out research or in reporting the results of research.” (p. 276) This language shifted in subsequent statements of the definition of scientific misconduct, for example “fabrication, falsification, plagiarism, and other serious deviations from accepted practices” in the NSF definition that was in place in 1999.

Reordering the words this way might not seem like a big shift, but as Buzzelli points out, it conveys the impression that “other serious deviations” is a fourth item in the list after the clearly enumerated fabrication, falsification, and plagiarism, an ill-defined catch-all meant to cover cases too fuzzy to enumerate in advance. The original NIH wording, in contrast, suggests that the essence of scientific misconduct is that it is an ethical deviation from accepted scientific practice. In this framing of the definition, fabrication, falsification, and plagiarism are offered as three examples of the kind of deviation that counts as scientific misconduct, but there is no claim that these three examples are the only deviations that count as scientific misconduct.

To those still worried by the imprecision of this definition, Buzzelli offers the following:

[T]he ethical import of “serious deviations from accepted practices” has escaped some critics, who have taken it to refer instead to such things as doing creative and novel research, exhibiting personality quirks, or deviating from some artificial ideal of scientific method. They consider the language of the present definition to be excessively broad because it would supposedly allow misconduct findings to be made against scientists for these inappropriate reasons.

However, the real import of “accepted practices” is that it makes the ethical standards held by the scientific community itself the regulatory standard that a federal agency will use in considering a case of misconduct against a scientist. (p. 277)

In other words, Buzzelli is arguing that a definition of scientific misconduct that is centered on practices that the scientific community finds harmful to knowledge-building is better for ensuring the proper use of research funding and protecting the integrity of the scientific record than a definition that restricts scientific misconduct to fabrication, falsification, and plagiarism. Refraining from fabrication, falsification, and plagiarism, then, would not suffice to fulfill the negative duties of a scientist.

We’ll continue our discussion of the duties of scientists with a sidebar on what kind of harm I claim plagiarism does to scientific knowledge-building. From there, we will press on to discuss what the positive duties of scientists might be, as well as the sources of these duties.

_____
Buzzelli, D. E. (1993). The definition of misconduct in science: A view from NSF. Science, 259(5095), 584-648.

Buzzelli, D. E. (1999). Serious deviation from accepted practices. Science and Engineering Ethics, 5(2), 275-282.

Pimple, K. D. (2002). Six domains of research ethics. Science and Engineering Ethics, 8(2), 191-205.
______
Posts in this series:

Questions for the non-scientists in the audience.

Questions for the scientists in the audience.

What do we owe you, and who’s “we” anyway? Obligations of scientists (part 1)

Scientists’ powers and ways they shouldn’t use them: Obligations of scientists (part 2)

Don’t be evil: Obligations of scientists (part 3)

How plagiarism hurts knowledge-building: Obligations of scientists (part 4)

What scientists ought to do for non-scientists, and why: Obligations of scientists (part 5)

What do I owe society for my scientific training? Obligations of scientists (part 6)

Are you saying I can’t go home until we cure cancer? Obligations of scientists (part 7)

Scientists’ powers and ways they shouldn’t use them: Obligations of scientists (part 2)

In this post, we’re returning to a discussion we started back in September about whether scientists have special duties or obligations to society (or, if the notion of “society” seems too fuzzy and ill-defined to you, to the other people who are not scientists with whom they share a world) in virtue of being scientists.

You may recall that, in the post where we set out some groundwork for the discussion, I offered one reason you might think that scientists have duties that are importantly different from the duties of non-scientists:

The main arguments for scientists having special duties tend to turn on scientists being in possession of special powers. This is the scientist as Spider-Man: with great power comes great responsibility.

What kind of special powers are we talking about? The power to build reliable knowledge about the world – and in particular, about phenomena and mechanisms in the world that are not so transparent to our everyday powers of observation and the everyday tools non-scientists have at their disposal for probing features of their world. On account of their training and experience, scientists are more likely to be able to set up experiments or conditions for observation that will help them figure out the cause of an outbreak of illness, or the robust patterns in global surface temperatures and the strength of their correlation with CO2 outputs from factories and farms, or whether a particular plan for energy generation is thermodynamically plausible. In addition, working scientists are more likely to have access to chemical reagents and modern lab equipment, to beamtimes at particle accelerators, to purpose-bred experimental animals, to populations of human subjects and institutional review boards for well-regulated clinical trials.

Scientists can build specialist knowledge that the rest of us (including scientists in other fields) cannot, and many of them have access to materials, tools, and social arrangements for use in their knowledge-building that the rest of us do not. That may fall short of a superpower, but we shouldn’t kid ourselves that this doesn’t represent significant power in our world.

In her book Ethics of Scientific Research, Kristin Shrader-Frechette argues that these special abilities give rise to obligations for scientists. We can separate these into positive duties and negative duties. A positive duty is an obligation to actually do something (e.g., a duty to care for the hungry, a duty to tell the truth), while a negative duty is an obligation to refrain from doing something (e.g., a duty not to lie, a duty not to steal, a duty not to kill). There may well be context sensitivity in some of these duties (e.g., if it’s a matter of self-defense, your duty not to kill may be weakened), but you get the basic difference between the two flavors of duties.

Let’s start with ways scientists ought not to use their scientific powers. Since scientists have to share a world with everyone else, Shrader-Frechette argues that this puts some limits on the research they can do. She says that scientists shouldn’t do research that causes unjustified risks to people. Nor should they do research that violates informed consent of the human subjects who participate in the research. They should not do research that unjustly converts public resources to private profits. Nor should they do research that seriously jeopardizes environmental welfare. Finally, scientists should not do biased research.

One common theme in these prohibitions is the idea that knowledge in itself is not more important than the welfare of people. Given how focused scientific activity is on knowledge-building, this may be something about which scientists need to be reminded. For the people with whom scientists share a world, knowledge is valuable instrumentally – because people in society can benefit from it. What this means is that scientific knowledge-building that harms people more than it helps them, or that harms shared resources like the environment, is on balance a bad thing, not a good thing. This is not to say that the knowledge scientists are seeking should not be built at all. Rather, scientists need to find a way to build it without inflicting those harms – because it is their duty to avoid inflicting those harms.

Shrader-Frechette makes the observation that for research to be valuable at all to the broader public, it must be research that produces reliable knowledge. This is a big reason scientists should avoid conducting biased research. And, she notes that not doing certain research can also pose a risk to the public.

There’s another way scientists might use their powers against non-scientists that’s suggested by the Mertonian norm of disinterestedness, an “ought” scientists are supposed to feel pulling at them because of how they’ve been socialized as members of their scientific tribe. Because the scientific expert has knowledge and knowledge-building powers that the non-scientist does not, she could exploit the non-scientist’s ignorance or his tendency to trust the judgment of the expert. The scientist, in other words, could put one over on the layperson for her own benefit. This is how snake oil gets sold — and arguably, this is the kind of thing that scientists ought to refrain from doing in their interactions with non-scientists.

The overall duties of the scientist, as Shrader-Frechette describes them, also include positive duties to do research and to use research findings in ways that serve the public good, as well as to ensure that the knowledge and technologies created by the research do not harm anyone. We’ll take up these positive duties in the next post in the series.
_____
Shrader-Frechette, K. S. (1994). Ethics of scientific research. Rowman & Littlefield.
______
Posts in this series:

Questions for the non-scientists in the audience.

Questions for the scientists in the audience.

What do we owe you, and who’s “we” anyway? Obligations of scientists (part 1)

Scientists’ powers and ways they shouldn’t use them: Obligations of scientists (part 2)

Don’t be evil: Obligations of scientists (part 3)

How plagiarism hurts knowledge-building: Obligations of scientists (part 4)

What scientists ought to do for non-scientists, and why: Obligations of scientists (part 5)

What do I owe society for my scientific training? Obligations of scientists (part 6)

Are you saying I can’t go home until we cure cancer? Obligations of scientists (part 7)

On allies.

Those who cannot remember the past are condemned to repeat it.
–George Santayana

All of this has happened before, and all of this will happen again.
–a guy who turned out to be a Cylon

Let me start by putting my cards on the table: Jamie Vernon is not someone I count as an ally.

At least, he’s not someone I’d consider a reliable ally. I don’t have any reason to believe that he really understands my interests, and I don’t trust him not to sacrifice them for his own comfort. He travels in some of the same online spaces that I do and considers himself a longstanding member of the SciComm community of which I take myself to be a member, but that doesn’t mean I think he has my back. Undoubtedly, there are some issues for which we would find ourselves on the same side of things, but that’s not terribly informative; there are some issues (not many, but some) for which Dick Cheney and I are on the same side.

Here, I’m in agreement with Isis that we needn’t be friends to be able to work together in pursuit of shared goals. I’ve made similar observations about the scientific community:

We’re not all on the same page about everything. Pretending that we are misrepresents the nature of the tribe of science and of scientific activity. But given that there are some shared commitments that guide scientific methodology, some conditions without which scientific activity in the U.S. cannot flourish, these provide some common ground on which scientists ought to be more or less united … [which] opens the possibility of building coalitions, of finding ways to work together toward the goals we share even if we may not agree about what other goals are worth pursuing.

We probably can’t form workable coalitions, though, by showing open contempt for each other’s other commitments or interests. We cannot be allies by behaving like enemies. Human nature sucks like that sometimes.

But without coalitions, we have to be ready to go it alone, to work to achieve our goals with much less help. Without coalitions, we may find ourselves working against the effects of those who have chosen to pursue other goals instead. If you can’t work with me toward goal A, I may not be inclined to help you work toward goal B. If we made common cause with each other, we might be able to tailor strategies that would get us closer to both goals rather than sacrificing one for the other. But if we decide we’re not working on the same team, why on earth should we care about each other’s recommendations with respect to strategies?

Ironically, we humans seem sometimes to show more respect to people who are strangers than to people we call our friends. Perhaps it’s related to the uncertainty of our interactions going forward — the possibility that we may need to band together, or to accommodate the other’s interests to protect our own — or to the lack of much shared history to draw upon in guiding our interactions. We begin our interactions with strangers with the slate as blank as it can be. Strangers can’t be implored (at least not credibly) to consider our past good acts to excuse our current rotten behavior toward them.

We may recognize strangers as potential allies, but we don’t automatically assume that they’re allies already. Neither do we assume that they’ll view us as their allies.

Thinking about allies is important in the aftermath of Joe Hanson’s video that he says was meant to “lampoon” the personalities of famous scientists of yore and to make “a joke to call attention to the sexual harassment that many women still today experience.” It’s fair to say the joke was not entirely successful given that the scenes of Albert Einstein sexually harassing and assaulting Marie Curie arguably did harm to women in science:

Hanson’s video isn’t funny. It’s painful. It’s painful because 1) it’s such an accurate portrayal of exactly what so many of us have faced, and 2) the fact that Hanson thinks it’s “outrageous” demonstrates how many of our male colleagues don’t realize the fullness of the hostility that women scientists are still facing in the workplace. Furthermore, Hanson’s continued clinging to “can’t you take a joke” and the fact that he was “trying to be comedic” reflects the deeper issue. Not only does he not get it, his statement implies that he has no intention of trying to get it.

Hanson’s posted explanation after the negative reactions urges the people who reacted negatively to see him as an ally:

To anyone curious if I am not aware of, or not committed to preventing this kind of treatment (in whatever way my privileged perspective allows me to do so) I would urge you to check out my past writing and videos … This doesn’t excuse us, but I ask that you form your opinion of me, It’s Okay To Be Smart, and PBS Digital Studios from my body of work, and not a piece of it.

Indeed, Jamie Vernon not only vouches for Hanson’s ally bona fides but asserts his own while simultaneously suggesting that the negative reactions to Hanson’s video are themselves a problem for the SciComm community:

Accusations of discrimination were even pointed in my direction, based on a single ill-advised Tweet.  One tweet (that I now regret and apologize for) triggered a tsunami of anger, attacks, taunts, and accusations against me. 

Despite many years of speaking out on women’s issues in science, despite being an ardent supporter of women science communicators, despite being a father to two young girls for whom it is one of my supreme goals to create a more gender balanced science community, despite these things and many other examples of my attempts to be an ally to the community of women science communicators, I was now facing down the barrel of a gun determined to make an example out of me. …

“How could this be happening to me?  I’m an ally!” I thought. …

Hanson has worked incredibly hard for several years to create an identity that has proven to inspire young people.  He has thousands of loyal readers who share his work thousands of times daily on Tumblr, Facebook and Twitter.  He has championed women’s causes.  Just the week prior to the release of the infamous video, he railed against discriminatory practices among the Nobel Prize selection committees.  He is a force for good in a sea of apathy and ignorance.  Without a doubt, he is an asset to science and science communication.  In my opinion, any mention of removing him from his contract with PBS is shortsighted and reflects misdirected anger.  He deserves the opportunity to recalibrate and power on in the name of science.

Vernon assures us that he and Hanson are allies to women in science and in the SciComm community. At minimum, I believe that Vernon must have a very different understanding than I of what is involved in being an ally.

Allies are people with whom we make common cause to pursue particular goals or to secure particular interests. Their interests and goals are not identical to ours — that’s what makes them allies.

I do not expect allies to be perfect. They, like me, are human, and I certainly mess up with some regularity. Indeed, I understand full well the difficulty of being a good ally. As Josh Witten observed to me, as a white woman I am “in one of the more privileged classes of the oppressed, arguably the least f@#$ed over of the totally f@#$ed over groups in modern western society.” This means when I try to be an ally to people of color, or disabled people, or poor people, for example, there’s a good chance I’ll step in it. I may not be playing life on the lowest difficulty setting, but I’m pretty damn close.

Happily, many people to whom I try to be an ally are willing to tell me when I step in it and to detail just how I’ve stepped in it. This gives me valuable feedback to try to do better.

Allies I trust are people who pay attention to the people to whom they’re trying to give support because they’re imperfect and because their interests and goals are not identical. The point of paying attention is to get some firsthand reports on whether you’re helping or hurting from the people you’re trying to help.

When good allies mess up, they do their best to respond ethically and do better going forward. Because they want to do better, they want to know when they have messed up — even though it can be profoundly painful to find out your best efforts to help have not succeeded.

Let’s pause for a moment here so I can assure you that I understand it hurts when someone tells you that you messed up. I understand it because I have experienced it. I know all about the feeling of defensiveness that pops right up, as well as the feeling that your character as a human being is being unfairly judged on the basis of limited data — indeed, in your defensiveness, you might immediately start looking for ways the person suggesting you are not acting like a good ally has messed up (including failing to communicate your mistake in language that is as gentle as possible). These feelings are natural, but being a good ally means not letting these feelings overcome your commitment to actually be helpful to the people you set out to help.

On account of these feelings, you might feel great empathy for someone else who has just stepped in it but who you think is trying to be an ally. You might feel so much empathy that you don’t want to make them feel bad by calling out their mistake — or so much that you chide others for pointing out that mistake. (You might even start reaching for quotations about people without sin and stones.) Following this impulse undercuts the goal of being a good ally.

As I wrote elsewhere,

If identifying problematic behavior in a community is something that can only be done by perfect people — people who have never sinned themselves, who have never pissed anyone off, who emerged from the womb incapable of engaging in bad behavior themselves — then we are screwed.

People mess up. The hope is that by calling attention to the bad behavior, and to the harm it does, we can help each other do better. Focusing on problematic behavior (especially if that behavior is ongoing and needs to be addressed to stop the harm) needn’t brand the bad actor as irredeemable, and it shouldn’t require that there’s a saint on duty to file the complaint.

An ally worth the name recognizes that while good intentions can be helpful in steering his conduct, in the end it’s the actions that matter the most. Other people don’t have privileged access to our intentions, after all. What they have to go on is how we behave, what we do — and that outward behavior can have positive or negative effects regardless of whether we intended those effects. It hurts when you step on my toe whether or not you are a good person inside. Telling me it shouldn’t hurt because you didn’t intend the harm is effectively telling me that my own experience isn’t valid, and that your feelings (that you are a good person) trump mine (that my foot hurts).

The allies I trust recognize that the trust they bank from their past good acts is finite. Those past good acts don’t make it impossible for their current acts to cause real harm — in fact, they can make a current act more harmful by shattering the trust built up with the past good acts. As well, they try to understand that harm done by others can make all the banked trust easier to deplete. It may not seem fair, but it is a rational move on the part of the people they are trying to help to protect themselves from harm.

This is, by the way, a good reason for people who want to be effective allies to address the harms done by others rather than maintaining a non-intervention policy.

Being a good ally means trying very hard to understand the positions and experiences of the people with whom you’re trying to make common cause by listening carefully, by asking questions, and by refraining from launching into arguments from first principles that those experiences are imaginary or mistaken. While they ask questions, those committed to being allies don’t demand to be educated. They make an effort to do their own homework.

I expect allies worth the name not to demand forgiveness, not to insist that the people with whom they say they stand will swallow their feelings or let go of hurt on the so-called ally’s schedule. Things hurt as much and as long as they’re going to hurt. Ignoring that just adds more hurt to the pile.

The allies I trust are the ones who are focused on doing the right thing, and on helping counter the wrongs, whether or not anyone is watching, not for the street cred as an ally, but because they know they should.

The allies I believe in recognize that every day they are faced with choices about how to act — about who to be — and that how they choose can make them better or worse allies regardless of what came before.

I am not ruling out the possibility that Joe Hanson or Jamie Vernon could be reliable allies for women in science and in the SciComm community. But their professions of ally status will not be what makes them allies, nor will such professions be enough to make me trust them as allies. The proof of an ally is in how he acts — including how he acts in response to criticism that hurts. Being an ally will mean acting like one.

On the labor involved in being part of a community.

On Thursday of this week, registration for ScienceOnline Together 2014, the “flagship annual conference” of ScienceOnline, opened (and closed). ScienceOnline describes itself as a “global, ongoing, online community” made up of “a diverse and growing group of researchers, science writers, artists, programmers, and educators — those who conduct or communicate science online”.

On Wednesday of this week, Isis the Scientist expressed her doubts that the science communication community for which ScienceOnline functions as a nexus is actually a “community” in any meaningful sense:

The major fundamental flaw of the SciComm “community” is that it is a professional community with inconsistent common values. En face, one of its values is the idea of promoting science. Another is promoting diversity and equality in a professional setting. But, at its core, its most fundamental values are these notions of friendship, support, and togetherness. People join the community in part to talk about science, but also for social interactions with other members of the “community”. While I’ve engaged in my fair share of drinking and shenanigans at scientific conferences, ScienceOnline is a different beast entirely. The years that I participated in person and virtually, there was no doubt in my mind that this was a primarily social enterprise. It had some real hilarious parts, but it wasn’t an experience that seriously upgraded me professionally.

People in SciComm feel confident talking about “the community” as a tangible thing with values and including people in it, even when those people don’t value the social structure in the same way. People write things that are “brave” and bloviate in ways that make each other feel good and have “deep and meaningful conversations about issues” that are at the end of the day nothing more than words. It’s a “community” that gives out platters full of cookies to people who claim to be “allies” to causes without actually having to ever do anything meaningful. Without having to outreach in any tangible way, simply because they claim to be “allies.” Deeming yourself an “ally” and getting a stack of “Get Out of Jail, Free” cards is a hallmark of the “community”.

Isis notes that the value of “togetherness” in the (putative) SciComm community is often prioritized over the value of “diversity” — and that this is a pretty efficient way to undermine the community. She suggests that focusing on friendship rather than professionalism entrenches this problem and writes “I have friends in academia, but being a part of academic science is not predicated on people being my friends.”

I’m very sympathetic to Isis’s concerns here. I don’t know that I’d say there’s no SciComm community, but that might come down to a disagreement about where the line is between a dysfunctional community and a lack of community altogether. But that’s like the definitional dispute about how many hairs one needs on one’s head to shift from the category of “bald” to the category of “not-bald” — for the case we’re trying to categorize there’s still agreement that there’s a whole lot of bare skin hanging out in the wind.

The crux of the matter, whether we have a community or are trying to have one, is whether we have a set of shared values and goals that is sufficient for us to make common cause with each other and to take each other seriously — to take each other seriously even when we offer critiques of other members of the community. For if people in the community dismiss your critiques out of hand, if they have the backs of some members of the community and not others (and whose they have and whose they don’t sorts out along lines of race, gender, class, and other dimensions that the community’s shared values and goals purportedly transcend), it’s pretty easy to wonder whether you are actually a valued member of the community, whether the community is for you in any meaningful way.

I do believe there’s something like a SciComm community, albeit a dysfunctional one. I will be going to ScienceOnline Together 2014, as I went to the seven annual meetings preceding it. Personally, even though I am a full-time academic like Dr. Isis, I do find professional value in this conference. Probably this has to do with my weird interdisciplinary professional focus — something that makes it harder for me to get all the support and inspiration and engagement I need from the official professional societies that are supposed to be aligned with my professional identity. And because of the focus of my work, I am well aware of dysfunction in my own professional community and in other academic and professional communities.

While there has been a pronounced social component to ScienceOnline as a focus of the SciComm community, ScienceOnline and its ancestor conferences have never felt purely social to me. I have always had a more professional agenda there — learning what’s going on in different realms of practice, getting my ideas before people who can give me useful feedback on them, trying to build myself a big-picture, nuanced understanding of science engagement and how it matters.

And in recent years, my experience of the meetings has been more like work. Last year, for example, I put a lot of effort into coordinating a kid-friendly room at the conference so that attendees with small children could have some child-free time in the sessions. It was a small step towards making the conference — and the community — more accessible and welcoming to all the people who we describe as being part of the community. There’s still significant work to do on this front. If we opt out of doing that work, we are sending a pretty clear message about who we care about having in the community and who we view as peripheral, about whose voices and interests we value and whose we do not.

Paying attention to who is being left out, to whose voices are not being heard, to whose needs are not being met, takes effort. But this effort is part of the regular required maintenance for any community that is not completely homogeneous. Skipping it is a recipe for dysfunction.

And the maintenance, it seems, is required pretty much every damn day.

Friday, in the Twitter stream for the ScienceOnline hashtag #scio14, I saw a tweet from Bug Girl saying that she felt unsafe.

To find out what was making Bug Girl feel unsafe, I went back and watched Joe Hanson’s Thanksgiving video, in which Albert Einstein was portrayed as making unwelcome advances on Marie Curie, cheered on by his host, culminating in a naked assault on Curie.

Given the recent upheaval in the SciComm community around sexual harassment — with lots of discussion, because that’s how we roll — it is surprising and shocking that this video plays sexual harassment and assault for laughs, apparently with no thought to how many women are still targets of harassment, no consideration of how chilly the climate for women in science remains.

Here’s a really clear discussion of what makes the video problematic, and here’s Joe Hanson’s response to the criticisms. I’ll be honest: it looks to me like Joe still doesn’t really understand what people (myself included) took to social media to explain to him. I’m hopeful that he’ll listen and think and eventually get it better. If not, I’m hopeful that people will keep piping up to explain the problem.

But not everyone was happy that members of our putative community greeted a publicly posted video (on a pretty visible platform, PBS Digital Studios, supported by taxpayers in the U.S.) with public critique.

The objections raised on Twitter — many of them raised with obvious care as far as being focused on the harm and communicated constructively — were described variously as “drama,” “infighting,” a “witch hunt” and “burning [Joe] at the stake”. (I’m not going to link the tweets because a number of the people who made those characterizations thought about it and walked them back.)

People insisted, as they do pretty much every time, that the proper thing to do was to address the problem privately — as if that’s the only ethical way to deal with a public wrong, or as if it’s the most effective way to fix the harm. Despite what some will argue, I don’t think we have good evidence for either of those claims.

So let’s come back to regular maintenance of the community and think harder about this. I’ve written before that

if bad behavior is dealt with privately, out of view of members of the community who witnessed the bad behavior in question, those members may lose faith in the community’s commitment to calling it out.

This strikes me as good reason not to take all the communications to private channels. People watching and listening on the sidelines are gathering information on whether their so-called community shares their values, on whether it has their back.

Indeed, the people on the sidelines are also watching and listening to the folks dismissing critiques as drama. Operationally, “drama” seems to amount to “Stuff I’d rather you not discuss where I can see or hear it,” which itself shades quickly into “Stuff that really seems to bother other people, for whom I seem to be unable to muster any empathy, because they are not me.”

Let me pause to note what I am not claiming. I am not saying that every member of a community must be an active member of every conversation within that community. I am not saying that empathy requires you to personally step up and engage in every difficult dialogue every time it rolls around. Sometimes you have other stuff to do, or you know that the cost of being patient and calm is more than you can handle at the moment, or you know you need to listen and think for a while before you get it well enough to get into it.

But going to the trouble to speak up to convey that the conversation is a troublesome one to have happening in your community — that you wish people would stop making an issue of it, that they should just let it go for the sake of peace in the community — that’s something different. That’s telling the people expressing their hurt and disappointment and higher expectations that they should swallow it, that they should keep it to themselves.

For the sake of the community.

For the sake of the community of which they are clearly not really valued members, if they are the ones, always, who need to shut up and let their issues go for the greater good.

Arguably, if one is really serious about the good of the community, one should pay attention to how this kind of dismissal impacts the community. Now is as good a moment as any to start.

What do we owe you, and who’s “we” anyway? Obligations of scientists (part 1)

Near the beginning of the month, I asked my readers — those who are scientists and those who are non-scientists alike — to share their impressions about whether scientists have any special duties or obligations to society that non-scientists don’t have. I also asked whether non-scientists have any special duties or obligations to scientists.

If you click through to those linked posts and read the comments (and check out the thoughtful responses at MetaCookBook and Antijenic Drift), you’ll see a wide range of opinions on both of these questions, each with persuasive reasons offered to back them up.

In this post and a few more that will follow (I’m estimating three more, but we’ll see how it goes), I want to take a closer look at some of these responses. I’m also going to develop some of the standard arguments that have been put forward by professional philosophers and others of that ilk that scientists do, in fact, have special duties. Working through these arguments will include getting into specifics about what precisely scientists owe the non-scientists with whom they’re sharing a world, and about the sources of these putative obligations. If we’re going to take these arguments seriously, though, I think we need to think carefully about the corresponding questions: what do individual non-scientists and society as a whole owe to scientists, and what are the sources of these obligations?

First, let’s lay some groundwork for the discussion.

Right off the bat, I must acknowledge the problem of drawing clear lines around who counts as a scientist and who counts as a non-scientist. For the purposes of getting answers to my questions, I used a fairly arbitrary definition:

Who counts as a scientist here? I’m including anyone who has been trained (past the B.A. or B.S. level) in a science, including people who may be currently involved in that training and anyone working in a scientific field (even in the absence of schooling past the B.A. or B.S. level).

There are plenty of people who would count as “scientist” under this definition who would not describe themselves as scientists — or at least as professional scientists. (I am one of those people.) On the other hand, there are some professional scientists who would say lots of the people who meet my criteria, even those who would describe themselves as professional scientists, don’t really count as members of the tribe of science.

There’s not one obvious way to draw the lines here. The world is frequently messy that way.

That said, at least some of the arguments that claim scientists have special duties make particular assumptions about scientific training. These assumptions point to a source of the putative special duties.

But maybe that just means we should be examining claims about people-whose-training-puts-them-into-a-particular-relationship-with-society having special duties, whether or not those people are all scientists, and whether or not all scientists have had training that falls into that category.

Another issue here is getting to the bottom of what it means to have an obligation.

Some obligations we have may be spelled out in writing, explicitly agreed to, with the force of law behind them, but many of our obligations are not. Many flow not from written contracts but from relationships — whether our relationships with individuals, or with professional communities, or with other sorts of communities of various sizes.

Because they flow from relationships, it’s not unreasonable to expect that when we have obligations, the persons, communities, or other entities to whom we have obligations will have some corresponding obligations to us. However, this doesn’t guarantee that the obligations on each side will be perfectly symmetrical in strength or in kind. When my kids were little, my obligations to them were significantly larger than their obligations to me. Further, as our relationships change, so will our obligations. I owe my kids different things now than I did when they were toddlers. I owe my parents different things now than I did when I was a minor living under their roof.

It’s also important to notice that obligations are not like physical laws: having an obligation is no guarantee that one will live up to it and accordingly display a certain kind of behavior. Among other things, this means that how people act is not a perfectly reliable guide to how they ought to act. It also means that someone else’s failure to live up to her obligations to me does not automatically switch off my obligations to her. In some cases it might, but there are other cases where the nature of the relationship means my obligations are still in force. (For example, if my teenage kid falls down on her obligation to treat me with minimal respect, I still have a duty to feed and shelter her.)

That obligations are not like physical laws means there’s likely to be more disagreement around what we’re actually obliged to do. Indeed, some are likely to reject putative obligations out of hand because they are socially constructed. Here, I don’t think we need to appeal to a moral realist to locate objective moral facts that could ground our obligations. I’m happy to bite the bullet. Socially constructed obligations aren’t a problem because they emerge from the social processes that are an inescapable part of sharing a world — including with people who are not exactly like ourselves. These obligations flow from our understandings of the relationships we bear to one another, and they are no less “real” for being socially constructed than are bridges.

One more bit of background to ponder: The questions I posed asked whether scientists and non-scientists have any special duties or obligations to each other. A number of respondents (mostly on the scientist side of the line, as I defined it) suggested that scientists’ duties are not special, but simply duties of the same sort everyone in society has (with perhaps some differences in the fine details).

The main arguments for scientists having special duties tend to turn on scientists being in possession of special powers. This is the scientist as Spider-Man: with great power comes great responsibility. But whether the scientist has special powers may be the kind of thing that looks very different on opposite sides of the scientist-non-scientist divide; the scientists responding to my questions don’t seem to see themselves as very different from other members of society. Moreover, nearly every superhero canon provides ample evidence that power, and the responsibility that accompanies it, can feel like a burden. (One need look no further than seasons 6 and 7 of Buffy the Vampire Slayer to wonder if taking a break from her duty to slay vamps would have made Buffy a more pleasant person with whom to share a world.)

Arguably, scientists can do some things the rest of us can’t. How does that affect the relationship between scientists and non-scientists? What kind of duties could flow from that relationship? These powers, and the corresponding responsibilities, will be the focus of the next post.

______
Posts in this series:

Questions for the non-scientists in the audience.

Questions for the scientists in the audience.

What do we owe you, and who’s “we” anyway? Obligations of scientists (part 1)

Scientists’ powers and ways they shouldn’t use them: Obligations of scientists (part 2)

Don’t be evil: Obligations of scientists (part 3)

How plagiarism hurts knowledge-building: Obligations of scientists (part 4)

What scientists ought to do for non-scientists, and why: Obligations of scientists (part 5)

What do I owe society for my scientific training? Obligations of scientists (part 6)

Are you saying I can’t go home until we cure cancer? Obligations of scientists (part 7)

Credibility, bias, and the perils of having too much fun.

If you’re a regular reader of this blog (or, you know, attentive at all to the world around you), you will have noticed that scientific knowledge is built by human beings, creatures that, even on the job, resemble other humans more closely than they do Mr. Spock or his Vulcan conspecifics. When an experiment yields really informative results, most human scientists don’t coolly raise an eyebrow and murmur “Fascinating.” Instead, you’re likely to see a reaction somewhere on the continuum from big smiles to shouts of delight to a full-on end zone happy-dance. You can observe human scientists displaying similar emotional responses in other kinds of scientific situations, too — say, for example, when they find the fatal flaw in a competitor’s conclusion or experimental strategy.

Many scientists enjoy doing science. (If this weren’t so, the rest of us would have to feel pretty bad for making them do such thankless work to build knowledge that we’re not willing or able to build ourselves but from which we benefit nonetheless.) At least some scientists are enjoying more than just the careful work of forming hypotheses, making observations, comparing outcomes and predictions, and contributing to a more reliable account of the world and its workings. Sometimes the enjoyment comes from playing a particular kind of role in the scientific conversation.

Some scientists delight in the role of advancer or supporter of the new piece of knowledge that will change how we understand our world in some fundamental way. Other scientists delight in the role of curmudgeon, shooting down overly-bold claims. Some scientists relish being contrarians. Others find comfort in being upholders of consensus.

In light of this, we should probably consider whether having one of these human predilections like enjoying being a contrarian (or a consensus-supporter, for that matter) is a potential source of bias against which scientists should guard.

The basic problem is nothing new: what we observe, and how we interpret what we observe, can be influenced by what we expect to see — and, sometimes, by what we want to see. Obviously, scientists don’t always see what they want to see, else people’s grad school lab experiences would be deliriously happy rather than soul-crushingly frustrating. But sometimes what there is to see is ambiguous, and the person making the observation has to make a call. And frequently, with a finite set of data, there are multiple conclusions — not all of them compatible with each other — that can be drawn.

These are moments when our expectations and our ‘druthers might creep in as the tie-breaker.

At the scale of the larger community of science and the body of knowledge it produces, this may not be such a big deal. (As we’ve noted before, objectivity requires teamwork). Given a sufficiently diverse scientific community, there will be loads of other scientists who are likely to have different expectations and ‘druthers. In trying to take someone else’s result and use it to build more knowledge, the thought is that something like replication of the earlier result happens, and biases that may have colored the earlier result will be identified and corrected. (Especially since scientists are in competition for scarce goods like jobs, grants, and Nobel Prizes, you might start with the assumption that there’s no reason not to identify problems with the existing knowledge base. Of course, actual conditions on the ground for scientists can make things more complicated.)

But even given the rigorous assessment she can expect from the larger scientific community, each scientist would also like, individually, to be as unbiased as possible. One of the advantages of engaging with lots of other scientists, with different biases than your own, is you get better at noticing your own biases and keeping them on a shorter leash — putting you in a better place to make objective knowledge.

So, what if you discover that you take a lot of pleasure in being a naysayer or contrarian? Is coming to such self-awareness the kind of thing that should make you extra careful in coming to contrarian conclusions about the data? If you actually come to the awareness that you dig being a contrarian, does it put you in a better position to take corrective action than you would if you enjoyed being a contrarian but didn’t realize that being contrarian was what was bringing you the enjoyment?

(That’s right, a philosopher of science just made something like an argument that scientists might benefit — as scientists, not just as human beings — from self-reflection. Go figure.)

What kind of corrective action do I have in mind for scientists who discover that they may have a tilt, whether towards contrarianism or consensus-supporting? I’m thinking of a kind of scientific buddy-system, for example matching scientists with contrarian leanings to scientists who are made happier by consensus-supporting. Such a pairing would be useful for each scientist in the pair as far as vetting their evidence and conclusions: Here’s the scientist you have to convince! Here’s the colleague whose objections you need to understand and engage with before this goes any further!

After all, one of the things serious scientists are after is a good grip on how things actually are. An explanation that a scientist with different default assumptions than yours can’t easily dismiss is an explanation worth taking seriously. If, on the other hand, your “buddy” can dismiss your explanation, it would be good to know why so you can address its weaknesses (or even, if it is warranted, change your conclusions).

Such a buddy-system would probably only be workable with scientists who are serious about intellectual honesty and getting knowledge that is as objective as possible. Among other things, this means you wouldn’t want to be paired with a scientist for whom having an open mind would be at odds with the conditions of his employment.

_____
An ancestor version of this post was published on my other blog.

“There comes a time when you have to run out of patience.”

In this post, I’m sharing an excellent short film called “A Chemical Imbalance,” which includes a number of brief interviews with chemists (most of them women, most at the University of Edinburgh) about the current situation for women in chemistry (and science, technology, engineering, and mathematics, or STEM, more generally) in the UK. Here’s the film:


A Chemical Imbalance
(I’m including my transcription of the film below.)

Some of the things I really appreciate about this film:

  • We get personal impressions, from women of different generations, about what it’s been like for them to be in chemistry in the UK.
  • We get numbers to quantify the gender disparity in academic chemistry in the UK, as well as to identify where in the career pipeline the disparity becomes worse. We also get numbers about how women chemists are paid relative to their male counterparts, and about relative rates of tenure that can’t be blamed on choices about childbearing and/or childrearing. There’s not just the perception of gender disparities in academic chemistry — the numbers demonstrate that the disparities are real.
  • Lurking beneath the surface is a conversation the interviewees might have had (but didn’t in the final cut) about what they count as compromises with respect to parenting and with respect to careers. My sense is that they would not all agree, and that they might not be as accepting of their colleagues’ alternative ways of striking a balance as we might hope.
  • Interviewees in the film also discuss research on unconscious gender bias, which provides a possible causal mechanism for the disparities other than people consciously discriminating against women. If people aren’t consciously discriminating, our intuition is that people aren’t culpable (because they can’t help what their unconscious is up to). However, whether due to conscious choices or unconscious bias, the effects are demonstrably real, which raises the question: what do we do about it?
  • The interviewees seem pretty hesitant about “positive discrimination” in favor of women as a good way to address the gender disparity — one said she wouldn’t want to think she got her career achievements because she’s a woman, rather than because she’s very good at what she does. And yet, they seem to realize that we may have to do something beyond hoping that people’s individual evaluations become less biased. The bias is there (to the extent that, unconsciously, males are being judged as better because they’re men). It’s a systemic problem. How can we put the burden on individuals to somehow magically overcome systemic problems?
  • From very smart women who have been describing inequalities and voicing the importance of making things in STEM more equitable, we see a range of opinions about whether they’d describe themselves as feminists. (One of them says, near the end, that if people don’t like the word, we need to find another one so we don’t get sidetracked from actually pursuing equality.)
  • We see a sense of urgency. Despite how much has gotten better, there are plenty of elements that still need to improve. The interviewees give the impression that we ought to be able to find effective ways to address the systemic problems, if only we can find the will to do so within the scientific community.

How important is it to find more effective ways to address gender disparities in STEM? The statistic in the film that hit me hardest is that, at our present rate of progress, it will take another 70 years to achieve gender parity. I don’t have that kind of time, and I don’t think my daughters ought to wait that long, either. To quote Prof. Lesley Yellowlees,

I’ve often heard myself say we have to be patient, but there comes a time when you have to run out of patience, because if we don’t run out of patience and we don’t start demanding more from the system, demanding that culture change to happen faster than it’s happening at present, then I think we not only do ourselves a disservice, but we do the generations both past and the ones to come a huge disservice as well.

It’s been a long time since I’ve seen 13 minutes packed so effectively with so much to think about.

* * * * *
Transcript of “A Chemical Imbalance”:

Dr. Perdita Barran, Reader in Biophysical Chemistry, University of Edinburgh: I’m not sure why it is Edinburgh has such a high number of female faculty, and indeed, female postdoctoral researchers and female research fellows. One of the greatest things about this department is that, because there is such a high proportion of female faculty — it ranges between 20 and even up to 30 percent at times — gender becomes less important and we are less prone to gender bias, because you don’t need to do it. You just think of scientists as scientists; you don’t think of them in terms of their gender.

Prof. Eleanor Campbell FRSC FRS, Professor of Physical Chemistry, Head of School of Chemistry, University of Edinburgh: It’s very difficult to put your finger on it, but I do feel a different atmosphere in a place where you have a significant percentage of women. That’s not to say that women can’t be confrontational and egoistical, of course they can. But on the whole, there is a difference in atmosphere.

Text on screen: 1892 Women are finally allowed to attend The University of Edinburgh as undergraduates.

Text on screen: By 1914, over 1000 women hold degrees.

Prof. Steve Chapman FRSE FRSC, Principal & Vice Chancellor, Heriot-Watt University: There’s still not enough representation of women in STEM at all levels, but it gets worse the higher up you go, and when you get to management levels, I think, there is a serious disparity.

Prof. Eleanor Campbell: Yeah, the leaky pipeline is a sort of worrying tendency to lose women at various stages on the career path. [Graph on the screen about “Women in STEM, UK average”.] Here we [discussing the chemistry line on the graph] have roughly 50-50 in terms of male/female numbers at the undergraduate level. It [the proportion of women] drops a little bit at postgraduate level, and then it dives going to postdocs and onward, and that is extremely worrying. We’re losing a lot of very, very talented people.

Text on screen: Women in STEM, UK average
Undergraduate 33%
Professor 9%
(2011 UKRC & HESA)

Dr. Elaine Murray MSP, Shadow Minister for Housing & Transport, Scottish Parliament: I feel that I did — 25 years ago I made the choice between remaining in science and my family. You know, 52% of women who’ve been trained in STEM come out of it. I’m one of them.

Prof. Anita Jones, Professor of Molecular Photophysics, University of Edinburgh: On the whole, women still do take more responsibility for looking after children and so on. But again, I think there are things that can be put in place, improved child care facilities and so on, that can help with that, and can help to achieve an acceptable compromise between the two.

Dr. Marjorie Harding, Honorary Fellow, University of Edinburgh: The division of responsibilities between husband and wife has changed a lot over the years. When I first had children, it was quite clear that it was my responsibility to cope with the home, everything that was happening there, and the children’s things, and not to expect him to have time available for that sort of thing.

Dr. Carole Morrison, Senior Lecturer in Structural Chemistry, University of Edinburgh: When the children were small, because I was working part time, I felt that I was incredibly fortunate. I was able to witness all of their little milestones. But it’s meant that my career has progressed much slower than it would have done otherwise. But, you know, life is all about compromises. I wasn’t prepared to compromise on raising my children.

Dr. Alison Hulme, Senior Lecturer in Organic Chemistry, University of Edinburgh: I don’t go out of my way to let people know that I only work at 80%, for the very fact that I don’t want them to view me as any less serious about my intentions in research.

Dr. Perdita Barran: I really understood feminism when I had children and also wanted to work. Then it really hits you how hard it is actually to be a female in science.

Text on screen: 1928 Dr. Christina Miller produces the first ever sample of pure phosphorus trioxide.
In the same year British women achieve suffrage.

Text on screen: 1949 Dr. Miller becomes the first female chemist elected to The Royal Society of Edinburgh.

Prof. Steve Chapman: Do I consider myself to be a feminist?

Prof. Anita Jones: Well, that’s an interesting question.

Dr. Perdita Barran: Uh, yeah!

Dr. Marjorie Harding: No.

Dr. Carole Morrison: No, definitely not.

Prof. Eleanor Campbell: No, I’ve never thought of myself as a feminist.

Dr. Alison Hulme: I think that people don’t want to be labeled with the tag of being a feminist because it has certain connotations associated with it that are not necessarily very positive.

Dr. Elaine Murray: I’m of an age when women were considered to be feminists, you know, most of us in the 1970s. There are battles still to be fought, but I think we had a greater consciousness of the need to define ourselves as feminists, and I would still do so. But, there’s been progress, but I think the young women still need to be aware that there’s a lot to be done. All the battles weren’t won.

Text on screen: 1970 The UK Parliament passes The Equal Pay Act.
Over 40 years later, women still earn on average 14.9% less than their male counterparts, and they get promoted less.

Prof. Polly Arnold FRSE FRSC, Crum Brown Chair of Chemistry, University of Edinburgh: The Yale study on subconscious bias was a real shocker. I realized that it was an American study, so the subjects were all American, but I don’t feel that it’s necessarily any different in the UK.

Prof. Steve Chapman: It was a very simple study, but a very telling study. They sent out CVs to people in North American institutions and the only difference in the CV was the name at the top — a male name or a female name. The contents of the CVs were identical. And when the people were asked to comment on the CVs, there was something like a 40% preference for the CV if it had a male name associated with it. Now those people I don’t think were actively trying to discriminate against women, but they were, and they were doing it subconsciously. It scared me, because of course I would go around saying, ‘I’m not prejudiced at all,’ but I read that and I thought, if I saw those CVs, would I react differently?

Dr. Janet Lovett, Royal Society University Research Fellow, University of Edinburgh: You hear the kind of results from the Yale study and unfortunately you’re not that surprised by them. And I think … I think it’s hard to explain why you’re not that surprised by them. There is an endemic sexism to most high-powered careers, I would say.

Prof. Polly Arnold: When I was a junior academic in a previous job, I was given the opportunity to go on a course to help women get promoted. The senior management at the university had looked at the data, and they’d realized that the female academics were winning lots of international prizes, being very successful internationally, but they weren’t getting promoted internally, so what we needed was a course to help us do this. And to this day, I still don’t understand how they didn’t realize that it was them that needed the course.

Dr. Elaine Murray: I think a lot of it isn’t really about legislation or regulation, it’s actually about cultural change, which is more difficult to effect. And, you know, the recognition that this is part of an equality agenda, really, that we need to have that conversation which is not just about individuals, it’s about the experience of women in general.

Text on screen: Women without children are still 23% less likely to achieve tenure than men with children.

Prof. Anita Jones: I’m not really in favor of positive discrimination. I don’t think, as a woman, I would have wanted to feel that I got a job, or a fellowship, or a grant, or whatever, because I was a woman rather than because I was very good at what I do.

Prof. Steve Chapman: I think we have to be careful. I was looking at the ratio of women in some of the things that we’re doing in my own institution, and accidentally you can heavily dominate things with males without actually thinking about it. Does that mean we have to have quotas for women? No. But does it mean we have to be pro-active in making sure we’re bringing it to the attention of women that they should be involved, and that they add value? Yes.

Dr. Elaine Murray: I was always an advocate of positive discrimination in politics, in order to address the issue of the underrepresentation of women. Now, a lot of younger women don’t see that as important, and yet if you present them with some of the issues that women face to get on, they do realize things aren’t quite as easy.

Text on screen: 2012 The School of Chemistry receives the Athena Swan Gold Award, recognising a significant progression and achievement in promoting gender equality.

Prof. Steve Chapman: We shouldn’t underestimate the signal that Athena Gold sends out. It sends out the message that this school is committed to the Athena Agenda, which isn’t actually just about women. It’s about creating an environment in which all people can thrive.

Prof. Eleanor Campbell: I think it is extremely important that the men in the department have a similar view when it comes to supporting young academics, graduate students, postdocs, regardless of their gender. I think that’s extremely important. And, I mean, certainly here, our champion for our Athena Swan activities is a male, and I deliberately wanted to have a younger male doing that job, to make it clear that it wasn’t just about women, that it was about really improving conditions for everybody.

Dr. Elaine Murray: I know, for example, in the Scottish government, equalities is somehow lumped in with health, but it’s not. You know, health is such a big portfolio that equalities is going to get pretty much lost in the end, and I think probably there’s a need for equalities issues to take a higher profile at a governmental level. And I think also it’s still about challenging the media, about the sort of stereotypes which surround women more generally, and still in science.

Text on screen: 2012 Prof. Lesley Yellowlees becomes the first female President of The Royal Society of Chemistry.

Prof. Lesley Yellowlees MBE FRSE FRSC, Professor of Inorganic Electrochemistry, Vice Principal & Head of the College of Science & Engineering, University of Edinburgh, President of The Royal Society of Chemistry: I’ve often heard myself say we have to be patient, but there comes a time when you have to run out of patience, because if we don’t run out of patience and we don’t start demanding more from the system, demanding that culture change to happen faster than it’s happening at present, then I think we not only do ourselves a disservice, but we do the generations both past and the ones to come a huge disservice as well.

Text on screen: At our current rate of progress it will take 70 years before we achieve parity between the sexes.

Prof. Polly Arnold: If we’re unwilling to define ourselves as feminists, we need to replace the word with something more palatable. The concept of equality is no less relevant today.

Professional communities, barriers to inclusion, and the value of a posse.

Last week, I wrote a post about an incident connected to a professional conference. A male conference-goer wrote a column attempting to offer praise for a panel featuring four female conference-goers but managed to package this praise in a way that reinforced sexist assumptions about the value women colleagues add to a professional community.

The women panelists communicated directly with the male commentator about his problematic framing. The male commentator seemed receptive to this feedback. I blogged about it as an example of why it’s important to respond to disrespect within professional communities, even if it’s not intended as disrespect, and despite the natural inclination to let it go. And my post was praised for offering a discussion of the issue that was calm, sensitive, and measured.

But honestly? I’m unconvinced that my calm, sensitive, measured discussion will do one whit of good to reduce the incidence of such casual sexism in the future, in the community of science journalists or in any other professional community. Perhaps there were some readers who, owing to the gentle tone, were willing to examine the impact of describing colleagues who are women primarily in terms of their looks, but if a less gentle tone would have put them off from considering the potential for harm to members of their professional communities, it’s hard to believe these readers would devote much energy to combatting these harms — whether or not they were being asked nicely to do so.

Sometimes someone has to really get your attention — in a way that shakes you up and makes you deeply uncomfortable — in order for you to pay attention going forward. Maybe feeling bad about the harm to someone else is a necessary first step to developing empathy.

And certainly, laying out the problem while protecting you from what it feels like to be one of the people struggling under the effects of that problem takes some effort. If going to all that trouble doesn’t actually leave enough of an impression to keep the problem from happening some more, what’s the point?

* * * * *

What does it take to create a diverse professional community? It requires more than an absence of explicit rules or standing practices that bar certain kinds of people from membership, more even than admitting lots of different kinds of people into the “pipeline” for that profession. If you’re in the community by virtue of your educational or employment status but you’re not actually part of the discussions that define your professional community, that may help the appearance of diversity, but not the reality of it.

The chilly climate women have been talking about in a variety of male-dominated professional communities is a real thing.

Being a real member of a professional community includes being able to participate fully in venues for getting your work and insights into the community’s discussions. These venues include journals and professional meetings, as well as panels or study sections that evaluate grant proposals. Early in one’s membership in a professional community, venues like graduate seminars and department symposia are also really important.

One problem here is that usually individuals without meaningful access to participation are also without the power in the community required to effectively address particular barriers to their access. Such individuals can point out the barriers, but they are less likely to be listened to than someone else in the community without those barriers.

Everyday sexism is just one such barrier.

This barrier can take a number of particular forms.

For the students on their way into a professional community, it’s a barrier to find out that senior members of the community who you expected would help train you and eventually take you seriously as a colleague are more inclined to sexualize you or full-on sexually harass you. It’s a barrier when you see people in your community minimize that behavior, whether offhandedly or with rather more deliberation.

It’s a barrier when members of your community focus on your looks rather than your intellectual contributions, or act like it’s cute or somehow surprising that someone like you could actually make an intellectual contribution. It’s a further barrier when other members of your community advise you to ignore tangible disrespect because surely it wasn’t intentional — especially when those other members of the community make no visible effort to help address the disrespect.

It’s a barrier when students don’t see people like themselves represented among the recognized knowledge-builders in the professional community as they are being taught the core knowledge expected of members of that community. It’s also a barrier when the more senior members of the professional community are subject to implicit biases in their expert evaluations of who’s cut out to be a full contributing member of the community.

Plenty of well-meaning folks in professional communities that have a hard time fully integrating women (among others) may be puzzled as to why this is so. If they don’t personally experience the barriers, they may not even realize that they’re there. Listening to lived experiences of their female colleagues might reveal some of the barriers — but listening also assumes that the community really takes its female members seriously as part of the community, when this is precisely the problem with which the women in the community are struggling.

* * * * *

Professional meetings can be challenging terrain for women in predominantly male professional communities. Such meetings are essential venues in which to present one’s work and get career credit for doing so. They are also crucially important for networking and building relationships with people who might become collaborators, who will be called on to evaluate one’s work, and who are the peers with whom one hopes to be engaged in productive discussions over the course of one’s career.

There is also a strong social component to these meetings, an imperative to have fun with one’s people — which is to say, in this context, the people with whom one shares a professional community. Part of this, I think, is related to how strongly people identify with their professional community: the connection is not just about what people in that community do but about who they are. They have taken on the values and goals of the professional community as their own. It’s not just a job, it’s a social identity.

For some people, the social component of professional meetings has a decidedly carnal flavor. Unfortunately, rejecting a pass from someone in your professional community, especially someone with more power in that community than you, can screw with your professional relationships within the community — even assuming that the person who made the pass accepts your “no” and moves on. In other cases, folks within the professional community may be perfectly aware of power gradients and willing to use them to get what they want, applying persistent unwanted attention that can essentially deprive the target of full participation in the conference. Given the importance professional conferences have, this is a significant professional harm.

Lest you imagine that this is a merely hypothetical worry, I assure you that it is not. If you ask around you may discover that some of the members of your professional community choose which conference sessions to attend in order to avoid their harassers. That is surely a constraint on how much one can get out of a professional meeting.

Recently a number of conferences and conventions have adopted policies against harassment, policies that are getting some use. Many of these are fan-oriented conventions or tech conferences, rather than the kind of research-oriented, academically inclined professional meetings most of us university types attend. I know of at least one scientific professional society (the American Astronomical Society) that has adopted a harassment policy for its meetings and that seems generally to be moving in a good direction from the point of view of building an inclusive community. However, when I checked the websites of three professional societies to which I belong (American Chemical Society, American Philosophical Association, and Philosophy of Science Association), I could find no sign of anti-harassment policies for their conferences. This is disappointing, but not surprising to me.

The absence of anti-harassment policies doesn’t mean that there’s no harassment happening at the meetings of these professional societies, either.

And even if a professional community has anti-harassment policies in place for its meetings, this doesn’t remove the costs — especially on a relatively junior member of the community — associated with asking that the policies be enforced. Will a professional society be willing to caution a member of the program committee for the conference? To eject the most favored grad student of a luminary in the field — or, for that matter, a luminary — who violates the policy? Shining light on over-the-line behavior at conferences is a species of whistleblowing, and is likely to be received about as warmly as other forms.

* * * * *

Despite the challenges, I don’t think the prospects for building diverse and productive professional communities are dim. Progress is being made, even if most weeks the pace of progress is agonizingly slow.

But I think things could get better faster if people who take their professional communities for granted step up and become more active in maintaining them.

In much the same way that it is not science that is self-correcting but rather individual scientists who bother to engage critically with particular contributions to the ongoing scientific conversation and keep the community honest, a healthy professional community doesn’t take care of itself — at least, not without effort on the part of individual members of the community.

Professional communities require everyday maintenance. They require tending to keep their collective actions aligned with the values members of the community say they share.

People who work very hard to be part of a professional community despite systemic barriers are people committed enough to the values of the professional community to fight their way through a lot of crap. These are people who really care about the values you purport to care about as a member of the professional community, else why would they waste their time and effort fighting through the crap?

These are the kind of people you should want as colleagues, at least if you value what you say you value. Their contributions could be huge in accomplishing your community’s shared goals and ensuring your community a vibrant future.

Even more than policies that aim to address systemic barriers to their entry into the professional community, these people need a posse. They need others in the community who are unwilling to sacrifice their values, or the well-being of less powerful people who share those values, and who will take consistent stands against behaviors that create barriers and that undermine the shared work of the community.

These stands needn’t be huge heroic gestures. It could be as simple as reliably being that guy who asks for better gender balance in planning seminars, or who reacts to casual sexist banter with, “Dude, not cool!” It could take the form of asking about policies that might lessen barriers, and taking on some of the work involved in creating or implementing them.

It could be listening to your women colleagues when they describe what it has been like for them within your professional community and assuming the default position of believing them, rather than looking for possible ways they must have misunderstood their own experiences.

If you care about your professional community, in other words, the barriers to entry in the way of people who want badly to be part of that community because they believe fiercely in its values are your problem, too. Acting like it, and doing your part to address these barriers, is sharing the regular maintenance of the professional community you count on.

_____________
While this post is focused on barriers to full participation in professional communities that flow from gender bias, there are plenty of other types of bias that throw up similar barriers, and that could benefit from similar types of response from members of the professional communities not directly targeted by these biases.