Twenty-five years later.

Twenty-five years ago today, on December 6, 1989, in Montreal, fourteen women were murdered for being women in what their murderer perceived to be a space that rightly belonged to men:

Geneviève Bergeron (born 1968), civil engineering student

Hélène Colgan (born 1966), mechanical engineering student

Nathalie Croteau (born 1966), mechanical engineering student

Barbara Daigneault (born 1967), mechanical engineering student

Anne-Marie Edward (born 1968), chemical engineering student

Maud Haviernick (born 1960), materials engineering student

Maryse Laganière (born 1964), budget clerk in the École Polytechnique’s finance department

Maryse Leclair (born 1966), materials engineering student

Anne-Marie Lemay (born 1967), mechanical engineering student

Sonia Pelletier (born 1961), mechanical engineering student

Michèle Richard (born 1968), materials engineering student

Annie St-Arneault (born 1966), mechanical engineering student

Annie Turcotte (born 1969), materials engineering student

Barbara Klucznik-Widajewicz (born 1958), nursing student

They were murdered because their killer was disgruntled that he had been denied admission to the École Polytechnique, the site of the massacre, and because he blamed women occupying positions that were traditionally occupied by men for this disappointment, among others. When their killer entered the engineering classroom where the killing began, he first told the men to leave the room, because his goal was to kill the women. In their killer’s pocket, discovered after his death, was a list of more women he had planned to kill, if only he had the time.

Shelley Page was a 24-year-old reporter who was sent to cover the Montreal massacre for The Toronto Star. On this, the 25th anniversary of the event, she writes:

I fear I sanitized the event of its feminist anger and then infantilized and diminished the victims, turning them from elite engineering students who’d fought for a place among men into teddy-bear loving daughters, sisters and girlfriends.

Twenty-five years later, as I re-evaluate my stories and with the benefit of analysis of the coverage that massacre spawned, I see how journalists — male and female producers, news directors, reporters, anchors — subtly changed the meaning of the tragedy to one that the public would get behind, silencing so-called “angry feminists.”

Twenty-five years ago, I was a 21-year-old finishing my first term in a chemistry Ph.D. program. I was studying for final exams and the qualifying exams that would be held in January, so I was not following the news about much of anything outside my bubble of graduate school. When I did hear about the Montreal massacre, it was a punch in the gut.

It was enough already to fight against the subtle and not-so-subtle doubt (from the faculty in our programs, from our classmates, even from our students) that women were cut out for science or engineering. Now it was clear that there existed people committed enough to science and engineering being male domains that they might kill us to enforce that.

The murders were political. They did not target particular individual women in the forbidden domain of engineering on the basis of particular personal grievances. Being a member of the hated group in the social space the murderer thought should be for men only was enough.

But the murders also ended the lives of fourteen particular individual women, women who were daughters and sisters and friends and girlfriends.

The tragedies were deeply personal for the survivors of the fourteen women who were murdered. They were also personal for those of us who understood (even if we couldn’t articulate it) that we occupied the same kinds of social positions, and struggled with the same barriers to access and inclusion, as these fourteen murdered women had. They made us sad, and scared, and angry.

The personal is political. The challenge is in seeing how we are connected, the structures underlying what frequently feel to us like intensely individual experiences.

I’m inclined to think it’s a mistake to look for the meaning of the Montreal massacre. There are many interconnected meanings to find here.

That individual teddy bear-loving girls and women can still be formidable scientists and engineers.

That breaking down barriers to inclusion can come at a cost to oneself as an individual (which can make it harder for others who have gotten into those male preserves to feel like it’s OK for them to leave before the barriers are completely dismantled).

That some are still dedicated to maintaining those barriers to inclusion, and where that dedication will end — with words, or threats, or violent acts — is impossible to tell just by looking at the gatekeeper.

Because they were murdered 25 years ago today, we will never know what contributions these fourteen women might have made — what projects they might have guided, what problems they might have solved, the impact they might have made as mentors or role models, as teachers, as colleagues, as friends, as lovers, as parents, as engaged citizens.

In their memory, we ought to make sure other women are free to find out what they can contribute without having to waste their energy taking down barriers and without having to fear for their lives.

James Watson’s sense of entitlement, and misunderstandings of science that need to be countered.

James Watson, who shared a Nobel Prize in 1962 for discovering the double helix structure of DNA, is in the news, offering his Nobel Prize medal at auction. As reported by the Telegraph:

Mr Watson, who shared the 1962 Nobel Prize for uncovering the double helix structure of DNA, sparked an outcry in 2007 when he suggested that people of African descent were inherently less intelligent than white people.

If the medal is sold Mr Watson said he would use some of the proceeds to make donations to the “institutions that have looked after me”, such as University of Chicago, where he was awarded his undergraduate degree, and Clare College, Cambridge.

Mr Watson said his income had plummeted following his controversial remarks in 2007, which forced him to retire from the Cold Spring Harbor Laboratory on Long Island, New York. He still holds the position of chancellor emeritus there.

“Because I was an ‘unperson’ I was fired from the boards of companies, so I have no income, apart from my academic income,” he said.

He would also use some of the proceeds to buy an artwork, he said. “I really would love to own a [painting by David] Hockney”. …

Mr Watson said he hoped the publicity surrounding the sale of the medal would provide an opportunity for him to “re-enter public life”. Since the furore in 2007 he has not delivered any public lectures.

There’s a lot I could say here about James Watson, the assumptions under which he is laboring, and the potential impacts on science and the public’s engagement with it. In fact, I have said much of it before, although not always in reference to James Watson in particular. However, given the likelihood that we’ll keep hearing the same unhelpful responses to James Watson and his ilk if we don’t grapple with some of the fundamental misunderstandings of science at work here, it’s worth covering this ground again.

First, I’ll start with some of the claims I see Watson making around his decision to auction his Nobel Prize medal:

  • He needs money, given that he has “no income beyond [his] academic income”. One might take this as an indication that academic salaries in general ought to be raised (although I’m willing to bet a few bucks that Watson’s inadequate academic income is at least as much as that of the average academic actively engaged in research and/or teaching in the U.S. today). However, Watson gives no sign of calling for such an across-the-board increase, since…
  • He connects his lack of income to being fired from boards of companies and to his inability to book public speaking engagements after his 2007 remarks on race.
  • He equates this removal from boards and lack of invitations to speak with being an “unperson”.

What comes across to me here is that James Watson sees himself as special, as entitled to seats on boards and speaker invitations. On what basis, we might ask, is he entitled to these perks, especially in the face of a scientific community just brimming with talented members currently working at the cutting edge(s) of scientific knowledge-building? It is worth noting that some who attended recent talks by Watson judged them to be nothing special.

Possibly, then, speaking engagements may have dried up at least partly because James Watson was not such an engaging speaker — with an asking price of $50,000 for a paid speaking engagement, whether you give a good talk is a relevant criterion — rather than being driven entirely by his remarks on race in 2007, or before 2007. However, Watson seems sure that these remarks are the proximate cause of his lack of invitations to give public talks since 2007. And, he finds this result not to be in accord with what a scientist like himself deserves.

Positioning James Watson as a very special scientist who deserves special treatment above and beyond the recognition of the Nobel committee feeds the problematic narrative of scientific knowledge as an achievement of great men (and yes, in this narrative, it is usually great men who are recognized). This narrative ignores the fundamentally social nature of scientific knowledge-building and the fact that objectivity is the result of teamwork.

Of course, it’s even more galling to have James Watson portrayed (including by himself) as an exceptional hero of science rather than as part of a knowledge-building community given the role of Rosalind Franklin’s work in determining the structure of DNA — and given Watson’s apparent contempt for Franklin, rather than regard for her as a member of the knowledge-building team, in The Double Helix.

Indeed, part of the danger of the hero narrative is that scientists themselves may start to believe it. They can come to see themselves as individuals possessing more powers of objectivity than other humans (thus fundamentally misunderstanding where objectivity comes from), with privileged access to truth, with insights that don’t need to be rigorously tested or supported with empirical evidence. (Watson’s 2007 claims about race fit in this territory.)

Scientists making authoritative claims beyond what science can support is a bigger problem. To the extent that the public also buys into the hero narrative of science, that public is likely to take what Nobel Prize winners say as authoritative, even in the absence of good empirical evidence. Here Watson keeps company with William Shockley and his claims on race, Kary Mullis and his claims on HIV, and Linus Pauling and his advocacy of mega-doses of vitamin C. Some may argue that non-scientists need to be more careful consumers of scientific claims, but it would surely help if scientists themselves would recognize the limits of their own expertise and refrain from overselling either their claims or their individual knowledge-building power.

Where Watson’s claims about race are concerned, the harm of positioning him as an exceptional scientist goes further than reinforcing a common misunderstanding of where scientific knowledge comes from. These views, asserted authoritatively by a Nobel Prize winner, give cover to people who want to believe that their racist views are justified by scientific knowledge.

As well, as I have argued before (in regard to Richard Feynman and sexism), the hero narrative can be harmful to the goal of scientific outreach given the fact that human scientists usually have some problematic features and that these problematic features are often ignored, minimized, or even justified (e.g., as “a product of the time”) in order to foreground the hero’s great achievement and sell the science. There seems to be no shortage of folks willing to label Watson’s racist views as unfortunate but also as something that should not overshadow his discovery of the structure of DNA. In order that the unfortunate views not overshadow the big scientific contribution, some of these folks would rather we stop talking about Watson’s having made the claims he has made about racial difference (although Watson shows no apparent regret for holding these views, only for having voiced them to reporters).

However, especially for people in the groups that James Watson has claimed are genetically inferior, asserting that Watson’s massive scientific achievement trumps his problematic claims about race can be alienating. His scientific achievement doesn’t magically remove the malign effects of the statements he has made from a very large soapbox, using his authority as a Nobel Prize winning scientist. Ignoring those malign effects, or urging people to ignore them because of the scientific achievement which gave him that big soapbox, sounds an awful lot like saying that including the whole James Watson package in science is more important than including black people as scientific practitioners or science fans.

The hero narrative gives James Watson’s claims more power than they deserve. The hero narrative also makes urgent the need to deem James Watson’s “foibles” forgivable so we can appreciate his contribution to knowledge. None of this is helpful to the practice of science. None of it helps non-scientists engage more responsibly with scientific claims or scientific practitioners.

Holding James Watson to account for his claims, holding him responsible for scientific standards of evidence, doesn’t render him an unperson. Indeed, it amounts to treating him as a person engaged in the scientific knowledge-building project, as well as a person sharing a world with the rest of us.

* * * * *
Michael Hendricks offers a more concise argument against the hero narrative in science.

And, if you’re not up on the role of Rosalind Franklin in the discovery of the structure of DNA, these seventh graders can get you started:

A guide for science guys trying to understand the fuss about that shirt.

This is a companion to the last post, focused more specifically on the question of how men in science who don’t really get what the fuss over Rosetta mission Project Scientist Matt Taylor’s shirt was about could get a better understanding of the objections — and of why they might care.

(If the story doesn’t embed properly for you, you can read it here.)

The Rosetta mission #shirtstorm was never just about that shirt.

Last week, the European Space Agency’s spacecraft Rosetta put a washing machine-sized lander named Philae on Comet 67P/Churyumov-Gerasimenko.

Landing anything on a comet is a pretty amazing feat, so plenty of scientists and science-fans were glued to their computers watching for reports of the Rosetta mission’s progress. During the course of the interviews streamed to the public (including classrooms), Project Scientist Matt Taylor described the mission as the “sexiest mission there’s ever been”, but not “easy”. And, he conducted on-camera interviews in a colorful shirt patterned with pin-up images of scantily-clad women.

This shirt was noticed, and commented upon, by more than one woman in science and science communication.

To some viewers, Taylor’s shirt just read as a departure from the “boring” buttoned-down image the public might associate with scientists. But to many women scientists and science communicators who commented upon it, the shirt seemed to convey lack of awareness or concern with the experiences of women who have had colleagues, supervisors, teachers, students treat them as less than real scientists, or science students, or science communicators, or science fans. It was jarring given all the subtle and not so subtle ways that some men (not all men) in science have conveyed to us that our primary value lies in being decorative or titillating, not in being capable, creative people with intelligence and skills who can make meaningful contributions to building scientific knowledge or communicating science to a wider audience.

The pin-up images of scantily clad women on the shirt Taylor wore on camera distracted people who were tuned in because they wanted to celebrate Rosetta. It jarred them, reminding them of the ways science can still be a boys’ club.

It was just one scientist, wearing just one shirt, but it was a token of a type that is far too common for many of us to ignore.

There is research on the ways that objectifying messages and images can have a significant negative effect on those in the group being objectified. Objectification, even if it’s unintentional, adds one more barrier (on top of implicit bias, stereotype threat, chilly climate, benevolent sexism, and outright harassment) to women’s participation.

Even if there wasn’t a significant body of research demonstrating that the effects are real, the fact of women who explicitly say that casual use of sexualizing imagery or language in professional contexts makes science less welcoming for them ought to count for more than an untested hunch that it shouldn’t make them feel this way.

And here’s the thing: this is a relatively easy barrier to remove. All it requires is thinking about whether your cheeky shirt, your wall calendar, your joke, is likely to have a negative effect on other people — including on women who are likely to have accumulated lots of indications that they are not welcomed in the scientific community on the same terms.

When Matt Taylor got feedback about the message his shirt was sending to some in his intended audience, he got it, and apologized unreservedly.

But the criticism was never about just one shirt, and what has been happening since Matt Taylor’s apology underlines that this is not a problem that starts and ends with Matt Taylor or with one bad wardrobe choice for the professional task at hand.

Despite Matt Taylor’s apology, legions of people have been asserting that he should not have apologized. They have been insisting that people objecting to his wearing that shirt while representing Rosetta and acting as an ambassador for science were wrong to voice their objections, wrong even to be affected by the shirt.

If only we could not be affected by things simply by choosing not to be affected by them. But that’s not how symbols work.

A critique of this wardrobe choice as one small piece of a scientific culture that makes it harder for women to participate fully brought forth throngs of people (including scientists) responding with a torrent of hostility and, in some cases, threats of harm. This response conveys that women are welcome in science, or science journalism, or the audience for landing a spacecraft on a comet, only as long as they shut up about any of the barriers they might encounter, while men in science should never, ever be made uncomfortable about choices they’ve made that might contribute (even unintentionally) to throwing up such barriers.

That is not a great strategy for demonstrating that science is welcoming to all.

Indeed, it’s a strategy that seems to embed a bunch of assumptions:

  • that it’s worth losing the scientific talent of women who might make the scientific climate uncomfortable for men by describing their experiences and pointing out barriers that are relatively easy to fix;
  • that men who have to be tough enough to test their hypotheses against empirical data and to withstand the rigors of peer review are not tough enough to handle it when women in their professional circle express discomfort;
  • that these men of science are incapable of empathy for others (including women) in their professional circle.

These strike me as bad assumptions. People making them seem to have a worse opinion of men who do science than the women voicing critiques have.

Voicing a critique (and sometimes steps it would be good to take going forward), rather than sighing and regarding the thing you’re critiquing as the cost of doing business, is something you do when you believe the person hearing it would want to know about the problem and address it. It comes from a place of trust — that your male colleagues aren’t trying to exclude you, and so will make little adjustments to stop doing unintentional harm once they know that they’re doing it.

Matt Taylor seemed to understand the critique at least well enough to change his shirt and apologize for the unintentional harm he did. He seems willing to make that small effort to make science welcoming, rather than alienating.

Now we’re just waiting for the rest of the scientific community to join him.

When your cover photo says less about the story and more about who you imagine you’re talking to.

The choice of cover of the most recent issue of Science was not good. This provoked strong reactions and, eventually, an apology from Science‘s editor-in-chief. It’s not the worst apology I’ve seen in recent days, but my reading of it suggests that there’s still a gap between the reactions to the cover and the editorial team’s grasp of those reactions.

So, in the interests of doing what I can to help close that gap, I give you the apology (in block quotes) and my response to it:

From Science Editor-in-Chief Marcia McNutt:

Science has heard from many readers expressing their opinions and concerns with the recent [11 July 2014] cover choice.

The cover showing transgender sex workers in Jakarta was selected after much discussion by a large group

I suppose the fact that the choice of the cover was discussed by many people for a long time (as opposed to by one person with no discussion) is good. But it’s no guarantee of a good choice, as we’ve seen here. It might be useful to tell readers more about what kind of group was involved in making the decision, and what kind of discussion led to the choice of this cover over the other options that were considered.

and was not intended to offend anyone,

Imagine my relief that you did not intend what happened in response to your choice of cover. And, given how predictable the response to your cover was, imagine my estimation of your competence in the science communication arena dropping several notches. How well do you know your audience? Who exactly do you imagine that audience to be? If you’re really not interested in reaching out to people like me, can I get my AAAS dues refunded, please?

but rather to highlight the fact that there are solutions for the AIDS crisis for this forgotten but at-risk group. A few have indicated to me that the cover did exactly that,

For them. For them the cover highlighted transgender sex workers as a risk group who might get needed help from research. So, there was a segment of your audience for whom your choice succeeded, apparently.

but more have indicated the opposite reaction: that the cover was offensive because they did not have the context of the story prior to viewing it, an important piece of information that was available to those choosing the cover.

Please be careful with your causal claims here. Even with the missing context provided, a number of people still find the cover harmful. This explanation of the harm in the context of what the scientific community, and the wider world, can be like for a trans*woman, spells it out pretty eloquently.

The problem, in other words, goes deeper than the picture not effectively conveying your intended context. Instead, the cover communicated layers of context about who you imagine as your audience — and about whose reality is not really on your radar.

The people who are using social media to explain the problems they have with this cover are sharing information about who is in your audience, about what our lives in and with science are like. We are pinging you so we will be on your radar. We are trying to help you.

I am truly sorry for any discomfort that this cover may have caused anyone,

Please do not minimize the harm your choice of cover caused by describing it as “discomfort”. Doing so suggests that you still aren’t recognizing how this isn’t an event happening in a vacuum. That’s a bad way to support AAAS members who are women and to broaden the audience for science.

and promise that we will strive to do much better in the future to be sensitive to all groups and not assume that context and intent will speak for themselves.

What’s your action plan going forward? Is there good reason to think that simply trying hard to do better will get the job done? Or are you committed enough to doing better that you’re ready to revisit your editorial processes, the diversity of your editorial team, the diversity of the people beyond that team whose advice and feedback you seek and take seriously?

I’ll repeat: We are trying to help you. We criticize this cover because we expect more from Science and AAAS. This is why people have been laboring, patiently, to spell out the problems.

Please use those patient explanations and formulate a serious plan to do better.

* * * * *
For this post, I’m not accepting comments. There is plenty of information linked here for people to read and digest, and my sense is this is a topic where thinking hard for a while is likely to be more productive than jumping in with questions that the reading, digesting, and hard thinking could themselves serve to answer.

Successful science outreach means connecting with the people you’re trying to reach.

Let’s say you think science is cool, or fun, or important to understand (or to do) in our modern world. Let’s say you want to get others who don’t (yet) see science as cool, or fun, or important, to appreciate how cool, how fun, how important it is.

Doing that, even on a small scale, is outreach.

Maybe just talking about what you find cool, fun, and important will help some others come to see science that way. But it’s also quite possible that some of the people to whom you’re reaching out will not be won over by the same explanations, the same experiences, the same exemplars of scientific achievement that won you over.

If you want your outreach to succeed, it’s not enough to know what got you engaged with science. To engage people-who-are-not-you, you probably need to find out something about them.

Find out what their experiences with science have been like — and what their experiences with scientists (and science teachers) have been like. These experiences shape what they think about science, but also what they think about who science is for.

Find out what they find interesting and what they find off-putting.

Find out what they already know and what they want to know. Don’t assume before doing this that you know where their information is gappy or what they’re really worried about. Don’t assume that filling in gaps in their knowledge is all it will take to make them science fans.

Recognize that your audience may not be as willing as you want them to be to separate their view of science from their view of scientists. A foible of a famous scientist that is no big deal to you may be a huge deal to people you’re trying to reach who have had different experiences. Your baseline level of trust for scientists and the enterprise of scientific knowledge-building may be higher than that of people in your target audience who come from communities that have been hurt by researchers or harmed by scientific claims used to justify their marginalization.

Actually reaching people means taking their experiences seriously. Telling someone how to feel is a bad outreach strategy.

Taking the people you’re trying to reach seriously also means taking seriously their capacity to understand and to make good decisions — even when their decisions are not precisely the decisions you might make. When you feel frustration because of decisions being made out of what looks to you like ignorance, resist the impulse to punch down. Instead, ask where the decisions are coming from and try to understand them before explaining, respectfully, why you’d make a different decision.

If your efforts at outreach don’t seem to be reaching people or groups you are trying hard to reach, seriously consider the possibility that what you’re doing may not be succeeding because it’s not aligned with the wants or needs of those people or groups.

If you’re serious about reaching those people or groups ask them how your outreach efforts are coming across to them, and take their answers seriously.

Heroes, human “foibles”, and science outreach.

“Science is a way of trying not to fool yourself. The first principle is that you must not fool yourself, and you are the easiest person to fool.”

— Richard Feynman

There is a tendency sometimes to treat human beings as if they were resultant vectors arrived at by adding lots and lots of particular vectors together, an urge to try to work out whether someone’s overall contribution to their field (or to the world) was a net positive.

Unless you have the responsibility for actually putting the human being in question into the system to create good or bad effects (and I don’t kid myself that my readership is that omnipotent), I think treating human beings like resultant vectors is not a great idea.

For one thing, in focusing on the net effect, one tends to overlook that people are complicated. You end up in a situation where you might use those overall tallies to sort people into good and evil rather than noticing how in particular circumstances good and bad may turn on a decision or two.

This can also create an unconscious tendency to put a thumb on the scale when the person whose impact you’re evaluating is someone about whom you have strong feelings, whether they’re a hero to you or a villain. As a result, you may end up completely ignoring the experiences of others, or noticing them but treating them as insignificant, when a better course of action may be to recognize that it’s entirely possible that people who had a positive impact on you had a negative impact on others (and vice versa).

Science is sometimes cast as a pursuit in which people can, by participating in a logical methodology, transcend their human frailties, at least insofar as these frailties constrain our ability to get objective knowledge of the world. On that basis, you’ll hear the claim that we really ought to separate the scientific contributions of an individual from their behaviors and interactions with others. In other words, we should focus on what they did when they were being a scientist rather than on the rest of the (incidental) stuff they did while they were being a human.

This distinction rests on a problematic dichotomy between being a scientist and being a human. Because scientific knowledge is built not just through observations and experiments but also through human interactions, drawing a clear line between human behavior and scientific contributions is harder than it might at first appear.

Consider a scientist who has devised, conducted, and reported the results of many important experiments. If it turns out that some of those experimental results were faked, what do you want to say about his scientific legacy? Can you be confident in his other results? If so, on what basis can you be confident?

The coordinated effort to build a reliable body of knowledge about the world depends on a baseline level of trust between scientists. Without that trust, you are left having to take on the entire project yourself, and that seriously diminishes the chances that the knowledge you’re building will be objective.

What about behaviors that don’t involve putting misinformation into the scientific record? Are those the kinds of things we can separate from someone’s scientific contributions?

Here, the answer will depend a lot on the particulars of those behaviors. Are we talking about a scientist who dresses his dogs in ugly sweaters, or one who plays REO Speedwagon albums at maximum volume while drafting journal articles? Such peculiarities might come up in anecdotes but they probably won’t impact the credibility of one’s science. Do we have a scientist who is regularly cruel to his graduate student trainees, or who spreads malicious rumors about his scientific colleagues? That kind of behavior has the potential to damage the networks of trust and cooperation upon which the scientific knowledge-building endeavor depends, which means it probably can’t be dismissed as a mere “foible”.

What about someone who is scrupulously honest about his scientific contributions but whose behavior towards women or members of underrepresented minorities demonstrates that he does not regard them as being as capable, as smart, or as worthy of respect? What if, moreover, most of these behaviors are displayed outside of scientific contexts (owing to the general lack of women or members of underrepresented minorities in the scientific contexts this scientist encounters)? Intended or not, such attitudes and behaviors can have the effect of excluding people from the scientific community. Even if you think you’re actively working to improve outreach/inclusion, your regular treatment of people you’re trying to help as “less than” can have the effect of exclusion. It also sets a tone within your community where it’s predictable that simply having more women and members of underrepresented minorities there won’t result in their full participation, whether because you and your likeminded colleagues are disinclined to waste your time interacting with them or because they get burnt out interacting with people like you who treat them as “less than”.

This last description of a hypothetical scientist is not too far from famous physicist Richard Feynman, something that we know not just from the testimony of his contemporaries but from Feynman’s own accounts. As it happens, Feynman is enough of a hero to scientists and people who do science outreach that many seem compelled to insist that the net effect of his legacy is positive. Ironically, the efforts to paint Feynman as a net-good guy can inflict harms similar to the behavior Feynman’s defenders seem to minimize.

In an excellent, nuanced post on Feynman, Matthew Francis writes:

Richard Feynman casts the longest shadow in the collective psyche of modern physicists. He plays the nearly same role within the community that Einstein does in the world beyond science: the Physicist’s Physicist, someone almost as important as a symbol as he was as a researcher. Many of our professors in school told Feynman stories, and many of us acquired copies of his lecture notes in physics. …

Feynman was a pioneer of quantum field theory, one of a small group of researchers who worked out quantum electrodynamics (QED): the theory governing the behavior of light, matter, and their interactions. QED shows up everywhere from the spectrum of atoms to the collisions of electrons inside particle accelerators, but Feynman’s calculation techniques proved useful well beyond the particular theory.

Not only that, his explanations of quantum physics were deep and cogent, in a field where clarity can be hard to come by. …

Feynman stories that get passed around physics departments aren’t usually about science, though. They’re about his safecracking, his antics, his refusal to wear neckties, his bongos, his rejection of authority, his sexual predation on vulnerable women.

The predation in question here included actively targeting female students as sex partners, a behavior that rather conveys that you don’t view them primarily in terms of their potential to contribute to science.

While it is true that much of what we know about Richard Feynman’s behavior is the result of Feynman telling stories about himself, these stories really don’t seem to indicate awareness of the harmful impacts his behavior might have had on others. Moreover, Feynman’s tone in telling these stories suggests he assumed an audience that would be taken with his cleverness, including his positioning of women (and his ability to get into their pants) as a problem to be solved scientifically.

Apparently these are not behaviors that prevented Feynman from making significant contributions to physics. However, it’s not at all clear that these are behaviors that did no harm to the scientific community.

One take-home message of all this is that making positive contributions to science doesn’t magically cancel out harmful things you may do — including things that may have the effect of harming other scientists or the cooperative knowledge-building effort in which they’re engaged. If you’re a living scientist, this means you should endeavor not to do harm, regardless of what kinds of positive contributions you’ve amassed so far.

Another take-home message here is that it is dangerous to rest your scientific outreach efforts on scientific heroes.

If the gist of your outreach is: “Science is cool! Here’s a cool guy who made cool contributions to science!” and it turns out that your “cool guy” actually displayed some pretty awful behavior (sexist, racist, whatever), you probably shouldn’t put yourself in a position where your message comes across as:

  • These scientific contributions were worth the harm done by his behavior (including the harm it may have done in unfairly excluding people from full participation in science).
  • He may have been sexist or racist, but that was no big deal because people in his time, place and culture were pretty sexist (as if that removes the harm done by the behavior).
  • He did some things that weren’t sexist or racist, so that cancels out the things he did that were sexist or racist. Maybe he worked hard to help a sister or a daughter participate in science; how can we then say that his behavior hurt women’s inclusion in science?
  • His sexism or racism was no big deal because it seems to have been connected to a traumatic event (e.g., his wife died, he had a bad experience with a black person once), or because the problematic behavior seems to have been his way of “blowing off steam” during a period of scientific productivity.

You may be intending to convey the message that this was an interesting guy who made some important contributions to science, but the message that people may take away is that great scientific achievement totally outweighs sexism, racism, and other petty problems.

But people aren’t actually resultant vectors. If you’re a target of the racism, sexism, and other petty problems, you may not feel like they should be overlooked or forgiven on the strength of the scientific achievement.

Science outreach doesn’t just deliver messages about what science knows or about the processes by which that knowledge is built. Science outreach also delivers messages about what kind of people scientists are (and about what kinds of people can be scientists).

There is a special danger lurking here if you are doing science outreach by using a hero like Feynman and you are not a member of a group likely to have been hurt by his behavior. You may believe that the net effect of his story casts science and scientists in a way that will draw people in, but it’s possible you are fooling yourself.

Maybe you aren’t the kind of person whose opinion about science or eagerness to participate in science would be influenced by the character flaws of the “scientific heroes” on offer, but if you’re already interested in science perhaps you’re not the main target for outreach efforts. And if members of the groups who are targeted for outreach tell you that they find these “scientific heroes” and the glorification of them by science fans alienating, perhaps listening to them would help you to devise more effective outreach strategies.

Building more objective knowledge about the world requires input from others. Why should we think that ignoring such input — especially from the kind of people you’re trying to reach — would lead to better science outreach?

Conduct of scientists (and science writers) can shape the public’s view of science.

Scientists undertake a peculiar kind of project. In striving to build objective knowledge about the world, they are tacitly recognizing that our unreflective picture of the world is likely to be riddled with mistakes and distortions. On the other hand, they frequently come to regard themselves as better thinkers — as more reliably objective — than humans who are not scientists, and end up forgetting that they have biases and blindspots of their own which they are helpless to detect without help from others who don’t share these particular biases and blindspots.

Building reliable knowledge about the world requires good methodology, teamwork, and concerted efforts to ensure that the “knowledge” you build doesn’t simply reify preexisting individual and cultural biases. It’s hard work, but it’s important to do it well — especially given a long history of “scientific” findings being used to justify and enforce preexisting cultural biases.

I think this bigger picture is especially appropriate to keep in mind in reading the response from Scientific American Blogs Editor Curtis Brainard to criticisms of a pair of problematic posts on the Scientific American Blog Network. Brainard writes:

The posts provoked accusations on social media that Scientific American was promoting sexism, racism and genetic determinism. While we believe that such charges are excessive, we share readers’ concerns. Although we expect our bloggers to cover controversial topics from time to time, we also recognize that sensitive issues require extra care, and that did not happen here. The author and I have discussed the shortcomings of the two posts in detail, including the lack of attention given to countervailing arguments and evidence, and he understood the deficiencies.

As stated at the top of every post, Scientific American does not always share the views and opinions expressed by our bloggers, just as our writers do not always share our editorial positions. At the same time, we realize our network’s bloggers carry the Scientific American imprimatur and that we have a responsibility to ensure that—differences of opinion notwithstanding—their work meets our standards for accuracy, integrity, transparency, sensitivity and other attributes.

(Bold emphasis added.)

The problem here isn’t that the posts in question advocated sound scientific views with implications that people on social media didn’t like. Rather, the posts presented claims in a way that made them look like they had much stronger scientific support than they really do — and did so in the face of ample published scientific counterarguments. Scientific American is not requiring that posts on its blog network meet a political litmus test, but rather that they embody the same kind of care, responsibility to the available facts, and intellectual honesty that science itself should display.

This is hard work, but it’s important. And engaging seriously with criticism, rather than just dismissing it, can help us do it better.

There’s an irony in the fact that one of the problematic posts which ignored some significant relevant scientific literature (helpfully cited by commenters in the comments section of that very post) was ignoring that literature in the service of defending Larry Summers and his remarks on possible innate biological causes that make men better at math and science than women. The irony lies in the fact that Larry Summers displayed an apparently ironclad commitment to ignore any and all empirical findings that might challenge his intuition that there’s something innate at the heart of the gender imbalance in math and science faculty.

Back in January of 2005, Larry Summers gave a speech at a conference about what can be done to attract more women to the study of math and science, and to keep them in the field long enough to become full professors. In his talk, Summers suggested as a possible hypothesis for the relatively low number of women in math and science careers that there may be innate biological factors that make males better at math and science than females. (He also related an anecdote about his daughter naming her toy trucks as if they were dolls, but it’s fair to say he probably meant this anecdote to be illustrative rather than evidentiary.)

The talk did not go over well with the rest of the participants in the conference.

Several scientific studies were presented at the conference before Summers made his speech. All these studies presented significant evidence against the claim of an innate difference between males and females that could account for the “science gap”.


In the aftermath of this conference of yore, there were some commenters who lauded Summers for voicing “unpopular truths” and others who distanced themselves from his claims but said they supported his right to make them as an exercise of “academic freedom.”

But if Summers was representing himself as a scientist* when he made his speech, I don’t think the “academic freedom” defense works.


Summers is free to state hypotheses — even unpopular hypotheses — that might account for a particular phenomenon. But, as a scientist, he is also responsible for taking account of the data relevant to his hypotheses. If the data weighs against his preferred hypothesis, intellectual honesty requires that he at least acknowledge this fact. Some would argue that it could even require that he abandon his hypothesis (since science is supposed to be evidence-based whenever possible).


When news of Summers’ speech, and reactions to it, was fresh, one of the details that stuck with me was that one of the conference organizers noted to Summers, after he gave his speech, that there was a large body of evidence — some of it presented at that very conference — that seemed to undermine his hypothesis, after which Summers gave a reply that amounted to, “Well, I don’t find those studies convincing.”

Was Summers within his rights to not believe these studies? Sure. But, he had a responsibility to explain why he rejected them. As a part of a scientific community, he can’t just reject a piece of scientific knowledge out of hand. Doing so comes awfully close to undermining the process of communication that scientific knowledge is based upon. You aren’t supposed to reject a study because you have more prestige than the authors of the study (so, you don’t have to care what they say). You can question the experimental design, you can question the data analysis, you can challenge the conclusions drawn, but you have to be able to articulate the precise objection. Surely, rejecting a study just because it doesn’t fit with your preferred hypothesis is not an intellectually honest move.


By my reckoning, Summers did not conduct himself as a responsible scientist in this incident. But I’d argue that the problem went beyond a lack of intellectual honesty within the universe of scientific discourse. Summers is also responsible for the bad consequences that flowed from his remark.


The bad consequence I have in mind here is the mistaken view of science and its workings that Summers’ conduct conveys. Especially by falling back on a plain vanilla “academic freedom” defense here, defenders of Summers conveyed to the public at large the idea that any hypothesis in science is as good as any other. Scientists who are conscious of the evidence-based nature of their field will see the absurdity of this idea — some hypotheses are better, others worse, and whenever possible we turn to the evidence to make these discriminations. Summers compounded ignorance of the relevant data with what came across as a statement that he didn’t care what the data showed. From this, the public at large could assume he was within his scientific rights to decide which data to care about without giving any justification for this choice**, or they could infer that data has little bearing on the scientific picture of the world.

Clearly, such a picture of science would undermine the standing of the rest of the bits of knowledge produced by scientists far more intellectually honest than Summers.


Indeed, we might go further here. Not only did Summers have some responsibilities that seemed to have escaped him while he was speaking as a scientist, but we could argue that the rest of the scientists (whether at the conference or elsewhere) have a collective responsibility to address the mistaken picture of science his conduct conveyed to society at large. It’s disappointing that, nearly a decade later, we instead have to contend with the problem of scientists following in Summers’ footsteps by ignoring, rather than engaging with, the scientific findings that challenge their intuitions.

Owing to the role we play in presenting a picture of what science knows and of how scientists come to know it to a broader audience, those of us who write about science (on blogs and elsewhere) also have a responsibility to be clear about the kind of standards scientists need to live up to in order to build a body of knowledge that is as accurate and unbiased as humanly possible. If we’re not clear about these standards in our science writing, we risk misleading our audiences about the current state of our knowledge and about how science works to build reliable knowledge about our world. Our responsibility here isn’t just a matter of noticing when scientists are messing up — it’s also a matter of acknowledging and correcting our own mistakes and of working harder to notice our own biases. I’m pleased that our Blogs Editor is committed to helping us fulfill this duty.
_____
*Summers is an economist, and whether to regard economics as a scientific field is a somewhat contentious matter. I’m willing to give the scientific status of economics the benefit of the doubt, but this means I’ll also expect economists to conduct themselves like scientists, and will criticize them when they do not.

**It’s worth noting that a number of the studies that Summers seemed to be dismissing out of hand were conducted by women. One wonders what lessons the public might draw from that.

_____
A portion of this post is an updated version of an ancestor post on my other blog.

A suggestion for those arguing about the causal explanation for fewer women in science and engineering fields.

People are complex, as are the social structures they build (including but not limited to educational institutions, workplaces, and professional communities).

Accordingly, the appropriate causal stories to account for the behaviors and choices of humans, individually and collectively, are bound to be complex. It will hardly ever be the case that there is a single cause doing all the work.

However, there are times when people seem to lose the thread when they spin their causal stories. For example:

The point of focusing on innate psychological differences is not to draw attention away from anti-female discrimination. The research clearly shows that such discrimination exists—among other things, women seem to be paid less for equal work. Nor does it imply that the sexes have nothing in common. Quite frankly, the opposite is true. Nor does it imply that women—or men—are blameworthy for their attributes.

Rather, the point is that anti-female discrimination isn’t the only cause of the gender gap. As we learn more about sex differences, we’ve built better theories to explain the non-identical distribution of the sexes among the sciences. Science is always tentative, but the latest research suggests that discrimination has a weaker impact than people might think, and that innate sex differences explain quite a lot.

What I’m seeing here is a claim that amounts to “there would still be a gender gap in the sciences even if we eliminated anti-female discrimination” — in other words, that the causal powers of innate sex differences would be enough to create a gender gap.

To this claim, I would like to suggest:

1. that there is absolutely no reason not to work to eliminate anti-female discrimination; whether or not there are other causes that are harder to change, such discrimination seems like something we can change, and it has negative effects on those subject to it;

2. that it is an empirical question whether, in the absence of anti-female discrimination, there would still be a gender gap in the sciences; given the complexity of humans and their social structures, controlled studies in psychology are models of real life that abstract away lots of details*, and when the rubber hits the road in the real phenomena we are modeling, things may play out differently.

Let’s settle the question of how much anti-female discrimination matters by getting rid of it.

_____
* This is not a special problem for psychology. All controlled experiments are abstracting away details. That’s what controlling variables is all about.

Pub-Style Science: dreams of objectivity in a game built around power.

This is the third and final installment of my transcript of the Pub-Style Science discussion about how (if at all) philosophy can (or should) inform scientific knowledge-building. Leading up to this part of the conversation, we were considering the possibility that the idealization of the scientific method left out a lot of the details of how real humans actually interact to build scientific knowledge …

Dr. Isis: And that’s the tricky part, I think. That’s where this becomes a messy endeavor. You think about the parts of the scientific method, and you write the scientific method out, we teach it to our students, it’s on the little card, and I think it’s one of the most amazing constructs that there is. It’s certainly a philosophy.

I have devoted my career to the scientific method, and yet it’s that last step that is the messiest. We take our results and we interpret them, we either reject or fail to reject the hypothesis, and in a lot of cases, the way we interpret the very objective data that we’re getting is based on the social and cultural constructs of who we are. And the messier part is that the who we are — you say that science is done around the world, sure, but really, who is it done by? We all get the CV, “Dear honorable and most respected professor…” And what do you do with those emails? You spam them. But why? Why do we do that? There are people [doing science] around the world, and yet we reject their science-doing because of who they are and where they’re from and our understanding, our capacity to take [our doing] of that last step of the scientific method as superior because of some pedigree of our training, which is absolutely rooted in the narrowest sliver of our population.

And that’s the part that frightens me about science. Going from lab to lab and learning things, you’re not just learning objective skills, you’re learning a political process — who do you shake hands with at meetings, who do you have lunch with, who do you have drinks with, how do you phrase your grants in a particular way so they get funded because this is the very narrow sliver of people who are reading them? And I have no idea what to do about that.

Janet Stemwedel: I think this is a place where the acknowledgement that’s embodied in editorial policies of journals like PLOS ONE, that we can’t actually reliably predict what’s going to be important, is a good step forward. That’s saying, look, what we can do is talk about whether this is a result that seems to be robust: this is how I got it; I think if you try to get it in your lab, you’re likely to get it, too; this is why it looked interesting to me in light of what we knew already. Without saying: oh, and this is going to be the best thing since sliced bread. At least that’s acknowledging a certain level of epistemic humility that it’s useful for the scientific community to put out there, to not pretend that the scientific method lets you see into the future. Because last time I checked, it doesn’t.

(46:05)
Andrew Brandel: I just want to build on this point, that this question of objective truth also is a question that is debated hotly, obviously, in science, and I will get in much trouble for my vision of what is objective and what is not objective. This question of whether, to quote a famous philosopher of science, we’re all looking at the same world through different-colored glasses, or whether there’s something more to it, if we’re actually talking about nature in different ways, if we can really learn something not even from science being practiced wherever in the world, but from completely different systems of thinking about how the world works. Because the other part of this violence is not just the ways in which certain groups have not been included in the scientific community, the professional community, which was controlled by the church and wealthy estates and things, but also with the institutions like the scientific method, like certain kinds of philosophy. A lot of violence has been propagated in the name of those things. So I think it’s important to unpack not just this question of let’s get more voices to the table, but literally think about how the structures of what we’re doing themselves — the way the universities are set up, the way that we think about what science does, the way that we think about objective truth — also propagate certain kinds of violence, epistemic kinds of violence.

Michael Tomasson: Wait wait wait, this is fascinating. Epistemic violence? Expand on that.

Andrew Brandel: What I mean to say is, part of the problem, at least from the view of myself — I don’t want to actually represent anybody else — is that if we think that we’re getting to some better method of getting to objective truth, if we think that we have — even if it’s only in an ideal state — some sort of cornerstone, some sort of key to the reality of things as they are, then we can squash the other systems of thinking about the world. And that is also a kind of violence, in a way, that’s not just the violence of there’s no women at the table, there’s no different kinds of people at the table. But there’s actually another kind of power structure that’s embedded in the very way that we think about truths. So, for example, a famous anthropologist, Levi-Strauss, would always point out that the botanists would go to places in Latin America and they would identify 14 different kinds of XYZ plant, and the people living in that jungle who aren’t scientists or don’t have that kind of sophisticated knowledge could distinguish like 45 kinds of these plants. And they took them back to the lab, and they were completely right.

So what does that mean? How do we think about these different ways [of knowing]? I think unpacking that is a big thing that social science and philosophy of science can bring to this conversation, pointing out when there is a place to critique the ways in which science becomes like an ideology.

Michael Tomasson: That just sort of blew my mind. I have to process that for a while. I want to pick up on something you’re saying and that I think Janet said before, which is really part of the spirit of what Pub-Style Science is all about, the idea that if we get more different kinds of voices into science, we’ll have a little bit better science at the other end of it.

Dr. Rubidium: Yeaaaah. We can all sit around like, I’ve got a ton of great ideas, and that’s fabulous, and new voices, and rah rah. But where are the new voices? If the new voices — or what you would call new voices, or new opinions, or different opinions (maybe not even new, just different from the current power structure) — if those voices aren’t getting to positions of real power to effect change, it doesn’t matter how many foot soldiers you get on the ground. You have got to get people into the position of being generals. And is that happening? No. I would say no.

Janet Stemwedel: Having more different kinds of people at the table doesn’t matter if you don’t take them seriously.

Andrew Brandel: Exactly. That’s a key point.

Dr. Isis: This is the tricky thing that I sort of alluded to. And I’m not talking about diverse voices in terms of gender and racial and sexual orientation diversity and disability issues. I’m talking about just this idea of diverse voices. One of the things that is tricky, again, is that to get to play the game you have to know the rules, and trying to change the rules too early — one, I think it’s dangerous to try to change the rules before you understand what the rules even are, and two, that is the quickest way to get smacked in the nose when you’re very young. And now, to extend that to issues of actual diversity in science, at least my experience has been that some of the folks who are diverse in science are some of the biggest rule-obeyers. Because you have to be in order to survive. You can’t come in and be different as it is and decide you’re going to change the rules out from under everybody until you get into that — until you become a general, to use Dr. Rubidium’s analogy. The problem is, by the time you become the general, have you drunk enough of the Kool-Aid that you no longer remember who you were? Do you still have enough of yourself to change the system? Some of my more senior colleagues, diverse colleagues, who came up the ranks, are some of the biggest believers in the rules. I don’t know if they felt that way when they were younger folks.

Janet Stemwedel: Part of it can be, if the rules work for you, there’s less incentive to think about changing them. But this is one of those places where those of us philosophers who think about where the knowledge-building bumps up against the ethics will say: look, the ethical responsibilities of the people in the community with more power are different from the ethical responsibilities of the people in the community who are just coming up, because they don’t have as much weight to throw around. They don’t have as much power. So I talk a lot to mid-career and late-career scientists and say, hey look, you want to help build a different community, a different environment for the people you’re training? You’ve got to put some skin in the game to make that happen. You’re in a relatively safe place to throw that weight around. You do that!

And you know, I try to make these prudential arguments about, if you shift around the incentive structures [in various ways], what’s likely to produce better knowledge on the other end? That’s presumably why scientists are doing science, ’cause otherwise there’d be some job that they’d be doing that takes up less time and less brain.

Andrew Brandel: This is a question also of where ethics and epistemic issues also come together, because I think that’s really part of what kind of radical politics — there’s a lot of different theories about what kind of revolution you can talk about, what a revolutionary politics might be to overthrow the system in science. But I think this issue that it’s also an epistemic thing, that it’s also a question of producing better knowledge, and that, to bring back this point about how it’s not just about putting people in positions, it’s not just hiring an assistant professor from XYZ country or more women or these kinds of things, but it’s also a question of putting oneself sufficiently at risk, and taking seriously the possibility that I’m wrong, from radically different positions. That would really move things, I think, in a more interesting direction. That’s maybe something we can bring to the table.

Janet Stemwedel: This is the piece of Karl Popper, by the way, that scientists like as an image of what kind of tough people they are. Scientists are not trying to prove their hypotheses, they’re trying to falsify them, they’re trying to show that they’re wrong, and they’re ready to kiss even their favorite hypothesis goodbye if that’s what the evidence shows.

Some of those hypotheses that scientists need to be willing to kiss goodbye have to do with narrow views of what kind of details count as fair game for building real reliable knowledge about the world and what kind of people and what kind of training could do that, too. Scientists really have to be more evidence-attentive around issues like their own implicit bias. And for some reason that’s really hard, because scientists think that individually they are way more objective than the average bear. The real challenge of science is recognizing that we are all average bears, and it is just the coordination of our efforts within this particular methodological structure that gets us something better than the individual average bear could get by him- or herself.

Michael Tomasson: I’m going to backpedal as furiously as I can, since we’re running out of time. So I’ll give my final spiel and then we’ll go around for closing comments.

I guess I will pare down my skeleton key: there are different ways of doing science, and a lot of culture comes with them that I think is very flexible. What I'm getting at is this: is there some universal hub for all the different ways people look at science? Is there some sort of universal skeleton or structure? If I had to backpedal furiously, what I would say, and what I would try to teach my folks, is, number one, there is an objective world; it's not just my opinion. When people come in and talk to me about their science and experiments, it's not just about what I want or what I think; there is some objective world out there that we're trying to describe. The second thing, the most stripped-down version of the scientific method I can think of, is that in order to understand that objective world, it helps to have a hypothesis, a preconceived notion, first to challenge.

What I get frustrated about, and this is just a very practical day-to-day thing, is I see people coming and doing experiments saying, “I have no preconceived notion of how this should go, I did this experiment, and here’s what I got.” It’s like, OK, that’s very hard to interpret unless you start from a certain place — here’s my prediction, here’s what I think was going on — and then test it.

Dr. Isis: I’ll say, Tomasson, actually this wasn’t as boring as I thought it would be. I was really worried about this one. I wasn’t really sure what we were supposed to be talking about — philosophy and science — but this one was OK. So, good on you.

But, I think that I will concur with you that science is about seeking objective truth. I think it’s a darned shame that humans are the ones doing the seeking.

Janet Stemwedel: You know, dolphin science would be completely different, though.

Dr. Rubidium: Yeah, dolphins are jerks! What are you talking about?

Janet Stemwedel: Exactly! All their journals would be behind paywalls.

Andrew Brandel: I'll just say, as I was saying to David, who I know is a regular member of your group, that I think having these conversations is a good step in the right direction. We don't often get asked, as social scientists, even those of us who work in science settings, to talk about these issues: what are the ethical and epistemic stakes involved in doing what we do? What can we bring to the table on similar kinds of questions? For me, this question of cultivating a kind of openness to being wrong is so central to thinking about the kind of science that I do. I think that these kinds of conversations are important, and we need to generate some kind of momentum. I jokingly said to Tomasson that we need a grant to pay for a workshop to get more people into these types of conversations, because I think it's significant. It's a step in the right direction.

Janet Stemwedel: I’m inclined to say one of the take-home messages here is that there’s a whole bunch of scientists and me, and none of you said, “Let’s not talk about philosophy at all, that’s not at all useful.” I would like some university administrators to pay attention to this. It’s possible that those of us in the philosophy department are actually contributing something that enhances not only the fortunes of philosophy majors but also the mindfulness of scientists about what they’re doing.

I’m pretty committed to the idea that there is some common core to what scientists across disciplines and across cultures are doing to build knowledge. I think the jury’s still out on what precisely the right thing to say about that common core of the scientific method is. But, I think there’s something useful in being able to step back and examine that question, rather than saying, “Science is whatever the hell we do in my lab. And as long as I keep doing all my future knowledge-building on the same pattern, nothing could go wrong.”

Dr. Rubidium: For me, I'll echo Isis's comments: science is an endeavor done by people. And people are jerks — no! Seriously, though: once people are involved in this endeavor, this job, whatever you want to call it (some people would call it a calling), I think it's essential that we talk about philosophy, sociology, the behavior of people. They are doing the work. It doesn't make sense to me — and I'm an analytical chemist with zero background in all of the social stuff — that you would have this thing done by people and then say with a straight face, "But let's not talk about people." That part just doesn't compute. So I think these conversations definitely need to continue, and I hope we can talk more about the people behind the endeavor, and more about the thoughts and behaviors attached to them.

* * * * *

Part 1 of the transcript.

Part 2 of the transcript.

Archived video of this Pub-Style Science episode.

Storify’d version of the simultaneous Twitter conversation.

You should also check out Dr. Isis’s post on why the conversations that happen in Pub-Style Science are valuable to scientists-in-training.