When your cover photo says less about the story and more about who you imagine you’re talking to.

The cover choice for the most recent issue of Science was not good. It provoked strong reactions and, eventually, an apology from Science’s editor-in-chief. It’s not the worst apology I’ve seen in recent days, but my reading of it suggests that there’s still a gap between readers’ reactions to the cover and the editorial team’s grasp of those reactions.

So, in the interests of doing what I can to help close that gap, I give you the apology (in block quotes) and my response to it:

From Science Editor-in-Chief Marcia McNutt:

Science has heard from many readers expressing their opinions and concerns with the recent [11 July 2014] cover choice.

The cover showing transgender sex workers in Jakarta was selected after much discussion by a large group

I suppose the fact that the choice of the cover was discussed by many people for a long time (as opposed to by one person with no discussion) is good. But it’s no guarantee of a good choice, as we’ve seen here. It might be useful to tell readers more about what kind of group was involved in making the decision, and what kind of discussion led to the choice of this cover over the other options that were considered.

and was not intended to offend anyone,

Imagine my relief that you did not intend what happened in response to your choice of cover. And, given how predictable the response to your cover was, imagine my estimation of your competence in the science communication arena dropping several notches. How well do you know your audience? Who exactly do you imagine that audience to be? If you’re really not interested in reaching out to people like me, can I get my AAAS dues refunded, please?

but rather to highlight the fact that there are solutions for the AIDS crisis for this forgotten but at-risk group. A few have indicated to me that the cover did exactly that,

For them. For them the cover highlighted transgender sex workers as a risk group who might get needed help from research. So, there was a segment of your audience for whom your choice succeeded, apparently.

but more have indicated the opposite reaction: that the cover was offensive because they did not have the context of the story prior to viewing it, an important piece of information that was available to those choosing the cover.

Please be careful with your causal claims here. Even with the missing context provided, a number of people still find the cover harmful. This explanation of the harm, set in the context of what the scientific community, and the wider world, can be like for a trans* woman, spells it out pretty eloquently.

The problem, in other words, goes deeper than the picture not effectively conveying your intended context. Instead, the cover communicated layers of context about who you imagine as your audience — and about whose reality is not really on your radar.

The people who are using social media to explain the problems they have with this cover are sharing information about who is in your audience, about what our lives in and with science are like. We are pinging you so we will be on your radar. We are trying to help you.

I am truly sorry for any discomfort that this cover may have caused anyone,

Please do not minimize the harm your choice of cover caused by describing it as “discomfort”. Doing so suggests that you still don’t recognize that this is not an event happening in a vacuum. That’s a bad way to support AAAS members who are women and to broaden the audience for science.

and promise that we will strive to do much better in the future to be sensitive to all groups and not assume that context and intent will speak for themselves.

What’s your action plan going forward? Is there good reason to think that simply trying hard to do better will get the job done? Or are you committed enough to doing better that you’re ready to revisit your editorial processes, the diversity of your editorial team, the diversity of the people beyond that team whose advice and feedback you seek and take seriously?

I’ll repeat: We are trying to help you. We criticize this cover because we expect more from Science and AAAS. This is why people have been laboring, patiently, to spell out the problems.

Please use those patient explanations and formulate a serious plan to do better.

* * * * *
For this post, I’m not accepting comments. There is plenty of information linked here for people to read and digest, and my sense is that this is a topic where thinking hard for a while is likely to be more productive than jumping in with questions that the reading, digesting, and hard thinking could themselves serve to answer.

Successful science outreach means connecting with the people you’re trying to reach.

Let’s say you think science is cool, or fun, or important to understand (or to do) in our modern world. Let’s say you want to get others who don’t (yet) see science as cool, or fun, or important, to appreciate how cool, how fun, how important it is.

Doing that, even on a small scale, is outreach.

Maybe just talking about what you find cool, fun, and important will help some others come to see science that way. But it’s also quite possible that some of the people to whom you’re reaching out will not be won over by the same explanations, the same experiences, the same exemplars of scientific achievement that won you over.

If you want your outreach to succeed, it’s not enough to know what got you engaged with science. To engage people-who-are-not-you, you probably need to find out something about them.

Find out what their experiences with science have been like — and what their experiences with scientists (and science teachers) have been like. These experiences shape what they think about science, but also what they think about who science is for.

Find out what they find interesting and what they find off-putting.

Find out what they already know and what they want to know. Don’t assume before doing this that you know where their information is gappy or what they’re really worried about. Don’t assume that filling in gaps in their knowledge is all it will take to make them science fans.

Recognize that your audience may not be as willing as you want them to be to separate their view of science from their view of scientists. A foible of a famous scientist that is no big deal to you may be a huge deal to people you’re trying to reach who have had different experiences. Your baseline level of trust for scientists and the enterprise of scientific knowledge-building may be higher than that of people in your target audience who come from communities that have been hurt by researchers or harmed by scientific claims used to justify their marginalization.

Actually reaching people means taking their experiences seriously. Telling someone how to feel is a bad outreach strategy.

Taking the people you’re trying to reach seriously also means taking seriously their capacity to understand and to make good decisions — even when their decisions are not precisely the decisions you might make. When you feel frustration because of decisions being made out of what looks to you like ignorance, resist the impulse to punch down. Instead, ask where the decisions are coming from and try to understand them before explaining, respectfully, why you’d make a different decision.

If your efforts at outreach don’t seem to be reaching people or groups you are trying hard to reach, seriously consider the possibility that what you’re doing may not be succeeding because it’s not aligned with the wants or needs of those people or groups.

If you’re serious about reaching those people or groups, ask them how your outreach efforts are coming across to them, and take their answers seriously.

Heroes, human “foibles”, and science outreach.

“Science is a way of trying not to fool yourself. The first principle is that you must not fool yourself, and you are the easiest person to fool.”

— Richard Feynman

There is a tendency sometimes to treat human beings as if they were resultant vectors arrived at by adding lots and lots of particular vectors together, an urge to try to work out whether someone’s overall contribution to their field (or to the world) was a net positive.

Unless you have the responsibility for actually putting the human being in question into the system to create good or bad effects (and I don’t kid myself that my readership is that omnipotent), I think treating human beings like resultant vectors is not a great idea.

For one thing, in focusing on the net effect, one tends to overlook that people are complicated. You end up in a situation where you might use those overall tallies to sort people into good and evil rather than noticing how in particular circumstances good and bad may turn on a decision or two.

This can also create an unconscious tendency to put a thumb on the scale when the person whose impact you’re evaluating is someone about whom you have strong feelings, whether they’re a hero to you or a villain. As a result, you may end up completely ignoring the experiences of others, or noticing them but treating them as insignificant, when a better course of action may be to recognize that it’s entirely possible that people who had a positive impact on you had a negative impact on others (and vice versa).

Science is sometimes cast as a pursuit in which people can, by participating in a logical methodology, transcend their human frailties, at least insofar as these frailties constrain our ability to get objective knowledge of the world. On that basis, you’ll hear the claim that we really ought to separate the scientific contributions of an individual from their behaviors and interactions with others. In other words, we should focus on what they did when they were being a scientist rather than on the rest of the (incidental) stuff they did while they were being a human.

This distinction rests on a problematic dichotomy between being a scientist and being a human. Because scientific knowledge is built not just through observations and experiments but also through human interactions, drawing a clear line between human behavior and scientific contributions is harder than it might at first appear.

Consider a scientist who has devised, conducted, and reported the results of many important experiments. If it turns out that some of those experimental results were faked, what do you want to say about his scientific legacy? Can you be confident in his other results? If so, on what basis can you be confident?

The coordinated effort to build a reliable body of knowledge about the world depends on a baseline level of trust between scientists. Without that trust, you are left having to take on the entire project yourself, and that seriously diminishes the chances that the knowledge you’re building will be objective.

What about behaviors that don’t involve putting misinformation into the scientific record? Are those the kinds of things we can separate from someone’s scientific contributions?

Here, the answer will depend a lot on the particulars of those behaviors. Are we talking about a scientist who dresses his dogs in ugly sweaters, or one who plays REO Speedwagon albums at maximum volume while drafting journal articles? Such peculiarities might come up in anecdotes but they probably won’t impact the credibility of one’s science. Do we have a scientist who is regularly cruel to his graduate student trainees, or who spreads malicious rumors about his scientific colleagues? That kind of behavior has the potential to damage the networks of trust and cooperation upon which the scientific knowledge-building endeavor depends, which means it probably can’t be dismissed as a mere “foible”.

What about someone who is scrupulously honest about his scientific contributions but whose behavior towards women or members of underrepresented minorities demonstrates that he does not regard them as being as capable, as smart, or as worthy of respect? What if, moreover, most of these behaviors are displayed outside of scientific contexts (owing to the general lack of women or members of underrepresented minorities in the scientific contexts this scientist encounters)? Intended or not, such attitudes and behaviors can have the effect of excluding people from the scientific community. Even if you think you’re actively working to improve outreach and inclusion, your regular treatment of the people you’re trying to help as “less than” can have the effect of exclusion. It also sets a tone within your community where it’s predictable that simply having more women and members of underrepresented minorities there won’t result in their full participation, whether because you and your like-minded colleagues are disinclined to waste your time interacting with them or because they get burnt out interacting with people like you who treat them as “less than”.

This last description of a hypothetical scientist is not too far from famous physicist Richard Feynman, something that we know not just from the testimony of his contemporaries but from Feynman’s own accounts. As it happens, Feynman is enough of a hero to scientists and people who do science outreach that many seem compelled to insist that the net effect of his legacy is positive. Ironically, the efforts to paint Feynman as a net-good guy can inflict harms similar to the behavior Feynman’s defenders seem to minimize.

In an excellent, nuanced post on Feynman, Matthew Francis writes:

Richard Feynman casts the longest shadow in the collective psyche of modern physicists. He plays nearly the same role within the community that Einstein does in the world beyond science: the Physicist’s Physicist, someone almost as important as a symbol as he was as a researcher. Many of our professors in school told Feynman stories, and many of us acquired copies of his lecture notes in physics. …

Feynman was a pioneer of quantum field theory, one of a small group of researchers who worked out quantum electrodynamics (QED): the theory governing the behavior of light, matter, and their interactions. QED shows up everywhere from the spectrum of atoms to the collisions of electrons inside particle accelerators, but Feynman’s calculation techniques proved useful well beyond the particular theory.

Not only that, his explanations of quantum physics were deep and cogent, in a field where clarity can be hard to come by. …

Feynman stories that get passed around physics departments aren’t usually about science, though. They’re about his safecracking, his antics, his refusal to wear neckties, his bongos, his rejection of authority, his sexual predation on vulnerable women.

The predation in question here included actively targeting female students as sex partners, a behavior that rather conveys that you don’t view them primarily in terms of their potential to contribute to science.

While it is true that much of what we know about Richard Feynman’s behavior is the result of Feynman telling stories about himself, these stories really don’t seem to indicate awareness of the harmful impacts his behavior might have had on others. Moreover, Feynman’s tone in telling these stories suggests he assumed an audience that would be taken with his cleverness, including his positioning of women (and his ability to get into their pants) as a problem to be solved scientifically.

Apparently these are not behaviors that prevented Feynman from making significant contributions to physics. However, it’s not at all clear that these are behaviors that did no harm to the scientific community.

One take-home message of all this is that making positive contributions to science doesn’t magically cancel out harmful things you may do — including things that may have the effect of harming other scientists or the cooperative knowledge-building effort in which they’re engaged. If you’re a living scientist, this means you should endeavor not to do harm, regardless of what kinds of positive contributions you’ve amassed so far.

Another take-home message here is that it is dangerous to rest your scientific outreach efforts on scientific heroes.

If the gist of your outreach is: “Science is cool! Here’s a cool guy who made cool contributions to science!” and it turns out that your “cool guy” actually displayed some pretty awful behavior (sexist, racist, whatever), you probably shouldn’t put yourself in a position where your message comes across as:

  • These scientific contributions were worth the harm done by his behavior (including the harm it may have done in unfairly excluding people from full participation in science).
  • He may have been sexist or racist, but that was no big deal because people in his time, place and culture were pretty sexist (as if that removes the harm done by the behavior).
  • He did some things that weren’t sexist or racist, so that cancels out the things he did that were sexist or racist. Maybe he worked hard to help a sister or a daughter participate in science; how can we then say that his behavior hurt women’s inclusion in science?
  • His sexism or racism was no big deal because it seems to have been connected to a traumatic event (e.g., his wife died, he had a bad experience with a black person once), or because the problematic behavior seems to have been his way of “blowing off steam” during a period of scientific productivity.

You may be intending to convey the message that this was an interesting guy who made some important contributions to science, but the message that people may take away is that great scientific achievement totally outweighs sexism, racism, and other petty problems.

But people aren’t actually resultant vectors. If you’re a target of the racism, sexism, and other petty problems, you may not feel like they should be overlooked or forgiven on the strength of the scientific achievement.

Science outreach doesn’t just deliver messages about what science knows or about the processes by which that knowledge is built. Science outreach also delivers messages about what kind of people scientists are (and about what kinds of people can be scientists).

There is a special danger lurking here if you are doing science outreach by using a hero like Feynman and you are not a member of a group likely to have been hurt by his behavior. You may believe that the net effect of his story casts science and scientists in a way that will draw people in, but it’s possible you are fooling yourself.

Maybe you aren’t the kind of person whose opinion about science or eagerness to participate in science would be influenced by the character flaws of the “scientific heroes” on offer, but if you’re already interested in science perhaps you’re not the main target for outreach efforts. And if members of the groups who are targeted for outreach tell you that they find these “scientific heroes” and the glorification of them by science fans alienating, perhaps listening to them would help you to devise more effective outreach strategies.

Building more objective knowledge about the world requires input from others. Why should we think that ignoring such input — especially from the kind of people you’re trying to reach — would lead to better science outreach?

On the value of empathy, not othering.

Could seeing the world through the eyes of the scientist who behaves unethically be a valuable tool for those trying to behave ethically?

Last semester, I asked my “Ethics in Science” students to review an online ethics training module of the sort that many institutions use to address responsible conduct of research with their students and employees. Many of my students elected to review the Office of Research Integrity’s interactive movie The Lab, which takes you through a “choose your own adventure” scenario in an academic lab as one of four characters (a graduate student, a postdoc, the principal investigator, or the institution’s research integrity officer). The scenario centers on research misconduct by another member of the lab, and your goal is to do what you can to address the problems — and to avoid being drawn into committing misconduct yourself.

By and large, my students reported that “The Lab” was a worthwhile activity. As part of the assignment, I asked them to suggest changes, and a number of them made what I thought was a striking suggestion: players should have the option to play the character who commits the misconduct.

I can imagine some eminently sensible reasons why the team that produced “The Lab” didn’t include the cheater as a playable character. For instance, if the scenario started before the decision to cheat and the user playing this character picked the options that amount to not cheating, you’d end up with a story that lacks almost all of the drama. Similarly, if you picked up with that character in the immediate aftermath of the instance of cheating and went with the “come clean/don’t dig a deeper hole” options, the story would end pretty quickly.

Setting the need for dramatic tension aside, I suspect that another reason “The Lab” doesn’t include the cheater as a playable character is that people who are undergoing research ethics training are supposed to think of themselves as people who would not cheat: ethical folks who would resist temptation and stand up to cheating when others do it. These training exercises bring out some of the particular challenges that might be associated with making good ethical decisions (many of them connected to seeing a bit further down the causal chain to anticipate the likely consequences of your choices), but they tend to position the cheater as just part of the environment to which the ethical researcher must respond.

I think this is a mistake. I think there may be something valuable in being able to view those who commit misconduct as more than mere antagonists or monsters.

Part of what makes “The Lab” a useful exercise is that it presents situations with a number of choices available to us, some easier and some harder, some likely to lead to interactions that are more honest and fair and others more likely to lead to problems. In real life, though, we don’t usually have the option of rewinding time and choosing a different option if our first choice goes badly. Nor do we have assurance that we’ll end up being the good guys.

It’s important to understand the temptations that cheaters felt — the circumstances that made their unethical behaviors seem expedient, or rational, or necessary. Casting cheaters as monsters glosses over our own human vulnerability to these bad choices, which will surely make the temptations harder to handle when we encounter them. Moreover, understanding cheaters as humans (just like the scientists who haven’t cheated) rather than “other” in some fundamental way lets us examine those temptations and then collectively create working environments with fewer of them. Though it’s part of a different discussion, Ashe Dryden describes the dangers of “othering” here quite well:

There is no critical discussion about what leads to these incidents — what parts of our culture allow these things to go unchecked for so long, how pervasive they are, and how so much of this is rewarded directly or indirectly. …

It’s important to notice what is happening here: by declaring that the people doing these things are others, it removes the need to examine our own actions. The logic assumed is that only bad people do these things and we aren’t bad people, so we couldn’t do something like this. Othering effectively absolves ourselves of any blame.

The dramatic arc of “The Lab” is definitely not centered on the cheater’s redemption, nor on cultivating empathy for him, and in the context of the particular training it offers, that’s fine. Sometimes one’s first priority is protecting or repairing the integrity of the scientific record, or ensuring a well-functioning scientific community by isolating a member who has proven himself untrustworthy.

But that member of the community whom we’re isolating, or rehabilitating, is connected to the community — connected to us — in complicated ways. Misconduct doesn’t just happen, but neither is it the case that, when someone commits it, it’s just a matter of the choices and actions of an individual in a vacuum.

The community is participating in creating the environment in which people commit misconduct. Trying to understand the ways in which behaviors, expectations, formal and informal reward systems, and the like can encourage big ethical transgressions or desensitize people to “little” lapses may be a crucial step to creating an environment where fewer people commit misconduct, whether because the cost of doing so is too high or the payoff for doing so (if you get away with it) is too low.

But seeing members of the community as connected in this way requires not seeing the research environment as static and unchangeable — and not seeing those in the community who commit misconduct as fundamentally different creatures from those who do not.

All of this makes me think that part of the voluntary exclusion deals between people who have committed misconduct and the ORI should be an allocution, in which the wrongdoer spells out the precise circumstances of the misconduct, including the pressures in the foreground when the wrongdoer chose the unethical course. This would not be an excuse but an explanation, a post-mortem of the misconduct available to the community for inspection and instruction. Ideally, others might recognize familiar situations in the allocution and then consider how close their own behavior in such situations has come to crossing ethical lines, as well as what factors seemed to help them avoid crossing those lines. Researchers could also think together about what gives rise to these situations and the temptations within them, and explore whether common practices can be tweaked to remove some of the temptations while still supporting knowledge-building and knowledge builders.

Casting cheaters as monsters doesn’t do much to help people make good choices in the face of difficult circumstances. Ignoring the ways we contribute to creating those circumstances doesn’t help, either — and may even increase the risk that we’ll become like the “monsters” we decry.

Conduct of scientists (and science writers) can shape the public’s view of science.

Scientists undertake a peculiar kind of project. In striving to build objective knowledge about the world, they are tacitly recognizing that our unreflective picture of the world is likely to be riddled with mistakes and distortions. Yet they frequently come to regard themselves as better thinkers — as more reliably objective — than humans who are not scientists, and end up forgetting that they have biases and blind spots of their own which they are helpless to detect without help from others who don’t share those particular biases and blind spots.

Building reliable knowledge about the world requires good methodology, teamwork, and concerted efforts to ensure that the “knowledge” you build doesn’t simply reify preexisting individual and cultural biases. It’s hard work, but it’s important to do it well — especially given a long history of “scientific” findings being used to justify and enforce preexisting cultural biases.

I think this bigger picture is especially appropriate to keep in mind in reading the response from Scientific American Blogs Editor Curtis Brainard to criticisms of a pair of problematic posts on the Scientific American Blog Network. Brainard writes:

The posts provoked accusations on social media that Scientific American was promoting sexism, racism and genetic determinism. While we believe that such charges are excessive, we share readers’ concerns. Although we expect our bloggers to cover controversial topics from time to time, we also recognize that sensitive issues require extra care, and that did not happen here. The author and I have discussed the shortcomings of the two posts in detail, including the lack of attention given to countervailing arguments and evidence, and he understood the deficiencies.

As stated at the top of every post, Scientific American does not always share the views and opinions expressed by our bloggers, just as our writers do not always share our editorial positions. At the same time, we realize our network’s bloggers carry the Scientific American imprimatur and that we have a responsibility to ensure that—differences of opinion notwithstanding—their work meets our standards for accuracy, integrity, transparency, sensitivity and other attributes.

(Bold emphasis added.)

The problem here isn’t that the posts in question advocated sound scientific views with implications that people on social media didn’t like. Rather, the posts presented claims in a way that made them look like they had much stronger scientific support than they really do — and did so in the face of ample published scientific counterarguments. Scientific American is not requiring that posts on its blog network meet a political litmus test, but rather that they embody the same kind of care, responsibility to the available facts, and intellectual honesty that science itself should display.

This is hard work, but it’s important. And engaging seriously with criticism, rather than just dismissing it, can help us do it better.

There’s an irony in the fact that one of the problematic posts, which ignored some significant relevant scientific literature (helpfully cited by commenters on that very post), was ignoring that literature in the service of defending Larry Summers and his remarks on possible innate biological causes that make men better at math and science than women. The irony lies in the fact that Summers displayed an apparently ironclad commitment to ignoring any and all empirical findings that might challenge his intuition that there’s something innate at the heart of the gender imbalance in math and science faculty.

Back in January of 2005, Larry Summers gave a speech at a conference about what can be done to attract more women to the study of math and science, and to keep them in the field long enough to become full professors. In his talk, Summers suggested as a possible hypothesis for the relatively low number of women in math and science careers that there may be innate biological factors that make males better at math and science than females. (He also related an anecdote about his daughter naming her toy trucks as if they were dolls, but it’s fair to say he probably meant this anecdote to be illustrative rather than evidentiary.)

The talk did not go over well with the rest of the participants in the conference.

Several scientific studies were presented at the conference before Summers made his speech. All these studies presented significant evidence against the claim of an innate difference between males and females that could account for the “science gap”.

In the aftermath of this conference of yore, there were some commenters who lauded Summers for voicing “unpopular truths” and others who distanced themselves from his claims but said they supported his right to make them as an exercise of “academic freedom.”

But if Summers was representing himself as a scientist* when he made his speech, I don’t think the “academic freedom” defense works.

Summers is free to state hypotheses — even unpopular hypotheses — that might account for a particular phenomenon. But, as a scientist, he is also responsible for taking account of the data relevant to his hypotheses. If the data weighs against his preferred hypothesis, intellectual honesty requires that he at least acknowledge this fact. Some would argue that it could even require that he abandon his hypothesis (since science is supposed to be evidence-based whenever possible).

When news of Summers’ speech, and reactions to it, was fresh, one of the details that stuck with me was that a conference organizer pointed out to Summers, after his speech, that there was a large body of evidence — some of it presented at that very conference — that seemed to undermine his hypothesis. Summers’ reply amounted to, “Well, I don’t find those studies convincing.”

Was Summers within his rights to not believe these studies? Sure. But, he had a responsibility to explain why he rejected them. As a part of a scientific community, he can’t just reject a piece of scientific knowledge out of hand. Doing so comes awfully close to undermining the process of communication that scientific knowledge is based upon. You aren’t supposed to reject a study because you have more prestige than the authors of the study (so, you don’t have to care what they say). You can question the experimental design, you can question the data analysis, you can challenge the conclusions drawn, but you have to be able to articulate the precise objection. Surely, rejecting a study just because it doesn’t fit with your preferred hypothesis is not an intellectually honest move.

By my reckoning, Summers did not conduct himself as a responsible scientist in this incident. But I’d argue that the problem went beyond a lack of intellectual honesty within the universe of scientific discourse. Summers is also responsible for the bad consequences that flowed from his remark.

The bad consequence I have in mind here is the mistaken view of science and its workings that Summers’ conduct conveys. Especially by falling back on a plain vanilla “academic freedom” defense here, defenders of Summers conveyed to the public at large the idea that any hypothesis in science is as good as any other. Scientists who are conscious of the evidence-based nature of their field will see the absurdity of this idea — some hypotheses are better, others worse, and whenever possible we turn to the evidence to make these discriminations. Summers compounded ignorance of the relevant data with what came across as a statement that he didn’t care what the data showed. From this, the public at large could assume he was within his scientific rights to decide which data to care about without giving any justification for this choice**, or they could infer that data has little bearing on the scientific picture of the world.

Clearly, such a picture of science would undermine the standing of the knowledge produced by scientists far more intellectually honest than Summers.

Indeed, we might go further here. Not only did Summers have some responsibilities that seemed to have escaped him while he was speaking as a scientist, but we could argue that the rest of the scientists (whether at the conference or elsewhere) have a collective responsibility to address the mistaken picture of science his conduct conveyed to society at large. It’s disappointing that, nearly a decade later, we instead have to contend with the problem of scientists following in Summers’ footsteps by ignoring, rather than engaging with, the scientific findings that challenge their intuitions.

Owing to the role we play in presenting a picture of what science knows and of how scientists come to know it to a broader audience, those of us who write about science (on blogs and elsewhere) also have a responsibility to be clear about the kind of standards scientists need to live up to in order to build a body of knowledge that is as accurate and unbiased as humanly possible. If we’re not clear about these standards in our science writing, we risk misleading our audiences about the current state of our knowledge and about how science works to build reliable knowledge about our world. Our responsibility here isn’t just a matter of noticing when scientists are messing up — it’s also a matter of acknowledging and correcting our own mistakes and of working harder to notice our own biases. I’m pleased that our Blogs Editor is committed to helping us fulfill this duty.
_____
*Summers is an economist, and whether to regard economics as a scientific field is a somewhat contentious matter. I’m willing to give the scientific status of economics the benefit of the doubt, but this means I’ll also expect economists to conduct themselves like scientists, and will criticize them when they do not.

**It’s worth noting that a number of the studies that Summers seemed to be dismissing out of hand were conducted by women. One wonders what lessons the public might draw from that.

_____
A portion of this post is an updated version of an ancestor post on my other blog.