Communicating with the public, being out as a scientist.

In the previous post, I noted that scientists are not always directly engaged in the project of communicating about their scientific findings (or about the methods they used to produce those findings) to the public.

Part of this is a matter of incentives: most scientists don’t have communicating with the public as an explicit part of their job description, and they are usually better rewarded for paying attention to things that are explicit parts of their job descriptions. Part of it is training: scientists are generally taught a whole lot more about how to conduct research in their field than they are taught about effective strategies for communicating with non-scientists. Part of it is the presence of other professions (like journalists and teachers and museum curators) that are, more or less, playing the communicating-with-the-public-about-science zone. Still another part of it may be temperament: some people say that they went into science because they wanted to do research, not to deal with people. Of course, since doing research requires dealing with other people sooner or later, I’m guessing these folks are either terribly bitter that scientific research did not support their preferred lifestyle of total isolation from human contact, or that they really meant they didn’t want to deal with people who are non-scientists.

I’d like to suggest, however, that there are very good reasons for scientists to be communicating about science with non-scientists — even if it’s not a job requirement, and there are other people playing that zone, and it doesn’t feel like it comes naturally.

The public has an interest in understanding more than it does about what science knows and how science comes to know it, about which claims are backed by evidence and which by wishful thinking or outright deception. But it’s hard to engage an adult as you would a student; members of the public are frequently just not up for didactic engagement. Dropping a lecture about what you perceive as their ignorance (or their “knowledge deficit,” as the people who study scientific communication and public understanding of science would call it) probably won’t be a welcome form of engagement.

In general, non-scientists neither need nor want to be able to evaluate scientific claims and evidence with the technical rigor with which scientists evaluate them. What they need more is a read on whether the scientists whose job it is to make and evaluate these claims are the kind of people they can trust.

This seems to me like a good reason for scientists to come out as scientists to their communities, their families, their friends.

Whenever there are surveys of how many Americans can name a living scientist, a significant proportion of the people surveyed just can’t name any. But I suspect a bunch of these people know actual, living scientists who walk in their midst — they just don’t know that these folks they know as people are also scientists.

If everyone who is a scientist were to bring that identity to their other human interactions, to let it be a part of what the neighbors, or the kids whose youth soccer team they coach, or the people at the school board meeting, or the people at the gym know about them, what do you think that might do to the public’s picture of who scientists are and what scientists are like? What could letting your scientific identity ride along with the rest of you do to help your non-scientist fellow travelers get an idea of what scientists do, or of what inspires them to do science? Could being open about your ties to science help people who already have independent reasons to trust you find reasons to be less reflexively distrustful of science and scientists?

These seem to me like empirical questions. Let’s give it a try and see what we find out.

Are scientists who don’t engage with the public obliged to engage with the press?

In posts of yore, we’ve had occasion to discuss the duties scientists may have to the non-scientists with whom they share a world. One of these is the duty to share the knowledge they’ve built with the public — especially if that knowledge is essential to the public’s ability to navigate pressing problems, or if the public has put up the funds for the research in which that knowledge was built.

Even if you’re inclined to think that what we have here is something that falls short of an obligation, there are surely cases where it would have good effects — not just for the public, but also for scientists — if the public were informed of important scientific findings. After all, if lacking a key piece of knowledge, or not understanding its implications or how certain or uncertain it is, leads the public to make worse decisions (whether at the ballot box or in their everyday lives), the impacts of those worse decisions could also harm the scientists with whom they are sharing a world.

But here’s the thing: Scientists are generally trained to communicate their knowledge through journal articles and conference presentations, seminars and grant proposals, patent applications and technical documents. Moreover, these tend to be the kind of activities in scientific careers that are rewarded by the folks making the evaluations, distributing grant money, and cutting the paychecks. Very few scientists get explicit training in how to communicate with the public about their scientific findings, or about the processes by which the knowledge is built. Some scientists manage to do a good job of this despite the lack of training; others do less well. And many scientists will note that there are hardly enough hours in the day to tackle all the tasks that are recognized and rewarded in their official scientific job descriptions without adding “communicating science to the public” to the stack.

As a result, much of the job of communicating to the public about scientific research and new scientific findings falls to the press.

This raises another question for scientists: If scientists have a duty to make sure the knowledge they build is shared with the public (or at least a strong interest in seeing that it is), and if scientists themselves are not taking on the communicative task of sharing it (whether because they don’t have the time or they don’t have the skills to do it effectively), do scientists have an obligation to engage with the press to whom that communicative task has fallen?

Here, of course, we encounter some longstanding distrust between scientists and journalists. Scientists sometimes worry that the journalists taking on the task of making scientific findings intelligible to the public don’t themselves understand the scientific details (or scientific methodology more generally) much better than the public does. Or, they may worry about helping a science journalist who has already decided on the story they are going to tell and who will gleefully ignore or distort facts in the service of telling that story. Or, they may worry that the discovery-of-the-week model of science that journalists frequently embrace distorts the public’s understanding of the ongoing cooperative process by which a body of scientific knowledge is actually built.

To the extent that scientists believe journalists will manage to get things wrong, they may feel like they do less harm to the public’s understanding of science if they do not engage with journalists at all.

While I think this is an understandable impulse, I don’t think it necessarily minimizes the harm.

Indeed, I think it’s useful for scientists to ask themselves: What happens if I don’t engage and journalists try to tell the story anyway, without input from scientists who know this area of scientific work and why it matters?

Of course, I also think it would benefit scientists, journalists, and the public if scientists got more support here, from training in how to work with journalists, to institutional support in their interactions with journalists, to more general recognition that communicating about science with broader audiences is a good thing for scientists (and scientific institutions) to be doing. But in a world where “public outreach” falls much further down on the scientist’s list of pressing tasks than does bringing in grant money, training new lab staff, and writing up results for submission, science journalists are largely playing the zone where communication of science to the public happens. Scientists who are playing other zones should think about how they can support science journalists in covering their zone effectively.

When your cover photo says less about the story and more about who you imagine you’re talking to.

The choice of cover for the most recent issue of Science was not good. This provoked strong reactions and, eventually, an apology from Science’s editor-in-chief. It’s not the worst apology I’ve seen in recent days, but my reading of it suggests that there’s still a gap between the reactions to the cover and the editorial team’s grasp of those reactions.

So, in the interests of doing what I can to help close that gap, I give you the apology (in block quotes) and my response to it:

From Science Editor-in-Chief Marcia McNutt:

Science has heard from many readers expressing their opinions and concerns with the recent [11 July 2014] cover choice.

The cover showing transgender sex workers in Jakarta was selected after much discussion by a large group

I suppose the fact that the choice of the cover was discussed by many people for a long time (as opposed to by one person with no discussion) is good. But it’s no guarantee of a good choice, as we’ve seen here. It might be useful to tell readers more about what kind of group was involved in making the decision, and what kind of discussion led to the choice of this cover over the other options that were considered.

and was not intended to offend anyone,

Imagine my relief that you did not intend what happened in response to your choice of cover. And, given how predictable the response to your cover was, imagine my estimation of your competence in the science communication arena dropping several notches. How well do you know your audience? Who exactly do you imagine that audience to be? If you’re really not interested in reaching out to people like me, can I get my AAAS dues refunded, please?

but rather to highlight the fact that there are solutions for the AIDS crisis for this forgotten but at-risk group. A few have indicated to me that the cover did exactly that,

For them. For them the cover highlighted transgender sex workers as a risk group who might get needed help from research. So, there was a segment of your audience for whom your choice succeeded, apparently.

but more have indicated the opposite reaction: that the cover was offensive because they did not have the context of the story prior to viewing it, an important piece of information that was available to those choosing the cover.

Please be careful with your causal claims here. Even with the missing context provided, a number of people still find the cover harmful. This explanation of the harm, in the context of what the scientific community and the wider world can be like for a trans*woman, spells it out pretty eloquently.

The problem, in other words, goes deeper than the picture not effectively conveying your intended context. Instead, the cover communicated layers of context about who you imagine as your audience — and about whose reality is not really on your radar.

The people who are using social media to explain the problems they have with this cover are sharing information about who is in your audience, about what our lives in and with science are like. We are pinging you so we will be on your radar. We are trying to help you.

I am truly sorry for any discomfort that this cover may have caused anyone,

Please do not minimize the harm your choice of cover caused by describing it as “discomfort”. Doing so suggests that you still aren’t recognizing how this isn’t an event happening in a vacuum. That’s a bad way to support AAAS members who are women and to broaden the audience for science.

and promise that we will strive to do much better in the future to be sensitive to all groups and not assume that context and intent will speak for themselves.

What’s your action plan going forward? Is there good reason to think that simply trying hard to do better will get the job done? Or are you committed enough to doing better that you’re ready to revisit your editorial processes, the diversity of your editorial team, the diversity of the people beyond that team whose advice and feedback you seek and take seriously?

I’ll repeat: We are trying to help you. We criticize this cover because we expect more from Science and AAAS. This is why people have been laboring, patiently, to spell out the problems.

Please use those patient explanations and formulate a serious plan to do better.

* * * * *
For this post, I’m not accepting comments. There is plenty of information linked here for people to read and digest, and my sense is this is a topic where thinking hard for a while is likely to be more productive than jumping in with questions that the reading, digesting, and hard thinking could themselves serve to answer.

Reflections on being part of a science blogging network.

This is another post following up on a session at ScienceOnline Together 2014, this one called Blog Networks: Benefits, Role of, Next Steps, and moderated by Scientific American Blogs Editor Curtis Brainard. You should also read David Zaslavsky’s summary of the session and what people were tweeting on the session hashtag, #scioBlogNet.

My own thoughts are shaped by writing an independent science blog that less than a year later became part of one of the first “pro” science blogging networks when it launched in January 2006, moving my blog from that network to a brand new science blogging community in August 2010, and keeping that blog going while starting Doing Good Science here on the Scientific American Blog Network when it launched in July 2011. This is to say, I’ve been blogging in the context of science blogging networks for a long time, and have seen the view from a few different vantage points.

That said, my view is also very particular and likely peculiar — for example, I’m a professional philosopher (albeit one with a misspent scientific youth) blogging about science while trying to hold down a day-job as a professor in a public university during a time of state budget terror and to maintain a reasonable semblance of family life. My blogging is certainly more than a hobby — in many ways it provides vital connective tissue that helps knit together my weirdly interdisciplinary professional self into a coherent whole (and has thus been evaluated as a professional activity for the day-job) — but, despite the fact that I’m a “pro” who gets paid to blog here, it’s not something I could live on.

In my experience, a science blogging network can be a great place to get visibility and to build an audience. This can be especially useful early in one’s blogging career, since it’s a big, crowded blogosphere out there. Networks can also be handy for readers, since they deliver more variety and more of a regular flow of posts than most individual bloggers can manage (especially when we’re under the weather and/or catching up on grading backlogs). It’s worth noting, though, that very large blog networks can provide a regular flow of content that frequently resembles a firehose. Some blog networks provide curation in the form of featured content or topical feeds. Many provide something like quality control, although sometimes it’s exercised primarily in the determination of who will blog in the network.

Blog networks can also have a distinctive look and feel, embodied in shared design elements, or in an atmosphere set within the commenting community, for example. Bloggers within blog networks may have an easier time finding opportunities for productive cross-pollination or coordination of efforts with their network neighbors, whether to raise political awareness or philanthropic dollars or simply to contribute many distinctive perspectives to the discussion of a particular topic. Bloggers sharing networks can also become friends (although sometimes, being humans, they develop antagonisms instead).

On a science blogging network, bloggers seem also to regularly encounter the question of what counts as a proper “science blog” — about whose content is science-y enough, and what exactly that should mean. This kind of policing of boundaries happens even here.

While the confluence of different people blogging on similar terrain can open up lots of opportunities for collaboration, there are moments when the business of running a blog network (at least when that blog network is a commercial enterprise) can be in tension with what the bloggers value about blogging in the network. Sometimes the people running the network aren’t the same as the people writing the blogs, and they end up having very different visions, interests, pressing needs, and understandings of their relationships to each other.

Sometimes bloggers and networks grow apart and can’t give each other what they need for the relationship to continue to be worthwhile going forward.

And, while blogging networks can be handy, there are other ways that online communicators and consumers of information can find each other and coordinate their efforts online. Twitter has seen the rise of tremendously productive conversations around hashtags like #scistuchat and #BlackandSTEM, and undoubtedly similarly productive conversations among science-y folk regularly coalesce on Facebook and Tumblr and in Google Hangouts. Some of these online interactions lead to face-to-face collaborations like the DIY Science Zone at GeekGirlCon and conference proposals made to traditional professional societies that get their start in online conversations.

Networks can be nice. They can even help people transition from blogging into careers in science writing and outreach. But even before blog networks, awesome people managed to find each other and to come up with awesome projects to do together. Networks can lower the activation energy for this, but there are other ways to catalyze these collaborations, too.

Brief thoughts on uncertainty.

For context, these thoughts follow upon a very good session at ScienceOnline Together 2014 on “How to communicate uncertainty with the brevity that online communication requires.” Two of the participants in the session used Storify to collect tweets of the discussion (here and here).

About a month later, this does less to answer the question of the session title than to give you a peek into my thoughts about science and uncertainty. This may be what you’ve come to expect of me.

Humans are uncomfortable with uncertainty, at least in those moments when we notice it and where we have to make decisions that have more than entertainment value riding on them. We’d rather have certainty, since that makes it easier to enact plans that won’t be thwarted.

Science is (probably) a response to our desire for more certainty. Finding natural explanations for natural phenomena, stable patterns in our experience, gives us a handle on our world and what we can expect from it that’s less capricious than “the gods are in a mood today.”

But the scientific method isn’t magic. It’s a tool that cranks out explanations of what’s happened, predictions of what’s coming up, based on observations made by humans with our fallible human senses.

The fallibility of those human senses (plus things like the trickiness of being certain you’re awake and not dreaming) was (probably) what drove philosopher René Descartes in his famous Meditations, the work that yielded the conclusion “I think, therefore I am” and that featured not one but two proofs of the existence of a God who is not a deceiver. Descartes was not pursuing a theological project here. Rather, he was trying to explain how empirical science — science relying on all kinds of observations made by fallible humans with their fallible senses — could possibly build reliable knowledge. Trying to put empirical science on firm foundations, he engaged in his “method of doubt” to locate some solid place to stand, some thing that could not be doubted. That something was “I think, therefore I am” — in other words, if I’m here doubting that my experience is reliable, that I’m awake instead of dreaming, that I’m a human being rather than a brain in a vat, I can at least be sure that there exists a thinking thing that’s doing the doubting.

From this fact that could not be doubted, Descartes tried to climb back out of that pit of doubt and to work out the extent to which we could trust our senses (and the ways in which our senses were likely to mislead us). This involved those two proofs of the existence of a God who is not a deceiver, plus a whole complicated story of minds and brains communicating with each other (via the wiggling of our pineal glands) — which is to say, it was not entirely persuasive. Still, it was all in the service of getting us more certainty from our empirical science.

Certainty and its limits are at the heart of another piece of philosophy, “the problem of induction,” this one most closely associated with David Hume. The problem here rests on our basic inability to be certain that what we have so far observed of our world will be a reliable guide to what we haven’t observed yet, that the future will be like the past. Observing a hundred, or a thousand, or a million ravens that are black is not enough for us to conclude with absolute certainty that the ravens we haven’t yet observed must also be black. Just because the sun rose today, and yesterday, and every day through recorded human history to date does not guarantee that it will rise tomorrow.

But while Hume pointed out the limits of what we could conclude with certainty from our observations at any given moment — limits which impelled Karl Popper to assert that the scientific attitude was one of trying to prove hypotheses false rather than seeking support for them — he also acknowledged our almost irresistible inclination to believe that the future will be like the past, that the patterns of our experience so far will be repeated in the parts of the world still waiting for us to experience them. Logic can’t guarantee these patterns will persist, but our expectations (especially in cases where we have oodles of very consistent observations) feel like certainty.

Scientists are trained to recognize the limits of their certainty when they draw conclusions, offer explanations, make predictions. They are officially on the hook to acknowledge their knowledge claims as tentative, likely to be updated in the light of further information.

This care in acknowledging the limits of what careful observation and logical inference guarantee us can make it appear to people who don’t obsess over uncertainties in everyday life that scientists don’t know what’s going on. But the existence of some amount of uncertainty does not mean we have no idea what’s going on, no clue what’s likely to happen next.

What non-scientists who dismiss scientific knowledge claims on the basis of acknowledged uncertainty forget is that making decisions in the face of uncertainty is the human condition. We do it all the time. If we didn’t, we’d make no decisions at all (or else we’d be living a sustained lie about how clearly we see into our future).

Strangely, though, we seem to have a hard time reconciling our everyday pragmatism about everyday uncertainty with our suspicion about the uncertainties scientists flag in the knowledge they share with us. Maybe we’re making the jump from viewing scientific knowledge as reliable to demanding that it be perfect. Or maybe we’re just not very reflective about how easily we navigate uncertainty in our everyday decision-making.

I see this firsthand when my “Ethics in Science” students grapple with ethics case studies. At first they are freaked out by the missing details, the less-than-perfect information about what will happen if the protagonist does X or if she does Y instead. How can we make good decisions about what the protagonist should do if we can’t be certain about those potential outcomes?

My answer to them: The same way we do in real life, whose future we can’t see with any more certainty.

When there’s more riding on our decisions, we’re more likely to notice the gaps in the information that informs those decisions, the uncertainty inherent in the outcomes that will follow on what we decide. But we never have perfect information, and neither do scientists. That doesn’t mean our decision-making is hopeless, just that we need to get comfortable making do with the certainty we have.

The line between persuasion and manipulation.

As this year’s ScienceOnline Together conference approaches, I’ve been thinking about the ethical dimensions of using empirical findings from psychological research to inform effective science communication (or really any communication). Melanie Tannenbaum will be co-facilitating a session about using such research findings to guide communication strategies, and this year’s session is nicely connected to a session Melanie led with Cara Santa Maria at last year’s conference called “Persuading the Unpersuadable: Communicating Science to Deniers, Cynics, and Trolls.”

In that session last year, the strategy of using empirical results from psychology to help achieve success in a communicative goal was fancifully described as deploying “Jedi mind tricks”. Achieving success in communication was cast in terms of getting your audience to accept your claims (or at least getting them not to reject your claims out of hand because they don’t trust you, or don’t trust the way you’re engaging with them, or whatever). But if you have the cognitive launch codes, as it were, you can short-circuit distrust, cultivate trust, help them end up where you want them to end up when you’re done communicating what you’re trying to communicate.

Jason Goldman pointed out to me that these “tricks” aren’t really that tricky — it’s not like you flash the Queen of Diamonds and suddenly the person you’re talking to votes for your ballot initiative or buys your product. As Jason put it to me via email, “From a practical perspective, we know that presenting reasons is usually ineffective, and so we wrap our reasons in narrative – because we know, from psychology research, that storytelling is an effective device for communication and behavior change.”

Still, using a “trick” to get your audience to end up where you want them to end up — even if that “trick” is simply empirical knowledge that you have and your audience doesn’t — sounds less like persuasion than manipulation. People aren’t generally happy about the prospect of being manipulated. Intuitively, manipulating someone else gets us into ethically dicey territory.

As a philosopher, I’m in a discipline whose ideal is that you persuade by presenting reasons for your interlocutor to examine, arguments whose logical structure can be assessed, premises whose truth (or at least likelihood) can be evaluated. I daresay scientists have something like the same ideal in mind when they present their findings or try to evaluate the scientific claims of others. In both cases, there’s the idea that we should be making a concerted effort not to let tempting cognitive shortcuts get in the way of reasoning well. We want to know about the tempting shortcuts (some of which are often catalogued as “informal fallacies”) so we can avoid falling into them. Generally, it’s considered sloppy argumentation (or worse) to try to tempt our audience with those shortcuts.

How much space is there between the tempting cognitive shortcuts we try to avoid in our own reasoning and the “Jedi mind tricks” offered to us to help us communicate, or persuade, or manipulate more effectively? If we’re taking advantage of cognitive shortcuts (or switches, or whatever the more accurate metaphor would be) to increase the chances that people will accept our factual claims, our recommendations, our credibility, etc., can we tell when we’ve crossed the line between persuasion and manipulation? Can we tell when it’s the cognitive switch that’s doing the work rather than the sharing of reasons?

It strikes me as even more ethically problematic if we’re using these Jedi mind tricks while concealing the fact that we’re using them from the audience we’re using them on. There’s a clear element of deception in doing that.

Now, possibly the Jedi mind tricks work equally well if we disclose to our audience that we’re using them and how they work. In that case, we might be able to use them to persuade without being deceptive — and it would be clear to our audience that we were availing ourselves of these tricks, and that our goal was to get them to end up in a particular place. It would be kind of weird, though, perhaps akin to going to see a magician knowing full well that she would be performing illusions and that your being fooled by those illusions is a likely outcome. (Wouldn’t this make us more distrustful in our communicative interactions, though? If you know about the switches and it’s still the case that they can be used against you, isn’t that the kind of thing that might make you want to block lots of communication before it can even happen?)

As a side note, I acknowledge that there might be some compelling extreme cases in which the goal of getting the audience to end up in a particular place — e.g., revealing to you the location of the ticking bomb — is so urgent that we’re prepared to swallow our qualms about manipulating the audience to get the job done. I don’t think that the normal stakes of our communications are like this, though. But there may be some cases where how high the stakes really are is one of the places we disagree. Jason suggests vaccine acceptance or refusal might be important enough that the Jedi mind tricks shouldn’t set off any ethical alarms. I’ll note that vaccine advocates using a just-the-empirical-facts approach to communication are often accused or suspected of having some undisclosed financial conflict of interest that is motivating them to try to get everyone vaccinated — that is, they’re not using the Jedi mind trick social psychologists think could help them persuade their target audience and yet that audience thinks they’re up to something sneaky. That’s a pretty weird situation.

Does our cognitive make-up as humans make it possible to get closer to exchanging and evaluating reasons rather than just pushing each other’s cognitive buttons? If so, can we achieve better communication without the Jedi mind tricks?

Maybe it would require some work to change the features of our communicative environment (or of the environment in which we learn how to reason about the world and how to communicate and otherwise interact with others) to help our minds more reliably work this way. Is there any empirical data on that? (If not, is this a research question psychologists are asking?)

Some of these questions tread dangerously close to the question of whether we humans can actually have free will — and that’s a big bucket of metaphysical worms that I’m not sure I want to dig into right now. I just want to know how to engage my fellow human beings as ethically as possible when we communicate.

These are some of the questions swirling around my head. Maybe next week at ScienceOnline some of them will be answered — although there’s a good chance some more questions will be added to the pile!

Professors, we need you to do more!

…though we can’t be bothered to notice all the work you’re already doing, to acknowledge the ways in which the explicit and implicit conditions of your employment make it extremely difficult to do it, or the ways in which other cultural forces, including the pronouncements of New York Times columnists, make the “more” we’re exhorting you to do harder by alienating the public you’re meant to help from both “academics” and “intellectuals”.

In his column in the New York Times, Nicholas Kristof asserts that most university professors “just don’t matter in today’s great debates,” claiming that instead of stepping up to be public intellectuals, academics have marginalized themselves.

Despite what you may have heard in the school-yard or the op-ed pages, most of us who become university professors (even in philosophy) don’t do so to cloister ourselves from the real world and its cares. We do not become academics to sideline ourselves from public debates nor to marginalize ourselves.

So, as you might guess, I have a few things to say to Mr. Kristof here.

Among other things, Kristof wants professors to do more to engage the public. He writes:

Professors today have a growing number of tools available to educate the public, from online courses to blogs to social media. Yet academics have been slow to cast pearls through Twitter and Facebook.

A quick examination of the work landscape of a professor might shed some light on this slowness.

Our work responsibilities — and the activities on which we are evaluated for retention, tenure, and promotion — can generally be broken into three categories:

  • Research, the building of new knowledge in a discipline as recognized by peers in that discipline (e.g., via peer-review on the way to publication in a scholarly journal).
  • Teaching, the transmission of knowledge in a discipline (including strategies for building more knowledge) to students, whether those majoring in the discipline or studying it at the graduate level in order to become knowledge-builders themselves, or others taking courses to support their general education.
  • Service, generally cast as service to the discipline or service to the university, which often amounts to committee work, journal editing, and the like.

Research — the knowledge-building that academics do — is something Kristof casts as problematic:

academics seeking tenure must encode their insights [from research] into turgid prose. As a double protection against public consumption, this gobbledygook is then sometimes hidden in obscure journals — or published by university presses whose reputations for soporifics keep readers at a distance.

This ignores the academics who strive to write clearly and accessibly even when writing for an audience of their peers (not to mention the efforts of peer-reviewers to encourage more clear and accessible writing from the authors whose manuscripts they review). It also ignores the significant number of academics involved in efforts to bring the knowledge they build from behind the paywalls of closed-access journals to the public.

And, it ignores that the current structures of retention, tenure, and promotion, of hiring, of grant-awarding, keep score with metrics like impact factors that entrench the primacy of a conversation in the pages of peer-reviewed journals while making other conversations objectively worthless — at least from the point of view of the evaluation on which one’s academic career flourishes or founders.

A bit earlier in the column, Kristof includes a quote from Middle East specialist Will McCants that makes this point:

If the sine qua non for academic success is peer-reviewed publications, then academics who “waste their time” writing for the masses will be penalized.

Yet even as Kristof notes that those trying to rebel against the reward system built into the tenure process “are too often crushed or driven away,” he seems to miss the point that exhorting academics to rebel against it anyway sounds like bad advice.

This is especially true in a world where academics lucky enough to have tenure-track jobs are keenly aware of the “excess PhDs” caught in the eternal cycle of postdoctoral appointments or conscripted in the army of adjuncts. Verily, there are throngs of people with the education, the intelligence, and the skills to be public intellectuals but who are scraping by on low pay, oppressively long hours, and the kind of deep uncertainty that comes with a job that is “temporary” by design.

If the public needs professors to be sharing their knowledge more directly, Nicholas Kristof, please explain how professors can do so without paying a high professional price. Where are the additional hours in the academic day for the “public intellectual” labor you want them to do (since they will still be expected to participate fully in the knowledge-building and discourse within their disciplinary community)? How will you encourage more professors to step up after the first wave taking your marching orders is denied tenure, or denied grants, or collapses from exhaustion?

More explicit professional recognition — professional credit — for academics engaging with the public would be a good thing. But to make it happen in a sustainable way, you need a plan. And getting buy-in from the administrators who shape and enforce the current systems of professional rewards and punishments makes more sense than exhorting the professors subject to that system to ignore the punishments they’re likely to face — especially at a moment when there are throngs of new and seasoned Ph.D.s available to replace the professors who run afoul of the system as it stands.

Kristof doesn’t say much about teaching in his column, though this is arguably a place where academics regularly do outreach to the segment of the public that shows up in the classroom. Given how few undergraduates go on to be academics themselves, this opportunity for engagement can be significant. Increasingly, though, we university teachers are micromanaged and “assessed” by administrators and committees in response to free-floating anxiety about educational quality and pressure to bring “No Child Left Behind”-style oversight and high-stakes testing to higher ed. Does this increase our ability to put knowledge and insights from our discipline into real-world contexts that matter to our students — that help them broaden their understanding of the challenges that face us individually and collectively, and of different disciplinary strategies for facing them, not just to serve their future employers’ goals, but to serve their own? In my experience, it does not.

Again, if Kristof wants better engagement between academics and the public — which, presumably, includes the students who show up in the classroom and will, in their post-college lives, be part of the public — he might get better results by casting some light on the forces that derail engagement in college teaching.

Despite all these challenges, the fact is that many academics are already engaging the public. However, Nicholas Kristof seems not to have noticed this. He writes:

Professors today have a growing number of tools available to educate the public, from online courses to blogs to social media. Yet academics have been slow to cast pearls through Twitter and Facebook.

The academics who have been regularly engaging with the public on Facebook and Twitter and G+ and YouTube and blogs and podcasts — many of us for years — would beg to differ with this assessment. Check out the #EngagedAcademics hashtag for a sampling of the response.

As well, there are academics writing for mass-circulation publications, whether online or in dead-tree form, working at science festivals and science fairs, going into elementary and secondary school classrooms, hosting or participating in local events like Café Scientifique or Socrates Café, going on radio or TV programs, writing letters to the editors of their local papers, going to town council and school board meetings.

Either all of this sort of engagement is invisible to Nicholas Kristof, or he thinks it doesn’t really count towards the work of being a public intellectual.

I wonder if this is because Kristof has in mind public intellectuals who have a huge reach and an immediate impact. If so, it would be good to ask who controls the microphone and why the academics from whom Kristof wants more aren’t invited to use it. It should be noted here that the New York Times, where Kristof has a regular column, is a pretty big microphone.

Also, it’s worth asking whether there’s good (empirical) reason to believe that one-to-many communication by academics who do have access to a big microphone is a better way to serve the needs of the public than smaller-scale communications (some of them one-to-one) in which academics are not just professing their knowledge to members of the public but also actually listening to them to find out what they want to know and what they care about. Given what seems to be a persistent attitude of suspicion and alienation from “intellectuals” among members of the public, engagement on a human level strikes me as likely to feel less manipulative — and to be less manipulative.

Maybe Nicholas Kristof has a plan to dispel the public’s reflexive distrust of academics. If so, I trust he’ll lay it out in a column in the not-so-distant future.

I don’t think Kristof is wrong that the public could benefit from engagement with professors, but asserting that we need more while ignoring the conditions that discourage such engagement — and while ignoring the work of the many academics who are engaging the public — is not particularly helpful. Moreover, it seems to put the burden on professors to step up and do more while losing sight of the fact that engagement requires active participation on both sides.

Professors cannot proclaim what they know and assume that the public will automatically absorb that knowledge and, armed with it, act accordingly. It would be somewhat horrifying (for academics and the public alike) if engagement worked that way.

Academics and members of the public are sharing a world. Having various kinds of reliable knowledge about the world is good, as is sharing that knowledge and putting it into useful context, but this is never enough to determine just what we should do with that knowledge. We need to work out, together, our shared interests and goals.

Academics must be part of this discussion, but if other members of the public aren’t willing to engage, it probably doesn’t matter if more professors come to the table.

* * * * *
It should go without saying, but I will say it here anyway, that there are plenty of people who are not professors or academics engaging the public in meaningful ways that should make us recognize them as “public intellectuals” too. My focus here has been on professors since they are the focus of Kristof’s column.

Nature and trust.

Here are some things that I know:

Nature is a high-impact scientific journal that is widely read in the scientific community.

The editorial mechanisms Nature employs are meant to ensure the quality of the publication.

Reports of scientific research submitted to Nature undergo peer review (as do manuscripts submitted to other scholarly scientific journals). As well, Nature publishes items that are not peer-reviewed — for example, news pieces and letters to the editor. Nonetheless, the pieces published in Nature that don’t undergo peer review are subjected to editorial oversight.

Our human mechanisms for ensuring the quality of items that are published are not perfect. Peer reviewers sometimes get fooled. Editors sometimes make judgments that, in retrospect, they would not endorse.

The typical non-scientist who knows about journals like Nature is in the position of being generally trusting that peer review and editorial processes do the job of ensuring the high quality of the contents of these journals, or of being generally distrusting. Moreover, my guess is that the typical non-scientist, innocent of the division of labor on the vast editorial teams employed by journals like Nature, takes for granted that the various items published in such journals reflect sound science — or, at the very least, do not put forward claims that are clearly at odds with the body of existing scientific research.

Non-scientists, in other words, are trusting that the editorial processes at work in a journal like Nature produce a kind of conversation within the scientific community, one that weeds out stuff scientists would recognize as nonsense.

This trust is important because non-scientists do not have the same ability to identify and weed out nonsense. Nature is a kind of scientific gatekeeper for the larger public.

This trust is also something that can be played — for example, by a non-expert with an agenda who manages to get a letter published in a journal like Nature. While such correspondence may not impress a scientist, a “publication in Nature” of this sort may be taken as credible by non-scientists on the basis of the trust they have that such a well-known scientific journal must have editorial processes that reliably weed out nonsense.

In a world where we divide the cognitive labor this way, where non-scientists need to trust scientists to build reliable knowledge and organs of scientific communication to weed out nonsense, the stakes are very high for the scientists and the organs of scientific communication to live up to that trust — to get it right most of the time, and to be transparent enough about their processes that when they don’t get it right it’s reasonably easy to diagnose what went wrong and to fix it.

Otherwise, scientists and the organs of scientific communication risk losing the trust of non-scientists.

I’ve been thinking about this balance of trust and accountability in the context of a letter that was published in Nature asserting, essentially, that the underrepresentation of women as authors and peer reviewers in Nature is no kind of problem, because male scientists have merit and women scientists have child care obligations.

Kelly Hills has a clear and thorough explanation of what made publishing this particular letter problematic. It’s not just that the assertions of the letter writer are not supported by the research (examples of which Kelly helpfully links). It’s not just that there’s every reason to believe that the letter writer will try to spin the publication of his letter in Nature as reason to give his views more credence.

It’s also that the decision to publish this letter suggests that the question of women’s ability to do good science is a matter of legitimate debate.

In the discussion of this letter on Twitter, I saw the suggestion that the letter was selected for publication because it was representative of a view that had been communicated by many correspondents to Nature.

In a journal that the larger public takes to be a source of views that are scientifically sound, or at least scientifically plausible (rather than at odds with a growing body of empirical research), the mere fact that many people have expressed a view in letters strikes me as insufficient reason to publish it. I suspect that if a flurry of letters were to arrive asserting that the earth is stationary at the center of the universe, or that the earth is flat, the editorial staff in charge of correspondence wouldn’t feel the need to publish letters conveying these views — especially if the letters came from people without scientific training or active involvement in scientific work of some sort. I’d even be willing to make a modest bet that Nature regularly gets a significant amount of correspondence communicating crackpot theories of one sort or another. (I’m not running a major organ of scientific communication and I regularly get a significant amount of correspondence communicating crackpot theories of one sort or another.) Yet these crackpot theories do not regularly populate Nature’s “Correspondence” page.

In response to the objections raised to the publication of this letter, the Nature Editorial staff posted this comment:

Nature has a strong history of supporting women in science and of reflecting the views of the community in our pages, including Correspondence. Our Correspondence pages do not reflect the views of the journal or its editors; they reflect the views only of the correspondents.

We do not endorse the views expressed in this Correspondence (or indeed any Correspondences unless we explicitly say so). On re-examining the letter and the process, we consider that it adds no value to the discussion and unnecessarily inflames it, that it did not receive adequate editorial attention, and that we should not have published it, for which we apologize. This note will appear online on nature.com in the notes section of the Correspondence and in the Correspondence’s pdf.

Nature’s own positive views and engagement in the issues concerning women in science are represented by our special from 2013:
www.nature.com/women
Philip Campbell, Editor-in-Chief, Nature

(Bold emphasis added.)

I think this editorial pivot is a wise one. The letter in question may have represented a view many people have, but it didn’t offer any new facts or novel insight. And it’s not like women in science don’t know that they are fighting against biases — even biases in their own heads — every single day. They didn’t need to read a letter from some guy in Nature to become aware of this bit of their professional terrain.

So, the apology is good. But it is likely insufficient.

At this point, Nature may also have trust to rebuild with women, whether those women are members of the scientific community or members of the larger public. While it is true that Nature devoted a special issue to challenges faced by women in science, they also gave the editorial green light to a piece of “science fiction” that reinforced, rather than challenged, the gendered assumptions that make it harder for women in science.

And yes, we understand that different editors oversee the peer-reviewed reports of scientific research and the news items, the correspondence and the short fiction. But our view of organizations — our trust of organizations — tends to bundle these separate units together. This is pretty unavoidable unless we personally know each of the editors in each of the units (and even personal acquaintance doesn’t mean our trust is indestructible).

All of which is to say: as an organization, Nature still has some work to do to win back the trust of women (and others) who cannot think of the special issue on women in science without also thinking of “Womanspace” or the letter arguing that underrepresentation of women in Nature’s pages is just evidence of a meritocracy working as it should.

It would be nice to trust that Nature’s editorial processes will go forth and get it right from here on out, but we don’t want to be played for fools. As well, we may have to do additional labor going forward, cleaning up the fallout from this letter in public discourses on women in science, when we already had plenty of work to do in that zone.

This is a moment where Nature may want women scientists to feel warmly toward the journal, to focus on the good times as representative of where Nature really stands, but trust is something that is rebuilt, or eroded, over iterated engagements every single day.

Trust can’t be demanded. Trust is earned.

Given the role Nature plays in scientific communications and in the communication of science to a broader public, I’m hopeful the editorial staff is ready to do the hard work to earn that trust — from scientists and non-scientists alike — going forward.

* * * * *
Related posts:

Hope Jahren, Why I Turned Down a Q-and-A in Nature Magazine

Anne Jefferson, Megaphones, broken records and the problem with institutional amplification of sexism and racism

On the labor involved in being part of a community.

On Thursday of this week, registration for ScienceOnline Together 2014, the “flagship annual conference” of ScienceOnline opened (and closed). ScienceOnline describes itself as a “global, ongoing, online community” made up of “a diverse and growing group of researchers, science writers, artists, programmers, and educators —those who conduct or communicate science online”.

On Wednesday of this week, Isis the Scientist expressed her doubts that the science communication community for which ScienceOnline functions as a nexus is actually a “community” in any meaningful sense:

The major fundamental flaw of the SciComm “community” is that it is a professional community with inconsistent common values. En face, one of its values is the idea of promoting science. Another is promoting diversity and equality in a professional setting. But, at its core, its most fundamental value are these notions of friendship, support, and togetherness. People join the community in part to talk about science, but also for social interactions with other members of the “community”.  While I’ve engaged in my fair share of drinking and shenanigans  at scientific conferences, ScienceOnline is a different beast entirely.  The years that I participated in person and virtually, there was no doubt in my mind that this was a primarily social enterprise.  It had some real hilarious parts, but it wasn’t an experience that seriously upgraded me professionally.

People in SciComm feel confident talking about “the community” as a tangible thing with values and including people in it, even when those people don’t value the social structure in the same way. People write things that are “brave” and bloviate in ways that make each other feel good and have “deep and meaningful conversations about issues” that are at the end of the day nothing more than words. It’s a “community” that gives out platters full of cookies to people who claim to be “allies” to causes without actually having to ever do anything meaningful. Without having to outreach in any tangible way, simply because they claim to be “allies.” Deeming yourself an “ally” and getting a stack of “Get Out of Jail, Free” cards is a hallmark of the “community”.

Isis notes that the value of “togetherness” in the (putative) SciComm community is often prioritized over the value of “diversity” — and that this is a pretty efficient way to undermine the community. She suggests that focusing on friendship rather than professionalism entrenches this problem and writes “I have friends in academia, but being a part of academic science is not predicated on people being my friends.”

I’m very sympathetic to Isis’s concerns here. I don’t know that I’d say there’s no SciComm community, but that might come down to a disagreement about where the line is between a dysfunctional community and a lack of community altogether. But that’s like the definitional dispute about how many hairs one needs on one’s head to shift from the category of “bald” to the category of “not-bald” — for the case we’re trying to categorize there’s still agreement that there’s a whole lot of bare skin hanging out in the wind.

The crux of the matter, whether we have a community or are trying to have one, is whether we have a set of shared values and goals that is sufficient for us to make common cause with each other and to take each other seriously — to take each other seriously even when we offer critiques of other members of the community. For if people in the community dismiss your critiques out of hand, if they have the backs of some members of the community and not others (and whose they have and whose they don’t sorts out along lines of race, gender, class, and other dimensions that the community’s shared values and goals purportedly transcend), it’s pretty easy to wonder whether you are actually a valued member of the community, whether the community is for you in any meaningful way.

I do believe there’s something like a SciComm community, albeit a dysfunctional one. I will be going to ScienceOnline Together 2014, as I went to the seven annual meetings preceding it. Personally, even though I am a full-time academic like Dr. Isis, I do find professional value from this conference. Probably this has to do with my weird interdisciplinary professional focus — something that makes it harder for me to get all the support and inspiration and engagement I need from the official professional societies that are supposed to be aligned with my professional identity. And because of the focus of my work, I am well aware of dysfunction in my own professional community and in other academic and professional communities.

While there has been a pronounced social component to ScienceOnline as a focus of the SciComm community, ScienceOnline (and its ancestor conferences) have never felt purely social to me. I have always had a more professional agenda there — learning what’s going on in different realms of practice, getting my ideas before people who can give me useful feedback on them, trying to build myself a big-picture, nuanced understanding of science engagement and how it matters.

And in recent years, my experience of the meetings has been more like work. Last year, for example, I put a lot of effort into coordinating a kid-friendly room at the conference so that attendees with small children could have some child-free time in the sessions. It was a small step towards making the conference — and the community — more accessible and welcoming to all the people who we describe as being part of the community. There’s still significant work to do on this front. If we opt out of doing that work, we are sending a pretty clear message about who we care about having in the community and who we view as peripheral, about whose voices and interests we value and whose we do not.

Paying attention to who is being left out, to whose voices are not being heard, to whose needs are not being met, takes effort. But this effort is part of the regular required maintenance for any community that is not completely homogeneous. Skipping it is a recipe for dysfunction.

And the maintenance, it seems, is required pretty much every damn day.

Friday, in the Twitter stream for the ScienceOnline hashtag #scio14, I saw this:

To find out what was making Bug Girl feel unsafe, I went back and watched Joe Hanson’s Thanksgiving video, in which Albert Einstein was portrayed as making unwelcome advances on Marie Curie, cheered on by his host, culminating in a naked assault on Curie.

Given the recent upheaval in the SciComm community around sexual harassment — with lots of discussion, because that’s how we roll — it is surprising and shocking that this video plays sexual harassment and assault for laughs, apparently with no thought to how many women are still targets of harassment, no consideration of how chilly the climate for women in science remains.

Here’s a really clear discussion of what makes the video problematic, and here’s Joe Hanson’s response to the criticisms. I’ll be honest: it looks to me like Joe still doesn’t really understand what people (myself included) took to social media to explain to him. I’m hopeful that he’ll listen and think and eventually get it better. If not, I’m hopeful that people will keep piping up to explain the problem.

But not everyone was happy that a publicly posted video (on a pretty visible platform, PBS Digital Studio, which is supported by taxpayers in the U.S.) was greeted by members of our putative community with a public critique.

The objections raised on Twitter — many of them raised with obvious care as far as being focused on the harm and communicated constructively — were described variously as “drama,” “infighting,” a “witch hunt” and “burning [Joe] at the stake”. (I’m not going to link the tweets because a number of the people who made those characterizations thought about it and walked them back.)

People insisted, as they do pretty much every time, that the proper thing to do was to address the problem privately — as if that’s the only ethical way to deal with a public wrong, or as if it’s the most effective way to fix the harm. Despite what some will argue, I don’t think we have good evidence for either of those claims.

So let’s come back to regular maintenance of the community and think harder about this. I’ve written before that

if bad behavior is dealt with privately, out of view of members of the community who witnessed the bad behavior in question, those members may lose faith in the community’s commitment to calling it out.

This strikes me as good reason not to take all the communications to private channels. People watching and listening on the sidelines are gathering information on whether their so-called community shares their values, on whether it has their back.

Indeed, the people on the sidelines are also watching and listening to the folks dismissing critiques as drama. Operationally, “drama” seems to amount to “Stuff I’d rather you not discuss where I can see or hear it,” which itself shades quickly into “Stuff that really seems to bother other people, for whom I seem to be unable to muster any empathy, because they are not me.”

Let me pause to note what I am not claiming. I am not saying that every member of a community must be an active member of every conversation within that community. I am not saying that empathy requires you to personally step up and engage in every difficult dialogue every time it rolls around. Sometimes you have other stuff to do, or you know that the cost of being patient and calm is more than you can handle at the moment, or you know you need to listen and think for a while before you get it well enough to get into it.

But going to the trouble to speak up to convey that the conversation is a troublesome one to have happening in your community — that you wish people would stop making an issue of it, that they should just let it go for the sake of peace in the community — that’s something different. That’s telling the people expressing their hurt and disappointment and higher expectations that they should swallow it, that they should keep it to themselves.

For the sake of the community.

For the sake of the community of which they are clearly not really valued members, if they are the ones, always, who need to shut up and let their issues go for the greater good.

Arguably, if one is really serious about the good of the community, one should pay attention to how this kind of dismissal impacts the community. Now is as good a moment as any to start.

When we target chemophobia, are we punching down?

Over at Pharyngula, Chris Clarke challenges those in the chemical know on their use of “dihydrogen monoxide” jokes. He writes:

Doing what I do for a living, I often find myself reading things on Facebook, Twitter, or those increasingly archaic sites called “blogs” in which the writer expresses concern about industrial effluent in our air, water, consumer products or food. Sometimes the concerns are well-founded, as in the example of pipeline breaks releasing volatile organic chemicals into your backyard. Sometimes, as in the case of concern over chemtrails or toxic vaccines, the concerns are ill-informed and spurious.

And often enough, the educational system in the United States being the way it’s been since the Reagan administration, those concerns are couched in terms that would not be used by a person with a solid grounding in science. People sometimes miss the point of dose-dependency, of acute versus chronic exposure, of the difference between parts per million and parts per trillion. Sometimes their unfamiliarity with the basic facts of chemistry causes them to make patently ridiculous alarmist statements and then double down on them when corrected.

And more times than I can count, if said statements are in a public venue like a comment thread, someone will pipe up by repeating a particular increasingly stale joke. Say it’s a discussion of contaminants in tap water allegedly stemming from hydraulic fracturing for natural gas extraction. Said wit will respond with something like:

“You know what else might be coming out of your tap? DIHYDROGEN MONOXIDE!”

Two hydrogens, one oxygen … what’s coming out of your tap here is water. Hilarious! Or perhaps not.

Clarke argues that those in the chemical know whip out the dihydrogen monoxide joke to have a laugh at the expense of someone who doesn’t have enough chemical knowledge to understand whether conditions they find alarming really ought to alarm them. However, how it usually goes down is that other chemically literate people in earshot laugh while the target of the joke ends up with no better chemical understanding of things.

Really, all the target of the joke learns is that the teller of the joke has knowledge and is willing to use it to make someone else look dumb.

Clarke explains:

Ignorance of science is an evil that for the most part is foisted upon the ignorant. The dihydrogen monoxide joke depends for its humor on ridiculing the victims of that state of affairs, while offering no solution (pun sort of intended) to the ignorance it mocks. It’s like the phrase “chemophobia.” It’s a clan marker for the Smarter Than You tribe.

The dihydrogen monoxide joke punches down, in other words. It mocks people for not having had access to a good education. And the fact that many of its practitioners use it in order to belittle utterly valid environmental concerns, in the style of (for instance) Penn Jillette, makes it all the worse — even if those concerns aren’t always expressed in phraseology a chemist would find beyond reproach, or with math that necessarily works out on close examination.

There’s a weird way in which punching down with the dihydrogen monoxide joke is the evil twin of the “deficit model” in science communication.

The deficit model assumes that the focus in science communication to audiences of non-scientists should be squarely on filling in gaps in their scientific knowledge, teaching people facts and theories that they didn’t already know, as if that is the main thing they must want from science. (It’s worth noting that the deficit model seems to assume a pretty unidirectional flow of information, from the science communicator to the non-scientist.)

The dihydrogen monoxide joke, used the way Clarke describes, identifies a gap in understanding and then, instead of trying to fill it, points and laughs. If the deficit model naïvely assumes that filling gaps in knowledge will make the public cool with science, this kind of deployment of the dihydrogen monoxide joke seems unlikely to provoke any warm feelings towards science or scientists from the person with a gappy understanding.

What’s more, this kind of joking misses an opportunity to engage with what the target of the joke is really worried about and why. Are they scared of chemicals per se? Of being at the mercy of others who have information about which chemicals can hurt us (and in which amounts) and/or who have more knowledge about or control of where those chemicals are in our environment? Do they not trust scientists at all, or are they primarily concerned about whether they can trust scientists in the employ of multinational corporations?

Do their concerns have more to do with the information and understanding our policymakers have with regard to chemicals in our world — particularly about whether these policymakers have enough to keep us relatively safe, or about whether they have the political will to do so?

Actually having a conversation and listening to what people are worried about could help. It might turn out that people with the relevant scientific knowledge to laugh at the dihydrogen monoxide joke and those without share a lot of the same concerns.

Andrew Bissette notes that there are instances where the dihydrogen monoxide joke isn’t punching down but punching up, where educated people who should know better use large platforms to take advantage of the ignorant. So perhaps it’s not the case that we need a permanent moratorium on the joke so much as more careful thought about what we hope to accomplish with it.

Let’s return to Chris Clarke’s claim that the term “chemophobia” is “a clan marker for the Smarter Than You tribe.”

Lots of chemists in the blogosphere regularly blog and tweet about chemophobia. If they took to relentlessly tagging as “chemophobe!” people who lack access to the body of knowledge and patterns of reasoning that define chemistry, I’d agree that it was the same kind of punching down as the use of the dihydrogen monoxide joke Clarke describes. To the extent that chemists are actually doing this to assert membership in the Smarter Than You tribe, I think it’s counterproductive and mean to boot, and we should cut it out.

But, knowing the folks I do who blog and tweet about chemophobia, I’m pretty sure their goal is not to maintain clear boundaries between The Smart and The Dumb. When they fire off a #chemophobia tweet, it’s almost like they’re sending up the Batsignal, rallying their chemical community to fight some kind of crime.

So what is it these chemists — the people who have access to the body of knowledge and patterns of reasoning that define chemistry — find problematic about the “chemophobia” of others? What do they hope to accomplish by pointing it out?

Part of where they’re coming from is probably grounded in good old-fashioned deficit-model reasoning, but with more emphasis on helping others learn a bit of chemistry because it’s cool. There’s usually a conviction that the basics of the chemistry that expose the coolness are not beyond the grasp of adults of normal intelligence, if only we explain them accessibly enough. Ash Jogalekar suggests more concerted efforts in this direction, proposing a lobby for chemistry (not the chemical industry) that takes account of how people feel about chemistry and what they want to know. However it’s done, the impulse to expose the cool workings of a bit of the world to those who want to understand them should be offered as a kindness. Otherwise, we’re doing it wrong.

Another part of what moves the chemists I know who are concerned with chemophobia is that they don’t want people who are not at home with chemistry to get played. They don’t want them to be vulnerable to quack doctors, nor to merchants of doubt trying to undermine sound science to advance a particular economic or political end, nor to people trying to make a buck with misleading claims, nor to legitimately confused people who think they know much more than they really do.

People with chemical know-how could help address this kind of vulnerability, being partners to help sort out the reliable information from the bogus, the overblown risks from risks that ought to be taken seriously or investigated further.

But short of teaching the folks without access to the body of knowledge and patterns of reasoning that define chemistry everything chemists know, so they can be their own experts (which is the deficit model again), providing this kind of help requires cultivating trust. It requires taking the people to whom you’re offering the help seriously, recognizing that gaps in their chemical understanding don’t make them unintelligent or of less value as human beings.

And laughing at the expense of the people who could use your help — using your superior chemical knowledge to punch down — seems unlikely to foster that trust.