You’re not rehabilitated if you keep deceiving.

Regular readers will know that I view scientific misconduct as a serious harm to both the body of scientific knowledge and the scientific community involved in building that knowledge. I also hold out hope that at least some of the scientists who commit scientific misconduct can be rehabilitated (and I’ve noted that other members of the scientific community behave in ways that suggest that they, too, believe that rehabilitation is possible).

But I think a non-negotiable prerequisite for rehabilitation is demonstrating that you really understand how what you did was wrong. This understanding needs to be more than simply recognizing that what you did was technically against the rules. Rather, you need to grasp the harms that your actions did, the harms that may continue as a result of those actions, the harms that may not be quickly or easily repaired. You need to acknowledge those harms, not minimize them or make excuses for your actions that caused the harms.

And, you need to stop behaving in the ways that caused the harms in the first place.

Among other things, this means that if you did significant harm to your scientific community, and to the students you were supposed to be training, by making up “results” rather than actually doing experiments and making and reporting accurate results, you need to recognize that you have acted deceptively. To stop doing harm, you need to stop acting deceptively. Indeed, you may need to be significantly more transparent and forthcoming with details than others who have not transgressed as you have. Owing to your past bad acts, you may just have to meet a higher burden of proof going forward.

That you have retracted the publications in which you deceived, or lost a degree for which (it is strongly suspected) you deceived, or lost your university post, or served your hours of court-ordered community service does not reset you to the normal baseline of presumptive trust. “Paying your debt to society” does not in itself mean that anyone is obligated to believe that you are not still untrustworthy. If you break trust, you need to earn it back, not to demand it because you did your time.

You certainly can’t earn that trust back by engaging in deception to mount an argument that people should give you a break because you’ve served out your sentence.

These thoughts on how not to approach your own rehabilitation are prompted by the appearance of disgraced social scientist Diederik Stapel (discussed here, here, here, here, here, and here) in the comments at Retraction Watch on a post about Diederik Stapel and his short-lived gig as an adjunct instructor for a college course. Now, there’s no prima facie reason Diederik Stapel might not be able to make a productive contribution to a discussion about Diederik Stapel.

However, Diederik Stapel was posting his comments not as Diederik Stapel but as “Paul”.

I hope it is obvious why posting comments that are supportive of yourself while making it appear that this support is coming from someone else is deceptive. Moreover, the comments seem to suggest that Stapel is not really fully responsible for the frauds he committed.

“Paul” writes:

Help! Let’s not change anything. Science is a flawless institution. Yes. And only the past two days I read about medical scientists who tampered with data to please the firm that sponsored their work and about the start of a new investigation into the work of a psychologist who produced data “too good to be true.” Mistakes abound. On a daily basis. Sure, there is nothing to reform here. Science works just fine. I think it is time for the “Men in Black” to move in to start an outside-invesigation of science and academia. The Stapel case and other, similar cases teach us that scientists themselves are able to clean-up their act.

Later, he writes (sic throughout):

Stapel was punished, he did his community service (as he writes in his latest book), he is not on welfare, he is trying to make money with being a writer, a cab driver, a motivational speaker, but not very successfully, and .. it is totally unclear whether he gets paid for his teaching (no research) an extra-curricular hobby course (2 hours a week, not more, not less) and if he gets paid, how much.

Moreover and more importantly, we do not know WHAT he teaches exactly, we have not seen his syllabus. How can people write things like “this will only inspire kids to not get caught”, without knowing what the guy is teaching his students? Will he reach his students how to become fraudsters? Really? When you have read the two books he wrote after his demise, you cannot be conclude that this is very unlikely? Will he teach his students about all the other fakes and frauds and terrible things that happen in science? Perhaps. Is that bad? Perhaps. I think it is better to postpone our judgment about the CONTENT of all this as long as we do not know WHAT he is actually teaching. That would be a Popper-like, open-minded, rationalistic, democratic, scientific attitude. Suppose a terrible criminal comes up with a great insight, an interesting analysis, a new perspective, an amazing discovery, suppose (think Genet, think Gramsci, think Feyerabend).

Is it smart to look away from potentially interesting information, because the messenger of that information stinks?

Perhaps, God forbid, Stapel is able to teach his students valuable lessons and insights no one else is willing to teach them for a 2-hour-a-week temporary, adjunct position that probably doesn’t pay much and perhaps doesn’t pay at all. The man is a failure, yes, but he is one of the few people out there who admitted to his fraud, who helped the investigation into his fraud (no computer crashes…., no questionnaires that suddenly disappeared, no data files that were “lost while moving office”, see Sanna, Smeesters, and …. Foerster). Nowhere it is written that failures cannot be great teachers. Perhaps he points his students to other frauds, failures, and ridiculous mistakes in psychological science we do not know of yet. That would be cool (and not unlikely).

Is it possible? Is it possible that Stapel has something interesting to say, to teach, to comment on?

To my eye, these comments read as saying that Stapel has paid his debt to society and thus ought not to be subject to heightened scrutiny. They seem to assert that Stapel is reformable. They also suggest that the problem is not so much with Stapel as with the scientific enterprise. While there may be systemic features of science as currently practiced that make cheating a greater temptation than it might be otherwise, suggesting that those features made Stapel commit fraud does not convey an understanding of Stapel’s individual responsibility to navigate those temptations. Putting those assertions and excuses in someone else’s mouth makes them look less self-serving than they actually are.

Hilariously, “Paul” also urges the Retraction Watch commenters expressing doubts about Stapel’s rehabilitation and moral character to contact Stapel using their real names, first here:

I guess that if people want to write Stapel a message, they can send him a personal email, using their real name. Not “Paul” or “JatdS” or “QAQ” or “nothingifnotcritical” or “KK” or “youknowbestofall” or “whatistheworldcoming to” or “givepeaceachance”.

then here:

if you want to talk to puppeteer, as a real person, using your real name, I recommend you write Stapel a personal email message. Not zwg or neuroskeptic or what arewehiding for.

Meanwhile, behind the scenes, the Retraction Watch editors accumulated clues that “Paul” was not an uninvolved party but rather Diederik Stapel portraying himself as an uninvolved party. After they contacted him to let him know that such behavior did not comport with their comment policy, Diederik Stapel posted under his real name:

Hello, my name is Diederik Stapel. I thought that in an internet environment where many people are writing about me (a real person) using nicknames it is okay to also write about me (a real person) using a nickname. ! have learned that apparently that was —in this particular case— a misjudgment. I think did not dare to use my real name (and I still wonder why). I feel that when it concerns person-to-person communication, the “in vivo” format is to be preferred over and above a blog where some people use their real name and some do not. In the future, I will use my real name. I have learned that and I understand that I –for one– am not somebody who can use a nickname where others can. Sincerely, Diederik Stapel.

He portrays this as a misunderstanding about how online communication works — other people are posting without using their real names, so I thought it was OK for me to do the same. However, to my eye it conveys that he also misunderstands how rebuilding trust works. Posting to support the person at the center of the discussion without first acknowledging that you are that person is deceptive. Arguing that that person ought to be granted more trust while dishonestly portraying yourself as someone other than that person is a really bad strategy. When you’re caught doing it, those arguments for more trust are undermined by the fact that they are themselves further instances of the deceptive behavior that broke trust in the first place.

I will allow as how Diederik Stapel may have some valuable lessons to teach, though. One of these is how not to make a convincing case that you’ve reformed.

Conduct of scientists (and science writers) can shape the public’s view of science.

Scientists undertake a peculiar kind of project. In striving to build objective knowledge about the world, they are tacitly recognizing that our unreflective picture of the world is likely to be riddled with mistakes and distortions. On the other hand, they frequently come to regard themselves as better thinkers — as more reliably objective — than humans who are not scientists, and end up forgetting that they have biases and blindspots of their own which they are helpless to detect without help from others who don’t share these particular biases and blindspots.

Building reliable knowledge about the world requires good methodology, teamwork, and concerted efforts to ensure that the “knowledge” you build doesn’t simply reify preexisting individual and cultural biases. It’s hard work, but it’s important to do it well — especially given a long history of “scientific” findings being used to justify and enforce preexisting cultural biases.

I think this bigger picture is especially appropriate to keep in mind in reading the response from Scientific American Blogs Editor Curtis Brainard to criticisms of a pair of problematic posts on the Scientific American Blog Network. Brainard writes:

The posts provoked accusations on social media that Scientific American was promoting sexism, racism and genetic determinism. While we believe that such charges are excessive, we share readers’ concerns. Although we expect our bloggers to cover controversial topics from time to time, we also recognize that sensitive issues require extra care, and that did not happen here. The author and I have discussed the shortcomings of the two posts in detail, including the lack of attention given to countervailing arguments and evidence, and he understood the deficiencies.

As stated at the top of every post, Scientific American does not always share the views and opinions expressed by our bloggers, just as our writers do not always share our editorial positions. At the same time, we realize our network’s bloggers carry the Scientific American imprimatur and that we have a responsibility to ensure that—differences of opinion notwithstanding—their work meets our standards for accuracy, integrity, transparency, sensitivity and other attributes.

(Bold emphasis added.)

The problem here isn’t that the posts in question advocated sound scientific views with implications that people on social media didn’t like. Rather, the posts presented claims in a way that made them look like they had much stronger scientific support than they really do — and did so in the face of ample published scientific counterarguments. Scientific American is not requiring that posts on its blog network meet a political litmus test, but rather that they embody the same kind of care, responsibility to the available facts, and intellectual honesty that science itself should display.

This is hard work, but it’s important. And engaging seriously with criticism, rather than just dismissing it, can help us do it better.

There’s an irony in the fact that one of the problematic posts ignored some significant relevant scientific literature (helpfully cited by commenters on that very post) in the service of defending Larry Summers and his remarks on possible innate biological factors that make men better at math and science than women. The irony lies in the fact that Summers himself displayed an apparently ironclad commitment to ignoring any and all empirical findings that might challenge his intuition that there’s something innate at the heart of the gender imbalance in math and science faculty.

Back in January of 2005, Larry Summers gave a speech at a conference about what can be done to attract more women to the study of math and science, and to keep them in the field long enough to become full professors. In his talk, Summers suggested as a possible hypothesis for the relatively low number of women in math and science careers that there may be innate biological factors that make males better at math and science than females. (He also related an anecdote about his daughter naming her toy trucks as if they were dolls, but it’s fair to say he probably meant this anecdote to be illustrative rather than evidentiary.)

The talk did not go over well with the rest of the participants in the conference.

Several scientific studies were presented at the conference before Summers made his speech. All of these studies offered significant evidence against the claim of an innate difference between males and females that could account for the “science gap”.

In the aftermath of this conference of yore, there were some commenters who lauded Summers for voicing “unpopular truths” and others who distanced themselves from his claims but said they supported his right to make them as an exercise of “academic freedom.”

But if Summers was representing himself as a scientist* when he made his speech, I don’t think the “academic freedom” defense works.

Summers is free to state hypotheses — even unpopular hypotheses — that might account for a particular phenomenon. But, as a scientist, he is also responsible for taking account of the data relevant to his hypotheses. If the data weighs against his preferred hypothesis, intellectual honesty requires that he at least acknowledge this fact. Some would argue that it could even require that he abandon his hypothesis (since science is supposed to be evidence-based whenever possible).

When news of Summers’ speech, and reactions to it, was fresh, one of the details that stuck with me was that one of the conference organizers noted to Summers, after he gave his speech, that there was a large body of evidence — some of it presented at that very conference — that seemed to undermine his hypothesis, after which Summers gave a reply that amounted to, “Well, I don’t find those studies convincing.”

Was Summers within his rights to not believe these studies? Sure. But, he had a responsibility to explain why he rejected them. As a part of a scientific community, he can’t just reject a piece of scientific knowledge out of hand. Doing so comes awfully close to undermining the process of communication that scientific knowledge is based upon. You aren’t supposed to reject a study because you have more prestige than the authors of the study (so, you don’t have to care what they say). You can question the experimental design, you can question the data analysis, you can challenge the conclusions drawn, but you have to be able to articulate the precise objection. Surely, rejecting a study just because it doesn’t fit with your preferred hypothesis is not an intellectually honest move.

By my reckoning, Summers did not conduct himself as a responsible scientist in this incident. But I’d argue that the problem went beyond a lack of intellectual honesty within the universe of scientific discourse. Summers is also responsible for the bad consequences that flowed from his remark.

The bad consequence I have in mind here is the mistaken view of science and its workings that Summers’ conduct conveys. Especially by falling back on a plain vanilla “academic freedom” defense here, defenders of Summers conveyed to the public at large the idea that any hypothesis in science is as good as any other. Scientists who are conscious of the evidence-based nature of their field will see the absurdity of this idea — some hypotheses are better, others worse, and whenever possible we turn to the evidence to make these discriminations. Summers compounded ignorance of the relevant data with what came across as a statement that he didn’t care what the data showed. From this, the public at large could assume he was within his scientific rights to decide which data to care about without giving any justification for this choice**, or they could infer that data has little bearing on the scientific picture of the world.

Clearly, such a picture of science would undermine the standing of the rest of the bits of knowledge produced by scientists far more intellectually honest than Summers.

Indeed, we might go further here. Not only did Summers have some responsibilities that seemed to have escaped him while he was speaking as a scientist, but we could argue that the rest of the scientists (whether at the conference or elsewhere) have a collective responsibility to address the mistaken picture of science his conduct conveyed to society at large. It’s disappointing that, nearly a decade later, we instead have to contend with the problem of scientists following in Summers’ footsteps by ignoring, rather than engaging with, the scientific findings that challenge their intuitions.

Owing to the role we play in presenting a picture of what science knows and of how scientists come to know it to a broader audience, those of us who write about science (on blogs and elsewhere) also have a responsibility to be clear about the kind of standards scientists need to live up to in order to build a body of knowledge that is as accurate and unbiased as humanly possible. If we’re not clear about these standards in our science writing, we risk misleading our audiences about the current state of our knowledge and about how science works to build reliable knowledge about our world. Our responsibility here isn’t just a matter of noticing when scientists are messing up — it’s also a matter of acknowledging and correcting our own mistakes and of working harder to notice our own biases. I’m pleased that our Blogs Editor is committed to helping us fulfill this duty.
_____
*Summers is an economist, and whether to regard economics as a scientific field is a somewhat contentious matter. I’m willing to give the scientific status of economics the benefit of the doubt, but this means I’ll also expect economists to conduct themselves like scientists, and will criticize them when they do not.

**It’s worth noting that a number of the studies that Summers seemed to be dismissing out of hand were conducted by women. One wonders what lessons the public might draw from that.

_____
A portion of this post is an updated version of an ancestor post on my other blog.

Reflections on being part of a science blogging network.

This is another post following up on a session at ScienceOnline Together 2014, this one called Blog Networks: Benefits, Role of, Next Steps, and moderated by Scientific American Blogs Editor Curtis Brainard. You should also read David Zaslavsky’s summary of the session and what people were tweeting on the session hashtag, #scioBlogNet.

My own thoughts are shaped by writing an independent science blog that less than a year later became part of one of the first “pro” science blogging networks when it launched in January 2006, moving my blog from that network to a brand new science blogging community in August 2010, and keeping that blog going while starting Doing Good Science here on the Scientific American Blog Network when it launched in July 2011. This is to say, I’ve been blogging in the context of science blogging networks for a long time, and have seen the view from a few different vantage points.

That said, my view is also very particular and likely peculiar — for example, I’m a professional philosopher (albeit one with a misspent scientific youth) blogging about science while trying to hold down a day-job as a professor in a public university during a time of state budget terror and to maintain a reasonable semblance of family life. My blogging is certainly more than a hobby — in many ways it provides vital connective tissue that helps knit together my weirdly interdisciplinary professional self into a coherent whole (and has thus been evaluated as a professional activity for the day-job) — but, despite the fact that I’m a “pro” who gets paid to blog here, it’s not something I could live on.

In my experience, a science blogging network can be a great place to get visibility and to build an audience. This can be especially useful early in one’s blogging career, since it’s a big, crowded blogosphere out there. Networks can also be handy for readers, since they deliver more variety and more of a regular flow of posts than most individual bloggers can do (especially when we’re under the weather and/or catching up on grading backlogs). It’s worth noting, though, that very large blog networks can provide a regular flow of content that frequently resembles a firehose. Some blog networks provide curation in the form of featured content or topical feeds. Many provide something like quality control, although sometimes it’s exercised primarily in the determination of who will blog in the network.

Blog networks can also have a distinctive look and feel, embodied in shared design elements, or in an atmosphere set within the commenting community, for example. Bloggers within blog networks may have an easier time finding opportunities for productive cross-pollination or coordination of efforts with their network neighbors, whether to raise political awareness or philanthropic dollars or simply to contribute many distinctive perspectives to the discussion of a particular topic. Bloggers sharing networks can also become friends (although sometimes, being humans, they develop antagonisms instead).

On a science blogging network, bloggers seem also to regularly encounter the question of what counts as a proper “science blog” — about whose content is science-y enough, and what exactly that should mean. This kind of policing of boundaries happens even here.

While the confluence of different people blogging on similar terrain can open up lots of opportunities for collaboration, there are moments when the business of running a blog network (at least when that blog network is a commercial enterprise) can be in tension with what the bloggers value about blogging in the network. Sometimes the people running the network aren’t the same as the people writing the blogs, and they end up having very different visions, interests, pressing needs, and understandings of their relationships to each other.

Sometimes bloggers and networks grow apart and can’t give each other what they need for the relationship to continue to be worthwhile going forward.

And, while blogging networks can be handy, there are other ways that online communicators and consumers of information can find each other and coordinate their efforts online. Twitter has seen the rise of tremendously productive conversations around hashtags like #scistuchat and #BlackandSTEM, and undoubtedly similarly productive conversations among science-y folk regularly coalesce on Facebook and Tumblr and in Google Hangouts. Some of these online interactions lead to face-to-face collaborations like the DIY Science Zone at GeekGirlCon and conference proposals made to traditional professional societies that get their start in online conversations.

Networks can be nice. They can even help people transition from blogging into careers in science writing and outreach. But even before blog networks, awesome people managed to find each other and to come up with awesome projects to do together. Networks can lower the activation energy for this, but there are other ways to catalyze these collaborations, too.

Join Virtually Speaking Science for a conversation about sexism in science and science journalism.

Today at 5 P.M. Eastern/2 P.M. Pacific, I’ll be on Virtually Speaking Science with Maryn McKenna and Tom Levenson to discuss sexual harassment, gender bias, and related issues in the world of science, science journalism, and online science communication. Listen live online or, if you have other stuff to do in that bit of spacetime, you can check out the archived recording later. If you do the Second Life thing, you can join us there at the Exploratorium and text in questions for us.

Tom has a nice post with some background to orient our conversation.

Here, I’m going to give you a few links that give you a taste of what I’ve been thinking about in preparation for this conversation, and then I’ll say a little about what I hope will come out of the conversation.

Geek Feminism Wiki Timeline of incidents from 2013 (includes tech and science blogosphere)

Danielle Lee’s story about the “urban whore” incident and Scientific American’s response to it.

Kate Clancy’s post on how Danielle Lee’s story and the revelations about former Scientific American blog editor Bora Zivkovic are connected to the rape-y Einstein bobble head video incident (with useful discussion of productive strategies for community response)

Andrew David Thaler’s post “On being an ally and being called out on your privilege”

A post I wrote with a link to research on implicit gender bias among science faculty at universities, wherein I point out that the empirical findings have some ethical implications if we’re committed to reducing gender bias

A short film exploring the pipeline problem for women in chemistry, “A Chemical Imbalance” (Transcript)

The most recent of Zuska’s excellent posts on the pipeline problem, “Rethinking the Normality of Attrition”

As far as I’m concerned, the point of our conversation is not to say science, or science journalism, or online science communication, has a bigger problem with sexual harassment or sexism or gender disparities than other professional communities or than the broader societies from which members of these professional communities are drawn. The issue, as far as I can tell, is that these smaller communities reproduce these problems from the broader society — but, they don’t need to. Recognizing that the problem exists — that we think we have merit-driven institutions, or that we’re better at being objective than the average Jo(e), but that the evidence indicates we’re not — is a crucial step on the way to fixing it.

I’m hopeful that we’ll be able to talk about more than individual incidents of sexism or harassment in our discussion. The individual incidents matter, but they don’t emerge fully formed from the hearts, minds, mouths, and hands of evil-doers. They are reflections of cultural influences we’re soaking in, of systems we have built.

Among other things, this suggests to me that any real change will require thinking hard about how to change systems rather than keeping our focus at the level of individuals. Recognizing that it will take more than good intentions and individual efforts to overcome things like unconscious bias in human interactions in the professional sphere (including but not limited to hiring decisions) would be a huge step forward.

Such progress will surely be hard, but I don’t think it’s impossible, and I suspect the effort would be worth it.

If you can, do listen (and watch). I’ll be sure to link the archived broadcast once that link is available.

On the labor involved in being part of a community.

On Thursday of this week, registration for ScienceOnline Together 2014, the “flagship annual conference” of ScienceOnline opened (and closed). ScienceOnline describes itself as a “global, ongoing, online community” made up of “a diverse and growing group of researchers, science writers, artists, programmers, and educators —those who conduct or communicate science online”.

On Wednesday of this week, Isis the Scientist expressed her doubts that the science communication community for which ScienceOnline functions as a nexus is actually a “community” in any meaningful sense:

The major fundamental flaw of the SciComm “community” is that it is a professional community with inconsistent common values. En face, one of its values is the idea of promoting science. Another is promoting diversity and equality in a professional setting. But, at its core, its most fundamental value are these notions of friendship, support, and togetherness. People join the community in part to talk about science, but also for social interactions with other members of the “community”.  While I’ve engaged in my fair share of drinking and shenanigans  at scientific conferences, ScienceOnline is a different beast entirely.  The years that I participated in person and virtually, there was no doubt in my mind that this was a primarily social enterprise.  It had some real hilarious parts, but it wasn’t an experience that seriously upgraded me professionally.

People in SciComm feel confident talking about “the community” as a tangible thing with values and including people in it, even when those people don’t value the social structure in the same way. People write things that are “brave” and bloviate in ways that make each other feel good and have “deep and meaningful conversations about issues” that are at the end of the day nothing more than words. It’s a “community” that gives out platters full of cookies to people who claim to be “allies” to causes without actually having to ever do anything meaningful. Without having to outreach in any tangible way, simply because they claim to be “allies.” Deeming yourself an “ally” and getting a stack of “Get Out of Jail, Free” cards is a hallmark of the “community”.

Isis notes that the value of “togetherness” in the (putative) SciComm community is often prioritized over the value of “diversity” — and that this is a pretty efficient way to undermine the community. She suggests that focusing on friendship rather than professionalism entrenches this problem and writes “I have friends in academia, but being a part of academic science is not predicated on people being my friends.”

I’m very sympathetic to Isis’s concerns here. I don’t know that I’d say there’s no SciComm community, but that might come down to a disagreement about where the line is between a dysfunctional community and a lack of community altogether. But that’s like the definitional dispute about how many hairs one needs on one’s head to shift from the category of “bald” to the category of “not-bald” — for the case we’re trying to categorize there’s still agreement that there’s a whole lot of bare skin hanging out in the wind.

The crux of the matter, whether we have a community or are trying to have one, is whether we have a set of shared values and goals that is sufficient for us to make common cause with each other and to take each other seriously — to take each other seriously even when we offer critiques of other members of the community. For if people in the community dismiss your critiques out of hand, if they have the backs of some members of the community and not others (and whose they have and whose they don’t sorts out along lines of race, gender, class, and other dimensions that the community’s shared values and goals purportedly transcend), it’s pretty easy to wonder whether you are actually a valued member of the community, whether the community is for you in any meaningful way.

I do believe there’s something like a SciComm community, albeit a dysfunctional one. I will be going to ScienceOnline Together 2014, as I went to the seven annual meetings preceding it. Personally, even though I am a full-time academic like Dr. Isis, I do find professional value from this conference. Probably this has to do with my weird interdisciplinary professional focus — something that makes it harder for me to get all the support and inspiration and engagement I need from the official professional societies that are supposed to be aligned with my professional identity. And because of the focus of my work, I am well aware of dysfunction in my own professional community and in other academic and professional communities.

While there has been a pronounced social component to ScienceOnline as a focus of the SciComm community, ScienceOnline (and its ancestor conferences) have never felt purely social to me. I have always had a more professional agenda there — learning what’s going on in different realms of practice, getting my ideas before people who can give me useful feedback on them, trying to build myself a big-picture, nuanced understanding of science engagement and how it matters.

And in recent years, my experience of the meetings has been more like work. Last year, for example, I put a lot of effort into coordinating a kid-friendly room at the conference so that attendees with small children could have some child-free time in the sessions. It was a small step towards making the conference — and the community — more accessible and welcoming to all the people who we describe as being part of the community. There’s still significant work to do on this front. If we opt out of doing that work, we are sending a pretty clear message about who we care about having in the community and who we view as peripheral, about whose voices and interests we value and whose we do not.

Paying attention to who is being left out, to whose voices are not being heard, to whose needs are not being met, takes effort. But this effort is part of the regular required maintenance for any community that is not completely homogeneous. Skipping it is a recipe for dysfunction.

And the maintenance, it seems, is required pretty much every damn day.

Friday, in the Twitter stream for the ScienceOnline hashtag #scio14, I saw a tweet from Bug Girl saying that she felt unsafe.

To find out what was making Bug Girl feel unsafe, I went back and watched Joe Hanson’s Thanksgiving video, in which Albert Einstein was portrayed as making unwelcome advances on Marie Curie, cheered on by his host, culminating in a naked assault on Curie.

Given the recent upheaval in the SciComm community around sexual harassment — with lots of discussion, because that’s how we roll — it is surprising and shocking that this video plays sexual harassment and assault for laughs, apparently with no thought to how many women are still targets of harassment, no consideration of how chilly the climate for women in science remains.

Here’s a really clear discussion of what makes the video problematic, and here’s Joe Hanson’s response to the criticisms. I’ll be honest: it looks to me like Joe still doesn’t really understand what people (myself included) took to social media to explain to him. I’m hopeful that he’ll listen and think and eventually get it better. If not, I’m hopeful that people will keep piping up to explain the problem.

But not everyone was happy that a video posted publicly (on a pretty visible platform, PBS Digital Studio, supported by taxpayers in the U.S.) was greeted with public critique from members of our putative community.

The objections raised on Twitter — many of them raised with obvious care as far as being focused on the harm and communicated constructively — were described variously as “drama,” “infighting,” a “witch hunt” and “burning [Joe] at the stake”. (I’m not going to link the tweets because a number of the people who made those characterizations thought about it and walked them back.)

People insisted, as they do pretty much every time, that the proper thing to do was to address the problem privately — as if that’s the only ethical way to deal with a public wrong, or as if it’s the most effective way to fix the harm. Despite what some will argue, I don’t think we have good evidence for either of those claims.

So let’s come back to regular maintenance of the community and think harder about this. I’ve written before that

if bad behavior is dealt with privately, out of view of members of the community who witnessed the bad behavior in question, those members may lose faith in the community’s commitment to calling it out.

This strikes me as good reason not to take all the communications to private channels. People watching and listening on the sidelines are gathering information on whether their so-called community shares their values, on whether it has their back.

Indeed, the people on the sidelines are also watching and listening to the folks dismissing critiques as drama. Operationally, “drama” seems to amount to “Stuff I’d rather you not discuss where I can see or hear it,” which itself shades quickly into “Stuff that really seems to bother other people, for whom I seem to be unable to muster any empathy, because they are not me.”

Let me pause to note what I am not claiming. I am not saying that every member of a community must be an active member of every conversation within that community. I am not saying that empathy requires you to personally step up and engage in every difficult dialogue every time it rolls around. Sometimes you have other stuff to do, or you know that the cost of being patient and calm is more than you can handle at the moment, or you know you need to listen and think for awhile before you get it well enough to get into it.

But going to the trouble to speak up to convey that the conversation is a troublesome one to have happening in your community — that you wish people would stop making an issue of it, that they should just let it go for the sake of peace in the community — that’s something different. That’s telling the people expressing their hurt and disappointment and higher expectations that they should swallow it, that they should keep it to themselves.

For the sake of the community.

For the sake of the community of which they are clearly not really valued members, if they are the ones, always, who need to shut up and let their issues go for the greater good.

Arguably, if one is really serious about the good of the community, one should pay attention to how this kind of dismissal impacts the community. Now is as good a moment as any to start.

The ethics of admitting you messed up.

Part of any human endeavor, including building scientific knowledge or running a magazine with a website, is the potential for messing up.

Humans make mistakes.

Some of them are the result of deliberate choices to violate a norm. Some of them are the result of honest misunderstandings, or of misjudgments about how much control we have over conditions or events. Some of them come about in instances where we didn’t really want the bad thing that happened to happen, but we didn’t take the steps we reasonably could have taken to avoid that outcome, either. Sometimes we don’t recognize that what we did (or neglected to do) was a mistake until we appreciate the negative impact it has.

Human fallibility seems like the kind of thing we’re not going to be able to engineer out of the organism, but we probably can do better at recognizing situations where we’re likely to make mistakes, at exercising more care in those conditions, and at addressing our mistakes once we’ve made them.

Ethically speaking, mistakes are a problem because they cause harm, or because they result from a lapse in an obligation we ought to be honoring, or both. Thus, an ethical response to messing up ought to involve addressing that harm and/or getting back on track with the obligation we fell down on. What does this look like?

1. Acknowledge the harm. This needs to be the very first thing you do. To admit you messed up, you have to recognize the mess, with no qualifications. There it is.

2. Acknowledge the experiential report of the people you have harmed. If you’re serious about sharing a world (which is what ethics is all about), you need to take seriously what the people with whom you’re sharing that world tell you about how they feel. They have privileged access to their own lived experiences; you need to rely on their testimony of those lived experiences.

Swallow your impulse to say, “I wouldn’t feel that way,” or “I wouldn’t have made such a big deal of that if it happened to me.” Swallow as well any impulse to mount an argument from first principles about how the people telling you they were harmed should feel (especially if it’s an argument that they shouldn’t feel hurt at all). These arguments don’t change how people actually feel — except, perhaps, to make them feel worse because you don’t seem to take the actual harm to them seriously! (See “secondary trauma”.)

3. Acknowledge how what you did contributed to the harm. Spell it out without excuses. Note how your action, or your failure to act, helped bring about the bad outcome. Identify the way your action, or your failure to act, fell short of you living up to your obligations (and be clear about what you understand those obligations to be).

Undoubtedly, there will be other causal factors you can point to that also contributed to bringing about the bad outcome. Pointing them out right now will give the impression that you are dodging your responsibility. Don’t do that.

4. Say you are sorry for causing the harm/falling down on the duty. Actually, you can do this earlier in the process, but doing it again won’t hurt.

What will hurt is “I’m sorry if you were offended/if you were hurt” and similar locutions, since these suggest that you don’t take seriously the experiential reports of the people to whom you’re apologizing. (See #2 above.) If it looks like you’re denying that there really was harm (or that the harm was significant), it may also look like you’re not actually apologizing.

5. Identify steps you will take to avoid repeating this kind of mistake. This is closely connected to your post-mortem of what you did wrong this time (see #3 above). How are you going to change the circumstances, be more attentive to your duties, be more aware of the potential bad consequences that you didn’t foresee this time? Spell out the plan.

6. Identify steps you will take to address the harm of your mistake. Sometimes a sincere apology and a clear plan for not messing up in that way again is enough. Sometimes offsetting the harm and rebuilding trust will take more.

This is another good juncture at which to listen to the people telling you they were harmed. What do they want to help mitigate that harm? What are they telling you might help them trust you again?

7. Don’t demand forgiveness. Some harms hurt for a long time. Trust takes longer to establish than to destroy, and rebuilding it can take longer than it took to build the initial trust. This is a good reason to be on guard against mistakes!

8. If you get off to a bad start, admit it and stop digging. People make mistakes trying to address their mistakes. People give excuses when they should instead acknowledge their culpability. People minimize the feelings of the people to whom they’re trying to apologize. It happens, but it adds an additional layer of mistakes that you ought to address.

Catch yourself. Say, “OK, I was giving an excuse, but I should just tell you that what I did was wrong, and I’m sorry it hurt you.” Or, “That reason I gave you was me being defensive, and right now it’s your feelings I need to prioritize.” Or, “I didn’t notice before that the way I was treating you was unfair. I see now that it was, and I’m going to work hard not to treat you that way again.”

Addressing a mistake is not like winning an argument. In fact, it’s the opposite of that: It’s identifying a way that what you did wasn’t successful, or defensible, or good. But this is something we have to get good at, whether we’re trying to build reliable scientific knowledge or just to share a world with others.

——
I think this very general discussion has all sorts of specific applications, for instance to Mariette DiChristina’s message in response to the outcry over the removal of a post by DNLee.

I’m happy to entertain discussion of this particular case in the comments provided it keeps pretty close to the question of our ethical duties in explaining and apologizing. Claims about people’s intent when no clear statement of that intent has been made are out-of-bounds here (although there are plenty of online spaces where you can discuss such things if you like). So are claims about legalities (since what’s legal is not strictly congruent with what’s ethical).

Also, if you haven’t already, you should read Kate Clancy’s detailed analysis of what SciAm did well and what SciAm did poorly in responding to the situation about which DNLee was blogging and in responding to the online outcry when SciAm removed her post.

Also relevant: Melanie Tannenbaum’s excellent post on why we focus on intent when we should focus on impact.

Standing with DNLee and “discovering science”.

This post is about standing with DNLee and discovering science.

In the event that you haven’t been following the situation as it exploded on Twitter, here is the short version:

DNLee was invited to guest-blog at another site. She inquired as to the terms, then politely declined. The editor soliciting those guest-posts then called her a whore.

DNLee posted about this exchange on her blog, providing some insight into the dynamics of writing about science (and about being a woman of color writing about science) in the changing media landscape.

And then someone here at Scientific American Blogs took her post down without letting her know they were doing it or telling her why.

Today, by way of explanation, Scientific American Editor in Chief Mariette DiChristina tweeted:

Re blog inquiry: @sciam is a publication for discovering science. The post was not appropriate for this area & was therefore removed.

Let the record reflect that this is the very first time I have heard about this editorial filter, or that any of my posts that do not fall in the category of “discovering science” could be pulled down by editors.

As well, it’s hard to see how what DNLee posted counts as NOT “discovering science” unless “discovering science” is given such a narrow interpretation that this entire blog runs afoul of the standard.

Of course, I’d argue that “discovering science” in any meaningful way requires discovering that scientific knowledge is the result of human labor.

Scientific knowledge doesn’t wash up on a beach, fully formed. Embodied, quirky human beings build it. The experiences of those human beings as they interact with the world and with each other are a tremendously important part of where scientific knowledge comes from. The experiences of human beings interacting with each other as they try to communicate scientific knowledge are a crucial part of where scientific understanding comes from — and of who feels like understanding science is important, who feels like it’s inviting and fun, who feels like it’s just not for them.

Women’s experiences around building scientific knowledge, communicating scientific knowledge, participating in communities and networks that can support scientific engagements, are not separable from “discovering science”. Neither are the experiences of people of color, nor of other people not yet well represented in the communities of scientists or scientific communicators.

Unless Scientific American is really just concerned with helping the people who already feel like science is for them to “discover science”. And if that’s the situation, they really should have told us bloggers that before they signed us up.

“Discovering science” means discovering all sorts of complexities — including unpleasant ones — about the social contexts in which science is done, in which scientists are trained, in which real live human beings labor to explain bits of what we know about the world and how we came to know those bits and why they matter.

If Scientific American doesn’t want its bloggers delving into those complexities, then they don’t want me.

See also:

Dr. Isis
Kate Clancy
Dana Hunter
Anne Jefferson
Sean Carroll
Stephanie Zvan
David Wescott
Kelly Hills

Ethical and practical issues for uBiome to keep working on.

Earlier this week, the Scientific American Guest Blog hosted a post by Jessica Richman and Zachary Apte, two members of the team at uBiome, a crowdfunded citizen science start-up. Back in February, as uBiome was in the middle of its crowdfunding drive, a number of bloggers (including me) voiced worries that some of the ethical issues of the uBiome project might require more serious attention. Partly in response to those critiques, Richman’s and Apte’s post talks about their perspectives on Institutional Review Boards (IRBs) and how in their present configuration they seem suboptimal for commercial citizen science initiatives.

Their post provides food for thought, but there are some broader issues about which I think the uBiome team should think a little harder.

Ethics takes more than simply meeting legal requirements.

Consulting with lawyers to ensure that your project isn’t breaking any laws is a good idea, but it’s not enough. Meeting legal requirements is not sufficient to meet your ethical obligations (which are well and truly obligations even when they lack the force of law).

Now, it’s the case that there is often something like the force of law deployed to encourage researchers (among others) not to ignore their ethical obligations. If you accept federal research funds, for example, you are entering into a contract one of whose conditions is working within federal guidelines for ethical use of animal or human subjects. If you don’t want the government to enforce this agreement, you can certainly opt out of taking the federal funds.

However, opting out of federal funding does not remove your ethical duties to animals or human subjects. It may remove the government’s involvement in making you live up to your ethical obligations, but the ethical obligations are still there.

This is a tremendously important point — especially in light of a long history of human subjects research in which researchers have often not even recognized their ethical obligations to human subjects, let alone had a good plan for living up to them.

Here, it is important to seek good ethical advice (as distinct from legal advice), from an array of ethicists, including some who see potential problems with your plans. If none of the ethicists you consult see anything to worry about, you probably need to ask a few more! Take the potential problems they identify seriously. Think through ways to manage the project to avoid those problems. Figure out a way to make things right if a worst case scenario should play out.

In a lot of ways, problems that uBiome encountered with the reception of its plan seemed to flow from a lack of good — and challenging — ethical advice. There are plenty of other people and organizations doing citizen science projects that are similar enough to uBiome (from the point of view of interactions with potential subjects/participants), and many of these have experience working with IRBs. Finding them and asking for their guidance could have helped the uBiome team foresee some of the issues with which they’re dealing now, somewhat late in the game.

There are more detailed discussions of the chasm between what satisfies the law and what’s ethical at The Broken Spoke and Drugmonkey. You should, as they say, click through and read the whole thing.

Some frustrations with IRBs may be based on a misunderstanding of how they work.

An Institutional Review Board, or IRB, is a body that examines scientific protocols to determine whether they meet ethical requirements in their engagement of human subjects (including humans who provide tissue or other material to a study). The requirement for independent ethical evaluation of experimental protocols was first articulated in the World Medical Association’s Declaration of Helsinki, which states:

The research protocol must be submitted for consideration, comment, guidance and approval to a research ethics committee before the study begins. This committee must be independent of the researcher, the sponsor and any other undue influence. It must take into consideration the laws and regulations of the country or countries in which the research is to be performed as well as applicable international norms and standards but these must not be allowed to reduce or eliminate any of the protections for research subjects set forth in this Declaration. The committee must have the right to monitor ongoing studies. The researcher must provide monitoring information to the committee, especially information about any serious adverse events. No change to the protocol may be made without consideration and approval by the committee.

(Bold emphasis added.)

In their guest post, Richman and Apte assert, “IRBs are usually associated with an academic institution, and are provided free of charge to members of that institution.”

It may appear that the services of an IRB are “free” to those affiliated with the institution, but they aren’t really. Surely it costs the institution money to run the IRB — to hire a coordinator, to provide ethics training resources for IRB members and to faculty, staff, and students involved in human subjects research, to (ideally) give release time to faculty and staff on the IRB so they can actually devote the time required to consider protocols, comment upon them, provide guidance to PIs, and so forth.

Administrative costs are part of institutional overhead, and there’s a reasonable expectation that researchers whose protocols come before the IRB will take a turn serving on the IRB at some point. So IRBs most certainly aren’t free.

Now, given that the uBiome team was told they couldn’t seek approval from the IRBs at any institutions where they plausibly could claim an affiliation, and given the expense of seeking approval from a private-sector IRB, I can understand why they might have been hesitant to put money down for IRB approval up front. They started with no money for their proposed project. If the project itself ended up being a no-go due to insufficient funding, spending money on IRB approval would seem pointless.

However, it’s worth making it clear that expense is not in itself a sufficient reason to do without ethical oversight. IRB oversight costs money (even in an academic institution where those costs are invisible to PIs because they’re bundled into institutional overhead). Research in general costs money. If you can’t swing the costs (including those of proper ethical oversight), you can’t do the research. That’s how it goes.

Richman and Apte go on:

[W]e wanted to go even further, and get IRB approval once we were funded — in case we wanted to publish, and to ensure that our customers were well-informed of the risks and benefits of participation. It seemed the right thing to do.

So, we decided to wait until after crowdfunding and, if the project was successful, submit for IRB approval at that point.

Getting IRB approval at some point in the process is better than getting none at all. However, some of the worries people (including me) were expressing while uBiome was at the crowdfunding stage of the process (before IRB approval) were focused on how the lines between citizen scientist, human subject, and customer were getting blurred.

Did donors to the drive believe that, by virtue of their donations, they were guaranteed to be enrolled in the study (as sample providers)? Did they have a reasonable picture of the potential benefits of their participation? Did they have a reasonable picture of the potential risks of their participation?

These are not questions we leave to PIs. To assess them objectively, we put these questions before a neutral third party … the IRB.

If the expense of formal IRB consideration of the uBiome protocol was prohibitive during the crowdfunding stage, it surely would have gone some way to meeting ethical duties if the uBiome team had vetted the language in their crowdfunding drive with independent folks attentive to human subjects protection issues. That the ethical questions raised by their fundraising drive were so glaringly obvious to so many of us suggests that skipping this step was not a good call.


We next arrive at the issue of the for-profit IRB. Richman and Apte write:

Some might criticize the fact that we are using a private firm, one not connected with a prestigious academic institution. We beg to differ. This is the same institution that works with academic IRBs that need to coordinate multi-site studies, as well as private firms such as 23andme and pharmaceutical companies doing clinical trials. We agree that it’s kind of weird to pay for ethical review, but that is the current system, and the only option available to us.

I don’t think paying for IRB review is the ethical issue. If one were paying for IRB approval, that would be an ethical issue, and there are some well-known rubber-stamp-y private IRBs out there.

Carl Elliott details some of the pitfalls of the for-profit IRB in his book White Coat, Black Hat. The most obvious of these is that, in a competition for clients, a for-profit IRB might well feel pressure to forgo asking the hard questions, to be less ethically rigorous (and more rubber-stamp-y) — else clients seeking approval would take their business to a competing IRB they saw as more likely to grant that approval with less hassle.

Market forces may provide good solutions to some problems, but it’s not clear that the problem of how to make research more ethical is one of them. Also, it’s worth noting that being a citizen science project does not in and of itself preclude review by an academic IRB – plenty of citizen science projects run by academic scientists do just that. It’s uBiome’s status as a private-sector citizen science project that led to the need to find another IRB.

That said, if folks with concerns knew which private IRB the uBiome team used (something they don’t disclose in their guest post), those folks could inspect the IRB’s track record for rigor and make a judgment from that.

Richman and Apte cite as further problems with IRBs, at least as currently constituted, lack of uniformity across committees and lack of transparency. The lack of uniformity is by design, the thought being that local control of committees should make them more responsive to local concerns (including those of potential subjects). Indeed, when research is conducted by collaborators from multiple institutions, one of the marks of good ethical design is when different local IRBs are comfortable approving the protocol. As well, at least part of the lack of transparency is aimed at human subjects protection — for example, ensuring that the privacy of human subjects is not compromised in the release of approved research protocols.

This is not to say that there is no reasonable discussion to have about striving for more IRB transparency, and more consistency between IRBs. However, such a discussion should center ethical considerations, not convenience or expediency.

Focusing on tone rather than substance makes it look like you don’t appreciate the substance of the critique.

Richman and Apte write the following of the worries bloggers raised with uBiome:

Some of the posts threw us off quite a bit as they seemed to be personal attacks rather than reasoned criticisms of our approach. …

We thought it was a bit… much, shall we say, to compare us to the Nazis (yes, that happened, read the posts) or to the Tuskegee Experiment because we funded our project without first paying thousands of dollars for IRB approval for a project that had not (and might never have) happened.

I have read all of the linked posts (here, here, here, here, here, here, here, and here) that Richman and Apte point to in leveling this complaint about tone. I don’t read them as comparing the uBiome team to Nazis or the researchers who oversaw the Tuskegee Syphilis Experiment.

I’m willing to stipulate that the tone of some of these posts was not at all cuddly. It may have made members of the uBiome team feel defensive.

However, addressing the actual ethical worries raised in these posts would have done a lot more for uBiome’s efforts to earn the public’s trust than adopting a defensive posture did.

Make no mistake, harsh language or not, the posts critical of uBiome were written by a bunch of people who know an awful lot about the ins and outs of ethical interactions with human subjects. These are also people who recognize from their professional lives that, while hard questions can feel like personal attacks, they still need to be answered. They are raising ethical concerns not to be pains, but because they think protecting human subjects matters — as does protecting the collective reputation of those who do human subjects research and/or citizen science.

Trust is easier to break than to build, which means one project’s ethical problems could be enough to sour the public on even the carefully designed projects of researchers who have taken much more care thinking through the ethical dimensions of their work. Addressing potential problems in advance seems like a better policy than hoping they’ll be no big deal.

And losing focus on the potential problems because you don’t like the way in which they were pointed out seems downright foolish.

Much of uBiome’s response to the hard questions raised about the ethics of their project has focused on tone, or has met examples that provide historical context for our ethical guidelines for human subjects research with the protestation, “We’re not like that!” If nothing else, this suggests that the uBiome team hasn’t understood the point those examples are meant to convey, nor the patterns they illuminate: ethical pitfalls into which even non-evil scientists can fall if they’re not careful.

And it is not at all clear that the uBiome team’s tone in blog comments and on social media like Twitter has done much to help its case.

What is still lacking, amidst all their complaints about the tone of the critiques, is a clear account of how basic ethical questions (such as how uBiome will ensure that the joint roles of customer, citizen science participant, and human subject don’t lead to a compromise of autonomy or privacy) are being answered in uBiome’s research protocol.

A conversation on the substance of the critiques would be more productive here than one about who said something mean to whom.

Which brings me to my last issue:

New models of scientific funding, subject recruitment, and outreach that involve the internet are better served by teams that understand how the internet works.

Let’s say you’re trying to fund a project, recruit participants, and build general understanding, enthusiasm, support, and trust. Let’s say that your efforts involve websites where you put out information and social media accounts where you amplify some of that information or push links to your websites or favorable media coverage.

People looking at the information you’ve put out there are going to draw conclusions based on the information you’ve made public. They may also draw speculative conclusions from the gaps — the information you haven’t made public.

You cannot, however, count on them to base their conclusions on information to which they’re not privy, including what’s in your heart.

There may be all sorts of good efforts happening behind the scenes to get rigorous ethical oversight off the ground. But if those efforts are invisible to the public, there’s no reason the public should assume they’re happening.

If you want people to draw more accurate conclusions about what you’re doing, and about what potential problems might arise (and how you’re preparing to face them if they do), a good way to go is to make more information public.

Also, recognize that you’re involved in a conversation that is being conducted publicly. Among other things, this means it’s unreasonable to expect people with concerns to take it to private email in order to get further information from you. You’re the one with a project that relies on cultivating public support and trust; you need to put the relevant information out there!

(What relevant information? Certainly the information relevant to responding to concerns and critiques articulated in the above-linked blog posts would be a good place to start — which is yet another reason why it’s good to be able to get past tone and understand substance.)

In a world where people email privately to get the information that might dispel their worries, those people are the only ones whose worries are addressed. The rest of the public that’s watching (but not necessarily tweeting, blogging, or commenting) doesn’t get that information (especially if you ask the people you email not to share the content of that email publicly). You may have fully lost their trust with nary a sign in your inboxes.

Maybe you wish the dynamics of the internet were different. Some days I do, too. But unless you’re going to fix the internet prior to embarking on your brave new world of crowdfunded citizen science, paying some attention to the dynamics as they are now will help you use it productively, rather than creating misunderstandings and distrust that then require remediation.

That could clear the way to a much more interesting and productive conversation between uBiome, other researchers, and the larger public.

When we target chemophobia, are we punching down?

Over at Pharyngula, Chris Clarke challenges those in the chemical know on their use of “dihydrogen monoxide” jokes. He writes:

Doing what I do for a living, I often find myself reading things on Facebook, Twitter, or those increasingly archaic sites called “blogs” in which the writer expresses concern about industrial effluent in our air, water, consumer products or food. Sometimes the concerns are well-founded, as in the example of pipeline breaks releasing volatile organic chemicals into your backyard. Sometimes, as in the case of concern over chemtrails or toxic vaccines, the concerns are ill-informed and spurious.

And often enough, the educational system in the United States being the way it’s been since the Reagan administration, those concerns are couched in terms that would not be used by a person with a solid grounding in science. People sometimes miss the point of dose-dependency, of acute versus chronic exposure, of the difference between parts per million and parts per trillion. Sometimes their unfamiliarity with the basic facts of chemistry causes them to make patently ridiculous alarmist statements and then double down on them when corrected.

And more times than I can count, if said statements are in a public venue like a comment thread, someone will pipe up by repeating a particular increasingly stale joke. Say it’s a discussion of contaminants in tap water allegedly stemming from hydraulic fracturing for natural gas extraction. Said wit will respond with something like:

“You know what else might be coming out of your tap? DIHYDROGEN MONOXIDE!”

Two hydrogens, one oxygen … what’s coming out of your tap here is water. Hilarious! Or perhaps not.

Clarke argues that those in the chemical know whip out the dihydrogen monoxide joke to have a laugh at the expense of someone who doesn’t have enough chemical knowledge to understand whether conditions they find alarming really ought to alarm them. However, how it usually goes down is that other chemically literate people in earshot laugh while the target of the joke ends up with no better chemical understanding of things.

Really, all the target of the joke learns is that the teller of the joke has knowledge and is willing to use it to make someone else look dumb.

Clarke explains:

Ignorance of science is an evil that for the most part is foisted upon the ignorant. The dihydrogen monoxide joke depends for its humor on ridiculing the victims of that state of affairs, while offering no solution (pun sort of intended) to the ignorance it mocks. It’s like the phrase “chemophobia.” It’s a clan marker for the Smarter Than You tribe.

The dihydrogen monoxide joke punches down, in other words. It mocks people for not having had access to a good education. And the fact that many of its practitioners use it in order to belittle utterly valid environmental concerns, in the style of (for instance) Penn Jillette, makes it all the worse — even if those concerns aren’t always expressed in phraseology a chemist would find beyond reproach, or with math that necessarily works out on close examination.

There’s a weird way in which punching down with the dihydrogen monoxide joke is the evil twin of the “deficit model” in science communication.

The deficit model assumes that the focus in science communication to audiences of non-scientists should be squarely on filling in gaps in their scientific knowledge, teaching people facts and theories that they didn’t already know, as if that is the main thing they must want from science. (It’s worth noting that the deficit model seems to assume a pretty unidirectional flow of information, from the science communicator to the non-scientist.)

The dihydrogen monoxide joke, used the way Clarke describes, identifies a gap in understanding and then, instead of trying to fill it, points and laughs. If the deficit model naïvely assumes that filling gaps in knowledge will make the public cool with science, this kind of deployment of the dihydrogen monoxide joke seems unlikely to provoke any warm feelings towards science or scientists from the person with a gappy understanding.

What’s more, this kind of joking misses an opportunity to engage with what the people on the receiving end are really worried about and why. Are they scared of chemicals per se? Of being at the mercy of others who have information about which chemicals can hurt us (and in which amounts) and/or who have more knowledge about or control of where those chemicals are in our environment? Do they not trust scientists at all, or are they primarily concerned about whether they can trust scientists in the employ of multinational corporations?

Do their concerns have more to do with the information and understanding our policymakers have with regard to chemicals in our world — particularly about whether these policymakers have enough to keep us relatively safe, or about whether they have the political will to do so?

Actually having a conversation and listening to what people are worried about could help. It might turn out that people with the relevant scientific knowledge to laugh at the dihydrogen monoxide joke and those without share a lot of the same concerns.

Andrew Bissette notes that there are instances where the dihydrogen monoxide joke isn’t punching down but punching up, aimed at educated people who should know better but who use large platforms to take advantage of the ignorant. So perhaps it’s not the case that we need a permanent moratorium on the joke so much as more careful thought about what we hope to accomplish with it.

Let’s return to Chris Clarke’s claim that the term “chemophobia” is “a clan marker for the Smarter Than You tribe.”

Lots of chemists in the blogosphere regularly blog and tweet about chemophobia. If they took to relentlessly tagging as “chemophobe!” people who lack access to the body of knowledge and patterns of reasoning that define chemistry, I’d agree that it was the same kind of punching down as the use of the dihydrogen monoxide joke Clarke describes. To the extent that chemists are actually doing this to assert membership in the Smarter Than You tribe, I think it’s counterproductive and mean to boot, and we should cut it out.

But, knowing the folks I do who blog and tweet about chemophobia, I’m pretty sure their goal is not to maintain clear boundaries between The Smart and The Dumb. When they fire off a #chemophobia tweet, it’s almost like they’re sending up the Batsignal, rallying their chemical community to fight some kind of crime.

So what is it these chemists — the people who have access to the body of knowledge and patterns of reasoning that define chemistry — find problematic about the “chemophobia” of others? What do they hope to accomplish by pointing it out?

Part of where they’re coming from is probably grounded in good old-fashioned deficit-model reasoning, but with more emphasis on helping others learn a bit of chemistry because it’s cool. There’s usually a conviction that the basics of the chemistry that expose the coolness are not beyond the grasp of adults of normal intelligence — if only we explain it accessibly enough. Ash Jogalekar suggests more concerted efforts in this direction, proposing a lobby for chemistry (not the chemical industry) that takes account of how people feel about chemistry and what they want to know. However it’s done, the impulse to expose the cool workings of a bit of the world to those who want to understand them should be offered as a kindness. Otherwise, we’re doing it wrong.

Another part of what moves the chemists I know who are concerned with chemophobia is that they don’t want people who are not at home with chemistry to get played. They don’t want them to be vulnerable to quack doctors, nor to merchants of doubt trying to undermine sound science to advance a particular economic or political end, nor to people trying to make a buck with misleading claims, nor to legitimately confused people who think they know much more than they really do.

People with chemical know-how could help address this kind of vulnerability, being partners to help sort out the reliable information from the bogus, the overblown risks from risks that ought to be taken seriously or investigated further.

But short of teaching the folks without access to the body of knowledge and patterns of reasoning that define chemistry everything they need to know to be their own experts (which is the deficit model again), providing this kind of help requires cultivating trust. It requires taking the people to whom you’re offering the help seriously, recognizing that gaps in their chemical understanding don’t make them unintelligent or of less value as human beings.

And laughing at the expense of the people who could use your help — using your superior chemical knowledge to punch down — seems unlikely to foster that trust.

More on rudeness, civility, and the care and feeding of online conversations.

Late last month, I pondered the implications of a piece of research that was mentioned but not described in detail in a perspective piece in the January 4, 2013 issue of Science. [1] In its broad details, the research suggests that the comments that follow an online article about science — and particularly the perceived tone of the comments, whether civil or uncivil — can influence readers’ assessment of the science described in the article itself.

Today, an article by Paul Basken at The Chronicle of Higher Education shares some more details of the study:

The study, outlined on Thursday at the annual meeting of the American Association for the Advancement of Science, involved a survey of 2,338 Americans asked to read an article that discussed the risks of nanotechnology, which involves engineering materials at the atomic scale.

Of participants who had already expressed wariness toward the technology, those who read the sample article—with politely written comments at the bottom—came out almost evenly split. Nearly 43 percent said they saw low risks in the technology, and 46 percent said they considered the risks high.

But with the same article and comments that expressed the same reactions in a rude manner, the split among readers widened, with 32 percent seeing a low risk and 52 percent a high risk.

“The only thing that made a difference was the tone of the comments that followed the story,” said a co-author of the study, Dominique Brossard, a professor of life-science communication at the University of Wisconsin at Madison. The study found “a polarization effect of those rude comments,” Ms. Brossard said.

The study, conducted by researchers at Wisconsin and George Mason University, will be published in a coming issue of the Journal of Computer-Mediated Communication. It was presented at the AAAS conference during a daylong examination of how scientists communicate their work, especially online.

If you click through to read the article, you’ll notice that I was asked for comment on the findings. As you may guess, I had more to say on the paper (which is still under embargo) and its implications than ended up in the article, so I’m sharing my extended thoughts here.

First, I think these results are useful in reassuring bloggers who have been moderating comments that what they are doing is not just permissible (moderating comments is not “censorship,” since bloggers don’t have the power of the state, and folks can find all sorts of places on the Internet to state their views if any given blog denies them a soapbox) but also reasonable. Blogging with comments enabled assumes more than transmission of information; it assumes a conversation, and what kind of conversation it ends up being depends on what kind of behavior is encouraged or forbidden, and on who feels welcome or alienated.

But, there are some interesting issues that the study doesn’t seem to address, issues that I think can matter quite a lot to bloggers.

In the study, readers (lurkers) were reacting to factual information in an online posting plus the discourse about that article in the comments. As the study is constructed, it looks like that discourse is being shaped by commenters, but not by the author of the article. It seems likely to me (and worth further empirical study!) that comment sections in which the author is engaging with commenters — not just responding to the questions they ask and the views they express, but also responding to the ways that they are interacting with other commenters and to their “tone” — have a different impact on readers than comment sections where the author of the piece that is being discussed is totally absent from the scene. To put it more succinctly, comment sections where the author is present and engaged, or absent and disengaged, communicate information to lurkers, too.

Here’s another issue I don’t think the study really addresses: While blogs usually aim to communicate with lurkers as well as readers who post comments (and every piece of evidence I’ve been shown suggests that commenters tend to be a small proportion of readers), most are aiming to reach a core audience that is narrower than “everyone in the world with an internet connection”.

Sometimes what this means is that bloggers are speaking to an audience that finds comment sections that look unruly and contentious to be welcoming, rather than alienating. This isn’t just the case for bloggers seeking an audience that likes to debate or to play rough.

Some blogs have communities that are intentionally uncivil towards casual expressions of sexism, racism, homophobia, etc. Pharyngula is a blog that has taken this approach, and just yesterday Chris Clarke posted a statement on “civility” there that leads with a commitment “not to fetishize civility over justice.” Setting the rules of engagement between bloggers and posters this way means that people in groups especially affected by sexism, racism, homophobia, etc., have a haven in the blogosphere where they don’t have to waste time politely defending the notion that they are fully human, too (or swallowing their anger and frustration at having their humanity treated as a topic of debate). Yes, some people find the environment there alienating — but the people who are alienated by unquestioned biases in most other quarters of the internet (and the physical world, for that matter) are the ones being consciously welcomed into the conversation at Pharyngula, and those who don’t like the environment can find another conversation. It’s a big blogosphere. That not every potential reader feels perfectly comfortable at a blog, in other words, is not proof that the blogger is doing it wrong.

So, where do we find ourselves?

We’re in a situation where lots of people are using online venues like blogs to communicate information and viewpoints in the context of a conversation (where readers can actively engage as commenters). We have a piece of research indicating that the tenor of the commenting (as perceived by lurkers, readers who are not commenting) can communicate as much to readers as the content of the post that is the subject of the comments. And we have lots of questions still unanswered about what kinds of engagement will have what kinds of effect on what kinds of readers (and how reliably). What does this mean for those of us who blog?

I think what it means is that we have to be really reflective about what we’re trying to communicate, who we’re trying to communicate it to, and how our level of visible engagement (or disengagement) in the conversation might make a difference. We have to acknowledge that we have information that’s gappy at best about what’s coming across to the lurkers, and be attentive to ways to get more feedback about how successfully we’re communicating what we’re trying to communicate. We have to recognize that, given all we don’t know, we may want to shift our strategies for blogging and engaging commenters, especially if we come upon evidence that they’re not working the way we thought they were.

* * * * *
In the interests of spelling out the parameters of the conversation I’d like to have here, let me note that whether or not you like the way Pharyngula sets a tone for conversations is off topic here. You are, however, welcome to share in the comments here what you find makes you feel more or less welcome to engage with online postings, whether as a commenter or a lurker.
_____

[1] Dominique Brossard and Dietram A. Scheufele, “Science, New Media, and the Public.” Science, 4 January 2013, Vol. 339, pp. 40-41. DOI: 10.1126/science.1160364