Academic tone-trolling: How does interactivity impact online science communication?

Later this week at ScienceOnline 2013, Emily Willingham and I are co-moderating a session called Dialogue or fight? (Un)moderated science communication online. Here’s the description:

Cultivating a space where commentators can vigorously disagree with a writer — whether on a blog, Twitter, G+, or Facebook — *and* remain committed to being in a real dialogue is pretty challenging. It’s fantastic when these exchanges work and become constructive in that space. On the other hand, there are times when it goes off the rails despite your efforts. What drives the difference? How can you identify someone who is commenting simply to cause trouble versus a commenter there to engage in and add value to a genuine debate? What influence does this capacity for *anyone* to engage with one another via the great leveler that is social media have on social media itself and the tenor and direction of scientific communication?

Getting ready for this session was near the top of my mind when I read a perspective piece by Dominique Brossard and Dietram A. Scheufele in the January 4, 2013 issue of Science. [1] In the article, Brossard and Scheufele raise concerns about the effects of moving the communication of science information to the public from dead-tree newspapers and magazines into online, interactive spaces.

Here’s the paragraph that struck me as especially relevant to the issues Emily and I had been discussing for our session at ScienceOnline 2013:

A recent conference presentation examined the effects of these unintended influences of Web 2.0 environments empirically by manipulating only the tone of the comments (civil or uncivil) that followed an online science news story in a national survey experiment. All participants were exposed to the same, balanced news item (covering nanotechnology as an emerging technology) and to a set of comments following the story that were consistent in terms of content but differed in tone. Disturbingly, readers’ interpretations of potential risks associated with the technology described in the news article differed significantly depending only on the tone of the manipulated reader comments posted with the story. Exposure to uncivil comments (which included name calling and other non-content-specific expressions of incivility) polarized the views among proponents and opponents of the technology with respect to its potential risks. In other words, just the tone of the comments following balanced science stories in Web 2.0 environments can significantly alter how audiences think about the technology itself. (41)

There’s lots to talk about here.

Does this research finding mean that, when you’re trying to communicate scientific information online, enabling comments is a bad idea?

Lots of us are betting that it’s not. Rather, we’re optimistic that people will be more engaged with the information when they have a chance to join a conversation about it (e.g., by asking questions and getting answers).

However, the research finding described in the Science piece suggests that there may be better and worse ways of managing commenting on your posts if your goal is to help your readers understand a particular piece of science.

This might involve having a comment policy that puts some things clearly out-of-bounds, like name-calling or other kinds of incivility, and then consistently enforcing this policy.

It should be noted — and has been — that some kinds of incivility wear the trappings of polite language, which means that it’s not enough to set up automatic screens that weed out comments containing particular specified naughty words. Effective promotion of civility rather than incivility might well involve having the author of the online piece and/or designated moderators as active participants in the ongoing conversation, calling out bad commenter behavior as well as misinformation, answering questions to make sure the audience really understands the information being presented, and being attentive to how the unfolding discussion is likely to be welcoming — or forbidding — to the audience one is hoping to reach.

There are a bunch of details that are not clear from this brief paragraph in the perspective piece. Were the readers whose opinions were swayed by the tone of the comments reacting to a conversation that had already happened or were they watching as it happened? (My guess is the former, since the latter would be hard to orchestrate and coordinate with a survey.) Were they looking at a series of comments that dropped them in the middle of a conversation that might plausibly continue, or were they looking at a conversation that had reached its conclusion? Did the manipulated reader comments include any comments that appeared to be from the author of the science article, or were the research subjects responding to a conversation from which the author appeared to be absent? Potentially, these details could make a difference to the results — a conversation could impact someone reading it differently depending on whether it seems to be gearing up or winding down, just as participation from the author could carry a different kind of weight than the views of random people on the internet. I’m hopeful that future research in this area will explore just what kind of difference they might make.

I’m also guessing that the experimental subjects reading the science article and the manipulated comments that followed could not themselves participate in the discussion by posting a comment. I wonder how much being stuck on the sidelines rather than involved in the dialogue affected their views. We should remember, though, that most indicators suggest that readers of online articles — even on blogs — who actually post comments are much smaller in number than the readers who “lurk” without commenting. This means that commenters are generally a very small percentage of the readers one is trying to reach, and perhaps not very representative of those readers overall.

At this point, the take-home seems to be that social scientists haven’t discovered all the factors that matter in how an audience for online science is going to receive and respond to what’s being offered — which means that those of us delivering science-y content online should assume we haven’t discovered all those factors, either. It might be useful, though, if we are reflective about our interactions with our audiences and if we keep track of the circumstances around communicative efforts that seem to work and those that seem to fail. Cataloguing these anecdotes could surely provide fodder for some systematic empirical study, and I’m guessing it could help us think through strategies for really listening to the audiences we hope are listening to us.

* * * * *
As might be expected, Bora has a great deal to say about the implications of this particular piece of research and about commenting, comment moderation, and Web 2.0 conversations more generally. Grab a mug of coffee, settle in, and read it.

[1] Dominique Brossard and Dietram A. Scheufele, “Science, New Media, and the Public.” Science, 4 January 2013: Vol. 339, pp. 40-41. DOI: 10.1126/science.1160364

Can we combat chemophobia … with home-baked bread?

This post was inspired by the session at the upcoming ScienceOnline 2013 entitled Chemophobia & Chemistry in The Modern World, to be moderated by Dr. Rubidium and Carmen Drahl.

For some reason, a lot of people seem to have an unreasonable fear of chemistry. I’m not just talking about fear of chemistry instruction, but full-on fear of chemicals in their world. Because what people think they know about chemicals is that they go boom, or they’re artificial, or they’re drugs which are maybe useful but maybe just making big pharma CEOs rich, and maybe they’re addictive and subject to abuse. Or, they are seeping into our water, our air, our food, our bodies and maybe poisoning us.

At the extreme, it strikes me that chemophobia is really just a fear of recognizing that our world is made of chemicals. I can assure you, it is!

Your computer is made of chemicals, but so are paper and ink. Snails are made of chemicals, as are plants (which carry out chemical reactions right under our noses). Also carrying out chemical reactions right under our noses are yeasts, without which many of our potables would be less potent. Indeed, our kitchens and pantries, from which we draw our ingredients and prepare our meals, are full of many impressively reactive chemicals.

And here, it actually strikes me that we might be able to ratchet down the levels of chemophobia if people find ways to return to de novo syntheses of more of what they eat — which is to say, to making their food from scratch.

For the last several months, our kitchen has been a hotbed of homemade bread. Partly this is because we had a stretch of a couple years when our only functional oven was a toaster oven, which means that when we got a working full-sized oven again, we became very enthusiastic about using it.

As it turns out, when you’re baking two or three loaves of bread every week, you start looking at things like different kinds of flour on the market and figuring out how things like gluten content affect your dough — how dense of a bread it will make, how much “spring” it has in the oven, and so forth.

(Gluten is a chemical.)

Maybe you dabble with the occasional batch of biscuits or muffins or quick-bread that uses a leavening agent other than yeast — otherwise known as a chemical leavener.

(Chemical leaveners are chemicals.)

And, you might even start to pick up a feel for which chemical leaveners depend on there being an acidic ingredient (like vinegar or buttermilk) in your batter and which will do the job without an acidic ingredient in the batter. (Baking soda needs the acid; baking powder brings its own.)

(Those ingredients, whether acidic or not, are made of chemicals. Even the water.)

Indeed, many who find their inner baker will start playing around with recipes that call for more exotic ingredients like lecithin or ascorbic acid or caramel color (each one: a chemical).

It’s to the point that I have joked, while perusing the pages of “baking enhancers” in the fancy baking supply catalogs, “People start baking their own bread so they can avoid all the chemicals in the commercially baked bread, but then they get really good at baking and start improving their homemade bread with all these chemicals!”

And yes, there’s a bit of a disconnect in baking to avoid chemicals in your food and then discovering that there are certain chemicals that will make that food better. But, I’m hopeful that the process leads to a connection, wherein people who are getting back in touch with making one of the oldest kinds of foods we have can also make peace with the recognition that wholesome foods (and the people who eat them) are made of chemicals.

It’s something to chew on, anyway.

Reasonably honest impressions of #overlyhonestmethods.

I suspect at least some of you who are regular Twitter users have been following the #overlyhonestmethods hashtag, with which scientists have been sharing details of their methodology that are maybe not explicitly spelled out in their published “Materials and Methods” sections. And, as with many other hashtag genres, the tweets in #overlyhonestmethods are frequently hilarious.

I was interviewed last week about #overlyhonestmethods for the Public Radio International program Living On Earth, and the length of my commentary was more or less Twitter-scaled. This means some of the nuance (at least in my head) about questions like whether I thought the tweets were an overshare that could make science look bad didn’t quite make it to the radio. Also, in response to the Living On Earth segment, one of the people with whom I regularly discuss the philosophy of science in the three-dimensional world shared some concerns about this hashtag in the hopes I’d say a bit more:

I am concerned about the brevity of the comments which may influence what one expresses.  Second there is an ego component; some may try to outdo others’ funny stories, and may stretch things in order to gain a competitive advantage.

So, I’m going to say a bit more.

Should we worry that #overlyhonestmethods tweets share information that will make scientific practice look bad to (certain segments of) the public?

I don’t think so. I suppose this may depend on what exactly the public expects of scientists.

The people doing science are human. They are likely to be working with all kinds of constraints — how close their equipment is to the limits of its capabilities (and to making scary noises), how frequently lab personnel can actually make it into the lab to tend to cell cultures, how precisely (or not) pumping rates can be controlled, how promptly (or not) the folks receiving packages can get perishable deliveries to the researchers. (Notice that at least some of these limitations are connected to limited budgets for research … which maybe means that if the public finds them unacceptable, they should lobby their Congresscritters for increased research funding.) There are also constraints that come from the limits of the human animal: with a finite attention span, without a built-in chronometer or calibrated eyeballs, and with a need for sleep and possibly even recreation every so often (despite what some might have you think).

Maybe I’m wrong, but my guess is that it’s a good thing to have a public that is aware of these limitations imposed by the available equipment, reagents, and non-robot workforce.

Actually, I’m willing to bet that some of these limitations, and an awareness of them, are also really handy in scientific knowledge-building. They are departures from ideality that may help scientists nail down which variables in the system really matter in producing and controlling the phenomenon being studied. Reproducibility might be easy for a robot that can do every step of the experiment precisely every single time, but we really learn what’s going on when we drift from that. Does it matter if I use reagents from a different supplier? Can I leave the cultures to incubate a day longer? Can I successfully run the reaction in a lab that’s 10 °C warmer or 10 °C colder? Working out the tolerances helps turn an experimental protocol from a magic trick into a system where we have some robust understanding of what variables matter and of how they’re hooked to each other.

Does the 140 character limit mean #overlyhonestmethods tweets leave out important information, or that scientists will only use the hashtag to be candid about some of their methods while leaving others unexplored?

The need for brevity surely means that methods for which candor requires a great deal of context and/or explanation won’t be as well-represented as methods where one can be candid and pithy simultaneously. These tweeted glimpses into how the science gets done are more likely to be one-liners than shaggy-dog stories.

However, it’s hard to imagine that folks who really wanted to play along wouldn’t use a series of tweets to share a longer story, or maybe even write a blog post about it and use the hashtag to tweet a link to that post.

What if #overlyhonestmethods becomes a game of one-upmanship and puffery, in which researchers sacrifice honesty for laughs?

Maybe there’s some of this happening, and if the point of the hashtag is for researchers to entertain each other, maybe that’s not a problem. However, if other members of one’s scientific community were actually looking to those tweets to fill in some of the important details of methodology that are elided in the terse “Materials and Methods” section of a published research paper, I hope the tweeters would, when queried, provide clear and candid information on how they actually conducted their experiments. Correcting or retracting a tweet should be less of an ego blow than correcting or retracting a published paper, I hope (and indeed, as hard as it might be to correct or retract published claims, good scientists do it when they need to).

The whole #overlyhonestmethods hashtag raises the perennial question of why so much is elided in published “Materials and Methods” sections. Blame is usually put on limitations of space in the journals, but it’s also reasonable to acknowledge that sometimes details-that-turn-out-to-be-important are left out because the researchers don’t fully recognize their importance. Other times, researchers may have empirical grounds for thinking these details are important, but they don’t yet have a satisfying story to tell about why they should be.

By the way, I think it would be an excellent thing if, for research that is already published, #overlyhonestmethods included the relevant DOI. These tweets would be supplementary information researchers could really use.

What if researchers use #overlyhonestmethods to disclose ethically problematic methods?

Given that Twitter is a social medium, I expect other scientists in the community watching the hashtag would challenge those methods or chime in to explain just what makes them ethically problematic. They might also suggest less ethically problematic ways to achieve the same research goals.

The researchers on Twitter could, in other words, use the social medium to exert social pressure in order to make sure other members of their scientific community understand and live up to the norms of that community.

That outcome would strike me as a very good one.

* * * * *

In addition to the ever expanding collection of tweets about methods, #overlyhonestmethods also has links to some thoughtful, smart, and funny commentary on the hashtag and the conversations around it. Check it out!

Fear of scientific knowledge about firearm-related injuries.

In the United States, a significant amount of scientific research is funded through governmental agencies, using public money. Presumably, this is not primarily aimed at keeping scientists employed and off the streets*, but rather is driven by a recognition that reliable knowledge about how various bits of our world work can be helpful to us (individually and collectively) in achieving particular goals and solving particular problems.

Among other things, this suggests a willingness to put the scientific knowledge to use once it’s built.** If we learn some relevant details about the workings of the world, taking those into account as we figure out how best to achieve our goals or solve our problems seems like a reasonable thing to do — especially if we’ve made a financial investment in discovering those relevant details.

And yet, some of the “strings” attached to federally funded research suggest that the legislators involved in approving funding for research are less than enthusiastic to see our best scientific knowledge put to use in crafting policy — or, that they would prefer that the relevant scientific knowledge not be built or communicated at all.

A case in point, which has been very much on my mind for the last month, is the way language in appropriations bills has restricted Centers for Disease Control and Prevention (CDC) and National Institutes of Health (NIH) research funds for research related to firearms.

The University of Chicago Crime Lab organized a joint letter (PDF) to the gun violence task force headed by Vice President Joe Biden, signed by 108 researchers and scholars, which is very clear in laying out the impediments that have been put on research about the effects of guns. They identify the crucial language, which is still present in subsection (c) of sections 503 and 218 of the FY2013 Appropriations Act governing NIH and CDC funding:

None of the funds made available in this title may be used, in whole or in part, to advocate or promote gun control.

As the letter from the Crime Lab rightly notes,

Federal scientific funds should not be used to advance ideological agendas on any topic. Yet that legislative language has the effect of discouraging the funding of well-crafted scientific studies.

What is the level of this discouragement? The letter presents a table comparing major NIH research awards connected to a handful of conditions between 1973 and 2012, noting the number of reported U.S. cases of each condition during this time period alongside the number of grants to study it:

Cholera: 400 reported U.S. cases, 212 NIH research awards.
Diphtheria: 1,337 reported U.S. cases, 56 NIH research awards.
Polio: 266 reported U.S. cases, 129 NIH research awards.
Rabies: 65 reported U.S. cases, 89 NIH research awards.
Firearm injuries: more than 4 million reported U.S. cases, exactly 3 NIH research awards.
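The size of the disparity is easier to see as a rate. Here is a minimal back-of-the-envelope sketch in Python, using only the figures quoted above from the Crime Lab letter; the awards-per-million-cases framing is mine, purely for illustration:

```python
# Figures from the Crime Lab letter: (NIH research awards, reported U.S. cases), 1973-2012.
conditions = {
    "cholera": (212, 400),
    "diphtheria": (56, 1337),
    "polio": (129, 266),
    "rabies": (89, 65),
    "firearm injuries": (3, 4_000_000),
}

# Normalize each condition to awards per million reported cases.
for name, (awards, cases) in conditions.items():
    per_million = awards / cases * 1_000_000
    print(f"{name}: {per_million:,.1f} NIH awards per million reported cases")
```

By this measure, cholera drew on the order of half a million awards per million cases, while firearm injuries drew less than one — a gap of several orders of magnitude.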

One possibility here is that, from 1973 to 2012, there were very few researchers interested enough in firearm injuries to propose well-crafted scientific studies of them. I suspect that the 108 signatories of the letter linked above would disagree with that explanation for this disparity in research funding.

Another possibility is that legislators want to prevent the relevant scientific knowledge from being built. The fact that they have imposed restrictions on the collection and sharing of data by the Federal Bureau of Alcohol, Tobacco, Firearms and Explosives (in particular, data tracing illegal sales and purchases of firearms) strongly supports the hypothesis that, at least when it comes to firearms, legislators would rather be able to make policy unencumbered by pesky facts about how the relevant pieces of the world actually work.

What this suggests to me is that these legislators either don’t understand that knowing more about how the world works can help you achieve desired outcomes in that world, or that they don’t want to achieve the outcome of reducing firearm injury or death.

Perhaps these legislators don’t want researchers to build reliable knowledge about the causes of firearm injury because they fear it will get in the way of their achieving some other goal that is more important to them than reducing firearm injury or death.

Perhaps they fear that careful scientific research will turn up facts which themselves seem “to advocate or promote gun control” — at least to the extent that they show that the most effective way to reduce firearm injury and death would be to implement controls that the legislators view as politically unpalatable.

If nothing else, I find that a legislator’s aversion to scientific evidence is a useful piece of information about him or her to me, as a voter.

*If federal funding for research did function like a subsidy, meant to keep the researchers employed and out of trouble, you’d expect to see a much higher level of support for philosophical research. History suggests that philosophers in the public square with nothing else to keep them busy end up asking people lots of annoying questions, undermining the authority of institutions, corrupting the youth, and so forth.

**One of the challenges in getting the public on board to fund scientific research is that they can be quite skeptical that “basic research” will have any useful application beyond satisfying researchers’ curiosity.