More on rudeness, civility, and the care and feeding of online conversations.

Late last month, I pondered the implications of a piece of research that was mentioned but not described in detail in a perspective piece in the January 4, 2013 issue of Science. [1] In its broad details, the research suggests that the comments that follow an online article about science — and particularly the perceived tone of the comments, whether civil or uncivil — can influence readers’ assessment of the science described in the article itself.

Today, an article by Paul Basken at The Chronicle of Higher Education shares some more details of the study:

The study, outlined on Thursday at the annual meeting of the American Association for the Advancement of Science, involved a survey of 2,338 Americans asked to read an article that discussed the risks of nanotechnology, which involves engineering materials at the atomic scale.

Of participants who had already expressed wariness toward the technology, those who read the sample article—with politely written comments at the bottom—came out almost evenly split. Nearly 43 percent said they saw low risks in the technology, and 46 percent said they considered the risks high.

But with the same article and comments that expressed the same reactions in a rude manner, the split among readers widened, with 32 percent seeing a low risk and 52 percent a high risk.

“The only thing that made a difference was the tone of the comments that followed the story,” said a co-author of the study, Dominique Brossard, a professor of life-science communication at the University of Wisconsin at Madison. The study found “a polarization effect of those rude comments,” Ms. Brossard said.

The study, conducted by researchers at Wisconsin and George Mason University, will be published in a coming issue of the Journal of Computer-Mediated Communication. It was presented at the AAAS conference during a daylong examination of how scientists communicate their work, especially online.

If you click through to read the article, you’ll notice that I was asked for comment on the findings. As you may guess, I had more to say on the paper (which is still under embargo) and its implications than ended up in the article, so I’m sharing my extended thoughts here.

First, I think these results are useful in reassuring bloggers who have been moderating comments that what they are doing is not just permissible (moderating comments is not “censorship,” since bloggers don’t have the power of the state, and folks can find all sorts of places on the Internet to state their views if any given blog denies them a soapbox) but also reasonable. Blogging with comments enabled assumes more than transmission of information; it assumes a conversation, and what kind of conversation it ends up being depends on what kind of behavior is encouraged or forbidden, and on who feels welcome or alienated.

But, there are some interesting issues that the study doesn’t seem to address, issues that I think can matter quite a lot to bloggers.

In the study, readers (lurkers) were reacting to factual information in an online posting plus the discourse about that article in the comments. As the study is constructed, it looks like that discourse is being shaped by commenters, but not by the author of the article. It seems likely to me (and worth further empirical study!) that comment sections in which the author is engaging with commenters — not just responding to the questions they ask and the views they express, but also responding to the ways that they are interacting with other commenters and to their “tone” — have a different impact on readers than comment sections where the author of the piece that is being discussed is totally absent from the scene. To put it more succinctly, comment sections where the author is present and engaged, or absent and disengaged, communicate information to lurkers, too.

Here’s another issue I don’t think the study really addresses: While blogs usually aim to communicate with lurkers as well as readers who post comments (and every piece of evidence I’ve been shown suggests that commenters tend to be a small proportion of readers), most are aiming to reach a core audience that is narrower than “everyone in the world with an internet connection”.

Sometimes what this means is that bloggers are speaking to an audience that finds comment sections that look unruly and contentious to be welcoming, rather than alienating. This isn’t just the case for bloggers seeking an audience that likes to debate or to play rough.

Some blogs have communities that are intentionally uncivil towards casual expressions of sexism, racism, homophobia, etc. Pharyngula is a blog that has taken this approach, and just yesterday Chris Clarke posted a statement on “civility” there that leads with a commitment “not to fetishize civility over justice.” Setting the rules of engagement between bloggers and posters this way means that people in groups especially affected by sexism, racism, homophobia, etc., have a haven in the blogosphere where they don’t have to waste time politely defending the notion that they are fully human, too (or swallowing their anger and frustration at having their humanity treated as a topic of debate). Yes, some people find the environment there alienating — but the people who are alienated by unquestioned biases in most other quarters of the internet (and the physical world, for that matter) are the ones being consciously welcomed into the conversation at Pharyngula, and those who don’t like the environment can find another conversation. It’s a big blogosphere. That not every potential reader feels perfectly comfortable at a blog, in other words, is not proof that the blogger is doing it wrong.

So, where do we find ourselves?

We’re in a situation where lots of people are using online venues like blogs to communicate information and viewpoints in the context of a conversation (where readers can actively engage as commenters). We have a piece of research indicating that the tenor of the commenting (as perceived by lurkers, readers who are not commenting) can communicate as much to readers as the content of the post that is the subject of the comments. And we have lots of questions still unanswered about what kinds of engagement will have what kinds of effect on what kinds of readers (and how reliably). What does this mean for those of us who blog?

I think what it means is that we have to be really reflective about what we’re trying to communicate, who we’re trying to communicate it to, and how our level of visible engagement (or disengagement) in the conversation might make a difference. We have to acknowledge that the information we have about what’s coming across to the lurkers is gappy at best, and be attentive to ways to get more feedback about how successfully we’re communicating what we’re trying to communicate. We have to recognize that, given all we don’t know, we may want to shift our strategies for blogging and engaging commenters, especially if we come upon evidence that they’re not working the way we thought they were.

* * * * *
In the interests of spelling out the parameters of the conversation I’d like to have here, let me note that whether or not you like the way Pharyngula sets a tone for conversations is off topic here. You are, however, welcome to share in the comments here what you find makes you feel more or less welcome to engage with online postings, whether as a commenter or a lurker.
_____

[1] Dominique Brossard and Dietram A. Scheufele, “Science, New Media, and the Public.” Science, 4 January 2013: Vol. 339, pp. 40-41.
DOI: 10.1126/science.1160364

Academic tone-trolling: How does interactivity impact online science communication?

Later this week at ScienceOnline 2013, Emily Willingham and I are co-moderating a session called Dialogue or fight? (Un)moderated science communication online. Here’s the description:

Cultivating a space where commentators can vigorously disagree with a writer–whether on a blog, Twitter, G+, or Facebook, *and* remain committed to being in a real dialogue is pretty challenging. It’s fantastic when these exchanges work and become constructive in that space. On the other hand, there are times when it goes off the rails despite your efforts. What drives the difference? How can you identify someone who is commenting simply to cause trouble versus a commenter there to engage in and add value to a genuine debate? What influence does this capacity for *anyone* to engage with one another via the great leveler that is social media have on social media itself and the tenor and direction of scientific communication?

Getting ready for this session was near the top of my mind when I read a perspective piece by Dominique Brossard and Dietram A. Scheufele in the January 4, 2013 issue of Science. [1] In the article, Brossard and Scheufele raise concerns about the effects of moving the communication of science information to the public from dead-tree newspapers and magazines into online, interactive spaces.

Here’s the paragraph that struck me as especially relevant to the issues Emily and I had been discussing for our session at ScienceOnline 2013:

A recent conference paper presented an examination of the effects of these unintended influences of Web 2.0 environments empirically by manipulating only the tone of the comments (civil or uncivil) that followed an online science news story in a national survey experiment. All participants were exposed to the same, balanced news item (covering nanotechnology as an emerging technology) and to a set of comments following the story that were consistent in terms of content but differed in tone. Disturbingly, readers’ interpretations of potential risks associated with the technology described in the news article differed significantly depending only on the tone of the manipulated reader comments posted with the story. Exposure to uncivil comments (which included name calling and other non-content-specific expressions of incivility) polarized the views among proponents and opponents of the technology with respect to its potential risks. In other words, just the tone of the comments following balanced science stories in Web 2.0 environments can significantly alter how audiences think about the technology itself. (41)

There’s lots to talk about here.

Does this research finding mean that, when you’re trying to communicate scientific information online, enabling comments is a bad idea?

Lots of us are betting that it’s not. Rather, we’re optimistic that people will be more engaged with the information when they have a chance to engage in a conversation about it (e.g., by asking questions and getting answers).

However, the research finding described in the Science piece suggests that there may be better and worse ways of managing commenting on your posts if your goal is to help your readers understand a particular piece of science.

This might involve having a comment policy that puts some things clearly out-of-bounds, like name-calling or other kinds of incivility, and then consistently enforcing this policy.

It should be noted — and has been — that some kinds of incivility wear the trappings of polite language, which means that it’s not enough to set up automatic screens that weed out comments containing particular specified naughty words. Effective promotion of civility rather than incivility might well involve having the author of the online piece and/or designated moderators as active participants in the ongoing conversation, calling out bad commenter behavior as well as misinformation, answering questions to make sure the audience really understands the information being presented, and being attentive to how the unfolding discussion is likely to be welcoming — or forbidding — to the audience one is hoping to reach.
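To make that limitation concrete, here is a minimal sketch (in Python, with an invented word list and invented sample comments, nothing drawn from any real moderation tool) of the kind of automatic screen described above, and of a politely worded but substantively uncivil comment it would wave through:

```python
# A naive automatic screen: reject a comment only if it contains one
# of the specified naughty words. (Word list and sample comments are
# hypothetical, for illustration only.)
NAUGHTY_WORDS = {"idiot", "moron", "shill"}

def passes_screen(comment: str) -> bool:
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return words.isdisjoint(NAUGHTY_WORDS)

blunt = "Only an idiot would believe this study."
polite = "How brave of the author to publish without understanding the field."

print(passes_screen(blunt))   # False: caught by the word list
print(passes_screen(polite))  # True: uncivil in substance, passes anyway
```

The point is not this particular filter but that any list-based screen keys on surface features, while this kind of incivility lives in the content, which is why the active human engagement described above matters.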

There are a bunch of details that are not clear from this brief paragraph in the perspective piece. Were the readers whose opinions were swayed by the tone of the comments reacting to a conversation that had already happened or were they watching as it happened? (My guess is the former, since the latter would be hard to orchestrate and coordinate with a survey.) Were they looking at a series of comments that dropped them in the middle of a conversation that might plausibly continue, or were they looking at a conversation that had reached its conclusion? Did the manipulated reader comments include any comments that appeared to be from the author of the science article, or were the research subjects responding to a conversation from which the author appeared to be absent? Potentially, these details could make a difference to the results — a conversation could impact someone reading it differently depending on whether it seems to be gearing up or winding down, just as participation from the author could carry a different kind of weight than the views of random people on the internet. I’m hopeful that future research in this area will explore just what kind of difference they might make.

I’m also guessing that the experimental subjects reading the science article and the manipulated comments that followed could not themselves participate in the discussion by posting a comment. I wonder how much being stuck on the sidelines rather than involved in the dialogue affected their views. We should remember, though, that most indicators suggest that readers of online articles — even on blogs — who actually post comments are much smaller in number than the readers who “lurk” without commenting. This means that commenters are generally a very small percentage of the readers one is trying to reach, and perhaps not very representative of those readers overall.

At this point, the take-home seems to be that social scientists haven’t discovered all the factors that matter in how an audience for online science is going to receive and respond to what’s being offered — which means that those of us delivering science-y content online should assume we haven’t discovered all those factors, either. It might be useful, though, if we are reflective about our interactions with our audiences and if we keep track of the circumstances around communicative efforts that seem to work and those that seem to fail. Cataloguing these anecdotes could surely provide fodder for some systematic empirical study, and I’m guessing it could help us think through strategies for really listening to the audiences we hope are listening to us.

* * * * *
As might be expected, Bora has a great deal to say about the implications of this particular piece of research and about commenting, comment moderation, and Web 2.0 conversations more generally. Grab a mug of coffee, settle in, and read it.

——
[1] Dominique Brossard and Dietram A. Scheufele, “Science, New Media, and the Public.” Science, 4 January 2013: Vol. 339, pp. 40-41.
DOI: 10.1126/science.1160364

Can we combat chemophobia … with home-baked bread?

This post was inspired by the session at the upcoming ScienceOnline 2013 entitled Chemophobia & Chemistry in The Modern World, to be moderated by Dr. Rubidium and Carmen Drahl.

For some reason, a lot of people seem to have an unreasonable fear of chemistry. I’m not just talking about fear of chemistry instruction, but full-on fear of chemicals in their world. Because what people think they know about chemicals is that they go boom, or they’re artificial, or they’re drugs which are maybe useful but maybe just making big pharma CEOs rich, and maybe they’re addictive and subject to abuse. Or, they are seeping into our water, our air, our food, our bodies and maybe poisoning us.

At the extreme, it strikes me that chemophobia is really just a fear of recognizing that our world is made of chemicals. I can assure you, it is!

Your computer is made of chemicals, but so are paper and ink. Snails are made of chemicals, as are plants (which carry out chemical reactions right under our noses). Also carrying out chemical reactions right under our noses are yeasts, without which many of our potables would be less potent. Indeed, our kitchens and pantries, from which we draw our ingredients and prepare our meals, are full of many impressively reactive chemicals.

And here, it actually strikes me that we might be able to ratchet down the levels of chemophobia if people find ways to return to de novo syntheses of more of what they eat — which is to say, to making their food from scratch.

For the last several months, our kitchen has been a hotbed of homemade bread. Partly this is because we had a stretch of a couple of years where our only functional oven was a toaster oven, which meant that when we got a working full-sized oven again, we became very enthusiastic about using it.

As it turns out, when you’re baking two or three loaves of bread every week, you start looking at things like different kinds of flour on the market and figuring out how things like gluten content affect your dough — how dense of a bread it will make, how much “spring” it has in the oven, and so forth.

(Gluten is a chemical.)

Maybe you dabble with the occasional batch of biscuits or muffins or quick-bread that uses a leavening agent other than yeast — otherwise known as a chemical leavener.

(Chemical leaveners are chemicals.)

And, you might even start to pick up a feel for which chemical leaveners depend on there being an acidic ingredient (like vinegar or buttermilk) in your batter and which will do the job without an acidic ingredient in the batter.

(Those ingredients, whether acidic or not, are made of chemicals. Even the water.)

Indeed, many who find their inner baker will start playing around with recipes that call for more exotic ingredients like lecithin or ascorbic acid or caramel color (each one: a chemical).

It’s to the point that I have joked, while perusing the pages of “baking enhancers” in the fancy baking supply catalogs, “People start baking their own bread so they can avoid all the chemicals in the commercially baked bread, but then they get really good at baking and start improving their homemade bread with all these chemicals!”

And yes, there’s a bit of a disconnect in baking to avoid chemicals in your food and then discovering that there are certain chemicals that will make that food better. But, I’m hopeful that the process leads to a connection, wherein people who are getting back in touch with making one of the oldest kinds of foods we have can also make peace with the recognition that wholesome foods (and the people who eat them) are made of chemicals.

It’s something to chew on, anyway.

Reasonably honest impressions of #overlyhonestmethods.

I suspect at least some of you who are regular Twitter users have been following the #overlyhonestmethods hashtag, with which scientists have been sharing details of their methodology that are maybe not explicitly spelled out in their published “Materials and Methods” sections. And, as with many other hashtag genres, the tweets in #overlyhonestmethods are frequently hilarious.

I was interviewed last week about #overlyhonestmethods for the Public Radio International program Living On Earth, and the length of my commentary was more or less Twitter-scaled. This means some of the nuance (at least in my head) about questions like whether I thought the tweets were an overshare that could make science look bad didn’t quite make it to the radio. Also, in response to the Living On Earth segment, one of the people with whom I regularly discuss the philosophy of science in the three-dimensional world shared some concerns about this hashtag in the hopes I’d say a bit more:

I am concerned about the brevity of the comments which may influence what one expresses.  Second there is an ego component; some may try to outdo others’ funny stories, and may stretch things in order to gain a competitive advantage.

So, I’m going to say a bit more.

Should we worry that #overlyhonestmethods tweets share information that will make scientific practice look bad to (certain segments of) the public?

I don’t think so. I suppose this may depend on what exactly the public expects of scientists.

The people doing science are human. They are likely to be working with all kinds of constraints — how close their equipment is to the limits of its capabilities (and to making scary noises), how frequently lab personnel can actually make it into the lab to tend to cell cultures, how precisely (or not) pumping rates can be controlled, how promptly (or not) the folks receiving packages can get perishable deliveries to the researchers. (Notice that at least some of these limitations are connected to limited budgets for research … which maybe means that if the public finds them unacceptable, they should lobby their Congresscritters for increased research funding.) There are also constraints that come from the limits of the human animal: with a finite attention span, without a built in chronometer or calibrated eyeballs, and with a need for sleep and possibly even recreation every so often (despite what some might have you think).

Maybe I’m wrong, but my guess is that it’s a good thing to have a public that is aware of these limitations imposed by the available equipment, reagents, and non-robot workforce.

Actually, I’m willing to bet that some of these limitations, and an awareness of them, are also really handy in scientific knowledge-building. They are departures from ideality that may help scientists nail down which variables in the system really matter in producing and controlling the phenomenon being studied. Reproducibility might be easy for a robot that can do every step of the experiment precisely every single time, but we really learn what’s going on when we drift from that. Does it matter if I use reagents from a different supplier? Can I leave the cultures to incubate a day longer? Can I successfully run the reaction in a lab that’s 10 °C warmer or 10 °C colder? Working out the tolerances helps turn an experimental protocol from a magic trick into a system where we have some robust understanding of what variables matter and of how they’re hooked to each other.
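Purely as an illustration of what working out those tolerances might look like when planned systematically (the condition names and values below are invented for the example, not taken from any actual protocol), one could enumerate the combinations of departures-from-ideality to test:

```python
# Hypothetical tolerance sweep: enumerate combinations of protocol
# variations to see which departures from the written method matter.
from itertools import product

variations = {
    "reagent_supplier": ["usual", "alternate"],
    "incubation_days": [3, 4],           # protocol says 3; can we go longer?
    "lab_temp_offset_C": [-10, 0, 10],   # colder, as written, warmer
}

for combo in product(*variations.values()):
    condition = dict(zip(variations.keys(), combo))
    print(condition)  # each line is one experiment to run and compare
```

Each variation that still reproduces the phenomenon marks a variable the system tolerates; each failure points to a variable that really matters.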

Does the 140 character limit mean #overlyhonestmethods tweets leave out important information, or that scientists will only use the hashtag to be candid about some of their methods while leaving others unexplored?

The need for brevity surely means that methods for which candor requires a great deal of context and/or explanation won’t be as well-represented as methods where one can be candid and pithy simultaneously. These tweeted glimpses into how the science gets done are more likely to be one-liners than shaggy-dog stories.

However, it’s hard to imagine that folks who really wanted to play along wouldn’t use a series of tweets if a single one were too cramped, or maybe even write a blog post about it and use the hashtag to tweet a link to that post.

What if #overlyhonestmethods becomes a game of one-upmanship and puffery, in which researchers sacrifice honesty for laughs?

Maybe there’s some of this happening, and if the point of the hashtag is for researchers to entertain each other, maybe that’s not a problem. However, if other members of one’s scientific community were actually looking to those tweets to fill in some of the important details of methodology that are elided in the terse “Materials and Methods” section of a published research paper, I hope the tweeters would, when queried, provide clear and candid information on how they actually conducted their experiments. Correcting or retracting a tweet should be less of an ego blow than correcting or retracting a published paper, I hope (and indeed, as hard as it might be to correct or retract published claims, good scientists do it when they need to).

The whole #overlyhonestmethods hashtag raises the perennial question of why so much is elided in published “Materials and Methods” sections. Blame is usually put on limitations of space in the journals, but it’s also reasonable to acknowledge that sometimes details-that-turn-out-to-be-important are left out because the researchers don’t fully recognize their importance. Other times, researchers may have empirical grounds for thinking these details are important, but they don’t yet have a satisfying story to tell about why they should be.

By the way, I think it would be an excellent thing if, for research that is already published, #overlyhonestmethods included the relevant DOI. These tweets would be supplementary information researchers could really use.

What if researchers use #overlyhonestmethods to disclose ethically problematic methods?

Given that Twitter is a social medium, I expect other scientists in the community watching the hashtag would challenge those methods or chime in to explain just what makes them ethically problematic. They might also suggest less ethically problematic ways to achieve the same research goals.

The researchers on Twitter could, in other words, use the social medium to exert social pressure in order to make sure other members of their scientific community understand and live up to the norms of that community.

That outcome would strike me as a very good one.

* * * * *

In addition to the ever-expanding collection of tweets about methods, #overlyhonestmethods also has links to some thoughtful, smart, and funny commentary on the hashtag and the conversations around it. Check it out!

Fear of scientific knowledge about firearm-related injuries.

In the United States, a significant amount of scientific research is funded through governmental agencies, using public money. Presumably, this is not primarily aimed at keeping scientists employed and off the streets*, but rather is driven by a recognition that reliable knowledge about how various bits of our world work can be helpful to us (individually and collectively) in achieving particular goals and solving particular problems.

Among other things, this suggests a willingness to put the scientific knowledge to use once it’s built.** If we learn some relevant details about the workings of the world, taking those into account as we figure out how best to achieve our goals or solve our problems seems like a reasonable thing to do — especially if we’ve made a financial investment in discovering those relevant details.

And yet, some of the “strings” attached to federally funded research suggest that the legislators involved in approving funding for research are less than enthusiastic to see our best scientific knowledge put to use in crafting policy — or, that they would prefer that the relevant scientific knowledge not be built or communicated at all.

A case in point, which has been very much on my mind for the last month, is the way language in appropriations bills has restricted Centers for Disease Control and Prevention (CDC) and National Institutes of Health (NIH) research funds for research related to firearms.

The University of Chicago Crime Lab organized a joint letter (PDF) to the gun violence task force headed by Vice President Joe Biden, signed by 108 researchers and scholars, which is very clear in laying out the impediments that have been put on research about the effects of guns. They identify the crucial language, which is still present in subsection (c) of sections 503 and 218 of the FY2013 Appropriations Act governing NIH and CDC funding:

None of the funds made available in this title may be used, in whole or in part, to advocate or promote gun control.

As the letter from the Crime Lab rightly notes,

Federal scientific funds should not be used to advance ideological agendas on any topic. Yet that legislative language has the effect of discouraging the funding of well-crafted scientific studies.

What is the level of this discouragement? The letter presents a table comparing major NIH research awards connected to a handful of conditions between 1973 and 2012, noting the number of reported cases of these conditions in the U.S. during this time period alongside the number of grants to study the condition. There were 212 NIH research awards to study cholera and 400 reported U.S. cases of cholera. There were 56 NIH research awards to study diphtheria and 1337 reported U.S. cases of diphtheria. There were 129 NIH research awards to study polio and 266 reported U.S. cases of polio. There were 89 NIH research awards to study rabies and 65 reported U.S. cases of rabies. But, for more than 4 million reported firearm injuries in the U.S. during this time period, there were exactly 3 NIH research awards to study firearm injuries.
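To put those figures on a common scale, here is a quick back-of-the-envelope calculation (mine, not the letter’s) using the numbers quoted above:

```python
# Awards per reported case, computed from the figures in the Crime Lab
# letter as quoted above; the normalization is illustrative only.
data = {
    # condition: (NIH research awards, reported U.S. cases), 1973-2012
    "cholera":          (212, 400),
    "diphtheria":       (56, 1_337),
    "polio":            (129, 266),
    "rabies":           (89, 65),
    "firearm injuries": (3, 4_000_000),
}

for condition, (awards, cases) in data.items():
    per_100k = awards / cases * 100_000
    print(f"{condition}: {per_100k:,.2f} awards per 100,000 reported cases")
```

However the numbers are normalized, the disparity runs to several orders of magnitude.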

One possibility here is that, from 1973 to 2012, there were very few researchers interested enough in firearm injuries to propose well-crafted scientific studies of them. I suspect that the 108 signatories of the letter linked above would disagree with that explanation for this disparity in research funding.

Another possibility is that legislators want to prevent the relevant scientific knowledge from being built. The fact that they have imposed restrictions on the collection and sharing of data by the Bureau of Alcohol, Tobacco, Firearms and Explosives (in particular, data tracing illegal sales and purchases of firearms) strongly supports the hypothesis that, at least when it comes to firearms, legislators would rather be able to make policy unencumbered by pesky facts about how the relevant pieces of the world actually work.

What this suggests to me is that these legislators either don’t understand that knowing more about how the world works can help you achieve desired outcomes in that world, or that they don’t want to achieve the outcome of reducing firearm injury or death.

Perhaps these legislators don’t want researchers to build reliable knowledge about the causes of firearm injury because they fear it will get in the way of their achieving some other goal that is more important to them than reducing firearm injury or death.

Perhaps they fear that careful scientific research will turn up facts which themselves seem “to advocate or promote gun control” — at least to the extent that they show that the most effective way to reduce firearm injury and death would be to implement controls that the legislators view as politically unpalatable.

If nothing else, I find that a legislator’s aversion to scientific evidence is a useful piece of information about him or her to me, as a voter.
______
*If federal funding for research did function like a subsidy, meant to keep the researchers employed and out of trouble, you’d expect to see a much higher level of support for philosophical research. History suggests that philosophers in the public square with nothing else to keep them busy end up asking people lots of annoying questions, undermining the authority of institutions, corrupting the youth, and so forth.

**One of the challenges in getting the public on board to fund scientific research is that they can be quite skeptical that “basic research” will have any useful application beyond satisfying researchers’ curiosity.

Are scientists obligated to call out the bad work of other scientists? (A thought experiment)

Here’s a thought experiment. While it was prompted by intertubes discussions of evolutionary psychology and some of its practitioners, I take it the ethical issues are not limited to that field.

Say there’s an area of scientific research that is at a relatively early stage of its development. People working in this area of research see what they are doing as strongly connected to other, better established scientific fields, whether in terms of methodological approaches to answering questions, or the existing collections of empirical evidence on which they draw, or what have you.

There is general agreement within this community about the broad type of question that might be answered by this area of research and the sorts of data that may be useful in evaluating hypotheses. But there is also a good bit of disagreement among practitioners of this emerging field about which questions will be the most interesting (or tractable) ones to pursue, about how far one may reasonably extend the conclusions from particular bits of research, and even about methodological issues (such as what one’s null hypothesis should be).

Let me pause to note that I don’t think the state of affairs I’m describing would be out of the ordinary for a newish scientific field trying to get its footing. You have a community of practitioners trying to work out a reasonable set of strategies to answer questions about a bundle of phenomena that haven’t really been tackled by other scientific fields that are chugging merrily along. Not only do you not have the answers yet to the questions you’re asking about those phenomena, but you’re also engaged in building, testing, and refining the tools you’ll be using to try to answer those questions. You may share a commitment with others in the community that there will be a useful set of scientific tools (conceptual and methodological) to help you get a handle on those phenomena, but getting there may involve a good bit of disagreement about what tools are best suited for the task. And, there’s a possibility that in the end, there might not be any such tools that give you answers to the questions you’re asking.

Imagine yourself to be a member of this newish area of scientific research.*

What kind of obligation do you have to engage with other practitioners of this newish area of scientific research whose work you feel is not good? (What kind of “not good” are we talking about here? Possibly you perceive them to be drawing unwarranted conclusions from their studies, or using shoddy methodology, or ignoring empirical evidence that seems to contradict their claims. There’s no need to assume that they are being intentionally dishonest.) Do you have an obligation to take to the scientific literature to critique the shortcomings in their work? Do you have an obligation to communicate these critiques privately (e.g., in email correspondence)? Or is it ethically permissible not to engage with what you consider the bad examples of work in your emerging scientific field, instead keeping your head down and producing your own good examples of how to make progress in your emerging scientific field?

Do you think your obligations here are different than they might be if you were working in a well-established scientific field? (In a well-established scientific field, one might argue, the standards for good work and bad work are clearer; does this mean it takes less individual work to identify and rebut the bad work?)

Now consider the situation when your emerging scientific field is one that focuses on questions that capture the imagination not just of scientists trying to get this new field up and running, but also of the general public — to the extent that science writers and journalists are watching the output of your emerging scientific field for interesting results to communicate to the public. How does the fact that the public is paying some attention to your newish area of scientific research bear on what kind of obligation you have to engage with the practitioners in your field whose work you feel is not good?

(Is it fair that a scientist’s obligations within his or her scientific field might shift depending on whether the public cares at all about the details of the knowledge being built by that scientific field? Is this the kind of thing that might drive scientists into more esoteric fields of research?)

Finally, consider the situation when your emerging field of science has captured the public imagination, and when the science writers and journalists seem to be getting most of their information about what your field is up to and what knowledge you have built from the folks in your field whose work you feel is not good. Does this place more of an obligation upon you to engage with the practitioners doing not-good work? Does it obligate you to engage with the science writers and journalists to rebut the bad work and/or explain what is required for good scientific work in your newish field? If you suspect that science writers and journalists are acting, in this case, to amplify misunderstandings or to hype tempting results that lack proper evidential support, do you have an obligation to communicate directly to the public about the misunderstandings and/or about what proper evidential support looks like?

A question I think can be asked at every stage of this thought experiment: Does the community of practitioners of your emerging scientific field have a collective responsibility to engage with the not-so-good work, even if any given individual practitioner does not? And, if the answer to this question is “yes”, how can the community of practitioners live up to that obligation if no individual practitioner is willing to step up and do it?

_____
* For fun, you can also consider these questions from the point of view of a member of the general public: What kinds of obligations do you want the scientists in this emerging field to recognize? After all, as a member of the public, your interests might diverge in interesting ways from those of a scientist in this emerging field.

Science, priorities, and the challenges of sharing a world.

For scientists, doing science is often about trying to satisfy deep curiosity about how various bits of our world work. For society at large, it often seems like science ought to exist primarily to solve particular pressing problems — or at least, that this is what science ought to be doing, given that our tax dollars are going to support it. It’s not a completely crazy idea. Even if tax dollars weren’t funding lots of scientific research and the education of scientists (even at private universities), the public might expect scientists to focus their attention on pressing problems, simply because scientists have the expertise to solve these problems and other members of society don’t.

This makes it harder to get the public to care about funding science for which the pay-off is not obviously useful, especially “basic research”. You want to understand the structure of subatomic particles, or the fundamental forces at work in our universe? That’s great, but how is it going to help us live longer, or help us build more fuel-efficient vehicles, or bring smaller iPods to market? Most members of the public don’t even know what a quark is, let alone care about whether you can detect a particular kind of quark experimentally. Satisfying our curiosity about the details on the surface of Mars can strike folks not gripped by that particular curiosity as a distraction from important questions that science could be answering instead.

A typical response is to note that basic research has in the past led to unanticipated practical applications. Of course, this isn’t a way to get the public to see the intrinsic value of basic research — it merely asks them to value such research instrumentally, as sort of a mystery box that is bound to contain some payoff which we cannot describe in advance but which promises to be awesome.

Some years ago Rick Weiss made an argument like this in the Washington Post in defense of space research, space exploration in particular. Weiss expressed concern that “Americans have lost sight of the value of non-applied, curiosity-driven research — the open-ended sort of exploration that doesn’t know exactly where it’s going but so often leads to big payoffs,” then went through an impressive list of scientific projects that started off without any practical applications but ended up making possible all manner of useful applications. Limit basic science, the argument went, and you’re risking economic growth.

But Weiss was careful not to say the only value in scientific research is in marketable products. Rather, he offered an even more important reason for the public to support research:

Because our understanding of the world and our support of the quest for knowledge for knowledge’s sake is a core measure of our success as a civilization. Our grasp, however tentative, of what we are and where we fit in the cosmos should be a source of pride to all of us. Our scientific achievements are a measure of ourselves that our children can honor and build upon.

I find that a pretty inspiring description of science’s value, but it’s not clear that most members of the public would be similarly misty-eyed.

Scientists may already feel that they have to become the masters of spin to get even their practical research projects funded. Will the scientists also have to take on the task of convincing the public at large that a scientific understanding of ourselves and of the world we live in should be a source of pride? Will a certain percentage of the scientist’s working budget have to go to public relations? (“Knowledge: It’s not just for dilettantes anymore!”) Maybe the message that knowledge for knowledge’s sake is a fitting goal for a civilized society is the kind of thing that people would just get as part of their education. Only it’s not on the standardized tests, and it seems like that’s the only place the public wants to put up money for education any more. Sometimes not even then.

The problem here is that scientists value something that the public at large seems not to value. The scientists think the public ought to value it, but they don’t have the power to impose their will on the public in this regard any more than the public can demand that scientists stop caring about weird things like quarks. Meanwhile, the public supports science, at least to the extent that science can deliver practical results in a timely fashion. There would probably be tension in this relationship even if scientists weren’t looking to the public for funding.

Of course, when scientists do tackle real-life problems and develop real-life solutions, it’s not like the public is always so good about accepting them. Consider the mixed public reception of the vaccine against human papillomavirus (HPV). The various strains of HPV are the leading cause of cervical cancer, and are not totally benign for men, causing genital warts and penile cancers. You would think that developing a reasonably safe and effective vaccine against a virus like HPV is exactly the sort of scientific accomplishment the public might value — except that religious groups in the US voiced opposition to the HPV vaccine on the grounds that it might give young women license to engage in premarital sex rather than practicing abstinence.

(The scientist scratches her head.) Let me get this straight: Y’all want to cut funding for the basic science because you don’t think it will lead to practical applications. But when we do the research to solve what seems like a real problem — people are dying from cervical cancer — y’all tell us this is a problem you didn’t really want us to solve?

Here, to be fair, it’s not everyone who wants to opt out of the science, just a part of the population with a fair bit of political clout at particular moments in history. The central issue seems to be that our society is made up of a bunch of people (including scientists) with rather different values, which lead to rather different priorities. In thinking about where scientific funding comes from, we talk as though there were a unitary Public with whom the unitary Science transacts business. It might be easier were that really the case. Instead, the scientists get to deal with the writhing mass of contradictory impulses that is the American public. About the only thing that public knows for sure is that it doesn’t want to pay more taxes.

How can scientists direct their efforts at satisfying public wants, or addressing public needs, if the public itself can’t come to any robust agreement on what those wants and needs are? If science has to prove to the public that the research dollars are going to the good stuff, will scientists have to stretch things a little in the telling?

Or might it actually be better if the public (or the politicians acting in the public’s name) spent less time trying to micro-manage scientists as they set the direction of their research? Maybe it would make sense, if the public decided that having scientists in society was a good thing for society, to let the scientists have some freedom to pursue their own scientific interests, and to make sure they have the funding to do so.

I’m not denying that the public has a right to decide where its money goes, but I don’t think putting up the money means you get total control. Because if you demand that much control, you may end up having to do the science yourself. Also, once science delivers the knowledge, it seems like the next step is to make that knowledge available. If particular members of the public decide not to avail themselves of that knowledge (because they feel it would be morally wrong, or maybe just silly, as in the case of pet cloning), that is their decision. We shouldn’t be making life harder for the scientists for doing what good scientists do.

It’s clear that there are forces at work in American culture right now that are not altogether comfortable with all that science has to offer at the moment. Discomfort is a normal part of sharing society with others who don’t think just like you do. But hardly anyone thinks it would be a good idea to ship all the scientists off to someplace else. We like our tablet computers and our smartphones and our headache medicines and our DSL and our Splenda too much for that.

Perhaps, for a few moments, we should give the hard-working men and women of science a break and thank them for the knowledge they produce, whether we know what to do with it or not. Then, we can return to telling them about the pieces of our world we’d like more help navigating, and see whether they have any help to offer yet.

Movie review: Strange Culture.

The other day I was looking for a movie I could watch with instant streaming that featured Josh Kornbluth* and I came upon Strange Culture. Strange Culture is a documentary about the arrest of artist and SUNY-Buffalo professor of art history Steve Kurtz on charges of bioterrorism, mail fraud, and wire fraud in 2004 after the death of his wife, Hope.

At the time Strange Culture was released in 2007, the legal case against Steve Kurtz (and against University of Pittsburgh professor of genetics Robert Ferrell) was ongoing, so the documentary uses actors to interpret events in the case about which Kurtz could not speak on advice of counsel, as well as the usual news footage and interviews of people in the case who were able to talk freely. It also draws on a vividly illustrated graphic novel about the case (titled “Suspect Culture”) written by Timothy Stock and illustrated by Warren Heise.

The central question of the documentary is how an artist found himself the target of federal charges of bioterrorism. I should mention that I watched Strange Culture not long after I finished reading The Radioactive Boy Scout, which no doubt colored my thinking. If The Radioactive Boy Scout is a story of scientific risks taken too lightly, Strange Culture strikes me as a story of scientific risks blown far out of proportion. At the very least, I think there are questions worth pondering here about why the two cases provoked such wildly different reactions.

In 2004, as part of the Critical Art Ensemble, Steve and Hope Kurtz were working on an art installation for the Massachusetts Museum of Contemporary Art on genetically modified agriculture. The nature of the installation was to demonstrate (and involve museum-goers in) scientific techniques used to isolate genetic information from various food products and to identify genetically modified organisms. The larger aim of the installation was to help the audience better understand the use of biotechnology in agriculture, and to push the audience to think more deeply about the scientific decisions made by agribusiness and how they might impact everyday life.

Regardless of whether one thinks the Critical Art Ensemble was raising legitimate worries about GMOs, or ignoring potential benefits from this use of biotechnology**, there is something about the effort to give members of the public a better understanding of — and even some hands-on engagement with — the scientific techniques that I find deeply appealing. Indeed, Steve and Hope Kurtz were in active collaboration with working biologists so that they could master the scientific techniques in question and use them appropriately in assembling the installation. Their preparations included work they were doing in their home with petri dishes and commercially available incubators using benign bacteria.

However, this was where the problems began for Steve Kurtz. One night in May of 2004, Hope Kurtz died in her sleep of heart failure. Steve Kurtz dialed 911. The Buffalo first responders who answered the call saw the petri dishes, freaked out, and notified the FBI. Suddenly, the Kurtz home was swarming with federal agents looking for evidence of bioterrorist activities and Steve Kurtz was under arrest.

Watching Strange Culture, I found myself grappling with the question of just why the authorities reacted with such alarm to what they found in the Kurtz home. My recollection of the news coverage at the time was that the authorities suspected that whatever was growing in those petri dishes might have killed Hope Kurtz, but at this point indications are that her death was due to a congenital heart defect. First responders are supposed to be alert to dangers, but they should also recognize that coincidence in space and time is not the same as causation. Hope Kurtz’s death was less than three years after the September 11th attacks, and the anthrax attacks that came close on their heels, which likely raised anxiety about the destructive potential of biological agents in the hands of someone who knows how to use them. I wonder, though, whether some amount of the reaction was not just post-9/11 hypervigilance but a deeper fear of biological material at the microscopic level. If you can grow it in a petri dish, the reaction seemed to say, it must be some seriously dangerous stuff. (I am grateful that these first responders didn’t stumble upon the forgotten leftovers in the back of my fridge and judge me a bioterrorism suspect, too.)

More baffling than the behavior of the first responders was the behavior of the federal agents who searched the Kurtz home. While they raised the specter that Steve Kurtz was producing biological weapons, they ended up leaving the place in shambles, strewn with bags of purportedly biohazardous material (as well as with the trash generated by the agents over the long course of their investigation). Leaving things in this state would be puzzling if the prime concern of the government was to protect the community from harmful biological materials, suggesting that perhaps the investigative team was more interested in creating a show of government force.

Strange Culture raises, but does not answer, the question of how the government turned out to be even more alarmed by biotechnology in widespread agricultural use than was an art group aiming to raise concerns about GMOs. It suggests that scientific understanding and accurate risk assessment is a problem not just for the public at large but also for the people entrusted with keeping the public safe. It also suggests that members of the public are not terribly safe if the default response from the government is an overreaction, or a presumption that members of the public have no business getting their hands dirty with science.

It’s worth noting that a 2008 ruling found there was insufficient evidence to support the charges against Steve Kurtz, and that the Department of Justice declined to appeal this ruling. You can read the Critical Art Ensemble Defense Fund press release issued at the conclusion of Steve Kurtz’s legal battle.

_____
*Yes, it’s a very particular kind of thing to want. People are like that sometimes.

**On the question of GMOs, if you haven’t yet read Christie Wilcox’s posts (here, here, and here), you really should.

Book review: Uncaged.

In our modern world, many of the things that contribute to the mostly smooth running of our day-to-day lives are largely invisible to us. We tend to notice them only when they break. Uncaged, a thriller by Paul McKellips, identifies animal research as one of the activities in the background supporting the quality of life we take for granted, and explores what might happen if all the animal research in the U.S. ended overnight.

Part of the fun of a thriller is the unfolding of plot turns and the uncertainty about which characters who come into focus will end up becoming important. Therefore, in order not to spoil the book for those who haven’t read it yet, I’m not going to say much about the details of the plot or the main characters.

The crisis emerges from a confluence of events and an intertwining of the actions of disparate persons acting in ignorance of each other. This complex tangle of causal factors is one of the most compelling parts of the narrative. McKellips gives us “good guys,” “bad guys,” and ordinary folks just trying to get by and to satisfy whatever they think their job description or life circumstances demand of them, weaving a tapestry where each triggers chains of events that compound in ways they could scarcely have foreseen. This is a viscerally persuasive picture of how connected we are to each other, whether by political processes, public health infrastructure, the food supply, or the germ pool.

There is much to like in Uncaged. The central characters are complex, engaging, and even surprising. McKellips is deft in his descriptions of events, especially the impacts of causal chains initiated by nature or by human action on researchers and on members of the public. Especially strong are McKellips’s explanations of scientific techniques and rationales for animal research in ways that are reasonably accessible to the lay reader without being oversimplified.

Uncaged gets to the crux of the societal debate about scientific animal use in a statement issued by the President of the United States as, in response to a series of events, he issues an executive order halting animal research. This president spells out his take on the need — or not — for continued biomedical research with animals:

I realize that the National Institutes of Health grants billions of dollars to American universities and our brightest scientists for biomedical research each year. But there comes a point when we must ask ourselves — that we must seriously question — has our health reached the level of “good enough”? Think of all the medicine we have available to us today. It’s amazing. It’s plenty. It’s more than we have had available in the history of humanity. And for those of us who need medicines, surgeries, therapies and diagnostic tools — it is the sum total of all that we have available to us today. If it’s good enough for those of us who need it today, then perhaps it’s good enough for those who will need it tomorrow as well. Every generation has searched for the fountain of youth. But can we afford to spend more time, more money, and — frankly — more animals just to live longer? Natural selection is an uninvited guest within every family. Some of us will die old; some of us will die far too young. We cannot continue to fund the search for the fountain of youth. We must realize that certain diseases of aging — such as cancer, Alzheimer’s, and Parkinson’s — are inevitable. Our lifestyles and nutrition are environmental factors that certainly contribute to our health. How much longer can we pretend to play the role of God in our own laboratories? (58-59)

In some ways, this statement is the ethical pivot-point around which all the events of the novel — and the reader’s moral calculations — turn. How do we gauge “good enough”? Who gets to make the call, the people for whom modern medicine is more or less sufficient, or the people whose ailments still have no good treatment? What kind of process ought we as a society to use for this assessment?

These live questions end up being beside the point within the universe of Uncaged, though. The president issuing this statement has become, to all appearances, a one-man death panel.

McKellips develops a compelling and diverse selection of minor characters here: capitalists, terrorists, animal researchers, animal rights activists, military personnel, political appointees. Some of these (especially the animal rights activists) are clearly based on particular real people who are instantly recognizable to those who have been paying attention to the targeting of researchers in recent years. (If you’ve followed the extremists and their efforts less closely, entering bits of text from the communiques of the fictional animal rights organizations into a search engine is likely to help you get a look at their real-life counterparts.)

But, while McKellips’s portrayal of the animal rights activists is accurate in capturing their rhetoric, these key players, though central in creating the crisis to which the protagonists must respond, remain ciphers. The reader gets little sense of the events or thought processes that brought them to these positions, or of the sorts of internal conflicts that might occur within animal rights organizations — or within the hearts and minds of individual activists.

Maybe this is unavoidable — on the internet, animal rights activists often do seem like ciphers who work very hard to deny the complexities acknowledged by the researchers in Uncaged. But, perhaps naïvely, I have a hard time believing they are not more complex in real life than this.

As well, I would have liked for Uncaged to give us more of a glimpse into the internal workings of the executive branch — how the president and his cabinet made the decision to issue the executive order for a moratorium on animal research, what kinds of arguments various advisors might have offered for or against this order, what assemblage of political considerations, ideals, gut feelings, and unforeseen consequences born of incomplete information or sheer ignorance might have been at work. But maybe presidents, cabinet members, agency heads, and other political animals are ciphers, too — at least to research scientists who have to navigate the research environment these political animals establish and then rearrange.

Maybe this is an instance of the author grappling with the same challenge researchers face: you can’t build a realistic model without accurate and detailed information about the system you’re modeling. Maybe making such a large cast of characters more nuanced, and drawing us deeply into their inner lives, would have undercut the taut pacing of what is, after all, intended as an action thriller.

But to me, this feels like a missed opportunity. Ultimately, I worry that the various players in Uncaged — and worse, their real-life counterparts — the researchers and other advocates of humane animal research, the animal rights activists, the political animals, and the various segments of the broader public — continue to see each other as ciphers rather than trying to get inside each other’s heads and figure out where their adversaries are coming from, the better to reflect upon and address the real concerns driving people. Modeling your opponents as automata has a certain efficiency, but to me it leaves the resolution feeling somewhat hollow — and it’s certainly not a strategy for engagement that I see leading to a healthy civil society in real life.

I suspect, though, that my disappointments are a side effect of the fact that I am not a newcomer to these disputes. For readers not already immersed in the battles over research with animals, Uncaged renders researchers as complex human beings to whom one can relate. This is a good read for someone who wants a thriller that also conveys a compelling picture of what motivates various lines of biomedical research — and why such research might matter to us all.

Who matters (or should) when scientists engage in ethical decision-making?

One of the courses I teach regularly at my university is “Ethics in Science,” a course that explores (among other things) what’s involved in being a good scientist in one’s interactions with the phenomena about which one is building knowledge, in one’s interactions with other scientists, and in one’s interactions with the rest of the world.

Some bits of this are pretty straightforward (e.g., don’t make up data out of whole cloth, don’t smash your competitor’s lab apparatus, don’t use your mad science skillz to engage in a campaign of super-villainy that brings Gotham City to its knees). But, there are other instances where what a scientist should or should not do is less straightforward. This is why we spend significant time and effort talking about — and practicing — ethical decision-making (working with a strategy drawn from Muriel J. Bebeau, “Developing a Well-Reasoned Response to a Moral Problem in Scientific Research”). Here’s how I described the basic approach in a post of yore:

Ethical decision-making involves more than having the right gut feeling and acting on it. Rather, when done right, it involves moving past your gut feeling to see who else has a stake in what you do (or don’t do); what consequences, good or bad, might flow from the various courses of action available to you; to whom you have obligations that will be satisfied or ignored by your action; and how the relevant obligations and interests pull you in different directions as you try to make the best decision. Sometimes it’s helpful to think of the competing obligations and interests as vectors, since they come with both directions and magnitudes — which is to say, in some cases where they may be pulling you in opposite directions, it’s still obvious which way you should go because the magnitude of one of the obligations is so much bigger than that of the others.
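
To make the vector analogy concrete, here is a minimal toy sketch of my own (not part of Bebeau’s strategy; the obligations and the weights assigned to them are hypothetical). Each obligation is modeled as a signed magnitude along a single axis, pulling for or against a proposed course of action, and the sign of the sum suggests which way the balance tips:

```python
# Toy model of the "obligations as vectors" analogy (illustrative only).
# Each obligation pulls for (+) or against (-) a proposed action,
# with a magnitude reflecting how weighty that obligation is.
# The obligations and weights below are hypothetical.

obligations = {
    "duty to report suspected data fabrication": +9.0,
    "loyalty to a labmate": -3.0,
    "protecting your own time before a deadline": -2.0,
}

# List the pulls from weightiest to lightest.
for name, pull in sorted(obligations.items(), key=lambda kv: -abs(kv[1])):
    direction = "for" if pull > 0 else "against"
    print(f"{name}: pulls {direction} acting (magnitude {abs(pull)})")

# The sign of the net pull suggests which way the balance tips.
net = sum(obligations.values())
print("Balance tips toward acting." if net > 0 else "Balance tips against acting.")
```

Of course, the hard part in real cases is assigning the magnitudes in the first place, and that is precisely what the deliberation is for.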

We practice this basic strategy by using it to look at a lot of case studies. Basically, the cases describe a situation where the protagonist is trying to figure out what to do, giving you a bunch of details that seem salient to the protagonist and leaving some interesting gaps where the protagonist maybe doesn’t have some crucial information, or hasn’t looked for it, or hasn’t thought to look for it. Then we look at the interested parties, the potential consequences, the protagonist’s obligations, and the big conflicts between obligations and interests to try to work out what we think the protagonist should do.

Recently, one of my students objected to how we approach these cases.

Specifically, the student argued that we should radically restrict our consideration of interested parties — probably to no more than the actual people identified by name in the case study. Considering the interests of a university department, or of a federal funder, or of the scientific community, the student asserted, made the protagonist responsible to so many entities that the explicit information in the case study was not sufficient to identify the correct course of action.*

And, the student argued, one interested party that it was utterly inappropriate for a scientist to include in thinking through an ethical decision is the public.

Of course, I reminded the student of some reasons you might think the public would have an interest in what scientists decide to do. Members of the public share a world with scientists, and scientific discoveries and scientific activities can have impacts on things like our environment, the safety of our buildings, what our health care providers know and what treatments they are able to offer us, and so forth. Moreover, at least in the U.S., public funds play an essential role in supporting both scientific research and the training of new scientists (even at private universities) — which means that it’s hard to find an ethical decision-making situation in a scientific training environment that is completely isolated from something the public paid for.

My student was not moved by the suggestion that financial involvement should buy the public any special consideration as a scientist was trying to decide the right thing to do.

Indeed, central to the student’s argument was the idea that the interests of the public, whether with respect to science or anything else, are just too heterogeneous. Members of the public want lots of different things. Taking these interests into account could only be a distraction.

As well, the student asserted, too small a proportion of the public actually cares about what scientists are up to for the public, even if it were more homogeneous, to merit being taken into account by scientists grappling with their own ethical quandaries. Even worse, the student ventured, those who do care what scientists are up to are not necessarily well-informed.

I’m not unsympathetic to the objection to the extreme case here: if a scientist felt required to somehow take into account the actual particular interests of each individual member of the public, that would make it well nigh impossible to actually make an ethical decision without the use of modeling methods and supercomputers (and even then, maybe not). However, it strikes me that it shouldn’t be totally impossible to anticipate some reasonable range of interests non-scientists have that might be impacted by the consequences of a scientist’s decision in various ways. Which is to say, the lack of total fine-grained information about the public, or of complete predictability of the public’s reactions, would surely make it more challenging to make optimal ethical decisions, but these challenges don’t seem to warrant ignoring the public altogether just so the problem you’re trying to solve becomes more tractable.

In any case, I figure that there’s a good chance some members of the public** may be reading this post. To you, I pose the following questions:

  1. Do you feel like you have an interest in what science and scientists are up to? If so, how would you describe that interest? If not, why not?
  2. Do you think scientists should treat “the public” as an interested party when they try to make ethical decisions? Why or why not?
  3. If you think scientists should treat “the public” as an interested party when they try to make ethical decisions, what should scientists be doing to get an accurate read on the public’s interests?
  4. And, for the sake of symmetry, do you think members of the public ought to take account of the interests of science or scientists when they try to make ethical decisions? Why or why not?

If, for some reason, you feel like chiming in on these questions in the comments would expose you to unwanted blowback, you can also email me your responses (dr dot freeride at gmail dot com) for me to anonymize and post on your behalf.

Thanks in advance for sharing your view on this!

_____
*Here I should note that I view the ambiguities within the case studies as a feature, not a bug. In real life, we have to make good ethical decisions despite uncertainties about what consequences will actually follow our actions, for example. Those are the breaks.

**Officially, scientists are also members of the public — even if you’re stuck in the lab most of the time!