How to be ethical while getting the public involved in your science

At ScienceOnline Together later this week, Holly Menninger will be moderating a session on “Ethics, Genomics, and Public Involvement in Science”.

Because the ethical (and epistemic) dimensions of “citizen science” have been on my mind for a while now, in this post I share some very broad, pre-conference thoughts on the subject.

Ethics is a question of how we share a world with each other. Some of this is straightforward and short-term, but sometimes engaging each other ethically means taking account of long-range consequences, including consequences that may be difficult to foresee unless we really work to think through the possibilities ahead of time, and unless that thinking is informed by knowledge of the technologies involved and of the history of the kinds of unforeseen outcomes that have led to ethical problems before.

Ethics is more than merely meeting your current legal and regulatory requirements. Anyone taking that kind of minimalist approach to ethics is gunning to be a case study in an applied ethics class (probably within mere weeks of becoming a headline in a major news outlet).

With that said, if you’re running a project you’d describe as “citizen science” or as cultivating public involvement in science, here are some big questions I think you should be asking from the start:

1. What’s in it for the scientists?

Why are you involving members of the public in your project?

Are they in the field collecting observations that you wouldn’t have otherwise, or on their smartphones categorizing the mountains of data you’ve already collected? In these cases, the non-experts are providing labor you need for vital non-automatable tasks.

Are they sending in their biological samples (saliva, cheek swab, belly button swab, etc.)? In these cases, the non-experts are serving as human subjects, expanding the pool of samples in your study.

In both of these cases, scientists have ethical obligations to the non-scientists they are involving in their projects, although the ethical obligations are likely to be importantly different. In any case where a project involves humans as sources of biological samples, researchers ought to be consulting an Institutional Review Board, at least informally, before the project is initiated (which includes the start of anything that looks like advertising for volunteers who will provide their samples).

If volunteers are providing survey responses or interviews instead of vials of spit, there’s a chance they’re still acting as human subjects. Consult an IRB in the planning stages to be sure. (If your project is properly exempt from IRB oversight, there’s no better way to show it than an exemption letter from an IRB.)

If volunteers are providing biological samples from their pets or reports of observations of animals in the field (especially in fragile habitats), researchers ought to be consulting an Institutional Animal Care and Use Committee, at least informally, before the project is initiated. Again, it’s possible that what you’ll discover in this consultation is that the proposed research is exempt from IACUC oversight, but you want a letter from an IACUC to that effect.

Note that IRBs and IACUCs don’t exist primarily to make researchers’ lives hard! Rather, they exist to help researchers identify their ethical obligations to the humans and animals who serve as subjects of their studies, and to help find ways to conduct that research in ways that honor those obligations. A big reason to involve committees in thinking through the ethical dimensions of the research is that it’s hard for researchers to be objective in thinking through these questions about their own projects.

If you’re involving non-experts in your project in some other way, what are they contributing to the project? Are you involving them so you can check off the “broader impacts” box on your grant application, or is there some concrete way that involving members of the public is contributing to your knowledge-building? If the latter, think hard about what kinds of obligations might flow from that contribution.

2. What’s in it for the non-scientists/non-experts/members of the public involved in the project?

Why would members of the public want to participate in your project? What could they expect to get from such participation?

Maybe they enjoy being outdoors counting birds (and would be doing so even if they weren’t participating in the project), or looking at pictures of galaxies from space telescopes. Maybe they are curious about what’s in their genome or what’s in their belly-button. Maybe they want to help scientists build new knowledge badly enough to pitch in on some of the grunt-work required for that knowledge-building. Maybe they want to understand how that grunt-work fits into the knowledge-building scientists do.

It’s important to understand what the folks whose help you’re enlisting think they’re signing on for. Otherwise, they may be expecting something from the experience that you can’t give them. The best way to find out what potential participants are looking for from the experience is to ask them.

Don’t offer potential diagnostic benefits of participation when that diagnostic information is still a long, long way off. Don’t promise that tracking the health of streams by screening for the presence of different kinds of bugs will be tons of fun without being clear about the conditions your volunteers will be working under to perform those screenings.

Don’t promise participants that they will be getting a feel for what it’s like to “do science” if, in fact, they are really just providing a sample rather than being part of the analysis or interpretation of that sample.

Don’t promise them that they will be involved in hypothesis-formation or conclusion-drawing if really you are treating them as fancy measuring devices.

3. What’s the relationship between the scientists and the non-scientists in this project? What consequences will this have for relationships between scientists and the public more generally?

There’s a big difference between involving members of the public in your project because it will be enriching for them personally and involving them because it’s the only conceivable way to build a particular piece of knowledge you’re trying to build.

Being clear about the relationship upfront — here’s why we need you, here’s what you can expect in return (both the potential benefits of participation and the potential risks) — is the best way to make sure everyone’s interests are well-served by the partnership and that no one is being deceived.

Things can get complicated, though, when you pull the focus back from how participants are involved in building the knowledge and consider how that knowledge might be used.

Will the new knowledge primarily benefit the scientists leading the project, adding publications to their CVs and helping them make the case for funding for further projects? Could the new knowledge contribute to our understanding (of ecosystems, or human health, for example) in ways that will drive useful interventions? Will those interventions be driven by policy-makers or commercial interests? Will the scientists be a part of this discussion of how the knowledge gets used? Will the members of the public (either those who participated in the project or members of the public more generally) be a part of this discussion — and will their views be taken seriously?

To the extent that participating in a citizen science project, whatever shape that participation may take, can influence non-scientists’ views of science and of the scientific community as a whole, the interactions between scientists and volunteers in and around these projects are hugely important. They are an opportunity for people with different interests, different levels of expertise, and different values to find common ground while working together to achieve a shared goal: to communicate honestly, deal with each other fairly, and take each other seriously.

More such ethical engagement between scientists and publics would be a good thing.

But the flip-side is that engagements between scientists and publics that aren’t as honest or respectful as they should be may have serious negative impacts beyond the particular participants in a given citizen science project. They may make healthy engagement, trust, and accountability harder for scientists and publics across the board.

In other words, working hard to do it right is pretty important.

I may have more to say about this after the conference. In the meantime, you can add your questions or comments to the session discussion forum.

The line between persuasion and manipulation

As this year’s ScienceOnline Together conference approaches, I’ve been thinking about the ethical dimensions of using empirical findings from psychological research to inform effective science communication (or really any communication). Melanie Tannenbaum will be co-facilitating a session about using such research findings to guide communication strategies, and this year’s session is nicely connected to a session Melanie led with Cara Santa Maria at last year’s conference called “Persuading the Unpersuadable: Communicating Science to Deniers, Cynics, and Trolls.”

In that session last year, the strategy of using empirical results from psychology to help achieve a communicative goal was fancifully described as deploying “Jedi mind tricks”. Achieving success in communication was cast in terms of getting your audience to accept your claims (or at least getting them not to reject your claims out of hand because they don’t trust you, or don’t trust the way you’re engaging with them, or whatever). But if you have the cognitive launch codes, as it were, you can short-circuit distrust, cultivate trust, and help your audience end up where you want them by the time you’re done communicating.

Jason Goldman pointed out to me that these “tricks” aren’t really that tricky — it’s not like you flash the Queen of Diamonds and suddenly the person you’re talking to votes for your ballot initiative or buys your product. As Jason put it to me via email, “From a practical perspective, we know that presenting reasons is usually ineffective, and so we wrap our reasons in narrative – because we know, from psychology research, that storytelling is an effective device for communication and behavior change.”

Still, using a “trick” to get your audience to end up where you want them to end up — even if that “trick” is simply empirical knowledge that you have and your audience doesn’t — sounds less like persuasion than manipulation. People aren’t generally happy about the prospect of being manipulated. Intuitively, manipulating someone else gets us into ethically dicey territory.

As a philosopher, I’m in a discipline whose ideal is that you persuade by presenting reasons for your interlocutor to examine, arguments whose logical structure can be assessed, premises whose truth (or at least likelihood) can be evaluated. I daresay scientists have something like the same ideal in mind when they present their findings or try to evaluate the scientific claims of others. In both cases, there’s the idea that we should be making a concerted effort not to let tempting cognitive shortcuts get in the way of reasoning well. We want to know about the tempting shortcuts (some of which are often catalogued as “informal fallacies”) so we can avoid falling into them. Generally, it’s considered sloppy argumentation (or worse) to try to tempt our audience with those shortcuts.

How much space is there between the tempting cognitive shortcuts we try to avoid in our own reasoning and the “Jedi mind tricks” offered to us to help us communicate, or persuade, or manipulate more effectively? If we’re taking advantage of cognitive shortcuts (or switches, or whatever the more accurate metaphor would be) to increase the chances that people will accept our factual claims, our recommendations, our credibility, etc., can we tell when we’ve crossed the line between persuasion and manipulation? Can we tell when it’s the cognitive switch that’s doing the work rather than the sharing of reasons?

It strikes me as even more ethically problematic if we’re using these Jedi mind tricks while concealing the fact that we’re using them from the audience we’re using them on. There’s a clear element of deception in doing that.

Now, possibly the Jedi mind tricks work equally well if we disclose to our audience that we’re using them and how they work. In that case, we might be able to use them to persuade without being deceptive — and it would be clear to our audience that we were availing ourselves of these tricks, and that our goal was to get them to end up in a particular place. It would be kind of weird, though, perhaps akin to going to see a magician knowing full well that she would be performing illusions and that your being fooled by those illusions is a likely outcome. (Wouldn’t this make us more distrustful in our communicative interactions, though? If you know about the switches and it’s still the case that they can be used against you, isn’t that the kind of thing that might make you want to block lots of communication before it can even happen?)

As a side note, I acknowledge that there might be some compelling extreme cases in which the goal of getting the audience to end up in a particular place (e.g., revealing to you the location of the ticking bomb) is so urgent that we’re prepared to swallow our qualms about manipulating the audience to get the job done. I don’t think that the normal stakes of our communications are like this, though. But there may be some cases where how high the stakes really are is one of the places we disagree. Jason suggests vaccine acceptance or refusal might be important enough that the Jedi mind tricks shouldn’t set off any ethical alarms. I’ll note that vaccine advocates using a just-the-empirical-facts approach to communication are often accused or suspected of having some undisclosed financial conflict of interest motivating them to try to get everyone vaccinated; that is, they’re not using the Jedi mind trick social psychologists think could help them persuade their target audience, and yet that audience thinks they’re up to something sneaky. That’s a pretty weird situation.

Does our cognitive make-up as humans make it possible to get closer to exchanging and evaluating reasons rather than just pushing each other’s cognitive buttons? If so, can we achieve better communication without the Jedi mind tricks?

Maybe it would require some work to change the features of our communicative environment (or of the environment in which we learn how to reason about the world and how to communicate and otherwise interact with others) to help our minds more reliably work this way. Is there any empirical data on that? (If not, is this a research question psychologists are asking?)

Some of these questions tread dangerously close to the question of whether we humans can actually have free will — and that’s a big bucket of metaphysical worms that I’m not sure I want to dig into right now. I just want to know how to engage my fellow human beings as ethically as possible when we communicate.

These are some of the questions swirling around my head. Maybe next week at ScienceOnline some of them will be answered — although there’s a good chance some more questions will be added to the pile!