Grappling with the angry-making history of human subjects research, because we need to.

Teaching about the history of scientific research with human subjects bums me out.

Indeed, I get fairly regular indications from students in my “Ethics in Science” course that reading about and discussing the Nazi medical experiments and the U.S. Public Health Service’s Tuskegee syphilis experiment leaves them feeling grumpy, too.

Their grumpiness varies a bit depending on how they see themselves in relation to the researchers whose ethical transgressions are being inspected. Some of the science majors who identify strongly with the research community seem to get a little defensive, pressing me to see if these two big awful examples of human subjects research aren’t clear anomalies, the work of obvious monsters. (This is one reason I generally point out that, when it comes to historical examples of ethically problematic research with human subjects, the bench is deep: the U.S. government’s syphilis experiments in Guatemala, the MIT Radioactivity Center’s studies on kids with mental disabilities in a residential school, the harms done by scientists working with HeLa cells to Henrietta Lacks and to the family members who survived her, the National Cancer Institute- and Gates Foundation-funded studies of cervical cancer screening in India — to name just a few.) Some of the non-science majors in the class seem to look at their classmates who are science majors with a bit of suspicion.

Although I’ve been covering this material with my students since Spring of 2003, it was only a few years ago that I noticed that there was a strong correlation between my really bad mood and the point in the semester when we were covering the history of human subjects research. Indeed, I’ve come to realize that this is no mere correlation but a causal connection.

The harm that researchers have done to human subjects in order to build scientific knowledge in many of these historically notable cases makes me deeply unhappy. These cases involve scientists losing their ethical bearings and then defending indefensible actions as having been all in the service of science. It leaves me grumpy about the scientific community of which these researchers were a part (a community that did not obviously mark them as monsters or rogues). It leaves me grumpy about humanity.

In other contexts, my grumpiness might be no big deal to anyone but me. But in the context of my “Ethics in Science” course, I need to keep pessimism on a short leash. It’s kind of pointless to talk about what we ought to do if you’re feeling like people are going to be as evil as they can get away with being.

It’s important to talk about the Nazi doctors and the Tuskegee syphilis experiment so my students can see where formal statements about ethical constraints on human subject research (in particular, the Nuremberg Code and the Belmont Report) come from, what actual (rather than imagined) harms they are reactions to. To the extent that official rules and regulations are driven by very bad situations that the scientific community or the larger human community want to avoid repeating, history matters.

History also matters if scientists want to understand the attitudes of publics towards scientists in general and towards scientists conducting research with human subjects in particular. Newly-minted researchers who would never even dream of crossing the ethical lines the Nazi doctors or the Tuskegee syphilis researchers crossed may feel it’s deeply unfair that potential human subjects don’t default to trusting them. But that’s not how trust works. Ignoring the history of human subjects research means ignoring very real harms and violations of trust that have not faded from the collective memories of the populations that were harmed. Insisting that it’s not fair doesn’t magically earn scientists trust.

Grappling with that history, though, might help scientists repair trust and ensure that the research they conduct is actually worthy of trust.

It’s history that lets us start noticing patterns in the instances where human subjects research took a turn for the unethical. Frequently we see researchers working with human subjects whom they don’t see as fully human, or whose humanity seems less important than the piece of knowledge the researchers have decided to build. Or we see researchers who believe they are approaching questions “from the standpoint of pure science,” overestimating their own objectivity and good judgment.

This kind of behavior does not endear scientists to publics. Nor does it help researchers develop appropriate epistemic humility, a recognition that their objectivity is not an individual trait but rather a collective achievement of scientists engaging seriously with each other as they engage with the world they are trying to know. Nor does it help them build empathy.

I teach about the history of human subjects research because it is important to understand where the distrust between scientists and publics has come from. I teach about this history because it is crucial to understanding where current rules and regulations come from.

I teach about this history because I fully believe that scientists can — and must — do better.

And, because the ethical failings of past human subjects research were hardly ever the fault of monsters, we ought to grapple with this history so we can identify the places where individual human weaknesses, biases, and blind spots are likely to lead to ethical problems down the road. We need to build systems and social mechanisms to be accountable to human subjects (and to publics), to prioritize their interests, and never to lose sight of their humanity.

We can — and must — do better. But this requires that we seriously examine the ways that scientists have fallen short — even the ways that they have done evil. We owe it to future human subjects of research to learn from the ways scientists have failed past human subjects, to apply these lessons, to build something better.

Some thoughts about human subjects research in the wake of Facebook’s massive experiment.

You can read the study itself here, plus a very comprehensive discussion of reactions to the study here.

1. If you intend to publish your research in a peer-reviewed scientific journal, you are expected to have conducted that research with the appropriate ethical oversight. Indeed, the submission process usually involves explicitly affirming that you have done so (and providing documentation, in the case of human subjects research, of approval by the relevant Institutional Review Board(s) or of the IRB’s determination that the research was exempt from IRB oversight).

2. Your judgment, as a researcher, that your research will not expose your human subjects to especially big harms does not suffice to exempt that research from IRB oversight. The best way to establish that your research is exempt from IRB oversight is to submit your protocol to the IRB and have the IRB determine that it is exempt.

3. It’s not unreasonable for people to judge that violating their informed consent (say, by not letting them know that they are human subjects in a study where you are manipulating their environment and not giving them the opportunity to opt out of being part of your study) is itself a harm to them. When we value our autonomy, we tend to get cranky when others disregard it.

4. Researchers, IRBs, and the general public needn’t judge a study to be as bad as [fill in the name of a particularly horrific instance of human subjects research] to judge the conduct of the researchers in the study unethical. We can (and should) surely ask for more than “not as bad as the Tuskegee Syphilis Experiment”.

5. IRB approval of a study means that the research has received ethical oversight, but it does not guarantee that the treatment of human subjects in the research will be ethical. IRBs can make questionable ethical judgments too.

6. It is unreasonable to suggest that you can generally substitute Terms of Service or End User License Agreements for informed consent documents, as the latter are supposed to be clear and understandable to your prospective human subjects, while the former are written in such a way that even lawyers have a hard time reading and understanding them. The TOS or EULA is clearly designed to protect the company, not the user. (Some of those users, by the way, are in their early teens, which means they probably ought to be regarded as members of a “vulnerable population” entitled to more protection, not less.)

7. Just because a company like Facebook may “routinely” engage in manipulations of a user’s environment doesn’t make that kind of manipulation automatically ethical when it is done for the purposes of research. Nor does it mean that that kind of manipulation is ethical when Facebook does it for its own purposes. As it happens, peer-reviewed scientific journals, funding agencies, and other social structures tend to hold scientists building knowledge with human subjects research to higher ethical standards than (say) corporations are held to when they interact with humans. This doesn’t necessarily mean our ethical demands of scientific knowledge-builders are too high. Instead, it may mean that our ethical demands of corporations are too low.

How to be ethical while getting the public involved in your science

At ScienceOnline Together later this week, Holly Menninger will be moderating a session on “Ethics, Genomics, and Public Involvement in Science”.

Because the ethical (and epistemic) dimensions of “citizen science” have been on my mind for a while now, in this post I share some very broad, pre-conference thoughts on the subject.

Ethics is a question of how we share a world with each other. Some of this is straightforward and short-term, but sometimes engaging each other ethically means taking account of long-range consequences, including possible consequences that may be difficult to foresee unless we really work to think through the possibilities ahead of time — and unless this thinking through of possibilities is informed by knowledge of some of the technologies involved and of history of what kinds of unforeseen outcomes have led to ethical problems before.

Ethics is more than merely meeting your current legal and regulatory requirements. Anyone taking that kind of minimalist approach to ethics is gunning to be a case study in an applied ethics class (probably within mere weeks of becoming a headline in a major news outlet).

With that said, if you’re running a project you’d describe as “citizen science” or as cultivating public involvement in science, here are some big questions I think you should be asking from the start:

1. What’s in it for the scientists?

Why are you involving members of the public in your project?

Are they in the field collecting observations that you wouldn’t have otherwise, or on their smart phones categorizing the mountains of data you’ve already collected? In these cases, the non-experts are providing labor you need for vital non-automatable tasks.

Are they sending in their biological samples (saliva, cheek swab, belly button swab, etc.)? In these cases, the non-experts are serving as human subjects, expanding the pool of samples in your study.

In both of these cases, scientists have ethical obligations to the non-scientists they are involving in their projects, although the ethical obligations are likely to be importantly different. In any case where a project involves humans as sources of biological samples, researchers ought to be consulting an Institutional Review Board, at least informally, before the project is initiated (which includes the start of anything that looks like advertising for volunteers who will provide their samples).

If volunteers are providing survey responses or interviews instead of vials of spit, there’s a chance they’re still acting as human subjects. Consult an IRB in the planning stages to be sure. (If your project is properly exempt from IRB oversight, there’s no better way to show it than an exemption letter from an IRB.)

If volunteers are providing biological samples from their pets or reports of observations of animals in the field (especially in fragile habitats), researchers ought to be consulting an Institutional Animal Care and Use Committee, at least informally, before the project is initiated. Again, it’s possible that what you’ll discover in this consultation is that the proposed research is exempt from IACUC oversight, but you want a letter from an IACUC to that effect.

Note that IRBs and IACUCs don’t exist primarily to make researchers’ lives hard! Rather, they exist to help researchers identify their ethical obligations to the humans and animals who serve as subjects of their studies, and to help find ways to conduct that research in ways that honor those obligations. A big reason to involve committees in thinking through the ethical dimensions of the research is that it’s hard for researchers to be objective in thinking through these questions about their own projects.

If you’re involving non-experts in your project in some other way, what are they contributing to the project? Are you involving them so you can check off the “broader impacts” box on your grant application, or is there some concrete way that involving members of the public is contributing to your knowledge-building? If the latter, think hard about what kinds of obligations might flow from that contribution.

2. What’s in it for the non-scientists/non-experts/members of the public involved in the project?

Why would members of the public want to participate in your project? What could they expect to get from such participation?

Maybe they enjoy being outdoors counting birds (and would be doing so even if they weren’t participating in the project), or looking at pictures of galaxies from space telescopes. Maybe they are curious about what’s in their genome or what’s in their belly-button. Maybe they want to help scientists build new knowledge enough to participate in some of the grunt-work required for that knowledge-building. Maybe they want to understand how that grunt-work fits into the knowledge-building scientists do.

It’s important to understand what the folks whose help you’re enlisting think they’re signing on for. Otherwise, they may be expecting something from the experience that you can’t give them. The best way to find out what potential participants are looking for from the experience is to ask them.

Don’t offer potential diagnostic benefits from participation in a project for which that information is a long, long way off. Don’t promise that tracking the health of streams by screening for the presence of different kinds of bugs will be tons of fun without being clear about the conditions your volunteers will undergo to perform those screenings.

Don’t promise participants that they will be getting a feel for what it’s like to “do science” if, in fact, they are really just providing a sample rather than being part of the analysis or interpretation of that sample.

Don’t promise them that they will be involved in hypothesis-formation or conclusion-drawing if really you are treating them as fancy measuring devices.

3. What’s the relationship between the scientists and the non-scientists in this project? What consequences will this have for relationships between scientists and the public more generally?

There’s a big difference between involving members of the public in your project because it will be enriching for them personally and involving them in your project because it’s the only conceivable way to build a particular piece of knowledge you’re trying to build.

Being clear about the relationship upfront — here’s why we need you, here’s what you can expect in return (both the potential benefits of participation and the potential risks) — is the best way to make sure everyone’s interests are well-served by the partnership and that no one is being deceived.

Things can get complicated, though, when you pull the focus back from how participants are involved in building the knowledge and consider how that knowledge might be used.

Will the new knowledge primarily benefit the scientists leading the project, adding publications to their CVs and helping them make the case for funding for further projects? Could the new knowledge contribute to our understanding (of ecosystems, or human health, for example) in ways that will drive useful interventions? Will those interventions be driven by policy-makers or commercial interests? Will the scientists be a part of this discussion of how the knowledge gets used? Will the members of the public (either those who participated in the project or members of the public more generally) be a part of this discussion — and will their views be taken seriously?

To the extent that participating in a citizen science project, whatever shape that participation may take, can influence non-scientists’ views on science and the scientific community as a whole, the interactions between scientists and volunteers in and around these projects are hugely important. They are an opportunity for people with different interests, different levels of expertise, different values, to find common ground while working together to achieve a shared goal — to communicate honestly, deal with each other fairly, and take each other seriously.

More such ethical engagement between scientists and publics would be a good thing.

But the flip-side is that engagements between scientists and publics that aren’t as honest or respectful as they should be may have serious negative impacts beyond the particular participants in a given citizen science project. They may make healthy engagement, trust, and accountability harder for scientists and publics across the board.

In other words, working hard to do it right is pretty important.

I may have more to say about this after the conference. In the meantime, you can add your questions or comments to the session discussion forum.

Ethical and practical issues for uBiome to keep working on.

Earlier this week, the Scientific American Guest Blog hosted a post by Jessica Richman and Zachary Apte, two members of the team at uBiome, a crowdfunded citizen science start-up. Back in February, as uBiome was in the middle of its crowdfunding drive, a number of bloggers (including me) voiced worries that some of the ethical issues of the uBiome project might require more serious attention. Partly in response to those critiques, Richman’s and Apte’s post talks about their perspectives on Institutional Review Boards (IRBs) and how in their present configuration they seem suboptimal for commercial citizen science initiatives.

Their post provides food for thought, but there are some broader issues about which I think the uBiome team should think a little harder.

Ethics takes more than simply meeting legal requirements.

Consulting with lawyers to ensure that your project isn’t breaking any laws is a good idea, but it’s not enough. Meeting legal requirements is not sufficient to meet your ethical obligations (which are well and truly obligations even when they lack the force of law).

Now, it’s the case that there is often something like the force of law deployed to encourage researchers (among others) not to ignore their ethical obligations. If you accept federal research funds, for example, you are entering into a contract one of whose conditions is working within federal guidelines for ethical use of animal or human subjects. If you don’t want the government to enforce this agreement, you can certainly opt out of taking the federal funds.

However, opting out of federal funding does not remove your ethical duties to animals or human subjects. It may remove the government’s involvement in making you live up to your ethical obligations, but the ethical obligations are still there.

This is a tremendously important point — especially in light of a long history of human subjects research in which researchers have often not even recognized their ethical obligations to human subjects, let alone had a good plan for living up to them.

Here, it is important to seek good ethical advice (as distinct from legal advice), from an array of ethicists, including some who see potential problems with your plans. If none of the ethicists you consult see anything to worry about, you probably need to ask a few more! Take the potential problems they identify seriously. Think through ways to manage the project to avoid those problems. Figure out a way to make things right if a worst case scenario should play out.

In a lot of ways, problems that uBiome encountered with the reception of its plan seemed to flow from a lack of good — and challenging — ethical advice. There are plenty of other people and organizations doing citizen science projects that are similar enough to uBiome (from the point of view of interactions with potential subjects/participants), and many of these have experience working with IRBs. Finding them and asking for their guidance could have helped the uBiome team foresee some of the issues with which they’re dealing now, somewhat late in the game.

There are more detailed discussions of the chasm between what satisfies the law and what’s ethical at The Broken Spoke and Drugmonkey. You should, as they say, click through and read the whole thing.

Some frustrations with IRBs may be based on a misunderstanding of how they work.

An Institutional Review Board, or IRB, is a body that examines scientific protocols to determine whether they meet ethical requirements in their engagement of human subjects (including humans who provide tissue or other material to a study). The requirement for independent ethical evaluation of experimental protocols was first articulated in the World Medical Association’s Declaration of Helsinki, which states:

The research protocol must be submitted for consideration, comment, guidance and approval to a research ethics committee before the study begins. This committee must be independent of the researcher, the sponsor and any other undue influence. It must take into consideration the laws and regulations of the country or countries in which the research is to be performed as well as applicable international norms and standards but these must not be allowed to reduce or eliminate any of the protections for research subjects set forth in this Declaration. The committee must have the right to monitor ongoing studies. The researcher must provide monitoring information to the committee, especially information about any serious adverse events. No change to the protocol may be made without consideration and approval by the committee.

(Bold emphasis added.)

In their guest post, Richman and Apte assert, “IRBs are usually associated with an academic institution, and are provided free of charge to members of that institution.”

It may appear that the services of an IRB are “free” to those affiliated with the institution, but they aren’t really. Surely it costs the institution money to run the IRB — to hire a coordinator, to provide ethics training resources for IRB members and for faculty, staff, and students involved in human subjects research, to (ideally) give release time to faculty and staff on the IRB so they can actually devote the time required to consider protocols, comment upon them, provide guidance to PIs, and so forth.

Administrative costs are part of institutional overhead, and there’s a reasonable expectation that researchers whose protocols come before the IRB will take a turn serving on the IRB at some point. So IRBs most certainly aren’t free.

Now, given that the uBiome team was told they couldn’t seek approval from the IRBs at any institutions where they plausibly could claim an affiliation, and given the expense of seeking approval from a private-sector IRB, I can understand why they might have been hesitant to put money down for IRB approval up front. They started with no money for their proposed project. If the project itself ended up being a no-go due to insufficient funding, spending money on IRB approval would seem pointless.

However, it’s worth making it clear that expense is not in itself a sufficient reason to do without ethical oversight. IRB oversight costs money (even in an academic institution where those costs are invisible to PIs because they’re bundled into institutional overhead). Research in general costs money. If you can’t swing the costs (including those of proper ethical oversight), you can’t do the research. That’s how it goes.

Richman and Apte go on:

[W]e wanted to go even further, and get IRB approval once we were funded — in case we wanted to publish, and to ensure that our customers were well-informed of the risks and benefits of participation. It seemed the right thing to do.

So, we decided to wait until after crowdfunding and, if the project was successful, submit for IRB approval at that point.

Getting IRB approval at some point in the process is better than getting none at all. However, some of the worries people (including me) were expressing while uBiome was at the crowdfunding stage of the process (before IRB approval) were focused on how the lines between citizen scientist, human subject, and customer were getting blurred.

Did donors to the drive believe that, by virtue of their donations, they were guaranteed to be enrolled in the study (as sample providers)? Did they have a reasonable picture of the potential benefits of their participation? Did they have a reasonable picture of the potential risks of their participation?

These are not questions we leave to PIs. To assess them objectively, we put these questions before a neutral third party … the IRB.

If the expense of formal IRB consideration of the uBiome protocol was prohibitive during the crowdfunding stage, it surely would have gone some way to meeting ethical duties if the uBiome team had vetted the language in their crowdfunding drive with independent folks attentive to human subjects protection issues. That the ethical questions raised by their fundraising drive were so glaringly obvious to so many of us suggests that skipping this step was not a good call.

We next arrive at the issue of the for-profit IRB. Richman and Apte write:

Some might criticize the fact that we are using a private firm, one not connected with a prestigious academic institution. We beg to differ. This is the same institution that works with academic IRBs that need to coordinate multi-site studies, as well as private firms such as 23andme and pharmaceutical companies doing clinical trials. We agree that it’s kind of weird to pay for ethical review, but that is the current system, and the only option available to us.

I don’t think paying for IRB review is the ethical issue. If one were paying for IRB approval, that would be an ethical issue, and there are some well known rubber-stamp-y private IRBs out there.

Carl Elliott details some of the pitfalls of the for-profit IRB in his book White Coat, Black Hat. The most obvious of these is that, in a competition for clients, a for-profit IRB might well feel a pressure to forego asking the hard questions, to be less ethically rigorous (and more rubber-stamp-y) — else clients seeking approval would take their business to a competing IRB they saw as more likely to grant that approval with less hassle.

Market forces may provide good solutions to some problems, but it’s not clear that the problem of how to make research more ethical is one of them. Also, it’s worth noting that being a citizen science project does not in and of itself preclude review by an academic IRB – plenty of citizen science projects run by academic scientists do just that. It’s uBiome’s status as a private-sector citizen science project that led to the need to find another IRB.

That said, if folks with concerns knew which private IRB the uBiome team used (something they don’t disclose in their guest post), those folks could inspect the IRB’s track record for rigor and make a judgment from that.

Richman and Apte cite as further problems with IRBs, at least as currently constituted, lack of uniformity across committees and lack of transparency. The lack of uniformity is by design, the thought being that local control of committees should make them more responsive to local concerns (including those of potential subjects). Indeed, when research is conducted by collaborators from multiple institutions, one of the marks of good ethical design is when different local IRBs are comfortable approving the protocol. As well, at least part of the lack of transparency is aimed at human subjects protection — for example, ensuring that the privacy of human subjects is not compromised in the release of approved research protocols.

This is not to say that there is no reasonable discussion to have about striving for more IRB transparency, and more consistency between IRBs. However, such a discussion should center on ethical considerations, not convenience or expediency.

Focusing on tone rather than substance makes it look like you don’t appreciate the substance of the critique.

Richman and Apte write the following of the worries bloggers raised with uBiome:

Some of the posts threw us off quite a bit as they seemed to be personal attacks rather than reasoned criticisms of our approach. …

We thought it was a bit… much, shall we say, to compare us to the Nazis (yes, that happened, read the posts) or to the Tuskegee Experiment because we funded our project without first paying thousands of dollars for IRB approval for a project that had not (and might never have) happened.

I have read all of the linked posts (here, here, here, here, here, here, here, and here) that Richman and Apte point to in leveling this complaint about tone. I don’t read them as comparing the uBiome team to Nazis or the researchers who oversaw the Tuskegee Syphilis Experiment.

I’m willing to stipulate that the tone of some of these posts was not at all cuddly. It may have made members of the uBiome team feel defensive.

However, addressing the actual ethical worries raised in these posts would have done a lot more for uBiome’s efforts to earn the public’s trust than adopting a defensive posture did.

Make no mistake, harsh language or not, the posts critical of uBiome were written by a bunch of people who know an awful lot about the ins and outs of ethical interactions with human subjects. These are also people who recognize from their professional lives that, while hard questions can feel like personal attacks, they still need to be answered. They are raising ethical concerns not to be pains, but because they think protecting human subjects matters — as does protecting the collective reputation of those who do human subjects research and/or citizen science.

Trust is easier to break than to build, which means one project’s ethical problems could be enough to sour the public on even the carefully designed projects of researchers who have taken much more care thinking through the ethical dimensions of their work. Addressing potential problems in advance seems like a better policy than hoping they’ll be no big deal.

And losing focus on the potential problems because you don’t like the way in which they were pointed out seems downright foolish.

Much of uBiome’s response to the hard questions raised about the ethics of their project has focused on tone, or on meeting examples that provide historical context for our ethical guidelines for human subject research with the protestation, “We’re not like that!” If nothing else, this suggests that the uBiome team hasn’t understood the point the examples are meant to convey, nor the patterns that they illuminate in terms of ethical pitfalls into which even non-evil scientists can fall if they’re not careful.

And it is not at all clear that the uBiome team’s tone in blog comments and on social media like Twitter has done much to help its case.

What is still lacking, amidst all their complaints about the tone of the critiques, is a clear account of how basic ethical questions (such as how uBiome will ensure that the joint roles of customer, citizen science participant, and human subject don’t lead to a compromise of autonomy or privacy) are being answered in uBiome’s research protocol.

A conversation on the substance of the critiques would be more productive here than one about who said something mean to whom.

Which brings me to my last issue:

New models of scientific funding, subject recruitment, and outreach that involve the internet are better served by teams that understand how the internet works.

Let’s say you’re trying to fund a project, recruit participants, and build general understanding, enthusiasm, support, and trust. Let’s say that your efforts involve websites where you put out information and social media accounts where you amplify some of that information or push links to your websites or to favorable media coverage.

People looking at the information you’ve put out there are going to draw conclusions based on the information you’ve made public. They may also draw speculative conclusions from the gaps — the information you haven’t made public.

You cannot, however, count on them to base their conclusions on information to which they’re not privy, including what’s in your heart.

There may be all sorts of good efforts happening behind the scenes to get rigorous ethical oversight off the ground. But if those efforts are invisible to the public, there’s no reason the public should assume they’re happening.

If you want people to draw more accurate conclusions about what you’re doing, and about what potential problems might arise (and how you’re preparing to face them if they do), a good way to go is to make more information public.

Also, recognize that you’re involved in a conversation that is being conducted publicly. Among other things, this means it’s unreasonable to expect people with concerns to take it to private email in order to get further information from you. You’re the one with a project that relies on cultivating public support and trust; you need to put the relevant information out there!

(What relevant information? Certainly the information relevant to responding to concerns and critiques articulated in the above-linked blog posts would be a good place to start — which is yet another reason why it’s good to be able to get past tone and understand substance.)

In a world where people email privately to get the information that might dispel their worries, those people are the only ones whose worries are addressed. The rest of the public that’s watching (but not necessarily tweeting, blogging, or commenting) doesn’t get that information (especially if you ask the people you email not to share the content of that email publicly). You may have fully lost their trust with nary a sign in your inboxes.

Maybe you wish the dynamics of the internet were different. Some days I do, too. But unless you’re going to fix the internet prior to embarking on your brave new world of crowdfunded citizen science, paying some attention to the dynamics as they are now will help you use it productively, rather than creating misunderstandings and distrust that then require remediation.

That could clear the way to a much more interesting and productive conversation between uBiome, other researchers, and the larger public.