One of the key requirements that researchers conducting studies with human subjects must meet is that they obtain the informed consent of the participating subjects (or of a parent or guardian, if the subject is not able to give informed consent himself or herself). However, there are particular instances where giving the subjects complete information about the study at the outset may change the outcome of the study — namely, it may make it practically impossible to measure what the research is trying to measure. If these studies are not to be ruled out completely, doing them necessitates some amount of deception or concealment, which seems to be at odds with the need to establish informed consent.
Of course, there are ethical guidelines for dealing with studies that require deception. But recently a reader emailed me about a particular study where there might have been concealment that was an impediment to informed consent rather than a methodological requirement of the study. Here are the broad details*:
Faculty members are solicited to participate in a sociological study of networking within their academic departments. Indeed, a university administrator strongly encourages faculty members to participate in the research by noting that the data it collects is expected to bolster a grant application geared toward funding “institutional transformations”.
The information provided to prospective subjects on the consent form makes no mention of using the results of this study to secure further grants. Some of the faculty who are being solicited to participate in the present study have objections to the sorts of “institutional transformations” promoted by the grant program mentioned in the administrator’s encouragement to participate.
Is the failure to mention this intended use of the study results in the consent forms a violation of informed consent?
Among the American Psychological Association Ethical Guidelines for Research with Human Subjects, we find the following:
4. Except in minimal-risk research, the investigator establishes a clear and fair agreement with research participants, prior to their participation, that clarifies the obligations and responsibilities of each. The investigator has the obligation to honor all promises and commitments included in that agreement. The investigator informs the participants **of all aspects of the research that might reasonably be expected to influence willingness to participate** and explains all other aspects of the research about which the participants inquire. Failure to make full disclosure prior to obtaining informed consent requires additional safeguards to protect the welfare and dignity of the research participants. Research with children or with participants who have impairments that would limit understanding and/or communication requires special safeguarding procedures.
5. Methodological requirements of a study may make the use of concealment or deception necessary. Before conducting such a study, the investigator has a special responsibility to (1) determine whether the use of such techniques is justified by the study’s prospective scientific, educational, or applied value; (2) determine whether alternative procedures are available that do not use concealment or deception; and (3) ensure that the participants are provided with sufficient explanation as soon as possible. …
8. After the data are collected, the investigator provides the participant with information about the nature of the study and attempts to remove any misconceptions that may have arisen. Where scientific or humane values justify delaying or withholding this information, the investigator incurs a special responsibility to monitor the research and to ensure that there are no damaging consequences for the participant.
(Bold emphasis added.)
Item 4 here covers informed consent. Part of what one owes the human subjects of one’s research is an explanation of the purpose of the research and of what, specifically, participation in it will involve. This gives the potential subjects enough information to reasonably exercise their autonomy in deciding whether or not to participate in the research.
Item 5 deals with instances where disclosing particular details (such as what exactly it is you’re studying) is itself likely to affect the subjects’ behavior and contaminate the results. In such cases, the researchers have extra obligations to the subjects to offset the harms of not fully disclosing the details ahead of time. The first two of these responsibilities must actually be discharged prior to the study — namely, making sure the value of the knowledge expected from the research outweighs the harms from the deception, and making sure there is no deception-free way to build this knowledge. Assuming the expected knowledge justifies the risk, and assuming deception and concealment are required to build that knowledge, the researcher is then obligated to reveal the concealment or deception to the subjects, and to explain why it was necessary, as soon as possible.
It’s worth noting that the harm to the subjects is not allowed to cross a particular threshold, no matter how valuable the knowledge you expect the study to produce might be. The policy at my university states:
1. No human subject is to be exposed to unreasonable risk to health or well-being whether physical, psychological or social.
where the various types of risks are described as follows:
PSYCHOLOGICAL RISK
Research that interrupts the normal activity of human subjects resulting in the immediate and/or long term stress that would not otherwise be experienced by the individual.
1. Stress involves any situation that poses a threat to desired goals or homeostatic organismic conditions and thus places strong adaptive demands on the individual.
2. Stress can be experienced during the actual research situation (immediate) and/or as a result of participation in research (long term).
3. Some examples of situations that may result in stress are threat to self-esteem; exposure to noxious events; requests or demand for behaviors that are discrepant with an individual’s values, morals and/or ethics; the requirement of excess physical effort.
SOCIAL RISK TO INDIVIDUALS
Social risk to individuals is the extent to which an individual subject is exposed to deprivation with respect to desired relations with and within both formal and informal social groups, or normal opportunities for such relationships. Such deprivations include (but are not limited to) derogatory labeling, overt hostile reactions by others, diminished access to otherwise available roles, negative effects on social standing or mobility, reduced opportunity for communication, lost or endangered membership in such groups.
SOCIAL RISK TO GROUPS
Social risk to groups is the extent to which a subject formal or informal group, as a collective, is exposed to loss with respect to factors affecting the viability and vitality of the group. Such loss includes (but is not limited to) derogatory labeling, overt hostile reactions from the social environment, reduced access to resources, diminished ability to recruit and retain members, negative effects on morale and other aspects of internal cohesion and organization, violation of legally required procedures or risk of damage claims through civil action where there is corporate liability, reduced opportunities for communication, distortion of group activities relative to established group purposes and functions.
Note also that in “clinical” or “therapeutic” research, the risks to the human subject participating in the research are supposed to be outweighed by the potential benefit to that human subject. However, in non-therapeutic studies, the risks to the individual participants simply need to be outweighed by the potential benefit to society. In other words, the subject might not benefit directly from participation in the research. Needless to say, it’s a very important part of informed consent to establish that the participants understand which kind of study they’re getting themselves into.
Back to the APA guidelines, item 8 identifies responsibilities that the investigators have toward the subjects even after the subjects’ active involvement in the research (i.e., during the data collection phase) has ended. Here again, notice the presumption that the human subject is entitled to information about the study in which s/he has participated — that human subjects are to be regarded as agents who are entitled to this information in order to make their own decisions and pursue their own interests.
How do these guidelines bear on the situation described by my correspondent?
First, there is the question of whether the consent form sufficed to inform the faculty members being recruited as subjects “of all aspects of the research that might reasonably be expected to influence willingness to participate”.
Undoubtedly, each university’s Institutional Review Board (IRB) has to work out a reasonable interpretation of “reasonably” here. It seems clear to me that one would reasonably want to know something about what kind of interactions there would be between investigators and study participants. Am I answering a questionnaire, being fed cookies, being poked with sharp sticks? This will play some role in my decision to participate or not. As well, it doesn’t seem unreasonable for a potential study participant to want to know what scientific question the study is trying to answer.
Is there a reasonable expectation that study participants would also want to know the various ways that the knowledge generated in the study might be used? (Here, it might make a difference that the pool of potential subjects is made up of university faculty.)
Here, I think we should notice the difference between the potential ways someone might, at some point, apply the results of a particular research project, and the specific plans the researchers themselves (or others cooperating with them) have for the expected results once the study is completed. Once a piece of knowledge has been built and communicated, even if that communication happens within a fairly small academic community, it’s very hard to predict with certainty the various ways it may serve as a starting point for building additional pieces of knowledge. Similarly, it’s hard to have complete foresight about what applications of this knowledge others may try to implement. Knowledge has legs, and once it’s out there, the researchers who generated it can’t control how others use it.
In other words, it’s probably not reasonable to demand that a consent form run through all the possible ways the knowledge generated in a study might be put to use.
But if, in the event that a research project yields the knowledge you expect it to, you have a plan to use that knowledge to apply for a grant directed at a particular end, do you have an obligation to disclose this in the consent form? If it’s not in the consent form, do you have an obligation to disclose this plan to potential study participants who ask how you intend to use the knowledge generated in the study?
My presumption would be that you do unless there is a pressing methodological reason to conceal this information.
The next question, then, is whether in the particular circumstances, knowledge of the hypothesis being studied is likely to bias the results. If it is, concealing that hypothesis or deceiving subjects about the precise purpose of the study may be permissible — provided the benefits of the results of the study outweigh the harms to the participants of deceiving them.
Here again, the investigators and the IRBs are in the position of evaluating the size of the harm such deception would create for the study participants, but there’s no guarantee that the potential subjects will judge the size of that harm (or of the benefit of conducting the study) the same way. I don’t know that there’s an obvious way to solve this problem when study methodology requires deception.
But in this particular case, it may not matter, because the administrator exhorting faculty to participate in the study mentions that the results may be useful in applying for the grant to fund “institutional transformations”. It is quite possible that the curious potential participant could research the granting program, discover the sort of “institutional transformations” funded by this program, and work out the sociologists’ likely hypothesis in the study (or enough of it, anyway). This means that either the administrator’s comment about the grant that may follow on the research could bias the results of that research, or knowledge of the hypothesis being studied won’t bias the research results — which would mean that the investigators ought not to conceal that hypothesis from study participants.
So, on the particular case at hand, I’m inclined to say that while the consent form might not need to address the “next steps” envisioned by the investigators once the results of this study are in, the investigators probably do have a duty to be forthcoming about these plans when asked by potential subjects. To the extent that keeping these next steps out of the research subjects’ consciousness might have been required by the study methodology, the talky administrator has already screwed that up.
There are also some broader questions here that I will pose but not answer (although I invite you to weigh in on them):
- Is it a harm to me to participate in the generation of knowledge that is used to change things (like patterns of institutional support) in ways I find objectionable? Or ought I to separate the generation of the knowledge from the subsequent use of that knowledge?
- Is it always better to have changes grounded in sound research than changes grounded in hunches (or the pet agendas of administrators and grant writers), even if the changes so grounded are objectionable to me for other reasons?
- Is opting out of participation in a research project an effective strategy to prevent the generation of knowledge that I have reason to believe will be misused?
- Is opting to participate in a research project an effective strategy to prevent the generation of less accurate conclusions (owing to a biased sample of subjects)?
- Is it harder to misuse accurate conclusions than inaccurate ones?
These are questions aqueous solutions hardly ever ask themselves at the outset of your standard chemistry experiment.
______
* I’m purposely stripping out a bunch of the particulars in order to consider this question somewhat broadly — as well as to consider it on the basis of the ethical requirements of investigators toward human subjects, rather than on the basis of good outcomes that might be generated by the grant to support institutional transformation. If desired, my correspondent may fill in (or link to) specifics of the actual case.
“Is it a harm to me to participate in the generation of knowledge that is used to change things (like patterns of institutional support) in ways I find objectionable? Or ought I to separate the generation of the knowledge from the subsequent use of that knowledge?”
I think this depends a lot on the foreseeable effects of the knowledge, and on the intrinsic worth of the knowledge. Suppose I come up with a natural account of religion, knowing that it will cause many believers to drop their faith, and thus cause social disruption.
I may think social disruption is a harm, but I may think that a natural account of religion is a great benefit. So I guess I’d balance the two and decide on the overall benefit. I don’t think knowledge is, in itself, either an absolute good or an absolute harm. But I may think that knowing the truth is worthwhile no matter what the possible misuses of it (say, extremist atheists rushing through a law banning religious belief), misuses that I cannot easily foresee or that I think are unjustified.
IOW, I have no idea.
One issue that also needs to be considered is that of laws pertaining to data protection. Such laws often require the organisation collecting the data to inform the subject both of the data being collected and of the purpose for which it will be used. Such laws also often prevent the passing on of such data to third parties except as required by law, or with the express consent of the subject. I know data protection laws in the US are weaker than in the EU. EU laws can be pretty stringent and could affect how much data, and what type, could be passed to researchers outside the EU.
I’m the person who emailed Janet. I wish I could fill in more details, but my university is being (as usual) secretive about the purpose of the survey, and indeed about the overall goals. I may just have to file a FOIA request.
The one thing I know is that the ‘institutional transformation’ referred to is connected with an NSF ADVANCE proposal, which has the overall goal of increasing the participation of women in science and engineering. Some things done with NSF ADVANCE funding have been entirely praiseworthy; some have been trivial; and some have been objectionable, at least to those of us who reject sex-selective hiring programs. There are three categories of ADVANCE grants; of relevance to my institution are planning grants and ‘institutional transformation’ grants. We know that an ADVANCE proposal was submitted in December 2007, and no word has been given about its funding status (based on typical NSF timelines, it’s unlikely the PIs even have the reviews back yet). A letter faculty received urging us to participate in the survey certainly indicated it was for a future proposal. My best guess (but it’s only a guess) is that a proposal for a planning grant was submitted; that the data collection they’re now doing is in anticipation of receiving that grant; and that an ‘institutional transformation’ proposal is in the future.
As for where science faculty stand on all this: one problem is that this thing seems to be run from outside the science departments; our betters, as usual, seem to have decided we need to be transformed, without asking us. I won’t claim there haven’t been some quite clear examples of discrimination against women in hiring in my own Department — there have. Some of the problems have been taken care of by a Kuhnian mechanism: the offenders have retired or will retire shortly. And at least one was mainly a result of administrative decisions and not the conduct of the faculty, which supported the female candidate.
Would knowing the goals affect the conduct of the survey? Maybe, but we’re talking about surveying a group of scientists here; of course we’re going to try to figure out why they’re asking the questions they’re asking. One way or another, the motivation of the research is going to be in play. The rats in the Skinner box seldom ask, ‘Why are we here?’
“Some examples of situations that may result in stress are threat to self-esteem;”
And here we have the root of Harbison’s problem, I have little doubt. LOL.
Go away, monkey, the grown-ups are having a conversation.
If I may point out, the policy of our prestigious university has proven to be counterproductive in some situations. I work with professors who cannot publish papers because of the onerous requirements. Although the subject of the study was a broad examination of environmental and social conditions, the administration demanded that the professors return to the study area and obtain informed consent from an absurd number of people, potentially numbering in the thousands. They cited the risk to the social group as a problem.
I am of the opinion that accurate, carefully collected information is far better than “well, we think it’s like this” information. If the collection methodology was questionable, that can be addressed in peer review as well as in subsequent examination. IMO, attempts by the participants to pollute the data pool do seem counterproductive. One never knows; the results may reveal a situation favorable to your position. Besides, shouldn’t the pursuit of knowledge trump an ideological agenda? I hesitate to read between the lines in this case.
BTW – Dr. Free-Ride, If you want to be subjected to an ad hoc social experiment, the Art History Symposium is this Saturday, 12 Apr 08. It’s in the Engineering Auditorium, 09:00 to 16:00, with lunch ($5.00 donation) in the University Room. A painful time is guaranteed for all. (Too much post-modernism for this medievalist :^)
Professor Harbison writes: “I won’t claim there haven’t been some quite clear examples of discrimination against women in hiring in my own Department — there have. Some of the problems have been taken care of by a Kuhnian mechanism: the offenders have retired or will retire shortly. And at least one was mainly a result of administrative decisions and not the conduct of the faculty, which supported the female candidate.”
It seems that this clarification raises more ethical questions than the aforementioned survey. Since some of the offenders have not retired, and others involved in the “administrative decisions” may still be present, perhaps it is the appropriate time for an “institutional transformation”? How isolated are these examples? How many women are on the tenured faculty in Professor Harbison’s department?
You avoided the first few words of guideline #4:
“Except in minimal-risk research”
The risk is objection to how the study will be used? Where is the risk?