Abstract
Assessing and managing risks to participants is a central point of contention in the debate about disclosing individualized research results. Those who favor disclosure of only clinically significant results think that disclosing clinically insignificant results is risky and costly, and that harm prevention should take precedence over other ethical considerations. Those who favor giving participants the option of full disclosure regard these risks as insubstantial, and think that the obligations to benefit participants and to promote their autonomy and right to know outweigh the obligation to prevent harm and concerns about cost. The risks of disclosing clinically insignificant research results are currently not quantifiable, due to a lack of empirical data. The precautionary principle provides some insight into this debate because it applies to decision-making concerning risks that are plausible but not quantifiable. A precautionary approach would favor full disclosure of individualized results with appropriate safeguards to prevent, minimize, or mitigate risks to participants, such as: validating testing methods; informing participants about their options for receiving test results and the potential benefits and risks related to receiving results; assessing participants' comfort with handling uncertainty; providing counseling and advice to participants; following up with individuals who receive test results; and forming community advisory boards to help investigators deal with issues related to disclosure.
Keywords: autonomy, beneficence, disclosing research results, precautionary principle, risk
Disclosure of individualized results to human research participants has been a hotly contested bioethics issue since the 1990s. Investigators, ethicists, oversight committees, expert panels, and research organizations have expressed diverse opinions on this topic (Beskow and Burke, 2010; Dressler, 2009; Fabsitz et al., 2010; Hernick et al., 2011; Holtzman, 1999; Knoppers et al., 2006; National Bioethics Advisory Commission, 1999; Ravitsky and Wilfond, 2006; Renegar et al., 2006; Resnik, 2009; Wolf et al., 2008). For many years, the prevailing view was that individualized results of tests or examinations performed during research should not be returned to participants, because the purpose of research is to develop generalizable knowledge, not to provide human subjects with information pertaining to their health (Dressler, 2009). Aggregate results of a study could be shared with participants, but not individualized results. However, many commentators now agree that clinically significant results from validated tests should be shared with participants (Beskow and Burke, 2010; Fabsitz et al., 2010; Knoppers et al., 2006; Ravitsky and Wilfond, 2006). Some researchers, patient advocate groups, and organizations have gone a step further and argued that participants should have the option of receiving all of their individualized research results, not just those with clinical significance (Fernandez et al., 2003; Shalowitz and Miller, 2005; Brody et al., 2007; Brown et al., 2010).
One of the main arguments against disclosing all individualized results is that it can pose significant risks to human participants. Individuals who receive results from tests that are not clinically significant may make ill-advised medical, financial, or personal choices based on a misunderstanding, or they may experience unnecessary anxiety or distress (National Bioethics Advisory Commission, 1999; Dressler, 2009). However, these risks are not well understood at present, due to a lack of empirical research on the psychosocial impacts of receiving clinically insignificant test results (Shalowitz and Miller, 2008a; Mattsson et al., 2010). The precautionary principle, described below, has been advocated as a useful strategy for making decisions involving public health, technology, or the environment when evidence concerning risks is scarce (Cranor, 2001; Dolan and Rowley, 2009; Goklany, 2001; John, 2007; Kriebel et al., 2001; Kuhlau et al., 2011; Resnik, 2003, 2004; Weed, 2004). In this article, I will argue that the precautionary principle can yield important insights and recommendations concerning the disclosure of individualized research results.
ARGUMENTS FOR AND AGAINST DISCLOSING INDIVIDUALIZED RESEARCH RESULTS
The movement toward full disclosure of individualized research results gained momentum when investigators, commentators, and oversight committees started to appreciate how participants can benefit from receiving this information (Knoppers et al., 2006). The duty to benefit human subjects has a solid basis in research regulations and ethical guidance (Emanuel et al., 2000; National Commission, 1979; Shamoo and Resnik, 2009). The most obvious way that participants may benefit from the disclosure of individualized research results is that the information may be useful in diagnosing, treating, or preventing diseases. Blood pressure measurements, physical exams, urinalyses, blood tests, pulmonary function tests, genetic tests, magnetic resonance imaging scans, and other tests conducted during research may yield results with implications for disease management (Wolf et al., 2008; Dressler, 2009). Participants may also benefit from information with implications for reproductive choices. A woman who learns that she has a genetic variant predisposing her to develop a progressive and incurable neurodegenerative disease may decide against having children who would inherit this predisposition (Renegar et al., 2006). Additionally, participants may benefit from receiving test results that do not have clear implications for disease management, because the information may provide them with some reassurance that they do not have a particular disease (or disease predisposition). In some cases, participants may benefit from receiving information that simply satisfies their desire to know something about themselves. Also, information that is not clinically useful at present may become clinically useful in the future, due to advances in medicine (Fernandez et al., 2003).
Disclosing individualized research results is important not only to benefit participants, but also to promote their autonomous decision-making (Shalowitz and Miller, 2005). The duty to respect and support participants' autonomous decision-making also has a solid basis in research regulations and ethical guidance, especially those dealing with informed consent (Emanuel et al., 2000; Shamoo and Resnik, 2009). Participants can use the information they receive about their test results to decide whether to continue participating in a study and to make medical, reproductive, or lifestyle choices. Participants may also want to know their test results to satisfy their curiosity or to relieve anxiety. Surveys and interviews with research subjects and potential subjects indicate that most would want to know their individualized test results, especially results with implications for their health (Beskow and Smolek, 2009; Fernandez et al., 2009; Partridge et al., 2003; Shalowitz and Miller, 2008a; Wendler and Emanuel, 2002; Wilson et al., 2010). In some circumstances participants may choose not to receive their results, and, according to many commentators, they should be allowed to make this choice as well (Dressler, 2009).
A third argument for returning individualized research results is that research participants have a right to receive this information as a result of providing the data or specimens (e.g., blood, tissue, or urine) on which the results are based. One need not assume that participants actually own the data or specimens in order to acknowledge their right to receive individualized information, since this right could be based on recognition of the value of the participant's contribution to the study. Returning individualized results embodies a partnership approach to research with human subjects (Shalowitz and Miller, 2005; Brody et al., 2007).
While there are strong arguments for disclosing individualized results to participants, there are also some cogent arguments against this position. First, although information about research results can benefit participants, it can also produce harm (National Bioethics Advisory Commission, 1999; Dressler, 2009). Regulations and ethical guidelines require investigators to minimize harms to human research subjects (Emanuel et al., 2000; Shamoo and Resnik, 2009). Benefits to participants must be balanced against potential harms to participants and others when disclosing results (Ravitsky and Wilfond, 2006; Dressler, 2009; Fabsitz et al., 2010). Participants can be harmed if they receive information that is inaccurate or misleading.
For example, a false positive HIV test result could produce unnecessary stress or anxiety, or lead someone to make an ill-advised medical or financial decision, such as starting unnecessary anti-retroviral treatment. A false negative result could also have adverse impacts, since it may give a false sense of security and lead someone to make unwise choices based on misinformation. For instance, a woman might forego a mammogram based on a mistaken genetic test result indicating that her breast cancer risk is low.
Because inaccurate or unreliable testing methods can cause harm, many commentators and organizations recommend that individualized research results should be disclosed only if there is good evidence that the testing procedures have been validated (Ravitsky and Wilfond, 2006; Dressler, 2009; Fabsitz et al., 2010). In the United States, the Clinical Laboratory Improvement Amendments (CLIA) require certification of laboratories that perform clinical tests. CLIA certification is designed to promote accurate and reliable clinical testing by ensuring that laboratories have appropriate quality control and quality assurance procedures. CLIA certification is not required for laboratories that perform tests only for research purposes. If a test has been conducted by a laboratory that has not been certified, it may yield results that are inaccurate or unreliable (Holtzman, 1999). According to many commentators, individualized test results should be returned to participants only if the testing laboratory has CLIA certification or some equivalent method for validating the testing procedures (Dressler, 2009).
Many commentators have argued that harms can also occur when the information disclosed to participants is clinically insignificant, even if the testing procedures have been validated (Ravitsky and Wilfond, 2006; Dressler, 2009). For example, suppose investigators are conducting a prospective cohort study to examine the relationship between the risk of developing Alzheimer's disease and variants of several genes. Suppose that after fifteen years of research, they discover that some of these variants are associated with a 10% increased risk of developing Alzheimer's disease, but that there is no effective intervention for treating or preventing this condition. Participants who learn about their genetic test results may not know how to respond to this information. They may not know whether their health is at risk, or if there is anything they can do to prevent this disease from developing. They may suffer psychic distress or make unwise choices based on a misunderstanding of the information they receive (Renegar et al., 2006; Mattsson et al., 2010).
For another example, suppose that investigators are conducting a prospective cohort study on the relationship between low levels of exposure to various metals and cancer. Animal studies have shown that some of these metals increase the risk of cancer, but the human health impacts of low levels of exposure are not well understood. Suppose that investigators inform participants in this study about their individual exposure levels, but there is no established normal or safe level of exposure. Also, the precise source of their exposure is not known. The participants may not understand the meaning of this information or what to do with it. They may not know whether their health is at risk, or whether they should take some action to protect their health. Though some of the participants may be able to deal with these uncertainties, others may not, and they may suffer ill effects (Hernick et al., 2011).
Because disclosure of clinically insignificant research results may cause harm to participants, many commentators and organizations recommend that investigators share only clinically significant results with participants (National Bioethics Advisory Commission, 1999; Ravitsky and Wilfond, 2006; Dressler, 2009; Fabsitz et al., 2010). The ethical duty to avoid causing harm, according to many, takes precedence over the obligation to benefit participants and enhance their autonomy and right to know. However, proponents of full disclosure counter-argue that these potential harms are not likely to be significant and that most participants will be able to deal with the practical uncertainties associated with clinically insignificant information, especially if they receive appropriate counseling and advice from a member of the research team or their physician (Shalowitz and Miller, 2005; Brody et al., 2007). Those who are uncomfortable with uncertainty can choose not to receive clinically insignificant results. However, there is very little empirical data pertaining to the risks of disclosing clinically insignificant research results, and we simply do not know how people are likely to react. More research is needed on this topic (Shalowitz and Miller, 2008a; Mattsson et al., 2010).
A second argument against returning individualized research results is that this could be very expensive in some cases, which would adversely impact a study's budget. The most significant expenses related to returning individualized results are those associated with providing counseling and advice, since it would be unethical to return results without helping participants understand what they mean. Additional staff may be needed to counsel participants, which could drive up the cost of the study significantly. To cover the costs of returning individualized results, investigators might have to find ways to economize, such as enrolling fewer participants, or they might have to request additional funding. In the most extreme case, the study might need to be cancelled or drastically scaled back. One could argue that greater net social benefits could be produced, in some cases, by not returning individualized results. Some marginal benefits could be withheld from individuals in order to promote a greater social good (Affleck, 2009). Proponents of full disclosure have argued that these expenses will not be excessive and are unlikely to undermine a study's feasibility. However, there is very little empirical data on how disclosure of individualized results affects a study's budget, and more research is also needed on this topic (Shalowitz and Miller, 2008a).
A third argument against returning individualized research results is that it encourages the therapeutic misconception and therefore undermines the informed consent process (Clayton and Ross, 2006; Ravitsky and Wilfond, 2006; Dressler, 2009). The therapeutic misconception occurs when research participants mistakenly believe that the main purpose of a study in which they are enrolled is to provide them with medical benefits rather than to advance human knowledge (Appelbaum et al., 1987). Research has shown that participants often have difficulty understanding the difference between participating in a clinical trial and receiving clinical care, and, as a result, they often overestimate the benefits of research and underestimate the risks (Lidz et al., 2004). If subjects expect to receive individualized results from research participation, they may assume the study is designed to provide them with information that is useful in health promotion or disease prevention, and this may compromise the consent process. Proponents of full disclosure have argued, however, that disclosing only clinically significant results is more likely to encourage the therapeutic misconception than full disclosure, because it will encourage participants to view disclosure as potentially therapeutic rather than as simply informative (Shalowitz and Miller, 2008b). However, there is little empirical data on the relationship between the therapeutic misconception and the disclosure of individualized results to participants, and more research is also needed on this topic (Shalowitz and Miller, 2008a).
TAKING STOCK
From this brief survey of the arguments for and against disclosing individualized research results, we can ascertain the following points. First, there is a widespread consensus that some individualized results should be disclosed. The current debate is between those who favor disclosure of only clinically significant results and those who argue for giving participants the option of full disclosure (Beskow and Burke, 2010). Second, there is disagreement over the assessment and management of risks related to disclosure of clinically insignificant results. Opponents of full disclosure believe that the psychosocial risks of disclosing clinically insignificant results are substantial. They also think that preventing harm to participants should take precedence over benefitting them and promoting their autonomy or right to know. Proponents of full disclosure do not view these risks as substantial, and they think that beneficence, autonomy, and the right to know should take precedence over harm prevention. Third, although the cost and the therapeutic misconception arguments are important ethical considerations that weigh against returning individualized research results, the harm prevention argument is the most compelling reason for not sharing results with participants, because the obligation to avoid causing unnecessary harm (primum non nocere, “first, do no harm”) usually takes precedence over other ethical obligations in medicine and research (National Bioethics Advisory Commission, 1999; Beauchamp and Childress, 2008). Fourth, we do not have a good understanding of how disclosure of clinically insignificant, individualized results affects participants. Though it seems reasonable to suppose that disclosure of clinically insignificant results could lead to psychic distress or unwise decision-making, we have no precise estimate of these risks at present.
The lack of scientific evidence concerning the risks of disclosing clinically insignificant results is an impediment to the effective resolution of this debate. While it is likely that such evidence will materialize in the not-too-distant future as a result of additional research, we must decide how to proceed in the interim. The precautionary principle, a method for making decisions about risks under conditions of scientific uncertainty, may lend some insight into this debate. It would be useful, therefore, to apply this decision-making rule to the controversy concerning the disclosure of individualized research results.
THE PRECAUTIONARY PRINCIPLE
The precautionary principle embodies the aphorisms “an ounce of prevention is worth a pound of cure” and “better safe than sorry.” German scholars refined these commonsense notions in the 1980s by developing an idea known as the Vorsorgeprinzip (or precautionary principle), an approach to making policy decisions concerning environmental and public health issues when scientific evidence is lacking (Goklany, 2001). During the 1980s, evidence for human-caused global warming was beginning to mount, though it was not conclusive. Environmentalists argued that we should take action to prevent global warming, despite the incompleteness of the scientific evidence. The precautionary principle was embodied in the United Nations' Rio Declaration:
In order to protect the environment, the precautionary approach shall be widely applied by States according to their capabilities. Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation. (United Nations, 1992)
Since the 1980s, ethicists, researchers, and policy-makers have argued that the precautionary principle should be applied to risks to public health or the environment, including electromagnetic fields, genetically engineered foods and crops, toxic chemicals, nanotechnology, infectious diseases, cancer screening, and biosecurity (Goklany, 2001; Resnik, 2003).
Though the precautionary principle has gained considerable influence over public policy, especially in Europe, it continues to face harsh criticism. The most significant objections to the precautionary principle are that (1) it is vague and poorly defined and (2) it is highly risk-aversive and denies society the benefits of science and technology (Sandin et al., 2002; Resnik, 2003; Sunstein, 2005; Peterson, 2006; Hughes, 2006). In response to the first critique, numerous writers have attempted to develop a clearer definition of the precautionary principle (Cranor, 2001; Sandin et al., 2002; Resnik, 2003; Sandin, 2004). To define the principle, two important points need to be clarified. First, the precautionary principle deals with risks that are plausible but not quantifiable, given current scientific knowledge. A risk is plausible if it is consistent with our current scientific knowledge and there is some evidence that supports its occurrence under certain conditions (Resnik, 2003; Kuhlau et al., 2011). The precautionary principle deals with adverse outcomes that could happen, not with implausible, nightmare scenarios. Risks are not quantifiable if we lack sufficient knowledge to assign them an evidence-based probability. For example, suppose a pistol has six chambers and has a bullet in one of those chambers, but I do not know which one. There is a .167 probability that I will shoot myself in the head if I point the gun at my temple and pull the trigger. This risk is quantifiable. Suppose, however, that I do not know whether the pistol has any bullets in it. I have good reasons to believe that there is a chance that I will shoot myself in the head if I aim the gun at my temple, but I do not know the probability that this event will occur under these conditions. This risk is plausible but not quantifiable. In the language of decision theory, the choice is made under conditions of ignorance (Resnik, 2004).
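To make the contrast concrete, the arithmetic behind the six-chamber example can be written out; this brief formalization, including the symbol n for the unknown number of bullets, is an illustrative addition rather than part of the original argument:

\[ P(\text{fire}) = \frac{1}{6} \approx 0.167 \quad \text{(one bullet in six chambers: a quantifiable risk)} \]

\[ P(\text{fire}) = \frac{n}{6}, \quad n \in \{0, 1, \dots, 6\} \text{ unknown} \quad \text{(plausible but not quantifiable: a decision under ignorance)} \]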
The precautionary principle is different from traditional risk-benefit assessment, which is used in environmental and public health regulation, because risk-benefit assessment addresses quantifiable risks (Cranor, 2001; Resnik, 2003). For example, when a government agency approves a new drug, it considers the drug's benefits (such as improved treatment outcomes and enhanced survival) and risks (such as mortality and adverse reactions). Several stages of clinical trials provide scientific evidence concerning the risks and benefits of the drug, and the agency is able to determine that a particular percentage of the population is likely to experience adverse drug reactions or other harmful outcomes. The precautionary principle applies to situations where the level of scientific evidence found in traditional risk-benefit assessment is lacking. As scientific evidence concerning a risk accumulates, and the risk becomes quantifiable, we may forego a precautionary approach to managing the risk in favor of traditional risk-benefit assessment.
A second defining feature of the precautionary principle is that the measures we take in response to risks should be reasonable. Measures could include efforts to avoid, prevent, minimize, or mitigate risks (Sandin, 2004). A response is reasonable if it provides a fair balancing of the different values at stake, which include potential benefits and harms, costs, human rights, and justice (Cranor, 2001; Resnik, 2003; John, 2007; European Commission, 2000). For example, suppose I am concerned about having a flat tire when I drive to work. A reasonable response to this risk would be to minimize and mitigate it by making sure that my tires are properly inflated and by taking along a spare tire and tire-changing equipment. Deciding not to go to work would not be a reasonable response: while it would avoid the risk, it would deny me the benefits of going to work, and the risk of having a flat tire is not dire. Driving to work without checking my tires or taking a spare tire and tire-changing equipment would also not be a reasonable response, because it would not address the risk at all. However, consider the pistol example again. If I have a pistol and I do not know whether it has any bullets in it, the most reasonable response would be to avoid the risk of shooting myself by not aiming the gun at myself or pulling the trigger. Risk avoidance (as opposed to mitigation) would be the reasonable course of action in this case because the potential harm (i.e., shooting oneself) is catastrophic, and risk minimization and mitigation are ineffective ways of managing the risk (Sandin, 2004).
Putting these two points together, we can define the precautionary principle as a rule for decision-making that instructs one to take reasonable measures to prevent, minimize, or mitigate risks that are plausible but not quantifiable. This definition is similar to the United Nations' statement quoted above, but it clarifies the notion of scientific uncertainty concerning risks, and it gives some guidance for deciding how to respond to this uncertainty.
Once we have a clear definition of the precautionary principle, we can dispel the charge that it is exceedingly risk-aversive (Cranor, 2001; Resnik, 2003; Sandin, 2004). If the precautionary principle were interpreted as a rule that tells one to avoid all risks, then it would be overly risk-aversive, because it is impossible to live prosperously without taking some risks. Admittedly, many commentators have encouraged the risk-aversive interpretation of the principle by arguing that the best way to deal with some kinds of environmental or public health risks is to avoid them entirely (Sunstein, 2005). For example, environmental activists in Europe have applied the precautionary principle to argue for banning genetically modified foods and crops (Tait, 2001). However, as we have seen, the precautionary principle need not be interpreted as a highly risk-aversive rule, because reasonable ways of dealing with risks may include risk minimization or mitigation, depending on the balance of the different values at stake. Risk avoidance is the preferred option only when the potential harms are catastrophic and other ways of managing the risks are not likely to be effective. The precautionary principle allows society to manage the risks of new technologies in a responsible way (Sandin, 2004).
A PRECAUTIONARY APPROACH TO DISCLOSING INDIVIDUALIZED RESULTS
With this characterization of the precautionary principle in mind, we can now apply it to the decision to disclose individualized research results. As we have seen, disclosure poses potential risks to research participants, such as psychic distress or ill-advised decision-making based on a misunderstanding of the information received. While these risks are plausible, they are not quantifiable at present. The precautionary principle would instruct us to take reasonable measures to avoid, prevent, minimize, or mitigate these risks. One way to manage the risks of disclosure is simply not to provide participants with their individualized results. While this measure avoids the risks of disclosure, it is not a reasonable response, because it does not balance the competing values fairly. The risks of disclosure, while serious, are not catastrophic (i.e., life-threatening). Also, non-disclosure denies participants important benefits, such as receiving medically useful information, and it does not promote their autonomy or honor their right to receive information about themselves. Full disclosure of results, without any safeguards to protect participants from harm, is also not a reasonable response to the risks, because it does nothing to prevent, minimize, or mitigate potential harms, even though it may produce benefits and enhance autonomy and rights. A reasonable response would involve disclosure of individualized results with appropriate safeguards in place to prevent, minimize, or mitigate the risks associated with receiving this information. This strategy would address risks appropriately without denying participants important benefits. It would also promote autonomous decision-making and honor the right to receive information about oneself.
Under a precautionary approach, the key policy decisions would focus on developing appropriate safeguards to deal with the risks of disclosure. Two of these are uncontroversial and have already been discussed in this article. First, the testing procedures that generate the results should be validated. CLIA certification, or some equivalent method of validating testing procedures, would help to ensure accuracy and reliability. Second, participants should receive appropriate counseling and advice, which will help them understand the information they receive and apply it to practical decisions. Counseling and advice would inform participants about the difference between normal and abnormal results, and which results require medical attention. It would also include referrals to health care providers (National Bioethics Advisory Commission, 1999).
What about precautions related to disclosing clinically insignificant results? As we have seen, commentators and organizations are divided about the best way to manage the risks of disclosing these types of results. Some favor full disclosure, while others believe that disclosure should be limited to clinically significant results. One could argue that the precautionary principle would permit full disclosure, provided that safeguards beyond counseling and validation of testing methods are in place to protect participants from harm, because this approach would provide a fair balancing of the competing values (i.e., risk prevention vs. beneficence, autonomy and rights). What might these additional safeguards be?
One of these safeguards would be to ensure that individuals who receive their test results have the mental and emotional capacity to deal with the information. For the purposes of this discussion, I will focus on returning results to adults with sound decision-making capacity, since returning results to children (or their parents or guardians) or incompetent adults (or their family members or guardians) raises complex ethical and legal issues that are beyond the scope of the present inquiry, such as the right to access information about oneself and the right to informational privacy (see Buchanan and Brock, 1990).
Focusing on adults with sound decision-making capacity, it is important for investigators to understand whether these research participants are comfortable with uncertainty, since some individuals may have difficulty understanding or responding to uncertain information (Dressler, 2009). During the informed consent process, research participants should receive information about the disclosure of test results and have the opportunity to decide whether they want to receive some or all of the information. Investigators should also inform participants about potential psychological or practical difficulties related to dealing with uncertainty, so that participants can decide whether they want to receive results that are clinically insignificant (Fabsitz et al., 2010). Participants who are uncomfortable with uncertainty may need additional counseling, or they may not want to receive the information at all. Participants may also choose to have results disclosed to their primary care physician, who could help them understand the information.
An additional safeguard would be to include some kind of follow-up when full disclosure is planned, to make sure that participants have not been harmed by the receipt of uncertain information. Follow-up could take place several weeks after disclosure, to give participants a chance to assimilate the information. Investigators could ask participants whether they have any questions or concerns about their results, how they have reacted to them, and whether they would like additional counseling. Information obtained during follow-up could help to mitigate harms to participants and prove useful in revising the study design or in developing future studies. Some sort of follow-up is standard procedure in many clinical trials, since investigators often need to know how participants are responding to treatment after the trial is over, whether they are having any adverse reactions to medications, and whether they have additional questions (Gallin, 2007).
Another safeguard would be to form a community advisory board to provide advice concerning study design and implementation (Brody et al., 2007; Brown et al., 2010). The advisory board would include representatives from the relevant community, who would have a good understanding of the community and its needs, concerns, and vulnerabilities. The board could provide investigators with advice about the types of information that members of the community are interested in, whether the option of full disclosure is appropriate, and how disclosure should be handled. Community advisory boards are becoming an essential component of research in which investigators work closely with an identifiable community (O'Fallon and Dearry, 2002), but they might not be appropriate in large epidemiological studies in which there is no identifiable community.
AN OBJECTION
One objection to implementing these safeguards for the full disclosure of individualized results is that doing so will increase the costs of research. Additional funding and resources will be needed for validation of testing procedures, counseling and advice, assessment of decision-making capacity, follow-up, and support for community advisory boards. One could argue that these safeguards will drive up the costs of research unnecessarily and may interfere with the advancement of knowledge. In some cases, costs may even prevent some studies from being conducted. It would be much cheaper, one might argue, to disclose no individualized results or to disclose only clinically significant ones. On this view, the benefits of full disclosure do not justify the added costs and the loss of social benefits that would result from these restrictions on scientific research (Affleck, 2009).
While cost is an important utilitarian consideration when formulating research policies, it should not be the dominant consideration. There are many ways to save money in research that would be regarded as egregiously unethical. For example, informed consent drives up the costs of research, but everyone agrees that it is essential to the ethical conduct of research with human participants and should not be eliminated just to save money. Data safety monitoring boards increase the costs of clinical trials, but few people would say that we should forego them to save money. Cost is but one of several factors, including risks, benefits, autonomy, and justice, that should be weighed and considered when designing and implementing research (Emanuel et al., 2000; Gallin, 2007). Even though I do not think that cost should stand in the way of returning individualized research results, I acknowledge that it is an important concern, and more empirical research on the costs of returning individualized research results is needed.
CONCLUSION
Assessing and managing the risks to participants is a central point of contention in the debate about disclosing individualized research results. The risks of disclosing clinically insignificant research results are currently not quantifiable, due to a lack of empirical data. The precautionary principle provides some insight into this debate because it applies to decision-making concerning risks that are plausible but not quantifiable. A precautionary approach would favor full disclosure of individualized results with appropriate safeguards to prevent, minimize, or mitigate risks to participants, such as: validating testing methods; informing participants about their options for receiving test results and the potential benefits and risks of receiving results; assessing participants' ability to handle uncertain information; providing counseling and advice to participants; following up with individuals who receive test results; and forming community advisory boards to help investigators deal with disclosure issues. Additional studies should be conducted to ascertain the risks, benefits, and costs of disclosing individualized research results.
ACKNOWLEDGMENTS
This article is the work product of an employee or group of employees of the National Institute of Environmental Health Sciences (NIEHS) and National Institutes of Health (NIH); however, the statements, opinions, or conclusions contained therein do not necessarily represent the statements, opinions, or conclusions of NIEHS, NIH, or the United States government. I would like to thank Bruce Androphy and William Schrader for helpful comments.
REFERENCES
- Affleck P. Is it ethical to deny genetic research participants individualized results? Journal of Medical Ethics. 2009;35:209–213. doi: 10.1136/jme.2007.024034.
- Appelbaum P, Roth L, Lidz C, Benson P, Winslade W. False hopes and best data: Consent to research and the therapeutic misconception. Hastings Center Report. 1987;17(2):20–24.
- Beauchamp T, Childress J. Principles of Biomedical Ethics. 6th ed. Oxford University Press; New York: 2008.
- Beskow L, Burke W. Offering individual genetic research results: Context matters. Science Translational Medicine. 2010;2(38):38cm20. doi: 10.1126/scitranslmed.3000952.
- Beskow L, Smolek S. Prospective biorepository participants' perspectives on access to research results. Journal of Empirical Research on Human Research Ethics. 2009;4:99–111. doi: 10.1525/jer.2009.4.3.99.
- Brody J, Morello-Frosch R, Brown P, Rudel R, Altman R, Frye M, Osimo C, Pérez C, Seryak L. Improving disclosure and consent: “Is it safe?”: New ethics for reporting personal exposures to environmental chemicals. American Journal of Public Health. 2007;97:1547–1554. doi: 10.2105/AJPH.2006.094813.
- Brown P, Morello-Frosch R, Brody J, Altman R, Rudel R, Senier L, Pérez C, Simpson R. Institutional review board challenges related to community-based participatory research on human exposure to environmental toxins: A case study. Environmental Health. 2010;9:39. doi: 10.1186/1476-069X-9-39.
- Buchanan A, Brock D. Deciding for Others. Cambridge University Press; Cambridge: 1990.
- Clayton E, Ross L. Implications of disclosing individual results of clinical research. Journal of the American Medical Association. 2006;295:37. doi: 10.1001/jama.295.1.37-a.
- Cranor C. Learning from the law to address uncertainty in the precautionary principle. Science and Engineering Ethics. 2001;7:313–326. doi: 10.1007/s11948-001-0056-0.
- Dolan M, Rowley J. The precautionary principle in the context of mobile phone and base station radiofrequency exposures. Environmental Health Perspectives. 2009;117:1329–1332. doi: 10.1289/ehp.0900727.
- Dressler L. Disclosure of research results from cancer genomic studies: State of the science. Clinical Cancer Research. 2009;15:4270–4276. doi: 10.1158/1078-0432.CCR-08-3067.
- Emanuel E, Wendler D, Grady C. What makes clinical research ethical? Journal of the American Medical Association. 2000;283:2701–2711. doi: 10.1001/jama.283.20.2701.
- European Commission. Communication from the Commission on the Precautionary Principle. 2000. Available at: http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:52000DC0001:EN:HTML. Accessed July 29, 2011.
- Fabsitz R, McGuire A, Sharp R, Puggal M, Beskow L, Biesecker L, Bookman E, Burke W, Burchard E, Church G, Clayton E, Eckfeldt J, Fernandez C, Fisher R, Fullerton S, Gabriel S, Gachupin F, James C, Jarvik G, Kittles R, Leib J, O'Donnell C, O'Rourke P, Rodriguez L, Schully S, Shuldiner A, Sze R, Thakuria J, Wolf S, Burke G. Ethical and practical guidelines for reporting genetic research results to study participants: Updated guidelines from a National Heart, Lung, and Blood Institute working group. Circulation: Cardiovascular Genetics. 2010;3:574–580. doi: 10.1161/CIRCGENETICS.110.958827.
- Fernandez C, Kodish E, Weijer C. Informing study participants of research results: An ethical imperative. IRB. 2003;25(3):12–19.
- Fernandez C, Gao J, Strahlendorf C, Moghrabi A, Pentz R, Barfield R, Baker J, Santor D, Weijer C, Kodish E. Providing research results to participants: Attitudes and needs of adolescents and parents of children with cancer. Journal of Clinical Oncology. 2009;27:878–883. doi: 10.1200/JCO.2008.18.5223.
- Gallin J. Principles and Practice of Clinical Research. 2nd ed. Academic Press; Burlington, MA: 2007.
- Goklany I. The Precautionary Principle. Cato Institute; Washington, DC: 2001.
- Hernick A, Brown M, Pinney S, Biro F, Ball K, Bornschein R. Sharing unexpected biomarker results with study participants. Environmental Health Perspectives. 2011;119:1–5. doi: 10.1289/ehp.1001988.
- Holtzman N. Promoting safe and effective genetic tests in the United States: Work of the task force on genetic testing. Clinical Chemistry. 1999;45:732–738.
- Hughes J. How not to criticize the precautionary principle. Journal of Medicine and Philosophy. 2006;31:447–464. doi: 10.1080/03605310600912642.
- John S. How to take deontological concerns seriously in risk-cost-benefit analysis: A re-interpretation of the precautionary principle. Journal of Medical Ethics. 2007;33:221–224. doi: 10.1136/jme.2005.015677.
- Knoppers B, Joly Y, Simard J, Durocher F. The emergence of an ethical duty to disclose genetic research results: International perspectives. European Journal of Human Genetics. 2006;14(11):1170–1178. doi: 10.1038/sj.ejhg.5201690.
- Kriebel D, Tickner J, Epstein P, Lemons J, Levins R, Loechler E, Quinn M, Rudel R, Schettler T, Stoto M. The precautionary principle in environmental science. Environmental Health Perspectives. 2001;109:871–876. doi: 10.1289/ehp.01109871.
- Kuhlau F, Höglund A, Evers K, Eriksson S. A precautionary principle for dual use research in the life sciences. Bioethics. 2011;25:1–8. doi: 10.1111/j.1467-8519.2009.01740.x.
- Lidz C, Appelbaum P, Grisso T, Renaud M. Therapeutic misconception and the appreciation of risks in clinical trials. Social Science and Medicine. 2004;58:1689–1697. doi: 10.1016/S0277-9536(03)00338-1.
- Mattsson N, Brax D, Zetterberg H. To know or not to know: Ethical issues related to early diagnosis of Alzheimer's disease. International Journal of Alzheimer's Disease. 2010;2010:841941. doi: 10.4061/2010/841941.
- National Bioethics Advisory Commission. Research Involving Human Biological Materials: Ethical Issues and Policy Guidance. National Bioethics Advisory Commission; Washington, DC: 1999. Available at: http://bioethics.georgetown.edu/nbac/hbm.pdf. Accessed July 29, 2011.
- National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Belmont Report. Department of Health, Education, and Welfare; Washington, DC: 1979.
- O'Fallon L, Dearry A. Community-based participatory research as a tool to advance environmental health sciences. Environmental Health Perspectives. 2002;110(suppl 2):155–159. doi: 10.1289/ehp.02110s2155.
- Partridge A, Burstein H, Gelman R, Marcom P, Winer E. Do patients participating in clinical trials want to know study results? Journal of the National Cancer Institute. 2003;95:491–492. doi: 10.1093/jnci/95.6.491.
- Peterson M. The precautionary principle is incoherent. Risk Analysis. 2006;26:595–601. doi: 10.1111/j.1539-6924.2006.00781.x.
- Ravitsky V, Wilfond B. Disclosing individual results to research participants. American Journal of Bioethics. 2006;6:8–17. doi: 10.1080/15265160600934772.
- Renegar G, Webster C, Stuerzebecher S, Harty L, Ide S, Balkite B, Rogalski-Salter T, Cohen N, Spear B, Barnes D, Brazell C. Returning genetic research results to individuals: Points-to-consider. Bioethics. 2006;20:24–36. doi: 10.1111/j.1467-8519.2006.00473.x.
- Resnik D. Is the precautionary principle unscientific? Studies in the History and Philosophy of Biology and the Biomedical Sciences. 2003;34:329–344.
- Resnik D. The precautionary principle and medical decision making. Journal of Medicine and Philosophy. 2004;29:281–299. doi: 10.1080/03605310490500509.
- Resnik D. Environmental health research and the observer's dilemma. Environmental Health Perspectives. 2009;117:1191–1194. doi: 10.1289/ehp.0900861.
- Sandin P. The precautionary principle and the concept of precaution. Environmental Values. 2004;13:461–475.
- Sandin P, Peterson M, Hansson S, Rudén C, Juthe A. Five charges against the precautionary principle. Journal of Risk Research. 2002;5:287–299.
- Shalowitz D, Miller F. Disclosing individual results of clinical research: Implications of respect for participants. Journal of the American Medical Association. 2005;294:737–740. doi: 10.1001/jama.294.6.737.
- Shalowitz D, Miller F. Communicating the results of clinical research to participants: Attitudes, practices, and future directions. PLoS Medicine. 2008a;5:e91. doi: 10.1371/journal.pmed.0050091.
- Shalowitz D, Miller F. The search for clarity in communicating research results to study participants. Journal of Medical Ethics. 2008b;34:e17. doi: 10.1136/jme.2008.025122.
- Shamoo A, Resnik D. Responsible Conduct of Research. 2nd ed. Oxford University Press; New York: 2009.
- Sunstein C. Laws of Fear: Beyond the Precautionary Principle. Cambridge University Press; Cambridge: 2005.
- Tait J. More Faust than Frankenstein: The European debate about the precautionary principle and risk regulation for genetically modified crops. Journal of Risk Research. 2001;4:175–189.
- United Nations. Rio Declaration on Environment and Development. 1992. Available at: http://www.un.org/documents/ga/conf151/aconf15126-1annex1.htm. Accessed July 29, 2011.
- Weed D. Precaution, prevention, and public health ethics. Journal of Medicine and Philosophy. 2004;29:313–332. doi: 10.1080/03605310490500527.
- Wendler D, Emanuel E. The debate over research on stored biological samples: What do sources think? Archives of Internal Medicine. 2002;162:1457–1462. doi: 10.1001/archinte.162.13.1457.
- Wilson S, Baker E, Leonard A, Eckman M, Lanphear B. Understanding preferences for disclosure of individual biomarker results among participants in a longitudinal birth cohort. Journal of Medical Ethics. 2010;36:736–740. doi: 10.1136/jme.2010.036517.
- Wolf S, Lawrenz F, Nelson C, Kahn J, Cho M, Clayton E, Fletcher J, Georgieff M, Hammerschmidt D, Hudson K, Illes J, Kapur V, Keane M, Koenig B, Leroy B, McFarland E, Paradise J, Parker L, Terry S, Van Ness B, Wilfond B. Managing incidental findings in human subjects research: Analysis and recommendations. Journal of Law, Medicine & Ethics. 2008;36:219–248. doi: 10.1111/j.1748-720X.2008.00266.x.