Published in final edited form as: IRB. 2012 May-Jun;34(3):17–19.

Is HIPAA Enough? Informational Risk, Institutional Review, and Autonomy in the Proposed Changes to the Common Rule

Megan Allyse 1, Katrina Karkazis 2, Sandra Soo-Jin Lee 2, Sara L Tobin 2, Henry T Greely 3, Mildred K Cho 4, David Magnus 5

Abstract

In 2011, the Department of Health and Human Services proposed changes to the regulations that govern the protection of human subjects in federally funded research. The proposed changes would modify the criteria for minimal risk research and remove the requirement of institutional review for certain categories of non-invasive research. All studies would instead be required to comply with the privacy protections established by the Health Insurance Portability and Accountability Act (HIPAA). We argue that relying on HIPAA to shield participants from participation-related risks in non-invasive research is insufficient to protect the autonomy and psychological health of potential research participants. Instead, we suggest a streamlined review format for these categories of research.

Keywords: Common Rule, IRB, Consent, Participatory Risk


In 2011, the US Department of Health and Human Services (HHS) proposed changes to the Common Rule, which governs the protection of research participants in all federally funded research. These changes represent a valuable step in updating human subjects research protections to reflect challenges that have arisen over the last 30 years (1). A key element of the proposed changes is an expansion of the privacy protections initiated by the Health Insurance Portability and Accountability Act (HIPAA) so that these protections would serve as the default data protection policy for all research and clinical data. Standardizing privacy protection is, on the face of it, a rational and efficient change; as it stands, multiple standards for data protection vary between clinical and research settings and generate additional inefficiency and friction along the already blurred border between research and treatment. Efforts to streamline the research review process by focusing institutional review on the riskiest research, rather than expending resources on low or minimal risk studies, are necessary to reduce unnecessary regulatory burden. However, it is dangerous to conceptualize risk as restricted to the economic or social consequences of having one’s private data made public, or even to the breach of privacy that results from the release of health-related information alone. We argue that there are negative consequences to adopting HIPAA’s narrow focus on informational, rather than participatory, risk to research participants.

HIPAA, as a clinical standard, conceives of risk as stemming exclusively from the inadvertent or unwilling release of a patient’s protected health information. As long as access to data designated as private is restricted to those who have been granted permission, the assumption is that no risk or harm can accrue to the individual. The idea that risk and harm to individuals are limited to the release of information is reflected in the proposed changes to the Common Rule. For example, the proposed policies would remove the need for ongoing consent from research participants as long as their personal information is maintained securely and not returned to them. Thus, samples collected for one study would be available for use in any other study, provided blanket consent had been obtained.

A second example of this focus on informational risk is the proposed change to the oversight of social and behavioral research. The proposed changes would remove the need for institutional review of studies using specific social science methodologies, even if the information collected is potentially damaging to the individual and is stored in an identifiable way. The implication is that certain behavioral research methodologies, including surveys, interviews, and focus groups, cannot generate risk as long as the participants are competent adults. Any informational risk that may result from such studies is assumed to be covered by compliance with HIPAA standards.

It may be true that the primary concern of most individuals who participate in research - particularly when that participation involves the collection and storage of biospecimens - is the avoidance of stigma or discrimination arising from the release of incidental information about their health status or that of their family members. But this is not the only concern. As the recent Havasupai Tribe v. Arizona Bd. of Regents case demonstrated, participants have serious concerns about the use of their samples that are not limited to the release of identifiable information (2). In that case, the Havasupai Tribe sued the Arizona Board of Regents for permitting the use of individual tribal members’ stored genetic samples in studies of schizophrenia and ancestral migration. The Havasupai were concerned not just that their tribe may have been identifiable from supposedly anonymized samples and data but that their samples had been used, without their consent, for research that ran counter to important cultural and religious tribal values (3). Likewise, Beleno v. Texas Department of State Health Services (4), as well as similar cases in other states, offers a cogent example of samples collected for clinical purposes being used for research purposes without consent. Several parents of babies whose routinely collected newborn bloodspots had been used for research sued because they felt that this practice violated their right to control the use of their children’s biospecimens (5).

The proposed changes do not address these problems. Instead, they suggest that the acquisition of blanket consent at the time of collection would permit unlimited use of samples and data for all possible projects. But the effectiveness of blanket consent is contested (6–8). As with the Havasupai Tribe, many participants give samples to a specific research project and assume that the use of their samples will be limited to that project. Unless they have sophisticated knowledge of scientific research, they will be unable to conceive of every possible use of their samples and cannot give informed consideration to whether they are willing to donate them. At least on a philosophical level, it is possible to damage an individual’s autonomy even if they are unaware that the damage has been done, and doing so runs counter to ethical principles of respect.

A further unintended consequence of this change is that removing the requirement of review from studies that do not intend to return results creates a strong disincentive to return results to participants. If researchers are given a choice between designing a protocol that returns results and one that is excused from the protracted review process, it seems clear where the incentives lie. In the long term, one can imagine a situation in which a majority of genomic research projects return no results to their participants, even when those results concern life-threatening but actionable conditions. Creating such incentives runs contrary to the emerging consensus in the research and policy communities; several groups have concluded that researchers may have obligations to return at least some results, especially those that may have direct health benefits (9,10).

Restricting our conception of risk to the informational is also problematic in the context of social and behavioral research. Under the proposed changes, provided social and behavioral research is conducted with the participation of ‘competent adults’ rather than vulnerable populations, researchers could simply register their studies in an existing database rather than undergo review by an institutional review board (IRB). These changes are based on the idea that such research uses methods that pose no physical risk to participants. Most IRBs, particularly in medical research institutions, are designed to assess the risk/benefit ratio of physically invasive medical interventions, which is why review can be burdensome for certain social and behavioral studies. In social and behavioral research, however, risks are more likely to be tied to the content or structure of the research, which may involve deception or public observation. For instance, the well-known Tearoom Trade study, in which a researcher posed as a member of an underground homosexual community in order to observe members’ behavior, is considered highly controversial despite the fact that the behavior in question took place in a ‘public’ place and no identifying information about the participants was revealed (11). Even though the identities of the individuals observed in the study were not reported, it is questionable whether respect for their privacy was upheld. Under the new regulations, it seems possible that a study like the Tearoom Trade study would not even undergo review.

One response to this problem would be to insist that all social and behavioral research obtain first-person consent from participants. While this would address the use of face-to-face deception in studies using interviews and surveys, it would curtail public observation research (12), for which requesting consent is either impossible or creates an insurmountable observation bias. Instead, we recommend that all studies meeting the criteria for the new minimal risk category undergo review and validation by an appropriately trained IRB staff member to ensure that the criteria for minimal risk are met. This would include verifying that the consent documents governing relevant samples are adequate to cover the proposed research.

For social and behavioral research, validation should cover the proposed content of the research and the recruitment methodology. Studies that involve the use of deception or discussion of psychologically disruptive topics – including the realization of adverse health risk or status, the experience of significant trauma (where trauma is defined as the onset of severe injury or disability, the death of a family member, or interpersonal violence or abuse), or the experience of severe social stigmatization, persecution, or discrimination – should be referred to the IRB for review. This approach would require training a small number of staff members to recognize and evaluate participatory risk. The far more resource-intensive alternative is to train thousands of disincentivized researchers to recognize accurately when their proposed research constitutes minimal risk.

In general, we believe that the changes proposed by HHS represent a step forward for human subjects research review. Several provisions will provide highly desirable streamlining of the review process and remove unnecessary barriers to the efficient conduct of research. However, it is necessary to ensure that the push for efficiency does not supersede the need to respect the autonomy of participants.

Contributor Information

Megan Allyse, Stanford Center for the Integration of Research on Genetics and Ethics.

Henry T. Greely, Stanford Law School.

Mildred K. Cho, Stanford Medical School.

David Magnus, Department of Pediatrics, Stanford Medical School, Stanford University, Stanford, CA.

References

1. Department of Health and Human Services. Human subjects research protections: Enhancing protections for research subjects and reducing burden, delay, and ambiguity for investigators. 2011.
2. Havasupai Tribe v. Arizona Bd. of Regents, 204 P.3d 1063 (Ariz. Ct. App. 2008).
3. Drabiak-Syed K. Lessons from Havasupai Tribe v. Arizona State University Board of Regents: Recognizing group, cultural, and dignitary harms as legitimate risks warranting integration into research practice. J Health & Biomed L 2010;6:175–391.
4. Beleno v. Tex. Dept. of State Health Servs., No. SA-09-CA-188-FB (W.D. Tex. 2009).
5. Couzin-Frankel J. Science gold mine, ethical minefield. Science 2009;324(5924):166.
6. Hansson MG, Dillner J, Bartram CR, Carlson JA, Helgesson G. Should donors be allowed to give broad consent to future biobank research? The Lancet Oncology 2006;7(3):266–269.
7. Hofmann B. Broadening consent—and diluting ethics? Journal of Medical Ethics 2009;35(2):125.
8. Caulfield T. Biobanks and blanket consent: The proper place of the public good and public perception rationales. King’s Law Journal 2007;18(2):209–226.
9. Fabsitz RR, McGuire A, Sharp RR, Puggal M, Beskow LM, Biesecker LG, et al. Ethical and practical guidelines for reporting genetic research results to study participants: Updated guidelines from a National Heart, Lung, and Blood Institute working group. Circ Cardiovasc Genet 2010;3(6):574–580.
10. Wolf SM, Lawrenz FP, Nelson CA, Kahn JP, Cho MK, Clayton EW, et al. Managing incidental findings in human subjects research: Analysis and recommendations. J Law Med Ethics 2008;36(2):219–248.
11. Warwick DP. Tearoom trade: Means and ends in social research. Hastings Center Studies 1973;1(1):27–38.
12. Ubel PA, Zell MM, Miller DJ, Fischer GS, Peters-Stefani D, Arnold RM. Elevator talk: Observational study of inappropriate comments in a public space. Am J Med 1995;99(2):190–194.
