American Journal of Public Health. 2012 Aug;102(8):1447–1450. doi: 10.2105/AJPH.2012.300661

Beyond Bioethics: Reckoning With the Public Health Paradigm

Amy L. Fairchild and David Merritt Johns
PMCID: PMC3464836  PMID: 22698051

Abstract

In the wake of scandal over troubling research abuses, the 1970s witnessed the birth of a new system of ethical oversight. The bioethics framework, with its emphasis on autonomy, assumed a commanding role in debates regarding how to weigh the needs of society against the rights of individuals.

Yet the history of resistance to oversight underscores that some domains of science hewed to a different paradigm of accountability—one that elevated the common good over individual rights.

Federal officials have now proposed to dramatically limit the reach of ethical oversight. The Institute of Medicine has called for a rollback of the federal privacy rule. The changing emphasis makes it imperative to grapple with the history of the public interest paradigm.


Only a few years ago, the business of ethical review seemed a juggernaut destined only to expand, inspiring increasingly bitter remonstrations about the “absurd demands” federal research regulation placed on scientists.1 It is remarkable, then, that the Department of Health and Human Services (DHHS) now stands poised to significantly scale back and streamline many institutional oversight procedures.2 In October 2011, DHHS closed the public comment period on a proposal that would expand the categories of social and behavioral research that can be “excused” from institutional review board approval, allow a single institutional review board to oversee multisite studies, and generally adjust the institutional review board system to avoid cases in which low-risk studies are subjected to high levels of scrutiny. The pending overhaul comes in the wake of Institute of Medicine recommendations in 2009 to exempt all research from the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule and absolve all “information-based” research from informed consent requirements.3 Taken together, these proposals suggest the regulatory pendulum is swinging toward permissiveness.

Much debate around the DHHS proposal has centered on its identification of the HIPAA Privacy Rule as a potential framework for ensuring research data protection and security—an idea that has caused some researchers to fear that the new regime, designed to simplify the review process, will actually lead to even more red tape.4 Yet there are larger issues at stake than simply ensuring that important research can proceed without being slowed by administrative impediments. The prospect of change creates space to reckon with other ethical traditions and paradigms of accountability that might inform research regulation besides the reigning bioethical regime, which emphasizes autonomy and privacy. One such tradition, distinctly opposed to the current bioethics approach, stands out in the long history of the debate over research regulation and privacy protection.

For more than a decade, professionals in history, journalism, the social sciences, and public health, fields that play essential roles in protecting the public from threats of disease and corporate and government misconduct, have argued that the purpose of scientific inquiry is to benefit society as a whole, even sometimes at the expense of individual interests. In debates over the mandate of institutional review boards, researchers in these fields have striven not simply to emancipate important nonmedical research from unnecessary fetters, but to underline an inherent tension between deeply conflicting paradigms of accountability in which very different conceptions of who and what require protection are at stake. We lay out 4 brief histories that bear on nonclinical research regulation and the arguments against it, focusing on fields that most forcefully resisted regulation by articulating a fundamentally different vision of accountability. Before turning to those specific cases, we sketch the broad backdrop against which the debates have unfolded.

Following the Nuremberg Code of 1947, the principle of informed consent became the cardinal ethic for scientific research involving human participants. US federal regulations to prevent harm to individual research participants first arose in response to a series of exposés on shocking medical experiments in newspapers and medical journals, culminating in the 1972 revelations about the 40-year-long Tuskegee Syphilis Study. A new system of institutionally based ethical review intended to ensure the safety and voluntary consent of research participants was created.

Initially, epidemiologists and social scientists grew alarmed that the new, individual rights–based bioethical framework that provided the moral architecture of the regulatory schema might make their research more difficult or even impossible to conduct. Some closure on these issues was reached in 1981, when social science and epidemiological research was explicitly exempted from institutional review board review if the risk to participants and their privacy was minimal. But before the 20th century closed, the promise of streamlined oversight had, for many scholars, been transformed into a stultifying specter hovering over all scientific inquiry, no longer constrained within the boundaries of medicine.

In the 1990s, activities that had long remained relatively free from ethical oversight began to be subjected to scrutiny following a spate of research-related incidents at high-profile institutions,5 including the death of an 18-year-old experimental gene therapy patient at the University of Pennsylvania in 1999.6 Federal regulators responded by cracking down, and risk-averse institutions began to apply conventional bioethical human participant protections with new muscle, suppressing alternative discourses, such as those championing the common good, that struggled to resist such “ethical imperialism.”7 Yet even as the grip of bioethics as a regulatory regime tightened, it quickly became apparent that it was the wrong framework of accountability for some domains of inquiry.

It was within this context that debates over research in different disciplines unfolded. For example, when in the early 1990s regulators sought to treat public health surveillance—the often compulsory reporting of the names of individuals diagnosed with certain infectious and chronic diseases by physicians and laboratories to city and state health departments—as research requiring federal ethical oversight, health officials countered that effective surveillance depended on universal reporting of names; it could not depend on informed consent. Such data collection was required by law, and the law reflected the will of the people.8 This was an old argument. As early as 1891, in the face of physician resistance to reporting, health officials argued that surveillance was based on a “contract.”9 It amounted to a public duty in which “the people had consented” in the name of the common good to what might otherwise seem like an “arbitrary” or even “authoritarian” regime.10 Surveillance could demand limitations not only of privacy but even of liberty if the disease in question required mandatory treatment or isolation.11 At stake were questions of what individuals in a civil society owed to one another.12

Practitioners of quality assurance in health care, which involves the assessment of medical records to determine the adequacy of care, voiced similar arguments half a century later. The field of quality assurance originated in the 1960s as a response to poor clinical outcomes,13 and became one of the hallmarks of the provision of health care services under Medicare and Medicaid—two of the signature social welfare programs of President Lyndon Baines Johnson’s Great Society.14 By the 1990s, “quality improvement,” which originated as a proactive technique for regulating and improving the delivery of medical care, had become a “positive” watchword of a new era.15

Beginning in 2000, after federal regulators received a complaint suggesting that a quality improvement study conducted among dialysis patients should have been classified as research,16 demand for bioethical review of all quality assurance activities escalated.17 But when federal panels began to develop recommendations for determining when quality assurance required ethical oversight,18 administrators at the Centers for Medicare and Medicaid Services resisted.19 “Fiduciary responsibility” served as their lodestar.20 At the heart of their claim, justified by the need to conserve taxpayer dollars while improving clinical outcomes, was an ethos similar to that of public health: those charged with administering the social welfare system bore a public duty to protect the interests of populations, not the rights of individuals.

Giving full voice to this public-spirited paradigm were historians, journalists, and others engaged in social science inquiry, who argued that the purpose of research, at its best, was to illuminate critical social and political issues. The primary obligation was ensuring the public’s right to know. “Our job is to hold people accountable,” argued Columbia University historian Alice Kessler-Harris.21 Her point was that the primary commitment of historians was to the unfettered development of knowledge through historical interpretation—not to the individuals who became the subjects of inquiry. Historians viewed their research not through the lens of bioethics but as serving the “common good” and “vital to democratic debate and civic life.”22 They rested their claims on the centrality of freedom of speech.23 History’s villains could not be allowed to hide behind the regulations governing the protection of human participants.

History also served the powerless. So much of social science research had sought to ensure that the perspectives of the socially and politically marginal—former slaves, women, the poor—came to light. Exposure in this context took on a different hue. Whether acting as watchdogs or giving voice to the voiceless, researchers drew on the Constitution for authority. “Simply put,” commented the University at Buffalo’s Michael Frisch, “the core purpose of oral history is to put named people into the historical record—not mask or anonymize them.”24

Debate in the 1990s over medical privacy also provided occasion for making the case for the public good. President Bill Clinton’s 1993 proposal for universal health coverage, which raised the prospect of the centralized, computer-based management of medical care, sparked early calls to protect medical privacy. When the Clinton proposal failed, privacy concerns were taken up by Congress as part of HIPAA. From the outset, Congress envisioned shielding public health data from privacy regulations: health departments would not need consent to acquire and share personally identifiable information for the purpose of safeguarding the public’s health. As enacted, the DHHS Privacy Rule (45 CFR Part 160 and Subparts A and E of Part 164) contained what is known as a public health “carve-out.” Privacy advocates did not resist this exemption, and disease advocacy groups strongly supported it.25 The March of Dimes, for instance, noted that

While the individual has an interest in maintaining the privacy of his or her health information, public health authorities have an interest in the overall health and well being of the entire population.26

Thus, although the 2003 HIPAA Privacy Rule significantly enhanced medical privacy, it also formalized the notion that privacy rights are not always sacrosanct.

As we debate the protections that must be in place for research participants today, it is essential to acknowledge the difference between social science inquiry and research performed in clinical settings. It is beyond question that research with the potential to harm participants must be monitored by third parties. But blanket regulations designed to protect powerless clinical research participants may also inadvertently protect the powerful from necessary social and political scrutiny on the part of investigators in fields that define their mission in terms of the common good. This is not to say that a claim for the common good is always sufficient to justify a release from regulation.27 Our point, rather, is to emphasize that there is an important ethical framework, with a deep and varied history, that has justified affirmative duties to limit rights in the name of public interests.

We agree that the time has come to recognize that social inquiry in areas like history, public health, and quality assurance requires an alternative framework of analysis. Yet we cannot let an obsession with rules lead us to overlook the fact that scientific research is guided by a number of different ethical frameworks that do not always agree. Bioethics asserts that individual rights such as privacy require protection; many other frameworks demand that we look past the individual and prioritize the common good. By embracing a new approach for research in public interest domains, the proposed changes implicitly acknowledge this tension. Indeed, there will always be issues where distinct paradigms hewing to different priorities collide. So although the ongoing debate over changes to ethical oversight procedures may resolve certain conflicts, it also sets the stage for the enduring contest between different paradigms of accountability.

Human Participant Protection

No protocol approval was needed for this study because no human participants were involved.

Endnotes

1. P. Cohen, “As Ethics Panels Expand, No Research Field Is Exempt,” New York Times, February 28, 2007, http://www.nytimes.com/2007/02/28/arts/28board.html?pagewanted=all (accessed May 2, 2012).
2. Department of Health and Human Services, Food and Drug Administration, Proposed Rules, FR Doc No 2011–18792, Federal Register 76, no. 143 (2011): 44512–44531, http://www.gpo.gov/fdsys/pkg/FR-2011-07-26/html/2011-18792.htm (accessed November 8, 2011).
3. Institute of Medicine, Beyond the Privacy Rule: Enhancing Privacy, Improving Health Through Research (Washington, DC: National Academies Press, 2009).
4. P. Cohen, “Questioning Privacy Protections in Research,” New York Times, October 23, 2011, http://www.nytimes.com/2011/10/24/arts/rules-meant-to-protect-human-research-subjects-cause-concern.html?pagewanted=all (accessed May 2, 2012).
5. Stuart Plattner, “Comment on IRB Regulation of Ethnographic Research,” American Ethnologist 33, no. 4 (2006): 526.
6. S. G. Stolberg, “Institute Restricted After Gene Therapy Death,” New York Times, May 25, 2000, http://partners.nytimes.com/library/national/science/health/052500hth-gene-therapy.html (accessed May 2, 2012).
7. Z. M. Schrag, Ethical Imperialism: Institutional Review Boards and the Social Sciences, 1965–2009 (Baltimore: Johns Hopkins University Press, 2010).
8. A. L. Fairchild, “Dealing With Humpty Dumpty: Research, Practice, and the Ethics of Public Health Surveillance,” Journal of Law, Medicine, and Ethics 31, no. 4 (2003): 615–623.
9. L. F. Flick, “The Duty of the Government in the Prevention of Tuberculosis,” Journal of the American Medical Association 17 (1891): 289 (see also p. 290). See also “Compulsory Notification of Tuberculosis,” Journal of the American Medical Association 33 (1899): 742.
10. H. M. Biggs, Preventive Medicine in the City of New-York: The Address in Public Medicine Delivered at the 65th Annual Meeting of the British Medical Association, in Montreal, Canada, September 1897 (New York, NY: Health Department), 28.
11. Amy L. Fairchild, Ronald Bayer, and James Colgrove, with Daniel Wolfe, Searching Eyes: Privacy, the State, and Disease Surveillance in America (Berkeley: University of California Press, 2007).
12. W. J. Novak, The People’s Welfare: Law and Regulation in Nineteenth-Century America (Chapel Hill: University of North Carolina Press, 1996), 9.
13. T. A. Swift, C. Humphrey, and V. Gor, “Great Expectations? The Dubious Financial Legacy of Quality Audits,” British Journal of Management 11 (2000): 31–45.
14. R. H. Egdahl and P. M. Gertman, Quality Assurance in Health Care (Germantown, MD: Aspen Systems Corporation, 1976).
15. A. E. Shamoo, “Quality Assurance,” Quality Assurance 1, no. 1 (1991): 4–9.
16. Michael Carome, Office for Human Research Protections (OHRP), oral communication, July 29, 2005; P. M. Palevsky, M. S. Washington, J. A. Stevenson, et al., “Improving Compliance With the Dialysis Prescription as a Strategy to Increase the Delivered Dose of Hemodialysis: An ESRD Network 4 Quality Improvement Project,” Advances in Renal Replacement Therapy 7, no. 4, supplement 1 (2000): S21–S30.
17. Eran Bellin and Nancy Neveloff Dubler, “The Quality Improvement–Research Divide and the Need for External Oversight,” American Journal of Public Health 91, no. 9 (2001): 1512–1517; David Doezema and Mark Hauswald, “Quality Improvement or Research: A Distinction Without a Difference?” IRB: Ethics and Human Research 24, no. 4 (2002): 9–12.
18. D. Casarett, J. H. T. Karlawish, and J. Sugarman, “Determining When Quality Improvement Initiatives Should Be Considered Research: Proposed Criteria and Potential Implications,” Journal of the American Medical Association 283, no. 17 (2000): 2275–2280; Jeffrey Brainard, “When Is Research Really Research?” Chronicle of Higher Education, November 26, 2004, http://chronicle.com/weekly/v51/i14/14a02101.htm (accessed May 2, 2012); M. A. Baily, “Ethical Quality Improvement (QI) and IRB Review,” abstract for the American Public Health Association 133rd Annual Meeting and Exposition, November 5–9, 2005, New Orleans, LA.
19. Brainard, “When Is Research Really Research?”; Jeff Cohen, former OHRP official, oral communication, August 1, 2005.
20. Mark B. McClellan and Sean R. Tunis, “Medicare Coverage of ICDs,” New England Journal of Medicine 352, no. 3 (2005): 222–224.
21. Cohen, “Questioning Privacy Protections in Research.”
22. L. Shopes, 2004 Oral History Association Meeting, Comments for Roundtable on “Oral History Ethics,” October 3, 2004, Portland, OR; Cary Nelson, “Can E. T. Phone Home? The Brave New World of University Surveillance,” Academe 89, no. 5 (2003): 210, http://aaup.org/AAUP/pubsres/academe/2003/SO/Feat/nels.htm (accessed May 2, 2012). For analogous arguments from journalism, see Walt Harrington, “What Journalism Can Offer Ethnography,” Qualitative Inquiry 9, no. 1 (2003): 100–101; American Historical Association, “Statement on Standards of Professional Conduct,” approved by Professional Division, December 2, 2004, and adopted by Council January 6, 2005, http://www.historians.org/pubs/Free/ProfessionalStandards.cfm (accessed June 9, 2005).
23. Christopher Shea, “Don’t Talk to the Humans: The Crackdown on Social Science Research,” Lingua Franca (September 2000): 34; Nelson, “Can E. T. Phone Home?” 211, 216.
24. Michael Frisch, University at Buffalo, State University of New York, General Comment on HHS-OPHS-2011-0005-0001, November 10, 2011, http://www.regulations.gov/#!documentDetail;D=HHS-OPHS-2011-0005-0751 (accessed May 2, 2012).
25. Fairchild et al., Searching Eyes, 233–237.
26. Marin Weiss, March of Dimes, Comments on NPRM: Standards for Privacy of Individually Identifiable Health Information, Use and Disclosures for Public Health Activities, Comment #17685, February 17, 2000, as cited in Fairchild et al., Searching Eyes, 321.
27. M. A. Rothstein, “Improving Privacy in Research by Eliminating Informed Consent? IOM Report Misses the Mark,” Journal of Law, Medicine and Ethics 37, no. 3 (2009): 507–512.
