Author manuscript; available in PMC: 2013 Oct 8.
Published in final edited form as: J Empir Res Hum Res Ethics. 2013 Jul;8(3):58–65. doi: 10.1525/jer.2013.8.3.58

How IRBs View and Make Decisions About Social Risks

Robert L Klitzman 1
PMCID: PMC3792497  NIHMSID: NIHMS517822  PMID: 23933777

Abstract

Whether and how IRBs assess social risks remains unclear, with little empirical investigation. I contacted leaders of 60 IRBs and interviewed IRB leaders from 34 (response rate = 55%), as well as an additional 12 members and administrators. IRBs struggle to assess and balance social risks and benefits, varying in whether, how, and how much to do so, and in how to balance these against individual risks/benefits. Risks to a group affect individuals within it; hence, social risks can include indirect individual risks, raising ambiguities. Dilemmas emerge: e.g., how much responsibility researchers and IRBs have for addressing broader health inequities. These data, the first to examine how IRBs make decisions about social risks, reveal how IRBs face critical challenges, dilemmas, and ambiguities.

Keywords: ANPRM, justice, stigma, autonomy, vulnerability, research ethics, equipoise


Many questions that have not been examined empirically arise concerning whether IRBs assess and weigh considerations beyond those pertaining to individual risks and benefits—specifically, social risks—and if so, how. The U.S. Code of Federal Regulations (45 CFR 46 [U.S. Department of Health and Human Services (HHS)], 2005) stipulates that:

The IRB should not consider possible long-range effects of applying knowledge gained in the research (for example, the possible effects of the research on public policy) as among those research risks that fall within the purview of its responsibility.

Yet the Advance Notice of Proposed Rulemaking (ANPRM), released by the Department of Health and Human Services in July 2011 to revamp IRB regulations, raised questions about this statement. The ANPRM asked:

Do IRBs correctly interpret this provision as meaning that … it is not part of their mandate to evaluate policy issues such as how groups of persons or institutions, for example, might object to conducting a study because the possible results of the study might be disagreeable to them? If that is not how the provision is typically interpreted, is there a need to clarify its meaning? (HHS, 2011)

Broad questions thus arise concerning whether IRBs consider and weigh social risks, and if so, how, when, and to what extent.

These issues are important since IRBs have been shown to vary widely, due to a range of factors, in their reviews of protocols in multi-site studies, impeding the comparability and analysis of the data, and thus hampering scientific progress (Shah et al., 2004; Dziak et al., 2005). Previous articles have suggested that IRBs vary widely in how they interpret and apply regulations and guidelines concerning consent forms (Klitzman, 2013a) and conflicts of interest (Klitzman, 2013b), and are shaped by various factors, including the local ecologies in which they work (Klitzman, 2012a). Hence, understanding how they interpret and apply federal regulations is vital. In addition, American bioethics has been criticized more generally for focusing too much on the individual, rather than on broader social issues (Fox & Swazey, 2008).

Yet strikingly, no empirical studies appear to have examined how IRBs view, and make decisions about, potential social harms. Fleischman et al. (2011) argued that in four realms—behavioral genetics, adolescent behavior, harm reduction, and human genetic enhancement—local IRBs do at times consider social risks, though 45 CFR 46 (as quoted above) instructs these committees not to do so. IRB consideration of the social implications of research protocols, these authors conclude, “sometimes create significant delays in initiating or even prevent such research” (Fleischman et al., 2011). These scholars oppose IRBs’ consideration of these issues—since “predicting negative effects of new knowledge on populations or social policy is highly speculative and essentially political” (ibid.). Instead, Fleischman et al. (2011) suggest that national review bodies, rather than local IRBs, would be best positioned to address these issues. Yet as de Melo-Martín (2011) points out, these authors do not provide details or data concerning what would be involved in having such national committees perform this task; and these committees may not be able to resolve the dilemmas that arise any better than IRBs do.

Given the lack of empirical data concerning these issues, crucial questions remain regarding how IRBs themselves actually view and make decisions about social risks, whether they differ in doing so, and if so, how and why. Moreover, IRBs may face these issues not only in the four specific realms discussed by Fleischman et al. (2011), but more broadly in reviewing other types of studies.

In general, individuals may perceive various types of risks and benefits differently, based on a range of factors, including education and past experiences (Tversky & Kahneman, 1981; Redelmeier, Rozin, & Kahneman, 1993; Kong et al., 1986; Bergus, Levin, & Elstein, 2002; Klitzman, 2006; Klitzman, 2011a). Perceptions of risk and danger may involve subjective elements, related to cultural and individual fears, with individuals often seeking to draw boundaries between risk and safety that may not be wholly evidence-based (Douglas, 1966). But how IRB chairs, members, and staff perceive social risks, and balance these against other risks and benefits, remains unclear.

Recently, I conducted in-depth qualitative interviews with IRB chairs, administrators, and members about a series of issues, in which themes arose concerning how to assess and weigh social risks. The interviews explored how IRBs viewed and made decisions about research integrity (RI), broadly defined, which proved to be related to their views and approaches concerning a wide variety of other areas, including the ways they interpreted federal regulations (Klitzman, 2013b). Since the study used qualitative methods, it permitted in-depth exploration of these domains—including how IRBs perceive considerations beyond individual risks (i.e., social risks), which the present paper thus examines.

Methods

In brief, as described elsewhere (Klitzman, 2011a–e, 2012a–c, 2013a–b), in-depth telephone interviews of two hours each were conducted with 46 chairs, directors, administrators, and members. I contacted the leadership of 60 IRBs (every fourth one in the list of the top 240 institutions by NIH funding), and interviewed IRB leaders from 34 of these institutions, yielding a response rate of 55%. In certain cases, both a chair/director and an administrator from an institution were included (e.g., if the former thought that the latter could better provide detail about certain areas). Thus, in all, I interviewed 39 chairs/directors and administrators from these 34 institutions. To understand the impact of varying social and institutional milieus in these domains, I included a range of institutions in terms of location, size, and public/private status. I also asked every other interviewee on the list, ordered by amount of NIH funding, to disseminate information about the study to their IRB members in order to recruit one member from each IRB; seven members were thus included as well. As summarized in Table 1, the 46 interviewees comprised 28 chairs/co-chairs, 10 administrators, one director, and seven members. In all, 58.7% were male, and 93.5% were Caucasian. Interviewees were distributed across geographic regions, and across institutions by ranking in NIH funding.
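To make the sampling arithmetic concrete, the following is a minimal illustrative sketch in Python; it is not the study’s own code, and the institution names are placeholders rather than the actual NIH-funding ranking.

```python
# Illustrative only: systematic sampling of every fourth institution from a
# hypothetical ranked list of the top 240 institutions by NIH funding.
top_institutions = [f"institution_{rank:03d}" for rank in range(1, 241)]

contacted = top_institutions[::4]   # every fourth institution on the list
assert len(contacted) == 60         # 60 IRBs contacted

n_responding_institutions = 34      # institutions whose IRB leaders participated
print(f"response rate: {n_responding_institutions / len(contacted):.0%}")
# prints "response rate: 57%" (the paper reports this as 55%)
```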

TABLE 1.

Characteristics of the Sample of IRB Chairs, Members, and Administrators

Total % (N=46)
Type of IRB Staff
 Chairs/Co-Chairs 28 60.87%
 Directors 1 2.17%
 Administrators 10 21.74%
 Members 7 15.22%
Gender
 Male 27 58.70%
 Female 19 41.30%
Institution Rank
 1–50 13 28.26%
 51–100 13 28.26%
 101–150 7 15.22%
 151–200 1 2.17%
 201–250 12 26.09%
State vs. Private
 State 19 41.30%
 Private 27 58.70%
Region
 Northeast 21 45.65%
 Midwest 6 13.04%
 West 13 28.26%
 South 6 13.04%
Total # of Institutions Represented 34
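As a quick arithmetic check (illustrative code, not part of the study), the percentages in Table 1 follow directly from the raw counts over the 46 interviewees:

```python
# Recompute Table 1's "Type of IRB Staff" percentages from the raw counts.
role_counts = {
    "Chairs/Co-Chairs": 28,
    "Directors": 1,
    "Administrators": 10,
    "Members": 7,
}
total = sum(role_counts.values())
assert total == 46  # total interviewees

for role, n in role_counts.items():
    print(f"{role:<17} {n:>2}  {n / total:.2%}")
# e.g., "Chairs/Co-Chairs  28  60.87%", matching the table
```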

The study was approved by the Columbia University Department of Psychiatry Institutional Review Board. All interviewees gave informed consent. The interviews focused on participants’ views of RI and factors involved in decisions, but shed important light as well on many other, broader issues that arose concerning IRBs’ decisions, interactions, and relationships. Appendix A presents relevant portions of the semi-structured interview guide. I sought to obtain a “thick description”—to understand aspects of their decisions, lives, and social situations by trying to grasp individuals’ own experiences and language, not by imposing theoretical structures (Geertz, 1973). Interviewees discussed a broad range of types of studies that they had reviewed.

I adapted elements from grounded theory (Strauss & Corbin, 1990), engaging in techniques of “constant comparison”: analyzing data from different contexts for similarities and differences. While the interviews were being conducted, they were transcribed and underwent initial analysis, to enhance validity.

After completion of all of the interviews, a trained research assistant (RA) and I conducted additional analyses in two phases. In the first phase, we analyzed a subset of interviews independently to gauge factors that affected interviewees’ experiences, identifying sets of recurrent issues and themes to which we then gave codes (e.g., discussions about social risks). Together, we then reconciled these independently developed coding schemes into a coding manual, listing and defining the codes.

In the second phase of the study, we independently content-analyzed the interviews, examining the main subcategories, and ranges of variation in each of the core categories. We reconciled the subthemes each coder noted into a single set of “secondary” themes and an elaborated set of core categories (e.g., specific types of questions, challenges, and ambiguities concerning social risks). We then used codes and subcodes in analysis of all of the interviews, with two coders analyzing all interviews.
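Purely as a schematic of the two-phase workflow above (the actual analysis was performed by human coders, and these code names are invented for illustration), the reconciliation of independently developed coding schemes can be pictured as follows:

```python
# Schematic of reconciling two coders' independently developed codes into a
# single coding manual; the code names here are hypothetical, not the study's.
coder_1 = {"social risks", "stigma", "vulnerable groups", "informed consent"}
coder_2 = {"social risks", "group stigma", "vulnerability", "justice"}

agreed = coder_1 & coder_2       # codes both coders identified independently
to_discuss = coder_1 ^ coder_2   # codes needing reconciliation by discussion

# The reconciled manual lists and defines every code retained after discussion.
coding_manual = {code: "definition agreed upon by both coders"
                 for code in sorted(agreed | to_discuss)}

print(f"{len(agreed)} codes agreed; {len(to_discuss)} reconciled by discussion")
```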

Results

Overall, as shown in Table 2 and described more fully below, IRBs struggle to define, assess, and balance social risks, facing dilemmas—e.g., whether and how much to do so. In practice, this category often proves related to issues of stigma, vulnerability, and social inequities that can be ambiguous and difficult to assess. Risks to a group can affect individuals within the group, and social risks can thus be indirect. These issues posed challenges, and none of the interviewees appeared to feel that social risks were always easily and readily defined, interpreted, and weighed.

TABLE 2.

Dilemmas IRBs Face Concerning Social Risks

How to define “long-range” social risks:
  • How long?

    • Past conclusion of protocol?

  • Related only to policy?

    • What about increased stigma?

  • Includes long-range risks to communities?

    • What if communities are vulnerable?

    • How to define “vulnerability”?

  • What if a study may perpetuate, or continue to widen, social inequities?

Social Risks

In reviewing protocols, many IRBs consider possible long-range social risks, such as stigma, but vary in how, which, when, and to what degree. IRBs considered harms not solely to the individual but also to the group (e.g., social and psychological harms to a population—related, for instance, to stigma), and/or social vulnerabilities that affect groups as a whole (e.g., related to the benefits and burdens of research).

Just because a sample has been de-identified from an individual standpoint, doesn’t mean it has been from a racial or ethnic group standpoint. There could be harm at that level. IRB21

IRBs may at times consider stigma as a potential long-range harm. IRBs also recognize that they may differ from certain cultural groups in perceiving a study’s potential harms.

This group may have spiritual or worldview-related beliefs about that tissue that are much different than ours: we want bones of our ancestors returned to us, because they’re not merely bones. From their perspective, it’s very unpalatable that you have my blood or genes in a freezer somewhere and are keeping them. So we have to expand our vision. IRB21

Questions thus arise as to whether these concerns conflict with the regulations’ dictum not to consider “long-range” social risks. Data about a vulnerable group may be de-identified but, some IRBs fear, still cause social harm. Thus, while the regulations explicitly state that IRBs should not include long-term social harms as risks, some IRBs may do so anyway—in part because social harms may also harm individuals who are members of the affected group.

These attitudes reflect in part the case of the Havasupai Indians, who sued Arizona State University—a case of which many interviewees were aware. That well-publicized scandal entailed social harm to the tribe: researchers published papers, based on collected DNA samples, that challenged the group’s beliefs about its origins. The investigators also reported the presence of certain stigmatized conditions among the tribe without clearly informing the participants that the data would be analyzed for these purposes (Mello & Wolf, 2010). The case, which was settled out of court, hinged on several issues, including inadequate informed consent. Hence, due to fears of potential legal liability, IRBs may not follow the regulations’ provision against considering “long-term” social risks.

Questions arise here of whether increased stigma to a group, particularly a vulnerable group, in fact constitutes long-range social harm. At times, IRBs felt that it did, but questions then also arose of how to define “long-range.” The regulations do not attempt to define it, and it is unclear whether it means simply extending beyond the point at which the protocol itself ends (e.g., when grant funding has ended), as opposed to short-term (i.e., the immediate period during which subjects participate in the protocol).

Harms to Vulnerable Groups as Long-Range Social Risks?

Relatedly, IRBs often considered risks to vulnerable groups, and wrestled with dilemmas in defining and weighing risks to a population. IRBs tried to protect against vulnerability, but still often worried about how effective safeguards for vulnerable groups are, and should be. Uncertainties lingered. One social scientist and former chair, a researcher himself, felt it was important that science proceed, but other IRBs may be more cautious.

We have to weigh the vulnerability of the subjects against the pursuit of knowledge—involving HIV, drugs, suicide. We try to build the safeguards, but know that something can go wrong. IRB22

Removed from the field, IRBs often find such potential dangers hard to assess—whether these will occur, and if so, which, to what degree, and with what effects, and how to weigh these against potential benefits. Balancing possible harms to vulnerable groups against possible scientific benefits can thus be difficult.

IRBs also face questions concerning whether and to what degree to consider wider health disparities related to social risks—e.g., whether to consider possible increases in long-range social inequities as a social risk. Deciding how and to what degree to incorporate and weigh justice concerns—how far and in what ways to seek a fair distribution of benefits and burdens of research—can be difficult. IRBs generally seek to distribute the social burdens of research among population groups to avoid overly burdening or benefiting any one group; but how and to what degree exactly to do so, given prior inequities, is often unclear. These issues surface with regard to vulnerable populations—e.g., those of lower socioeconomic status.

The regulations themselves do not directly address these questions. Generally, interviewees felt that in the end, IRBs could simply not resolve the larger inequities of the health care system in the United States and the developing world—that larger health policy issues lay beyond the IRB’s scope—but that IRBs nevertheless had to take into account the contexts of existing inequalities, in which studies were often conducted. For example, protocols may propose to exclude patients without insurance, and IRBs then face questions of whether, as a result, to disapprove or modify such a study. As one researcher and chair said:

The company will provide the drugs for free, but bill the insurance company for all the doctor’s visits, the time in the hospital, the CT scans, and tests… [But] what about poor people who don’t have insurance? Here’s a potential life-saving treatment that only the rich can get. The drug companies claim that they can’t otherwise afford to conduct these studies. But many of our IRB members, especially our lay members, get upset about this. IRB12

Medicaid and Medicare may cover such treatments, but many patients may still lack any insurance or resources to cover these expenses. IRB members’ opinions about these issues may thus also differ based on their backgrounds and positions on the IRB.

IRB Responses and Solutions

IRBs seek to address these ambiguities and tensions in various ways, at times seeking compromises. IRBs confront these questions in deciding, for instance, whether and how to change a study that may exclude or unfairly burden a disadvantaged population. For example, in a study that required subjects to have a high-speed Internet connection, one IRB developed informal “rules,” permitting exclusion of subjects for a pilot study, but not for a full protocol:

We decided our rule of thumb is: it’s OK to exclude people for a six-month preliminary study. But for a Phase II program, a PI needs to get high-speed Internet to the participants—have subjects use the Internet at a clinic, or pay for it for them for two months. IRB26

These compromises can seek middle ground, rather than following a single strict interpretation of the regulations. Yet objections can arise, based on other interpretations of the regulations. The interviewee above continued,

A grant reviewer said, “You’re still excluding a whole population of people!” The PI answered: “We’re not going to be marketing this intervention to people who don’t have the technology to support it. So it doesn’t matter.” But to exclude people flies in the face of justice! IRB26

IRBs thus wrestle with whether to make exceptions, and approve studies that may perpetuate or widen social disparities—i.e., whether to see such problems as potential social risks of a study.

IRBs may allow inequalities to persist, and possibly increase, because doing otherwise would significantly burden researchers; these committees then have to decide when and to what degree to do so, given the resulting costs to researchers. For instance, tensions arise concerning how ethnically diverse a sample of subjects an IRB should require in a predominantly Caucasian region. Many IRBs will want to see a plan for minority recruitment. If the population near an institution is 98% Caucasian, an IRB could encourage collaboration with other regions. But doing so can impose costs that IRBs may recognize or weigh differently. IRBs could potentially suggest that PIs collaborate with other institutions, because of either the rarity of a disease or the lack of diversity, but such input may not be common. As one rural physician and chair said:

Someone could say: why don’t the researchers collaborate with researchers in big cities? But there would have to be a scientific basis for that. Occasionally, though, if the PI is trying to study some rare cancer, the committee says, “The tumor registry here sees one case per year. You need to find collaborators elsewhere.” But it’s usually due to rarity, not diversity. IRB14

IRBs may thus overlook justice concerns about sampling, because of both the costs to researchers and the lack of clarity in the regulations as to how much IRBs should ensure justice versus accept inequities.

Conclusions

In reviewing protocols, IRBs thus struggle with a series of dilemmas relating to social risks, and are at times unclear how to resolve them. Committees wrestle with whether, when, how, and to what degree to consider social risks, related at times to potential increases in stigma, vulnerability, and health inequities. For committees, social risks frequently prove closely related to broader issues of justice and injustice. IRBs often see social harm as exacerbating social vulnerability, and vice versa. IRBs thus face underlying tensions concerning how to define, interpret, apply, assess, and weigh these concerns and certain terms in the federal regulations (e.g., “long-range,” “effects,” “social,” “risks,” “fair distribution,” and “vulnerability”). Committees confront these issues in the contexts of specific protocols—e.g., whether IRBs, researchers, and funders should be responsible for addressing social risks, and if so, how and to what degree, and how far to seek a fair distribution of benefits and burdens.

IRBs encounter difficulties knowing how to assess and weigh these issues, which generally cannot be readily quantified, and may be much harder to measure than individual risks and benefits (which often can be quantified—e.g., a particular drug may have a 25% likelihood of eliminating a particular symptom, or causing a particular side effect).

In response to the ANPRM’s questions about how IRBs interpret language about long-range social risks, these data suggest that, at times, committees do take these risks into account because of concerns about long-term stigma, vulnerability, unfair distribution of the burdens and benefits of research, and legal liability; but that IRBs wrestle with when, how, and to what degree to do so. In part, the regulations themselves are unclear—for instance, how long or short “long-range” is, and whether “long-range effects” refers to effects only on policy, or also on social practices and attitudes, including stigma. OHRP, given its concern about these questions (as reflected in the ANPRM), should thus clarify the intent of the regulations, and seek to understand how IRBs do and should interpret and use these terms. Presumably, the regulations’ intent is that social risks should not stand in the way of science; but IRBs are mandated to uphold nonscientific concerns as well (e.g., related to stigma and vulnerability). On a theoretical level, philosophers and policymakers may see “long-range effects” and “social risks” as well-defined, readily demarcated categories. Yet in practice, these terms become closely interconnected with other social concerns—e.g., heightened stigma, vulnerability, and social inequity.

Long-range social risks can include, and/or blur into, larger questions of the vulnerabilities of populations that over time can heighten social inequities. Concerns about the vulnerabilities of populations, and about inequities, can thus prompt IRBs to consider long-term effects of research (e.g., on the perceptions of a group by itself or others). Consequently, IRBs, though instructed by the regulations not to consider long-range social risks, may nevertheless weigh these harms, in part because the scope of the regulations’ terms is not defined or demarcated. Many interviewees were also well aware of the lawsuit involving the Havasupai tribe, and institutions and IRBs may thus be concerned, too, about the reputational risks of a scandal (i.e., rather than about social risks alone, in and of themselves).

The boundaries between social and individual risks can also be murky. Increased stigma faced by a population can potentially increase the stigma faced by each individual within the group. Moreover, the potential future social effects of research (e.g., on policy) cannot easily be predicted. Research often has little impact on public policy, even when researchers wish their work to have more of an effect. Whether a protocol will in fact affect policy—and if so, how, and to what degree—is thus unclear (Sumner et al., 2011).

These uncertainties are important because they can contribute to variations between IRBs that can impede multi-site and other research. The lack of consistent, clearly followed approaches or algorithms across IRBs can affect single-site studies, too. An IRB may approach a study in an idiosyncratic way that is not fully justified, may stymie researchers, and may reflect neither the regulations’ intent nor the standards used by other IRBs.

Best Practices

These data have several key implications for best practices and policy. IRB members are often “removed from the field,” and arguably may not always be best suited to advise on social risks to a population. IRBs generally try to consult with individuals who have expertise about other cultural groups that are participating in research. Yet these committees often face challenges in obtaining such unbiased expertise (i.e., the researcher submitting a protocol may know about a local population, but be conflicted in assessing the ethics of his or her own protocol). These data thus suggest the need for further sensitivity to, and awareness of, these ambiguities and complexities involved in issues related to social risk among IRBs, policymakers, and researchers. Clarification and guidance from scholars as well as OHRP and/or other entities (e.g., Public Responsibility in Medicine and Research [PRIM&R], Secretary’s Advisory Committee on Human Research Protections [SACHRP], or the Institute of Medicine), concerning how IRBs should and do define, interpret, assess, and weigh these categories (i.e., social risks) could be highly beneficial. These data also suggest the need to collect, share, and explore “rules of thumb”—ways of handling these tensions—that individual IRBs may have developed, and that may help other institutions as well.

Research Agenda

These data also have several important implications for future research and scholarship—underscoring needs to explore further the definitional and conceptual ambiguities highlighted here: e.g., examining, among larger samples, how often, when, to what degree, and in what ways IRBs consider and weigh long-term social risks and differ in doing so, and how exactly these considerations affect IRB decisions. Interviewees did not cite, as a social harm, the lack of reliable evidence to guide decisions (whether clinical or not); but this lack, too, may pose challenges in addressing these issues.

Educational Implications

These data suggest that IRB chairs, members, staff, policymakers, and researchers may benefit from educational efforts targeted at these realms—including the complexities and ambiguities involved in defining and applying these terms—to increase awareness of, and sensitivity to, these issues.

These data have several potential limitations. The study included interviews with IRB chairs and members, not observations of IRB meetings or written IRB records; future studies can use these other approaches as well. These IRB personnel were primarily Caucasian, as has been found in other studies of IRBs (Hayes, Hayes, & Dykstra, 1995). Yet the present data did not suggest that, overall, IRB personnel vary significantly by ethnicity and race in their concerns about social risks. Several Caucasian IRB personnel were very dedicated to the treatment and welfare of disadvantaged groups and highly concerned about social risks. Nonetheless, future research can investigate more fully, among larger samples, whether IRB personnel’s concerns do vary based on their race and ethnicity. This research may be difficult to conduct, however, since IRB personnel are overwhelmingly Caucasian (Hayes, Hayes, & Dykstra, 1995; Campbell et al., 2003). These IRBs also reported difficulty recruiting and retaining diverse community members (Klitzman, 2011c). Such additional data may also be hard to gather since, anecdotally, IRBs have generally required researchers to obtain consent from all IRB members, as well as from relevant PIs and funders.

This study is based on qualitative data, and thus is not designed to quantify responses. Qualitative data are valuable in suggesting research questions and hypotheses, and in illuminating key social phenomena, meanings, and practices. Future studies can quantify the frequencies of these various phenomena among larger samples, and explore statistical associations with other variables.

In sum, these data—the first to examine how IRBs view and make decisions about social risks in protocols they have reviewed—highlight how IRBs at times in fact consider these issues, but face a range of challenges, dilemmas, and ambiguities, and can vary as a result. These phenomena have important implications for future practice, research, policy, and education.

Acknowledgments

The author would like to thank Meghan Sweeney, Jason Keehn, and Patricia Contino for their assistance with this manuscript. The NIH (R01-HG04214) and the National Library of Medicine (5-G13-LM009996-02) funded this work.

Biography

Robert L. Klitzman is a Professor of Clinical Psychiatry at the Columbia University Department of Psychiatry and Director of the Masters of Bioethics Program at Columbia University. He conceived of and directed the study.

Appendix A: Sample Questions from Semi-Structured Interview*

  • Do you think IRBs differ in their views or approaches toward research integrity (RI), and if so, when, how, and why? What issues about RI has your IRB confronted? Which, if any, have been difficult? Why? What happened? What other disagreements have occurred on your IRB?

  • Has your IRB confronted issues in assessing social risks and benefits, and if so, how, when, and why? What issues arose, and how did you address these? Was there disagreement on your IRB about these issues, and if so, what? How was it resolved?

  • Should other regulations or guidelines concerning IRB reviews of RI or other areas be developed, and if so, what?

  • Do you have any other thoughts about these issues?

Footnotes

* Additional follow-up questions were asked, as appropriate, with each participant.

References

  1. Bergus GR, Levin IP, Elstein AS. Presenting risks and benefits to patients: The effect of information order on decision making. Journal of General Internal Medicine. 2002;17:612–617. doi: 10.1046/j.1525-1497.2002.11001.x.
  2. Campbell EG, Weissman JS, Clarridge B, Yucel R, Causino N, Blumenthal D. Characteristics of medical school faculty members serving on institutional review boards: Results of a national survey. Academic Medicine. 2003;78(8):831–836. doi: 10.1097/00001888-200308000-00019.
  3. Dziak K, Anderson R, Sevick MA, Weisman CS, Levine DW, Scholle SH. Variations among institutional review boards in a multisite health services research study. Health Services Research. 2005;40(1):279–290. doi: 10.1111/j.1475-6773.2005.00353.x.
  4. Douglas M. Purity and danger. London: Routledge & Kegan Paul; 1966.
  5. Fleischman A, Levine C, Eckenwiler L, Grady C, Hammerschmidt DE, Sugarman J. Dealing with the long-term social implications of research. American Journal of Bioethics. 2011;11(5):5–9. doi: 10.1080/15265161.2011.568576.
  6. Fox RC, Swazey JP. Observing bioethics. New York: Oxford University Press; 2008.
  7. Geertz C. Interpretation of cultures: Selected essays. New York: Basic Books; 1973.
  8. Hayes GJ, Hayes SC, Dykstra T. A survey of university institutional review boards: Characteristics, policies, and procedures. IRB: A Review of Human Subjects Research. 1995;17(3):1–6.
  9. Klitzman R. Views and approaches toward risks and benefits among doctors who become patients. Patient Education and Counseling. 2006;64:61–68. doi: 10.1016/j.pec.2005.11.013.
  10. Klitzman R. Views and experiences of IRBs concerning research integrity. Journal of Law, Medicine & Ethics. 2011a;39(3):513–528. doi: 10.1111/j.1748-720X.2011.00618.x.
  11. Klitzman R. “Members of the same club”: Challenges and decisions faced by U.S. IRBs in identifying and managing conflicts of interest. PLoS ONE. 2011b;6(7):e22796. doi: 10.1371/journal.pone.0022796.
  12. Klitzman R. How local IRBs view central IRBs in the U.S. BMC Medical Ethics. 2011c;12(13):1–14. doi: 10.1186/1472-6939-12-13.
  13. Klitzman R. The myth of community differences as the cause of discrepancies between IRBs. AJOB Primary Research. 2011d;2(2):24–33. doi: 10.1080/21507716.2011.601284.
  14. Klitzman R. The ethics police? IRBs’ views concerning their power. PLoS ONE. 2011e;6(12):e28773. doi: 10.1371/journal.pone.0028773.
  15. Klitzman R. The local ecologies of IRBs: The dynamic relationships, systems, and tensions between IRBs and their institutions. AJOB Primary Research. 2012a. doi: 10.1080/21507716.2012.757255. Epub ahead of print 18 December 2012.
  16. Klitzman R. U.S. IRBs confronting research in the developing world. Developing World Bioethics. 2012b;12(2):63–73. doi: 10.1111/j.1471-8847.2012.00324.x.
  17. Klitzman R. Institutional review board community members: Who are they, what do they do, and whom do they represent? Academic Medicine. 2012c;87(7):975–981. doi: 10.1097/ACM.0b013e3182578b54.
  18. Klitzman R. How IRBs view and make decisions about consent forms. Journal of Empirical Research on Human Research Ethics. 2013a;8(1):8–19. doi: 10.1525/jer.2013.8.1.8.
  19. Klitzman R. How IRBs view and approach challenges raised by industry-funded research. IRB: Ethics & Human Research. 2013b;35(3):9–16.
  20. Kong A, Barnett GO, Mosteller F, Youtz C. How medical professionals evaluate expressions of probability. New England Journal of Medicine. 1986;315(12):740–744. doi: 10.1056/NEJM198609183151206.
  21. Mello MM, Wolf LE. The Havasupai Indian tribe case: Lessons for research involving stored biologic samples. New England Journal of Medicine. 2010;363(3):204–207. doi: 10.1056/NEJMp1005203.
  22. de Melo-Martín I. IRBs and the long-term social implications of research. American Journal of Bioethics. 2011;11(5):22–23. doi: 10.1080/15265161.2011.552161.
  23. Redelmeier DA, Rozin P, Kahneman D. Understanding patients’ decisions: Cognitive and emotional perspectives. Journal of the American Medical Association. 1993;270(1):72–76.
  24. Shah S, Whittle A, Wilfond B, Gensler G, Wendler D. How do institutional review boards apply the federal risk and benefit standards for pediatric research? Journal of the American Medical Association. 2004;291(4):476–482. doi: 10.1001/jama.291.4.476.
  25. Strauss A, Corbin J. Basics of qualitative research: Techniques and procedures for developing grounded theory. Newbury Park, CA: Sage Publications; 1990.
  26. Sumner A, Crichton J, Theobald S, Zulu E, Parkhurst J. What shapes research impact on policy? Understanding research uptake in sexual and reproductive health policy processes in resource poor contexts. Health Research Policy and Systems. 2011;9(Suppl 1):S3. doi: 10.1186/1478-4505-9-S1-S3.
  27. Tversky A, Kahneman D. The framing of decisions and the psychology of choice. Science. 1981;211:453–458. doi: 10.1126/science.7455683.
  28. U.S. Department of Health and Human Services. Code of Federal Regulations: Title 45 Public Welfare, Part 46. 2005. Retrieved from http://ohsr.od.nih.gov/guidelines/45cfr46.html.
  29. U.S. Department of Health and Human Services. Human subjects research protections: Enhancing protections for research subjects and reducing burden, delay, and ambiguity for investigators. Federal Register. 2011;76(143):44512–44531.
