PRÉCIS
In qualitative interviews with a diverse group of experts, the vast majority believed unregulated researchers should seek out independent oversight. Reasons included the need for objectivity, protecting app users from research risks, and consistency in standards for the ethical conduct of research. Concerns included burdening minimal risk research and limitations in current systems of oversight. The literature and our analysis support the use of IRBs even when not required by regulations, and the need for evidence-based improvements in IRB processes.
INTRODUCTION
In the U.S., Institutional Review Boards (IRBs) are integral to the system of protections for human research participants. IRBs provide independent evaluation of proposed and ongoing research to ensure it is ethically acceptable and in compliance with laws and regulations, acting as a check against researchers’ potential biases.1 IRB oversight is legally required for research involving human subjects that is funded by the federal government, as well as research under the jurisdiction of the Food and Drug Administration (FDA). In addition, many institutions that receive federal funding choose to extend regulatory protections for research participants (including the requirement for IRB oversight) to all their human subjects research, regardless of funding source. Other countries have similar requirements for oversight by an independent ethics committee.2
Even so, some research involving human subjects falls outside these regulatory structures. Although there are legitimate questions about the efficiency and effectiveness of IRB processes,3 the lack of independent oversight raises concerns about exposing research participants to risks associated with flawed study design, unfair participant selection, an unjustified risk-benefit ratio, and inadequate opportunity to give informed, voluntary consent.4
A growing area of unregulated research involves the analysis of individual-level data collected via mHealth apps and devices. Such research is being conducted by citizen scientists, patient advocacy organizations, and independent research firms, as well as by solo entrepreneurs and researchers based in commercial entities. These ‘non-traditional’ researchers may have limited knowledge of, or experience with, ethical and regulatory paradigms for safeguarding the rights and well-being of human subjects.
To inform discussions regarding appropriate oversight of unregulated mHealth research, here we present results of in-depth interviews with a diverse group of expert stakeholders, focusing on the need for and possible approaches to independent oversight, followed by expert commentary from one author (PPO) and endorsed by the other authors.
EMPIRICAL DATA
Methods
We conducted qualitative interviews with experts from four key stakeholder groups central to mHealth research:
Patient and research participant advocates (“Advocate”)
Researchers who are integrating mHealth technologies into their studies, including independent researchers and citizen scientists (“Researcher”)
Regulatory and policy professionals (“Regulatory”)
App and device developers (“Developer”)
Data collection and analysis are described in detail elsewhere in this issue.5 In brief: Based on our knowledge of the issues and in consultation with the larger research team, we developed, pilot tested, and finalized a semi-structured interview guide centered around hypothetical scenarios involving two commercial mHealth apps collecting health, behavioral, and other data which may be shared for various purposes including research. Here we report findings in response to the following question:
When researchers are in a regulated environment, they are required to seek out external oversight. For example, if their research is funded by NIH, they are required to get approval from an Institutional Review Board, or IRB, which is tasked with making sure human subjects are protected.
• When people are conducting research that is not subject to these regulations, do you think they should still seek out some sort of external oversight? Why / why not?
We identified potential participants based on leadership positions in prominent organizations, institutions, and studies, authorship of influential papers, and nominated expert sampling.6 We interviewed 41 experts representing a wide array of professional perspectives and demographic diversity (Table 1).
Table 1.
| | n | (%) |
| --- | --- | --- |
| Category * | | |
| Patient/participant advocate | 10 | (24) |
| Researcher | 13 | (32) |
| Regulatory/policy professional | 9 | (22) |
| Mobile app/device developer | 9 | (22) |
| Academic Degrees ^ | | |
| Bachelors | 7 | (17) |
| Masters | 13 | (32) |
| JD | 5 | (12) |
| PhD | 16 | (39) |
| MD | 9 | (22) |
| Geographic Region (U.S.) | | |
| Midwest | 2 | (5) |
| Northeast | 5 | (12) |
| South | 18 | (44) |
| West | 16 | (39) |
| Gender | | |
| Male | 20 | (49) |
| Female | 20 | (49) |
| Non-binary | 1 | (2) |
| Race | | |
| White | 30 | (73) |
| Black, African American | 3 | (7) |
| Asian | 5 | (12) |
| >1 Race | 1 | (2) |
| Not reported | 2 | (5) |
| Hispanic | | |
| No | 38 | (93) |
| Yes | 2 | (5) |
| Not reported | 1 | (2) |
* Many of our interviewees have multiple areas of expertise and could have been recognized as belonging to two or more categories of stakeholder groups; this table reflects the primary perspective for which we identified them as experts.
^ Reflects >1 degree per interviewee, as applicable
Interviews were conducted by telephone, and professional transcriptions of the audio recordings were uploaded into NVivo 12 for coding and analysis using standard iterative processes.7 The Vanderbilt University IRB deemed this research exempt.
Views on the Need for Independent Oversight
Nearly all interviewees (over 90%) believed that unregulated researchers should seek out some type of external or independent oversight. Among these, many highlighted the crucial role of objectivity and the need for a third party to provide “sanity checks” (29_Researcher):
People, even with the best intentions, miss things and maybe don’t consider or see all the impacts their work might have on others. I think that’s largely the benefit of third party review. In some scenarios, people need to be encouraged to follow the rules. But even with the best of intentions, it’s easy to miss things. (05_Developer)
Because people, in their zest to get things done … without oversight, it’s easy for someone to cross the line… We want to make certain that somebody more objective than the person doing the research has a look to make certain everything is in order as it should be. (11_Advocate)
Many also emphasized the importance of independent review to protect app users “from unethical research behavior, or research that presents unnecessary risk or unacceptable risk” (30_Developer):
[Research regulations] have been put in place because … people have been abused for hundreds and hundreds of years, up until very recent memory. We have these rules in place to protect people who may not know that they’re being abused, or that they have recourse in these situations. You should of course have an IRB reviewing your stuff, or whatever comparable structure might exist. Someone has to be looking over your shoulder. (36_Advocate)
Several argued for consistency, suggesting that unregulated research should be treated no differently than regulated research. These experts said research should undergo independent review regardless of who conducts or funds the research because it’s “the right thing to do” (01_Developer):
I think there needs to be consistent protections. Just because something isn’t attached to some level of federal funding, it doesn’t change the basic ethical considerations… It creates a strange impression that just because something has funding attached, it comes with this oversight body… Much research goes on without funding and I think it’s part of the public good to ensure that research is being conducted with some moral compass, and with the right aims and intentions, and not just for profit. (07_Regulatory)
Is there risk to the participant? If there is, then it should come under some external oversight by which the rights and safety of human participants is protected. That’s really what the Helsinki Declaration says. It doesn’t say anything about “Are you publishing? Are you gonna benefit from it?” It’s about managing risk. This is human data and there’s risk to people … Ethically, yes, it should come under IRB. (40_Researcher)
Indeed, some specifically noted that unregulated research may be riskier than the same study would be in a regulated environment:
It doesn’t seem fair that the people most trained to do research would have to be regulated, and then those who are not would not need to be regulated. That just doesn’t seem to make any sense at all… It sounds even more dangerous. Even researchers with oversight, we have scientific misconduct—but people who don’t know regulations … is probably the link to even more misconduct. Not necessarily intentionally, but just because you don’t know what you’re doing. (14_Researcher)
Interviewees also described several practical considerations in favor of unregulated researchers’ seeking independent oversight, such as journal requirements: “They’re gonna have a hard time publishing their research and it’s not gonna make a very big impact in the field unless it has been vetted in some way” (17_Researcher). A few mentioned Apple’s ResearchKit requirement for IRB approval of research apps and the potential business advantages for other app developers:
It’s a good idea to have some independent entity, however that’s defined, help you to stay in line. If your goal is to be an ethical, upstanding, appropriate company, and you are a good player in that market, then you probably would do something like that to show your good faith to [app users] and even to show to the people who might be using your data that they can be confident in the way that you gathered the data was appropriate, and the people that you gathered it from were aware… I think it’s a good data practice, a good research practice… I think it behooves people to have an independent eye on this kind of work. (27_Regulatory)
Among the small minority of interviewees (about 10%) who were less favorable toward the idea of unregulated researchers seeking out independent review, some believed the need for such oversight depends on the level of risk involved:
If you’re getting into an area where it’s getting more serious, related to a medical issue, then I think you’re better off making sure that you’re in line with some sort of an external group that can help validate and think through what are some of the consequences, potentially non-intended consequences, of what you’re doing. (25_Researcher)
It depends … If it is personal data, personal information, if this got out into the public and the detriment of becoming public is negative and bad … I think you have to go to the external review board. If you’re doing something where you’re just basically tracking data that is ‘what pages are you using most in my app?’, not necessarily. So, I think it depends on the type of data you’re collecting. (02_Developer)
Several expressed dissatisfaction with current systems of oversight …
IRB tends to protect the organizations and it actually does not protect the participants and the research. And so, if the goal is to protect participants and make better research happen, then I don’t believe that IRBs right now, the way they’re structured, are the best mechanism for doing that. (16_Researcher)
IRBs are notorious for being different and, to some degree, unpredictable. What one IRB may say is fine, another may freak out over, and that unpredictability and unreliability makes it really difficult for researchers. (30_Developer)
… including potential impracticalities for unregulated researchers:
The cons are it takes more time. If it truly is external, you’re probably gonna have to pay them. It’s one more barrier or hurdle in a complicated process that already takes time and money and so people aren’t usually looking to voluntarily add in things that aren’t required. (03_Regulatory)
It’s not that helpful to tell people that they need external oversight because … if you actually try to make a list of ‘here’s where you go for oversight,’ it’s a very short list. The quality is not that high, like fee-for-service reviews or things like that. The infrastructure is underdeveloped and therefore it’s kind of bad advice to tell people they need external oversight. On the other hand, really those resources should be developed as opposed to everybody just throwing up their hands and saying there’s nothing needed. (32_Advocate)
Views on Approaches to Independent Oversight
Among interviewees who believed that unregulated researchers should seek some type of external review, many proposed such oversight should come from an IRB, often noting the availability of independent IRBs:
If you’re not connected to an institution that has their own, there is an IRB that you can still use… It’s hard to imagine being okay with research, human-subjects research, that doesn’t have an IRB. (14_Researcher)
However, a few cautioned against a traditional IRB model alone because of perceived limitations when it comes to the use of independent IRBs by non-traditional scientists:
If someone off the street says that, “I’m going to go to Western IRB and hire someone to write me an IRB protocol and submit it,” I wouldn’t feel comfortable with that. There has to be vetting of who is involved and whether they have the capacity to understand all the implications of what they’re doing and where the oversight is. At minimum, some sort of IRB—but just an IRB is not enough if that individual doesn’t have any sort of organizational capacity. (39_Researcher)
Is this like a hired gun IRB that’ll stamp whatever procedure you give them…? It must be truly independent and must have qualified people … that are making the judgment, and their decision has to be effective in the sense that if this independent review says no, then you can’t do it. (09_Regulatory)
Some generally described a formal entity or process other than an IRB, such as an oversight board established specifically for unregulated researchers:
I think if there’s an oversight board that comes up with criteria for using the data and ethical approaches to it, I think that is fine. A company could choose to use an external group, or they could have one inside their company. … It doesn’t necessarily have to be an accredited IRB type of thing. (16_Researcher)
Others suggested that unregulated researchers could undertake an informal process of getting feedback from experts, that “they would be wise to contact somebody, ‘Hey, are we doing the right thing in this case?’” (28_Developer). One proposed a forum through which unregulated researchers could request expert review and consultation:
I really like the idea of having a forum for citizen science to put out there and discuss what they’re doing with what we call “regular scientists”… That would provide not only a forum, but a managed forum, kind of like a monitored website. Maybe that kind of management would be a prerequisite before anything went out to the public. (18_Researcher)
Another alternative was training for unregulated researchers and replacing external oversight with formal certification:
There’s something about the way our ethics review process works. It introduces a lot of friction that might be better accomplished through things like training. We currently think of research as something where you create the protocol and it’s reviewed on a case-by-case basis. That’s not how we, for example, perform medicine. We, instead, train someone, then they’re certified, and then we trust them with stuff. In fact, research is kind of exceptional in this way, and so I’d be interested in seeing some sort of certification process outside of the normal IRB realm where it’s like, “I’ve been certified in ethical practices for data”… That would be a lower overhead and would be best to see because a lot of mistakes that people make are sometimes mistakes where they simply weren’t aware that they could be making a mistake. (38_Developer)
Some interviewees did not identify a particular entity or process for external oversight, but rather discussed key attributes. Some emphasized the importance of the oversight’s being external to or independent from the unregulated research, citing potential conflicts of interest, lack of expertise, and limited objectivity. A few emphasized the importance of actively involving “reasonable representation of the community involved” (09_Regulatory), including patients and the public. Finally, one advocated that oversight should encourage open discussion and continuous learning, regardless of form:
[IRBs] are perceived as this police structure, which sets up a really bad relationship model. When someone does something wrong, when something bad happens, ideally, you want people to be able to feel safe enough to be able to say, “I screwed up, but what do I do?” You want somewhere people feel safe enough to actually have conversations about when they’re right or wrong… Having this dialog and debate where you’re building the logical checks and balances, but in a place where people feel safe to try things but also to be able to self-correct. So, it’s less, “Just have this [IRB review],” which is setting up the idea that there’s one time to think about ethics. But really, there should be many times when you’re doing research—you should be constantly thinking about how you can do your work to be more helpful to others and whatnot. So, the external [oversight] entity, in my mind, is someone that is helping you to be your best version of you. Unfortunately, I don’t think IRBs are set up to actually help people to be the best and most ethical version of themselves. (29_Researcher)
In Summary
In these qualitative interviews with a diverse group of experts, the vast majority believed that unregulated researchers should seek out external oversight. Reasons most commonly cited included the essential need for objectivity, protecting app users from research risks and harms, and consistency in standards for the ethical conduct of research regardless of by whom it is conducted or funded. The few who were less inclined to endorse the need for external oversight were primarily concerned about unduly burdening minimal risk research and limitations in current systems of oversight. Regarding how and by whom such oversight should occur, interviewees most often suggested an independent IRB; alternatives included some other formal oversight entities (e.g., a company-created review board), informal input (e.g., citizen science forum), and education and certification of researchers (rather than project-by-project oversight).
The results of descriptive studies such as ours do not prescribe what should be done; rather, they illuminate important issues from a range of perspectives. Many of our interviewees have multiple areas of expertise; Table 1 characterizes the primary perspective for which we identified them as experts. Accordingly, we did not attempt to analyze similarities or differences between stakeholder groups, particularly given the qualitative nature of our study.
We conducted these interviews in 2017-2018 with experts throughout the U.S. Future research will be needed among other stakeholder groups and geographic locations, and as mHealth technologies, research using mHealth data, and privacy expectations continue to evolve.
COMMENTARY
The protection of human subjects participating in research is required and informed by federal regulations. But the construct of these regulations—with the Common Rule tethered to the use of federal funds and FDA regulations limited to products overseen by the FDA—creates a gap. Non-FDA regulated research conducted with no federal support is not legally required to meet regulatory standards for the protection of human subjects.
Thus, a central concern for research falling into this gap is whether human subjects are being exposed to risk, or worse, being harmed. To assess this possibility, one must consider a series of questions: First, do the current regulations truly protect human subjects? Second, what is the scope and risk of unregulated research? Third, are there other mechanisms in place to protect human subjects?
Regarding current regulations, the history of protection of human subjects in research can be traced back centuries, but it was ignited in earnest in the mid-20th century with the Nuremberg trials that culminated in the Nuremberg Code in 1947,8 the bombshell article by Henry Beecher in 1966 outlining examples of unethical research published in leading medical journals,9 and the 1972 public revelation of the Tuskegee syphilis study.10
Prior to this, independent review and informed consent were promulgated by some individual entities, but agreed-upon standards and processes did not exist.11 Some felt that investigators could and should be trusted to oversee their own research: “misconduct was a problem of rogue researchers…trust, not regulation would foster better research and clinical care.”12 Even Beecher did not call for regulatory oversight, but rather focused on “the more reliable safeguard provided by the presence of an intelligent, informed, conscientious, compassionate, responsible investigator.”13
In the US, this changed in 1974 when the National Research Act formally codified IRBs for oversight of federally funded research with human subjects.14 Of note, this Act recognized the problem of unregulated research by calling for “… an investigation and study to determine the need for a mechanism to assure that human subjects in biomedical and behavioral research not subject to regulation by the Secretary are protected” (Section 202(3)).
Similar international initiatives emerged at the same time: In 1975, the World Medical Association’s Declaration of Helsinki15 added Principle 2: “(T)he design and performance of each experimental procedure involving human subjects should be clearly formulated in an experimental protocol which should be transmitted to a specially appointed independent committee for consideration, comment and guidance.” In later iterations, approval by an independent ethics committee (current Principle 5) was required and this has become the international norm.
This evolution in research regulations and norms is understandable given the context of egregious examples of self-regulated unethical research.16 Did those conducting such research ever consider that their studies were unethical? Did their enthusiasm for, as well as their intellectual, career, and/or fiscal investment in, their research blind them to potential concerns? Can self-regulation adequately address the complexity of research bias, e.g., small-study effect bias, citation bias, industry bias, financial conflicts of interest, pressure to publish bias, early career bias? Most would likely find it difficult, if not impossible, to truly recognize and acknowledge the degree to which biases influence their own actions.17 To quote one of the interviewees in our study, a review is better done by “somebody more objective than the person doing the research.”
Consistency of review is important and is supported by prescribed regulatory criteria that must be met for approval. For example, risks must be minimized and reasonable in relation to anticipated benefits (if any) or knowledge gained; informed consent must be sought and documented (if not waived); and there must be adequate provisions for monitoring, protection of confidentiality, and safeguards for vulnerable populations.18 In assessing these criteria, study hypothesis and design are essential considerations; after all, if it is bad science, it is unethical.19
While there has been international endorsement of independent and consistent oversight and approval, there are numerous criticisms of the current process, including those identified by our interviewees. Some of these focus on burden: that the process is too bureaucratic and too time-consuming. Some question details of the process, perceiving a one-size-fits-all approach without flexibility or titration of oversight to different levels of risk. Some raise the issue of IRB members’ expertise and training to adequately handle new areas of research. Some question the value of the current system and ask for hard evidence. Some simply ask for a different process.20
Responding to these criticisms is challenging. Although federal regulations and guidance documents inform the oversight process, there is tremendous variability in interpretation and implementation.21 This continues despite attempts at accreditation to encourage standardization.22 Measuring, much less proving, the value of IRB review has been elusive.23
Although performance metrics such as turnaround times can be measured, the critical ask is for evidence that the current process improves the protection of human subjects, and responses to date have been anecdotal rather than systematic. Examples can readily be found of protocol changes required by IRBs that, had they not been made, would likely have resulted in harm to subjects. While compelling, such examples do not constitute systematic evidence.
There have been long-standing concerns about the lack of flexibility in the regulations—a fundamental and unresolved one being that biomedical and social-behavioral research are inherently different and should not be covered by the same system of protection.24 The explosion of mHealth apps, often associated with behavioral research, has fueled this controversy. However, the Common Rule does in fact incorporate flexibility, elucidating categories of research that are exempt from the regulations, identifying criteria for waiver of consent requirements, and assigning level of review based on whether the risks involved are greater than minimal.25 Even so, not all IRBs take full advantage of the flexibility that exists.
Proposals for change fall into several buckets,26 many of which were captured by our interviewees’ comments on approaches to independent oversight. One calls for training and certification to empower researchers to oversee their own research. This could be supported by a process to get feedback from experts, and perhaps include posting protocols online to elicit suggestions. In such a system “responsibility for ethical conduct during the study would be shared by both the researchers and the peers who agreed that the plan would adequately protect participants.”27 This system is predicated on an available group of peers who are trained and possibly certified in bioethics and research. Other suggestions involve the creation of new formal or informal entities that would either augment the IRB (e.g., data boards, data security committees28) or take the place of the IRB.
With any of these proposals, the devil is in the myriad details and whether key attributes of review (such as those identified by our interviewees) could be met. For example, would these new mechanisms completely replace the current system or would they only be applied to a particular segment of research (perhaps based on risk)? Would these systems require independence of review? Would they be voluntary? What authority would these new processes have? What standards would be used for the assessment of a particular protocol or for certification of a peer reviewer? Would approval be required? How would new processes be sustained over time? In the example of online peer review, what incentives would be needed to sustain the number and quality of reviewers needed?
If alternate approaches were promulgated, how would that affect the consistency of review that is expected by researchers, institutions, funders, and publishers? If ethically acceptable alternatives were developed, who would determine which system would be available to specific research and/or researchers?
Turning to the issue of the scope and risk of unregulated research, further questions arise. Is it materially distinct from ‘traditional’ research? Are the risks smaller or fewer? Are unregulated and regulated researchers different in ways that matter regarding ethical obligations to protect participants?
The scope of research not subject to regulation is limited by specific attributes that definitively bring certain activities into the regulated sphere, e.g., most research procedures conducted in a hospital setting, collaboration with colleagues who work for institutions that receive federal funding, and research that falls under FDA regulation. (Of note, the lack of clear understanding regarding the FDA’s enforcement discretion in mHealth adds some confusion.29) Further, research that can be conducted without meeting regulatory standards is limited by journal editors, who routinely require documentation of compliance with federal regulations as a condition for publication.
But the scope of unregulated research, and its associated risk, is broad, with interventional studies using approved drugs and devices at one extreme. More common is research with big data, artificial intelligence, and, relevant to this study, mHealth apps, for which a good portion of development is done in the commercial sector by company employees as well as by independent entrepreneurs who are not covered by the regulations. The risk-benefit ratio of this research merits consideration. The main risk to subjects is the potential loss of privacy and confidentiality resulting from the necessary access to large amounts of data (observational and self-reported data as well as information in publicly accessible and/or highly controlled databases). Ensuring adequate protections is difficult as the definition of identifiability changes, technology facilitates re-identification, and hacking and cyber-attacks increase.30 The benefit of the research must justify exposing any subject to these risks. And, despite logistical challenges, some feel strongly that individuals whose data are accessed should provide informed consent, or at least be notified of the research and its risks.31 When mHealth research is regulated, IRBs address these issues. When it is unregulated, it is uncertain how risks and benefits are even considered. As some of our interviewees observed, unregulated research may be riskier than the same study would be in a regulated environment.
With regard to mechanisms other than IRBs for protecting human subjects, researchers conducting unregulated research will not be nested in conventional research infrastructures available at regulated academic/research centers. In addition to regulatory oversight, these systems support researcher development and good behavior with vetted local policies, structured institutional norms, and consequences for noncompliance.32 For unregulated researchers who work with small independent research staffs or on a research team embedded in a larger organization focused on product development, it may be difficult to find similar support. This is particularly concerning for unregulated researchers who are new to research and have little to no experience with or understanding of human subjects protections. Their facility with basic ethical standards may be limited; their recognition of human subjects issues may be flawed and potential risks not considered. In the absence of independent review, bias may be an issue especially when the investigator has a stake in the product under development.
Unregulated research does pose risk to human subjects and those human subjects have the right to protection. We agree with the large majority of experts who participated in our qualitative interviews: Over 90% said independent oversight is important, and many of these said the current system of oversight is the best option—while also acknowledging ample opportunities for improvement. If the risk of much unregulated research is trivial, assignment to an exempt category may be appropriate in some cases and in others the review could be expedited. Alternate processes could be entertained if they are standardized, sustainable, and can be evaluated. There is a duty to do no harm, and history has well demonstrated the limitations of self-regulation in realizing this duty.
In the absence of legislative action, unregulated research will remain a reality. The people and entities conducting this research must become intimately familiar with ethical norms and standards for the protection of human subjects. The future of these activities relies on the trust of everyone involved in the research,33 and part of earning that trust is demonstrating that the rights and welfare of human subjects are being protected. Could the current system of oversight be improved? Absolutely. But today it remains the only process that consistently protects humans in research.
Acknowledgments:
This work was supported by a grant from the National Cancer Institute, National Human Genome Research Institute, and Office of Science Policy and Office of Behavioral and Social Science Research in the Office of the Director, National Institutes of Health (R01-CA-207538). The content is solely the responsibility of the authors and does not necessarily represent the official views of NIH. Our thanks to Carolyn Diehl for her assistance.
Biographies
Laura M. Beskow, MPH, PhD, is a Professor and the Ann Geddes Stahlman Chair in Medical Ethics in the Center for Biomedical Ethics and Society at Vanderbilt University Medical Center (Nashville, TN). She received her BS from Iowa State University (Ames, IA), her MPH with a concentration in health law from Boston University (Boston, MA), and her PhD in Health Policy and Administration with a minor in epidemiology from the University of North Carolina at Chapel Hill (Chapel Hill, NC).
Catherine M. Hammack-Aviran, MA, JD, is an Associate in Health Policy in the Center for Biomedical Ethics and Society at Vanderbilt University Medical Center (Nashville, TN). She received her BS from the University of Southern Mississippi (Hattiesburg, MS), her MA in Bioethics from Wake Forest University’s Center for Bioethics, Health, and Society (Winston-Salem, NC), and her JD from Wake Forest University School of Law (Winston-Salem, NC).
Kathleen M. Brelsford, MPH, PhD, is a Research Assistant Professor in the Center for Biomedical Ethics and Society at Vanderbilt University Medical Center (Nashville, TN). She received her BA from the University of Miami (Miami, FL), her MA in applied anthropology from Northern Arizona University (Flagstaff, AZ), and her MPH with a concentration in maternal and child health and PhD in Applied Anthropology from the University of South Florida (Tampa, FL).
P. Pearl O’Rourke, MD, is the Director of Human Research Affairs at Partners HealthCare Systems in Boston and an Associate Professor of Pediatrics at Harvard Medical School (Somerville, MA). She received her BA from Yale University (New Haven, CT), her Bachelors of Medical Science from Dartmouth Medical School (Hanover, NH), and her MD from The University of Minnesota (Minneapolis, MN).
REFERENCES
- 1.Grady C, ‘Institutional Review Boards: purpose and challenges,’ Chest, 148, no. 5 (2015): 1148–1155. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 2.Office for Human Research Protections, International Compilation of Human Research Standards, at <https://www.hhs.gov/ohrp/internatinal/compilation-human-research-standards/index.html> (last visited September 23, 2019).
- 3.Taylor HA, ‘Moving beyond compliance: measuring ethical quality to enhance the oversight of human subjects research,’ IRB: Ethics & Human Research, 29, no. 5 (2007): 9–14; [PubMed] [Google Scholar]; Coleman CH and Bouesseau MC, ‘How do we know that research ethics committees are really working? The neglected role of outcomes assessment in research ethics review,’ BMC Medical Ethics, 9, no. 6 (2008); [DOI] [PMC free article] [PubMed] [Google Scholar]; Millum J and Menikoff J, ‘Streamlining ethical review,’ Annals of Internal Medicine, 153, no. 10 (2010): 655–657; [DOI] [PMC free article] [PubMed] [Google Scholar]; Mascette AM, Bernard GR, and Dimichele D, et al., ‘Are central institutional review boards the solution? The National Heart, Lung, and Blood Institute Working Group’s report on optimizing the IRB process,’ Academic Medicine, 87, no. 12 (2012): 1710–1714. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4.Emanuel EJ, Wendler D, and Grady C, ‘What makes clinical research ethical?,’ JAMA, 283, no. 20 (2000): 2701–2711. [DOI] [PubMed] [Google Scholar]
- 5.Hammack-Aviran CM, Brelsford KM, and Beskow LM, ‘Ethical considerations in the conduct of unregulated mHealth research: expert perspectives,’ Journal of Law Medicine & Ethics, 48, Suppl. 1, (2020): 9–36. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 6.Namey EE, Trotter R, ‘Qualitative research methods,’ in Guest G, Namey EE, eds., Public Health Research Methods (Los Angeles: SAGE, 2015): at 447. [Google Scholar]
- 7.MacQueen KM, McLellan E, Kay K, and Milstein B, ‘Codebook development for team-based qualitative analysis,’ Cultural Anthropology Methods, 10, no. 2 (1998): 31–36. [Google Scholar]
- 8.Moreno JD, Schmidt U, and Joffe S, ‘The Nuremberg Code 70 years later,’ JAMA, 318, no. 9 (2017): 795–796. [DOI] [PubMed] [Google Scholar]
- 9.Miller FG, ‘Homage to Henry Beecher (1904-1976),’ Perspectives in Biology & Medicine, 55, no. 2 (2012): 218–229. [DOI] [PubMed] [Google Scholar]
- 10.Cobb WM, ‘The Tuskegee syphilis study,’ Journal of the National Medical Association, 65, no. 4 (1973): 345–348. [PMC free article] [PubMed] [Google Scholar]
- 11.See Grady, supra note 1.
- 12.Jones DS, Grady C, and Lederer SE, ‘Ethics and Clinical Research – The 50th anniversary of Beecher’s bombshell,’ N. Engl. J. Med 374, no. 24 (2016): 2393–2398. [DOI] [PubMed] [Google Scholar]
- 13.Beecher HK, ‘Ethics and clinical research,’ N. Engl. J. Med 274, no. 24 (1966): 1354–1360. [DOI] [PubMed] [Google Scholar]
- 14.Public Law 93-348, National Research Act. Title II – Protection of Human Subjects of Biomedical and Behavioral Research, at <https://history.nih.gov/research/downloads/PL93-348.pdf> (last visited September 23, 2019).
- 15.World Medical Association, Declaration of Helsinki – Ethical Principles for Medical Research Involving Human Subjects, at <https://www.wma.net/policies-post-wma-deciaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/> (last visited September 23, 2019).
- 16.Moon MR and Khin-Maung-Gyi F, ‘The history and role of institutional review boards,’ Virtual Mentor, 11, no. 4 (2009): 311–321. [DOI] [PubMed] [Google Scholar]
- 17.Fanelli D, Costas R, and Ioannidis JP, ‘Meta-assessment of bias in science,’ Proceedings of the National Academy of Sciences of the United States of America, 114, no. 14 (2017): 3714–3719. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 18.‘Federal Policy for the Protection of Human Subjects. Final Rule.’ Federal Register, 82, no. 12 (2017): 7149–7274. [PubMed] [Google Scholar]
- 19.See Emanuel, supra note 4.
- 20.Abbott L, and Grady C, ‘A systematic review of the empirical literature evaluating IRBs: what we know and what we still need to learn,’ Journal of Empirical Research on Human Research Ethics, 6, no. 1 (2011): 3–19. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21.Silverman H, Hull SC, and Sugarman J, ‘Variability among institutional review boards’ decisions within the context of a multicenter trial,’ Critical Care Medicine, 29, no. 2 (2001): 235–241; [DOI] [PMC free article] [PubMed] [Google Scholar]; McWilliams R, Hoover-Fong J, Hamosh A, Beck S, Beaty T, and Cutting G, ‘Problematic variation in local institutional review of a multicenter genetic epidemiology study,’ JAMA, 290, no. 3 (2003): 360–366; [DOI] [PubMed] [Google Scholar]; Green LA, Lowery JC, Kowalski CP, and Wyszewianski L, ‘Impact of institutional review board practice variation on observational health services research,’ Health Services Research, 41, no. 1 (2006): 214–230; [DOI] [PMC free article] [PubMed] [Google Scholar]; Kastner B, Behre S, Lutz N, et al. , ‘Clinical research invulnerable populations: variability and focus of institutional review boards’ responses,’ PLoS One, 10, no. 8 (2015): e0135997. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 22.Summers E and Feige M, ‘Accreditation of Human Research Protection Programs,’ in Gallin JI, Ognibene FP, and Johnson LL, eds., Principles and Practice of Clinical Research, 4th edition (Boston: Academic Press, 2018): at 63–72. [Google Scholar]
- 23.Grady C, ‘Do IRBs protect human research participants?,’ JAMA, 301, no. 10 (2010): 1122–1123; [DOI] [PubMed] [Google Scholar]; See Jones, supra note 12; See Abbott, supra note 20.
- 24.De Vries R, DeBruin DA, and Goodgame A, ‘Ethics review of social, behavioral, and economic research: where should we go from here?,’ Ethics Behavior, 14, no. 4 (2004): 351–368. [DOI] [PubMed] [Google Scholar]
- 25.Menikoff J, Kaneshiro J, and Pritchard I, ‘The Common Rule, updated,’ N. Engl. J. Med 376, no. 7 (2017): 613–615. [DOI] [PubMed] [Google Scholar]
- 26.Emanuel EJ, Wood A, Fleischman A, et al., ‘Oversight of human participants research: identifying problems to evaluate reform proposals,’ Annals of Internal Medicine, 141, no. 4 (2004): 282–291. [DOI] [PubMed] [Google Scholar]
- 27.Bloss C, Nebeker C, Bietz M, et al., ‘Reimagining human research protections for 21st century science,’ Journal of Medical Internet Research, 18, no. 12 (2016): e329. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28.Ienca M, Ferretti A, Hurst S, Puhan M, Lovis C, and Vayena E, ‘Considerations for ethics review of big data health research: A scoping review,’ PLoS One, 13, no. 10 (2018): e0204937. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 29.Shuren J, Patel B, and Gottlieb S, ‘FDA regulation of mobile medical apps,’ JAMA, 320, no. 4 (2018): 337–339; [DOI] [PubMed] [Google Scholar]; Cortez NG, Cohen IG, and Kesselheim AS, ‘FDA regulation of mobile health technologies,’ N. Engl. J. Med 371, no. 4 (2014): 372–379. [DOI] [PubMed] [Google Scholar]
- 30.Effy V, Tobias H, Afue A, and Alessandro B, ‘Digital health: meeting the ethical and policy challenges,’ Swiss Medical Weekly, 148 (2018): w14591. [DOI] [PubMed] [Google Scholar]
- 31.Moore S, Tasse TM, Thorogood A, Winship I, Zawati M, and Doerr M, ‘Consent processes for mobile app mediated research: systematic review,’ JMIR mHealth and uHealth, 5, no. 8 (2017): e126. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 32.Salloch S, ‘The dual use of research ethics committees: why professional self-governance falls short in preserving biosecurity,’ BMC Medical Ethics, 19, no. 1 (2018): 53. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 33.See Effy et al., supra note 30.