Author manuscript; available in PMC: 2016 Oct 1.
Published in final edited form as: Clin Trials. 2015 Sep 15;12(5):449–456. doi: 10.1177/1740774515597685

Harmonization and streamlining of research oversight for pragmatic clinical trials

P Pearl O'Rourke 1, Judith Carrithers 2, Bray Patrick-Lake 3, Todd W Rice 4, Jeremy Corsmo 5, Raffaella Hart 6, Marc K Drezner 7, John D Lantos 8
PMCID: PMC4592396  NIHMSID: NIHMS705971  PMID: 26374678

Abstract

The oversight of research involving human participants is a complex process that requires institutional review board (IRB) review as well as multiple non-IRB institutional reviews. This multifaceted process is particularly challenging for multisite research when each site independently completes all required local reviews. The lack of inter-institutional standardization can result in different review outcomes for the same protocol, which can delay study operations from start-up to study completion. Hence, there have been strong calls to harmonize and thus streamline the research oversight process. Although the IRB is only one of the required reviews, it is often identified as the target for harmonization and streamlining. Data regarding variability in decision-making and interpretation of the regulations across IRBs have led to a perception that variability among IRBs is a primary contributor to the problems with review of multisite research. In response, many researchers and policymakers have proposed the use of a single IRB of record, also called a central IRB (CIRB), as an important remedy. While this proposal has merit, the use of a CIRB for multisite research does not address the larger problem of completing non-IRB institutional review in addition to IRB review—and coordinating the interdependence of these reviews.

In this paper we describe the overall research oversight process, distinguish between IRB and institutional responsibilities, and identify challenges and opportunities for harmonization and streamlining. We focus on procedural and organizational issues and presume that the protection of human subjects remains the paramount concern. Suggested modifications of IRB processes that focus on time, efficiency, and consistency of review must also address what effect such changes have on the quality of review. We acknowledge that assessment of quality is difficult in that quality metrics for IRB review remain elusive. At best, we may be able to assess the time it takes to review protocols and the consistency across institutions.

Keywords: Central IRB, human research protection program, institutional review boards, pragmatic clinical trials, research oversight

Introduction

The oversight of research involving human participants is a complex process. It requires both institutional review board (IRB) review and the coordination of multiple non-IRB institutional reviews, such as grants and contracts, ancillary committees (e.g., radiation safety, pharmacy), and conflict of interest analysis. This multifaceted process is particularly challenging for multisite research when each site independently completes all required local reviews. The lack of inter-institutional standardization often results in different review outcomes for the same protocol that can delay study operations from start-up to study completion.1 Hence, there have been strong calls to harmonize and thus streamline the research oversight process.

The increasing interest in pragmatic clinical trials (PCTs; defined elsewhere in this supplement2) has made the call for harmonization and streamlining even more urgent. As noted by Sugarman and Califf, “research that evaluates elements of usual medical practice may encounter ethical and regulatory challenges.”3 Individual institutions may well respond to these challenges in disparate ways, further reducing standardization.

Although the IRB is only one of the required reviews, it is often identified as the target for harmonization and streamlining. Data regarding variability (in some cases significant variability) in decision-making and interpretation of the regulations across IRBs4-11 have led to a perception that variability among IRBs is a primary contributor to the problems with review of multisite research. In response, many researchers and policymakers have proposed the use of a single IRB of record, also called a central IRB (CIRB), as an important remedy. While this proposal has merit, the use of a CIRB for multisite research does not address the larger problem of completing non-IRB institutional review in addition to IRB review—and coordinating the interdependence of these reviews.

In this paper we describe the overall research oversight process, distinguish between IRB and institutional responsibilities, and identify challenges and opportunities for harmonization and streamlining. While harmonization and streamlining are related and together may affect efficiency, we differentiate between them. Harmonization is, at a minimum, a consistency of approach. Streamlining addresses efficiency and reduction of duplicative review. We focus on procedural and organizational issues and presume that the protection of human subjects remains the paramount concern. Suggested modifications of IRB processes that focus on time, efficiency, and consistency of review must also address what effect such changes have on the quality of review. We acknowledge that assessment of quality is difficult in that quality metrics for IRB review remain elusive. At best, we may be able to assess the time it takes to review protocols and the consistency across institutions.

Institutional and IRB functions: The human research protection program

In aggregate, the IRB and all other institutional functions for research oversight are referred to as the human research protection program (HRPP). Our use of the term HRPP includes the IRB and other relevant institutional functions (Table 1). The many non-IRB institutional oversight requirements include identification and management of researcher conflicts of interest; review by ancillary committees (e.g., radiation safety, nursing, pharmacy/therapeutics, biosafety); implementation of the Health Insurance Portability and Accountability Act (HIPAA) standards for privacy and security; maintenance of robust billing plans to prevent incorrect or duplicate billing of research procedures; ensuring congruence among clinical trial agreements, study protocols, and consent documents; confirming that research staff is credentialed and trained; performing compliance monitoring and study audits to assure compliance with the IRB-approved protocol; and providing basic and continuing education about these requirements to the research community.12 In addition, the institution, by virtue of its federal-wide assurance (FWA) agreement, is responsible for supporting IRB review capacity and for mandatory reporting to federal agencies.

Table 1. Distinctions in IRB and Non-IRB HRPP Roles and Responsibilities.

IRB
 Initial review
 Continuing review
 Review of amendments, unanticipated problems, and deviations
Non-IRB HRPP
 Ancillary committee reviews (e.g., radiation safety, pharmacy, nursing, biomedical engineering)
 Health Insurance Portability and Accountability Act
 Conflict of interest review
 Institutional biosafety
 Research billing (including Centers for Medicare and Medicaid Services analysis)
 Reporting requirements per federal-wide assurance
 Investigator training and education

HRPP, human research protection program; IRB, institutional review board

The IRB's responsibility is limited to regulatory review of a protocol as directed by specific federal and, sometimes, local regulations and institutional policies. But communication between the IRB and other institutional research offices is necessary to ensure that all required institutional reviews have been completed before a study can proceed. Often an IRB office or research office coordinates these communications. The IRB office, not to be confused with the IRB itself, does not conduct regulatory review; it administratively supports the IRB and is often designated to coordinate and reconcile institutional reviews and research oversight functions, many of which may be outside the control of the IRB or IRB office. While an IRB office may give the final approval for a protocol to begin, it is important to acknowledge that IRB regulatory review is only one component of the overall time to study activation.

Are harmonization and streamlining possible?

Use of a CIRB has been heralded as the way to effectively streamline review. Current federal research regulations allow review of human subject research to be completed by either a local or external IRB. When relying on an external IRB, the local institution must negotiate the terms of that reliance through a formal reliance agreement (RA) that delineates the scope of the reliance and the tasks and responsibilities assigned to the local institution versus the external IRB. When an institution relies on an external IRB, the function that is ceded is usually only the IRB regulatory review. This means that all other institutional oversight responsibilities remain with the institution. Therefore a relying institution must have systems in place to complete required institutional reviews and, as appropriate, communicate them to the CIRB. Institutions must have a mechanism for allowing a study to be activated only when all IRB and non-IRB institutional reviews have been completed.

A number of basic logistics should be covered in the RA, whether using an independent (commercial) IRB or an IRB based at an academic medical center or other research entity. The scope of reliance must be documented in the RA, and it can vary; it could cover only a single protocol, a specific category of protocols (e.g., cancer, pediatric, or industry-sponsored studies), or studies done within a network. The RA can require use of the CIRB for the designated scope, or it can allow an institution to decide to use the CIRB on a protocol-specific basis. The RA is usually between one institution and a specific CIRB. An alternative is for a group of institutions to negotiate an RA that allows each signatory institution, on a protocol-by-protocol basis, to either be the CIRB or rely on the IRB at one of the other institutions. This latter approach is referred to as a reciprocal-deferral agreement, examples of which include the Midwest Area Research Consortium for Health,13 the Wisconsin Network for Health Research,14 and the Greater Plains Collaborative.15

The RA also delineates the regulatory responsibilities assigned to the CIRB and the relying institution. There are generally two basic models of regulatory assignment: non-share and share. In a non-share model, the CIRB is responsible for all regulatory review, including initial and continuing review and review of amendments, deviations, and unanticipated problems. The local IRB has no regulatory authority in this arrangement. Examples of non-share CIRB models include most independent/commercial IRBs, the Department of Veterans Affairs Central IRB,16 the National Cancer Institute's CIRB,17 the NeuroNEXT network,18 and BRANY.19

In a share model, regulatory responsibilities are shared between the CIRB and the local IRB, most often with the CIRB conducting initial review and allowing other reviews to be assigned to either the local IRB or the CIRB (Table 2). An example of a share model is the IRBshare Collaboration.20

Table 2. CIRB Models: Non-Share and Share.

Task | Non-share model | Share model
Initial protocol review | CIRB | CIRB
Continuing review | CIRB | CIRB or local IRB
Significant amendments | CIRB | CIRB or local IRB
Site-specific amendments | CIRB | CIRB or local IRB
Unanticipated problems | CIRB | CIRB or local IRB
Other events | CIRB | CIRB or local IRB

CIRB, central institutional review board

Share and non-share CIRB models each have benefits and challenges. While the non-share model has the advantage of clarity, it requires that the local institution cede all regulatory authority. The share model allows the local institution an active role in regulatory oversight but adds potential confusion regarding who is responsible for specific tasks. Yet in both the share and non-share model, the institution remains responsible for all non-IRB institutional reviews.

Many proposals to use CIRBs envision the potential benefit of decreasing study start-up time, reducing redundant reviews, and ensuring consistency of research conduct at all sites. It is possible that these potential advantages could be realized. However, systematic data about advantages and disadvantages of different models are, at present, limited. In the absence of accepted quality metrics, most comparisons focus on time and efficiency, and data exist that indicate a shorter time for CIRB review compared with local IRB review.1 But even comparisons limited to time and efficiency must take into account several factors. First is the type of research; the quality and completeness of industry-sponsored/initiated research protocols and investigator-initiated protocols often differ tremendously, and time comparisons should be limited to similar types of research. Second, how is IRB time measured? The IRB review time often appears shorter when completion of all institutional reviews is required prior to submission to the IRB, rather than conducting the institutional reviews in parallel with the IRB review. But the true time for study approval should include both the IRB time and the institutional review time because the study cannot start until both of these have been completed. It is also helpful to consider how time can be saved: early experience with the NeuroNEXT non-share model18,21 suggests that the time saved is not in the initial approval of the protocol, but rather in the ability to add additional study sites quickly, and in continuing review and review of amendments and unanticipated problems across sites (O'Rourke, personal communication 1/6/2015).

Any advantages gained by using a CIRB must be considered in the context of the logistics and costs of relying on a CIRB as well as serving as a CIRB. An institution relying on any CIRB (independent or academic medical center) must develop a process for interacting with the CIRB including procedures to complete non-IRB institutional requirements and to address any local HRPP details prior to allowing a specific study to proceed. This may involve developing new systems for ensuring accountability, and requiring researchers to submit information to both the CIRB and their HRPP.22 While relying on a CIRB decreases resources needed for IRB regulatory review, the institution (often the IRB office) must provide a new process for coordinating CIRB review with the institutional systems for research oversight. Such infrastructure requires additional resources, and in the absence of additional funding, organizations will find themselves reallocating or further overburdening existing and limited resources.

Serving as the CIRB also requires resources. A site serving as the CIRB must develop processes that support interaction with all the involved sites throughout the research project. These processes include developing and negotiating reliance agreements; building good communication and trust with all sites; obtaining feedback about local context (e.g., relevant state laws or local policies) that may affect protocol review; establishing systems for handling amendments, reports of deviations and unanticipated problems, and coordination of review of noncompliance; and completing required reporting to federal agencies. Such oversight often requires additional staffing and changes to IT systems. Independent (commercial) IRBs have their own professional business models with unit charges for various tasks. As CIRBs based at academic medical centers increase in number, each such institution must develop its own business plan for providing CIRB capacity. More information is needed regarding the true costs of providing CIRB capacity in a variety of settings.

When asked to serve as a CIRB, an academic medical center should carefully consider the details of the study, which will help to determine what resources are needed. Elements to consider include the number of different sites in the study, the types of sites, the total number of participants, the type of research, the level of risk (which will determine the nature of review and required oversight), the need for ancillary committee reviews, and the quality of the research team infrastructure. All these will affect the administrative infrastructure and costs necessary to serve as a CIRB. For example, consider the support needed for CIRB review for a 20-site minimal risk study being conducted by a well-developed, experienced research network with clinical and data coordinating centers versus a 20-site high-risk interventional study conducted by a new inter-institutional collaboration with no prior experience as a network. The costs of providing CIRB capacity at an academic medical center require more data and detailed financial models to accommodate the heterogeneity of the requests.

There are unique resource challenges in the reciprocal-deferral model. For any given study, a site has the potential to be the CIRB, defer to another IRB within the system, or retain full local IRB review. While the existence of the RA facilitates rapid activation of a CIRB for specific protocols, the allowance for this autonomy with roles changing from study to study runs the risk of diminishing the overall value of the reciprocal-deferral model.

Use of a CIRB, regardless of the model, is perhaps the best example of streamlining. Though CIRBs do not directly address harmonization, the increased engagement, communication, and collaboration between CIRBs and relying institutions would seem to promote sharing of policies and procedures and ultimately encourage harmonization. But CIRBs should not merely be a default for any multisite research. As noted above, specific multisite research must be carefully matched to a CIRB's infrastructure and capacity. Institutions may also prefer local IRB review for sensitive or high-risk research that may increase institutional risk.

Challenges to harmonization and streamlining

In addition to the challenges of coordinating between local institutions and a CIRB, a number of challenges to the process of research oversight present opportunities for harmonization and streamlining. Table 3 presents 11 important challenges that are applicable to both multisite research and research in general: (1) comparative effectiveness research (CER), (2) social media, (3) software applications, (4) sponsor requirements, (5) patients as researchers, (6) privacy, (7) cluster randomization, (8) local context, (9) conflict of interest, (10) payment, and (11) federal-wide assurance (FWA). While these challenges are not specific to the conduct of PCTs, they may become more complex in PCTs that use innovative study designs.

Table 3. Challenges and Opportunities of Harmonization and Streamlining.

Challenge: Considerations for harmonization or streamlining
Comparative effectiveness research (CER): CER directly compares two treatments that are already routinely used in clinical practice. The assessment of risk in CER, being discussed at the national level, is critical because it directly determines whether or not informed consent can be waived or modified (e.g., oral consent in person or by phone, or electronic consent).23,24 As this debate continues, individual IRBs and institutions may come to different determinations of risk that could undermine multisite CER research, with some sites requiring written consent and others waiving or altering consent. If the research community could reach consensus, in concert with federal regulations, on the issue of risk in CER, it would provide a basis for increased harmonization.25-28
Social media: Social media has become an important tool for outreach; for example, interactive social media can be used to recruit participants, answer questions during consent, or gather data. Traditional IRB review of information provided to potential participants (informed consent forms, informational sheets, letters, advertising, etc.) will need to accommodate these new media.29
A second concern is that many social media interfaces (Twitter, blogs, etc.) are not private and may increase the risk for HIPAA violation.30 Potential partial remedies include allowing monitoring of the social media interaction or limiting use to one-way postings (i.e., not allowing responses/conversations). As social media applications increase in number and variety, there is a critical need for IRBs and privacy boards, in concert with federal regulations and guidance, to develop “best practices.”
Software applications: Many “apps” now monitor personal health information such as exercise, heart rate, blood pressure, sleep patterns, and other data from wearable devices.31-33 By 2018, approximately 50% of the more than 3.4 billion smartphone and tablet users will have downloaded mobile health apps.34 Many issues must be addressed if such apps are used for research, including ensuring data accuracy, authenticity, and validity; details about the app's platform; data security; ownership of the data; and access or use of the data for research or other purposes.35 Additional questions arise if app data are to be included in the electronic medical record. Will a clinician be expected to review these data to arrange for appropriate follow-up? Will there be liability issues if the data are not reviewed and acted on?
A standardized approach for how to assess a specific app and its platform would encourage harmonization between institutions. The question of what can or should be included in the medical record also must be addressed. Problem-solving on this topic will require input from privacy and security experts, the patient community, medical records professionals, and clinicians as well as virtually every component of the HRPP.
Sponsor requirements: Negotiating research grants and contracts can differ widely among institutions. Costs of laboratories, testing, procedures, differences in standard of care (and thus what might not be considered a research procedure), policies for caring for injuries secondary to research participation, and differing efforts of research team members can be institution-specific. Negotiating these details takes time and can delay study initiation. Although local variability challenges general harmonization, it would be worthwhile to identify those elements of grants and contracts that could be harmonized.
Patients and patient advocates as members of the research team: The call to involve patients or advocates in the design and conduct of research is increasing, not only in PCTs but in research generally. Patient roles include consultants, members of advisory groups, and investigators. Institutions and IRBs should consider what levels of education, institutional credentialing, or oversight are required. While many institutions have processes for allowing nonemployees to participate as researchers, these same processes may need to be tailored to address specific issues when patients are on the research team. Standardization of study team roles for patients and patient advocates would increase harmonization and encourage the involvement of patients on research teams.
Privacy: A number of HIPAA issues must be considered if patient-researchers have access to protected health information (PHI).36 HIPAA has different rules for allowing access to PHI for workforce members (people who report in some way to the covered entity) versus non-workforce members. When a patient is a member of the research team, his or her workforce status must be considered. If the patient cannot be considered part of the workforce, then any access to PHI becomes a disclosure to an external entity, and disclosures bring their own required actions (e.g., if pursuant to a waiver of authorization, disclosure must be tracked).
One potential solution is to allow patient-researchers access only to deidentified data because such data would not be considered PHI. However, one goal of pragmatic research is to include patients as active and empowered study team members, and so access to PHI may be critical to their role. A broad discussion and common approach to this issue across IRBs would increase harmonization. This would require input from HIPAA experts as well as IRBs and the institutions.
Cluster randomization: Cluster randomization is a common study design in PCTs in which the unit of randomization is a cluster of participants rather than individual participants.37 When randomization is at the system or unit level, and not at the level of the individual patient, obtaining informed consent may not be practicable. It also might not be possible for the individual to “opt out” of receiving the research intervention. Oversight of cluster-randomized trials (CRTs) remains unclear, with questions about criteria for informed consent, mechanisms for opting out, and delineations of the types of interventions that might be studied using cluster randomization.38 In the absence of guidance on how to consider CRTs, there will be numerous institution-specific approaches. Before CRTs become routine, efforts to develop best practices would be invaluable and lead to harmonization.
Local context: While it has been stated that local context “issues are relatively straightforward in most multisite studies and do not play a major role in IRB deliberations,”39 local context has been a sticking point in advancing centralized review. Local context issues may be based in state or local law as well as institutional policy. Law-based issues may include age of majority and laws around when minors are emancipated; rules guiding surrogate consent (e.g., who, if anyone, can give surrogate consent for research); HIPAA privacy regulations; or state laws regarding specific information (e.g., HIV test results, genetic information). Institutional policy issues may include handling of investigational drugs/agents/biologics; contraception and birth control; engagement of vulnerable populations; conflict of interest management; research billing; language for radiation risks; and institutional biosafety (e.g., handling and storage of live viruses for vaccines). While many of these are part of the non-IRB institutional review, some are closely integrated into the IRB review, and institutional nuances must be considered by the reviewing IRB. In addition, local IRBs are often the most familiar with the researchers and the local research environment, and this familiarity can be difficult to convey adequately to an external IRB. As the mandates for CIRBs increase, local institutions/IRBs will need to identify, describe, and communicate these local issues. There should be a concerted effort to identify ways of handling common “local issues” and distinguishing those that may have an impact on the review of a specific protocol.
Conflict of interest: Conflict of interest has been considered a local context issue. However, all institutions and oversight bodies address the same (or very similar) conflict-of-interest issues. We suggest a national standardized method of managing conflicts of interest, with the option for local oversight to increase the stringency if needed. Standards for determining which conflicts require disclosure, implementation of additional safeguards (such as transferring oversight to a non-conflicted party), or removal of the conflicted party could be agreed upon nationally by a consortium of institutions and applied across institutions.
Payment: Payment for the nonresearch care of patients who participate in research has become complicated. Specifically, some payments are controlled by the Centers for Medicare and Medicaid Services (CMS), with plan coverage varying by region. Until these variations are reduced, CIRBs or HRPPs may need to tailor consent forms to accurately inform participants at each participating institution about which costs will be covered by the research and which will not be. A national standard for CMS plan coverage would help to resolve one aspect of this issue.
Federal-wide assurance (FWA): Federal regulations require an FWA agreement with institutions engaged in research funded by federal monies. Traditionally, academic medical centers and large hospitals or healthcare systems have been the recipients of these funds. However, with the growth of patient-centered research, pragmatic trial designs, and the Patient-Centered Outcomes Research Institute (PCORI), smaller community clinics that are not affiliated with academic medical centers are becoming engaged in federally funded research. These smaller clinics often do not have an FWA or their own (or designated) IRB and are uncertain how to proceed. The expansion of research into nontraditional clinics and practice sites emphasizes the need to identify what options are available to nontraditional sites in order to comply with these federal requirements.

Discussion

For research conducted at multiple sites, the complex system of IRB and non-IRB institutional reviews presents a challenge when each site conducts its own reviews under institution-specific policies and procedures. Any resultant inconsistency of review and inefficiencies can hinder the initiation and conduct of multisite research. In the interest of facilitating multisite research, there has been a call to improve these reviews by streamlining processes and harmonizing approaches across institutions. Because PCTs rely on multisite research, those involved in PCTs have added to the call for improvement.

CIRBs show great potential as a mechanism to streamline and possibly harmonize the review of research protocols. They may, or may not, turn out to be a panacea. We suggest that CIRBs or any other proposed changes should be considered within the complexity of the overall system of review. For any system change, altering one element could have an unexpected impact on other dependent elements; any unexpected problems will also need to be addressed. For example, many institutions have developed an electronic IRB protocol submission system that also triggers and collects documentation of the non-IRB institutional requirements. When an investigator at such institutions uses an external CIRB, in order to obtain and document all non-IRB institutional reviews, the investigator may be asked to submit a “shadow” protocol (to retain appropriate triggers) or to complete other workarounds. Such solutions require more resources, thus minimizing any streamlining gained, especially if the changes are implemented without regard for scalability. There is a critical need for data on the logistics required both for providing CIRB capacity and for relying on a CIRB. A systematic approach to data collection in IRB and institutional oversight functions would enable interim assessments as changes are made.

While we recognize the potential of CIRBs, we also highlight a number of specific issues that present opportunities for harmonization. Notably, most of these are issues of institutional policy and review and are not specific to IRB review. Attention to, and constructive progress on, these could streamline every aspect of research oversight. In the absence of such a coordinated effort, the overall process of research oversight may in fact become more difficult. The oversight of research involving human subjects is complex and “owned” by many. True harmonization and streamlining will require attention to each component of that review.

Conclusion

There is a clarion call for harmonizing and streamlining the oversight of research involving human subjects, with particular urgency for multisite research. The IRB has been the primary target for reform, and CIRBs have been endorsed as a viable solution. But true improvement requires a more comprehensive understanding and review of research oversight. The IRB regulatory review, while important, is only one component among numerous non-IRB institutional policies and reviews.

It is crucial to agree that the ultimate goal is to improve the efficiency and quality of study initiation, conduct, and regulatory oversight without eroding—and possibly even advancing—the protection of human subjects. While time and consistency of review lend themselves to measurement, assessing the quality of protection of human subjects is hindered by the lack of accepted metrics. If alternate methods of review are developed, it will be important to focus research efforts on better metrics to monitor the effect on human participants.

Acknowledgments

The authors would like to thank Liz Wing, MA, for assistance with manuscript development and editing. Ms. Wing is an employee of the Duke Clinical Research Institute, Durham, NC, and received no compensation for her work apart from her usual salary.

Disclosures: Dr. O'Rourke is supported in part by a grant from NIH's National Institute of Neurological Disorders and Stroke (#5U01NS077179-03) and a Patient-Centered Outcomes Research Institute (PCORI) contract (CDRN-1306-04608).

Ms. Patrick-Lake is a patient representative on the Coordinating Center Executive Leadership Committee of PCORnet, the National Patient-Centered Clinical Research Network, and Director of Stakeholder Engagement at the Clinical Trials Transformation Initiative (CTTI). Her views, and those of the other authors of this article, do not necessarily represent the views of PCORI, its Board of Governors, or Methodology Committee.

Dr. Rice is supported in part by grants from NIH's National Heart, Lung, and Blood Institute (R01HL126492) and NIH's National Center for Advancing Translational Sciences (NCATS) (R13TR000052).

Ms. Hart is a member of the PCORnet Ethics and Regulatory Task Force.

Dr. Lantos is supported by a Clinical and Translational Science Award from NIH's NCATS awarded to the University of Kansas Medical Center for Frontiers (UL1TR000001 [formerly #UL1RR033179]).

The other authors have no disclosures. The perspectives in this article are solely the responsibility of the authors and do not necessarily represent the official views of the NIH, PCORI, or CTTI.

Funding: This work is supported by the NIH Common Fund, through a cooperative agreement (U54 AT007748) from the Office of Strategic Coordination within the Office of the NIH Director. The views presented here are solely the responsibility of the authors and do not necessarily represent the official views of the NIH.

References

1. Abbott D, Califf R, Morrison BW, et al. Cycle time metrics for multisite clinical trials in the United States. Ther Innov Regul Sci. 2013;4:152–160. doi: 10.1177/2168479012464371.
2. Califf RM, Sugarman J. Exploring the ethical and regulatory issues in pragmatic clinical trials. Clin Trials. 2015; in press. doi: 10.1177/1740774515598334.
3. Sugarman J, Califf RM. Ethics and regulatory complexities for pragmatic clinical trials. JAMA. 2014;311:2381–2382. doi: 10.1001/jama.2014.4164.
4. Gold JL, Dewa CS. Institutional review boards and multisite studies in health services research: is there a better way? Health Serv Res. 2005;40:291–307. doi: 10.1111/j.1475-6773.2005.00354.x.
5. Petersen LA, Simpson K, Sorelle R, et al. How variability in the institutional review board review process affects minimal-risk multisite health services research. Ann Intern Med. 2012;156:728–735. doi: 10.7326/0003-4819-156-10-201205150-00011.
6. Stark AR, Tyson JE, Hibberd PL. Variation among institutional review boards in evaluating the design of a multicenter randomized trial. J Perinatol. 2010;30:163–169. doi: 10.1038/jp.2009.157.
7. Helfand BT, Mongiu AK, Roehrborn CG, et al. Variation in institutional review board responses to a standard protocol for a multicenter randomized, controlled surgical trial. J Urol. 2009;181:2674–2679. doi: 10.1016/j.juro.2009.02.032.
8. Harvie HS, Lowenstein L, Omotosho TB, et al. Institutional review board variability in minimal-risk multicenter urogynecology studies. Female Pelvic Med Reconstr Surg. 2012;18:89–92. doi: 10.1097/SPV.0b013e318249bd40.
9. Stair TO, Reed CR, Radeos MS, et al. Variation in institutional review board responses to a standard protocol for a multicenter clinical trial. Acad Emerg Med. 2001;8:636–641. doi: 10.1111/j.1553-2712.2001.tb00177.x.
10. Lee DC, Peak DA, Jones JS, et al. Variations in institutional review board reviews of a multi-center, Emergency Department (ED)-based genetic research protocol. Am J Emerg Med. 2013;31:967–969. doi: 10.1016/j.ajem.2013.03.003.
11. Silberman G, Kahn KL. Burdens on research imposed by institutional review boards: the state of the evidence and its implications for regulatory reform. Milbank Q. 2011;89:599–627. doi: 10.1111/j.1468-0009.2011.00644.x.
12. Flynn KE, Hahn CL, Kramer JM, et al. Using central IRBs for multicenter clinical trials in the United States. PLoS One. 2013;8:e54999. doi: 10.1371/journal.pone.0054999.
13. Midwest Area Research Consortium for Health. https://ictr.wisc.edu/march (accessed 29 January 2015).
14. Wisconsin Network for Health Research. https://ictr.wisc.edu/winhr (accessed 29 January 2015).
15. Greater Plains Collaborative. http://www.gpcnetwork.org/ (accessed 29 January 2015).
16. U.S. Department of Veterans Affairs. VA Central Institutional Review Board. http://www.research.va.gov/vacentralirb/ (accessed 13 January 2015).
17. National Cancer Institute. Central Institutional Review Board Initiative. https://www.ncicirb.org (accessed 13 January 2015).
18. National Institutes of Health. NIH Network for Excellence in Neuroscience Clinical Trials. https://www.neuronext.org/ (accessed 13 January 2015).
19. Biomedical Research Alliance of New York (BRANY). http://www.branyirb.com/ (accessed 29 January 2015).
20. Vanderbilt University. IRBshare Collaboration. https://www4.vanderbilt.edu/irb/irbshare/ (accessed 13 January 2015).
21. Kaufmann PG, O'Rourke PP. Central institutional review board review for an academic trial network. Acad Med. 2015;90:321–323. doi: 10.1097/ACM.0000000000000562.
22. Clinical Trials Transformation Initiative (CTTI). Considerations document: use of central IRBs in multicenter clinical trials. http://www.ctti-clinicaltrials.org/what-we-do/study-start/central-irb (accessed 22 September 2014).
23. Lantos JD, Wendler D, Septimus E, et al. Considerations in the evaluation and determination of minimal risk in pragmatic clinical trials. Clin Trials. 2015; in press. doi: 10.1177/1740774515597687.
24. McKinney RE Jr, Beskow LM, Ford DE, et al. Effective use of altered informed consent in pragmatic clinical research. Clin Trials. 2015; in press. doi: 10.1177/1740774515597688.
25. Lantos JD, Spertus JA. The concept of risk in comparative-effectiveness research. N Engl J Med. 2015;372:884. doi: 10.1056/NEJMc1415933.
26. OHRP and standard-of-care research. N Engl J Med. 2014;371:2125–2126. doi: 10.1056/NEJMe1413296.
27. Kass N, Faden R, Tunis S. Addressing low-risk comparative effectiveness research in proposed changes to US federal regulations governing research. JAMA. 2012;307:1589–1590. doi: 10.1001/jama.2012.491.
28. Miller FG, Emanuel EJ. Quality-improvement research and informed consent. N Engl J Med. 2008;358:765–767. doi: 10.1056/NEJMp0800136.
29. Conway M. Ethical issues in using Twitter for public health surveillance and research: developing a taxonomy of ethical concepts from the research literature. J Med Internet Res. 2014;16:e290. doi: 10.2196/jmir.3617.
30. Glenn T, Monteith S. Privacy in the digital world: medical and health data outside of HIPAA protections. Curr Psychiatry Rep. 2014;16:494. doi: 10.1007/s11920-014-0494-4.
31. Arsand E, Froisland DH, Skrovseth SO, et al. Mobile health applications to assist patients with diabetes: lessons learned and design implications. J Diabetes Sci Technol. 2012;6:1197–1206. doi: 10.1177/193229681200600525.
32. Krishnamoorthy S. Nanostructured sensors for biomedical applications: a current perspective. Curr Opin Biotechnol. 2015;34c:118–124. doi: 10.1016/j.copbio.2014.11.019.
33. Boudreaux ED, Waring ME, Hayes RB, et al. Evaluating and selecting mobile health apps: strategies for healthcare providers and healthcare organizations. Transl Behav Med. 2014;4:363–371. doi: 10.1007/s13142-014-0293-9.
34. Research2Guidance. http://research2guidance.com/500m-people-will-be-using-healthcare-mobile-applications-in-2015/ (accessed 30 January 2015).
35. Martinez-Perez B, de la Torre-Diez I, Lopez-Coronado M. Privacy and security in mobile health apps: a review and recommendations. J Med Syst. 2015;39:181. doi: 10.1007/s10916-014-0181-3.
36. McGraw D, Greene SM, Miner CS, et al. Privacy and confidentiality in pragmatic clinical trials. Clin Trials. 2015; in press. doi: 10.1177/1740774515597677.
37. Smalley JB, Merritt MW, Al-Khatib SM, et al. Ethical responsibilities toward indirect and collateral participants in pragmatic clinical trials. Clin Trials. 2015; in press. doi: 10.1177/1740774515597698.
38. Taljaard M, Brehaut JC, Weijer C, et al. Variability in research ethics review of cluster randomized trials: a scenario-based survey in three countries. Trials. 2014;15:48. doi: 10.1186/1745-6215-15-48.
39. Menikoff J. The paradoxical problem with multiple-IRB review. N Engl J Med. 2010;363:1591–1593. doi: 10.1056/NEJMp1005101.
