Author manuscript; available in PMC: 2020 May 14.
Published in final edited form as: J Surg Res. 2016 Jun 8;204(2):481–489. doi: 10.1016/j.jss.2016.06.004

Time required to review research protocols at 10 VA Institutional Review Boards

Patrick R Varley a, Ulrike Feske b, Shasha Gao b, Roslyn A Stone b, Sijian Zhang b, Robert Monte c, Robert M Arnold d, Daniel E Hall a,b
PMCID: PMC7224356  NIHMSID: NIHMS1566386  PMID: 27565086

Abstract

Background:

Despite perceptions that IRBs delay research, little is known about how long it takes to secure IRB approval. We retrospectively quantified IRB review times at 10 large Veterans Affairs (VA) IRBs.

Methods:

We collected IRB records pertaining to a stratified random sample of research protocols drawn from 10 of the 26 largest VA IRBs. Two independent analysts abstracted dates from the IRB records, from which we calculated overall and incremental review times. We used multivariable linear regression to assess variation in total and incremental review times by IRB and review level (i.e., exempt, expedited, or full board) and to identify potential targets for efforts to improve the efficiency and uniformity of the IRB review process.

Results:

In a sample of 277 protocols, the mean review time was 112 days (95% CI: 105, 120). Compared to full-board reviews at IRB 1, average review times at IRBs 3, 8, 9, and 10 were 27 (95% CI: 6, 48), 37 (95% CI: 11, 63), 45 (95% CI: 20, 69), and 24 (95% CI: 2, 45) days shorter, and at IRB 6, times were 56 (95% CI: 28, 84) days longer. Across all IRBs, expedited reviews were 44 (95% CI: 30, 58) days shorter on average than full-board reviews, with no significant difference between exempt and full-board reviews. However, after subtracting the time required for Research and Development (R&D) Committee review, exempt reviews were 21 (95% CI: 1, 41) days shorter on average than full-board reviews.

Conclusions:

IRB review times differ significantly by IRB and review level. Few VA IRBs approach a consensus panel goal of 60 days for IRB review. The unexpectedly longer review times for exempt protocols in the VA can be attributed to time required for R&D Committee review. Prospective, routine collection of key time points in the IRB review process could inform IRB-specific initiatives for reducing VA IRB review times.

Keywords: Quality Improvement, Ethics Committee, Efficiency

1. Introduction

The National Research Act (NRA) of 1974 marked a turning point for the conduct of scientific research in the United States by mandating creation of the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Commission's guidelines for the ethical conduct of human subjects research can be summarized in the core principles of respect for persons, beneficence, and justice [1]. But as the practice of research has matured, so have the ethical guidelines. A modern understanding of ethical research requires that a trial conform to seven principles: (1) social or scientific value, (2) scientific validity, (3) fair subject selection, (4) favorable risk-benefit ratio, (5) independent review, (6) informed consent, and (7) respect for potential and enrolled subjects [2].

Conceived by the National Institutes of Health in the 1960s and required by the NRA, institutional review boards (IRBs) serve as a critical safeguard to ensure that the principles of ethical research are upheld. However, rapid expansion in the volume and complexity of medical research over the past 50 years has led to a concomitant increase in the number of independent IRBs [3]. Though IRBs are unified in the goal of ensuring adequate protection of human subjects, recent studies have documented significant variability across IRBs in the time required for approval [4–10], the number of changes requested by the IRB [8], the thoroughness and cost of review [11–13], and the quality of the overall determination [4, 14, 15].

IRB variability has led to concerns regarding the adequacy of the review process [16], including the time required to secure IRB approval. A systematic review of 25 empirical studies covering 631 IRBs and 336 protocols reported that IRB review typically took between 13 and 116 days, but many protocols took much longer; in fact, the difference between the shortest and longest recorded review within each IRB averaged 220 days [17]. Investigators have argued that IRBs in their current form present a barrier to the conduct of human subjects research [18–20]. In response to these growing concerns, the US Department of Health and Human Services has publicly announced its intent to revise the current regulations governing human subjects research [21]. Proposed changes include broadening the activities covered under the Common Rule (e.g., quality improvement activities, research using data subject to the HIPAA Privacy Rule), prohibiting the use of Broad Consent for studies involving biospecimens, and eliminating continuing review for studies that meet the criteria for expedited review.

Despite desires to reform the review process, a paucity of literature describes structural differences between IRBs and how those differences relate to the speed and quality of review. To begin filling this knowledge gap, we recently published a pilot study describing a method to characterize the review process at one IRB within the Veterans Affairs (VA) Health System [22]. In the present study we have extended our methodology to 10 independent IRBs. We aim to identify potential targets for efforts to improve the efficiency and uniformity of the IRB review process.

2. Methods

2.1. Research Ethics Oversight:

Although data were gathered from 10 VA Medical Centers (VAMCs), the VA Central IRB determined that only a single site, the VA Pittsburgh Healthcare System (VAPHS), was engaged in research; thus, all procedures were approved by the VAPHS IRB (Pro 0326).

2.2. Participating IRBs:

Some VAMCs operate their own IRBs, while others use the IRBs of their university affiliates. To eliminate the variability attributable to the policies and procedures of different university IRBs, we focused exclusively on IRBs operated by the VA. We expected relatively greater consistency across these VA IRBs because they are subject to a uniform set of policies and procedures established by VA Central Office. We purposively sampled from the VA IRBs with the highest volume of newly submitted protocols, soliciting IRB participation sequentially in decreasing order of review volume. A total of 26 VA IRBs were approached to achieve the target sample of 10 IRBs, which included the VA Central IRB and the IRBs of 9 VAMCs. Together, the participating IRBs reviewed 798 newly submitted protocols (444 full board, 283 expedited, and 71 exempt) over the 2-year period between January 1, 2010 and December 31, 2011. IRB 7 contributed no exempt or expedited protocols because the VA Office of Research Oversight (ORO) required it to review all protocols using full-board procedures. IRB 9 contributed no expedited protocols because ORO had encouraged it to use only exempt and full-board procedures. Review volume across the participating IRBs ranged from 29 to 220 newly submitted protocols in this period. We excluded previously approved protocols for which the IRBs were coordinating adverse event reporting and continuing review, as these activities were beyond the scope of the present study.

2.3. Mapping the IRB Review Process:

Between July 2012 and April 2013, 2 staff from the VAPHS Veterans Engineering Resource Center (VERC) visited each IRB to develop a process flow map of the review process for newly submitted protocols. Using standard systems engineering methods, representatives of the VERC gathered stakeholders in the review process to describe and map each incremental step in the review process. Whenever possible, specific processes were described only by the people who actually execute them. Flow maps were developed in a morning session lasting 2–3 hours, and then reviewed and revised in an afternoon session lasting 1–2 hours. Flow maps were developed on paper and then rendered electronically using Microsoft Visio (Redmond, WA). Research staff developed standard conventions for representing similar procedures across IRBs to facilitate comparisons. Narrative descriptions of each flow map detailed each step in the process. Flow maps for each IRB are available in the supplementary materials.

2.4. Sampling of Research Protocols:

We selected a stratified random sample of research protocols submitted for initial review from January 2011 to December 2012. The goal was to select 5 to 15 protocols for each level of review per IRB. When more than 15 protocols were available for a given review level at a single IRB, 15 were selected at random. If there were 5–15 protocols at a review level, all were included. If there were fewer than 5 protocols at a review level, none were included because the data would be too sparse; a sketch of this selection rule appears below. For each included protocol, we scanned copies of all available IRB materials related to the initial submission and review, up to and including the final determination: the IRB protocol itself, the related grant proposal, consent documents, HIPAA waivers, correspondence between IRB staff and investigators, reviewer checklists, approval letters from the IRB, Privacy Officer (PO), Information Security Officer (ISO), and Research and Development (R&D) Committee, and minutes from the IRB meeting(s) where the protocol was discussed.
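To make the selection rule concrete, here is a minimal sketch of the stratified sampling logic in Python. It is illustrative only (not the code used in the study) and assumes each protocol is a record with hypothetical `irb` and `review_level` fields.

```python
import random

def sample_protocols(protocols, seed=0):
    """Apply the stratified sampling rule: within each (IRB, review level)
    stratum, take all protocols if 5-15 are available, randomly select 15
    if more than 15, and exclude strata with fewer than 5."""
    rng = random.Random(seed)
    strata = {}
    for p in protocols:
        strata.setdefault((p["irb"], p["review_level"]), []).append(p)
    sample = []
    for stratum in strata.values():
        if len(stratum) < 5:
            continue  # fewer than 5: data too sparse, exclude the stratum
        elif len(stratum) > 15:
            sample.extend(rng.sample(stratum, 15))  # cap at 15, chosen at random
        else:
            sample.extend(stratum)  # 5-15 available: include all
    return sample
```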

2.5. Abstracting Review Times:

2.5.1. Constructing IRB-Specific Databases:

The IRB review processes were similar across study IRBs, but no two IRBs had identical processes. We therefore developed IRB-specific databases (Microsoft Access; Redmond, WA) that reflected each IRB's particular review process. Abstractors could enter specific dates at each defined step in the review process. In addition, general information about each protocol was recorded, such as the type of study, sponsorship, participant risk level, and review level.

2.5.2. Data Abstraction:

IRB review times were extracted by one primary and one secondary coder. The primary coder (LL) had extracted review times for a previous project in which these methods were initially developed [22]. Given the variability between IRBs and protocols, it was not possible to locate dates for every step in each IRB’s process, but the coding effort preserved as much detail as possible. In close collaboration with the PI (DEH), the primary coder first developed a detailed IRB-specific codebook that specified the documents associated with specific steps in the review process as well as rules for interpreting the documents and abstracted dates. The primary coder extracted review times from all 319 IRB protocols. To assess the reliability and consistency of coding, the secondary coder extracted review times from 72 (22.4%) randomly selected protocols across the study IRBs.

2.6. Data Analysis:

From the IRB-specific dates, unique variables were defined to represent the elapsed time from initial submission to final determination as well as the times attributable to the intervening steps within the IRB review process. After reviewing the summary statistics for the IRB-specific variables, the PI identified those time intervals that were most comparable across IRBs (Figure 1). For example, each IRB had a “pre-review” process with an associated pre-review time calculated from the date of submission to the date that the protocol was transmitted to the IRB for initial review. The total review time was based on the date of submission and the date of approval from the Associate Chief of Staff (ACOS) for Research and Development (R&D).
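As a concrete illustration of how such interval variables can be derived from abstracted dates, consider the following sketch; the column names are hypothetical, and missing dates simply propagate to missing review times.

```python
import pandas as pd

# Hypothetical abstracted dates for two protocols; names are illustrative.
records = pd.DataFrame({
    "submitted":     pd.to_datetime(["2011-03-01", "2011-06-15"]),
    "sent_to_irb":   pd.to_datetime(["2011-03-10", None]),   # missing for protocol 2
    "acos_approved": pd.to_datetime(["2011-07-02", "2011-09-30"]),
})

# Pre-review time: submission to transmittal to the IRB for initial review.
records["prereview_days"] = (records["sent_to_irb"] - records["submitted"]).dt.days

# Total review time: submission to ACOS approval, in calendar days.
records["total_days"] = (records["acos_approved"] - records["submitted"]).dt.days

print(records[["prereview_days", "total_days"]])
```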

Figure 1: Schematic of Key Time Intervals in the VA IRB Review Process

Statistical analyses focused on describing the total and incremental review times within and across IRBs, stratified by review level. Variation by IRB and review level was assessed using multivariable linear regression with a robust variance estimator; we used multi-parameter Wald tests to assess variation by IRB and review level. All review times were rounded to the nearest day. We also conducted exploratory analyses to investigate the effects of industry sponsorship and therapeutic trials, the roles of pre-review and required final approvals, and associations with stakeholders’ perceptions of IRB performance. All statistical analyses were conducted using Stata 13 (StataCorp, College Station, TX).
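For readers who prefer an executable sketch, the following Python code (the analyses themselves were run in Stata) fits an analogous linear model with a heteroskedasticity-robust variance estimator and reports joint Wald tests for the IRB and review-level terms. The file and variable names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per protocol: total_days (numeric), irb (1-10), level
# (exempt / expedited / full_board). The file name is hypothetical.
df = pd.read_csv("review_times.csv")

# OLS for total review time with robust (HC1) standard errors,
# treating full-board review as the reference level.
fit = smf.ols(
    "total_days ~ C(irb) + C(level, Treatment('full_board'))", data=df
).fit(cov_type="HC1")
print(fit.summary())

# Multi-parameter Wald tests: joint significance of IRB and review level.
print(fit.wald_test_terms())
```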

3. Results

3.1. Descriptive Analysis:

We obtained documentation from 319 sampled protocols of diverse design (Table 1). Overall, 63 (19.8%) protocols were determined to be exempt from IRB review, 117 (36.7%) were approved under expedited procedures, and 139 (43.6%) were approved under full-board procedures. The 33 industry-sponsored protocols accounted for 23.7% of the full-board protocols (10.3% of all protocols). Overall, 26.3% of the protocols were survey or interview studies and 35.7% were database analyses.

Table 1: Description of 319 Sampled Research Protocols

                                     Exempt        Expedited     Full Board    Total
                                     N      %      N      %      N      %      N      %
Total Sample                         63     19.8   117    36.7   139    43.6   319    100
Type of Study
  Phase I, I/II, or II               0      0.0    1      0.9    4      2.9    5      1.6
  Phase III                          0      0.0    0      0.0    16     11.5   16     5.0
  Phase IV                           0      0.0    0      0.0    6      4.3    6      1.9
  Survey or Interview                18     28.6   43     36.8   23     16.6   84     26.3
  Database Analysis                  37     58.7   58     49.6   19     13.7   114    35.7
  Non-database Observational         7      11.1   9      7.7    19     13.7   35     11.0
  Therapeutic Interventional¹        1      1.6    3      2.6    43     30.9   47     14.7
  Non-Therapeutic Interventional     0      0.0    2      1.7    8      5.8    10     3.1
  Not Specified                      0      0.0    1      0.9    1      0.7    2      0.6
Industry Sponsored                   0      0.0    0      0.0    33     23.7   33     10.3
Greater than Minimal Level of Risk   1      1.6    1      0.9    89     64.0   91     28.5

¹ Excludes Phase I–IV studies.

Agreement between the primary and secondary coders was strong (98.3%, based on 3038 of 3090 abstracted time points), demonstrating excellent reliability in the abstraction procedures. However, because not all protocols had documented dates of both IRB submission and final determination, total review time could be calculated for only 277 (87.4%) protocols (48 exempt; 105 expedited; 124 full-board).

In this sample of 277 protocols, the mean total review time was 112 days (95% CI: 105, 120; Table 2). Across all IRBs, mean total review times were shortest for expedited protocols (93 days; 95% CI: 83, 103), somewhat longer for exempt protocols (107 days; 95% CI: 92, 123), and longest for full-board protocols (131 days; 95% CI: 120, 142); the corresponding medians were 83, 98, and 116 days. However, there was substantial variation around these measures of central tendency, with some right skew (Figure 2). Overall, only 18% of protocols took 60 days or less to review (10% full board, 21% exempt, 27% expedited), although the proportion increased to 34% (19% full board, 40% exempt, 48% expedited) when the threshold was raised to 79 days. The durations of the pre-review and IRB review portions appeared to be the most variable across institutions, whereas the durations of the final approval process, R&D review, and ACOS signature appeared similar (Supplementary Table 1).

Table 2: Medians, Inter-Quartile Ranges (IQRs), Means, and 95% Confidence Intervals (95% CIs) of Total IRB Review Times (in Calendar Days) for 10 IRBs, by Review Level and Overall

IRB                    Exempt           Expedited        Full Board       All Levels
1   Median (IQR)       121 (98–496)     81 (69–106)      151 (125–173)    121 (81–151)
    Mean (95% CI)      140 (94–186)     86 (70–102)      148 (119–177)    122 (103–141)
    N                  9                11               9                29
2   Median (IQR)       88 (67–120)      87 (71–96)       128 (67–173)     89 (67–135)
    Mean (95% CI)      100 (44–155)     85 (68–102)      122 (83–161)     104 (85–123)
    N                  5                13               14               32
3   Median (IQR)       110 (70–140)     54 (34–75)       100 (83–137)     86 (58–132)
    Mean (95% CI)      115 (84–145)     61 (43–79)       116 (88–143)     97 (81–112)
    N                  14               15               15               44
4   Median (IQR)       91 (76–146)      78 (49–109)      148 (95–183)     95 (76–148)
    Mean (95% CI)      109 (73–145)     92 (56–127)      140 (102–178)    110 (89–131)
    N                  7                14               9                30
5   Median (IQR)       --               107 (83–112)     148 (90–203)     109 (85–146)
    Mean (95% CI)      --               101 (84–117)     156 (108–203)    126 (102–150)
    N                  --               13               11               24
6   Median (IQR)       --               158 (131–183)    214 (133–223)    170 (131–214)
    Mean (95% CI)      --               159 (128–190)    195 (150–240)    176 (150–201)
    N                  --               14               12               26
7   Median (IQR)       --               --               103 (92–149)     103 (92–149)
    Mean (95% CI)      --               --               136 (89–184)     136 (89–184)
    N                  --               --               14               14
8   Median (IQR)       48 (30–79)       70 (37–118)      88 (62–112)      76 (48–101)
    Mean (95% CI)      73 (27–119)      90 (44–136)      94 (69–118)      87 (66–107)
    N                  9                12               12               33
9   Median (IQR)       87 (69–112)      --               97 (58–126)      93 (63–126)
    Mean (95% CI)      91 (46–135)      --               97 (72–123)      96 (76–116)
    N                  4                --               14               18
10  Median (IQR)       --               63 (49–73)       121 (83–176)     83 (55–134)
    Mean (95% CI)      --               67 (52–82)       127 (98–157)     98 (78–118)
    N                  --               14               13               27
All Median (IQR)       98 (69–140)      83 (55–112)      116 (84–473)     98 (70–148)
    Mean (95% CI)      107 (92–123)     93 (83–103)      131 (120–142)    112 (105–120)
    N                  48               105              124              277

Note: Dashes denote cells for which medians, means, or IQRs cannot be estimated due to insufficient sample size.

Figure 2: Total Review Time (in Calendar Days) by Review Level across 10 IRBs

3.2. Models for Total Review Time:

Multivariable linear regression models of total review time including IRB and review level (Table 3) demonstrate that, compared to the mean of 143 days required for full-board reviews at IRB 1, average review times at IRBs 3, 8, 9, and 10 were 27 (95% CI: 6, 48), 37 (95% CI: 11, 63), 45 (95% CI: 20, 69), and 24 (95% CI: 2, 45) days shorter, and at IRB 6, times were 56 (95% CI: 28, 84) days longer. Both IRB and review level were significant predictors in this model (P<0.001 for each). Adjusted for IRB, expedited reviews were 44 (95% CI: 30, 58) days shorter on average than full-board reviews, with no significant difference between exempt and full-board reviews. However, after subtracting the time required for R&D Committee review, exempt reviews were 21 (95% CI: 1, 41) days shorter on average than full-board reviews.

Table 3: Estimated Review Times and 95% Confidence Intervals (in Calendar Days) Based on Multivariable Models Adjusting for IRB and Type of Review. IRB 1 Full Board Review Is the Reference; Other Estimates Reflect Differences from the Reference. Entries are Mean (95% CI).

Columns, in order: (a) Total Review Time, N=277; (b) Pre-Review Time, N=179; (c) IRB Review Time, N=181; (d) Final Approval Time, N=176; (e) R&D Time¹, N=197; (f) ACOS Signature Time¹, N=161; (g) Total Review Time (R&D Subsample), N=179; (h) Total Review minus R&D Time, N=179.

Parameter           (a)             (b)           (c)             (d)            (e)           (f)          (g)              (h)
IRB 1 (reference)   143 (126,160)   1 (−7,8)      70 (42,98)      23 (16,31)     11 (6,17)     5 (1,9)      142 (124,160)    135 (117,152)
IRB 2               −19 (−42,3)     49 (32,66)    −38 (−66,−11)   4 (−7,15)      28 (−15,72)   2 (−8,12)    −20 (−103,63)    −41 (−85,3)
IRB 3               −27 (−48,−6)    3 (−5,11)     −35 (−58,−11)   --             38 (15,60)    12 (7,17)    −26 (−48,−5)     −48 (−69,−27)
IRB 4               −9 (−33,15)     24 (14,35)    13 (−28,54)     14 (3,24)      21 (7,35)     −1 (−7,5)    3 (−39,44)       −19 (−63,24)
IRB 5               7 (−19,32)      15 (4,25)     33 (0,67)       −10 (−17,−3)   2 (−3,8)      0.5 (−4,5)   4 (−22,29)       −4 (−29,22)
IRB 6               56 (28,84)      25 (15,36)    78 (36,120)     --             --            --           --               --
IRB 7               −7 (−53,39)     60 (26,95)    8 (−37,53)      −15 (−23,−6)   −4 (−10,2)    −4 (−9,1)    −5 (−52,41)      −5 (−51,40)
IRB 8               −37 (−63,−11)   43 (27,59)    −33 (−59,−7)    3 (−4,10)      10 (3,17)     −4 (−8,0)    −36 (−63,−10)    −48 (−72,−23)
IRB 9               −45 (−69,−20)   14 (2,27)     1 (−32,34)      −8 (−18,3)     2 (−8,12)     −4 (−8,0)    −43 (−69,−17)    −47 (−73,−21)
IRB 10              −24 (−45,−2)    19 (12,26)    −19 (−49,11)    27 (9,44)      14 (−5,33)    −4 (−7,0)    −17 (−42,8)      −37 (−57,−17)
Exempt              −14 (−32,3)     7 (−5,20)     −16 (−40,9)     −6 (−14,2)     −1 (−8,7)     −1 (−3,1)    −10 (−31,10)     −21 (−41,−1)
Expedited           −44 (−58,−30)   5 (−5,15)     −25 (−44,−7)    −13 (−22,−4)   −6 (−13,2)    −1 (−3,0)    −43 (−61,−25)    −37 (−54,−20)

¹ These models include a significant IRB 3 by Exempt Review interaction term. Dashes denote cells for which means cannot be estimated due to insufficient sample size.
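Because all entries in Table 3 are differences from the IRB 1 full-board reference, a cell-specific estimate is obtained by adding the relevant offsets to the reference mean. A small worked example (ignoring the interaction term noted in the footnote):

```python
# Point estimates (in days) taken from the Total Review Time column of Table 3.
reference = 143        # IRB 1, full-board review
irb3 = -27             # IRB 3 vs. IRB 1
expedited = -44        # expedited vs. full-board review

# Model-based estimate for an expedited protocol at IRB 3:
print(reference + irb3 + expedited)  # 143 - 27 - 44 = 72 days
```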

We explored the association of industry sponsorship with total review time. Although industry sponsorship as a univariate predictor added 26 days to total review time on average, this effect shrank to 9 days and lost statistical significance when the model controlled for IRB and review level. This suggests that industry sponsorship is primarily an indicator of a specific review level (i.e., full board) that is associated with a longer review time. Similarly, protocols studying therapeutic interventions, including surgery, did not demonstrate significantly different review times than other types of protocols after controlling for IRB and review level.

3.3. Models for Incremental Review Times:

Across all IRBs, the total review process was subdivided into pre-review, IRB review, and final approval procedures, where final approval included at least the time required for R&D review and the ACOS signature (Figure 1). Multivariable linear regression models (Table 3) demonstrate that review level was not associated with pre-review time, and that all IRBs except IRB 3 had longer pre-review times than did IRB 1. Expedited protocols had IRB review and final approval times that were, respectively, 25 and 13 days shorter on average than those of full-board reviews. Some variation across IRBs was apparent for each of the incremental review times considered.

Exempt protocols were non-significantly shorter on average than full-board reviews for all of the incremental review times except pre-review time (Table 3). This finding was unexpected, but we noted that at all IRBs, when a protocol was exempted from IRB review, the R&D Committee was tasked with the scientific review that it typically delegates to the IRB for expedited and full-board reviews. We hypothesized that this added time in R&D review might explain the apparent lack of difference between full-board and exempt reviews. To explore this hypothesis, we fit 2 models restricted to the 179 protocols that had both total and R&D review times, as sketched below. The first model, estimating total review time, recapitulated the findings from the full sample of 277 protocols: expedited reviews were 43 days faster on average than full-board reviews, with no significant difference between the total review times of full-board and exempt protocols. We then defined a new variable by subtracting the length of R&D review from the total review time; this non-R&D portion of the total review time was significantly shorter for exempt protocols than for full-board reviews, by 21 days on average.
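A minimal sketch of this restricted analysis, continuing with the hypothetical data frame from the Methods sketch and assuming an abstracted `rd_days` column:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("review_times.csv")  # hypothetical input, as above

# Keep only protocols with both total and R&D review times observed.
sub = df.dropna(subset=["total_days", "rd_days"]).copy()

# Non-R&D portion of the total review time.
sub["total_minus_rd"] = sub["total_days"] - sub["rd_days"]

# Fit the same model to each outcome and compare the exempt coefficients.
for outcome in ["total_days", "total_minus_rd"]:
    fit = smf.ols(
        f"{outcome} ~ C(irb) + C(level, Treatment('full_board'))", data=sub
    ).fit(cov_type="HC1")
    print(outcome, fit.params.filter(like="C(level"))
```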

3.4. Relationship between Total and Pre-review Times:

To explore the relationship between total and pre-review times, we modeled the non-pre-review portion of total review time, controlling for IRB, review level, and the statistical interaction between IRB and pre-review time (data not shown). For each additional day spent in pre-review, total review times at IRBs 2 and 3 were, respectively, 0.43 (95% CI: 0.04, 0.82) and 0.65 (95% CI: 0.05, 1.26) days shorter on average.
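In regression-formula terms, this model can be written with an IRB-by-pre-review interaction; the sketch below uses the same hypothetical column names as above:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("review_times.csv")  # hypothetical input, as above

# Outcome: total review time excluding the pre-review interval.
df["non_prereview_days"] = df["total_days"] - df["prereview_days"]

# C(irb) * prereview_days expands to IRB main effects, a baseline
# pre-review slope, and IRB-specific deviations from that slope.
fit = smf.ols(
    "non_prereview_days ~ C(level) + C(irb) * prereview_days", data=df
).fit(cov_type="HC1")

# IRB-specific slope terms for pre-review time.
print(fit.params.filter(like="prereview_days"))
```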

3.5. Association with Perceptions of IRB Performance:

Finally, to explore how IRB stakeholders’ perceptions of IRB performance and culture correlate with total review times, we merged this dataset with the results of an associated survey of the IRB members and investigators involved with the protocols studied here [23]. We found no correlation between review time and the survey responses from investigators. However, mixed model analyses demonstrated that IRB members’ ratings of their own IRBs were associated with total review times at those IRBs. In particular, shorter review times were noted at IRBs where the timeliness of review was rated highly (43 days shorter on average, p=0.02) and where the IRB was perceived to be as concerned with facilitating research as with protecting participants’ rights (14 days shorter on average, p=0.02). Although IRB-level differences were reported regarding the adequacy of resources allocated to the IRB (including staff), we noted no association between the adequacy of resource allocation and total review time.

4. Discussion

Across the diverse portfolios of research managed by 10 of the largest VA IRBs, we found that the average time required for IRB review was 107 days, and only 18% of protocols met the 60-day target suggested as best practice by a National Heart, Lung, and Blood Institute working group [24]. This average is also longer than the times reported in a systematic review of 25 empirical studies of IRBs, which suggests that IRB review typically requires 63 days in the United States [17]. We note, however, that VA regulations require additional procedures and protections, stipulating that the R&D Committee and ACOS review and approve the determinations made by the IRB. These additional protections add an average of 19 days to the total review time (Supplementary Table 1); comparing these data to those derived from non-VA IRBs may therefore require adjustment for the additional 3 weeks of review time that is idiosyncratic to the VA environment. For example, after adjusting the target from 60 to 79 days, the proportion of protocols meeting the target increased to 34%. We also note that existing studies of IRB efficiency report wide variation in IRB review times, and that the 107-day average described here is neither unusual nor extraordinarily long.

Despite these qualifications, we noted significant differences in review times by IRB and review level that demonstrate that some of the studied IRBs approach or meet the best practice standard of 60 days for IRB review. Specifically, across all review levels, IRB 8 approaches this ideal, and expedited reviews at IRBs 3 and 10 meet this standard. IRBs 1 and 6 demonstrated significantly longer review times, and as such, those IRBs may consider changes to their local procedures aimed at increasing the efficiency of IRB review to approach at least the VA median if not the best practice standard. Such changes might be informed by the procedures at IRBs 3, 8 and 10 that have relatively shorter IRB review times than do the other IRBs in our sample.

Counterintuitively, the minimal risk studies qualifying for exemption from IRB review did not take any less time to secure approval than did riskier protocols requiring full-board review. Although we had noted this finding in a previous single-site study [22], we were surprised to find that the pattern was consistent across all of the studied IRBs except IRB 8. One recent study reported that minimal risk studies took less time to review than greater-than-minimal risk studies [25], but it did not differentiate between minimal risk studies that were exempt and those that were expedited. We suspect there are several potential explanations for the surprisingly long review times of exempt protocols. First, because protocols qualifying for exemption are often conducted by students, residents, or other less-experienced investigators, a portion of the delay may be attributable to this inexperience. Second, we suspect that it is often easier for investigators to demonstrate that a minimal risk study complies with the requirements for expedited review than to persuade IRB reviewers that a given set of procedures is not “human subjects research” and thus qualifies for exemption. Finally, the above-mentioned idiosyncrasy of VA regulations requires that the R&D Committee conduct scientific review of protocols that qualify for exemption from IRB review, and our statistical models suggest that the time spent in R&D review is at least one of the factors adding unexpected length to the review of IRB-exempt protocols. There may be good reasons for the VA to require this second-level review of exempt protocols, but it is unlikely to provide additional human subjects protections because its focus is exclusively the scientific merit of the proposed research. These data may provide motivation for revising this idiosyncratic requirement.

Local procedures for pre-review of IRB protocols exhibited some of the widest variation in procedure and time across the IRBs. All but IRB 3 used pre-review procedures that took longer than those at IRB 1, but in most IRBs the pre-review process did not significantly increase the overall review time; it merely shifted time into the pre-review period. We suspect this adds significant value by assigning work to qualified individual reviewers who can complete portions of the review without the cost of a fully convened IRB, many members of which command higher salaries than the IRB staff conducting the pre-review. Furthermore, at IRBs 2 and 3, time spent in pre-review actually shortened the overall review time, a significant improvement that might become a model of best practice for the slower IRBs (e.g., IRBs 1 and 6).

These data suggest several possible targets for quality improvement. First, the VA’s idiosyncratic requirement for R&D review delays final approval by 3 weeks on average, and this component of the review process seems to account for a significant portion of the unexpected delay in exempt reviews. As such, these data may support revising VA policy to eliminate this requirement, or at least to empower the chair of the R&D Committee with executive authority to approve qualifying studies between fully convened meetings of the R&D Committee, which typically occur only once per month. Second, the paradoxical but consistent finding that exempt reviews take no less time than full-board reviews suggests that QI efforts could focus on streamlining exempt review procedures by identifying and mitigating the sources of this paradoxical delay. Third, the procedures at IRBs with comparatively shorter review times could become models of best practice to improve review times throughout the VA, especially at those IRBs with significantly longer review times. Finally, we observed shorter review times at IRBs whose members reported that their IRB is as concerned with facilitating research as with protecting participants’ rights. As such, IRB review times might be shortened by efforts to shift IRB culture away from risk aversion toward a more balanced focus on both facilitating research and protecting participants.

Despite these specific suggestions for improvement, IRBs are deliberately decentralized and local. The purpose of this decentralization is to ensure that research oversight is sensitive to local values and concerns. As such, variation across IRBs is predictable, if not purposeful. In similar fashion, IRB quality improvement will require IRB-specific initiatives aimed at addressing concerns particular to both local stakeholders and local procedures. Our research may support such local QI initiatives because it demonstrates a reliable and feasible way to assess IRB- and protocol-specific differences in review times. If, for example, IRB stakeholders expressed concern about VA regulations regarding off-site storage of data or specimens, those stakeholders could use this method to measure the review times of a sample of local protocols, along with details about off-site storage, to test whether those regulations are a source of delay; if they proved to be, the same methods could be used to track improvement after implementing procedures aimed at mitigating the delay.

Finally, these data have implications for other investigators and IRBs within and beyond the VA. The largest systematic review of IRB review times [17] noted limits to the generalizability of evidence that came from only a single IRB or, more typically, from multiple IRBs reviewing a single multi-site study. The present study addresses this limitation by presenting findings from a unified method applied to 10 different IRBs across the entire portfolio of VA research. Additionally, we introduce a method for characterizing IRB review procedures that proved adaptable across a diverse sample of IRBs; this methodology identified shared components of the review process and allowed estimation of their impact on the review time as a whole. In light of proposed changes to the Common Rule aimed at facilitating research and streamlining human subjects protections [21], IRBs may see a significant change in the nature of the proposals they review. Proposed changes may limit the volume of low-risk studies requiring review, but this reduction may be offset by a heightened focus on procedures surrounding informed consent and the use of biospecimens in the studies that are reviewed. As IRBs re-organize to meet these new standards, we believe these procedures could provide an evidence base for reform if applied to a larger group of IRBs.

This study is limited in several ways. First, although the findings are likely representative of larger VA IRBs, it is not clear whether they generalize to smaller VA IRBs or to IRBs outside the VA. Nevertheless, this report provides the most comprehensive data published to date regarding the time required for IRB review at VA Medical Centers, and by applying consistent methods to a large and representative sample of protocols from multiple IRBs, it addresses the noted limitations of previous studies [17], which typically examined either a single IRB or a single multi-site protocol reviewed by multiple IRBs. Second, our sample contains a substantial portion of missing data because the IRB records from which dates were abstracted were not always complete, nor were they designed to record the overall or incremental review times defined in this study. If IRB review times remain a concern, motivated IRBs will need to adopt procedures, such as online submission systems, that more completely capture data regarding review times.

In addition, we were limited to the times recorded in the documents retained as part of the official IRB record and sampled by our team. For example, we were not able to assess the amount of time an investigator spent preparing the initial draft of a protocol prior to submission. Time invested before submission, perhaps in informal consultation with IRB staff, could influence the time required for formal review. Were they available, such data might also explain why we did not identify industry sponsorship as a significant predictor of review time. Though industry sponsorship introduces a host of issues that can influence study approval, many of these issues are likely addressed by the sponsor prior to formal submission, resulting in a comparatively more polished protocol than those prepared without the resources afforded by industry sponsorship. We also were unable to attribute sources of delay to interactions between the IRB and investigators during the review process. Finally, variations in review procedures at each IRB limit the granularity of comparisons between IRBs. However, quality improvement is inherently local, and thus the point of gathering these data is less about developing generalizable findings and more about informing local initiatives for quality improvement.

5. Conclusions

VA IRB review times differ significantly by IRB and review level. Some IRBs approach a consensus panel goal of 60 days for IRB review, but other IRBs need improvement to reach this goal. Review times for exempt protocols are unexpectedly long due to time required for R&D Committee review. These data could inform IRB-specific initiatives for reducing IRB review times.

Supplementary Material

Supplementary Figures
Supplementary Tables
Supplementary Materials

ACKNOWLEDGEMENTS:

This research was supported by the US Department of Veterans Affairs, Veterans Health Administration, Office of Research and Development, Health Services Research and Development (CDA 08-281 and SDR 11-399-1). We are grateful to Laura Lupovitz for her careful coding of IRB review times, to the staff of the VA Pittsburgh Veterans Engineering Resource Center for their assistance with process flow mapping, and to the IRB staff at each of the participating institutions for their participation and support. We are also grateful to Michael Fine, Bruce Ling, Charles Lidz, Galen Switzer, Susan Zickmund, Barbara Hanusa, and Aram Dobalian, who were involved in other parts of this research and served throughout as advisors to our team.

FINANCIAL SUPPORT: This research was supported by the US Department of Veterans Affairs, Veterans Health Administration, Office of Research and Development, Health Services Research and Development (CDA 08-281 and SDR 11-399-1).

Footnotes

DISCLOSURE: The opinions expressed in this review are those of the authors and do not necessarily reflect the position of the Department of Veterans Affairs or the United States government.

References

  • [1] National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. Washington, DC: US Government Printing Office, 1979.
  • [2] Emanuel EJ, Wendler D, Grady C. What makes clinical research ethical? JAMA 2000;283:2701–2711.
  • [3] Catania JA, Lo B, Wolf LE, Dolcini MM, Pollack LM, et al. Survey of U.S. human research protection organizations: workload and membership. J Empir Res Hum Res Ethics 2008;3:57–69.
  • [4] Driscoll A, Currey J, Worrall-Carter L, Stewart S. Ethical dilemmas of a large national multi-centre study in Australia: time for some consistency. J Clin Nurs 2008;17:2212–2220.
  • [5] Dyrbye LN, Thomas MR, Mechaber AJ, Eacker A, Harper W, et al. Medical education research and IRB review: an analysis and comparison of the IRB review process at six institutions. Acad Med 2007;82:654–660.
  • [6] Dziak K, Anderson R, Sevick MA, Weisman CS, Levine DW, et al. Variations among Institutional Review Board reviews in a multisite health services research study. Health Serv Res 2005;40:279–290.
  • [7] Pogorzelska M, Stone PW, Cohn EG, Larson E. Changes in the institutional review board submission process for multicenter research over 6 years. Nurs Outlook 2010;58:181–187.
  • [8] Goodyear-Smith F, Lobb B, Davies G, Nachson I, Seelau SM. International variation in ethics committee requirements: comparisons across five Westernised nations. BMC Med Ethics 2002;3:E2.
  • [9] Larson E, Bratts T, Zwanziger J, Stone P. A survey of IRB process in 68 U.S. hospitals. J Nurs Scholarsh 2004;36:260–264.
  • [10] Porcu L, Poli D, Torri V, Rulli E, Di Tullio MC, et al. Impact of recent legislative bills regarding clinical research on Italian ethics committee activity. J Med Ethics 2008;34:747–750.
  • [11] Byrne MM, Speckman J, Getz K, Sugarman J. Variability in the costs of institutional review board oversight. Acad Med 2006;81:708–712.
  • [12] Wagner TH, Bhandari A, Chadwick GL, Nelson DK. The cost of operating institutional review boards (IRBs). Acad Med 2003;78:638–644.
  • [13] Wagner TH, Murray C, Goldberg J, Adler JM, Abrams J. Costs and benefits of the National Cancer Institute Central Institutional Review Board. J Clin Oncol 2010;28:662–666.
  • [14] Belknap SM, Georgopoulos CH, West DP, Yarnold PR, Kelly WN. Quality of methods for assessing and reporting serious adverse events in clinical trials of cancer drugs. Clin Pharmacol Ther 2010;88:231–236.
  • [15] Dorr DA, Burdon R, West DP, Lagman J, Georgopoulos C, et al. Quality of reporting of serious adverse drug events to an institutional review board: a case study with the novel cancer agent, imatinib mesylate. Clin Cancer Res 2009;15:3850–3855.
  • [16] Emanuel EJ, Wood A, Fleischman A, Bowen A, Getz KA, et al. Oversight of human participants research: identifying problems to evaluate reform proposals. Ann Intern Med 2004;141:282–291.
  • [17] Silberman G, Kahn KL. Burdens on research imposed by institutional review boards: the state of the evidence and its implications for regulatory reform. Milbank Q 2011;89:599–627.
  • [18] Infectious Diseases Society of America. Grinding to a halt: the effects of the increasing regulatory burden on research and quality improvement efforts. Clin Infect Dis 2009;49:328–335.
  • [19] Marsh JL, McMaster W, Parvizi J, Katz SI, Spindler K. AOA Symposium. Barriers (threats) to clinical research. J Bone Joint Surg Am 2008;90:1769–1776.
  • [20] Straight TM. Clinical research regulation: challenges to the institutional review board system. Clin Dermatol 2009;27:375–383.
  • [21] Department of Health and Human Services. Federal Policy for the Protection of Human Subjects. Washington, DC: United States Government, 2015:53931–54061.
  • [22] Hall DE, Hanusa BH, Stone RA, Ling BS, Arnold RM. Time required for institutional review board review at one Veterans Affairs medical center. JAMA Surg 2015;150:103–109.
  • [23] Hall D, Feske U, Hanusa BH, Ling BS, Stone RA, et al. Prioritizing initiatives for Institutional Review Board (IRB) quality improvement. AJOB Empirical Bioethics (in press).
  • [24] Mascette AM, Bernard GR, Dimichele D, Goldner JA, Harrington R, et al. Are central institutional review boards the solution? The National Heart, Lung, and Blood Institute Working Group’s report on optimizing the IRB process. Acad Med 2012;87:1710–1714.
  • [25] Tsang TSM, Jones M, Meneilly GS. Analysis of research ethics board approval times in an academic department of medicine. J Empir Res Hum Res Ethics 2015.
