Author manuscript; available in PMC 2016 Jan 20.
Published in final edited form as: J Urol. 2009 Apr 16;181(6):2674–2679. doi: 10.1016/j.juro.2009.02.032

Variation in Institutional Review Board (IRB) Responses to a Standard Protocol for a Multicenter Randomized Controlled Surgical Trial

Brian T Helfand 1,*, Anne K Mongiu 1,*, Claus G Roehrborn 2, Robert F Donnell 3, Reginald Bruskewitz 4, Steven A Kaplan 5, John W Kusek 6, Laura Coombs 7, Kevin T McVary 1,8, on behalf of the Minimally Invasive Surgical Therapy (MIST) Investigators
PMCID: PMC4720253  NIHMSID: NIHMS750097  PMID: 19375101

Abstract

Purpose

The primary responsibility of Institutional Review Boards (IRBs) is to protect human research subjects and thereby ensure that studies are conducted in accordance with a standard set of ethical principles. A number of studies have compared the responses of IRBs in multicenter clinical trials of medical therapies; to date, none has examined trials of surgical interventions. The intent of this study was to investigate the consistency of the recommendations issued by the institutional IRBs participating in the Minimally Invasive Surgical Therapies (MIST) for benign prostatic hyperplasia (BPH) trial, a multicenter trial with a uniform consent form and study protocol.

Materials and Methods

We obtained and classified the IRB responses from six of the seven participating institutions after the initial submission of the MIST study protocol. We then redistributed each approved protocol to the IRB of a different participating institution and analyzed the resulting second reviews.

Results

We found that both the number and types of responses required for IRB approval of an identical study protocol varied significantly among the participating institutions. We also found that IRB responses were inconsistent in the second review, although all protocols were ultimately approved.

Conclusion

We conclude that the current system of local IRB review is inefficient in the context of a multicenter surgical trial and may not provide the expertise needed to oversee surgical trials. Based on these results, a central surgical IRB may be needed to improve the ethical review process in multicenter trials.

Keywords: Institutional Review Board, Controlled Clinical Trials, Minimally Invasive Surgery

INTRODUCTION

Following the public’s outrage over the Tuskegee studies, Congress passed the National Research Act (1974)1. The Institutional Review Board (IRB) was created at this time as a vehicle for monitoring human subjects research. The Belmont Report (1978) formally outlined the basic ethical principles that underlie the conduct of biomedical and behavioral research in human subjects, and became a federal template to guide IRBs in reviewing research protocols2. It also gave rise to Title 45 Code of Federal Regulations Part 46 (Title 45 CFR 46), a document which governs the use of human subjects in research and dictates the function of IRBs2, 3.

The Office for Human Research Protections (OHRP) mandates that all federally funded institutions utilize IRBs to protect human research subjects4. In addition, the OHRP provides oversight of local IRBs and ensures that they follow the guidelines set forth in Title 45 CFR 462, 3. Similar regulations have been issued by the Food and Drug Administration4.

Title 45 CFR 46 was deliberately written in a manner that gives IRBs discretion to accommodate local and regional differences in research settings and populations. For example, while it states that each IRB must contain at least 5 members with sufficient experience to make an informed decision regarding the ethics of research, it does not specify what constitutes this experience or expertise. Given these relatively flexible requirements, it is not surprising that investigations have found that IRBs respond in dissimilar ways to identical protocols5. One study concluded that while the ultimate decisions made by local IRBs were consistent, their rationale for these decisions appeared to be inconsistent. As a result, there was wide variation in the manner in which research protocols were modified by IRBs5.

Other studies evaluating the IRB decision-making process have described significant differences in approval time (ranging from 5 days to >30 months)1, 6–11. These delays consume a large part of study budgets1, 7 and have placed research studies significantly behind schedule1, 7, 10, 12. In some reports, 12% of participating sites dropped out because of the perceived burden of addressing numerous IRB changes to the standardized trial protocol12, 13. Additionally, even the type of IRB review applied to standardized multicenter trials (e.g. expedited vs. full review) has been subject to variability6, 8, 9. Taken together, the results of these studies uniformly convey that IRB review of multicenter trials is inconsistent and that reform is needed to improve the efficiency of the review process.

The purpose of the present study was to compare the decisions of institutional IRBs at different sites in a multicenter trial with a uniform consent form and protocol. The Minimally Invasive Surgical Therapies (MIST) for benign prostatic hyperplasia (BPH) trial was a National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK)-National Institutes of Health (NIH) sponsored, multi-institutional randomized controlled trial (RCT) designed to investigate the outcomes of surgical therapies for BPH. A common MIST protocol was submitted to all participating institutional IRBs. We noticed enormous variability in both the types and timing of responses from each IRB. Therefore, we assessed whether variability in review would also occur when an IRB was presented with a surgical study protocol already approved by another institution.

METHODS

The MIST trial was an RCT sponsored by the NIDDK, initiated in April 2004 at seven sites, to compare the outcomes of surgical (transurethral needle ablation and transurethral microwave thermotherapy) and medical therapies for BPH. All sites used IRBs locally affiliated with their medical centers; no central IRBs (CIRBs) were involved. Prior to trial initiation, participating institutions agreed upon a standardized study protocol and consent forms that were developed by the trial’s Steering Committee and approved by an independent Data and Safety Monitoring Board. All institutions submitted an identical protocol and consent forms to their local IRB for review within the same calendar month.

Six of the seven institutions agreed to participate in our study. One institution did not participate because of restrictions imposed by its own IRB. To compare the variability of local IRB requests, we analyzed the membership composition (physicians, unaffiliated members, etc.) and meeting frequency of each IRB. Particular attention was directed toward the inclusion of surgeons and urologists. In addition, we determined the time to approval at each institution. We also obtained the initial IRB responses, with all comments and concerns, provided to the principal investigators. The comments from each IRB requiring changes to the protocol or consent forms as a condition of approval were quantified and assigned to one of the following categories: 1. “Technical” (change requiring the addition of new information not previously presented or requiring extensive explanation to aid overall understanding by a non-medical person); 2. “Clarification” (minimal change requiring the expansion of an idea presented or the deletion of a phrase/sentence to aid overall understanding by a non-medical person); 3. “Word Choice” (change of vocabulary to aid overall understanding by a non-medical person); 4. “Formatting” (change of paragraph location or stylistic changes).

The second part of the study was aimed at further characterizing inconsistencies between IRBs at the various institutions. To this end, we obtained permission from the chairman of each IRB. Under strict confidentiality, the revised, approved protocol from each institution (from the first part of the study) was then re-submitted to an IRB at a different participating institution (hypothetical example: the protocol approved at site #6 was resubmitted to site #3). Because the prior approval was not disclosed, each IRB treated the submission as an initial review. When multiple committees were available, the IRB chairman at each institution ensured that a different IRB committee would review the protocols in the second round. After receiving IRB responses to these re-submitted protocols, we categorized both the number and types of suggested changes as described above.
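For illustration only, the re-submission scheme described above amounts to routing each approved protocol to a reviewing site other than the one that originally approved it. The following minimal Python sketch is not part of the published methods; the site labels and function name are hypothetical.

```python
# Illustrative sketch (not the published methods): assign each site's approved
# protocol to a *different* participating site for the second round of review.
import random

def crossover_assignment(sites, seed=None):
    """Return a mapping origin_site -> reviewing_site in which no protocol
    is reviewed by the site that originally approved it (a derangement)."""
    rng = random.Random(seed)
    reviewers = sites[:]
    # Re-shuffle until no protocol maps back to its own site.
    while True:
        rng.shuffle(reviewers)
        if all(orig != rev for orig, rev in zip(sites, reviewers)):
            return dict(zip(sites, reviewers))

# Hypothetical site labels: each approved protocol is routed to another site's
# IRB, which then treats the submission as an initial review.
print(crossover_assignment(["site1", "site3", "site4", "site5"], seed=1))
```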

All statistical analyses were performed using SAS 9.1 (SAS Institute Inc., Cary, NC).

RESULTS

Our study included public and private medical schools located in major metropolitan regions across the US (West, Midwest, South, and Northeast). Five of the six participating IRBs provided information regarding the composition and meeting frequency of their IRB committees, and most made the information readily available on their respective IRB websites (Table 1). Site #2 was unable to provide information on its IRB committees due to restrictions from its own IRB. The remaining 5 IRBs in our study were composed of an average of 13 members (SD ± 3.9; range 8–22). Sixty percent of the sites had multiple IRB committees (range 1–5): two sites had only 1 committee, one site had 4 committees, and two sites had 5 committees. Committees at 80% of sites (4/5) met bi-monthly, while the fifth site met monthly to review submitted protocols. Committees were composed of physicians, researchers, nurses, lawyers, and pharmacists, and all committees contained at least one non-scientist and at least one unaffiliated member (Table 1). Sixty-three percent (10/16) of committees contained at least one surgeon (general surgery or a surgical sub-specialty), and only one committee had a urologist as a member at the time of review. One site had no surgeons or MDs on its IRB committee.

Table 1. Composition of IRB committees in MIST trial participants.

Five of the six sites that participated in our study provided information on the composition of their IRB committees and meeting frequency. For each site, the number of committees that review submitted protocols is designated by the committee number. Participating site #2 stated that their workload precluded them from releasing this information, and thus no data are available.

Site - Committee # | MD, DO, MD-PhD | PhD, PharmD | JD | RN | RPH | MS, MA, MPH | Other | Total | Non-Scientists | Urologist | Surgeon | Meeting Frequency
Site 1 - 1 | 14 | 2 | 0 | 1 | 0 | 1 | 4 | 22 | 5 | 0 | 3 | monthly
Site 1 - 2 | 7 | 4 | 0 | 1 | 0 | 0 | 2 | 14 | 4 | 0 | 1 | monthly
Site 1 - 3 | 6 | 4 | 0 | 1 | 0 | 1 | 3 | 15 | 3 | 1 | 1 | monthly
Site 1 - 4 | 8 | 3 | 1 | 2 | 0 | 0 | 2 | 16 | 3 | 0 | 0 | monthly
Site 1 - 5* | 0 | 6 | 0 | 0 | 0 | 0 | 5 | 11 | 11 | 0 | 0 | monthly
Site 2 | NA | NA | NA | NA | NA | NA | NA | NA | NA | NA | NA | NA
Site 3 | 5 | 4 | 2 | 2 | 1 | 2 | 2 | 18 | 5 | 0 | 1 | bi-monthly
Site 4 - 1 | 7 | 5 | 0 | 0 | 0 | 0 | 5 | 17 | 5 | 0 | 1 | bi-monthly
Site 4 - 2 | 4 | 7 | 0 | 0 | 0 | 0 | 4 | 15 | 5 | 0 | 1 | bi-monthly
Site 4 - 3 | 7 | 3 | 1 | 0 | 0 | 0 | 2 | 13 | 3 | 0 | 0 | bi-monthly
Site 4 - 4 | 9 | 5 | 0 | 0 | 0 | 2 | 0 | 16 | 2 | 0 | 2 | bi-monthly
Site 4 - 5 | 7 | 4 | 0 | 0 | 0 | 0 | 1 | 12 | 1 | 0 | 1 | bi-monthly
Site 5 | 0 | 8 | 0 | 0 | 0 | 3 | 0 | 11 | 0 | 0 | 0 | bi-monthly
Site 6 - 1 | 3 | 3 | 0 | 0 | 0 | 0 | 2 | 8 | 2 | 0 | 0 | bi-monthly
Site 6 - 2 | 5 | 1 | 0 | 1 | 0 | 0 | 2 | 9 | 2 | 0 | 1 | bi-monthly
Site 6 - 3 | 2 | 4 | 0 | 0 | 0 | 1 | 2 | 9 | 2 | 0 | 0 | bi-monthly
Site 6 - 4 | 5 | 1 | 0 | 0 | 1 | 0 | 2 | 9 | 2 | 0 | 1 | bi-monthly
Average | 5.6 | 4.0 | 0.3 | 0.5 | 0.1 | 0.6 | 2.4 | 13.4 | 3.4 | 0.1 | 0.8 |
Std Dev | 3.5 | 1.9 | 0.6 | 0.7 | 0.3 | 1.0 | 1.5 | 3.9 | 2.5 | 0.3 | 0.8 |

* Non-medical research: this committee evaluates only behavioral science protocols.

All protocols were eventually approved following incorporation of the required changes, which are summarized categorically in Table 3. The average time to IRB response for the first study protocol submission was 35.8 ± 18.2 days (range 14–68 days; Table 2). None of the submitted protocols received expedited review, and all were in compliance with their institutions’ HIPAA requirements. In the first round of submissions, the six participating IRBs requested an average of 17.5 changes per site (SD ± 20.6; range 2–57). The number of required changes was highly correlated with time to approval (Pearson’s correlation coefficient, 0.93). Overall, 95% of the 121 total requested changes were unique, and none of the duplicated changes was requested by more than one other participating site. Changes were mandated in both the protocol and consent forms. “Clarification” and “Word Choice” modifications were the most frequent types of changes requested. “Technical” changes, which required the addition of new information to the protocol, had the highest percentage of duplication of the four categories. In the first review of the study protocol, five of six IRBs suggested a total of 15 technical changes, 13 (87%) of which were unique, and none of which was duplicated by more than one institution. The two duplicated “Technical” changes involved requiring the investigators to provide certain medications free of charge and to better justify the study’s recruitment and exclusion criteria. In particular, site #3 specified that the study drugs, and site #4 that symptom-ameliorating medications (e.g. Vioxx), were to be provided at no extra cost to the participant. While most of the unique changes were minor, some did require that the participating institution provide, at its own cost, items which other sites in the study did not need to supply to participants (e.g. extra medications to ameliorate symptoms/side effects of study medications or procedures, and making the PI/research staff available 24 hours/day, 7 days/week rather than deferring to the practice pattern at the respective site [i.e., use of an on-call physician]).

Table 3. Categorization of changes required for MIST protocol approval by site for the initial submission, and for the second re-submission of the protocol that had been accepted at an alternate study site.

Definition of categories of changes: “Technical” - change requiring the addition of new information not previously presented or requiring extensive explanation to aid overall understanding by a non-medical person; “Clarification” - minimal change requiring the expansion of an idea presented or the deletion of a phrase/sentence to aid overall understanding by a non-medical person; “Word Choice” - change of vocabulary to aid overall understanding by a non-medical person; “Formatting” - change of paragraph location or stylistic changes. Note that, for the purpose of confidentiality, the site numbers differ from those in Table 1.

Location | Technical (1st round) | Technical (2nd round) | Clarification (1st round) | Clarification (2nd round) | Word Choice (1st round) | Word Choice (2nd round) | Format (1st round) | Format (2nd round) | Total changes (1st round) | Total changes (2nd round)
Site 1 | 1 | 7 | 9 | 3 | 3 | 3 | 2 | 5 | 15 | 18
Site 2 | 1 | ** | 0 | ** | 0 | ** | 2 | ** | 3 | **
Site 3 | 2 | 3 | 2 | 6 | 4 | 0 | 0 | 4 | 8 | 13
Site 4 | 7 | 8 | 5 | 10 | 1 | 6 | 7 | 10 | 20 | 34
Site 5 | 0 | 3 | 16 | 22 | 35 | 30 | 6 | 18 | 57 | 73
Site 6 | 4 | ** | 10 | ** | 1 | ** | 3 | ** | 18 | **
Totals | 15 | 21 | 42 | 41 | 44 | 39 | 20 | 37 | 121 | 138
Average | 2.5 | 5.3 | 7.0 | 10.3 | 7.3 | 9.8 | 3.3 | 9.3 | 20.2 | 34.5
Std Dev | 2.6 | 2.6 | 5.9 | 8.3 | 13.6 | 13.7 | 2.7 | 6.4 | 19.3 | 27.2
T test (1st vs. 2nd round): Technical 0.14, Clarification 0.49, Word Choice 0.79, Format 0.07, Total changes 0.35
** These sites did not participate in the second round of re-submissions.

Table 2. Time to approval (days) of MIST protocol by IRB site.

Note that, for the purpose of confidentiality, the site numbers differ from those in Table 1.

Location | Time to approval, 1st round (days) | Time to approval, 2nd round (days)
Site 1 | 36 | 32
Site 2 | 25 | **
Site 3 | 14 | 22
Site 4 | 32 | 38
Site 5 | 68 | 50
Site 6 | 40 | **
Average | 35.8 | 35.5
Std Dev | 18.2 | 11.7
T test (1st vs. 2nd round): 0.76
** These sites did not participate in the second round.

In the second part of the study, the approved protocols from the first part of the study were re-submitted to an IRB at a different participating institution. The average response time of the four participating IRBs in re-reviewing an already accepted protocol was 35.5 days (SD ± 11.7; range 22–50), which was not significantly different from the initial submission (t test, p = 0.76; Table 2). The IRBs requested an average of 34.5 changes (SD ± 27.2; range 13–73), nearly double that seen in the first round (Table 3). Although the number of requested changes increased in every category, changes in the “Technical” and “Formatting” categories roughly doubled and tripled, respectively, in the second round of submission. Although none of these increases was statistically significant, the number of requested changes was again strongly associated with time to approval (Tables 2, 3; Pearson’s correlation coefficient, 0.95). As in the first round, there were almost no duplications of specific changes. The only suggested formatting change that was identical between institutions was a misspelled word noticed by 2 of the 4 IRBs. Additionally, 18 of the 21 suggested “Technical” changes were unique. The three duplicated changes involved adding contact information for patient questions and/or physician-related emergency contact information. In general, there were no duplicated requests between the first and second rounds of submission. However, site #3 made the same request in both rounds for assurance that the study drugs would be provided at no additional cost.
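As a point of reference, the correlations reported above can be reproduced from the per-site values in Tables 2 and 3. The published analyses were performed in SAS 9.1; the following Python/SciPy sketch is only an illustrative re-derivation, and the pairing of the four sites that completed both rounds for the t test is our assumption rather than a stated detail of the original analysis.

```python
# Illustrative re-derivation of the reported summary statistics from the
# per-site values in Tables 2 and 3 (the published analyses used SAS 9.1;
# this SciPy sketch is an assumption about how they were computed).
from scipy import stats

# First round (all six sites): time to approval in days (Table 2) and total
# requested changes (Table 3).
time_r1    = [36, 25, 14, 32, 68, 40]
changes_r1 = [15,  3,  8, 20, 57, 18]

# Second round (the four sites that re-reviewed a previously approved protocol).
time_r2    = [32, 22, 38, 50]
changes_r2 = [18, 13, 34, 73]

r1, _ = stats.pearsonr(time_r1, changes_r1)   # ~0.93, as reported
r2, _ = stats.pearsonr(time_r2, changes_r2)   # ~0.95, as reported

# Pairing the four sites that completed both rounds reproduces the reported
# t test value of 0.76 for approval time (an assumption: the manuscript does
# not state whether the comparison was paired).
t, p = stats.ttest_rel([36, 14, 32, 68], [32, 22, 38, 50])

print(f"r (round 1) = {r1:.2f}, r (round 2) = {r2:.2f}, paired t test p = {p:.2f}")
```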

DISCUSSION

Local IRBs have become steadily more overburdened in the decades since they were introduced, leading some to argue that they are no longer able to satisfactorily meet their objectives14. The 1998 Bell Report published by the Office of Extramural Research found that high volume IRBs spend ~3 minutes per review15. This large increase in the review burden on IRBs stems from two root causes: (1) an increase in the number of clinical trials; and (2) the introduction of multicenter clinical trials, which did not exist when IRBs were instituted15, 16. Multicenter trials impose an additional burden in that they require the local IRB not only to review and monitor outcomes at the local level, but also to assess safety outcomes at all participating study sites16. In addition, IRBs must respond to requirements from many different sources, including the OHRP, the FDA, and the HIPAA privacy rule. Overall, these burdens impose additional time and cost on all institutions involved.

Surgical trials present additional challenges to the local IRBs that review these protocols. One challenge involves the proper evaluation of the inherent risks of surgical interventions, which include the surgery itself, anesthesia, infection, and other surgical complications17. Committee members unfamiliar with surgical procedures may have difficulty assessing risk-benefit criteria when evaluating such research protocols. Compounding this unfamiliarity is the general paucity of surgical trials, which comprise <10% of published surgical studies and a smaller fraction of all studies reviewed by IRBs18, 19. In the field of urology, one group found that ~3% of published studies were RCTs, with fewer than 25% of these involving surgical interventions20. Similarly, an analysis of pediatric urology studies found that RCTs comprised only 0.9% of the literature, and only 43% of those were surgical21. Therefore, IRB review of multicenter surgical trials is doubly burdened: by the inherent inefficiencies of review of any multicenter trial, and by the problem of evaluating higher risk procedures with which reviewers may have little experience and expertise.

Our study was the first to examine IRB responses to a standard protocol in a prospective multicenter surgical RCT (sRCT), and it was unique in that we assessed IRB responses to other IRB panels’ approved protocols in a prospective, blinded fashion. We found surprising variability in both the time to acceptance of the protocols and the number and types of changes suggested by the various institutional IRBs. The time to approval ranged from ~2 to 8 weeks in both rounds of submission and was not related to the number of committees at a given site or to committee size. However, the length of time to approval did correlate with the number of suggested changes, which may reflect the additional time required for corrections. It was interesting that the approval time did not change during the second round, even though the submitted protocols had already been approved by a prior IRB. From one point of view, this indicates that the various sites maintained good internal consistency of review, because each set of committees appeared to give the same quality of consideration to these protocols. It also suggests that each individual IRB has a relatively standardized process for review. However, because these protocols had already been accepted by other IRBs, one would expect that fewer changes, and consequently less time for corrections, would be required to obtain IRB approval.

During both the first and second submissions, the IRBs recommended predominantly non-overlapping changes in all four categories. With the exception of the requests by 2 different IRBs for medication reimbursement and for ensured physician availability, the changes requested by the IRBs were minor and did not appear to provide additional protection to human subjects. This suggests an almost arbitrary need for the review committee to make editorial changes to the protocols, even though they were aware that it was a standard protocol established by a Steering Committee for an NIH sponsored clinical trial and had been approved by an independent Data and Safety Monitoring Board. On the other hand, these requested changes may be related to regional practices that are necessary for the protection and/or improved comprehension of the local study population. Interestingly, every participating IRB suggested a greater number of total changes during the second submission of a previously approved protocol. This may be related to differences in the make-up of each IRB (e.g. variability in the level of expertise of committee members) between the first and second submissions. Taken together, although the vast majority of the changes requested in each round were minor, they were highly variable and were associated with increased time to approval and possible delays.

We noted that fewer than 6% of all committee members were surgeons, and more than 37% of IRB committees contained no surgical representation. At the time of review, only one committee contained a urologist and one committee contained no MDs at all. This lack of surgical expertise on the committees of the various IRBs raises questions about the protection that such boards can provide in sRCTs, as it may be difficult for non-surgeons to properly evaluate surgical interventions and the corresponding consent forms. It was also interesting that the longest review time and the highest number of protocol changes came from the site whose single IRB committee did not contain any MDs. This suggests that protocols with highly technical details specific to surgery may be more difficult for non-surgeons to evaluate properly.

As a result of observed inefficiencies, various groups have proposed centralizing the review process. In January 2001, the National Cancer Institute established a 16 member CIRB with specific expertise in cancer treatments to monitor all NCI sponsored phase III clinical adult cancer trials16. Under this system, subjects’ protection is decided upon by a group of highly specialized individuals. In the seven years since its inception, over 302 institutions have used the CIRB, and more than 4,000 facilitated reviews have been submitted22. Similarly, Canada instituted the Ontario Cancer Research Ethics Board (OCREB), which is gaining slow but increasing acceptance among Canadian institutions23, 24. While a CIRB appears promising, it is important to note that there are concerns regarding the assumption of liability25. In addition, CIRBs may lack the understanding, values and attitudes of a local population participating in a study. However, overall the potential risks appear to be outweighed by greater efficiency and possibly improved human rights protection.

Some limitations of our study deserve mention. Only a relatively small number of IRBs participated, and thus the responses may not be representative of all institutions. In addition, only one surgical protocol was evaluated, and responses to other protocols might differ. Finally, this study was restricted to men with BPH, and it would be interesting to know whether IRBs would raise additional concerns for studies involving women or other vulnerable populations (e.g. children). Future studies examining the responses of IRBs at different institutions to multiple sRCT protocols are needed.

In summary, the results of our study challenge the current IRB system for evaluating multicenter sRCTs. At a minimum, we advocate that IRB members who review sRCTs have expertise and experience in surgery. An IRB so composed is likely to provide informed review of study protocols and may reduce approval time and minimize unnecessary changes to the study protocol while preserving human rights protection. Furthermore, we suggest that a central surgical IRB may be needed to improve and standardize the review process in multicenter sRCTs.

Acknowledgments

This work was completed with funding from NIDDK/NIH and the MIST collaborative group.

MIST1 Manuscript Appendix

The following individuals and institutions constitute the MIST Study Group (* indicates principal investigator or director). Study Chair: University of Wisconsin (R. Bruskewitz). Clinical Centers: Baylor College of Medicine (K. Slawin*, S. V. Pavlik); Columbia University (S. Kaplan*, R. Valenzuela); Mayo Clinic (M. Lieber*, C. Van Oort); Medical College of Wisconsin (R. Donnell*, M. Pigsley); Northwestern University (K. McVary*, M. Velez); University of Colorado Health Sciences Center (D. Crawford*, A. DeVore); University of Texas Southwestern Medical Center (C. Roehrborn*, A. Beaver). Stage Coordinating Center: George Washington University (K. Hirst*, L. Coombs, L. Meyer). Project Office: National Institute of Diabetes and Digestive and Kidney Diseases (J. Kusek*, L. Nyberg).


References

1. Vick CC, Finan KR, Kiefe C, Neumayer L, Hawn MT. Variation in institutional review processes for a multisite observational study. Am J Surg. 2005;190:805. doi: 10.1016/j.amjsurg.2005.07.024.
2. Title 45 Code of Federal Regulations (CFR) Part 46: The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. Washington, DC: Office of Human Research Protection, United States Department of Health and Human Services; 1979.
3. Prentice ED, Antonson DL. A protocol review guide to reduce IRB inconsistency. IRB. 1987;9:9.
4. Stair TO, Reed CR, Radeos MS, Koski G, Camargo CA. Variation in institutional review board responses to a standard protocol for a multicenter clinical trial. Acad Emerg Med. 2001;8:636. doi: 10.1111/j.1553-2712.2001.tb00177.x.
5. Goldman J, Katz MD. Inconsistency and institutional review boards. JAMA. 1982;248:197.
6. Dziak K, Anderson R, Sevick MA, Weisman CS, Levine DW, Scholle SH. Variations among institutional review board reviews in a multisite health services research study. Health Serv Res. 2005;40:279. doi: 10.1111/j.1475-6773.2005.00353.x.
7. Green LA, Lowery JC, Kowalski CP, Wyszewianski L. Impact of institutional review board practice variation on observational health services research. Health Serv Res. 2006;41:214. doi: 10.1111/j.1475-6773.2005.00458.x.
8. Hirshon JM, Krugman SD, Witting MD, Furuno JP, Limcangco MR, Perisse AR, et al. Variability in institutional review board assessment of minimal-risk research. Acad Emerg Med. 2002;9:1417. doi: 10.1111/j.1553-2712.2002.tb01612.x.
9. McWilliams R, Hoover-Fong J, Hamosh A, Beck S, Beaty T, Cutting G. Problematic variation in local institutional review of a multicenter genetic epidemiology study. JAMA. 2003;290:360. doi: 10.1001/jama.290.3.360.
10. Dyrbye LN, Thomas MR, Mechaber AJ, Eacker A, Harper W, Massie FS Jr, et al. Medical education research and IRB review: an analysis and comparison of the IRB review process at six institutions. Acad Med. 2007;82:654. doi: 10.1097/ACM.0b013e318065be1e.
11. Newgard CD, Hui SH, Stamps-White P, Lewis RJ. Institutional variability in a minimal risk, population-based study: recognizing policy barriers to health services research. Health Serv Res. 2005;40:1247. doi: 10.1111/j.1475-6773.2005.00408.x.
12. Mansbach J, Acholonu U, Clark S, Camargo CA Jr. Variation in institutional review board responses to a standard, observational, pediatric research protocol. Acad Emerg Med. 2007;14:377. doi: 10.1197/j.aem.2006.11.031.
13. Ehrlich PF, Newman KD, Haase GM, Lobe TE, Wiener ES, Holcomb GW. Lessons learned from a failed multi-institutional randomized controlled study. J Pediatr Surg. 2002;37:431. doi: 10.1053/jpsu.2002.30853.
14. Institutional Review Boards: A Time for Reform. Washington, DC: Department of Health and Human Services, Office of Inspector General; June 1998. DHHS contract no. OEI-01-97-00193.
15. Evaluation of NIH Implementation of Section 491 of the Public Health Service Act, Mandating a Program of Protection for Research Subjects. Bethesda, MD: Office of Extramural Research, National Institutes of Health; 1998.
16. Christian MC, Goldberg JL, Killen J, Abrams JS, McCabe MS, Mauer JK, et al. A central institutional review board for multi-institutional trials. N Engl J Med. 2002;346:1405. doi: 10.1056/NEJM200205023461814.
17. Morreim H, Mack MJ, Sade RM. Surgical innovation: too risky to remain unregulated? Ann Thorac Surg. 2006;82:1957. doi: 10.1016/j.athoracsur.2006.07.003.
18. Gattellari M, Ward JE, Solomon MJ. Randomized, controlled trials in surgery: perceived barriers and attitudes of Australian colorectal surgeons. Dis Colon Rectum. 2001;44:1413. doi: 10.1007/BF02234591.
19. Rahbari NN, Diener MK, Wente MN, Seiler CM. Development and perspectives of randomized controlled trials in surgery. Am J Surg. 2007;194:148.
20. Scales CD Jr, Norris RD, Keitz SA, Peterson BL, Preminger GM, Vieweg J, et al. A critical assessment of the quality of reporting of randomized, controlled trials in the urology literature. J Urol. 2007;177:1090. doi: 10.1016/j.juro.2006.10.027.
21. Welk B, Afshar K, MacNeily AE. Randomized controlled trials in pediatric urology: room for improvement. J Urol. 2006;176:306. doi: 10.1016/S0022-5347(06)00560-X.
22. The CIRB Initiative: Frequently Asked Questions. 2008.
23. Chaddah MR. The Ontario Cancer Research Ethics Board: a central REB that works. Curr Oncol. 2008;15:49. doi: 10.3747/co.2008.196.
24. Saginur R, Dent SF, Schwartz L, Heslegrave R, Stacey S, Manzo J. Ontario Cancer Research Ethics Board: lessons learned from developing a multicenter regional institutional review board. J Clin Oncol. 2008;26:1479. doi: 10.1200/JCO.2007.12.6441.
25. Randal J. Growing pains: central review board project still developing. J Natl Cancer Inst. 2003;95:636. doi: 10.1093/jnci/95.9.636.
