Abstract
Purpose:
The National Cancer Institute (NCI) requirement that clinical trials at NCI-designated cancer centers undergo institutional scientific review in addition to institutional review board evaluation is unique among medical specialties. We sought to evaluate the effect of this process on protocol activation timelines.
Methods:
We analyzed oncology clinical trials that underwent full board review by the Harold C. Simmons Comprehensive Cancer Center Protocol Review and Monitoring Committee (PRMC) from January 1, 2009, through June 30, 2013. We analyzed associations between trial characteristics, PRMC decisions, protocol modifications, and process timelines using the χ2 test, Fisher’s exact test, Wilcoxon rank sum test, Kruskal-Wallis test, and logistic regression.
Results:
A total of 226 trials were analyzed. Of these, 77% were industry sponsored and 23% were investigator initiated. The median time from submission to PRMC approval was 55 days. The length of review was associated with trial phase, timing of approval, and number of committee changes/clarifications requested. The median process time was 35 days for those approved at first decision, 68 days for second decision, and 116 days for third decision (P < .001). The median process time was 39 days if no changes/clarifications were requested, 64 days for one to three changes/clarifications, and 73 days for four or more changes/clarifications (P < .001). Requested changes/clarifications had a greater effect on industry-sponsored trials than on investigator-initiated trials.
Conclusion:
NCI-mandated institutional scientific review of oncology clinical trials contributes substantially to protocol activation timelines. Further evaluation of this process and the value added to research quality is warranted.
INTRODUCTION
The process of oncology clinical trial activation has come under intense scrutiny in recent years. Delays and barriers to study activation increase costs, hinder accrual of patients, and may result in obsolete study objectives.1-3 In addition, critics have cited the detrimental effects of the complexity, length, excessive demands, conservatism, and inconsistencies of the clinical trial review process.4-7 As part of the clinical trial approval process, the National Cancer Institute (NCI) mandates that clinical trials conducted at NCI-designated cancer centers undergo institutional scientific review in addition to institutional review board (IRB) evaluation.8 In contrast to the primary focus of the IRB on study ethics, patient rights, and informed consent, institutional scientific review committees (which go by various names, such as protocol review committee, scientific review committee, clinical trial committee, and clinical research committee4,9,10) are tasked with reviewing the scientific merit of protocols, focusing on originality, methodology, feasibility, and relevance.11 This systematic requirement is unique among medical specialties. In general, clinical trials that do not involve patients with cancer proceed directly to the IRB without a preceding formal institution-level scientific review.
The effect of institutional scientific review on protocol content and research processes is highly variable and dependent on study characteristics. In an earlier study, we found that study sponsor (industry sponsored v investigator initiated) was the greatest predictor of review decisions and requests.12 Requested changes were more likely (54% v 27%; P < .001) and more numerous (mean, 5.6 v 2.4 changes; P < .001) for investigator-initiated trials than for industry-sponsored trials. Following this trend, investigator-initiated trials also had more changes implemented (mean, 4.6 v 2.1 changes; P = .008). This discrepancy was most pronounced for changes related to study design, which were implemented in 40% of investigator-initiated clinical trials but only 5% of industry-sponsored trials (P = .03).
Given the preponderance of industry-sponsored trials in most cancer center research portfolios and the relatively limited impact that institutional scientific review has on their content, it is critical to understand the effect of this process on study activation timelines. This issue has particular relevance as pharmaceutical partners diversify and expand clinical trial settings. They may base site selection partly on process metrics such as start-up efficiency. Although institutional scientific review is mandated at NCI-designated cancer centers, most of which are housed within major academic institutions, community-based practices do not necessarily have this requirement. As NCI-designated cancer centers face growing pressure to examine and demonstrate their value added, activation delays are receiving heightened attention. We therefore examined in detail scientific protocol review timelines in this setting.
METHODS
Study Setting
The Harold C. Simmons Comprehensive Cancer Center is a freestanding clinical and research facility affiliated with the University of Texas Southwestern Medical Center. Simmons received initial NCI designation in 2010 and comprehensive designation in 2015. The institutional Protocol Review and Monitoring Committee (PRMC), which includes clinicians, investigators, pharmacists, regulatory personnel, data and safety monitoring specialists, and patient advocates, has been in operation since 2001. The PRMC consists of three separate committees. Two general PRMC committees each meet monthly on a staggered schedule to review new protocol submissions, responses to prior reviews, status of previously reviewed protocols, and accrual of activated trials. A third PRMC committee meets monthly to evaluate population science studies. All research conducted at University of Texas Southwestern involving human subjects with cancer is reviewed by the PRMC. Studies considered to be particularly low risk (eg, medical records review) or that have previously undergone adequate scientific review (eg, NCI cooperative group trials) may undergo only administrative review and were not included in this study. Other protocols undergo full committee review, which typically includes evaluation by two clinicians/investigators, a biostatistician, a pharmacist, a data and safety monitoring specialist, and a patient advocate.
Data Collection
We identified studies that underwent full PRMC review between January 1, 2009, and June 30, 2013. For each study, we collected the following items: PRMC submission form, study protocol and consent form, reviewer evaluations, PRMC decision letter, and any revised study documents. For each study, we recorded the year, disease under study, principal investigator (PI) field, phase, type (interventional/noninterventional), and sponsor category (institutional/industrial). Trial characteristics were defined according to previously described parameters.12
We also recorded the initial and final PRMC decision (approval/approval with stipulations/deferral/disapproval). A decision of approval does not require any feedback from the PI and results in the protocol being forwarded to the IRB for review. This category also includes protocols approved with courtesy comments only. Approval with stipulations requires PI response to queries and suggestions. Upon receipt, these responses (and associated protocol changes, if any) are forwarded individually to the original PRMC reviewers, who may approve the protocol after review, request further changes or clarifications, or defer protocol decisions until the next PRMC meeting. A deferral decision requires the PI to respond to queries and suggestions. After receipt, the response is reviewed at the next PRMC meeting by the entire committee. In cases of approval with stipulations or deferral, PI responses are requested within 7 to 10 days after issuance of the PRMC letter. In this study, protocol changes and clarifications requested by the PRMC and addressed by the PI were tallied, broadly grouped as protocol related or nonprotocol related, and further categorized as previously described.12
Dates for the following steps in the PRMC process were recorded for each protocol: submission initiation, submission completion, reviewer assignment, committee meeting, letter issuance, PI response, and interim and final decisions. The total time to approval was defined as the interval from the date of submission initiation to the date of final decision (if approval). All data collection was performed by a single investigator (N.N.). Ten percent of studies were randomly selected for extensive data review by an experienced PRMC member and clinical investigator (D.E.G.). Discrepancies were noted in 0.8% of all data cells (each trial comprised 116 data cells).
Statistical Analysis
We analyzed the association between trial characteristics, PRMC decisions, PRMC protocol modifications, and PRMC intervals using the χ2 test, Fisher’s exact test, and logistic regression. The distributions of the seven PRMC intervals were examined, and all were found to be skewed to the right. Because of this lack of normality, the intervals were analyzed by nonparametric test methods (eg, the Wilcoxon rank sum test and Kruskal-Wallis test) and logistic regression. All reported P values are two sided. We used a P value of < .05 as the criterion for statistical significance. To limit the influence of outlying data, we binned values for changes/clarifications requested/implemented as none, one to three, or four or more. All statistical calculations were performed with SAS for Windows version 9.4 (SAS Institute, Cary, NC).
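The analyses above were performed in SAS; the following is a minimal Python sketch, using simulated rather than study data, of the same two ideas: comparing right-skewed interval distributions with nonparametric tests, and binning change/clarification counts as none, one to three, or four or more. The group names and parameters are illustrative assumptions, not the study dataset.

```python
# Illustrative sketch (not the authors' SAS code) of the Methods:
# nonparametric comparison of right-skewed review intervals, plus
# binning of requested changes/clarifications. Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated right-skewed review intervals (days), one sample per
# hypothetical decision group (sizes loosely echo the study's groups)
first_decision = rng.lognormal(mean=3.6, sigma=0.4, size=90)
second_decision = rng.lognormal(mean=4.2, sigma=0.4, size=100)
third_decision = rng.lognormal(mean=4.7, sigma=0.4, size=16)

# Kruskal-Wallis test: nonparametric comparison across >= 3 groups
h_stat, p_kw = stats.kruskal(first_decision, second_decision, third_decision)

# Wilcoxon rank sum (Mann-Whitney U) test: nonparametric two-group comparison
u_stat, p_rs = stats.mannwhitneyu(first_decision, second_decision)

def bin_changes(n):
    """Bin a count of requested changes/clarifications as in the Methods."""
    if n == 0:
        return "none"
    return "1-3" if n <= 3 else ">=4"

print(f"Kruskal-Wallis P = {p_kw:.3g}; rank sum P = {p_rs:.3g}")
print(bin_changes(0), bin_changes(2), bin_changes(5))  # → none 1-3 >=4
```

Medians and interquartile ranges (rather than means) are the natural summaries here because, as noted, the interval distributions are right skewed.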
RESULTS
A total of 226 studies undergoing full PRMC review were identified and included in the analysis. Trial characteristics are listed in Table 1. The nature and timing of PRMC decisions are shown in Figure 1. PRMC initial decisions were as follows: approve (40%), approve with stipulations (52%), defer (7%), and disapprove (1%). Among the 135 studies for which there was a second decision, second decisions were approve (85%), approve with stipulations (7%), defer (5%), and PI withdrawal of study (3%). Among the 16 studies with a third PRMC decision, 14 were approved and 2 were deferred. For 97% of all studies, the final decision was approve.
Table 1.
Characteristics of 226 Studies Reviewed by the Protocol Review and Monitoring Committee

Fig 1.
Components, protocol disposition, and median (interquartile range [IQR]) intervals of Protocol Review and Monitoring Committee process. PI, principal investigator.
Across all studies, the median time from PRMC submission initiation to final decision was 55 days (interquartile range [IQR], 35 to 78 days). The median time from PRMC submission initiation to final decision according to study characteristics is shown in Table 2. Investigator-initiated trials had slightly longer timelines than industry-sponsored trials. There was also a clear trend according to trial phase, with the longest timelines for phase I trials and the shortest for phase III trials and those without a designated phase (most commonly nontherapeutic studies).
Table 2.
Time in Days From Protocol Review and Monitoring Committee Submission to Final Decision, According to Study Characteristics and Protocol Review and Monitoring Committee Decision

Median times for component intervals (Fig 1) were as follows: submission initiation to submission completion: 29 days (IQR, 12 to 42 days); submission completion to reviewer assignment: 4 days (IQR, 2 to 9 days); reviewer assignment to PRMC meeting: 11 days (IQR, 10 to 12 days); PRMC meeting to letter issuance: 1 day (IQR, 1 to 7 days); letter issuance to PI response: 4 days (IQR, 0 to 17 days); receipt of PI response to PRMC final decision: 18 days (IQR, 8 to 33 days). Some of these component variables differed according to trial characteristics. The interval from submission initiation to submission completion varied according to trial sponsor (median, 29 days for industry v 12 days for investigator; P = .006) and according to trial phase (median, 21 days for phase I/pilot, 29 days for phase II, 33 days for phase III, and 11 days for none designated; P = .01). None of the subsequent intervals differed meaningfully by trial characteristics.
We also characterized total PRMC intervals (submission initiation to final decision) according to PRMC decisions and PI actions (Table 2). As might be expected, intervals were shorter for trials that required fewer PRMC decisions, had fewer requested clarifications or changes, and had fewer clarifications addressed or changes made. In general, for each additional decision required, the review time increased by at least 1 month. Compared with trials for which no changes or clarifications were requested or implemented, those with one to three had at least 50% longer timelines, and those with four or more had timelines almost twice as long.
Finally, we examined whether PRMC requests had differential effects on total review timelines depending on trial sponsor type (Appendix Table A1, online only). In general, intervals increased for both investigator-initiated and industry-sponsored trials as the number of requested/implemented clarifications/changes increased. For trials with one to three or four or more requested/implemented clarifications/changes, industry-sponsored trials had numerically longer review times. As might be expected, these differences largely arose from the interval between PRMC review (letter issuance) and PI response. For trials with four or more requested clarifications/changes, the median review-to-response interval for industry-sponsored trials was 13 days compared with 4 days for investigator-initiated trials.
DISCUSSION
In an era when the complexities and costs of clinical research have come into question, every step in the protocol activation process is examined closely. Although IRB processes have been studied in detail, institutional scientific review of research protocols—a requirement at NCI-designated cancer centers that is unique among medical specialties—has not been examined in depth. In an earlier study, we described the effect of institutional scientific review on protocol content.12 In the present study, we address the effect of this process on activation timelines.
The overall scientific review process entails several individual steps, including initiation and completion of document submission, reviewer assignment, committee meeting, decision letter issuance, investigator response, and subsequent cycles as needed until a final decision (approve, disapprove, or withdraw) is rendered. In this study of > 200 protocols reviewed in a recent 5-year period, the overall process lasted a median of approximately 2 months. Intervals under the control of the investigator team (submission completion, response to committee review) and those under the control of the review committee (reviewer assignment, review, letter issuance, and final decision) each accounted for approximately one-half of this total time. Ideally, efforts to expedite the overall process should address both components.
The greatest predictor of process timelines was the nature of protocol review. The overall review duration for trials approved at first decision was approximately half as long as that for trials requiring two decisions and one-third as long as that for trials requiring three decisions. Similarly, as the number of committee requests (changes or clarifications) increased, so did the length of the review process. The effect of committee requests on review timelines varied according to trial sponsor. Specifically, we found that the overall process timeline for trials with committee requests ranged from 10% to 20% longer for industry-sponsored trials than for investigator-initiated trials. This observation probably reflects the added steps required to address such changes, which usually entail obtaining sponsor input and approval. For large pharmaceutical partners conducting several concurrent multicenter trials, navigating sponsor and contract research organization staff networks to obtain needed feedback may require considerable effort and time. This effect is particularly noteworthy because we have previously shown that for industry-sponsored trials, changes requested by the scientific review committee are less likely to be implemented.12 Furthermore, those that are implemented only rarely affect study design.
There are few other reports of scientific review committee timelines to which we can compare our findings. In a comparison of lung cancer clinical trial protocol activation at Washington University (St Louis, Missouri) and University of Torino (Italy), the median time from document submission to first patient accrual was 221 days at the US site and 153 days at the Italian site (P = .05).9 At the US site, the median time to regulatory approval (which included both scientific review committee and IRB steps) was 75 days. At the Italian site, which did not require institutional scientific review, the median time to regulatory approval was 31 days (P < .001). However, the actual time required for institutional scientific review was not reported. At Vanderbilt-Ingram Cancer Center, the median time to complete institutional scientific review was 70 days, which exceeded that of IRB review (median, 47 days) and was comparable to contracts/grants review (median, 79 days).1 Scientific review has also been examined at the consortium level. In the European Organisation for Research and Treatment of Cancer, the protocol review committee evaluates all concepts for scientific interest. The committee, which includes 24 members who meet every 3 months at the European Organisation for Research and Treatment of Cancer Data Center, had a median time to first decision of 26 days.11 Although not strictly an analysis of scientific review committees, a study of research ethics committee decisions on cancer trials in the United Kingdom demonstrated remarkable concordance with our findings.13 In that study, 66% of initial committee decisions were provisional compared with 60% in this study. These results suggest that multiple review processes, which we have shown to substantially increase review timelines, are quite common and may represent the majority of protocol reviews.
In recent years, the NCI has modified requirements related to institutional scientific review at NCI-designated cancer centers, particularly for institutional (investigator-initiated) trials.14,15 Formal review of proposals at the concept stage is now required, because substantial changes to study design may be more feasible at that point than after a full protocol is drafted. In addition, the NCI now allows cancer centers to accept the institutional scientific review of another NCI-designated cancer center for multicenter institutional (investigator-initiated) trials. Institutional scientific review is still required at all participating centers for industry-sponsored trials, for which institutional scientific review committees are far less likely to influence study design.12
How institutional scientific review interfaces with IRB review remains a critical question. It cannot be assumed that the duration of scientific review is simply additive to IRB review timelines. It seems probable that scientific review results in protocol improvements that could expedite the IRB step. That is, scientific review may identify required changes that would have otherwise been requested at the IRB stage. Key to effective implementation of institutional scientific review is limiting overlap and redundancies between the two stages. If questions of rationale, feasibility, and efficacy are addressed by the scientific review committee, whereas questions of safety are addressed by the IRB (with a limited number of checks and balances between steps), process efficiency and throughput could be optimized. However, achieving this synergy is likely to be challenging. We have previously shown that approximately half of scientific review committee requests are related to study consent forms and other nonprotocol documentation.12 Conversely, approximately three-quarters of British research ethics committee (analogous to IRBs) reviews raise scientific concerns.16
The principal limitation of this study is its single-center setting. As shown for ethical review boards (ie, IRBs),17-20 concerns and decisions may vary substantially across institutions. In addition, the proportion of industry-sponsored versus investigator-initiated trials may differ among institutions, thereby affecting the content and timelines of scientific review. For an individual trial protocol, we cannot determine whether institutional scientific review prolonged overall activation timelines or if other components (eg, budget and contract negotiations) were rate-limiting steps. Because studies had multiple requested changes or clarifications, it is not possible to determine which particular requested change or clarification was rate limiting. For the same reason, it is not possible to assign the committee’s revision request to a single request type (eg, feasibility, efficacy, safety). Finally, in this study it is not possible to determine the value added to the research quality of individual protocol scientific reviews.
Indeed, determining the value added of institutional scientific review to the quality of clinical research protocols is quite difficult. One could examine protocol characteristics, such as the extent to which eligibility criteria are justified, statistical methodology, and the importance of correlative studies. Alternatively, one could evaluate study performance, such as patient enrollment, achievement of study end points, and resulting presentations and publications. These features could be compared among trials with and without scientific review. One could also track these metrics over the course of the protocol review process to assess the effect of scientific review. Even if all of these data were readily available, the ultimate determination of quality remains subjective and nebulous.
Bringing new treatments to patients remains a long and costly process. Preclinical evaluation typically requires 4 to 5 years. Clinical trials and Food and Drug Administration approval take an average of > 7 years, with a total cost of > $1 billion to bring a new drug to market.4,21 The costs and lengths of these steps continue to increase.22 Although industry-sponsored trials may have the resources to address these requirements, noncommercial studies might be hindered by them.23 As we have shown in this study, institutional scientific review represents a potentially influential but also potentially lengthy and complex step in this process. Furthermore, some have argued that the distinction between scientific and ethical review is incoherent.24 Accordingly, the value added to research quality and optimal approach to this requirement should be evaluated critically. If the benefits of the process can be maximized and clearly demonstrated, formal institutional scientific review might even be considered for noncancer clinical trials.
ACKNOWLEDGMENT
Supported by a National Institutes of Health Midcareer Investigator Award in Patient-Oriented Research (K24CA201543-01; to D.E.G.), a National Cancer Institute Clinical Investigator Team Leadership Award (1P30 CA142543-01 supplement; to D.E.G.), a National Institutes of Health National Institute of Diabetes and Digestive and Kidney Diseases Short-Term Institutional Research Training Grant (5 T35 DK 66141-10; to N.N.), and the National Center for Advancing Translational Sciences University of Texas Southwestern Center for Translational Medicine (U54 RFA-TR-12-006). Biostatistical support was provided by the Biostatistics Shared Resource at the Harold C. Simmons Cancer Center, University of Texas Southwestern Medical Center, Dallas, TX, which is supported in part by a National Cancer Institute Cancer Center Support Grant (1P30 CA142543-01). Presented in abstract form at the American Society of Clinical Oncology Annual Meeting, Chicago, IL, June 2-7, 2017. We thank Helen Mayo, MLS, University of Texas Southwestern Medical Library, for assistance with literature searches and Dru Gray for assistance with manuscript preparation.
Appendix
Table A1.
Time in Days From Protocol Review and Monitoring Committee Submission to Final Decision, According to Number of Requested and Implemented Clarifications and Changes by Sponsor Type

AUTHOR CONTRIBUTIONS
Conception and design: Ning Ning, Martin Dietrich, David E. Gerber
Financial support: David E. Gerber
Administrative support: David E. Gerber
Collection and assembly of data: Ning Ning
Data analysis and interpretation: Jingsheng Yan, Xian-Jin Xie, David E. Gerber
Manuscript writing: All authors
Final approval of manuscript: All authors
Accountable for all aspects of the work: All authors
AUTHORS’ DISCLOSURES OF POTENTIAL CONFLICTS OF INTEREST
Institutional Scientific Review of Cancer Clinical Research Protocols: A Unique Requirement That Affects Activation Timelines
The following represents disclosure information provided by authors of this manuscript. All relationships are considered compensated. Relationships are self-held unless noted. I = Immediate Family Member, Inst = My Institution. Relationships may not relate to the subject matter of this manuscript. For more information about ASCO’s conflict of interest policy, please refer to www.asco.org/rwc or ascopubs.org/jop/site/ifc/journal-policies.html.
Ning Ning
No relationship to disclose
Jingsheng Yan
No relationship to disclose
Martin F. Dietrich
Consulting or Advisory Role: AstraZeneca, ARIAD/Takeda, Guardant Health, Caris MPI
Speakers’ Bureau: Takeda, AstraZeneca
Xian-Jin Xie
No relationship to disclose
David E. Gerber
Stock or Other Ownership: Gilead Sciences
Consulting or Advisory Role: Samsung Bioepis, Bristol-Myers Squibb
Speakers’ Bureau: Bristol-Myers Squibb
Research Funding: Immunogen (Inst), ArQule (Inst), ImClone Systems (Inst), BerGenBio (Inst), Karyopharm Therapeutics (Inst)
Patents, Royalties, Other Intellectual Property: Royalties from Oxford University Press from two books; royalties from Decision Support in Medicine from the Clinical Decision Support–Oncology online program.
Travel, Accommodations, Expenses: Eli Lilly, ArQule, Bristol-Myers Squibb
REFERENCES
1. Dilts DM, Sandler AB: Invisible barriers to clinical trials: The impact of structural, infrastructural, and procedural barriers to opening oncology clinical trials. J Clin Oncol 24:4545-4552, 2006
2. Dilts DM, Sandler AB, Cheng SK, et al: Steps and time to process clinical trials at the Cancer Therapy Evaluation Program. J Clin Oncol 27:1761-1766, 2009
3. Korn EL, Freidlin B, Mooney M, et al: Accrual experience of National Cancer Institute Cooperative Group phase III trials activated from 2000 to 2007. J Clin Oncol 28:5197-5201, 2010
4. Kurzrock R, Pilat S, Bartolazzi M, et al: Project Zero Delay: A process for accelerating the activation of cancer clinical trials. J Clin Oncol 27:4433-4440, 2009
5. Printz C: NCI, Cooperative Groups gear up for changes in clinical trials system: New policies initiated in response to Institute of Medicine report. Cancer 117:2017-2019, 2011
6. Young RC: Cancer clinical trials—A chronic but curable crisis. N Engl J Med 363:306-309, 2010
7. Duley L, Antman K, Arena J, et al: Specific barriers to the conduct of randomized trials. Clin Trials 5:40-48, 2008
8. Department of Health and Human Services–National Cancer Institute: Cancer Center Support Grants (CCSGs) for NCI-designated Cancer Centers (P30), 2017
9. Wang-Gillam A, Williams K, Novello S, et al: Time to activate lung cancer clinical trials and patient enrollment: A representative comparison study between two academic centers across the Atlantic. J Clin Oncol 28:3803-3807, 2010
10. Ortega R, Dal-Ré R: Clinical trials committees: How long is the protocol review and approval process in Spain? A prospective study. IRB 17:6-9, 1995
11. Lardot C, Steward W, Van Glabbeke M, et al: Scientific review of EORTC trials: The functioning of the New Treatment Committee and Protocol Review Committee. Eur J Cancer 38(Suppl 4):S24-S30, 2002
12. Ning N, Yan J, Xie XJ, et al: Impact of NCI-mandated scientific review on protocol development and content. J Natl Compr Canc Netw 13:409-416, 2015
13. Dixon-Woods M, Angell E, Tarrant C, et al: What do research ethics committees say about applications to do cancer trials? Lancet Oncol 9:700-701, 2008
14. National Cancer Institute: Notice of Correction to PAR-13-386 “Cancer Center Support Grants (CCSGs) for NCI-designated Cancer Centers (P30)” to Clarify the Role of the Protocol and Review Monitoring System in the Scientific Review of Multi-Site Clinical Studies (NOT-CA-16-038), 2016
15. Department of Health and Human Services: Cancer Center Support Grants (CCSGs) for NCI-designated Cancer Centers (P30) (PAR-13-386), 2013
16. Angell EL, Bryman A, Ashcroft RE, et al: An analysis of decision letters by research ethics committees: The ethics/scientific quality boundary examined. Qual Saf Health Care 17:131-136, 2008
17. Mansbach J, Acholonu U, Clark S, et al: Variation in institutional review board responses to a standard, observational, pediatric research protocol. Acad Emerg Med 14:377-380, 2007
18. Van Luijn HE, Musschenga AW, Keus RB, et al: Evaluating the risks and benefits of phase II and III cancer clinical trials: A look at Institutional Review Board members in the Netherlands. IRB 29:13-18, 2007
19. Lux AL, Edwards SW, Osborne JP: Responses of local research ethics committees to a study with approval from a multicentre research ethics committee. BMJ 320:1182-1183, 2000
20. Edwards SJ, Ashcroft R, Kirchin S: Research ethics committees: Differences and moral judgement. Bioethics 18:408-427, 2004
21. DiMasi JA, Hansen RW, Grabowski HG: The price of innovation: New estimates of drug development costs. J Health Econ 22:151-185, 2003
22. Stewart DJ, Whitney SN, Kurzrock R: Equipoise lost: Ethics, costs, and the regulation of cancer clinical research. J Clin Oncol 28:2925-2935, 2010
23. Sheard L, Tompkins CN, Wright NM, et al: Non-commercial clinical trials of a medicinal product: Can they survive the current process of research approvals in the UK? J Med Ethics 32:430-434, 2006
24. Dawson AJ, Yentis SM: Contesting the science/ethics distinction in the review of clinical research. J Med Ethics 33:165-167, 2007

