Author manuscript; available in PMC 2016 Mar 1.
Published in final edited form as: Ther Innov Regul Sci. 2014 Nov 14;49(2):225–233. doi: 10.1177/2168479014555912

INVESTIGATING THE EFFICACY OF CLINICAL TRIAL MONITORING STRATEGIES: Design and Implementation of the Cluster Randomized START Monitoring Substudy

Katherine Huppler Hullsiek a, Jonathan M Kagan b, Nicole Engen a, Jesper Grarup c, Fleur Hudson d, Eileen T Denning a, Catherine Carey e, David Courtney-Rodgers e, Elizabeth B Finley f, Per O Jansson c, Mary T Pearson c, Dwight E Peavy g, Waldo H Belloso h, for the INSIGHT START Monitoring Substudy Group
PMCID: PMC4426264  NIHMSID: NIHMS631705  PMID: 25973346

Abstract

Background

Trial monitoring protects participant safety and study integrity. While monitors commonly go on-site to verify source data, there is little evidence that this practice is efficient or effective. An ongoing international HIV treatment trial (START) provides an opportunity to explore the usefulness of different monitoring approaches.

Methods

All START sites are centrally monitored and required to follow a local monitoring plan that specifies quality assurance activities. Additionally, sites were randomized (1:1) to receive, or not receive, annual on-site monitoring. The study will determine whether on-site monitoring increases the identification of major protocol deviations (eligibility or consent violations, improper study drug use, underreporting of primary or serious events, and data alteration or fraud).

Results

The START study completed enrollment in December 2013, with planned follow-up through December 2016. The monitoring study is ongoing at 196 sites in 34 countries. Results are expected when the START study concludes in December 2016.

Keywords: Data monitoring, centralized monitoring, on-site monitoring, quality assurance, clinical trials

Introduction

Clinical trials are monitored to help protect participant safety and assure data integrity [1]. There are a variety of monitoring approaches, and no single method is suitable for all trials [2–5]. Traditionally, research sites are visited in person, with an emphasis on source data verification (SDV). While such “on-site monitoring” provides opportunities for personal interaction and training, it can be costly, both in terms of the actual expense of the monitoring and the potential disruption of site operations [6]. Monitoring expenses in a phase 3 trial may account for up to one-third of total trial costs [7, 8].

To date there is little empirical evidence with which to evaluate the effectiveness or efficiency of on-site monitoring. One recent study examined the usefulness of 100% SDV in a cancer trial dataset from 75 sites in the United Kingdom [9]. The discrepancies found when comparing the data before and after the 100% SDV were mostly random errors which, because they were evenly distributed across arms, had no significant impact on the primary outcomes. Another study reviewed sponsor queries from the datasets of three phase 1 studies conducted by one contract research organization in the Netherlands [10]. Only 0.4% (6/1389) of the query-driven data changes could have influenced the primary study results. On the other hand, the US National Cancer Institute’s Cancer and Leukemia Group B has reported that its audit process, which includes a substantial on-site component, has been very successful in improving protocol compliance, data submission, and adherence to administrative requirements, and has also uncovered rare instances of scientific misconduct [11].

With trial costs continually rising and clinical research increasingly globalized, sponsors are actively seeking more efficient approaches that improve research quality while containing expenses [12, 13]. The Food and Drug Administration (FDA), European Medicines Agency (EMA), and others now support a variety of monitoring practices, with a focus on critical data elements [2–4]. Some have reported that increased central monitoring may decrease the need for on-site monitoring [8, 15–17], while approaches oriented toward staff training can reduce the types of errors typically found with on-site monitoring. Risk-based monitoring and electronic data capture both have the potential to improve research and data quality at reduced cost, but integrating these approaches poses new challenges to quality management processes.

Ongoing studies seek to determine the applicability and effectiveness of various data monitoring practices, including risk-based monitoring and quality by design [18–20]. The Optimon study compares intensive versus risk-adaptive monitoring in a variety of study settings [21], while the randomized Adamon study is evaluating adaptive on-site monitoring strategies [22]. Other randomized studies of on-site monitoring were terminated early and could not evaluate the impact of on-site monitoring [23, 24]. To formally evaluate different monitoring approaches, it is most efficient to nest a study within an ongoing clinical trial, ideally one with minimal risk for participants. This report outlines the design and rationale for our ongoing nested study, the START Monitoring Substudy.

Overview of study design and methods

In the START Monitoring Substudy (SMS), every eligible site receives central (database-driven) monitoring and local (investigator-driven) monitoring, but only half the sites have been randomized to also receive on-site monitoring. Through this design, we seek to determine whether the addition of on-site monitoring increases participant safety and the integrity of clinical trial data.

The START Study

The SMS is a component of the Strategic Timing of Initiating AntiRetroviral Treatment (START) study. START is a multicenter randomized international trial comparing two management strategies for antiretroviral therapy (ART)-naïve participants with baseline CD4 counts greater than 500 cells/μL, randomizing 4685 HIV+ participants at 215 sites from 35 countries. The two strategies are to begin ART immediately (early ART) or to defer ART until the CD4 cell count declines to below 350 cells/μL or an AIDS event occurs (deferred ART). All participants will be followed for at least 3 years, with average follow-up of 4.5 years. All ART drugs used in the study must have approval or tentative approval from the FDA or EMA. The composite primary endpoint for the START study is time to the development of a nonfatal serious AIDS or non-AIDS condition or death from any cause.

The START study is conducted by the International Network for Strategic Initiatives in Global HIV Trials (INSIGHT), and the design of the study has been previously described [25]. All clinical sites are associated with one of four International Coordinating Centers (ICCs), located in Copenhagen, London, Sydney, and Washington DC. Implementation details for the local (investigator-driven), central (database-driven), and on-site monitoring activities for START are summarized in Table 1 and described in detail in the Supplemental Appendix. Briefly, local monitoring is performed at the clinical sites by qualified personnel appointed by the site Principal Investigator under INSIGHT’s Clinical Quality Management Plan (CQMP); its focus is quality control of day-to-day activities and the semi-annual submission of forms tracking quality assurance for regulatory, specimen, and drug management issues, along with selected participant chart reviews. Central monitoring is performed by the INSIGHT Statistical and Data Management Center (SDMC) at the University of Minnesota and focuses on the accuracy, timing, and completeness of case report form (CRF) data and on protocol and regulatory requirements. On-site monitoring is performed annually by an ICC-designated monitor, focuses on participant safety, regulatory and protocol requirements, and data accuracy, and concludes with an on-site review with site staff.

Table 1.

Summary1 of central, local and on-site monitoring activities for the INSIGHT START Study

CENTRAL MONITORING (all sites)
Daily reports are posted to the INSIGHT website by the Statistical and Data Management Center (SDMC) summarizing
  • Data Completeness

  • Specimen Tracking

  • Data Quality

  • Regulatory Compliance

Semi-annual performance reports are generated for each site and International Coordinating Center (ICC). The reports are scored for five categories:
  • Eligibility Violations

  • Data quality

  • Retention

  • Serious Event Reporting

  • Adherence to the Clinical Quality Management Plan (CQMP)

The Quality Oversight and Performance Evaluation Committee, with representation from each ICC and the SDMC, reviews all performance reports and recommends corrective actions when necessary.

LOCAL MONITORING (all sites)
Each site is required to implement the START CQMP, including daily quality control (QC) activities:
  • All case report forms (CRF) checked by a second person

  • All data verified against source documentation at the site

  • All error-corrections made and submitted within 14 days of data query

Semi-annual quality assurance (QA) activities: complete and submit 4 CRFs to the SDMC:
  • Regulatory File Review (including informed consent)

  • Stored Specimen Review

  • Drug Management and Accountability Review

  • Chart/Case Reviews (for participants selected by the SDMC according to a pre-defined algorithm)


ON-SITE MONITORING (sites randomized to the on-site monitoring arm)
Annual on-site monitoring visit by an ICC-designated monitor. Details of the procedures and scope of visits are outlined in the START Monitoring Manual.
On-site monitoring visits include review of
  • Regulatory files and personnel

  • Drug management operations

  • Operations (facilities, security)

  • Specimen storage and labeling

  • Chart reviews (verify designated CRFs with source documents)

1 See the Supplementary Appendix for additional details.

Monitoring Substudy Design

The SMS is a cluster randomized study to evaluate and compare data monitoring with two versus three components: (central + local monitoring) compared with (central + local + on-site monitoring); see Figure 1. The unit of randomization is the clinical site, with randomization balanced on three factors: ICC, country, and projected enrollment (fewer than 15, 15–30, or more than 30 participants). Cluster randomization was used because the study objective is to compare the performance of clinical sites with and without on-site monitoring. Sites were not formally notified of their randomization assignment, but the assignment was not blinded: within the first year, sites randomized to the (central + local + on-site monitoring) arm were contacted to schedule a monitoring visit. Because no participant interaction takes place in connection with the SMS, local institutional review board (IRB) or independent ethics committee (IEC) approval of the substudy is not required by the sponsor, and no additional informed consent is required from study participants. All sites have IRB or IEC approval for the START study, and all participants sign the START informed consent. Because the German federal co-funder requires semi-annual on-site monitoring, German sites are excluded from the Monitoring Substudy and continue to receive central, local, and on-site monitoring for the duration of the START study. A “for-cause” on-site monitoring visit can be conducted at any START site with approval by the START Protocol Team co-chairs.

Figure 1.
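To make the allocation scheme above concrete, the following is a minimal sketch of a stratified 1:1 site randomization in Python. The function names, data layout, and seed are illustrative assumptions; this is not INSIGHT’s actual allocation software.

```python
import random
from collections import defaultdict

# Hypothetical site records: (site_id, ICC, country, projected enrollment).
# The data layout is an assumption for illustration only.

def enrollment_stratum(projected):
    """Map projected enrollment to the three strata used in the SMS."""
    if projected < 15:
        return "<15"
    return "15-30" if projected <= 30 else ">30"

def randomize_sites(sites, seed=2011):
    """Assign sites 1:1 to monitoring arms within (ICC, country, enrollment) strata."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for site_id, icc, country, projected in sites:
        strata[(icc, country, enrollment_stratum(projected))].append(site_id)
    assignment = {}
    for members in strata.values():
        rng.shuffle(members)
        # Alternating assignment after the shuffle keeps the two arms
        # balanced within each stratum.
        for i, site_id in enumerate(members):
            assignment[site_id] = "central+local+on-site" if i % 2 == 0 else "central+local"
    return assignment

sites = [("site01", "Copenhagen", "DK", 12), ("site02", "Copenhagen", "DK", 25),
         ("site03", "London", "GB", 40), ("site04", "London", "GB", 8)]
print(randomize_sites(sites))
```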

Study hypothesis and primary endpoint

The study hypothesis is that the addition of annual on-site monitoring to central and local monitoring will improve site (cluster) performance compared with central and local monitoring alone. A non-inferiority design was not considered because there is no established “gold standard” for data monitoring practices. The primary endpoint for the substudy is a composite of events that could affect participant safety or the START primary endpoint. Components of the primary endpoint include major eligibility violations, major informed consent violations, use of an ART drug for initial therapy that is not permitted by the START protocol, a delay of six months or more in reporting START primary endpoints or serious events, and data alteration or fraud. The comparison will be between the SMS randomized arms, not between the START randomized arms of early and deferred ART. Specific details on the components of the primary endpoint are included in the Supplementary Appendix.

Secondary endpoints and objectives

Secondary endpoints include comparisons between the two randomized monitoring arms for a broad range of outcomes that could affect the integrity of the main START study, including early initiation of ART in the START deferred arm; length of time between when ART should be prescribed and when it actually is prescribed; losses to follow-up, including withdrawal of consent; timeliness of data submissions and number of data queries; number of START protocol re-training visits; and number of for-cause monitoring visits.

The START early and deferred randomized arms will also be compared for the START primary endpoint for the subgroup defined by the randomly assigned SMS arms. Site characteristics associated with poor quality, as judged by the primary and secondary outcomes of the SMS, will be identified. Finally, the overall cost of on-site monitoring visits, both financial and in terms of human resources for the on-site monitor, will be described.

Sample size and power considerations

When the SMS was designed, little information was available on which to base an expected percentage of substudy primary endpoints that would be found with on-site monitoring, or the expected coefficient of variation of the true proportion between clusters (sites). Data from a large international trial conducted in 2003, the Evaluation of Subcutaneous Proleukin® in a Randomized International Trial (ESPRIT) [26], were used to generate estimates. The overall percentage of ESPRIT participants who had an unreported event (AIDS, death, grade 4 adverse event [27], or serious adverse event) found at an on-site monitoring visit at the completion of the study was approximately 4%. Because an unreported START primary event is just one component of the composite primary endpoint for the SMS, we estimated that an unreported primary endpoint would be found for 10% and 7% of participants randomized to the arms with and without on-site monitoring, respectively, for a risk difference of 3%. Using the methods of Hayes and Bennett [28], the estimated coefficient of variation from the ESPRIT data is 0.50. Table 2 provides the number of sites needed for each randomized arm under several scenarios for n (the number of participants randomized per site) and κ (the coefficient of variation) in order to have 80% power with two-sided α = 0.05. The START study was expected to randomize participants at 235 sites, with an average of 20 participants per site. If the coefficient of variation is 0.5 and n = 20, then a total of 204 sites are needed for the SMS.

Table 2.

Number of clusters (sites) per randomization arm for 80% power with 2-sided α=0.05, assuming event rates of 10% and 7%

Number randomized per site (n) | κ = 0.3 | κ = 0.5 | κ = 0.7
15 | 104 | 125 | 156
20 | 81 | 102 | 134
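As a rough cross-check on Table 2, the sketch below implements the Hayes and Bennett [28] sample size formula for an unmatched cluster randomized comparison of two proportions. The function is our own illustrative coding of the published formula; depending on rounding conventions, its results agree with the table to within one or two clusters per arm.

```python
import math
from scipy.stats import norm

def clusters_per_arm(p1, p2, n, k, alpha=0.05, power=0.80):
    """Clusters per arm for an unmatched cluster randomized trial
    comparing true proportions p1 and p2 (Hayes & Bennett, 1999).
    n = participants per cluster; k = coefficient of variation of
    the true proportion between clusters."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    # Within-cluster binomial variation plus between-cluster variation.
    var = (p1 * (1 - p1) + p2 * (1 - p2)) / n + k**2 * (p1**2 + p2**2)
    return math.ceil(1 + z**2 * var / (p1 - p2) ** 2)

# Reproduce the Table 2 scenarios: event rates 10% vs 7%.
for n in (15, 20):
    row = [clusters_per_arm(0.10, 0.07, n, k) for k in (0.3, 0.5, 0.7)]
    print(f"n={n}: {row}")
```

For example, with n = 20 and κ = 0.5 the formula gives 102 clusters per arm, matching the table entry and the stated total of 204 sites.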

Closure of the SMS

During the course of the SMS, the protocol team will work with INSIGHT’s Quality Oversight and Performance Evaluation Committee (QOPEC) to develop a risk algorithm for identifying sites at higher risk for unreported START primary and important secondary events. The main START study is expected to complete follow-up in December 2016. The SMS will have a common closing date approximately 9 months before the common closing date for START. When the SMS closes, all routine on-site monitoring visits for the substudy will end, and the substudy results will be unblinded to the SMS and START protocol teams. Using the risk algorithm and the substudy results, some sites (10–20%) will be chosen for targeted on-site monitoring before the START study closes. That additional monitoring will be targeted at finding unreported START primary and important secondary events. Based on the observed rates of missed events, additional sites may receive on-site monitoring.

Study procedures and data collection

Data collection

There are 3 case report forms (CRFs) specific to the SMS. All sites complete a one-time form detailing site characteristics, including the number of full-time-equivalent staff on the START study by job classification and the number of non-START clinical research studies being conducted at the site. If on-site retraining for the START protocol (or any of its substudies) occurs, the ICC completes a retraining summary CRF indicating the nature of the retraining, the events that triggered the need for retraining, and the issues reviewed with site personnel. For sites randomized to the on-site monitoring arm, the monitor prepares a visit cost summary CRF detailing the time and expense of preparing, conducting, and following up on the visit, including producing the final monitoring reports.

At least annually, an internal Data and Safety Monitoring Board (DSMB) reviews the SMS data by randomization arms for participant safety. The DSMB is composed of the START co-chairs and representatives from the Division of AIDS, National Institute of Allergy & Infectious Diseases, National Institutes of Health. No other summary data comparing the SMS randomization arms will be released prior to the end of the START study.

Statistical analysis plan

The number and characteristics of sites in each monitoring randomization arm will be summarized, including the number of study personnel, projected enrollment, number of other studies ongoing at the site, and prior history of participation in INSIGHT trials. Hierarchical logistic regression models appropriate for cluster randomized trials will be used to compare the two randomized monitoring arms for the primary outcome. Additional models will evaluate the effect of the three randomization strata. Odds ratios and 95% confidence intervals will compare sites with on-site monitoring to sites without on-site monitoring. Mean levels of continuous secondary outcomes will be compared using methods that account for within-site correlation. The intra-site correlation coefficient will be reported.
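As an illustration of the kind of site-clustered analysis described above, the sketch below fits a marginal logistic model by generalized estimating equations (GEE) with exchangeable within-site correlation, a standard alternative that, like the hierarchical models specified in the plan, accounts for clustering. The toy data frame and its column names are assumptions, not the actual SMS database schema.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Toy participant-level data: 'event' flags a primary-endpoint component,
# 'onsite' the randomized monitoring arm, 'site' the cluster (all invented).
df = pd.DataFrame({
    "event":  [0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1],
    "onsite": [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
    "site":   [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
})

# Logistic model for the monitoring-arm effect, with an exchangeable
# working correlation structure to account for within-site correlation.
model = smf.gee("event ~ onsite", groups="site", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
print(result.summary())  # coefficient for 'onsite' is on the log-odds scale
```

A subgroup heterogeneity test of the kind described below could be coded by adding an interaction term, for example "event ~ onsite * icc".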

Subgroup analyses will be conducted for subgroups defined by ICC, by on-site monitoring prior to the opening of the SMS, and by previous participation in INSIGHT trials. Tests of heterogeneity will be performed to assess differences among subgroups.

The cost of on-site monitoring visits will be summarized, including the number of hours for visit preparation, the visit itself, and preparation of the final monitoring report, as well as the dollar cost of the visit.

Challenges in study implementation

The primary challenge for the implementation of the SMS was one of timing. The first randomization in the main START study took place in April 2009, yet the SMS was not planned until early 2011. Prior to the opening of the SMS (on 01 September 2011), 1547 participants at 113 sites had already been randomized to START, and 37% of sites had already had an on-site monitoring visit. Only events and data occurring after the opening of the substudy will be included in the primary substudy analyses. Subgroup analyses of the primary substudy endpoint will compare sites with and without on-site monitoring prior to the opening of the substudy.

While the INSIGHT network has extensive experience with central and on-site monitoring, the more formalized local monitoring system used in the START study (submission of the CQMP forms to the SDMC for reporting and evaluation purposes; see Table 1 and the Supplemental Appendix) was a new challenge for the network. While local monitoring has always been a requirement for INSIGHT studies, prior studies did not have to formally submit documentation that the process had been completed. The new CQMP process initially increased responsibilities for the ICCs because, although not required to do so, the ICCs generally reviewed the site CQMP forms prior to submission and worked with the sites to resolve any issues identified. The CQMP forms can now be completed online, relieving most of the ICCs’ pre-processing responsibilities; the ICCs continue to work with sites to resolve any issues identified. Site commitment to the CQMP form submission process was also a challenge initially, but the process has become more accepted as sites receive direct and useful feedback from the CQMP and performance reports, allowing for self-improvement. Finally, because the data collected on the CQMP forms evolved over the enrollment period for the main START study, complete analysis of some specific items on the CQMP forms will not be possible.

Defining and explaining the substudy design and analysis plan has also been challenging. Some investigators assumed that the CQMP report for a 6-month period would be directly comparable with an on-site monitoring report for the same period. However, in the SMS the CQMP forms are submitted and reviewed every six months for all sites, whereas sites randomized to the on-site monitoring arm are monitored annually, and typically not for the same time period or the same set of participant records as a local CQMP review. Also, the SMS was not designed to track which monitoring activity (local, central, or on-site) detected a component of the primary event.

A final challenge was to assure that the elements of the primary composite substudy endpoint (1) addressed both participant safety and data integrity for the main START study outcome, and (2) were clearly defined, objective, measurable, and applicable to all sites, given the heterogeneity of participating countries’ regulations and requirements.

Current status

As of December 2013, the SMS is fully enrolled, with 196 sites in 34 countries (4368 participants) randomized to the SMS. Overall, the average sample size at a site was 22 participants, close to the design assumption of 20. Table 3 reports the baseline characteristics of both sites and participants. Seventy percent of sites had prior experience with INSIGHT, through participation in another study or with the START study prior to the implementation of the SMS. As expected, the median number of full-time-equivalent personnel working on the main START study increased with projected enrollment. All site characteristics are similar between the SMS monitoring arms.

Table 3.

Site and Participant Characteristics at Baseline by Randomization Group

Site Characteristic | Local + Central Monitoring | Local + Central + On-site Monitoring
Prior experience with INSIGHT; N (%) | 69 (71%) | 68 (69%)
Number of personnel, by projected enrollment; median (IQR)
 < 15 | 4 (3, 6) | 5 (3, 7)
 15–30 | 6 (3, 8) | 5 (4, 6)
 > 30 | 8 (6, 10) | 8 (5, 14)
 All sites | 6 (3, 9) | 5 (4, 7)
Full-time equivalent personnel, by projected enrollment; median (IQR)
 < 15 | 0.2 (0.1, 0.5) | 0.2 (0.1, 0.5)
 15–30 | 0.7 (0.3, 1.6) | 0.7 (0.4, 1.4)
 > 30 | 1.8 (0.9, 3.9) | 1.6 (0.8, 5.3)
 All sites | 0.7 (0.3, 1.6) | 0.7 (0.3, 1.5)

Participant Characteristic | Local + Central Monitoring | Local + Central + On-site Monitoring
No. enrolled | 2264 | 2104
Age (years); mean (SD) | 37 (10.2) | 36 (10.1)
Gender; N (%) female | 585 (26%) | 651 (31%)
Race; N (%)
 Asian | 222 (10%) | 165 (8%)
 Black | 765 (34%) | 634 (30%)
 Latino/Hispanic | 311 (14%) | 337 (16%)
 White | 861 (38%) | 959 (46%)
 Other | 124 (5%) | 40 (2%)
CD4+ (cells/mm3); median (IQR) | 648 (581, 759) | 660 (588, 776)
HIV-RNA (log10 copies/mL); median (IQR) | 4.1 (3.5, 4.6) | 4.1 (3.5, 4.6)

Follow-up for the START study is expected to continue through December 2016. The SMS has been reviewed by the DSMB three times (September 2012, May 2013 and February 2014), with recommendations each time that the substudy continue.

Discussion

In clinical trials, the focus on quality control and assurance was established more than 25 years ago, when the pharmaceutical industry and regulators set standards for clinical trial monitoring while developing the International Conference on Harmonisation Good Clinical Practice guidelines [29]. Industry, being dependent on the quality of trial data for product approvals, initiated thorough clinical trial monitoring, including 100% source data verification. Historically, investigators conducting clinical trials with industry sponsors have been accustomed to clinical monitors taking lead responsibility for study data quality, with local clinical trial staff playing a secondary role. By contrast, in investigator-driven trials like START, where monitoring resources are most often modest compared with industry trials, the quality of trial data is increasingly dependent on central and local quality management. In such settings it is important to know which monitoring approaches are the most useful and efficient. This forms the basis of our study: to learn whether the addition of on-site monitoring to central and local monitoring improves site performance compared with central and local monitoring alone. The START trial is an ideal study in which to pose this question because it is a large randomized strategy trial using approved drugs, with relatively low clinical risk to participants. This, coupled with the INSIGHT network’s extensive experience in long-term randomized clinical trials with well-developed quality management plans, provides a research environment capable of supporting this type of inquiry in a manner that meets the CONSORT guidelines for cluster randomized trials [30].

An additional and important strength of this substudy is that data are being collected detailing the specific costs (in both time and financial resources) of on-site monitoring, including preparation for the visit, the visit itself, and completion of the monitoring report. On the other hand, our study is not collecting data on the cost of monitoring visits to the sites, in terms of both the personnel needed to prepare for the visit and the disruption at the site during the visit.

One difficulty for any study evaluating monitoring approaches is determining an appropriate endpoint: what, specifically, do we want to measure and evaluate? A potential limitation of our study is its composite endpoint, which attempts to balance both participant safety (informed consent and eligibility violations) and the integrity of the main START study results (use of an ART drug not permitted by protocol, missed or delayed reporting of events, and data alteration or fraud). As with any composite endpoint, some will argue that not all components are equally important.

Large strategy trials like START provide an opportunity to experimentally evaluate operational issues associated with the conduct of trials in a cost-efficient manner. However, there are also potential limitations for nested substudies: enrollment issues in the main study will also affect the substudy, and if the main study closes early the substudy will likely not meet its design assumptions. Enrollment to the START study was as planned, with full enrollment (N = 4685 HIV+ participants). However, of the 235 projected START sites, 20 enrolled no participants and another 19 were not able to participate in the Monitoring Substudy (primarily because of regulatory requirements). The remaining 196 sites were randomized to the START Monitoring Substudy, close to the 204 sites required under the SMS design assumptions. We anticipate that, within the context of large multicenter clinical trials with both central and local monitoring capabilities, the results of the SMS may have broad generalizability. On the other hand, our results may have less relevance for studies without the same extensive central monitoring capability or where clinical sites do not assume responsibility for local monitoring and quality assurance processes.

There is a recognized need for a variety of monitoring approaches, and research is ongoing across different types of studies and clinical networks. As an example, one network [31] has successfully evaluated remote pre-enrollment checking of consent forms, but that process involved personal inspection of each consent form prior to randomization (415 forms over 3.5 years). With 4685 consents collected in just under five years from 35 countries and in multiple languages, that process would have been very labor- and time-intensive for the START study. Even if the SMS fails to observe a significant quality benefit with on-site monitoring, that will not necessarily mean on-site monitoring is not useful, or even vital, for certain studies or for addressing specific study needs and issues. In fact, on-site monitoring will likely remain necessary for many studies in some focused form, such as targeted monitoring of eligibility and informed consent (particularly in the early stages of a study) and event identification and determination (at later stages).

Emerging capabilities for secure remote access to source data (electronic patient files) will surely have a role in shaping future monitoring strategies. One network [32] evaluating data changes to electronic case report forms found that most changes were due to transcription errors that had little impact on data analysis. Remote access to source data will permit earlier event identification and enhance the ability to correct problems before they become prevalent. Consequently, the focus of on-site visits could shift to training efforts and to developing stronger working relationships between site staff and coordinating center monitors, improving study procedures and focusing on quality.

Acknowledgments

Grant support: NIAID U01AI068641

Footnotes

Trial registration number: ClinicalTrials.gov NCT00867048, EudraCT 2008-006439-12

Conflict of interest: The authors report no conflicts of interest.

References

1. Baigent C, Harrell F, Buyse M, et al. Ensuring trial validity by data quality assurance and diversification of monitoring methods. Clin Trials. 2008;5:49–55. doi: 10.1177/1740774507087554.
2. US Food and Drug Administration. Guidance for Industry. Oversight of Clinical Investigations – A Risk-Based Approach to Monitoring. 2013. Available at: http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM269919.pdf (accessed 6 May 2014).
3. European Medicines Agency. Reflection paper on risk based quality management in clinical trials. 2011. Available at: http://www.ema.europa.eu/docs/en_GB/document_library/Scientific_guideline/2011/08/WC500110059.pdf (accessed 6 May 2014).
4. Usher RW. PhRMA BioResearch Monitoring Committee perspective on acceptable approaches for clinical trial monitoring. Drug Information Journal. 2010;44:477–483.
5. Bakobaki J, Joffe N, Burdett S, et al. A systematic search for reports of site monitoring technique comparisons in clinical trials. Clin Trials. 2012;9:777–780. doi: 10.1177/1740774512458993.
6. Eisenstein EL, Collins R, Cracknell BS, et al. Sensible approaches for reducing clinical trial costs. Clin Trials. 2008;5:75–84. doi: 10.1177/1740774507087551.
7. Eisenstein E, Lemons P, Tardiff B, et al. Reducing the costs of phase III cardiovascular clinical trials. Am Heart J. 2005;149:482–488. doi: 10.1016/j.ahj.2004.04.049.
8. Lindblad AS, Manukyan Z, Purohit-Sheth T, et al. Central site monitoring: results from a test of accuracy in identifying trials and sites failing Food and Drug Administration inspection. Clin Trials. 2014;11:205–217. doi: 10.1177/1740774513508028.
9. Smith C, Stocken D, Dunn J, et al. The value of source data verification in a cancer clinical trial. PLoS One. 2012;7(12):e51623. doi: 10.1371/journal.pone.0051623.
10. Pronker E, Geerts BF, Cohen A, et al. Improving the quality of drug research or simply increasing its cost? An evidence-based study of the cost for data monitoring in clinical trials. Br J Clin Pharmacol. 2011;71(3):467–470. doi: 10.1111/j.1365-2125.2010.03839.x.
11. Weiss R, Vogelzang N, Peterson B, et al. A successful system of scientific data audits for clinical trials. JAMA. 1993;270:459–464.
12. Kramer J, Schulman K. Transforming the economics of clinical trials. Institute of Medicine Discussion Paper. 2012 Apr 13. Available at: http://www.iom.edu/~/media/Files/Perspectives-Files/2012/Discussion-Papers/HSP-Drugs-Transforming-the-Economics.pdf (accessed 6 May 2014).
13. Glickman SW, McHutchinson JG, Peterson ED, et al. Ethical and scientific implications of the globalization of clinical research. N Engl J Med. 2009;360:816–823. doi: 10.1056/NEJMsb0803929.
14. Morrison BW, Cochran CJ, White JG, et al. Monitoring the conduct of clinical trials: a survey of current practices. Clin Trials. 2011;8:342–349. doi: 10.1177/1740774511402703.
15. Pogue J, Devereaux P, Thorlund K, et al. Central statistical monitoring: detecting fraud in clinical trials. Clin Trials. 2013;10:225–235. doi: 10.1177/1740774512469312.
16. Buyse M, George SL, Evans S, et al. The role of biostatistics in the prevention, detection and treatment of fraud in clinical trials. Statist Med. 1999;18:3435–3451. doi: 10.1002/(sici)1097-0258(19991230)18:24<3435::aid-sim365>3.0.co;2-o.
17. Uren SC, Kirkman MB, Dalton BS, et al. Reducing clinical trial monitoring resource allocation and costs through remote access to electronic medical records. J Oncol Pract. Epub ahead of print Oct 9, 2012. doi: 10.1200/JOP.2012.000666.
18. Ansmann E, Hecht A, Henn D, et al. The future of monitoring in clinical research – a holistic approach: linking risk-based monitoring with quality management principles. Ger Med Sci. 2013;11:Doc04. doi: 10.3205/000172.
19. Mitchel J, Gittleman D, Markowitz JMS, et al. A 21st century approach to QA oversight of clinical trial performance and clinical data integrity. The Monitor. 2013:41–46.
20. Sprenger K, Nickerson D, Meeker-O’Connell A, et al. Quality by design in clinical trials: a collaborative pilot with FDA. Therapeutic Innovation & Regulatory Science. 2013;47:161–166. doi: 10.1177/0092861512458909.
21. Journot V, Pignon J-P, Gaultier C, et al. Validation of a risk-assessment scale and a risk-adapted monitoring plan for academic clinical research studies – the Pre-Optimon study. Contemp Clin Trials. 2011;32:16–24. doi: 10.1016/j.cct.2010.10.001.
22. Brosteanu O, Houben P, Ihrig K, et al. Risk analysis and risk adapted on-site monitoring in noncommercial clinical trials. Clin Trials. 2009;6:585–596. doi: 10.1177/1740774509347398.
23. Macefield RC, Beswick AD, Blazeby JM, et al. A systematic review of on-site monitoring methods for health-care randomised controlled trials. Clin Trials. 2013;10:104–124. doi: 10.1177/1740774512467405.
24. Lienard J-L, Quinaux E, Fabre-Guillevin E, et al. Impact of on-site initiation visits on patient recruitment and data quality in a randomized trial of adjuvant chemotherapy for breast cancer. Clin Trials. 2006;3:1–7. doi: 10.1177/1740774506070807.
25. Babiker AG, Emery S, Fatkenheuer G, et al. Considerations in the rationale, design and methods of the Strategic Timing of AntiRetroviral Treatment (START) study. Clin Trials. 2013;10:S5–S36. doi: 10.1177/1740774512440342.
26. The INSIGHT-ESPRIT Study Group and SILCAAT Scientific Committee; Abrams D, et al. Interleukin-2 therapy in patients with HIV infection. N Engl J Med. 2009;361:1548–1559. doi: 10.1056/NEJMoa0903175.
27. Division of AIDS, National Institutes of Health. Table for grading severity of adult adverse experiences. Rockville, MD: National Institute of Allergy and Infectious Diseases; 1998.
28. Hayes RJ, Bennett S. Simple sample size calculation for cluster-randomized trials. Int J Epidemiol. 1999;28:319–326. doi: 10.1093/ije/28.2.319.
29. International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use. ICH Procedures. Available at: http://www.ich.org/fileadmin/Public_Web_Site/ABOUT_ICH/Process_of_Harmonisation/ICH_Procedures_V1.0.pdf (accessed 7 May 2014).
30. Campbell MK, Elbourne DR, Altman DG, for the CONSORT Group. CONSORT statement: extension to cluster randomised trials. BMJ. 2004;328:702–708. doi: 10.1136/bmj.328.7441.702.
31. Journot V, Perusat-Villetorte S, Bouyssou C, et al. Remote preenrollment checking of consent forms to reduce nonconformity. Clin Trials. 2013;10:449–459. doi: 10.1177/1740774513480003.
32. Mitchel JT, Joong Kim Y, Choi J, et al. Evaluation of data entry errors and data changes to an electronic data capture clinical trial database. Drug Information Journal. 2011;45:421–430. doi: 10.1177/009286151104500404.
