Trials. 2018 Oct 16;19:562. doi: 10.1186/s13063-018-2941-8

Monitoring performance of sites within multicentre randomised trials: a systematic review of performance metrics

Kate F Walker, Julie Turzanski, Diane Whitham, Alan Montgomery, Lelia Duley
PMCID: PMC6192157  PMID: 30326948

Abstract

Background

Large multicentre trials are complex and expensive projects. A key factor for their successful planning and delivery is how well sites meet their targets in recruiting and retaining participants, and in collecting high-quality, complete data in a timely manner. Collecting and monitoring easily accessible data relevant to performance of sites has the potential to improve trial management efficiency. The aim of this systematic review was to identify metrics that have either been proposed or used for monitoring site performance in multicentre trials.

Methods

We searched the Cochrane Library, five biomedical bibliographic databases (CINAHL, EMBASE, Medline, PsychINFO and SCOPUS) and Google Scholar for studies describing ways of monitoring or measuring individual site performance in multicentre randomised trials. Records identified were screened for eligibility. For included studies, data on study content were extracted independently by two reviewers, and disagreements resolved by discussion.

Results

After removing duplicate citations, we identified 3188 records. Of these, 21 were eligible for inclusion and yielded 117 performance metrics. The median number of metrics reported per paper was 8, range 1–16. Metrics broadly fell into six categories: site potential; recruitment; retention; data collection; trial conduct and trial safety.

Conclusions

This review identifies a list of metrics to monitor site performance within multicentre randomised trials. Those that would be easy to collect, and for which monitoring might trigger actions to mitigate problems at site level, merit further evaluation.

Keywords: Multicentre, Randomised trials, Clinical trials, Performance metrics, Trial management, Site performance, Operational metrics, Key performance indicators

Background

Multicentre randomised trials are complex and expensive projects. Improving the efficiency and quality of trial conduct is important, for patients, funders, researchers, clinicians and policy-makers [1]. A key factor in successful planning and delivery of multicentre trials is how well sites meet their targets in recruiting and retaining participants, and in collecting high-quality, complete data in a timely manner [2]. Collecting and monitoring easily accessible data relevant to performance of sites has the potential to improve the efficiency and success of trial management. Ideally, such performance metrics should provide information that quickly identifies potential problems so they can be mitigated or avoided, hence minimising their impact and improving the efficiency of trial conduct.

We are not aware of any standardised metrics for monitoring site performance in multicentre trials. A recent query to all UK Clinical Research Collaboration (UKCRC)-registered Clinical Trials Units (CTUs) revealed that many units routinely collect and report data for each site in a trial, such as numbers randomised, case report forms (CRFs) returned, data quality, missing primary outcome data, and serious breaches. How such data are used to assess and manage performance varies widely, however [3–7]. Agreeing a small number of site performance metrics that could be easily collected, presented and monitored in a standardised way by a trial manager or trial co-ordinator would be a potentially useful tool for improving the efficiency of trial conduct.

Currently, trial teams, sponsors, funders and oversight committees monitor site performance and trial conduct based primarily on recruitment [8]. Whilst clearly important, recruitment is not the only performance indicator that matters for a successful trial. Using a range of additional metrics that include data quality, protocol compliance and participant retention would give a better overall measure of the performance of each trial site, and the trial overall. To be low cost and efficient, the number of metrics monitored at any one time should be limited to no more than 8 to 12 [9]. We conducted a systematic review to identify performance metrics that have been used, or proposed, for monitoring or measuring performance at sites in multicentre randomised trials.

Methods

We performed a systematic review to identify metrics that have been used or proposed for monitoring or measuring performance at individual sites in multicentre randomised trials.

Criteria for potentially eligible studies

Studies were potentially eligible for inclusion if they:

  • Reported one or more site performance metric, either used or proposed for use, specifically for the purpose of measuring individual site performance

  • Were multicentre randomised trials, or were concerned with multicentre trials

  • Were published in English

  • Related to randomised trials involving humans

Studies in which the strategy for monitoring site performance was randomly allocated were included; we anticipated that there might be studies in which the adoption of an individual performance metric had been tested by randomly allocating sites to use that particular metric or not. Studies relevant to both publicly funded and industry-funded trials were included.

Search strategy

We searched the Cochrane Library and five biomedical bibliographic databases (CINAHL, Excerpta Medica database (EMBASE), Medical Literature Analysis and Retrieval System Online (Medline), Psychological Information Database (PsychINFO) and SCOPUS) and Google Scholar from 1980 to 2017 week 07. The search strategy is provided as an Appendix (Table 3).

Selection of studies

Two reviewers (KW, JT) independently assessed for inclusion the titles and abstracts identified by the search strategy. If there was disagreement about whether a record should be included, we obtained the full text.

We sought full-text copies for all potentially eligible records, and two reviewers (KW, JT) independently assessed these for inclusion. Disagreements were resolved by discussion, and if agreement could not be reached the study was independently assessed by a third reviewer (LD). Multiple reports of the same study were linked together.

Data extraction and data entry

Two reviewers (KW, JT) extracted data independently onto a specifically designed data extraction form. In the few cases where full text was not available (n = 9), data were extracted using the title and abstract only. Data were entered into an Excel spreadsheet, and checked.

Data were extracted on the design of the randomised trial (participants, intervention, control, number of sites and target sample size), and on whether the performance metric(s) was theoretical or applied. For each performance metric we collected: a verbatim description of the metric; how the metric was measured or expressed; the timing of the measurement and the phase of the study during which it was taken; who measured the metric; whether a threshold existed to trigger action and, if so, what the threshold was and what action it triggered; and whether the metric was recommended by the authors.
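
As a rough sketch of how each extracted metric could be recorded (the field names below are our own shorthand for the items listed above; the actual extraction form is not reproduced in this paper), one record per metric might look like this:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MetricExtraction:
    """One record of the data extraction form described above.

    Field names are our own shorthand for the items listed in the text;
    the original extraction form is not reproduced here.
    """
    verbatim_description: str            # metric as worded in the source study
    how_measured: str                    # how the metric was measured or expressed
    timing_and_phase: str                # when it was measured, and in which study phase
    measured_by: str                     # who measured the metric
    threshold: Optional[str] = None      # threshold that triggers action, if any
    action_triggered: Optional[str] = None
    recommended_by_authors: Optional[bool] = None
```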

Data analysis

We described the flow of studies through the review, with reasons for being removed or excluded, using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidance [10]. Characteristics of each study were described and tabulated. Analyses were descriptive only, with no statistical analyses anticipated.

Results

The database search identified 3365 records, of which 177 were duplicates, leaving 3188 to be screened for eligibility (Fig. 1). At screening, we obtained full-text copies for 147 records to determine eligibility. For a further seven records full-text copies were unavailable, so screening was based on the abstract only. Across these full-text copies and abstracts, there was disagreement on three papers. Following discussion, two papers were accepted for inclusion [11, 12] and one paper was excluded [13].

Fig. 1 Flow diagram

Twenty-one studies were agreed for inclusion, of which 14 proposed performance metrics and seven used performance metrics (Table 1). These 21 studies reported a total of 117 performance metrics; the median number reported per study was 8 (range 1–16). The 117 metrics were then screened to exclude any judged as lacking sufficient clarity, unrelated to individual site performance, too specific to an individual trial's methodology, or pertaining to clinical outcomes rather than trial performance. This left 87 performance metrics to be considered for use in day-to-day trial management. The metrics broadly fell into six main categories: assessing site potential before recruitment starts; and monitoring recruitment, retention, quality of data collection, quality of trial conduct, and trial safety (Table 2).

Table 1.

Characteristics of included studies

Study | Study description | Number of sites (sample size) | Metrics reported by each study: included as a site performance metric, or excluded as not a site performance metric (excluded metrics are marked (A)–(D); see footnote a)
Studies proposing performance metrics
 Bose 2012 [14] Paper discussing trial management through central monitoring Not applicable • Site location potential index based on an assessment of the number of patients at an individual site with the disease of interest
• Trial compliance index based on a number of suggested factors including the number of late visits, failure to achieve recruitment target, number of dosing errors, etc.
• Drug adversity measurement (B)
• Drug potential index (B)
 Djali 2010 [15] Paper discussing a data-driven quality management system Not applicable • Enrolment number per siteb
• Recruitment period per site
• Number of AEs per site
• Number of protocol deviations and violations per site
• Number of discontinuations per site (A)
• Deaths per site (D)
 Valdés-Márquez, 2011 [16] Methodology of developing ‘key risk indicators’ for monitoring of a large international clinical trial Not applicable • Rate of SAE reporting per site: centres assigned a dichotomous score depending on whether they showed extreme deviation from comparable sites (arbitrarily defined as half the observed median rate across sites)
• Short visit duration: centres assigned a dichotomous score depending on whether they showed extreme deviation from comparable sites (arbitrarily defined as half the observed median rate across sites)
• Measures of compliance with study treatment (A)
• Blood results/other continuous variables examined for unusual patterns (A)
 Glass 2007 [17] Study analysing data retrospectively from 262 clinical trials to determine variables associated with successful trial delivery Not applicable • Actual number participants randomised per site
• Number successfully completing the study’s protocol per site
• Time between when an individual site randomises its first participant and the time the first site in that study enrols its first patient
 Hanna, 2013 [11] Development of a list of quality indicators for trial performance based on the consensus of experts Not applicable • SAE reporting measured by the number of SAEs reported/number of SAEs identified in trial database or trial follow-up documents
• Transfer of CRF to CTU measured by the number of completed CRF received by CTU within 30 days/number of completed CRF received by the CTU in 3 months
 Jou, 2013 [18] Aim of the main study: treatment-naïve, hepatitis C patients randomised to two peginterferon regimens. Primary outcome virologic response. A retrospective analysis was performed of individual site performance using trial data 118 (3070) • Rates of screen failure defined as the percentage of participants screened who failed screening
• Completion and discontinuation of treatment, defined as the percentage of participants who completed treatment/ percentage of participants who discontinued treatment
• Completion / discontinuation of follow-up, defined as the percentage who completed follow-up/ percentage who discontinued follow-up
• Treatment adherence (B)
 Khatawkar 2014 [19] Retrospective analysis of data queries using clinical trial data Not applicable • Data query (DQ) rate per page
• DQ rate per page by phase of study
• DQ rate per page by country (B)
• DQ rate per page by therapeutic area (B)
 Lee 2012 [20] Paper describing the output of a Delphi survey to establish an ‘evaluation framework’ for clinical trial data Not applicable • Rapid enrolment, defined as time taken to reach target enrolment
• Timely data entry, defined as time taken for data entry after completion of informed consent
• Timely manual query management, defined as time taken for response to manual query request from data centre
• Timely database lock, defined as time taken for database lock after the last visit of last participant per site
• Data discrepancy management metric encompassing number of manual queries per CRF for missing data; number of manual queries per CRF for out-of-range data; number of manual queries per CRF for logical consistency
• Protocol compliance metric encompassing: rate of ‘dropout’ of total participants; rate of false ‘dropout’ of total dropouts; rate of late detection of ‘dropout’
• Enrolment success defined as % eligible per study
• Weeks after go-live, i.e. after the point of protocol amendment (A)
 Rojavin, 2005 [21] Paper describing and discussing one proposed metric Not applicable • Recruitment Index (RI) = (LPFV − FPFV) × S/P (see the worked sketch after this table), where
LPFV = date of the last participant's first visit
FPFV = date of the first participant's first visit
S = number of participating sites
P = number of participants who successfully completed the study
 Rosendorf, 1993 [22] Trials of treatment for HIV. No further details. An evaluation tool was proposed to monitor individual site performance within a multicentre randomised trial. 59 (ns) • Intensity adjusted score (IAS) = IS0 + don × IS1 + doff × IS2 (see the worked sketch after this table), where:
IS0 = score assigned for enrolling a new participant during the 6-month evaluation period
don = number of days the participant was on the study medication during the evaluation period
doff = number of days the participant was off the study medication
IS1 = intensity score for the days on which the participant is receiving study medication
IS2 = intensity score for the days on which the participant is off all study medication
The IAS is calculated for each participant and then summed across all participants, once during the evaluation period
• Funding adjusted score = IAS divided by the amount awarded for total direct costs during the given time period
• Summary quartiles = total number of new and continuing participants on study
 Sweetman, 2011 [23] Retrospective analysis of publications of 80 clinical trials on protocol violation reporting Not applicable • Occurrence of protocol violations, defined as total number of protocol violations divided by the number of enrolled participants
 Thom, 2011 [12]a Report of a centre performance assessment tool used within a clinical trial network to assess individual site performance Not applicable • Protocol adherence, defined as average rate of protocol violations per enrolled participant
• Data quality, defined as average rate of edit checks per participant
• Data timeliness, defined as the percentage of forms entered late
• Time of starting after the first centre start date
• Sum of protocol adherence, data quality, data timeliness and timeliness of study start-up to give overall rank
• Timeliness of study start-up
• Recruitment, defined as average percentage of participants contributed over all studies conducted (B)
• Retention, defined as average percentage of participants with complete follow-up data (B)
• Recruitment/retention, defined as sum of recruitment + retention to give overall rank (B)
• Adherence/quality (A)
• Quality of laboratory samples collected (A)
 Tudur Smith, 2014 [24] Paper describing monitoring methods using a ‘risk proportionate approach’ used by an individual clinical trials unit Not applicable • Consent form completion, defined as consent forms returned within 7 days of completion by sites.
• Recruitment process, defined as frequency of eligible participants who do not provide consent.
• Missing primary outcome data, defined as cumulative percentage of participants with missing primary outcome data at each site
• SAEs, defined as cumulative percentage of participants with at least one SAE across the trial as a whole and at each site /measure of time, e.g. 1 month
• Sum of all SAEs/sum of all follow-up for the trial
• Sum of all follow-up at site × overall SAE rate for the trial
• Visit dates, defined as time between actual date of visit versus expected date of visit
• Case report form completion, defined as timely submission (A)
 Wilson, 2014 [25] Theoretical paper describing methods of monitoring the conduct of trials Not applicable • Quality metric encompassing: average number of major audit findings per audited site; percentage per site of unreported, confirmed SAEs; number of significant protocol deviations per site
• Frequency of protocol violations for eligibility criteria and randomisation per site
• Rates of withdrawal by site
• Proportion of the enrolled population comprising the non-randomised parallel cohorts (measured by percentage agreement and kappa statistic) (C)
• Radiologic inter-observer agreement (C)
Studies using performance metrics
 Berthon-Jones 2015 [2] Aim of main study: treatment-naïve HIV patients randomised to 2 different types of ART. Primary outcome plasma HIV-RNA, change from baseline to week 48. Performance across 5 geographical regions was assessed using performance metrics 36 (322) • Time from protocol release to ethics/regulatory submission
• Time from protocol release to ethics/regulatory approval
• Time from protocol release to first participant randomised (FPR)
• Time from protocol release to last participant randomised (LPR)
• Time from site opened to first participant randomised (FPR)
• Time from first participant randomised (FPR) to last participant randomised (LPR)
• Actual versus estimated recruitment
• Time from participant visit to electronic data capture (EDC) initiation
• Time from EDC initiation to completion
• Number of missing values per participant
• Number of data queries per participant
• Number of SAEs reported per participant
• Time from SAE occurrence to initial report
• Time from initial SAE report to final report
• Number of samples collected versus number required by protocol
• Number of missed visits per region (B)
• Quality of laboratory sample/s collected (A)
• Number of plasma samples collected versus protocol-mandated samples to be collected (C)
• Number of buffy-coat samples collected versus protocol-mandated samples to be collected (C)
 Katz, 2015 [26] Aim of main studies: osteoarthritis (2 trials), lower back pain (1 trial) randomised to fulranumab infusion or placebo. Primary outcomes unspecified. Within these three clinical trials a method of monitoring individual site performance was applied 40–88 (91–157) • Time to data query response • Compliance with study drug (D)
 Kim, 2011 [27]a Aim of main study: patients with acute cerebral haemorrhage randomised to early intensive antihypertensive or standard regimen. Primary outcome death or disability at 3 months. A site performance monitoring tool was incorporated for monitoring individual site performance during the trial 100 (1280) • Participant recruitment per site
• CRF data collection timeliness + completeness
• Protocol violations per site
• SAE reporting per site
• Participant study progress (A)
• Site data monitoring visit findings (A)
• Data clarification request processing (A)
• Regulatory document collection and tracking (A)
 Rifkind, 1983 [28] Aim of the main study: men with primary type 2 hyperlipoproteinaemia randomised to bile acid sequestrant or placebo. Primary outcome CHD death and/or nonfatal myocardial infarction. Within this study measures of individual site recruitment performance were monitored. 12 (3550) • Proportion of initial contacts proceeding to first protocol visit by recruitment source
• Proportion of first protocol visits proceeding to study entry by recruitment source
 Saunders, 2015 [29]a Aim of the main study: critical care patients randomised to probiotic or placebo. Primary outcome ventilator associated pneumonia. Within this study the team focused on screening performance in individual centres 14 (285) • Non-screening weeks = proportion of weeks during which participants were not screened for trial eligibility
 Sun, 2008 [30] Aim of the main study: patients with major depression randomised to aprepitant or placebo. Primary outcome change in Hamilton Depression Scale. Within this study measures of individual site performance were captured Not reported • Administration excellence, defined as site administration performance and interaction with central study team rated 1, 2 or 3
• Data quality, defined as data completeness and correctness at initial submission rated 1, 2, or 3
• Proportion of participants with protocol violation, defined as: proportion of participants in each site who do not meet eligibility criteria; have medication compliance < 75%, or take prohibited concomitant medication or wrong study medication; or other serious violation
• Level of visit non-compliance, defined as mean absolute difference of the days between visits and the protocol-specified days between visits for participants in a specific centre
• Level of medication non-compliance, defined as the mean percentage of days participants from each centre taking less than the prescribed number of doses of study-assigned medication (B)
 Wear, 2010 [31]a Aim of the main study: patients with multiple myeloma, multiple clinical trials. No further details. Performance metrics utilised during the study Not reported • First patient dosed (FPD), defined as time from receipt of final protocol to the first participant treated
• Enrolment commitment (EC), defined as commitment from the study site to provide a predicted number of participants who will receive at least 1 dose of study drug (e.g. number of participants randomised and completing the first part of the intervention)
• Baseline enrolment timeline (BET), defined as target time period to obtain EC

AE adverse event; ART antiretroviral therapy; CHD coronary heart disease; CRF case record form; CTU clinical trial unit; ns not specified; SAE serious adverse event; VTE venous thromboembolism

^a Excluded due to (A) lack of clarity, (B) not related to individual site performance, (C) too specific to an individual trial methodology, (D) pertaining to clinical outcomes not trial performance

^b It is unclear from the paper whether enrolment refers to participants randomised to a study or simply consented and then screened for study eligibility
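
Two entries in Table 1 are arithmetic indices rather than simple counts: Rojavin's Recruitment Index and Rosendorf's Intensity Adjusted Score. A minimal sketch of how they could be computed is given below (Python; the weights and input values are illustrative placeholders, not figures from the included studies).

```python
from datetime import date

def recruitment_index(fpfv: date, lpfv: date, n_sites: int, n_completed: int) -> float:
    """Rojavin's Recruitment Index: RI = (LPFV - FPFV) x S / P.

    Lower values suggest fewer site-days of recruitment per completed participant.
    """
    return (lpfv - fpfv).days * n_sites / n_completed

def intensity_adjusted_score(participants, is0=1.0, is1=0.1, is2=0.05) -> float:
    """Rosendorf's Intensity Adjusted Score, summed across participants.

    Each participant contributes IS0 + d_on * IS1 + d_off * IS2, where d_on and
    d_off are days on/off study medication in the evaluation period. The weights
    used here are illustrative placeholders, not values from the original paper.
    """
    return sum(is0 + p["days_on"] * is1 + p["days_off"] * is2 for p in participants)

# Illustrative use with made-up numbers
print(recruitment_index(date(2016, 3, 1), date(2017, 9, 1), n_sites=10, n_completed=200))
print(intensity_adjusted_score([{"days_on": 150, "days_off": 30},
                                {"days_on": 90, "days_off": 0}]))
```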

Table 2.

Examples of performance metrics within each identified category

Categories Example performance metric Studies in which metric included
Assessing site potential Site location potential index based on an assessment of the number of patients at an individual site with the disease of interest [14]
Monitoring recruitment Number of participants randomised per site [15, 17, 27]
Monitoring retention Rates of withdrawal by site [20, 25]
Quality of data collection Number of data queries per participant [2, 12, 19]
Trial conduct Protocol violations per site or per participant [12, 15, 23, 27, 30]
Trial safety Serious adverse event (SAE) reporting per site [11, 24, 27]
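
As a minimal sketch of how metrics in these categories might be tabulated per site during day-to-day trial management (Python/pandas; the dataset, column names and aggregations are illustrative assumptions, not definitions drawn from the included studies):

```python
import pandas as pd

# Hypothetical participant-level extract; the column names are our own
# assumptions, not taken from any included study.
participants = pd.DataFrame({
    "site":          ["A", "A", "B", "B", "B"],
    "randomised":    [True, True, True, True, False],
    "withdrew":      [False, True, False, False, False],
    "data_queries":  [2, 0, 5, 1, 0],
    "saes_reported": [0, 1, 0, 0, 0],
})

# One aggregate per Table 2 category that can be computed from this extract.
per_site = participants.groupby("site").agg(
    n_randomised=("randomised", "sum"),                # monitoring recruitment
    withdrawal_rate=("withdrew", "mean"),              # monitoring retention
    queries_per_participant=("data_queries", "mean"),  # quality of data collection
    saes_reported=("saes_reported", "sum"),            # trial safety
)
print(per_site)
```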

Discussion

As far as we are aware, this is the first systematic review to identify and describe metrics proposed or used to monitor site performance in multicentre randomised trials. It provides a list of performance metrics that can contribute to developing an agreed set of metrics for use in day-to-day trial management. We identified 87 performance metrics, which fell broadly into six main categories.

A strength of our study was the comprehensive search of the literature.

In planning this systematic review, we envisaged identifying studies that had evaluated individual performance metrics, either by implementation mid-way through a study or, ideally, by randomising individual sites to use of a particular metric or not. Unfortunately, there was a paucity of such studies. Most studies suggested performance metrics on a purely theoretical basis and did not provide data on their actual use. The main limitations of our study were the lack of studies implementing performance metrics and reporting the effects of their use, and the limited published work on this topic, which is perhaps surprising given that informal assessment of how sites perform in multicentre trials is common.

This list of performance metrics contributed to the development of a Delphi survey sent to trial managers, UKCRC CTU directors and key clinical trial stakeholders, which is reported elsewhere. Participants were invited through the UK Trial Managers’ Network (UK TMN) and the UKCRC Registered CTU Network. Three Delphi rounds were used to steer the group towards consensus, refining the list of performance metrics, and the reasons for decisions were documented. Finally, data from the Delphi survey were presented to stakeholders at a priority-setting expert workshop, giving participants the opportunity to express their views, hear different perspectives and think more widely about monitoring of site performance. The workshop was used to establish consensus among experts on the top key performance metrics, expected to number around 8–12.

Conclusions

This study provides trialists for the first time with a comprehensive description of performance metrics described in the literature that have been proposed or used in the context of multicentre randomised trials. It will assist future work to develop a concise, practical list of performance metrics which could be used in day-to-day trial management to improve the performance of individual sites. This has the potential to reduce both the financial cost of delivering a multicentre trial, and the research waste and delay in scientific progress that results when trials fail to meet their recruitment target, are poorly conducted, or have inadequate data.

Acknowledgments

Funding

This work was funded by NIHR CTU Support funding. The views expressed are those of the author(s) and not necessarily those of the National Health Service (NHS), the National Institute for Health Research (NIHR) or the Department of Health.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

CINAHL

Cumulative Index to Nursing and Allied Health Literature

CRF

Case report form

CTUs

Clinical Trials Units

EMBASE

Excerpta Medica database

Medline

Medical Literature Analysis and Retrieval System Online

NIHR

National Institute for Health Research

PRISMA

Preferred Reporting Items for Systematic Reviews and Meta-Analyses

PsychINFO

Psychological Information Database

UK TMN

UK Trial Managers’ Network

UKCRC

UK Clinical Research Collaboration

Appendix

Table 3.

Search strategy. Monitoring performance of sites within multicentre randomised trials: a systematic review of performance metrics

# Searches
1 Randomised controlled trial
2 Clinical trial
3 Pragmatic trial
4 Controlled clinical trial
5 1 or 2 or 3 or 4
6 Performance indicator
7 Performance metric
8 Performance measure
9 Enrollment rate
10 Participant enrollment
11 Participant recruitment
12 Quality indicator
13 Quality measure
14 Performance management
15 Assessing site performance
16 Central monitoring
17 Clinical trial monitoring
18 Clinical trial reporting
19 Trial analytics
20 Trial management
21 Site performance
22 Study conduct
23 Trial site performance
24 Benchmarking performance
25 Clinical data management
26 Clinical trial data quality
27 Laboratory sample quality in clinical trials
28 Operational metrics
29 Operational performance
30 Performance evaluation
31 Performance monitoring
32 Performance score
33 Protocol deviations
34 Protocol violations
35 Quality management system
36 Recruitment index
37 Screening logs
38 Strategic project management
39 6 or 7 or 8 or 9 or 10 or 11 or 12 or 13 or 14 or 15 or 16 or 17 or 18 or 19 or 20 or 21 or 22 or 23 or 24 or 25 or 26 or 27 or 28 or 29 or 30 or 31 or 32 or 33 or 34 or 35 or 36 or 37 or 38
40 39 and 5
41 40 Not (animals/ not humans.sh.)
42 40
43 Limit 42 to English language

Authors’ contributions

LD and DW conceived the study. LD, DW, JT, KW and AM designed the study and wrote the protocol. JT and KW performed the search and collected the data. KW analysed the data and drafted the paper with input from LD and JT. All authors read and approved the final manuscript.

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Kate F. Walker, Email: kate.walker@nottingham.ac.uk

Julie Turzanski, Email: julie.turzanski@nottingham.ac.uk.

Diane Whitham, Email: Diane.Whitham@nottingham.ac.uk.

Alan Montgomery, Email: alan.montgomery@nottingham.ac.uk.

Lelia Duley, Email: lelia.duley@nottingham.ac.uk.

References

  • 1.Duley L, Antman K, Arena J, Avezum A, Blumenthal M, Bosch J, Chrolavicius S, Li T, Ounpuu S, Perez AC, et al. Specific barriers to the conduct of randomized trials. Clin Trials. 2008;5(1):40–48. doi: 10.1177/1740774507087704. [DOI] [PubMed] [Google Scholar]
  • 2.Berthon-Jones N, Courtney-Vega K, Donaldson A, Haskelberg H, Emery S, Puls R. Assessing site performance in the Altair study, a multinational clinical trial. Trials. 2015;16(1):138. doi: 10.1186/s13063-015-0653-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Coleby D, Whitham D, Duley L. Can site performance be predicted? Results of an evaluation of the performance of a site selection questionnaire in five multicentre trials. Trials. 2015;16(Suppl 2):176. doi: 10.1186/1745-6215-16-S2-P176. [DOI] [Google Scholar]
  • 4.Kirkwood AA, Cox T, Hackshaw A. Application of methods for central statistical monitoring in clinical trials. Clin Trials. 2013;10(5):783–806. doi: 10.1177/1740774513494504. [DOI] [PubMed] [Google Scholar]
  • 5.Bakobaki JM, Rauchenberger M, Joffe N, McCormack S, Stenning S, Meredith S. The potential for central monitoring techniques to replace on-site monitoring: findings from an international multi-centre clinical trial. Clin Trials. 2011;9(2):257–264. doi: 10.1177/1740774511427325. [DOI] [PubMed] [Google Scholar]
  • 6.Timmermans C, Venet D, Burzykowski T. Data-driven risk identification in phase III clinical trials using central statistical monitoring. Int J Clin Oncol. 2016;21(1):38–45. doi: 10.1007/s10147-015-0877-5. [DOI] [PubMed] [Google Scholar]
  • 7.Tantsyura V, Dunn IM, Fendt K, Kim YJ, Waters J, Mitchel J. Risk-based monitoring: a closer statistical look at source document verification, queries, study size effects, and data quality. Ther Innov Regul Sci. 2015;49(6):903–910. doi: 10.1177/2168479015586001. [DOI] [PubMed] [Google Scholar]
  • 8.Smith B, Martin L, Martin S, Denslow M, Hutchens M, Hawkins C, Panier V, Ringel MS. What drives site performance in clinical trials? Nat Rev Drug Discov. 2018;17(6):389–390. doi: 10.1038/nrd.2018.51. [DOI] [PubMed] [Google Scholar]
  • 9.Dorricott K. Using metrics to direct performance improvement efforts in clinical trial management. Monitor. 2012;26(4):9–13.
  • 10.Moher D, Liberati A, Tetzlaff J, Altman DG, Grp P. Preferred Reporting Items for Systematic Reviews and Meta-Analyses: the PRISMA Statement. J Clin Epidemiol. 2009;62(10):1006–1012. doi: 10.1016/j.jclinepi.2009.06.005. [DOI] [PubMed] [Google Scholar]
  • 11.Hanna M, Minga A, Fao P, Borand L, Diouf A, Mben JM, Gad RR, Anglaret X, Bazin B, Chene G. Development of a checklist of quality indicators for clinical trials in resource-limited countries: The French National Agency for Research on AIDS and Viral Hepatitis (ANRS) experience. Clin Trials. 2013;10(2):300–318. doi: 10.1177/1740774512470765. [DOI] [PubMed] [Google Scholar]
  • 12.Thom E. A center performance assessment tool in a multicenter clinical trials network. Clin Trials. 2011;8(4):519. [Google Scholar]
  • 13.Hullsiek KH, Wyman N, Kagan J, Grarup J, Carey C, Hudson F, Finley E, Belloso W. Design of an international cluster-randomized trial comparing two data monitoring practices. Clin Trials. 2013;10:S32–S33. doi: 10.1177/1740774512464831. [DOI] [Google Scholar]
  • 14.Bose A, Das S. Trial analytics—A tool for clinical trial management. Acta Poloniae Pharmaceutica - Drug Research. 2012;69(3):523–533. [PubMed] [Google Scholar]
  • 15.Djali S, Janssens S, Van Yper S, Van Parijs J. How a data-driven quality management system can manage compliance risk in clinical trials. Drug Inform J. 2010;44(4):359–373. doi: 10.1177/009286151004400402. [DOI] [Google Scholar]
  • 16.Valdés-Márquez E, Hopewell JC, Landray M, Armitage J. A key risk indicator approach to central statistical monitoring in multicentre clinical trials: method development in the context of an ongoing large-scale randomized trial. Trials. 2011;12(Suppl 1):A135. doi: 10.1186/1745-6215-12-S1-A135. [DOI] [Google Scholar]
  • 17.Glass HE, DiFrancesco JJ. Understanding site performance differences in multinational phase III clinical trials. Int J Pharmaceutical Med. 2007;21(4):279–286. doi: 10.2165/00124363-200721040-00004. [DOI] [Google Scholar]
  • 18.Jou JH, Sulkowski MS, Noviello S, Long J, Pedicone LD, McHutchison JG, Muir AJ. Analysis of site performance in academic-based and community-based centers in the IDEAL study. J Clin Gastroenterol. 2013;47(10):e91–e95. doi: 10.1097/MCG.0b013e318294baa4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Khatawkar S, Bhatt A, Shetty R, Dsilva P. Analysis of data query as parameter of quality. Perspect Clin Res. 2014;5(3):121–124. doi: 10.4103/2229-3485.134312. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Lee HJ, Lee S. An exploratory evaluation framework for e-clinical data management performance. Drug Inf J. 2012;46(5):555–564. doi: 10.1177/0092861512452119. [DOI] [Google Scholar]
  • 21.Rojavin MA. Recruitment index as a measure of patient recruitment activity in clinical trials. Contemp Clin Trials. 2005;26(5):552–556. doi: 10.1016/j.cct.2005.05.001. [DOI] [PubMed] [Google Scholar]
  • 22.Rosendorf LL, Dafni U, Amato DA, Lunghofer B, Bartlett JG, Leedom JM, Wara DW, Armstrong JA, Godfrey E, Sukkestad E, et al. Performance evaluation in multicenter clinical trials: Development of a model by the AIDS Clinical Trials Group. Control Clin Trials. 1993;14(6):523–537. doi: 10.1016/0197-2456(93)90032-9. [DOI] [PubMed] [Google Scholar]
  • 23.Sweetman EA, Doig GS. Failure to report protocol violations in clinical trials: a threat to internal validity? Trials. 2011;12:214. doi: 10.1186/1745-6215-12-214. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Tudur Smith C, Williamson P, Jones A, Smyth A, Hewer SL, Gamble C. Risk-proportionate clinical trial monitoring: an example approach from a non-commercial trials unit. Trials. 2014;15(1):127. doi: 10.1186/1745-6215-15-127. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Wilson B, Provencher T, Gough J, Clark S, Abdrachitov R, de Roeck K, Constantine SJ, Knepper D, Lawton A. Defining a central monitoring capability: sharing the experience of TransCelerate BioPharma's approach, Part 1. Ther Innov Regul Sci. 2014;48(5):529–535. doi: 10.1177/2168479014546335. [DOI] [PubMed] [Google Scholar]
  • 26.Katz N. Development and validation of a clinical trial data surveillance method to improve assay sensitivity in clinical trials. J Pain. 2015;1:S88. [Google Scholar]
  • 27.Kim J, Zhao W, Pauls K, Goddard T. Integration of site performance monitoring module in web-based CTMS for a global trial. Clin Trials. 2011;8(4):450. [Google Scholar]
  • 28.Rifkind BM. Participant recruitment to the coronary primary prevention trial. J Chronic Dis. 1983;36(6):451–465. doi: 10.1016/0021-9681(83)90137-6. [DOI] [PubMed] [Google Scholar]
  • 29.Saunders L, Clarke F, Hand L, Jakab M, Watpool I, Good J, Heels-Ansdell D. Screening weeks: a pilot trial management metric. Crit Care Med. 2015;1:330. doi: 10.1097/01.ccm.0000475144.38604.1e. [DOI] [Google Scholar]
  • 30.Sun J, Wang J, Liu G. Evaluation of the quality of investigative centers using clinical ratings and compliance data. Contemp Clin Trials. 2008;29(2):252–258. doi: 10.1016/j.cct.2007.09.003. [DOI] [PubMed] [Google Scholar]
  • 31.Wear S, Richardson PG, Revta C, Vij R, Fiala M, Lonial S, Francis D, DiCapua Siegel DS, Schramm A, Jakubowiak AJ, et al. The multiple myeloma research consortium (MMRC) model: Reduced time to trial activation and improved accrual metrics. Blood Conference: 52nd Annual Meeting of the American Society of Hematology, ASH. 2010;116(21):3803.


