British Journal of Clinical Pharmacology
. 2019 Dec 14;85(12):2784–2792. doi: 10.1111/bcp.14108

Impact of a targeted monitoring on data‐quality and data‐management workload of randomized controlled trials: A prospective comparative study

Claire Fougerou‐Leurent 1,2,, Bruno Laviolle 1,2,3, Christelle Tual 1,2, Valérie Visseiche 4, Aurélie Veislinger 1,2, Hélène Danjou 1,2, Amélie Martin 1,2, Valérie Turmel 1,2, Alain Renault 1,3, Eric Bellissant 1,2,3
PMCID: PMC6955406  PMID: 31471967

Abstract

Aims

Risk‐based approaches to monitoring clinical trials are encouraged by regulatory guidance. However, the impact of targeted source data verification (SDV) on data‐management (DM) workload and on final data quality needs to be addressed.

Methods

MONITORING was a prospective study aiming at comparing full SDV (100% of data verified for all patients) and targeted SDV (only key data verified for all patients) followed by the same DM program (detecting missing data and checking consistency) on final data quality, global workload and staffing costs.

Results

In all, 137 008 data including 18 124 key data were collected for 126 patients from 6 clinical trials. Compared to the final database obtained using the full SDV monitoring process, the final database obtained using the targeted SDV monitoring process had a residual error rate of 1.47% (95% confidence interval, 1.41–1.53%) on overall data and 0.78% (95% confidence interval, 0.65–0.91%) on key data. There were nearly 4 times more queries per study with targeted SDV than with full SDV (mean ± standard deviation: 132 ± 101 vs 34 ± 26; P = .03). For a handling time of 15 minutes per query, the global workload of the targeted SDV monitoring strategy remained below that of the full SDV monitoring strategy. From 25 minutes per query it was above, increasing progressively to represent a 50% increase for 45 minutes per query.

Conclusion

Targeted SDV monitoring is accompanied by an increased DM workload, which keeps the proportion of remaining errors on key data small (<1%) but may substantially increase trial costs.

Keywords: data management, randomized clinical trial, risk‐based monitoring, source data verification, targeted monitoring


What is already known about this subject

  • The cost–benefit ratio of monitoring clinical trials is being questioned and risk‐based approaches are encouraged by regulatory guidance.

  • Assessment of the usefulness and cost‐effectiveness of monitoring techniques in a variety of clinical trial settings and indications is still needed.

  • Data quality requires highly reliable sequencing from data entry to data analysis and the interaction between source data verification and data management is of crucial importance.

What this study adds

  • Two monitoring strategies (full and targeted source data verification) including the data management component were prospectively compared to assess their cost‐effectiveness in terms of data management workload, staffing costs and data quality.

  • The implementation of targeted source data verification monitoring is offset by an increased workload for data management, which keeps the proportion of remaining errors on key data small (<1%) but may substantially increase trial costs.

1. INTRODUCTION

Data quality and reliability are of crucial importance in clinical trials, as results of these trials are part of the evidence‐based medicine on which medical practices rely. The development of high‐quality clinical research implies the establishment of a quality assurance and control system that is under the responsibility of the study sponsor. As part of this system, monitoring is defined as the act of overseeing the progress of a clinical trial, and of ensuring that it is conducted, recorded and reported in accordance with the protocol, standard operating procedures, good clinical practice (GCP) and applicable regulatory requirements.1 GCP guidelines define data monitoring as verifying that “(i) the rights and well‐being of human subjects are protected; (ii) the reported trial data are accurate, complete and verifiable from source documents; (iii) the conduct of the trial is in compliance with the currently approved protocol/amendment(s), with good clinical practices and with the applicable regulatory requirement(s).” Among monitoring activities, on‐site data audits are conducted to verify that the trial data are accurate, complete and verifiable from source documents. 
Source data verification (SDV) is the process of comparing source data documents (medical records for example) to data recorded or entered in a case report form (CRF), electronic record or database.2 Comprehensive or full SDV (verification of 100% of data for 100% of patients) has long been the gold standard in the pharmaceutical industry.3, 4 However, considering the exponential rise in clinical research costs (SDV has been estimated to account for approximately 25% of overall clinical trial costs3) and the undemonstrated impact of full SDV on data quality,4, 5, 6, 7, 8 the question of implementing a reduced or a targeted SDV and its cost/benefit ratio has emerged.9, 10, 11, 12 Since the beginning of the decade, regulatory agencies have suggested that sponsors should adapt the nature and extent of the monitoring to a risk‐based approach13, 14, 15 and, nowadays, there is a growing consensus that monitoring focused on the most critical data elements and processes necessary to achieve study objectives is more effective in ensuring overall study quality than routine visits to all clinical sites and 100% data verification.16, 17 However, the impact of this approach on data quality needs to be evaluated. A wide range of alternative monitoring methods have been described in the literature16, 18, 19, 20, 21, 22, 23 but the publications mainly assessed the impact on data quality retrospectively, and none of them evaluated the impact of these alternative monitoring strategies on global workload and clinical trial costs. Moreover, the place of centralized data management (DM), which consists of detecting missing data and protocol deviations and controlling data consistency on the electronic database after SDV, has not been questioned in these new settings.

Our belief is that data quality requires highly reliable sequencing from data entry or data capture to data analysis and that the interaction between SDV and DM is of crucial importance. A full SDV guarantees a coherent data set and therefore an easier and faster DM. Conversely, a targeted SDV could transfer some workload to the DM step with an increased number of requests for data modification (queries) to the investigators. In this context, the aim of our study was to prospectively compare 2 monitoring strategies (full and targeted SDV) including the DM component and to assess their cost‐effectiveness in terms of DM workload, staffing costs and data quality.

2. METHODS

2.1. Study objectives

The MONITORING study aimed at comparing the impact of 2 different data monitoring approaches on data quality: a full SDV (100% of data verified for all patients) traditionally used in our institution as a gold standard, and a targeted SDV (only key data verified for all patients). The primary endpoint was the error rate in the final dataset prepared using the targeted SDV monitoring process, on total data and on key data. Secondary objectives were to assess the possible impact of targeted SDV on the DM workload and the staffing cost of the trial. Secondary endpoints were the number of discrepancies between the datasets prepared using the 2 monitoring strategies at each step, the number of queries issued with each strategy and the time spent on SDV and DM with each strategy.

2.2. Study design

We used a prospective design, the 2 monitoring approaches being implemented simultaneously and independently on 6 clinical trials (for a detailed description of the clinical trials, see supplementary data in File S1). These trials were randomly selected among the 18 ongoing studies managed by our unit with a principal investigator from Rennes University Hospital. We selected patients included in Rennes for whom data monitoring had not started. The regulatory and/or scientific key data verified by the targeted SDV were: informed consent, inclusion and exclusion criteria, main prognostic variables at inclusion (chosen with the principal investigator), primary endpoint, and serious adverse events (SAE).

As this study was not performed on human subjects, no ethics committee approval or subject consent was applicable.

2.3. Data collection and DM

The procedures for data collection and management are presented in Figure 1. For each clinical trial, when a paper CRF was completed, the data from that CRF were captured in a first database named Araw, providing the original raw data. Then, a copy of the CRF was made before any intervention by a clinical research associate (CRA). The 2 monitoring strategies were then implemented in parallel by 2 experienced CRA on the original CRF and on its copy, in an independent manner. SDV was implemented on‐site, comparing source data with data entered in the CRF, according to the monitoring strategy: 100% of data for full SDV and only key data for targeted SDV. Corrections stemming from the full SDV were implemented on the original CRF and the data of that CRF were afterwards captured in a second database named Afull. Corrections stemming from the targeted SDV were implemented on the copy CRF and the data of that CRF were afterwards captured in a third database named Atarget. All data were manually entered in all databases using double data entry: 2 different operators entered the data independently and the 2 resulting databases were compared to identify discrepancies, which were corrected using the CRF.

Figure 1.

Figure 1

Procedure for data collection and management. For each clinical trial, case report forms were captured in the Araw database, providing the original raw data. Case report forms were then copied and each monitoring strategy was applied on a form. The data controlled with the targeted or full source data verification (SDV) were captured in the Atarget and Afull databases, respectively. The data‐management (DM) program was then implemented on these databases and after corrections stemming from DM queries, we obtained the final Btarget and Bfull databases.

The same DM program (missing data, consistency, protocol deviations) was subsequently implemented in each strategy by central DM staff. When needed, a request for data modification (query) was sent to clinical study coordinators. All queries (missing data, discrepancies) were issued and answered in accordance with our standard operating procedures. The corrections stemming from the queries issued by DM on the Afull database were implemented, providing the Bfull database. The corrections stemming from the queries issued by DM on the Atarget database were implemented, providing the Btarget database. The Bfull and Btarget databases constituted the final databases after each monitoring strategy was completed.

The comparison between the databases was performed with a SAS® procedure (comparing each variable of the first database to the same variable of the second database, for each patient), which enabled the identification of discrepancies. The SAE were analysed separately. When an SAE occurred in a clinical trial, it was declared in the CRF on the AE form and qualified as serious on this form and on a specific SAE form with complementary information.
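The variable‐by‐variable comparison lends itself to a simple sketch. The study used a SAS® procedure; the Python equivalent below is illustrative only, with hypothetical patient identifiers and variable names:

```python
def count_discrepancies(db_a: dict, db_b: dict) -> int:
    """Compare two databases stored as {patient_id: {variable: value}}.

    Each cell of db_a whose value differs from the corresponding cell of
    db_b counts as one discrepancy (i.e. one remaining error when db_a
    is the gold-standard Bfull database)."""
    return sum(
        1
        for patient, record in db_a.items()
        for variable, value in record.items()
        if db_b.get(patient, {}).get(variable) != value
    )

# Toy data: one discrepancy (weight for patient P02)
b_full = {"P01": {"weight_kg": 70.0, "sae": "no"},
          "P02": {"weight_kg": 82.5, "sae": "yes"}}
b_target = {"P01": {"weight_kg": 70.0, "sae": "no"},
            "P02": {"weight_kg": 83.5, "sae": "yes"}}
print(count_discrepancies(b_full, b_target))  # → 1
```

A full database comparison would add per‐variable reporting; only the cell‐by‐cell counting logic is shown here.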

For the purpose of our study, each SAE declared with the full monitoring strategy was looked for in the targeted monitoring database. In case of discrepancy on the presence or qualification of an AE (present or qualified as serious in the full monitoring database and not in the targeted monitoring database), it was considered as an error on key data for the targeted monitoring strategy.

2.4. Workload and cost evaluation

The time spent on conducting SDV for each CRF was recorded in real time by each CRA for both strategies. The recorded time only included the time spent on on‐site CRF data verification from original source documents, and not other aspects of monitoring (travel time, CRF corrections, etc.). To evaluate the additional workload of the DM, we only considered the time spent on queries, estimating that the other DM procedures were approximately the same (documentation, database construction, data checking programming). A query was estimated to take 20 minutes to handle for a data manager (data checking, drafting, issue, transfer to investigator and double data entry of the answer) and 10 minutes for the clinical study coordinator (analysis of the patient medical file, writing of the answer, transfer to the data manager). A sensitivity analysis was performed using 3 hypotheses for each procedure (10, 20 and 30 min per query for the data manager and 5, 10 and 15 min per query for the clinical study coordinator); crossing them yielded 7 distinct total handling times (from 15 to 45 minutes per query). These 7 hypotheses (3 below and 3 above the initial 30 minutes per query hypothesis) were applied to the number of queries obtained with each monitoring strategy for each clinical trial, and the total time spent on both SDV and query handling was determined. For the cost analysis, we estimated the costs for a CRA, a data manager and a clinical study coordinator to be 33, 30.50 and 30.50 €/h, respectively, in accordance with our institution's scale.
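Under these assumptions the workload comparison reduces to one line per strategy: on‐site SDV hours plus query count multiplied by handling time. A sketch using the aggregate figures reported later in the Results (140 vs 317 SDV hours, 793 vs 206 queries, 6 trials pooled):

```python
# Aggregate figures reported in the Results section (6 trials pooled)
SDV_HOURS = {"targeted": 140, "full": 317}  # on-site SDV time
QUERIES = {"targeted": 793, "full": 206}    # DM queries issued

def total_workload_hours(strategy: str, minutes_per_query: float) -> float:
    """On-site SDV time plus centralized query-handling time."""
    return SDV_HOURS[strategy] + QUERIES[strategy] * minutes_per_query / 60

# The 7 handling-time hypotheses (15 to 45 minutes per query)
for minutes in range(15, 50, 5):
    targeted = total_workload_hours("targeted", minutes)
    full = total_workload_hours("full", minutes)
    print(f"{minutes} min/query: targeted {targeted:.0f} h, full {full:.0f} h")
```

With these pooled figures the two strategies break even at roughly 18 minutes per query (587 extra queries offsetting 177 saved hours), consistent with the 15–20‑minute range reported in the Results; the study itself applied the hypotheses trial by trial.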

2.5. Statistics

Considering the inclusion rates of the different studies, we planned to include 150 patients over 1 year. The final full SDV database was considered as fully accurate and was used as the control to estimate the error rate for the targeted SDV with its 95% confidence interval. Consequently, an error was defined as a data value in the Btarget database different from the corresponding data value in the Bfull database. Quantitative variables were expressed as mean ± standard deviation and compared between the 2 monitoring strategies using a paired t test. Qualitative variables were expressed as numbers (percentages) and compared using a McNemar test for paired variables. Analyses were performed using SAS® statistical software (SAS Institute, Cary, NC, USA).
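The paper does not state which confidence‐interval method was used; a normal‐approximation (Wald) interval, sketched below, reproduces the reported 1.47% (95% CI 1.41–1.53%) on overall data:

```python
from math import sqrt

def error_rate_ci(errors: int, n: int, z: float = 1.96):
    """Point estimate and normal-approximation (Wald) 95% CI for an error rate."""
    p = errors / n
    half_width = z * sqrt(p * (1 - p) / n)
    return p, p - half_width, p + half_width

# 2015 remaining discrepancies among 137 008 data values (Results section)
p, low, high = error_rate_ci(2015, 137_008)
print(f"{p:.2%} (95% CI {low:.2%}-{high:.2%})")  # → 1.47% (95% CI 1.41%-1.53%)
```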

3. RESULTS

In all, 137 008 data including 18 124 key data were collected for 126 patients from 6 clinical trials. The main characteristics of the 6 prospective clinical trials included in our study are presented in Table 1. The mean percentage of key data per study was 13.2%, 95% confidence interval (CI) 13.1–13.4%, amounting to a mean of 144 key data per patient.

Table 1.

Characteristics of the clinical trials included in the MONITORING study

| Trial | NCT | Study design | Trial visits per patient | Patients included in the trial | Patients included in the MONITORING study | Data collected | Key data (%) |
|---|---|---|---|---|---|---|---|
| 1 | 00280592 | Randomized, double‐blind, placebo‐controlled, multicentre phase 3 [24] | 6 | 171 | 48 | 27 252 | 4092 (15) |
| 2 | 00372138 | Noncomparative pilot single‐centre phase 2 | 5 | 20 | 15 | 5746 | 1545 (27) |
| 3 | 00151671 | Randomized, double‐blind, placebo‐controlled, single‐centre phase 3 [25] | 6 | 35 | 15 | 22 228 | 1860 (8) |
| 4 | 00151632 | Randomized, controlled, multicentre phase 3 [26] | 20 | 195 | 17 | 43 148 | 7223 (17) |
| 5 | 00323804 | Randomized, double‐blind, placebo‐controlled, multicentre phase 3 | 4 | 372 | 20 | 21 088 | 1724 (8) |
| 6 | 00336882 | Randomized, single‐blind, controlled, single‐centre phase 3 [27] | 5 | 30 | 11 | 17 546 | 1680 (10) |
| TOTAL | | | | 823 | 126 | 137 008 | 18 124 (13) |

3.1. Efficacy

The numbers of corrections at each step for the 2 monitoring strategies are presented in Table 2 (for results per study, see supplementary data in Table S1). Overall, the full SDV monitoring strategy led to the modification of 6426 data (4.7%) including 765 key data (4.2%). The targeted SDV monitoring strategy led to the modification of 4314 data (3.1%) including 637 key data (3.5%). The same DM program, implemented on both the Afull and Atarget databases, led to the modification of 496 (0.4%) and 2124 (1.6%) data, respectively. The comparison between the final databases Bfull and Btarget enabled the identification of 2015 discrepancies (1.47%, 95% CI 1.41–1.53%), considered as remaining errors with targeted monitoring. The error rate on key data was 0.78%, 95% CI 0.65–0.91%. The 141 errors on key data following targeted SDV were analysed. No error remained on informed consent, nor on inclusion or exclusion criteria. One study (study 4) presented 15 errors on the composite primary outcome, evaluated on the basis of the occurrence of several events at each visit. These errors on data related to the primary outcome did not impact the global assessment of the outcome. The other 126 errors on key data were observed on baseline prognostic variables.

Table 2.

Number of data corrections at each step in the 2 monitoring strategies

| Step | Targeted SDV, total data (n = 137 008) | Targeted SDV, key data (n = 18 124) | Full SDV, total data (n = 137 008) | Full SDV, key data (n = 18 124) |
|---|---|---|---|---|
| By SDV, n (% [95% CI]) | 4314 (3.1 [3.1–3.2]) | 637 (3.5 [3.3–3.8]) | 6426 (4.7 [4.6–4.8]) | 765 (4.2 [3.9–4.5]) |
| By DM, n (% [95% CI]) | 2124 (1.6 [1.5–1.6]) | 241 (1.3 [1.2–1.5]) | 496 (0.4 [0.3–0.4]) | 161 (0.9 [0.8–1.0]) |

SDV: source data verification; DM: data management; CI: confidence interval

A total of 112 SAE were detected with full SDV and 110 with targeted SDV. The 2 SAE (1.8% of total SAE) not detected with targeted SDV were 1 case of liver failure (prolongation of existing hospitalization) in study 3 and 1 case of leucopenia (important medical event) in study 4.

The number of queries issued with each monitoring strategy is presented in Table 3. Overall, the number of queries was about 4 times larger with the targeted SDV (793) than with the full SDV (206), with a mean ± SD of 132 ± 101 and 34 ± 26 queries per study respectively (P = .03). For queries related to key data, there was no statistically significant difference between targeted SDV and full SDV (13 ± 16 vs 5 ± 6; P = .15). In addition, compared to full SDV, targeted SDV entailed a larger proportion of patients for whom at least 1 query was issued (92%, 95% CI 86–96% vs 59%, 95% CI 49–67% of patients). For 4 studies (1, 3, 4 and 6), targeted SDV led to queries being issued for 100% of the patients, whereas full SDV led to queries being issued for 53–64% of the patients in these studies, reflecting an increased workload for data managers and clinical study coordinators in the case of targeted SDV.

Table 3.

Queries in each monitoring strategy

| Trial | Patients | Queries with targeted SDV | Patients with queries, targeted SDV | Queries with full SDV | Patients with queries, full SDV |
|---|---|---|---|---|---|
| 1 | 48 | 306 | 48 (100%) | 80 | 27 (56%) |
| 2 | 15 | 23 | 7 (47%) | 18 | 7 (47%) |
| 3 | 15 | 166 | 15 (100%) | 34 | 9 (60%) |
| 4 | 17 | 156 | 17 (100%) | 19 | 9 (53%) |
| 5 | 20 | 75 | 18 (90%) | 47 | 15 (75%) |
| 6 | 11 | 67 | 11 (100%) | 8 | 7 (64%) |
| TOTAL | 126 | 793 | 116 (92%) | 206 | 74 (59%) |

SDV: source data verification
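The paired comparison of query counts can be reproduced from Table 3 with the standard library alone (a sketch; the paired t statistic is computed by hand rather than with a statistics package):

```python
from math import sqrt
from statistics import mean, stdev

# Queries per trial, from Table 3
targeted = [306, 23, 166, 156, 75, 67]
full = [80, 18, 34, 19, 47, 8]

print(f"targeted: {mean(targeted):.0f} ± {stdev(targeted):.0f} queries/study")  # 132 ± 101
print(f"full:     {mean(full):.0f} ± {stdev(full):.0f} queries/study")          # 34 ± 26

# Paired t statistic on the per-trial differences
diffs = [t - f for t, f in zip(targeted, full)]
t_stat = mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))
print(f"paired t = {t_stat:.2f}, df = {len(diffs) - 1}")
# t ≈ 2.90 with 5 df exceeds the two-sided 5% critical value (2.571),
# matching the reported P = .03
```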

3.2. Workload and cost analysis

Across the 6 studies, 140 hours were devoted by the CRA to the targeted SDV vs 317 hours for the full SDV, with a mean ± SD of 23 ± 15 and 53 ± 24 hours per study respectively (P = .001). Targeted SDV generated 587 additional queries across studies, with great heterogeneity (from 5 additional queries for study 2 to 226 for study 1), amounting to between <1 (0.3) and >8 additional queries per patient, depending on the study. In terms of time spent on these queries, with an estimate of 30 minutes for handling a single query (data manager and clinical study coordinator), the targeted SDV‐related additional queries amounted to 294 hours of extra time spent (mean of 2.4 ± 1.7 hours per patient). The sensitivity analysis is presented in Figure 2. For a handling time of 15–20 minutes per query, the time saved on the targeted SDV is almost totally offset by the increase in the number of queries. From a handling time of at least 25 minutes per query, the targeted monitoring strategy increases the global workload (by up to 50% with the 45‐minute hypothesis).

Figure 2.

Figure 2

Estimated time spent on monitoring, assuming different query handling time hypotheses. The time spent on monitoring includes time for on‐site source data verification (SDV) and centralized handling of queries issued by data management, for the 6 clinical trials included in our study.

Translation into costs shows that, overall, the targeted SDV strategy provided a €5841 saving on monitoring on the one hand, and an additional €8922 linked to the queries on the other, totalling a net extra cost of €3081. The sensitivity analysis applied to costs showed that the differential cost per patient varies from €−16 ± 16 to €57 ± 64 when the time spent on a query ranges from 15 to 45 minutes.

4. DISCUSSION

To our knowledge, our study is the first to propose a prospective, parallel comparison of 2 monitoring strategies taking into account the impact on DM and trial costs. Our results show final error rates with targeted SDV of 1.5% and 0.8% for overall and key data respectively, compared to full SDV, considered in our study as the gold standard. The rate of incorrect data in our study (4.7% of data modified with the full SDV monitoring strategy) appears satisfactory in light of the literature. Few publications have assessed the rate of source‐to‐database discrepancies, showing great heterogeneity,2, 28, 29, 30, 31 and authors point out the importance of on‐site data audits to identify data quality and integrity issues.

Surprisingly, our results suggest that targeted SDV generates fewer corrections on key data than full SDV; these corrections are, however, recovered by the DM step. In addition, beyond the key data on which it was supposed to focus, targeted SDV generated a large number of non‐key data corrections. This could be explained by the SDV method, which consists of analysing the medical file to check the CRF data. If the CRA spotted a false or missing non‐key data value while checking a key data value, they may have corrected the non‐key data in the CRF. We suppose that this reflects a pragmatic and ethical SDV method used by our CRA. This bias may have led us to underestimate the difference between the 2 monitoring strategies.

Beyond this, the initial hypothesis of presuming the error rate to be zero with full SDV needs to be questioned. In a recent study,6 Andersen and colleagues investigated the errors remaining after database lock by implementing a post hoc full SDV on the data of 2556 subjects included in 3 clinical trials previously monitored with a combination of complete and partial SDV. An overall error rate of 0.45% was found. Regarding the level of monitoring, complete and partial SDV yielded error rates of 0.3% and 0.5%, respectively. These results suggest that a 0% error rate cannot be achieved, whatever the intensity of the monitoring performed.

Two SAE were not detected with the targeted SDV: 1 was not considered serious (leucopenia) and was only reported as an AE, the other was not reported (liver failure). This should be considered in the light of our study procedure. Indeed, the CRA performing the full SDV reviewed the AE together with the investigator, which sometimes led to additional reporting. In order not to double the workload for investigators, the CRA performing the targeted SDV did not have access to the investigators, possibly introducing a bias in the monitoring of safety. Of course, in real‐life targeted SDV, this bias would disappear.

The most important factor remains, however, the expected impact of the monitoring strategy on the quality of data liable to modify the trial's benefit/risk ratio. Concerning the errors remaining for the primary outcome in study 4, we can specify that the outcome was a composite set of events defined as clinical complications following liver transplantation, evaluated at each visit. We can assume, as for the monitoring of safety, that access to the investigators would have reduced the number of remaining errors in this particular case. However, we can underline that these errors did not impact the global assessment of the outcome. It is interesting to note that the study concerned was the one with the largest number of key data per patient (424 key data), vs a mean of 110 ± 29 key data per patient in the other 5 studies. Smith et al.4 performed a retrospective comparison of a full SDV against corresponding unverified data for 533 subjects participating in a phase III clinical trial. Data discrepancies were identified for the majority of variables examined, with discrepancy rates varying from 0.6 to 9.9%. These discrepancies, however, had a negligible impact on the primary outcome and on the clinical conclusions of the trial. In line with Andersen et al.,6 they also highlighted that a full SDV did not provide error‐free data.

The original feature of our work was that it evaluated the impact of a nonexhaustive monitoring strategy on the DM stage. Indeed, the impact was considerable: the number of queries increased by a factor of about 4 when a targeted SDV was implemented, although with marked heterogeneity across studies, and the proportion of patients with at least 1 query rose from 59% to 92%. This result was consistent with the database analysis, which showed that DM generated 4 times more corrections after targeted SDV than after full SDV. We can also hypothesize that the queries stemming from targeted monitoring were more varied: as less data were corrected with the targeted SDV, queries concerned more variables than in the full monitoring strategy. Another interesting point was the cost analysis, which confirmed that the cost of data quality assurance is linked to both monitoring and DM, and that lowering monitoring requirements increases the DM workload. We showed that, provided our assumption of the cost of a query is reliable, the differential cost per patient could reach >€100, which is consistent with Pronker et al.'s assumptions in an evidence‐based study on clinical DM cost.32 Beyond the financial analysis, we need to consider the typology of the data corrected with SDV and with DM. A full SDV vouches for the accuracy of data and the exhaustiveness of adverse event reporting, which DM cannot guarantee, since DM can only check that the reported data are credible and consistent. We believe that the sponsor should check early on that the data are being correctly reported and should intervene with the investigating team to correct any bias: this will favour better data quality and better trial implementation. The widespread use of electronic data capture nowadays allows centralized monitoring capabilities which could help target on‐site monitoring activities.
In the TEMPER study, Stenning et al.33 assessed the value of triggered monitoring in distinguishing sites with important protocol or GCP compliance issues not identified centrally. Using a prospective, matched‐pair design, investigator sites that had been prioritized for visits after activating triggers (such as the rate of data queries, the number of protocol deviations or the SAE rate) were matched with control untriggered sites, which would not usually have been visited at that time. The results showed that triggered monitoring was not sufficiently discriminatory, as the rates of major and critical on‐site findings did not differ between the 2 groups. Similarly, in the ongoing START Monitoring substudy,34 investigator sites were randomized to receive or not receive annual on‐site monitoring, in addition to the central monitoring and local monitoring plan.

In the latest update of the Guideline for Good Clinical Practice E6(R2),1 the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use states that "the sponsor should develop a systematic, prioritized, risk‐based approach to monitoring clinical trials. […] The sponsor may choose on‐site monitoring, a combination of on‐site and centralized monitoring, or, where justified, centralized monitoring". To date, only a few tools for implementing risk‐based monitoring of clinical trials have been described,16, 34, 35 most of them combining on‐site and centralized monitoring. Among them, risk‐assessment tools were used within the French OPTIMON (for OPTImization of MONitoring—NCT00780091) trial36 and the ADAMON (for ADApted MONitoring) projects. These 2 prospective, comparative, cluster‐randomized trials both compared the efficacy of a risk‐based monitoring strategy with a classic intensive monitoring approach. While the results of the OPTIMON study are awaited, the recently published results of the ADAMON study37 showed that risk‐adapted monitoring is noninferior to extensive on‐site monitoring in terms of the number of audit findings, either overall or in specific error domains. This study also showed that the average number of monitoring visits and the time spent on‐site were 2.1 and 2.7 times greater, respectively, with extensive on‐site monitoring than with risk‐adapted monitoring. However, ADAMON was not designed to investigate the overall impact of the monitoring strategy on the reliability of the results. From our point of view, the impact of these strategies on DM requires further evaluation. We believe that this would provide valuable information in designing a global strategy of quality control in clinical trials.

4.1. Limitations

In our study, the time spent on SDV was recorded on site by the CRA and was only based on the time spent on CRF data verification from original source documents. The other aspects of monitoring, such as travel time and corrections with investigators, were not recorded. This may have led us to underestimate the difference between the time spent on full SDV and the time spent on targeted SDV.

The number of trials included in our analysis is substantial compared to the literature. However, although the 2 monitoring approaches were implemented by a single experienced CRA team, our results show great heterogeneity across trials. This could be explained by differences in therapeutic domain, trial duration and complexity of patient follow‐up, which impact the type and quantity of data to be collected. Moreover, despite harmonization in monitoring procedures and experience between CRA, we cannot rule out that at least part of the variability observed in data quality and time spent on SDV reflects differences in the attentiveness and diligence of the CRA assigned to each monitoring approach. However, it is worth noting that quantifying the time spent on various tasks was (and still is) a routine procedure in our institution and cannot be considered a limitation of our study.

The selected DM approach might also be discussed. Indeed, in the clinical trials selected in our study, the use of paper CRFs, captured only once they had been audited (the audited data being assumed reliable), did not allow any centralized monitoring that could help distinguish reliable data from potentially unreliable data and reduce the extent of on‐site SDV. Nowadays, however, electronic data capture systems allow study‐specific DM or centralized monitoring processes which provide capabilities additional to on‐site SDV. For example, despite technological and policy barriers to access, remote SDV has been shown to be feasible28, 38 and might be of interest by providing a more efficient process for performing SDV in clinical trials, thereby reducing costs. Concerning the cost analysis, the staffing costs used in our study reflect the institutional scales for CRA, data managers and clinical study coordinators at the time of analysis and cannot be generalized to all present‐day research structures.

5. CONCLUSION

Our study shows that the implementation of targeted SDV monitoring is offset by an increased workload for DM, which keeps the proportion of remaining errors on key data small (<1%) but may substantially increase trial costs. Consequently, data quality assurance should be viewed as a whole (monitoring and DM) in terms of efficacy (i.e. the ability to ensure the reliability of the data) and global costs. Our results provide food for thought on the balance between data reliability and cost that sponsors should address. In addition, the analysis of global monitoring strategies, including the subsequent DM step, should be of interest in the context of defining new monitoring approaches.

COMPETING INTERESTS

There are no competing interests to declare.

CONTRIBUTORS

E.B. and C.F.L. conceived and designed the study. C.F.L., B.L., E.B. and A.R. analysed data and wrote the manuscript. C.F.L., C.T., V.V., A.V., H.D., A.M., V.T. and A.R. performed research. All authors revised the manuscript and approved the final manuscript.

Supporting information

Table S1

Step‐by‐step database comparisons for each study and globally

File S1

Detailed clinical trials characteristics

ACKNOWLEDGEMENTS

The authors would like to thank all investigators and study personnel involved in the trials included in the MONITORING study. The authors would also like to acknowledge Lisa Cahour for her participation in data analysis.

Fougerou‐Leurent C, Laviolle B, Tual C, et al. Impact of a targeted monitoring on data‐quality and data‐management workload of randomized controlled trials: A prospective comparative study. Br J Clin Pharmacol. 2019;85:2784–2792. 10.1111/bcp.14108

The authors confirm that this study was not performed with human subjects/patients and/or substances administered; hence, there was no Principal Investigator with direct responsibility for patients.

DATA AVAILABILITY STATEMENT

Data are available from the authors with the permission of Rennes University Hospital and the Principal Investigators of each randomized controlled trial included in the present study.

REFERENCES

  • 1. International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH). Integrated addendum to ICH E6(R1): guideline for good clinical practice E6(R2) [Internet]. [cited 2017 Oct 23]. Available from: http://www.ich.org/fileadmin/Public_Web_Site/ICH_Products/Guidelines/Efficacy/E6/E6_R2__Step_4.pdf
  • 2. Houston L, Probst Y, Humphries A. Measuring data quality through a source data verification audit in a clinical research setting. Stud Health Technol Inform. 2015;214:107‐113.
  • 3. Funning S, Grahnén A, Eriksson K, Kettis‐Linblad Å. Quality assurance within the scope of good clinical practice (GCP)—what is the cost of GCP‐related activities? A survey within the Swedish Association of the Pharmaceutical Industry (LIF)'s members. Qual Assur J. 2009;12(1):3‐7.
  • 4. Tudur Smith C, Stocken DD, Dunn J, et al. The value of source data verification in a cancer clinical trial. PLoS One. 2012;7(12):e51623.
  • 5. Morrison BW, Cochran CJ, White JG, et al. Monitoring the quality of conduct of clinical trials: a survey of current practices. Clin Trials. 2011;8(3):342‐349.
  • 6. Andersen JR, Byrjalsen I, Bihlet A, et al. Impact of source data verification on data quality in clinical trials: an empirical post hoc analysis of three phase 3 randomized clinical trials. Br J Clin Pharmacol. 2015;79(4):660‐668.
  • 7. Sheetz N, Wilson B, Benedict J, et al. Evaluating source data verification as a quality control measure in clinical trials. Ther Innov Regul Sci. 2014;48(6):671‐680.
  • 8. Tantsyura V, Dunn IM, Fendt K, Kim YJ, Waters J, Mitchel J. Risk‐based monitoring: a closer statistical look at source document verification, queries, study size effects, and data quality. Ther Innov Regul Sci. 2015;49(6):903‐910.
  • 9. Yusuf S, Bosch J, Devereaux PJ, et al. Sensible guidelines for the conduct of large randomized trials. Clin Trials. 2008;5(1):38‐39.
  • 10. Sertkaya A, Wong H‐H, Jessup A, Beleche T. Key cost drivers of pharmaceutical clinical trials in the United States. Clin Trials. 2016;13(2):117‐126.
  • 11. Eisenstein EL, Collins R, Cracknell BS, et al. Sensible approaches for reducing clinical trial costs. Clin Trials. 2008;5(1):75‐84.
  • 12. Duley L, Antman K, Arena J, et al. Specific barriers to the conduct of randomized trials. Clin Trials. 2008;5(1):40‐48.
  • 13. US Food and Drug Administration. Guidance for industry: oversight of clinical investigations—a risk‐based approach to monitoring [Internet]. [cited 2017 Oct 23]. Available from: https://www.fda.gov/downloads/Drugs/Guidances/UCM269919.pdf
  • 14. European Medicines Agency. Reflection paper on risk‐based quality management in clinical trials [Internet]. [cited 2017 Oct 23]. Available from: http://www.ema.europa.eu/docs/en_GB/document_library/Scientific_guideline/2013/11/WC500155491.pdf
  • 15. MRC/DH/MHRA Joint Project. 2011;31.
  • 16. Hurley C, Shiely F, Power J, et al. Risk based monitoring (RBM) tools for clinical trials: a systematic review. Contemp Clin Trials. 2016;51:15‐27.
  • 17. Brosteanu O, Houben P, Ihrig K, et al. Risk analysis and risk adapted on‐site monitoring in noncommercial clinical trials. Clin Trials. 2009;6(6):585‐596.
  • 18. Olsen R, Bihlet AR, Kalakou F, Andersen JR. The impact of clinical trial monitoring approaches on data integrity and cost—a review of current literature. Eur J Clin Pharmacol. 2016;72(4):399‐412.
  • 19. Houston L, Probst Y, Martin A. Assessing data quality and the variability of source data verification auditing methods in clinical research settings. J Biomed Inform. 2018;83:25‐32.
  • 20. Houston L, Probst Y, Yu P, Martin A. Exploring data quality management within clinical trials. Appl Clin Inform. 2018;9(1):72‐81.
  • 21. Califf RM, Karnash SL, Woodlief LH. Developing systems for cost‐effective auditing of clinical trials. Control Clin Trials. 1997;18(6):651‐660; discussion 661‐666.
  • 22. Macefield RC, Beswick AD, Blazeby JM, Lane JA. A systematic review of on‐site monitoring methods for health‐care randomised controlled trials. Clin Trials. 2013;10(1):104‐124.
  • 23. Bakobaki J, Joffe N, Burdett S, Tierney J, Meredith S, Stenning S. A systematic search for reports of site monitoring technique comparisons in clinical trials. Clin Trials. 2012;9(6):777‐780.
  • 24. Gallien P, Amarenco G, Benoit N, et al. Cranberry versus placebo in the prevention of urinary infections in multiple sclerosis: a multicenter, randomized, placebo‐controlled, double‐blind trial. Mult Scler. 2014;20(9):1252‐1259.
  • 25. Seguin P, Locher C, Boudjema K, et al. Effect of a perioperative nutritional supplementation with Oral Impact® in patients undergoing hepatic surgery for liver cancer: a prospective, placebo‐controlled, randomized, double‐blind study. Nutr Cancer. 2016;68(3):464‐472.
  • 26. Boudjema K, Camus C, Saliba F, et al. Reduced‐dose tacrolimus with mycophenolate mofetil vs. standard‐dose tacrolimus in liver transplantation: a randomized study. Am J Transplant. 2011;11(5):965‐976.
  • 27. Tanguy M, Seguin P, Laviolle B, Bleichner J‐P, Morandi X, Malledant Y. Cerebral microdialysis effects of propofol versus midazolam in severe traumatic brain injury. J Neurotrauma. 2012;29(6):1105‐1110.
  • 28. Mealer M, Kittelson J, Thompson BT, et al. Remote source document verification in two national clinical trials networks: a pilot study. PLoS One. 2013;8(12):e81890.
  • 29. Duda SN, Shepherd BE, Gadd CS, Masys DR, McGowan CC. Measuring the quality of observational study data in an international HIV research network. PLoS One. 2012;7(4):e33908.
  • 30. Favalli G, Vermorken JB, Vantongelen K, Renard J, Van Oosterom AT, Pecorelli S. Quality control in multicentric clinical trials. An experience of the EORTC gynecological cancer cooperative group. Eur J Cancer. 2000;36(9):1125‐1133.
  • 31. Vantongelen K, Rotmensz N, van der Schueren E. Quality control of validity of data collected in clinical trials. EORTC Study Group on Data Management (SGDM). Eur J Cancer Clin Oncol. 1989;25(8):1241‐1247.
  • 32. Pronker E, Geerts BF, Cohen A, Pieterse H. Improving the quality of drug research or simply increasing its cost? An evidence‐based study of the cost for data monitoring in clinical trials. Br J Clin Pharmacol. 2011;71(3):467‐470.
  • 33. Stenning SP, Cragg WJ, Joffe N, et al. Triggered or routine site monitoring visits for randomised controlled trials: results of TEMPER, a prospective, matched‐pair study. Clin Trials. 2018;15(6):600‐609.
  • 34. Hullsiek KH, Kagan JM, Engen N, et al. Investigating the efficacy of clinical trial monitoring strategies: design and implementation of the cluster randomized START monitoring substudy. Ther Innov Regul Sci. 2015;49(2):225‐233.
  • 35. Journot V, Pignon J‐P, Gaultier C, et al. Validation of a risk‐assessment scale and a risk‐adapted monitoring plan for academic clinical research studies—the pre‐Optimon study. Contemp Clin Trials. 2011;32(1):16‐24.
  • 36. Optimisation of Monitoring for Clinical Research Studies. ClinicalTrials.gov [Internet]. [cited 2017 Oct 23]. Available from: https://clinicaltrials.gov/ct2/show/NCT00780091
  • 37. Brosteanu O, Schwarz G, Houben P, et al. Risk‐adapted monitoring is not inferior to extensive on‐site monitoring: results of the ADAMON cluster‐randomised study. Clin Trials. 2017;14(6):584‐596. 10.1177/1740774517724165
  • 38. Uren SC, Kirkman MB, Dalton BS, Zalcberg JR. Reducing clinical trial monitoring resource allocation and costs through remote access to electronic medical records. J Oncol Pract. 2013;9(1):e13‐e16.


Articles from British Journal of Clinical Pharmacology are provided here courtesy of British Pharmacological Society
