British Journal of Clinical Pharmacology. 2014 Oct 19;79(4):660–668. doi: 10.1111/bcp.12531

Impact of source data verification on data quality in clinical trials: an empirical post hoc analysis of three phase 3 randomized clinical trials

Jeppe Ragnar Andersen 1, Inger Byrjalsen 1, Asger Bihlet 1, Faidra Kalakou 2, Hans Christian Hoeck 3, Gitte Hansen 1, Henrik Bo Hansen 1, Morten Asser Karsdal 1, Bente Juel Riis 1
PMCID: PMC4386950  PMID: 25327707

Abstract

AIM

The aim of this project was to perform an empirical evaluation of the impact of on site source data verification (SDV) on the data quality in a clinical trial database to guide an informed decision on selection of the monitoring approach.

METHODS

We used data from three randomized phase III trials monitored with a combination of complete and partial SDV. After database lock, individual subject data were extracted from the clinical database and subjected to post hoc complete SDV. Error rates were calculated by degree of on-study monitoring and by relevance of the variables, and analyzed for potential impact on end points.

RESULTS

Data from a total of 2556 subjects, comprising more than 3 million data fields, were 100% source data verified post hoc. An overall error rate of 0.45% was found, and no site had 0% errors. Complete (100%) SDV yielded an error rate of 0.27% compared with 0.53% for partial SDV (P < 0.0001). Comparing partly and fully monitored subjects, only minor differences were identified in variables of major importance to efficacy or safety.

CONCLUSIONS

The findings challenge the notion that a 0% error rate is obtainable with on site monitoring. The data indicate consistently low error rates across the three trials analyzed. The use of complete vs. partial SDV offered a marginal absolute error rate reduction of 0.26%, i.e. complete SDV of about 370 data points would be needed to avoid one unspecified error. These findings do not support complete SDV as a means of providing meaningful improvements in data accuracy.

Keywords: cost-benefit, data error analysis, monitoring, risk-based monitoring, source data verification


WHAT IS ALREADY KNOWN ABOUT THIS SUBJECT

  • Source data verification in clinical trials consumes considerable resources, yet evidence of its benefit is limited.

  • Historically, up to 100% source data verification has been used. Authorities have recently presented guidelines that promote risk-based monitoring.

  • Sponsors of clinical trials are facing a major decision on whether to implement risk-based monitoring.

WHAT THIS STUDY ADDS

  • There is evidence that source data verification does not produce 0% error rates.

  • There are empirical data to support an informed decision on whether to implement risk-based monitoring.

  • The results suggest that the use of complete vs. partial source data verification in these studies offers a marginal, and not relevant, absolute error rate reduction of 0.26%.

Introduction

Sponsors of clinical trials are required to provide monitoring oversight to ensure adequate protection of the rights, welfare and safety of human subjects and the quality and integrity of the resulting data submitted to the Food and Drug Administration (FDA) [1]. The International Conference on Harmonization (ICH) guideline E6 on Good Clinical Practice (GCP) provides general guidance on monitoring, stating that ‘The purposes of trial monitoring are to verify that a) the rights and well-being of human subjects are protected, b) the reported trial data are accurate, complete and verifiable from source documents and c) the conduct of the trial is in compliance with the currently approved protocol/amendment(s), with GCP, and with the applicable regulatory requirement(s).’ [2]. This is open to interpretation, leading to differences in monitoring practices between academia and industry, and between companies within the industry [3].

However, insufficient empirical evidence exists to decide which practices are best suited to achieve the objectives of monitoring described in the ICH GCP guidelines E6 [3].

This project focuses on data quality defined as compliance with ICH E6's requirements that data are accurate, complete and verifiable.

In 1988 the FDA released a Guideline for Monitoring of Clinical Investigations stating ‘The most effective way to assure the accuracy of the data submitted to the FDA is to review individual subject records and other supporting documents and compare those records with the reports prepared by the investigator for submission to the sponsor’ [4]. Source data verification (SDV), a verification of the conformity of the data presented in case report forms with source data, is conducted to ensure that the data collected are reliable and allow reconstruction and evaluation of the trial and therefore seemingly fulfil ICH E6's requirements of accuracy, completeness and verifiability from source documents [2]. The 1988 guidelines led to a consensus within the majority of the industry that SDV of up to 100% of all entered data was required to comply with the FDA requirements for data quality and integrity [5].

In April 2011, the FDA withdrew the 1988 guidelines, replacing them with the draft guideline ‘Oversight of Clinical Investigations – A Risk-Based Approach to Monitoring’. The draft was replaced with a final version in August 2013 [6]. This guideline aims to shift current monitoring approaches and to encourage sponsors to use alternative monitoring methods. Emphasis is placed on items critical to the subjects' safety and to the trial's integrity and credibility. The European Medicines Agency's (EMA) ‘Reflection paper on risk based quality management in clinical trials’, finalized in November 2013, also encourages prioritized approaches in clinical trial processes [7]. Both guidelines acknowledge that approaches based on quality by design principles are more likely to satisfy monitoring requirements. Sponsors are advised to establish an adequate level of monitoring, consisting of monitoring methods other than SDV combined with a risk identification and mitigation process throughout the trial's lifespan.

SDV is costly and reported to account for an average of 25% of the entire clinical trial budget [8]. The cost-benefit evaluation of SDV has historically been difficult as very few data are available on the actual benefits of applying a 100% SDV approach. It has been assumed that more SDV leads to better quality of data.

This project evaluates the validity of the notion that complete on site source data verification leads to significant improvements in data quality, by providing the first empirical data to support an informed choice of monitoring strategy based on its cost : benefit ratio.

Methods

The basis for the investigation of the impact of SDV was three randomized, double-blind, placebo-controlled, multicentre phase III trials. Trial 1 (ClinicalTrials.gov identifier NCT00486434) and trial 2 (ClinicalTrials.gov identifier NCT00704847) were both 2 year studies evaluating the efficacy and safety of oral salmon calcitonin in the treatment of subjects with knee osteoarthritis, and trial 3 (ClinicalTrials.gov identifier NCT00525798) was a 3 year trial evaluating the efficacy and safety of oral salmon calcitonin in the treatment of osteoporosis in post-menopausal women. Trials 1 and 2 had 12 site visits (six per year) and trial 3 had 16 site visits (5.3 per year); the trials were thus comparable in number of visits. The trials were selected for this analysis based on similarities in trial populations, number of visits, trial sites and monitoring personnel.

Trial 1 included 1176 subjects and was conducted in nine sites across six countries. Trial 2 included 1030 subjects and was conducted in 18 sites across 11 countries and trial 3 included 4665 subjects and was conducted in 16 sites across 11 countries. From trial 3 a subpopulation of 360 subjects was randomly selected for the purpose of this analysis. This number represents 7.7% of the total population randomized for the trial. An overview of the subject distribution among the trials is summarized in Table 1. Data were entered in the electronic case report form (eCRF) by trained staff members at the sites.

Table 1.

Total number of subjects in each trial, total number being 100% monitored (percentage of total number of subjects), number of subjects included in the SDV activity (percentage of total number of subjects), and total number of 100% monitored SDV subjects (percentage of SDV subjects)

Trial | Total n of subjects randomized | Total n of on-study 100% SDV subjects | n of post-database lock SDV subjects | n of on-study 100% SDV subjects included in post-database lock SDV
Trial 1 | 1176 | 291 (24.7%) | 1172 (99.7%) | 290 (24.7%)
Trial 2 | 1030 | 272 (26.4%) | 1024 (99.4%) | 266 (26.0%)
Trial 3 | 4665 | 1446 (31.0%) | 360 (7.7%) | 110 (30.6%)
Total | 6871 | 2009 (29.2%) | 2556 (37.2%) | 666 (26.1%)

SDV, source data verification.

During the conduct of the trials, SDV was performed by on site monitoring comparing source data with entered data in the eCRF. In case of discrepancies, the monitor would query the site staff, who would correct directly in the eCRF or in the source data.

In accordance with the monitoring plan, 100% SDV was performed for 25% of the randomized subjects, in addition to the first 10 subjects per site. For the remaining 75% of the subjects, source data were verified for inclusion/exclusion criteria, informed consent form(s), drug accountability, serious adverse events (SAE) and the first adverse event (AE). The 25% of subjects allocated to 100% SDV were randomly selected by allocation number prior to study start. Sites that enrolled 10 subjects or fewer were also subjected to 100% SDV of all subjects. The actual percentage of subjects subjected to 100% SDV ended up at 29.2%.
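
For illustration, the selection rule described above can be sketched in a few lines of code. This is a minimal sketch under assumptions of our own; the function name, the random seed and the data layout are hypothetical and do not reproduce the trials' actual allocation procedure.

```python
import random

def select_full_sdv_subjects(allocation_numbers, site_enrolment_order,
                             fraction=0.25, first_n=10, seed=1):
    """Illustrative sketch: flag subjects for 100% SDV as a random 25% of
    allocation numbers (drawn before study start) plus the first 10 subjects
    enrolled at each site; sites enrolling 10 or fewer subjects therefore
    end up fully covered."""
    rng = random.Random(seed)
    n_random = round(fraction * len(allocation_numbers))
    full_sdv = set(rng.sample(list(allocation_numbers), n_random))
    for enrolled in site_enrolment_order.values():
        full_sdv.update(enrolled[:first_n])  # first 10 subjects enrolled per site
    return full_sdv
```

Because the first-10-per-site rule adds subjects on top of the random 25%, the resulting proportion exceeds 25%, which is consistent with the 29.2% observed across the three trials.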

Data cleaning was performed continuously during the conduct of the trials by the central data management personnel. Site investigator personnel were queried for any discrepancy found and only site personnel were permitted to correct data. The central data management was not permitted to enter, change or delete any data. Subsequent to lock of the eCRF database the data were stored in SAS data sets (SAS Institute Inc., Cary, North Carolina, USA).

In order to investigate the impact of SDV, we analyzed the trial data after they had been monitored, cleaned and locked as per normal procedures. The individual subject data were extracted from the clinical data sets after database lock and printed to a data file in Adobe PDF format. The data files were taken to the investigator sites for an additional 100% SDV, independent of any SDV previously performed by monitors. At the investigator sites, all data were reviewed by two trained individuals against source documents, and any discrepancy was noted on the printed data file. After complete verification, the verified data files were collected, all discrepancies were registered and error rates were calculated. The error rate was calculated with respect to (i) an overall error rate, (ii) relevance of the variables, in the categories of subject identification, efficacy, major efficacy, safety, major safety and other, and (iii) degree of monitoring, in the categories of fully monitored data (both SDV against source documents and data entry at investigator sites) and partly monitored data at investigator sites.
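
The error rate bookkeeping described above amounts to counting discrepancies per stratum. The following is a minimal sketch assuming a simple per-field record layout of our own devising; the keys 'relevance', 'monitoring' and 'discrepant' are hypothetical and not the trials' actual data structure.

```python
from collections import defaultdict

def error_rates(verified_fields):
    """Illustrative sketch: each element of `verified_fields` describes one
    verified eCRF field, e.g.
    {"relevance": "safety_major", "monitoring": "complete", "discrepant": False}.
    Returns error rates in percent, overall and per relevance/monitoring stratum."""
    n_fields = defaultdict(int)
    n_errors = defaultdict(int)
    for field in verified_fields:
        for stratum in ("overall", field["relevance"], field["monitoring"]):
            n_fields[stratum] += 1
            n_errors[stratum] += int(field["discrepant"])
    return {stratum: 100.0 * n_errors[stratum] / n_fields[stratum]
            for stratum in n_fields}
```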

While a full mapping and calculation of the actual impact on the efficacy and safety outputs reported after trial completion would have provided an accurate quantification of the possible impact of monitoring, this was not conducted owing to the complexity of such an analysis. Instead, all data fields were assessed for possible impact on data quality in each of the categories mentioned above. A complete listing of the variables in each category is available in Appendix 1.

‘Major efficacy’ related variables were determined based on their influence on primary and secondary end points of the studies, inclusion criteria, drug compliance and date of randomization, and ‘major safety’ related variables according to their influence on exclusion criteria, adverse events, medication, date of informed consent(s) and discontinuation log.

The primary end points in the trials were Western Ontario and McMaster Universities Arthritis Index (WOMAC) pain score, WOMAC function score, and knee joint space width measured by X-ray (trials 1 and 2) and the proportion of subjects with new vertebral fractures (trial 3).

All imaging and safety chemistry data were provided from external data providers and directly transferred into the clinical database and were not included in the SDV of the eCRF data entry. In the osteoarthritis trials (trials 1 and 2) the external imaging data were those of X-ray of the knee and MRI of the knee. In the osteoporosis trial (trial 3) most of the efficacy data were provided as external data, i.e. X-ray description of spine and measurement of bone mineral density of spine, hip and wrist by dual energy X-ray absorptiometry.

Several eCRF pages allowed entry in non-mandatory comment fields. These comment fields were excluded from the present analysis to avoid biasing the error rate downwards by including a large number of blank non-mandatory fields in the calculations. The data were not analyzed to identify differences between quantitative and qualitative data fields.

In summary, this analysis did not investigate the errors identified and corrected while the trials were ongoing, but only errors identified after monitoring, cleaning and database lock, when the error rate was presumed to be zero. It should be noted that post hoc SDV may not have identified all residual errors. The impact of any possible error not identified during the post hoc SDV was not within the scope of this analysis.

Statistical methods

The chi-square test was used to assess the statistical significance of differences in the proportion of discrepancies detected between trials and according to the degree of on-study monitoring. P values for pairwise comparisons between trials were adjusted for multiple comparisons by the Bonferroni method.
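
As a worked example (not the original analysis code; the software used for the published analysis is not stated), the complete vs. partial comparison in Table 3 corresponds to a 2 × 2 chi-square test on discrepant vs. error-free fields, with the Bonferroni factor applied to the raw P values of the pairwise trial comparisons.

```python
from scipy.stats import chi2_contingency

# Totals from Table 3: [discrepant fields, error-free fields] per monitoring arm
complete = [2772, 1045544 - 2772]
partial = [11804, 2207199 - 11804]

chi2, p, dof, expected = chi2_contingency([complete, partial], correction=False)
print(f"chi-square = {chi2:.0f}, P = {p:.1e}")  # P < 0.0001, as reported in Table 3

def bonferroni(p_values):
    """Bonferroni adjustment, as applied to the three pairwise trial-vs-trial
    comparisons: multiply each raw P value by the number of comparisons."""
    return [min(1.0, pv * len(p_values)) for pv in p_values]
```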

Results

In total, SDV of all eCRF data after database lock was performed for 2556 of the 6871 subjects randomized in the three clinical phase III trials (Table 1). For trials 1 and 2, 99.7% and 99.4% of the randomized subjects, respectively, were included in the post-database lock SDV. In trial 3, a randomly selected subpopulation of 360 subjects was included in the SDV investigation. Among the 2556 subjects included in the post-database lock SDV, 666 subjects (26.1%) had complete SDV while on study. The level of complete SDV was comparable among the three trials.

Overall error rate

The results of the post-database lock verification activities are summarized in Table 2. More than 3 million data entry fields were verified in the three trials. Among the 3 252 743 fields verified, a total of 14 576 discrepancies were reported, resulting in an overall error rate of 0.45%. The error rate was statistically significantly different between the three individual trials, with 0.35% in trial 1, 0.56% in trial 2 and 0.46% in trial 3 (P < 0.0001), but overall of the same order of magnitude. The number of variables verified ranged from 210 in trial 3 to 344 in trial 1. For the key variables of investigator site subject identification number and allocation number, used for linking data to the subject identification number among data sets, there were no discrepancies in any of the trials. The error rates were additionally calculated for the groups of variables in the categories of efficacy, major efficacy, safety, major safety and other. The error rate for the overall efficacy variables was comparable in the two osteoarthritis trials, at 0.30% and 0.32%, and in the osteoporosis trial it was 0.25%. The rate of errors in the variables classified as being of major importance to efficacy assessments was similar: 0.28%, 0.29% and 0.35% for trials 1, 2 and 3, respectively.

Table 2.

Classification according to relevance of variable, number of variables checked, number of fields verified and number of occurred discrepancies

Trial | Relevance | Number of variables in database | Number of fields verified | Number of discrepancies identified | Error rate
Trial 1 | All | 344 | 1 498 515 | 5 210 | 0.35%
Trial 1 | Subject ID | 2 | 2 344 | 0 | 0.00%
Trial 1 | Efficacy | 109 | 424 213 | 1 256 | 0.30%
Trial 1 | Efficacy major | 79 | 348 663 | 965 | 0.28%
Trial 1 | Safety | 80 | 571 051 | 2 675 | 0.47%
Trial 1 | Safety major | 46 | 98 663 | 543 | 0.55%
Trial 1 | Other | 153 | 500 907 | 1 279 | 0.26%
Trial 2 | All | 278 | 1 252 569 | 7 071 | 0.56%
Trial 2 | Subject ID | 2 | 2 048 | 0 | 0.00%
Trial 2 | Efficacy | 119 | 330 739 | 1 056 | 0.32%
Trial 2 | Efficacy major | 105 | 294 133 | 856 | 0.29%
Trial 2 | Safety | 70 | 496 287 | 4 356 | 0.88%
Trial 2 | Safety major | 34 | 77 075 | 1 066 | 1.38%
Trial 2 | Other | 87 | 423 495 | 1 659 | 0.39%
Trial 3 | All | 210 | 501 659 | 2 295 | 0.46%
Trial 3 | Subject ID | 2 | 9 330 | 0 | 0.00%
Trial 3 | Efficacy | 14 | 11 062 | 28 | 0.25%
Trial 3 | Efficacy major | 12 | 7 208 | 25 | 0.35%
Trial 3 | Safety | 85 | 239 922 | 1 707 | 0.71%
Trial 3 | Safety major | 54 | 44 471 | 352 | 0.79%
Trial 3 | Other | 109 | 241 345 | 560 | 0.23%
Total | All | – | 3 252 743 | 14 576 | 0.45%
Total | Subject ID | – | 13 722 | 0 | 0.00%
Total | Efficacy | – | 766 014 | 2 340 | 0.31%
Total | Efficacy major | – | 650 004 | 1 846 | 0.28%
Total | Safety | – | 1 307 260 | 8 738 | 0.67%
Total | Safety major | – | 220 209 | 1 961 | 0.89%
Total | Other | – | 1 165 747 | 3 498 | 0.30%

The error rates of the overall safety variables ranged from 0.47% to 0.88%. The discrepancies reported were mostly related to inappropriate entry of concomitant medication, e.g. medication not entered in the eCRF, or discrepancies in dose frequency, unit, stop date, etc. Error rates in the major safety variables ranged from 0.55% (trial 1) to 1.38% (trial 2) and, as described above, were mostly related to concomitant medication.

Complete vs. partial monitoring

In order to investigate the impact of the degree of monitoring, the error rate was calculated separately for the on-study 100% monitored subjects and for the on-study partly monitored subjects. The results are given in Tables 3 and 4. A statistically significant reduction in error rates was observed in all three trials in the data from the completely monitored subjects compared with the partly monitored subjects (Table 3, P < 0.0001). Overall, for all three trials, the error rate for the fully monitored subjects was 0.27% compared with 0.53% for the partly monitored subjects, yielding an absolute reduction of 0.26%.

Table 3.

Classification according to degree of on study monitoring

Trial | On-study monitoring | Number of fields verified | Number of discrepancies | Error rate | P value
Trial 1 | Complete | 458 712 | 931 | 0.20% | <0.0001
Trial 1 | Partial | 1 039 803 | 4 279 | 0.41% |
Trial 2 | Complete | 393 254 | 1 440 | 0.37% | <0.0001
Trial 2 | Partial | 859 315 | 5 631 | 0.66% |
Trial 3 | Complete | 193 578 | 401 | 0.21% | <0.0001
Trial 3 | Partial | 308 081 | 1 894 | 0.61% |
Total | Complete | 1 045 544 | 2 772 | 0.27% | <0.0001
Total | Partial | 2 207 199 | 11 804 | 0.53% |

Table 4.

Classification according to relevance of variable depending on degree of on study monitoring, number of fields verified and number of discrepancies followed by the corresponding error rate

Relevance | On-study monitoring | Number of fields verified | Number of discrepancies | Error rate
All | Complete | 1 045 544 | 2 772 | 0.27%
All | Partial | 2 207 199 | 11 804 | 0.53%
Subject ID | Complete | 4 004 | 0 | 0.00%
Subject ID | Partial | 9 718 | 0 | 0.00%
Efficacy | Complete | 230 298 | 332 | 0.14%
Efficacy | Partial | 535 716 | 2 008 | 0.37%
Efficacy major | Complete | 200 766 | 271 | 0.13%
Efficacy major | Partial | 449 238 | 1 575 | 0.35%
Safety | Complete | 436 084 | 1 576 | 0.36%
Safety | Partial | 871 176 | 7 162 | 0.82%
Safety major | Complete | 124 977 | 450 | 0.36%
Safety major | Partial | 95 232 | 1 511 | 1.59%
Other | Complete | 375 158 | 864 | 0.23%
Other | Partial | 790 589 | 2 634 | 0.33%

Analysis of the rate of errors in the highest impact (major) categories across studies (Table 4) showed a consistently low level of errors regardless of category of relevance, albeit with a slightly higher level of errors in the partly monitored group compared with the completely monitored group. The error rates in the major efficacy and major safety categories in the group subjected to partial monitoring were 0.35% and 1.59%, respectively.

Discussion

The results from this analysis contribute to decision making when facing the question of whether complete SDV brings added value to the data collected in large multicentre clinical trials.

The concept of recording clinical trial data completely free of errors is intriguing, and should be the target of any monitoring approach. However, this report did not indicate that this target can be achieved with conventional methods. Even data sets which have undergone complete SDV will have discrepancies between source data and the eCRF, indicating an inevitable risk of residual error regardless of the approach utilized. As a consequence, an alternative resource allocation could be applied across sites to identify and target monitoring efforts towards the critical data points, as not even complete SDV will generate 100% error-free data. The overall error rate was low (0.45%) and, although statistically different, the absolute error rates across the three clinical trials analyzed (0.35%, 0.56% and 0.46%, respectively) support the consistency of the data despite differences in individual protocols, clinical monitoring and research site personnel.

The total error rate for subjects subjected to 100% SDV was 0.27% and for the subjects subjected to partial monitoring the error rate was 0.53% (P < 0.0001). This yields an absolute risk reduction of 0.26%, corresponding to a need to perform complete SDV of about 370 data points to avoid one unspecified error. This number indicates a high cost of achieving a marginal reduction of unspecified errors in a clinical trial, considering the resources utilized for complete SDV. It may be of higher importance to evaluate the possible gain in data quality when focusing on the parameters which bear the highest importance to the final trial data integrity.
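
The ‘about 370 data points’ figure follows from the reciprocal of the absolute error rate reduction, analogous to a number needed to treat (we label it NNV, ‘number needed to verify’, purely for illustration), using the totals in Table 3:

$$
\text{ARR} = \frac{11\,804}{2\,207\,199} - \frac{2\,772}{1\,045\,544} \approx 0.535\% - 0.265\% = 0.270\%,
\qquad
\text{NNV} = \frac{1}{\text{ARR}} \approx \frac{1}{0.0027} \approx 370.
$$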

Perhaps the most important factor in weighing the pros and cons of different monitoring strategies is the expected impact of the monitoring strategy on the quality of critical trial data. While it is recognized that a low level of errors is better than a high level, it is also apparent that some errors may have little or no impact on quality, while others could have a high impact. A rational trial monitoring strategy should balance these considerations. Generally, establishing a universal error rate limit to be considered acceptable or ‘critical’ does not seem feasible, as trials differ greatly in crucial elements such as the type and severity of the disease studied, the risk of the intervention(s) used, and the chosen efficacy and safety parameters. Therefore, the quality impact of any quantified type of error should be interpreted in the context of the trial analyzed.

In an attempt to obtain useful information on this topic, the present data were further subdivided into categories according to their level of quality impact. This subdivision yielded the following observations.

The data analysis by trial (Table 2) shows very similar levels of residual error when comparing efficacy with major efficacy and safety with major safety, indicating a consistent, low error rate for all parameters across all three trials, whether or not they were deemed of major importance to the trial outcome.

To investigate whether the choice of monitoring strategy may have had a measurable impact on errors, we further subdivided the data into fields subjected to complete monitoring vs. partial monitoring (Table 4). As expected, the level of errors was generally lower for the data subjected to complete monitoring compared with partial monitoring. This observation was consistent regardless of category.

However, in the present data set, the level of errors observed in the ‘major’ efficacy category for the partly monitored subjects (0.35%), albeit higher than for the completely monitored subjects (0.13%), was at a level not expected to have any significant impact on trial outcome measures. A similar observation was made for the errors categorized as being of major importance to safety measures. These were somewhat higher for the partly monitored subjects (1.59%) than for the data fields subjected to 100% SDV (0.36%), and although this error level was not expected to result in a relevant difference in the trial output, it may indicate that data quality in the most important safety categories could benefit from an alternative monitoring strategy with increased focus on targeting safety-specific data.

These data can serve as a basis for cost/benefit evaluations to select the appropriate monitoring strategy in future trials, bearing in mind that neither of the two described strategies offers an error rate of 0%. The average investment needed for development of new medicinal products has drastically increased, and a recent analysis reports that the number of drugs approved per billion USD spent in research and development has declined from approximately 50 in the 1950s to less than one in 2010 [9]. Much of this increase in spending can be explained by the increased complexity needed for each trial to meet the demands of increasingly cautious regulators while demonstrating incremental benefits compared with an established, effective standard of care. With the direct cost of SDV estimated at 25% of the entire clinical trial budget [8], whether this activity has a positive impact on data quality is a highly relevant question. With this in mind, the present data do not suggest a meaningful gain in value from complete SDV, despite the statistically significant numeric difference, and suggest that monitoring of carefully selected parameters deemed particularly risky could be a more rational use of resources.

Currently, no data using a similar systematic approach have been published. On a related topic, the non-profit organization TransCelerate has performed a retrospective analysis of monitoring and SDV. This analysis quantified the number of queries generated during conventional on site SDV monitoring, i.e. discrepancies identified between source data and the eCRF. TransCelerate concluded that very few queries concerned critical data (2.4% of all discrepancies), and that SDV therefore had a minor impact on the overall data quality [10]. A related analysis was performed by Pronker and colleagues focusing on sponsor queries, defined as discrepancies identified in data entered in the eCRF while the trial was ongoing. The authors concluded that very few queries were raised regarding data points critical to the trial end points (0.001% of all data points), yielding a poor cost-effectiveness ratio, and they encouraged the development of risk-assessment oriented monitoring of clinical trial data [11].

The data from the present report are based on post-trial verification of data that had already been monitored, cleaned and locked for data transfer, and thus represent the rate of error persisting after conventional monitoring and data cleaning strategies during a clinical trial. They are therefore not directly comparable with the data published by TransCelerate and Pronker et al., as those data described the rate of errors identified during conventional SDV monitoring, not the rate of errors that such monitoring failed to identify, which is what the present paper addresses.

Further research is needed in order to increase our knowledge in this field. Additional quantitative information from studies monitored with alternative methodologies and approaches is needed. In a recent survey conducted by the Clinical Trials Transformation Initiative (CTTI), only three of 36 companies in the pharmaceutical, medical device and CRO industries responded that they always or frequently use centralized monitoring to replace on site visits [3]. This is likely changing now with the FDA's guidance on a risk-based approach to monitoring published in August 2013 [6], the EMA's reflection paper on risk based quality management in clinical trials published in November 2013 [7] and the results from the current report providing little support for complete SDV.

The risk-based approach is designed to achieve a high level of quality through standardized checks of recorded data, focusing on parameters deemed more likely to have a critical impact on data quality in the case of data entry errors. This method would to some extent replace much of the SDV traditionally performed by monitors on site. The actual impact of this approach on data quality and accuracy is yet to be described, but a risk-based approach is likely to have a positive impact on parameters related to critical items, while possibly allowing a slightly higher level of errors in non-critical parameters, thereby improving the cost : benefit ratio without a loss of data quality. Based on the data from this analysis, it is likely that a risk-based approach may provide benefit by reducing critical errors while focusing less on items deemed to have a lower impact on data quality.

A site-based analysis has not been included in this manuscript as it did not yield any relevant information. A country-based evaluation was not performed, as most countries had only one site included and would thus be similar to the site-based evaluation.

This analysis has not identified any quantifiable obstacles preventing a 100% error-free SDV. As all data entry as well as SDV is performed by human operators, each of these steps represents a potential point of error. Intuitively, variables such as personnel experience, motivation and attention to detail may impact on data quality, but these factors are very difficult, perhaps impossible, to quantify and correlate with data quality.

In conclusion, the findings from this unique analysis of a very large data set challenge the notion that a 0% error rate can be achieved regardless of the intensity of the monitoring performed. The data indicate a consistently low error rate across the three trial data sets analyzed. Complete SDV offered a marginal but statistically significant reduction in error rates, both for elements considered of major importance to the trial data and for non-critical data, compared with SDV of selected focus areas, but the findings do not support complete SDV as a means of providing meaningful improvements in data accuracy.

Appendix 1

The variable relevance categories used were ‘subject identification’, ‘efficacy’, ‘major efficacy’, ‘safety’, ‘major safety’ and ‘other’. The variables in each category, per trial, are listed in the table below.

Trial Number
Trial 1 Trial 2 Trial 3
Variables per category of relevance
SBJ identification Allocation number Allocation number Allocation number
S. ID No S. ID No S. ID No
 Efficacy ARA classification ARA classification Clinical fracture
AUSCAN Drug log Drug log
Drug log EQ-5D Inclusion criteria
EQ-5D Inclusion criteria Last treatment day
Inclusion criteria Last treatment day Randomization date
Last treatment day PRM diary
PRM diary Randomization date
Randomization date Signal knee
Signal knee Knee surgery
Knee surgery VAS
VAS WOMAC
WOMAC
Major efficacy EQ-5D EQ-5D Clinical fracture
Drug log Drug log Randomization date
Inclusion criteria Inclusion criteria Drug log
Last treatment day Last treatment day Inclusion criteria
Randomization date Randomization date Last treatment day
Knee surgery Knee surgery
Signal knee Signal knee
VAS VAS
WOMAC WOMAC
Safety AE AE AE
Concomitant medication Concomitant medication Concomitant medication
Discontinuation Discontinuation Discontinuation
ECG ECG ECG
Exclusion criteria Exclusion criteria Exclusion criteria
Haematology Haematology Haematology
Informed consent(s) Informed consent(s) Informed consent(s)
Physical examination Physical examination Physical examination
Vital signs Vital signs Vital signs
Urinalysis
Major safety AE AE AE
Concomitant medication Concomitant medication Concomitant medication
Discontinuation Discontinuation Discontinuation
Exclusion criteria Exclusion criteria Exclusion criteria
Informed consent(s) Informed consent(s) Informed consent(s)
Other Demographics Demographics Demographics
Fracture history Medical history Medical history
Hand osteoarthritis Procedure dates Fracture history
Medical history Procedure dates
Procedure dates Ca/Vit D log

AE, adverse event; AUSCAN, Australian/Canadian score; ECG, electrocardiography; EQ-5D, EuroQol questionnaire; PRM diary, analgesic diary; S. ID no., site subject identification number; VAS, visual analogue scale; WOMAC, Western Ontario and McMaster Universities Arthritis Index.

Competing Interests

All authors have completed the Unified Competing Interest form at http://www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare no support from any organization for the submitted work. Bente J. Riis is a member of the board and a shareholder in Nordic Bioscience A/S, Morten A. Karsdal is CEO, a member of the board and a shareholder in Nordic Bioscience A/S; Jeppe R. Andersen is COO and a shareholder in Nordic Bioscience A/S and Asger Bihlet is a shareholder in Nordic Bioscience A/S. There are no other relationships or activities that could appear to have influenced the submitted work.

The authors would like to thank all site staff members from the sites involved in the three phase 3 trials as well as the associates, monitors, project managers and data managers who provided invaluable data collection and processing. A special thanks goes to the staff at CCBR A/S, for their collaboration in the data collection.

Contributors

Jeppe Ragnar Andersen, Bente Juel Riis, Inger Byrjalsen, Gitte Hansen, Henrik Bo Hansen and Hans Christian Hoeck designed this research project. Jeppe Ragnar Andersen, Bente Juel Riis, Inger Byrjalsen, Gitte Hansen, Henrik Bo Hansen and Hans Christian Hoeck acquired the data with assistance from numerous associates at CCBR A/S and Nordic Bioscience A/S. All authors took active part in analysis and interpretation of the data. All authors wrote the manuscript jointly and gave final approval of the version to be published. All authors are accountable for all aspects of the work in relation to accuracy and integrity. Jeppe Ragnar Andersen and Inger Byrjalsen had full access to all of the data in the trial and take responsibility for the integrity of the data and the accuracy of the data analysis.

References

  • 1. Food and Drug Administration. 21 CFR part 312, subpart D, and 21 CFR part 812, subpart C. 2011.
  • 2. European Medicines Agency. ICH harmonised tripartite guideline E6: note for guidance on good clinical practice (CPMP/ICH/135/95). 1-6-2011.
  • 3. Morrison BW, Cochran CJ, White JG, Harley J, Kleppinger CF, Liu A, Mitchel JT, Nickerson DF, Zacharias CR, Kramer JM, Neaton JD. Monitoring the quality of conduct of clinical trials: a survey of current practices. Clin Trials 2011; 8: 342–349. doi: 10.1177/1740774511402703.
  • 4. Food and Drug Administration. Guidance for industry: guideline for monitoring of clinical investigations. 1988.
  • 5. Bakobaki JM, Rauchenberger M, Joffe N, McCormack S, Stenning S, Meredith S. The potential for central monitoring techniques to replace on-site monitoring: findings from an international multi-centre clinical trial. Clin Trials 2012; 9: 257–264. doi: 10.1177/1740774511427325.
  • 6. Food and Drug Administration. Guidance for industry: oversight of clinical investigations – a risk-based approach to monitoring. 8-1-2014.
  • 7. European Medicines Agency. Reflection paper on risk based quality management in clinical trials. 2014.
  • 8. Funning S, Grahnén A, Eriksson K, Kettis-Linblad Å. Quality assurance within the scope of Good Clinical Practice (GCP) – what is the cost of GCP-related activities? A survey within the Swedish Association of the Pharmaceutical Industry (LIF)'s members. Qual Assur J 2009; 12: 3–7.
  • 9. Scannell JW, Blanckley A, Boldon H, Warrington B. Diagnosing the decline in pharmaceutical R&D efficiency. Nat Rev Drug Discov 2012; 11: 191–200. doi: 10.1038/nrd3681.
  • 10. TransCelerate. Risk-based monitoring methodology. 30-5-2013 (accessed 6-12-2013).
  • 11. Pronker E, Geerts BF, Cohen A, Pieterse H. Improving the quality of drug research or simply increasing its cost? An evidence-based study of the cost for data monitoring in clinical trials. Br J Clin Pharmacol 2011; 71: 467–470. doi: 10.1111/j.1365-2125.2010.03839.x.
