Clin Pharmacol Ther. 2019 Oct 1;107(4):773–779. doi: 10.1002/cpt.1638

Table 1.

Examples of (novel) methodologies, in no particular order, for analysis of different types of data that would benefit from prospectively designed validation

Columns: Methodology; Potential benefit for drug developers and decision makers; Current limitations; How to validate prospectively.
Methodology: Borrowing of data [21, 22, 23]
Potential benefit: “Borrowing” cases from past studies for the control arm of a current RCT could increase the efficiency of decision making with the current study. This may translate into a smaller sample size for the current trial and/or unequal randomization.
Current limitations: Relies on the assumption that the historical information is similar to the current control data; may result in bias if this assumption is not satisfied. Several methods for historical borrowing have been proposed.
How to validate prospectively: Use conventional RCTs to concurrently analyze results as per usual and with borrowed data, according to a preplanned protocol and prespecified data sources. Compare and assess the various methods of borrowing.
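
As a minimal illustration of the borrowing idea (not the specific methods cited above), the sketch below uses a fixed power prior for a binary control-arm endpoint: historical control responses are down-weighted by a factor a0 between 0 and 1 before being combined with the concurrent control data in a conjugate beta-binomial model. All counts, the prior, and the discount factors are hypothetical.

```python
# Sketch of historical "borrowing" via a fixed power prior
# (beta-binomial model for a binary control-arm endpoint).
# All counts and the discount factor a0 are hypothetical.
from scipy import stats

# Historical control arm (e.g., from a past RCT)
x_hist, n_hist = 42, 100      # responders / patients
# Concurrent control arm of the current RCT
x_curr, n_curr = 18, 50

def control_posterior(a0, a_prior=1.0, b_prior=1.0):
    """Beta posterior for the control response rate, with historical
    data discounted by the power-prior weight a0 (0 = no borrowing,
    1 = full pooling)."""
    a_post = a_prior + a0 * x_hist + x_curr
    b_post = b_prior + a0 * (n_hist - x_hist) + (n_curr - x_curr)
    return stats.beta(a_post, b_post)

for a0 in (0.0, 0.5, 1.0):
    post = control_posterior(a0)
    lo, hi = post.interval(0.95)
    print(f"a0={a0:.1f}: posterior mean={post.mean():.3f}, "
          f"95% CrI=({lo:.3f}, {hi:.3f})")
```

Comparing the posteriors across a0 values mirrors the proposed validation exercise: the same trial analyzed with and without borrowing, under a preplanned choice of historical data.
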
Methodology: Use of external control group, threshold crossing [24, 25]
Potential benefit: May enable causal inferences about drug effects on the basis of external (historical) control groups for products and indications where RCTs are not feasible.
Current limitations: Comparisons with external controls are based on assumptions that often cannot be verified, which may lead to biased conclusions about drug effects [26]. External control groups tend to have worse outcomes than a similar control group in an RCT [27, 28].
How to validate prospectively: Use conventional RCTs to concurrently analyze results as single-arm trials with historical comparators; compare results from the randomized and nonrandomized analyses based on a pre-agreed plan [25].
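
To make the threshold-crossing idea concrete, a hypothetical sketch follows: a response-rate threshold is prespecified from external (historical) control data, and the single-arm result is judged positive only if the lower confidence bound for its response rate exceeds that threshold. The numbers and the normal-approximation interval are illustrative assumptions, not the procedure of references 24 and 25.

```python
# Sketch of a "threshold-crossing" decision rule for a single-arm
# trial with a binary endpoint (hypothetical numbers).
import math
from scipy import stats

threshold = 0.30          # prespecified from external control data
x, n = 28, 60             # responders / patients in the single-arm trial

p_hat = x / n
se = math.sqrt(p_hat * (1 - p_hat) / n)
z = stats.norm.ppf(0.975)                 # two-sided 95% CI
lower = p_hat - z * se

print(f"response rate {p_hat:.2f}, 95% CI lower bound {lower:.2f}")
print("threshold crossed" if lower > threshold else "threshold not crossed")
```
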
Methodology: Indirect comparisons for relative efficacy [29, 30]
Potential benefit: Allows estimation of the relative efficacy of two (or more) treatments in the absence of any head-to-head RCTs (i.e., direct comparisons). Frequently used by HTA bodies for REA, because many (new) drugs have insufficient RCT information for direct comparisons.
Current limitations: Although indirect comparisons usually rely on randomized data, the treatments of interest have not been randomized against each other (head-to-head), only against a common comparator. A variety of methods exist to mitigate this, but each method rests on a number of assumptions about the data used. Methods are still evolving and sometimes generate discrepant results.
How to validate prospectively: Use the opportunity afforded by the planning of a head-to-head RCT, where previous RCTs of the drugs of interest against a common comparator (e.g., placebo) are available, to develop a prospective analysis plan for indirect comparisons and MTCs. The aim is to compare different methods for indirect comparison or MTC, explain discrepancies in results between methods, and cross-validate methods against each other and against the head-to-head RCT.
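
One simple instance of an anchored indirect comparison is the Bucher method (more elaborate MTC and network meta-analysis methods exist). The sketch below uses hypothetical log odds ratios: given A versus placebo and B versus placebo estimates, the indirect A versus B effect is their difference, and its variance is the sum of their variances.

```python
# Sketch of an anchored (Bucher) indirect comparison of drug A vs.
# drug B via a common placebo comparator.
# Inputs are hypothetical log odds ratios and standard errors.
import math
from scipy import stats

d_A_plb, se_A_plb = -0.45, 0.15   # log OR, A vs. placebo
d_B_plb, se_B_plb = -0.20, 0.18   # log OR, B vs. placebo

d_AB = d_A_plb - d_B_plb                       # indirect log OR, A vs. B
se_AB = math.sqrt(se_A_plb**2 + se_B_plb**2)   # SEs add in quadrature
z = stats.norm.ppf(0.975)
lo, hi = d_AB - z * se_AB, d_AB + z * se_AB

print(f"indirect OR A vs. B: {math.exp(d_AB):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
```

A later head-to-head RCT of A versus B would provide the direct estimate against which such indirect estimates could be cross-validated.
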
Methodology: Replacing RCT by RWD analysis [12]
Potential benefit: Conceptually, RCTs could, in some situations, be replaced by comparative analyses of RWD. Replacing even a small proportion of postmarketing RCTs with nonrandomized RWD analyses would in many cases translate into faster availability of relevant information using substantially fewer resources.
Current limitations: Major concerns about comparative RWD analyses include the inability to tightly control measurements of patient characteristics and health outcomes, and susceptibility to bias. A general lack of confidence in nonrandomized RWD analyses has limited their impact.
How to validate prospectively: Prospectively design new RWD studies to match the design of planned RCTs. This is feasible when both drugs have been in routine use for a sufficient time. The concurrent approach avoids bias by matching the RCTs and RWD analyses as closely as possible (e.g., for patient characteristics and dose regimens), while avoiding the temptation to trim the RWD analysis to the RCT results once they become available. It also allows sensitivity analyses to identify whether alternative designs or analyses could have improved agreement between the two approaches.
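
One common adjustment approach in comparative RWD analyses is inverse probability of treatment weighting based on a propensity score; the sketch below illustrates only this general idea, not the design proposed in reference 12, and all data are simulated stand-ins for routinely collected covariates and outcomes.

```python
# Sketch of a comparative RWD analysis using a propensity score and
# inverse probability of treatment weighting (IPTW). Simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3))                        # baseline covariates
p_treat = 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.3 * X[:, 1])))
treated = rng.binomial(1, p_treat)                 # nonrandomized treatment
p_out = 1 / (1 + np.exp(-(-1.0 + 0.4 * X[:, 0] - 0.7 * treated)))
outcome = rng.binomial(1, p_out)                   # binary outcome

# Propensity score: probability of treatment given covariates
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]
w = np.where(treated == 1, 1 / ps, 1 / (1 - ps))   # IPTW weights

# Weighted risk in each arm, then the adjusted risk difference
risk_1 = np.average(outcome[treated == 1], weights=w[treated == 1])
risk_0 = np.average(outcome[treated == 0], weights=w[treated == 0])
print(f"IPTW-adjusted risk difference: {risk_1 - risk_0:.3f}")
```

In the proposed validation, an estimate of this kind would be prespecified and compared with the matched RCT result rather than tuned to it.
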
Methodology: Reweighting of RCT results to reflect real life [31, 32]
Potential benefit: Using RWD (e.g., from disease registries) to “reweight” RCT results may improve the external validity and generalizability of RCT results.
Current limitations: A demonstration project has shown feasibility, but the concept has not been prospectively validated.
How to validate prospectively: Use the results of conventional RCTs of novel drugs to obtain reweighted results and compare them with measured outcomes once enough RWD has accumulated, according to a preplanned protocol and prespecified data sources.
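
One simple way to “reweight” an RCT result is post-stratification: stratum-specific effects estimated in the trial are averaged using stratum prevalences taken from a disease registry (direct standardization). The strata, effects, and prevalences below are hypothetical, and the cited demonstration project may use a different, model-based weighting.

```python
# Sketch of reweighting (standardizing) RCT results to a registry
# population via post-stratification on one covariate. Hypothetical numbers.
import numpy as np

# Stratum-specific treatment effects (risk differences) estimated in the RCT
strata = ["age < 65", "age >= 65"]
effect_rct = np.array([-0.10, -0.04])

# Stratum proportions in the RCT vs. in a real-world disease registry
prop_rct = np.array([0.70, 0.30])
prop_registry = np.array([0.40, 0.60])

effect_trial_pop = np.sum(prop_rct * effect_rct)
effect_rw_pop = np.sum(prop_registry * effect_rct)   # reweighted estimate

print(f"trial-population effect:  {effect_trial_pop:+.3f}")
print(f"registry-weighted effect: {effect_rw_pop:+.3f}")
```
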
Methodology: Extrapolation of knowledge to an unstudied population [33, 34]
Potential benefit: In some populations (e.g., neonates or young children), the conduct of clinical trials is fraught with operational or ethical challenges, leading to an absence of information on drug effects. “Implicit extrapolation,” although subjective, is often the only basis for treatment or dosing decisions in these populations. A systematic framework for “explicit extrapolation” of relevant information from a source population (e.g., adults) to a target population (e.g., small children), preferably based on quantitative methodology, has the potential to improve treatment decisions.
Current limitations: Although some of the methods proposed for the extrapolation exercise are not novel, experience with their use in extrapolation exercises is limited. Few, if any, systematic extrapolation exercises have undergone prospective validation.
How to validate prospectively: As clinical experience grows during the postmarketing phase, the assumptions and predictions made on the basis of extrapolations can be checked against prospectively planned collection of RWD. Apply the concept of extrapolation also in areas where RCTs are possible (e.g., extension of indications in adults, where further RCTs are conducted) and compare whether the extrapolation concept (requiring different or less data) would have resulted in similar results. Assess various concepts of extrapolation simultaneously. This might require that some additional data (such as PK/PD) be collected in the current RCT.
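
To illustrate one quantitative building block often used in pediatric extrapolation exercises (not necessarily the framework of references 33 and 34), the sketch below applies allometric scaling of drug clearance by body weight with a commonly assumed exponent of 0.75. A real exercise would add maturation functions and supporting PK/PD evidence; all values here are illustrative.

```python
# Sketch of allometric scaling of clearance from adults to lighter
# (pediatric) body weights, a common component of quantitative
# extrapolation. Values are illustrative only.

ADULT_WEIGHT_KG = 70.0
ALLOMETRIC_EXPONENT = 0.75   # commonly assumed for clearance

def scaled_clearance(cl_adult_l_per_h: float, weight_kg: float) -> float:
    """Clearance scaled by body weight using a fixed allometric exponent."""
    return cl_adult_l_per_h * (weight_kg / ADULT_WEIGHT_KG) ** ALLOMETRIC_EXPONENT

cl_adult = 10.0  # L/h in a typical adult (hypothetical)
for weight in (10, 20, 40):
    print(f"{weight:>3} kg: predicted CL = {scaled_clearance(cl_adult, weight):.1f} L/h")
```

Checking such model-based predictions against postmarketing RWD, or against later RCTs in populations where trials remain possible, is the prospective validation envisaged above.
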

Methodology: Predictive approaches to heterogeneous treatment effects [35, 36, 37]
Potential benefit: (Positive) RCTs can only help predict that at least some patients similar to those enrolled in the trial will likely benefit from the intervention (“reference class forecasting”). However, determining the best treatment for an individual patient is different from determining the best average treatment, because of heterogeneity of treatment effects. Improved prediction of outcome risk and understanding of heterogeneity of treatment effect could be key enablers of personalized treatment decisions and more successful treatment outcomes.
Current limitations: Conventional subgroup analyses, aiming to describe effect modifiers, often fall short because each patient belongs to multiple different subgroups, each of which may yield different inferences. More elaborate, regression-based approaches have been proposed to address heterogeneity of treatment effect, including risk modeling and treatment effect modeling. However, experience with these methods is limited, especially with externally derived models. There have been few, if any, attempts to systematically evaluate their usefulness in clinical practice.
How to validate prospectively: Develop models concurrently with the design of an RCT. Where possible, incorporate assessment of the use of RWD for predictive analysis of heterogeneity of treatment effect. The ultimate test of a predictive approach is to compare decisions or outcomes in settings that use such predictions with usual care in a prospectively planned experiment [36].
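
The sketch below illustrates the risk-modeling flavor of predictive heterogeneity-of-treatment-effect analysis: a baseline-risk model is fitted (here, naively, on the whole simulated trial), patients are grouped by predicted risk quartile, and the arm difference is reported within each quartile. Everything is simulated and simplified; risk-modeling practice (e.g., blinded or externally derived risk models) is more involved than shown here.

```python
# Sketch of risk-based assessment of heterogeneous treatment effects:
# stratify a simulated trial population by predicted baseline risk and
# estimate the treatment effect within each risk quartile.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 8000
X = rng.normal(size=(n, 4))                        # baseline covariates
treat = rng.binomial(1, 0.5, size=n)               # randomized 1:1
base_logit = -2.0 + X[:, 0] + 0.8 * X[:, 1]        # baseline risk
p = 1 / (1 + np.exp(-(base_logit - 0.6 * treat)))  # constant relative effect
y = rng.binomial(1, p)

# Baseline risk model (simplified: fit on everyone, ignoring treatment)
risk = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
quartile = np.digitize(risk, np.quantile(risk, [0.25, 0.5, 0.75]))

for q in range(4):
    idx = quartile == q
    rd = y[idx & (treat == 1)].mean() - y[idx & (treat == 0)].mean()
    print(f"risk quartile {q + 1}: absolute risk difference {rd:+.3f}")
```

Even with a constant relative effect, the absolute risk difference grows across risk quartiles, which is the kind of heterogeneity such predictive approaches aim to surface before being tested against usual care in a planned experiment.
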

HTA, health technology assessment; MTC, mixed treatment comparison; PK/PD, pharmacokinetic/pharmacodynamic; RCT, randomized controlled trial; REA, relative effectiveness assessment; RWD, real‐world data.