JACC: Advances editorial
. 2023 Jul 19;2(5):100413. doi: 10.1016/j.jacadv.2023.100413

Improving Registry-Based Observational Comparative Effectiveness Studies by Prospectively Incorporating Robust Treatment Preference Instruments

Björn Redfors
PMCID: PMC11198280  PMID: 38939014

Both randomized controlled trials (RCTs) and observational studies can play important roles in the evidence base for the comparative effectiveness of cardiovascular treatments. Individual patient RCTs have advantages over observational studies and should be the preferred tool for establishing that a treatment is efficacious and safe prior to its implementation in clinical care.1 However, appropriately performed observational studies can produce unbiased results of comparable value to those of RCTs and, if conducted within high-quality health care registries, can be efficient and inexpensive tools for continued evaluation and confirmation of the effectiveness of treatments in real-world clinical practice.2 Because a much larger number of patients are treated in real-world practice than in the preceding RCT, reliable postimplementation comparisons of treatments can have substantial statistical power to detect differences in effectiveness and may greatly expand the evidence base for these treatments. Unfortunately, many observational comparative effectiveness studies have serious flaws, resulting in a high risk of inaccurate conclusions about the studied treatment.1,3

In cardiology, the factors governing which treatment a patient is prescribed are typically complex and rarely adequately captured in registries or other databases; ie, there is unmeasured confounding that introduces bias. Simply adjusting observational analyses for measured cardiovascular risk factors, concomitant drugs, etc, is generally insufficient, even when done using advanced statistical methods.1,3

Instrumental variable analysis

A possible means of overcoming the influence of unmeasured confounding in observational studies is instrumental variable (IV) analysis.4 IV analysis uses a third variable called an IV that is correlated with the treatment of interest but is not directly related to the outcome, except through its effect on the treatment. By using the IV to isolate the variation in the treatment allocation that is unrelated to the unobserved confounders, IV analysis can provide unbiased estimates of the effect of the treatment on the outcome. The key assumption in IV analysis is that the IV is not correlated with the unmeasured confounders. The IV must also be correlated with the treatment. If these assumptions are met, then IV analysis can provide a powerful tool for estimating unbiased effects in situations where traditional statistical analyses may be biased.
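To make this logic concrete, the following is a minimal simulation (not from the editorial; the variable names, effect sizes, and sample size are all illustrative assumptions). It shows how a valid binary IV recovers a true treatment effect that a naive treated-vs-untreated comparison misses, using the Wald estimator: the effect of the instrument on the outcome divided by the effect of the instrument on the treatment.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500_000  # large synthetic cohort

# Unmeasured confounder U raises both treatment probability and the outcome.
u = rng.normal(size=n)
# Instrument Z: associated with treatment, independent of U, and with no
# direct effect on the outcome (the two key IV assumptions).
z = rng.binomial(1, 0.5, size=n)
# Treatment depends on both the instrument and the confounder.
t = (0.8 * z + 0.8 * u + rng.normal(size=n) > 0.4).astype(float)
# True treatment effect is 1.0; U also raises the outcome (confounding).
y = 1.0 * t + 2.0 * u + rng.normal(size=n)

# Naive comparison of treated vs untreated patients is biased upward by U.
naive = y[t == 1].mean() - y[t == 0].mean()
# Wald IV estimator: effect of Z on Y divided by effect of Z on T.
iv = (y[z == 1].mean() - y[z == 0].mean()) / (
    t[z == 1].mean() - t[z == 0].mean()
)
print(f"naive estimate: {naive:.2f}, IV estimate: {iv:.2f}")
```

In this synthetic data the naive estimate lands far from the true effect of 1.0 while the IV estimate lands close to it; the estimator is only as credible as its assumptions, so a real analysis must argue that the instrument is unrelated to the unmeasured confounders.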

A very effective IV is the treatment allocation in an individual patient RCT. In the RCT, the association between treatment allocation and any measured or unmeasured risk factors is removed by randomly allocating the treatment to the patient. If the results of the RCT are then evaluated according to the intention-to-treat principle (ie, according to the allocated treatment), then the results of the RCT are unbiased. In this situation, the random allocation is the IV; in most well-conducted RCTs, the IV is strongly associated with the actual treatment given to the patient, and (in the ideal RCT) it has no association with the outcome other than through its effects on the treatment.

Although no IV available in an observational study is as strong as treatment allocation in an RCT, IVs exist that may be of value in observational studies. Examples include geographic variations in health care policies and physicians' prescribing patterns. These IVs may occur naturally and already exist as variables in the registries or data sets to be used, independent of the researchers or policy makers; but in many cases, pre-existing IVs may not fulfill the key assumptions for a good IV for the treatment of interest (ie, the IV may be associated with unmeasured confounders).5 Instead of retrospectively trying to identify acceptable IVs in pre-existing data sets, the author argues that IVs should be prospectively defined and incorporated in registries and other data capture systems when observational comparative effectiveness studies are being planned or when new treatments are being implemented in clinical practice.

Examples of IVs that could be prospectively included in health care registries (or in separate databases) include relevant details related to local practice patterns and physicians' prescribing patterns. Even better would be to prospectively 'create' dedicated IVs, such as systematic (randomized or nonrandomized) changes in local practice (Figure 1).

Figure 1.

Options for Conducting Comparative Effectiveness Studies of New Cardiovascular Treatments

(A) Strengths and weaknesses of different study types. (B) Illustration of how treatment preference instruments (instrumental variables) work. Instrumental variable analysis uses a treatment preference instrument (instrumental variable) that is associated with the actual treatment received by the patients (exposure) but that is not associated with unmeasured confounders and that is not associated with the outcome except through its association with the exposure. IV = instrumental variable; RCT = randomized controlled trial.

Natural variation in treatment preference across hospitals or regions

Different hospitals and different health care regions often differ to some extent in their relative preference for one treatment over another. For example, the likelihood that a patient with multivessel coronary artery disease will undergo percutaneous coronary intervention (PCI) vs coronary artery bypass grafting varies considerably across health care regions in Sweden.6 In such situations, using hospital-level treatment preference as an instrument can overcome unmeasured confounding related to physician preference and local practice patterns (which is otherwise difficult to account for). Analyses using natural variations in treatment preference across hospitals can be done retrospectively at minimal cost.
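A hospital-level preference instrument of this kind is commonly analyzed with two-stage least squares (2SLS). The sketch below is entirely synthetic (the hospital count, preference range, and effect sizes are assumptions for illustration); it treats each hospital's preference for one treatment as a continuous instrument.

```python
import numpy as np

rng = np.random.default_rng(1)
n_hosp, n_per = 50, 4_000
n = n_hosp * n_per

# Each hospital has its own baseline preference for treatment A (e.g., PCI).
pref = rng.uniform(0.2, 0.8, size=n_hosp)
hosp = np.repeat(np.arange(n_hosp), n_per)
z = pref[hosp]  # instrument: the treating hospital's preference

u = rng.normal(size=n)  # unmeasured patient-level confounder
# Treatment probability rises with hospital preference and with U.
t = (rng.random(n) < np.clip(z + 0.2 * u, 0.0, 1.0)).astype(float)
y = 0.5 * t + 1.5 * u + rng.normal(size=n)  # true treatment effect: 0.5

# Naive treated-vs-untreated comparison is confounded by U.
naive = y[t == 1].mean() - y[t == 0].mean()

# Two-stage least squares, with an intercept in both stages.
X1 = np.column_stack([np.ones(n), z])
t_hat = X1 @ np.linalg.lstsq(X1, t, rcond=None)[0]  # stage 1: predict T from Z
X2 = np.column_stack([np.ones(n), t_hat])
beta = np.linalg.lstsq(X2, y, rcond=None)[0]        # stage 2: regress Y on predicted T
print(f"naive: {naive:.2f}, 2SLS: {beta[1]:.2f}")
```

A real analysis would estimate each hospital's preference from historical data (excluding the index patient) and cluster the standard errors by hospital; both refinements are omitted here for brevity.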

However, there are several challenges associated with performing such analyses retrospectively using a pre-existing data set. First, the differences in treatment preference across hospitals may not be substantial, resulting in a relatively weak IV and an uncertain effect size. More importantly, the registry (or other data capture system) may not capture enough detail about the patient population to properly define the analysis set, or there may be regional variations in the interpretation and reporting of key variables in the registry (which can invalidate the analysis). For example, the relative effectiveness of 2 treatments in heart failure with preserved ejection fraction (HFpEF) can only be effectively examined using hospital-level treatment preference as an IV if the HFpEF diagnosis is reliably captured in the registry and consistently reported across hospitals. Regional variations in how registry variables are interpreted and reported can introduce considerable bias in retrospective analyses using natural variations in treatment preference as IVs. To overcome this limitation, key variables (such as the criteria for inclusion of a patient in the analysis population) can be prospectively identified, clarified, or modified as needed, and quality assured.

Systematic variation in treatment preference across hospitals or regions

Systematic variations in treatment preference across hospitals can introduce larger differences in treatment preference across hospitals and thereby serve as effective IVs. While issues related to equipoise and the principle of equal care to all citizens often preclude policy makers from introducing systematic differences in care across different hospitals, there are situations in which such approaches are reasonable (and arguably preferable). One such situation could be the implementation of a new treatment in clinical practice. Since the evidence base for many new treatments that are implemented in clinical practice consists of data from relatively tightly controlled RCTs (with limited external validity) and often relies on the reduction of nonfatal endpoints, the author argues that it is reasonable to implement such treatments in a manner that allows for reliable assessment of real-world effectiveness.

One means of conducting a systematic change in practice could be transitioning from one treatment to another in a stepwise manner in different hospitals or health care regions. An example of this type of prospective IV analysis is the SWITCH-SWEDEHEART (NCT05183178) project currently being conducted in Sweden, in which 3 different clusters of health care regions change from ticagrelor to prasugrel as the preferred P2Y12 inhibitor (local recommendation) for patients with acute coronary syndrome (ACS) undergoing PCI.7 The transition from ticagrelor to prasugrel has been coordinated such that the 3 clusters make the transition one after the other, with 9-month intervals between each cluster's transition. At this time, 2 of the 3 health care region clusters have made the transition successfully (ie, the great majority of patients with ACS who undergo PCI are treated with the recommended P2Y12 inhibitor both before and after the switch from ticagrelor to prasugrel). Because most patients are treated according to the local recommendation, the preferred P2Y12 inhibitor in the region at the time of PCI will be a strong IV. Because the Swedish Coronary Angiography and Angioplasty Registry captures all patients who undergo PCI in Sweden, and because the variable indicating whether the patient underwent PCI due to ACS as well as the outcome measures are reliable across hospitals, the SWITCH-SWEDEHEART study will provide a robust estimate of the relative effectiveness of prasugrel vs ticagrelor for patients with ACS who undergo PCI.
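The stepped-wedge logic above can be sketched in a small simulation. The data are entirely synthetic: the 3 clusters and 9-month spacing mirror the description, but all effect sizes, the 90% adherence rate, and the continuous outcome are invented for illustration. The region's current recommendation serves as the instrument.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300_000

# 3 clusters of health care regions switch the recommended drug at
# staggered times (the stepped-wedge design).
cluster = rng.integers(0, 3, size=n)
month = rng.uniform(0, 27, size=n)          # month of each patient's PCI
switch_month = np.array([0.0, 9.0, 18.0])   # staggered transitions
recommended_new = (month >= switch_month[cluster]).astype(float)  # the IV

u = rng.normal(size=n)  # unmeasured confounder driving drug choice and outcome
# Most patients receive the recommended drug (a strong instrument);
# the crossovers are driven by the confounder.
received_new = np.where(rng.random(n) < 0.9,
                        recommended_new,
                        (u > 0).astype(float))
# True effect of the new drug on a continuous risk score: -0.3.
y = -0.3 * received_new + 1.0 * u + rng.normal(size=n)

# Naive as-treated comparison is distorted by the confounded crossovers.
naive = y[received_new == 1].mean() - y[received_new == 0].mean()
# Wald estimator with the recommendation as the instrument. A real
# stepped-wedge analysis would also adjust for cluster and secular time
# trends; both are omitted here.
iv = (y[recommended_new == 1].mean() - y[recommended_new == 0].mean()) / (
    received_new[recommended_new == 1].mean()
    - received_new[recommended_new == 0].mean()
)
print(f"naive: {naive:.2f}, IV: {iv:.2f}")
```

Because adherence to the recommendation is high, the first stage is strong and the recommendation-based estimate recovers the true effect even though the naive as-treated comparison does not.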

Systematic variation in treatment preference within hospitals

Another means of creating a prospective IV could be the systematic changing (randomized or nonrandomized) of the preferred drug or device within a hospital or health care region, eg, on a weekly basis.8 Examples include alternating between the preferred statin for cholesterol lowering in patients with ACS or alternating which coronary stent(s) are available on the shelf in the cardiac catheterization laboratory.

The extent to which these local treatment preferences are applied to individual patients can be varied according to the purpose of the project. For quality control of new treatments or processes, one may only want to alternate the ‘recommended’ treatment without more rigorously controlling which treatment individual physicians prescribe, whereas one may want to ensure greater treatment adherence for dedicated effectiveness studies.

Conclusions

In summary, rigorous treatment preference instruments can function as IVs in observational comparative effectiveness analyses just like random allocation does in an RCT. This author argues that prior to implementing any new treatments in clinical care, as well as when evaluating existing treatment options, the responsible parties should carefully consider whether any useful instrument could be incorporated in existing health care registries that would facilitate inexpensive and unbiased comparative effectiveness studies.

Funding support and author disclosures

The author has no relationships relevant to the contents of this paper to disclose.

Footnotes

The author attests they are in compliance with human studies committees and animal welfare regulations of the author’s institution and Food and Drug Administration guidelines, including patient consent where appropriate. For more information, visit the Author Center.

References

  • 1.Vandenbroucke J.P. When are observational studies as credible as randomised trials? Lancet. 2004;363:1728–1731. doi: 10.1016/s0140-6736(04)16261-2. [DOI] [PubMed] [Google Scholar]
  • 2.Hemkens L.G., Contopoulos-Ioannidis D.G., Ioannidis J.P.A. Routinely collected data and comparative effectiveness evidence: promises and limitations. Can Med Assoc J. 2016;188:E158–E164. doi: 10.1503/cmaj.150653. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Hernán M.A., Robins J.M. Causal Inference. Chapman & Hall/CRC; 2018. [Google Scholar]
  • 4.Brookhart M.A., Rassen J.A., Schneeweiss S. Instrumental variable methods in comparative safety and effectiveness research. Pharmacoepidemiol Drug Saf. 2010;19:537–554. doi: 10.1002/pds.1908. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Garabedian L.F., Chu P., Toh S., Zaslavsky A.M., Soumerai S.B. Potential bias of instrumental variable analyses for observational comparative effectiveness research. Ann Intern Med. 2014;161:131–138. doi: 10.7326/m13-1887. [DOI] [PubMed] [Google Scholar]
  • 6.Vasko P., Alfredsson J., Bäck M., et al. SWEDEHEART annual report 2022. https://www.ucr.uu.se/swedeheart/dokument-sh/arsrapporter-sh
  • 7.Omerovic E., Erlinge D., Koul S., et al. Rationale and design of switch Swedeheart: a registry-based, stepped-wedge, cluster-randomized, open-label multicenter trial to compare prasugrel and ticagrelor for treatment of patients with acute coronary syndrome. Am Heart J. 2022;251:70–77. doi: 10.1016/j.ahj.2022.05.017. [DOI] [PubMed] [Google Scholar]
  • 8.Redfors B., Angerås O., Omerovic E. Confirming the performance of new coronary stent platforms by systematic registry-based cluster-randomised evaluation of their implementation in clinical practice. EuroIntervention. 2022;18:e620–e622. doi: 10.4244/eij-d-22-00592. [DOI] [PMC free article] [PubMed] [Google Scholar]

Articles from JACC: Advances are provided here courtesy of Elsevier
