Author manuscript; available in PMC: 2021 Aug 1.
Published in final edited form as: Clin Trials. 2020 Jun 26;17(4):360–367. doi: 10.1177/1740774520928426

Practical challenges in the conduct of pragmatic trials embedded in health plans: Lessons of IMPACT-AFib, an FDA-Catalyst trial

Crystal J Garcia 1, Kevin Haynes 2, Sean D Pokorney 3, Nancy D Lin 4, Cheryl McMahill-Walraven 5, Vinit Nair 6, Lauren Parlett 2, David Martin 7, Hussein R Al-Khalidi 8, Debbe McCall 9, Christopher B Granger 3, Richard Platt 1, Noelle M Cocoros 1
PMCID: PMC8293906  NIHMSID: NIHMS1720819  PMID: 32589056

Abstract

IMPACT-AFib was an 80,000-patient randomized clinical trial implemented by five US insurance companies (health plans) aimed at increasing the use of oral anticoagulants by individuals with atrial fibrillation who were at high risk of stroke and not on treatment. The underlying thesis was that patients could be change agents to initiate prescribing discussions with their providers. We tested the effect of mailing information to both patients and their providers. We used administrative medical claims and pharmacy dispensing data to identify eligible patients, to randomize them to an early or delayed intervention, and to assess clinical outcomes. The core data were analysis-ready datasets each site had created and curated for the FDA’s Sentinel System, supplemented by updated “fresh” pharmacy and enrollment data to ensure eligibility at the time of intervention. Following mutually agreed upon procedures, sites linked to additional internal source data to implement the intervention – educational information mailed to patients and their providers in the early intervention arm, and to providers of patients in the delayed intervention arm approximately 12 months later. The primary analysis compares the early intervention arm to the delayed intervention arm, prior to the delayed intervention being conducted (i.e., compares intervention to non-intervention). The endpoints of interest were evidence of initiation of anticoagulation (primary) as well as clinical endpoints, including stroke and hospitalization for bleeding. Major challenges, some unanticipated, identified during the planning phase include: convening multi-stakeholder investigator teams and advisors, addressing ethical concerns about not intervening in a usual care comparison group, and identifying and avoiding interference with sites’ routine programs that were similar to the intervention. Needs and challenges during the implementation phase included the fact that even limited site-specific programming greatly increased time and effort, the need to refresh research data extracts immediately before outreach to patients and providers, potential difficulty identifying low cost medications such as warfarin that may not be reimbursed by health plans and so not discoverable in dispensing data, the need to develop workarounds when “providers” in claims data were facilities, difficulty addressing clustering of patients by provider because providers can have multiple identifiers within and between health plans, and the need to anticipate loss to follow up because of health plan disenrollment or change in benefits. As pragmatic trials begin to shape evidence generation within clinical practice, investigators should anticipate issues inherent to claims data and working with multiple large sites. In IMPACT-AFib, we found that investing in collaboration and communication amongst all parties throughout all phases of the study helped ensure common understanding, early identification of challenges, and streamlined actual implementation.

Keywords: pragmatic clinical trial, real-world evidence, real-world data, Sentinel Initiative, atrial fibrillation, stroke

Introduction

With the US Food and Drug Administration (FDA)’s expanding interest in real-world evidence, FDA launched FDA-Catalyst, which combines direct patient and/or provider contact with the data and technical infrastructure that are part of the Sentinel System.1 IMPACT-AFib (Implementation of a Randomized Controlled Trial to Improve Treatment with Oral AntiCoagulants in Patients with Atrial Fibrillation; ClinicalTrials.gov NCT03259373) was the first clinical trial conducted using the Sentinel platform and serves as a proof of concept effort.2 Through IMPACT-AFib, FDA sought to learn whether and how the Sentinel System, originally developed as a distributed database for retrospective post-market safety surveillance, could support pragmatic clinical trials embedded in health insurance companies (health plans). As of 2019, Sentinel had 668 million person-years of data; more than 40 million individuals were enrolled in participating commercial health plans and eligible for intervention studies. Health plans’ ability to engage with their members/patients and providers places them in a unique position to apply interventions and generate real-world evidence. Use of the Sentinel infrastructure made this trial extremely efficient. The total cost for this trial was approximately $5.5 million.

Launched in 2017, the IMPACT-AFib trial tackled a critical public health issue that could be addressed through educational outreach. Atrial fibrillation is a common arrhythmia with a prevalence of 1% overall in the US, and incident stroke occurs among those with the condition at 4–5% per year.3,4 Oral anticoagulation decreases the rate of stroke for patients with atrial fibrillation by more than two-thirds.5 Although oral anticoagulation is well established as an effective therapy, during the trial’s planning phase we found that only 33% of patients with atrial fibrillation and a high risk of stroke had evidence of using oral anticoagulants one year after diagnosis.6 The IMPACT-AFib intervention consisted of a one-time mailing distributed to eligible members/patients and/or their providers, consistent with the way health plans might normally engage their members and providers, and one that health plans could potentially incorporate if successful. The trial’s overall design is graphically displayed in Figure 1. The trial rationale and design have been described in detail elsewhere.2

Figure 1. A schematic diagram of the IMPACT-AFib trial.

Sentinel operates by partnering with organizations that each transform their internal data into analysis-ready, quality-checked datasets that conform to the Sentinel Common Data Model. In its routine operations, Sentinel performs analyses by distributing standardized programs that each site executes. The programs’ output typically consists of aggregated results, such as counts or distributions. For this trial, we used such distributed programs to identify eligible individuals in each participating site, to randomize them to the early or delayed intervention groups, and to ultimately examine the impact of the intervention. Here we describe both data-related and operational issues, some anticipated and others not.
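
The trial’s actual distributed programs were SAS-based; purely as an illustration of this distributed model (with table and column names of our own choosing, not Sentinel’s), a minimal sketch of a site-level program that returns only aggregate output might look like this:

```python
import pandas as pd

def run_site_program(diagnosis: pd.DataFrame) -> pd.DataFrame:
    """Run the same standardized program at every site against its local
    Common Data Model table and return only aggregate output (distinct
    patient counts by year), never patient-level records."""
    af_codes = {"427.31", "I48.0", "I48.1", "I48.2", "I48.91"}  # illustrative subset
    af = diagnosis[diagnosis["DX"].isin(af_codes)].copy()
    af["year"] = pd.to_datetime(af["ADate"]).dt.year
    out = (af.groupby("year")["PatID"].nunique()
             .rename("n_patients").reset_index())
    return out  # this aggregate table is what a site would return

# Toy data standing in for one site's local diagnosis table:
diagnosis = pd.DataFrame({
    "PatID": ["a", "a", "b", "c"],
    "ADate": ["2017-01-05", "2017-03-02", "2017-06-30", "2018-02-11"],
    "DX":    ["I48.91", "I48.91", "427.31", "I10"],
})
print(run_site_program(diagnosis))
```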

Methods

Trial methods overview

In brief, during the planning phase of IMPACT-AFib, the steering committee developed the trial protocol and intervention materials collaboratively. The steering committee comprised representatives from the five participating sites (health plans and affiliated partners with geographically distributed members), a patient representative, and representatives from the FDA, the Clinical Trials Transformation Initiative, the study coordinating center (Harvard Pilgrim Health Care Institute), and the statistical coordinating center (Duke Clinical Research Institute). All sites delegated human subjects oversight to a central institutional review board. In Spring-Summer 2017, each site received and executed a SAS-based (SAS Institute Inc., Cary, NC) distributed program developed at the Harvard Pilgrim Health Care Institute. As previously described, we identified and randomized patients with atrial fibrillation and stroke risk factors prior to assessing their eligibility based on treatment status, which was assessed closer to the mailing – an approach adopted in response to a concern that the study team had an ethical obligation to ultimately notify all patients identified as not being on clinically recommended treatment. The ethical considerations of our study are described elsewhere.7 Patients were thus randomized to an early or a delayed intervention group: for the primary analysis, which assesses the effect of the mailings at 12 months of follow-up, the early intervention group is the exposed arm and the delayed intervention group serves as the control arm. After the 12-month follow-up period, we sent the providers of those in the delayed – or control – arm an educational mailing as well (i.e., the delayed intervention). (There were a few deviations in the mailing details, which are described in the section Lessons learned during implementation.)
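
As an illustration only (the production programs were SAS-based, and the identifiers and seed below are hypothetical), a minimal sketch of the per-site 1:1 randomization step might look like this:

```python
import numpy as np
import pandas as pd

def randomize_1to1(member_ids, seed=20170601):
    """1:1 randomization of identified members to the early or delayed arm,
    done before treatment-status eligibility is assessed (eligibility is
    re-checked with fresh data closer to each mailing)."""
    rng = np.random.default_rng(seed)
    ids = np.asarray(list(member_ids))
    order = rng.permutation(len(ids))
    arm = np.empty(len(ids), dtype=object)
    arm[order[: len(ids) // 2]] = "early"
    arm[order[len(ids) // 2:]] = "delayed"
    return pd.DataFrame({"PatID": ids, "arm": arm})

print(randomize_1to1(["m01", "m02", "m03", "m04", "m05", "m06"]))
```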

Prior to the early intervention mailing, sites excluded those who had evidence of prior anticoagulant treatment, those who had opted out of non-essential health plan contact, those who transitioned to a plan that does not allow for inclusion in research, and those who were no longer enrolled at the health plan. All records were frozen so that the project team could return a year later and identify the comparison group from among those randomized to the delayed intervention and subsequently assess their treatment status and enrollment prior to the delayed mailing. (Because the analysis was conducted to assess endpoints after one year of follow up, which is prior to this delayed mailing, the early intervention was compared to no intervention.)

The first program sent to sites identified health plan members in the Sentinel Distributed Database who had atrial fibrillation (defined by occurrence of two International Classification of Diseases, Ninth Revision, ICD-9, or Tenth Revision, ICD-10, diagnosis codes on separate encounters within the prior year), a CHA2DS2-VASc score8 of two or more, which is the American Heart Association/American College of Cardiology guideline indication for oral anticoagulants, and no recent bleeding history. They were randomized to either an early or delayed intervention group. For those in the early intervention group, we excluded individuals with evidence of an oral anticoagulant dispensing (via National Drug Codes) or use of international normalized ratio (INR) tests (via one procedure code describing anticoagulation management, or four INR test claims or test results). Because of warfarin’s low cost, we expected that its use would not be fully captured in claims data, since patients could pay out-of-pocket; evidence of INR testing therefore served as a proxy for treatment, as those treated with warfarin are monitored with INR tests (discussed further below). Sites utilized “fresh” enrollment and pharmacy dispensing data, i.e., data from events occurring after the site’s most recent update to the Sentinel Distributed Database, to exclude those with a more recent anticoagulant dispensing or health plan disenrollment between randomization and mailing. The application of fresh data was necessary because the routine data in Sentinel are intentionally lagged by a few months to ensure all claims are settled (i.e., closed) prior to use, while pharmacy and enrollment data can be used within a few weeks of dates of service. Sites created a crosswalk between the de-identified member information from the Sentinel Distributed Database (members and providers are captured in the Sentinel Distributed Database with masked identifiers) and their internal records to identify the contact information of the members and their providers for the one-time mailing. Each site contracted with its own vendor to mail the intervention materials.
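
To make the cohort logic concrete, the sketch below illustrates the two-encounter atrial fibrillation rule and standard CHA2DS2-VASc scoring; it is not the trial’s SAS code, and the code subset and column names are ours.

```python
from datetime import date, timedelta
import pandas as pd

# Illustrative code subset; the trial's full code lists (>16,000 codes) are online.9
AF_CODES = {"427.31", "I48.0", "I48.1", "I48.2", "I48.91"}

def qualifying_af_patients(dx: pd.DataFrame, index_date: date) -> set:
    """Patients with AF diagnosis codes on two or more separate encounter
    dates in the 365 days before index_date (columns: PatID, ADate, DX)."""
    d = dx.copy()
    d["ADate"] = pd.to_datetime(d["ADate"]).dt.date
    in_window = d[d["DX"].isin(AF_CODES)
                  & (d["ADate"] < index_date)
                  & (d["ADate"] >= index_date - timedelta(days=365))]
    dates_per_patient = in_window.groupby("PatID")["ADate"].nunique()
    return set(dates_per_patient[dates_per_patient >= 2].index)

def cha2ds2_vasc(age: int, female: bool, chf: bool, htn: bool,
                 diabetes: bool, stroke_tia: bool, vascular: bool) -> int:
    """Standard CHA2DS2-VASc scoring; in the trial the condition flags were
    derived from claims-based diagnosis code lists."""
    score = (1 if chf else 0) + (1 if htn else 0) + (1 if diabetes else 0)
    score += 2 if stroke_tia else 0
    score += 1 if vascular else 0
    score += 2 if age >= 75 else (1 if 65 <= age <= 74 else 0)
    score += 1 if female else 0
    return score

# A member is eligible for randomization with >=2 qualifying AF encounters,
# a CHA2DS2-VASc score of 2 or more, and no recent bleeding (not shown).
```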

A similar process was conducted for those randomized to delayed intervention. About one year after the early intervention mailing, the study team executed a second distributed program supplied by the coordinating center to identify members eligible for the delayed intervention – those still enrolled and with no anticoagulant use in the previous year. Sites again relied on fresh enrollment and dispensing data to exclude those with a more recent anticoagulant dispensing or health plan disenrollment prior to the delayed mailing, and each site’s vendor mailed the materials to providers of the identified members.

With each distributed program, sites returned de-identified aggregate results to the coordinating center. Sites also provided aggregated tracking information and counts related to the mailings. Sites maintained their individual-level data in their local Sentinel Distributed Database as well as a study-specific dataset containing member and provider identifiers amongst other data – functionally, this was a line list of eligible members and their providers, when mailings occurred, details of the mailing (e.g., whether an alternate provider was identified), key identifiers, and enrollment details. The coordinating center, with guidance from each site’s principal investigator and lead programmer, evaluated output provided by sites to ensure the programs ran successfully and standard operating procedures were followed.
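
A hypothetical record layout for such a line list (the field names are ours, not the trial’s) might look like this:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class MailingLineListRecord:
    """Hypothetical layout for the study-specific line list each site kept
    behind its firewall, alongside its Sentinel Distributed Database tables."""
    masked_member_id: str              # randomized ID used in distributed programs
    internal_member_id: str            # site's own identifier, never shared
    arm: str                           # "early" or "delayed"
    masked_provider_id: Optional[str]
    alternate_provider_used: bool      # e.g., when the claims "provider" was a facility
    member_mail_date: Optional[date]
    provider_mail_date: Optional[date]
    index_date: date                   # start of follow-up for the analysis
    enrolled_at_mailing: bool
```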

Results

Lessons learned during planning - review

During the planning phase we highlighted the importance of appropriate and substantial trial preparation.6

Knowledgeable site-based investigators as well as patient, provider, and health system leadership engagement are essential.

Inclusion of site-based investigators from the very start of the feasibility work was essential in embedding a trial within health systems’ environments. They brought essential expertise regarding the existence, availability, interpretation of, and access to the electronic health data that were the foundation of this study. Although much of the work of this study used the health plans’ curated Sentinel Distributed Datasets, it was necessary to combine these with a wide array of health plan-specific operational data systems. These individuals were able to access and work with these data behind institutional firewalls – these were essential tasks that could not have been performed by external investigators. Additionally, these individuals served as internal champions for the research and were instrumental in ensuring the proposed protocol could be executed. We also included a patient representative from the beginning, and her insights proved invaluable with respect to cohort inclusion criteria, the content of intervention material, and engagement of the patient community. Patient and provider focus groups also played central roles in development of the materials. Working with five sites added complexity and time – for example, the protocol and intervention materials were revised numerous times as they required unanimous approval by health plan leaders and communication officers at all sites.

Be prepared to address unique ethical concerns for trials in clinical practice.

Ethical concerns were raised because of the possibility that patients for whom treatment was recommended by national guidelines were unaware of their condition or the recommendation for treatment. We were able to address this ethical issue, but the resulting study design yielded a more complex analysis to account for assessing certain eligibility criteria after randomization.7

Be prepared to address concerns that an intervention might negatively affect an organization’s quality rating.

To assure health plan leaders that it was appropriate to include Medicare Advantage beneficiaries in the trial, we obtained a letter of support from the Centers for Medicare and Medicaid Services, a measure which may be difficult to replicate.

Accommodate organizations’ normal workflows.

We constructed our initial timeline to follow open enrollment season, both to lessen the demands on operations and to avoid the annual disenrollments in December as employers change health insurance carriers.

Understand that organizations may change practice without notice.

We identified two instances in which a program focused on anticoagulation treatment was offered to some patients who would otherwise have been included in the trial population. Close contact with site-based investigators is crucial to identify initiatives like this and to accommodate them via changes to the protocol or analysis plan.

Lessons learned during implementation

Although the study team was able to anticipate and prepare for the challenges previously identified during planning, unexpected issues still arose during trial implementation. The resulting deviations were recorded and addressed by the study team. Below we summarize the major challenges we encountered that will be applicable to pragmatic trials that are embedded in health plan-based populations, those that utilize administrative claims data, and/or those that use electronic health data in general. While these lessons are descriptive in nature, the illustrative examples can serve as a guide for future researchers.

Even limited site-specific programming substantially increased time and effort requirements.

Certain steps in the protocol had to be conducted independently by each site due to the uniqueness of their data environments. Each site conducted their fresh enrollment and dispensing data extractions in a slightly different manner, resulting in the need for new quality assurance checking processes prior to the use of these datasets. These quality assurance checks were a combination of distributed code developed to mirror select lower level checks in the normal Sentinel System refresh process, plus manual review by the study team, since not all checks could be automated. Sites’ programming teams investigated anomalies identified by the study team based on the coordinating center’s understanding of each site’s most recent version of their data in the Sentinel Distributed Database. This process, while invaluable, expanded the timeline and placed an unexpected burden on all involved.

Similarly, while the distributed programs generated patient and provider line lists of potentially eligible members using randomized identifiers, sites needed to manually update their line lists with mailing details for the early and delayed intervention (actual identifiers, contact information, dates, newer exclusions, etc.). As a result, sites ran frequencies and quality assurance checks to ensure the mailing datasets were maintained as expected. Future multisite studies like this one should employ single-source distributed programs to update and amend all datasets whenever possible, decreasing the likelihood of introducing errors and differences in interpretation. When site-specific programming or manual dataset maintenance is required, the work should be developed and reviewed with participating sites along the way, and the timeline should allow for it.
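
For illustration, checks of this kind (with hypothetical field names and expected counts) might resemble the following sketch:

```python
import pandas as pd

def qa_check_mailing_dataset(line_list: pd.DataFrame,
                             expected_counts: dict) -> list:
    """Simple frequency/completeness checks of the kind run after manual
    updates to a site's mailing line list: required fields populated and
    per-arm counts matching the distributed program's output."""
    problems = []
    for col in ["PatID", "arm", "index_date"]:
        n_missing = line_list[col].isna().sum()
        if n_missing:
            problems.append(f"{n_missing} missing values in {col}")
    observed = line_list["arm"].value_counts().to_dict()
    for arm, expected in expected_counts.items():
        if observed.get(arm, 0) != expected:
            problems.append(f"arm={arm}: expected {expected}, found {observed.get(arm, 0)}")
    return problems
```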

“Fresh” claims data may be needed.

Fresh dispensing data were needed to identify the ~3% of apparently eligible patients who had initiated anticoagulation after the most recent update to the site’s Sentinel Distributed Database. This was true for both the early and delayed intervention mailings (sites ranged from 2% to 7%). Fresh, up-to-date enrollment data were especially important: via these files we identified thousands of individuals who were no longer enrolled with the required coverage or had changed employers. As noted below, high turnover must be anticipated and accounted for in the implementation. While the need to quality check these fresh data affected the timeline and staff workload, the importance of these data was and remains apparent.
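
As a sketch of how fresh extracts can be applied (column names and structure are assumed here, not those of the trial’s actual SAS programs):

```python
import pandas as pd

def exclude_with_fresh_data(line_list: pd.DataFrame,
                            fresh_dispensings: pd.DataFrame,
                            fresh_enrollment: pd.DataFrame,
                            mailing_date) -> pd.DataFrame:
    """Drop members with an anticoagulant dispensing recorded after the last
    Sentinel refresh, or without the required coverage on the planned mailing
    date. Column names (PatID, Enr_Start, Enr_End) are illustrative."""
    mailing_date = pd.Timestamp(mailing_date)
    recently_treated = set(fresh_dispensings["PatID"].unique())
    enr = fresh_enrollment.copy()
    enr["Enr_Start"] = pd.to_datetime(enr["Enr_Start"])
    enr["Enr_End"] = pd.to_datetime(enr["Enr_End"])
    active = enr[(enr["Enr_Start"] <= mailing_date) & (enr["Enr_End"] >= mailing_date)]
    still_enrolled = set(active["PatID"].unique())
    keep = line_list["PatID"].isin(still_enrolled) & ~line_list["PatID"].isin(recently_treated)
    return line_list[keep]
```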

Approximate use of medications not captured in pharmacy claims.

As noted above, we anticipated incomplete capture of warfarin use in the claims data; warfarin is relatively inexpensive, so patients may choose to pay out-of-pocket even when insured. Therefore, we used INR procedure claims and test results as a proxy for warfarin treatment. Preliminary evaluation at one site indicated that the median number of INR tests for people known to be on warfarin was 10 per year and about two-thirds of those who received warfarin had at least four INRs. Including documentation of INRs as evidence of anticoagulation treatment increased the number of apparent warfarin recipients excluded from the early or delayed intervention by 2–3% (sites ranged 1–5%). While this is a small number of additional members, we did not know this in advance and wanted to ensure we did not contact patients already receiving treatment. We will have missed any warfarin initiators who had neither dispensing records nor the specified INR claims/results, but these would have been equally distributed between the early and delayed intervention arms and so should have a negligible effect on the study results.
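
A minimal sketch of the proxy rule, using placeholder codes rather than the trial’s code list, might look like this:

```python
import pandas as pd

def warfarin_proxy_flag(procedures: pd.DataFrame) -> pd.Series:
    """Per-patient flag for the trial's INR-based proxy: at least one
    anticoagulation-management procedure claim OR at least four INR test
    claims/results in the lookback window. Codes shown are placeholders,
    not the trial's code list (columns: PatID, PX)."""
    MGMT_CODES = {"ANTICOAG_MGMT"}   # placeholder for anticoagulation-management codes
    INR_CODES = {"INR_TEST"}         # placeholder for prothrombin time / INR codes
    mgmt = procedures[procedures["PX"].isin(MGMT_CODES)].groupby("PatID").size()
    inr = procedures[procedures["PX"].isin(INR_CODES)].groupby("PatID").size()
    flag = pd.Series(False, index=procedures["PatID"].unique())
    flag[mgmt[mgmt >= 1].index] = True
    flag[inr[inr >= 4].index] = True
    return flag
```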

The impact of clustering of patients by provider should be considered.

Because the trial was randomized at the individual patient level, clustering within a provider was a concern (i.e., one provider might have multiple patients in both the early and delayed arms). We decided to randomize at the individual patient level because an evaluation of source data at one site during the planning phase revealed that the vast majority of providers had only one eligible patient. However, we found that identifying unique providers is challenging since one clinician can have numerous identifiers. Clinicians may have multiple provider identifiers within and between health plans. Provider identifiers are created and randomized independently at the individual sites, and within a health plan multiplicity can occur when, for example, a clinician has multiple specialties or locations – each location and/or specialty could have a different provider identifier. At least partial deduplication of providers is technically feasible, but the workload is considerable. Therefore, our ability to account for clustering analytically is limited. To avoid these analytic complications and limitations, subsequent trials could consider cluster randomization at a geographic-area level (e.g., metropolitan statistical area) if preliminary information suggests that contamination across intervention groups could bias the study.
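
For illustration, a simple check of the single-patient-per-provider assumption (with assumed column names) could be sketched as:

```python
import pandas as pd

def patients_per_provider(line_list: pd.DataFrame) -> pd.Series:
    """Distribution of eligible patients per provider identifier, used to check
    the planning-phase assumption that most providers have one eligible patient.
    Because one clinician may carry several identifiers, this understates true
    clustering."""
    per_provider = line_list.groupby("masked_provider_id")["PatID"].nunique()
    return per_provider.value_counts().sort_index()  # providers with 1, 2, 3, ... patients

example = pd.DataFrame({"masked_provider_id": ["p1", "p1", "p2", "p3"],
                        "PatID": ["a", "b", "c", "d"]})
print(patients_per_provider(example))  # 1 patient: 2 providers; 2 patients: 1 provider
```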

Linking patients to their providers can be challenging.

The protocol aimed to identify the provider associated with each patient’s most recent atrial fibrillation diagnosis as the target for mailing. When each site mapped the provider identifier in the Sentinel Distributed Database to their source data with provider contact information, only 63% of the provider identifiers mapped to an individual provider – proportions varied by site (57–80%). For the remaining 37%, the provider identifier mapped to a physician group, facility, or institution. This complication led us to split the early intervention into two waves, extending the launch over six months: patients in wave 1, those with an easily identifiable individual provider, were readily mailed, while sites determined an alternate recipient for patients in wave 2, those whose originally identified “provider” was a facility. The wave 2 mailing took several months as the study team developed a revised plan before resuming mailing. Because the intent was to conduct both a member and provider mailing for the early intervention, sites identified and contacted an alternate provider where possible (primary care provider, cardiologist, or other individual provider on a recent claim) or conducted a patient-only mailing when necessary. Each health plan chose its approach depending on its internal requirements and preferences. Sensitivity analyses will be added to the trial’s statistical analysis plan to address variation in mailing during the early intervention (i.e., provider and member mailings and member-only mailing, separately). Relatedly, because of the differential start of follow-up between the wave 1 and wave 2 groups, patients randomized to the delayed intervention group were assigned index dates that corresponded to the date on which they would have been included in a mailing had they been randomized to the early intervention arm. We used the provider information available at the time of randomization to make these assignments. The sites documented the index dates by individual member in their line lists, and the analytic program references these dates and flags.
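
A minimal sketch of this index-date assignment, under assumed column names, might look like this:

```python
import pandas as pd

def assign_delayed_index_dates(delayed_arm: pd.DataFrame,
                               wave1_date, wave2_date) -> pd.DataFrame:
    """Assign each delayed-arm member the index date they would have had in the
    early arm: the wave 1 date if the provider on the most recent AF claim
    mapped to an individual clinician, otherwise the wave 2 date. The boolean
    column name is illustrative."""
    out = delayed_arm.copy()
    out["index_date"] = pd.Timestamp(wave2_date)
    out.loc[out["provider_maps_to_individual"], "index_date"] = pd.Timestamp(wave1_date)
    return out
```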

Loss to follow up due to disenrollment or change in benefit status.

People frequently change health plans. We observed attrition between randomization and ascertainment of treatment status, between re-evaluating eligibility status and each mailing, and between the initial mailing and the trial’s end date. Health plan disenrollment and other contracting changes (e.g., pharmacy benefit changes; members switching to Administrative Services Only status) resulted in substantial loss of subjects before mailings and censoring afterward. Health plans expected a member disenrollment rate of 1–2% per month, or 20% turnover year-to-year. At sites where the drop off was most notable, we discovered that these health plans had experienced recent pharmacy carve outs, when employers separate their prescription drug benefits from their major medical plans by contracting directly with pharmacy benefit managers, resulting in patients lacking pharmacy coverage in the data used for the trial and therefore no longer being eligible for the trial. Those who remained in the study cohort had an average enrollment coverage of 3–9 years. Censoring due to loss of medical and/or pharmacy coverage will be a persistent problem for trials that do not conduct outreach to supplement routinely collected electronic health record or claims data. We note, however, that use of medical and pharmacy claims data is typically complete during the period an individual has coverage. This avoids a problem that practice-based trials regularly confront, of identifying care delivered by other providers.
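
As an illustration of how such censoring might be computed (a sketch, not the trial’s analytic code):

```python
import pandas as pd

def follow_up_end(index_date, disenroll_date, study_end, max_days=365):
    """Censor follow-up at the earliest of: loss of required coverage,
    12 months (365 days) after the member's index date, or the study end date.
    Dates may be strings or datetimes; disenroll_date may be None."""
    candidates = [pd.Timestamp(index_date) + pd.Timedelta(days=max_days),
                  pd.Timestamp(study_end)]
    if disenroll_date is not None:
        candidates.append(pd.Timestamp(disenroll_date))
    return min(candidates)

print(follow_up_end("2017-10-01", "2018-01-15", "2019-03-31"))  # censored at disenrollment
```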

Assessing eligibility after randomization impacts the statistical analytic plan.

While randomizing patients with atrial fibrillation at high risk of stroke and subsequently assessing their treatment status addressed the ethical concerns that had been raised,7 this approach had large implications for the analysis of the trial. Conducting a traditional as-randomized analysis no longer made sense since a majority of the randomized individuals were on treatment and therefore not eligible. Our primary analysis became what we termed a modified intention-to-treat analysis and required careful programming to implement. In brief, the primary analysis included all early intervention patients who met eligibility and were mailed the intervention (waves 1 and 2); using the index date assignments described above, all identified delayed intervention patients who met eligibility at the time of the corresponding early mailing dates were included.
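
A minimal sketch of the cohort construction, with assumed flag columns standing in for the full eligibility logic, might look like this:

```python
import pandas as pd

def modified_itt_cohort(line_list: pd.DataFrame) -> pd.DataFrame:
    """Modified intention-to-treat cohort: early-arm members who met eligibility
    and were mailed the intervention (waves 1 and 2), plus delayed-arm members
    who met the same eligibility criteria at their assigned index date.
    The boolean columns stand in for the full eligibility logic."""
    early = line_list[(line_list["arm"] == "early") & line_list["mailed"]]
    delayed = line_list[(line_list["arm"] == "delayed")
                        & line_list["eligible_at_index_date"]]
    return pd.concat([early, delayed], ignore_index=True)
```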

Code lists can be large.

The transition from ICD-9 to ICD-10 diagnosis codes means many more codes are typically applicable. The practical implications for trials utilizing electronic health data are sizable. To calculate the stroke risk score and identify underlying conditions of interest, we generated an initial list of over 16,000 codes. This list required labor-intensive review by an expert clinician and a pharmacist to ensure we included only appropriate codes; the final list used for cohort identification is available online.9

Patient identifiers change over time.

In US claims data, patient identifiers may change with time, even within the records of a single health plan. To minimize the loss of patient follow-up when these changes occurred in this study, each site manually created a crosswalk between the original and new patient identifiers.
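
A minimal sketch of applying such a crosswalk (assuming a simple old-to-new identifier mapping) might look like this:

```python
import pandas as pd

def apply_id_crosswalk(events: pd.DataFrame, crosswalk: dict) -> pd.DataFrame:
    """Map superseded member identifiers to their current identifier so that
    follow-up is not lost when a plan reissues IDs; IDs without an entry in
    the crosswalk are left unchanged."""
    out = events.copy()
    out["PatID"] = out["PatID"].map(lambda pid: crosswalk.get(pid, pid))
    return out

events = pd.DataFrame({"PatID": ["old123", "x9"], "ADate": ["2018-02-01", "2018-03-05"]})
print(apply_id_crosswalk(events, {"old123": "new456"}))
```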

Continuity of investigator teams is essential.

We emphasize the value of continuity of the participating sites’ lead investigators and programming teams, as they maintained historical knowledge regarding trial implementation and decision-making that is difficult to convey through documentation alone. These individuals were already well versed in their internal data environments as well as the Sentinel System, and they proved instrumental in addressing the challenges noted above and in describing limitations when course corrections could not be applied.

Conclusion

Here, and in our prior report, we have described the major practical lessons learned throughout our work on a highly pragmatic, multisite, randomized clinical trial that relied entirely on routinely collected administrative and pharmacy dispensing data – real-world data. We believe these observations can inform future embedded pragmatic trials. As pragmatic trials begin to shape evidence generation within clinical practice, investigators should anticipate similar concerns, as many of these challenges can be planned for and accommodated. The utility of electronic health data for pragmatic trials is evident and has the potential to yield robust, meaningful evidence for real-world applications.

For IMPACT-AFib, we invested in and committed to collaboration and communication amongst all parties throughout all phases of the study. This ensured common understanding, early identification of challenges, and streamlined study implementation. Centralization of programming and the ability to build on a large, curated dataset that was already standardized across the participating organizations were essential to the study’s success. However, multi-center embedded pragmatic clinical trials like this one also need to address important additional challenges regarding study design, implementation, and analysis. The main tenets of collaboration, coordination, and communication still apply and their importance should not be overlooked.

Acknowledgements

We thank the following colleagues for their contributions to this work: Smita Bhatia, Rong Chen Tilney, Meighan Rogers Driscoll, April DuCott, Thomas Harkins, Robert Jin, Laura Karslake, Annemarie Kline, Eva Ng, Sonali Shambhu, Jennifer Song, Judy Wong, and Yunping Zhou.

Funding

The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was funded by the US FDA through the Department of Health and Human Services [contract number HHSF223201400030I].

Footnotes

Declaration of conflicting interests

The author(s) declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: S.D. Pokorney receives research grant support from the Food and Drug Administration, Bristol-Myers Squibb, Pfizer, Janssen Pharmaceuticals, Boston Scientific, and Gilead; advisory board/consulting support from Bristol-Myers Squibb, Pfizer, Boston Scientific, Medtronic, Zoll, and Portola; DSMB support from Milestone Pharmaceuticals. D. McCall reports honoraria for advice or public speaking from SentreHeart (US$3000 speaker’s fee and travel expenses) and US$2500 annual stipend from Duke Clinical Research Institute. C.B Granger reports consultancies and honoraria for advice or public speaking from Boston Scientific Corp., Bayer Corp., Boehringer Ingelheim, Daiichi Sankyo Co., Janssen Pharmaceutica Products, and Pfizer; grants received/pending from Bayer Corp., Boehringer Ingelheim, Bristol-Myers Squibb, Daiichi Sankyo Co., Janssen Pharmaceutica Products, and Pfizer; service on an advisory board for Boehringer Ingelheim, Pfizer, Daiichi Sankyo Co., and Janssen Pharmaceutica Products; medical education funding from Boston Scientific Corp., Bayer Corp., Boehringer Ingelheim, Daiichi Sankyo Co., Bristol-Myers Squibb, Janssen Pharmaceutica Products, and Pfizer. All other authors declare that there is no conflict of interest.

References

1. Platt R, Brown JS, Robb M, et al. The FDA Sentinel Initiative — An Evolving National Resource. New England Journal of Medicine. 2018;379:2091–2093.
2. Pokorney SD, Cocoros N, Al-Khalidi HR, Haynes K, Al-Khatib SM, Garcia C, Goldsack J, Hamre G, Harkins T, Jin R, Knecht D, Lane D, Levenson M, Lin ND, Martin D, McCall D, McMahill-Walraven C, Nair V, O’Brien EC, Parlett L, Rymer J, Saliga R, Temple R, Zhang R, Zhou Y, Platt R, Granger B. IMplementation of a randomized controlled trial to imProve treatment with oral AntiCoagulanTs in patients with Atrial Fibrillation (IMPACT-AFib): Rationale and design. American Heart Journal. Under review.
3. Feinberg WM, Blackshear JL, Laupacis A, Kronmal R, Hart RG. Prevalence, age distribution, and gender of patients with atrial fibrillation: analysis and implications. Archives of Internal Medicine. 1995;155(5):469.
4. Hart RG, Pearce LA. Current status of stroke risk stratification in patients with atrial fibrillation. Stroke. 2009;40(7):2607–2610.
5. Wilke T, Groth A, Mueller S, et al. Oral anticoagulation use by patients with atrial fibrillation in Germany. Adherence to guidelines, causes of anticoagulation under-use and its clinical outcomes, based on claims-data of 183,448 patients. Thromb Haemost. 2012;107(6):1053–1065.
6. Cocoros NM, Pokorney SD, Haynes K, et al. FDA-Catalyst-Using FDA’s Sentinel Initiative for large-scale pragmatic randomized trials: Approach and lessons learned during the planning phase of the first trial. Clinical Trials. 2019;16:90–97.
7. Sabin JE, Cocoros NM, Garcia CJ, Goldsack JC, Haynes K, Lin ND, McCall D, Nair V, Pokorney SD, McMahill-Walraven C, Granger CB, Platt R. Bystander ethics and good samaritanism – A paradox for learning health organizations. Hastings Center Report. 2019;49(4):18–26.
8. American Heart Association Treatment Guidelines of Atrial Fibrillation. http://www.heart.org/HEARTORG/Conditions/Arrhythmia/AboutArrhythmia/Treatment-Guidelines-of-Atrial-Fibrillation-AFib-or-AF_UCM_423779_Article.jsp#.WrKcEmrwbCM (2017, accessed 24 Sept. 2019).
9. List of International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM Diagnosis and Procedure), International Classification of Diseases, Tenth Revision, Clinical Modification and Procedure Coding System (ICD-10-CM and ICD-10-PCS), Current Procedural Terminology, Second and Fourth Edition (CPT-2 and CPT-4), and Healthcare Common Procedure Coding System (HCPCS) Codes Used in the Cohort Identification for IMPACT-AFib. https://www.sentinelinitiative.org/sites/default/files/IMPACT-AFib_Inclusion_Exclusion_Code_Lists.pdf (2019, accessed 14 Dec. 2019).
