Abstract
Objectives
To support development of a robust postmarket device evaluation system using real-world data (RWD) from electronic health records (EHRs) and other sources, employing unique device identifiers (UDIs) to link to device information.
Methods
To create consistent device-related EHR RWD across 3 institutions, we established a distributed data network and created UDI-enriched research databases (UDIRs) employing a common data model comprising 24 tables and 472 fields. To test the system, patients receiving coronary stents between 2010 and 2019 were loaded into each institution’s UDIR to support distributed queries without sharing identifiable patient information. The ability of the system to execute queries was tested with 3 quality assurance checks. To demonstrate face validity of the data, a retrospective survival study of patients receiving zotarolimus or everolimus stents from 2012 to 2017 was performed using distributed analysis. Propensity score matching was used to compare risk of 6 cardiovascular outcomes within 12 months postimplantation.
Results
The test queries established network functionality. In the analysis, we identified 9141 patients (Mercy = 4905, Geisinger = 4109, Intermountain = 127); mean age 65 ± 12 years, 69% males, 23% zotarolimus. Separate matched analyses at the 3 institutions showed hazard ratio estimates (zotarolimus vs everolimus) of 0.85–1.59 for subsequent percutaneous coronary intervention (P = .14–.52), 1.06–2.03 for death (P = .16–.78), and 0.94–1.40 for the composite endpoint (P = .37–.62).
Discussion
The analysis results are consistent with clinical studies comparing these devices.
Conclusion
This project shows that multi-institutional data networks can provide clinically relevant real-world evidence via distributed analysis while maintaining data privacy.
Keywords: medical device safety, unique device identifier, privacy protection, real-world evidence, drug-eluting stents
INTRODUCTION
Current medical device surveillance systems in the United States have been criticized for inadequacies in timely dissemination of accurate, up-to-date information about device performance, recalls, and other potential safety issues.1–4 The Food and Drug Administration (FDA), responsible for activities that protect public health, including postmarket surveillance of medical devices, has historically used adverse event reporting as the primary mechanism for accomplishing that goal. To advance device traceability, implant identification, adverse event reporting, recall management, and other aspects of surveillance beyond adverse event reporting, the FDA in 2013 published the unique device identification (UDI) System Final Rule, which required manufacturers to label marketed devices with a UDI. A UDI identifies the manufacturer and model (device identifier/DI) as well as the lot number, serial number, and/or expiration date (production identifier/PI).5 The goal has been for UDIs to be electronically documented in health information systems, transmitted in adverse event reports, used in recalls, and utilized broadly in postmarket surveillance activities. However, without a mandate for hospitals, providers, and insurers to incorporate the UDI into information systems including electronic health records (EHRs) and claims systems, application of the UDI remains limited.
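For readers unfamiliar with the DI/PI structure, the short sketch below separates a GS1-format UDI string (GS1 being one of the FDA-accredited issuing agencies) into its device identifier and production identifier elements. The example string, the mapping of application identifiers, and the function name are illustrative only and are not part of any BUILD system component.

```python
import re

# Hypothetical example: split a GS1-format UDI (human-readable form with
# parenthesized application identifiers) into its device identifier (DI) and
# production identifier (PI) parts. UDIs issued under the HIBCC or ICCBBA
# formats use different delimiters and would need different parsing.
GS1_AI_NAMES = {
    "01": "device_identifier",  # GTIN-14; the DI (manufacturer and model)
    "17": "expiration_date",    # YYMMDD; part of the PI
    "10": "lot_number",         # part of the PI
    "21": "serial_number",      # part of the PI
}

def parse_gs1_udi(udi: str) -> dict:
    """Return the labeled elements of a GS1 UDI string such as '(01)...(17)...(10)...'."""
    return {
        GS1_AI_NAMES.get(ai, f"ai_{ai}"): value
        for ai, value in re.findall(r"\((\d{2})\)([^(]+)", udi)
    }

if __name__ == "__main__":
    example = "(01)00812345678901(17)261231(10)LOT42(21)SN0099"  # made-up values
    print(parse_gs1_udi(example))
    # {'device_identifier': '00812345678901', 'expiration_date': '261231',
    #  'lot_number': 'LOT42', 'serial_number': 'SN0099'}
```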
On the pharmaceutical side, the FDA launched its Sentinel Initiative in 2008 as a distributed data network (DDN) of curated electronic health data covering over 100 million people to assess potential drug safety signals and provide evidence about efficacy.6 A key component of Sentinel pharmacy data is the National Drug Code, which is routinely captured in claims and enables linkage to granular drug data that supports pharmacovigilance and research using real-world data (RWD). The hope has been that UDI would play the same role in medical device evaluations. DDNs protect patient privacy by allowing data partners to maintain control over source data behind firewalls. Analyses are conducted after source data are transformed into a common data model (CDM) format; executable queries are then run against the CDM and return summary data for aggregation, without identifiable, individual patient-level data being exchanged. Congress intended Sentinel to include both drug and medical device information,7,8 but the current lack of device-identifying information in EHR and health insurance claims systems has effectively precluded device data from being incorporated.9
The Building UDI into Longitudinal Data for Medical Device Evaluation initiative (BUILD), initially funded by the FDA, was launched in 2015 to explore the use of UDI for the evaluation of medical device safety and effectiveness.10 BUILD included 2 projects. The first used qualitative methods to study the experiences of health systems that had implemented UDI for implantable devices in their cardiac catheterization labs (Cath Labs) or operating rooms. This project, which was advised by a multistakeholder panel (BUILD Consortium),11 resulted in a conceptual model and detailed recommendations to guide health systems in UDI implementation.12 The second project was to test the practicability and deliverability of a DDN model for device evaluation via a demonstration across multiple health systems. The CDM for this DDN would combine data from cardiovascular procedure systems, EHRs, payors, the FDA Global UDI Database (GUDID) available as AccessGUDID at the National Library of Medicine,13 and an augmented UDI database (AUDI), constructed from publicly available industry data.14
OBJECTIVES
In this article, we describe the design and implementation of the BUILD DDN and CDM and the use of UDIs to link to AccessGUDID and AUDI device information. We also present the results of the initial tests of the system. More details are available in the accompanying Supplementary Materials.
MATERIALS AND METHODS
Setting
This project was conducted at 3 health systems, Geisinger (Danville, PA), Intermountain Healthcare (Salt Lake City, UT), and Mercy Health (Chesterfield, MO). All are members of the Healthcare Transformation Group (HTG), a 6-system alliance that promotes the adoption of data standards in the healthcare supply chain.15 The project was an outgrowth of the HTG-supported Mercy UDI Demonstration project, funded by the FDA as part of the initial Medical Device Epidemiology Network (MDEpiNet) projects aimed at modernizing postmarket medical device evaluation.16–19 Additional detail on the Mercy UDI Demonstration project is available in the Supplementary Materials.
Design and implementation of healthcare system databases and the DDN
A workgroup of clinicians, researchers, and informaticists from the 3 systems was formed to develop governance and processes for the creation of the site UDI research databases (UDIRs) and the DDN. Modeled on other DDNs such as Sentinel and the Healthcare Systems Research Network (HCSRN) Virtual Data Warehouse (VDW),20 the BUILD DDN was designed to combine device, registry, payor, and patient data in a common format (the BUILD CDM). This federated approach enables the authoring of query code that can be run without modification at each participating site, with no individual-level patient data being shared. Figure 1 provides an illustration of the process by which a DDN query is completed (additional detail on the UDIR and DDN is available in Supplementary Materials). Because the devices selected for evaluation were coronary stents, the CDM design drew primarily on elements of the Sentinel CDM (which includes longitudinal EHR and payor data) and on cardiology-specific tables that closely followed the National Cardiovascular Data Registry (NCDR) CathPCI Registry’s® data content. In addition, the CDM included tables of publicly accessible data from AccessGUDID and from the supplemental AUDI database containing 9 stent-specific attributes.
Figure 1.
Illustration of how a query is completed in the BUILD distributed data network (DDN).
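As a rough illustration of this pattern (a sketch, not the BUILD query code itself), the example below shows a query function that runs unmodified against each site's local extract and returns only deidentified summary counts, which a coordinating center then combines; the column names and miniature site datasets are hypothetical.

```python
import pandas as pd

def local_query(udir_patients: pd.DataFrame) -> pd.DataFrame:
    """Runs inside a site's firewall; only aggregate counts leave the site."""
    return (udir_patients
            .groupby("stent_type")
            .agg(n_patients=("patient_id", "nunique"))
            .reset_index())

def coordinating_center(site_summaries: list) -> pd.DataFrame:
    """Combines the deidentified summaries returned by each site."""
    return (pd.concat(site_summaries)
            .groupby("stent_type", as_index=False)
            .sum())

# Hypothetical local extracts at 3 sites (column names are illustrative).
site_a = pd.DataFrame({"patient_id": [1, 2, 3], "stent_type": ["EES", "EES", "ZES"]})
site_b = pd.DataFrame({"patient_id": [4, 5], "stent_type": ["ZES", "EES"]})
site_c = pd.DataFrame({"patient_id": [6], "stent_type": ["EES"]})

print(coordinating_center([local_query(s) for s in (site_a, site_b, site_c)]))
# stent_type  n_patients
# EES         4
# ZES         2
```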
The workgroup reviewed every field from these sources and discussed definitions and feasibility of capturing the data uniformly across health systems. Another novel addition to this CDM was a table titled “Device” with 11 fields including UDI (divided into DI and PI), UDI format,21 and device disposition (eg, implanted, explanted, or not used). This table, though not currently part of Sentinel, was designed by the authors in consultation with the leaders of the Sentinel network22 to follow their naming conventions and be consistently linkable in the future with Sentinel fields for patient, encounter, and provider. The final design of the BUILD CDM consisted of 24 tables, summarized in Figure 2, containing a total of 472 fields (the BUILD CDM and AUDI database are included in the Supplementary Materials).
Figure 2.
BUILD common data model tables, grouped by domain.
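As a minimal sketch, a record in the Device table might be represented as below. Only the fields named in the text (DI, PI, UDI format, and disposition) come from the article; the remaining names are assumed stand-ins for the Sentinel-style linkage keys (patient, encounter, provider), and the authoritative 11-field specification is in the Supplementary Materials.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DeviceRecord:
    device_identifier: str                # DI portion of the UDI
    production_identifier: Optional[str]  # PI portion (lot, serial, and/or expiration)
    udi_format: str                       # issuing-agency format, eg "GS1"
    disposition: str                      # "implanted", "explanted", or "not used"
    patient_id: str                       # assumed linkage key to the patient table
    encounter_id: str                     # assumed linkage key to the encounter table
    provider_id: Optional[str] = None     # assumed linkage key to the provider table
```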
Implementation of the DDN
Next, each health system needed to populate its own UDIR database conforming to the BUILD CDM. Each health system had flexibility in how to implement the extraction, transformation, and loading of source data, following the general principles of accurately capturing stent patients, their devices, and longitudinal follow-up care. Mercy and Geisinger had been electronically capturing DI data since 2010, and Intermountain had captured these data since 2017, so these years were used as the starting dates for each health system’s retrospective data pull.
Subjects included in the databases were patients undergoing percutaneous coronary intervention (PCI) with stent implantation between 2010 and 2019 at any of the participating centers. Patients aged 18 years or older were included in the comparative analysis if they received an everolimus-eluting stent (EES) or zotarolimus-eluting stent (ZES), but not both, between 2012 and 2017, had an encounter at least 12 months before implantation, and had >90 days of follow-up or died in the first 90 days postimplantation. The index date was defined as the stent implantation date, and records from the 12 months preceding the index date were used to establish baseline demographics and disease history.
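The sketch below restates these inclusion rules as a filter over a hypothetical one-row-per-patient extract; all column names are assumptions, and the baseline-encounter and follow-up rules are simplified relative to the actual specification.

```python
import pandas as pd

def select_comparative_cohort(pts: pd.DataFrame) -> pd.DataFrame:
    """Apply the EES/ZES comparative-analysis inclusion rules (simplified sketch)."""
    one_stent_type = pts["got_ees"] ^ pts["got_zes"]            # EES or ZES, but not both
    adult = pts["age_at_index"] >= 18
    in_window = pts["index_date"].between(
        pd.Timestamp("2012-01-01"), pd.Timestamp("2017-12-31"))
    has_baseline = pts["days_of_history_before_index"] >= 365   # encounter >=12 months prior
    has_followup = (pts["days_of_followup"] > 90) | pts["died_within_90_days"]
    return pts[one_stent_type & adult & in_window & has_baseline & has_followup]
```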
Once populated, the ability of the system to distribute and execute queries was tested with 3 quality assurance checks. First, each site’s database was queried to verify that it contained each required table. Second, each individual variable was queried to verify its presence, length, format, and percent of blank or missing entries. Finally, the device-relevant sections of the database were queried to quantify the number of devices, stents, and stent type by the FDA product code.
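A simplified sketch of these 3 checks is shown below, written against a hypothetical SQLite copy of a site UDIR rather than the distributed query infrastructure actually used; table and column names (eg, DEVICE, fda_product_code) are assumptions.

```python
import sqlite3
import pandas as pd

def qa_checks(conn: sqlite3.Connection, expected_tables: list) -> None:
    present = {row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")}

    # Check 1: every required CDM table is present.
    print("Missing tables:", set(expected_tables) - present or "none")

    # Check 2: each variable's presence, storage type, and percent missing.
    for table in sorted(set(expected_tables) & present):
        df = pd.read_sql_query(f"SELECT * FROM {table}", conn)
        pct_missing = df.isna().mean().mul(100).round(1)
        print(table, dict(df.dtypes.astype(str)), dict(pct_missing))

    # Check 3: device counts by FDA product code (assumed column name).
    if "DEVICE" in present:
        print(pd.read_sql_query(
            "SELECT fda_product_code, COUNT(*) AS n_devices "
            "FROM DEVICE GROUP BY fda_product_code", conn))
```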
Retrospective analysis
To demonstrate face validity of the DDN, a retrospective analysis was performed on a sample of cases taken from the UDIRs, leveraging both the device-specific attributes and the longitudinal patient outcome data. The statistical analysis plan described below was prespecified before the analysis was performed. EES and ZES stent types were compared for 6 outcomes: acute myocardial infarction (AMI), stroke, subsequent PCI procedure, coronary artery bypass graft (CABG) procedure, all-cause mortality, and a composite endpoint of all the above. Geisinger served as the data coordinating center.
Because treatment assignment was nonrandomized, we used propensity score matching to mitigate confounding. We originally identified 32 covariates based on input from the clinical coinvestigators but found that 9 were unavailable for more than 50% of patients; these were omitted, leaving 23 covariates for the propensity score model. The propensity for receiving ZES was modeled on these covariates, and the distributed query was written to fit the propensity model separately at each site and to adapt to the different patterns of missing data at the health systems. The covariates are shown in Table 1 and included demographics (eg, age, sex, insurance), comorbidities (eg, diabetes, heart failure), and cardiac-specific variables (eg, stress test results). The most frequently missing of these variables were stress test results and prior surgical history (eg, PCI, CABG); missing values for these were filled in with single imputation using PROC MI in SAS (version 9.4; SAS Institute, Cary, NC). Multiple imputation was discussed but ultimately not used in the interest of keeping the analysis straightforward. Each patient was matched to a patient in the other stent group from the same site with a similar index date (±6 months), the same sex, and a propensity score within a caliper of 0.6 SDs of the logit of the propensity score. Greedy matching was used, with priority given to the patients with the smallest number of potential matches. Patients with nonoverlapping propensity scores or no matches were dropped from the analysis. The absolute standardized difference |d| in means or percentages was used to assess covariate balance between groups, prior to any examination of outcomes, with |d| < 0.10 considered indicative of adequate reduction of the observed confounding.
Table 1.
Baseline patient characteristics after propensity score matching
| Characteristic | Geisinger: Everolimus | Geisinger: Zotarolimus | Geisinger: \|d\| | Mercy: Everolimus | Mercy: Zotarolimus | Mercy: \|d\| | Intermountain: Everolimus | Intermountain: Zotarolimus | Intermountain: \|d\| |
|---|---|---|---|---|---|---|---|---|---|
| Total N (%), patients | 1215 (100) | 1215 (100) | — | 600 (100) | 600 (100) | — | 29 (100) | 29 (100) | — |
| Age in years, mean (SD) | 63 (12) | 63 (12) | 0.05 | 65 (12) | 65 (12) | 0.03 | 67 (10) | 67 (14) | 0.00 |
| Males, N (%) | 881 (73) | 881 (73) | 0.00 | 412 (69) | 412 (69) | 0.00 | 23 (79) | 23 (79) | 0.00 |
| Ethnicity of Hispanic origin, N (%) | 13 (1.1) | 16 (1.3) | 0.02 | 3 (0.5) | 4 (0.7) | 0.03 | 0 (0) | 0 (0) | 0.00 |
| Current tobacco smoker, N (%) | 357 (29) | 333 (27) | 0.04 | 144 (24) | 142 (24) | 0.01 | 0 (0) | 0 (0) | 0.00 |
| Hypertension, N (%) | 963 (79) | 939 (77) | 0.05 | 535 (89) | 533 (89) | 0.01 | 2 (7) | 3 (10) | 0.10 |
| Diabetes, N (%) | 403 (33) | 417 (34) | 0.02 | 260 (43) | 261 (44) | 0.00 | 0 (0) | 0 (0) | 0.00 |
| Dyslipidemia, N (%) | 943 (78) | 922 (76) | 0.04 | 516 (86) | 518 (86) | 0.01 | 1 (3.4) | 1 (3.4) | 0.00 |
| Heart failure, N (%) | 129 (11) | 136 (11) | 0.02 | 133 (22) | 151 (25) | 0.07 | 0 (0) | 0 (0) | 0.00 |
| Prior myocardial infarction, N (%) | 340 (28) | 343 (28) | 0.00 | 246 (41) | 255 (43) | 0.03 | 0 (0) | 0 (0) | 0.00 |
| Currently on dialysis, N (%) | 7 (0.6) | 10 (0.8) | 0.02 | 17 (2.8) | 17 (2.8) | 0.00 | 0 (0) | 0 (0) | 0.00 |
| Prior CABG surgery, N (%) | 177 (15) | 166 (14) | 0.03 | 137 (23) | 150 (25) | 0.05 | 0 (0) | 0 (0) | 0.00 |
| Prior PCI procedure, N (%) | 421 (35) | 416 (34) | 0.01 | 344 (57) | 344 (57) | 0.00 | 0 (0) | 0 (0) | 0.00 |
| Non-STEMIa at baseline, N (%) | 301 (25) | 308 (25) | 0.01 | 124 (21) | 149 (25) | 0.10 | 3 (10) | 3 (10) | 0.00 |
| Admitted via emergency department, N (%) | 688 (57) | 678 (56) | 0.02 | 300 (50) | 314 (52) | 0.05 | 0 (0) | 0 (0) | 0.00 |
| Had a stress imaging test, N (%) | 286 (24) | 273 (23) | 0.02 | 212 (35) | 210 (35) | 0.01 | 0 (0) | 0 (0) | 0.00 |
| Stress test with indeterminant result, N (%) | 15 (1.2) | 18 (1.5) | 0.03 | 5 (0.8) | 7 (1.2) | 0.04 | 0 (0) | 0 (0) | 0.00 |
| Stress test with negative result, N (%) | 41 (3) | 34 (3) | 0.03 | 26 (4.3) | 27 (4.5) | 0.01 | 0 (0) | 0 (0) | 0.00 |
| Stress test with positive result, N (%) | 221 (18) | 210 (17) | 0.02 | 181 (30) | 173 (29) | 0.03 | 0 (0) | 0 (0) | 0.00 |
| Left main stenosis in %, mean (SD) | 9.6 (19.8) | 9.9 (20.1) | 0.02 | 12.1 (23.9) | 11.6 (22.6) | 0.02 | 6.4 (20.1) | 7.7 (19.1) | 0.01 |
| Insurance: private, N (%) | 791 (65) | 770 (63) | 0.04 | 469 (78) | 471 (79) | 0.01 | 0 (0) | 0 (0) | 0.00 |
| Insurance: medicare, N (%) | 348 (29) | 362 (30) | 0.03 | 308 (51) | 313 (52) | 0.02 | 0 (0) | 0 (0) | 0.00 |
| Insurance: medicaid, N (%) | 76 (6) | 81 (7) | 0.02 | 55 (9) | 60 (10) | 0.03 | 0 (0) | 0 (0) | 0.00 |
| Insurance: none, N (%) | 14 (1.2) | 17 (1.4) | 0.02 | 37 (6) | 32 (5) | 0.04 | 29 (100) | 29 (100) | 0.00 |
a Non-ST-segment elevation myocardial infarction.
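The sketch below illustrates the site-level matching logic described above (propensity model fit locally, exact match on sex, index dates within 6 months, a caliper of 0.6 SDs of the logit of the propensity score, and greedy 1:1 matching) along with the standardized-difference balance check. It is a simplified, hypothetical rendering rather than the SAS code used in the study, and it omits the ordering heuristic that prioritized patients with the fewest candidate matches.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def propensity_match(df: pd.DataFrame, covariates: list) -> list:
    """Greedy 1:1 match of ZES patients to EES patients within one site (sketch)."""
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df["zes"])
    p = model.predict_proba(df[covariates])[:, 1]
    df = df.assign(logit_ps=np.log(p / (1 - p)))
    caliper = 0.6 * df["logit_ps"].std()

    zes, ees = df[df["zes"] == 1], df[df["zes"] == 0]
    used, pairs = set(), []
    for _, z in zes.iterrows():
        candidates = ees[
            (~ees.index.isin(used))
            & (ees["sex"] == z["sex"])                                     # exact on sex
            & (abs((ees["index_date"] - z["index_date"]).dt.days) <= 183)  # +/- 6 months
            & (abs(ees["logit_ps"] - z["logit_ps"]) <= caliper)            # caliper on logit
        ]
        if not candidates.empty:
            best = (candidates["logit_ps"] - z["logit_ps"]).abs().idxmin()
            used.add(best)
            pairs.append((z.name, best))  # (ZES patient index, matched EES patient index)
    return pairs

def standardized_difference(x1: pd.Series, x0: pd.Series) -> float:
    """|d| for a continuous or binary covariate; < 0.10 taken as adequate balance."""
    pooled_sd = np.sqrt((x1.var() + x0.var()) / 2)
    return abs(x1.mean() - x0.mean()) / pooled_sd
```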
After matching and balance assessment, Cox proportional hazards analysis was used to estimate hazard ratios and 95% confidence intervals (CIs) and to test for statistical significance between groups. Kaplan–Meier curves were generated for each endpoint for visual inspection. Follow-up ended at the earliest of the outcome event, 90 days with no follow-up encounter, or 365 days after implantation; patients were right-censored in the latter 2 cases. A sensitivity analysis using inverse probability of treatment weighting in place of matching achieved less balance in the confounding covariates but reached very similar conclusions, so only the propensity-matched results are presented here. All statistical analyses were performed using SAS software, with P < .05 considered statistically significant.
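A minimal sketch of the outcome model for a single endpoint is shown below, assuming day-level offsets from the index date and using the lifelines package in place of the SAS procedures actually used; all column names are illustrative.

```python
import pandas as pd
from lifelines import CoxPHFitter

def time_and_event(row: pd.Series, outcome_day: str) -> pd.Series:
    """Follow-up ends at the earliest of the event, loss to follow-up, or 365 days."""
    ends = [row["last_encounter_day"] + 90, 365]   # loss to follow-up, administrative end
    if pd.notna(row[outcome_day]):
        ends.append(row[outcome_day])
    end = min(ends)
    event = int(pd.notna(row[outcome_day]) and row[outcome_day] <= end)
    return pd.Series({"T": end, "E": event})

def fit_cox(matched: pd.DataFrame, outcome_day: str) -> CoxPHFitter:
    surv = matched.apply(time_and_event, axis=1, outcome_day=outcome_day)
    surv["zes"] = matched["zes"].to_numpy()  # 1 = zotarolimus, 0 = everolimus (reference)
    cph = CoxPHFitter()
    cph.fit(surv, duration_col="T", event_col="E")
    return cph                               # cph.hazard_ratios_["zes"] is the ZES vs EES HR
```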
In addition to the matched analysis, a query was devised to aggregate survival data from all 3 health systems while remaining consistent with the DDN principle of not sharing individual patient data, following the “risk set” or aggregation approaches described elsewhere.23 This query divided patients by stent type and presence or absence of each outcome event and created tables in which each record summarized a group of 5 patients with the same stent type, the same event or censoring status, and similar times-to-event. These tables were then shared with the data coordinating center and recombined into a master dataset to produce unadjusted survival curves for both stent types from all 3 health systems together.
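The sketch below illustrates the kind of group-of-5 aggregation a site might run before sharing data with the coordinating center: patients are grouped by stent type and event status, ordered by time-to-event, and bucketed into groups of 5 whose summary rows are the only records shared. Column names are assumptions and the summary statistics are simplified; this is not the BUILD query itself.

```python
import pandas as pd

def aggregate_for_sharing(site_data: pd.DataFrame) -> pd.DataFrame:
    """Summarize patients into privacy-preserving groups of 5 (sketch)."""
    shared_rows = []
    for (stent, event), grp in site_data.groupby(["stent_type", "event"]):
        grp = grp.sort_values("days_to_event_or_censor").reset_index(drop=True)
        grp["bucket"] = grp.index // 5                 # groups of 5 with similar times
        shared_rows.append(grp.groupby("bucket").agg(
            stent_type=("stent_type", "first"),
            event=("event", "first"),
            n_patients=("days_to_event_or_censor", "size"),
            mean_days=("days_to_event_or_censor", "mean"),
        ))
    return pd.concat(shared_rows, ignore_index=True)   # only these rows leave the site
```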
This was a retrospective review of data collected for nonresearch purposes which was granted a Health Insurance Portability and Accountability Act of 1996 (HIPAA) waiver of authorization. The project was granted exempt status by Institutional Research Boards (IRBs) at all 3 institutions.
RESULTS
The quality assurance checks demonstrated that the CDM tables at each health system were appropriately populated, as shown in online Supplementary Tables S2–S5. The Geisinger UDIR contains all patients undergoing procedures in their cardiac Cath Labs between January 2010 and February 2018. The Mercy UDIR contains all patients undergoing coronary stent implantation between March 2010 and April 2019. The Intermountain UDIR contains all patients undergoing coronary stent implantation between October 2017 and April 2019 (a period of 19 months). The numbers of patients undergoing drug-eluting and bare metal coronary stent implantations during these time periods were as follows: Geisinger 8840; Mercy 11 446; Intermountain 1888. During the checks, mis-specified variables (eg, numeric vs character types or differing entry lengths) were identified and corrected so that all 3 health system databases conformed to the BUILD CDM.
In the retrospective analysis, 12 862 unique patients who received an EES or ZES during the 2012–2017 study period were initially identified across the 3 health systems. After removing patients who received both stent types or had insufficient baseline or follow-up information, the final cohort for analysis comprised 9141 patients (Mercy = 4905, Geisinger = 4109, and Intermountain = 127). (Intermountain’s UDIR contained data on patients undergoing procedures only in the last 3 months of the study period: October–December 2017.) Mean age was 65 years (SD 12 years), 69% of patients were male, and 23% of patients received a ZES.
Propensity score matching was successful at identifying 1:1 matched cohorts at all 3 health systems and reducing the standardized differences |d| within those health systems to 0.10 or less for almost all covariates, indicating good balance between groups. Figure 3 plots the standardized differences for 23 covariates before and after propensity matching for comparison, and Table 1 shows the covariate means and percentages as well as |d| for the final, matched cohorts. At Mercy and Geisinger, the number of patients receiving ZES was much smaller than the number receiving EES and dictated the sizes of the final cohorts, which were 1215 patients in each drug group at Geisinger and 600 in each group at Mercy. At Intermountain, there were fewer EES patients than ZES, and a more limited timeframe, so the cohorts were much smaller (29 patients per group), bringing the total final combined sample size for the matched analysis to 3688 patients.
Figure 3.
Forest plot of standardized differences |d| in baseline variables before versus after propensity score matching. Open symbols represent standardized differences before matching, solid symbols represent standardized differences after matching. Note that not all variables were used at all health systems.
The absolute counts and percentages of patients experiencing each of the 6 outcome events during follow-up are shown in Table 2, though these numbers should be interpreted with caution as they do not take into account that patients had varying follow-up times. In general, subsequent PCI was the most frequently observed of the 5 individual endpoints (as high as 15.9% per cohort), with stroke and CABG surgery being the least frequent (0.0–1.3% and 0.0–2.2%, respectively). Overall, 152 patients (4.1% total) died during the study period.
Table 2.
Numbers and percentages of patients in the everolimus and zotarolimus matched cohorts at each institution who experienced each of the 6 study endpoints
| Cohort | N, patients in matched cohort | N (%) with AMI | N (%) with stroke | N (%) with subsequent PCI | N (%) with CABG | N (%) with all-cause mortality | N (%) with composite endpointa |
|---|---|---|---|---|---|---|---|
| Geisinger | |||||||
| Everolimus | 1215 | 6 (0.5) | 4 (0.3) | 193 (15.9) | 15 (1.2) | 37 (3.0) | 231 (19) |
| Zotarolimus | 1215 | 12 (1.0) | 0 (0.0) | 167 (13.7) | 20 (1.6) | 40 (3.3) | 221 (18) |
| Mercy | |||||||
| Everolimus | 600 | 32 (5.3) | 8 (1.3) | 64 (10.7) | 11 (1.8) | 30 (5.0) | 123 (20.5) |
| Zotarolimus | 600 | 19 (3.2) | 7 (1.2) | 75 (12.5) | 13 (2.2) | 42 (7.0) | 137 (22.8) |
| Intermountain | |||||||
| Everolimus | 29 | 0 (0.0) | 0 (0.0) | 2 (6.9) | 0 (0.0) | 1 (3.4) | 3 (10.3) |
| Zotarolimus | 29 | 0 (0.0) | 0 (0.0) | 3 (10.3) | 0 (0.0) | 2 (6.9) | 4 (13.8) |
Note: Patients were followed for different lengths of time, so these simple percentages should not be compared statistically.
a Composite endpoint includes all 5 of the other outcomes.
Survival curves for the matched cohorts are shown in Figures 4A–F for the 6 study endpoints. Table 3 shows estimates of hazard ratios (ZES vs EES) for the 3 health systems. These hazard ratios ranged from 0.85 to 1.59 for subsequent PCI (P = .14–.52), 1.06 to 2.03 for death (P = .16–.78), and 0.94 to 1.40 for the composite endpoint (P = .37–.62). In summary, we found no statistically significant differences in the hazards of these outcomes between the 2 stents, recognizing that the sample sizes are relatively small and the 95% CIs are wide, especially in the Intermountain analysis.
Figure 4.
Survival curves comparing everolimus versus zotarolimus stents for propensity-matched cohorts at all 3 BUILD health systems. (A) Acute myocardial infarction. (B) Stroke. (C) Subsequent percutaneous coronary intervention (PCI) procedure. (D) Coronary artery bypass grafting (CABG) procedure. (E) All-cause mortality. (F) Composite of all 5 other endpoints.
Table 3.
Hazard ratio results from Cox proportional hazards analysis comparing event-free survival between zotarolimus versus everolimus (reference group) patients, stratified by BUILD site
| Endpoint | Geisinger: Hazard ratio [95% CI] | Geisinger: P value | Mercy: Hazard ratio [95% CI] | Mercy: P value | Intermountain: Hazard ratio [95% CI] | Intermountain: P value |
|---|---|---|---|---|---|---|
| AMI | 1.98 [0.78, 4.99] | 0.15 | 0.60 [0.34, 1.06] | 0.08 | a | a |
| Stroke | a | a | 0.88 [0.30, 2.63] | 0.83 | a | a |
| Subsequent PCI | 0.85 [0.69, 1.05] | 0.14 | 1.18 [0.84, 1.66] | 0.35 | 1.59 [0.39, 6.46] | 0.52 |
| CABG procedure | 1.31 [0.66, 2.61] | 0.44 | 1.19 [0.53, 2.67] | 0.67 | a | a |
| All-cause mortality | 1.06 [0.68, 1.66] | 0.78 | 1.41 [0.87, 2.29] | 0.16 | 2.03 [0.19, 22.16] | 0.56 |
| Composite endpointb | 0.94 [0.78, 1.13] | 0.53 | 1.12 [0.88, 1.44] | 0.37 | 1.40 [0.38, 5.16] | 0.62 |
a These groups had no events and, therefore, hazard ratios could not be calculated.
b Composite endpoint includes all 5 of the other outcomes.
Finally, an additional set of survival curves is shown in Figure 5, based on the aggregated (not matched) data from all eligible study patients at all 3 health systems, with shaded areas indicating the 95% CIs; no hypothesis testing was performed on these unmatched data.
Figure 5.
Survival curves showing everolimus vs. zotarolimus stents for aggregated data (not propensity score matched) from all 3 BUILD health systems, with shaded regions representing 95% confidence intervals. (A) Acute myocardial infarction. (B) Stroke. (C) Subsequent PCI procedure. (D) CABG procedure. (E) All-cause mortality. (F) Composite of all 5 other endpoints.
DISCUSSION
In the BUILD initiative, we defined the data needed to assess coronary artery stents, sourced the data across 3 domains (EHR, AccessGUDID/AUDI, and the CathPCI Registry®), and developed a CDM as a framework for specifying those data consistently. Each participating health system then captured and transformed its data to the CDM. This enabled distributed analysis: a single common query was executed across the 3 health system databases, returning aggregate, deidentified data that were analyzed separately by health system, with an exploratory analysis performed on the combined aggregate.
Our sample analyses compared the real-world performance of 2 drug-eluting stents (DES) and demonstrate the face validity of the BUILD DDN approach. Our results are consistent with randomized trials, which have shown no significant differences in clinical outcomes between the 2 stents, with absolute differences for EES compared with ZES in various 1-year adverse events (death, MI, target lesion revascularization, and composites) ranging from −0.4% to 3.4%.24–28 Additionally, Park et al29 reported nonsignificant hazard ratios of 0.83–1.4 for the individual outcomes, including stroke. A registry-based study did show some benefit for EES over ZES in the setting of AMI,30 but another prospective registry study in broader patient populations showed no differences between the stents.31 Finally, the findings of the current analysis are consistent with and build on a similar study performed in the Mercy UDI Demonstration project.16 Importantly, the analyses also demonstrate the face validity of the BUILD DDN as an approach to maintaining local control of data while allowing for RWD aggregate analytics sufficient for new knowledge generation, contributions to science, and regulatory decision making.
Coronary stents were selected as the initial area for system testing because DES have become the standard of care for PCI and have been well studied in clinical trials. Also, the drug eluted by a DES is one of the key clinical stent attributes found in UDI-linked AUDI data. This work provides important evidence of the feasibility of integrating UDIs into Sentinel-like data networks to perform device effectiveness studies and to support a modern medical device surveillance system, which has significant importance for physicians, patients, and healthcare.
EHR data, which were captured in the routine course of care, enabled longitudinal follow-up that is essential for device surveillance. AccessGUDID and AUDI provided novel, device-specific information that was only linkable because of the availability of UDIs in health system databases. The CathPCI Registry® data, which required manual entry into the registry at each health system, provided highly standardized short-term clinical information following stent implantation.
A major goal of the BUILD initiative was to develop methods that other hospitals could follow to capture robust device data, combine it with clinical data captured in the course of patient care, and thus enable the tracking of medical device performance for quality and research purposes and, in particular, for safety surveillance. We believe we have accomplished that goal. As illustrated in the Supplementary Appendix, the 3 BUILD health systems took different approaches to creating their device databases (the UDIRs) driven by their individual organizational technology infrastructure, databases, and available resources. In addition, we have previously published the results of the BUILD Leading Practices project, which explored the methods used by several health systems to capture UDI systematically.12 It is our opinion that the robust device data capture and analysis approach we have taken would not be possible without UDI.
Limitations
Data missingness is a significant challenge for real-world evidence (RWE). Missingness can occur in data types or in the amount of data available for analysis. The most obvious missingness in our data relates to post-PCI services rendered to patients by other health providers, resulting in the absence of records of those encounters in the respective UDIRs. We did not formally address the degree or causes of missingness in our data. Additionally, while we fit propensity models at each site to address confounding, newer methods for distributed analytics exist and could have been employed.23 We considered a hybrid approach that would share only propensity scores and outcomes with the coordinating center, without sharing any other individual patient data, but ultimately decided that aggregating patients into groups of 5, while simplistic, was easier to interpret and provided an adequate demonstration of the network. Our analyses also carry the weaknesses of observational studies, including likely residual confounding, although our comparison groups were well matched using propensity models. Moreover, because factors such as the timing of an EHR system conversion at one site (Intermountain) limited the time period for which data were available, the participating health systems contributed different amounts of data to the analysis. We feel, however, that the successful involvement of all 3 health systems demonstrates the feasibility and generalizability of the overall approach to other health systems. Additionally, since UDIs were not captured at the point of care at Intermountain until 2017, product codes, which included the drug type, had to be utilized to identify the stents of interest before that date. Finally, due to time limitations, full validation of the AUDI database could not be accomplished during this project and will have to be carried out before strong inferences can be drawn from analyses employing AUDI data.
CONCLUSIONS
In support of the FDA’s efforts to develop a robust postmarket medical device evaluation system utilizing RWD and enabled by UDI, we have established the first DDN and CDM to make use of real-world EHR data and UDI-linked device data. The 3 participating BUILD health systems are among the first to implement point-of-care UDI data capture. The UDI enabled the linking of stent data from AccessGUDID to patient-level data, greatly simplifying analyses.13
In the BUILD initiative, we also expanded the original Mercy Supplementary UDI-enriched Database (SUDID), renamed AUDI, which contains publicly available key clinical device characteristics that were linked to patient data via UDI. This approach is much like that used by the FDA’s Sentinel program for evaluating drugs and is the next step in creating a large EHR-based DDN for evaluating all medical devices.32,33
Future directions
The science of using EHR data combined with registry and device data in medical device real-world safety and effectiveness analyses is still nascent. However, when implemented in a large DDN, there is tremendous potential because of the ability to aggregate data generated in everyday practice and link it via UDIs to other data needed for robust device assessments. To advance this potential, the BUILD investigators are continuing work on EHR data quality including the challenge of data missingness. Future work includes incorporating insurance claims data and patient-reported information into the UDIRs, adding additional hospital systems to the DDN, and expanding research to other device types. BUILD investigators are also working with the National Evaluation System for health Technology (NEST) Coordinating Center on the use of EHR data and UDIs in further developing an RWD platform for medical device evaluation across the total product life cycle.34
FUNDING
Funding for this publication was provided, in part, by the Food and Drug Administration through grant 1U01FD005476-01 REVISED, 02 REVISED, and 03. Partial funding was also received through grants from Johnson & Johnson, Titusville, NJ, and Medtronic, Inc., Minneapolis, MN.
AUTHOR CONTRIBUTIONS
JD is the BUILD principal investigator, had overall responsibility for the paper, and was the primary author of the Lay Summary, Background and Significance, Objectives, Discussion, and Conclusion sections. JG is the lead BUILD Geisinger and Data Center investigator and led the analysis. He was the primary author of the Materials and Methods and Results sections and made editorial contributions to the remainder of the paper. JM is the lead BUILD Intermountain investigator, contributed Intermountain data to the analysis, and provided editorial input. JT is a lead BUILD investigator and made significant contributions to the study design, execution and organization, and content of the paper. JR and TF designed and built the AUDI database and the Mercy UDIR and provided content related to the databases. TF was also the BUILD CDM architect. AM and HM worked on data capture and on writing and responding to data queries. SK participated in the analysis methods and results. NW is the principal investigator for the BUILD Leading Practices project and contributed content and editorial input. JB and ES were members of the BUILD Steering Committee, which provided direction for the overall project. They contributed significant content and editorial input.
ACKNOWLEDGMENTS
The authors would like to acknowledge the following for their contributions to the BUILD initiative and this article: Kimberly Collison-Farr, the BUILD project manager, who coordinated all BUILD activities and contributed actively to AUDI development and other data aggregation tasks. Michael Beiene, who built Intermountain’s UDIR. Patrick Lupinetti and First Databank, who assisted in the population of the AUDI database by leveraging their Prizm Medical Device and Supply database. Terrie Reed (now at Symmetric Health Solutions), Asiyah Yu Lin (now at National Human Genome Research Institute, NIH), Erika Tang, and their FDA colleagues, who provided ongoing guidance for BUILD. Sentinel leaders Jeff Brown and Nicolas Beaulieu, who provided invaluable assistance in the creation of the BUILD CDM. All the members of the BUILD Steering Committee, without whose support the initiative would not have been possible.
CONFLICT OF INTEREST STATEMENT
JPD’s nondependent son is an employee of Boston Scientific. JAB was an employee of Johnson & Johnson at the time the work was conducted (and has since retired) and owns stock in the company. JG received institutional grant funding from Pfizer, Inc.; Purdue Pharma, LP; and Genentech, Inc., for unrelated work during the study period. EPS is an employee of Medtronic and owns stock in the company. NAW has received research funding from the National Evaluation System for health Technology Coordinating Center and the Patient-Centered Outcomes Research Institute. She had purchased stock options in Vitreos Health.
DATA AVAILABILITY
The data on which this study was based is Protected Health Information, remains under the control of the individual participating health systems, and cannot be provided by the study investigators. The aggregated summary data can be provided on request to the corresponding author at jpdrozda@charter.net. The BUILD CDM and AUDI database are available in the Supplementary Material.
REFERENCES
- 1. Hauser H, Kallinen LM, Almquist AK, et al. Early failure of a small-diameter high-voltage implantable cardioverter-defibrillator lead. Heart Rhythm 2007; 4 (7): 892–6.
- 2. Maisel WH. Semper fidelis—consumer protection for patients with implanted medical devices. N Engl J Med 2008; 358 (10): 985–7.
- 3. Steele GD, Fehring TK, Odum SM, et al. Early failure of articular surface replacement XL total hip arthroplasty. J Arthroplasty 2011; 26 (6 Suppl): 14–8.
- 4. Rising JP, Reynolds IS, Sedrakyan A. Delays and difficulties in assessing metal-on-metal hip implants. N Engl J Med 2012; 367 (1): e1.
- 5. U.S. Food and Drug Administration. UDI Basics. Updated May 19, 2019. https://www.fda.gov/medical-devices/unique-device-identification-system-udi-system/udi-basics. Accessed July 2, 2021.
- 6. Platt R, Brown JS, Robb M, et al. The FDA Sentinel Initiative–an evolving national resource. N Engl J Med 2018; 379 (22): 2091–3.
- 7. Food and Drug Administration Amendments Act of 2007. https://www.govinfo.gov/content/pkg/PLAW-110publ85/pdf/PLAW-110publ85.pdf. Accessed June 21, 2021.
- 8. FDA’s Sentinel Initiative—Background. https://www.fda.gov/safety/fdas-sentinel-initiative/fdas-sentinel-initiative-background. Accessed June 21, 2021.
- 9. Krupka DC, Wilson NA, Reich AJ, Weissman JS. The post-market surveillance system for implanted devices is broken. Here’s how CMS and the FDA can act now to fix it. Health Affairs Blog, April 23, 2021. doi: 10.1377/hblog20210420.717948.
- 10. Medical Device Epidemiology Network. Building UDI into Longitudinal Data for Medical Device Evaluation. BUILD Program | MDEpiNet Site. Accessed June 9, 2021.
- 11. MDEpiNet. BUILD Consortium Member Biographies. http://mdepinet.org/wp-content/uploads/June-2019-BUILD-Consortium-Member-Biographies-2-1.pdf. Accessed July 9, 2021.
- 12. Wilson NA, Tcheng JE, Graham J, Drozda JP Jr. Advancing patient safety surrounding medical devices: a health system roadmap to implement unique device identification at the point of care. Med Devices (Auckl) 2021; 14: 411–21.
- 13. AccessGUDID. https://accessgudid.nlm.nih.gov/. Accessed June 9, 2021.
- 14. Medical Device Epidemiology Network. MDEpiNet AUDI Project. 2017. AUDI Project | MDEpiNet Site. Accessed October 20, 2021.
- 15. Healthcare Transformation Group—About. https://www.healthcaretransformationgroup.com/. Accessed October 13, 2021.
- 16. Drozda J, Zeringue A, Dummitt B, et al. How real-world evidence can really deliver: a case study of data source development and use. BMJ Surg Interv Health Technol 2020; 2 (1): e000024.
- 17. Drozda JP Jr, Roach J, Forsyth T, et al. Constructing the informatics and information technology foundations of a medical device evaluation system: a report from the FDA unique device identifier demonstration. J Am Med Inform Assoc 2018; 25 (2): 111–20.
- 18. Drozda JP, Dudley C, Helmering P, et al. The Mercy unique device identifier demonstration project: implementing point of use product identification in the cardiac catheterization laboratories of a regional health system. Healthc (Amst) 2016; 4 (2): 116–9.
- 19. Tcheng JE, Crowley J, Tomes M, et al; MDEpiNet UDI Demonstration Expert Workgroup. Unique device identifiers (UDIs) for coronary stent post-market surveillance and research: a report from the FDA’s Medical Device Epidemiology Network (MDEpiNet) UDI demonstration. Am Heart J 2014; 168 (4): 405–13.e2.
- 20. Ross T, Ng D, Brown JS, et al. The HMO research network virtual data warehouse: a public data model to support collaboration. EGEMS (Wash DC) 2014; 2 (1): 1049.
- 21. National Evaluation System for health Technology. UDI Center. UDI Center—NESTcc. Accessed February 14, 2022.
- 22. Personal communication with Jeffrey Brown, PhD, Department of Population Medicine, Harvard Medical School, August 18, 2017.
- 23. Toh S, Gagne J, Rassen JA, et al. Confounding adjustment in comparative effectiveness research conducted within distributed research networks. Med Care 2013; 51 (8 Suppl 3): S4–10.
- 24. Serruys PW, Silber S, Garg S, et al. Comparison of zotarolimus-eluting and everolimus-eluting coronary stents. N Engl J Med 2010; 363 (2): 136–46.
- 25. Iqbal J, Serruys P, Silber S, et al. Comparison of zotarolimus- and everolimus-eluting coronary stents: final 5-year report of the RESOLUTE all-comers trial. Circ Cardiovasc Interv 2015; 8 (6): e002230.
- 26. von Birgelen C, Basalus MW, Tandjung K, et al. A randomized controlled trial in second-generation zotarolimus-eluting Resolute stents versus everolimus-eluting Xience V stents in real-world patients: the TWENTE trial. J Am Coll Cardiol 2012; 59 (15): 1350–61.
- 27. Tandjung K, Sen H, Lam M, et al. Clinical outcome following stringent discontinuation of dual antiplatelet therapy after 12 months in real-world patients treated with second-generation zotarolimus-eluting Resolute and everolimus-eluting Xience V stents: 2-year follow-up of the randomized TWENTE trial. J Am Coll Cardiol 2013; 61 (24): 2406–16.
- 28. von Birgelen C, Sen H, Lam MK, Danse PW, et al. Third-generation zotarolimus-eluting and everolimus-eluting stents in all-comer patients requiring a percutaneous coronary intervention (DUTCH PEERS): a randomised, single-blind, multicentre, non-inferiority trial. Lancet 2014; 383 (9915): 413–23.
- 29. Park KW, Kang SH, Kang HJ, et al; HOST-ASSURE Investigators. A randomized comparison of platinum chromium-based everolimus-eluting stents versus cobalt chromium-based zotarolimus-eluting stents in all-comers receiving percutaneous coronary intervention: HOST-ASSURE (harmonizing optimal strategy for treatment of coronary artery stenosis-safety & effectiveness of drug-eluting stents & anti-platelet regimen), a randomized, controlled, noninferiority trial. J Am Coll Cardiol 2014; 63 (25 pt A): 2805–16.
- 30. Chen KY, Rha SW, Wang L, et al. Unrestricted use of 2 new-generation drug-eluting stents in patients with acute myocardial infarction: a propensity score-matched analysis. J Am Coll Cardiol Intv 2012; 5 (9): 936–45.
- 31. Park K, Lee J, Kang S-H, et al. Everolimus-eluting Xience V stents versus zotarolimus-eluting Resolute stents in real-world practice: patient-related and stent-related outcomes from the multicenter prospective EXCELLENT and RESOLUTE-Korea registries. J Am Coll Cardiol 2013; 61 (5): 536–44.
- 32. Brown J, Syat B, Lane K, Platt R. Report 27: Effective Health Care Program Research Reports. Blueprint for a distributed research network to conduct population studies and safety surveillance. Rockville, MD: Agency for Healthcare Research and Quality; June 2010. https://effectivehealthcare.ahrq.gov/products/distributed-network-blueprint/research. Accessed June 9, 2021.
- 33. Maro JC, Platt R, Holmes JH, et al. Design of a national distributed health data network. Ann Intern Med 2009; 151 (5): 341–4.
- 34. National Evaluation System for health Technology. https://nestcc.org/. Accessed October 13, 2021.