Abstract
Objective
Little is known regarding variation among electronic health record (EHR) vendors in quality performance. This issue is compounded by selection effects in which high-quality hospitals coalesce around a subset of market-leading vendors. We measured hospital performance, stratified by EHR vendor, across 4 quality metrics.
Materials and Methods
We used data on 1272 hospitals in 2018 across 4 quality measures: Leapfrog Computerized Provider Order Entry/EHR Evaluation, Centers for Medicare and Medicaid Services Hospital Compare Star Ratings, Hospital-Acquired Condition (HAC) score, and Hospital Readmission Reduction Program (HRRP) ratio. We examined score distributions and used multivariable regression to evaluate the association between vendor and score, recovering partial R2 to assess the proportion of quality variation explained by vendor.
Results
We found significant variation across and within EHR vendors. The largest vendor, vendor A, had the highest mean score on the Leapfrog Computerized Provider Order Entry/EHR Evaluation and HRRP ratio, vendor G had the highest Hospital Compare score, and vendor F had the highest HAC score. In adjusted models, no vendor was significantly associated with higher performance on more than 2 measures. EHR vendor explained between 1.2% (HAC) and 7.6% (HRRP) of the variation in quality performance.
Discussion
No EHR vendor was associated with higher quality across all measures, and the 2 largest vendors were not associated with the highest scores. Only a small fraction of quality variation was explained by EHR vendor choice.
Conclusions
Top performance on quality measures can be achieved with any EHR vendor; much of quality performance is driven by the hospital and how it uses the EHR.
Keywords: hospitals, electronic health record, patient safety, quality measurement
INTRODUCTION
Over the past decade since the passage of the Health Information Technology for Economic and Clinical Health Act, acute care hospitals in the United States have rapidly adopted electronic health records (EHRs).1,2 One reason that the federal government provided subsidies for EHR adoption was the theorized quality benefits of EHRs, based on the premise that digitizing health care delivery would reduce errors and improve safety.3 EHR-based tools such as clinical decision support (CDS) and computerized provider order entry (CPOE) were designed to provide clinicians with real-time assistance as they practice, intervening at the point of care to stop potential patient harm before it occurs in ways not possible in a paper-based world.4 Empirical research on the quality impact of EHRs has been mixed, though some recent studies have found EHRs were associated with improved patient outcomes after systems had time to mature.5 While tools such as decision support are far from perfect, evidence suggests that their performance has improved over the last several years.6,7
Recently, some attention has been directed toward evaluating whether specific EHR vendor systems deliver better quality performance than their competitors. Many early studies of EHR quality evaluated homegrown information technology systems developed in academic medical centers.8,9 However, most hospitals now use a commercial EHR vendor-developed system, and there has been significant market concentration around a small number of vendors, especially for larger hospitals and health systems in the inpatient setting.10 Evaluating hospital EHR vendor quality is notoriously difficult, as vendor choice may be correlated with a host of unobserved variables that are also correlated with quality of care. At the same time, it may be that all high-quality hospitals are choosing 1 of the 2 market-dominant EHR vendors, further confounding the issue. Existing studies have shown significant variation both across and within vendors, though most studies only examine a limited set of quality indicators or focus on clinician user experience with the EHR.6,11,12 Understanding whether high-quality hospitals are concentrated within a single or a few EHR vendors is important to understanding the impact of the EHR vendor on quality, and there is a lack of empirical evidence on whether EHR vendor choice is strongly correlated with quality.
We sought to address this by examining variation both within and across EHR vendors in 4 quality measures: Hospital-Acquired Condition (HAC) total score, Hospital Readmission Reduction Program (HRRP) score, Hospital Compare star ratings, and the Leapfrog CPOE/EHR Evaluation score. These measures represent a variety of dimensions of nationally reported quality metrics, and all measures are potentially sensitive to differences across EHR systems—especially because EHRs are the primary documentation vehicle for many quality programs, and EHR-related activities such as ordering and documentation now pervade nearly all aspects of care delivery. Some of the measures in our study (such as the Leapfrog CPOE/EHR Safety Tool Quality score) we expect to be more sensitive to differences in EHR system quality, and others (such as the HAC score) we expect to be less sensitive to any variation in EHR quality. If we found strong relationships between vendor and hospital performance on quality metrics that are unlikely to be sensitive to EHR functionality, it would support the hypothesis that associations between vendor choice and quality metrics are likely to be a selection effect. To that end, we sought to answer 3 research questions. First, is there variation within and across EHR vendors on our chosen hospital quality metrics? Second, are certain EHR vendors consistently associated with better performance across multiple quality metrics? Third, what proportion of variation in quality scores is explained by EHR vendor compared with other observable hospital characteristics?
MATERIALS AND METHODS
Leapfrog CPOE/EHR evaluation
Our primary data source was all hospitals who completed the Leapfrog CPOE/EHR Evaluation Tool in 2018. The CPOE/EHR Evaluation Tool was designed by researchers at Brigham and Women’s Hospital and the University of Utah and administered by the Leapfrog Group, an employer group whose mission is to encourage improvements to healthcare quality and patient safety.13–15 The CPOE/EHR Evaluation Tool is one of several process quality measures included in the Leapfrog Hospital Survey, which is taken by hospitals across the country each year.
The Leapfrog CPOE/EHR Evaluation Tool uses simulated orders and patients, developed by a group of experts on adverse drug events and computerized physician order entry clinical decision support, to test how effectively a hospital's production EHR alerts to potential adverse drug events for inpatient medication orders. The simulated patients and associated orders were developed using case studies from historical examples in which patients experienced preventable harm from inappropriate medication orders. The number of orders varies because orders involving medications not on a hospital's formulary are excluded. The orders fall into categories of potential adverse drug events including drug-allergy, drug-dose (for single and daily orders), drug-renal status, drug-diagnosis, drug-age, drug-route of administration, and drug-drug contraindications, as well as therapeutic duplication and orders that require corollary orders to meet the standard of care. A physician at the participating hospital enters the patients and orders into the EHR as they would for a real patient and records any decision support generated for each order. The process is timed so that hospitals can take no longer than 6 hours total to complete the test, though most hospitals complete it in 2 to 3 hours. After completion of the evaluation, hospitals receive an immediate score showing the overall percentage of orders they correctly alerted on, as well as subscores across the categories of contraindications, but not the details of individual orders they did not alert on unless those orders would have caused potentially fatal patient harm. The primary outcome variable is the overall percentage of orders that would have generated an adverse drug event for which the system correctly generated a decision support alert in some form. More detailed descriptions of the development and use of the CPOE/EHR Evaluation Tool have been published in previous studies.6–9,15
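As a concrete illustration of this scoring arithmetic, the short sketch below computes an overall percentage and per-category subscores from a hospital's recorded alert results. It uses hypothetical data and column names and is not Leapfrog's actual scoring code.

```python
import pandas as pd

# Hypothetical record of one hospital's test session: each row is a simulated
# order, its contraindication category, and whether any alert was generated.
orders = pd.DataFrame({
    "category": ["drug-allergy", "drug-dose", "drug-drug", "drug-renal", "drug-drug"],
    "alerted":  [True, False, True, True, False],
})

overall_score = 100 * orders["alerted"].mean()                  # overall % of orders alerted on
subscores = 100 * orders.groupby("category")["alerted"].mean()  # % alerted within each category

print(f"Overall: {overall_score:.1f}%")
print(subscores.round(1))
```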
Data collection and sample
Our sample included hospitals who completed the CPOE/EHR Evaluation Tool in 2018, for a total of 1272 hospitals in the United States. These data contained information on hospital performance on the Leapfrog CPOE/EHR Evaluation in 2018, as well as which EHR vendor was used by the hospital. We combined these data with 3 other publicly reported measures of hospital quality from the Centers for Medicare and Medicaid Services (CMS). First, the overall star ratings from the Hospital Compare program for 2018 are a composite measure of several quality indicators including mortality, readmission, patient experience, safety, effectiveness, timeliness, and efficient use of imaging, designed to give patients a single, easy-to-interpret indicator of hospital quality.16 Second, the HAC scores for 2018 measure inpatient quality by combining several measures of HACs, including rates of in-hospital falls with hip fracture, postoperative sepsis, surgical site infection, and more, as part of a program that encourages hospitals to improve inpatient quality by penalizing poorly performing hospitals with reduced reimbursements.17,18 Finally, the HRRP ratio covered July 2016 to June 2019 and is part of a payment reform effort to penalize hospitals with reduced reimbursements if they have higher than expected rates of risk-adjusted 30-day readmission for conditions including heart failure, acute myocardial infarction, and pneumonia.19,20 For the HRRP ratio, data were reverse scored so that higher numbers correspond to better quality performance, consistent with the directionality of the other quality measures. Details of how all 3 of the CMS quality measures are calculated and what they are used for are available online through the CMS.gov website. Data were linked based on hospital Medicare identification number. These 4 quality metrics were our outcome variables of interest.
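As an illustration of this linkage step, the sketch below shows how the four quality measures might be merged on the Medicare identification number and how the HRRP ratio could be reverse scored. This is not the authors' actual code; all file and column names are hypothetical, and the exact reverse-scoring transformation is not specified in the text (simple negation is shown as one plausible choice).

```python
import pandas as pd

# Hypothetical input files; column names are illustrative only.
leapfrog = pd.read_csv("leapfrog_cpoe_2018.csv")   # medicare_id, cpoe_score, ehr_vendor
stars = pd.read_csv("hospital_compare_2018.csv")    # medicare_id, star_rating
hac = pd.read_csv("hac_scores_2018.csv")            # medicare_id, hac_score
hrrp = pd.read_csv("hrrp_2016_2019.csv")            # medicare_id, hrrp_ratio

# Link all sources on the hospital Medicare identification number.
df = (leapfrog
      .merge(stars, on="medicare_id", how="inner")
      .merge(hac, on="medicare_id", how="inner")
      .merge(hrrp, on="medicare_id", how="inner"))

# Reverse score the HRRP ratio so that higher values indicate better performance,
# matching the directionality of the other three measures (negation assumed here;
# the authors' exact transformation is not described).
df["hrrp_reversed"] = -df["hrrp_ratio"]
```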
We combined this with data from the American Hospital Association (AHA) Annual Survey for 2018 to capture hospital demographic characteristics. The AHA Annual Survey is sent annually to the CEO of every hospital in the United States with a request to complete the survey or delegate it to the most knowledgeable person in the organization. The 2018 survey was conducted from January to May 2019 and asked questions regarding hospital characteristics as of the end of 2018. The response rate for the survey was over 85%, and every hospital in our sample responded to the 2018 survey.
EHR vendor and hospital characteristics
Our primary independent variable of interest was a hospital’s EHR vendor, determined by self-report as part of the Leapfrog CPOE/EHR Evaluation Tool. Vendors with fewer than 5 hospitals in our sample were aggregated into a category for “other vendor,” while all vendors with 5 or more hospitals in the sample were listed individually. EHR vendor names were anonymized per our data use agreements.
We also selected a set of hospital characteristics captured by the AHA Annual Survey, based on previous studies of health information technology and clinical decision support adoption and use, to serve as control variables.21 These include size by number of beds, teaching status, membership in a health system, rural or urban location, ownership (including private, for profit; private, nonprofit; and public, nonfederal hospitals), and geographic region in the United States.
Analytic strategy
We first calculated sample descriptive statistics showing the mean and range for our 4 measures of hospital quality: the Leapfrog CPOE/EHR Evaluation score, Hospital Compare star ratings, HAC score, and HRRP ratio. We then calculated descriptive statistics of our hospital demographics, including EHR vendor, size, teaching status, system membership, location, ownership, and region. We then plotted the mean and distribution of scores for each of our 4 quality measures by EHR vendor as boxplots. Next, we created 4 ordinary least squares models, each with 1 of our 4 measures of hospital quality as the dependent variable. The independent variables of interest were hospital EHR vendors, and the hospital demographics reported previously were included as control variables. All models included robust standard errors. Finally, we ran each model again without including EHR vendors and recovered the partial R2 from both regression models to compare the proportion of variation in each of the 4 quality measures explained by EHR vendor, explained by hospital demographic characteristics, and unexplained by either, plotted as a stacked bar graph.
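A minimal sketch of this modeling and variance-decomposition step is shown below, assuming the merged DataFrame from the sketch above augmented with AHA control columns (hypothetical names) and using Python's statsmodels. It illustrates one common reading of the partial R2 approach, in which the share attributable to vendor is the difference in R2 between the full and reduced models; the authors' exact implementation may differ.

```python
import statsmodels.formula.api as smf

# Hypothetical control columns drawn from the AHA survey.
controls = "C(size) + C(teaching) + C(system_member) + C(urban) + C(ownership) + C(region)"

def variance_decomposition(df, outcome):
    """Compare R2 with and without EHR vendor indicators (HC1 robust SEs)."""
    full = smf.ols(f"{outcome} ~ C(ehr_vendor) + {controls}", data=df).fit(cov_type="HC1")
    reduced = smf.ols(f"{outcome} ~ {controls}", data=df).fit(cov_type="HC1")
    return {
        "vendor": full.rsquared - reduced.rsquared,    # incremental share from vendor dummies
        "hospital_characteristics": reduced.rsquared,  # share from observed demographics
        "unexplained": 1 - full.rsquared,              # residual variation
    }

for outcome in ["cpoe_score", "star_rating", "hac_score", "hrrp_reversed"]:
    print(outcome, variance_decomposition(df, outcome))
```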
As a robustness test, we examined Leapfrog CPOE/EHR Evaluation scores from 2009 to 2018, including 8657 hospital-year observations in an unbalanced panel, with the sample described fully in previous work,6 to determine if the relationship between EHR vendor and quality scores changed over time. We used an ordinary least squares multivariable regression model similar to the one described previously and included dummy variables for each year interacted with EHR vendor indicators as our independent variables of interest. The model also included our set of hospital demographics as control variables and used robust standard errors clustered at the hospital level.
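The robustness specification could look roughly like the following sketch, again using a statsmodels formula with hypothetical column names (a panel DataFrame with one row per hospital-year and a hospital_id column). The year-by-vendor interaction and hospital-level clustering mirror the description above, though the authors' exact specification may differ.

```python
import statsmodels.formula.api as smf

# panel: one row per hospital-year, with columns such as cpoe_score, year,
# ehr_vendor, hospital_id, and the same demographic controls as above.
model = smf.ols(
    "cpoe_score ~ C(year) * C(ehr_vendor) + C(size) + C(teaching) + "
    "C(system_member) + C(urban) + C(ownership) + C(region)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["hospital_id"]})

print(model.summary())
```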
RESULTS
The mean Leapfrog CPOE/EHR Evaluation score for hospitals in our sample was 65.9%, ranging from 13.9% to 100.0%. The mean Hospital Compare star rating was 2.9 (range, 1.0 to 5.0), the HAC score was 0.04 (range, −1.66 to 1.74), and the HRRP ratio was −0.01 (range, −0.28 to 0.27). Vendor A was the largest EHR vendor in our sample with 479 (37.7%) hospitals, with 7 other EHR vendors having 5 or more hospitals represented. Most hospitals were medium sized (100-399 beds; 54.7%), teaching hospitals (51.1%), system members (72.8%), located in urban areas (78.2%), and private, nonprofit (56.8%). Overall, 409 (32.2%) hospitals were in the South, followed by the West (21.9%), the Northeast (17.2%), and the Midwest (15.8%) (Table 1). We also compared hospital sample characteristics across vendors in Supplementary Table A1.
Table 1.
Sample descriptive statistics (n = 1272)
Quality scores | |
Leapfrog CPOE/EHR Quality Score | 65.9 (13.9 to 100.0) |
CMS Hospital Compare Star Ratings | 2.9 (1.0 to 5.0) |
Hospital-Acquired Condition score | 0.04 (−1.66 to 1.74) |
HRRP score^a | −0.01 (−0.28 to 0.27) |
EHR vendor | |
Vendor A | 479 (37.7) |
Vendor B | 354 (27.8) |
Vendor C | 283 (22.2) |
Vendor D | 49 (3.9) |
Vendor E | 43 (3.4) |
Other vendor | 26 (2.0) |
Vendor F | 20 (1.6) |
Vendor G | 12 (0.9) |
Vendor H | 5 (0.4) |
Hospital size | |
Small hospitals (<100 beds) | 166 (13.1) |
Medium hospitals (100-399 beds) | 696 (54.7) |
Large hospitals (400+ beds) | 410 (32.2) |
Teaching status | |
Nonteaching hospital | 622 (48.9) |
Teaching hospital | 650 (51.1) |
System membership | |
Not a member of health system | 346 (27.2) |
Health system member | 926 (72.8) |
Location | |
Rural | 277 (21.8) |
Urban | 995 (78.2) |
Ownership | |
Private, nonprofit | 723 (56.8) |
Private, for profit | 260 (20.4) |
Public, nonfederal | 125 (9.8) |
Geographic region | |
Northeast | 219 (17.2) |
West | 279 (21.9) |
Midwest | 201 (15.8) |
South | 409 (32.2) |
Values are mean (range) or n (%).
CMS: Centers for Medicare and Medicaid Services; CPOE: computerized provider order entry; EHR: electronic health record; HRRP: Hospital Readmission Reduction Program.
^a HRRP score is reverse scored.
We found significant variation both within and across EHR vendors in our 4 quality measures (Figure 1). Hospitals using vendor A had the highest mean score on the Leapfrog CPOE/EHR Evaluation, with 68.7% and a range of 26.5% to 100.0%, followed by hospitals using vendor C (66.6%; range, 21.6% to 100.0%), vendor E (66.0%; range, 32.3% to 100.0%), G (65.0%; range, 13.9% to 89.5%), B (62.9%; range, 13.9% to 97.5%), F (61.5%; range, 37.1% to 79.3%), D (58.0%; range, 13.9% to 86.7%), and H (57.3%; range, 24.2% to 85.7%), with hospitals using other vendors scoring a mean of 67.1%, with a range of 30.6% to 90.3%. For CMS Hospital Compare ratings, hospitals using vendor G were the highest with a mean star rating of 3.3 and a range of 2.0 to 4.0, followed by vendor A (3.2; range, 1.0 to 5.0) and vendor H (3.2; range, 1.0 to 5.0). For HAC score, hospitals using vendor F had the highest score (0.33; range, −0.20 to 1.52), followed by other vendors (0.12; range, −1.41 to 0.93), while hospitals using vendor G had the lowest mean score (−0.31; range, −1.30 to 0.39). Finally, for HRRP ratio, hospitals using vendor A had the highest score (0.004; range, −0.138 to 0.274), followed by vendor G (−0.002; range, −0.081 to 0.081), with hospitals using vendor C (−0.036; range, −0.284 to 0.107) and other vendors (−0.038; range, −0.137 to 0.042) scoring the lowest.
Figure 1.
Hospital quality scores by electronic health record (EHR) vendor.
Plot shows boxplots of distributions of scores by EHR vendor (n = 1272). CMS: Centers for Medicare and Medicaid Services; CPOE: computerized provider order entry; HRRP: Hospital Readmission Reduction Program.
In the multivariable results, after controlling for observed hospital characteristics, we found no statistically significant association between EHR vendor and Leapfrog CPOE/EHR scores (Table 2). However, several EHR vendors were associated with higher or lower performance in our robustness test analyzing previous years (Supplementary Table A2). For Hospital Compare star ratings, hospitals using vendors A (β = 1.00, P < .001), B (β = 0.57, P < .001), C (β = 0.67, P < .001), D (β = 0.47, P = .05), F (β = 0.86, P < .001), and G (β = 1.00, P < .001) had higher scores than did hospitals using other vendors. No EHR vendors were associated with better performance on HAC score. Hospitals using vendors A (β = 0.03, P < .001), B (β = 0.03, P < .001), E (β = 0.03, P = .02), F (β = 0.03, P < .001), and G (β = 0.04, P = .01) were associated with better HRRP ratio compared with hospitals using other vendors. Full regression results are available in Supplementary Table A3.
Table 2.
Association between hospital EHR vendor and quality score
Leapfrog CPOE/EHR Quality Score | CMS Hospital Compare Star Ratings | Hospital-Acquired Condition score | Hospital Readmission Reduction Program score^a
Coefficient | P Value | Coefficient | P Value | Coefficient | P Value | Coefficient | P Value
EHR vendor | |
Other vendor | Ref. |
Vendor A | 4.5 | .16 | 1.00 | <.001 | −0.05 | .64 | 0.03 | <.001 |
Vendor B | −2.2 | .50 | 0.57 | <.001 | −0.03 | .80 | 0.03 | <.001 |
Vendor C | 1.0 | .76 | 0.67 | <.001 | −0.02 | .87 | 0.01 | .17 |
Vendor D | 0.7 | .87 | 0.47 | .05 | −0.01 | .96 | 0.02 | .15 |
Vendor E | −6.5 | .12 | 0.29 | .22 | 0.07 | .62 | 0.03 | .02 |
Vendor F | −6.5 | .19 | 0.86 | <.001 | −0.26 | .29 | 0.03 | <.001 |
Vendor G | 0.6 | .90 | 1.00 | <.001 | −0.36 | .06 | 0.04 | .01 |
Vendor H | −6.0 | .33 | 0.24 | .73 | 0.24 | .42 | 0.02 | .55 |
CMS: Centers for Medicare and Medicaid Services; CPOE: computerized provider order entry; EHR: electronic health record.
^a HRRP is reverse scored.
Finally, EHR vendor explained 4.0% of the variation in Leapfrog CPOE score, 6.3% of Hospital Compare ratings, 1.2% of HAC score, and 7.6% of HRRP ratio. Hospital characteristics explained 3.2% of the variation in the Leapfrog CPOE/EHR score, 6.3% of the variation in Hospital Compare ratings, 4.0% of HAC scores, and 10.1% of the HRRP ratio. Most of the variation in all 4 quality measures was unexplained by either EHR vendor or the observed hospital characteristics that we included in our analysis (Figure 2).
Figure 2.
Percentage of quality score variation explained by observable vs unobservable hospital characteristics.
The bar graph shows proportion of variation in each quality measure explained by observable hospital demographics, electronic health record (EHR) vendor, and proportion of variation unexplained by those observable characteristics, based on recovered partial R2 from multivariable regression. CMS: Centers for Medicare and Medicaid Services; CPOE: computerized provider order entry.
DISCUSSION
We examined variation in 4 hospital quality measures by EHR vendor and found substantial heterogeneity both across vendors, as some vendors scored higher in some metrics, as well as within vendors, as nearly all vendors had institutions with a large range of scores on all 4 measures. No vendor had higher quality across all 4 measures, and the 2 largest vendors (vendor A and vendor B) were not always associated with the highest-quality reporting scores. Perhaps most importantly, however, we found evidence that suggests that only a small portion of the variation in quality across hospitals was explained by either EHR vendor choice or observed hospital characteristics. These results add to the growing body of literature that suggests while EHR vendor may be associated with certain domains of hospital quality, there is no one vendor that clearly dominates in this area, and most of the variation in quality scores is not explained by EHR vendor choice. Further, our study provides new evidence against the idea that the highest-quality hospitals have congregated to 1 or 2 specific EHR vendor systems.
While it is unsurprising that some quality measures are more sensitive to EHR vendor choice than others, we were surprised that performance on the Leapfrog CPOE/EHR Evaluation was not associated with EHR vendor in our multivariable model. One possibility is that for CPOE with clinical decision support, the hospital's choice of medication reference database may matter more than its EHR vendor, or that how the hospital implements the decision support is the most important factor. Non-EHR software such as medication reference databases, along with the level of local customization and implementation, is likely to be especially important given that previous studies of the Leapfrog CPOE/EHR Evaluation have shown significant within-vendor variation in performance.6 It is also possible that, as the Leapfrog evaluation has been ongoing for over a decade, EHR vendors who were initially lagging have improved their products up to a baseline level of performance.6 This theory is supported by our robustness test analyzing performance from 2009 to 2018, which found that in several earlier years EHR vendors were associated with improved performance on the Leapfrog CPOE/EHR Evaluation. While we expected HAC score to be relatively insensitive to EHR vendor, given the lack of obvious pathways for EHRs to influence the prevalence of HACs, it is notable that 2 other relatively broad measures of quality were associated with several different EHR vendors: Hospital Compare star ratings and HRRP score. Both of these measures reflect dimensions of both outcome and process quality.22 Even more interesting is that while several of the same EHR vendors were associated with better performance on both measures, several vendors were positively associated with one but not the other. This is also clear in the unadjusted data. While hospitals using the most popular EHR vendor, vendor A, had the highest mean score on the Leapfrog CPOE/EHR Evaluation and HRRP ratio, hospitals using much smaller vendors had the highest mean score for Hospital Compare stars (vendor G) and HAC score (vendor F). If it were true that high-quality hospitals were selecting into the market-leading vendors, we would expect to see a similar vendor hierarchy across the quality measures. The fact that we do not observe this suggests that the sentiment that all high-quality hospitals are moving toward a market-dominant vendor may not be empirically true.
Broadly, quality measurement is still not mature. Some quality improvement is done through the EHR, but much is not, and the strength of the quality organization and culture at an institution is likely to be associated with the quality delivered.23,24 In addition, the current quality metrics cover only a fraction of the overall picture of quality. Also, individual institutions need to deliver quality data to a large number of organizations, which represents a substantial measurement burden. Ideally, in the future, most quality measurement will be done through the electronic record as a part of routine care, measures will cover a high proportion of care, and they will address domains such as equity as well as safety and quality. It should also be possible to send that information to a limited number of central organizations that will aggregate it, enable comparisons, and make data available publicly. That is far from the situation today.
Our findings have important relevance to policymakers as well as practitioners and researchers. For policymakers, despite ongoing EHR market consolidation, many hospitals have achieved high performance with smaller EHR vendors, which may alleviate concerns that hospitals using lower-cost EHRs are unable to score well on existing quality measures. For practitioners, our results add to existing literature suggesting that while EHR vendor may have an impact, vendor choice explains only a small portion of the variation in hospital quality. Hospital leaders looking to improve quality should consider investing resources in developing a local team to manage decision support in an ongoing way or in building a safety culture,23,25 rather than feeling pressured to switch to a market-dominant EHR. Our results also run counter to the narrative that "the best hospitals choose the leading vendor," suggesting that observed associations between EHR vendor and quality are not simply selection effects. Future research should examine how concrete differences in EHR functionality and design across vendors impact clinician work and quality of care, and how a more accurate and complete assessment of quality can be extracted from EHR data.
Limitations
Our study should be interpreted with several important limitations in mind. First, hospital quality has many dimensions, and while we selected 4 measures we believe cover a broad array of those dimensions (including structure, process, and outcome quality),22 they are not completely representative of all the components of quality. Second, our study is cross-sectional and cannot identify a causal effect of EHR vendor choice on quality performance. However, our primary goal is to show variation within and across hospitals by EHR vendor, rather than focusing on the impact of any specific vendor. Third, owing to our data use agreement, we are unable to reveal the EHR vendor names, though we believe our results showing that each major vendor has a range of quality performance in each measure, and that the highest quality hospitals have not necessarily flocked to a single vendor, are of significant interest to the informatics and quality community. Fourth, hospitals that participate in voluntary quality assessments such as the Leapfrog CPOE/EHR Evaluation may not be representative of all hospitals, and it is possible that EHR vendor effects are stronger among hospitals not participating in the Leapfrog assessment. Finally, we compare quality performance across EHR vendors and do not evaluate any hospitals not using EHRs or compare quality performance from the pre–Health Information Technology for Economic and Clinical Health Act era.
CONCLUSION
In examining the association between EHR vendor and hospital performance in 4 domains of quality measurement (the Leapfrog CPOE/EHR Evaluation, Hospital Compare star ratings, HAC score, and HRRP ratio), our study finds compelling evidence that while EHR vendor may be associated with certain measures of hospital quality, there are high-performing hospitals using many different vendors, there is significant variation within vendors, and the best-performing vendor varies across quality measures. Additionally, EHR vendor choice explains only a small fraction of the variation in all 4 quality measures. These results show that top performance on quality measures can be achieved with any EHR vendor, and that high-quality hospitals have not necessarily coalesced around a small subset of market-leading vendors.
FUNDING
This study was supported by grant number R01HS023696 from the Agency for Healthcare Research and Quality. The funder had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
AUTHOR CONTRIBUTIONS
AJH, MK, DC, and DWB conceived the study concept. AJH and MK conceived the study design. AJH was involved in data analysis and visualization. AJH, MK, DC, and DWB wrote the manuscript. All authors reviewed and edited the manuscript. All authors approved the final manuscript.
SUPPLEMENTARY MATERIAL
Supplementary material is available at Journal of the American Medical Informatics Association online
DATA AVAILABILITY STATEMENT
Several data sources underlying the study are publicly available, including CMS Hospital Compare star ratings, HAC score, and HRRP ratio. American Hospital Association data are available for purchase from the American Hospital Association. Leapfrog EHR/CPOE Score data were provided by The Leapfrog Group, and requests to share data must be approved by the data provider.
COMPETING INTERESTS
DC has received grants from the Gordon and Betty Moore Foundation and Robert Wood Johnson Foundation and served as an employee of Pascal Metrics, a federally certified patient safety organization, outside the submitted work. DWB has received personal fees from EarlySense and CDI NEGEV; equity from Valera Health, CLEW, MDClone, and Aesop; and research funding from IBM Watson Health outside the submitted work. No other disclosures were reported.
REFERENCES
1. Blumenthal D. Launching HITECH. N Engl J Med 2010; 362 (5): 382–5.
2. Adler-Milstein J, Jha AK. HITECH Act drove large gains in hospital electronic health record adoption. Health Aff (Millwood) 2017; 36 (8): 1416–22.
3. Bates DW, Gawande AA. Improving safety with information technology. N Engl J Med 2003; 348 (25): 2526–34.
4. Kuperman GJ, Bobb A, Payne TH, et al. Medication-related clinical decision support in computerized provider order entry systems: a review. J Am Med Inform Assoc 2007; 14 (1): 29–40.
5. Lin SC, Jha AK, Adler-Milstein J. Electronic health records associated with lower hospital mortality after systems have time to mature. Health Aff (Millwood) 2018; 37 (7): 1128–35.
6. Classen DC, Holmgren AJ, Co Z, et al. National trends in the safety performance of electronic health record systems from 2009 to 2018. JAMA Netw Open 2020; 3 (5): e205547.
7. Holmgren AJ, Co Z, Newmark L, et al. Assessing the safety of electronic health records: a national longitudinal study of medication-related decision support. BMJ Qual Saf 2020; 29 (1): 52–9.
8. Leung AA, Keohane C, Lipsitz S, et al. Relationship between medication event rates and the Leapfrog computerized physician order entry evaluation tool. J Am Med Inform Assoc 2013; 20 (e1): e85–e90.
9. Metzger J, Welebob E, Bates DW, et al. Mixed results in the safety performance of computerized physician order entry. Health Aff (Millwood) 2010; 29 (4): 655–63.
10. Monegain B. Cerner has almost double EHR global market share of closest rival Epic, Kalorama says. Healthcare IT News. 2018. https://www.healthcareitnews.com/news/cerner-has-almost-double-ehr-global-market-share-closest-rival-epic-kalorama-says. Accessed July 7, 2020.
11. Holmgren AJ, Adler-Milstein J, McCullough J. Are all certified EHRs created equal? Assessing the relationship between EHR vendor and hospital meaningful use performance. J Am Med Inform Assoc 2018; 25 (6): 654–60.
12. Tutty MA, Carlasare LE, Lloyd S, et al. The complex case of EHRs: examining the factors impacting the EHR user experience. J Am Med Inform Assoc 2019; 26 (7): 673–7.
13. The Leapfrog Group annual survey overview. 2019. http://www.leapfroggroup.org/survey-materials/survey-overview. Accessed February 24, 2019.
14. Jha AK, Orav EJ, Ridgway AB, et al. Does the Leapfrog program help identify high-quality hospitals? Jt Comm J Qual Patient Saf 2008; 34 (6): 318–25.
15. Kilbridge PM, Welebob EM, Classen DC. Development of the Leapfrog methodology for evaluating hospital implemented inpatient computerized physician order entry systems. Qual Saf Health Care 2006; 15 (2): 81–4.
16. Hospital Compare. https://data.cms.gov/dataset/HospitalCompare/azcw-iwjb. Accessed March 2, 2021.
17. Clancy CM. CMS's hospital-acquired condition lists link hospital payment, patient safety. Am J Med Qual 2009; 24 (2): 166–8.
18. Hospital-Acquired Condition Reduction Program. Provider Data Catalog. https://data.cms.gov/provider-data/dataset/yq43-i98g. Accessed March 2, 2021.
19. Gupta A. Impacts of performance pay for hospitals: the Readmissions Reduction Program (October 2017). Becker Friedman Institute for Research in Economics Working Paper No. 2017-07. doi:10.2139/ssrn.3054172. Accessed December 4, 2020.
20. Hospital Readmissions Reduction Program. Provider Data Catalog. https://data.cms.gov/provider-data/dataset/9n3s-kdb3. Accessed March 2, 2021.
21. Adler-Milstein J, Holmgren AJ, Kralovec P, et al. Electronic health record adoption in US hospitals: the emergence of a digital "advanced use" divide. J Am Med Inform Assoc 2017; 24 (6): 1142–8.
22. Donabedian A. Evaluating the quality of medical care. Milbank Q 1966; 44 (3): 166–206.
23. Singer SJ, Gaba DM, Geppert JJ, et al. The culture of safety: results of an organization-wide survey in 15 California hospitals. Qual Saf Health Care 2003; 12 (2): 112–8. doi:10.1136/qhc.12.2.112
24. Tsai TC, Jha AK, Gawande AA, et al. Hospital board and management practices are strongly related to hospital performance on clinical quality metrics. Health Aff (Millwood) 2015; 34 (8): 1304–11. doi:10.1377/hlthaff.2014.1282
25. Tucker AL, Edmondson AC. Why hospitals don't learn from failures: organizational and psychological dynamics that inhibit system change. Calif Manag Rev 2003; 45 (2): 55–72.