Author manuscript; available in PMC 2018 Sep 1.
Published in final edited form as: Circ Cardiovasc Qual Outcomes. 2017 Sep;10(9):e004180. doi: 10.1161/CIRCOUTCOMES.117.004180

Can Electronic Health Records Make Quality Measurement Fast and Easy?

Eric E. Adelman, James F. Burke
PMCID: PMC5658030  NIHMSID: NIHMS900908  PMID: 28912203

Electronic health records (EHRs) present key opportunities to improve the efficiency of quality reporting. An underappreciated aspect of quality measurement is the amount of effort that goes into acquiring and reporting quality data. From chart abstraction to formatting data so they can be shared with payers, accreditation agencies, and clinical staff, health systems spend tremendous sums tracking and reporting metrics. Electronic quality measures (eQMs) have the potential to automate much of this data collection and reporting. Freed from time-consuming chart abstraction, the staff who know these metrics best could instead partner with clinicians to improve patient care.

In this issue of Circulation: Cardiovascular Quality and Outcomes, Bravata et al1 developed and evaluated a series of eQMs abstracted electronically from the medical record for patients with minor stroke and transient ischemic attack (TIA). The authors developed 31 eQMs encompassing 15 domains of care for patients with minor stroke and TIA, aligned with national guidelines,2 clinical performance measures,3 and Joint Commission metrics.4 They then evaluated the agreement between these eQMs and the same measures abstracted manually in a random sample of 763 patients from 50 Veterans Health Administration (VHA) hospitals.
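To make this kind of comparison concrete: agreement between electronic and manual abstraction of a binary determination, such as eligibility for a measure, is conventionally summarized with percent agreement and a chance-corrected statistic such as Cohen's kappa. The sketch below is a minimal illustration of those two statistics; it is not the authors' validation code, and the patient-level data are invented.

```python
def percent_agreement(electronic, manual):
    """Fraction of patients for whom the two methods agree."""
    matches = sum(e == m for e, m in zip(electronic, manual))
    return matches / len(electronic)

def cohens_kappa(electronic, manual):
    """Chance-corrected agreement between two binary raters."""
    n = len(electronic)
    p_observed = percent_agreement(electronic, manual)
    # Marginal probability of a "yes" call from each method.
    p_e = sum(electronic) / n
    p_m = sum(manual) / n
    # Probability the two methods would agree by chance alone.
    p_chance = p_e * p_m + (1 - p_e) * (1 - p_m)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical eligibility calls (1 = eligible) for ten patients.
electronic = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
manual     = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]
print(percent_agreement(electronic, manual))  # 0.8
print(cohens_kappa(electronic, manual))       # ~0.52
```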

The authors found that for 16 of the 31 measures, electronic abstraction compared favorably to manual abstraction both for eligibility determination and for pass rate. The highest concordance was seen in administrative and laboratory data. Not surprisingly, eQMs struggled with data in free-text fields, such as preference-based medication refusal and outside-facility diagnostic testing. As a consequence, electronic abstraction of eligibility for clinically important measures, such as antithrombotic therapy by hospital day 2, performed poorly when compared with manual abstraction, because eligibility assessment requires judgments that are typically captured in free-text fields. However, when pass rates (the number of patients who met a measure divided by the number eligible) were evaluated, electronic abstraction was similar to manual abstraction for many measures, likely because eQMs generally performed well enough to evaluate pass rates on measures with high baseline success rates. Strengths of this work include the broad array of measures studied; a focus on eligibility for each measure, in addition to simply evaluating pass/fail status; the use of double manual abstraction to ensure accuracy; rigorous testing of the validity of the eQMs; and comparison of performance across sites.
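A toy calculation (invented numbers, not study data) shows why pass rates can converge even when eligibility determinations diverge: when the baseline success rate is high, the eligible patients an eQM misses pass at roughly the same rate as the patients it finds.

```python
# Invented numbers, not study data: an eQM that misses 20 of 100
# eligible patients can still report nearly the same pass rate,
# because the patients it does find succeed at a similar rate.
manual_eligible, manual_passed = 100, 95
electronic_eligible, electronic_passed = 80, 76

print(manual_passed / manual_eligible)          # 0.95
print(electronic_passed / electronic_eligible)  # 0.95
```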

As the authors note, while eQMs compared favorably with manual chart abstraction, there was a range of concordance, and some of the metrics with the highest concordance, such as hemoglobin A1c measurement, are not the most clinically valuable. eQM performance was often best for measures where the pass rate was already high; thus, the ability of eQMs to drive performance improvement in the absence of abstracted measures may be limited. One conceptually appealing approach may be to replace high-performing abstracted measures with eQMs to prevent backsliding, while focusing abstraction resources on the frontiers of quality. The applicability of this work to commercially available EHR products is somewhat uncertain, since it was done at VHA facilities using a narrowly disseminated EHR. Even though all VHA facilities use an integrated system, the authors still needed to draw on 6 databases to build the eQMs. These results are likely reproducible in other systems in principle, but reproducing them would require considerable effort.

The inclusion of TIA patients in this study is a potentially important advance for stroke quality measurement. TIA is common and represents a key opportunity to prevent stroke, and the exclusion of TIA patients from the existing stroke quality paradigm substantially limits its reach. One problem with including TIA patients alongside stroke patients in quality metrics is that TIA is a major diagnostic challenge. Interobserver agreement on what constitutes a TIA is limited,5, 6 differential use of MRI can lead to differential classification as stroke,7 and TIA diagnostic codes perform poorly.8 Impressively, the electronic criteria used by Bravata et al to identify TIA patients were robust when compared with manual chart review: only 1% of EHR-based TIA diagnoses were reclassified to a diagnosis other than stroke or TIA by chart review. If replicated in other studies, this finding identifies a major opportunity to include TIA patients in selected stroke quality metrics. Another potential virtue of combining TIA and stroke is that it limits the opportunity for gaming of existing quality measurement systems.9 Given the fuzzy clinical boundary between TIA and stroke, facilities currently have the theoretical capacity to differentially assign patients to one group or the other to suit their needs. For example, classifying an ambiguous transient episode that lasts more than 24 hours as a stroke, as opposed to a TIA, both increases reimbursement and reduces a facility's adjusted mortality, given the low risk of death in this condition.

As quality measures (and eQMs in particular) proliferate, we run the risk of being awash in metrics that are easy to generate but have limited clinical utility.10 Institutions will need to choose how to prioritize which quality measures to track and report. Broadly, this requires an understanding both of the marginal clinical utility of individual measures and of the resources necessary to measure them. On both counts, considerable research is needed. For eQMs, a key strategy for increasing their utility is to minimize the burden on clinicians by integrating quality measurement into the typical workflow. This may mean more emphasis on structured documentation rather than free-text entry, so that data can be "pulled" automatically, but such changes should be made in a way that does not disrupt patient interactions or the narrative flow and informational content of notes.11
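As a concrete illustration of what "pulling" a measure from structured data might look like, the sketch below checks a single patient against an antithrombotic-by-day-2 numerator using structured medication administration records. The record schema, field names, and drug list are hypothetical, and a production measure would also need the eligibility logic (contraindications, documented refusal) that this study found difficult to capture electronically.

```python
from datetime import datetime, timedelta

# Hypothetical drug list; a real measure would use a maintained value set.
ANTITHROMBOTICS = {"aspirin", "clopidogrel", "warfarin"}

def antithrombotic_by_day2(admit_time, med_administrations):
    """True if any antithrombotic was given within 2 days of admission."""
    deadline = admit_time + timedelta(days=2)
    return any(
        med["name"].lower() in ANTITHROMBOTICS and med["time"] <= deadline
        for med in med_administrations
    )

# Hypothetical structured records; real data would come from a
# pharmacy or medication administration table in the EHR.
admit = datetime(2017, 9, 1, 14, 30)
meds = [
    {"name": "Aspirin", "time": datetime(2017, 9, 2, 9, 0)},
    {"name": "Atorvastatin", "time": datetime(2017, 9, 2, 9, 0)},
]
print(antithrombotic_by_day2(admit, meds))  # True
```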

With the integration of eQMs into EHRs, we now have the theoretical capacity to identify gaps in quality of care while patients are still in the hospital. Rather than reacting to missed opportunities days to weeks later, EHRs open the door to a world where we can identify suboptimal care and address it in real time. Yet, in spite of this considerable potential, the present value of EHRs for quality measurement is distressingly limited. EHRs have been in use for almost 50 years, and the promise of real-time quality monitoring remains almost entirely unfulfilled. If EHRs are to eventually transform care, it is essential to understand why. As Bravata et al1 illustrate, the technology is not the problem. Rather, we would speculate that a central factor is that the incentives are not strong enough for hospitals, EHR developers, and the healthcare system at large to invest the time and energy needed to meaningfully optimize EHR-based quality measurement. To improve the quality of stroke care, it may be more important to get the incentives right than the technology.

The stroke quality paradigm of the future should pull reliable data electronically from the EHR and integrate it into reports that are used by frontline staff to monitor and address the needs of their patients. The measures should be clinically meaningful and not require excess documentation from clinical staff. Payers, quality improvement registries, and accreditation agencies should harmonize the measures they collect and encourage facilities to submit these data directly from the EHR. The work by Bravata and colleagues is an important first step in this direction.

Footnotes

Conflict of Interest Disclosures: None

References

1. Bravata DM, Myers LJ, Cheng E, Reeves M, Baye F, Yu Z, Damush T, Miech EJ, Sico J, Phipps M, Zillich A, Johanning J, Chaturvedi S, Austin C, Ferguson J, Maryfield B, Snow K, Ofner S, Graham G, Rhude R, Williams LS, Arling G. Development and Validation of Electronic Quality Measures to Assess Care for Patients with Transient Ischemic Attack and Minor Ischemic Stroke. Circ Cardiovasc Qual Outcomes. 2017;10:e003157. doi: 10.1161/CIRCOUTCOMES.116.003157.
2. Kernan WN, Ovbiagele B, Black HR, Bravata DM, Chimowitz MI, Ezekowitz MD, Fang MC, Fisher M, Furie KL, Heck DV, Johnston SC, Kasner SE, Kittner SJ, Mitchell PH, Rich MW, Richardson D, Schwamm LH, Wilson JA. Guidelines for the Prevention of Stroke in Patients With Stroke and Transient Ischemic Attack. Stroke. 2014;45:2160–2236. doi: 10.1161/STR.0000000000000024.
3. Smith EE, Saver JL, Alexander DN, Furie KL, Hopkins LN, Katzan IL, Mackey JS, Miller EL, Schwamm LH, Williams LS. Clinical Performance Measures for Adults Hospitalized With Acute Ischemic Stroke. Stroke. 2014;45:3472–3498. doi: 10.1161/STR.0000000000000045.
4. The Joint Commission. Primary Stroke Center Certification. https://www.jointcommission.org/certification/primary_stroke_centers.aspx. Accessed August 16, 2017.
5. Schrock JW, Glasenapp M, Victor A, Losey T, Cydulka RK. Variables Associated With Discordance Between Emergency Physician and Neurologist Diagnoses of Transient Ischemic Attacks in the Emergency Department. Ann Emerg Med. 2012;59:19–26. doi: 10.1016/j.annemergmed.2011.03.009.
6. Castle J, Mlynash M, Lee K, Caulfield AF, Wolford C, Kemp S, Hamilton S, Albers GW, Olivot JM. Agreement Regarding Diagnosis of Transient Ischemic Attack Fairly Low Among Stroke-Trained Neurologists. Stroke. 2010;41:1367–1370. doi: 10.1161/STROKEAHA.109.577650.
7. Burke JF, Kerber KA, Iwashyna TJ, Morgenstern LB. Wide variation and rising utilization of stroke magnetic resonance imaging: data from 11 States. Ann Neurol. 2012;71:179–185. doi: 10.1002/ana.22698.
8. Benesch C, Witter DM Jr, Wilder AL, Duncan PW, Samsa GP, Matchar DB. Inaccuracy of the International Classification of Diseases (ICD-9-CM) in identifying the diagnosis of ischemic cerebrovascular disease. Neurology. 1997;49:660–664. doi: 10.1212/wnl.49.3.660.
9. Mears A, Webley P. Gaming of performance measurement in health care: parallels with tax compliance. J Health Serv Res Policy. 2010;15:236–242. doi: 10.1258/jhsrp.2010.009074.
10. Kelly A, Thompson JP, Tuttle D, Benesch C, Holloway RG. Public Reporting of Quality Data for Stroke. Stroke. 2008;39:3367–3371. doi: 10.1161/STROKEAHA.108.518738.
11. Martin SA, Sinsky CA. The map is not the territory: medical records and 21st century practice. Lancet. 2016;388:2053–2056. doi: 10.1016/S0140-6736(16)00338-X.
