The rate at which electronic health records (EHRs) are being adopted in the United States appears to be increasing rapidly (Xierali et al. 2013). Of course, adoption is not an end in itself. Instead, the goal of increasing the use of EHRs is to improve quality, safety, and efficiency, and the goal of meaningful use is to improve the likelihood that EHRs will achieve these ends. EHRs should also play a major role in supporting quality measurement, both measurement for accountability (e.g., public reporting, accreditation, or pay-for-performance) and measurement for improvement itself (Chassin et al. 2010). In addition, many of these improvements require clinical decision support (CDS).
At present, EHRs do not include many of the functionalities needed to measure quality, although new measures that use electronic data are being developed. More measures that leverage these data are needed (Kern et al. 2013), especially measures suitable for accountability. Though Mark Chassin at the Joint Commission has promoted the use of accountability metrics for improvement (Chassin et al. 2010), organizations will also need metrics that can be used to improve care but that may not be appropriate for accountability, such as medication error rates. Furthermore, EHRs vary widely in the extent and types of decision support functions they include. Much of clinical decision support addresses quality improvement and incorporates tools that can measure providers' responses to decision support (e.g., measuring override rates of medication-related alerts or of suggestions to perform preventive tests).
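To make the measurement idea concrete, here is a minimal sketch, not drawn from any particular EHR or from the systems discussed in this commentary, of how a hospital might compute medication-alert override rates per provider from an alert log. The column names (provider_id, alert_type, action) and the toy data are assumptions for illustration only.

```python
# Minimal illustration (assumed schema, toy data): override rate of
# medication-related alerts per provider, where "override" means the
# provider dismissed the alert rather than acting on it.
import pandas as pd

alerts = pd.DataFrame(
    {
        "provider_id": ["A", "A", "B", "B", "B"],
        "alert_type": ["drug-drug", "allergy", "drug-drug", "dose", "allergy"],
        "action": ["override", "accept", "override", "override", "accept"],
    }
)

override_rate = (
    alerts.assign(overridden=alerts["action"].eq("override"))
    .groupby("provider_id")["overridden"]
    .mean()  # fraction of alerts each provider overrode
)
print(override_rate)
```

The same grouping could be applied by alert type rather than by provider, which would help identify alerts that are overridden so often that the alert logic itself may need refinement.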
The meaningful use requirements that offer financial incentives for adopting and using electronic health records in the United States have become an extremely powerful, albeit blunt, lever for influencing EHRs' functional content. These requirements already cover quality measurement, the implementation of decision support, and the assessment of providers' responses to it, although they intentionally do not spell out what clinical decision support should be included. Nonetheless, many vendors have objected in public commentaries to some of these meaningful use requirements, particularly those that track the impact of decision support.
The adoption of EHRs in the UK and the United States has followed quite different paths. In the UK, a large national project, Connecting for Health, which involved direct payment for EHRs, resulted in nearly universal adoption in primary care. It succeeded there because nearly the full cost was paid and because substantial additional payment was based on providers' performance on quality and outcomes, measured through EHRs under the Quality and Outcomes Framework. In contrast, the adoption of EHRs in secondary care has lagged far behind (Robertson, Bates, and Sheikh 2011). Despite its notable successes in primary care, the national program failed to deliver in many other ways, and because of these failures and cost overruns, it was heavily criticized and has now been largely dismantled. The United States has relied instead on incentives to ambulatory care providers and hospitals, both of which have been adopting EHRs quickly.
In this issue of The Milbank Quarterly, Mary Dixon-Woods and her colleagues evaluate the impact of using secondary data from an electronic prescribing and decision support system for specific indicators of care in a large British acute care hospital (Dixon-Woods et al. 2013). They conclude that the review of these data, coupled with interventions to support action, worked well, even though the original application was not designed to measure these indicators. The authors do not comment on how difficult it was to extract the necessary data or how the extraction was accomplished, whether by the hospital itself or in conversation with the vendor. They do note the possibility of adverse consequences if focusing on one aspect of safety means less scrutiny of other, less measured but equally important, issues.
The process the hospital carried out and the issues it assessed are notable as well. Most of the measures appear to have been process measures rather than outcomes, aimed more at improvement than at accountability. In the United States, even though much of the discussion about quality measurement has focused on outcomes, with computerization a hospital can readily measure a multitude of processes. This represents an important opportunity.
The hospital that Dixon-Woods and her colleagues evaluated targeted the behavior of individual providers, who were taken to task for repeated deviant actions. Although the behavior of individual providers is much easier to measure with computers, relatively little quality improvement in the United States has focused to date on this, particularly in nursing. Another difference at this UK hospital is that high-level management was closely involved, which was very helpful; such involvement would be extremely unusual in the United States.
Dixon-Woods and her colleagues termed the hospital's approach "technovigilance," which has an array of potential implications. With computerization, it should be possible to routinely assess the performance of many key processes, such as the medication use process, and how providers respond to clinical decision support. Because date-time stamps are routinely recorded at many points, it is also much easier to examine the frequency of delays. Such data can have a powerful effect on improvement, but in the rush to computerize, such tools have not yet become a routine component of most EHRs.
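As one illustration of how such date-time stamps might be used, the sketch below computes order-to-administration delays for medication doses and the share exceeding a one-hour threshold. The column names, the toy data, and the 60-minute cutoff are assumptions chosen for illustration, not features of any specific system or of the hospital studied.

```python
# Minimal illustration (assumed columns, toy data): delay between ordering and
# administering a medication dose, and the share of doses delayed beyond an
# arbitrary 60-minute threshold.
import pandas as pd

doses = pd.DataFrame(
    {
        "order_time": pd.to_datetime(
            ["2013-05-01 08:00", "2013-05-01 09:15", "2013-05-01 22:40"]
        ),
        "admin_time": pd.to_datetime(
            ["2013-05-01 08:35", "2013-05-01 11:05", "2013-05-01 23:10"]
        ),
    }
)

delay_minutes = (doses["admin_time"] - doses["order_time"]).dt.total_seconds() / 60
share_delayed = (delay_minutes > 60).mean()  # fraction of doses delayed > 1 hour
print(delay_minutes.tolist(), f"{share_delayed:.0%} of doses delayed more than an hour")
```

Because the same date-time stamps are generated as a byproduct of routine care, this kind of delay monitoring requires no additional data collection once the extraction is in place.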
The overall implication is that all hospitals will need tools like this, and soon. Under health care reform in the United States, organizations are being asked to improve efficiency, quality, and safety and also to measure an increasing number of metrics for accountability. If EHRs are to support this, they must either include or be linked to toolkits that make it easy for organizations to make these improvements.
From a policy perspective, the most important issue in health care reform over the long run is likely to be setting the goalposts in the right location so that providers who deliver safe, high-quality, and efficient care are rewarded. In the UK, the incentives appear mostly, though not perfectly, aligned. But in the United States, much of the reimbursement is still fee-for-service, although accountable care is beginning to take hold at variable rates across regions. If the incentives are aligned, organizations will be able to work with vendors to get the tools they need. In the nearer term, if the Office of the National Coordinator issues a fourth round of meaningful use targets, which is not yet certain, one requirement that would be extremely helpful to hospitals is the inclusion of tools that enable "technovigilance," which will clearly be a key part of future efforts to improve care.
References
- Chassin MR, Loeb JM, Schmaltz SP, Wachter RM. Accountability Measures—Using Measurement to Promote Quality Improvement. New England Journal of Medicine. 2010;363(7):683–88. doi:10.1056/NEJMsb1002320.
- Dixon-Woods M, Redwood S, Leslie M, Minion J, Martin GP, Coleman JJ. Improving Quality and Safety of Care Using "Technovigilance": An Ethnographic Case Study of Secondary Use of Data from an Electronic Prescribing and Decision Support System. The Milbank Quarterly. 2013;91(3):424–54. doi:10.1111/1468-0009.12021.
- Kern LM, Malhotra S, Barrón Y, Quaresimo J, Dhopeshwarkar R, Pichardo M, Edwards AM, Kaushal R. Accuracy of Electronically Reported "Meaningful Use" Clinical Quality Measures: A Cross-Sectional Study. Annals of Internal Medicine. 2013;158(2):77–83. doi:10.7326/0003-4819-158-2-201301150-00001.
- Robertson A, Bates DW, Sheikh A. The Rise and Fall of England's National Programme for IT. Journal of the Royal Society of Medicine. 2011;104(11):434–35. doi:10.1258/jrsm.2011.11k039.
- Xierali IM, Hsiao C-J, Puffer JC, Green LA, Rinaldo JCB, Bazemore AW, Burke MT, Phillips RL Jr. The Rise of Electronic Health Record Adoption among Family Physicians. Annals of Family Medicine. 2013;11(1):14–19. doi:10.1370/afm.1461.