In this issue of Health Services Research, Johnston and colleagues present a patch to the New York University Emergency Department Algorithm (EDA), “the most widely used tool for retrospectively assessing the probability that ED visits are urgent, preventable, or optimally treated in an ED, using administrative data” (Johnston et al. 2017). The patch incorporates International Classification of Diseases, 9th Revision, Clinical Modification (ICD‐9‐CM) codes that were not in existence when the EDA was developed. The authors demonstrate that this update eliminates the increase in “unclassified” ED visits over time that is attributable to these new ICD‐9‐CM codes. They have also developed a “beta” version for use with ICD‐10 codes. Although the authors succeed with this well‐designed patch, other concerns about the EDA continue to limit its usefulness.
The EDA was designed as a tool for health services researchers to make inferences about access to primary care by studying patterns of emergency department (ED) use. In the words of its developers, Billings et al., “If uninsured patients who cannot pay for treatment out‐of‐pocket are turned away by neighborhood clinics facing cost pressures, they will be forced to rely more on emergency departments for routine care. This would likely alter the diagnostic mix of uninsured patients in EDs, with less serious, nonemergent cases representing a greater share of the care provided. With an accurate gauge of this shift in ED utilization patterns, researchers would have a powerful tool to understand how changes in the health care delivery system are affecting low‐income, uninsured patients” (Billings, Parikh, and Mijanovich 2000b). As Johnston et al. state, the EDA has become an extremely popular tool for classifying ED visits, in part because of its ease of use. It can be downloaded at no charge. Because it requires only the primary discharge diagnosis, it can be used to analyze a large administrative dataset of ED visits in an afternoon. Researchers, data analysts, and policy makers are drawn to the apparent simplicity of its output.
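To make concrete how little is needed to run the algorithm, the sketch below applies an EDA-style lookup table to a hypothetical administrative extract. The file names, column names, and category labels are assumptions for illustration, not the layout of the actual downloadable EDA files; the general pattern is simply a merge on the primary discharge diagnosis followed by averaging the probability columns.

```python
# Minimal sketch of applying an EDA-style lookup to an administrative ED dataset.
# File and column names are hypothetical; the distributed EDA files use their own layout.
import pandas as pd

# One row per ED visit, carrying only the primary discharge diagnosis (ICD-9-CM).
visits = pd.read_csv("ed_visits.csv", dtype={"primary_dx": str})

# Lookup table: one row per ICD-9-CM code, with the probability that a visit
# carrying that code falls into each EDA category.
eda = pd.read_csv("eda_lookup.csv", dtype={"icd9": str})
prob_cols = ["p_nonemergent", "p_emergent_pc_treatable",
             "p_emergent_ed_preventable", "p_emergent_ed_not_preventable"]

# Each visit inherits the probability profile of its primary diagnosis; codes
# absent from the lookup remain unclassified (NaN after the left merge).
classified = visits.merge(eda, how="left", left_on="primary_dx", right_on="icd9")

# Population-level summary: mean probability in each category and the share
# of visits the algorithm cannot classify.
summary = classified[prob_cols].mean()
summary["unclassified"] = classified["icd9"].isna().mean()
print(summary)
```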
However, the computational ease of the EDA obscures the complexity of the underlying conceptual model and the methodological problems in its development (Lowe and Fu 2008; Feldman 2010). Many users of the EDA are unaware of the algorithm’s questionable validity and of its potential to yield faulty conclusions. These issues go well beyond what the proposed patch addresses.
The original development of the algorithm involved four steps (Billings, Parikh, and Mijanovich 2000b). First, emergency physicians were asked to classify a series of 5,700 ED visits as emergent versus nonemergent. The physicians did not review the original medical records but instead based their classifications on abstracted information on chief complaint, age, gender, duration of symptoms, vital signs, and past medical history. Data abstractors coded chief complaints as ICD‐9‐CM diagnosis codes. For example, some medical records with a chief complaint of “chest pain” were coded as ICD‐9 787.1, “heartburn.” Others were coded as ICD‐9 786.50, “chest pain not otherwise specified.” Emergency physician reviewers classified “heartburn” as nonemergent but classified “chest pain NOS” as emergent (J. Billings, personal communication, September 2003). This approach likely introduced misclassification, because it was highly sensitive to how the chief complaint was coded using ICD‐9‐CM diagnoses.
Second, Billings et al. determined which emergent cases were “primary care treatable,” usually based on whether the procedures and resources used in the ED are typically available in a primary care setting. The developers of the EDA have not published how they made this determination, which makes it difficult to test the reproducibility of their results.
The third step was designed to allow the use of the algorithm with administrative datasets that contain a discharge diagnosis but no other clinical data. In this step, the chief complaints were “mapped” to ICD‐9‐CM discharge diagnoses. For each primary discharge diagnosis, the proportion of cases falling into each of the urgency categories was determined. For example, if there were ten cases with a given discharge diagnosis and four were considered nonemergent, two emergent but primary care treatable, and four emergent requiring the ED, then the probabilities assigned to that ICD‐9‐CM diagnosis would be 40 percent nonemergent, 20 percent emergent, primary care treatable, and 40 percent emergent requiring the ED.
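The arithmetic of this step can be made concrete with a minimal sketch that reproduces the ten-case example just described. The category labels are hypothetical placeholders rather than the labels used in the distributed EDA files.

```python
# Toy reproduction of the mapping step for a single discharge diagnosis:
# the probabilities assigned to the code are simply the observed proportions
# of reviewer classifications among the study records carrying that code.
from collections import Counter

# The ten hypothetical cases from the text: four nonemergent, two emergent but
# primary care treatable, four emergent requiring the ED.
classifications = (["nonemergent"] * 4
                   + ["emergent_pc_treatable"] * 2
                   + ["emergent_ed_needed"] * 4)

counts = Counter(classifications)
total = sum(counts.values())
probabilities = {category: n / total for category, n in counts.items()}
print(probabilities)
# {'nonemergent': 0.4, 'emergent_pc_treatable': 0.2, 'emergent_ed_needed': 0.4}
```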
While this approach may be appealing, the process of developing the EDA involved the classification of 5,700 records spanning 659 different ICD‐9‐CM diagnosis codes. As a result, only 8.6 records (5,700/659) on average were available to classify each ICD‐9‐CM code. If, of nine cases with a given ICD‐9‐CM code, four (44 percent) fell into the nonemergent category, the 95 percent confidence interval would range from 14 percent to 79 percent. With this level of imprecision, it is not surprising to see inconsistencies in classification. To cite one example, streptococcal septicemia is assigned a 23 percent probability of being emergent, primary care treatable; staphylococcal septicemia is assigned a 100 percent probability of being emergent, primary care treatable; and Staphylococcus aureus septicemia is unclassified.
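The quoted imprecision is straightforward to reproduce. The sketch below uses an exact (Clopper-Pearson) binomial interval, one standard choice that yields approximately 14 to 79 percent for four nonemergent cases among nine.

```python
# Exact (Clopper-Pearson) 95% confidence interval for 4 nonemergent visits
# among 9 records sharing an ICD-9-CM code; other interval methods give
# similarly wide ranges at this sample size.
from scipy.stats import beta

k, n, alpha = 4, 9, 0.05
lower = beta.ppf(alpha / 2, k, n - k + 1)      # ~0.14
upper = beta.ppf(1 - alpha / 2, k + 1, n - k)  # ~0.79
print(f"point estimate {k / n:.0%}, 95% CI {lower:.0%} to {upper:.0%}")
# point estimate 44%, 95% CI 14% to 79%
```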
The final step was to classify “emergent/ED care needed” cases as “preventable/avoidable” or “not preventable/not avoidable” using a previously developed method (Billings et al. 1993). That method was designed for classifying inpatient admissions and had not been validated for outpatient visits.
Evidence for the validity of the EDA is limited. The original publications introducing the EDA (Billings, Parikh, and Mijanovich 2000a,b; Billings 2003, 2004) briefly described the methodology used to develop the EDA and presented some results obtained by applying it, but they did not attempt to validate the algorithm against an external criterion standard. To my knowledge, the first attempt at validation was published by my research group. We applied the EDA to 43 months of data from 22 Oregon EDs. At a time when cutbacks in Oregon's Medicaid expansion program led to major shifts in access to care and in ED utilization, we tested the EDA's ability to detect these changes. Despite large changes in access as measured by other instruments in the ED setting and elsewhere, changes in “signal” from the EDA were minimal (Lowe and Fu 2008). A more recent publication used mathematical simulation to analyze the performance of the EDA in detecting differences in utilization patterns across hypothetical ED populations. It found that even large changes in access to care would generate only small changes in the output of the EDA, concluding, “The EDA is insufficiently sensitive to changes in ED utilization patterns to be useful in assessing interventions to change them” (Jones et al. 2013).
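A toy calculation, not the published simulation, illustrates why this insensitivity arises: because every diagnosis contributes probability weight to every category, even a sizable shift in case mix toward lower-acuity diagnoses moves the aggregate EDA output only modestly. The diagnosis profiles and case-mix shift below are invented for illustration.

```python
# Toy illustration of signal dilution under diagnosis-level probability weights.
# The two diagnosis groups and their P(nonemergent) values are invented.
p_nonemergent = {"mostly_nonemergent_dx": 0.7, "mostly_emergent_dx": 0.3}

def aggregate_nonemergent(case_mix):
    """Expected nonemergent share of ED visits for a given case mix."""
    return sum(share * p_nonemergent[dx] for dx, share in case_mix.items())

baseline = {"mostly_nonemergent_dx": 0.5, "mostly_emergent_dx": 0.5}
after = {"mostly_nonemergent_dx": 0.6, "mostly_emergent_dx": 0.4}  # 10-point case-mix shift

print(aggregate_nonemergent(baseline))  # 0.50
print(aggregate_nonemergent(after))     # 0.54: only a 4-point change in the aggregate output
```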
Three other studies support the validity of the EDA, but each has limitations. Two (Ballard et al. 2010; Gandhi and Sabik 2014) found that ED visits characterized as emergent were more likely to result in hospitalization or death. However, in each case, the authors modified the EDA extensively, in ways that could alter its performance (Lowe 2010). Furthermore, their choice of hospitalization or death as the criterion standard ignores the many ED visits that are emergencies but do not require hospital admission, including those in which treatment in the ED averts the need for hospitalization. A third study used data from five safety‐net hospitals in Houston, Texas, comparing the rates of primary care related ED visits in patients’ ZIP codes of residence with several predictor variables (Begley et al. 2006). The results of this ecological study were equivocal, showing a strong correlation between the EDA and rates of uninsurance and poverty, a weak correlation between the EDA and the federal Index of Medical Underservice, and minimal change in the EDA over time. A valid algorithm for predicting nonemergent ED visits requires a more substantive patch than what Johnston et al. offer in their study.
Even if the EDA were a valid and reliable tool, its purpose has been misinterpreted frequently. The EDA has been used by numerous health departments and other policy-making organizations to facilitate policy decisions and to plan interventions to reduce “unnecessary ED visits” (Massachusetts Division of Health Care Finance and Policy 2004; OMPRO 2005; Greci 2010; Jones et al. 2013). It has also been proposed as a basis for denying payment for “inappropriate ED visits” (Kellermann and Weinick 2012). These uses of the EDA ignore its developers’ caution: “The algorithm is not intended as a triage tool or a mechanism to determine whether ED use is appropriate for required reimbursement by a managed care plan… Nor was it intended to assess appropriateness of ED utilization” (Billings, Parikh, and Mijanovich 2000b).
The resulting policies place patients denied access to ED care at considerable risk. Interventions to reduce “unnecessary ED visits” based on chief complaint are problematic. For example, Raven et al. ascertained the chief complaints of ED patients found to have a primary care treatable diagnosis based on the EDA. When other patients with these same chief complaints were evaluated, they often had other diagnoses, including some that required emergency care or hospital admission (Raven et al. 2013). Kellermann and Weinick reviewed a list of diagnoses for which Washington State’s Medicaid program had proposed denying payment to EDs. For many of the diagnoses they cited, the EDA assigns probabilities near 100 percent that they are nonemergent or emergent, primary care treatable. Kellermann and Weinick (2012) pointed out the risks associated with delaying care for some of these diagnoses.
There are alternatives to the EDA that are methodologically sound, have greater face validity, and are easier for policy makers and clinicians to interpret. The simplest approach is to study overall ED visit rates. Overall ED utilization rates vary with differences in access across populations and geographic regions, and they change as access changes over time (Lowe et al. 2005, 2008, 2009; Lowe, Fu, and Gallia 2010; Heavrin et al. 2011; Cheung et al. 2012). Policy makers can easily understand that, as long as the reduction is not achieved by denying access to ED care, a decline in overall ED utilization reflects improved access to primary care.
Another approach to using ED data to study access to care is to identify clinically meaningful subsets of ED visits. ED utilization for chronic medical conditions appears to reflect access to medical care (Oster and Bindman 2003). A concern about lack of access to oral health care led to a study finding that 2.5 percent of Oregon ED visits were for nontraumatic dental conditions, with Medicaid and uninsured ED patients disproportionately likely to have dental conditions (Sun et al. 2015). Cutbacks in Oregon's Medicaid expansion program that eliminated outpatient behavioral health care coverage were associated with a doubling in the number of uninsured ED visits for drug, alcohol, and psychiatric conditions (Lowe et al. 2008). In each of these examples, it is easy for clinicians and methodologists to understand what subset of ED visits is being studied. It is also easy for policy makers to contemplate potential solutions to the problems identified, without the confusion created by common misinterpretations of EDA results.
Users of the EDA, whether the original version or the newer modification developed by Johnston et al., must be aware of its limited external validation and its methodological problems. Researchers must consider the risk that policy makers will misinterpret its output, reaching false conclusions about the potential for monetary savings through programs that put patients with emergency conditions at risk. As described above, there are alternative methods for studying ED utilization that are methodologically sound, are more easily interpreted, and offer clearer policy implications.
Acknowledgments
Disclosures: In 2012 the author served as a consultant to the American College of Emergency Physicians regarding the appropriate use of emergency departments. In 2013 his research received funding from the Emergency Medicine Foundation, which is affiliated with the American College of Emergency Physicians (ACEP).
Disclaimer: None.
References
- Ballard, D. W., Price M., Fung V., Brand R., Reed M. E., Fireman B., Newhouse J. P., Selby J. V., and Hsu J. 2010. “Validation of an Algorithm for Categorizing the Severity of Hospital Emergency Department Visits.” Medical Care 48 (1): 58–63.
- Begley, C. E., Vojvodic R. W., Seo M., and Burau K. 2006. “Emergency Room Use and Access to Primary Care: Evidence from Houston, Texas.” Journal of Health Care for the Poor and Underserved 17 (3): 610–24.
- Billings, J. 2003. “Tools for Monitoring the Health Care Safety Net: Using Administrative Data to Monitor Access, Identify Disparities, and Assess Performance of the Safety Net” [accessed on February 22, 2003]. Available at http://archive.ahrq.gov/data/safetynet/toolsoft.htm
- Billings, J. 2004. “Interactive Tool and Software: Safety Net Monitoring Initiative” [accessed on February 22, 2004]. Available at http://www.ahrq.gov/data/safetynet/toolsoft.htm
- Billings, J., Parikh N., and Mijanovich T. 2000a. “Emergency Department Use in New York City: A Substitute for Primary Care?” Issue Brief (Commonwealth Fund) (433): 1–5.
- Billings, J., Parikh N., and Mijanovich T. 2000b. “Emergency Room Use: The New York Story.” Issue Brief (Commonwealth Fund) (434): 1–12.
- Billings, J., Zeitel L., Lukomnik J., Carey T. S., Blank A. E., and Newman L. 1993. “Impact of Socioeconomic Status on Hospital Use in New York City.” Health Affairs 12 (1): 162–73.
- Cheung, P. T., Wiler J. L., Lowe R. A., and Ginde A. A. 2012. “National Study of Barriers to Timely Primary Care and Emergency Department Utilization Among Medicaid Beneficiaries.” Annals of Emergency Medicine 60 (1): 4–10.e2.
- Feldman, J. 2010. “The NYU Classification System for ED Visits: WSHA Technical Concerns” [accessed on November 14, 2010]. Available at http://wsha-archive.seattlewebgroup.com/files/169/NYU_Classification_System_for_EDVisits.pdf
- Gandhi, S. O., and Sabik L. 2014. “Emergency Department Visit Classification Using the NYU Algorithm.” American Journal of Managed Care 20 (4): 315–20.
- Greci, L. K. 2010. “Issue Brief: Profile of Emergency Department Visits Not Requiring Inpatient Admission to a Connecticut Acute Care Hospital, Fiscal Year 2006–2009.” Connecticut Office of Health Care Access.
- Heavrin, B. S., Fu R., Han J. H., Storrow A. B., and Lowe R. A. 2011. “An Evaluation of Statewide Emergency Department Utilization Following Tennessee Medicaid Disenrollment.” Academic Emergency Medicine 18 (11): 1121–8.
- Johnston, K. J., Allen L., Melanson T. A., and Pitts S. R. 2017. “A ‘Patch’ to the NYU Emergency Department Visit Algorithm.” Health Services Research 52 (4): 1264–75.
- Jones, K., Paxton H., Hagtvedt R., and Etchason J. 2013. “An Analysis of the New York University Emergency Department Algorithm’s Suitability for Use in Gauging Changes in ED Usage Patterns.” Medical Care 51 (7): e41–e50.
- Kellermann, A. L., and Weinick R. M. 2012. “Emergency Departments, Medicaid Costs, and Access to Primary Care – Understanding the Link.” New England Journal of Medicine 366 (23): 2141–3.
- Lowe, R. A. 2010. “Comment on ‘Ballard DW, Price M, Fung V, et al. Validation of an Algorithm for Categorizing the Severity of Hospital Emergency Department Visits. Med Care. 2010;48(1):58–63.’” Medical Care 48 (5): 395.
- Lowe, R. A., and Fu R. 2008. “Can the Emergency Department Algorithm Detect Changes in Access to Care?” Academic Emergency Medicine 15 (6): 506–16.
- Lowe, R. A., Fu R., and Gallia C. A. 2010. “Impact of Policy Changes on Emergency Department Use by Medicaid Enrollees in Oregon.” Medical Care 48 (7): 619–27.
- Lowe, R. A., Localio A. R., Schwarz D. F., Williams S., Tuton L. W., Maroney S., Nicklin D., Goldfarb N., Vojta D. D., and Feldman H. I. 2005. “Association Between Primary Care Practice Characteristics and Emergency Department Use in a Medicaid Managed Care Organization.” Medical Care 43 (8): 792–800.
- Lowe, R. A., McConnell K. J., Vogt M. E., and Smith J. A. 2008. “Impact of Medicaid Cutbacks on Emergency Department Use: The Oregon Experience.” Annals of Emergency Medicine 52 (6): 626–34.
- Lowe, R. A., Fu R., Ong E. T., McGinnis P. B., Fagnan L. J., Vuckovic N., and Gallia C. 2009. “Community Characteristics Affecting Emergency Department Use by Medicaid Enrollees.” Medical Care 47 (1): 15–22.
- Massachusetts Division of Health Care Finance and Policy. 2004. Non‐Emergency and Preventable ED Visits. Analysis in Brief. Boston, MA.
- OMPRO. 2005. “Comparative Assessment Report: Emergency Department Utilization, Oregon Health Plan Managed Care Plans, 2002–2003.” Portland, OR: Oregon Department of Human Services, Office of Medical Assistance Programs.
- Oster, A., and Bindman A. B. 2003. “Emergency Department Visits for Ambulatory Care Sensitive Conditions: Insights into Preventable Hospitalizations.” Medical Care 41 (2): 198–207.
- Raven, M. C., Lowe R. A., Maselli J., and Hsia R. Y. 2013. “Comparison of Presenting Complaint vs Discharge Diagnosis for Identifying ‘Nonemergency’ Emergency Department Visits.” Journal of the American Medical Association 309 (11): 1145–53.
- Sun, B. C., Chi D. L., Schwarz E., Milgrom P., Yagapen A., Malveau S., Chen Z., Chan B., Danner S., Owen E., Morton V., and Lowe R. A. 2015. “Emergency Department Visits for Nontraumatic Dental Problems: A Mixed‐Methods Study.” American Journal of Public Health 105 (5): 947–55.