Health Services Research. 2017 Jul 19;52(4):1264–1276. doi: 10.1111/1475-6773.12638

A “Patch” to the NYU Emergency Department Visit Algorithm

Kenton J Johnston 1, Lindsay Allen 2, Taylor A Melanson 2, Stephen R Pitts 3,4
PMCID: PMC5517669  PMID: 28726238

Abstract

Objective

To document erosion in the New York University Emergency Department (ED) visit algorithm's capability to classify ED visits and to provide a “patch” to the algorithm.

Data Sources

The Nationwide Emergency Department Sample.

Study Design

We used bivariate models to assess whether the percentage of visits unclassifiable by the algorithm increased due to annual changes to ICD‐9 diagnosis codes. We updated the algorithm with ICD‐9 and ICD‐10 codes added since 2001.

Principal Findings

The percentage of unclassifiable visits increased from 11.2 percent in 2006 to 15.5 percent in 2012 (p < .01), because of new diagnosis codes. Our update improves the classification rate by 43 percent in 2012 (p < .01).

Conclusions

Our patch significantly improves the precision and usefulness of the most commonly used ED visit classification system in health services research.

Keywords: Emergency department visit algorithm, emergency department use, health services research


The New York University (NYU) Emergency Department (ED) visit algorithm is the most widely used tool for retrospectively assessing the probability that ED visits are urgent, preventable, or optimally treated in an ED, using administrative data (Billings, Parikh, and Mijanovich 2000b; Feldman 2010). Besides being used in or cited by hundreds of studies, the algorithm has been instrumental in evaluating the impact of at least three major health policy changes: the Massachusetts health reform law (Miller 2012), the Oregon Health Insurance Experiment (Taubman et al. 2014), and the expansion of health insurance to young adults under the Affordable Care Act (Antwi et al. 2015).

The algorithm's popularity hinges on several factors. First, rising ED demand and national capacity constraints have prompted calls for more research on the ED delivery system (Institute of Medicine 2007) and made the measurement and reduction of ED use for nonemergent and ambulatory care‐sensitive conditions prominent policy targets (Joynt et al. 2013; Gandhi, Grant, and Sabik 2014). Second, employing the algorithm requires only the primary discharge diagnosis codes from patient visit records, making it the only comprehensive classification mechanism usable by health services researchers with limited administrative data. Finally, the algorithm is free for anyone to download and use.

The clinical precision of the algorithm itself, however, has not kept pace with its popularity. The algorithm was created with ICD‐9 diagnosis codes that were current as of 2001. Meanwhile, the ICD‐9/ICD‐10 Coordination and Maintenance Committee has released updates to the body of diagnosis codes every year (Centers for Medicare and Medicaid Services 2014), yet the algorithm was never updated to incorporate the new diagnosis codes into its classification of ED visits. As the new diagnosis codes are applied to ED visits, the percentage that the algorithm cannot classify increases substantially. The Washington State Hospital Association noted that only 12 percent of its ED visits were unclassifiable by the algorithm in 2006 but that this figure had risen to 19 percent by 2009 (Feldman 2010).

As the percentage of unclassifiable ED visits rises, important information about changing patterns of ED use is obscured. Because the algorithm is used to evaluate local health system resource use as well as national‐ and state‐level health policy changes, there is a sizable risk of inaccurate evaluation and suboptimal distribution of system resources. With these concerns in mind, we created a “patch” that improves the usefulness of the original algorithm for retrospective research on ED visit patterns coded in ICD‐9 over the period 2001–2015. Given the landmark health policy changes that have occurred under the ICD‐9 system as part of the Affordable Care Act (ACA) over the past several years, this patch comes at an especially crucial time for health services research. To extend the patch's utility going forward, we also offer a “beta” version of the patched algorithm for use as ICD‐10 data become available.

In this Methods Brief, we describe how we created the patch and evaluate its effectiveness in improving the sensitivity of the algorithm using a national sample of ED visits. As supplemental digital content, we provide free SAS and Stata macros with the new version of the algorithm (see Appendix SA3 and SA4). We note that although two studies have independently validated the algorithm based on its ability to differentiate the severity of ED visits (Ballard et al. 2010; Gandhi and Sabik 2014), other studies have identified problems with the algorithm due to insufficient sensitivity to changes in ED utilization patterns (Jones et al. 2013), lack of responsiveness to changes in access to ambulatory care alternatives apart from the ED (Lowe and Fu 2008), and failure to keep up with the evolving practice of emergency medicine (Feldman 2010). Thus, we caution that although our patch extends the generalizability of the original algorithm to a much larger number of diagnosis codes, it is not intended to improve upon its construct validity. Specifically, our patch does not address any potential problems identified in prior studies as to whether the algorithm is appropriately differentiating the severity of ED visits or sufficiently responsive to changes in ED utilization patterns.

Methods

Creation of the Original NYU ED Algorithm

In 1999, with the assistance of a panel of ED physicians, researchers at NYU categorized ED visits from six hospitals in the Bronx, NY, into one of the following four categories: (1) nonemergent; (2) emergent, primary care treatable; (3) emergent, ED care needed, but preventable/avoidable; and (4) emergent, ED care needed, not preventable/avoidable (Billings, Parikh, and Mijanovich 2000b). Visits due to injury, mental health, alcohol use, or substance use were carved out into their own four separate categories. From these classifications, the authors compiled a set of probabilistic weights to be applied to ED discharge data using the primary discharge ICD‐9 diagnosis codes to determine the percent of ED use attributable to each of the eight categories (Billings, Parikh, and Mijanovich 2000b). ED visits with diagnosis codes that are not mapped to any of the eight categories default to unclassified. For example, an ICD‐9 code of 0340 (streptococcal sore throat) has the following probabilities: nonemergent, 66 percent; emergent, primary care treatable, 28 percent; emergent, ED care needed, preventable/avoidable, 6 percent; emergent, ED care needed, not preventable/avoidable, 0 percent. For a detailed explanation of how the authors created the algorithm, see Billings, Parikh, and Mijanovich (2000b).
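
As a concrete illustration of how these weights attach to a visit record, the minimal sketch below applies the published weights for ICD‐9 code 0340 to a visit's primary discharge diagnosis. The dictionary structure and function name are illustrative only and do not reproduce the NYU‐distributed SAS or Stata implementations.

```python
# Minimal illustration of applying the NYU probabilistic weights to a visit's
# primary discharge diagnosis. The weights for ICD-9 0340 come from the
# example in the text; the data structure and function are illustrative and
# are not the NYU-distributed SAS/Stata implementation.

NYU_WEIGHTS = {
    "0340": {  # streptococcal sore throat
        "nonemergent": 0.66,
        "emergent_pc_treatable": 0.28,
        "emergent_ed_preventable": 0.06,
        "emergent_ed_not_preventable": 0.00,
    },
    # remaining ICD-9 codes and the injury/mental health/alcohol/drug
    # categories from the published algorithm would follow
}

def classify_visit(primary_dx):
    """Return the probability weights for a visit, or flag it as unclassified."""
    weights = NYU_WEIGHTS.get(primary_dx)
    if weights is None:
        return {"unclassified": 1.0}  # default when the code is not mapped
    return weights

print(classify_visit("0340"))   # published weights for strep sore throat
print(classify_visit("99999"))  # unmapped code -> unclassified
```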

Creation of the Patch

For the patch, our aim was to assign the probabilistic weights from the original algorithm to all new ICD‐9 codes introduced since the algorithm's creation. First, we connected each of the new codes to the most clinically similar diagnosis codes that existed in the original algorithm. Then, we assigned the weights from the most similar original codes to the new codes. We did this in two stages. In the first stage, we identified the new ICD‐9 codes that nest within one of the original codes (e.g., code “07812, plantar wart” was created in 2008 but falls under the umbrella of code “0781, viral warts,” which was listed in the original algorithm). In these cases, the original probability weights were applied to the new code by the existing algorithm. In the second stage, we took the remaining new ICD‐9 codes that were not classified by the existing algorithm and connected them to original ICD‐9 codes via a “bridge.” The bridge, in this case, was the Clinical Classifications Software (CCS) distributed by the Agency for Healthcare Research and Quality (AHRQ), which we describe in the next section.
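
The first stage amounts to a prefix check against the ICD‐9 code hierarchy. The sketch below is illustrative only, with a small placeholder set standing in for the full list of codes in the original algorithm.

```python
# Sketch of stage 1: does a new ICD-9 code nest within a code that the
# original algorithm already classifies (e.g., 07812 "plantar wart" under
# 0781 "viral warts")? ORIGINAL_CODES is a small placeholder for the full
# list of codes in the 2001 algorithm.

ORIGINAL_CODES = {"0781", "0340"}

def nests_in_original(new_code):
    """Return the original parent code that a new ICD-9 code nests under, if any."""
    # ICD-9 codes are hierarchical by prefix: trim trailing characters
    # (down to the 3-character category) until an original code matches.
    for length in range(len(new_code) - 1, 2, -1):
        parent = new_code[:length]
        if parent in ORIGINAL_CODES:
            return parent
    return None

print(nests_in_original("07812"))  # -> "0781": inherits the original weights
print(nests_in_original("V9100"))  # -> None: passed to the stage-2 CCS bridge
```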

Data

Our primary data source for this analysis was the Nationwide Emergency Department Sample (NEDS) provided by the Healthcare Cost and Utilization Project (2016). The NEDS is released each calendar year, and in 2012, contained ED discharge data on 31 million ED visits at 950 hospitals in 30 states. The NEDS is a stratified, 20 percent cluster sample and includes all ED visits of the hospitals sampled in a given year, along with weights to generate national estimates (Healthcare Cost and Utilization Project 2016). The unit of analysis is an ED visit, and each record contains primary ICD‐9 discharge diagnosis codes. We used data for the 7 years of 2006–2012 and computed national estimates using ED visit discharge weights.
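
Because the NEDS is a weighted sample, any national figure (for example, the share of visits left unclassified) is a weighted proportion across visit records. The sketch below assumes a pandas data frame of visit records with a discharge‐weight column; the column names and values are illustrative, not NEDS data.

```python
# Illustrative weighted national estimate from visit-level records: the
# national share of unclassified visits is a discharge-weight-weighted
# proportion. Column names and values are placeholders, not NEDS data.
import pandas as pd

visits = pd.DataFrame({
    "discwt": [4.9, 4.9, 5.1, 5.1],   # per-visit discharge weights
    "unclassified": [0, 1, 0, 0],     # 1 = unclassified by the algorithm
})

national_pct = 100 * (visits["discwt"] * visits["unclassified"]).sum() / visits["discwt"].sum()
print(f"{national_pct:.1f}% of weighted visits unclassified")
```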

We also used annual data files released by the ICD‐9 Coordination and Maintenance Committee (Centers for Medicare and Medicaid Services 2014). These contain a complete list of the valid ICD‐9 diagnosis codes for each year, allowing for identification of annual diagnosis code changes from 2001 through the last ICD‐9 release in October of 2014.

We used AHRQ's CCS software as a bridge between the original and the new ICD‐9 codes. This software “rolls up” the diagnosis codes into broader categories, following a four‐level hierarchical system (Elixhauser, Steiner, and Palmer 2014). For example, an ICD‐9 code of 40200 (malignant hypertensive heart disease without heart failure) would be classified under “hypertensive heart and/or renal disease” (level 4) in the most specific of the four levels, then collapsed up to “hypertension with complications and secondary hypertension” (level 3), “hypertension” (level 2), and “diseases of the circulatory system” (level 1), in turn.

We classified all of the ICD‐9 codes from the original algorithm into their CCS categories, and we did the same with all of the newly introduced codes. Then, we matched the original and new codes based on the most clinically similar and granular CCS category available, and assigned the new codes the weights from the original codes in the same CCS category.
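
The sketch below illustrates this matching step under the assumption that every ICD‐9 code has already been looked up in the multi‐level CCS files and stored as a tuple running from the broadest (level 1) to the most specific (level 4) category; the lookup table and labels shown are placeholders rather than actual CCS output.

```python
# Sketch of the stage-2 CCS bridge: match a new code to the original codes
# that share its most specific CCS category, falling back to broader levels
# if needed. The CCS_LEVELS table below is a placeholder, not real CCS output.

CCS_LEVELS = {
    # code: (level 1, level 2, level 3, level 4)
    "40200": ("circulatory", "hypertension", "htn w/ complications", "htn heart/renal disease"),
    "40201": ("circulatory", "hypertension", "htn w/ complications", "htn heart/renal disease"),
}

def most_granular_match(new_code, original_codes):
    """Return the deepest CCS level at which the new code matches, and the matches."""
    new_levels = CCS_LEVELS[new_code]
    for depth in (4, 3, 2, 1):  # prefer the most specific shared category
        matches = [c for c in original_codes
                   if CCS_LEVELS.get(c, ())[:depth] == new_levels[:depth]]
        if matches:
            return depth, matches
    return 0, []

# The new code inherits the NYU weights of the matched original code(s).
print(most_granular_match("40201", ["40200"]))  # -> (4, ['40200'])
```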

Clinical Integrity of the Patch

We took several steps to safeguard the patch's clinical integrity and usefulness. If a new code mapped to multiple original codes sharing the same CCS category but with differing probability weights, then we adopted an approach suggested by Dowd et al. (2014), assigning the weights which gave the highest likelihood of “emergent, ED care needed, not preventable/avoidable.” Further ties were broken by assigning weights giving the highest likelihood of the following, in sequence: (1) ED use that is “emergent, ED care needed, preventable/avoidable”; (2) ED use that is “emergent, primary care treatable”; (3) ED use that is “nonemergent.” Put another way, if a new code could be mapped to several original codes with different weights, we erred on the side of clinical caution by assigning to it the probability weights from the original code that was most likely to represent an unavoidable, true emergency.
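
In code, the tie‐breaking rule reduces to ranking the candidate weight sets by severity, as in the sketch below; the category names are shorthand for the NYU categories and the candidate weights are made up for illustration.

```python
# Sketch of the tie-breaking rule when a new code maps to several original
# codes with different weights: keep the candidate most likely to represent
# an unavoidable, true emergency (Dowd et al. 2014), breaking further ties
# down the severity sequence. Category keys are shorthand; values are made up.

def break_tie(candidate_weight_sets):
    """Choose the candidate weight set with the highest clinical severity."""
    severity_order = (
        "emergent_ed_not_preventable",  # primary tie-breaker
        "emergent_ed_preventable",
        "emergent_pc_treatable",
        "nonemergent",
    )
    # Tuples compare element by element, which implements the sequenced rule.
    return max(candidate_weight_sets,
               key=lambda w: tuple(w.get(cat, 0.0) for cat in severity_order))

candidates = [
    {"emergent_ed_not_preventable": 0.10, "nonemergent": 0.90},
    {"emergent_ed_not_preventable": 0.40, "nonemergent": 0.60},
]
print(break_tie(candidates))  # -> the second set (more likely a true emergency)
```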

We also had a practicing ED physician conduct a manual review of all new ICD‐9 codes with 80 or more ED visits in the NEDS during 2006–2012 (nearly 700 codes, representing 99.8 percent of all ED visits for new ICD‐9 codes not classified by the original algorithm). In cases where the clinician deemed a patch‐assigned probabilistic weight to be of an insufficient severity level, we erred on the side of clinical caution by reassigning that weight to the “emergent and not preventable” category.

Assessing the Role of New ICD‐9 Codes in the Increase in Unclassified ED Visits over Time

First, to assess whether the overall percentage of unclassified ED visits increased over time, we generated descriptive statistics for this outcome for the estimated universe of all ED visits in the United States from 2006 to 2012 and used chi‐square tests and bivariate linear regression to test whether the percentage of unclassified ED visits increased each year. Second, we tested whether the new ICD‐9 codes added in or after 2001 were responsible for the observed increase in overall unclassified ED visits. For this analysis, we used bivariate models to test (1) whether there was a significant increase in the year‐over‐year trend in unclassified ED visits with new ICD‐9 codes but not a similar increase in visits with the old codes, and (2) whether there was a greater probability of an ED visit being unclassified if its primary discharge diagnosis was a new rather than an old ICD‐9 code.
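
For illustration, the sketch below runs the bivariate trend regression on the published yearly percentages reported in Table 1; the actual analysis used visit‐level NEDS records with survey weights, and the chi‐square tests likewise operate on the underlying visit counts.

```python
# Illustrative trend test: bivariate linear regression of the unclassified
# share on calendar year, using the yearly percentages published in Table 1.
# The paper's analysis was run on weighted visit-level NEDS records.
import numpy as np
from scipy import stats

years = np.arange(2006, 2013)
pct_unclassified = np.array([11.2, 12.7, 13.5, 14.8, 14.9, 15.4, 15.5])

slope, intercept, r_value, p_value, stderr = stats.linregress(years, pct_unclassified)
print(f"secular trend: {slope:.2f} percentage points per year (p = {p_value:.4f})")
```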

Evaluating the Patch's Utility

We tested the effectiveness of our patch by applying it to the estimated universe of all ED visits in the United States described above and performed McNemar's test to determine whether, in each year, the patched algorithm yields a statistically significant reduction in the rate of unclassified visits relative to the original algorithm.
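
Because the same visits are classified under both the original and the patched algorithm, the comparison is paired, which is why McNemar's test is the appropriate choice. The sketch below runs the test on an illustrative (not actual) 2 × 2 table of paired classification outcomes.

```python
# Sketch of McNemar's test on paired before/after classification of the same
# visits. Rows: original algorithm (classified, unclassified); columns:
# patched algorithm (classified, unclassified). Counts are placeholders.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

table = np.array([
    [113_000_000,          0],  # classified by both | classified -> unclassified
    [  9_000_000, 12_000_000],  # unclassified -> classified | unclassified by both
])

# With counts this large, the asymptotic (chi-square) version is appropriate.
result = mcnemar(table, exact=False, correction=True)
print(result.statistic, result.pvalue)
```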

A “Beta” Version of the Patch for ICD‐10 Codes

We created a “beta” version of our patch for ICD‐10 by linking the ICD‐9 codes in the patched algorithm to ICD‐10 codes using the 2016 General Equivalence Mappings (GEMs) developed by the Centers for Medicare and Medicaid Services (2016). We tested the beta version in a simulated national dataset of ED visits coded with 2016 ICD‐10 primary discharge diagnosis codes, applying both the beta version of our patch and the original NYU algorithm as previously mapped to ICD‐10 codes by the NYU Wagner Graduate School of Public Service (2016). We performed McNemar's test to determine whether the ICD‐10 version of our algorithm yields a statistically significant improvement in the rate of unclassified visits over the original version. (For more details on how we linked ICD‐9 to ICD‐10 codes and on how we simulated a dataset of ED visits with ICD‐10 codes, see Appendix SA2.)
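
A sketch of the linkage step appears below. The GEM file name, whitespace‐delimited layout, and "NoDx" convention are assumptions about the 2016 release, and the handling of one‐to‐many mappings shown here is a simplification; the process we actually used is documented in Appendix SA2.

```python
# Sketch of carrying the patched ICD-9 weights over to ICD-10 via the CMS
# General Equivalence Mappings. The file name, layout, and "NoDx" convention
# are assumptions about the 2016 GEM release; one-to-many mappings are
# resolved here by simply keeping the first pairing encountered.

def load_gem(path="2016_I9gem.txt"):
    """Yield (icd9, icd10) pairs from a whitespace-delimited GEM file."""
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2 and parts[1] != "NoDx":
                yield parts[0], parts[1]

def build_icd10_weights(icd9_weights, gem_pairs):
    """Assign each ICD-10 code the weights of an ICD-9 code it maps from."""
    icd10_weights = {}
    for icd9, icd10 in gem_pairs:
        if icd9 in icd9_weights:
            icd10_weights.setdefault(icd10, icd9_weights[icd9])
    return icd10_weights

# Usage (illustrative): icd10_weights = build_icd10_weights(patched_icd9_weights, load_gem())
```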

Results

Figure 1 displays the NYU ED algorithm, along with the average estimated rates for each outcome for all U.S. ED visits from 2006 to 2012. Overall, 14 percent of visits are unclassified by the original NYU algorithm during this period. Table 1 shows that the percentage of unclassified ED visits rose significantly from 11.2 percent in 2006 to 15.5 percent in 2012 (p < .01). During this same time period, an estimated 50.8 million national ED visits (5.7 percent of the total) had a primary discharge diagnosis code that was one of the 2,024 new ICD‐9 codes added since the last update to the NYU algorithm. We find a significant (p < .01) increase in unclassified ED visits with the new ICD‐9 codes as the primary discharge diagnosis of record but not in unclassified ED visits with the old ICD‐9 codes. In addition, we find that ED visits with the new ICD‐9 codes have a significant (p < .01) and substantially greater probability of being unclassified than ED visits with the old codes.

Figure 1. NYU ED Algorithm, Percentage of Total Visits, Yearly Average, 2006–2012

Table 1. ED Visits Unclassified by the NYU Algorithm: Effect of New ICD‐9 Diagnosis Codes and the Patch†

|  | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Total national ED visits |  |  |  |  |  |  |  |
| Weighted (000s) | 120,034 | 122,332 | 124,945 | 128,885 | 128,970 | 131,049 | 134,399 |
| Unweighted (000s) | 25,703 | 26,628 | 28,447 | 28,861 | 28,584 | 29,421 | 31,091 |
| ED visits unclassified by the existing NYU algorithm |  |  |  |  |  |  |  |
| Unclassified ED visits (%)* | 11.2 | 12.7 | 13.5 | 14.8 | 14.9 | 15.4 | 15.5 |
| Old ICD‐9 code (%)** | 8.6 | 8.9 | 8.9 | 8.7 | 8.8 | 8.9 | 8.9 |
| New ICD‐9 code (%)** | 2.6 | 3.8 | 4.6 | 6.2 | 6.1 | 6.5 | 6.7 |
| Percent probability that ED visit is unclassified by the existing NYU algorithm: effect of new ICD‐9 codes |  |  |  |  |  |  |  |
| Old ICD‐9 code is the primary discharge diagnosis (%)*** | 8.9 | 9.3 | 9.4 | 9.3 | 9.4 | 9.6 | 9.5 |
| New ICD‐9 code is the primary discharge diagnosis (%)*** | 89.5 | 91.6 | 92.3 | 93.7 | 92.0 | 91.9 | 91.4 |
| ED visits remaining unclassified after applying the patch to the NYU ED algorithm |  |  |  |  |  |  |  |
| Unclassified after applying the patch to the algorithm (%)**** | 8.6 | 8.9 | 8.9 | 8.7 | 8.8 | 8.9 | 8.9 |

*Difference in percentage of all ED visits unclassified over 2006–2012 is statistically significant (p < .01) as a linear function of year secular trend, and over each of the 2‐year periods of 2006–2007, 2007–2008, 2008–2009, 2010–2011 (p < .01, chi‐square test).

**Difference in percentage of ED visits unclassified with new (added on or after 2001) ICD‐9 codes over 2006–2012 is statistically significant (p < .01) as a linear function of year secular trend, and over each of the 2‐year periods of 2006–2007, 2007–2008, 2008–2009, 2010–2011 (p < .01, chi‐square test). Difference in percentage of ED visits unclassified with old (already in existence as of 2001) ICD‐9 codes over 2006–2012 is not statistically significant as a linear function of secular trend or over any 2‐year period.

***Difference in probability of ED visit being unclassified by whether visit was coded with a new ICD‐9 code (added on or after 2001) as the primary discharge diagnosis versus an old ICD‐9 code (already in existence as of 2001) is significant (p < .01, chi‐square test) for all years.

****Differences in percentage of ED visits unclassified before and after applying our patch are significant for each year (p < .01, McNemar's test of correlated percentages).

†Calculated from the Nationwide Emergency Department Sample (NEDS) using visit weights on all ED visits in the NEDS.

Only 188 (9.3 percent) of the new ICD‐9 diagnosis codes added on or after 2001 nested within existing codes and were already classified by the original NYU algorithm. Using the CCS technique, we then matched 1,805 (89.2 percent) of the new ICD‐9 diagnosis codes to the NYU weights of clinically similar old ICD‐9 codes in the original algorithm. Our manual clinical review identified 21 (1.2 percent) of these diagnosis codes as being of higher severity than the NYU weights to which they had been assigned, and we updated the weights for these 21 codes to 100 percent “emergent, ED care needed, not preventable.” These 21 codes accounted for only 0.06 percent of the unweighted total ED visits during 2006–2012. At the end of the process, we successfully linked 1,805 new diagnosis codes that were previously unclassified to the existing NYU probabilistic weighting system; 31 (1.5 percent) of the new ICD‐9 codes could not be linked and remain in the default category of unclassified. These 31 codes accounted for only 0.0003 percent of the unweighted total ED visits during 2006–2012.

As shown in Table 1, when we apply the patched algorithm to the same universe of national ED visits, we find that 8.9 percent of visits are now unclassified in the year 2012. This represents a 6.7 percentage point reduction in unclassified visits (p < .01), equivalent to a 43 percent improvement over the original algorithm. This pattern is similar for each year included in our analysis. Furthermore, the percentage of ED visits that are unclassified in our patched version of the algorithm is no longer increasing year over year, but remains relatively constant from 2006 to 2012. (For more information on the effect of our patch on the classification of ED visits across all eight NYU ED algorithm categories, see Table S1.)

When we apply the ICD‐10 version of our algorithm to a simulated national dataset of ED visits using 2016 ICD‐10 codes, we find that 8.1 percent of visits are unclassified (Table 2). In contrast, 18.8 percent of such ICD‐10‐coded ED visits are unclassified by the original algorithm. This represents a 10.7 percentage point reduction in unclassified visits (p < .01).

Table 2. Effect of the ICD‐10 Version of the Patch on Classification of Simulated ED Visits with 2016 ICD‐10 Discharge Diagnosis Codes†

|  | ICD‐10 Version of Patched ED Algorithm | ICD‐10 Version of Original NYU ED Algorithm |
| --- | --- | --- |
| Total national ED visits |  |  |
| Weighted (000s) | 128,809 | 128,809 |
| Unweighted (000s) | 29,799 | 29,799 |
| Classification of simulated ICD‐10 ED visits (% of visits) |  |  |
| Unclassified ED visits* | 8.1 | 18.8 |
| Emergent, EDNNP | 16.1 | 12.2 |
| Emergent, EDNP | 7.5 | 6.4 |
| Emergent, PCT | 22.9 | 20.2 |
| Nonemergent | 20.8 | 19.2 |
| Injury related | 20.9 | 19.6 |
| Mental health | 2.6 | 2.4 |
| Alcohol related | 0.9 | 0.9 |
| Drug related | 0.2 | 0.2 |

*Difference in percentage of ED visits unclassified between our patch and the original version is significant (p < .01, McNemar's test of correlated percentages). Note that the ICD‐10 version of the original NYU algorithm assigns some ED visits a weight of “partially unclassified.” For the purposes of McNemar's test, we counted such visits as unclassified only if the probability weight was ≥0.50 for unclassified.

†Calculated from the Nationwide Emergency Department Sample (NEDS) for 2012 using visit weights on ED visits with nonmissing ICD‐9 codes that mapped to ICD‐10 codes (96% of sample).

EDNNP, ED care needed and not preventable/avoidable; EDNP, ED care needed and preventable/avoidable; PCT, primary care treatable.

Conclusion

Despite its widespread popularity in the health services research discipline, the NYU ED algorithm has not been updated since 2001, resulting in ever‐increasing percentages of ED visits that are unclassifiable by the algorithm. An up‐to‐date version of the algorithm is an essential health services research tool in light of the ACA and other major changes to the health policy landscape that have occurred over the past 15 years. To this end, we have created a patch that substantially reduces the number of ED visits unclassifiable by the algorithm and have provided an additional “beta” version for use with ICD‐10 data as they become available. We note that the patch is intended only to broaden the external validity of the original algorithm, not to improve its accuracy. As such, we urge health services researchers to use the patch in accordance with guidelines set forth by its authors (Billings, Parikh, and Mijanovich 2000a).

Supporting information

Appendix SA1. Author Matrix.

Appendix SA2. Process Used to Link ICD‐9 to ICD‐10 Diagnosis Codes and Simulate ED Visits with 2016 ICD‐10 Diagnosis Codes.

Appendix SA3. Patched NYU ED Algorithm for ICD‐9 with SAS and Stata Macros (as Three Text Files): (a) Text File #1: patched_ed_algo_weights_icd9.txt, (b) Text File #2: sas_code_for_patched_ed_algo_icd9.txt, and (c) Text File #3: stata_code_for_patched_ed_algo_icd9.txt

Appendix SA4. Patched NYU ED Algorithm for ICD‐10 with SAS and Stata Macros (as Three Text Files): (a) Text File #1: patched_ed_algo_weights_icd10.txt, (b) Text File #2: sas_code_for_patched_ed_algo_icd10.txt, and (c) Text File #3: stata_code_for_patched_ed_algo_icd10.txt

Table S1. Classification of all ED Visits* before and after Update.

Acknowledgments

Joint Acknowledgment/Disclosure Statement: All authors meet the criteria for authorship and have read and approved the final manuscript. The authors disclose no conflicts of interest. This research was deemed exempt from review by the Emory University institutional review board. The authors acknowledge the financial support of the Emory University Rollins School of Public Health, Laney Graduate School, and the School of Medicine. Kenton Johnston also acknowledges the financial support of the Saint Louis University College for Public Health and Social Justice.

Disclosure: None.

Disclaimer: None.

References

1. Antwi, Y. A., Moriya, A. S., Simon, K., and Sommers, B. D. 2015. "Changes in Emergency Department Use among Young Adults after the Patient Protection and Affordable Care Act's Dependent Coverage Provision." Annals of Emergency Medicine 65 (6): 664–72.
2. Ballard, D. W., Price, M., Fung, V., Brand, R., Reed, M. E., Fireman, B., Newhouse, J. P., Selby, J. V., and Hsu, J. 2010. "Validation of an Algorithm for Categorizing the Severity of Hospital Emergency Department Visits." Medical Care 48 (1): 58–63.
3. Billings, J., Parikh, N., and Mijanovich, T. 2000a. "Emergency Department Use in New York City: A Substitute for Primary Care?" Issue Brief, Commonwealth Fund. Available at http://www.commonwealthfund.org/usr_doc/billings_eduse_433.pdf?section=4039
4. Billings, J., Parikh, N., and Mijanovich, T. 2000b. "Emergency Department Use: The New York Story." Issue Brief, Commonwealth Fund. Available at http://www.commonwealthfund.org/usr_doc/billings_nystory.pdf?section=4039
5. Centers for Medicare and Medicaid Services. 2014. "ICD‐9‐CM Coordination and Maintenance Committee Meetings." Available at http://www.cms.gov/Medicare/Coding/ICD10/ICD-9-CM-Coordination-and-Maintenance-Committee-Meetings.html
6. Centers for Medicare and Medicaid Services. 2016. "ICD‐10‐CM/PCS to ICD‐9‐CM Reimbursement Mappings." Available at https://www.cms.gov/Medicare/Coding/ICD10/2016-ICD-10-PCS-and-GEMs.html
7. Dowd, B., Karmarker, M., Swenson, T., Parashuram, S., Kane, R., Coulam, R., and Jeffery, M. M. 2014. "Emergency Department Utilization as a Measure of Physician Performance." American Journal of Medical Quality 29 (2): 135–43.
8. Elixhauser, A., Steiner, C., and Palmer, L. 2014. "Clinical Classifications Software (CCS), 2014." U.S. Agency for Healthcare Research and Quality. Available at http://www.hcup-us.ahrq.gov/toolssoftware/ccs/ccs.jsp
9. Feldman, J. 2010. "The NYU Classification System for ED Visits: WSHA Technical Concerns." Washington State Hospital Association. Available at http://wsha-archive.seattlewebgroup.com/files/169/NYU_Classification_System_for_EDVisits.pdf
10. Gandhi, S. O., Grant, L. P., and Sabik, L. M. 2014. "Trends in Nonemergent Use of Emergency Departments by Health Insurance Status." Medical Care Research and Review 71 (5): 496–521.
11. Gandhi, S. O., and Sabik, L. 2014. "Emergency Department Visit Classification Using the NYU Algorithm." American Journal of Managed Care 20 (4): 315–20.
12. Healthcare Cost and Utilization Project. 2016. "HCUP Nationwide Emergency Department Sample (NEDS)." Rockville, MD: Agency for Healthcare Research and Quality [accessed on February 26, 2016]. Available at www.hcup-us.ahrq.gov/nedsoverview.jsp
13. Institute of Medicine. 2007. Hospital‐Based Emergency Care: At the Breaking Point. Washington, DC: National Academies Press. doi:10.17226/11621
14. Jones, K., Paxton, H., Hagtvedt, R., and Etchason, J. 2013. "An Analysis of the New York University Emergency Department Algorithm's Suitability for Use in Gauging Changes in ED Usage Patterns." Medical Care 51 (7): e41–50.
15. Joynt, K. E., Gawande, A. A., Orav, E. J., and Jha, A. K. 2013. "Contribution of Preventable Acute Care Spending to Total Spending for High‐Cost Medicare Patients." JAMA 309 (24): 2572–8.
16. Lowe, R. A., and Fu, R. 2008. "Can the Emergency Department Algorithm Detect Changes in Access to Care?" Academic Emergency Medicine 15 (6): 506–16.
17. Miller, S. 2012. "The Effect of Insurance on Emergency Room Visits: An Analysis of the 2006 Massachusetts Health Reform." Journal of Public Economics 96 (11–12): 893–908. doi:10.1016/j.jpubeco.2012.07.004
18. NYU Wagner Graduate School of Public Service. 2016. "NYU ED Algorithm Information Page" [accessed on January 27, 2016]. Available at http://wagner.nyu.edu/faculty/billings/nyued-articles
19. Taubman, S. L., Allen, H. L., Wright, B. J., Baicker, K., and Finkelstein, A. N. 2014. "Medicaid Increases Emergency‐Department Use: Evidence from Oregon's Health Insurance Experiment." Science 343 (6168): 263–8. doi:10.1126/science.1246183
