Abstract
Background
Identifying high-need patients for specialized programs using single-institution electronic health record (EHR) data carries a risk of biased sampling. We investigated whether a statewide admission, discharge, and transfer (ADT) feed can be used to assess equity in access to these programs.
Methods
This was a retrospective cross-sectional study. We included high-need patients at Vanderbilt University Medical Center (VUMC) who were 18 years or older and had at least three emergency department (ED) visits or hospitalizations in Tennessee from January 1 to June 30, 2021, including at least one at VUMC. We used the Tennessee ADT database to identify high-need patients with at least one VUMC ED visit or hospitalization, then compared this population with high-need patients identified using VUMC's Epic® EHR database. The primary outcome was the sensitivity of VUMC-only criteria for identifying high-need patients, with the statewide ADT serving as the reference standard.
Results
We identified 2549 patients who had at least one VUMC ED visit or hospitalization and met the high-need definition based on the statewide ADT. Of those, 2100 had VUMC-only visits and 449 had both VUMC and non-VUMC visits. The VUMC-only screening criteria showed high sensitivity (99.1%, 95% CI: 98.7% - 99.5%), indicating that high-need patients admitted to VUMC infrequently access alternative systems. Sensitivity did not differ meaningfully when stratified by patient race or insurance.
Conclusions
ADT data allow examination of potential selection bias when identification relies upon single-institution utilization. Among VUMC's high-need patients, there was minimal selection bias when relying upon same-site utilization. Further research is needed to understand how such biases may vary by site and whether they remain stable over time.
Introduction
Access to specialized health programs for high-need patients often depends upon identification through electronic health record (EHR) data based on retrospective costs or utilization. However, the current lack of interoperability between EHRs(1) and delays in accessing claims data hamper the ability to gather the medical information needed to understand patients' total healthcare utilization(2). As a result, identification of high-need patients is often based on single-institution EHR data rather than on comprehensive utilization across a region or state.
Patient selection has implications for program value, particularly with regard to which patients are amenable to intervention(3). Additionally, identification of high-need patients based on single-institution healthcare utilization carries the risk of biased sampling. For example, hospitals often select high-need patients based on recurrent emergency department visits and hospitalizations within their own hospital or health system. Patients, however, may seek care at multiple hospitals or health systems, due in part to their zip code, preferences, or insurance status. Furthermore, multiple decision points influence hospital choice, including ambulance transport decisions, hospital bed availability, and severity of illness(4-6). Although efficient, identification of high-need patients with single-institution EHR data may unintentionally exclude high-need patients and potentially exacerbate disparities in access. This is an example of digital redlining, the creation and maintenance of technology practices that embed discriminatory practices against marginalized groups(7).
A recent regulatory requirement from the Centers for Medicare and Medicaid Services (CMS) requires that hospitals, including behavioral health and critical access hospitals, send real-time admission, discharge, and transfer (ADT) event notifications to all providers primarily responsible for a patient's care(6). Furthermore, these ADT data are increasingly aggregated at the state level and provided to participating hospitals. Analysis of newly available ADT records provides an opportunity to assess existing biases that may reflect digital redlining when relying upon single-institution data(8).
Vanderbilt University Medical Center (VUMC) has a hospital program, the Vanderbilt Interdisciplinary Care Program (VICP), that provides consistent and coordinated care for patients with recurrent healthcare utilization. At its inception, the program could select patients only on the basis of same-hospital ED visits and readmissions. With new access to statewide hospitalization data, our primary objective was to quantify the extent to which patients may have been excluded from this program when screening was based on same-hospital ED visits and admissions. Our secondary objective was to understand whether selection patterns based on same-hospital data differed by race or insurance status.
Methods
Study Population:
We performed a retrospective cross-sectional study among all patients 18 years or older who were admitted to VUMC from January 1 to June 30, 2021, were recorded in the VUMC Epic EHR, and whose admission was recorded in the ADT database during the same time frame. VUMC is part of Vanderbilt Health, a system of clinics and hospitals across middle Tennessee and neighboring states with 1,615 licensed hospital beds across seven hospitals, 141,529 emergency department visits, and 55,969 hospital discharges in fiscal year 2022(9). Patients were excluded from the analysis if their only admissions were at the Vanderbilt Children's Hospital, Vanderbilt Psychiatric Hospital, or Vanderbilt Stallworth Rehabilitation Hospital. The VUMC institutional review board approved this study as minimal risk and waived informed consent requirements.
Vanderbilt Interdisciplinary Care Program (VICP) Eligibility Criteria
VICP is an interdisciplinary, interprofessional team (internal medicine physicians with a focus on hospital medicine, advanced practice providers, case managers, social workers, pharmacists, and nurses) providing continuity of coordinated care for high-need, medically and socially complex patients. Patients admitted to the hospital medicine service are screened weekly for program eligibility; patients are eligible if they had 3 or more ED visits or hospital admissions in the 6 months preceding referral, as recorded in the VUMC Epic EHR only (VICP criteria).
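As an illustration only, this screening rule amounts to counting qualifying encounters in a rolling 6-month window. The sketch below is not the program's actual implementation; the data frame and column names (`encounters`, `patient_id`, `encounter_date`) are assumptions.

```r
# Minimal sketch of the weekly VICP eligibility screen (hypothetical schema).
# `encounters`: one row per VUMC ED visit or hospital admission, with assumed
# columns patient_id (character) and encounter_date (class Date).
vicp_screen <- function(encounters, screen_date, threshold = 3) {
  window_start <- screen_date - 183  # roughly 6 months before the screen date
  in_window <- encounters$encounter_date >= window_start &
    encounters$encounter_date <= screen_date
  counts <- table(encounters$patient_id[in_window])
  names(counts)[counts >= threshold]  # patient IDs meeting the VICP criteria
}
```

A weekly run would pass the current date as `screen_date` and forward the returned identifiers for manual review, mirroring the manual screening described in the Discussion.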
Data Sources:
We used two primary data sources. Through the VUMC Clinical Informatics Core, we extracted data from the VUMC Clarity Enterprise Data Warehouse (EDW), a relational database of data stored in the VUMC Epic EHR, including emergency department visits and inpatient and observation admissions. The second data source was the Tennessee Hospital Association's Admission/Discharge/Transfer (THA ADT) database. When a patient has a hospital or emergency department ADT event within Tennessee, information about that patient's visit (including, but not limited to, demographic information, information on the source facility, and primary complaint) is collected, packaged into a clinical event notification, and sent in real time to the participating hospital. Currently, 130 of 158 hospitals in Tennessee participate in the ADT database(10).
Outcome Measure:
Our primary outcome was the overall sensitivity of the current VICP VUMC-EHR screening criteria for identifying the "high-need patient," using retrospective 6-month THA ADT data as the reference standard, with at least one of these admissions occurring at VUMC. We used 3 or more ED visits or hospitalizations in the preceding 6 months as our definition because it was the institutional definition of high-need at the time of the study. Additionally, we aimed to describe the "underrecognized high-need patient," defined as a patient who had one hospital or ED visit at VUMC but 2 or more non-VUMC ED or hospital visits.
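Under one reading of these definitions, every ADT-identified high-need patient with at least one VUMC visit falls into one of two groups, depending on whether the VICP threshold is reached from VUMC encounters alone. The sketch below assumes per-patient visit counts (`n_vumc`, `n_non_vumc`) have already been derived from the linked EDW and THA ADT data; the names are illustrative rather than taken from the study databases.

```r
# Hedged sketch: classify ADT-identified high-need patients (3 or more total
# ED/hospital visits statewide, at least one at VUMC) by whether they are
# detectable from VUMC data alone.
classify_high_need <- function(n_vumc, n_non_vumc) {
  stopifnot(all(n_vumc >= 1), all(n_vumc + n_non_vumc >= 3))
  ifelse(n_vumc >= 3,
         "VICP EHR criteria positive",  # identifiable from the VUMC EHR alone
         "Underrecognized high-need")   # visible only through the statewide ADT
}
```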
Patient Demographics: We extracted key demographic data from the EDW including age, sex, race, ethnicity, insurance status, and distance in miles from primary residence to VUMC.
Statistical Analysis:
We summarized patient demographics and clinical characteristics (hospital admissions, ED visits) using the median (25th, 75th percentile) for quantitative variables and frequency (percentage) for categorical variables.
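As a minimal sketch of these summaries, assuming a per-patient data frame `cohort` with hypothetical column names, the quantities reduce to quartiles for numeric variables and counts with percentages for categorical ones.

```r
# Hedged helpers for the descriptive summaries (hypothetical column names).
summarize_numeric <- function(x) {
  quantile(x, probs = c(0.5, 0.25, 0.75), na.rm = TRUE)  # median (Q1, Q3)
}
summarize_categorical <- function(x) {
  n <- table(x)
  data.frame(level = names(n),
             n = as.integer(n),
             pct = round(100 * as.integer(n) / sum(n), 1))  # frequency (percentage)
}

# Example usage on an assumed cohort data frame:
# summarize_numeric(cohort$distance_to_hospital_miles)
# summarize_categorical(cohort$insurance)
```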
Enrollment Screening Performance Characteristics. One can think of screening criteria performance like a diagnostic test. In this case, the "true" gold-standard high-need patient is a patient with 3 or more ED visits/hospitalizations anywhere in participating Tennessee hospitals during the study period, including at least one at VUMC. The VICP screen-positive high-need patient has 3 or more ED visits/hospitalizations at VUMC; this patient is a true positive for the VICP screening test. The VICP screen-negative patient has fewer than 3 ED visits/hospitalizations at VUMC but 3 or more in the ADT data, including at least one at VUMC; this patient is a false negative for the VICP screening test. Sensitivity is equal to the number of VICP screen-positive patients (true positives) divided by the sum of VICP screen-positive (true positives) and screen-negative patients (false negatives). Sensitivity (Eq. 1) reflects the ability of VICP's screening criteria to identify "true" high-need patients as identified with the ADT data (Table 1). We further stratified results by race and insurance status to assess for inequities based on these characteristics. We did not examine ADT data for VUMC patients with only one or two ED visits/hospitalizations (true negatives), since our study aim was to quantify the number of underrecognized high-need patients.
Table 1. Classification of VICP EHR screening results against the THA ADT reference standard.

| VICP EHR Screening for High-Need Patient | High-Need Patient (at least 3 ED/hospitalizations in Tennessee, including at least one at VUMC) | Not a High-Need Patient (1 or 2 ED/hospitalizations in Tennessee, including at least one at VUMC) |
|---|---|---|
| Positive (at least 3 ED/hospitalizations at VUMC only) | True positive | False positive (N/A: ADT data not collected for patients with < 3 admissions) |
| Negative (1 or 2 ED/hospitalizations at VUMC only) | False negative | True negative (N/A: ADT data not collected for patients with < 3 admissions) |
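For reference, Eq. 1 is simply sensitivity = TP / (TP + FN). The reference list cites R(11) and the Hmisc package(12), which suggests the analysis environment; the snippet below is a hedged sketch of how the sensitivity and its 95% confidence interval could be computed, not the authors' actual code. The normal-approximation ("asymptotic") interval is an assumption, chosen because it matches the form of the intervals reported in the Results (for example, 100 (100, 100) for a stratum with no false negatives).

```r
library(Hmisc)  # binconf() computes binomial confidence intervals

# Sensitivity per Eq. 1: TP / (TP + FN), reported as a percentage with a
# 95% confidence interval. The interval method is an assumption (see above).
sensitivity_ci <- function(true_positives, false_negatives) {
  binconf(x = true_positives,
          n = true_positives + false_negatives,
          method = "asymptotic") * 100
}
```

Applied to the overall counts reported in the Results (2526 true positives and 23 false negatives), this returns approximately 99.1% (98.7%, 99.5%), consistent with Tables 3 and 4.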
Results
From January 1, 2021, to June 30, 2021, we identified 2549 patients who met the high-need definition based on the THA ADT reference standard and had at least one VUMC ED visit or hospitalization (Table 2). Of these 2549 patients, 449 had both VUMC and non-VUMC visits, and 2100 had VUMC visits only.
Table 2. Characteristics of high-need patients identified by the THA ADT with at least one VUMC ED visit or hospitalization.

| | Overall | Underrecognized High-Need | VICP VUMC EHR-Criteria High-Need |
|---|---|---|---|
| Total unique patients | 2549 | 23 | 2526 |
| Distance to hospital, miles | 33 (11, 82) | 11 (7, 43) | 33 (11, 83) |
| Acute care utilization | | | |
| VUMC ED visits | 1 (0, 2) | 1 (1, 2) | 1 (0, 2) |
| VUMC inpatient/observation admissions | 2 (1, 3) | 1 (0, 2) | 2 (1, 3) |
| Non-VUMC ED visits/inpatient/observation admissions | 3 (3, 4) | 4 (2, 9) | 0 (0, 0) |
| Race | | | |
| White | 2006 (79%) | 18 (78%) | 1988 (79%) |
| Black | 417 (16%) | 5 (22%) | 412 (16%) |
| Other | 114 (4.5%) | 0 (0%) | 114 (4.5%) |
| Missing | 12 | 0 | 12 |
| Ethnicity | | | |
| Not Hispanic | 2446 (97%) | 22 (96%) | 2424 (97%) |
| Hispanic or Latino | 83 (3.3%) | 1 (4.3%) | 82 (3.3%) |
| Missing | 20 | 0 | 20 |
| Insurance | | | |
| Commercial | 907 (36%) | 5 (22%) | 902 (36%) |
| Traditional Medicare | 674 (26%) | 1 (4.3%) | 673 (27%) |
| Medicare Advantage | 531 (21%) | 8 (35%) | 523 (21%) |
| Medicaid | 241 (9.5%) | 6 (26%) | 235 (9.3%) |
| Uninsured/Self-Pay/Other | 196 (7.7%) | 3 (13%) | 193 (7.6%) |

Values are median (25th, 75th percentile) for continuous variables and frequency (percentage) for categorical variables.
The current VICP screening criteria using the VUMC EHR showed high sensitivity (99.1%, 95% CI: 98.7% - 99.5%). These results indicate that the majority of patients discharged from VUMC are readmitted to VUMC, and that high-need patients in the study infrequently accessed alternative health systems within the region. Lastly, the results showed no meaningful difference in sensitivity by race or insurance (Tables 3 and 4).
Table 3. Sensitivity of the VICP VUMC EHR screening criteria, stratified by race.

| | All Patients | White | Black | Other |
|---|---|---|---|---|
| VUMC VICP EHR criteria positive | 2526 | 1988 | 412 | 114 |
| VUMC VICP EHR criteria negative | 23 | 18 | 5 | 0 |
| THA ADT screen | 2549 | 2006 | 417 | 114 |
| Sensitivity, % (95% CI) | 99.1 (98.7, 99.5) | 99.1 (98.7, 99.5) | 98.8 (97.8, 99.8) | 100 (100, 100) |
Table 4. Sensitivity of the VICP VUMC EHR screening criteria, stratified by insurance status.

| | All Patients | Commercial | Medicare / Medicare Advantage | Medicaid | Uninsured/Self-Pay/Other |
|---|---|---|---|---|---|
| VUMC VICP EHR criteria positive | 2526 | 902 | 1196 | 235 | 193 |
| VUMC VICP EHR criteria negative | 23 | 5 | 9 | 6 | 3 |
| THA ADT screen | 2549 | 907 | 1205 | 241 | 196 |
| Sensitivity, % (95% CI) | 99.1 (98.7, 99.5) | 99.4 (99.0, 99.9) | 99.3 (98.8, 99.7) | 97.5 (95.5, 99.5) | 98.5 (96.8, 100) |
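As a check on the stratified estimates in Tables 3 and 4, the reported counts can be fed directly into Hmisc::binconf(12). This is a hedged re-derivation from the published counts under the normal-approximation assumption noted in the Methods, with upper limits truncated at 100%; it is not a re-analysis of patient-level data.

```r
library(Hmisc)

# Reported counts from Tables 3 and 4.
strata <- data.frame(
  group           = c("All", "White", "Black", "Other",
                      "Commercial", "Medicare/Medicare Advantage",
                      "Medicaid", "Uninsured/Self-Pay/Other"),
  screen_positive = c(2526, 1988, 412, 114, 902, 1196, 235, 193),  # true positives
  adt_high_need   = c(2549, 2006, 417, 114, 907, 1205, 241, 196)   # reference standard
)

ci <- binconf(strata$screen_positive, strata$adt_high_need,
              method = "asymptotic") * 100
ci[, "Upper"] <- pmin(ci[, "Upper"], 100)  # truncate at 100%, as in the tables
data.frame(group = strata$group, round(ci, 1))
```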
Discussion
In this study, we present a novel use of the admission, discharge, and transfer (ADT) feed to evaluate potential biases in single-institution screening for a program that aims to enroll patients with a recent history of high healthcare utilization. Our results show that EHR data from the primary hospital have high sensitivity for identifying high-need patients. Furthermore, we did not observe meaningful differences in sensitivity across race or insurance status. For this specific institution, it is reassuring that the selection criteria used to date do not appear to be biased. This study demonstrates the value of using statewide ADT data streams to better characterize a health system's population and to determine whether screening biases exist that could further exacerbate existing inequities in care delivery. Future studies could evaluate all-payer claims databases to more reliably estimate the true prevalence of the high-need population.
To our knowledge, there are no prior studies of bias in selection criteria for the high-need population. Kilaru et al. used the Dartmouth Hospital Referral Regions (HRRs) and Hospital Service Areas (HSAs) to examine admission patterns and noted that fewer than half of patients were admitted within their HSA of residence; however, patients living in populous urban HSAs with multiple large and teaching hospitals tended to remain in the same HSA for inpatient care(13). Within the same HSA, research on patients moving from one hospital to another, known colloquially as "doctor shopping," is limited to patients with substance use disorder(14). Our results support the finding that doctor shopping is a rare phenomenon. All-payer claims databases, which give a more comprehensive view of populations, have only recently become available; however, there are challenges with the timeliness of these data(15), which is essential in the high-need population for real-time enrollment into programs.
Our study was reassuring in that the current screening using VUMC's electronic medical record shows high sensitivity in recognizing the high-need population regardless of race or insurance status. Bias occurs when an algorithm systematically favors one outcome over another(16), and previous studies have raised concerns that algorithms trained to distribute resources on the basis of predicted health costs have prioritized healthier White patients over sicker Black patients, because the latter have reduced access to care and tend to use fewer health services(17). Algorithmic and clinical decision support fairness aims to prevent discrimination against protected groups defined by characteristics such as race, gender, religion, physiologic variability, pre-existing conditions, physical ability, and sexual orientation. Although there is increased focus on bias evaluation using checklists such as the Prediction Model Risk of Bias Assessment Tool (PROBAST)(18), there is still no agreed-upon standard for thoroughly evaluating the fairness of clinical decision support tools and prediction models.
Health systems evaluating programs targeting their high-need populations should be cautious in assuming that their EHR-based screening has sensitivity equal to VUMC's, as every health system's and region's referral patterns differ. The University of Chicago's Comprehensive Care Program(19) and Mount Sinai's PACT program(3) are both located in larger metropolitan statistical areas than Nashville(20). Additionally, in 2020, VUMC's emergency department was the busiest in the Nashville metropolitan area, with 79,975 ED encounters(21) compared with 42,488 encounters at the next busiest local hospital(22). The medical center also serves as a referral center for the region and beyond, with 14.9% of hospital discharges in 2020 coming from outside Tennessee and 41% from outside the counties surrounding the medical center(21). These admission characteristics are likely to differ across other regions of the country.
The lack of racial or insurance differences in sensitivity of the current screening may nonetheless mask existing structural inequities in the care of high-need patients, as there is no systematic study of the actual prevalence of the high-need population or of that population's referral patterns within Tennessee. Additionally, there are no studies examining disparities in access to care for this population, which may affect its identification, because our criterion for high-need depends on utilization. Tennessee has the second highest rate of hospital closures in the United States, with 13 of 16 closures since 2010 occurring in rural areas(23); this may explain why 55.9% of VUMC's discharges are not from the Nashville metropolitan area(21). However, it is unclear how these closures affect access to care for the high-need population and how many patients are unable to reach VUMC because of distance, especially those living in rural Tennessee counties.
The results of the study were reassuring that we did not appear to be inadvertently perpetuating disparities through our screening algorithm for program eligibility, consistent with our effort to apply a health equity lens(24, 25) in implementing the program. The VICP currently screens the electronic medical record manually, reflecting concerns about ensuring fairness in screening before automating it through a clinical decision support system. There are no studies of clinical decision support for screening the high-need population, in part because there has been disagreement about the definition of high-need(26); only recently has Medicare defined the population using a combination of Hierarchical Condition Category (HCC) scores and unplanned admissions in the prior year(27), and the program has recently updated its criteria accordingly. Despite the positive results, we intend to incorporate the ADT feed into our screening, as referral patterns are not static and can change, especially with hospital acquisitions and closures.
Limitations
Researchers were given data only on high-need patients who had a relationship with VUMC through a hospital or ED visit. Because of this limitation, we were unable to observe the total high-need population within middle Tennessee, including patients seen exclusively at other healthcare systems. Additionally, because we limited both data sources to patients defined as high-need, we were unable to calculate the specificity and negative predictive value of the current VICP VUMC EHR criteria. Lastly, the VICP criteria used in this study do not match Medicare's definition of high-need, which combines admissions and HCC score data. Future research should examine potential screening biases with these newly adopted criteria.
Conclusion
Understanding EHR-based algorithmic fairness is essential in the high-need population to avoid the potential for digital redlining. We evaluated a novel use of the admission/discharge/transfer (ADT) feed to assess equity in access to the Vanderbilt Interdisciplinary Care Program, an interdisciplinary program for high-need patients. VUMC-only electronic medical record screening for high-need patients was sensitive in identifying this population, as validated against the ADT data feed. Because different health systems operate in different contexts, the ADT feed can be used to evaluate algorithmic fairness in other settings.
Acknowledgements
This paper and the research behind it would not have been possible without the support of the Center for Health Services Research Health Equity Grant and the Vanderbilt Institute for Clinical and Translational Research (VICTR), which is funded by the National Center for Advancing Translational Sciences (NCATS) Clinical Translational Science Award (CTSA) Program, Award Number 5UL1TR002243-03. We are also thankful for contributions from Matt Milam of Vanderbilt Enterprise Analytics; Allison McCoy, PhD, of the Vanderbilt Clinical Informatics Core; the Biostatistics Core; and the Tennessee Hospital Association's Health Information Network.
Funding
Vanderbilt Center for Health Services Research Health Equity
Vanderbilt Institute for Clinical and Translational Research (VICTR) which is funded by the National Center for Advancing Translational Sciences (NCATS) Clinical Translational Science Award (CTSA) Program, Award Number 5UL1TR002243-03
Footnotes
Competing interests
We have no conflicts of interest to disclose.
Ethics approval and consent to participate.
The study was reviewed by the Vanderbilt University Medical Center Human Research Protections Program Institutional Review Board (3319 West End Ave., Suite 600, Nashville, TN 37203), which categorized the study as quality improvement and therefore IRB-exempt.
Methods were carried out in accordance with relevant guidelines and regulations.
The need for informed consent was waived by the ethics committee/Institutional Review Board of Vanderbilt University Medical Center, Human Research Protections Program because of the retrospective nature of the study.
Consent for publication
The manuscript contains no images or other identifiable information.
Availability of data and materials
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Contributor Information
Francis Salvador Balucan, Vanderbilt University Medical Center.
Benjamin French, Vanderbilt University.
Yaping Shi, Vanderbilt University.
Sunil Kripalani, Vanderbilt University Medical Center.
Eduard E. Vasilevskis, Vanderbilt University Medical Center.
References
1. Turbow S, Hollberg JR, Ali MK. Electronic Health Record Interoperability: How Did We Get Here and How Do We Move Forward? JAMA Health Forum. 2021;2(3):e210253.
2. Samal L, Dykes PC, Greenberg JO, Hasan O, Venkatesh AK, Volk LA, et al. Care coordination gaps due to lack of interoperability in the United States: a qualitative study and literature review. BMC Health Services Research. 2016;16(1).
3. Lynch CS, Wajnberg A, Jervis R, Basso-Lipani M, Bernstein S, Colgan C, et al. Implementation Science Workshop: a Novel Multidisciplinary Primary Care Program to Improve Care and Outcomes for Super-Utilizers. Journal of General Internal Medicine. 2016;31(7):797-802.
4. Athey S, Stern S. The impact of information technology on emergency health care outcomes. Rand J Econ. 2002;33(3):399-432.
5. Curka PA, Pepe PE, Ginger VF, Sherrard RC, Ivy MV, Zachariah BS. Emergency medical services priority dispatch. Ann Emerg Med. 1993 Nov;22(11):1688-95.
6. HealthIT.gov. Improving Hospital Transitions and Care Coordination Using Automated Admission, Discharge and Transfer Alerts. 2013 [cited 2022].
7. The Role of Big Data: Committee on Financial Services Task Force on Financial Technology: The Role of Big Data in Financial Services, US House of Representatives, First Session (2019).
8. Sun M, Oliwa T, Peek ME, Tung EL. Negative Patient Descriptors: Documenting Racial Bias in the Electronic Health Record. Health Aff (Millwood). 2022 Feb;41(2):203-11.
9. Moore C. VUMC Financial Statements. 2022 [cited 2022 September 10]. Available from: https://finance.vumc.org/treasury/bonddisclosures.aspx.
10. THA ADT Encounter Notification Service. Tennessee Hospital Association; 2022 [cited 2022 July 27]. Available from: http://www.connectn.org/ENS.
11. R Core Team. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing; 2022.
12. Harrell FE Jr. Hmisc: Harrell Miscellaneous. R package version 4.6-0. 2021.
13. Kilaru AS, Wiebe DJ, Karp DN, Love J, Kallan MJ, Carr BG. Do Hospital Service Areas and Hospital Referral Regions Define Discrete Health Care Populations? Medical Care. 2015;53(6):510-6.
14. Kruse CS, Kindred B, Brar S, Gutierrez G, Cormier K. Health Information Technology and Doctor Shopping: A Systematic Review. Healthcare. 2020;8(3):306.
15. All-Payer Claims Databases Measurement of Care: Systematic Review and Environmental Scan of Current Practices and Evidence. Agency for Healthcare Research and Quality; 2017.
16. Huang J, Galal G, Etemadi M, Vaidyanathan M. Evaluation and Mitigation of Racial Bias in Clinical Machine Learning Models: Scoping Review. JMIR Medical Informatics. 2022;10(5):e36388.
17. Baron RJ, Khullar D. Building Trust to Promote a More Equitable Health Care System. Ann Intern Med. 2021 Apr;174(4):548-9.
18. Wolff RF, Moons KGM, Riley RD, Whiting PF, Westwood M, Collins GS, et al. PROBAST: A Tool to Assess the Risk of Bias and Applicability of Prediction Model Studies. Annals of Internal Medicine. 2019;170(1):51.
19. Meltzer D, Ruhnke G. Redesigning Care for Patients at Increased Hospitalization Risk: The Comprehensive Care Physician Model. Health Affairs. 2014;33(5):770-7.
20. Metropolitan and Micropolitan Statistical Areas. US Census Bureau; 2021.
21. Matthews L. Joint Annual Report of Hospital: Vanderbilt Medical Center. Tennessee Department of Health, Health Statistics; 2020.
22. Matthews L. Joint Annual Report of Hospital: TriStar Centennial Hospital. Tennessee Department of Health, Health Statistics; 2020.
23. Rural Hospital Viability: A Look at Alternative Models for Rural Hospitals. Tennessee Hospital Association; 2022 [cited 2022 September 11]. Available from: https://tha.com/focus-areas/small-and-rural/rural-hospital-viability/.
24. Woodward EN, Matthieu MM, Uchendu US, Rogal S, Kirchner JE. The health equity implementation framework: proposal and preliminary study of hepatitis C virus treatment. Implementation Science. 2019;14(1).
25. Woodward EN, Singh RS, Ndebele-Ngwenya P, Melgar Castillo A, Dickson KS, Kirchner JE. A more practical guide to incorporating health equity domains in implementation determinant frameworks. Implementation Science Communications. 2021;2(1).
26. Berkman ND, Chang E, Seibert J, Ali R, Porterfield D, Jiang L, et al. Management of High-Need, High-Cost Patients: A "Best Fit" Framework Synthesis, Realist Review, and Systematic Review. Agency for Healthcare Research and Quality (AHRQ); 2021.
27. ACO REACH Model. Centers for Medicare and Medicaid Services; 2022 [cited 2022 September 11]. Available from: https://innovation.cms.gov/innovation-models/aco-reach.