Abstract
Background
Appointment no shows are prevalent in safety-net healthcare systems. The efficacy and equity of using predictive algorithms to selectively add resource-intensive live telephone outreach to standard automated reminders in such a setting are not known.
Objective
To determine if adding risk-driven telephone outreach to standard automated reminders can improve in-person primary care internal medicine clinic no show rates without worsening racial and ethnic show-rate disparities.
Design
Randomized controlled quality improvement initiative.
Participants
Adult patients with an in-person appointment at a primary care internal medicine clinic in a safety-net healthcare system from 1/1/2022 to 8/24/2022.
Interventions
A random forest model that leveraged electronic health record data to predict appointment no show risk was internally trained and validated to ensure fair performance. Schedulers leveraged the model to place reminder calls to patients in the augmented care arm who had a predicted no show rate of 15% or higher.
Main Measures
The primary outcome was no show rate stratified by race and ethnicity.
Key Results
There were 5840 appointments with a predicted no show rate of 15% or higher. A total of 2858 had been randomized to the augmented care group and 2982 randomized to standard care. The augmented care group had a significantly lower no show rate than the standard care group (33% vs 36%, p < 0.01). There was a significant reduction in no show rates for Black patients (36% vs 42% respectively, p < 0.001) not reflected in white, non-Hispanic patients.
Conclusions
In this randomized controlled quality improvement initiative, adding model-driven telephone outreach to standard automated reminders was associated with a significant reduction of in-person no show rates in a diverse primary care clinic. The initiative reduced no show disparities by predominantly improving access for Black patients.
Supplementary Information
The online version contains supplementary material available at 10.1007/s11606-023-08209-0.
INTRODUCTION
Racial and ethnic disparities in missed appointments (no shows) are unfortunately common in safety-net healthcare systems.1 These disparities in access risk harming the very patients safety-net systems are designed to care for.2 Patient-level barriers to appointment completion consist of factors known to impact access in minorities and the socioeconomically disadvantaged, including limited transportation, work or childcare conflicts, lack of trust, and missed opportunities due to miscommunication.3 Technology-driven reminders such as automated phone calls, short message service (SMS) messaging, and patient portal notifications could conceptually reduce no show rates by overcoming issues of miscommunication and appointment awareness. Unfortunately, the efficacy of technology-driven solutions in the setting of digital device and access disparities is uncertain.4,5 A potential solution to the so-called digital divide is the addition of live calls to automated messaging, but such an approach can be labor intensive.6,7 With the development of no show risk prediction models, the burden of more labor-intensive outreach efforts could be reduced by focusing on higher risk patients.8–12 However, processes built on no show prediction models require careful vetting given their hypothetical potential of widening disparities, rather than improving them.13,14 To date, little is known regarding the impact of predictive model-driven outreach on disparities in no show rates.
In 2020, our healthcare system considered implementing a risk-driven live outreach process as part of a wider initiative to reduce no show rates. The solution leveraged a machine learning model that relied on appointment details and select patient characteristics that did not include race and ethnicity to calculate the risk of no show for in-person appointments. Our internal validation of the model, its implementation, and our assessment of its impact were focused on ensuring fair performance and an equitable reduction in no show rates. To closely assess and monitor the impact of this intervention, we elected to deploy it through the framework of a prospective, randomized controlled quality improvement initiative.
METHODS
Design, Setting, and Participants
This was a prospective, randomized controlled quality improvement initiative designed to assess the additional value of a risk-driven live call for patients with appointments in a multi-provider primary care internal medicine clinic.
The intervention was conducted at the central outpatient facility of a large safety-net healthcare system in Cleveland, OH. The system receives over 1.4 million outpatient visits from over 300,000 unique patients per year. For this quality improvement initiative, we focused efforts on the main site of an adult internal medicine practice. In 2021–2022, this practice scheduled 11,720 unique patients over 58,680 visits; 42.7% of patients were white, non-Hispanic, 39.9% Black, and 4.9% Hispanic. The no show rate during that period was 18.9% for white, non-Hispanic patients, 28.2% for Black patients, and 25.7% for Hispanic patients.
For all appointments, outpatient clinic reminders are sent by automated SMS or automated phone call based on the type of phone number registered by the patient. If patients have active patient portals, they can also receive a push notification and email. Outreach through this intervention was provided in addition to these standard procedures.
Model Training and Validation
The no show model is a random forest algorithm developed by the Epic Corporation (Verona, WI). Random forests are a supervised learning method that is commonly used to build “black box” models with a large number of variables to solve classification problems (such as show vs no show).15 For this model, potential variables were based on a list that was curated beforehand by the vendor, with the ultimate contribution of each variable in our setting driven by the localization process. By design, race and ethnicity were not included as variables. Localization was completed via cloud services provided by the Epic Corporation. The localization service leveraged local in-person appointment data from September 2020 to September 2021, with training and validation using an 80:20 split. The locally trained model had a c-statistic of 0.80 and Brier score of 0.12 against a composite no show outcome that included both missed appointments and same calendar day cancellations. The features used by the model and their relative importance after localization are shown in the supplemental material. The highest contributing variables in our setting included historical no show rate (21%), appointment lead time (9.4%), visit type (6%), department (5.8%), age (5.3%), and number of prior appointments (4.4%). Missingness was handled by imputation of the mean for numerical variables, the most common value for categorical variables, and 0 for binary variables.
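The missing-data rules described above (mean for numeric variables, most common value for categorical variables, 0 for binary variables) can be sketched in a few lines. This is an illustrative reconstruction only, not Epic's implementation; the field names and record layout are hypothetical.

```python
from statistics import mean, mode

# Field kinds mirroring the imputation rules described in the text.
NUMERIC, CATEGORICAL, BINARY = "numeric", "categorical", "binary"

# Hypothetical appointment schema (illustrative, not the vendor's).
SCHEMA = {"lead_time_days": NUMERIC, "visit_type": CATEGORICAL, "portal_active": BINARY}

def impute(rows, schema):
    """Return copies of rows with missing values (None) filled per field:
    mean for numeric, most common value for categorical, 0 for binary."""
    fills = {}
    for field, kind in schema.items():
        observed = [r[field] for r in rows if r[field] is not None]
        if kind == NUMERIC:
            fills[field] = mean(observed)
        elif kind == CATEGORICAL:
            fills[field] = mode(observed)
        else:  # binary
            fills[field] = 0
    return [{f: (fills[f] if r[f] is None else r[f]) for f in schema} for r in rows]

rows = [
    {"lead_time_days": 10, "visit_type": "follow-up", "portal_active": 1},
    {"lead_time_days": None, "visit_type": "follow-up", "portal_active": None},
    {"lead_time_days": 20, "visit_type": None, "portal_active": 0},
]
filled = impute(rows, SCHEMA)
```

The filled rows would then feed the random forest; the imputation itself is independent of the classifier.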
For our prospective validation, we activated the model in our electronic health record (EHR) for over 3 months (9/20/2021–1/1/2022). No show model predictions were calculated for each appointment in our EHR and updated daily. The prospectively validated c-statistic was 0.75 for appointments in internal medicine during that time frame (supplemental material). The sensitivity of the model for no show at the 15% threshold was 0.76, with a specificity of 0.55 (supplemental material). This corresponded to a positive predictive value of 0.30 and a negative predictive value of 0.90 for predicting no show at that threshold.
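For readers unfamiliar with threshold metrics, a minimal sketch of how sensitivity, specificity, PPV, and NPV are derived from predicted probabilities and observed outcomes at a cut-off like 0.15. The probabilities below are made up for illustration, not study data.

```python
def threshold_metrics(probs, outcomes, threshold=0.15):
    """Flag each appointment when predicted risk >= threshold, then compute
    sensitivity, specificity, PPV, and NPV against observed no shows."""
    tp = fp = tn = fn = 0
    for p, no_show in zip(probs, outcomes):
        flagged = p >= threshold
        if flagged and no_show:
            tp += 1
        elif flagged and not no_show:
            fp += 1
        elif not flagged and no_show:
            fn += 1
        else:
            tn += 1
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Toy example (hypothetical scores, not study data):
probs = [0.9, 0.4, 0.2, 0.1, 0.05, 0.6]
outcomes = [True, True, False, False, False, False]
m = threshold_metrics(probs, outcomes, threshold=0.15)
```

A low threshold such as 0.15 trades specificity for sensitivity, which matches the study's figures (0.76 sensitivity, 0.55 specificity).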
To assess potential disparities in the model’s performance, we constructed calibration plots for all appointments as well as for strata of patients identified as white or Black (supplemental material). Visual inspection revealed no major deviations from expected values in any plot. We did not assess calibration plots for other racial and ethnic groups because their smaller sample sizes would produce exceedingly wide confidence intervals and preclude a well-informed assessment.
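Conceptually, a stratified calibration check only requires binned predicted versus observed rates computed separately per subgroup. A minimal sketch of the binning step, with toy values rather than study data:

```python
def calibration_bins(probs, outcomes, n_bins=5):
    """Bucket appointments into equal-width predicted-risk bins and return
    (mean predicted risk, observed no show rate, count) for each non-empty
    bin: the points a calibration curve displays."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, outcomes):
        i = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        bins[i].append((p, y))
    points = []
    for members in bins:
        if members:
            ps = [p for p, _ in members]
            ys = [y for _, y in members]
            points.append((sum(ps) / len(ps), sum(ys) / len(ys), len(members)))
    return points

# A perfectly calibrated toy example: low-risk appointments never no show,
# high-risk appointments always do (values are illustrative).
pts = calibration_bins([0.1, 0.1, 0.9, 0.9], [0, 0, 1, 1], n_bins=2)
```

Running the same function separately on each racial stratum yields the per-group curves compared in the supplemental material; well-calibrated strata have observed rates tracking mean predicted risk bin by bin.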
Intervention
After our independent, prospective validation of the no show model, we proceeded with the systems improvement process. In-person appointments were randomized to standard or augmented care groups based on the last digit of a unique internal identifier for each encounter (even digits were assigned to one group, odd digits to the other). For this initiative, we modified a pre-existing appointment outreach activity in the EHR to show only appointments that were included in the augmented care group. The activity juxtaposed appointment information alongside the predicted risk of no show. Schedulers called patients with a predicted no show risk of 15% or higher 3–5 business days prior to an appointment. For consistency, they started with the highest risk patients first. A script was provided for guidance (supplemental material). The intent of the outreach effort was to seek appointment confirmation. If patients declined to confirm the appointment, they were offered the ability to cancel, reschedule, or convert their in-person visit to a telehealth visit. In the first iteration of the intervention, we began collecting call outcome data to identify when patients could not be reached, using a mechanism within the EHR that allowed schedulers to select from several discrete outcomes (such as voicemail, no response, or invalid number).
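The randomization and call-prioritization logic can be sketched as follows. This is a simplified reconstruction with hypothetical identifiers and risks; in particular, which parity (even or odd) mapped to which arm is our assumption, as the text does not specify.

```python
def assign_arm(encounter_id):
    """Arm assignment from the last digit of the internal encounter
    identifier: even digits to one arm, odd to the other. The even-to-
    augmented mapping here is an assumption for illustration."""
    return "augmented" if int(str(encounter_id)[-1]) % 2 == 0 else "standard"

def outreach_worklist(appointments, threshold=0.15):
    """Build the schedulers' call list: augmented-arm appointments at or
    above the risk threshold, sorted highest predicted risk first."""
    eligible = [a for a in appointments
                if assign_arm(a["encounter_id"]) == "augmented"
                and a["risk"] >= threshold]
    return sorted(eligible, key=lambda a: a["risk"], reverse=True)

# Hypothetical appointments (identifiers and risks are illustrative).
appointments = [
    {"encounter_id": 1012, "risk": 0.40},
    {"encounter_id": 1034, "risk": 0.10},  # below threshold, skipped
    {"encounter_id": 1056, "risk": 0.90},
    {"encounter_id": 1077, "risk": 0.80},  # odd last digit: standard care
]
worklist = outreach_worklist(appointments)
```

Sorting by descending risk reproduces the "highest risk patients first" calling order described above.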
Measures and Outcomes
Discrete appointment and patient demographic data were extracted from the EHR. The primary outcome was in-person appointment no show, which included missed appointments and appointments with same day cancellations. This outcome was stratified by racial and ethnic categories.
Secondary outcomes included missed appointment rates, same-day cancellation rates, and telehealth (telephone or video) conversion rates when conversion occurred within 7 days of the visit. Demographic data drawn from the EHR included age, sex, race, and ethnicity. Discrete call outcome data were also tabulated once they became available after the first iteration. Secondary outcomes were also stratified by racial and ethnic categories.
Analysis
Differences in categorical variables were assessed using chi-square testing. Significance was attributed to results with a p-value less than 0.05. Analyses were completed in R version 4.1.16 The stratified analyses included only the three most prevalent racial and ethnic categories in our system: Black, Hispanic, and white, non-Hispanic. Data were reported, but analyses were not conducted, for the remaining racial and ethnic categories due to smaller sample sizes.
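As an illustration of the chi-square testing described above, a self-contained sketch applied to the no show counts later reported in Table 3. This uses the Pearson chi-square without continuity correction, so it may differ slightly from R's default Yates-corrected output; for one degree of freedom the tail probability equals erfc(sqrt(chi2 / 2)).

```python
from math import erfc, sqrt

def chi2_2x2(a, b, c, d):
    """Pearson chi-square (no continuity correction) for the 2x2 table
    [[a, b], [c, d]], with its 1-degree-of-freedom p-value computed as
    erfc(sqrt(chi2 / 2))."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return chi2, erfc(sqrt(chi2 / 2))

# No show counts from Table 3: 1078/2982 standard vs 936/2858 augmented.
chi2, p = chi2_2x2(1078, 2982 - 1078, 936, 2858 - 936)
```

The resulting p-value is close to the 0.006 reported for the composite no show outcome.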
A post hoc simulation of outreach efficiency was conducted by assessing the number needed to treat to prevent one no show at various thresholds of no show probability (0.25, 0.35, 0.45, and 0.55).
Data from the initiative were reviewed on a biweekly basis by stakeholders, and decisions on the continuation, iteration, or discontinuation of the initiative were made by majority vote.
Ethical Considerations
The quality improvement protocol was developed in conjunction with stakeholders in primary care and our healthcare system’s quality and clinical informatics leadership. No show predictions were available to all users of the EHR. This initiative used the scoring system primarily to structure and prioritize support staff outreach. The use of a quality improvement approach was felt appropriate in this context as the initiative was focused on process improvement, was deemed low risk, and augmented standard outreach mechanisms. In addition, the use of randomization provided more equitable use of limited support staff time while also allowing for more robust assessment of local impact.17,18 This study was reviewed and deemed exempt by the MetroHealth Institutional Review Board.
RESULTS
The quality initiative formally began on 1/1/2022. The overall approach of patient selection and outreach was consistent throughout, though 2 minor iterations did occur. In the first, discrete call outcome tracking was introduced on 3/1/2022. In the second, the outreach team was re-coached to emphasize the prospect of telehealth options for high-risk patients to bolster telehealth conversion rates. By 8/24/2022, stakeholders reviewed data that showed evidence of improvement in no show rates in our stratified analysis and unanimously agreed to discontinue the randomization feature accordingly.
From 1/1/2022 to 8/19/2022, there were 8096 appointments in the general medicine clinic. 58.9% of appointments were for female patients. A total of 3410 appointments (42%) were for patients identified as Black, 3018 (38%) as white, non-Hispanic, and 1142 (14%) as Hispanic.
The study population included all in-person appointments with predicted no show rates of 15% and higher. There were 5840 appointments total, with 2858 that had been randomized to the augmented care group and 2982 randomized to the standard care group. Patient age, sex, portal activation, appointment confirmation rates, and the model-based probability of no show did not differ substantially between study arms (Table 1). The same measures stratified by race and ethnicity were similar (Table 2).
Table 1.
| | Standard care (N = 2982) | Augmented care (N = 2858) |
|---|---|---|
| Age, years (IQR) | 57.46 (45.10–66.09) | 57.64 (46.33–65.62) |
| Female sex | 1215 (40.7%) | 1192 (41.7%) |
| Race/ethnicity | | |
| American Indian | 15 (0.5%) | 9 (0.3%) |
| Asian | 35 (1.2%) | 27 (0.9%) |
| Black | 1339 (44.9%) | 1339 (46.9%) |
| Hispanic | 419 (14.1%) | 424 (14.8%) |
| Native Hawaiian | 3 (0.1%) | 3 (0.1%) |
| White, non-Hispanic | 1058 (35.5%) | 965 (33.8%) |
| Unknown/declined | 113 (3.8%) | 91 (3.2%) |
| Patient portal activation | 1879 (63.0%) | 1807 (63.2%) |
| New patient visit type | 484 (16.2%) | 456 (16.0%) |
| Model predicted probability of no show (IQR) | 0.30 (0.22–0.42) | 0.30 (0.21–0.40) |
Table 2.
| Race/ethnicity | Black: standard care (N = 1339) | Black: augmented care (N = 1339) | Hispanic: standard care (N = 419) | Hispanic: augmented care (N = 424) | White, non-Hispanic: standard care (N = 1058) | White, non-Hispanic: augmented care (N = 965) |
|---|---|---|---|---|---|---|
| Age, years (IQR) | 57.79 (46.39–65.41) | 58.10 (47.62–65.27) | 53.16 (41.78–63.18) | 54.85 (41.62–64.32) | 58.35 (45.84–67.31) | 58.39 (47.56–66.37) |
| Female sex | 524 (39.1%) | 516 (38.5%) | 167 (39.9%) | 171 (40.3%) | 476 (45.0%) | 442 (45.8%) |
| Patient portal activation | 788 (58.8%) | 775 (57.9%) | 267 (63.7%) | 288 (67.9%) | 716 (67.7%) | 656 (68.0%) |
| New patient visit type | 215 (16.1%) | 211 (15.8%) | 75 (17.9%) | 78 (18.4%) | 165 (15.6%) | 140 (14.5%) |
| Model predicted probability of no show (IQR) | 0.33 (0.24–0.45) | 0.31 (0.22–0.42) | 0.29 (0.21–0.41) | 0.29 (0.21–0.41) | 0.28 (0.21–0.40) | 0.28 (0.20–0.38) |
Study outcomes are shown in Table 3. There was evidence of a significant reduction in missed appointments in the augmented care group compared to the standard care group (27.1% vs 30.7%, p < 0.01) but no difference in same-day cancellations (5.6% vs 5.5% respectively, p = 0.82). This culminated in a significantly lower composite no show rate of 32.8% in the augmented care group compared to 36.2% in the standard care group (p < 0.01). When stratified along the 3 dominant categories of race and ethnicity (white, non-Hispanic, Black, and Hispanic), there were significant improvements in no show rates for in-person appointments only for Black patients (Table 4). The frequencies of telehealth conversions were low and not significantly different with the intervention in total or when stratified by race (Table 4).
Table 3.
| | Standard care (N = 2982) | Augmented care (N = 2858) | p value |
|---|---|---|---|
| Cancellation (any) | 506 (17.0%) | 510 (17.8%) | 0.38 |
| Transitioned to telehealth within 1 week of appointment | 75 (2.5%) | 95 (3.3%) | 0.066 |
| Missed appointment | 914 (30.7%) | 775 (27.1%) | 0.003 |
| Same day cancellation | 164 (5.5%) | 161 (5.6%) | 0.82 |
| No show* | 1078 (36.2%) | 936 (32.8%) | 0.006 |
*Includes missed appointments and same day cancellations
Table 4.
| Race/ethnicity | Black: standard (N = 1339) | Black: augmented care (N = 1339) | p value | White, non-Hispanic: standard (N = 1058) | White, non-Hispanic: augmented care (N = 965) | p value | Hispanic: standard (N = 419) | Hispanic: augmented care (N = 424) | p value |
|---|---|---|---|---|---|---|---|---|---|
| Cancellation (any) | 233 (17.4%) | 238 (17.8%) | 0.80 | 201 (19.0%) | 178 (18.4%) | 0.75 | 51 (12.2%) | 72 (17.0%) | 0.048* |
| Transitioned to telehealth within 1 week of appointment | 35 (2.6%) | 44 (3.3%) | 0.30 | 28 (2.6%) | 32 (3.3%) | 0.38 | 7 (1.7%) | 15 (3.5%) | 0.090 |
| Missed appointment | 481 (35.9%) | 382 (28.5%) | < 0.001* | 246 (23.3%) | 243 (25.2%) | 0.31 | 139 (33.2%) | 122 (28.8%) | 0.17 |
| Same day cancellation | 83 (6.2%) | 98 (7.3%) | 0.25 | 62 (5.9%) | 44 (4.6%) | 0.19 | 12 (2.9%) | 16 (3.8%) | 0.46 |
| No show** | 564 (42.1%) | 480 (35.8%) | < 0.001* | 308 (29.1%) | 287 (29.7%) | 0.76 | 151 (36.0%) | 138 (32.5%) | 0.29 |
*p < 0.05
**Includes missed appointments and same day cancellations
Appointment confirmation rates by means of automated outreach (patient portal, telephone call, or SMS) are shown in the supplemental material. Patient-initiated confirmations were significantly lower in the augmented care group when compared to the standard care group (12.4% vs 15.5%, p < 0.001).
Call outcomes stratified by race are shown in Table 5. While the overall distribution of call outcomes did not differ significantly, there were more completed connections and fewer calls that went to voicemail among Black patients compared to Hispanic and white, non-Hispanic patients. Notably, the no show rate was significantly lower among patients who were reached than among those who were left a message (26.4% vs 32.4%, p < 0.001).
Table 5.
| | Black (N = 627) | Hispanic (N = 169) | White, non-Hispanic (N = 428) | p value |
|---|---|---|---|---|
| Call outcome* | | | | 0.241 |
| Contact reached | 331 (52.8%) | 75 (44.4%) | 181 (42.3%) | |
| Left message | 212 (33.8%) | 67 (39.6%) | 177 (41.4%) | |
| Unable to contact patient | 39 (6.2%) | 10 (5.9%) | 36 (8.4%) | |
| No answer/busy | 31 (4.9%) | 12 (7.1%) | 23 (5.4%) | |
| Flipped to telehealth | 8 (1.3%) | 3 (1.8%) | 5 (1.2%) | |
| Not available | 3 (0.5%) | 1 (0.6%) | 3 (0.7%) | |
| Canceled | 1 (0.2%) | 0 (0.0%) | 1 (0.2%) | |
| Non-working phone number | 1 (0.2%) | 0 (0.0%) | 1 (0.2%) | |
| Rescheduled | 1 (0.2%) | 0 (0.0%) | 1 (0.2%) | |
| Transportation required | 0 (0.0%) | 1 (0.6%) | 0 (0.0%) | |
*Call outcomes are only available for patients contacted after the first iteration of the quality improvement initiative
A simulated efficiency analysis showed that the number needed to treat (call) to prevent one no show decreased from 29 to 15 with increasing outreach thresholds from 0.15 to 0.45 (Table 6). At a threshold of 0.55, the intervention’s number needed to treat increased to 28.
Table 6.
| Simulated probability threshold | Standard care volume | Augmented care volume | Standard care no show ratio | Augmented care no show ratio | Absolute risk reduction | Number needed to treat |
|---|---|---|---|---|---|---|
| 0.15* | 2982 | 2858 | 0.36 | 0.33 | 0.034 | 29.4 |
| 0.25 | 1920 | 1803 | 0.42 | 0.39 | 0.035 | 28.3 |
| 0.35 | 1192 | 1022 | 0.48 | 0.43 | 0.047 | 21.2 |
| 0.45 | 620 | 493 | 0.54 | 0.47 | 0.066 | 15.1 |
| 0.55 | 217 | 154 | 0.56 | 0.53 | 0.036 | 27.6 |
*Actual study threshold
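The number needed to treat values in Table 6 follow directly from the arm-level no show rates, since NNT is the reciprocal of the absolute risk reduction. A minimal check using the 0.15-threshold counts from Table 3:

```python
def number_needed_to_treat(standard_rate, augmented_rate):
    """NNT (here, the number of patients needing a call to prevent one
    no show) is the reciprocal of the absolute risk reduction."""
    return 1 / (standard_rate - augmented_rate)

# No show counts at the actual 0.15 threshold: 1078/2982 standard
# vs 936/2858 augmented (Table 3).
nnt = number_needed_to_treat(1078 / 2982, 936 / 2858)
```

This reproduces the 29.4 shown in the first row of Table 6; the other rows apply the same formula to the subsets above each simulated threshold.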
DISCUSSION
In this randomized quality improvement initiative, the addition of predictive model-driven telephone reminders to standard automated messaging led to a significant reduction in no show rates among Black patients that was not mirrored in white, non-Hispanic patients. Black patients who received augmented care had a 15% relative reduction in no shows, which drove most of the 9% relative reduction in overall no show rates. This brought the no show rate in Black patients (35.8%) closer to that in white, non-Hispanic patients (29.7%). While Hispanic patients did not see a statistically significant improvement in no show rates, our study may have been underpowered for that stratum. Based on the tabulation of call outcome data later in the intervention, these differences may have been driven by higher connection rates with Black patients (52.8%) when compared to Hispanic (44.4%) or white, non-Hispanic patients (42.3%).
Though other studies have used robust prospective randomized controlled designs to demonstrate how predictive model-driven live outreach can reduce no show rates, none has measured the impact of such an intervention on racial disparities in show rates or appointment access.10–12,19 Our study is therefore the first to demonstrate the value of an additional model-driven live outreach approach in reducing racial disparities in no show rates.
The large effect size of our initiative in a minority subgroup makes a strong case for the inadequacy of currently leveraged automated communication techniques, including reminders delivered by patient portals, in the patients that safety-net systems serve. Confirmation responses to any automated reminders were rare in general (less than 15%), even when patient portal activation rates were relatively high at 63%. This large gap in phone and patient portal engagement leaves much to be desired but is unfortunately well-documented in underserved populations.20–24
Both the validation of the machine learning model and its implementation were designed to ensure that disparities in no show rate were not worsened with its introduction. The former is particularly important considering recent evidence of the potential harms machine learning models can inadvertently inflict on minority groups.25–27 For our model validation, we were careful to run a racially stratified internal validation before implementation. We moved forward with the model because our results did not yield any concerning deviations in model performance by race in our calibration testing.
As demonstrated by our data, model-driven processes that impact primary care access are likely to select for minorities because they have higher rates of missed appointments. As a result, any intervention built upon such a model must be scrutinized for its potential to exacerbate disparities. In the case of no show prediction, overbooking is one example of how such a model may be misused. Overbooking is likely to lead to inferior service (increased wait times and possibly truncated visits) for the very patients who already struggle with clinic access.28 This is specifically why overbooking was not used in our intervention, and why we were careful to measure potentially unanticipated effects such as increased cancellation or rescheduling rates as markers of general access. Fortunately, overall cancellations did not differ for most groups in our study (Tables 3 and 4). The significance of the increase in cancellations in the Hispanic cohort is questionable given that stratum's smaller sample size (i.e., possible type I error) and the absence of any cancellations during call outcome monitoring in that group.
Though there was a slight increase in telehealth conversions with the intervention, conversions were relatively rare, and there were no significant improvements in the overall or stratified analyses (Tables 3 and 4). Encouraging schedulers to advertise telehealth services did little to change this (Table 5). We suspect that this is due to a selection bias in our study, as our cohort focused on patients who had already opted for in-person visits during the COVID-19 era, when telehealth options had already been offered and well-advertised systemwide. While telehealth could in principle have been leveraged to reduce visit show-rate disparities, early evidence from its use in the pandemic suggests that its potential to improve access for minorities left much to be desired.29–32
We were not able to generate a formal cost analysis with our pragmatic design. Nonetheless, schedulers estimated a rather favorable effort load of less than 1 h of work for approximately 15–20 patients per day. Since we set our threshold for a call at a 15% risk of no show in a generally high-risk clinic, most patients (over 70%) were included for outreach. Setting the threshold to a higher risk level could prove less effective overall but more efficient as demonstrated by our threshold simulation (Table 6).
Our study results are strengthened by our reliance on a large and diverse cohort of patients and our randomized controlled design. Major limitations include reliance on single-center experience which may not be generalizable to settings that are not dedicated to caring for underserved patients. Additionally, we did not design our intervention or data collection processes to account for the likely important impact of socioeconomic status as a confounder for race. Furthermore, our assessment of the model in this pragmatic implementation was limited to performance in total, and any potentially important differences in predictors between racial groups were not elucidated. Another noted limitation is our inability to formally identify why live telephone outreach was more effective than current automated techniques alone. Finally, our study design did not allow us to investigate or understand our patients’ barriers to access. This is particularly true for the large contingent of patients who could not be contacted in real-time by our schedulers (Table 5).
CONCLUSION
In this randomized controlled quality improvement initiative, the addition of risk model-driven telephone outreach to standard automated reminders led to a significant reduction in no show rates among patients presenting to a primary care internal medicine clinic at a safety-net healthcare system. When stratified by patient race, the improvement in no show rate was driven primarily by improvements in connection rates and no show reductions among Black patients. More research is needed on why, despite relatively high patient portal activation, automated reminders are inadequate in this patient population. Systems must nonetheless continue improving access for the underserved, both by promoting the effective use of patient-facing tools and by engaging patients in a modality that is convenient and effective for them.
Supplementary Information
Below is the link to the electronic supplementary material.
Acknowledgements:
The authors would like to acknowledge Bridget Perea for her assistance in model implementation, as well as Bridget Harper, Frances Bowman, and Nancy Marti for their dedicated efforts in contacting patients for this initiative.
Funding
This project was supported by the Clinical and Translational Science Collaborative (CTSC) of Cleveland which is funded by the National Institutes of Health (NIH), National Center for Advancing Translational Science (NCATS), Clinical and Translational Science Award (CTSA) grant, UL1TR002548. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.
Data Availability
Data from this study can be made available upon reasonable request from the study authors.
Declarations:
Conflict of Interest:
YT reports consulting and research funding from Beckman Coulter for biomarkers in sepsis and sepsis early warning systems that was unrelated to this work. The remaining authors report no conflicts to declare.
Footnotes
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1.Shimotsu S, Roehrl A, McCarty M, et al. Increased likelihood of missed appointments (“no shows”) for racial/ethnic minorities in a safety net health system. J Prim Care Community Health. 2016;7(1):38–40. doi: 10.1177/2150131915599980.
- 2.Andrulis DP. Access to care is the centerpiece in the elimination of socioeconomic disparities in health. Ann Intern Med. 1998;129(5):412–416. doi: 10.7326/0003-4819-129-5-199809010-00012.
- 3.Parsons J, Bryce C, Atherton H. Which patients miss appointments with general practice and the reasons why: a systematic review. Br J Gen Pract. 2021;71(707):e406–e412. doi: 10.3399/BJGP.2020.1017.
- 4.Davis D. Using technology to reduce missed appointments. On-Line J Nurs Inform. 2021;25(2). https://www.proquest.com/scholarly-journals/using-technology-reduce-missed-appointments/docview/2621686165/se-2.
- 5.Saeed SA, Masters RM. Disparities in health care and the digital divide. Curr Psychiatry Rep. 2021;23:1–6. doi: 10.1007/s11920-021-01274-4.
- 6.Hasvold PE, Wootton R. Use of telephone and SMS reminders to improve attendance at hospital appointments: a systematic review. J Telemed Telecare. 2011;17(7):358–364. doi: 10.1258/jtt.2011.110707.
- 7.Parikh A, Gupta K, Wilson AC, Fields K, Cosgrove NM, Kostis JB. The effectiveness of outpatient appointment reminder systems in reducing no-show rates. Am J Med. 2010;123(6):542–548. doi: 10.1016/j.amjmed.2009.11.022.
- 8.Carreras-García D, Delgado-Gómez D, Llorente-Fernández F, Arribas-Gil A. Patient no-show prediction: a systematic literature review. Entropy. 2020;22(6):675. doi: 10.3390/e22060675.
- 9.Ding X, Gellad ZF, Mather C 3rd, et al. Designing risk prediction models for ambulatory no-shows across different specialties and clinics. J Am Med Inform Assoc. 2018;25(8):924–930. doi: 10.1093/jamia/ocy002.
- 10.Shah SJ, Cronin P, Hong CS, et al. Targeted reminder phone calls to patients at high risk of no-show for primary care appointment: a randomized trial. J Gen Intern Med. 2016;31(12):1460–1466. doi: 10.1007/s11606-016-3813-0.
- 11.Goffman RM, Harris SL, May JH, et al. Modeling patient no-show history and predicting future outpatient appointment behavior in the Veterans Health Administration. Mil Med. 2017;182(5):e1708–e1714. doi: 10.7205/MILMED-D-16-00345.
- 12.Valero-Bover D, Gonzalez P, Carot-Sans G, et al. Reducing non-attendance in outpatient appointments: predictive model development, validation, and clinical assessment. BMC Health Serv Res. 2022;22(1):451. doi: 10.1186/s12913-022-07865-y.
- 13.Samorani M, Blount LG. Machine learning and medical appointment scheduling: creating and perpetuating inequalities in access to health care. Am J Public Health. 2020;110:440–441. doi: 10.2105/AJPH.2020.305570.
- 14.Weinick RM, Hasnain-Wynia R. Quality improvement efforts under health reform: how to ensure that they help reduce disparities—not increase them. Health Affairs. 2011;30(10):1837–1843. doi: 10.1377/hlthaff.2011.0617.
- 15.Biau G, Scornet E. A random forest guided tour. Test. 2016;25(2):197–227. doi: 10.1007/s11749-016-0481-7.
- 16.R: a language and environment for statistical computing [computer program]. Version 4.1. Vienna, Austria: R Foundation for Statistical Computing.
- 17.Ogrinc G, Davies L, Goodman D, Batalden P, Davidoff F, Stevens D. SQUIRE 2.0 (Standards for QUality Improvement Reporting Excellence): revised publication guidelines from a detailed consensus process. BMJ Qual Saf. 2016;25(12):986–992. doi: 10.1136/bmjqs-2015-004411.
- 18.Cher DJ, Carr B, Maclure MJ. Toward QI2: quality improvement of quality improvement through randomized controlled trials. Jt Comm J Qual Improv. 1999;25(1):26–39. doi: 10.1016/S1070-3241(16)30424-2.
- 19.Ulloa-Pérez E, Blasi PR, Westbrook EO, Lozano P, Coleman KF, Coley RY. Pragmatic randomized study of targeted text message reminders to reduce missed clinic visits. Perm J. 2022;26(1):64–72. doi: 10.7812/TPP/21.078.
- 20.Reisdorf BC, Fernandez L, Hampton KN, Shin I, Dutton WH. Mobile phones will not eliminate digital and social divides: how variation in internet activities mediates the relationship between type of internet access and local social capital in Detroit. Soc Sci Comput Rev. 2022;40(2):288–308. doi: 10.1177/0894439320909446.
- 21.Marler W. Mobile phones and inequality: findings, trends, and future directions. New Media Soc. 2018;20(9):3498–3520. doi: 10.1177/1461444818765154.
- 22.Perzynski AT, Roach MJ, Shick S, et al. Patient portals and broadband internet inequality. J Am Med Inform Assoc. 2017;24(5):927–932. doi: 10.1093/jamia/ocx020.
- 23.Graetz I, Gordon N, Fung V, Hamity C, Reed ME. The digital divide and patient portals: internet access explained differences in patient portal use for secure messaging by age, race, and income. Med Care. 2016;54(8):772–779. doi: 10.1097/MLR.0000000000000560.
- 24.Kumar D, Hemmige V, Kallen M, Giordano T, Arya M. Mobile phones may not bridge the digital divide: a look at mobile phone literacy in an underserved patient population. Cureus. 2019;11(2):e4104. doi: 10.7759/cureus.4104.
- 25.Celi LA, Cellini J, Charpignon M-L, et al. Sources of bias in artificial intelligence that perpetuate healthcare disparities—a global review. PLOS Digit Health. 2022;1(3):e0000022. doi: 10.1371/journal.pdig.0000022.
- 26.Vyas DA, Eisenstein LG, Jones DS. Hidden in plain sight—reconsidering the use of race correction in clinical algorithms. N Engl J Med. 2020;383:874–882. doi: 10.1056/NEJMms2004740.
- 27.Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366(6464):447–453. doi: 10.1126/science.aax2342.
- 28.Samorani M, Harris SL, Blount LG, Lu H, Santoro MA. Overbooked and overlooked: machine learning and racial bias in medical appointment scheduling. Manuf Serv Oper Manag. 2021. doi: 10.2139/ssrn.3467047.
- 29.Waseem N, Boulanger M, Yanek LR, Feliciano JL. Disparities in telemedicine success and their association with adverse outcomes in patients with thoracic cancer during the COVID-19 pandemic. JAMA Netw Open. 2022;5(7):e2220543. doi: 10.1001/jamanetworkopen.2022.20543.
- 30.Franciosi EB, Tan AJ, Kassamali B, O'Connor DM, Rashighi M, LaChance AH. Understanding the impact of teledermatology on no-show rates and health care accessibility: a retrospective chart review. J Am Acad Dermatol. 2021;84(3):769–771. doi: 10.1016/j.jaad.2020.09.019.
- 31.Franciosi EB, Tan AJ, Kassamali B, et al. The impact of telehealth implementation on underserved populations and no-show rates by medical specialty during the COVID-19 pandemic. Telemed J E Health. 2021;27(8):874–880. doi: 10.1089/tmj.2020.0525.
- 32.Darrat I, Tam S, Boulis M, Williams AM. Socioeconomic disparities in patient use of telehealth during the coronavirus disease 2019 surge. JAMA Otolaryngol Head Neck Surg. 2021;147(3):287–295. doi: 10.1001/jamaoto.2020.5161.