Abstract
Introduction:
Patients and clinicians face challenges in participating in video telehealth visits. Patient navigation has been effective in other settings in enhancing patients’ engagement with clinical programs. Our objective was to assess whether implementing a telehealth navigator program to support patients and clinicians affected video visit scheduling, video usage, and non-attendance.
Methods:
This was a quasi-experimental quality improvement project using difference-in-differences. We included data from 17 adult primary care sites at a large, urban public healthcare system from October 1, 2021 to October 31, 2022. Six sites received telehealth navigators and 11 sites were used as comparators. Navigators contacted patients with upcoming video visits by phone to assess and address potential barriers to successful video visit completion. They also provided on-site support to patients and clinicians regarding telehealth visits and usage of an electronic patient portal. The primary outcomes were difference-in-differences estimates for the proportion of telehealth visits scheduled as video visits, the proportion of telehealth visits completed as video visits, and non-attendance for visits scheduled as video visits.
Results:
There were 65 488 and 71 504 scheduled telehealth appointments at intervention and non-intervention sites, respectively. The adjusted difference-in-differences was −9.1% [95% confidence interval −26.1%, 8.0%] for the proportion of telehealth visits scheduled as video, 1.3% [−4.9%, 7.4%] for the proportion of telehealth visits completed as video visits, and −3.7% [−6.0%, −1.4%] for non-attendance for visits scheduled as video visits.
Conclusions:
Sites with telehealth navigators had comparatively lower video visit non-attendance but did not have comparatively different video visit scheduling or completion rates. Despite this, navigators’ on-the-ground presence can help identify opportunities for improvements in care design.
Keywords: telehealth, patient navigation, primary care, patient non-attendance, access to care
Introduction
For many patients, telehealth—synchronous encounters between remotely located patients and clinicians facilitated by audio and/or video communications technologies—can improve access to and attendance in primary care.1-4 Telehealth visits may be conducted by audio-only (eg, a phone call) or by video (audio plus a synchronous visual component from a camera-enabled device). While the audio-only option may be important for inclusive and equitable models of providing telehealth,5-8 video visits may have added advantages in allowing for a limited physical or environmental examination, better patient experience,9-12 and enhanced reimbursement.13
However, patients in safety-net settings may face challenges participating in video telehealth compared with audio-only for synchronous telehealth encounters.5-8 Potential barriers include limited patient familiarity with the video platforms used by health systems, uncertain attitudes toward telehealth, lack of consistent access to digital devices or connectivity, and issues with digital literacy.14-19 In other contexts where the complexity of healthcare creates barriers for patients to successfully and effectively utilize services, patient navigators can improve access to care and clinical outcomes.20 Patient navigators may provide outreach, education, and assistance with appointment scheduling and follow-up, and may facilitate care coordination.20
In the setting of telehealth utilization, 1 study of a telehealth-focused patient navigation program at an academic medical center demonstrated a 9% absolute decrease in appointment non-attendance following implementation.21 However, evidence on the efficacy of this type of intervention—including the scope and method of support provided and the effect in different settings—remains nascent. Based on this existing literature, we hypothesized that a telehealth navigation program in a safety-net setting could modestly reduce video visit non-attendance and potentially affect the proportion of telehealth visits scheduled and completed by video (as opposed to audio-only).
The objective of this study was to evaluate whether implementation of a quality improvement intervention using telehealth navigators affected video visit scheduling and completion in a public safety-net adult primary care setting.
Methods
Study Design and Setting
We designed a difference-in-differences quality improvement study using appointment scheduling and completion data from 17 community- and hospital-based adult primary care sites at a single, large public healthcare system in the largest metropolitan area of the northeastern United States from October 1, 2021 to October 31, 2022. This healthcare system serves as part of its city’s safety net, with over two-thirds of the system’s patients being uninsured or publicly insured (Medicaid and Medicare).
During this time period, telehealth appointments could be scheduled or completed as either audio-only or video based on patient and staff preferences and determination of appropriateness. Prior to this intervention, patients and staff at our health system had identified lack of direct assistance for video visits as a potential area for improvement system wide.
Intervention
The goals of the intervention were to increase the proportion of telehealth visits scheduled and completed as video visits and to reduce non-attendance of scheduled video visits.
We hired 2 full-time, grant-supported staff to serve as rotating telehealth navigators at 6 community- and hospital-based facilities. Navigators were trained in using the electronic health record (EHR), telehealth visit workflows, and patient portal functions. At the beginning of each week, they used the EHR to generate a list of patients with upcoming video visits and performed daytime outreach phone calls, using a semi-structured script and knowledge base, to identify potential technological or other barriers to attendance and assist patients in overcoming them. If a patient could not be reached, navigators left a voicemail with a callback number in case the patient wanted assistance. They also contacted patients who were marked as not having attended their scheduled video visit to identify reasons for non-attendance and offer support. Initially, navigators also identified and reached out to patients with scheduled audio-only visits to offer conversion to and guidance for video visits; however, this was discontinued after the first site so that navigators could focus on supporting already-scheduled video visits. Language interpreters were available for calls, and 1 navigator was bilingual in English and Spanish. Each navigator typically attempted outreach to 20 to 30 patients per week.

Navigators also provided on-site guidance for patients and staff on using the EHR’s integrated video visit platform and patient portal. Patients who were at a clinic for an in-person visit and whose next scheduled visit was to be conducted by video could receive on-site assistance (before leaving the clinic) to prepare for that visit. For staff, navigators provided hands-on support for troubleshooting issues with the video visit platform and workflow. Each navigator spent 2 to 4 months at their assigned intervention facility before rotating to the next facility.
Participation in the intervention group was voluntary, and we selected sites to include both hospital- and community-based sites with varied baseline video visit scheduling, both high (≥50% of all telehealth visits scheduled as video) and low (<50%). For all intervention sites, the site medical, nursing, and administrative directors had to agree to support the intervention (eg, by providing a space for the navigator to work and facilitating integration into clinic teams and workflows). The number of sites selected was based on balancing intervention duration at individual sites with learning from implementation in different site contexts.
Both intervention and non-intervention sites used standard patient communication protocols (appointment reminder text messages 3 days before, 2 days before, and the day of the appointment; a registration phone call up to 3 days before the appointment) and received the usual central support from health system administrators provided to all sites.
We treated the intervention as occurring at the level of the clinic in which a navigator was providing support, rather than at the level of individual patients or clinicians, because a navigator’s presence within a clinic and the support provided to both patients and clinicians may have spillover effects.
Primary Outcomes
The primary outcomes were difference-in-differences changes in: (1) the proportion of telehealth visits scheduled as video visits (versus audio-only visits), (2) the proportion of telehealth visits completed as video visits, and (3) non-attendance rates for visits scheduled as video visits, at each intervention site compared with non-intervention sites within the health system.
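As a point of reference, the unadjusted difference-in-differences for each outcome corresponds to the following contrast (notation introduced here for illustration only):

$$
\widehat{\mathrm{DiD}} = \left(\bar{Y}^{\,\mathrm{intervention}}_{\mathrm{post}} - \bar{Y}^{\,\mathrm{intervention}}_{\mathrm{pre}}\right) - \left(\bar{Y}^{\,\mathrm{comparison}}_{\mathrm{post}} - \bar{Y}^{\,\mathrm{comparison}}_{\mathrm{pre}}\right)
$$

where $\bar{Y}$ denotes the mean of a given outcome (eg, the proportion of telehealth visits scheduled as video) for the indicated group and period.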
Visit scheduling and completion data were gathered from the EHR. Scheduling of telehealth visits was identified by a “visit type” field in the EHR. Completion by audio-only versus video was determined from billed Current Procedural Terminology (CPT) codes and modifiers: visits billed with the 99441 to 99443 CPT codes were classified as audio-only, and visits billed with any in-person code and a telehealth modifier (95/GT) were classified as video. Visits scheduled as audio-only could be completed and billed as video visits if the clinician connected with the patient via video; conversely, visits scheduled as video could be completed and billed as audio-only if the clinician connected with the patient via audio only. Non-attendance was determined by a visit completion status flag and the presence of a valid billing code in the EHR.
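A minimal sketch of this classification rule is shown below. The function name and inputs are hypothetical and for illustration only; the actual classification was performed on billed EHR data.

```python
# Illustrative only: classification rule for completed telehealth visits as
# described above (CPT 99441-99443 -> audio-only; an evaluation code billed
# with telehealth modifier 95 or GT -> video).
AUDIO_ONLY_CPT = {"99441", "99442", "99443"}
TELEHEALTH_MODIFIERS = {"95", "GT"}

def classify_completed_visit(cpt_code: str, modifiers: list[str]) -> str:
    """Return the telehealth modality implied by a visit's billing codes."""
    if cpt_code in AUDIO_ONLY_CPT:
        return "audio-only"
    if any(m in TELEHEALTH_MODIFIERS for m in modifiers):
        return "video"
    return "other"

# Example: an office visit code billed with modifier 95 is counted as video.
print(classify_completed_visit("99213", ["95"]))  # -> "video"
```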
Data on the exact count and outcomes of individual outreach attempts—whether a navigator was able to converse with a patient with an upcoming visit—were not available.
Statistical Analysis
We used an intention-to-treat approach in our analysis and considered the intervention as occurring at the site level (as opposed to the patient level).
We describe the median number of attributed patients; annual number of visits; age, sex, race/ethnicity, non-English language preferred status, and insurance distributions at the site level for intervention and non-intervention sites. We used Mann-Whitney U tests to compare characteristics between groups.
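As a sketch, site-level characteristics could be compared with a Mann-Whitney U test as follows (the values shown are hypothetical placeholders, not study data):

```python
# Hypothetical example of the site-level comparison described above.
from scipy.stats import mannwhitneyu

# Placeholder site-level percentages (one value per site); not study data.
non_intervention_sites = [35.1, 24.3, 43.5, 30.0, 28.2, 41.0]
intervention_sites = [37.7, 16.1, 48.9, 40.2]

stat, p_value = mannwhitneyu(
    non_intervention_sites, intervention_sites, alternative="two-sided"
)
print(f"U = {stat:.1f}, P = {p_value:.2f}")
```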
For intervention sites, the month in which a site first received a navigator was considered month 0, and all time periods were defined relative to this timestamp. Monthly metrics from the 3 months prior to the intervention (−1, −2, and −3 months) were used as a baseline, and outcomes were followed for 4 months after the start of the intervention at each site (+1, +2, +3, and +4 months). The asymmetric follow-up period was intended to better capture the variable follow-up appointment intervals typical of primary care chronic disease management (eg, 3-4 months). For the first and last sites to receive a navigator, only 1 month of pre- and post-intervention data, respectively, were available. For non-intervention sites, the pre/post cutoff timestamp for comparison (month 0 for non-intervention sites) was derived from the average time of intervention implementation at the intervention sites. Metrics from non-intervention sites were similarly tracked for 3 months prior to the time cutoff and 4 months afterwards.
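A minimal sketch of this event-time alignment, assuming a site-month data frame with hypothetical column names, might look like the following:

```python
# Illustrative only: align each site's monthly metrics to event time.
# Assumes one row per site-month with columns 'site' and 'month' (pandas Period);
# start_months maps each site to its month 0 (navigator arrival, or the average
# implementation month for non-intervention sites).
import pandas as pd

def add_event_time(df: pd.DataFrame, start_months: dict) -> pd.DataFrame:
    df = df.copy()
    df["event_month"] = df.apply(
        lambda row: (row["month"] - start_months[row["site"]]).n, axis=1
    )
    # Keep the analytic window of 3 months pre and 4 months post.
    return df[df["event_month"].between(-3, 4)]
```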
We used ordinary least squares linear regression models for the difference-in-differences analysis. A separate model was used for each outcome, and each model included an interaction term between a variable indicating receipt or non-receipt of a navigator and a variable indicating time relative to the intervention implementation (or the comparison timestamp for non-intervention sites). We controlled for monthly variation in the outcomes and included fixed effects for each site. We used cluster-robust standard errors to account for intra-site correlation. The assumption of parallel trends was assessed visually.
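A minimal sketch of this specification, using statsmodels and hypothetical variable names (the actual analysis code and variable definitions may differ), is shown below:

```python
# Illustrative only: difference-in-differences OLS with calendar-month controls,
# site fixed effects, and standard errors clustered by site.
import pandas as pd
import statsmodels.formula.api as smf

def did_estimate(df: pd.DataFrame) -> float:
    """df: one row per site-month with columns 'outcome' (eg, proportion of
    telehealth visits scheduled as video), 'navigator' (0/1), 'post' (0/1),
    'calendar_month', and 'site'."""
    model = smf.ols(
        # The navigator main effect is absorbed by the site fixed effects,
        # so only the post indicator and the interaction enter directly.
        "outcome ~ navigator:post + post + C(calendar_month) + C(site)",
        data=df,
    ).fit(cov_type="cluster", cov_kwds={"groups": df["site"]})
    # The coefficient on the interaction term is the difference-in-differences estimate.
    return model.params["navigator:post"]
```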
This study was exempted from full review by the Biomedical Research Alliance of New York (#23-12-172-719) and granted a waiver of informed consent for a retrospective review of aggregated data.
Results
There were 25 471 scheduled telehealth appointments at intervention sites (n = 6) and 36 646 scheduled telehealth appointments at non-intervention sites (n = 11) during the pre- and post-intervention periods (−3 to +4 months). The characteristics of intervention and non-intervention sites are described in Table 1.
Table 1.
Characteristics of Sites That Received and Did Not Receive Telehealth Navigators.
| Characteristic, median (IQR) | Non-intervention sites (n = 11) | Intervention sites (n = 6) | P |
|---|---|---|---|
| Attributed patients, N | 47 199 (10 855-70 698) | 65 0634 (21 723-89 052) | .48 |
| Annual visits, N | 38 737 (20 713-52 631) | 50 018 (42 606-68 453) | .13 |
| Age, % | | | |
| 18-44 years | 52.3 (46.0-53.8) | 48.0 (39.4-52.4) | .23 |
| 45-64 years | 32.7 (31.3-38.5) | 34.4 (33.8-38.8) | .25 |
| 65+ years | 16.1 (13.7-18.2) | 18.0 (14.8-20.3) | .23 |
| Female, % | 57.5 (54.9-63.0) | 57.0 (52.8-61.3) | .48 |
| Race/ethnicity, % | | | |
| White | 4.1 (3.5-10.6) | 7.0 (4.0-8.0) | .84 |
| Black | 30.6 (23.1-33.7) | 20.7 (11.3-66.9) | .48 |
| Hispanic | 42.0 (24.7-50.2) | 44.4 (20.5-55.4) | .80 |
| Asian | 3.6 (1.4-6.0) | 5.4 (1.5-16.4) | .27 |
| Other race/ethnicity | 8.7 (7.0-11.3) | 9.3 (7.3-10.5) | .88 |
| Unknown | 2.1 (1.8-3.4) | 3.0 (2.2-3.3) | .23 |
| Non-English language preferred, % | 35.1 (24.3-43.5) | 37.7 (16.1-48.9) | .96 |
| Insurance, % | | | |
| Commercial | 21.0 (18.8-21.5) | 20.8 (17.1-22.1) | 1.00 |
| Medicare | 13.8 (11.8-15.2) | 14.9 (13.3-19.1) | .45 |
| Medicaid | 39.4 (32.0-43.7) | 33.5 (31.3-36.0) | .19 |
| Uninsured | 25.3 (23.0-28.8) | 27.1 (22.5-33.2) | .69 |
| Other insurance | 1.1 (0.2-1.6) | 1.75 (0.8-3.1) | .27 |
Among intervention sites, the proportion of telehealth visits scheduled as video visits was 63.6% (n = 7266) pre-intervention and 69.9% (n = 9812) post-intervention. Among non-intervention sites, the proportion of telehealth visits scheduled as video visits was 53.7% (n = 11 316) before the timestamp for comparison and 57.7% (n = 8991) after. The unadjusted difference-in-differences was 2.2% [95% confidence interval 0.7%, 3.8%]; after adjusting for month and clustering by facility, the difference-in-differences was −9.1% [−26.1%, 8.0%] (Figure 1).
Figure 1.

Changes in video visit scheduling, visit completion by video (vs audio-only), and video visit non-attendance among sites that received and did not receive telehealth navigators.
Among intervention sites, the proportion of telehealth visits completed as video visits was 20.8% (n = 1988) pre-intervention and 22.7% (n = 2702) post-intervention. Among non-intervention sites, the proportion of telehealth visits completed as video visits was 29.1% (n = 4775) before the timestamp for comparison and 31.5% (n = 3816) after. The unadjusted difference-in-differences was −0.6% [−2.2%, 1.0%]; after adjusting for month and clustering by facility, the difference-in-differences was 1.3% [−4.9%, 7.4%] (Figure 1).
Among intervention sites, the non-attendance for visits scheduled as video visits was 15.8% (n = 1148) pre-intervention and 15.8% (n = 1547) post-intervention. Among non-intervention sites, the non-attendance for visits scheduled as video visits was 18.6% (n = 2101) before the timestamp for comparison and 19.9% (n = 1793) after. The unadjusted difference-in-differences was −1.4% [−3.0%, 0.2%]; after adjusting for month and clustering by facility, the difference-in-differences was −3.7% [−6.0%, −1.4%] (Figure 1).
Discussion
In this difference-in-differences quality improvement study of telehealth scheduling and completion before and after implementation of a telehealth navigator program in a safety-net primary care setting, we found no significant effect on video visit scheduling or completion rates and a small relative reduction in non-attendance for visits scheduled as video visits.
While we hypothesized that providing direct support to both patients and clinicians in conducting video visits would increase the proportion of telehealth visits scheduled and completed as video (as opposed to audio-only), this effect was not observed. For patients, a single interaction with a navigator, whether during a pre-visit outreach call for a scheduled video visit or in person after a follow-up video visit was scheduled, may not be sufficient to fully address potential barriers and concerns around video visits. It may take repeated conversations to foster trust, establish the prospective benefits of video visits, and help patients transition from video non-users to video users. Additionally, barriers such as access to technology and broadband, trust in technology, or socioeconomic factors may be more difficult for navigators to address promptly.16-19 Navigators may also not have been able to attempt outreach for all patients with scheduled video visits, particularly at sites with high visit volumes.
For clinicians, though navigators could provide on-site assistance with using the video visit platform, this may not result in immediate comfort with scheduling and completing more telehealth follow-ups by video. In clinic, the additional time required to receive this support, along with the challenges of fitting video visits into a clinical schedule that may also include in-person visits, may limit any positive effects of navigators having prepared patients in advance. Patient preparation, clinician comfort in using the platform, and appropriate clinical workflow integration are all likely necessary to meaningfully increase video visit adoption.14,15 Prior research has suggested that clinician and facility factors may be more influential in telehealth utilization than patient-level factors;5,22 thus, telehealth navigation focused primarily on technical support for patients and clinicians, rather than on workflow integration or structural scheduling issues, may not comprehensively address lower video visit completion.
Consistent with findings from a prior study of a telehealth navigation program at an academic medical center,21 we observed a decrease in non-attendance for visits scheduled as video visits. However, because the percentage of telehealth visits completed as video did not increase, it is unclear whether this finding reflects outreach calls serving as additional appointment reminders rather than as opportunities for technical assistance.
Although we did not observe a clear effect on video visit scheduling and completion, navigators were able to provide insights from patients and frontline staff that were invaluable in informing the subsequent design of telehealth services and resources. For example, in conversations with patients who did not attend their scheduled video visits, navigators identified that some patients had scheduled a video visit because their clinician recommended it but would rather wait and reschedule their appointment so they could have an in-person physical examination. Regardless of clinical appropriateness, this speaks to mismatched expectations, lapses in communication, or incomplete explanations of the value of video visits (or telehealth visits in general). On the clinician side, navigators observed that clinicians sometimes opted not to use video (even when appointments were scheduled for video) for non-English-speaking patients because they were unaware of how to use the integrated interpreter function of the video visit platform. The telehealth operations team was able to use these insights to improve training materials, offer more on-site support, and provide more patient education on what to expect and how to prepare for and get the most out of a telehealth visit. Ultimately, telehealth navigators can provide important insights and support but alone may be insufficient to improve video visit usage and attendance, both of which are multifaceted issues.
Limitations
Site selection was not random, so there may be differences between sites that were and were not willing to participate. Because we also intentionally selected sites with different baseline characteristics, observed effects may be diluted; for example, at a smaller, community-based site with only a few clinicians, the relative impact (or burden) of additional support from a single navigator may differ from that at a larger, hospital-based site with many clinicians. Likewise, the effect at a site with relatively higher rates of video visit scheduling and completion may differ from that at a site with relatively lower rates. However, comparison sites were similarly heterogeneous.
Patients may also self-select into scheduling video visits. While clinician and facility factors may also play a role in scheduling propensity,5,22 patients who agreed to be scheduled for video visits—and thus received outreach calls from navigators—may differ from patients who did not agree. This potential bias also applies to patients who did versus did not respond to the outreach calls.
All patients scheduled for video visits were flagged to be called by the navigators, but for larger sites with longer patient lists, this blanket approach is resource-intensive and outreach may be incomplete; future efforts may include targeted criteria for identifying and contacting the patients deemed most at risk of non-attendance or of an unsuccessful video visit.
Whether the model of daytime outreach calls is the most effective method of contact is also unclear, because patients may be working or otherwise unable to devote time to walking through instructions. Other approaches may be needed to meet patients where they are.
Changes in some outcomes may take longer to be realized; for example, primary care patients may follow up at intervals of 3 months or greater for stable chronic conditions, and our follow-up interval may not have been long enough to capture all patients who would have been newly successful in scheduling or completing video visits. We did not account for the time between appointment scheduling and the appointment date (lead time), which may be independently associated with completion rates. However, we would expect the effect of patient and clinician upskilling to be more immediate. Relatedly, as patients and clinicians become increasingly comfortable with video visits over time and with support, the need for and efficacy of a navigator may change.
We treated the intervention as occurring at the level of the clinic rather than at the level of patients, and data on the exact count and results of individual outreach attempts were not available. This limits our ability to determine the efficacy of actual contact between patients and navigators; instead, our analysis reflects the “intention to treat” of having a navigator perform outreach and give on-site support to patients and clinicians. This also reduces the power to detect differences between groups. Differences in clinician characteristics between sites that may affect outcomes were not captured. Additional unaccounted-for variables that may affect efficacy include characteristics of the navigators themselves and the content of their scripts and knowledge bases. The design and features of the EHR-integrated video visit platform and clinical workflows may also differ from those at other institutions, limiting generalizability.
Conclusions
Implementation of telehealth navigators to support patient and clinician readiness for video visits was associated with comparatively lower video visit non-attendance but did not affect video visit scheduling or completion among telehealth visits. Further development of telehealth programs and support services like telehealth navigators is needed to ensure that they are responsive to actual patient, clinician, and clinical needs and preferences for communication, clinical efficacy, and experiential satisfaction. While navigators may not be able to address all of these facets, their on-the-ground relationships with patients and clinicians can help identify opportunities for improvement and care redesign.
Acknowledgments
We would like to acknowledge Brenda Aguilar for her contributions and service as a telehealth navigator. Preliminary results of this project were previously presented at the Society of General Internal Medicine 2023 Annual Meeting (May 12, 2023).
Footnotes
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by a grant from the New York Health Foundation [21-12977]. The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. KC and HBJ received the grant, and KK and KB received salary support from the grant.
ORCID iD: Kevin Chen
https://orcid.org/0000-0003-3035-1891
References
1. Graetz I, Huang J, Muelly E, Gopalan A, Reed ME. Primary care visits are timelier when patients choose telemedicine: a cross-sectional observational study. Telemed J E Health. 2022;28(9):1374-1378.
2. Chen K, Zhang C, Gurley A, Akkem S, Jackson H. Appointment non-attendance for telehealth versus in-person primary care visits at a large public healthcare system. J Gen Intern Med. 2023;38(4):922-928.
3. Drerup B, Espenschied J, Wiedemer J, Hamilton L. Reduced no-show rates and sustained patient satisfaction of telehealth during the COVID-19 pandemic. Telemed J E Health. 2021;27(12):1409-1415.
4. Franciosi EB, Tan AJ, Kassamali B, et al. The impact of telehealth implementation on underserved populations and no-show rates by medical specialty during the COVID-19 pandemic. Telemed J E Health. 2021;27(8):874-880.
5. Rodriguez JA, Betancourt JR, Sequist TD, Ganguli I. Differences in the use of telephone and video telemedicine visits during the COVID-19 pandemic. Am J Manag Care. 2021;27(1):21-26.
6. Ramsetty A, Adams C. Impact of the digital divide in the age of COVID-19. J Am Med Inform Assoc. 2020;27(7):1147-1148.
7. Eberly LA, Kallan MJ, Julien HM, et al. Patient characteristics associated with telemedicine access for primary and specialty ambulatory care during the COVID-19 pandemic. JAMA Netw Open. 2020;3(12):e2031640.
8. Pierce RP, Stevermer JJ. Disparities in the use of telehealth at the onset of the COVID-19 public health emergency. J Telemed Telecare. 2023;29(1):3-9.
9. Polinski JM, Barker T, Gagliano N, Sussman A, Brennan TA, Shrank WH. Patients’ satisfaction with and preference for telehealth visits. J Gen Intern Med. 2016;31(3):269-275.
10. Chen K, Lodaria K, Jackson HB. Patient satisfaction with telehealth versus in-person visits during COVID-19 at a large, public healthcare system. J Eval Clin Pract. 2022;28(6):986-990.
11. Ramaswamy A, Yu M, Drangsholt S, et al. Patient satisfaction with telemedicine during the COVID-19 pandemic: retrospective cohort study. J Med Internet Res. 2020;22(9):e20786.
12. Donelan K, Barreto EA, Sossong S, et al. Patient and clinician experiences with telehealth for patient follow-up care. Am J Manag Care. 2019;25(1):40-44.
13. Shaver J. The state of telehealth before and after the COVID-19 pandemic. Prim Care. 2022;49(4):517-530.
14. Sharma AE, Khoong EC, Sierra M, et al. System-level factors associated with telephone and video visit use: survey of safety-net clinicians during the early phase of the COVID-19 pandemic. JMIR Form Res. 2022;6(3):e34088.
15. Chang JE, Lindenfeld Z, Albert SL, et al. Telephone vs. video visits during COVID-19: safety-net provider perspectives. J Am Board Fam Med. 2021;34(6):1103-1114.
16. Khoong EC, Butler BA, Mesina O, et al. Patient interest in and barriers to telemedicine video visits in a multilingual urban safety-net system. J Am Med Inform Assoc. 2021;28(2):349-353.
17. Powell RE, Henstenburg JM, Cooper G, Hollander JE, Rising KL. Patient perceptions of telehealth primary care video visits. Ann Fam Med. 2017;15(3):225-229.
18. Adams AM, Williams KKA, Langill JC, et al. Telemedicine perceptions and experiences of socially vulnerable households during the early stages of the COVID-19 pandemic: a qualitative study. CMAJ Open. 2023;11(2):E219-E226.
19. Gonçalves RL, Pagano AS, Reis ZSN, et al. Usability of telehealth systems for noncommunicable diseases in primary care from the COVID-19 pandemic onward: systematic review. J Med Internet Res. 2023;25:e44209.
20. Budde H, Williams GA, Winkelmann J, Pfirter L, Maier CB. The role of patient navigators in ambulatory care: overview of systematic reviews. BMC Health Serv Res. 2021;21(1):1166.
21. Mechanic OJ, Lee EM, Sheehan HM, et al. Evaluation of telehealth visit attendance after implementation of a patient navigator program. JAMA Netw Open. 2022;5(12):e2245615.
22. Chen K, Zhang C, Gurley A, Akkem S, Jackson H. Patient characteristics associated with telehealth scheduling and completion in primary care at a large, urban public healthcare system. J Urban Health. 2023;100(3):468-477.
