Since the advent of electronic health records and their increasing adoption, novel ideas about asynchronous care and its impact on patient outcomes have emerged. In this editorial, we explore the use of patient portal messaging in radiation oncology, highlighting both its benefits and its inherent challenges, with a particular focus on the emerging role of large language models (LLMs) in improving patient engagement.
In this issue of IJROBP, Alexander et al. analyzed patient portal messaging among radiation oncology patients using data from a single institution. The study included a diverse cohort and assessed both incoming and outgoing messages. Portal messaging was associated with better survival: the hazard ratio was 0.90 for sending messages (95% CI 0.84–0.96; P=0.001) and 0.77 for receiving messages (95% CI 0.70–0.84; P<0.001), adjusted for socioeconomic, disease, and treatment characteristics. Significant disparities were also highlighted: Black patients were less likely than White patients to send and to be sent messages, and Medicaid patients were less likely than Medicare patients to use the portal. Female providers were more likely than male providers to send messages (odds ratio 1.47; 95% CI 1.34–1.62; P<0.001). These disparities underscore the need for targeted interventions to ensure equitable access to digital healthcare tools.
These results align with other studies examining the role of patient portal messaging. Patient portal use has been shown to enhance patient engagement, satisfaction, and perceived health outcomes.1 Patients appreciate asynchronous access to their healthcare team, which allows them to ask questions and seek clarification on their own time. This increased engagement may lead to better management of complex diseases and earlier identification of potential treatment-related issues.
While patient portals offer tangible benefits, significant challenges remain. Access to technology is a major barrier for patients from low socioeconomic backgrounds and those with limited technological proficiency.2 Non-English speakers, among others, face considerable difficulty navigating these platforms, which can exacerbate existing healthcare disparities.3
Additionally, the increased use of patient portals can lead to higher workloads for healthcare providers.4 The time spent on patient portal messages often goes uncompensated and has been associated with increased physician burnout in multiple studies.5–7 This issue is particularly pronounced among female providers, who tend to spend more time on patient communications, potentially impacting their work-life balance and job satisfaction.8
Scalable Engagement: The Potential Role of Large Language Models
To address these challenges and expand the potential of patient portals to improve engagement without increasing physician burden, EHR vendors have begun integrating LLMs into patient portal systems. LLMs can draft empathetic and informative responses to patient inquiries, thereby reducing the burden on healthcare providers.9,10 Potential applications in patient portals include triaging messages, providing initial drafts, and tailoring responses to specific patient needs, which is particularly important in resource-limited healthcare settings.
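To make the triage use case concrete, the minimal sketch below illustrates how an incoming portal message might be routed by urgency before it reaches a clinician's inbox. It is a hypothetical illustration only: the call_llm helper, the prompt wording, and the category labels are assumptions standing in for whatever endpoint and triage schema a given EHR integration would actually use.

```python
# Hypothetical sketch: LLM-assisted triage of an incoming portal message.
# `call_llm` is a placeholder for whatever model endpoint an EHR integration
# exposes; the urgency labels are illustrative, not a validated triage schema.

TRIAGE_LABELS = ["urgent-clinical", "routine-clinical", "administrative"]

TRIAGE_PROMPT = (
    "You are assisting a radiation oncology clinic. Classify the patient "
    "message below into exactly one category: "
    + ", ".join(TRIAGE_LABELS)
    + ". Respond with the category only.\n\nMessage:\n{message}"
)

def call_llm(prompt: str) -> str:
    """Placeholder for the EHR-integrated model service (assumption)."""
    raise NotImplementedError("Wire this to the deployed LLM endpoint.")

def triage_message(message: str) -> str:
    """Return a queue label; fall back to the most conservative queue."""
    label = call_llm(TRIAGE_PROMPT.format(message=message)).strip().lower()
    return label if label in TRIAGE_LABELS else "urgent-clinical"
```

Routing any unexpected model output to the most urgent queue is a deliberately defensive default, reflecting the oversight concerns discussed later in this editorial.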
LLMs can help draft empathetic, personalized responses that physicians build on rather than starting from scratch, potentially reducing workload while producing longer, more consistent, and more valuable responses. Although time savings have yet to be fully realized, novel LLM and natural language processing (NLP) tools hold great potential for triaging, summarizing, and drafting. A study by Chen et al.11 found that LLM drafts were generally acceptable, posed minimal risk of harm, and included more educational content than responses written entirely manually. With future iterations, the hope is that these models will generate high-quality, personalized drafts more quickly, thereby increasing patient satisfaction, reducing physician burnout, and potentially improving outcomes. While this alone could help increase adoption, another promising application of LLMs is rapid language translation, which could increase the accessibility and utilization of patient portals for non-English speakers. However, extensive testing and clinical benchmarking are necessary to ensure accuracy and reliability in medical translations.
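A drafting-and-translation step might look like the equally hedged sketch below, which reuses the same hypothetical call_llm placeholder; the prompts are illustrative assumptions, and every output is treated strictly as a draft awaiting clinician review.

```python
# Hypothetical sketch: drafting an empathetic reply and translating it into the
# patient's preferred language. The prompts and the `call_llm` helper are
# illustrative assumptions; the output is a draft for clinician review only.

DRAFT_PROMPT = (
    "Draft a brief, empathetic reply to this radiation oncology patient "
    "message. Do not introduce new medical advice; defer specifics to the "
    "care team.\n\nMessage:\n{message}"
)

TRANSLATE_PROMPT = (
    "Translate the following reply into {language}, preserving the clinical "
    "meaning exactly:\n\n{text}"
)

def call_llm(prompt: str) -> str:
    """Placeholder for the EHR-integrated model service (assumption)."""
    raise NotImplementedError("Wire this to the deployed LLM endpoint.")

def draft_reply(message: str, preferred_language: str = "English") -> dict:
    draft = call_llm(DRAFT_PROMPT.format(message=message))
    if preferred_language.lower() != "english":
        draft = call_llm(
            TRANSLATE_PROMPT.format(language=preferred_language, text=draft)
        )
    # Returned for clinician review; nothing is auto-sent to the patient.
    return {"draft": draft, "status": "pending_clinician_review"}
```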
However, integrating LLMs into patient portal messaging systems represents a first step in a paradigm shift toward artificial intelligence-informed healthcare and carries both known and unknown risks. One major concern is the potential for biases in the training data to propagate through the models, leading to biased output.12 For example, if the training data predominantly represent certain demographic groups, or contain spurious or biased associations between those groups and medical facts, the model may generate responses that are less appropriate for underrepresented populations.13,14 Additionally, because these populations use portals less frequently, their data are less likely to be represented in future training data.
One significant issue with LLMs is “hallucination,” in which the model generates incorrect or nonsensical information. This is especially dangerous in healthcare, and particularly in oncology, where inaccuracies can have serious consequences. Ensuring that LLM-generated responses are accurate and reliable requires rigorous validation and oversight by healthcare professionals, who are often already pressed for time. Moreover, as LLMs improve and trust in these tools grows, there is a risk of over-reliance on them, which could lead to automation bias and anchoring, with potentially unknown downstream effects. Automation bias occurs when users trust automated systems too much, potentially overlooking errors. Anchoring refers to the human tendency to rely heavily on the first piece of information encountered (the “anchor”) when making decisions. These effects have already been observed by Chen et al.11 and merit future study to determine their impact and optimal mitigation strategies.
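One way to operationalize that oversight is a hard gate that keeps model-generated drafts out of the patient's inbox until a clinician has explicitly approved or rewritten them. The sketch below, with hypothetical names and fields, also records whether the clinician edited the draft, a signal that could help audit for automation bias over time.

```python
# Hypothetical sketch: a human-in-the-loop gate for LLM-drafted replies.
# Field and function names are illustrative; the point is that nothing is sent
# without an explicit clinician action, and edits are logged for later audit.

from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DraftReview:
    patient_message: str
    llm_draft: str
    approved: bool = False
    final_text: str = ""
    clinician_edited: bool = False
    reviewed_at: Optional[datetime] = None

def finalize(review: DraftReview, clinician_text: str) -> DraftReview:
    """Record the clinician's decision; sending can only happen after this."""
    review.final_text = clinician_text
    review.clinician_edited = clinician_text.strip() != review.llm_draft.strip()
    review.approved = True
    review.reviewed_at = datetime.now(timezone.utc)
    return review

def send_to_patient(review: DraftReview) -> None:
    if not review.approved:
        raise PermissionError("Draft has not been reviewed by a clinician.")
    # ...hand off to the portal's send mechanism (placeholder)...
```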
Additionally, integrating LLMs into patient portals raises important questions about privacy and security.15 LLMs require access to patient data to generate contextually appropriate responses, necessitating robust data protection measures to prevent unauthorized access while ensuring patient confidentiality.
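One common mitigation, sketched below in a deliberately simplistic form, is to minimize what leaves the EHR boundary by redacting obvious identifiers before a message is sent to an externally hosted model. The patterns shown are illustrative assumptions and are no substitute for a validated de-identification pipeline and appropriate contractual safeguards.

```python
# Hypothetical sketch: crude redaction of obvious identifiers before message
# text leaves the EHR boundary for an externally hosted LLM. Patterns are
# illustrative only; real deployments require validated de-identification.

import re

REDACTION_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),         # US SSN format
    (re.compile(r"\b\d{10}\b"), "[PHONE-OR-MRN]"),           # bare 10-digit numbers
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),  # common date format
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),     # email addresses
]

def redact(text: str) -> str:
    """Apply each pattern in turn, replacing matches with a generic tag."""
    for pattern, placeholder in REDACTION_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

# Example:
# redact("Please call 6175551234 about my 03/14/2025 scan")
# -> "Please call [PHONE-OR-MRN] about my [DATE] scan"
```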
Recent studies of clinical deployments have shown mixed results regarding the time-saving potential of LLMs in healthcare communication. Tai-Seale et al. found that LLM-generated drafts might reduce cognitive burden but did not save time.16 Similarly, Garcia et al. noted no change in response time but reductions in cognitive burden and burnout scores.9 While these early studies suggest some benefit in reducing cognitive load, it remains unclear whether these tools will mitigate burnout, emphasizing the need for further clinical validation. Future research should also explore patients’ perceptions and outcomes with LLM-augmented communication, and how patients feel about healthcare teams using LLMs, especially given that some health systems now charge for portal messages. Awareness of LLM-augmented responses might lead patients to turn to public, general-purpose LLMs, which may be more accessible but lack the necessary clinical oversight.
Accessible LLMs as a Parallel to Patient Engagement
With the widespread availability and ease of access of LLM applications, patients may increasingly recognize the benefit of an application that is willing to engage in discussions about their healthcare and provide tailored advice.17 These models can provide empathic and personalized answers. Just as patients often turn to Google searches, they might find it easier, and potentially less expensive, to ask a friendly LLM about their care instead of their care team. However, LLMs are not currently designed to provide reliable clinical content: answers may lack clinical nuance, may not be factually correct, and could relay misinformation present in the internet text on which LLMs are pretrained. LLMs often answer in an authoritative tone and can mix correct and incorrect information, making errors difficult to detect.18 Rather than discouraging their use entirely, we owe it to our patients to educate them on practical best practices for prompting and for fact-checking output. Tips might include how to use LLMs to generate meaningful questions to ask their healthcare providers and to summarize information, among other uses.
The use of patient portals in radiation oncology offers significant opportunities to enhance patient engagement and, potentially, to improve survival outcomes. However, to fully realize these benefits, it is essential to address the aforementioned challenges related to access, provider workload, and the integration of advanced technologies. Stakeholders must collaborate to develop policies that support equitable access to digital healthcare tools and provide adequate compensation for the additional work associated with patient portal communications.
Technology as a Social Determinant of Health
Underserved communities often lack access to reliable Wi-Fi, computers, and cell phones, and they are less likely to be offered patient portals.2 If AI-assisted responses provide a net benefit, these populations are less likely to receive them, widening the healthcare gap. Conversely, over-reliance on inferior AI in under-resourced settings could increase harm and reduce human engagement. Therefore, investments in AI should be paralleled by investments in the infrastructure and technology that enable the most vulnerable patients to realize the potential benefits of LLMs.
Conclusion
Novel technologies such as LLM integration into patient portals present a promising avenue to improve efficiency and patient engagement, but they must be approached with caution. Ensuring that these models are accurate, unbiased, and used responsibly is crucial. This is an exciting time for digital healthcare, and the potential to leverage technology for improved patient outcomes is immense. Continued research and thoughtful implementation will be key to navigating the complexities of modern healthcare and ensuring that the benefits are realized across all patient populations.
Disclosures:
D.S.B. is an associate editor of Radiation Oncology of HemOnc.Org (not related to the submitted work). D.S.B. receives funding from the National Institutes of Health National Cancer Institute (U54CA274516–01A1), the Woods Foundation, the Jay Harris Junior Faculty Award, and the Joint Center for Radiation Therapy Foundation.
Funding:
None.
Footnotes
Author responsible for statistical analysis: N/A
Data Availability Statement for this Work:
N/A
References:
1. Kinney AP, Sankaranarayanan B. Effects of patient portal use on patient satisfaction: survey and partial least squares analysis. J Med Internet Res. 2021;23(8):e19820.
2. Richwine C, Johnson C, Patel V. Disparities in patient portal access and the role of providers in encouraging access and use. J Am Med Inform Assoc. 2023;30(2):308–317.
3. Sadasivaiah S, Lyles CR, Kiyoi S, Wong P, Ratanawongsa N. Disparities in patient-reported interest in web-based patient portals: survey at an urban academic safety-net hospital. J Med Internet Res. 2019;21(3):e11421.
4. Murphy DR, Meyer AND, Russo E, Sittig DF, Wei L, Singh H. The burden of inbox notifications in commercial electronic health records. JAMA Intern Med. 2016;176(4):559–560.
5. Lieu TA, Altschuler A, Weiner JZ, et al. Primary care physicians’ experiences with and strategies for managing electronic messages. JAMA Netw Open. 2019;2(12):e1918287.
6. Adler-Milstein J, Zhao W, Willard-Grace R, Knox M, Grumbach K. Electronic health records and burnout: time spent on the electronic health record after hours and message volume associated with exhaustion but not with cynicism among primary care clinicians. J Am Med Inform Assoc. 2020;27(4):531–538.
7. Hilliard RW, Haskell J, Gardner RL. Are specific elements of electronic health record use associated with clinician burnout more than others? J Am Med Inform Assoc. 2020;27(9):1401–1410.
8. Rittenberg E, Liebman JB, Rexrode KM. Primary care physician gender and electronic health record workload. J Gen Intern Med. 2022;37(13):3295–3301.
9. Garcia P, Ma SP, Shah S, et al. Artificial intelligence-generated draft replies to patient inbox messages. JAMA Netw Open. 2024;7(3):e243201.
10. Microsoft News Center. Microsoft and Epic expand strategic collaboration with integration of Azure OpenAI Service. Published April 17, 2023. Accessed June 2, 2024. https://news.microsoft.com/2023/04/17/microsoft-and-epic-expand-strategic-collaboration-with-integration-of-azure-openai-service/
11. Chen S, Guevara M, Moningi S, et al. The effect of using a large language model to respond to patient messages. Lancet Digit Health. 2024;6(6):e379–e381.
12. Guevara M, Chen S, Thomas S, et al. Large language models to identify social determinants of health in electronic health records. NPJ Digit Med. 2024;7(1):6.
13. Chen S, Gallifant J, Gao M, et al. Cross-Care: assessing the healthcare implications of pre-training data on language model bias. arXiv preprint. Published online May 8, 2024. Accessed June 2, 2024. http://arxiv.org/abs/2405.05506
14. Zack T, Lehman E, Suzgun M, et al. Assessing the potential of GPT-4 to perpetuate racial and gender biases in health care: a model evaluation study. Lancet Digit Health. 2024;6(1):e12–e22.
15. Moreno AC, Bitterman DS. Toward clinical-grade evaluation of large language models. Int J Radiat Oncol Biol Phys. 2024;118(4):916–920.
16. Tai-Seale M, Baxter SL, Vaida F, et al. AI-generated draft replies integrated into health records and physicians’ electronic communication. JAMA Netw Open. 2024;7(4):e246565.
17. Goldberg C. Patient Portal — When Patients Take AI into Their Own Hands. NEJM AI. 2024;1(5):AIp2400283.
18. Chen S, Kann BH, Foote MB, et al. Use of artificial intelligence chatbots for cancer treatment information. JAMA Oncol. 2023;9(10):1459–1462.