Journal of Palliative Medicine
. 2024 Dec 9;27(12):1618–1624. doi: 10.1089/jpm.2024.0256

“Hospice Care Could Be a Compassionate Choice”: ChatGPT Responses to Questions About Decision Making in Advanced Cancer

Meghan McDarby 1, Emily L Mroz 2,3, Jessica Hahne 4, Charlotte D Malling 5, Brian D Carpenter 4, Patricia A Parker 1
PMCID: PMC12372917  PMID: 39263979

Abstract

Background:

Patients with cancer use the internet to inform medical decision making.

Objective:

To examine the content of ChatGPT responses to a hypothetical patient question about decision making in advanced cancer.

Design:

We developed a medical advice-seeking vignette in English about a patient with metastatic melanoma. When inputting this vignette, we varied five characteristics (patient age, race, ethnicity, insurance status, and preexisting recommendation of hospice/the opinion of an adult daughter regarding the recommendation). ChatGPT responses (N = 96) were coded for mentions of: hospice care, palliative care, financial implications of treatment, second opinions, clinical trials, discussing the decision with loved ones, and discussing the decision with care providers. We conducted additional analyses to understand how ChatGPT described hospice and referenced the adult daughter. Data were analyzed using descriptive statistics and chi-square analysis.

Results:

Responses more frequently mentioned clinical trials for vignettes describing 45-year-old patients compared with 65- and 85-year-old patients. When vignettes mentioned a preexisting recommendation for hospice, responses more frequently mentioned seeking a second opinion and hospice care. ChatGPT’s descriptions of hospice focused primarily on its ability to provide comfort and support. When vignettes referenced the daughter’s opinion on the hospice recommendation, approximately one third of responses also referenced this, stating the importance of talking to her about treatment preferences and values.

Conclusion:

ChatGPT responses to questions about advanced cancer decision making can be heterogeneous based on demographic and clinical characteristics. Findings underscore the possible impact of this heterogeneity on treatment decision making in patients with cancer.

Keywords: advanced cancer, artificial intelligence, decision making, end-of-life care, hospice care

Introduction

ChatGPT is an online chat application supported by a large language model (LLM) that has captivated lay and scientific audiences with its human-like approach to communication.1,2 Trained on publicly available information from across the internet, ChatGPT has become the focus of research investigations regarding the quality of chat application responses to questions about medical care, especially to understand the accuracy, appropriateness, and comprehensiveness of information provided by chat applications.3–6 Because many people utilize online information to learn about and plan for future medical care,7–9 understanding the scope, depth, and accuracy of information likely to be obtained by individuals through their use of applications such as ChatGPT is paramount.

Ongoing work indicates that individuals perceive ChatGPT responses to medical advice-seeking questions favorably7 and have trouble distinguishing ChatGPT responses from advice offered by actual medical providers.8 Other early pilot work indicates that lay individuals place greater trust in information obtained from ChatGPT compared with information retrieved through Google.7 Importantly, investigations into the effects of the infrastructure of ChatGPT have confirmed that there is an inherent risk of bias in ChatGPT responses to user questions.9,10

In a preliminary investigation,11 our research group examined 24 ChatGPT responses to hypothetical patient questions about treatment decision making, a task faced by many patients with advanced cancer. We developed a medical advice-seeking vignette based on previous work conducted by Nastasi and colleagues12 about a patient with metastatic melanoma that ended with a single question: Should I look for other treatments or focus on the quality of my life? We varied details of the vignette to depict patients with different demographic (race, ethnicity, insurance status) and clinical (prior recommendation of hospice by care team) characteristics. Our results suggested that ChatGPT responds differently to the same question posed by patients with the same prognosis when patient characteristics and medical recommendations vary. For example, ChatGPT responses to all 12 vignettes describing a patient who had previously been recommended hospice by their care team consistently mentioned hospice; however, among responses to 12 vignettes that did not explicitly state hospice had been recommended by the patient’s care team, only 3 mentioned hospice care (25%).

Our preliminary work, although formative, was constrained in multiple ways and highlighted additional gaps in knowledge. For example, although we identified variability in ChatGPT references to hospice care, we did not examine the descriptions of hospice and thus did not consider possible disparities in how hospice is characterized and endorsed to users. In addition, our initial case vignette, although ecologically valid,11,13 did not consider the role of other parties in decision making, although most patients with advanced cancer involve family or close others.14,15

We conducted the current study to understand what types of content the publicly available chat application, ChatGPT, generates in response to questions from patients with advanced cancer who are seeking support for decision making. Our aims were to (1) describe content covered in ChatGPT responses to hypothetical patients with varied characteristics, (2) characterize descriptions of hospice care in ChatGPT responses, and (3) examine ChatGPT’s approach to addressing an adult daughter’s opinion about potential treatment options.

Methods

Vignette development

We developed a medical advice-seeking vignette (written at the 5th grade level; confirmed by analysis using a Flesch–Kincaid Reading Ease score generator) about treatment decision making for a patient with advanced, metastatic melanoma. The vignette was based on the vignette used in our preliminary work11 following the core structure used by Nastasi and colleagues,12 vignettes used for clinician training as part of the Communication Skills Training Laboratory and Program at Memorial Sloan Kettering Cancer Center (Comskil),16,17 and cursory searches of questions posted by patients with cancer on human-to-human interactive chat forums (e.g., Reddit, r/cancer). We varied five characteristics in the vignette, resulting in 96 permutations: age (3 levels; 45/65/85), race (2 levels; Black/White), ethnicity (2 levels; Hispanic/Non-Hispanic), insurance status (2 levels; Insured/Not Insured), and clinical recommendation of hospice + adult daughter’s opinion on that recommendation (4 levels; no mention of hospice recommendation/hospice recommendation and daughter agrees/hospice recommendation and daughter disagrees/hospice recommendation and no mention of daughter’s perspective). Although most ChatGPT users might not contextualize ChatGPT questions with their demographic information, we embedded demographic characteristics within the vignette for the purpose of the study to ensure that ChatGPT would interpret each vignette from the appropriate patient background. We intentionally did not include the patient’s opinion about hospice or the hospice recommendation because we wanted to highlight consonance and dissonance of outside perspectives being used to inform the patient’s decision (versus consonance and dissonance with the patient’s opinion). See Figure 1 for a sample vignette entered into ChatGPT.
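The full factorial design described above (3 × 2 × 2 × 2 × 4 = 96 permutations) can be sketched in a few lines. This is an illustrative reconstruction, not the study's actual materials; the condition labels are paraphrased from the text.

```python
from itertools import product

# Illustrative labels for the five varied vignette characteristics;
# the exact wording used in the study's vignettes is assumed.
ages = ["45", "65", "85"]
races = ["Black", "White"]
ethnicities = ["Hispanic", "Non-Hispanic"]
insurance = ["Insured", "Not Insured"]
hospice_conditions = [
    "no mention of hospice recommendation",
    "hospice recommended; daughter agrees",
    "hospice recommended; daughter disagrees",
    "hospice recommended; daughter's view not mentioned",
]

# Cartesian product yields every unique combination of levels.
permutations = list(product(ages, races, ethnicities, insurance, hospice_conditions))
print(len(permutations))  # 96, i.e., 3 x 2 x 2 x 2 x 4
```

Each tuple in `permutations` corresponds to one vignette entered into ChatGPT as a separate thread.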

FIG. 1. Sample vignette permutation.

Data generation

Each permutation of the vignette was entered into a new, separate ChatGPT 3.518 thread from the first author’s ChatGPT account in February 2024. Responses were extracted into a document. We did not regenerate responses.

Response coding and data analyses

Three authors (M.M., E.L.M., J.H.) applied quantitative content analysis to responses to count the frequency of ChatGPT’s mentions of the following prespecified topics: (1) hospice care, (2) palliative care, (3) clinical trials, (4) second opinions, (5) financial implications of treatment(s), (6) talking to providers for decision-making support, or (7) talking to family, friends, or close others for decision-making support. We chose these topics a priori based on previous work,7 as well as evidence-based recommendations for discussing goals of care19–21 in the context of advanced disease.8 We coded topics using a categorical coding system (Supplementary Appendix SA1). Coders trained to reliability for these topics using a subset of the data (8 responses) and achieved high reliability (intraclass correlation coefficient [ICC] = 0.93). Following confirmation of reliability, all data were coded by pairs of coders (M.M. and E.L.M.; M.M. and J.H.); discrepancies were resolved in meetings among all three coders.11

In a second, separate round of coding, the study team isolated ChatGPT’s descriptions of hospice and mentions of the adult daughter. Two authors (M.M., C.M.) conducted a mixed content analysis of the hospice descriptions, first by inductively open coding each description. We elected to conduct this coding inductively to capture the range of ways that hospice might be defined and described in a ChatGPT response; the alternative option of a deductive overlay may have resulted in missing patterns of ChatGPT descriptions. Once a set of preliminary terms used to describe hospice was identified, the coders applied the set of terms to all relevant data to generate binary (present/absent) counts of how responses referenced each term. Coders came to consensus on the set of terms before applying them for content coding and met to resolve discrepancies in coding before determining the frequencies of each term.
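The binary present/absent counting step described above can be illustrated with a small sketch. The coded data and term labels below are hypothetical, invented only to show the counting logic; each term counts at most once per response because responses are represented as sets.

```python
from collections import Counter

# Hypothetical coded data: each response's hospice description mapped to
# the set of description terms coders judged present (terms illustrative).
coded_descriptions = [
    {"comfort", "quality of life", "symptom management"},
    {"comfort", "pain management"},
    {"comfort", "family support", "quality of life"},
]

# Binary (present/absent) counts: a term contributes once per response.
term_counts = Counter(term for terms in coded_descriptions for term in terms)
n = len(coded_descriptions)
for term, count in term_counts.most_common():
    print(f"{term}: {count}/{n} ({100 * count / n:.0f}%)")
```

Frequencies derived this way are what populate a figure like Figure 2.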

Descriptive statistics were calculated to characterize the overall frequency of topics mentioned in responses, hospice description terms, and mentions of the adult daughter. Chi-square analyses were conducted to examine differences in the frequency of topic mentions by characteristics across vignette permutations. All analyses were conducted using Microsoft Excel and JASP Version 0.18.3.
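As an illustration of the chi-square analyses (which the study ran in Excel and JASP, not Python), the test of clinical trial mentions by patient age can be reproduced from the Table 1 counts using only the standard chi-square formula; this sketch assumes the usual test of independence on a 3 × 2 contingency table.

```python
# Observed counts from Table 1: clinical trials mentioned in 13/32 responses
# at age 45, 4/32 at age 65, and 5/32 at age 85.
observed = [[13, 19], [4, 28], [5, 27]]  # rows: age; cols: [mentioned, not mentioned]

n = sum(sum(row) for row in observed)
col_totals = [sum(row[j] for row in observed) for j in range(2)]

# Pearson chi-square statistic: sum over cells of (observed - expected)^2 / expected,
# with expected = row_total * col_total / n under independence.
chi2 = 0.0
for row in observed:
    row_total = sum(row)
    for j, o in enumerate(row):
        e = row_total * col_totals[j] / n
        chi2 += (o - e) ** 2 / e

print(round(chi2, 1))  # 8.6, matching the reported X2(2, N = 96) = 8.6
```

With 2 degrees of freedom this statistic corresponds to p ≈ 0.01, consistent with the result reported below.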

Results

Frequency of key topics mentioned in ChatGPT responses

A detailed description of content from all 96 permutations of the vignette appears in Table 1. Hospice care was mentioned in 76% (n = 73) of responses and palliative care in 34% (n = 33). Nearly all responses recommended that the patient talk with their health care provider to support treatment decision making (97%; n = 93). Most responses also broadly recommended that the patient speak with family/friends about the decision (61%; n = 59). Half of responses mentioned seeking a second opinion (50%; n = 48). A minority of responses mentioned consideration of clinical trials (23%; n = 22) or the financial implications of treatment (7%; n = 7).

Table 1.

Frequency of Mentions of Key Topics Across ChatGPT Responses (n = 96)

| Topic | Total N (%) | Age 45 | Age 65 | Age 85 | White | Black | Hispanic | Non-Hispanic | Insured | Uninsured | Hospice recommendeda | No hospice recommendation |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Hospice care | 73 (76) | 23 | 25 | 25 | 37 | 36 | 38 | 35 | 37 | 36 | 71 | 2 |
| Palliative care | 33 (34) | 12 | 11 | 10 | 17 | 16 | 17 | 16 | 18 | 15 | 11 | 22 |
| Financial implications of treatment | 7 (7.3) | 3 | 1 | 3 | 1 | 6 | 3 | 4 | 0 | 7 | 2 | 5 |
| Talking to health care providers | 93 (97) | 32 | 30 | 31 | 46 | 47 | 47 | 46 | 48 | 45 | 69 | 24 |
| Talking to family/friends | 59 (61) | 19 | 19 | 21 | 28 | 31 | 33 | 26 | 31 | 28 | 48 | 11 |
| Clinical trials | 22 (23) | 13 | 4 | 5 | 7 | 15 | 9 | 13 | 10 | 12 | 16 | 6 |
| Second opinions | 48 (50) | 18 | 18 | 12 | 22 | 26 | 22 | 26 | 23 | 25 | 45 | 3 |

a This column combines all vignette responses from conditions in which hospice was recommended, including the two conditions that reference the daughter (n = 72).

Key topics mentioned by patient characteristics

Only one ChatGPT response overtly referenced the patient’s age as stated in the vignette. However, ChatGPT responses more frequently mentioned clinical trials when vignettes stated the patient was 45 (40% of responses) compared with 65 (12.5% of responses) and 85 (15.6% of responses) years old [χ2 (2, N = 96) = 8.6, p = 0.01]. There were no age differences in mentions of hospice care (p > 0.50), palliative care (p > 0.30), talking to providers for decision-making support (p > 0.30), talking to family/friends for decision-making support (p > 0.83), or the financial implications of treatment (p > 0.35). There was a nonsignificant trend toward ChatGPT responses mentioning clinical trials more frequently for vignettes that stated the patient was Black [χ2 (1, N = 96) = 3.8, p = 0.05]. There were no significant differences in key term mentions by ethnicity. ChatGPT responses rarely discussed finances but more frequently mentioned the financial implications of additional treatment when the vignette indicated that the patient was uninsured (14.6% vs. 0% of responses) [χ2 (2, N = 96) = 7.5, p = 0.02].

When the vignette stated that a recommendation for hospice had been made by the care team, ChatGPT responses mentioned palliative care less frequently [χ2 (6, N = 96) = 40.8, p < 0.01] and hospice care more frequently [χ2 (9, N = 96) = 82.2, p < 0.01]. ChatGPT rarely mentioned hospice care when it was not directly referenced in the vignette (n = 1 of 70 responses). Responses also mentioned seeking a second opinion more frequently for vignettes that stated the patient had been recommended hospice [χ2 (3, N = 96) = 18.0, p = 0.001].

Descriptions of hospice in ChatGPT responses

Overall, 73% of ChatGPT responses (n = 70) referenced hospice care. Our inductive coding resulted in 14 unique codes, which we examined across responses and collated by frequency (Fig. 2). The role of hospice care in providing comfort for patients was present in most responses (96%, n = 67). Fewer responses referenced symptom management (41%, n = 29) and pain management (40%, n = 28), and even fewer mentioned that hospice care provides support to the patient’s family (30%, n = 21).

FIG. 2. Frequency of terms used to describe hospice in ChatGPT responses (n = 70 responses). Left axis: hospice terms; bottom axis: frequency of mentions.

Many responses highlighted the role of hospice in supporting quality of life (50%, n = 35), while few responses highlighted hospice as an alternative to aggressive treatments (14%, n = 10). Concepts such as supporting patient dignity (7%, n = 5) and hospice not meaning “giving up” (3%, n = 2) were mentioned infrequently. Very few responses defined hospice as a type of palliative care (1%, n = 1) or mentioned that it could be delivered in multiple locations (e.g., at home, in a facility) (4%, n = 3).

ChatGPT’s response to the perspective of the patient’s daughter

Forty-eight vignette permutations mentioned the patient’s adult daughter (24 stated that she agreed with the clinical team’s recommendation of hospice, and 24 stated that she disagreed). Approximately one third (35%; n = 17) of responses referred to the daughter and made a recommendation about how to communicate with her (e.g., “have an open and honest conversation about values,” “share values and preferences with your daughter”). Most (76.4%; n = 13) responses that mentioned the daughter corresponded to vignettes stating that she disagreed with the idea of transitioning to hospice.

Discussion

Findings from the current study indicate that there is variation in the content of ChatGPT responses to patient questions about decision making in advanced cancer, in part, based on some patient characteristics. Our results also point to great variation in how ChatGPT defines hospice care and presents recommendations about communication with family members about hospice. As artificial intelligence (AI)-informed technologies become more prevalent in health care, our work provides insight into the clinical implications of patients using ChatGPT for decision-making support in advanced cancer and underscores the urgent need to understand how patient use of ChatGPT is likely to influence communication with clinicians and family about such decision making. Clinicians should heed this key finding: depending on their demographic characteristics, certain patients may not have equitable access to information that informs care decision making in advanced cancer if they rely on ChatGPT.

There were limited but notable differences in ChatGPT responses to scenarios with varied patient demographic characteristics. For example, responses more frequently suggested seeking out information about clinical trials for patients described as 45 years old compared with older patients. Historically, older adults have been excluded from clinical trials due to strict eligibility requirements22,23 and exclusion criteria.24 Recruitment and retention of older adults in clinical trials is also challenging,25,26 yet chronically low enrollment of older adults in clinical trials perpetuates disparities in the development of cancer therapies that can support adults across the lifespan. ChatGPT also trended toward suggesting clinical trials more frequently to Black patients compared with White patients, a finding that somewhat contradicts a wide body of literature suggesting that Black patients with cancer are systematically excluded from participation in clinical trials.27 In summary, these findings raise important questions about variability in ChatGPT’s recommendations based on demographic factors and, in turn, how these recommendations may differentially influence perceptions of care and treatment options among patients from different demographic groups facing advanced cancer.

In terms of clinical characteristics, we replicated a striking difference that also appeared in our pilot work11: responses referenced hospice significantly more frequently when the vignette indicated that the patient had been recommended hospice. This finding could be written off as intuitive: if ChatGPT registers the term “hospice care” in a user question, then of course it will reference the same in the response. However, all other information in the vignette remained constant, with details across all vignettes clearly indicating that the patient could be eligible for a more symptom management-focused approach to care (i.e., hospice). In other words, it would have been reasonable for ChatGPT to reference hospice care for any individual describing recurrence of advanced cancer with new metastasis. Thus, this finding raises concerns about disparities in the information provided about hospice care to ChatGPT users, and about the true “generativity” of AI technologies, if responses vary based on whether a user has previously heard about hospice care from their care team and thought it relevant to disclose in their prompt. When patients use ChatGPT to seek information such as whether to pursue hospice care, chat applications that rely on LLMs draw heavily on the text users input, potentially creating biased responses with the potential to exacerbate disparities in understanding of hospice, openness to hospice as a care approach, and destigmatization of hospice as a discipline.

We also observed notable dissonance between mentions of hospice and mentions of palliative care: 94% of mentions of palliative care occurred in responses to vignettes that had not mentioned a prior clinical recommendation of hospice. Although hospice care is a type of palliative care, lay populations generally misunderstand hospice as a standalone care option only to be offered when no other treatments are available.28,29 In many ways, our vignette mirrors this perspective of many people (i.e., more treatment vs. “give up” and start hospice).30 However, because ChatGPT responses primarily mentioned palliative care for patients who did not reference hospice in their prompt, our results raise important questions about how cursory information about, and preliminary recommendations of, hospice or palliative care shape the additional information that patients are likely to obtain when they ask questions of sources powered by LLMs. Separately, this finding also establishes grounds for future investigation into how ChatGPT and other LLM-powered AI platforms may help patients overcome barriers to early palliative care access31,32: if patients are prompted by ChatGPT to consider palliative care, they may be inclined to “do something” with that recommendation, including learning more about palliative care or mentioning palliative care during a clinical encounter, two strategies with the potential to prompt earlier initiation of services. Future work must explore how obtaining such information from ChatGPT is likely to shape patient behavior and follow-up with providers.

The terms that responses used to define hospice care also varied widely. Although responses consistently stated that hospice care provides comfort and support to patients and their families, responses either sporadically or completely failed to mention other features of hospice that may be important to decision making. For example, no ChatGPT responses reported that hospice is covered in full by Medicare and most major insurance companies.33,34 As another example, although responses sometimes indicated that hospice provides pain and symptom management, there were no references to hospice as an interprofessional team. That ChatGPT responses left out cost of care information and neglected to contextualize hospice within the spectrum of palliative care points to concerns about ensuring equitable understanding of care options.

In terms of recommendations specifically about the adult daughter’s perspective on hospice care, ChatGPT responses more frequently mentioned the importance of talking to the daughter about decision making when it was explicit that she disagreed with the hospice suggestion made by the clinical care team. Although the importance of discussing care decisions when there is dissonance among the patient, their family or close others, and their care team is undeniable, sharing care goals, preferences, and values among these key involved groups can be key to patient-centered care, even when there is not explicit dissonance.35,36 Thus, although most responses generically referred to discussing the decision with family or friends, it is perplexing that ChatGPT did not underscore the importance of discussing the decision with the referenced adult daughter as frequently when she agreed with the care team (note: in all vignettes, the patient’s position about hospice is intentionally unclear, as they are asking the question in the first place; Fig. 1). In summary, findings highlight the ways in which use of generative AI and chat application technology have the potential to compound already complex and frequently misunderstood issues in the provision of patient-centered serious illness care.

Limitations

Despite its rigorous design and analytic approach, the study was limited in several ways. First, we did not include other characteristics relevant to patient care in our vignette that could influence ChatGPT responses, including sex, gender identity, and health literacy. Future work should consider additional intersectional patient and clinical characteristics that may shape output, as well as how “prompt engineering” shapes the responses ChatGPT provides, given evidence that prompt construction matters when asking ChatGPT for medical advice.37,38

In addition, this study did not examine how other features of ChatGPT (e.g., the ability to follow up a response with an additional question or request for information, or to regenerate a response) could help patients refine information they obtained in an initial response. Future work should consider how “dialog” between patients and ChatGPT and requests for reformulated responses shape what information is prioritized in responses and how it is conveyed to users. Finally, we included the perspective of the patient’s adult daughter on hospice care in our vignette, yet many cancer caregivers are spouses, not adult children. We intentionally included an adult daughter with the hope of comparing differences in ChatGPT references to the adult daughter by patient age (i.e., younger patients would have young adult children), yet ChatGPT produced few responses that mentioned the adult daughter, so comparisons by age could not be made. Future research should include the perspectives of other family members, including spouses, in vignettes.

Conclusion

As patient use of publicly available chat applications becomes more commonplace, understanding the potential impacts of this technology on medical advice seeking and decision making will guide the development of education and resources for patients, families, clinicians, and medical systems.39 Long before the advent of ChatGPT, patients used online information to bolster illness understanding and generate information to integrate into care decision making. Moving forward, ChatGPT’s appeal may rise above other, well-established online health information gathering methods, especially as its accessibility, popularity, and human-like responding improve. Therefore, as it becomes clearer that ChatGPT has the potential to produce variable responses to the same question for patients with different demographic and clinical characteristics, future work should explore how best to support effective use of ChatGPT as an information source despite the limitations and potential biases that it may exhibit.

Supplementary Material

Supplementary Appendix SA1

Author Disclosure Statement

The authors have no conflicts to disclose.

Funding Information

M.M. is supported by T32 CA00946 and all MSK authors are supported by P30 CA008748. E.L.M. is supported by the National Institute on Aging (NIA) Institutional Training Grant (T32AG019134). J.H. is supported by T32 AG000030-47.

References

  • 1. A brief overview of ChatGPT: The history, status quo and potential future development. IEEE Journals & Magazine, IEEE Xplore. Available from: https://ieeexplore.ieee.org/document/10113601 [Last accessed: May 29, 2024].
  • 2. Roumeliotis KI, Tselikas ND. ChatGPT and open-AI models: A preliminary review. Future Internet 2023;15(6):192; doi: 10.3390/fi15060192
  • 3. Johnson D, Goodman R, Patrinely J, et al. Assessing the accuracy and reliability of AI-generated medical responses: An evaluation of the chat-GPT model. Res Sq 2023:rs.3.rs-2566942; doi: 10.21203/rs.3.rs-2566942/v1
  • 4. Kuşcu O, Pamuk AE, Sütay Süslü N, et al. Is ChatGPT accurate and reliable in answering questions regarding head and neck cancer? Front Oncol 2023;13:1256459; doi: 10.3389/fonc.2023.1256459
  • 5. Haze T, Kawano R, Takase H, et al. Influence on the accuracy in ChatGPT: Differences in the amount of information per medical field. Int J Med Inform 2023;180:105283; doi: 10.1016/j.ijmedinf.2023.105283
  • 6. Kim MJ, Admane S, Chang YK, et al. Chatbot performance in defining and differentiating palliative care, supportive care, hospice care. J Pain Symptom Manage 2024;67(5):e381–e391; doi: 10.1016/j.jpainsymman.2024.01.008
  • 7. Sun X, Ma R, Zhao X, et al. Trusting the search: Unraveling human trust in health information from Google and ChatGPT. Published online 2024; doi: 10.48550/arXiv.2403.09987
  • 8. Nov O, Singh N, Mann D. Putting ChatGPT’s medical advice to the (Turing) test: Survey study. JMIR Med Educ 2023;9:e46939; doi: 10.2196/46939
  • 9. Rao A, Pang M, Kim J, et al. Assessing the utility of ChatGPT throughout the entire clinical workflow. medRxiv 2023; doi: 10.1101/2023.02.21.23285886
  • 10. Poulain R, Fayyaz H, Beheshti R. Bias patterns in the application of LLMs for clinical decision support: A comprehensive study. Published online 2024; doi: 10.48550/arXiv.2404.15149
  • 11. McDarby M, Mroz EL, Kastrinos A, et al. An initial examination of ChatGPT responses to questions about decision making in advanced cancer. J Pain Symptom Manage 2024;68(1):e86–e89; doi: 10.1016/j.jpainsymman.2024.04.020
  • 12. Nastasi AJ, Courtright KR, Halpern SD, Weissman GE. A vignette-based evaluation of ChatGPT’s ability to provide appropriate and equitable medical advice across care contexts. Sci Rep 2023;13(1):17885; doi: 10.1038/s41598-023-45223-y
  • 13. Thomas J, Prabhu AV, Heron DE, et al. Reddit and radiation therapy: A descriptive analysis of posts and comments over 7 years by patients and health care professionals. Adv Radiat Oncol 2019;4(2):345–353; doi: 10.1016/j.adro.2019.01.007
  • 14. How do patients with advanced cancer and family caregivers accommodate one another in decision-making? Findings from a qualitative study in specialist palliative care. Am J Hosp Palliat Care 2024:10499091241255117; doi: 10.1177/10499091241255117
  • 15. Dionne-Odom JN, Ejem D, Wells R, et al. How family caregivers of persons with advanced cancer assist with upstream healthcare decision-making: A qualitative study. PLoS One 2019;14(3):e0212967; doi: 10.1371/journal.pone.0212967
  • 16. Bylund CL, Brown R, Gueguen JA, et al. The implementation and assessment of a comprehensive communication skills training curriculum for oncologists. Psychooncology 2010;19(6):583–593; doi: 10.1002/pon.1585
  • 17. Brown R, Bylund CL, Eddington J, et al. Discussing prognosis in an oncology setting: Initial evaluation of a communication skills training module. Psychooncology 2010;19(4):408–414; doi: 10.1002/pon.1580
  • 18. Ostrowska M, Kacała P, Onolememen D, et al. To trust or not to trust: Evaluating the reliability and safety of AI responses to laryngeal cancer queries. Eur Arch Otorhinolaryngol 2024; doi: 10.1007/s00405-024-08643-8
  • 19. Schulman-Green D, Smith CB, Lin JJ, et al. Oncologists’ and patients’ perceptions of initial, intermediate, and final goals of care conversations. J Pain Symptom Manage 2018;55(3):890–896; doi: 10.1016/j.jpainsymman.2017.09.024
  • 20. Grunfeld EA, Maher EJ, Browne S, et al. Advanced breast cancer patients’ perceptions of decision making for palliative chemotherapy. J Clin Oncol 2006;24(7):1090–1098; doi: 10.1200/JCO.2005.01.9208
  • 21. Goals of care conversations in serious illness: A practical guide. Available from: https://pubmed.ncbi.nlm.nih.gov/32312404/ [Last accessed: May 30, 2024].
  • 22. Cherubini A, Oristrell J, Pla X, et al. The persistent exclusion of older patients from ongoing clinical trials regarding heart failure. Arch Intern Med 2011;171(6):550–556; doi: 10.1001/archinternmed.2011.31
  • 23. Abbasi J. Older patients (still) left out of cancer clinical trials. JAMA 2019;322(18):1751–1753; doi: 10.1001/jama.2019.17016
  • 24. Thake M, Lowry A. A systematic review of trends in the selective exclusion of older participants from randomised clinical trials. Arch Gerontol Geriatr 2017;72:99–102; doi: 10.1016/j.archger.2017.05.017
  • 25. Buttgereit T, Palmowski A, Forsat N, et al. Barriers and potential solutions in the recruitment and retention of older patients in clinical trials: Lessons learned from six large multicentre randomized controlled trials. Age Ageing 2021;50(6):1988–1996; doi: 10.1093/ageing/afab147
  • 26. Recruitment and retention of older people in clinical research: A systematic literature review. Available from: https://pubmed.ncbi.nlm.nih.gov/33075140/ [Last accessed: May 30, 2024].
  • 27. Niranjan et al. Bias and stereotyping among research and clinical professionals: Perspectives on minority recruitment for oncology clinical trials. Cancer 2020;126(9):1958–1968; doi: 10.1002/cncr.32755
  • 28. Kozlov E, McDarby M, Reid MC, et al. Knowledge of palliative care among community-dwelling adults. Am J Hosp Palliat Care 2018;35(4):647–651; doi: 10.1177/1049909117725725
  • 29. Cagle JG, Van Dussen DJ, Culler KL, et al. Knowledge about hospice: Exploring misconceptions, attitudes, and preferences for care. Am J Hosp Palliat Care 2016;33(1):27–33; doi: 10.1177/1049909114546885
  • 30. Tate CE, Venechuk G, Brereton EJ, et al. “It’s like a death sentence but it really isn’t”: What patients and families want to know about hospice care when making end-of-life decisions. 2020. Available from: https://journals.sagepub.com/doi/full/10.1177/1049909119897259 [Last accessed: May 30, 2024].
  • 31. Barriers to access to palliative care. Palliat Care 2017;10:1178224216688887; doi: 10.1177/1178224216688887
  • 32. Understanding the barriers to introducing early palliative care for patients with advanced cancer: A qualitative study. J Palliat Med 2019;22(5):508–516; doi: 10.1089/jpm.2018.0338
  • 33. Mor V, Teno JM. Regulating and paying for hospice and palliative care: Reflections on the medicare hospice benefit. J Health Polit Policy Law 2016;41(4):697–716; doi: 10.1215/03616878-3620893 [DOI] [PubMed] [Google Scholar]
  • 34. Obermeyer Z, Makar M, Abujaber S, et al. Association between the medicare hospice benefit and health care utilization and costs for patients with poor-prognosis cancer. JAMA 2014;312(18):1888–1896; doi: 10.1001/jama.2014.14950 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35. Laryionava K, Hauke D, Heußner P, et al. “Often relatives are the key […]” –Family involvement in treatment decision making in patients with advanced cancer near the end of life. Oncologist 2021;26(5):e831–e837; doi: 10.1002/onco.13557 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36. Lin JJ, Smith CB, Feder S, et al. Patients’ and oncologists’ views on family involvement in goals of care conversations. Psychooncology 2018;27(3):1035–1041; doi: 10.1002/pon.4630 [DOI] [PubMed] [Google Scholar]
  • 37. He Z, Bhasuran B, Jin Q, et al. Quality of answers of generative large language models versus peer users for interpreting laboratory test results for lay patients: Evaluation study. J Med Internet Res 2024;26:e56655; doi: 10.2196/56655 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38. Chuang YN, Tang R, Jiang X, et al. SPeC: A soft prompt-based calibration on performance variability of large language model in clinical notes summarization. J Biomed Inform 2024;151:104606; doi: 10.1016/j.jbi.2024.104606 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39. McMullan M. Patients using the internet to obtain health information: How this affects the patient-health professional relationship. Patient Educ Couns 2006;63(1–2):24–28; doi: 10.1016/j.pec.2005.10.006 [DOI] [PubMed] [Google Scholar]

Associated Data
Supplementary Materials

Supplementary Appendix SA1
