Abstract
Diagnostic imaging reports are generally written for an audience of other providers. As a result, the reports are dense with medical jargon and technical detail to ensure accurate communication. With implementation of the 21st Century Cures Act, patients have faster and broader access to their imaging reports, but these reports are still written above the comprehension level of the average patient. Consequently, many patients have requested that reports be conveyed in language accessible to them. Numerous studies have shown that improving patients’ understanding of their condition results in better outcomes, so improving comprehension of imaging reports is essential. Summary statements, second reports, and the inclusion of the radiologist’s phone number have been proposed, but these solutions have implications for radiologist workflow. Artificial intelligence (AI) has the potential to simplify imaging reports without significant workflow disruption. Many AI technologies have been applied to radiology reports for various clinical and research purposes, but patient-focused solutions have largely been ignored. New natural language processing technologies and large language models (LLMs) have the potential to improve patient understanding of imaging reports. However, LLMs are a nascent technology, and significant research is required before LLM-driven report simplification can be used in patient care.
Keywords: 21st Century Cures Act, Imaging Report, Radiology Report, Artificial Intelligence, Large Language Model, Natural Language Processing
Introduction
Artificial intelligence (AI) is increasingly applied to the field of radiology. Since 2017, the number of radiology-related AI papers has increased drastically (Figure 1). Current applications of AI center on interpreting medical images and improving workflow, driven by deep learning methods such as convolutional neural networks trained on large data sets [1]. Computer-aided detection has found application in breast imaging and chest radiography, among other modalities [2]. During this recent boom, AI has transformed the radiology experience for radiologists, but the use of AI and big data to improve the patient experience has remained largely unexplored (Figure 1). New generative AI technology, based on natural language processing (NLP), has the potential to drastically improve patient health literacy [3].
Figure 1.
Graph of PubMed-indexed radiology publications between 1989 and 2022 related to artificial intelligence, to both artificial intelligence and the patient, to deep learning, and to natural language processing.
Patient Health Literacy
Imaging Reports
Imaging reports have historically been written for an audience of other physicians and healthcare professionals. Until the 1970s, radiology reports were predominantly a form of communication between the radiologist and the referring provider, earning radiologists the nickname “the doctor’s doctor.” In the late 20th century, legal pressures forced radiologists to increasingly become the patient’s doctor as well [4]. Notably, the Mammography Quality Standards Act (MQSA), passed in 1992, required radiologists to send a lay summary directly to patients.
21st Century Cures Act
Today, with the 21st Century Cures Act [5], imaging reports are increasingly and immediately available to patients. Unfortunately, they remain incomprehensible to the average patient in the United States [6]. The Health Insurance Portability and Accountability Act’s Privacy Rule (1996), as modified by the Health Information Technology for Economic and Clinical Health Act of 2009, established the patient’s right to access their medical records [7]. However, many barriers to access remained, as patients had to request their health information from providers [8]. The 21st Century Cures Act, signed into law in December 2016, significantly changed the way patients interact with their health information. While the act is best known for its provisions designed to accelerate drug and device approvals, lesser-known provisions improved patient access to their electronic health information (EHI). The Cures Act Final Rule requires that patients be able to access their EHI electronically – whether unstructured or structured – for free [9]. Further, the Information Blocking Provision required that patients have access to segments of their EHI (including imaging reports), as defined by the United States Core Data for Interoperability (USCDI), by April 5, 2021, and to all of their EHI by October 6, 2022, with certain exceptions [9].
Before the Information Blocking Provision, many practices used time-delayed release to give the referring provider time to review the report and arrange future care before patients became aware of abnormal or anxiety-inducing findings [10-12]. The provision’s effect on time-delayed release is ambiguous, but many providers have already stopped the practice [10-12]. Practices dropping time-delayed release have reported increased call volume, which may contribute to workflow disruptions and provider burnout [10,11]. At the same time, the Information Blocking Provision has many ramifications for patient privacy. Notably, parents or caregivers with proxy access to an adolescent’s or older adult’s EHI may inadvertently see information that was requested to be withheld [9]. Immediate release creates an opportunity to further patient-centered care and allows greater patient participation in care decisions, but many hurdles remain. Here, issues related to patient health literacy are addressed.
Patient Engagement
Even before the Cures Act, patients engaged with radiology web portals: in one study, 51.2% of patients viewed their radiology reports [13]. In another study, 85% of patients wished to view their radiological images, while 64% wished to have access to their reports [14]. Additionally, a study of web portal requests found that 33% of patients sent messages asking for the results of a recent scan [8]. Demographically, women, English speakers, those with commercial insurance, and patients aged 25-39 were the most likely to view their reports; compared with White patients, Asian American patients were significantly more likely, and African American patients significantly less likely, to view their reports [13]. Regarding report content, patients wish to receive very detailed accounts of their radiology findings: 81.6% for abnormal findings and 46.4% for normal findings [15].
Limitations to Patient Health Literacy
As radiology moves to a value-based and patient-centered practice, access is only the first step: patients must understand their imaging reports. The American Medical Association (AMA) and National Institutes of Health (NIH) recommend that patient education materials be written between the 3rd- and 7th-grade levels, given that the average American reads at the 8th-grade level [4,16,17]. However, a study by Martin-Carreras et al. analyzing 97,052 radiology reports found a mean reading grade level (± standard deviation) of 13.0 ± 2.4, with only 4.2% of reports at or below the 8th-grade reading level [6].
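Reading grade level is typically estimated with formula-based metrics. As a minimal sketch (not the tooling used in the cited studies), the widely used Flesch-Kincaid grade level can be computed as 0.39 × (words/sentences) + 11.8 × (syllables/word) − 15.59; the syllable counter below is a deliberately rough approximation:

import re

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/word) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    # Rough syllable estimate: count runs of vowels in each word.
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

sentence = "There is mild dextroconvex scoliosis of the thoracolumbar spine."
print(round(flesch_kincaid_grade(sentence), 1))  # roughly 14, far above 8th grade

Polysyllabic clinical vocabulary drives the syllables-per-word term, which is why jargon-dense reports score well above the recommended range.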
The average reading level of reports may reflect the fact that the patient’s ability to understand them is often not considered [18]. A scoping review of English-language diagnostic imaging reporting guidelines found that only two of six international guidelines (from The Royal College of Radiologists and the Royal Australian and New Zealand College of Radiologists) explicitly note that imaging reports should consider the patient. Instead, guidelines from international radiology professional bodies emphasize structure and technical detail [18].
Some authors have also written about making radiology reports more understandable for both patients and referring providers [19-21]. These authors recognize that the report must be clear, concise, and specific while balancing the preferences of its different readers. They suggest that reports can strike a better balance among the patient, referring provider, and radiologist without losing medical sophistication by favoring the general medical language taught in medical school over residency-specific jargon [19].
Currently, the greatest barriers to patient comprehension of reports are polysyllabic terms and intricate concepts unfamiliar to the layperson. A pilot study by Gunn et al. asked 104 patients to review CT, X-ray, ultrasound, and MRI reports; rate their comprehension; identify any issues with the report; and provide free-text feedback [22]. Median comprehension was 2.5/5, and the most common issue impacting comprehension was “unclear or technical language” (59.6% of evaluations). In the free-text portion, the most common request was an explanation in lay terms (20.1% of evaluations). These findings emerged despite 63% of respondents having at least a college degree, far above the national average of 32.5% [22,23]. Many other studies show that patients have a poor understanding of their radiology findings, often due to technical language [24-28].
Importance of Patient Health Literacy
A systematic review by Nickel et al. found that the use of medical jargon contributes to greater patient anxiety, perceptions of increased severity of the ailment, and increased inclination toward more aggressive treatment [29], creating concern for many referring providers [30]. This concern is amplified by increasingly immediate access to imaging reports.
Immediate access to readable imaging reports has the potential to tremendously benefit patients, as patients with a greater understanding of their disease are more likely to adhere to treatment plans [31]. For mammograms, decreasing the grade level of the wording of “recall” letters following an abnormal finding has been shown to significantly improve timely patient follow-up [32]. More generally, a systematic review by Berkman et al. found that improved health literacy is associated with decreased hospitalizations and emergency care use, improved use of health care services, improved health status and lower mortality in older patients, and diminished racial disparities [33]. Improving the comprehension of imaging reports thus has the potential to tremendously improve patient outcomes while also improving the visibility of radiologists [32].
Potential Solutions
There are many potential solutions for bridging the gap in patient health literacy. Early efforts in clinical informatics by the Canon Group sought to bring structure and a standard lexicon to radiology reports in the 1990s to aid future computer-assisted analyses [34,35]. Today, standard lexicons, such as RadLex by the Radiological Society of North America, and structured reports have the potential to improve patient comprehension of radiology reports [4].
Decreasing the reading level of imaging reports has also been proposed, but this may impair communication with other providers and increase the chance of medical errors [22]. Instead, radiologists could generate a second report in lay terms for each examination, in addition to the report directed toward other healthcare professionals. However, a second report would increase the administrative burden on radiologists and may lead to lower job satisfaction [22]. Others have suggested including the radiologist’s contact information in the report [36] or offering an immediate results consultation with a radiologist [37]. Some articles have also suggested including a single summarizing statement in layman’s terms at the end of the report [22,38].
Authors have also suggested using AI to mine the report and create such simple summary sentences, one of many proposed AI-driven solutions [10]. Many have also suggested providing annotated reports with definitions, or including hyperlinks on medical terms that lead to more information and/or images [11,22,39,40]. These suggestions are implementable with software that can recognize medical and radiological terms and link them to medical databases [41]. A minimal sketch of this idea appears below.
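As an illustration only, term recognition and annotation can be reduced to dictionary lookup; a production tool would instead link recognized terms to a standard lexicon or medical database such as RadLex, and the glossary entries below are assumptions for demonstration:

import re

# Hypothetical mini-glossary for illustration; not a real terminology service.
GLOSSARY = {
    "atelectasis": "partial collapse of lung tissue",
    "pleural effusion": "fluid around the lung",
}

def annotate_report(text: str) -> str:
    """Append a lay definition in brackets after each recognized term."""
    for term, definition in GLOSSARY.items():
        pattern = re.compile(rf"\b{re.escape(term)}\b", re.IGNORECASE)
        text = pattern.sub(lambda m, d=definition: f"{m.group(0)} [{d}]", text)
    return text

print(annotate_report("Bibasilar atelectasis without pleural effusion."))
# Bibasilar atelectasis [partial collapse of lung tissue] without
# pleural effusion [fluid around the lung].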
New Artificial Intelligence Methods to Bridge the Gap
Background
AI-driven computer-aided diagnosis systems have long been incorporated into radiology workflows [2]. However, NLP, a subset of AI, is now being used to create many patient-experience tools (Figure 2). NLP has shown effectiveness in summarizing text, translating text, and answering questions [42]. NLP can be divided into symbolic and statistical approaches [43]. While symbolic NLP uses a rules-based architecture, statistical NLP learns from large amounts of data [43]. Symbolic systems let a programmer know exactly why a certain output was generated and allow additional rules to be added to incorporate more information. Statistical NLP, meanwhile, produces greater variation in output but does not require encoding as many linguistic rules to generate the desired output [44]. Statistical approaches are particularly useful for analyzing imaging reports, which vary widely by modality, indication, preference, and culture [45,46]. The earliest initiatives in processing radiology reports were based on symbolic approaches, but with advances in NLP, most paradigms now combine symbolic and statistical approaches [44]. A minimal sketch of a symbolic rule follows Figure 2.
Figure 2.
Concepts in Computer Science and Artificial Intelligence.
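To make the symbolic/statistical distinction concrete, the sketch below is an assumed, hand-written symbolic rule for detecting a negated finding (it is not any cited system). Every behavior is traceable to an explicit rule, whereas a statistical system would learn such patterns from labeled reports:

import re

# Symbolic NLP: explicit, inspectable rules (illustrative only).
NEGATION = re.compile(r"\bno (?:evidence of |acute )?pneumonia\b", re.IGNORECASE)

def classify_pneumonia(sentence: str) -> str:
    """Label a report sentence for pneumonia using hand-written rules."""
    if "pneumonia" not in sentence.lower():
        return "not mentioned"
    if NEGATION.search(sentence):
        return "negated"
    return "affirmed"

print(classify_pneumonia("No evidence of pneumonia."))    # negated
print(classify_pneumonia("Right lower lobe pneumonia.")) # affirmed
print(classify_pneumonia("Lungs are clear."))            # not mentioned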
These early report-processing efforts include noting critical findings [47]; identifying diseases such as urinary tract calculi [48], pneumonia [49], peripheral arterial disease [50], and thromboembolism [51]; creating follow-up recommendations [52]; describing the change in radiology findings over time [53]; categorizing oncologic responses [54]; and extracting information such as measurements [55] and Breast Imaging-Reporting and Data System (BI-RADS) classifications [56]. Further, using conventional machine learning methods, Goff and Loehfelm explored the use of NLP to summarize imaging reports [57]. A sketch of such extraction follows.
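As a concrete, assumed illustration of this kind of information extraction (the cited systems are far more sophisticated), simple patterns can pull measurements and BI-RADS categories out of free text:

import re

# Assumed patterns for illustration; real extraction pipelines handle
# far more lexical variation, negation, and context.
MEASUREMENT = re.compile(r"(\d+(?:\.\d+)?)\s*(mm|cm)\b", re.IGNORECASE)
BIRADS = re.compile(r"BI-?RADS[:\s]*([0-6])", re.IGNORECASE)

report = "8 mm nodule in the right upper lobe. Assessment: BI-RADS 4."
print(MEASUREMENT.findall(report))  # [('8', 'mm')]
print(BIRADS.findall(report))       # ['4']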
Deep learning NLP (DL-NLP) can be considered a type of statistical NLP, but the term is often used to distinguish modern neural-network architectures from earlier, simpler statistical methods [58]. Large language models (LLMs) such as OpenAI’s ChatGPT are examples of DL-NLP. These DL-NLP approaches have also been applied to imaging reports for similar purposes: identifying critical findings [59], categorizing oncologic responses [60], finding follow-up recommendations [61], identifying pulmonary emboli [62,63], detecting complications of stroke [64], and classifying epilepsy brain MRIs [65], among others.
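In contrast to the hand-written rules sketched earlier, a pretrained neural model can classify report text without task-specific rules. The sketch below uses a general-purpose zero-shot classifier; the model choice and candidate labels are illustrative assumptions, not those of the cited studies:

from transformers import pipeline

# A minimal DL-NLP sketch: zero-shot classification of report text with a
# general-purpose pretrained model (model and labels are assumptions).
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

result = classifier(
    "Filling defect in the right main pulmonary artery.",
    candidate_labels=["pulmonary embolism", "no acute findings"],
)
# The model returns labels ranked by score; no hand-written rules involved.
print(result["labels"][0], round(result["scores"][0], 2))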
AI to Simplify Imaging Reports
DL-NLP and LLMs have the potential to generate impressions, simplify radiology reports for patients, and improve patient engagement [66]. A host of publications have recently explored this possibility [67-76]. The most commonly studied LLM for radiology report simplification and generation thus far is ChatGPT [67-70,75], but authors have also studied other mass-market LLMs such as Google Bard and Microsoft Bing [70], while others have trained their own models [73,74].
To simplify radiology reports, authors have tested various prompts – with different levels of context – in mass-market LLMs; these authors suggest that greater context leads to better simplification [67,69,70]. To measure the degree of simplification, a few papers have computed the reading grade level of raw reports and LLM-produced outputs using well-established metrics such as Flesch-Kincaid, Coleman-Liau, the Automated Readability Index, and Gunning Fog [68,70]. In line with Martin-Carreras et al.’s study, the reading grade level of raw reports was above the recommended 8th-grade level. For certain prompt and LLM combinations, the LLMs were able to simplify radiology reports to below the 8th-grade reading level [68,70]. However, the accuracy of simplified reports verified to be below a given grade level has yet to be sufficiently tested. The sketch below illustrates this prompt-then-measure workflow.
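This is a sketch under stated assumptions, not the prompts or pipelines evaluated in the cited studies: the prompt wording, model name, and example report are all illustrative.

import textstat
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

report = "Impression: No acute intracranial hemorrhage, mass effect, or midline shift."
prompt = (
    "You are a radiologist. Rewrite the following report for a patient "
    "at an 8th-grade reading level, without omitting findings:\n\n" + report
)

# Ask the model for a simplified version of the report.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
simplified = response.choices[0].message.content

# Compare readability before and after with one established metric.
print("Original grade level:  ", textstat.flesch_kincaid_grade(report))
print("Simplified grade level:", textstat.flesch_kincaid_grade(simplified))

Readability metrics only measure surface form, which is why the accuracy checks described next remain essential.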
Some studies have measured the accuracy of simplified reports [67,69]. In Jeblick et al.’s study, most radiologists agreed that the simplified reports were factually correct and complete. However, factual errors were discovered, including misinterpretation of medical terms, imprecise or odd language, and grammatical errors, and key medical information was often omitted from the simplified report [67]. Overall, the radiologists recognized that many statements in the simplified reports could lead to wrong conclusions and consequent psychological harm, but they generally believed that direct harm to patients would be averted [67]. In Lyu et al.’s study, an evaluation by two radiologists found that ChatGPT outputs for CT scans (values for MRI in parentheses) contained missing information once every 10.3 (12.5) outputs on average and incorrect information once every 31.3 (15.4) outputs. The radiologists gave an overall quality score of 4.268/5.0, with 52% of all outputs receiving a full score [69].
Limitations
These results suggest that LLMs have the potential to simplify radiology reports. Automatically generated second reports could be sent to patients along with their original report after verification by experts. For now, though, verification is necessary because LLMs can hallucinate and provide false information. LLMs may also lack the full picture of a patient’s history and may provide incorrect recommendations. Over time, LLMs may improve in their ability to accurately simplify radiology reports, as GPT-4 has been shown to outperform GPT-3.5 in certain tasks [69,70].
Conclusion
Due to the Cures Act, patients have greater access to their imaging reports. However, these reports are often not comprehensible to the average patient. As medicine and radiology evolve toward a more patient-centered practice, improving patients’ ability to understand their radiology results is warranted, given that patient understanding of medical information has been shown to improve outcomes [33]. Many solutions, such as summary statements or second reports, have been proposed, but these may impact the radiologist’s workflow. AI, which already has many applications in radiology, has the potential to drive the simplification of imaging reports without significant disruption to clinical workflow. However, LLMs are a nascent technology, and rigorous research is required prior to the implementation of LLM-driven report simplification in patient care.
Funding
None.
Glossary
- AI
Artificial Intelligence
- NLP
Natural Language Processing
- LLM
Large Language Model
- EHI
Electronic Health Information
Author Contributions
KA (ORCID: https://www.orcid.org/0000-0002-0505-9832), PK (ORCID: https://www.orcid.org/0000-0002-6966-6084), and RD (ORCID: https://www.orcid.org/0000-0002-2692-8195): literature search, initial drafting, and editing. HF and SC (ORCID: https://www.orcid.org/0000-0002-3096-835X): critical analysis, drafting, and guidance.
References
- Hosny A, Parmar C, Quackenbush J, Schwartz LH, Aerts HJ. Artificial intelligence in radiology. Nat Rev Cancer. 2018 Aug;18(8):500–10. 10.1038/s41568-018-0016-5
- Fujita H. AI-based computer-aided diagnosis (AI-CAD): the latest review to read first. Radiol Phys Technol. 2020 Mar;13(1):6–19. 10.1007/s12194-019-00552-4
- Bobba PS, Sailer A, Pruneski JA, Beck S, Mozayan A, Mozayan S, et al. Natural language processing in radiology: clinical applications and future directions. Clin Imaging. 2023;97:55–61. 10.1016/j.clinimag.2023.02.014
- Vincoff NS, Barish MA, Grimaldi G. The patient-friendly radiology report: history, evolution, challenges and opportunities. Clin Imaging. 2022 Sep;89:128–35. 10.1016/j.clinimag.2022.06.018
- US FDA. 21st Century Cures Act. Available from: https://www.fda.gov/regulatory-information/selected-amendments-fdc-act/21st-century-cures-act
- Martin-Carreras T, Cook TS, Kahn CE Jr. Readability of radiology reports: implications for patient-centered care. Clin Imaging. 2019;54:116–20. 10.1016/j.clinimag.2018.12.006
- Lye CT, Forman HP, Daniel JG, Krumholz HM. The 21st Century Cures Act and electronic health records one year later: will patients see the benefits? J Am Med Inform Assoc. 2018 Sep;25(9):1218–20. 10.1093/jamia/ocy065
- Mervak BM, Davenport MS, Flynt KA, Kazerooni EA, Weadock WJ. What the patient wants: an analysis of radiology-related inquiries from a web-based patient portal. J Am Coll Radiol. 2016 Nov;13(11):1311–8. 10.1016/j.jacr.2016.05.022
- Arvisais-Anhalt S, Lau M, Lehmann CU, Holmgren AJ, Medford RJ, Ramirez CM, et al. The 21st Century Cures Act and multiuser electronic health record access: potential pitfalls of information release. J Med Internet Res. 2022 Feb;24(2):e34085. 10.2196/34085
- Mehan WA Jr, Brink JA, Hirsch JA. 21st Century Cures Act: patient-facing implications of information blocking. J Am Coll Radiol. 2021 Jul;18(7):1012–6. 10.1016/j.jacr.2021.01.016
- Mehan WA Jr, Gee MS, Egan N, Jones PE, Brink JA, Hirsch JA. Immediate radiology report access: a burden to the ordering provider. Curr Probl Diagn Radiol. 2022;51(5):712–6. 10.1067/j.cpradiol.2022.01.012
- Mezrich JL, Jin G, Lye C, Yousman L, Forman HP. Patient electronic access to final radiology reports: what is the current standard of practice, and is an embargo period appropriate? Radiology. 2021 Jul;300(1):187–9. 10.1148/radiol.2021204382
- Miles RC, Hippe DS, Elmore JG, Wang CL, Payne TH, Lee CI. Patient access to online radiology reports: frequency and sociodemographic characteristics associated with use. Acad Radiol. 2016 Sep;23(9):1162–9. 10.1016/j.acra.2016.05.005
- Cabarrus M, Naeger DM, Rybkin A, Qayyum A. Patients prefer results from the ordering provider and access to their radiology reports. J Am Coll Radiol. 2015 Jun;12(6):556–62. 10.1016/j.jacr.2014.12.009
- Mangano MD, Rahman A, Choy G, Sahani DV, Boland GW, Gunn AJ. Radiologists’ role in the communication of imaging examination results to patients: perceptions and preferences of patients. AJR Am J Roentgenol. 2014 Nov;203(5):1034–9. 10.2214/AJR.14.12470
- Weiss BD. Health literacy and patient safety: help patients understand. Manual for clinicians. American Medical Association Foundation; 2007.
- Hansberry DR, Agarwal N, Baker SR. Health literacy and online educational resources: an opportunity to educate patients. AJR Am J Roentgenol. 2015 Jan;204(1):111–6. 10.2214/AJR.14.13086
- Farmer CI, Bourne AM, O’Connor D, Jarvik JG, Buchbinder R. Enhancing clinician and patient understanding of radiology reports: a scoping review of international guidelines. Insights Imaging. 2020 May;11(1):62. 10.1186/s13244-020-00864-9
- Hartung MP, Bickle IC, Gaillard F, Kanne JP. How to create a great radiology report. Radiographics. 2020 Oct;40(6):1658–70. 10.1148/rg.2020200020
- Lukaszewicz A, Uricchio J, Gerasymchuk G. The art of the radiology report: practical and stylistic guidelines for perfecting the conveyance of imaging findings. Can Assoc Radiol J. 2016 Nov;67(4):318–21. 10.1016/j.carj.2016.03.001
- Goergen SK, Pool FJ, Turner TJ, Grimm JE, Appleyard MN, Crock C, et al. Evidence-based guideline for the written radiology report: methods, recommendations and implementation challenges. J Med Imaging Radiat Oncol. 2013 Feb;57(1):1–7. 10.1111/1754-9485.12014
- Gunn AJ, Gilcrease-Garcia B, Mangano MD, Sahani DV, Boland GW, Choy G. JOURNAL CLUB: structured feedback from patients on actual radiology reports: a novel approach to improve reporting practices. AJR Am J Roentgenol. 2017 Jun;208(6):1262–70. 10.2214/AJR.16.17584
- Ryan CL, Siebens J. Educational attainment in the United States: 2009. Population characteristics. Current Population Reports, P20-566. US Census Bureau; 2012.
- Short RG, Middleton D, Befera NT, Gondalia R, Tailor TD. Patient-centered radiology reporting: using online crowdsourcing to assess the effectiveness of a web-based interactive radiology report. J Am Coll Radiol. 2017 Nov;14(11):1489–97. 10.1016/j.jacr.2017.07.027
- Cho JK, Zafar HM, Cook TS. Use of an online crowdsourcing platform to assess patient comprehension of radiology reports and colloquialisms. AJR Am J Roentgenol. 2020 Jun;214(6):1316–20. 10.2214/AJR.19.22202
- Karliner LS, Patricia Kaplan C, Juarbe T, Pasick R, Pérez-Stable EJ. Poor patient comprehension of abnormal mammography results. J Gen Intern Med. 2005 May;20(5):432–7. 10.1111/j.1525-1497.2005.40281.x
- Verosky A, Leonard LD, Quinn C, Vemuru S, Warncke E, Himelhoch B, et al. Patient comprehension of breast pathology report terminology: the need for patient-centered resources. Surgery. 2022 Sep;172(3):831–7. 10.1016/j.surg.2022.05.007
- Short RG, Befera NT, Hoang JK, Tailor TD. A normal thyroid by any other name: linguistic analysis of statements describing a normal thyroid gland from noncontrast chest CT reports. J Am Coll Radiol. 2018 Nov;15(11):1642–7. 10.1016/j.jacr.2018.04.016
- Nickel B, Barratt A, Copp T, Moynihan R, McCaffery K. Words do matter: a systematic review on how different terminology for the same condition influences management preferences. BMJ Open. 2017 Jul;7(7):e014129. 10.1136/bmjopen-2016-014129
- Johnson AJ, Frankel RM, Williams LS, Glover S, Easterling D. Patient access to radiology reports: what do physicians think? J Am Coll Radiol. 2010 Apr;7(4):281–9. 10.1016/j.jacr.2009.10.011
- Walker J, Darer JD, Elmore JG, Delbanco T. The road toward fully transparent medical records. N Engl J Med. 2014 Jan;370(1):6–8. 10.1056/NEJMp1310132
- Nguyen DL, Harvey SC, Oluyemi ET, Myers KS, Mullen LA, Ambinder EB. Impact of improved screening mammography recall lay letter readability on patient follow-up. J Am Coll Radiol. 2020 Nov;17(11):1429–36. 10.1016/j.jacr.2020.07.006
- Berkman ND, Sheridan SL, Donahue KE, Halpern DJ, Crotty K. Low health literacy and health outcomes: an updated systematic review. Ann Intern Med. 2011 Jul;155(2):97–107. 10.7326/0003-4819-155-2-201107190-00005
- Friedman C, Cimino JJ, Johnson SB. A conceptual model for clinical radiology reports. Proceedings of the Annual Symposium on Computer Application in Medical Care. American Medical Informatics Association; 1993.
- Friedman C, Huff SM, Hersh WR, Pattison-Gordon E, Cimino JJ. The Canon Group’s effort: working toward a merged model. J Am Med Inform Assoc. 1995;2(1):4–18. 10.1136/jamia.1995.95202547
- Kemp J, Gannuch G, Kornbluth C, Sarti M. Radiologists include contact telephone number in reports: experience with patient interaction. AJR Am J Roentgenol. 2020 Sep;215(3):673–8. 10.2214/AJR.19.22713
- Kemp J, McKenzie A, Burns J, Miller K. Immediate interpretation and results communication decreases patient anxiety: experience in a private practice community hospital. AJR Am J Roentgenol. 2020 Jun;214(6):1311–5. 10.2214/AJR.19.22264
- Kadom N, Tamasi S, Vey BL, Safdar N, Applegate KE, Sadigh G, et al. Info-RADS: adding a message for patients in radiology reports. J Am Coll Radiol. 2021 Jan;18(1 Pt A):128–32. 10.1016/j.jacr.2020.09.049
- Vitzthum von Eckstaedt H 5th, Kitts AB, Swanson C, Hanley M, Krishnaraj A. Patient-centered radiology reporting for lung cancer screening. J Thorac Imaging. 2020 Mar;35(2):85–90. 10.1097/RTI.0000000000000469
- Cook TS, Oh SC, Kahn CE Jr. Patients’ use and evaluation of an online system to annotate radiology reports with lay language definitions. Acad Radiol. 2017 Sep;24(9):1169–74. 10.1016/j.acra.2017.03.005
- Radiological Society of North America. RadLex Playbook. RSNA website; 2011.
- Min B, Ross H, Sulem E, Veyseh AP, Nguyen TH, Sainz O, et al. Recent advances in natural language processing via large pre-trained language models: a survey. arXiv preprint; 2021.
- Maruyama Y. Symbolic and statistical theories of cognition: towards integrated artificial intelligence. In: Software Engineering and Formal Methods. SEFM 2020 Collocated Workshops: ASYDE, CIFMA, and CoSim-CPS, Amsterdam, The Netherlands, September 14–15, 2020, Revised Selected Papers. Springer; 2021.
- Steinkamp J, Cook TS. Basic artificial intelligence techniques: natural language processing of radiology reports. Radiol Clin North Am. 2021 Nov;59(6):919–31. 10.1016/j.rcl.2021.06.003
- Langlotz CP. Structured radiology reporting: are we there yet? Radiology. 2009 Oct;253(1):23–5. 10.1148/radiol.2531091088
- European Society of Radiology (ESR). ESR paper on structured reporting in radiology. Insights Imaging. 2018;9(1):1–7. 10.1007/s13244-017-0588-8
- Heilbrun ME, Chapman BE, Narasimhan E, Patel N, Mowery D. Feasibility of natural language processing-assisted auditing of critical findings in chest radiology. J Am Coll Radiol. 2019 Sep;16(9 Pt B):1299–304. 10.1016/j.jacr.2019.05.038
- Li AY, Elliot N. Natural language processing to identify ureteric stones in radiology reports. J Med Imaging Radiat Oncol. 2019 Jun;63(3):307–10. 10.1111/1754-9485.12861
- Dublin S, Baldwin E, Walker RL, Christensen LM, Haug PJ, Jackson ML, et al. Natural language processing to identify pneumonia from radiology reports. Pharmacoepidemiol Drug Saf. 2013 Aug;22(8):834–41. 10.1002/pds.3418
- Savova GK, Fan J, Ye Z, Murphy SP, Zheng J, Chute CG, et al. Discovering peripheral arterial disease cases from radiology notes using natural language processing. AMIA Annual Symposium Proceedings. American Medical Informatics Association; 2010.
- Pham AD, Névéol A, Lavergne T, Yasunaga D, Clément O, Meyer G, et al. Natural language processing of radiology reports for the detection of thromboembolic diseases and clinically relevant incidental findings. BMC Bioinformatics. 2014 Aug;15(1):266. 10.1186/1471-2105-15-266
- Lou R, Lalevic D, Chambers C, Zafar HM, Cook TS. Automated detection of radiology reports that require follow-up imaging using natural language processing feature engineering and machine learning classification. J Digit Imaging. 2020 Feb;33(1):131–6. 10.1007/s10278-019-00271-7
- Hassanpour S, Bay G, Langlotz CP. Characterization of change and significance for clinical findings in radiology reports through natural language processing. J Digit Imaging. 2017 Jun;30(3):314–22. 10.1007/s10278-016-9931-8
- Chen PH, Zafar H, Galperin-Aizenberg M, Cook T. Integrating natural language processing and machine learning algorithms to categorize oncologic response in radiology reports. J Digit Imaging. 2018 Apr;31(2):178–84. 10.1007/s10278-017-0027-x
- Sevenster M, Buurman J, Liu P, Peters JF, Chang PJ. Natural language processing techniques for extracting and categorizing finding measurements in narrative radiology reports. Appl Clin Inform. 2015 Sep;6(3):600–110. 10.4338/ACI-2014-11-RA-0110
- Sippo DA, Warden GI, Andriole KP, Lacson R, Ikuta I, Birdwell RL, et al. Automated extraction of BI-RADS final assessment categories from radiology reports with natural language processing. J Digit Imaging. 2013 Oct;26(5):989–94. 10.1007/s10278-013-9616-5
- Goff DJ, Loehfelm TW. Automated radiology report summarization using an open-source natural language processing pipeline. J Digit Imaging. 2018 Apr;31(2):185–92. 10.1007/s10278-017-0030-2
- Deng L, Liu Y. A joint introduction to natural language processing and to deep learning. In: Deep Learning in Natural Language Processing. Springer; 2018:1–22. 10.1007/978-981-10-5209-5_1
- Bressem KK, Adams LC, Gaudin RA, Tröltzsch D, Hamm B, Makowski MR, et al. Highly accurate classification of chest radiographic reports using a deep learning natural language model pre-trained on 3.8 million text reports. Bioinformatics. 2021 Jan;36(21):5255–61. 10.1093/bioinformatics/btaa668
- Kehl KL, Elmarakeby H, Nishino M, Van Allen EM, Lepisto EM, Hassett MJ, et al. Assessment of deep natural language processing in ascertaining oncologic outcomes from radiology reports. JAMA Oncol. 2019 Oct;5(10):1421–9. 10.1001/jamaoncol.2019.1800
- Carrodeguas E, Lacson R, Swanson W, Khorasani R. Use of machine learning to identify follow-up recommendations in radiology reports. J Am Coll Radiol. 2019 Mar;16(3):336–43. 10.1016/j.jacr.2018.10.020
- Chen MC, Ball RL, Yang L, Moradzadeh N, Chapman BE, Larson DB, et al. Deep learning to classify radiology free-text reports. Radiology. 2018 Mar;286(3):845–52. 10.1148/radiol.2017171115
- Banerjee I, Ling Y, Chen MC, Hasan SA, Langlotz CP, Moradzadeh N, et al. Comparative effectiveness of convolutional neural network (CNN) and recurrent neural network (RNN) architectures for radiology text report classification. Artif Intell Med. 2019 Jun;97:79–88. 10.1016/j.artmed.2018.11.004
- Miller MI, Orfanoudaki A, Cronin M, Saglam H, So Yeon Kim I, Balogun O, et al. Natural language processing of radiology reports to detect complications of ischemic stroke. Neurocrit Care. 2022 Aug;37(Suppl 2):291–302. 10.1007/s12028-022-01513-3
- Bayrak S, Yucel E, Takci H. Epilepsy radiology reports classification using deep learning networks. Comput Mater Continua. 2022;70(2):3589–607. 10.32604/cmc.2022.018742
- Elkassem AA, Smith AD. Potential use cases for ChatGPT in radiology reporting. AJR Am J Roentgenol. 2023 Sep;221(3):373–6. 10.2214/AJR.23.29198
- Jeblick K, Schachtner B, Dexl J, Mittermeier A, Stüber AT, Topalis J, et al. ChatGPT makes medicine easy to swallow: an exploratory case study on simplified radiology reports. arXiv preprint; 2022.
- Li H, Moon JT, Iyer D, Balthazar P, Krupinski EA, Bercu ZL, et al. Decoding radiology reports: potential application of OpenAI ChatGPT to enhance patient understanding of diagnostic reports. Clin Imaging. 2023 Sep;101:137–41. 10.1016/j.clinimag.2023.06.008
- Lyu Q, Tan J, Zapadka ME, Ponnatapura J, Niu C, Myers KJ, et al. Translating radiology reports into plain language using ChatGPT and GPT-4 with prompt learning: results, limitations, and potential. Vis Comput Ind Biomed Art. 2023 May;6(1):9. 10.1186/s42492-023-00136-5
- Doshi R, Amin K, Khosla P, Bajaj S, Chheang S, Forman HP. Utilizing large language models to simplify radiology reports: a comparative analysis of ChatGPT-3.5, ChatGPT-4.0, Google Bard, and Microsoft Bing. medRxiv. 2023:2023.06.04.23290786. 10.1101/2023.06.04.23290786
- Rao A, Kim J, Kamineni M, Pang M, Lie W, Dreyer KJ, et al. Evaluating GPT as an adjunct for radiologic decision making: GPT-4 versus GPT-3.5 in a breast imaging pilot. J Am Coll Radiol. 2023 Jun:S1546-1440(23)00394-0. 10.1016/j.jacr.2023.05.003
- Tu T, Azizi S, Driess D, Schaekermann M, Amin M, Chang PC, et al. Towards generalist biomedical AI. arXiv preprint; 2023.
- Liu Z, Zhong A, Li Y, Yang L, Ju C, Wu Z, et al. Radiology-GPT: a large language model for radiology. arXiv preprint; 2023.
- Ma C, Wu Z, Wang J, Xu S, Wei Y, Liu Z, et al. ImpressionGPT: an iterative optimizing framework for radiology report summarization with ChatGPT. arXiv preprint; 2023.
- Adams LC, Truhn D, Busch F, Kader A, Niehues SM, Makowski MR, et al. Leveraging GPT-4 for post hoc transformation of free-text radiology reports into structured reporting: a multilingual feasibility study. Radiology. 2023 May;307(4):e230725. 10.1148/radiol.230725
- Sun Z, Ong H, Kennedy P, Tang L, Chen S, Elias J, et al. Evaluating GPT-4 on impressions generation in radiology reports. Radiology. 2023 Jun;307(5):e231259. 10.1148/radiol.231259


