To the Editor,
We recently read the study, “Is Information About Musculoskeletal Malignancies From Large Language Models or Web Resources at a Suitable Reading Level for Patients?” [2], which compared the readability of Google and ChatGPT-3.5 search results for three common bone cancers: osteosarcoma, chondrosarcoma, and Ewing sarcoma. While the goals of this paper are clear and important, the methodology warrants criticism.
The selection of questions appears to have been drawn primarily from those published by the American Cancer Society [1], which limited the scope of the investigation and may have introduced bias. The study would have benefited from a wider range of data sources and from user-generated questions that better reflect patients’ real-world concerns at the time of diagnosis; this would have captured a broader range of information needs and improved the relevance of the findings.
Additionally, important information, such as whether the responses appropriately addressed patients’ problems or reflected their experiences, was not evaluated, leaving the results incomplete. We would like to know whether the authors considered gathering user input or conducting usability testing to determine how real patients perceived the material. Readability is only one part of a patient’s experience; a study that assesses how patients actually experience the content might be more robust and, ultimately, more clinically relevant than tallying quantitative readability metrics alone. Future research should investigate the effects of other presentation styles (such as summaries and graphics) in conjunction with readability assessments, which might provide greater insight into how to arrange information effectively for patients.
In terms of innovation and future directions, a new study might include comparative assessments of AI models beyond ChatGPT-3.5 to gauge progress in developing accessible patient education materials. Furthermore, using machine-learning techniques to tailor responses to individual cognitive levels could improve customization and adaptability, allowing AI systems to serve a broader spectrum of patient populations. Finally, longitudinal research should examine how the readability of online health resources changes over time in response to technological advancements and public health communication strategies, perhaps informing larger discussions about successful health literacy interventions.
Footnotes
(RE: Guirguis PG, Youssef MP, Punreddy A, Botros M, Raiford M, McDowell S. Is Information About Musculoskeletal Malignancies From Large Language Models or Web Resources at a Suitable Reading Level for Patients? Clin Orthop Relat Res. 2025;483:306-315.)
Each author certifies that there are no funding or commercial associations (consultancies, stock ownership, equity interest, patent/licensing arrangements, etc.) that might pose a conflict of interest in connection with the submitted article related to the author or any immediate family members.
All ICMJE Conflict of Interest Forms for authors and Clinical Orthopaedics and Related Research® editors and board members are on file with the publication and can be viewed on request.
The opinions expressed are those of the writers, and do not reflect the opinion or policy of CORR® or The Association of Bone and Joint Surgeons®.
References
1. American Cancer Society. Bone cancer. Available at: https://www.cancer.org/cancer/types/bone-cancer.html. Accessed October 15, 2024.
2. Guirguis PG, Youssef MP, Punreddy A, Botros M, Raiford M, McDowell S. Is information about musculoskeletal malignancies from large language models or web resources at a suitable reading level for patients? Clin Orthop Relat Res. 2025;483:306-315.
