Journal of Sport and Health Science
Editorial. 2025 May 9;14:101054. doi: 10.1016/j.jshs.2025.101054

Artificial intelligence in health and sport sciences: Promise, progress, and prudence

Ruopeng An
PMCID: PMC12221458  PMID: 40349842

Artificial intelligence (AI) has rapidly evolved from a promising computational tool into a central force in global healthcare, exercise science, and human performance research.1 While once largely conceptual, its applications today are increasingly tangible—powering diagnostic tools, enhancing training personalization, guiding rehabilitation strategies, and informing ethical governance in sport.2 The integration of AI into health and sport science mirrors a broader digital health revolution, and this special topic offers a compelling snapshot of how AI is reshaping the landscape of patient care, athlete monitoring, and scientific inquiry.

Among the articles in this issue is a comparative analysis of large language models (LLMs) in the context of osteoarthritis (OA) patient education. Cao et al.’s study evaluates the performance of ChatGPT-3.5, ChatGPT-4.0, and Perplexity in addressing common patient questions related to OA. Their findings underscore the technical strides made by advanced LLMs, with ChatGPT-4.0 demonstrating significantly higher accuracy and comprehensiveness than its counterparts. Notably, 64% of its responses were deemed “excellent” by orthopedic specialists, compared to 40% for ChatGPT-3.5 and 28% for Perplexity. Although these tools performed well in domains such as pathogenesis and prognosis, they struggled with nuanced areas like treatment and prevention.

This aligns with broader findings highlighting the promise and pitfalls of generative AI in medical education and health communication. Recent studies confirm that LLMs such as GPT-4.0 have surpassed prior benchmarks in medical exam simulations and clinical reasoning tasks.3 Yet they also reveal consistent weaknesses in domain-specific accuracy and real-world contextual judgment.4 Cao et al.’s work is significant not only for benchmarking LLMs’ utility in OA education but also for raising broader questions about AI-assisted health literacy and misinformation—a subject requiring more structured human oversight and iterative model refinement.5

Ethical complexities emerge as a central concern in the second article by Kim et al., a systematic scoping review that identifies 4 thematic areas of AI-related ethical issues in sport: fairness and bias, transparency, privacy, and accountability. Their synthesis of 25 studies outlines how AI technologies may reproduce or amplify systemic inequalities, emphasizing the challenges of interpretability and transparency in AI-driven decisions. These concerns are corroborated in the broader AI-in-sport literature. Munoz-Macho et al.,6 in their scoping review of AI in elite team sports, identify widespread dependence on machine learning methods such as XGBoost and support vector machines, often without proper attention to demographic bias or explainability mechanisms.

Transparency and explainability are not merely academic ideals but are essential for stakeholder trust. In sports contexts, where biometric data is increasingly mined via wearables and computer vision, the opacity of proprietary algorithms can erode athlete autonomy and lead to data misuse.7 Moreover, the review emphasizes privacy violations in data-rich environments such as biometric monitoring and facial recognition-based assessments, practices flagged by multiple scholars as legally and ethically precarious.8 In this light, the call for tailored ethical frameworks—sensitive to amateur, gender-diverse, and underrepresented contexts—must not be postponed.
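The explainability concern above can be made concrete with a simple, model-agnostic probe: permutation importance measures how much a black-box model’s accuracy drops when a single input feature is shuffled, revealing which inputs the model actually relies on even when its internals are proprietary. The sketch below is a minimal, dependency-free illustration; the “injury risk” model and feature names are hypothetical, not drawn from any study in this issue.

```python
import random

def permutation_importance(model, X, y, metric, n_repeats=10, seed=0):
    """Average drop in the metric when one feature column is shuffled:
    a model-agnostic probe of which inputs a black-box model relies on."""
    rng = random.Random(seed)
    baseline = metric(y, [model(row) for row in X])
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)  # break the link between feature j and the labels
            X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            drops.append(baseline - metric(y, [model(row) for row in X_perm]))
        importances.append(sum(drops) / n_repeats)
    return importances

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical "injury risk" model that only looks at feature 0 (weekly load);
# feature 1 (jersey number) is ignored, so its importance should be exactly 0.
model = lambda row: int(row[0] > 5)
X = [[load, 10 + load] for load in range(10)]  # [weekly load, jersey number]
y = [int(load > 5) for load in range(10)]
imp = permutation_importance(model, X, y, accuracy)
```

A near-zero importance flags a feature the model ignores; conversely, high importance for a sensitive attribute (e.g., demographic proxies) is exactly the kind of signal an external audit of an opaque system can surface from predictions alone.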

Equally innovative is the systematic review by Shen et al., which evaluates AI-powered social robots in promoting physical activity (PA) among older adults. Their analysis of 19 studies illustrates the wide-ranging benefits of robotic companions, from improving PA adherence and engagement levels to enhancing sleep and medication routines. The diversity of environments—including home settings and care facilities—speaks to the scalability of these interventions.

Other systematic reviews echo these findings. For example, Seo et al.9 found that robotic interventions hold promise for augmenting physical capabilities and reducing loneliness in older populations. Moreover, human-robot interaction in PA contexts has demonstrated promising psychosocial outcomes, particularly in reducing sedentary behavior and enhancing daily functional autonomy.10 Yet challenges persist, including cost sustainability, personalization to cognitive impairments, and cultural acceptance—factors flagged in recent robotics reviews.11

The final article by Shang et al. explores the Maturo application, an AI-powered tool for assessing biological maturation in youth athletes. Validated against expert evaluations, Maturo demonstrated high agreement across biological age, predicted adult height, and maturation timing, with intraclass correlation coefficient values exceeding 0.95. These results speak to the clinical promise of automated growth tracking tools that are noninvasive, cost-effective, and scalable.

Their study parallels other research investigating digital and algorithmic maturation tools. For example, Retzepis et al.12 showed that machine learning models using anthropometric data could accurately predict peak height velocity, offering a digital alternative to radiographic methods. Similarly, Maturo’s high κ coefficients for maturity classification align with broader findings that support AI’s role in age estimation, particularly in high-volume sports academies where rapid decisions are essential.13 Nonetheless, generalizability remains limited by population homogeneity, an issue the authors rightfully highlight.
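The agreement statistics behind these validations are worth unpacking: the intraclass correlation coefficient quantifies agreement on continuous estimates (e.g., predicted adult height), while Cohen’s κ does the same for categorical calls such as maturity status, correcting raw agreement for chance. A minimal sketch of Cohen’s κ follows; the maturity labels are illustrative, not data from Shang et al.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if each rater labeled independently at their base rates
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical maturity-status labels relative to peak height velocity
app_labels    = ["pre", "pre", "circa", "circa", "post", "post", "pre", "circa"]
expert_labels = ["pre", "pre", "circa", "post",  "post", "post", "pre", "circa"]
print(round(cohens_kappa(app_labels, expert_labels), 3))  # → 0.814
```

Because κ discounts agreement expected by chance, it is a stricter (and more informative) benchmark than raw percent agreement when class frequencies are imbalanced, as they typically are in youth cohorts sampled around a single maturation window.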

These 4 studies reveal a dynamic convergence of AI and sports health science across multiple dimensions—education, ethics, engagement, and evaluation. Yet these advancements are not without caveats. The excitement around AI must be tempered with rigorous validation, transparent reporting, and context-sensitive implementation.

Emerging reviews have underscored this imperative. Mohammed et al.14 and Pardeshi15 both caution that AI’s integration into training, biomechanics, and decision-making must be accompanied by stakeholder education and policy clarity. Likewise, interdisciplinary collaboration among computer scientists, ethicists, clinicians, and coaches is vital for responsible deployment.

Looking ahead, the field must grapple with several unresolved challenges: algorithmic fairness,16 cultural biases in training data,17 economic disparities in access to AI tools, and the tension between data-driven decisions and human expertise.18 Future research should prioritize longitudinal designs, randomized trials for intervention efficacy, and diverse sampling across gender, age, and sociocultural contexts.

Integrating AI in sports and health sciences offers fertile ground for innovation, but it must be navigated with caution and care. The studies in this special issue reflect the achievements and growing pains of this frontier. As with all major technological shifts, progress is only meaningful when guided by ethical commitment, methodological rigor, and inclusivity.

Competing interests

The author declares that he has no competing interests.

Footnotes

Peer review under responsibility of Shanghai University of Sport.

References

1. Roski J., Chapman W., Heffner J., et al. Chapter 3, How artificial intelligence is changing health and health care. In: Matheny M., Israni S.T., Ahmed M., Whicher D., editors. Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril. Washington, DC: The National Academies Press; 2022. p. 65–98.
2. Puce L., Bragazzi N.L., Currà A., Trompetto C. Harnessing generative artificial intelligence for exercise and training prescription: Applications and implications in sports and physical activity: A systematic literature review. Appl Sci. 2025;15:3497. doi: 10.3390/app15073497.
3. Kung T.H., Cheatham M., Medenilla A., et al. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLoS Digit Health. 2023;2. doi: 10.1371/journal.pdig.0000198.
4. Gilson A., Safranek C.W., Huang T., et al. How does ChatGPT perform on the USMLE Step 1 and Step 2 exams? The implications of large language models for medical education and knowledge assessment. JMIR Med Educ. 2023;9. doi: 10.2196/45312.
5. Liu T., Xiao X. A framework of AI-based approaches to improving eHealth literacy and combating infodemic. Front Public Health. 2021;9. doi: 10.3389/fpubh.2021.755808.
6. Munoz-Macho A.A., Domínguez-Morales M.J., Sevillano-Ramos J.L. Performance and healthcare analysis in elite sports teams using artificial intelligence: A scoping review. Front Sports Act Living. 2024;6. doi: 10.3389/fspor.2024.1383723.
7. Ramkumar P.N., Luu B.C., Haeberle H.S., Karnuta J.M., Nwachukwu B.U., Williams R.J. Sports medicine and artificial intelligence: A primer. Am J Sports Med. 2022;50:1166–1174. doi: 10.1177/03635465211008648.
8. Sureshbabu S., Lavaraju B. A critical study on artificial intelligence and its impact on sports business. Int J Innov Sci Res Technol. 2024;9:458–468.
9. Seo K., Jang T., Seo J. Effect of AI intervention programs for older adults on the quality of life: A systematic review and meta-analysis of randomized controlled trials. Digit Health. 2025;11. doi: 10.1177/20552076251324014.
10. Weerarathna I.N., Raymond D., Luharia A. Human–robot collaboration for healthcare: A narrative review. Cureus. 2023;15. doi: 10.7759/cureus.49210.
11. Sawik B., Tobis S., Baum E., et al. Robots for elderly care: Review, multi-criteria optimization model and qualitative case study. Healthcare (Basel). 2023;11:1286. doi: 10.3390/healthcare11091286.
12. Retzepis N.O., Avloniti A., Kokkotis C., et al. Identifying key factors for predicting the age at peak height velocity in preadolescent team sports athletes using explainable machine learning. Sports (Basel). 2024;12:287. doi: 10.3390/sports12110287.
13. Zeng C., Huang Y., Yu L., Zeng Q., Wang B., Xu Y. Long-term assessment of rehabilitation treatment of sports through artificial intelligence research. Comput Math Methods Med. 2021;2021. doi: 10.1155/2021/4980718.
14. Mohammed A., Othman Z., Abdullah A. The role of artificial intelligence in enhancing sports analytics and training. Cihan Univ-Erbil Sci J. 2024;8:58–62.
15. Pardeshi S. Exploring plausible uses of artificial intelligence in sports. Scholarly Rev J. 2024. doi: 10.70121/001c.124886.
16. Chen R.J., Wang J.J., Williamson D.F.K., et al. Algorithmic fairness in artificial intelligence for medicine and healthcare. Nat Biomed Eng. 2023;7:719–742. doi: 10.1038/s41551-023-01056-8.
17. Tao Y., Viberg O., Baker R.S., Kizilcec R.F. Cultural bias and cultural alignment of large language models. PNAS Nexus. 2024;3:pgae346. doi: 10.1093/pnasnexus/pgae346.
18. Ahmad S.F., Han H., Alam M.M., et al. Impact of artificial intelligence on human loss in decision making, laziness and safety in education. Humanit Soc Sci Commun. 2023;10:311. doi: 10.1057/s41599-023-01787-8.

Articles from Journal of Sport and Health Science are provided here courtesy of Shanghai University of Sport
