NPJ Digital Medicine. 2025 Jan 24;8:54. doi: 10.1038/s41746-025-01460-1

Transforming diagnosis through artificial intelligence

Luciana D’Adderio 1,2, David W Bates 3,4,5,6
PMCID: PMC11760373  PMID: 39856192

Abstract

Artificial intelligence (AI) is increasingly permeating the fabric of medicine, but getting full benefits will likely require fundamental changes in practice. Accepting this will be challenging for many clinicians. However, it may be necessary to ensure that AI’s ambitious promises translate into real-life improvement.

Subject terms: Decision making, Health policy, Diagnosis, Translational research


The diagnostic process is a particularly important target for clinical decision support in general, and for AI in particular. Diagnoses are often incorrect, but finding the correct diagnosis, and doing so quickly, is pivotal for delivering effective care. AI-driven algorithms are increasingly used in healthcare settings to support clinicians with diagnosis, treatment, and patient outcome prediction. Drawing on powerful techniques such as machine learning (ML) and deep learning (DL), these algorithms are designed to extract insights from clinical data to support rapid, accurate diagnostic problem-solving and treatment decision-making. Despite the rapid diffusion of these potentially highly transformative technologies, however, we know little to date about how they may affect real-world diagnostic processes and their outcomes - a gap known as the ‘AI chasm’1.

One area in which augmented diagnostic models are proving particularly useful is hyperacute stroke. This is a high-stakes context where the cost of errors can be clinically and reputationally very high, and one that carries fundamental moral and ethical implications for patients, clinicians, and healthcare organizations. Many off-the-shelf stroke AI applications that aim to improve the stroke diagnostic and treatment referral workflows are being introduced into clinical research and practice. AI appears to be radically changing this process by providing a number of new capabilities, including communication features, which can instantaneously distribute MRI/CT images to the stroke team’s mobile phones, and prediction features, such as an indication of whether there is a suspected large vessel occlusion (the principal stroke marker) and of how much of the brain affected by the stroke may be salvageable through surgical intervention (i.e., mechanical thrombectomy)2. The implications for clinical work, however, are not yet fully understood.

The traditional vs. AI-assisted diagnosis and referral process

We report emergent findings from a five-year, in-depth, qualitative study investigating the adoption of leading-edge AI applications for stroke care at three major UK stroke hubs. Our observations focused on the ‘door-to-treatment’ portion of the stroke pathway, starting with the patient scan and ending with a treatment decision.

Our evidence suggests that AI may trigger radical changes to the stroke diagnosis and treatment referral processes as conducted in practice by clinicians. Traditionally, the diagnostic process has begun with a physician examining the patient, gathering data, and constructing an intuitive assessment of their condition. This initial judgment is progressively refined through an iterative process of information gathering, integration, and interpretation. The refinement process, involving hypothesis generation, fine-tuning, and validation, usually culminates in a diagnostic decision that attributes a classification label to inform treatment (e.g., thrombolysis or thrombectomy).

As experts have noted, however, AI is not focused on supporting the diagnostic journey as it stands3. Predictive AI tools are generally designed to produce an answer, in the form of a diagnostic label, answering the binary question of whether a patient has a certain diagnosis. This approach has been challenged on the grounds that “any tool that predicts your destination at the start of your journey isn’t very helpful if it tells you nothing about how to get there”4. Nevertheless, clinicians do not seem discouraged. Our findings show that expert clinicians are adapting to AI by transforming, rather than replacing, the traditional diagnostic process. Instead of simply accepting AI’s diagnostic output or ‘label’, they are developing a new approach that begins with AI’s suggestion and works backwards, assessing its validity against multiple verification steps. These include cross-referencing patient records, validating against established medical standards, and consulting other experts’ opinions. Through this evolution, clinicians are not only preserving their essential role but enhancing the diagnostic process by integrating AI’s capabilities while maintaining clinical rigor and oversight.

In the advanced stroke AI adoption settings we observed, the diagnostic journey starts with AI producing a recommendation (diagnostic label), based on processing MRI/CT imaging, which is seen simultaneously by the entire stroke team. This is different, both because of the speed with which this happens, and because the recommendations are broadly distributed. The AI ‘diagnosis’, at least in the narrow sense of an algorithmic output, tends to be available in the initial phases of the diagnostic process, rather than just at the end (Fig. 1).

Fig. 1. Changes to the diagnostic process.


A simplified, information-focused illustration of the complex, dynamic, longitudinal process clinicians embark on in formulating a diagnosis. It depicts how the diagnostic label, which currently represents the endpoint of the diagnostic process, instead becomes the starting point of the new, AI-mediated process.

Our evidence shows that the automated AI diagnosis is produced, distributed, and read ahead of the clinician’s diagnosis. In the case of stroke, AI might highlight the presence of a vessel occlusion, suggest the percentage of the brain that might be affected by the stroke, or identify bleeding. This initial (as yet unverified) claim is delivered by the software through a set of colored maps and 3D representations (including AI CT perfusion and AI CT angiography). From that point onwards, the clinical team’s task becomes to verify the validity of the initial AI ‘judgment’ against their own clinical findings, as well as against conventional imaging tools (e.g., CT brain, CT angiography).

The early availability of the AI diagnosis in turn triggers a specific treatment pathway (e.g., the thrombectomy pathway in the case of a large vessel occlusion in a main cerebral artery, leading to interventional neuroradiology referral). The neuroradiologist/interventionist is usually alerted before AI’s prediction has been fully verified against further evidence, and their role becomes to establish whether to recommend accepting or rejecting it. This scenario suggests that the dial spanning the spectrum between human and machine agency may be subtly but firmly shifting one further notch towards the machine.

Looking ahead

The introduction of AI is likely to fundamentally impact stroke care processes, with consequences for clinicians, patients, and the healthcare system that warrant urgent further investigation5. The implications of AI-induced changes for the diagnostic process and its outcomes should be analyzed along several dimensions.

First, a crucial question is whether and how the introduction of stroke AI and subsequent workflow changes increase diagnostic accuracy. For instance, will AI-augmented clinicians be better equipped to accurately detect large vessel occlusions compared to clinicians operating independently? Trial evidence shows that AI systems perform well compared to clinicians, but these initial findings have not yet been fully verified in real-world conditions6,7. Will this heightened accuracy, if validated, result in more patients receiving treatment than would have otherwise? Similar concerns relate to diagnostic speed. Will AI enable clinicians to reach a diagnosis faster than they could without it? While initial studies suggest this might be the case8, our evidence shows that potential speed gains must be verified against the need for additional verification practices that could paradoxically slow the diagnostic process.

Second, research is urgently needed to understand how to make AI safe and ethical in the context of real-world clinical practices9. This includes understanding how clinicians may learn to deal in practice with AI limitations. While AI could reduce subjectivity and bias in problem-solving and decision-making relative to clinicians, it might also add new sources of bias10. How will clinicians and healthcare organizations be able to manage potential AI errors in the presence of algorithmic opacity?11,12 How will they address issues of privacy, such as the secure transfer of imaging data between imaging centers and the AI platform? And how will patient consent practices have to change to reflect the complexity of explaining AI’s role in decision-making? Our findings suggest that clinicians are developing new, dedicated practices and methods to address AI-generated risks and uncertainties.

Third, appreciating AI’s impact on expertise is fundamental. Will inexperienced or overworked clinicians be able to critically assess and, when needed, reject AI’s opinion, or will they succumb to ‘automation bias’?13 How will AI alter the tasks and roles of clinicians involved in stroke diagnosis and referral? For example, might stroke physicians develop imaging-based diagnostic expertise akin to that of radiologists and interventionists? Could this precipitate jurisdictional conflicts between stroke physicians and radiologists or interventionists? Another dimension of evolving expertise is the potential emergence of new roles focused on monitoring AI’s clinical application over time. Will these new experts become embroiled in negotiations surrounding competing claims to control and authority over AI outputs, including their reliability and interpretation?

Fourth, the implications of AI-supported diagnosis and referral for patients require careful consideration. Will swift, easily shareable access to patient information/images via the centralized AI app take precedence over real-life patient presentation in informing diagnosis and referral? This already occurs in some cases, such as when patients are scanned at stroke spoke hospitals and then transferred to hubs, or after hours, when diagnosis happens via clinicians remotely accessing a screen. However, AI could normalise this scenario, making it the rule in the diagnostic process rather than an exception (Fig. 2a, b).

Fig. 2. Changes to clinical workflow.


a Simplified clinical workflow (stroke hub) with AI. b Simplified clinical workflow (stroke hub) without AI.

Generalizing these observations

The extent to which similar changes may occur outside hyperacute stroke deserves investigation. Initial evidence from automated urgent lung cancer triage suggests AI may similarly act as a first reader for screening images requiring immediate follow-up, while clinicians analyze the remaining cases for potential false negatives. This pattern could extend to other time-sensitive domains with “can’t-miss diagnoses”, such as surgery (sepsis), pathology (acute leukemia), cardiology (acute myocardial infarction), and emergency care (pneumothorax). AI as a first reader could also add value in non-acute settings by managing high volumes of routine screenings and backlogs, maintaining consistent accuracy during repetitive tasks and long shifts, and optimizing specialist resources and geographic access. These applications warrant further study. In terms of generalisation, finally, it is worth considering the global differences in AI implementation due to variations in individual countries’ underlying healthcare system structures. An interesting comparison here is between the US and the UK. It is possible, for example, that while the highly standardized UK system may better coordinate the fundamental shift to AI-first diagnostics through concerted verification practices, it could benefit from the US’s pragmatic approach to rapid innovation and process improvement. Conversely, the US could learn from the UK’s systematic approach to standardization, risk management, and equitable implementation.

Conclusions

New technologies are hard to grasp, and their effects even harder to envisage—examples include the internet, the smartphone, computer-aided decision systems and, in a medical/healthcare context, the X-ray and the MRI scanner14,15. AI is likely to represent a transformational change of similar magnitude. Despite the strong uncertainty around AI and its effects, one thing is becoming increasingly clear: realizing its benefits fully will require fundamental changes in how we practice.

Acknowledgements

Dr. D’Adderio gratefully acknowledges the Chief Scientist Office grant no. HIPS/22/15 “The Impact Of Artificial Intelligence On Hyperacute Stroke Diagnostic And Treatment Pathways” (PI: D’Adderio) and the Wellcome Leap SAVE Grant no. 133448282, “Global Surgery Health Technology Evaluation And Validation Consortium” (PI: Harrison). Dr D’Adderio also acknowledges her Chancellor’s Fellowship funding (PI: D’Adderio).

Author contributions

L.D.A. conceived of the idea for the manuscript. L.D.A. wrote the paper. D.W.B. provided feedback and revisions. Both authors have read and approved the manuscript.

Competing interests

Dr D’Adderio declares no competing interests. Dr Bates reports grants and personal fees from EarlySense, personal fees from CDI Negev, equity from ValeraHealth, equity from Clew, equity from MDClone, personal fees and equity from AESOP, personal fees and equity from FeelBetter, personal fees and equity from Guided Clinical Solutions and grants from IBM Watson Health, outside the submitted work.

Footnotes

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

1. Topol, E. J. High-performance medicine: the convergence of human and artificial intelligence. Nat. Med. 25, 44–56 (2019).
2. Shafaat, O. et al. Leveraging artificial intelligence in ischemic stroke imaging. J. Neuroradiol. 49, 343–351 (2022).
3. Adler-Milstein, J., Chen, J. H. & Dhaliwal, G. Next-generation artificial intelligence for diagnosis: from predicting diagnostic labels to “wayfinding”. JAMA 326, 2467–2468 (2021).
4. https://med.stanford.edu/content/dam/sm/healthcare-ai/documents/Unlocking-New-Opportunities-for-AI-enabled-Diagnosis-2-.pdf
5. Recht, M. & Bryan, R. N. Artificial intelligence: threat or boon to radiologists? J. Am. Coll. Radiol. 14, 1476–1480 (2017).
6. Soun, J. E. et al. Impact of an automated large vessel occlusion detection tool on clinical workflow and patient outcomes. Front. Neurol. 14, 1179250 (2023).
7. Chandrabhatla, A. S. et al. Artificial intelligence and machine learning in the diagnosis and management of stroke: a narrative review of United States Food and Drug Administration-approved technologies. J. Clin. Med. 12, 3755 (2023).
8. Martinez-Gutierrez, J. C. et al. Automated large vessel occlusion detection software and thrombectomy treatment times: a cluster randomized clinical trial. JAMA Neurol. 80, 1182–1190 (2023).
9. Bates, D. W. et al. The potential of artificial intelligence to improve patient safety. NPJ Digit. Med. 4, 54 (2021).
10. Weidener, L. & Fischer, M. Role of ethics in developing AI-based applications in medicine: insights from expert interviews and discussion of implications. JMIR AI 3, e51204 (2024).
11. Challen, R. et al. Artificial intelligence, bias and clinical safety. BMJ Qual. Saf. 28, 231–237 (2019).
12. Reddy, S., Allan, S., Coghlan, S. & Cooper, P. A governance model for the application of AI in health care. J. Am. Med. Inform. Assoc. 27, 491–497 (2020).
13. Khera, R., Simon, M. A. & Ross, J. S. Automation bias and assistive AI: risk of harm from AI-driven clinical decision support. JAMA 330, 2255–2257 (2023).
14. Lea, A. S. Digitizing Diagnosis: Medicine, Minds, and Machines in Twentieth-century America (Johns Hopkins University Press, 2023).
15. Bates, D. W. An Artificial History of Natural Intelligence: Thinking with Machines from Descartes to the Digital Age (University of Chicago Press, 2024).
