Digital Health. 2023 Jul 2;9:20552076231186520. doi: 10.1177/20552076231186520

Artificial intelligence in healthcare: Complementing, not replacing, doctors and healthcare providers

Emre Sezgin 1,2
PMCID: PMC10328041  PMID: 37426593

Abstract

The use of artificial intelligence (AI) in clinical practice has increased and is demonstrably contributing to improved diagnostic accuracy, optimized treatment planning, and better patient outcomes. The rapid evolution of AI, especially generative AI and large language models (LLMs), has reignited discussions about its potential impact on the healthcare industry, particularly regarding the role of healthcare providers. Recurring questions include: "Can AI replace doctors?" and "Will doctors who use AI replace those who do not?" To shed light on this debate, this article emphasizes the augmentative role of AI in healthcare, underlining that AI is intended to complement, rather than replace, doctors and healthcare providers. The fundamental solution emerges from human–AI collaboration, which combines the cognitive strengths of healthcare providers with the analytical capabilities of AI. A human-in-the-loop (HITL) approach ensures that AI systems are guided, communicated with, and supervised by human expertise, thereby maintaining safety and quality in healthcare services. Finally, adoption can be forged further by organizational processes informed by the HITL approach, bringing multidisciplinary teams into the loop. By complementing and enhancing the skills of healthcare providers, AI can create a paradigm shift in healthcare, ultimately leading to improved service quality, better patient outcomes, and a more efficient healthcare system.

Keywords: Artificial intelligence, large language models, generative AI, doctor, healthcare provider, medicine, implementation

Introduction

The advancements in artificial intelligence (AI) have provided a wealth of opportunities for clinical practice and healthcare. Large language models (LLMs), such as BERT, GPT, and LaMDA, have experienced exponential growth, with some now containing over a trillion parameters. 1 This growth in AI capabilities allows for seamless integration across different types of data and has led to multimodal applications in various domains, including medicine. 2 Evidence shows that AI has the potential to improve healthcare delivery by enhancing diagnostic accuracy, optimizing treatment planning, and improving patient outcomes.3–5 With the recent developments in AI, specifically LLMs and generative AI (e.g. DALL-E, GPT-4 via ChatGPT), we are reassessing the benefits and opportunities presented by AI as it moves one step closer to artificial general intelligence (AGI; AI with human-level cognitive abilities).6,7 Current evidence demonstrates LLM capabilities in medical knowledge and support. The University of Florida's GatorTron, an 8.9 billion parameter LLM, is one of the first medical foundation models developed by an academic health system using medical data. 8 It is designed to improve five clinical natural language processing tasks, including medical question answering and medical relation extraction. LLMs have further demonstrated command of medical knowledge: one AI model achieved a 79.5% accuracy rate on the U.K. Royal College of Radiologists examination, compared to 84.8% for human radiologists. 9 Recently, LLMs (PaLM, GPT) demonstrated their capabilities on the United States Medical Licensing Examination and several other medical question-answering tasks, showcasing the potential of AI in medicine.10,11

The increased capabilities have also raised concerns. These include the AI alignment problem 12 and the need for ethical and unbiased implementation. 13 Recently, there has been a movement urging a "pause" in AI development 14 to address these concerns, investigate societal implications, and build robust frameworks, governance, and control mechanisms. Chief among these concerns is the notion of "AI taking over human jobs" as it achieves highly accurate results and performance in completing human tasks. 15 In line with that, a question has echoed in the healthcare domain: can AI replace doctors ("doctors" in this article refers to healthcare providers broadly, including physicians, nurses, and other practitioners), or will it serve as an invaluable tool that complements and supports them?16,17

AI to replace doctors

Even though the idea is intriguing, AI is fundamentally not meant (i.e., designed and developed) to replace doctors; rather, it can repurpose roles and improve efficiency, as demonstrated by LLM-powered digital scribes and conversation summarization tools. 18 If we step back and look at current applications in clinical practice, AI has already become an integral part of health services without replacing doctors. Examples include AI-aided decision support systems paired with ultrasound or MRI machines to assist diagnosis 19 and improved voice recognition in dictation devices for radiology notes. 20 However, recent developments in AI are highly complex, rapidly evolving, and overwhelmingly positive, as seen in the increased accuracy of LLMs in completing tasks, their high language comprehension, and their human-like conversational responses, leading us to question their value and contribution to practice.
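To illustrate the digital scribe idea, below is a minimal, hypothetical sketch of a conversation summarization call. The model choice, prompt, and transcript are illustrative assumptions, not details from the tools cited above, and the output would remain a draft for clinician review rather than a finished clinical note.

```python
# Hypothetical digital-scribe sketch: summarize a clinician-patient
# conversation into a draft note using a general-purpose LLM API.
# Model name, prompt, and transcript are illustrative only.
from openai import OpenAI  # assumes the openai Python package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

transcript = (
    "Doctor: What brings you in today?\n"
    "Patient: I've had a dry cough and a low-grade fever for three days."
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "Summarize this visit as a draft SOAP note for clinician review."},
        {"role": "user", "content": transcript},
    ],
)

print(response.choices[0].message.content)  # a draft; the clinician edits and signs off
```

Note that the summarization step replaces typing, not judgment: the clinician still reviews, corrects, and signs the note.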

AI to collaborate with doctors

By repurposing (not replacing) roles, AI can contribute to a more efficient and streamlined healthcare system. Does that mean doctors who use AI will replace those who do not? This claim has recently become a popular counterpoint to the "AI replacing doctors" argument, and it is true at its core. A decision made by a doctor aided by AI can be more accurate (and timely) than one made without AI, 21 minimizing risk for patients and improving the decision-making process, quality of service, and efficiency. 22 However, one feasible way to enable the "use" of AI is through "collaboration." Following human-in-the-loop and human–AI collaboration principles, a structure for utilizing the potential of AI can be established.23,24 The human-in-the-loop (HITL) approach emphasizes a collaborative partnership between AI and human expertise to optimize outcomes. In collaborative decision-making, AI offers insights while individuals apply their knowledge to make the final judgment, establishing the oversight and quality control needed to validate AI predictions and reduce potential errors and biases. The collaboration nurtures continuous learning and improvement as both parties learn from each other, which further contributes to trust and acceptance. Ethical practices are essential to ensure transparency, accountability, and explainability in AI decisions. Human feedback improves AI adaptability, enabling AI to handle complex cases beyond its training data.

In healthcare, HITL could be implemented by having trained doctors collaborate with AI: monitoring, validating, and guiding the process; interpreting AI outputs; and providing feedback to improve the capability and accuracy of AI. A recent study showed that AI could enhance the accuracy of diagnosis and clinical decisions when combined with expert human evaluation, emphasizing the collaborative nature of AI and doctors. 25 This collaboration can further contribute to an often overlooked value proposition: addressing disparities. AI has particularly high value as a complementary tool, or knowledge augmentation mechanism, that fills gaps in low-resource settings, such as rural areas or underdeveloped countries, by improving diagnosis, patient communication and education, and reducing language barriers. 26
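To make the HITL workflow concrete, the following is a minimal sketch of a review loop in which the model proposes and the clinician disposes. All class and function names are hypothetical, not drawn from any real clinical system; a production tool would additionally require validated models, audit logging, and regulatory clearance.

```python
# Hypothetical HITL review loop: the AI suggests, the clinician makes the
# final call, and every decision (including overrides) is logged as feedback
# that can later inform model retraining or recalibration.
from dataclasses import dataclass, field

@dataclass
class AISuggestion:
    diagnosis: str
    confidence: float  # model's self-reported confidence, 0.0-1.0

@dataclass
class FeedbackLog:
    records: list = field(default_factory=list)

    def add(self, suggestion: AISuggestion, final_diagnosis: str) -> None:
        # Agreement/disagreement is the feedback signal for improvement.
        accepted = final_diagnosis == suggestion.diagnosis
        self.records.append((suggestion, final_diagnosis, accepted))

def hitl_review(suggestion: AISuggestion, clinician_diagnosis: str,
                log: FeedbackLog) -> str:
    """The clinician, not the model, issues the final judgment."""
    log.add(suggestion, clinician_diagnosis)
    return clinician_diagnosis

# Example: the clinician overrides a confident but incorrect suggestion.
log = FeedbackLog()
ai = AISuggestion(diagnosis="community-acquired pneumonia", confidence=0.87)
final = hitl_review(ai, clinician_diagnosis="viral bronchitis", log=log)
print(final, len(log.records))  # the override is recorded for future improvement
```

The design point is that oversight and feedback share one step: the clinician's final judgment simultaneously controls quality and generates the improvement signal described above.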

However, such AI collaboration and decision support mechanisms become available through organizational adoption rather than through personal choices alone in healthcare settings.

AI to be adopted by healthcare organizations

The adoption of AI is driven by organizational decisions, necessity, and readiness. 27 Therefore, the ultimate question is: will healthcare organizations successfully adopt AI?

Healthcare organizations (e.g., hospitals and clinics) are responsible for providing AI tools that have undergone rigorous evaluation and validation to ensure safety and effectiveness in clinical practice (e.g., FDA clearance and FTC guidelines).28–30 In addition, legal, infrastructure, privacy, and security teams need to revisit organizational policies and protocols to ensure compliance with state and federal laws and regulations, with a specific focus on personal health information exchange protocols, accountability, liability, service reimbursement, and clinical workflows.31,32 In parallel, there is a need to develop curricula and educational methods to train doctors on the fundamentals of AI, its effective use in practice, and AI-supported healthcare delivery. 33

The organizational process can be informed by the HITL approach, bringing the multidisciplinary team into the loop. 34 Enabling human–AI collaboration and including human feedback and control can forge the partnership, diminishing the false perception of "AI as replacement" (Figure 1). Specific considerations for collaborative AI adoption in a healthcare organization are summarized in Textbox 1.

Figure 1. AI adoption to enable doctor–AI collaboration and considerations. AI, artificial intelligence.

Textbox 1. Considerations for AI adoption in a healthcare organization.

1. Establish multidisciplinary teams (clinical, research, information systems, operations, management and administration, and patient and community advocates) to explore and evaluate cost-effective and impactful collaborative AI solutions and to establish HITL protocols. This requires collaboration and knowledge sharing among team members to anticipate utility and expected outcomes.35,36
2. Prioritize for AI support the clinical processes, operational workflows, and practices that need improvement and can benefit most from AI collaboration, such as tasks and processes contributing to burnout, tasks leading to inefficiency or low performance, and tasks addressing patient needs, service quality, and satisfaction.37–39
3. Involve multistakeholder groups, including doctors, nurses, administrators, and patients, in identifying essential inclusive training, education needs, and cultural transformation, and in testing AI tools for effective collaboration before large-scale deployment.34,40 This approach ensures that the perspectives of all involved parties are communicated and considered in the development and implementation of AI solutions.
4. Establish rigorous evaluation methods and assessment frameworks for AI, focusing on validation, verification, utility, and adoption.41,42 This requires monitoring and testing AI across a range of capabilities in both controlled environments and real-world settings before expanding to broader applications. 43 Doing so will enable organizations to identify potential issues, lower risks, refine AI systems, train human collaborators, and measure impact on patient care before extensive implementation.
5. Revise organizational policies and protocols to facilitate AI adoption and address ethical and legal concerns, ensuring compliance, privacy, and security, and strategizing to build organizational trust, access, and governance.36,44,45 This includes developing guidelines for AI collaboration and for practicing transparency, accountability, and explainability to minimize risks and ensure patient safety.43,44,46
6. Commit to ethical, inclusive, equitable, responsible, and fair AI practices.34,47 This requires focusing on reducing the digital divide among healthcare organizations. 48 Equitable access to collaborative AI tools, training, and resources must be ensured among hospitals, practitioners, and communities by developing partnerships and initiatives that promote inclusive access to technology and skills development.

Conclusion

The advancements in AI are reassuring, showing promise in creating a paradigm shift in healthcare by complementing and enhancing the skills of doctors and healthcare providers rather than replacing them. To successfully harness the power of AI, healthcare organizations must be proactive, especially now, when generative AI and LLMs are highly accessible yet still in need of control and guidance. As AI becomes an essential component of modern healthcare, it is vital for organizations to invest in the necessary infrastructure, training, resources, and partnerships to support its successful adoption and ensure equitable access for all.

Acknowledgements

Figure 1 was created with BioRender.com.

Footnotes

The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The author received no financial support for the research, authorship, and/or publication of this article.

References

