Abstract
Yosra Mekki suggests that doctors should be able to develop their own machine-learning models. She proposes an approach that puts the “spotlight” on physicians: creating user-friendly frameworks that allow doctors to build customized models without extensive prior knowledge of machine learning.
Main text
“Today’s ‘AI’ revolution is more of a UX revolution than an AI revolution.”
I build machine-learning (ML) solutions and teach artificial intelligence (AI) to medical students in parallel with my medical training. This dual role has given me a front-row seat to the rapid integration of ML into healthcare and has convinced me of the pressing need to improve how medical AI is developed. Toggling between the computational and clinical “teams” at our lab, I experience both facets of creating medical AI: I can build models from the ground up and see them through clinical trials, sometimes conducting those trials myself during my shifts at the hospital. This exposure has let me witness firsthand the collaboration these systems require and, most importantly, the pitfalls of creating models in interdisciplinary environments. I argue that the future of AI in healthcare lies in taking a versatile, foundational approach.
My role has revealed the communication issues that can arise even in a team of “all-star” doctors and ML experts. It involves translating between the technical and medical teams: if a model underperformed because of a development issue, I would step in to adjust the pipeline based on my understanding of the medical requirements; conversely, I would explain to physicians, in terms they could understand, the expectations and limitations of a given model for their use case.
The current landscape of AI in healthcare is largely shaped by collaborations that bring together ML experts, healthcare workers, and professionals with expertise in both fields. Given how rare dual expertise in medicine and ML is, medical AI labs worldwide need a new collaborative model. Expecting every ML expert to have medical knowledge, or every physician to be versed in ML, is unrealistic. I advocate instead for empowering doctors to build their own AI models independently. Developers should redirect their efforts toward simple, user-friendly frameworks that enable a physician’s autonomy and creativity, particularly benefiting senior physicians who had no exposure to AI and technology in their earlier training. As a teacher, I see a parallel between the evolution of AI in healthcare and the evolution of photography. Just as early photographers needed technical knowledge of cameras, current AI developers need a deep understanding of model building. The future, however, lies in doctors using AI toolkits with a focus on application and outcome, much as modern photographers focus on composition rather than camera mechanics.
I quote Cassie Kozyrkov,1 former Chief Decision Scientist at Google: “Today’s ‘AI’ revolution is more of a UX revolution than an AI revolution.” This quote succinctly captures the essence of my argument: the true power of AI in healthcare lies not just in its computational capabilities but in how intuitively and effectively healthcare professionals can use it in various settings. UX (user experience) designers have worked hard over the last decade to create seamless online experiences in which users achieve desired outcomes effortlessly, without needing to understand the complex processes underneath. We use websites, and can now even build them, without necessarily knowing JavaScript.
I have noticed that medical students encounter challenges during our practical coding and algorithm sessions. The difficulty lies not in understanding functions but in navigating the less beginner-friendly interfaces of black screens and terminals. This observation underscores a UX-related learning curve in using and developing models, regardless of data or task complexity, and highlights a clear need for more intuitive, accessible toolkits in AI education. I assign medical and computer science students to teams with clinical and computational mentors. In the successful projects, each student focused on their strengths: future computer scientists developed toolkits for future doctors, who in turn used them to build their ML models. Notably, these setups revealed unpredictable student choices between computational and medical tasks, which reassured me that there is also room for dual-background development.
I imagine platforms that simplify the integration of AI in healthcare, allowing physicians to upload images, select or describe appropriate models, and tweak them in line with their clinical workflow. Such platforms would smooth AI adoption in healthcare by democratizing AI toolkits among providers, aligning model development with everyday clinical practice; this could be key to fully integrating AI in healthcare. In my experience, it boils down to grant award systems. Why should a physician have to secure a large grant to build a single-use ML model? We should shift our funding focus to comprehensive, facilitating toolkits, developed alongside physicians, that allow anyone to create “single-purpose” models, making AI both accessible and adaptable for medical practitioners. Another goal would be to streamline the AI model development process, especially in hospitals worldwide that lack extensive computational resources or access to medical AI labs. After all, the best models are really just the product of the best datasets, and we cannot have the best datasets without everyone “in the game”. The launch of MAIDA (a framework for global medical imaging data sharing)2 demonstrates the potential of such joint efforts to bridge the gap between advanced technology and practical healthcare applications.
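To make the vision concrete: a physician-facing toolkit could hide standard ML machinery behind a handful of clinically named steps. The sketch below is purely illustrative — the `ClinicalModel` class and its method names are invented for this example, not an existing product — with scikit-learn doing the actual work underneath and a public dataset standing in for clinical data.

```python
# Hypothetical sketch of a physician-facing model-building interface.
# "ClinicalModel" and its methods are invented for illustration;
# scikit-learn handles the machine learning underneath.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


class ClinicalModel:
    """Wraps a standard classifier behind a minimal, jargon-free API."""

    def __init__(self):
        self._model = RandomForestClassifier(n_estimators=100, random_state=0)

    def learn_from(self, cases, outcomes):
        """Train on past cases (feature rows) and their known outcomes."""
        self._model.fit(cases, outcomes)
        return self

    def predict_outcome(self, new_cases):
        """Predict outcomes for new, unseen cases."""
        return self._model.predict(new_cases)


# Toy walkthrough with a public dataset standing in for clinical data.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = ClinicalModel().learn_from(X_train, y_train)
accuracy = accuracy_score(y_test, model.predict_outcome(X_test))
print(f"accuracy: {accuracy:.2f}")
```

The point of the wrapper is not the two-method API itself but the division of labor it implies: developers maintain the internals, and the clinician only ever touches the clinically meaningful surface.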
A bottom-up approach
Here, I suggest a “spotlight” approach in which ML practitioners and AI-in-health researchers direct computational resources and grant funds toward building empowering frameworks rather than single models for specific use cases. The spotlight is on the end user: the healthcare professional. Crafting toolkits that fit easily into a healthcare environment is our goal.
The three main components of this approach are illustrated below with examples.
•
Outcome-focused development: this involves creating AI toolkits that address specific healthcare goals, such as scheduling operating room use based on data from previous surgeries or predicting health outcomes from patient input. Platforms like Biodock (https://www.biodock.ai/) automate microscopy image analysis, simplifying biological object identification for scientists and pathologists without requiring extensive coding skills and demonstrating practical applications of AI in research environments.
•
Customizable and user-guided AI: this enables healthcare professionals to personalize models to their specific requirements, for example through conversations with an advanced AI system that facilitates development without deep technical expertise. Moor et al.’s proposed generalist medical AI framework3 is an excellent demonstration; BioAutoMATED, an end-to-end automated ML tool for explanation and design of biological sequences,4 and OpenAI’s GPTs are both valid starting examples.
•
Modular and accessible design: the final component enables healthcare professionals to assemble the parts of an ML model like building blocks. Platforms like Stanford’s Spezi5 exemplify this approach; HealthGPT (https://healthgpt.us/) and LLMonFHIR (https://github.com/StanfordBDHG/LLMonFHIR) were built on it to better understand health data.
Some of our medical students, relatively new to ML, created a deep learning framework to analyze inner speech using openly available datasets, while others learned to segment eye images for diabetic retinopathy. To assist them, my hands-on session applied the third component of the spotlight approach.
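The building-block component above can be sketched in a few lines. Spezi itself is a Swift-based ecosystem, so the Python sketch below only illustrates the principle, using scikit-learn’s Pipeline: preprocessing, dimensionality reduction, and classification are interchangeable modules that snap together, and each block can be swapped without touching the others.

```python
# Sketch of the "building blocks" idea: each pipeline stage is an
# interchangeable module. Illustrative only; not the Spezi API.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Each (name, block) pair can be replaced independently of the others.
blocks = [
    ("scale", StandardScaler()),        # normalize feature ranges
    ("reduce", PCA(n_components=30)),   # compress to 30 components
    ("classify", LogisticRegression(max_iter=1000)),
]
pipeline = Pipeline(blocks)

# Evaluate the assembled pipeline on a public image dataset.
X, y = load_digits(return_X_y=True)
score = cross_val_score(pipeline, X, y, cv=5).mean()
print(f"mean cross-validated accuracy: {score:.2f}")
```

Swapping the classifier block for, say, a tree-based model requires changing a single line, which is exactly the kind of manipulation a clinician can learn without studying the internals of any one block.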
Existing approaches aim for flexible models that require minimal task-specific data. The van der Schaar lab’s reality-centric AI,6 focused on real-world healthcare data, complements the principles of generalist medical AI,3 emphasizing adaptable and simple AI toolkits for healthcare. Spotlight ML paves the way to a proper user interface, making these complex backend principles accessible to healthcare professionals. Automated machine learning (AutoML) simplifies ML development and deployment by handling complex tasks like data processing and algorithm selection.4 This aligns with my advocacy for making ML more accessible. Large technology companies now provide ML toolkits for users with limited technical experience in areas such as image analysis and language translation; however, effectively using these toolkits still requires a substantial understanding of ML.
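The algorithm-selection step that AutoML automates can be shown in miniature with a cross-validated search: the search tries several algorithms and settings, and the user only sees the winner. This is a deliberately small sketch in the AutoML spirit — real AutoML tools such as BioAutoMATED automate far more of the pipeline — using scikit-learn’s grid search to swap whole estimators.

```python
# Sketch of automated model selection in the AutoML spirit: the grid
# search compares different algorithms and hyperparameters via
# cross-validation and keeps the best. Illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

pipe = Pipeline([("scale", StandardScaler()), ("model", SVC())])

# The "model" step itself is a searchable parameter, so entirely
# different algorithms compete on equal footing.
search_space = [
    {"model": [SVC()], "model__C": [0.1, 1.0, 10.0]},
    {"model": [DecisionTreeClassifier(random_state=0)],
     "model__max_depth": [3, 5, None]},
]

X, y = load_breast_cancer(return_X_y=True)
search = GridSearchCV(pipe, search_space, cv=5).fit(X, y)
print("best model:", type(search.best_params_["model"]).__name__)
print(f"best cross-validated score: {search.best_score_:.2f}")
```

From the user’s side, the interaction reduces to “here is my data, pick a good model,” which is precisely the abstraction level the spotlight approach argues clinicians should work at.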
Can we adapt some principles from math to medical AI?
I believe Conrad Wolfram’s principles for AI education in math can be applied to ML in healthcare, with a focus on practical applications and outcomes.7 To measure success as a clinician-scientist in AI, it is important to value the practical application of ML knowledge in clinical practice rather than computational skills alone. My experience teaching medical AI has shown that students who apply learned concepts in their clinical settings often develop innovative ideas. By building these ideas on user-friendly frameworks, they have ample opportunity for learning and experimentation, leading to a deeper understanding of practical applications.
In line with Wolfram’s principles of computational thinking, clinicians can ensure that AI models in healthcare meet ethical and legal standards by focusing on patient-centric care values such as beneficence and autonomy. As a doctor-in-training, I employ these to research8 and develop applications9 that align with patient care values. This “spotlight approach” to ML, emphasizing models that clinicians can relate to and understand, aids in making informed decisions that truly resonate with patient needs and values.
It is important to integrate real-world clinical scenarios into developing and updating algorithms for more effective solutions. This requires rethinking the approval process for ML toolkits in clinical settings, considering the increasing use of data augmentation in AI models. Making ML development “easy” allows diverse healthcare professionals to integrate AI solutions tailored to specific patient populations. As AI-literate healthcare professionals, our ethical considerations and professional opinions in tool design help shape future policies, reduce biases, and support equitable treatment recommendations.
To reduce the likelihood of AI clinical trial failures or significant modifications due to discrepancies between training data and real-world patient care scenarios, it is crucial to teach future physicians how to develop, adjust, and evaluate these models. In a memorable teaching session, one of my students made a key realization. As she fine-tuned an ML pipeline, she saw the potential to apply similar methods to improve her clinical diagnostics. In the next class, she brought insights on approved models pertinent to her specialty and on innovation-based residencies where she could develop her own accurate models. To me, this highlighted the practical application of AI education in medicine, bridging the gap between theoretical learning and real-world medical practice. Taking the spotlight approach further, hospitals could employ bespoke frameworks with internal data cycling in healthcare-focused ML platforms to uphold patient privacy standards. This approach echoes discussions that emphasize the importance of medical expertise in guiding technological innovations.
The chartrooms of medical AI
The road to medical AI utilizing a spotlight approach, though complex, is made accessible through the involvement of professionals with expertise in both medicine and computation. These individuals bring a combination of theoretical and practical perspectives and can translate between medical and technical professionals. An example of this is the team behind Spezi,5 a modular ecosystem of “world-building” toolkits at Stanford Byers Center for Biodesign, where the lead architect and director are both physician software developers (https://biodesign.stanford.edu/).
I am passionate about cultivating professionals with similar expertise. Some of the students I taught transitioned from computer science to medicine. They built a startup around applications aimed at improving medication adherence and interned at digital health companies, which not only highlights the practical application of AI in medicine but also demonstrates the significant impact that AI education can have on students’ career paths and contributions to healthcare innovation. Some used the course’s opportunities to join conferences and innovation-focused residency programs that would harness their new skillset. Their experience reinforces the necessity of bridging the education-rooted gap between AI and medicine, demonstrating the tangible benefits of this integration for healthcare innovation.
DuPont’s chartrooms, originally designed for proprietary data visualization, paved the way for modern data presentation tools like PowerPoint, which made complex data accessible to all. Similarly, the medical AI industry must transition from exclusive “chartrooms” to user-friendly platforms, democratizing access to advanced technology. As I near the completion of my medical training, I recognize the urgency of integrating AI into patient care in the coming decade. Dual-background professionals will facilitate this shift and lead the way toward a future in which medical innovation is accessible to physicians. Starting as students, doctors will train in intuitive software for model development, moving away from opaque “black box” models toward transparent, understandable AI applications in healthcare. As Rudin10 suggests, we really must stop building black box models altogether.
Declaration of interests
The author declares no competing interests.
About the author
Yosra Mekki develops and researches ML and spatial computing modules for health. Like many ML developers, her computational skills are a product of the MOOC revolution. She studies medicine for an “inside-out” look at digital health. She also enjoys teaching computational fundamentals to medical students in the AI in Medicine course. She leads Qatar's national AI/XR Healthcare Interest Group, dedicated to advancing medical AI and spatial computation, and aims to formalize a language and track between both fields. Outside work and school, she enjoys Jiu Jitsu and Muay Thai.
References
- 1. Kozyrkov C. How AI Is Evolving. Medium. 2023. https://kozyrkov.medium.com/how-ai-is-evolving-9638e7eae9f1
- 2. Saenz A., Chen E., Marklund H., Rajpurkar P. The MAIDA initiative: establishing a framework for global medical-imaging data sharing. Lancet Digit. Health. 2024;6:e6–e8. doi: 10.1016/S2589-7500(23)00222-4.
- 3. Moor M., Banerjee O., Abad Z.S.H., Krumholz H.M., Leskovec J., Topol E.J., Rajpurkar P. Foundation models for generalist medical artificial intelligence. Nature. 2023;616:259–265. doi: 10.1038/s41586-023-05881-4.
- 4. Valeri J.A., Soenksen L.R., Collins K.M., Ramesh P., Cai G., Powers R., Angenent-Mari N.M., Camacho D.M., Wong F., Lu T.K., Collins J.J. BioAutoMATED: An end-to-end automated machine learning tool for explanation and design of biological sequences. Cell Syst. 2023;14:525–542.e9. doi: 10.1016/j.cels.2023.05.007.
- 5. Schmiedmayer P., Ravi V., Aalami O. The Path to a Modular and Standards-based Digital Health Ecosystem. Preprint at arXiv. 2023. doi: 10.48550/arXiv.2311.03363.
- 6. van der Schaar M., Rashbass A. The Case for Reality-Centric AI. 2023. https://www.vanderschaar-lab.com/the-case-for-reality-centric-ai/
- 7. Wolfram C. The Math(s) Fix: An Education Blueprint for the AI Age. Wolfram Media, Inc.; 2020.
- 8. Nawaz F.A., Mekki Y.M., Tharwani Z.H., Khan H.A., Shaeen S.K., Boillat T., Zary N., Zughaier S.M. Toward a meta-vaccine future: Promoting vaccine confidence in the metaverse. Digit. Health. 2023;9. doi: 10.1177/20552076231171477.
- 9. Mekki Y.M., Mekki M.M., Hammami M.A., Zughaier S.M. Virtual Reality Module Depicting Catheter-Associated Urinary Tract Infection as Educational Tool to Reduce Antibiotic Resistant Hospital-Acquired Bacterial Infections. 2020 IEEE International Conference on Informatics, IoT, and Enabling Technologies (ICIoT). 2020; pp. 544–548.
- 10. Rudin C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 2019;1:206–215. doi: 10.1038/s42256-019-0048-x.
