Journal of Diabetes Science and Technology. 2024 Dec 26. Online ahead of print. doi: 10.1177/19322968241304434

Methodology for Safe and Secure AI in Diabetes Management

Remco Jan Geukes Foppen 1, Vincenzo Gioia 2, Shreya Gupta 3, Curtis L Johnson 3, John Giantsidis 4, Maria Papademetris 4
PMCID: PMC11672366  PMID: 39726262

Abstract

The use of artificial intelligence (AI) in diabetes management is emerging as a promising solution to improve the monitoring and personalization of therapies. However, the integration of such technologies in the clinical setting poses significant challenges related to safety, security, and compliance in handling sensitive patient data, as well as the potential direct consequences for patient health. This article provides guidance for developers and researchers on identifying and addressing these safety, security, and compliance challenges in AI systems for diabetes management. We emphasize the role of explainable AI (xAI) systems as the foundational strategy for ensuring security and compliance, fostering user trust, and supporting informed clinical decision-making, which is paramount in diabetes care solutions. The article examines both the technical and regulatory dimensions essential for developing explainable applications in this field. Technically, we demonstrate how understanding the lifecycle phases of AI systems aids in constructing xAI frameworks while addressing security concerns and implementing risk mitigation strategies at each stage. From a regulatory perspective, we analyze key Governance, Risk, and Compliance (GRC) standards established by entities such as the Food and Drug Administration (FDA), providing specific guidelines to ensure safety, efficacy, and ethical integrity in AI-enabled diabetes care applications. By addressing these interconnected aspects, this article aims to deliver actionable insights and methodologies for developing trustworthy AI-enabled diabetes care solutions that ensure safety, efficacy, and compliance with ethical standards, thereby enhancing patient engagement and improving clinical outcomes.

Keywords: AI in health care, artificial intelligence, cybersecurity, data, explainable AI, health information

Introduction

Artificial intelligence (AI) refers to machine-based systems capable of learning, reasoning, decision-making, and understanding natural language, which serve as augmentative tools designed to enhance—not replace—human expertise. Artificial intelligence systems can analyze large volumes of clinical data and create predictive models that help personalize treatments and improve diagnoses.

Artificial intelligence is being used extensively throughout the continuum of diabetes care,1,2 from diabetes detection to optimizing treatment3,4 and predicting and managing complications.5-7 Because AI-based screening approaches can be less invasive than conventional testing, health care providers are exploring them for pre-screening, screening, and classification of different types of diabetes.8,9 For instance, Rom et al 10 demonstrated the use of deep-learning models for detecting type 2 diabetes from fundus images without diabetic retinopathy, while Li et al 11 established a non-invasive machine-learning model based on tongue features to predict the risk of prediabetes and diabetes.

As AI systems become integral to diabetes management, ensuring their safety and security becomes imperative. These systems handle sensitive patient data and make critical decisions, underscoring the need for robust security measures. Explainable AI (xAI), the focus of this article, is essential for fostering trust and enhancing informed clinical decision-making. To ensure the accuracy, reliability, and ethical use of AI-driven decision-making, and to build xAI systems, human oversight remains an indispensable component, as exemplified by approaches like human-in-the-loop (HITL) and hybrid-augmented intelligence (HAI) for blood glucose (BG) monitoring.12,13

This article examines the safety, security, and compliance challenges associated with implementing AI systems in diabetes management. The AI software lifecycle serves as a foundational technical framework for developing xAI, ensuring that security and compliance measures are embedded at every stage of the lifecycle. Furthermore, regulatory compliance is essential for meeting the standards established by governing bodies, thereby reinforcing the integrity and trustworthiness of AI systems. By focusing on these key phases—from planning and requirements gathering to adoption and retirement—we highlight potential risks and provide recommendations. By addressing these interconnected aspects of building xAI systems, this article seeks to provide a holistic understanding of AI systems that are effective, accurate, interpretable, and explainable, fostering trust, and enhancing clinical outcomes in diabetes care.

Role of Explainability in Diabetes Prediction

Artificial intelligence systems have significantly enhanced the clinical tools used to analyze and predict diseases such as diabetes, including the emerging field of “diabesitology,” which has recently been defined as the exploration of a multidimensional state of disease. 14 Relationships can also be found between seemingly unrelated pathologies, such as type 2 diabetes and Alzheimer’s disease. 15 Artificial intelligence’s capability to longitudinally analyze demographic data, family history, clinical phenotype, lifestyle, and biomarker data has been further augmented by integrating data from wearable medical devices. These devices provide real-time data, contributing to a more reliable clinical picture, more effective therapies, and a reduction in diagnostic errors. A diagnostic error occurs when a disease is incorrectly attributed to a different pathology; 16 its flip side is when a disease is missed altogether (Figure 1).

Figure 1. Descriptive representation of diagnostic errors.

Artificial intelligence systems face significant limitations, one of which is the “black-box” nature of many AI models, which makes it difficult to understand their internal decision-making process. This limitation can lead to issues such as AI “hallucinations” and the inability to verify the logical reasoning used to generate outputs.17,18 Explainable AI aims to make AI decision-making processes and predictions transparent, interpretable, understandable, and safer for humans. In the context of diabetes prediction, xAI can help doctors understand why a particular patient is classified as high-risk, increasing confidence in predictions and facilitating communication with patients. Explainable AI employs various techniques and approaches, including post-hoc techniques like SHAP (SHapley Additive exPlanations) for more complex models.

xAI Applied to the Diagnosis of Diabetes

In diabetes diagnosis, xAI research is gaining increasing attention. A notable example that we have followed is described by Rita Ganguly and Dharmpal Singh, who analyze the Pima Indians Diabetes Dataset, which contains data collected by the National Institute of Diabetes and Digestive and Kidney Diseases in the United States on a representative sample of members of the Pima people, a group of Native Americans who live primarily in Arizona. 19 This data set is further analyzed using a SHAP-type xAI model. This approach is public and easily accessible, as we show in the graphic generated in house (Figure 2).

Figure 2. Graphical representation of SHAP correlation coefficient matrix on diabetes.

Useful graphical representations of the xAI analysis can consist of:

  • The SHAP correlation coefficient matrix (Figure 2) is a tool used to analyze and visualize the importance of variables in predictive models. This matrix summarizes the SHAP values, which represent the impact of each characteristic on the model’s prediction, and allows one to examine the relationships between variables.

  • The mean SHAP value plot, which illustrates the average impact of each feature on the model. Similar representations include the model’s SHAP value plot and the global variable importance analysis. All highlight the importance of each variable (or characteristic) in the model, considering the entire data set.

  • Another important visual representation is the local explanation, which delves deeper into explaining individual predictions or patients. Local explainability and clustering methods would provide physicians with a clearer view of the different patient profiles, supporting patient stratification and the reduction of diagnostic errors.

These visualizations illustrate how individual attributes contribute to the model and make it possible to see which factors drive predictions for individual patients, facilitating both result interpretation and intervention planning. Graphical xAI representations can help direct attention toward disease severity, comorbidity patterns, drug repurposing, and advances in precision medicine.
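As a concrete illustration of how such global and local SHAP views are typically produced, the following is a minimal Python sketch. It assumes the Pima Indians Diabetes Dataset is available locally as pima_diabetes.csv with its standard columns and uses scikit-learn with the shap library; it illustrates the general technique rather than the exact pipeline used by Ganguly and Singh.

```python
# Sketch of a SHAP analysis on the Pima Indians Diabetes Dataset (assumed local CSV path).
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Standard columns: Pregnancies, Glucose, BloodPressure, SkinThickness, Insulin,
# BMI, DiabetesPedigreeFunction, Age, Outcome (1 = diabetes).
df = pd.read_csv("pima_diabetes.csv")  # hypothetical local copy of the public data set
X, y = df.drop(columns=["Outcome"]), df["Outcome"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = GradientBoostingClassifier(random_state=42).fit(X_train, y_train)

# SHAP values quantify each feature's contribution (in log-odds) to each prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)  # shape: (n_patients, n_features)

# Global view: mean |SHAP| per feature, analogous to the mean SHAP value plot described above.
shap.summary_plot(shap_values, X_test, plot_type="bar")

# Local view: ranked feature contributions for a single patient (a local explanation).
patient_contrib = pd.Series(shap_values[0], index=X.columns)
print(patient_contrib.reindex(patient_contrib.abs().sort_values(ascending=False).index))
```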

The transparency that xAI brings to the logic of clinical applications helps build and sustain credibility in clinical outcomes and fosters human trust in clinical xAI models. Explainable AI can play a critical role in ensuring adherence to regulations when making choices that could impact data security and privacy. Understanding the technical aspects of the AI product lifecycle provides a foundation for implementing effective xAI solutions in clinical applications. The next section will delve into the technical aspects of building xAI, including the unique challenges associated with each phase of the lifecycle.

Understanding the AI Product Lifecycle for Building xAI Applications

The AI product lifecycle is a crucial framework for understanding how xAI solutions are planned, developed, maintained, and retired in diabetes management. The AI product lifecycle specific to diabetes care (Figure 3) is structured into three primary stages: planning and design, development and testing, and deployment, adoption, and maintenance. Each stage includes distinct phases, from problem identification to retirement, highlighting critical security risks. The lifecycle underscores the necessity of robust security protocols to protect AI systems throughout their development and implementation.

Figure 3. Comprehensive workflow illustrating the artificial intelligence product lifecycle for AI-enabled diabetes care and management solutions.

In addition, we will address potential security risks that may arise at different stages and outline strategies to mitigate these risks. This dual focus ensures that we safeguard the integrity and safety of the solutions.

Stage 1—Planning and Design

Problem identification and definition

Clearly defining the problem, setting objectives, and delineating the scope are essential for tailoring AI systems to meet the demands of diabetes care. Building efficient xAI solutions requires identifying pressing challenges, such as ensuring timely BG monitoring, 20 defining optimal insulin dosing strategies,3,21,22 or preventing hypoglycemia events.7,23 Understanding these nuances is crucial to ensuring security and effectiveness in AI-enabled diabetes management.

Stakeholder analysis

Norbert Laurisz et al found that involving consumers in the creation process achieves better results in terms of product quality and alignment with customer expectations and needs. 24 Similarly, according to Martha Makwero et al, patient participation in decision-making improves care experiences and responsiveness in diabetes mellitus (DM), and it supports treatment goal setting, medication adherence, safety, glycemic control, and lifestyle modification by strengthening patient self-efficacy. 25

Requirements gathering

Establishing specific objectives and defining the scope ensures that the system effectively addresses challenges in diabetes care while keeping security and transparency in focus. Clearly documenting these challenges facilitates the development of Predetermined Change Control Plans (PCCPs) that comply with FDA regulatory standards. 26 For example, AI applications in diabetes often require data integration from wearable devices like smart watches, continuous glucose monitors (CGMs), and smart insulin pens. 27

Comprehensive, well-documented protocols must be developed that AI developers, compliance officers, and security teams can consult to ensure transparency for risk mitigation.
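To illustrate the kind of requirements artifact such protocols can include, the sketch below defines a hypothetical typed schema for readings ingested from CGMs and smart insulin pens, with plausibility checks documented alongside the fields. The field names, units, and ranges are illustrative assumptions, not a published interoperability standard.

```python
# Hypothetical device-data contract captured as part of requirements (illustrative only).
from dataclasses import dataclass
from datetime import datetime
from enum import Enum
from typing import Optional


class DeviceType(Enum):
    CGM = "cgm"
    SMART_PEN = "smart_pen"
    SMART_WATCH = "smart_watch"


@dataclass(frozen=True)
class DeviceReading:
    patient_id: str                        # pseudonymized identifier, never a direct identifier
    device_type: DeviceType
    timestamp: datetime                    # assumed requirement: all timestamps stored in UTC
    glucose_mg_dl: Optional[float] = None  # CGM readings
    insulin_units: Optional[float] = None  # smart insulin pen bolus doses

    def validate(self) -> None:
        """Reject physiologically implausible values before they reach any model."""
        if self.glucose_mg_dl is not None and not 20 <= self.glucose_mg_dl <= 600:
            raise ValueError(f"Glucose {self.glucose_mg_dl} mg/dL is outside the sensor range")
        if self.insulin_units is not None and not 0 <= self.insulin_units <= 100:
            raise ValueError(f"Implausible bolus of {self.insulin_units} units")
```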

Feasibility study and risk assessment

A comprehensive feasibility study must evaluate the technical and financial viability of the AI system, including resource availability, budget constraints, and overall practicality. Given the health care industry’s strict regulations, all potential risks—technical, operational, and financial—must also be identified at this stage to comply with standards like ISO 14971 (Risk Management for Medical Devices). 28
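One way to make such a risk assessment auditable is to record it as structured data. The sketch below shows a hypothetical severity-by-probability risk register in the spirit of ISO 14971; the scoring scale, hazards, and acceptance threshold are illustrative assumptions, not values taken from the standard.

```python
# Hypothetical risk register sketch in the spirit of ISO 14971 (scales and threshold assumed).
from dataclasses import dataclass


@dataclass
class Risk:
    hazard: str
    severity: int     # 1 (negligible) to 5 (catastrophic) -- illustrative scale
    probability: int  # 1 (improbable) to 5 (frequent) -- illustrative scale

    @property
    def score(self) -> int:
        return self.severity * self.probability

    @property
    def acceptable(self) -> bool:
        return self.score < 8  # illustrative threshold; real limits come from the risk policy


register = [
    Risk("Missed hypoglycemia alert", severity=5, probability=2),
    Risk("CGM data gap during sensor warm-up", severity=2, probability=4),
    Risk("Unauthorized access to training data", severity=4, probability=2),
]
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.hazard}: score={risk.score}, acceptable={risk.acceptable}")
```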

Stage 2—Development and Testing

Data collection and preprocessing

Data collection is initiated from various sources, such as electronic health records (EHRs), electronic medical records (EMRs), and wearable devices. Mackenzie SC et al discuss data’s potential to support diabetes care and the many sources from which it can be collected. For instance, medical records can offer sensitive information on demographics, medical history, diagnoses, medications, physiological observations, and laboratory and imaging data. 2

At this stage, it is crucial to ensure that the data collection methods comply with health care privacy regulations, such as General Data Protection Regulation (GDPR) or Health Insurance Portability and Accountability Act (HIPAA), thereby safeguarding patient information.
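As a hedged illustration of privacy-aware preprocessing, the sketch below pseudonymizes patient identifiers with a salted hash and drops direct identifiers before records enter the training pipeline. The column names are assumptions, and pseudonymization alone does not by itself establish HIPAA or GDPR compliance.

```python
# Illustrative de-identification step applied before data enter the training pipeline.
import hashlib
import os

import pandas as pd

DIRECT_IDENTIFIERS = ["name", "address", "phone", "email"]       # assumed EHR export columns
SALT = os.environ.get("PSEUDONYM_SALT", "replace-with-secret")   # secret kept outside the code


def pseudonymize(patient_id: str) -> str:
    """One-way, salted hash so records stay linkable without exposing the raw identifier."""
    return hashlib.sha256((SALT + patient_id).encode("utf-8")).hexdigest()[:16]


def deidentify(records: pd.DataFrame) -> pd.DataFrame:
    out = records.drop(columns=[c for c in DIRECT_IDENTIFIERS if c in records.columns])
    out["patient_id"] = out["patient_id"].astype(str).map(pseudonymize)
    return out
```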

Design and development

At this stage, the model is trained and monitored to optimize performance and minimize biases, ensuring it meets clinical standards. In diabetes management, AI models such as predictive algorithms for insulin dosing or patient adherence are developed. For instance, Zeevi D et al devised a machine-learning algorithm to predict personalized postprandial glycemic responses to real-life meals. 29 The FDA’s Good Machine Learning Practice (GMLP) guidelines set the foundation for a safe development phase, while the selection of interpretable and explainable models (xAI) and methodologies ensures that diabetes care solutions are trustworthy. 30
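The sketch below illustrates one kind of development-time check this phase calls for: evaluating a candidate model's discrimination separately per patient subgroup to surface bias before deployment. The synthetic data, the choice of an interpretable logistic regression baseline, and the age-based subgroups are assumptions made purely for illustration.

```python
# Sketch: subgroup performance check during model development (synthetic, illustrative data).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; a real project would use curated clinical data.
rng = np.random.default_rng(0)
n = 2000
glucose = rng.normal(120, 30, n)
bmi = rng.normal(29, 5, n)
age = rng.integers(20, 80, n)
risk = 1 / (1 + np.exp(-(0.03 * (glucose - 120) + 0.05 * (bmi - 29) + 0.02 * (age - 50))))
outcome = rng.binomial(1, risk)

X = pd.DataFrame({"glucose": glucose, "bmi": bmi, "age": age})
X_train, X_test, y_train, y_test, age_train, age_test = train_test_split(
    X, outcome, age, test_size=0.3, random_state=42
)

# Interpretable baseline model; coefficients can be inspected directly.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Bias check: does discrimination hold up in each age subgroup?
groups = pd.cut(age_test, bins=[19, 40, 60, 80], labels=["20-40", "41-60", "61-80"])
proba = model.predict_proba(X_test)[:, 1]
for g in groups.categories:
    mask = groups == g
    if mask.sum() > 0 and len(set(y_test[mask])) > 1:
        print(f"Age {g}: AUROC = {roc_auc_score(y_test[mask], proba[mask]):.3f}")
```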

Validation and testing

Once developed, the AI model undergoes rigorous validation and testing to ensure effective performance in real-world health care settings. This includes stress testing and robustness checks to confirm the model’s ability to handle various diabetes care scenarios, which is critical given the need for high accuracy and precision in diabetes management systems. For example, Tufail et al conducted one of the first head-to-head validation studies in 2017, comparing EyeArt, Retmarker, and human graders against a third-party reference standard. 31
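A minimal sketch of one such robustness check follows: perturbing glucose inputs with sensor-like noise and measuring how far a trained risk model's outputs move. The noise magnitude, the acceptance threshold, and the `model` and `X` objects (a trained classifier and a feature table with a "glucose" column, as in the earlier sketch) are assumptions; a real validation plan would derive perturbations from device error specifications.

```python
# Sketch: robustness check against sensor noise on glucose inputs (assumptions noted above).
import numpy as np


def glucose_noise_robustness(model, X, noise_sd_mg_dl=10.0, n_trials=100, seed=0):
    """Mean absolute shift in predicted risk when CGM-like noise is added to glucose."""
    rng = np.random.default_rng(seed)
    baseline = model.predict_proba(X)[:, 1]
    shifts = []
    for _ in range(n_trials):
        X_noisy = X.copy()
        X_noisy["glucose"] = X_noisy["glucose"] + rng.normal(0, noise_sd_mg_dl, len(X_noisy))
        shifts.append(np.abs(model.predict_proba(X_noisy)[:, 1] - baseline).mean())
    return float(np.mean(shifts))


# Illustrative acceptance criterion:
# assert glucose_noise_robustness(model, X_test) < 0.05, "Prediction shift exceeds tolerance"
```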

Clinical trials are also essential in this phase, comparing AI with traditional methodologies. According to the American Diabetes Association, clinical trials test new drugs or devices in patients before FDA approval for public use. 32

Stage 3—Deployment, Adoption, and Maintenance

Certification, deployment, and integration

The validated AI system is submitted to the appropriate regulatory authorities (eg, FDA, CE Mark) for release and certification. Required documentation includes evidence of compliance with standards like ISO 13485 (quality management for medical devices) and IEC 62304 (medical device software lifecycle processes). 33 Upon approval, it is deployed into the health care environment. This involves integrating the model into clinical workflows and EHR systems, ensuring seamless functionality. Post-hoc explainability methods are applied after deployment to evaluate and clarify AI-driven decisions for risk mitigation.

Adoption

Successful adoption of the AI system hinges on engaging health care providers and patients. Clinical trials play a significant role, as positive results can build confidence among clinicians and patients. In addition, collecting and analyzing user feedback aids in smoothly integrating the AI system into daily clinical practice. Norbert Laurisz et al discuss the concept of co-creation in health care indicating the theoretical sophistication of research on collaboration between health care professionals and patients. 24

Monitoring and maintenance

This includes post-market surveillance of AI-enabled diabetes care solutions. Once the solution is on the market, ongoing monitoring is required to identify potential safety or performance issues. Regulatory authorities may require follow-up clinical trials to ensure that the solution remains safe and effective. 34
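As a hedged sketch of what such ongoing monitoring can look like in code, the snippet below compares the distribution of incoming glucose data against the training-time baseline using the Population Stability Index (PSI) and flags drift for human review. The synthetic distributions and the 0.2 threshold are a common rule of thumb used for illustration, not a regulatory requirement.

```python
# Sketch: simple data-drift check for post-market surveillance (illustrative data and threshold).
import numpy as np


def population_stability_index(baseline, current, bins=10):
    """PSI between the training-time and in-production glucose distributions."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    expected, _ = np.histogram(baseline, bins=edges)
    actual, _ = np.histogram(current, bins=edges)
    e = np.clip(expected / expected.sum(), 1e-6, None)
    a = np.clip(actual / actual.sum(), 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))


rng = np.random.default_rng(1)
training_glucose = rng.normal(140, 40, 5000)     # distribution seen during development
production_glucose = rng.normal(155, 45, 5000)   # distribution observed after deployment
psi = population_stability_index(training_glucose, production_glucose)
print(f"PSI = {psi:.3f}")  # rule of thumb: values above 0.2 warrant investigation
```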

Retirement and decommissioning

When the AI system is no longer needed, a systematic decommissioning process is executed. Patient data are securely archived or transferred, and the system is safely shut down, with all procedures documented to ensure compliance and transparency:

  1. Protecting patient data. Securely erasing or transferring personal health information (PHI) is crucial for compliance with regulations like HIPAA. For example, improper retirement of an insulin pump system could risk exposing sensitive data.

  2. Preventing access to obsolete systems. Revoking access to outdated systems prevents exploitation of known vulnerabilities, as they no longer receive security updates.

  3. Regulatory compliance. Adhering to standards, such as ISO 13485 and ISO 14971 is essential to avoid non-compliance and potential data breaches.

  4. Securing software dependencies. Disconnecting external connections during decommissioning prevents unauthorized access to data.

Addressing Security Concerns at Each Stage

At every stage of the AI application lifecycle, clinical and medical organizations must recognize the inherent risks involved. Various risks associated with AI-powered clinical models—such as lack of explainability and data breaches—necessitate the proactive implementation of appropriate security measures and mitigation strategies.

Artificial intelligence models often exhibit behaviors that are difficult to comprehend and rationalize. The lack of transparency in AI decision-making processes restricts testing opportunities, which diminishes trust and heightens the potential for misuse. 35

  1. Protecting sensitive information. The revelation of private information can harm patients and disrupt medical processes, while data breaches can lead to serious legal repercussions due to non-compliance. Criminals often target medical software for its valuable sensitive data, making it essential to protect this information throughout the lifecycle of AI-enabled diabetes care solutions, including the decommissioning phase.

     Hostile cyber actors may employ membership inference attacks to determine whether an individual’s data were used to train an AI model. In addition, cybercriminals can use attribute inference attacks to extract sensitive information by analyzing the model’s output.

  2. Measures to enhance security:

     • Encryption protocols. Establish strong encryption protocols for data at rest and data in transit.

     • Differential privacy. Incorporate differential privacy methods throughout the model development process (see the sketch after this list).

     • Regular audits. Conduct regular audits and monitor access to sensitive information, adhering to the principle of least privilege.

     • Compliance. Comply with data protection laws, including HIPAA and GDPR.
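As a hedged illustration of the differential-privacy item above, the sketch below applies the classic Laplace mechanism to an aggregate statistic (the count of readings above a glucose threshold) before it leaves a secure environment. The epsilon value, query, and toy data are illustrative; production systems for model training typically rely on dedicated differential-privacy tooling rather than hand-rolled mechanisms.

```python
# Sketch: Laplace mechanism for a differentially private aggregate query (illustrative).
import numpy as np


def dp_count(values, threshold, epsilon=1.0, seed=None):
    """Differentially private count of readings above a threshold.

    A count query has sensitivity 1 (one record changes the count by at most 1),
    so Laplace noise with scale 1/epsilon gives epsilon-differential privacy.
    """
    rng = np.random.default_rng(seed)
    true_count = int(np.sum(np.asarray(values) > threshold))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)


glucose_readings = [95, 210, 260, 140, 305, 180]  # toy data
print(dp_count(glucose_readings, threshold=250, epsilon=0.5))
```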

Next, we will explore the regulatory framework governing these systems, emphasizing the importance of compliance in ensuring the safety and efficacy of AI in health care.

Understanding the Regulatory Compliance Guidelines to Build xAI

Regulatory compliance is crucial in diabetes care, where AI supports critical decision-making. Frameworks enforce transparency, interpretability, and trust, with increasing guidance from entities like the US Food and Drug Administration (FDA) to ensure AI systems’ safety and efficacy in diabetes management. 36 The FDA recently published expectations for AI systems and developed specific guidance for Software as a Medical Device (SaMD), including considerations for adaptive AI systems.37,38

This section examines the FDA’s guidelines and their role in developing xAI systems. The FDA framework ensures that xAI systems meet stringent medical application requirements, aligning technical and regulatory aspects for secure and trustworthy AI solutions in diabetes care.

Good Machine-Learning Practice for Safe xAI

The FDA’s GMLP establishes guidelines for safe AI development in health care. For diabetes management, key areas include:

  • Transparency in AI decisions. Artificial intelligence systems for glucose monitoring and insulin dosing must offer interpretable outputs. For example, xAI models in systems like Tidepool Loop allow clinicians to trace insulin dosage calculations based on glucose data and patient history, meeting GMLP’s transparency requirements (a generic sketch of such traceable dosing logic follows this list).39,40

  • Data quality and integrity. Good Machine Learning Practice requires using high-quality, diverse data sets. Explainable AI helps by clarifying the model’s accuracy across various patient populations, reducing bias, and increasing trust.

  • Monitoring and risk mitigation. Good Machine Learning Practice emphasizes continuous monitoring, critical for high-risk tools like insulin pumps or CGMs. Explainable AI aids in real-time risk mitigation by detecting anomalies in AI decisions, potentially preventing life-threatening events, such as hypoglycemia or hyperglycemia.
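To illustrate the kind of traceable dosing logic these transparency requirements point to, the sketch below implements the standard open-loop bolus calculation (carbohydrate coverage plus correction, minus insulin on board) and returns the intermediate terms alongside the recommended dose. This is a generic textbook formula with illustrative parameter values, not the algorithm used by Tidepool Loop or any cleared device.

```python
# Sketch: a traceable bolus calculation exposing its intermediate terms (generic formula,
# not any specific device's algorithm; parameter values are illustrative).
from dataclasses import dataclass


@dataclass
class BolusExplanation:
    carb_units: float        # insulin to cover carbohydrates
    correction_units: float  # insulin to correct glucose above target
    iob_units: float         # insulin on board subtracted from the dose
    recommended_units: float


def recommend_bolus(carbs_g, glucose_mg_dl, target_mg_dl, icr_g_per_unit,
                    isf_mg_dl_per_unit, insulin_on_board_units) -> BolusExplanation:
    carb_units = carbs_g / icr_g_per_unit
    correction_units = max(0.0, (glucose_mg_dl - target_mg_dl) / isf_mg_dl_per_unit)
    dose = max(0.0, carb_units + correction_units - insulin_on_board_units)
    return BolusExplanation(round(carb_units, 2), round(correction_units, 2),
                            round(insulin_on_board_units, 2), round(dose, 2))


# Illustrative call: 60 g carbs, glucose 180 mg/dL, target 110 mg/dL,
# ICR 10 g/U, ISF 40 mg/dL/U, 1 U insulin on board.
print(recommend_bolus(60, 180, 110, 10, 40, 1.0))
```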

AI/ML-Based SaMD and Adaptive xAI

The FDA’s AI/ML SaMD framework guides AI systems that continuously adapt. 41 Key aspects include:

  • PCCP. Explainable AI ensures transparent management of AI updates in insulin delivery systems, maintaining safety and interpretability with each change.

  • Real-world performance monitoring. As AI adapts to new diabetes data, xAI explains how these updates impact decision-making, helping clinicians and patients understand changes in insulin dosing or risk predictions.
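As a hedged sketch of the record-keeping such real-world performance monitoring implies, the snippet below logs each prediction together with the model version and its explanation payload so that changes introduced under a PCCP can later be audited. The log format, field names, and example values are assumptions made for illustration.

```python
# Sketch: per-prediction audit record supporting PCCP traceability (fields are illustrative).
import json
from datetime import datetime, timezone


def log_prediction(model_version: str, patient_pseudo_id: str,
                   inputs: dict, prediction: float, shap_contributions: dict,
                   logfile: str = "prediction_audit.jsonl") -> None:
    record = {
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,           # ties the output to a specific approved change
        "patient_pseudo_id": patient_pseudo_id,   # pseudonymized, never a direct identifier
        "inputs": inputs,
        "prediction": prediction,
        "explanation": shap_contributions,        # top feature contributions for this prediction
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


log_prediction("v1.3.0-pccp2", "a91f04c2", {"glucose": 182, "bmi": 31.2},
               prediction=0.74, shap_contributions={"glucose": 0.21, "bmi": 0.08})
```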

SaMD and Clinical Evaluation for Diabetes Care

Artificial intelligence-enabled systems for diabetes care, such as CGM systems or insulin delivery applications, undergo rigorous clinical evaluation per the FDA SaMD: Clinical Evaluation guidelines, 42 which include:

  • Scientific and analytical validity. Explainable AI enhances transparency by explaining how outputs like glucose predictions and insulin recommendations are derived scientifically, making them more understandable and reliable for clinicians.

  • Clinical performance. Through real-time explanations, xAI can support clinicians in validating and trusting the AI’s decisions, such as when making personalized insulin adjustments for patients with varying activity levels or carbohydrate intake.

Discussion

In clinical settings, the safety and security of AI are critical, as AI-driven decisions can significantly affect patient health. Safety pertains to the accuracy and reliability of AI systems in providing medical diagnoses and recommendations. To ensure effectiveness, these systems must undergo rigorous testing and validation with real clinical data, following regulatory guidelines. Security focuses on protecting AI systems from cyberattacks and safeguarding sensitive data, such as medical records and genetic information. Adopting advanced encryption protocols and adhering to regulations is vital for ensuring patient privacy. Furthermore, AI systems must be designed to withstand external attacks that aim to manipulate input data, which could lead to incorrect diagnoses or inappropriate treatments.

Explainable AI is a fundamental pillar for the safe, trustworthy, and effective adoption of AI in diabetes prediction and management. By making AI decisions transparent and interpretable, xAI enhances trust among health care professionals and patients. It opens avenues for validation and ethical application of predictive tools while translating complex technical and regulatory requirements into understandable formats. This approach helps mitigate the risks of misinterpretation or overestimation of outcomes. Understanding the interconnected aspects of building xAI helps researchers and developers create AI systems that are effective, accurate, interpretable, and explainable, fostering trust and enhancing clinical outcomes. Ultimately, even the most sophisticated AI systems will struggle to gain traction without a clear understanding of their safe and secure operation. Continuous dialogue among AI developers, health care professionals, and patients is essential to ensure that AI effectively serves to improve diabetes care and prevention.

Footnotes

Abbreviations: AI, artificial intelligence; EO, executive order; xAI, explainable AI; genAI, generative AI; GDPR, General Data Protection Regulation; gpAI, general purpose AI; HIPAA, Health Insurance Portability and Accountability Act; SHAP, Shapley additive explanations.

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The author(s) received no financial support for the research, authorship, and/or publication of this article.

ORCID iD: Maria Papademetris https://orcid.org/0009-0005-2677-2070

References


