Diabetes Technology & Therapeutics. 2024 May 30;26(6):375–382. doi: 10.1089/dia.2023.0469

Neural-Net Artificial Pancreas: A Randomized Crossover Trial of a First-in-Class Automated Insulin Delivery Algorithm

Boris Kovatchev 1, Alberto Castillo 1, Elliott Pryor 1, Laura L Kollar 1, Charlotte L Barnett 1, Mark D DeBoer 1, Sue A Brown 1; for the NAP Study Team 1
PMCID: PMC11305265  PMID: 38277161

Abstract

Background:

Automated insulin delivery (AID) is now integral to the clinical practice of type 1 diabetes (T1D). The objective of this pilot-feasibility study was to introduce a new regulatory and clinical paradigm—a Neural-Net Artificial Pancreas (NAP), an encoding of an AID algorithm into a neural network that approximates its action—and to assess NAP against the original AID algorithm.

Methods:

The University of Virginia Model-Predictive Control (UMPC) algorithm was encoded into a neural network, creating its NAP approximation. Seventeen AID users with T1D were recruited, and 15 participated in two consecutive 20-h hotel sessions, receiving NAP and UMPC in random order. Participants were 22–68 years old, with duration of diabetes 7–58 years, 10 female/5 male, 13 White non-Hispanic/2 Black, and baseline glycated hemoglobin 5.4%–8.1%.

Results:

The time-in-range (TIR) difference between NAP and UMPC, adjusted for entry glucose level, was 1 percentage point, with absolute TIR values of 86% (NAP) and 87% (UMPC). The two algorithms achieved similar times <70 mg/dL (2.0% vs. 1.8%) and coefficients of variation (29.3% for NAP vs. 29.1% for UMPC). Under identical inputs, the average absolute insulin-recommendation difference was 0.031 U/h. There were no serious adverse events on either controller. NAP had sixfold lower computational demands than UMPC.

Conclusion:

In a randomized crossover study, a neural-network encoding of a complex model-predictive control algorithm demonstrated similar performance, at a fraction of the computational demands. Regulatory and clinical doors are therefore open for contemporary machine-learning methods to enter the AID field.

Clinical Trial Registration number: NCT05876273.

Keywords: Type 1 diabetes, Automated insulin delivery (AID), Hybrid closed-loop (HCL), Neural networks, Machine learning

Introduction

Automated insulin delivery (AID) has firmly transitioned into the clinical practice of type 1 diabetes (T1D) and has made its first strides into insulin-using type 2 diabetes as well. High-ranking general medicine journals now regularly publish AID clinical trials.1–6 Real-life data for thousands of AID users have been published, consistently showing the superiority of AID over standard therapies.7–13 Consensus recommendations for the use of AID technologies in clinical practice were published in Endocrine Reviews,14 to serve as a comprehensive guide for clinicians interested in the advantages of AID therapy.

The “brain” of any insulin delivery system is a control algorithm, which digests information from peripheral devices, for example, a continuous glucose monitor (CGM) and insulin pump, and then directs the pump to deliver amounts of insulin that are considered optimal. Typically, this happens every few minutes15; thus, an AID control algorithm has to be fast, efficient, and computationally undemanding, particularly because the data processing is done by a device with low computing power, such as an insulin pump or a smart watch.

The early studies of Hovorka16,17 and Steil18 outlined two major types of algorithms now in use—model-predictive control (MPC)17 and proportional-integral-derivative (PID).18 A modular controller based on the user's state estimation was introduced in 2009,19 which later became the algorithm behind Tandem's Control-IQ AID system.1,2,4,7 In 2010, two new algorithms were introduced: Zone MPC,20 a strategy to minimize hyper- and hypoglycemic events, and MD Logic using clinical knowledge and fuzzy logic models to drive insulin delivery.21

In combination with PID, MD Logic powered the MiniMed Advanced Hybrid Closed-Loop (AHCL) system.22,23 An MPC-PID mix was introduced to drive a dual-hormone artificial pancreas,24 and is now used in studies with the dual-chamber iLet pump.25 Overall, a PubMed search on artificial pancreas, AID, or closed-loop algorithms identified 555 papers proposing AID controllers. To date, at least six of them have been implemented in devices used in clinical practice.

None are “trainable”: no current commercial algorithm can take advantage of the vast amounts of data collected by AID users around the world. This latter point is critically important—at best, contemporary AID algorithms can “learn” from the experience of their own user, while the large available databases remain untapped by algorithm adaptation.

However, methods to do so exist and generally fall in the Data Science realms known as “machine learning” and “artificial intelligence” (AI). By definition, neural networks are a class of machine-learning models that use one or more layers of artificial “neurons” and the connections between them to solve complex problems. Neural nets are “trained” on large amounts of data and can then reproduce the mechanisms that created these data.

The Neural-Net Artificial Pancreas (NAP) introduced in this manuscript is the result of a process by which an insulin dosing rule, in particular an AID algorithm, can be reproduced by a neural network.26 This process leverages the key concept of a Saturated Insulin Dosing Dataset: a sufficiently rich ensemble of examples of the insulin dosing rule's computation. This saturated dataset is encoded and reproduced by the neural network through a process called “neural network training.”

The saturated dataset is sufficiently dense and sufficiently wide, so that the probability of the neural network deviating from the original rule is kept small. After training, the resulting neural network is faster, more efficient, and computationally less demanding than the original dosing rule or AID algorithm. In addition, neural networks have the inherent ability to continue learning and adapting behavior through established AI methods, such as reinforcement learning, or others. This latter feature, however, was not part of the current study and was left for future NAP investigations, where learning and neural network retraining would be done offline in a Cloud database environment.
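The encoding step can be illustrated with a toy sketch: sample a dosing rule densely over its input range (a miniature “saturated dataset”), then train a small neural network to reproduce it. Everything below is a hypothetical stand-in — the clipped-linear rule, the network size, and the training settings are illustrative assumptions, not the UMPC control law or the actual NAP architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in dosing rule (NOT the actual UMPC law):
# dose rate rises linearly above a 120 mg/dL target, capped at 5 U/h.
def dosing_rule(glucose):
    return np.clip((glucose - 120.0) / 50.0, 0.0, 5.0)

# "Saturated" dataset: densely sample the rule over its input range.
g = rng.uniform(40.0, 400.0, size=(4096, 1))
u = dosing_rule(g)
x = (g - 220.0) / 180.0                      # scale inputs to roughly [-1, 1]

# One-hidden-layer MLP trained by full-batch gradient descent on MSE.
W1 = rng.normal(0.0, 0.1, (1, 64)); b1 = np.zeros(64)
W2 = rng.normal(0.0, 0.1, (64, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(5000):
    h = np.maximum(0.0, x @ W1 + b1)         # ReLU hidden layer
    err = (h @ W2 + b2) - u                  # prediction error
    dW2 = h.T @ err / len(x); db2 = err.mean(0)
    dh = (err @ W2.T) * (h > 0.0)            # backprop through ReLU
    dW1 = x.T @ dh / len(x); db1 = dh.mean(0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

# Inference is only matrix multiplications -- cheap on low-power hardware.
def nap_approx(glucose):
    xs = (np.asarray(glucose, dtype=float).reshape(-1, 1) - 220.0) / 180.0
    return np.maximum(0.0, xs @ W1 + b1) @ W2 + b2

test_g = np.linspace(40.0, 400.0, 200)
mae = float(np.mean(np.abs(nap_approx(test_g).ravel() - dosing_rule(test_g))))
print(f"mean |net - rule| = {mae:.3f} U/h")
```

The design point mirrors the paper's argument: after training, evaluating the network is a fixed sequence of matrix multiplications, whereas the original rule (in the real system, an iterative MPC optimization) may be far more expensive to compute.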

Materials and Methods

The objective was to pilot-test, in a randomized crossover study, the safety and feasibility of an NAP implementation of a previously introduced AID algorithm—the University of Virginia Model-Predictive Control (UMPC).27 The NAP technology and the study protocol were approved under FDA Investigational Device Exemption #G230052 and by UVA's Institutional Review Board. The study was registered with ClinicalTrials.gov.

Seventeen participants with T1D signed consent forms and 15 were randomized to two groups: Group A—NAP first, followed by UMPC; Group B—UMPC first, followed by NAP. In brief, the inclusion criteria were: (1) Age ≥18.0 years at time of consent; (2) Clinical diagnosis, based on investigator assessment, of T1D for at least 1 year; (3) Currently using insulin for at least 6 months; (4) Currently using the Control-IQ AID system (Tandem Diabetes Care); (5) Using insulin parameters such as insulin:carbohydrate ratio (ICR) and correction factor consistently to dose insulin for meals or corrections, and (6) Willingness to use the UVA's DiAs system (described below) throughout study sessions.

Exclusions were: (1) History of diabetic ketoacidosis in the 12 months before enrollment; (2) Severe hypoglycemia resulting in seizure or loss of consciousness in the 12 months before enrollment; and (3) Use of metformin/biguanides, GLP-1 agonists, pramlintide, DPP-4 inhibitors, SGLT-2 inhibitors, or nutraceuticals intended for glycemic control with a change in dose in the past month. There were no limits on the entry glycated hemoglobin (HbA1c); detailed inclusion and exclusion criteria are given in the study's ClinicalTrials.gov registration.

Following enrollment, 1 week of Control-IQ historical data downloaded from the participants' pumps was used to establish a baseline of glucose control. The study then followed a randomized crossover design comprising the 20-h NAP and UMPC sessions, in random order. All sessions were conducted at a local hotel, began at ∼4 PM each day, and continued until noon the next day. Each session included carbohydrate entry, dosed with the participant's usual ICR, for dinner and breakfast.

The corresponding meals during the NAP and UMPC sessions were identical and contained the same amount of carbohydrates. Generally, the participants followed their usual daily routines as closely as possible. All participants went on an ∼25-min walk 2 h after breakfast. Study staff remotely monitored the participants' CGM and insulin delivery patterns throughout the admission.28 Blood ketones were measured if needed with the Abbott Precision Xtra Blood Glucose and Ketone Monitoring System meters and strips (Abbott Laboratories, Abbott Park, IL), in accordance with the manufacturer's labeling.

Contour Next Blood Glucose Meters and strips (Ascensia Diabetes Care, Parsippany, NJ) were used to check capillary blood sugar if the CGM was unavailable or if requested by the study team such as when the values were inconsistent with the participant's reported symptoms.

The study hardware included our established DiAs prototyping platform,29 which has been used in a number of studies for over a decade,27,30–35 connected to a Tandem t:AP research pump and a Dexcom G6 sensor (Dexcom, Inc., San Diego, CA). During all sessions, NAP and UMPC were running simultaneously on DiAs, but only one of these algorithms was used to control the insulin pump, as determined by the order of randomization. This allowed for: (1) seamless transition between NAP and UMPC sessions and vice versa, and (2) precise comparison of the insulin dosing commands issued by NAP or UMPC under identical input conditions.

NAP was trained on a dense saturated dataset generated via in silico approximation of the action of the UMPC, using our established UVA/Padova diabetes simulator. Specifically, we generated 750 simulations of 60 days each for 100 different virtual subjects with T1D, adding up to 4.5 million days (∼12,328 years) of UMPC action. The simulator was configured to maximize data variability, with the following randomized variables: meal carbohydrate content, meal times, meal type (fast or slow absorption), physical activity, insulin type, insulin sensitivity, and rescue carbohydrates in case of hypoglycemia.

Also, different operation modes of the UMPC were considered, for example, fully automated or hybrid closed loop, and various UMPC modules were activated or deactivated, for example, hypoglycemia and hyperglycemia safety supervision, and a bolus priming system (available in full closed-loop mode).27 Details regarding the neural network used to construct NAP are given in Supplementary Data S1.
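The quoted training-data volume follows directly from the simulation setup described above; a quick arithmetic check:

```python
# Sanity check of the quoted training-data volume (numbers from the text above).
simulations, days_each, virtual_subjects = 750, 60, 100
total_days = simulations * days_each * virtual_subjects
print(total_days)            # 4,500,000 days of simulated UMPC action
print(total_days // 365)     # ~12,328 years
```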

Statistically, this early feasibility study was not powered to establish formal noninferiority of NAP compared with UMPC. Instead, criteria for success were established as follows: no critical system errors and the following performance criteria, which factor in likely inter-day variability for each participant and are consistent with the recommendations of the International Consensus on time-in-range (TIR)36: (1) Difference in TIR, 70 to 180 mg/dL, between NAP and UMPC <8 percentage points; (2) Difference in time below range (TBR <70 mg/dL) between NAP and UMPC <3 percentage points; and (3) Difference in time above range (TAR, >180 mg/dL) between NAP and UMPC <8 percentage points.
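The three predefined performance criteria reduce to simple threshold checks on session-level differences; a sketch (the dictionary keys are illustrative names, not protocol variables):

```python
def feasibility_criteria_met(nap, umpc):
    """Predefined success criteria, in percentage points of CGM time.

    `nap` and `umpc` are dicts with keys 'tir' (70-180 mg/dL),
    'tbr' (<70 mg/dL), and 'tar' (>180 mg/dL), each in percent.
    """
    return (abs(nap["tir"] - umpc["tir"]) < 8.0 and   # TIR difference < 8 points
            abs(nap["tbr"] - umpc["tbr"]) < 3.0 and   # TBR difference < 3 points
            abs(nap["tar"] - umpc["tar"]) < 8.0)      # TAR difference < 8 points

# The unadjusted session values later reported in Table 2 satisfy all three.
print(feasibility_criteria_met({"tir": 86.08, "tbr": 1.88, "tar": 11.93},
                               {"tir": 87.25, "tbr": 1.79, "tar": 10.96}))
```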

TIR was the primary outcome, and several other CGM-based metrics were computed and compared across NAP and UMPC sessions as well: mean glucose; glucose variability measured by coefficient of variation; percentage of readings <54 mg/dL (i.e., level 2 hypoglycemia36); and percentage of readings >250 and >300 mg/dL (i.e., level 2 hyperglycemia36). The study protocol also required observing, recording, and tabulating any system errors, including: (1) Malfunctions requiring study team contact and other reported device issues; (2) Percent time in closed loop and any other relevant operational modes; and (3) Rate of relevant NAP and UMPC failure events and alarms per study session.

The statistical analysis used general linear models (GLM) with repeated measures (NAP vs. UMPC) and, as a covariate, the 1-h average CGM level before initiating either controller. This covariate was used to compute the adjusted performance differences between NAP and UMPC. The data were analyzed using the GLM procedures of IBM SPSS 28.0.

Results

Table 1 presents the demographic and baseline glycemic control characteristics of the participants. Participants ranged in age from 22 to 68 years, with a mean age of 47.9 years, and had had T1D for 7 to 58 years (mean 26.1 years). There were 10 females and 5 males, with an ethnic/racial distribution of 13 White non-Hispanic and 2 Black. HbA1c ranged from 5.4% to 8.1%, with a mean of 6.77%. While on Control-IQ during the baseline week, the average TIR was 69.3%.

Table 1.

Demographic and Baseline Characteristics of the Participants Who Completed Admission

Demographics and diabetes parameters
 Age (years), mean (range) 47.9 (27.6–67.9)
 Duration of diabetes (years), mean (range) 26.1 (7–58)
 Gender female/male 10/5
 Race or ethnic group: White, non-Hispanic/Black 13/2
 Glycated hemoglobin at screening (%), Mean (Range) 6.8 (5.4–8.1)
Baseline glycemic control during 1-week control-IQ before admission
 Mean CGM glucose (SD), mg/dL 158.89 (24.45)
 Percent time below 54 mg/dL, Mean (SD) 0.39 (0.48)
 Percent time below 70 mg/dL, Mean (SD) 1.64 (1.85)
 Percent time in range (70–180 mg/dL), Mean (SD) 69.33 (13.39)
 Percent time in tight range (70–140 mg/dL), Mean (SD) 44.81 (14.62)
 Percent time above 180 mg/dL, Mean (SD) 29.03 (14.45)
 Percent time above 250 mg/dL, Mean (SD) 8.78 (8.16)
 Coefficient of variation (%) 34.77 (3.56)

All predefined criteria for success described in the Methods section were achieved. In particular, the TIR difference between NAP and UMPC, adjusted for glycemic control at the initiation of the AID sessions, was 0.63 percentage points. The difference in TBR was 0.27 percentage points, and the difference in TAR was 0.22 percentage points. Table 2 presents the glycemic control metrics during the NAP and UMPC sessions and the NAP-UMPC differences adjusted for glucose levels at the beginning of each session. Significance levels are included as well; however, as noted in the Methods, this feasibility study was not powered to formally test statistical noninferiority.

Table 2.

Continuous Glucose Monitor Metrics During Neural-Net Artificial Pancreas Versus University of Virginia Model-Predictive Control Admissions

Glycemic control metrics derived from CGM NAP UMPC Adjusted difference UMPC-NAP P
Mean CGM glucose (SD), mg/dL 130 (9.3) 132.3 (25.6) 3.67 0.49
Percent time below 54 mg/dL, Mean (SD) 0.25 (0.7) 0.19 (0.5) −0.1 1.0
Percent time below 70 mg/dL, Mean (SD) 1.88 (2.25) 1.79 (2.1) −0.27 1.0
Percent time in range, TIR (70–180 mg/dL), Mean (SD) 86.08 (6.5) 87.25 (12.9) 0.63 0.2
Percent time in tight range (70–140 mg/dL), mean (SD) 68.67 (8.1) 67.84 (13.1) −1.22 0.9
Percent time above 180 mg/dL, Mean (SD) 11.93 (6.4) 10.96 (13.1) −0.22 0.33
Percent time above 250 mg/dL, Mean (SD) 1.3 (1.7) 3.5 (9.1) 2.39 0.77
Coefficient of variation (%) 29.3 (5.7) 29.1 (8.2) −0.48 0.93
Technical specifications
 Percent time in closed loop, Mean (SD) 98.7 (1.9) 98.5 (1.3) 0.19
 Processor time per insulin dose computation on an Intel Core i5 (milliseconds), mean (range) 1.4 (0–2.4) 8.3 (0–46) 6-fold

NAP, Neural-Net Artificial Pancreas; TIR, time-in-range; UMPC, University of Virginia Model-Predictive Control.

Supplementary Table S1 lists the performance of NAP and UMPC for each study participant, including any carbohydrate treatments used during the sessions.

Figure 1 presents the median and quartiles of the glucose traces during all NAP study sessions (Panel A) and all UMPC study sessions (Panel B). It is evident that the profiles generated by the two algorithms are similar.

FIG. 1.

Median and quartiles of the glucose traces during all NAP study sessions (A) and all UMPC study sessions (B). NAP, Neural-Net Artificial Pancreas; UMPC, University of Virginia Model-Predictive Control.

Because NAP and UMPC were running simultaneously on DiAs during all sessions (but only one algorithm was used to control the insulin pump), we were able to tabulate the insulin dosing decisions of NAP versus UMPC under identical conditions. The average absolute difference between the two algorithms was 0.031 insulin units per hour, with 95% and 99% confidence intervals of (0–0.06) and (0–0.13) units per hour, respectively.

For a period of time (∼10.5% of the study duration), a DiAs error caused UMPC to be used instead of NAP as defined by the protocol. During this period, the average absolute difference between the two algorithms was 0.024 insulin units per hour, with a 99% confidence interval of (0–0.16) units per hour. Thus, the two algorithms were issuing virtually identical commands, and the inadvertent algorithm switch therefore did not influence the data analysis or the study results.

Figure 2 presents the simultaneous action of the two algorithms—one directing insulin delivery and the other running in the background. It is evident that the two algorithms have virtually identical insulin delivery profiles, when provided with the same input data.

FIG. 2.

Simultaneous action of the two algorithms. (A) NAP is directing insulin delivery, and UMPC is running in the background. (B) UMPC is directing insulin delivery, and NAP is running in the background.

There were no adverse events recorded during the execution of this trial. In total, across all NAP and UMPC sessions, four pumps were replaced, three CGM sensors were changed, and one phone SIM card was replaced. These device issues did not meet the criteria for reportable device malfunctions.

NAP required sixfold less computational effort to determine an insulin dose. Specifically, NAP took on average 1.4 ms of processor time per insulin dose computation (on an Intel Core i5), with a 99% confidence interval of 0–2.4 ms. In contrast, UMPC averaged 8.3 ms per insulin dose computation, with a 99% confidence interval of 0–46 ms. This is because UMPC required numerical iterations and a third-party numerical solver, whereas NAP required only standard matrix multiplication operations.

The heaviest computational tasks for NAP were a few vector-matrix multiplications of order 256. Thus, NAP offered greater simplicity and stability compared with the iterative model-predictive algorithm and is therefore more suitable for implementation in devices with low processing power, such as insulin pumps or pods.
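To put "a few vector-matrix multiplications of order 256" in perspective, a rough multiply-accumulate count for a forward pass is easy to work out. The layer widths below are assumed for illustration only; the actual NAP architecture is described in Supplementary Data S1.

```python
# Hypothetical layer widths (input, hidden, hidden, output) -- not the real NAP.
layer_shapes = [(8, 256), (256, 256), (256, 1)]
macs = sum(rows * cols for rows, cols in layer_shapes)
print(macs)  # multiply-accumulates per dose computation -> 67840
```

A fixed count on the order of 10^5 multiply-accumulates per dose, with no data-dependent iteration, is what makes the worst-case execution time easy to bound on a low-power microcontroller, in contrast to an iterative solver whose iteration count varies with the input.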

Discussion

Virtually all contemporary AID algorithms rely on approximations of the human metabolic system by equations (in the case of PID) or by a model (in the case of MPC).15 Empirical controllers have been introduced as well, incorporating clinical knowledge into the insulin dosing decision-making process.21 Adaptive AID algorithms have been introduced over the years, attempting to compensate for the ever-changing physiology of their users.37–40 This is typically done by using a person's data from the previous hours, days, or weeks and re-estimating accordingly the parameters of the model underlying the control algorithm. Mathematically, such re-estimation requires rather complex numerical methods, as closed-form solutions rarely exist. As a result, contemporary adaptive algorithms have challenges running on devices with low processing power, such as insulin pumps.

In some cases, such as the CamAPS AID system, the demand for computing power is satisfied by placing the control algorithm on a smartphone.41 However, integrated AID devices that do not rely on external computing resources, for example, Medtronic's AHCL,22,23 Tandem's Control-IQ,1,2,4,7 and Insulet's Omnipod 5,42 cannot afford to run complex nonlinear algorithms.

To summarize, the current AID systems face two major shortcomings: (1) Any learning or adaptation of the AID algorithm is based only on the data patterns of this algorithm's user and does not have a mechanism to utilize the vast amounts of data collected in various databases, and (2) Any learning or adaptation of the AID algorithm requires substantial computing power.

Contemporary Data Science methods, such as machine learning and AI, can remedy both of these shortcomings. The first step toward this concept is translating an established AID control algorithm into a Data Science "environment," for example, creating a neural network approximation of a model-predictive controller. This pilot-feasibility study is the first test of NAP, an algorithm that results from encoding an insulin dosing rule into a neural network.

This process is based on the concept of a “Saturated Dataset,” that is, an ensemble of examples of the dosing rule computation that is sufficiently dense and sufficiently wide, so that the deviation between the original dosing rule and the resulting neural network is kept within predefined limits. If the preset limits are sufficiently small, the neural net can be considered a safe and efficacious alternative to the original dosing rule.

The results from this randomized crossover study show that the NAP concept works as intended—all predefined conditions for success, that is, proximity of NAP to its original UMPC algorithm, have been met. Moreover, any glycemic control discrepancies between the NAP and UMPC sessions of the study can be entirely attributed to external factors—the behavioral and physiological variance of the study participants between the two sessions.

In the clean experiment afforded by this design, with NAP and UMPC presented with identical input data throughout the study, the insulin dosing recommendations of the two algorithms were virtually identical.

Further, the computational demands of NAP were sixfold lower than those of UMPC, without requiring third-party numerical solvers or libraries. While this comparison was done on a computer (Intel Core i5), where both 1.4 and 8.3 ms are exceedingly small, the difference would be essential when the algorithm is scaled down to fit in an insulin pump.

This result would justify the "NAPing" of any AID algorithm before embedding it in limited-power devices. Future optimization of the neural network structure and size could yield adaptive, fully automated AID controllers capable of running on virtually any device with a processor.

In addition to a proof of the NAP concept, this study has certain regulatory consequences: by definition, NAP is a “black-box” algorithm, as is any neural network or AI computational system, meaning that its inner operations are not transparent to the outside observer. This is a risk that must be managed, especially when the “black box” is a medical device. In this first study, mitigations included a well-defined saturated dataset, an external safety supervision system, and the safety procedures of the clinical protocol. In future studies and real-life applications, special attention should be paid to the training of the neural network.

Conclusions

This pilot-feasibility study tested the concept of a neural-network encoding of a complex model-predictive AID control algorithm. In a randomized crossover trial, the neural net demonstrated similar performance to the original algorithm, at a fraction of the computational demands. Studies to follow will expand this concept further and will open new areas of research, for example, adding adaptation based not only on the data of a user, but also on patterns of others recorded in databases, enabling a fully automated closed loop on computationally limited devices, or providing a base for AI and reinforcement learning.

Acknowledgments

The NAP Study Team included: Melissa Schoelwer, Dillon Cullipher, Erian Crocker, Emma Emory, David Fulkerson, Morgan Fuller, Jacob Hellman, Viola Holmes, Madison Maloney, Mary Oliveri, Lianna Smith, Anas El Fathi, Chaitanya Koravi, Giulio Santini, Jenny Diaz, Marcela Moscoso-Vasquez, Patricio Colmegna, and Marc Breton. Tandem Diabetes Care provided insulin pumps for this study but did not influence the design, conduct, or reporting of this study.

Contributor Information

for the NAP Study Team, Center for Diabetes Technology, University of Virginia School of Medicine, Charlottesville, Virginia, USA.

for the NAP Study Team:

Melissa Schoelwer, Dillon Cullipher, Erian Crocker, Emma Emory, David Fulkerson, Morgan Fuller, Jacob Hellman, Viola Holmes, Madison Maloney, Mary Oliveri, Lianna Smith, Anas El Fathi, Chaitanya Koravi, Giulio Santini, Jenny Diaz, Marcela Moscoso-Vasquez, Patricio Colmegna, and Marc Breton

Authors' Contributions

B.K. co-created the NAP concept, contributed to the study design, was the sponsor of the Investigational Device Exemption by FDA, and wrote the first draft of this manuscript. A.C. and E.P. co-designed the NAP algorithm and contributed to the engineering study design, data analysis, and manuscript writing. L.K. was Project and Nursing Manager and contributed to the writing of this manuscript. C.B. was responsible for the data retrieval and contributed to the writing of this manuscript. M.D.D. and S.A.B. contributed to the study design, trial execution as the study physicians, and manuscript writing.

Author Disclosure Statement

B.K. declares research support from Dexcom, Novo Nordisk, and Tandem Diabetes Care, and patent royalties handled by the University of Virginia's Licensing and Ventures Group from Dexcom, Lifescan, Novo Nordisk, and Sanofi. M.D.B. has received research support to UVA from Dexcom, Tandem Diabetes Care, and Medtronic. S.A.B. has received research support to UVA from Dexcom, Insulet, Roche, Tandem Diabetes Care, and Tolerion.

Funding Information

National Institutes of Health/National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) Grant R01 DK133148. REDCap at the University of Virginia is supported in part by the National Center for Advancing Translational Sciences of the NIH under Award # UL1TR003015.

Supplementary Material

Supplementary Data S1
Supplementary Table S1

References

  • 1. Brown SA, Kovatchev BP, Raghinaru D, et al. ; for the iDCL Trial Research Group. 6-month randomized multicenter trial of closed-loop control in type 1 diabetes. N Engl J Med 2019;381:1707–1717. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2. Breton MD, Kanapka LG, Beck RW, et al. ; for the iDCL Trial Research Group. A randomized trial of closed-loop control in children with type 1 diabetes. N Engl J Med 2020;383:836–845. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3. Burnside MJ, Lewis DM, Crocket HR, et al. Open-source automated insulin delivery in type 1 diabetes. N Engl J Med 2022;387(10):869–881; doi: 10.1056/NEJMoa2203913 [DOI] [PubMed] [Google Scholar]
  • 4. Wadwa RP, Reed ZW, Buckingham BA, et al. Trial of hybrid closed-loop control in young children with type 1 diabetes. N Engl J Med 2023;388(11):991–1001; doi: 10.1056/NEJMoa2210834 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5. Bionic Pancreas Research Group; Russell SJ, Beck RW, Damiano ER, et al. Multicenter, randomized trial of a bionic pancreas in type 1 diabetes. N Engl J Med 2022;387(13):1161–1172; doi: 10.1056/NEJMoa2205225 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6. Boughton CK, Allen JM, Ware J, et al. Closed-loop therapy and preservation of C-peptide secretion in type 1 diabetes. N Engl J Med 2022;387(10):882–893; doi: 10.1056/NEJMoa2203496 [DOI] [PubMed] [Google Scholar]
  • 7. Kovatchev BP, Singh H, Mueller L, et al. Biobehavioral changes following transition to automated insulin delivery: A large real-life database analysis. Diabetes Care 2022;45(11):2636–2643; doi: 10.2337/dc22-1217 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8. Arrieta A, Battelino T, Scaramuzza AE, et al. Comparison of MiniMed 780G system performance in users aged younger and older than 15 years: Evidence from 12 870 real-world users. Diabetes Obes Metab 2022;24(7):1370–1379; doi: 10.1111/dom.14714 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9. Forlenza GP, Carlson AL, Galindo RJ, et al. Real-world evidence supporting tandem control-IQ hybrid closed-loop success in the medicare and medicaid type 1 and type 2 diabetes populations. Diabetes Technol Ther 2022;24(11):814–823; doi: 10.1089/dia.2022.0206 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10. Benhamou PY, Adenis A, Lebbad H, et al. One-year real-world performance of the DBLG1 closed-loop system: Data from 3706 adult users with type 1 diabetes in Germany. Diabetes Obes Metab 2023;25(6):1607–1613; doi: 10.1111/dom.15008 [DOI] [PubMed] [Google Scholar]
  • 11. Matejko B, Juza A, Kieć-Wilk B, et al. One-year follow-up of advance hybrid closed-loop system in adults with type 1 diabetes previously naive to diabetes technology: The effect of switching to a calibration-free sensor. Diabetes Technol Ther 2023;25(8):554–558; doi: 10.1089/dia.2023.0059 [DOI] [PubMed] [Google Scholar]
  • 12. Lombardo F, Passanisi S, Alibrandi A, et al. MiniMed 780G six-month use in children and adolescents with type 1 diabetes: Clinical targets and predictors of optimal glucose control. Diabetes Technol Ther 2023;25(6):404–413; doi: 10.1089/dia.2022.0491 [DOI] [PubMed] [Google Scholar]
  • 13. Wang XS, Dunlop AD, McKeen JA, et al. Real-world use of Control-IQ™ technology automated insulin delivery in pregnancy: A case series with qualitative interviews. Diabet Med 2023;40(6):e15086; doi: 10.1111/dme.15086 [DOI] [PubMed] [Google Scholar]
  • 14. Phillip M, Nimri R, Bergenstal RM, et al. Consensus recommendations for the use of automated insulin delivery technologies in clinical practice. Endocr Rev 2023;44(2):254–280; doi: 10.1210/endrev/bnac022 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15. Doyle III FJ, Huyett LM, Lee JB, et al. Closed-loop artificial pancreas systems: Engineering the algorithms. Diabetes Care 2014;37:1191–1197. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16. Hovorka R, Chassin LJ, Wilinska ME, et al. Closing the loop: The ADICOL experience. Diabetes Technol Ther 2004;6:307–318. [DOI] [PubMed] [Google Scholar]
  • 17. Hovorka R, Canonico V, Chassin LJ, et al. Nonlinear model predictive control of glucose concentration in subjects with type 1 diabetes. Physiol Meas 2004;25:905–920. [DOI] [PubMed] [Google Scholar]
  • 18. Steil GM, Rebrin K, Darwin C, et al. Feasibility of automating insulin delivery for the treatment of Type 1 Diabetes. Diabetes 2006;55:3344–3350. [DOI] [PubMed] [Google Scholar]
  • 19. Kovatchev BP, Patek SD, Dassau E, et al. Control-to-range for diabetes: Functionality and modular architecture. J Diabetes Sci Technol 2009;3:1058–1065. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20. Grosman B, Dassau E, Zisser HC, et al. Zone model predictive control: A strategy to minimize hyper- and hypoglycemic events. J Diabetes Sci Technol 2010;4:961–975. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21. Atlas E, Nimri R, Miller S, et al. MD-Logic artificial pancreas system: A pilot study in adults with type 1 diabetes. Diabetes Care 2010;33(5):1072–1076; doi: 10.2337/dc09-1830
  • 22. Collyns OJ, Meier RA, Betts ZL, et al. Improved glycemic outcomes with Medtronic MiniMed advanced hybrid closed-loop delivery: Results from a randomized crossover trial comparing automated insulin delivery with predictive low glucose suspend in people with type 1 diabetes. Diabetes Care 2021;44(4):969–975; doi: 10.2337/dc20-225
  • 23. Bergenstal RM, Nimri R, Beck RW, et al.; FLAIR Study Group. A comparison of two hybrid closed-loop systems in adolescents and young adults with type 1 diabetes (FLAIR): A multicentre, randomised, crossover trial. Lancet 2021;397:208–219; doi: 10.1016/S0140-6736(20)32514-9
  • 24. El-Khatib FH, Russell SJ, Nathan DM, et al. A bihormonal closed-loop artificial pancreas for type 1 diabetes. Sci Transl Med 2010;2:27ra27.
  • 25. Castellanos LE, Balliro CA, Sherwood JS, et al. Performance of the insulin-only iLet bionic pancreas and the bihormonal iLet using dasiglucagon in adults with type 1 diabetes in a home-use setting. Diabetes Care 2021;44(6):e118–e120; doi: 10.2337/dc20-1086
  • 26. Castillo A, Villa-Tamayo MF, Pryor E, et al. Deep neural network architectures for an embedded MPC implementation: Application to an automated insulin delivery system. IFAC-PapersOnLine 2023;56(2):11521–11526.
  • 27. Garcia-Tirado J, Diaz JL, Esquivel-Zuniga R, et al. Advanced closed-loop control system improves postprandial glycemic control compared with a hybrid closed-loop system following unannounced meal. Diabetes Care 2021; doi: 10.2337/dc21-0932
  • 28. Place J, Robert A, Ben Brahim N, et al. DiAs web monitoring: A real-time remote monitoring system designed for artificial pancreas outpatient trials. J Diabetes Sci Technol 2013;7:1427–1435.
  • 29. Kovatchev BP, Keith-Hynes PT, Breton MD, et al. Unified platform for monitoring and control of blood glucose levels in diabetic patients. U.S. Patent No. 10,610,154, granted 2020. Available from: https://patents.google.com/patent/US10610154B2
  • 30. Cobelli C, Renard E, Kovatchev BP, et al. Pilot studies of wearable artificial pancreas in type 1 diabetes. Diabetes Care 2012;35:e65–e67.
  • 31. DeSalvo D, Keith-Hynes P, Peyser T, et al. Remote glucose monitoring in camp setting reduces the risk of prolonged nocturnal hypoglycemia. Diabetes Technol Ther 2013;16(1):1–7; doi: 10.1089/dia.2013.0139
  • 32. Kovatchev BP, Renard E, Cobelli C, et al. Safety of outpatient closed-loop control: First randomized crossover trials of a wearable artificial pancreas. Diabetes Care 2014;37:1789–1796.
  • 33. Chernavvsky DR, DeBoer MD, Keith-Hynes P, et al. Use of an artificial pancreas among adolescents for a missed snack bolus and an underestimated meal bolus. Pediatr Diabetes 2016;17(1):28–35; doi: 10.1111/pedi.12230
  • 34. Brown SA, Kovatchev BP, Breton MD, et al. Multinight “bedside” closed-loop control for patients with type 1 diabetes. Diabetes Technol Ther 2015;17(3):203–209; doi: 10.1089/dia.2014.0259
  • 35. Kovatchev B, Cheng P, Anderson SM, et al. Feasibility of long-term closed-loop control: A multicenter 6-month trial of 24/7 automated insulin delivery. Diabetes Technol Ther 2017;19(1):18–24.
  • 36. Battelino T, Danne T, Bergenstal RM, et al.; International Time-in-Range Consensus. Clinical targets for continuous glucose monitoring data interpretation: Recommendations from the international consensus on time in range. Diabetes Care 2019;42:1593–1603.
  • 37. Schaller HC, Schaupp L, Bodenlenz M, et al. On-line adaptive algorithm with glucose prediction capacity for subcutaneous closed loop control of glucose: Evaluation under fasting conditions in patients with type 1 diabetes. Diabet Med 2006;23(1):90–93; doi: 10.1111/j.1464-5491.2006.01695.x
  • 38. Shi D, Dassau E, Doyle FJ. Adaptive zone model predictive control of artificial pancreas based on glucose- and velocity-dependent control penalties. IEEE Trans Biomed Eng 2019;66(4):1045–1054; doi: 10.1109/TBME.2018.2866392
  • 39. Pinsker JE, Dassau E, Deshpande S, et al. Outpatient randomized crossover comparison of zone model predictive control automated insulin delivery with weekly data driven adaptation versus sensor-augmented pump: Results from the International Diabetes Closed-Loop Trial 4. Diabetes Technol Ther 2022;24(9):635–642; doi: 10.1089/dia.2022.0084
  • 40. Sun X, Rashid M, Hobbs N, et al. Incorporating prior information in adaptive model predictive control for multivariable artificial pancreas systems. J Diabetes Sci Technol 2022;16(1):19–28; doi: 10.1177/19322968211059149
  • 41. Ware J, Boughton CK, Allen JM, et al. Cambridge hybrid closed-loop algorithm in children and adolescents with type 1 diabetes: A multicentre 6-month randomised controlled trial. Lancet Digit Health 2022;4(4):e245–e255; doi: 10.1016/S2589-7500(22)00020-6
  • 42. Brown SA, Forlenza GP, Bode BW, et al. Multicenter trial of a tubeless, on-body automated insulin delivery system with customizable glycemic targets in pediatric and adult participants with type 1 diabetes. Diabetes Care 2021;44(7):1630–1640; doi: 10.2337/dc21-0172

Associated Data

Supplementary Materials

Supplementary Data S1
Supplementary Table S1

Articles from Diabetes Technology & Therapeutics are provided here courtesy of Mary Ann Liebert, Inc.