
TABLE V.

A summary of existing studies applying RNNs for BG level prediction. Each entry lists the dataset, input modalities, model architecture, and performance.

Gu et al. 2017 [91]
• Dataset: 35 non-diabetic subjects, 38 type I and 39 type II diabetic patients.
• Input modalities: past BG data, meal, drug, and insulin intake, physical activity, and sleep quality.
• Model architecture: features were constructed from physiological and temporal perspectives based on the external factors. The proposed Md3RNN model had three divisions: (1) a grouped input layer with three different sets of weights corresponding to non-diabetic, type I, and type II diabetic patients; (2) a shared stacked LSTM; (3) personalized output layers assigned to individual users (a sketch of this layout follows this entry).
• Performance: the average accuracy was 82.14%.
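To make the grouped/shared/personalized split concrete, below is a minimal PyTorch sketch of an Md3RNN-style model. The class name, layer sizes, feature count, and the choice of two stacked LSTM layers are illustrative assumptions, not values taken from [91].

```python
import torch
import torch.nn as nn

class Md3RNNSketch(nn.Module):
    """Group-specific input weights -> shared stacked LSTM -> per-user heads."""
    def __init__(self, n_features, hidden=64, n_groups=3, n_users=112):
        super().__init__()
        # One input projection per group: non-diabetic, type I, type II.
        self.group_inputs = nn.ModuleList(
            [nn.Linear(n_features, hidden) for _ in range(n_groups)]
        )
        # Stacked LSTM shared by all groups and users.
        self.shared_lstm = nn.LSTM(hidden, hidden, num_layers=2, batch_first=True)
        # One personalized output layer per user.
        self.user_heads = nn.ModuleList(
            [nn.Linear(hidden, 1) for _ in range(n_users)]
        )

    def forward(self, x, group_id, user_id):
        # x: (batch, time, n_features) feature sequence from one user.
        h = self.group_inputs[group_id](x)
        out, _ = self.shared_lstm(h)
        return self.user_heads[user_id](out[:, -1])   # predict from the last step

model = Md3RNNSketch(n_features=8, n_users=4)
y = model(torch.randn(2, 24, 8), group_id=1, user_id=0)   # y: (2, 1)
```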
Fox et al. 2018 [95]
• Dataset: 40 patients with type 1 diabetes over three years.
• Input modalities: past BG data only.
• Model architecture: four models with Seq2seq structures were proposed. All encoders used GRUs; only the decoders differed:
  – DeepMO: fully connected layers were used for prediction.
  – SeqMO: a GRU was used as the decoder.
  – PolyMO: multiple fully connected layers learned the coefficients of a polynomial model.
  – PolySeqMO: the latent vector was first fed into a GRU, and the hidden states were used to learn the coefficients of a polynomial model (see the sketch after this entry).
• Performance: PolySeqMO achieved the lowest absolute percentage error, 4.87%.
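As a rough illustration of the PolySeqMO decoder, the sketch below encodes the BG history with a GRU, decodes one step with a second GRU, and maps the decoder state to the coefficients of a degree-2 polynomial evaluated at six normalized future time points. The single decode step, polynomial degree, horizon, and all names are simplifying assumptions; the decoder in [95] is more elaborate.

```python
import torch
import torch.nn as nn

class PolySeqMOSketch(nn.Module):
    """GRU encoder -> GRU decoder -> polynomial coefficients -> BG forecast."""
    def __init__(self, hidden=32, degree=2, horizon=6):
        super().__init__()
        self.encoder = nn.GRU(1, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.coef_head = nn.Linear(hidden, degree + 1)
        # Normalized future time points at which the polynomial is evaluated.
        t = torch.linspace(0.0, 1.0, horizon)
        self.register_buffer(
            "powers", torch.stack([t ** d for d in range(degree + 1)], dim=-1)
        )

    def forward(self, bg_history):
        # bg_history: (batch, time, 1) past glucose readings.
        _, h = self.encoder(bg_history)          # h: (1, batch, hidden)
        latent = h.transpose(0, 1)               # (batch, 1, hidden), one decode step
        dec_out, _ = self.decoder(latent)
        coefs = self.coef_head(dec_out[:, -1])   # (batch, degree + 1)
        return coefs @ self.powers.T             # (batch, horizon) forecast curve

model = PolySeqMOSketch()
forecast = model(torch.randn(4, 24, 1))          # (4, 6): six future points
```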
Dong et al. 2019 [49]
• Dataset: 40 type I diabetic patients.
• Input modalities: past BG data only.
• Model architecture: the raw data sequence was directly fed into a GRU layer, and the output at the last step was fed to two dense layers for the final output. A model was trained on multiple patients and then fine-tuned for an individual patient (a training sketch follows this entry).
• Performance: the mean square errors at 30 and 45 min were 0.419 and 0.594 (mmol/L)², respectively.
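This pipeline is small enough to sketch end to end: a GRU whose last-step output feeds two dense layers, pretrained on pooled data from many patients and then fine-tuned on one patient with a smaller learning rate. The loaders, epoch counts, and learning rates below are placeholder assumptions.

```python
import torch
import torch.nn as nn

class GRUForecaster(nn.Module):
    """GRU over the raw BG sequence; last step -> two dense layers."""
    def __init__(self, hidden=32):
        super().__init__()
        self.gru = nn.GRU(1, hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x):                  # x: (batch, time, 1)
        out, _ = self.gru(x)
        return self.head(out[:, -1])       # prediction from the last step

def train(model, loader, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

model = GRUForecaster()
# Hypothetical loaders: pretrain on all patients, then fine-tune on one.
# train(model, pooled_loader, epochs=50, lr=1e-3)
# train(model, patient_loader, epochs=10, lr=1e-4)
```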
Dong et al. 2019 [93]
• Dataset: 40 type I and 40 type II diabetic patients.
• Input modalities: past BG data only.
• Model architecture: the 1-day sequential BG data were first assigned to clusters by a k-means method. Parallel dense layers were designed for the different clusters: each sequence was first fed into the dense layer corresponding to its cluster, and the outputs of all parallel layers were fed into a shared LSTM for the final output (see the sketch after this entry).
• Performance: for type I, the mean square errors were 0.104, 0.318, and 0.556 (mmol/L)² at 30, 45, and 60 min prediction, respectively; for type II, they were 0.060, 0.143, and 0.306 (mmol/L)².
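A minimal sketch of the cluster-routing idea, using sklearn's KMeans for the assignment; the cluster count, layer sizes, and the 288-sample daily profile (5-min CGM sampling) are illustrative assumptions.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class ClusteredLSTMSketch(nn.Module):
    """k-means cluster -> cluster-specific dense layer -> shared LSTM."""
    def __init__(self, hidden=32, n_clusters=3):
        super().__init__()
        # One dense layer per cluster of daily BG profiles.
        self.cluster_dense = nn.ModuleList(
            [nn.Linear(1, hidden) for _ in range(n_clusters)]
        )
        self.shared_lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x, cluster_id):
        # x: (batch, time, 1); the batch holds sequences from one cluster.
        h = self.cluster_dense[cluster_id](x)
        out, _ = self.shared_lstm(h)
        return self.out(out[:, -1])

# Cluster the 1-day profiles, then route each sequence to its dense layer.
profiles = torch.randn(100, 288)                  # e.g. 288 five-minute samples/day
labels = KMeans(n_clusters=3, n_init=10).fit_predict(profiles.numpy())
idx = (torch.tensor(labels) == 0).nonzero().squeeze(-1)[:4]
model = ClusteredLSTMSketch()
y = model(profiles[idx].unsqueeze(-1), cluster_id=0)   # predictions for cluster 0
```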
He et al. 2020 [92]
• Dataset: 112 subjects, including non-diabetic people and type I and type II diabetic patients.
• Input modalities: past BG data, food and drug intake, activity, sleep quality, time, and other personal configurations.
• Model architecture: the feature sequence was constructed from physiological models and an auto-correlation encoder. The input sequence was first fed into a personal-characteristic dense layer, whose weights and biases were uniquely assigned to each subject; all the subject-specific dense layers shared a GRU layer for the final output (a sketch follows this entry).
• Performance: the root mean square errors were 0.29, 0.47, and 0.91 mmol/L at 15, 30, and 60 min prediction, respectively.
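Personalization here sits at the input rather than at the output (the reverse of the Md3RNN sketch above): each subject owns the first dense layer, and a single GRU is shared. A minimal sketch under the same kind of illustrative assumptions:

```python
import torch
import torch.nn as nn

class PersonalizedGRUSketch(nn.Module):
    """Per-subject dense layer (unique weights/bias) -> shared GRU -> prediction."""
    def __init__(self, n_features, hidden=32, n_subjects=112):
        super().__init__()
        # Each subject owns a private input transformation.
        self.personal = nn.ModuleList(
            [nn.Linear(n_features, hidden) for _ in range(n_subjects)]
        )
        # A single GRU shared across all subjects.
        self.shared_gru = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x, subject_id):
        # x: (batch, time, n_features) feature sequence from one subject.
        h = self.personal[subject_id](x)
        out, _ = self.shared_gru(h)
        return self.out(out[:, -1])

model = PersonalizedGRUSketch(n_features=10, n_subjects=112)
y = model(torch.randn(2, 48, 10), subject_id=7)   # y: (2, 1)
```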
Zhu et al. 2020 [94]
• Dataset: OhioT1DM [96] and an in silico dataset from the UVA-Padova simulator [97].
• Input modalities: past BG recordings, insulin bolus, meal intake, and a time index.
• Model architecture: the electronic health record in a sliding window was directly fed into a dilated RNN built from a 3-layer Elman RNN; the second and third layers had skip connections through time, and the last step was used for the final decision. The model was first trained on both datasets and then fine-tuned for the specific subject (a sketch follows this entry).
• Performance: the root mean square error was 18.9 mg/dL at 30 min prediction.
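A dilated RNN can be sketched with the standard interleaving trick: for dilation d, the sequence is split into d strands so that the recurrent connection inside each strand skips d time steps. The three layers and growing dilations follow the description above, but the exact dilation factors (1, 2, 4 here), sizes, and names are assumptions.

```python
import torch
import torch.nn as nn

def dilated_rnn_pass(cell, x, dilation):
    """Run an RNN with recurrent connections skipping `dilation` time steps:
    interleave the sequence into `dilation` strands, process them as a larger
    batch, then re-interleave the outputs."""
    b, t, f = x.shape
    pad = (-t) % dilation
    x = nn.functional.pad(x, (0, 0, 0, pad))                  # pad the time axis
    strands = x.reshape(b, -1, dilation, f).transpose(1, 2)   # (b, d, T/d, f)
    out, _ = cell(strands.reshape(b * dilation, -1, f))
    out = out.reshape(b, dilation, -1, out.size(-1)).transpose(1, 2)
    return out.reshape(b, -1, out.size(-1))[:, :t]            # drop the padding

class DilatedRNNSketch(nn.Module):
    """Three Elman RNN layers; layers 2 and 3 are dilated through time."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.l1 = nn.RNN(n_features, hidden, batch_first=True)
        self.l2 = nn.RNN(hidden, hidden, batch_first=True)
        self.l3 = nn.RNN(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                          # x: (batch, time, features)
        h = dilated_rnn_pass(self.l1, x, dilation=1)
        h = dilated_rnn_pass(self.l2, h, dilation=2)
        h = dilated_rnn_pass(self.l3, h, dilation=4)
        return self.out(h[:, -1])                  # last step -> final decision

model = DilatedRNNSketch(n_features=4)
y = model(torch.randn(2, 12, 4))                   # y: (2, 1)
```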