Abstract
Streaming data from continuous glucose monitoring (CGM) systems enable the recursive identification of models that improve estimation accuracy for effective predictive glycemic control in patients with type-1 diabetes. A drawback of conventional recursive identification techniques is their growing computational requirements, a concern for online and real-time applications such as artificial pancreas systems implemented on handheld devices and smartphones, where computational resources and memory are limited. To improve predictions in such computationally constrained hardware settings, efficient adaptive kernel filtering algorithms are developed in this paper to characterize the nonlinear glycemic variability. A sparsification criterion based on information theory reduces the computation time and complexity of the kernel filters without deteriorating the predictive performance. Furthermore, the adaptive kernel filtering algorithms are designed to be insensitive to abnormal CGM measurements, thus compensating for measurement noise and disturbances. As such, the sparsification-based real-time model update framework can adapt the prediction models to accurately characterize the time-varying and nonlinear dynamics of glycemic measurements. The proposed recursive kernel filtering algorithms, which leverage sparsity for improved computational efficiency, are applied to both in-silico and clinical subjects, and the results demonstrate the effectiveness of the proposed methods.
Keywords: Kernel filtering algorithms, sparsification, type-1 diabetes (T1D)
I. Introduction
INDIVIDUALS with type-1 diabetes (T1D), an autoimmune disease in which the pancreas stops producing insulin, must administer exogenous insulin to maintain their blood glucose concentration (BGC) within a safe target range (70–180 mg/dL) in order to avoid the health complications associated with hypo- and hyperglycemia [1]. The recent advances in continuous glucose monitoring (CGM) systems [2]–[4] provide high-frequency glucose measurements with information on glycemic variabilities for informed therapeutic decision-making.
The improved accuracy of CGM sensors has enabled the accurate and reliable estimation of future glucose values for predictive control. Artificial pancreas (AP) systems benefit from accurate online predictions of future glucose concentrations through expedited and proactive intervention that can potentially mitigate or avoid undesirable glycemic excursions to critical levels [5]–[7]. The AP systems use the historical and current measurements from the CGM sensor to provide short-term predictions using appropriate models and inform patients with T1D to make the necessary therapy adjustments [8], [9]. For patient convenience and to facilitate frequent uninterrupted communications between the insulin pump and CGM sensor, the AP systems are typically deployed on mobile handheld devices and smartphones where computing resources are limited [8], [10]–[12]. Therefore, online learning algorithms must be computationally tractable to maintain consistent and reliable operation, yet provide accurate predictions of the time-varying glycemic dynamics for effective operation [13]–[15].
Numerous BGC prediction techniques based on compartmental physical models or data-driven empirical models have been proposed for use in AP systems incorporating model predictive control (MPC) for computing the optimal insulin dosing decisions [16]–[18]. Physical models characterize the complex and intricate physiological and chemical phenomena through an in-depth understanding of the underlying mechanisms [19], [20]. These parameterized physiological models are typically trained using experimental data or measurements, though the inherent nonlinear differential algebraic equation systems may be computationally intractable for identifying personalized models online and for real-time predictive control [21]. In contrast, data-driven models provide a relatively simpler structure that readily permits the efficient and consistent estimation of model parameters to adequately capture the time-varying relationships among the variables [22], [23]. Although the identified standard empirical models are sufficient for glucose prediction, they may be limited in their effectiveness if not appropriately adapted online, as the glycemic dynamics vary substantially due to the effects of numerous factors such as current and historical glucose trends, meal carbohydrate (CHO) amount, administered insulin, exercise and physical activity [24], and concentrations of certain hormones [25]. Many of these factors evolve dynamically over time and their effects are not readily inferred from the available measurements. Moreover, glucose dynamics may vary substantially among individuals, and over time within people, due to their diverse lifestyles and habits [12], [26]–[28]. As a result, challenges to accurately predict glucose values are abundant and must be appropriately addressed.
In an effort to address the complications in accurately forecasting glucose values, various approaches involving complex nonlinear methods are proposed, including artificial neural network models (NNMs) that are trained using CGM data and possibly auxiliary information from electronic patient records (i.e., carbohydrate intakes, insulin dosages, hypoglycemic and hyperglycemic symptoms, lifestyle activities, and emotional status) [29], [30]. Although NNMs are widely employed in modeling of complex relationships between the input–output process variables, the approach often requires a substantial amount of training data and considerable learning time. In addition, the time-varying dynamics in glucose measurements and abnormal sensor readings (i.e., calibration-related abrupt transitions) are not readily addressed in some of the classical prediction methods.
Beyond neural networks, kernel methods provide a universal nonlinear empirical modeling framework with the added advantage of yielding computationally tractable convex optimization problems [31]. The well-developed kernel-based algorithms include support vector machines and regression, kernel principal component analysis, and various kernel filtering algorithms [32]–[38]. Over the past few decades, these kernel methods have been successfully applied in many diverse areas such as image processing, biomedical engineering, and communications. A concern for kernel-based online learning algorithms, as with other analogous empirical modeling approaches, is that the computational complexity and memory requirements typically grow superlinearly with the number of training samples [38].
One approach to address the limited memory capacity and low power consumption operating environment of handheld AP systems is to implement the AP by using embedded systems or integrated circuits with microcontrollers or microprocessors for reliable (and potentially wireless) data transmission between the CGM sensor, controller, and insulin pump [1], [8], [11], [12], [39]–[43]. Another direction for improving the computational efficiency involves software improvements, such as the reformulation of the control algorithm to shift the computationally onerous optimization problem to an offline procedure, through explicit/multiparametric MPC, and thus render the online control law calculations as expedited function evaluations. Regardless of the nature of the controller employed, the model-based predictive control algorithms stand to benefit from adaptive models that better characterize the time-varying glycemic dynamics.
Although various strategies for AP hardware and software improvements are proposed, decreasing the computational complexity and improving the fitting efficiency of the model adaptation procedure in AP systems can be explored further. Therefore, additional prediction accuracy and computational improvements may be realized from mitigating the stringent computational complexity requirements and improving the online learning efficiency of AP systems using notions of sparse computational algorithms to prevent the size of the kernel functions from becoming prohibitively large and to avoid model overfitting. These sparsification methods select a compact dictionary of finite size sufficient for representing the training data using a limited number of samples through various information theoretic criteria [34]. Exploiting these sparse kernel methods, variants of the traditional kernel-based filtering algorithms, such as the kernel least mean squares [44], kernel affine projection [35], and kernel recursive least-squares (KRLS) algorithms [45], can be developed for computationally efficient online learning.
Motivated by the above considerations, extensions for the kernel-based modeling approaches are proposed in this paper that employ the sparsification criteria for improved computational efficiency in online glucose prediction for patients with T1D. The remainder of the paper is organized as follows. Section II provides a brief overview of the kernel recursive algorithm and introduces two online sparsification criteria. Then, a detailed description of the proposed sparse KRLS model for online glucose prediction is presented in Section III. In the proposed approach, the sparsification-based models are trained using only the current and historical CGM measurements and validated using data from in-silico subjects (University of Virginia [UVa]/Padova metabolic simulator) and subjects participating in clinical experiments. Section IV analyzes the computational complexity of the proposed algorithms. Section V provides a detailed description of the approach for modeling the CGM measurements, followed by a discussion of the modeling results and computational improvements. Concluding remarks are provided in Section VI.
II. Preliminaries
A. Nonlinear Kernel Adaptive Filters
The kernel-based adaptive filtering algorithms, developed within the framework of reproducing kernel Hilbert spaces (RKHS), provide an elegant and efficient method to handle nonlinearity in the data. To this end, Mercer kernels are applied in these algorithms to efficiently transform the conventional linear algorithms into their respective nonlinear versions [46]. Accordingly, the original input data $u_i$ from the input space $\mathbb{U}$ are mapped nonlinearly into a feature space $\mathbb{F}$ as
$$\varphi : \mathbb{U} \to \mathbb{F}, \qquad u_i \mapsto \varphi(u_i) \tag{1}$$
The input space $\mathbb{U}$ is a compact subset of $\mathbb{R}^L$, $\kappa : \mathbb{U} \times \mathbb{U} \to \mathbb{R}$ is a positive definite kernel, and the feature space $\mathbb{F}$ is the so-called RKHS. The inner products in $\mathbb{F}$ can be computed through a positive definite and symmetric kernel function [46]
$$\kappa(u_i, u_j) = \langle \varphi(u_i), \varphi(u_j) \rangle \tag{2}$$
Under Mercer’s condition, different kinds of kernels, such as projective kernels and radial kernels, can be directly applied to compute the inner products involved in the RKHS. Harmonic analysis can also be used to design an appropriate kernel for a nonstationary Gaussian process [47]. Among these kernels, the Gaussian kernel function is commonly applied and is given by
$$\kappa(u_i, u_j) = \exp\left(-\frac{\|u_i - u_j\|^2}{2\sigma^2}\right) \tag{3}$$
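For concreteness, the Gaussian kernel in (3) and the associated Gram matrix can be sketched in a few lines of Python (the paper's experiments used MATLAB; the function names here are illustrative only):

```python
import numpy as np

def gaussian_kernel(u, v, sigma=1.0):
    """Gaussian kernel kappa(u, v) = exp(-||u - v||^2 / (2*sigma^2))."""
    diff = np.asarray(u, dtype=float) - np.asarray(v, dtype=float)
    return float(np.exp(-np.dot(diff, diff) / (2.0 * sigma ** 2)))

def gram_matrix(U, sigma=1.0):
    """Gram matrix K with K[i, j] = kappa(u_i, u_j) for rows u_i of U."""
    U = np.asarray(U, dtype=float)
    sq = np.sum(U ** 2, axis=1)
    # squared pairwise distances, clipped at zero against round-off
    d2 = sq[:, None] + sq[None, :] - 2.0 * U @ U.T
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))
```

The kernel width sigma is the only hyperparameter; it controls how quickly the similarity between two glucose-history vectors decays with their distance.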
In such kernel-based regression techniques, a nonlinear mapping is evaluated as a linear combination of a given kernel and input data set [31], which can also be seen as a growing radial basis function network [34]
$$f(u) = \sum_{i=1}^{n} \omega_i\, \kappa(u_i, u) \tag{4}$$
The above function is still linear in the coefficients ωi stored in the memory during training.
To define the kernel-based learning algorithm, consider the problem of least-squares regression in an offline scenario with available input–output data $\{(u_i, d_i)\}_{i=1}^{n}$. Denoting the coefficient vector $\omega = [\omega_1, \ldots, \omega_n]^T$, the kernel-based learning process can be defined as finding the $\omega$ that minimizes
$$\min_{\omega}\; \|d - K\omega\|^2 + \lambda\, \omega^T K \omega \tag{5}$$
where $d = [d_1, \ldots, d_n]^T$ is the target vector of the training data, $K$ is the Gram matrix with elements $K_{ij} = \kappa(u_i, u_j)$, and $\lambda$ is a regularization parameter. The solution of (5) is
$$\omega = (K + \lambda I)^{-1} d \tag{6}$$
where I denotes the identity matrix of appropriate dimension.
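The offline solution (6) amounts to a single regularized linear solve. A self-contained Python sketch (assuming a Gaussian kernel; this is not the paper's code, and the names are ours) is:

```python
import numpy as np

def rbf(a, b, sigma=1.0):
    """Gaussian kernel used as an example kernel function."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma ** 2)))

def fit_kernel_ridge(U, d, kernel, lam=1e-2):
    """Solve (K + lam*I) w = d, i.e., the solution (6) of problem (5)."""
    n = len(U)
    K = np.array([[kernel(U[i], U[j]) for j in range(n)] for i in range(n)])
    return np.linalg.solve(K + lam * np.eye(n), np.asarray(d, dtype=float))

def predict(u, U, w, kernel):
    """Evaluate the kernel expansion f(u) = sum_i w_i * kappa(u_i, u) of (4)."""
    return float(sum(wi * kernel(ui, u) for wi, ui in zip(w, U)))
```

With a small regularization parameter, the fitted expansion interpolates the training targets closely; larger values of lam trade fit for smoothness.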
In KRLS [45], the above least-squares problem is formulated in the feature space and the inner products in RKHS that arise when determining the solution can readily be calculated using the kernel functions. Then, the weighted cost function of the online KRLS algorithm can be defined as finding the weight vector ω that minimizes the cost function
$$\min_{\omega}\; \sum_{i=1}^{n} \big(d_i - \omega^T \varphi(u_i)\big)^2 + \lambda \|\omega\|^2 \tag{7}$$
The aim of the KRLS algorithm is to recursively and effectively update online the solution vector as new data become available [35].
B. Sparsification Criteria of Kernel Methods
In contrast to the conventional linear recursive least squares relying on a covariance matrix of fixed dimensions, the dimension of the kernel function in KRLS increases with the number of input data. The incrementally augmented kernel matrix in (6) in turn causes computational complexity to increase and memory requirements for information storage to intensify. Moreover, the higher dimension of vector ω may also lead to overfitting and poor generalization ability of the model.
To overcome this drawback, sparsification methods based on the information theoretic approach are used in the kernel-based online learning algorithms. The basic idea of these methods is to prevent the size of kernel functions K from becoming prohibitively large. Over the last decade, several sparsification criteria for selecting a finite proper dictionary have been employed in KRLS algorithms. Employing the finite-dimensional summary of the training data, the reduced-order model can be written as [35]
$$f(u) = \sum_{j=1}^{m} \omega_j\, \kappa(c_j, u) \tag{8}$$
In this expression, the sparsified data set $\{c_j\}_{j=1}^{m}$ is a subset of the original data set with $m \le n$. As a result, the new finite dictionary is composed of a subset of samples of the original dictionary such that the new finite dictionary is sufficient for capturing the relationships among the variables in the complete data. An elegant method of building the compact dictionary online is through recursive updating, which involves checking whether the new kernel function is appropriate to be added to the sparsified subset. Therefore, for each new input–output pair, an admission criterion is needed to determine whether the input vector should be admitted to the dictionary or omitted. Two commonly used sparsification criteria, the approximate linear dependency (ALD) criterion [45] and the surprise criterion (SC) [34], are described next.
1). Approximate Linear Dependency Criterion:
In the ALD criterion [45], the new kernel function of a newly available data sample $u_{n+1}$ is admitted to the dictionary if it cannot be written as an approximate linear combination of the kernel-mapped previous vectors $\{\varphi(u_j)\}_{j=1}^{m}$. Hence, the ALD criterion tests the following cost:
$$\delta_{n+1} = \min_{a}\, \Big\| \sum_{j=1}^{m} a_j\, \varphi(u_j) - \varphi(u_{n+1}) \Big\|^2 \tag{9}$$
which quantifies the distance in the RKHS of the new input data un+1 to the linear span of the data already present in the dictionary. The ALD criterion in (9) is readily computed with the solution given by
$$\delta_{n+1} = \kappa(u_{n+1}, u_{n+1}) - h^T a \tag{10}$$

where $h = [\kappa(u_1, u_{n+1}), \ldots, \kappa(u_m, u_{n+1})]^T$ and $a = K^{-1} h$.
The thresholds $\upsilon_1$ and $\upsilon_2$ are specified to check whether the new data $u_{n+1}$ should be added to the dictionary or omitted.
Case 1: $\delta_{n+1} \le \upsilon_1$: the new input data $u_{n+1}$ are redundant; the dictionary is unchanged.
Case 2: $\upsilon_1 < \delta_{n+1} < \upsilon_2$: add the new input data $u_{n+1}$ into the dictionary and update the learning system.
Case 3: $\delta_{n+1} \ge \upsilon_2$: the new input data $u_{n+1}$ are abnormal and should be further investigated; the dictionary is unchanged.
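The ALD measure of (9)–(10) and the resulting three-way decision can be sketched as follows; this is an illustrative Python implementation (function name and threshold values are ours, and a small jitter term is added for numerical stability):

```python
import numpy as np

def ald_test(u_new, dictionary, kernel, v1, v2):
    """ALD measure delta = kappa(u,u) - h^T K^{-1} h (eq. (10)) with the
    three-way decision: 'redundant', 'learn', or 'abnormal'."""
    m = len(dictionary)
    K = np.array([[kernel(ci, cj) for cj in dictionary] for ci in dictionary])
    h = np.array([kernel(ci, u_new) for ci in dictionary])
    a = np.linalg.solve(K + 1e-10 * np.eye(m), h)  # jitter guards near-singular K
    delta = kernel(u_new, u_new) - float(h @ a)
    if delta <= v1:
        decision = "redundant"   # Case 1: dictionary unchanged
    elif delta < v2:
        decision = "learn"       # Case 2: admit u_new and update the model
    else:
        decision = "abnormal"    # Case 3: flag for further investigation
    return delta, decision
```

A sample already in the dictionary yields delta near zero (redundant), while a sample far from every dictionary element yields delta near kappa(u, u) (abnormal, for a sufficiently large upper threshold).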
2). Surprise Criterion:
In [34], an admission criterion termed SC was proposed to quantify the uncertainty of a new input–output data in relation to the current knowledge of the learning system. This criterion uses an information theoretic method that captures the surprise of the new exemplar and allows for adding or discarding the sample to the previous learning system. Therefore, the dictionary growth of the online filter could be effectively curbed and overfitting avoided.
For the incoming data pattern $\{u_{n+1}, d_{n+1}\}$, the SC is defined as
$$S_{n+1} = -\ln p\big(\{u_{n+1}, d_{n+1}\} \mid \mathcal{D}_n\big) \tag{11}$$
where $p(\{u_{n+1}, d_{n+1}\} \mid \mathcal{D}_n)$ is the posterior probability distribution of $\{u_{n+1}, d_{n+1}\}$ hypothesized by the dictionary $\mathcal{D}_n$.
For the training process, the SC-KRLS algorithm computes the SC at the $i$th iteration of the training data by

$$S_i = \frac{1}{2}\ln r_i + \frac{e_i^2}{2 r_i} - \ln p(u_i) \tag{12}$$

where $r_i$ can be obtained by standard recursive least squares (RLS) and is mathematically equal to the ALD measure, $e_i = d_i - h_i^T \omega_{i-1}$ is the prediction error with the weight vector $\omega_{i-1}$ obtained in (6), the kernel matrix is computed over the current dictionary of variable size $m_i$, and $p(u_i)$ is the probability distribution of $u_i$, which can be derived by assuming the input vectors are normally distributed. Then, we have
$$p(u_i) = \frac{1}{(2\pi\sigma_u^2)^{L/2}} \exp\!\left(-\frac{\|u_i - \mu_u\|^2}{2\sigma_u^2}\right) \tag{13}$$
Therefore, by discarding the constant terms, the SC (12) is simplified to
$$S_i = \frac{1}{2}\ln r_i + \frac{e_i^2}{2 r_i} \tag{14}$$
Through the evaluation of $S_i$, the KRLS gives a general dictionary selection framework by setting two thresholds $\tau_1$ and $\tau_2$ for redundancy removal, abnormality detection, and knowledge discovery.
Case 1: $S_i < \tau_1$: the training pair $\{u_i, d_i\}$ is redundant; the dictionary is unchanged.
Case 2: $\tau_1 < S_i < \tau_2$: add the new pair $\{u_i, d_i\}$ to the dictionary and update the learning system.
Case 3: $S_i > \tau_2$: the training pair $\{u_i, d_i\}$ is abnormal and should be further investigated; the dictionary is unchanged.
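A corresponding sketch of the simplified surprise measure (14) is shown below; for clarity it uses a batch solve of (6) in place of the recursive RLS update (our simplification, not the paper's implementation, and the function name is ours):

```python
import numpy as np

def surprise(u_new, d_new, dictionary, targets, kernel, lam=1e-2):
    """Simplified surprise S = 0.5*ln(r) + e^2 / (2r), as in eq. (14).

    r is the RLS-type variance term (the ALD measure plus lam) and e is the
    prediction error of the current dictionary model on (u_new, d_new)."""
    m = len(dictionary)
    K = np.array([[kernel(ci, cj) for cj in dictionary] for ci in dictionary])
    h = np.array([kernel(ci, u_new) for ci in dictionary])
    w = np.linalg.solve(K + lam * np.eye(m), np.asarray(targets, float))  # eq. (6)
    e = d_new - float(h @ w)                        # prediction error
    r = lam + kernel(u_new, u_new) - float(h @ np.linalg.solve(K + lam * np.eye(m), h))
    return 0.5 * np.log(r) + e ** 2 / (2.0 * r)
```

A pair that the current model already predicts well yields a small (even negative) surprise, while a pair with a large prediction error yields a large surprise and would fall in the learnable or abnormal range.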
III. Glucose Prediction Using Online Sparse KRLS Algorithms
Combining the KRLS algorithm with the sparsification criteria (ALD and SC), the online sparse variants of the KRLS algorithm are obtained, and designated as ALD-KRLS and SC-KRLS. When applying the sparse KRLS algorithms for online glucose prediction in APs, several problem-specific practical considerations should be recognized.
A. Time-Varying Nature of Glucose Dynamics
As the dynamics of glucose measurements vary significantly over time, more weight can be given to the relatively recent data points than to the earlier data. For this reason, a forgetting mechanism is often used so that the training data in the distant past are down-weighted exponentially relative to the more recent data [35]. By de-emphasizing the earlier data, the proposed online predictor can better track the temporal dynamics. A forgetting factor $\beta$ is added to the online sparse KRLS algorithm to depreciate the earlier data in the weighted cost function as [35]
$$\min_{\omega}\; \sum_{i=1}^{n} \beta^{\,n-i} \big(d_i - \omega^T \varphi(u_i)\big)^2 + \beta^{\,n} \lambda \|\omega\|^2 \tag{15}$$
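The effect of the forgetting factor can be seen in a small helper that evaluates the exponentially weighted squared-error sum over a window of residuals (illustrative only; the name is ours):

```python
import numpy as np

def weighted_cost(errors, beta):
    """Exponentially weighted sum of squared errors: the newest residual gets
    weight 1, and a residual i steps in the past is discounted by beta**i."""
    e = np.asarray(errors, dtype=float)
    n = len(e)
    w = beta ** np.arange(n - 1, -1, -1)   # oldest residual gets beta**(n-1)
    return float(np.sum(w * e ** 2))
```

With beta close to 1 the cost approaches the ordinary sum of squares; smaller beta makes the predictor track recent glycemic dynamics more aggressively.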
B. Noise Reduction in CGM Sensor Data
For online prediction applications, the time-series glucose data from the CGM sensor are mixed with noise that may dominate the true signal at high frequency and affect the accuracy of the online prediction algorithm. To address this issue, denoising and smoothing filters, such as the Savitzky–Golay filter [48], can be used in the offline training process to limit the high-frequency disturbances and avoid suboptimal model identification.
C. Threshold Selection
The threshold parameters are important for the effective implementation of the sparse filtering algorithms. However, the thresholds are problem-dependent parameters, and the choice of the thresholds and learning strategies defines the characteristics of the learning system. In this paper, we first run the offline training process to obtain an original data set of the informatics indexes (computed using the ALD and SC criteria) and sort them from largest to smallest value. Then, we set the thresholds according to the characteristics of the CGM data. In the context of online glucose prediction, the following steps are proposed to select appropriate thresholds.
1) Step 1: Use a large $\upsilon_2$ for the ALD-KRLS algorithm and a large $\tau_2$ for the SC-KRLS algorithm to disable the abnormality detection.
2) Step 2: Use a small $\upsilon_1$ for the ALD-KRLS algorithm and a small $\tau_1$ for the SC-KRLS algorithm to disable the redundancy detection.
3) Step 3: Run the offline training process to obtain an original data set of the informatics index.
4) Step 4: According to the real characteristics of the CGM data, set reasonable thresholds for the next offline training process. For example, if the real training data set has 10% abnormal data and 50% redundant data, then $\upsilon_1$ and $\tau_1$ can be chosen as the median value of the informatics index vector, and $\upsilon_2$ and $\tau_2$ can be set to the value larger than 90% of the informatics index values.
5) Step 5: Use the new thresholds to train the sparse prediction model, then use this model to start online learning and glucose prediction.
6) Step 6: After every $N_{\text{hyp}}$ sampling instances, re-estimate the thresholds of the prediction model to avoid the negative effects of the nonstationary environment.
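Step 4 can be implemented, for example, by reading the thresholds off the sorted informatics-index values; the fractions below (50% redundant, 10% abnormal) match the example in the text, and the helper name is ours:

```python
import numpy as np

def select_thresholds(index_values, redundant_frac=0.5, abnormal_frac=0.1):
    """Pick (lower, upper) thresholds from an offline batch of ALD/SC indexes.

    The lower threshold marks the most-redundant fraction of samples (smallest
    index values) and the upper threshold marks the most-abnormal fraction
    (largest index values)."""
    v = np.sort(np.asarray(index_values, dtype=float))
    lower = v[int(redundant_frac * (len(v) - 1))]          # e.g. the median for 50%
    upper = v[int((1.0 - abnormal_frac) * (len(v) - 1))]   # e.g. the 90th percentile
    return lower, upper
```

Re-running this selection every $N_{hyp}$ samples, as in Step 6, lets the thresholds drift with the nonstationary glucose dynamics.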
D. Abnormal Situation Detection Such as Calibration
Despite the significant improvement in CGM sensor accuracy over the past decade, the current generation of CGM sensors requires periodic online recalibration, per U.S. Food and Drug Administration (FDA) requirements, to maintain safe operation of the insulin therapy [49]. When a recalibration of the sensor occurs, artifacts are induced: the current measurements can change abruptly while the previous data remain unaffected. Thus, the input vector does not represent the underlying glucose dynamics because of the inclusion of the abrupt change attributed to the sensor recalibration. Therefore, recalibrations of current CGMs must be properly handled in online algorithms. Specifically, online recalibration results in a bias $\Delta u_i$ between the sensor output and the corrected estimate of the CGM value after recalibrating. While sensor recalibration is crucial for maintaining the accuracy of the CGM measurements, the recalibration causes an abrupt change that does not reflect a change in the underlying glucose dynamics. To address this problem, consider a new input vector available online when recalibration is performed at the $i$th iteration
$$u_i = [y_{i-L+1}, \ldots, y_{i-1}, \tilde{y}_i]^T \tag{16}$$
where $L$ is the dimension of the input vector and $\tilde{y}_i$ denotes the recalibrated value with the added identified bias term. To keep the representation of the real glucose dynamics free of this recalibration artifact, the identified bias term $\Delta u_i$ is incorporated into all the previous input elements in $u_i$ to generate the pseudoinput vector
$$\tilde{u}_i = [y_{i-L+1} + \Delta u_i, \ldots, y_{i-1} + \Delta u_i, \tilde{y}_i]^T \tag{17}$$
Using this pseudoinput vector as the input data for the online learning system circumvents the negative effects of recalibration, thus ensuring accurate predictions that are not affected by the recalibration.
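The pseudoinput construction of (17) is essentially a one-line correction: the identified recalibration bias is added to all pre-recalibration elements of the input vector. A sketch, assuming the vector is laid out oldest-to-newest with only the last element taken after recalibration (function name ours):

```python
import numpy as np

def pseudoinput(u, delta):
    """Shift the pre-recalibration elements of the input vector by the
    identified bias delta, so the vector reflects the glucose dynamics
    rather than the calibration jump (eq. (17))."""
    u = np.asarray(u, dtype=float).copy()
    u[:-1] += delta   # last element is already on the recalibrated scale
    return u
```

For example, a +25 mg/dL recalibration jump is absorbed by shifting the older samples up by 25, leaving the sample-to-sample trend intact.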
E. Computational Efficiency
Since the number of observed data points grows continuously, the capacity of the dictionary needs to be limited. Otherwise, the learning algorithm may lead to overfitting or instability of the identified model. To address this problem, the maximum capacity of the learning dictionary is set to a constant $M_{\text{dic}}$. When the size of the dictionary exceeds its maximum finite upper limit $M_{\text{dic}}$, a sample has to be removed from the dictionary. One way to reduce the size of the dictionary to the capacity limit is to update the information criteria and discard the least informative sample. This approach was tested and found to have no significant effect on the model prediction accuracy compared to discarding the oldest samples for both the ALD and SC kernel-based algorithms, especially when $M_{\text{dic}}$ is set to be sufficiently large so that it can retain enough informative samples. The drawback of discarding the least informative sample is the additional computational burden. In the glycemic modeling application, the characteristics of glucose dynamics change over time, and thus more importance should be given to recent samples than to the old measurements for good tracking performance. Such an approach is similar to the moving-window and forgetting-factor techniques commonly applied in many applications [32], [36], [50]. Therefore, in this paper, we discard the oldest samples and retain the newest samples as a computationally efficient approach to prune the size of the dictionary to a prespecified finite limit.
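Discarding the oldest sample once the capacity $M_{dic}$ is reached maps naturally onto a fixed-length queue; a minimal sketch (class name ours):

```python
from collections import deque

class BoundedDictionary:
    """Fixed-capacity dictionary of (center, target) pairs that discards the
    oldest pair when the limit M_dic is reached, favoring recent samples."""

    def __init__(self, capacity):
        self.centers = deque(maxlen=capacity)
        self.targets = deque(maxlen=capacity)

    def add(self, u, d):
        # deque(maxlen=...) silently drops the oldest entry when full
        self.centers.append(u)
        self.targets.append(d)

    def __len__(self):
        return len(self.centers)
```

Each append is O(1), so pruning adds no per-sample cost, unlike re-scoring the dictionary to find the least informative sample.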
F. Parameter Estimation
To reduce the high computational cost of estimating and adapting the model parameters at each iteration, the model is adapted only every $N_{\text{hyp}}$ sampling instances.
Initially, the kernel filtering algorithm is designed without the sparsification criterion to evaluate the effects of the hyperparameters on the predictive accuracy via a cross-validation approach. After selecting the best values for the hyperparameters, the chosen parameter values are used online when utilizing the kernel function. To initialize the algorithm, training data for obtaining an initial estimate of the model are collected, with the training set size set to the maximum capacity of the dictionary. Then, the model is identified to obtain a preliminary prediction model. For online prediction, when a new observation of CGM data $u_{n+1}$ becomes available, online predictions can be made using the proposed ALD-KRLS and SC-KRLS algorithms.
For online glucose prediction using the ALD-KRLS algorithm, the following steps are iterated for every new data point.
1) Compute the ALD criterion $\delta_{n+1}$ for the new input data $u_{n+1}$ using (10).
2) If $\delta_{n+1} \le \upsilon_1$, the newly available input data $u_{n+1}$ are redundant; the dictionary is unchanged.
3) If $\upsilon_1 < \delta_{n+1} < \upsilon_2$, add the newly available input data $u_{n+1}$ into the dictionary and update the learning system.
4) If $\delta_{n+1} \ge \upsilon_2$, the newly available input data $u_{n+1}$ obtained from the CGM are abnormal; set the calibration flag to 1 and use the pseudoinput instead of $u_{n+1}$ for online prediction.
5) Compute the prediction result in the feature space using (6) and (8).
6) After every $N_{\text{hyp}}$ sampling instances, re-estimate the parameters and thresholds $\upsilon_1$ and $\upsilon_2$.
7) Proceed to the next data point.
For online glucose prediction using the SC-KRLS algorithm, the following steps are iterated for every new data point.
1) Compute the SC $S_i$ for the new data pattern $\{u_i, d_i\}$. For the short-term prediction horizon (PH) $H$, the output $d_i$ is the current CGM measurement, and the input vector is the past sample $u_{i-H}$. Therefore, the new data pattern is $\{u_{i-H}, d_i\}$.
2) If $S_i \le \tau_1$, the new pair is redundant for the current learning system.
3) If $\tau_1 < S_i < \tau_2$, the new pair is learnable for the current learning system.
4) If $S_i > \tau_2$, the new pair is abnormal; use the pseudoinput instead of $u_i$ for online prediction.
5) Compute the prediction result in the feature space using (6) and (8).
6) After every $N_{\text{hyp}}$ sampling instances, re-estimate the parameters and thresholds $\tau_1$ and $\tau_2$.
7) Proceed to the next data point.
IV. Computational Complexity
The proposed ALD and SC-based sparse KRLS algorithms are developed with the primary objectives of avoiding the overfitting of the identified models and mitigating the negative effects of abnormal and redundant data on the model predictions. The application of the sparsification criteria yields models that characterize well the appropriately chosen training data retained in the dictionary, while the reduction in computational complexity is an inherent benefit of the sparsification criteria. The computational complexity of the RLS algorithm follows a square law; the complexity scales quadratically with the dimension of the parameter vector $w$ [51]. One of the challenges with the high-dimensional feature spaces of kernel-based algorithms is the rapid increase in computational complexity with dimensionality [52]. For the KRLS algorithm, the complexity becomes $O(i^2)$ at iteration $i$, which leads to ad hoc approaches for negotiating the tradeoffs between data rates (or database sizes) and hardware constraints (such as memory storage) [36]. The sparse kernel methods can overcome this inherent limitation by selecting the most appropriate subset of samples for training purposes using informative criteria. For both the ALD-KRLS and SC-KRLS algorithms, the computational complexity becomes $O(m_i^2)$, where $m_i$ is the cardinality of the present dictionary as well as the effective number of centers in the network at time $i$ [35]. By adjusting the values of the thresholds, the final network size (dictionary dimension) can be controlled by the different criteria (ALD or SC), which implies that after an incipient period in which the order of the model increases, the computational complexity is effectively limited to the square of the network size $m_i$.
V. Results and Discussion
In this paper, all the prediction methods mentioned above were programmed in MATLAB R2012a for offline training and online updating of the models as well as for predicting future glucose values. The model predictions were compared with the actual CGM measurements. The sampling time of the CGM sensor is 5 min. The first period of the CGM data set is used for offline training. Then, the prediction model obtained after offline training is incorporated for real-time model adaptation and online glucose prediction. Due to the recursive updating of the kernel filtering model parameters, the 30-min-ahead predictions are computed at each sampling time, with the current and past measurements used to train and update the recursive models and the future data used to evaluate the predictive performance. For online model adaptation, the desired response for a certain PH is unknown in real time, though the current measurement can be used as the desired target for the sample obtained at the t − PH sampling instance to update the parameters of the prediction models.
For quantitative evaluation of the model accuracy, let $y(i)$ denote the CGM measurements and $\hat{y}(i)$ denote the predicted glucose values. Then, the model prediction accuracy can be assessed based on the following metrics.
- Mean absolute relative deviation [MARD (%)] [53]

$$\text{MARD} = \frac{100}{N} \sum_{i=1}^{N} \frac{|y(i) - \hat{y}(i)|}{y(i)} \tag{18}$$

This metric indicates how closely the prediction results match the actual measurements.
- Root-mean-square error [RMSE (mg/dL)] [54]

$$\text{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \big(y(i) - \hat{y}(i)\big)^2} \tag{19}$$

where $N$ is the number of samples.
- Prediction-error grid analysis (PRED-EGA) [55], [56]: PRED-EGA is a new version of the continuous glucose-EGA [57], which is shown to be a reliable and robust method for evaluating the accuracy of the predictions in terms of both accurate nominal values and accurate derivative or rate of change of the predictions. It includes two interacting components: 1) point-EGA (P-EGA) for evaluating the prediction accuracy and 2) rate-EGA (R-EGA) for assessing the capability of the model to characterize the derivative and rate of change of the measurement values.
- Network size: this criterion quantifies the number of centers in the learning dictionary. For online updating algorithms, the network size increases proportionally to the number of samples. For sparse algorithms, the network size is effectively limited to reduce the computational complexity. Therefore, the network size metric indicates the computational load and the learning efficiency of the online adaptive algorithms.
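The two scalar error metrics (18)–(19) translate directly into code; a minimal Python sketch (the paper's evaluation was done in MATLAB, and the function names here are ours):

```python
import numpy as np

def mard(y, y_hat):
    """Mean absolute relative deviation in percent, eq. (18)."""
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    return float(100.0 * np.mean(np.abs(y - y_hat) / y))

def rmse(y, y_hat):
    """Root-mean-square error in mg/dL, eq. (19)."""
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))
```

Note that MARD normalizes each error by the measured value, so a 10 mg/dL miss near hypoglycemia is penalized more heavily than the same miss in hyperglycemia.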
A. In-Silico Subjects
For the evaluation of in-silico subjects, the online prediction of BGC values is performed using the academic version of the FDA-approved UVa/Padova metabolic simulator [58], which has 30 in-silico subjects (ten adults, ten adolescents, and ten children). Each subject is operated in closed loop with a generalized predictive controller (GPC) [59] for a duration of six days. The same meal plan is used for adults and adolescents, and the amount of carbohydrates in meals was reduced for children. The meal amounts and meal times for each group of subjects are given in Table I. The proposed algorithms do not use any meal information for model identification and glucose-level predictions. To replicate the random noise observed in CGM measurements, zero-mean Gaussian white noise is added to the CGM data. To further improve the realism of the in-silico study, the abrupt changes in CGM measurements due to the sensor recalibrations are also incorporated in the data.
TABLE I.
Meal Information for In-Silico Subjects
| Subjects | Day 1 and Day 4 | | | | Day 2 and Day 5 | | | | Day 3 and Day 6 | | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Meal | B | L | D | S | B | L | D | S | B | L | D | S |
| Time | 9:45 | 13:30 | 17:45 | 21:30 | 9:10 | 13:45 | 18:00 | 22:00 | 9:00 | 14:00 | 18:20 | 22:30 |
| Adults & Adolescents | 48 | 47 | 75 | 31 | 55 | 70 | 65 | 20 | 40 | 68 | 75 | 25 |
| Children | 36 | 35.25 | 56.25 | 23.25 | 41.25 | 52.5 | 48.75 | 15 | 30 | 51 | 56.25 | 18.75 |

B: Breakfast; L: Lunch; D: Dinner; S: Snack; unit of meal amount: gram.
For offline training, glucose measurements from three days (864 points) are used to estimate the algorithm parameters and identify an initial prediction model. The PH is specified as six steps ahead (equivalent to 30 min ahead). The model order (length of the input vector) L is set to 6. Then, for the given data set, the training data size can be chosen within the range of 1–799 samples (864 − PH − L + 1). In order to demonstrate the effectiveness of the proposed approach, two adults, two adolescents, and two children were randomly selected from the in-silico subjects to detail the achieved prediction accuracy.
After using the three-day data set for offline model training, three predictive models based on KRLS, SC-KRLS, and ALD-KRLS are available for online updating and prediction. Figs. 1(a), 2(a), and 3(a) show the 30-min-ahead predictions for different subjects (Adult Subject #9, Adolescent Subject #5, and Child Subject #6) obtained from the KRLS, ALD-KRLS, and SC-KRLS algorithms. Figs. 1(b), 2(b), and 3(b) show the ALD and SC criteria used for removing redundant samples and detecting abnormal measurements.
Fig. 1.
Comparison of measured and 30-min-ahead predicted glucose values using the KRLS, ALD-KRLS, and SC-KRLS algorithms for Adult Subject #9. (a) Prediction results along with actual CGM measurements. (b) Criteria of ALD and SC.
Fig. 2.
Comparison of measured and 30-min-ahead predicted glucose values using the KRLS, ALD-KRLS, and SC-KRLS algorithms for Adolescent Subject #5.
Fig. 3.
Comparison of measured and 30-min-ahead predicted glucose values using the KRLS, ALD-KRLS, and SC-KRLS algorithms for Child Subject #6.
The prediction results obtained from the KRLS, SC-KRLS, and ALD-KRLS algorithms are accurate, and the predicted glucose profiles closely coincide with the actual measurements (Figs. 1–3). The glucose predictions based on the KRLS model fluctuate more than the other approaches, especially when the glucose values change abruptly as the CGM sensor is recalibrated. The abrupt transitions in the CGM measurements do not affect the prediction results of the SC-KRLS and ALD-KRLS models because the proposed criteria detect and remove the abnormal measurements. As such, the faulty measurements are detected and excluded from the model updating procedure to avoid model overfitting or instability.

Table II summarizes the 30-min-ahead prediction accuracy and performance of the different methods based on four metrics (MARD, RMSE, PRED-EGA, and network size) for the selected in-silico subjects (two adults, two adolescents, and two children). In Table II, the prediction errors given by MARD and RMSE are presented in the third and fourth columns, respectively. In most cases, SC-KRLS has the smallest MARD and RMSE, followed by ALD-KRLS. Both yield smaller prediction errors than KRLS because the online sparsification-based criteria can accommodate the time-varying glucose dynamics. The accuracy of the proposed algorithms is also evaluated in three distinct regions of blood glucose (BG) values, namely, the hypoglycemia (BG ≤ 70 mg/dL), euglycemia (BG: 70–180 mg/dL), and hyperglycemia (BG ≥ 180 mg/dL) ranges. The percentage of samples within these ranges that are predicted with high accuracy (reported as accurate) or low accuracy (reported as error), in terms of both the nominal values and the veracity of the derivative of the glucose trajectories, is also reported.
The samples classified as benign errors are those that are predicted accurately, yet have predicted trajectories whose derivatives do not closely correspond with the rate of change of the actual CGM measurement profiles. These are reported as benign errors because they often do not have severe negative consequences in clinical settings, since the predicted glucose values correspond closely with the actual measurements. As the BGC of the in-silico subjects is regulated using the GPC approach [59], hypoglycemia is avoided in two subjects (Adult Subject #9 and Adolescent Subject #5). The differences between the model predictions are readily observed, and SC-KRLS has higher prediction accuracy and a smaller online network size than the other two models based on KRLS and ALD-KRLS.
TABLE II.
Glucose Prediction Accuracy for In-Silico Subjects
| Subject | Model | MARD (%) | RMSE (Mean ± SD) | BG ≤ 70: Accurate (%) | BG ≤ 70: Benign (%) | BG ≤ 70: Error (%) | BG 70–180: Accurate (%) | BG 70–180: Benign (%) | BG 70–180: Error (%) | BG ≥ 180: Accurate (%) | BG ≥ 180: Benign (%) | BG ≥ 180: Error (%) | Online Network Size |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Adult #3 | KRLS | 19.322 | 16.769 ± 16.335 | 44.09 | 16.54 | 39.37 | 67.43 | 25.69 | 6.88 | 69.23 | 23.08 | 7.69 | 576 |
| Adult #3 | ALD-KRLS | 18.133 | 15.881 ± 16.121 | 51.97 | 11.02 | 39.37 | 72.02 | 21.33 | 6.65 | 69.23 | 23.08 | 7.69 | 243 |
| Adult #3 | SC-KRLS | 15.509 | 13.868 ± 17.131 | 73.23 | 5.51 | 21.26 | 80.05 | 14.91 | 5.05 | 76.92 | 15.38 | 7.69 | 173 |
| Adult #9 | KRLS | 11.377 | 16.283 ± 14.05 | - | - | - | 62.32 | 31.77 | 5.91 | 63.53 | 31.76 | 4.70 | 576 |
| Adult #9 | ALD-KRLS | 10.103 | 14.51 ± 12.947 | - | - | - | 74.38 | 21.92 | 3.69 | 69.41 | 24.12 | 6.47 | 329 |
| Adult #9 | SC-KRLS | 9.16 | 13.609 ± 15.241 | - | - | - | 86.95 | 11.58 | 1.47 | 81.76 | 13.53 | 4.70 | 192 |
| Adolescent #3 | KRLS | 7.615 | 13.546 ± 15.053 | 83.33 | 0 | 16.67 | 81.33 | 14.22 | 4.44 | 78.38 | 15.62 | 6.00 | 576 |
| Adolescent #3 | ALD-KRLS | 7.386 | 12.91 ± 13.976 | 88.89 | 0 | 11.11 | 83.11 | 13.78 | 3.11 | 81.38 | 13.21 | 5.41 | 218 |
| Adolescent #3 | SC-KRLS | 7.588 | 13.036 ± 14.199 | 83.33 | 0 | 16.67 | 87.56 | 9.33 | 3.11 | 84.99 | 10.81 | 4.20 | 203 |
| Adolescent #5 | KRLS | 7.247 | 15.595 ± 15.348 | - | - | - | 61.33 | 32 | 6.67 | 73.45 | 21.76 | 4.79 | 576 |
| Adolescent #5 | ALD-KRLS | 6.515 | 14.113 ± 12.024 | - | - | - | 74.67 | 20 | 5.33 | 77.64 | 17.17 | 5.19 | 219 |
| Adolescent #5 | SC-KRLS | 6.349 | 13.852 ± 12.961 | - | - | - | 78.67 | 16 | 5.33 | 81.64 | 14.37 | 3.99 | 177 |
| Child #6 | KRLS | 18.507 | 22.633 ± 23.31 | 42.86 | 17.14 | 40 | 56.71 | 31.06 | 12.24 | 75 | 15.52 | 9.48 | 576 |
| Child #6 | ALD-KRLS | 17.436 | 21.192 ± 20.793 | 40 | 17.14 | 42.86 | 59.29 | 31.06 | 9.65 | 78.45 | 12.07 | 9.48 | 199 |
| Child #6 | SC-KRLS | 15.34 | 19.387 ± 21.328 | 62.86 | 5.71 | 31.43 | 68.71 | 24 | 7.29 | 83.62 | 6.90 | 9.48 | 189 |
| Child #8 | KRLS | 11.603 | 17.133 ± 14.198 | 50 | 12.5 | 37.5 | 63.44 | 32.26 | 4.30 | 72.96 | 19.39 | 7.65 | 576 |
| Child #8 | ALD-KRLS | 10.983 | 16.694 ± 14.103 | 50 | 12.5 | 37.5 | 70.97 | 25 | 4.03 | 71.94 | 18.88 | 9.18 | 202 |
| Child #8 | SC-KRLS | 9.67 | 15.139 ± 13.778 | 62.5 | 0 | 37.5 | 84.14 | 13.98 | 1.88 | 78.06 | 17.35 | 4.59 | 294 |
BG ranges and RMSE in mg/dL; Accurate, Benign, and Error are PRED-EGA classifications; "-" indicates no samples in the range.
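The MARD and RMSE values reported in Table II follow their standard definitions; a minimal sketch with toy numbers for illustration (variable names are not from the paper):

```python
import numpy as np

def mard(y_meas, y_pred):
    """Mean absolute relative difference between measured and
    predicted glucose values, in percent."""
    y_meas = np.asarray(y_meas, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(100.0 * np.mean(np.abs(y_pred - y_meas) / y_meas))

def rmse(y_meas, y_pred):
    """Root-mean-square error of the predictions, in mg/dL."""
    y_meas = np.asarray(y_meas, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_pred - y_meas) ** 2)))

y = np.array([100.0, 150.0, 200.0])      # toy measured CGM values (mg/dL)
yhat = np.array([110.0, 145.0, 190.0])   # toy 30-min-ahead predictions
```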
B. Clinical Subjects
Data from clinical experiments with subjects with T1D are used to evaluate the performance of the proposed approach. The clinical subjects, aged 18–35 years, were recruited from the Kovler Diabetes Center, University of Chicago Medical Center, and were scheduled for a visit at the University of Chicago General Clinical Research Center. The closed-loop experiments were approximately 60-h in-hospital trials, and each subject's own insulin type and pump were used during the experiments [59].
The open-loop experiments were performed at the University of Illinois at Chicago, College of Nursing (UIC-CON). The subjects used a manual continuous subcutaneous insulin infusion pump for their glucose regulation based on their own meal bolus and correction insulin calculation methods prescribed by their diabetes care providers. Depending on the subjects' schedules, some experiments spanned two to eight weeks (the total hours of stay at UIC-CON remained the same). Based on these clinical experiments, three cases are considered to evaluate the accuracy of the designed predictor.
The Guardian Real-Time CGM system [60]–[62], which measures glucose in the interstitial tissue, was used to collect glucose concentration readings from the subjects every 5 min. Real data from several clinical subjects are used in analyzing the performance of the proposed sparse filtering algorithms.
Case 1 Description:
Closed-loop CGM data from a three-day period (approximately 56 h, 670 samples) from Subjects A and B are used to evaluate the prediction performance. The first 6 h of data (72 samples) were used to train the prediction models offline, and the 30-min-ahead glucose outputs were then predicted online using the subsequent two days of CGM data (576 samples). The prediction results along with the actual CGM measurements are shown in Figs. 4(a) and 5(a), and the criteria of ALD and SC are shown in Figs. 4(b) and 5(b).
Fig. 4.
Comparison of measured and 30-min-ahead predicted glucose values using KRLS, ALD-KRLS, and SC-KRLS algorithms for Clinical Subject A.
Fig. 5.
Comparison of measured and 30-min-ahead predicted glucose values using KRLS, ALD-KRLS, and SC-KRLS algorithms for Clinical Subject B.
Case 2 Description:
Ten days (approximately 3000 data points) of open-loop CGM data from Subject C are used to evaluate the performance of the proposed predictor. The open-loop data show that the BG values of Subject C are not well regulated, as the CGM measurements are often outside the target range (70–180 mg/dL), which corresponds to more time spent in hypo- and hyperglycemia. In addition, Subject C was allowed to recalibrate the CGM sensor whenever desired, which caused abrupt changes in the CGM measurements on top of the typical variability of the glucose dynamics. This presents a notable challenge for accurately predicting the blood glucose values. The prediction results along with the actual CGM measurements are shown in Fig. 6(a), and the criteria of ALD and SC are shown in Fig. 6(b).
Fig. 6.
Comparison of measured and 30-min-ahead predicted glucose values using KRLS, ALD-KRLS, and SC-KRLS algorithms for Clinical Subject C.
To further evaluate the utility of the prediction algorithms using clinically accepted metrics, the PRED-EGA [56] is used to assess both the accuracy of the predicted BGC values (P-EGA) and the accuracy of the derivative, or rate of change, of the predicted BGC trajectories (R-EGA). The PRED-EGA results consist of two interacting components: the P-EGA measures the point accuracy [Fig. 7(a)–(c)], and the R-EGA measures the rate accuracy [Fig. 7(d)–(f)]. The two criteria are considered simultaneously in a plot matrix representing the accuracy of predictions across various glucose concentration ranges. Although the P-EGA and R-EGA provide distinct information on the predictive performance of the models, considering the two analyses jointly (as shown in Table III) gives a complete assessment of both the accuracy of the predictions and the accuracy of their derivatives.
Fig. 7.
PRED-EGA based on KRLS, ALD-KRLS, and SC-KRLS predictors for Clinical Subject C. (a) P-EGA for KRLS. (b) P-EGA for ALD-KRLS. (c) P-EGA for SC-KRLS. (d) R-EGA for KRLS. (e) R-EGA for ALD-KRLS. (f) R-EGA for SC-KRLS.
TABLE III.
Glucose Prediction Accuracy for Clinical Adult Subjects
| Subject | Model | MARD (%) | RMSE (Mean ± SD) | BG ≤ 70: Accurate (%) | BG ≤ 70: Benign (%) | BG ≤ 70: Error (%) | BG 70–180: Accurate (%) | BG 70–180: Benign (%) | BG 70–180: Error (%) | BG ≥ 180: Accurate (%) | BG ≥ 180: Benign (%) | BG ≥ 180: Error (%) | Online Network Size |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Clinical Subject A | KRLS | 12.069 | 16.382 ± 16.75 | - | - | - | 82.05 | 13.99 | 3.96 | 86.39 | 8.84 | 4.76 | 576 |
| Clinical Subject A | ALD-KRLS | 12.309 | 16.917 ± 16.654 | - | - | - | 82.98 | 13.05 | 3.96 | 87.76 | 7.48 | 4.76 | 322 |
| Clinical Subject A | SC-KRLS | 11.787 | 16.546 ± 18.649 | - | - | - | 83.22 | 12.59 | 4.20 | 79.59 | 11.56 | 8.84 | 293 |
| Clinical Subject B | KRLS | 10.218 | 20.108 ± 21.291 | 100 | 0 | 0 | 77.42 | 19.35 | 3.23 | 66.79 | 16.79 | 16.41 | 576 |
| Clinical Subject B | ALD-KRLS | 10.442 | 19.2 ± 18.662 | 100 | 0 | 0 | 76.45 | 20 | 3.55 | 66.79 | 17.56 | 15.65 | 341 |
| Clinical Subject B | SC-KRLS | 10.611 | 18.587 ± 18.193 | 100 | 0 | 0 | 83.87 | 13.23 | 2.90 | 69.08 | 16.03 | 14.89 | 400 |
| Clinical Subject C | KRLS | 12.228 | 23.814 ± 24.951 | - | - | - | 74.32 | 16.39 | 9.29 | 55.22 | 27.22 | 17.56 | 576 |
| Clinical Subject C | ALD-KRLS | 11.022 | 21.917 ± 22.997 | - | - | - | 79.24 | 12.02 | 8.74 | 61.32 | 23.41 | 15.27 | 185 |
| Clinical Subject C | SC-KRLS | 10.62 | 20.463 ± 23.495 | - | - | - | 83.61 | 9.84 | 6.55 | 74.81 | 12.98 | 12.21 | 321 |
BG ranges and RMSE in mg/dL; Accurate, Benign, and Error are PRED-EGA classifications; "-" indicates no samples in the range.
Figs. 4–6 show the online 30-min-ahead prediction results for Clinical Subjects A, B, and C, respectively. For the ALD-KRLS prediction algorithm, the ALD criterion is used to remove the abnormalities in the CGM measurements; the detected faulty samples could otherwise cause the prediction model to become overfit or even unstable. The details on the computational efficiency of these prediction models and the utility of the algorithms using clinically accepted metrics are given in Table III. For Subject A, the SC-KRLS algorithm has the smallest MARD, the highest PRED-EGA accuracy in the normal range of blood glucose, and the smallest network size, whereas the ALD-KRLS algorithm performs better than SC-KRLS in the hyperglycemia range. For Subject B, the SC-KRLS algorithm has better RMSE and better prediction accuracy in the normal and hyperglycemia ranges. For the open-loop Subject C, as the glucose is not tightly regulated, the RMSE of the online predictor increases compared to the other two subjects (A and B). Nevertheless, the PRED-EGA shows that the SC-KRLS algorithm has higher accuracy than the other two approaches. Although no missing-data handling was performed in this paper, the one-step-ahead online predictions can be used as extrapolated estimates of missing measurements. The results of the clinical experiments show that the online adaptive sparse filtering algorithms can accommodate the time-varying glucose dynamics with a degree of robustness to unknown disturbances, as the subjects consumed unannounced meals and exercised without prior notification or feedforward signals to the algorithm.
C. Prediction Accuracy for Different Horizons
The performance of the KRLS, ALD-KRLS, and SC-KRLS algorithms is evaluated with eight different PHs, from one- to eight-step-ahead predictions. As the CGM sampling time is 5 min, the PHs range from 5 to 40 min. The data set is from Case 3, and two days of CGM data are used for offline training. The mean and standard deviation of the RMSE are calculated to quantify the tradeoff between prediction accuracy and the PH.
The prediction accuracy decreases as the PH increases for all three algorithms (Fig. 8). The SC-KRLS algorithm has better prediction accuracy, especially when the PH is greater than 15 min. Although the prediction model based on KRLS has better accuracy for short-term PHs (up to 10-min-ahead predictions), its performance degrades as the PH grows beyond 15 min. The proposed adaptive kernel filtering algorithms leverage sparsification techniques to improve the predictive performance and computational efficiency beyond those realized by the conventional recursive filtering method. The sparse filtering approaches identify redundant samples by comparing the newly available samples to the existing informative samples stored in a dictionary. The ALD approach achieves this comparison by evaluating whether the new samples can be expressed as a linear combination of the existing samples in the dictionary. The SC approach evaluates the posterior probability of the new samples given the existing dictionary, though the underlying Gaussian process assumption may not be appropriate for all processes. It may be difficult to determine a priori the most appropriate sparsification criterion for a particular application; evaluating both approaches on the specific application is a viable way to select the sparsification method without rigorously testing several metrics for each data set.
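The ALD redundancy test described above can be made concrete with a minimal sketch; the Gaussian kernel width and the threshold nu are assumed placeholder values, not the tuned parameters from the paper:

```python
import numpy as np

def gauss_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel between two sample vectors."""
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.exp(-np.dot(diff, diff) / (2.0 * sigma ** 2)))

def ald_test(dictionary, x_new, nu=1e-2, sigma=1.0):
    """ALD criterion: the new sample is deemed redundant when
    delta = k(x,x) - k_t^T K^{-1} k_t falls below the threshold nu,
    i.e., x_new is nearly a linear combination of the dictionary
    samples in the kernel-induced feature space."""
    K = np.array([[gauss_kernel(a, b, sigma) for b in dictionary]
                  for a in dictionary])
    k_t = np.array([gauss_kernel(a, x_new, sigma) for a in dictionary])
    # small ridge term for numerical stability of the solve
    a = np.linalg.solve(K + 1e-8 * np.eye(len(dictionary)), k_t)
    delta = gauss_kernel(x_new, x_new, sigma) - float(k_t @ a)
    return delta < nu, delta

D = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]
is_dup, d_dup = ald_test(D, np.array([0.0, 0.0]))  # identical to a dictionary atom
is_far, d_far = ald_test(D, np.array([5.0, 5.0]))  # far from the dictionary
```

A redundant sample (small delta) is discarded from the dictionary update, while a novel one (large delta) is admitted; this is what keeps the online network size in Tables II and III well below the full sample count.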
Fig. 8.
Evaluation of the tradeoff between the PH and the prediction accuracy.
VI. Conclusion
Computationally efficient adaptive kernel filtering algorithms are developed to provide online predictions of BGCs in people with T1D. Two sparsification-based criteria, the ALD and SC indexes, are incorporated with the KRLS algorithm to yield the respective sparse versions, ALD-KRLS and SC-KRLS algorithms, which have improved computational efficiency and are insensitive to abnormal or faulty CGM measurements. The online adaptive framework of the sparse filtering algorithms can adequately accommodate the time-varying glucose dynamics. Moreover, the sparsification method reduces the computational load of the adaptive modeling approach and can appropriately handle the abnormal or faulty samples, which is critical to compensate for measurement noise and sensor inaccuracy. The computational efficiency and predictive performance of the proposed models are evaluated using in-silico and clinical subjects, and the results illustrate the improvement in computational complexity and accuracy of the proposed algorithms.
Acknowledgments
This work was supported by the National Institutes of Health under Grant 1DP3DK101075-01 and Grant 1DP3DK101077-01. The work of X. Yu was supported by the China Scholarship Council under Grant 201406085037. Recommended by Associate Editor G. Mercere.
Biography
Xia Yu received the B.Eng. degree in automation and the Ph.D. degree in control theory and control engineering from Northeastern University, Shenyang, China, in 2005 and 2011, respectively.
She has been a Lecturer with the College of Information Science and Engineering, Northeastern University, since 2011. From 2016 to 2017, she was a Visiting Scholar with the Department of Chemical Engineering and Biological Engineering, Illinois Institute of Technology, Chicago, IL, USA, and a member of the Process Modeling, Monitoring, and Control Research Laboratory supervised by A. Cinar. Her current research interests include the development of kernel-based learning algorithms, recursive subject-specific glucose prediction models, fault detection and diagnosis, and an automated closed-loop artificial pancreas algorithm for patients with type 1 diabetes.
Mudassir Rashid received the B.Eng. and Ph.D. degrees from McMaster University, Hamilton, ON, Canada, in 2011 and 2016, respectively.
He is a Senior Research Associate with the Department of Chemical Engineering and Biological Engineering, Illinois Institute of Technology, Chicago, IL, USA. He has authored or co-authored over 26 papers in refereed journals and conference proceedings. His current research interests include process systems engineering, with emphasis on system identification, process monitoring and fault diagnosis, model predictive control, fault tolerant control, and optimization.
Jianyuan Feng received the B.Sc. degree in mechanical engineering from Changzhou University, Changzhou, China, in 2012, and the M.E. degree and the Ph.D. degree in chemical engineering from the Illinois Institute of Technology, Chicago, IL, USA, in 2013 and 2017, respectively.
He is currently a Post-Doctoral Researcher with the College of Nursing, University of Illinois at Chicago, Chicago. His current research interests include controller performance assessment, sensor error detection and analytical sensor redundancy for closed-loop artificial pancreas systems for patients with type 1 diabetes.
Nicole Hobbs received the B.Sc. degree in biomedical engineering from the Illinois Institute of Technology, Chicago, IL, USA, with a specialization in cell and tissue engineering, where she is currently pursuing the Ph.D. degree in biomedical engineering.
Her current research interests include system identification, optimization, process modeling, and blood glucose variations in people with type 1 diabetes during exercise for application in artificial pancreas systems.
Iman Hajizadeh received the B.Sc. and M.Sc. degrees in chemical engineering from the Sharif University of Technology, Tehran, Iran, in 2011 and 2013, respectively, with a primary focus on process control. He is currently pursuing the Ph.D. degree with the Department of Chemical Engineering and Biological Engineering, Illinois Institute of Technology, Chicago, IL, USA.
His current research interests include process control, estimation, system identification, fault-tolerant control, and optimization.
Sediqeh Samadi received the B.Sc. degree in chemical engineering from the University of Tehran, Tehran, Iran, and the M.Sc. degree in chemical engineering from Sharif University, Tehran. She is currently pursuing the Ph.D. degree with the Department of Chemical Engineering and Biological Engineering, Illinois Institute of Technology, Chicago, IL, USA.
Her current research interests include the mathematical modeling of chemical and biological systems based on soft computing and artificial intelligence methods, and modeling and detection of different events and disturbances on blood glucose variation in people with type 1 diabetes.
Mert Sevil received the B.Sc. degree in electrical engineering, the double major B.Sc. degree in computer engineering with primary focus on control algorithm design, detection, and estimation, and the M.Sc. degree in control and automation engineering from Yildiz Technical University, Istanbul, Turkey, in 2013, 2015, and 2015, respectively. He is currently pursuing the Ph.D. degree with the Biomedical Engineering Department, Illinois Institute of Technology, Chicago, IL, USA.
His current research interests include stress detection, exercise detection, energy expenditure estimation, algorithm development, control systems, communication of electronic devices, fuzzy logic estimation, artificial neural network and artificial intelligence algorithms, and data communication.
Caterina Lazaro received the M.Sc. degree in telecommunication engineering from the Technical University of Madrid, Madrid, Spain, in 2014, and the Master of Electrical Engineering degree from the Illinois Institute of Technology, Chicago, IL, USA, in 2014, where she is currently pursuing the Ph.D. degree in computer engineering.
Her current research interests include sensor networks, wireless communications, mobile health systems, and data safety and security.
Zacharie Maloney received the B.A. degree in political science from Brown University, Providence, RI, USA, in 2000, and the B.Sc. degree in biomedical engineering and the Master of Chemical Engineering degree from the Illinois Institute of Technology, Chicago, IL, USA.
He spent several years in financial services before returning to academia. His current research interests include biological systems, process control, and the application of personalized technologies to health care.
Elizabeth Littlejohn received the M.D. degree from the Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA, in 1988.
She was an Associate Professor of pediatrics and medicine at the University of Chicago, Chicago, IL, USA, until 2017 and then moved to Michigan State University, East Lansing, MI, USA. She specializes in pediatric diabetes and endocrinology. As a Clinical Researcher, she is investigating innovative treatments and more efficient diagnostic tools to improve diabetes care for children and young adults.
Laurie Quinn received the Ph.D. degree in nursing science from the University of Illinois at Chicago, Chicago, IL, USA, in 1996.
She is a Clinical Professor at the College of Nursing, University of Illinois at Chicago. She is experienced in the clinical management of patients with diabetes mellitus and has investigated the physiological mechanisms that contribute to their excessive rate of cardiovascular disease. Specifically, she has studied the effects of aerobic exercise on the metabolic determinants of cardiovascular disease in patients with type 2 diabetes (i.e., insulin resistance, oxidative stress, lipid, and lipoprotein abnormalities). A major focus of her work has been examining the influence of aerobic exercise on postprandial metabolism in obese nondiabetic and diabetic patients.
Ali Cinar (SM’15) received the Ph.D. degree in chemical engineering from Texas A&M University, College Station, TX, USA.
He is currently a Professor of chemical engineering and biomedical engineering with the Illinois Institute of Technology, Chicago, IL, USA. Since 2004, he has been the Director of the Engineering Center for Diabetes Research and Education. He has authored or co-authored three books and over 200 technical papers in journals and refereed conference proceedings. A full list of publications, detailed description of research interests, presentations, and software is available at https://engineering.iit.edu/faculty/ali-cinar. His current research interests include agent-based techniques for modeling, supervision, and control of complex systems, modeling of diabetes, angiogenesis and tissue formation, and adaptive control techniques for artificial pancreas systems for people with diabetes.
Dr. Cinar is a fellow of the American Institute of Chemical Engineers.
Contributor Information
Xia Yu, School of Information Science and Engineering, Northeastern University, Shenyang 110819, China.
Mudassir Rashid, Department of Chemical and Biological Engineering, Illinois Institute of Technology, Chicago, IL 60616 USA.
Jianyuan Feng, Department of Chemical and Biological Engineering, Illinois Institute of Technology, Chicago, IL 60616 USA.
Nicole Hobbs, Department of Biomedical Engineering, Illinois Institute of Technology, Chicago, IL 60616 USA.
Iman Hajizadeh, Department of Chemical and Biological Engineering, Illinois Institute of Technology, Chicago, IL 60616 USA.
Sediqeh Samadi, Department of Chemical and Biological Engineering, Illinois Institute of Technology, Chicago, IL 60616 USA.
Mert Sevil, Department of Biomedical Engineering, Illinois Institute of Technology, Chicago, IL 60616 USA.
Caterina Lazaro, Department of Electrical and Computer Engineering, Illinois Institute of Technology, Chicago, IL 60616 USA.
Zacharie Maloney, Department of Electrical and Computer Engineering, Illinois Institute of Technology, Chicago, IL 60616 USA.
Elizabeth Littlejohn, Kovler Diabetes Center, Department of Pediatrics and Medicine, University of Chicago, Chicago, IL 60637 USA.
Laurie Quinn, Department of Biobehavioral Health Science, College of Nursing, University of Illinois at Chicago, Chicago, IL 60612 USA.
Ali Cinar, Department of Chemical and Biological Engineering, Illinois Institute of Technology, Chicago, IL 60616 USA, and also with the Department of Biomedical Engineering, Illinois Institute of Technology, Chicago, IL 60616 USA.
References
- [1].Kirchsteiger H, Jørgensen JB, Renard E, and Del Re L, Prediction Methods for Blood Glucose Concentration: Design, Use and Evaluation. Cham, Switzerland: Springer, 2015. [Google Scholar]
- [2].Klonoff DC, “Continuous glucose monitoring: Roadmap for 21st century diabetes therapy,” Diabetes Care, vol. 28, no. 5, pp. 1231–1239, 2005. [DOI] [PubMed] [Google Scholar]
- [3].Deiss D. et al. , “Improved glycemic control in poorly controlled patients with type 1 diabetes using real-time continuous glucose monitoring,” Diabetes Care, vol. 29, no. 12, pp. 2730–2732, 2006. [DOI] [PubMed] [Google Scholar]
- [4].Juvenile Diabetes Research Foundation Continuous Glucose Monitoring Study Group, “Continuous glucose monitoring and intensive treatment of type 1 diabetes,” New England J. Med, vol. 359, no. 14, pp. 1464–1476, 2008. [DOI] [PubMed] [Google Scholar]
- [5].Turksoy K. et al. , “Hypoglycemia detection and carbohydrate suggestion in an artificial pancreas,” J. Diabetes Sci. Technol, vol. 10, no. 6, pp. 1236–1244, 2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [6].Turksoy K, Samadi S, Feng J, Littlejohn E, Quinn L, and Cinar A, “Meal detection in patients with type 1 diabetes: A new module for the multivariable adaptive artificial pancreas control system,” IEEE J. Biomed. Health Informat, vol. 20, no. 1, pp. 47–54, January 2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [7].Visentin R, Man CD, and Cobelli C, “One-day Bayesian cloning of type 1 diabetes subjects: Toward a single-day UVA/Padova type 1 diabetes simulator,” IEEE Trans. Biomed. Eng, vol. 63, no. 11, pp. 2416–2424, November 2016. [DOI] [PubMed] [Google Scholar]
- [8].Turksoy K, Monforti C, Park M, Griffith G, Quinn L, and Cinar A, “Use of wearable sensors and biometric variables in an artificial pancreas system,” Sensors, vol. 17, no. 3, p. 532, 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [9].Toffanin C, Visentin R, Messori M, Di Palma F, Magni L, and Cobelli C, “Toward a run-to-run adaptive artificial pancreas: In silico results,” IEEE Trans. Biomed. Eng, vol. 65, no. 3, pp. 479–488, March 2018. [DOI] [PubMed] [Google Scholar]
- [10].Keith-Hynes P, Mize B, Robert A, and Place J, “The diabetes assistant: A smartphone-based system for real-time control of blood glucose,” Electronics, vol. 3, no. 4, pp. 609–623, 2014. [Google Scholar]
- [11].Zavitsanou S, Chakrabarty A, Dassau E, and Doyle FJ, “Embedded control in wearable medical devices: Application to the artificial pancreas,” Processes, vol. 4, no. 4, p. 35, 2016. [Google Scholar]
- [12].Bonfanti R. et al. , “Advanced pump functions: Bolus calculator, bolus types, and temporary basal rates,” in Research into Childhood-Onset Diabetes. Cham, Switzerland: Springer, 2017, pp. 173–181. [Google Scholar]
- [13].Cinar A, Turksoy K, and Hajizadeh I, “Multivariable artificial pancreas method and system,” U.S. Patent 15171355, June 2, 2016.
- [14].Garg SK et al. , “Glucose outcomes with the in-home use of a hybrid closed-loop insulin delivery system in adolescents and adults with type 1 diabetes,” Diabetes Technol. Therapeutics, vol. 19, no. 3, pp. 155–163, 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [15].Jacobs PG et al. , “Randomized trial of a dual-hormone artificial pancreas with dosing adjustment during exercise compared with no adjustment and sensor-augmented pump therapy,” Diabetes, Obesity Metabolism, vol. 18, no. 11, pp. 1110–1119, 2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [16].Bremer T. and Gough DA, “Is blood glucose predictable from previous values? A solicitation for data,” Diabetes, vol. 48, no. 3, pp. 445–451, 1999. [DOI] [PubMed] [Google Scholar]
- [17].Man CD, Micheletto F, Sathananthan M, Vella A, and Cobelli C, “Model-based quantification of glucagon-like peptide-1–induced potentiation of insulin secretion in response to a mixed meal challenge,” Diabetes Technol. Therapeutics, vol. 18, no. 1, pp. 39–46, 2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [18].Chakrabarty A, Zavitsanou S, Doyle IF, and Dassau E, “Event-triggered model predictive control for embedded artificial pancreas systems,” IEEE Trans. Biomed. Eng, vol. 65, no. 3, pp. 575–586, March 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [19] Messori M, Toffanin C, Del Favero S, De Nicolao G, Cobelli C, and Magni L, “Model individualization for artificial pancreas,” Comput. Methods Programs Biomed., July 2016, doi: 10.1016/j.cmpb.2016.06.006.
- [20] Piccinini F, Man CD, Vella A, and Cobelli C, “A model for the estimation of hepatic insulin extraction after a meal,” IEEE Trans. Biomed. Eng., vol. 63, no. 9, pp. 1925–1932, September 2016.
- [21] Reifman J, Rajaraman S, Gribok A, and Ward WK, “Predictive monitoring for improved management of glucose levels,” J. Diabetes Sci. Technol., vol. 1, no. 4, pp. 478–486, 2007.
- [22] Cherkassky V and Mulier FM, Learning From Data: Concepts, Theory, and Methods. Hoboken, NJ, USA: Wiley, 2007.
- [23] Araghinejad S, Data-Driven Modeling: Using MATLAB in Water Resources and Environmental Engineering. Dordrecht, The Netherlands: Springer, 2013.
- [24] Diabetes Research in Children Network Study Group, “Impact of exercise on overnight glycemic control in children with type 1 diabetes mellitus,” J. Pediatrics, vol. 147, no. 4, pp. 528–534, 2005.
- [25] Nomura M et al., “Stress and coping behavior in patients with diabetes mellitus,” Acta Diabetol., vol. 37, no. 2, pp. 61–64, 2000.
- [26] Brazeau A-S, Rabasa-Lhoret R, Strychar I, and Mircescu H, “Barriers to physical activity among patients with type 1 diabetes,” Diabetes Care, vol. 31, no. 11, pp. 2108–2109, 2008.
- [27] Pasieka AM et al., “Advances in exercise, physical activity, and diabetes mellitus,” Diabetes Technol. Therapeutics, vol. 19, no. S1, pp. S-94–S-104, 2017.
- [28] Kovatchev B, Tamborlane WV, Cefalu WT, and Cobelli C, “The artificial pancreas in 2016: A digital treatment ecosystem for diabetes,” Diabetes Care, vol. 39, no. 7, pp. 1123–1126, 2016.
- [29] Pappada SM, Cameron BD, and Rosman PM, “Development of a neural network for prediction of glucose concentration in type 1 diabetes patients,” J. Diabetes Sci. Technol., vol. 2, no. 5, pp. 792–801, 2008.
- [30] Pérez-Gandía C et al., “Artificial neural network algorithm for online glucose prediction from continuous glucose monitoring,” Diabetes Technol. Therapeutics, vol. 12, no. 1, pp. 81–88, 2010.
- [31] Schölkopf B and Smola AJ, Learning With Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. Cambridge, MA, USA: MIT Press, 2002.
- [32] Van Vaerenbergh S, Vía J, and Santamaría I, “A sliding-window kernel RLS algorithm and its application to nonlinear channel identification,” in Proc. IEEE Int. Conf. Acoust., Speech Signal Process., vol. 5, May 2006, pp. V-789–V-792.
- [33] Van Vaerenbergh S, Vía J, and Santamaría I, “Nonlinear system identification using a new sliding-window kernel RLS algorithm,” J. Commun., vol. 2, no. 3, pp. 1–8, 2007.
- [34] Liu W, Park I, and Príncipe JC, “An information theoretic approach of designing sparse kernel adaptive filters,” IEEE Trans. Neural Netw., vol. 20, no. 12, pp. 1950–1961, December 2009.
- [35] Liu W, Príncipe JC, and Haykin S, Kernel Adaptive Filtering: A Comprehensive Introduction. Hoboken, NJ, USA: Wiley, 2011.
- [36] Van Vaerenbergh S, Lázaro-Gredilla M, and Santamaría I, “Kernel recursive least-squares tracker for time-varying regression,” IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 8, pp. 1313–1326, August 2012.
- [37] Chen B, Zhao S, Zhu P, and Príncipe JC, “Quantized kernel recursive least squares algorithm,” IEEE Trans. Neural Netw. Learn. Syst., vol. 24, no. 9, pp. 1484–1491, September 2013.
- [38] Fan H, Song Q, and Shrestha SB, “Online learning with kernel regularized least mean square algorithms,” Knowl.-Based Syst., vol. 59, pp. 21–32, March 2014.
- [39] Walsh J and Roberts R, Pumping Insulin: Everything You Need for Success on a Smart Insulin Pump, 4th ed. San Diego, CA, USA: Torrey Pines, 2012.
- [40] Cobelli C et al., “Pilot studies of wearable outpatient artificial pancreas in type 1 diabetes,” Diabetes Care, vol. 35, no. 9, pp. e65–e67, 2012.
- [41] Boiroux D, Jørgensen JB, Poulsen NK, and Madsen H, “Model predictive control algorithms for pen and pump insulin administration,” Dept. Inform. Math. Model., Tech. Univ. Denmark, Kongens Lyngby, Denmark, Tech. Rep. IMM-PHD-2012-283, 2012.
- [42] Al-Taee AM, Al-Taee MA, Al-Nuaimy W, Muhsin ZJ, and AlZu’bi H, “Smart bolus estimation taking into account the amount of insulin on board,” in Proc. IEEE Int. Conf. Comput. Inf. Technol., Ubiquitous Comput. Commun., Dependable, Auton. Secure Comput., Pervasive Intell. Comput. (CIT/IUCC/DASC/PICOM), October 2015, pp. 1051–1056.
- [43] Bequette BW, “Challenges and recent progress in the development of a closed-loop artificial pancreas,” Annu. Rev. Control, vol. 36, no. 2, pp. 255–266, 2012.
- [44] Liu W, Pokharel PP, and Príncipe JC, “The kernel least-mean-square algorithm,” IEEE Trans. Signal Process., vol. 56, no. 2, pp. 543–554, February 2008.
- [45] Engel Y, Mannor S, and Meir R, “The kernel recursive least-squares algorithm,” IEEE Trans. Signal Process., vol. 52, no. 8, pp. 2275–2285, August 2004.
- [46] Vapnik V, The Nature of Statistical Learning Theory. Springer, 2013.
- [47] Zorzi M and Chiuso A, “The harmonic analysis of kernel functions,” 2017. [Online]. Available: https://arxiv.org/abs/1703.05216
- [48] Breton M and Kovatchev B, “Analysis, modeling, and simulation of the accuracy of continuous glucose sensors,” J. Diabetes Sci. Technol., vol. 2, no. 5, pp. 853–862, 2008.
- [49] Choleau C et al., “Calibration of a subcutaneous amperometric glucose sensor implanted for 7 days in diabetic patients: Part 2. Superiority of the one-point calibration method,” Biosensors Bioelectron., vol. 17, no. 8, pp. 647–654, 2002.
- [50] Kou P, Gao F, and Guan X, “Sparse online warped Gaussian process for wind power probabilistic forecasting,” Appl. Energy, vol. 108, pp. 410–428, August 2013.
- [51] Haykin SS, Adaptive Filter Theory. London, U.K.: Pearson, 2008.
- [52] Richard C, Bermudez JCM, and Honeine P, “Online prediction of time series data with kernels,” IEEE Trans. Signal Process., vol. 57, no. 3, pp. 1058–1067, March 2009.
- [53] Noujaim SE, Horwitz D, Sharma M, and Marhoul J, “Accuracy requirements for a hypoglycemia detector: An analytical model to evaluate the effects of bias, precision, and rate of glucose change,” J. Diabetes Sci. Technol., vol. 1, no. 5, pp. 652–668, 2007.
- [54] Gani A, Gribok AV, Lu Y, Ward WK, Vigersky RA, and Reifman J, “Universal glucose models for predicting subcutaneous glucose concentration in humans,” IEEE Trans. Inf. Technol. Biomed., vol. 14, no. 1, pp. 157–165, January 2010.
- [55] Sivananthan S et al., “Assessment of blood glucose predictors: The prediction-error grid analysis,” Diabetes Technol. Therapeutics, vol. 13, no. 8, pp. 787–796, 2011.
- [56] Kovatchev BP, Gonder-Frederick LA, Cox DJ, and Clarke WL, “Evaluating the accuracy of continuous glucose-monitoring sensors: Continuous glucose–error grid analysis illustrated by TheraSense Freestyle Navigator data,” Diabetes Care, vol. 27, no. 8, pp. 1922–1928, 2004.
- [57] Clarke WL, Cox D, Gonder-Frederick LA, Carter W, and Pohl SL, “Evaluating clinical accuracy of systems for self-monitoring of blood glucose,” Diabetes Care, vol. 10, no. 5, pp. 622–628, 1987.
- [58] Kovatchev BP, Breton M, Man CD, and Cobelli C, “In silico preclinical trials: A proof of concept in closed-loop control of type 1 diabetes,” J. Diabetes Sci. Technol., vol. 3, no. 1, pp. 44–55, 2009.
- [59] Turksoy K, Quinn L, Littlejohn E, and Cinar A, “Multivariable adaptive identification and control for artificial pancreas systems,” IEEE Trans. Biomed. Eng., vol. 61, no. 3, pp. 883–891, March 2014.
- [60] Ly TT et al., “Day and night closed-loop control using the integrated Medtronic hybrid closed-loop system in type 1 diabetes at diabetes camp,” Diabetes Care, vol. 38, no. 7, pp. 1205–1211, July 2015.
- [61] Lau YN, Korula S, Chan AK, Heels K, Krass I, and Ambler G, “Analysis of insulin pump settings in children and adolescents with type 1 diabetes mellitus,” Pediatric Diabetes, vol. 17, no. 5, pp. 319–326, August 2016.
- [62] Buhling KJ et al., “Introductory experience with the continuous glucose monitoring system (CGMS; Medtronic MiniMed) in detecting hyperglycemia by comparing the self-monitoring of blood glucose (SMBG) in non-pregnant women and in pregnant women with impaired glucose tolerance and gestational diabetes,” Exp. Clin. Endocrinol. Diabetes, vol. 112, no. 10, pp. 556–560, November 2004.