Sensors (Basel, Switzerland). 2020 Mar 26;20(7):1840. doi: 10.3390/s20071840

Fiber Bragg Grating Dynamic Calibration Based on Online Sequential Extreme Learning Machine

Qiufeng Shang 1, Wenjie Qin 1,*
PMCID: PMC7181166  PMID: 32224936

Abstract

The calibration process of a fiber Bragg grating (FBG) sensor is critical for optimizing its performance, and real-time dynamic calibration is essential to improve the measurement accuracy of the sensor. In this paper, we present a dynamic calibration method for FBG temperature measurement based on the online sequential extreme learning machine (OS-ELM). During the measurement process, the calibration model is continuously updated rather than retrained, which reduces tedious computation and increases the prediction speed. Compared with polynomial fitting, a back propagation (BP) network, and a radial basis function (RBF) network, the dynamic method not only has a better generalization performance but also a faster learning process. Dynamic calibration allows the real-time measured data of the FBG sensor to be fed continuously into the calibration model as online learning samples, which solves the insufficient coverage problem of static calibration training samples and thus improves the long-term stability, prediction accuracy, and generalization ability of the FBG sensor.

Keywords: optical fiber sensors, fiber Bragg gratings, online sequential extreme learning machine, dynamic calibration

1. Introduction

Fiber Bragg grating (FBG) sensors have considerable advantages, such as high sensitivity, high accuracy, immunity to electromagnetic interference, stable chemical properties, compact size, and light weight. They are widely used in the measurement and monitoring of physical quantities including strain, temperature, and humidity [1,2,3,4,5]. In recent decades, the development of optoelectronic technology has gradually expanded the application range of FBG sensors, which currently find relevant applications in structural health monitoring [6,7], aeronautic prospecting [8], electric measurement [9], the production of medical devices [10,11], composite detection [12], and other fields. By monitoring the Bragg wavelength, it is possible to measure the parameters that induce the wavelength shift of the FBG sensor, namely temperature and/or strain. Calibration determines the mapping relationship between the wavelength and the physical quantity, and it is one of the critical factors affecting the performance of the sensor.

The static calibration of FBG sensor temperature measurement has been researched for a long time. As early as 1998, the authors of [13] pointed out that the Bragg wavelength of fiber gratings has a non-linear relationship with temperature over the range of 4.2–350 K and determined the effect of embedding and of the manufacturing process on the fibers' temperature dependence; it is therefore essential to calibrate the measurements of fiber grating sensors. In 2006, the authors of [14] used a fifth-order polynomial to describe the temperature–wavelength correspondence and found that the wavelength drift caused by temperature change is highly non-linear over the range of 4.2–350 K. In 2012, the authors of [15] proposed a calibration algorithm based on a lookup table, whose size can be selected according to the accuracy of the measurement data and the processing-time requirements. Compared with polynomial fitting calibration, the lookup-table algorithm reduces the processing time and the measurement errors caused by the imperfect fitting of polynomial functions. In addition, the authors of [16] and [17] put forward temperature calibration methods for FBG sensors based on a back propagation (BP) network and a radial basis function (RBF) network, respectively, and found that neural networks achieve a higher calibration accuracy than polynomial fitting, verifying the feasibility of neural networks for complex calibration relationships.

However, in actual engineering, we find that the wavelength–temperature response curve of an FBG sensor changes with time. This change is mainly caused by the temperature drift property of Fabry–Perot (F–P) etalons [18], the FBG pre-stretching amplitude, and the sealability between the FBG and the packaging material [19]. If a static calibration method is adopted, the measurement error is therefore greatly increased. We propose a dynamic calibration method based on an online sequential extreme learning machine (OS-ELM), which has the advantages of a fast learning speed, strong adaptability, and good generalization [20,21]. Additionally, it has been proven that the OS-ELM can be used for online prediction tasks in several fields [22,23,24]. To the best of our knowledge, this is the first study in the past ten years to report an improvement in the long-term stability of predictions through FBG dynamic calibration. This study may provide a new perspective on FBG sensors for temperature measurement.

2. Methods and Experiment Setup

2.1. Extreme Learning Machine

The extreme learning machine (ELM) is the basis of the OS-ELM. The ELM is a single-hidden-layer feedforward neural network consisting of an input layer, a hidden layer, and an output layer. The $N$ training samples and the network output are described by $(x_j, t_j) \in \mathbb{R}^n \times \mathbb{R}^m$, $j = 1, 2, \ldots, N$ and $f_{\tilde{N}}(x_j) = \sum_{i=1}^{\tilde{N}} \beta_i h_i(x_j)$, $j = 1, 2, \ldots, N$, respectively.

Here, $x_j$ is an $n \times 1$ input vector and $t_j$ is an $m \times 1$ target vector. $\tilde{N}$ is the number of hidden nodes, which is an approximation of $N$. $\beta_i$ is the weight between the $i$th hidden node and the output layer, and $h_i(x_j)$ is the output of the $i$th hidden node for input $x_j$, as shown in Equation (1), where $a_i$ is the weight between the input layer and the $i$th node and $b_i$ is the bias of the $i$th node.

$h_i(x_j) = g(a_i \cdot x_j + b_i), \quad j = 1, 2, \ldots, N$ (1)

According to [25], if $N = \tilde{N}$, then $\sum_{i=1}^{\tilde{N}} \beta_i h_i(x_j) = t_j$, $j = 1, 2, \ldots, N$; its matrix form is

$H\beta = T$ (2)

Training the ELM is equivalent to finding the minimum-norm least-squares solution of Equation (2), i.e., minimizing $\|H\beta - T\|$. Let $\tilde{\beta}$ be the least-squares solution of Equation (2); it is given by $\tilde{\beta} = H^{\dagger} T$, where $H^{\dagger}$ is the Moore–Penrose generalized inverse of $H$, which can be computed by the orthogonalization method or the iterative method [25].
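To make the batch training step concrete, the following is a minimal sketch in Python/NumPy under the notation above. It is not the authors' implementation: the function names are illustrative, and the sigmoid activation is simply one of the candidates compared later in Section 3.2.

```python
import numpy as np

def sigmoid(z):
    # Activation g(z) = 1 / (1 + exp(-z))
    return 1.0 / (1.0 + np.exp(-z))

def elm_train(X, T, n_hidden, seed=0):
    """Batch ELM: random hidden layer, least-squares output weights.

    X: (N, n) inputs, T: (N, m) targets.
    Returns (A, b, beta), where A holds the input weights a_i, b the biases b_i,
    and beta solves min ||H beta - T|| via the Moore-Penrose pseudoinverse.
    """
    rng = np.random.default_rng(seed)
    A = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))  # random input weights (fixed, not trained)
    b = rng.uniform(-1.0, 1.0, size=n_hidden)                # random hidden biases
    H = sigmoid(X @ A + b)                                    # hidden-layer output matrix H
    beta = np.linalg.pinv(H) @ T                              # beta = H^+ T, the solution of Eq. (2)
    return A, b, beta

def elm_predict(X, A, b, beta):
    # Network output f(x) = sum_i beta_i * h_i(x)
    return sigmoid(X @ A + b) @ beta
```

For the calibration task in this paper, the input is the Bragg wavelength and the target is the temperature, so both n and m equal 1 and X and T are column vectors.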

2.2. OS-ELM

The ELM is a static batch learning process: the training sample set is not updated as new data arrive. The OS-ELM was proposed by G. Huang's team to address this issue. The OS-ELM is generally divided into an initial training phase and an online learning phase. In the initial training phase, the network learns the initial $N_0$ training samples $(x_j, t_j) \in \mathbb{R}^n \times \mathbb{R}^m$, $j = 1, 2, \ldots, N_0$. Here, $\beta_0$ is the solution minimizing $\|H_0 \beta - T_0\|$, given by $\beta_0 = K_0^{-1} H_0^T T_0$ with $K_0 = H_0^T H_0$. When entering the online learning phase, the first new datum or data block of size $N_1$ is learned, and the training sample set is updated to $(x_j, t_j) \in \mathbb{R}^n \times \mathbb{R}^m$, $j = 1, 2, \ldots, N_0 + N_1$. At this point, the network is updated as shown in Equations (3) and (4).

$\beta_1 = K_1^{-1} \begin{bmatrix} H_0 \\ H_1 \end{bmatrix}^T \begin{bmatrix} T_0 \\ T_1 \end{bmatrix}$ (3)
$K_1 = \begin{bmatrix} H_0 \\ H_1 \end{bmatrix}^T \begin{bmatrix} H_0 \\ H_1 \end{bmatrix}$ (4)

To derive a form suitable for continuous online learning, consider the relationship between $\beta_0$ and $\beta_1$, which leads to

$\beta_1 = \beta_0 + K_1^{-1} H_1^T (T_1 - H_1 \beta_0)$ (5)

Generalizing, when the $(k+1)$th datum or data block is learned, we have

$\beta_{k+1} = \beta_k + K_{k+1}^{-1} H_{k+1}^T (T_{k+1} - H_{k+1} \beta_k)$ (6)
$K_{k+1} = K_k + H_{k+1}^T H_{k+1}$ (7)
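A minimal sketch of the two phases (Python/NumPy), staying close to Equations (3)–(7), is given below. The class name OSELM and the explicit solve against K are illustrative choices, not the authors' code; practical implementations such as [25] usually propagate P_k = K_k^{-1} directly for efficiency. The initial block should contain at least as many samples as hidden nodes so that K_0 is invertible.

```python
import numpy as np

class OSELM:
    """Online sequential ELM with a random sigmoid hidden layer (illustrative sketch)."""

    def __init__(self, n_inputs, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.A = rng.uniform(-1.0, 1.0, size=(n_inputs, n_hidden))  # input weights a_i
        self.b = rng.uniform(-1.0, 1.0, size=n_hidden)              # biases b_i
        self.K = None     # K_k = H^T H accumulated so far
        self.beta = None  # current output weights beta_k

    def _hidden(self, X):
        # Hidden-layer output matrix H for inputs X
        return 1.0 / (1.0 + np.exp(-(X @ self.A + self.b)))

    def init_fit(self, X0, T0):
        # Initial training phase: K_0 = H_0^T H_0, beta_0 = K_0^{-1} H_0^T T_0
        H0 = self._hidden(X0)
        self.K = H0.T @ H0
        self.beta = np.linalg.solve(self.K, H0.T @ T0)

    def partial_fit(self, Xk, Tk):
        # Online learning phase, Eqs. (6) and (7):
        #   K_{k+1}    = K_k + H_{k+1}^T H_{k+1}
        #   beta_{k+1} = beta_k + K_{k+1}^{-1} H_{k+1}^T (T_{k+1} - H_{k+1} beta_k)
        Hk = self._hidden(Xk)
        self.K = self.K + Hk.T @ Hk
        self.beta = self.beta + np.linalg.solve(self.K, Hk.T @ (Tk - Hk @ self.beta))

    def predict(self, X):
        return self._hidden(X) @ self.beta
```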

2.3. Experiment Setup

The experiment setup used in this paper is depicted in Figure 1. The temperature change can be detected by measuring the wavelength shift of the FBG. In Figure 1, line segments without arrows represent optical transmission, while those with arrows represent electrical transmission. The light from the broadband light source passed through an isolator. A tunable F–P filter with a center wavelength of 1550 nm, a free spectral range (FSR) of 98.8 nm, and a bandwidth of 0.177 nm was adopted in this system. Driven by a triangle wave, the tunable F–P filter scanned the broadband light to obtain a narrow-band tunable light. The narrow-band tunable light was split into two branches by an optical coupler. The upper branch was transmitted to the FBG through the circulator, and the reflected light was detected by a photodetector (PD1). When the transmission wavelength of the tunable F–P filter coincided with the reflection wavelength of the FBG, PD1 detected the maximum light intensity. The lower branch passed through the F–P etalon and was detected by another photodetector (PD2). The F–P etalon was similar in structure to the F–P filter, its main part also being an F–P cavity. The F–P etalon, which had an FSR of 0.798 nm and a finesse of 6.61, was selected as a wavelength reference because of its wavelength marking function. PD1 and PD2 had an operating wavelength range of 1100–1650 nm, a bandwidth of 4 MHz, a dark current of less than 0.85 nA, and a sensitivity of −52 dBm. PD1 and PD2 converted the detected optical signals into electrical signals, which were sent to a personal computer (PC) via a data acquisition card; the PC then performed denoising and peak detection. The F–P etalon served as the wavelength reference for calculating the Bragg wavelength of the FBG: the data acquisition card simultaneously acquired the FBG reflection spectrum and the transmission spectrum of the F–P etalon, and since the wavelength of each positive peak in the etalon transmission spectrum was known, the Bragg wavelength of the FBG was determined by comparing the peak positions of the FBG reflection spectrum with those of the F–P etalon transmission spectrum.
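The wavelength-referencing step can be illustrated with a short sketch. This is not the authors' processing code: the function name, the linear interpolation between neighbouring etalon peaks, and the assumption that the wavelength of the first etalon peak in a scan is known are illustrative choices; only the 0.798 nm FSR is taken from the system description above.

```python
import numpy as np

def bragg_wavelength(fbg_peak_idx, etalon_peak_idx, first_peak_wavelength_nm, fsr_nm=0.798):
    """Map the FBG peak position (sample index within one scan) to a wavelength in nm.

    etalon_peak_idx: ascending sample indices of the etalon transmission peaks in the same scan.
    first_peak_wavelength_nm: known wavelength of the first etalon peak (assumed available).
    fsr_nm: etalon free spectral range (0.798 nm in this system), so successive peaks
            are spaced by one FSR.
    The FBG peak index is linearly interpolated between the neighbouring etalon peaks,
    assuming the wavelength scan is locally linear between reference peaks.
    """
    etalon_wavelengths = first_peak_wavelength_nm + fsr_nm * np.arange(len(etalon_peak_idx))
    return float(np.interp(fbg_peak_idx, etalon_peak_idx, etalon_wavelengths))
```

For example, with hypothetical peak indices detected after denoising, bragg_wavelength(5120, [4800, 5000, 5200, 5400], 1549.2) would place the FBG peak 0.6 of an FSR above the second etalon peak; the numbers are purely illustrative.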

Figure 1. The fiber Bragg grating (FBG) sensing system.

3. Results and Discussion

3.1. Data Set

In order to verify the improvement in measurement accuracy, generalization ability, and long-term stability provided by the OS-ELM for FBG sensor dynamic calibration, four data acquisition experiments were conducted, each providing wavelength–temperature pairs. The experimental results are displayed in Figure 2. The four experiments were conducted in chronological order, with an interval of five months between the first and second experiments, five days between the second and third experiments, and nine months between the third and fourth experiments. In the first experiment, six temperatures were taken: 10, 15, 20, 24, 28, and 32 °C. The second experiment also took six temperatures, unevenly spaced: 12, 14, 18, 22, 26, and 30 °C. The ranges of the third and fourth experiments were 13–16 °C and 5–9 °C, respectively. It can be seen from Figure 2 that the wavelength–temperature maps of the four experiments are different curves, and a single curve cannot be fitted to represent their relationship. The discrepancy between measurement sets was mainly caused by the temperature drift property of the F–P etalon, the FBG pre-stretching amplitude, and the sealability between the FBG and the packaging material. Since eliminating the discrepancy in the optical path with hardware would increase the complexity and cost of the system, this paper studies a dynamic calibration method to eliminate the discrepancy.

Figure 2. The four data acquisition experiments.

3.2. Simulated Analysis

To compare the performance of the dynamic calibration model with that of the static calibration models, the ELM was employed as the static counterpart of the OS-ELM so that the comparison with the other calibration models was better controlled. Owing to the limited space of the article, the 352 pairs of data from the third experiment, which had the most severe noise among the four experiments, were taken for verification. The 352 pairs of data were first randomly divided into 300 pairs as a training data set and the remaining 52 pairs as a testing data set for the accuracy test, as shown in Figure 3. Figure 4 gives the data set for the generalization performance test: 110 data points in the temperature range 14–15 °C were taken as the training data set, and the remaining 242 data points were used as the testing data set.

Figure 3. Data for the prediction accuracy experiments.

Figure 4. Data for the generalization ability experiments.

Commonly used ELM activation functions include the sigmoid function (sig), the sine function (sin), the hard-limit function (hardlim), and the radial basis function (radbas). The ELM model performs differently depending on the activation function used. The performances of these four activation functions were compared in terms of the root mean square error (RMSE) and the goodness of fit (R²), given by Equations (8) and (9), respectively, and the results are shown in Table 1.

$\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( f_{\tilde{N}}(x_i) - t_i \right)^2}$ (8)
$R^2 = \frac{\sum_{i=1}^{N} \left( t_i - \overline{f_{\tilde{N}}(x_i)} \right)^2}{\sum_{i=1}^{N} \left( f_{\tilde{N}}(x_i) - \overline{f_{\tilde{N}}(x_i)} \right)^2}$ (9)

where $N$ is the total number of testing samples and $\tilde{N}$ is the number of hidden-layer neurons. $f_{\tilde{N}}(x_i)$ and $t_i$ are the temperature measurements from the ELM and the thermometer, respectively, and $\overline{f_{\tilde{N}}(x_i)}$ is the mean of $f_{\tilde{N}}(x_i)$.
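As a small illustration (not from the paper), both criteria can be computed directly from arrays of predicted and reference temperatures. The sketch below follows Equations (8) and (9) as written above; the function names are illustrative.

```python
import numpy as np

def rmse(t_pred, t_true):
    # Eq. (8): root mean square error between the model prediction and the thermometer reading
    return float(np.sqrt(np.mean((np.asarray(t_pred) - np.asarray(t_true)) ** 2)))

def goodness_of_fit(t_pred, t_true):
    # Eq. (9) as reconstructed above: both sums are taken about the mean of the predictions
    t_pred, t_true = np.asarray(t_pred), np.asarray(t_true)
    mean_pred = t_pred.mean()
    return float(np.sum((t_true - mean_pred) ** 2) / np.sum((t_pred - mean_pred) ** 2))
```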

Table 1. Performance of the prediction accuracy of different activation functions.

Types of Activation Function    Time (s)              RMSE (°C)             R²
                                Training   Testing    Training   Testing    Training          Testing
sig                             0.1094     0          0.0533     0.1027     0.9772            0.9698
sin                             0.0154     0          0.0555     0.1803     0.9790            0.8983
hardlim                         0.0156     0          0.3677     0.8426     −5.5816 × 10⁻¹²   3.0073 × 10⁻¹³
radbas                          0.0625     0          0.3774     0.8660     0.9772            0.9698

The RMSE describes the precision of the prediction: the closer its value is to 0, the better the prediction performance. Likewise, the closer the R² value is to 1, the better the fit of the regression curve to the observations. As shown in Table 1, the hardlim function had the worst performance. The sigmoid function returned the smallest RMSE, and the sine function required the shortest training time. In terms of R², the radial basis function performed close to the sigmoid function; however, the RMSE of the sigmoid function was smaller than that of the radial basis function. A comprehensive analysis showed that the ELM performs best with the sigmoid function.

The prediction accuracies of the ELM, polynomial fitting, BP, and RBF were also compared. To make the comparison fair, the best-performing configuration of each calibration model was used. As shown in Table 2, the polynomial took the least time, the ELM had the lowest RMSE, and the R² values of all models were very close. In terms of real-time performance, the polynomial was the best, while the ELM was the best in terms of accuracy. As a prediction model, the generalization performance should also be considered; the generalization performance of each model is compared below.

Table 2. Performances of prediction accuracy of different calibration models.

Types of Calibration Model    Time (s)              RMSE (°C)             R²
                              Training   Testing    Training   Testing    Training   Testing
Polynomial                    0.0156     0          0.0544     0.1327     0.9781     0.9686
BP                            0.4063     0.0156     0.0535     0.1601     0.9789     0.9220
RBF                           7.9688     0.0544     0.0625     0.1321     0.9781     0.9686
ELM                           0.1094     0          0.0533     0.1027     0.9772     0.9698

Table 3 compares the generalization performance of the polynomial, BP, RBF, and ELM models. The RBF and ELM performed best in terms of RMSE and R², but the RBF took more time than the ELM. Figure 5 shows a boxplot of the differences between the predicted and observed values of the four models, used to analyze their stability. It can be seen from Figure 5 that the generalization prediction error of the polynomial was the largest, while that of the ELM was the smallest. Meanwhile, the whiskers extending from both sides of the ELM box are the shortest in the boxplot, so the ELM prediction was more stable than the others.

Table 3. Performances of generalization of different calibration models.

Types of Calibration Model    Time (s)              RMSE (°C)             R²
                              Training   Testing    Training   Testing    Training   Testing
Polynomial                    0.0469     0          0.0764     0.0476     0.9120     0.9787
BP                            0.4688     0          0.0799     0.1067     0.9052     0.8920
RBF                           1.4844     0          0.0762     0.0467     0.9125     0.9819
ELM                           0.0156     0          0.0762     0.0456     0.9125     0.9818

Figure 5. A boxplot of prediction errors for different calibration models.

Comparing the prediction accuracy and generalization performance of the four models (polynomial, BP, RBF, and ELM) shows that the ELM model outperformed the other three in both respects.

3.3. Dynamic Calibration

The ELM can be regarded as an OS-ELM that performs only the initial training, so the ELM is a static model. Calibration based on the OS-ELM is dynamic: the calibration model is continuously updated as new data arrive, rather than being retrained. The performance of the dynamic calibration was evaluated in two respects, stability and generalization. The long-term stability was verified with the first and third experimental data sets (an interval of about five months), and the short-term stability was verified with the second and third experimental data sets (an interval of five days). Data from the first experiment (10–32 °C) and data from the fourth experiment (5–9 °C), separated by an interval of 14 months, were used to verify the long-term generalization performance.
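As a concrete illustration of this dynamic scheme, the following sketch (Python/NumPy) assumes the hypothetical OSELM class from Section 2.2 is in scope: the model is first trained on an earlier calibration data set and then updated sample by sample as new wavelength–temperature pairs arrive. The synthetic data, the input normalization, and all names are illustrative assumptions, not values from the experiments.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic placeholder data standing in for an earlier calibration experiment:
# a roughly linear wavelength-temperature relation with noise, used only so the sketch runs.
wl_old = np.linspace(1549.8, 1550.2, 300).reshape(-1, 1)                            # wavelengths (nm)
temp_old = 100.0 * (wl_old - 1550.0) + 20.0 + rng.normal(0.0, 0.05, wl_old.shape)   # temperatures (°C)

def normalize(wl_nm):
    # Scale the wavelength to roughly [-1, 1] before feeding the network
    # (a practical preprocessing assumption, not specified in the paper).
    return (wl_nm - 1550.0) / 0.2

model = OSELM(n_inputs=1, n_hidden=20)           # hypothetical class from the sketch in Section 2.2
model.init_fit(normalize(wl_old), temp_old)      # initial (static) calibration

# During field measurement, each newly acquired pair (or small block) is fed back
# as an online learning sample, so the calibration follows the drifted response curve.
wl_new = wl_old[:20] + 0.01                      # drifted wavelengths observed later (illustrative)
temp_new = temp_old[:20] + 0.3                   # corresponding reference temperatures (illustrative)
for wl_i, t_i in zip(wl_new, temp_new):
    model.partial_fit(normalize(wl_i).reshape(1, 1), t_i.reshape(1, 1))

print(model.predict(normalize(np.array([[1550.05]]))))  # temperature prediction after updating
```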

Firstly, the short-term stability was studied. The data set of the second experiment was used to train and calibrate the network. Then the trained network was used to predict the data of the third calibration experiment. The prediction results are shown in Figure 6, and the prediction error boxplot is shown in Figure 7. The polynomial, BP, RBF, and ELM prediction errors were 1.2436 °C, 1.2316 °C, 1.2350 °C, and 1.1956 °C, respectively, and the OS-ELM prediction error was 0.2 °C.

Figure 6. Performance comparisons of different calibration models in terms of short-term stability.

Figure 7. A boxplot of prediction errors for different calibration models in terms of short-term stability.

When verifying the long-term stability, the data set of the first experiment was used to train and calibrate the network, and the data of the third experiment were then predicted by the trained network. The prediction results are shown in Figure 8, and the prediction error boxplot is shown in Figure 9. The advantages of the OS-ELM dynamic calibration in prediction accuracy and stability can clearly be observed.

Figure 8. Performance comparisons of different calibration models in terms of long-term stability.

Figure 9. A boxplot of prediction errors for different calibration models in terms of long-term stability.

In order to study the long-term generalization performance of the dynamic calibration, the data set of the first experiment was used to train the calibration network, and the trained network was then used to predict the data of the fourth experiment. The online learning samples were the data of the fourth experiment in the range of 5–6 °C. The prediction results are shown in Figure 10, and the prediction error boxplot is shown in Figure 11. The clear advantages of the OS-ELM can again be seen.

Figure 10. Performance comparisons of different calibration models in terms of generalization.

Figure 11. A boxplot of prediction errors for different calibration models in terms of generalization.

The comparative analyses above show that the dynamic calibration model based on the OS-ELM not only has an excellent generalization performance but also a high prediction accuracy. Dynamic calibration allows the field-measured data of the sensor to be fed continuously into the network model as online learning samples, which solves the problems of large drift errors in the static calibration model and insufficient coverage of the initial training samples.

4. Conclusions

This paper provides a new dynamic model updating method that differs from traditional static calibration. In the dynamic updating phase, both the current prediction accuracy and the historical record are considered, which helps to reduce the fitting error caused by insufficient online learning samples. Moreover, the dynamic calibration based on the OS-ELM significantly improved the prediction accuracy and generalization performance compared with previous static calibration methods. The maximum absolute error was 0.502 °C in the short-term stability experiment, 0.516 °C in the long-term stability experiment, and 0.374 °C in the long-term generalization experiment. Future research will focus on improving the calibration model according to the data characteristics.

Author Contributions

Formal analysis, Q.S. and W.Q.; Funding acquisition, Q.S.; Investigation, W.Q.; Methodology, W.Q.; Project administration, Q.S.; Writing—original draft, W.Q.; Writing—review & editing, Q.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (grant number 61775057) and the Natural Science Foundation of Hebei Province (grant number E2019502179).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Ma S., Guo J., Guo L., Cao J., Zhang B. On-line monitoring system for downhole temperature and pressure. Opt. Eng. 2014;8:087102.
2. Antunes P., Travanca R., Rodrigues H., Melo J., Jara J., Varum H., Andre P. Dynamic structural health monitoring of slender structures using optical sensors. Sensors. 2012;5:6629–6644. doi: 10.3390/s120506629.
3. Mohanty L., Tjin S.C., Lie D.T.T., Panganiban S.E.C., Chow P.K.H. Fiber grating sensor for pressure mapping during total knee arthroplasty. Sens. Actuators A Phys. 2007;2:323–328. doi: 10.1016/j.sna.2006.07.021.
4. Xiong L., Jiang G., Guo Y., Liu H. A three-dimensional fiber Bragg grating force sensor for robot. IEEE Sens. J. 2018;9:3632–3639. doi: 10.1109/JSEN.2018.2812820.
5. Huang F., Chen T., Si J., Pham X., Hou X. Fiber laser based on a fiber Bragg grating and its application in high-temperature sensing. Opt. Commun. 2019;452:233–237. doi: 10.1016/j.optcom.2019.05.046.
6. Hong C., Zhang Y., Zhang M., Gordon L.L.M., Liu L. Application of FBG sensor for geotechnical health monitoring, a review of sensor design, implementation methods and packaging techniques. Sens. Actuators A Phys. 2016;244:184–197. doi: 10.1016/j.sna.2016.04.033.
7. Zhang X., Wang P., Liang D., Fan C., Li C. A soft self-repairing for FBG sensor network in SHM system based on PSO–SVR model reconstruction. Opt. Commun. 2015;343:38–46. doi: 10.1016/j.optcom.2014.12.079.
8. Lamberti A., Chiesura G., Luyckx G., Degrieck J., Kaufmann M., Vanlanduit S. Dynamic strain measurements on automotive and aeronautic composite components by means of embedded fiber Bragg grating sensors. Sensors. 2015;10:27174–27200. doi: 10.3390/s151027174.
9. Marignetti F., de Santis E., Avino S., Tomassi G., Giorgini A., Malara P., De Natale P., Gagliardi G. Fiber Bragg grating sensor for electric field measurement in the end windings of high-voltage electric machines. IEEE Trans. Ind. Electron. 2016;5:2796–2802. doi: 10.1109/TIE.2016.2516500.
10. Dziuda L., Skibniewski F.W., Krej M., Lewandowski J. Monitoring respiration and cardiac activity using fiber Bragg grating-based sensors. IEEE Trans. Biomed. Eng. 2012;7:1934–1942. doi: 10.1109/TBME.2012.2194145.
11. Domingues F., Alberto N., Leitão C., Tavares C., Lima E., Radwan A., Sucasas V., Rodriguez J., André P., Antunes P. Insole optical fiber sensor architecture for remote gait analysis—an eHealth solution. IEEE Internet Things J. 2017;6:207–214. doi: 10.1109/JIOT.2017.2723263.
12. Okabe Y., Tsuji R., Takeda N. Application of chirped fiber Bragg grating sensors for identification of crack locations in composites. Compos. Part A Appl. Sci. 2003;1:59–65. doi: 10.1016/j.compositesa.2003.09.004.
13. Reid M.B., Ozcan M. Temperature dependence of fiber optic Bragg gratings at low temperatures. Opt. Eng. 1998;1:237–242. doi: 10.1117/1.601610.
14. Roths J., Andrejevic G., Kuttler R., Süßer M. Calibration of fiber Bragg cryogenic temperature sensors. International Optical Fiber Sensors Conference; OSA, Washington, DC, USA, 2006; pp. 81–85.
15. Saccomanno A., Breglio G., Irace A., Bajko M., Szillasi Z., Buontempo S., Giordano M., Cusano A. A calibration method based on look-up-table for cryogenic temperature fiber Bragg grating sensors. Proceedings of the 3rd Asia Pacific Optical Sensors Conference; Sydney, Australia, 31 January–3 February 2012; pp. 83513–83515.
16. An Y., Wang X., Qu Z., Liao T., Nan Z. Fiber Bragg grating temperature calibration based on BP neural network. Optik. 2018;172:753–759. doi: 10.1016/j.ijleo.2018.07.064.
17. An Y., Wang X., Qu Z., Liao T., Wu L., Nan Z. Stable temperature calibration method of fiber Bragg grating based on radial basis function neural network. Opt. Eng. 2019;9:096105. doi: 10.1117/1.OE.58.9.096105.
18. Jiang J., Zang C., Wang Y., Zhang X., Liu Y., Yang Y., Xie R., Fan X., Liu T. Investigation of composite multi-wavelength reference stabilization method for FBG demodulator in unsteady temperature environment. J. Optoelectron. Laser. 2018;6:5–11.
19. Xie R., Zhang X., Wang S., Jiang J., Liu K., Zang C., Chu Q., Liu T. Research on influencing factors of FBG temperature sensors stability. J. Optoelectron. Laser. 2018;4:363–369.
20. Huang G., Zhu Q., Siew C. Extreme learning machine: Theory and applications. Neurocomputing. 2006;1:489–501. doi: 10.1016/j.neucom.2005.12.126.
21. Rong H., Huang G., Sundararajan N., Saratchandran P. Online sequential fuzzy extreme learning machine for function approximation and classification problems. IEEE Trans. Syst. Man Cybern. B. 2009;4:1067–1072. doi: 10.1109/TSMCB.2008.2010506.
22. Li Z., Fan X., Chen G., Yang G., Sun Y. Optimization of iron ore sintering process based on ELM model and multi-criteria evaluation. Neural Comput. Appl. 2017;8:2247–2253. doi: 10.1007/s00521-016-2195-x.
23. Lu F., Wu J., Huang J., Qiu X. Aircraft engine degradation prognostics based on logistic regression and novel OS-ELM algorithm. Aerosp. Sci. Technol. 2018;84:661–671. doi: 10.1016/j.ast.2018.09.044.
24. Wang X., Yang K., Kalivas J.H. Comparison of extreme learning machine models for gasoline octane number forecasting by near-infrared spectra analysis. Optik. 2020;200:163325. doi: 10.1016/j.ijleo.2019.163325.
25. Liang N., Huang G., Saratchandran P., Sundararajan N. A fast and accurate online sequential learning algorithm for feedforward networks. IEEE Trans. Neural Netw. 2006;6:1411–1423. doi: 10.1109/TNN.2006.880583.
