Abstract

The partial least squares (PLS) algorithm is a commonly used key performance indicator (KPI)-related performance monitoring method. To address nonlinear features in the process, this paper proposes neural component analysis (NCA)-PLS, which combines PLS with NCA. NCA-PLS realizes all the principles of PLS by introducing a new loss function and a new principal component selection mechanism into NCA. Then, the gradient descent formulas for network training are rederived. NCA-PLS can extract components with large correlations with KPI variables and adopt them for data reconstruction. Simulation tests using a mathematical model and the Tennessee Eastman process show that NCA-PLS can successfully handle nonlinear relationships in process data and that it performs much better than PLS, KPLS, and NCA.
1. Introduction
Multivariate statistics-based process monitoring (MSPM)1−5 is one of the most attractive data-driven approaches for monitoring complex processes with high-dimensional data structures, for example, biopharmaceutical and chemical processes. Its core idea is to transform high-dimensional process data to low-dimensional principal components (PCs) and monitor them using several statistical indices.6−8
It should be noted that process data contain two types of information: key performance indicator (KPI)-related and KPI-unrelated,1 where KPIs are indicators of product quality or process safety that must be emphasized in process monitoring.9 Partial least squares (PLS),9−12 as a commonly used KPI-related performance monitoring method,13 can maximize the variation between process variables and KPI-related variables and use KPI-related information to detect faults. As such, PLS is more sensitive to abnormal changes in KPI-related components, and hence, it is suitable for monitoring industrial processes with clear KPI indices.
As a linear approach, PLS cannot handle nonlinear relationships among process variables. To address this issue, kernel PLS (KPLS)14−18 has been proposed, and it has become the mainstream nonlinear PLS approach.19 KPLS is a two-step method, which addresses the nonlinearity by calculating inner products between data samples (in both the training and testing stage) and monitoring them by traditional PLS. Much research work has been carried out on KPLS, and many improved versions have been proposed and applied in process monitoring. Wang et al. found that the nonlinear mapping of kernel methodology completely obscures the correspondence between the original variable sample and the kernel model, and hence, they integrated KPLS with the kernel sample equivalent replacement (KSER) method in KPLS–KSER.20 Yang et al. introduced a cross-validatory framework to address the parameter selection problem of KPLS.21 Wang et al. found that existing KPLS methods cannot accurately decompose measurements into KPI-related and KPI-unrelated parts and proposed a new space division strategy for KPLS.22 Zhang et al. applied KPLS in dynamic processes and proposed dynamic KPLS.23
The nonlinear fitting ability of KPLS is often limited in practical industrial applications: the kernel function in KPLS must be defined manually, and picking the kernel function and setting its parameters are still open issues. As a result, these parameters have to be tuned by trial and error, and the resulting nonlinear mapping model is not optimal.24
Inspired by the artificial neural network (ANN)25,26 and principal component analysis (PCA),27−29 Lou et al. proposed a nonlinear approach called neural component analysis (NCA),30 which reconstructs the ANN with PCA principles, adopts a neural network structure for nonlinearity description, and updates the parameters by gradient descent.31 As such, NCA provides a new idea for solving the nonlinear monitoring issue, which can be transplanted to other algorithms. For example, Chen et al. applied the NCA structure in the description of canonical correlation analysis32 and proposed artificial neural correlation analysis,33 and Lou et al. applied the NCA structure in a non-Gaussian process and proposed improved NCA.34
In this article, NCA is combined with PLS, as NCA-PLS, to address the KPI monitoring issue. By introducing a new loss function and a new PC selection mechanism, all principles of PLS are realized by the NCA network structure. Simulation tests with a mathematical model and the Tennessee Eastman (TE) process35 show that NCA-PLS can successfully address the nonlinear relationships among process data and extract the KPI information for process monitoring; moreover, it performs better than PLS, KPLS, and NCA.
The main contributions of this study are as follows. First, we propose a new nonlinear PLS approach, which inherits the nonlinear fitting ability of the ANN; second, we rederive the formulas of the gradient descent method for NCA-PLS; third, we propose a new PC extraction mechanism for NCA-PLS.
The remainder of this paper is organized as follows. Section 2 reviews the ideas of PLS and NCA. The new nonlinear PLS approach is proposed in Section 3, and some details are discussed. A nonlinear mathematical model and the TE process are employed to demonstrate the performance of the proposed method in Section 4. Section 5 relates our conclusions.
2. Methods
2.1. PLS
PLS can decompose process data X ∈ R^{n×s} and KPI data Y ∈ R^{n×r} (where n is the number of samples, s is the number of process variables, and r is the number of KPI variables) into

X = T_PLS P_PLS^T + E_PLS, Y = U_PLS Q_PLS^T + F_PLS (1)

where T_PLS ∈ R^{n×k} and U_PLS ∈ R^{n×k} refer to the score matrices, P_PLS ∈ R^{s×k} and Q_PLS ∈ R^{r×k} are the loading matrices, E_PLS ∈ R^{n×s} and F_PLS ∈ R^{n×r} are the residual matrices, and k is the number of PCs.
The model objective of PLS is to maximize the variation between X and Y, as follows
max_{β,α} cov(Xβ, Yα), s.t. ||β|| = ||α|| = 1 (2)

where Xβ and Yα are columns of T_PLS and U_PLS, respectively. The solution to eq 2 is obtained by an iterative algorithm, NIPALS.36
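As an illustration of how eq 2 is solved, the following NumPy sketch runs NIPALS for the first component only (the function name and the single-component scope are ours; the full algorithm deflates X and Y and repeats for k components):

```python
import numpy as np

def nipals_first_component(X, Y, tol=1e-10, max_iter=500):
    """Extract the first pair of PLS score vectors (t, u) by NIPALS."""
    u = Y[:, [0]]                        # initialize u with a column of Y
    for _ in range(max_iter):
        w = X.T @ u / (u.T @ u)          # X-weights
        w /= np.linalg.norm(w)
        t = X @ w                        # X-scores
        q = Y.T @ t / (t.T @ t)          # Y-loadings
        q /= np.linalg.norm(q)
        u_new = Y @ q                    # Y-scores
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return t, u, w, q

# toy data in which Y is driven by the first two columns of X
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Y = X[:, :2] @ rng.normal(size=(2, 2)) + 0.1 * rng.normal(size=(100, 2))
t, u, w, q = nipals_first_component(X, Y)
```

Because t = Xβ and u = Yα are built to maximize their covariance, their correlation on such data is close to 1; subsequent components would be extracted after deflating X and Y.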
2.2. NCA
The main idea of NCA is to realize the principles of PCA through the ANN. As shown in Figure 1, the middle layer in NCA, the PC layer, is responsible for nonlinear mapping and PC selection. The main steps of NCA are as follows: (1) obtain the inputs of the PC layer by linear transformation as H_j = Σ_{i=1}^s w_ij·x_i + b_j. (2) Obtain the uncorrelated PCs I_j (j = 1,2,···,s) by a nonlinear activation function as I_j = f(H_j). (3) Calculate the PC selection score Ξ(j), which equals 1 when D(I_j) ≥ var_stand and 0 otherwise, where D(*) is the variance function and var_stand is the boundary value for D(I_j) calculated by the cumulative percent variance method.37 (4) Calculate the outputs of the PC layer as H_j′ = I_j·Ξ(j). (5) Linearly map from the PC space back to the original data space as x̂_i = Σ_{j=1}^s w_ji′·H_j′ + b_i′. Parameters {w_ij} and {w_ij′} are the weights, and {b_j} and {b_j′} are the bias terms. The abovementioned parameters in NCA are symmetric, that is, b_j′ = −b_j (j = 1,2,···,s), and

w_ji′ = w_ij (i = 1,2,···,s; j = 1,2,···,s) (3)

Figure 1.

NCA Framework.
To guarantee that the reconstructed output x̂ is close to the original data x and that all the PCs are uncorrelated, the two cost functions are combined into one

E = Σ_{t=1}^n Σ_{i=1}^s (x̂_i(t) − x_i(t))^2 + σ Σ_{i≠j} [cov(I_i, I_j)]^2 (4)
where σ is a weight parameter and cov(*) is the covariance function.
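A minimal NumPy sketch of this combined cost, assuming a sigmoid activation, the symmetric decoder of eq 3, and a squared off-diagonal-covariance form for the decorrelation penalty (the concrete penalty form is our assumption):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nca_cost(X, W, b, sigma=1.0):
    """NCA cost sketch (eq 4): reconstruction error plus a decorrelation
    penalty on the PCs; the penalty form here is an assumption."""
    H = X @ W + b                        # PC-layer inputs
    I = sigmoid(H)                       # nonlinear PCs
    X_hat = I @ W.T - b                  # symmetric decoder: w' = w^T, b' = -b
    recon = np.mean((X_hat - X) ** 2)    # reconstruction term
    C = np.cov(I, rowvar=False)          # covariance among PCs
    decorr = np.sum(C ** 2) - np.sum(np.diag(C) ** 2)  # off-diagonal energy
    return recon + sigma * decorr

# example: evaluate the cost for random square weights (k = s here)
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
cost = nca_cost(X, 0.1 * rng.normal(size=(4, 4)), np.zeros(4))
```

Both terms are nonnegative, so the cost is bounded below by zero and decreases as the network reconstructs the data with mutually uncorrelated components.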
3. NCA-PLS
The main difference between PLS and PCA is that PCA extracts components with large autocorrelation coefficients (large variance) in X, while PLS extracts components with large correlation with UPLS, so NCA should be adjusted for PLS principles, as shown in Figure 2.
Figure 2.
Framework of NCA-PLS.
3.1. Modification for NCA Cost Function
Based on eq 2 in Section 2.1, one understands that PLS extracts the components with large correlation with Yα rather than Y. In most cases, the dimension of the output variable Y is much lower than that of X, that is, r ≪ s, and hence r is equal to or very close to k. Because F_PLS is orthogonal to U_PLS Q_PLS^T, one obtains Y ≈ U_PLS Q_PLS^T. As such, most of the information contained in the output variable Y has been extracted, and hence, there is little information loss during the matrix transformation (F_PLS ≈ 0 and Y ≈ U_PLS Q_PLS^T). Therefore, extracting the PCs in Y is not a necessary step for PLS. Based on the abovementioned content, in most cases, the objective of PLS can be replaced by extracting the components with large correlation with Y.
To extract the components with large correlation with Y, we define the correlation coefficient value between latent variable Ii and the jth KPI variable Yj as
r_ij = cov(I_i, Y_j) / sqrt(D(I_i)·D(Y_j)) (5)
Hence, the cost function for NCA-PLS is

E = E_1 + σ_2·E_3,  E_3 = −Σ_{i=1}^s Σ_{j=1}^r r_ij (6)

where cost function E_1 is the NCA cost of eq 4, which keeps the reconstruction accurate and the PCs of X uncorrelated; cost function E_3 maximizes the correlation between the PCs of X and Y; and σ_2 is a weight parameter.
Then, parameters {w_ij} and {b_j} can be obtained by applying the gradient descent method to cost function E

∂E/∂w_ij = ∂E_1/∂w_ij + σ_2·∂E_3/∂w_ij,  ∂E/∂b_j = ∂E_1/∂b_j + σ_2·∂E_3/∂b_j (7)

where ∂E_1/∂w_ij and ∂E_1/∂b_j have been deduced in ref 30. For ∂E_3/∂w_ij, we obtain

∂E_3/∂w_ij = −Σ_{q=1}^r ∂r_jq/∂w_ij (8)

Then

∂r_jq/∂w_ij = [ (∂cov(I_j, Y_q)/∂w_ij)·sqrt(D(I_j)·D(Y_q)) − cov(I_j, Y_q)·D(Y_q)·(∂D(I_j)/∂w_ij) / (2·sqrt(D(I_j)·D(Y_q))) ] / (D(I_j)·D(Y_q)) (9)

where

∂cov(I_j, Y_q)/∂w_ij = (1/n)·Σ_{t=1}^n f′(H_j(t))·x_i(t)·(Y_q(t) − Ȳ_q) (10)

and

∂D(I_j)/∂w_ij = (2/n)·Σ_{t=1}^n (I_j(t) − Ī_j)·f′(H_j(t))·x_i(t) (11)

Similarly, we obtain

∂E_3/∂b_j = −Σ_{q=1}^r ∂r_jq/∂b_j (12)

where

∂cov(I_j, Y_q)/∂b_j = (1/n)·Σ_{t=1}^n f′(H_j(t))·(Y_q(t) − Ȳ_q) (13)

and

∂D(I_j)/∂b_j = (2/n)·Σ_{t=1}^n (I_j(t) − Ī_j)·f′(H_j(t)) (14)

In the abovementioned equations, Ī_j and Ȳ_q denote the sample means of I_j and Y_q, and the derivatives f′(H_j(t)) are determined by the activation function f(*); for example, if f(x) = 1/(1 + e^{−x}), then f′(x) = f(x)·(1 − f(x)).
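The chain-rule gradients above can be checked numerically. The following sketch computes E_3 and its analytic gradients with respect to {w_ij} and {b_j} for a sigmoid activation, using population covariances (vectorized forms of eqs 8–14 under our reconstruction; the function name is ours):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def e3_and_grad(X, Y, W, b):
    """E3 = -sum_{j,q} r(I_j, Y_q) and its analytic gradients w.r.t. W and b,
    following the chain-rule structure of eqs 8-14 (sigmoid activation and
    population variances/covariances assumed)."""
    n = len(X)
    H = X @ W + b                          # PC-layer inputs
    I = sigmoid(H)                         # nonlinear PCs
    P = I * (1.0 - I)                      # f'(H) for the sigmoid
    Ic = I - I.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    D = (Ic ** 2).mean(axis=0)             # D(I_j)
    V = (Yc ** 2).mean(axis=0)             # D(Y_q)
    R = (Ic.T @ Yc / n) / np.sqrt(np.outer(D, V))   # r_{jq}
    E3 = -R.sum()
    # covariance-derivative terms (eqs 10/13), summed over q with 1/sqrt(D_j V_q)
    A = P * ((Yc / np.sqrt(V)).sum(axis=1)[:, None] / np.sqrt(D)[None, :])
    # variance-derivative terms (eqs 11/14), weighted by sum_q r_{jq} / D_j
    B = Ic * P * (R.sum(axis=1) / D)[None, :]
    gW = -(X.T @ A - X.T @ B) / n
    gb = -(A - B).sum(axis=0) / n
    return E3, gW, gb
```

A finite-difference check (perturbing one weight and one bias) confirms the algebra of the quotient rule in eq 9.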
Hence, parameters {w_ij} and {b_j} can be calculated using the following steps:
Step 1: set random initial values for {w_ij} and {b_j} (e.g., random values between 0 and 1) and set a value for learning rate η (e.g., 0.1).
Step 2: set gen = 1 and set total iteration number Gen.
Step 3: calculate ∂E_1/∂w_ij and ∂E_1/∂b_j according to eq 12 in ref 30.
Step 4: calculate ∂E_3/∂w_ij and ∂E_3/∂b_j according to eqs 8–14.
Step 5: update {w_ij} and {b_j} as follows

w_ij = w_ij − η·∂E/∂w_ij (15)

b_j = b_j − η·∂E/∂b_j (16)

w_ji′ = w_ij (17)

b_j′ = −b_j (18)
and take gen = gen + 1.
Step 6: If gen = Gen, output {wij} and {bj}; else, go back to Step 3.
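Steps 1–6 can be sketched as follows. For brevity, finite differences stand in for the analytic gradients of eqs 7–14, and the concrete combined cost (a decorrelation term minus the summed PC–KPI correlation) is our assumed instance of eq 6:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_nca_pls(X, Y, k, eta=0.1, gen_total=30, seed=0):
    """Steps 1-6 as a sketch: random init in [0, 1], a fixed iteration
    budget Gen, and plain gradient descent (numeric gradients here)."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(0.0, 1.0, size=(X.shape[1], k))      # Step 1
    b = rng.uniform(0.0, 1.0, size=k)

    def cost(W, b):
        I = sigmoid(X @ W + b)
        Ic = I - I.mean(axis=0)
        Yc = Y - Y.mean(axis=0)
        C = np.cov(I, rowvar=False)
        e1 = np.sum(C ** 2) - np.sum(np.diag(C) ** 2)    # decorrelate PCs
        R = (Ic.T @ Yc / len(X)) / np.sqrt(np.outer(Ic.var(axis=0),
                                                    Yc.var(axis=0)))
        return e1 - R.sum()                              # maximize KPI correlation

    eps, history = 1e-6, []
    for gen in range(gen_total):                         # Steps 2 and 6
        base = cost(W, b)
        history.append(base)
        gW = np.zeros_like(W)
        gb = np.zeros_like(b)
        for idx in np.ndindex(*W.shape):                 # Steps 3-4 (numeric)
            Wp = W.copy(); Wp[idx] += eps
            gW[idx] = (cost(Wp, b) - base) / eps
        for j in range(k):
            bp = b.copy(); bp[j] += eps
            gb[j] = (cost(W, bp) - base) / eps
        W -= eta * gW                                    # Step 5, eq 15
        b -= eta * gb                                    # Step 5, eq 16
    return W, b, history

# usage: the combined cost should shrink over the iterations
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 3))
Y = X[:, [0]] + 0.1 * rng.normal(size=(60, 1))
W, b, hist = train_nca_pls(X, Y, k=2, gen_total=20)
```

In practice, the analytic gradients of eqs 8–14 replace the finite differences, which would otherwise be prohibitively slow for wide networks.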
3.2. New PC Extraction Mechanism for NCA-PLS
Different from PCA, the components highly related to KPI variables are picked as PCs in PLS. The following new PC extraction mechanism is proposed for NCA-PLS:
Step 1: calculate the correlation coefficient between latent variable Ii and KPI variable Yj with eq 5;
Step 2: calculate the correlation score for each latent variable I_i as

score_i = Σ_{j=1}^r |r_ij| (19)
Step 3: sort scorei values from large to small and find the threshold varstand using the cumulative percent variance method;37 and
Step 4: calculate the picker score for each latent variable as Ξ(i), which equals 1 when score_i ≥ var_stand and 0 otherwise.
Based on the abovementioned new PC extraction mechanism, when I_i contains KPI-related information, score_i has a nonzero value; then Ξ(i) = 1, and hence, I_i is kept in the PC-layer output for data reconstruction. In contrast, when I_i is KPI-unrelated, score_i is very close to zero, and hence, I_i is not picked as a PC.
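A compact sketch of this extraction mechanism, using an absolute-correlation score and a cumulative-percent threshold in place of the exact var_stand computation of ref 37 (the threshold rule here is our assumption):

```python
import numpy as np

def select_pcs(I, Y, cpv=0.85):
    """Pick KPI-related PCs: correlation score per latent variable (eq 19
    style), then a cumulative-percent cutoff and a 0/1 picker score."""
    Ic = I - I.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    R = (Ic.T @ Yc / len(I)) / np.sqrt(np.outer(Ic.var(axis=0),
                                                Yc.var(axis=0)))
    score = np.abs(R).sum(axis=1)                 # correlation score per PC
    order = np.argsort(score)[::-1]               # sort from large to small
    cum = np.cumsum(score[order]) / score.sum()
    n_keep = int(np.searchsorted(cum, cpv)) + 1   # smallest set reaching cpv
    xi = np.zeros(I.shape[1], dtype=int)          # picker score Xi(i)
    xi[order[:n_keep]] = 1
    return score, xi

# usage: a latent variable that drives Y gets a high score and Xi = 1
rng = np.random.default_rng(3)
I = rng.normal(size=(200, 4))
Y = 2.0 * I[:, [0]] + 0.01 * rng.normal(size=(200, 1))
score, xi = select_pcs(I, Y)
```

KPI-unrelated latent variables receive near-zero scores and are excluded from the reconstruction, exactly as the mechanism above prescribes.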
4. Results and Discussion
4.1. Nonlinear Numerical Model
The following mathematical model is designed to test the proposed algorithm
![]() |
20 |
where N_i (i = 1,2,···,4) denotes independent Gaussian variables and ω_i (i = 1,2,···,7) denotes Gaussian noise. In this paper, x_i (i = 1,2,···,5) denotes process data and y_i (i = 1,2) denotes KPI variables. In this process, N_1 and N_2 are contained in y_1 or y_2, and hence, they are KPI-related. A total of 960 samples are generated for offline modeling.
Figure 3 shows the cost functions of NCA-PLS during the training iterations. In this paper, the sigmoid function is used in NCA-PLS, with 500 gradient descent iterations. According to Figure 3, both E1 and E3 decrease in each iteration: E1 reaches a value close to zero after the 150th iteration, and E3 decreases from its initial value of −0.03 to −0.1. Therefore, NCA-PLS can successfully minimize the correlation among the PCs and maximize the correlation between the PCs and the KPIs.
Figure 3.
Cost function values of NCA-PLS.
This process generates another 960 samples for testing, with faults introduced at the 161st sampling point. There are three types of faults:
fault 1: a step fault occurs at variable x1 with amplitude 5;
fault 2: a step fault occurs at variable N1 with amplitude 2; and
fault 3: a step fault occurs at variable N3 with amplitude 2.
Faults 1 and 2 are KPI-related because they occur in components containing information of N1 and N2, and fault 3 is KPI-unrelated because it only occurs in N3.
These data are used to compare PLS, KPLS, and NCA-PLS. Table 1 shows the false-alarm and detection rates for the three types of faults. For NCA-PLS, the number of gradient descent iterations is set to 500. For KPLS, the PC number is set by trial and error. In this study, all control limits are based on 99% confidence limits. The best result for each item is bolded and underlined.
Table 1. False-Alarm and Fault-Detection Rates (%) of Three Methods.

| method | PLS | PLS | KPLS | KPLS | NCA-PLS | NCA-PLS |
|---|---|---|---|---|---|---|
| index | T2 | SPE | T2 | SPE | I2 | SPE |
| false-alarm rate | 6.25 | 0.63 | 7.50 | 0.63 | 0.00 | 0.00 |
| fault-detection rate: fault 1 | 49.30 | 0.25 | 29.63 | 2.13 | 99.00 | 3.50 |
| fault-detection rate: fault 2 | 28.38 | 3.38 | 28.37 | 5.88 | 90.88 | 6.13 |
| fault-detection rate: fault 3 | 2.50 | 11.13 | 35.63 | 3.75 | 62.75 | 57.25 |
In Table 1, NCA-PLS achieves a zero false-alarm rate, which is lower than those of PLS (6.25%) and KPLS (7.50%). Because PLS cannot handle nonlinear features, it has the worst fault-detection rate. KPLS adopts the kernel function for nonlinear approximation, so it performs much better than PLS on fault 3. However, as no prior knowledge is available to set the kernel function parameters, the KPLS model is not suitable for all faults, and it performs poorly on the other two faults. As NCA-PLS can successfully handle nonlinear features using an ANN and can extract KPI-related information for fault detection, it successfully detects faults 1 and 2. Although fault 3 is not KPI-related, NCA-PLS still achieves a 62.75% fault-detection rate. Overall, NCA-PLS achieves a high fault-detection rate without false alarms.
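As a sketch of how such rates are computed, the following snippet sets a 99% control limit from normal-operation index values and scores a test trajectory (the empirical-percentile form of the limit is an assumption; a parametric limit could be used instead):

```python
import numpy as np

def control_limit(index_normal, confidence=0.99):
    """99% confidence control limit as an empirical quantile (assumed form)."""
    return np.quantile(index_normal, confidence)

def alarm_rates(index_test, limit, fault_start):
    """False-alarm rate (%) before fault_start; detection rate (%) after."""
    alarms = index_test > limit
    return 100.0 * alarms[:fault_start].mean(), 100.0 * alarms[fault_start:].mean()

# synthetic monitoring index: normal behavior, then a clear fault from sample 160
rng = np.random.default_rng(2)
limit = control_limit(rng.chisquare(3, size=2000))
index_test = np.concatenate([rng.chisquare(3, size=160),
                             rng.chisquare(3, size=800) + 20.0])
far, fdr = alarm_rates(index_test, limit, fault_start=160)
```

By construction, roughly 1% of normal samples exceed a 99% limit, which is why a small nonzero false-alarm rate is expected for any method.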
Figures 4 and 5 show the monitoring charts of faults 1 and 3, respectively. For PLS and KPLS, the control limit line is very close to the monitoring indices under both normal and fault conditions. As such, the monitoring indices of PLS and KPLS may exceed the control limit under normal conditions, which causes a large false-alarm rate, and they may fall below the control limit after a fault occurs, which causes a low fault-detection rate. In contrast, the control limit line of NCA-PLS is well separated from the normal-condition indices, and the monitoring indices increase quickly after a fault occurs, so NCA-PLS is sensitive to process faults and does not suffer from a false-alarm problem.
Figure 4.

Monitoring charts for fault 1.
Figure 5.

Monitoring charts for fault 3.
4.2. Tennessee Eastman Process
The Tennessee Eastman (TE) process, proposed by Downs and Vogel35 in 1993, is a widely used simulation model for testing MSPM methods. The TE process simulates a chemical industry process containing nonlinear features, which consists of five major unit operations: a reactor, a product condenser, a vapor–liquid separator, a recycle compressor, and a product stripper. The whole process includes 34 measurable variables and another 19 unmeasurable composition measurements. Because one measurable variable, the agitation speed, is held constant, this paper takes the remaining 33 measurable variables as the input X. Two unmeasurable variables, XMES (35) and XMES (36), representing products G and H, respectively, are chosen as the KPI matrix Y. A total of 960 samples of normal data are generated for offline training. According to ref 38, nine types of faults are considered to be quality-related faults in the TE process; these are listed in Table 2. As such, nine test data sets are generated as testing data, and each contains 960 samples; for each testing data set, the fault is introduced from the 161st sample and continues until the end.
Table 2. Quality-Related Faults in the TE Process.
| no. | description | type |
|---|---|---|
| 1 | feed ratio of A/C, composition constant of B (stream 4) | step |
| 2 | composition of B, ratio constant of A/C (stream 4) | step |
| 5 | inlet temperature of condenser cooling water | step |
| 6 | feed loss of A (stream 1) | step |
| 7 | header pressure loss of C—reduced availability (stream 4) | step |
| 8 | feed composite of A, B, and C (stream 4) | random variation |
| 10 | feed temperature of C (stream 4) | random variation |
| 12 | inlet temperature of condenser cooling water | random variation |
| 13 | reaction kinetics | slow drift |
Table 3 lists the monitoring results of NCA-PLS, PLS, KPLS, and NCA. The best result for each item is marked in bold and underlined. In Table 3, NCA-PLS achieves the best result for faults 1, 5, 8, 10, 12, and 13; for faults 2, 6, and 7, which can be easily detected by all four methods, the fault-detection rate of NCA-PLS is also very close to those of the other three methods. Among the four methods, NCA-PLS achieves the highest average fault-detection rate. It should also be noted that NCA-PLS achieves the lowest false-alarm rate, and hence, NCA-PLS is more reliable than the other three methods.
Table 3. False-Alarm and Fault-Detection Rates (%) of Four Compared Methods in the TE Process.

| methods | PLS | PLS | KPLS | KPLS | NCA | NCA | NCA-PLS | NCA-PLS |
|---|---|---|---|---|---|---|---|---|
| indices | T2 | SPE | T2 | SPE | I2 | SPE | I2 | SPE |
| false-alarm rate | 1.16 | 0.92 | 0.83 | 1.46 | 0.95 | 0.83 | 0.83 | 0.33 |
| fault 1 | 99.13 | 99.50 | 99.38 | 99.13 | 97.25 | 98.35 | 98.63 | 99.50 |
| fault 2 | 98.38 | 98.38 | 98.38 | 97.50 | 97.50 | 99.85 | 97.38 | 97.88 |
| fault 5 | 19.00 | 25.63 | 24.88 | 11.50 | 23.85 | 25.38 | 24.63 | 27.00 |
| fault 6 | 98.25 | 100.00 | 99.25 | 98.50 | 89.13 | 97.67 | 98.50 | 98.63 |
| fault 7 | 48.63 | 100.00 | 100.00 | 100.00 | 66.38 | 100.00 | 99.38 | 41.63 |
| fault 8 | 90.50 | 96.13 | 96.25 | 72.50 | 69.88 | 93.75 | 97.00 | 81.75 |
| fault 10 | 22.75 | 32.50 | 34.88 | 3.38 | 42.45 | 37.20 | 61.00 | 9.13 |
| fault 12 | 84.13 | 97.38 | 98.13 | 76.50 | 84.38 | 53.11 | 99.13 | 61.13 |
| fault 13 | 92.00 | 94.00 | 94.63 | 87.63 | 94.15 | 93.63 | 96.75 | 81.38 |
| average | 72.53 | 82.61 | 82.86 | 71.85 | 73.89 | 77.66 | 85.82 | 66.45 |
For fault 10, NCA-PLS achieves a much greater fault-detection rate than KPLS, which demonstrates that it has better nonlinear fitting ability than KPLS. In addition, the parameters of KPLS must be tuned by trial and error, whereas those of NCA-PLS are trained automatically by the gradient descent method, which makes NCA-PLS more convenient for engineering applications.
Although NCA has similar nonlinear fitting ability to NCA-PLS, it performs much worse than NCA-PLS and KPLS. Furthermore, NCA also achieves a lower average fault-detection rate than PLS, which is a linear method. The reasons are as follows: (a) NCA ignores the information in the KPIs, and (b) NCA monitors all components in the process data rather than the KPI-related components, and hence, it is insensitive to faults in KPI-related components.
Two issues should also be noted, which differ from the testing results in Section 4.1: (a) the false-alarm rate is not zero in this test, because the TE process is not static, and its dynamic features cause model deviation, and (b) NCA-PLS cannot achieve a 100% detection rate even for faults that can be easily detected, such as fault 7, because NCA-PLS adopts summation to calculate the I2 index, which may cause a tiny detection delay.30
5. Conclusions
The NCA structure was adopted to handle the nonlinear issue of traditional PLS. By proposing a new loss function and a new PC selection mechanism, all PLS principles were realized by NCA.
The superiority of NCA-PLS was verified by several simulation tests. In a mathematical model test, it achieved much better detection performance on three types of faults that could not be effectively detected by PLS and KPLS. NCA-PLS also performed better than the other compared MSPM algorithms in the TE process test.
Our future work will focus on analyzing the convergence of NCA-PLS and addressing the detection delay issue in NCA-PLS.
Acknowledgments
This work was supported by the Natural Science Foundation of Guangdong Province, China, 2022A1515011040, the Innovation Team by Department of Education of Guangdong Province, China, 2020KCXTD041, the Natural Science Foundation of Shenzhen, China, 20220813001358001, and the Open Research Fund of Anhui Engineering Technology Research Center of Automotive New Technique, China, QCKJ202103.
The authors declare no competing financial interest.
References
- Lou Z.; Wang Y.; Si Y.; Lu S. A novel multivariate statistical process monitoring algorithm: Orthonormal subspace analysis. Automatica 2022, 138, 110148. 10.1016/j.automatica.2021.110148. [DOI] [Google Scholar]
- Ma X.; Si Y.; Qin Y.; Wang Y. Fault Detection for Dynamic Processes Based on Recursive Innovational Component Statistical Analysis. IEEE Trans. Autom. Sci. Eng. 2022, 10.1109/TASE.2022.3149591. [DOI] [Google Scholar]
- Ali H.; Maulud A. S.; Zabiri H.; Nawaz M.; Suleman H.; Taqvi S. A. A. Multiscale Principal Component Analysis-Signed Directed Graph Based Process Monitoring and Fault Diagnosis. ACS omega 2022, 7, 9496–9512. 10.1021/acsomega.1c06839. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Zhou Z.; Wang J.; Yang C.; Wen C.; Li Z. Fault Detection and Isolation of Non-Gaussian and Nonlinear Processes Based on Statistics Pattern Analysis and the k-Nearest Neighbor Method. ACS Omega 2022, 7, 18623. 10.1021/acsomega.2c01279. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Rännar S.; MacGregor J. F.; Wold S. Adaptive batch monitoring using hierarchical PCA. Chemom. Intell. Lab. Syst. 1998, 41, 73–81. 10.1016/s0169-7439(98)00024-0. [DOI] [Google Scholar]
- Sun X.; Rashid M.; Hobbs N.; Askari M. R.; Brandt R.; Shahidehpour A.; Cinar A. Prior informed regularization of recursively updated latent-variables-based models with missing observations. Control Eng. Pract. 2021, 116, 104933. 10.1016/j.conengprac.2021.104933. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Sun X.; Cinar A.; Yu X.; Rashid M.; Liu J. Kernel-Regularized Latent-Variable Regression Models for Dynamic Processes. Ind. Eng. Chem. Res. 2022, 61, 5914. 10.1021/acs.iecr.1c04739. [DOI] [Google Scholar]
- Alrifaey M.; Lim W. H.; Ang C. K. A novel deep learning framework based RNN-SAE for fault detection of electrical gas generator. IEEE Access 2021, 9, 21433–21442. 10.1109/access.2021.3055427. [DOI] [Google Scholar]
- Yin S.; Zhu X.; Kaynak O. Improved PLS focused on key-performance-indicator-related fault diagnosis. IEEE Trans. Ind. Electron. 2014, 62, 1651–1658. [Google Scholar]
- Ringle C. M.; Sarstedt M.; Mitchell R.; Gudergan S. P. Partial least squares structural equation modeling in HRM research. Int. J. Hum. Resour. Man. 2020, 31, 1617–1643. 10.1080/09585192.2017.1416655. [DOI] [Google Scholar]
- Geladi P.; Kowalski B. R. Partial least-squares regression: a tutorial. Anal. Chim. Acta 1986, 185, 1–17. 10.1016/0003-2670(86)80028-9. [DOI] [Google Scholar]
- Liu Y.; Wang F.; Gao F.; Cui H. Hierarchical Multiblock T-PLS Based Operating Performance Assessment for Plant-Wide Processes. Ind. Eng. Chem. Res. 2018, 57, 14617–14627. 10.1021/acs.iecr.8b02685. [DOI] [Google Scholar]
- Solihin M. I.; Shameem Y.; Htut T.; Ang C. K.; Bt Hidayab M. Non-invasive blood glucose estimation using handheld near infra-red device. Int. J. Recent Technol. Eng. 2019, 8, 16–19. [Google Scholar]
- Fazai R.; Mansouri M.; Abodayeh K.; Nounou H.; Nounou M. Online reduced kernel PLS combined with GLRT for fault detection in chemical systems. Process Saf. Environ. Prot. 2019, 128, 228–243. 10.1016/j.psep.2019.05.018. [DOI] [Google Scholar]
- Botre C.; Mansouri M.; Nounou M.; Nounou H.; Karim M. N. Kernel PLS-based GLRT method for fault detection of chemical processes. J. Loss Prev. Process. Ind. 2016, 43, 212–224. 10.1016/j.jlp.2016.05.023. [DOI] [Google Scholar]
- Zhu W.; Zhen W.; Jiao J. Partial Derivate Contribution Plot Based on KPLS-KSER for Nonlinear Process Fault Diagnosis. 2019 34th Youth Academic Annual Conference of Chinese Association of Automation (YAC), 2019.
- Wold S.; Kettaneh-Wold N.; Skagerberg B. Nonlinear PLS modeling. Chemom. Intell. Lab. Syst. 1989, 7, 53–65. 10.1016/0169-7439(89)80111-x. [DOI] [Google Scholar]
- Liu H.; Yang C.; Carlsson B.; Qin S. J.; Yoo C. Dynamic nonlinear partial least squares modeling using Gaussian process regression. Ind. Eng. Chem. Res. 2019, 58, 16676–16686. 10.1021/acs.iecr.9b00701. [DOI] [Google Scholar]
- Wang Y.; Si Y.; Huang B.; Lou Z. Survey on the theoretical research and engineering applications of multivariate statistics process monitoring algorithms: 2008–2017. Can. J. Chem. Eng. 2018, 96, 2073–2085. 10.1002/cjce.23249. [DOI] [Google Scholar]
- Jiao J.; Zhen W.; Wang G.; Wang Y. KPLS–KSER based approach for quality-related monitoring of nonlinear process. ISA Trans. 2021, 108, 144–153. 10.1016/j.isatra.2020.09.006. [DOI] [PubMed] [Google Scholar]
- Fu Y.; Kruger U.; Li Z.; Xie L.; Thompson J.; Rooney D.; Hahn J.; Yang H. Cross-validatory framework for optimal parameter estimation of KPCA and KPLS models. Chemom. Intell. Lab. Syst. 2017, 167, 196–207. 10.1016/j.chemolab.2017.06.007. [DOI] [Google Scholar]
- Si Y.; Wang Y.; Zhou D. Key-performance-indicator-related process monitoring based on improved kernel partial least squares. IEEE Trans. Ind. Electron. 2021, 68, 2626–2636. 10.1109/tie.2020.2972472. [DOI] [Google Scholar]
- Jia Q.; Zhang Y. Quality-related fault detection approach based on dynamic kernel partial least squares. Chem. Eng. Res. Des. 2016, 106, 242–252. 10.1016/j.cherd.2015.12.015. [DOI] [Google Scholar]
- Sun H.; Lv G.; Mo J.; Lv X.; Du G.; Liu Y. Application of KPCA combined with SVM in Raman spectral discrimination. Optik 2019, 184, 214–219. 10.1016/j.ijleo.2019.02.126. [DOI] [Google Scholar]
- Balavalikar S.; Nayak P.; Shenoy N.; Nayak K.. Particle Swarm Optimization based Artificial Neural Network Model for Forecasting Groundwater Level in Udupi District, AIP Conference Proceedings; AIP Publishing, 2018; p 020021.
- Abiodun O. I.; Jantan A.; Omolara A. E.; Dada K. V.; Mohamed N. A.; Arshad H. State-of-the-art in artificial neural network applications: A survey. Heliyon 2018, 4, e00938 10.1016/j.heliyon.2018.e00938. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Wang Y.; Yu H.; Li X. Efficient Iterative Dynamic Kernel Principal Component Analysis Monitoring Method for the Batch Process with Super-large-scale Data Sets. ACS omega 2021, 6, 9989–9997. 10.1021/acsomega.0c06039. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Rong M.; Shi H.; Song B.; Tao Y. Multi-block dynamic weighted principal component regression strategy for dynamic plant-wide process monitoring. Measurement 2021, 183, 109705. 10.1016/j.measurement.2021.109705. [DOI] [Google Scholar]
- Li G.; Hu Y.; Chen H.; Shen L.; Li H.; Hu M.; Liu J.; Sun K. An improved fault detection method for incipient centrifugal chiller faults using the PCA-R-SVDD algorithm. Energy Build. 2016, 116, 104–113. 10.1016/j.enbuild.2015.12.045. [DOI] [Google Scholar]
- Lou Z.; Wang Y. New Nonlinear Approach for Process Monitoring: Neural Component Analysis. Ind. Eng. Chem. Res. 2021, 60, 387. 10.1021/acs.iecr.0c02256. [DOI] [Google Scholar]
- Petrović M.; Rakočević V.; Kontrec N.; Panić S.; Ilić D. Hybridization of accelerated gradient descent method. Numer. Algorithm. 2018, 79, 769–786. 10.1007/s11075-017-0460-4. [DOI] [Google Scholar]
- Liu Y.; Liu B.; Zhao X.; Xie M. A mixture of variational canonical correlation analysis for nonlinear and quality-relevant process monitoring. IEEE Trans. Ind. Electron. 2017, 65, 6478–6486. [Google Scholar]
- Chen Q.; Liu Z.; Ma X.; Wang Y. Artificial neural correlation analysis for performance-indicator-related nonlinear process monitoring. IEEE Trans. Ind. Inform. 2021, 18, 1039–1049. [Google Scholar]
- Lou Z.; Li Z.; Wang Y.; Lu S. Improved Neural Component Analysis for Monitoring Nonlinear and Non-Gaussian Processes. Measurement 2022, 195, 111164. 10.1016/j.measurement.2022.111164. [DOI] [Google Scholar]
- Gao X.; Hou J. An improved SVM integrated GS-PCA fault diagnosis approach of Tennessee Eastman process. Neurocomputing 2016, 174, 906–911. 10.1016/j.neucom.2015.10.018. [DOI] [Google Scholar]
- Risvik H.Principal Component Analysis (PCA) & NIPALS algorithm; It.lut.fi, 2007.
- Lou Z.; Shen D.; Wang Y. Preliminary-summation-based principal component analysis for non-Gaussian processes. Chemom. Intell. Lab. Syst. 2015, 146, 270–289. 10.1016/j.chemolab.2015.05.017. [DOI] [Google Scholar]
- Wang G.; Yin S. Quality-related fault detection approach based on orthogonal signal correction and modified PLS. IEEE Trans. Ind. Inform. 2015, 11, 398–405. [Google Scholar]