Abstract
To improve enterprise financial early warning, we propose an algorithm based on a decision tree. Ordinary PCA-based improvements to the classical decision tree algorithm suffer from low representativeness of the data after dimensionality reduction, so the accuracy of the algorithm improves only slightly across repeated runs. Building on the classical algorithm, the proposed method extracts attribute eigenvalues twice before classification and calculates the amount of data to be classified; that is, the most important attributes of the original data are selected. After each subtree is established, the data are again reduced in dimension and merged. The improved algorithm is verified on three datasets from the UCI database. The results show an average accuracy of 94.6% across the three datasets, an improvement of 1.6% over the traditional classical algorithm and 0.6% over the ordinary PCA decision tree optimization algorithm. PCA-based decision tree algorithms can thus improve accuracy to some extent, which is of practical importance. In future work, the improved algorithm will be applied to secondary modeling to obtain a more efficient decision tree model. The decision tree algorithm is shown to recognize early warnings of an enterprise's financial risks, which enhances the effectiveness of enterprise financial early warning.
1. Introduction
In the post-financial-crisis period, enterprises operate in an unpredictable external environment, and risks of different forms emerge one after another. The survival and development of enterprises are inseparable from this environment, and its uncertainty has a nonnegligible impact on daily operations. Negligence in, or a change to, any link of daily operations, if not identified and controlled in time, is likely to lead to financial risk. With the deepening of economic globalization, the complex and changeable market environment exposes enterprises to financial risks different from those of the past [1]. Problems in decision-making and control within operation and management activities, such as accounts receivable that are difficult to recover, improper authority settings, and wrong financing or investment decisions, will lead to financial risks and huge economic losses if they cannot be identified and handled in time; the decision tree algorithm offers a way to identify them. Under the contemporary market economy, any enterprise lives in an extremely complex environment and cannot obtain all the information it needs to grow, which means it will inevitably face certain financial risks.

The recent financial crisis shifted attention to enterprise risk management, especially financial risk management. At present, financial risk management is still neglected by most Chinese enterprises in daily operation and management, and financial risks persist: the allocation of financial control rights is unreasonable, and the financial governance structure is imperfect [2]; the financial control mode lags behind, and the early warning analysis and control mechanisms are incomplete; the concept of financial risk management is backward, and awareness of risk value is insufficient; internal audit exists in name only, and internal financial supervision is lacking. The many financial failures of Chinese enterprises in recent years have made people realize that further research on enterprise financial risk management has theoretical and practical significance that cannot be ignored.

Information that supports such management is provided in internal reports. Internal reports give managers at different levels the information they need for business decision-making and control, so that they can adjust management strategy in time and make corresponding decisions. Compared with external reports, internal reports not only support pre-control and in-process control after reporting but also attract more and more attention from enterprise management. However, compared with financial accounting, management accounting remains underdeveloped, its theory is divorced from practice, and a complete theoretical framework to guide management accounting practice has not yet formed [3, 4]. In particular, research on the theory and system of internal reporting is scarce, and there is no unified concept of the internal report.
Currently, the few internal reports that enterprises do prepare mostly serve the completion of external reports, so the value of internal reporting is not fully realized.
The main goal of this paper is to optimize the decision tree algorithm and apply it to enterprise cost control. The basic technologies and theories underlying this research are data mining technology and the decision tree algorithm. Common data mining work proceeds through the following main steps: pose questions according to needs, make the necessary data preparations, preprocess the massive data, build the data mining model, and evaluate and interpret the model. The model system of the whole data mining process is shown in Figure 1.
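As a concrete illustration, the following is a minimal sketch of this pipeline in Python with scikit-learn; the file name and column names are hypothetical placeholders, not part of the original study.

```python
# Minimal sketch of the data mining pipeline described above:
# data preparation -> preprocessing -> model building -> evaluation.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# 1. Data preparation: load raw records (path and columns are placeholders).
data = pd.read_csv("enterprise_costs.csv")
X, y = data.drop(columns=["cost_control"]), data["cost_control"]

# 2. Preprocessing: split the data and standardize the features.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# 3. Model building: fit an entropy-based decision tree.
model = DecisionTreeClassifier(criterion="entropy").fit(X_train, y_train)

# 4. Evaluation: score the model on held-out data.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```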
Figure 1. Basic architecture of data mining.
2. Related Work
Kumar and Ragava Reddy, referring to the Bank of America's method for evaluating the operating and financial status of small and medium-sized enterprises, sorted out and analyzed the reasons for the operating failures of small and medium-sized enterprises in Finland [5]. The research found that the lack of management ability among middle-level managers, the lack of information provided by the accounting system, and the enterprise's attitude towards its employees have an important impact on operating failure, providing a new direction for evaluating the financial and operating status of enterprises. Xu et al. used a variety of mixed financial risk early warning models, that is, different combinations of multivariate discriminant analysis, logistic regression, and BP neural networks, to conduct an empirical analysis of sample data, and found that the prediction accuracy of the mixed early warning models was significantly higher than that of any single model [6]. Clarin conducted in-depth research on the safety management problems and countermeasures for foreign students in colleges and universities, expounded the importance of early warning mechanisms, and established mechanisms for emergencies, cultural conflicts, accidental injuries, and other problems endangering the personal safety of foreign students during their stay in China. Such a mechanism can discover the causes of contradictions in advance so that managers can take measures early, reduce the scope of safety events, and even avoid them altogether, achieving a shift from remedying events after they occur to warning before they occur [7]. Exploring strategies for managing foreign students in China in the new era, Hammed and Jumoke noted the need to establish a prevention mechanism for emergencies involving foreign students [8, 9]. Hassan et al. studied the construction of an emergency early warning and prevention mechanism for foreign students in China and expounded the necessity of a safety early warning mechanism; in terms of the purpose and effect of safety event management, planning ahead is far better than making up for losses afterwards. Li et al. were among the first to use regression analysis to study financial risk early warning; with this work, regression analysis replaced discriminant analysis as the mainstream method in the field of financial crisis early warning in the 1980s [10]. Zhang et al., also referring to the Bank of America's evaluation method, sorted and analyzed the reasons for the operating failures of small and medium-sized enterprises in Finland, reaching similar conclusions about the impact of middle managers' ability, accounting information, and the treatment of employees, and providing a further direction for evaluating enterprise financial and operating status [11]. Bian and Wang empirically analyzed sample data using a variety of mixed financial risk early warning models, that is, different combinations of multivariate discriminant analysis, logistic regression, and neural network methods.
The research found that the prediction accuracy of the mixed early warning model is significantly higher than that of a single early warning model [12]. When conducting univariate analysis of the financial status of sample enterprises, Oujdi et al. selected several financial ratios, including the asset-liability ratio, return on net assets, return on total assets, and current ratio; the results show that the asset-liability ratio and current ratio have the lowest misjudgment rates for financial status [13, 14]. Nguyen et al. studied a financial risk early warning model based on a fuzzy neural network, selected variables from three groups of alternative financial indicators through stepwise regression, and finally settled on long-term liabilities, shareholders' equity, profit growth rate, current ratio, working capital to total assets, asset turnover, and return on assets as the final analysis variables.
Based on the existing research, this paper develops a tree-based algorithm. The improvement targets the shortcomings of the classical algorithm and the traditional decision tree algorithm. Prior work shows that there are many ways to choose the splitting criterion of the classical algorithm; this paper therefore presents a decision tree optimization algorithm based on PCA. In the ordinary PCA-based decision tree improvement, the representativeness of the data after dimensionality reduction is not high, so the accuracy of the algorithm improves only slightly across repeated runs. Building on the classical algorithm, the proposed method extracts the attribute eigenvalues twice before classification and calculates the amount of data to be classified; that is, it selects the most important attributes of the original data, establishes the subtree, and then performs dimensionality reduction and merging on the data. The improved algorithm is validated with three datasets from the UCI database.
3. Method
Research on a specific problem requires collecting and using data, and the original variables describing these data are often numerous and correlated, with relationships that are difficult to see intuitively. However, a few factors often play the dominant role. We therefore need to attend to these hidden connections: by studying the internal structure of the correlation matrix or covariance matrix of the original data, we can identify a few linear combinations of the original variables that capture them. These linear combinations are called the principal components. The classical algorithm generally maintains the following relationships between the principal components and the original variables:
Each principal component is a linear combination of the original variables.

The number of principal components is smaller than the number of original variables.

The principal components retain most of the information contained in the original variables.

The principal components are mutually independent.

Given these properties, when we face a large amount of data we can perform principal component analysis on the correlated original variables and then study the relationships among them, and the internal rules of the object under study, at a deeper level. The steps of principal component analysis are as follows: let the object of study contain p indexes, expressed as x1, x2, …, xp, which form the p-dimensional random vector X = (x1, x2, …, xp)′. Let the mean of the random vector X be u and its covariance matrix be Σ [15, 16]. Next, we perform a linear transformation on X; after this step, we obtain a new comprehensive variable Y. In other words, the new comprehensive variable can be described linearly by the original variables; that is, it meets the following requirements:
$$Y_i = \mu_i' X = \mu_{1i} X_1 + \mu_{2i} X_2 + \cdots + \mu_{pi} X_p, \quad i = 1, 2, \ldots, p \tag{1}$$
Because we can apply any such linear transformation to the original variables, the statistical characteristics of the comprehensive variable Y change with the transformation chosen. Therefore, to obtain good results, the variance of Yi = μi′X should be kept as large as possible, and the Yi should be mutually independent, because
$$\mathrm{Var}(Y_i) = \mathrm{Var}(\mu_i' X) = \mu_i' \Sigma \mu_i \tag{2}$$
Given any constant C, you get

$$\mathrm{Var}(C \mu_i' X) = C^2 \mu_i' \Sigma \mu_i \tag{3}$$
Therefore, if the coefficient vector μi is not restricted, Var(Yi) can be increased at will, which makes the problem meaningless [17]. The linear transformation must therefore be carried out under the following principles:
(1) The coefficient vectors are normalized:

$$\mu_i' \mu_i = \sum_{j=1}^{p} \mu_{ji}^2 = 1, \quad i = 1, 2, \ldots, p \tag{4}$$

Substituting this constraint into (2), the following is obtained:

$$\max_{\mu_i' \mu_i = 1} \mathrm{Var}(Y_i) = \max_{\mu_i' \mu_i = 1} \mu_i' \Sigma \mu_i \tag{5}$$

(2) Yi and Yj are mutually uncorrelated (i ≠ j; i, j = 1, 2, …, p).

(3) Y1 has the maximum variance among all linear combinations of X1, X2, …, Xp satisfying principle (1); Y2 has the largest variance among all such linear combinations uncorrelated with Y1; and in general Yp has the largest variance among all such linear combinations uncorrelated with Y1, Y2, …, Yp−1.
The comprehensive variables Y1, Y2, …, Yp determined under the three principles above are called the 1st, 2nd, …, p-th principal components of the original variables, and the proportion of total variance accounted for by each successive component decreases. In practice, we usually select only the first few components, so as to simplify the model.
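To make these principles concrete, here is a minimal NumPy sketch (not the authors' code) that computes principal components from the covariance matrix by eigendecomposition; the unit-norm eigenvectors play the role of the μi above, and the random data are illustrative only.

```python
import numpy as np

def principal_components(X, k):
    """Return the first k principal components of data matrix X (rows = records)."""
    Xc = X - X.mean(axis=0)                      # center each variable
    cov = np.cov(Xc, rowvar=False)               # covariance matrix Sigma
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigh: for symmetric matrices
    order = np.argsort(eigvals)[::-1]            # sort by decreasing variance
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    explained = eigvals / eigvals.sum()          # variance proportion of each Y_i
    return Xc @ eigvecs[:, :k], explained[:k]    # scores Y_i = mu_i' X, unit-norm mu_i

# Example: keep the first 2 components of 100 random 5-dimensional records.
rng = np.random.default_rng(0)
Y, ratios = principal_components(rng.normal(size=(100, 5)), k=2)
print(Y.shape, ratios)
```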
In the traditional ID3 algorithm, different individuals take different values of each attribute, and the subsets constructed by the algorithm are partitions of the training set based on these different values. Therefore, the number of subsets finally constructed by the ID3 algorithm equals the number of distinct values of the individual attribute. When building the decision tree on this basis, each subset corresponds to a branch of the tree, and the end points of these branches become leaf nodes, yielding the corresponding decision rules [18]. If an attribute takes too many distinct values, this directly increases the leaf nodes, decision paths, and resulting decision rules, that is, the overall scale of the decision tree, and can even bias attribute selection toward multivalued attributes, lowering the accuracy of the decision rules. The optimization of the ID3 algorithm proposed in this paper solves exactly such problems. The specific optimization ideas are as follows (a sketch of the information gain computation that drives the attribute choice follows this list).
Select the training set and determine the classification attribute Ak.

Establish a node n.

If the data under consideration all belong to the same category, n is a leaf of the decision tree, and the leaf is marked with that class.

If there are no remaining attributes to analyze in the current data, n is also a leaf, and its classification is marked by majority vote.

Otherwise, select the attribute with the best expected value (highest information gain) as the test attribute of node n and split along it.
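The attribute choice in the last step is ID3's entropy and information gain computation. A minimal sketch follows, with a toy two-attribute example whose data are illustrative only:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Entropy reduction achieved by splitting on attribute index attr."""
    base = entropy(labels)
    subsets = {}
    for row, label in zip(rows, labels):
        subsets.setdefault(row[attr], []).append(label)
    remainder = sum(len(s) / len(labels) * entropy(s) for s in subsets.values())
    return base - remainder

# Example: ID3 picks the attribute with the largest information gain.
rows = [("high", "yes"), ("high", "no"), ("low", "yes"), ("low", "yes")]
labels = ["bad", "bad", "good", "good"]
best = max(range(2), key=lambda a: information_gain(rows, labels, a))
print("best attribute index:", best)  # attribute 0 separates the classes perfectly
```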
The improved algorithm is based on the original ID3 decision tree algorithm. Its main idea is as follows.
Step 1: select the dataset and first compress it using the PCA algorithm, retaining the key attributes. The data are compressed with PCA at this stage because, when the decision tree is designed directly, the original data contain many correlated variables whose interrelations are hard to control. We therefore analyze the internal structure of the correlation matrix or covariance matrix of the original data; for example, the coefficient matrix can be created as follows [19].
Step 2: initialize the data to be processed into a complete data matrix Xm∗n, where m represents the number of records and n represents the dimension of each data record.
Step 3: standardize the data so that each attribute has mean 0 and standard deviation 1; that is, data with different dimensions are brought onto the same scale in one matrix. The mathematical model for normalizing the data is as follows:

$$x_{ij}^{*} = \frac{x_{ij} - \bar{x}_j}{s_j}, \quad \bar{x}_j = \frac{1}{m}\sum_{i=1}^{m} x_{ij}, \quad s_j = \sqrt{\frac{1}{m-1}\sum_{i=1}^{m}\left(x_{ij} - \bar{x}_j\right)^2} \tag{6}$$

Step 4: compute the covariances between the attributes of the standardized matrix. The purpose is to measure the relationship between two variables, and the model is as follows:

$$\mathrm{cov}(X, Y) = \frac{1}{m-1}\sum_{i=1}^{m}\left(x_i - \bar{x}\right)\left(y_i - \bar{y}\right) \tag{7}$$
For n-dimensional matrix data, we can compute the covariance between every pair of attributes; thus the n-dimensional data yield an n∗n covariance matrix [20].

$$C = \left(c_{ij}\right)_{n \times n}, \quad c_{ij} = \mathrm{cov}\left(X_i, X_j\right) \tag{8}$$
If Xi represents the ith attribute (column) of the matrix, it can be expressed as follows:

$$X_i = \left(x_{1i}, x_{2i}, \ldots, x_{mi}\right)' \tag{9}$$
Then, we use linear combinations of the original variables to obtain a few comprehensive indicators, which are the principal components.
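Steps 1–4 can be strung together in a short sketch. The 85% variance threshold is an assumed illustrative value, and the second compression pass after subtree construction is approximated here by reusing the same PCA stage, so this is a simplification of the full method:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier

def improved_id3(X, y, var_threshold=0.85):
    """Sketch of steps 1-4: standardize, reduce with PCA, fit an entropy tree."""
    # Steps 2-3: treat X as the m x n matrix and standardize to mean 0, std 1 (Eq. 6).
    Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    # Step 4: PCA internally forms the covariance matrix (Eqs. 7-8) and keeps the
    # principal components explaining var_threshold of the total variance.
    pca = PCA(n_components=var_threshold)
    Z = pca.fit_transform(Xs)
    tree = DecisionTreeClassifier(criterion="entropy").fit(Z, y)
    return pca, tree

# Example usage (wine data):
# from sklearn.datasets import load_wine
# X, y = load_wine(return_X_y=True)
# pca, tree = improved_id3(np.asarray(X), y)
```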
4. Experimental Results and Discussion
To verify the optimization algorithm, data from the UCI machine learning repository were selected for the experiments. The first dataset selected is the wine dataset, which contains 178 samples and 13 attributes. The data were run through both the original ID3 algorithm and the optimized algorithm; the comparison is shown in Figure 2.
Figure 2. Comparison of results on the wine dataset. (a) Results under the original ID3 algorithm. (b) Results under the optimized algorithm.
As Figure 2 shows, the original ID3 algorithm produced five conflicts, while the improved algorithm produced only two, confirming that the accuracy of the optimized algorithm is higher than that of the original [21]. Additionally, experiments were performed on the adult and vehicle datasets and compared with the ordinary PCA-fused ID3 algorithm. The results are shown in Figure 3.
Figure 3. Comparison of algorithm accuracy.
As Figure 3 shows, on all comparison data the optimized algorithm is more accurate than both the original ID3 algorithm and the ordinary PCA hybrid algorithm, so there is reason to believe the algorithm has improved. For clarity, the results are also summarized in Figure 4.
Figure 4. Comparison of accuracy of algorithm results.
After double compression of the test samples, the accuracy of the optimization algorithm improved to some extent. On the wine dataset, the accuracy of the optimization algorithm is 2.2% higher than that of the traditional ID3 algorithm and 1% higher than that of the ordinary PCA fusion algorithm. On the adult dataset, the accuracy is 1.1% higher than the standard ID3 algorithm and 0.6% higher than the PCA fusion algorithm. On the vehicle dataset, the accuracy of the optimization algorithm is 1.4% higher than the traditional ID3 algorithm and 0.1% higher than the PCA fusion algorithm. Additionally, two other data mining algorithms, KNN and naive Bayes, were tested on the adult dataset for comparison. The accuracy of the KNN algorithm is 87.5% and that of the naive Bayes algorithm is 93.75%, both lower than the accuracy of the algorithm proposed here. The algorithm described in this paper has thus been shown to be practical. The new algorithm, optimized from the ID3 algorithm, has the following advantages.
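The comparison can be reproduced in outline with scikit-learn. The following sketch runs five-fold cross-validation on the UCI wine dataset, with an entropy-criterion tree standing in for ID3 (scikit-learn's trees are CART-based), so exact numbers will differ from those reported above:

```python
from sklearn.datasets import load_wine
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_wine(return_X_y=True)  # 178 samples, 13 attributes
models = {
    "ID3-style tree": DecisionTreeClassifier(criterion="entropy"),
    "PCA + tree": make_pipeline(StandardScaler(), PCA(n_components=0.85),
                                DecisionTreeClassifier(criterion="entropy")),
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "naive Bayes": GaussianNB(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validated accuracy
    print(f"{name}: {scores.mean():.3f}")
```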
The optimization algorithm effectively solves the multivalue tendency of the traditional ID3 algorithm in decision attributes. That is, principal component analysis selects more representative decision attributes, avoiding the multivalue bias that arises from the information entropy calculation of decision attributes; the reduction in decision attributes also inevitably shortens the overall decision tree modeling time and improves modeling efficiency.
After the subtree is established, the PCA algorithm is used to compress the data again, the merged branches are marked with nodes, and splitting then continues. This not only avoids the limitations of pre-pruning but also makes effective use of the entire dataset. To further verify the advantages of the optimization algorithm, this paper also uses the dataset of an agricultural materials enterprise from earlier work. On the Windows 10 operating system, the optimization algorithm was rewritten in Python, and the rewritten algorithm was simulated; that is, secondary modeling of the enterprise's cost control was carried out to obtain the new decision tree model shown in Figure 5.
Figure 5. Decision tree of the agricultural materials enterprise obtained by the optimization algorithm.
Additionally, from the new decision tree, rules related to the enterprise's cost control decisions can be derived. These rules differ from those obtained by the original ID3 algorithm. The specific rules are as follows (a sketch translating them into code follows the list).
If main business cost control = excellent, then cost control = excellent.
If main business cost control = qualified and management cost control = excellent, then cost control = excellent.
If main business cost control = qualified and management cost control = qualified and sales cost control = unqualified, then cost control = unqualified.
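The three rules translate directly into a small classifier. A minimal sketch; the fallback label for combinations not covered by the listed rules is an assumption:

```python
def cost_control(main_business, management, sales):
    """Decision rules read off the optimized tree; attribute values are
    'excellent', 'qualified', or 'unqualified'."""
    if main_business == "excellent":
        return "excellent"
    if main_business == "qualified" and management == "excellent":
        return "excellent"
    if main_business == "qualified" and management == "qualified" and sales == "unqualified":
        return "unqualified"
    return "undetermined"  # combinations not specified by the listed rules

print(cost_control("qualified", "excellent", "qualified"))  # -> excellent
```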
From the experimental results above, we can see that when the improved algorithm is used, only three cost items are retained by the PCA algorithm for analysis, so the overall tree is more streamlined [22]. Moreover, according to the evaluation after decision tree modeling, the accuracy of the model built with the optimization algorithm is higher, up to 95.1%, and the modeling time is shorter, at 8.2 seconds. We also ran the data through the ordinary PCA fusion algorithm: its accuracy reached 94.2%, and its running time of 7.8 seconds is somewhat shorter than that of the optimization algorithm in this paper.
Comparing decision tree modeling with the traditional ID3 algorithm against the newly optimized algorithm, the differences are not small. In terms of decision tree modeling time, the newly optimized algorithm improves the accuracy of the algorithm to a certain extent thanks to the compression and dimensionality reduction of decision attributes, and its running time is also shorter than that of the original ID3 algorithm. The comparison of decision tree modeling time before and after optimization is shown in Figure 6. The optimized algorithm is clearly faster, with a modeling time 4.4 seconds shorter than the traditional ID3 algorithm.
Figure 6. Comparison of decision tree modeling time before and after optimization.
Additionally, the accuracy of the decision tree model also differs before and after the optimization of the ID3 algorithm. The modeling accuracies are compared in Figure 7.
Figure 7. Comparison of decision tree accuracy before and after optimization.
From Figure 7, it is not difficult to see that the optimized algorithm achieves higher decision tree modeling accuracy than both the traditional ID3 algorithm and the ordinary PCA combined algorithm, which shows that the optimized algorithm is more accurate and well suited to decision tree modeling. Comparing the ID3 algorithm with the optimization algorithm in this paper yields specific data on modeling time, decision tree size, prediction rules, and accuracy, as summarized in Figure 8.
Figure 8. Comparison and summary of the algorithms before and after optimization.
We can see that the optimized algorithm has obvious advantages in modeling time, total number of nodes, number of leaf nodes, and number of decision rules, making it more practical for constructing an enterprise cost control decision tree.
In early warning, we should first make a preliminary analysis of the company's annual reports from recent years, judge which link of the early warning chain the company's current financial situation occupies, and then determine the early warning path and targets. Next, according to the early warning link, we determine the index system, select and calculate the relevant financial indicators, fill in missing data, and standardize. Comprehensive indicators are then calculated using factor analysis and similar methods. Finally, several early warning models are used to warn of the financial situation in the next period.
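A short pandas sketch of this indicator preparation follows; the indicator names are hypothetical placeholders for the financial ratios actually chosen, and an equally weighted composite stands in for the factor-analysis step, whose weights the paper does not specify:

```python
import numpy as np
import pandas as pd

# Toy indicator table with gaps, standing in for ratios computed from annual reports.
ratios = pd.DataFrame({
    "asset_liability_ratio": [0.45, 0.62, np.nan, 0.71],
    "current_ratio":         [1.8, np.nan, 1.2, 0.9],
    "return_on_assets":      [0.08, 0.03, 0.05, np.nan],
})

filled = ratios.fillna(ratios.mean())                   # make up for missing data
standardized = (filled - filled.mean()) / filled.std()  # standardize each indicator

# Comprehensive indicator per company (equal weights as a stand-in for factor scores).
composite = standardized.mean(axis=1)
print(composite)
```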
Financial risk early warning has traditionally been done with multiple regression models, whose dependent variable is a continuous variable. When the dependent variable is a binary (0-1) variable, a classical regression model is likely no longer effective. Therefore, for data with a binary dependent variable, the logistic regression model should be used; its function form is as follows:
$$f(z) = \frac{e^{z}}{1 + e^{z}} = \frac{1}{1 + e^{-z}} \tag{10}$$
Let Π represent the probability that y = 1, recorded as P(y = 1) = Π, so that the probability of y = 0 is P(y = 0) = 1 − Π. Assume there are n explanatory variables x1, x2, …, xn, recorded as the vector X. Different from the linear model, the logistic model does not study the relationship between the value of the dependent variable and the explanatory variables, but the relationship between the probability Π of the dependent variable and the explanatory variables [23]. In fact, the relationship between the probability Π and the explanatory variables is not a simple linear relationship but an S-shaped curve. The logistic curve model is expressed as
$$\Pi = \frac{\exp\left(\beta_0 + \beta_1 x_1 + \cdots + \beta_n x_n\right)}{1 + \exp\left(\beta_0 + \beta_1 x_1 + \cdots + \beta_n x_n\right)} \tag{11}$$
Applying the logit transform to the formula above gives

$$\mathrm{logit}(\Pi) = \ln\frac{\Pi}{1 - \Pi} = \beta_0 + \beta_1 x_1 + \cdots + \beta_n x_n \tag{12}$$
This is the logistic regression model, in which β0, β1, …, βn are the parameters to be estimated; they can be estimated by maximum likelihood and solved by Newton-Raphson iteration.
For m samples, P(yi = 1) = πi and P(yi = 0) = 1 − πi, where i = 1, 2, …, m. The joint probability density function of the yi is then

$$L = \prod_{i=1}^{m} \pi_i^{y_i}\left(1 - \pi_i\right)^{1 - y_i} \tag{13}$$
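Maximizing the logarithm of (13) by Newton-Raphson iteration can be sketched compactly in NumPy; the synthetic data and true coefficients below are illustrative, not the paper's sample:

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Maximum-likelihood estimate of beta in Eq. (12) via Newton-Raphson."""
    Xd = np.column_stack([np.ones(len(X)), X])   # prepend an intercept column
    beta = np.zeros(Xd.shape[1])
    for _ in range(iters):
        pi = 1.0 / (1.0 + np.exp(-Xd @ beta))    # Eq. (11)
        W = pi * (1.0 - pi)                      # diagonal weights
        grad = Xd.T @ (y - pi)                   # gradient of the log-likelihood
        hess = Xd.T @ (Xd * W[:, None])          # observed information matrix
        beta += np.linalg.solve(hess, grad)      # Newton step
    return beta

# Example with synthetic data: recover coefficients close to the true values.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
true_pi = 1 / (1 + np.exp(-(0.5 + X @ np.array([1.0, -2.0]))))
y = (rng.random(500) < true_pi).astype(float)
print(fit_logistic(X, y))  # approximately [0.5, 1.0, -2.0]
```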
The detection rate is defined as the probability that the early warning model finds a risk sample; it focuses on whether risks can be found effectively. The accuracy rate is defined as the correctness of the early warning judgment once the results are out; it focuses on whether the warnings are accurate. The risk values of the training samples are calculated according to the early warning model to verify its early warning effect. The results are summarized in Table 1.
Table 1. Effect of the logistic model.

| | Predicted 0 | Predicted 1 | Detection rate |
|---|---|---|---|
| Original mark 0 | 299 | 94 | 76.1% |
| Original mark 1 | 125 | 268 | 68.2% |
| Accuracy | 70.5% | 74.0% | Average detection rate: 72.1% |
| Average accuracy | 72.3% | | Total early warning effect: 72.2% |
In terms of detection ability, 393 training samples are marked 0 (risk) in the original data, of which the early warning model successfully predicts 299; 94 are misjudged as normal, so the detection rate for risk samples is as high as 76.1%. There are 393 samples marked 1 (normal) in the original data, of which 268 are successfully predicted and 125 are misjudged as risk samples; the detection rate for normal samples is 68.2%, nearly 70%, so the recognition rate for normal samples is slightly low. The average detection rate of the model is 72.1%. The overall detection effect of the established logistic early warning model is good, and it can basically be used for actual early warning.
In terms of accuracy, when the model issues a risk warning, the probability that the warning is correct is 70.5%; when it judges a sample to be normal, the probability that the judgment is correct is 74.0%; the average accuracy is 72.3%. The detection rate and accuracy of the logistic early warning model are thus comparable.
Taking the total early warning effect as the arithmetic mean of average detection rate and average accuracy, the total early warning effect of logistic model is 72.2%.
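The metrics in Table 1 follow mechanically from the confusion matrix, as this short recomputation sketch shows:

```python
import numpy as np

# Confusion matrix from Table 1: rows = original mark (0 = risk, 1 = normal),
# columns = prediction.
cm = np.array([[299, 94],
               [125, 268]])

detection = cm.diagonal() / cm.sum(axis=1)   # per-class detection rate (recall)
accuracy = cm.diagonal() / cm.sum(axis=0)    # per-column accuracy (precision)
total_effect = (detection.mean() + accuracy.mean()) / 2

print(detection)     # [0.761 0.682]
print(accuracy)      # [0.705 0.740]
print(total_effect)  # about 0.722
```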
5. Conclusion
This paper studies the specific application of the decision tree algorithm in enterprise cost control. Applying data mining technology first requires clarifying the data mining requirements, the prerequisite for any data mining task; only then can we determine what data to choose and what algorithm to use, making the goal of data mining more targeted. From the experimental results, when the improved algorithm in this paper is used, only three cost items are retained by the PCA algorithm for analysis, so the overall tree is more streamlined. According to the evaluation after decision tree modeling, the accuracy of the model built with the optimization algorithm is higher, up to 95.1%, and the modeling time is shorter, at 8.2 seconds. We also ran the data through the ordinary PCA fusion algorithm: its accuracy reaches 94.2%, and its running time of 7.8 seconds is somewhat shorter than that of the optimization algorithm in this paper.
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that they have no conflicts of interest regarding the publication of this paper.
References
1. Yan J., Li C., Liu Y. Insecurity early warning for large scale hybrid AC/DC grids based on decision tree and semi-supervised deep learning. IEEE Transactions on Power Systems. 2021;36(99):1. doi: 10.1109/tpwrs.2021.3071918.
2. Wang P. Research on public opinion and early warning analysis model of network emergencies based on decision tree. Journal of Physics: Conference Series. 2021;1769(1):012037. doi: 10.1088/1742-6596/1769/1/012037.
3. Mei H., Li X. Research on e-commerce coupon user behavior prediction technology based on decision tree algorithm. International Core Journal of Engineering. 2019;5(9):48–58. doi: 10.6919/ICJE.201908_5(9).0008.
4. Yu S., Li X., Wang H., Zhang X., Chen S. C_CART: an instance confidence-based decision tree algorithm for classification. Intelligent Data Analysis. 2021;25(4):929–948. doi: 10.3233/ida-205361.
5. Santhosh Kumar B., Ragava Reddy M. A C4.5 decision tree algorithm with MRMR features selection based recommendation system for tourists. Psychology. 2021;58(1):3640–3643. doi: 10.17762/pae.v58i1.1352.
6. Xu W., Ning L., Luo Y. Wind speed forecast based on post-processing of numerical weather predictions using a gradient boosting decision tree algorithm. Atmosphere. 2020;11(7):738. doi: 10.3390/atmos11070738.
7. Clarin J. A. Academic analytics: predicting success in the licensure examination of graduates using CART decision tree algorithm. Journal of Advanced Research in Dynamical and Control Systems. 2020;12(1-Special Issue):143–151. doi: 10.5373/jardcs/v12sp1/20201057.
8. Hammed M., Jumoke S. An implementation of decision tree algorithm augmented with regression analysis for fraud detection in credit card. International Journal of Computer Science and Information Security. 2020;18(2):79–88.
9. Hassan E., Saleh M., Ahmed A. Network intrusion detection approach using machine learning based on decision tree algorithm. Journal of Engineering and Applied Sciences. 2020;7(2):1. doi: 10.5455/jeas.2020110101.
10. Li T., Ma L., Liu Z., Liang K. Economic granularity interval in decision tree algorithm standardization from an open innovation perspective: towards a platform for sustainable matching. Journal of Open Innovation: Technology, Market, and Complexity. 2020;6(4):149. doi: 10.3390/joitmc6040149.
11. Zhang F., Gao D., Xin J. A decision tree algorithm for forest fire prediction based on wireless sensor networks. International Journal of Embedded Systems. 2020;13(4):422. doi: 10.1504/ijes.2020.10031194.
12. Bian F., Wang X. School enterprise cooperation mechanism based on improved decision tree algorithm. Journal of Intelligent and Fuzzy Systems. 2020;40(13):1–11. doi: 10.3233/JIFS-189439.
13. Oujdi S., Belbachir H., Boufares F. C4.5 decision tree algorithm for spatial data, alternatives and performances. Journal of Computing and Information Technology. 2020;27(3):29–43. doi: 10.20532/cit.2019.1004651.
14. Nguyen T. V., Nguyen V. D., Nguyen T. T., Khanh P. C. P., Nguyen T. A., Tran D. T. MotorSafe: an Android application for motorcyclists using decision tree algorithm. International Journal of Interactive Mobile Technologies (iJIM). 2020;14(2):119. doi: 10.3991/ijim.v14i02.10742.
15. Ojeka S. A., Adegboye A., Adegboye K., Alabi O., Afolabi F. Chief financial officer roles and enterprise risk management: an empirical based study. Heliyon. 2019;5(6):e01934. doi: 10.1016/j.heliyon.2019.e01934.
16. Luthfiyanti N. K., Dahlia L. The effect of enterprise risk management on financial distress. Journal of Accounting Auditing and Business. 2020;3(2):30. doi: 10.24198/jaab.v3i2.25910.
17. Yu X. Retracted article: rainfall erosion characteristics of loess slope based on big data and internal financial control of enterprises. Arabian Journal of Geosciences. 2021;14(16):1589. doi: 10.1007/s12517-021-07960-0.
18. Tian J., Hu N., Li X., Zhang F. Research on a computer aid risk control system using classification capabilities of support vector machines. Journal of Physics: Conference Series. 2021;1952(4):042090. doi: 10.1088/1742-6596/1952/4/042090.
19. Li J., Tan Y., Zhang A. The application of internet big data and support vector machine in risk warning. Journal of Physics: Conference Series. 2021;1952(4):042026. doi: 10.1088/1742-6596/1952/4/042026.
20. Yan H., Yuan Z. Financial risk analysis and early warning research based on data mining technology. Journal of Physics: Conference Series. 2019;1187(5):052106. doi: 10.1088/1742-6596/1187/5/052106.
21. Ding Q. Risk early warning management and intelligent real-time system of financial enterprises based on fuzzy theory. Journal of Intelligent and Fuzzy Systems. 2020;40(6):1–11. doi: 10.3233/JIFS-189441.
22. Shang H., Lu D., Zhou Q. Early warning of enterprise finance risk of big data mining in internet of things based on fuzzy association rules. Neural Computing & Applications. 2021;33(9):3901–3909. doi: 10.1007/s00521-020-05510-5.
23. Fraser J., Quail R., Simkins B., Mcaleer M. The history of enterprise risk management at Hydro One Inc. Journal of Risk and Financial Management. 2021;14(8):373. doi: 10.3390/jrfm14080373.