Scientific Reports. 2024 Dec 28;14:31027. doi: 10.1038/s41598-024-82277-y

A Gaussian convolutional optimization algorithm with tent chaotic mapping

Yanying Qi 1, Aipeng Jiang 1, Yuhang Gao 1
PMCID: PMC11680939  PMID: 39730896

Abstract

To address the problems of the traditional convolutional optimization algorithm (COA), namely its slow convergence and tendency to fall into local optima, a Gaussian mutation convolutional optimization algorithm based on tent chaotic mapping (TCOA) is proposed in this article. First, a tent chaotic strategy is employed to initialize individual positions, ensuring a uniform distribution of the population across the feasible search space. Subsequently, a Gaussian convolution kernel is used for an extensive depth search within the search space to reduce the likelihood of individuals converging to a local optimum. The proposed approach is validated through simulations on 23 benchmark functions and comparisons with six recent evolutionary algorithms. The results show that the TCOA achieves superior results on low-dimensional optimization problems and solves a practical spring-related industrial design problem. The algorithm thus has important applications in solving optimization problems.

Keywords: Convolutional optimization algorithm, Tent chaotic mapping, Gaussian convolution

Subject terms: Computer science, Electrical and electronic engineering

Introduction

With the continuous advancement of science and technology, optimization problems have become increasingly challenging, encompassing scenarios such as nondifferentiable objective functions and search spaces characterized by strong nonlinearity [1]. Traditional deterministic approaches, whether gradient-based or gradient-free, have proven inadequate for such problems. These challenges are intrinsic to practical applications and have prompted researchers to explore intelligent evolutionary algorithms, which iteratively build stronger individuals from an initial population through selection and operations such as crossover and mutation, gradually approaching the optimal solution. Evolutionary algorithms are widely used to solve complex optimization problems in fields such as medicine [2], engineering [3,4], and computer science [5,6].

With the development of intelligent optimization algorithms in recent years, a series of new metaheuristic algorithms have been proposed that further improve the optimization of complex problems. Examples include the multi-objective liver cancer algorithm, which simulates the proliferation of cancer cells [7]; the parrot optimizer, which models social behaviors and shows excellent global search capability [8]; an enhanced slime mould algorithm [9]; the moth search algorithm, inspired by moth flight patterns [10]; and the colony predation algorithm, which simulates group predation behavior [11]. The optimization algorithm based on two-dimensional convolution operations [12] was inspired by image processing at the cutting edge of computer vision and artificial intelligence, where different convolution kernels extract different feature information from an original image. Because the initial individual positions in the basic convolutional evolution algorithm are generated via random initialization, their distribution in the solution space is often unbalanced, which reduces the efficiency of the algorithm. In addition, the position updates adopt randomly generated convolution kernels, which introduces instability into the search process, so complex optimization problems are not solved well. The optimization process of intelligent evolutionary algorithms consists mainly of population initialization and population position updates [13]. Therefore, this study improves the individual initialization method and the population position update mechanism to address these problems.

With advances in nonlinear dynamics, chaos theory has been successfully integrated into various intelligent evolutionary algorithms and has produced excellent results; notable examples include particle swarm optimization (PSO) [14], grey wolf optimization (GWO) [15], the grasshopper optimization algorithm [16], the whale optimization algorithm [17], and the firefly algorithm [18]. To further enhance the optimization performance of convolutional evolutionary algorithms, we introduce and design a new composite convolutional evolutionary algorithm that combines tent chaotic mapping and Gaussian mutation. The main contributions of this study are as follows:

  1. Application of tent chaotic mapping to population initialization to enhance population diversity.

  2. Use of a Gaussian convolution kernel of size 3 × 3 to expand the search space and enhance the algorithm's global search performance without altering the original data.

  3. Evaluation of the algorithm's performance on 23 standard benchmark functions from the CEC-2005 test suite.

  4. Comparison of the optimization results of the proposed Gaussian mutation convolutional optimization algorithm based on tent chaotic mapping (TCOA) with those of the basic convolutional optimization algorithm and five other widely recognized intelligent evolutionary algorithms.

The remainder of this paper is organized as follows: "Convolutional optimization algorithm" presents the basic convolutional optimization algorithm in detail. "TCOA" provides a detailed description of the improved algorithm. "Simulation experiments and discussion" presents the simulation studies and analyzes the efficiency of the TCOA in handling real-world applications. Finally, conclusions are provided in "Conclusion and future scope".

Convolutional optimization algorithm

Convolutional process

Chen et al. [12] proposed the convolutional optimization algorithm in 2023; it applies the idea of processing redundant image features with two-dimensional convolutions to the solution of multiobjective optimization problems in nonlinear systems. The algorithm first randomly initializes the position of each individual:

X_i^0 = lb_i + r · (ub_i − lb_i),   r ~ U(0, 1)    (1)

where X_i^0 is the initial position of the i-th individual, lb_i is the lower limit for the i-th individual, ub_i is the upper limit for the i-th individual, and r is a uniform random number in [0, 1].
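As an illustrative sketch (not the authors' code), the random initialization of Eq. (1) can be written in Python with NumPy; the names `random_init`, `lb`, and `ub` are our own:

```python
import numpy as np

def random_init(pop_size, dim, lb, ub, seed=None):
    """Randomly initialize a population within [lb, ub], as in Eq. (1)."""
    rng = np.random.default_rng(seed)
    lb = np.asarray(lb, dtype=float)
    ub = np.asarray(ub, dtype=float)
    # Each coordinate: lower bound plus a uniform random fraction of the range.
    return lb + rng.random((pop_size, dim)) * (ub - lb)

pop = random_init(pop_size=30, dim=5, lb=-10.0, ub=10.0, seed=0)
assert pop.shape == (30, 5)
assert pop.min() >= -10.0 and pop.max() <= 10.0
```

Because each coordinate is drawn independently, some regions of the search space may end up denser than others; this is precisely the non-uniformity that motivates the tent-map initialization introduced later.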

Then, by updating the positions during the search process in four directions—vertical, horizontal, regional, and overall—the search process for the optimal individual is completed. The update strategy for the vertical, horizontal, and regional positions is

X_v^t = X^t ⊛ C_v,   X_h^t = X^t ⊛ C_h,   X_r^t = X^t ⊛ C_r    (2)

where ⊛ denotes the two-dimensional convolution operation; t is the current number of iterations; X^t is the N × D position matrix of the t-generation population; X_v^t, X_h^t, and X_r^t represent the position matrices of the population after the t-generation vertical, horizontal, and regional convolutional position updates, respectively; and C_v, C_h, and C_r are the vertical, horizontal, and regional convolution kernels.

Further,

[Equation (3): image not available]

In the comprehensive position update stage, the position matrices X_v^t, X_h^t, and X_r^t of the population after the t-generation vertical, horizontal, and regional convolutional position updates are combined using random or proportional weights to form X_c^t:

X_c^t = w1 · X_v^t + w2 · X_h^t + w3 · X_r^t    (4)

where w1, w2, and w3 are random numbers between 0 and 1 (random weighting is adopted in this study).

Location update strategy

Based on the convolution process described above, a candidate position is obtained for each individual. By comparing the fitness of each candidate position in X_c^t with that of the corresponding position in X^t, the better position is retained:

x_i^{t+1} = x_i^{c,t},  if f(x_i^{c,t}) < f(x_i^t);   x_i^{t+1} = x_i^t,  otherwise    (5)

where x_i^t is the position of the i-th individual of the t-generation population, x_i^{c,t} is the position of that individual after the t-generation vertical, horizontal, regional, and comprehensive convolutional updates, and f(·) is the fitness function, i.e., the objective function.
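The greedy replacement of Eq. (5) can be sketched as follows; the minimization convention and the helper names are our assumptions for illustration:

```python
import numpy as np

def greedy_select(pop, candidates, fitness):
    """Keep each candidate position only if it improves fitness (Eq. (5))."""
    f_old = np.apply_along_axis(fitness, 1, pop)
    f_new = np.apply_along_axis(fitness, 1, candidates)
    better = f_new < f_old  # minimization: smaller fitness wins
    return np.where(better[:, None], candidates, pop)

sphere = lambda x: float(np.sum(x**2))  # toy objective
pop = np.array([[3.0, 4.0], [1.0, 1.0]])
cand = np.array([[0.0, 0.0], [2.0, 2.0]])
new_pop = greedy_select(pop, cand, sphere)
# The first candidate improves (0 < 25) and replaces its parent;
# the second (8 > 2) is rejected and the parent is kept.
```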

TCOA

Chaotic system

The convolutional optimization algorithm generates its initial solutions through random initialization, so the resulting positions are unevenly distributed in the solution space, which greatly reduces the efficiency of the algorithm. Chaotic techniques have therefore been used to produce well-spread initial populations [19-21]. Random initialization conveniently generates a different initial population each time, but it has certain drawbacks: the distribution of initial particles in the solution space is not uniform, with particles often too dense in some local areas and too sparse in others. This situation is unfavorable for the early convergence of optimization algorithms, and for group optimization algorithms that tend to fall into local optima, it may slow convergence or even prevent it altogether.

Chaos initialization can effectively prevent these problems. It introduces the characteristics of randomness, ergodicity, and regularity: the search space is traversed within a certain range according to the map's own laws and without repetition [22]. This yields an initial population that supports high solution accuracy and fast convergence. The chaos initialization method employed in this study uses the tent chaos model, whose formula is as follows:

x_{k+1} = x_k / a,  if 0 ≤ x_k < a;   x_{k+1} = (1 − x_k) / (1 − a),  if a ≤ x_k ≤ 1    (6)

where x_k is the k-th value of the chaotic sequence and a ∈ (0, 1) is the control parameter.

As shown in Fig. 1, tent mapping has good traversal performance, stable initialization results, and good distribution uniformity of the initial population. Therefore, this study used tent mapping to initialize the population of the TCOA, enhancing the distribution uniformity of the initial population in the search space and thus its global search ability.

Fig. 1. Chaos map and random data distribution.

Gaussian convolutional kernel

As shown in Fig. 2, a standard two-dimensional convolution was used during the optimization process. The selection of convolution kernels has a profound effect on the performance of an algorithm [23,24]: the size, number, and dimensionality of the kernels determine the range, accuracy, and flexibility of the convolution operation. A larger convolution kernel can capture more contextual information but may increase computational complexity, whereas smaller kernels can be computed more quickly but may not capture enough context. The original convolutional optimization algorithm updated individual positions with randomly generated convolution kernels. Although this maintains population diversity during the early iterations, the random selection of kernels introduces a degree of blindness into the position update method, because it cannot be determined whether the currently obtained information corresponds to the current global optimum.

Fig. 2. 2D convolution process.

Through experimentation, we found that once the evolutionary algorithm judged that the solution obtained after initialization already met the optimization objectives of the given nonlinear system, the solution position was no longer updated, which made it easy for the solution to fall into local optima. To solve this problem, the solutions could no longer be treated uniformly during the update: the convolution kernel used in the position update had to decrease in weight gradually from the center outward, so that the weight at the center was the maximum. Therefore, in this study, we enlarged the search space of the algorithm by applying Gaussian mutation to the position and dimension of the current optimal solution, thereby reducing the probability that the optimal solution would fall into a local optimum. The specific steps were as follows:

  1. Determine the optimal size of the convolution kernel. This size influences the search-space range and the feature extraction capability of the convolutional evolution algorithm. Given the large number of matrix elements to be processed, the aim was to obtain the optimal solution in the shortest possible time; therefore, this article proposes a convolution kernel size of 3 × 3.

  2. Set the standard deviation of the Gaussian function and compute the convolution weight of each position within the kernel. The kernel model used in the convolution process of the improved evolutionary algorithm was determined as follows:

    G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))    (7)

    where σ represents the standard deviation of the Gaussian function, and x and y index positions along the height and width of the convolution kernel.
  3. Normalize the weights so that the total weight of the convolution kernel is 1. This avoids amplifying or attenuating the solution when the specified region of the original position matrix is multiplied by the kernel; normalization preserves the scale of the original solution.
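Steps 1-3 can be illustrated by a short Python sketch that builds a normalized 3 × 3 Gaussian kernel from Eq. (7); the choice σ = 1.0 is illustrative, as the paper's exact value is not reproduced here:

```python
import numpy as np

def gaussian_kernel(size=3, sigma=1.0):
    """Build a normalized size x size Gaussian kernel from Eq. (7)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))  # 1/(2*pi*sigma^2) cancels
    return g / g.sum()  # step 3: weights sum to 1

k = gaussian_kernel(3, 1.0)
assert abs(k.sum() - 1.0) < 1e-12
assert k[1, 1] == k.max()  # maximum weight at the center, decreasing outward
```

The constant factor 1/(2πσ²) drops out in the normalization step, so only the exponential term matters in practice.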

Basic process of the TCOA

Chaos initialization

In the convolutional evolution algorithm, a chaotic strategy was employed to initialize individual positions. This approach enhanced both the uniformity and diversity of the initial positions, improving the overall efficacy of initialization. Although chaotic systems display long-term behavior with stochastic characteristics, chaos is distinct from a stochastic process: chaotic motion traverses every state within a defined region, known as the chaotic space, following an inherent regularity, and each state is visited only once, so there is no exact periodicity. The initial value and the control parameter of the tent map were chosen so that the mapping takes values in (0, 1).

Population initialization is performed using the tent mapping shown in Eq. (6), and the random initialization in Eq. (1) is modified accordingly:

X_i^0 = lb_i + z_i · (ub_i − lb_i)    (8)

where z_i is the i-th value of the tent chaotic sequence generated by Eq. (6).
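Combining Eqs. (6) and (8), tent-chaotic initialization can be sketched as follows; the initial value x0 = 0.37 and control parameter a = 0.7 are illustrative choices, not the paper's exact settings:

```python
import numpy as np

def tent_sequence(n, x0=0.37, a=0.7):
    """Generate n values in (0, 1) with the tent map of Eq. (6)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = x / a if x < a else (1.0 - x) / (1.0 - a)
        xs[i] = x
    return xs

def tent_init(pop_size, dim, lb, ub):
    """Tent-chaotic population initialization, as in Eq. (8)."""
    z = tent_sequence(pop_size * dim).reshape(pop_size, dim)
    return lb + z * (ub - lb)

pop = tent_init(20, 4, -5.0, 5.0)
assert pop.shape == (20, 4)
assert pop.min() >= -5.0 and pop.max() <= 5.0
```

The tent sequence sweeps across (0, 1) deterministically, which tends to cover the search space more evenly than independent uniform draws.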

Position update stage in the TCOA

At this stage, the main approach is to use Gaussian convolution kernels for the two-dimensional solution search: a 3 × 1 kernel was used for the vertical search, a 1 × 3 kernel for the horizontal search, and a 3 × 3 kernel for the regional search. The position update strategy for each individual was as follows:

X_v^t = X^t ⊛ G_v,   X_h^t = X^t ⊛ G_h,   X_r^t = X^t ⊛ G_r    (9)

where the convolution kernels for vertical, horizontal, and regional position updates are

G_v = (1/4) [1  2  1]^T    (10)

G_h = (1/4) [1  2  1]    (11)

G_r = (1/16) [1  2  1; 2  4  2; 1  2  1]    (12)

During the comprehensive position update, the update strategy of the algorithm was consistent with Eq. (4).
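A sketch of how the Gaussian-kernel position updates of Eq. (9) might be applied to the N × D population matrix; the "same"-size convolution with edge padding, and the exact kernel entries, are our assumptions for illustration:

```python
import numpy as np

def conv2d_same(X, K):
    """'Same'-size 2D convolution of the population matrix X with kernel K."""
    kh, kw = K.shape
    ph, pw = kh // 2, kw // 2
    Xp = np.pad(X, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(X, dtype=float)
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            out[i, j] = np.sum(Xp[i:i + kh, j:j + kw] * K)
    return out

# Normalized Gaussian-style kernels: vertical, horizontal, regional.
G_v = np.array([[1.0], [2.0], [1.0]]) / 4.0
G_h = np.array([[1.0, 2.0, 1.0]]) / 4.0
G_r = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 16.0

X = np.random.default_rng(1).random((6, 4)) * 10.0  # toy 6 x 4 population
X_v, X_h, X_r = (conv2d_same(X, K) for K in (G_v, G_h, G_r))
assert X_v.shape == X_h.shape == X_r.shape == X.shape
# Normalized non-negative kernels keep values within the original range.
assert X_r.min() >= X.min() - 1e-9 and X_r.max() <= X.max() + 1e-9
```

Because the kernel weights are non-negative and sum to 1, each updated position is a weighted average of neighboring positions, which is what prevents the convolution from amplifying or attenuating the solution values.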

Time complexity analysis of the TCOA

Time complexity is an important indicator of algorithm performance. Relative to traditional evolutionary algorithms, the change in the time complexity of the developed algorithm is mainly caused by the chaotic-map initialization and the Gaussian mutation optimization. Given a population size N, a number of iterations T required to find the optimal solution, and a test-function dimension D, the time complexity of the traditional evolutionary algorithm was O(N·T·D). The improved evolutionary algorithm involved the following two additional steps:

Step 1: Initialize the population positions using the chaotic mapping method, with a time complexity of O(N·D).

Step 2: Apply the 3 × 3 Gaussian convolution kernel and calculate the updated population positions after mutation, with a time complexity of O(N·T·D).

The time complexity of the improved evolutionary algorithm was the sum of that of the traditional evolutionary algorithm and the two steps above, and is therefore of the same order as that of the traditional algorithm.

Simulation experiments and discussion

Qualitative analysis of the TCOA

We applied the TCOA to several common single-peak and multi-peak optimization problems [25]; the results of the qualitative analysis are shown in Fig. 3. The performance of the optimization algorithm was analyzed based on the convergence curves, which show a clear improvement. The TCOA exhibited an excellent capability to scan the search space at both the global and local levels, and once it determined the optimal region, it quickly obtained the optimal solution with a high convergence speed. This result highlights the efficiency of the algorithm during the search process.

Fig. 3. Qualitative analysis of the TCOA.

Comparison between the TCOA and recent excellent algorithms

In this subsection, we compare the performance of the proposed TCOA with that of six state-of-the-art algorithms (COA, WDO [27], SCA [28], PSO [29], TSA [30], and GWO [31]) on the CEC-2005 test functions [26]. The convergence curves of the algorithms are shown in Fig. 4. The simulation results show that, compared with these algorithms, the TCOA achieves a better balance between exploration and exploitation at low dimensions and exhibits competitive advantages in terms of convergence accuracy, stability, convergence speed, and optimization ability. At the same time, it increases computational complexity, which leads to longer running times; this problem remains a subject for future research and refinement. To analyze the ability of the TCOA to provide optimal solutions, three indicators were used to report the optimization results: mean, standard deviation (std), and execution time (et). Table 1 shows the results for all algorithms over 30 independent runs. The TCOA performed well in handling functions F1-F3 and F6-F8.

Fig. 4. Convergence curves of the TCOA and recent outstanding algorithms.

Table 1.

Assessment results of the CEC-2005 objective functions.

PSO WDO GWO SCA TSA COA TCOA
F1 Mean 1.092E−5 1.8989E−7 − 1453.8 914.5 − 1220.9 1080 − 1245.3
std 0.0813E−5 7.9635E−7 85.1 69.5 89 125.1 139.5
et 0.3476 0.2785 0.5587 0.2818 0.3181 0.2911 0.2713
F2 Mean 3.3524E−09 4.8999E−09 6.8714E + 04 1.4219 4.2907E−10 2.3463E−04 1.2604E−09
std 8.9524E−09 1.4504E−09 3.7516E + 05 1.2094 2.8865E−10 2.7615E−04 6.2267E−09
et 0.4374 0.3579 1.1953 0.3192 0.3550 0.3245 0.2968
F3 Mean 13,717.2511 1.34808E + 5 21,780.45696 11,543.13317 2.334839E + 5 2.35825E + 5 11,981.85266
std 6050.865871 7.29986E + 5 40,828.58155 5560.243664 3.790254E + 5 4.78335E + 5 29.68442534
et 2.0432 2.0144 4.6870 0.4622 0.5234 0.4946 0.4654
F4 Mean 8.0897116 1.28466E−3 33.0815288 38.6005514 0.00011160 2.69739E−05 1.65744286
std 2.6906986 3.55292E−3 4.32440373 9.99975900 4.93083E−05 3.63304E−05 0.12549598
et 0.5395 0.4690 1.2128 0.1513 0.2081 0.1792 0.2126
F5 Mean 480.95096 260.275343 969.149842 183.668112 271.977330 286.989401 438.0.7194
std 1826.1395 0.34797509 674.628392 42,896,164.2 0.97612791 0.02072033 1169.33601
et 0.5927 0.5394 1.4768 0.1598 0.2188 0.1752 0.1592
F6 Mean 0.241965478 0.004157242 5.17606E−03 101.5708991 0.521417539 0.159340125 16.02263652
std 0.0388619 0.00155955 1.68187E−06 113.558501 0.34818717 0.07081406 1.92188239
et 0.4374 0.3479 1.1990 0.1161 0.1637 0.1266 0.0993
F7 Mean 0.043436088 0.0217946 0.792904606 0.01123324 0.003849996 0.000812167 0.029237835
std 0.024631308 0.0120539 0.390101499 0.01890299 0.001992456 0.000494169 0.0742575199
et 0.8512 0.7722 1.9567 0.2063 0.2463 0.2155 0.1979
F8 Mean − 1768.446508 − 1818.85401 − 1340.89316 − 891.304266 − 1227.92941 − 1060.44438 − 1228.61117
std 53.504232 51.9627300 76.4625132 59.5792059 103.410507 88.9108096 110.541180
et 0.5348 0.6006 1.6590 0.1409 0.1848 0.1561 0.1277
F9 Mean 3.0553614 0 2.03895679 3.62484199 9.26475282 8.01864275 2.42051886
std 2.2086755 0 13.0867769 282.756086 5.80317516 23.2840111 18.6597309
et 0.3766 0.4822 1.2437 0.1313 0.1725 0.1357 0.1477
F10 Mean 20.30756 19.29627 20.2737568 20.3045734 20.8791403 0.19719213 15.3683478
std 0.775647 0.195145 0.04304237 0.07701260 0.07722330 0.61012740 7.89237789
et 0.5045 0.4151 1.3885 0.1888 0.1398 0.1214 0.1492
F11 Mean 0.07533234 0 0.0077571 0.76887935 0.00638905 0.03585102 0.57188237
std 0.08180990 0 0.00276231 0.24548519 0.01042765 0.07349119 0.04400227
et 0.5093 0.6481 1.4596 0.1618 0.1431 0.1666 0.1983
F12 Mean 0.363752697 0.0263165 5.455883689 3,487,112.905 0.048703069 0.103382063 3.816253547
std 0.202784123 0.000135964 1.857121107 7,028,295.419 0.02821944 0.17541855 3.270668825
et 1.6020 1.8830 3.6962 0.3754 0.3913 0.4309 0.4225
F13 Mean 0.235434178 0.048864404 7.001251947 27,807,984 0.551308814 0.382420501 2.35608945
std 0.142961862 0.071162774 4.198193549 102,209,520.3 0.233052388 0.791936424 0.309975409
et 1.7181 1.7839 3.6261 0.4014 0.4982 0.4320 0.5130
F14 Mean 1.42187982 0.998102502 0.998003838 1.732562515 4.039801603 4.307357875 1.097406545
std 2.132480213 0.00020755 0 0.967176981 3.692969303 3.309689187 0.303306063
et 2.5982 2.7647 5.8311 1.5992 1.6122 1.6545 1.6380
F15 Mean 0.00125 0.0010 0.0010 0.0011 0.0011 0.0049 0.0011
std 0.0315 4.1792E−05 1.69487E−06 0.0001 0.0004 0.0198 0.3779E−3
et 0.3635 0.3808 0.6553 0.1077 0.1886 0.1977 0.1814
F16 Mean − 1.006602115 − 1.031627734 − 1.031628453 − 1.031560237 − 1.031628442 − 1.031607497 − 1.03123321
std 0.010333705 9.70135E−07 6.77522E−16 9.47659E−05 1.55172E−08 2.9992E−05 0.000379405
et 0.2903 0.3116 0.4468 0.1360 0.1418 0.1810 0.1674
F17 Mean 1.27521144 0.39916485 0.39788735 0.41460115 0.39991663 0.39910152 0.39844936
std 1.73729562 0.00133281 0 0.04089749 0.00785870 0.00198887 0.00065634
et 0.3765 0.3893 0.6043 0.2510 0.2584 0.2224 0.2838
F18 Mean 3.0000806 3.000212916 3 3.000094169 3.000046757 3.001065 3.03859715
std 0.000827284 0.00054386 1.24246E−7 0.000182734 4.73783E−05 0.001250499 0.04467167
et 0.3765 0.3893 0.6043 0.2510 0.2584 0.2224 0.2838
F19 Mean − 3.005316489 − 3.860355401 − 3.862782148 − 3.717073745 − 3.859371285 − 3.861907593 − 2.874893963
std 1.507366174 0.003708541 2.71009E−15 0.70213413 0.005046061 0.001239905 1.019283808
et 0.3617 0.3769 0.5974 0.1560 0.1626 0.2028 0.2903
F20 Mean 0 − 2.19272718 − 3.43539E−08 − 0.436391417 − 0.045053118 − 3.19611236 − 2.97307E−15
std 0 1.191916563 1.88132E−07 1.045159845 0.246766091 0.133412908 1.62841E−14
et 0.5511 0.7158 1.4392 0.3979 0.4756 0.4390 0.5002
F21 Mean − 2.39958134 − 10.06123764 − 10.15319968 − 3.598961213 − 8.888367709 − 8.079687143 − 5.601825935
std 2.01218978 0.064360038 7.92858E−09 1.448855778 2.368659126 2.966678245 2.287039183
et 0.5579 0.5859 0.8462 0.3215 0.3456 0.1726 0.4264
F22 Mean − 2.347426403 − 10.25454113 − 10.40294057 − 4.118420526 − 10.22638756 − 8.731282348 − 7.077563903
std 1.661192709 0.079920142 1.36005E−15 1.027643842 0.96285627 2.77327844 2.388128202
et 0.6475 0.6905 1.1951 0.3306 0.3645 0.4527 0.5500
F23 Mean − 8.2073 − 10.405 − 10.536 − 3.5946 − 10.355 − 7.4675 − 6.8828
std 1.6828 0.0924 5.71336E−16 1.3561 0.9872 3.5229 2.8255
et 0.5968 0.5648 1.0993 0.3150 0.3205 0.3539 0.3416

Tension–compression spring design

As discussed in "Comparison between the TCOA and recent excellent algorithms", the TCOA has better function optimization performance than other intelligent algorithms and some improved algorithms. To further verify its effectiveness in practice, the TCOA was applied to the tension–compression spring design problem, an engineering optimization problem whose aim is to minimize the weight of the spring; a schematic is reported in [31]. The mathematical formulation of this engineering design is as follows:

minimize  f(x) = (x3 + 2) · x2 · x1²    (13)

subject to
g1(x) = 1 − x2³ x3 / (71785 x1⁴) ≤ 0
g2(x) = (4 x2² − x1 x2) / (12566 (x2 x1³ − x1⁴)) + 1 / (5108 x1²) − 1 ≤ 0
g3(x) = 1 − 140.45 x1 / (x2² x3) ≤ 0
g4(x) = (x1 + x2) / 1.5 − 1 ≤ 0    (14)

with bounds 0.05 ≤ x1 ≤ 2.00,  0.25 ≤ x2 ≤ 1.30,  2.00 ≤ x3 ≤ 15.0    (15)

where x1 is the wire diameter, x2 is the mean coil diameter, and x3 is the number of active coils.

The results of the TCOA and its competitor algorithms in finding an optimal solution for the tension–compression spring design variables are listed in Table 2. The TCOA provided an optimal solution to the tension–compression spring problem, offering values of (0.0534, 0.3940, 9.7900) for the design variables and 0.0132 for the objective function. The statistical results obtained by the TCOA and its competitor algorithms for this design problem are presented in Table 3. These results indicate the superior performance of the TCOA, as reflected by better values of its statistical metrics. The TCOA convergence curve for solving the tension–compression spring design problem is presented in Fig. 5.
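Assuming the standard benchmark objective for this problem, i.e., minimizing the spring weight f(x) = (x3 + 2)·x2·x1² with x1 the wire diameter, x2 the mean coil diameter, and x3 the number of active coils, the design variables reported in Table 2 can be checked against the reported costs:

```python
def spring_weight(x1, x2, x3):
    """Tension-compression spring weight, the standard benchmark objective."""
    return (x3 + 2.0) * x2 * x1**2

# Reported TCOA solution (Table 2): cost ~0.0132.
assert abs(spring_weight(0.0534, 0.3940, 9.7900) - 0.0132) < 1e-3
# Reported GWO solution (Table 2): cost ~0.0127.
assert abs(spring_weight(0.0500, 0.3172, 14.0675) - 0.0127) < 1e-3
```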

Table 2.

Performance of optimization algorithms on the tension/compression spring design problem.

Algorithms Optimal variables Optimum cost
x1 x2 x3
TCOA 0.0534 0.3940 9.7900 0.0132
COA 0.0644 0.6966 4.5952 0.0191
TSA 0.0536 0.4021 9.0649 0.0127
PSO 0.0696 0.9541 2.000 0.0185
GWO 0.0500 0.3172 14.0675 0.0127

Table 3.

Statistical results of optimization algorithms on the tension/compression spring design problem.

Algorithms Best Worst Mean Std
TCOA 0.0129 0.0138 0.0133 0.00026
COA 0.0133 0.0327 0.0225 0.00566
TSA 0.0126 0.0131 0.0127 0.00096
PSO 0.0127 0.0185 0.0138 0.00150
GWO 0.0127 0.0173 0.0129 0.00027

Fig. 5. TCOA's convergence on the tension/compression spring design problem.

Conclusion and future scope

To address the slow convergence and long computation times encountered when traditional two-dimensional convolutional evolution algorithms are used to solve complex nonlinear systems, this study introduced a new composite convolutional evolution algorithm. The algorithm utilizes chaos theory and Gaussian mutation to overcome the limitations of traditional evolution algorithms and provides a balanced approach to global exploration and local optimization while avoiding common pitfalls such as local optima. However, the trade-off between enhanced performance and increased computational complexity suggests that further refinement is needed to fully realize the potential of the proposed algorithm. Future research should focus on improving the algorithm's efficiency, scalability, and applicability to a wider range of optimization problems, ensuring that the TCOA can be effectively utilized in both theoretical and practical settings.

Author contributions

Y.Q.: Conceptualization, methodology, writing-review & editing. A.J.: Funding acquisition, resources. Y.G.: Data curation.

Funding

This research was funded by “Pioneer” and “Leading Goose” R&D Program of Zhejiang (Grant Number 2023C01024).

Data availability

The datasets used and/or analyzed during this study are available from the corresponding author on reasonable request.

Competing interests

The authors declare no competing interests.

Ethical and informed consent for data used

This paper does not require ethical and informed consent for data.

Footnotes

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  1. Chen, X., Mei, C., Xu, B., Yu, K. & Huang, X. Quadratic interpolation based teaching-learning-based optimization for chemical dynamic system optimization. Knowl. Based Syst. 145, 250–263 (2018).
  2. Chen, Y. et al. HADCNet: automatic segmentation of COVID-19 infection based on a hybrid attention dense connected network with dilated convolution. Comput. Biol. Med. 149, 105981 (2022).
  3. Bing, Z. et al. Complex robotic manipulation via graph-based hindsight goal generation. IEEE Trans. Neural Netw. Learn. Syst. 33, 7863–7876 (2022).
  4. Lu, W., Zhao, H., He, Q., Huang, H. & Jin, X. Category-consistent deep network learning for accurate vehicle logo recognition. Neurocomputing 463, 623–636 (2021).
  5. Zou, D. et al. Deep field relation neural network for click-through rate prediction. Inf. Sci. 577, 128–139 (2021).
  6. Shi, B. et al. Prediction of recurrent spontaneous abortion using evolutionary machine learning with joint self-adaptive slime mould algorithm. Comput. Biol. Med. 148, 105885 (2022).
  7. Kalita, K. et al. Multi-objective liver cancer algorithm: a novel algorithm for solving engineering design problems. Heliyon 10, e26665 (2024).
  8. Lian, J. et al. Parrot optimizer: algorithm and applications to medical problems. Comput. Biol. Med. 172, 108064 (2024).
  9. Xiong, W., Li, D., Zhu, D., Li, R. & Lin, Z. An enhanced slime mould algorithm combines multiple strategies. Axioms 12, 907 (2023).
  10. Feng, Y., Wang, H., Cai, Z., Li, M. & Li, X. Hybrid learning moth search algorithm for solving multidimensional knapsack problems. Mathematics 11, 1811 (2023).
  11. Xu, B., Heidari, A. A., Cai, Z. & Chen, H. Dimensional decision covariance colony predation algorithm: global optimization and high-dimensional feature selection. Artif. Intell. Rev. 56, 11415–11471 (2023).
  12. Kewei, C., Shuguang, W. & Jiaxi, Z. Intelligent optimization algorithm based on two-dimensional convolution operation. J. Armored Forces 2, 102–108 (2023).
  13. Coello Coello, C. A. Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: a survey of the state of the art. Comput. Methods Appl. Mech. Eng. 191, 1245–1287 (2002).
  14. Kennedy, J. & Eberhart, R. Particle swarm optimization. In Proceedings of ICNN'95-International Conference on Neural Networks, 1942–1948 (IEEE, 1995).
  15. Kohli, M. & Arora, S. Chaotic grey wolf optimization algorithm for constrained optimization problems. J. Comput. Des. Eng. 5, 458–472 (2018).
  16. Arora, S. & Anand, P. Chaotic grasshopper optimization algorithm for global optimization. Neural Comput. Appl. 31, 4385–4405 (2019).
  17. Mirjalili, S. & Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 95, 51–67 (2016).
  18. Gandomi, A. H., Yang, X. S., Talatahari, S. & Alavi, A. H. Firefly algorithm with chaos. Commun. Nonlinear Sci. Numer. Simul. 18, 89–98 (2013).
  19. Coelho, L. S. & Mariani, V. C. Use of chaotic sequences in a biologically inspired algorithm for engineering design optimization. Expert Syst. Appl. 34, 1905–1913 (2008).
  20. Yang, D., Li, G. & Cheng, G. On the efficiency of chaos optimization algorithms for global optimization. Chaos Solitons Fract. 34, 1366–1375 (2007).
  21. Song, Y., Chen, Z. & Yuan, Z. Neural network nonlinear predictive control based on tent-map chaos optimization. Chin. J. Chem. Eng. 15, 539–544 (2007).
  22. Zhang, X. et al. Time optimal trajectory planning based on improved sparrow search algorithm. Front. Bioeng. Biotechnol. 10, 852408 (2022).
  23. Bailon, M. L. & Horntrop, D. J. On the calculation of convolutions with Gaussian kernels. Appl. Math. Comput. 176, 383–387 (2006).
  24. Xu, L. et al. Gaussian process image classification based on multi-layer convolution kernel function. Neurocomputing 480, 99–109 (2022).
  25. Kaveh, M., Mesgari, M. S. & Saeidian, B. Orchard Algorithm (OA): a new meta-heuristic algorithm for solving discrete and continuous optimization problems. Math. Comput. Simul. 208, 95–135 (2023).
  26. Suganthan, P. N. et al. Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. Nat. Comput. 341–357 (2005).
  27. Ibrahim, I. A., Hossain, M. J., Duck, B. C. & Nadarajah, M. An improved wind driven optimization algorithm for parameters identification of a triple-diode photovoltaic cell model. Energy Convers. Manag. 213, 112872 (2020).
  28. Mirjalili, S. SCA: a sine cosine algorithm for solving optimization problems. Knowl. Based Syst. 96, 120–133 (2016).
  29. Gandomi, A. H., Yun, G. J., Yang, X.-S. & Talatahari, S. Chaos-enhanced accelerated particle swarm optimization. Commun. Nonlinear Sci. Numer. Simul. 18, 327–340 (2013).
  30. Kaur, S., Awasthi, L. K., Sangal, A. L. & Dhiman, G. Tunicate Swarm Algorithm: a new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. Artif. Intell. 90, 103541 (2020).
  31. Zhang, X., Zhang, Y. & Ming, Z. Improved dynamic grey wolf optimizer. Front. Inform. Technol. Electron. Eng. 22, 877–890 (2021).
