Computational Intelligence and Neuroscience. 2022 Apr 12;2022:1376479. doi: 10.1155/2022/1376479

Chaotic Enhanced Genetic Algorithm for Solving the Nonlinear System of Equations

A. M. Algelany 1,2, M. A. El-Shorbagy 1,3
PMCID: PMC9019409  PMID: 35463250

Abstract

Many engineering and scientific models are based on nonlinear systems of equations (NSEs), and their effective solution is critical for development in these domains. NSEs can be modeled as an optimization problem. The goal of this paper is therefore to propose an optimization method for solving NSEs, called the chaotic enhanced genetic algorithm (CEGA). CEGA is a genetic algorithm (GA) enhanced with chaotic noise to improve performance. The chaotic noise is introduced to overcome common drawbacks of optimization methods such as lack of diversity of solutions, imbalance between exploitation and exploration, and slow convergence to the best solution; its purpose is to reduce the number of repeated solutions and iterations and thereby speed up the convergence rate. The chaotic logistic map is used to generate the noise, since it has been adopted by numerous researchers and has proven effective in increasing the quality of solutions and providing the best performance. CEGA is tested on many well-known NSEs. Its results are compared with those of the original GA to demonstrate the importance of the modifications introduced in CEGA. Promising results were obtained, with CEGA's average percentage of improvement being about 75.99%, indicating that it is quite effective in solving NSEs. Finally, comparing CEGA's results with previous studies, statistical analysis by the Friedman and Wilcoxon tests demonstrated its superiority and ability to solve this kind of problem.

1. Introduction

Many models in engineering and science are based on nonlinear systems of equations (NSEs), and their solution is critical for development in these fields. NSEs arise directly in some applications, and indirectly when practical models are transformed into NSEs [1]. Finding a robust and effective solution for NSEs can be a difficult task.

The bisection technique, Muller's method, false-position method, Levenberg–Marquardt algorithm, Broyden method, steepest descent methods, branch and prune approach, Halley's method, Newton/damped Newton methods, and Secant method have traditionally been used to solve NSEs [2]. Secant and Newton are the methods of choice for solving NSEs in general. Some techniques, on the other hand, turn the NSEs into an optimization problem [3], which is subsequently solved using the augmented Lagrangian method [4]. These approaches are time-consuming, may diverge, are inefficient when solving a set of nonlinear equations, require a tedious process to calculate partial derivatives to build the Jacobian matrix, and are sensitive to initial conditions [5].

Because of these constraints, researchers have used evolutionary algorithms (EAs) to solve NSEs. EAs are a sort of metaheuristic that is often used to address optimization problems that are too difficult to solve using traditional methods. EAs such as the genetic algorithm (GA) [6–8], particle swarm optimization (PSO) [9, 10], artificial bee colony (ABC) [11], cuckoo search algorithm (CSA) [12], and firefly algorithm (FA) [13] have been used to solve NSEs. In [6], Chang proposed a real-coded GA for solving nonlinear systems. In [7], Grosan and Abraham offered a novel approach based on GA for dealing with complex NSEs by recasting the problem as a multiobjective optimization problem. In [8], an efficient GA with symmetric and harmonious individuals was used to solve NSEs. Mo et al. in [9] presented a conjugate direction PSO for addressing NSEs, which merges the conjugate direction method (CDM) into PSO to enhance it and enable fast optimization of high-dimensional problems; by reducing high-dimensional function optimization to low-dimensional subproblems, CDM helps PSO avoid local minima. Jaberipour et al. suggested a new version of PSO for solving NSEs, based on a novel way of updating each particle's position and velocity [10]; to tackle the drawbacks of the classic PSO approach, such as trapping in local minima and slow convergence, they changed the way each particle is updated. Also, Jia and He presented a hybrid ABC technique for solving NSEs in [11], which combined the ABC and PSO algorithms; the hybrid algorithm avoids sinking into a premature or local optimum by integrating the benefits of both strategies. Furthermore, in [12], Zhou and Li proposed an upgraded CSA to handle NSEs; they employed a novel encoding strategy that ensures the provided solution is feasible without requiring the cuckoo's evolution to be altered. Finally, in [13], an enhanced FA for solving NSEs as an optimization problem is introduced by Ariyaratne et al.; it offers several advantages, such as eliminating the need for initial assumptions, differentiation, or even function continuity, and it can provide many root estimates at the same time.

The genetic algorithm (GA), based on natural selection, genetics, and evolution, was presented in 1975 [14] and described in 1989 [15] as a competent global strategy for tackling optimization problems. GA is well suited to solving optimization problems, and it continues to attract researchers' interest. According to the literature, GA has commonly been used to solve NSEs. Mangla et al., in [16], highlight flaws in existing approaches (bisection, regula falsi, Newton–Raphson, Secant, Muller, and so on) and justify the GA's application to NSEs. An approach for ordering the equations of an NSE so that it can be solved by the fixed-point method was proposed in [17], with the equations' arrangement determined by a GA that works with a population of possible resolution procedures for the system. In addition, in [18], Ji et al. presented an optimization approach based on clustering evolution for obtaining an optimal piecewise linear approximation of a set of nonlinear functions; the technique is built on a balance between approximation precision and simplicity, and it refines the approximation with the fewest possible segments. In [19], a GA technique to solve NSEs for a variety of applications is presented, in which the roots of NSEs were approximated using population size, degree of mutation, crossover rate, and coefficient size. Also, a method for solving nonlinear equations using GA was given in [20]. Furthermore, in [21], evolutionary algorithms were used to solve NSEs, which were turned into an unconstrained optimization problem with some basic mathematical relations. Finally, in [22], a new intelligent computing strategy for solving nonlinear equations based on evolutionary computational approaches, mainly variants of GAs, was proposed. However, when working with complex and massive systems, GA has some downsides: it can be extremely slow, and it can be hard to identify the global optimal solution due to the increased number of iterations required or the long search time.

From this motivation, this study offers an algorithm that addresses one of the most significant drawbacks of GA and of EAs in general, namely the repetition of solutions during the optimization process, which wastes time. The proposed optimization algorithm is called the chaotic enhanced genetic algorithm (CEGA). Chaos is a mathematical strategy that has been shown to improve the performance of numerous optimization algorithms; it has received a great deal of attention and has been applied in a range of domains, including optimization [23]. The proposed CEGA is a combination of GA and chaotic noise. The chaotic noise is used when solutions are repeated during the GA optimization process to change the positions of the solutions chaotically. This combination aims to enhance GA by overcoming its drawbacks such as lack of diversity of solutions, the imbalance between exploitation and exploration, and slow convergence of the best solution.

The major contributions of this paper include the following:

  1. Proposing a new methodology called the chaotic enhanced genetic algorithm (CEGA) to solve NSEs by combining GA with chaotic noise

  2. Maintaining sufficient diversity of solutions and avoiding wasted time during the optimization process by overcoming repetition of solutions

  3. Ensuring improvement in every iteration and fast convergence to the best solutions by using chaotic noise

  4. Testing CEGA on many well-known NSEs

  5. Using statistical tests to determine the relevance of the CEGA findings

  6. Showing that CEGA is competitive and better than other optimization algorithms

The following is how the paper is structured. Section 2 discusses nonlinear systems of equations. The proposed technique is detailed in Section 3. The numerical findings and discussions are shown in Sections 4 and 5, respectively. Section 6 concludes with observations and conclusions.

2. Nonlinear System of Equations

The mathematical definition of a nonlinear system of equations (NSEs) is

S_{NLE} = \{\, f_1(z) = 0,\; f_2(z) = 0,\; \ldots,\; f_Q(z) = 0 \,\}, \quad (1)

where z = (z_1, z_2, …, z_n) is a vector of n components in ℝ^n, and f_q, q = 1, 2, …, Q, are the functions that map the vector z = (z_1, z_2, …, z_n) of the n-dimensional space ℝ^n to the real line. Some of the functions may be nonlinear, while others are linear. Finding a solution for NSEs entails finding a point at which each of the Q functions above equals zero [24].

Definition 1 —

If f_q(z) = 0 for all q = 1, …, Q, then z = (z_1, z_2, …, z_n) is called the optimal solution of the NSEs.

Many approaches [25–27] transform the NSEs into an unconstrained optimization problem by summing the left-hand sides of all equations and applying the absolute value function:

F(z) = \left| f_1(z) + f_2(z) + \cdots + f_Q(z) \right|, \quad \text{subject to } f_1(z) = 0,\; f_2(z) = 0,\; \ldots,\; f_Q(z) = 0, \quad (2)

where F(z) denotes the objective function. If all of the nonlinear equations are equal to zero (f_q = 0 ∀ q = 1, …, Q), the objective function in (2) attains its global minimum of zero.
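To make the transformation concrete, here is a minimal Python sketch (not the authors' MATLAB implementation) of the objective in equation (2), together with the check in Definition 1; the two-equation system and all names in it are purely illustrative.

```python
import numpy as np

def make_objective(residuals):
    """Objective of equation (2): absolute value of the summed left-hand sides."""
    def F(z):
        return abs(sum(f(z) for f in residuals))
    return F

def is_solution(residuals, z, tol=1e-12):
    """Definition 1: every residual must vanish individually."""
    return all(abs(f(z)) <= tol for f in residuals)

# Illustrative (hypothetical) two-equation system:
#   f1(z) = z1^2 + z2 - 3 = 0,   f2(z) = z1 + z2^2 - 5 = 0
f1 = lambda z: z[0] ** 2 + z[1] - 3.0
f2 = lambda z: z[0] + z[1] ** 2 - 5.0

F = make_objective([f1, f2])
z = np.array([1.0, 2.0])
print(F(z), is_solution([f1, f2], z))  # 0.0 True: z = (1, 2) solves the system
```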

3. The Proposed Methodology

This section provides an overview of GA and chaos theory. The suggested CEGA is next presented in detail.

3.1. Genetic Algorithm

In 1975 and 1989, respectively, Holland and Goldberg proposed and described the genetic algorithm (GA) as an optimization technique [14, 15]. GA begins with a collection of chromosomes (solutions). Then, using the GA operators (selection, crossover, and mutation), a new set of chromosomes (solutions) is generated. The newly generated chromosomes tend to be of higher quality than the preceding generation. These steps are repeated until the termination conditions are met, and the best chromosome (solution) of the last generation is returned as the final solution. Figure 1 depicts the generic GA's pseudocode.

Figure 1. The pseudocode of the general GA.
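Since the figure itself is not reproduced here, the following is a minimal Python sketch of the generic GA loop just described; the selection, crossover, and mutation operators in it are simple placeholders rather than the specific operators of Section 3.3, and all parameter defaults are illustrative.

```python
import numpy as np

def generic_ga(F, bounds, pop_size=20, generations=200, pc=0.8, pm=0.02, seed=0):
    """Minimal generic GA loop: evaluate, select, recombine, mutate, keep the elite."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    n = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, n))
    fit = np.array([F(ind) for ind in pop])
    for _ in range(generations):
        # Tournament selection: the better of two random individuals becomes a parent.
        idx = rng.integers(pop_size, size=(pop_size, 2))
        parents = pop[np.where(fit[idx[:, 0]] < fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # Simple arithmetic crossover with probability pc (placeholder operator).
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            if rng.random() < pc:
                w = rng.random()
                children[i], children[i + 1] = (w * parents[i] + (1 - w) * parents[i + 1],
                                                w * parents[i + 1] + (1 - w) * parents[i])
        # Gaussian mutation with probability pm per gene, clipped to the search domain.
        mask = rng.random(children.shape) < pm
        children = np.clip(children + mask * rng.normal(0.0, 0.1 * (hi - lo), children.shape), lo, hi)
        # Elitism: the best individual of the previous generation survives unchanged.
        children[0] = pop[np.argmin(fit)]
        pop, fit = children, np.array([F(ind) for ind in children])
    best = np.argmin(fit)
    return pop[best], fit[best]
```

With an objective such as the one built in the Section 2 sketch, a call like generic_ga(F, (np.array([-10.0, -10.0]), np.array([10.0, 10.0]))) would return an approximate solution and its objective value.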

3.2. Chaos Theory

Chaos theory is concerned with the behavior of systems that obey deterministic laws yet look random and unpredictable. Many elements of the optimization sciences have benefited from the mathematics of chaos theory. Chaos optimization algorithms have received a lot of attention as a novel method of global optimization because they are based on chaotic maps, whose inherent characteristics can improve optimization algorithms by allowing them to escape from local solutions and accelerate convergence toward the global solution. To increase solution quality, many researchers have advocated integrating chaos theory and optimization algorithms [28–31]. Chaotic maps are maps (evolution functions) that display chaotic behavior and typically take the form of iterated functions. Many well-known chaotic maps may be found in the literature, including the sinusoidal map, Chebyshev map, singer map, tent map, sine map, circle map, Gauss map, and logistic map.
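As a small illustration of the logistic map, which equation (5) later uses, the sketch below iterates the map and shows how two nearly identical starting values diverge; the starting values are arbitrary.

```python
def logistic_sequence(theta0, length):
    """Iterate the chaotic logistic map theta_q = 4 * theta_{q-1} * (1 - theta_{q-1})."""
    seq = [theta0]
    for _ in range(length - 1):
        seq.append(4.0 * seq[-1] * (1.0 - seq[-1]))
    return seq

# Two nearly identical seeds diverge quickly: the hallmark of chaotic behavior.
a = logistic_sequence(0.3000, 20)
b = logistic_sequence(0.3001, 20)
print([round(x - y, 4) for x, y in zip(a, b)])
```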

3.3. Chaotic Enhanced Genetic Algorithm

In this subsection, the proposed chaotic enhanced genetic algorithm (CEGA), an integration of GA and chaos theory, is described. CEGA is configured to use chaotic noise to overcome limitations that can appear during optimization by GA, such as lack of diversity of solutions, the imbalance between exploitation and exploration, and slow convergence of the best solution. CEGA operates in two phases: in the first, the genetic algorithm is applied as a global optimization method to solve the NSEs. If the best solution is repeated during the GA optimization process, chaotic noise is employed as the second phase. The chaotic noise aims to maintain sufficient diversity of solutions while avoiding wasted time during the optimization process by overcoming the repetition of the best solution and reducing the number of iterations. The following is a full description of the suggested algorithm:

  • Step 1: initialization
    • (i)
      Individuals of the population (in n-dimensions) are created with random placements in the search domain and the number of iterations set to one  (t=1)
    • (ii)
      The fitness function F(z) is assessed for each individual
    • (iii)
      Assign the best individual to the best position Best^t
  • Step 2: evolution by GA (t=t+1)
    • (i)
      Ranking [32]: individuals are ranked based on their fitness value, and a vector containing the corresponding individual fitness value is returned, allowing the selection process to compute survival probabilities.
    • (ii)
      Tournament selection (TS) [33]: many solutions (individuals) are chosen at random from the population, and the best of these solutions is chosen to be a parent. This process is performed as many times as necessary to choose parents.
    • (iii)
BLX-α crossover operator [34]: two parent candidate solutions with n design variables, X=[x1, x2,…, xn] and Y=[y1, y2,…, yn], are chosen with crossover probability Pc. The BLX-α operator creates the k-th component of a new offspring W as a uniform random scalar in the range [min(xk, yk) − αI, max(xk, yk) + αI], where I = max(xk, yk) − min(xk, yk) is the distance between the parent components and α is a user-defined parameter (a code sketch of this operator, the mutation operator, and the chaotic noise appears after this step list).
    • The BLX-α efficacy comes from its capacity to seek in a space domain that is not always constrained by the parents. Furthermore, because the search space is dependent on the distance between the parents, the GA is self-adaptive. The parameter α must be chosen carefully since it quantitatively specifies the search domain. Based on the findings of Herrera et al. [35], we choose α=0.5 in this investigation.
    • (iv)
      Real-valued mutation [36]: randomly generated values are added to the variables for each new offspring with a low probability (Pm) as follows:
      \mathrm{Var}_i^{\mathrm{Mut}} = \mathrm{Var}_i \pm s_i \cdot r_i \cdot a_i, \quad i \in \{1, 2, \ldots, n\} \text{ uniform at random}, \quad (3)
    • where s_i ∈ {−1, +1} uniform at random, r_i = r · domain_i with r the mutation range (standard: 10%), a_i = 2^{−u·m} with u ∈ [0, 1] uniform at random, and m the mutation precision.
    • (v)
      Elitist strategy: the best individuals in the generation t − 1 are directly added to the new generation t.
    • (vi)
      Evaluation: for each individual, F(z) is evaluated to find the new best position Best^t.
    • (vii)
Updating: if the new best position Best^t is worse than or equal to the previous best position Best^{t−1}, go to Step 3. Otherwise, update the best position Best^t to the best individual position discovered so far and continue.
    • (viii)
Termination criteria: the proposed algorithm is terminated when the maximum number of iterations is reached or when the population converges. Convergence happens when the positions of all individuals in the population are identical. Finally, output the optimal solution as the best individual position Best^t.
  • Step 3: chaotic noise
    • (i)
Chaotic noise: chaotic noise is applied if the best solution is repeated during the GA optimization process. It aims to maintain sufficient diversity of solutions while avoiding wasted time during the optimization process by overcoming the repetition of the best solution and reducing the number of iterations. In this step, the population at generation t (POP^t) is perturbed by chaotic noise as follows:
      \mathrm{POP}^{t} = \vartheta \cdot \mathrm{POP}^{t}, \quad (4)
    • where ϑ is a chaotic random number generated by the logistic map by using the following equation:
      \vartheta_q = 4\,\vartheta_{q-1}\,(1 - \vartheta_{q-1}), \quad \vartheta_0 \in (0, 1), \quad \vartheta_0 \notin \{0.0, 0.25, 0.50, 0.75, 1.0\}. \quad (5)
    • The logistic map, according to the results in [37], improves the quality of the solutions and provides the best performance.
    • (ii)
      Evaluation: for each individual in POPt, F(z) is evaluated to find the new best position Best^t.
    • (iii)
Updating: if the new best position Best^t is better than the previous best position Best^{t−1}, update the best position to the best individual's position found so far and go to Step 2. Otherwise, repeat Step 3.
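A minimal Python sketch of the operators described above follows: BLX-α crossover, real-valued mutation, and the chaotic-noise perturbation of equations (4) and (5). It is an illustration under the stated definitions, not the authors' MATLAB code, and the default parameter values (for example the mutation precision) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def blx_alpha(x, y, alpha=0.5):
    """BLX-alpha crossover: each offspring gene is uniform in
    [min(x_k, y_k) - alpha*I, max(x_k, y_k) + alpha*I], I = max - min."""
    lo, hi = np.minimum(x, y), np.maximum(x, y)
    I = hi - lo
    return rng.uniform(lo - alpha * I, hi + alpha * I)

def real_valued_mutation(var, domain, r=0.1, m=20, pm=0.02):
    """Equation (3): var_i +/- s_i * r_i * a_i with s_i in {-1, +1},
    r_i = r * domain_i, a_i = 2**(-u*m), u ~ U[0, 1]; applied with probability pm."""
    mutate = rng.random(var.shape) < pm
    s = rng.choice([-1.0, 1.0], size=var.shape)
    a = 2.0 ** (-rng.random(var.shape) * m)
    return var + mutate * s * (r * domain) * a

def chaotic_noise(pop, theta):
    """Equations (4)-(5): rescale the whole population by a chaotic factor
    drawn from the logistic map, returning the new population and the next theta."""
    theta = 4.0 * theta * (1.0 - theta)
    return theta * pop, theta

# Tiny illustration on a 4-individual, 2-variable population in [-10, 10]^2.
pop = rng.uniform(-10.0, 10.0, size=(4, 2))
child = blx_alpha(pop[0], pop[1])
child = real_valued_mutation(child, domain=np.full(2, 20.0))
pop, theta = chaotic_noise(pop, theta=0.37)  # 0.37 is an arbitrary valid seed for the map
```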

Figure 2 depicts the suggested algorithm's pseudocode.

Figure 2. The pseudocode of the proposed algorithm.

4. Numerical Results

Four systems of nonlinear equations are solved to assess the suggested method. These four test systems are common challenges that have been explored by other researchers and are known as benchmarks. The proposed algorithm is coded in MATLAB R2012b and implemented on a PC with an Intel(R) Core(TM) i7-6600U CPU @ 2.60 GHz, 16 GB RAM, and the Windows 10 operating system. The results are compared to those obtained by the original GA to demonstrate the benefits of the suggested modifications and their impact on achieving an optimal solution.

For the computational studies, the population size is 20, the generation gap (GGAP) is 0.9, the crossover probability Pc is 0.8, and the mutation probability Pm is 0.02. Also, the termination criterion for CEGA is defined as

\delta = \left| F_{\mathrm{optimum}} - F^{t} \right| \le \varepsilon = 10^{-20}. \quad (6)

F_optimum is the optimum value of the objective function, which is 0 in all nonlinear system cases, while F^t is the objective function value computed at iteration t. It should be noted that the maximum number of iterations for both algorithms (original GA and CEGA) is the same, and all results are recorded from the first run. Furthermore, when one of them meets the termination requirement, the computations stop and the number of used iterations is reported. Finally, to statistically evaluate CEGA against other algorithms, the Friedman test and the Wilcoxon signed-rank test are executed here.
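The run settings above translate into a small configuration; the sketch below simply collects them and implements the termination test of equation (6). The structure is illustrative, not the authors' code.

```python
# Parameter settings reported for the experiments (Section 4).
SETTINGS = {
    "population_size": 20,
    "generation_gap": 0.9,          # GGAP
    "crossover_probability": 0.8,   # Pc
    "mutation_probability": 0.02,   # Pm
    "epsilon": 1e-20,               # termination tolerance of equation (6)
}

def terminated(F_t, F_optimum=0.0, epsilon=SETTINGS["epsilon"]):
    """Equation (6): stop when |F_optimum - F_t| <= epsilon.
    F_optimum is 0 for all the nonlinear systems considered here."""
    return abs(F_optimum - F_t) <= epsilon
```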

4.1. Benchmark 1: Experiment Test

This benchmark problem can be described as [7]

f_1(z_1, z_2) = \cos(2 z_1) - \cos(2 z_2) - 0.4 = 0,
f_2(z_1, z_2) = 2 (z_2 - z_1) + \sin(2 z_2) - \sin(2 z_1) - 1.2 = 0,
z_1 \in [-10, 10], \quad z_2 \in [-10, 10]. \quad (7)

This benchmark is solved by many algorithms such as Newton's method, Secant's method, evolutionary algorithm approach (EAA) [7], genetic algorithms (GAs) [21], and hybridization of grasshopper optimization algorithm with genetic algorithm (hybrid-GOA-GA) [38]. Table 1 shows a comparison between the best function value F obtained by such algorithms, original GA, and the proposed CEGA. The convergence curves of the best F(z) achieved so far using original GA and CEGA are shown in Figure 3.
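For reference, here is a sketch of benchmark 1 written as residual functions together with the objective of equation (2); it mirrors equation (7) and is not taken from the authors' implementation.

```python
import numpy as np

def benchmark1_residuals(z):
    """Residuals of equation (7); both must vanish at a solution."""
    z1, z2 = z
    f1 = np.cos(2.0 * z1) - np.cos(2.0 * z2) - 0.4
    f2 = 2.0 * (z2 - z1) + np.sin(2.0 * z2) - np.sin(2.0 * z1) - 1.2
    return np.array([f1, f2])

def F(z):
    """Objective of equation (2) applied to benchmark 1."""
    return abs(np.sum(benchmark1_residuals(z)))

# The GAs solution reported in Table 1; both residuals evaluate near zero.
z = np.array([0.156522, 0.49338])
print(benchmark1_residuals(z), F(z))
```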

Table 1.

Results for benchmark 1, Experiment test.

Method (z1, z2) (f1, f2) F(z) No. of iterations
Newton's method (0.15, 0.49) (0.00168, 0.01497) 0.0083 NA
Secant's method (0.15, 0.49) (0.00168, 0.01497) 0.0083 NA
EAA (0.15722, 0.49458) (0.001264, 0.000969) 0.0011 150
GAs (0.156522, 0.49338) (4.8606E − 06, 3.7164E − 06) 4.2885E − 06 10
Hybrid-GOA-GA (0.680235945188233, 2.25999176017399) (2.2840E − 06, 1.2967E − 06) 1.7904E − 06 300
Original GA (−2.98506954610277, −2.64821484596259) (5.2059E − 07, 7.4084E − 06) 3.9645E − 06 300
CEGA (−9.26825582324219, −8.93140064444864) (2.9827E − 07, 5.1472E − 06) 2.7227E − 06 11

Figure 3. Benchmark 1: the convergence curves of the best F(z) achieved so far by original GA and CEGA.

4.2. Benchmark 2: Arithmetic Application

This benchmark problem can be described as [7]

f_1(z) = z_1 - 0.25428722 - 0.18324757\, z_4 z_3 z_9 = 0,
f_2(z) = z_2 - 0.37842197 - 0.16275449\, z_1 z_{10} z_6 = 0,
f_3(z) = z_3 - 0.27162577 - 0.16955071\, z_1 z_2 z_{10} = 0,
f_4(z) = z_4 - 0.19807914 - 0.15585316\, z_7 z_1 z_6 = 0,
f_5(z) = z_5 - 0.44166728 - 0.19950920\, z_7 z_6 z_3 = 0,
f_6(z) = z_6 - 0.14654113 - 0.18922793\, z_8 z_5 z_{10} = 0,
f_7(z) = z_7 - 0.42937161 - 0.21180486\, z_2 z_5 z_8 = 0,
f_8(z) = z_8 - 0.07056438 - 0.17081208\, z_1 z_7 z_6 = 0,
f_9(z) = z_9 - 0.34504906 - 0.19612740\, z_{10} z_6 z_8 = 0,
f_{10}(z) = z_{10} - 0.42651102 - 0.21466544\, z_4 z_8 z_1 = 0,
-10 \le z_1, \ldots, z_{10} \le 10. \quad (8)

This benchmark is solved by many algorithms, such as the EAA [7], GAs [21], and hybrid-GOA-GA [38]. Table 2 shows a comparison between the best function value F obtained by such algorithms, the original GA, and the proposed CEGA, while the convergence curves of the best F(z) achieved so far using original GA and CEGA are shown in Figure 4.

Table 2.

Results for benchmark 2, Arithmetic application.

Method z 1z10 f 1f10 F(z) No. of iterations
EAA z 1 0.2077500302 f 1 0.0464943 0.2344 300
z 2 0.0299198492 f 2 0.3489889
z 3 −0.0339491324 f 3 0.3058418
z 4 −0.2027950317 f 4 0.4012915
z 5 0.2131771707 f 5 0.2284027
z 6 0.0568458067 f 6 0.0886970
z 7 0.2267650517 f 7 0.2024745
z 8 −0.0977041236 f 8 0.1687259
z 9 −0.0339921200 f 9 0.3787652
z 10 0.2532921324 f 10 0.1741025

GAs z 1 2.5783339E − 01 f 1 −7.3844E − 10 1.2674E − 09 10
z 2 3.8109715E − 01 f 2 −1.1684E − 12
z 3 2.7874502E − 01 f 3 1.7931E − 09
z 4 2.0066896E − 01 f 4 −8.8837E − 10
z 5 4.4525142E − 01 f 5 −4.5866E − 10
z 6 1.4918391E − 01 f 6 −5.270E − 09
z 7 4.3200969E − 01 f 7 −6.3852E − 09
z 8 7.3402777E − 02 f 8 −9.7362E − 10
z 9 3.4596683E − 01 f 9 −6.0389E − 11
z 10 4.2732628E − 01 f 10 3.0841E − 10

Hybrid-GOA-GA z 1 0.2578333 f 1 1.2656E − 12 1.7220E − 12 1200
z 2 0.3810971 f 2 7.9096E − 14
z 3 0.2787450 f 3 1.7517E − 12
z 4 0.2006689 f 4 4.5315E − 12
z 5 0.4452514 f 5 1.1361E − 12
z 6 0.1491839 f 6 2.2230E − 12
z 7 0.4320096 f 7 1.4795E − 12
z 8 0.0734027 f 8 6.5123E − 13
z 9 0.3459668 f 9 3.5476E − 12
z 10 0.4273262 f 10 5.5468E − 13

Original GA z 1 0.257833393700735 f 1 2.6685E − 13 1.7873E − 12 1200
z 2 0.381097154600942 f 2 1.8415E − 12
z 3 0.278745017345425 f 3 1.0000E − 12
z 4 0.200668964224041 f 4 1.3058E − 12
z 5 0.445251424840196 f 5 8.3411E − 13
z 6 0.149183919967650 f 6 1.8859E − 12
z 7 0.432009698988807 f 7 4.9226E − 12
z 8 0.0734027777813010 f 8 5.0493E − 12
z 9 0.345966826875570 f 9 3.8700E − 14
z 10 0.427326275994071 f 10 7.2846E − 13

CEGA z 1 0.257833393700561 f 1 5.7399E − 14 3.0855E − 14 272
z 2 0.381097154602820 f 2 1.2136E − 14
z 3 0.278745017346455 f 3 1.3031E − 14
z 4 0.200668964225329 f 4 1.5905E − 14
z 5 0.445251424841115 f 5 7.1657E − 14
z 6 0.149183919969369 f 6 1.4279E − 14
z 7 0.432009698983808 f 7 8.7737E − 14
z 8 0.0734027777762290 f 8 2.1295E − 14
z 9 0.345966826875559 f 9 4.9712E − 15
z 10 0.427326275993280 f 10 1.0141E − 14

Figure 4. Benchmark 2: the convergence curves of the best F(z) achieved so far by original GA and CEGA.

4.3. Benchmark 3: Combustion Application

This benchmark problem can be described as [7]

f_1(z) = z_2 + 2 z_6 + z_9 + 2 z_{10} - 10^{-5} = 0,
f_2(z) = z_3 + z_8 - 3 \times 10^{-5} = 0,
f_3(z) = z_1 + z_3 + 2 z_5 + 2 z_8 + z_9 + z_{10} - 5 \times 10^{-5} = 0,
f_4(z) = z_4 + 2 z_7 - 10^{-5} = 0,
f_5(z) = 0.5140437 \times 10^{-7}\, z_5 - z_1^2 = 0,
f_6(z) = 0.1006932 \times 10^{-6}\, z_6 - 2 z_2^2 = 0,
f_7(z) = 0.7816278 \times 10^{-15}\, z_7 - z_4^2 = 0,
f_8(z) = 0.1496236 \times 10^{-6}\, z_8 - z_1 z_3 = 0,
f_9(z) = 0.6194411 \times 10^{-7}\, z_9 - z_1 z_2 = 0,
f_{10}(z) = 0.2089296 \times 10^{-14}\, z_{10} - z_1 z_2^2 = 0,
-10 \le z_1, z_2, \ldots, z_{10} \le 10. \quad (9)

This benchmark is solved by many algorithms, such as the EAA [7], GAs [21], and hybrid-GOA-GA [38]. Table 3 shows a comparison between the best function value F obtained by such algorithms, the original GA, and the proposed CEGA, while Figure 5 shows the convergence curves of the best F obtained so far by original GA and CEGA.

Table 3.

Results for benchmark 3, Combustion application.

Method z 1z10 f 1f10 F(z) No. of iterations
EAA z 1 2.8724570E − 4 f 1 −9.0156756E − 5 −1.8038E − 05 300
z 2 4.6449359E − 004 f 2 −3.3881318E − 021
z 3 −3.8722475E − 006 f 3 −5.9848143E − 008
z 4 5.7046411E − 005 f 4 −9.0000000E − 005
z 5 1.2033492e + 000 f 5 −2.0652682E − 008
z 6 3.2144041e + 000 f 6 −1.0783996E − 007
z 7 −2.3523205E − 005 f 7 −3.2542930E − 009
z 8 3.3872248E − 005 f 8 1.1173545E − 009
z 9 1.6152635e + 000 f 9 −3.3367727E − 008
z 10 −4.0222631e + 000 f 10 −6.1982897E − 011

GAs z 1 7.7944699E − 5 f 1 −9.0000000E − 5 −1.8034E − 05 70
z 2 2.3453123E − 4 f 2 −4.7433845E − 20
z 3 5.6870072E − 8 f 3 −5.5091023E − 18
z 4 −5.1124010E − 4 f 4 −9.0000000E − 5
z 5 1.1665683E − 1 f 5 −7.8705351E − 11
z 6 3.6717284E − 1 f 6 −7.3037986E − 8
z 7 2.6062005E − 4 f 7 −2.6136644E − 7
z 8 2.9943130E − 5 f 8 4.7478263E − 14
z 9 2.6776713E − 1 f 9 −1.6938693E − 9
z 10 −5.0116867E − 1 f 10 −4.2883872E − 12

Hybrid-GOA-GA z 1 1.5541664E − 9 f 1 8.5611E − 12 1.2499E − 09 300
z 2 4.6710388E − 6 f 2 1.2440E − 08
z 3 2.9852019E − 5 f 3 1.9449E − 14
z 4 1.7239638E − 10 f 4 6.6138E − 12
z 5 9.8332225E − 6 f 5 5.0547E − 13
z 6 2.5029647E − 6 f 6 4.3385E − 11
z 7 4.9999104E − 6 f 7 2.5812E − 20
z 8 1.3554000E − 7 f 8 2.6115E − 14
z 9 9.4779067E − 8 f 9 1.3886E − 15
z 10 1.1412198E − 7 f 10 3.3671E − 20

Original GA z 1 0.000131595492467185 f 1 1.2576E − 04 7.4518E − 05 300
z 2 8.25174833157296E − 05 f 2 1.0366E − 04
z 3 −2.16100194956660 f 3 1.5119E − 04
z 4 −0.00728929937743800 f 4 2.6026E − 05
z 5 −2.84721332483602 f 5 1.6368E − 07
z 6 −4.25864110800585 f 6 4.4243E − 07
z 7 0.00363663681060500 f 7 5.3134E − 05
z 8 2.16113561379106 f 8 2.8470E − 04
z 9 −1.45063000953809 f 9 1.0072E − 07
z 10 4.98385697563882 f 10 8.8564E − 13

CEGA z 1 1.15278259019717E − 06 f 1 5.7802E − 11 4.5300E − 09 183
z 2 9.06471796614326E − 06 f 2 4.4498E − 08
z 3 1.56300393104332E − 05 f 3 4.5304E − 10
z 4 7.01041293845308E − 06 f 4 4.9701E − 11
z 5 2.11248562801178E − 06 f 5 1.2203E − 12
z 6 1.28545186382671E − 07 f 6 1.6433E − 10
z 7 1.49481838115443E − 06 f 7 4.9146E − 11
z 8 1.43254622355700E − 05 f 8 1.5875E − 11
z 9 5.33696042558367E − 09 f 9 1.0449E − 11
z 10 3.36398449219863E − 07 f 10 9.4722E − 17

Figure 5. Benchmark 3: the convergence curves of the best F(z) achieved so far by original GA and CEGA.

4.4. Benchmark 4: Neurophysiology Application

This benchmark problem can be described as [7]

f_1 = z_1^2 + z_3^2 - 1 = 0,
f_2 = z_2^2 + z_4^2 - 1 = 0,
f_3 = z_5 z_3^3 + z_6 z_4^3 = 0,
f_4 = z_5 z_1^3 + z_6 z_2^3 = 0,
f_5 = z_5 z_1 z_3^2 + z_6 z_4^2 z_2 = 0,
f_6 = z_5 z_1^2 z_3 + z_6 z_2^2 z_4 = 0,
-10 \le z_1, z_2, \ldots, z_6 \le 10. \quad (10)

This benchmark is solved by many algorithms, such as the EAA [7], GAs [21], and hybrid-GOA-GA [38]. Table 4 shows a comparison between the best function value F obtained by such algorithms, the original GA, and the proposed CEGA, while Figure 6 shows the convergence curves of the best F obtained so far by original GA and CEGA.

Table 4.

Results for benchmark 4, Neurophysiology application.

Method z 1z6 f 1f6 F(z) No. of iterations
EAA z 1 7.0148122E − 001 f 1 1.1532022E − 009 3.7764E − 10 200
z 2 7.5925767E − 001 f 2 2.6058267E − 011
z 3 −7.1268794E − 001 f 3 −6.5553074E − 010
z 4 6.5079013E − 001 f 4 1.1783451E − 009
z 5 2.4122542E − 009 f 5 1.1134504E − 009
z 6 7.8977724E − 010 f 6 −5.4967453E − 010

GAs z 1 3.2484137E − 001 f 1 1.5105117E − 010 5.2127E − 11 20
z 2 3.2484137E − 001 f 2 1.5114510E − 010
z 3 9.4576852E − 001 f 3 −1.2749912E − 011
z 4 9.4576852E − 001 f 4 4.6365863E − 012
z 5 −5.6887875E − 001 f 5 1.0181522E − 011
z 6 5.6887875E − 001 f 6 8.4981744E − 012

Hybrid-GOA-GA z 1 0.0820223613267075 f 1 6.9593E − 11 7.0908E − 11 1000
z 2 −0.138287000903135 f 2 3.1647E − 11
z 3 −0.996630489354999 f 3 3.3110E − 12
z 4 0.990392197774631 f 4 9.6123E − 12
z 5 4.48130330622387E − 09 f 5 2.5478E − 10
z 6 4.56992671931472E − 09 f 6 5.6505E − 11

Original GA z 1 0.00459210535797400 f 1 3.8965E − 11 8.3319E − 06 1000
z 2 −0.0140392033441710 f 2 8.6758E − 11
z 3 0.999989456248088 f 3 3.3069E − 11
z 4 −0.999901445571622 f 4 1.3870E − 08
z 5 −0.00519291646053100 f 5 4.9063E − 05
z 6 −0.00519428784572100 f 6 9.1419E − 07

CEGA z 1 0.132104801350580 f 1 1.9783E − 11 1.0693E − 11 87
z 2 0.225320570231597 f 2 1.2026E − 11
z 3 −0.991235754742487 f 3 1.6804E − 11
z 4 −0.974284681506633 f 4 1.2213E − 12
z 5 −1.46708097455544E − 10 f 5 1.0116E − 11
z 6 1.36330007947428E − 10 f 6 4.2055E − 12

Figure 6. Benchmark 4: the convergence curves of the best F(z) achieved so far by original GA and CEGA.

5. Discussions

Tables 1–4 show the results of all algorithms for the four benchmark problems in terms of the best-obtained solution and the number of iterations. We can observe, for the 1st benchmark problem (experiment test), that hybrid-GOA-GA [38] surpassed the other algorithms in reaching the lowest value of F(z), which is 1.7904E − 06, but in 300 iterations, while the proposed CEGA obtained a solution very close to that of hybrid-GOA-GA, which is 2.7227E − 06, in only 11 iterations. For the 2nd benchmark problem (arithmetic application), we find that the proposed CEGA outperformed the rest of the algorithms in obtaining the lowest value of F(z), which is 3.0855E − 14, in 272 iterations, while GAs [21] got an acceptable solution, which is 1.2674E − 09, in the least number of iterations, which is 10. For the 3rd benchmark problem (combustion application), we find that hybrid-GOA-GA [38] outperformed the rest of the algorithms in obtaining the lowest value of F(z), which is 1.2499E − 09, in 300 iterations; CEGA got an acceptable solution, which is 4.5300E − 09, in an acceptable number of iterations, which is 183, while GAs [21] obtained a reasonable solution, which is 1.8034E − 05, in the fewest iterations, which is 70. Finally, for the 4th benchmark problem (neurophysiology application), we find that the proposed CEGA outperformed the rest of the algorithms in obtaining the lowest value of F(z), which is 1.0693E − 11, in 87 iterations, while GAs [21] got an acceptable solution, which is 5.2127E − 11, in the least number of iterations, which is 20.

On the other hand, we can see that the original GA's convergence curves have several flat portions, which reflect periods of no improvement in the objective function owing to entrapment in a local minimum, as seen in Figures 3–6. For CEGA, it is clear that the chaotic noise succeeded in continually improving the objective function, neither repeating solutions nor spending time on iterations that did not enhance the objective function. The following percentage relationship (IMP%) is used to indicate the improvement of the proposed CEGA over the original GA:

\mathrm{IMP\%} = \frac{\text{original GA iterations} - \text{CEGA iterations}}{\text{original GA iterations}} \times 100. \quad (11)

As indicated in Table 5, CEGA improved all results significantly, by 75.99% on average. So, we can say that chaotic noise guides GA to escape local minima and enhances the search results, reducing the number of iterations and, as a result, the time, by preventing iterations from being spent without improvement or convergence to the best solution.

Table 5.

Percentage improvement between the original GA and the proposed CEGA.

Benchmark problem Original GA (iterations) CEGA (iterations) IMP% (%) Average IMP% (%)
Benchmark 1, experiment test 300 11 96.33 75.99
Benchmark 2, arithmetic application 1200 272 77.33
Benchmark 3, combustion application 300 183 39
Benchmark 4, neurophysiology application 1000 87 91.3
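The IMP% column and its average in Table 5 follow directly from equation (11); a quick check in Python:

```python
iterations = {  # (original GA, CEGA) iteration counts per benchmark, from Table 5
    "experiment test": (300, 11),
    "arithmetic application": (1200, 272),
    "combustion application": (300, 183),
    "neurophysiology application": (1000, 87),
}
imp = {name: 100.0 * (ga - cega) / ga for name, (ga, cega) in iterations.items()}
print(imp)                            # about 96.33, 77.33, 39.00, 91.30
print(sum(imp.values()) / len(imp))   # about 75.99, as reported in Table 5
```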

The EAA [7], GAs [21], hybrid-GOA-GA [38], the original GA, and the proposed CEGA solved the 4 benchmark problems. Therefore, a statistical evaluation of CEGA against these algorithms is carried out, according to the best function value F(z), by applying the Friedman test [39] and the Wilcoxon signed-rank test [40]. The Friedman test compares the algorithms' average ranks and produces the Friedman statistic, where a smaller rank indicates better performance, while the Wilcoxon signed-rank test is used to show the significant differences between CEGA and the other algorithms.

The Friedman test results are shown in Table 6. The Asymp. Sig. (P value) is smaller than 0.05, indicating that there are significant differences among the outcomes obtained by the algorithms. Furthermore, with the lowest mean rank, the suggested CEGA algorithm outperforms the other algorithms.

Table 6.

Friedman test.

Ranks
Method Mean rank
EAA [7] 4.50
GAs [21] 3.25
Hybrid-GOA-GA [38] 1.75
Original GA 4.00
CEGA 1.50

Test statistics
N 4
Chi-square 11.400
df 4
Asymp. Sig. (P value) 0.022
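The Friedman statistic and P value in Table 6 can be reproduced from the best F(z) values reported in Tables 1–4. The sketch below uses scipy and assumes the algorithms are ranked by the magnitude of the best F(z) on each benchmark.

```python
from scipy.stats import friedmanchisquare

# Best |F(z)| per benchmark (benchmarks 1-4), taken from Tables 1-4.
eaa    = [1.1000e-03, 2.3440e-01, 1.8038e-05, 3.7764e-10]
gas    = [4.2885e-06, 1.2674e-09, 1.8034e-05, 5.2127e-11]
hybrid = [1.7904e-06, 1.7220e-12, 1.2499e-09, 7.0908e-11]
ga     = [3.9645e-06, 1.7873e-12, 7.4518e-05, 8.3319e-06]
cega   = [2.7227e-06, 3.0855e-14, 4.5300e-09, 1.0693e-11]

stat, p = friedmanchisquare(eaa, gas, hybrid, ga, cega)
print(stat, p)  # approximately 11.400 and 0.022, matching Table 6
```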

Table 7, on the other hand, displays the results of the Wilcoxon signed-rank test. R+ denotes the sum of positive ranks, whereas R− denotes the sum of negative ranks. Table 7 demonstrates that CEGA achieves better R+ values than R− values in 3 cases and is equal in 1 case, indicating that it outperforms the other algorithms. As a result, we can infer that the proposed CEGA is significantly better than the other algorithms.

Table 7.

Wilcoxon signed-rank test.

EAA vs. CEGA: R− (EAA < CEGA): N = 0, mean rank = 0.00, sum of ranks = 0.00; R+ (EAA > CEGA): N = 4, mean rank = 2.50, sum of ranks = 10.00; Ties (EAA = CEGA): 0; Total: 4; Z = −1.826 (based on negative ranks); Asymp. Sig. (2-tailed) = 0.068.

GAs vs. CEGA: R− (GAs < CEGA): N = 0, mean rank = 0.00, sum of ranks = 0.00; R+ (GAs > CEGA): N = 4, mean rank = 2.50, sum of ranks = 10.00; Ties (GAs = CEGA): 0; Total: 4; Z = −1.826 (based on negative ranks); Asymp. Sig. (2-tailed) = 0.068.

Hybrid-GOA-GA vs. CEGA: R− (hybrid-GOA-GA < CEGA): N = 2, mean rank = 3.50, sum of ranks = 7.00; R+ (hybrid-GOA-GA > CEGA): N = 2, mean rank = 1.50, sum of ranks = 3.00; Ties (hybrid-GOA-GA = CEGA): 0; Total: 4; Z = −0.730 (based on positive ranks); Asymp. Sig. (2-tailed) = 0.465.

Original GA vs. CEGA: R− (original GA < CEGA): N = 0, mean rank = 0.00, sum of ranks = 0.00; R+ (original GA > CEGA): N = 4, mean rank = 2.50, sum of ranks = 10.00; Ties (original GA = CEGA): 0; Total: 4; Z = −1.826 (based on negative ranks); Asymp. Sig. (2-tailed) = 0.068.

6. Conclusions

In this paper, a chaotic enhanced genetic algorithm (CEGA) for solving nonlinear systems of equations (NSEs) is proposed, which combines the genetic algorithm (GA) with chaos theory. CEGA uses chaotic noise to address the shortcomings of the original GA, such as lack of solution variety, imbalance between exploitation and exploration, repetition of the best solution throughout the optimization process, and sluggish convergence to the optimal solution. The NSEs are first transformed into an unconstrained optimization problem, which is then solved using CEGA.

Four benchmark problems were considered: the experiment test, arithmetic application, combustion application, and neurophysiology application. The results obtained by CEGA and the original GA showed that CEGA converges faster and finds the optimal solution in fewer iterations than the original GA, with an average improvement percentage of about 75.99%. The convergence curves also showed how the original GA wastes time trapped in local minima, while CEGA, through the chaotic noise, escapes such local minima and moves the optimization process to new, better regions of the search space. In addition, comparing CEGA's results with other studies shows that CEGA is competitive and ranks best overall. Furthermore, statistical analysis by the Friedman and Wilcoxon tests showed the significance of the CEGA findings, where it obtained the lowest mean rank and achieved better R+ values than R− values.

In our future work, three directions will be pursued: (i) implementing further modifications to CEGA and assessing their impact on optimization results, (ii) applying CEGA to optimization problems in different fields, and (iii) using other metaheuristic algorithms to solve this kind of problem, such as particle swarm optimization [41], ant colony optimization [42], the artificial bee colony (ABC) algorithm [43], krill herd [44], monarch butterfly optimization (MBO) [45], the earthworm optimization algorithm (EWA) [46], elephant herding optimization (EHO) [47], the moth search (MS) algorithm [48], the slime mould algorithm (SMA) [49], hunger games search (HGS) [50], the Runge–Kutta optimizer (RUN) [51], the colony predation algorithm (CPA) [52], and Harris hawks optimization (HHO) [53].

Acknowledgments

The authors extend their appreciation to the Deputyship for Research and Innovation, Ministry of Education in Saudi Arabia, for funding this research work through the project number (IF-PSAU-2021/01/18396).

Data Availability

All data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Authors' Contributions

All authors contributed equally to this article.

References

  • 1.Jeeves T. A. Secant modification of Newton’s method. Communications of the ACM . 1958;1(8):9–10. doi: 10.1145/368892.368913. [DOI] [Google Scholar]
  • 2.Moré J. J., Cosnard M. Y. Numerical solution of nonlinear equations. ACM Transactions on Mathematical Software . 1979;5:64–85. [Google Scholar]
  • 3.Dennis, John E., Schnabel R. B. Numerical Methods for Unconstrained Optimization and Nonlinear Equations . Pennsylvania, USA: Society for Industrial and Applied Mathematics; 1996. [Google Scholar]
  • 4.Conn A. R., Gould N. I. M., Toint P. A globally convergent augmented Lagrangian algorithm for optimization with general constraints and simple bounds. SIAM Journal on Numerical Analysis . 1991;28(2):545–572. doi: 10.1137/0728030. [DOI] [Google Scholar]
  • 5.Hoffman J. D., Frankel S. Numerical Methods for Engineers and Scientists . Florida, USA: CRC Press; 2018. [Google Scholar]
  • 6.Chang W.-D. An improved real-coded genetic algorithm for parameters estimation of nonlinear systems. Mechanical Systems and Signal Processing . 2006;20(1):236–246. doi: 10.1016/j.ymssp.2005.05.007. [DOI] [Google Scholar]
  • 7.Grosan C., Abraham A. A new approach for solving nonlinear equations systems. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans . 2008;38(3):698–714. doi: 10.1109/tsmca.2008.918599. [DOI] [Google Scholar]
  • 8.Ren H., Wu L., Bi W., Argyros I. K., Argyros Solving nonlinear equations system via an efficient genetic algorithm with symmetric and harmonious individuals. Applied Mathematics and Computation . 2013;219(23):10967–10973. doi: 10.1016/j.amc.2013.04.041. [DOI] [Google Scholar]
  • 9.Mo Y., Liu H., Wang Q. Conjugate direction particle swarm optimization solving systems of nonlinear equations. Computers & Mathematics with Applications . 2009;57:1877–1882. doi: 10.1016/j.camwa.2008.10.005. [DOI] [Google Scholar]
  • 10.Jaberipour M., Khorram E., Karimi B. Particle swarm algorithm for solving systems of nonlinear equations. Computers & Mathematics with Applications . 2011;62(2):566–576. doi: 10.1016/j.camwa.2011.05.031. [DOI] [Google Scholar]
  • 11.Jia R., He D. Hybrid artificial bee colony algorithm for solving nonlinear system of equations. Proceedings of the 2012 Eighth international conference on computational intelligence and security; November 2012; Guangzhou, China. IEEE; pp. 56–60. [DOI] [Google Scholar]
  • 12.Zhou R. H., Li Y. G. Applied Mechanics and Materials . 651-653. Trans Tech Publications Ltd; 2014. An improve cuckoo search algorithm for solving nonlinear equation group; pp. 2121–2124. [DOI] [Google Scholar]
  • 13.Ariyaratne M. K. A., Fernando T. G. I., Weerakoon S. Solving systems of nonlinear equations using a modified firefly algorithm (MODFA) Swarm and Evolutionary Computation . 2019;48:72–92. doi: 10.1016/j.swevo.2019.03.010. [DOI] [Google Scholar]
  • 14.Holland J. H. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence . Cambridge, USA: MIT press; 1992. [Google Scholar]
  • 15.Goldberg David E. Genetic Algorithms in Search, Optimization, and Machine Learning. Reading, MA . Boston, USA: Addison-Wesley; 1989. [Google Scholar]
  • 16.Mangla C., Bhasin H., Ahmad M., Uddin M. Industrial Mathematics and Complex Systems . Singapore: Springer; 2017. Novel solution of nonlinear equations using genetic algorithm; pp. 249–257. [DOI] [Google Scholar]
  • 17.Rovira A., Valdés M., Casanova J. A new methodology to solve non-linear equation systems using genetic algorithms. Application to combined cyclegas turbine simulation. International Journal for Numerical Methods in Engineering . 2005;63(10):1424–1435. doi: 10.1002/nme.1267. [DOI] [Google Scholar]
  • 18.Ji Z., Li Z., Ji Z. Research on genetic algorithm and data information based on combined framework for nonlinear functions optimization. Procedia Engineering . 2011;23:155–160. doi: 10.1016/j.proeng.2011.11.2482. [DOI] [Google Scholar]
  • 19.Joshi G., Bala Krishna M. Solving system of non-linear equations using Genetic Algorithm. Proceedings of the 2014 International Conference on Advances in Computing, Communications and Informatics (ICACCI); September 2014; Delhi, India. IEEE; pp. 1302–1308. [DOI] [Google Scholar]
  • 20.Mastorakis N. E. Solving non-linear equations via genetic algorithms. Proceedings of the 6th WSEAS International Conference on Evolutionary; June 2005; Lisbon, Portugal. pp. 16–18. [Google Scholar]
  • 21.Pourrajabian A., Ebrahimi R., Mirzaei M., Shams M. Applying genetic algorithms for solving nonlinear algebraic equations. Applied Mathematics and Computation . 2013;219(24):11483–11494. doi: 10.1016/j.amc.2013.05.057. [DOI] [Google Scholar]
  • 22.Raja M. A., Zahoor Z., Sabir Z., et al. Design of stochastic solvers based on genetic algorithms for solving nonlinear equations. Neural Computing & Applications . 2015;26(1):1–23. doi: 10.1007/s00521-014-1676-z. [DOI] [Google Scholar]
  • 23.Wang L., Zheng D.-Z., Lin Q. S. Survey on chaotic optimization methods. Computing Technology and Automation . 2001;20(1):1–5. [Google Scholar]
  • 24.Cuyt A. A. M., Rall L. B. Computational implementation of the multivariate Halley method for solving nonlinear systems of equations. ACM Transactions on Mathematical Software . 1985;11(1):20–36. doi: 10.1145/3147.3162. [DOI] [Google Scholar]
  • 25.Zhao R., Ni H., Feng H., Song Y., Zhu X. An improved grasshopper optimization algorithm for task scheduling problems. Int. J. Innov. Comput., Inf. Control . 2019;15:1967–1987. [Google Scholar]
  • 26.Nie P.-y. A null space method for solving system of equations. Applied Mathematics and Computation . 2004;149(1):215–226. doi: 10.1016/s0096-3003(03)00135-8. [DOI] [Google Scholar]
  • 27.Nie P.-Y. An SQP approach with line search for a system of nonlinear equations. Mathematical and Computer Modelling . 2006;43(3-4):368–373. doi: 10.1016/j.mcm.2005.10.007. [DOI] [Google Scholar]
  • 28.Yang D., Liu Z., Zhou J. Chaos optimization algorithms based on chaotic maps with different probability distribution and search speed for global optimization. Communications in Nonlinear Science and Numerical Simulation . 2014;19(4):1229–1246. doi: 10.1016/j.cnsns.2013.08.017. [DOI] [Google Scholar]
  • 29.Pan P., Wang D., Niu B. Design optimization of APMEC using chaos multi-objective particle swarm optimization algorithm. Energy Reports . 2021;7:531–537. [Google Scholar]
  • 30.Talatahari S., Azizi M. Optimization of constrained mathematical and engineering design problems using chaos game optimization. Computers & Industrial Engineering . 2020;145 doi: 10.1016/j.cie.2020.106560.106560 [DOI] [Google Scholar]
  • 31.Feng J., Zhang J., Zhu X., Lian W. A novel chaos optimization algorithm. Multimedia Tools and Applications . 2017;76(16):17405–17436. doi: 10.1007/s11042-016-3907-z. [DOI] [Google Scholar]
  • 32.Abdelsalam A. M., El-Shorbagy M. A. Optimization of wind turbines siting in a wind farm using genetic algorithm based local search. Renewable Energy . 2018;123:748–755. doi: 10.1016/j.renene.2018.02.083. [DOI] [Google Scholar]
  • 33.Chakraborti D., Biswas P., Pal B. B. FGP approach for solving fractional multiobjective decision making problems using GA with tournament selection and arithmetic crossover. Procedia Technology . 2013;10:505–514. doi: 10.1016/j.protcy.2013.12.389. [DOI] [Google Scholar]
  • 34.Pathan M. V., Patsias S., Tagarielli V. L. A real-coded genetic algorithm for optimizing the damping response of composite laminates. Computers & Structures . 2018;198:51–60. doi: 10.1016/j.compstruc.2018.01.005. [DOI] [Google Scholar]
  • 35.Herrera F., Lozano M., Verdegay J. L. Tackling real-coded genetic algorithms: operators and tools for behavioural analysis. Artificial Intelligence Review . 1998;12(4):265–319. doi: 10.1023/a:1006504901164. [DOI] [Google Scholar]
  • 36.Soni N., Kumar T. Study of various mutation operators in genetic algorithms. International Journal of Computer Science and Information Technologies . 2014;5:4519–4521. [Google Scholar]
  • 37.El-Shorbagy M. A., Mousa A. A., Nasr S. M. A chaos-based evolutionary algorithm for general nonlinear programming problems. Chaos, Solitons & Fractals . 2016;85:8–21. doi: 10.1016/j.chaos.2016.01.007. [DOI] [Google Scholar]
  • 38.El-Shorbagy M. A., El-Refaey A. M., El-Refaey A. M. Hybridization of grasshopper optimization algorithm with genetic algorithm for solving system of non-linear equations. IEEE Access . 2020;8:220944–220961. doi: 10.1109/access.2020.3043029. [DOI] [Google Scholar]
  • 39.Derrac J., García S., Molina D., Herrera F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm and Evolutionary Computation . 2011;1(1):3–18. doi: 10.1016/j.swevo.2011.02.002. [DOI] [Google Scholar]
  • 40.García S., Fernández A., Luengo J., Herrera F. Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: experimental analysis of power. Information Sciences . 2010;180(10):2044–2064. [Google Scholar]
  • 41.El-Shorbagy A. Weighted method based trust region-particle swarm optimization for multi-objective optimization. American Journal of Applied Mathematics . 2015;3(3):81–89. doi: 10.11648/j.ajam.20150303.11. [DOI] [Google Scholar]
  • 42.Dorigo M., Birattari M., Stutzle T. Ant colony optimization. IEEE Computational Intelligence Magazine . 2006;1(4):28–39. doi: 10.1109/ci-m.2006.248054. [DOI] [Google Scholar]
  • 43.Karaboga D., Akay B. A comparative study of artificial bee colony algorithm. Applied Mathematics and Computation . 2009;214(1):108–132. doi: 10.1016/j.amc.2009.03.090. [DOI] [Google Scholar]
  • 44.Wang G.-G., Guo L., Gandomi A. H., Hao G.-S., Wang H. Chaotic krill herd algorithm. Information Sciences . 2014;274:17–34. doi: 10.1016/j.ins.2014.02.123. [DOI] [Google Scholar]
  • 45.Wang G.-G., Deb S., Cui Z. Monarch butterfly optimization. Neural Computing and Applications . 2019;31(7):1995–2014. doi: 10.1007/s00521-015-1923-y. [DOI] [Google Scholar]
  • 46.Hosseini Rad M., Abdolrazzagh-Nezhad M. A new hybridization of DBSCAN and fuzzy earthworm optimization algorithm for data cube clustering. Soft Computing . 2020;24(20):15529–15549. doi: 10.1007/s00500-020-04881-0. [DOI] [Google Scholar]
  • 47.Li J., Lei H., Alavi A. H., Wang G.-G. Elephant herding optimization: variants, hybrids, and applications. Mathematics . 2020;8(9):p. 1415. doi: 10.3390/math8091415. [DOI] [Google Scholar]
  • 48.Elaziz M. A., Xiong S., Jayasena K. P. N., Li L. Task scheduling in cloud computing based on hybrid moth search algorithm and differential evolution. Knowledge-Based Systems . 2019;169:39–52. doi: 10.1016/j.knosys.2019.01.023. [DOI] [Google Scholar]
  • 49.Li S., Chen H., Wang M., Asghar Heidari A., Heidari A. A., Mirjalili S. Slime mould algorithm: a new method for stochastic optimization. Future Generation Computer Systems . 2020;111:300–323. doi: 10.1016/j.future.2020.03.055. [DOI] [Google Scholar]
  • 50.Yang Y., Chen H., Asghar Heidari A., Heidari A. A., Gandomi A. H. Hunger games search: visions, conception, implementation, deep analysis, perspectives, and towards performance shifts. Expert Systems with Applications . 2021;177 doi: 10.1016/j.eswa.2021.114864.114864 [DOI] [Google Scholar]
  • 51.Yousri D., Mudhsh M., Shaker Y., et al. Justification under Partial Shading and Varied Temperature Conditions . Vol. 10. IEEE Access; 2022. Modified interactive algorithm based on Runge Kutta optimizer for photovoltaic modeling. [Google Scholar]
  • 52.Tu J., Chen H., Wang M., Gandomi A. H., Gandomi The colony predation algorithm. Journal of Bionics Engineering . 2021;18(3):674–710. doi: 10.1007/s42235-021-0050-y. [DOI] [Google Scholar]
  • 53.Heidari A. A., Mirjalili S., Faris H., Aljarah I., Mafarja M., Chen H. Harris hawks optimization: algorithm and applications. Future Generation Computer Systems . 2019;97:849–872. doi: 10.1016/j.future.2019.02.028. [DOI] [Google Scholar]


