ABSTRACT
In this paper, a new pattern search method is proposed for solving systems of nonlinear equations. We introduce a new non-monotone strategy based on a convex combination of the maximum of the function values at some preceding successful iterates and the current function value. First, whenever the iterates are far away from the optimizer, we produce a stronger non-monotone strategy than the one generated by Gasparo et al. [Nonmonotone algorithms for pattern search methods, Numer. Algorithms 28 (2001), pp. 171–186]. Second, when the iterates are near the optimizer, we produce a weaker non-monotone strategy than the one generated by Ahookhosh and Amini [An efficient nonmonotone trust-region method for unconstrained optimization, Numer. Algorithms 59 (2012), pp. 523–540]. Third, whenever the iterates are neither near the optimizer nor far away from it, we produce a medium non-monotone strategy lying between those of Gasparo et al. and Ahookhosh and Amini. The global convergence of the proposed algorithm is established, and numerical results are reported.
Keywords: Nonlinear equation, pattern search, coordinate search, non-monotone technique, theoretical convergence
2010 AMS Subject Classifications: 90C30, 93E24, 34A34
1. Introduction
Consider the following nonlinear system of equations
\[
F(x)=0, \qquad (1)
\]
for which \(F:\mathbb{R}^n\to\mathbb{R}^n\) is a continuously differentiable mapping. Suppose that \(F\) has a zero. Then every solution of the nonlinear equation problem (1) is a solution of the following nonlinear unconstrained least-squares problem
\[
\min_{x\in\mathbb{R}^n} f(x):=\tfrac{1}{2}\,\|F(x)\|^{2}, \qquad (2)
\]
where \(\|\cdot\|\) denotes the Euclidean norm. Conversely, if \(x^*\) solves Equation (2) and \(f(x^*)=0\), then \(x^*\) is a solution of (1). There are various methods to solve the nonlinear system (1), such as conjugate gradient methods [33,35], line-search methods [5,12,14–16,32] and trust-region methods [2,3,7–11,34,36,37,39], which are quite fast and robust; but they may have some shortcomings. First, trust-region algorithms use a ratio to control the agreement between the actual and predicted reductions essentially only along one direction; for more details on trust-region algorithms, cf. [26]. If this ratio is near one and the Jacobian matrix is ill-conditioned, or f is a highly nonlinear function for which the quadratic model is not accurate, then the trust-region radius may increase before a narrow curved valley is reached. Afterwards, the radius has to be reduced several times to get through this narrow curved valley, which increases the computational cost and may also produce unsuitable solutions in cases where highly accurate solutions are required. Second, solving the trust-region subproblems increases the CPU time. Third, these methods need to compute both the residual and its Jacobian to build the quadratic model at each iteration.

Pattern search methods represent a derivative-free subclass of direct search algorithms for minimizing a continuous function (see, e.g. [4,17,21,22]). Box [4] and Hooke and Jeeves [17] were the first to introduce pattern search methods. Several researchers have shown that pattern search algorithms converge globally, see [13,19,20,30,31]. Lewis and Torczon successfully extended these algorithms to bound-constrained and linearly constrained minimization [19,20]. Torczon [29,30] presented a multidirectional search algorithm for parallel machines. In ill-conditioned problems, using a monotone pattern search as an auxiliary algorithm may adversely affect the performance of the whole procedure, cf. [13]. Hence, we introduce a new non-monotone pattern search framework that decreases the total number of function evaluations and the CPU time. This development enables us to produce a suitable non-monotone strategy at each iteration while maintaining global convergence. Numerical results show that the new modification of pattern search is efficient for solving systems of nonlinear equations.
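To make the least-squares reformulation concrete, the following Python sketch (not part of the original paper) evaluates the merit function \(f(x)=\tfrac12\|F(x)\|^2\) for a small illustrative system; the residual `F` and the approximate zero used below are hypothetical choices and not one of the test problems of Section 5.

```python
import numpy as np

def F(x):
    # Hypothetical illustrative system F(x) = 0: intersection of the unit
    # circle with the cubic curve x1 = x2**3.
    return np.array([x[0]**2 + x[1]**2 - 1.0,
                     x[0] - x[1]**3])

def f(x):
    # Least-squares merit function f(x) = 0.5 * ||F(x)||^2 from (2).
    r = F(x)
    return 0.5 * float(r @ r)

x_star = np.array([0.5640, 0.8262])   # approximate zero of F
print(f(x_star))                      # nearly 0: x_star almost solves (1)
```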
Notation: The Euclidean vector norm, or the associated induced matrix norm, is denoted by \(\|\cdot\|\). A set of directions \(\{d_1,\dots,d_p\}\subset\mathbb{R}^n\) is said to positively span \(\mathbb{R}^n\) if for each \(v\in\mathbb{R}^n\) there exist scalars \(\lambda_i\ge 0\), for \(i=1,\dots,p\), such that \(v=\sum_{i=1}^{p}\lambda_i d_i\).
Moreover, \(e_i\), for \(i=1,\dots,n\), denotes the orthonormal set of coordinate directions. To simplify our notation, we set \(f_k:=f(x_k)\).
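As a small illustration of the positive-spanning definition (a sketch added for exposition, not material from the paper), the following Python snippet numerically checks that the 2n coordinate directions \(\pm e_i\) positively span \(\mathbb{R}^n\) by solving non-negative least-squares problems; the sampling-based check and its tolerance are assumptions of this illustration.

```python
import numpy as np
from scipy.optimize import nnls

def positively_spans(D, n_trials=200, tol=1e-8, seed=0):
    """Numerical check that the columns of D positively span R^n, i.e. every
    vector is a non-negative combination of the columns (sampling-based)."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    for _ in range(n_trials):
        v = rng.standard_normal(n)
        _, residual = nnls(D, v)          # min ||D @ lam - v|| s.t. lam >= 0
        if residual > tol * max(1.0, np.linalg.norm(v)):
            return False
    return True

n = 3
I = np.eye(n)
print(positively_spans(np.hstack([I, -I])))   # True: ±e_i positively span R^n
print(positively_spans(I))                    # False: e_i alone are not enough
```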
Organization. The rest of this paper is organized as follows. In Section 2, we first describe the exploratory moves and then the generalized pattern search is presented. A new non-monotone pattern search algorithm is presented in Section 3. In Section 4, the global convergence of the new algorithm is investigated. Numerical results are provided in Section 5 to show that the proposed algorithm is efficient and promising for systems of nonlinear equations. Finally, some concluding remarks are given in Section 6.
2. The generalized pattern search method
First of all, we define two components, namely a basis matrix and a generating matrix, cf. [31].
Definition 2.1
Any non-singular matrix \(B\in\mathbb{R}^{n\times n}\) is called a basis matrix.
Definition 2.2
The generating matrix \(C_k\in\mathbb{Z}^{n\times p}\), with p>2n, divided into two parts, is considered as
\[
C_k=[\Gamma_k \;\; L_k],
\]
in which \(\Gamma_k=[M_k \;\; -M_k]\), \(M_k\in\mathbf{M}\), \(\mathbf{M}\) is a finite set of non-singular matrices and \(L_k\) is a matrix that contains at least a column of zeros.
A pattern \(P_k\) is defined by the columns of the matrix \(P_k=BC_k\), in which B is a basis matrix. By this definition and the fact that \(C_k\) has rank n, it is clear that \(P_k\) also has rank n. This fact implies that the columns of \(BC_k\) span \(\mathbb{R}^n\). In line with the partition of the generating matrix, the pattern is partitioned as follows:
\[
BC_k=[B\Gamma_k \;\; BL_k]. \qquad (3)
\]
Given a step-size \(\Delta_k>0\), we define a trial step \(s_k^i\) to be any vector of the form
\[
s_k^i=\Delta_k c_k^i,
\]
in which \(c_k^i\) indicates the ith column of \(C_k\); the vectors \(s_k^i\), named exploratory moves as proposed in [31], determine the step directions, and \(\Delta_k\) is considered as a step-size parameter. Furthermore, a trial point is defined as any point of the form \(x_k^i=x_k+s_k^i\), where \(x_k\) is the current iterate. Before declaring a new iterate and updating the associated information, pattern search methods use a series of exploratory moves in order to produce the new iterate. To prove the convergence properties of pattern search methods, we require that the exploratory moves are obtained by one of the following two procedures:

In Procedure 1, note that \(y\in A\) means that the vector y is contained in the set of columns of the matrix A. Condition (S.2) is the more interesting one; hence, let us describe how it works. As long as at least one of the steps defined by \(\Delta_k B\Gamma_k\) decreases the function value at the current iterate, the exploratory moves must return a step that decreases the function value, without necessarily returning the step that gives the largest decrease.

In Procedure 2, (S.2) is replaced by a strong version, as presented above.
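The distinction between the two procedures can be illustrated with a small Python sketch (ours, not the paper's pseudo-code): a weak exploratory move may return any core step that improves the function value, whereas a strong exploratory move returns the best core step. For brevity the sketch uses the plain monotone test \(f(x+s)<f(x)\), which Section 3 replaces by a non-monotone one, and `core_dirs` stands for the columns of \(B\Gamma_k\).

```python
import numpy as np

def weak_exploratory_move(f, x, delta, core_dirs):
    """Spirit of Procedure 1: return the first core step that decreases f;
    the zero step is returned if no core step improves."""
    fx = f(x)
    for d in core_dirs.T:                 # columns of B*Gamma_k
        s = delta * d
        if f(x + s) < fx:
            return s
    return np.zeros_like(x)

def strong_exploratory_move(f, x, delta, core_dirs):
    """Spirit of Procedure 2: return the best core step, accepted only if it
    decreases f."""
    fx = f(x)
    steps = [delta * d for d in core_dirs.T]
    values = [f(x + s) for s in steps]
    i = int(np.argmin(values))
    return steps[i] if values[i] < fx else np.zeros_like(x)

# Tiny usage example with a hypothetical quadratic and the directions ±e_i.
f = lambda x: float(x @ x)
dirs = np.hstack([np.eye(2), -np.eye(2)])
print(weak_exploratory_move(f, np.array([1.0, 1.0]), 0.5, dirs))
print(strong_exploratory_move(f, np.array([1.0, 1.0]), 0.5, dirs))
```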
Algorithm 1 presents the generalized pattern search method for systems of nonlinear equations, cf. [31].
In Algorithm 1, if \(f(x_k+s_k)<f(x_k)\) (Line 8), then the iteration is called successful. Otherwise, it is called an unsuccessful iteration. The parameter θ is the shrinkage parameter, with \(\theta=\tau^{w_0}\), in which \(\tau>1\) and \(w_0\) is a negative integer, and \(\lambda_k\) is called the expanding factor such that
\[
\lambda_k=\tau^{w_k}, \qquad w_k\in\{w_1,\dots,w_L\},
\]
in which \(w_1,\dots,w_L\) are positive integers. In Line 4 of this algorithm, the step \(s_k\) can be obtained either by Procedure 1 or Procedure 2. This algorithm is called generalized weak pattern search (GWPS) if \(s_k\) is obtained by Procedure 1; otherwise, if \(s_k\) is obtained by Procedure 2, it is called generalized strong pattern search (GSPS).
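The following Python sketch shows the skeleton of such a generalized pattern search loop with the step-size updates just described; the concrete choices B=I, \(\Gamma_k=[I\;\;-I]\), \(\theta=1/2\), \(\lambda_k=1\), the monotone acceptance test and the toy residual are hypothetical illustrations, not the settings used in the paper's experiments.

```python
import numpy as np

def best_core_step(f, x, delta, dirs):
    # Strong exploratory move: best step over the core directions, or zero.
    steps = [delta * d for d in dirs.T]
    best = min(steps, key=lambda s: f(x + s))
    return best if f(x + best) < f(x) else np.zeros_like(x)

def pattern_search(f, x0, delta0=1.0, theta=0.5, lam=1.0,
                   delta_tol=1e-8, max_iter=10_000):
    """Skeleton of a generalized pattern search loop (monotone version):
    the step size is shrunk by theta = tau**w0 < 1 after an unsuccessful
    iteration and expanded by lam = tau**w_k >= 1 after a successful one."""
    x, delta = np.asarray(x0, dtype=float), float(delta0)
    n = x.size
    dirs = np.hstack([np.eye(n), -np.eye(n)])     # B = I, Gamma_k = [I, -I]
    for _ in range(max_iter):
        if delta < delta_tol:
            break
        s = best_core_step(f, x, delta, dirs)
        if f(x + s) < f(x):            # successful iteration
            x, delta = x + s, lam * delta
        else:                          # unsuccessful iteration
            delta = theta * delta
    return x

# Hypothetical toy residual F(x) = (x1 - 1, x2 + 2) and its merit function.
F = lambda x: np.array([x[0] - 1.0, x[1] + 2.0])
f = lambda x: 0.5 * float(F(x) @ F(x))
print(pattern_search(f, np.zeros(2)))  # converges to the zero of F, [1, -2]
```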
Both GWPS and GSPS share a drawback: the monotone acceptance quantity \(f(x_k)\) cannot truly prevent the production of unsuccessful iterations in the presence of a narrow curved valley, which increases the CPU time and the total number of function evaluations. In order to overcome this drawback, Gasparo et al. [13] modified the acceptance quantity \(f(x_k)\).
Torczon [31] showed in Theorem 3.2 that each iterate generated by GWPS can be written as
| (4) |
in which α and β are relatively prime positive integers with \(\tau=\beta/\alpha\). Moreover, Torczon showed that \(\Delta_k\) can be written as follows:
| (5) |
in which . Both Equations (4) and (5) help us to prove Lemma 4.7 in Section 4.
3. The new non-monotone strategy
It is believed that globalization techniques such as pattern search can generally guarantee the global convergence of traditional direct search approaches. However, such a globalization technique enforces monotonicity of the sequence of objective function values, which usually leads to short steps and, consequently, slow numerical convergence for highly nonlinear problems, see [1,5,13,14,27,28,38]. For example, the generalized pattern search framework exploits the quantity \(f(x_k)\), which guarantees
\[
f(x_{k+1})\le f(x_k);
\]
this means that the sequence \(\{f(x_k)\}\) is monotone. In order to avoid this drawback of globalization techniques, Gasparo et al. [13], based on the definition introduced by Grippo et al. [14], proposed a non-monotone strategy for pattern search algorithms with the quantity \(f_{l(k)}\) satisfying
\[
f(x_k+s_k)<f_{l(k)},
\]
for which
\[
f_{l(k)}=\max_{0\le j\le m(k)} f_{k-j}, \qquad (6)
\]
in which \(m(0)=0\) and \(0\le m(k)\le\min\{m(k-1)+1,\,N\}\) with \(N\ge 0\). This strategy has produced excellent results, which has led many researchers to investigate the effects of such strategies in a wide variety of optimization procedures and to propose other non-monotone techniques, see [1,13,14,27,28,38]. Although the non-monotone technique (6) has many advantages, it also has some drawbacks, see [1,38]. Recently, Ahookhosh and Amini [1] have presented a non-monotone strategy weaker than that of Grippo et al. [14], which overcomes some of its disadvantages, with the quantity \(R_k\) satisfying
\[
f(x_k+s_k)<R_k,
\]
where
\[
R_k=\eta_k f_{l(k)}+(1-\eta_k)f_k, \qquad (7)
\]
in which \(\eta_k\in[\eta_{\min},\eta_{\max}]\), \(\eta_{\min}\in[0,1)\) and \(\eta_{\max}\in[\eta_{\min},1]\). Although this proposal generates a more efficient algorithm, it depends on the choice of \(\eta_k\), and an unsuitable choice of \(\eta_k\) can cause some shortcomings. According to the characteristics and expectations of our algorithm, we further propose an appropriate \(\eta_k\). In this regard, let us first define the following ratio
which helps us to compare the distance between the members of \(\{f_k\}\) and \(\{f_{l(k)}\}\). It is clear that because and Lemma 4.5 show that . Also, it can be seen that if (), then and are far away from each other, and otherwise they are close. Now, after representing \(\eta_k\) by
| (8) |
a new non-monotone pattern search formula is defined by
| (9) |
for which the new quantity is considered as
| (10) |
The theoretical and numerical results show that the new choice of \(\eta_k\) has a remarkably positive effect on pattern search, yielding faster convergence, especially for highly nonlinear problems. Let us now use the following procedure to compute the non-monotone strategy (9):

Remark 3.1
The proposed sequence yields the behaviour of a stronger non-monotone strategy whenever the iterates are far away from the optimizer and the members of the two sequences are close to each other, while it yields the behaviour of a weaker non-monotone strategy whenever the iterates are close to the optimizer and the members of the two sequences are far away from each other.
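Since the displayed formulas (8)–(10) are not reproduced here, the following Python sketch only illustrates the general convex-combination form \(R_k=\eta_k f_{l(k)}+(1-\eta_k)f_k\) with a hypothetical adaptive rule for \(\eta_k\) in the spirit of Remark 3.1 (larger \(\eta_k\) when \(f_k\) and \(f_{l(k)}\) are close, smaller when they differ greatly); the closeness ratio, the linear update of \(\eta_k\) and the parameter values below are assumptions of this illustration, not the paper's definition (8)–(10).

```python
from collections import deque

class NonmonotoneTerm:
    """Convex-combination non-monotone term R_k = eta_k*f_l(k) + (1-eta_k)*f_k.

    The adaptive rule for eta_k is a hypothetical illustration: eta_k grows
    when f_k is close to f_l(k) and shrinks when they differ greatly.  It is
    NOT the paper's formula (8)-(10)."""

    def __init__(self, N=5, eta_min=0.1, eta_max=0.9):
        self.history = deque(maxlen=N + 1)   # f-values at recent successful iterates
        self.eta_min, self.eta_max = eta_min, eta_max

    def value(self, f_k):
        self.history.append(f_k)
        f_l = max(self.history)              # f_l(k) = max of the recent f-values
        # Hypothetical closeness ratio r in [0, 1]: r = 1 means f_k == f_l(k).
        r = f_k / f_l if f_l > 0 else 1.0
        eta = self.eta_min + (self.eta_max - self.eta_min) * r
        return eta * f_l + (1.0 - eta) * f_k

# Usage: accept a trial point x_k + s_k whenever f(x_k + s_k) < R_k.
R = NonmonotoneTerm()
for f_k in [10.0, 8.0, 9.0, 4.0, 4.5, 1.0]:
    print(round(R.value(f_k), 3))
```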
Before presenting our algorithm, we describe how to determine the step \(s_k\) by the following two procedures:

Procedure 3 requires that, as long as at least one of the steps defined by \(\Delta_k B\Gamma_k\) satisfies the non-monotone acceptance condition (9), the exploratory moves return a step satisfying this condition, without necessarily returning the best such step.

Now, to investigate the effectiveness of the new pattern search, we add the new non-monotone strategy to the framework of the pattern search method.
Note that in Algorithm 2, if \(s_k\) is obtained by Procedure 3, then the method is called non-monotone weak pattern search (NMWPS-N), while if \(s_k\) is obtained by Procedure 4, then it is called non-monotone strong pattern search (NMSPS-N). To guarantee the global convergence of NMWPS-N, which uses Procedure 3 to determine \(s_k\), we need to update \(\Delta_{k+1}\) by
| (11) |
and NMSPS-N, obtained by Procedure 4, updates \(\Delta_{k+1}\) by
| (12) |
where both θ and \(\lambda_k\) are updated as in Algorithm 1. We specify how to update \(\eta_k\) in Section 5.
The global convergence results of both NMWPS-N and NMSPS-N require the following assumptions:
(H1) The level set is bounded.
(H2) f is continuously differentiable on a compact convex set Ω containing .
It can easily be seen that, in Algorithm 2, for any index k one of the following cases occurs:
Lemma 3.1
Suppose that the sequence is generated by Algorithm 2. Then, we have the following properties:
If then .
If then .
If then .
Proof.
(1) The fact that implies and consequently
On the other hand, because , it is easily seen that
So, (P1) holds.
(2) Using the definition along with the fact that implies that and so
which gives (P2).
(3) The definition of and results in
so, (P3) holds.
Based on Lemma 3.1, the use of the new sequence yields some appropriate properties. If , then (P1) gives , so in this case, where the iterates are close to the optimizer, definition (9) provides a weaker non-monotone strategy than the non-monotone strategy (7). Otherwise, if , then (P2) gives , which leads us to produce a medium non-monotone strategy whenever the iterates are neither close to nor far away from the optimizer. Finally, if , that is, far away from the optimizer, (P3) gives , so the algorithm uses a stronger non-monotone strategy than the non-monotone strategy (6).
4. Convergence analysis
In this section, we investigate the global convergence results of the new proposed algorithm.
Lemma 4.1
Suppose that Assumption (H1) holds and the sequence is generated by Algorithm 2. Then, for all we have , the sequence for all is a convergent decreasing sequence, and also for all provided that .
Proof.
If is not accepted by Algorithm 2, then and . Otherwise, we have
(13) In the sequel, we divide the proof into two parts.
(a) . (P1), (P2) of Lemma 3.1 along with (13) imply that . In order to prove that the sequence is decreasing, we consider the following two cases:
(i) k<N. In this case . It is easily seen that
(ii) . In this case, we have , for all k. Therefore, inequality results in
while the last inequality, along with , is a consequence of (13).
(b) and . The proof is similar to cases (i) and (ii) of part (a).
Now, by strong induction, assuming , for all , it is sufficient to show . We can obtain
Thus, the sequence is contained in . Finally, Assumption (H1) along with for all implies that the sequence is bounded. Therefore, the sequence is convergent.
Lemma 4.2
Suppose that Assumption (H1) holds and the sequence is generated by Algorithm 2. Then, for all we have and whenever the sequence for all is a convergent decreasing sequence.
Proof.
If is not accepted by Algorithm 2, then and . Otherwise, we have
This fact along with and the definition results in and also
Now, by strong induction, assuming , for all , it is sufficient to show . We can obtain
Thus, the sequence is contained in . Finally, Assumption (H1) along with for all implies that the sequence is bounded. Therefore, the sequence is convergent.
Lemma 4.3
Let be a bounded sequence of vectors in by the NMSPS-N algorithm and such that . Then, under Assumptions (H1) and (H2), there exist such that for all if then the kth iteration of NMSPS-N will be successful and .
Proof.
Similar to Proposition 6.4 in [31], for , if , then we can get
in which is a constant. Hence, there exists at least one such that . Whenever , . If , then Procedure 3 guarantees and consequently . According to NMWPS-N, we have .
Lemma 4.3 gives the following corollary, see Corollary 6.5 in [31].
Corollary 4.4
Let be a bounded sequence of vectors in by NMWPS-N and such that . Then, under Assumptions (H1) and (H2), there exist such that for all if then
The above corollary helps us to establish the following lemma.
Lemma 4.5
Suppose that Assumptions (H1) and (H2) hold and the sequence is generated by the NMWPS-N algorithm. Then, we have
Proof.
Using the fact that is not the optimum of (2), we can conclude that there exists a constant such that . This fact along with Corollary 4.4 and , for some , implies that
(14) where . By replacing k with in Equation (14), we have
(15) This fact along with Lemma 4.2 results in
(16) Assumption (H2) and (16) give
(17) By letting and using the induction, for all , we can prove
(18) The fact that , together with Equation (16), shows that Equation (18) is satisfied for j=1. Assume that Equation (18) holds for a given j and take k large enough so that . Using Equation (14) and substituting k with , we have
Following the same argument to derive (17), we deduce that
and also
Similarly to Equation (17), for any given , we have
On the other hand, we can generate
This fact along with Equation (18) and implies that
Hence, Assumption (H2) leads to
Using Lemma 4.5, we can obtain the following corollary.
Corollary 4.6
Suppose that Assumptions (H1) and (H2) hold and the sequence is generated by the NMWPS-N algorithm. Then, we have
Proof.
(1) If , then the inequality along with Lemma 4.5 implies that
(2) For , recalling Lemma 4.5 along with the definition of results in
The following lemmas show that NMWPS-N and NMSPS-N algorithms are well-defined.
Lemma 4.7
Suppose that Assumption (H1) holds and the NMWPS-N algorithm has constructed an infinite sequence . Then, .
Proof.
By contradiction, suppose that does not hold; hence, we can assume that there exist a constant and an index set such that
This fact along with Equation (5) results in
which means that the sequence is bounded away from zero. Since and is compact, Lemma 3.1 in [31] implies that the sequence has an upper bound, denoted by , and hence the sequence is bounded above. In other words, the sequence is finite and consequently has, respectively, a lower and an upper bound, defined by
hence, for any , it can be concluded
i.e. it lies on a translated integer lattice generated by and the columns of , denoted by . Therefore , which is finite, and there must be at least one point in such that for infinitely many k. By the steps of NMWPS-N, a lattice point can be revisited only finitely many times; hence, a new step is accepted if and only if . This fact implies that there exists a positive index m such that , for . This fact, together with Corollary 4.6, yields and consequently , which is a contradiction since .
Since NMSPS-N uses the relationship (12) to update , it ensures that .
Corollary 4.8
Suppose that Assumption (H1) holds and the NMSPS-N algorithm has constructed an infinite sequence . Then, .
Remark 4.1
Lucidi and Sciandrone, in Proposition 2 of [23], showed that if there exist sequences , , which are bounded, and each limit point of the sequence is denoted by , for which , , positively spans , then
(19)
Theorem 4.9
Suppose that Assumptions (H1) and (H2) hold. Let be the infinite sequence generated by the NMWPS-N. Then,
(20)
Proof.
By contradiction, we assume that Equation (20) does not hold. Then, there exists a constant such that for all . From Lemma 4.7, there exists an infinite sequence such that
(21) By recalling the continuous differentiability of f, one can find, for each and for , , in which , such that
(22) We get because Equation (21) gives () and . Now, these facts, together with taking the limit on both sides of (22) for , give
yielding
Then, by Equation (19), we get
leading to
The following lemma helps us to establish the main global theorem.
Lemma 4.10
Suppose that Assumptions (H1) and (H2) hold and the columns of are bounded in norm, i.e. there exist two positive constants and such that for . Let be the sequence generated by NMSPS-N. If there exist a positive constant δ and a subsequence such that for , then
Proof.
First, we show that
By Corollary 4.4, we get
Suppose that there exists an index subset such that . Then, we get
yielding , which is a contradiction. Hence, we conclude that
At this point, the global convergence of Algorithm 2 based on the mentioned assumptions of this section can be investigated.
Theorem 4.11
Suppose that Assumptions (H1) and (H2) hold and the columns of are bounded in norm, i.e. there exist two positive constants and such that for . Then, for any generated by the non-monotone pattern search method NMSPS-N,
(23)
Proof.
By contradiction, let us assume that the conclusion does not hold. Then, there is a subsequence of successful iterations such that
Theorem 4.9 guarantees that, for each , there exists a first successful iteration such that . Denote and define the index set , hence there exists another subsequence such that
(24) This fact along with taking leads to
Then, Lemma 4.10 gives
leading to
Hence
which, from the continuity of on , gives
This is a contradiction since Equation (24) implies .
5. Numerical experiments
One of the well-known pattern search methods is the generalized coordinate search method with fixed step lengths [31]. This section reports some numerical experiments in which our algorithm, NMSCS-N, is compared with the following algorithms:
GSCS: The generalized strong coordinate search [31]
NMSCS-G: Algorithm 2 with the non-monotone term of Grippo et al. [14]
NMSCS-A: Algorithm 2 with the non-monotone term of Ahookhosh and Amini [1]
NMSCS-Z: Algorithm 2 with the non-monotone term of Zhang and Hager [38]
Test problems were selected from a wide range of papers: problems 1–23 from [25], problems 24–31 from [24] and problems 32–52 from [18].
All codes are written in the MATLAB 9 programming environment and were run on a 2.7 GHz Pentium(R) Dual-Core CPU Windows 7 PC with 2 GB of RAM, in double precision, within the same subroutine. In our numerical experiments, the algorithms are stopped whenever
or whenever the total number of function evaluations exceeds 100,000. For all algorithms, we take advantage of the parameters , , and B:=I. To calculate the non-monotone term, NMSCS-G, NMSCS-A and NMSCS-N select N:=5. For NMSCS-A, NMSCS-Z and NMSCS-N, we use , and for NMSCS-A and NMSCS-N, the parameter \(\eta_k\) is updated by
For NMSCS-N, we have taken , in which is the machine ϵ. For all iterations of the coordinate search method, the generating matrix is fixed, i.e. . Hence, this matrix contains in its columns all possible combinations of \(\{1,0,-1\}\) and consequently it has \(3^n\) columns. In particular, the columns of C contain both I and −I, as well as a column of zeros.
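For illustration, such a fixed generating matrix can be built as in the following Python sketch (added for exposition; the paper's experiments use MATLAB), which also checks that the columns of I, the columns of −I and a zero column are present.

```python
import itertools
import numpy as np

def coordinate_search_generator(n):
    """All 3**n columns with entries in {1, 0, -1}: the fixed generating
    matrix C described above (for B = I)."""
    cols = list(itertools.product((1, 0, -1), repeat=n))
    return np.array(cols, dtype=float).T          # shape (n, 3**n)

C = coordinate_search_generator(3)
print(C.shape)                                    # (3, 27)

def has_column(M, v):
    return any(np.array_equal(col, v) for col in M.T)

I = np.eye(3)
assert all(has_column(C, I[:, i]) for i in range(3))    # columns of I
assert all(has_column(C, -I[:, i]) for i in range(3))   # columns of -I
assert has_column(C, np.zeros(3))                        # a zero column
```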
The following algorithm briefly summarizes how the exploratory move directions for non-monotone coordinate search are generated, see [31]:

The exploratory moves are executed sequentially in the sense that the selection of the next trial step is based on the success or failure of the previous trial step. Thus, at any given iteration we may compute as few as n trial steps, and we never compute more than 2n, out of all the possible trial steps defined by the columns of C; see Figure 1 in [31]. However, in the worst case, the algorithm for coordinate search ensures that all 2n coordinate steps \(\pm\Delta_k e_i\) are tried before returning the step . In other words, the exploratory moves given in Algorithm 3 examine all these steps unless a step satisfying the acceptance condition is found.
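A minimal Python sketch of these sequential exploratory moves is given below (our illustration, using a plain monotone acceptance test instead of the non-monotone one): for each coordinate, the positive step is tried first and the negative step only if the positive one fails, so between n and 2n function values are computed.

```python
import numpy as np

def coordinate_exploratory_moves(f, x, delta):
    """Sequential coordinate-search exploratory moves: for each coordinate,
    try +delta*e_i first and, only if it fails, -delta*e_i; successful moves
    are accumulated.  Between n and 2n function evaluations are used.
    A plain monotone test replaces the non-monotone one for brevity."""
    n = x.size
    base = x.copy()
    f_base = f(base)
    for i in range(n):
        for sign in (+1.0, -1.0):
            trial = base.copy()
            trial[i] += sign * delta
            f_trial = f(trial)
            if f_trial < f_base:          # keep the successful move
                base, f_base = trial, f_trial
                break                      # skip the opposite direction
    return base - x                        # the returned step s_k

# Tiny usage example with a hypothetical quadratic merit function.
target = np.array([1.0, -2.0])
f = lambda x: 0.5 * float((x - target) @ (x - target))
print(coordinate_exploratory_moves(f, np.zeros(2), 0.5))   # [0.5, -0.5]
```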
At this point, to obtain a more reliable comparison, to demonstrate the overall behaviour of the present algorithms and to gain more insight into the performance of the considered codes, the performance of all codes, based on both the number of function evaluations and the CPU times for the test functions listed in Table 1, has been assessed in Figure 1 by applying the performance profiles proposed by Dolan and Moré in [6]. Subfigures (a) and (b) of Figure 1 plot the function , considered as
where denotes the set of test problems, denotes the ratio of the number of function evaluations (respectively, the CPU time) needed to solve problem p with method s to the least number of function evaluations (respectively, the least CPU time) needed to solve problem p, and is the maximum value of . Finally, the highest curve on the plot corresponds to the best solver.
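For reference, the following Python sketch computes such a Dolan–Moré performance profile from a matrix of performance measures (e.g. function-evaluation counts); the small data matrix below is made up purely for illustration and does not reproduce the paper's results.

```python
import numpy as np

def performance_profile(T, taus):
    """Dolan-More performance profile.

    T[p, s] is the cost (e.g. number of function evaluations) of solver s on
    problem p (np.inf marks a failure).  Returns rho[s, j], the fraction of
    problems on which solver s is within a factor taus[j] of the best solver.
    """
    best = T.min(axis=1, keepdims=True)            # best cost per problem
    ratios = T / best                              # performance ratios >= 1
    return np.array([[np.mean(ratios[:, s] <= tau) for tau in taus]
                     for s in range(T.shape[1])])

# Hypothetical cost matrix: 4 problems x 2 solvers (made-up numbers).
T = np.array([[120.0, 150.0],
              [ 80.0,  60.0],
              [200.0, np.inf],      # solver 2 fails on problem 3
              [ 55.0,  55.0]])
taus = np.linspace(1.0, 3.0, 5)
print(performance_profile(T, taus))   # rho_s(1) is each solver's win fraction
```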
Table 1. List of test functions.
| Problem name | Dim | Problem name | Dim |
|---|---|---|---|
| Extended Powell badly scaled | 2 | Powell singular | 4 |
| Brent | 3 | Broyden banded | 5 |
| Seven-Diagonal System | 7 | Chebyquad | 10 |
| Extended Powell Singular | 8 | Brown almost linear | 10 |
| Tridiagonal exponential | 10 | Discrete integral equation | 20 |
| Generalized Broyden banded | 10 | Diag. func. premul. by … matrix | 3 |
| Flow in a channel | 10 | Function 18 | 3 |
| Swirling flow | 10 | Strictly convex 2 | 5 |
| Troesch | 12 | Strictly convex 1 | 5 |
| Trig. exponential system 2 | 15 | Zero Jacobian | 5 |
| Countercurrent reactors 1 | 16 | Geometric | 5 |
| Countercurrent reactors 2 | 16 | Extended Rosenbrock | 6 |
| Porous medium | 16 | Geometric programming | 8 |
| Trigonometric | 20 | Tridimensional valley | 9 |
| Singular Broyden | 20 | Chandrasekhar's H-equation | 10 |
| Broyden tridiagonal | 20 | Singular | 10 |
| Extended Wood | 20 | Logarithmic | 10 |
| Extended Cragg and Levy | 24 | Variable band 2 | 10 |
| Trig. exponential system 1 | 25 | Function 15 | 10 |
| Structured Jacobian | 25 | Linear function-full rank 1 | 10 |
| Discrete boundary value | 25 | Hanbook | 10 |
| Poisson | 25 | Variable band 1 | 15 |
| Poisson 2 | 25 | Linear function-full rank 2 | 20 |
| Rosenbrock | 2 | Function 27 | 20 |
| Powell badly scaled | 2 | Complementary | 20 |
| Helical valley | 3 | Function 21 | 21 |
Figure 1.

A comparison among proposed algorithms with the performance measures: (a) Number of function evaluations (top), (b) CPU-times (bottom).
Subfigure (a) of Figure 1 compares the algorithms in terms of the total number of function evaluations. It can easily be seen that NMSCS-N is the best algorithm in the sense of having the most wins, on more than 50% of the test functions. To compare the CPU times, because of the variability of CPU-time measurements, each problem is solved five times and the average of the CPU times is taken into account. Subfigure (b) of Figure 1 presents a comparison among the considered algorithms regarding CPU times; the results indicate that the performance of NMSCS-N is better than that of the other algorithms. In detail, the new algorithm is the best on more than 35% of all cases.
6. Concluding remarks
This paper proposes a new non-monotone coordinate search algorithm to solve systems of nonlinear equations. Our method overcomes some disadvantages of the method proposed by Ahookhosh and Amini [1] by presenting a new parameter, defined by a convex combination of the maximum function value over some preceding successful iterates and the current function value. This parameter prevents the production of a weaker non-monotone strategy whenever the iterates are far away from the optimizer and of a stronger non-monotone strategy whenever they are close to the optimizer. The global convergence properties of the proposed algorithms are established. Preliminary numerical results show the significant efficiency of the new algorithm.
Funding Statement
The second author acknowledges the financial support of the Doctoral Program ‘Vienna Graduate School on Computational Optimization’ funded by Austrian Science Foundation under Project No W1260-N35.
Disclosure statement
No potential conflict of interest was reported by the authors.
References
- [1] Ahookhosh M. and Amini K., An efficient nonmonotone trust-region method for unconstrained optimization, Numer. Algorithms 59 (2012), pp. 523–540. doi: 10.1007/s11075-011-9502-5
- [2] Ahookhosh M., Amini K., and Kimiaei M., A globally convergent trust-region method for large-scale symmetric nonlinear systems, Numer. Funct. Anal. Optim. 36 (2015), pp. 830–855. doi: 10.1080/01630563.2015.1046080
- [3] Ahookhosh M., Esmaeili H., and Kimiaei M., An effective trust-region-based approach for symmetric nonlinear systems, Int. J. Comput. Math. 90(3) (2013), pp. 671–690. doi: 10.1080/00207160.2012.736617
- [4] Box G.E.P., Evolutionary operation: A method for increasing industrial productivity, Appl. Stat. 6 (1957), pp. 81–101. doi: 10.2307/2985505
- [5] Dai Y.H., On the nonmonotone line search, J. Optim. Theory Appl. 112(2) (2002), pp. 315–330. doi: 10.1023/A:1013653923062
- [6] Dolan E.D. and Moré J.J., Benchmarking optimization software with performance profiles, Math. Program. 91 (2002), pp. 201–213. doi: 10.1007/s101070100263
- [7] Esmaeili H. and Kimiaei M., An improved adaptive trust-region method for unconstrained optimization, Math. Model. Anal. 19 (2014), pp. 469–490. doi: 10.3846/13926292.2014.956237
- [8] Esmaeili H. and Kimiaei M., An efficient adaptive trust-region method for systems of nonlinear equations, Int. J. Comput. Math. 92 (2015), pp. 151–166. doi: 10.1080/00207160.2014.887701
- [9] Esmaeili H. and Kimiaei M., A trust-region method with improved adaptive radius for systems of nonlinear equations, Math. Methods Oper. Res. 83(1) (2016), pp. 109–125. doi: 10.1007/s00186-015-0522-0
- [10] Fan J.Y., Convergence rate of the trust region method for nonlinear equations under local error bound condition, Comput. Optim. Appl. 34 (2005), pp. 215–227. doi: 10.1007/s10589-005-3078-8
- [11] Fan J. and Pan J., An improved trust region algorithm for nonlinear equations, Comput. Optim. Appl. 48(1) (2011), pp. 59–70. doi: 10.1007/s10589-009-9236-7
- [12] Gasparo M.G., A nonmonotone hybrid method for nonlinear systems, Optim. Methods Softw. 13 (2000), pp. 79–94. doi: 10.1080/10556780008805776
- [13] Gasparo M.G., Papini A., and Pasquali A., Nonmonotone algorithms for pattern search methods, Numer. Algorithms 28 (2001), pp. 171–186. doi: 10.1023/A:1014046817188
- [14] Grippo L., Lampariello F., and Lucidi S., A nonmonotone line search technique for Newton's method, SIAM J. Numer. Anal. 23 (1986), pp. 707–716. doi: 10.1137/0723046
- [15] Grippo L., Lampariello F., and Lucidi S., A truncated Newton method with nonmonotone line search for unconstrained optimization, J. Optim. Theory Appl. 60(3) (1989), pp. 401–419. doi: 10.1007/BF00940345
- [16] Grippo L., Lampariello F., and Lucidi S., A class of nonmonotone stabilization methods in unconstrained optimization, Numer. Math. 59 (1991), pp. 779–805. doi: 10.1007/BF01385810
- [17] Hooke R. and Jeeves T.A., Direct search solution of numerical and statistical problems, J. ACM 8 (1961), pp. 212–229. doi: 10.1145/321062.321069
- [18] La Cruz W., Martínez J.M., and Raydan M., Spectral residual method without gradient information for solving large-scale nonlinear systems of equations: Theory and experiments, Technical Report RT-04-08, July 2004.
- [19] Lewis R.M. and Torczon V., Pattern search algorithms for bound constrained minimization, SIAM J. Optim. 9 (1999), pp. 1082–1099. doi: 10.1137/S1052623496300507
- [20] Lewis R.M. and Torczon V., Pattern search methods for linearly constrained minimization, SIAM J. Optim. 10 (2000), pp. 917–941. doi: 10.1137/S1052623497331373
- [21] Lewis R.M., Torczon V., and Trosset M.W., Why pattern search works, Optima 59 (1998), pp. 1–7.
- [22] Lewis R.M., Torczon V., and Trosset M.W., Direct search methods: Then and now, J. Comput. Appl. Math. 124 (2000), pp. 191–207. doi: 10.1016/S0377-0427(00)00423-4
- [23] Lucidi S. and Sciandrone M., On the global convergence of derivative free methods for unconstrained optimization, Technical Report 32-96, DIS, Università di Roma 'La Sapienza', 1996.
- [24] Lukšan L. and Vlček J., Sparse and partially separable test problems for unconstrained and equality constrained optimization, Technical Report No. 767, January 1999.
- [25] Moré J.J., Garbow B.S., and Hillstrom K.E., Testing unconstrained optimization software, ACM Trans. Math. Softw. 7 (1981), pp. 17–41. doi: 10.1145/355934.355936
- [26] Nocedal J. and Wright S.J., Numerical Optimization, Springer, New York, 2006.
- [27] Shi Z.J. and Wang S., Modified nonmonotone Armijo line search for descent method, Numer. Algorithms 57(1) (2011), pp. 1–25. doi: 10.1007/s11075-010-9408-7
- [28] Toint P.L., An assessment of nonmonotone linesearch techniques for unconstrained optimization, SIAM J. Sci. Comput. 17 (1996), pp. 725–739. doi: 10.1137/S106482759427021X
- [29] Torczon V., Multidirectional search: A direct search algorithm for parallel machines, Ph.D. thesis, Rice University, Houston, TX, 1989.
- [30] Torczon V., On the convergence of the multidirectional search algorithm, SIAM J. Optim. 1 (1991), pp. 123–145. doi: 10.1137/0801010
- [31] Torczon V., On the convergence of pattern search algorithms, SIAM J. Optim. 7 (1997), pp. 1–25. doi: 10.1137/S1052623493250780
- [32] Yuan G.L. and Lu X.W., A new backtracking inexact BFGS method for symmetric nonlinear equations, Comput. Math. Appl. 55 (2008), pp. 116–129. doi: 10.1016/j.camwa.2006.12.081
- [33] Yuan G.L. and Zhang M.J., A three-terms Polak–Ribière–Polyak conjugate gradient algorithm for large-scale nonlinear equations, J. Comput. Appl. Math. 286 (2015), pp. 186–195. doi: 10.1016/j.cam.2015.03.014
- [34] Yuan G.L., Lu S., and Wei Z., A new trust-region method with line search for solving symmetric nonlinear equations, Int. J. Comput. Math. 88(10) (2011), pp. 2109–2123. doi: 10.1080/00207160.2010.526206
- [35] Yuan G.L., Meng Z.H., and Li Y., A modified Hestenes and Stiefel conjugate gradient algorithm for large-scale nonsmooth minimizations and nonlinear equations, J. Optim. Theory Appl. 168 (2016), pp. 129–152. doi: 10.1007/s10957-015-0781-1
- [36] Yuan G.L., Lu X.W., and Wei Z.X., BFGS trust-region method for symmetric nonlinear equations, J. Comput. Appl. Math. 230 (2009), pp. 44–58. doi: 10.1016/j.cam.2008.10.062
- [37] Yuan G.L., Wei Z.X., and Lu X.W., A BFGS trust-region method for nonlinear equations, Computing 92(4) (2011), pp. 317–333. doi: 10.1007/s00607-011-0146-z
- [38] Zhang H.C. and Hager W.W., A nonmonotone line search technique and its application to unconstrained optimization, SIAM J. Optim. 14(4) (2004), pp. 1043–1056. doi: 10.1137/S1052623403428208
- [39] Zhang J. and Wang Y., A new trust region method for nonlinear equations, Math. Methods Oper. Res. 58 (2003), pp. 283–298. doi: 10.1007/s001860300302
