MethodsX. 2026 Feb 18;16:103835. doi: 10.1016/j.mex.2026.103835

Some generalized recurrence relations for nonlinear equations by using decomposition technique with application of fractal geometry

Farooq Ahmed Shah a, Syeda Rameesha Hamdani b, Fikadu Tesgera Tolasa c, Iftikhar Haider a
PMCID: PMC12964302  PMID: 41799827

Abstract

Nonlinear equations frequently appear in diverse fields of applied sciences, where real-world phenomena cannot be accurately represented by linear models. Therefore, developing efficient numerical methods to approximate the roots of such equations remains a challenging and intellectually stimulating task. These methods are crucial in physics, engineering and computer science for solving nonlinear equations. In response to the growing demands of real-time systems, complicated simulations and high-performance computing, this article introduces a few novel root-finding methods that significantly improve the convergence order of traditional approaches. An accelerated decomposition technique is used to derive different classes of iterative methods. The newly derived methods are compared with existing methods numerically as well as graphically. Polynomiography is employed to visualize the basins of attraction, providing insight into the convergence behavior and stability of the methods. The results indicate that the new algorithms not only overcome the limitations of existing techniques but also offer a visually intuitive understanding of root-finding processes.

This study presents innovative root-finding methods that utilize accelerated decomposition techniques.

The proposed methods demonstrate a significant improvement in convergence order compared to traditional approaches.

Through numerical and graphical comparisons, the newly derived methods are shown to outperform existing methods.

x_{n+1} = x_n - \frac{f(x_n)p(x_n)}{p(x_n)f'(x_n) + f(x_n)p'(x_n)}

Keywords: Iterative method; Quadrature rule; Taylor series; Decomposition method; Newton method; Convergence; Fractals

Graphical abstract




Specifications table

Subject area: Mathematics and Statistics
More specific subject area: Numerical Analysis
Name of your method: Iterative methods for nonlinear equations
Name and reference of original method: All are newly developed methods
Resource availability: Maple is used for calculation

Background

Finding approximate solutions of nonlinear equations is a crucial aspect of various branches of pure and applied mathematics, playing a vital role in numerous fields. In physics, it's essential for modeling population growth, electrical circuits, mechanical systems, and quantum mechanics. Engineering relies on it for designing electronic circuits, control systems, signal processing and structural analysis. Computer science utilizes nonlinear equations in machine learning, data analysis, algorithm development and computer graphics. Biology, economics, environmental science, medicine, optimization, signal processing, chemistry, aerospace, materials science, neuroscience and finance also heavily depend on solving nonlinear equations to make predictions, model real-world phenomena and optimize systems. The significance of nonlinear equations is evident in understanding market dynamics, resource allocation, and economic growth, as well as in analyzing medical imaging, modeling disease spread and understanding pharmacokinetics. The ability to solve these equations efficiently and accurately is critical in advancing research, driving innovation and solving complicated problems across these diverse disciplines. In physics, nonlinear equations appear in quantum mechanics, nonlinear optics, plasma physics, and thermodynamics. Problems such as determining energy levels, solving dispersion relations, or computing steady states of nonlinear dynamical systems rely heavily on iterative root-finding techniques. High-order iterative methods are particularly valuable in these contexts, as they can achieve rapid convergence and high precision, reducing computational cost in large-scale simulations.

The Newton method, a widely used and powerful technique for finding roots of nonlinear equations, boasts quadratic convergence but is not without limitations. Despite its rapid convergence, the Newton method can be susceptible to numerical instabilities, including division by zero, poor initial guesses, non-convergence and numerical overflow. To mitigate these issues, various modifications and alternatives have been developed, such as the secant method, bisection method, quasi-Newton methods, trust region methods, and hybrid methods. These refinements aim to improve the robustness and reliability of root-finding methods, making them more suitable for a broad range of applications. By addressing the shortcomings of the Newton method, researchers and practitioners can harness the strengths of these techniques to efficiently and accurately solve complex nonlinear equations. The Newton method can suffer from numerical instabilities when f'(x_n) = 0, or when f'(x_n) is very small, at any step of the computational process. Several iterative methods have been developed for solving f(x) = 0 using different techniques [[1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17], [18], [19], [20], [21], [22], [23], [24], [25], [26], [27], [28], [29], [30], [31], [32], [33], [34], [35], [36], [37]]. Abbasbandy [1] and Chun [[8], [9], [10]] derived different one-step and two-step iterative methods by using Adomian decomposition techniques [1]. These methods are of higher order of convergence but retain some drawbacks. Our goal is to develop a generalized recurrence relation that can be implemented to generate iterative methods for finding more accurate solutions of nonlinear equations. In this work, we implement the technique of Gejji & Jafari [11] to decompose the nonlinear equation and obtain an applicable formula for nonlinear equations.
We use auxiliary functions to diversify the techniques for the best implementation of the methods. We apply quadrature rules in this procedure and finally generalize the decomposition technique of [11], which is different from the Adomian decomposition [1], to formulate higher-order convergent iterative methods. The use of arbitrary auxiliary functions makes the decomposition much more flexible and applicable for the derivation and implementation of the iterative methods. Convergence analysis of the newly derived iterative methods is also studied. The performance of these new schemes is illustrated by several examples, and comparison with other methods is shown graphically.

Method details

In this section, we develop a novel class of iterative methods for solving nonlinear equations by integrating Taylor series expansion and numerical quadrature techniques. The Taylor series approach provides a local polynomial approximation of the nonlinear function and its derivatives, enabling the construction of higher-order iterative schemes with predictable convergence behavior. Complementing this, the quadrature-based approach utilizes the integral form of the nonlinear equation and approximates the integral using Newton–Cotes or Gaussian quadrature formulas, which effectively capture the cumulative behavior of the derivative over the iteration interval. By combining these two strategies, we construct hybrid iterative methods in which the Taylor expansion offers accurate local predictions while the quadrature evaluation refines the update step by incorporating derivative information at multiple nodes. This unified framework results in efficient, high-order Newton-type methods that maintain rapid convergence with improved stability, providing a flexible and systematic approach for generating powerful iterative algorithms for nonlinear problems. The Newton, Halley and Householder methods are easily obtained by using the Taylor series technique, and combining quadrature rules with the fundamental theorem of calculus provides various applicable methods. This section introduces fundamental and unique recurrence relations and, with the help of these new relations, innovative iterative methods for addressing nonlinear equations. By combining a system of coupled equations with decomposition techniques, we develop a robust approach to obtaining approximate solutions. The incorporation of an auxiliary function expands the versatility of the main recurrence relation, facilitating its application to a broader range of nonlinear equations.

Consider the nonlinear equation

f(x)=0. (1)

Assume that α is a simple root of the nonlinear Eq. (1) and that γ is an initial estimate close to α. Let p(x) be the involved auxiliary and arbitrary function, such that

f(x)p(x)=0. (2)

Using a Taylor series on p(x) and a quadrature formula on f'(x), Eq. (2) can be written as

f(x)p(x) = f(\gamma)p(\gamma) + (x-\gamma)\left[p'(\gamma)f(\gamma) + \frac{f'(\gamma) + 2f'\left(\frac{\gamma+x}{2}\right) + f'(x)}{4}\,p(\gamma)\right]. (3)

We can express Eq. (3) as:

h(x) = f(x)p(\gamma) - f(\gamma)p(\gamma) - (x-\gamma)\left[p'(\gamma)f(\gamma) + \frac{f'(\gamma) + 2f'\left(\frac{\gamma+x}{2}\right) + f'(x)}{4}\,p(\gamma)\right]. (4)

Eq. (4) can take the following form by simple manipulation

x = \gamma - \frac{4\left(h(x) - f(x)p(\gamma) + f(\gamma)p(\gamma)\right)}{4p'(\gamma)f(\gamma) + \left\{f'(\gamma) + 2f'\left(\frac{\gamma+x}{2}\right) + f'(x)\right\}p(\gamma)} = c + N(x). (5)

Where

c=γ, (6)

and, dropping the term f(x)p(\gamma), which vanishes at the sought root,

N(x) = -\frac{4\left(h(x) + f(\gamma)p(\gamma)\right)}{4p'(\gamma)f(\gamma) + \left\{f'(\gamma) + 2f'\left(\frac{\gamma+x}{2}\right) + f'(x)\right\}p(\gamma)}. (7)

Where N(x) is a nonlinear function.

Now we obtain new iterative schemes by using the decomposition techniques of [11]. The main idea of this decomposition technique is applied to find the solution in the following series form.

x = \sum_{i=0}^{\infty} x_i. (8)

The nonlinear operator N(x) can be decomposed as shown in the following equation

N(x) = N(x_0) + \sum_{i=1}^{\infty}\left\{N\!\left(\sum_{j=0}^{i} x_j\right) - N\!\left(\sum_{j=0}^{i-1} x_j\right)\right\}. (9)

From Eq. (6), (8), (9), we get,

x_0 = c,
x_1 = N(x_0),
x_2 = N(x_0 + x_1) - N(x_0),
\vdots
x_{m+1} = N\!\left(\sum_{j=0}^{m} x_j\right) - N\!\left(\sum_{j=0}^{m-1} x_j\right). (10)

From Eq. (10), we have

x_1 + x_2 + \cdots + x_{m+1} = N(x_0 + x_1 + \cdots + x_m). (11)

Where

\lim_{m \to \infty} X_m = x, \quad X_m = \sum_{i=0}^{m} x_i. (12)

Now for m=0

x \approx X_0 = x_0 = c = \gamma. (13)

For m=1

x \approx X_1 = x_0 + x_1 = \gamma + N(x_0). (14)

So, from Eq. (8) and (11)

x_1 = N(x_0) = -\frac{4\left(h(x_0) + f(\gamma)p(\gamma)\right)}{4p'(\gamma)f(\gamma) + \left\{f'(\gamma) + 2f'\left(\frac{\gamma+x_0}{2}\right) + f'(x_0)\right\}p(\gamma)}. (15)

From Eq. (4) and by using the idea suggested by Yun [27], we set

h(x0)=0. (16)

Substituting Eq. (16) into (15) and combining with Eqs. (14) and (11), we obtain

x = x_0 + x_1 = \gamma - \frac{4f(\gamma)p(\gamma)}{4p'(\gamma)f(\gamma) + \{f'(\gamma) + 2f'(\gamma) + f'(\gamma)\}p(\gamma)}, (18)

which yields

x = \gamma - \frac{f(\gamma)p(\gamma)}{p(\gamma)f'(\gamma) + f(\gamma)p'(\gamma)}. (19)

Eq. (19) allows us to establish the following one-step iterative scheme for solving the nonlinear equation as:

Algorithm 2.1

For a given x0, compute the approximate solution xn+1 by the following iterative scheme.

x_{n+1} = x_n - \frac{f(x_n)p(x_n)}{p(x_n)f'(x_n) + f(x_n)p'(x_n)}.
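The one-step scheme of Algorithm 2.1 is straightforward to prototype. The sketch below is our own illustration, not code from the paper (function and parameter names, tolerance and iteration cap are our assumptions); it expects the user to supply f, its derivative f', and an auxiliary function p with its derivative p':

```python
# Illustrative sketch of Algorithm 2.1 (names and tolerances are our own).
def one_step(f, df, p, dp, x0, tol=1e-13, max_iter=100):
    """Iterate x_{n+1} = x_n - f*p / (p*f' + f*p') until two
    successive iterates agree to within tol."""
    x = x0
    for _ in range(max_iter):
        denom = p(x) * df(x) + f(x) * dp(x)
        if denom == 0:
            raise ZeroDivisionError("vanishing denominator in Algorithm 2.1")
        x_next = x - f(x) * p(x) / denom
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x
```

With the trivial auxiliary function p(x) = 1 the scheme collapses to the classical Newton method; for example, one_step applied to f(x) = x^2 - 2 from x_0 = 1 converges to the square root of 2.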

From Eq. (19), we have

x_0 + x_1 - \gamma = -\frac{f(\gamma)p(\gamma)}{p(\gamma)f'(\gamma) + f(\gamma)p'(\gamma)}. (20)

Now using Eqs. (4) and (7) with the help of the idea given by Yun [33], we have

h(x_0+x_1) = f(x_0+x_1)p(\gamma) - f(\gamma)p(\gamma) - (x_0+x_1-\gamma)\left[p'(\gamma)f(\gamma) + \frac{f'(\gamma) + 2f'\left(\frac{\gamma+x_0+x_1}{2}\right) + f'(x_0+x_1)}{4}\,p(\gamma)\right] (21)
= f(x_0+x_1)p(\gamma) - f(\gamma)p(\gamma) + \frac{f(\gamma)p(\gamma)}{p(\gamma)f'(\gamma) + f(\gamma)p'(\gamma)}\left[p'(\gamma)f(\gamma) + \frac{f'(\gamma) + 2f'\left(\frac{\gamma+x_0+x_1}{2}\right) + f'(x_0+x_1)}{4}\,p(\gamma)\right]. (22)

From Eqs. (8) and (22) we have

x_1 + x_2 = N(x_0+x_1) = -\frac{4\left(h(x_0+x_1) + f(\gamma)p(\gamma)\right)}{4p'(\gamma)f(\gamma) + \left\{f'(\gamma) + 2f'\left(\frac{\gamma+x_0+x_1}{2}\right) + f'(x_0+x_1)\right\}p(\gamma)} = -\frac{f(\gamma)p(\gamma)}{p(\gamma)f'(\gamma) + f(\gamma)p'(\gamma)} - \frac{4f(x_0+x_1)p(\gamma)}{4p'(\gamma)f(\gamma) + \left(f'(\gamma) + 2f'\left(\frac{\gamma+x_0+x_1}{2}\right) + f'(x_0+x_1)\right)p(\gamma)}. (23)

For m=2

x \approx X_2 = x_0 + x_1 + x_2 = \gamma + N(x_0 + x_1). (24)

Now by combining Eq. (7), (23) and (24) we get,

x = \gamma - \frac{f(\gamma)p(\gamma)}{p(\gamma)f'(\gamma) + f(\gamma)p'(\gamma)} - \frac{4f(x_0+x_1)p(\gamma)}{4p'(\gamma)f(\gamma) + \left(f'(\gamma) + 2f'\left(\frac{\gamma+x_0+x_1}{2}\right) + f'(x_0+x_1)\right)p(\gamma)}. (25)

Using the above relation, we suggest the following two-step iterative method for solving the nonlinear Eq. (1).

Algorithm 2.2

For a given x0, compute the approximate solution xn+1 by the following iterative scheme:

y_n = x_n - \frac{f(x_n)p(x_n)}{p(x_n)f'(x_n) + f(x_n)p'(x_n)},
x_{n+1} = y_n - \frac{4f(y_n)p(x_n)}{4p'(x_n)f(x_n) + \left(f'(x_n) + 2f'\left(\frac{x_n+y_n}{2}\right) + f'(y_n)\right)p(x_n)}.

From Eq. (25), we have

x_0 + x_1 + x_2 - \gamma = -\frac{f(\gamma)p(\gamma)}{p(\gamma)f'(\gamma) + f(\gamma)p'(\gamma)} - \frac{4f(x_0+x_1)p(\gamma)}{4p'(\gamma)f(\gamma) + \left(f'(\gamma) + 2f'\left(\frac{\gamma+x_0+x_1}{2}\right) + f'(x_0+x_1)\right)p(\gamma)}. (26)

Now from Eq. (4), and using the idea of Yun [33], we have

h(x_0+x_1+x_2) = f(x_0+x_1+x_2)p(\gamma) - f(\gamma)p(\gamma) - (x_0+x_1+x_2-\gamma)\times\left[p'(\gamma)f(\gamma) + \frac{f'(\gamma) + 2f'\left(\frac{\gamma+x_0+x_1+x_2}{2}\right) + f'(x_0+x_1+x_2)}{4}\,p(\gamma)\right] (27)
= f(x_0+x_1+x_2)p(\gamma) - f(\gamma)p(\gamma) + \left(\frac{f(\gamma)p(\gamma)}{p(\gamma)f'(\gamma) + f(\gamma)p'(\gamma)} + \frac{4f(x_0+x_1)p(\gamma)}{4p'(\gamma)f(\gamma) + \left(f'(\gamma) + 2f'\left(\frac{\gamma+x_0+x_1}{2}\right) + f'(x_0+x_1)\right)p(\gamma)}\right)\times\left[p'(\gamma)f(\gamma) + \frac{f'(\gamma) + 2f'\left(\frac{\gamma+x_0+x_1+x_2}{2}\right) + f'(x_0+x_1+x_2)}{4}\,p(\gamma)\right], (28)

and

x_1 + x_2 + x_3 = N(x_0+x_1+x_2) = -\frac{f(\gamma)p(\gamma)}{p(\gamma)f'(\gamma) + f(\gamma)p'(\gamma)} - \frac{4f(x_0+x_1)p(\gamma)}{4p'(\gamma)f(\gamma) + \left(f'(\gamma) + 2f'\left(\frac{\gamma+x_0+x_1}{2}\right) + f'(x_0+x_1)\right)p(\gamma)} - \frac{4f(x_0+x_1+x_2)p(\gamma)}{4p'(\gamma)f(\gamma) + \left(f'(\gamma) + 2f'\left(\frac{\gamma+x_0+x_1+x_2}{2}\right) + f'(x_0+x_1+x_2)\right)p(\gamma)}. (29)

For m=3

x \approx X_3 = x_0 + x_1 + x_2 + x_3 = \gamma + N(x_0 + x_1 + x_2). (30)

So, by combining Eqs. (7), (29) and (30), we get the following

x = \gamma - \frac{f(\gamma)p(\gamma)}{p(\gamma)f'(\gamma) + f(\gamma)p'(\gamma)} - \frac{4f(x_0+x_1)p(\gamma)}{4p'(\gamma)f(\gamma) + \left(f'(\gamma) + 2f'\left(\frac{\gamma+x_0+x_1}{2}\right) + f'(x_0+x_1)\right)p(\gamma)} - \frac{4f(x_0+x_1+x_2)p(\gamma)}{4p'(\gamma)f(\gamma) + \left(f'(\gamma) + 2f'\left(\frac{\gamma+x_0+x_1+x_2}{2}\right) + f'(x_0+x_1+x_2)\right)p(\gamma)}. (31)

The above relation allows us to suggest the following three-step iterative method for solving Eq. (1).

Algorithm 2.3

For a given x0, calculate the approximate solution xn+1 by the following iterative schemes:

y_n = x_n - \frac{f(x_n)p(x_n)}{p(x_n)f'(x_n) + f(x_n)p'(x_n)},
z_n = y_n - \frac{4f(y_n)p(x_n)}{4p'(x_n)f(x_n) + \left(f'(x_n) + 2f'\left(\frac{x_n+y_n}{2}\right) + f'(y_n)\right)p(x_n)},
x_{n+1} = z_n - \frac{4f(z_n)p(x_n)}{4p'(x_n)f(x_n) + \left(f'(x_n) + 2f'\left(\frac{x_n+z_n}{2}\right) + f'(z_n)\right)p(x_n)}.
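For concreteness, the three-step scheme of Algorithm 2.3 can be sketched as below. This is our own illustration (helper names are not from the paper); the second and third steps share the same corrector form, differing only in the point at which it is applied:

```python
# Sketch of the three-step method (Algorithm 2.3); names are our own.
def three_step(f, df, p, dp, x0, tol=1e-13, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx, dfx, px, dpx = f(x), df(x), p(x), dp(x)
        y = x - fx * px / (px * dfx + fx * dpx)      # predictor step

        def corrector(w):
            # trapezoid-type quadrature of f' over [x, w]
            quad = dfx + 2 * df((x + w) / 2) + df(w)
            return w - 4 * f(w) * px / (4 * dpx * fx + quad * px)

        z = corrector(y)
        x_next = corrector(z)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x
```

For example, with the trivial auxiliary function p(x) = 1 the method applied to f(x) = x^3 - x - 2 from x_0 = 2 converges to the real root near 1.52138.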

From Algorithms 2.1, 2.2 and 2.3, we suggest some iterative methods for solving the nonlinear Eq. (1) by using different values of the auxiliary function. Choosing suitable auxiliary functions diversifies the main recurrence relation to obtain the best solution of the nonlinear Eq. (1).

We consider the two auxiliary functions p(x) = e^{-\omega x} and p(x) = e^{-\omega f(x)/f'(x)}. Choosing p(x) = e^{-\omega x} provides Algorithms 2.4 to 2.6, while p(x) = e^{-\omega f(x)/f'(x)} provides Algorithms 2.7 to 2.9. Some of these methods are well known and can be treated as special cases of our main recurrence relations.

Algorithm 2.4

For a given x0, compute the approximate solution xn+1 by the following iterative scheme:

x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n) - \omega f(x_n)}.

The well-known Newton method [19] is a special case of Algorithm 2.4, obtained when we take ω = 0.

Algorithm 2.5

For a given x0, compute the approximate solution xn+1 by the following iterative schemes:

y_n = x_n - \frac{f(x_n)}{f'(x_n) - \omega f(x_n)},
x_{n+1} = y_n - \frac{4f(y_n)}{f'(x_n) + 2f'\left(\frac{x_n+y_n}{2}\right) + f'(y_n) - 4\omega f(x_n)}.

Algorithm 2.6

For a given x0, compute the approximate solution xn+1 by the following iterative schemes:

y_n = x_n - \frac{f(x_n)}{f'(x_n) - \omega f(x_n)},
z_n = y_n - \frac{4f(y_n)}{f'(x_n) + 2f'\left(\frac{x_n+y_n}{2}\right) + f'(y_n) - 4\omega f(x_n)},
x_{n+1} = z_n - \frac{4f(z_n)}{f'(x_n) + 2f'\left(\frac{x_n+z_n}{2}\right) + f'(z_n) - 4\omega f(x_n)}.
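As we read Algorithms 2.4 to 2.6, the e^{-\omega x} family involves only f, f' and the parameter ω. The code below is our own rendering of the three-step member, Algorithm 2.6 (names and defaults are ours):

```python
# Our rendering of the three-step family of Algorithm 2.6; with omega = 0
# the first step reduces to the classical Newton step.
def alg26(f, df, x0, omega=0.0, tol=1e-13, max_iter=50):
    x = x0
    for _ in range(max_iter):
        y = x - f(x) / (df(x) - omega * f(x))
        z = y - 4 * f(y) / (df(x) + 2 * df((x + y) / 2) + df(y)
                            - 4 * omega * f(x))
        x_next = z - 4 * f(z) / (df(x) + 2 * df((x + z) / 2) + df(z)
                                 - 4 * omega * f(x))
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x
```

For instance, applied to f(x) = x^2 - 2 from x_0 = 1, the iteration converges to the square root of 2 for ω = 0.1 as well as for ω = 0.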

Algorithm 2.7

For a given x0, compute the approximate solution xn+1 by the following iterative scheme:

x_{n+1} = x_n - \frac{f(x_n)(f'(x_n))^2}{(f'(x_n))^3 - \omega\left((f'(x_n))^2 - f(x_n)f''(x_n)\right)f(x_n)}.

The well-known Newton method is a special case of Algorithm 2.7, obtained when we take ω = 0.

Algorithm 2.8

For a given x0, compute the approximate solution xn+1 by the following iterative schemes:

y_n = x_n - \frac{f(x_n)(f'(x_n))^2}{(f'(x_n))^3 - \omega\left((f'(x_n))^2 - f(x_n)f''(x_n)\right)f(x_n)},
x_{n+1} = y_n - \frac{4f(y_n)(f'(x_n))^2}{\left(f'(x_n) + 2f'\left(\frac{x_n+y_n}{2}\right) + f'(y_n)\right)(f'(x_n))^2 - 4\omega\left((f'(x_n))^2 - f(x_n)f''(x_n)\right)f(x_n)}.

Algorithm 2.9

For a given x0, compute the approximate solution xn+1 by the following iterative schemes:

y_n = x_n - \frac{f(x_n)(f'(x_n))^2}{(f'(x_n))^3 - \omega\left((f'(x_n))^2 - f(x_n)f''(x_n)\right)f(x_n)},
z_n = y_n - \frac{4f(y_n)(f'(x_n))^2}{\left(f'(x_n) + 2f'\left(\frac{x_n+y_n}{2}\right) + f'(y_n)\right)(f'(x_n))^2 - 4\omega\left((f'(x_n))^2 - f(x_n)f''(x_n)\right)f(x_n)},
x_{n+1} = z_n - \frac{4f(z_n)(f'(x_n))^2}{\left(f'(x_n) + 2f'\left(\frac{x_n+z_n}{2}\right) + f'(z_n)\right)(f'(x_n))^2 - 4\omega\left((f'(x_n))^2 - f(x_n)f''(x_n)\right)f(x_n)}.

Remark: By varying the value of ω, we can generate distinct classes of iterative schemes from these newly developed methods. Optimal results can be achieved by selecting the value of ω that maximizes the denominator of the corrector function.

Nomenclature.

Symbols Descriptions
IT Number of iterations
x_0 Initial guess
x_n Current iterate in the iterative method
x_{n+1} Next iterate (updated value)
ε Tolerance for convergence
|x_{n+1} - x_n| Absolute error
COC Computational order of convergence
TOC Time of computation (CPU seconds)
NM Newton method
Alg Algorithm
DIV Divergence

Convergence analysis

This section is devoted to analyzing the convergence behavior of the proposed iterative methods, presented earlier as Algorithm 2.3, by utilizing the Taylor series approach.

Theorem 3.1

Let α ∈ I be a simple zero of a sufficiently differentiable function f: I ⊆ ℝ → ℝ for an open interval I. If x_0 is sufficiently close to α, then the iterative method defined in Algorithm 2.3 has at least fourth-order convergence.

Proof. Let α be a simple zero of f(x). Then, expanding f(x_n) and f'(x_n) in Taylor series about α, we have

f(x_n) = f'(\alpha)\left[e_n + c_2e_n^2 + c_3e_n^3 + c_4e_n^4 + c_5e_n^5 + c_6e_n^6 + O(e_n^7)\right], (32)

and

f'(x_n) = f'(\alpha)\left[1 + 2c_2e_n + 3c_3e_n^2 + 4c_4e_n^3 + 5c_5e_n^4 + 6c_6e_n^5 + O(e_n^6)\right]. (33)

Where

c_k = \frac{1}{k!}\frac{f^{(k)}(\alpha)}{f'(\alpha)}, \quad k = 2, 3, \ldots, \quad \text{and} \quad e_n = x_n - \alpha.

Now we expand f(x_n)p(x_n), f(x_n)p'(x_n) and f'(x_n)p(x_n) in Taylor series; we obtain

f(x_n)p(x_n) = f'(\alpha)\left[p(\alpha)e_n + (p'(\alpha) + c_2p(\alpha))e_n^2 + p_1e_n^3 + O(e_n^4)\right], (34)
f(x_n)p'(x_n) = f'(\alpha)\left[p'(\alpha)e_n + (p''(\alpha) + c_2p'(\alpha))e_n^2 + p_2e_n^3 + O(e_n^4)\right], (35)

and

f'(x_n)p(x_n) = f'(\alpha)\left[p(\alpha) + (p'(\alpha) + 2c_2p(\alpha))e_n + \left(\frac{p''(\alpha)}{2} + 2c_2p'(\alpha) + 3c_3p(\alpha)\right)e_n^2 + p_3e_n^3 + O(e_n^4)\right]. (36)

Where

p_1 = \frac{p''(\alpha)}{2} + c_3p(\alpha) + c_2p'(\alpha), (37)
p_2 = \frac{p'''(\alpha)}{2} + c_3p'(\alpha) + c_2p''(\alpha), (38)

and

p_3 = \frac{p'''(\alpha)}{6} + 4c_4p(\alpha) + c_2p''(\alpha) + 3c_3p'(\alpha). (39)

From Eq. (34), (35), (36), we get

\frac{f(x_n)p(x_n)}{f'(x_n)p(x_n) + f(x_n)p'(x_n)} = e_n - \left(\frac{p'(\alpha)}{p(\alpha)} + c_2\right)e_n^2 - \left(\frac{p''(\alpha)}{p(\alpha)} + 2c_3 - 2c_2\frac{p'(\alpha)}{p(\alpha)} - 2\left(\frac{p'(\alpha)}{p(\alpha)}\right)^2 - 2c_2^2\right)e_n^3 + O(e_n^4). (40)

Using Eq. (40), we have

y_n = \alpha + \left(\frac{p'(\alpha)}{p(\alpha)} + c_2\right)e_n^2 + \left(\frac{p''(\alpha)}{p(\alpha)} + 2c_3 - 2c_2\frac{p'(\alpha)}{p(\alpha)} - 2\left(\frac{p'(\alpha)}{p(\alpha)}\right)^2 - 2c_2^2\right)e_n^3 + O(e_n^4). (41)

Now, expanding f(y_n) in Taylor's series about α and using Eq. (41), we have

f(y_n) = f'(\alpha)\left[\left(\frac{p'(\alpha)}{p(\alpha)} + c_2\right)e_n^2 + \left(\frac{p''(\alpha)}{p(\alpha)} + 2c_3 - 2c_2\frac{p'(\alpha)}{p(\alpha)} - 2\left(\frac{p'(\alpha)}{p(\alpha)}\right)^2 - 2c_2^2\right)e_n^3 + O(e_n^4)\right]. (42)

Similarly, we get

f'(y_n) = f'(\alpha)\left[1 + 2c_2\left(\frac{p'(\alpha)}{p(\alpha)} + c_2\right)e_n^2 + 2c_2\left(\frac{p''(\alpha)}{p(\alpha)} + 2c_3 - 2c_2\frac{p'(\alpha)}{p(\alpha)} - 2\left(\frac{p'(\alpha)}{p(\alpha)}\right)^2 - 2c_2^2\right)e_n^3 + O(e_n^4)\right]. (43)

Now we find f(yn)p(xn) by expanding with Taylor’s series

f(y_n)p(x_n) = f'(\alpha)\left[(c_2p(\alpha) + p'(\alpha))e_n^2 - \left(\frac{(p'(\alpha))^2}{p(\alpha)} - p''(\alpha) + c_2p'(\alpha) - 2c_3p(\alpha) + 2c_2^2p(\alpha)\right)e_n^3 + O(e_n^4)\right]. (44)

By Taylor's series, we obtain the expansion of f'\left(\frac{x_n+y_n}{2}\right) as

f'\left(\frac{x_n+y_n}{2}\right) = f'(\alpha)\left[1 + c_2e_n + \frac{1}{4}\left(4c_2\frac{p'(\alpha)}{p(\alpha)} + 4c_2^2 + 3c_3\right)e_n^2 + O(e_n^3)\right]. (45)

Using Eq. (33), (43), (45), we get

\left[f'(x_n) + 2f'\left(\frac{x_n+y_n}{2}\right) + f'(y_n)\right]p(x_n) = f'(\alpha)\left[4p(\alpha) + 4(p'(\alpha) + c_2p(\alpha))e_n + \frac{1}{4}\left(8p''(\alpha) + 32c_2p'(\alpha) + 18c_3p(\alpha) + 16c_2^2p(\alpha)\right)e_n^2 + O(e_n^3)\right]. (46)

Using Eqs. (35), (45) and (46), we get

-\frac{4f(y_n)p(x_n)}{\left[f'(x_n) + 2f'\left(\frac{x_n+y_n}{2}\right) + f'(y_n)\right]p(x_n) + 4f(x_n)p'(x_n)} = -\left(\frac{p'(\alpha)}{p(\alpha)} + c_2\right)e_n^2 - \frac{1}{4}\left[12\left(\frac{p'(\alpha)}{p(\alpha)}\right)^2 + 16c_2\frac{p'(\alpha)}{p(\alpha)} - 4\frac{p''(\alpha)}{p(\alpha)} - 8c_3 + 12c_2^2\right]e_n^3 + O(e_n^4). (47)

Using Eqs. (41) and (47), we get

z_n = \alpha + \left(\left(\frac{p'(\alpha)}{p(\alpha)}\right)^2 + 2c_2\frac{p'(\alpha)}{p(\alpha)} + c_2^2\right)e_n^3 + O(e_n^4). (48)

Now using Eq. (48), we have

f(z_n) = f'(\alpha)\left[\left(\left(\frac{p'(\alpha)}{p(\alpha)}\right)^2 + 2c_2\frac{p'(\alpha)}{p(\alpha)} + c_2^2\right)e_n^3 + O(e_n^4)\right], (49)

and

f'(z_n) = f'(\alpha)\left[1 + 2c_2\left(\left(\frac{p'(\alpha)}{p(\alpha)}\right)^2 + 2c_2\frac{p'(\alpha)}{p(\alpha)} + c_2^2\right)e_n^3 + O(e_n^4)\right]. (50)

Again using Eq. (48), (49) and (50), we get

\left[f'(x_n) + 2f'\left(\frac{x_n+z_n}{2}\right) + f'(z_n)\right]p(x_n) = f'(\alpha)\left[4p(\alpha) + 4(p'(\alpha) + c_2p(\alpha))e_n + \left(2p''(\alpha) + 4c_2p'(\alpha) + \frac{9}{2}c_3p(\alpha)\right)e_n^2 + O(e_n^3)\right]. (51)

Using Eq. (49) and (51), we obtain

-\frac{4f(z_n)p(x_n)}{\left[f'(x_n) + 2f'\left(\frac{x_n+z_n}{2}\right) + f'(z_n)\right]p(x_n) + 4f(x_n)p'(x_n)} = -\left(\left(\frac{p'(\alpha)}{p(\alpha)}\right)^2 + 2c_2\frac{p'(\alpha)}{p(\alpha)} + c_2^2\right)e_n^3 + O(e_n^4). (52)

By using Eq. (47) and (52), we get the result as

x_{n+1} = \alpha + \left(3\left(\frac{p'(\alpha)}{p(\alpha)}\right)^2c_2 + 3c_2^2\frac{p'(\alpha)}{p(\alpha)} + \left(\frac{p'(\alpha)}{p(\alpha)}\right)^3 + c_2^3\right)e_n^4 + O(e_n^5). (53)

Finally, we get the error equation from Eq. (53)

e_{n+1} = \left(3\left(\frac{p'(\alpha)}{p(\alpha)}\right)^2c_2 + 3c_2^2\frac{p'(\alpha)}{p(\alpha)} + \left(\frac{p'(\alpha)}{p(\alpha)}\right)^3 + c_2^3\right)e_n^4 + O(e_n^5). (54)

Error Eq. (54) shows that the scheme mentioned in Algorithm 2.3 generates at least fourth-order convergent iterative methods for nonlinear equations.
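It is worth noting (our observation, not stated in the source) that the coefficient in the error equation is a perfect cube:

```latex
e_{n+1} \;=\; \left(\frac{p'(\alpha)}{p(\alpha)} + c_2\right)^{\!3} e_n^{4} \;+\; O\!\left(e_n^{5}\right),
```

since 3a^2c_2 + 3ac_2^2 + a^3 + c_2^3 = (a + c_2)^3 with a = p'(\alpha)/p(\alpha). An auxiliary function for which p'(\alpha)/p(\alpha) is close to -c_2 therefore shrinks the leading error constant, which is consistent with the earlier Remark on tuning ω.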

Method validation

In this section, we implement numerical simulations for some real-life applications to provide significant validation of the proposed iterative schemes, expressed as Algorithms 2.4–2.9. Comparison is made with the classical Newton method (NM) and with Chun's methods CH1 [8], CH2 [8], CH3 [9], CH4 [9] and SH [21]. The error tolerance is set to ε = 10^{-13}, while the working precision is set to 400 digits. In each table, we collect the number of iterations required for each technique to meet the stated error tolerance, the computational order of convergence, the absolute error, and the function's absolute value at the final step of the procedure. These numerical explorations were performed on a machine running Windows 11 (64-bit) with 24.0 GB of RAM and an Intel Core i7-1065G7 CPU (1.50 GHz). The stopping criterion for the computer program is:

|x_{n+1} - x_n| < \varepsilon.

The computational order of convergence (COC) is approximated as

\mathrm{COC} \approx \frac{\ln\left(|x_{n+1} - x_n| / |x_n - x_{n-1}|\right)}{\ln\left(|x_n - x_{n-1}| / |x_{n-1} - x_{n-2}|\right)}.
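The COC estimate uses only the last four iterates. A minimal helper (our own, not from the paper):

```python
# Estimate the computational order of convergence from four consecutive
# iterates x0 (oldest) ... x3 (newest), following the formula above.
import math

def coc(x3, x2, x1, x0):
    num = math.log(abs(x3 - x2) / abs(x2 - x1))
    den = math.log(abs(x2 - x1) / abs(x1 - x0))
    return num / den
```

Feeding it the last four Newton iterates for f(x) = x^2 - 2 started at x_0 = 1 returns a value close to the theoretical order 2.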

Numerical results for Examples 1 and 2 are shown in Table 1 and Table 2, while logs of residues are shown graphically in Fig. 1, Fig. 2, Fig. 3 and Fig. 4, respectively, using Algorithms 2.4–2.9. IT stands for the number of iterations, |x_{n+1} - x_n| is the difference of the last two consecutive iterates, COC is also given in the tables, and in the last column TOC expresses the CPU time, taking one second as the unit.

Table 1.

(Numerical results for Example 1).

Method ω xn+1 IT |xn+1xn| COC TOC
NM - 0.100997929685740 13 1.8001e-14 1.98361039461551 0.0160
CH1 - 0.100997929685740 8 1.8001e-14 3.10445656730423 0.0150
CH2 - 0.100997929685740 12 1.8001e-14 3.00232300230455 0.0160
CH3 - DIV - - - -
CH4 - DIV - - - -
SH - DIV - - - -
Alg2.4 0.4 0.100997929685740 5 1.8000e-14 2.34553635649529 0.0140
Alg2.5 0.4 0.100997929685740 4 2.4686e-10 2.96798171496893 0.0140
Alg 2.6 −0.2 0.100997929685740 4 1.3000e-14 3.96124106685813 0.0140
Alg2.7 0.54 0.100997929685740 7 1.8000e-14 3.80904621045617 0.0150
Alg2.8 −0.2 0.100997929685740 4 2.4686e-10 2.97015872405991 0.0140
Alg 2.9 0.1 0.100997929685740 4 2.1000e-14 4.03557680849725 0.0140

Table 2.

(Numerical results for Example 2).

Method ω xn+1 IT |xn+1xn| COC TOC
NM - −0.317061774531111 14 3.5000000000e-14 2.07199263041623 0.0160
CH1 - −0.317061774531066 5 3.0000000000e-14 3.11394152433442 0.0150
CH2 - −0.317061774531127 6 9.5000000000e-14 2.98394752854687 0.0160
CH3 - DIV - - - -
CH4 - DIV - - - -
SH - DIV - - - -
Alg2.4 −0.6 −0.317061774531037 8 1.6159053000e-08 2.00212508311349 0.0140
Alg2.5 −0.2 −0.317061774531073 4 1.5090000000e-11 2.98394501593417 0.0140
Alg 2.6 −0.3 −0.317061774531111 3 2.5169423986e-05 3.90774890438257 0.0140
Alg2.7 −0.6 −0.317061774531066 6 3.7127510000e-09 2.00137483225598 0.0150
Alg2.8 −1 −0.317061774531073 5 5.7914419000e-08 2.98474714409496 0.0140
Alg 2.9 −0.2 −0.317061774531142 3 1.4224339000e-08 3.99711813534491 0.0140

DIV stands for divergence.

Fig. 1.


Log of residue of Example 1.

Fig. 2.


Comparison of iterations for Example 1.

Fig. 3.


Log of residue of Example 1.

Fig. 4.


Comparison of iterations for Example 2.

Example 1

Analyze short-term population dynamics by assuming that the population grows continuously in time at a rate proportional to the number present at that time. Let N(t) denote the population at time t and λ denote the constant birth rate. Supposing a uniform immigration rate v, the population satisfies the differential equation

\frac{dN(t)}{dt} = \lambda N(t) + v.

The solution is given by:

N(t) = N_0e^{\lambda t} + \frac{v}{\lambda}\left(e^{\lambda t} - 1\right).

Suppose a certain population consists of N(0) = 1,000,000 residents initially, that 435,000 residents immigrate into the community in the first year, and that N(1) = 1,564,000 residents are present at the end of one year. To determine the growth rate λ, we must solve the following equation

1{,}564{,}000 = 1{,}000{,}000\,e^{\lambda} + \frac{435{,}000}{\lambda}\left(e^{\lambda} - 1\right).

We use x_0 = 1 as an a priori estimate; the root of the above problem, approximated to 15 decimal digits, is 0.100997929685750.
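As an independent cross-check (ours, using a plain Newton iteration rather than the proposed algorithms), the growth-rate equation can be solved numerically:

```python
# Cross-check of Example 1: solve
# 1,564,000 = 1,000,000 e^l + (435,000/l)(e^l - 1)
# for the growth rate l with a plain Newton iteration (our own check).
import math

def f(lam):
    return (1_000_000 * math.exp(lam)
            + 435_000 * (math.exp(lam) - 1) / lam - 1_564_000)

def df(lam):
    # derivative of f with respect to lam
    return (1_000_000 * math.exp(lam)
            + 435_000 * (lam * math.exp(lam) - math.exp(lam) + 1) / lam ** 2)

lam = 1.0                      # same initial guess as in the text
for _ in range(100):
    step = f(lam) / df(lam)
    lam -= step
    if abs(step) < 1e-13:
        break
```

The iteration settles on λ ≈ 0.1009979…, in agreement with the root quoted above.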

A comparative analysis of the residual fall for each method in Example 1 is presented in Fig. 1. This visualization enables a detailed evaluation of the methods' performance, revealing that the newly developed methods demonstrate a substantially faster rate of residual fall. This accelerated convergence underscores the improved efficacy and precision of the new methods, making them an asset for solving related problems.

A comprehensive comparison of various iterative methods is presented in Fig. 2, showcasing their performance on Example 1. Notably, all methods were implemented with identical stopping criteria and initial guesses, ensuring a fair evaluation. The results unequivocally demonstrate that the newly derived iterative methods significantly outperform existing methods, achieving faster convergence and greater accuracy. Moreover, it is evident that the CH3, CH4, and SH methods fail to converge for this specific problem, diverging and failing to attain the required accuracy for the root. In contrast, the newly developed methods exhibit robust performance, consistently approaching the root with precision.

The residual fall for all methods applied to Example 1 can also be viewed in Fig. 3; comparing the methods, one again finds that the newly derived methods show a faster fall.

Example 2

Assume that a particle starts from rest on a smooth inclined plane whose angle θ changes at a constant rate

\frac{d\theta}{dt} = \omega < 0.

At the end of t seconds, the position of the object is given by

x(t) = -\frac{g}{2\omega^2}\left(\frac{e^{\omega t} - e^{-\omega t}}{2} - \sin\omega t\right).

Suppose the particle has traversed a distance of 1.7 ft in 1 s. We have to find the rate ω at which the angle θ changes, assuming that g = 32.17 ft/s². The solution of the problem, approximated to 15 decimal digits, is -0.317061774531088. We use x_0 = 3 as the initial guess for this example.
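A quick cross-check (ours) brackets the negative rate and bisects; the displacement law below is our reading of the equation above:

```python
# Cross-check of Example 2: find omega with x(1) = 1.7 ft, g = 32.17 ft/s^2,
# by simple bisection on a bracketing interval (our own check; the
# displacement law is our reading of the equation in the text).
import math

g, t, x_target = 32.17, 1.0, 1.7

def f(w):
    return (-g / (2 * w * w)
            * ((math.exp(w * t) - math.exp(-w * t)) / 2 - math.sin(w * t))
            - x_target)

a, b = -0.5, -0.1              # f changes sign on this interval
fa = f(a)
for _ in range(100):
    m = (a + b) / 2
    fm = f(m)
    if fa * fm <= 0:
        b = m                  # root lies in [a, m]
    else:
        a, fa = m, fm          # root lies in [m, b]
omega = (a + b) / 2
```

The bisection converges to ω ≈ -0.3170617…, in agreement with the 15-digit value quoted above.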

A comparative analysis of iterative methods is presented in Fig. 4, where Example 2 is solved using identical stopping criteria and initial guesses for each method. The results illustrate the superior performance of newly derived iterative methods, which demonstrate accelerated convergence and enhanced accuracy compared to existing methods. Furthermore, the CH3, CH4, and SH methods exhibit divergence for this problem, failing to achieve the prescribed accuracy for the root. Conversely, the novel methods display consistent and robust convergence, accurately approximating the root.

The residual fall associated with each method for Example 2 is illustrated in Fig. 5, allowing for a comprehensive comparison of the methods. Upon examination, it is evident that the newly derived methods exhibit a significantly faster rate of residual fall compared to existing methods. This indicates that the new approaches converge more rapidly to the solution, showcasing their enhanced efficiency and accuracy.

Fig. 5.


Log of residue of Example 2.

The residual fall related to all methods for Example 2 can be viewed in Fig. 6; again, the newly derived methods exhibit a faster fall. The choice of numerical method for solving nonlinear equations plays a crucial role in achieving accurate and efficient solutions. While single-step methods have their strengths, multi-step methods offer distinct advantages in handling highly nonlinear equations, including improved convergence, increased accuracy and robustness.

Fig. 6.


Log of residue of Example 2.

The efficiency index combines the order of convergence and the number of function evaluations per iteration: E.I. = p^{1/m}, where p is the order of convergence of the method and m is the number of functional evaluations per iteration [32]. A higher order of convergence combined with fewer function evaluations leads to an improved efficiency index and a more optimal iterative method. However, researchers must sometimes address serious limitations of existing methods, as such methods may fail to obtain solutions for certain classes of nonlinear problems; in that case, the efficiency index may be compromised. In this article, drawbacks of the Newton method and Newton-type methods are overcome. This can be observed in the numerical calculations in the tables, which indicate that some methods diverge: the methods given in this article converge to the exact solution, while methods such as CH3, CH4 and SH diverge. Table 3 details the efficiency index of the newly derived methods.

Table 3.

(Efficiency index of suggested methods).

Methods Order of Convergence Functional Evaluations Efficiency Index
Algorithm 2.4 2 2 1.4142
Algorithm 2.5 3 5 1.2457
Algorithm 2.6 4 8 1.1892
Algorithm 2.7 2 3 1.2600
Algorithm 2.8 3 6 1.2009
Algorithm 2.9 4 9 1.1665
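The table entries follow directly from E.I. = p^{1/m}; a short check (our own):

```python
# Efficiency index E.I. = p**(1/m): order p and function evaluations m
# per iteration, with the (p, m) pairs taken from Table 3.
pairs = {"2.4": (2, 2), "2.5": (3, 5), "2.6": (4, 8),
         "2.7": (2, 3), "2.8": (3, 6), "2.9": (4, 9)}
ei = {alg: p ** (1.0 / m) for alg, (p, m) in pairs.items()}
```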

Fractal analysis of the proposed algorithms

Fractal geometry offers an insightful way to visualize the dynamics of iterative methods for solving nonlinear equations. When applied to complex-valued initial guesses, iterative schemes reveal the basins of attraction corresponding to different roots of the nonlinear equation. The boundary regions between basins often display complex fractal structures. In this section, we explore the fractal patterns generated using Algorithms 2.4 to 2.9 on a range of nonlinear functions.

All basin-of-attraction fractals presented in this work were generated using MATLAB (R2021a) for various test polynomials. The complex plane was discretized over the square domain Re(z), Im(z) ∈ [-2, 2], using a uniform 300 × 300 grid of initial points. For each initial value z_0, the corresponding iterative scheme was applied with a relaxation parameter ω = 0.5 and a maximum of 50 iterations. At each iteration, the stopping criterion was based on the residual condition |f(z_k)| < 10^{-6}, while iterations were terminated prematurely if the denominator of the update formula satisfied |denominator| < 10^{-12}, to avoid numerical instability. After termination, each initial point was assigned to the root with minimum distance among the known exact roots. Points satisfying |f(z)| < 10^{-4} were classified as convergent; otherwise, they were treated as non-convergent. Distinct hues were assigned to different roots, and the color intensity was scaled according to the number of iterations required to reach convergence, with brighter shades indicating faster convergence. Non-convergent points were displayed in a uniform dark tone. Circular contours were additionally drawn around the true roots to highlight local convergence behavior. The same domain, grid resolution, stopping criteria, and color-mapping strategy were used across all algorithms and test functions to ensure a fair and reproducible comparison.
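The pipeline described above can be reproduced in a few lines. The sketch below is our own (in Python rather than MATLAB, with a reduced grid, and with Newton's method on f(z) = z^2 - 1 standing in for the proposed schemes); it follows the same steps: iterate each grid point, guard the denominator, then label each point by its nearest root and record the iteration count:

```python
# Minimal basin-of-attraction computation for Newton's method on
# f(z) = z**2 - 1 over [-2, 2]^2 (our own sketch; grid size and
# parameter names are not from the paper).
def basins(n=81, max_iter=50, tol=1e-6):
    xs = [-2 + 4 * i / (n - 1) for i in range(n)]
    roots = [1 + 0j, -1 + 0j]
    labels, iters = {}, {}
    for x in xs:
        for y in xs:
            z = complex(x, y)
            count = max_iter            # max_iter marks "did not converge"
            for k in range(max_iter):
                dz = 2 * z
                if abs(dz) < 1e-12:     # guard against a tiny denominator
                    break
                z = z - (z * z - 1) / dz
                if abs(z * z - 1) < tol:
                    count = k
                    break
            # label by the nearest exact root; count would scale the color
            labels[(x, y)] = min(range(len(roots)),
                                 key=lambda r: abs(z - roots[r]))
            iters[(x, y)] = count
    return labels, iters
```

Seeds with positive real part are attracted to z = 1 and those with negative real part to z = -1; mapping labels to hues and counts to brightness reproduces plots of the kind shown in Figs. 7–12.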

The following nonlinear equations were selected to analyze and compare the fractal convergence behavior of the proposed algorithms:

These test functions were selected to represent polynomial equations of increasing degree, which allows the proposed methods to be examined in the presence of multiple real and complex roots. It also allows their convergence behaviour and basin structures to be assessed as the algebraic complexity of the problem increases.

Example 3

To investigate the convergence behaviour of the proposed iterative methods, we have generated fractal plots for the quadratic equation f_1(z) = z^2 - 1 using Algorithms 2.4 to 2.9, presented in Fig. 7, Fig. 8, Fig. 9, Fig. 10, Fig. 11, Fig. 12. These fractals illustrate the basins of attraction corresponding to the two simple roots z = +1, -1 in the complex plane. The colour scheme represents which root a given initial point converges to, while the brightness reflects the speed of convergence. These visualizations offer a clear and intuitive comparison of the convergence efficiency, stability, and sensitivity of the algorithms. As shown in Fig. 7, Algorithm 2.4 produces two well-defined basins separated by a curved fractal boundary. The convergence is relatively fast near the true roots, as indicated by the lighter colour intensity around z = 1 and z = -1. However, the boundary between the basins exhibits noticeable irregularity, suggesting sensitivity to initial guesses in this region. Darker pixels concentrated along the boundary indicate slower convergence and higher iteration counts. Fig. 8 demonstrates that Algorithm 2.5 slightly improves the basin geometry compared to Algorithm 2.4. The basins become smoother near the roots, and the region of slow convergence is reduced. This behaviour reflects the enhanced corrective step incorporated in the algorithm: it stabilizes convergence for a wider set of initial points while preserving the overall basin structure. Algorithm 2.6 (Fig. 9) further refines the basin boundaries. The attraction regions corresponding to each root expand, and the fractal boundary becomes less irregular. The colour intensity distribution suggests a reduction in the average number of iterations required for convergence, indicating improved efficiency over Algorithms 2.4 and 2.5.

Fig. 7. Fractal for Algorithm 2.4.

Fig. 8. Fractal for Algorithm 2.5.

Fig. 9. Fractal for Algorithm 2.6.

Fig. 10. Fractal for Algorithm 2.7.

Fig. 11. Fractal for Algorithm 2.8.

Fig. 12. Fractal for Algorithm 2.9.

Fig. 10 illustrates that Algorithm 2.7 introduces more pronounced structural changes near the basin boundary. Although the main basins remain dominant, small localized regions of slow convergence appear around the separating curve. These features reflect the influence of higher-order derivative information, which enhances convergence near the roots but may increase sensitivity along critical boundary regions. The fractal representations in the last two plots of this group, Fig. 11 and Fig. 12, reveal a significant improvement in convergence behaviour. In particular, Algorithm 2.9 exhibits a nearly vertical and sharply defined boundary separating the two basins. The basins themselves are more uniform in colour, indicating faster and more consistent convergence across the complex plane. The presence of small symmetric structures near the imaginary axis reflects the high-order corrective steps, but these do not compromise overall stability. Overall, the fractal analysis clearly demonstrates a progressive enhancement in convergence performance from Algorithm 2.4 to Algorithm 2.9. While all methods successfully converge to the correct roots, the higher-order algorithms, especially Algorithms 2.8 and 2.9, exhibit:

  • Smoother and more regular basin boundaries,

  • Reduced regions of slow convergence,

  • Greater robustness with respect to the choice of initial guesses.

These observations confirm that the proposed multi-step and higher-order schemes provide superior convergence characteristics even for simple nonlinear problems, thereby justifying their application to more complex equations.

Example 4

To further examine the convergence behaviour of the proposed iterative schemes, fractal plots are generated for the cubic polynomial f2(z) = z^3 - 1 using Algorithms 2.4 to 2.9, as illustrated in Fig. 13, Fig. 14, Fig. 15, Fig. 16, Fig. 17, Fig. 18. This equation has three distinct roots given by z1 = 1, z2 = -1/2 + (sqrt(3)/2)i, and z3 = -1/2 - (sqrt(3)/2)i. Consequently, three main basins of attraction appear in the complex plane, each represented by a distinct colour. The brightness of each colour reflects the number of iterations required for convergence, where lighter shades correspond to faster convergence. These fractal patterns provide clear insight into the stability, convergence speed, and sensitivity of the proposed algorithms with respect to initial guesses.
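The three roots quoted above are the cube roots of unity. As a quick self-contained check (using NumPy's generic `roots` routine, which is not part of the authors' algorithms), they can be confirmed numerically:

```python
import numpy as np

# The cube roots of unity for f2(z) = z^3 - 1, written explicitly.
expected = np.array([1.0,
                     -0.5 + (np.sqrt(3) / 2) * 1j,
                     -0.5 - (np.sqrt(3) / 2) * 1j])

# Recover the roots numerically from the coefficients of z^3 - 1.
computed = np.roots([1, 0, 0, -1])

# Same three roots, up to ordering and floating-point error.
assert np.allclose(np.sort_complex(expected), np.sort_complex(computed))
```

Each basin in Fig. 13 to Fig. 18 corresponds to one of these three values.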

Fig. 13. Fractal for Algorithm 2.4.

Fig. 14. Fractal for Algorithm 2.5.

Fig. 15. Fractal for Algorithm 2.6.

Fig. 16. Fractal for Algorithm 2.7.

Fig. 17. Fractal for Algorithm 2.8.

Fig. 18. Fractal for Algorithm 2.9.

As observed in Fig. 13, Algorithm 2.4 produces three clearly distinguishable basins of attraction corresponding to the three roots of the cubic polynomial. Although convergence near the roots is relatively fast, the basin boundaries are highly irregular and exhibit pronounced fractal structures. These complex boundaries indicate strong sensitivity to initial guesses, particularly in regions equidistant from multiple roots. Fig. 14 shows that Algorithm 2.5 slightly improves the basin geometry. The attraction regions become more uniform, and the regions of slow convergence along the basin boundaries are reduced. Nevertheless, noticeable fractal irregularities persist, especially near the intersection points of the three basins, reflecting moderate instability for certain initial values.

Fig. 15, the third figure of this group, demonstrates further refinement of the basin structures. The basins expand more symmetrically, and the fractal boundary regions become thinner. The colour intensity suggests a decrease in the average number of iterations compared to Algorithms 2.4 and 2.5, indicating enhanced convergence efficiency and stability. The next figure, Fig. 16, reveals a distinctive change in convergence behaviour for Algorithm 2.7. While the three main basins remain dominant, larger dark regions appear along the central boundary, indicating slower convergence or delayed stabilization for some initial guesses. This behaviour can be attributed to the increased complexity of the iterative correction, which enhances convergence near roots but introduces sensitivity in transition regions.

As shown in Fig. 17, Algorithm 2.8 produces highly structured and symmetric basin boundaries with petal-like patterns near the separatrix. Despite the increased geometric complexity, the basins themselves are well defined, and convergence within each basin is generally rapid. The refined boundary patterns reflect the higher-order nature of the algorithm and its strong dependence on initial conditions near critical points. Fig. 18 illustrates that Algorithm 2.9 achieves the most uniform and stable convergence behaviour among all tested methods. The basins of attraction are larger and smoother, with sharply defined boundaries and fewer regions of slow convergence. The colour intensity remains relatively uniform within each basin, confirming faster and more consistent convergence across the complex plane. From a comprehensive perspective, the fractal analysis for the cubic polynomial demonstrates a clear progression in convergence performance from Algorithm 2.4 to Algorithm 2.9: while the lower-order methods exhibit highly irregular basin boundaries and greater sensitivity to initial guesses, the higher-order schemes produce smoother, more regular basins with markedly reduced sensitivity to the choice of starting point.

Example 5

Another test function, a 4th-degree polynomial, is employed to assess the performance and robustness of the proposed iterative schemes. Fractal plots are generated for a biquadratic polynomial using Algorithms 2.4 to 2.9, as shown in Fig. 19, Fig. 20, Fig. 21, Fig. 22, Fig. 23, Fig. 24. The biquadratic equation possesses four distinct roots, leading to four principal basins of attraction in the complex plane. The distinct colours and the brightness of each colour carry the same meaning as in the previous plots.

Fig. 19. Fractal for Algorithm 2.4.

Fig. 20. Fractal for Algorithm 2.5.

Fig. 21. Fractal for Algorithm 2.6.

Fig. 22. Fractal for Algorithm 2.7.

Fig. 23. Fractal for Algorithm 2.8.

Fig. 24. Fractal for Algorithm 2.9.

The first plot of this group, Fig. 19, illustrates the fractal structure obtained using Algorithm 2.4. The four basins of attraction are clearly identifiable. However, the boundaries separating them are highly irregular and densely interwoven. These intricate fractal boundaries indicate strong sensitivity to initial conditions, particularly in regions where multiple basins intersect. Although convergence near the roots is relatively rapid, the algorithm exhibits slower convergence along the basin boundaries. As shown in Fig. 20, Algorithm 2.5 provides a modest improvement in basin regularity. The attraction regions become more symmetric, and the chaotic boundary zones are slightly reduced compared to Algorithm 2.4. Nevertheless, noticeable oscillatory patterns persist near the separatrix regions, indicating that the method remains sensitive to initial guesses in transition areas. The next figure, Fig. 21, demonstrates that Algorithm 2.6 yields further refinement of the fractal geometry. The basins of attraction expand more uniformly, and the boundary regions become thinner and more structured. The colour intensity suggests a reduction in the average number of iterations required for convergence, reflecting improved efficiency and numerical stability.

In Fig. 22, Algorithm 2.7 exhibits a distinctive convergence pattern. While the four primary basins remain dominant, larger dark regions appear near the central intersection of the basins, indicating slower convergence or delayed stabilization for certain initial points. This behavior highlights a trade-off between rapid convergence near the roots and increased sensitivity in regions where multiple attraction domains compete. Fig. 23 reveals highly symmetric and elaborate fractal structures produced by Algorithm 2.8. The basin boundaries display petal-like patterns and rotational symmetry, characteristic of higher-order iterative schemes. Despite the increased geometric complexity, convergence within each basin remains fast, and the basins are well separated, demonstrating strong local convergence properties.

In the last plot of this group, Fig. 24, Algorithm 2.9 achieves the most stable and uniform convergence behaviour among all tested methods. The basins of attraction are smoother and more extensive, with sharply defined boundaries and minimal regions of slow convergence. The relatively uniform colour intensity across each basin confirms faster and more consistent convergence throughout the complex plane. On the whole, the fractal analysis for the biquadratic polynomial confirms a clear enhancement in convergence performance from Algorithm 2.4 to Algorithm 2.9. Lower-order schemes exhibit highly complex and fragmented basin boundaries, whereas higher-order methods, particularly Algorithms 2.8 and 2.9, produce smoother, more symmetric basins with reduced sensitivity to initial guesses. These observations demonstrate that the proposed higher-order algorithms retain their effectiveness even for equations with multiple interacting roots, reinforcing their reliability and robustness for solving nonlinear problems of increased algebraic complexity.
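The qualitative comparisons drawn from the fractal plots (smoother boundaries, fewer slow-convergence regions, faster convergence) can also be quantified. Assuming a basin computation that yields, for each grid point, the index of the root reached (`root_idx`, with -1 marking non-convergence) and the iteration count (`iters`), names chosen here purely for illustration, a few summary statistics capture the same trends:

```python
import numpy as np

def basin_stats(root_idx, iters, max_iter=50):
    """Summary numbers behind the qualitative comparison: mean
    iteration count (speed), non-converged fraction (robustness),
    and the fraction of boundary pixels (basin irregularity)."""
    converged = root_idx >= 0
    mean_iters = iters[converged].mean() if converged.any() else max_iter
    # A pixel lies on a basin boundary if a right or lower neighbour
    # converges to a different root.
    b = np.zeros_like(root_idx, dtype=bool)
    b[:-1, :] |= root_idx[:-1, :] != root_idx[1:, :]
    b[:, :-1] |= root_idx[:, :-1] != root_idx[:, 1:]
    return {"mean_iters": float(mean_iters),
            "diverged_frac": float((~converged).mean()),
            "boundary_frac": float(b.mean())}
```

Under this measure, the brighter and smoother basins attributed above to Algorithms 2.8 and 2.9 would correspond to lower `mean_iters` and `boundary_frac` values than those of Algorithms 2.4 and 2.5.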

Conclusion

In this study, we have proposed novel quadrature-based iterative methods for the efficient computation of simple roots of nonlinear equations. The proposed schemes attain high-order convergence and demonstrate improved stability and convergence efficiency compared with classical methods, as evidenced by the fractal basin-of-attraction analysis. In addition to their numerical performance, these methods offer valuable insight into the dynamical behaviour of iterative processes and provide a robust and reliable framework with potential applications in mathematics, physics, engineering, and other applied sciences. This study also suggests several interesting lines of future investigation. Naturally, one would like to extend these approaches from single equations to systems of nonlinear equations where the multiplicity is unknown in advance. It would also be of interest to construct optimally efficient higher-order methods that combine the best aspects of the current approach. The flexibility and stability of the methods could be further improved by incorporating fractional calculus or adaptive error-control mechanisms, respectively.

Limitations

Like all iterative methods, the implementations of Algorithms 2.4, 2.5, 2.6, 2.7, 2.8 and 2.9 are subject to some limitations:

  • Smoothness: The nonlinear function should be continuous and differentiable near the root.

  • Numerical stability: If the denominator of the corrector function approaches zero, the method may fail to converge.

  • Function behavior: Rapid changes or oscillations in the function can hinder convergence of numerical methods.
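The numerical-stability caveat above (the denominator of the corrector approaching zero) is straightforward to guard against in practice. A minimal sketch, using the classical Newton corrector rather than Algorithms 2.4 to 2.9, whose formulas appear earlier in the paper:

```python
def safe_newton(f, df, x0, tol=1e-10, max_iter=100, dmin=1e-12):
    """Newton iteration with the safeguards suggested by the
    limitations above: abort when the derivative (the corrector's
    denominator) is nearly zero, and cap the iteration count."""
    x = x0
    for _ in range(max_iter):
        d = df(x)
        if abs(d) < dmin:                  # near-singular denominator
            raise ZeroDivisionError("derivative ~ 0 at x = %r" % x)
        step = f(x) / d
        x -= step
        if abs(step) < tol:                # step size below tolerance
            return x
    raise RuntimeError("no convergence within max_iter iterations")

# Smooth function with a simple root at x = 1; x0 = 0 would trip the guard.
root = safe_newton(lambda x: x**2 - 1.0, lambda x: 2.0 * x, 1.5)
```

The same two guards (a denominator threshold and an iteration cap) apply unchanged to any of the proposed correctors.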

Ethics statements

Not Applicable.

CRediT authorship contribution statement

Farooq Ahmed Shah: Conceptualization, Methodology, Supervision. Syeda Rameesha Hamdani: Writing – original draft, Investigation, Validation. Fikadu Tesgera Tolasa: Investigation, Validation. Iftikhar Haider: Methodology, Writing – review & editing.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Footnotes

Related research article: R. Alharbi, M. I. Faisal, F. A. Shah, M. Waseem, R. Ullah and S. Sherbaz, Higher Order Numerical Approaches for Nonlinear Equations by Decomposition Technique, IEEE Access, 7 (2019) 44329–44337. doi: 10.1109/ACCESS.2019.2906470

Supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.mex.2026.103835.

Appendix. Supplementary materials

mmc1.pdf (1.7MB, pdf)

Data availability

No data was used for the research described in the article.



