Abstract
In this paper, an efficient orthogonal neural network (ONN) approach is introduced to solve higher-order neutral delay differential equations (NDDEs) with variable coefficients and multiple delays. The method is implemented by replacing the hidden layer of a feed-forward neural network with an orthogonal polynomial-based functional expansion block, and the corresponding weights of the network are obtained using an extreme learning machine (ELM) approach. Starting with simple delay differential equations (DDEs), the method is extended to NDDEs and systems of NDDEs. Consistency and convergence are analysed, and it is seen that the method can produce a uniform closed-form solution with an error of order $\mathcal{O}(1/n!)$, where n is the number of neurons. The developed neural network method is validated on various types of example problems (DDEs, NDDEs, and systems of NDDEs) with four different types of special orthogonal polynomials.
Subject terms: Applied mathematics, Computational science, Information technology
Introduction
Delay differential equations (DDEs) play a crucial role in epidemiology, population growth, and many other mathematical modeling problems. In DDEs, the dependent variable depends not only on its current state but also on a specific past state. A DDE in which time delays appear in the state derivative is called a neutral delay differential equation (NDDE). Delay terms are classified into three types: discrete, continuous, and proportional. In this paper, we focus on proportional DDEs and NDDEs. One famous example of a proportional delay differential equation is the pantograph differential equation, which was first introduced in1.
Generally, the exact solution of a delay differential equation is complicated to find, and due to the model’s complexity, many DDEs do not have an exact solution. Various numerical schemes have been developed over the years to approximate the solutions of delay differential equations. Several articles2–9 illustrate exact and numerical methods for the approximate solution of DDEs and NDDEs.
Artificial neural networks (ANNs) have been utilised to produce approximate solutions of differential equations for the past 22 years. A neural network approach for several ordinary and partial differential equations was first proposed by Lagaris et al. in10. The approximate solution delivered by an artificial neural network has a variety of advantages: (i) the derived approximation of the solution is in closed analytic form; (ii) the generalization ability of the approximation is excellent; (iii) discretization of derivatives is not required. Many articles on approximate artificial neural network solutions to different differential equations are available in the literature11–20. As far as we know, studies on obtaining approximate solutions to delay differential equations using artificial neural networks are limited, and very little literature is available on the topic. J. Fang et al. solved first-order delay differential equations with a single delay using an ANN21. In22, Hou et al. obtained approximate solutions of proportional delay differential equations using an ANN. All these artificial neural network approaches suffer from common problems: (1) the algorithms rely on time-consuming and therefore computationally expensive numerical optimization, and (2) they depend entirely on a trial solution, which is difficult to construct for higher-dimensional problems. Recently, in23, Manoj and Shagun obtained approximate solutions of differential equations using an optimization-free neural network approach in which the network weights are trained using the ELM algorithm24. In25, the authors solved the first-order pantograph equation using the optimization-free ANN approach. Linear first-order delay differential-algebraic equations have been solved using a Legendre neural network in26.
This work presents an orthogonal neural network with an extreme learning machine algorithm (ONN-ELM) to obtain approximate solutions for higher-order delay differential equations, neutral delay differential equations, and systems with multiple delays and variable coefficients. The ONN model is a particular case of the functional link neural network (FLNN)12,27–29. It has the advantage of fast and very accurate learning. The entire procedure is much quicker than a traditional neural network because it removes the high-cost iteration procedure and trains the network weights using the Moore-Penrose generalized inverse. The following are the benefits of the proposed approach:
Since it is a single hidden layer neural network, we only need to train the output layer weights; the input layer weights are selected randomly and kept fixed.
We use an unsupervised extreme learning machine algorithm to train the output weights; no optimization technique is used in this procedure.
It is simple to implement, accurate compared to other numerical schemes mentioned in the literature, and runs quickly.
This work considers four different orthogonal polynomial-based neural networks: (i) Legendre neural network, (ii) Hermite neural network, (iii) Laguerre neural network, and (iv) Chebyshev neural network with ELM for solving DDEs, NDDEs, and systems of NDDEs with multiple delays and variable coefficients. The interest is to find which orthogonal neural network among these four produces the most accurate solution.
The layout of this paper is as follows. In “Preliminaries” section, we present some definitions and properties of orthogonal polynomials and a description of the considered problems. In “Orthogonal neural network” section, we describe the architecture of the orthogonal neural network (ONN) with the extreme learning machine (ELM) algorithm. “Error analysis” section discusses the convergence and error analysis. The methodology of the proposed method is presented in “Methodology” section. Various numerical illustrations are presented in “Numerical illustrations” section, and a comparative study is given in “Comparative analysis” section.
Preliminaries
In this section, we first introduce basic definitions and some properties of the orthogonal polynomials. Throughout the paper, we will use $\phi_n(x)$ to represent the orthogonal polynomial of order n.
Orthogonal polynomial
Definition 1
The orthogonal polynomials are a special class of polynomials defined on [a, b] that follow an orthogonality relation
$$\int_a^b \phi_m(x)\,\phi_n(x)\,g(x)\,dx = c_n\,\delta_{mn},$$
where $\delta_{mn}$ is the Kronecker delta, g(x) is a weight function, and $c_n$ is a nonzero constant.
Remark
If the weight function is $g(x) = 1$ on $[-1, 1]$, then the orthogonal polynomial is called the Legendre polynomial.
If the weight function is $g(x) = \frac{1}{\sqrt{1-x^2}}$ on $[-1, 1]$, then the orthogonal polynomial is called the Chebyshev polynomial of the first kind.
If the weight function is $g(x) = e^{-x^2}$ on $(-\infty, \infty)$, then the orthogonal polynomial is called the Hermite polynomial.
If the weight function is $g(x) = e^{-x}$ on $[0, \infty)$, then the orthogonal polynomial is called the Laguerre polynomial.
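All four families are available in SciPy's special module, which is convenient for building the expansion blocks used later; a minimal sketch, assuming SciPy is installed:

```python
import numpy as np
from scipy.special import eval_chebyt, eval_hermite, eval_laguerre, eval_legendre

x = np.linspace(-1.0, 1.0, 5)
n = 3  # polynomial order

# Evaluate the four orthogonal families used as ONN activation functions.
print("Legendre  P_3(x):", eval_legendre(n, x))
print("Chebyshev T_3(x):", eval_chebyt(n, x))
print("Hermite   H_3(x):", eval_hermite(n, x))  # physicists' convention
print("Laguerre  L_3(x):", eval_laguerre(n, x))
```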
Properties of orthogonal polynomials
The following are some of the remarkable properties of a set of orthogonal polynomials:
Each polynomial $\phi_n$ in a set of orthogonal polynomials $\{\phi_0, \phi_1, \phi_2, \dots\}$ is orthogonal to any polynomial of degree less than n.
Any set of orthogonal polynomials has a recurrence formula that connects any three consecutive polynomials in the sequence, i.e., the relation $\phi_{n+1}(x) = (a_n x + b_n)\,\phi_n(x) - c_n\,\phi_{n-1}(x)$ exists, with constants $a_n$, $b_n$, $c_n$ depending on n.
The zeroes of orthogonal polynomials are real numbers.
There is always exactly one zero of the orthogonal polynomial $\phi_n$ between two consecutive zeroes of $\phi_{n+1}$.
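The recurrence and interlacing properties can be verified numerically; a small sketch using NumPy's Chebyshev class (our illustration, not part of the paper's method), where the first-kind recurrence has $a_n = 2$, $b_n = 0$, $c_n = 1$:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# Verify the three-term recurrence T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x)
# for Chebyshev polynomials of the first kind.
x = np.linspace(-1.0, 1.0, 7)
T = [Chebyshev.basis(k) for k in range(6)]
for n in range(1, 5):
    assert np.allclose(T[n + 1](x), 2 * x * T[n](x) - T[n - 1](x))

# Verify interlacing: between consecutive zeros of T_5 lies one zero of T_4.
r5 = np.sort(T[5].roots().real)
r4 = np.sort(T[4].roots().real)
assert all(r5[i] < r4[i] < r5[i + 1] for i in range(len(r4)))
print("recurrence and interlacing verified")
```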
Moore-Penrose generalized inverse
In this section, the Moore-Penrose generalized inverse is introduced.
There can be problems in obtaining the solution of a general linear system $A\mathbf{x} = \mathbf{y}$, where A may be a singular matrix or may not even be square. The Moore-Penrose generalized inverse can be used to overcome such difficulties. The term generalized inverse is sometimes used as a synonym of pseudoinverse. More precisely, we define the Moore-Penrose generalized inverse as follows:
Definition 2
30 A matrix B of order $n \times m$ is the Moore-Penrose generalized inverse of a matrix A of order $m \times n$ if the following hold:
$$ABA = A, \qquad BAB = B, \qquad (AB)^T = AB, \qquad (BA)^T = BA,$$
where $A^T$ denotes the transpose of matrix A. The Moore-Penrose generalized inverse of matrix A is denoted by $A^{\dagger}$.
Definition 3
A vector $\mathbf{x}_0 \in \mathbb{R}^n$ is said to be a minimum norm least-squares solution of a general linear system $A\mathbf{x} = \mathbf{y}$ if
$$\|A\mathbf{x}_0 - \mathbf{y}\| \le \|A\mathbf{x} - \mathbf{y}\| \;\; \text{for all } \mathbf{x} \in \mathbb{R}^n, \quad \text{and} \quad \|\mathbf{x}_0\| \le \|\mathbf{x}\| \;\; \text{for every least-squares solution } \mathbf{x},$$
where $\|\cdot\|$ is the Euclidean norm.
In other words, if a solution has the smallest norm among all the least-squares solutions, it is considered to be a minimum norm least-squares solution of the general linear system $A\mathbf{x} = \mathbf{y}$.
Theorem 1
30 Let B be a matrix such that, for any $\mathbf{y}$, $B\mathbf{y}$ is a minimum norm least-squares solution of the linear system $A\mathbf{x} = \mathbf{y}$. Then it is necessary and sufficient that $B = A^{\dagger}$, the Moore-Penrose generalized inverse of matrix A.
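In practice, Theorem 1 is what numerical libraries implement; the following sketch (our illustration, not the paper's code) checks that numpy.linalg.pinv reproduces the least-squares solution of a rectangular system:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))  # rectangular coefficient matrix
y = rng.standard_normal(8)

x_pinv = np.linalg.pinv(A) @ y                   # Moore-Penrose solution
x_lstsq, *_ = np.linalg.lstsq(A, y, rcond=None)  # library least-squares solver

# Both give the (minimum norm) least-squares solution, as Theorem 1 asserts.
assert np.allclose(x_pinv, x_lstsq)
print("x =", x_pinv)
```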
Problem definition
In this subsection, we present the general forms of the pantograph equation, the higher-order delay differential equation, the higher-order neutral delay differential equation, and the system of higher-order delay differential equations with variable coefficients and multiple delays.
The generalized Pantograph equation
The pantograph-type equation arises as a mathematical model in the study of the wave motion of the overhead supply line to an electric locomotive. The following equation gives the generalized form of a pantograph-type equation with multiple proportional delays:
$$z^{(m)}(t) = a(t)\,z(t) + \sum_{k=0}^{m-1}\sum_{j=1}^{r} b_{jk}(t)\,z^{(k)}(q_{jk}\,t) + g(t), \qquad 0 \le t \le T, \tag{1}$$
with initial conditions
$$z^{(k)}(0) = z_k, \qquad k = 0, 1, \dots, m-1, \tag{2}$$
where g(t), a(t), and $b_{jk}(t)$ are continuous functions on [0, T], $0 < q_{jk} < 1$ for each j and k, and the $z_k$ are prescribed constants.
Higher order DDEs and NDDEs
- Consider the general form of higher-order DDEs with multiple delays:
$$z^{(m)}(t) = \sum_{k=0}^{m-1}\sum_{j=1}^{r} a_{jk}(t)\,z^{(k)}(\alpha_{jk}\,t) + g(t), \qquad t \in [0, T], \tag{3}$$
with initial conditions
$$z^{(k)}(0) = z_k, \qquad k = 0, 1, \dots, m-1, \tag{4}$$
where $0 < \alpha_{jk} < 1$ for $j = 1, \dots, r$, $k = 0, \dots, m-1$, and $z^{(k)}(t)$ denotes the kth derivative of z(t).
- Consider the general form of higher-order NDDEs with multiple delays:
$$z^{(m)}(t) = \sum_{j=1}^{r} b_j(t)\,z^{(m)}(\beta_j\,t) + \sum_{k=0}^{m-1}\sum_{j=1}^{r} a_{jk}(t)\,z^{(k)}(\alpha_{jk}\,t) + g(t), \qquad t \in [0, T], \tag{5}$$
with initial conditions
$$z^{(k)}(0) = z_k, \qquad k = 0, 1, \dots, m-1, \tag{6}$$
where all $\alpha_{jk}, \beta_j \in (0, 1)$ for $j = 1, \dots, r$, $k = 0, \dots, m-1$, and $z^{(m)}(t)$ denotes the mth derivative of z(t).
Higher order system of DDE
Consider the general form of the higher-order coupled neutral delay differential equations with multiple delays:
$$z_1^{(m)}(t) = \sum_{j=1}^{r} b_{1j}(t)\,z_1^{(m)}(\beta_{1j} t) + \sum_{k=0}^{m-1}\sum_{j=1}^{r}\Big[a_{1jk}(t)\,z_1^{(k)}(\alpha_{1jk} t) + d_{1jk}(t)\,z_2^{(k)}(\delta_{1jk} t)\Big] + g_1(t), \tag{7}$$
$$z_2^{(m)}(t) = \sum_{j=1}^{r} b_{2j}(t)\,z_2^{(m)}(\beta_{2j} t) + \sum_{k=0}^{m-1}\sum_{j=1}^{r}\Big[a_{2jk}(t)\,z_2^{(k)}(\alpha_{2jk} t) + d_{2jk}(t)\,z_1^{(k)}(\delta_{2jk} t)\Big] + g_2(t), \tag{8}$$
where $t \in [0, T]$ and all $\alpha_{ijk}, \beta_{ij}, \delta_{ijk} \in (0, 1)$ for $i = 1, 2$, $j = 1, \dots, r$, $k = 0, \dots, m-1$, with continuous coefficient functions and initial conditions prescribed as in Eq. (6).
Orthogonal neural network
In this section, we introduce the structure of a single-layered orthogonal neural network (ONN) model with an extreme learning machine (ELM) algorithm for training the network weights.
Structure of orthogonal neural network (ONN)
An orthogonal neural network (ONN) is a single-layered feed-forward neural network that consists of one input neuron t and one output neuron N(t); the hidden layer is replaced by an orthogonal functional expansion block. The architecture of an orthogonal neural network is depicted in Fig. 1.
Figure 1.
The structure of orthogonal neural network.
Consider a 1-dimensional input neuron t. The enhanced pattern is obtained by the orthogonal functional expansion block as follows:
$$\big[\phi_1(w_1 t),\ \phi_2(w_2 t),\ \dots,\ \phi_n(w_n t)\big].$$
The output of the orthogonal neural network is
$$N(t) = \sum_{i=1}^{n} c_i\,\phi_i(w_i t),$$
where the $w_i$ are randomly selected fixed weights and the $c_i$ are the weights to be trained.
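A minimal NumPy sketch of this expansion block may help; here we assume Chebyshev activations and, for clarity, unit input weights (the helper name onn_output is our own, not the paper's):

```python
import numpy as np
from scipy.special import eval_chebyt

def onn_output(t, c, w):
    """Network output N(t) = sum_i c_i * T_i(w_i * t), i = 0..n-1.

    The Chebyshev family stands in for any of the four polynomial choices.
    """
    t = np.atleast_1d(t)
    # Enhanced pattern: column i holds T_i(w_i * t) for every sample in t.
    H = np.column_stack([eval_chebyt(i, wi * t) for i, wi in enumerate(w)])
    return H @ c

n = 5
w = np.ones(n)   # fixed input weights, set to 1 here for clarity
c = np.zeros(n)
c[1] = 1.0       # picks out T_1(t) = t, so N(t) should equal t
print(onn_output(np.array([0.0, 0.5, 1.0]), c, w))  # -> [0.  0.5 1. ]
```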
Extreme learning machine (ELM) algorithm
For given sample points $(t_j, y_j)$, $t_j, y_j \in \mathbb{R}$, for $j = 1, \dots, N$, a single-layer feed-forward neural network with n neurons has the following output:
$$N(t_j) = \sum_{i=1}^{n} c_i\,\phi_i(w_i t_j), \qquad j = 1, \dots, N,$$
where $\phi_i$ is the activation function of the i-th neuron in the hidden layer, the $w_i$ are the randomly selected fixed weights between the input layer and the hidden layer, and the $c_i$ are the weights between the hidden layer and the output, which need to be trained.
When the neural network completely approximates the given data, i.e., the output of the neural network and the actual data are equal, the following relation holds:
$$\sum_{i=1}^{n} c_i\,\phi_i(w_i t_j) = y_j, \qquad j = 1, \dots, N. \tag{9}$$
Equation (9) can be written in matrix form as:
$$A\,\mathbf{c} = \mathbf{y}, \tag{10}$$
where the hidden layer output matrix A is defined as follows:
$$A = \begin{bmatrix} \phi_1(w_1 t_1) & \phi_2(w_2 t_1) & \cdots & \phi_n(w_n t_1) \\ \vdots & \vdots & \ddots & \vdots \\ \phi_1(w_1 t_N) & \phi_2(w_2 t_N) & \cdots & \phi_n(w_n t_N) \end{bmatrix}_{N \times n}, \tag{11}$$
and $\mathbf{c} = [c_1, \dots, c_n]^T$, $\mathbf{y} = [y_1, \dots, y_N]^T$.
For the given training points $t_j$ and the weights $w_i$, the matrix A can be calculated, and the weights $\mathbf{c}$ can then be obtained by solving the linear system $A\,\mathbf{c} = \mathbf{y}$.
Theorem 2
The system $A\,\mathbf{c} = \mathbf{y}$ is solvable in the following cases:
If A is a square invertible matrix, then $\mathbf{c} = A^{-1}\mathbf{y}$.
If A is a rectangular matrix, then $\mathbf{c} = A^{\dagger}\mathbf{y}$, and $\mathbf{c}$ is the minimum norm least-squares solution of $A\,\mathbf{c} = \mathbf{y}$. Here $A^{\dagger}$ is the pseudoinverse of A.
If $A^T A$ is singular or ill-conditioned, then $\mathbf{c} = (A^T A + \lambda I)^{-1} A^T \mathbf{y}$, where $\lambda > 0$ is the regularization coefficient. We can set the value of $\lambda$ according to the specific instance.
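The three cases translate directly into NumPy; a hedged sketch (the matrices and the value of lam below are illustrative choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.standard_normal(10)

# Case 1: square invertible matrix -> ordinary inverse.
B = rng.standard_normal((6, 6))
c1 = np.linalg.solve(B, y[:6])

# Case 2: rectangular matrix -> Moore-Penrose pseudoinverse.
A = rng.standard_normal((10, 6))
c2 = np.linalg.pinv(A) @ y

# Case 3: ill-conditioned normal equations -> ridge regularization.
lam = 1e-8  # regularization coefficient, chosen per problem instance
c3 = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

print("||c2 - c3|| =", np.linalg.norm(c2 - c3))  # tiny for small lam
```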
Error analysis
This section will discuss the convergence result and error analysis of the ONN-ELM method for solving the delay and neutral delay differential equations.
Theorem 3
24 Let the single-layer feed-forward orthogonal neural network $N(t) = \sum_{i=1}^{n} c_i\,\phi_i(w_i t)$ be an approximate solution of a one-dimensional neutral delay differential equation for arbitrary distinct sample points $(t_j, y_j)$, $j = 1, \dots, n$, where the $w_i$ are randomly chosen from any continuous probability distribution; then the orthogonal expansion layer output matrix A is invertible and $\|A\mathbf{c} - \mathbf{y}\| = 0$.
Theorem 4
Let $z \in C^{n}[a, b]$, let $z_n(t) = \sum_{i=1}^{n} c_i\,\phi_i(w_i t)$ be the orthogonal neural network with n neurons in the hidden layer, and let $E_n = \max_{t \in [a,b]} |z(t) - z_n(t)|$ be the absolute error with n hidden neurons; then $E_n \to 0$ as $n \to \infty$.
Proof
The Taylor expansion formula gives us the following expression for z(t) on [a, b]:
$$z(t) = \sum_{k=0}^{n-1} \frac{z^{(k)}(a)}{k!}\,(t-a)^k + \frac{z^{(n)}(\xi)}{n!}\,(t-a)^n, \qquad \xi \in (a, t). \tag{12}$$
Let us define $p_n(t) = \sum_{k=0}^{n-1} \frac{z^{(k)}(a)}{k!}\,(t-a)^k$; then we get
$$|z(t) - p_n(t)| \le \frac{(b-a)^n}{n!}\,\max_{\xi \in [a,b]} |z^{(n)}(\xi)|. \tag{13}$$
Let $L = \operatorname{span}\{\phi_1(w_1 t), \phi_2(w_2 t), \dots, \phi_n(w_n t)\}$ and let $z_n(t)$ be the best approximation of z(t) in L given as $z_n(t) = \sum_{i=1}^{n} c_i\,\phi_i(w_i t)$, where the $c_i$'s are the weights obtained by the ELM algorithm. Since the span of the first n orthogonal polynomials contains all polynomials of degree at most $n-1$, we have $p_n \in L$, and we get
$$\max_{t \in [a,b]} |z(t) - z_n(t)| \le \max_{t \in [a,b]} |z(t) - p(t)| \quad \text{for every } p \in L. \tag{14}$$
In particular, taking $p = p_n$ we have
$$\max_{t \in [a,b]} |z(t) - z_n(t)| \le \max_{t \in [a,b]} |z(t) - p_n(t)|. \tag{15}$$
Thus,
$$E_n \le \frac{M\,(b-a)^n}{n!} \longrightarrow 0 \quad \text{as } n \to \infty, \tag{16}$$
where $M = \max_{t \in [a,b]} |z^{(n)}(t)|$, for $z \in C^{n}[a, b]$.
Moreover, from Eq. (16) we deduce that $E_n = \mathcal{O}\big((b-a)^n / n!\big)$, which is negligibly small for large values of n. This shows that the ONN has high representational ability and can approximate the exact solution with almost no error.
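The factorial-type bound in Eq. (16) can be observed empirically; the sketch below is an illustration under the assumption that the trained network attains near-best polynomial approximation, interpolating $z(t) = e^t$ with polynomials of increasing degree:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# Approximate z(t) = exp(t) on [0, 1] by polynomials of increasing degree and
# watch the maximum error shrink roughly like (b - a)^n / n!, as in Eq. (16).
t = np.linspace(0.0, 1.0, 200)
for n in range(2, 11, 2):
    z_n = Chebyshev.interpolate(np.exp, deg=n - 1, domain=[0, 1])
    err = np.max(np.abs(np.exp(t) - z_n(t)))
    print(f"n = {n:2d} basis functions, max error = {err:.2e}")
```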
Methodology
This section explains the method to obtain an approximate solution of a second-order NDDE using the ONN-ELM algorithm. It can easily be extended to higher-order NDDEs, and the higher-order DDE is a special case of the higher-order NDDE.
Consider the general form of the linear second-order NDDE
$$z''(t) + p(t)\,z'(t) + q(t)\,z(t) + \sum_{j=1}^{r}\Big[u_j(t)\,z''(\beta_j t) + v_j(t)\,z'(\gamma_j t) + s_j(t)\,z(\delta_j t)\Big] = g(t), \qquad t \in [a, b], \tag{17}$$
with initial conditions $z(a) = A_0$ and $z'(a) = A_1$, or boundary conditions $z(a) = A_0$ and $z(b) = B_0$, where $p$, $q$, $u_j$, $v_j$, $s_j$, and g are continuously differentiable functions for $t \in [a, b]$ and $\beta_j, \gamma_j, \delta_j \in (0, 1)$.
Using ONN-ELM with n neurons, an approximate solution of Eq. (17) is obtained in the form:
$$z_n(t) = \sum_{i=1}^{n} c_i\,\phi_i(w_i t), \tag{18}$$
where the $c_i$'s are the output weights that need to be trained, $\phi_i$ is the i-th orthogonal polynomial, and the $w_i$'s are the randomly selected fixed input weights.
Since the approximate solution obtained by the ONN-ELM algorithm is a linear combination of the orthogonal polynomials, it is infinitely differentiable and we have
$$z_n'(t) = \sum_{i=1}^{n} c_i\,w_i\,\phi_i'(w_i t), \tag{19}$$
$$z_n''(t) = \sum_{i=1}^{n} c_i\,w_i^2\,\phi_i''(w_i t), \tag{20}$$
$$z_n(\delta_j t) = \sum_{i=1}^{n} c_i\,\phi_i(w_i \delta_j t), \tag{21}$$
$$z_n'(\gamma_j t) = \sum_{i=1}^{n} c_i\,w_i\,\phi_i'(w_i \gamma_j t), \tag{22}$$
$$z_n''(\beta_j t) = \sum_{i=1}^{n} c_i\,w_i^2\,\phi_i''(w_i \beta_j t). \tag{23}$$
Substituting Eqs. (18)–(23) into the second-order neutral delay differential equation (17), we have
$$\sum_{i=1}^{n} c_i\Big[w_i^2\,\phi_i''(w_i t) + p(t)\,w_i\,\phi_i'(w_i t) + q(t)\,\phi_i(w_i t) + \sum_{j=1}^{r}\big(u_j(t)\,w_i^2\,\phi_i''(w_i \beta_j t) + v_j(t)\,w_i\,\phi_i'(w_i \gamma_j t) + s_j(t)\,\phi_i(w_i \delta_j t)\big)\Big] = g(t). \tag{24}$$
We can write Eq. (24) as:
$$\sum_{i=1}^{n} c_i\,H_i(t) = g(t), \tag{25}$$
where $H_i(t)$ denotes the bracketed expression in Eq. (24).
Using the discretization of the interval [a, b] as $t_k = a + (k-1)h$ for $k = 1, \dots, N$, with $h = (b-a)/(N-1)$, Eq. (25) is to be satisfied at these discretized points, that is:
$$\sum_{i=1}^{n} c_i\,H_i(t_k) = g(t_k), \qquad k = 1, \dots, N. \tag{26}$$
Equation (26) can be written as a system of equations as:
$$\tilde{A}\,\mathbf{c} = \mathbf{g},$$
where $\tilde{A} = \big[H_i(t_k)\big]_{N \times n}$, $\mathbf{c} = [c_1, \dots, c_n]^T$, and $\mathbf{g} = [g(t_1), \dots, g(t_N)]^T$.
Case 1: Consider Eq. (17) with the initial conditions. Appending to $\tilde{A}$ and $\mathbf{g}$ the rows corresponding to $z_n(a) = A_0$ and $z_n'(a) = A_1$, the following linear system is obtained: $\hat{A}\,\mathbf{c} = \hat{\mathbf{g}}$.
Case 2: Consider Eq. (17) with the boundary conditions. Appending the rows corresponding to $z_n(a) = A_0$ and $z_n(b) = B_0$, the following linear system for the NDDE is obtained: $\hat{A}\,\mathbf{c} = \hat{\mathbf{g}}$.
To calculate the weight vector of the network, we use the extreme learning machine algorithm, that is:
$$\mathbf{c} = \hat{A}^{\dagger}\,\hat{\mathbf{g}}, \tag{27}$$
where $\mathbf{c}$ is the minimum norm least-squares solution of the system $\hat{A}\,\mathbf{c} = \hat{\mathbf{g}}$ in Eq. (27).
Note: Similar methodology can be used for the higher order neutral delay differential equation and the system of higher order neutral delay differential equations.
Steps of solving NDDEs using the ONN-ELM algorithm (a worked sketch follows this list):
- Discretize the domain as $t_k = a + (k-1)h$, $k = 1, \dots, N$, with $h = (b-a)/(N-1)$.
- Construct the approximate solution by using the orthogonal polynomial as an activation function, that is, $z_n(t) = \sum_{i=1}^{n} c_i\,\phi_i(w_i t)$, where the $w_i$ are the randomly generated fixed weights. At the discrete points, substitute the approximate solution and its derivatives into the differential equation and its initial or boundary conditions and obtain the system of equations $\hat{A}\,\mathbf{c} = \hat{\mathbf{g}}$.
- Solve the system of equations by the ELM algorithm and obtain the network weights $\mathbf{c} = \hat{A}^{\dagger}\,\hat{\mathbf{g}}$.
- Substitute the value of $\mathbf{c}$ into $z_n(t)$ and get an approximate solution of the NDDE.
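To make these steps concrete, here is a self-contained sketch of the ONN-ELM procedure on a manufactured first-order pantograph test problem (this example and every name in it are our own illustration, not one of the paper's test cases; for clarity we use a Chebyshev basis with unit input weights):

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# Manufactured test problem (not from the paper):
#   z'(t) = 0.5 * z(t / 2) + g(t),  z(0) = 1,  t in [0, 1],
# with g chosen so that the exact solution is z(t) = exp(-t).
exact = lambda t: np.exp(-t)
g = lambda t: -np.exp(-t) - 0.5 * np.exp(-t / 2)

n = 10                          # number of neurons (basis functions)
tk = np.linspace(0.0, 1.0, 10)  # collocation (training) points

# Orthogonal expansion block: Chebyshev basis mapped onto [0, 1].
phi = [Chebyshev.basis(i, domain=[0, 1]) for i in range(n)]
dphi = [p.deriv() for p in phi]

# Row k enforces z_n'(t_k) - 0.5 * z_n(t_k / 2) = g(t_k), as in Eq. (26).
A = np.array([[dphi[i](t) - 0.5 * phi[i](t / 2) for i in range(n)] for t in tk])
rhs = g(tk)

# Append the initial-condition row z_n(0) = 1 (Case 1 of the methodology).
A = np.vstack([A, [phi[i](0.0) for i in range(n)]])
rhs = np.append(rhs, 1.0)

# ELM training step: c = A_hat^dagger * g_hat, with no iterative optimization.
c = np.linalg.pinv(A) @ rhs

z_n = lambda t: sum(ci * p(t) for ci, p in zip(c, phi))
tt = np.linspace(0.0, 1.0, 101)
print("max absolute error:", np.max(np.abs(z_n(tt) - exact(tt))))
```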
Numerical illustrations
This section considers higher-order delay and neutral delay differential equations with multiple delays and variable coefficients. We also consider systems of delay and neutral delay differential equations. In all the test examples, we use the special orthogonal polynomial-based neural networks, namely the Legendre, Laguerre, Chebyshev, and Hermite neural networks. Further, to show the reliability and power of the presented method, we compare the approximate solutions with the exact solution. All computations are carried out using Python 3.9.7 on an Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz 1.80 GHz and the Windows 10 operating system. We calculate the relative error, which is defined as
$$E_r(t) = \left|\frac{z(t) - z_n(t)}{z(t)}\right|,$$
where z(t) is the exact solution and $z_n(t)$ is the approximate solution obtained by the ONN.
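As a small illustration, the relative error above can be computed with a helper like the following (our own convenience function; it assumes the exact solution is nonzero at the evaluation points):

```python
import numpy as np

def relative_error(z_exact, z_approx):
    """Pointwise relative error |z(t) - z_n(t)| / |z(t)|.

    Assumes the exact solution is nonzero at the evaluation points.
    """
    z_exact = np.asarray(z_exact, dtype=float)
    z_approx = np.asarray(z_approx, dtype=float)
    return np.abs(z_exact - z_approx) / np.abs(z_exact)
```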
Example 6.1
22 Consider the second-order boundary-value proportional delay differential equation with variable coefficients
The exact solution of the given equation is .
We employ four ONNs to obtain the approximate solution of the given second-order DDE with variable coefficients. We choose ten uniformly distributed points in [0, 1]. The relative errors for all ONNs are shown in Fig. 3. Obtained relative errors for different orthogonal neural networks are reported in Table 1, and we compare the approximate solutions with the exact solution in Fig. 2.
Figure 3.
Error graph for different orthogonal neural networks with different numbers of neurons for Example 6.1.
Table 1.
The relative error for Example 6.1 with different orthogonal neural networks.
| t | Legendre neural network | Hermite neural network | Laguerre neural network | Chebyshev neural network |
|---|---|---|---|---|
| 0.1 | 1.56e−08 | 2.26e−08 | 1.39e−08 | 5.61e−08 |
| 0.2 | 6.20e−08 | 5.89e−08 | 6.49e−08 | 4.54e−08 |
| 0.3 | 4.57e−08 | 4.39e−08 | 4.89e−08 | 4.12e−08 |
| 0.4 | 1.07e−08 | 9.71e−09 | 1.40e−08 | 2.09e−08 |
| 0.5 | 1.94e−08 | 1.88e−08 | 2.26e−08 | 4.57e−08 |
| 0.6 | 5.66e−08 | 5.64e−08 | 5.98e−08 | 1.06e−08 |
| 0.7 | 6.84e−08 | 6.86e−08 | 7.17e−08 | 2.73e−08 |
| 0.8 | 5.09e−08 | 5.14e−08 | 5.42e−08 | 1.32e−08 |
| 0.9 | 6.99e−08 | 7.08e−08 | 7.34e−08 | 1.79e−08 |
| 1 | 7.08e−10 | 4.55e−10 | 2.93e−09 | 4.63e−09 |
Figure 2.
Comparison of the exact solution with the obtained approximate solutions of Example 6.1.
Table 1 and Fig. 3 clearly show that the Chebyshev polynomial-based ONN performs best, with a maximum relative error of 5.61e−08. Table 2 shows the comparison of the maximum relative error for Example 6.1 using the Legendre, Laguerre, Hermite, and Chebyshev neural networks with various numbers of neurons (n = 5, 8, and 11) and their respective computational times. Additionally, Table 2 shows that all four neural networks satisfy Theorem 4, and for n ≤ 8, all four orthogonal neural networks show similar accuracy. However, the Chebyshev neural network performs better with n = 11, reaching 3.93e−12.
Table 2.
Comparison of the maximum relative error for Example 6.1 with different numbers of neurons.
| n | Legendre Time(s) | Legendre Error | Laguerre Time(s) | Laguerre Error | Hermite Time(s) | Hermite Error | Chebyshev Time(s) | Chebyshev Error |
|---|---|---|---|---|---|---|---|---|
| 5 | 0.004 | 0.0049 | 0.009 | 0.0049 | 0.002 | 0.0049 | 0.008 | 0.0049 |
| 8 | 0.012 | 7.07e−08 | 0.011 | 6.97e−08 | 0.003 | 7.07e−08 | 0.043 | 7.07e−08 |
| 11 | 0.019 | 1.82e−11 | 0.011 | 7.74e−06 | 0.003 | 6.81e−10 | 0.043 | 3.93e−12 |
Significant values are in bold.
Example 6.2
2 Consider the second-order neutral delay differential equation with multiple delays
where the delays and forcing function are as given in2.
The exact solution of the given equation is .
This equation is solved using the four ONN architectures with ten uniformly distributed training points and with 6, 8, and 9 neurons in the hidden layer. Relative errors for the different ONNs with 6, 8, and 9 neurons are reported in Table 3. Figure 4 shows an error graph of the different orthogonal neural networks, and a comparison of the approximate solutions with the exact solution is shown in Fig. 5.
Table 3.
Comparison of the maximum relative error for Example 6.2 with different numbers of neurons.
| n | Legendre Time(s) | Legendre Error | Laguerre Time(s) | Laguerre Error | Hermite Time(s) | Hermite Error | Chebyshev Time(s) | Chebyshev Error |
|---|---|---|---|---|---|---|---|---|
| 6 | 0.007 | 8.25e−14 | 0.004 | 2.06e−12 | 0.002 | 2.87e−13 | 0.001 | 3.17e−14 |
| 8 | 0.009 | 4.94e−08 | 0.004 | 7.29e−09 | 0.004 | 7.73e−13 | 0.002 | 7.19e−14 |
| 9 | 0.019 | 2.22e−14 | 0.069 | 1.57e−08 | 0.017 | 6.13e−13 | 0.022 | 6.77e−15 |
Significant values are in bold.
Figure 4.
Error graph for different orthogonal neural networks with different numbers of neurons for Example 6.2.
Figure 5.
Comparison of the exact solution with the obtained approximate solutions of Example 6.2.
From Table 4 and Fig. 4 we conclude that for the given second-order neutral delay differential equation, the Chebyshev polynomial-based ONN performs best, with a maximum relative error of 7.19e−14. Additionally, Table 3 shows that all four neural networks satisfy Theorem 4.
Table 4.
The relative error for Example 6.2 with different orthogonal neural networks.
| t | Legendre neural network | Hermite neural network | Laguerre neural network | Chebyshev neural network |
|---|---|---|---|---|
| 0.1 | 4.94e−08 | 7.73e−13 | 7.29e−09 | 7.19e−14 |
| 0.2 | 1.19e−08 | 1.90e−13 | 1.32e−09 | 1.00e−14 |
| 0.3 | 5.05e−09 | 7.61e−14 | 3.83e−10 | 9.25e−15 |
| 0.5 | 1.49e−09 | 1.26e−14 | 8.31e−12 | 5.55e−15 |
| 0.6 | 8.88e−10 | 3.08e−16 | 3.37e−11 | 6.32e−15 |
| 0.7 | 5.20e−10 | 7.81e−15 | 5.12e−11 | 7.13e−15 |
| 0.8 | 2.81e−10 | 1.37e−14 | 5.75e−11 | 8.15e−15 |
| 0.9 | 1.17e−10 | 1.80e−14 | 5.85e−11 | 9.45e−15 |
| 1 | 2.66e−15 | 2.17e−14 | 5.68e−11 | 1.11e−14 |
Example 6.3
2 Consider the second-order neutral delay differential equation with variable coefficients
The exact solution of the given equation is .
To obtain the approximate solution of the given equation, we use four ONNs with ten uniformly distributed training points in [0, 1] and with 8, 9, and 11 neurons in the hidden layer. Relative errors for the different ONNs with different numbers of neurons are reported in Table 6. The exact and approximate solutions are compared in Fig. 7. Figure 6 shows the absolute relative errors of the four special ONNs.
Table 6.
Comparison of the maximum relative error for Example 6.3 with different numbers of neurons.
| n | Legendre Time(s) | Legendre Error | Laguerre Time(s) | Laguerre Error | Hermite Time(s) | Hermite Error | Chebyshev Time(s) | Chebyshev Error |
|---|---|---|---|---|---|---|---|---|
| 8 | 0.007 | 3.50e−09 | 0.004 | 1.52e−10 | 0.002 | 1.52e−13 | 0.001 | 2.29e−15 |
| 9 | 0.009 | 3.07e−13 | 0.004 | 8.80e−11 | 0.004 | 1.69e−05 | 0.002 | 2.29e−15 |
| 11 | 0.019 | 3.07e−13 | 0.069 | 3.16e−11 | 0.017 | 1.80e−05 | 0.022 | 1.32e−15 |
Significant values are in bold.
Figure 7.
Comparison of the exact solution with the obtained approximate solutions of Example 6.3.
Figure 6.
Error graph for different orthogonal neural networks with different numbers of neurons for Example 6.3.
From Table 5 and Fig. 6, we conclude that for the given second-order neutral delay differential equation, the Chebyshev polynomial-based ONN provides the most accurate solution, with a maximum relative error of 2.29e−15. Additionally, Table 6 shows that all four neural networks satisfy Theorem 4.
Table 5.
The relative error for Example 6.3 with different orthogonal neural networks.
| t | Legendre neural network | Hermite neural network | Laguerre neural network | Chebyshev neural network |
|---|---|---|---|---|
| 0.0 | 3.50e−09 | 1.17e−13 | 1.52e−10 | 7.77e−16 |
| 0.1 | 3.46e−09 | 1.14e−13 | 8.90e−11 | 0.0 |
| 0.2 | 3.34e−09 | 1.01e−13 | 4.18e−11 | 1.06e−15 |
| 0.3 | 3.16e−09 | 8.06e−14 | 1.01e−11 | 1.83e−15 |
| 0.4 | 2.94e−09 | 5.28e−14 | 8.50e−12 | 2.29e−15 |
| 0.5 | 2.70e−09 | 1.98e−14 | 1.72e−11 | 2.13e−15 |
| 0.6 | 2.44e−09 | 1.58e−14 | 1.90e−11 | 1.63e−15 |
| 0.7 | 2.18e−09 | 5.27e−14 | 1.66e−11 | 1.04e−15 |
| 0.8 | 1.93e−09 | 8.93e−14 | 1.21e−11 | 2.70e−16 |
| 0.9 | 1.70e−09 | 1.24e−13 | 7.09e−12 | 4.90e−16 |
| 1.0 | 1.50e−09 | 1.52e−13 | 2.35e−12 | 8.88e−16 |
Example 6.4
31 Consider the third-order pantograph equation
The exact solution of the given equation is .
To obtain the approximate solution of the given equation, we use four ONNs with ten uniformly distributed training points in [0, 1] and with 8, 11, and 13 neurons in the hidden layer. Relative errors for the different ONNs with different numbers of neurons are reported in Table 7. The exact and approximate solutions are compared in Fig. 8. Figure 9 shows the maximum relative error of the four special ONNs with different numbers of neurons.
Table 7.
Comparison of the maximum relative error for Example 6.4 with different numbers of neurons.
| n | Legendre Time(s) | Legendre Error | Laguerre Time(s) | Laguerre Error | Hermite Time(s) | Hermite Error | Chebyshev Time(s) | Chebyshev Error |
|---|---|---|---|---|---|---|---|---|
| 8 | 0.007 | 1.11e−05 | 0.004 | 1.11e−05 | 0.002 | 1.11e−05 | 0.001 | 1.11e−05 |
| 11 | 0.019 | 3.86e−08 | 0.004 | 6.56e−08 | 0.004 | 6.06e−08 | 0.004 | 3.86e−08 |
| 13 | 0.019 | 1.12e−09 | 0.069 | 3.11e−09 | 0.017 | 1.20e−06 | 0.022 | 3.77e−10 |
Significant values are in bold.
Figure 8.
Comparison of the exact solution with the obtained approximate solutions of Example 6.4.
Figure 9.
Error graph for different orthogonal neural networks with different numbers of neurons for Example 6.4.
From Table 8 and Fig. 9, we conclude that for the given third-order pantograph equation, the Chebyshev polynomial-based ONN provides the most accurate solution, with a maximum relative error of 3.77e−10. Additionally, Table 7 shows that all four orthogonal neural networks satisfy Theorem 4.
Table 8.
The relative error for Example 6.4 with different orthogonal neural networks.
| t | Legendre neural network | Hermite neural network | Laguerre neural network | Chebyshev neural network |
|---|---|---|---|---|
| 0 | 3.79e−10 | 7.39e−07 | 5.91e−10 | 1.00e−11 |
| 0.1 | 3.82e−10 | 5.75e−07 | 6.05e−10 | 6.11e−12 |
| 0.2 | 3.89e−10 | 3.94e−07 | 6.33e−10 | 5.81e−12 |
| 0.3 | 4.02e−10 | 2.05e−07 | 6.75e−10 | 2.65e−11 |
| 0.4 | 4.21e−10 | 1.32e−08 | 7.33e−10 | 5.73e−11 |
| 0.5 | 4.49e−10 | 1.79e−07 | 8.11e−10 | 9.98e−11 |
| 0.6 | 4.88e−10 | 3.71e−07 | 9.22e−10 | 1.54e−10 |
| 0.7 | 5.49e−10 | 5.65e−07 | 1.10e−09 | 2.20e−10 |
| 0.8 | 6.49e−10 | 7.63e−07 | 1.41e−09 | 2.92e−10 |
| 0.9 | 8.21e−10 | 9.72e−07 | 2.00e−09 | 3.54e−10 |
| 1 | 1.12e−09 | 1.20e−06 | 3.11e−09 | 3.77e−10 |
Comparative analysis
This section describes a comparative study of the proposed approach on the first-order pantograph equation and systems of pantograph equations against other neural network approaches.
Example 7.1
25 Consider the pantograph equation with variable coefficients and multiple delays
where the coefficient functions and delays are as given in25.
The exact solution of the given equation is .
We employ four ONNs to obtain the approximate solution of the given pantograph equation with multiple delays. We choose eight uniformly distributed points in [0, 1] with 5, 8, and 11 neurons in the hidden layer. The relative errors for all four ONNs with different numbers of neurons are shown in Fig. 11. The obtained relative errors for the different orthogonal neural networks are reported in Table 9, and we compare the approximate solutions with the exact solution in Fig. 10.
Figure 11.
Error graph for different orthogonal neural networks with different numbers of neurons for Example 7.1.
Table 9.
Comparison of the maximum relative error for Example 7.1 with different numbers of neurons.
| n | Legendre Time(s) | Legendre Error | Laguerre Time(s) | Laguerre Error | Hermite Time(s) | Hermite Error | Chebyshev Time(s) | Chebyshev Error |
|---|---|---|---|---|---|---|---|---|
| 5 | 0.007 | 0.0014 | 0.004 | 0.0014 | 0.002 | 0.0014 | 0.001 | 0.0014 |
| 8 | 0.019 | 7.02e−08 | 0.004 | 6.56e−08 | 0.004 | 7.02e−08 | 0.004 | 7.02e−08 |
| 11 | 0.019 | 4.75e−11 | 0.069 | 1.06e−06 | 0.017 | 2.63e−09 | 0.022 | 3.40e−11 |
Significant values are in bold.
Figure 10.
Comparison of the exact solution with the obtained approximate solutions of Example 7.1.
Table 9 and Fig. 11 clearly show that the Chebyshev polynomial-based ONN performs best, with a maximum relative error of 3.40e−11.
The proposed FLNN-based ONN method attains a maximum relative error of 3.40e−11, smaller than the maximum relative error reported for the simple feed-forward neural network (FNN) method in25. This comparison shows that the ONN method can obtain a more accurate solution than the simple FNN. Additionally, Table 9 shows that all four orthogonal neural networks satisfy Theorem 4.
Example 7.2
25 Consider the system of pantograph equations
The exact solutions of the given system of pantograph equation is and .
To obtain the approximate solutions of the given system of DDEs, we use four ONNs with twelve uniformly distributed training points in [0, 1] and with 5, 7, and 10 neurons in the orthogonal functional expansion block. Relative errors for the different ONNs with 5, 7, and 10 neurons are reported in Tables 10 and 11. Comparisons between the exact solutions and the approximate solutions are presented in Figs. 12 and 13, and Figs. 14 and 15 show the absolute relative errors of the four special ONNs.
Table 10.
Comparison of the maximum relative error of the first solution component for Example 7.2 with different numbers of neurons.
| n | Legendre Time(s) | Legendre Error | Laguerre Time(s) | Laguerre Error | Hermite Time(s) | Hermite Error | Chebyshev Time(s) | Chebyshev Error |
|---|---|---|---|---|---|---|---|---|
| 5 | 0.005 | 0.06 | 0.004 | 0.0004 | 0.002 | 0.0004 | 0.001 | 0.0004 |
| 7 | 0.019 | 0.067 | 0.004 | 1.72e−06 | 0.004 | 1.93e−07 | 0.004 | 1.93e−07 |
| 10 | 0.019 | 3.23e−10 | 0.069 | 1.71e−06 | 0.017 | 1.81e−09 | 0.022 | 1.60e−10 |
Significant values are in bold.
Table 11.
Comparison of the maximum relative error of the second solution component for Example 7.2 with different numbers of neurons.
| n | Legendre Time(s) | Legendre Error | Laguerre Time(s) | Laguerre Error | Hermite Time(s) | Hermite Error | Chebyshev Time(s) | Chebyshev Error |
|---|---|---|---|---|---|---|---|---|
| 5 | 0.005 | 0.312 | 0.004 | 0.0006 | 0.002 | 0.0006 | 0.001 | 0.0006 |
| 7 | 0.019 | 0.02 | 0.004 | 1.94e−06 | 0.004 | 2.00e−07 | 0.004 | 6.47e−09 |
| 10 | 0.019 | 1.42e−08 | 0.069 | 2.20e−08 | 0.017 | 5.91e−06 | 0.022 | 5.11e−10 |
Significant values are in bold.
Figure 14.
Error graph of the first solution component for different orthogonal neural networks with different numbers of neurons for Example 7.2.
Figure 15.
Error graph of the second solution component for different orthogonal neural networks with different numbers of neurons for Example 7.2.
Figure 12.
Comparison of the exact solution with the obtained approximate solutions of Example 7.2.
Figure 13.
Comparison of the exact solution with the obtained approximate solutions of Example 7.2.
From Tables 10 and 11, we conclude that for the given system of delay differential equations, the Chebyshev polynomial-based ONN provides the most accurate solutions for the first and second solution components, with maximum relative errors of 1.60e−10 and 5.11e−10, respectively.
The proposed FLNN-based ONN method attains maximum relative errors of 1.60e−10 and 5.11e−10 for the two solution components with twelve training points, smaller than those reported for the simple feed-forward neural network (FNN) method in25. This comparison shows that the ONN method can obtain a more accurate solution than the simple FNN. Additionally, Tables 10 and 11 show that all four orthogonal neural networks satisfy Theorem 4.
Example 7.3
25 Consider the system of pantograph equations
where the coefficient functions are as given in25.
The exact solutions of the given system of pantograph equation are , , and .
To obtain the approximate solutions of the given system of DDEs, we use four ONNs with ten uniformly distributed training points in [0, 1] and with 7, 10, and 13 neurons in the orthogonal functional expansion block. Relative errors for the different ONNs with 7, 10, and 13 neurons are reported in Tables 12, 13, and 14. Comparisons between the exact solutions and the approximate solutions are presented in Figs. 17, 18, and 19, and Figs. 16, 20, and 21 show the absolute relative errors of the four special ONNs.
Table 12.
Comparison of the maximum relative error of the first solution component for Example 7.3 with different numbers of neurons.
| n | Legendre Time(s) | Legendre Error | Laguerre Time(s) | Laguerre Error | Hermite Time(s) | Hermite Error | Chebyshev Time(s) | Chebyshev Error |
|---|---|---|---|---|---|---|---|---|
| 7 | 0.007 | 5.41e−07 | 0.004 | 6.20e−07 | 0.002 | 6.17e−07 | 0.001 | 6.16e−07 |
| 10 | 0.019 | 9.87e−08 | 0.004 | 6.97e−07 | 0.004 | 1.45e−09 | 0.004 | 1.53e−10 |
| 13 | 0.019 | 9.11e−11 | 0.069 | 5.79e−07 | 0.017 | 1.23e−09 | 0.022 | 1.98e−11 |
Significant values are in bold.
Table 13.
Comparison of the maximum relative error of the second solution component for Example 7.3 with different numbers of neurons.
| n | Legendre Time(s) | Legendre Error | Laguerre Time(s) | Laguerre Error | Hermite Time(s) | Hermite Error | Chebyshev Time(s) | Chebyshev Error |
|---|---|---|---|---|---|---|---|---|
| 7 | 0.007 | 1.60e−07 | 0.004 | 3.18e−07 | 0.002 | 3.17e−05 | 0.001 | 2.93e−07 |
| 10 | 0.019 | 1.05e−09 | 0.004 | 5.71e−07 | 0.004 | 5.03e−10 | 0.004 | 3.70e−10 |
| 13 | 0.019 | 1.05e−09 | 0.069 | 5.24e−07 | 0.017 | 8.04e−09 | 0.022 | 3.11e−10 |
Significant values are in bold.
Table 14.
Comparison of the maximum relative error of the third solution component for Example 7.3 with different numbers of neurons.
| n | Legendre Time(s) | Legendre Error | Laguerre Time(s) | Laguerre Error | Hermite Time(s) | Hermite Error | Chebyshev Time(s) | Chebyshev Error |
|---|---|---|---|---|---|---|---|---|
| 7 | 0.007 | 1.31e−07 | 0.004 | 1.76e−07 | 0.002 | 1.78e−07 | 0.001 | 1.74e−07 |
| 10 | 0.019 | 2.82e−08 | 0.004 | 4.05e−07 | 0.004 | 1.27e−08 | 0.004 | 3.91e−09 |
| 13 | 0.019 | 5.18e−08 | 0.069 | 5.65e−07 | 0.017 | 4.23e−08 | 0.022 | 5.74e−09 |
Significant values are in bold.
Figure 16.
Error graph of the first solution component for different orthogonal neural networks with different numbers of neurons for Example 7.3.
Figure 17.
Comparison of the exact solution with the obtained approximate solutions of Example 7.3.
Figure 18.
Comparison of the exact solution with the obtained approximate solutions of Example 7.3.
Figure 19.
Comparison of the exact solution with the obtained approximate solutions of Example 7.3.
Figure 20.
Error graph of the second solution component for different orthogonal neural networks with different numbers of neurons for Example 7.3.
Figure 21.
Error graph of the third solution component for different orthogonal neural networks with different numbers of neurons for Example 7.3.
From Tables 12, 13 and 14, we conclude that for the given system of delay differential equations, the Chebyshev polynomial-based ONN provides the most accurate solutions for the three solution components, with maximum relative errors of 1.98e−11, 3.11e−10, and 5.74e−09, respectively.
The proposed FLNN-based ONN method attains maximum relative errors of 1.98e−11, 3.11e−10, and 5.74e−09 for the three solution components with ten training points, smaller than those reported for the simple feed-forward neural network (FNN) method in25. This comparison shows that the ONN method can obtain a more accurate solution than the simple FNN. Additionally, Tables 12, 13 and 14 show that all four orthogonal neural networks satisfy Theorem 4.
Conclusion
In this paper, we obtained approximate solutions of higher-order NDDEs, as well as systems of DDEs with multiple delays and variable coefficients, using four single-layer orthogonal polynomial-based neural networks: (i) Legendre neural network, (ii) Chebyshev neural network, (iii) Hermite neural network, and (iv) Laguerre neural network. For training the network weights, the ELM algorithm is used. It is proved that the error between the exact solution and the approximate solutions obtained by the ONNs is of order $\mathcal{O}(1/n!)$, where n is the number of neurons. Further, it is shown that each orthogonal polynomial-based neural network provides an approximate solution that is in good agreement with the exact solution. However, it is observed that, among these four ONNs, the Chebyshev neural network provides the most accurate results.
The results in the “Numerical illustrations” and “Comparative analysis” sections demonstrate that the proposed method is simple to implement and is a powerful mathematical technique for obtaining approximate solutions of higher-order NDDEs as well as systems of DDEs.
Acknowledgements
Chavda Divyesh Vinodbhai acknowledges the financial support provided by the MoE (Ministry of Education), Government of India, to carry out the work. The second author is thankful for the financial support received from the Indian Institute of Technology Madras.
Author contributions
The contributions of each author are equal.
Data availability
The data that support the findings of this investigation are available from the authors upon reasonable request. If necessary, the corresponding author can be contacted by email at sdubey@iitm.ac.in.
Competing interests
The authors declare no competing interests.
Footnotes
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1.Ockendon JR, Tayler AB. The dynamics of a current collection system for an electric locomotive. Proc. R. Soc. Lond. A. 1971;322(1551):447–468. [Google Scholar]
- 2.Biazar J, Ghanbari B. The homotopy perturbation method for solving neutral functional-differential equations with proportional delays. J. King Saud Univ.-Sci. 2012;24(1):33–37. doi: 10.1016/j.jksus.2010.07.026. [DOI] [Google Scholar]
- 3.Bahşi, M.M. & Çevik, M. Numerical solution of pantograph-type delay differential equations using perturbation-iteration algorithms. J. Appl. Math.2015 (2015).
- 4.Bahuguna D, Agarwal S. Approximations of solutions to neutral functional differential equations with nonlocal history conditions. J. Math. Anal. Appl. 2006;317(2):583–602. doi: 10.1016/j.jmaa.2005.07.010. [DOI] [Google Scholar]
- 5.Dubey SA. The method of lines applied to nonlinear nonlocal functional differential equations. J. Math. Anal. Appl. 2011;376(1):275–281. doi: 10.1016/j.jmaa.2010.10.024. [DOI] [Google Scholar]
- 6.Aibinu M, Thakur S, Moyo S. Exact solutions of nonlinear delay reaction-diffusion equations with variable coefficients. Partial Differ. Equ. Appl. Math. 2021;4:100170. doi: 10.1016/j.padiff.2021.100170. [DOI] [Google Scholar]
- 7.Mahata A, Paul S, Mukherjee S, Roy B. Stability analysis and Hopf bifurcation in fractional order SEIRV epidemic model with a time delay in infected individuals. Partial Differ. Equ. Appl. Math. 2022;5:100282. doi: 10.1016/j.padiff.2022.100282. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8.Cakmak M, Alkan S. A numerical method for solving a class of systems of nonlinear pantograph differential equations. Alex. Eng. J. 2022;61(4):2651–2661. doi: 10.1016/j.aej.2021.07.028. [DOI] [Google Scholar]
- 9.Muslim M. Approximation of solutions to history-valued neutral functional differential equations. Comput. Math. Appl. 2006;51(3–4):537–550. doi: 10.1016/j.camwa.2005.07.013. [DOI] [Google Scholar]
- 10.Lagaris IE, Likas A, Fotiadis DI. Artificial neural networks for solving ordinary and partial differential equations. IEEE Trans. Neural Netw. 1998;9(5):987–1000. doi: 10.1109/72.712178. [DOI] [PubMed] [Google Scholar]
- 11.Aarts LP, Van Der Veer P. Neural network method for solving partial differential equations. Neural Process. Lett. 2001;14(3):261–271. doi: 10.1023/A:1012784129883. [DOI] [Google Scholar]
- 12.Mall S, Chakraverty S. Application of Legendre neural network for solving ordinary differential equations. Appl. Soft Comput. 2016;43:347–356. doi: 10.1016/j.asoc.2015.10.069. [DOI] [Google Scholar]
- 13.Raissi M, Perdikaris P, Karniadakis GE. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 2019;378:686–707. doi: 10.1016/j.jcp.2018.10.045. [DOI] [Google Scholar]
- 14.Panghal S, Kumar M. Multilayer perceptron and Chebyshev polynomials based neural network for solving Emden–Fowler type initial value problems. Int. J. Appl. Comput. Math. 2020;6(6):1–12. doi: 10.1007/s40819-020-00914-2. [DOI] [Google Scholar]
- 15.Ezadi S. & Parandin N. An application of neural networks to solve ordinary differential equations (2013)
- 16.Liu Z, Yang Y, Cai Q. Neural network as a function approximator and its application in solving differential equations. Appl. Math. Mech. 2019;40(2):237–248. doi: 10.1007/s10483-019-2429-8. [DOI] [Google Scholar]
- 17.Pakdaman M, Ahmadian A, Effati S, Salahshour S, Baleanu D. Solving differential equations of fractional order using an optimization technique based on training artificial neural network. Appl. Math. Comput. 2017;293:81–95. [Google Scholar]
- 18.Nguyen L, Raissi M, Seshaiyer P. Efficient Physics Informed Neural Networks Coupled with Domain Decomposition Methods for Solving Coupled Multi-physics Problems. Springer; 2022. pp. 41–53. [Google Scholar]
- 19.Mall S, Chakraverty S. Numerical solution of nonlinear singular initial value problems of Emden–Fowler type using Chebyshev neural network method. Neurocomputing. 2015;149:975–982. doi: 10.1016/j.neucom.2014.07.036. [DOI] [Google Scholar]
- 20.Dufera TT. Deep neural network for system of ordinary differential equations: Vectorized algorithm and simulation. Mach. Learn. Appl. 2021;5:100058. [Google Scholar]
- 21.Fang J, Liu C, Simos T, Famelis IT. Neural network solution of single-delay differential equations. Mediterr. J. Math. 2020;17(1):1–15. doi: 10.1007/s00009-019-1452-5. [DOI] [Google Scholar]
- 22.Hou C-C, Simos TE, Famelis IT. Neural network solution of pantograph type differential equations. Math. Methods Appl. Sci. 2020;43(6):3369–3374. doi: 10.1002/mma.6126. [DOI] [Google Scholar]
- 23.Panghal S, Kumar M. Optimization free neural network approach for solving ordinary and partial differential equations. Eng. Comput. 2021;37(4):2989–3002. doi: 10.1007/s00366-020-00985-1. [DOI] [Google Scholar]
- 24.Huang G-B, Zhu Q-Y, Siew C-K. Extreme learning machine: Theory and applications. Neurocomputing. 2006;70(1–3):489–501. doi: 10.1016/j.neucom.2005.12.126. [DOI] [Google Scholar]
- 25.Panghal, S. & Kumar M. Neural network method: delay and system of delay differential equations. Eng. Comput. 1–10 (2021)
- 26.Liu H, Song J, Liu H, Xu J, Li L. Legendre neural network for solving linear variable coefficients delay differential-algebraic equations with weak discontinuities. Adv. Appl. Math. Mech. 2021;13(1):101–118. doi: 10.4208/aamm.OA-2019-0281. [DOI] [Google Scholar]
- 27.Mall, S. & Chakraverty, S. Artificial Neural Networks for Engineers and Scientists: Solving Ordinary Differential Equations, 1st ed., 168 (2017)
- 28.Verma A, Kumar M. Numerical solution of third-order Emden–Fowler type equations using artificial neural network technique. Eur. Phys. J. Plus. 2020;135(9):1–14. doi: 10.1140/epjp/s13360-020-00780-3. [DOI] [Google Scholar]
- 29.Verma A, Kumar M. Numerical solution of Bagley–Torvik equations using Legendre artificial neural network method. Evol. Intell. 2021;14(4):2027–2037. doi: 10.1007/s12065-020-00481-x. [DOI] [Google Scholar]
- 30.Serre D. Matrices: Theory and Applications. Springer Inc; 2002. [Google Scholar]
- 31.Sezer M, Akyüz-Daşcıoğlu A. A Taylor method for numerical solution of generalized pantograph equations with linear functional argument. J. Comput. Appl. Math. 2007;200(1):217–225. doi: 10.1016/j.cam.2005.12.015. [DOI] [Google Scholar]