The Scientific World Journal. 2013 Dec 9;2013:306237. doi: 10.1155/2013/306237

Numerical Solution of Some Types of Fractional Optimal Control Problems

Nasser Hassan Sweilam 1,*, Tamer Mostafa Al-Ajami 1,*, Ronald H W Hoppe 2,3
PMCID: PMC3872407  PMID: 24385874

Abstract

We present two different approaches for the numerical solution of fractional optimal control problems (FOCPs) based on a spectral method using Chebyshev polynomials. The fractional derivative is described in the Caputo sense. The first approach follows the paradigm “optimize first, then discretize” and relies on the approximation of the necessary optimality conditions in terms of the associated Hamiltonian. In the second approach, the state equation is discretized first using the Clenshaw and Curtis scheme for the numerical integration of nonsingular functions followed by the Rayleigh-Ritz method to evaluate both the state and control variables. Two illustrative examples are included to demonstrate the validity and applicability of the suggested approaches.

1. Introduction

FOCP refers to the minimization of an objective functional subject to dynamical constraints on the state and the control which have fractional order models. Fractional order models are sometimes more appropriate than conventional integer order models to describe physical systems [1–4]. For example, it has been shown that materials with memory and hereditary effects and dynamical processes including gas diffusion and heat conduction in fractal porous media can be more adequately modeled by fractional order models [5]. Numerical methods for solving FOCPs have been suggested in [6–9].

This paper presents two numerical methods for solving some types of FOCPs where fractional derivatives are introduced in the Caputo sense. These numerical methods rely on the spectral method where Chebyshev polynomials are used to approximate the unknown functions. Chebyshev polynomials are widely used in numerical computation [10, 11].

For the first numerical method, we follow the approach “optimize first, then discretize” and derive the necessary optimality conditions in terms of the associated Hamiltonian. The necessary optimality conditions give rise to fractional boundary value problems that have left Caputo and right Riemann-Liouville fractional derivatives. We construct an approximation of the right Riemann-Liouville fractional derivatives and solve the fractional boundary value problems by the spectral method. The second method relies on the strategy “discretize first, then optimize.” The Clenshaw and Curtis scheme [12] is used for the discretization of the state equation and the objective functional. The Rayleigh-Ritz method provides the optimality conditions in the discrete regime.

The paper is organized as follows: in Section 2, some basic notations and preliminaries as well as properties of the shifted Chebyshev polynomials are introduced. Section 3 contains the necessary optimality conditions of the FOCP model. Section 4 is devoted to the approximations of the fractional derivatives. In Section 5, we develop two numerical schemes and present two illustrative examples to demonstrate the validity and applicability of the suggested approaches. Finally, in Section 6, we provide a brief conclusion and some final remarks.

2. Basic Notations and Preliminaries

2.1. Fractional Derivatives and Integrals

Definition 1 —

Let x : [a, b] → ℝ be a function, let α > 0 be a real number, and let n = ⌈α⌉, where ⌈α⌉ denotes the smallest integer greater than or equal to α. The left (left RLFI) and right (right RLFI) Riemann-Liouville fractional integrals are defined by

{}_aI_t^{\alpha}x(t)=\frac{1}{\Gamma(\alpha)}\int_a^t(t-\tau)^{\alpha-1}x(\tau)\,d\tau \quad (\text{left RLFI}),\qquad
{}_tI_b^{\alpha}x(t)=\frac{1}{\Gamma(\alpha)}\int_t^b(\tau-t)^{\alpha-1}x(\tau)\,d\tau \quad (\text{right RLFI}).  (1)

The left (left RLFD) and right (right RLFD) Riemann-Liouville fractional derivatives are given according to

{}_aD_t^{\alpha}x(t)=\frac{1}{\Gamma(n-\alpha)}\frac{d^{n}}{dt^{n}}\int_a^t(t-\tau)^{n-\alpha-1}x(\tau)\,d\tau \quad (\text{left RLFD}),\qquad
{}_tD_b^{\alpha}x(t)=\frac{(-1)^{n}}{\Gamma(n-\alpha)}\frac{d^{n}}{dt^{n}}\int_t^b(\tau-t)^{n-\alpha-1}x(\tau)\,d\tau \quad (\text{right RLFD}).  (2)

Moreover, the left (left CFD) and right (right CFD) Caputo fractional derivatives are defined by means of

{}_a^{C}D_t^{\alpha}x(t)=\frac{1}{\Gamma(n-\alpha)}\int_a^t(t-\tau)^{n-\alpha-1}x^{(n)}(\tau)\,d\tau \quad (\text{left CFD}),\qquad
{}_t^{C}D_b^{\alpha}x(t)=\frac{(-1)^{n}}{\Gamma(n-\alpha)}\int_t^b(\tau-t)^{n-\alpha-1}x^{(n)}(\tau)\,d\tau \quad (\text{right CFD}).  (3)

The relation between the right RLFD and the right CFD is as follows [13]:

{}_t^{C}D_b^{\alpha}x(t)={}_tD_b^{\alpha}x(t)-\sum_{k=0}^{n-1}\frac{x^{(k)}(b)}{\Gamma(k-\alpha+1)}(b-t)^{k-\alpha}.  (4)

Further, it holds

{}_0^{C}D_t^{\alpha}c=0\ \ (c\ \text{a constant}),\qquad
{}_0^{C}D_t^{\alpha}t^{n}=\begin{cases}0, & n\in\mathbb{N}_0,\ n<\lceil\alpha\rceil,\\[4pt] \dfrac{\Gamma(n+1)}{\Gamma(n+1-\alpha)}\,t^{n-\alpha}, & n\in\mathbb{N}_0,\ n\ge\lceil\alpha\rceil,\end{cases}  (5)

where ℕ₀ = {0, 1, 2, …}. We recall that, for α ∈ ℕ, the Caputo differential operator coincides with the usual differential operator of integer order. For more details on the definitions of fractional derivatives and their properties, we refer the reader to [3, 8, 14, 15].
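To make property (5) concrete, here is a minimal Python sketch that evaluates the Caputo derivative of a monomial termwise; the choices of α, n, and t below are arbitrary and purely illustrative.

# Caputo derivative of t^n, following property (5).
from math import gamma, ceil

def caputo_monomial(n: int, alpha: float, t: float) -> float:
    if n < ceil(alpha):                       # constants and low-order monomials are annihilated
        return 0.0
    return gamma(n + 1) / gamma(n + 1 - alpha) * t ** (n - alpha)

# Illustrative values: for alpha = 1 the formula reduces to the classical n*t^(n-1).
alpha = 0.5
print(caputo_monomial(2, alpha, 1.0))         # Gamma(3)/Gamma(2.5), about 1.5045
print(caputo_monomial(0, alpha, 1.0))         # derivative of a constant -> 0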

2.2. Shifted Chebyshev Polynomials

The well-known Chebyshev polynomials are defined on the interval [−1,1] and can be determined by the following recurrence formula [16]:

T_{n+1}(z)=2zT_n(z)-T_{n-1}(z),\qquad T_0(z)=1,\quad T_1(z)=z,\quad n=1,2,\ldots.  (6)

The analytic form of the Chebyshev polynomials T n(z) of degree n is as follows:

T_n(z)=\sum_{i=0}^{\lfloor n/2\rfloor}(-1)^{i}\,2^{n-2i-1}\,\frac{n\,(n-i-1)!}{i!\,(n-2i)!}\,z^{n-2i},  (7)

where ⌊n/2⌋ denotes the largest integer less than or equal to n/2. The orthogonality condition reads

\int_{-1}^{1}\frac{T_i(z)T_j(z)}{\sqrt{1-z^{2}}}\,dz=\begin{cases}\pi, & i=j=0,\\ \pi/2, & i=j\neq 0,\\ 0, & i\neq j.\end{cases}  (8)

In order to use these polynomials on the interval [0, L], we use the so-called shifted Chebyshev polynomials by introducing the change of variable z = (2t/L) − 1. The shifted Chebyshev polynomials are defined according to

T_n^{*}(t)=T_n\!\left(\frac{2t}{L}-1\right),\qquad T_0^{*}(t)=1,\quad T_1^{*}(t)=\frac{2t}{L}-1.  (9)

Their analytic form is given by

T_n^{*}(t)=n\sum_{k=0}^{n}(-1)^{n-k}\,\frac{2^{2k}(n+k-1)!}{(2k)!\,(n-k)!\,L^{k}}\,t^{k},\quad n=1,2,\ldots.  (10)

We note that (10) implies T_n^{*}(0) = (−1)^n and T_n^{*}(L) = 1. Further, it is easy to see that the orthogonality condition reads

\int_0^{L}T_j^{*}(t)\,T_k^{*}(t)\,w(t)\,dt=\delta_{jk}h_k,  (11)

with the weight function w(t) = 1/\sqrt{Lt - t^2}, h_k = (b_k/2)\pi, b_0 = 2, and b_k = 1 for k ≥ 1.

A function y ∈ L²([0, L]) can be expressed in terms of shifted Chebyshev polynomials as

y(t)=\sum_{n=0}^{\infty}c_nT_n^{*}(t),  (12)

where the coefficients c n are given by

c_n=\frac{1}{h_n}\int_0^{L}y(t)\,T_n^{*}(t)\,w(t)\,dt,\quad n=0,1,\ldots.  (13)
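The recurrence (6), combined with the change of variable z = 2t/L − 1, gives a direct way to evaluate T_n^{*}(t) numerically. The following sketch is illustrative (L = 1 is an arbitrary choice) and checks the endpoint values noted after (10).

# Evaluate the shifted Chebyshev polynomial T_n*(t) on [0, L] via the recurrence (6).
def shifted_chebyshev(n: int, t: float, L: float = 1.0) -> float:
    z = 2.0 * t / L - 1.0
    T_prev, T_curr = 1.0, z                   # T_0(z) and T_1(z)
    if n == 0:
        return T_prev
    for _ in range(1, n):
        T_prev, T_curr = T_curr, 2.0 * z * T_curr - T_prev    # recurrence (6)
    return T_curr

# Endpoint check: T_n*(0) = (-1)^n and T_n*(L) = 1.
for n in range(5):
    print(n, shifted_chebyshev(n, 0.0), shifted_chebyshev(n, 1.0))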

3. Necessary Optimality Conditions

Let α ∈ (0,1) and let L, f : [a, +∞[ × ℝ² → ℝ be two differentiable functions. We consider the following FOCP [8]:

\text{minimize } J(x,u,T)=\int_a^{T}L\bigl(t,x(t),u(t)\bigr)\,dt,  (14a)

subject to the dynamical system

M_1\dot{x}(t)+M_2\,{}_a^{C}D_t^{\alpha}x(t)=f\bigl(t,x(t),u(t)\bigr),  (14b)
x(a)=x_a,\qquad x(T)=x_T,  (14c)

where M_1, M_2 ≠ 0, T, x_a, and x_T are fixed real numbers.

Theorem 2 (see [8]) —

If (x, u, T) is a minimizer of (14a)–(14c), then there exists an adjoint state λ for which the triple (x, u, λ) satisfies the optimality conditions

M_1\dot{x}(t)+M_2\,{}_a^{C}D_t^{\alpha}x(t)=\frac{\partial H}{\partial\lambda}\bigl(t,x(t),u(t),\lambda(t)\bigr),  (15a)
M_1\dot{\lambda}(t)-M_2\,{}_tD_T^{\alpha}\lambda(t)=-\frac{\partial H}{\partial x}\bigl(t,x(t),u(t),\lambda(t)\bigr),  (15b)
\frac{\partial H}{\partial u}\bigl(t,x(t),u(t),\lambda(t)\bigr)=0,  (15c)

for all t ∈ [a, T], where the Hamiltonian H is defined by

H(t,x,u,\lambda)=L(t,x,u)+\lambda f(t,x,u).  (16)

Remark 3 —

Under some additional assumptions on the objective functional L and the right-hand side f, for example, convexity of L and linearity of f in x and u, the optimality conditions (15a)–(15c) are also sufficient.

4. Numerical Approximations

In this section, we provide numerical approximations of the left CFD and the right RLFD using Chebyshev polynomials. We choose the grid points to be the Chebyshev-Gauss-Lobatto points associated with the interval [0, L]; that is,

t_r=\frac{L}{2}-\frac{L}{2}\cos\!\left(\frac{\pi r}{N}\right),\quad r=0,1,\ldots,N.  (17)

Clenshaw and Curtis [12] introduced an approximation y N of the function y. We reformulate it to be used with respect to the shifted Chebyshev polynomials as follows:

y_N(t)=\sum_{n=0}^{N}{}''\,a_nT_n^{*}(t),\qquad a_n=\frac{2}{N}\sum_{r=0}^{N}{}''\,y(t_r)T_n^{*}(t_r).  (18)

Here, the summation symbol with double primes denotes a sum with both first and last terms halved.
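A minimal sketch of the interpolation (17)–(18): the shifted Chebyshev–Gauss–Lobatto nodes, the coefficients a_n computed from nodal values with the first and last terms halved, and the evaluation of y_N. The interval length L and the degree N are illustrative choices.

import numpy as np

def gauss_lobatto_nodes(N: int, L: float = 1.0) -> np.ndarray:
    r = np.arange(N + 1)
    return L / 2.0 - (L / 2.0) * np.cos(np.pi * r / N)        # nodes (17)

def chebyshev_coeffs(y_vals: np.ndarray, L: float = 1.0) -> np.ndarray:
    # Coefficients a_n of (18) from the nodal values y(t_r).
    N = len(y_vals) - 1
    z = 2.0 * gauss_lobatto_nodes(N, L) / L - 1.0             # map the nodes to [-1, 1]
    half = np.ones(N + 1)
    half[0] = half[-1] = 0.5                                  # double-primed sum: halve first/last term
    return np.array([(2.0 / N) * np.sum(half * y_vals * np.cos(n * np.arccos(z)))
                     for n in range(N + 1)])

def evaluate_yN(a: np.ndarray, t: float, L: float = 1.0) -> float:
    # y_N(t) = sum'' a_n T_n*(t).
    N = len(a) - 1
    z = 2.0 * t / L - 1.0
    half = np.ones(N + 1)
    half[0] = half[-1] = 0.5
    return float(np.sum(half * a * np.cos(np.arange(N + 1) * np.arccos(z))))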

4.1. Approximation of the Left CFD

In the sequel, some basic results for the approximation of the fractional derivative {}_0^{C}D_t^{\alpha}y(t) are given.

Theorem 4 (see [17]) —

An approximation of the fractional derivative of order α in the Caputo sense of the function y at t s is given by

{}_0^{C}D_t^{\alpha}y_N(t_s)\approx\sum_{r=0}^{N}y(t_r)\,d_{s,r}^{\alpha},\quad \alpha>0,  (19)

where

d_{s,r}^{\alpha}=\frac{4\theta_r}{N}\sum_{n=\lceil\alpha\rceil}^{N}\sum_{j=0}^{N}\sum_{k=\lceil\alpha\rceil}^{n}\frac{n\,\theta_n}{b_j}\;\frac{(-1)^{n-k}(n+k-1)!\,\Gamma\!\left(k-\alpha+\tfrac12\right)T_n^{*}(t_r)\,T_j^{*}(t_s)}{L^{\alpha}\,\Gamma\!\left(k+\tfrac12\right)(n-k)!\,\Gamma(k-\alpha-j+1)\,\Gamma(k-\alpha+j+1)},  (20)

where s, r = 0, 1, …, N, with θ_0 = θ_N = 1/2 and θ_i = 1 for all i = 1, 2, …, N − 1.
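The triple sum (20) can be transcribed directly. The sketch below follows the reconstruction given above and makes no attempt at efficiency; the helper rgamma returns 1/Γ(x) with the convention that 1/Γ vanishes at the poles of Γ (non-positive integers), which is needed, for example, when the formula is used with α = 1.

from math import gamma, factorial, ceil, cos, acos, pi

def rgamma(x: float) -> float:
    # 1/Gamma(x), taken to be zero at the poles of Gamma.
    if x <= 0 and abs(x - round(x)) < 1e-12:
        return 0.0
    return 1.0 / gamma(x)

def d_entry(s: int, r: int, N: int, alpha: float, L: float = 1.0) -> float:
    # Entry d_{s,r}^alpha of formula (20), evaluated at the nodes (17).
    t = [L / 2.0 - (L / 2.0) * cos(pi * i / N) for i in range(N + 1)]
    theta = [0.5 if i in (0, N) else 1.0 for i in range(N + 1)]
    b = [2.0] + [1.0] * N                                     # b_0 = 2, b_j = 1 for j >= 1
    Tstar = lambda n, x: cos(n * acos(2.0 * x / L - 1.0))     # shifted Chebyshev T_n*(x)
    total = 0.0
    for n in range(ceil(alpha), N + 1):
        for j in range(N + 1):
            for k in range(ceil(alpha), n + 1):
                total += ((n * theta[n] / b[j]) * (-1) ** (n - k) * factorial(n + k - 1)
                          * gamma(k - alpha + 0.5) * Tstar(n, t[r]) * Tstar(j, t[s])
                          / (L ** alpha * gamma(k + 0.5) * factorial(n - k))
                          * rgamma(k - alpha - j + 1) * rgamma(k - alpha + j + 1))
    return 4.0 * theta[r] / N * total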

An upper bound for the error in the approximation of the fractional derivative {}_0^{C}D_t^{\alpha} of the function y is given as follows.

Theorem 5 (see [18]) —

Let {}_0^{C}D_t^{\alpha}y_N(t) be the approximation of the fractional derivative {}_0^{C}D_t^{\alpha}y(t) of the function y as given by (19). Then, it holds that

\bigl\|{}_0^{C}D_t^{\alpha}y(t)-{}_0^{C}D_t^{\alpha}y_N(t)\bigr\|_2\le\sum_{n=0}^{N}{}''\,a_n\Omega_n\left(\frac{G\bigl(t^{k-\alpha};T_0^{*},\ldots,T_N^{*}\bigr)}{G\bigl(T_0^{*},\ldots,T_N^{*}\bigr)}\right)^{1/2},  (21)

where

\Omega_n=\sum_{k=\lceil\alpha\rceil}^{n}\frac{(-1)^{n-k}\,2n\,(n+k-1)!\,\Gamma\!\left(k-\alpha+\tfrac12\right)}{b_j\,L^{\alpha}\,\Gamma\!\left(k+\tfrac12\right)(n-k)!\,\Gamma(k-\alpha-j+1)\,\Gamma(k-\alpha+j+1)},\qquad
G(x;y_1,y_2,\ldots,y_n)=\begin{vmatrix}\langle x,x\rangle & \langle x,y_1\rangle & \cdots & \langle x,y_n\rangle\\ \langle y_1,x\rangle & \langle y_1,y_1\rangle & \cdots & \langle y_1,y_n\rangle\\ \vdots & \vdots & & \vdots\\ \langle y_n,x\rangle & \langle y_n,y_1\rangle & \cdots & \langle y_n,y_n\rangle\end{vmatrix}.  (22)

4.2. Approximation of the Right RLFD

Let f be a sufficiently smooth function in [0, b] and let J(s; f) be defined as follows:

J(s;f)=-\int_s^{b}(t-s)^{-\alpha}f'(t)\,dt,\quad 0<s<b.  (23)

From (3) and (4), we deduce that

{}_sD_b^{\alpha}f(s)=\frac{f(b)}{\Gamma(1-\alpha)(b-s)^{\alpha}}+\frac{J(s;f)}{\Gamma(1-\alpha)}.  (24)

We approximate f(t), 0 ≤ t ≤ b, by a sum of shifted Chebyshev polynomials T_k(2t/b − 1) according to

f(t)\approx p_N(t)=\sum_{k=0}^{N}{}''\,a_kT_k\!\left(\frac{2t}{b}-1\right),\qquad a_k=\frac{2}{N}\sum_{j=0}^{N}{}''\,f(t_j)T_k\!\left(\frac{2t_j}{b}-1\right),  (25)

where t_j = (b/2) − (b/2)cos(πj/N), j = 0, …, N, and obtain

J(s;f)\approx J(s;p_N)=-\int_s^{b}p_N'(t)\,(t-s)^{-\alpha}\,dt.  (26)

Lemma 6 —

Let p N be the polynomial of degree N as given by (25). Then, there exists a polynomial F N−1 of degree N − 1 such that

\int_s^{x}\bigl[p_N'(t)-p_N'(s)\bigr](t-s)^{-\alpha}\,dt=\bigl[F_{N-1}(x)-F_{N-1}(s)\bigr](x-s)^{1-\alpha}.  (27)

Proof —

Let p N′(t) − p N′(s) be expanded in a Taylor series at t = s:

p_N'(t)-p_N'(s)=\sum_{k=1}^{N-1}A_k(s)\,(t-s)^{k}.  (28)

Then,

\int_s^{x}\bigl[p_N'(t)-p_N'(s)\bigr](t-s)^{-\alpha}\,dt=\sum_{k=1}^{N-1}A_k(s)\int_s^{x}(t-s)^{k-\alpha}\,dt=\left[(t-s)^{1-\alpha}\sum_{k=1}^{N-1}\frac{A_k(s)\,(t-s)^{k}}{k-\alpha+1}\right]_s^{x}.  (29)

The assertion follows, if we choose

F_{N-1}(x)=\sum_{k=0}^{N-1}\frac{A_k(s)}{k-\alpha+1}\,(x-s)^{k},  (30)

with an arbitrary constant A 0(s).

In view of (27), we have

J(s;p_N)=-\int_s^{b}p_N'(t)\,(t-s)^{-\alpha}\,dt=-\left[\frac{p_N'(s)}{1-\alpha}+F_{N-1}(b)-F_{N-1}(s)\right](b-s)^{1-\alpha}.  (31)

Moreover, {}_sD_b^{\alpha}f(s) can be approximated by means of

{}_sD_b^{\alpha}f(s)\approx\frac{f(b)}{\Gamma(1-\alpha)(b-s)^{\alpha}}+\frac{J(s;p_N)}{\Gamma(1-\alpha)}.  (32)

We express F N−1(t) in (31) by a sum of Chebyshev polynomials and provide the recurrence relation satisfied by the Chebyshev coefficients. Differentiating both sides of (27) with respect to x yields

\bigl\{p_N'(x)-p_N'(s)\bigr\}(x-s)^{-\alpha}=F_{N-1}'(x)\,(x-s)^{1-\alpha}+\bigl\{F_{N-1}(x)-F_{N-1}(s)\bigr\}(1-\alpha)(x-s)^{-\alpha},  (33)

whence

p_N'(x)-p_N'(s)=F_{N-1}'(x)\,(x-s)+\bigl\{F_{N-1}(x)-F_{N-1}(s)\bigr\}(1-\alpha).  (34)

To evaluate F N−1(s) in (31), we expand F N−1′(x) in terms of the shifted Chebyshev polynomials as

F_{N-1}'(x)=\sum_{k=0}^{N-2}{}'\,b_kT_k\!\left(\frac{2x}{b}-1\right),\quad 0\le x\le b,  (35)

where the summation symbol with one prime denotes a sum with the first term halved. Integrating both sides of (35) gives

F_{N-1}(x)-F_{N-1}(s)=\frac{b}{4}\sum_{k=1}^{N-1}\frac{b_{k-1}-b_{k+1}}{k}\left\{T_k\!\left(\frac{2x}{b}-1\right)-T_k\!\left(\frac{2s}{b}-1\right)\right\},  (36)

where b N−1 = b N = 0. On the other hand, we have

(x-s)\,F_{N-1}'(x)=\frac{b}{2}\,F_{N-1}'(x)\left\{\left(\frac{2x}{b}-1\right)-\left(\frac{2s}{b}-1\right)\right\}.  (37)

By using the relation T k+1(u) + T k−1(u) = 2uT k(u) and (35), it follows that

(x-s)\,F_{N-1}'(x)=\frac{b}{4}\sum_{k=0}^{N-1}\left\{b_{k+1}-2\left(\frac{2s}{b}-1\right)b_k+b_{k-1}\right\}T_k\!\left(\frac{2x}{b}-1\right),  (38)

where b −1 = b 1. Let

p_N'(x)=\sum_{k=0}^{N-1}c_kT_k\!\left(\frac{2x}{b}-1\right).  (39)

Inserting F N−1(x) − F N−1(s) and (xs)F N−1′(x) as given by (36) and (38) into (34) and taking (39) into account, we get

\left\{1-\frac{1-\alpha}{k}\right\}b_{k+1}-2\left(\frac{2s}{b}-1\right)b_k+\left\{1+\frac{1-\alpha}{k}\right\}b_{k-1}=\frac{4}{b}\,c_k,\quad k\ge 1.  (40)

The Chebyshev coefficients c k of p N′(x) as given by (39) can be evaluated by integrating (39) and comparing it with (25):

c_{k-1}=c_{k+1}+\frac{4k}{b}\,a_k,\quad k=N,N-1,\ldots,1,  (41)

with starting values c N = c N+1 = 0, where a k are the Chebyshev coefficients of p N(x).
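The two recurrences translate into short routines. The sketch below transcribes (41) directly and then solves (40) for the coefficients b_k by a backward sweep in k starting from b_{N−1} = b_N = 0; the backward sweep is one natural reading of (40), not a procedure spelled out in the text.

import numpy as np

def derivative_coeffs(a: np.ndarray, b_len: float) -> np.ndarray:
    # Coefficients c_k of p_N'(x) on [0, b_len] from the a_k of p_N(x), via (41).
    N = len(a) - 1
    c = np.zeros(N + 2)                        # c_N = c_{N+1} = 0
    for k in range(N, 0, -1):
        c[k - 1] = c[k + 1] + (4.0 * k / b_len) * a[k]
    return c[:N]                               # c_0, ..., c_{N-1}

def F_prime_coeffs(c: np.ndarray, s: float, alpha: float, b_len: float) -> np.ndarray:
    # Coefficients b_k of F_{N-1}'(x) from (40), swept backwards in k with b_{N-1} = b_N = 0.
    N = len(c)                                 # c holds c_0, ..., c_{N-1}
    u0 = 2.0 * s / b_len - 1.0
    bc = np.zeros(N + 1)
    for k in range(N - 1, 0, -1):
        rhs = (4.0 / b_len) * c[k] - (1.0 - (1.0 - alpha) / k) * bc[k + 1] + 2.0 * u0 * bc[k]
        bc[k - 1] = rhs / (1.0 + (1.0 - alpha) / k)
    return bc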

5. Numerical Results

In this section, we develop two algorithms (Algorithms A and B) for the numerical solution of FOCPs and apply them to two illustrative examples.

Example 1 —

We consider the following FOCP from [8]:

\min\ J(x,u)=\int_0^{1}\bigl(tu(t)-(\alpha+2)x(t)\bigr)^{2}\,dt,  (42a)

subject to the dynamical system

\dot{x}(t)+{}_0^{C}D_t^{\alpha}x(t)=u(t)+t^{2},  (42b)

and the boundary conditions

x(0)=0,\qquad x(1)=\frac{2}{\Gamma(3+\alpha)}.  (42c)

The exact solution is given by

\bigl(x(t),u(t)\bigr)=\left(\frac{2t^{\alpha+2}}{\Gamma(\alpha+3)},\ \frac{2t^{\alpha+1}}{\Gamma(\alpha+2)}\right).  (43)

Algorithm A. The first algorithm for the solution of (42a)–(42c) follows the “optimize first, then discretize” approach. It is based on the necessary optimality conditions from Theorem 2 and implements the following steps.

Step 1. Compute the Hamiltonian

H=\bigl(tu(t)-(\alpha+2)x(t)\bigr)^{2}+\lambda\bigl(u(t)+t^{2}\bigr).  (44)

Step 2. Derive the necessary optimality conditions from Theorem 2:

\dot{\lambda}(t)-{}_tD_1^{\alpha}\lambda(t)=-\frac{\partial H}{\partial x}=2(\alpha+2)\bigl(tu(t)-(\alpha+2)x(t)\bigr),  (45a)
\dot{x}(t)+{}_0^{C}D_t^{\alpha}x(t)=\frac{\partial H}{\partial\lambda}=u(t)+t^{2},  (45b)
0=\frac{\partial H}{\partial u}=2t\bigl(tu(t)-(\alpha+2)x(t)\bigr)+\lambda(t).  (45c)

Use (45c) in (45a) and (45b) to obtain

-\dot{\lambda}(t)+{}_tD_1^{\alpha}\lambda(t)=\frac{\alpha+2}{t}\,\lambda(t),  (46a)
\dot{x}(t)+{}_0^{C}D_t^{\alpha}x(t)=-\frac{\lambda(t)}{2t^{2}}+\frac{\alpha+2}{t}\,x(t)+t^{2}.  (46b)

Step 3. By using Chebyshev expansion, get an approximate solution of the coupled system (46a), (46b) under the boundary conditions (42c).

Step 3.1. In order to solve (46a) by the Chebyshev expansion method, use (18) to approximate λ. A collocation scheme is defined by substituting (18), (19), and (32) into (46a) and evaluating the results at the shifted Gauss-Lobatto nodes t s, s = 1,2,…, N − 1. This gives

-\sum_{r=0}^{N}d_{s,r}^{1}\lambda(t_r)+\frac{\lambda(1)}{\Gamma(1-\alpha)(1-t_s)^{\alpha}}+\frac{J(t_s;p_N)}{\Gamma(1-\alpha)}=\frac{\alpha+2}{t_s}\,\lambda(t_s),  (47)

s = 1,2,…, N − 1, where d s,r 1 is defined in (20). The system (47) represents N − 1 algebraic equations which can be solved for the unknown coefficients λ(t 1), λ(t 2),…, λ(t N−1). Consequently, it remains to compute the two unknowns λ(t 0),  λ(t N). This can be done by using any two points t a, t b∈]0,1[ which differ from the Gauss-Lobatto nodes and satisfy (46a). We end up with two equations in two unknowns:

-\dot{\lambda}(t_a)+{}_tD_1^{\alpha}\lambda(t_a)=\frac{\alpha+2}{t_a}\,\lambda(t_a),\qquad
-\dot{\lambda}(t_b)+{}_tD_1^{\alpha}\lambda(t_b)=\frac{\alpha+2}{t_b}\,\lambda(t_b).  (48)

Step 3.2. In order to solve (46b) by the Chebyshev expansion method, we use (18) to approximate x. A collocation scheme is defined by substituting (18), (19), and the computed λ into (46b) and evaluating the results at the shifted Gauss-Lobatto nodes t s, s = 1,2,…, N − 1. This results in

\sum_{r=0}^{N}d_{s,r}^{1}x(t_r)+\sum_{r=0}^{N}d_{s,r}^{\alpha}x(t_r)=-\frac{\lambda(t_s)}{2t_s^{2}}+\frac{\alpha+2}{t_s}\,x(t_s)+t_s^{2},\quad s=1,2,\ldots,N-1,  (49)

where d s,r 1 and d s,r α are defined in (20). By using the boundary conditions, we have x(t 0) = 0 and x(t N) = 2/Γ(3 + α). The system (49) represents N − 1 algebraic equations which can be solved for the unknown coefficients x(t 1), x(t 2),…, x(t N−1).
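Since (49) is linear in the nodal values of x, Step 3.2 amounts to assembling and solving a small linear system. The following sketch assumes that the differentiation matrices D1 ≈ (d_{s,r}^1) and Da ≈ (d_{s,r}^α), the nodes t, and the adjoint values lam ≈ λ(t_s) from Step 3.1 are already available; it is an illustration of the assembly, not the authors' code.

import numpy as np
from math import gamma

def solve_state(lam: np.ndarray, D1: np.ndarray, Da: np.ndarray,
                t: np.ndarray, alpha: float) -> np.ndarray:
    # Interior values x(t_1), ..., x(t_{N-1}) from the collocation system (49).
    N = len(t) - 1
    x0, xN = 0.0, 2.0 / gamma(3 + alpha)       # boundary conditions (42c)
    A = np.zeros((N - 1, N - 1))
    rhs = np.zeros(N - 1)
    for s in range(1, N):
        A[s - 1, :] = D1[s, 1:N] + Da[s, 1:N]
        A[s - 1, s - 1] -= (alpha + 2) / t[s]  # move ((alpha+2)/t_s) x(t_s) to the left-hand side
        rhs[s - 1] = (-lam[s] / (2.0 * t[s] ** 2) + t[s] ** 2
                      - (D1[s, 0] + Da[s, 0]) * x0 - (D1[s, N] + Da[s, N]) * xN)
    return np.linalg.solve(A, rhs)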

Figures 1, 2, 3, and 4 display the exact and approximate state x and the exact and approximate control u for α = 1/2 and N = 2,3.

Figure 1. Exact and approximate state.

Figure 2. Exact and approximate control.

Figure 3. Exact and approximate state.

Figure 4. Exact and approximate control.

Table 1 contains the maximum errors in the state x and in the control u for N = 2,  N = 3, and N = 5.

Table 1.

Maximum errors in the state x and in the control u for different values of N.

                     N = 2          N = 3          N = 5
Max. error in x      3.03292E−2     3.4641E−3      2.6415E−4
Max. error in u      2.12592E−1     4.1878E−2      7.7493E−3

Algorithm B. The second algorithm follows the “discretize first, then optimize” approach and proceeds according to the following steps.

Step 1. Substitute (42b) into (42a) to obtain

\min\ J=\int_0^{1}\Bigl(t\bigl[\dot{x}(t)+{}_0^{C}D_t^{\alpha}x(t)-t^{2}\bigr]-(\alpha+2)x(t)\Bigr)^{2}\,dt.  (50)

Step 2. Approximate x using the Clenshaw and Curtis formula (18), and approximate the Caputo fractional derivative {}_0^{C}D_t^{\alpha}x and the derivative \dot{x} using (19). Then, (50) takes the form

\min\ J=\int_0^{1}\left(t\left[\sum_{r=0}^{N}d_{t,r}^{1}x(t_r)+\sum_{r=0}^{N}d_{t,r}^{\alpha}x(t_r)-t^{2}\right]-(\alpha+2)\sum_{n=0}^{N}{}''\,a_nT_n^{*}(t)\right)^{2}dt,  (51)

where d_{t,r}^{\alpha} is defined as in (20) with t_s replaced by t.

Step 3. Use t = (1/2)(η + 1) to transform (51) to

\min\ J=\frac{1}{2}\int_{-1}^{1}\left(\frac{1}{2}(\eta+1)\left[\sum_{r=0}^{N}d_{\eta,r}^{1}x(\eta_r)+\sum_{r=0}^{N}d_{\eta,r}^{\alpha}x(\eta_r)-\left(\frac{1}{2}(\eta+1)\right)^{2}\right]-(\alpha+2)\sum_{n=0}^{N}{}''\,a_nT_n(\eta)\right)^{2}d\eta.  (52)

Step 4. Use the Clenshaw and Curtis formula [12]

\int_{-1}^{1}F(\eta)\,d\eta\approx\frac{2}{m}\sum_{s=0}^{m}\sum_{i=0}^{m}\frac{\theta_s\,F(\eta_s)}{2i+1}\bigl[T_s(\eta_{2i})-T_s(\eta_{2i+2})\bigr],  (53)

where

\theta_0=\theta_m=\tfrac12,\quad \theta_s=1\ (s=1,2,\ldots,m-1),\qquad \eta_i=\cos\!\left(\frac{\pi i}{m}\right)\ (i\le m),\quad \eta_i=-1\ (i>m),  (54)

to approximate the integral (52) as a finite sum of shifted Chebyshev polynomials as follows:

\min J=\frac{1}{m}\sum_{s=0}^{m}\sum_{i=0}^{m}\frac{\theta_s}{2i+1}\left(\frac{1}{2}(\eta_s+1)\left[\sum_{r=0}^{N}d_{\eta_s,r}^{1}x(\eta_r)+\sum_{r=0}^{N}d_{\eta_s,r}^{\alpha}x(\eta_r)-\left(\frac{1}{2}(\eta_s+1)\right)^{2}\right]-(\alpha+2)\sum_{n=0}^{N}{}''\,a_nT_n(\eta_s)\right)^{2}\bigl[T_s(\eta_{2i})-T_s(\eta_{2i+2})\bigr].  (55)
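For reference, the sketch below implements Clenshaw–Curtis quadrature on [−1, 1] in its standard coefficient-based form (integrating the Chebyshev interpolant term by term) rather than as a line-by-line transcription of the telescoped representation (53); m is an illustrative choice.

import numpy as np

def clenshaw_curtis(F, m: int) -> float:
    eta = np.cos(np.pi * np.arange(m + 1) / m)            # Chebyshev extreme points
    half = np.ones(m + 1)
    half[0] = half[-1] = 0.5                               # halve first and last terms
    vals = np.array([F(e) for e in eta])
    total = 0.0
    for j in range(0, m + 1, 2):                           # odd-degree terms integrate to zero
        a_j = (2.0 / m) * np.sum(half * vals * np.cos(j * np.arange(m + 1) * np.pi / m))
        if j in (0, m):
            a_j *= 0.5                                     # double-primed sum convention
        total += a_j * 2.0 / (1.0 - j * j)                 # integral of T_j over [-1, 1]
    return total

print(clenshaw_curtis(lambda e: e * e, 8))                 # integral of eta^2 is 2/3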

Step 5. According to the Rayleigh-Ritz method, the critical points of the objective functional (42a) are given by

\frac{\partial J}{\partial x(t_1)}=0,\quad \frac{\partial J}{\partial x(t_2)}=0,\ \ldots,\ \frac{\partial J}{\partial x(t_{N-1})}=0,  (56)

which leads to a system of nonlinear algebraic equations. Solve this system by Newton's method to obtain x(t 1), x(t 2),…, x(t N−1) and use the boundary conditions to get x(t 0),  x(t N). Then, the pair (x, u) which solves the FOCP has the form

x(t)=\frac{2}{N}\sum_{n=0}^{N}{}''\sum_{r=0}^{N}{}''\,x(t_r)T_n^{*}(t_r)T_n^{*}(t),  (57a)
u(t)=\dot{x}(t)+{}_0^{C}D_t^{\alpha}x(t)-t^{2}.  (57b)
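Once the nodal values of x are available, (57b) evaluated at the nodes recovers the control directly from the discretized state equation. A one-line sketch, with D1, Da, and t as in the earlier sketches:

import numpy as np

def recover_control(x_nodes: np.ndarray, D1: np.ndarray, Da: np.ndarray, t: np.ndarray) -> np.ndarray:
    # u(t_s) ~ sum_r (d^1 + d^alpha)_{s,r} x(t_r) - t_s^2, i.e., (57b) at the nodes.
    return (D1 + Da) @ x_nodes - t ** 2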

Figures 5, 6, 7, and 8 display the exact and approximate state x and the exact and approximate control u for α = 1/2 and N = m = 2,3.

Figure 5. Exact and approximate state.

Figure 6. Exact and approximate control.

Figure 7. Exact and approximate state.

Figure 8. Exact and approximate control.

Table 2 contains the maximum errors in the state x and in the control u for N = m = 2, N = m = 3, and N = m = 5.

Table 2.

Maximum errors in the state x and in the control u for different values of N.

                     N = m = 2      N = m = 3      N = m = 5
Max. error in x      3.03292E−2     3.4641E−3      2.6416E−4
Max. error in u      2.69495E−1     4.8393E−2      8.0532E−3

A comparison of Tables 1 and 2 reveals that both algorithms yield comparable numerical results which are more accurate than those obtained by the algorithm used in [8].

Example 2 —

We consider the following linear-quadratic optimal control problem:

\min\ J(x,u)=\int_0^{1}\bigl(u(t)-x(t)\bigr)^{2}\,dt,  (58a)

subject to the dynamical system

\dot{x}(t)+{}_0^{C}D_t^{\alpha}x(t)=u(t)-x(t)+\frac{6t^{\alpha+2}}{\Gamma(\alpha+3)}+t^{3},  (58b)

and the boundary conditions

x(0)=0,\qquad x(1)=\frac{6}{\Gamma(\alpha+4)}.  (58c)

The exact solution is given by

\bigl(x(t),u(t)\bigr)=\left(\frac{6t^{\alpha+3}}{\Gamma(\alpha+4)},\ \frac{6t^{\alpha+3}}{\Gamma(\alpha+4)}\right).  (59)

We note that, for Example 2, the optimality conditions stated in Theorem 2 are also sufficient (cf. Remark 3).

Table 3 contains a comparison between the maximum error in the state x and in the control u for Algorithms A and B.

Table 3.

Maximum errors in the state x and in the control u for Algorithms A and B.

                     Alg. A, N = 3   Alg. B, N = m = 3   Alg. A, N = 5   Alg. B, N = m = 5
Max. error in x      7.6404E−3       1.1943E−2           7.8604E−5       1.0304E−4
Max. error in u      7.6404E−3       1.6339E−1           7.8604E−5       1.0600E−3

As opposed to Example 1, in this case, Algorithm A performs substantially better than Algorithm B.

6. Conclusions

In this paper, we have presented two algorithms for the numerical solution of a wide class of fractional optimal control problems, one based on the “optimize first, then discretize” approach and the other one on the “discretize first, then optimize” strategy. In both algorithms, the solution is approximated by N-term truncated Chebyshev series. Numerical results for two illustrative examples show that the algorithms converge as the number of terms is increased and that the first algorithm is more accurate than the second one.

Acknowledgments

R. H. W. Hoppe has been supported by the DFG Priority Programs SPP 1253 and SPP 1506, by the NSF Grants DMS-0914788, DMS-1115658, and by the European Science Foundation within the Networking Programme “OPTPDE.”

References

  • 1. Torvik PJ, Bagley RL. On the appearance of the fractional derivative in the behavior of real materials. Journal of Applied Mechanics. 1984;51(2):294–298.
  • 2. Khader MM, Sweilam NH, Mahdy AMS. An efficient numerical method for solving the fractional diffusion equation. Journal of Applied Mathematics and Bioinformatics. 2011;1(2):1–12.
  • 3. Oustaloup A, Levron F, Mathieu B, Nanot FM. Frequency-band complex noninteger differentiator: characterization and synthesis. IEEE Transactions on Circuits and Systems I. 2000;47(1):25–39.
  • 4. Tricaud C, Chen Y-Q. An approximate method for numerically solving fractional order optimal control problems of general form. Computers and Mathematics with Applications. 2010;59(5):1644–1655.
  • 5. Zamani M, Karimi-Ghartemani M, Sadati N. FOPID controller design for robust performance using particle swarm optimization. Journal of Fractional Calculus and Applied Analysis. 2007;10(2):169–188.
  • 6. Agrawal OP. A general formulation and solution scheme for fractional optimal control problems. Nonlinear Dynamics. 2004;38(1–4):323–337.
  • 7. Khader MM, Sweilam NH, Mahdy AMS. Numerical study for the fractional differential equations generated by optimization problem using Chebyshev collocation method and FDM. Applied Mathematics & Information Sciences. 2013;7(5):2011–2018.
  • 8. Pooseh S, Almeida R, Torres DFM. A numerical scheme to solve fractional optimal control problems. Conference Papers in Mathematics. 2013;2013:165298, 10 pages.
  • 9. Sweilam NH, Khader MM, Mahdy AMS. Computational methods for fractional differential equations generated by optimization problem. Journal of Fractional Calculus and Applications. 2012;3:1–12.
  • 10. Khalifa AK, Elbarbary EME, Abd Elrazek MA. Chebyshev expansion method for solving second and fourth-order elliptic equations. Applied Mathematics and Computation. 2003;135(2-3):307–318.
  • 11. Sweilam NH, Khader MM. A Chebyshev pseudo-spectral method for solving fractional order integro-differential equations. ANZIAM Journal. 2010;51(4):464–475.
  • 12. Clenshaw CW, Curtis AR. A method for numerical integration on an automatic computer. Numerische Mathematik. 1960;2(1):197–205.
  • 13. Almeida R, Torres DFM. Necessary and sufficient conditions for the fractional calculus of variations with Caputo derivatives. Communications in Nonlinear Science and Numerical Simulation. 2011;16(3):1490–1500.
  • 14. Oldham KB, Spanier J. The Fractional Calculus. New York, NY, USA: Academic Press; 1974.
  • 15. Samko S, Kilbas A, Marichev O. Fractional Integrals and Derivatives: Theory and Applications. London, UK: Gordon and Breach; 1993.
  • 16. Snyder MA. Chebyshev Methods in Numerical Approximation. Englewood Cliffs, NJ, USA: Prentice Hall; 1966.
  • 17. Khader MM, Hendy AS. Fractional Chebyshev finite difference method for solving the fractional BVPs. Journal of Applied Mathematics & Informatics. 2012;31(1-2):299–309.
  • 18. Khader MM, Hendy AS. An efficient numerical scheme for solving fractional optimal control problems. International Journal of Nonlinear Science. 2012;14(3):287–296.
