Journal of Inequalities and Applications. 2017 Jun 24;2017(1):147. doi:10.1186/s13660-017-1420-1

Solving a class of generalized fractional programming problems using the feasibility of linear programs

Peiping Shen, Tongli Zhang, Chunfeng Wang

Abstract

This article presents a new approximation algorithm for globally solving a class of generalized fractional programming problems (P) whose objective functions are defined as an appropriate composition of ratios of affine functions. To solve this problem, the algorithm solves an equivalent optimization problem (Q) via an exploration of a suitably defined nonuniform grid. The main work of the algorithm consists in checking the feasibility of linear programs associated with the interesting grid points. Based on the computational complexity result, it is proved that the proposed algorithm is a fully polynomial time approximation scheme when the number of ratio terms in the objective function of problem (P) is fixed. In contrast to existing results in the literature, the algorithm requires neither quasi-concavity nor low rank of the objective function of problem (P). Numerical results are given to illustrate the feasibility and effectiveness of the proposed algorithm.

Keywords: generalized fractional programming, global optimization, approximation algorithm, computational complexity

Introduction

In a variety of applications, we encounter a class of nonconvex optimization problems as follows:

$$(\mathrm{P}):\quad\begin{cases}\min\ f(x)=G\!\left(\dfrac{c_1^{\top}x+c_{01}}{d_1^{\top}x+d_{01}},\ \dfrac{c_2^{\top}x+c_{02}}{d_2^{\top}x+d_{02}},\ \ldots,\ \dfrac{c_p^{\top}x+c_{0p}}{d_p^{\top}x+d_{0p}}\right)\\[1mm]\ \text{s.t. }x\in\Omega=\{x\in\mathbb{R}^{n}:Ax\le b,\ x\ge 0\},\end{cases}$$

where $c_i,d_i\in\mathbb{R}^{n}$, $c_{0i},d_{0i}\in\mathbb{R}$, $A\in\mathbb{R}^{m\times n}$, $b\in\mathbb{R}^{m}$, $c_i^{\top}x+c_{0i}>0$ and $d_i^{\top}x+d_{0i}>0$ over the nonempty, compact set $\Omega$ for each $i=1,\ldots,p$, and $G:\mathbb{R}^{p}_{+}\to\mathbb{R}_{+}$ is a continuous function.

Problem (P) is worth studying because some important special optimization problems that have been studied in the literature fall into the category of (P), such as multiplicative programs, sum-of-ratios optimization, and fractional polynomial optimization, namely:

  1. Multiplicative programs (MP): in this case the objective function $G$, of the form $G(y_1,\ldots,y_p)=\prod_{i=1}^{p}y_i$ with $y_i=\frac{c_i^{\top}x+c_{0i}}{d_i^{\top}x+d_{0i}}$, is quasi-concave, and its minimum is attained at some extreme point of the polytope [1]. Multiplicative objective functions arise in a variety of practical applications, such as economic analysis [2], robust optimization [3], VLSI chip design [4], combinatorial optimization [5], etc.

  2. Sum-of-ratios (SOR) optimization: SOR functions have the form $G(y_1,\ldots,y_p)=\sum_{i=1}^{p}y_i$ with $y_i=\frac{c_i^{\top}x+c_{0i}}{d_i^{\top}x+d_{0i}}$. Matsui [6] points out that it is NP-hard to minimize SOR functions over a polytope. For the many applications of this form, see the survey paper by Schaible and Shi [7] and the references therein. In particular, a kind of SOR optimization problem of the form $G(y_1,\ldots,y_p)=\sum_{i=1}^{p}|y_i|^{q}$, where $q\ge 0$ and $y_i=\frac{c_i^{\top}x+c_{0i}}{d_i^{\top}x+d_{0i}}$, is considered by Kuno and Masaki [8] as well; such problems often occur in computer vision.

  3. Fractional polynomial optimization: polynomial functions with positive coefficients have the form $G(y_1,\ldots,y_p)=\sum_{j=1}^{m}c_j\prod_{i=1}^{p}y_i^{\gamma_{ij}}$, where $y_i=\frac{c_i^{\top}x+c_{0i}}{d_i^{\top}x+d_{0i}}$, $c_j\ge 0$, and each $\gamma_{ij}$ is a positive integer. Problems of this form have many applications [9], including production planning, engineering design, etc. In addition, from a research point of view, these problems pose significant theoretical and computational challenges because they possess multiple local optima that are not globally optimal.

During the past years, many solution methods have been developed for globally solving special cases of problem (P). These methods can be classified into outer-approximation [10], branch-and-bound [11-14], mixed branch-and-bound and outer-approximation [15], cutting plane [16], parameter-based [17], vertex enumeration [8], heuristic methods [18], etc. However, most of these methods lack either a theoretical analysis of the running time of the algorithm or a performance guarantee for the solutions obtained. To our knowledge, little work has been done on solving ε-approximation problems of (P) without the quasi-concavity and low-rank assumptions, although Locatelli [19] has developed an approximation algorithm for a general class of global optimization problems. We now introduce the definition of the ε-approximation problem related to global optimization.

Definition 1

Given $\varepsilon>0$ and letting $f^{\ast}=\min_{x\in\Omega}f(x)$, a point $\bar{x}\in\Omega$ is said to be an $\varepsilon$-approximation solution for $\min_{x\in\Omega}f(x)$ if

$$f(\bar{x})\le f^{\ast}+\varepsilon|f^{\ast}|.$$

This article focuses on presenting a fully polynomial time approximation scheme (FPTAS) for solving problem (P). An FPTAS for a minimization problem is an approximation algorithm that, for any given ε>0, finds an ε-approximation solution for the problem in running time polynomial in the input size of the problem and 1/ε. As shown by Mittal and Schulz [20], the optimal value of problem (P) cannot be approximated to within any factor unless P = NP. Therefore, in order to obtain an FPTAS for solving problem (P), some extra assumptions on the function G are required in this article (see Section 2).

For special cases of problem (P), many algorithms for the corresponding ε-approximation problems have been developed. Depetrini and Locatelli [21] presented an approximation algorithm for linear fractional-multiplicative problems and pointed out that the algorithm is an FPTAS when the number p of ratio terms is fixed. This result was extended to a wider class of optimization problems by Locatelli [19]. Also, Goyal and Ravi [22] exploited the fact that the minimum of a quasi-concave function is attained at an extreme point of the polytope and proposed an FPTAS for minimizing a class of low-rank quasi-concave functions over a convex set. Mittal and Schulz [20] developed an FPTAS for optimizing a class of low-rank nonconvex functions, without quasi-concavity, over a polytope. In addition, Depetrini et al. [23] and Goyal et al. [24] each gave an FPTAS for a class of optimization problems whose objective functions are products of two linear functions. Shen and Wang [25] presented a linear decomposition approximation algorithm for a class of nonconvex programming problems by dividing the input space into polynomially many grids. Nevertheless, these solution methods [20, 21, 23-25] cannot be directly applied to the case considered in this paper (i.e., problem (P)), where the objective function is a composition of ratios of affine functions without quasi-concavity or low rank.

The aim of this article is to present a solution approach for the class of fractional programming problems (P). By introducing some additional variables, the original problem (P) is first converted into a p-dimensional equivalent problem (Q). Through the construction of a nonuniform grid based on problem (Q), the solution process for the original problem (P) is then transformed into checking the feasibility of a series of linear programming problems. Thus, a new approximation algorithm is presented for globally solving problem (P), based on the exploration of a nonuniform grid over a box. The algorithm requires neither quasi-concavity nor low rank of the function G in problem (P), and it is proved to be an FPTAS when the number p of terms in G is fixed. We emphasize that the exploration technique used in this article differs from the ones given in [19, 21]: we use a different strategy to update the incumbent best value of the objective function g(t) of problem (Q), which requires fewer interesting grid points to be stored and considered by our algorithm than by those of Refs. [19, 21]. Also, the main computational cost of the proposed algorithm is checking the feasibility of linear programs at the interesting grid points, which makes it cheaper and more easily implementable. Finally, problem (P) generalizes the one investigated in [21], and the proposed algorithm can be directly applied to the problem in [19] by replacing the convex feasibility subproblems with linear ones. Numerical results show that the proposed algorithm requires much less computational time than the approaches in [19, 21] to obtain an approximate optimal solution of problem (P) with the same approximation error.

The paper is structured as follows. In Section 2, we discuss the reformulation of problem (P) as a p-dimensional problem. Section 3 presents an approximation algorithm for obtaining an ε-approximation solution for problem (P) and shows, via its computational complexity, that it is an FPTAS. Some numerical results are reported in Section 4. Finally, conclusions are presented in Section 5.

Parametric reformulation of the problem

For solving problem (P), throughout this paper, we assume that G satisfies:

  • $G(y)\le G(y')$ for all $y,y'\in\mathbb{R}^{p}_{+}$ with $y_i\le y'_i$, $i=1,\ldots,p$, and

  • $\delta^{k}G(y)\le G(\delta y)$ for all $y\in\mathbb{R}^{p}_{+}$, $\delta\in(0,1)$, and some constant $k$.
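As a quick check of these two conditions in the leading special cases (our own verification, not part of the original text):

$$G(y)=\prod_{i=1}^{p}y_i:\qquad G(\delta y)=\prod_{i=1}^{p}\delta y_i=\delta^{p}G(y),\quad\text{so the second condition holds with }k=p;$$

$$G(y)=\sum_{i=1}^{p}y_i:\qquad G(\delta y)=\sum_{i=1}^{p}\delta y_i=\delta\,G(y),\quad\text{so it holds with }k=1;$$

monotonicity is immediate in both cases since every $y_i\ge 0$.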

There are a number of functions G satisfying the above conditions, such as the product of a constant number (say p) of linear functions (with $k=p$, as verified above), the sum of linear ratio functions (with $k=1$), etc. This paper presents an approximation algorithm for solving problem (P) under the above assumptions. For this purpose, let us introduce p variables $y_i$, $i=1,\ldots,p$; problem (P) is then equivalent to the form:

$$(\mathrm{P1}):\quad\begin{cases}\min\ G(y)\\ \text{s.t. }\dfrac{c_i^{\top}x+c_{0i}}{d_i^{\top}x+d_{0i}}\le y_i,\quad i=1,\ldots,p,\\ \phantom{\text{s.t. }}x\in\Omega.\end{cases}$$

Theorem 1

$x^{\ast}$ is a global optimal solution of problem (P) if and only if $(x^{\ast},y^{\ast})$ is a global optimal solution of problem (P1) with $y_i^{\ast}=\frac{c_i^{\top}x^{\ast}+c_{0i}}{d_i^{\top}x^{\ast}+d_{0i}}$ for each $i=1,\ldots,p$. The minimal objective function values of problems (P) and (P1) are equal, i.e., $f(x^{\ast})=G(y^{\ast})$.

Proof

Let $(x^{\ast},y^{\ast})$ be a global optimal solution of problem (P1). Suppose that $x^{\ast}$ is not a global optimal solution of problem (P); then there exists $\bar{x}\in\Omega$ such that

$$f(\bar{x})<f(x^{\ast}). \tag{2.1}$$

Let $\bar{y}_i=\frac{c_i^{\top}\bar{x}+c_{0i}}{d_i^{\top}\bar{x}+d_{0i}}$, $i=1,\ldots,p$. Then $(\bar{x},\bar{y})$ is a feasible solution of problem (P1), and from (2.1) we have

$$G(\bar{y})=f(\bar{x})<f(x^{\ast}). \tag{2.2}$$

On the other hand, since $(x^{\ast},y^{\ast})$ is a feasible solution of problem (P1), we have $\frac{c_i^{\top}x^{\ast}+c_{0i}}{d_i^{\top}x^{\ast}+d_{0i}}\le y_i^{\ast}$, $i=1,\ldots,p$. Therefore, from the monotonicity assumption on G, it holds that

$$f(x^{\ast})=G\!\left(\frac{c_1^{\top}x^{\ast}+c_{01}}{d_1^{\top}x^{\ast}+d_{01}},\frac{c_2^{\top}x^{\ast}+c_{02}}{d_2^{\top}x^{\ast}+d_{02}},\ldots,\frac{c_p^{\top}x^{\ast}+c_{0p}}{d_p^{\top}x^{\ast}+d_{0p}}\right)\le G(y^{\ast}). \tag{2.3}$$

Combining (2.2) with (2.3), we obtain $G(\bar{y})<G(y^{\ast})$. Since $(\bar{x},\bar{y})$ is a feasible solution of problem (P1), this contradicts the optimality of $(x^{\ast},y^{\ast})$ for problem (P1). Therefore, the supposition that $x^{\ast}$ is not a global optimal solution of problem (P) must be false.

Next, we show the converse. Let $x^{\ast}$ be a global optimal solution of problem (P), and let $y_i^{\ast}=\frac{c_i^{\top}x^{\ast}+c_{0i}}{d_i^{\top}x^{\ast}+d_{0i}}$, $i=1,\ldots,p$. Then $(x^{\ast},y^{\ast})$ is a feasible solution of problem (P1). Suppose there exists a feasible solution $(\bar{x},\bar{y})$ of problem (P1) such that

$$G(\bar{y})<G(y^{\ast})=f(x^{\ast}). \tag{2.4}$$

Then, from $\frac{c_i^{\top}\bar{x}+c_{0i}}{d_i^{\top}\bar{x}+d_{0i}}\le\bar{y}_i$, $i=1,\ldots,p$, it follows that

$$f(\bar{x})=G\!\left(\frac{c_1^{\top}\bar{x}+c_{01}}{d_1^{\top}\bar{x}+d_{01}},\frac{c_2^{\top}\bar{x}+c_{02}}{d_2^{\top}\bar{x}+d_{02}},\ldots,\frac{c_p^{\top}\bar{x}+c_{0p}}{d_p^{\top}\bar{x}+d_{0p}}\right)\le G(\bar{y}). \tag{2.5}$$

By (2.4)-(2.5), we have $f(\bar{x})<G(y^{\ast})=f(x^{\ast})$. Since $\bar{x}\in\Omega$, this contradicts the optimality of $x^{\ast}$ for problem (P). Hence, $(x^{\ast},y^{\ast})$ must be an optimal solution of (P1). From the above and the assumptions on G, we clearly have $f(x^{\ast})=G(y^{\ast})$. □

Based on the above theorem, in order to solve problem (P) we may solve problem (P1) instead. Additionally, it is known that each single ratio $\frac{c_i^{\top}x+c_{0i}}{d_i^{\top}x+d_{0i}}$ is both quasi-concave and quasi-convex, so its minimum and maximum must each be attained at some vertex of Ω (see, e.g., [26]). To this end, let us denote

$$l_i=\min_{x\in\Omega}\frac{c_i^{\top}x+c_{0i}}{d_i^{\top}x+d_{0i}},\qquad u_i=\max_{x\in\Omega}\frac{c_i^{\top}x+c_{0i}}{d_i^{\top}x+d_{0i}},\qquad i=1,\ldots,p, \tag{2.6}$$

and let

$$H=\{y\in\mathbb{R}^{p}:l_i\le y_i\le u_i,\ i=1,\ldots,p\}.$$
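Since each bound in (2.6) is the optimum of a single linear ratio over a polytope, it can be computed by solving one linear program. The following is a minimal sketch of one standard way to do so, via the Charnes-Cooper transformation; the function name `ratio_bound` and the use of SciPy are our own choices, not part of the paper, which only states that the bounds are attained at vertices of Ω and are computable in polynomial time.

```python
# Sketch: computing l_i, u_i in (2.6) via the Charnes-Cooper transformation.
# Substituting z = x / (d.x + d0), s = 1 / (d.x + d0) (valid since d.x + d0 > 0
# and Omega is compact) turns min (c.x + c0)/(d.x + d0) over {Ax <= b, x >= 0}
# into the LP: min c.z + c0*s  s.t.  Az - b*s <= 0, d.z + d0*s = 1, z, s >= 0.
import numpy as np
from scipy.optimize import linprog

def ratio_bound(c, c0, d, d0, A, b, sense=1):
    """Return min (sense=+1) or max (sense=-1) of (c.x+c0)/(d.x+d0) over Ax<=b, x>=0."""
    n = len(c)
    obj = sense * np.append(c, c0)             # variables (z, s)
    A_ub = np.hstack([A, -b.reshape(-1, 1)])   # Az - b*s <= 0
    A_eq = np.append(d, d0).reshape(1, -1)     # d.z + d0*s = 1
    res = linprog(obj, A_ub=A_ub, b_ub=np.zeros(A.shape[0]),
                  A_eq=A_eq, b_eq=[1.0], bounds=[(0, None)] * (n + 1))
    return sense * res.fun
```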

Now, for each $t\in H$, let us define the set

$$S(t)=\{x\in\Omega:c_i^{\top}x+c_{0i}\le t_i(d_i^{\top}x+d_{0i}),\ i=1,\ldots,p\},$$

and the corresponding function g(t) given by

$$g(t)=\begin{cases}G(t), & \text{if }S(t)\neq\emptyset,\\ +\infty, & \text{otherwise}.\end{cases}$$

Clearly, for any given $t\in H$, we can decide whether $S(t)$ is empty by checking the feasibility of a linear program, which can be done in polynomial time (a sketch of this test follows the formulation below). Based on this, it turns out that problem (P1) is equivalent to the following p-dimensional problem:

$$(\mathrm{Q}):\quad\min_{t\in H}\ g(t).$$
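For a given grid point t, the emptiness test for S(t) is a pure linear feasibility problem with m+p constraints. A minimal sketch, again using SciPy and assuming the setup above (`S_is_nonempty` is our name, not the paper's):

```python
# Sketch: testing whether S(t) is empty by LP feasibility with a zero objective.
# S(t) adds the p linear constraints (c_i - t_i d_i).x <= t_i d0_i - c0_i
# to Ax <= b, x >= 0.
def S_is_nonempty(t, C, c0, D, d0, A, b):
    """C, D: p-by-n arrays of rows c_i, d_i; c0, d0, t: length-p arrays."""
    A_t = np.vstack([A, C - t[:, None] * D])     # m + p rows, n columns
    b_t = np.concatenate([b, t * d0 - c0])
    res = linprog(np.zeros(A.shape[1]), A_ub=A_t, b_ub=b_t,
                  bounds=[(0, None)] * A.shape[1])
    return res.status == 0                       # status 0: a feasible point exists
```

When the test succeeds, the returned `res.x` is a point of Ω whose ratios are all bounded above by t, which is exactly the kind of point the algorithm extracts at termination.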

According to the definition of g(t), we have the following conclusion.

Theorem 2

Given $\varepsilon>0$, let $\delta=\left(\frac{1}{1+\varepsilon}\right)^{1/k}$. Then, for each $\bar{t}\in H$, it holds that

$$g(\bar{t})\le(1+\varepsilon)g(t),\qquad\forall t\in[\delta\bar{t},\bar{t}].$$

Proof

From the definition of $S(t)$ and $\delta=\left(\frac{1}{1+\varepsilon}\right)^{1/k}\in(0,1)$, we have $S(\delta\bar{t})\subseteq S(t)\subseteq S(\bar{t})$ for each $t\in[\delta\bar{t},\bar{t}]$. When $S(\delta\bar{t})\neq\emptyset$, it follows that $S(t)\neq\emptyset$, and hence $g(t)=G(t)$, for each $t\in[\delta\bar{t},\bar{t}]$. With the assumptions on G, it holds that

$$(1+\varepsilon)g(t)=(1+\varepsilon)G(t)\ge(1+\varepsilon)G(\delta\bar{t})\ge(1+\varepsilon)\delta^{k}G(\bar{t})=G(\bar{t})=g(\bar{t}),\qquad\forall t\in[\delta\bar{t},\bar{t}].$$

When $S(\delta\bar{t})=\emptyset$ and $S(\bar{t})\neq\emptyset$, since $g(t)\ge G(t)$ always holds by the definition of g, we similarly have

$$(1+\varepsilon)g(t)\ge(1+\varepsilon)G(t)\ge(1+\varepsilon)G(\delta\bar{t})\ge(1+\varepsilon)\delta^{k}G(\bar{t})=G(\bar{t})=g(\bar{t}),\qquad\forall t\in[\delta\bar{t},\bar{t}].$$

When $S(\bar{t})=\emptyset$, we have $S(t)=\emptyset$, and thus $g(t)=+\infty$, for any $t\in[\delta\bar{t},\bar{t}]$, so the conclusion holds trivially. □
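For a concrete numerical illustration (our numbers, not the paper's): with $\varepsilon=0.2$ and $k=1$ we get $\delta=\frac{1}{1.2}\approx 0.8333$, so the value of g at the upper corner $\bar{t}$ of the box $[\delta\bar{t},\bar{t}]$ exceeds the value of g anywhere in that box by a factor of at most $1+\varepsilon=1.2$; with $k=2$, $\delta=(1/1.2)^{1/2}\approx 0.9129$, i.e., the boxes must be smaller, and hence the grid finer, for the same guarantee.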

The approximation algorithm

The algorithm and its convergence

In this subsection, by using Theorem 2 above, we present an approximation algorithm for solving problem (P), and prove that the algorithm can find an ε-approximation solution for problem (P).

The proposed algorithm adopts an exploration technique over a suitably defined nonuniform grid on H. In the algorithm, T is the set of all stored interesting grid points that will be further analyzed, W is the set of grid points already discarded, and X is the set of grid points newly generated at each iteration. Moreover, U denotes the best value of the function g(t) obtained so far, and $t^{\ast}$ is such that $U=g(t^{\ast})$. The algorithm starts with $t=(u_1,\ldots,u_p)$ and $U=g(t)$. In each iteration, we select a point $\bar{t}\in T$ and calculate $\bar{a}=\min\{a\in\mathbb{N}:S(\delta^{a}\bar{t})=\emptyset\}$, where $\mathbb{N}$ denotes the set of natural numbers. If $\bar{a}=0$, we select a new point from T. Otherwise, $S(\delta^{\bar{a}-1}\bar{t})\neq\emptyset$, and so $S(t)\neq\emptyset$, i.e., $g(t)=G(t)$, for each $t\in[\delta^{\bar{a}-1}\bar{t},\bar{t}]$. Since G is nondecreasing, it holds that $g(\delta^{\bar{a}-1}\bar{t})=\min_{t\in[\delta^{\bar{a}-1}\bar{t},\bar{t}]}g(t)$. In addition, for any $t\in\{t:\delta^{\bar{a}}\bar{t}_i<t_i\le\bar{t}_i,\ i=1,\ldots,p\}=(\delta^{\bar{a}}\bar{t},\bar{t}]$, there exists an integer vector $\tau=(\tau_1,\ldots,\tau_p)$ with $\tau_i\in\{0,1,\ldots,\bar{a}-1\}$ such that $t_i\in(\delta^{\tau_i+1}\bar{t}_i,\delta^{\tau_i}\bar{t}_i]$ for each i; thus, by Theorem 2, $(1+\varepsilon)g(t)\ge g(\delta^{\tau}\bar{t})$ for any $t\in(\delta^{\tau+1}\bar{t},\delta^{\tau}\bar{t}]$, where $\delta^{\tau}\bar{t}=(\delta^{\tau_1}\bar{t}_1,\ldots,\delta^{\tau_p}\bar{t}_p)$. Since all points $\delta^{\tau}\bar{t}$ with $\tau_i\in\{0,1,\ldots,\bar{a}-1\}$ belong to $[\delta^{\bar{a}-1}\bar{t},\bar{t}]$, we have

$$(1+\varepsilon)g(t)\ge\min_{\tau_i\in\{0,1,\ldots,\bar{a}-1\},\forall i}g(\delta^{\tau_1}\bar{t}_1,\ldots,\delta^{\tau_p}\bar{t}_p)=g(\delta^{\bar{a}-1}\bar{t})\ge\min\{U,g(\delta^{\bar{a}-1}\bar{t})\},\qquad\forall t\in(\delta^{\bar{a}}\bar{t},\bar{t}].$$

Hence, it is reasonable to update $U=\min\{U,g(\delta^{\bar{a}-1}\bar{t})\}$ and $t^{\ast}$ such that $g(t^{\ast})=U$. Next, we consider the $2^{p}$ new points $(\xi_1\bar{t}_1,\ldots,\xi_p\bar{t}_p)$ with $\xi_i\in\{\delta^{\bar{a}},1\}$ for all i, discard all points satisfying $\xi_i\bar{t}_i<l_i$ for some i, add the remaining points to X, and then update $T=(T\cup X)\setminus W$. This process is repeated until $T=\emptyset$. At termination, each point $x\in S(t^{\ast})$ is an approximate solution of problem (P). The detailed algorithm is summarized as Algorithm 1.

Algorithm 1. Approximation algorithm statement (given as a figure in the original article).
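Since the statement of Algorithm 1 itself survives only as a figure, the following Python sketch reconstructs it from the description above; it is our reading, not the authors' code, and it relies on the helpers `ratio_bound` and `S_is_nonempty` sketched in Section 2.

```python
# Reconstruction of Algorithm 1 from the textual description; (k1)/(k2) mark
# the two phases of each iteration referred to in the text.
from itertools import product
import numpy as np

def approx_solve(G, k, C, c0, D, d0, A, b, eps):
    p = C.shape[0]
    l = np.array([ratio_bound(C[i], c0[i], D[i], d0[i], A, b, +1) for i in range(p)])
    u = np.array([ratio_bound(C[i], c0[i], D[i], d0[i], A, b, -1) for i in range(p)])
    delta = (1.0 / (1.0 + eps)) ** (1.0 / k)
    t_best, U = u.copy(), G(u)             # start at t = (u_1, ..., u_p)
    T = [u.copy()]                         # stored interesting grid points
    seen = {tuple(u)}                      # already-generated points (the role of W)
    while T:
        t_bar = T.pop()
        a = 0                              # (k1): a_bar = min{a : S(delta^a t_bar) empty}
        while S_is_nonempty(delta**a * t_bar, C, c0, D, d0, A, b):
            a += 1                         # monotone: S(delta^(a+1) t) in S(delta^a t)
        if a > 0:                          # (k2): best grid value on [delta^(a-1) t_bar, t_bar]
            cand = delta**(a - 1) * t_bar
            if G(cand) < U:
                U, t_best = G(cand), cand
            for xi in product((delta**a, 1.0), repeat=p):   # the 2^p new points
                nxt = np.array(xi) * t_bar
                key = tuple(np.round(nxt, 12))
                if np.all(nxt >= l) and key not in seen:    # discard points below l
                    seen.add(key)
                    T.append(nxt)
    return t_best, U                       # any x in S(t_best) is eps-approximate
```

A final call to `S_is_nonempty` at `t_best`, keeping the LP's feasible point, recovers the ε-approximate solution $x\in S(t^{\ast})$.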

Theorem 3

The proposed algorithm can find an ε-approximation solution for problem (P).

Proof

Note that the algorithm evaluates the function g(t) at points of the form

$$(\delta^{s_1}u_1,\ldots,\delta^{s_p}u_p),$$

where $s_i\in\mathbb{N}$ satisfies

$$0\le s_i\le\bar{s}_i:=\max\{s:\delta^{s}u_i\ge l_i\},\qquad i=1,\ldots,p. \tag{3.1}$$

For any $t\in H$, there is an integer vector $(s_1,\ldots,s_p)$ with $0\le s_i\le\bar{s}_i$, $i=1,\ldots,p$, such that $t\in\prod_{i=1}^{p}[\delta^{s_i+1}u_i,\delta^{s_i}u_i]$. Thus, in view of Theorem 2 and the definition of δ, it holds that $g(\delta^{s_1}u_1,\ldots,\delta^{s_p}u_p)\le(1+\varepsilon)g(t)$ for each $t\in\prod_{i=1}^{p}[\delta^{s_i+1}u_i,\delta^{s_i}u_i]$. Hence, we have

$$\min_{s_i\in\{0,1,\ldots,\bar{s}_i\},\forall i}g(\delta^{s_1}u_1,\ldots,\delta^{s_p}u_p)\le(1+\varepsilon)\min_{t\in H}g(t).$$

On the other hand, let us denote $t^{\ast}=(\delta^{s_1^{\ast}}u_1,\ldots,\delta^{s_p^{\ast}}u_p)$ such that

$$g(t^{\ast})=\min_{s_i\in\{0,1,\ldots,\bar{s}_i\},\forall i}g(\delta^{s_1}u_1,\ldots,\delta^{s_p}u_p).$$

From Step (k2) of the algorithm, we know $S(t^{\ast})\neq\emptyset$, so by the definition of $S(t)$ there exists a point $x^{\ast}\in S(t^{\ast})$. Now, let us denote $\tilde{t}_i=\frac{c_i^{\top}x^{\ast}+c_{0i}}{d_i^{\top}x^{\ast}+d_{0i}}$, $i=1,\ldots,p$; then $x^{\ast}\in S(\tilde{t})$ and $\tilde{t}_i\le t_i^{\ast}$. Combining this with the definition of g(t), we see that $g(\tilde{t})\le g(t^{\ast})$. Thus, we conclude that

$$(1+\varepsilon)\min_{x\in\Omega}f(x)=(1+\varepsilon)\min_{t\in H}g(t)\ge\min_{s_i\in\{0,1,\ldots,\bar{s}_i\},\forall i}g(\delta^{s_1}u_1,\ldots,\delta^{s_p}u_p)=g(t^{\ast})\ge g(\tilde{t})=f(x^{\ast}).$$

Therefore, the point $x^{\ast}$ is an ε-approximation solution of problem (P) by Definition 1. □

The complexity of the algorithm

In this subsection, the computational complexity of the algorithm is analyzed in order to show that the approximation algorithm is an FPTAS for fixed p. For this purpose, we need the following lemma from Ref. [27]. Let $\Omega=\{x\in\mathbb{R}^{n}:Ax\le b,\ x\ge 0\}$ be a polyhedron with $A\in\mathbb{R}^{m\times n}$ and $b\in\mathbb{R}^{m}$, and denote

$$\bar{\lambda}=\max\{1,|A_{ij}|,|b_i|:i=1,\ldots,m,\ j=1,\ldots,n\}.$$

Then we have the following lemma.

Lemma 1

[27]

Let $x^{0}$ be a vertex of Ω. Then, for each $j=1,\ldots,n$, it holds that

$$x_j^{0}=p_j/q,$$

where $p_j,q\in\mathbb{R}$ with

$$0\le p_j\le(n\bar{\lambda})^{n},\qquad 0<q\le(n\bar{\lambda})^{n}. \tag{3.2}$$

Lemma 2

Given $\varepsilon>0$, let $\delta=\left(\frac{1}{1+\varepsilon}\right)^{1/k}$. The number of points $(\delta^{s_1}u_1,\ldots,\delta^{s_p}u_p)$ satisfying (3.1), at which the feasibility of the corresponding linear programs is checked by the proposed algorithm, is not more than

$$\prod_{i=1}^{p}\left[1+\frac{k}{\varepsilon}\ln\left(\frac{u_i}{l_i}\right)\right].$$

Proof

Note that $\delta=\left(\frac{1}{1+\varepsilon}\right)^{1/k}\in(0,1)$ is fixed once $\varepsilon>0$ is given. Since the points $(\delta^{s_1}u_1,\ldots,\delta^{s_p}u_p)$ of the nonuniform grid over H satisfy (3.1), the number of these grid points equals $\prod_{i=1}^{p}(\bar{s}_i+1)$. Moreover, by the proposed algorithm, the number of points $(\delta^{s_1}u_1,\ldots,\delta^{s_p}u_p)$ at which the feasibility of linear programs must be checked is no larger than $\prod_{i=1}^{p}(\bar{s}_i+1)$. In view of the definitions of $\bar{s}_i$ and δ, we have

$$\bar{s}_i\le\frac{\ln(l_i/u_i)}{\ln\delta}=\frac{k\ln(u_i/l_i)}{\ln(1+\varepsilon)},\qquad i=1,\ldots,p.$$

Since $\ln(1+\varepsilon)\approx\varepsilon$ for sufficiently small $\varepsilon>0$, the number of points at which the feasibility of linear programs must be checked is no larger than $\prod_{i=1}^{p}\left[1+\frac{k}{\varepsilon}\ln\left(\frac{u_i}{l_i}\right)\right]$. □
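As a quick numeric sanity check of this bound (illustrative values of l and u chosen by us, not taken from the paper):

```python
# Evaluate the Lemma 2 bound prod_i [1 + (k/eps) ln(u_i / l_i)] for sample data.
import numpy as np

def grid_bound(l, u, k, eps):
    return float(np.prod(1.0 + (k / eps) * np.log(np.asarray(u) / np.asarray(l))))

# p = 2 ratios with l = (0.4, 1.0), u = (1.3, 2.0), k = 1, eps = 0.2:
print(grid_bound([0.4, 1.0], [1.3, 2.0], k=1, eps=0.2))   # approx 30.8
```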

By the proposed algorithm, the computational cost of finding an ε-approximation solution for problem (P) comprises the computation of the box H and the calculation of ā at Step (k1) of the algorithm in each iteration. It is known that each $l_i$ and $u_i$ is attained at some vertex of Ω (see, e.g., [26]) and can be computed in polynomial time; thus H can be determined in polynomial time. On the other hand, the main work of the algorithm is the calculation of ā at each iteration (see Step (k1)), because this calculation requires checking the feasibility of some linear programs with m+p constraints and n variables. In other words, the computational cost of the algorithm lies in checking the feasibility of linear programs at the interesting grid points. Let T(m+p,n) denote the cost of checking the feasibility of a linear programming problem with m+p constraints and n variables.

In order to give the computational cost of the proposed algorithm, without loss of generality we can assume that

$$c_i^{\top}x+c_{0i}\ge 1,\qquad d_i^{\top}x+d_{0i}\ge 1,\qquad\forall x\in\Omega. \tag{3.3}$$

This is because

$$\frac{c_i^{\top}x+c_{0i}}{d_i^{\top}x+d_{0i}}=\frac{M_i(c_i^{\top}x+c_{0i})}{M_i(d_i^{\top}x+d_{0i})},\qquad i=1,\ldots,p,$$

for sufficiently large $M_i\in\mathbb{R}$ chosen such that $M_i(c_i^{\top}x+c_{0i})\ge 1$ and $M_i(d_i^{\top}x+d_{0i})\ge 1$ for any $x\in\Omega$. Based on the above discussion, combining Lemmas 1 and 2 leads to the following theorem.

Theorem 4

When p is a fixed positive integer, the number of operations required by the proposed algorithm to obtain an ε-approximation solution for problem (P) is not larger than

$$O\!\left(\left[\frac{2k(n+1)\ln(n\lambda)}{\varepsilon}\right]^{p}T(m+p,n)\right),$$

where $\lambda=\max\{\bar{\lambda},|c_{ij}|,|d_{ij}|,|c_{0i}|,|d_{0i}|:i=1,\ldots,p,\ j=1,\ldots,n\}$.

Proof

Let $x^{l_i}$, $x^{u_i}$ be vertices of Ω with $l_i=\frac{c_i^{\top}x^{l_i}+c_{0i}}{d_i^{\top}x^{l_i}+d_{0i}}$ and $u_i=\frac{c_i^{\top}x^{u_i}+c_{0i}}{d_i^{\top}x^{u_i}+d_{0i}}$, $i=1,2,\ldots,p$. It follows from Lemma 1 that

$$x_j^{l_i}=p_j^{l_i}/q^{l_i},\qquad x_j^{u_i}=p_j^{u_i}/q^{u_i},\qquad j=1,\ldots,n,\ i=1,\ldots,p,$$

where $p_j^{l_i}$, $q^{l_i}$, $p_j^{u_i}$, $q^{u_i}$ satisfy (3.2). Let $\rho=\max\{1,1/q^{l_i},1/q^{u_i}:i=1,\ldots,p\}$. Combining Lemma 1 and the definition of λ leads to

$$d_i^{\top}x^{l_i}+d_{0i}=\sum_{j=1}^{n}d_{ij}p_j^{l_i}/q^{l_i}+d_{0i}\le\rho\sum_{j=1}^{n}d_{ij}p_j^{l_i}+\lambda\le\rho n^{n+1}\lambda^{n+1}+\lambda\le 2\rho n^{n+1}\lambda^{n+1}.$$

Thus, with (3.3), it holds that

$$l_i=\frac{c_i^{\top}x^{l_i}+c_{0i}}{d_i^{\top}x^{l_i}+d_{0i}}\ge\frac{1}{2\rho n^{n+1}\lambda^{n+1}}.$$

Similarly, we obtain $u_i\le 2\rho n^{n+1}\lambda^{n+1}$, and so

$$\ln(u_i/l_i)\le\ln(4\rho^{2}n^{2n+2}\lambda^{2n+2})=2\ln(2\rho)+2(n+1)\ln(n\lambda).$$

Since each interesting grid point requires the solution of a linear feasibility problem with m+p constraints and n variables, by Lemma 2, for given p, the number of operations required by the proposed algorithm is not larger than

$$\left[1+\frac{2k\ln(2\rho)+2k(n+1)\ln(n\lambda)}{\varepsilon}\right]^{p}T(m+p,n)=O\!\left(\left[\frac{2k(n+1)\ln(n\lambda)}{\varepsilon}\right]^{p}T(m+p,n)\right).$$

 □

Remark 1

From Theorem 4, we conclude that the proposed algorithm is an FPTAS for problem (P) when p is fixed. On the other hand, the computational time of the proposed algorithm increases exponentially as p increases. These conclusions can also be observed in the numerical results of the next section.

Remark 2

Notice that the detailed complexity analysis of the proposed algorithm can be used as an indicator of the difficulty of some optimization problems, such as multiplicative programs, sum-of-ratios optimization, etc. Thus, in order to solve these problems efficiently, one should aim to design a more sophisticated approach whose performance is at least as good.

Numerical examples

Although, by Theorem 4, the computational complexity results of the algorithms ([19, 21] and ours) are similar, note that this is worst-case time complexity, one of the most commonly used criteria for evaluating algorithms in optimization. In fact, these complexity results ([19, 21] and ours) are only upper bounds on the computational cost of the algorithms in the worst case, i.e., when all grid points are considered. Hence, to further verify the performance of the proposed algorithm, in this section we compare it with the algorithms in [19, 21] on numerical examples. Because ours is an approximation algorithm for solving the general fractional programming problem (P), we do not attempt comparisons with solution methods for special cases of (P) (e.g., branch-and-bound [11, 12], outer-approximation [15], cutting plane [16], etc.), or with the approximation algorithms in [20, 22], which are restricted to problems satisfying the quasi-concavity or low-rank assumptions on the objective functions. Additionally, all three algorithms ([19, 21] and ours) are based on the exploration of a suitably defined nonuniform grid over a rectangle, but we exploit a different exploration strategy to minimize the objective function over the feasible set, and a different method to update the incumbent best objective value at each iteration, compared with [19, 21].

We implemented the three algorithms ([19, 21] and ours) in MATLAB 2012b and ran the test experiments on a PC with an Intel(R) Core(TM) i3 dual-core CPU (2.33 GHz). Notice that these algorithms use different approaches for computing the lower bound $l_i$ and upper bound $u_i$ of each ratio term in the objective function. Hence, for comparison, each $l_i$, $u_i$ in the three algorithms is computed in the same way (i.e., using (2.6)).

The following notations are used for the column headers of Tables 1, 2, 3: Solution: the approximate optimal solution; Optimum: the approximate optimal value; Iter: the number of algorithm iterations; CPU(s): the execution time in seconds; Nodes: the maximal number of interesting grid points stored; Avg: average performance of the algorithm; Std: standard deviation of performances of the algorithm.

Table 1.

Computational results of Examples 1-5

| Example | Algorithm | ε | Solution | Optimum | Iter | Nodes | CPU(s) |
|---|---|---|---|---|---|---|---|
| 1 | [21] | 0.2 | (0, 0.2816) | 1.6232 | 5,122 | 862 | 185.2 |
| 1 | [19] | 0.2 | (0, 0.2816) | 1.6232 | 1,122 | 327 | 84.9 |
| 1 | Our | 0.2 | (0, 0.2816) | 1.6232 | 17 | 5 | 0.46 |
| 2 | [21] | 0.2 | (5.382 × 10⁻¹⁶, 5.536 × 10⁻¹⁶) | 0.5333 | 631 | 217 | 10.4 |
| 2 | [19] | 0.2 | (5.382 × 10⁻¹⁶, 5.536 × 10⁻¹⁶) | 0.5333 | 362 | 122 | 5.63 |
| 2 | Our | 0.2 | (5.382 × 10⁻¹⁶, 5.536 × 10⁻¹⁶) | 0.5333 | 55 | 13 | 1.83 |
| 3 | [21] | 0.15 | (0, 0, 1.6886, 4.3466, 4.3007, 4.0334, 0, 1.4324, 0.7765, 4.1967, 0, 4.1385) | 0.05115 | 24,569 | 3,727 | 355.2 |
| 3 | [19] | 0.15 | (0, 0, 1.6886, 4.3466, 4.3007, 4.0334, 0, 1.4324, 0.7765, 4.1967, 0, 4.1385) | 0.05115 | 14,669 | 3,215 | 215.2 |
| 3 | Our | 0.15 | (0, 0, 1.6886, 4.3466, 4.3007, 4.0334, 0, 1.4324, 0.7765, 4.1967, 0, 4.1385) | 0.05115 | 70 | 21 | 5.66 |
| 4 | [19] | 0.1 | (1.7177, 2.0155) | 32.39 | 1,998 | 487 | 118.8 |
| 4 | Our | 0.1 | (1.7177, 2.0155) | 32.39 | 41 | 15 | 32.39 |
| 5 | [19] | 0.05 | (2.0814, 2.9963) | 7,709.8 | 4,383 | 1,327 | 256.9 |
| 5 | Our | 0.05 | (2.0814, 2.9963) | 7,709.8 | 924 | 385 | 56.2 |

Table 2.

Computational results of randomly generated test problems with (m,n) = (50,50)

| p | Algorithm | CPU(s) Avg | CPU(s) Std | Iter Avg | Iter Std | Nodes Avg | Nodes Std |
|---|---|---|---|---|---|---|---|
| 2 | [21] | 46.5 | 24.1 | 1,369.4 | 75.3 | 362.6 | 35.9 |
| 2 | [19] | 39.6 | 15.5 | 1,225.2 | 62.8 | 302.0 | 23.4 |
| 2 | Our | 1.2 | 0.5 | 7.8 | 1.6 | 2.2 | 0.5 |
| 4 | [21] | 5,862.1 | 903.8 | 16,973.0 | 994.8 | 3,612.5 | 917.2 |
| 4 | [19] | 4,590.7 | 802.5 | 17,888.1 | 913.8 | 3,294 | 534.1 |
| 4 | Our | 206.2 | 70.8 | 3,144.4 | 172.1 | 912.8 | 111.8 |
| 5 | [21] | 7,102.2 | 913.8 | 29,121.3 | 904.8 | 9,612.5 | 982.5 |
| 5 | [19] | 5,062.4 | 893.1 | 19,373.4 | 924.1 | 6,613.9 | 861.4 |
| 5 | Our | 813.6 | 113.4 | 5,082.9 | 823.7 | 1,403.6 | 813.9 |
| 6 | [21] | - | - | - | - | - | - |
| 6 | [19] | 6,384.7 | 895.2 | 38,359.4 | 921.7 | 11,869.8 | 938.2 |
| 6 | Our | 1,455.7 | 201.2 | 9,830.4 | 485.7 | 2,769.3 | 216.6 |
| 7 | [21] | - | - | - | - | - | - |
| 7 | [19] | - | - | - | - | - | - |
| 7 | Our | 2,754.3 | 430.7 | 12,054.4 | 523.5 | 3,257.9 | 433.7 |
| 8 | [21] | - | - | - | - | - | - |
| 8 | [19] | - | - | - | - | - | - |
| 8 | Our | 4,175.6 | 603.2 | 19,853.4 | 873.3 | 5,107.7 | 513.2 |
| 9 | [21] | - | - | - | - | - | - |
| 9 | [19] | - | - | - | - | - | - |
| 9 | Our | 6,175.1 | 837.9 | 28,251.5 | 869.2 | 8,632.2 | 752.3 |
| 10 | [21] | - | - | - | - | - | - |
| 10 | [19] | - | - | - | - | - | - |
| 10 | Our | 7,075.9 | 997.8 | 33,215.7 | 963.8 | 9,897.4 | 924.3 |

Table 3.

Computational results of randomly generated test problems with p = 4

| [m, n] | Algorithm | CPU(s) Avg | CPU(s) Std | Iter Avg | Iter Std | Nodes Avg | Nodes Std |
|---|---|---|---|---|---|---|---|
| [70, 70] | [21] | 6,518.2 | 869.2 | 19,358.8 | 926.4 | 9,586.3 | 749.3 |
| [70, 70] | [19] | 5,208.1 | 903.4 | 18,308.8 | 908.7 | 6,762.1 | 794.8 |
| [70, 70] | Our | 362.3 | 44.9 | 4,588.5 | 303.4 | 1,092.9 | 209.6 |
| [70, 100] | [21] | - | - | - | - | - | - |
| [70, 100] | [19] | 6,691.2 | 923.6 | 23,650.8 | 936.2 | 9,834.3 | 792.8 |
| [70, 100] | Our | 528.3 | 65.6 | 5,186.3 | 291.5 | 1,394.1 | 287.1 |
| [70, 150] | [21] | - | - | - | - | - | - |
| [70, 150] | [19] | - | - | - | - | - | - |
| [70, 150] | Our | 1,028.9 | 635.6 | 7,616.4 | 189.6 | 1,691.6 | 272.4 |
| [100, 150] | [21] | - | - | - | - | - | - |
| [100, 150] | [19] | - | - | - | - | - | - |
| [100, 150] | Our | 1,124.4 | 603.8 | 7,096.3 | 193.1 | 1,702.2 | 243.4 |
| [150, 150] | [21] | - | - | - | - | - | - |
| [150, 150] | [19] | - | - | - | - | - | - |
| [150, 150] | Our | 1,149.2 | 678.5 | 8,076.4 | 201.3 | 2,001.8 | 292.7 |
| [150, 200] | [21] | - | - | - | - | - | - |
| [150, 200] | [19] | - | - | - | - | - | - |
| [150, 200] | Our | 2,048.5 | 728.9 | 9,806.7 | 416.8 | 2,671.9 | 397.5 |
| [150, 300] | [21] | - | - | - | - | - | - |
| [150, 300] | [19] | - | - | - | - | - | - |
| [150, 300] | Our | 3,892.7 | 969.4 | 10,903.5 | 971.5 | 3,402.6 | 873.4 |
| [200, 300] | [21] | - | - | - | - | - | - |
| [200, 300] | [19] | - | - | - | - | - | - |
| [200, 300] | Our | 3,912.8 | 917.2 | 9,938.3 | 911.7 | 3,521.5 | 816.3 |
| [300, 300] | [21] | - | - | - | - | - | - |
| [300, 300] | [19] | - | - | - | - | - | - |
| [300, 300] | Our | 4,025.1 | 909.5 | 1,109.3 | 891.5 | 3,612.3 | 834.7 |
| [300, 400] | [21] | - | - | - | - | - | - |
| [300, 400] | [19] | - | - | - | - | - | - |
| [300, 400] | Our | 4,875.5 | 962.5 | 14,946.1 | 938.6 | 5,827.5 | 972.8 |
| [300, 500] | [21] | - | - | - | - | - | - |
| [300, 500] | [19] | - | - | - | - | - | - |
| [300, 500] | Our | 5,962.6 | 978.6 | 16,592.7 | 995.7 | 6,987.2 | 957.4 |

We first solve several sample examples, where Examples 1-3 and Examples 4-5 come from Ref. [28] and Ref. [29], respectively. The corresponding computational results are summarized in Table 1.

Example 1

$$\begin{aligned}\min\ &\frac{-x_1+2x_2+2}{3x_1-4x_2+5}+\frac{4x_1-3x_2+4}{2x_1+x_2+3}\\ \text{s.t. }&x_1+x_2\le 1.5,\quad x_1\le x_2,\quad 0\le x_1\le 1,\quad 0\le x_2\le 1.\end{aligned}$$
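As an illustration of how such an instance maps onto the sketches of Sections 2 and 3 (our encoding; `approx_solve`, `ratio_bound`, and `S_is_nonempty` are the hypothetical helpers defined there, not the paper's code):

```python
# Example 1 in the data format of the approx_solve sketch: sum of two ratios, k = 1.
import numpy as np

C = np.array([[-1.0, 2.0], [4.0, -3.0]]); c0 = np.array([2.0, 4.0])   # numerators
D = np.array([[ 3.0, -4.0], [2.0, 1.0]]); d0 = np.array([5.0, 3.0])   # denominators
A = np.array([[1.0,  1.0],    # x1 + x2 <= 1.5
              [1.0, -1.0],    # x1 <= x2
              [1.0,  0.0],    # x1 <= 1
              [0.0,  1.0]])   # x2 <= 1
b = np.array([1.5, 0.0, 1.0, 1.0])
G = lambda y: float(np.sum(y))            # sum-of-ratios objective, so k = 1
t_best, U = approx_solve(G, 1, C, c0, D, d0, A, b, eps=0.2)
```

With ε = 0.2 the returned value U should lie within the ε guarantee of the optimum 1.6232 reported in Table 1.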

Example 2

$$\begin{aligned}\min\ &\frac{-x_1+2x_2+2}{3x_1-4x_2+5}\times\frac{4x_1-3x_2+4}{2x_1+x_2+3}\\ \text{s.t. }&x_1+x_2\le 1.5,\quad x_1\le x_2,\quad 0\le x_1\le 1,\quad 0\le x_2\le 1.\end{aligned}$$

Example 3

$$\min\ \sum_{i=1}^{6}\frac{\langle c^{i},x\rangle+r_i}{\langle d^{i},x\rangle+s_i}\qquad\text{s.t. }Ax\le b,\ x\ge 0,$$

where

c¹ = (0.2, 0.7, 0.1, 0.4, 0.0, 0.8, 0.1, 0.8, 0.2, 0.0, 0.1, 0.4), r₁ = 21,
d¹ = (0.2, 0.5, 0.6, 0.1, 0.6, 0.4, 0.4, 0.3, 0.7, 0.5, 0.4, 0.1), s₁ = 13.3,
c² = (0.1, 0.1, 0.4, 0.1, 0.1, 0.4, 0.2, 0.5, 0.3, 0.4, 0.3, 0.3), r₂ = 16.3,
d² = (0.3, 0.2, 0.7, 0.1, 0.2, 0.2, 0.5, 0.4, 0.3, 0.0, 0.6, 0.5), s₂ = 16,
c³ = (0.8, 0.0, 0.1, 0.4, 0.2, 0.1, 0.5, 0.0, 0.5, 0.6, 0.3, 0.4), r₃ = 3.7,
d³ = (0.1, 0.0, 0.0, 0.3, 0.2, 0.7, 0.4, 0.2, 0.1, 0.5, 0.6, 0.1), s₃ = 16.7,
c⁴ = (0.6, 0.2, 0.2, 0.3, 0.5, 0.4, 0.1, 0.6, 0.3, 0.3, 0.4, 0.3), r₄ = 1.8,
d⁴ = (0.3, 0.0, 0.0, 0.5, 0.1, 0.2, 0.6, 0.6, 0.1, 0.2, 0.8, 0.3), s₄ = 21.5,
c⁵ = (0.3, 0.3, 0.5, 0.1, 0.2, 0.5, 0.1, 0.2, 0.0, 0.6, 0.3, 0.2), r₅ = 5,
d⁵ = (0.3, 0.0, 0.3, 0.0, 0.8, 0.3, 0.3, 0.9, 0.1, 0.6, 0.1, 0.2), s₅ = 18.7,
c⁶ = (0.2, 0.1, 0.0, 0.0, 0.2, 0.4, 0.0, 0.6, 0.8, 0.2, 0.0, 0.1), r₆ = 12.7,
d⁶ = (0.0, 0.6, 0.0, 0.1, 0.0, 0.2, 0.0, 0.5, 0.2, 0.3, 0.3, 0.1), s₆ = 19.2,
A = [1.90.00.21.51.80.91.04.54.53.51.84.82.93.74.81.91.83.71.82.52.91.933.23.32.43.34.80.33.90.81.72.00.31.82.24.31.82.14.50.52.41.40.32.02.80.44.51.50.30.41.21.11.91.51.23.34.43.24.33.22.44.51.02.73.70.13.91.93.22.11.30.90.54.01.51.21.51.23.70.10.02.44.14.14.52.23.14.44.83.42.22.12.32.61.42.42.34.71.71.63.84.01.30.40.42.91.20.03.20.22.02.92.73.12.92.64.30.24.61.30.93.43.94.92.33.01.52.51.71.72.93.53.42.50.44.52.81.72.12.94.71.34.51.90.93.32.31.60.54.93.04.93.63.72.21.43.52.81.24.73.22.24.02.83.34.43.12.12.63.91.02.31.84.21.82.70.93.31.7],
b = (20.1, 1.0, 82.6, 14.6, 37.7, 40.7, 23, 47.4, 83.0, 9.9, 33.7, 49.1, 14.0, 45.6, 30.4).

Example 4

$$\min\ \prod_{i=1}^{3}f_i(x)\qquad\text{s.t. }x_1+2x_2\le 10,\quad 0\le x_1\le 10,\quad 0\le x_2\le 4,$$

where

$$f_1(x)=(x_1-1)^2+(x_2-1)^2+1,\qquad f_2(x)=(x_1-2)^2+(x_2-3)^2+1,\qquad f_3(x)=(x_1-4)^2+(x_2-2)^2+1.$$

Example 5

$$\min\ \prod_{i=1}^{3}f_i(x)\qquad\text{s.t. }(x_1-2)^2+(x_2-3)^2\le 1,\quad 0\le x_1\le 3,\quad 0\le x_2\le 3,$$

where

$$f_1(x)=5x_1^{4}+x_2^{4},\qquad f_2(x)=3(x_1-5)^{4}+10(x_2-3)^{4},\qquad f_3(x)=7(x_1-2)^{4}+2(x_2-4)^{4}.$$

Note that for solving Examples 4 and 5 we chose $(l_1,l_2,l_3)=(1,1,1)$, $(u_1,u_2,u_3)=(12,7,12)$ and $(l_1,l_2,l_3)=(13,54,2)$, $(u_1,u_2,u_3)=(450,850,105)$, respectively, which come from Ref. [29]. In addition, the algorithm in [21] is not applicable to Examples 4 and 5, and so we do not use it to solve them.

From Table 1, it can easily be seen that the proposed algorithm requires less computational time than the algorithms in [19, 21] for solving Examples 1-5 with the same ε>0. This is because its number of iterations and maximal number of stored interesting grid points are smaller than those of [19, 21] (see Table 1), which means that the total number of interesting grid points considered by the proposed algorithm is smaller than for the algorithms in [19, 21]. Also, in all three algorithms ([19, 21] and ours), the main computational work is checking the feasibility of linear programs at interesting grid points; hence, the more interesting grid points are considered, the more computational time is required.

Next, we apply the three algorithms ([19, 21] and our own) to randomly generated examples of the following form:

$$\min\ \prod_{i=1}^{p}c_i^{\top}x\qquad\text{s.t. }x\in X=\{x\in\mathbb{R}^{n}:Ax\le b,\ L\le x\le V\},$$

where all elements of $c_i\in\mathbb{R}^{n}$ and $L\in\mathbb{R}^{n}$ are random numbers generated from the interval [0,1]; $b\in\mathbb{R}^{m}$ and $V\in\mathbb{R}^{n}$ are randomly generated vectors with all components belonging to (1,2); and each element of $A\in\mathbb{R}^{m\times n}$ is randomly generated in [-1,1]. Nineteen configurations of m (number of constraints), n (number of variables), and p (number of linear functions in the objective function) are selected, giving altogether 190 randomly generated test instances. The approximation error is fixed at ε=0.01, and the average computational results (with standard deviations) are obtained by running each of the algorithms ([19, 21] and ours) 10 times. Table 2 shows the numerical results for instances with (m,n)=(50,50) and p ranging over {2,4,5,6,7,8,9,10}. Similarly, the computational results for p=4 and varying (m,n) are listed in Table 3. In Tables 2 and 3, '-' means the problem cannot be solved within two hours.
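A sketch of the instance generator as we read the description above (our code, not the authors'; feasibility of a draw is not guaranteed, so presumably infeasible draws would be discarded and regenerated):

```python
# Random multiplicative-programming instance as described in Section 4.
import numpy as np

def random_instance(m, n, p, seed=0):
    rng = np.random.default_rng(seed)
    C = rng.uniform(0.0, 1.0, size=(p, n))   # objective coefficients c_i in [0, 1]
    L = rng.uniform(0.0, 1.0, size=n)        # lower bounds L in [0, 1]
    V = rng.uniform(1.0, 2.0, size=n)        # upper bounds V in (1, 2)
    b = rng.uniform(1.0, 2.0, size=m)        # right-hand sides b in (1, 2)
    A = rng.uniform(-1.0, 1.0, size=(m, n))  # constraint matrix entries in [-1, 1]
    return C, L, V, b, A
```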

It can be seen from Tables 2 and 3 that the proposed algorithm needs fewer iterations and interesting grid points, and thus requires less computational time, for solving this kind of random problem than the algorithms given in [19, 21]. Tables 2 and 3 also show that the performance of the algorithms is strongly affected by changes in n and, especially, p. The reason is that the number of operations required by the algorithms ([19, 21] and ours) increases exponentially as p increases, according to the corresponding computational complexity results.

It is worth mentioning, from Tables 2 and 3, that the computational time of the proposed algorithm increases as n and p increase, but not as sharply as for the algorithms in [19, 21]. For example, in Table 2, the instances cannot be solved by the algorithms in [21] and [19] within two hours when p≥6 and p≥7, respectively, while the presented algorithm can solve all instances with p increasing from 2 to 10 in less than two hours. This is because the main computational cost of all three algorithms ([19, 21] and ours) is the solution of linear feasibility problems at the interesting grid points; that is, the computational time for solving this kind of problem is directly affected by the number of interesting grid points. For the algorithms in [19, 21], the number of iterations and the number of interesting grid points checked at each iteration both increase as p increases. For the proposed algorithm, however, p is related to the number of iterations (see Step (k2) of the proposed algorithm) but is independent of the number of interesting grid points checked at each iteration (see Step (k1)). This means that the proposed algorithm considers fewer interesting grid points, and requires less computational time, than the algorithms in [19, 21] for this kind of random problem. Moreover, from Table 3, the algorithms in [19, 21] cannot solve the instances within two hours when n≥150 and p=4, but all selected instances can be solved by the proposed algorithm in no more than two hours. This is mainly because the more interesting grid points are considered, the more linear feasibility problems with n variables must be checked; since the proposed algorithm considers far fewer interesting grid points than the algorithms in [19, 21], its computational time does not increase as sharply as theirs with increasing n.

Comparing the performance of the algorithms ([19, 21] and our own), the numerical results in Tables 1-3 show that the proposed algorithm is effective and that the computational results can be obtained within a reasonable time.

Results and discussion

In this work, a new algorithm for globally solving a class of generalized fractional programming problems is presented. As further work, we think the ideas can be extended to more general optimization problems in which each $c_i^{\top}x+c_{0i}$ and $d_i^{\top}x+d_{0i}$ in the objective function of problem (P) is replaced by a convex function.

Conclusion

This article proposes a new approximation algorithm for solving a class of fractional programming problems (P) without quasi-concavity or low-rank assumptions. To solve this problem, the original problem (P) is first converted into a p-dimensional equivalent problem over a box constraint set; we then give a new approximation algorithm that is more easily implemented than the ones given in [19, 21]. Moreover, the computational complexity of the algorithm is derived, showing that it is an FPTAS when p is fixed and that its computational time increases exponentially as p increases. The complexity results can also be used as an indicator of the difficulty of some optimization problems falling into the category of (P), so one should aim to design more sophisticated approaches whose performance is at least as good. Additionally, this article not only gives a provable bound on the running time of the proposed algorithm, but also guarantees the quality of the solution obtained for problem (P).

Acknowledgements

The authors are grateful to the responsible editor and the anonymous referees for their valuable comments and suggestions, which have greatly improved the earlier version of this paper. This paper is supported by the National Natural Science Foundation of China (11671122), the Key Scientific Research Project in University of Henan Province (17A110006), and the Program for Innovative Research Team (in Science and Technology) in University of Henan Province (14IRTSTHN023).

Footnotes

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

PPS carried out the idea of this paper, the description of the algorithm and drafted the manuscript. TLZ completed the computation for numerical examples, and CFW carried out the analysis of computational complexity of the algorithm. All authors read and approved the final manuscript.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

1. Konno H, Gao C, Saitoh I. Cutting plane/tabu search algorithms for low rank concave quadratic programming problems. J. Glob. Optim. 1998;13:225-240. doi:10.1023/A:1008230825152
2. Henderson JM, Quandt RE. Microeconomic Theory: A Mathematical Approach. New York: McGraw-Hill; 1971.
3. Mulvey JM, Vanderbei RJ, Zenios SA. Robust optimization of large-scale systems. Oper. Res. 1995;43:264-281. doi:10.1287/opre.43.2.264
4. Maling K, Mueller SH, Heller WR. On finding most optimal rectangular package plans. In: Proceedings of the 19th Design Automation Conference; 1982. pp. 663-670.
5. Kuno T. Polynomial algorithms for a class of minimum rank-two cost path problems. J. Glob. Optim. 1999;15:405-417. doi:10.1023/A:1008372614175
6. Matsui T. NP-hardness of linear multiplicative programming and related problems. J. Glob. Optim. 1996;9:113-119. doi:10.1007/BF00121658
7. Schaible S, Shi J. Fractional programming: the sum-of-ratios case. Optim. Methods Softw. 2003;18:219-229. doi:10.1080/1055678031000105242
8. Kuno T, Masaki T. A practical but rigorous approach to sum-of-ratios optimization in geometric applications. Comput. Optim. Appl. 2013;54:93-109. doi:10.1007/s10589-012-9488-5
9. Teles JP, Castro PM, Matos HA. Multi-parametric disaggregation technique for global optimization of polynomial programming problems. J. Glob. Optim. 2013;55:227-251. doi:10.1007/s10898-011-9809-8
10. Gao YL, Xu CX, Yang YJ. An outcome-space finite algorithm for solving linear multiplicative programming. Appl. Math. Comput. 2006;179:494-505.
11. Shen P, Wang C. Global optimization for sum of generalized fractional functions. J. Comput. Appl. Math. 2008;214:1-12. doi:10.1016/j.cam.2007.01.022
12. Wang C, Shen P. A global optimization algorithm for linear fractional programming. Appl. Math. Comput. 2008;204:281-287.
13. Shen P, Yang L, Liang Y. Range division and contraction algorithm for a class of global optimization problems. Appl. Math. Comput. 2014;242:116-126.
14. Shen PP, Li WM, Liang YC. Branch-reduction-bound algorithm for linear sum-of-ratios fractional programs. Pac. J. Optim. 2015;11(1):79-99.
15. Benson HP. An outcome space branch and bound-outer approximation algorithm for convex multiplicative programming. J. Glob. Optim. 1999;15:315-342. doi:10.1023/A:1008316429329
16. Benson HP, Boger GM. Outcome-space cutting-plane algorithm for linear multiplicative programming. J. Optim. Theory Appl. 2000;104:301-332. doi:10.1023/A:1004657629105
17. Konno H, Yajima Y, Matsui T. Parametric simplex algorithms for solving a special class of non-convex minimization problems. J. Glob. Optim. 1991;1:65-81. doi:10.1007/BF00120666
18. Liu XJ, Umegaki T, Yamamoto Y. Heuristic methods for linear multiplicative programming. J. Glob. Optim. 1999;15:433-447. doi:10.1023/A:1008308913266
19. Locatelli M. Approximation algorithm for a class of global optimization problems. J. Glob. Optim. 2013;55:13-25. doi:10.1007/s10898-011-9813-z
20. Mittal S, Schulz AS. An FPTAS for optimizing a class of low-rank functions over a polytope. Math. Program. 2013;141:103-120. doi:10.1007/s10107-011-0511-x
21. Depetrini D, Locatelli M. Approximation algorithm for linear fractional multiplicative problems. Math. Program. 2011;128:437-443. doi:10.1007/s10107-009-0309-2
22. Goyal V, Ravi R. An FPTAS for minimizing a class of low-rank quasi-concave functions over a convex set. Oper. Res. Lett. 2013;41:191-196. doi:10.1016/j.orl.2013.01.004
23. Depetrini D, Locatelli M. A FPTAS for a class of linear multiplicative problems. Comput. Optim. Appl. 2009;44:276-288. doi:10.1007/s10589-007-9156-3
24. Goyal V, Genc-Kaya L, Ravi R. An FPTAS for minimizing the product of two non-negative linear cost functions. Math. Program. 2011;126:401-405. doi:10.1007/s10107-009-0287-4
25. Shen P, Wang C. Linear decomposition approach for a class of nonconvex programming problems. J. Inequal. Appl. 2017;2017. doi:10.1186/s13660-017-1342-y
26. Schaible S, Ibaraki T. Fractional programming. Eur. J. Oper. Res. 1983;12:325-338. doi:10.1016/0377-2217(83)90153-4
27. Shen P, Zhao X. A fully polynomial time approximation algorithm for linear sum-of-ratios fractional program. Math. Appl. 2013;26:355-359.
28. Hoai-Phuong NT, Tuy H. A unified monotonic approach to generalized linear fractional programming. J. Glob. Optim. 2003;26:229-259. doi:10.1023/A:1023274721632
29. Shao LZ, Ehrgott M. An objective space cut and bound algorithm for convex multiplicative programmes. J. Glob. Optim. 2014;58:711-728. doi:10.1007/s10898-013-0102-x
