Abstract
This paper presents a linear decomposition approach for a class of nonconvex programming problems, based on dividing the input space into polynomially many grids. It shows that, under certain assumptions, the original problem can be transformed and decomposed into a polynomial number of equivalent linear programming subproblems. By solving a series of linear programming subproblems corresponding to these grid points, we can obtain a near-optimal solution of the original problem. Compared with existing results in the literature, the proposed algorithm does not require the assumptions of quasi-concavity and differentiability of the objective function, and it differs significantly from existing methods, giving an interesting alternative approach that solves the problem with a reduced running time.
Keywords: nonconvex programming, global optimization, linear decomposition approach, approximation algorithm, computational complexity
Introduction
Consider a class of nonconvex programming problems:
where , is a continuous function, Ω is a nonempty polytope, , , and are linearly independent vectors. The function f is called a low-rank function with rank k over the polytope Ω, as defined by Kelner and Nikolova [1]. Under this broad definition, multiplicative programming, quadratic programming, bilinear programming, and polynomial programming can all be placed in the category of problem (P); important applications can be found in the literature (e.g., [2–7]). In general, nonconvex programming problems of the form (P) are known to be NP-hard; even minimizing the product of two linear functions (a rank-two instance) over a polytope is NP-hard [8]. As shown by Mittal and Schulz [9], the optimal value of problem (P) cannot be approximated to within any factor unless P = NP. Hence, for solving problem (P), some extra assumptions ()-() on the properties of the function f are required, as listed below (a concrete illustrative instance follows the list):
- ()
, if , for each ;
- ()
for all and some constant c;
- ()
.
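To make the setting concrete, the following is an illustrative rank-two instance (our example, not taken from the paper), under the assumption that the conditions above are of the usual monotonicity and polynomial-scaling type for low-rank minimization: the product of two positive linear forms, which the introduction already classifies under multiplicative programming.

```latex
% Illustrative instance only; the precise assumptions ()-() are as stated in the list above.
\[
  \min_{x \in \Omega} \ f(x) \;=\; \bigl(c_1^{\mathsf T}x + d_1\bigr)\bigl(c_2^{\mathsf T}x + d_2\bigr)
  \;=\; \varphi\bigl(y_1(x),\,y_2(x)\bigr),
  \qquad \varphi(y_1,y_2) = y_1 y_2,\quad y_i(x) = c_i^{\mathsf T}x + d_i .
\]
% If each y_i(x) > 0 on \Omega, then \varphi is nondecreasing in each argument on the
% relevant domain, and \varphi(\lambda y) = \lambda^{2}\varphi(y) for \lambda \ge 1,
% so the scaling constant in the assumptions can be taken as 2 for this instance.
```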
An exhaustive treatment of optimizing low-rank functions can be found in Konno et al. [10]. Konno et al. [11] proposed cutting plane and tabu-search algorithms for low-rank concave quadratic programming problems. Porembski [12] gave a cutting plane solution approach for general low-rank concave minimization problems with a small number of variables. Additionally, solution algorithms have been developed for special cases of problem (P) (e.g., [13–16]). The above methods are efficient heuristics, but they do not provide a theoretical analysis of the running time or the performance of the algorithms.
The main purpose of this article is to present an approximation scheme with provable performance bounds for globally solving problem (P), that is, for obtaining an ε-approximate solution for any ε > 0 in time polynomial in the input size and 1/ε. For special cases of problem (P), there is extensive work on ε-approximation algorithms. Vavasis [17] gave an approximation scheme for low-rank quadratic optimization problems. Depetrini and Locatelli [18] presented a fully polynomial-time approximation scheme (FPTAS) for minimizing the sum or product of ratios of linear functions over a polyhedron. Kelner and Nikolova [1] developed an expected polynomial-time smoothed algorithm for a class of low-rank quasi-concave minimization problems whose objective function satisfies a Lipschitz condition. Depetrini and Locatelli [19] proposed an FPTAS for minimizing the product of two linear functions over a polyhedral set. Additionally, for minimizing the product of two non-negative linear cost functions, Goyal et al. [20] gave an FPTAS under the condition that the convex hull of the feasible solutions is known in terms of linear inequalities. The algorithm in [21] works for minimizing a class of low-rank quasi-concave functions over a convex set and solves a polynomial number of linear optimization problems. Mittal and Schulz [9] presented an FPTAS for minimizing a general class of low-rank functions over a polytope; their algorithm is based on constructing an approximate Pareto-optimal front of the linear functions that constitute the objective function.
In this paper, by exploiting the structure of problem (P), a suitable nonuniform grid is first constructed over a given box. Based on an exploration of the grid nodes, the original problem (P) is then transformed and decomposed into a polynomial number of subproblems, each of which corresponds to a grid node and is easy to solve as a linear program. Thus, the main computational effort of the proposed algorithm consists only in solving the linear programming problems associated with the nodes, and these problems do not grow in size from one grid node to the next. Furthermore, it is verified that, by solving these linear programs, we can obtain an ε-approximate solution of the primal problem (P). The proposed algorithm has the following features. First, in contrast with [19, 20, 22], the rank k of the objective function considered by the proposed algorithm is not limited to two. Second, the proposed algorithm requires neither differentiability of the objective function nor the inverse of the associated single-variable function, and it works for minimizing a more general class of functions, whereas Goyal and Ravi [21] and Kelner and Nikolova [1] both require quasi-concavity of the objective function. Third, although the nonuniform grids constructed by the algorithm in [21] and by ours are both based on subdividing a hyper-rectangle, the algorithm in [21] requires iterations that are not necessary for our algorithm or the one in [9]. Moreover, at each iteration of the algorithm in [21], a single-variable equation and the corresponding linear optimization problem must be solved for each grid node. Finally, we emphasize that the efficiency of the algorithms (those of [9, 21] and ours) strongly depends on the number of grid nodes (or subproblems solved), which is determined by the dimension of the grid, for the same input size and tolerance ε. In fact, the nonuniform grid in [9] is obtained by partitioning a k-dimensional hypercube. Therefore, from the procedure of the algorithm and its computational complexity analysis, it can be seen that our work is independent of [9, 21], and the proposed algorithm differs significantly from them, giving an interesting alternative approach that solves the problem with a reduced running time.
The structure of this paper is as follows. The next section describes the equivalent problem and its decomposition technique. Section 3 presents the algorithm and its computational cost. Finally, conclusions and a discussion are given in Sections 4 and 5.
Equivalent problem and its decomposition technique
Equivalent problem
To solve problem (P), we first propose an equivalent problem (Q). To this end, let us denote
| 2.1 |
Assume that, without loss of generality, , and define a rectangle H given by
| 2.2 |
Thus, by introducing the variable , problem (P) is equivalent to the following problem (Q):
The key equivalent theorem for problems (P) and (Q) is given as follows.
Theorem 1
is a global optimum solution of problem (P) if and only if is a global optimum solution of problem (Q), where for each . In addition, the global optimal values of problems (P) and (Q) are equal.
Proof
If is a global optimal solution of problem (P), let
It is obvious that is a feasible solution of problem (Q). Let be any feasible solution of problem (Q), i.e.,
| 2.3 |
According to the definition of and the optimality of , we must have
| 2.4 |
Additionally, from (2.3) and the assumption (), it follows that
| 2.5 |
Thus, (2.4) and (2.5) mean that is a global optimal solution to problem (Q).
Conversely, suppose that is a global optimal solution of problem (Q). Then we have
By the assumption of φ, we can obtain
For any given , if we let , then is a feasible solution to problem (Q) with . Thus, from the optimality of it follows that
This means that is a global optimal solution to problem (P). □
By Theorem 1, in order to solve problem (P), we may instead globally solve its equivalent problem (Q). Moreover, it is easy to see that problems (P) and (Q) have the same global optimal value. Hence, we propose a decomposition approach for problem (Q) below.
Linear decomposition technique
Problem (Q) has a relatively low-rank decomposition structure because, in contrast to problem (P), the nonconvexity of the objective function involves only the term if we fix a . Based on this observation, in order to solve problem (Q), for any given we construct a polynomial-size grid by subdividing H into smaller rectangles such that the ratio of successive divisions is equal to in each dimension. Thus, a polynomial-size grid is generated over H, where the set of grid nodes can be given by
| 2.6 |
where with
| 2.7 |
Note that under the assumption (), must hold for each i. Clearly, for any , there exists a point such that
Thus, H can be approximated by the set . Next, for each grid node , consider the corresponding subproblem as follows:
Notice that, by the assumption () of φ, for a given , problem is equivalent to a linear problem :
That is, for a fixed point , is the optimal solution of problem if and only if is an optimal solution for problem .
Clearly, for each , the corresponding subproblem can easily be solved as a linear program . Thus, the nonconvex programming problem (Q) is decomposed into a series of subproblems, and an approximate global solution of (Q) can be obtained from the solutions of these linear programs over all nodes υ of .
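As a minimal computational sketch of this decomposition (ours, under stated assumptions, not the authors' code), the snippet below builds a geometric grid with ratio 1+δ over an interval [ℓ, u] and solves one node subproblem as a linear program with scipy.optimize.linprog. The subproblem form used here, minimizing the last linear form subject to the remaining forms being bounded by the node coordinates over Ω = {x : Ax ≤ b}, is only one plausible reading of the subproblem in the text; the names geometric_grid, solve_node_lp, and delta are ours.

```python
# Illustrative sketch only: a geometric grid and one node subproblem solved as an LP.
# The exact subproblem and the choice of the ratio delta from epsilon follow the paper's
# definitions (2.6)-(2.7); here they are replaced by plausible stand-ins.
import numpy as np
from scipy.optimize import linprog


def geometric_grid(lo, hi, delta):
    """Nodes lo, lo(1+delta), lo(1+delta)^2, ... covering [lo, hi] (assumed grid form)."""
    m = int(np.ceil(np.log(hi / lo) / np.log(1.0 + delta)))
    return lo * (1.0 + delta) ** np.arange(m + 1)


def solve_node_lp(node, C, A, b):
    """For a grid node (v_1, ..., v_{k-1}), solve
         min  c_k^T x   s.t.  c_i^T x <= v_i (i < k),  A x <= b,
       which is a linear program; returns (x, c_k^T x), or (None, inf) if infeasible."""
    A_ub = np.vstack([C[:-1], A])              # node constraints on the first k-1 forms, then Omega
    b_ub = np.concatenate([np.asarray(node), b])
    res = linprog(C[-1], A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * C.shape[1])
    return (res.x, res.fun) if res.success else (None, np.inf)
```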
Algorithm and its computational complexity
In this section, we propose an effective algorithm for obtaining an approximate solution to problem (P), and then analyze its computational complexity.
ε-approximation algorithm
In what follows, we introduce an algorithm that returns an ε-approximate solution of problem (P).
Based on the structure of problem (P), the given rectangle H is first subdivided to construct the required nonuniform grid . The primal problem (P) is then transformed and decomposed into a series of subproblems by exploring the grid nodes. In the proposed algorithm, each subproblem is associated with a grid node and can be solved by a linear program. A specific description is as follows. Given , let . The set of grid nodes is generated by (2.6)-(2.7). For each , solve problem to obtain a solution , and denote the optimal value of the corresponding problem by ; here, let if the feasible set of is empty. This process is repeated until all points of have been considered. The detailed procedure is stated as Algorithm 1, and an illustrative code sketch is given after the algorithm statement.
Algorithm 1.

Algorithm statement
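Purely as an illustration of the loop just described, and under the same assumptions as the sketch in Section 2 (a grid with ratio 1+δ, a node subproblem that minimizes the last linear form with the other forms bounded by the node coordinates, and a monotone φ supplied by the user), a self-contained driver might look as follows; all names here are ours, not the paper's notation.

```python
# Sketch of the enumeration loop of Algorithm 1 (illustrative; not the authors' exact procedure).
# phi is assumed nondecreasing in each argument; a product is used for concreteness below.
import itertools
import numpy as np
from scipy.optimize import linprog


def algorithm1(C, A, b, lows, highs, delta, phi):
    """Enumerate the grid nodes over [lows, highs] with ratio 1+delta, solve the LP at each
    node, and keep the best candidate value phi(v_1, ..., v_{k-1}, t) found."""
    axes = [low * (1.0 + delta) ** np.arange(
                int(np.ceil(np.log(high / low) / np.log(1.0 + delta))) + 1)
            for low, high in zip(lows, highs)]
    best_val, best_x = np.inf, None
    n = C.shape[1]
    for node in itertools.product(*axes):             # all nodes of the (k-1)-dimensional grid
        A_ub = np.vstack([C[:-1], A])
        b_ub = np.concatenate([np.asarray(node), b])
        res = linprog(C[-1], A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * n)
        if res.success:                                # infeasible nodes are skipped
            val = phi(*node, res.fun)
            if val < best_val:
                best_val, best_x = val, res.x
    return best_x, best_val


# Tiny usage example with k = 2 linear forms on a box-shaped polytope (all data made up):
if __name__ == "__main__":
    C = np.array([[1.0, 0.0], [0.0, 1.0]])            # rows are the linear forms c_1, c_2
    A = np.vstack([np.eye(2), -np.eye(2)])             # 0.5 <= x_i <= 3 written as A x <= b
    b = np.array([3.0, 3.0, -0.5, -0.5])
    x, val = algorithm1(C, A, b, lows=[0.5], highs=[3.0], delta=0.05,
                        phi=lambda y1, y2: y1 * y2)
    print(x, val)
```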
The following theorem shows that the proposed algorithm attains an ε-optimal solution to problem (P).
Theorem 2
Given , an ε-optimal solution x̃ to problem (P) can be obtained from the proposed algorithm, in the sense that
where is the optimal solution of problem (P).
Proof
Let
| 3.1 |
Since is the optimal solution of problem (P), we have
This implies that , so there exists some which satisfies
| 3.2 |
Thus, combining this with the assumptions on φ, we have
| 3.3 |
Now, suppose that x̄ is the optimal solution of problem . Then together with (3.1)-(3.2) implies that is a feasible solution of problem . Thus we have
| 3.4 |
Additionally, let . Since x̃ is the optimal solution of problem , it follows that ; thus, we obtain
| 3.5 |
According to the definitions of ṽ and x̄, we have
| 3.6 |
Hence, from (3.3)-(3.6) and , we can conclude that
and so x̃ is an ε-approximate solution to problem (P). □
By Theorem 1 we also have the following corollary.
According to the above discussion, an ε-approximate solution to problem (P) can be obtained by solving (the number of grid nodes in ) linear programming problems with . However, it is not necessary to solve each subproblem associated with each in order to find the solution of problem (P); by using the following proposition, we can obtain an improvement of the algorithm.
Proposition 1
Let . Then x̂ is an optimal solution of problem P1 for any , where
| 3.7 |
Proof
Suppose that is any feasible solution of problem with . By the definition of x̂, we see that x̂ is a feasible solution of problem P1 for any . By the monotonicity of φ, it follows that
which concludes the proof. □
Proposition 1 shows that x̂ is the optimal solution of the subproblem for any . Therefore, in a practical implementation, we only need to solve the subproblems associated with the points contained in the set . A further note on is as follows.
For any , by the definition of H, let
| 3.8 |
where with . Combining the definition of with the above result, the set can be given by
| 3.9 |
Let
| 3.10 |
This means that an ε-approximate solution to problem (P) can be obtained by solving only (the number of points in the set ) linear programming subproblems for all . Thus the proposed algorithm can be improved, as stated in Algorithm 2.
Algorithm 2.

The improved algorithm
Notice that, when the proposed improved algorithm stops, we can obtain an ε-optimal solution x̃ to problem (P) with the objective value L̃.
Computational complexity of the algorithm
We now analyze the computational complexity of the proposed improved algorithm. By (3.8)-(3.10), we conclude that the number of grid nodes belonging to is at least
| 3.11 |
where with . On the other hand, we know from (2.6) that the total number of points in the set is equal to , with satisfying (2.7). Thus, it follows that the number of elements in is at most
| 3.12 |
Combining (3.9) with (3.10), the number of grid nodes actually considered in the computation by the proposed improved algorithm is not more than
| 3.13 |
Theorem 3
Let , with , and let U = . When k is fixed, the running time of the improved algorithm for obtaining an ε-optimal solution of problem (P) is bounded from above by
where , and is the time required to solve a linear program with n variables and an input size of bits.
Proof
By Step 0 of the improved algorithm, it follows that
and
From the above results and (3.13), we have
| 3.14 |
Thus, the upper bound of the number of grid points Ξ is
| 3.15 |
The result of (3.15) holds because for small ε values. By using the Lagrange mean value theorem, there exists some such that
| 3.16 |
Thus, we know from (3.14)-(3.16) that the total number of grid nodes considered in the improved algorithm is not more than
Note that logU and logL can be computed in time polynomial in the input size of the problem. Additionally, for each grid node υ in the set , a corresponding linear programming problem must be solved. Therefore, for fixed k, the running time required by the improved algorithm to obtain an ε-optimal solution of problem (P) is bounded from above by
| 3.17 |
where . □
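For completeness, the routine estimate behind the mean-value-theorem step in the proof can be written out as follows (assuming, as in (3.14)-(3.16), that the node count per dimension is of order log base (1+ε) of U/L).

```latex
% Routine bound (assumed node count per dimension: of order \log_{1+\varepsilon}(U/L)).
\[
  \log_{1+\varepsilon}\frac{U}{L}
  \;=\; \frac{\ln(U/L)}{\ln(1+\varepsilon)}
  \;\le\; \frac{1+\varepsilon}{\varepsilon}\,\ln\frac{U}{L},
\]
\[
  \text{since, by the Lagrange mean value theorem, } \ln(1+\varepsilon)=\frac{\varepsilon}{1+\xi}
  \ \text{for some } \xi\in(0,\varepsilon), \ \text{so } \ln(1+\varepsilon)\ge\frac{\varepsilon}{1+\varepsilon}.
\]
% Hence each dimension contributes O\!\left(\tfrac{1}{\varepsilon}\log\tfrac{U}{L}\right) nodes for small \varepsilon.
```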
In view of the above theorem, the running time of the proposed improved algorithm is polynomial in the input size and for fixed k; hence the algorithm is an FPTAS (fully polynomial-time approximation scheme) for problem (P).
Comparison with [9, 21]: The algorithm in [9] searches for the optimal objective value over a k-dimensional grid, which requires checking the feasibility of a linear program for each grid node; thus the total number of linear programs solved by their method is with . In the algorithm of [21], the number of linear optimization problems solved over a convex set in each iteration is , where . Also, at each iteration of the algorithm in [21], the ratio of the upper and lower bounds on the objective value is reduced by a constant factor; hence the number of iterations is , where denotes the initial upper (lower) bound on the objective value. This implies that the algorithm in [21] solves linear optimization problems over a convex set. In this article, as can be seen from (3.17), the proposed algorithm solves different linear programs, and the running time is of th order in , compared with the kth order in for [9, 21].
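As a rough numerical illustration of this difference (our own back-of-the-envelope computation; the node counts of order (⌈log base (1+ε) of U/L⌉)^(k-1) and (⌈log base (1+ε) of U/L⌉)^k used below are assumed, not taken verbatim from the expressions above):

```python
# Back-of-the-envelope node counts (assumed orders; constants and lower-order terms ignored).
import math

eps, ratio_UL, k = 0.1, 1.0e3, 3
nodes_per_dim = math.ceil(math.log(ratio_UL) / math.log(1.0 + eps))   # about 73 for these values
print(nodes_per_dim ** (k - 1))   # (k-1)-dimensional grid: about 5.3e3 LPs
print(nodes_per_dim ** k)         # k-dimensional grid:      about 3.9e5 LPs
```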
Conclusions
In this article, we present a new linear decomposition algorithm for globally solving a class of nonconvex programming problems. First, the original problem is transformed and decomposed into a polynomial number of equivalent linear programming subproblems by exploiting a suitable nonuniform grid. Second, compared with existing results in the literature, the proposed algorithm does not require the assumptions of quasi-concavity and differentiability of the objective function, and, furthermore, the rank k of the objective function is not limited to two. Finally, the computational complexity of the algorithm is analyzed, showing that it differs significantly from existing methods and provides an interesting alternative approach that solves problem (P) with a reduced running time.
Results and discussion
In this work, a new linear decomposition algorithm for globally solving a class of nonconvex programming problems has been presented. As further work, we believe these ideas can be extended to more general optimization problems in which each in the objective function of problem (P) is replaced by a convex function.
Acknowledgements
The authors are grateful to the responsible editor and the anonymous referees for their valuable comments and suggestions, which have greatly improved the earlier version of this paper.
This work was supported by the National Natural Science Foundation of China (11671122) and the Program for Innovative Research Team (in Science and Technology) in University of Henan Province (14IRTSTHN023).
Footnotes
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
PPS carried out the idea of this paper, the description of linear decomposition algorithm and drafted the manuscript. CFW carried out the analysis of computational complexity of the algorithm. All authors read and approved the final manuscript.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1. Kelner JA, Nikolova E. On the hardness and smoothed complexity of quasi-concave minimization. In: Proceedings of the 48th Annual IEEE Symposium on Foundations of Computer Science; 2007. pp. 472–482.
- 2. Bennett KP. Global tree optimization: a non-greedy decision tree algorithm. Computing Sciences and Statistics. 1994:156–160.
- 3. Bloemhof-Ruwaard JM, Hendrix EMT. Generalized bilinear programming: an application in farm management. Eur. J. Oper. Res. 1996;90:102–114. doi:10.1016/0377-2217(94)00353-X.
- 4. Konno H, Shirakawa H, Yamazaki H. A mean-absolute deviation-skewness portfolio optimization model. Ann. Oper. Res. 1993;45:205–220. doi:10.1007/BF02282050.
- 5. Maranas CD, Androulakis IP, Floudas CA, Berger AJ, Mulvey JM. Solving long-term financial planning problems via global optimization. J. Econ. Dyn. Control. 1997;21:1405–1425. doi:10.1016/S0165-1889(97)00032-8.
- 6. Pardalos PM, Vavasis SA. Quadratic programming with one negative eigenvalue is NP-hard. J. Glob. Optim. 1991;1:15–22. doi:10.1007/BF00120662.
- 7. Quesada I, Grossmann IE. Alternative bounding approximations for the global optimization of various engineering design problems. In: Global Optimization in Engineering Design. 1996. pp. 309–331.
- 8. Matsui T. NP-hardness of linear multiplicative programming and related problems. J. Glob. Optim. 1996;9:113–119. doi:10.1007/BF00121658.
- 9. Mittal S, Schulz AS. An FPTAS for optimizing a class of low-rank functions over a polytope. Math. Program. 2013;141:103–120. doi:10.1007/s10107-011-0511-x.
- 10. Konno H, Thach PT, Tuy H. Optimization on Low Rank Nonconvex Structures. Dordrecht: Kluwer Academic; 1996.
- 11. Konno H, Gao C, Saitoh I. Cutting plane/tabu search algorithms for low rank concave quadratic programming problems. J. Glob. Optim. 1998;13:225–240. doi:10.1023/A:1008230825152.
- 12. Porembski M. Cutting planes for low-rank-like concave minimization problems. Oper. Res. 2004;52:942–953. doi:10.1287/opre.1040.0151.
- 13. Shen P, Wang C. Global optimization for sum of linear ratios problem with coefficients. Appl. Math. Comput. 2006;176(1):219–229.
- 14. Wang C, Shen P. A global optimization algorithm for linear fractional programming. Appl. Math. Comput. 2008;204:281–287.
- 15. Wang C, Liu S. A new linearization method for generalized linear multiplicative programming. Comput. Oper. Res. 2011;38(7):1008–1013. doi:10.1016/j.cor.2010.10.016.
- 16. Jiao HW, Liu SY. A practicable branch and bound algorithm for sum of linear ratios problem. Eur. J. Oper. Res. 2015;243:723–730. doi:10.1016/j.ejor.2015.01.039.
- 17. Vavasis SA. Approximation algorithms for indefinite quadratic programming. Math. Program. 1992;57:279–311. doi:10.1007/BF01581085.
- 18. Depetrini D, Locatelli M. Approximation algorithm for linear fractional multiplicative problems. Math. Program. 2011;128:437–443. doi:10.1007/s10107-009-0309-2.
- 19. Depetrini D, Locatelli M. A FPTAS for a class of linear multiplicative problems. Comput. Optim. Appl. 2009;44:275–288. doi:10.1007/s10589-007-9156-3.
- 20. Goyal V, Genc-Kaya L, Ravi R. An FPTAS for minimizing the product of two non-negative linear cost functions. Math. Program. 2011;126:401–405. doi:10.1007/s10107-009-0287-4.
- 21. Goyal V, Ravi R. An FPTAS for minimizing a class of low-rank quasi-concave functions over a convex set. Oper. Res. Lett. 2013;41:191–196. doi:10.1016/j.orl.2013.01.004.
- 22. Kern W, Woeginger G. Quadratic programming and combinatorial minimum weight product problem. Math. Program. 2007;100:641–649. doi:10.1007/s10107-006-0047-7.
