Abstract
We consider so called 2-stage stochastic integer programs (IPs) and their generalized form, so called multi-stage stochastic IPs. A 2-stage stochastic IP is an integer program of the form $\min\{c^T x \mid \mathcal{A}x = b,\ l \le x \le u,\ x \in \mathbb{Z}^{r+ns}\}$, where the constraint matrix $\mathcal{A}$ consists roughly of $n$ repetitions of a matrix $A$ on the vertical line and $n$ repetitions of a matrix $B$ on the diagonal. In this paper we improve upon an algorithmic result by Hemmecke and Schultz from 2003 [Hemmecke and Schultz, Math. Prog. 2003] for solving 2-stage stochastic IPs. The algorithm is based on the Graver augmentation framework, where our main contribution is an explicit doubly exponential bound on the size of the augmenting steps. The previous bound for the size of the augmenting steps relied on non-constructive finiteness arguments from commutative algebra, and therefore only an implicit bound was known that depends on the parameters $r$, $s$, $t$ and $\Delta$, where $\Delta$ is the largest entry of the constraint matrix. Our new, improved bound however is obtained by a novel theorem which argues about intersections of paths in a vector space. As a result of our new bound, we obtain an algorithm to solve 2-stage stochastic IPs in time $f(r, s, \Delta) \cdot \mathrm{poly}(n, t)$, where $f$ is a doubly exponential function. To complement our result, we also prove a doubly exponential lower bound for the size of the augmenting steps.
Keywords: Integer programming, Parameterized complexity, Two-stage stochastic, Stochastic programming
Introduction
Integer programming is one of the most fundamental problems in algorithm theory. Many problems in combinatorial optimization and other areas can be modeled by integer programs. An integer program (IP) is thereby of the form
$$\min\{c^T x \mid Ax = b,\ l \le x \le u,\ x \in \mathbb{Z}^n\}$$
for some matrix $A \in \mathbb{Z}^{m \times n}$, a right hand side $b \in \mathbb{Z}^m$, a cost vector $c \in \mathbb{Z}^n$ and lower and upper bounds $l, u \in \mathbb{Z}^n$. The famous algorithm of Kannan [22] computes an optimal solution of the IP in time of roughly $n^{O(n)}$ times a polynomial in the encoding length of the instance, where in particular $\Delta$ denotes the largest absolute entry of $A$ and $b$.
In recent years there was significant progress in the development of algorithms for IPs when the constraint matrix $A$ has a specific structure. Consider for example the class of integer programs with a constraint matrix of the form
$$\begin{pmatrix} A & A & \cdots & A \\ B & 0 & \cdots & 0 \\ 0 & B & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & B \end{pmatrix}$$
for some matrices $A$ and $B$. An IP of this specific structure is called an $n$-fold IP. This class of IPs has found numerous applications in the area of string algorithms [24], computational social choice [25] and scheduling [19, 23]. State-of-the-art algorithms compute a solution of an $n$-fold IP in time near-linear in $n$, with a parameter dependence only on the block dimensions and $\Delta$ [7, 10, 11, 20, 26], where $\Delta$ is the largest absolute entry of the matrices $A$ and $B$.
Two-stage stochastic integer programming
Stochastic programming deals with uncertainty in decision making over time [21]. One of the basic models in stochastic programming is 2-stage stochastic programming. In this model one has to decide on a solution in the first stage, and in the second stage there is an uncertainty where $n$ possible scenarios can occur. Each of the $n$ possible scenarios might have a different optimal solution, and the goal is to minimize the costs of the solution of the first stage in addition to the expected costs of the solution of the second stage. In the case that said scenarios can be modeled by an (integer) linear program, we are talking about 2-stage stochastic (integer) linear programs. 2-stage stochastic linear programs that do not contain any integer variables are well understood (we refer to standard text books [3, 21]). In contrast, 2-stage stochastic programs that contain integer variables are hard to solve and are the topic of ongoing research. Typically, those IPs are investigated in the context of decomposition based methods (we refer to a tutorial [27] or a survey [31] on the topic). For progress on 2-stage stochastic programs we refer to [1, 5, 31]. The interest in solving 2-stage stochastic (I)LPs efficiently stems from their wide range of applications, for example in modeling manufacturing processes [9] or energy planning [17].
In this paper we consider 2-stage stochastic IPs with only integral variables. For the extension of the results of this paper to the mixed setting we refer to [4]. Such purely integral 2-stage stochastic IPs have also been considered in the literature from a practical perspective (see [14, 33]). The considered IP is then of the form
$$\min\{c^T x \mid \mathcal{A} x = b,\ l \le x \le u,\ x \in \mathbb{Z}^{r+ns}\} \tag{1}$$
for a given objective vector $c$, a right hand side $b$ and upper and lower bounds $u, l$. The constraint matrix $\mathcal{A}$ has the shape
$$\mathcal{A} = \begin{pmatrix} A^{(1)} & B^{(1)} & 0 & \cdots & 0 \\ A^{(2)} & 0 & B^{(2)} & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ A^{(n)} & 0 & 0 & \cdots & B^{(n)} \end{pmatrix}$$
for given matrices $A^{(i)} \in \mathbb{Z}^{t \times r}$ and $B^{(i)} \in \mathbb{Z}^{t \times s}$.
Typically, 2-stage stochastic IPs are written in a slightly different (equivalent) form that explicitly involves the scenarios and the probability distribution of the scenarios of the second stage. In this presented form, roughly speaking, the solution for the first stage is encoded in the variables corresponding to the vertical matrices. A solution for each of the second stage scenarios is encoded in the variables corresponding to one of the diagonal matrices, and the expectation over the second stage scenarios can be encoded in a linear objective function. Since we do not rely on known techniques of stochastic programming in this paper, we omit the technicalities surrounding 2-stage stochastic IPs and simply refer to a survey for further details [31].
Despite their similarity, it seems that 2-stage stochastic IPs are significantly harder to solve than $n$-fold IPs. While Hemmecke and Schultz [18] have shown that a 2-stage stochastic IP with constraint matrix $\mathcal{A}$ can be solved in running time of the form $f(r, s, t, \Delta) \cdot \mathrm{poly}(n)$ for some computable function $f$, the actual dependence on the parameters was unknown (we elaborate on this further in the coming section). Their algorithm is based on the augmentation framework, which we also discuss in the following section.
Graver elements and the augmentation framework
Suppose we have an initial feasible solution $x$ of an IP and our goal is to find an optimal solution. The idea behind the augmentation framework (see [11]) is to compute an augmenting (integral) vector $y$ in the kernel of the constraint matrix, i.e., with $\mathcal{A}y = 0$. A new solution $x' = x + \gamma y$ with improved objective can then be defined for a suitable step length $\gamma \in \mathbb{Z}_{>0}$. This procedure is iterated until a solution with optimal objective is obtained eventually.
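A minimal sketch of this augmentation loop, with a brute-force search over small kernel directions standing in for the efficient dynamic programs discussed in this paper (all names and the tiny example IP are illustrative only):

```python
import itertools

def improving_step(A, c, x, u, radius=1):
    """Brute-force search for a kernel direction y with entries in [-radius, radius]
    and a step length g such that x + g*y stays within [0, u] and improves c^T x."""
    n = len(x)
    best = None
    for y in itertools.product(range(-radius, radius + 1), repeat=n):
        if not any(y):
            continue
        if any(sum(A[i][j] * y[j] for j in range(n)) != 0 for i in range(len(A))):
            continue  # y is not in the kernel of A
        g = 0  # largest feasible step length in direction y
        while all(0 <= x[j] + (g + 1) * y[j] <= u[j] for j in range(n)):
            g += 1
        gain = g * sum(c[j] * y[j] for j in range(n))
        if gain < 0 and (best is None or gain < best[0]):
            best = (gain, [x[j] + g * y[j] for j in range(n)])
    return None if best is None else best[1]

def augment_to_optimality(A, c, x, u):
    """Iterate augmenting steps until no improving kernel direction remains."""
    while (nxt := improving_step(A, c, x, u)) is not None:
        x = nxt
    return x
```

For example, minimizing $3x_1 + 2x_2 + x_3$ subject to $x_1 + x_2 + x_3 = 4$ and $0 \le x_i \le 4$ from the feasible point $(4,0,0)$ reaches the optimum $(0,0,4)$ by repeated augmentation.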
We call an integer vector $y$ with $\mathcal{A}y = 0$ a cycle. A cycle $y$ can be decomposed if there exist integral cycles $y^{(1)}, y^{(2)} \neq 0$ with $y = y^{(1)} + y^{(2)}$ and $y^{(j)}_i \cdot y_i \ge 0$ for all $i$ and $j \in \{1, 2\}$ (i.e., the vectors $y^{(1)}, y^{(2)}$ are sign-compatible with $y$). An integral vector that can not be decomposed is called a Graver element [15], or we simply say that it is indecomposable. The set of all indecomposable elements is called the Graver basis.
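For very small matrices, these definitions can be applied directly: the following brute-force sketch (illustrative only; not one of the algorithms discussed in this paper) enumerates kernel vectors in a box and filters out the decomposable ones. It uses the fact that a kernel vector $y$ is decomposable exactly if some other non-zero kernel vector fits inside $y$ sign-compatibly.

```python
import itertools

def graver_elements(A, bound):
    """All Graver-basis elements of A with entries in [-bound, bound] (brute force).
    A is given as a list of rows."""
    n = len(A[0])
    kernel = [y for y in itertools.product(range(-bound, bound + 1), repeat=n)
              if any(y) and all(sum(A[i][j] * y[j] for j in range(n)) == 0
                                for i in range(len(A)))]

    def fits_inside(z, y):
        # z is sign-compatible with y and componentwise no larger in absolute value
        return all(z[j] * y[j] >= 0 and abs(z[j]) <= abs(y[j]) for j in range(n))

    # y decomposes iff some non-zero kernel vector z != y fits inside y
    # (then y - z is a sign-compatible kernel vector as well)
    return [y for y in kernel
            if not any(z != y and fits_inside(z, y) for z in kernel)]
```

For instance, the matrix $(1\; 1\; {-1})$ has exactly the six Graver elements $\pm(1,0,1)$, $\pm(0,1,1)$, $\pm(1,-1,0)$.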
For a given bound on the size of Graver elements of the constraint matrix, an augmenting vector can often be computed by a dynamic program (depending on the structure of the constraint matrix), where the running time of the dynamic program depends on the respective bound. An optimal solution of the corresponding IP can then be computed by using the augmentation framework. For a detailed description of the augmentation framework we refer to the paper by Eisenbrand et al. [11].
In the case that the constraint matrix has a very specific structure, one can sometimes show improved bounds. Specifically, if the constraint matrix has a 2-stage stochastic shape with identical matrices $A$ in the vertical line and identical matrices $B$ in the diagonal line, then Hemmecke and Schultz [18] were able to prove a bound for the size of Graver elements that only depends on the parameters $r$, $s$, $t$ and $\Delta$. The presented bound is an existential result and uses so called saturation results from commutative algebra. In their line of proof Maclagan's theorem is used, which only yields a finiteness statement (i.e., there are no infinite antichains in the set of monomial ideals in a polynomial ring in finitely many variables over a field) and no explicit bound is known yet for this quantity. It is only known that the dependence on the parameters is lower bounded by Ackermann's function [28]. This implies that the dependence on the parameters $r$, $s$, $t$ and $\Delta$ in the implicit bound for the size of Graver elements by Hemmecke and Schultz is at least Ackermannian.
Very recently, improved bounds for Graver elements of general matrices and matrices with specific structure like n-fold [10] or 4-block structure [6] were developed.
Lemma 1
(Steinitz [16, 32]) Let $v_1, \ldots, v_m \in \mathbb{R}^d$ be vectors with $\|v_i\|_\infty \le \Delta$ for $i = 1, \ldots, m$. Assuming that $\sum_{i=1}^m v_i = 0$, then there is a permutation $\pi$ such that for each $k \in \{1, \ldots, m\}$ the norm of the partial sum is bounded by
$$\Big\| \sum_{i=1}^{k} v_{\pi(i)} \Big\|_\infty \le d \cdot \Delta.$$
The Steinitz Lemma was used by Eisenbrand, Hunkenschröder and Klein [10] to bound the size of Graver elements for a given matrix A. As we use the following theorem and its technique in this paper, we give a brief sketch of its proof. The Steinitz Lemma was first used by Eisenbrand and Weismantel [12] in the context of integer programming.
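On tiny zero-sum instances, the guarantee of the Steinitz Lemma can be checked exhaustively. The following sketch searches all orderings of a zero-sum sequence (feasible only for very small inputs) and compares the best achievable partial-sum bound against the guaranteed $d \cdot \Delta$:

```python
import itertools

def best_steinitz_order(vecs):
    """Exhaustively find the ordering minimizing the maximum infinity-norm
    over all partial sums of the sequence."""
    d = len(vecs[0])
    best = None
    for perm in itertools.permutations(vecs):
        s = [0] * d
        worst = 0
        for v in perm:
            s = [s[k] + v[k] for k in range(d)]
            worst = max(worst, max(abs(x) for x in s))
        if best is None or worst < best:
            best = worst
    return best

# six vectors in Z^2 with entries bounded by Delta = 1, summing to zero;
# the Steinitz Lemma guarantees an ordering with partial sums bounded by d * Delta = 2
vecs = [(1, 0), (1, 0), (0, 1), (-1, -1), (-1, -1), (0, 1)]
assert best_steinitz_order(vecs) <= 2
```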
Theorem 1
(Eisenbrand, Hunkenschröder, Klein [10]) Let $A \in \mathbb{Z}^{m \times n}$ be an integer matrix where every entry of $A$ is bounded by $\Delta$ in absolute value. Let $g$ be an element of the Graver basis of $A$; then $\|g\|_1 \le (2m\Delta + 1)^m$.
Proof
Consider the sequence of vectors $v_1, \ldots, v_{\|g\|_1}$ consisting of $|g_i|$ copies of the $i$th column of $A$ if $g_i$ is positive and $|g_i|$ copies of the negative of the $i$th column of $A$ if $g_i$ is negative. As $g$ is a cycle we obtain that $\sum_j v_j = 0$. Using the Steinitz Lemma above, there exists a reordering of the vectors such that each partial sum $p_k$ satisfies $\|p_k\|_\infty \le m\Delta$.
Suppose by contradiction that $\|g\|_1 > (2m\Delta + 1)^m$. Since there are at most $(2m\Delta + 1)^m$ integer points of infinity-norm at most $m\Delta$, by the pigeonhole principle there exist two partial sums that are equal. However, this means that $g$ can be decomposed and hence can not be a Graver element.
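The pigeonhole step of this proof is constructive: once two partial sums coincide, the vectors in between sum to zero and form a sub-cycle, decomposing $g$. A small sketch of this splitting step:

```python
def split_at_repeated_prefix(vecs):
    """If two partial sums of the sequence coincide, split the sequence into a
    middle block summing to zero and the remaining vectors (which then sum to
    the original total). Returns None if no two partial sums coincide."""
    d = len(vecs[0])
    seen = {(0,) * d: 0}
    s = (0,) * d
    for m, v in enumerate(vecs, 1):
        s = tuple(s[k] + v[k] for k in range(d))
        if s in seen:
            i = seen[s]
            return vecs[i:m], vecs[:i] + vecs[m:]
        seen[s] = m
    return None
```

For example, the sequence $(1,0), (0,1), (-1,-1), (1,0)$ revisits the partial sum $(1,0)$, so the middle block $(0,1), (-1,-1), (1,0)$ forms a sub-cycle.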
Our results
The main result of this paper is to prove a new structural lemma that enhances the toolset of the augmentation framework. We show that this lemma can be directly used to obtain an explicit bound for Graver elements of the constraint matrix of 2-stage stochastic IPs. But we think that it might also be of independent interest as it provides interesting structural insights into vector sets.
Lemma 2
Given are multisets $T_1, \ldots, T_n \subseteq \mathbb{Z}^d$ where all elements have bounded size, i.e., $\|x\|_\infty \le \Delta$ for every $x \in T_i$. Assuming that the total sum of the elements in each multiset is equal, i.e.,
$$\sum_{x \in T_1} x = \sum_{x \in T_2} x = \cdots = \sum_{x \in T_n} x,$$
then there exist nonempty submultisets $S_i \subseteq T_i$ of cardinality bounded by a function of $d$ and $\Delta$ only such that
$$\sum_{x \in S_1} x = \sum_{x \in S_2} x = \cdots = \sum_{x \in S_n} x.$$
Note that Lemma 2 only makes sense when we consider the $T_i$ to be multisets, as the number of different sets without allowing multiplicity of vectors would be bounded by $2^{(2\Delta + 1)^d}$.
A geometric interpretation of Lemma 2 is given in the following figure. On the left side we have $n$ paths consisting of the vectors of the multisets $T_1, \ldots, T_n$, and all paths end at the same point $b$.
Then Lemma 2 shows that there always exist permutations of the vectors of each path such that all paths meet at a common intermediate point of bounded size. The bound depends only on $\Delta$ and the dimension $d$ and is thus independent of the number of paths $n$ and the size of $b$. For the proof of Lemma 2 we need basic properties of the intersections of integer cones. We show that those properties can be obtained by using the Steinitz Lemma.
We show that Lemma 2 has strong implications in the context of integer programming. Using Lemma 2, we can show that the size of the Graver elements of the 2-stage stochastic constraint matrix $\mathcal{A}$ is bounded doubly exponentially in $r$, $s$ and $\Delta$. Within the framework of Graver augmenting steps, this bound implies that 2-stage stochastic IPs can be solved in time $f(r, s, \Delta) \cdot \mathrm{poly}(n, t, \varphi)$ for a doubly exponential function $f$, where $\varphi$ is the encoding length of the instance (see Theorem 3). With this we improve upon the implicit bound for the size of the Graver elements of 2-stage stochastic constraint matrices due to Hemmecke and Schultz [18].
Based on the structural observations of this paper, in a recent work by Cslovjecsek et al. [8] an algorithm was developed that solves 2-stage stochastic IPs with an improved running time that is near-linear in $n$.
Furthermore, we show that Lemma 2 can also be applied to bound the size of Graver elements of constraint matrices that have a multi-stage stochastic structure. Multi-stage stochastic IPs are a well-known generalization of 2-stage stochastic IPs. By this, we improve upon a result of Aschenbrenner and Hemmecke [2].
To complement our results for the upper bound, we also present in Sect. 3 a lower bound for the size of Graver elements of matrices that have a 2-stage stochastic IP structure. The given lower bound is for the case that the vertical matrices consist of a single column, i.e., $r = 1$. In this case we show in Theorem 4 a matrix whose Graver elements have doubly exponential size.
The complexity of two-stage stochastic IPs
First, we argue about the application of Lemma 2. In the following we show that the infinity-norm of Graver elements of matrices with a 2-stage stochastic structure can be bounded using Lemma 2.
Given the block structure of the IP (1), we define for a vector $y \in \mathbb{Z}^{r+ns}$ with $\mathcal{A}y = 0$ the vector $y^{(0)} \in \mathbb{Z}^r$, which consists of the entries of $y$ that belong to the vertical matrices $A^{(1)}, \ldots, A^{(n)}$, and we define $y^{(i)} \in \mathbb{Z}^s$ to be the entries of $y$ that belong to the diagonal matrix $B^{(i)}$.
Theorem 2
Let $y$ be a Graver element of the constraint matrix $\mathcal{A}$ of IP (1). Then $\|y\|_\infty$ is bounded doubly exponentially in $r$, $s$ and $\Delta$; more precisely, the bound already holds for $\|(y^{(0)}, y^{(i)})\|_1$ for every $i$.
Proof
Let $y$ be a cycle of IP (1), i.e., $\mathcal{A}y = 0$. Consider the submatrix of the matrix $\mathcal{A}$ denoted by $\mathcal{A}_i$, consisting of the matrix $A^{(i)}$ of the vertical line and the matrix $B^{(i)}$ of the diagonal line. Consider further the corresponding variables $y^{(0)}$ and $y^{(i)}$ of the respective matrices. Since $\mathcal{A}y = 0$, we also have that $\mathcal{A}_i (y^{(0)}, y^{(i)}) = 0$. By using Theorem 1 iteratively, we can decompose $(y^{(0)}, y^{(i)})$ into a multiset $C_i$ of indecomposable elements, i.e., $(y^{(0)}, y^{(i)}) = \sum_{c \in C_i} c$ with $\|c\|_1 \le (2t\Delta + 1)^t$ for each $c \in C_i$.
Since all matrices $\mathcal{A}_1, \ldots, \mathcal{A}_n$ share the same set of variables in the overlapping matrices $A^{(1)}, \ldots, A^{(n)}$, we can not directly derive cycles for the entire matrix $\mathcal{A}$ from cycles of the submatrices $\mathcal{A}_i$. This is because a cycle for $\mathcal{A}_i$ and a cycle for $\mathcal{A}_j$ might have conflicting entries in the overlapping part of the vector.
Let $p : \mathbb{Z}^{r+s} \to \mathbb{Z}^r$ be the projection that maps a cycle $z$ of a block matrix $\mathcal{A}_i$ to the variables in the overlapping part, i.e., $p(z) = (z_1, \ldots, z_r)$.
In the case that $\|y^{(0)}\|_1$ is large, we will show that we can find a cycle $\bar{y} \neq 0$ of smaller length that is sign-compatible with $y$, and therefore show that $y$ can be decomposed. In order to obtain this cycle for the entire matrix $\mathcal{A}$, we have to find a multiset of cycles in each block matrix $\mathcal{A}_i$ such that the sum of the projected parts is identical over all blocks. We apply Lemma 2 to the multisets $T_1, \ldots, T_n$, where $T_i = \{p(c) \mid c \in C_i\}$ is the multiset of projected elements of the decomposition $C_i$, for which $\sum_{c \in C_i} p(c) = y^{(0)}$ holds. Note that $\|p(c)\|_\infty \le \|c\|_1 \le (2t\Delta + 1)^t$ and hence the conditions to apply Lemma 2 are fulfilled. Since every $(y^{(0)}, y^{(i)})$ is decomposed in a sign-compatible way, every entry of $y^{(0)}$ has the same sign over all elements of $T_i$. Hence we can flip the negative signs in order to apply Lemma 2.
By Lemma 2, there exist submultisets $S_i \subseteq T_i$ of bounded cardinality such that $\sum_{x \in S_1} x = \cdots = \sum_{x \in S_n} x$. As there exist submultisets $C'_i \subseteq C_i$ with $\{p(c) \mid c \in C'_i\} = S_i$, we can use those submultisets to define a solution $\bar{y}$ with $\mathcal{A}\bar{y} = 0$. For $i \ge 1$ let $\bar{y}^{(i)} = \sum_{c \in C'_i} q(c)$, where $q : \mathbb{Z}^{r+s} \to \mathbb{Z}^s$ is the projection that maps a cycle to the part that belongs to matrix $B^{(i)}$, i.e., $q(z) = (z_{r+1}, \ldots, z_{r+s})$. Let $\bar{y}^{(0)} = \sum_{c \in C'_j} p(c)$ for an arbitrary $j$, which is well defined as the sum is identical for all multisets $C'_i$. As the cardinality of the multisets $C'_i$ is bounded, we know by construction of $\bar{y}$ that the one-norm of every $(\bar{y}^{(0)}, \bar{y}^{(i)})$ is bounded by $|C'_i| \cdot (2t\Delta + 1)^t$.
This directly implies the infinity-norm bound for $y$ as well.
As a consequence of the bound for the size of the Graver elements, we obtain by the framework of augmenting steps an efficient algorithm to compute an optimal solution of a 2-stage stochastic IP. By using the augmentation framework as described in [11] we obtain the following theorem regarding the worst-case complexity for solving 2-stage stochastic IPs.
Theorem 3
A 2-stage stochastic IP of the form (1) can be solved in time $f(r, s, \Delta) \cdot \mathrm{poly}(n, t, \varphi)$ for a doubly exponential function $f$, where $\varphi$ is the encoding length of the IP.
Proof
Let $L_B$ be the bound for $\|y\|_1$ that we obtain from Theorem 2. To find the optimal augmenting step, it is sufficient to solve the so called augmenting IP
$$\min\{c^T y \mid \mathcal{A}y = 0,\ \|y\|_1 \le L_B,\ \bar{l} \le y \le \bar{u},\ y \in \mathbb{Z}^{r+ns}\} \tag{2}$$
for some upper and lower bounds $\bar{l}, \bar{u}$. Having the best augmenting step at hand, one can show that the objective value improves by a certain factor. We refer to Corollary 14 of [11], which shows that IP (1) can be solved if the above augmenting IP (2) can be solved.
In the following we briefly show how to solve the IP (2) in order to compute the augmenting step. The algorithm works as follows:
- Compute for every $x^{(0)} \in \mathbb{Z}^r$ with $\|x^{(0)}\|_1 \le L_B$ the objective value of the cycle $y$ consisting of $(x^{(0)}, x^{(1)}, \ldots, x^{(n)})$, where $x^{(i)}$ for $i \ge 1$ are the optimal solutions of the IP
$$\min\{(c^{(i)})^T x \mid B^{(i)} x = -A^{(i)} x^{(0)},\ \bar{l}^{(i)} \le x \le \bar{u}^{(i)},\ x \in \mathbb{Z}^s\},$$
where $\bar{l}^{(i)}, \bar{u}^{(i)}$ are the upper and lower bounds for the variables $x^{(i)}$ and $c^{(i)}$ is their corresponding objective vector. Note that the first set of constraints of the IP ensures that $\mathcal{A}y = 0$. Each of these IPs has only $t$ constraints and can therefore be solved efficiently with the algorithm of Eisenbrand and Weismantel [12]. Return the cycle with the best objective value.
As the number of different vectors $x^{(0)} \in \mathbb{Z}^r$ with 1-norm at most $L_B$ is bounded by $(2L_B + 1)^r$, step 1 of the algorithm requires solving at most $(2L_B + 1)^r \cdot n$ such IPs.
About the intersection of integer cones
Before we are ready to prove our main Lemma 2, we need two helpful observations about the intersection of integer cones. An integer cone is defined for a given (finite) generating set of elements $B \subset \mathbb{Z}^d$ by
$$\mathrm{intcone}(B) = \Big\{ \sum_{b \in B} \lambda_b b \;\Big|\; \lambda_b \in \mathbb{Z}_{\ge 0} \Big\}.$$
Note that the intersection of two integer cones is again an integer cone, as the intersection is closed under addition and under multiplication with positive integer scalars.
We say that an element $b$ of an integer cone is indecomposable if there do not exist non-zero elements $b_1, b_2$ of the cone such that $b = b_1 + b_2$. We can assume that the generating set $B$ of an integer cone consists just of the set of indecomposable elements, as any decomposable element can be removed from the generating set.
In the following we allow using a vector set $B$ as a matrix and vice versa, where the elements of the set $B$ are the columns of the matrix $B$. This way we can multiply $B$ with a vector, i.e., $B\lambda = \sum_{b_i \in B} \lambda_i b_i$ for some $\lambda \in \mathbb{Z}^{|B|}$.
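Membership in an integer cone can be tested naively by recursion. The sketch below restricts itself to non-negative generators, an assumption made purely so that the search terminates; it also lets one inspect small intersection cones, such as the 1-dimensional example where the intersection is generated by the least common multiple.

```python
def in_int_cone(gens, target, memo=None):
    """Decide whether target lies in intcone(gens), i.e. whether it is a
    non-negative integer combination of the generators. Assumes all generators
    are non-negative (and non-zero), so the recursion terminates."""
    if memo is None:
        memo = {}
    target = tuple(target)
    if target in memo:
        return memo[target]
    if all(x == 0 for x in target):
        return True
    res = any(
        any(g) and all(x >= gx for x, gx in zip(target, g))
        and in_int_cone(gens, tuple(x - gx for x, gx in zip(target, g)), memo)
        for g in gens
    )
    memo[target] = res
    return res
```

For instance, the elements of $\mathrm{intcone}(\{2\}) \cap \mathrm{intcone}(\{3\})$ below 13 are exactly 6 and 12, matching $\mathrm{intcone}(\{\mathrm{lcm}(2,3)\})$.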
Lemma 3
Consider integer cones $\mathrm{intcone}(B_1)$ and $\mathrm{intcone}(B_2)$ for some generating sets $B_1, B_2 \subset \mathbb{Z}^d$ where each element $b \in B_1 \cup B_2$ has bounded norm $\|b\|_\infty \le \Delta$. Consider the integer cone of the intersection
$$\mathrm{intcone}(B_1) \cap \mathrm{intcone}(B_2) = \mathrm{intcone}(B_I)$$
for some generating set of elements $B_I$. Then for each indecomposable element $b$ of the intersection cone with $b = B_1 \lambda = B_2 \mu$ for some $\lambda \in \mathbb{Z}_{\ge 0}^{|B_1|}$ and $\mu \in \mathbb{Z}_{\ge 0}^{|B_2|}$, we have that $\|\lambda\|_1 + \|\mu\|_1 \le (2d\Delta + 1)^d$. Furthermore, the norm of $b$ is bounded by $\|b\|_\infty \le \Delta \cdot (2d\Delta + 1)^d$.
Proof
Consider the representation $b = B_1 \lambda = B_2 \mu$ of a point in the intersection of $\mathrm{intcone}(B_1)$ and $\mathrm{intcone}(B_2)$. The sum consisting of $\lambda_i$ copies of the $i$th element of $B_1$ and $\mu_i$ copies of the negative of the $i$th element of $B_2$ equals zero. Using Steinitz' Lemma, there exists a reordering of the vectors such that each partial sum $p_k$ satisfies $\|p_k\|_\infty \le d\Delta$.
If $\|\lambda\|_1 + \|\mu\|_1 > (2d\Delta + 1)^d$ then by the pigeonhole principle there exist two partial sums of the same value. Hence, there are two subsequences that sum up to zero, i.e., there exist non-zero vectors $\lambda', \lambda''$ with $\lambda' + \lambda'' = \lambda$ and $\mu', \mu''$ with $\mu' + \mu'' = \mu$ such that $B_1 \lambda' = B_2 \mu'$ and $B_1 \lambda'' = B_2 \mu''$. Hence $B_1 \lambda'$ and $B_1 \lambda''$ are elements of the intersection cone. This implies that $b$ can be decomposed in the intersection cone.
Using a similar argumentation as in the previous lemma, we can consider the intersection of several integer cones. Note that we can not simply use the above Lemma inductively as this would lead to worse bounds.
Lemma 4
Consider integer cones $\mathrm{intcone}(B_1), \ldots, \mathrm{intcone}(B_n)$ for some generating sets $B_1, \ldots, B_n \subset \mathbb{Z}^d$ with $\|b\|_\infty \le \Delta$ for each $b \in B_i$. Consider the integer cone of the intersection
$$\mathrm{intcone}(B_1) \cap \cdots \cap \mathrm{intcone}(B_n) = \mathrm{intcone}(B_I)$$
for some generating set of elements $B_I$.
Then for each indecomposable element $b$ of the intersection cone with $b = B_i \lambda^{(i)}$ for some $\lambda^{(i)} \in \mathbb{Z}_{\ge 0}^{|B_i|}$, we have that $\|\lambda^{(i)}\|_1 \le (8(d+1)\Delta^2 + 1)^{d(n-1)}$ for all $i$.
Proof
Given vectors $\lambda^{(1)}, \ldots, \lambda^{(n)}$ with $b = B_i \lambda^{(i)}$ and $L_i = \|\lambda^{(i)}\|_1$ for each $i$. Consider for each $i$ the sum of vectors $b = \sum_j v^{(i)}_j$ consisting of $\lambda^{(i)}_j$ copies of the $j$th element of $B_i$. By adding $0$ vectors to the sums we can assume without loss of generality that every sequence has the same number of summands $L = \max_i L_i$.
Claim: There exists a reordering $\pi_i$ for each $i$ of these sums such that each partial sum is close to the line between $0$ and $b$, and more precisely:
$$\Big\| \sum_{j=1}^{m} v^{(i)}_{\pi_i(j)} - \frac{m}{L}\, b \Big\|_\infty \le 2(d+1)\Delta^2$$
for each $i$ and each $m \le L$. To see this, we construct for each $i$ the sequence that consists of the vectors $v^{(i)}_j$ and subtract $L$ fractional parts $\frac{1}{L} b$ of the vector $b$. To count the number of vectors we use an additional component with weight $1$ of the vector and define $\hat{v}^{(i)}_j = (v^{(i)}_j, 1)$ and $\hat{u} = (-\frac{1}{L} b, -1)$. Note that $\|\hat{v}^{(i)}_j\|_\infty, \|\hat{u}\|_\infty \le \Delta$, as $\|\frac{1}{L} b\|_\infty \le \Delta$. Then the sequence sums up to zero, as $b - L \cdot \frac{1}{L} b = 0$. Hence we can apply the Steinitz Lemma in dimension $d+1$ to obtain a reordering for each sequence such that each partial sum $\hat{p}_m$ satisfies $\|\hat{p}_m\|_\infty \le (d+1)\Delta$. Each partial sum that sums up to index $m$ contains $p$ vectors $\hat{v}^{(i)}_j$ and $q$ vectors $\hat{u}$ for some $p, q$ with $p + q = m$. Furthermore, the last entry of each vector guarantees that $|p - q| \le (d+1)\Delta$, which, together with $\|\frac{1}{L} b\|_\infty \le \Delta$, implies the statement of the claim.
Now denote by $P^{(i)}_m = \sum_{j \le m} v^{(i)}_{\pi_i(j)}$ the partial sums and consider the tuple of differences $(P^{(2)}_m - P^{(1)}_m, \ldots, P^{(n)}_m - P^{(1)}_m)$ at a common index $m$. Using the claim from above, we can now argue that $\|P^{(i)}_m - P^{(i')}_m\|_\infty \le 4(d+1)\Delta^2$ for each $m$ and all $i, i'$, as each partial sum is close to $\frac{m}{L} b$. Therefore the number of different values for the tuple of differences is bounded by $(8(d+1)\Delta^2 + 1)^{d(n-1)}$. Assuming that $L > (8(d+1)\Delta^2 + 1)^{d(n-1)}$, by the pigeonhole principle there exist indices $m$ and $m'$ with $m < m'$ such that the tuples of differences coincide, and hence $P^{(i)}_{m'} - P^{(i)}_m$ is the same vector $b'$ for each $i$. Hence $b'$ and $b - b'$ are elements of the intersection of all integer cones. This implies that $b$ can be decomposed and is therefore not a generating element of $\mathrm{intcone}(B_I)$.
Proof of Lemma 2
Using the results from the previous section, we are now finally able to prove the main Lemma 2.
We begin with a sketch of the proof for the 1-dimensional case. This will be helpful when we generalize the approach later. In the 1-dimensional case, the multisets $T_i$ consist solely of natural numbers, i.e., $T_i \subset \{1, \ldots, \Delta\}$ (after flipping negative signs if necessary). Suppose first that each set $T_i$ consists only of many copies of a single integral number $a_i$. Then it is easy to find a common multiple of all the $a_i$, as $\mathrm{lcm}(a_1, \ldots, a_n) \le \mathrm{lcm}(1, \ldots, \Delta)$. Hence one can choose the subsets $S_i$ consisting of $\mathrm{lcm}(a_1, \ldots, a_n)/a_i$ copies of $a_i$. Now suppose that the multisets can be arbitrary. If some set has cardinality at most $\Delta \cdot \mathrm{lcm}(1, \ldots, \Delta)$, then every set has cardinality at most $\Delta^2 \cdot \mathrm{lcm}(1, \ldots, \Delta)$ and we are done by choosing $S_i = T_i$. But on the other hand, if $|T_i| > \Delta \cdot \mathrm{lcm}(1, \ldots, \Delta)$ for every $i$, by the pigeonhole principle there exists a single element $a_i$ in every $T_i$ that appears at least $|T_i|/\Delta > \mathrm{lcm}(1, \ldots, \Delta)$ times. Then we can argue as in the previous case, where we needed at most $\mathrm{lcm}(1, \ldots, \Delta)$ copies of a number $a_i$. Note that the cardinality of the sets has to be of similar size: as the elements of each set sum up to the same value, the cardinality of two sets can only differ by a factor of $\Delta$. This proves the lemma in the case $d = 1$.
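The "large sets" case of this 1-dimensional sketch can be written out directly. The code below assumes, as in the sketch, that the most frequent value of each multiset appears often enough to supply the required number of copies (it checks this assumption explicitly):

```python
from math import lcm
from collections import Counter

def common_sum_submultisets(multisets):
    """1-dimensional case of the lemma: in each multiset of positive integers,
    pick its most frequent value v_i and take L // v_i copies of it, where
    L = lcm(v_1, ..., v_n). All chosen sub-multisets then sum to the same L."""
    chosen = [Counter(ms).most_common(1)[0][0] for ms in multisets]
    L = lcm(*chosen)
    subs = [[v] * (L // v) for v in chosen]
    for ms, sub in zip(multisets, subs):
        # the 'large sets' assumption: enough copies of the chosen value exist
        assert Counter(ms)[sub[0]] >= len(sub), "multiset too small for this sketch"
    return subs
```

For example, for the multisets $\{2^{(6)}\}$, $\{3^{(4)}\}$ and $\{6, 6\}$ (all summing to 12), the chosen sub-multisets all sum to $\mathrm{lcm}(2, 3, 6) = 6$.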
In the case of higher dimensions, the lemma seems much harder to prove. But in principle we use generalizations of the above techniques. Instead of single natural numbers however, we have to work with bases of corresponding basic feasible LP solutions and the intersections of the integer cones generated by those bases.
In the proof we need the notion of a cone, which is simply the relaxation of an integer cone. For a generating set $B \subset \mathbb{Z}^d$, a cone is defined by
$$\mathrm{cone}(B) = \Big\{ \sum_{b \in B} \lambda_b b \;\Big|\; \lambda_b \in \mathbb{R}_{\ge 0} \Big\}.$$
Proof
First, we describe the multisets $T_1, \ldots, T_n$ by multiplicity vectors $\mu^{(1)}, \ldots, \mu^{(n)} \in \mathbb{Z}_{\ge 0}^{P}$, where $P$ is the set of non-negative integer points $p$ with $\|p\|_\infty \le \Delta$. Each component $\mu^{(i)}_p$ thereby states the multiplicity of a vector $p$ in $T_i$. Hence $\sum_{p \in P} \mu^{(i)}_p \, p = b$ for every $i$, and our objective is to find non-zero vectors $\bar{\mu}^{(i)} \le \mu^{(i)}$ with bounded $\|\bar{\mu}^{(i)}\|_1$ such that $\sum_{p \in P} \bar{\mu}^{(1)}_p p = \cdots = \sum_{p \in P} \bar{\mu}^{(n)}_p p$.
Consider the linear program
$$\sum_{p \in P} \mu_p \, p = b, \quad \mu \ge 0. \tag{3}$$
Let $\mu^{*(1)}, \ldots, \mu^{*(K)}$ be all possible basic feasible solutions of the LP corresponding to bases $\mathcal{B}_1, \ldots, \mathcal{B}_K$.
In the following we prove two claims that correspond to the two previously described cases of the one dimensional case. First, we consider the case that essentially each multiset corresponds to one of the basic feasible solutions $\mu^{*(k)}$. In the 1-dimensional case this would mean that each set consists only of a single number. Note that the intersection of integer cones in dimension 1 is essentially just given by the least common multiple, i.e., $\mathrm{intcone}(a_1) \cap \cdots \cap \mathrm{intcone}(a_n) = \mathrm{intcone}(\mathrm{lcm}(a_1, \ldots, a_n))$ for natural numbers $a_1, \ldots, a_n$.
Claim 1
If for all $k$ we have that $\|\mu^{*(k)}\|_1$ is sufficiently large (as a function of $d$ and $\Delta$ only), then there exist non-zero vectors $\bar{\mu}^{(k)} \le \mu^{*(k)}$ with bounded 1-norm such that $\sum_{p} \bar{\mu}^{(1)}_p p = \cdots = \sum_p \bar{\mu}^{(K)}_p p$.
Note that all basic feasible solutions have to be of similar size. Since $\sum_p \mu^{*(k)}_p \, p = b$ holds for all $k$, we know that $\|\mu^{*(k)}\|_1$ and $\|\mu^{*(k')}\|_1$ can only differ by a bounded factor (depending on $d$ and $\Delta$) for all $k, k'$. Hence all basic feasible solutions are either small or all of them are large. This claim considers the case that the size of all $\mu^{*(k)}$ is large.
Proof of the claim
Note that $b \in \mathrm{cone}(\mathcal{B}_k)$ for every basis $\mathcal{B}_k$ and hence $b$ lies in the intersection of the corresponding cones. In the following, our goal is to find a non-zero point $q$ of bounded size such that $q = \mathcal{B}_k \lambda^{(k)}$ for some integral vectors $\lambda^{(k)}$ with $0 \le \lambda^{(k)} \le \mu^{*(k)}$. However, this means that $q$ has to be in the integer cone $\mathrm{intcone}(\mathcal{B}_k)$ for every $k$ and therefore in the intersection of all the integer cones, i.e., $q \in \bigcap_k \mathrm{intcone}(\mathcal{B}_k)$. By Lemma 4 there exists a set of generating elements $B_I$ such that
$$\bigcap_{k=1}^{K} \mathrm{intcone}(\mathcal{B}_k) = \mathrm{intcone}(B_I),$$
and, as the elements of the bases are bounded by $\Delta$,
each generating vector $v \in B_I$ can be represented by $v = \mathcal{B}_k \lambda^{(k)}$ for some $\lambda^{(k)} \ge 0$ of bounded 1-norm for each basis $\mathcal{B}_k$.
As $b \in \mathrm{intcone}(B_I)$, there exists a vector $\gamma \ge 0$ with $b = B_I \gamma$. Our goal is to show that there exists a non-zero generating element $q \in B_I$ with $\gamma_q \ge 1$ whose representations satisfy $\lambda^{(k)} \le \mu^{*(k)}$. In this case $b$ can simply be written by $b = q + (b - q)$ with both summands in the intersection of all cones: there exist for each generating set $\mathcal{B}_k$ vectors $\lambda^{(k)}$ and $\lambda'^{(k)}$ such that $q = \mathcal{B}_k \lambda^{(k)}$ and $b - q = \mathcal{B}_k \lambda'^{(k)}$. Hence we finally obtain the vectors $\bar{\mu}^{(k)}$ with $\bar{\mu}^{(k)} \le \mu^{*(k)}$ and $\sum_p \bar{\mu}^{(k)}_p p = q$ for all $k$, which shows the claim.
Therefore it only remains to prove the existence of the point $q$. By Lemma 4, each vector $v \in B_I$ can be represented by $v = \mathcal{B}_k \lambda^{(k)}$ for some $\lambda^{(k)}$ of bounded 1-norm for every basis $\mathcal{B}_k$.
As $b = B_I \gamma$, every $\mu^{*(k)}$ can be written through the representations of the generating elements, and we obtain the required componentwise bound assuming that every $\|\mu^{*(k)}\|_1$ is sufficiently large.
The last step follows as we can assume by Caratheodory's theorem [30] that the number of non-zero components of $\gamma$ is less or equal than $d$. Hence if the $\|\mu^{*(k)}\|_1$ are large enough, then there has to exist a generating element $q$ with $\gamma_q \ge 1$ whose representations fit below the $\mu^{*(k)}$, which proves the claim.
Claim 2
For every vector $\mu^{(i)}$ with $\sum_{p \in P} \mu^{(i)}_p \, p = b$ there exists a basic feasible solution $\mu^{*(k)}$ of LP (3) with basis $\mathcal{B}_k$ such that $\mu^{(i)} \ge \frac{1}{K} \mu^{*(k)}$, in the sense that $\mu^{(i)}_p \ge \frac{1}{K} \mu^{*(k)}_p$ for every $p \in P$.
Proof of the claim
The proof of the claim can be easily seen, as each multiplicity vector $\mu^{(i)}$ is also a solution of the linear program (3). By standard LP theory, we know that each solution of the LP is a convex combination of the basic feasible solutions. Hence, each multiplicity vector $\mu^{(i)}$ can be written as a convex combination of $\mu^{*(1)}, \ldots, \mu^{*(K)}$, i.e., for each $i$ there exists an $\alpha \in [0, 1]^K$ with $\|\alpha\|_1 = 1$ such that $\mu^{(i)} = \sum_{k=1}^{K} \alpha_k \mu^{*(k)}$.
By the pigeonhole principle, there exists for each multiplicity vector $\mu^{(i)}$ an index $k$ with $\alpha_k \ge \frac{1}{K}$, which proves the claim.
Using the above two claims, we can now prove the statement of the lemma by showing that for each $i$ there exists a non-zero vector $\bar{\mu}^{(i)} \le \mu^{(i)}$ with bounded 1-norm such that $\sum_p \bar{\mu}^{(1)}_p p = \cdots = \sum_p \bar{\mu}^{(n)}_p p$.
First, consider the case that there exists a basic feasible solution of LP (3) of small 1-norm. In this case all the $\|\mu^{(i)}\|_1$ are small as well, as the sizes of solutions of LP (3) can not differ by more than a bounded factor (depending on $d$ and $\Delta$). Hence we can simply choose $\bar{\mu}^{(i)} = \mu^{(i)}$.
Now, assume that for all basic feasible solutions the 1-norm is large. We can argue by Claim 2 that for each $\mu^{(i)}$ (with $\sum_p \mu^{(i)}_p p = b$) we find one of the basic feasible solutions $\mu^{*(k(i))}$ with $\mu^{(i)} \ge \frac{1}{K} \mu^{*(k(i))}$. As the scaled vectors $\frac{1}{K} \mu^{*(k)}$ (rounded down) are still large, we can apply the first claim to them and obtain non-zero vectors $\bar{\mu}^{(k)} \le \frac{1}{K} \mu^{*(k)}$ of bounded 1-norm with identical sums $\sum_p \bar{\mu}^{(k)}_p p$. Hence, we find for each $i$ a vector $\bar{\mu}^{(i)} := \bar{\mu}^{(k(i))} \le \mu^{(i)}$ with the required properties.
Finally, we obtain that the cardinality of the submultisets $S_i$ defined by the vectors $\bar{\mu}^{(i)}$
is bounded by a function of $d$ and $\Delta$ only, using that $\|\bar{\mu}^{(i)}\|_1$ is bounded by the bounds of Claim 1 and Lemma 4.
A lower bound for the size of Graver elements
In this section we prove a lower bound on the size of Graver elements for a matrix where the overlapping part contains only a single variable, i.e., $r = 1$.
First, consider the matrix
$$\mathcal{H} = \begin{pmatrix} 1 & 2 & & & \\ 1 & & 3 & & \\ \vdots & & & \ddots & \\ 1 & & & & \Delta \end{pmatrix}.$$
This matrix is of 2-stage stochastic structure with $A^{(i)} = (1)$ and $B^{(i)} = (i + 1)$. We will argue that every element in $\ker(\mathcal{H}) \cap \mathbb{Z}^{\Delta}$ is large and therefore, the Graver elements of the matrix are large as well. We call the variable corresponding to the $i$th column of the matrix variable $x_{i-1}$, where $x_0$ is the variable corresponding to the column with only $1$ entries and $x_i$ for $i \ge 1$ is the variable corresponding to the column with entry $i + 1$ in component $i$ and $0$ everywhere else. Clearly, for $x$ to be in $\ker(\mathcal{H})$, we know by the first row of matrix $\mathcal{H}$ that $x_0$ has to be a multiple of 2. By the second row of the matrix, we know that $x_0$ has to be a multiple of 3 and so on. Henceforth the variable $x_0$ has to be a multiple of all numbers $2, \ldots, \Delta$. Thus $x_0$ is a multiple of the least common multiple of the numbers $1, \ldots, \Delta$, which is divisible by the product of all primes between $1$ and $\Delta$. By known bounds for the product of all primes [13], this implies that the value of $|x_0|$ is exponential in $\Delta$, which shows that the size of Graver elements of matrix $\mathcal{H}$ is in $2^{\Omega(\Delta)}$.
The disadvantage of the matrix above is that its entries are rather big. In the following we reduce the largest entry of the overall matrix by encoding each number into a submatrix. For the encoding we use a matrix whose rows enforce powers of $\Delta$: for a vector $y$ in its kernel, the $i$th row of the matrix implies $y_{i+1} = \Delta \cdot y_i$, and hence $y_i = \Delta^{i-1} \cdot y_1$. Now we can encode each number $z$ of the construction in an additional row by $\sum_i z_i y_i$, where $z_i$ is the $i$th digit in a representation of $z$ in base $\Delta$. Hence, we consider the following matrix:
(figure: the resulting 2-stage stochastic constraint matrix, omitted)
By the same argumentation as for matrix $\mathcal{H}$ above, we know that $x_0$ has to be a multiple of each encoded number. This implies that every non-zero integer vector of the kernel has a very large infinity-norm, although the entries of the matrix are now bounded by $\Delta$. This shows the doubly exponential lower bound for the Graver complexity of 2-stage stochastic IPs and proves the following theorem.
Theorem 4
There exists a constraint matrix of 2-stage stochastic shape such that the size of each non-trivial Graver element is lower bounded doubly exponentially in the block parameters.
Multi-stage stochastic IPs
In this section we show that Lemma 2 can also be used to get a bound on the Graver elements of matrices with a multi-stage stochastic structure. Multi-stage stochastic IPs are a well-known generalization of 2-stage stochastic IPs. For the stochastic programming background on multi-stage stochastic IPs we refer to [29]. Here we simply show how to solve the equivalent deterministic IP with a large constraint matrix. Regarding the augmentation framework of multi-stage stochastic IPs, it was previously known that a similar implicit bound as for 2-stage stochastic IPs also holds for multi-stage stochastic IPs. This was shown by Aschenbrenner and Hemmecke [2], who built upon the bound for 2-stage stochastic IPs.
In the following we define the shape of the constraint matrix of a multi-stage stochastic IP. The constraint matrix $\mathcal{A}$ consists of given block matrices $A^{(1)}, \ldots, A^{(m)}$ for some $m$, where each matrix $A^{(j)}$ uses a unique set of columns of $\mathcal{A}$. For a given matrix $A^{(j)}$, let $R(A^{(j)})$ be the set of rows of $\mathcal{A}$ which are used by $A^{(j)}$. A matrix $\mathcal{A}$ is of multi-stage stochastic shape if the following conditions are fulfilled:
There is a matrix $A^{(j_0)}$ such that for every $j$ we have $R(A^{(j)}) \subseteq R(A^{(j_0)})$.
For two matrices $A^{(j)}, A^{(j')}$, either $R(A^{(j)}) \subseteq R(A^{(j')})$, $R(A^{(j')}) \subseteq R(A^{(j)})$, or $R(A^{(j)}) \cap R(A^{(j')}) = \emptyset$ holds.
An example of a matrix of multi-stage stochastic structure is given in the accompanying figure (omitted here).
Intuitively, the constraint matrix is of multi-stage stochastic shape if the block matrices, together with the containment relation on their row sets, form a tree (see figure below).
Let $s_i$ be the number of columns that are used by the matrices in the $i$th level of the tree (starting from level 0 at the leaves). Here we assume that the numbers of columns of matrices in the same level of the tree are all identical. Let $r$ be the number of rows that are used by the matrices that correspond to the leaves of the tree. In the following theorem we show that Lemma 2 can be applied inductively to bound the size of an augmenting step of multi-stage stochastic IPs. The proof is similar to that of Theorem 2.
Theorem 5
Let $y$ be an indecomposable cycle of matrix $\mathcal{A}$. Then $\|y\|_\infty$ is bounded by a function $T_t$ of the block parameters, where $t$ is the depth of the tree. The function $T_t$ involves a tower of $t$ exponentials and is defined recursively, with each level of the tree contributing one application of the bound of Lemma 2.
Proof
Consider a submatrix $\mathcal{A}'$ of the constraint matrix corresponding to a subtree of the tree with depth $t$. Hence, $\mathcal{A}'$ is itself of multi-stage stochastic structure. Let submatrix $A$ be the root of the corresponding subtree of $\mathcal{A}'$ and let $\mathcal{A}'_1, \ldots, \mathcal{A}'_n$ be the submatrices corresponding to the subtrees below the root, each of depth $t - 1$.
Let $\bar{\mathcal{A}}_i$ be the submatrix of $\mathcal{A}'$ which consists only of the rows that are used by $\mathcal{A}'_i$ (recall that these rows are also used by the root matrix $A$). Now suppose that $y$ is a cycle of $\mathcal{A}'$, i.e., $\mathcal{A}' y = 0$, and let $y^{(0)}$ be the subvector of $y$ consisting only of the entries that belong to matrix $A$. Symmetrically, let $y^{(i)}$ be the entries of vector $y$ that belong only to the matrix $\mathcal{A}'_i$ for $i = 1, \ldots, n$. Since $\mathcal{A}' y = 0$, we also know that $\bar{\mathcal{A}}_i (y^{(0)}, y^{(i)}) = 0$ for every $i$. Each vector $(y^{(0)}, y^{(i)})$ can be decomposed into a multiset $C_i$ of indecomposable cycles, i.e.,
$$(y^{(0)}, y^{(i)}) = \sum_{c \in C_i} c,$$
where each cycle $c \in C_i$ is a vector consisting of a subvector of entries that belong to matrix $A$ and a subvector of entries that belong to the matrix $\mathcal{A}'_i$. Note that the matrix $\bar{\mathcal{A}}_i$ has a multi-stage stochastic structure with a corresponding tree of depth $t - 1$. Hence, by induction we can assume that each indecomposable cycle $c \in C_i$ is bounded by $\|c\|_1 \le T_{t-1}$, where $T$ is a function that involves a tower of $t - 1$ exponentials. In the base case that $t = 1$ and the matrix consists only of one block, we can bound $\|c\|_1$ by using Theorem 1. Let $p$ be the projection that maps a cycle to the entries that belong to the matrix $A$, i.e., $p(c) = c|_A$.
For each vector $(y^{(0)}, y^{(i)})$ and its decomposition into cycles $C_i$, let $T_i = \{p(c) \mid c \in C_i\}$. Since
$$\sum_{c \in C_1} p(c) = \cdots = \sum_{c \in C_n} p(c) = y^{(0)},$$
we can apply Lemma 2 to obtain submultisets of bounded size
$S_i \subseteq T_i$ with $\sum_{x \in S_1} x = \cdots = \sum_{x \in S_n} x$. As $T_{t-1}$ is a function with $t - 1$ exponentials, the cardinality $|S_i|$ can be bounded by a function of $t$ exponentials.
There exist submultisets $C'_i \subseteq C_i$ with $\{p(c) \mid c \in C'_i\} = S_i$. Hence, we can define the solution $\bar{y}$ by $\bar{y}^{(i)} = \sum_{c \in C'_i} q(c)$ for every $i \ge 1$, where $q$ is the function that projects a vector to the entries that belong to the matrix $\mathcal{A}'_i$, i.e., $q(c) = c|_{\mathcal{A}'_i}$. For $i = 0$ we define $\bar{y}^{(0)} = \sum_{c \in C'_j} p(c)$ for an arbitrary $j$. As the sum is identical for every $j$, the vector $\bar{y}$ is well defined.
Let $K$ be the constant derived from the O-notation of Lemma 2; then the size of $\bar{y}$ can be bounded by the cardinality of the multisets $C'_i$ times the norm bound $T_{t-1}$ of their elements, which yields the recursively defined function $T_t$.
As a consequence of the bound on the Graver elements of the constraint matrix of multi-stage stochastic IPs, we obtain by using the augmentation framework an algorithm to solve multi-stage stochastic IPs. Again, we refer to [11] for the details on the augmentation framework.
Theorem 6
A multi-stage stochastic IP with a constraint matrix that corresponds to a tree of depth $t$ can be solved in time $T_t \cdot \mathrm{poly}(n, \varphi)$, where $\varphi$ is the encoding length of the IP and $T_t$ is a function depending only on the block parameters and $\Delta$ that involves a tower of $t$ exponentials.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Footnotes
This work was partially done during the author’s time at EPFL. The Project was supported by the Swiss National Science Foundation (SNSF) within the Project Convexity, geometry of numbers, and the complexity of integer programming (No. 163071).
An extended abstract of this paper appeared at Integer Programming and Combinatorial Optimization–21st International Conference, IPCO 2020, London.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1. Ahmed S, Tawarmalani M, Sahinidis NV. A finite branch-and-bound algorithm for two-stage stochastic integer programs. Math. Program. 2004;100(2):355–377. doi: 10.1007/s10107-003-0475-6.
- 2. Aschenbrenner M, Hemmecke R. Finiteness theorems in stochastic integer programming. Found. Comput. Math. 2007;7(2):183–227. doi: 10.1007/s10208-005-0174-1.
- 3. Birge JR, Louveaux F. Introduction to Stochastic Programming. Berlin: Springer; 2011.
- 4. Brand, C., Koutecký, M., Ordyniak, S.: Parameterized algorithms for MILPs with small treedepth. CoRR, arXiv:1912.03501 (2019)
- 5. Carøe CC, Tind J. L-shaped decomposition of two-stage stochastic programs with integer recourse. Math. Program. 1998;83(1–3):451–464. doi: 10.1007/BF02680570.
- 6. Chen, L., Koutecký, M., Xu, L., Shi, W.: New bounds on augmenting steps of block-structured integer programs. In: Grandoni, F., Herman, G., Sanders, P. (eds.) 28th Annual European Symposium on Algorithms, ESA 2020, September 7–9, 2020, Pisa, Italy (Virtual Conference), volume 173 of LIPIcs, pp. 33:1–33:19. Schloss Dagstuhl - Leibniz-Zentrum für Informatik (2020)
- 7. Cslovjecsek, J., Eisenbrand, F., Hunkenschröder, C., Rohwedder, L., Weismantel, R.: Block-structured integer and linear programming in strongly polynomial and near linear time. In: Marx, D. (ed.) Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms, SODA 2021, Virtual Conference, January 10–13, 2021, pp. 1666–1681. SIAM (2021)
- 8. Cslovjecsek, J., Eisenbrand, F., Pilipczuk, M., Venzin, M., Weismantel, R.: Efficient sequential and parallel algorithms for multistage stochastic integer programming using proximity. CoRR, arXiv:2012.11742 (2020)
- 9. Dempster MAH, Fisher M, Jansen L, Lageweg B, Lenstra JK, Rinnooy Kan A. Analytical evaluation of hierarchical planning systems. Oper. Res. 1981;29(4):707–716. doi: 10.1287/opre.29.4.707.
- 10. Eisenbrand, F., Hunkenschröder, C., Klein, K.: Faster algorithms for integer programs with block structure. In: 45th International Colloquium on Automata, Languages, and Programming, ICALP 2018, July 9–13, 2018, Prague, Czech Republic, pp. 49:1–49:13 (2018)
- 11. Eisenbrand, F., Hunkenschröder, C., Klein, K.-M., Koutecký, M., Levin, A., Onn, S.: An algorithmic theory of integer programming (2019)
- 12. Eisenbrand, F., Weismantel, R.: Proximity results and faster algorithms for integer programming using the Steinitz lemma. In: Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 808–816. SIAM (2018)
- 13. Erdős, P.: Ramanujan and I. In: Number Theory, Madras 1987, pp. 1–17. Springer (1989)
- 14. Gade D, Küçükyavuz S, Sen S. Decomposition algorithms with parametric Gomory cuts for two-stage stochastic integer programs. Math. Program. 2014;144(1–2):39–64. doi: 10.1007/s10107-012-0615-y.
- 15. Graver JE. On the foundations of linear and integer linear programming I. Math. Program. 1975;9(1):207–226. doi: 10.1007/BF01681344.
- 16. Grinberg, V.S., Sevast'yanov, S.V.: Value of the Steinitz constant. Funct. Anal. Appl. 14(2), 125–126 (1980)
- 17. Klein Haneveld, W.K., van der Vlerk, M.H.: Optimizing electricity distribution using two-stage integer recourse models. In: Stochastic Optimization: Algorithms and Applications, pp. 137–154. Springer, Boston, MA (2001)
- 18. Hemmecke R, Schultz R. Decomposition of test sets in stochastic integer programming. Math. Program. 2003;94(2–3):323–341. doi: 10.1007/s10107-002-0322-1.
- 19. Jansen, K., Klein, K., Maack, M., Rau, M.: Empowering the configuration-IP - new PTAS results for scheduling with setup times. CoRR, arXiv:1801.06460 (2018)
- 20. Jansen, K., Lassota, A., Rohwedder, L.: Near-linear time algorithm for n-fold ILPs via color coding. arXiv preprint arXiv:1811.00950 (2018)
- 21. Kall P, Wallace SW. Stochastic Programming. Berlin: Springer; 1994.
- 22. Kannan, R.: Minkowski's convex body theorem and integer programming. Math. Oper. Res. 12(3), 415–440 (1987)
- 23. Knop D, Koutecký M. Scheduling meets n-fold integer programming. J. Scheduling. 2018;21(5):493–503. doi: 10.1007/s10951-017-0550-0.
- 24. Knop, D., Koutecký, M., Mnich, M.: Combinatorial n-fold integer programming and applications. In: Pruhs, K., Sohler, C. (eds.) 25th Annual European Symposium on Algorithms (ESA 2017), volume 87 of Leibniz International Proceedings in Informatics (LIPIcs), pp. 54:1–54:14, Dagstuhl, Germany, 2017. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik
- 25. Knop, D., Koutecký, M., Mnich, M.: Voting and bribing in single-exponential time. In: 34th Symposium on Theoretical Aspects of Computer Science, STACS 2017, March 8–11, 2017, Hannover, Germany, pp. 46:1–46:14 (2017)
- 26. Koutecký, M., Levin, A., Onn, S.: A parameterized strongly polynomial algorithm for block structured integer programs. In: 45th International Colloquium on Automata, Languages, and Programming, ICALP 2018, July 9–13, 2018, Prague, Czech Republic, pp. 85:1–85:14 (2018)
- 27. Küçükyavuz, S., Sen, S.: An introduction to two-stage stochastic mixed-integer programming. In: Leading Developments from INFORMS Communities, pp. 1–27. INFORMS (2017)
- 28. Pelupessy, F., Weiermann, A.: Ackermannian lower bounds for lengths of bad sequences of monomial ideals over polynomial rings in two variables. In: Mathematical Theory and Computational Practice, p. 276 (2009)
- 29. Römisch, W., Schultz, R.: Multistage stochastic integer programs: an introduction. In: Online Optimization of Large Scale Systems, pp. 581–600. Springer, Berlin, Heidelberg (2001)
- 30. Schrijver A. Theory of Linear and Integer Programming. New York: Wiley; 1998.
- 31. Schultz R, Stougie L, Van Der Vlerk MH. Two-stage stochastic integer programming: a survey. Stat. Neerl. 1996;50(3):404–416. doi: 10.1111/j.1467-9574.1996.tb01506.x.
- 32. Steinitz E. Bedingt konvergente Reihen und konvexe Systeme. J. für die reine und angewandte Mathematik. 1913;143:128–176. doi: 10.1515/crll.1913.143.128.
- 33. Zhang M, Küçükyavuz S. Finitely convergent decomposition algorithms for two-stage stochastic pure integer programs. SIAM J. Optim. 2014;24(4):1933–1951. doi: 10.1137/13092678X.


