PLOS One. 2015 Mar 13;10(3):e0116156. doi: 10.1371/journal.pone.0116156

A Streamlined Artificial Variable Free Version of Simplex Method

Syed Inayatullah 1,*, Nasir Touheed 2, Muhammad Imtiaz 1
Editor: Cheng-Yi Xia3
PMCID: PMC4358952  PMID: 25767883

Abstract

This paper proposes a streamlined form of the simplex method which offers some notable benefits over the traditional simplex method. For instance, it needs no artificial variables or artificial constraints, and it can start from any feasible or infeasible basis of an LP. The method follows the same pivoting sequence as simplex phase 1 without any explicit description of artificial variables, which also makes it space efficient. Later in this paper, a dual version of the new method is presented which provides a way to easily implement phase 1 of the traditional dual simplex method. For a problem whose initial basis is both primal and dual infeasible, our methods give the user full freedom to start with either the primal or the dual artificial-free version, without any reformulation of the LP structure. Finally, the method provides a teaching aid for teachers who want to teach feasibility achievement as a separate topic before teaching optimality achievement.

Introduction

Linear programming has been an indispensable area in the progress of the computational sciences [1]. Since 1947, shortly after World War II, linear programming gradually arose and with the passage of time has become popular in various fields. It is by far the most widely used optimization model, and its impact on economic and government modeling has been remarkable. Today it is the center of attention of many mathematicians, economists and decision scientists. Linear programming is the optimization of an outcome, based on a set of linear constraints, using a linear mathematical model. The origin of algorithms for solving a given system of linear inequalities goes back to the 19th century, when such systems were first studied by Fourier [2]. Later, several mathematicians such as Dines and Motzkin rediscovered these algorithms [3][4].

During the course of the Second World War and the years immediately after it, Dantzig formulated the largest-coefficient simplex method for solving a given linear program, which he presented at a conference in 1951 [5]. It was later also published in his famous book "Linear Programming and Extensions" [6]. Since then, it has become the preferred method of LP practitioners because of its efficiency [7], and it is now the standard tool for teaching and solving practical linear programming problems.

Considering its vast applicability in various fields, teaching LP has become an important part of undergraduate and graduate courses. Because of this, many researchers are now focused on designing algorithms which are more efficient and more easily implementable for classroom teaching.

Originally, the simplex method developed by Dantzig was confined to LPs having a known feasible solution, commonly referred to as an initial basic feasible solution. For LPs with no initial basic feasible solution, almost all current variants of the simplex method are applied in two phases [8][9], called phase 1 and phase 2. In phase 1, we create a basic feasible solution by adding some (non-negative) artificial variables to the problem, together with an auxiliary objective, the minimization of the sum of all artificial variables, called the phase 1 objective. The purpose of the phase 1 process is to maintain feasibility while reducing the sum of the artificial variables (in a simpler sense, the sum of infeasibilities) as much as possible. If phase 1 ends with objective value zero, then all artificial variables have attained the value zero (all infeasibilities have been removed) and the current basis is feasible for the original problem; we then return to the original objective and proceed with simplex phase 2. Otherwise, we conclude that the problem has no solution.

For larger LPs, implementation of the traditional two-phase simplex method significantly increases the number of variables and the number of iterations, and thus the complexity as well. From the point of view of classroom teaching, it often becomes a tedious job.

Since the simplex method is still practically the best known pivot algorithm for solving LPs [10], this has given rise to the need for more general linear programming methods in which one may start from an arbitrary initial basic solution (not necessarily a feasible one).

Arsham [11][12][13][27] presented an artificial-free algorithm named the push-and-pull algorithm, which initiates with an incomplete basic variable set (BVS). As the algorithm proceeds, variables are brought into the basis. The push phase continues until the basic variable set is complete; this phase may terminate with an infeasible BVS. The algorithm then starts the pull phase, which pulls the solution back to feasibility using pivot rules similar to those of the dual simplex method. Arsham claimed that the push-and-pull algorithm is artificial-free. His claim would be correct if we were concerned only with artificial variables, but his method requires adding artificial constraints, so it is not a truly artificial-free method.

Paparrizos [14] presented another artificial variable free method, but his method also uses an additional artificial constraint with a big-M number. The method further requires a tedious evaluation of a series of additional objective functions besides the original objective function. Moreover, at each iteration this algorithm must check both primal and dual feasibility [12].

Arsham [15][16] proposed another algorithm for general LP models, claiming that it would either provide a feasible solution or declare infeasibility after a finite number of iterations. Enge and Huhn [17] presented a counterexample in which Arsham's algorithm declares a feasible problem inconsistent. Here we present another consistent problem which Arsham's method declares inconsistent.

Maximize z = 3x + 5y
subject to  x ≤ 4
            x ≤ 6
            3x + 2y ≤ 18
            x ≥ 0,  y ≥ 0

Some other artificial-free algorithms have been presented [18][19][20][21][22], but all these methods are based on minimum-angle approaches, which differ from the simplex method.

Hence, all in all, Dantzig's simplex method remains better than its successor variants in terms of efficiency, and as elementary textbook material as well. But from a teacher's point of view, a difficulty of the simplex method is that one cannot learn the simplex method for feasibility (simplex phase 1) before learning the simplex method for optimality (simplex phase 2). So students usually learn "phase 2" before "phase 1", which of course seems odd to both teachers and students.

Here, instead of developing a new method for general linear programs, we present a better version of the simplex method which not only hides the unnecessary elements of the simplex table but also saves many computations.

Furthermore, the dual simplex method is the most effective and efficient way to achieve optimality for a dual feasible problem [23]. So in this paper we have also presented a dual version of our method, which is in fact an artificial constraint free version of the dual simplex method.

Advantages of the New Approach

There are several advantages of the new method. For instance, it can start with any feasible or infeasible basis of an LP; in fact, it can also be very useful for solving integer programming problems. The method deals with the artificial variables implicitly: as long as a basic variable is infeasible, its corresponding slack variable remains invisible, and upon leaving the basis the variable is implicitly replaced by that slack variable. The artificial variables are virtually present, but their presence is never revealed to the user in the form of extra columns of the simplex table. The method follows the same pivoting sequence as simplex phase 1 without any explicit description of artificial variables, which also makes it space efficient. For this reason we have named our method 'A streamlined artificial variable free version of simplex method'. Last but not least, it is a fruitful tool for teachers who want to teach feasibility achievement as a separate topic before teaching optimality achievement, because the working rule of this method can easily be demonstrated to students having little or even no prior knowledge of the simplex method for optimality (phase 2). The method thus changes the learning sequence by removing the need to teach phase 2 before phase 1.

Moreover, as mentioned above, the dual simplex method is very efficient and effective for solving a dual feasible problem, but for a dual infeasible problem it requires the addition of artificial constraints, which makes things difficult and cumbersome in practice [24]. In contrast, the dual version of our method requires no artificial constraints. This makes our methods more efficient and effective not only for attaining feasibility but also for attaining optimality. Our streamlined versions of the simplex and dual simplex methods provide a way to easily implement phase 1 of either the simplex or the dual simplex method. For a problem whose initial basis is both primal and dual infeasible, our primal and dual versions give the user full freedom: one can directly initiate either the streamlined primal simplex or the streamlined dual simplex method without any reformulation of the LP structure.

Some Basic Terminologies and Notations

A general LP problem can be written as

Maximize z = c^T x
subject to  Ax = b,  x ≥ 0,  x ∈ R^n    (1)

where x is the decision variable vector, A ∈ R^{m×n}, b ∈ R^m, c ∈ R^n and m ≤ n.

For a given basis B such that A_B is invertible, and non-basis N := {1,…,n}\B, we get

A_B x_B + A_N x_N = b  ⟹  x_B = A_B^{-1} b − A_B^{-1} A_N x_N    (2)

By setting x_N = 0, the solution x_B = A_B^{-1} b is obtained. Such a vector x is called the basic solution of system (2), or of system (1), for the basis B.
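As a concrete illustration of this definition, the basic solution for a chosen basis can be computed by solving A_B x_B = b. The sketch below is our own illustration (the function name `basic_solution` and the use of exact rational arithmetic are our choices, not the paper's):

```python
from fractions import Fraction

def basic_solution(A_B, b):
    """Solve A_B * x_B = b by Gauss-Jordan elimination over exact rationals."""
    n = len(b)
    # Augmented matrix [A_B | b] with Fraction entries for exactness.
    M = [[Fraction(A_B[i][j]) for j in range(n)] + [Fraction(b[i])]
         for i in range(n)]
    for c in range(n):
        # Find a row with a non-zero entry in column c and swap it up.
        piv = next(r for r in range(c, n) if M[r][c] != 0)
        M[c], M[piv] = M[piv], M[c]
        M[c] = [v / M[c][c] for v in M[c]]            # normalize pivot row
        for r in range(n):
            if r != c and M[r][c] != 0:               # eliminate column c
                M[r] = [vr - M[r][c] * vc for vr, vc in zip(M[r], M[c])]
    return [M[i][n] for i in range(n)]                # x_B = A_B^{-1} b

# For basis columns A_B = [[2, 1], [1, 3]] and b = [5, 10],
# the basic solution is x_B = (1, 3).
print(basic_solution([[2, 1], [1, 3]], [5, 10]))
```

Exact rationals avoid the round-off that plain floating point would introduce into the pivoting examples later in the paper.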

Now by substituting the value of x B from equation (2), the objective of system (1) can be reformulated as

Maximize z:  z + (−c_N^T + c_B^T A_B^{-1} A_N) x_N = c_B^T A_B^{-1} b
subject to   x_B + A_B^{-1} A_N x_N = A_B^{-1} b,  x_B ≥ 0,  x_N ≥ 0    (3)

The problem data can be arranged in the following table, denoted by D(B), known as the short simplex table [25]:

[ c_B^T A_B^{-1} b      −c_N^T + c_B^T A_B^{-1} A_N ]
[ A_B^{-1} b            A_B^{-1} A_N                ]

The dual of system (3) could also be written in the following form,

Minimize y:  y − (A_B^{-1} b)^T y_B = c_B^T A_B^{-1} b
subject to   (A_B^{-1} A_N)^T y_B − y_N = (c_N^T − c_B^T A_B^{-1} A_N)^T,  y_B ≥ 0,  y_N ≥ 0    (4)

Here dual basic variables are y N and non-basic variables are y B.

For a given basis B, the problem data could be read from the short table as

The current primal/dual objective value is: z = y = c_B^T A_B^{-1} b

The primal solution is: x(B) = (x_B, x_N), where x_N = 0 and x_B = A_B^{-1} b

The dual solution is: y(B) = (y_B, y_N), where y_B = 0 and y_N = (−c_N^T + c_B^T A_B^{-1} A_N)^T

A basis B is called primal feasible if x_B ≥ 0, in which case the primal problem has the feasible solution x(B); if y_N ≥ 0 then B is called dual feasible, and the dual problem has the feasible solution y(B). If B is both primal and dual feasible then B is optimal. Usually d_B0 ≥ 0 is referred to as the certificate column of primal feasibility, and d_0N ≥ 0 as the certificate row of dual feasibility; see Fig. 1.

Fig 1. A short simplex table structure showing optimality of current basis.


A basis B is called primal infeasible if there exists i ∈ B such that d_i0 < 0, and dual infeasible if there exists j ∈ N such that d_0j < 0.

A short table can also be used to reveal inconsistency and unboundedness of an LP. A problem is said to be inconsistent if there exists a basis B with a member k ∈ B such that d_k0 < 0 and d_kN ≥ 0; see Fig. 2. A problem is said to be dual inconsistent if there exists a non-basis N with a member l ∈ N such that d_0l < 0 and d_Bl ≤ 0; see Fig. 3.

Fig 2. A short simplex table structure showing inconsistency of primal problem.


Fig 3. A short simplex table structure showing inconsistency of dual problem.


Primal unboundedness could be deduced if a basis is primal feasible but reveals dual inconsistency; and dual problem is unbounded if a basis is dual feasible but reveals primal inconsistency.
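The four certificates above translate directly into sign checks on a short table. The following sketch is our own illustration, not code from the paper; it assumes a table stored as a list of rows with the objective row first and the b-column in position 0:

```python
def primal_feasible(T):
    """Certificate column: every basic value d_i0 is non-negative."""
    return all(row[0] >= 0 for row in T[1:])

def dual_feasible(T):
    """Certificate row: every objective coefficient d_0j is non-negative."""
    return all(c >= 0 for c in T[0][1:])

def primal_inconsistent(T):
    """Fig. 2 pattern: some row k has d_k0 < 0 and d_kN >= 0."""
    return any(row[0] < 0 and all(c >= 0 for c in row[1:]) for row in T[1:])

def dual_inconsistent(T):
    """Fig. 3 pattern: some column l has d_0l < 0 and d_Bl <= 0."""
    return any(T[0][j] < 0 and all(T[i][j] <= 0 for i in range(1, len(T)))
               for j in range(1, len(T[0])))

def primal_unbounded(T):
    """A primal feasible basis that reveals dual inconsistency."""
    return primal_feasible(T) and dual_inconsistent(T)
```

For instance, the synthetic table `[[0, 1, 2], [-8, 1, 2], [1, 4, 3]]` exhibits the Fig. 2 pattern in its second row, so `primal_inconsistent` reports True on it.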

Pivot operation

To move through different bases of an LP one may need to apply pivot operations. A single pivot operation may be used to obtain short table of an adjacent basic solution.

For r ∈ B and k ∈ N, with (r, k) the position of the pivot element d_rk (≠ 0) of D, one can obtain an updated equivalent short table D(B′) with the new basis B′ := (B ∪ {k})\{r} and the new non-basis N′ := (N ∪ {r})\{k} by performing the following operations on D(B):

d′_rj := d_rj / d_rk,   j ∈ N\{k}
d′_ik := −d_ik / d_rk,   i ∈ B\{r}
d′_ij := d_ij − (d_rj × d_ik) / d_rk,   i ∈ B\{r},  j ∈ N\{k}
d′_kr := 1 / d_rk

The above replacement is known as a pivot operation on (r, k). A degenerate variable is the basic variable with zero value. A pivot operation or pivot element is said to be primal (dual) degenerate if the corresponding primal (dual) basic variable is degenerate. A degenerate pivot is said to be positive (negative) degenerate if the sign of pivot element is positive (negative).

Short table before pivot (r, k)

[ d_ij   d_ik ]
[ d_rj   d_rk ]

Short table after pivot (r, k)

[ d′_ij   d′_ir ]
[ d′_kj   d′_kr ]
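A direct transcription of the four update rules into code may help here. The sketch below is our own illustration: it applies the rules to every entry of a table stored as a list of rows (the b-column in position 0 transforms like any other column j), and it assumes `Fraction` entries so the arithmetic stays exact:

```python
from fractions import Fraction

def pivot(T, r, k):
    """Pivot the short table T on position (r, k), returning a new table.
    After the pivot, row r represents the entering variable and
    column k the leaving one."""
    p = T[r][k]
    assert p != 0, "pivot element must be non-zero"
    new = [row[:] for row in T]
    for i in range(len(T)):
        for j in range(len(T[0])):
            if i == r and j == k:
                new[i][j] = 1 / p                      # d'_kr := 1/d_rk
            elif i == r:
                new[i][j] = T[r][j] / p                # d'_rj := d_rj/d_rk
            elif j == k:
                new[i][j] = -T[i][j] / p               # d'_ik := -d_ik/d_rk
            else:
                new[i][j] = T[i][j] - T[r][j] * T[i][k] / p
    return new
```

Returning a fresh table rather than mutating in place keeps the "before pivot" and "after pivot" views of the table, as shown above, both available.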

Deduction of art-free auxiliary form of an LP

The term auxiliary form is commonly used in the context of the simplex method for a special purpose linear program constructed by incorporating a sufficient number of artificial variables into the system in order to obtain a pseudo feasible basis, and then appending the objective of minimizing the sum of all artificial variables so as to reach a basis feasible for the original system. In this section we develop a variant of the auxiliary form of the LP, namely "the art-free form".

Consider the following system of inequalities

Ax ≤ b,  x ≥ 0,  x ∈ R^p,  A ∈ R^{m×p},  b ∈ R^m    (5)

By adding the slack vector x_S, we get an equivalent equality form of the above system.

A x_O + x_S = b,  x_O ≥ 0,  x_S ≥ 0
O := {1, 2, …, p},  S := {p+1, p+2, …, p+m}    (6)

where x_O are the variables other than the slack variables. Here we allow negative variables to reside in the basis, so in this regard we can take B := S and N := O.

A x_N + x_B = b,  x_N ≥ 0,  x_B unrestricted in sign
x_N ∈ R^N,  x_B ∈ R^B
N := {1, 2, …, p},  B := {p+1, p+2, …, p+m}    (7)

Here B may not constitute a feasible basis (because of some negative values in b).

We can decompose x_B into the difference of two non-negative vectors, x_B = x_B^+ − x_B^-. Here the value of x_B^- measures the infeasibility of the current basis B (note: x_B^+ and x_B^- play the roles of the slack and artificial variables of the usual simplex method). Following the logic of the simplex method, to reach a feasible basis our aim is to drive x_B^- to zero while preserving the feasibility of all other variables.

A x_N + x_B^+ − x_B^- = b
x_N ≥ 0,  x_B^+ ≥ 0,  x_B^- ≥ 0
x_N ∈ R^N,  x_B^+ ∈ R^B,  x_B^- ∈ R^B
N := {1, 2, …, p},  B := {p+1, p+2, …, p+m},  A ∈ R^{m×p},  b ∈ R^m    (8)

Clearly, a feasible solution of system (8) is also feasible for system (7) if x_B^- = 0. An initial set of basic feasible variables for system (8) can be constructed as a subset of x_B^- ∪ x_B^+ by the following paradigm,

Paradigm

"If b_i ≥ 0, let x_{p+i}^+ be the corresponding basic variable, and if b_i < 0, let x_{p+i}^- be the corresponding basic variable".

Let the initial basic variable set developed by the above paradigm be x_L^- ∪ x_{L'}^+, where L := {p+i : b_i < 0} ⊆ B and L' := {p+i : b_i ≥ 0} ⊆ B; clearly L' = B\L. Recall that the non-basic variables x_{L'}^- would increase the extent of infeasibility if they became basic, so to bar them from entering the basis we eliminate them from system (8). Now, if L is empty then the basis B is feasible for system (7) and hence for system (5); otherwise, to obtain a feasible basis we embed an additional objective function (the infeasibility objective), Minimize Σ_{p+i∈L} x_{p+i}^-. The non-negativity x_L^- ≥ 0 together with this new objective forces x_L^- to zero. From system (8) one obtains x_{p+i}^- = a_{iN} x_N + x_{p+i}^+ − b_i, so the infeasibility objective equivalently becomes Minimize Σ_{p+i∈L} (a_{iN} x_N + x_{p+i}^+ − b_i). The transformed system is,

Minimize Σ_{p+i∈L} (a_{iN} x_N + x_{p+i}^+ − b_i)
subject to  a_{iN} x_N + x_{p+i}^+ − x_{p+i}^- = b_i,   p+i ∈ L
            a_{iN} x_N + x_{p+i}^+ = b_i,   p+i ∈ L'
            x_N ≥ 0,  x_L^+ ≥ 0,  x_L^- ≥ 0,  x_{L'}^+ ≥ 0
            x_N ∈ R^N,  x_L^+ ∈ R^B,  x_L^- ∈ R^B,  x_{L'}^+ ∈ R^B
            N := {1, …, p},  B := {p+1, …, p+m},  L = {p+i : b_i < 0},  L' = B\L    (9)

Note: Throughout this paper, we overload the addition '+' and subtraction '−' operations for affine translations of a set Ω of real numbers by a number a, as Ω + a := {x + a : x ∈ Ω} and Ω − a := {x − a : x ∈ Ω}.

By using the paradigm described above in this section, we may construct the following augmented matrix, showing the basic variable corresponding to each row.

             x_N                 x_L^+    x_L^-    x_{L'}^+       b
w          [ Σ_{p+i∈L} a_{iN}      1        0         0        Σ_{p+i∈L} b_i ]
x_L^-      [ a_{(L-p)×N}           I        I         0        b_{(L-p)} ]
x_{L'}^+   [ a_{(L'-p)×N}          0        0         I        b_{(L'-p)} ]

Here top row (w-row) represents infeasibility objective function.

Constructing a short table of the above matrix,

             b                   x_N                x_L^+
w          [ Σ_{p+i∈L} b_i       Σ_{p+i∈L} a_{iN}     1 ]
x_L^-      [ b_{(L-p)}           a_{(L-p)×N}          I ]
x_{L'}^+   [ b_{(L'-p)}          a_{(L'-p)×N}         0 ]

where b_{L-p} < 0 and b_{L'-p} ≥ 0. The procedure for finding the entering basic variable is similar to phase 1 of the simplex method, i.e. we identify the entering basic variable by seeking the most negative element in the w-row (excluding its first element, which is actually equal to the sum of all infeasibilities). After determining the entering basic variable, the leaving variable is determined by the minimum-ratio test, but the criterion of the test differs between the rows of the positive basic variables x_{L'}^+ and the negative basic variables x_L^-.

New form of Minimum ratio test

The ratio test scheme here differs between the positive variables x_{L'}^+ and the negative variables x_L^-. Ratios corresponding to variables in x_{L'}^+ are obtained by dividing b_{L'-p} by the corresponding element of the pivot column (only if that element is positive), whereas ratios corresponding to variables in x_L^- are obtained by dividing b_{L-p} by the corresponding element of the pivot column (only if that element is negative). The element of the pivot column corresponding to the minimum ratio is the pivot element, and the leaving basic variable is the variable corresponding to that pivot element.

If the leaving basic variable, say x_i^-, belongs to x_L^-, then set L := L\{i} for the next pivoting table and perform the pivot operation. In the new pivoting table, proceed with the same procedure for the entering and leaving basic variables until L becomes empty.
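The two-sided ratio test can be written compactly if each table row carries its value in column 0. This sketch is our own illustration (the helper name `min_ratio_row` and the representation of L as a set of row indices are our assumptions, not the paper's notation):

```python
from fractions import Fraction

def min_ratio_row(T, L, k):
    """Return the leaving row for pivot column k: rows in L (negative basic
    variables) qualify with a negative entry in column k, the remaining rows
    with a positive entry; the smallest ratio d_i0/d_ik wins."""
    best, r = None, None
    for i, row in enumerate(T):
        d = row[k]
        if (i in L and d < 0) or (i not in L and d > 0):
            ratio = row[0] / d
            if best is None or ratio < best:
                best, r = ratio, i
    return r

# Rows x3, x4, x5 of Example 2 below, with L = {0, 2} the infeasible rows:
T = [[Fraction(v) for v in row]
     for row in [[-10, -2, -1], [1, 1, 1], [-2, -3, 1]]]
print(min_ratio_row(T, {0, 2}, 1))   # → 2
```

Here the candidate ratios are 5 and 2/3 from the infeasible rows and 1 from the feasible one, so the third row (index 2) leaves.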

Reduced short table

The size of the short table can be reduced further by noting that the column of x_L^+ is the conjugate of the hidden basic column of x_L^-, except that the coefficient of x_L^+ in the infeasibility objective is '1' rather than '0'.

One can observe that for i ∈ L, after the pivot operation the columns of x_i^- and x_i^+ are identical, except that the infeasibility objective coefficient of x_i^+ exceeds that of x_i^- by '1', that is w_i := w_i + 1. So we may also hide the column of x_L^+, keeping in mind that for every leaving basic variable x_i^- ∈ x_L^- we must, after the pivot operation, replace the newly obtained non-basic column of x_i^- by x_i^+.

             b                   x_N
w          [ Σ_{p+i∈L} b_i       Σ_{p+i∈L} a_{iN} ]
x_L^-      [ b_{(L-p)}           a_{(L-p)×N} ]
x_{L'}^+   [ b_{(L'-p)}          a_{(L'-p)×N} ]

Let the elements of the reduced short table be denoted by d_ij, where i ∈ B ∪ {0} and j ∈ N ∪ {0}:

             b                   x_N
w          [ Σ_{i∈L} d_i0        Σ_{i∈L} d_iN ]
x_L^-      [ d_L0                d_LN ]
x_{L'}^+   [ d_{L'0}             d_{L'N} ]

Note: Recall that x_i = x_i^+ − x_i^-; hence, if the leaving basic variable is x_i^-, then x_i^- can be replaced by x_i after the pivot in the short table.
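Putting the construction together: starting from A and b of system (5), the reduced short table is simply the slack rows [b_i | a_iN], and w is the sum of the rows with negative b_i. A minimal sketch under our own naming conventions:

```python
from fractions import Fraction

def reduced_short_table(A, b):
    """Build the reduced short table for the slack basis of Ax <= b, x >= 0,
    together with the infeasibility row w (sum of the infeasible rows)."""
    rows = [[Fraction(b[i])] + [Fraction(v) for v in A[i]]
            for i in range(len(b))]
    L = [i for i, row in enumerate(rows) if row[0] < 0]   # infeasible rows
    w = [sum(rows[i][j] for i in L) for j in range(len(rows[0]))]
    return rows, w, L
```

For Example 2 below, with the ≥-constraints already multiplied by −1 so that A = [[-2, -1], [1, 1], [-3, 1]] and b = [-10, 1, -2], the function yields L = {0, 2} and the w-row (−12, −5, 0).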

Problem 1

Given a reduced short table D(B), obtain primal feasibility.

Algorithm: Art-free Simplex Method (ASM)

  • Step 1: Let L be the maximal subset of B such that L = {i : d_i0 < 0, i ∈ B}. If L = φ then D(B) is primal feasible. Exit.

  • Step 2: Denote the basic variables x_L by x_L^- and compute the infeasibility objective vector w(B) ∈ R^N such that w_j = Σ_{i∈L} d_ij, j ∈ N. Place w at the top of the reduced short table D(B).

  • Step 3: Let K ⊆ N such that K = {j : w_j < 0, j ∈ N}. If K = φ then D(B) is primal inconsistent. Exit.

  • Step 4: Choose k ∈ K such that w_k ≤ w_h, ∀h ∈ K

    (Ties are broken arbitrarily)

  • Step 5: Choose r_1 ∈ L and r_2 ∈ B\L such that
    r_1 = argmin{ d_i0/d_ik | (d_i0 ≤ 0, d_ik < 0), i ∈ L }
    r_2 = argmin{ d_i0/d_ik | (d_i0 ≥ 0, d_ik > 0), i ∈ B\L }

    Set r := argmin{ d_{r1,0}/d_{r1,k}, d_{r2,0}/d_{r2,k} }

  • Step 6: Make a pivot on (r, k) (⇒ Set B := (B ⋃ {k})\{r}, N := (N ⋃ {r})\{k} and update D(B) along with the appended w(B)).

  • Step 7: If r ∈ L, set L := L\{r} and w_r := w_r + 1, and replace the notation x_r^- by x_r

  • Step 8: If L = φ then D(B) is primal feasible. Exit.

    Otherwise, go to Step 3.
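For illustration, the steps above can be condensed into a short driver. The sketch below is our own simplified rendering, not the paper's implementation: it recomputes the w-row from scratch each iteration instead of pivoting it along and applying the Step 7 adjustment w_r := w_r + 1, so it traces the same pivot sequence only when no replaced variable re-enters the basis (which is the case in the small examples of this paper); all names are ours.

```python
from fractions import Fraction

def pivot(T, r, k):
    """Short-table pivot on (r, k), per the four update rules given earlier."""
    p = T[r][k]
    return [[1 / p if (i, j) == (r, k)
             else T[r][j] / p if i == r
             else -T[i][j] / p if j == k
             else T[i][j] - T[r][j] * T[i][k] / p
             for j in range(len(T[0]))]
            for i in range(len(T))]

def asm(T):
    """Art-free simplex feasibility loop on the basic rows T (column 0 = b).
    Returns ('feasible', T) or ('inconsistent', T)."""
    while True:
        L = [i for i, row in enumerate(T) if row[0] < 0]        # Step 1
        if not L:
            return 'feasible', T
        n = len(T[0])
        w = [sum(T[i][j] for i in L) for j in range(n)]         # Step 2
        K = [j for j in range(1, n) if w[j] < 0]                # Step 3
        if not K:
            return 'inconsistent', T
        k = min(K, key=lambda j: w[j])                          # Step 4
        best, r = None, None                                    # Step 5
        for i, row in enumerate(T):
            if (i in L and row[k] < 0) or (i not in L and row[k] > 0):
                ratio = row[0] / row[k]
                if best is None or ratio < best:
                    best, r = ratio, i
        T = pivot(T, r, k)                                      # Step 6

T0 = [[Fraction(v) for v in row]
      for row in [[-10, -2, -1], [1, 1, 1], [-2, -3, 1]]]   # Example 2's data
status, T = asm(T0)
print(status)   # → inconsistent
```

On Example 2's data the loop performs three pivots and then finds no negative entry in w while a row is still infeasible, reproducing the inconsistency verdict derived by hand below.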

Proof of Correctness

As stated earlier, the objective of simplex phase 1 is to minimize the sum of all artificial variables. Analogously, as shown in the last section, the infeasibility objective w is to Minimize Σ_{p+i∈L} x_{p+i}^-. Just like the simplex method, our method's ratio test, by taking the minimum of all ratios, preserves the feasibility of the existing feasible variables. Hence, in the streamlined art-free simplex method the number of negative basic variables x_L^- steadily decreases. The overall structure of pivoting forces Σ_{p+i∈L} b_i to be strictly increasing throughout the iterations for non-degenerate pivots, and constant for degenerate pivots. So the finiteness of the total number of bases of any LP problem proves the finiteness of our method for completely non-degenerate LP problems.

Example 1

Obtain a feasible basis of the following system of inequalities using ASM.

−8x_1 + x_2 − 5x_3 ≥ 6
8x_1 + 8x_2 ≥ 8
7x_1 + 9x_2 ≥ 1
7x_1 + 7x_2 − 9x_3 ≥ 6
x_1 ≥ 0,  x_2 ≥ 0,  x_3 ≥ 0

By adding non-negative slack variables x 4, x 5, x 6, and x 7 we can construct the associated reduced short table along with row vector w (sum of rows of infeasible basic variables) of the above problem as

Initial table:

        b     x_1   x_2   x_3
w    [ −21   −14   −25    14 ]
x_4  [  −6     8    −1     5 ]
x_5  [  −8    −8    −8     0 ]
x_6  [  −1    −7    −9     0 ]
x_7  [  −6    −7    −7     9 ]

Clearly, current basic solution is infeasible.

Here L = {4,5,6,7}; replace x_L → x_L^-

        b     x_1   x_2   x_3
w    [ −21   −14   −25    14 ]
x_4  [  −6     8    −1     5 ]
x_5  [  −8    −8    −8     0 ]
x_6  [  −1    −7    −9*    0 ]
x_7  [  −6    −7    −7     9 ]

Here B = {4,5,6,7} and N = {1,2,3}. According to the most negative coefficient rule k = 2, so the entering basic variable is x_2, and according to the new minimum ratio test r = 6, so the leaving basic variable is x_6^-. Perform the pivot operation on (6,2).

Replace x_6^- → x_6, L = {4,5,6,7}\{6} = {4,5,7}, w_6 := w_6 + 1

            b             x1          x6   x3wx4x5x2x7[164/949/916/91453/965/91/9564/9128/9*8/901/97/91/9047/914/97/99]

It can be seen that now a single infeasibility is removed.

Iteration 2:

Here, k = 1 and r = 5 perform pivot operation on (5,1).

Since r ∈ L, replace x_5^- → x_5, L = {4,5,7}\{5} = {4,7}, w_5 := w_5 + 1

       b             x5          x6     x3wx4x1x2x7[31/279/12823/161419/265/1289/1651/29/1281/1601/27/1281/16067/647/8*9]

Iteration 3:

Here, k = 6 and r = 7 perform pivot operation on (7,6).

Since r ∈ L, replace x_7^- → x_7, L = {4,7}\{7} = {4}, w_7 := w_7 + 1

    b         x5         x7            x3wx4x1x2x6[79/147/169/1411/1479/147/169/1411/141/141/161/149/14*13/141/161/149/1448/71/88/772/7]

Iteration 4:

Here, k = 3 and r = 1 perform pivot operation on (1,3).

    b         x5         x7     x1wx4x3x2x6[50/913/365/911/950/913/365/911/91/97/721/9*14/911/80189/8016]

Iteration 5:

Here, k = 7 and r = 3 perform pivot operation on (3,7).

    b       x5x3x1wx4x7x2x6[51/85951/8*5917/891411/80189/8016]

Iteration 6:

Here, k = 5 and r = 4 perform pivot operation on(4,5).

Since r ∈ L, replace x_4^- → x_4, L = {4}\{4} = φ, w_4 := w_4 + 1

bx4  x3     x1wx5x7x2x6[00004084072367264961585394565]

Now all the infeasibilities have been removed and primal feasibility is achieved; the feasible solution is (x_1, x_2, x_3) = (0, 6, 0).

For further illustration, see Fig. 4 for an iteration-by-iteration comparison between the art-free simplex method (ASM) and simplex method (SM) phase 1 on the above example.

Fig 4. Iteration by iteration comparison between ASM and Simplex method for example 1.


Remark:

The number of iterations and the pivoting sequence of the art-free simplex method and simplex phase 1 are the same, but it can easily be seen that the number of computations is noticeably reduced. For more details, see the computational results section.

Example 2

Show that the following system of inequalities is inconsistent by using Art-free simplex method and usual simplex method.

2x_1 + x_2 ≥ 10
x_1 + x_2 ≤ 1
3x_1 − x_2 ≥ 2
x_1 ≥ 0,  x_2 ≥ 0

Initial table

        b     x_1   x_2
w    [ −12    −5     0 ]
x_3  [ −10    −2    −1 ]
x_4  [   1     1     1 ]
x_5  [  −2    −3*    1 ]

Iteration 1

        b       x_5    x_2
w    [ −26/3   −2/3   −5/3 ]
x_3  [ −26/3   −2/3   −5/3 ]
x_4  [  1/3     1/3    4/3* ]
x_1  [  2/3    −1/3   −1/3 ]

Iteration 2

        b       x_5    x_4
w    [ −33/4   −1/4    5/4 ]
x_3  [ −33/4   −1/4    5/4 ]
x_2  [  1/4     1/4*   3/4 ]
x_1  [  3/4    −1/4    1/4 ]


Iteration 3

        b     x_2   x_4
w    [ −8     1     2 ]
x_3  [ −8     1     2 ]
x_5  [  1     4     3 ]
x_1  [  1     1     1 ]

The above table shows that the algorithm has terminated unsuccessfully: the w-row contains no negative entry while L = {3} is still non-empty. Hence the problem is primal inconsistent. See Fig. 5 for a comparison with the simplex method.

Fig 5. Iteration by iteration comparison between ASM and Simplex method for example 2.


Dual Version of Art-Free Simplex Method

The symmetric relationship between primal and dual linear programs always enables one to construct a dual version of every method developed for solving the primal problem. The most popular example of this beautiful characteristic is the dual simplex method, which is known for its applications in sensitivity analysis, integer programming, and branch-and-bound/cut/price algorithms. Unfortunately, however, phase 1 of the dual simplex method has seen little use in the practice of solving dual LPs [24].

In phase 1 of the dual simplex method, to obtain a dual feasible basis of a given LP problem an additional dual slack variable is introduced and assigned an adequately large cost coefficient M in the dual objective function. This is analogous to adding a constraint to the primal problem. The approach also requires the addition of artificial variables to transform the LP into standard form, and it becomes tedious as the size of the problem increases considerably. Another reason for avoiding this approach in practice is its dependence on the value of M: too high a value may cause numerical problems, whereas too small a value may fail to produce a primal feasible solution [24].

In this section we develop the dual version of the art-free simplex method; in fact, the development of this pivot algorithm is a trivial task. Without going into geometric or algebraic complications, we may construct a dual version of the method by keeping in mind that the aim is to solve the dual feasibility problem through the primal short table. Consider the following LP problem,

Maximize 5x_1 − 2x_2 + 7x_3
subject to  −8x_1 + x_2 − 5x_3 ≥ 6
            8x_1 + 8x_2 ≥ 8
            7x_1 + 9x_2 ≥ 1
            7x_1 + 7x_2 − 9x_3 ≥ 6
            x_1 ≥ 0,  x_2 ≥ 0,  x_3 ≥ 0    (10)

By adding non-negative slack variables x_4, x_5, x_6 and x_7, the associated short table of the above problem can be constructed as shown below. Here the dual variables y_1, y_2, y_3, y_4, y_5, y_6 and y_7 are shown explicitly, as it is required to observe the dual variables too.

              b    x_1 (y_1)   x_2 (y_2)   x_3 (y_3)
z   (y)    [  0      −5           2          −7 ]
x_4 (y_4)  [ −6       8          −1           5 ]
x_5 (y_5)  [ −8      −8          −8           0 ]
x_6 (y_6)  [ −1      −7          −9           0 ]
x_7 (y_7)  [ −6      −7          −7           9 ]

Here y is the dual objective variable. Objective coefficients (z-coefficients) of primal non-basic variables are the values of dual basic variables, and values of primal basic variables are coefficients of dual non-basic variables in dual objective.

That is, primal and dual solutions associated with above short table are,

x_1 = 0,  x_2 = 0,  x_3 = 0,  x_4 = −6,  x_5 = −8,  x_6 = −1,  x_7 = −6
y_1 = −5,  y_2 = 2,  y_3 = −7,  y_4 = 0,  y_5 = 0,  y_6 = 0,  y_7 = 0

With mutual objective values z = − y = 0

Clearly, the current primal and dual basic solutions are both infeasible.

As demonstrated earlier in this manuscript, to achieve optimality one may either achieve primal feasibility first and then proceed to optimality through the primal simplex method, or achieve dual feasibility first and then proceed to optimality through the dual simplex method. If we measure the degree of infeasibility by the number of infeasible variables, one can see that the above problem is more primal infeasible than dual infeasible, so from a general perspective the primal feasibility achievement problem might be more difficult than the dual feasibility achievement problem. For system (10), to attain dual feasibility we may either apply the art-free simplex method to its reduced dual short table (see Fig. 6) or apply the dual art-free simplex method to its reduced primal short table (see Fig. 7).

Fig 6. Reduced dual short table for system (10) along with the infeasibility objective w.


Fig 7. Reduced primal short table for system (10) along with the dual infeasibility objective w’.


Problem 2

Given a short table D(B), obtain dual feasibility.

Algorithm: Dual Art-free Simplex Method (DASM)

  • Step 1: Let K be the maximal subset of N such that K = {j : d_0j < 0, j ∈ N}. If K = φ then D(B) is dual feasible. Exit.

  • Step 2: Denote the dual basic variables y_K by y_K^- and compute the dual infeasibility objective vector w′(B) ∈ R^B such that w′_i = Σ_{j∈K} d_ij, i ∈ B. Append w′ to the right of the short table D(B).

  • Step 3: Let L ⊆ B such that L = {i : w′_i < 0, i ∈ B}. If L = φ then D(B) is dual inconsistent. Exit.

  • Step 4: Choose r ∈ L such that w′_r ≤ w′_l, ∀l ∈ L

    (Ties are broken arbitrarily)

  • Step 5: Choose k_1 ∈ K and k_2 ∈ N\K such that
    k_1 = argmax{ d_0j/d_rj | (d_0j ≤ 0, d_rj > 0), j ∈ K }
    k_2 = argmax{ d_0j/d_rj | (d_0j ≥ 0, d_rj < 0), j ∈ N\K }

    Set k := argmax{ d_{0,k1}/d_{r,k1}, d_{0,k2}/d_{r,k2} }

  • Step 6: Make a pivot on (r, k) (⇒ Set B := (B ⋃ {k})\{r}, N := (N ⋃ {r})\{k} and update D(B) along with the appended w’(B)).

  • Step 7: If k ∈ K, set K := K\{k} and w′_k := w′_k + 1, and replace the notation y_k^- by y_k

  • Step 8: If K = φ then D(B) is dual feasible. Exit.

    Otherwise, go to Step 3.

Example 3

Obtain a complementary dual feasible basis for the LP associated with the following short table by using the dual art-free simplex method.

              b    x_1 (y_1)   x_2 (y_2)   x_3 (y_3)
z   (y)    [  0      −5           2          −7 ]
x_4 (y_4)  [ −6       8          −1           5 ]
x_5 (y_5)  [ −8      −8          −8           0 ]
x_6 (y_6)  [ −1      −7          −9           0 ]
x_7 (y_7)  [ −6      −7          −7           9 ]

Here K = {1,3}; replace y_K → y_K^-

b    x1x2    x3w​ 'zx4x5x6x7[0527126815138880817907677*916]y4y5y6y7y1y2y3

Iteration 1:

Here, B = {4,5,6,7} and N = {1,2,3}. According to the most negative dual coefficient rule k = 2, so the leaving dual basic variable is y_2, and according to the art-free dual minimum ratio test r = 7, so the entering dual basic variable is y_7. Perform the pivot operation on (7,2).

    b         x1     x7      x3          w​ 'zx4x5x6x2[12/732/731/752/736/77*1/726/775/78/7168/772/7184/747/729/781/795/76/711/79/716/7]y4y5y6y2y1      y7      y3

Iteration 2:

Here, k = 1 and r = 4 perform pivot operation on (4,1).

Since k = 1 ∈ K, replace y_1^- → y_1, K = {1,3}\{1} = {3}, w′_1 := w′_1 + 1

     b            x4           x7               x3         ​ w'zx1x5x6x2[192/493/711/49139/49139/4936/491/71/4926/49*26/49632/4916/772/4988/4988/49257/492/765/49515/49515/496/491/78/4937/4937/49]y1y5y6y2y4            y7           y3

Iteration 3:

Here, k = 3 and r = 1 perform pivot operation on (1,3).

Since k = 3 ∈ K, replace y_3^- → y_3, K = {3}\{3} = φ, w′_3 := w′_3 + 1

          b            x4           x7          x1    w'zx3x5x6x2[102/1331/263/26139/26018/137/261/2649/260200/1336/1320/1344/130121/1381/2645/26515/26012/139/265/2637/260]y3y5y6y2                  y4            y7           y1

Dual feasibility is achieved; the complementary dual feasible solution is (x_1, x_2, x_3) = (0, 12/13, 18/13).

Example 4

Show that the following problem is dual inconsistent by using dual art-free simplex method.

Maximize 3x_1 + 5x_2 + 2x_3
subject to  x_1 − 5x_3 ≤ 4
            x_2 − 4x_3 ≤ 6
            3x_1 + 2x_2 − x_3 ≤ 18
            x_1 + x_2 − 2x_3 ≤ 8
            5x_1 + 4x_2 − 20x_3 ≤ 32
            x_1 ≥ 0,  x_2 ≥ 0,  x_3 ≥ 0

By adding non-negative slack variables x_4, x_5, x_6, x_7 and x_8, we can construct the associated short table, along with the column vector w′ (the sum of the columns of the infeasible dual basic variables), of the above problem.

Initial table:

        b    x_1   x_2   x_3     w′
z    [  0    −3    −5    −2  |  −10 ]
x_4  [  4     1     0    −5  |   −4 ]
x_5  [  6     0     1    −4  |   −3 ]
x_6  [ 18     3     2    −1  |    4 ]
x_7  [  8     1     1    −2  |    0 ]
x_8  [ 32     5     4   −20  |  −11 ]

Here y 1, y 2, y 3, y 4, y 5, y 6, y 7 and y 8 are dual variables.

Here K = {1,2,3}; replace yK by y'K:

         b     x1     x2     x3     w'
  z      0      3      5      2    -10
  x4     4      1      0     -5      4    y4
  x5     6      0      1     -4      3    y5
  x6    18      3      2     -1     -4    y6
  x7     8      1      1     -2      0    y7
  x8    32      5      4    -20*    11    y8
               y1     y2     y3

Iteration 1:

Here, k = 3 and r = 8; perform the pivot operation on (8,3).

Since k = 3 ∈ K, replace y'3 by y3; K = {1,2,3}\{3} = {1,2}; w3 := w3 + 1.

          b       x1      x2      x8       w'
  z    -16/5     7/2    27/5     1/10   -89/10
  x4    -4      -1/4    -1      -1/4      5/4    y4
  x5    -2/5    -1       1/5    -1/5*     4/5    y5
  x6    82/5    11/4     9/5    -1/20   -91/20   y6
  x7    24/5     1/2     3/5    -1/10   -11/10   y7
  x3    -8/5    -1/4    -1/5    -1/20     9/20   y3
                 y1      y2      y8

Iteration 2:

Here, k = 8 and r = 5; perform the pivot operation on (5,8).

          b      x1      x2      x5      w'
  z     -3       3     11/2     1/2    -17/2
  x4    -7/2     1     -5/4    -5/4      1/4    y4
  x8     2       5     -1      -5       -4      y8
  x6    33/2     3      7/4    -1/4    -19/4    y6
  x7     5       1      1/2    -1/2     -3/2    y7
  x3    -3/2     0     -1/4    -1/4      1/4    y3
                y1      y2      y5

The algorithm has terminated unsuccessfully: the last table shows that the problem is dual inconsistent.
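The conclusion can be cross-checked independently of the method (this check is ours, not part of the paper's algorithm): a feasible LP whose feasible region contains an improving ray is unbounded, and an unbounded primal implies an inconsistent dual. For Example 4 the direction d = (0, 0, 1) is such a certificate, since every x3 coefficient in the constraints is negative while the objective coefficient of x3 is positive:

```python
# Data of Example 4 in max{cx : Ax <= b, x >= 0} form.
c = [3, 5, 2]
A = [[1, 0, -5],
     [0, 1, -4],
     [3, 2, -1],
     [1, 1, -2],
     [5, 4, -20]]
b = [4, 6, 18, 8, 32]

x0 = [0, 0, 0]    # a feasible point (b is non-negative)
d = [0, 0, 1]     # candidate unbounded direction

feasible = all(sum(ai * xi for ai, xi in zip(row, x0)) <= bi
               for row, bi in zip(A, b))
recession = all(sum(ai * di for ai, di in zip(row, d)) <= 0 for row in A)
improving = sum(ci * di for ci, di in zip(c, d)) > 0

print(feasible and recession and improving)   # True -> primal unbounded, dual inconsistent
```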

Applications

Since the proposed method ASM is an efficient alternative to simplex method phase 1 and to the dual simplex method, it can effectively be incorporated into the solution process of linear programming problems, integer programming problems, sensitivity analysis, parametric programming, etc. It can become an essential tool directly employed by researchers in solving problems from diverse fields, for example biological sciences [29][31], medical sciences [28][30], biochemical sciences [32] and mechanical engineering [33].

Computational Results

In this section, tables 1 and 2 present a comparison of the computational results of our algorithm with the simplex method. Using the random models suggested by Kaluzny [26], we generated 250 consistent linear programs and 250 inconsistent linear programs with the coefficients cj, bi and aij chosen randomly from the integer interval [−50,50]. The results on both consistent and inconsistent problems show that our algorithm (ASM) and the simplex method (SM) take the same number of iterations.
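The instance generation can be sketched as follows. The exact generation model of Kaluzny [26] (constraint senses, sparsity, any feasibility filtering) is not reproduced here, so this is only a plain uniform-integer sketch with names of our own choosing:

```python
import random

def random_lp(m, n, lo=-50, hi=50, seed=None):
    """Random LP data (c, A, b) with m constraints and n variables, all
    integer coefficients drawn uniformly from [lo, hi], as in the
    experiments of tables 1 and 2 (a sketch; details of [26] may differ)."""
    rng = random.Random(seed)
    c = [rng.randint(lo, hi) for _ in range(n)]
    A = [[rng.randint(lo, hi) for _ in range(n)] for _ in range(m)]
    b = [rng.randint(lo, hi) for _ in range(m)]
    return c, A, b

c, A, b = random_lp(5, 10, seed=1)   # a 5x10 instance
```

Seeding the generator makes each random instance reproducible across runs.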

Table 1. Average number of iterations on random consistent LPs.

Order Average iterations (ASM = SM)
3x3 1.432
3x5 1.76
3x7 1.796
5x5 2.913
5x10 3.191
7x5 4.177
7x10 5.082
10x5 6.117
10x10 8.128
10x20 7.623
10x25 7.628
15x15 14.407
15x20 14.359
20x20 22.167
20x30 20.164
30x30 41.361
40x40 66.660
50x50 102.079
50x70 87.842
50x100 60.9154
70x50 172.832
70x70 187.799
70x100 152.531
80x80 231.531
90x90 287.652
100x100 347.511

Table 2. Average number of iterations on random inconsistent LPs.

Order Average iterations (ASM = SM)
3x3 0.96
3x5 1.332
3x7 1.381
5x5 2.533
5x10 3.571
7x5 3.293
7x10 5.001
10x5 4.236
10x10 7.328
10x20 10.076
10x25 11.465
15x15 12.884
15x20 15.643
20x20 20.385
20x30 27.136
30x30 41.398
40x40 66.911
50x50 94.007
50x70 125.203
50x100 136.697
70x50 108.923
70x70 174.128
70x100 230.393
80x80 219.281
90x90 279.052
100x100 317.426

For further testing on practical problems, we executed both algorithms on NETLIB problems [34], which again revealed that for each problem the number of iterations taken by the two algorithms is exactly the same. The results are summarized in table 3.

Table 3. Number of iterations on 22 instances of NETLIB test problems.

S. No. Problem Title Size Iterations (ASM = SM)
1 ADLITTLE 56x97 22
2 AFIRO 27x32 6
3 AGG2 516x302 47
4 AGG3 516x302 37
5 BANDM 305x472 9277
6 BEACONFD 173x262 89
7 BNL1 643x1175 8486
8 E226 223x282 128
9 FFFFF800 524x854 2001
10 ISRAEL 174x142 8
11 SCAGR25 471x500 2051
12 SCAGR7 129x140 249
13 SCFXM2 660x914 1378
14 SCFXM3 990x1371 2408
15 SCORPION 388x358 398
16 SCSD1 77x760 121
17 SCSD6 147x1350 297
18 SCSD8 397x2750 2784
19 SCTAP2 1090x1880 1910
20 SCTAP3 1480x2480 1923
21 SHARE2B 96x79 120
22 STOCFOR1 117x111 56

The above comparisons between ASM and SM confirm that ASM and SM always take exactly the same number of iterations. The following figures compare the computational efficiency of the two methods:

Figs. 8 and 9 compare SM and ASM by the average number of multiplication and addition operations needed to solve LP problems of different sizes. The figures illustrate that, whether we count multiplications or additions, ASM always needs fewer operations than SM. The difference becomes quite noticeable for problems that have many more constraints than variables. So it is empirically observed that ASM is most advantageous when the number of constraints minus the number of variables, m − n, is large. This observation is supported by the graph in Fig. 10: as m − n increases, the percentage of computations saved by ASM also increases. For example, for a problem of order 200 × 300 the average saving in computations is only about 5%, whereas for a problem of order 300 × 200 it reaches a remarkable 40%. The same effect can be seen in the problems of order 5 × 10, 10 × 5, 50 × 100 and 100 × 70. For problems with m ≤ n, the savings in the number of computations are not very high. Basic duality theory asserts that, in contrast to ASM, the dual art-free simplex method (DASM) would be computationally more efficient when n − m is large.
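A rough per-pivot cost model makes this trend plausible. The model below is our own back-of-envelope assumption, not a count taken from the paper: phase 1 of SM updates a tableau carrying n structural, m slack and m artificial columns, while ASM replaces the m artificial columns with the single w' column, so each pivot touches about m − 1 fewer columns:

```python
def est_saving(m, n):
    """Estimated fraction of per-pivot column updates saved by ASM over SM
    phase 1, for m constraints and n variables (a crude model, not the
    paper's measured operation counts)."""
    sm_cols = n + 2 * m + 1    # b + structural + slack + artificial columns
    asm_cols = n + m + 2       # b + structural + slack + w' columns
    return 1 - asm_cols / sm_cols

# More constraints than variables -> bigger saving, matching the trend of Fig. 10.
print(est_saving(300, 200) > est_saving(200, 300))   # True
```

The model is qualitative only; the measured savings in Figs. 8-10 also depend on the pivot counts and sparsity of the actual instances.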

Fig 8. Comparison of SM and ASM in terms of Multiplication Operations.


Fig 9. Comparison of SM and ASM in terms of Addition Operations.


Fig 10. Graph showing trend of percentage of computations saved in ASM.


Conclusion

In this paper, streamlined artificial-free versions of the simplex and dual simplex methods have been presented. These new versions do not need any kind of artificial variables or artificial constraints; they can start from any feasible or infeasible basis of an LP, giving the user full freedom to begin with either the primal artificial-free version or the dual artificial-free version without making any changes to the LP structure. These methods follow the same pivoting sequence as simplex phase 1 without any explicit description of artificial variables or artificial constraints. Computational results showed that ASM is more efficient when m − n is large and DASM is more efficient when n − m is large. These methods also provide benefits in classroom teaching by eliminating the relatively difficult and tedious calculations involving artificial variables and constraints, and they serve as a teaching aid for teachers who want to teach feasibility achievement as a separate topic before teaching optimality achievement. They are also helpful tools in integer programming and sensitivity analysis, because they provide a way to avoid the dual simplex method.

Data Availability

Data are available from Figshare (http://dx.doi.org/10.6084/m9.figshare.1261307).

Funding Statement

Grant number: Yearly grant 2014; Funding Institution: University of Karachi; Fund received by: Dr. Syed Inayatullah; Initials: SI; URL: www.uok.edu.pk. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1. Lenstra JK, Rinnooy Kan AHG, Schrijver A (1991) History of Mathematical Programming: A Collection of Personal Reminiscences. North-Holland, Amsterdam.
  • 2. Grattan-Guinness I (1970) Joseph Fourier's anticipation of linear programming. Operational Research Quarterly 21(3): 361–364.
  • 3. Dines L (1918) Systems of inequalities. Annals of Mathematics 20(2): 191–198.
  • 4. Motzkin T (1952) Contributions to the theory of linear inequalities. RAND Corporation Translation 22.
  • 5. Dantzig GB (1951) Maximization of a linear function of variables subject to linear inequalities. In: Koopmans TC (ed) Activity Analysis of Production and Allocation. Wiley, New York, pp. 339–347.
  • 6. Dantzig GB (1963) Linear Programming and Extensions. Princeton University Press, Princeton, NJ.
  • 7. Shamir R (1987) The efficiency of the simplex method: a survey. Management Science 33: 301–334.
  • 8. Dantzig GB, Orden A, Wolfe P (1955) The generalized simplex method for minimizing a linear form under linear inequality restraints. Pacific Journal of Mathematics 5(2): 183–195.
  • 9. Wagner HM (1956) A two-phase method for the simplex tableau. Operations Research 4: 443–447.
  • 10. Serang O (2012) Conic sampling: an efficient method for solving linear and quadratic programming by randomly linking constraints within the interior. PLOS ONE.
  • 11. Arsham H, Baloh P, Damij T, Grad J (2003) An algorithm for simplex tableau reduction with numerical comparison. International Journal of Pure and Applied Mathematics 4: 53–80.
  • 12. Arsham H, Damij T, Grad J (2003) An algorithm for simplex tableau reduction: the push-to-pull solution strategy. Applied Mathematics and Computation: 525–547.
  • 13. Arsham H (1989) A tabular simplex-type algorithm as a teaching aid for general LP models. Mathematical and Computer Modelling 12: 1051–1056.
  • 14. Paparrizos K (1990) The two-phase simplex without artificial variables. Methods of Operations Research: 73–83.
  • 15. Arsham H (1997) Initialization of the simplex algorithm: an artificial-free approach. SIAM Review: 736–744.
  • 16. Arsham H (1997) An artificial-free simplex algorithm for general LP models. Mathematical and Computer Modelling 25(1): 107–123.
  • 17. Enge A, Huhn P (1998) A counterexample to H. Arsham's "Initialization of the simplex algorithm: an artificial-free approach". SIAM Review.
  • 18. Stojković NV, Stanimirović PS (2001) Two direct methods in linear programming. European Journal of Operational Research: 417–439.
  • 19. Junior H, Lins M (2005) An improved initial basis for the simplex algorithm. Computers & Operations Research: 1983–1993.
  • 20. Wei L, Guangting C (2006) A new least square algorithm for linear programming. Applied Mathematics: A Journal of Chinese Universities, Series B 21: 214–222.
  • 21. Inayatullah S, Khan N, Imtiaz M (2010) New minimum angle algorithms for feasibility and optimality. Canadian Journal on Computing in Mathematics, Natural Sciences, Engineering & Medicine: 22–36.
  • 22. Boonperm A, Sinapiromsaran K (2014) The artificial-free technique along the objective direction for the simplex algorithm.
  • 23. Huangfu Q (2013) High performance simplex solver. PhD dissertation, University of Edinburgh.
  • 24. Koberstein A (2005) The dual simplex method: techniques for a fast and stable implementation. PhD dissertation.
  • 25. Illés T, Terlaky T (2002) Pivot versus interior point methods: pros and cons. European Journal of Operational Research 140(2): 170–190.
  • 26. Kaluzny B (2001) Finite pivot algorithms and feasibility. MS thesis, School of Computer Science, McGill University, Montreal, Quebec, Canada.
  • 27. Arsham H (2006) A Big-M free solution algorithm for general linear programs. International Journal of Pure and Applied Mathematics 32: 37–52.
  • 28. Darvas F (1974) Application of the sequential simplex method in designing drug analogs. Journal of Medicinal Chemistry: 799–804.
  • 29. De Bruijn W, Ketelaars D, Gelsema E, Sorber L (1991) Comparison of the simplex method with several other methods for background-fitting for electron energy-loss spectral quantification of biological materials. Microsc Microanal Microstruct: 281–291.
  • 30. Pulgarín JM, Molina AA, Pardo MA (2002) The use of the modified simplex method to optimize the room temperature phosphorescence variables in the determination of an antihypertensive drug. Talanta 57: 795–805.
  • 31. Haefner JW (2005) Modeling Biological Systems: Principles and Applications. Springer, New York.
  • 32. Sun Y, Fleming RMT, Thiele I, Saunders MA (2013) Robust flux balance analysis of multiscale biochemical reaction networks. BMC Bioinformatics.
  • 33. Jatau J, Datau S, Agyo D (1999) Application of simplex method to determine the minimum deviation in parameters affecting dimensional accuracy during rolling from their set values. Bulletin of the Science Association of Nigeria 22: 113–120.
  • 34. The NETLIB repository for linear programming problems. Available at http://www.netlib.org/lp/data.
