Theory of LP and the Simplex method


First, we will briefly discuss the primal-dual theory with a few examples. Consider the Factory problem that was discussed earlier:

\[\begin{split} \begin{align} &\text{max}&&\\ &\qquad z=40x_1 + 50x_2\\ &\text{subject to}&&\\ &\qquad 1x_1 + 2x_2 \le 40\\ &\qquad 4x_1 + 3x_2 \le 120\\ &\qquad x_1, x_2 \ge 0.\\ \end{align} \end{split}\]
!pip install gurobipy
from gurobipy import *
m = Model()
x1 = m.addVar(lb=0, vtype = GRB.CONTINUOUS, name='x1') 
x2 = m.addVar(lb=0, vtype = GRB.CONTINUOUS, name='x2')
m.setObjective(40*x1+50*x2, GRB.MAXIMIZE)
m.addConstr(1*x1+2*x2<=40, name='c1')
m.addConstr(4*x1+3*x2<=120, name= 'c2')
m.optimize()
print('*'*100)
for var in m.getVars():  # decision variables: value, (obj coeff, SAObjLow, SAObjUp, reduced cost)
    print(var.varName, '=', var.x, (var.obj,var.SAObjLow, var.SAObjUp, var.RC))
print('*'*100)
print('optimal total revenue:', m.objVal)
print('*'*100)
for con in m.getConstrs():  # constraints: slack, shadow price, (RHS, SARHSLow, SARHSUp)
    print(con.ConstrName, ': slack =', con.slack,', shadow price=',
          con.pi,',', (con.RHS, con.SARHSLow, con.SARHSUp))
print('*'*100)
Collecting gurobipy
  Downloading gurobipy-9.5.0-cp37-cp37m-manylinux2014_x86_64.whl (11.5 MB)
     |████████████████████████████████| 11.5 MB 4.8 MB/s 
Installing collected packages: gurobipy
Successfully installed gurobipy-9.5.0
Restricted license - for non-production use only - expires 2023-10-25
Gurobi Optimizer version 9.5.0 build v9.5.0rc5 (linux64)
Thread count: 1 physical cores, 2 logical processors, using up to 2 threads
Optimize a model with 2 rows, 2 columns and 4 nonzeros
Model fingerprint: 0x3a526911
Coefficient statistics:
  Matrix range     [1e+00, 4e+00]
  Objective range  [4e+01, 5e+01]
  Bounds range     [0e+00, 0e+00]
  RHS range        [4e+01, 1e+02]
Presolve time: 0.01s
Presolved: 2 rows, 2 columns, 4 nonzeros

Iteration    Objective       Primal Inf.    Dual Inf.      Time
       0    9.0000000e+31   3.250000e+30   9.000000e+01      0s
       2    1.3600000e+03   0.000000e+00   0.000000e+00      0s

Solved in 2 iterations and 0.02 seconds (0.00 work units)
Optimal objective  1.360000000e+03
****************************************************************************************************
x1 = 24.0 (40.0, 25.0, 66.66666666666667, 0.0)
x2 = 8.0 (50.0, 30.0, 80.0, 0.0)
****************************************************************************************************
optimal total revenue: 1360.0
****************************************************************************************************
c1 : slack = 0.0 , shadow price= 16.0 , (40.0, 30.0, 80.0)
c2 : slack = 0.0 , shadow price= 6.0 , (120.0, 60.0, 160.0)
****************************************************************************************************

Here, the optimal solution is

\[ (x^*,z)=((24,8),1360). \]

In addition, the Python solution shows that

\[\begin{split} \begin{align} \qquad c_1 &= 40~(25, 66.667),\\ \qquad c_2 &= 50~(30, 80),\\ \qquad b_1 &= 40~(30, 80),~\text{with shadow price } 16, \\ \qquad b_2 &= 120~(60, 160),~\text{with shadow price } 6,\\ \qquad s_1 &= 0, \\ \qquad s_2 &= 0. \\ \end{align} \end{split}\]

That is, the cost coefficients, \(\mathbf c = (c_1, c_2)\), have values 40 and 50, and the ranges over which each may vary without changing the optimal \(x^*\) are \((25, 66.667)\) and \((30, 80)\). Similarly, the RHS values, \(\mathbf b = (b_1, b_2)\), are 40 and 120 and may vary within \((30, 80)\) and \((60, 160)\) without changing the optimal solution mix. The shadow prices of the two constraints are 16 and 6, and neither constraint has slack.

Is it possible to infer the optimal \(z\) or at least bound its value without solving the LP model?

The objective above, \(40x_1 + 50x_2\), can be bounded using the first constraint and the fact that both decision variables are non-negative:

\[ 40x_1 + 50x_2\le 40x_1 + 80x_2=40\times(x_1 + 2x_2)\le 40\times 40=1600. \]

Can we do better?

Let’s use the second constraint,

\[ 40x_1 + 50x_2\le 50/3\times(4x_1 + 3x_2)=66.67x_1 + 50x_2\le 50/3\times 120=2000, \]

but here the value 2000 is higher (and hence weaker) than the best upper bound we have so far, which is 1600.

Systematically, we can write that

\[ 40x_1 + 50x_2\le d_1x_1 +d_2x_2 \le h, \]

and let \(h\) be the upper bound on the maximum of the objective. The trick is that we will use the constraint equations to infer \(d_1, d_2\) and \(h\). That is, we multiply the first constraint by \(v_1\ge0\), the second by \(v_2\ge0\), and then add the two:

\[ v_1(1x_1 + 2x_2)+v_2(4x_1 + 3x_2)\le 40v_1+120v_2 \]

or

\[ (v_1+4v_2)x_1+(2v_1+3v_2)x_2\le 40v_1+120v_2. \]

In the above notation:

\[\begin{split} \begin{align} &d_1=v_1+4v_2, \\ &d_2=2v_1+3v_2,\qquad \text{and}\\ &h=40v_1+120v_2. \end{align} \end{split}\]

How do we choose the best coefficients \(v_1\) and \(v_2\)? We must ensure that \(d_1\ge 40\) and \(d_2\ge 50\), so that the combination dominates the objective, and we want \(h\) to be as small as possible under these constraints. This is again an LP model, called the dual of the primal:

\[\begin{split} \begin{align} &\text{min}\\ &\qquad h=40v_1 + 120v_2\\ &\text{s.t.}\\ &\qquad 1v_1 + 4v_2 \ge 40\\ &\qquad 2v_1 + 3v_2 \ge 50\\ &\qquad v_1, v_2 \ge 0. \end{align} \end{split}\]

In general, the dual of a primal LP is another LP model derived from it in the following way:

  • Each variable in the primal becomes a constraint in the dual

  • Each constraint in the primal becomes a variable in the dual

  • The objective direction is reversed: a maximum in the primal becomes a minimum in the dual, and vice versa.

Hence, for the max primal

\[\begin{split} \begin{align} &\max \\ &\qquad\mathbf c\cdot\mathbf x,\\ &\text{s.t.}\\ &\qquad\mathbf A\mathbf x\le \mathbf b,\\ &\qquad\mathbf x \ge 0, \end{align} \end{split}\]

the corresponding dual is

\[\begin{split} \begin{align} &\min \\ &\qquad\mathbf b\cdot\mathbf v,\\ &\text{s.t.}\\ &\qquad\mathbf A^T\mathbf v\ge \mathbf c,\\ &\qquad\mathbf v\ge 0. \end{align} \end{split}\]

The interpretation is that we solve for \(\mathbf v\), the shadow prices of the primal, by constraining the shadow prices with the cost coefficients, \(\mathbf c\).
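
As a quick mechanical check of these rules, here is a minimal numpy sketch (the helper name build_dual is our own) that transposes the Factory primal into its dual data:

import numpy as np

def build_dual(A, b, c):
    """Given max{c.x : Ax <= b, x >= 0}, return the data of the dual
    min{b.v : A^T v >= c, v >= 0} as (A_dual, b_dual, c_dual)."""
    A = np.asarray(A, dtype=float)
    return A.T, np.asarray(c, dtype=float), np.asarray(b, dtype=float)

A = [[1, 2], [4, 3]]   # primal constraint matrix
b = [40, 120]          # primal RHS -> dual objective coefficients
c = [40, 50]           # primal objective coefficients -> dual RHS

A_d, b_d, c_d = build_dual(A, b, c)
print(A_d)   # [[1. 4.] [2. 3.]] -- the transpose
print(b_d)   # dual RHS: [40. 50.]
print(c_d)   # dual objective: [40. 120.]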

Solving for \(\mathbf v\) using Python, we find that the optimum is

\[ (v^*,h)=((16,6),1360). \]

m = Model()
v1 = m.addVar(lb=0, vtype = GRB.CONTINUOUS, name='v1') 
v2 = m.addVar(lb=0, vtype = GRB.CONTINUOUS, name='v2')
m.setObjective(40*v1+120*v2, GRB.MINIMIZE)
m.addConstr(1*v1+4*v2>=40, name='c1')
m.addConstr(2*v1+3*v2>=50, name= 'c2')
m.optimize()
print('*'*100)
for var in m.getVars():  # decision variables: value, (obj coeff, SAObjLow, SAObjUp, reduced cost)
    print(var.varName, '=', var.x, (var.obj,var.SAObjLow, var.SAObjUp, var.RC))
print('*'*100)
print('optimal total revenue:', m.objVal)
print('*'*100)
for con in m.getConstrs():  # constraints: slack, shadow price, (RHS, SARHSLow, SARHSUp)
    print(con.ConstrName, ': slack =', con.slack,', shadow price=',
          con.pi,',', (con.RHS, con.SARHSLow, con.SARHSUp))
Gurobi Optimizer version 9.5.0 build v9.5.0rc5 (linux64)
Thread count: 1 physical cores, 2 logical processors, using up to 2 threads
Optimize a model with 2 rows, 2 columns and 4 nonzeros
Model fingerprint: 0x8c4006b8
Coefficient statistics:
  Matrix range     [1e+00, 4e+00]
  Objective range  [4e+01, 1e+02]
  Bounds range     [0e+00, 0e+00]
  RHS range        [4e+01, 5e+01]
Presolve time: 0.01s
Presolved: 2 rows, 2 columns, 4 nonzeros

Iteration    Objective       Primal Inf.    Dual Inf.      Time
       0    0.0000000e+00   4.500000e+01   0.000000e+00      0s
       2    1.3600000e+03   0.000000e+00   0.000000e+00      0s

Solved in 2 iterations and 0.02 seconds (0.00 work units)
Optimal objective  1.360000000e+03
****************************************************************************************************
v1 = 16.0 (40.0, 30.0, 80.0, 0.0)
v2 = 6.0 (120.0, 60.0, 160.0, 0.0)
****************************************************************************************************
optimal total revenue: 1360.0
****************************************************************************************************
c1 : slack = 0.0 , shadow price= 24.0 , (40.0, 25.0, 66.66666666666666)
c2 : slack = 0.0 , shadow price= 8.0 , (50.0, 30.0, 80.0)

In addition, as shown above,

\[\begin{split} \begin{align} \qquad b_1 &= 40~(30,80),\\ \qquad b_2 &= 120~(60,160),\\ \qquad c_1 &= 40~(25, 66.667),~\text{with shadow price } 24, \\ \qquad c_2 &= 50~(30, 80),~\text{with shadow price } 8,\\ \qquad s_1 &= 0, \\ \qquad s_2 &= 0. \\ \end{align} \end{split}\]

The dual’s decision variables, \(\mathbf v\), are the primal’s shadow prices, and the dual’s \(\mathbf b\) and \(\mathbf c\) take their values from the primal’s \(\mathbf c\) and \(\mathbf b\). Lastly, the dual’s shadow prices (24 and 8) are the primal’s optimal decision-variable values.

The primal-dual correspondence gives us more flexibility in solving the LP model. In cases where the dual is simpler, we can solve it instead of the primal.

A few other properties emerge from the primal-dual relationship:

Weak duality

Consider the difference between the primal and dual objectives:

\[ \mathbf c\cdot \mathbf x-\mathbf v\cdot \mathbf b. \]

This expression can be expanded by adding and subtracting \(\mathbf v\cdot\mathbf A\mathbf x\). That is,

\[\begin{split} \begin{align} &\qquad \mathbf c\cdot \mathbf x-\mathbf v\cdot \mathbf b=\\ &\qquad \mathbf c\cdot \mathbf x-\mathbf v\cdot\mathbf A\mathbf x+ \mathbf v\cdot\mathbf A\mathbf x-\mathbf v\cdot \mathbf b =\\ &\qquad (\mathbf c-\mathbf v\mathbf A)\cdot\mathbf x+ \mathbf v\cdot(\mathbf A\mathbf x- \mathbf b).\\ \end{align} \end{split}\]

But for our maximization problem, primal and dual feasibility give

\[ \mathbf c-\mathbf v\mathbf A\le0, \qquad\text{and}\qquad \mathbf A\mathbf x- \mathbf b\le0. \]

Hence, since \(\mathbf x\ge 0\) and \(\mathbf v\ge 0\), both terms in the expansion are nonpositive, and

\[ \mathbf c\cdot \mathbf x\le\mathbf v\cdot \mathbf b. \]

That is, for a maximization problem, the objective of any dual-feasible \(\mathbf v\) is a natural upper bound on the objective of any primal-feasible \(\mathbf x\).
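
A small numeric sanity check of weak duality on the Factory data, with two arbitrary feasible points of our own choosing:

import numpy as np

A = np.array([[1, 2], [4, 3]])
b = np.array([40, 120])
c = np.array([40, 50])

x = np.array([10, 5])    # primal feasible: A @ x = [20, 55] <= b
v = np.array([20, 10])   # dual feasible: A.T @ v = [60, 70] >= c

assert np.all(A @ x <= b) and np.all(x >= 0)
assert np.all(A.T @ v >= c) and np.all(v >= 0)
print(c @ x, '<=', b @ v)   # 650 <= 2000: weak duality holds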

Complementary slackness

At a primal-dual optimal pair, both terms in the expansion above vanish:

\[\begin{split} \begin{align} &(\mathbf c-\mathbf v\mathbf A)\cdot \mathbf x=0 ~~~\text{(primal complementary slackness)},\\ &\text{and}\\ &\mathbf v\cdot (\mathbf A\mathbf x- \mathbf b)=0 ~~~\text{(dual complementary slackness)}.\\ \end{align} \end{split}\]

Thus,

\[ \mathbf c^T \mathbf x^*=\mathbf b^T\mathbf v^* \quad\text{(primal-dual value equality)}, \]

and the maximum of the optimal primal equals the minimum of the optimal dual (strong duality).

In our example,

\[ \mathbf c^T\mathbf x^*= (40,50)\cdot (24,8)=1360, \]

and

\[ \mathbf b^T\mathbf v^*= (40,120)\cdot (16,6)=1360. \]
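
Both complementary-slackness identities and the value equality can be verified numerically with the optima found above:

import numpy as np

A = np.array([[1, 2], [4, 3]])
b = np.array([40, 120])
c = np.array([40, 50])

x_star = np.array([24.0, 8.0])   # primal optimum
v_star = np.array([16.0, 6.0])   # dual optimum (the primal shadow prices)

print((c - v_star @ A) @ x_star)   # 0.0: primal complementary slackness
print(v_star @ (A @ x_star - b))   # 0.0: dual complementary slackness
print(c @ x_star, b @ v_star)      # 1360.0 1360.0: value equality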

KKT conditions for optimality

The Karush–Kuhn–Tucker (KKT) conditions provide necessary and sufficient conditions for LP optimality. In short, for maximizing the objective \(\mathbf c^T \mathbf x\), the following are required:

Primal feasibility:

\[\begin{split} \begin{align} &\mathbf A\mathbf x\le \mathbf b\\ &\mathbf x\ge 0 \end{align} \end{split}\]

Dual feasibility:

\[\begin{split} \begin{align} &\mathbf A^T\mathbf v\ge \mathbf c\\ &\mathbf v\ge 0 \end{align} \end{split}\]

Complementary slackness:

\[\begin{split} \begin{align} &(\mathbf c-\mathbf v\mathbf A)\mathbf x=0\\ &\mathbf v(\mathbf A \mathbf x- \mathbf b)=0. \end{align} \end{split}\]

A few remarks:

  1. Ratio constraints such as \(x_1/x_2\le2/3\) can be “linearized” to \(3x_1-2x_2\le0\) (assuming \(x_2>0\)).

  2. Decision variables of relatively large magnitude are best modeled as continuous variables even though they correspond physically to integer quantities.

  3. We can also linearize certain nonlinear objectives, for example minimax or min-deviation operators, as shown below.

Minimax: to minimize \(\max(3x_1+2x_2+x_3,\; x_1+x_2)\), introduce an auxiliary variable \(f\):

\[\begin{split} \begin{align} &\quad\text{min}\\ &\quad\quad f\\ &\quad\text{s.t.} \\ &\quad \quad f\ge 3x_1+2x_2+x_3\\ &\quad \quad f\ge x_1+x_2.\\ \end{align} \end{split}\]

Min deviation:

\[\begin{split} \begin{align} &\quad\text{min}\\ &\quad\quad 4|x_1-x_2|\\ \\ &\text{which is replaced with}\\ \\ &\quad\text{min}\\ &\quad\quad 4(s_1^++s_1^-)~~~~~~(\text{total deviation})\\ &\quad\text{s.t.}\\ &\quad\quad x_1-x_2=s_1^+-s_1^-\\ &\quad\quad s_1^+,s_1^-\ge0 \end{align} \end{split}\]
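
As an illustration, here is a minimal gurobipy sketch of the min-deviation linearization above; the side constraint \(x_1+x_2=10\) is our own addition so that the model has a nontrivial feasible set (a minimax objective is linearized the same way):

from gurobipy import Model, GRB

m = Model()
x1 = m.addVar(lb=0, name='x1')
x2 = m.addVar(lb=0, name='x2')
sp = m.addVar(lb=0, name='s_plus')    # positive part of x1 - x2
sm = m.addVar(lb=0, name='s_minus')   # negative part of x1 - x2
m.setObjective(4*(sp + sm), GRB.MINIMIZE)
m.addConstr(x1 - x2 == sp - sm)       # split the deviation into two parts
m.addConstr(x1 + x2 == 10)            # illustrative side constraint (ours)
m.optimize()
print(x1.x, x2.x, m.objVal)           # expect 5.0 5.0 0.0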

The Simplex Algorithm

The algorithm improves the solution by moving from one extreme point to an adjacent one while retaining feasibility.

Consider the following standard LP model of the Factory problem with \(x_1\), \(x_2\) decision variables and \(x_3\), \(x_4\) slack variables:

\[\begin{split} \begin{align} &\quad\text{max}\\ &\quad\quad z=40x_1+50x_2\\ &\quad\text{s.t.}\\ &\quad\quad x_1+2x_2+x_3=40\\ &\quad\quad 4x_1+3x_2+x_4=120\\ &\quad\quad x_1,x_2,x_3,x_4\ge0 \end{align} \end{split}\]

Then, we insert the input parameters, \(\mathbf A\), \(\mathbf b\), and \(\mathbf c\), into a tabular format

\[\begin{split} \begin{array}{lccccl} & x_1 & x_2 & x_3 & x_4 &\\ \mathbf c & 40 & 50 & 0 & 0 & \mathbf b\\ \mathbf A & 1 & 2 & 1 & 0 & 40\\ & 4 & 3 & 0 & 1 & 120\\ \end{array} \end{split}\]

Now, to begin, we choose an initial solution that is unique and feasible. For that we set \((x_1, x_2) = (0,0)\), which leaves us with \((x_3, x_4) = (40,120)\). We call the active nonzero variables basic feasible (B) and the others nonbasic feasible (N). Hence, the initial solution at \(t=0\) is \(\mathbf{x}^{(0)}=(0,0,40,120)\), and the current objective is \(\mathbf c^T \mathbf x^{(0)}=0\). Note that basic solutions exist only if the corresponding columns form a basis, the largest possible collection of linearly independent columns.

\[\begin{split} \begin{array}{lccccl} & x_1 & x_2 & x_3 & x_4 &\\ \mathbf c & 40 & 50 & 0 & 0 & \mathbf b\\ \mathbf A & 1 & 2 & 1 & 0 & 40\\ & 4 & 3 & 0 & 1 & 120\\ t=0 & N & N & B & B & \\ \mathbf{x}^{(0)} & 0 & 0 & 40 & 120 & \text{Current objective}: \mathbf c^T \mathbf x^{(0)}=0\\ \end{array} \end{split}\]
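
A quick numpy check confirms that the slack columns form a basis (here the identity), so the initial basic values are read off directly:

import numpy as np

A = np.array([[1.0, 2, 1, 0],
              [4.0, 3, 0, 1]])
b = np.array([40.0, 120])

B = A[:, [2, 3]]                   # columns of the slack variables x3, x4
print(np.linalg.matrix_rank(B))    # 2: the columns are linearly independent
print(np.linalg.solve(B, b))       # [ 40. 120.]: the initial (x3, x4)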

The goal is to improve the objective. For that, we need to find the “best” improve direction \(\Delta \mathbf{x}\) and step size \(\lambda\) while maintaining feasibility: \(\mathbf{x}^{(t+1)}=\mathbf{x}^{(t)}+\lambda \Delta \mathbf{x}\).

The constraint equations are \(\mathbf A\mathbf{x}^{(t)}=\mathbf{b}\). So, if the improved \(\mathbf{x}^{(t+1)}\) is feasible, then \(\mathbf A\mathbf{x}^{(t+1)}=\mathbf{b}\), or \(\mathbf A(\mathbf{x}^{(t)}+\lambda \Delta \mathbf{x})=\mathbf{b}\). Subtracting the two,

\[ \mathbf A(\mathbf{x}^{(t)}+\lambda \Delta \mathbf{x})-\mathbf A\mathbf{x}^{(t)}=\mathbf{b}-\mathbf{b}=0. \]

It follows that

\[ \mathbf A \Delta \mathbf{x}=0. \]

In our example,

\[\begin{split} \begin{pmatrix} 1 & 2 & 1 & 0 \\ 4 & 3 & 0 & 1 \end{pmatrix} \begin{pmatrix} \Delta x_1 \\ \Delta x_2 \\ \Delta x_3 \\ \Delta x_4 \end{pmatrix}=0. \end{split}\]

We want to swap one nonbasic variable with one basic variable, moving along an edge to an adjacent extreme point without losing uniqueness and feasibility. For that, we set \((\Delta x_1,\Delta x_2)=(1,0)\) and then \((\Delta x_1,\Delta x_2)=(0,1)\) (don’t read too much into the value \(\Delta x=1\); it is scaled out later when multiplied by \(\lambda\), which has \(\Delta x\) in its denominator). Solving for the former,

\[\begin{split} \begin{pmatrix} 1 & 2 & 1 & 0 \\ 4 & 3 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \\ \Delta x_3 \\ \Delta x_4 \end{pmatrix}=0\Rightarrow \Delta \mathbf{x}^T=(1,0,-1,-4), \end{split}\]

and for the latter we find that

\[\begin{split} \begin{pmatrix} 1 & 2 & 1 & 0 \\ 4 & 3 & 0 & 1 \end{pmatrix} \begin{pmatrix} 0 \\ 1 \\ \Delta x_3 \\ \Delta x_4 \end{pmatrix}=0\Rightarrow \Delta \mathbf{x}^T=(0,1,-2,-3). \end{split}\]

The corresponding changes to the objective, \(\mathbf c^T \Delta \mathbf{x}\), yield values of 40 and 50, which leads us to prefer the higher value of 50 that goes with \(\Delta \mathbf{x}^T=(0,1,-2,-3)\).

\[\begin{split} \begin{array}{lccccl} & x_1 & x_2 & x_3 & x_4 &\\ \mathbf c & 40 & 50 & 0 & 0 & \mathbf b\\ \mathbf A & 1 & 2 & 1 & 0 & 40\\ & 4 & 3 & 0 & 1 & 120\\ t=0 & N & N & B & B & \\ \mathbf{x}^{(0)} & 0 & 0 & 40 & 120 & \text{Current objective}: \mathbf c^T \mathbf x^{(0)}=0\\ \Delta\mathbf x ~\text{for}~ x_1 & 1 & 0 & -1 & -4 & \text{Improve direction}: \mathbf c^T \Delta\mathbf x=40\\ \Delta\mathbf x ~\text{for}~ x_2 & 0 & 1 & -2 & -3 & \text{Improve direction}: \mathbf c^T \Delta\mathbf x=\boxed{50}\\ \end{array} \end{split}\]
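
This direction computation can be scripted: a minimal numpy sketch (the helper improve_direction is our own) that fixes the entering variable at 1 and solves \(\mathbf A\,\Delta\mathbf x=0\) for the basic components:

import numpy as np

A = np.array([[1.0, 2, 1, 0],
              [4.0, 3, 0, 1]])
c = np.array([40.0, 50, 0, 0])

def improve_direction(A, basic, entering):
    """Solve A @ dx = 0 with dx[entering] = 1 and dx = 0 on the
    remaining nonbasic variables."""
    dx = np.zeros(A.shape[1])
    dx[entering] = 1.0
    dx[basic] = np.linalg.solve(A[:, basic], -A[:, entering])
    return dx

basic = [2, 3]                     # x3, x4 are basic at t=0
for j in (0, 1):                   # candidate entering variables x1, x2
    dx = improve_direction(A, basic, j)
    print(f'x{j+1}:', dx, 'c.dx =', c @ dx)
# x1: [ 1.  0. -1. -4.] c.dx = 40.0
# x2: [ 0.  1. -2. -3.] c.dx = 50.0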

Now, we need to choose the improve step, \(\lambda\), such that feasibility is maintained.

The improved solution must maintain the non-negativity constraint

\[ \mathbf{x}^{(t+1)}=\mathbf{x}^{(t)}+\lambda \Delta \mathbf{x}\ge 0. \]

Hence,

\[ \lambda \Delta x_j \ge -x_j^{(t)}\quad\text{for every component } j. \]

Components with \(\Delta x_j\ge 0\) impose no limit on \(\lambda\ge0\), while each component with \(\Delta x_j<0\) caps the step at \(\lambda\le -x_j^{(t)}/\Delta x_j\). Since \(\lambda\) must be non-negative and small enough to keep the solution inside the feasible region, we take

\[ \lambda=\min\bigg\{-\frac{x_j}{\Delta x_j}: \Delta x_j< 0\bigg\} \]
\[\begin{split} \begin{array}{lccccl} & x_1 & x_2 & x_3 & x_4 &\\ \mathbf c & 40 & 50 & 0 & 0 & \mathbf b\\ \mathbf A & 1 & 2 & 1 & 0 & 40\\ & 4 & 3 & 0 & 1 & 120\\ t=0 & N & N & B & B & \\ \mathbf{x}^{(0)} & 0 & 0 & 40 & 120 & \text{Current objective}: \mathbf c^T \mathbf x^{(0)}=0\\ \Delta\mathbf x ~\text{for}~ x_1 & 1 & 0 & -1 & -4 & \text{Improve direction}: \mathbf c^T \Delta\mathbf x=40\\ \Delta\mathbf x ~\text{for}~ x_2 & 0 & 1 & -2 & -3 & \text{Improve direction}: \mathbf c^T \Delta\mathbf x=\boxed{50}\\ \lambda & & & \boxed{20} & 40 & \text{Improve step}: \lambda=\min\{-x_j/\Delta x_j: \Delta x_j< 0\}\\ \end{array} \end{split}\]

Hence,

\[ \mathbf{x}^{(1)}=(0,0,40,120)^T+20\times (0,1,-2,-3)^T=(0,20,0,60)^T \]

with the objective \(\mathbf c^T \mathbf x^{(1)}=1000\).
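
The ratio test and update take only a few numpy lines (continuing the sketch above with the document’s data):

import numpy as np

x = np.array([0.0, 0, 40, 120])    # current solution x^(0)
dx = np.array([0.0, 1, -2, -3])    # chosen direction (x2 enters)
neg = dx < 0
lam = np.min(-x[neg] / dx[neg])    # ratio test over Delta x_j < 0
print(lam, x + lam * dx)           # 20.0 [ 0. 20.  0. 60.]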

The process then continues with \(x_2\) replacing \(x_3\) as a basic variable.

Now we look for a further improvement by solving again for the improve direction and step. We solve \(\Delta \mathbf x\) for the nonbasic variables \(x_1\) and \(x_3\), the candidates to enter the basis. This is done as before; the best improve direction is \(\Delta\mathbf x\) for \(x_1\), the only one with a positive change to the objective (15). The minimal step turns out to be \(\lambda=24\), and hence

\[ \mathbf{x}^{(2)}=(0,20,0,60)^T+24(1,-1/2,0,-5/2)^T=(24,8,0,0)^T \]

with the objective \(\mathbf c^T \mathbf x^{(2)}=1360\).

\[\begin{split} \begin{array}{lccccl} & x_1 & x_2 & x_3 & x_4 &\\ \mathbf c & 40 & 50 & 0 & 0 & \mathbf b\\ \mathbf A & 1 & 2 & 1 & 0 & 40\\ & 4 & 3 & 0 & 1 & 120\\ t=0 & N & N & B & B & \\ \mathbf{x}^{(0)} & 0 & 0 & 40 & 120 & \text{Current objective}: \mathbf c^T \mathbf x^{(0)}=0\\ \Delta\mathbf x ~\text{for}~ x_1 & 1 & 0 & -1 & -4 & \text{Improve direction}: \mathbf c^T \Delta\mathbf x=40\\ \Delta\mathbf x ~\text{for}~ x_2 & 0 & 1 & -2 & -3 & \text{Improve direction}: \mathbf c^T \Delta\mathbf x=\boxed{50}\\ \lambda & & & \boxed{20} & 40 & \text{Improve step}: \lambda=\min\{-x_j/\Delta x_j: \Delta x_j< 0\}\\ \end{array} \end{split}\]

\[\begin{split} \begin{array}{lccccl} t=1 & N & B & N & B & \\ \mathbf{x}^{(1)} & 0 & 20 & 0 & 60 & \text{Current objective}: \mathbf c^T \mathbf x^{(1)}=1000\\ \Delta\mathbf x ~\text{for}~ x_1 & 1 & -1/2 & 0 & -5/2 & \text{Improve direction}: \mathbf c^T \Delta\mathbf x=\boxed{15}\\ \Delta\mathbf x ~\text{for}~ x_3 & 0 & -1/2 & 1 & 3/2 & \text{Improve direction}: \mathbf c^T \Delta\mathbf x=-25<0\\ \lambda & & 40 & & \boxed{24} & \text{Improve step}: \lambda=\min\{-x_j/\Delta x_j: \Delta x_j< 0\}\\ \end{array} \end{split}\]

Again, we look for an improved solution by solving for the improve direction and step. We solve \(\Delta \mathbf x\) for \(x_3\) and \(x_4\), the candidates to replace the current basic variables \(x_1\) and \(x_2\). This is done as before, but now both directions decrease the objective (\(\mathbf c^T \Delta\mathbf x=-16\) for \(x_3\) and \(-6\) for \(x_4\)), so no improving direction exists and the search stops. The local optimum is global because the LP model has continuous variables and a convex feasible region. Hence \(\mathbf{x}^*=\mathbf{x}^{(2)}\) is optimal with a maximum objective of \(1360\).

\[\begin{split} \begin{array}{lccccl} & x_1 & x_2 & x_3 & x_4 &\\ \mathbf c & 40 & 50 & 0 & 0 & \mathbf b\\ \mathbf A & 1 & 2 & 1 & 0 & 40\\ & 4 & 3 & 0 & 1 & 120\\ t=0 & N & N & B & B & \\ \mathbf{x}^{(0)} & 0 & 0 & 40 & 120 & \text{Current objective}: \mathbf c^T \mathbf x^{(0)}=0\\ \Delta\mathbf x ~\text{for}~ x_1 & 1 & 0 & -1 & -4 & \text{Improve direction}: \mathbf c^T \Delta\mathbf x=40\\ \Delta\mathbf x ~\text{for}~ x_2 & 0 & 1 & -2 & -3 & \text{Improve direction}: \mathbf c^T \Delta\mathbf x=\boxed{50}\\ \lambda & & & \boxed{20} & 40 & \text{Improve step}: \lambda=\min\{-x_j/\Delta x_j: \Delta x_j< 0\}\\ \end{array} \end{split}\]

\[\begin{split} \begin{array}{lccccl} t=1 & N & B & N & B & \\ \mathbf{x}^{(1)} & 0 & 20 & 0 & 60 & \text{Current objective}: \mathbf c^T \mathbf x^{(1)}=1000\\ \Delta\mathbf x ~\text{for}~ x_1 & 1 & -1/2 & 0 & -5/2 & \text{Improve direction}: \mathbf c^T \Delta\mathbf x=\boxed{15}\\ \Delta\mathbf x ~\text{for}~ x_3 & 0 & -1/2 & 1 & 3/2 & \text{Improve direction}: \mathbf c^T \Delta\mathbf x=-25<0\\ \lambda & & 40 & & \boxed{24} & \text{Improve step}: \lambda=\min\{-x_j/\Delta x_j: \Delta x_j< 0\}\\ \end{array} \end{split}\]

\[\begin{split} \begin{array}{lccccl} t=2 & B & B & N & N & \\ \boxed{\mathbf{x}^{(2)}} & 24 & 8 & 0 & 0 & \text{Current objective}: \mathbf c^T \mathbf x^{(2)}=1360\\ \Delta\mathbf x ~\text{for}~ x_3 & 3/5 & -4/5 & 1 & 0 & \text{Improve direction}: \mathbf c^T \Delta\mathbf x=-16<0\\ \Delta\mathbf x ~\text{for}~ x_4 & -2/5 & 1/5 & 0 & 1 & \text{Improve direction}: \mathbf c^T \Delta\mathbf x=-6<0\\ \lambda & & & & & \text{Improve step: none, no improving direction exists}\\ \end{array} \end{split}\]
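
Putting the pieces together, here is a compact sketch of the full improve-direction/ratio-test loop, our own textbook implementation of the procedure above (not Gurobi's simplex, and ignoring degeneracy and unboundedness):

import numpy as np

def simplex(A, b, c, basic):
    """Textbook simplex: repeatedly pick the entering variable with the
    best improve direction, take the ratio-test step, and swap the basis."""
    A, b, c = np.asarray(A, float), np.asarray(b, float), np.asarray(c, float)
    n = A.shape[1]
    x = np.zeros(n)
    x[basic] = np.linalg.solve(A[:, basic], b)
    while True:
        best, best_dx, entering = 0.0, None, None
        for j in (j for j in range(n) if j not in basic):
            dx = np.zeros(n)
            dx[j] = 1.0
            dx[basic] = np.linalg.solve(A[:, basic], -A[:, j])
            if c @ dx > best:                    # improving direction found
                best, best_dx, entering = c @ dx, dx, j
        if best_dx is None:                      # no improving direction: optimal
            return x, c @ x
        neg = best_dx < 0
        ratios = -x[neg] / best_dx[neg]          # ratio test (assumes boundedness)
        leaving = np.where(neg)[0][ratios.argmin()]
        x = x + ratios.min() * best_dx
        basic[basic.index(leaving)] = entering   # swap leaving for entering

A = [[1, 2, 1, 0], [4, 3, 0, 1]]
b = [40, 120]
c = [40, 50, 0, 0]
print(simplex(A, b, c, basic=[2, 3]))   # (array([24., 8., 0., 0.]), 1360.0)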