
Lecture 20: Linear Programming III

(This material is not covered in Fleischer. A good reference text is Hillier and Lieberman, ``Introduction to Operations Research'', published by Holden-Day; the fourth edition of this text came out in 1986, and a fifth edition has come out since.)

Post-Optimality Analysis

In linear programming problems, as in most economic problems, the input data are often uncertain. So we haven't finished when we've obtained the optimal solution; we still need to ask, how would this solution change if one of the input parameters to the problem changed slightly?

One change that might occur is a relaxation of one of the constraints. For example, if the constraints are imposed by limited manufacturing capability, it might be possible to add to that capability by investing some additional funds. The solution we have obtained to the original problem already contains an indicator of how much it would be reasonable to invest, in the form of shadow prices.

Shadow Prices

When the simplex method has run to completion, all the coefficients of the non-basic variables in the objective function will be non-negative:

Z + y3 x3 + y4 x4 + y5 x5 = 36

(Since otherwise we could increase the value of the objective function by making one of these non-basic variables basic.) The coefficient yi is called the shadow price of resource i, that is, it is the price it would be worth paying to increase the amount of that resource by one unit.

For example, suppose the first constraint is

x1 - x3 = b1

If the value of b1 were increased by 1 while the value of x1 stayed the same, the value of x3 would become -1 (since it is currently 0). Substituting this into the objective function would increase the value of the objective function by y3.

From this we can also deduce that, if a slack variable does not appear in the final form of the objective function, then the associated constraint is not limiting, and thus it's not worth paying anything for an increased supply of that resource. (This conclusion is only valid for small changes in the available amount of the resource. For large changes, the position of the optimum might shift to another corner point.)
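Shadow prices can be read off directly from the dual values reported by a modern LP solver. The sketch below uses SciPy's linprog with the HiGHS backend (an assumption; any LP package that reports duals would do). The data are a hypothetical product-mix problem in the spirit of Hillier and Lieberman, chosen so that the optimal value is 36, as in the objective row above:

```python
from scipy.optimize import linprog

# Hypothetical product-mix LP: maximize Z = 3*x1 + 5*x2.
# linprog minimizes, so we pass -Z and negate the result at the end.
c = [-3, -5]
A_ub = [[1, 0],   # x1            <= 4
        [0, 2],   # 2*x2          <= 12
        [3, 2]]   # 3*x1 + 2*x2   <= 18
b_ub = [4, 12, 18]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
print(-res.fun)   # 36.0

# The duals of the <= constraints are the shadow prices; the sign is
# flipped because the solver worked on the minimization of -Z.
shadow = -res.ineqlin.marginals   # for these data: 0, 1.5 and 1

# Sanity check: one extra unit of the second resource should raise Z
# by that resource's shadow price.
res2 = linprog(c, A_ub=A_ub, b_ub=[4, 13, 18], method="highs")
print(-res2.fun - (-res.fun))   # ~1.5
```

Note that the first constraint comes back with shadow price 0: it is not binding at the optimum, which is exactly the "not worth paying anything" situation described above.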

Fitting Other Problems to the Simplex Method

In industry, linear programming problems may typically have thousands of constraints and tens of thousands of variables. There are large computer programs dedicated to the efficient solution of these problems. So, if a problem comes along that doesn't quite fit the simplex format, it's usually easier to rearrange it so that it does fit, and can then be solved by one of these programs, than to come up with an original solution method of your own. We examine a few common cases that can be adjusted in this way:

Minimization Problems

In some cases, the objective function is something we want to minimize, rather than maximize. For example, the function could represent the total cost of a project, rather than total profit. Adapting such problems is extremely easy: instead of minimizing Z = c1 x1 + c2 x2,

we maximize

-Z = -c1 x1 - c2 x2.
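The same sign trick works in code. SciPy's linprog (an assumption; any LP package would do) happens to minimize natively, so a maximization problem is handled by flipping the signs in the other direction; the wrapper and its data below are made up for illustration:

```python
from scipy.optimize import linprog

def maximize(c, A_ub, b_ub):
    """Maximize c @ x subject to A_ub @ x <= b_ub, x >= 0, by minimizing
    the negated objective -- the sign-flip trick described in the text."""
    res = linprog([-ci for ci in c], A_ub=A_ub, b_ub=b_ub, method="highs")
    return -res.fun, res.x   # negate back to recover the maximum

# Made-up data for illustration:
z_max, x = maximize([3, 5], [[1, 0], [0, 2], [3, 2]], [4, 12, 18])
print(z_max)   # 36.0
```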

Equality Constraints

It may sometimes happen that one of the constraints imposed by the problem is an equality rather than an inequality. For example, in deciding the mix of fuels to be fed into a fast-breeder nuclear reactor, we might want to insist that all the available plutonium be consumed, since we don't want it to remain available as a potential hazard. This looks as if it should make life easier: after all, we previously had to go to the trouble of introducing slack variables to turn the inequalities into equations, and here we have an equation ready-made for us. Unfortunately, this won't fit into the existing computer programs, because we no longer have an obvious way to get started: we're supposed to start with all the decision variables set to zero, but the equality constraint

xi = bi

will be false if we set xi = 0.

The solution is to introduce an artificial variable, x̄j, and write

xi + x̄j = bi

Now we can start off with xi = 0. But we also have to ensure that, at the end of the solution process, x̄j = 0, otherwise the equality constraint won't be satisfied. We do this by what is called the `big M method': we put the variable x̄j into the objective function with a huge negative coefficient, -M. Now the optimization process will force x̄j to zero.

There's still one problem left, though: we're planning to start off with x̄j as a basic variable, but in standard form the objective function can only contain one basic variable, Z. So we have to substitute for it. We can do this by subtracting M times the i-th constraint from the objective function; this replaces x̄j with an expression involving xi, which starts off as non-basic. So now, at last, we can get started.

If it ever happens that the optimization process ends with an artificial variable still non-zero, this is a sign that the problem has no feasible solution.
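The big M construction can be demonstrated numerically. In the sketch below (hypothetical numbers; in practice linprog accepts equality constraints directly and finds its own starting point, so the artificial variable is kept only to mirror the construction), the artificial variable is penalized with a large M and the optimizer duly drives it to zero:

```python
from scipy.optimize import linprog

# Hypothetical problem: maximize 3*x1 + 5*x2 where the third constraint
# is an equality, 3*x1 + 2*x2 = 18.  Add an artificial variable a >= 0
# and give it a huge penalty M in the (minimized) objective, so that any
# solution with a > 0 is hopelessly expensive.
M = 1e6
c = [-3, -5, M]            # minimize -Z + M*a
A_ub = [[1, 0, 0],         # x1   <= 4
        [0, 2, 0]]         # 2*x2 <= 12
b_ub = [4, 12]
A_eq = [[3, 2, 1]]         # 3*x1 + 2*x2 + a = 18
b_eq = [18]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              method="highs")
x1, x2, a = res.x
print(a)             # ~0: the penalty has forced the artificial variable out
print(3*x1 + 5*x2)   # 36.0
```

Had the optimum come back with a visibly non-zero a, that would have signalled an infeasible problem, as noted above.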

`Greater-than' constraints

Other possible problems involve `greater than' constraints. For example, government regulations or an existing contract may oblige you to produce at least a certain minimum number of one or more products, leading to a constraint inequality of the form:

xi >= bi

We can turn this into an equation by subtracting a non-negative surplus variable, xk:

xi - xk = bi ,   xk >= 0

Again, we have a problem of getting started. If we set the decision variables to zero, the surplus variable will start off negative, violating its non-negativity constraint. The way round this is again to introduce an artificial variable, then use the Big M method to ensure it goes to zero by the time we've finished:

xi - xk + x̄j = bi
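As a sketch (hypothetical numbers), suppose an existing contract imposes the floor x1 >= 3 on an otherwise ordinary product-mix problem. Below, the constraint is written with an explicit surplus variable, xi - xk = bi, exactly as in the text; linprog constructs its own starting point internally, so no artificial variable or big M is needed in the code itself:

```python
from scipy.optimize import linprog

# Maximize 3*x1 + 5*x2 subject to the usual <= constraints plus a
# contract floor x1 >= 3, written with a surplus variable s >= 0 as
# the equation x1 - s = 3.
c = [-3, -5, 0]                 # s carries no cost; linprog minimizes -Z
A_ub = [[1, 0, 0],              # x1          <= 4
        [0, 2, 0],              # 2*x2        <= 12
        [3, 2, 0]]              # 3*x1 + 2*x2 <= 18
b_ub = [4, 12, 18]
A_eq = [[1, 0, -1]]             # x1 - s = 3
b_eq = [3]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              method="highs")
x1, x2, s = res.x
print(-res.fun)   # 31.5: the floor on x1 costs some profit
print(s)          # ~0: the floor is binding, so the surplus vanishes
```

With a looser floor such as x1 >= 1, the unconstrained optimum would still be feasible and the surplus variable would come out positive instead.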






John Jones
Mar 3 10:38:23 PDT 2008