The NLP Procedure |
The m linear constraints define a feasible region G in Rn that must contain the point x* that minimizes the objective function. If the feasible region G is empty, no solution to the optimization problem exists.
All optimization techniques in PROC NLP (except those processing nonlinear constraints) are active set methods. The iteration starts with a feasible point x(0), which either is provided by the user or can be computed by the Schittkowski and Stoer (1979) algorithm implemented in PROC NLP. The algorithm then moves from one feasible point x(k-1) to a better feasible point x(k) along a feasible search direction s(k),

    x(k) = x(k-1) + alpha(k) s(k),   alpha(k) > 0
Theoretically, the path of points x(k) never leaves the feasible region G of the optimization problem, but it can hit the boundaries of G. The active set A(k) of point x(k) is defined as the index set of all linear equality constraints and those inequality constraints that are satisfied with equality at x(k). If no constraint is active at x(k), the point is located in the interior of G, and the active set is empty. If the point x(k) in iteration k hits the boundary of inequality constraint i, this constraint becomes active and is added to A(k). Each equality constraint and each active inequality constraint reduces the dimension (degrees of freedom) of the optimization problem.
In practice, the active constraints can be satisfied only with finite precision. The LCEPSILON=r option specifies the range for active and violated linear constraints. If the point x(k) satisfies the condition

    | a_i' x(k) - b_i | <= r (|b_i| + 1)

for constraint i, that constraint is recognized as active at x(k).
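The tolerance test above can be sketched in a few lines. This is an illustrative sketch in plain Python, not PROC NLP code; the function name and default tolerance are assumptions standing in for the LCEPSILON=r behavior described in the text.

```python
# Hypothetical sketch of the active-set test: an inequality constraint
# a_i'x >= b_i is treated as active at x when |a_i'x - b_i| <= r*(|b_i| + 1),
# mirroring the LCEPSILON= tolerance r. Names here are illustrative only.

def active_set(A, b, x, r=1e-8):
    """Return indices of constraints a_i'x >= b_i that are active at x."""
    active = []
    for i, (a_i, b_i) in enumerate(zip(A, b)):
        resid = sum(a * xj for a, xj in zip(a_i, x)) - b_i
        if abs(resid) <= r * (abs(b_i) + 1.0):
            active.append(i)
    return active

# Two constraints in R^2: x1 >= 0 and x1 + x2 >= 1.
A = [[1.0, 0.0], [1.0, 1.0]]
b = [0.0, 1.0]
print(active_set(A, b, [0.0, 1.0]))   # both hold with equality -> [0, 1]
print(active_set(A, b, [0.5, 0.5]))   # only x1 + x2 = 1 is tight -> [1]
```

Scaling the tolerance by |b_i| + 1 makes the test meaningful for both small and large right-hand sides.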
If no improvement in the value of the objective function can be expected by moving from an active constraint back into the interior of the feasible region, the inequality constraint is used as an equality constraint in the next iteration; that is, the active set A(k+1) still contains constraint i. Otherwise the active inequality constraint is released, which increases the dimension of the optimization problem in the next iteration.
A serious numerical problem can arise when some of the active constraints become (nearly) linearly dependent. Linearly dependent equality constraints are removed before entering the optimization. You can use the LCSINGULAR= option to specify a criterion r used in the update of the QR decomposition that decides whether an active constraint is linearly dependent relative to a set of other active constraints.
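The kind of dependence test sketched here is what an LCSINGULAR=-style criterion performs inside the QR update. This is a minimal illustration in plain Python; the function name, tolerance handling, and the requirement that earlier rows be mutually orthogonal are assumptions, not PROC NLP internals.

```python
# Illustrative only: a dependent constraint row projects to a (nearly)
# zero residual against the rows already in the active set. The earlier
# rows are assumed mutually orthogonal, as a QR update would maintain.

def is_dependent(a_new, rows, r=1e-8):
    """True if a_new is (nearly) a linear combination of the given rows."""
    e = list(a_new)
    for q in rows:                          # project out each orthogonal row
        norm2 = sum(qj * qj for qj in q)
        coef = sum(qj * ej for qj, ej in zip(q, e)) / norm2
        e = [ej - coef * qj for ej, qj in zip(e, q)]
    resid = sum(ej * ej for ej in e) ** 0.5
    scale = sum(aj * aj for aj in a_new) ** 0.5
    return resid <= r * scale               # relative criterion, like r

rows = [[1.0, 0.0], [0.0, 1.0]]
print(is_dependent([2.0, 2.0], rows))           # -> True: 2*e1 + 2*e2
print(is_dependent([1.0, 0.0], [[0.0, 1.0]]))   # -> False: independent
```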
If the final parameter set x* is subjected to nact linear equality or active inequality constraints, let A denote the nact × n matrix whose rows are those constraints. The QR decomposition of the n × nact matrix A' of the linear constraints is computed by

    A' = QR,

where Q is an n × n orthogonal matrix and R is an n × nact upper triangular matrix. The n columns of matrix Q can be separated into two matrices, Q = [Y,Z], where Y contains the first nact orthogonal columns of Q and Z contains the last n - nact orthogonal columns of Q. The n × (n - nact) column-orthogonal matrix Z is also called the nullspace matrix of the active linear constraints A'. The n - nact columns of Z form a basis orthogonal to the rows of the nact × n matrix A.
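The nullspace matrix Z can be illustrated for a single active constraint. The sketch below uses plain-Python Gram-Schmidt rather than the QR machinery the procedure actually uses; the function name is an assumption for illustration.

```python
# Illustrative sketch: build a nullspace basis Z for one active constraint
# row a, so that a'z = 0 for every column z of Z, echoing the Q = [Y, Z]
# split described above. Vectors are mutually orthogonal (not normalized).

def nullspace_basis(a):
    """Gram-Schmidt: orthogonal vectors spanning {z : a'z = 0}."""
    n = len(a)
    basis = [list(a)]                       # a itself plays the role of Y
    Z = []
    for k in range(n):
        e = [1.0 if j == k else 0.0 for j in range(n)]
        for q in basis:                     # project out existing directions
            norm2 = sum(qj * qj for qj in q)
            coef = sum(qj * ej for qj, ej in zip(q, e)) / norm2
            e = [ej - coef * qj for ej, qj in zip(e, q)]
        if sum(ej * ej for ej in e) > 1e-12:
            basis.append(e)
            Z.append(e)
    return Z                                # n - 1 vectors for one constraint

Z = nullspace_basis([1.0, 1.0, 0.0])        # active constraint x1 + x2 = b
for z in Z:
    assert abs(z[0] + z[1]) < 1e-10         # each column satisfies a'z = 0
print(len(Z))                               # -> 2, i.e. n - nact = 3 - 1
```

Each active constraint removes one degree of freedom, which is why Z has n - nact columns.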
At the end of the iteration process, PROC NLP can display the projected gradient

    gZ = Z' g,

where g is the gradient of the objective function at x*.
Those elements of the nact vector of first-order estimates of Lagrange multipliers,

    lambda = (A A')^(-1) A g,

that correspond to active inequality constraints indicate whether an improvement of the objective function can be obtained by releasing the active constraint.
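For a single active constraint these quantities reduce to simple dot products, which the following plain-Python sketch illustrates. The variable names are assumptions; for one constraint row a, the multiplier estimate simplifies to lambda = a'g / a'a.

```python
# Hypothetical one-constraint illustration: projected gradient gZ = Z'g
# and the first-order Lagrange multiplier estimate at a boundary point.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

a = [1.0, 1.0]          # active constraint x1 + x2 >= 1, tight at x
Z = [[1.0, -1.0]]       # one nullspace column with a'z = 0
g = [2.0, 2.0]          # gradient of the objective at x

gZ = [dot(z, g) for z in Z]
lam = dot(a, g) / dot(a, a)
print(gZ)               # -> [0.0]: no descent left along the constraint
print(lam)              # -> 2.0: positive sign suggests releasing this
                        #    constraint would not improve the objective
```

A zero projected gradient together with multipliers of the correct sign is the usual first-order indication that no active constraint is worth releasing.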
Copyright © 1999 by SAS Institute Inc., Cary, NC, USA. All rights reserved.