Nonlinear Optimization Examples |
The following subroutines are provided for solving nonlinear optimization problems (a minimal calling sketch follows the list):

NLPCG  | Conjugate Gradient Method
NLPDD  | Double Dogleg Method
NLPNMS | Nelder-Mead Simplex Method
NLPNRA | Newton-Raphson Method
NLPNRR | Newton-Raphson Ridge Method
NLPQN  | (Dual) Quasi-Newton Method
NLPQUA | Quadratic Optimization Method
NLPTR  | Trust-Region Method
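For example, the following statements sketch a typical call, minimizing the Rosenbrock function with the NLPTR trust-region subroutine (the module name F_ROSEN, the starting point, and the options vector are illustrative; the first option requests minimization and the second sets the amount of printed output):

   proc iml;
      /* Rosenbrock function, written as half a sum of two squares */
      start F_ROSEN(x);
         y1 = 10. * (x[2] - x[1] * x[1]);
         y2 = 1. - x[1];
         f  = .5 * (y1 * y1 + y2 * y2);
         return(f);
      finish F_ROSEN;
      x0  = {-1.2 1.};   /* starting point                          */
      opt = {0 2};       /* opt[1]=0: minimize; opt[2]: print level */
      call nlptr(rc, xres, "F_ROSEN", x0, opt);
      print rc xres;     /* return code and optimal parameters      */
   quit;

The other subroutines in the list are called in the same pattern: a return code, a result vector, the name of the objective function module, a starting point, and an options vector, followed by optional arguments such as a constraint matrix.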
The following subroutines are provided for solving nonlinear least-squares problems:
NLPLM  | Levenberg-Marquardt Least-Squares Method
NLPHQN | Hybrid Quasi-Newton Least-Squares Methods
A least-squares problem is a special form of minimization problem in which the objective function is defined as a sum of squares of other (nonlinear) functions, f(x) = (1/2){f_1^2(x) + ... + f_m^2(x)}.
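As a minimal sketch of the least-squares form (the module name LS_ROSEN is illustrative), the Rosenbrock function can be minimized with NLPLM by returning the vector of component functions f_1, f_2 rather than their sum of squares; the first element of the options vector must then specify the number m of functions:

   proc iml;
      /* return the m = 2 component functions, not their sum of squares */
      start LS_ROSEN(x);
         y = j(1, 2, 0.);
         y[1] = 10. * (x[2] - x[1] * x[1]);
         y[2] = 1. - x[1];
         return(y);
      finish LS_ROSEN;
      x0  = {-1.2 1.};
      opt = {2 2};       /* opt[1]=2: m = 2 functions; opt[2]: print level */
      call nlplm(rc, xres, "LS_ROSEN", x0, opt);
      print rc xres;
   quit;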
The following subroutines are provided for the related problems of computing finite difference approximations for first- and second-order derivatives and of determining a feasible point subject to boundary and linear constraints:
NLPFDD | Approximate Derivatives by Finite Differences
NLPFEA | Feasible Point Subject to Constraints
Each optimization subroutine works iteratively. If the parameters are subject only to linear constraints, all optimization and least-squares techniques are feasible-point methods; that is, they move from feasible point x^(k) to a better feasible point x^(k+1) by a step in the search direction s^(k), k = 1, 2, 3, .... If you do not provide a feasible starting point x^(0), the optimization methods call the algorithm used in the NLPFEA subroutine, which tries to compute a starting point that is feasible with respect to the boundary and linear constraints.
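For example, the following sketch asks NLPFEA for a point that satisfies the bounds 2 <= x1 <= 50 and -50 <= x2 <= 50, starting from a point that violates them (the bounds and starting point are illustrative; in the constraint matrix, the first row holds lower bounds and the second row upper bounds):

   proc iml;
      blc = { 2. -50.,   /* row 1: lower bounds        */
             50.  50.};  /* row 2: upper bounds        */
      x0  = {1. -100.};  /* infeasible starting point  */
      call nlpfea(xr, x0, blc);
      print xr;          /* a feasible starting point  */
   quit;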
The NLPNMS and NLPQN subroutines permit nonlinear constraints on parameters. For problems with nonlinear constraints, these subroutines do not use a feasible-point method; instead, the algorithms begin with whatever starting point you specify, whether feasible or infeasible.
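A hedged sketch of such a call follows; the toy problem and the module names F_OBJ and C_DISK are illustrative, and it relies on the NLPQN convention that a separate module returns the nonlinear constraint values c(x) >= 0, with element 10 of the options vector giving their number and the NLC= clause naming the module:

   proc iml;
      /* minimize the squared distance to the point (2,1) */
      start F_OBJ(x);
         d1 = x[1] - 2.;
         d2 = x[2] - 1.;
         f  = d1 * d1 + d2 * d2;
         return(f);
      finish F_OBJ;
      /* one nonlinear inequality constraint c(x) >= 0: the unit disk */
      start C_DISK(x);
         c = 1. - x[1]*x[1] - x[2]*x[2];
         return(c);
      finish C_DISK;
      x0  = {0. 0.};          /* feasible here, but need not be      */
      opt = j(1, 10, .);      /* missing values take default options */
      opt[2]  = 2;            /* print level                         */
      opt[10] = 1;            /* number of nonlinear constraints     */
      call nlpqn(rc, xres, "F_OBJ", x0, opt) nlc="C_DISK";
      print rc xres;
   quit;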
Each optimization technique requires a continuous objective function f = f(x), and all optimization subroutines except the NLPNMS subroutine require continuous first-order derivatives of the objective function f. If you do not provide the derivatives of f, they are approximated by finite difference formulas. You can use the NLPFDD subroutine to check the correctness of analytical derivative specifications.
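For example (the module names are illustrative), NLPFDD returns finite difference approximations of the function value, gradient, and Hessian at a point, which can be compared with the output of an analytical gradient module:

   proc iml;
      start F_ROSEN(x);
         y1 = 10. * (x[2] - x[1] * x[1]);
         y2 = 1. - x[1];
         f  = .5 * (y1 * y1 + y2 * y2);
         return(f);
      finish F_ROSEN;
      /* analytical gradient whose correctness is to be checked */
      start G_ROSEN(x);
         g = j(1, 2, 0.);
         g[1] = -200. * x[1] * (x[2] - x[1]*x[1]) - (1. - x[1]);
         g[2] =  100. * (x[2] - x[1]*x[1]);
         return(g);
      finish G_ROSEN;
      x0 = {-1.2 1.};
      call nlpfdd(f, gfd, hfd, "F_ROSEN", x0);  /* finite differences */
      gan = G_ROSEN(x0);                        /* analytical values  */
      print gfd, gan;    /* the two gradients should agree closely    */
   quit;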
Most of the results obtained from the IML procedure optimization and least-squares subroutines can also be obtained by using the NLP procedure in the SAS/OR product.
The advantages of the IML procedure are as follows: