NLPLM Call
calculates Levenberg-Marquardt least squares
- CALL NLPLM( rc, xr, "fun", x0 <, opt, blc, tc, par,
  "ptit", "jac" >);
See "Nonlinear Optimization and Related Subroutines" for a listing of all NLP subroutines.
See Chapter 11, "Nonlinear Optimization Examples," for a description of
the inputs to and outputs of all NLP subroutines.
The NLPLM subroutine uses the Levenberg-Marquardt method, which is
an efficient modification of the trust-region method for nonlinear
least-squares problems and is implemented as in Moré (1978).
This is the recommended algorithm for
small to medium least-squares problems.
Large least-squares problems can often
be processed more efficiently with other
subroutines, such as NLPCG and NLPQN.
In each iteration, the NLPLM subroutine solves a
quadratically-constrained quadratic minimization
problem that restricts the step to the boundary or
interior of an n-dimensional elliptical trust region.
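In a standard trust-region formulation (the gradient g_k, scaling matrix D_k, and radius Δ_k are notation introduced here for illustration, not quantities defined in this section), the subproblem at iteration k can be sketched as

   \min_{s}\; g_k^T s + \tfrac{1}{2}\, s^T \left(J_k^T J_k\right) s
   \quad \text{subject to} \quad \lVert D_k\, s \rVert \le \Delta_k ,

where g_k = J_k^T f(x_k) is the gradient of the least-squares objective at the current iterate x_k.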
The m functions f1(x), ..., fm(x) are computed by
the module specified with the "fun" module argument.
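As a minimal sketch (based on the unconstrained Rosenbrock least-squares problem whose output appears in Figure 17.5 below; the module name F_ROSEN is used here only for illustration), such a module takes the parameter vector and returns a row vector of the m function values:

   proc iml;
   start F_ROSEN(x);
      /* two residuals of the Rosenbrock problem */
      y = j(1, 2, 0.);
      y[1] = 10. * (x[2] - x[1] * x[1]);
      y[2] = 1. - x[1];
      return(y);
   finish F_ROSEN;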
The m×n Jacobian matrix, J, contains
the first-order derivatives of the m functions
with respect to the n parameters, as follows:

   J = (∂fi/∂xj),  for i = 1, ..., m and j = 1, ..., n

You can specify J with the "jac"
module argument; otherwise, the subroutine will
compute it with finite difference approximations.
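If analytic derivatives are available, the module named by the "jac" argument returns this m×n matrix. Continuing the earlier sketch (the module name J_ROSEN is illustrative), the Jacobian of the two Rosenbrock residuals is:

   start J_ROSEN(x);
      /* row i holds the derivatives of f_i with respect to x1 and x2 */
      jac = j(2, 2, 0.);
      jac[1, 1] = -20. * x[1];   /* d f1 / d x1 */
      jac[1, 2] =  10.;          /* d f1 / d x2 */
      jac[2, 1] =  -1.;          /* d f2 / d x1 */
      jac[2, 2] =   0.;          /* d f2 / d x2 */
      return(jac);
   finish J_ROSEN;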
In each iteration, the subroutine computes the
crossproduct of the Jacobian matrix, J^T J, to be used as an approximate Hessian.
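In IML notation this crossproduct is simply J` * J; a sketch (the subroutine forms it internally, so you would not normally compute it yourself):

   J = J_ROSEN({-1.2 1.});   /* m x n Jacobian from the sketch above      */
   approxHess = J` * J;      /* n x n crossproduct, the approximate Hessian */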
Note: In least-squares subroutines, you must set the first
element of the opt vector to m, the number of functions.
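Continuing the sketch, a call consistent with the Rosenbrock output shown below sets the first element of opt to 2, the number of functions, and lets the subroutine approximate the Jacobian by finite differences; the second element of opt is assumed here to control the amount of printed output:

   x0  = {-1.2 1.};     /* starting point                            */
   opt = {2 2};         /* opt[1] = m = 2 functions; opt[2] = print  */
   call nlplm(rc, xres, "F_ROSEN", x0, opt);
   print rc xres;       /* return code and final parameter estimates */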
In addition to the standard iteration history, the
NLPLM subroutine also prints the following information:
- Under the heading Iter, an asterisk (*)
printed after the iteration number indicates that
the computed Hessian approximation was singular
and had to be ridged with a positive value.
- The heading lambda represents the Lagrange multiplier, λ (a sketch of how λ enters the step computation follows this list).
This has a value of zero when the optimum of the quadratic
function approximation is inside the trust region, in
which case a trust-region-scaled Newton step is performed.
It is greater than zero when the optimum is at the
boundary of the trust region, in which case the scaled
Newton step is too long to fit in the trust region
and a quadratically-constrained optimization is done.
Large values indicate optimization difficulties,
and as in Gay (1983), a negative value indicates
the special case of an indefinite Hessian matrix.
- The heading rho refers to ρ, the ratio between the achieved and predicted difference in function values (written out as a formula after this list).
Values that are much smaller than one
indicate optimization difficulties.
Values close to or larger than one indicate
that the trust region radius can be increased.
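For reference, both quantities correspond to standard trust-region formulas; the following is a sketch in the notation of the subproblem above (D, Δ, and the quadratic model Q are notation introduced here, not values printed by the subroutine):

   \left(J^T J + \lambda\, D^T D\right) s = -J^T f(x),
   \qquad \lambda = 0 \ \text{when the unconstrained step satisfies}\ \lVert D s \rVert < \Delta ,

   \rho = \frac{F(x) - F(x + s)}{Q(0) - Q(s)},
   \qquad F(x) = \tfrac{1}{2} \sum_{i=1}^{m} f_i(x)^2 ,

where Q is the quadratic model of the objective F that the step s minimizes over the trust region.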
Figure 17.5 shows the iteration history for
the solution of the unconstrained Rosenbrock problem.
See the section, "Unconstrained Rosenbrock Function",
for the statements that generate this output.
Optimization Start
Parameter Estimates

                                         Gradient
                                        Objective
 N  Parameter        Estimate            Function
 1  X1              -1.200000         -107.799999
 2  X2               1.000000          -44.000000

Value of Objective Function = 12.1
Levenberg-Marquardt Optimization
Scaling Update of Moré (1978)
Gradient Computed by Finite Differences
CRP Jacobian Computed by Finite Differences
Parameter Estimates 2
Functions (Observations) 2
Optimization Start
Active Constraints                    0    Objective Function                12.1
Max Abs Gradient Element    107.7999987    Radius                    2626.5613171
                 Function  Active       Objective    Obj Fun     Max Abs Grad              Actual/Pred
 Iter  Restarts  Calls     Constraints  Function     Change      Element         Lambda    Change Ratio
    1      0        4           0        2.18185      9.9181      17.4704        0.00804      0.964
    2      0        6           0        1.59370      0.5881       3.7015        0.0190       0.988
    3      0        7           0        1.32848      0.2652       7.0843        0.00830      0.678
    4      0        8           0        1.03891      0.2896       6.3092        0.00753      0.593
    5      0        9           0        0.78943      0.2495       7.2617        0.00634      0.486
    6      0       10           0        0.58838      0.2011       7.8837        0.00462      0.393
    7      0       11           0        0.34224      0.2461       6.6815        0.00307      0.524
    8      0       12           0        0.19630      0.1459       8.3857        0.00147      0.469
    9      0       13           0        0.11626      0.0800       9.3086        0.00016      0.409
   10      0       14           0        0.0000396    0.1162       0.1781        0            1.000
   11      0       15           0        2.4652E-30   0.000040     4.44E-14      0            1.000
Optimization Results
Iterations                           11    Function Calls                      16
Jacobian Calls                       12    Active Constraints                   0
Objective Function          2.46519E-30    Max Abs Gradient Element  4.440892E-14
Lambda                                0    Actual Over Pred Change              1
Radius                     0.0178062912
ABSGCONV convergence criterion satisfied.
Optimization Results
Parameter Estimates

                                         Gradient
                                        Objective
 N  Parameter        Estimate            Function
 1  X1               1.000000        -4.44089E-14
 2  X2               1.000000        2.220446E-14

Value of Objective Function = 2.46519E-30
Figure 17.5: Iteration History for the NLPLM Subroutine