Time Series Analysis and Control Examples

Minimum AIC Procedure
The AIC statistic is widely used to select the
best model among alternative parametric models.
The minimum AIC model selection procedure can be interpreted
as a maximization of the expected entropy (Akaike 1981).
The entropy of a true probability density function (PDF) \varphi with
respect to the fitted PDF f is written as

B(\varphi, f) = -I(\varphi, f)

where I(\varphi, f) is a Kullback-Leibler information measure, which is
defined as

I(\varphi, f) = \int \log \left[ \frac{\varphi(z)}{f(z)} \right] \varphi(z) \, dz
where the random variable Z is assumed to be continuous.
Therefore,

B(\varphi, f) = E_Z \log f(Z) - E_Z \log \varphi(Z)

where B(\varphi, f) \leq 0 and E_Z denotes the expectation with respect to
the random variable Z. B(\varphi, f) = 0 if and only if \varphi = f (a.s.).
The larger the quantity E_Z \log f(Z), the closer the function f is to the
true PDF \varphi.
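As a numerical illustration (not part of the original discussion), the following Python sketch, which assumes NumPy and SciPy are available, evaluates the Kullback-Leibler information I(\varphi, f) by numerical integration for two normal densities; the entropy B(\varphi, f) = -I(\varphi, f) is zero only when the fitted density coincides with the true one.

```python
# Numerical sketch of the Kullback-Leibler information I(phi, f)
# for two univariate normal densities (illustration only).
import numpy as np
from scipy import stats
from scipy.integrate import quad

def kl_information(phi, f):
    """I(phi, f) = integral of log(phi(z) / f(z)) * phi(z) dz."""
    def integrand(z):
        pz = phi.pdf(z)
        return 0.0 if pz == 0.0 else np.log(pz / f.pdf(z)) * pz
    value, _ = quad(integrand, -np.inf, np.inf)
    return value

phi = stats.norm(loc=0.0, scale=1.0)      # true PDF
f_equal = stats.norm(loc=0.0, scale=1.0)  # fitted PDF equal to the truth
f_wrong = stats.norm(loc=1.0, scale=2.0)  # misspecified fitted PDF

print("I(phi, f_equal) =", kl_information(phi, f_equal))  # ~ 0, so B ~ 0
print("I(phi, f_wrong) =", kl_information(phi, f_wrong))  # > 0, so B < 0
```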
Given the data y = (y_1, \ldots, y_T)' that has the same distribution as the
random variable Z, let the likelihood function of the parameter vector
\theta be \prod_{t=1}^{T} f(y_t | \theta). Then the average of the log
likelihood function

\frac{1}{T} \sum_{t=1}^{T} \log f(y_t | \theta)

is an estimate of the expected value of \log f(Z).
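To make this averaging step concrete, the short Python sketch below (an illustration with a hypothetical normal model and simulated data, not taken from the original text) compares the average log likelihood computed from a sample with the analytic value of E_Z \log f(Z | \theta).

```python
# Sketch: the average log likelihood (1/T) sum_t log f(y_t | theta)
# estimates E_Z log f(Z | theta); i.i.d. normal example with theta fixed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu, sigma = 0.0, 1.0                    # parameter vector theta (held fixed)
T = 100_000
y = rng.normal(mu, sigma, size=T)       # data with the same distribution as Z

avg_loglik = stats.norm(mu, sigma).logpdf(y).mean()
exact = -0.5 * np.log(2.0 * np.pi * sigma**2) - 0.5   # E_Z log f(Z | theta)
print(avg_loglik, exact)                # close for large T
```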
Akaike (1981) derived an alternative estimate of E_Z \log f(Z) by using the
Bayesian predictive likelihood. The AIC is the bias-corrected estimate of
T E_Z \log f(Z | \hat{\theta}_y), where \hat{\theta}_y is the maximum
likelihood estimate.
AIC = -2(maximum log likelihood) + 2(number of free parameters)
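This definition can be applied directly once the maximum log likelihood is available. The following Python sketch (an illustration only, with simulated data; it is not SAS code) fits an i.i.d. normal model by maximum likelihood and evaluates AIC from the formula above.

```python
# Sketch: AIC = -2 * (maximum log likelihood) + 2 * (number of free parameters)
# for an i.i.d. normal model fitted by maximum likelihood (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.normal(loc=1.5, scale=2.0, size=200)   # observed data y_1, ..., y_T

mu_hat = y.mean()                 # ML estimate of the mean
sigma_hat = y.std(ddof=0)         # ML estimate of the standard deviation

max_loglik = stats.norm(mu_hat, sigma_hat).logpdf(y).sum()
k = 2                             # free parameters: mu and sigma
aic = -2.0 * max_loglik + 2.0 * k
print("maximum log likelihood:", max_loglik)
print("AIC:", aic)
```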
Let \theta = (\theta_1, \ldots, \theta_K)' be a K \times 1 parameter vector
that is contained in the parameter space \Theta_K. Given the data y, the log
likelihood function is

\ell(\theta) = \sum_{t=1}^{T} \log f(y_t | \theta)
Suppose the probability density function f(y | \theta) has the true PDF
f(y | \theta^0), where the true parameter vector \theta^0 is contained in
\Theta_K. Let \hat{\theta}_y be a maximum likelihood estimate. The maximum
of the log likelihood function is denoted as
\ell(\hat{\theta}_y) = \max_{\theta \in \Theta_K} \ell(\theta). The expected
log likelihood function is defined by

\ell^*(\theta) = T E_Z \log f(Z | \theta)
The Taylor series expansion of the expected log likelihood function around
the true parameter \theta^0 gives the following asymptotic relationship:

\ell^*(\theta) \simeq \ell^*(\theta^0)
  + T (\theta - \theta^0)' E_Z \frac{\partial \log f(Z | \theta^0)}{\partial \theta}
  - \frac{T}{2} (\theta - \theta^0)' I(\theta^0) (\theta - \theta^0)

where I(\theta^0) is the information matrix and \simeq stands for asymptotic
equality. Note that E_Z \frac{\partial \log f(Z | \theta^0)}{\partial \theta} = 0
since \ell^*(\theta) is maximized at \theta = \theta^0.
By substituting \hat{\theta}_y, the expected log likelihood function can be
written as

\ell^*(\hat{\theta}_y) \simeq \ell^*(\theta^0)
  - \frac{T}{2} (\hat{\theta}_y - \theta^0)' I(\theta^0) (\hat{\theta}_y - \theta^0)
The maximum likelihood estimator is asymptotically normally distributed
under the regularity conditions:

\sqrt{T} I(\theta^0)^{1/2} (\hat{\theta}_y - \theta^0) \to N(0, I_K)

Therefore,

T (\hat{\theta}_y - \theta^0)' I(\theta^0) (\hat{\theta}_y - \theta^0) \sim \chi^2_K   asymptotically
The mean expected log likelihood function,
\bar{\ell}^*(K) = E_Y \ell^*(\hat{\theta}_y), becomes

\bar{\ell}^*(K) \simeq \ell^*(\theta^0) - \frac{K}{2}
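Since the chi-square quadratic form above has mean K, this K/2 correction can be checked by Monte Carlo simulation. The Python sketch below (a hypothetical normal example with K = 2 parameters, not part of the original derivation) averages \ell^*(\hat{\theta}_y) over repeated samples and compares the result with \ell^*(\theta^0) - K/2.

```python
# Monte Carlo sketch of  E_Y l*(theta_hat) ~ l*(theta0) - K/2
# for an i.i.d. normal model with theta = (mu, sigma^2), so K = 2.
import numpy as np

rng = np.random.default_rng(2)
mu0, var0 = 0.0, 1.0              # true parameter theta0
T, K, reps = 200, 2, 10_000

def expected_loglik(mu, var):
    """l*(theta) = T * E_Z log f(Z | theta) when Z ~ N(mu0, var0)."""
    return T * (-0.5 * np.log(2.0 * np.pi * var)
                - (var0 + (mu0 - mu) ** 2) / (2.0 * var))

vals = np.empty(reps)
for r in range(reps):
    y = rng.normal(mu0, np.sqrt(var0), size=T)
    vals[r] = expected_loglik(y.mean(), y.var(ddof=0))  # l*(theta_hat)

print("average of l*(theta_hat):", vals.mean())
print("l*(theta0) - K/2        :", expected_loglik(mu0, var0) - K / 2)
```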

When the Taylor series expansion of the log likelihood function around
\hat{\theta}_y is used, the log likelihood function \ell(\theta) is written

\ell(\theta) \simeq \ell(\hat{\theta}_y)
  + (\theta - \hat{\theta}_y)' \left. \frac{\partial \ell(\theta)}{\partial \theta} \right|_{\theta = \hat{\theta}_y}
  + \frac{1}{2} (\theta - \hat{\theta}_y)' \left. \frac{\partial^2 \ell(\theta)}{\partial \theta \, \partial \theta'} \right|_{\theta = \hat{\theta}_y} (\theta - \hat{\theta}_y)

Since \ell(\hat{\theta}_y) is the maximum of the log likelihood function,
\left. \frac{\partial \ell(\theta)}{\partial \theta} \right|_{\theta = \hat{\theta}_y} = 0.
Note that

\mathrm{plim} \left[ -\frac{1}{T} \left. \frac{\partial^2 \ell(\theta)}{\partial \theta \, \partial \theta'} \right|_{\theta = \hat{\theta}_y} \right] = I(\theta^0)

if the maximum likelihood estimator \hat{\theta}_y is a consistent estimator
of \theta^0.
Replacing \theta with the true parameter \theta^0 and taking expectations
with respect to the random variable Y,

E_Y \ell(\theta^0) \simeq E_Y \ell(\hat{\theta}_y) - \frac{K}{2}
Consider the following relationship:

\ell^*(\theta^0) = T E_Z \log f(Z | \theta^0)
  = E_Y \sum_{t=1}^{T} \log f(Y_t | \theta^0) = E_Y \ell(\theta^0)
From the previous derivation,

\bar{\ell}^*(K) \simeq E_Y \ell(\theta^0) - \frac{K}{2}

Therefore,

\bar{\ell}^*(K) \simeq E_Y \ell(\hat{\theta}_y) - K
The natural estimator for E_Y \ell(\hat{\theta}_y) is \ell(\hat{\theta}_y).
Using this estimator, you can write the mean expected log likelihood
function as

\bar{\ell}^*(K) \simeq \ell(\hat{\theta}_y) - K
Consequently, the AIC is defined as an asymptotically unbiased estimator of
-2(mean expected log likelihood):

\mathrm{AIC}(K) = -2 \ell(\hat{\theta}_y) + 2K
In practice, the previous asymptotic result is expected to be valid in
finite samples if the number of free parameters does not exceed
2 \sqrt{T}, and the upper bound of the number of free parameters is T/2.
It is worth noting that the value of the AIC is not meaningful in itself,
since this value is not the Kullback-Leibler information measure.
The difference of AIC values can be used to select the model.
The difference of the two AIC values is
considered insignificant if it is far less than 1.
It is possible to find a better model when the
minimum AIC model contains many free parameters.
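As a usage sketch of this selection rule (an illustration in Python with synthetic data, not SAS code), the following snippet computes AIC for polynomial regression models of increasing order and reports AIC differences from the minimum rather than the raw values.

```python
# Sketch: select among polynomial regression models by minimum AIC.
# Gaussian likelihood, so the maximum log likelihood has a closed form.
import numpy as np

rng = np.random.default_rng(3)
T = 120
x = np.linspace(-2.0, 2.0, T)
y = 1.0 + 0.5 * x - 0.8 * x**2 + rng.normal(scale=0.7, size=T)  # true order 2

def aic_polyfit(x, y, order):
    coefs = np.polyfit(x, y, order)
    resid = y - np.polyval(coefs, x)
    sigma2_hat = np.mean(resid ** 2)               # ML estimate of the variance
    max_loglik = -0.5 * T * (np.log(2.0 * np.pi * sigma2_hat) + 1.0)
    k = order + 2                                  # coefficients plus variance
    return -2.0 * max_loglik + 2.0 * k

aics = {order: aic_polyfit(x, y, order) for order in range(6)}
best = min(aics, key=aics.get)
print("AIC differences from the minimum:")
for order, value in aics.items():
    print(f"  order {order}: {value - aics[best]:8.2f}")
print("minimum AIC order:", best)
```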