Stat 804
Lecture 12 Notes

Large Sample Theory for Conditional Likelihood:

We have data $X=(Y,Z)$ and study the conditional likelihood, score, Fisher information and mle: $\ell_{Y\vert Z}(\theta)$, $U_{Y\vert Z}(\theta)$, ${\cal I}_{Y\vert Z}(\theta)$ and $\hat\theta$. In general, standard maximum likelihood theory may be expected to apply to these conditional objects:

  1. $P_{\theta_0}(\ell_{Y\vert Z}(\theta_0) > \ell_{Y\vert Z}(\theta)) \to 1$ for each fixed $\theta \neq \theta_0$ as the ``sample size'' (often measured by the Fisher information) tends to infinity.

  2. ${\rm E}_\theta\left[ U_{Y\vert Z}(\theta)\vert Z\right]=0$.

  3. $\hat\theta$ is consistent (converges to the true value as the Fisher information tends to infinity).

  4. The usual Bartlett identities hold. For example:

    $\displaystyle {\cal I}_{Y\vert Z}(\theta) \equiv {\rm Var}\left[ U_{Y\vert Z}(\theta)\,\Big\vert\, Z\right] = -{\rm E}_\theta\left[\frac{\partial}{\partial\theta} U_{Y\vert Z}(\theta)\,\Big\vert\, Z\right]$

  5. The error in the mle has approximately the form

    $\displaystyle \hat\theta - \theta \approx \left({\cal I}_{Y\vert Z}(\theta)\right)^{-1} U_{Y\vert Z}(\theta)$

  6. The mle is approximately normal:

    $\displaystyle \left({\cal I}_{Y\vert Z}(\theta)\right)^{1/2} \left(\hat\theta - \theta\right) \approx MVN(0,I)$

    (where $I$ is the identity matrix).

  7. The conditional Fisher information can be estimated by the observed information:

    $\displaystyle \left({\cal I}_{Y\vert Z}(\theta)\right)^{-1}\left( - \frac{\partial}{\partial\theta} U_{Y\vert Z}(\hat\theta)\right) \to I$

    (the convergence being in probability).

  8. The log-likelihood ratio is approximately $ \chi^2$:

    $\displaystyle 2\left( \ell_{Y\vert Z}(\hat\theta) - \ell_{Y\vert Z}(\theta_0)\right) \Rightarrow \chi_p^2$

    (where $p$ is the dimension of $\theta$).

In the previous lecture I showed you 2) and 4) in this list. Today we look at 5), 6) and 7) in the context of the $AR(1)$ model $X_t=\rho X_{t-1} + \epsilon_t$; the conditional likelihood calculations for that model are sketched below, followed by a small simulation check.
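To fix notation for that example, here is a sketch assuming Gaussian errors $\epsilon_t \sim N(0,\sigma^2)$ with $\sigma^2$ treated as known (an assumption made here for simplicity) and conditioning on $Z=X_0$, so that $Y=(X_1,\ldots,X_T)$:

    $\displaystyle \ell_{Y\vert Z}(\rho) = -\frac{1}{2\sigma^2}\sum_{t=1}^T\left(X_t-\rho X_{t-1}\right)^2 + {\rm const}, \qquad U_{Y\vert Z}(\rho) = \frac{1}{\sigma^2}\sum_{t=1}^T X_{t-1}\left(X_t-\rho X_{t-1}\right) .$

Setting the score to zero gives

    $\displaystyle \hat\rho = \frac{\sum_{t=1}^T X_{t-1}X_t}{\sum_{t=1}^T X_{t-1}^2}, \qquad -\frac{\partial}{\partial\rho} U_{Y\vert Z}(\rho) = \frac{1}{\sigma^2}\sum_{t=1}^T X_{t-1}^2 ,$

and ${\cal I}_{Y\vert Z}(\rho)$ is the conditional expectation given $X_0$ of this last quantity, so the observed information appearing in 7) is simply $\sum_{t=1}^T X_{t-1}^2/\sigma^2$.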


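As a quick numerical check of 5), 6), 7) and 8) for this $AR(1)$ model, here is a minimal simulation sketch (not part of the original notes; the values $\rho=0.5$, $\sigma=1$, $T=200$ and the seed are arbitrary illustrative choices):

  import numpy as np

  rng = np.random.default_rng(804)           # seed chosen arbitrarily, for reproducibility
  rho, sigma, T, nrep = 0.5, 1.0, 200, 2000  # illustrative values only

  z = np.empty(nrep)    # standardized errors I^{1/2}(rho.hat - rho), point 6)
  lrt = np.empty(nrep)  # likelihood ratio statistics, point 8)
  for r in range(nrep):
      x = np.empty(T + 1)
      x[0] = rng.normal(0.0, sigma / np.sqrt(1.0 - rho**2))  # stationary start for X_0 = Z
      for t in range(1, T + 1):
          x[t] = rho * x[t - 1] + sigma * rng.normal()
      xlag, xnow = x[:-1], x[1:]
      rho_hat = np.sum(xlag * xnow) / np.sum(xlag**2)  # conditional mle of rho
      obs_info = np.sum(xlag**2) / sigma**2            # observed information -dU/drho
      z[r] = np.sqrt(obs_info) * (rho_hat - rho)       # approximately N(0,1) by 5), 6), 7)
      # 2(l(rho.hat) - l(rho)) for Gaussian errors reduces to a difference of residual sums of squares
      lrt[r] = (np.sum((xnow - rho * xlag)**2) - np.sum((xnow - rho_hat * xlag)**2)) / sigma**2

  print("standardized errors: mean", round(z.mean(), 3), "sd", round(z.std(), 3))  # expect about 0 and 1
  print("LRT statistics: mean", round(lrt.mean(), 3))                              # expect about 1 (chi^2_1)

Each standardized error uses the observed information $\sum X_{t-1}^2/\sigma^2$ in place of ${\cal I}_{Y\vert Z}$, which is legitimate by 7); the printed summaries should be close to 0, 1 and 1, reflecting the $N(0,1)$ approximation in 6) and the $\chi^2_1$ approximation in 8).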
Richard Lockhart
2001-09-30