Da Silva Method (Variance-Component Moving Average Model)
Suppose you have a sample of observations at T time
points on each of N cross-sectional units.
The Da Silva method assumes that the observed value of the dependent variable
at the tth time point on the ith cross-sectional unit
can be expressed as

y_{it} = x_{it}'\beta + a_i + b_t + e_{it}, \quad i = 1, \ldots, N; \; t = 1, \ldots, T
where
- $x_{it}' = (x_{it1}, \ldots, x_{itp})$ is a vector of explanatory variables for the tth time point
  and ith cross-sectional unit
- $\beta = (\beta_1, \ldots, \beta_p)'$ is the vector of parameters
- $a_i$ is a time-invariant, cross-sectional unit effect
- $b_t$ is a cross-sectionally invariant time effect
- $e_{it}$ is a residual effect unaccounted for by the explanatory
  variables and the specific time and cross-sectional unit effects
Since the observations are arranged first by cross sections,
then by time periods within cross sections,
these equations can be written in matrix notation as

y = X\beta + (a \otimes 1_T) + (1_N \otimes b) + e
where
- $y = (y_{11}, \ldots, y_{1T}, y_{21}, \ldots, y_{NT})'$
- $X = (x_{11}, \ldots, x_{1T}, x_{21}, \ldots, x_{NT})'$
- $a = (a_1, \ldots, a_N)'$
- $b = (b_1, \ldots, b_T)'$
- $e = (e_{11}, \ldots, e_{1T}, e_{21}, \ldots, e_{NT})'$

Here $1_N$ is an $N \times 1$ vector with all elements equal to 1,
and $\otimes$ denotes the Kronecker product.
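For readers who want to see this layout concretely, here is a minimal NumPy sketch
(not part of the original documentation; the dimensions and random values are hypothetical)
of how the two Kronecker products expand a and b to match the stacked ordering of y:

```python
import numpy as np

# Hypothetical small panel: N = 3 cross-sectional units, T = 4 time points.
N, T = 3, 4
rng = np.random.default_rng(0)

a = rng.normal(size=N)       # cross-sectional effects a_1, ..., a_N
b = rng.normal(size=T)       # time effects b_1, ..., b_T
one_T = np.ones(T)           # 1_T
one_N = np.ones(N)           # 1_N

# With observations stacked by cross section and then by time within cross
# section, a (x) 1_T repeats each a_i for the T rows of unit i, while
# 1_N (x) b tiles (b_1, ..., b_T) across all N units.
unit_effect = np.kron(a, one_T)   # length N*T
time_effect = np.kron(one_N, b)   # length N*T

print(unit_effect.reshape(N, T))  # row i is a_i repeated T times
print(time_effect.reshape(N, T))  # every row equals (b_1, ..., b_T)
```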
It is assumed that

1. $x_{it}$ is a sequence of nonstochastic,
   known $p \times 1$ vectors in $\mathbb{R}^p$ whose
   elements are uniformly bounded in $\mathbb{R}^p$.
   The matrix X has a full column rank p.
2. $\beta$ is a $p \times 1$ constant
   vector of unknown parameters.
3. a is a vector of uncorrelated random variables such that
   $E(a_i) = 0$ and $\mathrm{var}(a_i) = \sigma_a^2$,
   $\sigma_a^2 > 0$, $i = 1, \ldots, N$.
4. b is a vector of uncorrelated random variables such that
   $E(b_t) = 0$ and $\mathrm{var}(b_t) = \sigma_b^2$,
   $\sigma_b^2 > 0$, $t = 1, \ldots, T$.
5. $e_i = (e_{i1}, \ldots, e_{iT})'$
   is a sample of a realization of a finite moving-average time series of
   order $m < T - 1$ for each i; hence,

   e_{it} = \alpha_0 \epsilon_t + \alpha_1 \epsilon_{t-1} + \cdots + \alpha_m \epsilon_{t-m}, \quad t = 1, \ldots, T; \; i = 1, \ldots, N

   where $\alpha_0, \alpha_1, \ldots, \alpha_m$ are
   unknown constants such that $\alpha_0 \neq 0$ and $\alpha_m \neq 0$, and
   $\{\epsilon_j\}_{j=-\infty}^{\infty}$ is a white noise process, that is,
   a sequence of uncorrelated random variables with
   $E(\epsilon_t) = 0$, $E(\epsilon_t^2) = \sigma_\epsilon^2$, and
   $\sigma_\epsilon^2 > 0$ (a simulation sketch follows this list).
6. The sets of random variables
   $\{a_i\}_{i=1}^{N}$, $\{b_t\}_{t=1}^{T}$, and
   $\{e_{it}\}_{t=1}^{T}$ for $i = 1, \ldots, N$ are mutually
   uncorrelated.
7. The random terms have normal distributions:
   $a_i \sim N(0, \sigma_a^2)$, $b_t \sim N(0, \sigma_b^2)$, and
   $\epsilon_{t-k} \sim N(0, \sigma_\epsilon^2)$
   for $i = 1, \ldots, N$; $t = 1, \ldots, T$; $k = 1, \ldots, m$.
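As a purely illustrative picture of assumption 5, the sketch below generates the MA(m)
residual series $e_i$ for each unit from white noise; the order m, the coefficients
$\alpha_k$, and $\sigma_\epsilon$ are hypothetical choices, not values used by the procedure:

```python
import numpy as np

# Hypothetical settings: N = 3 units, T = 10 time points, MA order m = 2.
N, T, m = 3, 10, 2
alpha = np.array([1.0, 0.6, 0.3])   # alpha_0, ..., alpha_m (alpha_0 != 0, alpha_m != 0)
sigma_eps = 1.5                     # standard deviation of the white noise epsilon_t
rng = np.random.default_rng(1)

# e_{it} = alpha_0*eps_t + alpha_1*eps_{t-1} + ... + alpha_m*eps_{t-m}.
# Draw m pre-sample noise values so that e_{i1} is well defined.
e = np.empty((N, T))
for i in range(N):
    eps = rng.normal(scale=sigma_eps, size=T + m)        # eps_{1-m}, ..., eps_T
    for t in range(T):
        e[i, t] = alpha @ eps[t + m - np.arange(m + 1)]  # lags 0, 1, ..., m

# Because the process is MA(m), cov(e_{it}, e_{is}) is zero whenever |t - s| > m.
```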
If assumptions 1-6 are satisfied, then

E(y) = X\beta

and

\mathrm{var}(y) = \sigma_a^2 (I_N \otimes J_T) + \sigma_b^2 (J_N \otimes I_T) + I_N \otimes \Psi_T

where $\Psi_T$ is a $T \times T$ matrix with elements $\psi_{ts}$
as follows:

\psi_{ts} = \mathrm{Cov}(e_{it}, e_{is}) =
\begin{cases} \gamma(|t-s|) & \text{if } |t-s| \le m \\ 0 & \text{if } |t-s| > m \end{cases}

where $\gamma(k) = \sigma_\epsilon^2 \sum_{j=0}^{m-k} \alpha_j \alpha_{j+k}$ for $k = |t-s|$.
For the definition of $I_N$,
$I_T$, $J_N$, and $J_T$,
see the "Fuller-Battese Method" section earlier in this chapter.
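To make the formula for $\gamma(k)$ concrete, the following NumPy sketch computes the
autocovariances and fills in $\Psi_T$; the $\alpha$ values and $\sigma_\epsilon^2$ are the same
hypothetical choices as in the earlier sketch:

```python
import numpy as np

def gamma(k, alpha, sigma_eps2):
    """gamma(k) = sigma_eps^2 * sum_{j=0}^{m-k} alpha_j * alpha_{j+k}; zero for k > m."""
    m = len(alpha) - 1
    if k > m:
        return 0.0
    return sigma_eps2 * np.dot(alpha[: m - k + 1], alpha[k:])

# Hypothetical values (m = 2), matching the earlier sketch.
alpha = np.array([1.0, 0.6, 0.3])
sigma_eps2 = 1.5 ** 2
T = 10

# Psi_T has (t, s) element gamma(|t - s|), which vanishes for |t - s| > m,
# so Psi_T is a band matrix of bandwidth m.
Psi_T = np.array([[gamma(abs(t - s), alpha, sigma_eps2) for s in range(T)]
                  for t in range(T)])
```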
The covariance matrix, denoted by V, can
be written in the form

V = \sigma_a^2 (I_N \otimes J_T) + \sigma_b^2 (J_N \otimes I_T) + \sum_{k=0}^{m} \gamma(k) (I_N \otimes \Psi_T^{(k)})

where $\Psi_T^{(0)} = I_T$,
and, for $k = 1, \ldots, m$,
$\Psi_T^{(k)}$ is a band matrix whose kth
off-diagonal elements are 1's and all other elements are 0's.
Thus, the covariance matrix of the vector of observations
y has the form

\mathrm{var}(y) = \sum_{k=1}^{m+3} \nu_k V_k

where

\nu_1 = \sigma_a^2, \quad \nu_2 = \sigma_b^2, \quad \nu_k = \gamma(k-3), \; k = 3, \ldots, m+3

V_1 = I_N \otimes J_T, \quad V_2 = J_N \otimes I_T, \quad V_k = I_N \otimes \Psi_T^{(k-3)}, \; k = 3, \ldots, m+3
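The two displays above describe the same matrix. The sketch below assembles V from
Kronecker products and the band matrices $\Psi_T^{(k)}$; the dimensions, variance
components, and $\gamma(k)$ values are hypothetical placeholders:

```python
import numpy as np

# Hypothetical dimensions and variance components.
N, T, m = 3, 10, 2
sigma_a2, sigma_b2 = 0.8, 0.5
alpha = np.array([1.0, 0.6, 0.3])
sigma_eps2 = 1.5 ** 2
# gamma(0), ..., gamma(m) from the formula in the previous sketch.
gamma_k = [sigma_eps2 * np.dot(alpha[: m - k + 1], alpha[k:]) for k in range(m + 1)]

I_N, I_T = np.eye(N), np.eye(T)
J_N, J_T = np.ones((N, N)), np.ones((T, T))

def band(T, k):
    """Psi_T^(k): T x T matrix with 1's on the kth off-diagonals (I_T when k = 0)."""
    return np.eye(T) if k == 0 else np.eye(T, k=k) + np.eye(T, k=-k)

# V = sigma_a^2 (I_N x J_T) + sigma_b^2 (J_N x I_T) + sum_k gamma(k) (I_N x Psi_T^(k)),
# which is the same as sum_{k=1}^{m+3} nu_k V_k with
# nu = (sigma_a^2, sigma_b^2, gamma(0), ..., gamma(m)).
V = (sigma_a2 * np.kron(I_N, J_T)
     + sigma_b2 * np.kron(J_N, I_T)
     + sum(g * np.kron(I_N, band(T, k)) for k, g in enumerate(gamma_k)))
```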
The estimator of $\beta$ is a two-step
GLS-type estimator, that is, GLS
with the unknown covariance matrix replaced by a suitable
estimator of V. It is obtained by substituting Seely estimates
for the scalar multiples $\nu_k$, $k = 1, \ldots, m+3$.
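A minimal sketch of the second step, assuming the variance components have already been
estimated (the Seely estimates themselves are not derived here; `nu_hat` below is a
hypothetical stand-in for them):

```python
import numpy as np

def gls(y, X, V_hat):
    """GLS with the unknown covariance matrix replaced by an estimate V_hat:
    beta_hat = (X' V_hat^{-1} X)^{-1} X' V_hat^{-1} y."""
    Vinv_X = np.linalg.solve(V_hat, X)
    Vinv_y = np.linalg.solve(V_hat, y)
    return np.linalg.solve(X.T @ Vinv_X, X.T @ Vinv_y)

# Two-step use: V_hat = sum_k nu_hat[k] * V_k, where nu_hat holds estimates of
# (sigma_a^2, sigma_b^2, gamma(0), ..., gamma(m)) and V_k are the component
# matrices defined above; then beta_hat = gls(y, X, V_hat).
```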
Seely (1969) presents a general theory of unbiased
estimation when the choice of estimators is restricted to
finite dimensional vector spaces, with a special emphasis on
quadratic estimation of functions of the form

\sum_{i=1}^{n} \lambda_i \nu_i

The parameters $\nu_i$ $(i = 1, \ldots, n)$
are associated with a linear model $E(y) = X\beta$ with
covariance matrix

\sum_{i=1}^{n} \nu_i V_i

where $V_i$ $(i = 1, \ldots, n)$
are real symmetric matrices.
The method is also discussed by Seely
(1970a, 1970b) and Seely and Zyskind (1971).
Seely and Soong (1971) consider the MINQUE principle, using an approach
along the lines of Seely (1969).