The book first discusses in depth various aspects of the well-known
inconsistency that arises when explanatory variables in a linear
regression model are measured with error. Despite this
inconsistency, the region where the true regression coefficients
lie can sometimes be characterized in a useful way, especially
when bounds are known on the measurement error variance but also
when such information is absent. Wage discrimination with imperfect
productivity measurement is discussed as an important special case.
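To make the inconsistency concrete, here is a minimal simulation sketch (my own illustration, not code from the book; the variable names xi, x, y and beta are purely hypothetical) of the classical errors-in-variables case, in which the OLS slope is attenuated toward zero by the reliability ratio var(xi)/var(x):

```python
# Sketch only: classical errors-in-variables attenuation of the OLS slope.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
beta = 2.0
xi = rng.normal(0.0, 1.0, n)             # true (latent) regressor, variance 1
x = xi + rng.normal(0.0, 0.5, n)         # observed regressor; error variance 0.25
y = beta * xi + rng.normal(0.0, 1.0, n)  # outcome depends on the latent regressor

b_ols = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
reliability = 1.0 / (1.0 + 0.25)         # var(xi) / var(x)
print(b_ols, beta * reliability)         # OLS slope sits near beta * reliability, not beta
```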
Next, it is shown that the inconsistency is not accidental but
fundamental. Owing to an identification problem, consistent
estimators may not exist at all; additional information is then needed.
This information can be of various types. One type is exact prior
knowledge about functions of the parameters. This leads to the CALS
estimator. Another major type is in the form of instrumental
variables. Many aspects of this are discussed, including
heteroskedasticity, combination of data from different sources,
construction of instruments from the available data, and the LIML
estimator, which is especially relevant when the instruments are
weak.
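As an illustrative sketch of the instrumental-variables remedy (again my own example, not taken from the book), a second, independently error-ridden measurement of the latent regressor can serve as an instrument, and the simple IV ratio cov(z, y)/cov(z, x) recovers the slope consistently:

```python
# Sketch only: a second noisy measurement used as an instrument.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
beta = 2.0
xi = rng.normal(0.0, 1.0, n)
x = xi + rng.normal(0.0, 0.5, n)         # error-ridden regressor
z = xi + rng.normal(0.0, 0.5, n)         # second noisy measurement, used as instrument
y = beta * xi + rng.normal(0.0, 1.0, n)

b_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]   # simple IV estimator
print(b_iv)                                       # close to beta = 2.0
```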
The scope is then widened by embedding the regression equation
with measurement error in a multiple-equation setting,
leading to the exploratory factor analysis (EFA) model. This marks
the step from measurement error to latent variables. Estimation of
the EFA model leads to an eigenvalue problem. A variety of models
is reviewed that involve eigenvalue problems as their common
characteristic.
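The eigenvalue connection can be illustrated with a small sketch (a principal-component-style extraction of my own, not necessarily the book's estimator): simulate a one-factor model and read the loading direction off the leading eigenvector of the sample covariance matrix.

```python
# Sketch only: factor extraction as an eigenvalue problem (one-factor model).
import numpy as np

rng = np.random.default_rng(2)
n, p = 50_000, 4
lam = np.array([0.9, 0.8, 0.7, 0.6])     # true factor loadings (illustrative)
f = rng.normal(0.0, 1.0, n)              # common factor
eps = rng.normal(0.0, 0.3, (n, p))       # unique (idiosyncratic) errors
x = f[:, None] * lam + eps               # observed indicators

S = np.cov(x, rowvar=False)              # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(S)     # eigenvalues in ascending order
loading_hat = eigvecs[:, -1] * np.sqrt(eigvals[-1])  # leading eigenvector, rescaled
print(loading_hat)                       # roughly matches lam (up to sign)
```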
EFA is extended to confirmatory factor analysis (CFA) by including
restrictions on the parameters of the factor analysis model, and
next by relating the factors to background variables.
These models are all structural equation models (SEMs), a very
general and important class of models, with the LISREL model as its
best-known representation, encompassing almost all linear equation
systems with latent variables.
Estimation of SEMs can be viewed as an application of the
generalized method of moments (GMM). GMM in general and for SEM in
particular is discussed at great length, including the generality
of GMM, optimal weighting, conditional moments, continuous
updating, simulation estimation, the link with the method of
maximum likelihood, and especially testing and model evaluation
for GMM.
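The following sketch (my own toy example, not drawn from the book) shows two-step GMM with optimal weighting on an over-identified errors-in-variables problem, using the moment conditions E[z(y - x*beta)] = 0 with two instruments:

```python
# Sketch only: two-step linear GMM with an optimal weight matrix.
import numpy as np

rng = np.random.default_rng(3)
n = 50_000
beta = 2.0
xi = rng.normal(0.0, 1.0, n)
x = xi + rng.normal(0.0, 0.5, n)                      # error-ridden regressor
z = np.column_stack([xi + rng.normal(0.0, 0.5, n),    # two further noisy measurements
                     xi + rng.normal(0.0, 0.7, n)])   # serve as instruments
y = beta * xi + rng.normal(0.0, 1.0, n)

def gmm(W):
    # Linear GMM with moments E[z (y - x*beta)] = 0 and weight matrix W.
    Zx, Zy = z.T @ x / n, z.T @ y / n
    return (Zx @ W @ Zy) / (Zx @ W @ Zx)

b1 = gmm(np.eye(2))                                   # first step: identity weight
g = z * (y - x * b1)[:, None]                         # estimated moment contributions
b2 = gmm(np.linalg.inv(g.T @ g / n))                  # second step: optimal weighting
print(b1, b2)                                         # both close to beta = 2.0
```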
The discussion concludes with nonlinear models. The emphasis is on
polynomial models and models that are nonlinear due to a filter on
the dependent variables, such as discrete choice models or models with
ordered categorical variables.