# Browsing by Subject "estimators"

Now showing items 1–2 of 2

###### The Use of Error Components Models in Combining Cross Section with Time Series Data (1969)

Item status: Restricted. Authors: Wallace, Dudley; Hussain, Ashiq.

A mixed model of regression with error components is proposed as one of possible interest for combining cross section and time series data. For known variances, it is shown that Aitken estimators and covariance estimators are in one sense asymptotically equivalent, even though the Aitken estimators are more efficient in small samples. Turning to unknown variance components, Zellner-type iterative estimators are compared with covariance estimators. Here, few small-sample properties are obtained. However, it is shown that covariance and Zellner-type estimators have equivalent asymptotic distributions and equivalent limits of sequences of first- and second-order moments for weakly nonstochastic regressors. For the model analyzed, the theoretical results obtained, as well as ease of computation, tend to support traditional covariance estimators of the regression parameters. An additional interesting result presented in an appendix is that ordinary least squares estimates of the β's (ignoring the error components) have unbounded asymptotic variances. On efficiency grounds, this argues rather strongly for some care in combining data from alternative sources in regression analysis.

###### Weaker Criteria and Tests for Linear Restrictions in Regression (1972)

Item status: Open Access. Author: Wallace, Dudley.

The standard F test for linear restrictions in regression is relevant as a criterion but fails to capture the notion of a tradeoff between bias and variance. Average squared distance criteria yield operational tests that are more appropriate, depending upon objectives. In the present paper, two alternative criteria are developed. The first allows testing of the hypothesis that the average squared distance of a restricted estimator from the parameter point in k-dimensional space is less than the average squared distance of the unrestricted, ordinary least squares estimator from the same parameter point. The second sets up a test of betterness of the restricted estimator over the unrestricted estimator of E(Y|X), where betterness is again defined in average squared distance.
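The first abstract's contrast between ordinary least squares that ignores the error components and the covariance (within-unit) estimator can be illustrated with a small simulation. This is a minimal sketch, not the authors' derivation: the panel dimensions, true slope, and variance components below are arbitrary assumptions chosen so the unit effects dominate the idiosyncratic noise.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 50, 10                    # cross-section units, time periods (assumed)
beta = 2.0                       # true slope (assumed)
sigma_mu, sigma_nu = 3.0, 1.0    # unit-effect and idiosyncratic std. dev. (assumed)

def simulate():
    """Draw one panel from the error components model y_it = beta*x_it + mu_i + nu_it."""
    x = rng.normal(size=(N, T))
    mu = rng.normal(scale=sigma_mu, size=(N, 1))   # time-invariant unit effects
    nu = rng.normal(scale=sigma_nu, size=(N, T))   # idiosyncratic noise
    return x, beta * x + mu + nu

def pooled_ols(x, y):
    """OLS on the pooled data, ignoring the error components."""
    xf, yf = x.ravel(), y.ravel()
    return xf @ yf / (xf @ xf)

def covariance_est(x, y):
    """Covariance (within) estimator: demean each unit's series first."""
    xd = x - x.mean(axis=1, keepdims=True)
    yd = y - y.mean(axis=1, keepdims=True)
    return xd.ravel() @ yd.ravel() / (xd.ravel() @ xd.ravel())

draws = [simulate() for _ in range(500)]
ols_hat = np.array([pooled_ols(x, y) for x, y in draws])
cov_hat = np.array([covariance_est(x, y) for x, y in draws])
print("pooled OLS:  mean %.3f  var %.5f" % (ols_hat.mean(), ols_hat.var()))
print("covariance:  mean %.3f  var %.5f" % (cov_hat.mean(), cov_hat.var()))
```

Both estimators center on the true slope, but with large unit-effect variance the covariance estimator's sampling variance is markedly smaller, in line with the abstract's caution about ordinary least squares that ignores the error components.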
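The bias-variance tradeoff behind the second abstract's average squared distance criterion can be sketched with restricted versus unrestricted least squares. In this hypothetical setup the restriction β₂ = 0 is slightly false, yet imposing it still lowers the average squared distance from the true parameter point; the design, sample size, and coefficient values are assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
beta = np.array([1.0, 0.05])     # beta_2 near, but not at, the restriction beta_2 = 0 (assumed)
X = rng.normal(size=(n, 2))      # fixed design across replications (assumed)

def estimates(y):
    b_unres, *_ = np.linalg.lstsq(X, y, rcond=None)        # unrestricted OLS
    b1, *_ = np.linalg.lstsq(X[:, :1], y, rcond=None)      # OLS imposing beta_2 = 0
    return b_unres, np.array([b1[0], 0.0])

reps = 2000
sq_unres, sq_res = [], []
for _ in range(reps):
    y = X @ beta + rng.normal(size=n)
    bu, br = estimates(y)
    sq_unres.append(np.sum((bu - beta) ** 2))   # squared distance in parameter space
    sq_res.append(np.sum((br - beta) ** 2))

msd_unres = float(np.mean(sq_unres))
msd_res = float(np.mean(sq_res))
print("average squared distance: unrestricted %.4f, restricted %.4f" % (msd_unres, msd_res))
```

Because the restricted estimator trades a small squared bias for a large reduction in variance, its average squared distance comes out smaller here, which is exactly the kind of "betterness" the F test cannot detect.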