Unbiased estimation of the variance of the disturbance variables


In statistics, the unbiased estimation of the variance of the disturbance variables, also called the unbiased estimation of the error variance, is a point estimator with the quality property that it estimates the unknown variance $\sigma^2$ of the disturbance variables in an unbiased way whenever the Gauss–Markov assumptions hold.

Introduction to the problem

The error variance, also called residual variance, experimental error, disturbance variance (German: Störgrößenvarianz), or unexplained variance, is the variance of the regression function in the population, i.e. the variance of the error terms or disturbances. The error variance is an unknown parameter that must be estimated from the sample information. It measures the variation that can be traced back to measurement errors or disturbance variables. A first obvious approach would be to estimate the variance of the disturbance variables in the usual way by maximum likelihood estimation (see classical linear model of normal regression). However, this estimator is problematic, as explained below.

Unbiased estimator for the variance of the disturbance variables

Simple linear regression

Although the homoscedastic variance $\sigma^2$ in the population is sometimes assumed to be known, in most applications it must be assumed to be unknown (for example, when estimating demand parameters in economic models, or production functions). Since the disturbance variable variance $\sigma^2$ has an unknown value, the numerical values of the variances of the slope parameter $\hat\beta_1$ and of the intercept $\hat\beta_0$ cannot be computed, since the formulas depend on it. However, these quantities can be estimated from the available data. An obvious estimator of the disturbance variable $\varepsilon_i$ is the residual $\hat\varepsilon_i = y_i - \hat y_i$, where $\hat y_i = \hat\beta_0 + \hat\beta_1 x_i$ represents the sample regression function. The information contained in the residuals could therefore be used for an estimator of the disturbance variable variance. Because $\operatorname{E}(\varepsilon_i) = 0$, from a frequentist point of view the "mean value" is $\overline{\varepsilon} = 0$. The quantity $\varepsilon_i$ cannot be observed, however, since the disturbance variables are unobservable. If the observable counterpart $\hat\varepsilon_i$ is used instead of $\varepsilon_i$, this leads to the following estimator for the disturbance variable variance

$$\tilde s^2 = \frac{1}{n}\sum_{i=1}^n \hat\varepsilon_i^2 = \frac{SQR}{n},$$

where $SQR = \sum_{i=1}^n \hat\varepsilon_i^2$ is the residual sum of squares. This estimator is the sample mean of the squared residuals and could be used to estimate the disturbance variable variance $\sigma^2$. It can be shown that the definition above also corresponds to the maximum likelihood estimator ($\tilde s^2 = \hat\sigma^2_{\text{ML}}$). However, this estimator does not satisfy common quality criteria for point estimators and is therefore rarely used. In particular, it is not unbiased for $\sigma^2$: the expected value of the residual sum of squares is $\operatorname{E}(SQR) = (n-2)\sigma^2$, so the expected value of this estimator is $\operatorname{E}(\tilde s^2) = \frac{n-2}{n}\sigma^2 \neq \sigma^2$. In simple linear regression it can be shown, under the assumptions of the classical model of linear single regression, that an unbiased estimator for $\sigma^2$, i.e. an estimator satisfying $\operatorname{E}(\hat\sigma^2) = \sigma^2$, is given by

$$\hat\sigma^2 = \frac{1}{n-2}\sum_{i=1}^n \hat\varepsilon_i^2 = \frac{SQR}{n-2},$$

assuming that $n > 2$. This unbiased estimator for $\sigma^2$ is the residual mean square and is sometimes referred to as the residual variance. The square root $\hat\sigma = \sqrt{SQR/(n-2)}$ of this unbiased estimator is called the standard error of the regression. The residual variance can be interpreted as the mean model estimation error and forms the basis for all further calculations (confidence intervals, standard errors of the regression parameters, etc.). It differs from the expression above in that the residual sum of squares is adjusted by the number of degrees of freedom $n-2$. This adjustment can be explained intuitively by the fact that two degrees of freedom are lost in estimating the two unknown regression parameters $\beta_0$ and $\beta_1$.
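
The effect of the degrees-of-freedom correction can be checked numerically. The following Python sketch is purely illustrative (the parameter values and variable names are this example's, not taken from the cited sources): it repeatedly simulates a simple linear model and compares the mean of the biased estimator $SQR/n$ with that of the unbiased estimator $SQR/(n-2)$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma2 = 10, 4.0              # small n makes the bias clearly visible
beta0, beta1 = 1.0, 2.0          # true intercept and slope (illustrative values)
x = np.linspace(0.0, 1.0, n)
sxx = np.sum((x - x.mean()) ** 2)

biased, unbiased = [], []
for _ in range(100_000):
    y = beta0 + beta1 * x + rng.normal(scale=np.sqrt(sigma2), size=n)
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / sxx   # least squares slope
    b0 = y.mean() - b1 * x.mean()                        # least squares intercept
    sqr = np.sum((y - b0 - b1 * x) ** 2)                 # residual sum of squares SQR
    biased.append(sqr / n)                               # ML estimator, divides by n
    unbiased.append(sqr / (n - 2))                       # degrees-of-freedom correction

print(np.mean(biased))    # close to (n-2)/n * sigma2 = 3.2
print(np.mean(unbiased))  # close to sigma2 = 4.0
```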

As mentioned above, an unbiased estimator for $\sigma^2$ in simple linear regression is thus given explicitly by

$$\hat\sigma^2 = \frac{1}{n-2}\sum_{i=1}^n \left(y_i - \hat\beta_0 - \hat\beta_1 x_i\right)^2,$$

where $\hat\beta_0$ and $\hat\beta_1$ are the least squares estimators for $\beta_0$ and $\beta_1$.

In order to show unbiasedness, one uses the property that the residuals can be represented as a function of the disturbance variables as $\hat\varepsilon_i = (\varepsilon_i - \overline\varepsilon) - (\hat\beta_1 - \beta_1)(x_i - \overline x)$. Furthermore, one uses the property that the variance of the least squares (KQ) estimator $\hat\beta_1$ is given by $\operatorname{Var}(\hat\beta_1) = \sigma^2 / \sum_{i=1}^n (x_i - \overline x)^2$. It should also be noted that the least squares estimator is unbiased, $\operatorname{E}(\hat\beta_1) = \beta_1$, and the same applies to $\hat\beta_0$.

Proof

Using the representation of the residuals above, the expected value of the residual sum of squares is

$$\operatorname{E}(SQR) = \operatorname{E}\left[\sum_{i=1}^n \hat\varepsilon_i^2\right] = \operatorname{E}\left[\sum_{i=1}^n \left((\varepsilon_i - \overline\varepsilon) - (\hat\beta_1 - \beta_1)(x_i - \overline x)\right)^2\right].$$

Expanding the square and using $\hat\beta_1 - \beta_1 = \sum_{i=1}^n (x_i - \overline x)(\varepsilon_i - \overline\varepsilon) \big/ \sum_{i=1}^n (x_i - \overline x)^2$, the cross term equals $-2\operatorname{E}\left[(\hat\beta_1 - \beta_1)^2\right]\sum_{i=1}^n (x_i - \overline x)^2 = -2\sigma^2$. Together with $\operatorname{E}\left[\sum_{i=1}^n (\varepsilon_i - \overline\varepsilon)^2\right] = (n-1)\sigma^2$ and $\operatorname{E}\left[(\hat\beta_1 - \beta_1)^2\right]\sum_{i=1}^n (x_i - \overline x)^2 = \operatorname{Var}(\hat\beta_1)\sum_{i=1}^n (x_i - \overline x)^2 = \sigma^2$, this yields

$$\operatorname{E}(SQR) = (n-1)\sigma^2 - 2\sigma^2 + \sigma^2 = (n-2)\sigma^2,$$

and therefore $\operatorname{E}(\hat\sigma^2) = \operatorname{E}(SQR)/(n-2) = \sigma^2$.

The variances of the least squares estimators $\hat\beta_0$ and $\hat\beta_1$ can also be estimated with the unbiased estimator $\hat\sigma^2$. For example, $\operatorname{Var}(\hat\beta_1)$ can be estimated by replacing $\sigma^2$ with $\hat\sigma^2$. The estimated variance of the slope parameter is then given by

$$\widehat{\operatorname{Var}}(\hat\beta_1) = \frac{\hat\sigma^2}{\sum_{i=1}^n (x_i - \overline x)^2}.$$
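
Continuing the simulation sketch above (same illustrative variables), the estimated variance and standard error of the slope follow directly:

```python
# using x, n, and the sqr from the last simulated sample above
sigma2_hat = sqr / (n - 2)                             # unbiased residual variance
var_b1_hat = sigma2_hat / np.sum((x - x.mean()) ** 2)  # estimated Var(beta1_hat)
se_b1 = np.sqrt(var_b1_hat)                            # standard error of the slope
```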

Multiple linear regression

In multiple linear regression, the unbiased estimator of the variance of the disturbance variables, or the residual variance, is given by

$$\hat\sigma^2 = \frac{1}{n-k}\sum_{i=1}^n \left(y_i - \mathbf{x}_i^\top \hat{\boldsymbol\beta}\right)^2,$$

where $\hat{\boldsymbol\beta} = (\mathbf{X}^\top \mathbf{X})^{-1}\mathbf{X}^\top \mathbf{y}$ represents the least squares estimator, $k$ the number of unknown regression parameters, and $\mathbf{x}_i^\top$ the $i$-th row of the experimental design matrix $\mathbf{X}$. Alternatively, the unbiased estimator of the variance of the disturbance variables in the multiple case can be represented as

$$\hat\sigma^2 = \frac{(\mathbf{y} - \mathbf{X}\hat{\boldsymbol\beta})^\top (\mathbf{y} - \mathbf{X}\hat{\boldsymbol\beta})}{n-k} = \frac{\hat{\boldsymbol\varepsilon}^\top \hat{\boldsymbol\varepsilon}}{n-k}.$$

This representation results from the fact that the residual sum of squares can be written as $SQR = \sum_{i=1}^n \hat\varepsilon_i^2 = \hat{\boldsymbol\varepsilon}^\top \hat{\boldsymbol\varepsilon}$. Another alternative representation of the residual variance results from the fact that the residual sum of squares can also be represented using the residual-generating matrix $\mathbf{Q} = \mathbf{I} - \mathbf{X}(\mathbf{X}^\top \mathbf{X})^{-1}\mathbf{X}^\top$ as $SQR = \hat{\boldsymbol\varepsilon}^\top \hat{\boldsymbol\varepsilon} = \boldsymbol\varepsilon^\top \mathbf{Q}\,\boldsymbol\varepsilon$. This results in the residual variance

$$\hat\sigma^2 = \frac{\boldsymbol\varepsilon^\top \mathbf{Q}\,\boldsymbol\varepsilon}{n-k}.$$

This estimator can in turn be used to compute the covariance matrix of the least squares estimation vector, $\operatorname{Cov}(\hat{\boldsymbol\beta}) = \sigma^2 (\mathbf{X}^\top \mathbf{X})^{-1}$. If $\sigma^2$ is now replaced by $\hat\sigma^2$, the estimated covariance matrix of the least squares estimation vector is obtained as

$$\widehat{\operatorname{Cov}}(\hat{\boldsymbol\beta}) = \hat\sigma^2 (\mathbf{X}^\top \mathbf{X})^{-1}.$$
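
The matrix formulas translate directly into a few lines of linear algebra. The following Python sketch is again illustrative only (the design matrix is randomly generated and all names are this example's, not from the cited sources); it computes the residual variance and the estimated covariance matrix, and checks the residual-generating-matrix identity numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 50, 3                                   # k counts all regression parameters
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta = np.array([1.0, 2.0, -0.5])              # true parameters (illustrative)
y = X @ beta + rng.normal(scale=1.5, size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # least squares estimator
resid = y - X @ beta_hat                       # residual vector
sigma2_hat = resid @ resid / (n - k)           # unbiased residual variance
cov_hat = sigma2_hat * np.linalg.inv(X.T @ X)  # estimated covariance of beta_hat

# residual-generating matrix Q = I - X (X'X)^{-1} X'; SQR equals y'Qy
Q = np.eye(n) - X @ np.linalg.inv(X.T @ X) @ X.T
assert np.isclose(resid @ resid, y @ Q @ y)
```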

Regression with stochastic regressors

In the case of regression with stochastic regressors, with stochastic regressor matrix $\mathbf{Z}$, the unbiased estimator of the variance of the disturbance variables is likewise given by

$$\hat\sigma^2 = \frac{(\mathbf{y} - \mathbf{Z}\hat{\boldsymbol\beta})^\top (\mathbf{y} - \mathbf{Z}\hat{\boldsymbol\beta})}{n-k}.$$

Unbiasedness can be shown by means of the law of iterated expectations.
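
In outline, the argument conditions on the realized regressor matrix (a one-step sketch, assuming the fixed-regressor result above holds conditionally on $\mathbf{Z}$):

$$\operatorname{E}(\hat\sigma^2) = \operatorname{E}\!\left[\operatorname{E}(\hat\sigma^2 \mid \mathbf{Z})\right] = \operatorname{E}[\sigma^2] = \sigma^2.$$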
