Gauss-Markov theorem


In statistics, the Gauss-Markov theorem (the German transcription Markow is also found in the literature, i.e. Gauss-Markow theorem) or Gauss theorem is a mathematical theorem about the class of linear unbiased estimators. It provides a theoretical justification of the method of least squares and is named after the mathematicians Carl Friedrich Gauß and Andrei Andreyevich Markov. It has recently been suggested that the theorem should simply be called Gauss's theorem, since the attribution to Markov rests on an error (see #History). The theorem states that in a linear regression model in which the disturbance variables have an expected value of zero, a constant variance, and are uncorrelated (assumptions of the classical linear regression model), the least squares estimator - provided it exists - is the best linear unbiased estimator (English Best Linear Unbiased Estimator, in short: BLUE). Here "best" means that, within the class of linear unbiased estimators, it has the "smallest" covariance matrix and is therefore minimum-variance. The disturbance variables do not necessarily have to be normally distributed. In the case of the generalized least squares estimator, they do not have to be independent and identically distributed either.

History

The theorem was proved in 1821 by Carl Friedrich Gauß. Versions of his proof were published by Helmert (1872), Czuber (1891), and Markov (1912), among others. Jerzy Neyman, who was unaware of Gauss's work, named the theorem after Markov, among others. Since then the theorem has been known as the Gauss-Markov theorem. Because the current name is primarily due to Neyman's ignorance of Gauss's proof, it has recently been suggested - especially in the English-language literature - to name the theorem after Gauss alone, for example Gauss's theorem. Historical information on the Gauss-Markov theorem can be found in Seal (1967), Plackett (1972), Stigler (1986) and in History of Mathematical Statistics from 1750 to 1930 by Hald (1998).

Formulation of the theorem

In words, the theorem reads: the least squares estimator is the best linear unbiased estimator if the random disturbance variables \varepsilon_i (the following formulas refer to simple linear regression y_i = \beta_0 + \beta_1 x_i + \varepsilon_i) satisfy the following:

  • are uncorrelated:
    \operatorname{Cov}(\varepsilon_i, \varepsilon_j) = 0 \quad \text{for } i \neq j .
    Independent random variables are always also uncorrelated. In this context, one speaks of the absence of autocorrelation.
  • have mean zero: If the model contains an axis intercept that differs from zero, it is reasonable at least to demand that the mean value of \varepsilon_i in the population is zero and that the fluctuations of the individual disturbance variables balance out over the entirety of the observations. Mathematically, this means that the expected value of the disturbance variables is zero, \operatorname{E}(\varepsilon_i) = 0. This assumption does not make a statement about the relationship between x_i and y_i, but only about the distribution of the unsystematic component in the population. This means that the model under consideration corresponds on average to the true relationship. If the expected value were not zero, then on average one would estimate a wrong relationship. This assumption can be violated if a relevant variable is not taken into account in the regression model (see bias due to omitted variables).
  • have a finite constant variance (homoscedasticity):
    \operatorname{Var}(\varepsilon_i) = \sigma^2 < \infty \quad \text{for all } i .
    If the variance of the residuals (and thus the variance of the explained variable itself) is the same for all values of the regressors, homoscedasticity or variance homogeneity is present.

All of the above assumptions about the disturbance variables can be summarized as follows:

\varepsilon_i \sim (0, \sigma^2) ,

that is, all disturbance variables follow a distribution with expected value 0 and variance \sigma^2. The distribution is not specified in more detail at this point.

These assumptions are also known as the Gauss-Markov assumptions. In econometrics, the Gauss-Markov theorem is often stated differently and additional assumptions are made.
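
The practical content of these assumptions can be illustrated numerically. The following minimal Python sketch simulates a simple linear regression whose disturbance variables have mean zero, constant variance and are uncorrelated, but are deliberately not normally distributed; all parameter values are chosen purely for illustration. Averaged over many replications, the least squares slope is close to the true slope, illustrating unbiasedness.

```python
import numpy as np

rng = np.random.default_rng(0)
beta0, beta1, sigma = 2.0, 0.5, 1.0        # illustrative "true" parameters
x = np.linspace(0.0, 10.0, 50)             # fixed regressor values

slopes = []
for _ in range(5000):
    # disturbances: mean zero, constant variance sigma^2, uncorrelated,
    # but deliberately not normally distributed (uniform)
    eps = rng.uniform(-np.sqrt(3.0) * sigma, np.sqrt(3.0) * sigma, size=x.size)
    y = beta0 + beta1 * x + eps
    slope, intercept = np.polyfit(x, y, 1)  # least squares fit of a straight line
    slopes.append(slope)

print(np.mean(slopes))                     # close to beta1 = 0.5 (unbiasedness)
```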

General formulation of the Gauss-Markov theorem (regular case)

As a starting point, we consider a typical multiple linear regression model with given data for n statistical units and k regressors. The relationship between the dependent variable and the independent variables can be written as

y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \dotsb + \beta_k x_{ik} + \varepsilon_i , \qquad i = 1, \dotsc, n .

In matrix notation this is

\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix} = \begin{pmatrix} 1 & x_{11} & \cdots & x_{1k} \\ 1 & x_{21} & \cdots & x_{2k} \\ \vdots & \vdots & & \vdots \\ 1 & x_{n1} & \cdots & x_{nk} \end{pmatrix} \begin{pmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_k \end{pmatrix} + \begin{pmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end{pmatrix}

with \mathbf{y} \in \mathbb{R}^{n}, \mathbf{X} \in \mathbb{R}^{n \times (k+1)}, \boldsymbol{\beta} \in \mathbb{R}^{k+1} and \boldsymbol{\varepsilon} \in \mathbb{R}^{n}. In compact notation

\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon} .

Here \boldsymbol{\beta} represents a vector of unknown parameters (known as regression coefficients) that must be estimated from the data. Furthermore, it is assumed that the disturbance variables are zero on average, \operatorname{E}(\boldsymbol{\varepsilon}) = \mathbf{0}, which means that we can assume that our model is correct on average. The data matrix \mathbf{X} is assumed to have full (column) rank, that is, \operatorname{rank}(\mathbf{X}) = k + 1. In particular, \mathbf{X}^{\top}\mathbf{X} is then a regular, i.e. invertible, matrix. This is why one speaks here of the regular case (see heading). Furthermore, it is assumed for the covariance matrix of the vector of disturbance variables that \operatorname{Cov}(\boldsymbol{\varepsilon}) = \sigma^2 \mathbf{I}_n holds. The Gauss-Markov assumptions can therefore be summarized in the multiple case as

\boldsymbol{\varepsilon} \sim (\mathbf{0}, \sigma^2 \mathbf{I}_n) ,

where the expected value of the disturbance variables is the zero vector, \operatorname{E}(\boldsymbol{\varepsilon}) = \mathbf{0}, and the covariance matrix is the expected value of the dyadic product of the disturbance variables,

\operatorname{Cov}(\boldsymbol{\varepsilon}) = \operatorname{E}\left( \boldsymbol{\varepsilon}\boldsymbol{\varepsilon}^{\top} \right) = \sigma^2 \mathbf{I}_n .

This assumption is the homoscedasticity assumption in the multiple case. The above specification of the linear model thus yields for the random vector \mathbf{y}

\mathbf{y} \sim (\mathbf{X}\boldsymbol{\beta}, \sigma^2 \mathbf{I}_n) .

From these assumptions it follows:

  1. That the least squares estimator for the true parameter vector \boldsymbol{\beta}, which reads \mathbf{b} = (\mathbf{X}^{\top}\mathbf{X})^{-1}\mathbf{X}^{\top}\mathbf{y}, is a minimum-variance linear unbiased estimator of \boldsymbol{\beta} (a numerical sketch follows this list).
  2. That \operatorname{Cov}(\mathbf{b}) = \sigma^2 (\mathbf{X}^{\top}\mathbf{X})^{-1} is the covariance matrix of the least squares estimator.
  3. That the estimated variance of the disturbance variables, \hat{\sigma}^2 = \hat{\boldsymbol{\varepsilon}}^{\top}\hat{\boldsymbol{\varepsilon}} / (n - k - 1), is an unbiased estimator of the unknown variance \sigma^2 of the disturbance variables.
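
These three quantities can be computed directly from the formulas above. The following Python sketch implements them for the regular case; the function name and interface are chosen here only for illustration.

```python
import numpy as np

def ols(X, y):
    """Least squares estimator, its estimated covariance matrix and the
    unbiased estimator of the disturbance variance (regular case only)."""
    n, p = X.shape                                # p = k + 1 columns, incl. intercept
    XtX_inv = np.linalg.inv(X.T @ X)              # requires full column rank of X
    b = XtX_inv @ X.T @ y                         # least squares estimator
    residuals = y - X @ b
    sigma2_hat = residuals @ residuals / (n - p)  # unbiased estimate of sigma^2
    cov_b = sigma2_hat * XtX_inv                  # estimated covariance matrix of b
    return b, cov_b, sigma2_hat
```

The exact covariance matrix \sigma^2 (\mathbf{X}^{\top}\mathbf{X})^{-1} involves the unknown \sigma^2; in practice it is estimated by substituting \hat{\sigma}^2, as in the sketch.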

Minimum-variance linear unbiased estimator

Minimum variance

The minimum-variance, or "best", estimator is characterized by the fact that it has the "smallest" covariance matrix (with respect to the Loewner partial order). An estimator that exhibits this property is therefore also called a minimum-variance or efficient estimator. With the additional assumption of unbiasedness, one also speaks of a minimum-variance unbiased estimator.

Every estimator from the class of linear unbiased estimators can be represented as

\tilde{\mathbf{b}} = \mathbf{C}\mathbf{y} \qquad \text{(linearity)}

with a ((k+1) \times n) matrix \mathbf{C}. An example of an estimator in this class is the least squares estimator \mathbf{b}.

The property of unbiasedness means that the estimator "on average" equals the true parameter vector:

\operatorname{E}(\tilde{\mathbf{b}}) = \boldsymbol{\beta} .

Under the above assumptions, the following inequality then holds for all vectors \mathbf{a}:

\operatorname{Var}(\mathbf{a}^{\top}\mathbf{b}) \leq \operatorname{Var}(\mathbf{a}^{\top}\tilde{\mathbf{b}}) \qquad \text{(efficiency property)},

where \mathbf{b} is the least squares estimator, i.e. the estimator determined by least squares estimation. This efficiency property can also be rewritten as

\mathbf{a}^{\top} \operatorname{Cov}(\tilde{\mathbf{b}})\, \mathbf{a} - \mathbf{a}^{\top} \operatorname{Cov}(\mathbf{b})\, \mathbf{a} \geq 0 \quad \text{for all } \mathbf{a}

or

\operatorname{Cov}(\tilde{\mathbf{b}}) - \operatorname{Cov}(\mathbf{b}) \succeq 0 .

This property is called positive semidefiniteness (see also covariance matrix as an efficiency criterion). So if the above inequality holds, it can be said that \mathbf{b} is better than \tilde{\mathbf{b}}.

Linearity

The least squares estimator is itself linear in \mathbf{y},

\mathbf{b} = (\mathbf{X}^{\top}\mathbf{X})^{-1}\mathbf{X}^{\top}\mathbf{y} .

The above inequality states that, according to the Gauss-Markov theorem, \mathbf{b} is a best linear unbiased estimator (English Best Linear Unbiased Estimator, BLUE for short) or a minimum-variance linear unbiased estimator, i.e. within the class of linear unbiased estimators it is the estimator with the smallest variance or covariance matrix. No information about the distribution of the disturbance variables is required for this property of the estimator. A strengthening of the BLUE property is the so-called BUE property (best unbiased estimator), which is not restricted to linear estimators. Often the maximum likelihood estimator is a solution that is BUE. In fact, the least squares estimator is a maximum likelihood estimator for normally distributed disturbance variables, and in that case the BUE property can be proven with the Lehmann-Scheffé theorem.

Proof

Given that the true relationship is described by a linear model \mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon}, the least squares estimator must be compared with all other linear estimators. In order to be able to make a comparison, the analysis is limited to the class of linear and unbiased estimators. Any estimator in this class, besides the least squares estimator \mathbf{b}, can be represented as

\tilde{\mathbf{b}} = \left( (\mathbf{X}^{\top}\mathbf{X})^{-1}\mathbf{X}^{\top} + \mathbf{D} \right) \mathbf{y}

with a ((k+1) \times n) matrix \mathbf{D}.

If \mathbf{D} = \mathbf{0}, the least squares estimator is obtained. The class of all linear estimators is thus given by

\tilde{\mathbf{b}} = \mathbf{C}\mathbf{y} ,

where the matrix \mathbf{C} is given by \mathbf{C} = (\mathbf{X}^{\top}\mathbf{X})^{-1}\mathbf{X}^{\top} + \mathbf{D}.

Restrictions on \mathbf{D} are now needed that ensure that \tilde{\mathbf{b}} is unbiased for \boldsymbol{\beta}. The covariance matrix of \tilde{\mathbf{b}} must also be found. The expected value of \tilde{\mathbf{b}} results in

\operatorname{E}(\tilde{\mathbf{b}}) = \operatorname{E}\left( \left( (\mathbf{X}^{\top}\mathbf{X})^{-1}\mathbf{X}^{\top} + \mathbf{D} \right) (\mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon}) \right) = \boldsymbol{\beta} + \mathbf{D}\mathbf{X}\boldsymbol{\beta} .

That is, \tilde{\mathbf{b}} is unbiased for \boldsymbol{\beta} if and only if \mathbf{D}\mathbf{X}\boldsymbol{\beta} = \mathbf{0} holds for all \boldsymbol{\beta}, and thus \mathbf{D}\mathbf{X} = \mathbf{0}.

It follows for the covariance matrix of \tilde{\mathbf{b}}, using \mathbf{D}\mathbf{X} = \mathbf{0}:

\operatorname{Cov}(\tilde{\mathbf{b}}) = \sigma^2 \left( (\mathbf{X}^{\top}\mathbf{X})^{-1}\mathbf{X}^{\top} + \mathbf{D} \right) \left( (\mathbf{X}^{\top}\mathbf{X})^{-1}\mathbf{X}^{\top} + \mathbf{D} \right)^{\top} = \sigma^2 (\mathbf{X}^{\top}\mathbf{X})^{-1} + \sigma^2 \mathbf{D}\mathbf{D}^{\top} .

It follows that

\operatorname{Cov}(\tilde{\mathbf{b}}) - \operatorname{Cov}(\mathbf{b}) = \sigma^2 \mathbf{D}\mathbf{D}^{\top} .

This matrix is always positive semidefinite - regardless of how \mathbf{D} is chosen - since a matrix multiplied by its own transpose is always positive semidefinite.
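
The decomposition \operatorname{Cov}(\tilde{\mathbf{b}}) - \operatorname{Cov}(\mathbf{b}) = \sigma^2 \mathbf{D}\mathbf{D}^{\top} can also be checked numerically. The following Python sketch builds an arbitrary matrix D with D X = 0, forms the corresponding linear unbiased estimator, and verifies that the difference of the two covariance matrices is positive semidefinite; the matrix sizes and the construction of D are chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 30, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
sigma2 = 2.0

XtX_inv = np.linalg.inv(X.T @ X)
cov_ols = sigma2 * XtX_inv                         # covariance of the least squares estimator

# an alternative linear unbiased estimator: C = (X'X)^{-1} X' + D with D X = 0
Z = rng.normal(size=(p, n))
D = Z - Z @ X @ XtX_inv @ X.T                      # rows of D orthogonal to the columns of X
C = XtX_inv @ X.T + D
cov_alt = sigma2 * C @ C.T                         # covariance of the alternative estimator

diff = cov_alt - cov_ols                           # equals sigma2 * D @ D.T
print(np.all(np.linalg.eigvalsh(diff) >= -1e-10))  # True: positive semidefinite
```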

Singular case, estimable functions

We now consider the so-called singular case, i.e. \operatorname{rank}(\mathbf{X}) < k + 1. Then \mathbf{X}^{\top}\mathbf{X} is also not of full rank, i.e. not invertible. The least squares estimator given above does not exist. One says that \boldsymbol{\beta} cannot be estimated or identified.

The singular case occurs if n < k + 1, or if fewer than k + 1 distinct regressor settings are observed, or if there are linear dependencies in the data matrix \mathbf{X}.

Now let \operatorname{rank}(\mathbf{X}) = r < k + 1. Then at best r-dimensional linear forms \mathbf{R}\boldsymbol{\beta} can be estimated linearly and unbiasedly, where \mathbf{R} is an (r \times (k+1)) matrix.

Estimability criterion

\mathbf{R}\boldsymbol{\beta} with an (s \times (k+1)) matrix \mathbf{R} is estimable if and only if there is an (s \times n) matrix \mathbf{A} such that \mathbf{R} = \mathbf{A}\mathbf{X} holds, i.e. if each row vector of \mathbf{R} is a linear combination of the row vectors of \mathbf{X}.

The estimability criterion can be formulated much more elegantly with pseudoinverses. Here, \mathbf{X}^{-} is called a pseudoinverse of \mathbf{X} if \mathbf{X}\mathbf{X}^{-}\mathbf{X} = \mathbf{X} holds.

\mathbf{R}\boldsymbol{\beta} with an (s \times (k+1)) matrix \mathbf{R} is estimable if and only if \mathbf{R}\mathbf{X}^{-}\mathbf{X} = \mathbf{R}, where \mathbf{X}^{-} is an arbitrary pseudoinverse of \mathbf{X}.
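
With the Moore-Penrose pseudoinverse, which is one particular pseudoinverse in the above sense, this criterion is easy to check numerically. The following Python sketch tests whether a linear form \mathbf{R}\boldsymbol{\beta} is estimable; the rank-deficient data matrix in the example is chosen only for illustration.

```python
import numpy as np

def is_estimable(R, X, tol=1e-10):
    """Check the estimability criterion R X^- X = R, using the
    Moore-Penrose pseudoinverse of X as one particular pseudoinverse."""
    X_pinv = np.linalg.pinv(X)
    return np.allclose(R @ X_pinv @ X, R, atol=tol)

# illustrative rank-deficient data matrix: third column = first column + second column
X = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 2.0],
              [1.0, 2.0, 3.0],
              [1.0, 3.0, 4.0]])

print(is_estimable(np.array([[1.0, 1.0, 2.0]]), X))  # True: lies in the row space of X
print(is_estimable(np.array([[0.0, 0.0, 1.0]]), X))  # False: not in the row space of X
```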

Example

For the quadratic regression equation y_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + \varepsilon_i, suppose observations were made at regressor values for which the resulting data matrix \mathbf{X} does not have full column rank.

A linear form \mathbf{R}\boldsymbol{\beta} is then estimable if the row vectors of \mathbf{R} are linear combinations of the row vectors of \mathbf{X} - for example, if the second row vector of \mathbf{R} is equal to the difference between the third and the first row vector of \mathbf{X}.

On the other hand, \mathbf{R}\boldsymbol{\beta} cannot be estimated if none of the row vectors of \mathbf{R} can be represented as a linear combination of the row vectors of \mathbf{X}.

Gauss-Markov theorem in the singular case

Let \mathbf{R}\boldsymbol{\beta} be estimable. Then

\widehat{\mathbf{R}\boldsymbol{\beta}} = \mathbf{R} (\mathbf{X}^{\top}\mathbf{X})^{-} \mathbf{X}^{\top} \mathbf{y}

is the best linear unbiased estimator of \mathbf{R}\boldsymbol{\beta}, where (\mathbf{X}^{\top}\mathbf{X})^{-} is an arbitrary pseudoinverse of \mathbf{X}^{\top}\mathbf{X}.

The estimator can also be expressed without a pseudoinverse:

\widehat{\mathbf{R}\boldsymbol{\beta}} = \mathbf{R} \hat{\boldsymbol{\beta}} ,

where \hat{\boldsymbol{\beta}} is an arbitrary solution of the normal equation system \mathbf{X}^{\top}\mathbf{X} \boldsymbol{\beta} = \mathbf{X}^{\top}\mathbf{y}.
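
One concrete way to compute this estimator is to use the Moore-Penrose pseudoinverse of \mathbf{X}^{\top}\mathbf{X} as the generalized inverse, as in the following Python sketch (the function name is chosen here only for illustration).

```python
import numpy as np

def blue_singular(R, X, y):
    """Best linear unbiased estimator of an estimable linear form R @ beta
    in the singular case, using the Moore-Penrose pseudoinverse of X'X
    as one admissible pseudoinverse."""
    XtX_pinv = np.linalg.pinv(X.T @ X)
    return R @ XtX_pinv @ X.T @ y
```

Equivalently, one can take the particular solution beta_hat = np.linalg.pinv(X) @ y of the normal equation system and return R @ beta_hat.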

Generalized Least Squares Estimation

The generalized least squares (GLS) estimator developed by Aitken extends the Gauss-Markov theorem to the case in which the vector of disturbance variables has a non-scalar covariance matrix, i.e. \operatorname{Cov}(\boldsymbol{\varepsilon}) = \sigma^2 \boldsymbol{\Omega} with \boldsymbol{\Omega} \neq \mathbf{I}_n. The GLS estimator is also BLUE.
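
A minimal Python sketch of the Aitken estimator \mathbf{b} = (\mathbf{X}^{\top}\boldsymbol{\Omega}^{-1}\mathbf{X})^{-1} \mathbf{X}^{\top}\boldsymbol{\Omega}^{-1}\mathbf{y}, assuming the weighting matrix \boldsymbol{\Omega} is known, symmetric and invertible (the function name is chosen only for illustration):

```python
import numpy as np

def gls(X, y, Omega):
    """Generalized least squares (Aitken) estimator for Cov(eps) = sigma^2 * Omega,
    assuming Omega is known, symmetric and invertible."""
    Omega_inv = np.linalg.inv(Omega)
    return np.linalg.solve(X.T @ Omega_inv @ X, X.T @ Omega_inv @ y)
```

For \boldsymbol{\Omega} = \mathbf{I}_n this reduces to the ordinary least squares estimator.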


Literature

  • George G. Judge, R. Carter Hill, W. Griffiths, Helmut Lütkepohl, T. C. Lee: Introduction to the Theory and Practice of Econometrics. 2nd edition. John Wiley & Sons, New York / Chichester / Brisbane / Toronto / Singapore 1988, ISBN 978-0471624141.

References

  1. International Statistical Institute: Glossary of statistical terms.
  2. Ulrich Kockelkorn: Linear Statistical Methods. De Gruyter, 2018, ISBN 978-3-486-78782-5, p. 329 (accessed via De Gruyter Online).
  3. Ludwig von Auer: Econometrics. An Introduction. 6th revised and updated edition. Springer, 2013, ISBN 978-3-642-40209-8, p. 49.
  4. Jeffrey Marc Wooldridge: Introductory Econometrics: A Modern Approach. 5th edition. Nelson Education, 2015, p. 24.
  5. George G. Judge, R. Carter Hill, W. Griffiths, Helmut Lütkepohl, T. C. Lee: Introduction to the Theory and Practice of Econometrics. 2nd edition. John Wiley & Sons, New York / Chichester / Brisbane / Toronto / Singapore 1988, ISBN 978-0471624141, p. 202.
  6. George G. Judge, R. Carter Hill, W. Griffiths, Helmut Lütkepohl, T. C. Lee: Introduction to the Theory and Practice of Econometrics. 2nd edition. John Wiley & Sons, New York / Chichester / Brisbane / Toronto / Singapore 1988, ISBN 978-0471624141, p. 203.
  7. International Statistical Institute: Glossary of statistical terms.
  8. George G. Judge, R. Carter Hill, W. Griffiths, Helmut Lütkepohl, T. C. Lee: Introduction to the Theory and Practice of Econometrics. 2nd edition. John Wiley & Sons, New York / Chichester / Brisbane / Toronto / Singapore 1988, ISBN 978-0471624141, p. 205.
  9. C. R. Rao, H. Toutenburg, Shalabh, C. Heumann: Linear Models and Generalizations. 3rd edition. Springer-Verlag, 2008.
  10. F. Pukelsheim: Optimal Design of Experiments. Wiley, New York 1993.
  11. A. C. Aitken: On Least Squares and Linear Combinations of Observations. In: Proceedings of the Royal Society of Edinburgh. 55, 1935, pp. 42-48.
  12. David S. Huang: Regression and Econometric Methods. John Wiley & Sons, New York 1970, ISBN 0-471-41754-8, pp. 127-147.