Wald test


In econometrics, the Wald test is a parametric statistical test developed in 1939 by Abraham Wald (1902–1950). The test can be used to determine the distribution of a suitable test statistic under the validity of the null hypothesis. A general test statistic for a wide variety of econometric questions is the Wald statistic, which asymptotically follows a chi-square distribution. The Wald test is based on the fact that the maximum likelihood estimator of the unknown parameters converges in distribution to a normal distribution as the number of observations grows large. Many tests can therefore be seen as special cases of the Wald test.

One-dimensional case

From maximum likelihood theory it is known that the maximum likelihood estimator $\hat{\theta}_{ML}$ of the unknown parameter $\theta$ converges in distribution to a normal distribution as the number of observations grows large. Let $\theta$ be an unknown parameter in the population and $\theta_0$ a given value. To test the null hypothesis against the corresponding alternative hypothesis,

  $H_0\colon \theta = \theta_0$   against   $H_1\colon \theta \neq \theta_0$,

one of the following test statistics can be used:

  $W = (\hat{\theta}_{ML} - \theta_0)\,\sqrt{I(\hat{\theta}_{ML})}$   or   $W = (\hat{\theta}_{ML} - \theta_0)\,\sqrt{J(\hat{\theta}_{ML})}$,

both of which are asymptotically standard normally distributed under the null hypothesis. Here $I(\hat{\theta}_{ML})$ denotes the Fisher information and $J(\hat{\theta}_{ML})$ the expected Fisher information. Both test statistics are approximate pivotal quantities for $\theta$ and are called Wald statistics.

Looking at the squared test statistic, the following applies:

  $W^2 = (\hat{\theta}_{ML} - \theta_0)^2\, I(\hat{\theta}_{ML}) \;\overset{a}{\sim}\; \chi^2(1)$,

that is, it is asymptotically chi-square distributed for large samples. This holds because a squared standard normally distributed random variable follows a chi-square distribution with one degree of freedom.
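
The following is a minimal numerical sketch (not part of the original article) of the one-dimensional Wald test for a Bernoulli proportion; the sample, the null value and all variable names are hypothetical. For $n$ Bernoulli observations the expected Fisher information is $n/(p(1-p))$, evaluated here at the maximum likelihood estimate.

```python
# Sketch: one-dimensional Wald test for a Bernoulli proportion (hypothetical data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.binomial(1, 0.35, size=200)            # hypothetical sample of 0/1 observations
p_hat = x.mean()                               # maximum likelihood estimator of p
p0 = 0.5                                       # value under the null hypothesis

fisher_info = len(x) / (p_hat * (1 - p_hat))   # expected Fisher information, plugged in at the MLE
w = (p_hat - p0) * np.sqrt(fisher_info)        # Wald statistic, asymptotically N(0, 1) under H0
p_value = 2 * stats.norm.sf(abs(w))            # two-sided p-value

print(f"W = {w:.3f}, p-value = {p_value:.4f}")
# Equivalently, W^2 is asymptotically chi-square distributed with 1 degree of freedom:
print(f"W^2 = {w**2:.3f}, p-value = {stats.chi2.sf(w**2, df=1):.4f}")
```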

Wald confidence interval

Let $\hat{\theta}_{ML}$ denote the maximum likelihood estimator for $\theta$. For the probability that the Wald statistic lies within the quantiles of the standard normal distribution, it then holds that

  $P\left(-z_{1-\alpha/2} \le \sqrt{I(\hat{\theta}_{ML})}\,(\hat{\theta}_{ML} - \theta) \le z_{1-\alpha/2}\right) \approx 1 - \alpha$,

and this gives the $(1-\alpha)$ Wald confidence interval as

  $\left[\hat{\theta}_{ML} - \frac{z_{1-\alpha/2}}{\sqrt{I(\hat{\theta}_{ML})}},\; \hat{\theta}_{ML} + \frac{z_{1-\alpha/2}}{\sqrt{I(\hat{\theta}_{ML})}}\right]$.
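
A minimal sketch of the corresponding Wald confidence interval, again for a hypothetical Bernoulli proportion with the same plug-in Fisher information as in the sketch above:

```python
# Sketch: (1 - alpha) Wald confidence interval for a Bernoulli proportion (hypothetical values).
import numpy as np
from scipy import stats

n, p_hat = 200, 0.31                       # hypothetical sample size and MLE
alpha = 0.05
fisher_info = n / (p_hat * (1 - p_hat))    # expected Fisher information at the MLE
z = stats.norm.ppf(1 - alpha / 2)          # standard normal quantile
half_width = z / np.sqrt(fisher_info)
print(f"95% Wald CI: [{p_hat - half_width:.3f}, {p_hat + half_width:.3f}]")
```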

Multi-dimensional case

In the multi-dimensional case, where $\hat{\theta}$ is the vector of estimators and $\hat{V}$ the asymptotic nonsingular covariance matrix of the maximum likelihood estimator, the null hypothesis $H_0\colon c(\theta) = 0$ can be tested with the following test statistic:

  $W = c(\hat{\theta})^\top \left[ C(\hat{\theta})\, \hat{V}\, C(\hat{\theta})^\top \right]^{-1} c(\hat{\theta})$,

where $C(\theta) = \partial c(\theta)/\partial \theta^\top$ is the Jacobian of the restriction function. $W$ is then asymptotically chi-square distributed with $q$ degrees of freedom, where $q$ is the number of restrictions. For this, the restriction function $c$ must be continuously differentiable and $C(\theta)$ must have full rank.
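
A minimal sketch of the multi-dimensional case under assumed (hypothetical) estimates, covariance matrix and restriction function; the nonlinear restriction $c(\theta) = \theta_1 - \theta_2^2$ and all numbers are purely illustrative:

```python
# Sketch: Wald test of a restriction c(theta) = 0 from an estimate, its asymptotic
# covariance matrix and the Jacobian C(theta) = d c / d theta' (hypothetical inputs).
import numpy as np
from scipy import stats

theta_hat = np.array([0.9, 1.1])                 # hypothetical ML estimates
V_hat = np.array([[0.04, 0.01],                  # hypothetical asymptotic covariance matrix
                  [0.01, 0.09]])

def c(theta):                                    # restriction: theta_1 - theta_2^2 = 0
    return np.array([theta[0] - theta[1] ** 2])

def C(theta):                                    # Jacobian of c, shape (q, k)
    return np.array([[1.0, -2.0 * theta[1]]])

r = c(theta_hat)
J = C(theta_hat)
W = r @ np.linalg.solve(J @ V_hat @ J.T, r)      # Wald statistic
q = len(r)                                       # number of restrictions
print(f"W = {W:.3f}, p-value = {stats.chi2.sf(W, df=q):.4f}")
```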

Wald statistic for general linear hypotheses

To test general linear hypotheses, the asymptotic distribution of the Wald statistic plays a major role. Let $R$ be a $q \times (k+1)$ restriction matrix with $\operatorname{rank}(R) = q$. Let it further be assumed that the restrictions on the parameter vector $\beta$ can be expressed as $H_0\colon R\beta = r$, where $r$ is a $q$-vector of known constants. Under certain conditions it can be shown that under the null hypothesis the weighted hypothesis sum of squares

  $W = (R\hat{\beta} - r)^\top \left[ R\, \sigma^2 (X^\top X)^{-1} R^\top \right]^{-1} (R\hat{\beta} - r)$

follows a chi-square distribution with $q$ (the number of restrictions) degrees of freedom. It measures how far the estimate $R\hat{\beta}$ deviates from the null hypothesis $R\beta = r$. The term $(R\hat{\beta} - r)^\top (R\hat{\beta} - r)$ is the corresponding sum of squared deviations (analogous to the residual sum of squares). This sum of squared deviations is weighted with the inverse of the covariance matrix of $R\hat{\beta}$ under the null hypothesis, because deviations of the same size carry less evidence against the null hypothesis when the corresponding covariance is large. If the unbiased estimator $\hat{\sigma}^2$ of the disturbance variance is used, it can be shown that the Wald statistic divided by the number of restrictions corresponds exactly to the F statistic of the multiple linear test problem, $W/q = F$.
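
A minimal sketch of the Wald statistic for a linear hypothesis $R\beta = r$ in a simulated linear model; the design, the true coefficients and the chosen restrictions are hypothetical, and the disturbance variance is taken as known:

```python
# Sketch: Wald statistic for H0: R beta = r in y = X beta + u with known sigma^2 (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, sigma2 = 500, 1.0
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # intercept + 2 regressors
beta_true = np.array([1.0, 0.5, 0.0])
y = X @ beta_true + rng.normal(scale=np.sqrt(sigma2), size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)                 # OLS / ML estimate

# Hypothesis: the two slope coefficients are equal, and the second slope is zero
R = np.array([[0.0, 1.0, -1.0],
              [0.0, 0.0,  1.0]])
r = np.zeros(2)

diff = R @ beta_hat - r
cov = R @ (sigma2 * np.linalg.inv(X.T @ X)) @ R.T            # covariance of R beta_hat under H0
W = diff @ np.linalg.solve(cov, diff)                        # ~ chi^2(q) under H0
q = R.shape[0]
print(f"W = {W:.3f}, p-value = {stats.chi2.sf(W, df=q):.4f}")
```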

Examples

One-sample Gaussian test as a special case of the Wald test

If a characteristic $X$ in a population is normally distributed as $\mathcal{N}(\mu, \sigma^2)$ with unknown parameter $\mu$ and known $\sigma^2$, then the sample mean

  $\bar{X} = \frac{1}{n} \sum_{i=1}^{n} X_i$

is also the maximum likelihood estimator for $\mu$. One pair of hypotheses for the one-sample Gaussian test is:

  $H_0\colon \mu = \mu_0$   against   $H_1\colon \mu \neq \mu_0$

and the test statistic according to Wald would be

  $W = \frac{\bar{X} - \mu_0}{\sigma / \sqrt{n}} = \sqrt{n}\, \frac{\bar{X} - \mu_0}{\sigma}$.

The one-sample Gaussian test can thus be regarded as a special case of the Wald test.
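
A minimal sketch of the one-sample Gaussian test written as a Wald test, with simulated data and known $\sigma$; the sample values and the null value $\mu_0$ are hypothetical:

```python
# Sketch: one-sample Gaussian test as a Wald test with known variance (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
sigma, mu0 = 2.0, 10.0
x = rng.normal(loc=10.4, scale=sigma, size=100)    # hypothetical sample

x_bar = x.mean()                                   # MLE of mu
W = np.sqrt(len(x)) * (x_bar - mu0) / sigma        # Wald statistic = Gauss test statistic
p_value = 2 * stats.norm.sf(abs(W))
print(f"W = {W:.3f}, p-value = {p_value:.4f}")
```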

Global F-test as a special case of the Wald test

Another special case of the Wald test is the global F-test. It tests whether at least one explanatory variable provides explanatory content for the model; if this is not the case, that is, if the null hypothesis is not rejected, the model is of no use. The null hypothesis of the F-test for the overall significance of the model states that all explanatory variables have no influence on the dependent variable; the alternative hypothesis states that at least one explanatory variable has an influence on it. Both the explanatory variables and the dependent variable can be binary (categorical) or metric. The Wald test can then be used to test the hypotheses (excluding the intercept):

  $H_0\colon \beta_1 = \beta_2 = \dots = \beta_k = 0$   against   $H_1\colon \beta_j \neq 0$ for at least one $j \in \{1, \dots, k\}$.
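
A minimal sketch of the global test with simulated data, using the unbiased estimator $\hat{\sigma}^2$ of the disturbance variance so that $W/q$ equals the usual F statistic; the data-generating process is hypothetical:

```python
# Sketch: global F statistic as a Wald test on all slope coefficients (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])   # intercept + k regressors
beta_true = np.array([1.0, 0.3, 0.0, -0.2])
y = X @ beta_true + rng.normal(size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (n - k - 1)                     # unbiased estimator of the disturbance variance

R = np.hstack([np.zeros((k, 1)), np.eye(k)])                 # restrictions exclude the intercept
diff = R @ beta_hat                                          # here r = 0
cov = R @ (sigma2_hat * np.linalg.inv(X.T @ X)) @ R.T
W = diff @ np.linalg.solve(cov, diff)                        # Wald statistic

F = W / k                                                    # F statistic of overall significance
print(f"F = {F:.3f}, p-value = {stats.f.sf(F, k, n - k - 1):.4f}")
```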

Alternatives

The likelihood ratio test offers an alternative to the Wald test. Although it is more computationally demanding, it also shows better properties in small samples. Another alternative is the Lagrange multiplier test (LM test, see also Lagrange multiplier). Asymptotically, however, these three tests are equivalent.

Literature

  • Wald's W-Statistics. In: Encyclopedia of Statistical Sciences. Wiley, Hoboken 2006, pp. 9028–9029.
  • Abraham Wald: Tests of Statistical Hypotheses Concerning Several Parameters When the Number of Observations is Large. In: Transactions of the American Mathematical Society. Vol. 54, No. 3, Nov. 1943, pp. 426–482, doi:10.1090/S0002-9947-1943-0012401-3, JSTOR 1990256.
  • Tim F. Liao: Comparing Social Groups: Wald Statistics for Testing Equality Among Multiple Logit Models. In: International Journal of Comparative Sociology. Vol. 45, No. 1–2, 2004, pp. 3–16, doi:10.1177/0020715204048308.
  • Robert F. Engle: Wald, Likelihood Ratio and Lagrange Multiplier Tests in Econometrics. In: Zvi Griliches, Michael D. Intriligator (eds.): Handbook of Econometrics. Vol. 2, Elsevier, Amsterdam et al. 1984, pp. 775–826.

References

  1. Leonhard Held, Daniel Sabanés Bové: Applied Statistical Inference: Likelihood and Bayes. Springer, Heidelberg/New York/Dordrecht/London 2014, ISBN 978-3-642-37886-7, p. 99.
  2. Leonhard Held, Daniel Sabanés Bové: Applied Statistical Inference: Likelihood and Bayes. Springer, Heidelberg/New York/Dordrecht/London 2014, ISBN 978-3-642-37886-7, p. 100.
  3. George G. Judge, R. Carter Hill, W. Griffiths, Helmut Lütkepohl, T. C. Lee: Introduction to the Theory and Practice of Econometrics. 2nd edition. John Wiley & Sons, New York/Chichester/Brisbane/Toronto/Singapore 1988, ISBN 0-471-62414-4, p. 109.
  4. Jeffrey Marc Wooldridge: Introductory Econometrics: A Modern Approach. 4th edition. Nelson Education, 2015, p. 810.
  5. Ludwig Fahrmeir, Rita Künstler, Iris Pigeot, Gerhard Tutz: Statistik. Der Weg zur Datenanalyse. 8th revised and expanded edition. Springer Spektrum, Berlin/Heidelberg 2016, ISBN 978-3-662-50371-3, p. 458.