Classical linear model of normal regression

In statistics, classical normal regression is a regression that, in addition to the Gauss-Markov assumptions, includes the assumption that the disturbance variables are normally distributed. The associated model is called the classical linear model of normal regression. The normality assumption on the disturbance variables is required for statistical inference, i.e. it is needed to compute confidence intervals and to test general linear hypotheses. In addition, further properties of the least squares estimator can be derived from the normality assumption.

Starting position

As a starting point, we consider a typical multiple linear regression model with given data for $n$ statistical units. The relationship between the dependent variable and the independent variables can be written as

$y_i = \beta_0 + x_{i1}\beta_1 + x_{i2}\beta_2 + \dotsb + x_{ik}\beta_k + \varepsilon_i, \quad i = 1, \dotsc, n.$

In matrix notation this reads

$\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix} = \begin{pmatrix} 1 & x_{11} & \cdots & x_{1k} \\ 1 & x_{21} & \cdots & x_{2k} \\ \vdots & \vdots & & \vdots \\ 1 & x_{n1} & \cdots & x_{nk} \end{pmatrix} \begin{pmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_k \end{pmatrix} + \begin{pmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end{pmatrix}$

or in compact notation

$\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon}.$

Here $\boldsymbol{\beta} = (\beta_0, \beta_1, \dotsc, \beta_k)^\top$ represents the vector of unknown parameters that must be estimated from the data.

Classic linear model

The multiple linear regression model

$\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon}$

is called "classical" if the following assumptions hold:

  • A1: The disturbance variables have an expected value of zero, $\operatorname{E}(\varepsilon_i) = 0$ for all $i$, which means that we can assume the model is correct on average.
  • A2: The disturbance variables are uncorrelated, $\operatorname{Cov}(\varepsilon_i, \varepsilon_j) = 0$ for $i \neq j$, and have a homogeneous variance, $\operatorname{Var}(\varepsilon_i) = \sigma^2$ (homoscedasticity). Both together result in $\operatorname{E}(\varepsilon_i \varepsilon_j) = \begin{cases} \sigma^2 & \text{for } i = j \\ 0 & \text{for } i \neq j \end{cases}$
  • A3: The data matrix $\mathbf{X}$ is non-stochastic and has full column rank, $\operatorname{rank}(\mathbf{X}) = k + 1$.

The assumptions A1–A3 can be summarized as $\boldsymbol{\varepsilon} \sim (\mathbf{0}, \sigma^2 \mathbf{I}_n)$. Instead of considering the variances and covariances of the disturbance variables individually, they are summarized in the following variance-covariance matrix:

$\operatorname{Cov}(\boldsymbol{\varepsilon}) = \operatorname{E}(\boldsymbol{\varepsilon}\boldsymbol{\varepsilon}^\top) = \sigma^2 \mathbf{I}_n = \begin{pmatrix} \sigma^2 & 0 & \cdots & 0 \\ 0 & \sigma^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma^2 \end{pmatrix}$

Thus, for the vector of dependent variables,

$\operatorname{E}(\mathbf{y}) = \mathbf{X}\boldsymbol{\beta}$ with $\operatorname{Cov}(\mathbf{y}) = \operatorname{Cov}(\boldsymbol{\varepsilon}) = \sigma^2 \mathbf{I}_n$.

If, in addition to the above classical linear regression model (CLRM for short), also called the standard model of linear regression, the assumption of normally distributed disturbance variables is imposed, then one speaks of the classical linear model of normal regression. This model is given by

$\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon}$ with $\boldsymbol{\varepsilon} \sim \mathcal{N}(\mathbf{0}, \sigma^2 \mathbf{I}_n)$.
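As an illustration, the following minimal Python sketch (using numpy, with hypothetical values for the parameters and the regressors) simulates data from this model: the disturbances are drawn from $\mathcal{N}(\mathbf{0}, \sigma^2\mathbf{I}_n)$ and the dependent variable is formed as $\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon}$.

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 200, 2                      # hypothetical sample size and number of regressors
beta = np.array([1.0, 0.5, -2.0])  # hypothetical parameter vector (beta_0, beta_1, beta_2)
sigma2 = 0.25                      # hypothetical disturbance variance

# Design matrix with an intercept column; once drawn, the regressor values are
# treated as fixed (non-stochastic) and X has full column rank (assumption A3)
X = np.column_stack([np.ones(n), rng.uniform(-1.0, 1.0, size=(n, k))])

# Normally distributed disturbances with mean zero and homogeneous variance (A1, A2 plus normality)
eps = rng.normal(loc=0.0, scale=np.sqrt(sigma2), size=n)

# Classical linear model of normal regression: y = X beta + eps
y = X @ beta + eps

print(X.shape, y.shape)       # (200, 3) (200,)
print(eps.mean(), eps.var())  # close to 0 and to sigma2
```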

Maximum likelihood estimation

Estimation of the slope parameter

The unknown variance parameter $\sigma^2$ and the parameter vector $\boldsymbol{\beta}$ of the normal linear model can be estimated using maximum likelihood estimation. To do this, the probability density of an individual disturbance variable, which follows a normal distribution, is required first. It is

$f(\varepsilon_i) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{\varepsilon_i^2}{2\sigma^2}\right)$, where $\sigma^2$ represents the variance of the disturbance variables.

Since the disturbance variable can also be represented as $\varepsilon_i = y_i - \mathbf{x}_i^\top\boldsymbol{\beta}$, the individual density can also be written as

$f(y_i \mid \mathbf{x}_i; \boldsymbol{\beta}, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(y_i - \mathbf{x}_i^\top\boldsymbol{\beta})^2}{2\sigma^2}\right).$

If stochastic independence of the disturbance variables is assumed, the joint probability density can be represented as the product of the individual marginal densities:

$f(y_1, \dotsc, y_n \mid \mathbf{X}; \boldsymbol{\beta}, \sigma^2) = \prod_{i=1}^{n} f(y_i \mid \mathbf{x}_i; \boldsymbol{\beta}, \sigma^2)$

The joint density can also be written compactly as

$f(\mathbf{y} \mid \mathbf{X}; \boldsymbol{\beta}, \sigma^2) = \left(2\pi\sigma^2\right)^{-n/2} \exp\!\left(-\frac{(\mathbf{y} - \mathbf{X}\boldsymbol{\beta})^\top(\mathbf{y} - \mathbf{X}\boldsymbol{\beta})}{2\sigma^2}\right)$

Since we are not interested in the probability of a particular outcome for given parameters, but rather in those parameters that fit the data best, i.e. that are most plausible as the true parameters given the observations, the likelihood function is now formulated as the joint probability density viewed as a function of the parameters:

$L(\boldsymbol{\beta}, \sigma^2 \mid \mathbf{y}, \mathbf{X}) = \left(2\pi\sigma^2\right)^{-n/2} \exp\!\left(-\frac{(\mathbf{y} - \mathbf{X}\boldsymbol{\beta})^\top(\mathbf{y} - \mathbf{X}\boldsymbol{\beta})}{2\sigma^2}\right)$

Taking the logarithm of the likelihood function yields the log-likelihood function (also called the logarithmic plausibility function) as a function of the parameters:

$\ell(\boldsymbol{\beta}, \sigma^2) = \log L(\boldsymbol{\beta}, \sigma^2 \mid \mathbf{y}, \mathbf{X}) = -\frac{n}{2}\log(2\pi) - \frac{n}{2}\log(\sigma^2) - \frac{(\mathbf{y} - \mathbf{X}\boldsymbol{\beta})^\top(\mathbf{y} - \mathbf{X}\boldsymbol{\beta})}{2\sigma^2}$

This function must now be maximized with respect to the parameters. The following maximization problem arises:

$(\hat{\boldsymbol{\beta}}, \tilde{\sigma}^2) = \underset{\boldsymbol{\beta},\, \sigma^2}{\operatorname{arg\,max}}\ \ell(\boldsymbol{\beta}, \sigma^2)$

The two score functions (the partial derivatives of the log-likelihood) are:

$\frac{\partial \ell}{\partial \boldsymbol{\beta}} = \frac{1}{\sigma^2}\,\mathbf{X}^\top(\mathbf{y} - \mathbf{X}\boldsymbol{\beta}) \qquad \text{and} \qquad \frac{\partial \ell}{\partial \sigma^2} = -\frac{n}{2\sigma^2} + \frac{(\mathbf{y} - \mathbf{X}\boldsymbol{\beta})^\top(\mathbf{y} - \mathbf{X}\boldsymbol{\beta})}{2\sigma^4}$

Setting the partial derivative with respect to $\boldsymbol{\beta}$ to zero, one sees that the resulting condition

$\mathbf{X}^\top(\mathbf{y} - \mathbf{X}\boldsymbol{\beta}) = \mathbf{0}$

is already known from the derivation of the least squares estimator (see estimation of the parameter vector with least squares estimation), where it appears as the system of normal equations. The maximum likelihood optimization problem for $\boldsymbol{\beta}$ thus reduces to the least squares optimization problem. It follows that the least squares estimator (LS estimator for short) coincides with the ML estimator:

$\hat{\boldsymbol{\beta}}_{\text{LS}} = \hat{\boldsymbol{\beta}}_{\text{ML}} = (\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\mathbf{y}$

This additional assumption (the normality assumption) therefore makes no difference for the estimation of the parameter vector. If the disturbances are normally distributed, $\hat{\boldsymbol{\beta}}$ is the maximum likelihood estimator and, by the Lehmann-Scheffé theorem, the best unbiased estimator (BUE). As a consequence of the equality of the least squares and maximum likelihood estimators, the LS and ML residuals must also be identical:

$\hat{\boldsymbol{\varepsilon}}_{\text{LS}} = \hat{\boldsymbol{\varepsilon}}_{\text{ML}} = \mathbf{y} - \mathbf{X}\hat{\boldsymbol{\beta}}$
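This equality can be checked numerically. The following sketch (assuming numpy and scipy are available, and using hypothetical simulated data) maximizes the log-likelihood of the normal regression model numerically and compares the result with the least squares solution of the normal equations.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Hypothetical data generated from y = X beta + eps with eps ~ N(0, sigma^2 I)
n = 300
X = np.column_stack([np.ones(n), rng.uniform(-1.0, 1.0, size=(n, 2))])
beta_true = np.array([1.0, 0.5, -2.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

def neg_log_likelihood(params):
    """Negative log-likelihood of the normal regression model;
    params = (beta_0, ..., beta_k, log sigma^2)."""
    beta, log_s2 = params[:-1], params[-1]
    s2 = np.exp(log_s2)              # parameterize via log sigma^2 so that sigma^2 > 0
    resid = y - X @ beta
    return 0.5 * (n * np.log(2.0 * np.pi * s2) + resid @ resid / s2)

# Numerical maximum likelihood estimate of (beta, log sigma^2)
start = np.zeros(X.shape[1] + 1)
result = minimize(neg_log_likelihood, start, method="BFGS")
beta_ml = result.x[:-1]

# Least squares estimate from the normal equations X'X beta = X'y
beta_ls = np.linalg.solve(X.T @ X, X.T @ y)

print(np.allclose(beta_ml, beta_ls, atol=1e-4))  # True: the ML and LS estimators coincide
```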

Estimation of the variance parameter

The maximum likelihood estimator for the variance, which results from the second score function together with the fact that the ML residuals are $\hat{\boldsymbol{\varepsilon}} = \mathbf{y} - \mathbf{X}\hat{\boldsymbol{\beta}}$, is:

$\tilde{\sigma}^2 = \frac{\hat{\boldsymbol{\varepsilon}}^\top\hat{\boldsymbol{\varepsilon}}}{n} = \frac{1}{n}\sum_{i=1}^{n}\hat{\varepsilon}_i^2$

The ML estimator is thus the average residual sum of squares. However, this estimator does not satisfy common quality criteria for point estimators, because it is not an unbiased estimator of the variance of the disturbance variables (an unbiased estimator divides by $n - k - 1$ rather than $n$). The value of the log-likelihood function, evaluated at the estimated parameters, is:

$\ell(\hat{\boldsymbol{\beta}}, \tilde{\sigma}^2) = -\frac{n}{2}\left(\log(2\pi) + \log(\tilde{\sigma}^2) + 1\right)$
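The bias can be illustrated with a short simulation. The sketch below (numpy, with hypothetical data and parameter values) compares the ML estimator $\tilde{\sigma}^2 = \hat{\boldsymbol{\varepsilon}}^\top\hat{\boldsymbol{\varepsilon}}/n$ with the unbiased estimator that divides by $n - k - 1$, averaged over many replications.

```python
import numpy as np

rng = np.random.default_rng(2)

n, k = 30, 2
X = np.column_stack([np.ones(n), rng.uniform(-1.0, 1.0, size=(n, k))])
beta = np.array([1.0, 0.5, -2.0])
sigma2 = 1.0

ml_estimates, unbiased_estimates = [], []
for _ in range(5000):
    y = X @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)             # LS = ML estimate of beta
    resid = y - X @ beta_hat
    ml_estimates.append(resid @ resid / n)                   # ML estimator (biased)
    unbiased_estimates.append(resid @ resid / (n - k - 1))   # unbiased estimator

# The ML estimator underestimates sigma^2 by the factor (n - k - 1) / n on average
print(np.mean(ml_estimates))        # roughly sigma2 * (n - k - 1) / n = 0.9
print(np.mean(unbiased_estimates))  # roughly sigma2 = 1.0
```

For small $n$ the downward bias of the ML estimator by the factor $(n - k - 1)/n$ is clearly visible; for large $n$ the two estimators differ only negligibly.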

Generalization

While the classical linear model of normal regression assumes that the disturbance variable (the unobservable random component) is normally distributed, in generalized linear models the dependent variable may follow any distribution from the class of the exponential family.
