Standard error


The standard error or sampling error is a measure of dispersion for an estimator $\hat{\theta}$ of an unknown parameter $\theta$ of the population. The standard error is defined as the standard deviation of the estimator, i.e. the positive square root of its variance, $\sigma(\hat{\theta}) = \sqrt{\operatorname{Var}(\hat{\theta})}$. In the natural sciences and metrology, the term standard uncertainty, coined by the GUM (Guide to the Expression of Uncertainty in Measurement), is also used.

In the case of an unbiased estimator, the standard error is therefore a measure of the average deviation of the estimated parameter value from the true parameter value. The smaller the standard error, the more precisely the unknown parameter can be estimated using the estimator. Among other things, the standard error depends on

  • the sample size and
  • the variance in the population.

In general, the larger the sample size, the smaller the standard error; the smaller the variance, the smaller the standard error.

The standard error also plays an important role in the calculation of estimation errors, confidence intervals and test statistics.

Interpretation

The standard error provides information about the quality of the estimated parameter. The more individual values there are, the smaller the standard error and the more accurately the unknown parameter can be estimated. The standard error makes the measured spread (standard deviation) of two data sets with different sample sizes comparable by normalizing the standard deviation to the sample size.

If the unknown parameter is estimated with the help of several samples, the results will vary from sample to sample. Of course, this variation does not come from a variation of the unknown parameter (because it is fixed), but from random influences, e.g. measurement inaccuracies. The standard error is the standard deviation of the estimated parameter across many samples. In general, the following applies: to halve the standard error, the sample size must be quadrupled.
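This quadrupling rule can be checked by simulating repeated sampling. The following is a minimal Python sketch (using NumPy; the population mean and standard deviation are invented for illustration), not part of the original example:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 100.0, 15.0      # hypothetical population mean and standard deviation
trials = 20_000              # number of simulated samples per sample size

for n in (25, 100):          # quadrupling the sample size ...
    means = rng.normal(mu, sigma, size=(trials, n)).mean(axis=1)
    # ... roughly halves the empirical spread of the sample means
    print(n, round(means.std(ddof=1), 3))   # approx. 3.0 and 1.5, i.e. sigma/sqrt(n)
```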

In contrast to this, the standard deviation describes the actual spread in a population, which is present even with the highest measurement accuracy and an infinite number of individual measurements (e.g. the distribution of weight, height or monthly income). It shows whether the individual values are close together or whether the data are widely spread.

Example

Suppose you study the population of high school children with respect to their intelligence performance. The unknown parameter is thus the mean intelligence performance $\mu$ of the children attending high school. If a random sample of size $n$ (i.e. with $n$ children) is drawn from this population, then the mean value $\bar{x}$ can be calculated from all measurement results. If, after this sample, another random sample with the same number of children is drawn and its mean value is determined, the two mean values will not match exactly. If a large number of further random samples of size $n$ are drawn, the spread of all empirically determined mean values around the mean value of the population can be determined. This spread is the standard error. Because the mean of the sample means is the best estimate of the population mean, the standard error is the dispersion of the empirical means around the population mean. It does not show the intelligence distribution of the children, but the accuracy of the calculated mean.
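A minimal simulation sketch of this thought experiment (Python with NumPy; the IQ-like population values and the sample size are assumptions for illustration) compares the empirical spread of many sample means with the theoretical value $\sigma/\sqrt{n}$:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n = 100.0, 15.0, 50       # assumed population mean/sd and sample size
# draw 10,000 independent samples of size n and compute each sample's mean
means = rng.normal(mu, sigma, size=(10_000, n)).mean(axis=1)

print(means.std(ddof=1))             # empirical spread of the sample means
print(sigma / np.sqrt(n))            # theoretical standard error sigma/sqrt(n)
```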

Notation

Various notations are used for the standard error to distinguish it from the standard deviation of the population and to make it clear that it refers to the spread of a parameter estimated from samples:

  • $\sigma(\hat{\theta})$,
  • $\sigma_{\hat{\theta}}$ or
  • $\operatorname{SE}(\hat{\theta})$ (from "standard error").

Estimation

Since the standard deviation of the population enters into the standard error, the standard deviation of the population must itself be estimated, using an estimator that is as unbiased as possible, in order to estimate the standard error.

Confidence intervals and tests

The standard error also plays an important role in confidence intervals and tests. If the estimator $\hat{\theta}$ is unbiased and at least approximately normally distributed, $\hat{\theta} \overset{a}{\sim} \mathcal{N}(\theta, \sigma^2(\hat{\theta}))$, then

$$\frac{\hat{\theta} - \theta}{\sigma(\hat{\theta})} \overset{a}{\sim} \mathcal{N}(0, 1).$$

On this basis, $(1-\alpha)$ confidence intervals can be specified for the unknown parameter $\theta$:

$$\left[\hat{\theta} - z_{1-\alpha/2}\,\sigma(\hat{\theta});\ \hat{\theta} + z_{1-\alpha/2}\,\sigma(\hat{\theta})\right],$$

or tests can be formulated, e.g. whether the parameter assumes a certain value $\theta_0$:

$$H_0\colon \theta = \theta_0 \quad \text{vs.} \quad H_1\colon \theta \neq \theta_0,$$

and the test statistic results as

$$T = \frac{\hat{\theta} - \theta_0}{\sigma(\hat{\theta})} \overset{a}{\sim} \mathcal{N}(0, 1).$$

Here $z_{1-\alpha/2}$ is the $(1-\alpha/2)$ quantile of the standard normal distribution and also the critical value for the formulated test. As a rule, $\sigma(\hat{\theta})$ must be estimated from the sample, so that

$$\frac{\hat{\theta} - \theta}{\hat{\sigma}(\hat{\theta})} \overset{a}{\sim} t_{n-1}$$

holds, where $n$ is the number of observations. For $n \to \infty$ the $t$-distribution can be approximated by the standard normal distribution.
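As a sketch of how these formulas are used in practice, the following Python snippet (using NumPy and SciPy; the sample is simulated, so all numbers are illustrative) computes a 95% confidence interval for a population mean and the two-sided test of $H_0\colon \theta = \theta_0$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(5.0, 2.0, size=30)        # simulated sample
n, mean, s = x.size, x.mean(), x.std(ddof=1)
se = s / np.sqrt(n)                      # estimated standard error of the mean

t_crit = stats.t.ppf(0.975, df=n - 1)    # (1 - alpha/2) quantile for alpha = 0.05
print(mean - t_crit * se, mean + t_crit * se)   # 95% confidence interval

theta0 = 5.0                             # hypothesized value under H0
T = (mean - theta0) / se                 # test statistic, t_{n-1}-distributed under H0
print(T, 2 * stats.t.sf(abs(T), df=n - 1))      # two-sided p-value
```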

Standard error of the arithmetic mean

The standard error of the arithmetic mean is equal to

$$\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{n}},$$

where $\sigma$ denotes the standard deviation of a single measurement.

Derivation

The mean of a sample of size $n$ is defined by

$$\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i.$$

Looking at the estimator

$$\hat{X} = \frac{1}{n} \sum_{i=1}^{n} X_i$$

with independent, identically distributed random variables $X_1, \dots, X_n$ with finite variance $\sigma^2$, the standard error is defined as the square root of the variance of $\hat{X}$. Using the calculation rules for variances and the Bienaymé equation, one calculates:

$$\operatorname{Var}(\hat{X}) = \operatorname{Var}\!\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right) = \frac{1}{n^2}\sum_{i=1}^{n}\operatorname{Var}(X_i) = \frac{1}{n^2}\,n\,\sigma^2 = \frac{\sigma^2}{n},$$

from which the formula for the standard error $\sigma_{\bar{x}} = \sigma/\sqrt{n}$ follows. If instead $\operatorname{Var}(X_i) = \sigma_i^2$ holds, it follows analogously that

$$\sigma(\hat{X}) = \frac{1}{n}\sqrt{\sum_{i=1}^{n} \sigma_i^2}.$$
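The Bienaymé step can be checked numerically; a minimal Python sketch (simulated data with an assumed $\sigma$) compares the empirical variance of the mean with $\sigma^2/n$:

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma = 40, 2.5
# variance of the mean of n iid variables should be sigma^2/n
means = rng.normal(0.0, sigma, size=(100_000, n)).mean(axis=1)

print(means.var(ddof=1))   # empirical variance of the sample mean
print(sigma**2 / n)        # sigma^2 / n from the derivation above
```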

Calculation of $\hat{\sigma}$

Assuming a distribution for the sample, the standard error of the mean can be calculated using the variance of the sampling distribution:

  • for the normal distribution $\mathcal{N}(\mu, \sigma^2)$: $\hat{\sigma}_{(\mathcal{N})} = \dfrac{\hat{\sigma}}{\sqrt{n}}$,
  • for the exponential distribution with parameter $\lambda$ (expected value = standard deviation = $1/\lambda$): $\hat{\sigma}_{(\operatorname{Exp})} = \dfrac{1}{\hat{\lambda}\,\sqrt{n}}$ and
  • for the Poisson distribution with parameter $\lambda$ (expected value = variance = $\lambda$): $\hat{\sigma}_{(\operatorname{Poi})} = \sqrt{\dfrac{\hat{\lambda}}{n}}$.

Here

  • $\hat{\sigma}_{(\cdot)}$ denotes the estimated standard error of the mean for the respective distribution, and
  • $n$ the sample size.

If the standard error for the mean is to be estimated, the variance $\sigma^2$ is estimated with the corrected sample variance $s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2$.
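A short sketch of these estimates in Python (the parameters and samples are invented for illustration; note that `scale` in NumPy's exponential sampler is $1/\lambda$):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200

x = rng.exponential(scale=2.0, size=n)   # Exp(lambda) sample with 1/lambda = 2
lam_hat = 1.0 / x.mean()                 # estimate lambda from the sample mean
print(1.0 / (lam_hat * np.sqrt(n)))      # SE of the mean: 1/(lambda*sqrt(n))

y = rng.poisson(lam=3.0, size=n)         # Poi(lambda) sample
print(np.sqrt(y.mean() / n))             # SE of the mean: sqrt(lambda/n)

# without a distributional assumption: corrected sample variance
print(x.std(ddof=1) / np.sqrt(n))        # s / sqrt(n)
```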

Example

For the ice cream data, the arithmetic mean of per capita ice cream consumption (measured in pints), its standard error and the standard deviation were calculated for the years 1951, 1952 and 1953.

Year    Mean      Standard error   Standard    Number of
                  of the mean      deviation   observations
1951    0.34680   0.01891          0.05980     10
1952    0.34954   0.01636          0.05899     13
1953    0.39586   0.03064          0.08106      7

For the years 1951 and 1952, the estimated means and standard deviations, as well as the numbers of observations, are roughly equal, so the estimated standard errors also take about the same value. For 1953, the number of observations is lower and the standard deviation is higher; therefore the standard error is almost twice the standard errors of 1951 and 1952.

[Figure: 95% estimation intervals for the arithmetic mean of per capita ice cream consumption in 1951, 1952 and 1953.]

The results can be displayed graphically in an error bar diagram; the 95% estimation intervals for the years 1951, 1952 and 1953 are shown in the figure. If the sample function is at least approximately normally distributed, then the 95% estimation intervals are given by $\left[\bar{x}_i - 1.96\,\frac{s_i}{\sqrt{n_i}};\ \bar{x}_i + 1.96\,\frac{s_i}{\sqrt{n_i}}\right]$ with the sample means $\bar{x}_i$ and the sample variances $s_i^2$ for each year $i$.
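The interval limits can be recomputed directly from the table values; a minimal Python sketch using the normal approximation (with samples this small, exact $t$-quantiles would give slightly wider intervals):

```python
# means and standard errors taken from the table above
data = {1951: (0.34680, 0.01891), 1952: (0.34954, 0.01636), 1953: (0.39586, 0.03064)}

for year, (mean, se) in data.items():
    lo, hi = mean - 1.96 * se, mean + 1.96 * se   # approximate 95% interval
    print(f"{year}: [{lo:.4f}, {hi:.4f}]")
```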

Here, too, one can clearly see that the mean value for 1953 is estimated less precisely than the means for 1951 and 1952 (longer bar for 1953).

Standard error of the regression coefficients in the simple regression model

The classical model of simple linear regression,

$$y_i = \beta_0 + \beta_1 x_i + \varepsilon_i,$$

assumes that

  • the disturbance terms $\varepsilon_i$ are normally distributed, $\varepsilon_i \sim \mathcal{N}(0, \sigma^2)$,
  • the disturbance terms are independent, and
  • the values $x_i$ are fixed (i.e. not random variables),

where $i = 1, \dots, n$ runs over the observations. For the estimators

$$\hat{\beta}_1 = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n}(x_i - \bar{x})^2} \quad \text{and} \quad \hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}$$

it then follows that

$$\hat{\beta}_1 \sim \mathcal{N}\!\left(\beta_1,\ \frac{\sigma^2}{\sum_{i=1}^{n}(x_i - \bar{x})^2}\right) \quad \text{and} \quad \hat{\beta}_0 \sim \mathcal{N}\!\left(\beta_0,\ \frac{\sigma^2 \sum_{i=1}^{n} x_i^2}{n \sum_{i=1}^{n}(x_i - \bar{x})^2}\right).$$

The standard errors of the regression coefficients are given by

$$\hat{\sigma}_{\hat{\beta}_1} = \frac{\hat{\sigma}}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}} \quad \text{and} \quad \hat{\sigma}_{\hat{\beta}_0} = \hat{\sigma}\,\sqrt{\frac{\sum_{i=1}^{n} x_i^2}{n \sum_{i=1}^{n}(x_i - \bar{x})^2}}$$

with $\hat{\sigma}^2 = \frac{1}{n-2}\sum_{i=1}^{n}\hat{\varepsilon}_i^2$ the estimated variance of the disturbance terms.
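These formulas translate directly into code. The following Python sketch implements them for simulated data (the data-generating values are invented and merely mimic the scale of the ice cream example below):

```python
import numpy as np

def simple_ols(x, y):
    """OLS for y = b0 + b1*x with classical standard errors."""
    n = x.size
    xc = x - x.mean()
    b1 = (xc * (y - y.mean())).sum() / (xc**2).sum()   # slope estimator
    b0 = y.mean() - b1 * x.mean()                      # intercept estimator
    resid = y - (b0 + b1 * x)
    s2 = (resid**2).sum() / (n - 2)                    # estimate of sigma^2
    se_b1 = np.sqrt(s2 / (xc**2).sum())
    se_b0 = np.sqrt(s2 * (x**2).sum() / (n * (xc**2).sum()))
    return b0, b1, se_b0, se_b1

rng = np.random.default_rng(5)
x = rng.uniform(20.0, 70.0, size=30)                   # "temperature" values
y = 0.2 + 0.003 * x + rng.normal(0.0, 0.03, size=30)   # "consumption" values
print(simple_ols(x, y))
```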

Example: For the ice cream data, a simple linear regression was carried out for the per capita consumption of ice cream (measured in pints) with the mean weekly temperature (in degrees Fahrenheit) as the independent variable. Estimating the regression model yielded:

$$\widehat{\text{consumption}} = 0.20686 + 0.00311 \cdot \text{temperature}.$$
Model         Unstandardized   Standard   Standardized   T       Sig.
              coefficient      error      coefficient
Constant      0.20686          0.02470    —              8.375   0.000
Temperature   0.00311          0.00048    0.776          6.502   0.000

Although the estimated regression coefficient for the mean weekly temperature is very small, its estimated standard error is smaller still: the coefficient is a good 6.5 times as large as its standard error (this ratio is the value of the t-statistic), so it is estimated comparatively precisely.


References

  1. Koteswara Rao Kadiyala (1970): Testing for the independence of regression disturbances. In: Econometrica, 38, 97–117.
  2. Ice cream data. In: Data and Story Library, accessed February 16, 2010.