# Sample variance (estimator)

The sample variance is an estimation function in mathematical statistics. Its central task is to estimate the unknown variance of an underlying probability distribution. Outside of estimation theory, it is also used as an auxiliary function in the construction of confidence intervals and statistical tests. The sample variance is defined in several variants, which differ slightly in their properties and hence also in their areas of application. The distinction between these variants is not uniform in the literature; if a text speaks only of "the" sample variance, it should always be checked which of the definitions applies in the given context.

The empirical variance, a measure of the dispersion of a sample (i.e. of finitely many numbers), is also referred to as the sample variance. The empirical variance of a concrete sample corresponds to an estimated value and is thus a realization of the sample variance as an estimation function and random variable.

## Definition

To estimate the expected value $\mu$ and the variance $\sigma^2$ of a population, let $X_1, X_2, \dots, X_n$ be $n$ random variables and $X = (X_1, X_2, \dots, X_n)$. In practice, the $X_i$ are the sample variables. Then

$$\overline{X} = \frac{1}{n} \sum_{i=1}^{n} X_i$$

denotes the sample mean.

First, the expected value $\mu$ has to be estimated. Using the least squares criterion

$$\sum\nolimits_{i=1}^{n} (X_i - \mu)^2 \rightarrow \text{min}$$

one obtains the estimate $\hat{\mu}$ of the expected value as the sample mean:

$$\hat{\mu} = \overline{X}.$$

Since a degree of freedom is consumed by estimating the sample mean, it is common to "correct" the empirical variance with the factor $\frac{1}{n-1}$. There are essentially three different definitions of the sample variance in the literature. Many authors call

$$V_n^*(X) = \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \overline{X})^2$$

the sample variance or, for better distinction, the corrected sample variance. Alternatively,

$$V_n(X) = \frac{1}{n} \sum_{i=1}^{n} (X_i - \overline{X})^2$$

is also called the sample variance, as is

$$V_{n,\mu_0}(X) = \frac{1}{n} \sum_{i=1}^{n} (X_i - \mu_0)^2$$

for a fixed real number $\mu_0$.

Neither the notation nor the terminology for the various definitions of the sample variance is consistent, which is a regular source of error. Common notations for $V_n^*$ include $s_{n-1}^2$, $s^2$, $s_x^2$, $V_x$ or $V^*$; for $V_n$ one also finds $V$ or $s_n^2$, and for $V_{n,\mu}$ also $S_n$.

In this article, the notations $V_n^*$, $V_n$ and $V_{n,\mu}$ listed above are used for the sake of clarity. $V_n^*$ is referred to as the corrected sample variance, $V_n$ as the sample variance, and $V_{n,\mu}$ as the sample variance with specified expected value. These terms are not widespread in the literature and are introduced here only for clarity. When comparing different sources, the definitions, notations and terminology should always be checked against each other in order to avoid errors.
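The three variants can be computed directly from their definitions. The following sketch (plain Python; the function names are chosen here for illustration and are not standard) implements them side by side:

```python
from statistics import mean

def corrected_sample_variance(xs):
    """V_n^*: divides by n - 1 (used when the expected value is unknown)."""
    xbar = mean(xs)
    return sum((x - xbar) ** 2 for x in xs) / (len(xs) - 1)

def sample_variance(xs):
    """V_n: divides by n (biased, but asymptotically unbiased)."""
    xbar = mean(xs)
    return sum((x - xbar) ** 2 for x in xs) / len(xs)

def sample_variance_known_mean(xs, mu0):
    """V_{n, mu_0}: uses a known expected value mu_0 instead of the sample mean."""
    return sum((x - mu0) ** 2 for x in xs) / len(xs)

xs = [3, 4, 5, 6, 7]
print(corrected_sample_variance(xs))      # 2.5
print(sample_variance(xs))                # 2.0
print(sample_variance_known_mean(xs, 5))  # 2.0
```

For these five values the sample mean equals 5, so $V_{n,5}(X)$ and $V_n(X)$ coincide, while $V_n^*(X)$ is larger by the factor $n/(n-1) = 5/4$.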

## Use

An important use of the sample variance is to estimate the variance of an unknown probability distribution. Depending on the circumstances, the various definitions are used, since they satisfy different optimality criteria (see below). As a rule of thumb:

• If both the expected value and the variance of the probability measure are unknown, $V_n^*(X)$ is used as the estimator.
• If the variance is unknown and the expected value is known to equal $\mu_0$, then $V_{n,\mu_0}(X)$ is used as the estimator.

The estimator $V_n(X)$ is usually not used directly; it arises, for example, from the method of moments or the maximum likelihood method, but does not satisfy the common quality criteria.

In addition to its use as an estimator, the sample variance also serves as an auxiliary function for the construction of confidence intervals and statistical tests. There it appears, for example, as a pivot statistic for the construction of confidence intervals in the normal distribution model, or as a test statistic in the chi-square test.

## Properties

### Framework

Mostly, the sample variance is used under the assumption that the observations are independently and identically distributed and have either a known or an unknown expected value. These assumptions are described by the following statistical models:

• If the expected value is unknown, the statistical model is given by the (not necessarily parametric) product model
$$(\mathbb{R}^n, \mathcal{B}(\mathbb{R}^n), (P_\vartheta^{\otimes n})_{\vartheta \in \Theta}).$$
Here $P^{\otimes n}$ denotes the $n$-fold product measure of $P$, and $(P_\vartheta)_{\vartheta \in \Theta}$ is the family of all probability measures with finite variance, indexed by an arbitrary index set $\Theta$. The sample variables $X_1, \dots, X_n$ are then independently and identically distributed according to $P_\vartheta$ and therefore have finite variance.
• If the expected value is known and equals $\mu_0$, the statistical model is given by the (not necessarily parametric) product model
$$(\mathbb{R}^n, \mathcal{B}(\mathbb{R}^n), (P_\vartheta^{\otimes n})_{\vartheta \in \Theta}).$$
Here $(P_\vartheta)_{\vartheta \in \Theta}$ denotes the family of all probability measures with finite variance and expected value $\mu_0$, indexed by an arbitrary index set $\Theta$. The sample variables $X_1, \dots, X_n$ are then independently and identically distributed according to $P_\vartheta$ and thus have finite variance and expected value $\mu_0$.

### Unbiasedness

#### Known expected value

In the case of a known expected value, $V_{n,\mu_0}(X)$ is an unbiased estimator of the variance. That is,

$$\operatorname{E}_\vartheta(V_{n,\mu_0}(X)) = \operatorname{Var}_\vartheta(X_1) = \operatorname{Var}(P_\vartheta).$$

Here $\operatorname{E}_\vartheta(Y)$ and $\operatorname{Var}_\vartheta(Y)$ denote the expected value and the variance with respect to the probability measure $P_\vartheta$.

The unbiasedness holds because

$$\operatorname{E}_\vartheta\left(V_{n,\mu_0}(X)\right) = \frac{1}{n} \sum_{i=1}^{n} \operatorname{E}_\vartheta\left((X_i - \mu_0)^2\right) = \frac{1}{n} \sum_{i=1}^{n} \operatorname{Var}_\vartheta(X_i) = \operatorname{Var}_\vartheta(X_1).$$

The first step follows from the linearity of the expected value. The second step holds because, by assumption, $\mu_0 = \operatorname{E}_\vartheta(X_i)$ is the known expected value, so $\operatorname{E}_\vartheta\left((X_i - \mu_0)^2\right) = \operatorname{E}_\vartheta\left((X_i - \operatorname{E}_\vartheta(X_i))^2\right) = \operatorname{Var}_\vartheta(X_i)$ by the definition of the variance. The third step uses that the $X_i$ are identically distributed.

#### Unknown expected value

In the case of an unknown expected value, $V_n^*(X)$ is an unbiased estimator of the variance, i.e.

$$\operatorname{E}_\vartheta(V_n^*(X)) = \operatorname{Var}(P_\vartheta).$$

In contrast, $V_n(X)$ is not unbiased, since

$$\operatorname{E}_\vartheta(V_n(X)) = \frac{n-1}{n} \operatorname{Var}(P_\vartheta).$$

However, the estimator $V_n(X)$ is still asymptotically unbiased. This follows directly from the identity above, since

$$\lim_{n \to \infty} \operatorname{E}_\vartheta(V_n(X)) = \lim_{n \to \infty} \frac{n-1}{n} \operatorname{Var}(P_\vartheta) = \operatorname{Var}(P_\vartheta).$$
**Derivation of unbiasedness**

First, note that due to independence

$$\operatorname{E}(X_i X_j) = \begin{cases} \operatorname{E}(X_i) \cdot \operatorname{E}(X_j) & \text{if } i \neq j \\ \operatorname{E}(X_i^2) & \text{if } i = j \end{cases} \quad (*)$$

holds, and due to the identical distributions $\operatorname{E}(X_i) = \operatorname{E}(X_j)$ for all $i, j$, and thus

$$\operatorname{E}(X_i) \cdot \operatorname{E}(X_j) = \operatorname{E}(X_i)^2. \quad (**)$$

It follows directly that

$$\begin{aligned} \operatorname{E}_\vartheta\left(X_k \overline{X}\right) &= \frac{1}{n} \operatorname{E}_\vartheta\left(X_k \cdot \sum_{i=1}^{n} X_i\right) \\ &= \frac{1}{n} \operatorname{E}_\vartheta\left(X_k^2\right) + \frac{1}{n} \operatorname{E}_\vartheta\left(\sum_{i=1 \atop i \neq k}^{n} X_i X_k\right) \\ &= \frac{1}{n} \operatorname{E}_\vartheta\left(X_k^2\right) + \frac{n-1}{n} \operatorname{E}_\vartheta\left(X_k\right)^2 \quad (1) \end{aligned}$$

using the linearity of the expected value and, in the last step, $(*)$ and $(**)$.

Analogously, since the $X_1^2, \ldots, X_n^2$ are also identically distributed (in particular $\operatorname{E}_\vartheta(X_i^2) = \operatorname{E}_\vartheta(X_k^2)$ for all $i, k$), it follows that

$$\begin{aligned} \operatorname{E}_\vartheta\left(\overline{X}^2\right) &= \frac{1}{n^2} \operatorname{E}_\vartheta\left(\sum_{i=1}^{n} \sum_{j=1}^{n} X_i X_j\right) \\ &= \frac{1}{n^2} \operatorname{E}_\vartheta\left(\sum_{i=1}^{n} X_i^2 + \sum_{i,j=1 \atop i \neq j}^{n} X_i X_j\right) \\ &= \frac{n}{n^2} \operatorname{E}_\vartheta(X_k^2) + \frac{n(n-1)}{n^2} \operatorname{E}_\vartheta(X_k)^2 \\ &= \frac{1}{n} \operatorname{E}_\vartheta(X_k^2) + \frac{n-1}{n} \operatorname{E}_\vartheta(X_k)^2 \quad (2) \end{aligned}$$

again using $(*)$ and $(**)$ in the third step.

Using $(1)$ and $(2)$ in the second step and the identical distribution of the $X_k$ in the third step, one obtains

$$\begin{aligned} \operatorname{E}_\vartheta\left(\sum_{k=1}^{n} (X_k - \overline{X})^2\right) &= \operatorname{E}_\vartheta\left(\sum_{k=1}^{n} (X_k^2 - 2\overline{X} X_k + \overline{X}^2)\right) \\ &= \sum_{k=1}^{n} \left(\operatorname{E}_\vartheta(X_k^2) - 2\left(\underbrace{\frac{1}{n}\operatorname{E}_\vartheta(X_k^2) + \frac{n-1}{n}\operatorname{E}_\vartheta(X_k)^2}_{(1)}\right) + \left(\underbrace{\frac{1}{n}\operatorname{E}_\vartheta(X_k^2) + \frac{n-1}{n}\operatorname{E}_\vartheta(X_k)^2}_{(2)}\right)\right) \\ &= n \operatorname{E}_\vartheta(X_1^2) - 2\operatorname{E}_\vartheta(X_1^2) - 2(n-1)\operatorname{E}_\vartheta(X_1)^2 + \operatorname{E}_\vartheta(X_1^2) + (n-1)\operatorname{E}_\vartheta(X_1)^2 \\ &= (n-1)\operatorname{E}_\vartheta(X_1^2) - (n-1)\operatorname{E}_\vartheta(X_1)^2 \\ &= (n-1)\operatorname{Var}(X_1). \end{aligned}$$

The last equality follows from the shift theorem for the variance. It then follows that

$$\operatorname{E}_\vartheta(V_n^*(X)) = \frac{1}{n-1} \operatorname{E}_\vartheta\left(\sum_{k=1}^{n} (X_k - \overline{X})^2\right) = \operatorname{Var}_\vartheta(X_1) = \operatorname{Var}(P_\vartheta)$$

and analogously

$$\operatorname{E}_\vartheta(V_n(X)) = \frac{1}{n} \operatorname{E}_\vartheta\left(\sum_{k=1}^{n} (X_k - \overline{X})^2\right) = \frac{n-1}{n} \operatorname{Var}(P_\vartheta).$$
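Both identities can be checked exactly for a small discrete distribution by enumerating all possible sample outcomes and weighting each by its probability. A sketch using exact rational arithmetic (the three-point distribution below is an assumption chosen purely for illustration):

```python
from itertools import product
from fractions import Fraction

# A small discrete distribution: value -> probability (assumed for illustration).
dist = {0: Fraction(1, 2), 3: Fraction(1, 4), 6: Fraction(1, 4)}
mu = sum(x * p for x, p in dist.items())
var = sum((x - mu) ** 2 * p for x, p in dist.items())

n = 3
ev_corrected = Fraction(0)    # will accumulate E(V_n^*)
ev_uncorrected = Fraction(0)  # will accumulate E(V_n)
# Enumerate all n-tuples of i.i.d. draws, weighted by their product probability.
for sample in product(dist, repeat=n):
    p = Fraction(1)
    for x in sample:
        p *= dist[x]
    xbar = Fraction(sum(sample), n)
    ss = sum((x - xbar) ** 2 for x in sample)
    ev_corrected += p * ss / (n - 1)
    ev_uncorrected += p * ss / n

print(ev_corrected == var)                         # True: E(V_n^*) = Var
print(ev_uncorrected == Fraction(n - 1, n) * var)  # True: E(V_n) = (n-1)/n Var
```

Because everything is computed with `Fraction`, the equalities hold exactly, not merely up to rounding.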

### Bessel correction

The relation

$$V_n^*(X) = \frac{n}{n-1} V_n(X)$$

follows directly from the definitions. The factor $\frac{n}{n-1}$ is called the Bessel correction (after Friedrich Wilhelm Bessel). It can be understood as a correction factor insofar as it corrects $V_n(X)$ in such a way that the resulting estimator is unbiased. As shown above,

$$\operatorname{E}_\vartheta(V_n(X)) = \frac{n-1}{n} \operatorname{Var}(P_\vartheta),$$

and the Bessel correction is exactly the reciprocal of the factor $\frac{n-1}{n}$. The estimator $V_n^*(X)$ thus arises from $V_n(X)$ by applying the Bessel correction.

## Sample standard deviation

If the $n$ random variables $X_i$ are independently and identically distributed, e.g. a sample, then the standard deviation of the population is estimated by the square root of the sample variance $V_n$ or $V_n^*$, i.e.

$$\sqrt{V_n(X)} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (X_i - \overline{X})^2}$$

or

$$\sqrt{V_n^*(X)} = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} (X_i - \overline{X})^2}$$

with

$$\overline{X} = \frac{1}{n} \sum_{i=1}^{n} X_i.$$

This is called the sample standard deviation; its realizations correspond to the empirical standard deviation. Since unbiasedness is lost in most cases when a nonlinear function such as the square root is applied, the sample standard deviation, in contrast to the corrected sample variance, is in neither variant an unbiased estimator of the standard deviation.

### Estimating population standard deviation from a sample

The corrected sample variance $V_n^*(X)$ is an unbiased estimator of the population variance $\sigma_X^2$. In contrast, $\sqrt{V_n^*(X)}$ is not an unbiased estimator of the standard deviation. Since the square root is a concave function, it follows from Jensen's inequality, together with the unbiasedness of $V_n^*(X)$, that

$$\operatorname{E}\left(\sqrt{V_n^*(X)}\right) \leq \sqrt{\operatorname{E}\left(V_n^*(X)\right)} = \sigma_X.$$

In most cases this estimator therefore underestimates the standard deviation of the population.

#### Example

If one chooses one of the numbers $-1$ or $+1$ by tossing a fair coin, i.e. each with probability $\tfrac{1}{2}$, then the result is a random variable with expected value $0$, variance $\sigma^2 = 1$ and standard deviation $\sigma = 1$. From $n = 2$ independent tosses $X_1$ and $X_2$ one computes the corrected sample variance

$$V_n^*(X) = \frac{1}{2-1} \left(\left(X_1 - \bar{X}\right)^2 + \left(X_2 - \bar{X}\right)^2\right),$$

where

$$\bar{X} = \frac{X_1 + X_2}{2}$$

denotes the sample mean. There are four possible outcomes, each with probability $1/4$:

| $X_1$ | $X_2$ | $\bar{X}$ | $V_n^*(X)$ | $\sqrt{V_n^*(X)}$ |
|---|---|---|---|---|
| $-1$ | $-1$ | $-1$ | $0$ | $0$ |
| $-1$ | $+1$ | $0$ | $2$ | $\sqrt{2}$ |
| $+1$ | $-1$ | $0$ | $2$ | $\sqrt{2}$ |
| $+1$ | $+1$ | $+1$ | $0$ | $0$ |

The expected value of the corrected sample variance is therefore

$$\operatorname{E}\left(V_n^*(X)\right) = \frac{0 + 2 + 2 + 0}{4} = 1 = \sigma^2.$$

The corrected sample variance is thus indeed unbiased. The expected value of the corrected sample standard deviation, however, is

$$\operatorname{E}\left(\sqrt{V_n^*(X)}\right) = \frac{0 + \sqrt{2} + \sqrt{2} + 0}{4} = \frac{\sqrt{2}}{2} < 1 = \sigma.$$

So the corrected sample standard deviation underestimates the standard deviation of the population.
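The four outcomes above can also be enumerated programmatically; a small sketch verifying both expected values:

```python
from itertools import product
from math import sqrt

outcomes = list(product([-1, +1], repeat=2))  # four equally likely toss pairs
variances = []
for x1, x2 in outcomes:
    xbar = (x1 + x2) / 2
    # Corrected sample variance: divide by n - 1 = 1 for n = 2.
    v = (x1 - xbar) ** 2 + (x2 - xbar) ** 2
    variances.append(v)

ev_var = sum(variances) / 4                 # E(V_n^*)
ev_sd = sum(sqrt(v) for v in variances) / 4  # E(sqrt(V_n^*))
print(ev_var)  # 1.0, matching sigma^2 = 1
print(ev_sd)   # 0.7071..., i.e. sqrt(2)/2 < sigma = 1
```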

#### Calculation for accumulating measured values

In systems that continuously collect large amounts of measured values, it is often impractical to store all measured values in order to calculate the standard deviation.

In this context, it is better to use a modified formula that avoids the critical term $\sum_{i=1}^{n} (X_i - \bar{X})^2$. This term cannot be updated immediately for each new measured value, because the mean $\bar{X}$ changes with every new observation.

Applying the shift theorem and the definition of the mean $\bar{X} = \frac{1}{n} \sum_{i=1}^{n} X_i$ leads to the representation

$$\sqrt{V_n^*(X)} = \sqrt{\frac{1}{n-1} \left[\left(\sum_{i=1}^{n} X_i^2\right) - \frac{1}{n} \left(\sum_{i=1}^{n} X_i\right)^2\right]}$$

or

$$\sqrt{V_n(X)} = \sqrt{\frac{1}{n} \left[\left(\sum_{i=1}^{n} X_i^2\right) - \frac{1}{n} \left(\sum_{i=1}^{n} X_i\right)^2\right]},$$

which can be updated immediately for each incoming measured value if the sum of the measured values $\sum_{i=1}^{n} X_i$ and the sum of their squares $\sum_{i=1}^{n} X_i^2$ are maintained and continuously updated. However, this representation is numerically less stable; in particular, the term under the square root can become negative due to rounding errors.

By cleverly rearranging, a numerically more stable form of the latter equation can be found. It uses the variance $V_{n-1}$ and the mean $\bar{X}_{n-1}$ of the previous step as well as the sample value $X_n$ and the mean $\bar{X}_n$ of the current iteration step $n$ (note that this recurrence updates the uncorrected variance $V_n$):

$$V_n = \frac{n-1}{n} \left(V_{n-1} + \bar{X}_{n-1}^2\right) + \frac{X_n^2}{n} - \bar{X}_n^2$$
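The recurrence can be turned into a small online update that keeps only the running mean and the running uncorrected variance; a sketch (function name chosen for illustration), with the corrected variance recovered at the end via the Bessel factor $n/(n-1)$:

```python
def streaming_variance(xs):
    """Online update of mean and uncorrected variance V_n per incoming value."""
    mean, var = 0.0, 0.0
    for n, x in enumerate(xs, start=1):
        prev_mean = mean
        mean = prev_mean + (x - prev_mean) / n
        # Recurrence from the text:
        # V_n = (n-1)/n * (V_{n-1} + mean_{n-1}^2) + x_n^2 / n - mean_n^2
        var = (n - 1) / n * (var + prev_mean ** 2) + x ** 2 / n - mean ** 2
    return mean, var

xs = [3.0, 4.0, 5.0, 6.0, 7.0]
mean, v_n = streaming_variance(xs)
n = len(xs)
v_star = n / (n - 1) * v_n  # corrected sample variance via Bessel correction
print(mean)    # 5.0
print(v_star)  # 2.5 (up to floating-point rounding)
```

For badly conditioned data (large mean, small spread), Welford's algorithm avoids the squared-mean terms entirely and is usually preferred in practice.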

#### Normally distributed random variables

##### Calculation bases

For the case of normally distributed random variables, however, an unbiased estimator of the standard deviation can be given:

$$\hat{\sigma} = \sqrt{\frac{n-1}{2}} \ \frac{\Gamma\left(\frac{n-1}{2}\right)}{\Gamma\left(\frac{n}{2}\right)} \sqrt{V_n^*(X)}$$

Here $\sqrt{V_n^*(X)}$ is the (biased) estimate of the standard deviation and $\Gamma(x)$ the gamma function. The formula follows from the fact that $\frac{(n-1) V_n^*(X)}{\sigma^2}$ has a chi-square distribution with $n-1$ degrees of freedom.

Correction factors for the unbiased estimation of the standard deviation:

| Sample size $n$ | Correction factor |
|---|---|
| 2 | 1.253314 |
| 5 | 1.063846 |
| 10 | 1.028109 |
| 15 | 1.018002 |
| 25 | 1.010468 |
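The correction factors in the table can be reproduced directly from the gamma-function formula; a sketch using only Python's standard library:

```python
from math import gamma, sqrt

def sd_correction_factor(n):
    """sqrt((n-1)/2) * Gamma((n-1)/2) / Gamma(n/2): unbiasing factor for
    the corrected sample standard deviation under normality."""
    return sqrt((n - 1) / 2) * gamma((n - 1) / 2) / gamma(n / 2)

for n in (2, 5, 10, 15, 25):
    print(n, round(sd_correction_factor(n), 6))
```

As $n$ grows, the factor tends to 1, so the correction matters mostly for small samples.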
##### Example

In a random sample from a normally distributed random variable, the five values 3, 4, 5, 6, 7 were measured. The estimate of the standard deviation is to be calculated.

The corrected sample variance is

$$s_X^2 = \tfrac{1}{4} (2^2 + 1^2 + 0 + 1^2 + 2^2) = 2.5$$

The correction factor in this case is

$$\sqrt{2} \ \frac{\Gamma(2)}{\Gamma(2.5)} \approx 1.063846$$

and the unbiased estimate of the standard deviation is thus approximately

$$\hat{\sigma} \approx 1.063846 \cdot \sqrt{2.5} \approx 1.68$$
