Bienaymé's equation

From Wikipedia, the free encyclopedia

The Bienaymé equation, also called Bienaymé's equation or the Bienaymé formula, is an identity from probability theory. It allows the calculation of the variance of a sum of random variables and states in particular that the variance behaves additively for uncorrelated (and therefore also for stochastically independent) random variables. The variance of a sum of uncorrelated random variables is thus the sum of the variances of the individual random variables.

The equation is named after the French mathematician Irénée-Jules Bienaymé (1796–1878), who proved it in 1853. It is used, among other things, to determine the error of Monte Carlo simulations, and it is an important tool for transforming equations in probability theory. For example, in combination with the Chebyshev inequality, it yields a first version of the weak law of large numbers.

Statement

Let X_1, …, X_n be square-integrable random variables, i.e. E(X_i^2) < ∞ for i = 1, …, n. Furthermore, let Var(X_i) denote the variance of the random variable X_i and Cov(X_i, X_j) the covariance of X_i and X_j.

The Bienaymé equation is not formulated uniformly in the literature. In its more general version, it states that

\operatorname{Var}\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n \operatorname{Var}(X_i) + \sum_{i \neq j} \operatorname{Cov}(X_i, X_j).

More specifically, if the X_i are pairwise uncorrelated, i.e. Cov(X_i, X_j) = 0 for i ≠ j, then

\operatorname{Var}\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n \operatorname{Var}(X_i).

In particular, this also applies to sums of stochastically independent random variables, because independent, square-integrable random variables are pairwise uncorrelated.

Example

Let, for example, X be the number of pips on a fair four-sided die, Y the number of pips on a fair six-sided die, and Z the number of pips on a fair eight-sided die. The probability distributions of the three dice are discrete uniform distributions, so for the variances of the pip counts of the individual dice we obtain

\operatorname{Var}(X) = \frac{4^2 - 1}{12} = \frac{5}{4}, \quad \operatorname{Var}(Y) = \frac{6^2 - 1}{12} = \frac{35}{12} \quad \text{and} \quad \operatorname{Var}(Z) = \frac{8^2 - 1}{12} = \frac{21}{4}.

Since the dice are independent, Bienaymé's equation gives the variance of the sum of the three dice as

\operatorname{Var}(X + Y + Z) = \frac{5}{4} + \frac{35}{12} + \frac{21}{4} = \frac{113}{12} \approx 9.42.

This results in a standard deviation of the pip sum of \sqrt{113/12} \approx 3.07.
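The dice computation can be reproduced exactly with rational arithmetic. A small sketch (the helper `uniform_die_variance` is not from the article; it just illustrates the (m² − 1)/12 formula for a discrete uniform distribution on {1, …, m}):

```python
from fractions import Fraction

def uniform_die_variance(m):
    """Variance of a fair m-sided die; equals (m^2 - 1) / 12."""
    faces = range(1, m + 1)
    mean = Fraction(sum(faces), m)
    return sum((f - mean) ** 2 for f in faces) / m

v4, v6, v8 = (uniform_die_variance(m) for m in (4, 6, 8))
assert v4 == Fraction(5, 4)
assert v6 == Fraction(35, 12)
assert v8 == Fraction(21, 4)

# Bienaymé: variance of the sum of the three independent dice.
total = v4 + v6 + v8
assert total == Fraction(113, 12)
print(float(total) ** 0.5)  # standard deviation, about 3.07
```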

Proof

Square integrability first ensures that all expected values and variances are finite. By the linearity of the expected value,

\operatorname{E}\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n \operatorname{E}(X_i).

Inserting this into the definition of the variance and multiplying out yields

\begin{aligned}
\operatorname{Var}\left(\sum_{i=1}^n X_i\right)
&= \operatorname{E}\left[\left(\sum_{i=1}^n X_i - \operatorname{E}\left(\sum_{i=1}^n X_i\right)\right)^2\right] \\
&= \operatorname{E}\left[\left(\sum_{i=1}^n \bigl(X_i - \operatorname{E}(X_i)\bigr)\right)^2\right] \\
&= \sum_{i=1}^n \sum_{j=1}^n \operatorname{E}\bigl[\bigl(X_i - \operatorname{E}(X_i)\bigr)\bigl(X_j - \operatorname{E}(X_j)\bigr)\bigr] \\
&= \sum_{i=1}^n \sum_{j=1}^n \operatorname{Cov}(X_i, X_j),
\end{aligned}

where the last step follows by inserting the definition of the covariance. Since Cov(X_i, X_i) = Var(X_i), the diagonal terms with i = j can be collected into a separate sum, and Bienaymé's equation

\operatorname{Var}\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n \operatorname{Var}(X_i) + \sum_{i \neq j} \operatorname{Cov}(X_i, X_j)

follows.

The second version follows directly from the first: uncorrelatedness means by definition that Cov(X_i, X_j) = 0 for i ≠ j, so the covariance sum vanishes.

Consequences

An important consequence of Bienaymé's equation concerns sequences (X_i)_{i ∈ ℕ} of independent and identically distributed random variables that all have the variance σ². The variance of the arithmetic mean of the first n members of the sequence,

\overline{X}_n = \frac{1}{n}\sum_{i=1}^n X_i,

is

\operatorname{Var}(\overline{X}_n) = \frac{1}{n^2}\sum_{i=1}^n \operatorname{Var}(X_i) = \frac{\sigma^2}{n}

and therefore inversely proportional to n. Together with the Chebyshev inequality, it follows that the sequence satisfies the weak law of large numbers, i.e. that the mean values converge stochastically to the expected value.
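The 1/n decay of the variance of the mean can be verified exactly for small cases. A sketch assuming fair six-sided dice, with the distribution of the sum computed by exact convolution (the helper names are not from the article):

```python
from fractions import Fraction

def convolve(p, q):
    """Distribution of the sum of two independent integer-valued variables."""
    r = {}
    for a, pa in p.items():
        for b, qb in q.items():
            r[a + b] = r.get(a + b, 0) + pa * qb
    return r

def variance(p):
    """Variance of a finite distribution given as {value: probability}."""
    mean = sum(k * pk for k, pk in p.items())
    return sum((k - mean) ** 2 * pk for k, pk in p.items())

die = {k: Fraction(1, 6) for k in range(1, 7)}  # one fair six-sided die
sigma2 = variance(die)                           # sigma^2 = 35/12

dist = die
for n in range(2, 6):
    dist = convolve(dist, die)
    # Variance of the sum of n dice is exactly n * sigma^2 ...
    assert variance(dist) == n * sigma2
    # ... so the mean (sum / n) has variance sigma^2 / n.
    assert variance(dist) / n**2 == sigma2 / n
```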

The standard error of the arithmetic mean,

\sigma(\overline{X}_n) = \frac{\sigma}{\sqrt{n}},

shows that the arithmetic mean, as an unbiased estimator of an unknown expected value, converges at the rate n^{-1/2}. For this reason, the error of classical Monte Carlo simulations also decreases with the convergence speed n^{-1/2}.
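The n^{-1/2} rate can also be observed empirically. A sketch using the toy integral ∫₀¹ x² dx = 1/3 as the quantity to estimate (the integrand, seed, and repetition counts are arbitrary choices for illustration):

```python
import random
import statistics

random.seed(42)

def mc_estimate(n):
    """Plain Monte Carlo estimate of the integral of x^2 over [0, 1]."""
    return sum(random.random() ** 2 for _ in range(n)) / n

def empirical_error(n, reps=300):
    """Standard deviation of the estimator over independent repetitions."""
    return statistics.pstdev([mc_estimate(n) for _ in range(reps)])

e_small = empirical_error(100)
e_large = empirical_error(10_000)

# A 100-fold increase in n should shrink the error by about a factor
# of 10, reflecting the n^(-1/2) convergence speed.
ratio = e_small / e_large
assert 5 < ratio < 20
```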

In connection with random measurement errors, Bienaymé's equation yields the Gaussian law of error propagation in the case of uncorrelated error-prone quantities.

Generalization

Bienaymé's equation can also be generalized to weighted sums of random variables. If a_1, …, a_n are real weighting factors, then the variance of the weighted sum of pairwise uncorrelated random variables X_1, …, X_n satisfies

\operatorname{Var}\left(\sum_{i=1}^n a_i X_i\right) = \sum_{i=1}^n a_i^2 \operatorname{Var}(X_i).

In particular, it follows (with a_1 = 1 and a_2 = -1) that the variance of the difference of two uncorrelated random variables X and Y is, like that of their sum, equal to the sum of their variances:

\operatorname{Var}(X - Y) = \operatorname{Var}(X) + \operatorname{Var}(Y) = \operatorname{Var}(X + Y).
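Both the weighted version and the difference rule can be checked exactly for independent dice. A sketch using a fair four-sided and a fair six-sided die (the helpers and the weights 2 and −3 are arbitrary choices for illustration):

```python
from fractions import Fraction

def variance(p):
    """Variance of a finite distribution given as {value: probability}."""
    mean = sum(k * pk for k, pk in p.items())
    return sum((k - mean) ** 2 * pk for k, pk in p.items())

def combine(p, q, f):
    """Exact distribution of f(X, Y) for independent X ~ p and Y ~ q."""
    r = {}
    for a, pa in p.items():
        for b, pb in q.items():
            k = f(a, b)
            r[k] = r.get(k, 0) + pa * pb
    return r

d4 = {k: Fraction(1, 4) for k in range(1, 5)}  # fair four-sided die
d6 = {k: Fraction(1, 6) for k in range(1, 7)}  # fair six-sided die

# Difference: Var(X - Y) equals the SUM of the variances.
diff = combine(d4, d6, lambda a, b: a - b)
assert variance(diff) == variance(d4) + variance(d6)

# Weighted sum: Var(2X - 3Y) = 2^2 Var(X) + (-3)^2 Var(Y).
weighted = combine(d4, d6, lambda a, b: 2 * a - 3 * b)
assert variance(weighted) == 4 * variance(d4) + 9 * variance(d6)
```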


References

  1. Klenke: Probability Theory. 2013, p. 106.
  2. Meintrup, Schäffler: Stochastics. 2005, p. 129.
  3. Georgii: Stochastics. 2009, p. 109.
  4. Müller-Gronbach, Novak, Ritter: Monte Carlo Algorithmen. p. 7.
  5. Müller-Gronbach, Novak, Ritter: Monte Carlo Algorithmen. p. 29.