Law of Large Numbers
A law of large numbers (abbreviated LLN; in German GGZ, for Gesetz der großen Zahlen) is any of certain limit theorems of probability theory.
In their simplest form, these theorems state that the relative frequency of a random outcome usually stabilizes around its theoretical probability when the underlying random experiment is repeated over and over again under the same conditions. The frequently used formulation that the relative frequency "approaches the probability more and more" is misleading, since there can be outliers even after a large number of repetitions; the approach is therefore not monotonic.
Formally, these are statements about the convergence of the arithmetic mean of random variables, usually divided into "strong" laws (almost sure convergence) and "weak" laws (convergence in probability) of large numbers.
Example: tossing a coin
The probability that a coin lands heads when tossed is ½. The more often the coin is tossed, the less likely it becomes that the proportion of tosses showing heads (i.e. the relative frequency of the event "heads") deviates from the theoretical probability ½ by more than any given value. On the other hand, it is quite likely that the absolute difference between the number of heads and half the total number of tosses grows.
In particular, these laws of large numbers do not say that an event that has so far occurred less often than average will at some point make up for its "deficit" and consequently must occur more frequently in the future. A common fallacy among roulette and lottery players is the belief that the "overdue" number now has to catch up in order to match the statistical uniform distribution again. There is, however, no such law of compensation.
An example: assume that a series of coin tosses begins with "heads", "tails", "heads", "heads". Then "heads" has been thrown three times, "tails" once; "heads" has a lead of two tosses. After these four tosses, the relative frequency of "heads" is ¾, that of "tails" ¼. Suppose that after 96 further tosses the tally stands at 47 "tails" to 53 "heads". The lead of "heads" after 100 tosses is thus even larger than after four tosses, but the relative distance between "heads" and "tails" has decreased considerably, and so, this being the statement of the law of large numbers, has the difference between the relative frequency of "heads" and its expected value. The value 53/100 = 0.53 is much closer to the expected value 0.5 than ¾ = 0.75 is.
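The behavior in this example, an absolute lead that may grow while the relative frequency still approaches ½, can be illustrated with a short simulation. This is a minimal sketch; the function `toss_stats` and the chosen toss counts are purely illustrative, not from the source.

```python
import random

random.seed(1)  # fixed seed so the illustrative run is reproducible

def toss_stats(n):
    """Simulate n fair coin tosses; return the number of heads and their relative frequency."""
    heads = sum(random.random() < 0.5 for _ in range(n))
    return heads, heads / n

for n in (4, 100, 10_000, 1_000_000):
    heads, freq = toss_stats(n)
    lead = abs(2 * heads - n)  # |#heads - #tails|, the absolute lead
    print(f"n = {n:>9}: relative frequency {freq:.4f}, absolute lead {lead}")
```

Typically the absolute lead fluctuates and can grow with n, while the relative frequency settles near 0.5; no monotone approach is guaranteed on any single run.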
Weak law for relative frequencies
The simplest case of a law of large numbers, the weak law for relative frequencies, is the main result of Jakob I Bernoulli's Ars Conjectandi (1713). A random experiment with exactly two outcomes, called success and failure, i.e. a Bernoulli experiment, is repeated $n$ times independently. If $p$ denotes the probability of success in a single trial, then the number $H_n$ of successes is binomially distributed with parameters $n$ and $p$. The expected value of $H_n$ is then $E(H_n) = np$, and its variance is $\operatorname{Var}(H_n) = np(1-p)$.
For the relative frequency $h_n = H_n/n$, where $H_n$ denotes the binomially distributed number of successes in $n$ trials and $p$ the probability of success, it follows that $E(h_n) = p$ and $\operatorname{Var}(h_n) = \frac{p(1-p)}{n}$. Chebyshev's inequality applied to $h_n$ thus yields

$$P\left(|h_n - p| \geq \varepsilon\right) \leq \frac{p(1-p)}{n\varepsilon^2}$$

for all $\varepsilon > 0$. Since the right-hand side of the inequality converges to zero as $n \to \infty$, it follows that

$$\lim_{n \to \infty} P\left(|h_n - p| \geq \varepsilon\right) = 0,$$

that is, for every arbitrarily small $\varepsilon > 0$, the probability that the relative frequency of successes does not lie in the interval $(p - \varepsilon, p + \varepsilon)$ tends to zero as the number of trials tends to infinity.
Weak law of large numbers
It is said that a sequence of random variables $(X_i)_{i \in \mathbb{N}}$ with $E(|X_i|) < \infty$ satisfies the weak law of large numbers if for

$$\overline{X}_n := \frac{1}{n} \sum_{i=1}^{n} \left(X_i - E(X_i)\right)$$

and for all positive numbers $\varepsilon$ the following holds:

$$\lim_{n \to \infty} P\left(|\overline{X}_n| > \varepsilon\right) = 0,$$

i.e. if the arithmetic means of the centered random variables converge in probability to 0.
There are several conditions under which the weak law of large numbers holds, in some cases placing requirements on the moments or on independence. Significant sufficient conditions are:
- If the $X_i$ are pairwise independent and identically distributed random variables whose expected value exists, then the weak law of large numbers holds.
- If the $X_i$ are pairwise uncorrelated random variables and the sequence of their variances is bounded, then the weak law of large numbers holds.
Further formulations can be found in the main article. In particular, the requirement in the second statement that the variances be bounded can be generalized.
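The second condition can be illustrated with variables that are not identically distributed but have bounded variances. This is a sketch; the distribution of the $X_i$ and all names are chosen purely for illustration.

```python
import random

random.seed(2)  # reproducible illustrative run

def centered_mean(n):
    """Mean of n independent centered variables X_i, uniform on [-c_i, c_i]
    with c_i = 1 + (i % 3); the variances c_i^2/3 are bounded by 3."""
    total = 0.0
    for i in range(1, n + 1):
        c = 1 + (i % 3)
        total += random.uniform(-c, c)
    return total / n

def deviation_prob(n, eps=0.1, trials=1000):
    """Monte Carlo estimate of P(|mean| > eps)."""
    return sum(abs(centered_mean(n)) > eps for _ in range(trials)) / trials

for n in (10, 100, 1000):
    print(n, deviation_prob(n))
```

The estimated deviation probabilities shrink toward zero as n grows, even though the summands do not share a common distribution.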
Strong law of large numbers
It is said that a sequence of random variables $(X_i)_{i \in \mathbb{N}}$ with $E(|X_i|) < \infty$ satisfies the strong law of large numbers if for

$$\overline{X}_n := \frac{1}{n} \sum_{i=1}^{n} \left(X_i - E(X_i)\right)$$

the following holds:

$$P\left(\lim_{n \to \infty} \overline{X}_n = 0\right) = 1,$$

that is, if the arithmetic means of the centered random variables converge to 0 almost surely.
For example, the strong law of large numbers holds if one of the following is true:

- The $X_i$ are pairwise independent and identically distributed with finite expected value.
- The $X_i$ are pairwise uncorrelated and $\sum_{k=1}^{\infty} \frac{\operatorname{Var}(X_k)}{k^2} < \infty$.
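The series condition in the second item can be checked numerically for a concrete variance sequence, for instance $\operatorname{Var}(X_k) = \sqrt{k}$, where the variances are unbounded yet the series converges. A minimal sketch; the function name and the variance choice are illustrative assumptions.

```python
def kolmogorov_series(var, n_terms):
    """Partial sum of the criterion series sum_k Var(X_k)/k^2."""
    return sum(var(k) / k ** 2 for k in range(1, n_terms + 1))

# Var(X_k) = sqrt(k): variances grow without bound, yet
# sum_k k^(-3/2) converges, so the series condition is met.
for n in (10 ** 2, 10 ** 4, 10 ** 6):
    print(n, kolmogorov_series(lambda k: k ** 0.5, n))
```

The partial sums visibly stabilize (the series equals the Riemann zeta value ζ(3/2) ≈ 2.612), so this variance sequence satisfies the condition even though it is unbounded.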
The strong law of large numbers implies the weak law of large numbers. More general forms of the strong law of large numbers, which also apply to dependent random variables, are the individual ergodic theorem and the $L^p$-ergodic theorem, both of which hold for stationary stochastic processes.
Interpretation of the formal statements
In contrast to classical sequences, such as those studied in analysis, probability theory generally cannot make an absolute statement about the convergence of a sequence of random outcomes. The reason is that, for example, in a series of dice rolls, sequences of outcomes such as 6, 6, 6, ... cannot be ruled out. For such a sequence of outcomes, the sequence of arithmetic means formed from it would not converge to the expected value 3.5. However, the strong law of large numbers states that the event in which the arithmetic means do not converge to the expected value 3.5 has probability 0. Such an event is also called an almost impossible event.
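The almost sure convergence in the dice example can be observed along a single simulated trajectory of running means. A minimal sketch; the seed and roll counts are arbitrary illustrative choices.

```python
import random

random.seed(3)  # one fixed trajectory for illustration

def running_means(n_rolls):
    """Running arithmetic means of one sequence of fair dice rolls."""
    total = 0
    means = []
    for k in range(1, n_rolls + 1):
        total += random.randint(1, 6)
        means.append(total / k)
    return means

means = running_means(100_000)
for k in (10, 1_000, 100_000):
    print(f"mean after {k} rolls: {means[k - 1]:.4f}")
```

A constant sequence 6, 6, 6, ... would make the running means converge to 6 instead of 3.5, but the set of all such exceptional trajectories has probability 0.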
The subject of the laws of large numbers is, for a given sequence of random variables $X_1, X_2, X_3, \ldots$, the sequence of the arithmetic means of the centered random variables,

$$\overline{X}_n = \frac{1}{n} \sum_{i=1}^{n} \left(X_i - E(X_i)\right).$$
Because of the problems described, the formal characterization of the convergence of this sequence to the value 0 cannot start only from an arbitrarily small prescribed tolerance distance $\varepsilon > 0$, as is the case for a classical sequence of numbers. In addition, an arbitrarily small tolerance probability $\delta > 0$ is prescribed. The statement of the weak law of large numbers then means that for any given tolerance distance $\varepsilon$ and tolerance probability $\delta$, a deviation exceeding the tolerance distance occurs, for a sufficiently large chosen index $n$, with probability at most $\delta$. The strong law of large numbers, by contrast, refers to the event that any of the deviations from index $n$ onward exceeds the tolerance distance.
Practical meaning
- Insurance
- The law of large numbers is of great practical importance in insurance: it allows an approximate prediction of the future course of claims. The larger the number of insured persons, goods, and property values threatened by the same risk, the smaller the influence of chance. The law of large numbers cannot, however, say anything about who in particular will suffer a loss. Unpredictable major events and trends such as climate change, which alter the basis for calculating average values, can render the law at least partially unusable.
- Medicine
- When demonstrating the effectiveness of medical procedures, the law of large numbers can be used to eliminate random influences.
- Natural sciences
- The influence of (non-systematic) measurement errors can be reduced by repeated measurements.
History of the Laws of Large Numbers
A law of large numbers was first formulated by Jakob I Bernoulli in 1689, though it was not published until 1713, posthumously. Bernoulli called his version of the weak law of large numbers the Golden Theorem. The first version of a strong law of large numbers, for the special case of coin tossing, was published in 1909 by Émile Borel. In 1917, Francesco Cantelli was the first to prove a general version of the strong law of large numbers.
The story of the strong law of large numbers reached a certain conclusion with the theorem proved by N. Etemadi in 1981. Etemadi's theorem establishes the validity of the strong law of large numbers under the assumptions that the random variables are integrable (i.e. have a finite expected value), are identically distributed, and are pairwise independent. The existence of a variance is not assumed.
literature
- Jörg Bewersdorff: Statistics - how and why they work. A math reader. Vieweg+Teubner, Wiesbaden 2011, ISBN 978-3-8348-1753-2, doi:10.1007/978-3-8348-8264-6.
- Rick Durrett: Probability. Theory and Examples. 3rd edition. Thomson Brooks/Cole, Belmont CA et al. 2005, ISBN 978-0-534-42441-1.
- Hans-Otto Georgii: Stochastics. Introduction to probability theory and statistics. 4th, revised and expanded edition. de Gruyter, Berlin et al. 2009, ISBN 978-3-11-021526-7, doi:10.1515/9783110215274.
- Karl Mosler, Friedrich Schmid: Probability calculation and conclusive statistics. 2nd, improved edition. Springer, Berlin et al. 2006, ISBN 978-3-540-27787-3, doi:10.1007/3-540-29441-4.
- Klaus D. Schmidt: Measure and Probability. Springer, Berlin et al. 2009, ISBN 978-3-540-89729-3, doi:10.1007/978-3-540-89730-9.
Individual evidence
- ↑ Norbert Henze : Stochastics for beginners. An introduction to the fascinating world of chance. 10th, revised edition. Springer Spectrum, Wiesbaden 2013, ISBN 978-3-658-03076-6 , p. 218 f.
- ↑ Jörg Bewersdorff: Statistics - how and why it works. A math reader. 2011, Chapter 2.8, pp. 103–113.
- ↑ Jörg Bewersdorff: Statistics - how and why it works. A math reader. 2011, Chapters 2.7 and 2.8, pp. 90–113.
- ↑ Nasrollah Etemadi: An elementary proof of the strong law of large numbers. In: Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete (continued as Probability Theory and Related Fields). Vol. 55, No. 1, 1981, pp. 119–122, doi:10.1007/BF01013465.