Limit theorems of probability theory

From Wikipedia, the free encyclopedia

In mathematics, the limit theorems of probability theory are certain classes of stochastic statements that deal with the limiting behavior of sequences of random variables. The questions investigated typically differ, among other things, in the type of convergence considered. The limit theorems find application wherever many random influences combine, for example in financial mathematics, actuarial mathematics, and statistics.

Introduction

All limit theorems of probability theory examine the asymptotic behavior of a sequence of random variables $(X_i)_{i \in \mathbb{N}}$ or of the sequence of their partial sums

$S_n := \sum_{i=1}^{n} X_i$.

The questions examined differ, as do the types of convergence used. Typical questions are:

  • What is a typical value for a random variable?
  • How big is the deviation from this value?
  • What are the probabilities of the deviations from this value?
  • What are the maximum fluctuations that can occur?

The three classic limit theorems answer these questions. This article explains the underlying ideas behind the limit theorems and how they relate to one another. Technical details and precise formulations can be found in the corresponding main articles.

Laws of Large Numbers

Idea

The laws of large numbers investigate

  • what the “typical” value of a sum of random variables is and
  • how large the deviations from this typical value are.

A distinction is made between weak laws of large numbers and strong laws of large numbers. These differ essentially in their mode of convergence: the weak laws of large numbers consider convergence in probability, also known as stochastic convergence, while the strong laws of large numbers use a stronger mode of convergence, almost sure convergence.

If an arbitrary sequence of real numbers $(a_n)_{n \in \mathbb{N}}$ is examined for its asymptotic behavior, then instead of asking for the convergence of the sequence one can ask of which order the sequence is (see also Landau symbols). That means a function $f$ is sought such that

$\lim_{n \to \infty} \frac{a_n}{f(n)} = 0$

holds. The sequence is then said to be of order $o(f)$; such sequences grow more slowly than $f$. If one is further interested in the typical value around which a sequence of a certain order moves, one introduces a second function $g$ such that

$\lim_{n \to \infty} \frac{a_n - g(n)}{f(n)} = 0.$

The function $f$ then indicates the order with which the sequence moves away from the function $g$.
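As a simple illustration (an added example, not from the original article): for the deterministic sequence $a_n = 2n + \sqrt{n}$ one may take $g(n) = 2n$ and $f(n) = n$, since

```latex
\lim_{n \to \infty} \frac{a_n - g(n)}{f(n)}
  = \lim_{n \to \infty} \frac{\sqrt{n}}{n}
  = \lim_{n \to \infty} \frac{1}{\sqrt{n}} = 0 .
```

The sequence therefore has the typical value $2n$, with deviations of order smaller than $n$.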

If one applies this idea to sequences of partial sums of random variables, the question becomes: for which functions $f$ and $g$ does

$\lim_{n \to \infty} \frac{S_n - g(n)}{f(n)} = 0$

hold almost surely or in probability?

The answer given by the laws of large numbers is that (under certain conditions) the typical value of the sum is given by

$g(n) = \sum_{i=1}^{n} \mathbb{E}(X_i),$

and the deviations from it are of order $f(n) = n$, so that

$\lim_{n \to \infty} \frac{S_n - \sum_{i=1}^{n} \mathbb{E}(X_i)}{n} = 0$

almost surely (strong laws) or in probability (weak laws).

Formulated the other way around: the expected value is the typical value of the arithmetic mean of the random variables.
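This statement can be illustrated numerically. The following sketch (an added illustration, not part of the original article; the sample size is an arbitrary choice) simulates fair coin flips, i.e. Bernoulli-distributed random variables with parameter 1/2, and checks that the arithmetic mean $S_n / n$ approaches the expected value 1/2:

```python
import random

random.seed(0)

# Simulate n independent Bernoulli(1/2) random variables (fair coin flips).
n = 100_000
flips = [random.randint(0, 1) for _ in range(n)]

# Partial sum S_n and arithmetic mean S_n / n.
s_n = sum(flips)
mean = s_n / n

# By the law of large numbers, S_n / n is close to E[X_1] = 1/2,
# i.e. the deviation S_n - n/2 is of order o(n).
print(mean)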

History

A first weak law of large numbers, for independent, identically Bernoulli-distributed random variables, was shown by Jakob I Bernoulli and published posthumously in his Ars conjectandi in 1713 (Bernoulli's law of large numbers). A first generalization of Bernoulli's result was formulated by Siméon Denis Poisson in 1837; he was also the first to use the term "law of large numbers". A significant further development was the first weak law of large numbers for random variables of arbitrary distribution, by Pafnuti Lwowitsch Chebyshev in 1867 (Chebyshev's weak law of large numbers). It rests essentially on results of Irénée-Jules Bienaymé, in particular the Bienaymé–Chebyshev inequality and the Bienaymé equation. A first formulation of the weak law of large numbers that requires no second moments is due to Alexander Jakowlewitsch Chintschin in 1929.

The first strong law of large numbers was proven by Émile Borel in 1909. It applies only to sequences of independent random variables that are identically Bernoulli-distributed with parameter $\tfrac{1}{2}$ and thus forms the strong counterpart to Bernoulli's (weak) law of large numbers. A first general statement on the validity of the strong law of large numbers was proven in 1917 by Francesco Paolo Cantelli (Cantelli's theorem); it still requires the existence of fourth moments. Further generalizations were shown in 1930 and 1933 by Andrei Nikolajewitsch Kolmogorow (Kolmogorov's first and second laws of large numbers), whereby the moment requirements were reduced to the existence of second and first moments, respectively.

Central limit theorems

Idea

The laws of large numbers identify the expected value as the typical value of the arithmetic mean of random variables. The central limit theorems, in turn, make statements about the probabilities of deviations from this expected value.

According to the weak law of large numbers, however,

$\lim_{n \to \infty} P\left( \left| \frac{S_n - \sum_{i=1}^{n} \mathbb{E}(X_i)}{n} \right| \geq \varepsilon \right) = 0$

for every $\varepsilon > 0$. The probability mass thus concentrates ever more tightly around the expected value, and the distribution of the arithmetic mean converges in distribution to the Dirac distribution at the expected value. This degenerate limit distribution, however, is not useful for quantifying the probabilities of deviations from the mean.

Analogously to the procedure for the laws of large numbers, one tries to find a suitable rescaling $f$ such that

$\frac{S_n - \sum_{i=1}^{n} \mathbb{E}(X_i)}{f(n)} \to Z \quad$ in distribution

for some random variable $Z$ with a non-degenerate probability distribution. For the distribution function $F_Z$ this means

$\lim_{n \to \infty} P\left( \frac{S_n - \sum_{i=1}^{n} \mathbb{E}(X_i)}{f(n)} \leq x \right) = F_Z(x)$

at every continuity point $x$ of $F_Z$. If one additionally requires of the rescaling that the limit distribution on the right-hand side have variance 1, then the central limit theorems state that under certain conditions

$f(n) = \sqrt{\operatorname{Var}(S_n)}$

and, in particular, that the limit distribution is the standard normal distribution. For independent, identically distributed random variables with variance $\sigma^2$ this gives $f(n) = \sigma \sqrt{n}$: the "medium-sized" deviations from the expected value are of order $\sqrt{n}$, whereas the large deviations are of order $n$ according to the laws of large numbers.
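The convergence to the standard normal distribution can be checked empirically. The following sketch (an added illustration, not from the original article; sample sizes are arbitrary choices) standardizes sums of fair coin flips and tests a normal-distribution property, namely that roughly 68% of the mass lies within one standard deviation:

```python
import math
import random

random.seed(1)

# Standardize sums of n fair coin flips: (S_n - n*mu) / (sigma * sqrt(n)),
# where mu = 1/2 and sigma = 1/2 for Bernoulli(1/2) random variables.
n, trials = 500, 2_000
mu, sigma = 0.5, 0.5

standardized = []
for _ in range(trials):
    s_n = sum(random.randint(0, 1) for _ in range(n))
    standardized.append((s_n - n * mu) / (sigma * math.sqrt(n)))

# If the limit is standard normal, about 68% of the standardized sums
# should fall into the interval [-1, 1].
frac = sum(abs(z) <= 1 for z in standardized) / trials
print(frac)
```

The observed fraction fluctuates around the normal value $\Phi(1) - \Phi(-1) \approx 0.68$ (slightly above it here, due to the lattice structure of binomial sums at finite $n$).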

History

The first central limit theorem is the de Moivre–Laplace central limit theorem. It is valid for sums of Bernoulli-distributed random variables and was shown by Abraham de Moivre in 1730 for the case of parameter $\tfrac{1}{2}$ and formulated by Pierre-Simon Laplace in 1812 for the general case. The first general results on central limit theorems are due to Pafnuti Lwowitsch Tschebyschow, with corrections by Andrei Andrejewitsch Markow. In 1901, Alexander Mikhailovich Lyapunov formulated the Lyapunov condition, a sufficient condition for the validity of the central limit theorem (Lyapunov's theorem). Another, somewhat more general sufficient condition is the Lindeberg condition of Jarl Waldemar Lindeberg from 1922 (Lindeberg's theorem). A further important statement is Feller's theorem by William Feller from 1935, which is often combined with Lindeberg's theorem into the Lindeberg–Feller central limit theorem.

Laws of the iterated logarithm

Idea

While the central limit theorems deal with the typical deviations, the laws of the iterated logarithm examine the maximal fluctuations over the whole course of the sequence. If one considers the map

$n \mapsto S_n(\omega)$

(called a path in the theory of stochastic processes), this is a real-valued function defined on the natural numbers. Because of the randomness of this path, one cannot in general expect, even after rescaling, a well-defined nontrivial limit (i.e. one not equal to $+\infty$ or $-\infty$) of the path as $n \to \infty$. Instead, one examines which values this path (after rescaling) still reaches infinitely often.

In order to derive a statement about the largest values that are still reached infinitely often, one looks for a function $f$ such that

$\limsup_{n \to \infty} \frac{S_n - \sum_{i=1}^{n} \mathbb{E}(X_i)}{f(n)} = 1 \quad$ almost surely,

because the limit superior is the largest accumulation point of a sequence. Analogous statements about the minimal values visited infinitely often are obtained from the limit inferior.

Here the rescaling $f$ indicates the order of magnitude of the maximal fluctuations. This order of magnitude can be exceeded, but only finitely often. Specifically, the laws of the iterated logarithm show that (under certain conditions, for example for independent, identically distributed random variables with expected value $\mu$ and variance $\sigma^2$)

$\limsup_{n \to \infty} \frac{S_n - n\mu}{\sqrt{2 \sigma^2 n \log \log n}} = 1$

for almost all $\omega$, from which the name iterated logarithm derives.
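The envelope can be illustrated by simulation (an added sketch, not from the original article; the choice of walk and horizon length is arbitrary). A symmetric ±1 random walk has $\mu = 0$ and $\sigma^2 = 1$, so the rescaling is $f(n) = \sqrt{2 n \log \log n}$:

```python
import math
import random

random.seed(2)

# Symmetric random walk with steps +/-1 (mean 0, variance 1 per step).
n = 200_000
s = 0
walk_max = 0.0  # largest observed value of the ratio S_k / f(k)
for k in range(1, n + 1):
    s += random.choice((-1, 1))
    if k >= 16:  # log(log(k)) requires k > e
        envelope = math.sqrt(2 * k * math.log(math.log(k)))
        walk_max = max(walk_max, s / envelope)

# The law of the iterated logarithm says limsup S_n / f(n) = 1 almost
# surely; over a finite horizon the observed maximum of the ratio is
# therefore of order 1.
print(walk_max)
```

Convergence of the limit superior to 1 is extremely slow, so a finite simulation only shows that the ratio stays of order 1, not that it approaches 1.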

History

A first law of the iterated logarithm, for independent Bernoulli-distributed random variables with parameter $\tfrac{1}{2}$, was shown by Alexander Jakowlewitsch Chintschin in 1922. Two years later he proved the more general version for random variables that take on two different values. A first general version of the law of the iterated logarithm was shown by Andrei Nikolayevich Kolmogorov in 1929. Further versions, for example by William Feller, were published over time. A common version of the law of the iterated logarithm is the Hartman–Wintner theorem, which was proved by Philip Hartman and Aurel Wintner in 1941. There are also formulations in continuous time for the Wiener process.

Further convergence theorems

Large deviations

According to the weak law of large numbers, the arithmetic mean converges in probability to the expected value; the probability of observing a value that deviates from the expected value by a fixed amount thus converges to zero. The theory of large deviations examines how fast this convergence takes place.
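As an illustration (an added sketch, not part of the original article), for fair coin flips the decay turns out to be exponential, and the rate can be computed exactly and compared with the rate function $I(a) = a \log(2a) + (1-a)\log(2(1-a))$ that the standard Chernoff bound gives for Bernoulli(1/2) variables:

```python
import math

def tail_prob(n, a):
    """Exact P(S_n >= a*n) for S_n ~ Binomial(n, 1/2)."""
    k0 = math.ceil(a * n)
    return sum(math.comb(n, k) for k in range(k0, n + 1)) / 2**n

def rate(a):
    """Rate function I(a) = D(a || 1/2) for Bernoulli(1/2) variables."""
    return a * math.log(2 * a) + (1 - a) * math.log(2 * (1 - a))

# The empirical decay rate -log P(S_n >= a*n) / n approaches I(a)
# from above as n grows.
a = 0.6
for n in (50, 200, 800):
    p = tail_prob(n, a)
    print(n, -math.log(p) / n, rate(a))
```

The tail probability thus behaves like $e^{-n I(a)}$ up to subexponential corrections, which is the kind of statement large deviation theory makes precise.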

Local limit theorems

Statements that, following the question posed by the central limit theorems, deal with the conditions under which the probability densities of the rescaled partial sums converge to the probability density of the limit distribution are called local limit theorems.

References

  1. Yu.V. Prokhorov: Bernoulli's theorem. In: Michiel Hazewinkel (Ed.): Encyclopaedia of Mathematics. Springer-Verlag, Berlin 2002, ISBN 978-1-55608-010-4 (English, online).
  2. A.V. Prokhorov: Poisson's theorem. In: Michiel Hazewinkel (Ed.): Encyclopaedia of Mathematics. Springer-Verlag, Berlin 2002, ISBN 978-1-55608-010-4 (English, online).
  3. Yu.V. Prokhorov: Law of large numbers. In: Michiel Hazewinkel (Ed.): Encyclopaedia of Mathematics. Springer-Verlag, Berlin 2002, ISBN 978-1-55608-010-4 (English, online).
  4. A.V. Prokhorov: Borel strong law of large numbers. In: Michiel Hazewinkel (Ed.): Encyclopaedia of Mathematics. Springer-Verlag, Berlin 2002, ISBN 978-1-55608-010-4 (English, online).
  5. Yu.V. Prokhorov: Strong law of large numbers. In: Michiel Hazewinkel (Ed.): Encyclopaedia of Mathematics. Springer-Verlag, Berlin 2002, ISBN 978-1-55608-010-4 (English, online).
  6. A.V. Prokhorov: Laplace theorem. In: Michiel Hazewinkel (Ed.): Encyclopaedia of Mathematics. Springer-Verlag, Berlin 2002, ISBN 978-1-55608-010-4 (English, online).
  7. A.V. Prokhorov: Lyapunov theorem. In: Michiel Hazewinkel (Ed.): Encyclopaedia of Mathematics. Springer-Verlag, Berlin 2002, ISBN 978-1-55608-010-4 (English, online).
  8. V.V. Petrov: Lindeberg–Feller theorem. In: Michiel Hazewinkel (Ed.): Encyclopaedia of Mathematics. Springer-Verlag, Berlin 2002, ISBN 978-1-55608-010-4 (English, online).
  9. A. Khintchine: On a theorem of the calculus of probability. In: Fund. Math. No. 6, 1924, pp. 9–20 (edu.pl [PDF; accessed October 7, 2016]).
  10. A. Kolmogoroff: On the law of the iterated logarithm. In: Math. Ann. No. 101, 1929, pp. 126–135 (uni-goettingen.de [accessed October 7, 2016]).
  11. Law of the iterated logarithm. In: Michiel Hazewinkel (Ed.): Encyclopaedia of Mathematics. Springer-Verlag, Berlin 2002, ISBN 978-1-55608-010-4 (English, online).
  12. Klenke: Probability Theory. 2013, p. 529.
  13. V.V. Petrov, V.V. Yurinskii: Probability of large deviations. In: Michiel Hazewinkel (Ed.): Encyclopaedia of Mathematics. Springer-Verlag, Berlin 2002, ISBN 978-1-55608-010-4 (English, online).
  14. V.V. Petrov: Local limit theorems. In: Michiel Hazewinkel (Ed.): Encyclopaedia of Mathematics. Springer-Verlag, Berlin 2002, ISBN 978-1-55608-010-4 (English, online).