# Central limit theorems

The central limit theorems (German: zentrale Grenzwertsätze, ZGWS) are a class of weak convergence results in probability theory, counted among the limit theorems of stochastics. They deal with the convergence in distribution, or weak convergence, of sequences of random variables. The individual statements differ considerably in their generality: there are versions that hold only for binomially distributed random variables as well as versions for random variables with values in function spaces. Common to all theorems is the statement that the sum of a large number of independent random variables asymptotically follows a stable distribution. If the random variables have finite and positive variance, the sum is approximately normally distributed, which explains the special position of the normal distribution.

When “the” central limit theorem is mentioned, the central limit theorem of Lindeberg–Lévy is usually meant.

## Problem statement

In their simplest form, the central limit theorems deal with the conditions under which the scaled sum of random variables converges to the standard normal distribution:

$f(n)\sum_{i=1}^{n}\left(X_{i}-g(i)\right)\;\overset{n\to\infty}{\Longrightarrow}\;\mathcal{N}(0,1)$ in distribution

for appropriately selected functions $f, g$.

A first statement of this kind is the central limit theorem of de Moivre–Laplace, which answers this question for a sequence of binomially distributed random variables with parameter $p$. It is based on work by Abraham de Moivre and Pierre-Simon Laplace from 1730 and 1812 and yields convergence for

$f(n)=\frac{1}{\sqrt{np(1-p)}}$   and   $g(n)=p$.
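As an illustration, the quality of this normal approximation can be checked numerically without any sampling, by comparing the exact binomial distribution function with the standard normal one. This is only a sketch; the function names and the parameters $n = 200$, $p = 0.3$ are chosen here for demonstration and are not part of the theorem.

```python
import math

def binom_cdf(n, p, k):
    """Exact P(S_n <= k) for S_n ~ Binomial(n, p), summed term by term."""
    q = 1.0 - p
    return sum(math.comb(n, i) * p**i * q**(n - i) for i in range(k + 1))

def phi(z):
    """Standard normal CDF, expressed via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

n, p = 200, 0.3                       # illustrative parameters
mu = n * p
sigma = math.sqrt(n * p * (1 - p))

k = int(mu + sigma)                   # threshold one standard deviation above the mean
exact = binom_cdf(n, p, k)            # exact binomial probability P(S_n <= k)
approx = phi((k - mu) / sigma)        # de Moivre-Laplace normal approximation

print(exact, approx)                  # the two values are close; a continuity
                                      # correction (k + 0.5) would tighten the gap
```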

The best-known statement is the central limit theorem of Lindeberg–Lévy, which, for an independent and identically distributed sequence of random variables with finite expectation $\mu$ and finite variance $\sigma^{2}$, yields

$f(n)=\frac{1}{\sqrt{\sigma^{2}n}}$   and   $g(n)=\mu$.
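A minimal simulation sketch of this statement, with Uniform(0, 1) summands chosen as an example (so $\mu = 1/2$, $\sigma^2 = 1/12$; variable names are illustrative): the standardized sums should behave like a standard normal sample.

```python
import math
import random
import statistics

random.seed(0)
n, trials = 50, 20000
mu, sigma2 = 0.5, 1.0 / 12.0     # mean and variance of Uniform(0, 1)

# Compute f(n) * sum_i (X_i - g(i)) with f(n) = 1/sqrt(sigma^2 * n), g(i) = mu.
samples = []
for _ in range(trials):
    s = sum(random.random() for _ in range(n))
    samples.append((s - n * mu) / math.sqrt(sigma2 * n))

# Empirical mean and standard deviation of the standardized sums:
# close to 0 and 1, as the N(0, 1) limit predicts.
print(statistics.mean(samples), statistics.stdev(samples))
```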

The Lyapunov condition (central limit theorem of Lyapunov) and the Lindeberg condition (Lindeberg's theorem) are important sufficient conditions for this convergence. Instead of sequences of random variables, triangular arrays (schemes) of random variables are sometimes considered.
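For reference, the Lindeberg condition can be stated as follows (a standard formulation; the notation is chosen here): for independent random variables $X_i$ with expectations $\mu_i$, variances $\sigma_i^{2}$ and $s_n^{2}=\sum_{i=1}^{n}\sigma_i^{2}$, one requires for every $\varepsilon>0$

$\lim_{n\to\infty}\frac{1}{s_n^{2}}\sum_{i=1}^{n}\mathbb{E}\!\left[(X_i-\mu_i)^{2}\,\mathbf{1}_{\{|X_i-\mu_i|>\varepsilon s_n\}}\right]=0.$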

A necessary condition for convergence to the normal distribution is provided by Feller's theorem, which is combined with Lindeberg's theorem to form the Lindeberg–Feller central limit theorem.
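The Feller condition appearing in this context is usually written as (standard formulation, with $s_n^{2}=\sum_{i=1}^{n}\sigma_i^{2}$ as above)

$\lim_{n\to\infty}\max_{1\leq k\leq n}\frac{\sigma_k^{2}}{s_n^{2}}=0,$

that is, the variance of each individual summand becomes asymptotically negligible relative to the total variance.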

## More general questions

The above question can be generalized in different directions.

One possibility is not to look for criteria for convergence to the standard normal distribution, but to investigate convergence to stable distributions. These are exactly the distributions that can occur as limits of rescaled sums of independent random variables.
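A sketch of why stable distributions are the natural limits here, using the classic Cauchy example (the helper function and parameters are illustrative): the arithmetic mean of $n$ standard Cauchy variables is again standard Cauchy, so no rescaling pushes these sums toward a normal limit.

```python
import math
import random
import statistics

random.seed(1)
n, trials = 100, 20000

def cauchy():
    """Standard Cauchy sample via the inverse-CDF method."""
    return math.tan(math.pi * (random.random() - 0.5))

# Means of n standard Cauchy variables; by stability these are again
# standard Cauchy, with no concentration as n grows.
means = [sum(cauchy() for _ in range(n)) / n for _ in range(trials)]

# For a standard Cauchy C, P(|C| <= 1) = 1/2, so the median of |mean|
# should sit near 1 rather than shrinking like 1/sqrt(n).
med = statistics.median(abs(m) for m in means)
print(med)
```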

Another possibility is to examine higher-dimensional versions. This ranges from convergence in distribution of random vectors to the multivariate normal distribution (multivariate central limit theorem) to distributional convergence on infinite-dimensional spaces such as the space of continuous functions (functional central limit theorem, Donsker's invariance principle).