# Cumulant

In probability theory and statistics, cumulants are parameters of the distribution of a random variable that satisfy simple arithmetic laws with respect to sums of stochastically independent random variables. The sequence of cumulants begins with the expected value and the variance.

## Definition

Let ${\displaystyle M_{X}(t)}$ be the moment-generating function of the random variable ${\displaystyle X}$, i.e.,

${\displaystyle M_{X}(t)=E(e^{tX}).}$

Then the function

${\displaystyle g_{X}(t)=\ln M_{X}(t)=\ln E(e^{tX})}$

is called the cumulant-generating function. The ${\displaystyle n}$-th cumulant ${\displaystyle \kappa _{n}}$ of the distribution of ${\displaystyle X}$ is then defined by

${\displaystyle \kappa _{n}={\frac {\partial ^{n}}{\partial t^{n}}}g_{X}(t){\bigg |}_{t=0}.}$

Alternatively, the cumulants can be defined via the characteristic function ${\displaystyle G_{X}(t)=E(e^{itX})}$ of a random variable ${\displaystyle X}$. The ${\displaystyle n}$-th cumulant ${\displaystyle \kappa _{n}}$ is then given by

${\displaystyle \kappa _{n}={\frac {1}{i^{n}}}{\frac {\partial ^{n}}{\partial t^{n}}}\ln G_{X}(t){\bigg |}_{t=0}.}$
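As a brief worked example (added here for illustration): for a Poisson-distributed random variable with parameter ${\displaystyle \lambda }$, the cumulant-generating function can be computed in closed form, and all cumulants coincide:

```latex
% Moment-generating function of the Poisson distribution with parameter \lambda:
M_X(t) = E(e^{tX}) = \exp\bigl(\lambda(e^{t}-1)\bigr)
% Cumulant-generating function:
g_X(t) = \ln M_X(t) = \lambda(e^{t}-1) = \lambda\sum_{n=1}^{\infty}\frac{t^{n}}{n!}
% Hence every cumulant equals \lambda:
\kappa_n = \frac{\partial^n}{\partial t^n}\, g_X(t)\Big|_{t=0} = \lambda \qquad (n \ge 1)
```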

## Properties

### Shift invariance

The cumulants are also called semi-invariants of the density ${\displaystyle p(x)}$ because, with the exception of ${\displaystyle \kappa _{1}}$, they do not change when the distribution is shifted. Let ${\displaystyle X}$ be a random variable; then for any constant ${\displaystyle c\in \mathbb {R} }$:

${\displaystyle \kappa _{1}(X+c)=\kappa _{1}(X)+c,}$
${\displaystyle \kappa _{n}(X+c)=\kappa _{n}(X)\quad {\text{for }}n\geq 2.}$

### Homogeneity

The ${\displaystyle n}$-th cumulant is homogeneous of degree ${\displaystyle n}$: for any constant ${\displaystyle c}$,

${\displaystyle \kappa _{n}(cX)=c^{n}\kappa _{n}(X).}$

### Additivity

Let ${\displaystyle X_{1}}$ and ${\displaystyle X_{2}}$ be stochastically independent random variables. Then for ${\displaystyle Y=X_{1}+X_{2}}$,

${\displaystyle \kappa _{n}(Y)=\kappa _{n}(X_{1})+\kappa _{n}(X_{2}).}$

For independent random variables the characteristic function factorizes, ${\displaystyle G_{Y}(t)=G_{X_{1}}(t)\cdot G_{X_{2}}(t)}$, so its logarithm is a sum:

${\displaystyle \ln G_{Y}(t)=\ln G_{X_{1}}(t)+\ln G_{X_{2}}(t)=\sum _{n=1}^{\infty }{\frac {(\mathrm {i} t)^{n}}{n!}}\left[\kappa _{n}(X_{1})+\kappa _{n}(X_{2})\right]=\sum _{n=1}^{\infty }{\frac {(\mathrm {i} t)^{n}}{n!}}\kappa _{n}(Y)}$

More generally, for the sum ${\displaystyle Y=\sum _{i=1}^{N}X_{i}}$ of ${\displaystyle N}$ stochastically independent random variables ${\displaystyle X_{1},X_{2},\dotsc ,X_{N}}$:

${\displaystyle \kappa _{n}(Y)=\sum _{i=1}^{N}\kappa _{n}(X_{i}).}$
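Additivity can be checked numerically. The sketch below (an added illustration; sample size, seed, and the two distributions are arbitrary choices) estimates the second cumulant, i.e. the variance, of two independent samples and of their sum with NumPy:

```python
import numpy as np

# Estimate the second cumulant (the variance) for two independent samples
# and for their sum; sample size and seed are arbitrary choices.
rng = np.random.default_rng(0)
n = 200_000
x1 = rng.exponential(scale=2.0, size=n)  # kappa_2 = scale^2 = 4
x2 = rng.uniform(0.0, 1.0, size=n)       # kappa_2 = 1/12

k2_x1 = x1.var(ddof=1)
k2_x2 = x2.var(ddof=1)
k2_sum = (x1 + x2).var(ddof=1)

# Additivity: kappa_2(X1 + X2) = kappa_2(X1) + kappa_2(X2).
print(abs(k2_sum - (k2_x1 + k2_x2)))  # small (pure sampling error)
```

The same check works for any ${\displaystyle \kappa _{n}}$, at the cost of noisier estimators for higher orders.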

### Peculiarity of the normal distribution

For a normal distribution with expected value ${\displaystyle \mu }$ and variance ${\displaystyle \sigma ^{2}}$, the characteristic function is ${\displaystyle G(t)=\exp(\mathrm {i} \mu t-\sigma ^{2}t^{2}/2)}$, and thus the cumulants are:

${\displaystyle \kappa _{1}=\mu ;\quad \kappa _{2}=\sigma ^{2};\quad \kappa _{n}=0}$ for ${\displaystyle n\geq 3}$.

All cumulants of order greater than 2 vanish. This property characterizes the normal distribution.
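This can be seen directly from the definition (a worked check added here):

```latex
% The cumulant-generating function of N(\mu, \sigma^2) is the quadratic polynomial
\ln G(t) = \mathrm{i}\mu t - \tfrac{1}{2}\sigma^{2}t^{2},
% so its derivatives at t = 0 give
\kappa_1 = \tfrac{1}{\mathrm{i}}\tfrac{\partial}{\partial t}\ln G(t)\Big|_{t=0} = \mu,
\qquad
\kappa_2 = \tfrac{1}{\mathrm{i}^2}\tfrac{\partial^2}{\partial t^2}\ln G(t)\Big|_{t=0} = \sigma^2,
% while all derivatives of order n \ge 3 of a quadratic polynomial vanish identically.
```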

One can show that

• either all cumulants except the first two vanish,
• or infinitely many non-vanishing cumulants exist.

In other words: the cumulant-generating function ${\displaystyle \ln G(t)}$ cannot be a polynomial of degree greater than 2.

## Cumulants and Moments

### Cumulants as a function of the moments

Let ${\displaystyle m_{n}}$ denote the ${\displaystyle n}$-th moment of a random variable ${\displaystyle X}$. It can be obtained from ${\displaystyle G(t)}$ as

${\displaystyle m_{n}={\frac {1}{i^{n}}}{\frac {\partial ^{n}}{\partial t^{n}}}G(t){\bigg |}_{t=0}.}$

Consequently, the cumulants can be expressed in terms of the moments ${\displaystyle m_{n}}$ as follows:

${\displaystyle \kappa _{1}=m_{1}}$
${\displaystyle \kappa _{2}=m_{2}-m_{1}^{2}}$
${\displaystyle \kappa _{3}=m_{3}-3m_{2}m_{1}+2m_{1}^{3}}$
${\displaystyle \kappa _{4}=m_{4}-4m_{3}m_{1}-3m_{2}^{2}+12m_{2}m_{1}^{2}-6m_{1}^{4}}$
${\displaystyle \kappa _{5}=m_{5}+5m_{1}(6m_{2}^{2}-m_{4})-10m_{3}m_{2}+20m_{3}m_{1}^{2}-60m_{2}m_{1}^{3}+24m_{1}^{5}}$

In general, the dependence of the cumulants on the moments is described by the following recursion formula:

${\displaystyle \kappa _{n}=m_{n}-\sum _{k=1}^{n-1}{n-1 \choose k-1}\kappa _{k}m_{n-k}}$

Alternatively, by Faà di Bruno's formula, the ${\displaystyle n}$-th cumulant can be represented using the Bell polynomials ${\displaystyle B_{n,k}}$ and the moments ${\displaystyle m_{1},\dots ,m_{n}}$ as

${\displaystyle \kappa _{n}=\sum _{k=1}^{n}(k-1)!(-1)^{k+1}B_{n,k}(m_{1},\dots ,m_{n-k+1}).}$
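The recursion translates directly into code. The following sketch (an added illustration; the function name `cumulants_from_moments` is our own) computes cumulants from raw moments:

```python
from math import comb

def cumulants_from_moments(m):
    """Given raw moments [m_1, ..., m_N], return cumulants [k_1, ..., k_N]
    via the recursion  k_n = m_n - sum_{k=1}^{n-1} C(n-1, k-1) k_k m_{n-k}."""
    kappa = []
    for n in range(1, len(m) + 1):
        k_n = m[n - 1] - sum(
            comb(n - 1, k - 1) * kappa[k - 1] * m[n - k - 1]
            for k in range(1, n)
        )
        kappa.append(k_n)
    return kappa

# Raw moments m_1..m_4 = 0, 1, 0, 3 of the standard normal give
# kappa_1..kappa_4 = 0, 1, 0, 0, as expected.
print(cumulants_from_moments([0, 1, 0, 3]))  # [0, 1, 0, 0]
```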

With the central moments ${\displaystyle \mu _{n}}$, the formulas are usually shorter:

${\displaystyle \kappa _{1}=m_{1}}$
${\displaystyle \kappa _{2}=\mu _{2}}$
${\displaystyle \kappa _{3}=\mu _{3}}$
${\displaystyle \kappa _{4}=\mu _{4}-3\mu _{2}^{2}}$
${\displaystyle \kappa _{5}=\mu _{5}-10\mu _{3}\mu _{2}}$
${\displaystyle \kappa _{6}=\mu _{6}-15\mu _{4}\mu _{2}-10\mu _{3}^{2}+30\mu _{2}^{3}}$

The first two cumulants are of particular importance: ${\displaystyle \kappa _{1}=m_{1}=E(X)}$ is the expected value and ${\displaystyle \kappa _{2}=\mu _{2}=V(X)}$ is the variance. From the fourth order on, cumulant and central moment no longer coincide.

### Derivation of the first cumulants

To derive the first cumulants, one expands ${\displaystyle \ln G(t)}$ about ${\displaystyle G(t)=1}$,

${\displaystyle \ln G(t)=\sum _{n=1}^{\infty }(-1)^{n+1}{\frac {(G(t)-1)^{n}}{n}}=(G(t)-1)-{\frac {(G(t)-1)^{2}}{2}}+{\frac {(G(t)-1)^{3}}{3}}\mp \dotsb }$

and inserts the series representation of ${\displaystyle G(t)}$,

${\displaystyle G(t)=\sum _{n=0}^{\infty }{\frac {(\mathrm {i} t)^{n}}{n!}}m_{n}=1+\mathrm {i} tm_{1}+{\frac {(\mathrm {i} t)^{2}}{2}}m_{2}+{\frac {(\mathrm {i} t)^{3}}{6}}m_{3}+\dotsb }$

into this expansion:

{\displaystyle {\begin{aligned}\ln G(t)=&\left[\mathrm {i} tm_{1}+{\frac {(\mathrm {i} t)^{2}}{2}}m_{2}+{\frac {(\mathrm {i} t)^{3}}{6}}m_{3}+\dotsb \right]\\&-{\frac {1}{2}}\left[\mathrm {i} tm_{1}+{\frac {(\mathrm {i} t)^{2}}{2}}m_{2}+\dotsb \right]^{2}\\&+{\frac {1}{3}}\left[\mathrm {i} tm_{1}+{\frac {(\mathrm {i} t)^{2}}{2}}m_{2}+\dotsb \right]^{3}\mp \dotsb \\=&\left[\mathrm {i} tm_{1}+{\frac {(\mathrm {i} t)^{2}}{2}}m_{2}+{\frac {(\mathrm {i} t)^{3}}{6}}m_{3}+\dotsb \right]\\&-{\frac {1}{2}}\left[(\mathrm {i} t)^{2}m_{1}^{2}+2{\frac {(\mathrm {i} t)^{3}}{2}}m_{1}m_{2}+{\frac {(\mathrm {i} t)^{4}}{4}}m_{2}^{2}+\dotsb \right]\\&+{\frac {1}{3}}\left[(\mathrm {i} t)^{3}m_{1}^{3}+3{\frac {(\mathrm {i} t)^{4}}{2}}m_{1}^{2}m_{2}+3{\frac {(\mathrm {i} t)^{5}}{4}}m_{1}m_{2}^{2}+{\frac {(\mathrm {i} t)^{6}}{8}}m_{2}^{3}+\dotsb \right]\mp \dotsb \end{aligned}}}

Sorting by powers of ${\displaystyle t}$ yields the cumulants:

${\displaystyle \ln G(t)=\mathrm {i} t\underbrace {\left[m_{1}\right]} _{\kappa _{1}}+{\frac {(\mathrm {i} t)^{2}}{2}}\underbrace {\left[m_{2}-m_{1}^{2}\right]} _{\kappa _{2}}+{\frac {(\mathrm {i} t)^{3}}{6}}\underbrace {\left[m_{3}-3m_{1}m_{2}+2m_{1}^{3}\right]} _{\kappa _{3}}+\dotsb }$

### Moments as a function of the cumulants

The ${\displaystyle n}$-th moment is a polynomial of degree ${\displaystyle n}$ in the first ${\displaystyle n}$ cumulants. Here are the first six moments:

${\displaystyle m_{1}=\kappa _{1}}$
${\displaystyle m_{2}=\kappa _{2}+\kappa _{1}^{2}}$
${\displaystyle m_{3}=\kappa _{3}+3\kappa _{2}\kappa _{1}+\kappa _{1}^{3}}$
${\displaystyle m_{4}=\kappa _{4}+4\kappa _{3}\kappa _{1}+3\kappa _{2}^{2}+6\kappa _{2}\kappa _{1}^{2}+\kappa _{1}^{4}}$
${\displaystyle m_{5}=\kappa _{5}+5\kappa _{4}\kappa _{1}+10\kappa _{3}\kappa _{2}+10\kappa _{3}\kappa _{1}^{2}+15\kappa _{2}^{2}\kappa _{1}+10\kappa _{2}\kappa _{1}^{3}+\kappa _{1}^{5}}$
${\displaystyle m_{6}=\kappa _{6}+6\kappa _{5}\kappa _{1}+15\kappa _{4}\kappa _{2}+15\kappa _{4}\kappa _{1}^{2}+10\kappa _{3}^{2}+60\kappa _{3}\kappa _{2}\kappa _{1}+20\kappa _{3}\kappa _{1}^{3}+15\kappa _{2}^{3}+45\kappa _{2}^{2}\kappa _{1}^{2}+15\kappa _{2}\kappa _{1}^{4}+\kappa _{1}^{6}.}$

The coefficients are exactly those appearing in Faà di Bruno's formula. More generally, the ${\displaystyle n}$-th moment is precisely the ${\displaystyle n}$-th complete Bell polynomial ${\displaystyle B_{n}}$ evaluated at the cumulants ${\displaystyle \kappa _{1},\dots ,\kappa _{n}}$:

${\displaystyle m_{n}=B_{n}(\kappa _{1},\dots ,\kappa _{n}).}$
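The inverse direction can also be computed by a short recursion for the complete Bell polynomials. The sketch below (an added illustration; `moments_from_cumulants` is our own name) uses ${\displaystyle m_{n}=\sum _{k=1}^{n}{n-1 \choose k-1}\kappa _{k}m_{n-k}}$ with ${\displaystyle m_{0}=1}$:

```python
from math import comb

def moments_from_cumulants(kappa):
    """Given cumulants [k_1, ..., k_N], return raw moments [m_1, ..., m_N]
    via  m_n = sum_{k=1}^{n} C(n-1, k-1) k_k m_{n-k}  with m_0 = 1
    (the recursion for the complete Bell polynomials)."""
    m = [1]  # m[0] = m_0 = 1
    for n in range(1, len(kappa) + 1):
        m.append(sum(comb(n - 1, k - 1) * kappa[k - 1] * m[n - k]
                     for k in range(1, n + 1)))
    return m[1:]

# The standard normal has kappa = (0, 1, 0, 0) and raw moments 0, 1, 0, 3.
print(moments_from_cumulants([0, 1, 0, 0]))  # [0, 1, 0, 3]
```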

To express the central moments as functions of the cumulants, drop from the above polynomials for the moments all terms in which ${\displaystyle \kappa _{1}}$ appears as a factor:

${\displaystyle \mu _{1}=0}$
${\displaystyle \mu _{2}=\kappa _{2}}$
${\displaystyle \mu _{3}=\kappa _{3}}$
${\displaystyle \mu _{4}=\kappa _{4}+3\kappa _{2}^{2}}$
${\displaystyle \mu _{5}=\kappa _{5}+10\kappa _{3}\kappa _{2}}$
${\displaystyle \mu _{6}=\kappa _{6}+15\kappa _{4}\kappa _{2}+10\kappa _{3}^{2}+15\kappa _{2}^{3}.}$

### Cumulants and set partitions

Above we expressed the moments as polynomials in the cumulants. These polynomials have an interesting combinatorial interpretation: their coefficients count set partitions. The general form of these polynomials is

${\displaystyle m_{n}=\sum _{\pi \in \Pi }\prod _{B\in \pi }\kappa _{\left|B\right|}}$

where

• ${\displaystyle \pi }$ runs over the set ${\displaystyle \Pi }$ of all partitions of an ${\displaystyle n}$-element set;
• "${\displaystyle B\in \pi }$" means that ${\displaystyle B}$ is one of the blocks into which the set is partitioned; and
• ${\displaystyle \vert B\vert }$ is the size of the block ${\displaystyle B}$.
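The partition formula can be verified by brute-force enumeration. The sketch below (an added illustration; `set_partitions` and `moment_via_partitions` are our own helper names) reproduces the polynomial for ${\displaystyle m_{4}}$ at arbitrary numeric values of the cumulants:

```python
from math import prod

def set_partitions(elements):
    """Yield all partitions of a list as lists of blocks."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for partition in set_partitions(rest):
        # Put `first` into each existing block in turn ...
        for i in range(len(partition)):
            yield partition[:i] + [[first] + partition[i]] + partition[i + 1:]
        # ... or into a new singleton block.
        yield [[first]] + partition

def moment_via_partitions(kappa, n):
    """m_n = sum over all partitions pi of an n-set of prod_{B in pi} kappa_|B|.
    kappa[k-1] holds the k-th cumulant."""
    return sum(prod(kappa[len(block) - 1] for block in p)
               for p in set_partitions(list(range(n))))

kappa = [2, 3, 5, 7]  # arbitrary values for kappa_1 .. kappa_4
# Compare with m_4 = kappa_4 + 4 k3 k1 + 3 k2^2 + 6 k2 k1^2 + k1^4
#            = 7 + 40 + 27 + 72 + 16 = 162.
print(moment_via_partitions(kappa, 4))  # 162
```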

## Multivariate cumulants

The multivariate (or joint) cumulants of several random variables ${\displaystyle X_{1},\dots ,X_{n}}$ can likewise be defined by a cumulant-generating function:

${\displaystyle K(t_{1},t_{2},\dots ,t_{n})=\log E(\mathrm {e} ^{\sum _{j=1}^{n}t_{j}X_{j}}).}$

This can again be expressed in combinatorial form as

${\displaystyle \kappa _{n}(X_{1},\dots ,X_{n})=\sum _{\pi }(|\pi |-1)!(-1)^{|\pi |-1}\prod _{B\in \pi }E\left(\prod _{i\in B}X_{i}\right)}$

where ${\displaystyle \pi }$ runs over all partitions of ${\displaystyle \{1,\dots ,n\}}$, ${\displaystyle B}$ runs over the blocks of the partition ${\displaystyle \pi }$, and ${\displaystyle \vert \pi \vert }$ is the number of blocks in ${\displaystyle \pi }$. For example,

${\displaystyle \kappa _{3}(X,Y,Z)=E(XYZ)-E(XY)E(Z)-E(XZ)E(Y)-E(YZ)E(X)+2E(X)E(Y)E(Z).}$
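For ${\displaystyle n=2}$ the formula can be evaluated by hand (a worked check added here):

```latex
% The partitions of {1,2} are {{1,2}} (one block) and {{1},{2}} (two blocks), so
\kappa_2(X, Y) = (1-1)!\,(-1)^{0}\, E(XY) + (2-1)!\,(-1)^{1}\, E(X)E(Y)
             = E(XY) - E(X)E(Y) = \operatorname{Cov}(X, Y),
% i.e. the joint second cumulant is the covariance.
```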

This combinatorial relationship between cumulants and moments takes a simpler form if one instead expresses moments in terms of cumulants:

${\displaystyle E(X_{1}\cdots X_{n})=\sum _{\pi }\prod _{B\in \pi }\kappa (X_{i}:i\in B).}$

For example:

${\displaystyle E(XYZ)=\kappa (X,Y,Z)+\kappa (X,Y)\kappa (Z)+\kappa (X,Z)\kappa (Y)+\kappa (Y,Z)\kappa (X)+\kappa (X)\kappa (Y)\kappa (Z).}$

The first cumulant of a random variable is its expected value; the joint second cumulant of two random variables is their covariance. If some of the random variables are independent of all others, then every mixed cumulant containing at least two of the independent variables vanishes. If all ${\displaystyle n}$ random variables are equal, the joint cumulant ${\displaystyle \kappa _{n}(X,\dots ,X)}$ reduces to the ordinary ${\displaystyle n}$-th cumulant ${\displaystyle \kappa _{n}}$ of ${\displaystyle X}$.

Another important property of the multivariate cumulants is their multilinearity in the arguments:

${\displaystyle \kappa _{n}(X+Y,Z_{1},Z_{2},\dots )=\kappa _{n}(X,Z_{1},Z_{2},\dots )+\kappa _{n}(Y,Z_{1},Z_{2},\dots ).}$

## Consequences

Let ${\displaystyle X_{1},X_{2},\dotsc ,X_{N}}$ be identically distributed, stochastically independent random variables.

### Central limit theorem

For the random variable

${\displaystyle Y={\frac {1}{\sqrt {N}}}(X_{1}+X_{2}+\dotsb +X_{N})}$

the properties of homogeneity and additivity yield the cumulants

${\displaystyle \kappa _{n}(Y)={\frac {1}{{\sqrt {N}}^{n}}}\sum _{i=1}^{N}\kappa _{n}(X_{i})={\mathcal {O}}(N^{1-n/2}).}$

The order arises because the sum ${\displaystyle \sum _{i=1}^{N}\kappa _{n}(X_{i})}$ over the individual cumulants is of order ${\displaystyle {\mathcal {O}}(N)}$. The orders of the first cumulants are:

${\displaystyle \kappa _{1}(Y)={\mathcal {O}}(N^{1/2}),\quad \kappa _{2}(Y)={\mathcal {O}}(N^{0}),\quad \kappa _{3}(Y)={\mathcal {O}}(N^{-1/2}),\quad \kappa _{4}(Y)={\mathcal {O}}(N^{-1})}$

For ${\displaystyle n\geq 3}$ the order is a negative power of ${\displaystyle N}$, so in the limit of infinitely many random variables

${\displaystyle \lim _{N\to \infty }\kappa _{n}(Y)=0\quad {\text{for}}\quad n\geq 3.}$

That is, only the first two cumulants survive. The only distribution possessing only a first and second cumulant is the Gaussian distribution. This makes it plausible that the sum of arbitrary random variables, divided by the square root of their number, converges to the Gaussian distribution; this is the central limit theorem. To turn this plausibility argument into a proof, one needs general results on characteristic functions. The Gaussian distribution thus occupies a special position among all distributions: if many stochastically independent influences act in an experiment, their totality can be represented by a Gaussian random variable.

As a simple special case, let all random variables be identical, ${\displaystyle X_{i}=X}$, with mean 0, variance ${\displaystyle \sigma ^{2}}$, and arbitrary higher moments. Then

${\displaystyle \kappa _{1}(Y)={\frac {1}{\sqrt {N}}}\sum _{i=1}^{N}0=0,\quad \kappa _{2}(Y)={\frac {1}{N}}\sum _{i=1}^{N}\sigma ^{2}=\sigma ^{2},\quad \kappa _{3}(Y)={\frac {1}{{\sqrt {N}}^{3}}}\sum _{i=1}^{N}\kappa _{3}(X)={\frac {\kappa _{3}(X)}{\sqrt {N}}}{\underset {N\to \infty }{\longrightarrow }}0}$
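The decay of the third cumulant can be observed in a simulation. The sketch below (an added illustration; replication count, seed, and the choice of exponential summands are arbitrary) uses Exp(1) variables, whose cumulants are ${\displaystyle \kappa _{n}(X)=(n-1)!}$, in particular ${\displaystyle \kappa _{3}(X)=2}$:

```python
import numpy as np

rng = np.random.default_rng(1)
reps, N = 50_000, 100

# Y = (X_1 + ... + X_N) / sqrt(N) with Exp(1) summands (kappa_3(X) = 2).
samples = rng.exponential(size=(reps, N)).sum(axis=1) / np.sqrt(N)

# The third cumulant equals the third central moment.
k3 = float(np.mean((samples - samples.mean()) ** 3))
print(k3)  # close to kappa_3(X) / sqrt(N) = 2 / 10 = 0.2
```

Repeating with larger ${\displaystyle N}$ shows the estimate shrinking like ${\displaystyle N^{-1/2}}$, as predicted.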

For the random variable ${\displaystyle Z}$,

${\displaystyle Z:=Y-E(Y)={\frac {1}{\sqrt {N}}}(X_{1}-E(X_{1})+X_{2}-E(X_{2})+\dotsb +X_{N}-E(X_{N})),}$

one can use the shift invariance of the cumulants of order greater than or equal to 2. The only difference from the random variable ${\displaystyle Y}$ is that the expected value of ${\displaystyle Z}$ is zero, even if the expected values of the ${\displaystyle X_{i}}$ do not vanish:

{\displaystyle {\begin{aligned}\kappa _{1}(Z)&={\frac {1}{\sqrt {N}}}\sum _{i=1}^{N}\underbrace {\kappa _{1}(X_{i}-E(X_{i}))} _{E(X_{i})-E(X_{i})}=0\\\kappa _{2}(Z)&={\frac {1}{N}}\sum _{i=1}^{N}\kappa _{2}(X_{i}-E(X_{i}))={\frac {1}{N}}\sum _{i=1}^{N}\kappa _{2}(X_{i})=\kappa _{2}(Y)={\frac {1}{N}}\sum _{i=1}^{N}\sigma _{i}^{2}{\overset {\text{special case}}{\underset {\sigma _{i}=\sigma ,\,\forall i}{=}}}\sigma ^{2}\\\kappa _{3}(Z)&={\frac {1}{{\sqrt {N}}^{3}}}\sum _{i=1}^{N}\kappa _{3}(X_{i}-E(X_{i}))={\frac {1}{{\sqrt {N}}^{3}}}\sum _{i=1}^{N}\kappa _{3}(X_{i})=\kappa _{3}(Y){\underset {N\to \infty }{\longrightarrow }}0\end{aligned}}}

### Law of Large Numbers

For the random variable

${\displaystyle Y={\frac {1}{N}}(X_{1}+X_{2}+\dotsb +X_{N})}$

the properties of homogeneity and additivity yield the cumulants

${\displaystyle \kappa _{n}(Y)={\frac {1}{N^{n}}}\sum _{i=1}^{N}\kappa _{n}(X_{i})={\mathcal {O}}(N^{1-n}).}$

The order arises because the sum ${\displaystyle \sum _{i=1}^{N}\kappa _{n}(X_{i})}$ over the individual cumulants is of order ${\displaystyle {\mathcal {O}}(N)}$. The orders of the first cumulants are:

${\displaystyle \kappa _{1}(Y)={\mathcal {O}}(N^{0}),\quad \kappa _{2}(Y)={\mathcal {O}}(N^{-1}),\quad \kappa _{3}(Y)={\mathcal {O}}(N^{-2}),\quad \kappa _{4}(Y)={\mathcal {O}}(N^{-3})}$

For ${\displaystyle n\geq 2}$ the order is a negative power of ${\displaystyle N}$, so in the limit of infinitely many random variables

${\displaystyle \lim _{N\to \infty }\kappa _{n}(Y)=0\quad {\text{for}}\quad n\geq 2.}$

That is, only the first cumulant, i.e. the first moment, survives. With increasing ${\displaystyle N}$ one obtains a Gaussian distribution around the mean value

${\displaystyle \kappa _{1}(Y)={\frac {1}{N}}\sum _{i=1}^{N}\kappa _{1}(X_{i}),}$

whose variance is of order ${\displaystyle N^{-1}}$; in the limit ${\displaystyle N\to \infty }$ it becomes a sharp (delta-shaped) peak at ${\displaystyle \kappa _{1}}$.

As a simple special case, let all random variables be identical, ${\displaystyle X_{i}=X}$, with mean ${\displaystyle \mu }$, variance ${\displaystyle \sigma ^{2}}$, and arbitrary higher moments. Then

${\displaystyle \kappa _{1}(Y)={\frac {1}{N}}\sum _{i=1}^{N}\mu =\mu ,\quad \kappa _{2}(Y)={\frac {1}{N^{2}}}\sum _{i=1}^{N}\sigma ^{2}={\frac {\sigma ^{2}}{N}}{\underset {N\to \infty }{\longrightarrow }}0,\quad \kappa _{3}(Y)={\frac {1}{N^{3}}}\sum _{i=1}^{N}\kappa _{3}(X)={\frac {\kappa _{3}(X)}{N^{2}}}{\underset {N\to \infty }{\longrightarrow }}0}$

Thus ${\displaystyle Y}$ is a random variable with the same mean as ${\displaystyle X}$ (${\displaystyle Y}$ is an unbiased estimator of the mean of ${\displaystyle X}$). The width of its distribution (the standard deviation from the mean), which becomes ever narrower with increasing ${\displaystyle N}$, is ${\displaystyle \sigma _{Y}=\sigma _{X}/{\sqrt {N}}}$.
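The shrinking variance of the sample mean can be checked by simulation (an added sketch; sample sizes and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
reps = 50_000

# Variance of the sample mean of N Exp(1) draws (mu = 1, sigma^2 = 1):
# the analysis above predicts kappa_2(Y) = sigma^2 / N.
var_of_mean = {}
for N in (10, 100):
    means = rng.exponential(size=(reps, N)).mean(axis=1)
    var_of_mean[N] = float(means.var())

print(var_of_mean)  # roughly {10: 0.1, 100: 0.01}
```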

## History

Cumulants and their properties were first described in 1889 by the Danish mathematician Thorvald Nicolai Thiele in a book published in Danish. Although the book was reviewed in detail in the Jahrbuch über die Fortschritte der Mathematik that same year, the results were initially largely ignored, so that in 1901 Felix Hausdorff described these parameters as "newly introduced" (by him).

## Free cumulants

In the combinatorial moment-cumulant formula above,

${\displaystyle E(X_{1}\cdots X_{n})=\sum _{\pi }\prod _{B\in \pi }\kappa (X_{i}:i\in B),}$

one sums over all partitions of the set ${\displaystyle \{1,\dotsc ,n\}}$. If one instead sums only over the non-crossing partitions, one obtains the free cumulants. These were introduced by Roland Speicher and play the same role in free probability theory as the ordinary cumulants do in classical probability theory. In particular, the free cumulants are additive for free random variables. Wigner's semicircle distribution, the free counterpart of the normal distribution, is characterized by the fact that only the free cumulant of second order does not vanish.
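As a combinatorial aside (an added sketch; `set_partitions` and `is_noncrossing` are our own helper names), the non-crossing partitions can be obtained by filtering all set partitions, and their counts are the Catalan numbers:

```python
from itertools import combinations

def set_partitions(elements):
    """Yield all partitions of a list as lists of blocks."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for partition in set_partitions(rest):
        for i in range(len(partition)):
            yield partition[:i] + [[first] + partition[i]] + partition[i + 1:]
        yield [[first]] + partition

def is_noncrossing(partition):
    """True if there are no a < b < c < d with a, c in one block
    and b, d in a different block."""
    block_of = {x: i for i, block in enumerate(partition) for x in block}
    for a, b, c, d in combinations(sorted(block_of), 4):
        if block_of[a] == block_of[c] != block_of[b] == block_of[d]:
            return False
    return True

# Counting non-crossing partitions of an n-element set gives the Catalan numbers;
# for n = 4 exactly one of the 15 partitions ({1,3},{2,4}) is crossing.
counts = [sum(is_noncrossing(p) for p in set_partitions(list(range(n))))
          for n in range(1, 5)]
print(counts)  # [1, 2, 5, 14]
```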

## References

1. Thorvald Nicolai Thiele: Forelæsninger over almindelig Iagttagelseslære: Sandsynlighedsregning og mindste Kvadraters Methode. Copenhagen 1889.
2. Felix Hausdorff: Collected Works, Volume V: Astronomy, Optics and Probability Theory. 2006, ISBN 978-3-540-30624-5, pp. 544, 577, doi:10.1007/3-540-30669-2_8.
3. Roland Speicher: Multiplicative functions on the lattice of non-crossing partitions and free convolution. In: Mathematische Annalen, 298 (4), 1994, pp. 611-628.
4. Jonathan Novak, Piotr Śniady: What Is a Free Cumulant? In: Notices of the American Mathematical Society, 58 (2), 2011, pp. 300-301.