Convolution (stochastics)

from Wikipedia, the free encyclopedia

In stochastics, convolution is an operation that combines two probability measures into a new probability measure. It makes it possible to assign a meaningful probability distribution to the sum of values that are subject to chance: the distribution of the sum of two independent random variables is exactly the convolution of the distributions of the individual random variables.

If the probability measures under consideration have a probability function or a probability density function, the convolution of the probability measures can be reduced to the convolution (of functions) of the probability functions or probability density functions.

Probability measures on the integers

Definition

Given are two discrete probability measures $P$ and $Q$ on the integers with probability functions $f_P$ and $f_Q$. The convolution of the probability measures $P$ and $Q$ is then the probability measure $P * Q$ on $(\mathbb{Z}, \mathcal{P}(\mathbb{Z}))$ with the probability function

$$ f_{P*Q}(n) = \sum_{k \in \mathbb{Z}} f_P(k)\, f_Q(n-k) \quad \text{for } n \in \mathbb{Z}. $$

Thus

$$ f_{P*Q} = f_P * f_Q, $$

where the $*$ on the right-hand side denotes the convolution of the functions $f_P$ and $f_Q$.

Remarks

If the probability functions are only defined on a subset of the integers, such as $\mathbb{N}$ or $\mathbb{N}_0$, they are extended to all of $\mathbb{Z}$ by the value zero, i.e. $f(k) = 0$ for all $k$ outside this subset. In the special case that both probability measures are defined on the natural numbers including zero, the convolution satisfies

$$ f_{P*Q}(n) = \sum_{k=0}^{n} f_P(k)\, f_Q(n-k). $$

Furthermore, the convolution is uniquely determined by specifying the probability functions, since a probability measure on a discrete probability space is uniquely determined by its probability function.
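For finitely supported probability functions, the defining sum can be evaluated directly. A minimal sketch in Python (the dict-based representation and the name `convolve_pmf` are illustrative choices, not from the article):

```python
def convolve_pmf(f_p, f_q):
    """Convolution of two probability functions given as dicts {k: probability}.

    Implements f_{P*Q}(n) = sum over k of f_P(k) * f_Q(n - k) for
    finitely supported probability functions on the integers."""
    result = {}
    for k, pk in f_p.items():
        for j, qj in f_q.items():
            result[k + j] = result.get(k + j, 0.0) + pk * qj
    return result

# Two fair coins (values 0/1): the convolution is Bin(2, 1/2).
coin = {0: 0.5, 1: 0.5}
two_coins = convolve_pmf(coin, coin)
# two_coins == {0: 0.25, 1: 0.5, 2: 0.25}
```

Since a probability measure on a discrete space is determined by its probability function, the returned dict fully describes the convolved measure.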

Example

Let $P$ be the Bernoulli distribution with parameter $p \in (0,1)$, i.e. with probability function

$$ f_P(k) = \begin{cases} p & \text{for } k = 1, \\ 1-p & \text{for } k = 0, \end{cases} $$

and let $Q$ be the binomial distribution with parameters 2 and $p$, i.e. with probability function

$$ f_Q(k) = \binom{2}{k} p^k (1-p)^{2-k} $$

for $k \in \{0, 1, 2\}$.

To determine the probability function of the convolution at a point $n$, one lists all pairs $(k, l)$ with $k + l = n$ for which both $f_P(k)$ and $f_Q(l)$ are nonzero. For $n = 2$ these are $(0, 2)$ and $(1, 1)$.

For each such pair one forms the product $f_P(k) f_Q(l)$ and sums up; for $n = 2$ this gives

$$ f_{P*Q}(2) = f_P(0) f_Q(2) + f_P(1) f_Q(1) = (1-p) p^2 + p \cdot 2p(1-p) = 3 p^2 (1-p). $$

For the other values $n \in \{0, 1, 3\}$ it then follows analogously that

$$ f_{P*Q}(n) = \binom{3}{n} p^n (1-p)^{3-n}. $$

This is the probability function of a binomial distribution with parameters 3 and $p$, so

$$ \operatorname{Ber}_p * \operatorname{Bin}_{2,p} = \operatorname{Bin}_{3,p}. $$

A closed-form expression for the probability function can also be derived from the direct convolution of the probability functions.
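The computation in this example can be checked numerically; the sketch below uses an illustrative parameter value p = 0.3 and compares the convolution sum against the Bin(3, p) probability function term by term:

```python
from math import comb

p = 0.3  # illustrative parameter choice

def bernoulli_pmf(k):
    # Probability function of Ber(p) on {0, 1}
    return p if k == 1 else (1 - p) if k == 0 else 0.0

def binomial_pmf(n, k):
    # Probability function of Bin(n, p)
    return comb(n, k) * p**k * (1 - p)**(n - k) if 0 <= k <= n else 0.0

# f_{P*Q}(n) = sum over k of f_P(k) * f_Q(n - k); only k in {0, 1} contributes.
conv = [sum(bernoulli_pmf(k) * binomial_pmf(2, n - k) for k in (0, 1))
        for n in range(4)]
bin3 = [binomial_pmf(3, n) for n in range(4)]
max_err = max(abs(a - b) for a, b in zip(conv, bin3))
```

Up to floating-point rounding, `conv` and `bin3` agree, which is exactly the identity Ber(p) * Bin(2, p) = Bin(3, p).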

Continuous probability measures on the real numbers

Definition

Given are two probability measures $P$ and $Q$ on the real numbers, equipped with the Borel σ-algebra $\mathcal{B}(\mathbb{R})$, and suppose that $P$ and $Q$ have probability density functions $f_P$ and $f_Q$.

Then the probability measure with the probability density function

$$ f_{P*Q}(x) = \int_{\mathbb{R}} f_P(x-y)\, f_Q(y)\, \lambda(\mathrm{d}y) $$

is called the convolution of the probability measures $P$ and $Q$ and is denoted by $P * Q$. The Lebesgue integral can often be replaced by a Riemann integral; one then writes $\mathrm{d}y$ instead of $\lambda(\mathrm{d}y)$.

It then holds that

$$ f_{P*Q} = f_P * f_Q, $$

where the $*$ on the right-hand side denotes the convolution of the functions $f_P$ and $f_Q$.

Remarks

Convolution is also defined for probability distributions on the real numbers that do not have a probability density function (such as the Cantor distribution). It is then given by the general case described below.

An important exception to this is the convolution with the Dirac distribution $\delta_a$: although $\delta_a$ has no probability density function, if $P$ has the probability density function $f$, then $P * \delta_a$ has the probability density function $x \mapsto f(x - a)$.

Example

Let $P$ and $Q$ be exponential distributions with the same parameter $\lambda > 0$, i.e. with probability density function

$$ f(x) = \lambda e^{-\lambda x} \mathbf{1}_{[0,\infty)}(x). $$

Here $\mathbf{1}_{[0,\infty)}$ denotes the indicator function of the set $[0, \infty)$. Then for $x \geq 0$,

$$ f_{P*Q}(x) = \int_0^x \lambda e^{-\lambda(x-y)}\, \lambda e^{-\lambda y}\, \mathrm{d}y = \lambda^2 x\, e^{-\lambda x}. $$

This is the probability density function of an Erlang distribution, or equivalently a gamma distribution, with parameters 2 and $\lambda$. The convolution of two exponential distributions with the same parameter thus yields an Erlang distribution or gamma distribution.
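The convolution integral can also be approximated numerically and compared against the closed-form Erlang density; a sketch with an illustrative rate λ = 1.5 and a plain Riemann sum (the step width h is an arbitrary choice):

```python
from math import exp

lam = 1.5   # illustrative rate parameter
h = 0.0001  # Riemann-sum step width

def f_exp(x):
    # Density of the exponential distribution with rate lam
    return lam * exp(-lam * x) if x >= 0 else 0.0

def conv_density(x):
    # Riemann-sum approximation of the integral of f(x - y) f(y) dy over [0, x]
    n = int(x / h)
    return sum(f_exp(x - i * h) * f_exp(i * h) for i in range(n)) * h

x0 = 2.0
numeric = conv_density(x0)
exact = lam**2 * x0 * exp(-lam * x0)  # Erlang(2, lam) density at x0
err = abs(numeric - exact)
```

The numerical value agrees with the Erlang density up to the discretization error of the Riemann sum.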

General case

Definition

Let $M$ be a set on which at least an addition is defined, let $\mathcal{A}$ be a σ-algebra on $M$, and let $\mathcal{A} \otimes \mathcal{A}$ be the product σ-algebra on $M \times M$. Furthermore, let two probability measures $P$ and $Q$ on $(M, \mathcal{A})$ be given, together with the corresponding product measure $P \otimes Q$.

If the map

$$ A \colon M \times M \to M $$

defined by

$$ A(x, y) = x + y $$

is an $\mathcal{A} \otimes \mathcal{A}$-$\mathcal{A}$-measurable function (and thus a random variable), then the image measure of $P \otimes Q$ under $A$ (i.e. the distribution of the random variable $A$) is called the convolution of the probability measures $P$ and $Q$. Thus

$$ (P * Q)(B) = (P \otimes Q)\big(A^{-1}(B)\big) \quad \text{for all } B \in \mathcal{A}, $$

or equivalently

$$ (P * Q)(B) = (P \otimes Q)\big(\{ (x, y) \in M \times M : x + y \in B \}\big). $$

The above measurability condition is always fulfilled, for example, if $M$ is a topological vector space and $\mathcal{A}$ the Borel σ-algebra $\mathcal{B}(M)$. This is in particular the case for $M = \mathbb{R}$ and $\mathcal{A} = \mathcal{B}(\mathbb{R})$.
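The image-measure definition can be made concrete on a finite example: taking P = Q as the uniform distribution on {1, ..., 6} (two fair dice, an illustrative choice), one enumerates the product measure and pushes it forward under A(x, y) = x + y:

```python
from fractions import Fraction
from itertools import product

# P = Q = uniform measure on {1, ..., 6} (a fair die; illustrative choice)
die = {k: Fraction(1, 6) for k in range(1, 7)}

# Product measure P (x) Q on M x M, pushed forward under A(x, y) = x + y:
# each atom (x, y) carries mass P({x}) * Q({y}) and contributes it to x + y.
conv = {}
for (x, px), (y, qy) in product(die.items(), die.items()):
    conv[x + y] = conv.get(x + y, Fraction(0)) + px * qy

# conv is the familiar triangular distribution of the sum of two dice
```

The exact rational arithmetic makes the pushforward literally the image measure of the product measure, with no rounding involved.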

Derivation of the above special cases

For probability measures $P$ and $Q$ on $\mathbb{Z}$ it suffices to verify the statement on the sets $\{n\}$ for $n \in \mathbb{Z}$, since these form a generator of the σ-algebra (here the power set $\mathcal{P}(\mathbb{Z})$). One computes

$$ (P * Q)(\{n\}) = (P \otimes Q)\big(A^{-1}(\{n\})\big) = (P \otimes Q)\Big( \bigcup_{k \in \mathbb{Z}} \{k\} \times \{n-k\} \Big) = \sum_{k \in \mathbb{Z}} (P \otimes Q)\big(\{k\} \times \{n-k\}\big) = \sum_{k \in \mathbb{Z}} P(\{k\})\, Q(\{n-k\}) = \sum_{k \in \mathbb{Z}} f_P(k)\, f_Q(n-k). $$

The first two steps are reformulations via the image measure, the third follows from σ-additivity and the disjointness of the sets $\{k\} \times \{n-k\}$, the fourth from the definition of the product measure, and the last from the characterization of the probability measures by their probability functions.

Thus the probability function given in the section above is exactly the probability function of the convolved probability measures, so the definitions agree.

Analogously, for probability measures $P$ and $Q$ on $\mathbb{R}$ with probability density functions $f_P$ and $f_Q$, it follows that

$$ (P * Q)(B) = \int_B \int_{\mathbb{R}} f_P(x - y)\, f_Q(y)\, \mathrm{d}y\, \mathrm{d}x \quad \text{for all } B \in \mathcal{B}(\mathbb{R}) $$

by substitution and Fubini's theorem.

Properties

Sum of independent random variables

An important property of the convolution of probability measures is that it can be used to determine the distribution of the sum of stochastically independent random variables. If $X$ and $Y$ are stochastically independent random variables with distributions $P_X$ and $P_Y$, then the distribution of their sum is the convolution of the distributions of the individual random variables, i.e.

$$ P_{X+Y} = P_X * P_Y. $$

This central property follows directly from the definition of the convolution as the image measure under addition; the stochastic independence enters through the construction via the product measure.
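This property can be illustrated with a seeded Monte Carlo sketch: for independent X, Y ~ N(0, 1), the sum should be distributed as N(0, 1) * N(0, 1) = N(0, 2), i.e. have mean 0 and variance 2 (sample size and seed are arbitrary choices):

```python
import random

random.seed(0)  # fixed seed so the check is reproducible
n = 200_000

# Sample X + Y for independent standard normal X and Y
samples = [random.gauss(0, 1) + random.gauss(0, 1) for _ in range(n)]

mean = sum(samples) / n
var = sum((s - mean) ** 2 for s in samples) / n
# mean should be close to 0, var close to 2 (the N(0, 2) parameters)
```

The tolerances in a check of `mean` and `var` must of course account for Monte Carlo error, which here is of order 1/sqrt(n).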

Probability-Generating, Moment-Generating and Characteristic Functions

For probability measures $P$ and $Q$ on $\mathbb{N}_0$, the convolution can be related to the probability-generating functions $m_P$ and $m_Q$. It holds that

$$ m_{P*Q}(t) = m_P(t) \cdot m_Q(t). $$

The probability-generating function of the convolution of two probability measures is therefore the product of the probability-generating functions of the individual measures.

The same applies to the moment-generating function $M$ and the characteristic function $\varphi$:

$$ M_{P*Q}(t) = M_P(t) \cdot M_Q(t) \quad \text{and} \quad \varphi_{P*Q}(t) = \varphi_P(t) \cdot \varphi_Q(t). $$

From this follow the addition identities for stochastically independent random variables $X$ and $Y$: $m_{X+Y} = m_X \cdot m_Y$, $M_{X+Y} = M_X \cdot M_Y$ and $\varphi_{X+Y} = \varphi_X \cdot \varphi_Y$.
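Since the probability-generating function of a measure on $\mathbb{N}_0$ is a power series in t, the identity m_{P*Q} = m_P · m_Q can be spot-checked directly for finitely supported measures; the two probability functions and the evaluation point below are illustrative:

```python
def pgf(pmf, t):
    """Probability-generating function m_P(t) = sum over k of f_P(k) * t**k."""
    return sum(p * t**k for k, p in pmf.items())

def convolve(f_p, f_q):
    # Convolution of probability functions given as dicts {k: probability}
    out = {}
    for k, pk in f_p.items():
        for j, qj in f_q.items():
            out[k + j] = out.get(k + j, 0.0) + pk * qj
    return out

P = {0: 0.2, 1: 0.5, 2: 0.3}  # illustrative pmfs on N_0
Q = {0: 0.6, 1: 0.4}
t = 0.7
lhs = pgf(convolve(P, Q), t)   # m_{P*Q}(t)
rhs = pgf(P, t) * pgf(Q, t)    # m_P(t) * m_Q(t)
```

For finitely supported measures this is simply the statement that convolution of coefficient sequences corresponds to multiplication of polynomials.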

Derived concepts

Convolution semigroups

A convolution semigroup is a set of probability measures that is closed under convolution. This means that the convolution of two probability measures from the convolution semigroup is again contained in the convolution semigroup. Convolution semigroups occur, for example, in the study of characteristic functions or as a tool for the construction of stochastic processes with certain properties, such as the Wiener process. Examples of convolution semigroups are the binomial distributions for a fixed parameter $p$ or the Cauchy distributions.
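The closure of the binomial distributions under convolution for fixed p can be verified exactly with rational arithmetic; the parameters 3, 4 and p = 1/3 below are illustrative choices:

```python
from fractions import Fraction
from math import comb

p = Fraction(1, 3)  # fixed parameter of the semigroup (illustrative value)

def binom_pmf(n):
    # Exact probability function of Bin(n, p) as a dict {k: probability}
    return {k: comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)}

def convolve(f_p, f_q):
    out = {}
    for k, pk in f_p.items():
        for j, qj in f_q.items():
            out[k + j] = out.get(k + j, Fraction(0)) + pk * qj
    return out

# Bin(3, p) * Bin(4, p) = Bin(7, p): the family is closed under convolution
closed = convolve(binom_pmf(3), binom_pmf(4)) == binom_pmf(7)
```

Because all probabilities are exact fractions, the equality holds exactly rather than up to rounding; combinatorially it is Vandermonde's identity.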

Infinite divisibility

A probability measure $P$ is called infinitely divisible if for every $n \in \mathbb{N}$ there exists a probability measure $P_n$ for which

$$ P = P_n^{*n} $$

holds. Here

$$ P_n^{*n} = \underbrace{P_n * P_n * \dots * P_n}_{n\text{-fold}} $$

denotes the $n$-fold iterated convolution of $P_n$ with itself. $P$ can therefore always be represented as the $n$-th convolution power of a further probability measure. The equivalent formulation for distributions is that an infinitely divisible distribution is, for every $n$, the distribution of a sum of $n$ independent, identically distributed random variables.
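As a concrete instance, the Poisson distribution is infinitely divisible via Poi(λ) = Poi(λ/n)^{*n}; this can be checked numerically on a truncated support (λ = 2, n = 4 and the truncation point are illustrative choices):

```python
from math import exp, factorial

def poisson_pmf(lam, kmax):
    # Probability function of Poi(lam), truncated at kmax
    return {k: exp(-lam) * lam**k / factorial(k) for k in range(kmax + 1)}

def convolve(f_p, f_q):
    out = {}
    for k, pk in f_p.items():
        for j, qj in f_q.items():
            out[k + j] = out.get(k + j, 0.0) + pk * qj
    return out

lam, n, kmax = 2.0, 4, 25  # illustrative: divide Poi(2) into n = 4 factors
factor = poisson_pmf(lam / n, kmax)
power = factor
for _ in range(n - 1):
    power = convolve(power, factor)  # n-th convolution power Poi(lam/n)^{*n}

target = poisson_pmf(lam, kmax)
max_err = max(abs(power.get(k, 0.0) - target[k]) for k in range(kmax + 1))
```

On the compared support the truncation error is negligible, so the two probability functions agree up to floating-point rounding.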

Convolution identities

The following list contains important convolution identities, but is by no means exhaustive. Further convolution identities can be found in the corresponding main articles on the probability measures.

Distribution | Convolution | Convolution semigroup | Infinitely divisible
Discrete distributions
Bernoulli distribution | Ber(p) * Ber(p) = Bin(2, p) | No | No
Binomial distribution | Bin(n, p) * Bin(m, p) = Bin(n + m, p) | Yes, in n for fixed p | No
Poisson distribution | Poi(λ) * Poi(μ) = Poi(λ + μ) | Yes, in λ | Yes, through Poi(λ/n)
Geometric distribution | Geo(p) * Geo(p) = NB(2, p) | No | Yes, through the negative binomial distribution
Negative binomial distribution | NB(r, p) * NB(s, p) = NB(r + s, p) | Yes, depending on the definition in r ∈ ℕ or in r ∈ (0, ∞) | Yes, through NB(r/n, p)
Dirac distribution | δ_a * δ_b = δ_{a+b} | Yes, in a ∈ ℝ | Yes, through δ_{a/n}
Absolutely continuous distributions
Standard normal distribution | N(0, 1) * N(0, 1) = N(0, 2) | No | Yes, through N(0, 1/n)
Normal distribution | N(μ₁, σ₁²) * N(μ₂, σ₂²) = N(μ₁ + μ₂, σ₁² + σ₂²) | Yes, in (μ, σ²) | Yes, through N(μ/n, σ²/n)
Cauchy distribution | Cau(s) * Cau(t) = Cau(s + t) | Yes, in the scale parameter | Yes, through Cau(s/n)
Exponential distribution | Exp(λ) * Exp(λ) = Erl(2, λ) | No | Yes, through the gamma distribution Γ(1/n, λ)
Erlang distribution | Erl(n, λ) * Erl(m, λ) = Erl(n + m, λ) | Yes, in n for fixed λ | Yes, through the gamma distribution
Gamma distribution | Γ(p, b) * Γ(q, b) = Γ(p + q, b) | Yes, in p for fixed b | Yes, through Γ(p/n, b)
Chi-square distribution | χ²(m) * χ²(k) = χ²(m + k) | Yes, in the degrees of freedom | Yes, through the gamma distribution


References

  1. Georgii: Stochastics. 2009, p. 75.