Coupling (stochastics)

Coupling (from the English coupling) is a method of proof in the mathematical subfield of probability theory. A coupling of two random variables $X$ and $Y$ is a random vector $(X', Y')$ whose marginal distributions correspond exactly to the distributions of $X$ and $Y$. The method was developed in 1938 by Wolfgang Doeblin in connection with Markov chains; the term coupling was only introduced around 1970 by Frank Spitzer.

Definition

Note: Only real random variables are considered here. The concept, however, carries over to arbitrary measurable mappings.

Let $X\colon \Omega_1 \to \mathbb{R}$ and $Y\colon \Omega_2 \to \mathbb{R}$ be two random variables. The two underlying probability spaces $(\Omega_1, \mathcal{A}_1, P_1)$ and $(\Omega_2, \mathcal{A}_2, P_2)$ need not be the same. By $P_X(A) := P_1(X^{-1}(A))$ a probability measure is defined on the measurable space $(\mathbb{R}, \mathcal{B})$ of the real numbers equipped with the Borel σ-algebra. It is called the image measure or distribution of $X$, in symbols $X \sim P_X$. The same applies to $Y$ and $P_Y$.

A coupling of $X$ and $Y$ is a common probability space $(\Omega, \mathcal{A}, P)$ together with two random variables $X'$ and $Y'$ on it such that $X' \sim P_X$ and $Y' \sim P_Y$ hold.

One also writes $X' \sim X$ and $Y' \sim Y$ to indicate that the new random variables are distributed in the same way as the original ones.

Conventions

For most applications it is sufficient to use the Cartesian product $\Omega = \mathbb{R} \times \mathbb{R}$ together with the product σ-algebra $\mathcal{A} = \mathcal{B} \otimes \mathcal{B}$. If $\pi_1$ and $\pi_2$ are the projections onto the first and second factor, respectively, then the random variables $X' = \pi_1$ and $Y' = \pi_2$ are already available. The measure $P$ must then be chosen so that the one-dimensional marginal distributions of the joint distribution of the vector $(X', Y')$ are the distributions $P_X$ of $X$ and $P_Y$ of $Y$. Such a measure is usually not unique. The core of the proof technique consists precisely in choosing a $P$ that is suitable for the purpose at hand.
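
As a small illustration that the marginals do not determine the joint measure, the following sketch (not part of the original article; the table values are chosen purely for illustration) compares two couplings of the same Bernoulli(1/2) marginals on the product space $\{0,1\} \times \{0,1\}$:

```python
import itertools

# Two joint probability tables on {0,1} x {0,1}; both have the
# marginals P(X' = 1) = P(Y' = 1) = 1/2, but they are different couplings.
independent = {(x, y): 0.25 for x, y in itertools.product((0, 1), repeat=2)}
identical = {(0, 0): 0.5, (1, 1): 0.5, (0, 1): 0.0, (1, 0): 0.0}

def marginals(joint):
    px1 = sum(p for (x, _), p in joint.items() if x == 1)
    py1 = sum(p for (_, y), p in joint.items() if y == 1)
    return px1, py1

print(marginals(independent))  # (0.5, 0.5)
print(marginals(identical))    # (0.5, 0.5) -- same marginals, different joint law
```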

Examples

Independence

A trivial coupling results from the requirement that the variables $X'$ and $Y'$ be stochastically independent. The distribution of $(X', Y')$ is then uniquely determined by $P((X', Y') \in A \times B) = P_X(A) \cdot P_Y(B)$ for all Borel sets $A, B$. If one pulls this distribution back to the preimage space $\Omega_1 \times \Omega_2$, the product measure of $P_1$ and $P_2$ results.

This coupling is seldom used, because most proofs require some kind of dependence between the coupled variables.
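
By way of illustration, the product coupling can be sampled simply by drawing the two components independently of each other. The concrete marginals below (a standard normal and an exponential distribution) are merely assumed for the example:

```python
import random

random.seed(0)

def independent_coupling(sample_x, sample_y):
    """One draw (X', Y') of the trivial coupling: the two components are
    sampled independently, so the joint law is the product measure."""
    return sample_x(), sample_y()

draws = [
    independent_coupling(lambda: random.gauss(0, 1), lambda: random.expovariate(1.0))
    for _ in range(5)
]
print(draws)
```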

Unfair coins

Let $p_1 < p_2$ be two real numbers in $[0, 1]$. Suppose you have two coins, the first showing heads with probability $p_1$, the other with probability $p_2$. Intuitively, the second coin should show heads "more often". More precisely, it has to be shown that, for each toss, the probability that the first coin shows heads is smaller than the probability of the same event for the second coin. Showing this with classical counting arguments can be quite laborious. A simple coupling, on the other hand, does exactly what is needed.

Let $X_1, X_2, \ldots$ be the indicator variables for heads in the tosses of the first coin and $Y_1, Y_2, \ldots$ those of the second. The first sequence of random variables is taken over unchanged, $X'_i = X_i$. For the $Y'_i$, however, the following applies:

  • If $X'_i = 1$, set $Y'_i = 1$.
  • If $X'_i = 0$, set $Y'_i = 1$ at random with probability $\frac{p_2 - p_1}{1 - p_1}$, and $Y'_i = 0$ otherwise.

The values of the $Y'_i$ now genuinely depend on the outcomes of the $X'_i$ (and thus of the $X_i$): they are coupled. Nevertheless $P(Y'_i = 1) = p_1 + (1 - p_1)\frac{p_2 - p_1}{1 - p_1} = p_2$, so $Y'_i \sim Y_i$. The $Y'_i$, however, are $1$ at least every time the $X'_i$ are, so

$P(X_i = 1) = P(X'_i = 1) \le P(Y'_i = 1) = P(Y_i = 1)$.

In the language of probability theory, almost surely $X'_i \le Y'_i$, i.e. $P(X'_i \le Y'_i) = 1$. In this case one speaks of a monotone coupling.
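
The construction is easy to check by simulation. The following sketch (with arbitrarily assumed values $p_1 = 0.3$ and $p_2 = 0.7$) reproduces both marginals and never observes $X'_i > Y'_i$:

```python
import random

random.seed(1)
p1, p2 = 0.3, 0.7          # assumed example values with p1 < p2
n = 100_000

x_heads = y_heads = violations = 0
for _ in range(n):
    x = 1 if random.random() < p1 else 0                    # X'_i ~ Bernoulli(p1)
    if x == 1:
        y = 1                                                # forced: Y'_i = 1
    else:
        y = 1 if random.random() < (p2 - p1) / (1 - p1) else 0
    x_heads += x
    y_heads += y
    violations += int(x > y)                                 # should never happen

print(x_heads / n)   # ~ 0.3: X'_i has the law of the first coin
print(y_heads / n)   # ~ 0.7: Y'_i has the law of the second coin
print(violations)    # 0: X'_i <= Y'_i almost surely (monotone coupling)
```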

Strassen's theorem

In the theory of stochastic order, Strassen's theorem generalizes the last example. It states that one random variable stochastically dominates another if and only if there is a monotone coupling between them. The nontrivial direction of the equivalence is the one leading to the coupling. Its proof provides an example in which the common probability space is not a product space.

The distribution of a real random variable $X$ over an arbitrary probability space can be described by its distribution function:

$F_X(x) = P(X \le x)$ for all $x \in \mathbb{R}$.

In terms of these functions, $Y$ stochastically dominates $X$ if $F_Y(x) \le F_X(x)$ holds for all $x$. (Note the reversal of the relation sign.)

The unit interval $\Omega = [0, 1]$, equipped with the Borel σ-algebra and the Lebesgue–Borel measure $\lambda$, which assigns to each subinterval its length, serves as the common probability space. The random variable $X'$ is defined by

$X'(u) = \inf\{x \in \mathbb{R} : F_X(x) \ge u\}$ for all $u \in [0, 1]$.

Likewise, $Y'$ is derived from $F_Y$. By construction, for all $u \in [0, 1]$ and $x \in \mathbb{R}$,

$X'(u) \le x \iff u \le F_X(x)$.

Hence $F_{X'}(x) = \lambda(\{u \in [0, 1] : X'(u) \le x\}) = \lambda([0, F_X(x)]) = F_X(x)$ for all $x$. The two distribution functions therefore coincide, so the distributions must as well, i.e. $X' \sim X$. $Y' \sim Y$ follows analogously. Moreover, together with $F_Y \le F_X$, the above equivalence ultimately implies $X' \le Y'$ everywhere, as desired.
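
Concretely, $X'$ and $Y'$ are the generalized inverses (quantile functions) of $F_X$ and $F_Y$, evaluated at the same point $u$ of the unit interval. The following sketch uses two exponential distributions purely as assumed example marginals; since $F_Y(x) = 1 - e^{-x} \le 1 - e^{-2x} = F_X(x)$, the variable $Y$ dominates $X$ stochastically, and the construction yields $X' \le Y'$ pointwise:

```python
import math
import random

def quantile_exp(rate, u):
    """Generalized inverse of the Exp(rate) distribution function F(x) = 1 - exp(-rate * x)."""
    return -math.log(1.0 - u) / rate

random.seed(2)
for _ in range(5):
    u = random.random()              # a point of the common probability space [0, 1]
    x_prime = quantile_exp(2.0, u)   # X'(u) with X ~ Exp(2)
    y_prime = quantile_exp(1.0, u)   # Y'(u) with Y ~ Exp(1)
    assert x_prime <= y_prime        # monotone coupling: X' <= Y' everywhere
    print(round(u, 3), round(x_prime, 3), round(y_prime, 3))
```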

References

  1. Geoffrey Grimmett, David Stirzaker: Probability and Random Processes. 3rd edition. Oxford University Press, Oxford / New York 2001.
  2. Devdatt Dubhashi, Alessandro Panconesi: Concentration of Measure for the Analysis of Randomized Algorithms. Cambridge University Press, Cambridge / New York 2009.

Literature

  • Hans-Otto Georgii: Stochastics – Introduction to Probability Theory and Statistics. 4th, revised edition. De Gruyter, Berlin 2009.
  • Torgny Lindvall: Lectures on the Coupling Method. Wiley, New York 1992.
  • Hermann Thorisson: Coupling, Stationarity, and Regeneration. Springer, New York 2000.