Convergence in probability

Convergence in probability, also called stochastic convergence, is a concept of probability theory, a branch of mathematics. It is the probabilistic counterpart of convergence in measure from measure theory and, alongside convergence in the p-th mean, convergence in distribution and almost sure convergence, one of the modes of convergence in stochastics. Some sources define convergence in probability analogously to local convergence in measure from measure theory. Convergence in probability is used, for example, in the formulation of the weak law of large numbers.

Definition

For real-valued random variables

A sequence $(X_n)_{n \in \mathbb{N}}$ of real random variables converges in probability (or stochastically) to the random variable $X$ if for every $\varepsilon > 0$ it holds that

$$\lim_{n \to \infty} P\left(|X_n - X| \geq \varepsilon\right) = 0.$$

One then writes $X_n \xrightarrow{P} X$ or $P\text{-}\lim_{n \to \infty} X_n = X$ or also $\operatorname{plim}_{n \to \infty} X_n = X$.
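
The defining probability can also be estimated by simulation. The following Python sketch (assuming NumPy is available; the concrete sequence, sample means of independent uniform random numbers with limit $X = \tfrac{1}{2}$, is an illustrative choice, not part of the definition) estimates $P(|X_n - X| \geq \varepsilon)$ for growing $n$:

    import numpy as np

    rng = np.random.default_rng(0)

    def prob_estimate(n, eps=0.1, trials=10_000):
        # Monte Carlo estimate of P(|X_n - X| >= eps), where X_n is the
        # mean of n i.i.d. Uniform(0, 1) draws and X = 1/2 is the limit.
        samples = rng.uniform(0.0, 1.0, size=(trials, n)).mean(axis=1)
        return np.mean(np.abs(samples - 0.5) >= eps)

    for n in (10, 100, 1000):
        # the estimated probabilities shrink toward 0 as n grows
        print(n, prob_estimate(n))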

General case

Let $(E, d)$ be a separable metric space and $\mathcal{B}(E)$ the associated Borel σ-algebra. A sequence $(X_n)_{n \in \mathbb{N}}$ of random variables on a probability space $(\Omega, \mathcal{A}, P)$ with values in $E$ is called convergent in probability or stochastically convergent to $X$ if for all $\varepsilon > 0$

$$\lim_{n \to \infty} P\left(d(X_n, X) \geq \varepsilon\right) = 0$$

holds. Separability is required in order to ensure that the mapping $\omega \mapsto d(X_n(\omega), X(\omega))$ used in the definition is measurable.

Example

Let $(X_k)_{k \in \mathbb{N}}$ be independent Rademacher-distributed random variables, that is, $P(X_k = 1) = P(X_k = -1) = \tfrac{1}{2}$. Then $E(X_k) = 0$ and $\operatorname{Var}(X_k) = 1$. If one now defines the sequence of random variables $(Y_n)_{n \in \mathbb{N}}$ as

$$Y_n := \frac{1}{n} \sum_{k=1}^{n} X_k,$$

then due to independence

$$E(Y_n) = 0 \quad \text{and} \quad \operatorname{Var}(Y_n) = \frac{1}{n^2} \sum_{k=1}^{n} \operatorname{Var}(X_k) = \frac{1}{n}.$$

With the Chebyshev inequality

$$P\left(|Y_n - E(Y_n)| \geq \varepsilon\right) \leq \frac{\operatorname{Var}(Y_n)}{\varepsilon^2}$$

one then obtains the estimate

$$P\left(|Y_n| \geq \varepsilon\right) \leq \frac{1}{n \varepsilon^2} \xrightarrow{n \to \infty} 0.$$

So the $Y_n$ converge in probability to 0. Besides the Chebyshev inequality, the more general Markov inequality is a helpful tool for showing convergence in probability.
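
The estimate above can be compared with simulated values. The following Python sketch (assuming NumPy; the sample sizes, the tolerance $\varepsilon = 0.2$ and the trial count are illustrative choices) estimates $P(|Y_n| \geq \varepsilon)$ for the Rademacher averages and prints the Chebyshev bound $\tfrac{1}{n\varepsilon^2}$ alongside:

    import numpy as np

    rng = np.random.default_rng(1)

    def tail_probability(n, eps, trials=20_000):
        # Monte Carlo estimate of P(|Y_n| >= eps) for Y_n = (X_1 + ... + X_n)/n
        # with independent Rademacher variables X_k.
        x = rng.choice([-1, 1], size=(trials, n))
        return np.mean(np.abs(x.mean(axis=1)) >= eps)

    eps = 0.2
    for n in (10, 100, 1000):
        empirical = tail_probability(n, eps)
        bound = 1.0 / (n * eps**2)  # Chebyshev bound from the estimate above
        print(f"n={n:5d}  empirical={empirical:.4f}  bound={bound:.4f}")

For small $n$ the Chebyshev bound exceeds 1 and is trivially true; the empirical tail probabilities fall well below the bound as $n$ grows.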

Properties

  • If $(X_n)$ converges stochastically to 0 and $(Y_n)$ converges stochastically to 0, then $(X_n + Y_n)$ also converges stochastically to 0.
  • If the real number sequence $(a_n)$ is bounded and $(X_n)$ converges stochastically to 0, then $(a_n X_n)$ also converges stochastically to 0.
  • One can show that a sequence $(X_n)$ converges stochastically to $X$ if and only if
    $$\lim_{n \to \infty} E\left(\frac{|X_n - X|}{1 + |X_n - X|}\right) = 0,$$
    that is, stochastic convergence corresponds to convergence with respect to the metric $d(X, Y) = E\left(\tfrac{|X - Y|}{1 + |X - Y|}\right)$. The space of all random variables equipped with this metric forms a topological vector space that is in general not locally convex. A numerical illustration of this characterization follows after this list.
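
The following Python sketch (assuming NumPy; the Rademacher averages $Y_n$ from the example above serve as an illustrative test sequence) estimates the metric $d(Y_n, 0) = E\left(\tfrac{|Y_n|}{1 + |Y_n|}\right)$:

    import numpy as np

    rng = np.random.default_rng(2)

    def metric_to_zero(n, trials=20_000):
        # Monte Carlo estimate of d(Y_n, 0) = E(|Y_n| / (1 + |Y_n|)) for the
        # Rademacher averages Y_n from the example above.
        y = rng.choice([-1, 1], size=(trials, n)).mean(axis=1)
        return np.mean(np.abs(y) / (1.0 + np.abs(y)))

    for n in (10, 100, 1000):
        # decreases toward 0, matching the stochastic convergence of Y_n to 0
        print(n, metric_to_zero(n))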

Relationship to other types of convergence in stochastics

In general, the following implications hold between the modes of convergence in probability theory:

$$\text{convergence in the } p\text{-th mean} \implies \text{convergence in probability}$$

and

$$\text{almost sure convergence} \implies \text{convergence in probability} \implies \text{convergence in distribution}.$$

Convergence in probability is thus a convergence concept of intermediate strength. The relationships to the other modes of convergence are detailed in the sections below.

Convergence in the p-th mean

Convergence in probability follows immediately from convergence in the p-th mean. To see this, one applies the Markov inequality to the function $f(x) = x^p$, which is monotonically increasing for $x \geq 0$, and to the random variable $|X_n - X|$. This yields

$$P\left(|X_n - X| \geq \varepsilon\right) \leq \frac{E(|X_n - X|^p)}{\varepsilon^p},$$

which tends to zero in the limit. The converse is generally not true. An example: on the probability space $((0,1), \mathcal{B}((0,1)), \lambda)$ with the Lebesgue measure $\lambda$, define the random variables by

$$X_n := n \cdot \chi_{(0, \frac{1}{n})},$$

where $\chi$ denotes the indicator function. Then

$$E(|X_n|^p) = n^p \cdot \lambda\left(\left(0, \tfrac{1}{n}\right)\right) = n^{p-1},$$

which converges to 0 if $p < 1$. So the sequence converges in the p-th mean to 0 for $p < 1$, but not for $p \geq 1$. But for every $\varepsilon \in (0,1)$ it holds that

$$P(|X_n| \geq \varepsilon) = \frac{1}{n} \xrightarrow{n \to \infty} 0.$$

So the sequence converges in probability to 0, although for $p \geq 1$ it does not converge in the p-th mean.
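
This counterexample can be illustrated numerically. The following Python sketch (assuming NumPy; the case $p = 1$ and the parameters are illustrative) estimates $E(|X_n|^p)$ and $P(|X_n| \geq \varepsilon)$ by sampling from the underlying uniform distribution:

    import numpy as np

    rng = np.random.default_rng(3)

    def mean_and_tail(n, p=1.0, eps=0.5, trials=100_000):
        # For X_n = n * indicator((0, 1/n)) under the uniform distribution
        # on (0, 1), estimate E(|X_n|^p) and P(|X_n| >= eps).
        u = rng.uniform(0.0, 1.0, size=trials)
        x = np.where(u < 1.0 / n, float(n), 0.0)
        return np.mean(np.abs(x) ** p), np.mean(np.abs(x) >= eps)

    for n in (10, 100, 1000):
        mean_p, tail = mean_and_tail(n)
        # E(|X_n|) stays near 1 for p = 1, while P(|X_n| >= eps) = 1/n vanishes
        print(f"n={n:5d}  E|X_n|={mean_p:.3f}  P(|X_n|>=0.5)={tail:.5f}")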

A criterion under which convergence in the p-th mean follows from convergence in probability is the existence of a majorant $Y \in \mathcal{L}^p$ with $|X_n| \leq Y$ almost surely for all $n$: if the $X_n$ then converge in probability to $X$, they also converge in the p-th mean to $X$. More generally, a connection between convergence in the p-th mean and convergence in probability can be drawn using Vitali's convergence theorem and uniform integrability in the p-th mean: a sequence converges in the p-th mean if and only if it is uniformly integrable in the p-th mean and converges in probability.

Almost sure convergence

Convergence in probability follows from almost sure convergence. To see this, one fixes $\varepsilon > 0$ and defines the sets

$$A_n^{\varepsilon} := \{\omega \in \Omega : |X_m(\omega) - X(\omega)| < \varepsilon \text{ for all } m \geq n\}.$$

They form a monotonically increasing sequence of sets, and the set $A^{\varepsilon} := \bigcup_{n \in \mathbb{N}} A_n^{\varepsilon}$ contains the set

$$C := \left\{\omega \in \Omega : \lim_{n \to \infty} X_n(\omega) = X(\omega)\right\}$$

of the elements on which the sequence converges pointwise. By assumption $P(C) = 1$, hence also $P(A^{\varepsilon}) = 1$ and, by continuity of the probability measure from below, $\lim_{n \to \infty} P(A_n^{\varepsilon}) = 1$. The claim then follows by taking complements, since $P(|X_n - X| \geq \varepsilon) \leq 1 - P(A_n^{\varepsilon}) \to 0$.

However, the converse is generally not true. An example is a sequence $(X_n)_{n \in \mathbb{N}}$ of independent Bernoulli-distributed random variables with parameters $\tfrac{1}{n}$, i.e. $X_n \sim \operatorname{Ber}\left(\tfrac{1}{n}\right)$. Then

$$P(|X_n| \geq \varepsilon) = P(X_n = 1) = \frac{1}{n} \xrightarrow{n \to \infty} 0$$

for all $\varepsilon \in (0,1)$, and thus the sequence converges in probability to 0. However, the sequence does not converge almost surely; this is shown with the sufficient criterion for almost sure convergence and the Borel–Cantelli lemma: since $\sum_{n} \tfrac{1}{n} = \infty$ and the $X_n$ are independent, almost surely $X_n = 1$ holds for infinitely many $n$.
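
The behavior of this counterexample can be observed in simulation. The following Python sketch (assuming NumPy; the path length $N$ is an illustrative choice) draws one sample path of the independent Bernoulli sequence: the marginal probabilities vanish, yet ones keep occurring along the path:

    import numpy as np

    rng = np.random.default_rng(4)

    # One sample path of independent X_n ~ Ber(1/n): P(X_n = 1) tends to 0,
    # yet by Borel-Cantelli (sum 1/n diverges) ones keep occurring.
    N = 100_000
    n = np.arange(1, N + 1)
    path = rng.random(N) < 1.0 / n
    hits = n[path]  # indices n with X_n = 1
    print("number of ones up to N:", hits.size)    # grows roughly like log N
    print("last indices with X_n = 1:", hits[-5:])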

Conditions under which convergence in probability implies almost sure convergence include:

  • The speed of the convergence in probability is sufficiently fast, i.e. it holds that
    $$\sum_{n=1}^{\infty} P(|X_n - X| \geq \varepsilon) < \infty \quad \text{for all } \varepsilon > 0;$$
    a numerical contrast of summable and non-summable rates is sketched after this list.
  • The base space can be represented as a countable union of μ-atoms. This is always possible for probability spaces with an at most countable sample space.
  • If the sequence of random variables is almost surely monotonically decreasing and converges in probability to 0, then the sequence also converges almost surely to 0.
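
To illustrate the first criterion, the following Python sketch (assuming NumPy; rates and path length are illustrative choices) contrasts independent Bernoulli sequences with success probabilities $\tfrac{1}{n}$ (not summable) and $\tfrac{1}{n^2}$ (summable); in the summable case the ones along a sample path die out early, consistent with almost sure convergence to 0:

    import numpy as np

    rng = np.random.default_rng(5)

    # Contrast success probabilities 1/n (not summable) and 1/n^2 (summable):
    # in the summable case the first criterion gives almost sure convergence,
    # and along a sample path the ones die out early.
    N = 100_000
    n = np.arange(1, N + 1)
    for label, p_n in (("1/n", 1.0 / n), ("1/n^2", 1.0 / n**2)):
        path = rng.random(N) < p_n  # one sample path of independent Ber(p_n)
        last_one = n[path][-1]      # X_1 = 1 always, so the path is nonempty
        print(f"p_n = {label}: last index with X_n = 1 is {last_one}")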

More generally, every sequence that converges in probability has a subsequence that converges almost surely.

Convergence in distribution

From convergence in probability, convergence in distribution follows by Slutsky's theorem; the converse does not hold in general. For example, if the random variable $X$ is Bernoulli-distributed with parameter $\tfrac{1}{2}$, that is,

$$P(X = 1) = P(X = 0) = \frac{1}{2},$$

and one sets $X_n := 1 - X$ for all $n \in \mathbb{N}$, then $(X_n)$ converges in distribution to $X$, since all these random variables have the same distribution. However, it always holds that $|X_n - X| = |1 - 2X| = 1$, so the random variables cannot converge in probability to $X$. There are, however, criteria under which convergence in distribution yields convergence in probability: if, for example, all random variables are defined on the same probability space and they converge in distribution to a random variable $X$ that is almost surely constant, then they also converge in probability to $X$.
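
This counterexample can also be checked by simulation. The following Python sketch (assuming NumPy; the trial count is an illustrative choice) samples $X$ and $X_n = 1 - X$ and confirms equal distributions together with $|X_n - X| = 1$:

    import numpy as np

    rng = np.random.default_rng(6)

    # X ~ Ber(1/2) and X_n := 1 - X have the same distribution, so X_n -> X
    # in distribution, but |X_n - X| = 1 on every outcome, which rules out
    # convergence in probability.
    trials = 10_000
    x = (rng.random(trials) < 0.5).astype(int)  # samples of X
    x_n = 1 - x                                 # samples of X_n (for any n)
    print("empirical P(X = 1):  ", x.mean())
    print("empirical P(X_n = 1):", x_n.mean())
    print("P(|X_n - X| >= 1/2): ", np.mean(np.abs(x_n - x) >= 0.5))  # equals 1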
